On Thursday 03 January 2008 1:33:05 pm Joe Perches wrote:
> On Thu, 2008-01-03 at 12:25 -0500, Paul Moore wrote:
> > diff --git a/net/core/skbuff.c b/net/core/skbuff.c
> > index 5b4ce9b..c726cd4 100644
> > --- a/net/core/skbuff.c
> > +++ b/net/core/skbuff.c
> > @@ -407,31 +407,29 @@ static void __copy_skb_header(struct sk_buff *new, const struct sk_buff *old)
> >
> >  static struct sk_buff *__skb_clone(struct sk_buff *n, struct sk_buff *skb)
> >  {
> > -#define C(x) n->x = skb->x
> > -
> >     n->next = n->prev = NULL;
> >     n->sk = NULL;
> >     __copy_skb_header(n, skb);
> >
> > -   C(len);
> > -   C(data_len);
> > -   C(mac_len);
> > +   n->iif = skb->iif;
> > +   n->len = skb->len;
> > +   n->data_len = skb->data_len;
> > +   n->mac_len = skb->mac_len;
> >     n->cloned = 1;
> >     n->hdr_len = skb->nohdr ? skb_headroom(skb) : skb->hdr_len;
> >     n->nohdr = 0;
>
> To reduce possible cacheline bounces, shouldn't the order of
> operation on the elements be in struct order?

Sounds reasonable to me; I'll adjust the function so the assignments follow
the field offsets in struct sk_buff.

-- 
paul moore
linux security @ hp
--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html