On Fri, Jun 07, 2019 at 12:17:47AM +0200, Stefano Brivio wrote:
> On Thu, 6 Jun 2019 21:44:58 +0000
> Martin Lau <ka...@fb.com> wrote:
>
> > > +	if (!(filter->flags & RTM_F_CLONED)) {
> > > +		err = rt6_fill_node(net, arg->skb, rt, NULL, NULL, NULL, 0,
> > > +				    RTM_NEWROUTE,
> > > +				    NETLINK_CB(arg->cb->skb).portid,
> > > +				    arg->cb->nlh->nlmsg_seq, flags);
> > > +		if (err)
> > > +			return err;
> > > +	} else {
> > > +		flags |= NLM_F_DUMP_FILTERED;
> > > +	}
> > > +
> > > +	bucket = rcu_dereference(rt->rt6i_exception_bucket);
> > > +	if (!bucket)
> > > +		return 0;
> > > +
> > > +	for (i = 0; i < FIB6_EXCEPTION_BUCKET_SIZE; i++) {
> > > +		hlist_for_each_entry(rt6_ex, &bucket->chain, hlist) {
> > > +			if (rt6_check_expired(rt6_ex->rt6i))
> > > +				continue;
> > > +
> > > +			err = rt6_fill_node(net, arg->skb, rt,
> > > +					    &rt6_ex->rt6i->dst,
> > > +					    NULL, NULL, 0, RTM_NEWROUTE,
> > > +					    NETLINK_CB(arg->cb->skb).portid,
> > > +					    arg->cb->nlh->nlmsg_seq, flags);
> >
> > Thanks for the patch.
> >
> > A question on when rt6_fill_node() returns -EMSGSIZE while dumping the
> > exception bucket here. Where will the next inet6_dump_fib() start?
>
> And thanks for reviewing.
>
> It starts again from the same node, see fib6_dump_node(): w->leaf = rt;
> where rt is the fib6_info where we failed dumping, so we won't skip
> dumping any node.

If the same node is dumped again, does that mean it will go through this
loop and iterate over all the exceptions again?
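To make that concrete, here is a toy userspace model of the restart
behaviour I have in mind (all names and sizes below are made up for
illustration, this is not the kernel code): the node's exceptions only
partially fit into a message that is already carrying earlier routes,
the walker restarts from the same node into a fresh message, and the
entries that already went out are sent a second time.

/*
 * Toy userspace model of the restart behaviour, not the kernel code:
 * all sizes are made up.  A node has N_EXC exceptions, a message holds
 * MSG_ENTRIES entries, and the first message already carries USED
 * entries from earlier nodes.  When the node overflows the message, the
 * walker restarts from the same node, so the exception loop starts from
 * index 0 again in the next, empty message.
 */
#include <stdio.h>

#define N_EXC		6	/* exceptions hanging off one node */
#define MSG_ENTRIES	8	/* toy message capacity, in entries */
#define USED		5	/* space taken by routes dumped earlier */

int main(void)
{
	int emitted[N_EXC] = { 0 };
	int used = USED;		/* first message is partly full */
	int i = 0, msgs = 1;

	while (i < N_EXC) {
		if (used == MSG_ENTRIES) {	/* toy -EMSGSIZE */
			used = 0;		/* start a fresh message... */
			msgs++;
			i = 0;			/* ...and restart the node */
			continue;
		}
		emitted[i++]++;
		used++;
	}

	for (i = 0; i < N_EXC; i++)
		printf("exception %d sent %d time(s), %d message(s) total\n",
		       i, emitted[i], msgs);
	return 0;
}

With these numbers exceptions 0-2 are sent twice; and if N_EXC were
larger than MSG_ENTRIES, even an empty message would overflow and the
loop above would never terminate, which is the other thing I am worried
about below.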
>
> This also means that to avoid sending duplicates in the case where at
> least one rt6_fill_node() call goes through and one fails, we would
> need to track the last bucket and entry sent, or, alternatively, to
> make sure we can fit the whole node before dumping.

Another concern I have is that the dump may never finish.

> I don't think that can happen in practice, or at least I haven't found a
> way to create enough valid exceptions for the same node.

That I am not so sure about. It is not unusual to have many pmtu
exceptions on a gateway node.

> Anyway, I guess that would be nicer, but the fix is going to be much
> bigger, and I don't think we even have to guarantee that. I'd rather
> take care of that as a follow-up. Any preferred solution by the way?
>
> --
> Stefano
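As for a preferred direction: on the same toy model as above, the
"track the last bucket and entry sent" idea could look roughly like the
sketch below. The skip/sent variables are invented for illustration and
are not a proposal for the actual fib6 walker state; the point is only
that remembering how many exceptions of the node already went out avoids
duplicates and guarantees forward progress as long as at least one entry
fits into an empty message.

/*
 * Rough sketch of the "remember what was already sent" idea on the same
 * toy model.  The skip/sent variables are invented for illustration and
 * are not the actual fib6 walker state.  On a restart the walker skips
 * the exceptions of this node that already went out, so nothing is
 * duplicated and the dump keeps making progress.
 */
#include <stdio.h>

#define N_EXC		6
#define MSG_ENTRIES	8
#define USED		5

int main(void)
{
	int emitted[N_EXC] = { 0 };
	int used = USED;	/* first message is partly full, as before */
	int skip = 0;		/* exceptions of this node already dumped */
	int sent, i;

	do {
		sent = 0;
		for (i = 0; i < N_EXC; i++) {
			if (i < skip)		/* sent in a previous message */
				continue;
			if (used == MSG_ENTRIES)
				break;		/* toy -EMSGSIZE, stop here */
			emitted[i]++;
			used++;
			sent++;
		}
		skip += sent;	/* remember progress across restarts */
		used = 0;	/* the next message starts empty */
	} while (skip < N_EXC);

	for (i = 0; i < N_EXC; i++)
		printf("exception %d sent %d time(s)\n", i, emitted[i]);
	return 0;
}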