On Tue, Jul 10, 2007 at 03:10:34PM +0200, Jarek Poplawski wrote:
> On Tue, Jul 10, 2007 at 02:20:12PM +0200, Patrick McHardy wrote:
> > Jarek Poplawski wrote:
> > > On Tue, Jul 10, 2007 at 01:09:07PM +0300, Ranko Zivojnovic wrote:
> > >
> > >>However I decided not to use _rcu based iteration neither the
> > >>rcu_read_lock() after going through the RCU documentation and a bunch of
> > >>examples in kernel that iterate through the lists using non _rcu macros
> > >>and do list_del_rcu() just fine.
> > >>
> > >>For readability, the reference to list_del_rcu as well as call_rcu, I
> > >>believe, should be enough of the indication. Please do correct me if I
> > >>am wrong here.
> > >
> > >
> > > It's only my opinion, and it's probably not very popular at least
> > > at net/ code, so it's more about general policy and not this
> > > particular code. But:
> > > - if somebody is looking after some rcu related problems, why can't
> > >   he/she spare some time by omitting lists without _rcu and not
> > >   analyzing why/how such lists are used and locked?
> >
> > RCU is used for the read-side, using it on the write-side just makes
> > things *less* understandable IMO. It will look like the read-side
> > but still do modifications.
> >
> > From Documentation/RCU/checklist:
> > "9.  All RCU list-traversal primitives, which include
> >      list_for_each_rcu(), list_for_each_entry_rcu(),
> >      list_for_each_continue_rcu(), and list_for_each_safe_rcu(),
> >      must be within an RCU read-side critical section.  RCU
> >      read-side critical sections are delimited by rcu_read_lock()
> >      and rcu_read_unlock(), or by similar primitives such as
> >      rcu_read_lock_bh() and rcu_read_unlock_bh()."
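
To make the checklist's point concrete, here is a minimal sketch of the
two sides (my_list, my_lock, struct my_entry and my_entry_free are
made-up names for illustration, not from Ranko's patch): the reader
needs rcu_read_lock() plus the _rcu traversal, while the writer,
already serialized by its own lock, iterates with the plain macros and
only uses list_del_rcu()/call_rcu() for the removal itself:

    #include <linux/list.h>
    #include <linux/rcupdate.h>
    #include <linux/spinlock.h>
    #include <linux/slab.h>

    struct my_entry {
            struct list_head        list;
            struct rcu_head         rcu;
            int                     dead;
            /* ... payload ... */
    };

    static LIST_HEAD(my_list);
    static DEFINE_SPINLOCK(my_lock);

    static void my_entry_free(struct rcu_head *head)
    {
            kfree(container_of(head, struct my_entry, rcu));
    }

    /* Read side: _rcu traversal, delimited by rcu_read_lock(). */
    static void my_reader(void)
    {
            struct my_entry *e;

            rcu_read_lock();
            list_for_each_entry_rcu(e, &my_list, list) {
                    /* read-only access to *e */
            }
            rcu_read_unlock();
    }

    /* Write side: serialized by my_lock, so plain (non-_rcu) traversal
     * is correct; only the removal and the deferred free are RCU-aware. */
    static void my_writer(void)
    {
            struct my_entry *e, *n;

            spin_lock_bh(&my_lock);
            list_for_each_entry_safe(e, n, &my_list, list) {
                    if (e->dead) {
                            list_del_rcu(&e->list);
                            call_rcu(&e->rcu, my_entry_free);
                    }
            }
            spin_unlock_bh(&my_lock);
    }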
...But, on the other hand, Ranko didn't use any of these primitives...
So, first I said he should use them, and then I asked why there were
no rcu_read_locks around... I'm really starting to amaze myself with
my consistency. I hope some day you'll forgive me (no hurry, I can
wait).

Cheers,
Jarek P.