On 9/13/07, Justin Tulloss <[EMAIL PROTECTED]> wrote:
>
>
> On 9/13/07, Adam Olsen <[EMAIL PROTECTED]> wrote:
> >
> > Basically though, atomic incref/decref won't work.  Once you've got
> > two threads modifying the same location, the costs skyrocket.  Even
> > without being properly atomic you'll get the same slowdown on x86
> > (whose cache coherence is fairly strict).
>
>
> I'm a bit skeptical of the actual costs of atomic incref. For there to be
> contention, you would need to be modifying the same memory location at the
> exact same time. That seems unlikely to ever happen. We can't bank on it
> never happening, but an occasionally expensive operation is ok. After all,
> it's occasional.

That was my initial expectation too.  However, the incref *is* a
modification.  It's not simply an issue of the "exact same time":
anything that causes the cache line to bounce back and forth between
cores delays the rest of the pipeline.  If you have a simple loop like
"for i in range(count): 1.0+n", the 1.0 literal gets shared between
threads, and its refcount gets hammered.
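
To make that concrete, here's a minimal (untested) sketch of the
scenario; the thread and iteration counts are arbitrary:

    import threading

    def work(count=1000000):
        n = 2.0
        for i in range(count):
            # Under per-object atomic refcounting, every evaluation
            # increfs/decrefs the shared 1.0 constant, so both threads
            # write to the same object header and its cache line
            # ping-pongs between cores.
            1.0 + n

    threads = [threading.Thread(target=work) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()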

Is it reasonable to expect that much sharing?  I think it is.
Literals are an obvious example, but there's also configuration data
passed between threads.  Pystone seems to have enough sharing to kill
performance.  And after all, isn't sharing the whole point (even in
the definition) of threads?
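
The configuration case is just as easy to hit.  Something like this
(hypothetical names, just to illustrate the pattern):

    import threading

    config = {"timeout": 30, "retries": 5}  # built once, read by every thread

    def worker(count=1000000):
        for i in range(count):
            # Even "read-only" access returns new references, so each
            # lookup increfs/decrefs the shared value (and touches the
            # shared dict) from every thread.
            t = config["timeout"]

    threads = [threading.Thread(target=worker) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()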

-- 
Adam Olsen, aka Rhamphoryncus