Hello,

Thanks to commit f02dbbe1ae[0], the Memcached cache backend using pylibmc can 
now keep connections open between requests. Establishing a new TCP connection 
is rather expensive, and each round trip to the cache server that we save 
shaves a few ms off the response time.

It appears that in a multithreaded environment we could improve the situation 
even more by letting threads share the same `PyLibMCCache` instance.

Currently in Django, each thread asking for a cache backend gets its own 
personal Backend object[1], so each thread also gets its own connection pool 
to memcached. After a few requests the process ends up with as many open 
connections to memcached as there are threads.
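The per-thread behavior can be sketched like this (simplified: the class and 
function names here are hypothetical stand-ins, not Django's actual code):

```python
import threading

class PyLibMCCacheStub:
    """Hypothetical stand-in for a cache backend holding a connection pool."""
    instances = 0

    def __init__(self):
        PyLibMCCacheStub.instances += 1

_local = threading.local()

def get_cache():
    # Mirrors the thread-local lookup: each thread lazily creates and
    # caches its own backend instance (and hence its own pool).
    if not hasattr(_local, "cache"):
        _local.cache = PyLibMCCacheStub()
    return _local.cache

threads = [threading.Thread(target=get_cache) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# One backend instance per thread, even though each thread only made
# a single request.
print(PyLibMCCacheStub.instances)
```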

If the connection pool were instead shared between threads, new connections 
would only be opened when necessary (i.e. when other threads are using all 
the pooled connections).
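A minimal sketch of what sharing would look like, assuming a toy pool rather 
than pylibmc's real one (the SharedPool class below is invented for 
illustration):

```python
import threading

class SharedPool:
    """Toy connection pool: a new connection is opened only when
    every pooled one is checked out by another thread."""
    def __init__(self):
        self._lock = threading.Lock()
        self._idle = []
        self.opened = 0

    def acquire(self):
        with self._lock:
            if self._idle:
                return self._idle.pop()
            self.opened += 1
            return object()  # stand-in for a new memcached connection

    def release(self, conn):
        with self._lock:
            self._idle.append(conn)

pool = SharedPool()  # one module-level pool shared by all threads

def request():
    conn = pool.acquire()
    # ... talk to memcached ...
    pool.release(conn)

# Requests that don't overlap in time reuse the same connection,
# so only one connection is ever opened.
for _ in range(4):
    t = threading.Thread(target=request)
    t.start()
    t.join()
print(pool.opened)
```

Under concurrent load the pool would still grow to match the number of 
simultaneously active threads, but idle connections get reused instead of 
each thread hoarding its own.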

Now the important questions. Why do we have thread locals in the first place? 
Can we share Backend instances between threads?

After looking at the code of all the cache backends, I feel that nothing 
prevents dropping the thread local altogether. Did I miss something?

[0] 
https://github.com/django/django/commit/f02dbbe1ae02c3258fced7b7a75d35d7745cc02a
[1] 
https://github.com/django/django/blob/master/django/core/cache/__init__.py#L64
-- 
Nicolas Le Manchet
