Ryan,

Sorry if you felt I was ignoring your reply to focus on the discussion with Andrew. You both made a lot of the same points at about the same time, but I did want to touch on a couple of things.
On Thursday, May 5, 2016 at 4:21:59 PM UTC-4, Ryan Hiebert wrote:

> Thank you, Mark, for starting this discussion. I, too, found myself simply accepting that channels was the right way to go, despite having the same questions you do. I realize this shouldn't be, so I've chimed in on some of your comments.
>
> On May 5, 2016, at 2:34 PM, Mark Lavin <markd...@gmail.com> wrote:
>
> > [snip]
> >
> > The Channel API is built more around a simple queue/list rather than a full messaging layer. [snip] Kombu supports [snip].
>
> The API was purposefully limited, because channels shouldn't need all those capabilities. All this is spelled out in the documentation, which I know you already understand because you've mentioned it elsewhere. I think that the choice to use a more limited API makes sense, though that doesn't necessarily mean that it is the right choice.
>
> > [snip description of architecture]
>
> First off, the concerns you mention make a lot of sense to me, and I've been thinking along the same lines.
>
> I've been considering an alternative to Daphne that only used channels for websockets, but used WSGI for everything else. Or some alternative split where some requests would be ASGI and some WSGI. I've done some testing of the latency overhead that using channels adds (even on my local machine), and it's not insignificant. I agree that finding a solution that doesn't so drastically slow down the requests that we've already worked hard to optimize is important. I'm not yet sure of the right way to do that.
>
> As far as scaling, it is apparent to me that it will be very important to have the workers split out, in a similar way to how we have different Celery instances processing different queues. This allows us to scale those queues separately. While it doesn't appear to exist in the current implementation, the channel names are obviously suited to such a split, and I'd expect channels to grow the feature of selecting which channels a worker should be processing (forgive me if I've just missed this capability, Andrew).

Similar to Celery, the workers can listen on only certain channels, or exclude listening on certain channels, which is sort of a means of doing priority: https://github.com/andrewgodwin/channels/issues/116. I would also like to see this expanded, or at least have the use case more clearly documented.

> > [[ comments on how this makes deployment harder ]]
>
> ASGI is definitely more complex than WSGI. It's this complexity that gives it power. However, to the best of my knowledge, there's not a push to be dropping WSGI. If you're doing a simple request/response site, then you don't need the complexity, and you probably should be using WSGI. However, if you need it, having ASGI standardized in Django will help the community build on the power that it brings.
>
> > Channels claims to have a better zero-downtime deployment story. However, in practice I'm not convinced that will be true. [snip]
>
> I've been concerned about this as well. On Heroku my web dynos don't go down, because the new ones are booted up while the old ones are running, and then a switch is flipped to have the router use the new dynos. Worker dynos, however, do get shut down. Daphne won't be enough to keep my site functioning. This is another reason I was thinking of a hybrid WSGI/ASGI server.
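As far as I can tell, that kind of split doesn't need anything new from Channels itself. The existing wsgi.py keeps serving normal HTTP through gunicorn/uwsgi, and only the websocket traffic goes to Daphne and the workers. A rough sketch of the ASGI entry point, assuming the asgi.py convention from the deployment docs ("myproject" is just a placeholder name):

    # myproject/asgi.py -- loaded only by Daphne and the channel workers;
    # the WSGI server keeps using the existing myproject/wsgi.py untouched.
    import os

    from channels.asgi import get_channel_layer

    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")
    channel_layer = get_channel_layer()

Daphne would then be started against something like myproject.asgi:channel_layer, and the front-end router only forwards websocket upgrade requests (say, a /ws/ prefix) to it, so the channel-layer overhead is only paid by the connections that actually need it.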
> > There is an idea floating around of using Channels for background jobs/Celery replacement. It is not/should not be. [snip reasons]
>
> It's not a Celery replacement. However, this simple interface may be good enough for many things. Anything that doesn't use Celery's `acks_late` is a candidate, because in those cases even Celery doesn't guarantee delivery, and ASGI is a simpler interface than the powerful, glorious behemoth that is Celery.

This isn't the place for a long discussion about the inner workings of Celery, but I don't believe this is true. Prefetched tasks are not acknowledged until they are delivered to a worker for processing. Once delivered, the worker might die or be killed before it can complete the task, but the message was delivered. That's the gap that acks_late closes: the window between message delivery and task completion. Not all brokers support message acknowledgement natively, so that feature is emulated, which can lead to prefetched messages being lost or delayed. I've certainly seen this when using Redis as the broker, but never with RabbitMQ, which has native support for acknowledgement.

> There's an idea that something like Celery could be built on top of it. That may or may not be a good idea, since Celery uses native protocol features of AMQP to make things work well, and those may not be available or easy to replicate accurately with ASGI. I'll be sticking with Celery for all of those workloads, personally, at least for the foreseeable future.
>
> > [snip] locks everyone into using Redis.
>
> Thankfully, I know you're wrong about this. Channel layers can be built for other things, but Redis is a natural fit, so that's what he's written. I expect we'll see other channel layers for queues like AMQP before too long.

See my previous response to Andrew on this point.

> > I see literally no advantage to pushing all HTTP requests and responses through Redis.
>
> It seems like a bad idea to push _all_ HTTP requests through Redis given the latency it adds, but long-running requests can still be a good idea for this case, because it separates the HTTP interface from the long-running code. This can be good, if used carefully.

All of the examples I've seen have pushed all HTTP requests through Redis. I think some of the take-aways from this conversation will be to move away from that and recommend Channels primarily for websockets and not for WSGI requests.

> > What this does enable is that you can continue to write synchronous code. To me that's based around some idea that async code is too hard for the average Django dev to write or understand. Or that nothing can be done to make parts of Django play nicer with existing async frameworks, which I also don't believe is true. Python 3.4 makes writing async Python pretty elegant and async/await in 3.5 makes that even better.
>
> Async code is annoying, at best. It can be done, and it's getting much more approachable with async/await, etc. But even when you've done all that, there's stuff that, for many reasons, either cannot be written async (using a non-async library), or isn't IO-bound and async could actually _hurt_ the performance. ASGI doesn't force you to write anything synchronous _or_ asynchronous, and that's part of the beauty: it doesn't care.

Yes, if you are willing to sacrifice latency. That seems like the disconnect for me. Websockets are meant to be "real-time", so I would expect projects using them to want low latency. For me that means writing code which is as close as possible to the client socket. The design of Channels adds a backend layer between the client socket and the application logic. I guess it depends on what performance characteristics you care about.
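To make that indirection concrete, even a trivial echo consumer ends up looking roughly like this (a from-memory sketch of the current consumer style; the exact routing syntax has been shifting between releases, and the names here are illustrative rather than from a real project):

    # consumers.py -- a minimal echo consumer (sketch).
    def ws_echo(message):
        # message.content carries the decoded websocket.receive payload;
        # sending on reply_channel pushes the response back through the
        # channel layer (e.g. Redis) for Daphne to write to the socket.
        message.reply_channel.send({
            "text": message.content["text"],
        })


    # routing.py -- wire the consumer up to the websocket.receive channel.
    from channels.routing import route

    channel_routing = [
        route("websocket.receive", ws_echo),
    ]

Every frame makes two trips through the channel layer (client to Daphne to Redis to worker, then back again), versus handling it in the same process that holds the socket, and that is the latency trade-off I keep coming back to.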
> > Thanks for taking the time to read this all the way through and I welcome any feedback.
>
> I hope my comments have been a help for this discussion. I'll be giving a talk on Channels at the local Python user group later this month, so this is of particular timely importance to me.