Thanks, guys, for the answers. For some reason I didn't consider push-pull
in the first place, but it suits my needs quite well. :)
My only concern is about recovery from failures. Let's pretend I create a
pipeline with 3 stages and 2 processing nodes per stage. Now one of the
nodes in the middle stage fails, making the pipeline lose half of the
processed messages. What are my options for handling such scenarios? I
glanced over the Guide, but didn't find anything relevant...

What kinds of failures do I want to handle?

- The business logic in a processing node fails (because of a problem with
the input data), but the node does not crash. In this case I'm thinking of
pushing an error message forward instead of a result. This way the sink
would detect the failure immediately and could act upon it. This case is ok.
- The node crashes while processing and doesn't push anything forward. How
should this be handled? I can think of a timeout in the sink, but is there
something better?
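To make the two cases concrete, here is roughly what I have in mind (just a
sketch, assuming pyzmq; the hostnames, ports, message fields and the
5-second timeout are all made up, and process / on_message / on_silence
stand in for the real business logic):

import zmq

# --- middle-stage node ---------------------------------------------------
def run_worker(process):
    # process(task) is the business logic; it raises on bad input data
    ctx = zmq.Context.instance()
    upstream = ctx.socket(zmq.PULL)      # work from the previous stage
    upstream.connect("tcp://stage1-host:5557")
    downstream = ctx.socket(zmq.PUSH)    # results (or errors) towards the sink
    downstream.connect("tcp://sink-host:5558")
    while True:
        task = upstream.recv_json()
        try:
            downstream.send_json({"id": task["id"], "ok": True,
                                  "result": process(task)})
        except Exception as exc:
            # case 1: business logic failed but the node is alive;
            # forward an error message so the sink notices immediately
            downstream.send_json({"id": task["id"], "ok": False,
                                  "error": str(exc)})

# --- sink ------------------------------------------------------------------
def run_sink(on_message, on_silence):
    ctx = zmq.Context.instance()
    collector = ctx.socket(zmq.PULL)
    collector.bind("tcp://*:5558")
    poller = zmq.Poller()
    poller.register(collector, zmq.POLLIN)
    while True:
        # case 2: a crashed node pushes nothing, so all the sink can see is
        # silence; poll with a timeout and treat expiry as a lost message
        if dict(poller.poll(timeout=5000)):      # 5 s, tune to the workload
            on_message(collector.recv_json())
        else:
            on_silence()   # e.g. mark outstanding work as lost, retry, alert

The timeout in the sink is the part that feels clunky to me, hence the
question.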
Regards,
Gyorgy

On Wed, Dec 6, 2017 at 3:32 AM, Stephen Riesenberg <[email protected]> wrote:

> It obviously gets complicated in a hurry, but to add to that:
>
> Can you create a broker per scaling group, which subscribes to the
> relevant topic and uses push sockets to load balance automagically to
> workers? It shouldn't get overloaded if it's just shuffling messages,
> though it's always possible, and that's when you may need to switch away
> from pub/sub and use credit-based flow control or something.
>
> Or possibly you could even use a majordomo broker per group? You'd
> probably need to manage subscribers yourself, like you suggested with
> router/dealer. This seems more complicated. The benefit is you could
> possibly reuse everything you've already built and just add new
> components to manage the pub/sub aspect to extend the network. If you're
> using containers you'd just need to level up your automation to handle
> the added complexity of more networking configs, groups/clusters and the
> like.
>
> Or I wonder if you could combine majordomo brokers with Zyre groups and
> use SHOUT to publish to your brokers (or a proxy for the broker). A
> subscription is a Zyre group membership. A publisher shouts at that Zyre
> group. The proxy, a Zyre client, forwards messages to the actual broker
> (on localhost), which load balances for a scaling group.
>
> This all depends on volume and failure scenarios too.
>
> On Tue, Dec 5, 2017 at 5:38 AM Luca Boccassi <[email protected]> wrote:
>
>> On Tue, 2017-12-05 at 09:03 +0100, Gyorgy Szekely wrote:
>> > Hi ZeroMQ community,
>> > In our application we use ZeroMQ for communication between backend
>> > services and it works quite well (thanks for the awesome library).
>> > Up to now we relied on the request/reply pattern only (a
>> > majordomo-derivative protocol), where a broker distributes tasks
>> > among workers. Everything runs in its own container, and scaling
>> > works like a charm: if workers can't keep up with the load, we can
>> > simply start some more, and the protocol handles the rest. So far so
>> > good.
>> >
>> > Now, we would like to use pub/sub: a component produces some data
>> > and publishes an event about it. It doesn't care (and potentially
>> > can't even know) who needs it. Interested peers subscribe to the
>> > topic. What I'm puzzled with is scaling. If a subscriber can't keep
>> > up with the load I would like to scale it up just like the workers.
>> > But in this case the events won't be distributed; all instances
>> > receive the same set, increasing CPU load but not throughput.
>> >
>> > I would like a pub/sub where load is distributed among identical
>> > instances. ZeroMQ has all kinds of fancy patterns (pirates and
>> > stuff); is there something for this problem?
>> >
>> > What I had in mind is equipping subscribers with a "groupId", which
>> > is the same within a scaling group. Subscribers send their IDs on
>> > connection to the broker, which publishes the topics to only one
>> > subscriber in each group. This means I can't use pub/sub sockets and
>> > have to reimplement the behavior on router/dealer, but that's ok.
>> >
>> > What do you think, is there a better way?
>> >
>> > Regards,
>> > Gyorgy
>>
>> Sounds like what you want is similar to push-pull: load balancing is
>> embedded in that pattern. Have a look at the docs.
>>
>> --
>> Kind regards,
>> Luca Boccassi
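For reference, if I read Stephen's broker-per-scaling-group idea right, the
per-group piece would be little more than a SUB-to-PUSH forwarder. A rough
sketch (again assuming pyzmq, with made-up endpoints and topic name):

import zmq

# one forwarder ("mini broker") per scaling group: it is the single
# subscriber for the whole group and load-balances to the group's workers
def run_group_forwarder(publisher_endpoint, topic, workers_endpoint):
    ctx = zmq.Context.instance()
    frontend = ctx.socket(zmq.SUB)
    frontend.connect(publisher_endpoint)       # e.g. "tcp://publisher-host:6000"
    frontend.setsockopt(zmq.SUBSCRIBE, topic)  # e.g. b"orders", the group's one subscription
    backend = ctx.socket(zmq.PUSH)
    backend.bind(workers_endpoint)             # e.g. "tcp://*:6001", workers connect with PULL
    # proxy() shuffles messages from the SUB side to the PUSH side;
    # PUSH round-robins across connected workers, which is the load balancing
    zmq.proxy(frontend, backend)

Each group subscribes exactly once through its forwarder, and the PUSH
socket round-robins the messages across the group's workers, which is where
the load balancing comes from.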
_______________________________________________
zeromq-dev mailing list
[email protected]
https://lists.zeromq.org/mailman/listinfo/zeromq-dev
