Hi James:

I’m doing something similar on the service discovery end, but it’s a work in 
progress, so take this with the appropriate amount of salt ;-)

It seems a good idea to minimize state as much as possible, especially 
distributed state, so I have so far avoided the central “registrar”, preferring 
to distribute that functionality out to the nodes, and to delegate as much 
functionality as possible to ZeroMQ itself.

I’ve got a single well-known endpoint, which is a process running zmq_proxy 
(actually multiple processes, but let’s keep it simple).  Nodes use PUB/SUB 
messaging to exchange discovery messages with the proxy, and use the discovery 
messages to establish direct PUB/SUB connections to peer nodes over a second 
socket pair.  I let ZeroMQ deal with the filtering by topic.  I also let ZeroMQ 
deal with ignoring multiple connection attempts to the same endpoint, which  
greatly simplifies the discovery protocol.  (If you decide to do something like 
that, you probably want to make sure you are working with a relatively recent 
version of ZeroMQ, as there have been some recent changes in that 
functionality: https://github.com/zeromq/libzmq/pull/2879).
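
To make that concrete, here’s roughly the shape of the thing in Python with 
pyzmq.  Topic names, ports, and hostnames are all made up for illustration; 
the real code has more moving parts.  The well-known endpoint is just:

    import zmq

    ctx = zmq.Context.instance()

    xsub = ctx.socket(zmq.XSUB)   # nodes PUBlish discovery messages here
    xsub.bind("tcp://*:5550")
    xpub = ctx.socket(zmq.XPUB)   # nodes SUBscribe to everyone's messages here
    xpub.bind("tcp://*:5551")

    zmq.proxy(xsub, xpub)         # forward messages and subscriptions both ways

and each node does something along these lines:

    import zmq

    NODE_ID = b"node-a"                     # made-up identifier
    DATA_ENDPOINT = b"tcp://10.0.0.1:6000"  # this node's direct PUB endpoint

    ctx = zmq.Context.instance()

    # First socket pair: discovery, via the well-known proxy.
    disc_pub = ctx.socket(zmq.PUB)
    disc_pub.connect("tcp://discovery-proxy:5550")
    disc_sub = ctx.socket(zmq.SUB)
    disc_sub.setsockopt(zmq.SUBSCRIBE, b"discovery.")  # ZeroMQ filters by topic
    disc_sub.connect("tcp://discovery-proxy:5551")

    # Second socket pair: direct PUB/SUB connections to peers.
    data_pub = ctx.socket(zmq.PUB)
    data_pub.bind(DATA_ENDPOINT.decode())
    data_sub = ctx.socket(zmq.SUB)
    data_sub.setsockopt(zmq.SUBSCRIBE, b"")

    # Announce ourselves, then connect to every peer we hear about.
    disc_pub.send_multipart([b"discovery." + NODE_ID, DATA_ENDPOINT])
    while True:
        topic, endpoint = disc_sub.recv_multipart()
        if endpoint == DATA_ENDPOINT:
            continue              # that's our own announcement coming back
        # Connecting again to an endpoint we already know is harmless --
        # ZeroMQ ignores the duplicate attempt (see the pull request above).
        data_sub.connect(endpoint.decode())

(That initial announcement is racy, since the discovery sockets may not be 
connected yet; the welcome-message trick below is what closes that gap.)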

The result of this is a fully-connected network, with each node having direct 
PUB/SUB connections to every other node.  That may or may not work for your 
application, but for mine it is fine (~100 nodes total).

As mentioned, there’s a somewhat complicated protocol that ensures every node 
gets to see all the discovery messages without flooding the network.  That 
part is still a work in progress, but it’s looking pretty reliable so far.
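
I won’t try to reproduce that protocol here while it’s still settling, but to 
give a flavor of the kind of thing involved (a much-simplified stand-in I’m 
making up for illustration, not the actual protocol): when a node hears from a 
peer it hasn’t seen before, it announces itself again, so late joiners learn 
about the existing nodes; staying quiet about peers it already knows is what 
keeps the network from flooding.

    # Illustrative only -- a simplified stand-in for the real protocol.
    known_peers = {}

    def on_discovery(node_id, endpoint):
        if node_id in known_peers:
            return                   # old news; stay quiet, no flood
        known_peers[node_id] = endpoint
        data_sub.connect(endpoint.decode())
        announce_self()              # hypothetical helper: publish our own
                                     # (node_id, endpoint) on disc_pub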

If you decide to do something similar, let me suggest you take a look at the 
excellent ZMQ_XPUB_WELCOME_MSG socket option contributed by Doron Somech 
(https://somdoron.com/2015/09/reliable-pubsub/).  I use this to get a 
notification when the discovery SUB socket is connected to the zmq_proxy, which 
triggers publication of discovery messages on the discovery PUB socket.
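
Concretely, the proxy sets the welcome message on its XPUB socket before 
calling zmq_proxy (one wrinkle: the welcome message still has to match one of 
the SUB socket’s subscriptions to be delivered, which is why I give it the 
discovery topic prefix).  Continuing the sketch above:

    WELCOME = b"discovery.welcome"
    xpub.setsockopt(zmq.XPUB_WELCOME_MSG, WELCOME)  # sent to each new subscriber
    zmq.proxy(xsub, xpub)

and the node’s receive loop treats its arrival as the cue to publish:

    while True:
        frames = disc_sub.recv_multipart()
        if frames == [WELCOME]:
            # The discovery SUB socket is now connected to the proxy; this
            # is the trigger to publish our own discovery messages.
            disc_pub.send_multipart([b"discovery." + NODE_ID, DATA_ENDPOINT])
        else:
            topic, endpoint = frames
            if endpoint != DATA_ENDPOINT:   # skip our own announcement
                data_sub.connect(endpoint.decode())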

Hope this helps...

Regards,

Bill

> On Jun 23, 2018, at 12:13 AM, James Addison <[email protected]> wrote:
> 
> Looking for a little guidance/advice on ZMQ implementation.
> 
> The following demonstrates the simplified architecture that I'm considering. 
> It doesn't (yet) take into consideration redundancy or load balancing at all 
> levels. The general flow of request/response traffic would be:
> 
> -> HTTP request from internet
> -> nginx (1 node)
> -> aiohttp + zmq-based frontend (1 or more nodes depending on system demands)
> -> zmq-based router (1 node)
> -> zmq-based worker (n nodes; scalable depending on dynamic demand)
> 
> I want my system to work in environments where multicast/broadcast is not 
> available (i.e. AWS EC2 VPC), so I believe a well-known node for service 
> discovery is needed. 
> 
> With that in mind, all zmq-based nodes would:
> 
> - register with the 'central' service discovery (SD) node on startup to make 
> other nodes aware of its presence
> - separately SUBscribe to the service discovery node's PUB endpoint to 
> receive topics of pertinent peer nodes' connection details
> 
> In the nginx config, I plan to have an 'upstream' defined in a separate file 
> that is updated by a zmq-based process that also SUBscribes to the service 
> discovery node.
> 
> ZMQ-based processes, and their relation to other ZMQ-based processes:
> 
> - service discovery (SD)
> - zmq-based nginx upstream backend updater; registers with SD, SUBs to 
> frontend node topic (to automatically add frontend node connection details to 
> nginx config and reload nginx)
> - frontend does some request validation and caching; registers with SD, SUBS 
> to router node topic (to auto connect to the router's endpoint)
> - router is the standard zmq DEALER/ROUTER pattern; registers with SD
> - worker is the bit that handles the heavy lifting; registers with SD, SUBS 
> to router node topic (to auto connect to the router's endpoint)
> 
> The whole point of this is that each node only ever needs to know the 
> well-known service discovery node endpoint - and each node can auto-discover 
> and hopefully recover in most downtime scenarios (excluding mainly if the SD 
> node goes down, but that's outside of scope at the moment).
> 
> Questions!
> 
> 1. Does this architecture make sense? In particular, the single well-known 
> service discovery node and every other node doing PUB/SUB with it for relevant 
> endpoint topics?
> 2. Who should heartbeat to whom? PING/PONG? I.e. when a given node registers 
> with the SD node, should the registering node start heartbeating on the same 
> connection to the SD node, or should the SD node open a separate new socket 
> to the registering node? The SD node is the one that will need to know if 
> registered nodes drop off the earth, I think?
> 
> I'll likely have followup questions - hope that's ok!
> 
> Thanks,
> James

_______________________________________________
zeromq-dev mailing list
[email protected]
https://lists.zeromq.org/mailman/listinfo/zeromq-dev
