I am developing a daemon that I want to control via ZeroMQ sockets. The messages are JSON objects. Communication would work like this:

* N clients can connect to the daemon simultaneously and send messages to the daemon
* Messages might be synchronous (= the client waits for a response from the server) or asynchronous (= the client just sends, and does not wait for a response)
* The server sends out either responses to client messages, or asynchronous messages of its own (the latter are sent as broadcasts, i.e. no distinction is made between individual clients)
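To make the message flow concrete, here is roughly what I picture on the client side (a pyzmq sketch; the endpoint addresses, the "cmd"/"sync" fields and the DEALER/SUB socket choice are just my assumptions, not a finished design):

import zmq

ctx = zmq.Context.instance()

# channel for synchronous and asynchronous messages to the daemon
dealer = ctx.socket(zmq.DEALER)
dealer.connect("tcp://localhost:5555")   # placeholder endpoint

# channel for broadcasts coming from the daemon
sub = ctx.socket(zmq.SUB)
sub.connect("tcp://localhost:5556")      # placeholder endpoint
sub.setsockopt(zmq.SUBSCRIBE, b"")       # receive every broadcast

# synchronous message: send and block until the daemon answers
dealer.send_json({"cmd": "status", "sync": True})
print(dealer.recv_json())

# asynchronous message: send and carry on, no reply expected
dealer.send_json({"cmd": "log", "sync": False, "text": "hello"})

# broadcasts arrive on the SUB socket whenever the daemon publishes
print(sub.recv_json())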

Essential requirements are:

* Messages _must not_ get lost
* If the connection is dropped, it has to be reestablished automatically and transparently; again, no messages are allowed to get lost, so while reconnecting, messages have to be buffered
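As far as I understand it, ZeroMQ's automatic reconnect together with the high-water marks should cover most of this; something along these lines is what I would tune (pyzmq sketch, the values and the DEALER socket are just placeholders):

import zmq

ctx = zmq.Context.instance()
sock = ctx.socket(zmq.DEALER)

# retry a dropped connection automatically, backing off up to 5 s
sock.setsockopt(zmq.RECONNECT_IVL, 100)        # initial retry interval in ms
sock.setsockopt(zmq.RECONNECT_IVL_MAX, 5000)   # upper bound for the backoff in ms

# how many outgoing messages ZeroMQ may queue while the peer is unreachable
sock.setsockopt(zmq.SNDHWM, 10000)

# block on close until all queued messages have been handed to the transport
sock.setsockopt(zmq.LINGER, -1)

sock.connect("tcp://localhost:5555")           # placeholder endpoint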

ZeroMQ fulfills these requirements, but I am wondering about the topology. With plain TCP it would be relatively easy at first: the server has one listening socket, synchronous messages go over the individual TCP connection between server and client, and broadcasts are done by having the server send the message over all connections. But with ZeroMQ, it seems that I'd need two sockets - one for Request-Reply and one for Publish-Subscribe. Is this correct? Would I really need two sockets, or is there a better option?
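Concretely, the two-socket layout I have in mind looks like this on the server side (pyzmq sketch; the endpoints and the "sync" field are made up, and I assume the clients connect with DEALER sockets so the ROUTER sees [identity, payload] frames):

import json
import zmq

ctx = zmq.Context.instance()

# socket 1: request/reply traffic from clients (ROUTER, so the server can
# serve many clients and address the reply to the right one)
rep = ctx.socket(zmq.ROUTER)
rep.bind("tcp://*:5555")          # placeholder endpoint

# socket 2: broadcasts to all connected clients
pub = ctx.socket(zmq.PUB)
pub.bind("tcp://*:5556")          # placeholder endpoint

poller = zmq.Poller()
poller.register(rep, zmq.POLLIN)

while True:
    events = dict(poller.poll(timeout=1000))

    if rep in events:
        # ROUTER prepends the client identity frame to each message
        identity, payload = rep.recv_multipart()
        request = json.loads(payload)

        if request.get("sync"):
            # synchronous message: send the reply back to that client only
            reply = {"status": "ok", "echo": request}
            rep.send_multipart([identity, json.dumps(reply).encode()])
        # asynchronous message: just handle it, no reply is sent

    # example broadcast to everyone, regardless of which clients are connected
    pub.send_json({"event": "heartbeat"})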

Also note that I don't expect many clients to be connected at the same time. Maybe 10-20 at most, not thousands. Most of the time, it will be 1, maybe 2.

Of course, I could do this with plain TCP, but then I would have to handle connection failures myself, and doing that in a robust manner can be tricky. ZeroMQ handles this for me, which is nice. Plus, ZeroMQ allows me to connect via IPC, inproc, etc., which are more lightweight than TCP.
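For what it's worth, switching transports is just a matter of the bind address; the names below are placeholders:

import zmq

ctx = zmq.Context.instance()
sock = ctx.socket(zmq.ROUTER)

# one socket can be bound to several transports at the same time
sock.bind("tcp://*:5555")                # remote clients
sock.bind("ipc:///tmp/mydaemon.sock")    # local clients, no TCP stack involved
sock.bind("inproc://mydaemon")           # threads inside the daemon process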

So, thoughts?