It is possible to do this with IP Multicast. The query goes out on the
multicast group and all of the query servers read it. Each server waits
a random amount of time, then transmits the answer. Here's the trick:
it's multicast. All of the query servers listen to each other's
responses, and a server drops out when it hears another server answer
the query. Each server also has to decide whether to execute the query
before its timer fires (wasting work if someone else answers first) or
only after it wins; that trade-off would take some tuning.
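A minimal sketch of the suppression idea, with the network taken out of
the picture: each server draws a random backoff, the shortest backoff
"answers first," and everyone else suppresses. The function name and
server ids are hypothetical, just to illustrate the election.

```python
import random

def elect_responder(server_ids, rng=None):
    """Simulate the multicast backoff election.

    Each server picks a random delay; the server whose delay expires
    first transmits the answer on the multicast group, and the others,
    having heard it, suppress their own responses.
    """
    rng = rng or random.Random()
    delays = {s: rng.random() for s in server_ids}
    winner = min(delays, key=delays.get)
    suppressed = [s for s in server_ids if s != winner]
    return winner, suppressed
```

In a real implementation each server would join a multicast group
(e.g. a UDP socket with the IP_ADD_MEMBERSHIP option) and the "suppress"
step would be triggered by receiving another server's answer packet
before its own timer fires.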
Having all participants snoop on their peers is a really powerful
design. I worked on a telecom system that used IP Multicast to do
shortest-path-first allocation of T1 lines. Worked really well. It's a
shame Enron never used it.
On 01/24/2013 04:17 PM, Chris Hostetter wrote:
: For example perhaps a load balancer that sends multiple queries
: concurrently to all/some replicas and only keeps the first response
: might be effective. Or maybe a load balancer which takes account of the
I know of other distributed query systems that use this approach when
query speed matters more to people than load, and the people who use
them seem to think it works well.
Given that it synthetically multiplies the load of each end-user
request, it's probably not something we'd want to turn on by default,
but a configurable option certainly seems like it might be handy.
-Hoss