Hi there,

We have a couple of use cases that do fanout reads for their data, meaning a
single read request from the client contains multiple keys that live on
different physical hosts. (I know this isn't the recommended way to access C*.)

Right now the coordinator issues a separate read command per key, even when
several of those commands end up going to the same physical host, which I
think causes a lot of overhead.
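
To make the pattern concrete, here is a minimal client-side sketch of the
kind of read I mean, assuming the DataStax Java driver 3.x and a made-up
my_ks.user_data table keyed by user_id (names are just for illustration):

import java.util.Arrays;
import java.util.List;

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

public class FanoutReadExample {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect()) {

            // One logical read from the client that covers many partition keys,
            // spread across the ring and therefore across different replicas.
            List<String> userIds = Arrays.asList("u1", "u2", "u3", "u4");

            // The coordinator splits this into one single-partition read per key
            // and sends each one out separately, even when several of the keys
            // are owned by the same physical host.
            ResultSet rs = session.execute(
                    "SELECT user_id, payload FROM my_ks.user_data WHERE user_id IN ?",
                    userIds);

            for (Row row : rs) {
                System.out.println(row.getString("user_id"));
            }
        }
    }
}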

I'm wondering whether it would be valuable to provide a new read command, so
that the coordinator can batch the reads destined for one data node into a
single message, and the data node returns the results for all the keys that
belong to it. A rough sketch of what I have in mind is below.
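
Very roughly, the idea is something like this (plain Java, all names
hypothetical, not actual Cassandra internals): group the requested keys by
the replica that owns them, then send one multi-key read per replica instead
of one read per key.

import java.net.InetAddress;
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

public class BatchedReadSketch {

    /**
     * Group the requested partition keys by the replica that owns them, so the
     * coordinator can send one message per physical host instead of one per key.
     */
    public static Map<InetAddress, List<ByteBuffer>> groupKeysByReplica(
            List<ByteBuffer> keys,
            Function<ByteBuffer, InetAddress> replicaFor) {  // e.g. backed by the token ring
        Map<InetAddress, List<ByteBuffer>> perReplica = new HashMap<>();
        for (ByteBuffer key : keys) {
            perReplica.computeIfAbsent(replicaFor.apply(key), host -> new ArrayList<>())
                      .add(key);
        }
        return perReplica;
    }

    /** Hypothetical dispatch: one batched read message per data node. */
    public static void dispatch(Map<InetAddress, List<ByteBuffer>> perReplica,
                                MultiKeyReadSender sender) {
        perReplica.forEach(sender::sendMultiKeyRead);
    }

    /** Placeholder for whatever messaging layer would carry the batched read. */
    public interface MultiKeyReadSender {
        void sendMultiKeyRead(InetAddress replica, List<ByteBuffer> keys);
    }
}

The data node would then answer with the rows for all of the keys it owns in
a single response.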

Has anything similar been proposed or discussed before?


-- 
Dikang
