Assume a map/reduce program which needs to update some values during
ingest, and needs to perform read operations on 100 keys, each of which
has, say, 50 different columns. This happens many times for a given
reduce task in the cluster. Shouldn't that be handled by the server
as a single call?
No. At that point you basically have no overhead advantage vs just
doing multiple single-row requests.
On Thu, Jun 17, 2010 at 2:39 PM, Sonny Heer wrote:
> Any plans for this sort of call?
>
>
> Instead of:
>
> [...]
Any plans for this sort of call?
Instead of:
public Map<String, List<ColumnOrSuperColumn>> multiget_slice(String
keyspace, List<String> keys, ColumnParent column_parent,
SlicePredicate predicate, ConsistencyLevel consistency_level) throws
InvalidRequestException, UnavailableException, TimedOutException,
TException;
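For illustration, here is a minimal sketch of the access pattern from the first message: 100 keys, each with 50 columns, fetched in one batched call rather than 100 single-key requests. It simulates the column family with a plain in-memory map instead of a real Thrift-connected Cassandra.Client, so all names here (MultigetSketch, multigetSlice, the key/column strings) are hypothetical stand-ins, not the actual server API:

```java
import java.util.*;

// Simulates a reduce task's batched read: 100 keys x ~50 columns
// per key, returned from a single call instead of 100 requests.
public class MultigetSketch {
    // Hypothetical in-memory stand-in for a column family.
    static final Map<String, Map<String, String>> store = new HashMap<>();

    static {
        // Populate 100 rows with 50 columns each.
        for (int k = 0; k < 100; k++) {
            Map<String, String> cols = new HashMap<>();
            for (int c = 0; c < 50; c++) cols.put("col" + c, "v" + c);
            store.put("key" + k, cols);
        }
    }

    // One round trip for many keys, analogous in shape to multiget_slice.
    static Map<String, Map<String, String>> multigetSlice(List<String> keys) {
        Map<String, Map<String, String>> result = new HashMap<>();
        for (String key : keys) {
            result.put(key, store.getOrDefault(key, Collections.emptyMap()));
        }
        return result;
    }

    public static void main(String[] args) {
        List<String> wanted = new ArrayList<>();
        for (int k = 0; k < 100; k++) wanted.add("key" + k);

        // A single batched call replaces 100 per-key requests.
        Map<String, Map<String, String>> rows = multigetSlice(wanted);
        System.out.println(rows.size());             // 100
        System.out.println(rows.get("key0").size()); // 50
    }
}
```

The batching here only saves per-request overhead (round trips, framing); as the reply above notes, once each key needs its own server-side work anyway, the advantage over issuing single-row requests shrinks.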
---
public