Whenever I mention the limit in a talk I say, "2 billion columns" in a faux
10-year-old voice :). Cassandra can have a 2-billion-column row. A 60MB row
in the row cache will make the JVM sh*t the bed (you should not use the row
cache anyway). As rcoli points out, with a 35 GB row I doubt you can do
anything with it.
On Sun, May 12, 2013 at 6:26 PM, Edward Capriolo wrote:
> 2 billion is the theoretical maximum number of columns under a row. It is
> NOT the maximum limit of a CQL collection. The design of CQL collections
> currently requires retrieving the entire collection on read.
Each column also carries a per-column byte overhead.
Collections that big are likely not what you want. Many people are using
Cassandra because they want low-latency reads (<10ms) on smallish row keys or
key slices. Attempting to get 10K+ columns in one go generally does not
work well. First, there are network issues: 100K columns of 5 bytes each
requires a large transfer in a single response.
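To put a rough number on that point, here is a back-of-envelope sketch of the wire size of a 100K-column slice. The per-column name size and the ~15-byte metadata overhead are illustrative assumptions, not exact Cassandra figures:

```python
# Back-of-envelope size of reading 100K columns in one go.
# NAME_BYTES and OVERHEAD_BYTES are assumed values for illustration
# (column name, timestamp, flags), not exact Cassandra internals.
N_COLUMNS = 100_000
VALUE_BYTES = 5
NAME_BYTES = 8          # e.g. a long used as the column name
OVERHEAD_BYTES = 15     # assumed per-column metadata overhead

payload = N_COLUMNS * (VALUE_BYTES + NAME_BYTES + OVERHEAD_BYTES)
print(payload)                     # 2800000 bytes
print(round(payload / 1024**2, 1)) # ~2.7 MB for "just" 5-byte values
```

Even with tiny 5-byte values, the per-column bookkeeping dominates and a single read turns into megabytes on the wire.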
In the CQL3 protocol the size of a collection is an unsigned short, so the
maximum number of elements in a LIST<...> is 65,535. There's no check,
afaik, that stops you from creating lists that are bigger than that, but
the protocol doesn't handle returning them (you get only the first
N mod 65536 items).
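The truncation above can be sketched with Python's `struct` module: packing a count into an unsigned 16-bit short, the way a protocol frame would, silently wraps any larger value modulo 65536. This is an illustration of the encoding limit, not actual driver code:

```python
import struct

# The CQL3 native protocol encodes a collection's element count as an
# unsigned 16-bit short, so any larger count wraps modulo 65536.
def encoded_count(n: int) -> int:
    # Pack as a big-endian unsigned short ('>H'), truncating to 16 bits
    # as the wire format would, then read the count back.
    raw = struct.pack('>H', n & 0xFFFF)
    (count,) = struct.unpack('>H', raw)
    return count

print(encoded_count(100))      # 100  -- small lists round-trip fine
print(encoded_count(70_000))   # 4464 -- i.e. 70000 % 65536
```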
2 billion is the theoretical maximum number of columns under a row. It is
NOT the maximum limit of a CQL collection. The design of CQL collections
currently requires retrieving the entire collection on read.
On Sun, May 12, 2013 at 11:13 AM, Robert Wille wrote:
> I designed a data model for my
I designed a data model for my data that uses a list of UUIDs in a
column. When I designed my data model, my expectation was that most of the
lists would have fewer than a hundred elements, with a few having several
thousand. I discovered in my data a list that has nearly 400,000 items in
it.
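Tying this back to the protocol limit discussed above: with the element count taken modulo 65536, a client reading a 400,000-item list would see only a small fraction of it. A minimal sketch of the arithmetic:

```python
# A ~400,000-element list against the 16-bit length field: the client
# would see only (400_000 % 65536) items of the list on read.
LIST_SIZE = 400_000
WRAP = 65_536  # 2**16 counts representable in an unsigned short

visible = LIST_SIZE % WRAP
print(visible)                             # 6784
print(round(visible / LIST_SIZE * 100, 1)) # ~1.7% of the list
```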