Thanks Dean, very helpful info.
Javier
On Tue, Feb 26, 2013 at 7:33 AM, Hiller, Dean wrote:
> Oh, and 50 CF's should be fine.
>
> Dean
>
> From: Javier Sotelo
> Reply-To: user@cassandra.apache.org
Aaron,
Would 50 CFs be pushing it? According to
http://www.datastax.com/dev/blog/whats-new-in-cassandra-1-0-improved-memory-and-disk-space-management,
"This has been tested to work across hundreds or even thousands of
ColumnFamilies."
What is the bottleneck, IO?
Thanks,
Javier
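[Editorial note: the blog post linked above describes how, since Cassandra 1.0, all column families share a single memtable pool (`memtable_total_space_in_mb`, defaulting to one third of the heap), so the per-CF memtable share stays modest even at 50 CFs. A back-of-the-envelope sketch, using the ~1.5 GB heap reported later in this thread as an assumed example:]

```python
# Rough per-CF memtable budget under Cassandra 1.0's shared memtable pool.
# Assumes the 1/3-of-heap default for memtable_total_space_in_mb; the heap
# size is an example value, not a recommendation.
heap_mb = 1536                      # ~1.5 GB heap, as reported later in the thread
memtable_total_mb = heap_mb // 3    # default global memtable pool
per_cf_mb = memtable_total_mb / 50  # spread evenly across 50 CFs (worst case)
print(per_cf_mb)  # 10.24
```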
On Sun, Feb 24,
Looks like someone beat me to it,
https://issues.apache.org/jira/browse/CASSANDRA-4321
On Fri, Jun 8, 2012 at 9:06 AM, Javier Sotelo wrote:
> Different node same hardware now gets the stack overflow error but I found
> the part of the stack trace that is more interesting:
>
>
>     at org.apache.cassandra.db.compaction.LeveledCompactionTask.execute(LeveledCompactionTask.java:50)
Is it time for a JIRA ticket?
On Thu, Jun 7, 2012 at 7:03 AM, Javier Sotelo wrote:
> nodetool ring showed 34.89GB load. Upgrading from 1.1.0. One small
> keyspace with no compression,
> AbstractCassandraDaemon.java (line 122) Heap size: 1525415936/1525415936
> The JVM only has 1.5 GB of RAM; this is at the lower limit. If you have
> some data to load, I would not be surprised if it failed to start.
>
> Cheers
>
> -
> Aaron Morton
> Freelance
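[Editorial note: for reference, the heap figure in the quoted log line works out to just over 1.4 GiB, consistent with Aaron's "1.5 GB" reading:]

```python
# Convert the heap size logged by AbstractCassandraDaemon (bytes) to GiB.
heap_bytes = 1525415936
heap_gib = heap_bytes / 2**30
print(f"{heap_gib:.2f} GiB")  # 1.42 GiB
```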
Hi All,
On a SuSE Linux blade with 6 GB of RAM.
With disk_access_mode mmap_index_only and mmap, I see an OOM "map failed" error
on the SSTableBatchOpen thread. cat /proc/<pid>/maps shows a peak of 53521
right before it dies. vm.max_map_count = 1966080, and /proc/<pid>/limits
shows unlimited locked memory.
with disk_acc
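[Editorial note: the comparison being made above is the number of entries in /proc/<pid>/maps versus the kernel's vm.max_map_count limit; mmap "map failed" errors can occur when that limit, or the address space, is exhausted. A minimal sketch of the counting step; `count_mappings` is a hypothetical helper, not part of Cassandra:]

```python
# Count memory-map entries (one per line, as in /proc/<pid>/maps) so the
# figure can be compared against the kernel's vm.max_map_count limit.
def count_mappings(maps_text: str) -> int:
    return sum(1 for line in maps_text.splitlines() if line.strip())

# Two sample lines in /proc/<pid>/maps format (addresses are made up).
sample = (
    "00400000-00452000 r-xp 00000000 08:02 173521 /usr/bin/java\n"
    "7f3c00000000-7f3c00021000 rw-p 00000000 00:00 0\n"
)
print(count_mappings(sample))  # 2
```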