Thanks a lot Jonathan! That seems to be it, since the exact same
configuration w/ the same data starts up and works fine on a different
server.
-Aram
On Wed, Dec 1, 2010 at 5:24 PM, Jonathan Ellis wrote:
> Stack trace looks like an OS-level thread limit causing problems, not
> actually memory.
>
Stack trace looks like an OS-level thread limit causing problems, not
actually memory.
On Wed, Dec 1, 2010 at 7:05 PM, Aram Ayazyan wrote:
> Hi Aaron,
>
> OOM is happening both after the system has been running for a while as
> well as when I restart it afterwards. The only way to make it run
> […]
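A quick way to check the OS-level limits Jonathan points at (a sketch: which limit actually bites depends on the distribution, and the `/proc` paths are Linux-specific; the JVM reports this class of failure as an OutOfMemoryError even though the heap is fine):

```shell
# Per-user process/thread limit for the user running Cassandra.
# "unlimited" or a large number is what you want.
ulimit -u

# System-wide caps that can also exhaust thread creation (Linux only;
# silently skipped elsewhere):
cat /proc/sys/kernel/threads-max 2>/dev/null || true
cat /proc/sys/kernel/pid_max 2>/dev/null || true
```

Comparing these values between the failing server and the one that works fine would confirm the diagnosis.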
Regarding caches, I haven't explicitly enabled them and the
"saved_caches" directory is empty.
-Aram
On Wed, Dec 1, 2010 at 5:05 PM, Aram Ayazyan wrote:
> Hi Aaron,
>
> OOM is happening both after the system has been running for a while as
> well as when I restart it afterwards. The only way to […]
Hi Aaron,
OOM is happening both after the system has been running for a while as
well as when I restart it afterwards. The only way to make it run
after it has crashed is to remove everything from the data and commitlog
directories. Unfortunately I don't have the original log from when
Cassandra crashed.
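The workaround Aram describes (emptying the data and commitlog directories before restarting) could be scripted roughly as below. The function name and paths are illustrative only; on a real 0.6 node the actual locations come from storage-conf.xml. Moving the directories aside keeps a backup instead of deleting outright:

```shell
#!/bin/sh
# Sketch of the recovery workaround: move each directory to a timestamped
# backup and recreate it empty, so the node starts with clean state but
# nothing is lost.
reset_node_dirs() {
    stamp=$(date +%Y%m%d%H%M%S)
    for d in "$@"; do
        if [ -d "$d" ]; then
            mv "$d" "${d}.${stamp}.bak"   # keep a backup instead of rm -rf
            mkdir -p "$d"                 # recreate empty for the restart
        fi
    done
}

# Demo against throwaway directories; on a real node these would be the
# DataFileDirectory and CommitLogDirectory values from storage-conf.xml:
demo=$(mktemp -d)
mkdir -p "$demo/data" "$demo/commitlog"
reset_node_dirs "$demo/data" "$demo/commitlog"
ls "$demo"
```

Stop the node before moving anything, and restart it afterwards.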
Do you have a log message for the OOM, and some GC messages around it?
Have you tried watching the server with jconsole? Is the OOM happening
on system start, after it's been running for a while, or both? Do you
have any row/key caches? I cannot remember whether 0.6* has this, but
have you enabled cache saving?
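One way to capture the GC messages Aaron asks about is to add verbose-GC flags to the JVM options before starting the node. The `JVM_OPTS` variable name and the log path are assumptions (0.6's bin/cassandra.in.sh is one place they could go); the flags themselves are standard HotSpot options:

```shell
# Append GC logging to whatever options are already set, so heap behaviour
# around the OOM ends up in a file instead of being lost:
JVM_OPTS="$JVM_OPTS -verbose:gc"
JVM_OPTS="$JVM_OPTS -XX:+PrintGCDetails"
JVM_OPTS="$JVM_OPTS -XX:+PrintGCTimeStamps"
JVM_OPTS="$JVM_OPTS -Xloggc:/var/log/cassandra/gc.log"
echo "$JVM_OPTS"
```

If the log shows plenty of free heap right before the OutOfMemoryError, that also supports the thread-limit theory rather than actual memory exhaustion.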
Hi,
We have a small cluster of 3 Cassandra servers running w/ full
replication. Every once in a while we get an OutOfMemory exception and
have to restart servers. Sometimes just restarting doesn't fix it and
we have to clean the commitlog or data directory.
We are running Cassandra 0.6.8. There is […]