Re: 0.6.1 insert 1B rows, crashed when using py_stress

2010-04-23 Thread Brandon Williams
On Fri, Apr 23, 2010 at 4:59 AM, richard yao wrote: > I got the same problem, and after that Cassandra can't be started. > I want to know how to restart Cassandra after it crashed. > Thanks for any reply. > Perhaps supply the error when you restart it? -Brandon
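A minimal sketch of how that startup error could be captured, assuming a stock 0.6 tarball layout (the log path comes from the default log4j.properties and may differ on other installs):

# Sketch only: run Cassandra in the foreground so the failure prints to the console.
bin/cassandra -f
# Or inspect the log it was writing before the crash / failed restart.
tail -n 100 /var/log/cassandra/system.log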

Re: 0.6.1 insert 1B rows, crashed when using py_stress

2010-04-23 Thread richard yao
I got the same problem, and after that Cassandra can't be started. I want to know how to restart Cassandra after it crashed. Thanks for any reply.

RE: 0.6.1 insert 1B rows, crashed when using py_stress

2010-04-20 Thread Mark Jones
On Mon, Apr 19, 2010 at 7:12 PM, Brandon Williams wrote: > On Mon, Apr 19, 2010 at 9:06 PM, Schubert Zhang wrote: >> >> 2. Reject the request when short of resources, instead of throwing an OOME >> and exiting (crash). > > Right, that is the crux of the problem

Re: 0.6.1 insert 1B rows, crashed when using py_stress

2010-04-20 Thread Tatu Saloranta
On Mon, Apr 19, 2010 at 7:12 PM, Brandon Williams wrote: > On Mon, Apr 19, 2010 at 9:06 PM, Schubert Zhang wrote: >> >> 2. Reject the request when short of resources, instead of throwing an OOME >> and exiting (crash). > > Right, that is the crux of the problem. It will be addressed here: > https://issues.apache.org/jira/browse/CASSANDRA-685

Re: 0.6.1 insert 1B rows, crashed when using py_stress

2010-04-20 Thread Eric Evans
On Tue, 2010-04-20 at 10:39 +0800, Ken Sandney wrote: > Sorry I just don't know how to resolve this :) http://wiki.apache.org/cassandra/FAQ#slows_down_after_lotso_inserts > On Tue, Apr 20, 2010 at 10:37 AM, Jonathan Ellis > wrote: > > > Ken, I linked you to the FAQ answering your problem in the

Re: 0.6.1 insert 1B rows, crashed when using py_stress

2010-04-20 Thread Benjamin Black
Not so reasonable, given what you are trying to accomplish. A 1GB heap (on a 2GB machine) is fine for development and functional testing, but I wouldn't try to deal with the number of rows you are describing with less than 8GB/node with 4-6GB heap. b On Mon, Apr 19, 2010 at 7:32 PM, Ken Sandney
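A minimal sketch of what that recommendation could look like in cassandra.in.sh on an 8GB node; the values are illustrative and the remaining default flags are omitted here:

# Sketch only: raise the heap toward the 4-6GB suggested above.
# The other default GC flags shipped in cassandra.in.sh are left unchanged.
JVM_OPTS=" \
        -ea \
        -Xms4G \
        -Xmx6G \
        -XX:TargetSurvivorRatio=90 \
        -XX:+AggressiveOpts \
        -XX:+UseParNewGC \
        -XX:+UseConcMarkSweepGC"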

Re: 0.6.1 insert 1B rows, crashed when using py_stress

2010-04-19 Thread Schubert Zhang
Jonathan, Thanks. Yes, the scale of the GC graph is different from the throughput one. I will do more checking and tuning in our next test immediately. On Tue, Apr 20, 2010 at 10:39 AM, Ken Sandney wrote: > Sorry I just don't know how to resolve this :) > > > On Tue, Apr 20, 2010 at 10:37 AM, Jonathan

Re: 0.6.1 insert 1B rows, crashed when using py_stress

2010-04-19 Thread Ken Sandney
Sorry I just don't know how to resolve this :) On Tue, Apr 20, 2010 at 10:37 AM, Jonathan Ellis wrote: > Ken, I linked you to the FAQ answering your problem in the first reply > you got. Please don't hijack my replies to other people; that's rude. > > On Mon, Apr 19, 2010 at 9:32 PM, Ken Sandne

Re: 0.6.1 insert 1B rows, crashed when using py_stress

2010-04-19 Thread Jonathan Ellis
Ken, I linked you to the FAQ answering your problem in the first reply you got. Please don't hijack my replies to other people; that's rude. On Mon, Apr 19, 2010 at 9:32 PM, Ken Sandney wrote: > I am just running Cassandra on normal boxes, and granting 1GB of the total 2GB to > Cassandra is reasonable

Re: 0.6.1 insert 1B rows, crashed when using py_stress

2010-04-19 Thread Ken Sandney
I am just running Cassandra on normal boxes, and granting 1GB of the total 2GB to Cassandra is reasonable, I think. Can this problem be resolved by tuning the thresholds described on this page, or just by waiting for the 0.7 release as Brandon mentioned
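A minimal sketch of where those thresholds live in 0.6, assuming the storage-conf.xml element names below (verify them against your own config before changing anything):

# Sketch only: the 0.6-era memtable thresholds the FAQ entry refers to.
grep -E 'Memtable(ThroughputInMB|OperationsInMillions|FlushAfterMinutes)' \
    conf/storage-conf.xml
# Lowering them flushes memtables sooner, trading write throughput for a
# smaller heap footprint, e.g.:
#   <MemtableThroughputInMB>32</MemtableThroughputInMB>
#   <MemtableOperationsInMillions>0.1</MemtableOperationsInMillions>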

Re: 0.6.1 insert 1B rows, crashed when using py_stress

2010-04-19 Thread Jonathan Ellis
Schubert, I don't know if you saw this in the other thread referencing your slides: It looks like the slowdown doesn't hit until after several GCs, although it's hard to tell since the scale is different on the GC graph and the insert throughput ones. Perhaps this is compaction kicking in, not GC
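A minimal sketch of one way to tell the two apart, assuming standard HotSpot GC-logging flags and that compaction activity shows up in system.log (paths are illustrative):

# Sketch only: timestamp GC pauses, then compare them against compaction entries.
JVM_OPTS="$JVM_OPTS -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:/var/log/cassandra/gc.log"
grep -i compact /var/log/cassandra/system.log   # compaction log lines; path may differ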

Re: 0.6.1 insert 1B rows, crashed when using py_stress

2010-04-19 Thread Brandon Williams
On Mon, Apr 19, 2010 at 9:06 PM, Schubert Zhang wrote: > > 2. Reject the request when short of resources, instead of throwing an OOME and > exiting (crash). > Right, that is the crux of the problem. It will be addressed here: https://issues.apache.org/jira/browse/CASSANDRA-685 -Brandon

Re: 0.6.1 insert 1B rows, crashed when using py_stress

2010-04-19 Thread Schubert Zhang
-Xmx1G is too small. In my cluster there is 8GB RAM on each node, and I grant 6GB to Cassandra. Please see my test @ http://www.slideshare.net/schubertzhang/presentations Slide 5 –Memory, GC..., always the bottleneck and big issue of Java-based infrastructure software! References: –http://wiki.apach

Re: 0.6.1 insert 1B rows, crashed when using py_stress

2010-04-19 Thread Ken Sandney
Here are my JVM options from cassandra.in.sh; they are the defaults, I didn't modify them: # Arguments to pass to the JVM JVM_OPTS=" \ -ea \ -Xms128M \ -Xmx1G \ -XX:TargetSurvivorRatio=90 \ -XX:+AggressiveOpts \ -XX:+UseParNewGC \ -XX:+UseConcMar

Re: 0.6.1 insert 1B rows, crashed when using py_stress

2010-04-19 Thread Schubert Zhang
Seems you should configure a larger JVM heap. On Tue, Apr 20, 2010 at 9:32 AM, Schubert Zhang wrote: > Please also post your JVM heap and GC options, i.e. the settings in > cassandra.in.sh > And what about your node hardware? > > On Tue, Apr 20, 2010 at 9:22 AM, Ken Sandney wrote: > >> Hi >> I am doing

Re: 0.6.1 insert 1B rows, crashed when using py_stress

2010-04-19 Thread Schubert Zhang
Please also post your JVM heap and GC options, i.e. the settings in cassandra.in.sh. And what about your node hardware? On Tue, Apr 20, 2010 at 9:22 AM, Ken Sandney wrote: > Hi > I am doing an insert test with 9 nodes, the command: > >> stress.py -n 10 -t 1000 -c 10 -o insert -i 5 -d >> 10.0.

Re: 0.6.1 insert 1B rows, crashed when using py_stress

2010-04-19 Thread Jonathan Ellis
http://wiki.apache.org/cassandra/FAQ#slows_down_after_lotso_inserts On Mon, Apr 19, 2010 at 8:22 PM, Ken Sandney wrote: > Hi > I am doing an insert test with 9 nodes, the command: >> >> stress.py -n 10 -t 1000 -c 10 -o insert -i 5 -d >> 10.0.0.1,10.0.0.2. > > and 5 of the 9 nodes were

0.6.1 insert 1B rows, crashed when using py_stress

2010-04-19 Thread Ken Sandney
Hi I am doing an insert test with 9 nodes, the command: > stress.py -n 10 -t 1000 -c 10 -o insert -i 5 -d > 10.0.0.1,10.0.0.2. and 5 of the 9 nodes crashed; only about 6'500'000 rows were inserted. I checked the system.log and it seems the reason is 'out of memory'. I don't know if th
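A rough annotation of that command, assuming the usual contrib/py_stress flag meanings (confirm with stress.py --help; the -n value below is hypothetical since the original was cut off):

# Flag meanings are assumptions -- verify against contrib/py_stress/stress.py --help.
#   -o insert            operation to perform (insert rows)
#   -n <count>           total number of keys to insert
#   -t 1000              number of client threads
#   -c 10                columns written per key
#   -i 5                 progress-report interval in seconds
#   -d host1,host2,...   comma-separated list of node addresses to contact
stress.py -n 1000000 -t 1000 -c 10 -o insert -i 5 -d 10.0.0.1,10.0.0.2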