Can you upgrade to the official 0.8 release and try again with logging set to
DEBUG?
Cheers
-
Aaron Morton
Freelance Cassandra Developer
@aaronmorton
http://www.thelastpickle.com
On 6 Jun 2011, at 23:41, Mario Micklisch wrote:
:-)
There are several data files:
# ls -al *-Data.db
-rw-r--r-- 1 cassandra cassandra 53785327 2011-06-05 14:44
CFTest-g-21-Data.db
-rw-r--r-- 1 cassandra cassandra 56474656 2011-06-05 18:04
CFTest-g-38-Data.db
-rw-r--r-- 1 cassandra cassandra 21705904 2011-06-05 20:02
CFTest-g-45-Data.db
-rw
Oops, I misread "150 GB" in one of your earlier emails as "150 MB", so forget
what I said before. You have loads of free space :)
How many files do you have in your data directory? If it's 1, then that log
message was due to a small bug that has been fixed.
Cheers
-
Aaron Morton
Freelance Cassandra Developer
@aaronmorton
http://www.thelastpickle.com
I found a patch for the PHP extension here:
https://issues.apache.org/jira/browse/THRIFT-1067
… this seemed to fix the issue. Thank you Jonathan and Aaron for taking the
time to provide me with some help!
Regarding the compaction, I would still love to hear your feedback on how to
configure Cassandra
I tracked down the timestamp submission and everything was fine within the
PHP libraries.
The Thrift PHP extension, however, seems to have an overflow: it was setting
timestamps with negative values (-1242277493). I disabled the PHP extension
and as a result I now got correct timestamps.
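A quick way to see how a negative value like that can appear (a sketch, assuming 64-bit PHP and microsecond timestamps, which is what the client libraries normally generate): truncating a 64-bit microsecond timestamp to a signed 32-bit integer wraps it into the negative range.

<?php
// Sketch: how a microsecond timestamp turns negative when it is squeezed
// into a signed 32-bit integer (assumes 64-bit PHP).
$timestamp_us = (int) floor(microtime(true) * 1000000);  // e.g. 1307373674057000

// Keep only the low 32 bits, then reinterpret them as a signed 32-bit value,
// which is effectively what a broken i64 serializer does.
$wrapped = $timestamp_us & 0xFFFFFFFF;
if ($wrapped >= 0x80000000) {
    $wrapped -= 0x100000000;
}

printf("64-bit timestamp:  %d\n", $timestamp_us);
printf("after 32-bit wrap: %d\n", $wrapped);  // negative, matching the symptom above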
Thanks for the feedback, Aaron!
The schema of the CF is the default: I just defined the name and left the rest
at the defaults. Have a look:
Keyspace: TestKS
        Read Count: 65
        Read Latency: 657.8047076923076 ms.
        Write Count: 10756
        Write Latency: 0.03237039791744143 ms.
        Pending Tasks: 0
                Column Family: CFTest
                SSTa
It is rarely a good idea to let the data disk get too far over 50% utilisation.
With so little free space the compaction process will have trouble running; see
http://wiki.apache.org/cassandra/MemtableSSTable
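As a rough headroom check (a sketch; the data directory path below is the default install location and only an assumption), compare the size of the live -Data.db files against the free space left on that volume:

<?php
// Rough headroom check: compaction rewrites the SSTables it merges before the
// old files are deleted, so keep free space at least comparable to live data.
// The path is the default data directory and is only an assumption.
$data_dir = '/var/lib/cassandra/data/TestKS';
$live = 0;
foreach (glob($data_dir . '/CFTest-g-*-Data.db') as $file) {
    $live += filesize($file);
}
$free = disk_free_space($data_dir);
printf("live SSTables: %.1f MB, free: %.1f MB\n", $live / 1048576, $free / 1048576);
if ($free < $live) {
    echo "Less free space than live data; compaction may not have room to run.\n";
}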
As you are on the RC1 I would just drop the data and start again. If you need
to keep it you
Yes, checked the log file, no errors there.
With debug logging it confirms that the write is received, and the write is
also in the commitlog.
DEBUG 22:00:14,057 insert writing local RowMutation(keyspace='TestKS',
key='44656661756c747c6532356231342d373937392d313165302d613663382d31323331336330616334
Did you check the server log for errors?
See if the problem persists after running nodetool compact. If it
does, use sstable2json to export the row in question.
On Sat, Jun 4, 2011 at 3:21 PM, Mario Micklisch wrote:
Thank you for the reply! I am not trying to read a row with too many columns
into memory; the lock I am experiencing is write-related only and is happening
for everything added prior to an unknown event.
I just ran into the same thing again and the column count is maybe not the
real issue here (as I
It sounds like you're trying to read entire rows at once. Past a
certain point (depending on your heap size) you won't be able to do
that, you need to "page" through them N columns at a time.
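The paging loop looks roughly like this (a sketch: fetch_columns() is a hypothetical stand-in for whatever slice call your client exposes, taking a start column and a column count and returning columns ordered by name):

<?php
// Page through a wide row N columns at a time.
// fetch_columns($key, $start, $count) is a HYPOTHETICAL helper standing in for
// a slice query; it must return an array of columnName => value ordered by
// column name, starting at $start (inclusive), with at most $count entries.
function page_row($key, $page_size, $fetch_columns)
{
    $all   = array();
    $start = '';                    // empty start column = beginning of the row
    $first = true;
    while (true) {
        $columns = $fetch_columns($key, $start, $page_size);
        $fetched = count($columns);
        if (!$first) {
            array_shift($columns);  // the start column is inclusive, drop the repeat
        }
        foreach ($columns as $name => $value) {
            $all[$name] = $value;
            $start = $name;         // the next page resumes from the last column seen
        }
        if ($fetched < $page_size) {
            break;                  // a short page means the row is exhausted
        }
        $first = false;
    }
    return $all;
}

In the raw Thrift API each iteration corresponds to a get_slice call whose SliceRange carries the start column and the count.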
On Sat, Jun 4, 2011 at 12:27 PM, Mario Micklisch wrote:
Hello there!
I have run into a strange problem several times now and I wonder if someone
here has a solution for me:
For some of my data I want to keep track of all the IDs I have used. To do
that, I am putting the IDs as columns into rows.
At first I wanted to put all IDs into one row (because
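For reference, the layout being described, as a sketch with made-up row key and IDs: each ID becomes a column name in the row, and the column value stays empty.

<?php
// Sketch of the layout described above: one row whose column names are the
// used IDs; the values carry no data. Row key and IDs are made up.
$used_ids = array(
    'used-ids' => array(
        'id-0001' => '',
        'id-0002' => '',
    ),
);
// Recording another ID is just one more column in that row:
$used_ids['used-ids']['id-0003'] = '';
echo count($used_ids['used-ids']) . " IDs tracked\n";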