Hi!
Just wondering why this doesn't already exist: wouldn't it make sense to have
decorator data types that transparently compress (gzip, snappy) other data
types (esp. UTF8Type, AsciiType)?
-tcn
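A minimal sketch of the transparent compression idea above, assuming a decorator that gzips the underlying type's bytes on write and inflates them on read. The class and method names here are invented for illustration, not Cassandra API:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

// Hypothetical sketch: a "decorating" codec that gzips the bytes of an
// underlying text type before storage and inflates them on read.
public class GzipCodec {
    public static byte[] compress(byte[] raw) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(buf)) {
            gz.write(raw);
        }
        return buf.toByteArray();
    }

    public static byte[] decompress(byte[] packed) throws IOException {
        try (GZIPInputStream gz =
                 new GZIPInputStream(new ByteArrayInputStream(packed));
             ByteArrayOutputStream buf = new ByteArrayOutputStream()) {
            byte[] chunk = new byte[4096];
            int n;
            while ((n = gz.read(chunk)) != -1) buf.write(chunk, 0, n);
            return buf.toByteArray();
        }
    }

    public static void main(String[] args) throws IOException {
        String text = "some long UTF8 column value ...";
        byte[] packed = compress(text.getBytes(StandardCharsets.UTF_8));
        String back = new String(decompress(packed), StandardCharsets.UTF_8);
        System.out.println(text.equals(back)); // round-trip is lossless
    }
}
```

java.util.zip ships with the JDK, so a decorator like this would add no dependencies; the trade-off is CPU per read/write in exchange for smaller on-disk data.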
On 6/16/11 10:12, Timo Nentwig wrote:
On 6/16/11 10:06, Sasha Dolgy wrote:
The JSON you are showing below is an export from cassandra?
Yes. Just posted the solution:
https://issues.apache.org/jira/browse/CASSANDRA-2780?focusedCommentId=13050274
Guess this could simply be done in the quote() method.
{ "74657374": [["data", "{"foo":"bar"}", 1308209845388000]] }
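For illustration, a quote() along the lines suggested above would have to escape backslashes, double quotes and control characters per the JSON spec before wrapping the value. This is a hypothetical standalone sketch, not Cassandra's actual implementation:

```java
// Hypothetical sketch of what a JSON quote() helper needs to do: escape
// backslashes, quotes and control characters before wrapping a value in
// quotes, so embedded JSON survives the export intact.
public class JsonQuote {
    public static String quote(String s) {
        StringBuilder out = new StringBuilder("\"");
        for (char c : s.toCharArray()) {
            switch (c) {
                case '"':  out.append("\\\""); break;
                case '\\': out.append("\\\\"); break;
                case '\n': out.append("\\n");  break;
                case '\r': out.append("\\r");  break;
                case '\t': out.append("\\t");  break;
                default:
                    if (c < 0x20) out.append(String.format("\\u%04x", (int) c));
                    else out.append(c);
            }
        }
        return out.append('"').toString();
    }

    public static void main(String[] args) {
        // the embedded-JSON value from the example above
        System.out.println(quote("{\"foo\":\"bar\"}"));
    }
}
```

With this escaping, the exported column value becomes a valid JSON string that a parser can round-trip.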
Does this work?
{
74657374: [["data", {foo:"bar"}, 1308209845388000]]
}
-sd
On Thu, Jun 16, 2011 at 9:
On 6/15/11 17:41, Timo Nentwig wrote:
(json can likely be boiled down even more...)
Any JSON (well, probably anything with quotes...) breaks it:
{
"74657374": [["data", "{"foo":"bar"}", 1308209845388000]]
}
[default@foo] set transactions[t
Hi!
Couldn't google anybody having yet experienced this, so I do (0.8):
{
"foo":{
"foo":{
"foo":"bar",
"foo":"bar",
"foo":"bar",
"foo":"",
"foo":"bar",
"foo":"bar",
"id":123456
} },
"foo":null
}
(json can likely be boiled down even more...)
On 6/5/11 16:26, Timo Nentwig wrote:
$ CLASSPATH=~/sqlshell/lib/ ~/sqlshell/bin/sqlshell
org.apache.cassandra.cql.jdbc.CassandraDriver,jdbc:cassandra:foo/bar@localhost:9160/ks
2011-06-05 16:21:54,452 INFO [main] org.apache.cassandra.cql.jdbc.Connection -
Connected to localhost:9160
2011-06-05
$ CLASSPATH=~/sqlshell/lib/ ~/sqlshell/bin/sqlshell
org.apache.cassandra.cql.jdbc.CassandraDriver,jdbc:cassandra:foo/bar@localhost:9160/ks
2011-06-05 16:21:54,452 INFO [main] org.apache.cassandra.cql.jdbc.Connection -
Connected to localhost:9160
2011-06-05 16:21:54,517 ERROR [main]
org.apache
On 5/25/11 14:08, Timo Nentwig wrote:
On 5/25/11 13:45, Watanabe Maki wrote:
I think I don't get your situation yet, but if you use RF=2, CL=QUORUM is
identical with CL=ALL.
Does it explain your experience?
If it was CL=ALL, it would explain it; however, it does not explain why it works
On 5/25/11 13:45, Watanabe Maki wrote:
I think I don't get your situation yet, but if you use RF=2, CL=QUORUM is
identical with CL=ALL.
Does it explain your experience?
If it was CL=ALL, it would explain it; however, it does not explain why it works
when
I decommission one node. RF=2 means that
Hi!
5 nodes, replication factor of 2, fifth node down.
As long as I write a single column with hector or pelops, it works. With 2
columns it fails
because there are supposedly too few servers to reach quorum. Confusing. If I
decommission the fifth
node with nodetool, quorum works again and I can s
On Apr 27, 2011, at 16:59, Timo Nentwig wrote:
> On Apr 27, 2011, at 16:52, Edward Capriolo wrote:
>
>> The method being private is not a deal-breaker. While not good software
>> engineering practice, you can copy and paste the code and rename the
>> class SSTable2MyJson
On Apr 27, 2011, at 17:10, Edward Capriolo wrote:
> I would think most people who watch dev watch this list.
>
> http://wiki.apache.org/cassandra/HowToContribute
So, here it is: https://issues.apache.org/jira/browse/CASSANDRA-2582
On Apr 27, 2011, at 16:52, Edward Capriolo wrote:
> The method being private is not a deal-breaker. While not good software
> engineering practice, you can copy and paste the code and rename the
> class SSTable2MyJson or whatever.
Sure I can do this but I'd like to have it just available in the d
On Apr 27, 2011, at 15:58, Edward Capriolo wrote:
> Hacking a separate copy of SSTable2json is trivial. Just look for the
> section of the code that writes the data and change what it writes. If
I did. The method's private...
> you can make it a knob --nottl then it could be included in Cassand
Hi!
What about a simple option for sstable2json to not print out expiration
TTL+LocalDeletionTime (maybe even ignore isMarkedForDelete)? I want to move old
data from a live cluster (with TTL) to an archive cluster (->data does not
expire there).
BTW is there a smarter way to do this? Actually
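A rough sketch of what such a no-TTL option could do, assuming the exported expiring-column shape [name, value, timestamp, "e", ttl, localExpirationTime]; the names and data below are illustrative, not the actual sstable2json code:

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of the proposed option: an exported column carrying
// expiration metadata is trimmed back to a plain [name, value, timestamp]
// triple, so the archive cluster re-imports it as non-expiring data.
public class StripTtl {
    static List<Object> stripTtl(List<Object> column) {
        // [name, value, timestamp] is the plain form; anything longer
        // carries expiration metadata we want to drop.
        return column.size() > 3 ? column.subList(0, 3) : column;
    }

    public static void main(String[] args) {
        List<Object> expiring = Arrays.asList(
            "data", "payload", 1308209845388000L, "e", 86400, 1308296245);
        System.out.println(stripTtl(expiring));
    }
}
```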
Did somebody try -XX:+UseCompressedStrings with cassandra? Sounds very
promising and reasonable.
On Feb 8, 2011, at 21:23, Aaron Morton wrote:
>>> 1) Is data stored in some external data structure, or is it stored in an
>>> actual Cassandra table, as columns within column families?
Yes. Own files next to the CF files, and own IndexColumnFamilies nodes in JMX.
And they are built asynchronousl
On Feb 8, 2011, at 13:41, Stephen Connolly wrote:
> On 8 February 2011 10:38, Timo Nentwig wrote:
>> This is not what it's supposed to be like, is it?
Looks alright:
>> [default@foo] get foo[page-field];
>> => (super_column=20110208,
>> (column=82f4c650
This is not what it's supposed to be like, is it?
[default@foo] get foo[page-field];
=> (super_column=20110208,
(column=82f4c650-2d53-11e0-a08b-58b035f3f60d, value=msg1,
timestamp=1297159430471000)
(column=82f4c650-2d53-11e0-a08b-58b035f3
On Jan 21, 2011, at 16:46, Maxim Potekhin wrote:
> But Timo, this is even more mysterious! If both conditions are met, at least
> something must be returned in the second query. Have you tried this in CLI?
> That would allow you to at least alleviate client concerns.
I did this on the CLI only s
On Jan 21, 2011, at 13:55, buddhasystem wrote:
> if I use multiple secondary indexes in the query, what will Cassandra do?
> Some examples say it will index on first EQ and then loop on others. Does it
> ever do a proper index product to avoid inner loops?
Just asked the same question on the hect
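For illustration, the "index on first EQ and then loop on the others" strategy the question refers to amounts to fetching candidate rows via one index and applying the remaining predicates as a filter loop. The data below is made up:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of index-then-filter: one index narrows the candidates,
// the remaining predicates are checked row by row (the "inner loop").
public class IndexThenFilter {
    static Map<String, String> row(String state, String year) {
        Map<String, String> m = new HashMap<>();
        m.put("state", state);
        m.put("birth_year", year);
        return m;
    }

    public static void main(String[] args) {
        // candidates returned by the (first) index on state = "UT"
        List<Map<String, String>> candidates = Arrays.asList(
            row("UT", "1970"), row("UT", "1980"), row("UT", "1970"));

        // remaining predicate applied as a loop, not via a second index
        List<Map<String, String>> hits = new ArrayList<>();
        for (Map<String, String> r : candidates) {
            if (r.get("birth_year").equals("1970")) hits.add(r);
        }
        System.out.println(hits.size());
    }
}
```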
On Jan 18, 2011, at 18:53, Nate McCall wrote:
> When doing mixed types on slicing operations, you should use
> ByteArraySerializer and handle the conversions by hand.
>
> We have an issue open for making this more graceful.
Pls. have a look at
http://groups.google.com/group/hector-dev/browse_t
On Jan 18, 2011, at 12:05, Timo Nentwig wrote:
>
> On Jan 18, 2011, at 12:02, Aaron Morton wrote:
>
>> Does wrapping foo in single quotes help?
>
> No.
>
>> Also, does this help
>> http://www.datastax.com/blog/whats-new-cassandra-07-secondary-indexes
>
", 1970L);
indexedSlicesQuery.addEqualsExpression("state", "UT");
indexedSlicesQuery.setColumnFamily("users");
indexedSlicesQuery.setStartKey("");
QueryResult> result =
indexedSlicesQuery.execute();
> Aaron
>
> On 18/01/2011, at 11:54 PM, Timo
I put a secondary index on rc (IntegerType) and user_agent (AsciiType).
Don't understand this behaviour at all, can somebody explain?
[default@tracking] get crawler where user_agent=foo and rc=200;
0 Row Returned.
[default@tracking] get crawler where rc=200 and user_agent=foo;
---
Hi!
Idea: instead of simply deleting data when its TTL has passed, why not make the
logic that's supposed to be executed at that point in time a pluggable
strategy? I could think e.g. of a strategy that moves old data to an archive DB.
-tcn
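A hypothetical sketch of the pluggable-expiration idea, with invented names; the point is only that "delete" and "archive" become interchangeable strategies chosen by configuration:

```java
// Invented names for illustration; not Cassandra API.
public class ExpirationDemo {
    // Called when a column's TTL has elapsed.
    interface ExpirationStrategy {
        String onExpired(String key, String value);
    }

    // Today's behaviour: just drop the data.
    static class DeleteOnExpire implements ExpirationStrategy {
        public String onExpired(String key, String value) {
            return "deleted " + key;
        }
    }

    // Alternative strategy: ship expired data to an archive DB instead.
    static class ArchiveOnExpire implements ExpirationStrategy {
        public String onExpired(String key, String value) {
            // write (key, value) to the archive cluster here
            return "archived " + key;
        }
    }

    public static void main(String[] args) {
        ExpirationStrategy strategy = new ArchiveOnExpire(); // pluggable
        System.out.println(strategy.onExpired("row1", "old data"));
    }
}
```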
On Dec 27, 2010, at 14:34, Timo Nentwig wrote:
> On Dec 24, 2010, at 14:33, Timo Nentwig wrote:
>> Any advice what to do with it?
>
> So, to continue this monologue: I reduced the memtable size for that CF and
> then, by means of the MBeans, figured out that the secondary index
On Dec 24, 2010, at 14:33, Timo Nentwig wrote:
> Any advice what to do with it?
So, to continue this monologue: I reduced the memtable size for that CF and
then, by means of the MBeans, figured out that the secondary index is a CF as
well, which presumably also holds up to 3 memtables in mem
On Dec 23, 2010, at 12:34, Timo Nentwig wrote:
> On Dec 23, 2010, at 9:34, Timo Nentwig wrote:
>
>> I was about to add a secondary index (which apparently failed) to existing
>> data. When I restarted the node it crashed (!) with:
>
> It crashed because it ran out
On Dec 23, 2010, at 9:34, Timo Nentwig wrote:
> I was about to add a secondary index (which apparently failed) to existing
> data. When I restarted the node it crashed (!) with:
It crashed because it ran out of heap space (2G). So I increased to 3.5G but
after a while it's caught
I was about to add a secondary index (which apparently failed) to existing
data. When I restarted the node it crashed (!) with:
INFO 09:21:36,510 Opening /var/lib/cassandra/data/test/tracking.6b6579-tmp-e-1
ERROR 09:21:36,512 Exception encountered during startup.
java.lang.ArithmeticException: /
On Dec 22, 2010, at 16:20, Peter Schuller wrote:
> In any case: Monitoring disk-space is very very important.
So, why doesn't Cassandra monitor it itself and stop accepting writes if it
runs out of space?
On Dec 22, 2010, at 16:20, Peter Schuller wrote:
>> And the data could be more evenly balanced, obviously. However the node
>> fails to start up due to lacking disk space (instead of starting up
>> and denying further writes it appears to try to process the [6.6G!] commit
>> logs). So,
So, this is my ring, the third node ran out of disk space:
Address        Status State  Load   Owns  Token
                                          139315361777093290765734121398073449298
192.168.68.76  Up     Normal 37.83
On Dec 14, 2010, at 23:23, Timo Nentwig wrote:
> On Dec 14, 2010, at 21:07, Peter Schuller wrote:
>
>> In that case, based on the stack trace, I wonder if you're hitting
>> what I was hitting just yesterday/earlier today:
>>
>> https://issues.ap
On Dec 14, 2010, at 21:07, Peter Schuller wrote:
> In that case, based on the stack trace, I wonder if you're hitting
> what I was hitting just yesterday/earlier today:
>
> https://issues.apache.org/jira/browse/CASSANDRA-1860
>
> Which is suspected (currently being tested that it's gone with
On Dec 14, 2010, at 19:38, Peter Schuller wrote:
> For debugging purposes you may want to switch Cassandra to "standard"
> IO mode instead of mmap. This will have a performance-penalty, but the
> virtual/resident sizes won't be polluted with mmap():ed data.
Already did so. It *seems* to run more
On Dec 14, 2010, at 19:45, Peter Schuller wrote:
>> java.lang.OutOfMemoryError: Java heap space
>>at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:39)
>>at java.nio.ByteBuffer.allocate(ByteBuffer.java:312)
>>at
>> org.apache.cassandra.utils.FBUtilities.readByteArray(FBUtil
",
name))
.addInsertion(id, "tracking", HFactory.createStringColumn("value",
value))
.execute();
Thinking of CASSANDRA-475 (this was also OOM, IIRC), is it possibly a
hector/thrift bug?
> On Tue, Dec 14, 2010 at 9:15 AM, Timo Nentwig
> wrote:
>&
On Dec 14, 2010, at 15:31, Timo Nentwig wrote:
> On Dec 14, 2010, at 14:41, Jonathan Ellis wrote:
>
>> This is "A row has grown too large" section from that troubleshooting guide.
>
> Why? This is what a typical "row" (?) looks like:
>
t in 0.7 docs. Didn't
find any related WARNs for some default value in the log also.
> On Tue, Dec 14, 2010 at 5:27 AM, Timo Nentwig
> wrote:
>
> On Dec 12, 2010, at 17:21, Jonathan Ellis wrote:
>
> > http://www.riptano.com/docs/0.6/troubleshooting/index#nodes-are-dyi
[0x0007fae0,
0x0007fcbc2000, 0x0008)
>
> On Sun, Dec 12, 2010 at 9:52 AM, Timo Nentwig
> wrote:
>
> On Dec 10, 2010, at 19:37, Peter Schuller wrote:
>
> > To cargo cult it: Are you running a modern JVM? (Not e.g. openjdk b17
> > in lenn
On Dec 10, 2010, at 19:37, Peter Schuller wrote:
> To cargo cult it: Are you running a modern JVM? (Not e.g. openjdk b17
> in lenny or some such.) If it is a JVM issue, ensuring you're using a
> reasonably recent JVM is probably much easier than to start tracking
> it down...
I had OOM problems
> >>> you have
> >>> 2 replicas. And since quorum is also 2 with that replication factor,
> >>> you cannot lose
> >>> a node, otherwise some query will end up as UnavailableException.
> >>>
> >>> Again, this is not related to the total number o
>>> Again, this is not related to the total number of nodes. Even with 200
>>> nodes, if
>>> you use RF=2, you will have some query that fail (altough much less that
>>> what
>>> you are probably seeing).
>>>
>>> On Thu, Dec 9, 2010 at
> you are probably seeing).
>
> On Thu, Dec 9, 2010 at 5:00 PM, Timo Nentwig wrote:
> >
> > On Dec 9, 2010, at 16:50, Daniel Lundin wrote:
> >
> >> Quorum is really only useful when RF > 2, since for a quorum to
> >> succeed RF/2+1 replicas must
LL yield the same
> result.
>
> /d
>
> On Thu, Dec 9, 2010 at 4:40 PM, Timo Nentwig wrote:
>> Hi!
>>
>> I've 3 servers running (0.7rc1) with a replication_factor of 2 and use
>> quorum for writes. But when I shut down one of them UnavailableExceptions
>
Hi!
I've 3 servers running (0.7rc1) with a replication_factor of 2 and use quorum
for writes. But when I shut down one of them, UnavailableExceptions are thrown.
Why is that? Isn't it the point of quorum and a fault-tolerant DB that it
continues with the remaining 2 nodes and redistributes the
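The arithmetic behind these failures, as explained further up the thread: a quorum needs RF/2+1 replicas, so with RF=2 that is 2 of 2, and QUORUM degenerates to ALL. A minimal sketch of the calculation:

```java
// Minimal sketch of the quorum arithmetic discussed in this thread.
public class QuorumMath {
    static int quorum(int rf) { return rf / 2 + 1; }

    public static void main(String[] args) {
        // RF=2: quorum is 2 -> losing either replica blocks QUORUM ops,
        // which is why QUORUM behaves like ALL at this replication factor.
        System.out.println(quorum(2));
        // RF=3: quorum is 2 -> one replica may be down and QUORUM still works.
        System.out.println(quorum(3));
    }
}
```

Note the total node count is irrelevant: each key lives on RF replicas, and only their availability matters for that key's quorum.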