On Wed, May 27, 2015 at 5:10 PM, Jason Unovitch
wrote:
> Simple and quick question, can anyone point me to where the Cassandra
> 1.2.x series EOL date was announced? I see archived mailing list
> threads for 1.2.19 mentioning it was going to be the last release and
> I see CVE-2015-0225 mention
I figured it out. Another process on that machine was leaking threads.
All is well!
Thanks guys!
Oleg
On 2013-12-16 13:48:39, Maciej Miklas said:
the cassandra-env.sh has the option
JVM_OPTS="$JVM_OPTS -Xss180k"
It will give this error if you start Cassandra with Java 7, so increase the
value, or remove the option.
Try using jstack to see if there are a lot of threads there.
Are you using vNodes and Hadoop?
https://issues.apache.org/jira/browse/CASSANDRA-6169
Cheers
-
Aaron Morton
New Zealand
@aaronmorton
Co-Founder & Principal Consultant
Apache Cassandra Consulting
http://www.thelas
the cassandra-env.sh has the option
JVM_OPTS="$JVM_OPTS -Xss180k"
It will give this error if you start Cassandra with Java 7, so increase the
value, or remove the option.
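A minimal sketch of the fix in cassandra-env.sh; 256k is an assumed value here, not an official recommendation, so tune it for your JVM and workload:

```shell
# cassandra-env.sh: raise the per-thread stack size.
# Java 7 needs more than -Xss180k; 256k is a commonly used value
# (an assumption, pick what suits your workload). Removing the
# option entirely falls back to the JVM default instead.
JVM_OPTS="$JVM_OPTS -Xss256k"
```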
Regards,
Maciej
On Mon, Dec 16, 2013 at 2:37 PM, srmore wrote:
> What is your thread stack size (xss) ? try increasing that, that
What is your thread stack size (-Xss)? Try increasing that; that could
help. Sometimes the limitation is imposed by the hosting provider (e.g.
Amazon EC2).
Thanks,
Sandeep
On Mon, Dec 16, 2013 at 6:53 AM, Oleg Dulin wrote:
> Hi guys!
>
> I believe my limits settings are correct. Here is the o
On Mon, Aug 26, 2013 at 5:39 AM, Denis Kot wrote:
> Please help. We spent almost 3 days trying to fix it with no luck.
>
Did you ultimately succeed in this task?
=Rob
On Mon, Aug 26, 2013 at 5:39 AM, Denis Kot wrote:
> 2) Stop gossip
>
> 3) Stop thrift
> 4) Drain
> 5) Stop Cassandra
> 6) Move all data to EBS (we are using ephemeral volumes for
> data)
> 7) Stop / Start instance
> 8) Move data back
> 9) Start Cassandra
>
10) stop cassandra
11) set auto_bootstrap: false
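Steps 2–9 above can be sketched as a shell script; the data paths are made-up examples, and the instance stop/start in step 7 happens outside the script:

```shell
#!/bin/sh
# Sketch of the move-to-EBS restart procedure (paths are assumptions).
nodetool disablegossip                        # 2) stop gossip
nodetool disablethrift                        # 3) stop thrift
nodetool drain                                # 4) flush memtables, stop accepting writes
sudo service cassandra stop                   # 5) stop Cassandra
cp -a /mnt/ephemeral/cassandra /ebs/backup/   # 6) move data to EBS
# 7) stop/start the EC2 instance here
cp -a /ebs/backup/cassandra /mnt/ephemeral/   # 8) move data back
sudo service cassandra start                  # 9) start Cassandra
```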
> Are you sure that it is a good idea to estimate remainingKeys like that?
Since we don't want to scan every row to check overlap (and cause heavy
IO) automatically, the method can only do a best-effort type of
calculation.
In your case, try running a user-defined compaction on that sstable
file. It
Thanks for the answer.
It means that if we use RandomPartitioner it will be very difficult to find
an sstable without any overlap.
Let me give you an example from my test.
I have ~50 sstables in total and an sstable with droppable ratio 0.9. I use
a GUID for the key and only insert (no update/delete) s
> Can the method calculate non-overlapping keys as overlapping?
Yes.
And randomized keys don't matter here since sstables are sorted by
"token" calculated from key by your partitioner, and the method uses
sstable's min/max token to estimate overlap.
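That min/max check is just an interval-intersection test; a toy sketch with made-up token values:

```shell
# Two sstables can overlap only if their [min_token, max_token]
# ranges intersect. The token values below are made up for illustration.
a_min=10; a_max=50   # sstable A's token range
b_min=40; b_max=90   # sstable B's token range
if [ "$a_min" -le "$b_max" ] && [ "$b_min" -le "$a_max" ]; then
  echo "ranges intersect: A and B are treated as overlapping"
fi
```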
On Tue, May 21, 2013 at 4:43 PM, cem wrote:
> Than
Thank you very much for the swift answer.
I have one more question about the second part. Can the method calculate
non-overlapping keys as overlapping? I mean, it uses max and min tokens and
the column count. They can be very close to each other if random keys are used.
In my use case I generate a GUID fo
> Why does Cassandra single table compaction skips the keys that are in the
> other sstables?
because we don't want to resurrect deleted columns. Say, sstable A has
the column with timestamp 1, and sstable B has the same column, which was
deleted at timestamp 2. Then if we purge that column only from
Thanks Sylvain. BTW, what's the status of the java-driver? When will it be GA?
On Feb 12, 2013, at 1:19 AM, Sylvain Lebresne wrote:
> Yes, it's called atomic_batch_mutate and is used like batch_mutate. If you
> don't use thrift directly (which would qualify as a very good idea), you'll
> need
> *From:* Sylvain Lebresne [mailto:sylv...@datastax.com]
> *Sent:* Tuesday, February 12, 2013 10:19
> *To:* user@cassandra.apache.org
> *Subject:* Re: Cassandra 1.2 Atomic Batches and Thrift API
>
> Yes, it's called atomic_batch_mutate a
Subject: Re: Cassandra 1.2 Atomic Batches and Thrift API
Yes, it's called atomic_batch_mutate and is used like batch_mutate. If you
don't use thrift directly (which would qualify as a very good idea), you'll
need to refer to whatever client library you are using to see if 1) s
Yes, it's called atomic_batch_mutate and is used like batch_mutate. If you
don't use thrift directly (which would qualify as a very good idea), you'll
need to refer to whatever client library you are using to see if 1) support
for that new call has been added and 2) how to use it. If you are not su
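For comparison, the same atomicity is available through CQL3 as a logged batch; the table and values here are made up:

```sql
BEGIN BATCH
  INSERT INTO users (id, name) VALUES (1, 'alice');
  UPDATE users SET name = 'bob' WHERE id = 2;
APPLY BATCH;
```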
On Jan 17, 2013, at 11:54 AM, Sylvain Lebresne wrote:
> Now, one of the nodes dies, and when I bring it back up, it doesn't join the
> cluster again, but becomes its own node/cluster. I can't get it to join the
> cluster again, even after doing 'removenode' and clearing all data.
>
> That obvi
>
> Now, one of the nodes dies, and when I bring it back up, it doesn't join
> the cluster again, but becomes its own node/cluster. I can't get it to join
> the cluster again, even after doing 'removenode' and clearing all data.
>
That obviously should not have happened. That being said we have a f
> Any idea whether interoperability b/w Thrift and CQL should work properly in
> 1.2?
AFAIK the only incompatibility is CQL 3 between pre 1.2 and 1.2.
Cheers
-
Aaron Morton
Freelance Cassandra Developer
New Zealand
@aaronmorton
http://www.thelastpickle.com
On 16/01/2013, at 1:2
Hi,
Is there any document to follow in case I migrate the Cassandra Thrift API to
the 1.2 release? Is it backward compatible with previous releases?
While migrating Kundera to Cassandra 1.2, it complains about various data
types, giving weird errors like:
While connecting from cassandra-cli:
"
Excepti
On Mon, Jan 14, 2013 at 11:55 PM, aaron morton wrote:
> Sylvain,
> Out of interest if the select is…
>
> select * from test where interval = 7 and severity = 3 order by id desc
> ;
>
> Would the ordering be a no-op or would it still run?
>
Yes, as Shahryar said this is currently rejected b
Aaron
If you have ORDER BY with a column with a secondary index in the WHERE
clause, it fails with:
Bad Request: ORDER BY with 2ndary indexes is not supported.
Best Regards
Shahryar
On Mon, Jan 14, 2013 at 5:55 PM, aaron morton wrote:
> Sylvain,
> Out of interest if the select is…
>
> select
Sylvain,
Out of interest if the select is…
select * from test where interval = 7 and severity = 3 order by id desc ;
Would the ordering be a no-op or would it still run?
Or, more generally, does including an ORDER BY clause that matches the CLUSTERING
ORDER BY DDL clause incur ove
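For reference, a sketch of the two ways to get descending reads on the test table from this thread (severity omitted for simplicity; the second table name is made up):

```sql
-- Default clustering: id ascending; reverse at query time.
CREATE TABLE test (interval int, id text, body text,
                   PRIMARY KEY (interval, id));
SELECT * FROM test WHERE interval = 7 ORDER BY id DESC;

-- Or declare the reversed order in the DDL, making DESC the
-- natural read order:
CREATE TABLE test_desc (interval int, id text, body text,
                        PRIMARY KEY (interval, id))
  WITH CLUSTERING ORDER BY (id DESC);
```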
On Mon, Jan 14, 2013 at 5:04 PM, Shahryar Sedghi wrote:
> Can I always count on this order, or it may change in the future?
>
I would personally rely on it. I don't see any reason why we would change
that internally and besides I suspect you won't be the only one to rely on
it so we won't take
If you want the "row key", just query it (we prefer the term "partition
key" in CQL3 and that's the term you'll find in documents like
http://cassandra.apache.org/doc/cql3/CQL.html but it's the same thing) and
it'll be part of the return columns.
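Concretely, the partition key comes back like any other selected column; a hypothetical table:

```sql
CREATE TABLE events (pkey int, seq int, body text,
                     PRIMARY KEY (pkey, seq));
-- pkey appears in the result set as an ordinary column:
SELECT pkey, seq, body FROM events;
```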
I understand that, as I am able to fetch "partition
>
> How to fetch and populate "row key" from CqlRow api then?
If you want the "row key", just query it (we prefer the term "partition
key" in CQL3 and that's the term you'll find in documents like
http://cassandra.apache.org/doc/cql3/CQL.html but it's the same thing) and
it'll be part of the re
Is it documented somewhere? How to fetch and populate "row key" from
CqlRow api then?
-Vivek
On Mon, Jan 14, 2013 at 7:18 PM, Sylvain Lebresne wrote:
> On Mon, Jan 14, 2013 at 12:48 PM, Vivek Mishra wrote:
>
>> I am getting an issue where the "key" attribute, as byte[], is returned as
>> empty va
On Mon, Jan 14, 2013 at 12:48 PM, Vivek Mishra wrote:
> I am getting an issue where the "key" attribute, as byte[], is returned as
> an empty value.
>
We don't return this anymore as this doesn't make much sense for CQL3. Same
as in CqlMetadata we don't return a default_name_type and
default_value_type
Hi,
I am trying to migrate the Kundera Thrift API from 1.1.6 to 1.2, changing *
execute_cql_query* to *execute_cql3_query* (with ConsistencyLevel). I am
getting an issue where the "key" attribute, as byte[], is returned as an
empty value.
Though the same works with 1.1.6
-Vivek
On Mon, Jan 14, 2013
On Sun, Jan 13, 2013 at 5:59 PM, Shahryar Sedghi wrote:
> Since the new cql3 methods require ConsistencyLevel.xxx, does the consistency
> level at the query take precedence over the level at the API or not?
>
There is no "consistency level at the query level" anymore. That's one of
the breaking changes (no
I finally realized that the Thrift API has changed from 1.1 to 1.2, and my code
and modified JDBC driver work well except that I get an exception in the system
log when I close the connection. It looks like an old issue reappearing.
I have evaluated the new Java driver; it is easier and more practical than
Thanks Brian
It is not the same issue, and the stack trace is different. It is a simple test
case: I have 3 columns and I populate all of them with:
cqlsh:somedb> CREATE TABLE test(interval int,id text, body text, primary
key (interval, id));
cqlsh:somedb> insert into test (interval, id, body) val
I reported the issue here. You may be missing a component in your column name.
https://issues.apache.org/jira/browse/CASSANDRA-5138
-brian
On Jan 12, 2013, at 12:48 PM, Shahryar Sedghi wrote:
> Hi
>
> I am trying to test my application that runs with JDBC, CQL 3 with Cassandra
> 1.2. After
On Mon, Jan 7, 2013 at 12:10 PM, Tristan Seligmann
wrote:
> I am guessing the strange results you get are a bug; Cassandra should
> either refuse to execute the query
>
That is correct. I've created
https://issues.apache.org/jira/browse/CASSANDRA-5122 and will attach a
patch shortly.
--
Sylvain
If you use PRIMARY KEY ((a, b)) instead of PRIMARY KEY (a, b), the
partition key will be a composite of both the a and b values; with
PRIMARY KEY (a, b), the partition key will be a, and the column names
will be a composite of b and the column name (c being the only regular
column here).
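The two declarations side by side (illustrative tables, not from the thread):

```sql
-- PRIMARY KEY ((a, b)): the partition key is the composite (a, b);
-- c is the only regular column.
CREATE TABLE t1 (a int, b text, c uuid, PRIMARY KEY ((a, b)));

-- PRIMARY KEY (a, b): the partition key is a alone, b is a
-- clustering column, and rows sharing a are sorted by b.
CREATE TABLE t2 (a int, b text, c uuid, PRIMARY KEY (a, b));
```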
I am gues
COMPACT STORAGE seems to be really the issue why I can not see my table
using pycassa. Unfortunately COMPACT STORAGE does not seem to support
collection types *sigh*.
-aj
aaron morton wrote:
>> I know work is in progress to fix this...
> AFAIK CF's
I have a C++ driver that is nearly complete for the new binary protocol,
and I plan on creating wrappers for python and ruby. Was going to announce
next week hopefully. It needs some more unit tests and docs. It also
currently lacks connection pooling and retry.
https://github.com/mstump/libcql
This is perhaps my issue :-)
Thanks for the pointer with COMPACT STORAGE.
Andreas
aaron morton wrote:
>> I know work is in progress to fix this...
> AFAIK CF's created by CQL 3 using COMPACT STORAGE are visible to
> thrift. Those created without it are not, and will not, be visible to
> thrift.
> I know work is in progress to fix this...
AFAIK CF's created by CQL 3 using COMPACT STORAGE are visible to thrift. Those
created without it are not, and will not, be visible to thrift.
Cheers
-
Aaron Morton
Freelance Cassandra Developer
New Zealand
@aaronmorton
http://www.the
I know one outstanding issue is that CQL3 created column families won't be
listed as CQL3 column families aren't exposed by the old thrift calls.
I know work is in progress to fix this...
On Jan 6, 2013, at 8:01 PM, "aaron morton" <aa...@thelastpickle.com> wrote:
I'm not aware of any is
I'm not aware of any issues with using the thrift API with
http://pycassa.github.com/pycassa/ and 1.2
Cheers
-
Aaron Morton
Freelance Cassandra Developer
New Zealand
@aaronmorton
http://www.thelastpickle.com
On 7/01/2013, at 8:21 AM, Adam Venturella wrote:
> I have been using
I have been using this successfully so far:
http://pypi.python.org/pypi/cql
On Sun, Jan 6, 2013 at 11:18 AM, Andreas Jung wrote:
>
> Are there any up-to-date Python bindings available that
> work with Cassandra 1.2?
>
> -aj
Hello,
Removing the extra parentheses around the primary key definition seems to
work as expected:
CREATE TABLE foo2 (a int, b text, c uuid, PRIMARY KEY (a, b) );
INSERT INTO foo2 (a, b , c ) VALUES ( 1 , 'aze',
'4d481800-4c5f-11e1-82e0-3f484de45426');
INSERT INTO foo2 (a, b , c ) VALUES ( 1 ,
https://issues.apache.org/jira/browse/CASSANDRA/fixforversion/12323284
On Wed, Oct 10, 2012 at 1:41 AM, Alexey Zotov wrote:
> Hi Guys,
>
> What known critical bugs are there that would prevent using 1.2 beta 1 in
> production?
> We don't use cql and secondary indexes.
>
>
> --
>
> Best regards