You are right. I have already changed cold_reads_to_omit to 0.0.
--
曹志富
Mobile: 18611121927
Email: caozf.zh...@gmail.com
Weibo: http://weibo.com/boliza/
2015-02-03 14:15 GMT+08:00 Roland Etzenhammer :
> Hi,
>
> maybe you are running into an issue that I also had on my test
Hi,
maybe you are running into an issue that I also had on my test cluster.
Since there were almost no reads on it, Cassandra did not run any minor
compactions at all. The solution for me (in this case) was:
ALTER TABLE WITH compaction = {'class':
'SizeTieredCompactionStrategy', 'min_threshold':
https://issues.apache.org/jira/browse/CASSANDRA-8635
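For reference, a complete statement along those lines might look like the
sketch below; the keyspace and table names and the min_threshold value are
only illustrative, and cold_reads_to_omit is the setting discussed in
CASSANDRA-8635:

ALTER TABLE my_keyspace.my_table
WITH compaction = {
    'class': 'SizeTieredCompactionStrategy',
    'min_threshold': 4,           -- illustrative value
    'cold_reads_to_omit': 0.0     -- compact even when SSTables are cold
};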
On Tue, Feb 3, 2015 at 5:47 AM, 曹志富 wrote:
> Just run nodetool repair.
>
> The nodes which have many SSTables are the newest in my cluster. Before
> adding these nodes to my cluster, my cluster had no automatic compaction
> because my cluster is a
Just run nodetool repair.
The nodes which have many SSTables are the newest in my cluster. Before adding
these nodes to my cluster, my cluster had no automatic compaction because my
cluster is a write-only cluster.
Thanks.
--
曹志富
Mobile: 18611121927
Email: caozf.zh...@gmai
Did you run incremental repair? Incremental repair is broken in 2.1 and
tends to create way too many SSTables.
On 2 February 2015 at 18:05, 曹志富 wrote:
> Hi all:
> I have an 18-node C* cluster running Cassandra 2.1.2. Some nodes have about
> 40,000+ SSTables.
>
> My compaction strategy is STCS.
>
> Coul
Hi all:
I have an 18-node C* cluster running Cassandra 2.1.2. Some nodes have about
40,000+ SSTables.
My compaction strategy is STCS.
Could someone give me a solution to deal with this situation?
Thanks.
--
曹志富
Mobile: 18611121927
Email: caozf.zh...@gmail.com
Weibo: http://wei
For the benefit of others, I ended up finding out that the CQL library I
was using (https://github.com/gocql/gocql) at this time leaves the page
size defaulted to no paging, so Cassandra was trying to pull all rows of
the partition into memory at once. Setting the page size to a reasonable
number resolved the issue.
I'll try your recommendations and will follow up on the same.
Thanks so much
Cheers
Asit
On Mon, Feb 2, 2015, 9:56 PM Eric Stevens wrote:
> Just a minor observation: those field names are extremely long. You store
> a copy of every field name with every value with only a couple of
> exceptions:
>
Colin, I'm not familiar with Ceph, but it sounds like it's a more
sophisticated version of a SAN.
Be aware that running Cassandra on absolutely anything other than local
disks is an anti-pattern. It will have a profound negative impact on
performance, scalability, and reliability of your cluster.
Just a minor observation: those field names are extremely long. You store
a copy of every field name with every value with only a couple of
exceptions:
http://www.datastax.com/documentation/cassandra/1.2/cassandra/architecture/architecturePlanningUserData_t.html
Your partition key column name (lo
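To make that concrete: on the 1.2/2.0 storage engine every cell carries its
column name, so shortening the names directly shrinks every stored value. A
minimal sketch with abbreviated names is below; these identifiers are
hypothetical, not from the original logentries schema:

CREATE TABLE logentries_compact (
    id    timeuuid PRIMARY KEY,  -- was logentrytimestamputcguid
    ctx   text,                  -- was context
    hr    bigint,                -- was date_to_hour
    dur_s float,                 -- was durationinseconds
    ts    timestamp              -- was eventtimestamputc
);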
Hi Asit,
The partition key is only one part of the performance picture. I recommend
reading this article: Advanced Time Series with Cassandra
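As a minimal sketch of the time-bucketing idea that article describes (all
names and the day-sized bucket are hypothetical, not a recommendation for this
specific workload), a bucketed log table and a bounded read could look like:

CREATE TABLE logentries_by_day (
    day_bucket text,          -- e.g. '2015-02-02'; keeps partitions bounded
    event_id   timeuuid,      -- event time plus uniqueness
    context    text,
    durationinseconds float,
    PRIMARY KEY (day_bucket, event_id)
) WITH CLUSTERING ORDER BY (event_id DESC);

SELECT * FROM logentries_by_day
WHERE day_bucket = '2015-02-02'
  AND event_id >= minTimeuuid('2015-02-02 00:00:00+0000')
  AND event_id <  maxTimeuuid('2015-02-02 12:00:00+0000');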
A leading wildcard is one of the slowest things you can do with Lucene, and
not a recommended practice, so either accept that it is slow or don't do it.
That said, there is a trick you can do with a reverse wildcard filter, but
that's an expert-level feature and not recommended for average developers.
Hi All,
We are working on an application logging project, and this is one of the
search tables, as below:
CREATE TABLE logentries (
logentrytimestamputcguid timeuuid PRIMARY KEY,
context text,
date_to_hour bigint,
durationinseconds float,
eventtimestamputc timestamp,
ipaddr
What about Java clients that were built for 1.2, and how do they work with 2.0?
On 2015-02-02 14:32:53 +, Carlos Rolo said:
Using Pycassa (https://github.com/pycassa/pycassa) I had no trouble with
the clients writing/reading from 1.2.x to 2.0.x (can't recall the minor
versions out of my head
Using Pycassa (https://github.com/pycassa/pycassa) I had no trouble with the
clients writing/reading from 1.2.x to 2.0.x (can't recall the minor
versions out of my head right now).
Regards,
Carlos Juzarte Rolo
Cassandra Consultant
Pythian - Love your data
rolo@pythian | Twitter: cjrolo | Linkedi
Sure, but the question is really about going from 1.2 to 2.0...
On 2015-02-02 13:59:27 +, Kai Wang said:
I would not use 2.1.2 for production yet. It doesn't seem stable enough
based on the feedback I see here. The newest 2.0.12 may be a better
option.
On Feb 2, 2015 8:43 AM, "Sibbald, C
Our minor version is 1.2.15 ...
I am not looking forward to the experience, and would like to gather as
much information as possible.
This presents an opportunity to also review the data structures we use
and possibly move them out of Cassandra.
Oleg
On 2015-02-02 13:42:52 +, Sibbald,
I would not use 2.1.2 for production yet. It doesn't seem stable enough
based on the feedback I see here. The newest 2.0.12 may be a better option.
On Feb 2, 2015 8:43 AM, "Sibbald, Charles"
wrote:
> Hi Oleg,
>
> What is the minor version of 1.2? I am looking to do the same for 1.2.14
> in a ver
Hi Oleg,
What is the minor version of 1.2? I am looking to do the same for 1.2.14
in a very large cluster.
Regards
Charles
On 02/02/2015 13:33, "Oleg Dulin" wrote:
>Dear Distinguished Colleagues:
>
>We'd like to upgrade our cluster from 1.2 to 2.0 and then to 2.1.
>
>We are using Pelops Thr
Dear Distinguished Colleagues:
We'd like to upgrade our cluster from 1.2 to 2.0 and then to 2.1.
We are using the Pelops Thrift client, which has long been abandoned by its
authors. I've read that 2.x has changes to the Thrift protocol making
it incompatible with 1.2 (and of course now the link t
At least I cannot think of any reason why it wouldn't work. As you said, you
might lose the data, but if you can live with that, then why not.
Hannu
> On 02.02.2015, at 14:21 , Gabriel Menegatti wrote:
>
> Hi Colin,
>
> Yes, we don't want to use the C* in-memory, we just want to mount the
> ke
Hi Colin,
Yes, we don't want to use the C* in-memory feature; we just want to mount the
keyspace data directory in RAM instead of leaving it on the spinning disks.
My question is more about the technical side of mounting the keyspace data
folder in RAM than about checking if Cassandra has some in-memory feature.
Hi Jan,
Thanks for your reply, but C* in-memory only supports 1 GB keyspaces at the
moment, which is not enough for us.
My question is more about the technical side of mounting the keyspace data
folder in RAM than about checking if Cassandra has some in-memory feature.
My intention is
Hi Holmberg,
I tried your suggestion and ran the following command:
keytool -exportcert -keystore path-to-my-keystore-file -storepass
my-keystore-password -storetype JKS -file path-to-output-file
and I got the following error:
keytool error: java.lang.Exception: Alias does not exist
Do you know how
Thanks a lot ;)
I’ll try your suggestions.
From: Adam Holmberg [mailto:adam.holmb...@datastax.com]
Sent: January 31, 2015 1:12
To: user@cassandra.apache.org
Subject: Re: FW: How to use cqlsh to access Cassandra DB if the
client_encryption_options is enabled
Assuming the truststore you are referencing
You can also try Stratio Cassandra, which is based on Cassandra 2.1.2, the
latest version of Apache Cassandra:
https://github.com/Stratio/stratio-cassandra
It provides an open-source implementation of Cassandra's secondary indexes,
which allows you to perform full-text queries, distributed
r
Yes, the stargate-core project is using native Lucene libraries, but it would
be dependent on the stargate-core developer.
I find it very easy and am doing more analysis on this.
Regards
Asit
On Mon, Feb 2, 2015 at 12:50 PM, Colin wrote:
> I use solr and cassandra but not together. I wri