Hi,
the steps are:
- ALTER KEYSPACE to change your replication strategy
- "nodetool repair -pr " on ALL nodes or full repair
"nodetool repair " on enough replica to distribute and
rebalance your data to replicas
- nodetool cleanup on every node to remove superfluous data
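For example, a minimal sketch of that sequence, assuming a hypothetical
keyspace "my_ks" and DC names "dc1"/"dc2" (adapt names and RFs to your cluster):

cqlsh> ALTER KEYSPACE my_ks WITH replication = {'class': 'NetworkTopologyStrategy', 'dc1': 3, 'dc2': 3};
~$ nodetool repair -pr my_ks    # run on every node, one node at a time
~$ nodetool cleanup my_ks       # then on every node, once repairs have finished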
Please note that you'd be
Schemas are propagated by gossip.
You can check schema propagation cluster-wide with "nodetool describecluster"
or "nodetool gossipinfo | grep SCHEMA | cut -f3 -d: | sort | uniq -c".
You'd better send your DDL instructions to only one node (for example by
using a whitelist load balancing policy in your client driver).
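As a rough illustration (the IP, keyspace, table and column names below are
made up), checking schema agreement and targeting a single coordinator for
DDL could look like:

~$ nodetool describecluster     # every node should report the same schema version
~$ cqlsh 10.0.0.1               # connect to a single node so DDL has one coordinator
cqlsh> ALTER TABLE my_ks.my_table ADD new_col text;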
As Alain already said, you should keep this mixed-version period as short as possible:
- streaming operations won't work (repair, bootstrap)
- Hinted handoff won't work, as two different major versions of Cassandra
can't share the same schema version
- So no DDL operations (CREATE/ALTER), as your changes won't be propagated
If you don't want tombstones, don't generate them ;)
More seriously, tombstones are generated when:
- doing a DELETE
- TTL expiration
- setting a column to NULL
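For illustration, with a made-up table, each of these writes a tombstone:

cqlsh> DELETE FROM my_ks.events WHERE id = 42;
cqlsh> INSERT INTO my_ks.events (id, payload) VALUES (43, 'x') USING TTL 3600;
       -- becomes a tombstone once the TTL expires
cqlsh> UPDATE my_ks.events SET payload = null WHERE id = 44;
       -- writing NULL is a delete, hence a tombstone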
However, tombstones are an issue only if, for the same value, you have many
tombstones (i.e. you keep overwriting the same values with data and
Are you using repairParallelism = sequential or parallel?
As said by Alain:
- try to decrease stream throughput to avoid flooding nodes with lots
of (small) streamed sstables
- if you are using parallel repair, switch to sequential
- don't start too many repairs simultaneously (example commands below).
- Do you really need
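A minimal sketch of the knobs mentioned above (the values and keyspace name
are only examples):

~$ nodetool getstreamthroughput          # check the current cap
~$ nodetool setstreamthroughput 50       # lower it (MB/s) if nodes are flooded with small sstables
~$ nodetool repair -pr my_ks             # in 2.1, plain repair is sequential; add -par only if you really want parallel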
> will find a way to mitigate things though, or already have. Good
> luck ;-).
>
> C*heers,
> ---
> Alain Rodriguez - al...@thelastpickle.com
> France
>
> The Last Pickle - Apache Cassandra Consulting
> http://www.thelastpickle.com
Are you running repairs?
You may try:
- increase concurrent_compactors to 8 (max in 2.1.x)
- increase compaction_throughput_mb_per_sec to more than 16 MB/s (48 may be a good start)
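As a sketch, using the values suggested above:

~$ nodetool setcompactionthroughput 48   # takes effect at runtime, no restart
# concurrent_compactors is a cassandra.yaml setting and needs a restart:
concurrent_compactors: 8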
What kind of data are you storing in these tables? Time series?
2016-03-21 23:37 GMT+01:00 Gianluca Borello :
> Thank yo
Any news on this?
We also have issues during repairs when using many LCS tables. We end
up with 8k sstables, many pending tasks, and dropped mutations.
We are using Cassandra 2.0.10, on a 24-core server, with
multithreaded compaction enabled.
~$ nodetool getstreamthroughput
Current stream throu
Are your commitlog and data on the same disk? If yes, you should put
the commitlog on a separate disk that doesn't have a lot of IO.
Other IO may have a great impact on your commitlog writes and
may even block them.
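For example, in cassandra.yaml (the mount points below are placeholders):

commitlog_directory: /mnt/commitlog    # dedicated, otherwise idle disk
data_file_directories:
    - /mnt/data                        # data files on their own disk(s)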
An example of the impact IO can have, even for async writes:
https://engineering.li
>> https://github.com/apache/cassandra/blob/cassandra-2.0/NEWS.txt#L195-L198
>>
>> It's a good idea to move up to 2.0.12 while you're at it. There have been a
>> number of bugfixes.
>>
>> On Tue, Mar 3, 2015 at 12:37 PM, Fabrice Facorat
>> wrote:
Hi,
we have a 52-node Cassandra cluster running Apache Cassandra 1.2.13.
As we are planning to migrate to Cassandra 2.0.10, we decided to do
some tests, and we noticed that once a node in the cluster has been
upgraded to Cassandra 2.0.x, restarting a Cassandra 1.2.x node will fail.
The tests were done
From what I understand, this can happen when you have many nodes and many
vnodes per node. How many vnodes did you configure on your nodes?
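To check, something like this (the config path and IP are examples; 256 is
the usual default when vnodes are enabled):

~$ grep '^num_tokens' /etc/cassandra/cassandra.yaml
num_tokens: 256
~$ nodetool ring | grep -c 10.0.0.1    # roughly one line per vnode owned by that node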
2014-03-04 11:37 GMT+01:00 Phil Luckhurst :
> The VMs are hosted on the same ESXi server and they are just running
> Cassandra. We seem to see this happen even if t
> Yes, it is expected behavior since
> 1.2.5 (https://issues.apache.org/jira/browse/CASSANDRA-5424).
> Since you set foobar not to replicate to the stats DC, the primary range of
> the foobar keyspace for nodes in stats is empty.
>
>
> On Thu, Feb 27, 2014 at 10:16 AM, Fabrice Facorat
&
Hi,
we have a cluster with 3 DCs, and for one DC (stats), RF=0 for a
keyspace using NetworkTopologyStrategy.
cqlsh> SELECT * FROM system.schema_keyspaces WHERE keyspace_name='foobar';
 keyspace_name | durable_writes | strategy_class | strategy_options
---------------+----------------+----------------+------------------
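For reference, such a keyspace would be defined with something like the
following (the non-stats DC names and RFs are placeholders; leaving 'stats'
out of the map entirely has the same effect as an explicit RF=0 there):

cqlsh> ALTER KEYSPACE foobar WITH replication = {'class': 'NetworkTopologyStrategy', 'dc1': 3, 'dc2': 3, 'stats': 0};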
2013/6/19 Takenori Sato :
> GC options are not set. You should see the following.
>
> -XX:+PrintGCDateStamps -XX:+PrintPromotionFailure
> -Xloggc:/var/log/cassandra/gc-1371603607.log
>
>> Is it normal to have two processes like this?
>
> No. You are running two processes.
It's "normal" as this i
At Orange portails we are presently testing Cassandra 1.2.0 beta/rc
with Java 7, and so far we have no issues.
2012/12/22 Brian Tarbox :
> What I saw in all cases was:
> a) set JAVA_HOME to Java 7, run the program: it fails
> b) set JAVA_HOME to Java 6, run the program: it succeeds
>
> I should have better notes but