Hi,
did you also consider “taming” your Spark job by reducing its executors?
The job will probably take longer to run, in exchange for less stress
on the Cassandra cluster.
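Roughly something like this (just a sketch; the property names are the ones I
remember from the DataStax Spark Cassandra connector docs, so please verify
them against the connector version you run, and the values are only examples):

    # Sketch: throttle a PySpark job that writes to Cassandra by capping the
    # number of executors and the connector's write throughput.
    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .appName("throttled-cassandra-load")
        # fewer executors -> less parallel load on the Cassandra cluster
        .config("spark.executor.instances", "4")
        # fewer in-flight writes per task
        .config("spark.cassandra.output.concurrent.writes", "2")
        # cap write throughput per core (MB/s)
        .config("spark.cassandra.output.throughput_mb_per_sec", "5")
        .getOrCreate()
    )

The trade-off is simply runtime vs. cluster load, so these can be tuned until
the Cassandra nodes stay healthy.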
Regards
Christian
Von: "ZAIDI, ASAD A"
Antworten an: "user@cassandra.apache.org"
Datum: Donnerstag, 25. Juli
I fully agree with Jon here. We also used MVs previously, and major problems
popped up when decommissioning/commissioning nodes.
After replacing them and doing the MVs’ job “manually” in code, we did not face
those issues anymore.
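As a rough sketch of what “manually by code” means here (the keyspace, table
and column names are made up, and it assumes the Python cassandra-driver; real
code is of course more involved):

    # Sketch: keep a hand-maintained query table in sync with the base table,
    # instead of letting a materialized view do it server-side.
    # All names here are hypothetical examples.
    from cassandra.cluster import Cluster
    from cassandra.query import BatchStatement

    session = Cluster(["127.0.0.1"]).connect("my_keyspace")

    insert_base = session.prepare(
        "INSERT INTO users (user_id, email, name) VALUES (?, ?, ?)")
    insert_by_email = session.prepare(
        "INSERT INTO users_by_email (email, user_id, name) VALUES (?, ?, ?)")

    def save_user(user_id, email, name):
        # One logged batch writes both tables together, which is roughly
        # what the materialized view did for us on the server side.
        batch = BatchStatement()
        batch.add(insert_base, (user_id, email, name))
        batch.add(insert_by_email, (email, user_id, name))
        session.execute(batch)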
Regards,
Christian
From: Jon Haddad
Reply-To: "user@cassandra.apache.org"
It should be safe to replace the disk and run repair - CASSANDRA-6696 will
keep data for a given token range all on the same disk, so the resurrection
problem is solved.
--
Jeff Jirsa
On Aug 14, 2018, at 6:10 AM, Christian Lorenz
<christian.lor...@webtrekk.com> wrote:
Hi,
given a cluster with RF=3 and CL=LOCAL_ONE and an application that is deleting
data, what happens if the nodes are set up with JBOD and one disk fails? Do I
get consistent results while the broken drive is replaced and a nodetool repair
is running on the node with the replaced drive?
Kind regards,
the timeline? If you can manage with a maintenance window, the snapshot / move
and restore method may be the fastest. Streaming data can take a long time to
sync two DCs if there is a lot of data.
--
Rahul Singh
rahul.si...@anant.us
Anant Corporation
On Jun 14, 2018, 4:11 AM -0400, Christian Lorenz
Hi,
we need to move our existing Cassandra cluster to new hardware nodes. Currently
the cluster size is 8 members; they need to be moved to 8 new machines. The
Cassandra version in use is 3.11.1. Unfortunately we use materialized views in
production. I know that they have been retroactively marked as experimental.
The batches table should go down in size as the batches get processed (although
100GB is a pretty huge batch log...)
Do you use Materialized Views in your data model?
You just bootstrapped a new node and the table grew on all other nodes?
On Thu, Dec 7, 2017 at 12:25 PM Christian Lorenz
<christian.lor...@webtrekk.com> wrote:
I think we’ve hit the bug described here:
https://issues.apache.org/jira/browse/CASSANDRA-14096
Regards,
Christian
From: Christian Lorenz
Reply-To: "user@cassandra.apache.org"
Date: Friday, December 1, 2017 at 10:04
To: "user@cassandra.apache.org"
Subject: Re: No
Hi,
after joining a node into an existing cluster, the table system.batches became
quite large (100GB), which is about 1/3 of the node's size.
Is it safe to truncate the table?
Regards,
Christian
does repair based on token range (or even part of it); that's why it can manage
with a small Merkle tree.
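For illustration, a subrange repair only has to build Merkle trees for the
range you give it (the keyspace name and token values below are just
placeholders):

    # Sketch: trigger a full repair of a single token subrange via nodetool.
    import subprocess

    subprocess.run(
        ["nodetool", "repair", "-full",
         "-st", "-9223372036854775808",   # example start token
         "-et", "-4611686018427387904",   # example end token
         "my_keyspace"],
        check=True,
    )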
Regards,
Javier.
2017-11-30 6:48 GMT-03:00 Christian Lorenz
<christian.lor...@webtrekk.com>:
Hello,
after updating our cluster to Cassandra 3.11.1 (previously 3.9), running a
‘nodetool repair -full’ leads to the node crashing.
The log file showed the following exception:
ERROR [ReadRepairStage:36] 2017-11-30 07:42:06,439 CassandraDaemon.java:228 -
Exception in thread Thread[ReadRepairStage:36
Hi,
I’ve tried to decommission a node in a Cassandra 3.11.1 cluster.
The following warning appears in the log files of the other nodes:
WARN [StreamReceiveTask:10] 2017-11-20 17:06:00,735 StorageProxy.java:790 -
Received base materialized view mutation for key
DecoratedKey(-3222800209314657990, 2