Hi,
Here are some upgrade options:
- Standard rolling upgrade: node by node
- Fast rolling upgrade: rack by rack (see the sketch below). If clients use CL=LOCAL_ONE then it's OK
as long as one rack is UP. For higher CL it's possible assuming you have no
more than one replica per rack, e.g. CL=LOCAL_QUORUM with RF
3,5,7,9,11,13,15,17,19,21,23
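For instance, a minimal sketch to list nodes grouped by rack before restarting one rack at a time - it only assumes that the rack name is the last column of "nodetool status" output:

# list nodes per rack (rack is the last column of "nodetool status")
nodetool status | awk '/^[UD][NLJM]/ {print $NF, $2}' | sort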
There is nothing in the server logs.
On Monday I will activate debug and try again to start up the Cassandra node.
Thanks
Francesco Messere
On 09/11/2018 18:51, Romain Hardouin wrote:
Ok so all nodes in Firenze are down. I thought only one was K
h the cassandra.yaml file
Regards
Francesco Messere
On 09/11/2018 17:48, Romain Hardouin wrote:
Hi Francesco, it can't work! Milano and Firenze, oh boy, Calcio vs Calcio
Storico X-D
Ok more seriously, "Updating topology ..." is not a problem. But you have low
resources and system misconfiguration:
- Small heap size: 3.867 GiB
From the logs: "Unable to lock JVM memory (ENOMEM). This can r
Note that one "user"/application can open multiple connections. You also have
the number of Thrift connections available in JMX if you run a legacy
application.
Max is right. Regarding where they come from, you can use lsof. For instance
on AWS - but you can adapt it for your needs:
IP=...REG
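As an illustration only, a sketch that counts established client connections per source IP - it assumes the default native transport port 9042, adjust to your setup:

# count established connections to the native port, grouped by client IP
lsof -nP -iTCP:9042 -sTCP:ESTABLISHED | awk 'NR>1 {split($9, a, "->"); sub(/:[0-9]+$/, "", a[2]); print a[2]}' | sort | uniq -c | sort -rn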
Also, you didn't mention which C* 2.0 version you're using, but prior to upgrading
to 2.1.20, make sure to use the latest 2.0 - or at least >= 2.0.7
Le vendredi 3 août 2018 à 13:03:39 UTC+2, Romain Hardouin
a écrit :
Hi Joel,
No it's not supported. C*2.0 can't stream data to C*3.11.
Make the upgrade 2.0 -> 2.1.20, then you'll be able to upgrade to 3.11.3, i.e.
2.1.20 -> 3.11.3. You can upgrade to 3.0.17 as an intermediate step (I would),
but don't upgrade to 2.2. Also make sure to read carefully
https://g
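For each node in such a rolling upgrade, the per-node sequence is roughly the following sketch - package and service names are assumptions, adapt them to your installation:

nodetool drain                          # flush memtables and stop accepting traffic
sudo service cassandra stop
sudo apt-get install cassandra=2.1.20   # or 3.11.3 on the next pass
sudo service cassandra start
nodetool upgradesstables                # once the whole cluster runs the new version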
Rocksandra is very interesting for key/value data models. Let's hope it will
land in C* upstream in the near future thanks to pluggable storage. Thanks
Dikang!
Le mardi 6 mars 2018 à 10:06:16 UTC+1, Kyrylo Lebediev
a écrit :
At Teads we use Terraform, Chef, Packer and Rundeck for our AWS infrastructure.
I'll publish a blog post on Medium that talks about it, it's in the pipeline.
Terraform is awesome.
Best,
Romain
Le vendredi 9 février 2018 à 00:57:01 UTC+1, Ben Wood
a écrit :
Shameless plug of our (DC/
-adaptor ?
- how do we send these tables to cassandra? does a simple SCP work?
- what is the recommended size for sstables for when it does not fit a single executor
On 5 February 2018 at 18:40, Romain Hardouin
wrote:
Hi Julien,
We have such a use case on some clusters. If you want to insert big batches at
a fast pace, the only viable solution is to generate SSTables on the Spark side and
stream them to C*. Last time we benchmarked such a job we achieved 1.3 million
partitions inserted per second on 3 C* nodes
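To stream SSTables generated offline into the cluster, sstableloader is the usual tool - a minimal sketch where host names and the output path are placeholders:

# the last two path components must be <keyspace>/<table>
sstableloader -d cass-node1,cass-node2,cass-node3 /path/to/output/my_keyspace/my_table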
Hi,
We also noticed an increase in CPU - both system and user - on our c3.4xlarge
fleet. So far it's really visible with max(%user) and especially max(%system),
it has doubled! I graphed a ratio "write/s / %system", it's interesting to see
how the value dropped yesterday, you can see it here: h
Does "nodetool describecluster" shows an actual schema disagreement?You can
try "nodetool resetlocalschema" to fix the issue on the node experiencing
disagreement.
Romain
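A quick sketch of the two commands, assuming nodetool points at the affected node:

nodetool describecluster     # all nodes should report a single schema version
nodetool resetlocalschema    # run only on the node that disagrees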
Le jeudi 9 novembre 2017 à 02:55:22 UTC+1, Erick Ramirez
a écrit :
It looks like you have a schema disagreement in
Hi,
You should read about repair maintenance:
http://cassandra.apache.org/doc/latest/operating/repair.html
Consider installing and running C* reaper to do so: http://cassandra-reaper.io/
STCS doesn't work well with TTL. I saw you have done some tuning, hard to say
if it's OK without knowing the
Hi,
It might be useful to enable compaction logging with the log_all subproperty.
Best,
Romain
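If your version supports it, a sketch of how that subproperty could be set - keyspace, table and the compaction class here are placeholders, keep your existing class and options:

cqlsh -e "ALTER TABLE my_ks.my_table WITH compaction = {'class': 'SizeTieredCompactionStrategy', 'log_all': 'true'};"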
Le vendredi 8 septembre 2017 à 00:15:19 UTC+2, kurt greaves
a écrit :
Might be worth turning on debug logging for that node and when the compaction
kicks off and CPU skyrockets send through the
Hi,
Before: 1 cluster with 2 DCs, 3 nodes in each DC.
Now: 1 cluster with 1 DC, 6 nodes in this DC.
Is that right?
If yes, depending on the RF - and assuming NetworkTopologyStrategy - I would do:
- RF = 2 => 2 C* racks, one rack in each AZ
- RF = 3 => 3 C* racks, one rack in each AZ
In other words, I
e context on this?
Thanks,
kant
On Fri, Mar 3, 2017 at 4:42 AM, Romain Hardouin wrote:
r sure
anymore if I saw something in system before the revert.
Anyway, hopefully it was just a fluke. We have some crazy ML libraries running
on it, maybe Cassandra just gave up? Oh well, Cassandra is a champ and we
haven't really had issues with it before.
On Thu, Mar 2, 2017 at 6:51 P
Also, I should have mentioned that it would be a good idea to spawn your three
benchmark instances in the same AZ, then try with one instance on each AZ to
see how network latency affects your LWT rate. The lower latency is achievable
with three instances on the same placement group of course bu
helps.
Thanks,
kant
On Tue, Feb 28, 2017 at 7:51 PM, Kant Kodali wrote:
Hi Romain,
Thanks again. My response are inline.
kant
On Tue, Feb 28, 2017 at 10:04 AM, Romain Hardouin wrote:
Did you inspect system tables to see if there are some traces of your keyspace?
Did you ever drop and re-create this keyspace before that?
Lines in debug appear because fd interval is > 2 seconds (logs are in
nanoseconds). You can override intervals via -Dcassandra.fd_initial_value_ms
and -Dcassa
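One way to pass such a property, sketched here assuming you use cassandra-env.sh and want a 3-second initial interval - any mechanism that adds JVM system properties works, and the path is an assumption:

echo 'JVM_OPTS="$JVM_OPTS -Dcassandra.fd_initial_value_ms=3000"' >> /etc/cassandra/cassandra-env.sh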
Does this
help?
...
Daemeon C.M. Reiydelle
USA (+1) 415.501.0198
London (+44) (0) 20 8144 9872
On Wed, Mar 1, 2017 at 3:30 AM, Romain Hardouin wrote:
Hi all,
AWS launched i3 instances a few days ago*. NVMe SSDs seem very promising!
Did someone already benchmark an i3 with Cassandra? e.g. i2 vs i3. If yes, with
which OS and kernel version? Did you make any system tuning for NVMe? e.g. PCIe
IRQ, etc.
We plan to make some benchmarks but Debian is not
> we are currently using 3.0.9. should we use 3.8 or 3.10
No, don't use 3.X in production unless you really need a major feature. I would
advise sticking to 3.0.X (i.e. 3.0.11 now). You can backport CASSANDRA-11966
easily but of course you have to deploy from source as a prerequisite.
> I haven't
Hi,
Regarding shared pool workers see CASSANDRA-11966. You may have to backport it
depending on your Cassandra version.
Did you try to lower the compaction throughput to see if it helps? Be sure to keep
an eye on pending compactions, SSTable count and SSTables per read of course.
"alloc" is the memo
se those.
But I would like to solve this.
Cheers,
Michael
On 02.02.2017 15:06, Romain Hardouin wrote:
> Hi,
>
> What's your C* 3.X version?
> I've just tested it on 3.9 and it works:
>
> cqlsh> SELECT * FROM test.idx_static where id2=22;
>
> id |
Hi,
What's your C* 3.X version? I've just tested it on 3.9 and it works:
cqlsh> SELECT * FROM test.idx_static where id2=22;
 id  | added                    | id2 | source | dest
-----+--------------------------+-----+--------+------
 id1 | 2017-01-27 23:00:00.00+  | 22  | src1
Default TTL is nice to provide information on tables for ops guys. I mean we
know at a glance that data in such tables is ephemeral.
Le Mercredi 1 février 2017 21h47, Carlos Rolo a écrit :
Awsome to know this!
Thanks Jon and DuyHai!
Regards,
Carlos Juzarte Rolo
Cassandra Consultant /
Just a side note: increase the system_auth keyspace replication factor if you're
using authentication.
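A sketch of what that could look like - the strategy, DC name and RF are placeholders for your own topology:

cqlsh -e "ALTER KEYSPACE system_auth WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': 3};"
nodetool repair system_auth    # so existing credentials reach the new replicas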
Le Jeudi 12 janvier 2017 14h52, Alain RODRIGUEZ a
écrit :
Hi,
Nodetool repair always lists lots of data and never stays repaired, I think.
This might be the reason:
"incremental: tru
uction any time soon.
Thanks again!
On Mon, Dec 26, 2016 at 7:37 PM, Romain Hardouin wrote:
Hi Shalom,
I assume you'll use KVM virtualization so pay attention to your stack at every
level:
- Nova: e.g. CPU pinning, NUMA awareness if relevant, etc. Have a look at extra specs.
- libvirt
- KVM
- QEMU
You might also be interested in resource quotas on other OpenStack VMs that will
be colocated w
Hi all,
Many people here have trouble with repair so I would like to share my
experience regarding the backport of CASSANDRA-12580 "Fix merkle tree size
calculation" (thanks Paulo!) in our C* 2.1.16. I was expecting some minor
improvements but the results are impressive on some tables.
Becaus
Hi Jean,
I had the same problem, I removed the lines in the /etc/init.d/cassandra template
(we use Chef to deploy) and now the HeapDumpPath is not overridden anymore. The
same goes for -XX:ErrorFile.
Best,
Romain
Le Mardi 4 octobre 2016 9h25, Jean Carlo a
écrit :
Yes, we did it.
So if th
Hi,
@Edward > In older versions you can not control when this call will timeout
truncate_request_timeout_in_ms has been available for many years, starting
from 1.2. Maybe you have another setting parameter in mind?
@George Try to put the cassandra logs in debug
Best,
Romain
Le Mercredi 28 septembre 2
Hi Julian,
The problem with any deletes here is that you can *read* potentially many
tombstones. I mean you have two concerns:
1. Avoid reading tombstones during a query
2. How to evict tombstones as quickly as possible to reclaim disk space
The first point is a data model consideration. Gene
OK. If you still have issues after setting streaming_socket_timeout_in_ms != 0,
consider increasing request_timeout_in_ms to a high value, say 1 or 2 minutes.
See comments in https://issues.apache.org/jira/browse/CASSANDRA-7904
Regarding 2.1, be sure to test incremental repair on your data before
Alain, you replied faster, I didn't see your answer :-D
If so, how do we do that?
Thanks.
George.
On Thu, Sep 22, 2016 at 6:23 AM, Romain Hardouin wrote:
I meant that pending (and active) AntiEntropySessions are a simple way to check
if a repair is still running on a cluster. Also have a look at Cassandra reaper:
- https://github.com/spotify/cassandra-reaper
- https://github.com/spodkowinski/cassandra-reaper-ui
Best,
Romain
Le Mercredi 21 sept
Do you see any pending AntiEntropySessions (not AntiEntropyStage) with nodetool
tpstats on nodes?
Romain
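A quick way to check, as a sketch:

nodetool tpstats | grep -i antientropy   # active/pending AntiEntropySessions > 0 means a repair is still running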
Le Mercredi 21 septembre 2016 16h45, "Li, Guangxing"
a écrit :
Alain,
my script actually greps through all the log files, including those
system.log.*. So it was probably due to a
Hi,
Hi,
Do you shuffle the replicas with TokenAwarePolicy?
TokenAwarePolicy(LoadBalancingPolicy childPolicy, boolean shuffleReplicas)
Best,
Romain
Le Mardi 20 septembre 2016 15h47, Pranay akula
a écrit :
I was able to find the hotspots causing the load, but the size of these
partitions ar
Also for testing purposes, you can send only one replica set to the Test DC.
For instance with RF=3 and 3 C* racks, you can just rsync/sstableloader one
rack. It will be faster and OK for tests.
Best,
Romain
Le Mardi 20 septembre 2016 3h28, Michael Laws a
écrit :
I put together a shel
Hi,
You should make a benchmark with cassandra-stress to find the sweet spot. With
NVMe I guess you can start with a high value, 128?
Please let us know the results of your findings, it's interesting to know if we
can go crazy with such pieces of hardware :-)
Best,
Romain
Le Mardi 20 septem
Hi,
You can read and write the values of the following MBean via JMX:
org.apache.cassandra.db:type=CompactionManager
- CoreCompactorThreads
- MaximumCompactorThreads
If you modify CoreCompactorThreads it will be effective immediately, I mean
assuming you have some pending compactions, you will se
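As a sketch, with a command-line JMX client such as jmxterm (the jar name and the JMX port 7199 are assumptions):

echo "get -b org.apache.cassandra.db:type=CompactionManager CoreCompactorThreads
set -b org.apache.cassandra.db:type=CompactionManager CoreCompactorThreads 4" \
  | java -jar jmxterm-uber.jar -l localhost:7199 -n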
Hi,
> More recent (I think 2.2) don't have this problem since they write hints to
>the file system as per the commit log
Flat file hints were implemented starting from 3.0
https://issues.apache.org/jira/browse/CASSANDRA-6230
Best,
Romain
its each. So I can use blob as a primary key without hesitation.
Best regards,Alexandr
On Fri, Sep 9, 2016 at 1:20 AM, Romain Hardouin wrote:
Hi,
Disk-wise it's the same because a bigint is serialized as an 8-byte ByteBuffer
and if you want to store a Long as bytes into a blob type it will take 8 bytes
too, right? The difference is the validation. The blob ByteBuffer will be stored
as is whereas the bigint will be validated. So technic
sults instantly from cqlsh
On Tue, Sep 6, 2016 at 1:57 PM, Romain Hardouin wrote:
There is nothing special in the two sstablemetadata outputs but if the
timeouts are due to a network split or an overwhelmed node or something like that
you won't see anything here. That said, if you have the
1) Is it a typo or did you really make a giant leap from C* 1.x to 3.4 with all
the C* 2.0 and C* 2.1 upgrades? (btw if I were you, I would use the latest 3.0.X)
2) Regarding NTR all time blocked (e.g. 26070160 from the logs), have a look at
the patch "max_queued_ntr_property.txt":
https://issues.ap
ur repo.
On Mon, Sep 5, 2016 at 8:11 PM, Romain Hardouin wrote:
Yes dclocal_read_repair_chance will reduce the cross-DC traffic and latency, so
you can swap the values (https://issues.apache.org/jira/browse/CASSANDRA-7320).
I guess the sstable_size_in_mb was set to 50 because back in the da
Hi,
You don't have to worry about that unless you write with CL = ANY. The sole
method to force hints that I know of is to invoke scheduleHintDelivery on
"org.apache.cassandra.db:type=HintedHandoffManager" via JMX but it takes an
endpoint as argument. If you have lots of nodes and several DCs, make
ncremented based on the number of writes
for a given name(key?) and value. This table is heavy on reads and writes. If
so, the value should be much higher?
On Mon, Sep 5, 2016 at 7:35 AM, Romain Hardouin wrote:
Hi,
Try to put org.apache.cassandra.db.ConsistencyLevel at DEBUG level, it could
help to find a regular pattern. By the way, I see that you have set a global
read repair chance: read_repair_chance = 0.1
And not the local read repair: dclocal_read_repair_chance = 0.0
Is there any reason to d
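One way to change that logger level at runtime without a restart, as a sketch (revert with INFO when done):

nodetool setlogginglevel org.apache.cassandra.db.ConsistencyLevel DEBUG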
Hi Jérôme,
The code in 2.2.6 allows -local and -pr:
https://github.com/apache/cassandra/blob/cassandra-2.2.6/src/java/org/apache/cassandra/service/StorageService.java#L2899
But... the options validation introduced in CASSANDRA-6455 seems to break this
feature!
https://github.com/apache/cassandra/bl
Yes, we use Cassandra 2.1.11 in our latest release.
From: Romain Hardouin
[mailto:romainh...@yahoo.fr]
Sent: 19 August 2016 17:36
To: user@cassandra.apache.org
Subject: Re: A question to
ka is the 2.1 format... I don't understand. Did you install C* 2.1?
Romain
Le Vendredi 19 août 2016 11h32, "Lu, Boying" a écrit :
Hi,
There are two ways to upgrade SSTables:
- online (C* must be UP): nodetool upgradesstables
- offline (when C* is stopped): using the tool called "sstableupgrade". It's
located in the bin directory of Cassandra so depending on how you installed
Cassandra, it may be on the path. See
htt
Hi,
Try this and check the yaml file path:
strace -f -e open nodetool upgradesstables 2>&1 | grep cassandra.yaml
How is C* installed (package, tarball)? Do other nodetool commands run fine? Also,
did you try an offline SSTable upgrade with the sstableupgrade tool?
Best,
Romain
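For the offline route, with the node stopped, the invocation is roughly the following sketch - keyspace and table names are placeholders:

sstableupgrade my_keyspace my_table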
Le Vendredi 12 aoû
nodes they know
about.
unreachableNodes = probe.getUnreachableNodes(); ---> i.e. if a node doesn't
publish heartbeats for x seconds (using the gossip protocol), it's therefore marked
'DN: down'?
That's it?
2016-08-11 13:51 GMT+01:00 Romain Hardouin :
Hi Jean Paul,
Yes, th
Hi Jean Paul,
Yes, the gossiper is used. Example with down nodes:
1. The status command retrieves unreachable nodes from a NodeProbe instance:
https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/tools/nodetool/Status.java#L64
2. The NodeProbe list comes from a StorageServic
Yes. You can even see that some caution is taken in the code
https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/config/Config.java#L131
(But if I were you I would not rely on this. It's always better to be
explicit.)
Best,
Romain
Le Mercredi 10 août 2016 17h50, sai
> Curious why the 2.2 to 3.x upgrade path is risky at best.
I guess that an upgrade from 2.2 is less tested by DataStax QA because DSE 4 used
C* 2.1, not 2.2. I would say the safest upgrade is 2.1 to 3.0.x.
Best,
Romain
That's good news if describecluster shows the same version on each node. Try
with a high timeout like 120 seconds to see if it works. Is there a VPN between
the DCs? Is there room for improvement at the network level? TCP tuning, etc. I'm
not saying you won't have unreachable nodes but it's worth
Hi,
The latency is high...
Regarding the ALTER, did you try to increase the timeout with "cqlsh
--request-timeout=REQUEST_TIMEOUT"? Because the default is 10 seconds. Apart
from the unreachable nodes, do you know if all nodes have the same schema version?
Best,
Romain
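For instance, a sketch with a 120-second client-side timeout (the statement itself is a placeholder):

cqlsh --request-timeout=120 -e "ALTER TABLE my_ks.my_table WITH gc_grace_seconds = 864000;"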
Just to know, did you get any errors during nodetool upgradesstables?
Romain
Le Mardi 2 août 2016 8h40, Julien Anguenot a écrit :
Hey Oskar,
I would comment and add all possible information to that Jira issue…
J.
--Julien Anguenot (@anguenot)
On Aug 2, 2016, at 8:36 AM, Oskar K
DSE 4.8 uses C* 2.1 and DSE 5.0 uses C* 3.0. So I would say that 2.1->3.0 is
more tested by DataStax than 2.2->3.0.
Le Jeudi 14 juillet 2016 11h37, Stefano Ortolani a
écrit :
FWIW, I've recently upgraded from 2.1 to 3.0 without issues of any sort, but
admittedly I haven't been using a
On Thu, Jul 14, 2016 at 10:54 AM, Romain Hardouin wrote:
Do you run C* on physical machines or in the cloud? If the topology doesn't
change too often you can have a look at Zabbix. The downside is that you have to
set up all the JMX metrics yourself... but that's also a good point because you
can have custom metrics. If you want nice graphs/dashboards y
Did you upgrade from a previous version? Did you make some schema changes like
compaction strategy, compression, bloom filter, etc.? What about the R/W
requests? SharedPool Workers are... shared ;-) Put the logs in debug to see some
examples of what services are using this pool (many actually).
Bes
Same behavior here with a very different setup. After an upgrade to 2.1.14 (from
2.0.17) I see a high load and many NTR "all time blocked". Offheap memtables
lowered the blocked NTR for me, I put a comment on CASSANDRA-11363
Best,
Romain
Le Mercredi 13 juillet 2016 20h18, Yuan Fang a
écrit
Put the driver logs in debug mode to see what's happening. Btw I am surprised by
the few requests per connection in your setup:
.setConnectionsPerHost(HostDistance.LOCAL, 20, 20)
.setMaxRequestsPerConnection(HostDistance.LOCAL, 128)
It looks like protocol v2 settings (Cassandra 2.0
Indeed when you want to flush the system keyspace you need to specify it. The
flush without argument filters out the system keyspace. This behavior is still
the same in the trunk. If you dig into the sources, look at
"nodeProbe.getNonSystemKeyspaces()" when "cmdArgs" is empty:-
https://github.c
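In practice, as a sketch:

nodetool flush          # flushes all non-system keyspaces
nodetool flush system   # the system keyspace must be named explicitly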
> Would you know why the driver doesn't automatically change to LOCAL_SERIAL
> during a DC outage ?
I would say because *you* decide, not the driver ;-) This kind of fallback
could be achieved with a custom downgrading policy
(DowngradingConsistencyRetryPolicy [*] doesn't handle ConsistencyLevel
Hi Jason,
It's difficult for the community to help you if you don't share the error ;-)
What did the logs say when you ran a major compaction? (i.e. the first error
you encountered)
Best,
Romain
Le Mercredi 8 juin 2016 3h34, Jason Kania a écrit :
I am running a 3 node cluster of 3.0.6 inst
Hi,
You can't yet, see https://issues.apache.org/jira/browse/CASSANDRA-10857
Note that secondary indexes don't scale. Be aware of their limitations. If you want
to change the data model of a CF, a Spark job can do the trick.
Best,
Romain
Le Mardi 7 juin 2016 10h51, "Lu, Boying" a écrit :
tion during autobootstrap :)
Thanks
Anuj
----
On Tue, 26/4/16, Romain Hardouin wrote:
Subject: Re: Inconsistent Reads after Restoring Snapshot
To: "user@cassandra.apache.org"
Date: Tuesday, 26 April, 2016, 12:47 PM
You can make a restore on the new node A (don't forget to set the token(s) in
cassandra.yaml), start the node with -Dcassandra.join_ring=false and then run a
repair on it. Have a look at
https://issues.apache.org/jira/browse/CASSANDRA-6961
Best,
Romain
Le Mardi 26 avril 2016 4h26, Anuj Wad
Yes you are right Anishek. If you write with LOCAL_ONE, values will be the same.
Would you mind pasting the output for both nodes in gist/paste/whatever?
https://gist.github.com http://paste.debian.net
Le Jeudi 11 février 2016 11h57, kedar a écrit :
Thanks for the reply.
ls -l cassandra/data/* lists various *.db files
This problem is on both nodes.
Thanks,
Kedar Parikh
What is the output on both nodes of the following command?
ls -l /var/lib/cassandra/data/system/*
If one node seems odd you can try "nodetool resetlocalschema" but the other
node must be in a clean state.
Best,
Romain
Le Jeudi 11 février 2016 11h10, kedar a écrit :
I am using cqlsh 5.0.1 | Cas
As Mohammed said, "nodetool clearsnapshot" will do the trick.
Cassandra takes a snapshot by default before keyspace/table dropping or
truncation.
You can disable this feature if it's a dev node (see auto_snapshot in
cassandra.yaml) but if it's a production node it's a good thing to keep auto
snapsh
> What is the best practise to create sstables?
When you run a "nodetool flush" Cassandra persists all the memtables on disk,
i.e. it produces sstables.
(You can create sstables by yourself thanks to CQLSSTableWriter, but I don't
think it was the point of your question.)
Did you run "nodetool flush" on the source node? If not, the missing rows could
be in memtables.
Hi,
I assume a RF > 1. Right? What is the consistency level you used? cqlsh uses ONE
by default. Try:
cqlsh> CONSISTENCY ALL
And run your query again.
Best,
Romain
Le Vendredi 29 janvier 2016 13h45, Arindam Choudhury
a écrit :
Hi Kai,
The table schema is:
CREATE TABLE mordor.things_value
Hi Dillon,
CMIIW I suspect that you use vnodes and you want to "move one of the 256 tokens
to another node". If yes, that's not possible. "nodetool move" is not allowed
with vnodes:
https://github.com/apache/cassandra/blob/cassandra-2.1.11/src/java/org/apache/cassandra/service/StorageService.j
smaller cluster. I will try a few things as soon
as i get time and update here.
On Thu, Nov 19, 2015 at 5:48 PM, Peer, Oded wrote:
Have you read the DataStax documentation?
http://docs.datastax.com/en/cassandra/2.0/cassandra/operations/ops_snapshot_restore_new_cluster.html
From: Romain
You can take a snapshot via nodetool then load sstables on your test cluster
with sstableloader:
docs.datastax.com/en/cassandra/2.1/cassandra/tools/toolsBulkloader_t.html
Sent from Yahoo Mail on Android
From:"Anishek Agarwal"
Date:Wed, Nov 18, 2015 at 11:24
Subject:Strategy tools for taking s
Cassandra can handle many more columns (e.g. time series).
So 100 columns is OK.
Best,
Romain
tommaso barbugli a écrit sur 03/07/2014 21:55:18 :
> De : tommaso barbugli
> A : user@cassandra.apache.org,
> Date : 03/07/2014 21:55
> Objet : Re: keyspace with hundreds of columnfamilies
>
> tha
how?
>
> Thanks
> Tommaso
>
> 2014-07-02 17:21 GMT+02:00 Romain HARDOUIN :
> The trap is that each CF will consume 1 MB of memory due to arena
allocation.
> This might seem harmless but if you plan thousands of CF it means
> thousands of mega bytes...
> Up to 1,000 CF I
The trap is that each CF will consume 1 MB of memory due to arena
allocation.
This might seem harmless but if you plan thousands of CF it means
thousands of megabytes...
Up to 1,000 CF I think it could be doable, but not 10,000.
Best,
Romain
tommaso barbugli a écrit sur 02/07/2014 10:13:41
So you have to install a backup client on each Cassandra node. If the
NetBackup client behaves like EMC Networker, beware of the resource
utilization (data deduplication, compression). You might have to boost
CPUs and RAM (+2 GB) of each node.
Try with one node: make a snapshot with nodetool and
Hi Maria,
It depends which backup software and hardware you plan to use. Do you
store your data on DAS or SAN?
Some hints regarding Cassandra: either drain the node to back it up, or
take a Cassandra snapshot and then back up this snapshot.
We backup our data on tape but we also store our data
Well... you have already changed the limits ;-)
Keep in mind that changes in the limits.conf file will not affect
processes that are already running.
opensaf dev a écrit sur 21/05/2014 06:59:05 :
> De : opensaf dev
> A : user@cassandra.apache.org,
> Date : 21/05/2014 07:00
> Objet : Memory is
Hi,
You have to define limits for the user.
Here is an example for the user cassandra:
# cat /etc/security/limits.d/cassandra.conf
cassandra - memlock unlimited
cassandra - nofile 10
best,
Romain
opensaf dev a écrit sur 21/05/2014 06:59:05 :
> De : opensaf dev
> A : u
RF=1 means no replication.
You have to set RF=2 in order to set up mirroring.
-Romain
ng a écrit sur 13/05/2014 19:37:08 :
> De : ng
> A : "user@cassandra.apache.org" ,
> Date : 14/05/2014 04:37
> Objet : Datacenter understanding question
>
> If I have configuration of two data center with o
Hi,
See data_file_directories and commitlog_directory in the settings file
cassandra.yaml.
Cheers,
Romain
Hari Rajendhran a écrit sur 07/04/2014 12:56:37
:
> De : Hari Rajendhran
> A : user@cassandra.apache.org,
> Date : 07/04/2014 12:58
> Objet : Cassandra Disk storage capacity
>
> Hi T
cassandra*.noarch.rpm -> Install Cassandra Only
dsc*.noarch.rpm -> DSC stands for DataStax Community. Install Cassandra +
OpsCenter
Donald Smith a écrit sur 27/03/2014
20:36:57 :
> De : Donald Smith
> A : "'user@cassandra.apache.org'" ,
> Date : 27/03/2014 20:37
> Objet : Question about rpms
It looks like MagnetoDB for CloudStack.
Nice Clojure project.
Pierre-Yves Ritschard a écrit sur 27/03/2014 08:12:15 :
> De : Pierre-Yves Ritschard
> A : user ,
> Date : 27/03/2014 08:12
> Objet : [ANN] pithos is cassandra-backed S3 compatible object store
>
> Hi,
>
> If you're already using
4 GB is OK for a test cluster.
In the past we encountered a similar issue due to VMWare ESX's memory
overcommit (memory ballooning).
When you talk about overcommit, do you mean Linux (vm.overcommit_*) or the
hypervisor (like ESX)?
prem yadav a écrit sur 24/03/2014 12:11:31 :
> De : prem yada
You have to tune Cassandra in order to run it in a low-memory
environment.
Many settings must be tuned. The link that Michael mentions provides a
quick start.
There is a point that I haven't understood. *When* did your nodes die?
Under load? Or can they be killed by the OOM killer even if they