>From my experience, and from what I've read, the more RAM the better. Any excess
>memory can be used as disk cache, which should help your reads a lot.
-Arindam
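The excess-RAM-as-cache effect above is visible on any Linux node: the kernel reports how much memory it is currently using as page cache, and repeated sstable reads that hit this cache avoid disk I/O entirely. A quick way to check:

```shell
# Show total memory and how much the kernel is using as page cache.
# Reads served from "Cached" memory never touch the disks.
grep -E 'MemTotal|^Cached' /proc/meminfo
```

Run this on a node under read load; a large and stable `Cached` value is the disk cache doing its job.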
-Original Message-
From: Paolo Crosato [mailto:paolo.cros...@targaubiest.com]
Sent: Tuesday, September 09, 2014 12:53 PM
To:
>> For new nodes that you want to bootstrap into the cluster you can specify
>> any nodes you wish.
Note that you want to keep your seed list consistent across your cluster though.
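A consistent seed list means every node's cassandra.yaml points at the same small set of seed addresses. A sketch of the relevant section (the IPs below are placeholders; a common practice is 2-3 stable nodes per datacenter):

```yaml
# cassandra.yaml -- keep this block identical on every node in the cluster
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "10.0.0.1,10.0.0.2,10.0.1.1"
```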
From: Mark Reddy [mailto:mark.re...@boxever.com]
Sent: Tuesday, April 29, 2014 2:26 AM
To: user@cassandra.apache.org
This thread should answer your questions:
http://stackoverflow.com/questions/15925549/how-cassandra-split-keyspace-data-when-multiple-dirctories-found
From: Yatong Zhang [mailto:bluefl...@gmail.com]
Sent: Wednesday, April 30, 2014 2:03 AM
To: user@cassandra.apache.org
Subject: Can Cassandra work
Answers inline.
From: Lu, Boying [mailto:boying...@emc.com]
Sent: Wednesday, April 30, 2014 12:20 AM
To: user@cassandra.apache.org
Subject: Some questions to adding a new datacenter into cassandra cluster.
Hi, All,
I need to add a freshly installed Cassandra (running on three nodes) into a
datac
What you have described below should work just fine.
When I was replacing nodes in my ring, I ended up creating a new datacenter
with the new nodes, but I was upgrading to vnodes too at the time.
-Arindam
From: nash [mailto:nas...@gmail.com]
Sent: Monday, April 28, 2014 10:52 PM
To: user@cassandra.apache.org
If you don’t have row caching turned on, you can try changing settings to get
your memtables to flush faster (read more at the datastax documention at [1]).
Btw, if you are using a pre-1.2 version, your bloom filter might come into play
as well.
All said and done, if your write request rate is
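The flush-tuning advice above can be sketched as a cassandra.yaml excerpt. These are the 1.2-era settings discussed in this thread; the values are illustrative only, and the usual advice applies: change them incrementally and measure.

```yaml
# cassandra.yaml -- example values only, tune one knob at a time
memtable_total_space_in_mb: 1024   # cap total memtable space so flushes trigger sooner
memtable_flush_writers: 2          # parallel flush threads (often matched to data dirs)
memtable_flush_queue_size: 4       # pending-flush queue depth
```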
As an update - finally got the node to join the ring.
Restarting all the nodes in the cluster, followed by a clean bootstrap of the
node that was stuck did the trick.
-Arindam
From: Arindam Barua [mailto:aba...@247-inc.com]
Sent: Monday, February 24, 2014 5:04 PM
To: user@cassandra.apache.org
I am running Cassandra 1.2.12 on CentOS 5.10.
Was running 1.1.15 previously without any issues as well.
-Arindam
From: Donald Smith [mailto:donald.sm...@audiencescience.com]
Sent: Tuesday, February 25, 2014 3:40 PM
To: user@cassandra.apache.org
Subject: RE: Supported Cassandra version for CentOS
eamer.java (line 246)
Streaming from /10.67.XXX.XXX failed
From: Arindam Barua [mailto:aba...@247-inc.com]
Sent: Tuesday, February 18, 2014 5:16 PM
To: user@cassandra.apache.org
Subject: RE: Bootstrap stuck: vnode enabled 1.2.12
I believe you are talking about CASSANDRA-6685, which was introduced in
apache.org
Subject: Re: Bootstrap stuck: vnode enabled 1.2.12
There is a bug where a node without schema cannot bootstrap. Do you have
schema?
On Tue, Feb 18, 2014 at 1:29 PM, Arindam Barua <aba...@247-inc.com> wrote:
The node is still out of the ring. Any suggestions on how to get it in will be
very helpful.
From: Arindam Barua [mailto:aba...@247-inc.com]
Sent: Friday, February 14, 2014 1:04 AM
To: user@cassandra.apache.org
Subject: Bootstrap stuck: vnode enabled 1.2.12
After our otherwise successful upgrade procedure to enable vnodes, when adding
back "new" hosts to our cluster, one non-seed host ran into a hardware issue
during bootstrap. By the time the hardware issue was fixed a week later, all
other nodes had been added successfully, cleaned, and repaired. The di
or 1.2.14 subsequently.
-Original Message-
From: Chris Burroughs [mailto:chris.burrou...@gmail.com]
Sent: Monday, January 06, 2014 10:00 AM
To: user@cassandra.apache.org
Subject: Re: vnode in production
On 01/02/2014 01:51 PM, Arindam Barua wrote:
> 1. the stability of vnodes in pr
Hello all,
Just wanted to check if anyone has any experiences to share regarding
1. the stability of vnodes in production
2. upgrading to vnodes in production
We recently upgraded to 1.2.12 in production and were planning to turn on
vnodes using the "adding a new datacenter" metho
Do you have any snapshots on the nodes where you are seeing this issue?
Snapshots link to sstables, which will prevent them from being deleted.
-Arindam
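Checking for and clearing snapshots can be done with nodetool. A hedged sketch (on the 1.2-era versions in this thread there is no listing command, so you inspect the snapshots directories on disk instead; `nodetool listsnapshots` arrived in later releases):

```shell
# Find snapshot directories that may be pinning old sstables
# (<data_dir> is typically /var/lib/cassandra/data)
find /var/lib/cassandra/data -type d -name snapshots

# Once you are sure they are no longer needed, drop all snapshots:
nodetool clearsnapshot
```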
From: Narendra Sharma [mailto:narendra.sha...@gmail.com]
Sent: Sunday, December 15, 2013 1:15 PM
To: user@cassandra.apache.org
Subject: Cassandra
Aaron Morton
New Zealand
@aaronmorton
Co-Founder & Principal Consultant
Apache Cassandra Consulting
http://www.thelastpickle.com
On 21/11/2013, at 12:42 pm, Arindam Barua <aba...@247-inc.com> wrote:
>
> Thanks for the suggestions Aaron.
>
> As a follow up,
want to put the reads down as well.
It's easier to tune the system if you can provide some info on the workload.
Cheers
-
Aaron Morton
New Zealand
@aaronmorton
Co-Founder & Principal Consultant
Apache Cassandra Consulting
http://www.thelastpickle.com
On 7/11/2013,
I see 100 000 recommended in the DataStax documentation for the nofile limit
since Cassandra 1.2:
http://www.datastax.com/documentation/cassandra/2.0/webhelp/cassandra/install/installRecommendSettings.html
-Arindam
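The nofile limit is normally raised via limits.conf. A sketch, assuming Cassandra runs as the `cassandra` user (the path and user name may differ on your distro):

```
# /etc/security/limits.conf (or a drop-in under /etc/security/limits.d/)
# raise the open-file limit for the user running Cassandra
cassandra  -  nofile  100000
```

Verify from a fresh shell as that user with `ulimit -n`, which should report 100000.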
From: Pieter Callewaert [mailto:pieter.callewa...@be-mobile.be]
Sent: Thursday,
We want to upgrade our Cassandra cluster to have newer hardware, and were
wondering if anyone has suggestions on Cassandra or linux config changes that
will prove to be beneficial.
As of now, our performance tests (application-specific as well as
cassandra-stress) are not showing any signif
> - changing the "memtable_total_space_in_mb" to 1024
> - increasing the heap to 10GB.
>
> Hope this will help somehow :).
>
> Good luck
>
>
> 2013/10/16 Arindam Barua
>
>
> During performance testing being run on our 4 node Cassandra 1.1.5 cluster
On 23/10/2013, at 1:08 PM, Arindam Barua <aba...@247-inc.com> wrote:
We are not doing deletes, but are setting ttls of 8 days on most of our columns
(these are not updates to existing columns). Hence it seems safe to reduce
gc_grace_seconds to even 0 from a tombstones
hen.
Thanks,
Arindam
[1] http://www.datastax.com/docs/1.0/operations/cluster_management
From: Robert Coli [mailto:rc...@eventbrite.com]
Sent: Friday, October 18, 2013 11:50 AM
To: user@cassandra.apache.org
Subject: Re: upgrading Cassandra server hardware best practice?
On Fri, Oct 18, 2013 a
We currently have 2 datacenters and a ring of 5 Cassandra servers on each
datacenter. We are getting new hardware, and after evaluating them, plan to
upgrade the ring to the new hardware.
Is there any recommended procedure for doing so? This is one of the options we
are considering:
1.
very high.
The disk setup is three 1TB disks with software RAID 0, to appear as a single
disk of 3TB.
Does this confirm that I have a slow disk problem? If so, any other ways to
help other than moving to SSDs and adding more Cassandra nodes.
Thanks,
Arindam
From: Arindam Barua [mailto:aba...@247-i
We don't do any deletes in our cluster, but do set ttls of 8 days on most of
the columns. After reading a bunch of earlier threads, I have concluded that I
can safely set gc_grace_seconds to 0 and not have to worry about expired
columns coming back to life. However, I wanted to know if there is
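The TTL-only, no-deletes setup described above can be sketched in CQL 3. The table name is made up for illustration; 8 days is 8 × 86400 = 691200 seconds (note that `default_time_to_live` is only available on CQL 3 tables in newer releases, so on older clusters the TTL would be set per write instead):

```sql
-- Hypothetical table: with a TTL-only workload and no explicit deletes,
-- gc_grace_seconds can be lowered so expired columns are purged at the
-- next compaction rather than lingering for the default 10 days.
ALTER TABLE metrics.events
  WITH gc_grace_seconds = 0
  AND default_time_to_live = 691200;  -- 8 days
```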
During performance testing being run on our 4 node Cassandra 1.1.5 cluster, we
are seeing warning logs about the heap being almost full - [1]. I'm trying to
figure out why, and how to prevent it.
The tests are being run on a Cassandra ring consisting of 4 dedicated boxes
with 32 GB of RAM each
up memtable_flush_writers (incrementally and separately for both, of
course, so you can see the effect).
On Thu, Jun 27, 2013 at 2:27 AM, Arindam Barua <aba...@247-inc.com> wrote:
In our performance tests, we are seeing similar FlushWriter, MutationStage,
MemtablePostFlusher pendin
Check that the node
is part of the cluster, and check for a split schema using cassandra-cli (the
FAQ on the wiki has help for split schema).
Cheers
-
Aaron Morton
Freelance Cassandra Consultant
New Zealand
@aaronmorton
http://www.thelastpickle.com
On 27/06/2013, at 8:57 AM, Arindam B
In our performance tests, we are seeing similar FlushWriter, MutationStage,
MemtablePostFlusher pending tasks become non-zero. We collect snapshots every 5
minutes, and they seem to clear after ~10-15 minutes though. (The flush writer
has an 'All time blocked' count of 540 in the below example).
From: Robert Coli [mailto:rc...@eventbrite.com]
Sent: Tuesday, June 25, 2013 11:15 AM
To: user@cassandra.apache.org
Subject: Re: Problems with node rejoining cluster
On Mon, Jun 24, 2013 at 11:19 PM, Arindam Barua wrote:
> - We do not specify any tokens in cassandra.yaml relying on
> bootstrap assigning
We need to do a rolling upgrade of our Cassandra cluster in production, since
we are upgrading Cassandra on solaris to Cassandra on CentOS.
(We went with solaris initially since most of our other hosts in production are
solaris, but were running into some lockup issues during perf tests, and
de
using Hector's SliceQuery() and reading into a List, or using Astyanax, seems
to result in similar times too.
Thanks,
Arindam
-Original Message-----
From: Arindam Barua [mailto:aba...@247-inc.com]
Sent: Wednesday, October 03, 2012 10:54 AM
To: user@cassandra.apache.org
Subject: RE: Read lat
Did you use the "--cql3" option with the cqlsh command?
From: Vivek Mishra [mailto:mishra.v...@gmail.com]
Sent: Monday, October 08, 2012 7:22 PM
To: user@cassandra.apache.org
Subject: Using compound primary key
Hi,
I am trying to use a compound primary key column name, and I am referring to:
http:
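A minimal CQL 3 sketch of a compound primary key (on Cassandra 1.1/1.2, cqlsh must be started with the `--cql3` flag mentioned above; the table and column names here are made up for illustration):

```sql
-- user_id is the partition key; event_time is a clustering column.
-- Together they form the compound primary key.
CREATE TABLE timeline (
    user_id    text,
    event_time timestamp,
    payload    text,
    PRIMARY KEY (user_id, event_time)
);

-- All rows for one user_id live in one partition, ordered by event_time:
SELECT payload FROM timeline
 WHERE user_id = 'u42'
 ORDER BY event_time DESC LIMIT 10;
```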
.gov>
> > > To: user@cassandra.apache.org
> > > Date: Tue, 2 Oct 2012 06:41:09 -0600
> > > Subject: Re: Read latency issue
> > >
> > > Interesting results. With PlayOrm, we did a 6 node test of reading 100
>
t a higher throughput though as
> > reading from more disks will be faster. Anyways, you may want to play with
> > more nodes and re-run. If you run a test with PlayOrm, I would love to know
> > the results there as well.
> >
> > Later,
> > D
We are trying to setup a Cassandra cluster and have low read latency
requirements. Running some tests, we do not see the performance that we were
hoping for. Wanted to check if anyone has thoughts on:
1. If these are expected latency times for the data/machine config, etc
2. If not