Hi Ali,
The best practice is to use the noop scheduler on an array of SSDs behind
your block device (hardware RAID controller).
If you are using only one SSD, the deadline scheduler is the best
choice to reduce IO latency.
It is not recommended to set cfq on SSD disks.
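For reference, a quick way to check and change the scheduler at runtime (assuming the device is sda; adjust to your block device, and note the change does not persist across reboots unless you set it via the kernel command line or a udev rule):

# Show the available schedulers; the active one is shown in brackets
cat /sys/block/sda/queue/scheduler

# SSD array behind a hardware RAID controller
echo noop > /sys/block/sda/queue/scheduler

# Single SSD
echo deadline > /sys/block/sda/queue/scheduler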
Regards,
Roni
Hi there,
What is the best way to downgrade a C* 2.1.3 cluster to the stable 2.0.12?
I know it's not supported, but we are running into too many issues with 2.1.x...
This leads us to think that the best solution is to go back to the stable version.
Is there a safe way to do that?
Cheers,
Roni
in this situation?
Regards,
Roni
r your nodes to be sure that the value is not too
high. You may get too much IO if you increase concurrent compactors
when using spinning disks.
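A rough way to keep an eye on it (the cassandra.yaml path is just a guess for a package install; adjust to yours):

# What the node is currently configured with
grep concurrent_compactors /etc/cassandra/cassandra.yaml

# Watch disk utilization while compactions run; sustained %util near 100 means the spindles are saturated
iostat -x 5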
Regards,
Roni Balthazar
On 25 February 2015 at 16:37, Ja Sam wrote:
> Hi,
> One more thing. Hinted Handoff for last week for all nodes was less t
Hi Piotr,
Are your repairs finishing without errors?
Regards,
Roni Balthazar
On 25 February 2015 at 15:43, Ja Sam wrote:
> Hi, Roni,
> They aren't exactly balanced, but as I wrote before, they are in the range
> 2500-6000.
> If you need exact data, I will check them tomorrow.
Hi Piotr,
What about the nodes on AGRAF? Are the pending tasks balanced across
that DC's nodes as well?
You can check the pending compactions on each node.
Also try to run "nodetool getcompactionthroughput" on all nodes and
check if the compaction throughput is set to 999.
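For example, on each node (a sketch; run it per node or wrap it in whatever parallel-ssh tool you use):

# Pending compaction tasks on this node
nodetool compactionstats

# Current compaction throughput cap (MB/s)
nodetool getcompactionthroughput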
Cheers,
ool cfstats" on your nodes.
Cheers,
Roni Balthazar
On 25 February 2015 at 13:29, Ja Sam wrote:
> I do NOT have SSDs. I have normal HDDs grouped as JBOD.
> My CFs use SizeTieredCompactionStrategy.
> I am using local quorum for reads and writes. To be precise, I have a lot of
> writes
Try repair -pr on all nodes.
If after that you still have issues, you can try to rebuild the SSTables using
nodetool upgradesstables or scrub.
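Something along these lines on each node (keyspace and table names are placeholders):

# Primary-range repair, run on every node, one node at a time
nodetool repair -pr

# If problems persist, rewrite the SSTables (-a includes SSTables already on the current format)
nodetool upgradesstables -a my_keyspace my_table

# ...or scrub them if you suspect corruption
nodetool scrub my_keyspace my_table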
Regards,
Roni Balthazar
> Em 18/02/2015, às 14:13, Ja Sam escreveu:
>
> ad 3) I did this already yesterday (setcompactionthrouput also).
you getting when running repairs.
Regards,
Roni Balthazar
On Wed, Feb 18, 2015 at 1:31 PM, Ja Sam wrote:
> Can you explain to me what the correlation is between growing SSTables and
> repair?
> I was sure, until your mail, that repair only makes data consistent
> between nodes.
pactions must decrease as well...
Cheers,
Roni Balthazar
On Wed, Feb 18, 2015 at 12:39 PM, Ja Sam wrote:
> 1) We tried to run repairs, but they usually do not succeed. We had
> Leveled compaction before. Last week we ALTERed the tables to STCS, because the guys
> from DataStax suggested us
/dml_config_consistency_c.html
Cheers,
Roni Balthazar
On Wed, Feb 18, 2015 at 11:07 AM, Ja Sam wrote:
> I don't have problems with DC_B (the replica); only in DC_A (my system writes only
> to it) do I have read timeouts.
>
> I checked the SSTable count in OpsCenter and I have:
> 1) in DC_A same +-
(e.g., driver's timeout, concurrent reads, and
so on)
Regards,
Roni Balthazar
On Wed, Feb 18, 2015 at 9:51 AM, Ja Sam wrote:
> Hi,
> Thanks for your "tip"; it looks like something changed - I still don't know
> if it is OK.
>
> My nodes started to do more compac
Hi,
Yes... I had the same issue, and setting cold_reads_to_omit to 0.0 was
the solution...
The number of SSTables decreased from many thousands to below a
hundred, and the SSTables are now much bigger, most of them several
gigabytes in size.
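In case it helps, this is roughly the statement involved (keyspace and table names are placeholders; keep whatever other compaction options you already have):

cqlsh <<EOF
ALTER TABLE my_keyspace.my_table
  WITH compaction = {'class': 'SizeTieredCompactionStrategy',
                     'cold_reads_to_omit': '0.0'};
EOF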
Cheers,
Roni Balthazar
On Tue, Feb 17, 2015
ht or when your IO is not busy.
From http://wiki.apache.org/cassandra/NodeTool:
# raise the compaction throughput cap at midnight (999 MB/s is effectively unthrottled)
0 0 * * * root nodetool -h `hostname` setcompactionthroughput 999
# drop back to the 16 MB/s default at 6 AM
0 6 * * * root nodetool -h `hostname` setcompactionthroughput 16
Cheers,
Roni Balthazar
On Mon, Feb 16, 2015 at 7:47 PM, Ja Sam wrote:
> On
://pastebin.com/jbAgDzVK
Thanks,
Roni Balthazar
On Fri, Jan 9, 2015 at 12:03 PM, datastax wrote:
> Hello
>
> You may not be experiencing versioning issues. Do you know if compaction
> is keeping up with your workload? The behavior described in the subject is
> typically
me know if I need to provide more information.
Thanks,
Roni Balthazar
On Thu, Jan 8, 2015 at 5:23 PM, Robert Coli wrote:
> On Thu, Jan 8, 2015 at 11:14 AM, Roni Balthazar
> wrote:
>
>> We are using C* 2.1.2 with 2 DCs: 30 nodes in DC1 and 10 nodes in DC2.
>>
>
> https:/
moryError:
Java heap space"
Any hints?
Regards,
Roni Balthazar
Hi,
We use Puppet (http://puppetlabs.com) to manage our Cassandra configuration.
You can use Cluster SSH to send commands to the servers as well.
Another good choice is Saltstack.
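As a tiny illustration of the "send commands to all servers" part (hostnames and the Salt target pattern are made up; adapt to your inventory):

# Cluster SSH: one window per host, keystrokes broadcast to all of them
cssh cassandra-node1 cassandra-node2 cassandra-node3

# SaltStack: run a command on every minion matching a target pattern
salt 'cassandra*' cmd.run 'nodetool version'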
Regards,
Roni
On Thu, Oct 23, 2014 at 5:18 AM, Alain RODRIGUEZ wrote:
> Hi,
>
> I was wondering abo
I have a 0.6.4 Cassandra cluster of two nodes in full replica (replication
factor 2). I want to add two more nodes and balance the cluster (keeping
replication factor 2).
I want all of them to be seeds.
What are the simple steps:
1. add the "true" to all the nodes or only
the new ones?
2. add the "[