> we had an awful performance/throughput experience with 3.x coming from 2.1.
> 3.11 is simply a memory hog, if you are using batch statements on the client
> side. If so, you are likely affected by
> https://issues.apache.org/jira/browse/CASSANDRA-16201
>
Confirming what Thomas writes.
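For readers unfamiliar with the pattern under discussion: a client-side batch is built in the driver and shipped to a coordinator in a single request. A minimal sketch using the DataStax Java driver 3.x API (keyspace, table, and values are illustrative):

import com.datastax.driver.core.BatchStatement;
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.Session;

public class BatchExample {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder()
                .addContactPoint("127.0.0.1").build()) {
            Session session = cluster.connect();
            PreparedStatement ps = session.prepare(
                    "INSERT INTO ks.tbl (id, val) VALUES (?, ?)");
            // Every statement in the batch is materialized on the
            // coordinator's heap at once, which is where the allocation
            // pressure tracked in CASSANDRA-16201 shows up.
            BatchStatement batch =
                    new BatchStatement(BatchStatement.Type.UNLOGGED);
            for (int i = 0; i < 100; i++) {
                batch.add(ps.bind(i, "value-" + i));
            }
            session.execute(batch);
        }
    }
}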
From: Leon Zaruvinsky
Sent: Wednesday, October 28, 2020 5:21 AM
To: user@cassandra.apache.org
Subject: Re: GC pauses way up after single node Cassandra 2.2 -> 3.11 binary
upgrade
>> Our JVM options are unchanged between 2.2 and 3.11
>
> For the sake of clarity, do you mean:
> (a) you're using the default JVM options in 3.11 and it's different to the
> options you had in 2.2?
> (b) you've copied the same JVM options you had in 2.2 to 3.11?

(b), which are the default options.
> Our JVM options are unchanged between 2.2 and 3.11

For the sake of clarity, do you mean:
(a) you're using the default JVM options in 3.11 and it's different to the
options you had in 2.2?
(b) you've copied the same JVM options you had in 2.2 to 3.11?

The distinction is important because the answer determines which set of JVM
settings the upgraded node is actually running with.
Thanks Erick.

Our JVM options are unchanged between 2.2 and 3.11, and we have disk access
mode set to standard. Generally we’ve maintained all configuration between
the two versions.

Read throughput (rate, bytes read/range scanned, etc.) seems fairly
consistent before and after the upgrade.
I haven't seen this specific behaviour in the past, but things that I would
look at are:
- JVM options which differ between the 3.11 defaults and what you have
configured in 2.2 (one way to compare them is sketched below)
- review your monitoring and check read throughput on the upgraded node as
compared to the 2.2 nodes
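One way to do that comparison (a sketch, not an official tool): dump the flags each node was actually started with over JMX and diff the output between a 2.2 node and the upgraded 3.11 node. This assumes JMX on Cassandra's default port 7199, which out of the box is only reachable from localhost:

import java.lang.management.ManagementFactory;
import java.lang.management.RuntimeMXBean;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class DumpJvmArgs {
    public static void main(String[] args) throws Exception {
        String host = args.length > 0 ? args[0] : "127.0.0.1";
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://" + host + ":7199/jmxrmi");
        try (JMXConnector jmx = JMXConnectorFactory.connect(url)) {
            RuntimeMXBean runtime = ManagementFactory.newPlatformMXBeanProxy(
                    jmx.getMBeanServerConnection(),
                    ManagementFactory.RUNTIME_MXBEAN_NAME,
                    RuntimeMXBean.class);
            // Prints every -X/-XX flag the JVM was started with, one per line.
            runtime.getInputArguments().forEach(System.out::println);
        }
    }
}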
On Wed, 28 Oct 2020 at 14:41, Rich Hawley wrote:
> unsubscribe
>
You need to email user-unsubscr...@cassandra.apache.org to unsubscribe from
the list. Cheers!
Hi,
I'm attempting an upgrade of Cassandra 2.2.18 to 3.11.6, but had to abort
because of major performance issues associated with GC pauses.
Details:
3 node cluster, RF 3, 1 DC
~2TB data per node
Heap Size: 12G / New Size: 5G
I didn't even get very far in the upgrade - I just upgraded the binary on one
node.
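For reference, that heap shape corresponds to settings along these lines in cassandra-env.sh, which both 2.2 and 3.11 honour (a sketch; the values are the ones quoted above):

# cassandra-env.sh
MAX_HEAP_SIZE="12G"   # expands to -Xms12G -Xmx12G
HEAP_NEWSIZE="5G"     # expands to -Xmn5G (young generation, CMS)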
Thanks Rob, this makes sense. We only have one rack at this point, so I think
it'd be better to start with PropertyFileSnitch to make Cassandra think that
these nodes each are in a different rack without having to put them on
different subnets. And I will have more flexibility (at the cost of keeping
the properties file up to date).
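For anyone following along: PropertyFileSnitch takes its topology from cassandra-topology.properties, so the trick Drew describes is just assigning each node its own rack there (a sketch; the addresses are illustrative, and endpoint_snitch must be set to PropertyFileSnitch in cassandra.yaml):

# cassandra-topology.properties -- identical copy on every node
# format: <node address>=<data center>:<rack>
192.168.1.10=DC1:RAC1
192.168.1.11=DC1:RAC2
192.168.1.12=DC1:RAC3
# fallback for unlisted nodes
default=DC1:RAC1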
On Mon, Nov 5, 2012 at 12:23 PM, Drew Kutcharian wrote:
>> Switching from SimpleStrategy to RackAware can be a pain.
>
> Can you elaborate a bit? What would be the pain point?
If you don't maintain the same replica placement vis-à-vis the nodes in
your cluster, you have to dump and reload.
I understand that with one node we will have no HA, but since we are just
starting out we wanted to see what would be the bare minimum to go to
production with and as we see traction we can add more nodes.
> Switching from SimpleStrategy to RackAware can be a pain.
Can you elaborate a bit? What would be the pain point?
Should be fine if one node can deal with your read and write load.
Switching from SimpleStrategy to RackAware can be a pain. That's a
potential growth point way down the line (if you ever have your nodes on
different switches). You might want to just set up your keyspace as
RackAware if you intend to expand later.
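In current terms, rack-aware placement is NetworkTopologyStrategy, so "setting up the keyspace as RackAware" from day one would look roughly like this (modern CQL, which postdates this thread; the keyspace name and replication factor are illustrative):

-- Start at RF 1 on the single node; raise the RF as nodes are added.
CREATE KEYSPACE myks
  WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': 1};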
Hey Guys,
What should I look out for when deploying a single node installation? We want
to launch a product that uses Cassandra and since we are going to have very
little load initially, we were thinking of just going live with one node and
eventually add more nodes as the load (hopefully) grows.
cation".
>> Both ware part of the high availability / redundancy. So you would really
>> need to backup your single-node-"cluster" to some other external location.
>>
>> Good luck!
>>
>>
>> 2012/3/15 Drew Kutcharian
>> Hi,
>>
>>
Hi Drew,
>>>>
>>>> One other disadvantage is the lack of "consistency level" and
>>>> "replication". Both ware part of the high availability / redundancy. So you
>>>> would really need to backup your single-node-"cluster&q
art of the high availability / redundancy. So you
>>> would really need to backup your single-node-"cluster" to some other
>>> external location.
>>>
>>> Good luck!
>>>
>>>
>>> 2012/3/15 Drew Kutcharian
>>>
>>>>
t; external location.
>>
>> Good luck!
>>
>>
>> 2012/3/15 Drew Kutcharian
>>
>>> Hi,
>>>
>>> We are working on a project that initially is going to have very little
>>> data, but we would like to use Cassandra to ease the futur
>
> 2012/3/15 Drew Kutcharian
> Hi,
>
> We are working on a project that initially is going to have very little data,
> but we would like to use Cassandra to ease the future scalability. Due to
> budget constraints, we were thinking to run a single node Cassandra for now
&
on.
>
> Good luck!
>
>
> 2012/3/15 Drew Kutcharian
>
>> Hi,
>>
>> We are working on a project that initially is going to have very little
>> data, but we would like to use Cassandra to ease the future scalability.
>> Due to budget constraints, we we
Kutcharian
> Hi,
>
> We are working on a project that initially is going to have very little
> data, but we would like to use Cassandra to ease the future scalability.
> Due to budget constraints, we were thinking to run a single node Cassandra
> for now and then add more nodes as re
Hi,
We are working on a project that initially is going to have very little data,
but we would like to use Cassandra to ease the future scalability. Due to
budget constraints, we were thinking to run a single node Cassandra for now and
then add more nodes as required.
I was wondering if it is OK to start with a single node, and what the
disadvantages of that setup would be.
On Wed, Oct 5, 2011 at 7:42 AM, Jeremiah Jordan
<jeremiah.jor...@morningstar.com> wrote:
> But truncate is still slow, especially if it can't use JNA (windows) as it
> snapshots. Depending on how much data you are inserting during your unit
> tests, just paging through all the keys and then deleting them may be
> faster.
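A side note on the snapshot cost Jeremiah mentions: later versions of cassandra.yaml expose an auto_snapshot switch, and turning it off makes TRUNCATE and DROP skip the snapshot (a sketch; only sensible for disposable test clusters, since the snapshot is the safety net against accidental truncates):

# cassandra.yaml -- test environments only
auto_snapshot: false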
Truncate is faster than drop + recreate.

On Tue, Oct 4, 2011 at 9:15 AM, Joseph Norton wrote:
> Hello.
>
> For unit test purposes, I have a single node Cassandra cluster. I need to
> drop and re-create several keyspaces between each test iteration. This
> process takes approximately 10 seconds for a single node installation.
Hello.
For unit test purposes, I have a single node Cassandra cluster. I need to drop
and re-create several keyspaces between each test iteration. This process
takes approximately 10 seconds for a single node installation.
Can you recommend any tricks or recipes to reduce the time required?
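Putting the replies together, a between-test cleanup that truncates every table rather than dropping and re-creating keyspaces could look like this (a sketch with the DataStax Java driver 3.x, which postdates this thread; the keyspace name is illustrative):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.TableMetadata;

public class TestCleanup {
    // Truncate every table in the keyspace instead of drop + re-create.
    static void truncateAll(Session session, String keyspace) {
        for (TableMetadata table : session.getCluster().getMetadata()
                .getKeyspace(keyspace).getTables()) {
            session.execute("TRUNCATE " + keyspace + "." + table.getName());
        }
    }

    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder()
                .addContactPoint("127.0.0.1").build()) {
            truncateAll(cluster.connect(), "test_ks");
        }
    }
}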
I'm looking for advice for running Cassandra 0.8+ on a single node. Would love
to hear stories about how much RAM you succeeded with, etc.
Currently we are running with a 4GB heap size. Hardware is 4 cores and 8GB
physical memory. We're not opposed to going to 16GB of memory or even 32GB.
On Sun, Mar 20, 2011 at 4:42 PM, aaron morton wrote:
> When compacting it will use the path with the greatest free space. When
> compaction completes successfully the files will lose their temporary status
> and that will be their new home.
These are set in storage-conf.xml (or cassandra.yaml in 0.7+):

commitlog_directory : where the commitlog will be written
data_file_directories : data files
saved_caches_directory : saved row cache

maki
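Applied to the question below, pointing Cassandra at the new partition means listing it under data_file_directories (a cassandra.yaml sketch; the paths are illustrative). Existing SSTables stay where they are until compaction rewrites them, which matches Aaron's note above about compaction picking the path with the most free space:

# cassandra.yaml
data_file_directories:
    - D:/cassandra/data        # original location
    - E:/cassandra/data        # new, larger partition
commitlog_directory: D:/cassandra/commitlog
saved_caches_directory: D:/cassandra/saved_caches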
Hi,
I have a single node Cassandra setup on a Windows machine. I soon ran out of
space on this machine, so I have increased its hard disk capacity. Now I want
to know: how do I configure Cassandra to start storing data on these larger
partitions? Also, how will the existing data be handled?