[EXTERNAL] Re: Cleanup
Hi Marc,
Changes done using "nodetool setcompactionthroughput" will only be
applicable until the Cassandra service restarts. The throughput value will
revert back to the setting inside cassandra.yaml post service restart.
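For example (throughput value illustrative):
  # Raise the compaction/cleanup throttle at runtime (MB/s); not persisted
  nodetool setcompactionthroughput 64
  # Confirm the value currently in effect
  nodetool getcompactionthroughput
  # After a restart this reverts to compaction_throughput_mb_per_sec in cassandra.yaml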
…and if it is altered via nodetool, is it altered until manually changed or
service restart, so must it be manually put back?
From: Aaron Ploetz
Sent: Thursday, February 16, 2023 4:50 PM
To: user@cassandra.apache.org
Subject: Re: Cleanup
EXTERNAL
So if I remember right, setting …
compaction_throughput_mb_per_sec is 0 in cassandra.yaml. Is setting it in
nodetool going to provide any increase?
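A quick way to check what is configured on disk (path assumes a package
install; 0 means compaction is unthrottled):
  grep -n "compaction_throughput" /etc/cassandra/cassandra.yaml
  # e.g. compaction_throughput_mb_per_sec: 0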
From: Durity, Sean R via user
Sent: Thursday, February 16, 2023 4:20 PM
To: user@cassandra.apache.org
Subject: RE: Cleanup
EXTERNAL
Clean-up is constrained/throttled by compaction throughput. It can be run
on multiple nodes in a DC at the same time. Think of it as compaction and
consider your cluster performance/workload/timelines accordingly.
Sean R. Durity
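A minimal sketch of running it that way (host list, keyspace name, and ssh
access are hypothetical):
  # Run cleanup serially across a DC to bound the I/O impact
  for host in node1 node2 node3; do
    ssh "$host" nodetool cleanup my_keyspace
  done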
From: manish khandelwal
Sent: Thursday, February 16, 2023 5:05 AM
To: user@cassandra.apache.org
Subject: [EXTERNAL] Re: Cleanup
There is no advantage to running cleanup if no new nodes are introduced, so
cleanup time should remain the same when adding new nodes.
Cleanup is local to a node, so network bandwidth should have no effect on
reducing cleanup time.
Don't ignore cleanup, as it can leave your disks occupied without any use.
Hello,
'nodetool cleanup' used to be mono-threaded (up to C*2.1), then used all
the cores (C*2.1 - C*2.1.14), and is now something that can be controlled
(C*2.1.14+):
'nodetool cleanup -j 2' for example would use 2 compactors maximum (out
of the number of concurrent_compactors you defined).
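For example (keyspace name hypothetical):
  # Use at most 2 of the configured concurrent_compactors for this cleanup
  nodetool cleanup -j 2 my_keyspace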
Nodetool will eventually return when it’s done
You can also watch nodetool compactionstats
--
Jeff Jirsa
> On Oct 22, 2018, at 10:53 AM, Ian Spence wrote:
>
> Environment: Cassandra 2.2.9, GNU/Linux CentOS 6 + 7. Two DCs, 3 RACs in DC1
> and 6 in DC2.
>
> We recently added 16 new nodes to …
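To keep an eye on a running cleanup, as Jeff suggests above:
  # Poll compaction/cleanup progress every 2 seconds
  watch -n 2 nodetool compactionstats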
… I re-try. Due to the issue we simply omitted running cleanup then, but
as disk space is becoming some sort of bottle-neck again, we need to
re-evaluate this situation ☺
Regards,
Thomas
From: kurt greaves [mailto:k...@instaclustr.com]
Sent: Monday, 15 January 2018 06:18
To: User
Subject: Re: Cleanup blocking snapshots - Options?
Disabling the snapshots is the best and only real option other than
upgrading at the moment. Although apparently it was thought that there was
only a small race condition in 2.1 that triggered this and it wasn't worth
fixing. If you are triggering it easily maybe it is worth fixing in 2.1 as
well.
I see the SSTable in this log statement: Stream context metadata (along
with a bunch of other files), but I do not see it in the list of files
"Opening" (which I see quite a bit of, as expected).
Safe to try moving that file off server (to a backup location)? If I tried
this, would I want to …
Check the SSTable is actually in use by cassandra; if it's missing a component
or otherwise corrupt it will not be opened at run time, and so not included in
all the fun games the other SSTables get to play.
If you have the last startup in the logs, check for an "Opening…" message or an
ERROR …
As it turns out (…
On Thu, Nov 7, 2013 at 7:01 PM, Krishna Chaitanya wrote:
> Check if it's an issue with permissions or broken links.
I don't think permissions are an issue. You might be on to something
regarding the links.
I've been seeing this on 4 nodes, configured identically.
Here's what I think the problem is …
Check if it's an issue with permissions or broken links.
On Nov 6, 2013 11:17 AM, "Elias Ross" wrote:
> I'm seeing the following:
>
> Caused by: java.lang.RuntimeException: java.io.FileNotFoundException:
> /data05/rhq/data/rhq/six_hour_metrics/rhq-six_hour_metrics-ic-1-Data.db (No
> such file or directory)
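One way to check for the permission/symlink problems suggested above (paths
taken from the stack trace):
  # Any dangling symlinks under the data directory?
  find /data05/rhq/data -xtype l
  # Ownership and permissions on the affected table directory
  ls -l /data05/rhq/data/rhq/six_hour_metrics/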
On Wed, Nov 6, 2013 at 9:10 AM, Keith Freeman <8fo...@gmail.com> wrote:
> Is it possible that the keyspace was dropped then re-created (
> https://issues.apache.org/jira/browse/CASSANDRA-4857)? I've seen similar
> stack traces in that case.
>
>
Thanks for the pointer.
There's a program (RHQ) that …
Is it possible that the keyspace was dropped then re-created (
https://issues.apache.org/jira/browse/CASSANDRA-4857)? I've seen similar
stack traces in that case.
On 11/05/2013 10:47 PM, Elias Ross wrote:
I'm seeing the following:
Caused by: java.lang.RuntimeException: java.io.FileNotFoundException: …
> But, that is still awkward. Does cleanup take so much disk space to
> complete the compaction operation? In other words, twice the size?
Not really, but logically yes.
According to the 1.0.7 source, cleanup checks if there's enough space to
cover the worst-case scenario, as below. If not, the exception …
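A rough pre-flight check along those lines (data path assumes a default
install):
  # Free space on the data volume vs. the largest SSTable data files
  df -h /var/lib/cassandra/data
  find /var/lib/cassandra/data -name "*-Data.db" -exec du -h {} + | sort -h | tail -5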
Thanks for the answers.
I got it. I was using cleanup because I thought it would delete the
tombstones.
But, that is still awkward. Does cleanup take so much disk space to
complete the compaction operation? In other words, twice the size?
Best regards,
Víctor Hugo Molinar - @vhmolinar
Hi Victor,
As Andrey said, running cleanup doesn't work as you expect.
> The reason I need to clean things is that I won't need most of my
> inserted data on the next day.
Deleted objects (columns/records) become deletable from the sstable file when
they get expired (after gc_grace_seconds).
Such d…
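If the goal is for deletes to become purgeable sooner, the relevant knob is
gc_grace_seconds; a sketch (keyspace/table names hypothetical, and lowering
it is only safe if repairs run more often than the new value):
  # One hour instead of the default 10 days (864000 seconds)
  cqlsh -e "ALTER TABLE my_ks.my_table WITH gc_grace_seconds = 3600;"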
On Tue, May 28, 2013 at 7:39 AM, Víctor Hugo Oliveira Molinar wrote:
> So I'd like to know more about what happens in a cleanup operation.
> Appreciate any help.
./src/java/org/apache/cassandra/db/compaction/CompactionManager.java,
line 591 of 1175:
logger.info("Cleaning up " + sstable);
Cleanup removes data which doesn't belong to the current node. You have to
run it only if you move (or add new) nodes. In your case there is no
reason to do it.
On Tue, May 28, 2013 at 7:39 AM, Víctor Hugo Oliveira Molinar <
vhmoli...@gmail.com> wrote:
> Hello everyone.
> I have a daily main…
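A typical sequence implied by that advice (keyspace name hypothetical):
  # After a new node has finished bootstrapping, on each pre-existing node:
  nodetool cleanup my_keyspace
  # This rewrites SSTables locally, dropping ranges the node no longer owns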
What version of Cassandra are you using? If you're using 1.2.0 (or *were*
using 1.2.0 when the 2 nodes were removed), you might be seeing
https://issues.apache.org/jira/browse/CASSANDRA-5167.
> Or I have to delete the row in the table
That should work.
On Mon, May 6, 2013 at 4:22 PM, Shahryar S…
If you don't need LCS, it will be safer to move back to STCS.
If you need LCS, you can manually demote the sstables by editing the
manifest file.
I don't work at DataStax, and this is a workaround I've just found. Please
test it well before applying it on your production service.
Background:
You h…
Thanks, good to know there's a fix on the way.
In the meantime is there any way to work around this? Or perhaps should I
move back to SizeTiered until 1.0.9 is released?
On Wed, Mar 14, 2012 at 3:00 PM, Maki Watanabe wrote:
> Fixed in 1.0.9, 1.1.0
> https://issues.apache.org/jira/browse/CASSANDRA-3989
Fixed in 1.0.9, 1.1.0
https://issues.apache.org/jira/browse/CASSANDRA-3989
You had better avoid using cleanup/scrub/upgradesstables on 1.0.7 if you
can, though it will not corrupt sstables.
2012/3/14 Thomas van Neerijnen:
> Hi all
>
> I am trying to run a cleanup on a column family and am g…
Thanks, folks.
I think I must have read compaction, thought cleanup, and gotten muddled
from there.
David
On Nov 30, 2011 6:45 PM, "Edward Capriolo" wrote:
Your understanding of nodetool cleanup is not correct. Cleanup is used only
after cluster balancing like adding or removing nodes. It removes data that
does not belong on the node anymore (in older versions it removed hints as
well).
Your debate is about needing to run compaction. In a write-only workload …
I believe you are misunderstanding what cleanup does. Cleanup is used
to remove data from a node that the node no longer owns. For example,
when you move a node in the ring, it changes responsibility and gets
new data, but does not automatically delete the data it used to be
responsible for but no longer owns.
> is there a technical problem with running a nodetool move on a node while a
> cleanup is running?
Cleanup is removing data that the node is no longer responsible for, while move
is first removing *all* data from the node and then streaming new data to it.
I'd put that in the crossing the streams …