Is that one tombstone scanned per query actually causing a problem? I mean a
real issue, not the mere fact that a tombstone is being scanned.
On 24/10/2024 04:56, Naman kaushik wrote:
Thanks everyone for your responses.
We have two columns with list types, and after
using sstabledump, we found that the
solely used for data processing.
Despite no update or delete operations occurring, I'm observing one
tombstone scanned per query. The TTL is set to 0, and I’ve manually
attempted to compact the table on each node, yet the tombstone remains.
What could be the possible reason for this behavior?
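For what it's worth, non-frozen collection columns are a classic source of exactly this symptom: when a list is INSERTed (or assigned as a whole), Cassandra first writes a range tombstone to clear any previous contents of that collection, even on a brand-new row, and that marker can show up in the per-read tombstone count. A minimal sketch to confirm it (keyspace, table and column names here are made up, not yours):

  CREATE TABLE demo.items (id int PRIMARY KEY, tags list<text>);
  INSERT INTO demo.items (id, tags) VALUES (1, ['a', 'b']);
  -- then: nodetool flush demo items
  -- and:  sstabledump <path-to-Data.db>
  -- the dump should show a deletion marker (range tombstone) covering the
  -- 'tags' cells, which is what the read path reports as a scanned tombstone.

If that matches what sstabledump shows, the single tombstone is expected behaviour for list/set/map writes and is harmless at that volume.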
Are you using collections?
—
Jon Haddad
Rustyrazorblade Consulting
rustyrazorblade.com
On Tue, Oct 8, 2024 at 10:52 PM Naman kaushik wrote:
Hi Community,
We are currently using Cassandra version 4.1.3 and have encountered an
issue related to tombstone generation. We have two tables storing monthly
data: table_september and table_october. Each table has a TTL of 30 days.
For the month of October, data is being inserted into the
Hello Aneesh,
Reading your message and the answers given, I really think this post I wrote
about 3 years ago now (how quickly time goes by...) about tombstones
might be of interest to you:
https://thelastpickle.com/blog/2016/07/27/about-deletes-and-tombstones.html.
Your problem is not related to
On Tue, Jun 18, 2019 at 8:06 AM ANEESH KUMAR K.M wrote:
This is nearly impossible to answer without much more info, but I suspect you
either:
- are using very weak consistency levels, or have some weirdness with data
  centers / availability zones (like SimpleStrategy plus LOCAL_* consistency), or
- have bad clocks / no NTP / wrong time zones.
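One quick way to test the first theory is to force both the DELETE and the verification read to QUORUM from cqlsh and see whether the row stays deleted; a rough sketch, with made-up keyspace/table names:

  CONSISTENCY QUORUM;
  DELETE FROM myks.sessions WHERE id = 42;
  SELECT * FROM myks.sessions WHERE id = 42;
  -- With RF=3 and QUORUM on both the write and the read, a delete acknowledged
  -- by two replicas cannot be "resurrected" by the stale third replica.

If the row still reappears, compare clocks on the app servers: a DELETE carrying an older timestamp than the original INSERT is silently ignored, which is why bad NTP shows up as "deletes not working".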
> On Jun 17, 2019, at 11:05 PM, ANEESH KUMAR K.M wrote:
Hi,
I am using a Cassandra cluster with 3 nodes hosted on AWS. We also have a
NodeJS web application behind AWS ELB. The issue is that when I add 2 or more
(NodeJS) servers to the AWS ELB, the delete queries stop working on Cassandra.
It works when there is only one server in
nt any deleted data reappearing.
Regards
Alok
> On 9 Apr 2019, at 15:56, Jon Haddad wrote:
Normal deletes are fine.
Sadly there's a lot of hand wringing about tombstones in the generic
sense which leads people to try to work around *every* case where
they're used. This is unnecessary. A tombstone over a single row
isn't a problem, especially if you're only fetchi
" would be affected, right? Would the query "SELECT * FROM
myTable WHERE course_id = 'C' AND assignment_id = 'A2';" be affected too?
For the query "SELECT * FROM myTable WHERE course_id = 'C';", to work around the
tombstone problem, we are thinking abou
ht after a
successful repair.
We have a few posts on our blog <http://thelastpickle.com/blog/> that cover
the tombstones and compaction strategies topic (search for "tombstone" on
that page), notably this one:
http://thelastpickle.com/blog/2016/07/27/about-deletes-and-tombstones.html
Cheers,
Hey guys,
Can someone give me some ideas, or link to some good material, for determining a
good / aggressive tombstone strategy? I want to make sure my tombstones are
getting purged as soon as possible to reclaim disk.
Thanks
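As a starting point, the knobs that govern how quickly tombstones become purgeable are gc_grace_seconds plus the per-table compaction sub-properties; something along these lines (table name and values are illustrative only, not recommendations):

  ALTER TABLE myks.mytable
    WITH gc_grace_seconds = 86400      -- 1 day; only safe if repairs finish well inside this window
    AND compaction = {
      'class': 'SizeTieredCompactionStrategy',
      'tombstone_threshold': '0.1',               -- consider a single-SSTable compaction at 10% droppable tombstones
      'tombstone_compaction_interval': '86400',   -- don't retry the same SSTable more than once a day
      'unchecked_tombstone_compaction': 'true'    -- skip the overlap pre-check and compact anyway
    };

Lowering gc_grace_seconds is the riskiest lever: if a replica misses a delete and isn't repaired within that window, the deleted data can come back.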
Yes it does. Consider if it didn't and you kept writing to the same
partition, you'd never be able to remove any tombstones for that partition.
On Tue., 6 Nov. 2018, 19:40 DuyHai Doan wrote:
Hello all
I have tried to sum up all rules related to tombstone removal:
--
Given a tombstone written at timestamp (t) for a partition key (P) in
SSTable (S1), this tombstone will be removed:
1) after (t) + gc_grace_seconds has elapsed, and only if every other SSTable
containing data for partition (P) older than (t) takes part in the same compaction
Hi All,
I have shared my experience of tombstone clearing in this blog post.
Sharing it in this forum for wider distribution.
https://medium.com/cassandra-tombstones-clearing-use-case/the-curios-case-of-tombstones-d897f681a378
Thanks,
Charu
hing else lined up properly to solve a queue problem.
>
> Sean Durity
From: Abhishek Singh
Sent: Tuesday, June 19, 2018 10:41 AM
To: user@cassandra.apache.org
Subject: [EXTERNAL] Re: Tombstone
The partition key is made of datetime (basically the date truncated to the hour)
and a bucket. I think your RCA may be correct, since we are deleting the
partition rows one by one, not in a batch, so files may be overlapping
Hi all,
We are using Cassandra for storing events which are time-series based, for batch
processing. Once a particular batch (based on the hour) is processed we delete
the entries, but we were left with almost 18% of deletes marked as tombstones.
I ran compaction on the particular CF but the tombstone count didn't come down.
Can anyone suggest the optimal tuning / recommended practice for compaction
strategy and gc_grace period with 100k entries and deletes every hour?
Warm Regards
Abhishek Singh
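If the hourly batches can carry a TTL instead of (or alongside) the explicit deletes, a TTL-plus-TWCS layout lets whole expired time windows be dropped as files instead of being compacted tombstone by tombstone. A sketch under those assumptions (names, windows and TTLs are illustrative; TWCS ships with 3.0.8/3.8+):

  CREATE TABLE events.batches (
      bucket_hour timestamp,
      bucket      int,
      event_id    timeuuid,
      payload     blob,
      PRIMARY KEY ((bucket_hour, bucket), event_id)
  ) WITH default_time_to_live = 172800       -- 2 days, long enough to cover reprocessing
    AND gc_grace_seconds = 10800             -- only if repairs reliably fit in 3 hours
    AND compaction = {
      'class': 'TimeWindowCompactionStrategy',
      'compaction_window_unit': 'HOURS',
      'compaction_window_size': '6'
    };

This doesn't remove the existing 18% of tombstones, but it keeps new ones from piling up in the same way.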
"...handoff to replay the database mutations the node missed while it was down.
Cassandra does not replay a mutation for a tombstoned record during its grace
period."
The tombstone here is on the recovered node or on the coordinator?
The tombstone is a special write record, so it must have a writetime.
We could compare the writetime between the version in the hint and the
version of the tombstone, which is enough to make the choice,
Dear community,
I have been using TWCS in my lab, with TTL'd data.
In the debug log there is always the sentence:
"TimeWindowCompactionStrategy.java:65 Disabling tombstone compactions for
TWCS". Indeed, the line is always repeated.
What does it actually mean? If my data gets expir
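That log line just means TWCS opts out of the single-SSTable "tombstone compactions" that STCS/LCS trigger via tombstone_threshold; the assumption is that fully expired windows will be dropped wholesale instead. If overlapping timestamps keep old windows alive past their TTL, the behaviour can be switched back on explicitly; a hedged sketch with a made-up table name:

  ALTER TABLE metrics.raw WITH compaction = {
      'class': 'TimeWindowCompactionStrategy',
      'compaction_window_unit': 'DAYS',
      'compaction_window_size': '1',
      -- TWCS only runs tombstone compactions when this is explicitly enabled:
      'unchecked_tombstone_compaction': 'true'
  };

Fully expired windows are still dropped either way; the setting only affects whether partially expired SSTables get rewritten early.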
Hello Simon.
Tombstones are a tricky topic in Cassandra that has brought up a lot of questions
over time. I laid out my understanding in a blog post last year and thought
it might be of interest to you; even though things have probably evolved a bit,
the principles and tuning have not changed that much, I guess
Got it. Thank you.
From: Meg Mara
Date: 2017-12-05 01:54
To: user@cassandra.apache.org
Subject: RE: Tombstone warnings in log file
Simon,
It means that in processing your queries, Cassandra is going through that many
tombstone cells in order to return your results. It is because some of the
partitions that you are querying for have already expired. The warning is just
Cassandra's way of letting you know that your
Hi,
My cluster is running 2.2.8, with no updates or deletions, only insertions with TTL.
I saw the below warnings recently. What is the meaning of them, and what is the
impact?
WARN [SharedPool-Worker-2] 2017-12-04 09:32:48,833 SliceQueryFilter.java:308 -
Read 2461 live and 1978 tombston
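Those messages come from the per-read tombstone thresholds in cassandra.yaml; on 2.2 the relevant settings look roughly like this (the values shown are the usual defaults, so double-check your own file):

  # cassandra.yaml
  tombstone_warn_threshold: 1000       # warn when a single query scans more than this many tombstones
  tombstone_failure_threshold: 100000  # abort the query (TombstoneOverwhelmingException) beyond this

In a TTL-only table the tombstones are simply expired cells; they only disappear once gc_grace_seconds has passed and compaction rewrites the SSTables that contain them.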
On Sat, Sep 2, 2017 at 8:34 PM, Jeff Jirsa wrote:

> If you're on 3.0 (3.0.6 or 3.0.8 or newer, I don't remember which), TWCS
> was designed for ttl-only time series use cases.
>
> Alternatively, if you have IO to spare, you may find LCS works as well
> (it'll cause quite a bit more compaction, but a much higher chance to
> compact away tombstones).
>
> There are also tombstone focused sub properties to more aggressively compact
> sstables that have a lot of tombstones - check the docs for "unchecked
> tombstone compaction" and "tombstone threshold" - enabling those will enable
> more aggressive automatic single-sstable compactions.
Yes, you are right. I am using the STCS compaction strategy with a kind of
timeseries model. Too much disk space has been occupied.
What should I do to stop the disk from filling up?
I only want to keep the most recent 100 days of data, so I set
default_time_to_live = 8640000 (100 days).
I know I need
Path" | java -jar
>>>> /opt/cassandra/tools/jmx/jmxterm-1.0-alpha-4-uber.jar -l
>>>> localhost:7199
>>>>
>>>> In the above, I am using a jmx method. But it seems that the file size
>>>> doesn’t change. My command is wrong ?
>>>
ndra/tools/jmx/jmxterm-1.0-alpha-4-uber.jar -l localhost:7199
>>>>
>>>> In the above, I am using a jmx method. But it seems that the file size
>>>> doesn’t change. My command is wrong ?
>>>>
>>>> > 在 2017年9月1日,下午2:17,Jeff Jirsa >
t;>>
>>> > 在 2017年9月1日,下午2:17,Jeff Jirsa 写道:
>>> >
>>> > User defined compaction to do a single sstable compaction on just that
>>> sstable
>>> >
>>> > It's a nodetool command in very recent versions, or a jmx method
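For reference, the two routes Jeff mentions look roughly like this; paths and file names below are made up, and on 3.0.9 only the JMX route applies, since nodetool compact --user-defined arrived around 3.4:

  # newer versions: hand nodetool the exact SSTable to compact
  nodetool compact --user-defined /var/lib/cassandra/data/ks1/big_table-abc123/mc-1234-big-Data.db

  # older versions: call the CompactionManager MBean, e.g. via jmxterm
  echo "run -b org.apache.cassandra.db:type=CompactionManager forceUserDefinedCompaction /var/lib/cassandra/data/ks1/big_table-abc123/mc-1234-big-Data.db" \
    | java -jar jmxterm-1.0-alpha-4-uber.jar -l localhost:7199

If the file size doesn't change afterwards, the usual reasons are that the tombstones are still inside gc_grace_seconds, or that the same partitions also live in other SSTables, so the data they shadow cannot be purged yet.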
On Aug 31, 2017, at 11:04 PM, qf zhou wrote:

I am using a cluster with 3 nodes and the cassandra version is 3.0.9. I have
used it for about 6 months. Now each node has about 1.5T of data on disk.
I found some sstable files are over 300G. Using the sstablemetadata command, I
found: Estimated droppable tombstones: 0.9622972799707109.
It is obvious that too much tombstone data exists.
The default_time_to_live = 8640000 (100 days) and gc_grace_seconds = 432000
(5 days). Using nodetool compactionstats, I found that some compaction
processes exist.
So I really want to know how to clear the tombstone data? Otherwise the disk
will be full.
According to
http://docs.datastax.com/en/cql/3.1/cql/ddl/ddl_when_use_index_c.html#concept_ds_sgh_yzz_zj__upDatIndx
> Cassandra stores tombstones in the index until the tombstone limit
reaches 100K cells. After exceeding the tombstone limit, the query that
uses the indexed value will fail.
I very much appreciate all of you; I'll study the blog.
From: Alain RODRIGUEZ [mailto:arodr...@gmail.com]
Sent: 16 November 2016 23:26
To: user@cassandra.apache.org
Cc: Fabrice Facorat
Subject: Re: Some questions to updating and tombstone
Hi Boying,
"Old value is not a tombstone, but remains until compaction"
Be careful, the above is generally true but not necessarily.
Tombstones can actually be generated while using UPDATE in some corner
cases: when using collections or prepared statements.
I wrote a detailed blog post about deletes and
es, don't generate them ;)
More seriously, tombstones are generated when:
- doing a DELETE
- TTL expiration
- setting a column to NULL
(a short example of each follows below)
However, tombstones are an issue only if, for the same value, you have many
tombstones (i.e. you keep overwriting the same value with data and
tombstones). Having 1 tombstone for 1 value is not an issue; having 1000
tombstones for 1 value is a problem. Does your use case really overwrite data
with DELETE or NULL?
So what you may want to know is how many tombstones you have on
average when reading a value. This
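To make those three cases concrete, a tiny CQL sketch (hypothetical table) where each statement ends up producing a tombstone:

  CREATE TABLE ks.kv (id int PRIMARY KEY, val text);
  DELETE FROM ks.kv WHERE id = 1;                            -- explicit delete
  INSERT INTO ks.kv (id, val) VALUES (2, 'x') USING TTL 60;  -- becomes a tombstone when the TTL expires
  UPDATE ks.kv SET val = null WHERE id = 3;                  -- assigning NULL writes a cell tombstone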
Subject: RE: Some questions to updating and tombstone
Hi Boying,
I agree with Vladimir. If compaction does not compact the two sstables with the
updates soon, disk space will be wasted. For example, if the updates are not
close in time, the first update might be in a big sstable by the time the second
update is being written in a
Hi Boying,
UPDATE writes a new value with a new timestamp. The old value is not a
tombstone, but remains until compaction. gc_grace_period is not related to this.
Best regards, Vladimir Yudovin,
Winguzone - Hosted Cloud Cassandra
Launch your cluster in minutes.
On Mon, 14 Nov 2016 03:02:21 -0500, Lu, Boying wrote:
Hi, All,
Will Cassandra generate a new tombstone when updating a column by using a
CQL UPDATE statement?
And is there any way to get the number of tombstones of a column family, since
we want to avoid generating too many tombstones within gc_grace_period?
Thanks
Boying
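On the second question: there isn't an exact per-table tombstone counter, but two operational views come close; a rough sketch with made-up keyspace/table names (and note that a plain UPDATE of a regular column does not create a tombstone — only setting it to null, deleting, letting a TTL expire, or replacing a whole collection does):

  # read-path view: average/maximum tombstones scanned per slice (last five minutes)
  nodetool cfstats myks.mytable        # called "nodetool tablestats" in newer releases

  # on-disk view: per-SSTable ratio of droppable (past gc_grace) tombstones
  sstablemetadata /var/lib/cassandra/data/myks/mytable-*/*-Data.db | grep -i droppable

Watching "tombstones per slice" is usually the more useful of the two, since it reflects what reads actually hit within gc_grace_period.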
Hi, could you please clarify: 100k tombstone limit for SE is per CF,
cf-node, original sstable or (very unlikely) partition?
Thanks!
--
Oleg Krayushkin
...the precise command line for that, does it run on several nodes at the same
time, etc...
What is your gc_grace_seconds?
Do you see errors in your logs that would be linked to repairs (Validation
failure or failure to create a merkle tree)?
You seem to mention a single node that went down but say the whole cluster seem
... the node that went down and the fact that deleted data comes back to life?
What is your strategy for cyclic maintenance repair (schedule, command line or
tool, etc...)?
Thanks,
On Thu, Sep 29, 2016 at 10:40 AM Atul Saroha wrote:
Hi,
We have seen a weird behaviour in cassandra 3.6.
Once, our node went down for more than 10 hrs. After that, we ran
nodetool repair multiple times, but tombstones are not getting synced properly
across the cluster. On a day-to-day basis, on expiry of every grace period,
deleted records start
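The usual way to keep deletes from resurrecting is to make sure every node gets a full repair of the affected keyspace well inside gc_grace_seconds (10 days by default), so the tombstones themselves reach all replicas before they become purgeable. Roughly, with made-up names and flags that may vary slightly by version:

  # on each node, comfortably inside the gc_grace window (e.g. weekly for the default 10 days)
  nodetool repair -full myks mytable

  # check the table's grace period from cqlsh:
  #   SELECT gc_grace_seconds FROM system_schema.tables
  #   WHERE keyspace_name = 'myks' AND table_name = 'mytable';

If a node has been down longer than gc_grace_seconds, the safer recovery is to remove and re-bootstrap it rather than just repairing it.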
> ...ns of cassandra that will multiple tombstones during
> compaction. 2.1.12 SHOULD correct that, if you're on 2.1.
>
> From: Kai Wang
> Date: Monday, December 7, 2015 at 3:46 PM
> To: "user@cassandra.apache.org"
> Subject: lots of tombstone after compaction
The nulls in the original data created the tombstones. They won’t go away until
gc_grace_seconds have passed (default is 10 days).
On Dec 7, 2015, at 4:46 PM, Kai Wang wrote:
I bulkloaded a few tables using CQLSSTableWriter/sstableloader. The data is a
large amount of wide rows with lots of nulls. It takes one day or two for
the compaction to complete. The sstable count is in the single digits. Maximum
partition size is ~50M and the mean size is ~5M. However, I am seeing frequent
read
Hello, I have a question about the tombstone removal process for leveled
compaction strategy. I am migrating a lot of text data from a cassandra
column family to elastic search. The column family uses leveled compaction
strategy. As part of the migration, I am deleting the migrated rows from
Great!!! Thanks Andrei!!! That's the answer I was looking for :)
Thanks
Anuj Wadehra
Sent from Yahoo Mail on Android
From: "Andrei Ivanov"
Date: Thu, 23 Apr, 2015 at 11:57 pm
Subject: Re: Drawbacks of Major Compaction now that Automatic Tombstone
Compaction Exists
Just in case it
Thanks Robert!!
The JIRA was very helpful in understanding how the tombstone threshold is
implemented. And the ticket also says that running major compaction weekly is an
alternative. I actually want to understand: if I run major compaction on a cf
with 500gb of data and a single giant file is created, do you see any
problems with Cassandra processing
Hi Robert,
Any comments or suggestions ?
Thanks
Anuj Wadehra
Sent from Yahoo Mail on Android
From:"Anuj Wadehra"
Date:Wed, 15 Apr, 2015 at 8:59 am
Subject:Re: Drawbacks of Major Compaction now that Automatic Tombstone
Compaction Exists
Hi Robert,
By automatic tombstone compac
Hi Robert,
By automatic tombstone compaction, I am referring to the tombstone_threshold sub
property under compaction strategy in CQL. It is 0.2 by default. So what I
understand from the Datastax documentation is that even if an sstable does not
find sstables of similar size (STCS), an automatic
On Mon, Apr 13, 2015 at 12:26 PM, Rahul Neelakantan wrote:
> Does that mean once you split it back into small ones, automatic
> compaction will continue to happen on a more frequent basis now that it's
> no longer a single large monolith?
>
That's what the word "size tiered" means in the phras
Rob,
Does that mean once you split it back into small ones, automatic compaction
will continue to happen on a more frequent basis now that it's no longer a
single large monolith?
Rahul
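For the record, the splitting Rahul refers to is done offline with sstablesplit, something like the following (the node must be stopped first, and the path/file name is only illustrative):

  # default chunk size is 50 MB if -s is omitted
  sstablesplit -s 50 /var/lib/cassandra/data/myks/mytable/myks-mytable-ka-1234-Data.db

After a restart the resulting ~50 MB SSTables fall back into the normal size-tiered buckets, which is what lets minor compactions (and the tombstone_threshold logic) pick them up again.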
On Mon, Apr 13, 2015 at 10:52 AM, Anuj Wadehra
wrote:
> Any comments on side effects of Major compaction especially when sstable
> generated is 100+ GB?
>
I have no idea how this interacts with the automatic compaction stuff; if
you find out, let us know?
But if you want to do a major and don't
Any comments on the side effects of major compaction, especially when the
sstable generated is 100+ GB?
After Cassandra 1.2, automatic tombstone compaction occurs even on a single
sstable if the tombstone percentage exceeds the tombstone_threshold sub property
specified in the compaction strategy. So
...of sstables (CASSANDRA-9146). In order to bring the situation under control
and make sure reads are not impacted, we were left with no option but to run
major compaction to ensure that thousands of tiny sstables are compacted.
Queries:
Does major compaction have any drawback after automatic tombstone compaction got
implemented in 1.2 via the tombstone_threshold sub-property (CASSANDRA-3442)?
I understand that the huge SSTable created after major compaction won't be
compacted with new data any time soon, but is
...eout.
The next day the sizes were:
30M  ./md_forcecompact
4.0K ./md_test
304K ./md_test2
30M  ./md_normal
A feel for the data that we have:
8000 rowkeys per day, and columns are added throughout the day; 300K columns on
average per rowkey.
From: Alain RODRIGUEZ [mailto:arodr...@gmail.com]
...frequently updated rows (like when using wide rows / time
series), your only way to get rid of tombstones is a major compaction.
That's how I understand this.
Hope this helps,
C*heers,
Alain
2015-01-30 1:29 GMT+01:00 Mohammed Guller:
> Ravi -
>
> It may help.
To: user@cassandra.apache.org
Subject: RE: Tombstone gc after gc grace seconds
Hi,
I saw there are 2 more interesting parameters –
a. tombstone_threshold - A ratio of garbage-collectable tombstones to all
contained columns, which if exceeded by the SSTable triggers compaction (with
no other
b. unchecked_tombstone_compaction - True enables more aggressive than
normal tombstone compactions. A single SSTable tombstone compaction runs
without checking the likelihood of success. Cassandra 2.0.9 and later.
Could I use these to get what I want?
Problem I am encountering is even long
My understanding is consistent with Alain's: there's no way to force a
tombstone-only compaction; your only option is major compaction. If you're
using size-tiered, that comes with its own drawbacks.
I wonder if there's a technical limitation that prevents introducing a
s
Hi,
I want to trigger just a tombstone compaction after gc grace seconds has
completed, not a full nodetool compact on the keyspace/column family.
Is there any way I can do that?
Thanks
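For anyone landing on this thread later: besides the tombstone_threshold / unchecked_tombstone_compaction sub-properties discussed above, Cassandra 3.10+ added a nodetool command that performs this kind of single-table garbage collection on demand; usage is roughly (keyspace/table names made up):

  # rewrite the table's SSTables individually, dropping data shadowed by tombstones
  nodetool garbagecollect myks mytable

Note that the tombstones themselves still have to wait out gc_grace_seconds before they can be dropped.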