If the data size is not too big, you could try copying the primary key
values out to a file, copying them back into the table, and then running
a compaction. Both the copy and the compaction can be throttled. If the
size is still manageable, you can also fetch the partition key values
first, then loop over them to collect all primary key values into the file.
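If I read the suggestion right, it could look something like the sketch
below in cqlsh (the keyspace, table, and key column names ks.events and
id are made up for illustration; INGESTRATE throttles the re-insert in
rows per second):

```sql
-- export the key values to a CSV file
COPY ks.events (id) TO 'ids.csv';

-- re-insert them, throttled; the re-written cells pick up the
-- table's current default_time_to_live
COPY ks.events (id) FROM 'ids.csv' WITH INGESTRATE = 1000;
```

Compaction can be throttled separately, e.g. with nodetool
setcompactionthroughput, before running the compaction.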
That's an awesome tool! I forgot that it's included in
https://cassandra.apache.org/_/ecosystem.html. 👍
On Thu, 30 Sept 2021 at 16:27, Stefan Miklosovic <
stefan.mikloso...@instaclustr.com> wrote:
Hi Raman,

we at Instaclustr have created a CLI tool (1) which can strip TTLs
from your SSTables, and you can then import them back into your node.
Maybe that is something you will find handy.

We had some customers whose data had expired and who wanted to
resurrect it - so they took SSTables with expire
Thanks Erick for the update.
On Tue, 14 Sept 2021 at 16:50, Erick Ramirez
wrote:
You'll need to write an ETL app (most common case is with Spark) to scan
through the existing data and update it with a new TTL. You'll need to make
sure that the ETL job is throttled down so it doesn't overload your
production cluster. Cheers!
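For reference, the per-row write such a job would issue is just an
update with an explicit TTL - roughly like the sketch below, with a
hypothetical table ks.events and column val (the job reads each row,
then writes the same value back with the new TTL bound in):

```sql
-- hypothetical schema: ks.events (id PRIMARY KEY, val text)
-- re-writing a cell with USING TTL resets that cell's remaining TTL
UPDATE ks.events USING TTL 7884000 SET val = ? WHERE id = ?;
```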
Hi all,

1. I have a table with default_time_to_live = 31536000 (1 year). We want
to reduce the value to 7884000 (3 months).
If we alter the table, is there a way to update the existing data?

2. I have a table without a TTL, and we want to add TTL = 7884000 (3
months) on the table.
If we alter the
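For what it's worth, the alter itself is straightforward - something
like the fragment below, with a made-up table name - but note that
default_time_to_live only applies to writes made after the change;
existing data keeps whatever TTL it was written with (hence the ETL /
re-write approaches discussed above):

```sql
-- affects new writes only; existing cells keep their original TTL
ALTER TABLE ks.events WITH default_time_to_live = 7884000;
```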