>
> Perhaps a log structured database with immutable data files is not best suited
> for this use case?
Perhaps not, but I have other data structures I'm moving to Cassandra as
well. This is just the first. Cassandra has actually worked quite well for
this first step, in spite of it not being an ideal fit.
On Tue, Jan 28, 2014 at 7:57 AM, Robert Wille wrote:
> I have a dataset which is heavy on updates. The updates are actually
> performed by inserting new records and deleting the old ones the following
> day. Some records might be updated (replaced) a thousand times before they
> are finished.
>
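A sketch in CQL of the update pattern described above; the table and column names are hypothetical, purely for illustration:

```sql
-- Hypothetical table: each logical record is "updated" by writing a
-- new row, then deleting the superseded row the following day.
INSERT INTO records (id, version, data)
VALUES (42, 2, 'new contents');

-- The next day, the old row is deleted, which leaves a tombstone
-- behind until compaction purges it:
DELETE FROM records WHERE id = 42 AND version = 1;
```

With a thousand replacements per record, each read potentially has to skip over many such tombstones until compaction catches up.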
LeveledCompactionStrategy is ideal for update-heavy workloads. If you are
using a pre-1.2.8 version, make sure you raise sstable_size_in_mb to
the new default of 160.
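For reference, switching a table to LCS with that sstable size can be done with an ALTER TABLE; the keyspace and table names here are placeholders:

```sql
-- Placeholder keyspace/table names; sets LeveledCompactionStrategy
-- with the 160 MB sstable target mentioned above.
ALTER TABLE my_keyspace.my_table
  WITH compaction = { 'class' : 'LeveledCompactionStrategy',
                      'sstable_size_in_mb' : 160 };
```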
Also, keep an eye on "Average live cells per slice" and "Average tombstones
per slice" (available in versions > 1.2.11 - so I g
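Those two metrics appear in `nodetool cfstats` output. A minimal sketch of pulling them out of that text; the sample output below is illustrative, with made-up numbers, not taken from a real node:

```python
import re

# Illustrative excerpt of `nodetool cfstats` output; the numbers are
# invented for this sketch.
CFSTATS_SAMPLE = """\
        Average live cells per slice (last five minutes): 1.5
        Average tombstones per slice (last five minutes): 800.0
"""

def slice_metrics(cfstats_text):
    """Extract the live-cell and tombstone per-slice averages."""
    metrics = {}
    for label in ("Average live cells per slice",
                  "Average tombstones per slice"):
        # Match the label, skip the parenthesized window, grab the number.
        match = re.search(re.escape(label) + r".*?:\s*([\d.]+)", cfstats_text)
        if match:
            metrics[label] = float(match.group(1))
    return metrics

print(slice_metrics(CFSTATS_SAMPLE))
```

A tombstones-per-slice average that dwarfs live cells per slice, as in this made-up sample, is the signature of reads scanning mostly deleted data.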
I have a dataset which is heavy on updates. The updates are actually
performed by inserting new records and deleting the old ones the following
day. Some records might be updated (replaced) a thousand times before they
are finished.
As I watch SSTables get created and compacted on my staging server