Hi Eric and all,
I almost expected this kind of answer. I already ran nodetool
compactionstats to see whether those sstables are being compacted, but on
all nodes there are 0 outstanding compactions (right now, in the morning,
with no tests running on this cluster).
The reported read latency is ab
By chance, are you not performing any reads on that table, only writes? If
you are performing reads, what sorts of reads are you doing?
If you're not doing any reads, please try altering the compaction strategy
options on that table as follows:
ALTER TABLE <table> WITH compaction = {'class':
'SizeTiere
> Is size-tiered compaction easier on the CPU than leveled compaction?
I don't think so. It's easier on I/O though, so if you're not I/O bound,
that probably makes you more likely to become CPU bound.
Have you looked at nodetool setcompactionthroughput?
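To make the size-tiered tradeoff above concrete, here is a minimal Python sketch of the idea behind SizeTieredCompactionStrategy: sstables of similar size are grouped into buckets, and only buckets with enough members get compacted. This is an illustration, not Cassandra's actual code; the sizes, `bucket_low`/`bucket_high` bounds, and `min_threshold` mirror the strategy's default option names but are assumptions here.

```python
# Sketch of size-tiered bucketing (not Cassandra's actual implementation).
# Sizes are hypothetical, in MB.

def bucket_sstables(sizes, bucket_low=0.5, bucket_high=1.5):
    """Group sstable sizes into buckets of 'similar' size."""
    buckets = []  # each bucket is a list of sizes
    for size in sorted(sizes):
        for bucket in buckets:
            avg = sum(bucket) / len(bucket)
            # a size joins a bucket if it is within [low*avg, high*avg]
            if bucket_low * avg <= size <= bucket_high * avg:
                bucket.append(size)
                break
        else:
            buckets.append([size])
    return buckets

def compaction_candidates(buckets, min_threshold=4):
    """Only buckets with at least min_threshold sstables get compacted."""
    return [b for b in buckets if len(b) >= min_threshold]

sizes = [10, 11, 9, 10, 160, 150, 5000]
print(compaction_candidates(bucket_sstables(sizes)))
# -> [[9, 10, 10, 11]]  (the four ~10 MB sstables form one compactable tier)
```

The point of the sketch: a handful of sstables at wildly different sizes never reaches the threshold, which is one way a node can sit at 0 outstanding compactions while sstable counts climb.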
On Tue, Jan 13, 2015 at 4:01 AM, William
Cassandra 2.1.2 with size-tiered compaction worked well during an initial test
run when data was first written to the cluster, but during the second run when
the initial data got overwritten we noticed that two nodes stopped compacting
altogether and the number of SSTables grew dramatically. Wha
On Thu, Jan 15, 2015 at 5:52 AM, Parth Setya wrote:
> I am attempting to add a Cassandra node which has some existing data on it
> to an existing cluster. Is this a legit thing to do?
>
Sure, it's similar to running "nodetool refresh", but without its safety.
It also may interfere with bo
On Thu, Jan 15, 2015 at 9:09 AM, Michał Łowicki wrote:
> We were using LOCAL_QUORUM. C* 2.1.2. Two datacenters. We didn't get any
> exceptions during inserts or deletes. BatchQuery from cqlengine (0.20.0)
> has been used.
>
> If BatchQuery is not used:
>
> Everything is fine. We don't have more
Yes, many sstables can have a huge negative impact on read performance, and
will also create memory pressure on that node.
There are a lot of things that can produce this effect, and it also
strongly suggests you're falling behind on compaction in general (check
nodetool compactionstats, you should
On Thu, Jan 15, 2015 at 6:30 AM, Richard Dawe
wrote:
>
> I thought it might be quorum consistency level, because of the behaviour I
> was seeing with cqlsh. I was testing with ccm with C* 2.0.8, 3 nodes,
> vnodes enabled ("ccm create test -v 2.0.8 -n 3 --vnodes -s"). With all
> three nodes up, my
@DENIZ, Jon's point is that CQL is the new standard, Thrift is frozen and
being deprecated. Anything you build using the Thrift interface will hurt
you over time, so you ought to just go for CQL. There really is next to no
reason not to use CQL aside from personal preference, and that argument
do
It seems like you should be able to solve it with two more queries
immediately after your first query:
SELECT * FROM timeseries WHERE tstamp < ${MIN(firstQuery.tstamp)} ORDER BY tstamp DESC LIMIT 1
SELECT * FROM timeseries WHERE tstamp > ${MAX(firstQuery.tstamp)} LIMIT 1
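The two-neighbor idea above can be sketched in plain Python, with a sorted list standing in for the clustering order of the timeseries table (the table layout and data are assumptions for illustration):

```python
# Sketch: given the timestamps returned by the first query, find the one
# row just before its minimum and the one just after its maximum.
import bisect

def neighbors(tstamps, window):
    """Return (timestamp before window's min, timestamp after window's max),
    with None where no such row exists. tstamps must be sorted."""
    lo, hi = min(window), max(window)
    i = bisect.bisect_left(tstamps, lo)    # first index >= lo
    j = bisect.bisect_right(tstamps, hi)   # first index > hi
    before = tstamps[i - 1] if i > 0 else None
    after = tstamps[j] if j < len(tstamps) else None
    return before, after

series = [100, 200, 300, 400, 500]
print(neighbors(series, [200, 300, 400]))  # -> (100, 500)
```

Note the `ORDER BY tstamp DESC` in the first CQL query plays the same role as taking the element just below `lo` here: without reversing the order, `LIMIT 1` would return the oldest matching row instead of the adjacent one.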
On Tue, Jan 13, 2015 at 9:31 AM, Hugo José Pi
Hi,
We have two tables:
* The first one, *entity*, has a log-like structure: whenever an entity is
modified, we create a new version of it and put it into the table with a new
mtime, which is part of the compound key. The old one is removed.
* The second one, called *entity_by_id*, is a manually managed index for *entity*.
By havi
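The versioning pattern described above can be sketched with two dicts standing in for the two tables (the function name and payloads are hypothetical, for illustration only):

```python
# Sketch of the entity / entity_by_id pattern: 'entity' is keyed by
# (id, mtime); 'entity_by_id' is a manually managed index mapping each
# id to its current mtime.

entity = {}        # (id, mtime) -> payload
entity_by_id = {}  # id -> current mtime

def save_version(eid, mtime, payload):
    """Write the new version, repoint the index, delete the old version."""
    old_mtime = entity_by_id.get(eid)
    entity[(eid, mtime)] = payload
    entity_by_id[eid] = mtime
    if old_mtime is not None and old_mtime != mtime:
        del entity[(eid, old_mtime)]  # remove the superseded version

save_version("e1", 1, "v1")
save_version("e1", 2, "v2")
print(entity)        # -> {('e1', 2): 'v2'}
print(entity_by_id)  # -> {'e1': 2}
```

In Cassandra the "delete the old version" step produces a tombstone, which is one reason this kind of log-plus-index pattern interacts badly with batches when the two writes disagree.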
Hi
I am attempting to add a Cassandra node which has some existing data on it
to an existing cluster. Is this a legit thing to do?
And what will happen if the same data with different timestamps exists on
the node to be added and on the existing cluster?
What will happen if auto_bootstrapping propert
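On the different-timestamps question: Cassandra reconciles conflicting cells by last-write-wins, keeping the cell with the higher write timestamp. A minimal sketch of that rule, with hypothetical `(value, timestamp)` tuples standing in for cells:

```python
# Sketch of last-write-wins reconciliation: for the same key, the cell
# with the newer write timestamp wins.

def reconcile(local, incoming):
    """Merge two maps of key -> (value, timestamp), newest timestamp wins."""
    merged = dict(local)
    for key, (value, ts) in incoming.items():
        if key not in merged or ts > merged[key][1]:
            merged[key] = (value, ts)
    return merged

node = {"row1": ("old", 10)}
joining = {"row1": ("new", 20), "row2": ("x", 5)}
print(reconcile(node, joining))
# -> {'row1': ('new', 20), 'row2': ('x', 5)}
```

So data with an older timestamp on the joining node will simply lose to the cluster's newer copies at read/compaction time, and vice versa.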
Hi Tyler,
Thank you for your quick reply; follow-up inline below.
On 14/01/2015 19:36, "Tyler Hobbs" <ty...@datastax.com> wrote:
On Wed, Jan 14, 2015 at 5:13 PM, Richard Dawe
<rich.d...@messagesystems.com> wrote:
I’ve been trying to find the Java code where the schema migration
Hi,
I'm testing around with Cassandra a fair bit, using 2.1.2, which I know
has some major issues, but it is a test environment. After some bulk
loading, testing with incremental repairs, and running out of heap once, I
found that I now have a quite large number of sstables which are really
small: