> I don't know. How do I find out? The only mention of query plans in
> Cassandra I found is your article on your site, from 2011 and covering
> version 0.8.
See the help for TRACE in cqlsh
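For reference, cqlsh's request tracing is toggled per session; a minimal illustration (the table and key below are made up):

```sql
-- Enable tracing; every subsequent request prints a trace showing
-- the steps (and timings) taken by the coordinator and replicas.
TRACING ON;

-- An illustrative query; table name and key are hypothetical.
SELECT * FROM users WHERE id = 1;

-- Turn it back off when done, since tracing adds overhead.
TRACING OFF;
```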
My general approach is to solve problems with the read path by making changes
to the write path. So
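A hypothetical sketch of that idea for the filtered query in this thread: instead of a secondary index plus ALLOW FILTERING, maintain a second table on the write path, partitioned by the value you want to filter on (all names below are made up):

```sql
-- Denormalized lookup table: the read becomes a direct partition
-- lookup instead of a filtered scan over a secondary index.
CREATE TABLE items_by_other_val (
    other_val text,
    id        timeuuid,
    PRIMARY KEY (other_val, id)
);

-- On every write to the base table, also write here:
INSERT INTO items_by_other_val (other_val, id)
VALUES ('other_val_1', now());

-- The filtered query from the thread then becomes:
SELECT id FROM items_by_other_val
WHERE other_val = 'other_val_1' LIMIT 2;
```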
Hi,
by the way, some of the issues are summarised here:
https://issues.apache.org/jira/browse/CASSANDRA-6586 and here:
https://issues.apache.org/jira/browse/CASSANDRA-6587.
regards,
ondrej cernos
On Tue, Jan 14, 2014 at 9:48 PM, Ondřej Černoš wrote:
> Hi,
>
> thanks for the answer and sorry
Hi,
thanks for the answer and sorry for the delay. Let me answer inline.
On Wed, Dec 18, 2013 at 4:53 AM, Aaron Morton wrote:
> > * select id from table where token(id) > token(some_value) and
> secondary_index = other_val limit 2 allow filtering;
> >
> > Filtering absolutely kills the performance.
Thanks all for your responses. We've downgraded from 2.0.3 to 2.0.0 and
everything became normal.
2013/12/8 Nate McCall
> If you are really set on using Cassandra as a cache, I would recommend
> disabling durable writes for the keyspace(s)[0]. This will bypass the
> commitlog (the flushing/rotation of which may be a good-sized portion of
> your performance problems given the number of tables).
> * select id from table where token(id) > token(some_value) and
> secondary_index = other_val limit 2 allow filtering;
>
> Filtering absolutely kills the performance. On a table populated with 130,000
> records, a single-node Cassandra server (on my i7 notebook, 2GB of JVM heap)
> and secondary
Hi all,
we are reimplementing a legacy interface of an inventory-like service
(currently built on top of mysql) on Cassandra and I thought I would share
some findings with the list. The interface semantics are given and cannot be
changed. We chose Cassandra due to its multiple datacenter capabilities.
If you are really set on using Cassandra as a cache, I would recommend
disabling durable writes for the keyspace(s)[0]. This will bypass the
commitlog (the flushing/rotation of which may be a good-sized portion of
your performance problems given the number of tables).
[0]
http://www.datastax.com/do
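A hedged sketch of the statement in question (the keyspace name is made up). Writes to such a keyspace skip the commitlog, so memtable data not yet flushed to SSTables is lost on a crash -- acceptable for a cache, not for a source of truth:

```sql
ALTER KEYSPACE my_cache WITH durable_writes = false;
```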
On Thu, Dec 5, 2013 at 6:33 AM, Alexander Shutyaev wrote:
> We've plugged it into our production environment as a cache in front of
> postgres. Everything worked fine, we even stressed it by explicitly
> propagating about 30G (10G/node) data from postgres to cassandra.
>
If you just want a caching
Thanks for your answers,
Jonathan, yes, it was load avg, and iowait was lower than 2% the whole time -
the only load was user CPU.
Robert, we had -Xmx4012m, which was automatically calculated by the default
cassandra-env.sh (1/4 of total memory on our 16G machines) - we didn't change that.
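For what it's worth, the 2.0-era cassandra-env.sh default does work out to roughly a quarter of RAM on a 16G box. A sketch of that calculation (the real script reads system memory from the OS; 16384 MB is assumed here to match the thread):

```shell
# Default MAX_HEAP_SIZE heuristic: the larger of
#   min(1/2 RAM, 1024 MB)  and  min(1/4 RAM, 8192 MB).
system_memory_in_mb=16384   # assumed 16 GB machine

half_mb=$((system_memory_in_mb / 2))
quarter_mb=$((system_memory_in_mb / 4))

# Cap the candidates.
if [ "$half_mb" -gt 1024 ]; then half_mb=1024; fi
if [ "$quarter_mb" -gt 8192 ]; then quarter_mb=8192; fi

# Take the larger of the two capped candidates.
if [ "$half_mb" -gt "$quarter_mb" ]; then
    max_heap_mb=$half_mb
else
    max_heap_mb=$quarter_mb
fi

echo "MAX_HEAP_SIZE=${max_heap_mb}M"
```

For a 16 GB machine this prints 4096M, i.e. the 4G heap mentioned above.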
2013/12/5 Robert C
On Thu, Dec 5, 2013 at 4:33 AM, Alexander Shutyaev wrote:
> Cassandra version is 2.0.3. ... We've plugged it into our production
> environment as a cache in front of postgres.
>
https://engineering.eventbrite.com/what-version-of-cassandra-should-i-run/
> What can be the reason? Can it be high n
Do you mean high CPU usage or high load avg? (20 indicates load avg to
me.) High load avg means processes are waiting on something, often disk I/O.
Run "iostat -dmx 1 100" to check your disk stats; you'll see the columns
that indicate MB/s read & write as well as % utilization.
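A small hedged helper along those lines: flag devices whose %util (the last column of "iostat -dmx" output) is above 90%. The two sample lines below stand in for real iostat output; in practice you would pipe `iostat -dmx 1 100` in instead:

```shell
# Illustrative sample of iostat -dmx device lines (made-up numbers).
sample='sda 0.0 1.2 5.0 10.0 0.10 0.50 40.0 0.05 3.2 0.8 95.4
sdb 0.0 0.1 1.0 2.0 0.01 0.02 5.0 0.01 0.5 0.2 12.3'

# Print any device whose last column (%util) exceeds 90.
busy=$(printf '%s\n' "$sample" | awk '$NF + 0 > 90 { print "busy:", $1, $NF "%" }')
echo "$busy"
```

Here only sda is flagged, since its %util column reads 95.4.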
Once you understand the bottleneck
Hi all,
We have a 3-node cluster setup, single keyspace, about 500 tables. The
hardware is 2 cores + 16 GB RAM (Cassandra chose a 4GB heap). Cassandra
version is 2.0.3. Our replication factor is 3, read/write consistency is
QUORUM. We've plugged it into our production environment as a cache in
front of postgres.
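As an aside, with RF 3 and QUORUM on both reads and writes, every request must touch 2 of the 3 replicas; the arithmetic is just:

```shell
# QUORUM requires floor(RF / 2) + 1 replicas to respond.
rf=3
quorum=$(( rf / 2 + 1 ))
echo "RF=$rf => quorum=$quorum replicas per request"
```

So a single slow node can drag down latency for every quorum request it participates in.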