It has given me weird performance results. You could try what Netflix is
using; they seem to be pretty happy with this version of Cassandra.
http://techblog.netflix.com/2012/07/benchmarking-high-performance-io-with.html
On Fri, Jul 20, 2012 at 2:34 AM, aaron morton wrote:
> Can a r
One pointer would be to take a memory snapshot and try tuning the GC
around it if you find something fishy. It could be that Cassandra tries to
load everything into memory and then has to do garbage collection, which
adds pause time.
Turn on the GC logging by changing the parameters in conf/cassa
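For Cassandra 1.x those options live in conf/cassandra-env.sh (they ship commented out); a minimal sketch, assuming the default layout and a writable log path:

```shell
# conf/cassandra-env.sh -- enable verbose GC logging
# (the log path below is an assumption; point it anywhere writable)
JVM_OPTS="$JVM_OPTS -XX:+PrintGCDetails"
JVM_OPTS="$JVM_OPTS -XX:+PrintGCDateStamps"
JVM_OPTS="$JVM_OPTS -XX:+PrintHeapAtGC"
JVM_OPTS="$JVM_OPTS -XX:+PrintTenuringDistribution"
JVM_OPTS="$JVM_OPTS -XX:+PrintGCApplicationStoppedTime"
JVM_OPTS="$JVM_OPTS -Xloggc:/var/log/cassandra/gc.log"
```

Restart the node and watch gc.log for long "Total time for which application threads were stopped" entries.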
Go and look in the data directory on disk and check whether the files
still exist there.
On Thu, Jul 19, 2012 at 2:56 PM, Kirk wrote:
> What does "show schema" show? Is the CF showing up?
>
> Are the data files for the CF on disk?
>
> If you poke around with the system CFs, is there any data sti
Thanks for the suggestion. I was able to get better results by tuning the
GC settings, but they are still not that great. I was reading the Netflix
blog for the settings they used (which they have posted there), but I
could not get close to the numbers they report.
http://techblog.netflix.com/2012/07/be
I was aware of the read-then-write pattern for counters, but not
secondary indexes. I'll have to take a look into that.
Thanks.
On 07/20/2012 02:32 AM, aaron morton wrote:
I'm assuming the logical row is in a CQL 3 CF with a composite
PRIMARY KEY http://www.datastax.com/dev/blog/whats-new-in-cql
Can I drop a composite index in the CLI? What's the syntax? Or do I have to use cqlsh?
[default@mobilelogks] drop index on
MobilePushNotificationLog.retryCount;
Column 'retryCount' does
not have an index.
[default@mobilelogks] help drop index;
drop index on .;
Drops index on specified column of t
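If the CLI won't cooperate, cqlsh can drop it by index name; a sketch, assuming the auto-generated "<cf>_<column>_idx" naming (verify the actual name in "show schema" first):

```shell
cqlsh
cqlsh> USE mobilelogks;
cqlsh:mobilelogks> DROP INDEX MobilePushNotificationLog_retryCount_idx;
```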
No, but I see a "Creating new index" message after the most recent restart
of Cassandra, which was at 2012-07-18 13:51:37,306.
grep -i "index" /data/cassandra/log/system/system.log.2|grep -v IndexInfo
INFO [main] 2012-07-18 13:53:49,398 DatabaseDescriptor.java (line 170)
DiskAccessMode 'auto' deter
I am developing an automated script for our server maintenance. It would
execute a nodetool repair every weekend. We have 3 nodes in DC1 and 3 in
DC2. We are currently on Cassandra 0.8.4.
I am trying to understand the effects of what would happen if connectivity
between DC1 and DC2 is lost or couple o
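A minimal sketch of such a weekly wrapper (host names, schedule, and log path are assumptions, not from this thread):

```shell
#!/bin/sh
# weekly_repair.sh -- run "nodetool repair" on each node in turn,
# one at a time, logging the outcome. Invoke from cron, e.g.:
#   0 2 * * 6  /usr/local/bin/weekly_repair.sh
LOG=/var/log/cassandra/repair.log
for HOST in dc1-node1 dc1-node2 dc1-node3 dc2-node1 dc2-node2 dc2-node3; do
    echo "$(date) starting repair on $HOST" >> "$LOG"
    if nodetool -h "$HOST" repair >> "$LOG" 2>&1; then
        echo "$(date) repair on $HOST finished" >> "$LOG"
    else
        echo "$(date) repair on $HOST FAILED" >> "$LOG"
        exit 1   # stop rather than move on to the next node after a failure
    fi
done
```

Repairing one node at a time keeps the extra streaming/validation load from hitting the whole cluster at once.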
Local storage is more than just the norm. Unless you have a very good
reason, you should not be using NFS.
Edward
On Fri, Jul 20, 2012 at 4:55 AM, aaron morton wrote:
> 45 minutes for 90GB is high.
>
> The odd one out here is using NFS, local storage is the norm.
>
> I would look into the NFS firs
Hi,
I'm currently testing the restore of a Cassandra 1.1.2 snapshot.
The steps to reproduce the problem:
- snapshot a 3-node production cluster (1.1.2) with RF=3 and LCS (leveled
compaction) ==> 8GB data/node
- create a new 3-node cluster (node1,2,3)
- stop node1 / copy data (SSTables) from
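For anyone following along, the per-node copy step can be sketched roughly like this, assuming default data paths and 1.1's per-CF directory layout (all names below are placeholders):

```shell
# on node1, with Cassandra stopped
DATA=/var/lib/cassandra/data
KS=my_keyspace
CF=my_cf
SNAP=/backup/node1/$KS/$CF/snapshots/my_tag   # where the snapshot was staged
rm -f "$DATA/$KS/$CF"/*.db                    # clear the CF's current SSTables
cp "$SNAP"/* "$DATA/$KS/$CF/"                 # drop the snapshot files in
# then start Cassandra again; alternatively, on a running node,
# "nodetool refresh $KS $CF" should pick up newly placed SSTables
```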
On Fri, Jul 20, 2012 at 11:17 AM, aaron morton wrote:
> Ordering the rows by row key locally would mean that every row on the node
> would have to be scanned to find the ones whose token was in the required
> token range.
I don't know much about Cassandra internals, but from a user point of
view,
On 20.07.2012 11:02, aaron morton wrote:
I would check for stored hints in /var/lib/cassandra/data/system
Hmm, where can I find this kind of info?
I can see the HintsColumnFamily CF inside the system keyspace, but it's empty...
Putting nodes in different racks can make placement tricky so…
Are you runni
Thanks for the reply Aaron.
I was thinking along the same lines as well, as it's only specific nodes
that were showing excessive writes during the heavy read operations.
We will be performing the same exercise again today.. where can I see within
the JMX info if a specific node is performing a
I agree with Aaron's point. But I still think there are ways to overcome
this problem, as I believe the row-scan use case is very important. One
simple (but expensive) approach could be to duplicate the rows with RP
tokens on the primary node, to be used only in case of repairs.
Any thought
>> Can a rolling upgrade be done or is it all-or-nothing?
>
Rolling upgrade; take a look at NEWS.txt…
https://github.com/apache/cassandra/blob/cassandra-1.1/NEWS.txt
(my personal approach is to test in dev, and upgrade a single node for a few
hours to make sure everything is ok)
Cheers
I'm assuming the logical row is in a CQL 3 CF with a composite PRIMARY KEY
http://www.datastax.com/dev/blog/whats-new-in-cql-3-0
It will still be a no-look write. The exceptions are secondary indexes and
counters, which include reads in the write path.
Cheers
-
Aaron Morton
Freel
Hi Aaron,
I have repaired and cleaned up both nodes already, and I did it after
every change to my ring (it took me a while, btw :)).
The node *.211 is actually out of the ring and out of my control
'cause I don't have the server anymore (EC2 instance terminated a few
days ago).
Alain
2012/7/20 aaron
> But isn't QUORUM on a 2-node cluster still 2 nodes?
Yes.
3 is where you start to get some redundancy -
http://thelastpickle.com/2011/06/13/Down-For-Me/
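For reference, QUORUM is floor(RF/2) + 1 replicas, which is why RF=2 gives no headroom; a quick sketch:

```shell
# QUORUM = floor(RF / 2) + 1
quorum() { echo $(( $1 / 2 + 1 )); }

quorum 2   # prints 2 -- both replicas required, no tolerance for a down node
quorum 3   # prints 2 -- one replica may be down
quorum 5   # prints 3 -- two replicas may be down
```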
Cheers
-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
On 20/07/2012, at 10:24 AM, Kirk True w
> INFO [OptionalTasks:1] 2012-07-18 14:05:27,648 SecondaryIndexManager.java
> (line 183) Creating new index : ColumnDefinition{name=74696d657374616d70,
> validator=org.apache.cassandra.db.marshal.DateType, index_type=KEYS,
> index_name='MtsTrackingData_timestamp_idx'}
Is the system reading the
My first guess would be read repair, are you seeing any increase in
ReadRepairStage tasks ?
RR (in 1.X) is normally only enabled for 10% of requests.
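One quick way to check, for example (host and port are assumptions):

```shell
# a climbing "completed" count here during the heavy reads would
# support the read-repair theory; run this on each suspect node
nodetool -h 10.0.0.1 -p 7199 tpstats | grep ReadRepairStage
```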
cheers
-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
On 20/07/2012, at 5:17 AM, jmodha wrote:
Repair and token moves work on ranges of Tokens, not row keys. These operations
need to scan through all the rows in the token range.
Ordering the rows by row key locally would mean that every row on the node
would have to be scanned to find the ones whose token was in the required token
range
Nothing jumps out; can you reproduce the problem?
If you can repro it, let us know along with the RF / CL.
Good luck.
-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
On 20/07/2012, at 1:07 AM, cbert...@libero.it wrote:
> Hi all, I have a problem with coun
I would:
* run repair on 10.58.83.109
* run cleanup on 10.59.21.241 (I assume this was the first node).
It looks like 0.56.62.211 is out of the cluster.
Cheers
-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
On 19/07/2012, at 9:37 PM, Alain RODRIG
I would check for stored hints in /var/lib/cassandra/data/system
Putting nodes in different racks can make placement tricky so…
Are you running a multi-DC setup? Are you using the NTS? What is the RF
setting? What setting do you have for the Snitch? What are the full node
assignments?
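To check for stored hints from the CLI, for example:

```shell
# connected to the node in question
cassandra-cli -h localhost <<'EOF'
use system;
list HintsColumnFamily limit 5;
EOF
```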
Cheer
45 minutes for 90GB is high.
The odd one out here is using NFS, local storage is the norm.
I would look into the NFS first; low network IO and low CPU would suggest it is
waiting on disk IO. The simple thing would be to try starting from local disk
and see how much faster it is. Or look at th
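For example, something as simple as this can show whether the node is stalled on storage (file path below is a placeholder):

```shell
# watch per-device await / %iowait while the node is loading;
# high await on the NFS mount with an idle CPU points at storage latency
iostat -x 5

# crude sequential-read check against a file on the same mount
dd if=/nfs/cassandra/data/SomeCF-hd-1-Data.db of=/dev/null bs=1M
```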
From cassandra-cli help:
To disable compression just set compression_options to null like this
compression_options = null
so
[default@XXXKeyspace] update column family YYY with compression_options = null;
Best regards / Pagarbiai
Viktor Jevdokimov
Senior Developer
Email: viktor.jevdoki...@
[default@XXXKeyspace] update column family YYY with compression_options
=[{}];
Command not found: `update column family YYY with compression_options
=[{}];`. Type 'help;' or '?' for help.
[default@XXXKeyspace]
2012/7/20 Viktor Jevdokimov
> First you update schema for CF, then you run nodetool u
First you update schema for CF, then you run nodetool upgradesstables on each
node:
nodetool -h [HOST] -p [JMXPORT] upgradesstables [keyspace] [cfnames]
For me it sometimes works only after a node restart (otherwise the upgrade
leaves the previous format, compressed or uncompressed).
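Across a small cluster that can be scripted as, e.g. (host names are assumptions; keyspace/CF taken from the example above):

```shell
for HOST in node1 node2 node3; do
    nodetool -h "$HOST" -p 7199 upgradesstables XXXKeyspace YYY
done
```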
Best regards / Pagarbiai
Hello!
How can I run the "update" command on a column family to disable
compression (without re-creating the CF)?
Cheers,
Ilya Shipitsin