Everything still runs smoothly. It's quite plausible that version 1.1.3
resolved this bug.
2012/8/13 Robin Verlangen
> 3 hours ago I finished the upgrade of our cluster. Currently it runs
> quite smoothly. I'll give an update within a week on whether this really
> solved our issues.
>
> Cheers!
Tyler
Thanks much
From: Tyler Hobbs [mailto:ty...@datastax.com]
Sent: Tuesday, August 14, 2012 3:49 PM
To: user@cassandra.apache.org
Subject: Re: Question regarding thrift login api and relation to
access.properties and passwd.properties
access.properties and passwd.properties are only used by the example
implementations, SimpleAuthenticator and SimpleAuthority.
Jim:
Thanks a lot for the info.
When you say "old nodes sometimes hanging around as "unreachable nodes"
when describing cluster", you mean that after the new node boots up and assumes
ownership of the same token, you have not manually run nodetool
removetoken, right? This kind of makes sense --- sinc
We use Priam to replace nodes using replace_token. We do see some issues
(currently on 1.0.9, as well as on earlier versions) with replace_token.
Apparently there are some known issues with replace_token. We have experienced
the old nodes sometimes hanging around as "unreachable nodes" when describing
the cluster.
Thanks Aaron, it has been a while since I last checked the code. I'll read
it to understand it better.
On Aug 14, 2012 8:48 PM, "aaron morton" wrote:
> Using this method, when choosing the new token, should we still use
> T-1?
>
> (AFAIK) No.
> replace_token is used when you want to replace a node that is dead. In
> this case the dead node will be identified by its token.
Aaron,
Thank you very much. I will do as you suggested.
One last question regarding the restart:
I assume I should do it node by node.
Is there anything to do before that, like drain or flush?
I am also considering enabling incremental backups on my cluster. Currently
I take a daily full snapshot of
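On the drain question, a minimal rolling-restart sketch, assuming Cassandra
runs as a system service and using a hypothetical three-node host list.
nodetool drain flushes memtables and stops the node accepting writes, which
makes the subsequent restart cleaner:

  # restart one node at a time (hypothetical hosts)
  for host in node1 node2 node3; do
      ssh "$host" nodetool drain                  # flush + stop accepting writes
      ssh "$host" sudo service cassandra restart  # service name varies by install
      # crude readiness check: wait until nodetool answers again
      until ssh "$host" nodetool info >/dev/null 2>&1; do
          sleep 10
      done
  done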
Hi, I have a CF with a composite type (LongType, IntegerType) with some data
like this:
RowKey: hihi
=> (column=1000:1, value=616263)
=> (column=1000:2, value=6465)
=> (column=1000:3, value=66)
=> (column=1000:4, value=6768)
=> (column=2000:1, value=616263)
=> (column=2000:2, value=6465)
=> (colu
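As an aside, the values shown are just hex-encoded bytes (616263 is "abc",
6465 is "de", and so on). A CF with this layout could be created in
cassandra-cli roughly as follows; the CF name is made up and the validation
classes are assumptions:

  create column family composite_cf
    with comparator = 'CompositeType(LongType,IntegerType)'
    and key_validation_class = UTF8Type
    and default_validation_class = BytesType;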
> Using this method, when choosing the new token, should we still use
> T-1?
(AFAIK) No.
replace_token is used when you want to replace a node that is dead. In this
case the dead node will be identified by its token.
> if so, would the duplicate token (same token but different IP) cause problems
Previously, when a node died, I remember the documentation describing that
it's better to assign T-1 to the new node, where T was the token of the dead
node.
The new doc for 1.x here
http://wiki.apache.org/cassandra/Operations#Replacing_a_Dead_Node
shows a new way to pass in cassandra.replace_token.
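For the archives, that wiki flow amounts to starting the replacement node
with the dead node's token as a JVM system property, leaving initial_token
unset on the new node; a sketch with a placeholder token:

  # tarball install: pass the flag at startup
  bin/cassandra -Dcassandra.replace_token=112233445566778899

  # packaged install: add it to cassandra-env.sh instead
  JVM_OPTS="$JVM_OPTS -Dcassandra.replace_token=112233445566778899"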
access.properties and passwd.properties are only used by the example
implementations, SimpleAuthenticator and SimpleAuthority. Your own
implementation (which requires a custom class) certainly does not have to
use these; it can use any other source to make the authn/authz decision.
On Tue, Aug 14
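For later readers, a sketch of wiring up the example implementations, from
memory; the file paths and sample user are assumptions, and details may
differ by version:

  # cassandra.yaml
  authenticator: org.apache.cassandra.auth.SimpleAuthenticator
  authority: org.apache.cassandra.auth.SimpleAuthority

  # conf/passwd.properties -- one user=password per line
  jsmith=havebadpass

  # conf/access.properties -- keyspace-level permissions
  Keyspace1.<ro>=jsmith

  # point Cassandra at the files, e.g. in cassandra-env.sh
  JVM_OPTS="$JVM_OPTS -Dpasswd.properties=conf/passwd.properties"
  JVM_OPTS="$JVM_OPTS -Daccess.properties=conf/access.properties"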
ah... my bad, thanks for the explanation!
On Tue, Aug 14, 2012 at 1:57 PM, aaron morton wrote:
> The Priam code is looking for the <data_dir>/<keyspace>/backups
> directory created by Cassandra during incremental backups. If it finds it,
> the files are uploaded to S3.
>
> It's taking the built-in incremental backups
The DataStax documentation concisely describes how to configure the
properties and ensure they are used for client access. The question is this:
if using the thrift API login, does C* use the Authentication class to
determine access privileges based on the access/passwd properties?
These questions
The Priam code is looking for the <data_dir>/<keyspace>/backups directory
created by Cassandra during incremental backups. If it finds it, the files
are uploaded to S3.
It's taking the built-in incremental backups off-node. (AFAIK)
Cheers
-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.
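For context: incremental backups are enabled in cassandra.yaml, and after
each flush Cassandra hardlinks the new SSTables into a backups/ directory
under the keyspace's data directory; shipping and deleting them is left to
the operator (or to Priam). The path below assumes the default data directory
and the 1.0-era layout:

  # cassandra.yaml
  incremental_backups: true

  # hardlinked SSTables accumulate here until something removes them
  ls /var/lib/cassandra/data/MyKeyspace/backups/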
> According to cfstats there are some CFs with high Compacted row maximum
> sizes (1131752, 4866323 and 25109160). Other max sizes are < 100. Are
> these considered to be problematic, and what can I do to solve that?
They are only 1, 4 and 25 MB. Not too big.
> What should be the values of
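Those figures come straight out of nodetool cfstats and are in bytes; a
quick way to eyeball them across all CFs:

  nodetool -h localhost cfstats | grep -E 'Column Family:|Compacted row maximum size'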
In the initial incremental backup implementation,
the hardlinking to the backup dir was in the CFS.addSSTable() code, so
it's part of the Cassandra code.
I looked at Priam,
https://github.com/Netflix/Priam/blob/master/priam/src/main/java/com/netflix/priam/backup/IncrementalBackup.java
this code
Hi!
It helps, but before I take more actions I want to give you some more info
and ask some questions:
*Related Info*
1. According to my yaml file (where do I see these parameters in JMX? I
couldn't find them):
in_memory_compaction_limit_in_mb: 64
concurrent_compactors: 1, but it i
From: mdione@orange.com [mailto:mdione@orange.com]
> In particular, I'm thinking of a restore like this:
>
> * the app does something stupid.
> * (if possible) I stop writes to the KS or CF.
In fact, given that I'm about to restore the KS/CF to an old state, I can
safely do this:
*
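Under those assumptions, a rough restore flow; the keyspace/CF names are
placeholders, the snapshot flag spelling varies by version, and nodetool
refresh is only available on reasonably recent releases:

  # 1. take a safety snapshot before touching anything
  nodetool snapshot -t pre_restore MyKeyspace

  # 2. with writes stopped, copy the older snapshot's SSTables for the CF
  #    back into the keyspace data directory, replacing the live files

  # 3. have Cassandra pick up the copied-in SSTables without a full restart
  nodetool refresh MyKeyspace MyCF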
Thanks Omid,
I've changed to Sun's Java and now it works just fine.
rds /Robban
From: Omid Aladini [mailto:omidalad...@gmail.com]
Sent: den 13 augusti 2012 18:14
To: user@cassandra.apache.org
Subject: Re: 1.1.3 crash when initializing column family
It works
> optimize Cassandra for performance in general
It's a lot easier to answer specific questions. Cassandra is fast, and there
are ways to make it faster in specific use cases.
> improve the performance for "select * from X" type of queries
Ah. Are you specifying a row key or are you trying to g
There are a couple of steps you can take if compaction is causing GC.
- If you have a lot of wide rows, consider reducing the
in_memory_compaction_limit_in_mb yaml setting. This will slow down compaction
but will reduce the memory usage.
- Reduce concurrent_compactors.
Both of these may slow
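A sketch of the corresponding cassandra.yaml changes; the values are only
illustrative of the direction of the change, not recommendations:

  # smaller limit: wide rows fall back to the slower two-pass compaction
  # sooner, trading compaction speed for less heap pressure
  in_memory_compaction_limit_in_mb: 32

  # fewer parallel compactions means less memory used at once
  concurrent_compactors: 1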