No, we can't allow you to leave.
On Mon, Nov 16, 2015 at 4:25 AM, Tanuj Kumar wrote:
>
In this case, does it make sense to remove the newly added nodes, correct
the configuration, and have them rejoin one at a time?
Thx
Sent from my iPhone
> On Oct 18, 2015, at 11:19 PM, Jeff Jirsa wrote:
>
> Take a snapshot now, before you get rid of any data (whatever you do, don’t
> run cleanup).
Won't your data be out of sync?
On Wed, Sep 9, 2015 at 2:19 AM, a a wrote:
> Hi all.
>
> I have a running Cassandra cluster and I want to add a new node in a new
> datacenter (it needs 100% of the data).
> But I don't want a full data copy over the network.
> Can I copy the data manually and then sync the new node to the cluster?
>
> S
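For context, the usual options for bringing up a node in a new datacenter without ad-hoc copying are to stream from an existing DC with `nodetool rebuild`, or to copy SSTable files into place and load them with `nodetool refresh`. A sketch under assumed names (keyspace, table, and DC names are placeholders):

```
# Stream all data for this node's ranges from an existing datacenter:
nodetool rebuild DC1

# Or, after manually copying SSTable files into the table's data directory:
nodetool refresh my_keyspace my_table
```

Either way, run a repair afterwards if you need to be sure the new node is fully consistent.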
with n-depth slicing of that partitioned row given an
>>> arbitrary query syntax if range queries on clustering keys was allowed
>>> anywhere.
>>>
>>> At present, you can either duplicate the data using the other clustering
>>> key (transaction_time) a
I don't think that solves my problem. The question really is: why can't we
use ranges for both time columns when they are part of the primary key?
They are in one row after all. Is this just a CQL limitation?
-Raj
On Sat, Feb 14, 2015 at 3:35 AM, DuyHai Doan wrote:
> "I am
01-02 00:00:00' and
transaction_time < '2015-01-02 00:00:00'
It works if I use an equals clause for the event_time. I am trying to get
the state as of a particular transaction_time.
-Raj
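For context, CQL of that era only allows a range (inequality) restriction on the last restricted clustering column; any clustering column before it must be restricted by equality. A minimal sketch (table and column names are hypothetical, chosen to match the thread):

```sql
-- Hypothetical table with two time dimensions as clustering columns.
CREATE TABLE events (
    id text,
    event_time timestamp,
    transaction_time timestamp,
    state text,
    PRIMARY KEY (id, event_time, transaction_time)
);

-- Works: equality on event_time, range on transaction_time.
SELECT * FROM events
WHERE id = 'abc'
  AND event_time = '2015-01-01 00:00:00'
  AND transaction_time < '2015-01-02 00:00:00';

-- Rejected: once event_time is restricted by a range, no restriction
-- on transaction_time is allowed, so two simultaneous ranges fail.
-- SELECT * FROM events
-- WHERE id = 'abc'
--   AND event_time > '2015-01-01 00:00:00'
--   AND transaction_time < '2015-01-02 00:00:00';
```

This is why the workaround suggested earlier in the thread is to duplicate the data under a second table clustered by transaction_time first.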
ty
is reserved in heap still. Any plans to move it off-heap?
-Raj
On Tue, Nov 25, 2014 at 3:10 PM, Robert Coli wrote:
> On Tue, Nov 25, 2014 at 9:07 AM, Raj N wrote:
>
>> What's the latest on the maximum number of keyspaces and/or tables that
>> one can have in Cassandra
What's the latest on the maximum number of keyspaces and/or tables that one
can have in Cassandra 2.1.x?
-Raj
We are planning to upgrade soon. But in the meantime, I wanted to see if we
can tweak certain things.
-Rajesh
On Wed, Nov 5, 2014 at 3:10 PM, Robert Coli wrote:
> On Tue, Nov 4, 2014 at 8:51 PM, Raj N wrote:
>
>> Is there a good formula to calculate heap utilization in Cassandr
pling going on
as well. I have around 800 million rows. Is there a way to estimate how
much space this would add up to?
What else?
-Raj
On Mon, Jun 23, 2014 at 6:54 AM, Alain RODRIGUEZ wrote:
> Anyone has any clue of what is happening in our cluster with the given
> information?
>
> What other information could help you to help me? :-D
>
> 2014-06-18 21:07 GMT+02:00 Robert Coli :
>
>> On Wed, Jun 18, 2014 at 5:36 AM, Alai
--
Data Architect ❘ Zephyr Health
589 Howard St. ❘ San Francisco, CA 94105
m: +1 9176477433 ❘ f: +1 415 520-9288
o: +1 415 529-7649 | s: raj.janakarajan
http://www.zephyrhealth.com
Thanks Eric for the information. It looks like it will be supported in
future versions.
Raj
On Mon, May 19, 2014 at 10:03 AM, Eric Plowe wrote:
> Collection types cannot be used for filtering (as part of the where
> statement).
> They cannot be used as a primary key or part of a pr
Thank you Patricia. This is helpful.
Raj
On Mon, May 19, 2014 at 10:54 AM, Patricia Gorla wrote:
> Raj,
>
> Secondary indexes across CQL3 collections were introduced into 2.1 beta1,
> so will be available in future versions. See
> https://issues.apache.org/jira/browse/CASSAN
where eligibility_state IN (CA, NC)
eligibility_state is a collection, and the above query would be used
frequently.
Would you recommend collections for modeling from a performance perspective?
Raj
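For reference, from Cassandra 2.1 a secondary index can be created on a collection column, and membership is queried with CONTAINS rather than IN. A sketch under assumed names (the `users` table and `eligibility_states` column are hypothetical):

```sql
-- Hypothetical schema: a set collection holding state codes.
CREATE TABLE users (
    user_id text PRIMARY KEY,
    eligibility_states set<text>
);

-- Indexing a collection is supported from Cassandra 2.1.
CREATE INDEX ON users (eligibility_states);

-- CONTAINS tests membership in the collection.
SELECT * FROM users WHERE eligibility_states CONTAINS 'CA';
```

As with any secondary index, performance degrades if the indexed value is very common across the cluster, so frequent queries like this deserve load testing first.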
One of our nodes keeps crashing continuously with out of memory errors. I
see the following error in the logs -
INFO 21:03:54,007 Creating new commitlog segment
/local3/logs/cassandra/commitlog/CommitLog-1348016634007.log
Java HotSpot(TM) 64-Bit Server VM warning: Attempt to allocate stack guard
Hi,
I have a 2-DC setup (DC1: 3 nodes, DC2: 3 nodes). All reads and writes are
at LOCAL_QUORUM. The question is: if I do reads at LOCAL_QUORUM in DC1,
will read repair happen on the replicas in DC2?
Thanks
-Raj
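For context, a LOCAL_QUORUM read only contacts replicas in the local datacenter, so whether remote replicas can be read-repaired is governed by the table's read-repair options: `read_repair_chance` applies across all replicas (including remote DCs), while `dclocal_read_repair_chance` confines the probabilistic repair to the local DC. A sketch (keyspace and table names are placeholders):

```sql
-- Chance that a read also checks ALL replicas, across datacenters:
-- Chance restricted to replicas in the coordinator's datacenter:
ALTER TABLE my_keyspace.my_table
  WITH read_repair_chance = 0.1
  AND dclocal_read_repair_chance = 0.0;
```

With `read_repair_chance = 0`, cross-DC consistency relies on hinted handoff and anti-entropy repair instead.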
Hi experts,
I am planning to upgrade from 0.8.4 to 1.+. What's the latest stable
version?
Thanks
-Rajesh
Great stuff!!!
On Tue, Jun 26, 2012 at 5:25 PM, Edward Capriolo wrote:
> Hello all,
>
> It has not been very long since the first book was published but
> several things have been added to Cassandra and a few things have
> changed. I am putting together a list of changed content, for example
> fe
On Tue, Jun 19, 2012 at 11:11 PM, Raj N wrote:
> > But won't that also run a major compaction, which is not recommended
> anymore?
> >
> > -Raj
> >
> >
> > On Sun, Jun 17, 2012 at 11:58 PM, aaron morton
> > wrote:
How did you solve your problem eventually? I am experiencing something
similar. Did you run cleanup on the node that has 80GB data?
-Raj
On Mon, Aug 15, 2011 at 10:12 PM, aaron morton wrote:
> Just checking do you have read_repair_chance set to something ? The second
> request is going
But won't that also run a major compaction, which is not recommended anymore?
-Raj
On Sun, Jun 17, 2012 at 11:58 PM, aaron morton wrote:
> Assuming you have been running repair, it can't hurt.
>
> Cheers
>
> -
> Aaron Morton
> Freelance
Have you seen any issues with data this size on a node?
-Raj
On Tue, Jun 19, 2012 at 3:30 PM, Edward Capriolo wrote:
> Hey my favorite question! It is a loaded question and it depends on
> your workload. The answer has evolved over time.
>
> In the old days <0.6.5 the only way to re
data that is never updated then it probably makes sense to not run
major compaction. But if you have data which can be deleted or overwritten,
does it make sense to run major compaction on a regular basis?
Thanks
-Raj
Nick, do you think I should still run cleanup on the first node.
-Rajesh
On Fri, Jun 15, 2012 at 3:47 PM, Raj N wrote:
> I did run nodetool move. But that was when I was setting up the cluster
> which means I didn't have any data at that time.
>
> -Raj
>
>
> On Fr
I did run nodetool move. But that was when I was setting up the cluster
which means I didn't have any data at that time.
-Raj
On Fri, Jun 15, 2012 at 1:29 PM, Nick Bailey wrote:
> Did you start all your nodes at the correct tokens or did you balance
> by moving them? Moving nodes a
Actually I am not worried about the percentage. It's the data I am concerned
about. Look at the first node: it has 102.07 GB of data, and the other nodes
have around 60 GB (one has 69, but let's ignore that one). I don't
understand why the first node has almost double the data.
Thanks
-Raj
On Fri
be
the cause for this?
Thanks
-Raj
ranges it's responsible for (which is everything). So unless I upgrade to
1.0.+, where I can use the -pr option, is it advisable to just run repair on
the first node?
-Raj
On Tue, May 22, 2012 at 5:05 AM, aaron morton wrote:
> I also dont understand if all these nodes are replicas of each other why
> i
Can I infer from this that if I have 3 replicas, then running repair
without -pr on 1 node will repair the other 2 replicas as well?
-Raj
On Sat, Apr 14, 2012 at 2:54 AM, Zhu Han wrote:
>
> On Sat, Apr 14, 2012 at 1:57 PM, Igor wrote:
>
>> Hi!
>>
>> What is the
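For context, `nodetool repair` without `-pr` repairs every range the node holds a replica of, and does so on all replicas of those ranges, which is why one run can cover the other replicas too. The `-pr` flag (available from 1.0) restricts repair to the node's primary ranges so that a rolling repair across all nodes touches each range exactly once. A sketch (keyspace name is a placeholder):

```
# Repairs all ranges this node replicates, on every replica of those ranges:
nodetool repair my_keyspace

# With -pr (Cassandra 1.0+): only this node's primary ranges, so running
# it on every node in turn repairs each range exactly once.
nodetool repair -pr my_keyspace
```

Without `-pr`, running repair on every node does redundant work, since overlapping replica ranges get repaired multiple times.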
really appreciated.
Thanks
-Raj
https://issues.apache.org/jira/browse/CASSANDRA-2324
But that says it was fixed in 0.8 beta. Is this still broken in 0.8.4?
I also don't understand why the data was inconsistent in the first place. I
read and write at LOCAL_QUORUM.
Thanks
-Raj
On Sun, Apr 29, 2012 at 2:06 AM, Watanabe Maki wrote:
just drop column families and bulk load the data
again?
Thanks
-Raj
Unable to reduce heap usage since there are no dirty column families
nodetool ring shows 48GB of data on the node.
My Xmx is 2G. I rely on OS caching more than Row caching or key caching.
Hence the column families are created with default settings.
Any help would be appreciated.
Thanks
-Raj
How do I ensure it is indeed using the SerializingCacheProvider?
Thanks
-Rajesh
On Tue, Jul 12, 2011 at 1:46 PM, Jonathan Ellis wrote:
> You need to set row_cache_provider=SerializingCacheProvider on the
> columnfamily definition (via the cli)
>
> On Tue, Jul 12, 2011 at 9:57 AM,
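Following the suggestion above, in that era the off-heap provider was set per column family through cassandra-cli, and describing the keyspace echoes the provider back, which answers the "how do I ensure" question. A sketch (column family and keyspace names are placeholders):

```
# In cassandra-cli, switch the row cache to the off-heap provider:
update column family MyCF with row_cache_provider = 'SerializingCacheProvider';

# Verify: the keyspace description lists the row cache provider in use.
describe MyKeyspace;
```

If the describe output still shows the default ConcurrentLinkedHashCacheProvider, the change did not take effect.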
Do we need to do anything special to turn off-heap cache on?
https://issues.apache.org/jira/browse/CASSANDRA-1969
-Raj
I know it doesn't. But is this a valid enhancement request?
On Tue, Jul 5, 2011 at 1:32 PM, Edward Capriolo wrote:
>
>
> On Tue, Jul 5, 2011 at 1:27 PM, Raj N wrote:
>
>> Hi experts,
>> Are there any benchmarks that quantify how long nodetool repair
>>
decide which nodes can run repair concurrently. For example, if RF = 3 and I
have 6 nodes, then 2 nodes which are responsible for different ranges in
the ring can run repair concurrently.
Thanks
-Raj
questions
-
Do you guys see any flaws with this approach?
What happens when DC1 comes back up and we start reading/writing at QUORUM
again? Will we read stale data in this case?
Thanks
-Raj
Hi all -
We're new to Cassandra and have read plenty on the data model, but we wanted
to poll for thoughts on how to best handle this structure.
We have simple objects that have an ID, and we want to maintain a history of
all the revisions.
e.g.
MyObject:
ID (long)
name
other fields
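A common Cassandra approach for this is to make the revision timestamp a clustering column, so each object's full history lives in one partition, newest first. A sketch (table and column names mirror the example above; the exact columns are assumptions):

```sql
-- One partition per object; one row per revision, newest first.
CREATE TABLE my_object_revisions (
    id bigint,
    revised_at timestamp,
    name text,
    other_fields text,
    PRIMARY KEY (id, revised_at)
) WITH CLUSTERING ORDER BY (revised_at DESC);

-- Full revision history of one object:
SELECT * FROM my_object_revisions WHERE id = 42;

-- Latest revision only:
SELECT * FROM my_object_revisions WHERE id = 42 LIMIT 1;
```

Each revision is written as a new row, so history is append-only and the latest state is a cheap single-partition read.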
My question is in context of a social network schema design
I am thinking of the following schema for storing a user's data that is
required as they log in and are led to their homepage:
(I aimed at a schema design such that through a single row read query
all the data that would be required to put up the
Guys,
Correct me if I am wrong. The whole problem is because a node missed an
update when it was down. Shouldn’t HintedHandoff take care of this case?
Thanks
-Raj
-Original Message-
From: Jonathan Ellis [mailto:jbel...@gmail.com]
Sent: Wednesday, August 18, 2010 9:22 AM
To: user