Re: error='Cannot allocate memory' (errno=12)

2015-05-12 Thread J. Ryan Earl
I see your ulimit -a above, missed that. You should increase the nofile ulimit. If you used JNA, you'd need to increase memlock too, but you probably aren't using JNA. 1024 nofile is the default and far too small; try making that like 64K. Thread handles can count against file descriptor limits, simil
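
For reference, a minimal sketch of the kind of limits change described above, assuming a Linux box using /etc/security/limits.conf and a service user named cassandra (the user name and exact values are assumptions, not from the original mail):

    # /etc/security/limits.conf (or a drop-in under /etc/security/limits.d/)
    # Raise limits for the user that runs Cassandra; "cassandra" and the
    # values below are example assumptions -- adjust to your environment.
    cassandra  -  nofile   65536      # "like 64K" instead of the 1024 default
    cassandra  -  nproc    32768      # threads count against the process limit
    cassandra  -  memlock  unlimited  # only matters if JNA is installed (mlockall)

    # After re-logging in / restarting Cassandra, verify what the process
    # actually got (assumes a single Cassandra JVM on the box):
    cat /proc/$(pgrep -f CassandraDaemon)/limits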

Re: error='Cannot allocate memory' (errno=12)

2015-05-12 Thread J. Ryan Earl
What's your ulimit -a output? Did you adjust the nproc and nofile ulimits up? Do you have JNA installed? What about the memlock ulimit, and kernel.shmmax in sysctl.conf? What's in cassandra.log? On Mon, May 11, 2015 at 7:24 AM, Rahul Bhardwaj < rahul.bhard...@indiamart.com> wrote: > Hi Robert, > > I
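
A quick sketch of how those checks might be run on the affected node (the log path and the single-process assumption are editorial additions, not from the thread):

    # Limits of the current shell
    ulimit -a

    # Limits actually applied to the running Cassandra process
    # (assumes exactly one Cassandra JVM on the host)
    cat /proc/$(pgrep -f CassandraDaemon)/limits

    # Shared-memory ceiling mentioned above
    sysctl kernel.shmmax

    # Recent startup/allocation errors (path is the usual package default)
    tail -n 100 /var/log/cassandra/cassandra.log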

dynamic models (data modelling)

2015-05-12 Thread Sunil Ghai
Hi, I’ve a query regarding data modelling in Cassandra and would appreciate some suggestions. We’re trying to build a multi-tenant application where we expect the data structure to be defined by the users. A user may define a data source, the number of fields, their data types, ordering etc. and then upload

Re: Consistency Issues

2015-05-12 Thread Robert Coli
On Tue, May 12, 2015 at 12:35 PM, Michael Shuler wrote: > This is a 4 node cluster running Cassandra 2.0.6 >> > > Can you reproduce the same issue on 2.0.14? (or better yet, the > cassandra-2.0 branch HEAD, which will soon ship 2.0.15) If you get the same > results, please, open a JIRA with the r

Re: Out of memory on wide row read

2015-05-12 Thread Jack Krupansky
Sounds like it's worth a Jira - Cassandra should protect itself from innocent mistakes or excessive requests from clients. Maybe there should be a timeout or result size (bytes in addition to count) limit. Something. Anything. But OOM seems a tad unfriendly for an innocent mistake. In this particul

Re: Consistency Issues

2015-05-12 Thread Michael Shuler
On 05/12/2015 04:50 AM, Jared Rodriguez wrote: I have a specific update and query that I need to ensure has strong consistency. To that end, when I do the write, I set the consistency level to ALL. Shortly afterwards, I do a query for that record with a consistency of ONE and somehow get back s

Re: Unexpected behavior after adding successfully a new node

2015-05-12 Thread Analia Lorenzatto
Just in case, I want to clarify that after bootstrapping the third node, it got data and seemed to be working fine. But it was last night that the cluster started behaving in a weird way. The last node (successfully added last week) was being reported up and down all the time. After restarti

Re: Out of memory on wide row read

2015-05-12 Thread Robert Coli
On Tue, May 12, 2015 at 8:43 AM, Kévin LOVATO wrote: > My question is the following: Is it possible to prevent Cassandra from > OOM'ing when a client does this kind of requests? I'd rather have an error > thrown to the client than a multi-server crash. > You can provide a default LIMIT clause, b
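
A minimal sketch of the kind of defensive read Rob is alluding to, with placeholder keyspace/table/key names (on the application side, the driver's paging or fetch size serves the same purpose):

    # Cap the slice instead of pulling the whole ~4GB partition in one read.
    echo "SELECT * FROM my_ks.wide_table WHERE row_key = 'some_key' LIMIT 1000;" | cqlsh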

Re: Unexpected behavior after adding successfully a new node

2015-05-12 Thread Robert Coli
On Tue, May 12, 2015 at 9:59 AM, arun sirimalla wrote: > Try running repair on node 3. > Mostly disagree. If a node is empty after a bootstrap, remove it and re-bootstrap it. =Rob
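
A rough sketch of the remove-and-re-bootstrap procedure, assuming package-default data directories (check cassandra.yaml for the real paths before deleting anything):

    # 1. From a healthy node, find the Host ID of the empty node and remove it.
    nodetool status
    nodetool removenode <host-id-of-the-empty-node>

    # 2. On the empty node: stop Cassandra and wipe its local state.
    sudo service cassandra stop
    sudo rm -rf /var/lib/cassandra/data /var/lib/cassandra/commitlog /var/lib/cassandra/saved_caches

    # 3. Start it again; with auto_bootstrap at its default (true) the node
    #    re-bootstraps and streams its ranges from the rest of the cluster.
    sudo service cassandra start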

Re: Unexpected behavior after adding successfully a new node

2015-05-12 Thread arun sirimalla
Analia, Try running repair on node 3. On Tue, May 12, 2015 at 7:39 AM, Analia Lorenzatto < analialorenza...@gmail.com> wrote: > Hello guys, > > > I have a cluster 2.1.0-2 comprised of 3 nodes. The replication factor=2. > We successfully added the third node last week. After that, we ran clean
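
For completeness, the suggested repair is simply the following, run on node 3 (the -pr variant limits work to that node's primary token ranges):

    nodetool repair        # full repair of all keyspaces on this node
    nodetool repair -pr    # or only this node's primary token ranges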

Re: Multiple Cluster vs Multiple DC

2015-05-12 Thread Alain RODRIGUEZ
Hi Anuj, thanks for the answer. "Multiple DC is usually useful in case u need Geo Redundancy or have distributed workload" --> I agree. "Do u have these clusters at same physical location?" --> I am asking a question about theory actually... The exact question is: What is the difference of havin

Out of memory on wide row read

2015-05-12 Thread Kévin LOVATO
Hi, We experienced a crash on our production cluster following a massive wide row read. A client tried to read a wide row (~4GB of raw data) without specifying any slice condition, which resulted in the crash of multiple nodes (as many as the replication factor) after long garbage collections. We

RE: Adding New Node Issue

2015-05-12 Thread Thomas Miller
Hello, I just wanted to post a quick update on this issue. After talking with our engineers, it appears that our abstraction layer's use of the C# driver was the culprit. There was a config setting that was causing the cluster to repeatedly try to reconnect to an unusable node (which it seems a

Unexpected behavior after adding successfully a new node

2015-05-12 Thread Analia Lorenzatto
Hello guys, I have a cluster 2.1.0-2 comprised of 3 nodes. The replication factor=2. We successfully added the third node last week. After that, we ran cleanups on one node at a time. Then we ran repairs on all the nodes, and finally compactions on all the CFs. Last night, I noticed the c

Re: CQL Data Model question

2015-05-12 Thread Jack Krupansky
Porting an SQL data model to Cassandra is an anti-pattern - don't do it! Instead, focus on developing a new data model that capitalizes on the key strengths of Cassandra - distributed, scalable, fast writes, fast direct access. Complex and ad-hoc queries are anti-patterns as well. I'll leave it to

RE: CQL Data Model question

2015-05-12 Thread Ngoc Minh VO
Hello, The problem with your approach is: you will need to specify all the 30 filters (in the pre-defined order in PK) when querying. I would go for this data model: CREATE TABLE t ( name text, filter_name1 text, filter_value1 text, filter_name2 text, filter_value2 text, filter_n
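
A hypothetical sketch of what such a name/value-pair layout could look like, run through cqlsh. The key layout and column names are guesses, since the original message is cut off, and only two of the pairs are shown:

    echo "
      CREATE TABLE my_ks.t (
        name          text,
        filter_name1  text,
        filter_value1 text,
        filter_name2  text,
        filter_value2 text,
        PRIMARY KEY (name, filter_name1, filter_value1, filter_name2, filter_value2)
      );" | cqlsh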

Reads failing at around 4000 QPS

2015-05-12 Thread Anishek Agarwal
Hello everyone, I have a 3 node cluster with Cassandra 2.0.14 on CentOS in the same data center, with RF=3, and I am using CL=LOCAL_QUORUM by default for the read and write operations. I have given about 5 GB of heap space to Cassandra. I have 40 core machines with 3 separate SATA disks with commitl

Consistency Issues

2015-05-12 Thread Jared Rodriguez
I have a specific update and query that I need to ensure has strong consistency. To that end, when I do the write, I set the consistency level to ALL. Shortly afterwards, I do a query for that record with a consistency of ONE and somehow get back stale data. This is a 4 node cluster running Cass
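
For reference, a sketch of exercising those consistency levels from cqlsh (keyspace, table and column names are placeholders; in the application itself the levels are set through the client driver):

    # Write at ALL, then read the same row back at ONE.
    echo "
      CONSISTENCY ALL;
      UPDATE ks.t SET val = 'new' WHERE id = 1;
      CONSISTENCY ONE;
      SELECT val FROM ks.t WHERE id = 1;" | cqlsh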