I see your ulimit -a above, missed that. You should increase nofile
ulimit. If you used JNA, you'd need to increase memlock too, but you
probably aren't using JNA. 1024 nofile is default and far too small, try
making that like 64K. Thread handles can count against file descriptor
limits, simil…
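To make the nofile bump persistent across logins, the usual place is /etc/security/limits.conf (or a drop-in under limits.d). A sketch, assuming Cassandra runs as a user named "cassandra" — adjust the user and values to your setup:

```conf
# /etc/security/limits.conf -- domain  type  item  value
# "-" sets both the soft and hard limit
cassandra  -  nofile   65536
cassandra  -  nproc    32768
# memlock only matters if JNA is in use (mlock of the heap/off-heap regions)
cassandra  -  memlock  unlimited
```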
What's your ulimit -a output? Did you adjust nproc and nofile ulimits up?
Do you have JNA installed? What about the memlock ulimit, and kernel.shmmax
in sysctl.conf?
What's in cassandra.log?
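For reference, the current values of the limits being asked about can be inspected before changing anything (a sketch; kernel.shmmax is Linux-specific):

```shell
# Show all per-process limits for the current shell
ulimit -a
# The open-files limit by itself
ulimit -n
# Kernel shared-memory ceiling (Linux; fall back to sysctl if /proc is absent)
cat /proc/sys/kernel/shmmax 2>/dev/null || sysctl kernel.shmmax 2>/dev/null || true
```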
On Mon, May 11, 2015 at 7:24 AM, Rahul Bhardwaj <
rahul.bhard...@indiamart.com> wrote:
> Hi Robert,
>
> I…
Hi,
I have a query regarding data modelling in Cassandra and would appreciate
some suggestions.
We’re trying to build a multi-tenant application where we expect data
structure to be defined by the users. A user may define a data source,
number of fields, their data types, ordering etc., and then upload…
On Tue, May 12, 2015 at 12:35 PM, Michael Shuler
wrote:
>> This is a 4 node cluster running Cassandra 2.0.6
>
> Can you reproduce the same issue on 2.0.14? (or better yet, the
> cassandra-2.0 branch HEAD, which will soon ship 2.0.15) If you get the same
> results, please open a JIRA with the r…
Sounds like it's worth a Jira - Cassandra should protect itself from
innocent mistakes or excessive requests from clients. Maybe there should be
a timeout or result size (bytes in addition to count) limit. Something.
Anything. But OOM seems a tad unfriendly for an innocent mistake. In this
particul…
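Some knobs that already exist in cassandra.yaml go part of the way (request timeouts and tombstone thresholds), though none of them caps result size in bytes; the values below are the 2.0-era defaults:

```yaml
# cassandra.yaml (2.0-era defaults) -- timeouts abort slow reads,
# but nothing here limits the byte size of a single wide-row result
read_request_timeout_in_ms: 5000
range_request_timeout_in_ms: 10000
# Reads touching too many tombstones warn, then fail, instead of OOMing
tombstone_warn_threshold: 1000
tombstone_failure_threshold: 100000
```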
On 05/12/2015 04:50 AM, Jared Rodriguez wrote:
I have a specific update and query that I need to ensure has strong
consistency. To that end, when I do the write, I set the consistency
level to ALL. Shortly afterwards, I do a query for that record with a
consistency of ONE and somehow get back stale data.
Just in case I want to clarify that after bootstrapping the third node, it
got data and seemed to be working fine. But it was last night when the
cluster started behaving in a weird way. The last node (successfully
added last week) was being reported up and down all the time. After
restarti…
On Tue, May 12, 2015 at 8:43 AM, Kévin LOVATO wrote:
> My question is the following: Is it possible to prevent Cassandra from
> OOM'ing when a client does this kind of request? I'd rather have an error
> thrown to the client than a multi-server crash.
>
You can provide a default LIMIT clause, b…
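Note that a LIMIT only bounds the cell/row count, not bytes, so a few huge cells can still hurt. A sketch of the kind of bounded query a client could issue instead of an unbounded slice (the table and column names here are hypothetical, not from the thread):

```cql
-- Bounded slice: at most 1000 cells of the wide row are materialized
SELECT column1, value
FROM events
WHERE partition_key = 'some_key'
LIMIT 1000;
```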
On Tue, May 12, 2015 at 9:59 AM, arun sirimalla wrote:
> Try running repair on node 3.
>
Mostly disagree. If a node is empty after a bootstrap, remove it and
re-bootstrap it.
=Rob
Analia,
Try running repair on node 3.
On Tue, May 12, 2015 at 7:39 AM, Analia Lorenzatto <
analialorenza...@gmail.com> wrote:
> Hello guys,
>
>
> I have a cluster 2.1.0-2 comprised of 3 nodes. The replication factor=2.
> We successfully added the third node last week. After that, we ran clean…
Hi Anuj, thanks for the answer.
"Multiple DC is usually useful in case u need Geo Redundancy or have
distributed workload" --> I agree.
"Do u have these clusters at same physical location?" --> I am asking a
question about theory actually...
The exact question is: what is the difference of havin…
Hi,
We experienced a crash on our production cluster following a massive wide
row read.
A client tried to read a wide row (~4GB of raw data) without specifying any
slice condition, which resulted in the crash of multiple nodes (as many as
the replication factor) after long garbage collections.
We…
Hello,
I just wanted to post a quick update on this issue.
After talking with our engineers, it appears that our abstraction layer's use of
the C# driver was the culprit. There was a config setting that was causing the
cluster to repeatedly try to reconnect to an unusable node (which it seems a…
Hello guys,
I have a cluster 2.1.0-2 comprised of 3 nodes. The replication factor=2.
We successfully added the third node last week. After that, we ran
cleanups on one node at a time. Then we ran repairs on all the nodes, and
finally compactions on all the CFs.
Last night, I noticed the c…
Porting an SQL data model to Cassandra is an anti-pattern - don't do it!
Instead, focus on developing a new data model that capitalizes on the key
strengths of Cassandra - distributed, scalable, fast writes, fast direct
access. Complex and ad-hoc queries are anti-patterns as well. I'll leave it
to…
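As an illustration of the query-first modeling described above (table and column names are hypothetical, not from the thread): start from the read you need and denormalize into a table whose primary key serves it directly:

```cql
-- Query to serve: "latest orders for a customer, newest first"
CREATE TABLE orders_by_customer (
    customer_id text,
    order_time  timeuuid,
    order_total decimal,
    PRIMARY KEY (customer_id, order_time)
) WITH CLUSTERING ORDER BY (order_time DESC);

-- The read then scans one partition sequentially -- fast direct access
SELECT * FROM orders_by_customer WHERE customer_id = 'c42' LIMIT 20;
```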
Hello,
The problem with your approach is that you will need to specify all 30 filters
(in the predefined order of the PK) when querying.
I would go for this data model:
CREATE TABLE t (
    name text,
    filter_name1 text, filter_value1 text,
    filter_name2 text, filter_value2 text,
    filter_n…
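With name/value pairs like this, a query only has to pin the pairs it actually uses; a sketch, assuming the pair columns are clustering columns in PK order (the filter names and values below are made up):

```cql
-- Equality on a prefix of the clustering columns is enough;
-- trailing filter pairs can be left unspecified
SELECT * FROM t
WHERE name = 'source1'
  AND filter_name1 = 'country' AND filter_value1 = 'IN';
```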
Hello everyone,
I have a 3-node cluster with Cassandra 2.0.14 on CentOS in the same data
center, with RF=3, and I am using CL=LOCAL_QUORUM by default for the read and
write operations. I have given about 5 GB of heap space to Cassandra.
I have 40-core machines with 3 separate SATA disks, with commitl…
I have a specific update and query that I need to ensure has strong
consistency. To that end, when I do the write, I set the consistency level
to ALL. Shortly afterwards, I do a query for that record with a
consistency of ONE and somehow get back stale data.
This is a 4 node cluster running Cassandra 2.0.6