The answer to this question depends very much on the throughput, desired
latency, and access patterns (R/W or R/O). In general, what I have seen
working for high-throughput environments is to either use a distributed
file system like Ceph/Gluster or an object store like S3 and keep the pointer
in C*.
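The pattern above can be sketched as follows. The object store and the metadata table here are stand-in dicts, not real S3 or Cassandra clients, and all names are hypothetical:

```python
import hashlib

# Stand-ins for the real stores: one dict for the S3/Ceph side, one for
# the Cassandra metadata table (both are assumptions for illustration).
object_store = {}
metadata_table = {}

def put_blob(key, data):
    """Write the large payload to the object store; keep only a pointer in C*."""
    blob_id = hashlib.sha256(data).hexdigest()
    object_store[blob_id] = data
    metadata_table[key] = blob_id  # the small pointer row
    return blob_id

def get_blob(key):
    """Resolve the pointer stored in C*, then fetch the payload."""
    return object_store[metadata_table[key]]

put_blob("doc:42", b"large binary payload")
print(get_blob("doc:42"))
```

The point of the split is that compaction and repair in Cassandra then only ever touch the small pointer rows, never the blobs themselves.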
Some other ways to track old records are:
1) Use external queues - one queue per week or month, for instance, and pile
up data on the queue cluster.
2) Create one more table in C* to track the keys per week or month, which you
can scan to read the keys of the audit table. Make sure you delete the
entir
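Option 2 above can be sketched like this. The tracker is a plain dict standing in for the extra C* table, and the row-key format is an assumption:

```python
from datetime import date

tracker = {}  # stand-in for the extra tracking table in C*

def week_bucket(d):
    """Row key for the tracker table: one row per ISO week (format is hypothetical)."""
    year, week, _ = d.isocalendar()
    return "%d-W%02d" % (year, week)

def track_key(d, audit_key):
    """Append the audit-table key as a column under that week's tracker row."""
    tracker.setdefault(week_bucket(d), []).append(audit_key)

track_key(date(2014, 2, 20), "audit:123")
print(tracker)
```

A periodic job can then scan one tracker row per week, process the listed audit keys, and delete the whole row when done.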
For large-volume big data scenarios we don't recommend using Cassandra as a
blob store, simply because of the intensive IO involved during compaction,
repair, etc. Cassandra is only well suited for metadata-type storage.
However, if you are fairly low volume then it's a different story, but if
you
On Thu, Feb 20, 2014 at 4:37 PM, Edward Capriolo wrote:
> Recommendations in Cassandra have a shelf life of about 1 to 2 years. If
> you try to assert a recommendation from a year ago, you stand a solid chance of
> someone telling you there is now a better way.
>
> Cassandra once loved being a schemaless
+1
I like the Hector client: it uses the Thrift interface and exposes APIs that
mirror how Cassandra physically stores the values.
On Thu, Feb 20, 2014 at 9:26 AM, Peter Lin wrote:
>
> I disagree with the sentiment that "thrift is not worth the trouble".
>
> CQL and all SQL inspired dialects li
In our testing USB tends to be slower. Something more internally integrated
would give you better performance.
Sent from my iPhone
On Nov 16, 2013, at 8:30 AM, Dan Simpson wrote:
> It doesn't seem like a great idea. The USB drives typically use dynamic wear
> leveling. See this a
Post your gc logs
Sent from my iPhone
On Nov 3, 2013, at 6:54 AM, Oleg Dulin wrote:
> Cass 1.1.11 ran out of memory on me with this exception (see below).
>
> My parameters are 8gig heap, new gen is 1200M.
>
> ERROR [ReadStage:55887] 2013-11-02 23:35:18,419 AbstractCassandraDaemon.java
> (l
are migrating to 1.2.X
> though. We had tuned bloom filters (0.1) and AFAIK making it lower than
> this won't matter.
>
> Thanks !
>
>
> On Tue, Oct 1, 2013 at 11:54 PM, Mohit Anchlia wrote:
>
>> Which Cassandra version are you on? Essentially heap size is func
Which Cassandra version are you on? Essentially heap size is function of
number of keys/metadata. In Cassandra 1.2 lot of the metadata like bloom
filters were moved off heap.
On Tue, Oct 1, 2013 at 9:34 PM, srmore wrote:
> Does anyone know what would roughly be the heap size for cassandra with
>
Your ParNew size is way too small. Generally a 4GB ParNew (-Xmn) works out
best for a 16GB heap.
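As a sketch only, that sizing would look something like this in cassandra-env.sh. The variable names follow the stock script, but treat the exact values as assumptions to be tuned:

```shell
# Hypothetical cassandra-env.sh excerpt: 16GB heap with a 4GB ParNew (-Xmn)
MAX_HEAP_SIZE="16G"
HEAP_NEWSIZE="4G"
JVM_OPTS="$JVM_OPTS -Xms${MAX_HEAP_SIZE} -Xmx${MAX_HEAP_SIZE} -Xmn${HEAP_NEWSIZE}"
```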
On Mon, Sep 23, 2013 at 9:05 PM, 谢良 wrote:
> it looks to me that "MaxTenuringThreshold" is too small, do you have any
> chance to try with a bigger one, like 4 or 8 or sth else?
>
> __
Did you start out your cluster after wiping all the sstables and commit
logs?
On Fri, Sep 20, 2013 at 3:42 PM, Suruchi Deodhar <
suruchi.deod...@generalsentiment.com> wrote:
> We have been trying to resolve this issue to find a stable configuration
> that can give us a balanced cluster with equal
ich we have
>> encountered before).
>>
>> I've attached the output of "nodetool ring" here.
>>
>>
>> On Thu, Sep 19, 2013 at 8:35 PM, Mohit Anchlia wrote:
>>
>>> One other thing I noticed is that you are using multiple racks, and that might
he column-family that had 6527744 keys before (load
> is now 1.08 GB as compared to 1.05 GB before), while the smallest node now
> has 71808 keys as compared to 3840 keys before (load is now 31.89 MB as
> compared to 1.12 MB before).
>
>
> On Thu, Sep 19, 2013 at 5:18 PM, Mohit
sing the Murmur3 partitioner with NetworkTopologyStrategy.
>
> Thanks,
> Suruchi
>
>
>
> On Thu, Sep 19, 2013 at 3:59 PM, Mohit Anchlia wrote:
>
>> Can you check cfstats to see number of keys per node?
>>
>>
>> On Thu, Sep 19, 2013 at 12:36 PM, Suruchi Deodhar <
>
Can you check cfstats to see number of keys per node?
On Thu, Sep 19, 2013 at 12:36 PM, Suruchi Deodhar <
suruchi.deod...@generalsentiment.com> wrote:
> Thanks for your replies. I wiped out my data from the cluster and also
> cleared the commitlog before restarting it with num_tokens=256. I then
I agree. We've had similar experience.
Sent from my iPhone
On Sep 7, 2013, at 6:05 PM, Edward Capriolo wrote:
> I have found row cache to be more trouble than benefit.
>
> The term fool's gold comes to mind.
>
> Using key cache and leaving more free main memory seems stable and does not
> have a
Are you not using RF >= 3 ?
On Fri, Sep 6, 2013 at 10:14 AM, Thapar, Vishal (HP Networking) <
vtha...@hp.com> wrote:
> My usage requirements are such that there should be least possible data
> loss even in case of a poweroff. When you say clean shutdown do you mean
> Cassandra service stop?
>
> I
In general, with LOCAL_QUORUM you should not see such an issue when one node
is slow. However, it could be because clients are still sending requests
to that node. Depending on what client library you are using, you could
try to take that node out of your connection pool. Not knowing the exact issue
y
If you have multiple DCs you at least want to upgrade to 1.0.11. There is
an issue where you might get errors during cross DC replication.
On Fri, Aug 30, 2013 at 9:41 AM, Mike Neir wrote:
> In my testing, mixing 1.0.9 and 1.2.8 seems to work fine as long as there
> is no need to do streaming op
You need to get it to 50% on each node to equally distribute the hash range. You
need to 1) calculate the new tokens and 2) move the nodes to those tokens, or use vnodes.
For the first option see:
http://www.datastax.com/docs/0.8/install/cluster_init
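For the first option, a balanced RandomPartitioner ring just spaces the initial tokens evenly over the 2^127 token range; a minimal sketch:

```python
def initial_tokens(node_count):
    """Evenly spaced initial tokens for RandomPartitioner's 0..2**127 range."""
    return [i * (2 ** 127) // node_count for i in range(node_count)]

# e.g. for a 2-node ring, one node stays at 0 and the other moves to 2**126
for token in initial_tokens(2):
    print(token)
```

You would then run `nodetool move <token>` on each node that is not already at its computed position.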
On Mon, Aug 12, 2013 at 12:06 PM, Morgan Segalis wrote:
> Hi ever
But the node might be streaming data as well; in that case the only option is to
restart the node that started the streaming operation.
Sent from my iPhone
On Aug 8, 2013, at 5:56 PM, Andrey Ilinykh wrote:
> nodetool repair just triggers repair procedure. You can kill nodetool after
> start, it doesn't change
We currently run automated repairs sequentially on all the nodes. However,
as we grow the cluster we now need to run repair on multiple nodes in
parallel to be able to finish it within gc_grace_seconds. Before I write
the script I was wondering if somebody already has a tool or a script that
figure
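One way such a script could pick which nodes to repair concurrently is to keep nodes in the same parallel group RF positions apart on the ring, so no two concurrent repairs share replicas. A hedged sketch (node names are placeholders, and it assumes SimpleStrategy-style placement on consecutive nodes):

```python
def repair_groups(nodes, rf):
    """Split the ring-ordered node list into rf groups; nodes inside a group
    sit rf positions apart, so concurrent repairs don't share replicas."""
    groups = [[] for _ in range(rf)]
    for i, node in enumerate(nodes):
        groups[i % rf].append(node)
    return groups

# Run `nodetool repair` in parallel within a group, one group after another.
print(repair_groups(["n1", "n2", "n3", "n4", "n5", "n6"], 3))
```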
What's your replication factor? Can you check tpstats and netstats to see if
you are getting more mutations on these nodes?
Sent from my iPhone
On Jul 16, 2013, at 3:18 PM, Jure Koren wrote:
> Hi C* user list,
>
> I have a curious recurring problem with Cassandra 1.2 and what seems like a
There is a new tracing feature in Cassandra 1.2 that might help you with
this.
On Tue, Jul 9, 2013 at 1:31 PM, Blair Zajac wrote:
> No idea on the logging, I'm pretty new to Cassandra.
>
> Regards,
> Blair
>
> On Jul 9, 2013, at 12:50 PM, hajjat wrote:
>
> > Blair, thanks for the clarification!
dra (out of 4GB).
>
>
> 2013/6/19 Mohit Anchlia
>
>> How much data do you have per node?
>> How much RAM per node?
>> How much CPU per node?
>> What is the avg CPU and memory usage?
>>
>> On Wed, Jun 19, 2013 at 12:16 AM, Joel Samuelsson <
How much data do you have per node?
How much RAM per node?
How much CPU per node?
What is the avg CPU and memory usage?
On Wed, Jun 19, 2013 at 12:16 AM, Joel Samuelsson wrote:
> My Cassandra ps info:
>
> root 26791 1 0 07:14 ?00:00:00 /usr/bin/jsvc -user
> cassandra -home /opt
Is your young generation size set to 4GB? Can you paste the output of ps
-ef|grep cassandra ?
On Tue, Jun 18, 2013 at 8:48 AM, Joel Samuelsson
wrote:
> Yes, like I said, the only relevant output from that file was:
> 2013-06-17T08:11:22.300+: 2551.288: [GC 870971K->216494K(4018176K),
> 145.18
Can you paste your GC config? Also, can you take a heap dump at 2 different
points so that we can compare them?
A quick thing to do would be a histo:live at 2 points and compare.
Sent from my iPhone
On Jun 15, 2013, at 6:57 AM, Takenori Sato wrote:
> > INFO [ScheduledTasks:1] 2013-04-15 14:00:02,74
Roughly how much data do you have per node?
Sent from my iPhone
On Feb 20, 2013, at 10:49 AM, "Hiller, Dean" wrote:
> I took this jmap dump of cassandra(in production). Before I restarted the
> whole production cluster, I had some nodes running compaction and it looked
> like all memory had
Can you post your GC settings? Also check the logs and see what they say.
Also post how many writes and reads, along with the avg row size.
Sent from my iPhone
On Dec 29, 2012, at 12:28 PM, rohit bhatia wrote:
> i assume u mean 8 seconds and not 8ms..
> thats pretty huge to be caused by gc. Is there lot of lo
How can this be resolved in this case?
On Wed, Sep 12, 2012 at 3:53 PM, Rob Coli wrote:
> On Tue, Sep 11, 2012 at 4:21 PM, Edward Sargisson
> wrote:
> > If the downed node is a seed node then neither of the replace a dead node
> > procedures work (-Dcassandra.replace_token and taking initial_to
Are both running on the same host?
On Fri, Sep 7, 2012 at 11:53 PM, Manu Zhang wrote:
> When I run Cassandra-trunk in Eclipse, nodetool fail to connect with the
> following error
> "Failed to connect to '127.0.0.1:7199': Connection refused"
> But if I run in terminal, all will be fine.
>
of my back log?
>
> Although we know when a network is flaky, we are interested in knowing how
> much data is piling up in local DC that needs to be transferred.
>
> Greatly appreciate your help.
>
> VR
>
>
> On Wed, Sep 5, 2012 at 8:33 PM, Mohit Anchlia wrote:
>
&g
As far as I know Cassandra doesn't use an internal queueing mechanism specific
to replication. Cassandra sends the write to the remote DC and after that it's
up to the TCP/IP stack to deal with buffering. If requests start to time out,
Cassandra would use HH up to a certain time. For a longer outage you would h
strategy_options I should be using the DC name
> from properfy file snitch right? Ours is “Fisher” and “TierPoint” so
> that’s what I used.****
>
> ** **
>
> *From:* Mohit Anchlia [mailto:mohitanch...@gmail.com]
> *Sent:* Monday, August 27, 2012 1:21 PM
>
> *To:* user@ca
> ** **
>
> On 25/08/2012, at 6:53 PM, Bryce Godfrey
> wrote:
>
>
>
>
>
> Yes
>
>
>
> [default@unknown] describe cluster;
>
> Cluster Information:
>
> Snitch: org.apache.cassandra.locator.PropertyFileSnitch
>
&g
use nodetool decommission and nodetool removetoken
On Sun, Aug 26, 2012 at 5:31 PM, Senthilvel Rangaswamy wrote:
> We have a cluster of 9 nodes in the ring. We would like SSD backed boxes.
> But we may not need 9
> nodes in that case. What is the best way to downscale the cluster to 6 or
> 3 nod
If you are starting out new, use composite column names/values, or you could
also use a JSON-style document as a column value.
On Fri, Aug 24, 2012 at 2:31 PM, Rob Coli wrote:
> On Fri, Aug 24, 2012 at 4:33 AM, Amit Handa wrote:
> > kindly help in resolving the following problem with respect to super
>
That's interesting. Can you do a describe cluster?
On Fri, Aug 24, 2012 at 12:11 PM, Bryce Godfrey
wrote:
> So I’m at the point of updating the keyspaces from Simple to
> NetworkTopology and I’m not sure if the changes are being accepted using
> Cassandra-cli.
>
> ** **
>
> I issue the change:*
Going through this page, it looks like indexes are stored locally:
http://www.datastax.com/dev/blog/cassandra-with-solr-integration-details .
My question is what happens if one of the Solr nodes crashes? Is the data
indexed again on those nodes?
Also, if RF > 1, then is the same data being indexe
I agree with Edward. We always develop our own stress tool that tests each
use case of interest. Every use case differs in ways that can only be tested
with a custom stress tool.
On Fri, Aug 10, 2012 at 7:25 AM, Edward Capriolo wrote:
> There are many YCSB forks on github that get opt
On Mon, Jul 23, 2012 at 11:16 AM, Ertio Lew wrote:
> I want to read columns for a randomly selected list of userIds(completely
> random). I fetch the data using userIds(which would be used as column names
> in case of single row or as rowkeys incase of 1 row for each user) for a
> selected list o
On Mon, Jul 23, 2012 at 11:00 AM, Ertio Lew wrote:
> For each user in my application, I want to store a *value* that is queried
> by using the userId. So there is going to be one column for each user
> (userId as col Name & *value* as col Value). Now I want to store these
> columns such that can
On Mon, Jul 23, 2012 at 10:53 AM, Ertio Lew wrote:
> Actually these columns are 1 for each entity in my application & I need to
> query at any time columns for a list of 300-500 entities in one go.
Can you describe your situation with a small example?
On Mon, Jul 23, 2012 at 10:07 AM, Ertio Lew wrote:
> My major concern is that is it too bad retrieving 300-500 rows (each for a
> single column) in a single read query that I should store all these(around
> a hundred million) columns in a single row?
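One common way to avoid putting all hundred million columns in a single row is to hash each entity id into a fixed number of bucket rows; a sketch (the bucket count and key format are assumptions):

```python
import zlib

NUM_BUCKETS = 1024  # assumption: tune to keep each row comfortably sized

def bucket_row_key(entity_id):
    """Deterministically map an entity id to one of NUM_BUCKETS row keys."""
    return "bucket:%d" % (zlib.crc32(entity_id.encode()) % NUM_BUCKETS)

# Reading 300-500 entities then touches at most that many bucket rows,
# and each row stays far below the single-giant-row extreme.
print(bucket_row_key("user:42"))
```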
You could create multiple rows and each row
Sent from my iPad
On Jun 28, 2012, at 8:45 AM, Christof Bornhoevd wrote:
> Hi,
>
> we are using Cassandra v1.0.8 with Hector v1.0-5 and would like to move our
> current system to an operational setting based on Amazon AWS. What are best
> practices for addessing security for Cassandra on A
estion. In general I don't think you
can selectively decide on HH. Besides, HH should only be used when the
outage is in minutes; for longer outages, using HH would only create memory
pressure.
> On Tuesday, June 26, 2012, Mohit Anchlia wrote:
>
>>
>> On Tue, Jun 26, 2012 at 7:52
On Tue, Jun 26, 2012 at 7:52 AM, Karthik N wrote:
> My Cassandra ring spans two DCs. I use local quorum with replication
> factor=3. I do a write in DC1 with local quorum. Data gets written to
> multiple nodes in DC1. For the same write to propagate to DC2 only one
> copy is sent from the coordin
I agree with Brandon. We only use it for enhancing the authz and authn modules
to use LDAP, which C* currently doesn't provide.
On Mon, May 14, 2012 at 11:08 PM, Brandon Williams wrote:
> On Tue, May 15, 2012 at 12:53 AM, Ertio Lew wrote:
> > @Brandon : I just created a jira issue to request this typ
That's right. Create a class that implements the required interface, then
drop that jar in the lib directory and start the cluster.
On Mon, May 14, 2012 at 11:41 AM, Kirk True wrote:
> Disclaimer: I've never tried, but I'd imagine you can drop a JAR
> containing the class(es) into the lib directory
I thought so. Is there a way I can unload and load the data after dropping the
CF and re-creating it with the reversed type?
On Sat, May 5, 2012 at 7:11 AM, Edward Capriolo wrote:
> You can not update comparators because they effect the on disk ordering.
>
> On Sat, May 5, 2012 at 2:11 AM, Mohi
Is it possible to update a CF definition to use the "reversed" type? If it's
possible, then what happens to the old values? Do they still remain ordered
in ascending order?
+1
On Tue, May 1, 2012 at 12:06 PM, Edward Capriolo wrote:
> Also there are some tickets in JIRA to impose a max sstable size and
> some other related optimizations that I think got stuck behind levelDB
> in coolness factor. Not every use case is good for leveled so adding
> more tools and optimi
at columns that falls outside of it
>
> **
>
> *Von:* Mohit Anchlia [mailto:mohitanch...@gmail.com]
> *Gesendet:* Freitag, 30. März 2012 16:57
>
> *An:* user@cassandra.apache.org
> *Betreff:* Re: cassandra gui
>
> ** **
>
> On Thu, Mar 29, 2012 at 10:08 PM, Mar
On Thu, Mar 29, 2012 at 10:08 PM, Markus Wiesenbacher | Codefreun.de <
m...@codefreun.de> wrote:
> Hi,
>
> yes you can insert data into cassandra with apollo, just try the demo
> center: http://www.codefreun.de/apolloUI/
>
> You can login by just press the login-button (autologin) and play around
.0.0 does not generate cross-dc forwarding message at all, so you're
> safe on that side.
>
> Is cross-dc forwarding different than replication?
> --
> Sylvain
>
> On Thu, Mar 29, 2012 at 9:33 PM, Mohit Anchlia
> wrote:
> > Any updates?
> >
> >
> >
Any updates?
On Thu, Mar 29, 2012 at 7:31 AM, Mohit Anchlia wrote:
> This is from NEWS.txt. So my question is if we are on 1.0.0-2 release do
> we still need to upgrade since this impacts releases between 1.0.3-1.0.5?
> -
> If you are running a multi datacenter setup, you shoul
or any
> details on the upgrade path for these versions).
> The incompatibility here is only between 1.1.0-beta1 and 1.1.0-beta2.
>
> --
> Sylvain
>
> On Thu, Mar 29, 2012 at 2:50 AM, Mohit Anchlia
> wrote:
> > We are currently using 1.0.0-2 version. Do we still need
We are currently using 1.0.0-2 version. Do we still need to migrate to the
latest release of 1.0 before migrating to 1.1? Looks like incompatibility
is only between 1.0.3-1.0.8.
On Tue, Mar 27, 2012 at 6:42 AM, Benoit Perroud wrote:
> Thanks for the quick feedback.
>
> I will drop the schema t
ickle.com
>
> On 27/03/2012, at 6:21 AM, Mohit Anchlia wrote:
>
> Thanks but if I do have to specify start and end columns then how much
> overhead roughly would that translate to since reading metadata should be
> constant overall?
>
> On Mon, Mar 26, 2012 at 10:18 AM, aa
/07/04/Cassandra-Query-Plans/
>
> Tl;Dr; Select columns with no start, in the natural Comparator order.
>
> Cheers
>
>
>-
> Aaron Morton
> Freelance Developer
> @aaronmorton
> http://www.thelastpickle.com
>
> On 25/03/2012, at 2:25 PM, Mohit Anchl
I have rows with around 2K-50K columns, but when I do a query I only need to
fetch a few columns between start and end columns. I was wondering what
performance overhead is caused by using a slice query with start and end
columns.
Looking at the code, it looks like when you give start and end column
On Sun, Feb 26, 2012 at 12:18 PM, aaron morton wrote:
> Nathan Milford has a post about taking a node down
>
> http://blog.milford.io/2011/11/rolling-upgrades-for-cassandra/
>
> The only thing I would do differently would be turn off thrift first.
>
> Cheers
>
Isn't decommission meant to do the sa
;
> On 2/22/2012 1:34 PM, Mohit Anchlia wrote:
>
> Outside on the file system and a pointer to it in C*
>
> On Wed, Feb 22, 2012 at 10:03 AM, Rafael Almeida wrote:
>
>> Keep them where?
>>
>> --
>> *From:* Mohit Anchlia
>
Outside on the file system and a pointer to it in C*
On Wed, Feb 22, 2012 at 10:03 AM, Rafael Almeida wrote:
> Keep them where?
>
> --
> *From:* Mohit Anchlia
> *To:* user@cassandra.apache.org
> *Cc:* potek...@bnl.gov
> *Sent:* Wednesday, Febr
In my opinion, if you are a busy site or application, keep blobs out of the
database.
On Wed, Feb 22, 2012 at 9:37 AM, Dan Retzlaff wrote:
> Chunking is a good idea, but you'll have to do it yourself. A few of the
> columns in our application got quite large (maybe ~150MB) and the failure
> mode was
Does it work with iptables disabled?
You could add logging to your firewall rules to see if the firewall is
dropping the packets.
On Sun, Feb 5, 2012 at 5:35 PM, Roshan wrote:
> Hi
>
> I have 2 node Cassandra cluster and each linux box configured with a
> firewall. The ports 7000, 7199 and 9160 are open
hen read from.
>
> On Fri, Feb 3, 2012 at 10:31 AM, Mohit Anchlia wrote:
>> On Fri, Feb 3, 2012 at 7:32 AM, Jonathan Ellis wrote:
>>> It's a warn because it's nonsense for the JVM to report that an column
>>> + overhead, takes less space than just the col
d on WARN and ERROR. But if there is nothing to do then it
probably is just an INFO.
> On Tue, Jan 31, 2012 at 9:41 PM, Mohit Anchlia wrote:
>> I guess this is not really a WARN in that case.
>>
>> On Tue, Jan 31, 2012 at 4:29 PM, aaron morton
>> wrote:
>>> The r
I guess this is not really a WARN in that case.
On Tue, Jan 31, 2012 at 4:29 PM, aaron morton wrote:
> The ratio is the ratio of serialised bytes for a memtable to actual JVM
> allocated memory. Using a ratio below 1 would imply the JVM is using less
> bytes to store the memtable in memory than i
I have the same experience and wonder what's causing it. One thing I
noticed is that it happens when the server has been idle for some time and
the load then starts going high; that's when I start to see these messages.
On Mon, Jan 30, 2012 at 4:54 PM, Roshan wrote:
> Hi All
>
> Time to time I am seen this below
I think the problem arises when you have data in a column that you need
to run ad-hoc queries on and which is not denormalized. In most cases it's
difficult to predict the type of query that would be required.
Another way of solving this could be to index the fields in a search engine.
On Fri, Jan 20, 2012
Which version of Java do you use? Can you try reducing NewSize
and increasing the old generation? If you are on an old version of Java, I
also recommend upgrading it.
On Thu, Jan 19, 2012 at 3:27 AM, Rene Kochen
wrote:
> Thanks for your comments. The application is indeed suffering from a
You need to shard your rows
On Wed, Jan 18, 2012 at 5:46 PM, Kamal Bahadur wrote:
> Anyone?
>
>
> On Wed, Jan 18, 2012 at 9:53 AM, Kamal Bahadur
> wrote:
>>
>> Hi All,
>>
>> It is great to know that Cassandra column family can accommodate 2 billion
>> columns per row! I was reading about how Cas
Have you tried running repair first on each node? Also, verify using
df -h on the data dirs
On Tue, Jan 17, 2012 at 7:34 AM, Marcel Steinbach
wrote:
> Hi,
>
> we're using RP and have each node assigned the same amount of the token
> space. The cluster looks like that:
>
> Address Status
Is it possible to add Brisk-only nodes to a standard C* cluster? So if
we have nodes A, B, C with standard C*, can we then add Brisk nodes D, E, F for
analytics?
What's the best way to install C*? Any good links?
Is it better to just create instances and install the rpms on them first,
just like a regular cluster, and then create an image from that? I am assuming
it's possible.
Are there any known issues when running C* on EC2?
How do other C* users deal with instance fa
ets it looks like this has been tried
> before, and for various reasons was not added. It's definitely
> non-trivial to get right.
>
> On Fri, 6 Jan 2012 13:33:02 -0800
> Mohit Anchlia wrote:
>> This looks like right way to do it. But remember this still doesn't
>
andra
>> a month or so back on this list.
>>
>> -Jeremiah
>>
>> On 01/06/2012 02:42 PM, Bryce Allen wrote:
>> > On Fri, 6 Jan 2012 10:38:17 -0800
>> > Mohit Anchlia wrote:
>> >> It could be as simple as reading before writing to make sure tha
the "tracker" CF too, no?
>
>
> On Jan 6, 2012, at 10:38 AM, Mohit Anchlia wrote:
>
>> On Fri, Jan 6, 2012 at 10:03 AM, Drew Kutcharian wrote:
>>> Hi Everyone,
>>>
>>> What's the best way to reliably have unique constraints like function
On Fri, Jan 6, 2012 at 10:03 AM, Drew Kutcharian wrote:
> Hi Everyone,
>
> What's the best way to reliably have unique constraints like functionality
> with Cassandra? I have the following (which I think should be very common)
> use case.
>
> User CF
> Row Key: user email
> Columns: userId: UUID
Are all your nodes equally balanced in terms of read requests? Are you
using RandomPartitioner? Are you reading using indexes?
The first thing you can do is compare iostat -x output between the 2 nodes
to rule out any IO issues, assuming your read requests are equally
balanced.
On Fri, Jan 6, 2012 at
You could read using a Cassandra client and write to HDFS using the Hadoop FS API.
On Fri, Dec 23, 2011 at 11:20 PM, ravikumar visweswara
wrote:
> Jeremy,
>
> We use cloudera distribution for our hadoop cluster and may not be possible
> to migrate to brisk quickly because of flume/hue dependencies. Did
Increasing memory in this case may not solve the problem. Share some
information about your workload: cluster configuration, cache sizes, etc.
You can also try getting a Java heap histogram to get more info on what's on
the heap.
On Mon, Dec 19, 2011 at 7:35 AM, Rene Kochen
wrote:
> I recently se
> bart@node1:~$ nodetool -h localhost getendpoints A UserDetails 4545027
> 192.168.81.5
> 192.168.81.2
> 192.168.81.3
Can you see what happens if you stop C*, say on node .5, and then write and
read at quorum?
On Wed, Dec 14, 2011 at 7:06 AM, Bart Swedrowski wrote:
>
>
> On 14 December 2011 14:58, wro
On Mon, Nov 21, 2011 at 11:47 AM, Edward Capriolo wrote:
>
>
> On Mon, Nov 21, 2011 at 3:30 AM, Philippe wrote:
>>
>> I don't remember your exact situation but could it be your network
>> connectivity?
>> I know I've been upgrading mine because I'm maxing out fastethernet on a
>> 12 node cluster.
On Sun, Nov 20, 2011 at 4:01 AM, Boris Yen wrote:
> A quick question, what if DC2 is down, and after a while it comes back on.
> how does the data get sync to DC2 in this case? (assume hint is disable)
> Thanks in advance.
Manually: use nodetool repair in a rolling fashion on all the nodes of DC2.
f GC logs
including ParNew and other major phases recorded in the logs.
Are there any significant writes, memtable flushes, etc. occurring during
this time? How many reads/sec and writes/sec?
What's the size of the rows and columns that you are trying to retrieve?
>
> On 11/18/11 2:40 PM
On Fri, Nov 18, 2011 at 1:46 PM, Todd Burruss wrote:
> Ok, I figured something like that. Switching to
> ConcurrentLinkedHashCacheProvider I see it is a lot better, but still
> instead of the 25-30ms response times I enjoyed with no caching, I'm
> seeing 500ms at 100% hit rate on the cache. No o
On Fri, Nov 18, 2011 at 9:42 AM, Sylvain Lebresne wrote:
> On Fri, Nov 18, 2011 at 6:31 PM, Mohit Anchlia wrote:
>> On Fri, Nov 18, 2011 at 7:47 AM, Sylvain Lebresne
>> wrote:
>>> On Fri, Nov 18, 2011 at 4:23 PM, Mohit Anchlia
>>> wrote:
>>>> On F
On Fri, Nov 18, 2011 at 7:47 AM, Sylvain Lebresne wrote:
> On Fri, Nov 18, 2011 at 4:23 PM, Mohit Anchlia wrote:
>> On Fri, Nov 18, 2011 at 6:39 AM, Sylvain Lebresne
>> wrote:
>>> On Fri, Nov 18, 2011 at 1:53 AM, Todd Burruss wrote:
>>>> I'm using c
Secondary indexes in Cassandra are not a good fit for high-cardinality values.
On Fri, Nov 18, 2011 at 7:14 AM, Dan Hendry wrote:
> I they are not limited to repeating values but the Datastax docs[1] on
> secondary indexes certainly seem to indicate they would be a poor fit for
> this case (high rea
On Fri, Nov 18, 2011 at 6:39 AM, Sylvain Lebresne wrote:
> On Fri, Nov 18, 2011 at 1:53 AM, Todd Burruss wrote:
>> I'm using cassandra 1.0. Been doing some testing on using cass's cache.
>> When I turn it on (using the CLI) I see ParNew jump from 3-4ms to
>> 200-300ms. This really screws with
On Mon, Nov 14, 2011 at 4:44 PM, Jake Luciani wrote:
> Re Simpler "elasticity":
> Latest opscenter will now rebalance cluster optimally
> http://www.datastax.com/dev/blog/whats-new-in-opscenter-1-3
>
Does it cause any impact on reads and writes while the rebalance is in
progress? How is it handled
Can you temporarily increase the heap size and try?
On Fri, Nov 11, 2011 at 5:21 PM, Oleg Tsvinev wrote:
> Hi everybody,
>
> We set row cache too high, 1 or so and now all our 6 nodes fail
> with OOM. I believe that high row cache causes OOMs.
>
> Now, we trying to change row cache sizes u
We lock down ssh to root from any network. We also provide individual
logins, including sysadmin, and they go through LDAP authentication.
Anyone who does sudo su as root gets logged and alerted via trapsend.
We use firewalls and also have a separate vlan for datastore servers.
We then open only speci
Transparent on-disk encryption with a pluggable key provider would also be
really helpful for securing sensitive information.
On Sun, Nov 6, 2011 at 9:42 AM, Aaron Turner wrote:
> The intent was to have a lighter solution for common problems then
> having to go with Hadoop or streaming large quantities
On Thu, Nov 3, 2011 at 5:46 AM, Peter Tillotson wrote:
> I'm using Cassandra as a big graph database, loading large volumes of data
> live and linking on the fly.
Not sure Cassandra is the right fit for modeling complex vertices and edges.
> The number of edges grow geometrically with data added, and
On Sun, Oct 30, 2011 at 6:53 PM, Chris Goffinet wrote:
>
>
> On Sun, Oct 30, 2011 at 3:34 PM, Sorin Julean
> wrote:
>>
>> Hey Chris,
>>
>> Thanks for sharing all the info.
>> I have few questions:
>> 1. What are you doing with so much memory :) ? How much of it do you
>> allocate for heap ?
>
On Sat, Oct 29, 2011 at 11:23 AM, Aditya Narayan wrote:
> @Mohit:
> I have stated the example scenarios in my first post under this heading.
> Also I have stated above why I want to split that data in two rows & like
> Ikeda below stated, I'm too trying out to prevent the frequently accessed
> row