Policy' and are the clients from
the datacenter related to DC2, using 'new DCAwareRoundRobinPolicy("DC2")'?
This is really the only thing I can think about right now...
C*heers,
---
Alain Rodriguez - al...@thelastpickle.com
snitch, please share
cassandra-topology.properties too.
Thanks
Anuj
Sent from Yahoo Mail on
Android<https://overview.mail.yahoo.com/mobile/?.src=Android>
On Wed, 13 Apr, 2016 at 9:46 PM, Walsh, Stephen <stephen.wa...@aspect.com> wrote:
Right again Alain
We use the DCAwareRo
'new DCAwareRoundRobinPolicy("DC2")' on the client that should be using 'DC2' (see the sketch below).
Make sure ports are open.
This should be it,
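For illustration, a minimal sketch of that client-side configuration with the
DataStax Java driver 2.x might look like this (the contact point address is a
placeholder, not taken from this thread):

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.Session;
    import com.datastax.driver.core.policies.DCAwareRoundRobinPolicy;
    import com.datastax.driver.core.policies.TokenAwarePolicy;

    public class Dc2Client {
        public static void main(String[] args) {
            // Pin this client to its local datacenter "DC2"; the token-aware wrapper
            // still routes each request to a replica inside that DC.
            Cluster cluster = Cluster.builder()
                    .addContactPoint("10.0.2.10")   // placeholder seed node in DC2
                    .withLoadBalancingPolicy(
                            new TokenAwarePolicy(new DCAwareRoundRobinPolicy("DC2")))
                    .build();
            Session session = cluster.connect();
            System.out.println("Connected to: " + cluster.getMetadata().getClusterName());
            cluster.close();
        }
    }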
C*heers,
---
Alain Rodriguez - al...@thelastpickle.com
France
The Last Pickle - Apache Cassandra Consulting
http://www.thelastpickle.com
2016-04-13 16:
the policy
https://datastax.github.io/python-driver/api/cassandra/policies.html
http://stackoverflow.com/questions/22813045/ability-to-write-to-a-particular-cassandra-node
Regards,
Bhuvan
On Wed, Apr 13, 2016 at 6:54 PM, Walsh, Stephen <stephen.wa...@aspect.com> wrote:
Hi there,
So we have 2 datacenters with 3 nodes each.
Replication factor is 3 per DC (so each node has all the data).
We have an application in each DC that writes to that Cassandra DC.
Now, due to a misconfiguration in our application, we saw that our
applications in both DCs were pointing to DC1.
Sorry for referring to you by your last name in my last email, I got
confused.
On Thu, Dec 10, 2015 at 2:09 AM, Walsh, Stephen <stephen.wa...@aspect.com> wrote:
8GB is the max recommended heap size, and that’s if you have 32GB or more
available.
We use 6GB on our 16GB machines and it’s very stable.
The out of memory could be coming from Cassandra reloading
compactions_in_progress into memory; you can check this from the log files if
need be.
You can
/qsSimpleClientCreate_t.html
Just to make sure it really is connecting only to the local cluster and using
round robin and whether it is token aware.
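One way to double-check that from the client side (a sketch, not from the
thread; the contact point is a placeholder) is to list every host the Java
driver discovers together with the datacenter it reports:

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.Host;

    public class ShowHosts {
        public static void main(String[] args) {
            Cluster cluster = Cluster.builder()
                    .addContactPoint("10.0.2.10")   // placeholder
                    .build();
            cluster.init();
            // Print each known host with its datacenter and rack, to confirm which
            // nodes the configured load balancing policy will actually route to.
            for (Host host : cluster.getMetadata().getAllHosts()) {
                System.out.println(host.getAddress() + " dc=" + host.getDatacenter()
                        + " rack=" + host.getRack());
            }
            cluster.close();
        }
    }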
-- Jack Krupansky
On Fri, Dec 4, 2015 at 10:51 AM, Walsh, Stephen <stephen.wa...@aspect.com> wrote:
Thanks for your input, but I think I’ve already answered most of your questions.
How many clients do you have performing reads?
--
le" replica on
each request to avoid always returning the primary replica.
On Wed, Dec 2, 2015 at 6:44 PM, Walsh, Stephen <stephen.wa...@aspect.com> wrote:
Very good questions.
We have reads and writes at LOCAL_ONE.
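As a sketch of what that looks like with the Java driver (the keyspace, table
and contact point below are invented, not from this thread), the consistency
level can be set once as a default or per statement:

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.ConsistencyLevel;
    import com.datastax.driver.core.QueryOptions;
    import com.datastax.driver.core.Session;
    import com.datastax.driver.core.SimpleStatement;

    public class LocalOneExample {
        public static void main(String[] args) {
            // Default every request to LOCAL_ONE, so each application only needs a
            // single replica in its own datacenter to answer.
            Cluster cluster = Cluster.builder()
                    .addContactPoint("10.0.1.10")   // placeholder
                    .withQueryOptions(new QueryOptions()
                            .setConsistencyLevel(ConsistencyLevel.LOCAL_ONE))
                    .build();
            Session session = cluster.connect();

            // The level can also be overridden on individual statements.
            SimpleStatement read = new SimpleStatement("SELECT * FROM demo.users WHERE id = 1");
            read.setConsistencyLevel(ConsistencyLevel.LOCAL_ONE);
            session.execute(read);

            cluster.close();
        }
    }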
There are 2 applications (1 for each DC) that read and write a
DCAwareRoundRobin ?
On Wed, Dec 2, 2015 at 3:36 PM, Walsh, Stephen <stephen.wa...@aspect.com> wrote:
Hey all,
Thanks for taking the time to help.
So we have 6 Cassandra nodes in 2 Data Centers.
Both Data Centers have a replication of 3 - so all nodes have all the data.
Over the last 2 days we've noticed that data reads/writes have shifted from
balanced to unbalanced
(Nodetool status still sho
Hey all,
We're testing Cassandra failover over 2 datacentres.
There are 3 nodes on each.
All CFs have a replication of 2 on both datacentres (DC1:2, DC2:2).
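For reference, a keyspace with that replication layout would be defined
roughly as below (a sketch; the keyspace name and contact point are invented):

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.Session;

    public class CreateKeyspace {
        public static void main(String[] args) {
            Cluster cluster = Cluster.builder()
                    .addContactPoint("10.0.1.10")   // placeholder
                    .build();
            Session session = cluster.connect();
            // Two replicas per datacenter. LOCAL_QUORUM needs floor(2/2) + 1 = 2
            // replicas, which the surviving DC can still provide on its own.
            session.execute("CREATE KEYSPACE IF NOT EXISTS demo WITH replication = "
                    + "{'class': 'NetworkTopologyStrategy', 'DC1': 2, 'DC2': 2}");
            cluster.close();
        }
    }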
When one datacentre goes down, all queries go to the other.
This works fine for LOCAL_QUORUM queries, as 2 replicas of the data exist
Thanks to both Nate and Jeff, for both highlighting the bug and the
configuration issues.
We've upgraded to 2.1.11
Lowered our memtable_cleanup_threshold to .11
Lowered our thrift_framed_transport_size_in_mb to 15
We kicked off another run.
The result was that Cassandra failed after 1 hour.
SS
Hey all,
First off, thank you for taking the time to read this.
---
SYSTEM SPEC
---
We're using Cassandra Version 2.1.6 (please don't ask us to upgrade just yet
unless you are aware of an existing bug for this issue)
We are running on AWS 4-core, 16 GB servers
We a
It did, but I ran it again on one node – that node never recovered. ☹
From: Robert Coli [mailto:rc...@eventbrite.com]
Sent: 02 October 2015 21:20
To: user@cassandra.apache.org
Subject: Re: Consistency Issues
On Fri, Oct 2, 2015 at 1:32 AM, Walsh, Stephen <stephen.wa...@aspect.com>
0.100 14.187 Allocation Failure No GC
0.00 70.57 90.30 26.29 97.86 96.62 119 14.087 2 0.100 14.187 Allocation Failure No GC
0.00 70.57 90.40 26.29 97.86 96.62 119 14.087 2 0.100 14.187 Allocation Failure No GC
From: Walsh, Stephen [mailto:steph
m/javase/7/docs/technotes/tools/share/jstat.html#gccause_option
On Thu, Oct 1, 2015 at 4:50 AM Walsh, Stephen <stephen.wa...@aspect.com> wrote:
If you’re looking for the clean-up of the old gen in the JVM heap, it doesn’t
happen.
We have a new gen turning 15 times before it’s pushed t
Thanks Jake, I’ll try out 2.1.9 to see if it resolves the issue, and I’ll
try “nodetool resetlocalschema” now to see if it helps.
Cassandra is 2.1.6
OpsCenter is 5.2.1
From: Jake Luciani [mailto:jak...@gmail.com]
Sent: 01 October 2015 14:00
To: user
Subject: Re: Consistency Issues
Onur, was
tware Engineer | @calonso<https://twitter.com/calonso>
On 1 October 2015 at 11:09, Walsh, Stephen <stephen.wa...@aspect.com> wrote:
I did think of that and they are all the same version ☺
From: Carlos Alonso [mailto:i...@mrcalonso.com]
Sent:
cs are taking?
Do you see any especially long ones?
On 1 Oct 2015 09:37, "Walsh, Stephen" <stephen.wa...@aspect.com> wrote:
There is no load balancer in front of Cassandra, it’s in front of our
application.
Everyone seems hung up on this point? But it’s not the root cause of the
ancho [mailto:sancho.rica...@gmail.com]
Sent: 01 October 2015 09:39
To: user@cassandra.apache.org
Subject: RE: Consistency Issues
Can you tell us how much time your gcs are taking?
Do you see any especially long ones?
On 1 Oct 2015 09:37, "Walsh, Stephen" <stephen.wa...@aspect.com> wrote:
September 2015 18:45
To: user@cassandra.apache.org
Subject: Re: Consistency Issues
On Wed, Sep 30, 2015 at 9:06 AM, Walsh, Stephen <stephen.wa...@aspect.com> wrote:
We never had these issues with our first run. It’s only when we added another 25%
of writes.
As Jack said, you are pr
On Wed, Sep 30, 2015 at 12:06 PM, Walsh, Stephen <stephen.wa...@aspect.com> wrote:
Many thanks all,
The Load balancers are only between our own no
stly above the minimum required to avoid OOM.
-- Jack Krupansky
On Wed, Sep 30, 2015 at 11:22 AM, Walsh, Stephen <stephen.wa...@aspect.com> wrote:
More information,
I’ve just set up an NTP server to rule out any timing issues.
And I also see this in the Cassandra node log files:
osing
org.apache.cassandra.db.UnknownColumnFamilyException: Couldn't find cfId=cf411b50-6785-11e5-a435-e7be20c92086
Any idea what this is related to?
All these tests are run with a clean setup of Cassandra nodes followed by a
nodetool repair.
Before any data hits them.
From: Walsh, Stephen [mailto:
Hi there,
We are having some issues with consistency. I'll try my best to explain.
We have an application that was able to
Write ~1000 p/s
Read ~300 p/s
Total CF created: 400
Total Keyspaces created : 80
On a 4 node Cassandra Cluster with
Version 2.1.6
Replication : 3
Consistency (Read & Write)
Although I didn’t get an answer on this, it’s worth noting that removing the
compactions_in_progress folder resolved the issue.
From: Walsh, Stephen
Sent: 17 September 2015 16:37
To: 'user@cassandra.apache.org'
Subject: RE: Cassandra shutdown during large number of compact
ile
dfile org.apache.cassandra.io.compress.CompressedRandomAccessReader path =
"/var/lib/cassandra/data/system/compactions_in_progress-55080ab05d9c388690a4acb25fe1f77b/system-compactions_in_progress-ka-71661-Data.db"
131840 104
Steve
From: Walsh, Stephen
Sent: 17 September 2015 15:33
To: user@cassandr
Hey all, I was hoping someone had a similar issue.
We're using 2.1.6 and shut down a testbed in AWS thinking we were finished with
it.
We started it back up today and saw that only 2 of 4 nodes came up.
Seems there was a lot of compaction happening at the time it was shut down;
Cassandra tries to s
between SQL and CQL and assuming that one
could actually drop a table and recreate the table as a method of deleting all
the data...totally crazy, I know...
On Fri, May 22, 2015 at 11:06 AM, Walsh, Stephen <stephen.wa...@aspect.com> wrote:
Thanks for the link,
I don’t think your link i
On Fri, May 22, 2015 at 7:53 AM, Walsh, Stephen <stephen.wa...@aspect.com>
Can someone share the content of this link please? I’m aware of issues where
recreating keyspaces can cause inconsistency in 2.0.13 if memtables are not
flushed beforehand; is this the issue that is resolved?
From: Ken Hancock [mailto:ken.hanc...@schange.com]
Sent: 21 May 2015 17:13
To: user
wrote:
For security reasons, Cassandra changed JMX to listen on localhost only
since version 2.0.14/2.1.4.
From NEWS.txt:
"The default JMX config now listens to localhost only. You must enable
the other JMX flags in cassandra-env.sh manually. "
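In other words, nodetool’s JMX connection (JMX over RMI, default port 7199)
will only succeed against a remote address once those flags are enabled. A
small probe of the same kind of connection, as a sketch with a placeholder IP,
looks like this:

    import javax.management.MBeanServerConnection;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class JmxProbe {
        public static void main(String[] args) throws Exception {
            // Same JMX-over-RMI endpoint that nodetool -h <host> talks to.
            JMXServiceURL url = new JMXServiceURL(
                    "service:jmx:rmi:///jndi/rmi://192.168.1.10:7199/jmxrmi");
            JMXConnector connector = JMXConnectorFactory.connect(url);
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            System.out.println("Connected, MBean count: " + mbs.getMBeanCount());
            connector.close();
        }
    }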
On Thu, May 21, 2015 at
Just wondering if anyone else is seeing this issue with nodetool after
installing 2.1.5
This works
nodetool -h 127.0.0.1 cfstats keyspace.table
This works
nodetool -h localhost cfstats keyspace.table
This works
nodetool cfstats keyspace.table
This doesn't work
nodetool -h 192.168.1.10 cfsta
From: Walsh, Stephen [mailto:stephen.wa...@aspect.com]
Sent: Thursday, May 14, 2015 11:39 AM
To: user@cassandra.apache.org
Subject: RE: Insert Vs Updates - Both create tombstones
Thank you,
We are updating the entire row (all columns) each second via the “insert”
different columns then different columns might expire at different times.
From: Walsh, Stephen [mailto:stephen.wa...@aspect.com]
Sent: Wednesday, May 13, 2015 1:35 PM
To: user@cassandra.apache.org
Subject: Insert Vs Updat
Quick question,
Our team is in the middle of a debate: we are trying to find out if an update
on a row with a TTL will create a tombstone.
E.g. we have one row with a TTL; if we keep "updating" that row before the TTL is
hit, will a tombstone be created?
I believe it will, but want to confirm.
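For concreteness, a sketch of the kind of repeated TTL'd write being asked
about (keyspace, table and column names are invented; DataStax Java driver):

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.Session;

    public class TtlUpsert {
        public static void main(String[] args) {
            Cluster cluster = Cluster.builder()
                    .addContactPoint("10.0.1.10")   // placeholder
                    .build();
            Session session = cluster.connect();
            // Rewrite the row each second with a fresh 10 second TTL; whether the
            // expiring cells end up behaving as tombstones is the question above.
            session.execute(
                    "INSERT INTO demo.presence (id, status) VALUES (?, ?) USING TTL 10",
                    "user-1", "online");
            cluster.close();
        }
    }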
So if t
eased keeping in mind read latency needs.
Thanks
Anuj Wadehra
Sent from Yahoo Mail on
Android<https://overview.mail.yahoo.com/mobile/?.src=Android>
From:"Walsh, Stephen"
mailto:stephen.wa...@aspect.com>>
Date:Wed, 22 Apr, 2015 at 7:
m not familiar with the java driver - but 'file not found'
indicates something is inconsistent.
On Tue, Apr 21, 2015 at 12:22 PM, Walsh, Stephen wrote:
Thanks for all your help Michael,
Our data will change through the day, so data with a TTL will eventually get
dropped, and new data
for sure.
We also run periodic repairs prophylactically.
But if you never delete and always ttl by the same amount, you do not have to
worry about zombie data being resurrected - the main reason for running repair
within gc_grace_seconds.
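As an illustration of the setting being referred to (the table name is
invented, and the value shown is just the default, not a recommendation),
gc_grace_seconds is configured per table:

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.Session;

    public class GcGrace {
        public static void main(String[] args) {
            Cluster cluster = Cluster.builder()
                    .addContactPoint("10.0.1.10")   // placeholder
                    .build();
            Session session = cluster.connect();
            // Tombstones are kept for gc_grace_seconds so repair can propagate deletes
            // before they are purged; 864000 seconds (10 days) is the default.
            session.execute("ALTER TABLE demo.presence WITH gc_grace_seconds = 864000");
            cluster.close();
        }
    }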
On Tue, Apr 21, 2015 at 11:49 AM, Walsh, St
do. There have been discussions on the list over the last few
years re this topic.
ml
On Tue, Apr 21, 2015 at 11:14 AM, Walsh, Stephen <stephen.wa...@aspect.com> wrote:
We were chatting to Jon Haddad about a week ago about our tombstone issue
using Cassandra 2.0.14
To summarize:
We have a 3 node cluster with replication-factor=3 and compaction = SizeTiered
We use 1 keyspace with 1 table
Each row has about 40 columns
Each row has a TTL of 10 seconds
We insert a