No, this is a different cluster.
Kunal
On 13-Mar-2018 6:27 AM, "Kenneth Brotman" wrote:
Kunal,
Is this the GCE cluster you are speaking of in the “Adding new DC?” thread?
Kenneth Brotman
*From:* Kunal Gangakhedkar [mailto:kgangakhed...@gmail.com]
*Sent:* Sunday, March 11, 2018 […]
> Kenneth Brotman
>
> *From:* Kunal Gangakhedkar [mailto:kgangakhed...@gmail.com]
> *Sent:* Monday, March 12, 2018 4:24 PM
> *To:* user@cassandra.apache.org
> *Cc:* Nikhil Soman
> *Subject:* Re: [EXTERNAL] RE: Adding new DC?
>
[…] Google cloud instances (not restarted the
service as per the doc).
In the AWS instance, I had added 'auto_bootstrap: false' - the doc says we
need to do "nodetool rebuild" and hence no automatic bootstrapping.
But, haven't gotten to that step yet.
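(For anyone following along, those two pieces look roughly like this; the
DC name is a placeholder, not necessarily what this cluster uses:)

    # cassandra.yaml on each new AWS node, set before first startup:
    #   auto_bootstrap: false

    # then, once all the new-DC nodes are up, stream the existing data
    # from the GCE DC by running this on each new node:
    nodetool rebuild -- <existing-gce-dc-name>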
Thanks,
Kunal
> How large is the cluster to migrate (# of nodes and size of data)? The
> preferred method might depend on how much data needs to move. Is any
> application outage acceptable?
>
> Sean Durity
>
> lord of the (C*) rings (Staff Systems Engineer – Cassandra)
>
> *From:* Kunal Gangakhedkar [mailto:kgangakhed...@gmail.com]
> *Sent:* Sunday, March 11, 2018 10:20 PM
> *To:* user@cassandra.apache.org
> *Subject:* [EXTERNAL] RE: Adding new DC?
>
> Hi Kenneth,
>
> They are in different regions - the GCE setup is in us-east while the AWS
> setup is in Asia-south (Mumbai).
>
> Thanks,
> Kunal
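(Side note: with the DCs on different providers, the DC names come from the
snitch; if the cluster uses GossipingPropertyFileSnitch - the usual choice
for multi-cloud - each node carries a cassandra-rackdc.properties like the
following, with made-up names here:)

    # cassandra-rackdc.properties on an AWS node; dc/rack are examples
    # and must be consistent across the cluster:
    dc=aws-mumbai
    rack=rack1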
Kenneth Brotman
*From:* Kunal Gangakhedkar [mailto:kgangakhed...@gmail.com]
*Sent:* Sunday, March 11, 2018 2:32 PM
*To:* user@cassandra.apache.org
*Subject:* Adding new DC?
Hi all,
We currently have a cluster in GCE for one of the customers.
They want it to be migrated to AWS.
I have set up one node in AWS to join the cluster by following:
https://docs.datastax.com/en/cassandra/2.1/cassandra/operations/ops_add_dc_to_cluster_t.html
Will add more nodes once the first […]
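(One step from that doc worth calling out: the keyspaces have to replicate
to the new DC before "nodetool rebuild" will stream anything. Roughly, with
placeholder keyspace and DC names:)

    cqlsh -e "ALTER KEYSPACE my_ks WITH replication = {
        'class': 'NetworkTopologyStrategy', 'gce-dc': 3, 'aws-dc': 3};"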
Finally, got a chance to work on it over the weekend.
It worked as advertised. :)
Thanks a lot, Chris.
Kunal
On 8 March 2018 at 10:47, Kunal Gangakhedkar wrote:
> Thanks a lot, Chris.
>
> Will try it today/tomorrow and update here.
>
> Thanks,
> Kunal
>
On 7 March 2018 at 00:25, Chris Lohfink wrote:
> While it's off you can delete the files in the directory, yeah.
>
> Chris
>
>
> On Mar 6, 2018, at 2:35 AM, Kunal Gangakhedkar wrote:
>
> […] snapshots? What files exist there that are taking
> up space?
>
> > On Mar 5, 2018, at 1:02 AM, Kunal Gangakhedkar wrote:
Hi all,
I have a 2-node cluster running cassandra 2.1.18.
One of the nodes has run out of disk space and died - almost all of it
shows up as occupied by size_estimates CF.
Out of 296GiB, 288GiB shows up as consumed by size_estimates in 'du -sh'
output.
This is while the other node is chugging along. […]
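(For anyone hitting this later, the fix Chris describes above boils down to
something like this, assuming the default data directory:)

    sudo service cassandra stop    # the node must be down
    rm -f /var/lib/cassandra/data/system/size_estimates-*/*
    sudo service cassandra start   # size_estimates gets repopulated over time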
> […] process with some metadata updates.
>
> 2017-04-21 11:21 GMT+02:00 Kunal Gangakhedkar:
Hi all,
We have a CF that's grown too large - it's not getting actively used in the
app right now.
The on-disk size of the CF's data directory is ~407GB and I have only ~40GB
free left on the disk.
I understand that if I trigger a TRUNCATE on this CF, cassandra will try to
take a snapshot.
My question: is […]
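(Worth noting for anyone searching later: the auto-snapshot taken by
TRUNCATE is just hardlinks, so it needs almost no extra space - but the
space is only freed once the snapshot is cleared. Roughly, with placeholder
names:)

    cqlsh -e "TRUNCATE my_ks.big_cf;"   # snapshots first if auto_snapshot: true
    nodetool listsnapshots              # the auto-snapshot shows up here
    nodetool clearsnapshot my_ks        # this is what actually frees the disk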
Thanks,
Kunal

On 13 January 2017 at 18:30, Kunal Gangakhedkar wrote:
> Great, thanks a lot to all for the help :)
>
> I finally took the dive and went with Razi's suggestions.
> In summary, this is what I did:
>
> - turn off incremental backups on each of the nodes, in rolling fashion […]
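(A minimal sketch of that procedure, assuming default data paths; the yaml
change takes effect via a rolling restart:)

    # cassandra.yaml, one node at a time:
    #   incremental_backups: false

    # then reclaim the space held by the old backup hardlinks:
    rm -rf /var/lib/cassandra/data/*/*/backups/*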
> […] compacted to produce sstable-E, representing all the data from C
> and D.
>
> Now, sstable-E will live in your main table directory, and the hardlinks
> to sstable-C and sstable-D will be deleted in the main table directory, but
> sstable-D will continue to exist in /backups.
>
> […] The backup hardlinks are created when an sstable is
> written to disk. At the time, they take up (almost) no space, so they
> aren't a big deal, but when the sstable gets compacted, they stick around,
> so they end up not freeing space up.
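(You can see this with ls -li - the inode number in the first column is
identical for a live sstable and its backups hardlink; paths illustrative:)

    ls -li /var/lib/cassandra/data/my_ks/my_table-*/*Data.db
    ls -li /var/lib/cassandra/data/my_ks/my_table-*/backups/*Data.db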
>
> Usually you use incremental backups as a means of moving the […]
[…] backups?
This is my first production deployment - so, still trying to learn.
Thanks,
Kunal
On 10 January 2017 at 21:36, Jonathan Haddad wrote:
> You can just delete them off the filesystem (rm)
>
> On Tue, Jan 10, 2017 at 8:02 AM Kunal Gangakhedkar
> <kgangakhed...@gmail.com> wrote:
Hi all,
We have a 3-node cassandra cluster with incremental backup set to true.
Each node has a 1TB data volume that stores cassandra data.
The load in the output of 'nodetool status' comes up at around 260GB per
node.
All our keyspaces use replication factor = 3.
However, the df output shows the disk usage to be much higher than that. […]
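(A quick way to see where the df/nodetool gap is going, assuming default
paths - the backups subdirectories are the usual suspect:)

    du -sh /var/lib/cassandra/data/*/*/backups 2>/dev/null | sort -h | tail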
> On Fri, Jul 10, 2015 at 2:44 PM, Kunal Gangakhedkar wrote:
And here is my cassandra-env.sh
https://gist.github.com/kunalg/2c092cb2450c62be9a20
Kunal
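(For context: on an 8GB box the stock cassandra-env.sh calculation lands
around a 2GB heap; a common manual override looks like this - illustrative
values, not necessarily what's in the gist above:)

    # cassandra-env.sh - the two must be set together:
    MAX_HEAP_SIZE="4G"
    HEAP_NEWSIZE="800M"    # ~100MB per CPU core is the usual guidance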
On 11 July 2015 at 00:04, Kunal Gangakhedkar wrote:
> From the jhat output, the top 10 entries for "Instance Count for All Classes
> (excluding platform)" show:
>
> 2088 […]
[…] Total of 8739510 instances occupying 193607512 bytes.
JFYI.
Kunal
On 10 July 2015 at 23:49, Kunal Gangakhedkar wrote:
> Thanks for the quick reply.
>
> 1. I don't know what thresholds I should look for. So, to
> save this back-and-forth, I'm attaching the cfstats output.
> […] then an 8GB system
> is probably too small. But the real issue is that you need to keep your
> partition size from getting too large.
>
> Generally, an 8GB system is okay, but only for reasonably-sized
> partitions, like under 10MB.
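(The partition sizes Jack mentions can be checked per table, e.g. with
placeholder names:)

    nodetool cfstats my_ks.my_table | grep -i partition
    # "Compacted partition maximum bytes" is the number to watch;
    # nodetool cfhistograms my_ks my_table shows the full distribution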
>
> -- Jack Krupansky
>
> […] are reasonably small. Any large partitions could blow you away.
>
> -- Jack Krupansky
>
> On Fri, Jul 10, 2015 at 4:22 AM, Kunal Gangakhedkar
> <kgangakhed...@gmail.com> wrote:
>
>> Attaching the stack dump captured from the last OOM.
>>
>> Kunal
On 10 July 2015 at 13:32, Kunal Gangakhedkar wrote:
> Forgot to mention: the data size is not that big - it's barely 10GB in all.
>
> Kunal
>
> On 10 July 2015 at 13:29, Kunal Gangakhedkar wrote:
Hi,
I have a 2-node setup on Azure (east us region) running Ubuntu server
14.04 LTS.
Both nodes have 8GB RAM.
One of the nodes (the seed node) died with OOM - so, I am trying to add a
replacement node with the same configuration.
The problem is this new node also keeps dying with OOM - I've restarted the […]
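(Things worth checking on the dying node; paths assume a package install:)

    grep -E 'MAX_HEAP_SIZE|HEAP_NEWSIZE' /etc/cassandra/cassandra-env.sh
    grep -iE 'OutOfMemory|GCInspector' /var/log/cassandra/system.log | tail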