On [...], Faraz Mateen wrote:
> Thanks for the response guys.
>
> Let me try setting token ranges manually and move the data again to
> correct nodes. Will update with the outcome soon.
>
>
> On Tue, Apr 17, 2018 at 5:42 AM, kurt greaves wrote:
>
>> Sorry for the delay.
>> [...] a replica for all the data in those SSTables and consequently
>> you'll lose data (or it simply won't be available).
>
>
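For reference, one way to carry tokens over is to read each old node's token
list and set it verbatim as initial_token in cassandra.yaml on the matching
new node before copying the SSTables across. A minimal sketch with the
DataStax Python driver (the contact point is illustrative):

    from cassandra.cluster import Cluster

    session = Cluster(["10.0.0.1"]).connect()
    # system.peers lists every other node's tokens; system.local this node's.
    for row in session.execute("SELECT peer, tokens FROM system.peers"):
        print(row.peer, ",".join(sorted(row.tokens)))
    local = session.execute(
        "SELECT broadcast_address, tokens FROM system.local").one()
    # Paste each comma-separated list into initial_token on the matching node.
    print(local.broadcast_address, ",".join(sorted(local.tokens)))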
--
Faraz Mateen
On Tue, Apr 10, 2018 at 4:28 PM, Faraz Mateen wrote:
> Sorry for the late reply. I was trying to figure out some other approach
> to it.
>
> @Kurt - My previous cluster has 3 nodes but replication factor is 2. I am
> not exactly sure how I would handle the tokens. Can you explain? [...]
>> [...] show up if the keyspaces are visible. Shouldn't that be metadata
>> that can be edited once and then be visible?
>>
>> - Affan
>>
>> On Thu, Apr 5, 2018 at 7:55 PM, Michael Shuler wrote:
>>
>>> On [...]

[...] sstableloader or remote seeding are also a couple of options, but they
will take a lot of time. Does anyone know an easier way to shift all my data
to the new setup on DC/OS?
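For reference, a bulk load with sstableloader would look something like the
following (the seed host and data path are illustrative):

    sstableloader -d 10.0.0.1 /var/lib/cassandra/data/my_keyspace/my_table

sstableloader streams the SSTables to whichever nodes own the data, so it
does not require the new cluster's tokens to match the old one's.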
--
Faraz Mateen
[...] unable to hold data in memory for 128 ms, considering that I have
30 GB of RAM on each node.
On Wed, Mar 14, 2018 at 2:24 PM, Faraz Mateen wrote:
> Thanks for the response.
>
> Here is the output of "DESCRIBE" on my table
>
> https://gist.github.com/farazmateen/1c88f6a
>
> On Tue, Mar 13, 2018 at 5:17 PM, Goutham reddy wrote:
>
>> Faraz,
>> Can you share your code snippet showing how you are trying to save the
>> entity objects into Cassandra?
>>
>> Thanks and Regards,
>> Goutham Reddy Aenugu.
>>
>>
Hi everyone,
I seem to have hit a problem in which writing to Cassandra through a Python
script fails and also occasionally causes a Cassandra node to crash. Here are
the details of my problem.
I have a Python-based streaming application that reads data from Kafka at a
high rate and pushes it to Cassandra. [...]
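A minimal sketch of the kind of consumer loop involved, assuming the
kafka-python and DataStax cassandra-driver packages (contact points, topic,
keyspace, and table are illustrative). Bounding the number of in-flight
writes is the usual way to keep a fast consumer from overwhelming the nodes:

    import uuid
    from kafka import KafkaConsumer
    from cassandra.cluster import Cluster
    from cassandra.concurrent import execute_concurrent_with_args

    session = Cluster(["10.0.0.1"]).connect("my_keyspace")
    insert = session.prepare("INSERT INTO events (id, payload) VALUES (?, ?)")
    consumer = KafkaConsumer("events", bootstrap_servers="10.0.0.2:9092")

    batch = []
    for msg in consumer:
        batch.append((uuid.uuid4(), msg.value.decode()))
        if len(batch) >= 200:
            # Caps concurrent requests (default concurrency=100) instead of
            # firing unbounded execute_async() calls at the cluster.
            execute_concurrent_with_args(session, insert, batch)
            batch = []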
> [...] repairs). Inconsistency can result from a whole range of conditions,
> from nodes being down, to the cluster being overloaded, to network issues.
>
> Cheers
> Ben
>
> On Tue, 6 Mar 2018 at 22:18 Faraz Mateen wrote:
>
>> Thanks a lot for the response.
>>
>> Setting consistency [...]
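For reference, a minimal sketch of raising the consistency level per query
with the DataStax Python driver (contact point, keyspace, and table are
illustrative). With RF=2, QUORUM is 2, so reads and writes at QUORUM always
overlap on at least one replica:

    from cassandra import ConsistencyLevel
    from cassandra.cluster import Cluster
    from cassandra.query import SimpleStatement

    session = Cluster(["10.0.0.1"]).connect("my_keyspace")
    stmt = SimpleStatement("SELECT count(*) FROM my_table",
                           consistency_level=ConsistencyLevel.QUORUM)
    print(session.execute(stmt).one())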
>> Now, while you are running these queries, is there another process or
>> thread that is also writing at the same time? If yes, then your results are
>> fine, but if not, you may want to try nodetool flush first and then run
>> these iterations again.
>>
>>
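For reference, flushing memtables to SSTables on each node before re-running
the comparison would look something like this (keyspace and table names are
illustrative):

    nodetool flush my_keyspace my_table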
Hi everyone,
I am trying to use Spark to process a large Cassandra table (~402 million
entries and 84 columns), but I am getting inconsistent results. Initially
the requirement was to copy some columns from this table to another table.
After copying the data, I noticed that some entries in the new [...]
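A minimal sketch of that copy with PySpark and the spark-cassandra-connector
(keyspace, table, and column names are illustrative). The connector reads at
LOCAL_ONE by default, so a single stale replica can make the copied data look
inconsistent; raising the input consistency level is one common fix:

    from pyspark.sql import SparkSession

    spark = (SparkSession.builder
             .appName("copy-columns")
             .config("spark.cassandra.connection.host", "10.0.0.1")
             # Read above the LOCAL_ONE default to avoid stale replicas.
             .config("spark.cassandra.input.consistency.level", "QUORUM")
             .getOrCreate())

    df = (spark.read
          .format("org.apache.spark.sql.cassandra")
          .options(keyspace="my_keyspace", table="source_table")
          .load()
          .select("id", "col_a", "col_b"))

    (df.write
       .format("org.apache.spark.sql.cassandra")
       .options(keyspace="my_keyspace", table="dest_table")
       .mode("append")
       .save())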