Re: Changing existing Cassandra cluster from single rack configuration to multi racks configuration

2019-03-11 Thread Laxmikant Upadhyay
Hi Alex, regarding your point below: the admin needs to account for temporarily uneven distribution of data until the entire process is done: "If you can't, then I guess you can for each node (one at a time), decommission it, wipe it clean and re-bootstrap it after setting the appropriate rack." I b
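The per-node procedure quoted above can be sketched roughly as follows. This is an assumption-laden outline, not from the thread: it presumes the GossipingPropertyFileSnitch and a package-style install, so paths and file names may differ in your deployment.

```
# On the node being moved, one node at a time:
nodetool decommission            # streams this node's data to the rest of the cluster
# stop cassandra, then wipe the data, commitlog and saved_caches directories
# set the new rack in /etc/cassandra/cassandra-rackdc.properties (assumed location)
# start cassandra again; the node re-bootstraps under the new rack
```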

Re: Migrate large volume of data from one table to another table within the same cluster when COPY is not an option.

2019-03-11 Thread Stefan Miklosovic
The query which does not work should look like this (I made a mistake there): cqlsh> SELECT * from my_keyspace.my_table where number > 2; InvalidRequest: Error from server: code=2200 [Invalid query] message="Cannot execute this query as it might involve data filtering and thus may have unpredictable
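For completeness, Cassandra will only run such a range predicate on a non-key column if you explicitly opt in to filtering. A minimal sketch against the same (assumed) table:

```cql
-- Sketch only: ALLOW FILTERING forces a scan of the whole table,
-- so it is acceptable for small data sets or ad-hoc analysis,
-- not for production read paths.
SELECT * FROM my_keyspace.my_table
WHERE number > 2
ALLOW FILTERING;
```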

Re: Migrate large volume of data from one table to another table within the same cluster when COPY is not an option.

2019-03-11 Thread Stefan Miklosovic
Hi Leena, "We are thinking of creating a new table with a date field as a clustering column to be able to query for date ranges, but partition key to clustering key will be 1-1. Is this a good approach?" If you want to select by some time range here, I am wondering how making datetime a clu

Re: too many logDroppedMessages and StatusLogger

2019-03-11 Thread Nate McCall
Are you using queries with a large number of arguments to an IN clause on a partition key? If so, the coordinator has to: - hold open the client request - unwind the IN clause into individual statements - scatter/gather those statements around the cluster (each at the requested consistency level
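The "unwind" step can be illustrated with a small sketch: the client can do the same split itself and issue one single-partition statement per key (asynchronously), spreading coordination load instead of pinning it on one node. Table and column names here are hypothetical, and the string-building is for illustration only (a real client would use prepared statements).

```python
def unwind_in_clause(table, key_column, keys):
    """Split `SELECT * FROM table WHERE key IN (...)` into one
    single-partition statement per key, mirroring what the
    coordinator must do internally for an IN on the partition key."""
    return [f"SELECT * FROM {table} WHERE {key_column} = {k!r};" for k in keys]

# Each resulting statement can be executed concurrently by the client,
# each one hitting only the replicas for its own partition.
stmts = unwind_in_clause("ks.events", "pk", [1, 2, 3])
```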

Migrate large volume of data from one table to another table within the same cluster when COPY is not an option.

2019-03-11 Thread Leena Ghatpande
We have a table with over 70M rows with a partition key that is unique. We have a created datetime stamp on each record, and we have a need to select all rows created for a date range. Secondary index is not an option as it's high cardinality and could slow performance by doing a full scan on 70M
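A common pattern for this kind of requirement is time bucketing: make a coarse time bucket the partition key and the full timestamp a clustering column, so a date range becomes a bounded number of partition reads. A hypothetical sketch — table, column names and bucket granularity are assumptions, not from the thread:

```cql
-- Sketch: one partition per day; adjust bucket size to keep
-- partitions from growing unbounded.
CREATE TABLE my_keyspace.events_by_day (
    day        date,
    created_at timestamp,
    id         uuid,
    payload    text,
    PRIMARY KEY ((day), created_at, id)
);

-- A date-range read is then a slice within one (or a few) partitions:
SELECT * FROM my_keyspace.events_by_day
WHERE day = '2019-03-11'
  AND created_at >= '2019-03-11 00:00:00'
  AND created_at <  '2019-03-11 12:00:00';
```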

Re: removenode force vs assassinate

2019-03-11 Thread onmstester onmstester
The only option to stream a decommissioned node's data is to run "nodetool decommission" on the decommissioned node (while Cassandra is running on the node). removenode only streams data from the node's replicas, so any data that was only stored on the decommissioned node would be lost. You should monitor
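The distinction described above maps onto a handful of nodetool commands. A hedged sketch of the usual order of preference (host ID and IP are placeholders):

```
nodetool decommission          # run ON the leaving node: it streams its own data away first
nodetool removenode <host-id>  # run on another node: surviving replicas stream;
                               # data held only by the dead node is lost
nodetool removenode force      # if a removenode operation is stuck
nodetool assassinate <ip>      # last resort: removes the node from gossip, no streaming at all
```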

Re: removenode force vs assassinate

2019-03-11 Thread Ahmed Eljami
Thanks onmstester, I think that removenode doesn't stream data; see the TLP blog post: http://thelastpickle.com/blog/2018/09/18/assassinate.html "Will *NOT* stream any of the decommissioned node’s data to the new replicas." Anyway, I have already launched a removenode but it continues to appear DL af

Re: removenode force vs assassinate

2019-03-11 Thread onmstester onmstester
You should first try removenode, which triggers cluster streaming; if removenode fails or gets stuck, assassinate is the last resort. On Mon, 11 Mar 2019 14:27:13 +0330 Ahmed Eljami wrote: Hello, Can someone explain me the differenc

removenode force vs assassinate

2019-03-11 Thread Ahmed Eljami
Hello, Can someone explain the difference between removenode force and assassinate in a case where a node is stuck in status DL? Thx