Re: High IO and poor read performance on 3.11.2 cassandra cluster

2018-09-11 Thread Elliott Sims
>>> Could you decrease chunk_length_in_kb to 16 or 8 and repeat the test.
>>>
>>> On Wed, Sep 5, 2018, 5:51 AM wxn...@zjqunshuo.com wrote:
>>>
>>>> How large is your row? You may meet the wide-row read problem.
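The suggestion above refers to the table-level compression option `chunk_length_in_kb`. A minimal sketch of how it could be applied, assuming hypothetical keyspace/table names `ks.events` (the thread does not give the real ones):

```sql
-- Hypothetical names; Cassandra 3.11.x syntax.
ALTER TABLE ks.events
  WITH compression = {'class': 'LZ4Compressor', 'chunk_length_in_kb': 16};

-- Existing SSTables keep their old chunk length until rewritten, e.g.:
--   nodetool upgradesstables -a ks events
```

Only newly written SSTables pick up the new chunk length, so the effect on read IO appears gradually unless the SSTables are rewritten.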

Re: High IO and poor read performance on 3.11.2 cassandra cluster

2018-09-09 Thread Laxmikant Upadhyay
>> wxn...@zjqunshuo.com wrote:
>>
>>> How large is your row? You may meet the wide-row read problem.
>>>
>>> -Simon
>>>
>>> *From:* Laxmikant Upadhyay
>>> *Date:* 2018-09-05 01:01
>>> *To:* user
>>> *Subject:* High IO and poor read performance on 3.11.2 cassandra cluster

Re: High IO and poor read performance on 3.11.2 cassandra cluster

2018-09-04 Thread Alexander Dejanovski
>> -Simon
>>
>> *From:* Laxmikant Upadhyay
>> *Date:* 2018-09-05 01:01
>> *To:* user
>> *Subject:* High IO and poor read performance on 3.11.2 cassandra cluster
>>
>> We have a 3-node Cassandra cluster (3.11.2) in a single DC.

Re: High IO and poor read performance on 3.11.2 cassandra cluster

2018-09-04 Thread CPC
> *Subject:* High IO and poor read performance on 3.11.2 cassandra cluster
>
> We have a 3-node Cassandra cluster (3.11.2) in a single DC.
>
> We have written 450 million records to the table with LCS. The write
> latency is fine. After the writes we perform read and update operations.

Re: High IO and poor read performance on 3.11.2 cassandra cluster

2018-09-04 Thread wxn...@zjqunshuo.com
How large is your row? You may meet the wide-row read problem.

-Simon

*From:* Laxmikant Upadhyay
*Date:* 2018-09-05 01:01
*To:* user
*Subject:* High IO and poor read performance on 3.11.2 cassandra cluster

We have a 3-node Cassandra cluster (3.11.2) in a single DC. We have written 450 million records to the table with LCS.
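Simon's question about row (partition) size can be sanity-checked with quick arithmetic before reaching for nodetool. A minimal sketch, using made-up numbers for rows per partition and average row size (the thread does not state the real schema):

```python
def estimated_partition_mb(rows_per_partition: int, avg_row_bytes: int) -> float:
    """Rough on-disk size of one partition, ignoring compression and overhead."""
    return rows_per_partition * avg_row_bytes / 1_000_000

def looks_wide(rows_per_partition: int, avg_row_bytes: int,
               threshold_mb: float = 100.0) -> bool:
    # Cassandra logs a warning for large partitions; 100 MB is a common
    # rule-of-thumb ceiling, not a hard limit.
    return estimated_partition_mb(rows_per_partition, avg_row_bytes) > threshold_mb

# Example: 1 million rows of ~200 bytes landing in a single partition
print(estimated_partition_mb(1_000_000, 200))  # 200.0 (MB)
print(looks_wide(1_000_000, 200))              # True
```

In practice `nodetool tablehistograms` reports the measured partition-size distribution, which is the authoritative answer to Simon's question.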

High IO and poor read performance on 3.11.2 cassandra cluster

2018-09-04 Thread Laxmikant Upadhyay
We have a 3-node Cassandra cluster (3.11.2) in a single DC. We have written 450 million records to the table with LCS. The write latency is fine. After the writes we perform read and update operations. When we run read+update operations on the newly inserted 1 million records (on top of the 450 M records) then t…
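One reason point reads of small rows can drive high IO is that Cassandra reads and decompresses a whole compression chunk (64 KB by default in 3.11) to serve a single read, which is what the `chunk_length_in_kb` suggestion earlier in the thread targets. A back-of-the-envelope sketch, assuming an uncached read touching one chunk in one SSTable and a hypothetical 1 KB row:

```python
def read_amplification(chunk_length_kb: int, row_bytes: int) -> float:
    """Bytes read from disk per point read, divided by bytes actually needed.

    Assumes the read misses all caches and touches exactly one chunk
    in one SSTable; real workloads may touch several SSTables.
    """
    return (chunk_length_kb * 1024) / row_bytes

# A ~1 KB row behind the default 64 KB chunk vs a 16 KB chunk:
print(read_amplification(64, 1024))  # 64.0
print(read_amplification(16, 1024))  # 16.0
```

Halving the chunk length roughly halves the disk bytes per uncached small read, at the cost of a slightly worse compression ratio and a larger compression-offsets footprint.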