Hi,
Cassandra uses a last-write-wins strategy.
Data in the memtable is not necessarily the latest version, because writes can
carry a custom write time; if the data also exists in an SSTable, Cassandra has
to read it and reconcile the two.
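(A minimal sketch of what I mean, assuming the DataStax Java driver 3.x and a
hypothetical demo.kv table with k text PRIMARY KEY and v text; reconciliation is
by write time, not by where the cell currently lives:)

import com.datastax.driver.core.*;

public class LastWriteWins {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect("demo");

        // Earlier insert with a larger (newer) client-supplied timestamp;
        // after a flush this cell would live in an SSTable.
        session.execute(
            "INSERT INTO kv (k, v) VALUES ('a', 'winner') USING TIMESTAMP 2000");

        // A later insert with a smaller (older) timestamp lands in the memtable ...
        session.execute(
            "INSERT INTO kv (k, v) VALUES ('a', 'loser') USING TIMESTAMP 1000");

        // ... but the read still reconciles by timestamp and returns 'winner'.
        Row row = session.execute("SELECT v, writetime(v) FROM kv WHERE k = 'a'").one();
        System.out.println(row.getString("v") + " @ " + row.getLong(1));

        cluster.close();
    }
}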
Jasonstack
On Mon, 27 Mar 2017 at 7:53 PM, 赵豫峰 wrote:
> hello, I get the message that "If the memtable
1. Usually the object is serialized before it is stored, so its size is known
at that point.
2. Add a "chunk id" column as the last clustering key.
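(A minimal sketch of 1. and 2., assuming the DataStax Java driver 3.x and a
hypothetical table: CREATE TABLE demo.blobs (object_id text, chunk_id int,
data blob, PRIMARY KEY (object_id, chunk_id));)

import com.datastax.driver.core.*;
import java.nio.ByteBuffer;

public class ChunkedWrite {
    // Hypothetical chunk size; pick it to keep individual mutations small.
    static final int CHUNK_SIZE = 64 * 1024;

    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect("demo");
        PreparedStatement insert = session.prepare(
            "INSERT INTO blobs (object_id, chunk_id, data) VALUES (?, ?, ?)");

        // 1. Serialize the object first, so the total size is known before chunking.
        //    (A 300 KB byte array stands in for the serialized object here.)
        byte[] serialized = new byte[300 * 1024];

        // 2. Split into fixed-size chunks keyed by a "chunk id" clustering column,
        //    so the chunks of one object stay in one partition, in order.
        int chunkId = 0;
        for (int offset = 0; offset < serialized.length; offset += CHUNK_SIZE) {
            int len = Math.min(CHUNK_SIZE, serialized.length - offset);
            session.execute(insert.bind("object-1", chunkId++,
                                        ByteBuffer.wrap(serialized, offset, len)));
        }

        cluster.close();
    }
}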
On Fri, 21 Oct 2016 at 11:46 PM, Vikas Jaiman wrote:
> Thanks for your answer but I am just curious about:
>
> i) How do you identify the size of the object which you are going to chunk?
Hi Varun,
It looks like a scheduled job that runs "nodetool drain".
Zhao Yang
On Sun, 25 Sep 2016 at 7:45 PM, Varun Barala wrote:
> Jeff Jirsa thanks for your reply!!
>
> We are not using any chef/puppet, and it happens only on one node; other
> nodes are working fine.
> And all machines are using the same AMI image
Hi,
Can you check the load balancing policy -> WhiteListPolicy?
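(A minimal sketch of what I mean, assuming the DataStax Java driver 3.x; the
addresses are hypothetical. A WhiteListPolicy restricts the driver to the listed
hosts, which can make it look like only some nodes receive traffic:)

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.policies.RoundRobinPolicy;
import com.datastax.driver.core.policies.WhiteListPolicy;
import java.net.InetSocketAddress;
import java.util.Arrays;

public class WhiteListCheck {
    public static void main(String[] args) {
        // If the application builds its Cluster like this, only 10.0.0.1 and
        // 10.0.0.2 will ever be used as coordinators.
        Cluster cluster = Cluster.builder()
            .addContactPoint("10.0.0.1")
            .withLoadBalancingPolicy(new WhiteListPolicy(
                new RoundRobinPolicy(),
                Arrays.asList(new InetSocketAddress("10.0.0.1", 9042),
                              new InetSocketAddress("10.0.0.2", 9042))))
            .build();

        // ... connect and run queries as usual, then:
        cluster.close();
    }
}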
jasonstack
On Thu, 5 May 2016 at 5:40 PM, Varun Barala wrote:
> Hi Siddharth Verma,
>
> You can define consistency level LOCAL_ONE.
>
> and you can apply the consistency level during statement creation,
>
> like this -> statement.setConsistencyLevel(ConsistencyLevel.LOCAL_ONE)
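(For completeness, a minimal sketch of that call with the DataStax Java driver
3.x; the keyspace and table names are hypothetical:)

import com.datastax.driver.core.*;

public class LocalOneRead {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect("demo");

        // Set the consistency level on the statement itself at creation time.
        Statement stmt = new SimpleStatement("SELECT * FROM users WHERE id = ?", "user-1");
        stmt.setConsistencyLevel(ConsistencyLevel.LOCAL_ONE);

        ResultSet rs = session.execute(stmt);
        System.out.println(rs.one());

        cluster.close();
    }
}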
Hi,
Currently StatusLogger logs info when there are dropped messages or a GC pause
longer than 200 ms.
In my use case there are about 1000 tables, and StatusLogger is logging too
much information for each table.
I wonder if there is a way to reduce this logging? For example, only print the
thread pool in
the different (probably competing) workloads
>> effectively.
>>
>> Mike
>>
>> On Tue, Apr 5, 2016 at 8:40 PM, jason zhao yang <
>> zhaoyangsingap...@gmail.com> wrote:
>>
>>> Hi Jack,
>>>
>>> Thanks for the reply.
to scale with the table count. For one, each
>> table/CF has some fixed memory footprint on *ALL* nodes. The consensus is
>> you shouldn't have more than "a few hundred" tables.
>>
>> On Mon, Apr 4, 2016 at 10:17 AM, jason zhao yang <
>> zhaoyangsin
https://groups.google.com/forum/#!topic/nosql-databases/IblAhiLUXdk
>
> In short C* is not designed to scale with the table count. For one, each
> table/CF has some fixed memory footprint on *ALL* nodes. The consensus is
> you shouldn't have more than "a few hundred" tables.
>
Hi,
This is Jason.
Currently I am using C* 2.1.10, and I want to ask: what is the optimal number of
tables I should create in one cluster?
My use case is that I will prepare a keyspace for each of my tenants, and
every tenant will create the tables they need. Assume each tenant creates 50
tables with no