> I've opened CASSANDRA-1729 to fix it in
> 0.6, in case we start reusing row buffers.
>
> Thanks for the report!
> Stu
>
> -----Original Message-----
> From: "Schubert Zhang"
> Sent: Thursday, November 11, 2010 2:19am
> To: dev@cassandra.apache.org, u...@cassan
Hi JE,
0.6.6:
org.apache.cassandra.service.AntiEntropyService
I found the rowHash method uses "row.buffer.getData()" directly.
Since row.buffer.getData() returns a byte[], and there may be some junk bytes
at the end of the buffer, I think we should use the exact length.
private MerkleTree.
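The concern above can be sketched with a minimal, self-contained illustration. This is not Cassandra's actual DataOutputBuffer or rowHash; the class and method names below are hypothetical stand-ins. The point is that hashing the whole backing array picks up stale bytes left over from earlier writes, while hashing exactly getLength() bytes does not.

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Hypothetical stand-in for a growable output buffer: the backing
// array can be longer than the valid data, and resetting it leaves
// old bytes behind as junk.
class GrowableBuffer {
    private byte[] data = new byte[16];
    private int length = 0;

    void write(byte[] src) {
        while (length + src.length > data.length) {
            byte[] bigger = new byte[data.length * 2];
            System.arraycopy(data, 0, bigger, 0, length);
            data = bigger;
        }
        System.arraycopy(src, 0, data, length, src.length);
        length += src.length;
    }

    void reset() { length = 0; }        // old bytes remain past length
    byte[] getData() { return data; }   // backing array, may be oversized
    int getLength() { return length; }  // valid prefix only
}

public class HashLengthDemo {
    static byte[] hash(MessageDigest md, byte[] buf, int len) {
        md.reset();
        md.update(buf, 0, len);         // hash only the first len bytes
        return md.digest();
    }

    public static void main(String[] args) throws NoSuchAlgorithmException {
        MessageDigest md = MessageDigest.getInstance("MD5");
        GrowableBuffer b = new GrowableBuffer();
        b.write("row-one-is-longer".getBytes());
        b.reset();                      // simulate buffer reuse
        b.write("row2".getBytes());

        // Hashing getData() in full includes junk from the first row;
        // hashing exactly getLength() bytes matches a clean hash of "row2".
        byte[] wrong = hash(md, b.getData(), b.getData().length);
        byte[] right = hash(md, b.getData(), b.getLength());
        byte[] expected = hash(md, "row2".getBytes(), 4);

        System.out.println(java.util.Arrays.equals(right, expected)); // true
        System.out.println(java.util.Arrays.equals(wrong, expected)); // false
    }
}
```

The mismatch only shows up when the buffer is reused, which matches the note above about CASSANDRA-1729 mattering "in case we start reusing row buffers."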
Hi Jonathan,
Could you please check this?
Include dev@cassandra.apache.org
On Wed, May 5, 2010 at 3:09 PM, Anty wrote:
> Hi all,
>
> In the source code of 0.6.1, in SSTableWriter:
> private void afterAppend(DecoratedKey decoratedKey, long dataPosition, int
> dataSize) throws IOException
> {
> String diskKey = partitioner.convertT
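The quoted message is cut off, but for context, the afterAppend hook records each appended row's key and data-file position so the row can later be found without scanning the file. A toy sketch of that pattern (hypothetical names, not Cassandra's actual index code):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Simplified sketch of the afterAppend pattern: after a row is written
// to the data file at dataPosition, remember (key -> position) in an
// index so reads can seek directly to the row.
public class IndexSketch {
    static class IndexWriter {
        private final Map<String, Long> index = new LinkedHashMap<>();

        void afterAppend(String diskKey, long dataPosition) {
            index.put(diskKey, dataPosition);
        }

        long positionOf(String diskKey) {
            return index.get(diskKey);
        }
    }

    public static void main(String[] args) {
        IndexWriter w = new IndexWriter();
        w.afterAppend("key1", 0L);    // first row starts at offset 0
        w.afterAppend("key2", 512L);  // second row starts at offset 512
        System.out.println(w.positionOf("key2")); // 512
    }
}
```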
, May 4, 2010 at 1:10 AM, Schubert Zhang wrote:
> We make a patch to 0.6 branch and 0.6.1 for this feature.
>
> https://issues.apache.org/jira/browse/CASSANDRA-1041
>
Since the scale of the GC graph in the slides is different from that of the
throughput graphs, I will run another test for this issue.
Thanks for your advice, Masood and Jonathan.
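One way to put GC activity and throughput on the same scale is to sample the JVM's cumulative collection time via the standard management beans during the run. A minimal sketch (the allocation loop in main is just a stand-in for the real workload):

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

// Sample cumulative GC time before and after a workload so GC cost
// can be reported in the same units (milliseconds) as the run itself.
public class GcSampler {
    static long totalGcMillis() {
        long total = 0;
        for (GarbageCollectorMXBean gc :
                ManagementFactory.getGarbageCollectorMXBeans()) {
            long t = gc.getCollectionTime(); // -1 if undefined for this bean
            if (t > 0) total += t;
        }
        return total;
    }

    public static void main(String[] args) {
        long before = totalGcMillis();
        // Stand-in workload: allocate enough garbage to exercise the GC.
        byte[][] garbage = new byte[1000][];
        for (int i = 0; i < 1000; i++) garbage[i] = new byte[64 * 1024];
        long after = totalGcMillis();
        System.out.println("GC millis during run: " + (after - before));
    }
}
```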
---
Here I just post my cassandra.in.sh:
JVM_OPTS=" \
-ea \
-Xms128M \
-Xmx6G \
-XX:Tar