> these sstables to see when
> and under what circumstances they got written?
>
> --
> / Peter Schuller (@scode on twitter)
>
--
Huy Le
Spring Partners, Inc.
http://springpadit.com
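A rough sketch of one way to check when the sstables were written, assuming
Python and the default data layout; the keyspace path below is an illustration,
not something taken from this thread:

    import glob
    import os
    import time

    # Assumed location of one keyspace's data directory; adjust for your install.
    data_dir = "/var/lib/cassandra/data/MyKeyspace"

    # Every sstable has a *-Data.db component; list them oldest to newest.
    for path in sorted(glob.glob(os.path.join(data_dir, "*-Data.db")),
                       key=os.path.getmtime):
        written = time.strftime("%Y-%m-%d %H:%M:%S",
                                time.localtime(os.path.getmtime(path)))
        print(written, os.path.getsize(path), path)

The modification time only tells you when a file appeared, not whether it came
from a flush, a compaction, or streaming, so it is a starting point rather than
an answer.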
Is there any chance that the entire file from the source node got streamed to the
destination node even though only a small amount of data in the file from the
source node was supposed to be streamed to the destination node?
>
> I'm just wildly speculating, but it would be nice to get to the bottom of
> this.
>
> --
> / Peter Schuller (@scode on twitter)
>
--
Huy Le
Spring Partners, Inc.
http://springpadit.com
e going
> from a total load of 40 gig to hundreds of gigs (so even with all
> CFs streaming, that's unexpected).
>
> Do you have any old left-over streams active on the nodes? "nodetool
> netstats". If there are "stuck" streams, they might be causing sstable
> retention beyond what you'd expect.
>
> --
> / Peter Schuller (@scode on twitter)
>
--
Huy Le
Spring Partners, Inc.
http://springpadit.com
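A quick way to check every node for left-over streams with "nodetool netstats"
(a minimal sketch in Python; the host names are placeholders):

    import subprocess

    # Placeholder host names; replace with the nodes in your ring.
    hosts = ["cass01", "cass02", "cass03"]

    for host in hosts:
        print("===", host)
        # netstats lists active and pending streams for that node.
        subprocess.run(["nodetool", "-h", host, "netstats"], check=True)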
> Aaron Morton
> Freelance Cassandra Developer
> @aaronmorton
> http://www.thelastpickle.com
>
> On 19/08/2011, at 6:36 AM, Huy Le wrote:
>
> Thanks. I won't try that then.
>
> So in our environment, after upgrading from 0.6.11 to 0.8.4, we have to run
> scrub on all nodes before running repair?
Huy
On Thu, Aug 18, 2011 at 10:22 AM, Philippe wrote:
> Unfortunately repairing one cf at a time didn't help in my case because it
> still streams all CFs and that triggers lots of compactions
> On Aug 18, 2011 3:48 PM, "Huy Le" wrote:
>
--
Huy Le
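For anyone scripting the scrub-then-repair sequence discussed above, a rough
sketch (Python; the host, keyspace and column family names are made up for
illustration):

    import subprocess

    hosts = ["cass01", "cass02", "cass03"]   # placeholder host names
    keyspace = "MyKeyspace"                  # placeholder keyspace
    column_families = ["Users", "Events"]    # placeholder column families

    # Scrub each node first (rewrites sstables after the 0.6 -> 0.8 upgrade).
    for host in hosts:
        subprocess.run(["nodetool", "-h", host, "scrub", keyspace], check=True)

    # Then repair one column family at a time; note Philippe's observation
    # above that streaming may still touch other CFs.
    for host in hosts:
        for cf in column_families:
            subprocess.run(["nodetool", "-h", host, "repair", keyspace, cf],
                           check=True)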
ent compactions because it appears
> that "Validation" compactions are not constrained by it. That way, I would have
> fewer concurrent compactions and the disk would not fill up (as fast?)
>
>
> 2011/8/17 Huy Le
>
>> I restarted the cluster and kicked off repair on the same nod
a problem. Does anyone have any
further info on the issue? Thanks!
Huy
On Wed, Aug 17, 2011 at 11:13 AM, Huy Le wrote:
> Sorry for the duplicate thread. I saw the issue referenced in
> https://issues.apache.org/jira/browse/CASSANDRA-2280. However, I am
> running version 0
> On Aug 17, 2011 4:03 PM, "Huy Le" wrote:
> > Hi,
> >
> > After upgrading to cass 0.8.4 from cass 0.6.11, I ran scrub. That worked
> > fine. Then I ran nodetool repair on one of the nodes. The disk usage on
> > the data directory increased from 40GB to 480GB, a
using this issue?
Huy
--
Huy Le
Spring Partners, Inc.
http://springpadit.com
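To put numbers on that growth while repair runs, something like this could log
the data directory size over time (a sketch, assuming Python and the default
data path):

    import os
    import time

    data_dir = "/var/lib/cassandra/data"   # assumed default data location

    def dir_size_gb(path):
        total = 0
        for root, _dirs, files in os.walk(path):
            for name in files:
                try:
                    total += os.path.getsize(os.path.join(root, name))
                except OSError:
                    pass   # a compaction may remove sstables mid-walk
        return total / (1024 ** 3)

    while True:
        print(time.strftime("%H:%M:%S"), round(dir_size_gb(data_dir), 1), "GB")
        time.sleep(60)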
> <http://wiki.apache.org/cassandra/Operations#Repairing_missing_or_inconsistent_data>
> Aaron
>
> On 16 Mar 2011, at 06:58, Daniel Doubleday wrote:
>
> At least if you are using RackUnawareStrategy
>
> Cheers,
> Daniel
>
> On Mar 15, 2011, at 6:44 PM, Huy Le wrote:
>
> Hi,
>
> We h
Hi,
We have a cluster with 12 servers and use RF=3. When running nodetool
repair, do we have to run it on all nodes on the cluster or can we run on
every 3rd node? Thanks!
Huy
--
Huy Le
Spring Partners, Inc.
http://springpadit.com
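For what it's worth, driving repair sequentially from a script might look like
the sketch below (Python; host names are placeholders, and whether every RF-th
node is enough depends on the replication strategy, as noted in the replies
above):

    import subprocess

    hosts = ["cass%02d" % i for i in range(1, 13)]   # 12 placeholder hosts
    rf = 3

    # Safest option: repair every node, one at a time.
    # targets = hosts[::rf] would hit every 3rd node instead, if that covers
    # all ranges for your replication strategy.
    targets = hosts

    for host in targets:
        subprocess.run(["nodetool", "-h", host, "repair"], check=True)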
Yes, we had the setting at 75, but the JVM did not have enough time to do GC, so it
aborted GC'ing. We lowered it to 50, but still had the issue, so we lowered it
again to 35.
On Thu, Feb 10, 2011 at 12:11 PM, Oleg Anastasyev wrote:
> Huy Le springpartners.com> writes:
>
> >
any
insight as to why the row mentioned is so expensive to have?
Thanks!
Huy
On Wed, Feb 9, 2011 at 2:34 PM, Robert Coli wrote:
> On Wed, Feb 9, 2011 at 11:04 AM, Huy Le wrote:
> > Memory usage grows over time.
>
> It is relatively typical for caches to exert memory pressure over
> a 3 gig heap size and the other nodes stay at 500 mb,
> the question is why *don't* they increase in heap usage. Unless your
> 500 mb is the report of the actual live data set as evidenced by
> post-CMS heap usage.
>
>
What's considered to be "live data"? If we clear
>
> Are you using standard, mmap_index_only, or mmap io? Are you using JNA?
>
>
We use standard disk access mode with JNA.
Huy
--
Huy Le
Spring Partners, Inc.
http://springpadit.com
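One way to watch the post-CMS heap numbers mentioned above is to sample
"nodetool info" periodically; a rough sketch (Python; the host name and
interval are placeholders, and the heap line label can vary by version):

    import subprocess
    import time

    host = "cass01"   # placeholder host name

    while True:
        out = subprocess.run(["nodetool", "-h", host, "info"],
                             capture_output=True, text=True, check=True).stdout
        for line in out.splitlines():
            if "Heap" in line:   # e.g. the "Heap Memory (MB)" line
                print(time.strftime("%H:%M:%S"), line.strip())
        time.sleep(300)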
Hi,
There is already an email thread about a memory issue on this mailing list, but I
am creating a new thread as we are experiencing a different memory consumption
issue.
We are a 12-server cluster. We use the random partitioner with manually generated
server tokens. Memory usage on one server keeps growing over time.
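For context on the manually generated tokens: with the random partitioner,
evenly spaced tokens for N nodes are usually computed as i * 2**127 / N, e.g.:

    num_nodes = 12

    # Evenly spaced initial tokens across the 0 .. 2**127 ring.
    tokens = [i * (2 ** 127) // num_nodes for i in range(num_nodes)]

    for i, token in enumerate(tokens):
        print("node %2d: %d" % (i, token))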