On Mon, Dec 29, 2014 at 3:24 PM, mck wrote:
>
> Especially in CASSANDRA-6285, I see some scary stuff went down.
>
> But there are no outstanding bugs that we know of, are there?
>
Right, the question is whether you believe that 6285 has actually been
fully resolved.
It's relatively plausible tha
> Perf is better, correctness seems less so. I value the latter more than
> the former.
Yeah, no doubt.
Especially in CASSANDRA-6285, I see some scary stuff went down.
But there are no outstanding bugs that we know of, are there?
(CASSANDRA-6815 remains just a wrap-up of how options are to be
presented.)
On Mon, Dec 29, 2014 at 2:03 PM, mck wrote:
> We saw an improvement when we switched to HSHA, particularly for our
> offline (hadoop/spark) nodes.
> Sorry, I don't have the data anymore to support that statement, although
> I can say that improvement paled in comparison to cross_node_timeout
> whi
> > Should I stick to 2048, or try
> > something closer to 128, or even something else?
2048 worked fine for us.
> > About HSHA,
>
> I anti-recommend hsha; serious, apparently unresolved problems exist with
> it.
We saw an improvement when we switched to HSHA, particularly for our
offline (hadoop/spark) nodes.
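For context, both knobs live in cassandra.yaml; a minimal sketch (2.0-era
option names, values purely illustrative, not a recommendation):

rpc_server_type: hsha
rpc_max_threads: 2048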
On Mon, Dec 29, 2014 at 2:29 AM, Alain RODRIGUEZ wrote:
> Sorry about the gravedigging, but what would be a good starting value to
> tune "rpc_max_threads"?
>
Depends on whether you prefer that clients get a slow thread or none.
> I mean, the default is unlimited; the commented-out value is 2048. Native
Hi,
Sorry about the gravedigging, but what would be a good start value to tune "
rpc_max_threads" ?
I mean, default is unlimited, the value commented is 2048. Native protocol
seems to only allow 128 simultaneous threads. Should I stick to 2048 or try
with something closer to 128 or even something
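For reference, the stock cassandra.yaml in the 2.0 line ships with the cap
commented out, which is what "default is unlimited" refers to; roughly:

rpc_server_type: sync
# rpc_max_threads: 2048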
That definitely appears to be the issue. Thanks for pointing that out!
https://issues.apache.org/jira/browse/CASSANDRA-8116
It looks like 2.0.12 will check for the default and throw an exception
(thanks Mike Adamson), and also includes a bit more text in the config
file, but I'm thinking that 2.0.12
Hi Peter, are you using the hsha RPC server type on this node? If you are, then
it looks like rpc_max_threads threads will be allocated on startup in 2.0.11,
while this wasn't the case before. This can exhaust your heap if the value of
rpc_max_threads is too large (e.g. if you use the default).
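If you must run hsha on 2.0.11, one mitigation (my assumption, not an
official recommendation) is to set an explicit, finite cap sized to your
heap before upgrading:

rpc_server_type: hsha
rpc_max_threads: 2048    # any finite value; the unbounded default is what bites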
On a 3-node test cluster we recently upgraded one node from 2.0.10 to
2.0.11. This is a cluster that had been happily running 2.0.10 for
weeks and that has very little load and very capable hardware. The
upgrade was just your typical package upgrade:
$ dpkg -s cassandra | egrep '^Ver|^Main'
Mainta
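If you are on Debian/Ubuntu packages, you can check what the repo will pull
and pin the version until you have vetted the new release (version numbers
here are just the ones from this thread):

$ apt-cache policy cassandra
$ sudo apt-get install cassandra=2.0.10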
DuyHai and Rob, thanks for your feedback.
Yeah, that's exactly the point I found. Some may want to run read repair even
on tombstones, as before, but others, like Rob and us, do not.
Personally, I take read repair as a nice-to-have feature, especially for
tombstones, where a regular repair is anyway
On Tue, Oct 7, 2014 at 1:57 AM, DuyHai Doan wrote:
> Read Repair belongs to the Anti-Entropy procedures that ensure that,
> eventually, data from all replicas converge. Tombstones are data
> (deletion markers), so they need to be exchanged between replicas. By
> skipping tombstones you prevent the
Hello Takenori
Read Repair belongs to the Anti-Entropy procedures that ensure that,
eventually, data from all replicas converge. Tombstones are data
(deletion markers), so they need to be exchanged between replicas. By
skipping tombstones you prevent data convergence with regard to
deletion.
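As a concrete illustration (hypothetical keyspace/table, cqlsh syntax):

cqlsh> CONSISTENCY QUORUM;
cqlsh> DELETE FROM ks.t WHERE id = 1;   -- writes a tombstone on the replicas
cqlsh> SELECT * FROM ks.t WHERE id = 1; -- a digest mismatch here can trigger
                                        -- read repair, shipping the tombstone
                                        -- to any replica that missed the delete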
Hi,
I have filed a fix as CASSANDRA-8038, which should be good news for those
who have suffered from overwhelming GC or OOMs caused by tombstones.
I'd appreciate your feedback!
Thanks,
Takenori
I agree with Peter Schuller.
On Sun, Jul 18, 2010 at 8:40 PM, Jonathan Ellis wrote:
> On Sun, Jul 18, 2010 at 2:45 AM, Schubert Zhang wrote:
> > Under heavy inserting (many client threads), memtable flushes (generating
> > new sstables) are frequent (e.g. one every 30s).
>
> This is a sign you should increase your memtable thresholds
On Sun, Jul 18, 2010 at 2:45 AM, Schubert Zhang wrote:
> Under heavy inserting (many client threads), memtable flushes (generating new
> sstables) are frequent (e.g. one every 30s).
This is a sign you should increase your memtable thresholds, btw. If
you wrote out larger sstables, there would be less
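If memory serves, in the 0.6 line those thresholds live in storage-conf.xml;
a sketch (element names from 0.6-era configs, values purely illustrative):

<MemtableThroughputInMB>128</MemtableThroughputInMB>
<MemtableOperationsInMillions>0.6</MemtableOperationsInMillions>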
(adding dev@)
> (2) Can we implement multi-threaded compaction?
I think this is the only way to scale. Or at least to implement
concurrent compaction (whether it is by division into threads or not)
of multiple size classes. As long as the worst-case compactions are
significantly slower than best-case
Benjamin and Jonathan,
It is not difficult to stack thousands of small SSTables.
Under heavy inserting (many client threads), memtable flushes (generating new
sstables) are frequent (e.g. one every 30s).
Compaction runs in only a single thread and is CPU-bound. Consider that the
compactionManager is com
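A quick way to see the backlog is to count data files on disk (assuming the
default 0.6-era layout; adjust the path and keyspace name for your install):

$ ls /var/lib/cassandra/data/MyKeyspace | grep -c -- '-Data.db$'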
Benjamin,
It is not difficult to stack thousands of SSTables.
Under heavy inserting (many client threads), memtable flushes (generating new
sstables) are frequent
On Mon, Jun 14, 2010 at 2:03 AM, Benjamin Black wrote:
> On Sat, Jun 12, 2010 at 7:46 PM, Anty wrote:
> > Hi all,
> > I have a 10-node cluster
On Sat, Jun 12, 2010 at 7:46 PM, Anty wrote:
> Hi all,
> I have a 10-node cluster; after inserting many records into the cluster, I
> compact each node with nodetool compact.
> During the compaction process, something went wrong with one of the 10 nodes:
> when the size of the compacted temp file reached ne
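One thing worth checking when a compaction temp file grows like that is disk
headroom; a major compaction can need free space on the order of the data it
is rewriting:

$ df -h /var/lib/cassandra/data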
> We've also seen similar problems
>
> https://issues.apache.org/jira/browse/CASSANDRA-1177
To be clear, though: un-*flushed* data is very different from
un-*compacted* data, and the above seems to be about unflushed data?
In my test case there was no problem at all flushing data. But my test
was
> If you were just inserting a lot of data fast, it may be that
> background compaction was unable to keep up with the insertion rate.
> Simply leaving the node(s) for a while after the insert storm will let
> it catch up with compaction.
>
> (At least this was the behavior for me on a recent trunk
> No, I do not disable compaction during my inserts. It is weird that minor
> compaction is triggered less often.
If you were just inserting a lot of data fast, it may be that
background compaction was unable to keep up with the insertion rate.
Simply leaving the node(s) for a while after the insert storm will let
it catch up with compaction.
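You can watch the backlog drain; something like the following (0.6-era
nodetool, flag spelling varies between versions) shows the per-CF SSTable
count falling as compaction catches up:

$ nodetool -host localhost cfstats | grep 'SSTable count'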
Thanks for your reply, Jonathan.
On Sun, Jun 13, 2010 at 11:27 AM, Jonathan Ellis wrote:
> On Sat, Jun 12, 2010 at 7:46 PM, Anty wrote:
> > Hi all,
> > I have a 10-node cluster; after inserting many records into the cluster, I
> > compact each node with nodetool compact.
>
> 5000 uncompacted sstables is unusual.
On Sat, Jun 12, 2010 at 7:46 PM, Anty wrote:
> Hi all,
> I have a 10-node cluster; after inserting many records into the cluster, I
> compact each node with nodetool compact.
5000 uncompacted sstables is unusual. Did you disable compaction
during your inserts? That is dangerous.
> during the compa
A GC storm begins; the following is the log fragment:
INFO [GC inspection] 2010-06-13 10:02:59,420 GCInspector.java (line 110) GC
for ConcurrentMarkSweep: 43683 ms, 126983320 reclaimed
leaving 14910532584 used; max is 15153364992
INFO [GC inspection] 2010-06-13 10:03:43,571 GCInspector.java (line 110
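Reading that first line: a single ConcurrentMarkSweep collection took 43683
ms (~44 s) yet reclaimed only about 121 MB, leaving roughly 13.9 GB of a
~14.1 GB heap still in use (~98%). Almost nothing is reclaimable, so the JVM
immediately collects again, and that loop is the storm.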