Will try to see if we can replicate the node upgrade issue on that node, or if we
can replicate it in a lower environment.
Thanks
On Wed, Apr 17, 2019 at 1:49 PM Jon Haddad wrote:
> Let me be more specific - run the async java profiler and generate a
> flame graph to determine where CPU time is spent.
>
> On Wed, Apr
Let me be more specific - run the async java profiler and generate a
flame graph to determine where CPU time is spent.
On Wed, Apr 17, 2019 at 11:36 AM Jon Haddad wrote:
>
> Run the async java profiler on the node to determine what it's doing:
> https://github.com/jvm-profiling-tools/async-profil
Run the async java profiler on the node to determine what it's doing:
https://github.com/jvm-profiling-tools/async-profiler
On Wed, Apr 17, 2019 at 11:31 AM Carl Mueller
wrote:
>
> No, we just did the package upgrade 2.1.9 --> 2.2.13
>
> It definitely feels like some indexes are being recalculate
Hi, we want to add a new DC to our existing 2 DC cluster, Cassandra 3.11.4. In
our preparations we noticed the system_distributed keyspace was still on
SimpleStrategy and decided to do an ALTER KEYSPACE to change it into
NetworkTopologyStrategy, according to documentation online, on the existing 2 DC
clu
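A minimal sketch of that kind of change, assuming the two existing datacenters
are named DC1 and DC2 with a replication factor of 3 each (the names and factors
here are placeholders, not from the original message):

  ALTER KEYSPACE system_distributed
    WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': 3, 'DC2': 3};

After such an alter, a repair of the keyspace is generally needed so existing
data matches the new replica placement.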
No, we just did the package upgrade 2.1.9 --> 2.2.13
It definitely feels like some indexes are being recalculated or the entire
sstables are being scanned due to suspected corruption.
On Wed, Apr 17, 2019 at 12:32 PM Jeff Jirsa wrote:
> There was a time when changing some of the parameters (es
There was a time when changing some of the parameters (especially bloom
filter FP ratio) would cause the bloom filters to be rebuilt on startup if
the sstables didn't match what was in the schema, leading to a delay like
that and similar logs. Any chance you changed the schema on that table
since th
Oh, the table in question is SizeTiered, had about 10 sstables total, and it
was JBOD across two data directories.
On Wed, Apr 17, 2019 at 12:26 PM Carl Mueller
wrote:
> We are doing a ton of upgrades to get out of 2.1.x. We've done probably
> 20-30 clusters so far and have not encountered anything
We are doing a ton of upgrades to get out of 2.1.x. We've done probably
20-30 clusters so far and have not encountered anything like this yet.
After the upgrade of a node, the restart takes a long time, like 10 minutes.
Almost all of our other nodes took less than 2 minutes to upgrade
(aside from
Thank you gentlemen for all your responses. Reading through them I was able
to resolve the issue by doing the following:
a. Creating an index on one of the query fields
b. Setting the page size to 200
Now the query runs instantaneously.
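As a rough illustration of the index part (the keyspace, table, and column names
below are made up, not the poster's; the page size is a driver or cqlsh setting,
e.g. PAGING 200 in cqlsh, rather than CQL):

  CREATE INDEX IF NOT EXISTS orders_status_idx ON myks.orders (status);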
On Wednesday, April 17, 2019, 7:12:21 AM PDT, Shaurya Gup
As already mentioned in this thread, ALLOW FILTERING should be avoided in
any scenario.
It seems to work in test scenarios, but as soon as the data increases to a
certain size (a few MBs), it starts failing miserably and fails almost
always.
Thanks
Shaurya
On Wed, Apr 17, 2019, 6:44 PM Durity, Sea
I am wrong in this paragraph:
>> On the other hand, a node was down, it was TTLed on healthy nodes and
>> tombstone was created, then you start the first one which was down and
>> as it counts down you hit that node with update.
It does not matter how long that dead node was dead. Once you start
If you are just trying to get a sense of the data, you could try adding a limit
clause to limit the number of results and hopefully beat the timeout.
However, ALLOW FILTERING really means "ALLOW ME TO DESTROY MY APPLICATION AND
CLUSTER." It means the data model does not support the query and wil
I do not use a table default TTL (every row has its own TTL), and no updates
occur to the rows.
I suppose that (because of the immutable nature of everything in Cassandra)
Cassandra would keep only the insertion timestamp + the original TTL and
compute the TTL of a row using these two and the current
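A quick way to see both values per row (keyspace, table, and column names are
placeholders):

  SELECT pk, TTL(val) AS remaining_ttl, WRITETIME(val) AS written_at
  FROM myks.t WHERE pk = 1;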
unsubscribe
From: Shravan R [mailto:skr...@gmail.com]
Sent: Tuesday, April 16, 2019 17:04
To: user@cassandra.apache.org
Subject: Re: multiple snitches in the same cluster
Thanks Paul
On Tue, Apr 16, 2019 at 9:52 AM Paul Chandler
<p...@redshots.com> wrote:
Hi Shravan,
We did not see any dow
The TTL value is decreasing every second and is set back to the original TTL
value after some update occurs on that row (see example below).
Does it not logically imply that if a node is down for some time, and
updates are occurring on live nodes, and handoffs are saved for three
hours, and after three hou
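A small sketch of the reset being described (the table, column, and TTL value
here are hypothetical, not the original example):

  INSERT INTO myks.t (pk, val) VALUES (1, 'x') USING TTL 600;
  SELECT TTL(val) FROM myks.t WHERE pk = 1;   -- counts down from 600
  UPDATE myks.t USING TTL 600 SET val = 'y' WHERE pk = 1;
  SELECT TTL(val) FROM myks.t WHERE pk = 1;   -- back near 600 again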
Lastly, I wonder if that number is the same from every node you
connect your nodetool to. Do all nodes see very similar false
positives ratio / number?
On Wed, 17 Apr 2019 at 21:41, Stefan Miklosovic
wrote:
>
> One thing comes to my mind but my reasoning is questionable as I am
> not an expert in
One thing comes to my mind but my reasoning is questionable as I am
not an expert in this.
If you think about this, the whole concept of a Bloom filter is to check
if some record is in a particular SSTable. A false positive means that,
obviously, the filter thought it was there but in fact it is not. So
Cass
We cannot run any repairs on these tables. Whenever we tried it
(incremental or full or partitioner range), it caused a node to run out of
disk space during anticompaction. We'll try again once Cassandra 4.0 is
released.
On Wed, Apr 17, 2019 at 1:07 PM Stefan Miklosovic <
stefan.mikloso...@insta
If you invoke nodetool, it gets the false positives number from this metric:
https://github.com/apache/cassandra/blob/cassandra-3.11/src/java/org/apache/cassandra/metrics/TableMetrics.java#L564-L578
You get high false positives, so it accumulates them here:
https://github.com/apache/cassandra/blob/cassandr
Both tables use the default bloom_filter_fp_chance of 0.01 ...
CREATE TABLE ... (
    a int,
    b int,
    bucket timestamp,
    ts timeuuid,
    c int,
    ...
    PRIMARY KEY ((a, b, bucket), ts, c)
) WITH CLUSTERING ORDER BY (ts DESC, c ASC)
    AND bloom_filter_fp_chance = 0.01
    AND compaction =
What is your bloom_filter_fp_chance for either table? I guess it is
bigger for the first one; the bigger that number is (between 0 and 1), the
less memory it will use (17 MiB against 54.9 MiB), which means the more false
positives you will get.
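For reference, the chance can be tuned per table; a sketch with a hypothetical
keyspace and table name (a lower value uses more memory, and new filters only
take effect as SSTables are rewritten, e.g. by compaction or upgradesstables):

  ALTER TABLE myks.events WITH bloom_filter_fp_chance = 0.001;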
On Wed, 17 Apr 2019 at 19:59, Martin Mačura wrote:
>
> Hi,
> I have
Hi,
I have a table with poor bloom filter false ratio:
SSTable count: 1223
Space used (live): 726.58 GiB
Number of partitions (estimate): 8592749
Bloom filter false positives: 35796352
Bloom filter false ratio: 0.68472
Hi,
According to these facts:
1. If a node is down for longer than max_hint_window_in_ms (3 hours by
default), the coordinator stops writing new hints.
2. The main purpose of the gc_grace property is to prevent zombie data, and it
also determines for how long the coordinator should keep hinted files
W
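Related to point 2: gc_grace is a per-table CQL setting (the default is 864000
seconds, i.e. 10 days); a minimal sketch with a hypothetical keyspace and table
name:

  ALTER TABLE myks.events WITH gc_grace_seconds = 864000;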