Hi users,
I will just add that a dependency-check Ant target was recently added
(by me) to scan the dependencies for CVEs. People can execute it
themselves via "ant dependency-check" and it will automatically scan
the vulnerability database against the Cassandra libraries we ship.
Regards
O
Hi Tom,
while I am not completely sure what might cause your issue, I just
want to highlight that schema agreement was overhauled quite a lot in
4.0 (1), so your issue may be related to what that ticket was trying
to fix.
Regards
(1) https://issues.apache.org/jira/browse/CASSANDRA-15158
On Fri, 1
useful; in case you have any questions, feel
free to reach out to us via GitHub issues.
(1) https://www.instaclustr.com/cassandra-tools-updated-cassandra-4-0/
Regards
Stefan Miklosovic
Hi Raman,
we at Instaclustr have created a CLI tool (1) which can strip TTLs
from your SSTables, and you can then import them back into your node.
Maybe that is something you will find handy.
We had some customers whose data had expired and who wanted to
resurrect it - so they took SSTables with expire
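(A side note, not specific to that tool: rewritten SSTables can
usually be loaded back with sstableloader, or by placing them in the
table's data directory and running nodetool refresh; check the tool's
documentation for the exact workflow it expects.)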
Hi,
I have tested both alpha and alpha2 and 3.11.5 on CentOS 7.7.1908 and
all went fine (I have some custom images for my own purposes).
Updating between alpha and alpha2 was just a mere version bump.
Cheers
On Thu, 31 Oct 2019 at 20:40, Abdul Patel wrote:
>
> Hey Everyone
>
> Did anyone was
Hi,
for example, compaction uses a lot of disk space. It is quite common,
so it is not safe to have your disk utilised at, say, 85%, because
compactions would not have room to compact and that node would be
stuck. This happens in production quite often.
Hence, having it at 50% and having a big buffer
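To put rough (made up) numbers on it: a size-tiered compaction merging
four 100 GiB SSTables can need up to ~400 GiB of free space while it
writes the new SSTable, because the inputs can only be deleted once
the output is complete. On a 1 TiB disk at 85% utilisation there is
only ~150 GiB free, so that compaction can never run; at 50% there is
~500 GiB of headroom.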
If I run the above command from dc3, does it get the data only from dc3?
>
>
>
> On Wed, Aug 21, 2019, 6:46 AM Stefan Miklosovic
> wrote:
>>
>> Hi Rahul,
>>
>> what is your motivation behind this? Why do you want to make sure the
>> count is same? What is the
Hi Rahul,
what is your motivation behind this? Why do you want to make sure the
count is the same? What is the purpose of that? All you should care
about is that Cassandra will return you the right results. It was
designed from the very bottom up to do that for you; you should not be
bothered too much about
You basically have to create a new table and include that column in
the primary key, either as part of the partition key or as a
clustering column, as in the sketch below. Avoid using ALLOW
FILTERING; it should not be used in production nor in any serious app.
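A minimal sketch (keyspace, table and column names are made up, as I
do not know your schema): to query users by email, key a new table by
that column instead of filtering the old one:

    CREATE TABLE ks.users_by_email (
        email text,
        user_id uuid,
        name text,
        PRIMARY KEY ((email), user_id)
    );

    -- hits a single partition; no ALLOW FILTERING needed
    SELECT user_id, name FROM ks.users_by_email WHERE email = 'a@example.com';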
On Sun, 18 Aug 2019 at 21:57, Rahul Reddy wrote:
>
> Hello,
>
> We have a table an
Hi Ralph,
yes, this is completely fine, even advisable. You can further extend
this idea to have sessions per keyspace, for example, if you really
insist, and it could be injectable based on some qualifier ... that's
up to you.
On Wed, 12 Jun 2019 at 11:31, John Sanda wrote:
>
> Hi Ralph,
>
> A sess
My guess is that the "latest" schema would be chosen, but I am
definitely interested in an in-depth explanation.
On Tue, 21 May 2019 at 00:28, Alexey Korolkov wrote:
>
> Hello team,
> In some circumstances, my cluster was split onto two schema versions
> (half on one version, and rest on another)
> I
What are your replication factors for that keyspace? Why are you using
EACH_QUORUM?
This might be handy:
https://docs.datastax.com/en/cassandra/3.0/cassandra/dml/dmlConfigSerialConsistency.html
On Wed, 1 May 2019 at 17:57, Bhavesh Prajapati
wrote:
>
> I had two queries run on same row in parallel (th
eplicas, my intention of removing it at 6/17
> should not be changed!
>
> Would you suggest that my idea of "gc_grace = max_hint = 3 hours" for a time
> series db is not reasonable?
>
> Sent using Zoho Mail
>
>
>
> On Wed, 17 Apr 2019 17:13:02 +0430 S
The TTL value is decreasing every second, and it is set back to the
original TTL value after some update occurs on that row (see the
example below). Does it not logically imply that if a node is down for
some time while updates are occurring on the live nodes, and handoffs
are saved for three hours, then after three hou
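A minimal sketch of that reset behaviour in cqlsh (keyspace and table
names are made up), assuming the TTL comes from the table's
default_time_to_live:

    CREATE TABLE ks.events (id int PRIMARY KEY, val text)
        WITH default_time_to_live = 600;

    INSERT INTO ks.events (id, val) VALUES (1, 'a');
    -- a few seconds later:
    SELECT TTL(val) FROM ks.events WHERE id = 1;  -- e.g. 594, counting down

    UPDATE ks.events SET val = 'b' WHERE id = 1;
    SELECT TTL(val) FROM ks.events WHERE id = 1;  -- back to ~600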
Lastly, I wonder if that number is the very same from every node you
connect your nodetool to. Do all nodes see a very similar false
positive ratio / count?
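(nodetool tablestats prints "Bloom filter false positives" and "Bloom
filter false ratio" per table, so you can compare the figures node by
node.)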
On Wed, 17 Apr 2019 at 21:41, Stefan Miklosovic
wrote:
>
> One thing comes to my mind but my reasoning is questionable as I am
> not
during anticompaction. We'll try again once Cassandra 4.0 is released.
>
> On Wed, Apr 17, 2019 at 1:07 PM Stefan Miklosovic
> wrote:
>>
>> if you invoke nodetool it gets false positives number from this metric
>>
>> https://github.com/apache/cassand
ion_window_unit': 'DAYS',
> 'tombstone_threshold': '0.9', 'unchecked_tombstone_compaction':
> 'false'}
> AND dclocal_read_repair_chance = 0.0
> AND default_time_to_live = 63072000
> AND gc_grace_seconds = 10800
> ...
>
What is your bloom_filter_fp_chance for either table? I guess it is
bigger for the first one. The bigger that number (between 0 and 1),
the less memory the bloom filter will use (17 MiB against 54.9 MiB),
and the more false positives you will get.
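If the false positives hurt, the knob can be turned the other way at
the cost of memory; a minimal sketch (table name is made up):

    -- a lower fp chance means a bigger but more accurate bloom filter;
    -- it takes effect as SSTables are (re)written, e.g. by compaction
    ALTER TABLE ks.events WITH bloom_filter_fp_chance = 0.01;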
On Wed, 17 Apr 2019 at 19:59, Martin Mačura wrote:
>
> Hi,
> I have
>> I have a 3 node cassandra cluster with Replication factor as 2 and
>> read-write consistency set to QUORUM.
I am not sure what you want to achieve with this. If you have three
nodes and RF 2, for each write there will be two replicas, right ...
If one of your replicas is down out of two in tot
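To spell out the arithmetic: QUORUM needs floor(RF/2) + 1 replicas, so
with RF = 2 that is floor(2/2) + 1 = 2, i.e. both replicas must
respond. With one of the two replicas down, QUORUM reads and writes
for the affected partitions fail. With RF = 3, QUORUM = 2 and one
replica can be down.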
Ah, I see, it is the default for hinted handoffs. I was somehow
thinking it was a bigger figure, I do not know why :)
I would say you should run repairs continuously / periodically, so you
would not even have to think about that, and it should run in the
background in a scheduled manner if possib
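(For reference, the default in cassandra.yaml is max_hint_window_in_ms:
10800000, i.e. three hours; hints for a node that stays down longer
than that stop being collected, which is why a repair is needed
afterwards.)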
Hi Kunal,
where did you get that "more than 3 hours" from?
Regards
On Tue, 9 Apr 2019 at 04:19, Kunal wrote:
>
> Hello everyone..
>
>
>
> I have a 6 node Cassandra datacenter, 3 nodes on each datacenter. If one of
> the node goes down and remain down for more than 3 hr, I have to run nodetool
On Wed, 3 Apr 2019 at 18:38, Oleksandr Shulgin
wrote:
>
> On Wed, Apr 3, 2019 at 12:28 AM Saleil Bhat (BLOOMBERG/ 731 LEX)
> wrote:
>>
>>
>> The standard procedure for doing this seems to be add a 3rd datacenter to
>> the cluster, stream data to the new datacenter via nodetool rebuild, then
>>
Hi Jens,
I am reading Cassandra: The Definitive Guide, and in chapter 9,
Reading and Writing Data, the section The Cassandra Write Path
contains this sentence:
If a replica does not respond within the timeout, it is presumed to be down
and a hint is stored for the write.
So your node migh
> org.apache.cassandra.service.StorageService.prepareReplacementInfo(StorageService.java:449)
> ~[apache-cassandra-2.2.8.jar:2.2.8]
>
> DN 10.xx.xx.xx 388.43 KB 256 6.9%
> bdbd632a-bf5d-44d4-b220-f17f258c4701 1e
>
> Under what conditions does this happen?
>
> Thank you
Stefan Miklosovic
can do will be from your data model.
>
> Don’t ask Cassandra to query all data from table but the ideal query will
> be using single partition.
>
>
>
> On Tue, Mar 12, 2019 at 6:46 PM Stefan Miklosovic <
> stefan.mikloso...@instaclustr.com> wrote:
>
> Hi Sean,
> an admin should be able to set a flag
> in cassandra.yaml to not allow filtering at all. The cluster should be able
> to protect itself from bad queries.
>
> *From:* Leena Ghatpande
> *Sent:* Tuesday, March 12, 2019 9:02 AM
> *To:* Stefan Miklosovic ;
predictable performance. If you want to execute this query
despite the performance unpredictability, use ALLOW FILTERING"
On Tue, 12 Mar 2019 at 10:10, Stefan Miklosovic <
stefan.mikloso...@instaclustr.com> wrote:
> Hi Leena,
>
> "We are thinking of creating a new table
n will be needed only on an ad hoc basis and it won't
> be as frequent.
>2. Best way to migrate large volume of data with ttl from one table to
>another within the same cluster.
>
>
> Any other suggestions also will be greatly appreciated.
>
>
>
Stefan Miklosovic
keyspace.customer_sensor_tagids (values(tagids));
> CREATE INDEX XXX ON keyspace.customer_sensor_tagids (values(XXX));
> CREATE INDEX XXX ON keyspace.customer_sensor_tagids (XXX);
> CREATE INDEX XXX ON keyspace.customer_sensor_tagids (XXX);
> CREATE INDEX XXX ON keyspace.customer_sensor_tagids (X