Severity: important
Affected versions:
- Apache Cassandra 4.0.0 through 4.0.9
- Apache Cassandra 4.1.0 through 4.1.1
Description:
Privilege escalation when enabling FQL/Audit logs allows a user with JMX access
to run arbitrary commands as the user running Apache Cassandra.
This issue affects Apac
This looks like https://issues.apache.org/jira/browse/CASSANDRA-17273
iirc you can merge the two files - making sure all ADD and REMOVE records are
in both files, I think you would need to add
`ADD:[/mnt/data01/cassandra/data/hades/prod_md5_sha1-bb5bdca002b111edb9761fc3bb7c847c/nb-67417-big-,0,8
Severity: high
Description:
When running Apache Cassandra with the following configuration:
enable_user_defined_functions: true
enable_scripted_user_defined_functions: true
enable_user_defined_functions_threads: false
it is possible for an attacker to execute arbitrary code on the host. The
a
problem would be that for every file you flush, you would recompact all of
L1 - files are flushed to L0, then compacted together with all overlapping
files in L1.
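The L0 -> L1 promotion described above can be sketched roughly like this (a simplified illustration with token ranges as plain integers, not Cassandra's actual compaction code):

```python
def overlaps(a, b):
    """True if two (first_token, last_token) ranges intersect."""
    return a[0] <= b[1] and b[0] <= a[1]

def pick_l0_to_l1_compaction(l0_sstables, l1_sstables):
    """Flushed files land in L0; compacting them up pulls in every
    overlapping L1 file, which is why each flush can rewrite much of L1."""
    overlapping = [s for s in l1_sstables
                   if any(overlaps(s, f) for f in l0_sstables)]
    return l0_sstables + overlapping

# An L0 file covering tokens 0..100 drags in both overlapping L1 files;
# the (200, 300) file is untouched.
l0 = [(0, 100)]
l1 = [(0, 50), (51, 120), (200, 300)]
print(pick_l0_to_l1_compaction(l0, l1))  # [(0, 100), (0, 50), (51, 120)]
```

This is why a write-heavy workload on LCS keeps recompacting L1: every flush creates a new L0 file whose token span typically overlaps most of L1.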
On Tue, Sep 18, 2018 at 4:53 AM 健 戴 wrote:
> Hi,
>
> I have one table having 2T data saved in c* each node.
> And if using LCS, the d
It could also be https://issues.apache.org/jira/browse/CASSANDRA-2503
On Mon, Sep 17, 2018 at 4:04 PM Jeff Jirsa wrote:
>
>
> On Sep 17, 2018, at 2:34 AM, Oleksandr Shulgin <
> oleksandr.shul...@zalando.de> wrote:
>
> On Tue, Sep 11, 2018 at 8:10 PM Oleksandr Shulgin <
> oleksandr.shul...@zaland
Anything in the logs? It *could* be
https://issues.apache.org/jira/browse/CASSANDRA-13873
On Tue, Oct 24, 2017 at 11:18 PM, Sotirios Delimanolis <
sotodel...@yahoo.com.invalid> wrote:
> On a Cassandra 2.2.11 cluster, I noticed estimated compactions
> accumulating on one node. nodetool compactions
This is done to avoid overlap in levels > 0
There is this though: https://issues.apache.org/jira/browse/CASSANDRA-13425
If you are restoring an entire node, starting with an empty data directory,
you should probably stop Cassandra, copy the snapshot in, and restart; that
will keep the levels
On
It is this: "-XX:+PerfDisableSharedMem" - in your dtest you need to do
"remove_perf_disable_shared_mem(node1)" before starting the node
/Marcus
On Thu, Oct 6, 2016 at 8:30 AM, Benjamin Roth
wrote:
> Maybe additional information, this is the CS command line for ccm node1:
>
> br 20376 3.2
it could also be CASSANDRA-11412 if you have many sstables and vnodes
On Wed, Jun 22, 2016 at 2:50 PM, Bhuvan Rawal wrote:
> Thanks for the info Paulo, Robert. I tried further testing with other
> parameters and it was prevalent. We could be either 11739, 11206. But I'm
> skeptical about 11739 be
yeah that is most likely a bug, could you file a ticket?
On Tue, Mar 22, 2016 at 4:36 AM, Michael Fong <
michael.f...@ruckuswireless.com> wrote:
> Hi, all,
>
>
>
> We recently encountered a scenario under Cassandra 2.0 deployment.
> Cassandra detected a corrupted sstable, and when we attempt to s
On Wed, Mar 16, 2016 at 6:49 PM, Anubhav Kale
wrote:
> I am using Cassandra 2.1.13 which has all the latest DTCS fixes (it does
> STCS within the DTCS windows). It also introduced a field called
> MAX_WINDOW_SIZE which defaults to one day.
>
>
>
> So in my data folders, I may see SS Tables that s
We don't have anything like that, do you have a specific use case in mind?
Could you create a JIRA ticket and we can discuss there?
/Marcus
On Sat, Mar 12, 2016 at 7:05 AM, Dikang Gu wrote:
> Hello there,
>
> RocksDB has the feature called "Compaction Filter" to allow application to
> modify/d
why do you have 'timestamp_resolution': 'MILLISECONDS'? It should be left
as default (MICROSECONDS) unless you do "USING TIMESTAMP <timestamp>"
inserts, see
https://issues.apache.org/jira/browse/CASSANDRA-11041
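The mismatch is easy to see numerically. A minimal sketch (illustrative only; real DTCS windowing is more involved than a year computation):

```python
import datetime

# Cassandra drivers generate write timestamps in MICROSECONDS since the epoch:
write_ts = 1_456_000_000_000_000  # roughly Feb 2016, in microseconds

# Interpreted correctly as microseconds:
as_micros = datetime.datetime.fromtimestamp(
    write_ts / 1_000_000, tz=datetime.timezone.utc)

# Misread under 'timestamp_resolution': 'MILLISECONDS', the same number
# corresponds to a date tens of thousands of years in the future, so
# window assignment is nonsense:
as_millis_year = 1970 + (write_ts / 1_000) / (365.25 * 24 * 3600)

print(as_micros.year)        # 2016
print(int(as_millis_year))   # far beyond year 40000
```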
On Mon, Feb 29, 2016 at 2:36 PM, Noorul Islam K M wrote:
>
> Hi all,
>
> We are using below comp
ich the user would hit this? I mean, why would the code care either
> way with respect to JBOD strategy for the case where no local data is
> stored?
>
local ranges are all ranges the node should store - if you have 256 vnode
tokens and RF=3, you will have 768 local ranges
/Marcus
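The arithmetic above is just tokens times replication factor (a minimal illustration; actual range ownership also depends on the ring layout):

```python
def local_range_count(num_tokens: int, rf: int) -> int:
    """Each of the node's vnode tokens contributes one primary range,
    and replication makes the node responsible for rf ranges per token."""
    return num_tokens * rf

print(local_range_count(256, 3))  # 768
```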
If you don't use RandomPartitioner/Murmur3Partitioner you will get the old
behavior.
On Wed, Feb 24, 2016 at 2:47 AM, Jack Krupansky
wrote:
> I just wanted to confirm whether my understanding of how JBOD allocates
> device space is correct of not...
>
> Pre-3.2:
> On each memtable flush Cassandr
It is mentioned here btw: http://www.datastax.com/dev/blog/improving-jbod
On Wed, Feb 24, 2016 at 8:14 AM, Marcus Eriksson wrote:
> If you don't use RandomPartitioner/Murmur3Partitioner you will get the old
> behavior.
>
> On Wed, Feb 24, 2016 at 2:47 AM, Jack Krupansky
>
The reason for this is probably
https://issues.apache.org/jira/browse/CASSANDRA-10831 (which only affects
2.1)
So, if you had problems with incremental repair and LCS before, upgrade to
2.1.13 and try again
/Marcus
On Wed, Feb 10, 2016 at 2:59 PM, horschi wrote:
> Hi Jean,
>
> we had the same
Bryan, this should be improved with
https://issues.apache.org/jira/browse/CASSANDRA-10768 - could you try it
out?
On Tue, Dec 1, 2015 at 10:58 PM, Bryan Cheng wrote:
> Sorry if I misunderstood, but are you asking about the LCS case?
>
> Based on our experience, I would absolutely recommend you c
Yes, it should now be safe to just run a repair with -inc -par to migrate
to incremental repairs
BUT, if you currently use, for example, the repair service in OpsCenter or
Spotify's Cassandra Reaper, you might still want to migrate the way it is
documented, as you will have to run a full repair to migrate
if you are on Cassandra 2.2, it is probably this:
https://issues.apache.org/jira/browse/CASSANDRA-10270
On Tue, Sep 15, 2015 at 4:37 AM, Saladi Naidu wrote:
> We are using Level Tiered Compaction Strategy on a Column Family. Below
> are CFSTATS from two nodes in same cluster, one node has 880 SS
Starting up fresh it is totally OK to just start using incremental repairs
On Thu, Sep 3, 2015 at 10:25 PM, Jean-Francois Gosselin <
jfgosse...@gmail.com> wrote:
>
> On fresh install of Cassandra what's the best approach to start using
> incremental repair from the get go (I'm using LCS) ?
>
> Ru
It is probably this: https://issues.apache.org/jira/browse/CASSANDRA-9549
On Wed, Jun 17, 2015 at 7:37 PM, Michał Łowicki wrote:
> Looks that memtable heap size is growing on some nodes rapidly (
> https://www.dropbox.com/s/3brloiy3fqang1r/Screenshot%202015-06-17%2019.21.49.png?dl=0).
> Drops ar
did look and that is where I got the above but it doesn't show
> any detail about moving from L0 -> L1. Any specific arguments I should try
> with?
>
> On Tue, Apr 21, 2015 at 4:52 PM, Marcus Eriksson
> wrote:
>
>> you need to look at nodetool compactionstats - there is
you need to look at nodetool compactionstats - there is probably a big L0
-> L1 compaction going on that blocks other compactions from starting
On Tue, Apr 21, 2015 at 1:06 PM, Anishek Agarwal wrote:
> the "some_bits" column has about 14-15 bytes of data per key.
>
> On Tue, Apr 21, 2015 at 4:34
The issue here is that getPosition returns null
I think this was fixed in
https://issues.apache.org/jira/browse/CASSANDRA-8750
On Fri, Apr 17, 2015 at 10:55 PM, Robert Coli wrote:
> On Fri, Apr 17, 2015 at 11:40 AM, Mark Greene wrote:
>
>> I'm receiving an exception when I run a repair process via
It should work on 2.0.13. If it fails with that assertion, you should just
retry. If that does not work, and you can reproduce this, please file a
ticket
/Marcus
On Tue, Mar 31, 2015 at 9:33 AM, Amlan Roy wrote:
> Hi,
>
> Thanks for the reply. Since nodetool cleanup is not working even after
>
Do you see the segfault or do you see
https://issues.apache.org/jira/browse/CASSANDRA-8716 ?
On Tue, Mar 17, 2015 at 10:34 AM, Ajay wrote:
> Hi,
>
> Now that 2.0.13 is out, I don't see nodetool cleanup issue(
> https://issues.apache.org/jira/browse/CASSANDRA-8718) been fixed yet. The
> bug show
We had some issues with it right before we wanted to release 2.1.3 so we
temporarily(?) disabled it, it *might* get removed entirely in 2.1.4, if
you have any input, please comment on this ticket:
https://issues.apache.org/jira/browse/CASSANDRA-8833
/Marcus
On Sat, Feb 21, 2015 at 7:29 PM, Mark G
https://issues.apache.org/jira/browse/CASSANDRA-8635
On Tue, Feb 3, 2015 at 5:47 AM, 曹志富 wrote:
> Just run nodetool repair.
>
> The nodes which have many sstables are the newest in my cluster. Before
> adding these nodes to my cluster, my cluster had no automatic compaction
> because my cluster is an
Hi
Unsure what you mean by automatically, but you should use "-par -inc" when
you repair
And, you should wait until 2.1.3 (which will be out very soon) before doing
this, we have fixed many issues with incremental repairs
/Marcus
On Thu, Jan 29, 2015 at 7:44 AM, Roland Etzenhammer <
r.etzenham.
Yes, you should reenable autocompaction
/Marcus
On Thu, Jan 8, 2015 at 10:33 AM, Roland Etzenhammer <
r.etzenham...@t-online.de> wrote:
> Hi Marcus,
>
> thanks for that quick reply. I did also look at:
>
> http://www.datastax.com/documentation/cassandra/2.1/
> cassandra/operations/ops_repair_nod
If you are on 2.1.2+ (or using STCS) you don't need those steps (we should
probably update the blog post).
Now we keep separate levelings for the repaired/unrepaired data and move
the sstables over after the first incremental repair
But, if you are running 2.1 in production, I would recommend that you wa
If you are that write-heavy you should definitely go with STCS, LCS
optimizes for reads by doing more compactions
/Marcus
On Tue, Nov 25, 2014 at 11:22 AM, Andrei Ivanov wrote:
> Hi Jean-Armel, Nikolai,
>
> 1. Increasing sstable size doesn't work (well, I think, unless we
> "overscale" - add mo
roblem at ~2TB nodes. (You know, we are
> trying to save something on HW - we are running on EC2 with EBS
> volumes)
>
> Do I get it right that we'd better stick to smaller nodes?
>
>
>
> On Tue, Nov 18, 2014 at 5:20 PM, Marcus Eriksson
> wrote:
> > No, they will get
g that those big tables (like in my
> "old" cluster) will be hardly compactable in the future...
>
> Sincerely, Andrei.
>
> On Tue, Nov 18, 2014 at 4:27 PM, Marcus Eriksson
> wrote:
> > I suspect they are getting size tiered in L0 - if you have too many
> sst
I suspect they are getting size tiered in L0 - if you have too many
sstables in L0, we will do size tiered compaction on sstables in L0 to
improve performance
Use tools/bin/sstablemetadata to get the level for those sstables, if they
are in L0, that is probably the reason.
/Marcus
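The fallback described here can be sketched as a simple threshold check (an illustration only; the threshold constant is a made-up stand-in, and Cassandra's actual trigger logic differs):

```python
MAX_L0_BEFORE_STCS = 32  # hypothetical threshold for this sketch

def choose_compaction(l0_count: int) -> str:
    """When L0 has accumulated too many sstables, leveled compaction
    falls back to size-tiering within L0 to catch up faster, instead
    of the usual L0 -> L1 promotion."""
    if l0_count > MAX_L0_BEFORE_STCS:
        return "size-tiered in L0"
    return "leveled L0 -> L1"

print(choose_compaction(100))  # size-tiered in L0
print(choose_compaction(4))    # leveled L0 -> L1
```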
On Tue, Nov 18
On Wed, Oct 22, 2014 at 2:39 PM, Juho Mäkinen
wrote:
> I'm having problems understanding how incremental repairs are supposed to
> be run.
>
> If I try to do "nodetool repair -inc" cassandra will complain that "It is
> not possible to mix sequential repair and incremental repairs". However it
> s
On Thu, Oct 16, 2014 at 1:54 AM, Donald Smith <
donald.sm...@audiencescience.com> wrote:
>
> *stream_throughput_outbound_megabits_per_sec* is the timeout per
> operation on the streaming socket. The docs recommend not to have it
> too low (because a timeout causes streaming to restart from the
this is fixed in 2.0.8; https://issues.apache.org/jira/browse/CASSANDRA-7187
/Marcus
On Fri, Oct 10, 2014 at 7:11 PM, Parag Shah wrote:
> Cassandra Version: 2.0.7
>
> In my application, I am using Cassandra Java Driver 2.0.2
>
> Thanks
> Parag
>
> From: Marcus Eri
what version are you on?
On Thu, Oct 9, 2014 at 10:33 PM, Parag Shah wrote:
> Hi all,
>
> I am trying to disable compaction for a few select tables. Here is
> a definition of one such table:
>
> CREATE TABLE blob_2014_12_31 (
> blob_id uuid,
> blob_index int,
> blob_chunk blob,
>
Not really
What version are you on? Do you have pending compactions and no ongoing
compactions?
/Marcus
On Wed, Sep 24, 2014 at 11:35 PM, Donald Smith <
donald.sm...@audiencescience.com> wrote:
> On one of our nodes we have lots of pending compactions (499).In the
> past we’ve seen pending
"select * from <table>" will not populate the row cache, but if the row is
cached, it will be used. You need to use "select * from <table> where X=Y" to
populate the row cache.
When setting caching = "rows_only" you disable the key cache, which might hurt
your performance.
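The caching semantics described above can be sketched like this (a toy model of the behavior, not Cassandra's implementation):

```python
DISK = {("t", 1): "row-a", ("t", 2): "row-b"}  # stand-in for sstable reads
row_cache = {}

def read_from_disk(table, key):
    return DISK[(table, key)]

def point_query(table, key):
    # "select * from t where X = Y": populates the row cache on a miss
    if (table, key) not in row_cache:
        row_cache[(table, key)] = read_from_disk(table, key)
    return row_cache[(table, key)]

def full_scan(table):
    # "select * from t": served from the cache when a row is already
    # there, but a scan never inserts anything into the cache
    return [row_cache.get((t, k)) or read_from_disk(t, k)
            for (t, k) in sorted(DISK) if t == table]

full_scan("t")
print(len(row_cache))   # 0: the scan populated nothing
point_query("t", 1)
print(len(row_cache))   # 1: the keyed read did
```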
On Wed, Feb 12, 2014 at 9:05 PM, PARASHAR, B
You need an up-to-date Cassandra; files with -ic- are for Cassandra 1.2.5+
/Marcus
On Mon, Feb 3, 2014 at 8:31 AM, Aravindan T wrote:
> Hi,
>
> There is a necessity where i need to migrate data from acunu cassandra to
> apache cassandra .
>
> As part of it, the column families snapshots are ta
this has been fixed:
https://issues.apache.org/jira/browse/CASSANDRA-6496
On Wed, Dec 18, 2013 at 2:51 PM, Desimpel, Ignace <
ignace.desim...@nuance.com> wrote:
> Hi,
> Would it not be possible that in some rare cases these 'small' files are
> created also and thus resulting in the same endless
yeah this is known, and we are looking for a fix
https://issues.apache.org/jira/browse/CASSANDRA-6275
if you have a simple way of reproducing, please add a comment
On Thu, Nov 14, 2013 at 10:53 AM, Murthy Chelankuri wrote:
> I See lots of these deleted file descriptors cassandra is holding in
.service.CassandraDaemon.setup(CassandraDaemon.java:247)
>
>
> at
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:443)
>
>
> at
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:486)
>
>
>
> in 2.0 and this code is gone from 2.1.
>
> Basically, my fault for skimming over this too quickly.
>
> We will move from 1.2.10 -> 2.0 -> 2.1
>
> Thanks,
>
> Chris
>
can't really reproduce, could you update the ticket with a bit more info
about your setup?
do you have multiple .json files in your data dirs?
On Wed, Sep 25, 2013 at 10:07 AM, Marcus Eriksson wrote:
> this is most likely a bug, filed
> https://issues.apache.org/jira/browse/CASSANDRA-60
this is most likely a bug, filed
https://issues.apache.org/jira/browse/CASSANDRA-6093 and will try to have a
look today.
On Wed, Sep 25, 2013 at 1:48 AM, Christopher Wirt wrote:
> Hi,
>
>
> Just had a go at upgrading a node to the latest stable c* 2 release and
> think I ran into som
this is the issue:
https://issues.apache.org/jira/browse/CASSANDRA-5383
guess it fell through the cracks, will poke around
On Tue, Sep 24, 2013 at 4:26 PM, Nate McCall wrote:
> What version of 1.2.x?
>
> Unfortunately, you must go through 1.2.9 first. See
> https://github.com/apache/cassandra/blob
yep that works, you need to remove all components of the sstable though,
not just -Data.db
and, in 2.0 there is this:
https://issues.apache.org/jira/browse/CASSANDRA-5228
/Marcus
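Removing "all components" means every file sharing the sstable's name prefix, not only -Data.db. A hedged sketch (component names vary by Cassandra version; the `ka-5-big` prefix below is just an example):

```python
from pathlib import Path

def sstable_components(data_dir: str, prefix: str):
    """All on-disk components of one sstable share a name prefix, e.g.
    ka-5-big-Data.db, ka-5-big-Index.db, ka-5-big-Statistics.db, ..."""
    return sorted(Path(data_dir).glob(prefix + "-*"))

# e.g. sstable_components("/var/lib/cassandra/data/ks/table", "ka-5-big")
# lists every component file of that one sstable, ready to delete together.
```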
On Wed, Jul 10, 2013 at 2:09 PM, Theo Hultberg wrote:
> Hi,
>
> I think I remember reading that if you have sstabl
looks very similar to this:
https://issues.apache.org/jira/browse/CASSANDRA-5418
/Marcus
On Fri, Apr 12, 2013 at 9:12 AM, Gabriel Ciuloaica wrote:
> Hi,
>
> From yesterday, I'm trying to add a new node to an existing 3 nodes
> Cassandra cluster, running version 1.2.3. Today I have started clea
you could consider enabling leveled compaction:
http://www.datastax.com/dev/blog/leveled-compaction-in-apache-cassandra
On Tue, Mar 5, 2013 at 9:46 AM, Matthias Zeilinger <
matthias.zeilin...@bwinparty.com> wrote:
> Short question afterwards:
>
> I have read in the documentation, that after a ma
beware of https://issues.apache.org/jira/browse/CASSANDRA-3820 though if
you have many keys per node
other than that, yep, it seems solid
/Marcus
On Wed, Feb 29, 2012 at 6:20 PM, Thibaut Britz <
thibaut.br...@trendiction.com> wrote:
> Thanks!
>
> We will test it on our test cluster in the comin