APT key for Apache Cassandra seems deprecated and cqlsh is broken

2022-08-09 Thread Dorian ROSSE
Hello,


I am getting this error when I try to update my system:

'''W: http://www.apache.org/dist/cassandra/debian/dists/40x/InRelease: Key is 
stored in legacy trusted.gpg keyring (/etc/apt/trusted.gpg), see the 
DEPRECATION section in apt-key(8) for details.
'''

(For this error on my Ubuntu system, I have reached out to Ubuntu Launchpad
support in case the problem is on their side.)

Apache Cassandra is installed, but the cqlsh tool is broken, so I tried to
install the Python module it asks for; however, that module does not seem to exist:

'''~$ sudo cqlsh
Traceback (most recent call last):
  File "/usr/bin/cqlsh.py", line 20, in <module>
import cqlshlib
ModuleNotFoundError: No module named 'cqlshlib'
~$ sudo pip3 install cqlshlib
ERROR: Could not find a version that satisfies the requirement cqlshlib (from 
versions: none)
ERROR: No matching distribution found for cqlshlib
~$ sudo python3 install cqlshlib
python3: can't open file '/home/dorianrosse/install': [Errno 2] No such file or 
directory
~$ sudo pip install cqlshlib
ERROR: Could not find a version that satisfies the requirement cqlshlib (from 
versions: none)
ERROR: No matching distribution found for cqlshlib
~$ sudo python install cqlshlib
python: can't open file 'install': [Errno 2] No such file or directory
'''

Thank you in advance for helping me fix both errors.

Regards,


Dorian ROSSE.


Re: APT key for Apache Cassandra seems deprecated and cqlsh is broken

2022-08-09 Thread Claude Warren, Jr via dev
Could this be related to the deprecation of apt-key on your system?  You
don't specify what version of which distribution you are using.  However,
there is a good example of how to solve the issue at
https://www.linuxuprising.com/2021/01/apt-key-is-deprecated-how-to-add.html
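
As a rough sketch of the approach described there (the key ID below is a
placeholder, the keyring path is a common but not mandatory location, and the
repository line is reconstructed from the warning above; check it against your
actual /etc/apt/sources.list.d entry before applying):

'''
# List keys in the legacy keyring and note the Cassandra key ID (placeholder below)
sudo apt-key list

# Export that key into a dedicated keyring file that APT can be pointed at
sudo apt-key export <CASSANDRA_KEY_ID> | sudo gpg --dearmor -o /usr/share/keyrings/apache-cassandra.gpg

# Reference the keyring explicitly via signed-by in the Cassandra source entry
echo "deb [signed-by=/usr/share/keyrings/apache-cassandra.gpg] http://www.apache.org/dist/cassandra/debian 40x main" \
  | sudo tee /etc/apt/sources.list.d/cassandra.sources.list

sudo apt update
'''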



On Tue, Aug 9, 2022 at 11:51 AM Dorian ROSSE  wrote:

> Hello,
>
>
> I am getting this error when I try to update my system:
>
> '''W: http://www.apache.org/dist/cassandra/debian/dists/40x/InRelease:
> Key is stored in legacy trusted.gpg keyring (/etc/apt/trusted.gpg), see the
> DEPRECATION section in apt-key(8) for details.
> '''
>
> (For this error on my Ubuntu system, I have reached out to Ubuntu Launchpad
> support in case the problem is on their side.)
>
> Apache Cassandra is installed, but the cqlsh tool is broken, so I tried to
> install the Python module it asks for; however, that module does not seem to
> exist:
>
> '''~$ sudo cqlsh
> Traceback (most recent call last):
>   File "/usr/bin/cqlsh.py", line 20, in <module>
> import cqlshlib
> ModuleNotFoundError: No module named 'cqlshlib'
> ~$ sudo pip3 install cqlshlib
> ERROR: Could not find a version that satisfies the requirement cqlshlib
> (from versions: none)
> ERROR: No matching distribution found for cqlshlib
> ~$ sudo python3 install cqlshlib
> python3: can't open file '/home/dorianrosse/install': [Errno 2] No such
> file or directory
> ~$ sudo pip install cqlshlib
> ERROR: Could not find a version that satisfies the requirement cqlshlib
> (from versions: none)
> ERROR: No matching distribution found for cqlshlib
> ~$ sudo python install cqlshlib
> python: can't open file 'install': [Errno 2] No such file or directory
> '''
>
> Thank you in advance for helping me fix both errors.
>
> Regards,
>
>
> Dorian ROSSE.
>


Re: dtests to reproduce the schema disagreement

2022-08-09 Thread Aleksey Yeshchenko
The absolute easiest way would be to take one of the two nodes down first,
run CREATE TABLE on the live node, shut it down, bring the other one up,
and run the same CREATE TABLE there, then bring up the down node.

> On 9 Aug 2022, at 07:48, Konstantin Osipov via dev  
> wrote:
> 
> * Cheng Wang via dev  [22/08/09 09:43]:
> 
>> I am working on improving the schema disagreement issue. I need some dtests
>> which can reproduce the schema disagreement.  Anyone know if there are any
>> existing tests for that? Or something similar?
> 
> CASSANDRA-10250 is a good start.
> 
> -- 
> Konstantin Osipov, Moscow, Russia



Unsubscribe

2022-08-09 Thread Schmidtberger, Brian M. (STL)
unsubscribe

+
BRIAN SCHMIDTBERGER
Software Engineering Senior Advisor, Core Engineering, Express Scripts
M: 785.766.7450
EVERNORTH.COM

Confidential, unpublished property of Evernorth. Do not duplicate or 
distribute. Use and distribution limited solely to authorized personnel. © 
Copyright 2022 Evernorth. Legal 
Disclaimer



Re: Cassandra project status update 2022-08-03

2022-08-09 Thread Benjamin Lerer
At this point it is clear that we will probably never be able to remove all
flakiness from our tests; some level of it will remain. For me the questions
are: 1) Where do we draw the line for a release? And 2) How do we maintain
that line over time?

In my opinion, not all flakies are equal. Some fail every 10 runs, some fail
1 in 1,000 runs. I would personally draw the line based on that metric. With
the CircleCI tasks that Andres has added, we can easily get that information
for a given test.
We can start by putting the bar at a lower level and raise it over time once
most of the flakies that we hit are above that level.

At the same time, we should make sure that we do not introduce new flakies.
One simple approach that has been mentioned several times is to run the new
tests added by a given patch in a loop using one of the CircleCI tasks.
That would allow us to minimize the risk of introducing flaky tests. We
should also probably revert newly committed patches if we detect that they
introduced flakies.

What do you think?





On Sun, Aug 7, 2022 at 12:24, Mick Semb Wever  wrote:

>
>
>> With that said, I guess we can just revise on a regular basis what exactly
>> the last flakes are, rather than numbers which also change quickly up and
>> down with the first change in the infra.
>>
>
>
> +1, I am in favour of taking a pragmatic approach.
>
> If flakies are identified and triaged enough that, with correlation from
> both CI systems, we are confident that no legit bugs are behind them, I'm
> in favour of going beta.
>
> I still remain in favour of somehow incentivising reducing other flakies
> as well. Flakies that expose poor/limited CI infra, and/or tests that are
> not as resilient as they could be, are still noise that indirectly reduce
> our QA (and increase efforts to find and tackle those legit runtime
> problems). Interested in hearing input from others here that have been
> spending a lot of time on this front.
>
> Could it work if we say: all flakies must be ticketed, and test/infra-related
> flakies do not block a beta release so long as there are fewer of them than in
> the previous release? The intent here being pragmatic, while keeping us on a
> "keep the campground cleaner" trajectory…
>
>


Re: Unsubscribe

2022-08-09 Thread Bowen Song via dev
To unsubscribe from this mailing list, you'll need to send an email to 
dev-unsubscr...@cassandra.apache.org


On 09/08/2022 12:52, Schmidtberger, Brian M. (STL) wrote:


unsubscribe

+

BRIAN SCHMIDTBERGER

Software Engineering Senior Advisor, Core Engineering, Express Scripts

M: 785.766.7450

EVERNORTH.COM 

Confidential, unpublished property of Evernorth. Do not duplicate or
distribute. Use and distribution limited solely to authorized
personnel. © Copyright 2022 Evernorth. Legal Disclaimer


Re: dtests to reproduce the schema disagreement

2022-08-09 Thread Cheng Wang via dev
Thank you, Aleksey.
Yes, I have tried this approach. The problem is that there is a timing
window: node 1 runs the CREATE TABLE while node 2 is down, but when we bring
node 2 back up it may receive the schema from node 1 via gossip at startup,
and the CREATE TABLE will then fail on node 2 because the table already
exists.



On Tue, Aug 9, 2022 at 4:48 AM Aleksey Yeshchenko  wrote:

> The absolute easiest way would be to take one of the two nodes down first,
> run CREATE TABLE on the live node, shut it down, bring the other one up,
> and run the same CREATE TABLE there, then bring up the down node.
>
> > On 9 Aug 2022, at 07:48, Konstantin Osipov via dev <
> dev@cassandra.apache.org> wrote:
> >
> > * Cheng Wang via dev  [22/08/09 09:43]:
> >
> >> I am working on improving the schema disagreement issue. I need some
> dtests
> >> which can reproduce the schema disagreement.  Anyone know if there are
> any
> >> existing tests for that? Or something similar?
> >
> > CASSANDRA-10250 is a good start.
> >
> > --
> > Konstantin Osipov, Moscow, Russia
>
>


Re: dtests to reproduce the schema disagreement

2022-08-09 Thread Jeff Jirsa
Stop node 1 before you start node 2, essentially mocking a full network
partition.
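
For reference, here is a rough manual sketch of that sequence using ccm rather
than a ready-made dtest (the cluster name, Cassandra version, and the ks.t
keyspace/table below are invented for illustration, and the exact way of
passing statements through ccm's cqlsh may vary):

'''
# Two-node cluster; the version is just an example of a 4.0.x release
ccm create schema-disagreement -v 4.0.4 -n 2 -s

# Take node2 down, then create the schema on node1 only
ccm node2 stop
echo "CREATE KEYSPACE ks WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 2};" | ccm node1 cqlsh
echo "CREATE TABLE ks.t (id int PRIMARY KEY);" | ccm node1 cqlsh

# Stop node1 BEFORE starting node2, mocking a full partition,
# so node2 never learns about the table from node1 via gossip
ccm node1 stop
ccm node2 start

# Create the same keyspace/table independently on node2
echo "CREATE KEYSPACE ks WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 2};" | ccm node2 cqlsh
echo "CREATE TABLE ks.t (id int PRIMARY KEY);" | ccm node2 cqlsh

# Bring node1 back; the two nodes should now report different schema versions
ccm node1 start
ccm node1 nodetool describecluster
'''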



On Tue, Aug 9, 2022 at 11:57 AM Cheng Wang via dev 
wrote:

> Thank you, Aleksey.
> Yes, I have tried this approach. The problem is that there is a timing
> window: node 1 runs the CREATE TABLE while node 2 is down, but when we bring
> node 2 back up it may receive the schema from node 1 via gossip at startup,
> and the CREATE TABLE will then fail on node 2 because the table already
> exists.
>
>
>
> On Tue, Aug 9, 2022 at 4:48 AM Aleksey Yeshchenko 
> wrote:
>
>> The absolute easiest way would be to take one of the two nodes down first,
>> run CREATE TABLE on the live node, shut it down, bring the other one up,
>> and run the same CREATE TABLE there, then bring up the down node.
>>
>> > On 9 Aug 2022, at 07:48, Konstantin Osipov via dev <
>> dev@cassandra.apache.org> wrote:
>> >
>> > * Cheng Wang via dev  [22/08/09 09:43]:
>> >
>> >> I am working on improving the schema disagreement issue. I need some
>> dtests
>> >> which can reproduce the schema disagreement.  Anyone know if there are
>> any
>> >> existing tests for that? Or something similar?
>> >
>> > CASSANDRA-10250 is a good start.
>> >
>> > --
>> > Konstantin Osipov, Moscow, Russia
>>
>>


Re: Cassandra project status update 2022-08-03

2022-08-09 Thread Ekaterina Dimitrova
“In my opinion, not all flakies are equal. Some fail every 10 runs, some
fail 1 in 1,000 runs.”
Agreed, for everything that is not a new test or regression and is not infra
related.

“We can start by putting the bar at a lower level and raise it over time
once most of the flakies that we hit are above that level.”
My only concern is who will track that, and how.
Also, the metric should be for non-infra issues, I guess.

“At the same time, we should make sure that we do not introduce new
flakies. One simple approach that has been mentioned several times is to run
the new tests added by a given patch in a loop using one of the CircleCI
tasks.”
+1, I personally find this very valuable and more efficient than bisecting
and going back to work done, in some cases, months ago.


“We should also probably revert newly committed patches if we detect that
they introduced flakies.”
+1, not that I like my patches being reverted, but it seems the fairest way
to stick to our stated goals. But I think the last time we talked about
reverting, we discussed it only for trunk? Or do I remember that wrong?



On Tue, 9 Aug 2022 at 7:58, Benjamin Lerer  wrote:

> At this point it is clear that we will probably never be able to remove all
> flakiness from our tests; some level of it will remain. For me the questions
> are: 1) Where do we draw the line for a release? And 2) How do we maintain
> that line over time?
>
> In my opinion, not all flakies are equal. Some fail every 10 runs, some fail
> 1 in 1,000 runs. I would personally draw the line based on that metric. With
> the CircleCI tasks that Andres has added, we can easily get that information
> for a given test.
> We can start by putting the bar at a lower level and raise it over time once
> most of the flakies that we hit are above that level.
>
> That would allow us to minimize the risk of introducing flaky tests. We
> should also probably revert newly committed patches if we detect that they
> introduced flakies.
>
> What do you think?
>
>
>
>
>
> On Sun, Aug 7, 2022 at 12:24, Mick Semb Wever  wrote:
>
>>
>>
>>> With that said, I guess we can just revise on a regular basis what exactly
>>> the last flakes are, rather than numbers which also change quickly up and
>>> down with the first change in the infra.
>>>
>>
>>
>> +1, I am in favour of taking a pragmatic approach.
>>
>> If flakies are identified and triaged enough that, with correlation from
>> both CI systems, we are confident that no legit bugs are behind them, I'm
>> in favour of going beta.
>>
>> I still remain in favour of somehow incentivising reducing other flakies
>> as well. Flakies that expose poor/limited CI infra, and/or tests that are
>> not as resilient as they could be, are still noise that indirectly reduce
>> our QA (and increase efforts to find and tackle those legit runtime
>> problems). Interested in hearing input from others here that have been
>> spending a lot of time on this front.
>>
>> Could it work if we say: all flakies must be ticketed, and test/infra-related
>> flakies do not block a beta release so long as there are fewer of them than
>> in the previous release? The intent here being pragmatic, while keeping us on
>> a "keep the campground cleaner" trajectory…
>>
>>