+1 to new flags also from me
On 26/7/22 18:39, Andrés de la Peña wrote:
I think that's right, using a closed range makes sense to consume the data
provided by "sstablemetadata", which also provides closed ranges.
Especially because with half-open ranges we couldn't compact a sstable with
a single big partition, of which we might only know the min and max tokens:
you run "sstablemetadata" on
the sstable and get the min and max tokens, and then you pass those in to
nodetool compact. In that case you do want the closed range.
This is different from running repair, where you get the tokens from the
nodes/nodetool ring, and node-level token range ownership is half-open.
+1, I think that makes the most sense.
Kind Regards,
Brandon
On Tue, Jul 26, 2022 at 8:19 AM J. D. Jordan wrote:
> I like the third option, especially if it makes it consistent with repair,
> which has supported ranges longer and I would guess most people would think
> the compact ranges work the same as the repair ranges.
I like the third option, especially if it makes it consistent with repair,
which has supported ranges longer and I would guess most people would think the
compact ranges work the same as the repair ranges.
-Jeremiah Jordan
> On Jul 26, 2022, at 6:49 AM, Andrés de la Peña wrote:
Hi all,
CASSANDRA-17575 has detected that token ranges in nodetool compact are
interpreted as closed on both sides. For example, the command "nodetool
compact -st 10 -et 50" will compact the tokens in [10, 50]. This way of
interpreting token ranges is unusual, since token ranges are usually half-open.
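To make the difference concrete, here is a minimal Python sketch (illustration only, not Cassandra code; the function names are made up). It shows why a sstable holding a single big partition can only be selected by the closed interpretation:

```python
def in_closed(token, start, end):
    """Closed on both sides: [start, end], as nodetool compact interprets -st/-et."""
    return start <= token <= end

def in_half_open(token, start, end):
    """Half-open: (start, end], the usual convention for ring ownership."""
    return start < token <= end

# A sstable whose min and max token are both 42 (one big partition):
# with -st 42 -et 42, only the closed interpretation selects it.
print(in_closed(42, 42, 42))     # True
print(in_half_open(42, 42, 42))  # False
```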
But we COULD have CL2 write (for RF4).
The extension to this idea is multiple backup/secondary replicas. So you
have RF5 or RF6 or higher, but still are performing CL2 against the
preferred first three for both read and write.
You could also ascertain the general write health of affected ranges before
taking a node down for maintenance from the primary, and then know the
switchover is in good shape. Yes, there are CAP limits and race conditions.
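A sketch of the ack-counting this implies, with hypothetical names (the replica lists are illustrative, not Cassandra's actual placement logic): only acks from the preferred first three count toward the consistency level, even though all RF replicas receive the write.

```python
def write_succeeds(acks, preferred, cl=2):
    """Judge a write at CL `cl` counting only acks from the preferred replicas."""
    return len(set(acks) & set(preferred)) >= cl

replicas = ["n1", "n2", "n3", "n4", "n5"]  # RF5
preferred = replicas[:3]                    # the preferred first three

print(write_succeeds(["n1", "n3", "n5"], preferred))  # True: n1 and n3 count
print(write_succeeds(["n4", "n5"], preferred))        # False: only spares acked
```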
... multiple hot spares, so RF5 could still be treated as RF3 + hot spares.
The goal here is more data resiliency but not having to rely on as many
nodes for resiliency.
Since the data is ring-distributed, the fact there are primary owners of
ranges should still be evenly distributed and no hot nodes should result,
assuming hinted handoff and repair will get it back up to snuff.
There could also be some mechanism examining the hinted handoff status of
the four to determine when to reactivate the primary that was down.
For mutations, one could prefer a "QUORUM plus" that was a quorum of the
primaries plus the hot spare.
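The "QUORUM plus" idea can be sketched as follows; `quorum_plus_satisfied` and the primary count are hypothetical illustrations of the proposal, not an existing Cassandra consistency level:

```python
def quorum(n):
    """Standard quorum: a strict majority of n replicas."""
    return n // 2 + 1

def quorum_plus_satisfied(primary_acks, spare_acked, num_primaries=3):
    """'QUORUM plus': a quorum of the primaries AND the hot spare must ack."""
    return primary_acks >= quorum(num_primaries) and spare_acked

print(quorum_plus_satisfied(2, True))   # True: 2 of 3 primaries plus the spare
print(quorum_plus_satisfied(3, False))  # False: all primaries but no spare
```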
...ndra/blob/cassandra-2.1/src/java/org/apache/cassandra/service/ActiveRepairService.java#L189
calls `ss.getLocalRanges(keyspaceName)` every time, and it takes more
than 99% of the time. This call takes 600ms when there is no load on the
cluster, and more if there is. So for 10k ranges, you can imagine that it
takes at least 1.5 hours just to compute ranges. Don't you think that
caching this call would make sense?
--
Cyril SCETBON
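The kind of caching being suggested could look roughly like this sketch (Cassandra itself is Java; `RangeCache` and `slow_loader` here are hypothetical Python stand-ins for the `ss.getLocalRanges` call, and the TTL is an arbitrary assumption):

```python
import time

class RangeCache:
    """Memoize an expensive per-keyspace lookup with a TTL, so repeated
    calls during one repair don't recompute the same local ranges."""
    def __init__(self, loader, ttl_seconds=60):
        self.loader = loader   # stands in for ss.getLocalRanges(keyspaceName)
        self.ttl = ttl_seconds
        self._cache = {}       # keyspace -> (timestamp, ranges)

    def get(self, keyspace):
        now = time.monotonic()
        hit = self._cache.get(keyspace)
        if hit is not None and now - hit[0] < self.ttl:
            return hit[1]
        ranges = self.loader(keyspace)
        self._cache[keyspace] = (now, ranges)
        return ranges

calls = []
def slow_loader(ks):
    calls.append(ks)           # stands in for the 600 ms computation
    return [(0, 100), (100, 200)]

cache = RangeCache(slow_loader)
cache.get("ks1")
cache.get("ks1")
print(len(calls))  # 1: the second lookup is served from the cache
```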
From: Brian O'Neill
Date: Monday, May 11, 2015 at 12:32 PM
To: "dev@cassandra.apache.org"
Subject: Wrap around CQL queries for token ranges?
I was doing some testing around data locality today (and adding it to our
distributed processing layer).
I retrieved all of the TokenRanges back using:
tokenRanges = metadata.getTokenRanges(keyspace, localhost)
And when I spun through the token ranges returned, I ended up missing
records.
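A likely source of missing records is the wrap-around range: a (start, end] range where start >= end wraps past the ring's minimum token and cannot be expressed as a single CQL token predicate. A sketch of the splitting involved (`range_predicates` is a made-up helper, and `k` a placeholder partition key column):

```python
def range_predicates(start, end):
    """Turn a (start, end] token range into CQL-safe WHERE clauses.
    A wrapping range (start >= end) must be split in two, because
    'token(k) > start AND token(k) <= end' matches nothing then."""
    if start < end:
        return [f"token(k) > {start} AND token(k) <= {end}"]
    return [f"token(k) > {start}", f"token(k) <= {end}"]

print(range_predicates(10, 20))  # one predicate, non-wrapping
print(range_predicates(20, 10))  # two predicates covering the wrap
```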
Without looking at the code I would expect EMPTY to work for open
bound on both left and right.
If that doesn't work I would set a breakpoint and have a look at what
"SELECT *" gets turned into.
On Tue, Feb 18, 2014 at 11:45 AM, Berenguer Blasi wrote:
Hi all,
when iterating CFs with getSequentialIterator you have to give it a
range. But what do you do when you need to:
A- Scan the full range?
B- Scan from key X to the end?
Scanning between keys X,Y is easy as you just specify them in the range.
Scanning up to Y can be done with ByteBu
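The open-bound suggestion from the reply above can be sketched like this (`EMPTY` stands in for an empty ByteBuffer; `in_range` is a hypothetical helper, not Cassandra's iterator code):

```python
EMPTY = b""  # stands in for an empty ByteBuffer used as an open bound

def in_range(key, start, end):
    """Range check where an EMPTY bound means 'open' on that side."""
    if start != EMPTY and key < start:
        return False
    if end != EMPTY and key > end:
        return False
    return True

print(in_range(b"m", EMPTY, EMPTY))  # True: full scan (A)
print(in_range(b"m", b"x", EMPTY))   # False: scan from x to the end (B)
```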
Just List for the most part. If there are exactly two, maybe
Pair.
On Mon, Apr 2, 2012 at 6:30 PM, Mark Dewey wrote:
> Is there an object that is standard for specifying a compound range? (eg
> [W, X] + [Y, Z])
>
> Mark
--
Jonathan Ellis
Project Chair, Apache Cassandra
co-founder of DataStax
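One way to picture the suggestion, in an illustrative Python sketch (the representation is an assumption; the original discussion is about Java's List and Pair):

```python
from typing import List, Tuple

Range = Tuple[int, int]  # closed [start, end], hypothetical representation

def in_compound(token: int, ranges: List[Range]) -> bool:
    """Membership in a compound range such as [W, X] + [Y, Z],
    held as a plain list of ranges."""
    return any(lo <= token <= hi for lo, hi in ranges)

compound = [(10, 20), (40, 50)]
print(in_compound(15, compound))  # True
print(in_compound(30, compound))  # False: falls in the gap
```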