Vote passes with seven +1s (five binding) and no vetoes.
Ref: https://lists.apache.org/thread/5pq0js4cvnxozrs2cf63p3jf7qk0h1rc.
James.
On Fri, 18 Oct 2024 at 12:51, Dinesh Joshi wrote:
> +1
>
> On Fri, Oct 18, 2024 at 11:09 AM Jon Haddad
> wrote:
>
>> +1
>>
>> On Fri, Oct 18, 2024 at 10:51 AM
Like others have said, I was expecting the scheduling portion of repair to be
negligible. I was mostly curious whether you had something handy that you could
quickly share.
On 2024/10/21 18:59:41 Jaydeep Chovatia wrote:
> Jaydeep, do you have any metrics on your clusters comparing them before
> and after introducing repair scheduling into the Cassandra process?
Yes, I had made some comparisons when I started rolling this feature out to
our production five years ago :) Here are the details:
*The Scheduling*
The sche
> While worth pursuing, I think we would need a different CEP just to figure
> out how to do that. Not only is there a lot of infrastructure difficulty in
> running multi-process, the inter-app communication needs to be figured out
> better than JMX.
I strongly agree and this is a good time to
Jaydeep, do you have any metrics on your clusters comparing them before
and after introducing repair scheduling into the Cassandra process?
On 2024/10/21 16:57:57 "J. D. Jordan" wrote:
> Sounds good. Just wanted to bring it up. I agree that the scheduling bit is
> pretty lightweight and the ideal would be to bring the whole of repair
> external, which is a much bigger can of worms to open.
Sounds good. Just wanted to bring it up. I agree that the scheduling bit is
pretty lightweight and the ideal would be to bring the whole of repair
external, which is a much bigger can of worms to open.

-Jeremiah

On Oct 21, 2024, at 11:21 AM, Chris Lohfink wrote:
> I actually think we should be looking at how we can move things out of
> the database process.
While worth pursuing, I think we would need a different CEP just to figure
out how to do that. Not only is there a lot of infrastructure difficulty in
running multi-process, the inter-app communication needs to be figured out
better than JMX.
> Is there anyway it makes sense for this to be an external process rather than
> a new thread pool inside the C* process?
One thing to keep in mind is that larger clusters require you to "smartly" split
the ranges or else you nuke your cluster… knowing how to split requires internal
knowledge from t
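The subrange splitting being described can be sketched roughly as follows. This is a minimal illustration only, assuming a non-wrapping range over the signed 64-bit Murmur3 token space; the function name and interface are invented for this example and are not Cassandra's actual API (a real scheduler would also weight splits by estimated partition counts, which is the "internal knowledge" referred to above):

```python
def split_range(start: int, end: int, parts: int) -> list[tuple[int, int]]:
    """Split the token range (start, end] into `parts` roughly equal subranges.

    Illustrative sketch only: evenly divides the token interval; not
    Cassandra's real splitting logic, which accounts for data distribution.
    """
    if parts < 1 or end <= start:
        raise ValueError("need parts >= 1 and end > start")
    step = (end - start) / parts
    bounds = [start + round(i * step) for i in range(parts)] + [end]
    return [(bounds[i], bounds[i + 1]) for i in range(parts)]

# Repairing each subrange separately keeps per-repair Merkle trees small and
# bounds the amount of work lost if one subrange repair fails and is retried.
subranges = split_range(-2**63, 2**63 - 1, 256)
```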
> Is there anyway it makes sense for this to be an external process rather than
> a new thread pool inside the C* process?
I'm personally more irked by the Merkle tree building / streaming / merging /
etc. resource utilization being in the primary C* process. My intuition is that
the *scheduling*
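To see why the validation phase mentioned above is the resource-heavy part, here is a toy sketch of the compare step: each replica hashes its data per leaf range, and mismatched leaves identify the ranges that must be streamed. All names are illustrative and this collapses the real tree to its leaves; actual repair builds full Merkle trees over far more data, which is exactly the CPU/memory cost being discussed:

```python
import hashlib

def leaf_hashes(rows: dict[int, bytes], boundaries: list[int]) -> list[bytes]:
    """Hash the rows falling in each leaf range [boundaries[i], boundaries[i+1]).

    Toy model: `rows` maps token -> serialized row; real repair hashes far
    larger ranges and keeps a full tree, not just the leaf layer.
    """
    out = []
    for lo, hi in zip(boundaries, boundaries[1:]):
        h = hashlib.sha256()
        for token in sorted(t for t in rows if lo <= t < hi):
            h.update(rows[token])
        out.append(h.digest())
    return out

def mismatched_ranges(a: list[bytes], b: list[bytes],
                      boundaries: list[int]) -> list[tuple[int, int]]:
    """Return the leaf ranges whose hashes differ between two replicas."""
    return [(boundaries[i], boundaries[i + 1])
            for i, (x, y) in enumerate(zip(a, b)) if x != y]
```

Only the ranges returned by `mismatched_ranges` need streaming, which is why the hashing/compare step trades CPU up front for less network traffic.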
I love the idea of a repair service being there by default for an install
of C*. My main concern here is that it is putting more services into the
main database process. I actually think we should be looking at how we can
move things out of the database process. The C* process being a giant
mon
Hi Guo,
+1 for the CONSTRAINTS keyword to be added into the default behavior.
Bernardo
> On Oct 21, 2024, at 12:01 AM, guo Maxwell wrote:
>
> I think the CONSTRAINTS keyword may be in the same situation as
> datamask.
> Maybe it is better to include constraints into the default behavior of
> table copy together with column name, column data type and data mask.
I think the CONSTRAINTS keyword may be in the same situation as
datamask.
Maybe it is better to include constraints into the default behavior of
table copy together with column name, column data type and data mask.
On Mon, Oct 21, 2024 at 14:56, guo Maxwell wrote:
> To yifan :
> I don't mind adding