Hi Paulo,
Thanks for looking into the issue on priority. I have serious concerns about
reducing the max TTL to 15 years. The patch would immediately break every
existing production application that uses a 15+ year TTL, and those
applications would be stuck until their logic is modified and the software is
upgraded. That may take days, and such heavy downtime is generally not
acceptable for any business. Yes, they would not suffer silent data loss, but
they would not be able to do any business either. I think the permanent fix
must be prioritized and put on an extremely fast track. This is a certain
Blocker and the impact could be enormous, with or without the 15-year
short-term patch.
And believe me, there are plenty of business use cases that rely on very long
TTLs, such as 20 years, for compliance and other reasons.
Thanks
Anuj
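[For context, the failure mode under discussion can be sketched with simulated
32-bit arithmetic. The helper below is illustrative only, not Cassandra's
actual code path: localDeletionTime is kept as a signed 32-bit count of
seconds since the epoch, so now + TTL wraps negative once it passes
2038-01-19 03:14:07 UTC.]

```python
def local_deletion_time_32bit(now_s: int, ttl_s: int) -> int:
    """Compute now + ttl with simulated 32-bit signed wraparound."""
    v = (now_s + ttl_s) & 0xFFFFFFFF
    return v - 2**32 if v >= 2**31 else v

NOW = 1516924800            # 2018-01-26 00:00:00 UTC, roughly when this thread ran
TTL_20Y = 20 * 365 * 86400  # 630720000 s, the current max TTL
TTL_15Y = 15 * 365 * 86400  # 473040000 s, the proposed temporary cap

print(local_deletion_time_32bit(NOW, TTL_20Y))  # negative: wrapped past 2038
print(local_deletion_time_32bit(NOW, TTL_15Y))  # positive: expires safely in 2033
```

A negative deletion time is what makes the row appear already expired, hence
the silent data loss described below.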
On Friday 26 January 2018, 4:57:13 AM IST, Michael Kjellman
<[email protected]> wrote:
why are people inserting data with a 15+ year TTL? sorta curious about the
actual use case for that.
> On Jan 25, 2018, at 12:36 PM, horschi <[email protected]> wrote:
>
> The assertion was working fine until yesterday 03:14 UTC.
>
> The long-term solution would be to work with a long instead of an int. The
> serialized form already seems to be a variable-length int, so that part
> should be fine.
>
> If you change the assertion to 15 years, then applications might fail, as
> they might be setting a 15+ year ttl.
>
> regards,
> Christian
>
> On Thu, Jan 25, 2018 at 9:19 PM, Paulo Motta <[email protected]>
> wrote:
>
>> Thanks for raising this. Agreed this is bad, when I filed
>> CASSANDRA-14092 I thought a write would fail when localDeletionTime
>> overflows (as it is with 2.1), but that doesn't seem to be the case on
>> 3.0+
>>
>> I propose adding the assertion back so writes will fail, and reducing
>> the max TTL to something like 15 years for the time being while we
>> figure out a long-term solution.
>>
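[To make the trade-off in Paulo's proposal concrete, here is a minimal sketch,
assuming 365-day years as the existing 630720000-second cap does, of when
writes using a full 15-year TTL would again cross the 2038 boundary:]

```python
from datetime import datetime, timezone

INT32_MAX = 2**31 - 1       # overflows at 2038-01-19 03:14:07 UTC
TTL_15Y = 15 * 365 * 86400  # 473040000 seconds

# Last moment a write with a full 15-year TTL still fits in 32 bits.
last_safe_write = INT32_MAX - TTL_15Y
print(datetime.fromtimestamp(last_safe_write, tz=timezone.utc).isoformat())
```

So a 15-year cap only defers the problem to early 2023, which is why the
permanent fix still needs to be fast-tracked either way.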
>> 2018-01-25 18:05 GMT-02:00 Jeremiah D Jordan <[email protected]>:
>>> If you aren’t getting an error, then I agree, that is very bad. Looking
>> at the 3.0 code it looks like the assertion checking for overflow was
>> dropped somewhere along the way, I had only been looking into 2.1 where you
>> get an assertion error that fails the query.
>>>
>>> -Jeremiah
>>>
>>>> On Jan 25, 2018, at 2:21 PM, Anuj Wadehra <[email protected]>
>> wrote:
>>>>
>>>>
>>>> Hi Jeremiah,
>>>> Validation is on the TTL value, not on (system_time + TTL). You can test
>>>> it with the example below. The insert succeeds, the overflow happens
>>>> silently, and the data is lost:
>>>>
>>>> create table test(name text primary key,age int);
>>>> insert into test(name,age) values('test_20yrs',30) USING TTL 630720000;
>>>> select * from test where name='test_20yrs';
>>>>
>>>> name | age
>>>> ------+-----
>>>>
>>>> (0 rows)
>>>>
>>>> insert into test(name,age) values('test_20yr_plus_1',30) USING TTL 630720001;
>>>> InvalidRequest: Error from server: code=2200 [Invalid query]
>>>> message="ttl is too large. requested (630720001) maximum (630720000)"
>>>>
>>>> Thanks
>>>> Anuj
>>>> On Friday 26 January 2018, 12:11:03 AM IST, J. D. Jordan
>>>> <[email protected]> wrote:
>>>>
>>>> Where is the data loss? Does the INSERT operation return successfully
>>>> to the client in this case? From reading the linked issues it sounds
>>>> like you get an error client side.
>>>>
>>>> -Jeremiah
>>>>
>>>>> On Jan 25, 2018, at 1:24 PM, Anuj Wadehra <[email protected]>
>> wrote:
>>>>>
>>>>> Hi,
>>>>>
>>>>> For all those who use a max TTL of 20 years for inserting/updating
>>>>> data in production, https://issues.apache.org/jira/browse/CASSANDRA-14092
>>>>> can silently cause irrecoverable data loss. This seems like a certain
>>>>> top-most Blocker to me, and I think the priority of the JIRA must be
>>>>> raised from Major to Blocker. Unfortunately, the JIRA is still
>>>>> "Unassigned" and no one seems to be actively working on it. Just like
>>>>> any other critical vulnerability, this one demands immediate attention
>>>>> from some very experienced folks to bring out an urgent fast-track
>>>>> patch for all currently supported Cassandra versions: 2.1, 2.2, and
>>>>> 3.x. As per my understanding of the JIRA comments, the changes may not
>>>>> be that trivial for older releases, so community support on the patch
>>>>> is very much appreciated.
>>>>>
>>>>> Thanks
>>>>> Anuj
>>>>
>>>> ---------------------------------------------------------------------
>>>> To unsubscribe, e-mail: [email protected]
>>>> For additional commands, e-mail: [email protected]
>>>
>>>
>>>
>>
>>
>>