>
> Erick, one last question: Is there a quick and easy way to extract the
> date from a time UUID?
>
Yeah, just use any of the online converters on the web. Cheers!
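For anyone who'd rather not depend on a web converter: the date is easy to
pull out in a few lines of Python. A minimal sketch using only the standard
library (the UUID below is the 2017 table ID quoted later in this thread):

```python
import uuid
from datetime import datetime, timedelta, timezone

# Version-1 UUIDs embed a timestamp: a count of 100-nanosecond
# intervals since the Gregorian calendar epoch, 1582-10-15 00:00 UTC.
UUID_EPOCH = datetime(1582, 10, 15, tzinfo=timezone.utc)

def time_uuid_to_datetime(u: str) -> datetime:
    parsed = uuid.UUID(u)
    if parsed.version != 1:
        raise ValueError(f"{u} is not a version-1 (time) UUID")
    # .time is the raw 60-bit count; dividing by 10 gives microseconds.
    return UUID_EPOCH + timedelta(microseconds=parsed.time // 10)

# Decodes to the table's creation date in January 2017, matching the
# converter output quoted in this thread:
print(time_uuid_to_datetime("20739eb0-d92e-11e6-b42f-e7eb6f21c481"))
```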
>
> Any possibility that you "merged" two clusters together?
>
Ooohh...I think that's the missing piece of this puzzle! A couple weeks
earlier, prior to the problem described in this thread, we did
inadvertently merge two clusters together. We merged the original 'dc1'
cluster with an entirely different cluster.

I agree with Jeff that this isn't related to ALTER TABLE. FWIW, the
original table was created in 2017 but a new version got created on August
5:
- 20739eb0-d92e-11e6-b42f-e7eb6f21c481 - Friday, January 13, 2017 at
1:18:01 GMT
- 8ad72660-f629-11eb-a217-e1a09d8bc60c - Thursday, August 5, 2021
> [...] happens during a CREATE TABLE. With no other evidence or
> ability to debug, I would guess that the CFIDs diverged previously, but
> due to the race(s) I described, the on-disk schema and the in-memory
> schema differed, and the ALTER KEYSPACE forces the schema from one host
> to be serialized and forced to the others, where the actual IDs get
> reconciled.
>
> You may be able to confirm/demonstrate that by looking at the timestamps
> on the data directories across all of the hosts in the cluster?
>
> On Fri, Oct 15, 2021 at 3:02 PM Tom Offermann wrote:
>
>> When adding a datacenter to a keyspace (following the Last Pickle
>> [Data Center Switch][lp] playbook), I ran into a "Configuration
>> exception merging remote schema" error. The nodes in one datacenter
>> didn't converge to the new schema version, and after restarting them,
>> I saw the symptoms described in this Datastax article on [Fixing a
>> table schema collision][ds], where there were two data directories for
>> each table in the keyspace on the nodes that didn't converge. I
>> followed the recovery steps in the Datastax article to move the data
>> from the older directories to the newer ones.
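On Jeff's suggestion of comparing data directory timestamps: a minimal
sketch (Python, standard library only) that can be run on each host to
list every table directory in a keyspace with its CFID suffix and
modification time. Two directories for the same table, or CFIDs that
differ between hosts, would show the divergence. The default data path is
the usual package location and the keyspace name is a placeholder; adjust
both for your cluster:

```python
import os
import sys
from datetime import datetime, timezone

# Adjust for your data_file_directories; keyspace name is hypothetical.
DATA_DIR = sys.argv[1] if len(sys.argv) > 1 else "/var/lib/cassandra/data"
KEYSPACE = sys.argv[2] if len(sys.argv) > 2 else "my_keyspace"

ks_path = os.path.join(DATA_DIR, KEYSPACE)
for entry in sorted(os.listdir(ks_path)):
    path = os.path.join(ks_path, entry)
    if not os.path.isdir(path):
        continue
    # Table directories are named "<table>-<32-hex-digit CFID>".
    table, _, cfid = entry.rpartition("-")
    mtime = datetime.fromtimestamp(os.path.getmtime(path), tz=timezone.utc)
    print(f"{table:30s} {cfid} {mtime:%Y-%m-%d %H:%M:%S}Z")
```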
*From:* Yatong Zhang [mailto:bluefl...@gmail.com]
*Sent:* Wednesday, April 30, 2014 2:03 AM
*To:* user@cassandra.apache.org
*Subject:* Can Cassandra work efficiently with multiple data directories
on multiple disks?

Hi there,

I have the following configuration:

data_file_directories:
- /data1/cass
- /data2/cass
- /data3/cass
- /data4/cass
- /data5/cass
- /data6/cass

and each directory resides on a separate stand-alone hard disk. My
questions are:

1. Will Cassandra split [...]
I'm actually using it in a couple of nodes, but it is slower than directly
accessing the data in an ssd.

On Thu, 09-06-2011 at 11:10 -0400, Chris Burroughs wrote:
> On 06/08/2011 05:54 AM, Héctor Izquierdo Seliva wrote:
> > Is there a way to control what sstables go to what data directory? I
> > have a fast but space limited ssd, and a way slower raid, and I'd like
> > to put latency sensitive data into the ssd and leave the other data in
> > the raid. Is this possible?
On Wed, 08-06-2011 at 08:42 -0500, Jonathan Ellis wrote:
> No. https://issues.apache.org/jira/browse/CASSANDRA-2749 is open to
> track this but nobody is working on it to my knowledge.
>
> Cassandra is fine with symlinks at the data directory level but I
> don't think that helps you, since you really want to move the sstables
> themselves. (Cassandra is NOT fine with [...])
Hi,

Is there a way to control what sstables go to what data directory? I
have a fast but space limited ssd, and a way slower raid, and I'd like
to put latency sensitive data into the ssd and leave the other data in
the raid. Is this possible? If not, how well does Cassandra play with
symlinks?