Seems good. I'll discuss it with the data owners and we'll choose the best method.
Best regards,
Aleksander
On Feb 4, 2014 19:40, "Robert Coli" wrote:
> On Tue, Feb 4, 2014 at 12:21 AM, olek.stas...@gmail.com <
> olek.stas...@gmail.com> wrote:
>
>> I don't know what is the real cause of my problem. We ar
I don't know what is the real cause of my problem. We are still guessing.
All operations I have done on the cluster are described on this timeline:
1.1.7 -> 1.2.10 -> upgradesstables -> 2.0.2 -> normal operations -> 2.0.3
-> normal operations -> now
"normal operations" means reads/writes/repairs.
Could you plea
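For context, a minimal sketch of the per-node sequence this timeline
implies, assuming stock nodetool commands; the keyspace name is a
placeholder, not taken from the thread:

    nodetool drain                    # flush memtables, stop accepting writes
    # stop Cassandra, install the next version in the chain, start it again
    nodetool upgradesstables          # rewrite sstables in the new on-disk format
    nodetool repair -pr my_keyspace   # the weekly repair from "normal operations"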
On Mon, Feb 3, 2014 at 2:17 PM, olek.stas...@gmail.com <
olek.stas...@gmail.com> wrote:
> No, I've done repair after upgrading sstables. In fact it was about 4
> weeks after, because of a bug:
>
If you only did a repair after you upgraded SSTables, when did you have an
opportunity to hit:
https://i
On Mon, Feb 3, 2014 at 1:02 PM, olek.stas...@gmail.com <
olek.stas...@gmail.com> wrote:
> Today I've noticed that the oldest files with broken values appear during
> repair (we do repair once a week on each node). Maybe it's the repair
> operation that caused the data loss?
Yes, unless you added or re
Yes, I haven't run sstableloader. The data loss appeared somewhere along the line:
1.1.7 -> 1.2.10 -> upgradesstables -> 2.0.2 -> normal operations -> 2.0.3
-> normal operations -> now
Today I've noticed that the oldest files with broken values appear during
repair (we do repair once a week on each node). Maybe
Ok, but will the upgrade "resurrect" my data? Or should I perform an
additional action to bring my system to a correct state?
Best regards,
Aleksander
On Feb 3, 2014 17:08, "Yuki Morishita" wrote:
> if you are using < 2.0.4, then you are hitting
> https://issues.apache.org/jira/browse/CASSANDRA-6527
if you are using < 2.0.4, then you are hitting
https://issues.apache.org/jira/browse/CASSANDRA-6527
On Mon, Feb 3, 2014 at 2:51 AM, olek.stas...@gmail.com
wrote:
> Hi All,
> We've faced a very similar effect after an upgrade from 1.1.7 to 2.0 (via
> 1.2.10). Probably after upgradesstables (but it's o
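A hedged way to check whether a node is in the affected range (< 2.0.4) and
to look for the symptom; the data path and keyspace/table names are
assumptions, not taken from the thread:

    nodetool version    # prints the node's ReleaseVersion
    # scan one column family's sstables for row-level deletion stamps
    for f in /var/lib/cassandra/data/MyKeyspace/MyCF/*-Data.db; do
      sstable2json "$f" | grep -o '"markedForDeleteAt":[0-9]*' | sort -u
    done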
Hi All,
We've faced a very similar effect after an upgrade from 1.1.7 to 2.0 (via
1.2.10). Probably after upgradesstables (but it's only a guess,
because we noticed the problem a few weeks later), some rows became
tombstoned. They just disappeared from the results of queries. After
investigation I've noticed that
On Wed, Dec 11, 2013 at 6:27 AM, Mathijs Vogelzang
wrote:
> When I use sstable2json on the sstable on the destination cluster, it has
> "metadata": {"deletionInfo":
> {"markedForDeleteAt":1796952039620607,"localDeletionTime":0}}, whereas
> it doesn't have that in the source sstable.
> (Yes, this i
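markedForDeleteAt lives in the same domain as write timestamps,
conventionally microseconds since the epoch, so the quoted value can be
decoded with a quick sanity check (GNU date assumed):

    date -u -d @$((1796952039620607 / 1000000))
    # -> a date around Dec 10 2026, far in the future: any column written
    #    with a normal current-time timestamp sorts below this tombstone,
    #    so the row reads as deleted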
Hi all,
We're running into a weird problem trying to migrate our data from a
1.2.10 cluster to a 2.0.3 one.
I've taken a snapshot on the old cluster, and for each host there, I'm running
sstableloader -d KEYSPACE/COLUMNFAMILY
(the sstableloader process from the 2.0.3 distribution, the one from
1
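A hedged reconstruction of the migration steps described here; the host
names and keyspace/table are placeholders, not taken from the thread:

    nodetool snapshot -t migrate KEYSPACE    # on each node of the 1.2.10 cluster
    # then, pointing sstableloader at the snapshot's keyspace/columnfamily dir:
    sstableloader -d newhost1,newhost2 /path/to/KEYSPACE/COLUMNFAMILY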