On Mon, Dec 3, 2018 at 4:24 PM Oliver Herrmann
wrote:
>
> You are right. The number of nodes in our cluster is equal to the
> replication factor. For that reason I think it should be sufficient to call
> sstableloader only from one node.
>
The next question is then: do you care about consistency?
On Sun, Dec 2, 2018 at 06:24, Oleksandr Shulgin <
oleksandr.shul...@zalando.de>:
> On Fri, 30 Nov 2018, 17:54 Oliver Herrmann
>> When using nodetool refresh I must have write access to the data folder
>> and I have to do it on every node. In our production environment the user
>> that
It's a bug in sstableloader introduced many years ago - before that, it
worked as described in the documentation...
Oliver Herrmann at "Fri, 30 Nov 2018 17:05:43 +0100" wrote:
OH> Hi,
OH> I'm having some problems restoring a snapshot using sstableloader. I
On Fri, 30 Nov 2018, 17:54 Oliver Herrmann wrote:
> When using nodetool refresh I must have write access to the data folder
> and I have to do it on every node. In our production environment the user
> that would do the restore does not have write access to the data folder.
>
OK, not entirely sure that's
Thanks Dmitry, that solved my problem.
Oliver
Original message - Subject: Re: Problem with restoring a snapshot using sstableloader - From: Dmitry Saprykin - To: user@cassandra.apache.org
You need to move your files into a directory named 'cass_testapp/table3/':
sstableloader uses the last two path components as the keyspace and table names.
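The path convention is easy to sanity-check: sstableloader takes the parent directory name as the keyspace and the directory name itself as the table. A quick sketch (the keyspace/table names are the ones from this thread; the /tmp path is only illustrative):

```shell
# sstableloader derives keyspace and table from the LAST TWO path
# components of the directory it is pointed at.
SSTABLE_DIR=/tmp/restore/cass_testapp/table3   # .../<keyspace>/<table>

# What the loader will use:
TABLE=$(basename "$SSTABLE_DIR")
KEYSPACE=$(basename "$(dirname "$SSTABLE_DIR")")
echo "keyspace=$KEYSPACE table=$TABLE"         # keyspace=cass_testapp table=table3

# So snapshot files must be moved (or symlinked) into such a layout
# before running something like:
#   sstableloader -d <contact-point> "$SSTABLE_DIR"
```

This is why pointing the loader at a `snapshots/table3` directory fails with "table snapshots.table3 doesn't exist": the parent directory name is taken literally as the keyspace.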
On Fri, Nov 30, 2018 at 11:54 AM Oliver Herrmann
wrote:
> When using nodetool refresh I must have write access to the data folder
> and I have to do it
When using nodetool refresh I must have write access to the data folder and
I have to do it on every node. In our production environment the user that
would do the restore does not have write access to the data folder.
On Fri, Nov 30, 2018 at 17:39, Oleksandr Shulgin <
oleksandr.shul..
On Fri, Nov 30, 2018 at 5:13 PM Oliver Herrmann
wrote:
>
> I'm always getting the message "Skipping file mc-11-big-Data.db: table
> snapshots.table3 doesn't exist". I also tried to rename the snapshots
> folder into the keyspace name (cass_testapp) but then I get the message
> "Skipping file mc-1
Hi,
I'm having some problems restoring a snapshot using sstableloader. I'm
using Cassandra 3.11.1 and followed the instructions for creating and
restoring from this page:
https://docs.datastax.com/en/dse/6.0/dse-admin/datastax_enterprise/tools/toolsSStables/toolsBulkloader.html
Consulting
http://www.thelastpickle.com
2016-05-17 11:14 GMT+01:00 Ravi Teja A V :
> Hi everyone
>
> I am currently working with Cassandra 3.5. I would like to know if it is
> possible to restore backups without using sstableloader. I have been
> referring to the following pages in the dat
Hi everyone
I am currently working with Cassandra 3.5. I would like to know if it is
possible to restore backups without using sstableloader. I have been
referring to the following pages in the datastax documentation:
https://docs.datastax.com/en/cassandra/3.x/cassandra/operations
Is there a page explaining what happens at server side when using
SSTableLoader?
I'm seeking the answers of the following questions:
1. What about the existing data in the table? From my test, the data
in the sstable files is merged with the existing data. Am I right?
From:
Rahul Neelakantan
To:
"user@cassandra.apache.org"
Date:
07/29/2014 05:02 PM
Subject:
Re: unable to load data using sstableloader
Is SStable loader being run on t
>
>
>
> From: Duncan Sands
> To: user@cassandra.apache.org
> Date: 07/29/2014 12:58 PM
> Subject: Re: unable to load data using sstableloader
>
>
>
>
> Hi Akshay,
>
> From: Duncan Sands
> To: user@cassandra.apache.org
> Date: 07/2
Subject: Re: unable to load data using sstableloader
Hi Akshay,
On 29/07/14 09:14, Akshay Ballarpure wrote:
> Yes,
> I have created the keyspaces, but still I am getting the error.
>
> cqlsh:sample_new> DESCRIBE KEYSPACES ;
>
> system sample mykeyspace test *sample_new* system_traces
>
Hi Akshay,
On 29/07/14 09:14, Akshay Ballarpure wrote:
Yes,
I have created the keyspaces, but still I am getting the error.
cqlsh:sample_new> DESCRIBE KEYSPACES ;
system sample mykeyspace test *sample_new* system_traces
[root@CSL-simulation conf]# ../bin/sstableloader
/root/Akshay/Cassandra/apache
From:
Colin Kuo
To:
user@cassandra.apache.org
Date:
07/28/2014 10:56 PM
Subject:
Re: unable to load data using sstableloader
Have you created the schema for these data files? I meant the schema
should be created
Have you created the schema for these data files? I mean the schema should
be created before you load these data files into C*.
Here is an introductory article on sstableloader that you can refer to.
http://www.datastax.com/documentation/cassandra/1.2/cassandra/tools/toolsBulkloader_t.html
On
Hello,
I am unable to load sstables into Cassandra using sstableloader, please
suggest. Thanks.
[root@CSL-simulation conf]# pwd
/root/Akshay/Cassandra/apache-cassandra-2.0.8/conf
[root@CSL-simulation conf]# ls -ltr keyspace/col/
total 32
-rw-r--r-- 1 root root 16 Jul 28 16:55 Test-Data-jb-1-Fil
Hi, Ross.
We had the same problem under the same version of Cassandra. We opted to copy
ALL the sstables from the old cluster to each new node, then run nodetool
refresh. The missing rows appeared after this procedure.
Best regards,
Francisco.
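For reference, that copy-then-refresh procedure looks roughly like the sketch below, run on every node. All paths, keyspace, and table names here are placeholder demo values (using throwaway temp directories so the sketch can be dry-run safely); on a real node DATA_DIR would be the actual Cassandra data directory, and you need write access to it, which was exactly the constraint raised earlier in this thread.

```shell
# Hedged sketch of restore via nodetool refresh; run on EVERY node.
KEYSPACE=cass_testapp
TABLE=table3
DATA_DIR=$(mktemp -d)/data              # stand-in for /var/lib/cassandra/data
SNAPSHOT_SRC=$(mktemp -d)               # stand-in for sstables copied from the old cluster
touch "$SNAPSHOT_SRC/mc-1-big-Data.db"  # fake sstable for the demo

# 1. Copy ALL the sstables from the old cluster into the live table directory.
mkdir -p "$DATA_DIR/$KEYSPACE/$TABLE"
cp "$SNAPSHOT_SRC"/*.db "$DATA_DIR/$KEYSPACE/$TABLE/"
ls "$DATA_DIR/$KEYSPACE/$TABLE"

# 2. Tell Cassandra to pick up the new files without a restart
#    (needs a running node; guarded so the dry-run does not fail).
command -v nodetool >/dev/null 2>&1 && nodetool refresh "$KEYSPACE" "$TABLE" || true
```

Unlike sstableloader, this does not stream or re-replicate data, which is why every node needs a full copy of the sstables when row ownership is not accounted for.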
On Nov 27, 2013, at 7:49 PM, Ross Black wro
Hi Tyler,
Thanks (somehow I missed that ticket when I searched for sstableloader
bugs).
I will retry with 1.2.12 when we get a chance to upgrade. In the meantime
I have switched to loading data via the normal client API (slower but
reliable).
Ross
On 28 November 2013 03:45, Tyler Hobbs wrot
On Wed, Nov 27, 2013 at 2:47 AM, Turi, Ferenc (GE Power & Water, Non-GE) <
ferenc.t...@ge.com> wrote:
> Did you try to use CQL2 tables?
>
>
>
> /create the CF / table using “cqlsh -2”.
>
>
>
> We experienced the same but using CQL2 helped us.
>
CQL2 is a historical footnote and is likely to be r
On Wed, Nov 27, 2013 at 3:12 AM, Ross Black wrote:
> Using Cassandra 1.2.10, I am trying to load sstable data into a cluster of
> 6 machines.
This may be affecting you:
https://issues.apache.org/jira/browse/CASSANDRA-6272
Using 1.2.12 for the sstableloader process should work.
--
Tyler Hobb
opped when using sstableloader?
Hi,
Using Cassandra 1.2.10, I am trying to load sstable data into a cluster of 6
machines.
The machines are using vnodes, and are configured with NetworkTopologyStrategy
replication=3 and LeveledCompactionStrategy on the tables being loaded.
The sstable data was generated using SSTableSimpleUnsortedWriter.
The small dataset for one table is ~100GB, the large dataset for another
table is ~500GB.
The data was loaded using:
sstableloader --nodes ihz58,ihz59,ihz60,ihz61,ihz62,ihz63 --verbose
${sstable_dir}
and was run on a machine that was not part of the cluster.
After loading the data
experience in creating sstable using my own java
> code…Everything went ok, but after data loaded using sstableloader I was
> not able to select from the columnfamily anymore.
>
>
>
> CQL returned an rpc timeout. Does somebody know what could be the reason?
>
> / I did not find an
Hi,
I tried to get experience creating sstables using my own Java code...
Everything went OK, but after the data was loaded using sstableloader I was no
longer able to select from the column family.
CQL returned an rpc timeout. Does somebody know what could be the reason?
/ I did not find any warning
Thank you Aaron!
Your blog post helped me understand how a row with a compound key is stored,
which in turn showed me how to create the sstable files.
For anyone who needs it this is how it works:
In Cassandra-cli the row looks like this:
RowKey: 5
=> (column=10:created, value=013f84be
egy', 'DC1' : 1, 'DC2' : 1};
> create table test_table ( k1 bigint, k2 bigint, created timestamp, PRIMARY
> KEY (k1, k2) ) with compaction = { 'class' : 'LeveledCompactionStrategy' };
> I then tried to load data to the table using sstable
2' : 1};
create table test_table ( k1 bigint, k2 bigint, created timestamp, PRIMARY
KEY (k1, k2) ) with compaction = { 'class' : 'LeveledCompactionStrategy' };
I then tried to load data to the table using sstableloader, which uses input
created via SSTableSimpleUnsor
- I am running Hadoop on Windows (standalone mode). Trying to
> load my Hive tables to Cassandra using sstableloader.
> Had tried 2 options from the enlisted options in the reference link
> provided:
> * Running sstableloader from java module -
path etc.
Regards
Anand B
From: aaron morton [mailto:aa...@thelastpickle.com]
Sent: Thursday, December 20, 2012 10:51 PM
To: user@cassandra.apache.org
Subject: Re: Loading sstables to Cassandra using sstableloader and JMX client
-d '127.0.0.1' 'C:\Anand\Workspace\H2C_POC\Customer
s well and getting same error.
> Also tweaked by changing the slash from '\' to '/' or '\\'.
>
> Any other ideas?
>
> Thanks
> Anand B
> -Original Message-
> From: Pradeep Kumar Mantha [mailto:pradeep...@gmail.com]
> Sent: Thursday
p...@gmail.com]
Sent: Thursday, December 20, 2012 1:18 PM
To: user@cassandra.apache.org
Subject: Re: Loading sstables to Cassandra using sstableloader and JMX client
Hi,
The directory information should contain the entire path to the sstables location.
'C:\Anand\Workspace\H2C_POC\Customer\.
I ass
load my sstables to load into Cassandra (1.1.6 -
> localhost).
>
>
>
> Reference -
> http://amilaparanawithana.blogspot.com/2012/06/bulk-loading-external-data-to-cassandra.html
>
>
>
> Note - I am running Hadoop in windows (standalone mode). Trying to load my
> hive
Cassandra using sstableloader.
I tried 2 of the options listed in the reference link provided:
* Running sstableloader from java module -
Created a java class which invokes org.apache.cassandra.tools.BulkLoader.main
with the following args:
-d '127.0.0.1' 'C:\
ataInputStream.readByte(DataInputStream.java:250)
at sun.rmi.transport.tcp.TCPChannel.createConnection(TCPChannel.java:228)
... 13 more
Please let me know if I am missing something here.
Note - I am running Hadoop on Windows (standalone mode). Trying to load my Hive
tables to Cassandra using ss
> Which nodetool command are you referring to? (info, cfstats, ring, ...)
My bad. I meant to write sstableloader
> Do I modify the log4j-tools.properties in $CASSANDRA_HOME/conf to set the
> nodetool logs to DEBUG?
You can use the --debug option with sstableloader to get a better exception
message
> > ERROR 09:02:38,614 Error in ThreadPoolExecutor
> > java.lang.RuntimeException: java.io.EOFException: unable to seek to
> > position 93069003 in /opt/analytics/analytics/chart-hd-104-Data.db
> > (65737276 bytes) in read-only mode
>
>
> This one looks like an error.
>
> Can you run nodeto
> WARN 09:02:38,534 Unable to instantiate cache provider
> org.apache.cassandra.cache.SerializingCacheProvider; using default
> org.apache.cassandra.cache.ConcurrentLinkedHashCacheProvider@5d59054d instead
Happens when JNA is not in the path. Nothing to worry about when using the
sstableloader.
Hi,
we are trying to use SSTableLoader to bootstrap a new 7-node cassandra (v.
1.0.10) cluster with the snapshots taken from a 3-node cassandra cluster. The
new cluster is in a different data centre.
After reading the articles at
[1] http://www.datastax.com/dev/blog/bulk-loading
[2]
http://
It will make your life *a lot* easier to do a 1-to-1 migration from the 0.6
cluster to the 1.X one. If you want to add nodes, do it once you have 1.X happy
and stable; if you need to reduce nodes, threaten to hold your breath until you
pass out.
You can then simply:
* drain and snapshot the
I'm new to administering Cassandra so please be kind!
I've been tasked with upgrading a 0.6 cluster to 1.0.7. In doing this I
need a rollback plan in case things go sideways since my window for the
upgrade is fairly small. So we've decided to stand up a brand new cluster
running 1.0.7 and then st