- al...@thelastpickle.com
France
The Last Pickle - Apache Cassandra Consulting
http://www.thelastpickle.com
2016-09-08 5:02 GMT+02:00 Lu, Boying :
Hi, All,
We use Cassandra 2.1.11 in our product and I tried its sstable2json tool to
dump an sstable file like this:
sstable2json full-path-to-sstable-file (e.g. xxx-Data.db).
But I got an assertion error at "assert initialized ||
keyspaceName.equals(SYSTEM_KS);" (Keyspace.java
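For context, the way the 2.1-era tool is normally run on a node looks roughly like
this; the paths and the use of CASSANDRA_INCLUDE are assumptions about a typical
tarball install, not details from this thread. The wrapper script sources
cassandra.in.sh, which is what puts the node's configuration on the tool's classpath:
  # paths below are made-up examples
  export CASSANDRA_INCLUDE=/opt/cassandra/bin/cassandra.in.sh
  /opt/cassandra/bin/sstable2json \
      /var/lib/cassandra/data/myks/mytable-<id>/myks-mytable-ka-1-Data.db > mytable.json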
I'm trying to use sstable2json in a 2.1.6 installation. Note that I'm using
the tool from the tarball since the Ubuntu package does not include it. I'm
seeing the following error message:
./sstable2json
/var/lib/cassandra/data/audits/audits_by_user/audits-audits_by_user-ka-38156-Da
30, 2014, at 7:50 AM, ankit tyagi wrote:
Hi,
I am using sstable2json to convert data into JSON from an sstable. It gives me
data in the below format.
{"key":
"000d55494430303030303037383530063932376561640a524541445355444f303100","columns":
[["1406126067358:8:","",1406126067
bAs... for any type. I hope this helps.
On 04/03/2014 08:50 AM, ng wrote:
sstable2json tomcat-t5-ic-1-Data.db -e
gives me
0021
001f
0020
How do I convert this (hex) to the actual value of the column so I can do the below
select * from tomcat.t5 where c1='converted value';
Thanks in advance for the help.
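One way to turn the hex that -e prints back into a usable value, assuming the key is
either plain text or an integer (the real type depends on the table's key validator,
which the thread doesn't show):
  # if the key is text/ascii, decode the hex bytes directly
  echo 0021 | xxd -r -p
  # if it is an integer type, read the bytes as a big-endian number
  printf '%d\n' 0x0021    # prints 33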
Thanks Rob. Bug filed.
https://issues.apache.org/jira/browse/CASSANDRA-6450
On Fri, Nov 29, 2013 at 4:11 PM, Josh Dzielak wrote:
Having an issue with sstable2json. It appears to hang when I run it against an
SSTable that's part of a keyspace with authentication turned on. Running it
against any other keyspace works, and as far as I can tell the only difference
between the keyspaces is authentication. Has anyone run
org.apache.cassandra.tools.SSTableImport.main(SSTableImport.java:479)
ERROR: Non-hex characters in hertz.246944493-2012
From: aaron morton [mailto:aa...@thelastpickle.com]
Sent: Wednesday, 24 April 2013 5:37
To: user@cassandra.apache.org
Subject: Re: readable (not hex encoded) column names using
sstable2json
What the CF defini
Aaron Morton
Freelance Cassandra Consultant
New Zealand
@aaronmorton
http://www.thelastpickle.com
On 23/04/2013, at 9:37 PM, Hans Melgers wrote:
Hello,
Using Cassandra 1.0.7 sstable2json on some tables I get readable column
names. This leads to problems (java.lang.NumberFormatException: Non-hex
characters in) when importing later.
We're trying to move data over to another cluster but this prevents us
from doing so. Could it have
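For reference, the export/import path being attempted here looks roughly like this
with the 1.0-era tools; the keyspace/column family names and paths are placeholders,
not details from the thread:
  sstable2json /var/lib/cassandra/data/MyKeyspace/MyCF-hc-1-Data.db > mycf.json
  # then, on the target cluster
  json2sstable -K MyKeyspace -c MyCF mycf.json /path/to/new/MyCF-hc-1-Data.db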
No, I have the other files unfortunately and I had it fail once and succeed
every time after.
I'm tracking the "external information" of sstable2json more carefully now
(exit status, stdout, stderr), so hopefully if it happens again I can be
more help.
will
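Capturing that "external information" is just a matter of redirecting the streams;
a minimal sketch (the sstable path is a placeholder):
  ./sstable2json /var/lib/cassandra/data/X-Data.db > dump.json 2> dump.err
  echo "exit status: $?"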
I'm running 1.1.6 from the datastax repo.
I ran sstable2json and got the following error:
Exception in thread "main" java.io.IOError: java.io.IOException: dataSize
of 7020023552240793698 starting at 993981393 would be larger than file
/var/lib/cassandra/data/X-Data.db le
Sounds like bad behavior. Can you open a JIRA ticket for that (once jira
is back up :) ?
On Thu, Aug 9, 2012 at 9:14 AM, Mat Brown wrote:
Hello,
We've noticed that when passing multiple -k arguments to the
sstable2json utility, we pretty much always get an IOException with
"Key out of order!". Looking at this:
https://github.com/apache/cassandra/blob/cassandra-1.0.10/src/java/org/apache/cassandra/tools/SSTableExpo
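For reference, the failing invocation has this shape (the file name and keys are
made-up; passing the keys in the hex form sstablekeys prints is my assumption about
the 1.0-era tool):
  sstable2json MyCF-hc-1-Data.db -k 6b657931 -k 6b657932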
On 2012-03-31 08:45 , Zhu Han wrote:
Did you hit the bug here?
https://issues.apache.org/jira/browse/CASSANDRA-4054
Yes looks like it. But what confuses me most is not the sstable2json bug
but why the major compaction does not replace the deleted row data with
a tombstone.
Is that a bug
sstable after a
> major compaction with 1.0.8 (not just tombstones)?
>
> Or did I mess up my test below?
>
> / Jonas
> On 2012-03-28 10:23 , Jonas Borgström wrote:
Hi all,
I've noticed a change in behavior between 0.8.10 and 1.0.8 when it comes
to sstable2json output and major compactions. Is this a bug or intended
behavior?
With 1.0.8:
create keyspace ks;
use ks;
create column family foo;
set foo[1][1] = 1;
nodetool -h localhost flush
sstable2json foo-hc-1-Data.db =>
{
"01": [["01
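The part of the test the thread is actually arguing about, the major compaction and
the re-dump, is cut off above; it presumably looked something like the following,
where the delete, the keyspace/column family arguments and the resulting sstable
generation are all guesses on my part:
  del foo[1];
  nodetool -h localhost flush
  nodetool -h localhost compact ks foo
  sstable2json foo-hc-3-Data.db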
It turns out to be a pycassa 1.5.0 issue, solved by the just released
1.5.1:
https://github.com/pycassa/pycassa/blob/681905ce1037a130d8fed37ea9bd41e2a4fe8bbd/CHANGES
Sorry for the noise.
Thank you Tyler!
bye, lele.
--
nickname: Lele Gaifax | When I live off what I thought yesterday
real: Eman
sed an ISO 8601 string
representation for such values. Recently pycassa implemented native
datetime support which I'd like to take advantage of.
Given that I need to change this kind of detail, I spent some time
trying to understand if sstable2json can be the right tool for the
migration before
It's possible we have a paging bug in sstable2json.
On Fri, Sep 30, 2011 at 10:29 AM, Scott Fines wrote:
Hi all,
I've been messing with sstable2json as a means of mass-exporting some data
(mainly for backups, but also for some convenience trickery on an individual
nodes' data). However, I've run into a situation where sstable2json appears to
be dumping out TONS of duplicate columns
Sounds like you told Cassandra a key? column? was UTF8 but it had
non-UTF8 data in it.
On Fri, Sep 9, 2011 at 2:06 PM, Anthony Ikeda
wrote:
I can't seem to export an sstable. The parameter flags don't work either
(using -k and -f).
sstable2json
/Users/X/Database/cassandra_files/data/RegistryFoundation/ServerIdentityProfiles-g-3-Data.db
WARN 12:01:55,721 Invalid file '.DS_Store' in data directory
/
Hi All,
Can you please explain how I can use the json2sstable and sstable2json
features in Cassandra? It would be helpful if someone could explain with a
small example:
1. one small json text file.
2. what will be the keyspace and columnfamily name.
3. Syntax in detail. What .db name I
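A minimal sketch of both tools, with made-up keyspace/column family/file names (the
-K and -c flags are from the tools of this era; check the usage output of your
version):
  # dump an existing sstable to a JSON text file
  sstable2json /var/lib/cassandra/data/MyKeyspace/MyCF-hc-1-Data.db > mycf.json
  # load that JSON back into an sstable for keyspace MyKeyspace, column family MyCF
  json2sstable -K MyKeyspace -c MyCF mycf.json /tmp/MyCF-hc-1-Data.db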
Click on Submit Patch then it should get noticed as the committers go through
the patch list. And / Or update the comments to get it back into the activity
stream
If you need a hand with updating the 0.8 patch let me know.
Cheers
-
Aaron Morton
Freelance Cassandra Developer
@aaronmorton
On Apr 27, 2011, at 17:10, Edward Capriolo wrote:
> I would think most people who watch dev watch this list.
>
> http://wiki.apache.org/cassandra/HowToContribute
So, here it is: https://issues.apache.org/jira/browse/CASSANDRA-2582
On Apr 27, 2011, at 16:52, Edward Capriolo wrote:
> The method being private is not a deal-breaker. While not good software
> engineering practice, you can copy and paste the code and rename the
> class SSTable2MyJson or whatever.
Sure I can do this but I'd like to have it just available in the d
On Apr 27, 2011, at 15:58, Edward Capriolo wrote:
> Hacking a separate copy of SSTable2json is trivial. Just look for the
> section of the code that writes the data and change what it writes. If
I did. The method's private...
> you can make it a knob --nottl then it could
On Wed, Apr 27, 2011 at 9:40 AM, Timo Nentwig wrote:
Hi!
What about a simple option for sstable2json to not print out expiration
TTL+LocalDeletionTime (maybe even ignore isMarkedForDelete)? I want to move old
data from a live cluster (with TTL) to an archive cluster (->data does not
expire there).
BTW is there a smarter way to do this? Actua
No.
On Fri, Apr 22, 2011 at 3:22 PM, Subrahmanya Harve
wrote:
--
Jonathan Ellis
Hi,
Is there a tool similar to sstable2json that can be used to convert data in
commitlog to json? Or does sstable2json let us read the commitlog as well?
Regards,
smh.
It eventually died with an OOM error. Guess the table was just too
big :( Created an improvement request ticket:
https://issues.apache.org/jira/browse/CASSANDRA-2322
Jason
What about creating a bug report and attaching the needed changes? I bet
the Cassandra devs love contributions.
Bye
Norman
Trying to import a 3GB JSON file which was exported from sstable2json.
I let it run for over an hour and saw zero IO activity. The last thing
it logs is the following:
DEBUG 23:19:32,638 collecting 0 of 2147483647:
Avro/Schema:false:2042@1298067089267
DEBUG 23:19:32,638 collecting 1 of 2147483647
nvm, I found the problem. Sstable2json and json2sstable require a
log4j-tools properties file. I created one and all was well. I guess
that should be added to the default install packages.
Cheers,
Jason
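A minimal log4j 1.x configuration along the lines Jason describes could look like
this; the exact file name and the location the tools pick it up from depend on the
install, so treat both as assumptions:
  # log4j-tools.properties
  log4j.rootLogger=INFO, stdout
  log4j.appender.stdout=org.apache.log4j.ConsoleAppender
  log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
  log4j.appender.stdout.layout.ConversionPattern=%d{HH:mm:ss,SSS} %p %c{1} - %m%n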
On Sat, Mar 12, 2011 at 12:09 AM, Jason Harvey wrote:
Sstable2json always spits out the following when I execute it:
log4j:WARN No appenders could be found for logger
(org.apache.cassandra.config.DatabaseDescriptor).
log4j:WARN Please initialize the log4j system properly.
I verified that the run script sets the CLASSPATH properly, and I even
tried
Counters are not yet supported in sstable2json.
(More generally, trunk is not expected to be stable at this point in
the development cycle.)
On Tue, Feb 1, 2011 at 11:32 AM, Narendra Sharma
wrote:
> Version: Cassandra 0.7.1 (build from trunk)
>
> Setup:
> - Cluster of 2 nodes
with CL=ONE. Everything
worked fine. All counters were returned with correct values.
- Using nodetool flush, flushed the memtable to sstable
- Used sstable2json on the sstable and got the following exception:
[root@msg-qelnx01-v14 bin]# ./sstable2json
../../cassandra071/data/Keyspace1/SuperCounter1-f-1
Can you tar.gz the filter/index/data files for this sstable and attach
it to a ticket so we can debug?
If you can't make the data public, you can send it to me off-list and I
can have a look.
On Wed, Oct 6, 2010 at 11:37 AM, Narendra Sharma
wrote:
Has anyone used sstable2json on 0.6.5 and noticed the issue I described in
my email below? This doesn't look like a data corruption issue, as sstablekeys
shows the keys.
Thanks,
Naren
On Tue, Oct 5, 2010 at 8:09 PM, Narendra Sharma
wrote:
0.6.5
-Naren
On Tue, Oct 5, 2010 at 6:56 PM, Jonathan Ellis wrote:
Version?
On Tue, Oct 5, 2010 at 7:28 PM, Narendra Sharma
wrote:
Hi,
I am using sstable2json to extract row data for debugging an application
issue. I first ran sstablekeys to find the list of keys in the sstable. Then
I use a key to fetch the row from the sstable. The sstable is from a Lucandra
deployment. I get the following.
-bash-3.2$ ./sstablekeys Documents-37
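Spelled out, the two-step workflow described here looks like this (paths and the key
are made-up examples; pass a key exactly as sstablekeys printed it):
  ./sstablekeys /var/lib/cassandra/data/MyKeyspace/MyCF-1-Data.db      # list the row keys
  ./sstable2json /var/lib/cassandra/data/MyKeyspace/MyCF-1-Data.db -k somekey   # dump one row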
Hi all,
I'm having a lot of problems getting Lucandra to correctly handle numeric
document fields. After examining the keys it has written to the CF, I
believe it may be an issue of column ordering by bytes. After dumping the
CF with the test data using sstable2json, the keys are in opposite order of
what the current indexReader is expecting for seeking numeric values,
however it is correct when seeking text values. My question is, when I dump
a CF via sstable2json, are the keys fro