So now it's behaving :)
// 64-bit network-to-host swap: ntohl each 32-bit half, then recombine.
#define ntohll(x) (((__int64)(ntohl((int)(((x) << 32) >> 32))) << 32) | \
                   (unsigned int)ntohl((int)((x) >> 32)))
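The htonll used in the snippet below isn't defined anywhere in the thread; a minimal companion sketch, assuming the usual case where the conversion is a pure byte swap (and therefore its own inverse):

#define htonll(x) ntohll(x)  /* a byte swap is its own inverse */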
string result;
result.resize(sizeof(long long));
long long bigendian = htonll(l);  // convert to big-endian (network) order first
memcpy(&result[0], &bigendian, sizeof(long long));
=> (super_column=1291668233,
uh, ok I was just copying :P
string result;
result.resize(sizeof(long long));
memcpy(&result[0], &l, sizeof(long long));  // copies in host byte order (no swap: the bug)
I'll try and let you know
many thanks!
On Mon, Dec 6, 2010 at 4:29 PM, Tyler Hobbs wrote:
> How are you packing the longs into strings? The large negative numbers
> point to that being done incorrectly.
+1
I'm doing this in my C++ client, so contact me off-list if you need code
David
Sent from my iPhone
On Dec 6, 2010, at 1:33 PM, Tyler Hobbs wrote:
> Also, thought I should mention:
>
> When you make a std::string out of the char[], make sure to use the
> constructor with the size_t parameter (size 8).
Also, thought I should mention:
When you make a std::string out of the char[], make sure to use the
constructor with the size_t parameter (size 8).
- Tyler
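To see why the explicit size matters (an illustrative sketch, not code from the thread): a big-endian long usually begins with zero bytes, and the char* constructor stops at the first one.

#include <cassert>
#include <string>

int main() {
    // A big-endian long almost always starts with zero bytes.
    char buf[8] = {0, 0, 0, 0, 0x4D, 0x01, 0x02, 0x03};

    std::string wrong(buf);     // char* constructor stops at the first '\0': empty
    std::string right(buf, 8);  // size_t constructor keeps all 8 bytes

    assert(wrong.size() == 0);
    assert(right.size() == 8);
    return 0;
}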
On Mon, Dec 6, 2010 at 12:29 PM, Tyler Hobbs wrote:
> That should be "big-endian".
>
>
> On Mon, Dec 6, 2010 at 12:29 PM, Tyler Hobbs wrote:
That should be "big-endian".
On Mon, Dec 6, 2010 at 12:29 PM, Tyler Hobbs wrote:
> How are you packing the longs into strings? The large negative numbers
> point to that being done incorrectly.
>
> Bitshifting and putting each byte of the long into a char[8] then
> stringifying the char[] is the best way to go.
How are you packing the longs into strings? The large negative numbers
point to that being done incorrectly.
Bitshifting and putting each byte of the long into a char[8] then
stringifying the char[] is the best way to go. Cassandra expects
big-ending longs, as well.
- Tyler
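A minimal sketch of the byte-by-byte packing Tyler describes (my own illustration, not code from the thread), with no dependence on htonll:

#include <cstdint>
#include <string>

// Serialize a 64-bit value big-endian, one byte at a time, then build
// the string with the explicit-length constructor.
std::string pack_long_be(int64_t value) {
    char buf[8];
    for (int i = 0; i < 8; ++i) {
        // Highest-order byte first: shift right by 56, 48, ..., 0 bits.
        buf[i] = static_cast<char>((static_cast<uint64_t>(value) >> (56 - 8 * i)) & 0xFF);
    }
    return std::string(buf, sizeof(buf));  // keep embedded zero bytes
}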
On Mon, Dec 6, 2010
I'm using Thrift in C++ and inserting the results in a vector of pairs, so
client-side mangling does not seem to be the problem.
Also I'm using a "test" column where I insert the same value I'm using as the
super column name (in this case the same date converted to string) and when
queried using cassa
What client are you using? Is it storing the results in a hash map or some
other type of non-order-preserving dictionary?
- Tyler
On Mon, Dec 6, 2010 at 10:11 AM, Guillermo Winkler wrote:
> Hi, I have the following schema defined:
>
> EventsByUserDate : {
> UserId : {
> epoch: { // SC
> IID,
>
Hi, I have the following schema defined:
EventsByUserDate : {
    UserId : {
        epoch: { // SC
            IID,
            IID,
            IID,
            IID
        },
        // and the other events in time
        epoch: {
            IID,
            IID,
            IID
        }
    }
}
Where I'm expecting to store all the event ids for a user ordered by date
(it's seconds since epoch as long long), I'm usin
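For the epochs to sort numerically, the super column comparator has to be LongType. A hedged sketch of what a 0.6-era storage-conf.xml definition might look like (the thread doesn't show the actual config; the attribute values here are assumptions):

<!-- Hypothetical fragment: CompareWith orders the super column names
     (the epochs); CompareSubcolumnsWith orders the IIDs inside each. -->
<ColumnFamily Name="EventsByUserDate"
              ColumnType="Super"
              CompareWith="LongType"
              CompareSubcolumnsWith="BytesType"/>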
The column names are arbitrary strings, so it's not obvious what the
"next" value should be at any step. So, I just set the start of the
next page to the end of the last page and eliminate the duplicate
value when joining the 2 pages together.
The paging direction does not matter in my case, as I
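A sketch of that overlap-and-drop loop (my own illustration; fetch_page is a hypothetical stand-in for the real get_slice call):

#include <string>
#include <vector>

// Hypothetical stand-in for the real get_slice call: returns up to
// `limit` column names, starting at `start` (inclusive).
std::vector<std::string> fetch_page(const std::string& start, size_t limit);

std::vector<std::string> fetch_all(size_t page_size) {
    std::vector<std::string> all;
    std::string start;  // empty start = beginning of the row
    for (;;) {
        std::vector<std::string> page = fetch_page(start, page_size);
        size_t fetched = page.size();
        // Each page restarts at the previous page's last column, so the
        // first result of every page after the first is a duplicate.
        if (!all.empty() && !page.empty() && page.front() == all.back())
            page.erase(page.begin());
        all.insert(all.end(), page.begin(), page.end());
        if (fetched < page_size)
            break;           // short page: nothing left
        start = all.back();  // next page starts at the last column seen
    }
    return all;
}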
You should make sure that your directions and interval endpoints are chosen
correctly. I recall the semantics of the call being like an old-school for loop,
with the descending flag as a step of +1 or -1.
--
Spelling by mobile.
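In Thrift terms (a sketch using the standard generated C++ types; the helper function is mine), the endpoints swap roles when reversed is set:

#include "cassandra_types.h"  // Thrift-generated types; include path may vary

using org::apache::cassandra::SliceRange;

// Build a slice over a whole row in either direction.
SliceRange make_range(bool descending) {
    SliceRange r;
    r.start = "";             // with reversed=true, start is the *upper* endpoint
    r.finish = "";            // empty string means "unbounded" on either side
    r.reversed = descending;  // the -1 step of the old-school for loop
    r.count = 100;
    return r;
}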
On Jul 15, 2010, at 20:19, Ilya Maykov wrote:
> Hi all,
>
> I'm tryi
Hi all,
I'm trying to debug some pretty weird behavior when paginating through
a ColumnFamily with get_slice(). It basically looks like Cassandra
does not respect the limit parameter in the SlicePredicate, sometimes
returning more than limit columns. It also sometimes silently drops
columns. I'm r
Thanks Jonathan
--- On Thu, 7/1/10, Jonathan Ellis wrote:
From: Jonathan Ellis
Subject: Re: Cassandra 0.6.2 stress test failing due to setKeyspace issue
To: user@cassandra.apache.org
Date: Thursday, July 1, 2010, 3:32 PM
you're running a 0.7 stress.py against a 0.6 cassandra, that's not going to work
you're running a 0.7 stress.py against a 0.6 cassandra, that's not going to
work
On Thu, Jul 1, 2010 at 12:16 PM, maneela a wrote:
> Can someone direct me how to resolve this issue in Cassandra 0.6.2?
>
> ./stress.py -o insert -n 1 -y regular -d
> ec2-17
Can someone direct me how to resolve this issue in Cassandra 0.6.2?
./stress.py -o insert -n 1 -y regular -d
ec2-174-129-65-118.compute-1.amazonaws.com --threads 5 --keep-going
Created keyspaces. Sleeping 1s for propagation.
Traceback (most recent call last): File "./stre
Nothing to worry about, we just save the partitioner now so we can detect if
it's changed in the config file after data has been loaded (which would
break stuff).
On Tue, Jun 29, 2010 at 7:25 PM, Anthony Ikeda <
anthony.ik...@cardlink.com.au> wrote:
> Kay, I’ve been upgrading node this morning o
Kay, I've been upgrading node this morning on our servers and so far so
good. Although in the startup I noticed the following message:
INFO 10:22:49,413 Saved partitioner not found. Using
org.apache.cassandra.dht.OrderPreservingPartitioner
The storage-conf.xml file is configured to use
org.
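For context, the partitioner lives in storage-conf.xml in an element like the following (an illustrative fragment; the actual value in Anthony's config is cut off above):

<Partitioner>org.apache.cassandra.dht.OrderPreservingPartitioner</Partitioner>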
On Fri, Jun 25, 2010 at 4:27 PM, Claire Chang wrote:
> Do I have to leave the hinted handoff ON for all servers?
>
No, if your goal is to end up disabling HH, then you can leave it off on the
0.6.2 servers, and when you upgrade the 0.6.1 servers disable it there, then
run cleanup on them
Do I have to leave the hinted handoff ON for all servers?
thanks,
claire
On Jun 24, 2010, at 9:50 PM, Jonathan Ellis wrote:
> Yes
>
> On Thu, Jun 24, 2010 at 10:33 PM, Claire Chang
> wrote:
>> if not? I assume upgrading to .6.2 from .6.1 is just updating the server
>> binary?
>>
>> thanks,
Yes
On Thu, Jun 24, 2010 at 10:33 PM, Claire Chang
wrote:
> if not? I assume upgrading to .6.2 from .6.1 is just updating the server
> binary?
>
> thanks,
> Claire
--
Jonathan Ellis
Project Chair, Apache Cassandra
co-founder of Riptano, the source for professional Cassandra support
http://
if not? I assume upgrading to .6.2 from .6.1 is just updating the server
binary?
thanks,
Claire
You're getting errors from BitSetSerializer... that hasn't changed at
all from 0.6.1 to 0.6.2. (or even from 0.5 to 0.6.2...) Sounds more
like bad RAM to me.
2010/6/11 Lu Ming :
> Many files are corrupted after our Cassandra was updated to 0.6.2
> COMPACTION-POOL is down, caused
Many files are corrupted after our Cassandra was updated to 0.6.2.
COMPACTION-POOL is down, caused by the following error,
and some nodes can NOT start up because of this error.
Is it caused by the issue CASSANDRA-1169? Did the node get a wrong or
corrupted stream file?
ERROR [COMPACTION-POOL:1] 2010
Java transports buffer internally. There is no TBufferedTransport the
way there is in C#.
(moving to user@)
On Tue, Jun 8, 2010 at 10:31 AM, Subrata Roy wrote:
> We are using Cassandra 0.6.2 with hector/thrift client, and our
> application performance is really slow. We are not sure that
On Jun 3, 2010, at 9:55 PM, Lu Ming wrote:
>
>>
>> I have ten 0.5.1 Cassandra nodes in my cluster, and I updated them to
>> Cassandra 0.6.2 yesterday.
>> But today I found that six Cassandra nodes have high CPU usage of more
>> than 400% on my 8-core CPU server.
>> The worst one is more than 760%.
Our Cassandra cluster is deployed in two datacenters.
--
From: "Lu Ming"
Sent: Friday, June 04, 2010 7:01 PM
To:
Subject: Re: High CPU Usage since 0.6.2
I did a thread dump on each Cassandra node, and counted the threads with
call sta
From: "Lu Ming"
Sent: Friday, June 04, 2010 12:55 PM
To:
Subject: High CPU Usage since 0.6.2
I have ten 0.5.1 Cassandra nodes in my cluster, and I updated them to
Cassandra 0.6.2 yesterday.
But today I found that six Cassandra nodes have high CPU usage of more than
400% on my 8-core CPU server.
-
From: "Chris Goffinet"
Sent: Friday, June 04, 2010 1:50 PM
To:
Cc:
Subject: Re: High CPU Usage since 0.6.2
We're seeing this as well. We were testing with a 40+ node cluster on the
latest 0.6 branch from a few days ago.
-Chris
On Jun 3, 2010, at 9:55 PM, Lu Ming wrote:
We're seeing this as well. We were testing with a 40+ node cluster on the
latest 0.6 branch from a few days ago.
-Chris
On Jun 3, 2010, at 9:55 PM, Lu Ming wrote:
>
> I have ten 0.5.1 Cassandra nodes in my cluster, and I updated them to
> Cassandra 0.6.2 yesterday.
> But
I have ten 0.5.1 Cassandra nodes in my cluster, and I updated them to
Cassandra 0.6.2 yesterday.
But today I found that six Cassandra nodes have high CPU usage of more than
400% on my 8-core CPU server.
The worst one is more than 760%. It is very serious.
I used jvisualvm to watch the worst node, and
Just in time for those Memorial Day weekend maintenance windows, I give
you, Apache Cassandra 0.6.2.
You can check out a summary of what's changed here[1], or you can trust
that it's awesome and go straight to the download page[2].
I can smell the upgrades from here.
[1]: http://bit
Yes
On Tue, May 11, 2010 at 11:19 AM, B. Todd Burruss wrote:
> I was thinking about doing some testing with 0.6.2 ... do the devs consider
> the tip of 0.6 branch ok to test with?
>
--
Jonathan Ellis
Project Chair, Apache Cassandra
co-founder of Riptano, the source for professional Cassandra support
I was thinking about doing some testing with 0.6.2 ... do the devs
consider the tip of 0.6 branch ok to test with?