Me too.
On Tue, Mar 23, 2010 at 12:48 PM, Jeff Hodges wrote:
> I'll be there.
> --
> Jeff
>
> On Mon, Mar 22, 2010 at 8:40 PM, Eric Florenzano wrote:
>> Nice, I'll go!
>>
>> -Eric Florenzano
>
--
Evan Weaver
The maximum is the Thrift-imposed "whatever you can fit in memory, or
2GB, whichever is smaller."
In practice you probably don't want to exceed 64MB or so, depending on
your concurrency requirements.
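In practice that means paging: request a bounded number of rows per call and
restart the next call at the last key you saw. A rough sketch of that loop is
below; the RowSource interface and its fetchKeys method are hypothetical
stand-ins for however your Thrift client exposes get_range_slice, not an
actual Cassandra API.

import java.util.List;

// Hypothetical stand-in for the Thrift get_range_slice call; wire this to
// whatever your generated Cassandra client actually exposes.
interface RowSource {
    // Returns up to 'count' row keys, starting at 'startKey' (inclusive).
    List<String> fetchKeys(String startKey, int count);
}

public class RangePager {
    // Walk a whole range in bounded pages instead of requesting
    // Integer.MAX_VALUE rows in a single call.
    public static void scanAll(RowSource source, int pageSize) {
        String start = "";      // empty start key = beginning of the range
        String lastSeen = null; // last key of the previous page
        while (true) {
            List<String> keys = source.fetchKeys(start, pageSize);
            if (keys.isEmpty()) {
                break;
            }
            for (String key : keys) {
                if (key.equals(lastSeen)) {
                    continue;   // start key is inclusive, so the first row repeats
                }
                // process(key) goes here
            }
            if (keys.size() < pageSize) {
                break;          // a short page means the range is exhausted
            }
            lastSeen = keys.get(keys.size() - 1);
            start = lastSeen;   // next page starts at the last key we saw
        }
    }
}

Keeping pageSize in the hundreds or low thousands should keep each response
comfortably under that 64MB ballpark for typical row sizes.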
On Fri, Mar 26, 2010 at 8:35 PM, Jeff Zhang wrote:
> Hi all,
>
> I'd like to use Cassandra to st
I have not restarted my nodes.
OK, maybe I should give 0.6 a try.
On Sun, Mar 28, 2010 at 9:53 AM, Jonathan Ellis wrote:
> It means that a MessagingService socket closed unexpectedly. If
> you're starting and restarting nodes that could cause it.
>
> This code is obsolete in 0.6 anyway.
>
> On S
It means that a MessagingService socket closed unexpectedly. If
you're starting and restarting nodes, that could cause it.
This code is obsolete in 0.6 anyway.
On Sat, Mar 27, 2010 at 8:51 PM, Eric Yu wrote:
> And one more clue here: when ReplicationFactor is 1, it's OK; after I changed it to
> 2, the
And one more clue here: when ReplicationFactor is 1, it's OK; after I changed it to
2, the exception occurred.
On Sun, Mar 28, 2010 at 9:46 AM, Eric Yu wrote:
> Hi Jonathan,
>
> I upgraded my JDK to the latest version, and I am sure I start Cassandra with
> it (I set JAVA_HOME in cassandra.in.sh).
> But the
Hi Jonathan,
I upgraded my JDK to the latest version, and I am sure I start Cassandra with it
(I set JAVA_HOME in cassandra.in.sh).
But the exception is still there; any idea?
On Sun, Mar 28, 2010 at 12:02 AM, Jonathan Ellis wrote:
> This means you need to upgrade your jdk to build 18 or later
>
> On Sa
Could you try running your experiment again with DEBUG logging enabled, and
then attaching the logs to a JIRA?
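For the logging level, in the 0.6-era conf/log4j.properties it should just be
a matter of switching the rootLogger line to DEBUG (the appender names below
assume the stock file):

# conf/log4j.properties
log4j.rootLogger=DEBUG,stdout,R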
-----Original Message-----
From: "Jianing Hu"
Sent: Saturday, March 27, 2010 12:07pm
To: user@cassandra.apache.org
Subject: Re: FW: Re: Is ReplicationFactor (eventually) guaranteed?
He
what's the ulimit set to?
-Chris
On Mar 27, 2010, at 10:29 AM, James Golick wrote:
> Hey,
>
> I put our first cluster into production (writing but not reading) a couple
> of days ago. Right now, it's got two pretty sizeable nodes taking about 200
> writes per second each and virtually no rea
Nothing in the log. No CPU activity.
I'll try to strace it and connect with jconsole next time it happens.
On Sat, Mar 27, 2010 at 11:09 AM, Jonathan Ellis wrote:
> anything interesting in the log?
>
> is there cpu activity?
>
> can you connect w/ jconsole?
>
> On Sat, Mar 27, 2010 at 12:29 PM,
anything interesting in the log?
is there cpu activity?
can you connect w/ jconsole?
On Sat, Mar 27, 2010 at 12:29 PM, James Golick wrote:
> Hey,
> I put our first cluster into production (writing but not reading) a couple
> of days ago. Right now, it's got two pretty sizeable nodes taking abo
Hey,
I put our first cluster into production (writing but not reading) a couple
of days ago. Right now, it's got two pretty sizeable nodes taking about 200
writes per second each and virtually no reads.
Eventually, though (and this has happened twice), both nodes seem to start
timing out. If I
Here's the conf file, with comments removed. Thanks a lot for your help.
dev
false
0.01
org.apache.cassandra.dht.OrderPreservingPartitioner
foo3
org.apache.cassandra.locator.EndPointSnitch
org.apache.cassandra.locator.RackUnawareS
The limitation is entirely based on the hardware you're using and the
volume of data you are getting back (IMO at least), since you're
basically restricted by the amount of data you can query and return
within your RPC timeout. If your names/values are large then you'll
be able to query fewer than if
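The timeout being referred to here should be RpcTimeoutInMillis in
storage-conf.xml (0.6-era element name); the value below is only illustrative,
so check your own file for the default before changing anything:

<RpcTimeoutInMillis>10000</RpcTimeoutInMillis>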
Jonathan, thanks for your quick reply. I'll give it a try now.
On Sun, Mar 28, 2010 at 12:02 AM, Jonathan Ellis wrote:
> This means you need to upgrade your jdk to build 18 or later
>
> On Sat, Mar 27, 2010 at 10:55 AM, Eric Yu wrote:
> > Hi list,
> > I got this exception when inserting into a cluste
This means you need to upgrade your jdk to build 18 or later
On Sat, Mar 27, 2010 at 10:55 AM, Eric Yu wrote:
> Hi list,
> I got this exception when inserting into a cluster with 5 nodes. Is this a bug,
> or is something else wrong?
>
> here is the system log:
>
> INFO [GMFD:1] 2010-03-27 23:15:16,14
Hi list,
I got this exception when inserting into a cluster with 5 nodes. Is this a bug,
or is something else wrong?
Here is the system log:
INFO [GMFD:1] 2010-03-27 23:15:16,145 Gossiper.java (line 543) InetAddress
/172.19.15.210 is now UP
ERROR [Timer-1] 2010-03-27 23:23:27,739 TcpConnection.java (
Mike,
If you assume that your rows are roughly equal in size (at
least statistically), then you could also just take a node's total load (this
is exposed via JMX) and divide by the number of keys/rows on that node. Not
sure how to get the latter, but it shouldn't be such a big deal to int
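A minimal sketch of reading that load figure over JMX is below, assuming a
0.6-era setup: the JMX port (8080), the StorageService MBean name, and the
LoadMap attribute are all assumptions to verify against your build, and the
per-node row count is deliberately left open since that's the unresolved half.

import java.util.Map;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class LoadProbe {
    public static void main(String[] args) throws Exception {
        // 8080 was the usual JMX port for 0.6-era nodes; adjust as needed.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:8080/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            ObjectName storage =
                    new ObjectName("org.apache.cassandra.service:type=StorageService");
            // LoadMap maps endpoint -> human-readable load ("123.45 MB" etc.);
            // the attribute name is assumed from the StorageService MBean.
            @SuppressWarnings("unchecked")
            Map<String, String> loadMap =
                    (Map<String, String>) mbs.getAttribute(storage, "LoadMap");
            for (Map.Entry<String, String> entry : loadMap.entrySet()) {
                // Average row size would then be load-in-bytes / row count,
                // once a per-node row count is available.
                System.out.println(entry.getKey() + " load = " + entry.getValue());
            }
        } finally {
            connector.close();
        }
    }
}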
Hi all,
When I invoke the API get_range_slice and set the argument row_count to
Integer.MAX_VALUE, it throws an exception. But if I set it to a small int,
everything is fine. So what is the limit on this argument, and what is the
largest number of records I can get at one time?
Thanks.
--
Sorry for the last mail, I hit the wrong button. This JMX property gives
per-CF granularity, right?
I think it doesn't completely solve the problem here, because key
load-balancing effectively demands per-key granularity. But this
could help with statistical sampling.
Roland