Hi,
I just encountered a bug with 2.1-rc1 (I haven't had the chance to update
to rc2 yet), and I'm wondering whether it's known or whether I should
report the issue on JIRA.
Basically I dropped a cf/table and the drop failed, which put Cassandra in
a state where neither the table nor the hybrid can be dropped (at least
until restarted).
Simon
On 20/06/2014 11:24, Simon Chemouil wrote:
> For the record, I could reproduce the problem with blobs of size below 64MB.
>
> Caused by: java.lang.IllegalArgumentException: Mutation of 32000122
> bytes is too large for the maxiumum size of 16777216
>
32000122 is ~32MB [...] 64MB works fine)
Simon
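For context, and as an inference from Cassandra's defaults rather than
anything stated in this thread: 16777216 bytes is exactly 16 MiB, which is
half of the default commitlog segment, and a single mutation must fit in
half a segment:

    commitlog_segment_size_in_mb: 32   ->   32 MiB / 2 = 16777216 bytes

So a 32000122-byte write is roughly double what the commit log will
accept, and raising commitlog_segment_size_in_mb in cassandra.yaml would
raise the cap accordingly.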
On 20/06/2014 11:00, Simon Chemouil wrote:
> On 20/06/2014 10:41, Duncan Sands wrote:
>> Hi Simon,
>> 122880122 bytes is a lot more than 0.6MB... How are you sending your blob?
>
> Turns out there was a mistake in my code. The blob in this case was
> actually 122MB!
So it looks like I was sending more than I expected. Still, the question
stands: is CQL the best way to send BLOBs? Are there any remote
operations available on BLOBs?
Thanks,
Simon
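For what it's worth: CQL treats a blob as one opaque value, so there are
no partial or server-side operations on it; the usual workaround on this
list is to chunk large payloads across several rows client-side. Below is
a minimal sketch with the DataStax Java driver 2.x; the blob_chunks table,
its schema, and the keyspace name are all made up for illustration:

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.PreparedStatement;
    import com.datastax.driver.core.Session;
    import java.nio.ByteBuffer;
    import java.util.UUID;

    // Hypothetical schema, not from this thread:
    //   CREATE TABLE blob_chunks (id uuid, chunk int, data blob,
    //                             PRIMARY KEY (id, chunk));
    public class BlobChunkWriter {
        // 1 MiB per row keeps each INSERT far below the 16 MiB mutation cap.
        private static final int CHUNK_SIZE = 1 << 20;

        public static void write(Session session, UUID id, byte[] blob) {
            PreparedStatement ps = session.prepare(
                    "INSERT INTO blob_chunks (id, chunk, data) VALUES (?, ?, ?)");
            for (int off = 0, chunk = 0; off < blob.length; off += CHUNK_SIZE, chunk++) {
                int len = Math.min(CHUNK_SIZE, blob.length - off);
                // wrap() slices the backing array without copying it.
                session.execute(ps.bind(id, chunk, ByteBuffer.wrap(blob, off, len)));
            }
        }

        public static void main(String[] args) {
            Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
            try {
                write(cluster.connect("demo"), UUID.randomUUID(), new byte[5 << 20]);
            } finally {
                cluster.close();
            }
        }
    }

Reading the blob back is then a single-partition query ordered by the
chunk column, and the client reassembles the pieces.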
On 20/06/2014 10:03, Simon Chemouil wrote:
> Hi,
>
> I read in Cassandra's FAQ that it is fine with BLOBs up to 64MB. [...]
On 20/06/2014 10:41, Duncan Sands wrote:
> Hi Simon,
> 122880122 bytes is a lot more than 0.6MB... How are you sending your blob?
Turns out there was a mistake in my code. The blob in this case was
actually 122MB!
Still, the same code works fine on Cassandra 2.0.x, so there might be a
bug lurking.
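A cheap guard against this kind of surprise is to log the payload's actual
size just before binding it. A sketch, with made-up names:

    import java.nio.ByteBuffer;

    final class PayloadCheck {
        // Wrap the bytes and print the real size before handing the
        // buffer to a bound statement.
        static ByteBuffer wrapAndLog(byte[] bytes) {
            ByteBuffer payload = ByteBuffer.wrap(bytes);
            System.out.printf("about to send %d bytes (~%.1f MiB)%n",
                    payload.remaining(), payload.remaining() / (1024.0 * 1024.0));
            return payload;
        }
    }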
Hi,
When I send BLOBs _below_ the max query size (blob size = 0.6MB), it works
fine on Cassandra 2.0, but on 2.1-rc1 I get the following error inside the
Cassandra server (from the logs) and the query just dies:
WARN [SharedPool-Worker-2] 2014-06-20 10:06:00,263
AbstractTracingAwareExecutor
Hi,
I read in Cassandra's FAQ that it is fine with BLOBs up to 64MB. Here I am
trying to send a 1.6MB BLOB using CQL, and Cassandra rejects my query
with the following message:
Caused by: com.datastax.driver.core.exceptions.InvalidQueryException:
Request is too big: length 409600086 exceeds maximum
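For context, an inference from the defaults rather than from the thread:
that maximum is most likely the native protocol frame size,
native_transport_max_frame_size_in_mb: 256 in cassandra.yaml, i.e.

    256 * 1024 * 1024 = 268435456 bytes  <  409600086 bytes (~390 MiB)

Note also that 409600086 is almost exactly 256 * 1.6MB, i.e. about 256
times the intended blob size, which is consistent with the discovery later
in the thread that more data was being sent than expected.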
> [...]ies and have a lot of data on a few nodes, it's normal that the
> cluster is overloaded.
>
> What about the I/O figures and CPU usage during your load test? Is
> the I/O completely saturated? Is the CPU usage nearing 100%?
>
>
> On Mon, Jun 2, 2014 at 4
> [...] understand how the data is going to be
> stored by Cassandra in rows and columns.
>
>
> -----Original Message-----
> From: Simon Chemouil [mailto:schemo...@gmail.com]
> Sent: Monday, June 02, 2014 10:56 AM
> To: user@cassandra.apache.org
> Subject: Re: Performance migrati
> CREATE TABLE sensorData (
>     dataName TEXT,
>     dayRange int,
>     discriminator int,
>     time TIMESTAMP,
>     sensorId bigint,
>     dataValue DOUBLE,
>     PRIMARY KEY
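The quoted message is cut off before the PRIMARY KEY definition, so the
actual key is not recoverable here. Purely as a hypothetical illustration
of the usual timeseries layout (partition on the series identity plus a
time bucket, cluster by time), a slice read with the DataStax Java driver
2.x might look like this; the key, query, and class name are all invented:

    import com.datastax.driver.core.PreparedStatement;
    import com.datastax.driver.core.ResultSet;
    import com.datastax.driver.core.Session;
    import java.util.Date;

    // Assumed (invented) key:
    //   PRIMARY KEY ((dataName, dayRange, discriminator), time, sensorId)
    // With that layout one partition holds one bucket of one series, and
    // a time slice is a single-partition range scan.
    final class SensorDataReader {
        static ResultSet readSlice(Session session, String dataName, int dayRange,
                                   int discriminator, Date from, Date to) {
            PreparedStatement ps = session.prepare(
                    "SELECT time, sensorId, dataValue FROM sensorData "
                    + "WHERE dataName = ? AND dayRange = ? AND discriminator = ? "
                    + "AND time >= ? AND time < ?");
            return session.execute(ps.bind(dataName, dayRange, discriminator, from, to));
        }
    }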
> [the coordinator waits] until all the requests are done before sending the response back to the
> client.
>
> The more elements you put into the "IN" clause, the more queries will
> be done from C* side.
>
>
> Regards
>
> Duy Hai DOAN
>
>
> On Wed, May 28, 2014 at 5:25 PM,
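Since, per the advice above, the coordinator turns a large IN into one
internal query per element anyway, a common alternative is to issue the
per-key queries asynchronously from the client and collect the futures,
so no single coordinator has to wait on all of them. A sketch against the
DataStax Java driver 2.x; the table and column are placeholders:

    import com.datastax.driver.core.PreparedStatement;
    import com.datastax.driver.core.ResultSetFuture;
    import com.datastax.driver.core.Row;
    import com.datastax.driver.core.Session;
    import java.util.ArrayList;
    import java.util.List;

    final class ParallelReads {
        // Replaces: SELECT ... WHERE sensorId IN (k1, ..., kN)
        static List<Row> fetchAll(Session session, List<Long> ids) {
            PreparedStatement ps = session.prepare(
                    "SELECT * FROM sensorData WHERE sensorId = ?"); // placeholder query
            List<ResultSetFuture> futures = new ArrayList<ResultSetFuture>();
            for (Long id : ids) {
                futures.add(session.executeAsync(ps.bind(id))); // non-blocking
            }
            List<Row> rows = new ArrayList<Row>();
            for (ResultSetFuture f : futures) {
                rows.addAll(f.getUninterruptibly().all()); // block once per future
            }
            return rows;
        }
    }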
Hi,
First, sorry for the length of this mail. TL;DR: data modeling timeseries
with an extra dimension; C* is not handling the stress well, while MySQL
doesn't scale as well but handles the queries much better on similar hardware.
==
Context:
We've been evaluating Cassandra for a while now (~1 m