they are in fact not empty. (This is really bad; maybe I failed to configure
something properly.)
I can provide more info if needed.
Thanks in advance.
On Tue, Jul 30, 2013 at 3:32 PM, Pavel Kirienko <pavel.kirienko.l...@gmail.com> wrote:
> Cassandra 1.2.8 still has this issue.
>
On Mon, Jul 29, 2013 at 8:59 AM, Paul Ingalls wrote:
> Great. Let me know what you find!
>
> Thanks!
>
> Paul
>
> Sent from my iPhone
>
> On Jul 27, 2013, at 2:47 AM, Pavel Kirienko wrote:
>
> Hi Paul,
>
> I checked out your issue; it looks the same indeed. Probab
Hi,
I failed to install the Debian package of Cassandra 1.2.7 from the ASF
repository because of a 404 error.
APT said:
http://www.apache.org/dist/cassandra/debian/pool/main/c/cassandra/cassandra_1.2.7_all.deb
404 Not Found [IP: 192.87.106.229 80]
http://www.apache.org/dist/cassandra/debian/pool/main/c
> been seeing. Still no luck getting a
> simple repro case for creating a JIRA issue. Do you have something simple
> enough to drop in a JIRA report?
>
> Paul
>
> On Jul 26, 2013, at 8:06 AM, Pavel Kirienko wrote:
>
> > Hi list,
> >
> > We run Cassan
Hi list,
We run Cassandra 1.2 on a three-node cluster. Each node has 16 GB RAM and a
single 200 GB HDD running Ubuntu Server 12.04.
There is an issue with one table that contains about 3000 rows; here is its
describe-table output:
CREATE TABLE outputs (
  appid text,
  staged boolean,
  field ascii,
  data blob,
  PR
> Do you know any direct way in CQL to handle BLOBs, just like the DataStax
> Java driver does?
Well, the CQL3 specification explicitly says that there is no way to encode
a blob in a CQL request other than as a hex string:
http://cassandra.apache.org/doc/cql3/CQL.html#constants
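For illustration only (this is a sketch, not code from the thread: the
keyspace, table, and values are made up, and it assumes the 1.x DataStax
Java driver), sending a blob as a textual hex constant looks roughly like
this:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class HexBlobInsert {
    public static void main(String[] args) {
        // Contact point and keyspace are placeholders.
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect("demo");

        // In a textual CQL3 statement the only blob literal is a 0x... hex
        // constant, per the spec linked above.
        session.execute("INSERT INTO images (id, data) VALUES (42, 0xcafebabe)");

        cluster.shutdown();
    }
}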
On Tue, Jul 9, 2013 at 6:40 PM, Ollif
y is to not encode them in strings
> at all, but rather to use prepared statements (which don't involve a
> conversion to string).
>
> --
> Sylvain
>
>
> On Mon, Jul 8, 2013 at 11:07 AM, Pavel Kirienko <pavel.kirienko.l...@gmail.com> wrote:
>
>> Hi all,
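A minimal sketch of the prepared-statement approach quoted above, again
assuming the 1.x DataStax Java driver and a made-up keyspace/table (none of
these names come from the thread):

import java.nio.ByteBuffer;

import com.datastax.driver.core.BoundStatement;
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.Session;

public class PreparedBlobInsert {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect("demo");

        // The blob value is bound as raw bytes, so there is no hex (or
        // base64) string round-trip at all.
        PreparedStatement ps =
            session.prepare("INSERT INTO images (id, data) VALUES (?, ?)");
        byte[] payload = {(byte) 0xCA, (byte) 0xFE, (byte) 0xBA, (byte) 0xBE};
        BoundStatement bound = ps.bind(42, ByteBuffer.wrap(payload));
        session.execute(bound);

        cluster.shutdown();
    }
}

Because the value never passes through a string representation, the 2x
overhead of the hex form does not apply here.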
Hi all,
I am curious why there is a BLOB datatype that accepts hex strings only.
Hex encoding requires twice as much space as the original data, so it is
rather inefficient. Instead, base64 encoding with the ASCII datatype seems
more space-efficient, and I believe it doesn't impose noticeable
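To put rough numbers on the overhead (my own back-of-the-envelope check, not
from the thread): for N bytes, hex needs 2N characters while base64 needs
about 4 * ceil(N / 3).

import javax.xml.bind.DatatypeConverter;

public class EncodingOverhead {
    public static void main(String[] args) {
        byte[] payload = new byte[3000];  // arbitrary payload size
        String hex = DatatypeConverter.printHexBinary(payload);
        String b64 = DatatypeConverter.printBase64Binary(payload);
        // Prints: raw=3000 hex=6000 base64=4000
        System.out.println("raw=" + payload.length
                + " hex=" + hex.length()
                + " base64=" + b64.length());
    }
}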
Hi everyone,
I was playing with a single-node Cassandra installation when I discovered
that a request like [SELECT COUNT(*) FROM CF] seems to load the entire
dataset of CF into RAM. I am not sure whether it is expected to behave this
way or not. I'd expect it to iterate through the entire set of rows rather
than load them all into RAM at once.