On Mon, Jan 14, 2013 at 11:00 PM, Alain RODRIGUEZ wrote:
> Shouldn't this vote be addressed to the "Developers" mailing list instead
> of the "Users" one ?
>
Yep, I screwed up. Fixed by the present email, but feel free to drop the
user mailing list from replies now.
Sorry for the spam on the user list.
>> Just so I understand, the file contents are *not* stored in the column value
>> ?
>
> No, on that particular CF the columns are SuperColumns with 5 sub columns
> (size, is_dir, hash, name, revision). Each super column is small, I didn't
> mention super columns before because they don't seem
It's the same idea.
If you want to get 50 columns, ask for 51, iterate over the first 50, and use the
51st as the first column for the next page. If you get < 51 columns then you are
at the end of the row.
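The paging pattern above can be sketched in plain Java. This is a minimal, self-contained simulation: the sorted map stands in for a row's columns, and `slice` stands in for a Hector SliceQuery with count = pageSize + 1. All names here are illustrative, not part of any real client API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.SortedMap;
import java.util.TreeMap;

public class SlicePaging {

    /** Simulates a column slice: up to 'count' column names, starting at 'start' (inclusive). */
    static List<String> slice(SortedMap<String, String> row, String start, int count) {
        List<String> names = new ArrayList<>();
        SortedMap<String, String> tail = (start == null) ? row : row.tailMap(start);
        for (String name : tail.keySet()) {
            if (names.size() == count) break;
            names.add(name);
        }
        return names;
    }

    /** Reads every column name by paging 'pageSize' at a time, requesting pageSize + 1. */
    static List<String> readAll(SortedMap<String, String> row, int pageSize) {
        List<String> result = new ArrayList<>();
        String start = null;
        while (true) {
            List<String> page = slice(row, start, pageSize + 1);
            if (page.size() <= pageSize) {            // fewer than pageSize+1: last page
                result.addAll(page);
                return result;
            }
            result.addAll(page.subList(0, pageSize)); // keep the first pageSize columns
            start = page.get(pageSize);               // the extra column starts the next page
        }
    }

    public static void main(String[] args) {
        SortedMap<String, String> row = new TreeMap<>();
        for (char c = 'a'; c <= 'h'; c++) row.put(String.valueOf(c), "v");
        System.out.println(readAll(row, 3));          // pages of 3 over 8 columns
    }
}
```

With a real client the loop is identical; only `slice` is replaced by the actual query, with the 51st column name fed back in as the next start.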
I've not used Kundera so cannot talk about specifics.
Cheers
-
Aaron Morton
Aaron
If you have ORDER BY with a secondary-indexed column in a WHERE
clause, it fails with:
Bad Request: ORDER BY with 2ndary indexes is not supported.
Best Regards
Shahryar
On Mon, Jan 14, 2013 at 5:55 PM, aaron morton wrote:
> Sylvain,
> Out of interest if the select is…
>
> select
DSE includes Hadoop files. It looks like the installation is broken. I would
start again if possible and/or ask the peeps at DataStax about your particular
OS / JVM configuration.
In the past I've used this to set a particular JVM when multiple ones are
installed…
update-alternatives --set j
That looks technically correct for pre-1.2; in 1.2 the name of the column has
changed to cluster_name.
Note that you are diving into internals, and that way lies danger.
Cheers
-
Aaron Morton
Freelance Cassandra Developer
New Zealand
@aaronmorton
http://www.thelastpickle.com
On
Sylvain,
Out of interest if the select is…
select * from test where interval = 7 and severity = 3 order by id desc ;
Would the ordering be a no-op or would it still run?
Or more generally does including an ORDER BY clause that matches the CLUSTERING
ORDER BY DDL clause incur ove
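For reference, the pattern the question describes could look like this in CQL 3 (the table and column names are illustrative, not taken from any real schema):

```sql
CREATE TABLE test (
    interval int,
    id uuid,
    severity int,
    PRIMARY KEY (interval, id)
) WITH CLUSTERING ORDER BY (id DESC);

-- An ORDER BY that matches the CLUSTERING ORDER BY of the DDL:
SELECT * FROM test WHERE interval = 7 ORDER BY id DESC;
```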
Shouldn't this vote be addressed to the "Developers" mailing list instead
of the "Users" one ?
Just in case it wasn't done on purpose and to be sure that you reach those
concerned about this vote :).
Alain
2013/1/14 Sylvain Lebresne
> We've fixed our fair share of bugs since 1.1.8 so I propose
On Mon, Jan 14, 2013 at 1:36 PM, Sylvain Lebresne wrote:
> We've fixed our fair share of bugs since 1.1.8 so I propose the following
> artifacts for release as 1.1.9.
>
> sha1: 7eb47c50c394f0aefdfd4ac9170ce51e2e4be549
> Git:
> http://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=shortlog;h=ref
Thanks Vivek, that worked!
But I can't use a ColumnSliceIterator with this type of query?
Renato M.
2013/1/14 Vivek Mishra :
> RangeSlicesQuery rangeSlicesQuery = HFactory
>         .createRangeSlicesQuery(
>                 keyspace, uuidSerializer,
>                 stringSerializer, stringSerializer)
We've fixed our fair share of bugs since 1.1.8 so I propose the following
artifacts for release as 1.1.9.
sha1: 7eb47c50c394f0aefdfd4ac9170ce51e2e4be549
Git:
http://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=shortlog;h=refs/tags/1.1.9-tentative
Artifacts:
https://repository.apache.org/conte
On Mon, Jan 14, 2013 at 5:04 PM, Shahryar Sedghi wrote:
> Can I always count on this order, or may it change in the future?
>
I would personally rely on it. I don't see any reason why we would change
that internally and besides I suspect you won't be the only one to rely on
it so we won't take
RangeSlicesQuery rangeSlicesQuery = HFactory
        .createRangeSlicesQuery(
                keyspace, uuidSerializer,
                stringSerializer, stringSerializer)
        .setColumnFamily("columnFamily1")
        .setRowCount(pCount)
        .setRange("", "", true, Integer.MAX_
Hi all,
I am trying to get all columns from a certain number of records I am
fetching. I am using the RangeSlicesQuery to achieve this, but it
forces me to set how many columns I want to be retrieved.
RangeSlicesQuery rangeSlicesQuery = HFactory
.createRangeSlicesQuery(keyspace,
Use the ClusterIT tools, specifically the clush program:
clush -g datanodes "netstat -anp | grep {port}"
And it will run on all nodes so you can get the IPs of who is connected.
Dean
On 1/11/13 8:42 PM, "Rob Coli" wrote:
>On Fri, Jan 11, 2013 at 10:32 AM, Brian Tarbox
>wrote:
>> I'd like to be abl
CQL 3 in Cassandra 1.2 does not allow ORDER BY when it is a wide row and
a column with a secondary index is used in a WHERE clause, which makes
sense. So the question is:
I have a test table like this:
CREATE TABLE test(
interval int,
id uuid,
severity int,
PRIMARY KEY (interval
If you want the "row key", just query it (we prefer the term "partition
key" in CQL3 and that's the term you'll find in documents like
http://cassandra.apache.org/doc/cql3/CQL.html but it's the same thing) and
it'll be part of the return columns.
I understand that, as I am able to fetch "partition
>
> How to fetch and populate "row key" from CqlRow api then?
If you want the "row key", just query it (we prefer the term "partition
key" in CQL3 and that's the term you'll find in documents like
http://cassandra.apache.org/doc/cql3/CQL.html but it's the same thing) and
it'll be part of the re
Is it documented somewhere? How to fetch and populate "row key" from
CqlRow api then?
-Vivek
On Mon, Jan 14, 2013 at 7:18 PM, Sylvain Lebresne wrote:
> On Mon, Jan 14, 2013 at 12:48 PM, Vivek Mishra wrote:
>
>> I am getting an issue, where "key" attribute's in byte[] is returned as
>> empty va
On Mon, Jan 14, 2013 at 12:48 PM, Vivek Mishra wrote:
> I am getting an issue where the "key" attribute's byte[] is returned as
> an empty value.
>
We don't return this anymore as this doesn't make much sense for CQL3. Same
as in CqlMetadata we don't return a default_name_type and
default_value_type
Thanks to everyone, especially to Brian. I will continue my studies, and if
I have any problems I'll post here.
I was looking: Astyanax uses Cassandra 1.1.1; is it possible to use 1.2.0?
How?
Thanks.
2013/1/8 Brian O'Neill
> Not sure where you are on the learning curve, but I've put a couple
> "gett
Hi,
I am trying to migrate the Kundera Thrift API from 1.1.6 to 1.2, changing
*execute_cql_query* to *execute_cql3_query* (with a ConsistencyLevel). I am
getting an issue where the "key" attribute's byte[] is returned as an empty
value.
Though the same works with 1.1.6.
-Vivek
On Mon, Jan 14, 2013
Dear all,
[Apologies if you receive this CFP multiple times or are uninterested]
I am organizing the British Conference on Databases (BNCOD) this year and we
would very much like to see some industrial contributions around Big Data.
How have you used Hadoop, HBase, Cassandra, Machine learning tec
On Sun, Jan 13, 2013 at 5:59 PM, Shahryar Sedghi wrote:
> Since the new cql3 methods require a ConsistencyLevel.xxx, does the
> consistency level at the query have precedence over the level at the API or not?
>
There is no "consistency level at the query level" anymore. That's one of
the breaking changes (no
Well, for me it was better to use async operations than batches. So you
are not bitten by latency, but can control everything per-operation. You
will need to support a kind of "window" though. But this window can be
quite low, like 10-20 ops.
2013/1/14 Wei Zhu
> Another potential issue is wh
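The "window" idea above can be sketched with a semaphore bounding the number of in-flight operations. This is a minimal, self-contained simulation: the executor stands in for an async client, and the counter stands in for the write itself; all names are illustrative.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class AsyncWindow {

    public static int run(int totalOps, int window) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        Semaphore inFlight = new Semaphore(window);   // at most 'window' ops outstanding
        AtomicInteger completed = new AtomicInteger();

        for (int i = 0; i < totalOps; i++) {
            inFlight.acquire();                       // blocks once the window is full
            pool.execute(() -> {
                try {
                    completed.incrementAndGet();      // the "write" itself
                } finally {
                    inFlight.release();               // completion frees a window slot
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return completed.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run(1000, 16));            // prints 1000
    }
}
```

Unlike a batch, each operation can fail and be retried on its own, while the semaphore keeps latency from letting unbounded work pile up.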