RE: Unable to fetch large amount of rows

2013-03-18 Thread Viktor Jevdokimov
Please stop flooding this mailing list; your questions are not about Cassandra
development, but about using it.

For Cassandra users there is the users mailing list: u...@cassandra.apache.org
To subscribe to the Cassandra users mailing list: user-subscr...@cassandra.apache.org





Best regards / Pagarbiai

Viktor Jevdokimov
Senior Developer

Email: viktor.jevdoki...@adform.com
Phone: +370 5 212 3063
Fax: +370 5 261 0453

J. Jasinskio 16C,
LT-01112 Vilnius,
Lithuania



Disclaimer: The information contained in this message and attachments is 
intended solely for the attention and use of the named addressee and may be 
confidential. If you are not the intended recipient, you are reminded that the 
information remains the property of the sender. You must not use, disclose, 
distribute, copy, print or rely on this e-mail. If you have received this 
message in error, please contact the sender immediately and irrevocably delete 
this message and any copies.

> -Original Message-
> From: Pushkar Prasad [mailto:pushkar.pra...@airtightnetworks.net]
> Sent: Monday, March 18, 2013 16:30
> To: dev@cassandra.apache.org
> Subject: Unable to fetch large amount of rows
>
> Hi,
>
>
>
> I have following schema:
>
>
>
> TimeStamp
>
> MACAddress
>
> Data Transfer
>
> Data Rate
>
> LocationID
>
>
>
> PKEY is (TimeStamp, MACAddress). That means partitioning is on TimeStamp,
> and data is ordered by MACAddress, and stored together physically (let me
> know if my understanding is wrong). I have 1000 timestamps, and for each
> timestamp, I have 500K different MACAddresses.
>
>
>
> When I run the following query, I get RPC Timeout exceptions:
>
>
>
>
>
> Select * from db_table where Timestamp='...'
>
>
>
> From my understanding, this should give all the rows with just one disk seek,
> as all the records for a particular TimeStamp are stored together. This should
> be very quick; however, that clearly doesn't seem to be the case. Is there
> something I am missing here? Your help would be greatly appreciated.
>
>
>
> Thanks
>
> PP
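
A common workaround for the question quoted above is to page through the wide
partition on the clustering column instead of fetching all 500K rows in one
request. A sketch in CQL3, assuming the quoted schema; the timestamp and MAC
address literals and the page size are illustrative placeholders:

```sql
-- First page: fetch a bounded chunk of the partition
SELECT * FROM db_table
WHERE TimeStamp = '2013-03-18 00:00:00'
LIMIT 10000;

-- Subsequent pages: resume after the last MACAddress seen on the previous page
SELECT * FROM db_table
WHERE TimeStamp = '2013-03-18 00:00:00'
  AND MACAddress > '00:1a:2b:3c:4d:5e'
LIMIT 10000;
```

Repeating the second query with the last MACAddress from each page bounds the
amount of data per request, which keeps each read under the RPC timeout.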



RE: Problems in the cassandra bulk loader

2013-10-09 Thread Viktor Jevdokimov
You overwrite your columns by writing a new row/supercolumn on each iteration.

Move the newRow/newSuperColumn calls out of the "for" statement, which should
only add columns:


int rowKey = 10;
int superColumnKey = 20;
usersWriter.newRow(ByteBufferUtil.bytes(rowKey));
usersWriter.newSuperColumn(ByteBufferUtil.bytes(superColumnKey));
for (int i = 0; i < 10; i++) {
    usersWriter.addColumn(
        ByteBufferUtil.bytes(i + 1),
        ByteBufferUtil.bytes(i + 1),
        System.currentTimeMillis());
}
usersWriter.close();

Next time, please ask such questions on the user mailing list, not the C* dev
list, which is for Cassandra development, not for usage questions or for code
you build around Cassandra.





-Original Message-
From: José Elias Queiroga da Costa Araújo [mailto:je...@cesar.org.br]
Sent: Wednesday, October 9, 2013 11:22 PM
To: dev
Subject: Problems in the cassandra bulk loader

Hi all,

I'm trying to use bulk insertion with the SSTableSimpleUnsortedWriter class
from the Cassandra API, and I'm facing some problems. After generating and
uploading the .db files using the ./sstableloader command, I noticed the data
didn't match what was inserted.

I put the code I used below to explain the behaviour.

I'm trying to generate the data files using only one row key and one
supercolumn, where the supercolumn has 10 columns.

IPartitioner p = new Murmur3Partitioner();
CFMetaData scf = new CFMetaData("myKeySpace", "Column", ColumnFamilyType.Super,
    BytesType.instance, BytesType.instance);

SSTableSimpleUnsortedWriter usersWriter = new SSTableSimpleUnsortedWriter(
    new File("./"), scf, p, 64);

int rowKey = 10;
int superColumnKey = 20;
for (int i = 0; i < 10; i++) {
    usersWriter.newRow(ByteBufferUtil.bytes(rowKey));
    usersWriter.newSuperColumn(ByteBufferUtil.bytes(superColumnKey));
    usersWriter.addColumn(ByteBufferUtil.bytes(i + 1),
        ByteBufferUtil.bytes(i + 1),
        System.currentTimeMillis());
}
usersWriter.close();

After uploading,  the result is:

RowKey: 000a
   => (super_column=0014,
  (name=0001, value=0001,
timestamp=1381348293144))

1 Row Returned.

In this case, shouldn't my supercolumn have 10 columns, with
values between 0001 and 0011, since I'm using the same supercolumn?
The documentation says the newRow method can be invoked many times and that it
impacts only performance.

The second question is: if this is the correct behaviour, shouldn't the
column value be 0011, since that is the last value passed as an argument
to the addColumn(...) method in the loop?

  Thanks in advance,

   Elias.


RE: [VOTE] Release Apache Cassandra 1.2.13 (Strike 3)

2013-12-18 Thread Viktor Jevdokimov
There are only 5 issues tagged for 1.2.14 and all are resolved. Why not include them in 1.2.13?


-Original Message-
From: Sylvain Lebresne [mailto:sylv...@datastax.com]
Sent: Wednesday, December 18, 2013 11:46 AM
To: dev@cassandra.apache.org
Subject: [VOTE] Release Apache Cassandra 1.2.13 (Strike 3)

Third time's the charm, I propose the following artifacts for release as 1.2.13.

sha1: 1b4c9b45cbf32a72318c42c1ec6154dc1371e8e2
Git:
http://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=shortlog;h=refs/tags/1.2.13-tentative
Artifacts:
https://repository.apache.org/content/repositories/orgapachecassandra-066/org/apache/cassandra/apache-cassandra/1.2.13/
Staging repository:
https://repository.apache.org/content/repositories/orgapachecassandra-066/

The artifacts as well as the debian package are also available here:
http://people.apache.org/~slebresne/

Since it is a re-roll, I propose an expedited vote, so the vote will be open
for 24 hours (but longer if needed).

[1]: http://goo.gl/ELcvdB (CHANGES.txt)
[2]: http://goo.gl/lVJqUQ (NEWS.txt)


RE: New keyspace loading creates 'master'

2010-06-11 Thread Viktor Jevdokimov
I can confirm that not all nodes receive the schema: not only new nodes, but
also some old ones that were down at the time of keyspace creation and did not
receive the update when they came back. Sometimes, even with all nodes running
fine, not all of them have received the schema update.

Viktor

-Original Message-
From: Gary Dusbabek [mailto:gdusba...@gmail.com] 
Sent: Friday, June 11, 2010 2:58 AM
To: dev@cassandra.apache.org
Subject: Re: New keyspace loading creates 'master'

On Thu, Jun 10, 2010 at 17:16, Ronald Park  wrote:
> What we found was that, if the node on which we originally installed the
> keyspace was down when a new node is added, the new node does not get the
> keyspace schema.  In some regards, it is now the 'master', at least in
> distributing the keyspace data.  Is this a known limitation?
>

To clarify, are you saying that on a cluster of N nodes, if the
original node was down and there are N-1 live nodes, that new nodes
will not receive keyspace definitions?  If so, it's less of a
limitation and more of a bug.

Gary.

> Thanks,
> Ron
>



RE: cassandra increment counters, Jira #1072

2010-08-12 Thread Viktor Jevdokimov
We're also looking into increment counters under the same load. For us it will
not be limited to peak periods; it will be constant.

Viktor


-Original Message-
From: Jesse McConnell [mailto:jesse.mcconn...@gmail.com] 
Sent: Thursday, August 12, 2010 9:21 PM
To: dev@cassandra.apache.org
Subject: Re: cassandra increment counters, Jira #1072

out of curiosity are you shooting for incrementing these counters 10k
times a second for sustained periods of time?

cheers,
jesse

--
jesse mcconnell
jesse.mcconn...@gmail.com



On Thu, Aug 12, 2010 at 03:28, Robin Bowes  wrote:
> Hi Jonathan,
>
> I'm contacting you in your capacity as project lead for the cassandra
> project. I am wondering how close ticket #1072 is to implementation [1]
>
> We are about to do a proof of concept with cassandra to replace around
> 20 MySQL partitions (1 partition = 4 machines: master/slave in DC A,
> master/slave in DC B).
>
> We're essentially just counting web hits - around 10k/second at peak
> times - so increment counters is pretty much essential functionality for us.
>
> How close is the patch in #1072 to being acceptable? What is blocking it?
>
> Thanks,
>
> R.
>
> [1] https://issues.apache.org/jira/browse/CASSANDRA-1072
>
>
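
For reference, the counters tracked in CASSANDRA-1072 eventually shipped in
Cassandra 0.8. A minimal CQL sketch of the web-hit counting use case discussed
above; the table and column names are illustrative, not from the thread:

```sql
-- A dedicated counter table: counter columns cannot be mixed
-- with regular data columns in the same table
CREATE TABLE page_hits (
    url  text PRIMARY KEY,
    hits counter
);

-- Counters support only increment/decrement, never direct assignment
UPDATE page_hits SET hits = hits + 1 WHERE url = '/index.html';
```

Counter updates are not idempotent, so a retried increment after a timeout may
be applied twice; that trade-off is worth noting for hit-counting workloads
like the one described in the thread.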