Well, on Windows Vista and below (haven't checked on 7),
System.currentTimeMillis only has around 10 ms granularity. That is,
for any 10 ms period you get the same value. I develop on Windows and
I'd get sporadic integration test failures because of this.
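A rough way to see it for yourself (just an illustrative probe I'm
sketching here, not anything from our test suite):

// Probe the granularity of System.currentTimeMillis() by busy-waiting
// until the reported value changes and recording the smallest step seen.
public class ClockGranularityProbe {
    public static void main(String[] args) {
        long last = System.currentTimeMillis();
        long minStep = Long.MAX_VALUE;
        for (int i = 0; i < 100; i++) {
            long now;
            // Spin until the clock actually ticks over.
            while ((now = System.currentTimeMillis()) == last) { }
            minStep = Math.min(minStep, now - last);
            last = now;
        }
        // On the affected Windows versions this tends to print ~10-16 ms.
        System.out.println("Smallest observed step: " + minStep + " ms");
    }
}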
On Thu, Sep 1, 2011 at 8:31 PM, Jeremiah Jordan
Are you running on Windows? If the default timestamp is just using
time.time()*1e6 you will get the same timestamp twice if the calls are
close together. time.time() on Windows only has millisecond resolution.
I don't use pycassa, but in the Thrift API wrapper I created for our
Python code I im
Cheers. That would be another solution.
On Wed, Aug 31, 2011 at 10:42 AM, Jim Ancona wrote:
> You could also look at Hector's approach in:
> https://github.com/rantav/hector/blob/master/core/src/main/java/me/prettyprint/cassandra/service/clock/MicrosecondsSyncClockResolution.java
>
> It works wel
You could also look at Hector's approach in:
https://github.com/rantav/hector/blob/master/core/src/main/java/me/prettyprint/cassandra/service/clock/MicrosecondsSyncClockResolution.java
It works well and I believe there was some performance testing done on
it as well.
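The general shape of it (a sketch of the idea only, not Hector's actual
code, and the class name here is mine): take currentTimeMillis() * 1000
and, if the result would not be strictly greater than the last value
handed out, bump it forward:

// Sketch only: hand out strictly increasing "microsecond" timestamps
// even though the underlying clock has only millisecond resolution.
public final class MicrosecondsSyncClock {
    private long lastReturned = 0;

    public synchronized long createTimestamp() {
        long micros = System.currentTimeMillis() * 1000;
        if (micros <= lastReturned) {
            // Same millisecond (or clock went backwards): nudge forward
            // so no two callers ever get the same timestamp.
            micros = lastReturned + 1;
        }
        lastReturned = micros;
        return micros;
    }
}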
Jim
On Tue, Aug 30, 2011 at
Sorry - misread your earlier email. I would log in to IRC and ask in
#cassandra. I would think that given the nature of nanoTime you'll run into
harder-to-track-down problems, but it may be fine.
On Aug 30, 2011, at 2:06 PM, Jiang Chen wrote:
> Do you see any problem with my approach to derive the
Do you see any problem with my approach to derive the current time in
nanoseconds, though?
On Tue, Aug 30, 2011 at 2:39 PM, Jeremy Hanna wrote:
> Yes - the reason why internally Cassandra uses milliseconds * 1000 is because
> System.nanoTime javadoc says "This method can only be used to measure
Yes - the reason Cassandra internally uses milliseconds * 1000 is that the
System.nanoTime javadoc says "This method can only be used to measure elapsed
time and is not related to any other notion of system or wall-clock time."
http://download.oracle.com/javase/6/docs/api/java/lang/System.htm
Indeed it's microseconds. We are talking about how to achieve
microsecond precision. One way is System.currentTimeMillis() * 1000,
but that is only precise to the millisecond. If there is more than one
update in the same millisecond, the later one may be lost. That's my
original problem.
The oth
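To make the collision concrete (a trivial illustration, not my actual
test code): two writes issued within the same millisecond end up with
identical timestamps, so there is no way to order the second after the
first.

public class TimestampCollision {
    public static void main(String[] args) {
        // Both values come from the same millisecond, so they are equal
        // and the later update cannot be recognized as newer.
        long t1 = System.currentTimeMillis() * 1000;
        long t2 = System.currentTimeMillis() * 1000;
        System.out.println(t1 == t2); // almost always true for back-to-back calls
    }
}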
Ed - you're right, milliseconds * 1000. The other stuff about nano time
still stands, but you're right about microseconds. Sorry about that.
On Aug 30, 2011, at 1:20 PM, Edward Capriolo wrote:
>
>
> On Tue, Aug 30, 2011 at 1:41 PM, Jeremy Hanna
> wrote:
> I would not use nano ti
On Tue, Aug 30, 2011 at 1:41 PM, Jeremy Hanna wrote:
> I would not use nano time with cassandra. Internally and throughout the
> clients, milliseconds is pretty much a standard. You can get into trouble
> because when comparing nanoseconds with milliseconds as long numbers,
> nanoseconds will al
I would not use nano time with Cassandra. Internally and throughout the
clients, milliseconds are pretty much the standard. You can get into trouble
because when comparing nanoseconds with milliseconds as long numbers,
nanoseconds will always win. That bit us a while back when we deleted
someth
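The magnitudes make the "always win" part obvious (a quick illustration
of my own, not from the incident above):

public class TimestampMagnitudes {
    public static void main(String[] args) {
        long millis = System.currentTimeMillis();   // ~1.3e12 in 2011
        long nanos = millis * 1000000L;             // same instant in nanoseconds, ~1.3e18
        // A column (or tombstone) written with a nanosecond-scale timestamp
        // out-ranks every later write stamped in milliseconds.
        System.out.println("millis: " + millis);
        System.out.println("nanos:  " + nanos);
        System.out.println(nanos > millis);         // always true
    }
}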
Looks like the theory is correct, for the Java case at least.
The default timestamp precision of Pelops is milliseconds; hence the
problem, as explained by Peter. Once I supplied timestamps precise to
the microsecond (using System.nanoTime()), the problem went away.
I previously stated that sleeping for
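For reference, one way to derive a wall-clock-anchored microsecond
timestamp from nanoTime() (a sketch of the general idea only, not
necessarily the exact code in question; it ignores nanoTime drift
against the system clock and clock skew across nodes):

// Sketch: anchor nanoTime() to the wall clock once, then add the elapsed
// delta to get microsecond-precision timestamps that still track real time.
public final class NanoDerivedClock {
    private static final long BASE_MICROS = System.currentTimeMillis() * 1000;
    private static final long BASE_NANOS = System.nanoTime();

    public static long currentTimeMicros() {
        return BASE_MICROS + (System.nanoTime() - BASE_NANOS) / 1000;
    }
}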
It's a single node. Thanks for the theory. I suspect part of it may
still be right. Will dig more.
On Tue, Aug 30, 2011 at 9:50 AM, Peter Schuller wrote:
>> The problem still happens with very high probability even when it
>> pauses for 5 milliseconds at every loop. If Pycassa uses microseconds
>
> The problem still happens with very high probability even when it
> pauses for 5 milliseconds at every loop. If Pycassa uses microseconds
> it can't be the cause. Also I have the same problem with a Java client
> using Pelops.
You connect to localhost, but is that a single node or part of a
clus
The problem still happens with very high probability even when it
pauses for 5 milliseconds at every loop. If Pycassa uses microseconds
it can't be the cause. Also I have the same problem with a Java client
using Pelops.
On Tue, Aug 30, 2011 at 12:14 AM, Tyler Hobbs wrote:
>
> On Mon, Aug 29, 201
On Mon, Aug 29, 2011 at 4:56 PM, Peter Schuller wrote:
> > If the client sleeps for a few ms at each loop, the success rate
> > increases. At 15 ms, the script always succeeds so far. Interestingly,
> > the problem seems to be sensitive to alphabetical order. Updating the
> > value from 'aaa' to
> If the client sleeps for a few ms at each loop, the success rate
> increases. At 15 ms, the script always succeeds so far. Interestingly,
> the problem seems to be sensitive to alphabetical order. Updating the
> value from 'aaa' to 'bbb' never has a problem. No pause needed.
Is it possible the ver
Hi,
Just started developing with Cassandra (0.8.4). I noticed that when
updating the same row and column repeatedly, say in a test case,
updates may get lost. I found it with a Java client, but the following
Python script also exhibits the same problem.