Hi all,
We've open sourced Polidoro. It's a Cassandra client in Scala on top of
Astyanax and in the style of Cascal.
Find it at https://github.com/SpotRight/Polidoro
-Lanny Ripple
SpotRight, Inc - http://spotright.com
4 AM, Omar Shibli wrote:
>
> All you need to do is to decrease the replication factor of DC1 to 0, and
> then decommission the nodes one by one,
> I've tried this before and it worked with no issues.
>
> Thanks,
>
> On Tue, Jul 23, 2013 at 10:32 PM, Lanny Ripple wrote:
Hi,
We have a multi-dc setup using DC1:2, DC2:2. We want to get rid of DC1.
We're in the position where we don't need to save any of the data on DC1.
We know we'll lose a (tiny, already checked) bit of data but our
processing is such that we'll recover over time.
How do we drop DC1 and just m
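Omar's suggestion above (drop DC1's replication, then decommission its nodes one by one) can be sketched roughly as follows; the keyspace name "mykeyspace" and the DC2 replica count are placeholders, and the exact ALTER syntax depends on your Cassandra/CQL version:

```shell
# Sketch only -- "mykeyspace" and the replica counts are placeholders;
# verify syntax against your Cassandra version before running.

# 1) Remove DC1 from the keyspace's replication settings:
cqlsh -e "ALTER KEYSPACE mykeyspace
  WITH replication = {'class': 'NetworkTopologyStrategy', 'DC2': 2};"

# 2) Then, on each DC1 node in turn:
nodetool decommission
```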
.KeySlice.read(KeySlice.java:408)
> at
> org.apache.cassandra.thrift.Cassandra$get_paged_slice_result.read(Cassandra.java:14157)
> at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
> at
> org.apache.cassandra.thrift.Cassandra$Client.recv_get_paged_slice(Cassandra.java:7
> Aaron Morton
> Freelance Cassandra Consultant
> New Zealand
>
> @aaronmorton
> http://www.thelastpickle.com
>
> On 18/04/2013, at 5:50 AM, Lanny Ripple wrote:
>
>> That was our first thought. Using Maven's dependency tree info we verified
>> that we're using the expected (Cass 1.2.3) jars.
>>
> -
> Aaron Morton
> Freelance Cassandra Consultant
> New Zealand
>
> @aaronmorton
> http://www.thelastpickle.com
>
> On 16/04/2013, at 10:17 AM, Lanny Ripple wrote:
>
> A bump to say I found this
>
>
> http://stackoverflow.com/questions/15
There's a bug that's
slipped in (or been uncovered) somewhere. I'll try to narrow down to a dataset
and code that can reproduce.
On Apr 10, 2013, at 6:29 PM, Lanny Ripple wrote:
> We are using Astyanax in production but I cut back to just Hadoop and
> Cassandra to confirm it
Saw this in earlier versions. Our workaround was disable; drain; snap;
shutdown; delete; link from snap; restart;
-ljr
On Apr 11, 2013, at 9:45, wrote:
> I have formulated the following theory regarding C* 1.2.2 which may be
> relevant: Whenever there is a disk error during compaction of an
oo long.
> The mystery to me: Why no complaints in previous versions? Were some checks
> added in Thrift or Hector?
>
> -----Original Message-----
> From: Lanny Ripple [mailto:la...@spotright.com]
> Sent: Tuesday, April 09, 2013 6:17 PM
> To: user@cassandra.apache.org
> S
Hello,
We have recently upgraded to Cass 1.2.3 from Cass 1.1.5. We ran
sstableupgrades and got the ring on its feet and we are now seeing a new issue.
When we run MapReduce jobs against practically any table we find the following
errors:
2013-04-09 09:58:47,746 INFO org.apache.hadoop.util.Nat
We occasionally (twice now on a 40 node cluster over the last 6-8 months) see
this. My best guess is that Cassandra can fail to mark an SSTable for cleanup
somehow. Forced GCs or reboots don't clear them out. We disable thrift and
gossip; drain; snapshot; shutdown; clear data/Keyspace/Table/
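A sketch of that workaround sequence; the keyspace name and snapshot tag are placeholders, and the file-shuffling step is left as comments since sstable paths vary by version and install:

```shell
# Sketch of the disable/drain/snapshot workaround described above.
# "MyKeyspace" and the tag "cleanup" are placeholders.
nodetool disablethrift
nodetool disablegossip
nodetool drain
nodetool snapshot -t cleanup MyKeyspace

# Then: stop cassandra; in the affected table's data directory
# (path varies by install), remove the live sstable files and
# hard-link the snapshot's files back in; restart cassandra.
```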
Ah. TimeUUID. Not as useful for you then but still something for the toolbox.
On Mar 27, 2013, at 8:42 AM, Lanny Ripple wrote:
> A type 4 UUID can be created from two Longs. You could MD5 your strings
> giving you 128 hashed bits and then make UUIDs out of that. Using Scala:
>
A type 4 UUID can be created from two Longs. You could MD5 your strings giving
you 128 hashed bits and then make UUIDs out of that. Using Scala:
import java.nio.ByteBuffer
import java.security.MessageDigest
import java.util.UUID
val key = "Hello, World!"
val md = MessageDigest.getInstance("MD5")
val bb = ByteBuffer.wrap(md.digest(key.getBytes("UTF-8")))
val uuid = new UUID(bb.getLong, bb.getLong)
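Wrapped up as a small helper (the function name md5UUID is mine, not from the thread), the whole recipe looks like:

```scala
import java.nio.ByteBuffer
import java.security.MessageDigest
import java.util.UUID

// Hash a string with MD5 and pack the 16 digest bytes
// into the two longs backing a java.util.UUID.
def md5UUID(s: String): UUID = {
  val md = MessageDigest.getInstance("MD5")
  val bb = ByteBuffer.wrap(md.digest(s.getBytes("UTF-8")))
  new UUID(bb.getLong, bb.getLong)
}
```

The mapping is deterministic, so the same string always yields the same UUID. Strictly speaking the result won't carry the version-4 bits a real random UUID has (the version nibble is whatever MD5 produced), which is fine for keying but worth knowing.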