Hi,
I've used range queries with the OrderPreservingPartitioner and got
satisfactory results.
For instance, I can fetch the first 1 million keys, starting with key
'2008010100' and ending with '2008010200'.
Now I'm trying to do the same with the RandomPartitioner. But here I find that
for Range r
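
For context, a minimal sketch of that kind of key-bounded range query using
pycassa; the keyspace, column family, and host below are assumptions for
illustration, not taken from the thread. Under the OrderPreservingPartitioner
the start/finish values act as real key bounds; under the RandomPartitioner
the same call is accepted, but rows come back in token (hash) order.

from pycassa.pool import ConnectionPool
from pycassa.columnfamily import ColumnFamily

pool = ConnectionPool('Keyspace1', ['localhost:9160'])   # hypothetical keyspace and host
cf = ColumnFamily(pool, 'Events')                        # hypothetical column family

# With OrderPreservingPartitioner, start/finish are real key bounds, so this
# walks keys from '2008010100' up to '2008010200' in key order.
count = 0
for key, columns in cf.get_range(start='2008010100', finish='2008010200'):
    count += 1
    if count >= 1000000:      # stop after the first 1 million keys
        break

# With RandomPartitioner the same call runs, but rows arrive in token (hash)
# order, so date-like key bounds no longer select a date interval.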
Hello Experts,
I see some queer behavior from one of the Cassandra nodes in my cluster: the
data is not being flushed out of the commitlogs, and the commitlog files keep
growing in number. I was inserting data into the cluster, and since yesterday
this node has had more than 900 commitlog files.
-rw-r--r-- 1 dev dev
And I'm still getting UnavailableException and TimedOutException whenever the
Cassandra daemon is doing either compaction or garbage collection...
On Thu, Sep 30, 2010 at 2:42 PM, Rana Aich wrote:
> I ran the nodetool cleanup...but the scenario doesn't change...
>
>
> On Thu
I ran the nodetool cleanup...but the scenario doesn't change...
On Thu, Sep 30, 2010 at 1:14 PM, Edward Capriolo wrote:
> After nodetool move you have to run nodetool cleanup.
>
> On Thu, Sep 30, 2010 at 3:45 PM, Rana Aich wrote:
> > I have arranged my initial tokens
20% of the total.
However, as nodetool shows, my 5th box (a low-end box) has 208.41 GB
(expected ~83 GB), with 73% of its capacity already utilized. The other 3
low-end boxes (~83 GB) are behaving properly.
Does anyone have any clue?
>
On Mon, Sep 27, 2010 at 11:50 PM, Oleg Anastasyev wrote:
>
Hi Peter,
Thanks for your detailed query...
I have an 8-machine cluster: KVSHIGH1,2,3,4 and KVSLOW1,2,3,4. As the names
suggest, the KVSLOWs have low disk space, ~350 GB,
whereas the KVSHIGHs have 1.5 terabytes.
Yet my nodetool shows the following:
192.168.202.202  Down  319.94 GB  72000447307838857304008438688
Hi,
I'm having great difficulty inserting data into an 8-server Cassandra cluster
(RandomPartitioner with RF 2). For the first one billion rows the insertion
was smooth, but then I slowly started getting UnavailableException from the cluster.
And now I can't put in more than 30 million rows at one stretch be
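
UnavailableException typically means not enough replicas were reachable for
the requested consistency level (often during compaction or GC pauses). One
common client-side mitigation is to retry failed writes with backoff; below is
a rough sketch using pycassa, where the keyspace, hosts, and column family are
assumptions, and the exception import path may differ between pycassa versions.

import time
from pycassa.pool import ConnectionPool
from pycassa.columnfamily import ColumnFamily
from pycassa.cassandra.ttypes import UnavailableException, TimedOutException

pool = ConnectionPool('Keyspace1', ['node1:9160', 'node2:9160'])   # hypothetical
cf = ColumnFamily(pool, 'Data')                                    # hypothetical

def insert_with_retry(key, columns, retries=5, backoff=2.0):
    # Retry a single insert with exponential backoff when the cluster is
    # temporarily overloaded (e.g. during compaction or GC pauses).
    for attempt in range(retries):
        try:
            cf.insert(key, columns)
            return
        except (UnavailableException, TimedOutException):
            time.sleep(backoff * (2 ** attempt))
    raise RuntimeError('giving up on key %r after %d attempts' % (key, retries))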
Hi All,
I was under the impression that in order to query with get_range_slices one
has to have an OrderPreservingPartitioner.
Can we do get_range_slices with the RandomPartitioner also? I distinctly
remember reading that (OrderPreservingPartitioner for get_range_slices) in the
Cassandra wiki, but now so
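
For what it's worth, get_range_slices does work with the RandomPartitioner;
you just lose meaningful key ordering, so it is mainly useful for scanning all
rows. A rough pycassa sketch, with the keyspace and column family names assumed:

from pycassa.pool import ConnectionPool
from pycassa.columnfamily import ColumnFamily

pool = ConnectionPool('Keyspace1', ['localhost:9160'])   # hypothetical
cf = ColumnFamily(pool, 'Data')                          # hypothetical

# Empty start/finish means "the whole ring"; this works with any partitioner,
# and pycassa pages through the result set in chunks for you.
for key, columns in cf.get_range(start='', finish='', buffer_size=1024):
    pass   # process each row; the order is by token (hash), not by key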
Hi All,
We are working with a Cassandra cluster consisting of 3 nodes, each with a
storage capacity of 0.5 terabytes.
We are loading data into the cluster with the OrderPreservingPartitioner and
a replication factor of 2.
The data that has been loaded so far looks as follows:
Address Status
the existing
> columns are overwritten or new columns are added. There is no way to
> cause a duplicate key to be inserted.
>
> On Wed, Jul 28, 2010 at 6:16 PM, Rana Aich wrote:
> > Hello,
> > I was wondering what the pitfalls in Cassandra are when the key value is
> > not
Hello,
I was wondering what the pitfalls in Cassandra are when the key value is not
UNIQUE.
Will it affect the range query performance?
Thanks and regards,
raich
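
A tiny sketch of the behavior described in the reply quoted above: writing
again under an existing row key overwrites matching columns and adds new ones;
there is never a second row with the same key. The keyspace and column family
names are made up for illustration (pycassa client assumed).

from pycassa.pool import ConnectionPool
from pycassa.columnfamily import ColumnFamily

pool = ConnectionPool('Keyspace1', ['localhost:9160'])   # hypothetical
cf = ColumnFamily(pool, 'Data')                          # hypothetical

cf.insert('row-1', {'colA': '1'})
cf.insert('row-1', {'colA': '2', 'colB': 'x'})   # same key written again

print(cf.get('row-1'))
# -> {'colA': '2', 'colB': 'x'}: colA was overwritten, colB was added,
#    and there is still exactly one row stored under 'row-1'.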
> ...without threading to see if it's a Cassandra problem or an
> issue with your threading.
>
> Perhaps split the file and run many single threaded processes to load the
> data.
>
> Aaron
>
> On 27 Jul, 2010, at 07:14 AM, Rana Aich wrote:
>
> Hi All,
>
>
>
>
Hi All,
I have to load a huge quantity of data into Cassandra (~10 billion rows).
I'm trying to load the data from files using multithreading.
The idea is that each thread will read the TAB-delimited file and process a
chunk of records.
For example, thread 1 reads lines 1-1000,
thread 2 reads lines 1001
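
A rough sketch of the scheme described above: read the TAB-delimited file,
hand each worker a range of lines, and let each worker batch its inserts.
pycassa is assumed, and the file name, column family, and column layout are
placeholders; a real loader for ~10 billion rows would stream the file and cap
the number of workers rather than read everything into memory.

import threading
from pycassa.pool import ConnectionPool
from pycassa.columnfamily import ColumnFamily

pool = ConnectionPool('Keyspace1', ['node1:9160'])   # hypothetical
cf = ColumnFamily(pool, 'Data')                      # hypothetical

def load_chunk(lines):
    # Insert one chunk of TAB-delimited lines through a batch mutator.
    batch = cf.batch(queue_size=200)
    for line in lines:
        key, value = line.rstrip('\n').split('\t', 1)
        batch.insert(key, {'value': value})   # hypothetical single-column layout
    batch.send()                              # flush whatever is still queued

with open('data.tsv') as f:                   # hypothetical file name
    lines = f.readlines()

chunk = 1000
threads = [threading.Thread(target=load_chunk, args=(lines[i:i + chunk],))
           for i in range(0, len(lines), chunk)]
for t in threads:
    t.start()
for t in threads:
    t.join()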
Hi,
Can someone please throw some light on how I can import data from MySQL
into a Cassandra cluster.
- Is there any tool available?
OR
- Do I have to write my own client using Thrift that will read the export file
(*.sql) and insert the records into the database?
Thanks
raich
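
If no ready-made import tool fits, one option is a small script that selects
rows out of MySQL and writes them into Cassandra through a client library. A
rough sketch assuming the MySQLdb and pycassa Python packages; the table,
columns, keyspace, and column family names are all placeholders.

import MySQLdb
from pycassa.pool import ConnectionPool
from pycassa.columnfamily import ColumnFamily

db = MySQLdb.connect(host='localhost', user='user', passwd='secret', db='mydb')
cur = db.cursor()
cur.execute('SELECT id, name, price FROM products')     # hypothetical table

pool = ConnectionPool('Keyspace1', ['localhost:9160'])   # hypothetical
cf = ColumnFamily(pool, 'Products')                      # hypothetical

batch = cf.batch(queue_size=500)
for row_id, name, price in cur:
    # Use the MySQL primary key as the Cassandra row key; every other column
    # becomes a Cassandra column (values stored as strings in a plain CF).
    batch.insert(str(row_id), {'name': str(name), 'price': str(price)})
batch.send()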