After removing some data from Solandra via a Solr query, we are getting
DecoratedKey assertions.
Our setup:
the latest version of Solandra (I believe it runs against Cassandra 0.8.6; please correct me if that's wrong)
3 Solandra nodes, with the replication factor set to 2 and the shard count set to 3.
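For context, the removal was an ordinary Solr delete-by-query posted to the core's /update handler; the query shown here is a hypothetical stand-in for the one we actually ran:

```xml
<delete><query>doctype:stale</query></delete>
```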
No systems are currently running (ingest
I'm seeing this in my logs:
WARN [1832199239@qtp-673795938-0] 2011-10-06 16:15:46,424
CassandraIndexManager.java (line 364) invalid shard name encountered:
WDPRO-NGELOG-DEV 1
WDPRO-NGELOG-DEV is the name of the index I'm creating. Is there a restriction
on characters in the name?
I'm seeing this error when trying to insert data into a core I've defined in Solandra:
INFO [pool-7-thread-319] 2011-10-06 16:21:34,328 HttpMethodDirector.java (line
445) Retrying request
INFO [pool-7-thread-1070] 2011-10-06 16:21:34,328 HttpMethodDirector.java (line
445) Retrying request
Does the Solandra-specific partitioner distribute data relatively equally across nodes? Is this influenced by the shards.at.once property? If I'm writing to 3 nodes, how would the default setting of 4 for this property affect the distribution of data across my nodes?
From: Jake Luciani
Date: Wed, 24 Aug 2011 14:54:56 -0700
To: user@cassandra.apache.org
Subject: Re: Cassandra Node Requirements
On Wed, Aug 24, 2011 at 2:54 PM,
I'm trying to determine a node configuration for Cassandra. From what I've been
able to determine from reading around:
1. We need to cap data size at 50% of total node storage capacity to leave headroom for compaction.
2. With RF=3, that means I effectively have 1/6th of total storage available for unique data.
if I'm planning to store 20TB of new data per week, and expire all data every 2
weeks, with a replication factor of 3, do I only need approximately 120 TB of
disk? I'm going to use TTL in my column values to automatically expire data. Or
would I need more capacity to handle SSTable merges? Given
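The arithmetic above can be sketched as a quick back-of-the-envelope calculation (a sketch only, assuming steady ingest, full expiry after exactly two weeks, and the 50% free-space rule of thumb already mentioned for compaction headroom):

```python
# Back-of-the-envelope capacity estimate using the numbers from this thread.
weekly_ingest_tb = 20      # new data written per week
retention_weeks = 2        # data expires via TTL after two weeks
replication_factor = 3     # RF=3

# Live on-disk data: every byte is retained for two weeks and stored 3 times.
live_data_tb = weekly_ingest_tb * retention_weeks * replication_factor

# Rule of thumb: keep live data at or below ~50% of raw disk, since a
# worst-case major compaction can temporarily need a full extra copy.
raw_capacity_tb = live_data_tb * 2

print(live_data_tb)      # 120
print(raw_capacity_tb)   # 240
```

So the naive answer is 120 TB of live data, but roughly double that in raw disk once compaction headroom is included.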