How much storage do you need? 240G SSDs, quite capable of saturating a
3Gbps SATA link, are $600. Larger ones are also available with similar
performance. Perhaps you could share a bit more about the storage and
performance requirements. How many SSDs to sustain 10k writes/sec PER NODE
WITH LINEAR SCA
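The sizing question above can be sanity-checked with back-of-envelope arithmetic. The 1 KB average row size below is an assumption for illustration, not a figure from this thread:

```python
# Rough bandwidth check for 10k writes/sec per node.
# ASSUMPTION: ~1 KB average row; the thread does not give a row size.
writes_per_sec = 10_000
avg_row_bytes = 1024                       # assumed, not from the thread
write_mb_per_sec = writes_per_sec * avg_row_bytes / 1e6
sata_mb_per_sec = 300                      # ~3 Gbps SATA after 8b/10b encoding
print(write_mb_per_sec)                    # ~10 MB/s of raw write traffic
print(write_mb_per_sec < sata_mb_per_sec)  # True: link bandwidth is not the limit
```

At that rate the raw write stream is a small fraction of a single SSD's sequential bandwidth, so random-write IOPS and compaction overhead are the likelier constraints.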
Hi Chen,
Kundera uses Lucandra for reverse index processing. Lucandra is an extension of
Lucene with custom IndexWriter and IndexReader implementations.
When you submit a document,
1. Lucene processes it and prepares the reverse indexes, and then
2. Lucandra takes over from there and
Thank you for the advice; I will try these settings. I am running defaults
right now. The disk subsystem is one SATA disk for the commitlog and 4 SATA
disks in RAID 0 for the data.
From your email, you are implying this hardware cannot handle this level of
sustained writes? That kind of breaks down t
If you're doing geo stuff, you may want to take a look at the geo extension for
CouchDB:
http://github.com/vmx/couchdb
Sounds like it may give you many of the features you're thinking about out of the
box.
Aaron
On 20 Aug 2010, at 20:54, Jone Lura wrote:
> Thank you for your effort.
>
> I'm pret
For reference, I learned this from reading the source:
thrift/CassandraServer.java
On Sat, Aug 21, 2010 at 4:19 PM, Mark wrote:
> Is there anyway to remove drop column family/keyspace privileges?
>
My mistake, the access levels in 0.7 do now distinguish these
operations (at access level FULL).
On Sat, Aug 21, 2010 at 4:19 PM, Mark wrote:
> Is there anyway to remove drop column family/keyspace privileges?
>
No.
On Sat, Aug 21, 2010 at 4:19 PM, Mark wrote:
> Is there anyway to remove drop column family/keyspace privileges?
>
Is there anyway to remove drop column family/keyspace privileges?
My guess is that you have (at least) 2 problems right now:
You are writing 10k ops/sec to each node, but have default memtable
flush settings. This is resulting in memtable flushing every 30
seconds (default ops flush setting is 300k). You thus have a
proliferation of tiny sstables and are seein
Perhaps I missed it in one of the earlier emails, but what is your
disk subsystem config?
On Sat, Aug 21, 2010 at 2:18 AM, Wayne wrote:
> I am already running with those options. I thought maybe that is why they
> never get completed, as they keep getting pushed down in priority? I am
> getting tim
Trying to better understand the problem I tried some variations, but first
my setup:
1. hmaster: runs the hadoop namenode, jobtracker, a tasktracker and a
datanode, also it runs Cassandra and is the first node in the seedlist in
the client configuration (CassandraStorage for Pig)
2. h
I am already running with those options. I thought maybe that is why they
never get completed, as they keep getting pushed down in priority? I am
getting timeouts now and then, but for the most part the cluster keeps
running. Is it normal/ok for the repair and compaction to take so long? It
has been o
Yes, the AES (AntiEntropyService) is the repair.
If you are running Linux, try adding the options to reduce compaction
priority from
http://wiki.apache.org/cassandra/PerformanceTuning
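For readers without that wiki page handy: as I recall, the options it listed for reducing compaction priority were JVM flags along these lines (reproduced from memory of that era's page, so verify against the wiki before relying on them):

```shell
# Added to JVM_OPTS (e.g. in cassandra.in.sh); flags from memory of the
# PerformanceTuning wiki page -- verify before use.
JVM_OPTS="$JVM_OPTS -XX:+UseThreadPriorities \
                    -XX:ThreadPriorityPolicy=42 \
                    -Dcassandra.compaction.priority=1"
```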
On Sat, Aug 21, 2010 at 3:17 AM, Wayne wrote:
> I could tell from munin that the disk utilization was getting crazy high,
> but the s