On Fri, 2012-11-16 at 02:18 +0100, Buttler, David wrote:
> Obviously, I could replicate the data so that I wouldn't lose any
> documents while I replace my disk, but since I am already storing the
> original data in HDFS (with 3x replication), adding additional
> replication for Solr eats into my disk space. How much space do I want
> to be using? As little as possible. Drives are cheap, but not free.
> And, nodes only hold so many drives.
>
> Dave
>
> -----Original Message-----
> From: Upayavira [mailto:u...@odoko.co.uk]
> Sent: Thursday, November 15, 2012 4:37 PM
> To: solr-user@lucene.apache.org
> Subject: Re: cores shards and disks in SolrCloud
Personally, I see no benefit to having more than one JVM per node; cores
can handle it. I would say that splitting a 20m index into 25 shards
strikes me as serious overkill, unless you expect to expand
significantly. 20m documents would likely be okay with two or three
shards. You can store the indexes for each core on separate disks if you
want to spread the I/O across drives.
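
For illustration only (this isn't from the original thread), here is a rough
sketch of the kind of Collections API call the advice above points at,
assuming a Solr 4.x SolrCloud setup. The host, collection name, and parameter
values are placeholders, not recommendations:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

// Sketch: create a collection with a handful of shards and replicationFactor=1,
// i.e. let HDFS hold the redundant copies of the source data and keep Solr's
// own disk footprint down. Collection name "docs" and shard counts are made up.
public class CreateCollectionSketch {
    public static void main(String[] args) throws Exception {
        String url = "http://localhost:8983/solr/admin/collections"
                + "?action=CREATE"
                + "&name=docs"            // hypothetical collection name
                + "&numShards=3"          // two or three shards for ~20m docs
                + "&replicationFactor=1"  // no extra Solr replicas on disk
                + "&maxShardsPerNode=3";  // allow several shards per node
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        System.out.println("HTTP " + conn.getResponseCode());
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);  // Collections API response body
            }
        }
    }
}

The trade-off, as discussed above, is that with replicationFactor=1 a failed
disk means re-indexing that shard from the copy in HDFS rather than failing
over to a Solr replica.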