Oversharding is another option that punts the ball further down the
road, but 5 years from now somebody _else_ will have to deal with it
;)...
You can host multiple shards on a single Solr instance. So say you think you'll
need 20 shards in 5 years (or whatever). Start with 20 shards on your
single machine.
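As a rough sketch of that approach (the collection name, configset name, and
URL below are made up), the Collections API lets you create all 20 shards up
front on one node by raising maxShardsPerNode:

    import requests

    SOLR_ADMIN = "http://localhost:8983/solr/admin/collections"  # assumed Solr URL

    # Create a 20-shard collection on a single node by allowing that node
    # to host every shard (Collections API CREATE action).
    resp = requests.get(SOLR_ADMIN, params={
        "action": "CREATE",
        "name": "bigcollection",            # hypothetical collection name
        "numShards": 20,
        "replicationFactor": 1,
        "maxShardsPerNode": 20,             # let one node hold all 20 shards
        "collection.configName": "myconf",  # hypothetical configset
    })
    resp.raise_for_status()
    print(resp.text)

Moving individual shards to new hardware later is then a matter of replicating
them onto the new nodes rather than re-sharding the index.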
The collections we index under this multi-collection alias do not use
real time get, no. We have other collections behind single-collection
aliases where get calls seem to work, but I'm not clear whether the
calls are real time. Seems like it would be easy for you to test, but
just be aware that ...
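If you want to run that test yourself, something along these lines would show
whether /get works through a multi-collection alias (the alias names, field,
and document id here are hypothetical, and the default /get handler is assumed
to be configured):

    import requests

    SOLR = "http://localhost:8983/solr"  # assumed Solr base URL
    WRITE_ALIAS = "write_alias"          # hypothetical single-collection write alias
    READ_ALIAS = "read_alias"            # hypothetical multi-collection read alias
    DOC_ID = "rtg-test-1"                # hypothetical document id

    # Index a document through the write alias without committing it.
    r = requests.post(f"{SOLR}/{WRITE_ALIAS}/update",
                      json=[{"id": DOC_ID, "title_s": "realtime get test"}])
    r.raise_for_status()

    # Ask the real-time get handler for it through the multi-collection read
    # alias. If /get is supported there, the uncommitted document comes back;
    # otherwise expect an error or a "doc":null response.
    r = requests.get(f"{SOLR}/{READ_ALIAS}/get", params={"id": DOC_ID})
    print(r.status_code, r.text)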
On Thu, 2014-11-20 at 01:42 +0100, Patrick Henry wrote:
> Good eye, that should have been gigabytes. When adding to the new shard,
> is the shard already part of the collection? What mechanism have you
> found useful in accomplishing this (i.e. routing)?
Currently (and for the foreseeable future) ...
Michael,
Interesting, I'm still unfamiliar with limitations (if any) of aliasing.
Does the architecture utilize realtime get?
On Nov 18, 2014 11:49 AM, "Michael Della Bitta" <michael.della.bi...@appinions.com> wrote:
> We're achieving some success by treating aliases as collections and
> collections as shards.
Good eye, that should have been gigabytes. When adding to the new shard,
is the shard already part of the collection? What mechanism have you
found useful in accomplishing this (i.e. routing)?
On Nov 14, 2014 7:07 AM, "Toke Eskildsen" wrote:
> Patrick Henry [patricktheawesomeg...@gmail.com] wrote:
We're achieving some success by treating aliases as collections and
collections as shards.
More specifically, there's a read alias that spans all the collections,
and a write alias that points at the 'latest' collection. Every week, I
create a new collection, add it to the read alias, and point the write alias
at it.
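For what it's worth, a weekly rotation like that can be scripted against the
Collections API. The sketch below assumes made-up collection, alias, and
configset names, and that the caller keeps track of which collections are
already behind the read alias:

    import datetime
    import requests

    SOLR_ADMIN = "http://localhost:8983/solr/admin/collections"  # assumed Solr URL
    CONFIG = "myconf"                                             # hypothetical configset

    def rotate_weekly(existing_collections):
        """Create this week's collection and repoint the read/write aliases."""
        new_coll = "docs_" + datetime.date.today().strftime("%Y_%W")  # e.g. docs_2014_46

        # 1. Create the new collection for this week's documents.
        requests.get(SOLR_ADMIN, params={
            "action": "CREATE", "name": new_coll,
            "numShards": 1, "replicationFactor": 1,
            "collection.configName": CONFIG,
        }).raise_for_status()

        # 2. Point the read alias at every collection, old and new.
        #    CREATEALIAS overwrites an existing alias of the same name.
        requests.get(SOLR_ADMIN, params={
            "action": "CREATEALIAS", "name": "read_alias",
            "collections": ",".join(existing_collections + [new_coll]),
        }).raise_for_status()

        # 3. Point the write alias at only the latest collection.
        requests.get(SOLR_ADMIN, params={
            "action": "CREATEALIAS", "name": "write_alias",
            "collections": new_coll,
        }).raise_for_status()

        return new_coll

Running something like rotate_weekly() from cron once a week keeps the write
alias on the latest collection while the read alias keeps the whole history
searchable.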
Patrick Henry [patricktheawesomeg...@gmail.com] wrote:
>I am working with a Solr collection that is several terabytes in size over
> several hundred million documents. Each document is very rich, and
> over the past few years we have consistently quadrupled the size of our
> collection annually.
Hello everyone,
I am working with a Solr collection that is several terabytes in size over
several hundred million documents. Each document is very rich, and
over the past few years we have consistently quadrupled the size of our
collection annually. Unfortunately, this sits on a single node w...