Otis,
The documents themselves are relatively small: tens of fields, only a
few of which could be up to a hundred bytes.
Linux servers with relatively large RAM (256),
Minutes on the searches are fine for our purposes, and adding a few tens
of millions of records in tens of minutes is also fine.
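(Editorial sketch, not part of the original mail: taking the round, hypothetical figures of 30 million records in 30 minutes, the implied sustained ingest rate is a quick back-of-envelope calculation.)

```java
public class IngestRate {
    public static void main(String[] args) {
        // Hypothetical numbers: "a few tens of millions of records
        // in tens of minutes" read as 30M docs in 30 minutes.
        long docs = 30_000_000L;
        long seconds = 30L * 60L;
        // Integer division: roughly 16,666 docs/sec sustained.
        System.out.println(docs / seconds + " docs/sec");
    }
}
```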
We had to do some simple tricks for keeping indexing up to speed but
nothing too fancy.
Moving to sharding adds a layer of complexity which we don't really
need, given the above ... and adding complexity may result in lower
reliability :)
Thanks,
Val
On 05/02/2013 03:41 PM, Otis Gospodnetic wrote:
Val,
Haven't seen this mentioned in a while...
I'm curious...what sort of index, queries, hardware, and latency
requirements do you have?
Otis
Solr & ElasticSearch Support
http://sematext.com/
On May 1, 2013 4:36 PM, "Valery Giner" <valgi...@research.att.com> wrote:
Dear Solr Developers,
I've been unable to find an answer to the question in the subject line of
this e-mail, except for a vague one.
We need to be able to index over 2bln+ documents. We were doing well
without sharding until the number of docs hit the limit (2bln+). The
performance was satisfactory for queries, updates, and indexing of new
documents.
That is, except for the need to go around the int32 limit, we don't really
have a need for setting up distributed solr.
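(Editorial aside, not part of the original mail: the int32 limit referred to here is that Lucene addresses documents within an index by Java `int` doc IDs, so a single unsharded index tops out near Integer.MAX_VALUE documents. A minimal illustration:)

```java
public class DocIdLimit {
    public static void main(String[] args) {
        // Lucene document IDs are Java ints, so a single (unsharded)
        // index can address at most Integer.MAX_VALUE documents.
        System.out.println("Max docs per index: " + Integer.MAX_VALUE);
        // prints: Max docs per index: 2147483647
    }
}
```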
I wonder whether someone on the Solr team could tell us when, or in what
version of Solr, we could expect the limit to be removed.
I hope this question may be of interest to someone else :)
--
Thanks,
Val