Hi Gil,

I'd look at the number and type of fields you sort and facet on (this stuff
likes memory).
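If you're on Solr 4.2 or later, one way to take that pressure off the heap is
docValues on the sort/facet fields. A minimal sketch (the field names and types
below are just placeholders, not your schema):

  <!-- schema.xml: hypothetical sort/facet fields; docValues stores them
       column-wise on disk so the OS page cache, rather than the Java heap,
       does most of the work -->
  <field name="crawl_date" type="tdate"  indexed="true" stored="false" docValues="true"/>
  <field name="domain"     type="string" indexed="true" stored="false" docValues="true"/>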
I'd keep in mind that heaps over ~32 GB can't use compressed object pointers
(each reference goes from 4 to 8 bytes), so several smaller heaps may be
better than one big one.
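A quick way to check is to ask the JVM directly; for example (the heap size
here is just an illustration):

  # Above roughly 32 GB the JVM silently drops compressed object pointers,
  # so verify the flag is still on at the heap size you plan to run:
  java -Xmx31g -XX:+PrintFlagsFinal -version | grep UseCompressedOops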
You didn't mention the number of CPU cores, but keep that in mind when
sharding: when a query comes in, you want to put all your CPU cores to work.
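If you go the SolrCloud route, the shard count on the collection is what
spreads a single query across those cores. A rough sketch (host, collection
name, and counts are made up; size them to your own boxes):

  # Hypothetical 12-shard collection so one query fans out across the
  # cluster and keeps the CPUs busy; tune numShards/replicationFactor
  # to your hardware.
  curl 'http://localhost:8983/solr/admin/collections?action=CREATE&name=webarchive&numShards=12&replicationFactor=2'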
...

Otis
--
Performance Monitoring * Log Analytics * Search Analytics
Solr & Elasticsearch Support * http://sematext.com/


On Tue, Dec 10, 2013 at 11:51 AM, Hoggarth, Gil <gil.hogga...@bl.uk> wrote:

> We're probably going to be building a Solr service to handle a dataset
> of ~60TB, which for our data and schema typically gives a Solr index
> size of 1/10th - i.e., 6TB. Given the general rule that the amount of
> hardware memory should exceed the size of the Solr index (exceed it to
> also allow for the operating system etc.), how have people handled this
> situation? Do I really need, for example, 12 servers with 512GB RAM, or
> are there other techniques for handling this?
>
>
>
> Many thanks in advance for any general/conceptual/specific
> ideas/comments/answers!
>
> Gil
>
>
>
>
>
> Gil Hoggarth
>
> Web Archiving Technical Services Engineer
>
> The British Library, Boston Spa, West Yorkshire, LS23 7BQ
>
>
