> 90G is correct, each host is currently holding that much data.
>
> Are you saying that 32GB to 96GB would be needed for each host?   Assuming
> we did not add more shards that is.

If you want good performance and enough memory to give Solr the heap it
will need, yes. Lucene (the search library that Solr uses) relies on good
operating system caching for the index. Having enough memory to cache the
ENTIRE index is not usually required, but it is recommended.
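As a rough back-of-the-envelope sketch (the heap size and cache fraction
below are hypothetical numbers for illustration, not recommendations for
your setup):

```shell
# Per-host RAM estimate: Solr heap plus OS page cache for the hot part
# of the index. All figures except the 90GB index size are assumptions.
INDEX_GB=90        # index data currently on each host (from this thread)
HEAP_GB=16         # hypothetical Solr heap size
CACHE_FRACTION=50  # assume caching ~half the index gives decent performance

CACHE_GB=$(( INDEX_GB * CACHE_FRACTION / 100 ))
TOTAL_GB=$(( HEAP_GB + CACHE_GB ))
echo "Suggested RAM: ${TOTAL_GB}GB (heap ${HEAP_GB}GB + ~${CACHE_GB}GB page cache)"
# prints "Suggested RAM: 61GB (heap 16GB + ~45GB page cache)"
```

Cache the full 90GB instead of half and you land near the top of the
32GB-96GB range you asked about.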

Alternatively, you can add a lot more hosts and create a new collection
with a lot more shards. The total memory requirement across the whole
cloud won't go down, but each host won't require as much.
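To make the tradeoff concrete, here is a sketch assuming (purely for
illustration) a 540GB total index, i.e. six hosts at the 90GB each
mentioned above:

```shell
# How per-host cache requirements shrink as hosts are added.
# The 540GB total is a hypothetical figure, not from this thread.
TOTAL_INDEX_GB=540
for HOSTS in 6 12 18; do
  echo "${HOSTS} hosts -> ~$(( TOTAL_INDEX_GB / HOSTS ))GB of index per host to cache"
done
```

The sum across the cloud stays at 540GB no matter how you slice it; only
the per-host share changes.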

Thanks,
Shawn

