On 2/10/2017 2:55 PM, David Hastings wrote:
> of right now has 22 million documents and sits around 360 gb. at this
> rate, it would be around a TB index size. is there a common
> hardware/software configuration to handle TB size indexes?

Memory is the secret to Solr performance.  Lots and lots of memory, so
the OS can effectively cache the index and the system doesn't have to
actually read the disk for most queries.  The amount of memory required
frequently surprises people, and very large memory sizes are typically
quite expensive, especially in a virtualized environment like Amazon AWS.
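
If you want to see where you stand, here's a minimal sketch (Python,
Linux assumed) that compares the on-disk size of a core's index
directory with the machine's physical RAM.  The index path below is
just an example; adjust it for your own installation.

import os

# Hypothetical path to a Solr core's index directory; adjust for your setup.
INDEX_DIR = "/var/solr/data/mycore/data/index"

def dir_size_bytes(path):
    """Sum the sizes of all files under a directory tree."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            total += os.path.getsize(os.path.join(root, name))
    return total

def total_ram_bytes():
    """Total physical memory, via POSIX sysconf (Linux and most Unixes)."""
    return os.sysconf("SC_PHYS_PAGES") * os.sysconf("SC_PAGE_SIZE")

if __name__ == "__main__":
    index = dir_size_bytes(INDEX_DIR)
    ram = total_ram_bytes()
    print(f"Index on disk : {index / 2**30:.1f} GiB")
    print(f"Physical RAM  : {ram / 2**30:.1f} GiB")
    # If the index is much larger than the memory left over after the Solr
    # heap, most queries will end up hitting the disk rather than the
    # OS page cache.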

If the index reaches a terabyte, you'll probably want between 512GB and
1TB of total memory across all Solr servers that contain the index.  If
you want one or more redundant copies of the index for high
availability, plan on adding the same amount of memory again for each
additional copy.
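
To put rough numbers on that, here's a back-of-the-envelope sketch
(Python) of the same arithmetic.  The half-to-full-index-size range is
just the guideline above, not a precise formula.

def memory_estimate_gb(index_size_gb, copies=1):
    # Guideline from above: roughly half to all of the index size in RAM
    # for each copy of the index, summed over all copies.
    low = 0.5 * index_size_gb * copies
    high = 1.0 * index_size_gb * copies
    return low, high

# Example: a 1 TB (1024 GB) index plus one extra replica for availability.
low, high = memory_estimate_gb(1024, copies=2)
print(f"Plan for roughly {low:.0f}-{high:.0f} GB of RAM across the cluster")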

I maintain a wiki page where this is discussed in greater detail:

https://wiki.apache.org/solr/SolrPerformanceProblems

Thanks,
Shawn