Thanks Shawn,
One of my indexes is 70G on disk, but the machine only has 25G of RAM.
Usually it's fast as hell, less than 0.5s for a full API-wrapped call,
but we do occasionally see searches taking 2.5 seconds.
I'm currently shuffling VMs around to increase the RAM, so it's good to
hear this may solve those random slowdowns, or at least rule out memory
as the cause.
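For what it's worth, something like this rough probe should show whether
the 2.5s outliers survive the RAM change (the URL, core name, and query
below are just placeholders for my setup):

import json
import time
import urllib.request

# Placeholders -- substitute your own Solr host, core, and a
# representative query from real traffic.
SOLR_URL = "http://localhost:8983/solr/mycore/select?q=*:*&wt=json"
SAMPLES = 200

latencies = []
for _ in range(SAMPLES):
    start = time.monotonic()
    with urllib.request.urlopen(SOLR_URL) as resp:
        body = json.load(resp)
    elapsed = time.monotonic() - start
    # QTime is Solr's own internal timing in milliseconds; comparing it
    # to wall-clock time separates Solr-side slowness from network cost.
    latencies.append((elapsed, body["responseHeader"]["QTime"]))
    time.sleep(0.5)

latencies.sort(reverse=True)
print("slowest 5 (wall_s, QTime_ms):", latencies[:5])

If the outliers still show up with the extra RAM in place, at least I'll
know memory wasn't the culprit.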
On 03/24/2016 01:44 PM, Shawn Heisey wrote:
On 3/24/2016 4:02 AM, Robert Brown wrote:
If my index data directory is 70G, and I don't have 70G (plus heap,
etc.) of RAM in the system, this will occasionally affect search speed,
right? When Solr has to resort to reading from disk?
Before I go out and throw more RAM into the system, what would you
recommend for the example above?
Having enough memory available to cache all your index data offers the
best possible performance.
You may be able to achieve acceptable performance when you don't have
that much memory, but I would try to make sure there's at least enough
memory available to cache *half* the index data. Depending on the
nature of your queries and your index, this might not be enough, but
chances are good that it would work well.
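If you want to sanity-check that rule of thumb on a Linux box, a minimal
sketch like this would do it -- the index path is a placeholder, and
MemAvailable requires kernel 3.14 or later:

import os

# Placeholder -- point this at the core's data/index directory.
INDEX_DIR = "/var/solr/data/mycore/data/index"

# Total on-disk size of the index segment files.
index_bytes = sum(
    os.path.getsize(os.path.join(root, name))
    for root, _, names in os.walk(INDEX_DIR)
    for name in names
)

# MemAvailable is the kernel's estimate of memory that can be used
# without swapping; most of it is free to become page cache.
with open("/proc/meminfo") as f:
    meminfo = dict(line.split(":", 1) for line in f)
available_kb = int(meminfo["MemAvailable"].split()[0])

coverage = (available_kb * 1024) / index_bytes
print("index %.1f GiB, available %.1f GiB, coverage %.0f%%"
      % (index_bytes / 2**30, available_kb / 2**20, coverage * 100))

Anything close to 100% means the whole index can live in the cache;
below 50%, I'd start planning for more memory.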
I have a dev server where there's only enough memory available to cache
about a tenth of the index -- it's got full copies of all three of my
large indexes on ONE machine, while production runs two copies of these
same indexes on ten machines. Performance of any single query is not
very good on the dev server. If I absolutely had to use that server
for production with one of my indexes, it would be slow, but I could
do it. I don't think it could handle running all three indexes for
production, though.
Thanks,
Shawn