The first step is to look at which searches are taking too long, and
see if there is a way to restructure them so they run faster.

The whole index doesn't have to be in memory to get good search
performance, but 100M documents on a single server is big.  We are
working on distributed search (SOLR-303) so an index can be split
across multiple servers.
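
As for cache configuration: Solr's caches are set up in
solrconfig.xml.  They hold filters, query results, and documents, not
the raw index (the index files themselves are cached by the OS file
system cache, so leaving free RAM to the OS matters too).  A sketch
with illustrative sizes -- the numbers below are placeholders to tune
for your own data, not recommendations:

```xml
<!-- solrconfig.xml (inside <config>): example cache settings.
     Sizes here are illustrative only; tune per index and query mix. -->
<query>
  <!-- caches unordered document sets for filter queries (fq) -->
  <filterCache      class="solr.LRUCache" size="16384"
                    initialSize="4096" autowarmCount="1024"/>
  <!-- caches ordered doc-id lists for full queries -->
  <queryResultCache class="solr.LRUCache" size="16384"
                    initialSize="4096" autowarmCount="1024"/>
  <!-- caches stored fields for documents -->
  <documentCache    class="solr.LRUCache" size="16384"
                    initialSize="4096"/>
</query>
```

Since you say updates are rare, larger caches and higher autowarm
counts are reasonable: warmed entries survive until the next commit.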

-Yonik

On Dec 4, 2007 11:43 AM, Evgeniy Strokin <[EMAIL PROTECTED]> wrote:
> Hello,...
> we have a 110M-record index under Solr. Some queries take a while, but we 
> need sub-second results. I guess the only solution is a cache (something 
> else?)...
> We use the standard LRUCache. The docs say (as far as I understood) that it 
> loads a view of the index into memory and next time works with memory instead 
> of the hard drive.
> So, my question: hypothetically, we could have the whole index in memory if 
> we had enough memory, right? In that case the results should come up very 
> fast. We have very rare updates. So I think this could be a solution.
> How should I configure the cache to achieve this?
> Thanks for any advice.
> Gene
