When you say "caching 100.000 docs", what do you mean?
Being able to quickly find information in a corpus that grows by 100.000 docs
every day?

I second Erick, I think this is a fairly normal Solr use case.
Even if you really care about fast searches, the default configuration is
already fairly acceptable, and you can tune the Solr caches later if you need to.
Just remember that nowadays Solr is optimized by default for Near Real Time
search and makes heavy use of the memory-mapping feature of modern OSes.
This means Solr is not doing disk I/O all the time: portions of the index are
memory mapped (provided enough memory on the machine is left to the OS).
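
For reference (this is the stock default in recent solrconfig.xml files, worth
double checking against your own version), the directory factory behind that
behaviour is declared like this:

  <!-- default: memory-mapped index with a small RAM cache for NRT segments -->
  <directoryFactory name="DirectoryFactory"
                    class="${solr.directoryFactory:solr.NRTCachingDirectoryFactory}"/>

NRTCachingDirectoryFactory delegates to the memory-mapped implementation and
keeps small, freshly written segments in RAM for Near Real Time searching.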

Furthermore, you can use the heap memory assigned to the Solr JVM to cache
additional elements [1].
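
For example (just a sketch, the right sizes depend entirely on your queries and
data), the caches described in [1] are configured in the <query> section of
solrconfig.xml along these lines:

  <query>
    <!-- unordered sets of document ids matching a filter query (fq) -->
    <filterCache class="solr.FastLRUCache" size="512" initialSize="512" autowarmCount="0"/>
    <!-- ordered lists of document ids for recently executed queries -->
    <queryResultCache class="solr.LRUCache" size="512" initialSize="512" autowarmCount="0"/>
    <!-- stored fields of documents recently fetched from the index -->
    <documentCache class="solr.LRUCache" size="512" initialSize="512" autowarmCount="0"/>
  </query>

Keep in mind those entries live on the JVM heap, so size them (and -Xmx) with care.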

In conclusion: I have never used the embedded Solr server (apart from
integration tests).

If you really want to play with a scenario where you don't need persistence on
disk, you can try the RAMDirectory [2], but even in that case I generally
discourage the approach except for very specific use cases and small indexes.
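
If you do want to try it anyway, it is just a matter of switching the directory
factory in solrconfig.xml, roughly like this (a sketch; remember the index is
lost on restart and this factory does not work with replication):

  <!-- keeps the whole index in heap memory: no persistence across restarts -->
  <directoryFactory name="DirectoryFactory" class="solr.RAMDirectoryFactory"/>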

[1]
https://lucene.apache.org/solr/guide/6_6/query-settings-in-solrconfig.html#QuerySettingsinSolrConfig-Caches
[2]
https://lucene.apache.org/solr/guide/6_6/datadir-and-directoryfactory-in-solrconfig.html#DataDirandDirectoryFactoryinSolrConfig-SpecifyingtheDirectoryFactoryForYourIndex



-----
Alessandro Benedetti
Search Consultant, R&D Software Engineer, Director
Sease Ltd. - www.sease.io