On 5/21/2014 7:28 AM, Jack Krupansky wrote:
> Just to re-emphasize the point - when provisioning Solr, you need to
> ASSURE that the system has enough system memory so that the Solr index
> on that system fits entirely in the OS file system cache. No ifs,
> ands, or buts. If you fail to follow that RULE, all bets are off for
> performance and don't even bother complaining about poor performance
> on this mailing list!! Either get more memory or shard your index more
> heavily - again, no ifs, ands, or buts!!
>
> Any questions on that rule?
>
> Maybe somebody else can phrase this "guidance" more clearly, so that
> fewer people will fail to follow it.
>
> Or, maybe we should enhance Solr to check available memory and log a
> stern warning if the index size exceeds system memory when Solr is
> started.
If the amount of free and cached RAM can be detected by Java in a cross-platform way, it would be awesome to log a performance warning when the total of that memory is less than 50% of the total index size. That is the point where I generally feel comfortable saying that lack of memory is a likely problem. Depending on the exact index composition and the types of queries being run, a Solr server may run very well when only half the index can be cached. (A rough sketch of what such a check might look like is in the P.S. below.)

I've seen some discussion of a documentation section (and supporting scripts/data in the download) that describes how to set up a production-ready and fault-tolerant install. That would be a good place to put this information. An install script on *NIX systems could easily gather memory information and display the index sizes that the hardware is likely to handle efficiently.

If nothing else, we can beef up the SYSTEM_REQUIREMENTS.txt file. Later today I'll file an issue and cook up a patch for that.

Thanks,
Shawn
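P.S. For the curious, here is a minimal sketch of what the startup check might look like. It is not actual Solr code; MemoryCheck and warnIfMemoryLow are hypothetical names. It assumes a HotSpot JVM, since the com.sun.management bean is not part of the standard API, and it can only see *free* physical memory, not the OS disk cache -- which is exactly the cross-platform detection problem I mentioned above, so on a warmed-up system it will under-count and should only be advisory.

import java.io.File;
import java.lang.management.ManagementFactory;

public class MemoryCheck {

  /**
   * Logs a warning if detectable free RAM is below half the index size.
   * Caveat: the JMX bean reports only free physical memory; pages held
   * in the OS disk cache are invisible here, so this under-counts the
   * memory actually available for caching the index.
   */
  public static void warnIfMemoryLow(File indexDir) {
    // HotSpot-specific cast; the standard OperatingSystemMXBean does
    // not expose physical memory sizes.
    com.sun.management.OperatingSystemMXBean os =
        (com.sun.management.OperatingSystemMXBean)
            ManagementFactory.getOperatingSystemMXBean();
    long freeBytes = os.getFreePhysicalMemorySize();
    long indexBytes = sizeOf(indexDir);
    if (freeBytes < indexBytes / 2) {
      System.err.printf(
          "PERFORMANCE WARNING: only %,d bytes of free RAM detected, but "
              + "the index at %s is %,d bytes. For good performance the "
              + "index should fit in the OS disk cache.%n",
          freeBytes, indexDir, indexBytes);
    }
  }

  /** Recursively totals the size of all files under dir. */
  private static long sizeOf(File dir) {
    long total = 0;
    File[] children = dir.listFiles();
    if (children != null) {
      for (File f : children) {
        total += f.isDirectory() ? sizeOf(f) : f.length();
      }
    }
    return total;
  }
}

On Linux the check could instead read /proc/meminfo to include cached memory, but that would no longer be cross-platform.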