Ben: As Shawn says, you're on the right track...
Do note, though, that a 10K size here is probably excessive, YMMV of
course. And an autowarm count of 5,000 is almost _certainly_ far more
than you want. All these fq clauses get re-executed whenever a new
searcher is opened (soft commit, or hard commit with openSearcher=true).

I realize this may just be illustrative. Is this your actual setup? And
if so, what is your motivation for a 5,000 autowarm count?

Best,
Erick

On Wed, Jun 18, 2014 at 11:42 AM, Shawn Heisey <s...@elyograg.org> wrote:
> On 6/18/2014 10:57 AM, Benjamin Wiens wrote:
>> Thanks Erick!
>> So let's say I have a config of
>>
>> <filterCache
>>   class="solr.FastLRUCache"
>>   size="10000"
>>   initialSize="10000"
>>   autowarmCount="5000"/>
>>
>> MaxDocuments = 1,000,000
>>
>> So according to your formula, the filterCache should roughly have the
>> potential to consume this much RAM:
>>
>> ((1,000,000 / 8) + 128) * 10,000 = 1,251,280,000 bytes
>> 1,251,280,000 bytes / 1,000 = 1,251,280 KB
>> 1,251,280 KB / 1,000 = 1,251.28 MB
>> 1,251.28 MB / 1,000 = 1.25 GB
>
> Yes, this is essentially correct. If you want to arrive at a number
> that's more accurate for the way that OS tools will report memory,
> you'll divide by 1024 instead of 1000 for each of the larger units.
> That results in a size of 1.16 GB instead of 1.25. Computers think in
> powers of 2; dividing by 1000 assumes a bias toward how people think,
> in powers of 10. It's the same thing that causes your computer to
> report 931 GB for a 1 TB hard drive.
>
> Thanks,
> Shawn
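For reference, here is a minimal sketch of the arithmetic above as a
small standalone Java program. The (maxDoc / 8) + 128 per-entry figure
is the approximation used in this thread (one bit per document for the
filter bitset plus a small fixed overhead), so treat the result as a
rough ceiling rather than an exact measurement; the class name and
labels are made up for illustration.

  // Rough filterCache memory estimate for the numbers quoted above.
  public class FilterCacheEstimate {
      public static void main(String[] args) {
          long maxDoc = 1_000_000L;    // documents in the index
          long cacheSize = 10_000L;    // filterCache "size" setting

          long bytesPerEntry = (maxDoc / 8) + 128;      // ~125,128 bytes
          long totalBytes = bytesPerEntry * cacheSize;  // 1,251,280,000 bytes

          // Decimal units (divide by 1,000) vs. binary units (divide by
          // 1,024), as Shawn points out.
          double decimalGB = totalBytes / 1_000_000_000.0;
          double binaryGiB = totalBytes / (1024.0 * 1024 * 1024);

          System.out.printf("per entry: %,d bytes%n", bytesPerEntry);
          System.out.printf("total:     %,d bytes = %.2f GB (decimal) = %.3f GiB (binary)%n",
                  totalBytes, decimalGB, binaryGiB);
      }
  }

Run as-is, this prints roughly 1.25 GB (decimal) and roughly 1.165 GiB
(binary) for the quoted settings, matching the two figures in the
thread.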
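And, following Erick's point about the size and autowarm count, a
sketch of what a more conservative cache definition might look like.
The specific values here are illustrative assumptions only, not
recommendations from the thread:

  <filterCache
    class="solr.FastLRUCache"
    size="512"
    initialSize="512"
    autowarmCount="32"/>

Since every autowarmed entry re-executes its fq against the new
searcher, the autowarm count is a trade-off between warm caches right
after a commit and how long each new searcher takes to open.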