That's specific to using facet.method=enum, but I do admit it's easy to
miss that.
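
For example, a request along these lines is the kind that goes through the
filterCache (the query and facet field here are just placeholders):

    q=*:*&rows=0&facet=true&facet.field=category&facet.method=enum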

I've added a note about that, though...

Thanks for pointing that out!


On Thu, Jun 19, 2014 at 9:38 AM, Benjamin Wiens
<benjamin.wi...@gmail.com> wrote:
> Thanks to both of you. Yes, the config I mentioned is illustrative; we
> decided on 512 after thorough testing. However, when you google "Solr
> filterCache", the first link is the community wiki, which has a config even
> higher than that illustration and quite different from the official
> reference guide. It might be a good idea to change it unless it's meant
> only for a very small index.
>
> http://wiki.apache.org/solr/SolrCaching#filterCache
>
>     <filterCache class="solr.LRUCache"
>                  size="16384"
>                  initialSize="4096"
>                  autowarmCount="4096"/>
>
>
> On Thu, Jun 19, 2014 at 9:48 AM, Erick Erickson <erickerick...@gmail.com>
> wrote:
>
>> Ben:
>>
>> As Shawn says, you're on the right track...
>>
>> Do note, though, that a 10K size here is probably excessive, YMMV of
>> course.
>>
>> And an autowarm count of 5,000 is almost _certainly_ far more than you
>> want. All these fq clauses get re-executed whenever a new searcher is
>> opened (soft commit or hard commit with openSearcher=true). I realize
>> this may just be illustrative. Is this your actual setup? And if so,
>> what is your motivation for 5,000 autowarm count?
>>
>> Best,
>> Erick
>>
>> On Wed, Jun 18, 2014 at 11:42 AM, Shawn Heisey <s...@elyograg.org> wrote:
>> > On 6/18/2014 10:57 AM, Benjamin Wiens wrote:
>> >> Thanks Erick!
>> >> So let's say I have a config of
>> >>
>> >> <filterCache
>> >> class="solr.FastLRUCache"
>> >> size="10000"
>> >> initialSize="10000"
>> >> autowarmCount="5000"/>
>> >>
>> >> MaxDocuments = 1,000,000
>> >>
>> >> So according to your formula, filterCache should roughly have the
>> >> potential to consume this much RAM:
>> >>
>> >> ((1,000,000 / 8) + 128) * 10,000 = 1,251,280,000 bytes
>> >> 1,251,280,000 / 1,000 = 1,251,280 KB
>> >> 1,251,280 / 1,000 = 1,251.28 MB
>> >> 1,251.28 / 1,000 = 1.25 GB
>> >
>> > Yes, this is essentially correct.  If you want to arrive at a number
>> > that's more accurate for the way that OS tools will report memory,
>> > you'll divide by 1024 instead of 1000 for each of the larger units.
>> > That results in a size of 1.16GB instead of 1.25.  Computers think in
>> > powers of 2; dividing by 1000 assumes a bias toward how people think, in
>> > powers of 10.  It's the same thing that causes your computer to report
>> > 931GB for a 1TB hard drive.
>> >
>> > Thanks,
>> > Shawn
>> >
>>
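
P.S. Following up on the autowarm point above: a more conservative starting
point might look something like the sketch below (the numbers are only
illustrative; 512 is the size Benjamin mentioned settling on after testing,
and the autowarmCount of 32 is just an assumption):

    <filterCache class="solr.FastLRUCache"
                 size="512"
                 initialSize="512"
                 autowarmCount="32"/>

Plugging that into the same formula, ((1,000,000 / 8) + 128) * 512 comes to
roughly 64 MB (about 61 MB if you divide by 1024), and only 32 filter
queries get re-executed each time a new searcher opens.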
