: FilterCache:
...
: So if a query contains two fq params, it will create two separate entries
: for each of these fq params. The value of each entry is the list of ids of
: all documents across the index that match the corresponding fq param. Each
: entry is independent of any other entry.
...
strictly speaking think of the cache values as a "set of (doc) ids" not a
"list of (doc) ids" ... list implies order, and there is none in the
filterCache.
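to make that concrete, here's a toy sketch (not Solr's actual code, and the
index data is made up) of the semantics described above: one independent
cache entry per fq param, each holding the set of matching doc ids for the
whole index.

```python
# Toy model of filterCache semantics -- NOT Solr's implementation.
# Each fq param gets its own independent entry whose value is the SET
# of doc ids (unordered, no duplicates) matching it across the index.

filter_cache = {}

def docs_matching(fq):
    # stand-in for a real index lookup; hypothetical example data
    index = {
        "Org:Apache": {1, 4, 7, 9},
        "Version:13": {4, 5, 9},
    }
    return frozenset(index.get(fq, frozenset()))

def cached_filter(fq):
    if fq not in filter_cache:      # one entry per fq param
        filter_cache[fq] = docs_matching(fq)
    return filter_cache[fq]

# a request with fq=Org:Apache&fq=Version:13 creates two separate
# entries; the request's result is their intersection
result = cached_filter("Org:Apache") & cached_filter("Version:13")
```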
: A minimum size for filterCache could be (total number of fields * avg
: number of unique values per field) ? Is this correct ? I have not enabled
you could do that ... but it would probably be overkill. you really only
need to worry about the # of fields users will be filtering on, and even
then only the values people will be filtering on. if you are using
facet.method=enum for a field, then you might want to ensure it's big
enough for all the unique values on every (facet) field so you don't get
evictions in a single request ... but facet.method=fc is a lot more
efficient in most cases.
it's "ok" for unpopular queries to get evicted from the cache(s), so don't
worry about it too much -- the best way to pick a size for your caches is
to pick a size and then test. if you get lots of evictions and you have
ram to spare: go bigger. if you get no evictions, have a low hit rate,
and want the ram for other things: go smaller.
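for reference, the knob you'd be tuning lives in solrconfig.xml; the
numbers below are just illustrative starting points, not recommendations:

```xml
<!-- solrconfig.xml: filterCache sizing (illustrative values only) -->
<filterCache class="solr.FastLRUCache"
             size="512"
             initialSize="512"
             autowarmCount="128"/>
```

watch the cache stats (hits, evictions) on the admin page after a change
and adjust from there, per the test-and-resize approach above.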
: QueryResultsCache:
...
: q=Status:Active&fq=Org:Apache&fq=Version:13, it will create one entry that
: contains list of ids of documents that match this full query. Is this
: correct?
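yes -- a toy sketch of that (again, not Solr's code; the key shape and the
result data here are just illustrative): the complete request, not each
param, is the cache key, and the value is an ordered list of doc ids.

```python
# Toy model of queryResultCache semantics -- NOT Solr's implementation.
# One entry per complete query (q + all fq params + sort), whose value
# is an ORDERED list of doc ids for the requested result window.

query_result_cache = {}

def run_query(q, fqs, sort="score desc"):
    key = (q, frozenset(fqs), sort)   # the whole request is the cache key
    if key not in query_result_cache:
        # stand-in for real query execution; hypothetical result
        query_result_cache[key] = [9, 4]
    return query_result_cache[key]

# q=Status:Active&fq=Org:Apache&fq=Version:13 -> a single cache entry
ids = run_query("Status:Active", ["Org:Apache", "Version:13"])
```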
: documentCache:
...
: correct ? For sizing, SolrWiki states that "*The size for the documentCache
: should always be greater than <max_results> * <max_concurrent_queries>*".
: Why do we need the max_concurrent_queries parameter here ? Is it when
: max_results is much less than numDocs ? In my case, a q=*:* search is done
max_results in that context is max_results per request ... ie: the "rows"
param. The point is that you don't want a single request to have to fetch
the same document from the index twice because it got a cache miss after a
concurrent request evicted that doc from the documentCache.
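so the rule of thumb is just arithmetic; the concurrency number below is
an assumed value you'd replace with your own peak:

```python
# documentCache rule of thumb: size > <max_results> * <max_concurrent_queries>
# so no doc a live request still needs can be evicted by a concurrent one.
max_results = 50             # the largest "rows" value you expect per request
max_concurrent_queries = 8   # assumed peak concurrency for illustration

min_document_cache_size = max_results * max_concurrent_queries
```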
-Hoss