Hoping I can get a better response with a more directed question:

For the fields used in facet queries, what qualifies as a "large" number
of values?  The wiki uses U.S. states as an example, i.e. 50 unique
values.  More to the point, is there an algorithm I can use to estimate
the cache consumption rate for facet queries?
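
For concreteness, here's the sort of back-of-envelope estimate I'm
hoping to validate.  It assumes faceting on a string field ends up with
one cached bitset (DocSet) per unique value -- whether that assumption
holds is really my question, and all names and numbers are illustrative:

// Back-of-envelope facet cache estimate.  Assumes (unverified) one
// cached bitset per unique field value.
public class FacetCacheEstimate {
    public static void main(String[] args) {
        int maxDoc = 140000;    // docs in our index
        int uniqueValues = 50;  // e.g. U.S. states

        // A bitset over all docs costs roughly maxDoc / 8 bytes.
        long bytesPerBitset = maxDoc / 8;

        // One bitset per unique value, ignoring per-entry overhead.
        long totalBytes = bytesPerBitset * uniqueValues;

        System.out.println("~" + (bytesPerBitset / 1024) + " KB per value, ~"
            + (totalBytes / 1024) + " KB total");
        // Prints: ~17 KB per value, ~854 KB total
    }
}

If that's roughly the right model, 50 unique values looks tiny, which is
why I'd like to know what "large" actually means here.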

-- j

On 4/1/07, Jeff Rodenburg <[EMAIL PROTECTED]> wrote:

I've read through the list entries here, the Lucene list, and the wiki
docs, and still haven't resolved a major pain point for us.  We've been
trying to determine what could possibly be causing the problem in our
environment, and I'm hoping more eyes on this issue can help.

Our scenario: a 150MB index with 140,000 documents, and read/write
servers in place using standard replication.  We're running Tomcat
5.5.17 on Red Hat Enterprise Linux 4, with Java configured to start with
-Xmx1024m.  We encounter Java heap out-of-memory errors on the read
server at staggered times, usually about once every 48 hours.  Search
request load is roughly 2 searches every 3 seconds, with occasional
spikes.  We are using facets: three based on type integer, one based on
type string.  We are using sorts: one based on type sint, two based on
type date.  Caching is disabled.  Our Solr bits are from September 2006.
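
For what it's worth, here's our rough math on the sort fields.  It
assumes (perhaps wrongly) that Lucene's FieldCache pins one per-document
array per sorted field -- 4 bytes/doc for the sint, 8 bytes/doc for each
date -- so the sizes below are our guesses, not verified figures:

// Rough FieldCache estimate for our sort fields.  Per-doc sizes are
// our assumption (4 bytes for sint, 8 for date), not verified.
public class SortCacheEstimate {
    public static void main(String[] args) {
        int maxDoc = 140000;

        long sintBytes = 4L * maxDoc;        // 1 sint sort field
        long dateBytes = 2 * (8L * maxDoc);  // 2 date sort fields

        System.out.println("~" + ((sintBytes + dateBytes) / 1024)
            + " KB pinned for sorting");
        // Prints: ~2734 KB -- nowhere near the 1024m heap, so sorting
        // alone doesn't seem to explain the OOM.
    }
}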

Is there anything in that configuration that we should interrogate?

thanks,
j
