I've read through the list archives here, the Lucene list, and the wiki docs, but haven't been able to resolve a major pain point for us. We've been trying to determine what could possibly cause us to hit this in our environment, and I'm hoping more eyes on the issue can help.
Our scenario:

- 150MB index, 140,000 documents, separate read/write servers using standard replication
- Tomcat 5.5.17 on Red Hat Enterprise Linux 4, Java started with -Xmx1024m
- Java heap out-of-memory errors on the read server at staggered times, usually about once every 48 hours
- search load of roughly 2 searches every 3 seconds, with occasional spikes
- faceting on 4 fields: 3 of type integer, 1 of type string
- sorting on 3 fields: 1 of type sint, 2 of type date
- caching is disabled
- Solr bits are from September 2006

Is there anything in that configuration that we should be interrogating?

thanks, j
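P.S. In case it helps to see it concretely, here is a rough sketch of how the facet/sort fields are declared and what a typical read-server request looks like. The field names are invented for illustration, and the request syntax is reconstructed from memory against our September 2006 bits (sorting may still use the older "q=foo;created desc" form there), so treat this as approximate rather than our exact config:

  <!-- schema.xml: illustrative field names; the types match what I described above -->
  <field name="regionId"   type="integer" indexed="true" stored="true"/>
  <field name="statusCode" type="integer" indexed="true" stored="true"/>
  <field name="yearId"     type="integer" indexed="true" stored="true"/>
  <field name="category"   type="string"  indexed="true" stored="true"/>
  <field name="rank"       type="sint"    indexed="true" stored="true"/>
  <field name="created"    type="date"    indexed="true" stored="true"/>
  <field name="updated"    type="date"    indexed="true" stored="true"/>

  <!-- a typical query against the read server -->
  http://readhost:8080/solr/select?q=foo
      &facet=true
      &facet.field=regionId&facet.field=statusCode
      &facet.field=yearId&facet.field=category
      &sort=created+desc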