Setting maxBufferedDocs to something smaller (say, 300) might be a
better way of limiting your memory usage. I have had difficulties with
the odd huge document when using the default maxBufferedDocs=1000 (in
the next Solr version, there should be an option to limit indexing
based on memory usage rather than document count).
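For reference, a sketch of where those knobs live in solrconfig.xml
(values illustrative; check the example config that ships with your
version):

<indexDefaults>
  <!-- flush the in-memory document buffer after this many docs,
       instead of the default 1000 -->
  <maxBufferedDocs>300</maxBufferedDocs>
  <!-- a lower mergeFactor means fewer segments accumulate before a
       merge, so individual merges stay smaller -->
  <mergeFactor>4</mergeFactor>
</indexDefaults>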
In Java, files, database handles, and other external open resources are
not automatically closed when the object is garbage-collected. You have
to close the resource explicitly. (There is a feature called
'finalization' where you can implement this for your own classes, but
it has turned out to be unreliable, so don't depend on it.)
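A minimal sketch of the usual pattern, closing in a finally block so
the handle is released even when an exception is thrown ("data.txt" is
just a placeholder):

import java.io.FileInputStream;
import java.io.IOException;

public class ExplicitClose {
    public static void main(String[] args) throws IOException {
        FileInputStream in = new FileInputStream("data.txt");
        try {
            int first = in.read();  // use the stream
            System.out.println("first byte: " + first);
        } finally {
            in.close();  // always runs, even if read() throws
        }
    }
}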
Well, when I wasn't sending regular commits I was getting out-of-memory
exceptions from Solr fairly often, which I assume is due to the size of the
documents I'm sending. I'd love to set up autocommit in solrconfig.xml and
not worry about sending commits on the client side, but autocommit doesn't ...
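For anyone setting this up: autocommit is configured under
<updateHandler> in solrconfig.xml. A rough sketch (element names from
memory, and <maxTime> may not exist in older releases):

<updateHandler class="solr.DirectUpdateHandler2">
  <autoCommit>
    <!-- commit automatically once this many docs are pending -->
    <maxDocs>10000</maxDocs>
    <!-- ...or after this many milliseconds, whichever comes first -->
    <maxTime>60000</maxTime>
  </autoCommit>
</updateHandler>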
Mark,
Another question to ask is: do you *really* need to be calling commit every 300
docs? Unless you really need searchers to see your 300 new docs, you don't
need to commit. Just optimize + commit at the end of your whole batch.
Lowering the mergeFactor is the right thing to do. Out of c...
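To make the "one optimize + commit at the end" pattern concrete, a
sketch using the SolrJ client (assuming the SolrJ API from trunk; the
URL and field names are placeholders):

import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class BatchLoad {
    public static void main(String[] args) throws Exception {
        CommonsHttpSolrServer server =
            new CommonsHttpSolrServer("http://localhost:8983/solr");
        for (int i = 0; i < 100000; i++) {
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", Integer.toString(i));
            doc.addField("text", "document body " + i);
            server.add(doc);   // no commit inside the loop
        }
        server.optimize();     // merge segments once, at the end
        server.commit();       // make everything visible to searchers
    }
}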
On Dec 24, 2007 1:25 PM, Mark Baird <[EMAIL PROTECTED]> wrote:
> So my question is, when the Reader is being closed due to a commit, what
> exactly is happening? Is it just being set to null and a new instance being
> created?
No, we reference count the SolrIndexSearcher so that it is closed
immediately once the last request using it finishes.
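For anyone curious, the idea is roughly this (an illustrative sketch,
not Solr's actual class):

import java.util.concurrent.atomic.AtomicInteger;

// Illustrative only: a holder that closes its resource when the last
// user releases it, rather than when a replacement is created.
abstract class RefCounted<T> {
    private final AtomicInteger refCount = new AtomicInteger(1);
    protected final T resource;

    RefCounted(T resource) { this.resource = resource; }

    T get() { return resource; }

    void incref() { refCount.incrementAndGet(); }

    void decref() {
        if (refCount.decrementAndGet() == 0) {
            close();  // no one is using it any more
        }
    }

    protected abstract void close();  // e.g., searcher.close()
}

A commit swaps in a new searcher and decrefs the old one; requests
still executing against the old searcher hold references of their own,
so it is closed only when the last of them finishes.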