Hi,

You said a doc can be up to 700KB and your maxBufferedDocs is set to 900.
Multiply those two numbers: 700KB * 900 is roughly 630MB of buffered
documents, which is far more than the JVM's default heap. Also, save the
optimize call for the very end and your overall indexing time will be
shorter.
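If growing the heap isn't an option, you can also shrink the buffer itself;
maxBufferedDocs lives in solrconfig.xml. A minimal sketch, assuming the
stock example config (the value here is illustrative, not a recommendation):

    <indexDefaults>
      <!-- flush buffered docs to disk once this many are queued;
           at 700KB/doc, 100 keeps the buffer near 70MB instead of ~630MB -->
      <maxBufferedDocs>100</maxBufferedDocs>
    </indexDefaults>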
Otis
Subject: Re: out of memory every time
Just guessing, but I'd say it has something to do with the dynamic fields...
I ran a similar operation (docs ranged from 1K to 2MB). For the
initial indexing, I wrote a job to submit about 100,000 documents to
Solr, committing after every 10 docs. I never sent any optimize
commands. I also used
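For reference, a job like that can be as simple as posting to Solr's XML
update handler. A minimal sketch, assuming the stock example server (URL,
port, and field name are illustrative):

    # add one doc; repeat for each document in the batch
    curl http://localhost:8983/solr/update -H 'Content-Type: text/xml' \
         --data-binary '<add><doc><field name="id">doc1</field></doc></add>'

    # commit every 10 docs rather than per-doc or only at the very end
    curl http://localhost:8983/solr/update -H 'Content-Type: text/xml' \
         --data-binary '<commit/>'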
On Mon, 2008-03-03 at 21:43 +0200, Justin wrote:
> I'm indexing a large number of documents.
>
> As a server I'm using /solr/example/start.jar
>
> No matter how much memory I allocate, it fails around 7200 documents.
How do you allocate the memory?

Something like:

    java -Xms512M -Xmx1500M -jar start.jar
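(-Xms sets the initial heap and -Xmx the maximum; whatever figure you pass
to -Xmx needs to comfortably exceed maxBufferedDocs times your largest doc
size, per the arithmetic above.)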