Hi all,

it seems that we just post too much, too fast, to Solr.

When we post 100 documents (separate calls) and perform a commit, everything 
goes well, but as soon as we start sending thousands of documents and then use 
autocommit or send the commit message, a lot of documents end up missing from 
the index even though they were sent to Solr ...

Does anyone have experience with how many documents you can import, and at what 
speed, so that Solr stays stable?

We use Tomcat 5.5 and our Java memory limit is 2 GB.
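The batching approach under discussion can be sketched as follows. This is a minimal illustration, not our actual code: `postBatch` and `commit` are placeholder stubs standing in for the real HTTP posts of `<add>` and `<commit/>` XML to Solr's update handler, and the batch size of 1000 is an arbitrary assumption.

```java
import java.util.ArrayList;
import java.util.List;

public class BatchPoster {
    // Placeholder: in reality this would POST an <add>...</add> XML
    // message containing the batch to the Solr update handler.
    static void postBatch(List<String> docs) {
    }

    // Placeholder: in reality this would POST a <commit/> message.
    static void commit() {
    }

    // Post the documents in batches of batchSize, issuing one explicit
    // commit per batch instead of one per document (or relying on
    // autocommit). Returns the number of commits issued.
    static int postAll(List<String> docs, int batchSize) {
        int commits = 0;
        for (int i = 0; i < docs.size(); i += batchSize) {
            postBatch(docs.subList(i, Math.min(i + batchSize, docs.size())));
            commit();
            commits++;
        }
        return commits;
    }

    public static void main(String[] args) {
        List<String> docs = new ArrayList<>();
        for (int i = 0; i < 2500; i++) {
            docs.add("<doc id='" + i + "'/>");
        }
        System.out.println(postAll(docs, 1000)); // prints 3
    }
}
```

Committing once per batch keeps the number of index flushes (and the resulting segment churn) bounded, regardless of how many documents are streamed in.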

Greetings,
Tim
________________________________________
From: Mike Klaas [EMAIL PROTECTED]
Sent: Tuesday, May 6, 2008 20:17
To: solr-user@lucene.apache.org
Subject: Re: Deletes increase while adding new documents

On 6-May-08, at 4:56 AM, Tim Mahy wrote:

> Hi all,
>
> it seems that we get errors during the auto-commit :
>
>
> java.io.FileNotFoundException: /opt/solr/upload/nl/archive/data/index/_4x.fnm (No such file or directory)
>        at java.io.RandomAccessFile.open(Native Method)
>        at java.io.RandomAccessFile.<init>(RandomAccessFile.java:212)
>        at org.apache.lucene.store.FSDirectory$FSIndexInput$Descriptor.<init>(FSDirectory.java:501)
>        at org.apache.lucene.store.FSDirectory$FSIndexInput.<init>(FSDirectory.java:526)
>
> the _4x.fnm file is not on the file system. When we switch from
> autocommit to manual commits through XML messages, we get the same
> kind of errors.
> Any idea what could be wrong in our configuration to cause these
> exceptions?

I have only heard of that error appearing in two cases: either the
index is corrupt, or something else deleted the file. Are you sure
that only one Solr instance accesses the directory, and that nothing
else ever touches it?

Can you reproduce the deletion issue with a small number of documents
(something that could be tested by one of us)?

-Mike




