On Fri, Nov 13, 2009 at 11:45 PM, Lance Norskog wrote:
> I would go with polling Solr to find what is not yet there. In
> production, it is better to assume that things will break, and have
> backstop janitors that fix them. And then test those janitors
> regularly.
Good idea, Lance. I certainly
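For anyone following along, Lance's backstop-janitor idea can be sketched roughly like this. `fetch_solr_ids` and `requeue` are hypothetical stand-ins for your own Solr query and your feeder's retry queue; this is a sketch of the pattern, not a drop-in tool:

```python
# Sketch of a "backstop janitor": compare the IDs the feeder believes it
# sent with the IDs Solr actually has, and re-queue whatever is missing.
# fetch_solr_ids() and requeue() are hypothetical stand-ins for your own
# Solr query (e.g. against the /select handler) and your retry queue.

def find_missing(sent_ids, indexed_ids):
    """IDs that were sent to Solr but never showed up in the index."""
    return set(sent_ids) - set(indexed_ids)

def run_janitor(sent_ids, fetch_solr_ids, requeue):
    missing = find_missing(sent_ids, fetch_solr_ids())
    for doc_id in sorted(missing):
        requeue(doc_id)
    return missing

if __name__ == "__main__":
    # Dry run against fake data instead of a live Solr instance.
    requeued = []
    missing = run_janitor(["a", "b", "c"], lambda: ["a", "c"],
                          requeued.append)
    print(sorted(missing))  # ['b']
```

Running it on a schedule (cron or similar) is what makes it a "janitor" rather than a one-off repair, and it doubles as the regular test Lance suggests.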
On Fri, Nov 13, 2009 at 11:02 PM, Otis Gospodnetic
wrote:
> So I think the question is really:
> "If I stop the servlet container, does Solr issue a commit in the shutdown
> hook in order to ensure all buffered docs are persisted to disk before the
> JVM exits".
Exactly right, Otis.
> I don't
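One way to sidestep the shutdown-hook question entirely is to issue an explicit commit on every core before stopping the container. A minimal sketch, assuming the standard update handler with the `commit=true` parameter (the base URL and core names below are examples only):

```python
# Commit each core explicitly before stopping the servlet container,
# rather than relying on any shutdown hook to flush buffered docs.
from urllib.request import urlopen

def commit_url(base_url, core):
    """Build the update-handler URL that forces a commit on one core."""
    return "%s/%s/update?commit=true" % (base_url.rstrip("/"), core)

def commit_all(base_url, cores):
    for core in cores:
        # commit=true on the update handler persists all buffered docs.
        urlopen(commit_url(base_url, core), data=b"").read()

# Example call (not run here, needs a live Solr):
# commit_all("http://localhost:8983/solr", ["core0", "core1"])
```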
On Fri, Nov 13, 2009 at 4:09 PM, Chris Hostetter
wrote:
> please don't kill -9 ... it's grossly overkill, and doesn't give your
[ ... snip ... ]
> Alternately, you could take advantage of the "enabled" feature from your
> client (just have it test the enabled url ever N updates or so) and when
> i
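The "test the enabled url every N updates" pattern Chris describes might look like the following on the client side. The exact ping/healthcheck URL depends on your solrconfig.xml, so `is_enabled` here is a hypothetical callback that would hit it:

```python
# Throttled "enabled" check: the feeder only hits Solr's ping URL every
# N updates, and pauses feeding when Solr reports itself disabled.
# is_enabled() is a hypothetical callback wrapping your ping request.

def should_pause(update_count, is_enabled, check_every=100):
    """True when it is time to check and Solr reports itself disabled."""
    if update_count % check_every != 0:
        return False        # skip the check most of the time
    return not is_enabled()

if __name__ == "__main__":
    # With Solr "disabled", we pause only on the counts we actually check.
    print([n for n in range(1, 301) if should_pause(n, lambda: False)])
    # [100, 200, 300]
```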
Sent: Fri, November 13, 2009 4:09:00 PM
Subject: Re: Stop solr without losing documents
: which documents have been updated before a successful commit. Now
: stopping solr is as easy as kill -9.
please don't kill -9 ... it's grossly overkill, and doesn't give your
servlet container a fair chance to clean things up. A lot of work has been
done to make Lucene indexes robust to hard
On Fri, Nov 13, 2009 at 4:32 AM, gwk wrote:
> I don't know if this is the best solution, or even if it's applicable to
> your situation but we do incremental updates from a database based on a
> timestamp, (from a simple separate sql table filled by triggers so deletes
Thanks, gwk! This doesn't
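gwk's trigger-filled table can be sketched end to end. The table and trigger names below are made up, and sqlite3 is used only to keep the demo self-contained; the real triggers would live in your production database:

```python
# Sketch of gwk's approach: a separate change-log table kept current by
# triggers, so the incremental indexer can pick up inserts AND deletes
# by timestamp. sqlite3 in-memory keeps the demo self-contained.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE docs (id INTEGER PRIMARY KEY, body TEXT);
CREATE TABLE changelog (doc_id INTEGER, op TEXT,
                        ts TIMESTAMP DEFAULT CURRENT_TIMESTAMP);
CREATE TRIGGER docs_ins AFTER INSERT ON docs BEGIN
    INSERT INTO changelog (doc_id, op) VALUES (NEW.id, 'upsert');
END;
CREATE TRIGGER docs_del AFTER DELETE ON docs BEGIN
    INSERT INTO changelog (doc_id, op) VALUES (OLD.id, 'delete');
END;
""")
conn.execute("INSERT INTO docs VALUES (1, 'hello')")
conn.execute("DELETE FROM docs WHERE id = 1")
# The indexer would poll changelog rows newer than its last-seen timestamp.
ops = list(conn.execute("SELECT doc_id, op FROM changelog ORDER BY rowid"))
print(ops)  # [(1, 'upsert'), (1, 'delete')]
```

Because the delete lands in the change log too, the indexer can send a matching delete-by-id to Solr instead of silently leaving a stale document behind.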
Michael wrote:
I've got a process external to Solr that is constantly feeding it new
documents, retrying if Solr is unresponsive. What's the right way to
stop Solr (running in Tomcat) so no documents are lost?
Currently I'm committing all cores and then running catalina's stop
script, but between my commit and
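Pulling the thread's suggestions together: the window Michael worries about (documents arriving between the commit and the stop script) closes if the feeder is paused first, then the commit issued, then the container stopped. A sketch of that ordering, with all three steps as hypothetical callbacks standing in for your own code:

```python
# Close the commit-to-stop window by ordering the steps: once the feeder
# is paused, no new documents can arrive after the final commit.
# All three callbacks are hypothetical stand-ins for your own code.

def safe_stop(pause_feeder, commit_all_cores, stop_container):
    pause_feeder()       # e.g. flip the "enabled" flag the client checks
    commit_all_cores()   # persist everything buffered, on every core
    stop_container()     # now catalina's stop script can't lose docs

if __name__ == "__main__":
    calls = []
    safe_stop(lambda: calls.append("pause"),
              lambda: calls.append("commit"),
              lambda: calls.append("stop"))
    print(calls)  # ['pause', 'commit', 'stop']
```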