Hi,

That setup should work and it should not corrupt the index.  I do not fully 
follow why you are doing this, though, and I have a feeling you are not solving 
the real problem.  Why is this better than the typical Solr master/slave setup?  
As far as I can tell, all you have done is skip the data copying step.
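
For comparison, the typical master/slave flow (the rsync-based collection
distribution) looks roughly like this -- a sketch from memory of the example
solrconfig.xml and the solr/bin scripts, so double-check the wiki for the
exact options:

  # master (solrconfig.xml): take an index snapshot after every commit
  #   <listener event="postCommit" class="solr.RunExecutableListener">
  #     <str name="exe">snapshooter</str>
  #     <str name="dir">solr/bin</str>
  #     <bool name="wait">true</bool>
  #   </listener>

  # slave, from cron: pull the latest snapshot, install it, and issue a
  # commit so the slave opens a new (warmed) searcher
  solr/bin/snappuller && solr/bin/snapinstaller

That is where the data copying happens, and it is the step your setup avoids
by sharing one data directory.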


Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch



----- Original Message ----
> From: Jagadish Rath <[EMAIL PROTECTED]>
> To: solr-user@lucene.apache.org
> Sent: Wednesday, October 1, 2008 4:34:21 AM
> Subject: Re: Any problem in running two solr instances on the same machine 
> using the same directory?
> 
> Can anyone throw some light on this issue?
> 
> On Fri, Sep 26, 2008 at 11:48 AM, Jagadish Rath wrote:
> 
> > Hi
> >
> >   I am running two solr instances on the same machine using the same data
> > directory, one on port 8982 and the other on port 8984.
> >
> >    - The 1st instance *only accepts commits* (indexer) -- *port 8982*
> >
> >          -- It has all cache sizes set to 0, to avoid searcher warmup.
> >
> >    - The 2nd instance *accepts all the queries* (searcher) -- *port 8984*
> >
> >          -- It has non-zero cache sizes, as it needs to handle queries.
> >
> >    - I have a cron job *which does a dummy commit to the 2nd instance (on
> >      port 8984)* every minute, to update its searcher:
> >
> >          curl http://localhost:8984/solr/update -s -H
> >              'Content-type:text/xml; charset=utf-8' -d '<commit/>'
> >
> >  I am trying to use this as a *solution to the "maxWarmingSearchers limit
> > exceeded" error* that occurs as a result of a large number of commits, and
> > as an alternative to the conventional master/slave setup.
> >
> >   I have the following questions:
> >
> >    - *Are there any known or foreseeable issues with this solution?*
> >
> >          -- Does it result in a corrupted index?
> >
> >    - *What are the other solutions to the "maxWarmingSearchers limit
> >      exceeded" error?*
> >
> >  I would really appreciate a quick response.
> >
> > Thanks
> > Jagadish Rath
> >
