Yes, as far as I know, what Brian said is correct.  Also, as far as I know, 
there is nothing that gracefully handles problematic Solr instances during 
distributed search.  Shall we make this a Solr 1.4 feature request?
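
For anyone hitting this today, here is a minimal SolrJ sketch of the current 
behavior (hypothetical hosts, with host3 assumed to be down): the whole 
distributed request fails rather than returning results from the shards that 
are up.

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;

public class DeadShardDemo {
    public static void main(String[] args) throws Exception {
        SolrQuery q = new SolrQuery("*:*");
        // host3 is assumed to be out of rotation
        q.set("shards", "host1:8983/solr,host2:8983/solr,host3:8983/solr");
        try {
            new CommonsHttpSolrServer("http://host1:8983/solr").query(q);
        } catch (Exception e) {
            // The coordinating node's ConnectException propagates back as an
            // error; no partial response from host1/host2 is returned.
            System.err.println("distributed query failed: " + e.getMessage());
        }
    }
}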


Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch



----- Original Message ----
> From: Brian Whitman <[EMAIL PROTECTED]>
> To: solr-user@lucene.apache.org
> Sent: Monday, August 18, 2008 11:57:23 AM
> Subject: Re: partialResults, distributed search & SOLR-502
> 
> On Aug 18, 2008, at 11:51 AM, Ian Connor wrote:
> > On Mon, Aug 18, 2008 at 9:31 AM, Ian Connor wrote:
> >> I don't think this patch is working yet. If I take a shard out of
> >> rotation (even just one out of four), I get an error:
> >>
> >> org.apache.solr.client.solrj.SolrServerException:
> >> java.net.ConnectException: Connection refused
> >>
> 
> 
> It's my understanding that SOLR-502 is really only concerned with
> queries timing out (i.e. they connect but take over N seconds to
> return).  If the connection gets refused, then a non-Solr Java
> connection exception is thrown.  Something would have to be put in
> that (optionally) catches connection errors and still builds the
> response from the shards that did respond.
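
To make that distinction concrete, here is a minimal SolrJ sketch (hypothetical 
hosts, not a committed Solr feature): timeAllowed only covers shards that 
connect but run long, and the partialResults flag in the response header marks 
the incomplete results; a refused connection never gets that far.

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;

public class TimeAllowedDemo {
    public static void main(String[] args) throws Exception {
        SolrQuery q = new SolrQuery("*:*");
        q.set("shards", "host1:8983/solr,host2:8983/solr,host3:8983/solr");
        q.set("timeAllowed", 2000);  // ms; shards that connect but exceed this return what they have

        QueryResponse rsp = new CommonsHttpSolrServer("http://host1:8983/solr").query(q);

        // SOLR-502 flags incomplete results in the response header
        Object partial = rsp.getHeader().get("partialResults");
        if (Boolean.TRUE.equals(partial)) {
            System.out.println("some shards timed out; results are partial");
        }
        // A shard that refuses the connection never reaches this point --
        // the query fails before any partial response is assembled.
    }
}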
> 
> 
> 
> 
> 
> >> On Fri, Aug 15, 2008 at 1:23 PM, Brian Whitman wrote:
> >>> I was going to file a ticket like this:
> >>>
> >>> "A SOLR-303 query with &shards=host1,host2,host3 when host3 is  
> >>> down returns
> >>> an error. One of the advantages of a shard implementation is that  
> >>> data can
> >>> be stored redundantly across different shards, either as direct  
> >>> copies (e.g.
> >>> when host1 and host3 are snapshooter'd copies of each other) or  
> >>> where there
> >>> is some "data RAID" that stripes indexes for redundancy."
> >>>
> >>> But then I saw SOLR-502, which appears to be committed.
> >>>
> >>> If I have the above scenario (host1,host2,host3 where host3 is not  
> >>> up) and
> >>> set a timeAllowed, will I still get a 400 or will it come back with
> >>> "partial" results? If not, can we think of a way to get this to  
> >>> work? It's
> >>> my understanding already that duplicate docIDs are merged in the  
> >>> SOLR-303
> >>> response, so other than building in some "this host isn't working,  
> >>> just move
> >>> on and report it" and of course the work to index redundantly, we  
> >>> wouldn't
> >>> need anything to achieve a good redundant shard implementation.
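
For what it's worth, here is a rough client-side sketch of the "direct copies" 
idea (the replica grouping and the ping-based check are my own illustration, 
not anything in Solr): keep a list of redundant copies per logical shard and 
build the shards parameter from the first copy in each group that responds.

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;

public class RedundantShardsSketch {
    // Each row is one logical shard; the entries are redundant copies of the
    // same index (e.g. host3 is a snapshooter copy of host1).  Hypothetical hosts.
    static final String[][] COPIES = {
        { "host1:8983/solr", "host3:8983/solr" },
        { "host2:8983/solr", "host4:8983/solr" },
    };

    public static void main(String[] args) throws Exception {
        StringBuilder shards = new StringBuilder();
        for (String[] group : COPIES) {
            for (String host : group) {
                try {
                    new CommonsHttpSolrServer("http://" + host).ping();  // cheap liveness check
                    if (shards.length() > 0) shards.append(',');
                    shards.append(host);
                    break;  // first live copy of this shard wins
                } catch (Exception e) {
                    // "this host isn't working, just move on" -- try the next copy
                }
            }
        }
        SolrQuery q = new SolrQuery("*:*");
        q.set("shards", shards.toString());  // shards param omits the http:// prefix
        QueryResponse rsp = new CommonsHttpSolrServer("http://host1:8983/solr").query(q);
        System.out.println(rsp.getResults().getNumFound() + " hits from the copies that answered");
    }
}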
> >>>
> >>> B
> >>>
> >>>
> >>>
> >>
> >>
> >>
> >> --
> >> Regards,
> >>
> >> Ian Connor
> >>
> >
> >
> >
> > -- 
> > Regards,
> >
> > Ian Connor
> 
> --
> http://variogr.am/
