Thanks Simon for these leads.

Here are my answers:

Can you tell if GC is happening more frequently than usual/expected  ?

GC is OK.
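In case it helps others double-check, GC activity can be logged by adding flags along these lines to the JVM running Solr (HotSpot flags current for Java 6; the log path is a placeholder for your environment):

```shell
# Enable verbose GC logging on the Solr JVM; review the log for
# frequent or long full GCs around the slow queries
JAVA_OPTS="$JAVA_OPTS -verbose:gc -XX:+PrintGCDetails \
  -XX:+PrintGCTimeStamps -Xloggc:/var/log/solr/gc.log"
```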

Is the index optimized - if not, how many segments ?

According to the admin statistics page:
One shard (master/slave) has 10 segments
The other shard (master/slave) has 13 segments

Is this OK? The optimize job runs every night.
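For what it's worth, 10-13 segments between nightly optimizes is not unusual and is unlikely on its own to explain a 10-second query. If you want to rule it out, an optimize can also be triggered by hand against each master; this is a sketch using the shard address from the original query, so adjust host and port:

```shell
# Ask the shard master's update handler to merge down to one segment
# (I/O-heavy; best run off-peak)
curl 'http://shard1:5001/solr/update?optimize=true'
```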


It's possible that one of the shards is behind a flaky network connection.

Will check ...


Is the 10s performance just for the Solr query or wallclock time at
the browser ?

Both.
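One way to separate the two: run the query with curl directly against Solr and compare the QTime reported in the response header (Solr-side query time, in ms) with curl's total time. The URL below is a trimmed sketch of the query from the original mail:

```shell
# QTime in the response is Solr's own query time; curl's total time
# also includes network transfer and serializing the full response
curl -s -w 'total: %{time_total}s\n' \
  'http://server:5000/solr/select?q=story_search_field_en:(water%20boston)&rows=10'
```

If QTime is small but total time is large, the time is going into the network or into shipping a big response (rows=350 with sorting can make for a large payload).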

You can monitor cache statistics from the admin console 'statistics' page

Thanks
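For the record, the same cache numbers can be pulled without a browser. Assuming the admin stats page path of the Solr 1.4/3.x line, something like:

```shell
# Dump cache hit ratios from the admin stats page; low hit ratios or
# heavy evictions on filterCache/queryResultCache are worth a look
curl -s 'http://server:5000/solr/admin/stats.jsp' | grep -i 'hitratio'
```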


Are you seeing anything untoward in the solr logs ?

I see this stack trace (the broken pipe suggests the client closed its connection before Solr finished writing the response):

Aug 10, 2011 1:49:13 PM org.apache.solr.common.SolrException log
SEVERE: ClientAbortException:  java.net.SocketException: Broken pipe
        at org.apache.catalina.connector.OutputBuffer.realWriteBytes(OutputBuffer.java:358)
        at org.apache.tomcat.util.buf.ByteChunk.append(ByteChunk.java:325)
        at org.apache.catalina.connector.OutputBuffer.writeBytes(OutputBuffer.java:381)
        at org.apache.catalina.connector.OutputBuffer.write(OutputBuffer.java:370)
        at org.apache.catalina.connector.CoyoteOutputStream.write(CoyoteOutputStream.java:89)
        at org.apache.solr.common.util.FastOutputStream.flushBuffer(FastOutputStream.java:183)
        at org.apache.solr.common.util.JavaBinCodec.marshal(JavaBinCodec.java:89)
        at org.apache.solr.request.BinaryResponseWriter.write(BinaryResponseWriter.java:48)
        at org.apache.solr.servlet.SolrDispatchFilter.writeResponse(SolrDispatchFilter.java:322)
        at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:254)
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
        at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
        at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
        at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128)
        at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
        at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
        at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:286)
        at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:845)
        at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:583)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:447)
        at java.lang.Thread.run(Thread.java:619)
Caused by: java.net.SocketException: Broken pipe
        at java.net.SocketOutputStream.socketWrite0(Native Method)
        at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:92)
        at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
        at org.apache.coyote.http11.InternalOutputBuffer.realWriteBytes(InternalOutputBuffer.java:740)
        at org.apache.tomcat.util.buf.ByteChunk.flushBuffer(ByteChunk.java:434)
        at org.apache.tomcat.util.buf.ByteChunk.append(ByteChunk.java:349)
        at org.apache.coyote.http11.InternalOutputBuffer$OutputStreamOutputBuffer.doWrite(InternalOutputBuffer.java:764)
        at org.apache.coyote.http11.filters.IdentityOutputFilter.doWrite(IdentityOutputFilter.java:127)
        at org.apache.coyote.http11.InternalOutputBuffer.doWrite(InternalOutputBuffer.java:573)
        at org.apache.coyote.Response.doWrite(Response.java:560)
        at org.apache.catalina.connector.OutputBuffer.realWriteBytes(OutputBuffer.java:353)
        ... 21 more

Charles-André Martin


800 Square Victoria
Montréal (Québec) H4Z 0A3
Tel: (514) 504-2703


-----Original Message-----
From: simon [mailto:mtnes...@gmail.com]
Sent: August-10-11 1:52 PM
To: solr-user@lucene.apache.org
Subject: Re: query time problem

Off the top of my head ...

Can you tell if GC is happening more frequently than usual/expected  ?

Is the index optimized - if not, how many segments ?

It's possible that one of the shards is behind a flaky network connection.

Is the 10s performance just for the Solr query or wallclock time at
the browser ?

You can monitor cache statistics from the admin console 'statistics' page

Are you seeing anything untoward in the solr logs ?

-Simon

On Wed, Aug 10, 2011 at 1:11 PM, Charles-Andre Martin
<charles-andre.mar...@sunmedia.ca> wrote:
> Hi,
>
>
>
> I've noticed poor performance for my solr queries in the past few days.
>
>
>
> Queries of that type :
>
>
>
> http://server:5000/solr/select?q=story_search_field_en:(water boston) OR 
> story_search_field_fr:(water boston)&rows=350&start=0&sort=r_modify_date 
> desc&shards=shard1:5001/solr,shard2:5002/solr&fq=type:(cch_story OR 
> cch_published_story)
>
>
>
> Are slow (more than 10 seconds).
>
>
>
> I would like to know how I could investigate this problem. I tried
> specifying the parameters &debugQuery=on&explainOther=on but this
> didn't help much.
>
>
>
> I also monitored the shards log. Sometimes, there is broken pipe in the 
> shards logs.
>
>
>
> Also, is there a way I could monitor the cache statistics ?
>
>
>
> For your information, every shard's master and slave machines have enough
> RAM and disk space.
>
>
>
>
>
> Charles-André Martin
>
>
>
>
>
>
