A couple of things don't particularly make sense here:

You specify defType=edismax and q=*:*, yet your qf= parameter is empty,
so you're searching across whatever you defined as the default
field in the request handler. What do you see if you attach
&debug=true to the query?
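
For example (a sketch; the host, port and core name are placeholders):

http://localhost:8983/solr/yourcore/select?q=*:*&defType=edismax&debug=true

The "parsedquery" entry in the debug output will show you which field(s)
the query actually ran against.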

I think this clause is wrong:
(cents_ri: [* 3000])

I think you mean
(cents_ri: [* TO 3000])
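
With that fix, the full filter in your query would read:

fq={!cost=99 cache=false}(shipping_country_codes_mt:(DE OR EURO OR EUR OR ALL)) AND (cents_ri:[* TO 3000])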

I'm not sure either of those is the problem, but they're the places I'd start.

As far as the size of your filter cache goes, a hit ratio of 0.87 actually
isn't bad. Upping the size would add some marginal benefit, but it's
unlikely to be a magic bullet.
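
If you do want to experiment, it's the filterCache entry in solrconfig.xml,
something like this (the sizes here are only illustrative):

<filterCache class="solr.LFUCache"
             size="256"
             initialSize="256"
             autowarmCount="32"
             timeDecay="true"/>

Given that your stats show 400 evictions against a size of 64, a bigger
cache would mostly cut evictions rather than query time.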

But are these slow queries constant or intermittent? In other words,
are all queries of this general form slow, or just the first few? In
particular, is the first query that sorts on this field slow but subsequent
ones faster? If so, consider adding a query to the newSearcher
event in solrconfig.xml that mentions this sort; that will pre-warm
the sort values. Also, defining all fields you sort on with docValues="true"
is recommended at this point.
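
A sketch for solrconfig.xml, re-using the sort from your slow query:

<listener event="newSearcher" class="solr.QuerySenderListener">
  <arr name="queries">
    <lst>
      <str name="q">*:*</str>
      <str name="sort">view_counter_i desc</str>
      <str name="rows">0</str>
    </lst>
  </arr>
</listener>

and in the schema (the type name here is assumed; changing docValues
requires re-indexing):

<field name="view_counter_i" type="int" indexed="true" stored="true"
       docValues="true"/>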

What I'd try is removing clauses one at a time to see which one is the
problem. On the surface this is surprisingly slow.
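
Something like running these variants one at a time (same q, sort and
rows, one fq per run) and comparing QTime:

&fq={!cost=1 cache=true}type_s:Product AND is_valid_b:true
&fq={!cost=50 cache=true}in_languages_t:de
&fq={!cost=99 cache=false}(shipping_country_codes_mt:(DE OR EURO OR EUR OR ALL)) AND (cents_ri:[* TO 3000])

plus one run with the sort but no fq at all.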

And how heavily loaded is the server? Your autocommit settings look fine;
my question is more how much indexing and querying is going on when you
take these measurements.

Best,
Erick

On Wed, Oct 14, 2015 at 3:03 AM, Lorenzo Fundaró
<lorenzo.fund...@dawandamail.com> wrote:
> Hello,
>
> I have following conf for filters and commits :
>
> Concurrent LFU Cache(maxSize=64, initialSize=64, minSize=57,
> acceptableSize=60, cleanupThread=false, timeDecay=true, autowarmCount=8,
> regenerator=org.apache.solr.search.SolrIndexSearcher$2@169ee0fd)
>
>      <autoCommit>
>        <!-- Every 15 seconds -->
>        <maxTime>${solr.autoCommit.maxTime:15000}</maxTime>
>        <openSearcher>false</openSearcher>
>      </autoCommit>
>
>      <autoSoftCommit>
>        <!-- Every 10 minutes -->
>        <maxTime>${solr.autoSoftCommit.maxTime:600000}</maxTime>
>      </autoSoftCommit>
>
> and the following stats for filters:
>
> lookups = 3602
> hits  =  3148
> hit ratio = 0.87
> inserts = 455
> evictions = 400
> size = 63
> warmupTime = 770
>
> *Problem: *a lot of slow queries, for example:
>
> {q=*:*&tie=1.0&defType=edismax&qt=standard&json.nl=map&qf=&fl=pk_i,score&start=0&sort=view_counter_i
> desc&fq={!cost=1 cache=true}type_s:Product AND is_valid_b:true&fq={!cost=50
> cache=true}in_languages_t:de&fq={!cost=99
> cache=false}(shipping_country_codes_mt: (DE OR EURO OR EUR OR ALL)) AND
> (cents_ri: [* 3000])&rows=36&wt=json} hits=3768003 status=0 QTime=1378
>
> I could increase the size of the filter so I would decrease the amount of
> evictions, but it seems to me this would not be solving the root problem.
>
> Some ideas on where/how to start for optimisation ? Is it actually normal
> that this query takes this time ?
>
> We have an index of ~14 million docs. 4 replicas with two cores and 1 shard
> each.
>
> thank you.
>
>
> --
>
> --
> Lorenzo Fundaro
> Backend Engineer
> E-Mail: lorenzo.fund...@dawandamail.com
>
> Fax       + 49 - (0)30 - 25 76 08 52
> Tel        + 49 - (0)179 - 51 10 982
>
> DaWanda GmbH
> Windscheidstraße 18
> 10627 Berlin
>
> Geschäftsführer: Claudia Helming, Michael Pütz
> Amtsgericht Charlottenburg HRB 104695 B
