bq: They are definitely cached. The second time runs in no time.

That's not what I was referring to. Submitting the same query over
and over will certainly hit the queryResultCache and return in
almost no time.

What I meant was to do things like vary the fq clause where you've
set cache=false, or vary the parameters in the fq clauses. The point
is to take measurements only after enough queries have gone through
that you're sure the low-level caches are initialized. But the
queries all have to be different, or you hit the queryResultCache.
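
For example, a minimal warm-up sketch (the collection name and the
value range are made up for illustration; cents_ri is the range field
from the slow query below):

    # Fire queries that differ only in the range bound, so each one is
    # unique (misses the queryResultCache) but still exercises the
    # low-level caches. -g stops curl from globbing the {} and [].
    for price in $(seq 100 100 30000); do
      curl -sg "http://localhost:8983/solr/products/select?q=*:*&fq={!cache=false}cents_ri:[*+TO+$price]&rows=0" > /dev/null
    done
    # Only measure QTime on queries issued after this loop completes.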

Best,
Erick

On Wed, Oct 14, 2015 at 9:50 AM, Lorenzo Fundaró
<lorenzo.fund...@dawandamail.com> wrote:
> On 14 October 2015 at 18:18, Pushkar Raste <pushkar.ra...@gmail.com> wrote:
>
>> Consider
>> 1. Turning on docValues for fields you are sorting or faceting on. This
>> will require you to reindex your data.
>>
>
> Yes. I am considering doing this.
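>
> As a rough sketch, the change is just adding docValues to the field
> definition in schema.xml (the field/type names here mirror the sort
> field in the slow query below; adjust to your schema) and reindexing:
>
>     <field name="view_counter_i" type="int" indexed="true"
>            stored="true" docValues="true"/>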
>
>
>> 2. Try using a TrieInt field type for the fields you do range searches
>> on (you may have to fiddle with precisionStep) to balance index size vs.
>> performance.
>>
>
> Ok.
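>
> For illustration, the fieldType involved would look something like this
> (the name and precisionStep value are examples; a smaller step indexes
> more terms but makes range queries like the cents_ri one below cheaper):
>
>     <fieldType name="tint" class="solr.TrieIntField"
>                precisionStep="4" positionIncrementGap="0"/>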
>
>
>> 3. If slowness is intermittent, turn on GC logging, check for any long
>> pauses, and tune your GC strategy accordingly.
>>
>
> The GC strategy is the default that comes from starting Solr with the
> bin/solr start script. I was looking at the GC logs and saw no Full GC at all.
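>
> (For reference, the start script's GC logging boils down to JVM flags
> along these lines, settable via GC_LOG_OPTS in solr.in.sh; the log path
> is an example:)
>
>     -verbose:gc -Xloggc:/var/solr/logs/solr_gc.log
>     -XX:+PrintGCDetails -XX:+PrintGCDateStamps
>     -XX:+PrintGCApplicationStoppedTime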
>
> Thank you!
>
>
>
>>
>> -- Pushkar Raste
>>
>> On Wed, Oct 14, 2015 at 5:03 AM, Lorenzo Fundaró <
>> lorenzo.fund...@dawandamail.com> wrote:
>>
>> > Hello,
>> >
>> > I have the following config for filters and commits:
>> >
>> > Concurrent LFU Cache(maxSize=64, initialSize=64, minSize=57,
>> > acceptableSize=60, cleanupThread=false, timeDecay=true, autowarmCount=8,
>> > regenerator=org.apache.solr.search.SolrIndexSearcher$2@169ee0fd)
>> >
>> >      <autoCommit>
>> >        <!-- Every 15 seconds -->
>> >        <maxTime>${solr.autoCommit.maxTime:15000}</maxTime>
>> >        <openSearcher>false</openSearcher>
>> >      </autoCommit>
>> >
>> >      <autoSoftCommit>
>> >        <!-- Every 10 minutes -->
>> >        <maxTime>${solr.autoSoftCommit.maxTime:600000}</maxTime>
>> >      </autoSoftCommit>
>> >
>> > and the following stats for filters:
>> >
>> > lookups = 3602
>> > hits  =  3148
>> > hit ratio = 0.87
>> > inserts = 455
>> > evictions = 400
>> > size = 63
>> > warmupTime = 770
>> >
>> > *Problem:* a lot of slow queries, for example:
>> >
>> > {q=*:*&tie=1.0&defType=edismax&qt=standard&json.nl=map&qf=
>> > &fl=pk_i,score&start=0&sort=view_counter_i desc
>> > &fq={!cost=1 cache=true}type_s:Product AND is_valid_b:true
>> > &fq={!cost=50 cache=true}in_languages_t:de
>> > &fq={!cost=99 cache=false}(shipping_country_codes_mt: (DE OR EURO OR
>> > EUR OR ALL)) AND (cents_ri: [* 3000])
>> > &rows=36&wt=json} hits=3768003 status=0 QTime=1378
>> >
>> > I could increase the size of the filterCache to reduce the number of
>> > evictions, but it seems to me that would not solve the root problem.
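>> >
>> > (For illustration, that knob lives in solrconfig.xml and would look
>> > something like this; the sizes are made up:)
>> >
>> >     <filterCache class="solr.LFUCache" size="512" initialSize="512"
>> >                  autowarmCount="32" timeDecay="true"/>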
>> >
>> > Any ideas on where/how to start optimising? Is it actually normal for
>> > this query to take this long?
>> >
>> > We have an index of ~14 million docs: 4 replicas with two cores and 1
>> > shard each.
>> >
>> > Thank you.
>> >
>> >
>>
>
>
>
> --
> Lorenzo Fundaro
> Backend Engineer
> E-Mail: lorenzo.fund...@dawandamail.com
>
> Fax       + 49 - (0)30 - 25 76 08 52
> Tel        + 49 - (0)179 - 51 10 982
>
> DaWanda GmbH
> Windscheidstraße 18
> 10627 Berlin
>
> Geschäftsführer: Claudia Helming, Michael Pütz
> Amtsgericht Charlottenburg HRB 104695 B
