Thanks, Yonik.

Should I consider sharding in this case (I actually have one big index with
replication)? Or should I create two indexes, one for search and the other
for faceting, on a different machine?
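
For reference, a sharded request in Solr is just the normal query plus a
shards parameter listing the cores to query. A minimal sketch (the host names
are placeholders, not machines from this thread):

  http://host1:8983/solr/select/?q=solr&rows=0&facet=true&facet.field=schoolname_hl&shards=host1:8983/solr,host2:8983/solr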

Thanks folks

With love from Paris, where it's raining today :(

On Tuesday, November 13, 2012, Yonik Seeley wrote:

> On Mon, Nov 12, 2012 at 8:39 PM, Aeroox Aeroox <aero...@gmail.com> wrote:
> > Hi folks,
> >
> > I have a solr index with up to 50M documents. A document contains 62
> > fields (docid, name, location...).
> >
> > The facet count took 1 to 2 minutes with these params:
> >
> > http://XXXX.../select/?q=solr&version=2.2&start=0&rows=0&facet=true&facet.limit=6&facet.mincount=1&mm=3<-1&facet.field=schoolname_hl&facet.method=fc
>
> It should hopefully just take that long the first time?  How much time
> does it take to facet on the same field subsequent times?
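
A static warming query in solrconfig.xml can absorb that first-request cost
before a user sees it. A minimal sketch, reusing the facet field from the
query above (not a config from this thread):

  <listener event="firstSearcher" class="solr.QuerySenderListener">
    <arr name="queries">
      <lst>
        <str name="q">*:*</str>
        <str name="rows">0</str>
        <str name="facet">true</str>
        <str name="facet.field">schoolname_hl</str>
        <str name="facet.method">fc</str>
      </lst>
    </arr>
  </listener>

The same block under event="newSearcher" keeps faceting warm after commits.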
>
> > And my cache policy :
> >
> > <filterCache class="solr.FastLRUCache"
> >                  size="4096"
> >                  initialSize="4096"
> >                  autowarmCount="4096"/>
> >
> >     <queryResultCache class="solr.LRUCache"
> >                      size="5000"
> >                      initialSize="5000"
> >                      autowarmCount="5000"/>
>
> These are relatively big caches - consider reducing them if you can.
> Especially the filter cache, depending on what percent of the entries
> are bitsets.
> Worst case would be 50M / 8 * 4096 = 25GB of bitsets.
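
(Spelled out: a filterCache entry stored as a bitset holds one bit per
document, so 50,000,000 docs / 8 = ~6.25 MB per entry, and 4096 entries *
6.25 MB ≈ 25.6 GB in the worst case.)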
>
> > * i'm using solr 1.4 (LUCENE_36)
> > * 64GB Ram (with 60GB allocated to java/tomcat6)
>
> Reduce this if you can - it doesn't leave enough memory for the OS to
> cache the index files and can contribute to slowness (more disk IO).
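
As an illustration only, a much smaller heap in the tomcat6 JAVA_OPTS leaves
the rest of the 64GB to the OS page cache (16g below is a placeholder, not a
recommendation from this thread):

  # illustrative sketch -- size the heap to Solr's caches, not the whole box
  JAVA_OPTS="$JAVA_OPTS -Xms16g -Xmx16g"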
>
> -Yonik
> http://lucidworks.com
>
