We increased the number of terms (String) in one of our facet fields by 50,000.
After that we started getting an error when we facet on this field, so we
switched it to facet.method=enum, and the results come back again. However,
when we put that into production we literally hit a wall (CPU went to 100%
across all 16 cores) after about 30 minutes live.
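
For reference, the facet request looks roughly like this (field name is a
placeholder, not our real schema, and other params are omitted):

  /select?q=*:*&rows=0
    &facet=true
    &facet.field=myfield
    &facet.method=enum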

We tried adding more machines to spread out the CPU load, but it did not help.
I am wondering if the CPU is just a symptom... So we also increased memory
from 12GB to 16GB (the core size is 10GB). This is Solr 4.10.3.

What are some ideas? We are going to try docValues on the field. Does
anyone know whether method=fc or method=enum works with docValues? I cannot
find any documentation on that.
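
The plan for the docValues test is just to add docValues="true" to the field
definition in schema.xml and reindex, along these lines (field name and the
other attributes are illustrative, not our actual schema):

  <field name="myfield" type="string" indexed="true" stored="true"
         multiValued="true" docValues="true"/>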

We are thinking of splitting the field into two fields (fielda, fieldb). At
least the number of terms per field will be smaller, but I am not sure whether
that will help with memory?
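
The facet request would then just list both fields, something like:

  &facet.field=fielda
  &facet.field=fieldb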

The weird thing is that for the first 30 minutes everything performs great:
around 10% CPU across 16 cores, not much memory use, and normal GC.

Originally the facet used method=fc. Is there a known issue with enum? We have
facet.threads=20 set, and I am not sure that is wise with enum?
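
One experiment we could run is dropping the thread count on a single request
to see whether that changes anything, e.g.:

  &facet.method=enum&facet.threads=1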

What would you look at? Would you recommend turning off different caches
like fieldCache or filterCache?
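
For reference, the relevant cache entries in solrconfig.xml look like this in
a stock config (sizes here are illustrative; ours may differ):

  <filterCache class="solr.FastLRUCache"
               size="512" initialSize="512" autowarmCount="0"/>
  <fieldValueCache class="solr.FastLRUCache"
                   size="512" autowarmCount="128" showItems="32"/>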

-- 
Bill Bell
billnb...@gmail.com
cell 720-256-8076
