In general we do not have overly complex filters, but I decreased the
filterCache autowarm count to 256; I will see how it performs over a month
or so before making any further changes to it.
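(A sketch of the corresponding solrconfig.xml entry, with values taken from the
cache stats quoted later in the thread; treat the exact numbers as illustrative:)

    <filterCache class="solr.FastLRUCache"
                 size="4096"
                 initialSize="512"
                 autowarmCount="256"/>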
It also seems that adding more shards could improve the situation. We have
16 CPU cores and SSD RAID 10, so I think it
On 12/23/2014 2:31 AM, heaven wrote:
> We do not use dates here, at least not too often. Usually it's something like
> type:Profile (we do use it from the Rails application, so type describes
> model names), opted_in:true, etc. Solr hasn't been running for long though, so
> this could not show the real state.
We do not use dates here, at least not too often. Usually it's something like
type:Profile (we do use it from the Rails application, so type describes
model names), opted_in:true, etc. Solr hasn't been running for long though, so
this could not show the real state.
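(As raw request parameters those filters would look something like the lines
below; each distinct fq string becomes its own filterCache entry:)

    fq=type:Profile
    fq=opted_in:true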
Currently for the filter cache it shows
Milliseconds. The thing to track here is your
cumulative_hitratio.
0.7 isn't bad, but it's not great either. I'd be really
curious what kinds of fq clauses you're entering;
anything that mentions NOW is potentially a
waste unless you round with "date math".
FWIW,
Erick
On Mon, Dec 22, 2014 at
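(To make the date-math point concrete, using a hypothetical timestamp field:
a bare NOW changes every millisecond, so the filter is cached but never reused,
while a rounded form is shared across requests:)

    fq=timestamp:[NOW-7DAYS TO NOW]            <- unique per request, never reused
    fq=timestamp:[NOW/DAY-7DAYS TO NOW/DAY]    <- rounded to the day, reused all day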
It is getting better now with smaller caches like this:
filterCache
class:org.apache.solr.search.FastLRUCache
version:1.0
description:Concurrent LRU Cache(maxSize=4096, initialSize=512,
minSize=3686, acceptableSize=3891, cleanupThread=false, autowarmCount=256,
regenerator=org.apache.solr.search.Sol
50K is still very, very large. You say you have 50M docs/node. Each
filterCache entry will be on the order of 6M bytes. Times 50,000 entries (the
potential size if you ever stop indexing), or about 300G of memory for your filter cache alone.
There are OOMs out there with your name on them, just waiting to
happen at 3:00 AM after y
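(Spelling out that estimate with the numbers above:)

    50,000,000 docs / 8 bits per byte ~ 6.25 MB per filterCache entry
    6.25 MB x 50,000 entries          ~ 312 GB for a completely full filter cache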
Okay, thanks for the suggestion, I will try to decrease the caches gradually.
Each node has nearly 50,000,000 docs; perhaps we need more shards...
We had smaller caches before, but that was leading to bad feedback from our
users. Besides our application users, we also use Solr internally for data
analyz
As Shalin points out, these cache sizes are way out of the norm.
For filterCache, each entry is roughly maxDoc/8 bytes. You haven't told
us how many docs are on the node, but you can find maxDoc on
the admin page. What I _have_ seen is a similar situation and
if you ever stop indexing you'll get OOM err
Thanks, decreased the caches by half, increased the heap size to 16G,
configured Huge Pages and added these options:
-XX:+UseConcMarkSweepGC
-XX:+UseLargePages
-XX:+CMSParallelRemarkEnabled
-XX:+ParallelRefProcEnabled
-XX:+AggressiveOpts
-XX:CMSInitiatingOccupancyFraction=75
Be
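(For illustration only, a start command combining the 16G heap with those
options might look like this for Solr 4.x under the bundled Jetty; setting
-Xms equal to -Xmx is an assumption here, not something stated above:)

    java -Xms16g -Xmx16g \
         -XX:+UseConcMarkSweepGC \
         -XX:+CMSParallelRemarkEnabled \
         -XX:+ParallelRefProcEnabled \
         -XX:+UseLargePages \
         -XX:+AggressiveOpts \
         -XX:CMSInitiatingOccupancyFraction=75 \
         -jar start.jar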
Those are huge cache sizes. My guess is that the searchExecutor thread is
spending too much time doing warming. Garbage collection may also be a
factor as other people pointed out.
On Fri, Dec 19, 2014 at 12:50 PM, heaven wrote:
>
> I have the following settings in my solrconfig.xml:
>
>
I have the following settings in my solrconfig.xml:
What is the best way to calculate the optimal cache/heap sizes? I understand
there's no common formula and all docs have different sizes, but -Xmx is
already 12G.
Thanks,
Alex
Right, I've seen situations where, as Solr uses a high percentage of the
available memory, Java spends more and more time in GC cycles. Say
you've allocated 8G to the heap. Say further that the "steady state" for
Solr needs 7.5g (numbers made up...). Now the GC algorithm only has
0.5G to play wit
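(One way to confirm that is to enable GC logging with the standard HotSpot
flags for Java 7/8; the log path here is just an example:)

    -verbose:gc
    -XX:+PrintGCDetails
    -XX:+PrintGCTimeStamps
    -Xloggc:/var/log/solr/gc.log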
I've been experiencing this problem. Running VisualVM on my instances
shows that they spend a lot of time creating WeakReferences
(org.apache.lucene.util.WeakIdentityMap$IdentityWeakReference that is).
I think what's happening here is the heap's not big enough for Lucene's
caches and it ends up