I think the article below explains it well:
http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html
I was thinking that docValues needed to be copied into the JVM from the OS
cache. It turns out that is not required, as the docValues files are mapped
into the virtual address space by the OS.
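
That matches what the article describes: with MMapDirectory, Lucene maps the
index files (docValues included) into virtual memory and reads them in place,
backed by the OS page cache. A minimal sketch of a direct docValues read at
the Lucene level, assuming the Lucene 6.x API (the index path and the "price"
field are made-up examples):

import java.nio.file.Paths;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.index.NumericDocValues;
import org.apache.lucene.store.MMapDirectory;

public class DocValuesMmapSketch {
  public static void main(String[] args) throws Exception {
    long sum = 0;
    // MMapDirectory maps the index files into the process's virtual address
    // space; the OS page cache serves the reads, with no bulk copy onto the
    // JVM heap.
    try (MMapDirectory dir = new MMapDirectory(Paths.get("/path/to/index"));
         DirectoryReader reader = DirectoryReader.open(dir)) {
      for (LeafReaderContext leaf : reader.leaves()) {
        NumericDocValues dv = leaf.reader().getNumericDocValues("price");
        if (dv == null) continue; // segment has no docValues for this field
        for (int docId = 0; docId < leaf.reader().maxDoc(); docId++) {
          sum += dv.get(docId); // decoded on the fly from the mapped file
        }
      }
    }
    System.out.println("sum(price) = " + sum);
  }
}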
On 12/2/2017 6:59 PM, S G wrote:
> I am a bit curious about the docValues implementation.
> I understand that docValues do not use JVM memory and
> they make use of the OS cache - that is why they are more performant.
>
> But to return any response from the docValues, the values in the
> docValues' column-oriented structures would need to be brought into the
> JVM memory, right?
I am a bit curious about the docValues implementation.
I understand that docValues do not use JVM memory and
they make use of the OS cache - that is why they are more performant.

But to return any response from the docValues, the values in the
docValues' column-oriented structures would need to be brought into the
JVM memory, right?
Hi Toke,
Nearly 30% of the requests set facet.limit=200.
Across 42000 requests, the number of times each field is used for faceting
is (a SolrJ reproduction of such a query is sketched below):
$ grep 'facet=true' select.log | grep -oP 'facet\.field=([^&]*)' | sort |
  uniq -c | sort -rn
23119 facet.field=category_path
 8643 facet.field=EUR_0_price_de
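
For reference, a query with those parameters can be reproduced with SolrJ.
This is only a sketch: the Solr URL and the "products" collection name are
assumptions, while the facet fields and facet.limit=200 come from the log
counts above.

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class FacetQuerySketch {
  public static void main(String[] args) throws Exception {
    try (HttpSolrClient client = new HttpSolrClient.Builder(
             "http://localhost:8983/solr/products").build()) {
      SolrQuery q = new SolrQuery("*:*");
      q.setFacet(true);
      // The two most-used facet fields from the select.log counts.
      q.addFacetField("category_path", "EUR_0_price_de");
      q.setFacetLimit(200); // ~30% of requests set facet.limit=200
      QueryResponse rsp = client.query(q);
      // Each facet bucket is a term plus its document count.
      System.out.println(rsp.getFacetField("category_path").getValues());
    }
  }
}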
Dominique Bejean wrote:
> Hi, Thank you for the explanations about faceting. I was thinking the hit
> count had the biggest impact on facet memory lifecycle.
Only if you have a very high facet.limit. Could you provide us with a typical
query, including all the parameters?
- Toke Eskildsen
Hi, Thank you for the explanations about faceting. I was thinking the hit
count had the biggest impact on facet memory lifecycle. Regardless of the hit
count, there is a query peak at the time the issue occurs. It is modest
relative to what Solr is supposed to be able to handle, but it seems to be
sufficient to trigger the issue.
Dominique:
Actually, the memory requirements shouldn't really go up as the number
of hits increases. The general algorithm is (say rows=10; see the sketch
below):
Calculate the score of each doc
If the score is zero, ignore
If the score is > the minimum in my current top 10, replace the lowest
scoring doc in my current top 10
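
A minimal sketch of that bounded top-N selection in Java (Lucene's real
implementation uses its own priority queue, but the principle is the same:
memory stays at N entries no matter how many documents match):

import java.util.PriorityQueue;

public class TopNCollector {
  static final class ScoreDoc {
    final int doc;
    final float score;
    ScoreDoc(int doc, float score) { this.doc = doc; this.score = score; }
  }

  private final int n;
  // Min-heap on score: peek() is the lowest-scoring doc currently kept.
  private final PriorityQueue<ScoreDoc> top =
      new PriorityQueue<>((a, b) -> Float.compare(a.score, b.score));

  public TopNCollector(int n) { this.n = n; }

  public void collect(int doc, float score) {
    if (score <= 0f) return;               // "if the score is zero, ignore"
    if (top.size() < n) {
      top.add(new ScoreDoc(doc, score));   // still filling the top N
    } else if (score > top.peek().score) { // beats the current minimum?
      top.poll();                          // evict the lowest-scoring doc
      top.add(new ScoreDoc(doc, score));
    }
  }
}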
Hi,
Thank you both for your responses.
I only have the Solr log for the very last period of the GC log.
A grep command lets me count the queries per minute with hits > 1000, i.e.
those with the biggest impact on memory and CPU during faceting (a Java
equivalent is sketched below):
> 1000
59 11:13
45 11:14
36 11:15
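
A rough Java equivalent of that per-minute count, as a sketch: it assumes the
default Solr log layout, where each request line starts with a
"yyyy-MM-dd HH:mm:ss" timestamp and carries a hits=<n> token; the log path
and the 1000 threshold just mirror the numbers above.

import java.io.BufferedReader;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Map;
import java.util.TreeMap;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class HitsPerMinute {
  public static void main(String[] args) throws Exception {
    Pattern hits = Pattern.compile("hits=(\\d+)");
    Map<String, Integer> perMinute = new TreeMap<>();
    try (BufferedReader in = Files.newBufferedReader(Paths.get("select.log"))) {
      String line;
      while ((line = in.readLine()) != null) {
        if (line.length() < 16) continue;         // no timestamp prefix
        Matcher m = hits.matcher(line);
        if (m.find() && Long.parseLong(m.group(1)) > 1000) {
          String minute = line.substring(11, 16); // "HH:mm" of the timestamp
          perMinute.merge(minute, 1, Integer::sum);
        }
      }
    }
    // Prints "count HH:mm" lines, like the grep | uniq -c output above.
    perMinute.forEach((minute, n) -> System.out.println(n + " " + minute));
  }
}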
Your autowarm counts are rather high, but as Toke says this doesn't
seem outrageous.
I have seen situations where Solr is running close to the limits of
its heap and GC only reclaims a tiny bit of memory each time; when you
say "full GC with no memory reclaimed" is that really no memory _at all_?
Dominique Bejean wrote:
> We are encountering an issue with GC.
> Randomly, nearly once a day, there are consecutive full GCs with no memory
> reclaimed.
[... 1.2M docs, Xmx 6GB ...]
> GCeasy suggests increasing the heap size, but I do not agree

It does seem strange, with your apparently modest index & heap.
Hi,
We are encountering an issue with GC.
Randomly, nearly once a day, there are consecutive full GCs with no memory
reclaimed.
So the old generation heap usage grows up to the limit.
Solr stops responding and we need to force a restart.
We are using Solr 6.6.1 with an Oracle 1.8 JVM. The JVM settings are [...]