On another note, since response time is in question: I have been using a
custom highlighter that simply overrides the encodeSnippets() method of the
UnifiedSolrHighlighter class since Solr 6, because Solr sends back a blank
array (ZERO_LEN_STR_ARRAY) in the response payload for fields that do not
match. Here …
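A minimal sketch of that kind of override, assuming the protected
encodeSnippets(String[], String[], Map<String, String[]>) signature and the
SNIPPET_SEPARATOR constant present in recent 8.x versions of
UnifiedSolrHighlighter; the subclass name and the skip-empty behavior are
illustrative, not the poster's actual code:

    import java.util.Map;

    import org.apache.solr.common.util.NamedList;
    import org.apache.solr.common.util.SimpleOrderedMap;
    import org.apache.solr.highlight.UnifiedSolrHighlighter;

    // Hypothetical subclass: omit fields that produced no snippets instead
    // of emitting the ZERO_LEN_STR_ARRAY placeholder for them.
    public class CompactUnifiedSolrHighlighter extends UnifiedSolrHighlighter {

      @Override
      protected NamedList<Object> encodeSnippets(String[] keys, String[] fieldNames,
                                                 Map<String, String[]> snippets) {
        NamedList<Object> result = new SimpleOrderedMap<>();
        for (int i = 0; i < keys.length; i++) {
          NamedList<Object> summary = new SimpleOrderedMap<>();
          for (String field : fieldNames) {
            String snippet = snippets.get(field)[i];
            if (snippet != null) {
              // split multi-snippet values the same way the base class does
              summary.add(field, snippet.split(SNIPPET_SEPARATOR));
            }
            // null snippet: the field did not match, so skip it entirely
          }
          result.add(keys[i], summary);
        }
        return result;
      }
    }

The subclass would then be registered in solrconfig.xml in place of the
default highlighter.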
Hi David,
Thanks so much for your reply.
hl.weightMatches was indeed the culprit. After setting it to false, I am
now getting the same sub-second response as with Solr 6. I am using Solr
8.6.1.
Here are the tests I carried out:
hl.requireFieldMatch=true&hl.weightMatches=true (2458 ms)
hl.requi…
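For anyone reproducing this, a hedged SolrJ sketch of issuing such a query
with weightMatches disabled; the base URL, collection name, and query string
are placeholders:

    import java.io.IOException;

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.SolrServerException;
    import org.apache.solr.client.solrj.impl.HttpSolrClient;
    import org.apache.solr.client.solrj.response.QueryResponse;

    public class WeightMatchesProbe {
      public static void main(String[] args) throws SolrServerException, IOException {
        try (HttpSolrClient client =
                 new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
          SolrQuery q = new SolrQuery("body:example");          // placeholder query
          q.set("hl", true);
          q.set("hl.method", "unified");
          q.set("hl.requireFieldMatch", true);
          q.set("hl.weightMatches", false);   // fall back to pre-8.0 behavior
          QueryResponse rsp = client.query("mycollection", q);  // placeholder collection
          System.out.println("QTime: " + rsp.getQTime() + " ms");
        }
      }
    }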
I'm not sure about _performance_, but I'm pretty sure you don't want to be
faceting on docValued SortableTextField (and faceting on non-docValued
SortableTextField, though I think technically possible, works against
uninverted _indexed_ values, so ends up doing something entirely different):
https://…
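For context, a hedged sketch of the two field setups being contrasted (field
and type names here are illustrative):

    <!-- SortableTextField: indexed tokens, plus docValues holding the
         original value for sorting -->
    <fieldType name="text_sortable" class="solr.SortableTextField" docValues="true">
      <analyzer>
        <tokenizer class="solr.StandardTokenizerFactory"/>
      </analyzer>
    </fieldType>
    <field name="title_txt_sort" type="text_sortable" indexed="true" stored="true"/>

    <!-- Plain string field with docValues: facets over whole, untokenized values -->
    <field name="title_s" type="string" indexed="true" stored="true" docValues="true"/>

Faceting on the non-docValued variant uninverts the indexed tokens, so it
facets on individual terms rather than whole values, which is the "entirely
different" behavior mentioned above.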
Thanks Hoss and Shawn for helping.
There are not many OOM stack details printed in the Solr log file; it just
says there is not enough memory, and the process is killed by oom.sh (Solr's
script).
My question (issue) is not whether it's an OOM; the issue is why JVM memory
usage keeps growing but never goes down.
I am wondering why the facet performance of a docValued SortableTextField is
slower than that of a non-docValued string field.
Does anyone know why?
Thanks,
Jae
: Does the config file matter? I am using a custom config instead of
: _default; my config is from Solr 8.6.2 with a custom solrconfig.xml
Well, it depends on what's *IN* the custom config ... maybe you are using
some built-in functionality that has a bug but didn't get triggered by my …
On 1/27/2021 9:00 PM, Luke wrote:
it's killed by an OOME exception. The problem is that I just created empty
collections and the Solr JVM keeps growing and never goes down. There is no
data at all. At the beginning I set Xmx=6G, then 10G, now 15G; Solr 8.7
always uses all of it and eventually gets killed …
Hello Kerwin,
Firstly, hopefully you've seen the upgrade notes:
https://lucene.apache.org/solr/guide/8_7/solr-upgrade-notes.html
8.6 fixes a performance regression found in 8.5; perhaps you are using 8.5?
Missing from the upgrade notes, but found in the CHANGES.txt for 8.0, is
that hl.weightMatches now defaults to true.
Thanks Chris,
Does the config file matter? I am using a custom config instead of _default;
my config is from Solr 8.6.2 with a custom solrconfig.xml.
Derrick
Sent from my iPhone
> On Jan 28, 2021, at 2:48 PM, Chris Hostetter wrote:
>
>
> FWIW, I just tried using 8.7.0 to run:
> bin/solr -m 200m -e cloud -noprompt
FWIW, I just tried using 8.7.0 to run:
bin/solr -m 200m -e cloud -noprompt
And then set up the following bash one-liner to poll the heap metrics...

while : ; do date; echo "node 8983" && (curl -sS
http://localhost:8983/solr/admin/metrics | grep memory.heap); echo "node 7574"
&& (curl -sS http://localhost:7574/solr/admin/metrics | grep memory.heap);
sleep 5; done
: Hi, I am using Solr 8.7.0, CentOS 7, Java 8.
:
: I just created a few collections with no data; memory keeps growing but
: never goes down, until I get an OOM and Solr is killed.
Are you using a custom config set, or just the _default configs?
If you start up this single node with something like -X…
: I am wondering if there is a way to warm up a new searcher on commit by
: rerunning queries processed by the last searcher. Maybe it happens by
: default, but then I can't understand why we see high query times if those
: searchers are being warmed.
it only happens by default if you have an 'autowarmCount' configured on your
caches …
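For reference, both kinds of warming live in solrconfig.xml; a hedged
sketch, assuming Solr 8.x's CaffeineCache, with illustrative sizes and
queries:

    <!-- autowarmCount seeds a new searcher's caches from the old searcher -->
    <filterCache class="solr.CaffeineCache" size="512" initialSize="512"
                 autowarmCount="128"/>
    <queryResultCache class="solr.CaffeineCache" size="512" initialSize="512"
                      autowarmCount="64"/>

    <!-- explicit warming queries run against every new searcher -->
    <listener event="newSearcher" class="solr.QuerySenderListener">
      <arr name="queries">
        <lst><str name="q">popular query here</str><str name="sort">score desc</str></lst>
      </arr>
    </listener>

Note that queryResultCache autowarming re-executes the cached queries against
the new searcher, which is the closest built-in equivalent of "rerunning
queries processed by the last searcher".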
We have recently had a few occasions when cores for one specific collection
were renamed (or more likely dropped and recreated, and thus ended up with a
different core name).
Is this a known phenomenon? Is there any explanation?
It may be relevant that we just recently started running this Solr …
and here is the GC log when I create a collection (just create the
collection, nothing else):

{Heap before GC invocations=1530 (full 412):
 garbage-first heap   total 10485760K, used 10483431K [0x00054000,
 0x000540405000, 0x0007c000)
  region size 4096K, 0 young (0K), 0 survivors (0K)
 Met…
Mike,
No, it's not Docker. It is just one Solr node (service) that connects to an
external ZooKeeper; below are the JVM settings and memory usage.
There are 25 collections holding only about 2000 documents in total. I am
wondering why Solr uses so much memory.
-XX:+AlwaysPreTouch -XX:+ExplicitGCInvokesConcurrent …
Hi,
The above boolean query works fine when the number of rows fetched is small,
like 10 or 20, but when it is increased to a larger number the query slows
down.
Is document collection that expensive? Is there any configuration I am
missing?
*Solr setup details:*
Mode : SolrCloud
Number of Shards : 12
Index size : …