Hi Shalin,
Moving to 8.6.3 fixed it!
Thank you very much for that. :)
We'd considered an upgrade anyway, but we wouldn't have done it so
quickly without your information.
Cheers
On Sat, Oct 24, 2020 at 11:37 PM Shalin Shekhar Mangar <
shalinman...@gmail.com> wrote:
> Hi Jonathan,
>
> Are
Hi,
I have hooked up Grafana dashboards with the Solr 8.5.2 Prometheus exporter.
For some reason, some dashboards, like Requests and Timeouts, are not showing
any data. When I looked at the corresponding data from the Prometheus
exporter, it showed two entries per search request handler, the first with
a count of 0
I am interested in disallowing delete through security.json
After seeing the "method" section in
lucene.apache.org/solr/guide/8_4/rule-based-authorization-plugin.html my first
attempt was as follows:
{"set-permission": {
    "name": "NO_delete",
    "path": ["/update/*", "/update"],
    "collection": col_name,
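For reference, a syntactically complete set-permission payload might look like the sketch below (the role name is a placeholder of mine, and "col_name" stands in for a real collection name; note also that since both adds and deletes arrive as POSTs to /update, the method field alone may not be enough to distinguish deletes from other updates):

```json
{
  "set-permission": {
    "name": "NO_delete",
    "path": ["/update/*", "/update"],
    "collection": "col_name",
    "method": ["POST"],
    "role": "admin-role"
  }
}
```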
Can we get the metrics for a particular time range? I know metrics history
was not enabled, so I will only have data from the last time the Solr node
started, but even so, can we query a date range, for example to see CPU
usage over a particular period?
Note: Solr version
I'm trying to get a simple text listing of my collections (so that I can do
some shell scripting loops, for example calling Solr Cloud Backup on each).
I'm getting an exception when I simply append "&wt=csv" to the end of the
collection's LIST API call (e.g.:
http://localhost:8983/solr/admin/co
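As an alternative to wt=csv, the names can be pulled out of the default JSON response, whose top level contains a "collections" array. A minimal sketch (the payload below is a made-up example of that shape, not real output):

```python
import json

# Hypothetical response body from /solr/admin/collections?action=LIST&wt=json
response_body = '''
{"responseHeader": {"status": 0, "QTime": 1},
 "collections": ["films", "techproducts"]}
'''

def collection_names(body: str) -> list:
    """Extract the collection names from a LIST response body."""
    return json.loads(body).get("collections", [])

for name in collection_names(response_body):
    print(name)
```

In a shell loop the same extraction could be done by piping curl output through a one-liner like this, or through jq.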
8.6 still has uninvertible=true, so this should go ahead and create an on-heap
docValues structure. That’s going to put 38M ints on the heap. Still, that
shouldn’t require 500M of additional space, and this would have been happening
in your old system anyway, so I’m at a loss to explain…
Unless
The “requests” metric is a simple counter. Please see the documentation in the
Reference Guide on the available metrics and their meaning. This counter is
initialised when the replica starts up, and it’s not persisted (so if you
restart this Solr node it will reset to 0).
If by “frequency” you
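Since the “requests” metric is a cumulative counter, a requests-per-second rate can be derived client-side by sampling it twice and dividing the delta by the interval. A minimal sketch (function name and numbers are my own, not from the Reference Guide):

```python
def request_rate(count_t0: float, count_t1: float, interval_s: float) -> float:
    """Requests/sec between two samples of a cumulative counter.

    If the counter went backwards (e.g. the Solr node restarted and the
    counter reset to 0), treat the second sample as the delta since restart.
    """
    delta = count_t1 - count_t0
    if delta < 0:          # counter reset between the two samples
        delta = count_t1
    return delta / interval_s

# Two samples of QUERY./select.requests taken 60 s apart (made-up numbers)
print(request_rate(1200, 1800, 60))   # 10.0 requests/sec
```

This is essentially what Prometheus's rate() does for you once the exporter is scraping the counter.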
Hello Anshum,
Good point! We sort on the collection's uniqueKey, our id field, and it does
not have docValues enabled. It could be a contender, but is it the problem?
I cannot easily test it at this scale.
Thanks,
Markus
-Original message-
> From:Anshum Gupta
> Sent: Monda
I am new to the Metrics API in Solr. When I try
solr/admin/metrics?prefix=QUERY./select.requests, it returns numbers
for each collection that I have. I understand those are the requests
coming in against each collection, but over what time frame?
Like, are those numbers from the tim
Hey Markus,
What are you sorting on? Do you have docValues enabled on the sort field ?
On Mon, Oct 26, 2020 at 5:36 AM Markus Jelsma
wrote:
> Hello,
>
> We have been using a simple Python tool for a long time that eases
> movement of data between Solr collections, it uses CursorMark to fetch
>
Thanks Shawn and Erick.
So far I haven't noticed any performance issues before and after the change.
My concern all along is COST. We could have left the configuration as is,
keeping the deleted documents in the index, but we have to scale up our
Solr cluster. This will double our Solr Cluste
Damien, I gathered that you're using "nested facet"; but there are a
lot of different ways to do that, with different implications. e.g.,
nesting terms facet within terms facet, query facet within terms,
terms within query, different stats, sorting, overrequest/overrefine
(and for that matter, refi
"Some large segments were merged into 12GB segments and
deleted documents were physically removed.”
and
“So with the current natural merge strategy, I need to update solrconfig.xml
and increase the maxMergedSegmentMB often"
I strongly recommend you do not continue down this path. You’re making a
m
Hello,
We have been using a simple Python tool for a long time that eases movement of
data between Solr collections; it uses CursorMark to fetch small or large
pieces of data. Recently it stopped working when moving data from a production
collection to my local machine for testing, the Solr nod
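The CursorMark loop such a tool runs can be sketched as follows (fetch_page is a stand-in for the real HTTP call to /select with cursorMark=..., not the author's actual code; Solr signals the end of results by returning the same nextCursorMark that was sent):

```python
def fetch_all(fetch_page):
    """Drain a result set via CursorMark.

    fetch_page(cursor) -> (docs, next_cursor). The request's sort must
    include the uniqueKey field as a tiebreaker, e.g. sort=time asc,id asc,
    and the first request uses the reserved cursor value "*".
    """
    docs, cursor = [], "*"
    while True:
        page, next_cursor = fetch_page(cursor)
        docs.extend(page)
        if next_cursor == cursor:   # unchanged cursor means we're done
            break
        cursor = next_cursor
    return docs

# Fake two-page response for illustration
pages = {"*": ([1, 2], "AoE"), "AoE": ([3], "AoE")}
print(fetch_all(pages.__getitem__))   # [1, 2, 3]
```

This is also why a missing-docValues sort field matters here: the uniqueKey tiebreaker sort is mandatory for CursorMark.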
This was my mistake.
Thank you.
Taisuke
On Fri, Oct 23, 2020 at 3:02 PM Taisuke Miyazaki :
> Thanks.
>
> I analyzed it as explain=true and this is what I found.
> Why does this behave this way?
>
> fq=foo:1
> bq=foo:(1)^1
> bf=sum(200)
>
> If you do this, the score will be boosted by bq.
> However, if
According to the source code here
https://github.com/apache/lucene-solr/blob/releases/lucene-solr/8.6.2/solr/core/src/java/org/apache/solr/core/SolrPaths.java#L134
your allowPaths value is NOT equal to «*» (which is stored as _ALL_) (parsed
here
https://github.com/apache/lucene-solr/blob/releas
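For anyone hitting this, a solr.in.sh fragment of the following shape should set the allow-everything value (illustrative; assumes Solr 8.6+, where the solr.allowPaths property exists, and the asterisk must stay inside the double quotes so the shell does not glob it):

```
# solr.in.sh
SOLR_OPTS="$SOLR_OPTS -Dsolr.allowPaths=*"
```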