Thanks for the reply.
Will try.
From: Gael Jourdan-Weil <gael.jourdan-w...@kelkoogroup.com>
Sent: 01 March 2021 05:48 PM
To: solr-user@lucene.apache.org
Subject: RE: How to read tlog
Hello,
You can just use "cat" or "tail", even though the tlog is not a text file, its
content can mostly be read using these commands.
You will have one document per line and should be able to see the fields
content.
I don't know if there is a Solr command which would give a better display, though.
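To make the "cat/tail on a binary file" point concrete, here is a small sketch. The file created below is a stand-in; a real tlog lives under the core's data directory (e.g. something like $SOLR_HOME/&lt;core&gt;/data/tlog/ — that path is an assumption, adjust to your install). `strings` is often friendlier than `cat` because it skips the binary framing:

```shell
# Create a stand-in "binary log" with some readable field values inside
# (real tlogs interleave field names/values with binary length markers).
printf 'ADD\x01id\x02doc42\x01title\x02hello tlog\n' > /tmp/demo.tlog

# strings extracts the readable field values from the binary framing:
strings /tmp/demo.tlog

# tail works too, since each update record ends with a newline:
tail -n 1 /tmp/demo.tlog | strings
```
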
The terms query parser does not run the analysis chain; it expects
already-tokenized values. That is why it matches what is returned by faceting.
So I would check whether that field is string or text and how the processing
differs. Enabling debug will also show the difference in the final expanded
form.
Regards,
Alex
P. S. It is
What about copyField with the target being index only (docValue only?) and
no lowercase on the target field type?
Solr is not a database, you are optimising for search. So duplicate,
multi-process, denormalise, create custom field types, etc.
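A sketch of what that copyField setup could look like in the schema (all names here are illustrative, and this assumes a source field `title` already exists):

```xml
<!-- Case-preserving target type: docValues for faceting/sorting, no lowercasing -->
<fieldType name="string_exact" class="solr.StrField"
           docValues="true" indexed="false" stored="false"/>
<field name="title_exact" type="string_exact"/>
<!-- Duplicate the source field into the exact-match target at index time -->
<copyField source="title" dest="title_exact"/>
```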
Regards,
Alex
On Wed., Feb. 3, 2021, 4:43 p.m. eli
It is documented in the reference guide:
https://lucene.apache.org/solr/guide/8_8/analysis-screen.html
Hope it helps,
Alex.
On Tue, 2 Feb 2021 at 00:57, elivis wrote:
Alexandre Rafalovitch wrote
> Admin UI also allows you to run a text string against a field definition to
> see what each stage of the analyzer chain does.
Thank you. Could you please give me some pointers on how to achieve this
(seeing what each stage of the analyzer chain does in the Admin UI)?
Check the field type and associated indexing chain in the managed-schema of
your core. It probably has the lowercase filter in it.
Find a better type or make one yourself. Remember to reload the schema and
reindex the content.
The Admin UI also allows you to run a text string against a field definition to
see what each stage of the analyzer chain does.
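In the Admin UI this is the Analysis screen: select the core, open "Analysis", pick the field or field type, and enter sample index and/or query text. The same output is available over HTTP via the field analysis handler; a sketch, assuming a core named `mycore` and a field `title`:

```
http://localhost:8983/solr/mycore/analysis/field?analysis.fieldname=title&analysis.fieldvalue=Running%20QUICKLY
```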
On 1/4/2021 11:25 AM, Chris Hostetter wrote:
Nothing's ever simple: apparently the standard plugin does n
Can't you just configure nagios to do a "negative match" against
numFound=0 ? ... ie: "if response matches 'numFound=0' fail the check."
(IIRC there's an '--invert-regex' option for this)
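The "negative match" logic can be sketched without Nagios itself: the check fails exactly when the zero-results pattern appears in the response body. The canned response below is made up; a real check_http invocation would fetch /solr/&lt;core&gt;/select?q=... and apply -r with --invert-regex:

```shell
# Stand-in for the body check_http would fetch from Solr:
RESPONSE='{"response":{"numFound":42,"start":0,"docs":[]}}'

# Negative match: CRITICAL when the "zero results" pattern appears.
if echo "$RESPONSE" | grep -q '"numFound":0,'; then
  echo "CRITICAL: query returned zero results"
else
  echo "OK: query returned results"
fi
```
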
Date: Mon, 28 Dec 2020 14:36:30 -0600
From: Dmitri Maziuk
Reply-To: solr-user@lucene.apache.org
Hi,
I was able to add the config set to the STATUS response by implementing a
custom extended CoreAdminHandler.
However, it would be nice if this could be added to Solr itself. I've created
a JIRA for this: https://issues.apache.org/jira/browse/SOLR-15034
Kind regards,
Andreas
This blog gets more specific with some of the ideas behind the eval
expression:
https://joelsolr.blogspot.com/2017/04/having-talk-with-solr-using-new-echo.html
Joel Bernstein
http://joelsolr.blogspot.com/
On Thu, Nov 19, 2020 at 12:21 PM Joel Bernstein wrote:
You could have a program that writes a Streaming Expression
programmatically then use eval to run it. You can also save Streaming
Expression data structures: tuple, list, array etc... and eval them into
live streams that can be iterated.
Joel Bernstein
http://joelsolr.blogspot.com/
On Wed, No
On 11/9/2020 5:44 AM, raj.yadav wrote:
*Question:*
Since the reload was not done, none of the replicas (including the leader)
will have the updated solrconfig. If we restart a replica and it tries to sync
up with the leader, will it reflect the latest solrconfig changes, or will it
stay the same as the leader?
From: DAVID MARTIN NIETO
Sent: Wednesday, 4 November 2020 16:20
To: solr-user@lucene.apache.org
Subject: RE: How to raise open file limits
Hi,
You have to change the ulimit parameters in your OS config.
I believe the problem you have is in:
max user processes (-u) 4096
Kind regards.
David Martín Nieto
Analista Funcional
Calle Cabeza Mesada 5
28031, Madrid
T: +34 667 414 432
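For reference, the current limits can be inspected per shell, and the persistent change usually goes in /etc/security/limits.conf (a sketch; the `solr` user name and the 65000 values below are assumptions, not recommendations from this thread):

```shell
# Inspect the two limits that most often bite Solr:
ulimit -n   # open files
ulimit -u   # max user processes

# A persistent change (requires root, takes effect on next login) would go in
# /etc/security/limits.conf, along the lines of:
#   solr  soft  nofile  65000
#   solr  hard  nofile  65000
#   solr  soft  nproc   65000
#   solr  hard  nproc   65000
```
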
Hello,
If you enable authentication, it will apply on your HTTP port. Solr does not
distinguish whether the request comes from the Web UI or Dovecot.
I guess the workaround could be to put the web UI behind a proxy like NGINX and
have authentication there?
But if anyone can have direct
Hi,
I went through the other queries for which we are getting the `The request
took too long to iterate over doc values` warning. As pointed out by Erick, I
have cross-checked all the fields being used in the queries, and there is no
field we are searching against that has index=false and docValues=true.
Hey Erick,
In the cases for which we are getting this warning, I'm not able to extract
the exact Solr query. Instead, the logger is logging the `parsedquery` for
such cases. Here is one example:
2020-09-29 13:09:41.279 WARN (qtp926837661-82461) [c:mycollection
s:shard1_0 r:core_node5 x:mycollection_s
Let’s see the query. My bet is that you are _searching_ against the field and
have indexed=false.
Searching against a docValues=true indexed=false field results in the
equivalent of a “table scan” in the RDBMS world. You may use
the docValues efficiently for _function queries_ to mimic some
searc
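A schema sketch of the distinction (field names illustrative, using the common `plong` point type): the first definition supports efficient search, while searching the second degenerates into the scan described above:

```xml
<!-- Searchable, and also usable in function queries -->
<field name="price"    type="plong" indexed="true"  docValues="true"/>
<!-- docValues only: fine for sort/facet/functions, a full scan if searched -->
<field name="price_dv" type="plong" indexed="false" docValues="true"/>
```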
This is solved by using local parameters. So
{!func}sub(num_tokens_int,query({!dismax qf=field_name v=${text}}))
works
On Mon, Sep 21, 2020 at 7:43 PM krishan goyal wrote:
> Hi,
>
> I have use cases of features which require a query function and some more
> math on top of the result of the qu
Hi all,
I have found the details below on Stack Overflow but I am not sure how to
include the jar. Can anyone help with this?
I've created a new filter class from "FilteringTokenFilter". The task is
pretty simple: I check for duplicates before adding a token to the list.
I have created a simple plugin Eliminate du
But I am not sure why these types of search strings are causing high CPU
utilization.
On Fri, 18 Sep, 2020, 12:49 am Rahul Goswami, wrote:
Is this for a phrase search? If yes, then the position of the token would
matter too, and I am not sure which token you would want to remove, e.g.
"tshirt hat tshirt".
Also, are you looking to save space and want this at index time? Or just
want to remove duplicates from the search string?
If this is at s
If someone is searching with " tshirt tshirt tshirt tshirt tshirt tshirt"
we need to remove the duplicates and search with tshirt.
On Fri, 18 Sep, 2020, 12:19 am Alexandre Rafalovitch,
wrote:
This is not quite enough information.
There is
https://lucene.apache.org/solr/guide/8_6/filter-descriptions.html#remove-duplicates-token-filter
but it has specific limitations.
What is the problem that you are trying to solve that you feel is due
to duplicate tokens? Why are they duplicates? Is i
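For reference, that filter is wired into an analyzer like the sketch below (type name illustrative). Its main limitation, and likely what the "specific limitations" above refers to: it only drops a token that duplicates a previous token at the same position, so it will not collapse "tshirt tshirt tshirt" typed as separate words at successive positions:

```xml
<fieldType name="text_dedup" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
  </analyzer>
</fieldType>
```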
It is kept in zookeeper within /configs/[collection_name], at least with my
SolrCloud 6.6.6.
bin/solr zk ls /configs/[your_collection_name]
Regards
Bernd
On 08.09.20 at 21:40, yaswanth kumar wrote:
> Can someone help me on how to persist the data that's updated in
> the dataimport.properties file
Hi,
I noticed that when I created TLOG replicas using the ADDREPLICA API, I called
the API in parallel for all the shards, because of which all the replicas
were created on a single node, i.e. the replicas were not distributed evenly
across the nodes.
After fixing that, getting better indexing performance t
Hi,
Even if it is not the root cause, I suggest trying to respect some basic
best practices, and so not having "2 ZK running on the same nodes where Solr
is running". Maybe you can achieve this by just stopping these 2 ZK (and
moving them later). Did you increase ZK_CLIENT_TIMEOUT to 3 ?
Did you c
Hi,
I changed all the replicas, 50x2, from NRT to TLOG by adding TLOG replicas
using the ADDREPLICA API and then deleting the NRT replicas.
But now, these replicas are going into recovery even more frequently during
indexing. Same errors are observed.
Also, commit is taking a lot of time compared
Commits should absolutely not be taking that much time, that’s where I’d focus
first.
Some sneaky places things go wonky:
1> you have suggester configured that builds whenever there’s a commit.
2> you send commits from the client
3> you’re optimizing on commit
4> you have too much data for your
Are you able to use TLOG replicas? That should reduce the time it takes to
recover significantly. It doesn't seem like you have a hard need for
near-real-time, since slow ingestions are fine.
- Houston
On Tue, Aug 25, 2020 at 12:03 PM Anshuman Singh
wrote:
> Hi,
>
> We have a 10 node (150G RAM,
Good morning! To add more context on the question, I can successfully use the
Java API to build the list of new Clauses. However, the problem that I have is
that I don't know how to "write" those changes back to solr using the Java API.
I see there's a writeMap method in the Policy class however
Hi Community members,
I tried the following approaches but none of them worked for my use case.
1. For achieving exact match in Solr we have to keep sow=false (Solr will
use field-centric matching mode) and group multiple similar fields into
one copyField. It does solve the problem of recall
Are you also posting the same question as Akshay Murarka?
Please do not do this if so; use one e-mail address.
would in-place updates serve your use-case better? See:
https://lucene.apache.org/solr/guide/8_1/updating-parts-of-documents.html
> On Aug 10, 2020, at 8:17 AM, raj.yadav wrote:
>
Thanks for looking into this @Erick Erickson.
What'd be the proper way to get David Smiley's attention on this issue? A
JIRA ticket?
As for the performance difference, we haven't had a chance to test it.
We're still in the dev phase for migrating to solr 8, so we'll run our
benchmarks afterward, a
> Hi Erick,
>
> This is an example of a pseudo
I forgot to mention, the fields being used in the function query are indexed
fields. They are mostly text fields that cannot have DocValues
-Original Message-
From: Webster Homer
Sent: Thursday, July 23, 2020 2:07 PM
To: solr-user@lucene.apache.org
Subject: RE: How to measure search
Thank you for your quick response.
Webster
-Original Message-
From: Erick Erickson
Sent: Thursday, July 23, 2020 12:52 PM
To: solr-user@lucene.apache.org
Subject: Re: How to measure search performance
This isn’t usually a cause for concern. Clearing the caches doesn’t necessarily
clear the OS caches for instance. I think you’re already aware that Lucene uses
MMapDirectory, meaning the index pages are mapped to OS memory space. Whether
those pages are actually _in_ the OS physical memory or no
Hmm, ok.
I’d have to defer to David Smiley about whether that was an intended change.
I’m curious whether you can actually measure the difference in performance. If
you can then that changes the urgency. Of course it’ll be a little more
expensive
for the replica serving shard2 on that machine t
Our use case here is that we want to highlight a single document (against
user-provided keywords), and we know the document's unique key already.
So this is really not a distributed query, but more of a get by id, but we
use SolrClient.query() for highlighting capabilities.
And since we know the un
First I want to check if this is an XY problem. Why do you want to do this?
If you’re using CloudSolrClient, requests are automatically load balanced. And
even if you send a top-level request (assuming you do NOT set distrib=false),
then the request may be forwarded to another Solr node anyway. Th
Hi
Thanks Erick and Walter for your response.
Solr Version Used: 6.5.0
Let me elaborate on the issue:
Case 1: Search String: Industrial Electric Oven
Results=945
Case 2: Search String: Dell laptop bags
Results=992
In both cases above, mm plays its role (match any
First, remove the “mm” parameter from the request handler definition. That can
be added back in and tweaked later, or just left out.
Second, you don’t need any query syntax to search for two words. This query
should work fine:
books bags
wunder
Walter Underwood
wun...@wunderwood.org
http://o
Please let us know what version of Solr you use, otherwise it’s very hard to know
whether you’re running into https://issues.apache.org/jira/browse/SOLR-8812
or similar.
But two things to try:
1> specify q.op
2> specify mm=0%
Best,
Erick
> On Jul 2, 2020, at 1:22 AM, Tushar Arora wrote:
>
>
Hi,
Maybe https://github.com/sematext/solr-diagnostics can be of use?
Otis
--
Monitoring - Log Management - Alerting - Anomaly Detection
Solr & Elasticsearch Consulting Support Training - http://sematext.com/
On Mon, Jun 29, 2020 at 3:46 PM Erick Erickson
wrote:
> Really look at your cache s
Really look at your cache size settings.
This is to eliminate this scenario:
- your cache sizes are very large
- when you looked and the memory was 9G, you also had a lot of cache entries
- there was a commit, which threw out the old cache and reduced your cache size
This is frankly kind of unlik
On Mon, Jun 29, 2020 at 3:13 PM Erick Erickson
wrote:
> ps aux | grep solr
>
[solr@faspbsy0002 database-backups]$ ps aux | grep solr
solr 72072 1.6 33.4 22847816 10966476 ? Sl 13:35 1:36 java
-server -Xms16g -Xmx16g -XX:+UseG1GC -XX:+ParallelRefProcEnabled
-XX:G1HeapRegionSize=8m -XX
Maybe you can identify some critical queries in the logfiles?
What is the total size of the index?
What client are you using on the web app side? Are you reusing clients or
creating a new one for every query?
> Am 29.06.2020 um 21:14 schrieb Ryan W :
>
> On Mon, Jun 29, 2020 at 1:49 PM David Hast
On Mon, Jun 29, 2020 at 1:49 PM David Hastings
wrote:
> little nit picky note here, use 31gb, never 32.
Good to know.
Just now I got this output from bin/solr status:
"solr_home":"/opt/solr/server/solr",
"version":"7.7.2 d4c30fc2856154f2c1fefc589eb7cd070a415b94 - janhoy -
2019-05-28 23:37
ps aux | grep solr
should show you all the parameters Solr is running with, as would the
admin screen. You should see something like:
-XX:OnOutOfMemoryError=your_solr_directory/bin/oom_solr.sh
And there should be some logs laying around if that was the case
similar to:
$SOLR_LOGS_DIR/solr_oom_ki
little nit picky note here, use 31gb, never 32.
On Mon, Jun 29, 2020 at 1:45 PM Ryan W wrote:
> It figures it would happen again a couple hours after I suggested the issue
> might be resolved. Just now, Solr stopped running. I cleared the cache in
> my app a couple times around the time that i
It figures it would happen again a couple hours after I suggested the issue
might be resolved. Just now, Solr stopped running. I cleared the cache in
my app a couple times around the time that it happened, so perhaps that was
somehow too taxing for the server. However, I've never allocated so mu
The thing that’s unsettling about this is that, assuming you were hitting OOMs
and were running the OOM-killer script, you _should_ have had very clear
evidence that that was the cause.
If you were not running the killer script, then apologies for not asking about
that in the first place. Java’s p
sometimes just throwing money/ram/ssd at the problem is just the best
answer.
On Mon, Jun 29, 2020 at 11:38 AM Ryan W wrote:
> Thanks everyone. Just to give an update on this issue, I bumped the RAM
> available to Solr up to 16GB a couple weeks ago, and haven’t had any
> problem since.
>
>
> On
Thanks everyone. Just to give an update on this issue, I bumped the RAM
available to Solr up to 16GB a couple weeks ago, and haven’t had any
problem since.
On Tue, Jun 16, 2020 at 1:00 PM David Hastings
wrote:
> me personally, around 290gb. as much as we could shove into them
>
> On Tue, Jun 1
me personally, around 290gb. as much as we could shove into them
On Tue, Jun 16, 2020 at 12:44 PM Erick Erickson
wrote:
> How much physical RAM? A rule of thumb is that you should allocate no more
> than 25-50 percent of the total physical RAM to Solr. That's cumulative,
> i.e. the sum of the h
How much physical RAM? A rule of thumb is that you should allocate no more
than 25-50 percent of the total physical RAM to Solr. That's cumulative,
i.e. the sum of the heap allocations across all your JVMs should be below
that percentage. See Uwe Schindler's MMapDirectory blog...
Shot in the dark.
To add to this, I generally have Solr start with this:
-Xms31000m -Xmx31000m
and the only other thing that runs on them are MariaDB Galera cluster
nodes that are not in use (aside from replication).
The 31gb is not an accident either; you don't want 32gb.
On Tue, Jun 16, 2020 at 11:26 AM Shawn H
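The 31g-not-32g advice relates to compressed object pointers: at heaps of roughly 32GB and above, the JVM disables CompressedOops and every object reference doubles in size, so a 32GB heap can effectively hold fewer objects than a 31GB one. Whether a given heap size still gets compressed oops can be checked with a command along these lines (a sketch; requires a local JDK):

```
java -Xmx31g -XX:+PrintFlagsFinal -version | grep UseCompressedOops
```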
On 6/11/2020 11:52 AM, Ryan W wrote:
I will check "dmesg" first, to find out any hardware error message.
[1521232.781801] Out of memory: Kill process 117529 (httpd) score 9 or
sacrifice child
[1521232.782908] Killed process 117529 (httpd), UID 48, total-vm:675824kB,
anon-rss:181844kB, file-r
XX:NewRatio=3 -XX:NewSize=134217728
>> >>>> -XX:NumberOfGCLogFiles=9 -XX:OldPLABSize=16 -XX:OldSize=402653184
>> >>>> -XX:-OmitStackTraceInFastThrow
>> >>>> -XX:OnOutOfMemoryError=/opt/solr/bin/oom_solr.sh 8983
>> >>> /opt/solr/server/l
/solr/bin/oom_solr.sh 8983
> >>> /opt/solr/server/logs
> >>>> -XX:ParallelGCThreads=4 -XX:+ParallelRefProcEnabled
> >>>> -XX:PretenureSizeThreshold=67108864 -XX:+PrintGC
> >>>> -XX:+PrintGCApplicationStoppedTime -XX:+PrintGCDateStamps
> >>>> -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -X
reshold=67108864 -XX:+PrintGC
>>>> -XX:+PrintGCApplicationStoppedTime -XX:+PrintGCDateStamps
>>>> -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintHeapAtGC
>>>> -XX:+PrintTenuringDistribution -XX:SurvivorRatio=4
>>>> -XX:TargetSurvivorRatio=90 -XX:ThreadS
leRotation
>> > -XX:+UseParNewGC
>> >
>> > Buried in there I see "OnOutOfMemoryError=/opt/solr/bin/oom_solr.sh".
>> But I
>> > think this is just a setting that indicates what to do in case of an
>> OOM.
>> > And if I look in that oo
yError=/opt/solr/bin/oom_solr.sh".
> But I
> > think this is just a setting that indicates what to do in case of an OOM.
> > And if I look in that oom_solr.sh file, I see it would write an entry to
> a
> > solr_oom_kill log. And there is no such log in the logs direct
rver,
>> for instance, top, vmstat, lsof, iostat ... or simply install some nice
>> free monitoring tool into this system, like monit, monitorix, nagios.
>> Good luck!
>>
>>
>> From: Ryan W
>> Sent: Thursday, June 11, 202
On 6/10/2020 12:13 PM, Ryan W wrote:
Good luck!
From: Ryan W
Sent: Thursday, June 11, 2020 2:13 AM
To: solr-user@lucene.apache.org
Subject: Re: How to determine why solr stops running?
Hi all,
People keep suggesting I check the logs for errors. What do those errors
look like? Does anyone have examples of the text of a Solr oom error? Or
the text of any other errors I should be looking for the next time solr
fails? Are there phrases I should grep for in the logs? Should I be
To add to what Dave said, if you have a particular machine that’s prone to
suddenly stopping, that’s usually a red flag that you should seriously
think about hardware issues.
If the problem strikes different machines, then I agree with Shawn that
the first thing I’d be suspicious of is OOM errors
I’ll add that whenever I’ve had a solr instance shut down, for me it’s been a
hardware failure. Either the ram or the disk got a “glitch” and both of these
are relatively fragile and wear and tear type parts of the machine, and should
be expected to fail and be replaced from time to time. Solr i
On 5/14/2020 7:22 AM, Ryan W wrote:
I manage a site where solr has stopped running a couple times in the past
week. The server hasn't been rebooted, so that's not the reason. What else
causes solr to stop running? How can I investigate why this is happening?
Any situation where Solr stops run
I assumed it does, based on your description. If you installed it as a service
(systemd), then systemd can start the service again if it fails. (something
like Restart=always in your [Service] definition).
But if it doesn’t restart automatically now, I think it’s easier to
troubleshoot: just ch
"If Solr auto-restarts"
It doesn't auto-restart. Is there some auto-restart functionality? I'm
not aware of that.
On Mon, Jun 8, 2020 at 7:10 AM Radu Gheorghe
wrote:
> Hi Ryan,
>
> If Solr auto-restarts, I suppose it's systemd doing that. When it restarts
> the Solr service, systemd should lo
Hi Ryan,
If Solr auto-restarts, I suppose it's systemd doing that. When it restarts
the Solr service, systemd should log this (maybe something like: journalctl
--no-pager | grep -i solr).
Then you can go into your Solr logs and check what happened right before that
time. Also, check system logs for
Happened again today. Solr stopped running. Apache hasn't stopped in 10
days, so this is not due to a server reboot.
Solr is not being run with the oom-killer. And when I grep for ERROR in
the logs, there is nothing from today.
On Mon, May 18, 2020 at 3:15 PM James Greene
wrote:
> I usually do
To: solr-user@lucene.apache.org
Subject: Re: How to restore deleted collection from filesystem
ATTENTION: External Email – Be Suspicious of Attachments, Links and Requests
for Login Information.
See inline.
> On May 21, 2020, at 10:13 AM, Kommu, Vinodh K. wrote:
>
> Thanks Eric for quick
If you boost it high enough it should, but you’re right it’s not guaranteed.
The number is “whatever works”, it’s just a number the score is multiplied
by.
But another, not costly, but guaranteed to work would be have your app
do a real-time get on the ID in parallel with the main query. Real-tim
Thanks Erick.
OR'ing ID:"MOD2012A"^1000 with the original query will not always
guarantee that the record with the matching ID will be the #1 hit on the
list, or will it?
Also, why did you boost by a factor of 1000? I never figured out what the
number means for boosting. I have seen 10, 100
Try something like q=whatever OR id:whatever^1000
I’d put it in quotes for the id clause, and do look at what the parsed
query looks like when you specify &debug=query. The reason I
recommend this is you’ll no doubt try something like
q=id:download MOD2012A manual
without quotes and be ver
-Original Message-
> From: Erick Erickson
> Sent: Thursday, May 21, 2020 6:17 PM
> To: solr-user@lucene.apache.org
> Subject: Re: How to restore deleted collection from filesystem
>
> ATTENTION: External Email – Be Suspicious of Attachments, Links and Requests
> for L
ll
work?
Lastly anything needs to be aware in core.properties in newly created
collection or any reference pointing to new collection specific?
Thanks & Regards,
Vinodh
-Original Message-
From: Erick Erickson
Sent: Thursday, May 21, 2020 6:17 PM
To: solr-user@lucene.apache.org
Subject
So what I’m reading here is that you have the _data_ saved somewhere, right? By
“data” I just mean the data directories under the replica.
1> Go ahead and recreate the collection. It _must_ have the same number of
shards. Make it leader-only, i.e. replicationFactor == 1
2> The collection will be
I usually do a combination of grepping for ERROR in solr logs and checking
journalctl to see if an external program may have killed the process.
Cheers,
/
* James Austin Greene
* www.jamesaustingreene.com
* 336-lol-nerd
ps aux | grep solr
on a *.nix system will show you all the runtime parameters.
> On May 18, 2020, at 12:46 PM, Ryan W wrote:
>
> Is there a config file containing the start params? I run solr like...
>
> bin/solr start
>
> I have not seen anything in the logs that seems informative. When I g
Is there a config file containing the start params? I run solr like...
bin/solr start
I have not seen anything in the logs that seems informative. When I grep in
the logs directory for 'memory', I see nothing besides a couple entries
like...
2020-05-14 13:05:56.155 INFO (main) [ ] o.a.s.h.a.
Probably, but check that you are running with the oom-killer, it'll be in
your start params.
But absent that, something external will be the culprit, Solr doesn't stop
by itself. Do look at the Solr log once things stop, it should show if
someone or something stopped it.
On Mon, May 18, 2020, 10:
I don't see any log file with "oom" in the file name. Does that mean there
hasn't been an out-of-memory issue? Thanks.
On Thu, May 14, 2020 at 10:05 AM James Greene
wrote:
> Check the log for for an OOM crash. Fatal exceptions will be in the main
> solr log and out of memory errors will be in
Check the log for an OOM crash. Fatal exceptions will be in the main
solr log and out-of-memory errors will be in their own -oom log.
I've encountered quite a few solr crashes and usually it's when there's a
threshold of concurrent users and/or indexing happening.
On Thu, May 14, 2020, 9:2
Any reference on this? Is it actually possible?
On Tue, May 12, 2020 at 2:21 PM Vignan Malyala wrote:
> How to add mlt handler in Solr Cloud?
>
> There is very limited documentation on this. Using search component with
> mlt=true doesn't include all configurations like boosting and mlt filters.
>
Does anyone know how to add the MLT handler in SolrCloud?
On Tue, May 12, 2020 at 2:21 PM Vignan Malyala wrote:
> How to add mlt handler in Solr Cloud?
>
> There is very limited documentation on this. Using search component with
> mlt=true doesn't include all configurations like boosting and mlt filter
Authentication works the same in 7.x, but the admin UI is not aware of it, so
you will get a browser prompt for the password if using basic auth. Upgrade to
8.x for the login screen.
Jan Høydahl
> 26. apr. 2020 kl. 06:05 skrev Amy Bai :
>
> Thanks so much for your kindly reply.
> Another question, I