Hi Jason,
Thanks for this. Without screenshots this is what I get:
Site A
Last Modified: less than a minute ago
Num Docs: 5455
Max Doc: 5524
Heap Memory Usage: -1
Deleted Docs: 69
Version: 699
Segment Count: 3
Current: Y
Site B
Last Modified: 3 days ago
Num Docs: 5454
Max Doc: 5523
Heap Memory Usage: -1
De
Sorry to hijack this a little bit. Shawn, what's the calculation for the
size of the filter cache?
Is that 1 bit per document in the core / shard?
Thanks
On Fri, 5 Jun 2020 at 17:20, Shawn Heisey wrote:
> On 6/5/2020 12:17 AM, Srinivas Kashyap wrote:
> > q=*:*&fq=PARENT_DOC_ID:100&fq=MODIFY_TS:[
Hi David,
sorry for my late answer. I created simple test scenarios on github
https://github.com/hlavki/solr-unified-highlighter-test[1]
There are 2 documents, both bigger sized.
Test method:
https://github.com/hlavki/solr-unified-highlighter-test/blob/master/src/test/java/com/example/Highlight
Hi Ryan,
If Solr auto-restarts, I suppose it's systemd doing that. When it restarts
the Solr service, systemd should log this (maybe something like: journalctl
--no-pager | grep -i solr).
Then you can go in your Solr logs and check what happened right before that
time. Also, check system logs for
Hi Tom,
To your last two questions, I'd like to float an alternative design: have
dedicated "hot" and "warm" nodes. That is, the 2020 lists go to the hot
tier, and the 2019, 2018, and 2017 lists go to the warm tier.
Then you can scale the hot tier based on your query load. For the warm
tier, I assume ther
It’s _bounded_ by MaxDoc/8 + (some overhead). The overhead is
both the map overhead and the representation of the query.
This is an upper bound; the full bitset is not stored if
only a few entries match the filter. In that case the
individual doc IDs are stored instead. Consider if maxDoc is 1M and only 2 d
When highlighting, the stored data for the field is re-analyzed against the
query based on the field you’re highlighting. My bet is that if you query just
“q=doc_text:mosh” you will not get a hit. Check your text_ws fieldType, it’s
probably case sensitive. So if you changed the doc_text type to
"If Solr auto-restarts"
It doesn't auto-restart. Is there some auto-restart functionality? I'm
not aware of that.
On Mon, Jun 8, 2020 at 7:10 AM Radu Gheorghe
wrote:
> Hi Ryan,
>
> If Solr auto-restarts, I suppose it's systemd doing that. When it restarts
> the Solr service, systemd should lo
"A simple cronjob with /bin/solr status and /bin/solr start should do the trick."
I don't know what that would look like. Wouldn't the job have to check the
status and only give the start command if solr isn't running? I don't
think it's possible to put logic in a cron job. I think it would have
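For what it's worth, a cron entry can carry shell logic directly: the shell's `||` operator runs the right-hand command only when the left-hand one exits non-zero, which `bin/solr status` does when Solr isn't running. A minimal sketch (the /opt/solr install path is an assumption, not something stated in this thread):

```shell
# Hypothetical crontab line: every 5 minutes, start Solr only when
# "bin/solr status" reports (via a non-zero exit code) that it is down.
# */5 * * * * /opt/solr/bin/solr status >/dev/null 2>&1 || /opt/solr/bin/solr start
```

That said, the systemd approach discussed elsewhere in this thread handles the same problem more robustly.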
Hi
I'm trying to do atomic updates with an 'add-distinct' modifier in a Solr 7
cloud. It seems to behave like an 'add' and I end up with double values in
my multiValued field. This only happens with multiple values for the field
in an update (cat:{"add-distinct":["a","b","d"]} exhibits this
proble
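For context, the atomic-update document shape being described would look roughly like this (a sketch; the id value is a placeholder):

```
[{"id": "doc1", "cat": {"add-distinct": ["a", "b", "d"]}}]
```

Per the reference guide, add-distinct should only append values not already present in the multiValued field; the report here is that with multiple values per request it behaves like a plain add.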
Use the solution described by Walter. This allows you to automatically restart
in case of failure and is also cleaner than defining a cronjob. Otherwise this
would be another dependency one needs to keep in mind: if there is an
issue and someone does not know the system, that person has to
A simple Perl script would be able to cover this, I have a cron job Perl script
that does a search with an expected result, if the result isn’t there it fails
over to a backup search server, sends me an email, and I fix what’s wrong. The
backup search server is a direct clone of the live server
I could write a script, too, though I’d do it with straight shell code. But
then I’d have to test it, check it in somewhere, document it for ops, install
it, ...
Instead, when we switch from monit, I'll start with one of these systemd
configs.
https://gist.github.com/hammady/3d7b5964c7b0f90997
Great, thanks Erick
On Mon, 8 Jun 2020 at 13:22, Erick Erickson wrote:
> It’s _bounded_ by MaxDoc/8 + (some overhead). The overhead is
> both the map overhead and the representation of the query.
>
> This is an upper bound, the full bitset is not stored if there
> are few entries that match the
>
> Why have a cold backup and then switch?
>
my current set up is:
1. master indexer
2. master slave on a release/commit basis
3. 3 live slave searching nodes in two different data centers
the three live nodes are in front of nginx load balancing and they are
mostly hot but not all of them, i f
I agree with the systemd guys if you’re unfamiliar with scripting this sort of
thing. I’d wind up with piping through awk and grep and the like, which is as
clear as mud if you don’t already know it. Might as well learn and use the
modern tools if you can. We have an old-school hard division
Hi,
It appears that a query criterion is mandatory for a join. Taking this example
from the documentation: fq={!join from=id fromIndex=movie_directors
to=director_id}has_oscar:true. What if I want to find all movies that have a
director (regardless of whether they have won an Oscar or not)? This
or probably -director_id:[* TO *]
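In other words, to match every movie that has any director at all, the has_oscar:true clause can be replaced with a match-all query (a sketch, untested against the poster's schema):

```
fq={!join from=id fromIndex=movie_directors to=director_id}*:*
```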
On Mon, Jun 8, 2020 at 10:56 PM Hari Iyer wrote:
> Hi,
>
> It appears that a query criterion is mandatory for a join. Taking this
> example from the documentation: fq={!join from=id fromIndex=movie_directors
> to=director_id}has_oscar:true. What if I want to find
Hello, Yasufumi-san and Solr Community:
Thank you for your suggestion.
When I added the parameter hl.maxAnalyzedChars=-1, I could highlight for
long text.
Sincerely,
Kaya Ota
On Sat, Jun 6, 2020 at 20:39 Yasufumi Mizoguchi wrote:
> Hi, Kaya.
>
> How about using hl.maxAnalyzedChars parameter ?
>
> Thanks,
> Ya
Hi Shawn,
It's a vague question and I haven't tried it out yet.
Can I instead mention query as below:
Basically instead of
q=*:*&fq=PARENT_DOC_ID:100&fq=MODIFY_TS:[1970-01-01T00:00:00Z TO
*]&fq=PHY_KEY2:"HQ012206"&fq=PHY_KEY1:"BAMBOOROSE"&rows=1000&sort=MODIFY_TS
desc,LOGICAL_SECT_NAME asc,
I assumed it does, based on your description. If you installed it as a service
(systemd), then systemd can start the service again if it fails. (something
like Restart=always in your [Service] definition).
But if it doesn’t restart automatically now, I think it’s easier to
troubleshoot: just ch
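Radu's Restart=always suggestion can be sketched as a systemd drop-in (unit name and file location are assumptions; they depend on how Solr was installed as a service):

```
# /etc/systemd/system/solr.service.d/restart.conf -- hypothetical drop-in
[Service]
Restart=always
RestartSec=10
```

After adding it, `systemctl daemon-reload` makes systemd pick up the change.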
Any idea?
I still can't get TolerantUpdateProcessorFactory working; Solr exits on any
error without any tolerance. Any suggestions will be appreciated.
curl "http://localhost:7070/solr/mycore/update?update.chain=tolerant-chain&maxErrors=100" \
  -d @data.xml
100
400
1
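For reference, the chain named by update.chain=tolerant-chain would be declared in solrconfig.xml roughly like this (a sketch, not the poster's actual config; note that, as Shawn's reply explains, this only tolerates per-document errors after the request body itself has been parsed):

```
<updateRequestProcessorChain name="tolerant-chain">
  <processor class="solr.TolerantUpdateProcessorFactory">
    <int name="maxErrors">10</int>
  </processor>
  <processor class="solr.LogUpdateProcessorFactory"/>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>
```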
If your XML or JSON can't be parsed, your content never makes it to the
update chain.
It looks like you're trying to index non-UTF-8 data. You can set the
encoding of your XML in the Content-Type header of your POST request.
-H 'Content-Type: text/xml; charset=GB18030'
JSON only allows UTF-8, UT
On 5/14/2020 7:22 AM, Ryan W wrote:
I manage a site where solr has stopped running a couple times in the past
week. The server hasn't been rebooted, so that's not the reason. What else
causes solr to stop running? How can I investigate why this is happening?
Any situation where Solr stops run
Thanks for your reply. This is one of the examples where it fails. POSTing
with charset=utf-8 or another charset didn't help; a CTRL-CHAR "^" error is
found in the title field. I hope Solr can simply skip this record and go
ahead and index the rest of the data.
9780373773244
9780373773244