: Of course, here is the full stack trace (collection 'techproducts' with
: just one core to make it easier):
Ah yeah ... see -- this looks like a mistake introduced at some point...
: Caused by: org.apache.solr.core.SolrResourceNotFoundException: Can't
: find resource 'elevate.xml' in classpath
: I need to have the elevate.xml file updated frequently and I was wondering
: if it is possible to put this file in the dataDir folder when using Solr
: Cloud. I know that this is possible in the standalone mode, and I haven't
: seen in the documentation [1] that it can not be done in Cloud.
:
: Let me add some background. A user triggers an operation which under the
: hood needs to update a single field. Atomic update fails with a message
: that one of the mandatory fields is missing (which is strange by
: itself). When I query Solr for the exact document (fq with the document
: i
FWIW: that log message was added to branch_8x by 3c02c9197376 as part of
SOLR-15052 ... it's based on master commit 8505d4d416fd -- but that does
not add that same logging message ... so it definitely smells like a
mistake to me that 8x would add this INFO level log message that master
doesn't.
: there are not many OOM stack details printed in the solr log file, it's
: just saying not enough memory, and it's killed by oom.sh (solr's script).
not many isn't the same as none ... can you tell us *ANYTHING* about what
the logs look like? ... as i said: it's not just the details of the OOM
: Is the matter to use the config file ? I am using custom config instead
: of _default, my config is from solr 8.6.2 with custom solrconfig.xml
Well, it depends on what's *IN* the custom config ... maybe you are using
some built in functionality that has a bug but didn't get triggered by my
FWIW, I just tried using 8.7.0 to run:
bin/solr -m 200m -e cloud -noprompt
And then set up the following bash one-liner to poll the heap metrics...
while : ; do date; echo "node 8983" && (curl -sS
http://localhost:8983/solr/admin/metrics | grep memory.heap); echo "node 7574"
&& (curl -sS http://localhost:7574/solr/admin/metrics | grep memory.heap); sleep 30; done
: Hi, I am using solr 8.7.0, centos 7, java 8.
:
: I just created a few collections and no data; memory keeps growing but
: never goes down, until I get an OOM and Solr is killed
Are you using a custom config set, or just the _default configs?
if you start up this single node with something like -X
: I am wondering if there is a way to warmup new searcher on commit by
: rerunning queries processed by the last searcher. May be it happens by
: default but then I can't understand why we see high query times if those
: searchers are being warmed.
it only happens by default if you have an 'autowarmCount' set on your caches ...
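For reference, autowarming and explicit warming queries are configured in solrconfig.xml roughly like this (a sketch; the cache sizes, query, and sort are only illustrative, and "price" is a made-up field):

  <queryResultCache class="solr.CaffeineCache" size="512" initialSize="512" autowarmCount="64"/>
  <filterCache      class="solr.CaffeineCache" size="512" initialSize="512" autowarmCount="64"/>

  <listener event="newSearcher" class="solr.QuerySenderListener">
    <arr name="queries">
      <lst>
        <str name="q">*:*</str>
        <str name="sort">price asc</str>
      </lst>
    </arr>
  </listener>

Note that autowarmCount replays the most recently used keys from the old caches against the new searcher, which is close to, but not the same as, rerunning every query the last searcher served.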
: You need to update EVERY solrconfig.xml that the JVM is loading for this to
: actually work.
that has not been true for a while, see SOLR-13336 / SOLR-10921 ...
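Since that change the global cap is a single entry in solr.xml; a sketch using the value from the message quoted below:

  <solr>
    <int name="maxBooleanClauses">${solr.max.booleanClauses:2048}</int>
    ...
  </solr>

In the stock 8.x configs both solr.xml and solrconfig.xml reference the same solr.max.booleanClauses property, so starting Solr with -Dsolr.max.booleanClauses=4096 should raise both at once without editing every solrconfig.xml.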
: > 2. updated solr.xml :
: > <int name="maxBooleanClauses">${solr.max.booleanClauses:2048}</int>
:
: I don't think it's currently possible to set the value with solr
Can't you just configure nagios to do a "negative match" against
numFound=0 ? ... ie: "if response matches 'numFound=0' fail the check."
(IIRC there's an '--invert-regex' option for this)
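With the standard check_http plugin that would look something like this (host, port, collection, and query are placeholders):

  check_http -H solr.example.com -p 8983 \
    -u '/solr/mycollection/select?q=id:sentinel-doc' \
    -r '"numFound":0' --invert-regex

i.e. the check goes CRITICAL when the response body contains "numFound":0, and passes otherwise.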
: Date: Mon, 28 Dec 2020 14:36:30 -0600
: From: Dmitri Maziuk
: Reply-To: solr-user@lucene.apache.org
: T
https://lucene.apache.org/solr/guide/8_6/authentication-and-authorization-plugins.html
*Authentication* is global, but *Authorization* can be configured to use
rules that restrict permissions on a per collection basis...
https://lucene.apache.org/solr/guide/8_6/rule-based-authorization-plugin.
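As a sketch, the authorization half of security.json can grant per-collection read access like this (role, user, and collection names are made up; the authentication section is omitted):

  "authorization": {
    "class": "solr.RuleBasedAuthorizationPlugin",
    "permissions": [
      { "name": "read", "collection": "collectionA", "role": "groupA" },
      { "name": "read", "collection": "collectionB", "role": "groupB" },
      { "name": "all",  "role": "admin" }
    ],
    "user-role": { "alice": ["groupA"], "bob": ["groupB"], "solradmin": ["admin"] }
  }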
: I need to integrate Semantic Knowledge Graph with Solr 7.7.0 instance.
If you're talking about the work Trey Grainger has written some papers on
that was originally implemented in this repo...
https://github.com/careerbuilder/semantic-knowledge-graph
..then that work was incorporated into Solr
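In 7.4 and later (so it applies to a 7.7.0 instance) that surfaces as the relatedness() aggregation in JSON Faceting; a rough sketch, where the collection, field, and queries are only placeholders:

  curl http://localhost:8983/solr/mycollection/query -d '
  {
    "query": "body:cycling",
    "params": { "fore": "body:cycling", "back": "*:*" },
    "facet": {
      "related_terms": {
        "type": "terms",
        "field": "body",
        "sort": "r desc",
        "facet": { "r": "relatedness($fore,$back)" }
      }
    }
  }'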
Just to provide a little closure, it appears that this issue is fixed in
Java 14.0.2.
Chris
On Mon, Jul 27, 2020 at 5:38 PM Chris Larsson wrote:
> Nice... That's the code I was just looking at.
>
>
> https://github.com/apache/lucene-solr/blob/branch_8_6/solr/core/src/jav
: Hmm, setting -Dfile.encoding=UTF-8 solves the problem. I have to now check
: which component of the application screws it up, but at the moment I do NOT
: believe it is related to Solrj.
You can use the "forbidden-apis" project to analyze your code and look for
uses of APIs that depend on the
> I suspect the message is a bit misleading and the formatter in that call
> is null or some such. Why I
> haven’t a clue.
>
> Best,
> Erick
>
>
> > On Jul 27, 2020, at 3:54 PM, Chris Larsson wrote:
> >
> > Thank you Erick. I appreciate you taking the time to respon
> Take a look at:
>
> https://issues.apache.org/jira/browse/SOLR-13606
>
> It’s actually a weirdness with Java. In that JIRA David Smiley
> suggests a way to deal with it, but I confess I haven’t a clue
> about the nuances there.
>
> Best,
> Erick
>
>
>
> > On Jul 27, 2
status=0 QTime=0
2020-07-27 18:02:53.954 INFO (qtp997850486-21) [ ] o.a.s.s.HttpSolrCall
[admin] webapp=null path=/admin/info/system
params={wt=json&_=1595872899384} status=0 QTime=6
2020-07-27 18:02:54.022 INFO (qtp997850486-19) [ ] o.a.s.s.HttpSolrCall
[admin] webapp=null path=/admin/info/system
params={wt=json&_=1595872899384} status=0 QTime=6
Chris
: [field definitions with indexed="false" were posted here; the markup was stripped by the archive]
...
: I was expecting that for field "fieldA", indexed would be marked as false and
: it would not be part of the index. But the Solr admin "SCHEMA page" (we get this
: option after selecting the collection name in the drop-down menu) is showing
: it as an indexed field (green tick mark
nd this if you require the first form functionality by,
> say,
> including a boolean field “has_tags”, then the first one would be
>
> fq=has_tags:true -tags:email
>
> Best,
> Erick
>
> > On Jul 14, 2020, at 8:05 AM, Emir Arnautović <
> emir.arnauto...@sematext
I'm trying to understand the difference between something like
fq={!cache=false}(tag:* -tag:email) which is very slow compared to
fq={!cache=false}(*:* -tag:email) on Solr 7.7.1.
I believe in the case of `tag:*` Solr spends some effort to gather all of
the documents that have a value for `tag` and
The JSON based query APIs (including JSON Faceting) use a (unfortunately
subtly different) '${NAME}' syntax for dereferencing variables in the
"body" of a JSON data structure...
https://lucene.apache.org/solr/guide/8_5/json-request-api.html#parameter-substitution-macro-expansion
...but note
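For example (this is essentially the example from that section of the ref guide), request parameters can be referenced from the JSON body with ${...}:

  curl 'http://localhost:8983/solr/techproducts/query?FIELD=name&TERM=memory' -d '
  {
    "query": "${FIELD}:${TERM}"
  }'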
eters.html#cache-parameter
>
> Hope it helps,
> Alex.
>
> On Thu., Jul. 9, 2020, 2:05 p.m. Chris Dempsey, wrote:
>
> > Hi all! In a collection where we have ~54 million documents we've noticed
> > running a query with the following:
> >
> > "fq&qu
Hi all! In a collection where we have ~54 million documents we've noticed
running a query with the following:
"fq":["{!cache=false}_class:taggedTickets",
"{!cache=false}taggedTickets_ticketId:100241",
"{!cache=false}companyId:22476"]
when I debugQuery I see:
"parsed_filter_querie
@Mikhail
Thanks for the link! I'll read through that.
On Tue, Jun 30, 2020 at 6:28 AM Chris Dempsey wrote:
> @Erick,
>
> You've got the idea. Basically the users can attach zero or more tags (*that
> they create*) to a document. So as an example say they've created
Thought I'd check I hadn't overlooked any other
options. :)
On Mon, Jun 29, 2020 at 3:54 PM Mikhail Khludnev wrote:
> Hello, Chris.
> I suppose index time analysis can yield these terms:
> "paid","ms-reply-unpaid","ms-reply-paid", and thus let you
u can process the data such that you won’t need wildcards.
>
> Best,
> Erick
>
> > On Jun 29, 2020, at 11:16 AM, Chris Dempsey wrote:
> >
> > Hello, all! I'm relatively new to Solr and Lucene (*using Solr 7.7.1*)
> but
> > I'm looking into option
Hello, all! I'm relatively new to Solr and Lucene (*using Solr 7.7.1*) but
I'm looking into options for optimizing something like this:
> fq=(tag:* -tag:*paid*) OR (tag:* -tag:*ms-reply-unpaid*) OR
tag:*ms-reply-paid*
It's probably not a surprise that we're seeing performance issues with
somethin
Trying to track down an issue I am seeing with Solr 8.5.1 running on CentOS
8.2 and Java 14.0.1 (openJDK). My test system was running fine before
updating the OS packages and rebooting at which time it started throwing
the following error:
2020-06-19 20:23:37.877 INFO (main) [ ] o.e.j.s.Server
: Subject: Does 8.5.2 depend on 8.2.0
No. The code certainly doesn't, but i suppose it's possible some metadata
somewhere in some pom file may be broken?
: My build.gradle has this:
: compile(group: 'org.apache.solr', name: 'solr-solrj', version:'8.5.2')
: Nowhere is there a reference to 8
First off: Forgive me if my comments/questions are redundant or uninformed
based on the larger discussion taking place. I have not
caught up on the whole thread before replying -- but that's solely based
on a lack of time on my part, not a lack of willingness to embrace this
change.
>From
: Subject: TimestampUpdateProcessorFactory updates the field even if the value
: if present
:
: Hi,
:
: Following is the update request processor chain.
:
: <processor class="solr.TimestampUpdateProcessorFactory">
:   <str name="fieldName">index_time_stamp_create</str>
: </processor>
:
: And, here is how the field is defined in
fq clause instead, the result set will
> be put into the filter cache and be reused assuming you want to do this
> repeatedly.
>
> BTW, Solr doesn't use strict Boolean logic, which may be a bit confusing.
> Google for Chris Hostetter's (Hossman) blog at Lucidworks for a great
>
I'm new to Solr and made an honest stab at finding this info in the docs.
I'm working on an update to an existing large collection in Solr 7.7 to add
a BoolField to mark it as "soft deleted" or not. My understanding is that
updating the schema will mean the new field will only exist and have a
val
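As a sketch (the field and collection names are placeholders), the new flag can be added through the Schema API; existing documents won't have a value for it until they are updated or reindexed:

  curl -X POST -H 'Content-type:application/json' \
    http://localhost:8983/solr/mycollection/schema -d '
  {
    "add-field": {
      "name": "soft_deleted",
      "type": "boolean",
      "indexed": true,
      "stored": true,
      "default": "false"
    }
  }'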
: Is what is shown in "analysis" the same as what is stored in a field?
https://lucene.apache.org/solr/guide/8_5/analyzers.html
The output of an Analyzer affects the terms indexed in a given field (and
the terms used when parsing queries against those fields) but it has no
impact on the store
: 4) A query with different fq.
:
http://localhost:8984/solr/techproducts/select?q=popularity:[5%20TO%2012]&fq=manu:samsung
...
: 5) A query with the same fq again (fq=manu:samsung OR manu:apple)the
: numbers don't get update for this fq hereafter for subsequent searches
:
:
http://l
The goal you are describing doesn't really sound at all like faceting --
it sounds like what you want might be "grouping" (or collapse/expand)
... OR: depending on how you index your data perhaps what you really
want is "nested documents" ... or maybe maybe if youre usecase is simple
enough j
: I was trying to analyze the filter cache performance and noticed a strange
: thing. Upon searching with fq, the entry gets added to the cache the first
: time. Observing from the "Stats/Plugins" tab on Solr admin UI, the 'lookup'
: and 'inserts' count gets incremented.
: However, if I search wi
/browse/LUCENE-9315
: On Mon, Apr 6, 2020, 20:43 Chris Hostetter wrote:
:
: >
: > : I read your attached blog post (and more) but still the penny hasn't
: > dropped
: > : yet about what causes the operator clash when the default operator is
: > AND.
: > : I read that when q.op=
: Solr/Lucene do not employ boolean logic. See Hossman’s excellent post:
:
: https://lucidworks.com/post/why-not-and-or-and-not/
:
: Until you internalize this rather subtle difference, you’ll be surprised. A
lot ;).
:
: You can make query parsing look a lot like boolean logic by carefully usin
: I read your attached blog post (and more) but still the penny hasn't dropped
: yet about what causes the operator clash when the default operator is AND.
: I read that when q.op=AND, OR will change the left (if not MUST_NOT) and
: right clause Occurs to SHOULD - what that means is that the "order
: I am working with a customer who needs to be able to query various
: account/customer ID fields which may or may not have embedded dashes.
: But they want to be able to search by entering the dashes or not and by
: entering partial values or not.
:
: So we may have an account or customer I
: Is the documentation wrong or have I misunderstood it?
The documentation is definitely wrong, thanks for pointing this out...
https://issues.apache.org/jira/browse/SOLR-14383
-Hoss
http://www.lucidworks.com/
: Using solr 8.3.0 it seems like required operator isn't functioning properly
: when default conjunction operator is AND.
You're mixing the "prefix operators" with the "infix operators" which is
always a recipe for disaster.
The use of q.op=AND vs q.op=OR in these examples only
complicates
: So, I thought it can be simplified by moving this state transitions and
: processing logic into Solr by writing a custom update processor. The idea
: occurred to me when I was thinking about Solr serializing multiple
: concurrent requests for a document on the leader replica. So, my thought
: p
It sounds like fundamentally the problem you have is that you want solr to
"block" all updates to docId=X ... at the update processor chain level ...
until an existing update is done.
but solr has no way to know that you want to block at that level.
ie: you asked...
: In the case of multiple
: docid is the natural order of the posting lists, so there is no sorting
effort.
: I expect that means “don’t sort”.
basically yes, as documented in the comment right above the lines of code
linked to.
: > So no one knows this then?
: > It seems like a good opportunity to get some performance!
: We think this is a bug (silently dropping commits even if the client
: requested "waitForSearcher"), or at least a missing feature (commits beging
: the only UpdateRequests not reporting the achieved RF), which should be
: worth a JIRA Ticket.
Thanks for your analysis Michael -- I agree someth
ot:run" it, it
seems to use embedded Jetty rather than embedded Tomcat. I *think* it's
because "solr-core" has transitive dependency on jetty jars. I will file a
Jira when I get to the bottom of this as well.
Thanks for getting back to me.
-Chris
On 1/16/20, 1
(sorry for bad formatting Outlook-for-Mac doesn't support Internet quoting)
Thanks Mark, I did that until I finally was able to exclude it altogether.
-Chris
On 1/17/20, 10:20 AM, "Mark H. Wood" wrote:
For the version problem, I would try adding
I may be misunderstanding something in your setup, and/or I may be
misremembering things about Solr, but I think the behavior you are
seeing is because *search* in solr is "eventually consistent" -- while
"RTG" (ie: using the /get" handler) is (IIRC) "strongly consistent"
ie: there's a rea
: Link to the issue was helpful.
:
: Although, when I take a look at Dockerfile for any Solr version from here
: https://github.com/docker-solr/docker-solr, the very first line says
: FROM openjdk...It
: does not say FROM adoptopenjdk. Am I missing something?
Ahhh ... I have no idea, but at lea
: Subject: How do I send multiple user version parameter value for a delet by id
: request with multiple IDs ?
If you're talking about Solr's normal optimistic concurrency version
constraints then you just pass '_version_' with each delete block...
https://lucene.apache.org/solr/guide/8_4
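Since Solr's JSON update format accepts repeated keys, one request can carry several delete blocks, each with its own version (the collection name and version values below are just placeholders):

  curl -X POST -H 'Content-type:application/json' \
    http://localhost:8983/solr/mycollection/update -d '
  {
    "delete": { "id": "doc1", "_version_": 1679668089276268544 },
    "delete": { "id": "doc2", "_version_": 1679668089276268545 }
  }'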
Just upgrade?
This has been fixed in most recent versions of AdoptOpenJDK builds...
https://github.com/AdoptOpenJDK/openjdk-build/issues/465
hossman@slate:~$ java8
hossman@slate:~$ java -XshowSettings:properties -version 2>&1 | grep -e
vendor -e version
java.class.version = 52.0
java.ru
--- original message ---
It looks to me as though solr-core is not the only artifact with that
dependency. The first thing I would do is examine the output of 'mvn
dependency:tree' to see what has dragged log4j-slf4j-impl in even when
it is excluded from solr-core.
--- end of original message ---
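For example, to narrow the tree down to just that artifact (assuming a standard Maven setup):

  mvn dependency:tree -Dincludes=org.apache.logging.log4j:log4j-slf4j-impl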
I am having several issues due to the slf4j implementation dependency
“log4j-slf4j-impl” being declared as a dependency of solr-core:7.5.0. The
first issue observed when starting the app is this:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in
[jar:file:/Users/ma-w
: Is there a way to construct a query that needs two different parsers?
: Example:
: q={!xmlparser}Hello
: AND
: q={!edismax}text_en:"foo bar"~4
The easiest way to do what you're asking about would be to choose one of
those queries for "scoring" purposes, and put the other one in an "fq"
simpl
: Is there a way to combine paging's cursor feature with the graph query
: parser?
it should work just fine -- the cursorMark logic doesn't care what query
parser you use.
Is there a particular problem you are running into when you send requests
using both?
-Hoss
http://www.lucidworks.com/
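A minimal sketch of combining the two (collection and field names are placeholders; the sort must include the uniqueKey field for cursorMark to work):

  curl 'http://localhost:8983/solr/mycollection/select' \
    --data-urlencode 'q={!graph from=parent_id to=id}id:root_doc' \
    --data-urlencode 'sort=id asc' \
    --data-urlencode 'rows=100' \
    --data-urlencode 'cursorMark=*'

Each response returns a nextCursorMark value to feed back in as cursorMark on the following request.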
: If I use the following query in the browser, I get the expected results at
: the top of the returned values from Solr.
:
: {
: "responseHeader":{
: "status":0,
: "QTime":41,
: "params":{
: "q":"( clt_ref_no:OWL-2924-8 ^2 OR contract_number:OWL-2924-8^2 )",
: "indent":
: > whoa... that's not normal .. what *exactly* does the fieldType declaration
: > (with all analyzers) look like, and what does the field declaration
: > look like?
: >
: >
:
:
:
NOTE: "text_general" != "text_gen_sort"
Assuming your "text_general" declaration looks like it does in the
_default
: > a) What is the fieldType of the uniqueKey field in use?
: >
:
: It is a textField
whoa... that's not normal .. what *exactly* does the fieldType declaration
(with all analyzers) look like, and what does the field declaration
look like?
you should really never use TextField for a uniqueKey ...
Based on the info provided, it's hard to be certain, but reading between
the lines here are the assumptions i'm making...
1) your core name is "dbtr"
2) the uniqueId field for the "dbtr" core is "debtor_id"
..are those assumptions correct?
Two key pieces of information that don't seem to be
: Some of the log files that Solr generated contain <0x00> (null characters)
: in log files (like below)
I don't know of any reason why solr would write any null bytes to the
logs, and certainly not in either of the places mentioned in your examples
(where it would be at the end of an otherwis
: I'm using Solr's cursor mark feature and noticing duplicates when paging
: through results. The duplicate records happen intermittently and appear
: at the end of one page, and the beginning of the next (but not on all
: pages through the results). So if rows=20 the duplicate records would
: Documentation says that we can copy multiple fields using wildcard to one
: or more than one fields.
correct ... the limitation is in the syntax and the ambiguity that would
be unresolvable if you had a wildcard in the dest but not in the source.
the wildcard is essentially a variable. if
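In schema terms (the field names here are only examples, and a wildcard dest has to match a dynamicField):

  <copyField source="*_t" dest="all_text"/>   <!-- wildcard source, explicit dest: fine -->
  <copyField source="*_t" dest="*_txt"/>      <!-- wildcard in both: fine, '*' captures the same prefix -->
  <copyField source="title" dest="*_txt"/>    <!-- wildcard only in the dest: rejected, '*' is unresolvable -->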
This is strange -- I can't reproduce, and I can't see any evidence of a
change to explain why this might have been failing 8 days ago but not any
more.
Are you still seeing this error?
The lines in question are XML comments inside of (example) code blocks (in
the ref-guide source), which is
: We show a table of search results ordered by score (relevancy) that was
: obtained from sending a query to the standard /select handler. We're
: working in the life-sciences domain and it is common for our result sets to
: contain many millions of results (unfortunately). After users browse the
Thanks Erik. I created SOLR-13699. I agree wrt adding a Unit Test, that was
my thinking as well. I am currently working on a test, and then I will
submit my patch.
Thanks,
Chris
On Thu, Aug 15, 2019 at 1:06 PM Erick Erickson
wrote:
> Chris:
>
> I certainly don’t see anything in J
Let me know what you think, and I will log the bug.
I have implemented a fix which I am currently testing and will be happy to
submit a patch, assuming it's agreed that this is not intended behavior.
Thanks,
Chris
: I think by "what query parser" you mean this:
no, that's the fieldType -- what i was referring to is that you are in fact
using "edismax", but with solr 8.1 lowercaseOperators should default to
"false", so my initial guess is probably wrong.
: By "request parameter" I think you are asking wh
what version of solr?
what query parser are you using?
what do all of your request params (including defaults) look like?
it's possible you are seeing the effects of edismax's "lowercaseOperators"
param, which _should_ default to "false" in modern solr, but
in very old versions it defaulted to
changed), or by creating a new set of
AMIs and then terminating the instances. Is there a better way to do this?
I'm not facing any real problems with this setup, but I want to make sure
I'm not missing something obvious.
Thanks,
Chris
will not be split on
whitespace before analysis. See
https://lucidworks.com/2017/04/18/multi-word-synonyms-solr-adds-query-time-support/
."
On Tue, Apr 2, 2019 at 8:11 AM Chris Ulicny wrote:
> Hi all,
>
> We have a multivalued field that has an integer at the beginning followed
> by
My understanding was that the query parser should split the (34 27) into
search terms "34" and "27" before the query analyzer chain is even entered.
Is that not correct anymore?
Thanks,
Chris
We start our Java
> 8 JVMs with these parameters:
> >>
> >> SOLR_HEAP=8g
> >> # Use G1 GC -- wunder 2017-01-23
> >> # Settings from https://wiki.apache.org/solr/ShawnHeisey
> >> GC_TUNE=" \
> >> -XX:+UseG1GC \
> >> -
a great write up:
https://lucidworks.com/2013/08/23/understanding-transaction-logs-softcommit-and-commit-in-sorlcloud/
Best,
Chris
On Mon, Mar 18, 2019 at 9:36 AM Aaron Yingcai Sun wrote:
> Hi, Emir,
>
> My system used to run with max 32GB, the response time is bad as well.
> sw
up on the same instance. We just manually end up moving them
since we don't split shards very often.
Best,
Chris
On Wed, Jan 30, 2019 at 12:46 PM Rahul Goswami
wrote:
> Hello,
> I have a followup question on SPLITSHARD behavior. I understand that after
> a split, the leader re
Field("id", user.key().toString());
document.addField("applications", Collections.singletonMap("set",
user.value()));
request.add(document);
request.process(solrClient);
}
solrClient.commit();
On 29/01/2019 16:27, Chris Wareham wrote:
I'm trying t
chain to use and I assume I need to
use the UpdateRequest class. However, it's not clear how I go about
setting a parameter on the UpdateRequest in order to specify the update
chain.
Chris
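A minimal SolrJ sketch (the chain name, collection, and server URL are placeholders; update.chain is the request parameter that selects the updateRequestProcessorChain):

  import org.apache.solr.client.solrj.SolrClient;
  import org.apache.solr.client.solrj.impl.HttpSolrClient;
  import org.apache.solr.client.solrj.request.UpdateRequest;
  import org.apache.solr.common.SolrInputDocument;

  SolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr").build();

  SolrInputDocument doc = new SolrInputDocument();
  doc.addField("id", "example-1");

  UpdateRequest req = new UpdateRequest();
  req.setParam("update.chain", "my-custom-chain");  // pick the chain by name
  req.add(doc);
  req.process(client, "mycollection");
  req.commit(client, "mycollection");               // or rely on autoCommit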
[diagram: "text" field -> text_general analysis -> index]
On 28/01/2019 12:37, Scott Stults wrote:
Hi Chris,
You've included the field definition of type text_en, but in your queries
you
I'm trying to index some data which often includes domain names. I'd
like to remove the .com TLD, so I have modified the text_en field type
by adding a PatternReplaceFilterFactory filter. However, it doesn't
appear to be working as a search for "text:(mydomain.com)" matches
records but "text:(m
Out of curiosity, why are you manually deleting nodes in zookeeper?
It's always seemed to me that the majority (definitely not all) of
modifications needed during normal operations can usually be done through
Solr's APIs.
Thanks,
Chris
On Thu, Jan 10, 2019 at 12:04 AM Yogendra
gParameter
Best,
Chris
On Mon, Nov 19, 2018 at 12:10 PM Rajdeep Sahoo
wrote:
> Hi all,
> Please suggest, how can I analyze the time taken by a solr query?
> Is there any tool for analyzing the query response time? If there is any
> way to do this please suggest.
>
1) depending on the number of CPUs / load on your solr server, it's
possible you're just getting lucky. it's hard to "prove" with a
multithreaded test that concurrency bugs exist.
2) a lot depends on what your updates look like (ie: the impl of
SolrDocWriter.atomicWrite()), and what the field
that is
being displayed in the logs is still the 401 error when accessing
/solr/admin/metrics. To me, it seems that this internal request is just
ignoring the security altogether.
On Wed, Oct 31, 2018 at 4:54 PM Vadim Ivanov <
vadim.iva...@spb.ntk-intourist.ru> wrote:
> Hi, Chris
> I
60 seconds. The accompanying logged error and
expecting stacktrace are also included below.
Is there a JIRA ticket for this issue (or a directly related one)? I
couldn't seem to find one.
Thanks,
Chris
*security.json:*
{
"authentication":{"blockUnknown&
UNSUBSCRIBE
On Tue, 30 Oct 2018 at 8:24 pm, Stefan Kuhn wrote:
> Hi,
>
> last week I found an error in the result sorting regarding a field of the
> type "solr.CurrencyFieldType" in solr version 7.3.1.
>
> There are multiple documents which I must sort with this field, but the
> order of the res
: Hi Erick, Thanks for your reply. No, we aren't using schemaless
: mode. A schemaFactory is not explicitly declared in
: our solrconfig.xml. Also we have only one replica and one shard.
ManagedIndexSchemaFactory has been the default since 6.0 unless an
explicit schemaFactory is defined...
https://lucene.apac
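i.e. leaving it out is equivalent to something like this in solrconfig.xml (a sketch of the two common choices):

  <!-- the implicit default since 6.0: schema is managed via the Schema API -->
  <schemaFactory class="ManagedIndexSchemaFactory">
    <bool name="mutable">true</bool>
    <str name="managedSchemaResourceName">managed-schema</str>
  </schemaFactory>

  <!-- or, to keep hand-editing a classic schema.xml -->
  <schemaFactory class="ClassicIndexSchemaFactory"/>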
Thanks,
Chris
On Tue, Oct 23, 2018 at 5:40 AM Daniel Carrasco
wrote:
> Hi,
> On Tue., 23 Oct. 2018 at 10:18, Charlie Hull ()
> wrote:
>
> > On 23/10/2018 02:57, Daniel Carrasco wrote:
> > > Hello,
> > >
> > > I've a Solr Cluster
There weren't any particular problems we ran into since the client that
makes the queries to multiple collections previously would query multiple
cores using the 'shards' parameter before we moved to solrcloud. We didn't
have any complicated sorting or scoring requirements fortunately.
The one thi
,archive2,archive4&q=...
It seems like it might work for your use case, but you might need to tread
carefully depending on your requirements for the returned results. Sorting
and duplicate unique keys come to mind.
Best,
Chris
On Mon, Oct 22, 2018 at 1:49 PM Rohan Kasat wrote:
> Hi All ,
>
e a JIRA issue asking
for this?
Regards,
Chris
On 22/10/2018 14:28, Emir Arnautović wrote:
Hi Chris,
Yes you can do that. There is also type=“ignored” that you can use in such
scenario.
HTH,
Emir
--
Monitoring - Log Management - Alerting - Anomaly Detection
Solr & Elasticsearch Consu
" multiValued="true"/>
Can I set both the indexed and stored values to false for the body,
sectors and locations fields since I don't want to search or retrieve them?
Regards,
Chris
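Something like this should work (the fieldTypes shown are just examples); with indexed="false", stored="false", and no docValues the values are effectively thrown away at index time, which is also what the type="ignored" approach mentioned above does:

  <field name="body"      type="text_en" indexed="false" stored="false"/>
  <field name="sectors"   type="string"  indexed="false" stored="false" docValues="false" multiValued="true"/>
  <field name="locations" type="string"  indexed="false" stored="false" docValues="false" multiValued="true"/>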
Any other specific log settings I should turn on that might
produce some useful information?
Thanks,
Chris
ory glance.
Due to the volume and number of different processes, this cluster
requires more coordination to reindex and upgrade. So it's currently the
last one on our plan to get upgraded to 7.X (or 8.X if timing allows).
On Thu, Oct 11, 2018 at 8:22 AM sgaron cse wrote:
> Hey Chris,
> H. I wonder if a version conflict or perhaps other failure can
> > > somehow cause this. It shouldn't be very hard to add that to my test
> > > setup, just randomly add n _version_ field value.
> > >
> > > Erick
> > > On Mon, Oct 1, 2018
UNSUBSCRIBE
On Tue, 9 Oct 2018 at 4:36 am, Sudheer Shinde
wrote:
> solr 6.2.1:
>
> my schema:
>
> id is primary key here:
>
> required="true" multiValued="false"/>
> name="TC_0Y0_ItemRevision_0Y0_awp0Item_item_id" required="false"
> stored="false" type="string"/>
> name="TC_0Y0_Item_uid" req
: There's nothing out-of-the-box.
Which is to say: there are no explicit convenience methods for it, but you
can absolutely use the JSON DSL and JSON facets via SolrJ and the
QueryRequest -- just add the param key=value that you want, where the
value is the JSON syntax...
ModifiableSolrParams
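A rough SolrJ sketch along those lines (the collection and field names are placeholders; "techproducts" and "cat" are from the bundled example):

  import org.apache.solr.client.solrj.SolrClient;
  import org.apache.solr.client.solrj.SolrRequest;
  import org.apache.solr.client.solrj.impl.HttpSolrClient;
  import org.apache.solr.client.solrj.request.QueryRequest;
  import org.apache.solr.client.solrj.response.QueryResponse;
  import org.apache.solr.common.params.ModifiableSolrParams;

  SolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr").build();

  ModifiableSolrParams params = new ModifiableSolrParams();
  params.set("q", "*:*");
  params.set("rows", "0");
  params.set("json.facet", "{categories:{type:terms,field:cat}}");  // raw JSON Facet DSL as a param value

  QueryRequest req = new QueryRequest(params);
  req.setMethod(SolrRequest.METHOD.POST);
  QueryResponse rsp = req.process(client, "techproducts");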
In our case, we are heavily indexing in the collection while the /get
requests are happening which is what we assumed was causing this very rare
behavior. However, we have experienced the problem for a collection where
the following happens in sequence with minutes in between them.
1. Document id=
successfully returned to previous /get requests. We haven't been able to
replicate it with any consistency, but it isn't a particularly critical
issue with our use case.
Best,
Chris
On Thu, Sep 27, 2018 at 2:53 PM Shawn Heisey wrote:
> On 9/27/2018 11:48 AM, sgaron cse wrote:
> > So