: Of course, here is the full stack trace (collection 'techproducts' with
: just one core to make it easier):
Ah yeah ... see -- this looks like a mistake introduced at some point...
: Caused by: org.apache.solr.core.SolrResourceNotFoundException: Can't
: find resource 'elevate.xml' in classpath
: I need to have the elevate.xml file updated frequently and I was wondering
: if it is possible to put this file in the dataDir folder when using Solr
: Cloud. I know that this is possible in the standalone mode, and I haven't
: seen in the documentation [1] that it cannot be done in Cloud.
:
: Let me add some background. A user triggers an operation which under the
: hood needs to update a single field. Atomic update fails with a message
: that one of the mandatory fields is missing (which is strange by
: itself). When I query Solr for the exact document (fq with the document
: id
FWIW: that log message was added to branch_8x by 3c02c9197376 as part of
SOLR-15052 ... it's based on master commit 8505d4d416fd -- but that does
not add that same logging message ... so it definitely smells like a
mistake to me that 8x would add this INFO level log message that master
doesn't.
: there are not many OOM stack details printed in the solr log file, it's
: just saying Not enough memory, and it's killed by oom.sh (Solr's script).
not many isn't the same as none ... can you tell us *ANYTHING* about what
the logs look like? ... as i said: it's not just the details of the OOM
: Is the matter to use the config file ? I am using custom config instead
: of _default, my config is from solr 8.6.2 with custom solrconfig.xml
Well, it depends on what's *IN* the custom config ... maybe you are using
some built in functionality that has a bug but didn't get triggered by my
FWIW, I just tried using 8.7.0 to run:
bin/solr -m 200m -e cloud -noprompt
And then setup the following bash one liner to poll the heap metrics...
while : ; do date; echo "node 8983" && (curl -sS
http://localhost:8983/solr/admin/metrics | grep memory.heap); echo "node 7574"
&& (curl -sS http://localhost:7574/solr/admin/metrics | grep memory.heap); sleep 5; done
: Hi, I am using solr 8.7.0, centos 7, java 8.
:
: I just created a few collections and no data, memory keeps growing but
: never goes down, until I got OOM and solr is killed
Are you using a custom config set, or just the _default configs?
if you start up this single node with something like -X
: I am wondering if there is a way to warmup new searcher on commit by
: rerunning queries processed by the last searcher. May be it happens by
: default but then I can't understand why we see high query times if those
: searchers are being warmed.
it only happens by default if you have an 'autowarmCount' set on your caches
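For illustration, an untested sketch of raising cache autowarming via the
Config API (collection name and count are placeholders; editing
solrconfig.xml directly works just as well):

# Hypothetical: replay the 100 most recent entries from the old
# searcher's queryResultCache into the new one before it serves traffic.
curl -X POST -H 'Content-type:application/json' \
  'http://localhost:8983/solr/mycollection/config' \
  -d '{"set-property": {"query.queryResultCache.autowarmCount": 100}}'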
: You need to update EVERY solrconfig.xml that the JVM is loading for this to
: actually work.
that has not been true for a while, see SOLR-13336 / SOLR-10921 ...
: > 2. updated solr.xml :
: > <maxBooleanClauses>${solr.max.booleanClauses:2048}</maxBooleanClauses>
:
: I don't think it's currently possible to set the value with solr
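For what it's worth, with that solr.xml entry in place the property can be
overridden at startup; an untested sketch (the value is arbitrary):

# Hypothetical: solr.xml's ${solr.max.booleanClauses:2048} falls back to
# 2048 unless this system property is set.
bin/solr start -c -Dsolr.max.booleanClauses=4096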
Can't you just configure nagios to do a "negative match" against
numFound=0 ? ... ie: "if response matches 'numFound=0' fail the check."
(IIRC there's an '--invert-regex' option for this)
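Something like this untested sketch, assuming the standard check_http
plugin (host, port, and query are placeholders):

# Hypothetical: --invert-regex flips -r, so the check goes CRITICAL
# when the body contains 'numFound=0'.
check_http -H solr.example.com -p 8983 \
  -u '/solr/mycollection/select?q=*:*' \
  -r 'numFound=0' --invert-regex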
: Date: Mon, 28 Dec 2020 14:36:30 -0600
: From: Dmitri Maziuk
: Reply-To: solr-user@lucene.apache.org
: T
https://lucene.apache.org/solr/guide/8_6/authentication-and-authorization-plugins.html
*Authentication* is global, but *Authorization* can be configured to use
rules that restrict permissions on a per collection basis...
https://lucene.apache.org/solr/guide/8_6/rule-based-authorization-plugin.html
: I need to integrate Semantic Knowledge Graph with Solr 7.7.0 instance.
If you're talking about the work Trey Grainger has written some papers on
that was originally implemented in this repo...
https://github.com/careerbuilder/semantic-knowledge-graph
..then that work was incorporated into Solr
: Hmm, setting -Dfile.encoding=UTF-8 solves the problem. I have to now check
: which component of the application screws it up, but at the moment I do NOT
: believe it is related to Solrj.
You can use the "forbidden-apis" project to analyze your code and look for
uses of APIs that depend on the default charset.
...
: I was expecting that for field "fieldA" indexed will be marked as false and
: it will not be part of the index. But Solr admin "SCHEMA page" (we get this
: option after selecting collection name in the drop-down menu) is showing
: it as an indexed field (green tick mark
The JSON based query APIs (including JSON Faceting) use an (unfortunately
subtly different) '${NAME}' syntax for dereferencing variables in the
"body" of a JSON data structure...
https://lucene.apache.org/solr/guide/8_5/json-request-api.html#parameter-substitution-macro-expansion
...but note
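For illustration, an untested sketch of that substitution (collection and
param names are placeholders):

# Hypothetical: ${FIELD} and ${TERM} in the JSON body are expanded from
# the request parameters of the same names.
curl 'http://localhost:8983/solr/techproducts/query?FIELD=name&TERM=memory' \
  -d '{ "query": "${FIELD}:${TERM}" }'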
: Subject: Does 8.5.2 depend on 8.2.0
No. The code certainly doesn't, but i suppose it's possible some metadata
somewhere in some pom file may be broken?
: My build.gradle has this:
: compile(group: 'org.apache.solr', name: 'solr-solrj', version:'8.5.2')
: Nowhere is there a reference to 8.2.0
First off: Forgive me if my comments/questions are redundant or uninformed
based on the larger discussion taking place. I have not
caught up on the whole thread before replying -- but that's solely based
on a lack of time on my part, not a lack of willingness to embrace this
change.
From
: Subject: TimestampUpdateProcessorFactory updates the field even if the value
: is present
:
: Hi,
:
: Following is the update request processor chain.
:
: <processor class="solr.TimestampUpdateProcessorFactory">
:   <str name="fieldName">index_time_stamp_create</str>
: </processor>
:
: And, here is how the field is defined in
: Is what is shown in "analysis" the same as what is stored in a field?
https://lucene.apache.org/solr/guide/8_5/analyzers.html
The output of an Analyzer affects the terms indexed in a given field (and
the terms used when parsing queries against those fields) but it has no
impact on the stored values.
: 4) A query with different fq.
:
http://localhost:8984/solr/techproducts/select?q=popularity:[5%20TO%2012]&fq=manu:samsung
...
: 5) A query with the same fq again (fq=manu:samsung OR manu:apple), the
: numbers don't get updated for this fq hereafter for subsequent searches
:
:
http://l
The goal you are describing doesn't really sound at all like faceting --
it sounds like what you want might be "grouping" (or collapse/expand)
... OR: depending on how you index your data perhaps what you really
want is "nested documents" ... or maybe maybe if youre usecase is simple
enough j
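If collapse/expand turns out to be the fit, an untested sketch (collection
and field names are placeholders):

# Hypothetical: keep only the top-scoring doc per group_id, and return
# the other members of each group in a separate "expanded" section.
curl 'http://localhost:8983/solr/mycollection/select' \
  --data-urlencode 'q=laptop' \
  --data-urlencode 'fq={!collapse field=group_id}' \
  --data-urlencode 'expand=true'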
: I was trying to analyze the filter cache performance and noticed a strange
: thing. Upon searching with fq, the entry gets added to the cache the first
: time. Observing from the "Stats/Plugins" tab on Solr admin UI, the 'lookup'
: and 'inserts' count gets incremented.
: However, if I search wi
https://issues.apache.org/jira/browse/LUCENE-9315
: On Mon, Apr 6, 2020, 20:43 Chris Hostetter wrote:
:
: >
: > : I read your attached blog post (and more) but still the penny hasn't
: > dropped
: > : yet about what causes the operator clash when the default operator is
: > AND.
: > : I read that when q.op=
: Solr/Lucene do not employ boolean logic. See Hossman’s excellent post:
:
: https://lucidworks.com/post/why-not-and-or-and-not/
:
: Until you internalize this rather subtle difference, you’ll be surprised. A
lot ;).
:
: You can make query parsing look a lot like boolean logic by carefully using
: I read your attached blog post (and more) but still the penny hasn't dropped
: yet about what causes the operator clash when the default operator is AND.
: I read that when q.op=AND, OR will change the left (if not MUST_NOT) and
: right clause Occurs to SHOULD - what that means is that the "order
: I am working with a customer who needs to be able to query various
: account/customer ID fields which may or may not have embedded dashes.
: But they want to be able to search by entering the dashes or not and by
: entering partial values or not.
:
: So we may have an account or customer ID
: Is the documentation wrong or have I misunderstood it?
The documentation is definitely wrong, thanks for pointing this out...
https://issues.apache.org/jira/browse/SOLR-14383
-Hoss
http://www.lucidworks.com/
: Using solr 8.3.0 it seems like required operator isn't functioning properly
: when default conjunction operator is AND.
You're mixing the "prefix operators" with the "infix operators" which is
always a recipe for disaster.
The use of q.op=AND vs q.op=OR in these examples only
complicates
: So, I thought it can be simplified by moving this state transitions and
: processing logic into Solr by writing a custom update processor. The idea
: occurred to me when I was thinking about Solr serializing multiple
: concurrent requests for a document on the leader replica. So, my thought
: p
It sounds like fundamentally the problem you have is that you want solr to
"block" all updates to docId=X ... at the update processor chain level ...
until an existing update is done.
but solr has no way to know that you want to block at that level.
ie: you asked...
: In the case of multiple
: docid is the natural order of the posting lists, so there is no sorting
effort.
: I expect that means “don’t sort”.
basically yes, as documented in the comment right above the lines of code
linked to.
: > So no one knows this then?
: > It seems like a good opportunity to get some performance!
: We think this is a bug (silently dropping commits even if the client
: requested "waitForSearcher"), or at least a missing feature (commits being
: the only UpdateRequests not reporting the achieved RF), which should be
: worth a JIRA Ticket.
Thanks for your analysis Michael -- I agree someth
I may be misunderstanding something in your setup, and/or I may be
misremembering things about Solr, but I think the behavior you are
seeing is because *search* in solr is "eventually consistent" -- while
"RTG" (ie: using the "/get" handler) is (IIRC) "strongly consistent"
ie: there's a rea
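For illustration, an untested sketch of the difference (collection and id
are placeholders):

# Only sees documents visible to the currently open searcher:
curl 'http://localhost:8983/solr/mycollection/select?q=id:doc1'
# Also consults the update log, so it can return an update that has not
# been committed/opened in a searcher yet:
curl 'http://localhost:8983/solr/mycollection/get?id=doc1'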
: Link to the issue was helpful.
:
: Although, when I take a look at Dockerfile for any Solr version from here
: https://github.com/docker-solr/docker-solr, the very first line says
: FROM openjdk...It
: does not say FROM adoptopenjdk. Am I missing something?
Ahhh ... I have no idea, but at lea
: Subject: How do I send multiple user version parameter value for a delet by id
: request with multiple IDs ?
If you're talking about Solr's normal optimistic concurrency version
constraints then you just pass '_version_' with each delete block...
https://lucene.apache.org/solr/guide/8_4
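An untested sketch of what that looks like in the JSON update format (ids
and versions are placeholders):

# Hypothetical: each delete block carries its own _version_ constraint,
# so a delete fails if that doc was modified since the version was read.
curl -H 'Content-type:application/json' \
  'http://localhost:8983/solr/mycollection/update' \
  -d '{"delete": {"id": "doc1", "_version_": 1690001112223334400},
       "delete": {"id": "doc2", "_version_": 1690001112223334401}}'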
Just upgrade?
This has been fixed in most recent versions of AdoptOpenJDK builds...
https://github.com/AdoptOpenJDK/openjdk-build/issues/465
hossman@slate:~$ java8
hossman@slate:~$ java -XshowSettings:properties -version 2>&1 | grep -e
vendor -e version
java.class.version = 52.0
java.ru
: Is there a way to construct a query that needs two different parsers?
: Example:
: q={!xmlparser}Hello
: AND
: q={!edismax}text_en:"foo bar"~4
The easiest way to do what you're asking about would be to choose one of
those queries for "scoring" purposes, and put the other one in an "fq"
simply
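Roughly this untested sketch (collection, fields, and the XML query are
placeholders):

# Hypothetical: edismax drives scoring via q; the xmlparser query only
# filters via fq.
curl 'http://localhost:8983/solr/mycollection/select' \
  --data-urlencode 'q={!edismax}text_en:"foo bar"~4' \
  --data-urlencode 'fq={!xmlparser}<TermQuery fieldName="text">Hello</TermQuery>'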
: Is there a way to use combine paging's cursor feature with graph query
: parser?
it should work just fine -- the cursorMark logic doesn't care what query
parser you use.
Is there a particular problem you are running into when you send requests
using both?
-Hoss
http://www.lucidworks.com/
: If I use the following query in the browser, I get the expected results at
: the top of the returned values from Solr.
:
: {
:   "responseHeader":{
:     "status":0,
:     "QTime":41,
:     "params":{
:       "q":"( clt_ref_no:OWL-2924-8 ^2 OR contract_number:OWL-2924-8^2 )",
:       "indent":
: > whoa... that's not normal .. what *exactly* does the fieldType declaration
: > (with all analyzers) look like, and what does the <field/> declaration
: > look like?
: >
: >
:
:
:
NOTE: "text_general" != "text_gen_sort"
Assuming your "text_general" declaration looks like it does in the
_default
: > a) What is the fieldType of the uniqueKey field in use?
: >
:
: It is a textField
whoa... that's not normal .. what *exactly* does the fieldType declaration
(with all analyzers) look like, and what does the <field/> declaration
look like?
you should really never use TextField for a uniqueKey ...
Based on the info provided, it's hard to be certain, but reading between
the lines here are the assumptions i'm making...
1) your core name is "dbtr"
2) the uniqueId field for the "dbtr" core is "debtor_id"
..are those assumptions correct?
Two key pieces of information that don't seem to be
: Some of the log files that Solr generated contain <0x00> (null characters)
: in log files (like below)
I don't know of any reason why solr would write any null bytes to the
logs, and certainly not in either of the places mentioned in your examples
(where it would be at the end of an otherwis
: I'm using Solr's cursor mark feature and noticing duplicates when paging
: through results. The duplicate records happen intermittently and appear
: at the end of one page, and the beginning of the next (but not on all
: pages through the results). So if rows=20 the duplicate records would
: Documentation says that we can copy multiple fields using wildcard to one
: or more than one fields.
correct ... the limitation is in the syntax and the ambiguity that would
be unresolvable if you had a wildcard in the dest but not in the source.
the wildcard is essentially a variable. if
This is strange -- I can't reproduce, and I can't see any evidence of a
change to explain why this might have been failing 8 days ago but not any
more.
Are you still seeing this error?
The lines in question are XML comments inside of (example) code blocks (in
the ref-guide source), which is
: We show a table of search results ordered by score (relevancy) that was
: obtained from sending a query to the standard /select handler. We're
: working in the life-sciences domain and it is common for our result sets to
: contain many millions of results (unfortunately). After users browse the
: I think by "what query parser" you mean this:
no, that's the fieldType -- what i was referring to is that you are in fact
using "edismax", but with solr 8.1 lowercaseOperators should default to
"false", so my initial guess is probably wrong.
: By "request parameter" I think you are asking wh
what version of solr?
what query parser are you using?
what do all of your request params (including defaults) look like?
it's possible you are seeing the effects of edismax's "lowercaseOperators"
param, which _should_ default to "false" in modern solr, but
in very old versions it defaulted to true.
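A quick untested way to check (collection and query are placeholders):

# Hypothetical: pin the param explicitly and compare the parsed query in
# the debug output with and without it.
curl 'http://localhost:8983/solr/mycollection/select' \
  --data-urlencode 'q=solar or wind' \
  --data-urlencode 'defType=edismax' \
  --data-urlencode 'lowercaseOperators=false' \
  --data-urlencode 'debugQuery=true'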
1) depending on the number of CPUs / load on your solr server, it's
possible you're just getting lucky. it's hard to "prove" with a
multithreaded test that concurrency bugs exist.
2) a lot depends on what your updates look like (ie: the impl of
SolrDocWriter.atomicWrite()), and what the field
: Hi Erick, thanks for your reply. No, we aren't using schemaless
: mode. <schemaFactory/> is not explicitly declared in
: our solrconfig.xml. Also we have only one replica and one shard.
ManagedIndexSchemaFactory has been the default since 6.0 unless an
explicit schemaFactory is defined...
https://lucene.apac
: There's nothing out-of-the-box.
Which is to say: there are no explicit convenience methods for it, but you
can absolutely use the JSON DSL and JSON facets via SolrJ and the
QueryRequest -- just add the param key=value that you want, where the
value is the JSON syntax...
ModifiableSolrParams
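The equivalent request over plain HTTP, to show the param shape that would
go into ModifiableSolrParams (collection and field are placeholders;
untested sketch):

# Hypothetical: the entire JSON facet definition is just the value of
# the 'json.facet' parameter.
curl 'http://localhost:8983/solr/mycollection/select' \
  --data-urlencode 'q=*:*' \
  --data-urlencode 'json.facet={"categories": {"type": "terms", "field": "cat"}}'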
My suggestion:
* completely avoid using UUIDField
* use StrField instead
* use the UUIDUpdateProcessorFactory if you want solr to generate the
UUIDs for you when adding a new doc.
The fact that UUIDField internally passes values around as java.util.UUID
objects (and other classes like it that
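As an untested sketch, that processor can be registered via the Config API
(processor name and target field are placeholders):

# Hypothetical: fills the 'id' field with a generated UUID for docs that
# omit it; reference it at update time with processor=gen-uuid.
curl -X POST -H 'Content-type:application/json' \
  'http://localhost:8983/solr/mycollection/config' \
  -d '{"add-updateprocessor": {"name": "gen-uuid",
       "class": "solr.UUIDUpdateProcessorFactory",
       "fieldName": "id"}}'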
: I have seen that one. But as I understand spanFirst, it only allows you
: to define a boost if your span matches, i.e. not a gradually lower score
: the further down in the document the match is?
I believe you are incorrect.
Unless something has drastically changed in SpanQuery in the past
: WARN: (main) AbstractLifeCycle FAILED org.eclipse.jetty.server.Server@...
: java.io.FileNotFoundException: /opt/solr-5.4.1/server (Is a directory)
: java.io.FileNotFoundException: /opt/solr-5.4.1/server (Is a directory)
: at java.io.FileInputStream.open0(Native Method)
: at java
: Subject: MetricsHistoryHandler getOverseerLeader fails when hostname contains
: hyphen
that's unfortunate. I filed a jira...
https://issues.apache.org/jira/browse/SOLR-12594
: Can one just ignore this warning and what will happen then?
I think as long as you don't care about the mstr
: For deep pagination, it is recommended that we use cursorMark and
: provide a sort order for as a tiebreaker.
:
: I want my results in relevancy order and so have no sort specified on my
query by default.
:
: Do I need to explicitly set :
:
: sort : score desc, <uniqueKey> asc
Yes.
: Or can
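An untested sketch of such a request, assuming the uniqueKey field is 'id':

# Hypothetical: relevancy order with the uniqueKey as tiebreaker, which
# is what cursorMark requires.
curl 'http://localhost:8983/solr/mycollection/select' \
  --data-urlencode 'q=solar' \
  --data-urlencode 'sort=score desc, id asc' \
  --data-urlencode 'cursorMark=*'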
: We are using Solr as a user index, and users have email addresses.
:
: Our old search behavior used a SQL substring match for any search
: terms entered, and so users are used to being able to search for e.g.
: "chr" and finding my email address ("ch...@christopherschultz.net").
:
: By defaul
: > defType=edismax q=sysadmin name:Mike qf=title text last_name
: > first_name
:
: Aside: I'm curious about the use of "qf", here. Since I didn't want my
: users to have to specify any particular field to search, I created an
: "all" field and dumped everything into it. It seems like it would
: Chris, I was trying the below method for sorting the faceted buckets but
: am seeing that the function query query($q) applies only to the score
: from “q” parameter. My solr request has a combination of q, “bq” and
: “bf” and it looks like the function query query($q) is calculating the
: s
: So if I want to alias the "first_name" field to "first" and the
: "last_name" field to "last", then I would ... do what, exactly?
see the last example here...
https://lucene.apache.org/solr/guide/7_4/the-extended-dismax-query-parser.html#examples-of-edismax-queries
defType=edismax
q=sysadmin
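Extrapolating from that example, an untested sketch of the aliasing
(collection and field names are placeholders):

# Hypothetical: f.<alias>.qf maps an alias to real field(s), so users
# can type first:Mike / last:Smith without knowing the schema names.
curl 'http://localhost:8983/solr/mycollection/select' \
  --data-urlencode 'defType=edismax' \
  --data-urlencode 'q=first:Mike last:Smith' \
  --data-urlencode 'f.first.qf=first_name' \
  --data-urlencode 'f.last.qf=last_name'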
: FWIW: I used the script below to build myself 3.8 million documents, with
: 300 "text fields" consisting of anywhere from 1-10 "words" (integers
: between 1 and 200)
Whoops ... forgot to post the script...
#!/usr/bin/perl
use strict;
use warnings;
my $num_docs = 3_800_000;
my $max_words_
: SQL DB 4M documents with up to 5000 metadata fields each document [2xXeon
: 2.1Ghz, 32GB RAM]
: Actual Solr: 1 Core version 4.6, 3.8M documents, schema has 300 metadata
: fields to import, size 3.6GB [2xXeon 2.4Ghz, 32GB RAM]
: (atm we need 35h to build the index and about 24h for a mass update
: I performed a bulk reindex against one of our larger databases for the first
: time using solr 7.3. The document count was substantially less (like at
: least 15% less) than our most recent bulk reindex from the previous solr 4.7
: server. I will perform a more careful analysis, but I am assumi
: If I want to plug in my own sorting for facets, what would be the best
: approach. I know, out of the box, solr supports sort by facet count and
: sort by alpha. I want to plug in my own sorting (say by relevancy). Is
: there a way to do that? Where should I start with if I need to write a
: So I have the following at the bottom of my schema.xml file
:
:
:
:
:
: The documentation says "top level element" - so should that actually be
outside the schema tag?
No, the schema tag is the "root" level element, its direct children are
the "top level elements"
(the wording m
: the documentation of 'cursorMarks' recommends to fetch until a query returns
: the cursorMark that was passed in to a request.
:
: But that always requires an additional request at the end, so I wonder if I
: can stop already, if a request returns less results than requested (num rows).
: Ther
: I'd like to generate stats for the results of a facet range.
: For example, calculate the mean sold price over a range of months.
: Does anyone know how to do this?
: This Jira issue seems to indicate its not yet possible.
: [SOLR-6352] Let Stats Hang off of Range Facets - ASF JIRA
This is poss
: I also noticed that there's the concept of "latest" (similar to "current"
: in postgres documentation) in solr. This is pretty cool. I am afraid
: though, that this currently is somewhat confusing. E.g., if I search for
: managed schema in google I get this as 1st url:
:
:
https://lucene.apach
There's been some discussion along the lines of doing some things like
what you propose which were spun out of discussion in SOLR-10595 into the
issue LUCENE-7924 ... but so far no one has attempted the
tooling/scripting work needed to make it happen.
Patches certainly welcome.
: Date: Mon,
You need to configure Solr to use a "truststore" that contains the
certificate you want it to trust. With a solr cloud setup, that usually
involves configuring the "keystore" and the "truststore" to both contain
the same keys...
https://lucene.apache.org/solr/guide/6_6/enabling-ssl.html
: D
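In solr.in.sh that usually ends up looking something like this untested
sketch (paths and passwords are placeholders):

# Hypothetical: the same JKS serves as both keystore and truststore so
# the nodes trust each other's certificates.
SOLR_SSL_KEY_STORE=/opt/solr/etc/solr-ssl.keystore.jks
SOLR_SSL_KEY_STORE_PASSWORD=secret
SOLR_SSL_TRUST_STORE=/opt/solr/etc/solr-ssl.keystore.jks
SOLR_SSL_TRUST_STORE_PASSWORD=secret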
: > performed its write
: > method.
: >
: > Whats the best way to set response headers using values from documents in
: > the search result? In particular Content-Type.
: >
: > Cheers Lee C
: >
: > On 20 April 2018 at 20:03, Chris Hostetter
: > wrote:
Invariant really means "invariant" ... nothing can change them
In the case of "wt" this may seem weird and unhelpful, but the code that
handles defaults/appends/invariants is ignorant of what the params are.
Since you're writing custom code anyway, my suggestion would be that
perhaps you could
: In my Solr 6.6 based code, I have the following line that get the total
: number of documents in a collection:
:
: totalDocs=indexSearcher.getStatistics().get("numDocs")
...
: With Solr 7.2.1, 'getStatistics' is no longer available, and it seems that
: it is replaced by 'collectionStatistics'
: Have you tried reading existing example schemas? They show various
: permutations of copy fields.
Hmm... as the example schemas have been simplified/consolidated/purged it
seems we have lost the specific examples that are relevant to the user's
question -- the only instance of a glob'ed copyField
:
: Ah, there's the extra bit of context:
: > PS C:\curl> .\curl '
:
: You're using Windows perhaps? If so, it's probably a shell issue
: getting all of the data to the "curl" command.
Yep.. and you can even see in the trace output that curl thinks the entire
JSON payload you want to send is 2
I can't reproduce the problem you described -- using 7.2.1 and the
techproducts example i can index a JSON string w/white space just fine...
$ bin/solr -e techproducts
$ curl 'http://localhost:8983/solr/techproducts/update/json/docs' -H
'Content-type:application/json' -d '
{
"id":"1",
"nam
: > 3) Lastly, it is not clear the role of export handler. It seems that the
: > export handler would also have to do exactly the same kind of thing as
: > start=0 and rows=1000,000. And that again means bad performance.
: <3> First, streaming requests can only return docValues="true"
: f
: We are using Solr 7.1.0 to index a database of addresses. We have found
: that our index size increases massively when we add one extra field to
: the index, even though that field is stored and not indexed, and doesn’t
what about docValues?
: When we run an index load without the problema
https://issues.apache.org/jira/browse/SOLR-11977
: Date: Mon, 12 Feb 2018 14:44:34 -0700 (MST)
: From: Chris Hostetter
: To: "solr-user@lucene.apache.org"
: Subject: Re: "editorialMarkerFieldName"
:
:
: IIUC the "editorialMarkerFieldName" config option is a bit
IIUC the "editorialMarkerFieldName" config option is a bit misleading.
Configuring that doesn't automatically add a field w/that name to your
docs to indicate which of them have been elevated -- all it does is
provide an *override* for what name can be used to refer to the
"[elevated]" DocTra
: True, I could remove the trigger to rebuild the entire document. But
: what if a different field changes and the whole document is triggered
: for update for a different field. We have the same problem.
at a high level, your concern is really completely orthogonal to the
question of in-place
: category_id,category_rank i want to change.
:
: -Original Message-
: From: Chris Hostetter [mailto:hossman_luc...@fucit.org]
: Sent: Friday, February 2, 2018 12:24 PM
: To: solr-user@lucene.apache.org
: Subject: RE: External file fields
:
:
: : I did look into updatable docValues, but my understanding is
: Have you manage to get the regex for this string in Chinese: 预支款管理及账务处理办法 ?
...
: > An example of the string in Chinese is 预支款管理及账务处理办法
: >
: > The number of characters is 12, but the expected length should be 36.
...
: >> > So this would likely be different from what the operati
: I did look into updatable docValues, but my understanding is that the
: field has to be non-indexed (indexed="false"). I need to be able to sort
: on these values. External field fields are sortable.
You can absolutely sort on a field that is docValues="true"
indexed="false" ... that is much
: I am converting a SOLR 4.10 db to SOLR 7.1
:
: It is NOT schemaless - so it uses a ClassicIndexSchemaFactory.
:
: In 4.10, I have a field that is a phone number (here's the schema information
for the field):
:
:
:
: When inserting documents into SOLR, there are some documents where the
:
: We encountered an issue when using the refine parameter when subfaceting in
: a range facet.
: When enabling the refine option, the counts of the response are the double
: of the counts of the response without refine option.
: We are running Solr 6.6.1 in a cloud setup.
...
: If I execut
: Thanks Chris! Is RetrieveFieldsOptimizer a new functionality introduced in
: 7.x? Our observation is with both 5.4 & 6.4. I have created a jira for
: the issue:
The same basic code path (related to stored fields) probably existed
largely as is in 5.x and 6.x and was then later refactored in
: Inside convertLuceneDocToSolrDoc():
:
:
: https://github.com/apache/lucene-solr/blob/df874432b9a17b547acb24a01d3491839e6a6b69/solr/core/src/java/org/apache/solr/response/DocsStreamer.java#L182
:
:
:for (IndexableField f : doc.getFields())
:
:
: I am a bit puzzled why we need to i
what does your full request, including the results block look like when
you search on one of these queries with "fl=*,score" ?
I'm suspicious that perhaps the problem isn't the payload encoding, or the
PayloadScoreQuery -- but perhaps it's simply a bug in the Explanation
produced by those queries
: Yes I do so. The problem is that the collect method is called for EVERY
: document the query matches. Even if the user only wants to see like 10
: documents. The operation I have to perform takes maybe 50ms per document
You're running into a classic chicken/egg problem with document collection
: defType=dismax does NOT do anything special with *:* other than treat it
...
: > As Chris explained, this is special:
...
I'm interpreting your followup question differently than Erick & Erik
did. I'm going to assume both E & E misunderstood your question, and i'm
going to
: Might be the case as you mentioned Shawn. But there are no search requests
: triggered and might be that somehow a search query is getting fired to Solr
: end while indexing. Given the complete log information(solr.log) while the
: indexing is triggered.
the search request is triggered by a "newSearcher"
: Yes, i am using dismax. But dismax allows *:* for q.alt, which also seems
: like inconsistency.
dismax is a *parser* that affects how a single query string is parsed.
when you use defType=dismax, that only changes how the "q" param is
parsed -- not any other query string params, like "fq" or
https://lucene.apache.org/solr/guide/7_2/other-parsers.html
fq={!frange l=0}your(complex(func(fieldA,fieldB),fieldC))
As of 7.2, frange filters will default to being PostFilters as long as
you use cache=false ...
https://lucidworks.com/2017/11/27/caching-and-filters-and-post-filters/
https://i
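Putting those pieces together, an untested sketch (collection, function,
and fields are placeholders):

# Hypothetical: cache=false (optionally with a cost >= 100) lets frange
# run as a post filter over only the docs matching everything else.
curl 'http://localhost:8983/solr/mycollection/select' \
  --data-urlencode 'q=*:*' \
  --data-urlencode 'fq={!frange l=0 cache=false cost=200}div(fieldA,fieldB)'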
AFAICT The behavior you're describing with Trie fields was never
intentionally supported/documented?
It appears that it only worked as a fluke side effect of how the default
implementation of FieldType.getPrefixQuery() was inherited by Trie fields
*and* because "indexed=true" TrieFields use T
: We're started to migrate our integration-framework to move over to
: JavaEE JSON-B as default json-serialization /deserialization framework
: and now the highlighning component is giving us some troubles. Here's a
: constructed example of the JSON response from Solr.
Wait .. what? that make
: Do I need to define a field with <field/> when I use an external file
: field? I see the <fieldType/> to define it, but the docs don't say how
: to define the field.
you define the field (or dynamicField) just like any other field -- the
fieldType is where you specify things like the 'keyField' & the 'defVal',
:
: As I said before, I do not think that Solr will use timezones for date display
: -- ever. Solr does support timezones in certain circumstances, but I'm pretty
One possibility that has been discussed in the past is the idea of a "Date
Formatting DocTransformer" that would always return a String