Hi All,
I'm using the graph query parser to traverse a set of edge documents. An
edge looks like
"id":"edge1", "recordType":"journey", "Date":"2021-03-04T00:00:00Z", "Origin
":"AAC", "OriginLocalDateTime&
Hi,
How do I run a graph query from A to X when the number of hops is not known,
but the graph query for each hop remains the same?
For example:
If my graph looks like this,
id:A -> pk:A1 -> tgt:A2
id:B -> pk:B1 -> tgt:B2
...
id:X
To get from A to B,
1. We query A to A2 using (id->p
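For a traversal with an unknown hop count, Solr's graph query parser can follow a from/to field pair to arbitrary depth in one request. A minimal sketch of building such a request, assuming the pk/tgt field names from the example above (maxDepth=-1 leaves the depth unbounded):

```python
# Sketch, assuming the Solr graph query parser: "from"/"to" name the fields
# that link one hop to the next, and maxDepth=-1 leaves the number of hops
# unbounded (field names taken from the example above).
from urllib.parse import urlencode

params = {
    "q": "{!graph from=pk to=tgt maxDepth=-1}id:A",
    "fl": "id",
}
print(urlencode(params))
```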
I am using Solr 6.1.0. We have 2 shards and each has one replica.
My schema field is below, in one collection.
When I execute the below query, it takes more than 180 milliseconds every time.
http://10.38.33.24:8983/solr/forms/select?q=project_id:(2117627+2102977+2109667+2102912+2113720+2102976
It worked! Thanks Mr. Rafalovitch. I just removed the "type": "query" keys from
the JSON and used indexAnalyzer and queryAnalyzer in place of the analyzer JSON
node.
From: Alexandre Rafalovitch
Sent: 03 March 2021 01:19
To: solr-user
Subject: Re: Schema API specifying
enizerFactory" }}}
}' http://localhost:8983/solr/gettingstarted/schema
So, indexAnalyzer/queryAnalyzer, rather than array:
https://lucene.apache.org/solr/guide/8_8/schema-api.html#add-a-new-field-type
Hope this works,
Alex.
P.s. Also check whether you are using matching API and V1/
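A minimal sketch of the payload shape being described: indexAnalyzer and queryAnalyzer as sibling objects rather than an "analyzer" array with "type" keys. The field-type name is the one from the thread; the tokenizer/filter classes are illustrative assumptions.

```python
# Sketch of the corrected Schema API payload shape: indexAnalyzer and
# queryAnalyzer are sibling keys, not an "analyzer" array with "type" keys.
# The tokenizer/filter classes here are illustrative assumptions.
import json

payload = {
    "replace-field-type": {
        "name": "string_ci",
        "class": "solr.TextField",
        "indexAnalyzer": {
            "tokenizer": {"class": "solr.KeywordTokenizerFactory"},
            "filters": [{"class": "solr.LowerCaseFilterFactory"}],
        },
        "queryAnalyzer": {
            "tokenizer": {"class": "solr.KeywordTokenizerFactory"},
            "filters": [{"class": "solr.LowerCaseFilterFactory"}],
        },
    }
}
body = json.dumps(payload)
print(body)
```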
Hello,
I’m trying to change a field’s query analysers. The following works but it
replaces both index and query type analysers:
{
"replace-field-type": {
"name": "string_ci",
"class": "solr.TextField",
I have a type=”text_general” multivalued=”true” field, named fieldA.
When I use a function query, with fields like
fields=if(true, fieldA, -1), fieldA
Response is:
"response":{"numFound":1,"start":0,"maxScore":4.6553917,"docs":[
{
Hi Team,
I was implementing block join faceting query in my project and was stuck in
integrating the existing functional queries in the block join faceting
query.
*The current query using 'select' handler is as follows* :-
https://localhost:8983/solr/master_Product_default/*select*?*yq
Thanks Alex and Shawn.
Regards,
Anuj
On Thu, 18 Feb 2021 at 18:57, Shawn Heisey wrote:
> On 2/18/2021 3:38 AM, Anuj Bhargava wrote:
> > Solr 8.0 query length limit
> >
> > We are having an issue where queries are too big, we get no result. And
> if
> > we re
On 2/18/2021 3:38 AM, Anuj Bhargava wrote:
Solr 8.0 query length limit
We are having an issue where queries are too big, we get no result. And if
we remove a few keywords we get the result.
The best option is to convert the request to POST, as Thomas suggested.
With that, the query:
> You can send big queries as a POST request instead of a GET request.
>
> Op do 18 feb. 2021 om 11:38 schreef Anuj Bhargava :
>
> > Solr 8.0 query length limit
> >
> > We are having an issue where queries are too big, we get no result. And
> if
> > we
You can send big queries as a POST request instead of a GET request.
Op do 18 feb. 2021 om 11:38 schreef Anuj Bhargava :
> Solr 8.0 query length limit
>
> We are having an issue where queries are too big, we get no result. And if
> we remove a few keywords we get the result.
>
Solr 8.0 query length limit
We are having an issue where, when queries are too big, we get no results; if
we remove a few keywords we get results.
The error we get is 414 (Request-URI Too Long).
Have made the following changes in jetty.xml, still the same error
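The POST suggestion from earlier in the thread can be sketched like this; moving the parameters into the request body sidesteps the 414 URI length limit entirely (host, core name, and keywords are placeholders):

```python
# Sketch: the same query sent as a POST body instead of a GET query string.
# Host, core name and keywords are placeholders.
from urllib.parse import urlencode
from urllib.request import Request

keywords = " OR ".join(f"keyword{i}" for i in range(5000))  # deliberately huge
body = urlencode({"q": f"text:({keywords})", "wt": "json"}).encode("utf-8")
req = Request(
    "http://localhost:8983/solr/mycore/select",
    data=body,
    headers={"Content-Type": "application/x-www-form-urlencoded"},
)
# With a data payload, urllib issues a POST, so no long Request-URI is built.
print(req.get_method(), len(body))
```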
-----Original Message-----
From: Flowerday, Matthew J
Sent: 15 January 2021 11:18
To: solr-user@lucene.apache.org
Subject: RE: Query over migrat
Hi,
The above boolean query works fine when the number of rows fetched is small,
like 10 or 20, but it slows down when the row count is increased.
Is document collection very expensive? Is there any configuration I am
missing?
*Solr setup details:*
Mode : SolrCloud
Number of Shards : 12
Index size
Hi,
The boolean query with a bigger value for *rows* times out with the
following message.
The request took too long to iterate over terms. Timeout: timeoutAt
Solr version : Solr 8.6.3
Time allowed : 30
Field :
Query : fl:(term1 OR term2 OR . OR term1)
rows : 1
wt : json/phps
I think if you have _root_ in schema.xml you should look elsewhere. My memory
is that merely adding this one line to schema.xml took care of our problem.
From: Flowerday, Matthew J
Sent: Tuesday, January 12, 2021 3:23 AM
To: solr-user@lucene.apache.org
Subject: RE: Query over migrating a solr
Jim
Sent: 11 January 2021 22:58
To: solr-user@lucene.apache.org
Subject: RE: Query over migrating a solr database from 7.7.1 to 8.7.0
When we upgraded from 7.x to 8.x, I ran into an issue similar to yours:
when updating an existing documen
Hello Guys,
Does Solr support edismax parser with Block Join Parent Query Parser? If
yes then could you provide me the syntax or point me to some reference
document? And how does it affect the performance?
I am working on a search screen in an eCommerce application's backend. The
requireme
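One documented pattern is to nest an edismax child query inside the Block Join Parent Query Parser via parameter dereferencing; a hedged sketch, where docType/childName and the search text are assumed names:

```python
# Sketch: a block-join parent query whose child clause is an edismax query,
# nested via parameter dereferencing (docType/childName and the search text
# are assumed names).
from urllib.parse import urlencode

params = {
    "q": '{!parent which="docType:parent" v=$childq}',
    "childq": "{!edismax qf=childName}laptop",
}
print(urlencode(params))
```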
eature was added for nested documents, this field
somehow became mandatory in order for updates to work properly, at least in
some cases.
From: Flowerday, Matthew J
Sent: Saturday, January 9, 2021 4:44 AM
To: solr-user@lucene.apache.org
Subject: RE: Query over migrating a solr database from 7.7.
bmann...@free.fr]
Sent: Sunday, 10 January 2021 17:57
To: solr-user@lucene.apache.org
Subject: [solr8.7] not relevant results for chinese query
Hello,
I try to use chinese language with my index.
My definiti
Hello,
I am trying to use Chinese with my index.
My definition is:
But I get too many irrelevant results.
For example, with the query (phone case):
tizh:(手機殼)
my query is translated to
Cross-posted / addressed (both me), here.
https://stackoverflow.com/questions/65620642/solr-query-with-space-only-q-20-stalls/65638561#65638561
--
Sent from: https://lucene.472066.n3.nabble.com/Solr-User-f472068.html
> From: Flowerday, Matthew J
> Sent: 07 January 2021 12:25
> To: solr-user@lucene.apache.org
> Subject: Query over migrating a solr database from 7.7.1 to 8.7.0
>
> Hi There
>
> I have recently upgraded a solr database from 7.7.1 t
From: Flowerday, Matthew J
Sent: 07 January 2021 12:25
To: solr-user@lucene.apache.org
Subject: Query over migrating a solr database from 7.7.1 to 8.7.0
Hi There
I have recently upgraded a so
I have a frontend that uses Ajax to query Solr.
It's working well, but if I enter a single space (nothing else) in the
input/search box (the URL in the browser will show
... index.html#q=%20
In that circumstance I get a 400 error (as there are no parameters in the
request), which is
Hi There
I have recently upgraded a solr database from 7.7.1 to 8.7.0 without wiping
the database and re-indexing (as this would take too long to run on site).
On my local windows machine I have a single solr server 7.7.1 installation
I upgraded in the following manner
* Install
rg/solr/guide/7_6/index.html
I looked at the analysis screen, but it wasn't helpful. That's why I started
using the "debug=query" parameter and the content of parsedquery.
Can you post the managed schema and solrconfig content here ?
Do try the solr admin analysis screen
once as well to see the behaviour for this field.
https://lucene.apache.org/solr/guide/7_6/index.html
On Sun, 27 Dec, 2020, 6:54 pm nettadalet, wrote:
> Thank you, that was helpful!
>
> For Solr
Hi,
thanks for the comment, but I tried both "sow=false" and "sow=true"
and I still get the same result. For the query (TITLE_ItemCode_t:KI_7) I still
see:
Solr 4.6: "parsedquery": "PhraseQuery(TITLE_ItemCode_t:\"ki 7\")"
Solr 7.5: "parse
default
>behaviour change in solr 7.
>
>The sow parameter (short for "Split on Whitespace") now defaults to
>false, which allows support for multi-word synonyms out of the box.
>This parameter is used with the eDismax and standard/"lucene" query
>parsers. If this pa
d standard/"lucene" query
parsers. If this parameter is not explicitly specified as true, query
text will not be split on whitespace before analysis.
https://lucene.apache.org/solr/guide/7_0/major-changes-in-solr-7.html
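A sketch of passing the parameter explicitly to restore the pre-Solr-7 splitting behaviour (field and value taken from the thread):

```python
# Sketch: explicitly setting sow=true restores the pre-Solr-7 behaviour of
# splitting query text on whitespace before analysis (field/value from the
# thread).
from urllib.parse import urlencode

params = {"q": "TITLE_ItemCode_t:KI_7", "sow": "true"}
print(urlencode(params))
```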
On Sun, 27 Dec, 2020, 8:25 pm nettadalet, wrote:
> I added &
I added "defType=lucene" to both searches to make sure I use the same query
parser, but it didn't change the results.
I'm not sure how to check the implementation of the query parser, or how to
change the query parser that I use. I think I'm using the standard query
parser.
I use Solr Admin to run the queries. If I look at the URL, I see
Solr 4.6:
select?q=TITLE_ItemCode_t:KI_7&fl=TITLE_ItemC
which query parser are you using? I think to answer your question, you need to
check the implementation of the query parser
At 2020-12-27 21:23:59, "nettadalet" wrote:
>Thank you, that was helpful!
>
>For Solr 4.6 I get
>"parsedquery": &quo
Thank you, that was helpful!
For Solr 4.6 I get
"parsedquery": "PhraseQuery(TITLE_ItemCode_t:\"ki 7\")"
For Solr 7.5 I get
"parsedquery":"+(+(TITLE_ItemCode_t:ki7 (+TITLE_ItemCode_t:ki
+TITLE_ItemCode_t:7)))"
So this is the cause of the difference in the search result, but I still
don't know wh
Hi,
Try adding debug=true or debug=query in the url and see the formed query at
the end .
You will get to know why the results are different.
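A sketch of the suggested request, with debug=query appended (the field and value here are placeholders):

```python
# Sketch: adding debug=query makes the response include the parsed query,
# which is what reveals how each Solr version analysed the input.
from urllib.parse import urlencode

params = {"q": "field:value", "debug": "query"}
print(urlencode(params))
```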
On Thu, 24 Dec, 2020, 8:05 pm nettadalet, wrote:
> Hello,
>
> I have the same field type defined in Solr 4.6 and Solr 7.5. When I
> sear
t the result was the same.
I have the following *6 values set for field text1 of type text_type1 for 6
different documents* (the type(s) from above):
KI_d5e7b43a
KI_b7c490bd
KI_7df2f026
KI_fa7d129d
KI_5867aec7
KI_7c3c0b93
My query is *text1=KI_7*.
Using Solr 4.6, I get 2 results - KI_7df
This should work as you expect, so the first thing I’d do
is add &debug=query and see the parsed query in both cases.
If that doesn’t show anything, please post the
full debug response in both cases.
Best,
Erick
> On Dec 21, 2020, at 4:31 AM, Alok Bhandari wrote:
>
> Hello All
Hello All,
we are using Solr 6.2; in the schema that we use, we have an integer field. For
a given query we want to know how many documents have a duplicate value for
the field; for example, how many documents have the same doc_id=10.
So to find this information we fire a query to solr-cloud with
> > matched result.
> > Here Is the code .
> >
> > XYZ:concat(
> >
> > if(exists(query({!v='field1:12345'})), '12345', ''),
> >
> > if(exists(query({!v='field1:23456'})), '23456', ''),
> &g
subquery_
On Fri, Dec 11, 2020 at 3:31 PM Jae Joo wrote:
> I have the requirement to create field - xyz to be returned based on the
> matched result.
> Here Is the code .
>
> XYZ:concat(
>
> if(exists(query({!v='field1:12345'})), '12345', '
I have the requirement to create field - xyz to be returned based on the
matched result.
Here Is the code .
XYZ:concat(
if(exists(query({!v='field1:12345'})), '12345', ''),
if(exists(query({!v='field1:23456'})), '23456', '
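The truncated expression above, completed with its closing parentheses as I read it (still just the two example values from the post), assembled as one string to paste into the fl parameter:

```python
# The truncated fl expression completed with closing parentheses (values from
# the post); assembled as one string for the fl parameter.
fl = (
    "XYZ:concat("
    "if(exists(query({!v='field1:12345'})),'12345',''),"
    "if(exists(query({!v='field1:23456'})),'23456',''))"
)
print(fl)
```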
ortunately.
>
> But you (or other readers) might find this "Query Facet" example handy
> - it uses the "type": "query" syntax that MIchael mentioned. [1]
>
> [1]
> https://lucene.apache.org/solr/guide/8_5/json-facet-api.html#query-facet
>
> Best,
&
Hey Arturas,
Can't help you with the secrets of Michael's inspiration (though I'm
also curious :-p). And I'm not sure if there's any equivalent of
facet.threads for JSON Faceting. You're on your own there
unfortunately.
But you (or other readers) might find th
less new to Solr. I need to run queries that use joins all
> over the place. (The idea is to index database records pretty much as
> they are and then query them in interesting ways and, most importantly,
> get the rank. Our dataset is not too large so the performance is great.)
>
>
Hi,
I'm more or less new to Solr. I need to run queries that use joins all
over the place. (The idea is to index database records pretty much as
they are and then query them in interesting ways and, most importantly,
get the rank. Our dataset is not too large so the performance is great.
Thanks for the suggestions. At some point I'll have to actually put it to
the test and see what impact everything has.
Cheers
On Sat, 5 Dec 2020 at 13:31, Erick Erickson wrote:
> Have you looked at the Term Query Parser (_not_ the TermS Query Parser)
> or Raw Query Parser
Have you looked at the Term Query Parser (_not_ the TermS Query Parser)
or Raw Query Parser?
https://lucene.apache.org/solr/guide/8_4/other-parsers.html
NOTE: these perform _no_ analysis, so you have to give them the exact term...
These are pretty low level, and if they’re “fast enough” you
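A sketch of invoking the Term Query Parser from the URL API (field and value are placeholders; as noted above, no analysis is applied, so the value must be the exact indexed term):

```python
# Sketch: the Term Query Parser looks up one exact indexed term with no
# query-time analysis (field and value are placeholders).
from urllib.parse import urlencode

params = {"q": "{!term f=doc_id}10", "rows": "0"}
print(urlencode(params))
```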
Hello,
I was just wondering. If I don't care about the number of matches for a
query, let alone what the matches are, just that there is *at least 1*
match for a query, what's the most efficient way to execute that query (on
the /select handler)? (Using Solr 8.7)
As a general appr
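One common baseline (an assumption on my part, not necessarily the thread's final answer) is to ask for zero rows and just check numFound, so Solr never retrieves or decompresses any stored documents; a sketch:

```python
# Sketch: rows=0 avoids fetching any stored documents; the response header's
# numFound tells you whether at least one doc matched.
from urllib.parse import urlencode

params = {"q": "field:value", "rows": "0"}
print(urlencode(params))
```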
nally
grateful for your help!
Michael, maybe you happen to know how I can plugin in facet.threads
parameter in that JSON body below, so the query uses more threads to
compute the answer? I am dying out of curiosity.
Cheers,
Arturas
On Thu, Dec 3, 2020 at 7:59 PM Michael Gibney
wrote:
> I t
I think the first "error" case in your set of examples above is closest to
being correct. For "query" facet type, I think you want to explicitly
specify `"type":"query"`, and specify the query itself in the `"q"` param,
i.e.:
{
"query&qu
Hi Michael,
Thanks for helping me to figure this out.
If I fire:
{
"query" : "*:*",
"limit" : 0,
"facet": {
"aip": { "query": "cfname2:aip", }
}
}
I get
"response": { "num
el
On Thu, Dec 3, 2020 at 3:47 AM Arturas Mazeika wrote:
> Hi Solr Team,
>
> I am trying to check how I can formulate facet queries using JSON format. I
> can successfully formulate query, range, term queries, as well as nested
> term queries. How can I formulate a nested facet q
Hi Solr Team,
I am trying to check how I can formulate facet queries using JSON format. I
can successfully formulate query, range, term queries, as well as nested
term queries. How can I formulate a nested facet query involving "query" as
well as "range" formulations? The fol
ions that make explicit value
> lookups very slow, and make them unsuitable for use on uniqueKey fields.
> Something about the field not having a "term" available.
>
> 2) A query of the type "fieldname:*" is a wildcard query. These tend to
> be slow and inefficient, wh
ve suggestions on this issue?
Here's a couple of facts:
1) Points-based fields have certain limitations that make explicit value
lookups very slow, and make them unsuitable for use on uniqueKey fields.
Something about the field not having a "term" available.
2) A query o
to be solved client side?
>>
>> On Tue, Nov 24, 2020 at 7:50 AM matthew sporleder
>> wrote:
>>
>>> Is the normal/standard solution here to regex remove the '-'s and
>>> combine them into a single token?
>>>
>>> On Tue, Nov 24, 20
Dear Team,
We are in the process of migrating from Solr 5 to Solr 8. During testing we
identified that "Not null" queries on plong & pint field types are not
returning any results; this works fine with Solr 5.4.
Could you please let me know if you have suggestions on this issue?
Thanks
Deep
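One commonly suggested workaround for points-based fields (an assumption on my part, not something stated in this thread) is to express the not-null check as an open-ended range query instead of a wildcard:

```python
# Sketch: an open-ended range query as a "not null" filter on a points-based
# field (field name is a placeholder).
from urllib.parse import urlencode

params = {"q": "*:*", "fq": "my_plong_field:[* TO *]"}
print(urlencode(params))
```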
v 24, 2020 at 8:00 AM Erick Erickson
>> wrote:
>>>
>>> This is a common point of confusion. There are two phases for creating a
>> query,
>>> query _parsing_ first, then the analysis chain for the parsed result.
>>>
>>> So what e-dismax
> On Tue, Nov 24, 2020 at 8:00 AM Erick Erickson
> wrote:
> >
> > This is a common point of confusion. There are two phases for creating a
> query,
> > query _parsing_ first, then the analysis chain for the parsed result.
> >
> > So what e-dismax sees i
Is the normal/standard solution here to regex remove the '-'s and
combine them into a single token?
On Tue, Nov 24, 2020 at 8:00 AM Erick Erickson wrote:
>
> This is a common point of confusion. There are two phases for creating a
> query,
> query _parsing_ first, then
This is a common point of confusion. There are two phases for creating a query,
query _parsing_ first, then the analysis chain for the parsed result.
So what e-dismax sees in the two cases is:
Name_enUS:“high tech” -> two tokens; since there are two of them, pf2 comes into
play.
Name_enUS:“h
Fetch would work for my specific case (since I’m working with ids there’s no
one-to-many), if I were able to restrict fetch’s target domain with a query. I
would first get all possible deleted ids, then use fetch against the items
collection. But then the current fetch implementation would find all
I am troubleshooting an issue with ranking for search terms that contain a
"-" vs the same query that does not contain the dash e.g. "high-tech" vs
"high tech". The field that I am querying is using the standard tokenizer,
so I would expect that the underlying lucen
e a blocker for you which is that it doesn't support one-to-many joins yet.
>
> Joel Bernstein
> http://joelsolr.blogspot.com/
>
>
> On Sun, Nov 22, 2020 at 10:37 AM ufuk yılmaz
> wrote:
>
>> Hi all,
>>
>> I’m looking for a way to query two collections
the main limitation is likely to
be a blocker for you which is that it doesn't support one-to-many joins yet.
Joel Bernstein
http://joelsolr.blogspot.com/
On Sun, Nov 22, 2020 at 10:37 AM ufuk yılmaz
wrote:
> Hi all,
>
> I’m looking for a way to query two collections and find docu
Hi all,
I’m looking for a way to query two collections and find documents that exist in
both. I know this can be done with the innerJoin streaming expression, but I
want to avoid it, since one of the collection streams can possibly have
billions of results:
Let’s say two collections are
> *NestableJsonFacet
> > object.
> >
> > But I have noticed it does not maintain the facet-query order in which it
> > was given in *facet.json.*
> > *Direct queries to solr do maintain that order, but not after it comes to
> > Java layer in SolrJ.*
> >
sues.apache.org/jira/browse/SOLR-6468
> >
> > I was thinking about workarounds, but each solution I've attempted
> doesn't
> > quite work.
> >
> > Therefore, maybe one possible solution is to take a step back and
> > preprocess index/query data going
ng about workarounds, but each solution I've attempted doesn't
> quite work.
>
> Therefore, maybe one possible solution is to take a step back and
> preprocess index/query data going to Solr, something like:
>
> String wordsForSolr = removeStopWordsFrom("This is
https://issues.apache.org/jira/browse/SOLR-6468
I was thinking about workarounds, but each solution I've attempted doesn't
quite work.
Therefore, maybe one possible solution is to take a step back and
preprocess index/query data going to Solr, something like:
String wordsForSolr = removeStop
As a Solr query result set may contain documents that do not include all
search terms, we were wondering if it is possible to get an indication of which
terms were missing as part of the response.
For example, if our index has the following indexed doc:
{
"title": "hello"
Hi all,
We are experiencing some unexpected behaviour for phrase queries which we
believe might be related to the FlattenGraphFilterFactory and stopwords.
Brief description: when performing a phrase query
"Molecular cloning and evolution of the" => we get expected hits
"Mol
) to run these kinds of
facet queries with no intention of ever conditionally following up in
a way that would want the actual results/docSet -- even if the
initial/more common query only cares about boolean existence.
The case in which this type of functionality really might be indicated is:
1. o
> "exists()" function that doesn't currently exist, that *is* an
> aggregate function, and the *does* stop early. I didn't account for
> the fact that there's already an "exists()" function *query* that
> behaves very differently. So yes, definitely confusing :
Michael, sorry for the confusion; I was positing a *hypothetical*
"exists()" function that doesn't currently exist, that *is* an
aggregate function, and that *does* stop early. I didn't account for
the fact that there's already an "exists()" function *query* th
searches to that
content types but often with distinct combinations of categories, i.e.
customer A wants his facet "tours" to only count hiking tours, customer B
only mountaineering tours, customer C a combination of both, etc
* We use "query" facets as each facet request will be b
class.
> basically as *queryResponse.getJsonFacetingResponse() -> returns
> *NestableJsonFacet
> object.
>
> But I have noticed it does not maintain the facet-query order in which it
> was given in *facet.json.*
> *Direct queries to solr do maintain that order, but not after it
ically clearer than capping count to 1, as I gather
`facet.exists` does. For the same reason, implementing this as a
function would probably be better than adding this functionality to
the `query` facet type, which carries certain useful assumptions (the
meaning of the "count" attribut
This really sounds like an XY problem. The whole point of facets is
to count the number of documents that have a value in some
number of buckets. So trying to stop your facet query as soon
as it matches a hit for the first time seems like an odd thing to do.
So what’s the “X”? In other words
Hi,
I use JSON facets of type 'query'. As these queries are pretty slow and I'm
only interested in whether there is a match or not, I'd like to restrict
the query execution, similarly to standard faceting (like with the
facet.exists parameter). My simplified query looks som
ng only a digit like "1" or "2" ,... or
> >>> just a letter like "a" or "b" ...
> >>>
> >>> Is it a good idea to block them ... ie just single digits 0 - 9 and a
> -
> >> z
> >>> by put
t;>
>>> Hello,
>>>
>>> I want to block queries having only a digit like "1" or "2" ,... or
>>> just a letter like "a" or "b" ...
>>>
>>> Is it a good idea to block them ... ie just single digits 0 - 9
y a digit like "1" or "2" ,... or
> > just a letter like "a" or "b" ...
> >
> > Is it a good idea to block them ... ie just single digits 0 - 9 and a -
> z
> > by putting them as a stop word? The problem with this I can anticip
like "1" or "2" ,... or
> just a letter like "a" or "b" ...
>
> Is it a good idea to block them ... ie just single digits 0 - 9 and a - z
> by putting them as a stop word? The problem with this I can anticipate is a
> query like "1 inch sc
Hello,
I want to block queries having only a digit like "1" or "2", ... or
just a letter like "a" or "b" ...
Is it a good idea to block them, i.e. just single digits 0-9 and a-z,
by putting them as stop words? The problem with this I can antic
Hi folks,
I'm doing some faceted queries using the 'facet.json' param and SolrJ, and
processing the results with the SolrJ NestableJsonFacet class, basically via
queryResponse.getJsonFacetingResponse(), which returns a NestableJsonFacet
object.
But I have noticed it does not maintain th
Hi,
Suppose I have say 50 ElevateIds and I have a way to identify those that
would get filtered out by predefined fqs in the query. So they would in
reality never even be in the results and hence never be elevated.
Is there any advantage if I avoid passing them in the elevateIds at the
time of
",
"text_suggest":[
""
],
"sponsorname":[
""
],
"status":"",
"_version_":1680437253090836480
}
],
"therapeuticareas
ed to
use a query like field:** as a workaround which works but is quite
inefficient. Another workaround is to search with a large distance to
match any possible point. This is pretty fast (in fact, with my data
it is even faster than field:* in 8.4.1) but it seems like an ugly
hack. Anyway, I
ield,
which may group children from different parents, but in this particular case
groups are only from one parent.
This is the query example:
qt=/select&wt=json&indent=true&start=0&rows=30&df=_text_sp_&q=VERY_LONG_BOOLEAN_QUERY_USING_SEVERAL_INDEXED_STRING_FIELDS_FROM_CHIL
harjags wrote
> Below errors are very common in 7.6 and we have solr nodes failing with
> tanking memory.
>
> The request took too long to iterate over terms. Timeout: timeoutAt:
> 162874656583645 (System.nanoTime(): 162874701942020),
> TermsEnum=org.apache.lucene.codecs.blocktree.SegmentTermsEnum
le 300 documents vs. 10 documents for
the response. Regardless of the fields returned, the entire document will be
decompressed if you return any fields that are not docValues=true. So it’s
possible that what you’re seeing is related.
Try adding, as Alexandre suggests, &debug to the
What do the debug versions of the query show between the two versions?
One thing that changed is sow (split on whitespace) parameter among
many. It is unlikely to be the cause, but I am mentioning just in
case.
https://lucene.apache.org/solr/guide/8_6/the-standard-query-parser.html#standard-query
Hi Solr Experts!
We are moving from Solr 6.5.1 to Solr 8.5.0 and having a problem with a long
query, which has a search text plus many OR and AND conditions (all in one
place; the query is about 20KB long).
For the same set of data (about 500K docs) and the same schema the query in
Solr 6 return
This is solved by using local parameters. So
{!func}sub(num_tokens_int,query({!dismax qf=field_name v=${text}}))
works
On Mon, Sep 21, 2020 at 7:43 PM krishan goyal wrote:
> Hi,
>
> I have use cases of features which require a query function and some more
> math on top of the r