Thank you Walter!
Will have a look at how to do this with edismax.
Ruslan
On Tue, Jun 18, 2019 at 6:26 PM Walter Underwood
wrote:
> Use two fields, one for exact, one for phonetic. Use the edismax query
> handler and set
> a higher weight on the exact field.
>
> wunder
> Walter Underwood
> wun...@wunderwood.org
Use two fields, one for exact, one for phonetic. Use the edismax query handler
and set
a higher weight on the exact field.
wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/ (my blog)
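A minimal sketch of the two-field setup Walter describes, assuming a source field "name" and two derived fields "name_exact" and "name_phonetic" (all field and type names here are hypothetical):

```xml
<!-- schema sketch only: index the same source text two ways -->
<field name="name_exact" type="text_general" indexed="true" stored="true"/>
<field name="name_phonetic" type="text_phonetic" indexed="true" stored="false"/>
<copyField source="name" dest="name_exact"/>
<copyField source="name" dest="name_phonetic"/>
```

Then query with defType=edismax and qf=name_exact^10 name_phonetic, so a document matching the exact spelling outscores one that only matches phonetically.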
> On Jun 18, 2019, at 5:23 PM, Ruslan Dautkhanov wrote:
>
> We're using phonetic
We're using phonetic filters (BMPM), and we want to boost exact matches if
there are any.
For example, for the name "stephen" the BM filter will generate two terms: stifn,
stipin.
And for example, for the name "stepheM" (misspelled last letter),
it'll match on the same two terms.
This makes match sc
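For reference, a BMPM field type like the one described is typically built on solr.BeiderMorseFilterFactory; a sketch (attribute values are illustrative and may need tuning for your data):

```xml
<fieldType name="text_phonetic" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <!-- Beider-Morse phonetic matching; APPROX rules generate terms like stifn/stipin -->
    <filter class="solr.BeiderMorseFilterFactory"
            nameType="GENERIC" ruleType="APPROX"
            concat="true" languageSet="auto"/>
  </analyzer>
</fieldType>
```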
Hello,
I was looking into the code to try to get to the root of this issue. Looks
like this is an issue after all (as of 7.2.1 which is the version we are
using), but wanted to confirm on the user list before creating a JIRA. I
found that the soTimeout property of the ConcurrentUpdateSolrClient class
I attached the patch, but attachments aren't sent out on the mailing list, my mistake.
Patch below:
### START
diff --git
a/solr/core/src/java/org/apache/solr/cloud/api/collections/SplitShardCmd.java
b/solr/core/src/java/org/apache/solr/cloud/api/collections/SplitShardCmd.java
index 24a52eaf97..e018f8a
Hello,
We're on Solr 6.2.1 and have a requirement where we need to facet on nested
docs. So far we'd been using a two-pass query approach, where the first query
searches within the parent domain and gets all the matching nested doc ids
as facets (parent docs keep track of nested docs they contain) a
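A single-pass alternative, where the JSON Facet API's domain changes are available (check whether your Solr version supports this), is a terms facet computed over a blockChildren domain; a sketch with hypothetical field names, passed as the json.facet request parameter:

```json
{
  "childCats": {
    "type": "terms",
    "field": "childCategory",
    "domain": { "blockChildren": "docType:parent" }
  }
}
```

Here "docType:parent" is the query that identifies parent documents in the block, and the facet counts are computed over their children.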
On 6/18/2019 12:16 PM, ilango dhandapani wrote:
I tried several things, like deleting the collection/config, taking an index backup
from 5.3, clearing the index, and placing the backup back after the upgrade. All
attempts resulted in faceting not working with 5.3 and 6.0 data combined.
Most likely what happened here is that the
The Lucene PMC is pleased to announce that the Solr Reference Guide for
Solr 8.1 is now available.
This 1,483-page PDF is the definitive guide to Apache Solr, the search
server built on Apache Lucene.
The PDF can be downloaded from:
https://www.apache.org/dyn/closer.cgi/lucene/solr/ref-guide/apac
I am trying to upgrade Solr (cloud mode) from 5.3 to 6.0. My ZK version is
3.4.6.
I updated the schema for 6.0 and started Solr back up as 6.0. All the old data
is present. I have a UI where all the files are displayed (by searching from
Solr).
When I add new data, faceting is not working and having i
I am trying to upgrade my Solr (cloud mode) from 5.3 to 6.0. My ZK
version is 3.4.6.
After updating the schema and starting Solr as 6.0, all the nodes' health looks
fine. When I add new files and they go to all the shards, faceting stops
working. I have a UI where the files are displayed ( search
Looks like the disk check here is the problem. I am no Java developer, but this
patch ignores the check if you are using the link method for splitting.
Attached the patch. This is off of the commit for 7.7.2, d4c30fc285 . The
modified version only has to be run on the overseer machine, so there
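For context, the link split method is selected via the Collections API; a sketch with hypothetical collection and shard names:

```text
http://host:8983/solr/admin/collections?action=SPLITSHARD&collection=mycollection&shard=shard1&splitMethod=link
```

The link method hard-links index files instead of rewriting them, which is why the split itself needs very little extra disk space.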
I see the following in the CDCR Queues API documentation:
The output is composed of a list “queues” which contains a list of (ZooKeeper)
Target hosts, themselves containing a list of Target collections. For each
collection, the current size of the queue and the timestamp of the last update
operation succe
Using the Solr 7.7.2 Docker image, testing some of the new autoscale features; huge
fan so far. Tested with the link method on a 2GB core and found that it took
less than 1MB of additional space. Filled the core quite a bit larger, 12GB of
a 20GB PVC, and now splitting the shard fails with the follo
We are using bidirectional CDCR with Solr 7.6 and it works for us. Did you look
at the logs to see if there are any errors?
"Both Cluster 1 and Cluster 2 can act as Source and Target at any given
point of time but a cluster cannot be both Source and Target at the same
time."
The above me
Dynamic fields don’t make any difference, they’re just like fixed fields as far
as merging is concerned.
So this is almost certainly merging being kicked off by your commits. The more
documents and the more terms, the more work Lucene has to do, so I suspect
this is just how things work.
I’l
Thanks Erick.
I see the above pattern only at the time of commit.
I have many fields (around 250, out of which around 100 are dynamic fields,
plus around 3 n-gram fields and some text fields, and many of them are
stored as well as indexed), so will a merge take a lot
of t
Hi,
I have two collections - collection1 and collection2.
I am doing HTTP request on collection2 using
http://localhost:8983/solr/collection1/tcfts/?params={q=col:value AND
_query_:{!join}...AND _query_:{!join}..}
If my query is like - fieldOnCollection1:somevalue AND INNER JOIN (with
collection1
Thanks Erick. I will try the terms query.
--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html
If these are the id field (i.e. ), then delete by id is
much less painful. That aside:
1> check how the query is parsed with just one or two Barcodes. If
you are pushing this through edismax or similar, you might be getting
surprising results
2> try putting that massive OR clause inside a Terms q
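The terms query parser suggestion can be sketched as follows; this is a minimal illustration, with the field name "barcode", the batch size, and the values all hypothetical:

```python
def terms_delete_query(field, values):
    # Build a delete-by-query string using the {!terms} query parser,
    # which takes a comma-separated value list instead of a huge OR clause.
    return "{!terms f=%s}%s" % (field, ",".join(values))

def chunks(seq, size):
    # Split a long id list into batches so each delete request stays small.
    return [seq[i:i + size] for i in range(0, len(seq), size)]

# Hypothetical example: thousands of barcodes would be batched the same way.
batches = chunks(["b1", "b2", "b3", "b4", "b5"], 2)
queries = [terms_delete_query("barcode", b) for b in batches]
print(queries[0])  # {!terms f=barcode}b1,b2
```

Each query string would then be sent as a normal delete-by-query to the update handler; if the values are the uniqueKey field, delete-by-id in batches is simpler still.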
Hi,
maybe this one?
http://opennlp.sourceforge.net/models-1.5/
Tue, Jun 18, 2019 at 17:13 Vidit Mathur :
>
> Sir/ma'am
> I was trying to integrate OpenNLP with Solr for lemmatizing the search
> text, but I could not find the lemmatization model on opennlp.sourceforge.net .
> Could you please help me with th
There is a situation where we have to delete a lot of assets for a customer
from the Solr index. There are occasions where the number of assets runs into
the thousands. So I am constructing the query as below. If the number of 'OR'
clauses crosses a certain limit (like 50), the delete does not work.
We ar
Sir/ma'am
I was trying to integrate OpenNLP with Solr for lemmatizing the search
text, but I could not find the lemmatization model on opennlp.sourceforge.net .
Could you please help me with this issue or suggest some workaround.
Regards
Vidit Mathur
(Student)
Dear Solr Developer,
I am a Chinese software developer and I have been using Solr for nearly
4 years. First, thank you for your continuous effort on improving Solr. Recently I
began to read the source code because I am very curious about how it works. But
I encountered many questions which I s
Can you please describe the steps you have done so far?
> Am 18.06.2019 um 02:22 schrieb Val D :
>
> To whom it may concern:
>
> I have a Windows based system running Java 8. I have installed SOLR 7.7.2 (
> I also tried this with version 8.1.1 as well with the same results ). I have
> SQL S
To whom it may concern:
I have a Windows-based system running Java 8. I have installed Solr 7.7.2 (I
also tried this with version 8.1.1 as well, with the same results). I have SQL
Server 2018 with 1 table that contains 22+ columns and a few thousand rows. I
am attempting to index the SQL Se
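If the indexing is going through the DataImportHandler, the data-config usually looks like this sketch (the JDBC URL, credentials, table, and column names are hypothetical, and the Microsoft JDBC driver jar must be on Solr's classpath):

```xml
<dataConfig>
  <dataSource type="JdbcDataSource"
              driver="com.microsoft.sqlserver.jdbc.SQLServerDriver"
              url="jdbc:sqlserver://localhost;databaseName=mydb"
              user="solruser" password="secret"/>
  <document>
    <!-- one Solr document per row returned by the query -->
    <entity name="rows" query="SELECT id, title FROM mytable">
      <field column="id" name="id"/>
      <field column="title" name="title"/>
    </entity>
  </document>
</dataConfig>
```

The handler is then registered in solrconfig.xml and triggered with the /dataimport full-import command.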