Hi All,
I am trying to integrate UIMA with Solr. I was able to do so, but some
of the UIMA fields are not getting indexed into Solr, whereas other *fields
like pos and ChunkType are getting indexed*.
I am using openNLP-UIMA together for text analysis.
When I tried to index the UIMA field for locatio
Does anyone have any idea whether the authentication will expire automatically?
Mine has already been authenticated for more than 20 hours, and it has not
auto-logged out yet.
Regards,
Edwin
On 11 April 2017 at 00:19, Zheng Lin Edwin Yeo wrote:
> Hi,
>
> Would like to check, after I have entered the auth
I found from StackOverflow that we should declare it this way:
http://stackoverflow.com/questions/43335419/using-basicauth-with-solrj-code
SolrRequest req = new QueryRequest(new SolrQuery("*:*")); // create a new request object
req.setBasicAuthCredentials(userName, password);
solrClient.request(req);
Hi,
I have the following odd behavior on my queries. I'm using group result
using the group feature of solr. The problem is that when using the default
sort=score desc the result is as expected. But when I changed it to
sort=score asc the result is not what I expected. Below are the details of
my
Please open an issue on Tika's JIRA and share the triggering file if possible.
If we can touch the file, we may be able to recommend alternate ways to
configure Tika's encoding detectors. We just added configurability to the
encoding detectors and that will be available with Tika 1.15. [1]
We
Hi all,
I use TikaEntityProcessor to extract the text content from binary or text
files.
But when I try to extract Japanese characters from an HTML file whose
character encoding is SJIS, the content is garbled. In the case of UTF-8, it
works well.
The setting of Data Import Handler is as below.
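For readers following along, a minimal sketch of a typical TikaEntityProcessor data-config (the baseDir, file pattern, and field names below are hypothetical placeholders, not the poster's actual settings):

```xml
<dataConfig>
  <!-- BinFileDataSource streams raw bytes so Tika can do its own encoding detection -->
  <dataSource type="BinFileDataSource"/>
  <document>
    <entity name="files" processor="FileListEntityProcessor"
            baseDir="/data/html" fileName=".*\.html" rootEntity="false">
      <entity name="tika" processor="TikaEntityProcessor"
              url="${files.fileAbsolutePath}" format="text">
        <field column="text" name="content"/>
      </entity>
    </entity>
  </document>
</dataConfig>
```

If Tika's charset detection misidentifies SJIS pages, verifying that the HTML actually declares its encoding (meta charset) is usually the first thing to check.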
Here we go
https://cwiki.apache.org/confluence/display/solr/Other+Parsers#OtherParsers-FunctionRangeQueryParser
.
But it's 100% YAGNI. You'd better tweak your search to be more precise.
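For the record, the frange approach described in the linked wiki page typically looks like this (the query term and bounds below are hypothetical; `query($q)` re-evaluates the main query as a function so its score can be range-filtered):

```text
q=ipod&fq={!frange l=0.5 u=2.0}query($q)
```

As noted above, filtering on raw scores is rarely a good idea, since scores are not normalized across queries.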
On Mon, Apr 10, 2017 at 7:12 PM, Ahmet Arslan
wrote:
> Hi,
> I remember that this is possible via frange query pars
Hi Mike
Disclaimer: I'm the author of https://github.com/freedev/solrcloud-zookeeper-docker
I had the same problem when I tried to create a SolrCloud cluster with Docker,
because the Docker instances were referred to by IP addresses I could not
access with SolrJ.
I avoided this problem by referring to each
Hi,
Would like to check, after I have entered the authentication to access Solr
with Basic Authentication Plugin, will the authentication be expired
automatically after a period of time?
I'm using SolrCloud on Solr 6.4.2
Regards,
Edwin
Hi,
I remember that this is possible via the frange query parser. But I don't have the
query string at hand.
Ahmet
On Monday, April 10, 2017, 9:00:09 PM GMT+3, David Kramer
wrote:
I’ve done quite a bit of searching on this. Pretty much every page I find says
it’s a bad idea and won’t work well, but
Well, that's rather the point, the low-scoring docs aren't unrelated,
someone just thinks they are.
Flippancy aside, the score is, as you've researched, a bad gauge.
Since Lucene has to compute the score of a doc before it knows the
score, at any point in the collection process you may get a doc t
Hi,
I have just set up the Basic Authentication Plugin in Solr 6.4.2 on
SolrCloud, and I am trying to modify my SolrJ code so that the code can go
through the authentication and do the indexing.
I tried using the following code from the Solr Documentation
https://cwiki.apache.org/confluence/displ
Hello guys,
I manage a Solr cluster and I am experiencing some problems with dynamic
schemas.
The cluster has 16 nodes and 1500 collections, with 12 shards per collection
and 2 replicas per shard. The nodes can be divided into 2 major tiers:
- tier1 is composed of 12 machines with 4 physical cores
We are using Solr 6.4.2. Can anyone tell me if this is a bug for which I can open
a JIRA?
With Thanks & Regards
Karthik Ramachandran
Direct: (732) 923-2197
Please don't print this e-mail unless you really need to
From: Karthik Ramachandran
Sent: Tuesday, April 4, 2017 8:35 PM
To: 'solr-user@lucen
I’ve done quite a bit of searching on this. Pretty much every page I find says
it’s a bad idea and won’t work well, but I’ve been asked to at least try it to
reduce the number of completely unrelated results returned. We are not trying
to normalize the number, or display it as a percentage, an
Hi all,
I'm experiencing a head-scratcher. I've got some queries that aren't
matching despite seeing them do so in the Analysis window. I'm wondering if
it's due to multi-term differences between Analysis and raw queries.
I'm querying something like this: ...fq=manufacturer:("VENDOR:VENDOR US")
O
Hi All, I am trying to use Solr with 2 cores interacting with 2 different
databases. One core is executing full-import successfully, whereas when I run
it for the 2nd one it throws a "table or view not found" exception. If I use
the query directly, it runs fine. Below is the error message
Hi Himanshu,
maxWarmingSearchers would break nothing in production. Whenever you request
Solr to open a new searcher, it autowarms the searcher so that it can
utilize caching. After autowarming is complete, the new searcher is opened.
The questions you need to address here are
1. Are you using soft-com
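The warming settings discussed above live in solrconfig.xml. A minimal sketch (the values shown are illustrative, not recommendations for this cluster):

```xml
<!-- cap concurrent warming searchers; exceeding this cap produces the
     "Overlapping onDeckSearchers" warning mentioned in this thread -->
<query>
  <maxWarmingSearchers>2</maxWarmingSearchers>
</query>
<updateHandler class="solr.DirectUpdateHandler2">
  <!-- soft commits open new searchers; too-frequent soft commits are
       the usual cause of overlapping warming searchers -->
  <autoSoftCommit>
    <maxTime>30000</maxTime>
  </autoSoftCommit>
</updateHandler>
```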
Thanks, Alex, for taking the time to help us understand this
better.
Cheers!
Kshitij
On Mon, Apr 10, 2017 at 4:00 PM, alessandro.benedetti
wrote:
> It really depends on the schema change...
> Any addition/deletion usually implies you can avoid re-indexing if you
> don't
> care the
It really depends on the schema change...
For additions/deletions you can usually avoid re-indexing, if you don't
mind that the old documents will remain outdated.
But doing a type change, or a change to the data structures involved (such
as enabling docValues, norms, etc.) without a full re-index
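As a concrete (hypothetical) illustration of the second case: flipping docValues on an existing field changes the on-disk data structures, so documents indexed before the change lack the new structure until re-indexed:

```xml
<!-- hypothetical managed-schema fragment: changing docValues="false"
     to "true" on an already-populated field requires a full re-index -->
<field name="manufacturer" type="string" indexed="true" stored="true" docValues="true"/>
```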
I was able to figure out the issue. I was directly calling the installed PEAR
file in the Solr updateRequestHandler. Once I mapped the PEAR file inside an AE
XML file and pointed the request handler at that AE file, the issue was solved.
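For readers hitting the same problem: the solr-uima contrib expects an analysis engine descriptor rather than a raw PEAR. A minimal sketch of the relevant solrconfig.xml wiring (the descriptor path and field name are hypothetical):

```xml
<updateRequestProcessorChain name="uima">
  <processor class="org.apache.solr.uima.processor.UIMAUpdateRequestProcessorFactory">
    <lst name="uimaConfig">
      <lst name="runtimeParameters"/>
      <!-- AE descriptor XML that wraps the installed PEAR -->
      <str name="analysisEngine">/path/to/MyAggregateAE.xml</str>
      <bool name="ignoreErrors">true</bool>
      <lst name="analyzeFields">
        <bool name="merge">false</bool>
        <arr name="fields"><str>text</str></arr>
      </lst>
      <lst name="fieldMappings"/>
    </lst>
  </processor>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>
```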
This is the error message that I get.
2017-04-10 08:30:05.766 ERROR (main) [ ] o.a.s.s.SolrDispatchFilter Could
not start Solr. Check solr/home property and the logs
2017-04-10 08:30:05.779 ERROR (main) [ ] o.a.s.c.SolrCore
null:java.lang.ClassCastException: java.lang.String cannot be cast to
Hi Toke,
Thanks for your time and quick response. As you said, I changed our logging
level from SEVERE to INFO and indeed found the performance warning *Overlapping
onDeckSearchers=2* in the logs. I am considering limiting the
*maxWarmingSearchers* count in configuration but want to be sure that
n
Hi Alex,
After full re-indexing, things work out fine.
But is there any other way to make schema changes on the fly?
Or do we have to reindex the entire dataset whenever a schema change is made?
We have 30-40 million documents, and it is a tedious and time-consuming
task.
What other approaches are there