Hello Team,
I am a beginner learning Apache Solr. I am trying to check Solr's compatibility
with SharePoint Online, but I cannot find anything concrete about this in the
website documentation. Could you please help by providing some information on
this? How can I index my SharePoi
Hi,
I used the following query:
{!parent which="isParent:1" exp=total}+description:JSON +exp:[4 TO 7]
{!func}exp.
It is considering only the highest experience from the matched descriptions,
not the sum of the matched descriptions' experience.
Can you please explain this to me in detail?
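For reference: the aggregation of child scores in the block-join parent parser is controlled by its score local parameter (exp= is not such a parameter); score=total sums the scores of the matching children instead of taking the maximum. A sketch reusing the names from the query above:

```text
q={!parent which="isParent:1" score=total}+description:JSON +exp:[4 TO 7] {!func}exp
```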
Hi Emir,
There is one behavior I noticed while performing the incremental import. I
added a new field to managed-schema.xml to test the incremental nature of
using clean=false.
Now xtimestamp has a new value on every DIH import, even with the
clean=false property. Now I am
Thanks for your response.
CoreDescriptor is not present in TransientSolrCoreCacheDefault for a core that
has an init failure. Is that expected?
On 1/29/18, 4:35 PM, "Erick Erickson" wrote:
Lots of that was reworked between those two versions.
I'm not clear what you expect here. If a c
SELinux? Open-file limits? Process limits?
--
Sorry for being brief. Alternate email is rickleir at yahoo dot com
That's what "positionIncrementGap" is all about. It's the offset
between the last token of one element of your list and the first token
of the next. Let's say your doc looks like:
some text
other stuff
I'm presuming that's what you mean by "list".
Now the positions of these tokens are
som
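A quick sketch of the arithmetic (field values and the gap of 100 are illustrative; the exact off-by-one between entries depends on the analyzer, what matters is the large jump):

```python
# Sketch: how positionIncrementGap spaces token positions across the
# entries of a multiValued field (100 is a common gap value).
def token_positions(values, gap=100):
    positions = {}
    pos = 0
    for i, value in enumerate(values):
        if i > 0:
            # Jump the position counter between entries so a phrase
            # query cannot match across two different list elements.
            pos += gap
        for token in value.split():
            positions.setdefault(token, []).append(pos)
            pos += 1
    return positions

# "other" lands at least 100 positions after "text", so the phrase
# "text other" will not match this document.
print(token_positions(["some text", "other stuff"]))
```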
There was a change in the configs between 6.1 and 6.6. If you upgraded your
system and kept the old configs, the /export handler won't work
properly. Check solrconfig.xml and remove any reference to the /export
handler. You also don't need to specify rq or wt when you access the
/export hand
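For reference, a leftover 6.1-era explicit definition in solrconfig.xml typically looked roughly like the following (a sketch; the exact body in an old config may differ), and it is this kind of block that needs removing so the implicit /export handler in newer versions takes over:

```xml
<!-- Old-style explicit /export handler; remove it so the implicit
     definition shipped with newer Solr versions is used instead. -->
<requestHandler name="/export" class="solr.SearchHandler">
  <lst name="invariants">
    <str name="rq">{!xport}</str>
    <str name="wt">xsort</str>
    <str name="distrib">false</str>
  </lst>
</requestHandler>
```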
Lots of that was reworked between those two versions.
I'm not clear what you expect here. If a core fails to initialize,
then what's the purpose of unloading it? It isn't there in the first
place. The coreDescriptor should still be available if you need that,
and can be used to load the core later
Hi Solr users,
I am having an issue with boolean search using the Solr edismax parser. The
search "OR" doesn't work. The image below shows the different results tested
on different Solr versions. There are two types of request handlers,
/select vs /search. The /select handler uses the Lucene default
Hello,
I'm wondering what performance improvements occurred in Solr JOIN from
4.10.0 to 7.x. I noticed that the performance is quicker, but from looking
at the code in JoinQParserPlugin there isn't much change.
I saw that in Solr 5.x passing score=none would invoke Lucene's join
algorithm, which can
Hello,
I observed the following change on switching from Solr version 6.4.2 to 6.6.2:
In 6.6.2, in the case of an init failure, SolrCores.getCoreDescriptor does not
return the core. The core is transient in nature but was not present in the
transient core cache.
This was not the case in 6.4.2. I was get
Hey guys, thank you so much for this help.
Recalling all your suggestions I think I should:
1. Have two collections, with an alias that alternately points to one of
them. Use the configuration with autoSoftCommit and openSearcher=true. I
call this scenario an active/passive configuration.
2. The
Hi there!
We have a use case where we'd like to search within a list field, however
the search should not match across different elements in the list field --
all terms should match a single element in the list.
For example, if the field is a list of comments on a product, the search should be
able to find
On 1/29/18 1:31 PM, Shawn Heisey wrote:
On 1/29/2018 2:02 PM, Scott Prentice wrote:
Thanks, Shawn. I was wondering if there was something going on with
IP redirection that was causing confusion. Any thoughts on how to
debug? And, what do you mean by "extreme garbage collection pauses"?
Is tha
Hi All,
I was using this feature in Solr 6.1:
https://issues.apache.org/jira/browse/SOLR-5244
It seems that this feature is broken in Solr 6.6. If I do this query in
Solr 6.1, it works as expected.
q=*:*&fl=exp_id_s&rq={!xport}&wt=xsort&sort=exp_id_s+asc
However, doing the same query in Solr 6
On 1/29/2018 2:02 PM, Scott Prentice wrote:
Thanks, Shawn. I was wondering if there was something going on with IP
redirection that was causing confusion. Any thoughts on how to debug?
And, what do you mean by "extreme garbage collection pauses"? Is that
Solr garbage collection or the OS itself
Looks like 2888 and 2890 are not open. At least they are not reported by
netstat -plunt, which could be the problem.
Thanks, all!
...scott
On 1/29/18 1:10 PM, Davis, Daniel (NIH/NLM) [C] wrote:
Trying 127.0.0.1 could help. We kind of tend to think localhost is always
127.0.0.1, but I've s
Trying 127.0.0.1 could help. We kind of tend to think localhost is always
127.0.0.1, but I've seen localhost start to resolve to ::1, the IPv6 equivalent
of 127.0.0.1.
I guess some environments can be strict enough to restrict communication on
localhost; seems hard to imagine, but it does hap
Interesting. I am using "localhost" in the config files (using the IP
caused things to break even worse). But perhaps I should check with IT
to make sure the ports are all open.
Thanks,
...scott
On 1/29/18 12:57 PM, Davis, Daniel (NIH/NLM) [C] wrote:
To expand on that answer, you have to won
On 1/29/18 12:44 PM, Shawn Heisey wrote:
On 1/29/2018 1:13 PM, Scott Prentice wrote:
But when I do the same thing on the Red Hat system it fails. Through
the UI, it'll first time out with this message ..
Connection to Solr lost
Then after a refresh, the collection appears to have been pa
To expand on that answer, you have to wonder what ports are open in the server
system's port-based firewall. I have to ask my systems team to open ports
for everything I'm using, especially when I move from localhost to outside.
You should be able to "fake it out" if you set up your zookeeper
On 1/29/2018 1:13 PM, Scott Prentice wrote:
But when I do the same thing on the Red Hat system it fails. Through
the UI, it'll first time out with this message ..
Connection to Solr lost
Then after a refresh, the collection appears to have been partially
created, but it's in the "Gone" st
Using Solr 7.2.0 and Zookeeper 3.4.11
In an effort to move to a more robust Solr environment, I'm setting up a
prototype system of 3 Solr servers and 3 Zookeeper servers. For now,
this is all on one machine, but will eventually be 3 machines.
This works fine on a Ubuntu 5.4.0-6 VM on my local
Believe this is reported in https://issues.apache.org/jira/browse/SOLR-10471
On Mon, Jan 29, 2018 at 2:55 PM, Markus Jelsma
wrote:
> Hello SG,
>
> The default in solr.in.sh is commented so it defaults to the value set in
> bin/solr, which is fifteen seconds. Just uncomment the setting in
> solr
Hello SG,
The default in solr.in.sh is commented out, so it defaults to the value set in
bin/solr, which is fifteen seconds. Just uncomment the setting in solr.in.sh
and your timeout will be thirty seconds.
For Solr itself to really default to thirty seconds, Solr's bin/solr needs to
be patched to
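Concretely, the line to uncomment in solr.in.sh looks like this (value shown for a thirty-second timeout):

```shell
# solr.in.sh: uncomment so the intended 30-second timeout takes
# effect; while commented, bin/solr's 15-second fallback wins.
ZK_CLIENT_TIMEOUT="30000"
```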
Hi Markus,
We are in the process of upgrading our clusters to 7.2.1 and I am not sure
I quite follow the conversation here.
Is there a simple workaround to set the ZK_CLIENT_TIMEOUT to a higher value
in the config (and it's just a default value being wrong/overridden
somewhere)?
Or is it more seve
If you need to make a request to Solr that has a lot of custom
parameters and values, you can create an additional request handler
definition and add all those parameters there, instead of
hardcoding them on the client side. See solrconfig.xml; there are lots
of examples there.
Regards,
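As a sketch of what such a definition can look like (the handler name, fields, and values here are hypothetical, not from the thread):

```xml
<!-- Hypothetical handler that bakes the custom parameters in
     server-side, so clients only send the query itself. -->
<requestHandler name="/mysearch" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="defType">edismax</str>
    <str name="qf">title^2 description</str>
    <str name="rows">20</str>
    <str name="fl">id,title,score</str>
  </lst>
</requestHandler>
```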
Thanks!
Joel Bernstein
http://joelsolr.blogspot.com/
On Mon, Jan 29, 2018 at 11:14 AM, Kojo wrote:
> Joel,
> The Jira is created:
> https://issues.apache.org/jira/browse/SOLR-11922
>
> I hope it helps.
>
> Thank you very much.
>
> 2018-01-29 13:03 GMT-02:00 Joel Bernstein :
>
> > This loo
did you push 'syns.txt' to ZooKeeper into the same place your schema is?
Best,
Erick
On Mon, Jan 29, 2018 at 7:34 AM, beji dhia wrote:
> hello,
> I'm a beginner with Solr and I have some difficulties manipulating resources.
> In fact, I am using Solr 7.2 in cloud mode using ZooKeeper.
> 1/ I creat
Try searching with the word "and" in lowercase. Somehow you have to allow
the parser to distinguish the two.
You _might_ be able to try "AND~2" (with quotes) to see if you can get
that through the parser. Kind of a hack, but
There's also a parameter (depending on the parser) about lowercasing
oper
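One such parameter on the edismax parser is lowercaseOperators: when it is false, a lowercase "and" or "or" is treated as an ordinary search term rather than as a boolean operator. A hypothetical request (query values made up):

```text
q=cats and dogs&defType=edismax&lowercaseOperators=false
```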
Hello everybody,
how can I formulate a fuzzy query that works for an arbitrary string?
That is, is there a formal syntax definition somewhere?
I already found by hand that
field:"val"~2
is read by the parser, but the fuzziness seems to get lost. So I write
field:val~2
Now if val contain s
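For reference: in Lucene query syntax the tilde only means fuzziness on a single bare term (with a maximum edit distance of 2), while on a quoted phrase the same tilde means phrase slop, which is why the quoted form seems to lose the fuzziness:

```text
field:val~2        fuzzy match on one term (edit distance up to 2)
field:"val ue"~2   phrase query with slop 2 (not fuzzy at all)
```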
Joel,
The Jira is created:
https://issues.apache.org/jira/browse/SOLR-11922
I hope it helps.
Thank you very much.
2018-01-29 13:03 GMT-02:00 Joel Bernstein :
> This looks like a bug in the CartesianProductStream. It's going to have to be
> fixed before parallel cartesian products can be run. Fe
hello,
I'm a beginner with Solr and I have some difficulties manipulating resources.
In fact, I am using Solr 7.2 in cloud mode using ZooKeeper.
1/ I created a collection named films
2/ I wanted to add a synonyms file called "syns.txt", so I added it into
"server/solr/configsets/_default/conf/"
3/ I execu
I think it really depends on the particular use case. Sometimes the absolute
score is a good feature, sometimes not.
If you are using the default BM25, I think that increasing the number of terms
in the query will increase the average doc score in the results. So maybe I
would normalize the sc
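As a sketch of the kind of per-query normalization meant here (pure illustration, not a Solr API):

```python
# Min-max normalize the scores of one result list into [0, 1].
# Note: this only makes scores comparable *within* one query's
# results; it does not make scores comparable across queries.
def normalize(scores):
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [1.0] * len(scores)
    return [(s - lo) / (hi - lo) for s in scores]

print(normalize([3.2, 1.6, 0.8]))
```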
Taking a look at the Lucene code, this seems the closest query to your
requirement:
org.apache.lucene.search.spans.SpanPositionRangeQuery
But it is not used in Solr out of the box, as far as I know.
You could potentially develop a query parser and use it to reach your goals.
Given that, I thin
This looks like a bug in the CartesianProductStream. It's going to have to be
fixed before parallel cartesian products can be run. Feel free to create a
jira for this.
Joel Bernstein
http://joelsolr.blogspot.com/
On Mon, Jan 29, 2018 at 9:58 AM, Kojo wrote:
> Hi solr-users!
> I have a Streaming Ex
Hi solr-users!
I have a Streaming Expression which joins two search SEs; one of them is
evaluated on a cartesianProduct SE.
I'm trying to run that in parallel mode, but it does not work.
Trying a very simple parallel I can see that it works:
parallel(
search(
But this one I'm trying to run,
Hi Karan,
Glad it worked for you.
I am not sure how to do it in the C# client, but adding the clean=false
parameter to the URL should do the trick.
Thanks,
Emir
--
Monitoring - Log Management - Alerting - Anomaly Detection
Solr & Elasticsearch Consulting Support Training - http://sematext.com/
> On 29 Ja
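For reference, with the DataImportHandler the parameter just goes on the request URL (host, port, and core name here are placeholders):

```text
http://localhost:8983/solr/mycore/dataimport?command=full-import&clean=false&commit=true
```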
>It seems to me that the original score feature is not useful because it is
not normalized across all queries and therefore cannot be used to compare
relevance in different queries.
I don't agree with this statement and it's not what Alessandro was
suggesting ("When you put the original score toge
Thanks Emir :-) . Setting the property *clean=false* worked for me.
Is there a way I can selectively clean a particular index from the
C#.NET code using the SolrNet API?
Please suggest.
Kind regards,
Karan
On 29 January 2018 at 16:49, Emir Arnautović
wrote:
> Hi Karan,
> Did you try runni
In theory it should be possible if you are indexing the positions of the tokens
in your field, but I am not aware of any Solr query that allows you to weight
the matches based on position. Does anyone know if it is possible?
Hi,
Is it possible to merge fields of two stream sources in a specific way?
Take for example two search result sets:
search_1(... fl="score")
search_2(... fl="score")
I would like to merge these two into one result set. Its score would be
computed using a custom function f(x,y) that takes s
FYI. I recently did a study on 'Performance of Solr'
https://www.linkedin.com/pulse/performance-comparison-solr-elasticsearch-deepak-goel/?trackingId=N2j9xWvVEQQaZYa%2BoEsy%2Bw%3D%3D
Deepak
"Please stop cruelty to Animals, help by becoming a Vegan"
+91 73500 12833
deic...@gmail.com
Hi Aashish,
Can you tell us a bit more about the size of your index and if you are running
updates at the same time, types of queries, tests (is it some randomized query
or some predefined), how many test threads do you use?
Thanks,
Emir
--
Monitoring - Log Management - Alerting - Anomaly Detect
Hi Alessandro,
Thanks for making it clearer. As I mentioned, I do not want to change my
index (mentioned in the subject) for the feature I requested:
"the search query will have to look for the first 100 characters indexed in
the same XYZ field."
How can I achieve this without changing the index? I want, at search
Hi Karan,
Did you try running full import with clean=false?
Emir
--
Monitoring - Log Management - Alerting - Anomaly Detection
Solr & Elasticsearch Consulting Support Training - http://sematext.com/
> On 29 Jan 2018, at 11:18, Karan Saini wrote:
>
> Hi folks,
>
> Please suggest the solution
This seems different from what you initially asked (and Diego responded to):
"One is simple: the search query will look for the whole content indexed in
the XYZ field.
The other one is: the search query will have to look for the first 100
characters indexed in the same XYZ field."
This is still doable at indexing time using a
Hi Muhammad,
If the limit(s) are static, you can still do it at index time: assuming you
send a "content" field, you index it fully (and store it if needed), and you
use a copy field to copy it to a content_limited field where you use a limit
token count filter to index only the first X tokens:
https://lucene.a
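A sketch of that schema wiring (field and type names are made up; LimitTokenCountFilterFactory and its maxTokenCount attribute are the relevant pieces):

```xml
<!-- Full content, indexed and stored as usual. -->
<field name="content" type="text_general" indexed="true" stored="true"/>
<!-- Companion field holding only the first 100 tokens. -->
<field name="content_limited" type="text_limited" indexed="true" stored="false"/>
<copyField source="content" dest="content_limited"/>

<fieldType name="text_limited" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <!-- Drop everything after the first 100 tokens at index time. -->
    <filter class="solr.LimitTokenCountFilterFactory" maxTokenCount="100"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```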
Ok, I applied the patch and it is clear the timeout is 15000. solr.xml says
30000 if ZK_CLIENT_TIMEOUT is not set, which is by default unset in
solr.in.sh, but set in bin/solr to 15000. So it seems Solr's default is still
15000, not 30000.
But, back to my topic. I see we explicitly set it in sol
Thanks Erick.
This is fine, but I do not want to update my indexes, as this configuration
will get applied to indexing as well. I have a requirement where one field
(XYZ) of type text requires two types of searches.
One is simple: the search query will look for the whole content indexed in the XYZ field
Othe
Hi folks,
Please suggest the solution for importing and indexing PDF files
*incrementally*. My requirement is to pull the PDF files remotely from a
network folder path. This network folder will receive new sets of PDF
files at certain intervals (say, 20 seconds). The folder will be forced
Hi,
Solr query time for a request comes to around 10-12 ms. But when I hit
the queries in parallel, the qtime rises to 900 ms, yet there is no significant
increase in CPU load. I am using Solr with default memory settings. How can
I optimize to get lower query times?
Thanks in advance.
Aashish Ag
Generally speaking, if a full re-index is happening every day, wouldn't it be
better to use a technique such as a collection alias?
You could point your search clients to the alias, which points to the
online collection "collection1".
When you re-index, you build "collection2"; when it is finished you
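The pointer swap itself is one Collections API call, so readers of the alias switch atomically once the new collection is built (the alias name here is hypothetical; the collection name follows the example above):

```text
/admin/collections?action=CREATEALIAS&name=products&collections=collection2
```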
When you say: "However, for the phonetic matches, there are some matches
closer to the query text than others. How can we boost these results?"
Do you mean closer in string edit distance?
If that is the case, you could use the string distance metrics implemented in
Solr with a function query:
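If edit distance is indeed what's meant, Solr's strdist function query supports it directly; for example, sorting (or boosting) by similarity to the query text (the field name and value here are hypothetical):

```text
sort=strdist("smith", lastname_s, edit) desc
```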
Hello.
We work on a search application whose main goal is to find persons by name
(surname and last name).
The query text comes from a user-entered text field. The ordering of the text
is not defined (lastname-surname or surname-lastname), but
some orderings are more important than others. The ranking is