Thanks to everyone, I'll give it a try because it seems very useful.
Compiling with Maven is really trivial, and it is indeed not a problem. I have a
customer who asked me whether developers can find an official compiled version
instead of compiling on their own machines. The solution is probably sharing t
Hi Shawn,
I'm looking at the document counts on the core Overview page, and the situation
is:
Num Docs: 35031923
Max Doc: 35156879
The difference doesn't explain the strange behavior.
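For reference, the gap between those two figures is the number of deleted-but-not-yet-merged documents — a quick sketch using the counts above:

```python
# maxDoc counts all documents still present in the index files, including
# deleted ones; numDocs counts only live documents. The difference is the
# number of deleted documents awaiting a merge.
num_docs = 35031923
max_doc = 35156879

deleted_docs = max_doc - num_docs
print(deleted_docs)  # 124956
```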
On Mon, Dec 28, 2015 at 1:35 AM, Shawn Heisey wrote:
> On 12/26/2015 11:21 AM, Luca Quarello wrote:
> > I ha
Hi guys,
While I was exploring the way we build the More Like This query, I
discovered a part I am not convinced of.
Let's see how we build the query:
org.apache.lucene.queries.mlt.MoreLikeThis#retrieveTerms(int)
1) We extract the terms from the interesting fields, adding them to a map:
Map
Hello ,
I am aware of the fact that Solr (I am using 5.2) does not support join on
distributed search with documents to be joined residing on different
shards/collections.
My use case is that I want to fetch the UUIDs of documents that result from a
search, and also those docs which are outside this se
Hi Upaya,
talking about wrapping the MLT query parser with additional query parsers:
let's assume I want to run my MLT query plus 2 boost functions on the results
to affect the ranking.
Can you give me an example of how to wrap them together?
These are the two independent pieces:
{!boost b=recip
I simply concatenated them:
q={!boost b=recip(dist(2,0,star_rating,0,3),1,10,10)}{!mlt
qf=name,description,facilities,resort,region,dest_level_2 mintf=1 mindf=3
maxqt=100}43083
From the debug query, the syntax is fine.
Am I correct?
Cheers
On 29 December 2015 at 11:48, Alessandro Benedetti
Feel free to create a JIRA and put up a patch if you can.
On Tue, Dec 29, 2015 at 4:26 PM, Alessandro Benedetti wrote:
> Hi guys,
> While I was exploring the way we build the More Like This query, I
> discovered a part I am not convinced of :
>
>
>
> Let's see how we build the query :
> org.apac
That might work, but this might be clearer:
q={!boost b=recip(dist(2, 0, star_rating, 0, 3),1,10,10) v=$mlt}&
mlt={!mlt qf=name,description,facilities,resort,region,dest_level_2
mintf=1 mindf=3 maxqt=100}43083
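For reference, a sketch (Python standard library; the host and collection name are assumptions) of how the two request parameters above could be assembled and URL-encoded:

```python
from urllib.parse import urlencode

# Parameter dereferencing: the boost wrapper references the MLT sub-query
# through v=$mlt, so each query parser lives in its own request parameter.
params = {
    "q": "{!boost b=recip(dist(2,0,star_rating,0,3),1,10,10) v=$mlt}",
    "mlt": "{!mlt qf=name,description,facilities,resort,region,dest_level_2"
           " mintf=1 mindf=3 maxqt=100}43083",
}

# "hotels" is a made-up collection name for illustration.
url = "http://localhost:8983/solr/hotels/select?" + urlencode(params)
print(url)
```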
Upayavira
On Tue, Dec 29, 2015, at 12:00 PM, Alessandro Benedetti wrote:
> I simply co
Hi,
I defined a multi-term analyzer in my analysis chain, and it works as I expect.
However, for some queries (for example '*' or 'term *') I get an exception,
"analyzer returned no terms for multiTerm term". These queries work when I
don't customize a multi-term analyzer.
My question: is there a
Sure, I will proceed tomorrow with the Jira and the simple patch + tests.
In the meantime let's try to collect some additional feedback.
Cheers
On 29 December 2015 at 12:43, Anshum Gupta wrote:
> Feel free to create a JIRA and put up a patch if you can.
>
> On Tue, Dec 29, 2015 at 4:26 PM, Ale
Hi Eyal,
What is your analyzer definition for multi-term?
In your example, is the star character separated from the term by a space?
Ahmet
On Tuesday, December 29, 2015 3:26 PM, Eyal Naamati
wrote:
Hi,
I defined a multi-term analyzer to my analysis chain, and it works as I expect.
However, f
Alok,
You can use the Streaming API to achieve this goal, but joins have not been
added to a 5.x release (at least I don't see them in the changelog). They do
exist on trunk and will be part of Solr 6.
Documentation is still under development but if you wanted to play around
with it now you could
Or to put it another way, how does one get security.json to work with SOLR-5960?
Has anyone any suggestions?
-Original Message-
From: Oakley, Craig (NIH/NLM/NCBI) [C]
Sent: Thursday, December 24, 2015 2:12 PM
To: 'solr-user@lucene.apache.org'
Subject: post.jar with security.json
In the
Hi Craig,
To pass the username and password, you'll want to enable authorization and
authentication in security.json as is mentioned in this blog post in step 1 of
"Enabling Basic Authentication".
https://lucidworks.com/blog/2015/08/17/securing-solr-basic-auth--rules/
Is this what you're look
Thanks guys for your responses.
@Shalin: Do you have documentation that explains this? Moreover, is it
only for Solr 5+ or is it still applicable to Solr 3+? I am asking this as
I am working in a team and in some of our projects we are using old Solr
versions and I need to convince the guys that
I do have authorization and authentication setup in security.json: the question
is how to pass the login and password into post.jar and/or into
solr-5.4.0/bin/post -- it does not seem to like the
user:pswd@host:8983/solr/corename/update syntax from SOLR-5960: when I try
that, it complains "Simp
What Shalin says is solid and will work with Solr 5.x as well as 3.x.
You could do a little proof of concept if you want to be absolutely certain; it
shouldn't take you very long.
Your only concern will be that your old docs won't match queries against
the newly added fields.
On Tue, 29 Dec 20
Hi,
the only way I found to solve my problem is to do the split using a
Solr instance configured in standalone mode:
curl
"http://localhost:8983/solr/admin/cores?action=SPLIT&core=sepa&path=/nas_perf_2/FRAGMENTS/17MINDEXES/1/index&path=/nas_perf/FRAGMENTS/17MINDEXES/2/index"
In solr_cloud mode
You will probably find that the SimplePostTool (aka post.jar) has not
been updated to take into account security.json functionality.
Thus, the way to do this would be to look at the source code (it will
just use SolrJ to connect to Solr) and make enhancements to get it to
work (or if you're not fa
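As a stopgap until the post tool understands security.json, one can send the update request directly with a Basic auth header — a minimal sketch (Python standard library; the credentials, core name, and document are made-up examples):

```python
import base64
import urllib.request

# Build the HTTP Basic Authorization header by hand (example credentials).
user, password = "solr", "SolrRocks"
token = base64.b64encode(f"{user}:{password}".encode()).decode()

req = urllib.request.Request(
    "http://localhost:8983/solr/corename/update?commit=true",
    data=b'[{"id": "1"}]',
    headers={
        "Content-Type": "application/json",
        "Authorization": "Basic " + token,
    },
)
# urllib.request.urlopen(req)  # uncomment to actually send the update
print(token)
```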
Erick,
I am not sure that the statement "the only available terms are 'not' and
'necessarily'" is totally correct. I go into the schema browser and I can
see that there are two terms, "not" and "not necessarily", with the correct
counts. Unless these are not the terms you are talking about. Can you
explain
I am purging some of my data on a regular basis, but when I run a facet query,
the deleted values are still shown in the facet list.
It seems a commit with expungeDeletes resolves this issue
(http://grokbase.com/t/lucene/solr-user/106313v302/deleted-documents-appearing-in-facet-fields
). But it seems, commi
Hi,
I have a 260M-document index (90GB) with this structure:
where the fragment field contains XML messages.
There is a search function that provides the messages satisfying a search
criterion.
TARGET:
To find the best c
Thoughts?
I can duplicate it at will...
On Mon, Dec 28, 2015 at 9:02 PM, William Bell wrote:
> I am having issues with {!join}. If the core has a multiValued field and
> the inner join does not have a multiValued field, it does not find the
> ones...
>
> Solr 5.3.1... 5.3.1
>
> Example.
>
> PS1
Let's be sure we're using terms similarly.
That article is from 2010, so it is unreliable in the 5.2 world; I'd ignore it.
First, facets should always reflect the latest commit, regardless of
expungeDeletes or optimizes/forcemerges.
_commits_ are definitely recommended. Optimize/forcemerge (or
Sorry, I overlooked the ShingleFilterFactory.
You're getting that from, presumably, your
ShingleFilterFactory. Note that minShingleSize=2
does not mean that only 2-shingles are output; there's
yet another parameter, "outputUnigrams", that controls
that in combination with outputUnigramsIfNoShingl
I believe the problem here is that terms from the deleted docs still appear
in the facets, even with a doc count of 0, is that it? Can you use
facet.mincount=1 or would that not be a good fit for your use case?
https://cwiki.apache.org/confluence/display/solr/Faceting#Faceting-Thefacet.mincountPar
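For illustration, the suggested parameter sits alongside the usual facet parameters — a sketch (Python standard library; the field name "category" is an assumption):

```python
from urllib.parse import urlencode

# facet.mincount=1 drops facet buckets whose live-document count is 0,
# hiding terms that only remain from deleted documents.
params = urlencode({
    "q": "*:*",
    "facet": "true",
    "facet.field": "category",
    "facet.mincount": 1,
})
print(params)
```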
Hi,
I am facing a situation where, when I do an optimization by clicking the
"Optimized" button on the Solr Admin Overview UI, the memory usage of the
server increases gradually until it reaches near the maximum memory
available. There is 64GB of memory available on the server.
Even after the optim
Do not "optimize".
It is a forced merge, not an optimization. It was a mistake to ever name it
"optimize". Solr automatically merges as needed. There are a few situations
where a force merge might make a small difference, maybe 10% or 20%; no one has
bothered to measure it.
If your index is co
Hi Walter,
Thanks for your reply.
Then what about optimization after indexing?
Normally the index is much larger right after indexing, and after
optimization the index size shrinks. Do we still need to do that?
Regards,
Edwin
On 30 December 2015 at 10:45, Walter Underwood
wrote:
> Do not “opt
The only time that a force merge might be useful is when you reindex all
content every night or every week, then do not make any changes until the next
reindex. But even then, it probably does not matter.
Just let Solr do its thing. Solr is pretty smart.
A long time ago (1996-2006), I worked on
Thanks for the information.
Another thing I would like to confirm is: will the Java heap size setting
affect the optimization process or the memory usage?
Is there any recommended setting that we can use for an index size of 200GB?
Regards,
Edwin
On 30 December 2015 at 11:07, Walter Underwood
wrote:
I have a scenario where I need to merge Solr indexes online. I have a
primary Solr index of 100GB that is serving end users and can't
go offline for a moment. Every day, new Lucene indexes (2GB) are generated
separately.
I have tried coreadmin
https://cwiki.apache.org/confluence/display
Some people also want to control when major segment merges happen, and
optimizing at a known time helps prevent a major merge at an unknown
time (which can be equivalent to an optimize/forceMerge).
The benefits of optimizing (and having fewer segments to search
across) will vary depending on the r
You probably do not NEED to merge your indexes. Have you tried not merging the
indexes?
wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/ (my blog)
> On Dec 29, 2015, at 7:31 PM, jeba earnest wrote:
>
> I have a scenario that I need to merge the solr indexes online
Could you simply add the new documents to the current index?
That aside, merging does not need to create a new core or a new
folder. The form:
mergeindexes&core=core0&indexDir=/opt/solr/core1/data/index&indexDir=/opt/solr/core2/data/index
should merge the indexes from the two directories into th
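Spelled out as a full CoreAdmin URL, that form would look roughly like this (a sketch; the core names and directory paths are placeholders):

```python
from urllib.parse import urlencode

# Repeating the indexDir parameter merges several source directories
# into the target core in one MERGEINDEXES call.
params = urlencode([
    ("action", "MERGEINDEXES"),
    ("core", "core0"),
    ("indexDir", "/opt/solr/core1/data/index"),
    ("indexDir", "/opt/solr/core2/data/index"),
])
url = "http://localhost:8983/solr/admin/cores?" + params
print(url)
```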
Would collection aliases be an option (assuming you are using SolrCloud
mode)?
https://cwiki.apache.org/confluence/display/solr/Collections+API#CollectionsAPI-api4
On Tue, Dec 29, 2015 at 9:21 PM, Erick Erickson
wrote:
> Could you simply add the new documents to the current index?
>
> That asi
Question: does anyone have examples of good merge settings for solrconfig, to
keep the number of segments small, like 6?
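Not a recommendation, but as a starting point: in Solr 5.x era solrconfig.xml the merge policy is tuned under <indexConfig>. A sketch with illustrative values only (lower numbers mean fewer segments at the cost of more merge I/O):

```xml
<indexConfig>
  <!-- Illustrative values, not a tested recommendation: segmentsPerTier
       roughly controls how many similar-sized segments accumulate before
       a merge is triggered. -->
  <mergePolicy class="org.apache.lucene.index.TieredMergePolicy">
    <int name="maxMergeAtOnce">6</int>
    <int name="segmentsPerTier">6</int>
  </mergePolicy>
</indexConfig>
```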
On Tue, Dec 29, 2015 at 8:49 PM, Yonik Seeley wrote:
> Some people also want to control when major segment merges happen, and
> optimizing at a known time helps prevent a major me
Hi Ahmet,
Yes there is a space in my example.
This is my multiterm analyzer:
Thanks!
Eyal Naamati
Alma Developer
Tel: +972-2-6499313
Mobile: +972-547915255
eyal.naam...@exlibrisgroup.com
www.exlibrisgroup.com
-Original Message-
From: Ahmet Arslan [mailto:io
Hi,
I would like to find out: will having a replica slow down searches in
Solr?
Currently, I'm having 1 shard and a replicationFactor of 2 using Solr
5.3.0. I'm running SolrCloud, with 3 external ZooKeeper using ZooKeeper
3.4.6, and my index size is 183GB.
I have been getting QTime of more th