Hello all,
I'm unloading a core with async param then sending query with request id
http://localhost:8983/solr/admin/cores?action=UNLOAD&core=expressions&async=1001
http://localhost:8983/solr/admin/cores?action=REQUESTSTATUS&requestid=1001
and would like to find a piece of doc with all possible values
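For scripting, the two admin calls above can be built and polled like this; a minimal sketch in Python (host, core name, and request id are taken from the example above, and no response parsing is shown since the exact JSON shape of the REQUESTSTATUS reply depends on the Solr version):

```python
from urllib.parse import urlencode

ADMIN = "http://localhost:8983/solr/admin/cores"

def unload_url(core, async_id):
    # Fire the UNLOAD asynchronously; Solr tracks it under async_id.
    return ADMIN + "?" + urlencode({"action": "UNLOAD", "core": core, "async": async_id})

def status_url(async_id):
    # Poll this until the tracked request reports completion.
    return ADMIN + "?" + urlencode({"action": "REQUESTSTATUS", "requestid": async_id})

print(unload_url("expressions", "1001"))
print(status_url("1001"))
```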
Hello all,
We are using solr 7.3.1, with master and slave config.
When we deliver a new index, we unload the core with the option
deleteDataDir=true, then recreate the data folder and copy the new index
files into that folder before sending Solr a command to recreate the core
(with the same name)
characters, but if you want full UNICODE normalization, take a look at
> the ICUFoldingFilter:
>
> https://lucene.apache.org/solr/guide/6_6/filter-descriptions.html#FilterDescriptions-ICUFoldingFilter
>
> --Ere
>
> elisabeth benoit wrote on 8.2.2019 at 22.47:
> > yes you
e. You might be
> better off just using wildcards (restrict to three letters at the prefix
> though).
>
> This is perfectly valid, I'm mostly asking if it's your intent.
>
> Best,
> Erick
>
> On Fri, Feb 8, 2019 at 9:35 AM SAUNIER Maxence wrote:
>
Hello,
We use Solr 7 and use a mapping char filter
with mapping-ISOLatin1Accent.txt
containing lines like:
# À => A
"\u00C0" => "A"
# Á => A
"\u00C1" => "A"
# Â => A
"\u00C2" => "A"
# Ã => A
"\u00C3" => "A"
# Ä => A
"\u00C4" => "A"
# Å => A
"\u00C5" => "A"
# Ā Ă Ą => A
"\u0100" => "A"
"\u0102" => "A"
"\u0104" => "A"
Hello,
We are trying to use NGramFilterFactory for approximate search with solr
7.
We usually use a similarity with no tf, no idf (our similarity extends
ClassicSimilarity, with tf and idf functions always returning 1).
For ngram search though, it seems inappropriate since it scores a word
mat
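To see why tf/idf can feel inappropriate here: an ngram field turns one word into many small terms, and near-matches share most of them. A toy sketch of gram overlap, for illustration only (this is not Solr's scoring):

```python
def ngrams(word, n=3):
    padded = f" {word} "  # pad so word boundaries produce grams too
    return {padded[i:i + n] for i in range(len(padded) - n + 1)}

def overlap(a, b, n=3):
    # Fraction of shared n-grams: a crude, tf/idf-free proxy for the
    # kind of approximate matching an ngram field enables.
    ga, gb = ngrams(a, n), ngrams(b, n)
    return len(ga & gb) / max(len(ga | gb), 1)

print(overlap("amsterdam", "amsterdan"))  # high: one letter differs
print(overlap("amsterdam", "paris"))      # no shared grams
```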
Hello,
We are using solr with a home-made jar with a custom function.
function(0.1,1.0,43.8341851366,5.7818349,43.8342868634,5.7821059,latlng_pi)
where latlng_pi is a document field of type location
In solr 5.5.2, location was defined like this
and parsed in the jar like this (with
> Cassandra
>
> On Wed, Jul 26, 2017 at 2:30 AM, elisabeth benoit
> wrote:
> > Are in place updates available in solr 5.5.2, I find atomic updates in
> the
> > doc
> > https://archive.apache.org/dist/lucene/solr/ref-guide/
> apache-solr-ref-guide-5.5.pdf,
>
Are in-place updates available in solr 5.5.2? I find atomic updates in the
doc
https://archive.apache.org/dist/lucene/solr/ref-guide/apache-solr-ref-guide-5.5.pdf,
which redirects me to the page
https://cwiki.apache.org/confluence/display/solr/Updating+Parts+of+Documents#UpdatingPartsofDocuments-At
Hello,
I am using solr 5.5.2.
I am trying to give a lower score to frequent words in query.
The only way I've found so far is something like
q=avenue^0.1 de champaubert village suisse 75015 paris
where avenue is a frequent word.
The problem is I'm using edismax, and when I add ^0.1 to avenue, it
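One way to keep edismax happy is to rewrite the query string client-side, boosting only terms found on a known frequent-word list. A sketch (the word list and the 0.1 boost are illustrative assumptions, not from any Solr API):

```python
# Hypothetical list of frequent words to down-boost before sending
# the query to edismax; contents are illustrative.
FREQUENT = {"avenue", "rue", "boulevard"}

def deboost(query, boost=0.1):
    terms = []
    for term in query.split():
        if term.lower() in FREQUENT:
            terms.append(f"{term}^{boost}")  # attach a low boost
        else:
            terms.append(term)
    return " ".join(terms)

print(deboost("avenue de champaubert village suisse 75015 paris"))
```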
Hello,
I would like to score higher, or even better sort first, documents with the
same text score, based on norms
for instance, with query "a b d"
document with
a b d
should score higher than (or appear before) document with
a b c d
The problem is my field is multivalued, so omitNorms=false is not
end is set to 1, 2, 3 or 4 depending on edge ngram length
2016-09-22 14:57 GMT+02:00 elisabeth benoit :
>
> Hello
>
> After migrating from solr 4.10.1 to solr 5.5.2, we don't have the same
> behaviour with highlighting on edge ngrams fields.
>
> We're us
Hello
After migrating from solr 4.10.1 to solr 5.5.2, we don't have the same
behaviour with highlighting on edge ngrams fields.
We're using it for an autocomplete component. With Solr 4.10.1, if the request
is sol, highlighting on solr gives <em>sol</em>r;
with solr 5.5.2, we have <em>solr</em>.
Same problem as
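The 4.10.1 behaviour described above (wrapping only the matched prefix, not the whole term) can be sketched as a plain string operation; this illustrates the expected output, not the actual highlighter code:

```python
def highlight_prefix(term, prefix, pre="<em>", post="</em>"):
    # Wrap only the matched prefix of the indexed term, leaving the
    # remainder of the term outside the tags.
    if term.startswith(prefix):
        return pre + prefix + post + term[len(prefix):]
    return term

print(highlight_prefix("solr", "sol"))  # <em>sol</em>r
```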
Well, we rekicked the machine with puppet, restarted solr and now it seems
ok. Don't know what happened.
2016-09-08 11:38 GMT+02:00 elisabeth benoit :
>
> Hello,
>
>
> We are perf testing solr 5.5.2 (with a limit test, i.e. sending as much
> queries/sec as possible) and we see
Hello,
We are perf testing solr 5.5.2 (with a limit test, i.e. sending as many
queries/sec as possible) and we see the cpu never goes over 20%, and
threads are blocked in org.eclipse.jetty.util.BlockingArrayQueue, as we can
see in solr admin interface thread dumps
qtp706277948-757 (757)
java.ut
Hello,
We are migrating from solr 4.10.1 to solr 5.5.2. We don't use solr cloud.
We installed the service with installation script and kept the default
configuration, except for a few settings about logs and the gc config (the
same used with solr 4.10.1).
We tested today the performance of solr
Thanks! This is very helpful!
Best regards,
Elisabeth
2016-08-25 17:07 GMT+02:00 Shawn Heisey :
> On 8/24/2016 6:01 AM, elisabeth benoit wrote:
> > I was wondering what is the right way to prevent solr 5 from creating a
> new
> > log file at every startup (and renaming
Thanks a lot for your answer.
Best regards,
elisabeth
2016-08-24 16:16 GMT+02:00 Shawn Heisey :
> On 8/24/2016 5:44 AM, elisabeth benoit wrote:
> > I'd like to know what is the best way to have the equivalent of tomcat
> > localhost_access_log for solr 5?
>
> I don
Hello again,
We're planning on using solr 5.5.2 on production, using installation
script install_solr_service.sh.
I was wondering what is the right way to prevent solr 5 from creating a new
log file at every startup (and renaming the actual file mv
"$SOLR_LOGS_DIR/solr_gc.log" "$SOLR_LOGS_DIR/sol
Hello,
I'd like to know what is the best way to have the equivalent of
tomcat localhost_access_log for solr 5?
Best regards,
Elisabeth
Oh sorry, wrote too fast. I had to change the defaultOperator to OR.
Elisabeth
2016-07-27 10:11 GMT+02:00 elisabeth benoit :
>
> Hello,
>
> We are migrating from solr 4.10.1 to solr 5.5.2, and it seems that the mm
> parameter is not working the same anymore.
>
> In fact, as so
Hello,
We are migrating from solr 4.10.1 to solr 5.5.2, and it seems that the mm
parameter is not working the same anymore.
In fact, as soon as there is a word not in the index in the query, no
matter what mm value I send, I get no results, as if my query were a pure AND
query.
Does anyone have a cl
" drop-down) that the heavy-duty
> work of opening the core actually executes.
>
> So I think it's working as expected. But do note
> that this whole area (transient cores, loading on
> startup true/false) is intended for stand-alone
> Solr and is unsupported in SolrCloud.
>
Hello,
I have a core.properties with content
name=indexer
loadOnStartup=false
but the core is loaded on start up (it appears on the admin interface).
I thought the core would not be loaded on startup. Did I miss something?
best regards,
elisabeth
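For reference, a core.properties sketch. Note that in stand-alone Solr a core with loadOnStartup=false still appears in the admin core list; it just isn't opened until first accessed. The transient=true line below is an assumption about the intended setup, not something from the original message:

```properties
name=indexer
loadOnStartup=false
# Assumed addition: transient cores are the usual companions of
# loadOnStartup=false in stand-alone Solr.
transient=true
```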
In addition to what was proposed
We use the technique described here
https://github.com/cominvent/exactmatch
and it works quite well.
Best regards
Elisabeth
2016-06-15 16:32 GMT+02:00 Alessandro Benedetti :
> In addition to what Erick correctly proposed,
> are you storing norms for your field o
stal code P.
>
> Is what you're wanting "return me all pairs of documents within that
> postal code that have all the terms matching and the polygons enclosing
> those streets plus some distance intersect"?
>
> Seems difficult.
>
> Best,
> Erick
>
> O
Hello all,
I was wondering if there was a solr solution for a problem I have (and I'm
not the only one I guess)
We use solr as a search engine for addresses. We sometimes have requests
with let's say for instance
street A close to street B City postcode
I was wondering if some kind of join betw
Hello all,
I am using Solr 4.10.1. I use edismax, with pf2 to boost documents starting
with the query. I use a start-with token (b) automatically added at index
time, and added to the request at query time.
I have a problem at this point.
request is *q=b saint denis rer*
the start with field is name_
esults, you can move ngram logic
> outside of analysis chain - simplest solution is to move it to client. In
> such setup, you should be able to use pf2 and pf3 and see if that produces
> desired result.
>
> Regards,
> Emir
>
>
> On 10.03.2016 13:47, elisabeth benoit wro
he query parser tokenisation to build the grams and then
> the query time analysis is applied.
> This according to my remembering,
> I will double check in the code and let you know.
>
> Cheers
>
>
> On 10 March 2016 at 11:02, elisabeth benoit
> wrote:
>
> > T
ial
> phrase query to affect the scoring.
> Not a good fit for your problem.
>
> More than grams, have you considered using some sort of phonetic matching ?
> Could this help :
> https://cwiki.apache.org/confluence/display/solr/Phonetic+Matching
>
> Cheers
>
>
nd of search/autocompletion/relevancy tuning are you trying to
> achieve ?
> Maybe we can help better if we start from the problem :)
>
> Cheers
>
> [1]
>
> http://alexbenedetti.blogspot.co.uk/2015/07/exploring-solr-internals-lucene.html
>
> On 9 March 2016 at 15:02, e
> Cheers
>
> On 8 March 2016 at 10:08, elisabeth benoit
> wrote:
>
> > Thanks for your answer Emir,
> >
> > I'll check that out.
> >
> > Best regards,
> > Elisabeth
> >
> > 2016-03-08 10:24 GMT+01:00 Emir Arnautovic >:
>
ecific length.
> It should not be too hard to create such filter - you can take a look how
> ngram filter is coded - yours should be simpler than that.
>
> Regards,
> Emir
>
>
> On 08.03.2016 08:52, elisabeth benoit wrote:
>
>> Hello,
>>
>> I'm
Hello,
I'm using solr 4.10.1. I'd like to index words with ngrams of fixed length
with a position appended at the end.
For instance, with fixed length 3, Amsterdam would be something like:
a0 (two spaces added at beginning)
am1
ams2
mst3
ste4
ter5
erd6
rda7
dam8
am9 (one more space in the end)
The number a
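The scheme above can be sketched in a few lines; a plain-Python illustration that reproduces the Amsterdam example (a real implementation would be a custom Solr TokenFilter, which is what the question is about):

```python
def positional_ngrams(word, n=3):
    # Pad with n-1 spaces in front and one space at the end, then emit
    # each n-gram with its position appended.
    padded = " " * (n - 1) + word + " "
    return [padded[i:i + n] + str(i) for i in range(len(padded) - n + 1)]

for gram in positional_ngrams("amsterdam"):
    print(repr(gram))  # '  a0', ' am1', 'ams2', ..., 'am 9'
```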
Hello,
There was a discussion on this thread about exact match
http://www.mail-archive.com/solr-user%40lucene.apache.org/msg118115.html
they mention an example on this page
https://github.com/cominvent/exactmatch
Best regards,
Elisabeth
2016-02-19 18:01 GMT+01:00 Loïc Stéphan :
> Hello,
>
> You can find this formula and a decent explanation for it in the book solr
> in action or online in the lucene docs:
>
> https://lucene.apache.org/core/3_5_0/api/core/org/apache/lucene/search/Similarity.html
>
> On Tue, 22 Dec 2015, 11:10 elisabeth benoit
> wrote:
>
> >
res or
> just one record with a lower score and the other with a higher one?
>
> On Mon, 21 Dec 2015, 18:45 elisabeth benoit
> wrote:
>
> > Hello,
> >
> > I don't think the query is important in this case.
> >
> > After checking out solr's deb
21 Dec 2015, 14:37 elisabeth benoit
> wrote:
>
> > Hello all,
> >
> > I am using solr 4.10.1 and I have configured my pf2 pf3 like this
> >
> > catchall~0^0.2 name~0^0.21 synonyms^0.2
> > catchall~0^0.2 name~0^0.21 synonyms^0.2
> >
> > my search
Hello all,
I am using solr 4.10.1 and I have configured my pf2 pf3 like this
catchall~0^0.2 name~0^0.21 synonyms^0.2
catchall~0^0.2 name~0^0.21 synonyms^0.2
my search field (qf) is my catchall field
I've been trying to change slop in pf2, pf3 for catchall and synonyms (going
from 0, or default v
ok, thanks a lot for your advice.
i'll try that.
2015-12-17 10:05 GMT+01:00 Binoy Dalal :
> For this case of inversion in particular a slop of 1 won't cause any issues
> since such a reverse match will require the slop to be 2
>
> On Thu, 17 Dec 2015, 14:20 eli
Inversion (paris charonne or charonne paris) cannot be scored the same.
2015-12-16 11:08 GMT+01:00 Binoy Dalal :
> What is your exact use case?
>
> On Wed, 16 Dec 2015, 13:40 elisabeth benoit
> wrote:
>
> > Thanks for your answer.
> >
> > Actually, using a
> > try setting your slop = 1 in which case it should match Gare Saint Lazare
> > even with the de in it.
> >
> > On Mon, Dec 14, 2015 at 7:22 PM elisabeth benoit <
> > elisaelisael...@gmail.com> wrote:
> >
> >> Hello,
> >>
> >> I
Hello,
I am using solr 4.10.1. I have a field with stopwords
And I use pf2 pf3 on that field with a slop of 0.
If the request is "Gare Saint Lazare", and I have a document "Gare de Saint
Lazare", "de" being a stopword, this document doesn't get the pf3 boost,
because of "de".
I was wondering
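To make the arithmetic concrete: a stop filter removes the stopword but leaves a position gap, so the pf3 phrase needs a slop of at least 1 to bridge it. A sketch (the stopword list is just the one word from the example):

```python
# After stopword removal the surviving tokens keep their original
# positions, leaving gaps that a phrase match must bridge.
STOPWORDS = {"de"}

def analyzed(text):
    # (token, position) pairs with stopwords dropped but positions kept,
    # the way a stop filter leaves position increments.
    return [(tok, pos) for pos, tok in enumerate(text.lower().split())
            if tok not in STOPWORDS]

doc = analyzed("Gare de Saint Lazare")
query = [tok for tok, _ in analyzed("Gare Saint Lazare")]
positions = [pos for tok, pos in doc if tok in query]
# Minimum slop needed: total position spread minus the zero-gap span.
needed_slop = (positions[-1] - positions[0]) - (len(positions) - 1)
print(needed_slop)  # 1
```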
y, then you could move on to the next
> highest most likely field, maybe product title (short one line
> description), and query voluminous fields like detailed product
> descriptions, specifications, and user comments/reviews only as a last
> resort.
>
> -- Jack Krupansky
>
>
in descriptions, all of which would occur in a catchall.
>
> -- Jack Krupansky
>
> On Mon, Oct 12, 2015 at 8:39 AM, elisabeth benoit <
> elisaelisael...@gmail.com
> > wrote:
>
> > Hello,
> >
> > We're using solr 4.10 and storing all data in a catc
Hello,
We're using solr 4.10 and storing all data in a catchall field. It seems to
me that one good reason for using a catchall field is when using scoring
with idf (with idf, a word might not have same score in all fields). We got
rid of idf and are now considering using multiple fields. I rememb
Shouldn't you specify a spellcheck.dictionary in your request handler?
Best regards,
Elisabeth
2015-04-17 11:24 GMT+02:00 Derek Poh :
> Hi
>
> I have enabled spellcheck but not getting any suggestions with incorrectly
> spelled keywords.
> I added the spellcheck into the /select request handler.
>
ds in
spellcheck.collateParam.fq.
Best regards,
Elisabeth
2015-04-14 17:05 GMT+02:00 elisabeth benoit :
> Thanks for your answer!
>
> I didn't realize this was not supposed to be done (conjunction of
> DirectSolrSpellChecker and FileBasedSpellChecker). I got this idea in the
> mailing l
This is something I've
> personally wanted for a long time.
>
> James Dyer
> Ingram Content Group
>
>
> -Original Message-
> From: elisabeth benoit [mailto:elisaelisael...@gmail.com]
> Sent: Tuesday, April 14, 2015 7:37 AM
> To: solr-user@lucene.apache.
Hello,
I am using Solr 4.10.1 and trying to use DirectSolrSpellChecker and
FileBasedSpellchecker in same request.
I've applied the change from patch 135.patch (cf. SOLR-6271). I've tried running
the command "patch -p1 -i 135.patch --dry-run" but it didn't work, maybe
because the patch was a fix to Sol
> -Original Message- From: elisabeth benoit
> Sent: Thursday, October 30, 2014 6:07 AM
> To: solr-user@lucene.apache.org
> Subject: prefix length in fuzzy search solr 4.10.1
>
>
> Hello all,
>
> Is there a parameter in solr 4.10.1 api allowing user to fix pref
Hello all,
Is there a parameter in the solr 4.10.1 api allowing the user to set the
prefix length in fuzzy search?
Best regards,
Elisabeth
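For context, the knob being asked about corresponds to Lucene's FuzzyQuery prefix length: candidates must share an exact prefix before any edit-distance work is done, which drastically cuts the candidate set. A self-contained sketch of the idea (not Solr API code; whether a 4.10.1 query parser exposes the setting is exactly the question):

```python
def within_edits(a, b, max_edits):
    # Plain Levenshtein distance check (dynamic programming).
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                           prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1] <= max_edits

def fuzzy_match(term, candidate, prefix_len=2, max_edits=1):
    # Require an exact prefix before paying for edit distance - the
    # idea behind a fuzzy-query prefix length.
    if term[:prefix_len] != candidate[:prefix_len]:
        return False
    return within_edits(term[prefix_len:], candidate[prefix_len:], max_edits)

print(fuzzy_match("paris", "parie"))  # True: same prefix, one edit
print(fuzzy_match("paris", "maris"))  # False: prefix differs
```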
Hello all,
We are using solr 4.2.1 (but planning to switch to solr 4.10 very soon).
We are trying to do approximate search using the ~ operator.
We use a catchall_light field without stemming (so as not to mix fuzzy and
stemming).
We send a request to solr using fuzzy operator on non "frequent" words
thanks a lot for your answers!
2014-10-14 6:10 GMT+02:00 Jack Krupansky :
> To correct myself, the selected Similarity class can have a computeNorm
> method that calculates the "norm" value that will be stored in the index
> when the document is indexed, so changing the Similarity class will requ
ok thanks.
I think something is not working here (I'm quite sure my similarity class
is not being used because when I use
SchemaSimilarityFactory and a custom fieldtype similarity definition with
NoTFSimilarity, I don't get the same scoring as when I use NoTFSimilarity
as global similarity; but
Thanks for the information!
I've been struggling with that debug output. Any other way to know for sure
my similarity class is being used?
Thanks again,
Elisabeth
2014-10-09 13:03 GMT+02:00 Markus Jelsma :
> Hi - it should work, not seeing your implementation in the debug output is
> a known iss
I've read somewhere that we do have to reindex when changing similarity
class. Is that right?
Thanks again,
Elisabeth
Hello,
I am using Solr 4.2.1 and I've tried to use a per-field similarity, as
described in
https://apache.googlesource.com/lucene-solr/+/c5bb5cd921e1ce65e18eceb55e738f40591214f0/solr/core/src/test-files/solr/collection1/conf/schema-sim.xml
so in my schema I have
and a custom similarity in f
on this mailing list should be sure
> that they are "registered" on that support wiki. Hey, it's free! And be
> sure to keep your listing up to date, including regional availability and
> any specialties.
>
> -- Jack Krupansky
>
> -Original Message- From: el
Hello,
We are looking for a solr consultant to help us with our devs using solr.
We've been working on this for a little while, and we feel we need an
expert point of view on what we're doing, who could give us insights about
our solr conf, performance issues, error handling issues (big thing). W
urn the fallback.
>
> Please let me know how this goes.
>
> ~ David Smiley
> Freelance Apache Lucene/Solr Search Consultant/Developer
> http://www.linkedin.com/in/davidwsmiley
>
>
> On Tue, Jul 22, 2014 at 3:12 AM, elisabeth benoit <
> elisaelisael...@gmail.com
> >
Hello,
I am using solr 4.2.1. I have the following use case.
I should find results inside the bbox OR, if there is none, the first result
outside the bbox within a 1000 km distance. I was wondering what is the best
way to proceed.
I was considering doing a geofilt search from the center of my bounding box
an
Thanks for your answer,
best regards,
Elisabeth
2014-06-12 14:07 GMT+02:00 Alexandre Rafalovitch :
> There is always UpdateRequestProcessor.
>
> Regards,
> Alex
> On 12/06/2014 7:05 pm, "elisabeth benoit"
> wrote:
>
> > Hello,
> >
> > Is i
Hello,
Is it possible, in solr 4.2.1, to split a multivalued field with a json
update, as it is possible to do with a csv update?
with csv
/update/csv?f.address.split=true&f.address.separator=%2C&commit=true
with json (using a post)
/update/json
Thanks,
Elisabeth
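For what it's worth, with JSON the split can simply be done client-side before posting, since a JSON document can carry a multivalued field as an array directly. A sketch (the field name comes from the CSV example above; the address value is made up for illustration):

```python
import json

# With CSV, f.address.split=true lets Solr split the field on a
# separator. With JSON there is no equivalent split parameter, but the
# client can send the field already split as a JSON array:
raw_address = "12 rue Taine,75012,Paris"
doc = {"id": "doc1", "address": raw_address.split(",")}
payload = json.dumps([doc])
print(payload)
```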
ok, thanks a lot, I'll check that out.
2014-05-14 14:20 GMT+02:00 Markus Jelsma :
> Elisabeth, i think you are looking for SOLR-3211 that introduced
> spellcheck.collateParam.* to override e.g. dismax settings.
>
> Markus
>
> -Original message-
> From:elisabeth benoit
> Sent:Wed 14-05-2
Hello,
I'm using solr 4.2.1.
I use a very permissive value for mm, to be able to find results even if
the request contains non-relevant words.
At the same time, I'd like to be able to do some efficient spellchecking
with solrdirectspellchecker.
So for instance, if user searches for "rue de Chraonne
ith Lucene committers:
> > https://twitter.com/DmitryKan/status/399820408444051456
> >
> > HTH,
> >
> > Dmitry
> >
> >
> > On Tue, Apr 1, 2014 at 11:34 AM, elisabeth benoit <
> > elisaelisael...@gmail.com
> > > wrote:
> >
> >
true.
> If your solr update lifecycle includes frequent deletes, try this out.
>
> This of course does not override working towards finding better
> GCparameters.
>
> https://cwiki.apache.org/confluence/display/solr/Near+Real+Time+Searching
>
>
> On Mon, Mar 31, 2014 at
something else we might not see?
Thanks again,
Elisabeth
2014-03-31 16:26 GMT+02:00 Shawn Heisey :
> On 3/31/2014 6:57 AM, elisabeth benoit wrote:
> > We are currently using solr 4.2.1. Our index is updated on a daily basis.
> > After noticing solr query time has increased (two ti
Hello,
We are currently using solr 4.2.1. Our index is updated on a daily basis.
After noticing solr query time has increased (to twice its initial value)
without any change in index size or in solr configuration, we tried an
optimize on the index but it didn't fix our problem. We checked the garb
Sent from my iPhone
On 26 Jan 2014, at 06:13, Shalin Shekhar Mangar wrote:
> There is no timestamp versioning as such in Solr but there is a new
> document based versioning which will allow you to specify your own
> (externally assigned) versions.
>
> See the "Document Centric Ve
s right along, but adding
> debug=query will show the parsed query.
>
> I really question, though, your apparent combination of
> autoGeneratePhraseQuery with what looks like an ngram field.
> I'm not at all sure how those would interact...
>
> Best,
> Erick
>
> On Fri
like
> OR "original query"
> and optionally boost it high. But I'd start with the autoGenerate bits.
>
> Best,
> Erick
>
>
> On Fri, Sep 27, 2013 at 7:37 AM, elisabeth benoit
> wrote:
> > Thanks for your answer.
> >
> > So I guess if
ltfield:term2
> This happens long before the terms get to the analysis chain
> for the field.
>
> So your only options are to either quote the string or
> escape the spaces.
>
> Best,
> Erick
>
> On Wed, Sep 25, 2013 at 9:24 AM, elisabeth benoit
> wrote:
> >
Hello,
I am using solr 4.2.1 and I have a autocomplete_edge type defined in
schema.xml
When I have a request with more than one word, for instance "rue de la", my
request doesn't match with my autocomplete_edge field unless I use
Hello,
I'd like to know if there is some specific way, in Solr 3.6.1, to have
something like a homogeneous dispersion of documents in a bbox.
My use case is I have a request returning, let's say, 1000 documents in a
bbox (they all have the same solr score), and I want only 50 documents, but
not
n\ de\ coiffure
>
> or you may use something like that :
> fq={!term f=ONLY_EXACT_MATCH_FIELD v=$qq}&qq=salon de coiffure
>
>
> Hope it helps,
> Franck Brisbart
>
>
> On Thursday 02 August 2012 at 09:56 +0200, elisabeth benoit wrote:
> > Hello,
> >
>
the indexer and the searcher is transformed and matched.
>
> Cheers,
> Chantal
>
> On 02.08.2012 at 09:56, elisabeth benoit wrote:
>
> > Hello,
> >
> > I am using Solr 3.4.
> >
> > I'm trying to define a type that it is possible
Hello,
I am using Solr 3.4.
I'm trying to define a type that can be matched only if the
request contains exactly the same words.
Let's say I have two different values for ONLY_EXACT_MATCH_FIELD
ONLY_EXACT_MATCH_FIELD: salon de coiffure
ONLY_EXACT_MATCH_FIELD: salon de coiffure pour fe
ok, thanks a lot for the answer.
Elisabeth
2012/5/31 Chris Hostetter
>
> : When I read fieldValueCache statistics I have something that looks like
> :
> : item_ABC_FACET :
> :
> {field=ABC_FACET,memSize=4224,tindexSize=32,time=92,phase1=92,nTerms=0,bigTerms=0,termInstances=0,uses=11}
> :
> :
>
>
> As you can see my search for "naturwald" extends to single and multiword
> synonyms e.g. "forêt naturelle"
>
>
> My SynonymFilterFactory has the following settings:
>
> org.apache.solr.analysis.SynonymFilterFactory
> {tokenizerFactory=solr.KeywordTokenizerFactory,
tokens while
> the window matches in your trie.
> Once you have a complete match in your trie, the filter can set an
> attribute of the type your choice (e.g. MyCustomKeywordAttribute) on the
> first matching token, and make the attribute be the complete match (e.g.
> "Hotel de ville&quo
ery time expansion (no need to reindex if the thesaurus changes).
> The thesaurus holds synonyms and "used for terms" in 24 languages. So
> it is also some kind of language translation. And naturally the thesaurus
> translates from single term to multi term synonyms and vic
NALYZED field, is it possible that the data from your
> test cases simply isn't _in_ CATEGORY_ANALYZED?
>
> Best
> Erick
>
> On Wed, Apr 25, 2012 at 9:39 AM, elisabeth benoit
> wrote:
> > I'm not at the office until next Wednesday, and I don't have my Solr
that section like:
> "parsed_filter_queries"
>
> My other question is "are you absolutely sure that your
> CATEGORY_ANALYZED field has the correct content?". How does it
> get populated?
>
> Nothing jumps out at me here
>
> Best
> Erick
>
q="hotel de ville"&fq=price:[100 TO *]&fq=roomType:"King size Bed" ===>
> returns 40 documents from super set of 100 documents
>
>
> hope this helps!
>
> - Jeevanandam
>
>
>
> On 24-04-2012 3:08 pm, elisabeth benoit wrote:
>
>
the one with the line
.
Does anyone have a clue what is different between q analysis behaviour and
fq analysis behaviour?
Thanks a lot
Elisabeth
2012/4/12 elisabeth benoit
> oh, that's right.
>
> thanks a lot,
> Elisabeth
>
>
> 2012/4/11 Jeevanandam Madanagopal
>
>>
index time. So
> "mairie" and "hotel de ville" searchable on document.
>
> However, still white space tokenizer splits at query time will be a
> problem as described by Markus.
>
> --Jeevanandam
>
> On Apr 11, 2012, at 12:30 PM, elisabeth benoit wrote:
de ville => mairie
> might work for you.
>
> Best
> Erick
>
> On Tue, Apr 10, 2012 at 1:41 AM, elisabeth benoit
> wrote:
> > Hello,
> >
> > I've read several post on this issue, but can't find a real solution to
> my
> > multi-words sy
Hello,
I've read several posts on this issue, but can't find a real solution to my
multi-word synonym matching problem.
I have in my synonyms.txt an entry like
mairie, hotel de ville
and my index time analyzer is configured as follows for synonyms.
The problem I have is that now "mairie" m
Hi all,
I'm using solr 3.4 with a catchall field and an edismax request handler.
I'd like to score higher answers matching with words not contained in one
of the fields copied into my catchall field.
So my catchallfield is called catchall. It contains, let's say, fields
NAME, CATEGORY, TOWN, WAY
then use one or the other
> depending, but as you say that would increase the size of your
> index...
>
> Best
> Erick
>
> On Wed, Jan 11, 2012 at 9:47 AM, elisabeth benoit
> wrote:
> > Hello,
> >
> > I have a catchall field, and I need to do some request in all
Hello,
I have a catchall field, and I need to do some request in all fields of
that catchall field, minus one. To avoid duplicating my index, I'd like to
know if there is a way to use my catchall field while excluding that one field.
Thanks,
Elisabeth
Thanks for the answer.
yes in fact when I look at debugQuery output, I notice that name and number
are never treated as single entries.
I have
(((text:name text:number)) (text:ru) (text:tain) (text:paris)))
so name and number are in the same parentheses, but not exactly treated as a
phrase, as far
same problem with Solr 4.0
2011/12/8 elisabeth benoit
>
>
> Hello,
>
> I'm using Solr 3.4, and I'm having a problem with a request returning
> different results depending on whether or not there is a space after a comma.
>
> The request "name, number rue taine paris" retu
Hello,
I'm using Solr 3.4, and I'm having a problem with a request returning
different results depending on whether or not there is a space after a comma.
The request "name, number rue taine paris" returns results with 4 words out
of 5 matching ("name", "number", "rue", "paris")
The request "name,number rue taine pa
Thanks a lot for these answers!
Elisabeth
2011/12/4 Erick Erickson
> See below:
>
> On Thu, Dec 1, 2011 at 10:57 AM, elisabeth benoit
> wrote:
> > Hello,
> >
> > If anybody can help, I'd like to confirm a few things about Solr's caches
> > confi
Hello,
If anybody can help, I'd like to confirm a few things about Solr's caches
configuration.
If I want to calculate cache size in memory relative to cache size in
solrconfig.xml:
For the Document cache,
size in memory = size in solrconfig.xml * average size of all fields
defined in the fl parameter
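A quick worked example of that rule of thumb (all numbers are made up for illustration):

```python
# Back-of-the-envelope documentCache memory estimate, per the rule of
# thumb above: entries in the cache times the average stored size of
# the fl fields per document.
cache_size_entries = 512          # "size" in solrconfig.xml
avg_fl_fields_bytes = 2 * 1024    # assumed average per document

estimated_bytes = cache_size_entries * avg_fl_fields_bytes
print(estimated_bytes)  # 1048576 (~1 MiB)
```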