: thank you for joining the discussion :).
Heh ... no problem, I was a little behind on my mail for a while there ...
but I'm catching up.
: 2) If I understood the API-documentation right, the behaviour of the
: FieldQParser depends exactly on what I've defined in my analyzer.
right ... it's
Two more questions to check my understanding:
What does WDF mean and what does HTE stand for?
Thank you very much!
Kind regards
- Mitch
--
View this message in context:
http://lucene.472066.n3.nabble.com/Minimum-Should-Match-the-other-way-round-tp694867p742797.html
Sent from the Solr - User mailing list archive at Nabble.com.
: However, maybe I misunderstood your point:
: "- Pick MAX_LEN Based On Number Of Query Clauses From Super"
: since I thought that the number of query clauses depends on the number of
: whitespaces in my query. If I am wrong, and it depends on the result of my
: analyzer chain, there is no problem
Does this number already account for the number of clauses (or, what I really
mean, tokens) after the analyzer has worked on them?
It would be really nice to be certain of that.
Kind regards
- Mitch
"- Pick MAX_LEN Based On Number Of Query Clauses From Super"
since I thought that the number of query clauses depends on the number of
whitespaces in my query. If I am wrong, and it depends on the result of my
analyzer chain, there is no problem. But I am not sure if this is the case
or not.
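Whether that count is the whitespace count or the post-analysis token count makes a real difference. A minimal, self-contained sketch of why the two can diverge (not Solr code; the hyphen splitting below is only a crude stand-in for what a WordDelimiterFilter-style step does):

```java
// Illustrative only -- not Solr code. Shows why counting whitespace
// can under-count the clauses a query produces once the analyzer
// has split the terms further.
import java.util.ArrayList;
import java.util.List;

class ClauseCountSketch {
    // Naive count: one token per whitespace-separated chunk.
    static List<String> whitespaceTokens(String query) {
        List<String> out = new ArrayList<>();
        for (String t : query.split("\\s+")) out.add(t);
        return out;
    }

    // Crude stand-in for a word-delimiter filter: also split on hyphens.
    static List<String> wordDelimiterTokens(String query) {
        List<String> out = new ArrayList<>();
        for (String t : query.split("\\s+")) {
            for (String p : t.split("-")) {
                if (!p.isEmpty()) out.add(p);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        String query = "wi-fi power-cord";
        System.out.println(whitespaceTokens(query).size());    // 2
        System.out.println(wordDelimiterTokens(query).size()); // 4
    }
}
```

So a MAX_LEN derived from whitespace alone would be wrong for fields whose analyzer splits (or merges) terms; it has to be computed from the analyzed token stream.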
Thank you for
: However, I got some doubts on this: what about queries that should be
: filtered with the WordDelimiterFilter? This could make a large difference to
: a non-delimiter-filtered MAX_LEN *and* it has got a protwords param. I
: can't instantiate a new WordDelimiterFilter every time I run a query, so
way, I will try to implement it next
time.
- Mitch
On Apr 7, 2010, at 7:40 AM, MitchK wrote:
> I can't believe that Solr isn't caching data like the synonym.txt's
> etc.

Solr does cache these; look at the implementation of
SynonymFilterFactory, where it keeps the SynonymMap.

> Are there no ideas how to access them?

There is a public getSynonymMap
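For what it's worth, the caching pattern being described can be sketched in isolation. The class and the parsing below are hypothetical stand-ins, not the real SynonymFilterFactory; they only illustrate the idea of "parse the resource once at init time, expose the parsed map through a public getter":

```java
// Sketch of the caching pattern, with hypothetical names. The real
// SynonymFilterFactory parses its synonyms resource once when the
// core loads and keeps the resulting map; other code can then read
// the cached data through a public accessor instead of re-parsing.
import java.util.HashMap;
import java.util.Map;

class SynonymFactorySketch {
    private Map<String, String> synonymMap;

    // Called once at init; the parsed result is cached in the field.
    void init(String resourceContents) {
        synonymMap = new HashMap<>();
        for (String line : resourceContents.split("\n")) {
            String[] parts = line.split("=>");
            if (parts.length == 2) {
                synonymMap.put(parts[0].trim(), parts[1].trim());
            }
        }
    }

    // Public accessor, analogous to getSynonymMap(): no re-parsing.
    public Map<String, String> getSynonymMap() {
        return synonymMap;
    }

    public static void main(String[] args) {
        SynonymFactorySketch f = new SynonymFactorySketch();
        f.init("teh => the\ncolour => color");
        System.out.println(f.getSynonymMap().get("teh")); // prints "the"
    }
}
```

The design point is simply that the expensive work (reading and parsing the file) happens once at factory-init time, so repeated lookups are cheap map reads.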
I can't believe that Solr isn't caching data like the synonym.txt's etc.
Are there no ideas how to access them?
- Mitch
stopwords.txt or protwords.txt; I want to access those (as I
understand, cached) resources.
- Mitch
: > However, I am searching for a solution that does something like: "this is my
: > query", where the document has to consist of this query plus at most - for
: > example - two additional terms?
...
: Not quite following. It sounds like you are saying you want to favor
: docs that are shorter,
> ... I'd like to
> integrate this into Lucene/Solr itself.
> Any ideas which components I have to customize?
>
> At the moment I am speculating that I have to customize the class which is
> collecting the result, before it is passing it to the ResponseWriter.
>
> Kind regards
At the moment I am speculating that I have to customize the class which is
collecting the result, before it passes it to the ResponseWriter.
Kind regards
- Mitch
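The requirement from the top of the thread ("the document has to consist of this query plus at most, for example, two additional terms") could be prototyped outside Solr first. The following is only a sketch of that selection logic under one reading of the requirement, not a Solr component:

```java
// Hedged sketch: keep only documents that contain every query term
// AND have at most (queryLen + maxExtra) terms in total. This is the
// "minimum should match the other way round" idea as a bare predicate.
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

class MaxLenMatchSketch {
    static boolean matches(List<String> queryTerms, List<String> docTerms, int maxExtra) {
        Set<String> doc = new HashSet<>(docTerms);
        if (!doc.containsAll(queryTerms)) return false;          // every query term required
        return docTerms.size() <= queryTerms.size() + maxExtra;  // cap on extra terms
    }

    public static void main(String[] args) {
        List<String> q = Arrays.asList("this", "is", "my", "query");
        // 1 extra term -> accepted
        System.out.println(matches(q, Arrays.asList("this", "is", "my", "query", "today"), 2)); // true
        // 3 extra terms -> rejected
        System.out.println(matches(q, Arrays.asList("this", "is", "my", "query", "a", "b", "c"), 2)); // false
    }
}
```

In Solr itself this would presumably live wherever the collected results can still be filtered, which matches the speculation above about hooking in before the ResponseWriter; the term counts would have to come from the analyzed token streams, per the MAX_LEN discussion earlier in the thread.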