First: usually you do not use post.jar for updating your index. It is a simple
tool; normally you would use features like the CSV or
XML update request handlers.
Have a look at "UpdateCSV" and "UpdateXMLMessages" in the wiki.
There you can find examples on how to commit explicitly.
With the post.jar
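For illustration, an explicit commit through the XML update handler could be sent roughly like this (a sketch; host, port and the curl invocation are assumptions, not something from this thread):

  curl 'http://localhost:8983/solr/update' -H 'Content-Type: text/xml' --data-binary '<commit/>'

When adding documents via XML, you can also simply append a <commit/> message after your <add>...</add>.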
Hi Andy,
Andy-152 wrote:
>
>
> 1
> 1000
>
>
> has been commented out.
>
> - With <autoCommit> commented out, does it mean that every new document
> indexed to Solr is being auto-committed individually? Or that they are not
> being auto-committed at all?
>
I am not sure, whether there is a de
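For reference, the commented-out block the quote seems to refer to is probably the autoCommit section of solrconfig.xml, roughly like this (a sketch; which of the quoted numbers was maxDocs and which was maxTime is my assumption):

  <!--
  <autoCommit>
    <maxDocs>1</maxDocs>
    <maxTime>1000</maxTime>
  </autoCommit>
  -->

If I am not mistaken, with this block commented out nothing is committed automatically, so documents only become visible after an explicit commit.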
Hi Shaun,
I think it would be easier to fix this problem if we had more information
about what is going on in your application.
Could you please provide the CoreAdminResponse returned by car.process()
for us?
Kind regards,
- Mitch
Frank,
have a look at SOLR-646.
Do you think a workaround using the dataDir tag in solrconfig.xml could
help?
I am thinking about something like ${solr./data/corename}, just for
illustration.
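For illustration, something along these lines in each core's solrconfig.xml (a sketch; I am not certain ${solr.core.name} is the right property, so treat it as an assumption):

  <dataDir>/data/${solr.core.name}</dataDir>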
Unfortunately I am not very skilled in working with solr's variables and
therefore I do not know what variables ar
I must add something to my last post:
When I said it could be used together with techniques like consistent
hashing, I meant it could be used at indexing time for indexing documents,
since I assumed that the number of shards does not change frequently and
therefore an ODV-case becomes relatively i
every
hour. :-)
Thoughts?
Kind regards
Andrzej Bialecki wrote:
>
> On 2010-09-06 16:41, Yonik Seeley wrote:
>> On Mon, Sep 6, 2010 at 10:18 AM, MitchK wrote:
>> [...consistent hashing...]
>>> But it doesn't solve the problem at all, correct me if I am wrong, but:
&
Andrzej,
thank you for sharing your experiences.
> b) use consistent hashing as the mapping schema to assign documents to a
> changing number of shards. There are many explanations of this schema on
> the net, here's one that is very simple:
>
Boom.
With the given explanation, I understan
Yonik,
are there any discussions about SolrCloud-indexing?
I would be glad to join them, if I find some interesting papers about that
topic.
- Mitch
Thanks for your detailed feedback, Andrzej!
From what I understood, SOLR-1301 becomes obsolete once Solr becomes
cloud-ready, right?
> Looking into the future: eventually, when SolrCloud arrives we will be
> able to index straight to a SolrCloud cluster, assigning documents to
> shards throug
Peter,
take a close look at tagging and excluding filters:
http://wiki.apache.org/solr/SimpleFacetParameters#LocalParams_for_faceting
Another way would be to index your services_raw as
services_raw/Exclusive rental
services_raw/Fotoreport
services_raw/Live music
In this case, you can use t
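For the tagging/excluding approach, a request could look roughly like this (a sketch; the field and value are taken from your examples, everything else is an assumption):

  ...&fq={!tag=svc}services_raw:"Live music"&facet=true&facet.field={!ex=svc}services_raw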
Hi,
this topic started a few months ago; however, there are some questions on
my side that I couldn't answer by looking at the SOLR-1301 issue or the
wiki pages.
Let me try to explain my thoughts:
Given: a Hadoop cluster, a Solr search cluster and Nutch as a
crawling engine which also perform
Hi Micheal,
have a look at SweetSpotSimilarity (Lucene).
Kind regards,
- Mitch
Johann,
try to remove the WordDelimiterFilter from the query analyzer of your
fieldType.
If the WordDelimiterFilter in your index analyzer is well configured, it will find
everything you want.
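For illustration, the fieldType could then look roughly like this (a sketch; the tokenizer and the concrete WordDelimiterFilter parameters are assumptions):

  <fieldType name="text" class="solr.TextField" positionIncrementGap="100">
    <analyzer type="index">
      <tokenizer class="solr.WhitespaceTokenizerFactory"/>
      <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" generateNumberParts="1" catenateWords="1"/>
      <filter class="solr.LowerCaseFilterFactory"/>
    </analyzer>
    <analyzer type="query">
      <tokenizer class="solr.WhitespaceTokenizerFactory"/>
      <filter class="solr.LowerCaseFilterFactory"/>
    </analyzer>
  </fieldType>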
Does this solve the problem?
Kind regards,
- Mitch
Hi Scott,
> (so shorter fields are automatically boosted up). "
>
The theory behind that is the following (in simple words):
Let's say you have two documents, and each doc contains only one field (as it was
in my example).
Additionally, we have a query that contains two words.
Let's say doc1 contains o
will be equal.
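For reference, with Lucene's DefaultSimilarity the length normalization is roughly

  lengthNorm = 1 / sqrt(numTermsInField)

so the document whose field contains fewer terms gets the larger norm and, all else being equal, the higher score.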
Kind regards,
- Mitch
scott chu wrote:
>
> I don't quite understand additional-field-way? Do you mean making another
> field that stores special words particularly but no indexing for that
> field?
>
> Scott
>
> - Original Message -
&g
Hi,
the KeepWordFilter is no solution for this problem, since it would mean that
one has to manage a word dictionary. As explained, this
would be too much effort.
You can easily add outputUnigrams=true and check analysis.jsp for
this field. That way you can see how much
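If this is the ShingleFilter I have in mind, the relevant line in your fieldType would look roughly like this (a sketch; maxShingleSize is an assumption):

  <filter class="solr.ShingleFilterFactory" maxShingleSize="2" outputUnigrams="true"/>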
Alex,
it sounds like it would make sense.
Use cases could be, e.g., clustering or similar techniques.
However, in my opinion that is not the right point of view for such a
modification.
For example, one wants to have several result sets. I could imagine that one does
a primary query (the query for the d
Jonathan Rochkind wrote:
>
>> qf needs to have spaces in it, unfortunately the local query parser can
>> not
>> deal with that, as Erik Hatcher mentioned some months ago.
>
> By "local query parser", you mean what I call the LocalParams stuff (for
> lack of being sure of the proper term)?
>
Hi,
as I promised, I want to give some feedback on transforming SolrJ's output
into JSON with the package from json.org:
I needed to make a small modification to the package: since they store the
JSON key-value pairs in a HashMap, I changed this to a LinkedHa
Hi,
qf needs to have spaces in it; unfortunately the local params query parser cannot
deal with that, as Erik Hatcher mentioned some months ago.
A solution would be to do something like this:
{!dismax qf=$yourqf}yourQuery&yourqf=title^1.0 tags^2.0
Since you are using the dismax query parser, you c
I got some problems with Nabble, too.
Nabble sends warnings that my posts are still pending on the
mailing list, while people are already answering my initial questions.
Did you send a message to the Nabble support?
Kind regards,
- Mitch
kenf_nc wrote:
>
> The Nabble.com page for Sol
edback!
Are you sure that you cannot change the SOLR results at query time
according to your needs?
Unfortunately, it is not possible in this case.
Kind regards,
Mitch
On 28.07.2010 16:49, Chantal Ackermann wrote:
Hi Mitch
On Wed, 2010-07-28 at 16:38 +0200, MitchK wrote:
Thank you,
e JacksonParser with Spring.
http://json.org/ lists parsers for different programming languages.
Cheers,
Chantal
On Wed, 2010-07-28 at 15:08 +0200, MitchK wrote:
Hello ,
Second try to send a mail to the mailing list...
I need to translate SolrJ's response into JSON-response.
I can not
you haven't already, and query with wt=json. Can't get
much easier.
Cheers,
On Wednesday 28 July 2010 15:08:26 MitchK wrote:
Hello ,
Second try to send a mail to the mailing list...
I need to translate SolrJ's response into JSON-response.
I can not query Solr directly, becau
Hello ,
Second try to send a mail to the mailing list...
I need to translate SolrJ's response into JSON-response.
I cannot query Solr directly, because I need to do some math on the
response data before I show the results to the client.
Any experiences how to translate SolrJ's response i
Hello community,
I need to transform SolrJ responses into JSON after some computation on
those results by another application has finished.
I cannot do those computations on the Solr side.
So I really have to translate SolrJ's output into JSON.
Any experiences how to do so without writing
Hi Chantal,
instead of:
/* multivalued, not required */
you do:
/* multivalued, not required */
The yourCustomFunctionToReturnAQueryString(vip, querystring1, querystring2)
{
if(vip != n
Hi Chantal,
> However, with this approach indexing time went up from 20min to more
> than 5 hours.
>
This is 15x slower than the initial solution... wow.
From MySQL I know that IN ()-clauses are the embodiment of endlessness -
they perform very, very badly.
New idea:
Create a method which
Hi Chantal,
did you try to write a custom DIH function
(http://wiki.apache.org/solr/DIHCustomFunctions)?
If not, I think this would be a solution.
Just check whether "${prog.vip}" is an empty string or null.
If so, you need to replace it with a value that can never return anything.
So the vi
Stockii,
Solr's index is a Lucene Index. Therefore, Solr documents are Lucene
documents.
Kind regards,
- Mitch
Good morning,
https://issues.apache.org/jira/browse/SOLR-1632
- Mitch
Li Li wrote:
>
> where is the link of this patch?
>
> 2010/7/24 Yonik Seeley :
>> On Fri, Jul 23, 2010 at 2:23 PM, MitchK wrote:
>>> why do we do not send the output of TermsComponent of every
Okay, but then LiLi did something wrong, right?
I mean, if the document exists on only one shard, it should get the same
score whenever one requests it, no?
Of course, this only applies if nothing gets changed between the requests.
The only remaining problem here would be that you need distribut
That only works if the docs are exactly the same - they may not be.
Ahm, what? Why? If the uniqueID is the same, the docs *should* be the same,
shouldn't they?
values to disk (Which I do not suggest).
Thoughts?
- Mitch
MitchK wrote:
>
> Yonik,
>
> why do we do not send the output of TermsComponent of every node in the
> cluster to a Hadoop instance?
> Since TermsComponent does the map-part of the map-reduce concept, Hadoop
> only need
Yonik,
why do we not send the output of the TermsComponent of every node in the
cluster to a Hadoop instance?
Since the TermsComponent does the map part of the map-reduce concept, Hadoop
only needs to do the reduce. Maybe we do not even need Hadoop for this.
After reducing, every node in the cluste
Thank you three for your feedback!
Chantal, unfortunately kenf is right. Faceting won't work in this special
case.
> parallel calls.
>
Yes, this would be the solution. However, it would lead to a second
HTTP request, and I had hoped to avoid that.
Chantal Ackermann wrote:
>
> Sure S
It already was sorted by score.
The problem here is the following:
Shard_A and shard_B both contain doc_X.
If you query for something, doc_X could have a score of 1.0 on
shard_A and a score of 12.0 on shard_B.
You can never be sure which copy Solr sees first. In the bad case, Solr see
I don't know much about the code.
Maybe you can tell me which file you are referring to?
However, from the comments one can see that the problem is known, but it was
decided to let it happen because of system requirements regarding the Java
version.
- Mitch
Oh... I just noticed there is no direct question ;-).
How can I specify the number of returned documents in the desired way
*within* one request?
- Mitch
Ah, okay. I understand your problem. Why should doc X be at position 1 when
searching the first time, but at position 8 when searching the 2nd time -
right?
I am not sure, but I think you can't prevent this without custom coding or
making a document's occurrence unique.
Kind rega
Hello community,
I have a situation where I know that some types of documents contain very
extensive information and other types give more general information.
Since I don't know whether a user searches for general or extensive
information (and I don't want to ask him when he uses the defau
Li Li,
this is the intended behaviour, not a bug.
Otherwise you could get back the same record several times in one response,
which may not be what the user intends.
Kind regards,
- Mitch
Here you can find the params and their meanings for the dismax handler:
http://wiki.apache.org/solr/DisMaxRequestHandler
You may not find anything in the wiki by searching for a parser ;).
Kind regards
- Mitch
Erik Hatcher-4 wrote:
>
> Consider using the dismax
That sounds like the best solution here.
However, I do not want to exclude the possibility of doing things in one core
that one *should* do in different cores with different configurations and
schema.xml files.
I haven't completely read the Lucid Imagination article, but I would suggest
you do your wo
Frank,
have a look at Solr's example directory and look for 'multicore'. There
you can see an example configuration for a multicore environment.
Kind regards,
- Mitch
Thank you both.
I will do what Hoss suggested tomorrow.
The mail was sent via the Nabble board and another time via my
Thunderbird client, both with the same result. So there was no more
HTML code than in any of my other postings.
Kind regards
- Mitch
Hello,
I am trying to post this message
http://lucene.472066.n3.nabble.com/Solr-in-an-extra-project-what-about-replication-scaling-etc-td977961.html#a977961
to the Solr mailing list for the fourth time, and every time I
get the following response from the mailing list's server:
> solr-user@lucene.
I need to revive this discussion...
If you do distributed indexing correctly, what about updating the documents
and what about replicating them correctly?
Does this work? Or wasn't this an issue?
Kind regards
- Mitch
Britske, good workaround!
I did not think about the possibility of using subqueries.
Regards
- Mitch
Ramzesua,
this is not possible, because Solr does not know the resulting score
at query time (as far as I know).
The score is only computed when every hit from every field is combined by
the scorer.
Furthermore, I have shown you an alternative in the other threads. It does
not do exactly wh
David,
well, I am no committer, but I noticed that Lucene will no longer take care of
compression (I think this was because of the trouble it caused), and
maybe this is the reason why Solr no longer makes this option available.
Unfortunately, I do not have a link for it, but I think this w
Hello community,
for a few days now I have been receiving daily mails with suspicious content. They
say that some of my mails were rejected because of the file types of the
mail's attachments and other things.
This surprises me a lot, because I didn't send any mails with attachments, and
even the email-
Hi Chantal,
Munich? Germany seems to be soo small :-).
Chantal Ackermann wrote:
>
> I only want a way to show to the
> user a kind of relevancy or similarity indicator (for example using a
> range of 10 stars) that would give a hint on how similar the mlt hit is
> to the input (match) item.
Chantal,
have a look at
http://lucene.apache.org/java/3_0_1/api/all/org/apache/lucene/search/similar/MoreLikeThis.html
to get an idea of what the MLT score is based on.
The problem is that you can't compare scores.
The query for the "normal" result-response was maybe something like
Sebastian,
sounds like an exciting project.
> We've found the argument "TokenGroup" in method "highlightTerm"
> implemented in SimpleHtmlFormatter. "TokenGroup" provides the method
> "getPayload()", but the returned value is always "NULL".
>
No, Token provides this method, not TokenGroup. Bu
I wanted to add a JIRA issue about exactly what Otis is asking here.
Unfortunately, I don't have time for it because of my exams.
However, I'd like to add a question to Otis' ones:
If you distribute the indexing process this way, are you able to replicate
the different documents correctly?
Thank y
Barani,
without more background on dynamic fields, I would say that the easiest
way would be to define a suffix for each of the fields you want to index
into the mentioned dynamic field and to redefine your dynamic field
condition.
If a suffix does not work because of other dynamic-field d
Otis,
And again I wish I were registered.
I will check JIRA and, when I feel comfortable with it, I will open the issue.
Kind regards
- Mitch
Joe,
please, can you provide an example of what you are thinking of?
Subqueries with Solr... I've never seen something like that before.
Thank you!
Kind regards
- Mitch
Hi,
> One problem down, two left! =) bf ==> bq did the trick, thanks. Now at
> least if I can't get the DIH solution working I don't have to tack that on
> every query string.
>
I would really recommend using a boost function. If your rank changes
in future implementations, you do not
What is the use case for such an architecture?
Do you send requests to two different masters for indexing, and that is why
they need to be synchronized?
Kind regards
- Mitch
Antonello,
here are a few links to the Solr Wiki:
Solr Replication: http://wiki.apache.org/solr/SolrReplication
Distributed Search Design: http://wiki.apache.org/solr/DistributedSearchDesign
Distributed Search: http://wiki.apache.org/solr/DistributedSearch
SolrCloud: http://wiki.apache.org/solr/SolrCloud
Sorry, I've overlooked your other question.
>
> rank:1^10.0 rank:2^9.0 rank:3^8.0 rank:4^7.0 rank:5^6.0 rank:6^5.0
> rank:7^4.0 rank:8^3.0 rank:9^2.0
>
>
This is wrong as a bf value.
You need to change "bf" to "bq":
bf -> boost function
bq -> boost query
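For illustration, with the dismax handler the boost query would be passed roughly like this (a sketch; q and defType are assumptions):

  q=foo&defType=dismax&bq=rank:1^10.0 rank:2^9.0 rank:3^8.0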
Hi,
first of all, are you sure that row.put('$docBoost',docBoostVal) is correct?
I think it should be row.put($docBoost,docBoostVal); - unfortunately I am
not sure.
Hm, I think until you can solve the problem with the docBoost itself, you
should use a function query.
Use "div(1, rank)" as boo
h you can use today:
>
> http://search-lucene.com/?q=ExternalFileField&fc_project=Solr
>
> Otis
>
> Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch
> Lucene ecosystem search :: http://search-lucene.com/
>
>
>
> - Original Message
>&g
> Solr doesn't know anything about OPIC, but I suppose you can feed the OPIC
> score computed by Nutch into a Solr field and use it during scoring, if
> you want, say with a function query.
>
Oh! Yes, that makes more sense than using the OPIC score as a doc-boost value. :-)
Somewhere on the Lucene mail
Good morning!
Great feedback from you all. This really helped a lot to get an impression
of what is possible and what is not.
What interests me are some detail questions.
Let's assume Solr is able to handle distributed indexing on its own,
so that the client does not need to know
Thanks, that really helps to find the right beginning for such a journey. :-)
> * Use Solr, not Nutch's search webapp
>
As far as I have read, Solr can't scale if the index gets too large for one
server
> The setup explained here has one significant caveat you also need to keep
> in mind:
Thank you for the feedback, Otis.
Yes, I thought that such an approach is useful if the number of pages to
crawl is relatively low.
However, what about using Solr + Nutch?
Does the problem that this would not scale if the index becomes too
large still exist?
What about extending Nutch with fe
Hello community,
from several discussions about Solr and Nutch, I got some questions for a
virtual web search engine.
I know I posted this message to the mailing list a few days ago, but the
thread got hijacked and in the end I did not get any more postings about the
topic, so I am trying to reop
Just wanted to push the topic a little bit, because these questions come up
quite often and it is very interesting to me.
Thank you!
- Mitch
MitchK wrote:
>
> Hello community and a nice satureday,
>
> from several discussions about Solr and Nutch, I got some questions fo
Guys???
You are in the wrong thread. Please send a new message to the mailing list; do
not reply to existing posts.
Thank you.
Hello community, and a nice Saturday,
from several discussions about Solr and Nutch, I got some questions for a
virtual web search engine.
The requirements:
I. I need a scalable solution for a growing index that becomes larger than
one machine can handle. If I add more hardware, I want to linear
Hello out there,
I am searching for a solution for conditional document boosting.
While analyzing the fields of a document, I want to create a document boost
based on some metrics.
There are two approaches:
First: I preprocess the data. The main problem with this is that I need to
take care ab
Hi dotriz,
to answer such questions it would help us if you could provide
some schema.xml information.
Kind regards
- Mitch
Good question.
Well, I have never worked with SolrJ in production.
But two things:
First: as the documentation says, you *should* get your IndexSearcher
from your SolrQueryRequest object.
Second: as a developer of SolrJ, I would do as much as I could
automatically behind the curtain. That mean
Ahh, now I understand.
No, you do not need a second IndexSearcher as long as the server is alive.
You can reuse your searcher for every user.
The only commands you execute per user are those that create a
search query.
Kind regards,
- Mitch
Where is your query?
You don't search for anything.
The q-param is empty.
You have two options (untested): remove the q param or search for something
specific.
I think removing it is not a good idea. Instead, searching for *:* would retrieve
ALL results that match your filter query.
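For example (the filter query value below is only an assumption for illustration):

  q=*:*&fq=category:books&rows=10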
Kind regards
- Mitch
> In my case, I have an index which will not be modified after creation.
> Does
> this mean that in a multi-user scenario, I can have a static IndexSearcher
> object that can be shared by multiple users ?
>
I am not sure what you mean by a "multi-user" scenario. Can you tell me
what you got
The score isn't computed yet when you try to access it. Furthermore, your
function query needs to become part of the score.
So what can you do?
The keyword is boosting.
Do: {!func}product(0.88,rank)^x
where x is a boost factor based on your experience.
Keep in mind that the result of your pro
Rahul,
Solr's IndexSearcher is shared by every request between two commits.
That means one IndexSearcher + its caches have a lifetime of one commit.
After every commit, a new one is created.
The caches do not mean that they are applied automatically. They mean
that a filter
Hi dc,
> - at query time, specify boosts for 'my items' items
>
Do you mean something like a document boost, or do you want to include
something like
"OR myItemId:100^100"
?
Can you tell us how you would specify document boosts at query time? Or
are you querying something like a boolean fiel
Hi RiH,
I think the idea is interesting, but the approach you have in mind is a little
bit difficult to implement.
Imagine you have 10,000 users; then a Solr document would actually need 10,000
fields responsible for managing all of this. Furthermore, your
data will change quite often. So t
Forget what I said about the second case.
The second case is a simple sort on your field.
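For example (assuming rank is a single-valued, indexed field):

  q=*:*&sort=rank desc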
Can you please do some math to show the principle?
Do you want to do something like this:
finalScore = score * rank
finalScore = rank
???
If the first is the case, then it is done by default (have a look at the
wiki-example for making more recent documents more relevant).
If the second is the
Can you provide us with some more information about what you really want to do?
As the examples in the wiki show, the returned value of the function query
is multiplied with the score - you can boost the value returned from the
function query if you like.
Kind regards
- Mitch
Okay, I will do so in the future if another problem like this occurs.
At the moment everything is fine after following your suggestions.
Kind regards
- Mitch
Thank you for the explanation, Hoss.
At the moment I am not able to try things out in Solr, because I am taking my
exams from tomorrow until Wednesday.
Otherwise I would try out as much as possible and dive into some
source code, and that would not be a good idea in this context. So my questions
are
Hi Ramzesua,
take a look at the example of the function query that influences relevancy via
the popularity field of the example directory.
http://wiki.apache.org/solr/FunctionQuery#Using_FunctionQuery
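From memory, the example there uses a boost function along the lines of (treat the exact function and weight as an assumption):

  bf=ord(popularity)^0.5

in the dismax handler configuration.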
Kind regards
- Mitch
Btw: This thread helps a lot to understand the difference between qf and pf
:-)
http://lucene.472066.n3.nabble.com/Dismax-query-phrases-td489994.html#a489995
Okay, let me be more specific:
I have a custom StopWordFilter and a WordMarkingFilter.
The WordMarkingFilter is a simple implementation that determines which type a
word is.
The StopWordFilter (my implementation) removes specific types of words *and*
all markers from all words.
This leads to a delet
Did you clear the browser cache?
Maybe you need to restart (I am currently not sure whether Solr caches
HTTP requests even after you did a commit).
Kind regards
- Mitch
I would prefer extending the given CollapseComponent, for
performance reasons. What you want to do sounds a bit like making things too
complicated.
There are two options I would prefer:
1. get the schema information for every field you want to query against and
define whether you want to
I got an idea:
If I concatenated all relevant fields into one large multiValued field, I
could query like this:
{!dismax qf='myLargeField^5'}solr development //mm is 1 (100%) if not set
In addition to that, I add a phrase query:
{!dismax qf='myLargeField^5'}solr development AND title:(solr
develo
Thank you for responding.
This would be possible. However, I would not like to do so, because a match
in "title" should be boosted higher than a match in "category".
When is the returned facet-info the expected info for your multiValued
fields?
Before or after your collapse?
It could be that you need to facet on your multiValued fields
before collapsing in order to retrieve the right values.
If this is the case, you need to integrate the before-co
Hello community,
I need minimum-should-match only on some fields, not on all.
Let me give you an example:
title: "Breaking News: New information about Solr 1.5"
category: development
tag: Solr News
If I search for "Solr development", I want this doc to be returned,
although I defined a minim
Just for clear terminology: you mean field, not fieldType. A fieldType is the
definition of tokenizers, filters, etc.
You apply a fieldType to a field. And you query against a field, not against
a whole fieldType. :-)
Kind regards
- Mitch
Marco Martinez-2 wrote:
>
> Hi Ranveer,
>
> If you don't
Good morning,
I do not have the time to read your full code very carefully at the moment.
I will do so later on. However: have a look at SimpleFacets. Consider the
method that creates the facet counts. If I remember correctly, the author
uses the IndexSearcher's numDocs(arg1, arg2) method.
Tha
Unfortunately this patch does not support multiValued fields (as stated
by the author and some others who worked with that patch). I had a look at
others, but they seem to have the same problem.
What would I suggest, hmm...
Off the top of my head, and at this time of day (it's late here in Germany), I got o