You can be pretty sure that adding static warming queries will improve your
performance following softcommits. But, opening new searchers every 2
seconds may be too fast to allow for warming so you may need to adjust. As
a general rule you cannot open searchers faster than you can warm them.
Joel
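For reference, a static warming query like the one Joel describes is registered in solrconfig.xml as a newSearcher listener. A minimal sketch (the field names "category" and "brand" are placeholders, not from this thread):

```xml
<!-- solrconfig.xml sketch: fire a facet query against each new searcher
     so facet caches are populated before it serves live traffic. -->
<listener event="newSearcher" class="solr.QuerySenderListener">
  <arr name="queries">
    <lst>
      <str name="q">*:*</str>
      <str name="rows">0</str>
      <str name="facet">true</str>
      <str name="facet.field">category</str>
      <str name="facet.field">brand</str>
    </lst>
  </arr>
</listener>
```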
Hi Joel. No, we have not; we have a softCommit requirement of 2 seconds.
On Tue, May 5, 2020 at 3:31 PM Joel Bernstein wrote:
Have you configured static warming queries for the facets? This will warm
the cache structures for the facet fields. You just want to make sure you
commits are spaced far enough apart that the warming completes before a new
searcher starts warming.
Joel Bernstein
http://joelsolr.blogspot.com/
Hi Erick, Thanks for the explanation and advice. With facet queries, do
docValues help at all?
1) indexed=true, docValues=true => all facets
2)
- indexed=true, docValues=true => only for subfacets
- indexed=true, docValues=false => facet query
- docValues=true, indexed=false => term
DocValues should help when faceting over fields, i.e. facet.field=blah.
I would expect docValues to help with sub facets too, but I don’t know
the code well enough to say definitively one way or the other.
The empirical approach would be to set “uninvertible=true” (Solr 7.6) and
turn docValues off. W
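The combinations being compared above can be written out in schema.xml; a hypothetical sketch (field names are placeholders):

```xml
<!-- schema.xml sketch of the indexed/docValues combinations discussed -->
<field name="subfacet_field"    type="string" indexed="true"  docValues="true"/>
<field name="facet_query_field" type="string" indexed="true"  docValues="false"/>
<field name="terms_field"       type="string" indexed="false" docValues="true"/>
```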
Hi Erick, You are correct, we have only about 1.8M documents so far, and
turning on indexing for the facet fields helped improve the timings of
our facet query (which has sub facets and facet queries) a lot. So do
docValues help at all for sub facets and facet queries? Our tests
revealed further
In a word, “yes”. I also suspect your corpus isn’t very big.
I think the key is the facet queries. Now, I’m talking from
theory rather than diving into the code, but querying on
a docValues=true, indexed=false field is really doing a
search. And searching on a field like that is effectively
analog
On 2/8/2018 5:36 AM, LOPEZ-CORTES Mariano-ext wrote:
> We are just 1 field "status" in facets with a cardinality of 93.
>
> We realize that increasing memory will work. But, you think it's necessary?
>
> Thanks in advance.
2GB for 27 million docs seems a little bit small even WITHOUT facets.
You
> -Original Message-
> From: Zisis T. [mailto:zist...@runbox.com]
> Sent: Thursday, February 8, 2018 1:14 PM
> To: solr-user@lucene.apache.org
> Subject: Re: Facets OutOfMemoryException
I believe that things like the following will affect faceting memory
requirements
-> how many fields do you facet on
-> what is the cardinality of each one of them
-> What is your QPS rate?
but 2GB for 27M documents seems too low. Did you try to increase the memory
on Solr's JVM?
John Davis wrote:
> 100M unique values might be across all docs, and unless the faceting
> implementation is really naive I cannot see how that can come into play
> when the query matches a fraction of those.
Solr simple string faceting uses an int-array to hold counts for the different
terms in
On Tue, Oct 24, 2017 at 8:37 AM, Erick Erickson
wrote:
bq: It is a bit surprising why facet computation
is so slow even when the query matches hundreds of docs.
The number of terms in the field over all docs also comes into play.
Say you're faceting over a field that has 100,000,000 unique values
across all docs, that's a lot of bookkeeping.
Best,
Hi John,
Did you mean “docValues don’t work for analysed fields”? They do work for
multivalued string (or other supported types) fields. What you need to do is
convert your analysed field to a multivalued string field - that requires
changes in the indexing flow.
HTH,
Emir
Docvalues don't work for multivalued fields. I just started a separate
thread with more debug info. It is a bit surprising why facet computation
is so slow even when the query matches hundreds of docs.
On Mon, Oct 23, 2017 at 6:53 AM, alessandro.benedetti
wrote:
Hi John,
first of all, I may state the obvious, but have you tried docValues ?
Apart from that a friend of mine ( Diego Ceccarelli) was discussing a
probabilistic implementation similar to the hyperloglog[1] to approximate
facets counting.
I didn't have time to take a look in details / implement
Hi Yonik,
Any update on sampling based facets. The current faceting is really slow
for fields with high cardinality even with method=uif. Or are there
alternative work-arounds to only look at N docs when computing facets?
On Fri, Nov 4, 2016 at 4:43 PM, Yonik Seeley wrote:
Most likely this is autowarming. New searchers are not available until
the autowarming period is complete. The sequence is:
- commit
- new searcher is opened and autowarming starts on it
- new requests are served by the old searcher
- autowarming completes
- new requests are served by the new searcher
Yes, all three fields should be docValues. The point of docValues is
to keep from "uninverting" the docValues structure in Java's heap. Any
time you have to answer the question "What is the value in
docX.fieldY" it should be a docValues field. The way facets (and
function queries for that matter w
Hello, John!
You can try to do that manually by applying a filter on a random field.
On Fri, Nov 4, 2016 at 10:02 PM, John Davis
wrote:
> Hi,
> I am trying to improve the performance of queries with facets. I understand
> that for queries with high facet cardinality and large number results the
> c
From: John Davis wrote:
> Does there exist an option to compute facets by just looking at the top-n
> results instead of all of them or a sample of results based on some query
> parameters?
Doing it for the top-n results does not play well with the current query flow
in Solr (I might be wrong he
Sampling has been on my TODO list for the JSON Facet API.
How much it would help depends on where the bottlenecks are, but that
in conjunction with a hashing approach to collection (assuming field
cardinality is high) should definitely help.
-Yonik
On Fri, Nov 4, 2016 at 3:02 PM, John Davis wro
https://issues.apache.org/jira/browse/SOLR-5894 had some pretty interesting
looking work on heuristic counts for facets, among other things.
Unfortunately, it didn’t get picked up, but if you don’t mind using Solr 4.10,
there’s a jar.
On 11/4/16, 12:02 PM, "John Davis" wrote:
Hi,
I a
I believe that's what the JSON Facet API does by default. Have you tried that?
Regards,
Alex.
Solr Example reading group is starting November 2016, join us at
http://j.mp/SolrERG
Newsletter and resources for Solr beginners and intermediates:
http://www.solr-start.com/
On 5 November 2016 at
Hi Jainam,
One workaround is to use facet.query and frange query parser.
facet.query={!frange l=50 u=100}field(price)
Ahmet
On Thursday, April 16, 2015 1:01 PM, jainam vora wrote:
Hi,
I am using external field for price field since it changes frequently.
generate facets using external field
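Spelling out Ahmet's workaround a little: each price bucket becomes its own facet.query against the external field. The bucket boundaries below are made up:

```
facet=true
&facet.query={!frange u=50}field(price)
&facet.query={!frange l=50 u=100}field(price)
&facet.query={!frange l=100}field(price)
```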
Hi Joshua,
The functionality you are asking about is requested by
https://issues.apache.org/jira/browse/SOLR-5743.
I've prepared a patch with initial implementation and going to speak about
it on Lucene/Solr Revolution 2014 Conference, held in Washington, DC on
November 11-14, http://lucenerevolut
Yes. One way is using a join query to link authors to books. The query will
look like this:
q={!join from=author_id_fk to=author_id} publication_date:[...]
The other way is using grouping. Here, you first retrieve books based on
their publication date, then group them on their authors.
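A sketch of the grouping alternative, assuming the same author_id field as the join example and a made-up date range:

```
q=publication_date:[2014-01-01T00:00:00Z TO *]
&group=true
&group.field=author_id
&group.limit=3
```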
application then just needs to do some simple substring handling to return only
the actual facet value without the language.
-Original Message-
From: davyme [mailto:meybosd...@hotmail.com]
Sent: Friday, September 12, 2014 3:36 AM
To: solr-user@lucene.apache.org
Subject: Re: Facets not
The reason why I'm asking this is because I have no influence on the fields
that are indexed. The CMS automatically does that. So there is no way for me
to split up languages into separate fields.
I can change the schema.xml, but I don't know if there is a way to copy
fields into separate language
The way this is done in drupal and probably many others is that the facet
fields are keywords from a taxonomy.
If you want to facet through single language, you probably want to separate the
fields where you index each of the languages (so a field "text-en", "text-fr"
through which you would fac
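A hypothetical schema.xml sketch of that per-language split (field and type names are placeholders):

```xml
<!-- one field per language; facet on the field matching the user's
     language, e.g. facet.field=text_en -->
<field name="text_en" type="text_en" indexed="true" stored="true"/>
<field name="text_fr" type="text_fr" indexed="true" stored="true"/>
```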
Thank you very much Mikhail.
Walter
Ing. Walter Liguori
2014-07-09 21:43 GMT+02:00 Mikhail Khludnev :
Colleagues,
So far you can either vote or contribute to
https://issues.apache.org/jira/browse/SOLR-5743
Walter,
Usually, index-time tricks lose relationship information, which leads to
wrong counts.
On Tue, Jul 8, 2014 at 2:40 PM, Walter Liguori
wrote:
Yes, I also have the same problem.
In my case I have 2 types (parent and children) in a single collection and I
want to retrieve only the parents with a facet on a children field.
I've seen that it's possible via block join query (available from Solr 4.5).
I have Solr 1.2 and I've thought about static facet f
Hi Iorixxx!
I have not optimized the index but the day after this post I saw I didn't
have this problem anymore.
I will follow your advice next time!
Now I'm avoiding so much manipulation at indexing time and doing more
work in the Java code on the client side.
If I had time I would imple
Hi,
Please optimize your index (you can do it core admin GUI) and see if problem
goes away.
Ahmet
On Friday, March 7, 2014 1:18 PM, epnRui wrote:
Hi guys!
I solved my problem on the client side but at least I solved it...
Anyway, now I have another problem, which is related to the following:
- I had previously used replace chars and replace patterns, charfilters and
filters, at index time to replace "EP" by "European Parliament". At that
Hi guys,
So, I keep facing this problem which I can't solve. I thought it was due to
HTML anchors containing the name of the hashtag, and thus repeating it, but
it's not.
So the use case is:
1 - I need to consider hashtags as tokens.
2 - The hashtag has to show up in the facets.
Right now if I i
Hi guys,
I'm on my way to solve it properly.
This is what my field looks like now:
I still have one case where I'm facing issues because in fact I want to
pres
Hi,
Let's say you have accomplished what you want. You have a .txt with the tokens
to merge, like "European" and "Parliament". What is your use case then? What is
your high level goal?
MappingCharFilter approach is closer (to your .txt approach) than
PatternReplaceCharFilterFactory approach.
Have you tried to just use a copyField? For example, I had a similar use
case where I needed to have particular field (f1) tokenized but also
needed to facet on the complete contents.
For that, I created a copyField
f1 used tokenizers and filters but f2 was just a plain string. You then
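The copyField setup described above might look like this in schema.xml (the type names are assumptions; the message only gives the field names f1 and f2):

```xml
<!-- f1 is tokenized for search; f2 keeps the complete value for faceting -->
<field name="f1" type="text_general" indexed="true" stored="true"/>
<field name="f2" type="string" indexed="true" stored="false"/>
<copyField source="f1" dest="f2"/>
```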
Hi Ahmet!!
I went ahead and did something I thought it was not a clean solution and
then when I read your post and I found we thought of the same solution,
including the European_Parliament with the _ :)
So I guess there would be no way to do this more cleanly, maybe only
implementing my own Tok
Hi epnRui,
I don't fully follow your e-mail (I think you need to describe your use case)
but here are some answers,
- Is it possible to have facets of two or more words?
Yes. For example if you use ShingleFilterFactory at index time you will see two
or more words in facets.
- Can I tokenize
Hello Henning,
There is no open source facet component for child level of block-join.
There is no even open jira for this.
Don't think it helps.
On 11.02.2014 12:22, "Henning Ivan Solberg"
wrote:
> Hello,
>
> I'm testing block join in solr 4.6.1 and wondering, is it possible to get
>
nope.
On Mon, Jan 27, 2014 at 3:00 PM, Yiannis Pericleous <
y.pericle...@albourne.com> wrote:
> Hi,
>
> Is it possible to use query time join with a facet.field option?
>
> ie. i need to do something like this:
> facet.field={!join from=parent_type to=id}city
>
> yiannis
>
--
Sincerely yours
Yeah for a couple years we have wanted to know the number of values in a
facet field.
I.e.
facet.field=name&facet.limit=-1
But we only want to return 3, and we want to know how many names:
facet.field = name
facet.field.name.count = 156
facet.field.name.1 = Bill, 8958
facet.field.name.2 =
Hello Yago,
This condition doesn't help to reduce computation significantly for
facet.method=fc nor fcs, it might help for enum, but it requires
implementation efforts. Also, my feeling is that you have much more
performance challenges if you have million size facets response, it's not
typical usa
Hi David,
As Karan suggested,your current icDesc_en is tokenized (understandably you
need to do that if you want to search on it in a powerful way). So the
solution is create another field say icDesc_en_facet and define "string" as
the type (like Karan suggested) and then do this : .
Now you can us
Karan,
The field was a "text" type, which by experimentation I changed to "string"
and all was OK.
Thanks for your prompt reply.
David
what's field type of "icDesc_en"?
See it in schema.xml in conf directory of your solr setup.
I guess it must be tokenized by tokenizer.
If that is the case then change the type of this field to "string" type.
By doing this tokens wouldn't be created and you will get desired results.
-Karan
On
Thanks for the excellent clarification. I'll ask the sunspot guys about the
localparams issue. I have a patch that would fix it
Thanks
Brendan
On May 16, 2013, at 1:42 PM, Chris Hostetter wrote:
>
> : I would then like to refer to these 'pseudo' field later in the request
> : string. I thoug
: I would then like to refer to these 'pseudo' field later in the request
: string. I thought this would be how I'd do it:
:
: f.my_facet_key.facet.prefix=a_given_prefix
...
that syntax was proposed in SOLR-1351 and a patch was made available, but
it was never commited (it only support
I got more information from the responses. Now it's time to take another
look at the number of facets to be configured.
Thanks,
Siva
http://smarttechies.wordpress.com/
If you're talking about _filter queries_, Kai's answer is good
But your question is confusing. You
talk about facet queries, but then use fq, which is "filter
query" and has nothing to do with facets at all unless
you're talking about turning facet information into filter
queries.
FWIW,
Eric
Try fq=(groups:group1 OR locations:location1)
Am 24.04.2013 um 12:39 schrieb vsl:
> Hi,
>
> my request contains following term:
>
> The are 3 facets:
> groups, locations, categories.
>
>
>
> When I select some items then I see such syntax in my request.
> fq=groups:group1&fq=locations:locati
solr-user@lucene.apache.org
Sent: Thursday, March 21, 2013 9:04 AM
Subject: Re: Facets with 5000 facet fields
as was said below, add facet.method=fcs to your query URL.
Upayavira
On Thu, Mar 21, 2013, at 09:41 AM, Andy wrote:
> What do I need to do to use this new per segment fa
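Concretely, per-segment faceting is just an extra request parameter; a sketch (collection and field names are placeholders):

```
/solr/collection1/select?q=*:*&rows=0&facet=true&facet.field=category&facet.method=fcs
```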
> Sent: Wednesday, March 20, 2013 1:09 PM
> Subject: Re: Facets with 5000 facet fields
>
>
> On Mar 20, 2013, at 11:29 AM, Chris Hostetter
> wrote:
>
> > Not true ... per segment FIeldCache support is available in solr
> > faceting, you just have to sp
What do I need to do to use this new per segment faceting method?
From: Mark Miller
To: solr-user@lucene.apache.org
Sent: Wednesday, March 20, 2013 1:09 PM
Subject: Re: Facets with 5000 facet fields
On Mar 20, 2013, at 11:29 AM, Chris Hostetter wrote
It looks like docvalues might solve a problem we have. (sorry for the
thread jacking)
I looked for info on it on the wiki, but could not find any.
Is there any documentation done on it yet?
On Wed, Mar 20, 2013 at 6:09 PM, Mark Miller wrote:
>
> On Mar 20, 2013, at 11:29 AM, Chris Hostetter
On Mar 20, 2013, at 11:29 AM, Chris Hostetter wrote:
> Not true ... per segment FIeldCache support is available in solr
> faceting, you just have to specify facet.method=fcs (FieldCache per
> Segment)
Also, if you use docValues in 4.2, Robert tells me it uses a new per-segment
faceting method
: > I seem to recall that facet cache is not per segment so every time the
: > index is updated the facet cache will need to be re-computed.
:
: That is correct. We haven't experimented with segment based faceting
Not true ... per segment FIeldCache support is available in solr
faceting, you ju
On Wed, 2013-03-20 at 10:12 +0100, Andy wrote:
> Are you doing NRT updates?
No. Startup/re-open time is around 1 minute for the Solr instance, but
that's acceptable since we are currently doing nightly updates only.
> I seem to recall that facet cache is not per segment so every time the
> index is updated the f
From: Toke Eskildsen
To: "solr-user@lucene.apache.org" ; Andy
Sent: Wednesday, March 20, 2013 4:06 AM
Subject: Re: Facets with 5000 facet fields
On Wed, 2013-03-20 at 07:19 +0100, Andy wrote:
> What about the case where there's only a small number of fields (a
> dozen or two) but each field has hundreds of thousands or millions of
> values? Would Solr be able to handle that?
We do that on a daily basis at State and University Library, Denm
: In order to support faceting, Solr maintains a cache of the faceted
: field. You need one cache for each field you are faceting on, meaning
: your memory requirements will be substantial, unless, I guess, your
1) you can consider trading ram for time by using "facet.method=enum" (and
disabling
Toke Eskildsen [t...@statsbiblioteket.dk] wrote:
[Solr, 11M documents, 5000 facet fields, 12GB RAM, OOM]
> 5000 fields @ 9 MByte is about 45GB for faceting.
> If you are feeling really adventurous, take a look at
> https://issues.apache.org/jira/browse/SOLR-2412
I tried building a test-index wi
On Mon, 2013-03-18 at 08:34 +0100, sivaprasad wrote:
> We have configured solr for 5000 facet fields as part of request handler.We
> have 10811177 docs in the index.
>
> The solr server machine is quad core with 12 gb of RAM.
>
> When we are querying with facets, we are getting out of memory erro
I'd be very surprised if this were to work. I recall one situation in
which 24 facets in a request placed too much pressure on the server.
In order to support faceting, Solr maintains a cache of the faceted
field. You need one cache for each field you are faceting on, meaning
your memory requireme
Nope. Information about your higher level use-case
would probably be a good thing, this is starting to
smell like an "XY" problem.
Best
Erick
On Fri, Apr 13, 2012 at 5:48 AM, Marc SCHNEIDER
wrote:
> Hi,
>
> Thanks for your answer.
> Yes it works in this case when I know the facet name (Computer)
Hi,
Thanks for your answer.
Yes it works in this case when I know the facet name (Computer). What
if I want to automatically compute all facets?
facet.query=keyword:* short_title:* doesn't work, right?
Marc.
On Thu, Apr 12, 2012 at 2:08 PM, Erick Erickson wrote:
facet.query=keywords:computer short_title:computer
seems like what you're asking for.
On Thu, Apr 12, 2012 at 3:19 AM, Marc SCHNEIDER
wrote:
Hi,
Thanks for your answer.
Let's say I have two fields: 'keywords' and 'short_title'.
For these fields I'd like to make a faceted search : if 'Computer' is
stored in at least one of these fields for a document I'd like to get
it added in my results.
doc1 => keywords : 'Computer' / short_title : '
Have you considered facet.query? You can specify an arbitrary query
to facet on which might do what you want. Otherwise, I'm not sure what
you mean by "faceted search using two fields". How should these fields
be combined into a single facet? What that means practically is not at
all obvious from y
If you mean that you need to group facets starting after the top 10 as
"Others", then I'm not sure if Solr would allow you to do this without
tweaking at the source-code level.
However, it is still possible on the client side to grab those facet counts
that logically belong to "Others" group and sum thei
How to mark remaining as "Others"
That field is a multi-valued field, so we can't do any calculation
based on the result set count.
On Fri, Jan 13, 2012 at 5:44 PM, Dmitry Kan wrote:
> You could do this on the client side, just read 10 first facets off the top
> of the list and mark the remaining as "O
You could do this on the client side, just read 10 first facets off the top
of the list and mark the remaining as "Others".
On Fri, Jan 13, 2012 at 12:47 PM, Manish Bafna wrote:
> Hi,
> Is it possible to get top 10 facets and group the remaining in "Others".
>
> Thanks,
> Manish.
>
--
Regards
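Dmitry's client-side suggestion above can be sketched in Python, assuming the flat [term, count, term, count, ...] list Solr returns under facet_fields (all terms and counts below are made up):

```python
# Client-side workaround sketch: keep the top N facet entries and
# collapse the rest into a single "Others" bucket.

def collapse_facets(flat_counts, top_n=10):
    """flat_counts: Solr-style [term, count, term, count, ...] list."""
    pairs = list(zip(flat_counts[::2], flat_counts[1::2]))
    top = pairs[:top_n]
    others = sum(count for _, count in pairs[top_n:])
    if others:
        top.append(("Others", others))
    return top

counts = ["solr", 120, "lucene", 90, "elastic", 40, "nutch", 10, "tika", 5]
print(collapse_facets(counts, top_n=2))
# [('solr', 120), ('lucene', 90), ('Others', 55)]
```

Request facet.limit=-1 (as in the other thread above) so the tail being summed is complete.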
Faceting harvests the fields that are already indexed (so you have to
both store and index the fields) and uses Java object refs (pointers),
without copying the facet values. You know how log files have
multi-line exception stacks & the like? The multi-line exception
stacks after the real log line
"A common way is to make a facet string of categoryId-2_name_imageurl.
Then in your UI display the categoryId part of the facet."
I've been thinking about doing something like this for the same purposes. Will
having an "extra long" facet string like that have any effect on faceting
performance?
Sort of.
A common way is to make a facet string of categoryId-2_name_imageurl.
Then in your UI display the categoryId part of the facet.
On Thu, Aug 19, 2010 at 12:25 PM, Satish Kumar
wrote:
> Hi,
>
> Is it possible to associate properties to a facet? For example, facet on
> categoryId (1, 2, 3
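The "display only the categoryId part" step from the suggestion above could look like this hypothetical helper, assuming the facet values are packed as "<categoryId>_<name>_<imageUrl>" with "_" as the separator (the exact delimiter is whatever you chose at index time):

```python
# Unpack a facet string packed at index time; maxsplit=2 keeps any
# underscores inside the image URL intact.

def unpack_facet_value(value):
    category_id, name, image_url = value.split("_", 2)
    return {"id": category_id, "name": name, "imageUrl": image_url}

print(unpack_facet_value("2_Cameras_img/cameras_small.png"))
# {'id': '2', 'name': 'Cameras', 'imageUrl': 'img/cameras_small.png'}
```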
On Thu, 29 Jul 2010 20:39:57 +0530
Shishir Jain wrote:
> Hi,
>
> Am using Solr facets for my data and have a field which has
> multiple values in its field am using ";" to delimit those
> values. So after doing a solr search it returns me a facet array
> but that contains ";" in the facet value.
: Basically, what is the difference between issuing a facet field query
: that returns facets with counts,
: and a query with term vectors that also returns document frequency
: counts for terms in a field?
The FacetComponent generates counts that are relative the set of documents
that match you
... I meant: the terms component is faster than using facets. Both of course
provide the autocomplete.
From: Villemos, Gert [mailto:gert.ville...@logica.com]
Sent: Tue 5/4/2010 8:30 AM
To: solr-user@lucene.apache.org
Subject: RE: Facets vs TermV's
I found a thread once (sorry; can't remember where) which stated that the issue
is performance; the terms component is faster than the autocomplete.
I'm no expert but I guess it's a question of when the autocomplete index gets
built. Whereas the terms component likely builds it at storage time,
Hi Yonik!
I've tried recreating the problem now to get some log-output and the problem
just doesn't seem to be there anymore... This puzzles me a bit, as the
problem WAS definitely there before.
I've done one change and that is to optimize the index on one of the
servers. But should that impact thi
Something looks wrong... that type of slowdown is certainly not expected.
You should be able to see both the main query and a sub-query in the
logs... could you post an actual example?
-Yonik
http://www.lucidimagination.com
On Mon, Jan 4, 2010 at 4:15 AM, Aleksander Stensby
wrote:
> Hi everyone
1.4 has a good chance of being released next week. There was a hope that it
might make it this week, but another bug in Lucene 2.9.1 was found, pushing
things back just a little bit longer.
-Jay
http://www.lucidimagination.com
On Thu, Oct 29, 2009 at 11:43 AM, beaviebugeater wrote:
>
> Do you h
Do you have any (educated) guess on when 1.4 will be officially released?
Weeks? Months? Years?
I'll dive in. On the surface this looks like exactly what I described.
Thanks for the quick reply!!
Perhaps something like this that's actually running Solr w/ multi-select?
http://search.lucidimagination.com/
http://wiki.apache.org/solr/SimpleFacetParameters#Tagging_and_excluding_Filters
You just need a recent version of Solr 1.4
-Yonik
http://www.lucidimagination.com
On Thu, Oct 29, 2009
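The tagging/excluding pattern from that wiki page, in sketch form (the field name "color" and the tag name are placeholders):

```
q=*:*
&fq={!tag=colorTag}color:red
&facet=true
&facet.field={!ex=colorTag}color
```

The facet counts for color then ignore the color:red filter, which is what makes multi-select faceting work.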
In Solr a facet is assigned one number: the number of documents in
which it appears. The facets are sorted by that number. Would your
use case be solved with a second number that is formulated from the
relevance of the associated documents? For example:
facet relevance = count * sum(scores of
Hi Wojtek:
Sorry for the late, late reply. I haven't implemented this yet, but it is
on the (long) list of my todos. Have you made any progress?
Asif
On Thu, Aug 13, 2009 at 5:42 PM, wojtekpia wrote:
>
> Hi Asif,
>
> Did you end up implementing this as a custom sort order for facets? I'm
> f
Hi Sébastien,
I've experienced the same issue but when using "range queries". Maybe this
might help you too.
I was trying to filter a query using a range as "[ B TO F ]" being case and
accent insensitive, and still get back the case and accent at results.
The solution has been NOT to TOKENIZE the
Hi Asif,
Did you end up implementing this as a custom sort order for facets? I'm
facing a similar problem, but not related to time. Given 2 terms:
A: appears twice in half the search results
B: appears once in every search result
I think term A is more "interesting". Using facets sorted by freque
hossman wrote:
>
>
> but are you sure that example would actually cause a problem?
> I suspect if you index that exact sentence as is you wouldn't see the
> facet count for "si" or "que" increase at all.
>
> If you do a query for "{!raw field=content}que" you bypass the query
> parsers (whi
: http://projecte01.development.barcelonamedia.org/fonetic/
: you will see a "Top Words" list (in Spanish and stemmed) in the list there
: is the word "si" which is in 20649 documents.
: If you click at this word, the system will perform the query
: (x) content:si, with no answers at all
:
Sorry , I was too cryptic.
If you follow this link
http://projecte01.development.barcelonamedia.org/fonetic/
you will see a "Top Words" list (in Spanish and stemmed) in the list there
is the word "si" which is in 20649 documents.
If you click at this word, the system will perform the query
: Date: Tue, 9 Jun 2009 16:04:03 -0700 (PDT)
: From: JCodina
: Subject: facets and stopwords
: I have a text field from where I remove stop words, as a first approximation
: I use facets to see the most common words in the text, but.. stopwords are
: there, and if I search documents having the st
Thanks for your reply. I will have a look at this.
Peter Wolanin wrote:
Seems like this might be approached using a Lucene payload? For
example where the original string is stored as the payload and
available in the returned facets for display purposes?
Payloads are byte arrays stored with