Hi,
In working through some updates for the Solr Size Estimator, I have
found a number of gaps in the Solr Wiki. I've Googled each of these to a
fair degree and either found nothing or an insufficient explanation.
In particular, for each of the following I'm looking for:
A) An explanation
Seeing something odd going on with faceting... we execute facets with
every query, and yet the fieldValueCache is not being used:
name: fieldValueCache
class: org.apache.solr.search.FastLRUCache
version: 1.0
description: Concurrent LRU Cache(maxSize=1, initial...
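The closest explanation I have found so far is that this cache only
backs faceting on multi-valued fields (the UnInvertedField path used by
facet.method=fc); single-valued fields go through the Lucene FieldCache,
and facet.method=enum goes through the filterCache, either of which
would leave fieldValueCache sitting idle. If that reading is right, an
explicit declaration in solrconfig.xml (ours is implicit today, sizes
here are just placeholders) would look something like:

    <fieldValueCache class="solr.FastLRUCache"
                     size="512"
                     autowarmCount="128"
                     showItems="32"/>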
Got the FVH (FastVectorHighlighter) to work in Solr 3.1 (or at least I
presume I have, given that I can see multi-color highlighting in the
output).
But I am not able to get it to recognize the "regex" fragmenter: I see
no change in output when I specify the fragmenter, and I can even enter
bogus names for the fragmenter without error. It appears I would need to
implement the FragListBuilder
(.../lucene/search/vectorhighlight/FragListBuilder.html)
interface to take in and apply the regex.
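If a custom extension is indeed the way to go, my rough reading of the
javadoc suggests a skeleton along these lines (the class name,
constructor, and regex handling are my own invention; it only delegates
to SimpleFragListBuilder as a placeholder):

    import java.util.regex.Pattern;

    import org.apache.lucene.search.vectorhighlight.FieldFragList;
    import org.apache.lucene.search.vectorhighlight.FieldPhraseList;
    import org.apache.lucene.search.vectorhighlight.FragListBuilder;
    import org.apache.lucene.search.vectorhighlight.SimpleFragListBuilder;

    // Hypothetical sketch: a FragListBuilder that would snap fragment
    // boundaries to a regex, the way the regex fragmenter does for the
    // default highlighter. The regex logic is not implemented here;
    // this only shows where it would plug in.
    public class RegexFragListBuilder implements FragListBuilder {

        private final SimpleFragListBuilder delegate = new SimpleFragListBuilder();
        private final Pattern boundary;

        public RegexFragListBuilder(String boundaryRegex) {
            this.boundary = Pattern.compile(boundaryRegex);
        }

        public FieldFragList createFieldFragList(FieldPhraseList fieldPhraseList,
                                                 int fragCharSize) {
            // TODO: walk the phrase list and adjust each fragment's
            // start/end offsets to the nearest match of `boundary`.
            // Until then, fall back to default fixed-size fragmenting.
            return delegate.createFieldFragList(fieldPhraseList, fragCharSize);
        }
    }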
I would be happy to contribute back what I create.
Appreciate whatever guidance you can offer,
Christopher
Has anyone been able to get Saxon 9 working with Solr 3.1?
I was following the wiki page
(http://wiki.apache.org/solr/XsltResponseWriter), placing all the
saxon-*.jars in Jetty's lib/ext folder, and starting Jetty with:
java
-Djavax.xml.transform.TransformerFactory=net.sf.saxon.TransformerFactoryImpl ...
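For reference, the request I am testing with follows the wiki example,
something like (with example.xsl sitting in conf/xslt/):

    http://localhost:8983/solr/select/?q=solr&wt=xslt&tr=example.xsl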
Koji,
Thank you for the reply.
Being something of a novice with Solr, I would be grateful if you could
clarify my next steps.
I infer from your reply that no implementation similar to the regex
fragmenter has yet been contributed for the FVH. Thus I need to write my
own custom extension.
We have documents which consist of:
- A short list of terms (about 1 to 5 terms per document)
- An estimate of the probability of each term's occurrence (stored as a
tint field)
For each term in the index, we would like to get the result of the
following function:
(our estimate of t...
We are trying to get edismax to handle collocations mapped to a single
token. To do so we need to manipulate the "chunks" (as Hoss referred to
them in http://www.lucidimagination.com/blog/2010/05/23/whats-a-dismax/)
generated by the dismax parser. We have numerous collocations (terms of
speech...
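In case it helps frame the question: the fallback we are considering is
collapsing each collocation to a single token at analysis time with
SynonymFilterFactory, since the query parser splits on whitespace before
analysis ever sees the text (which is exactly why the chunks matter). A
minimal sketch, with the field type name and synonyms file invented for
illustration:

    <fieldType name="text_colloc" class="solr.TextField" positionIncrementGap="100">
      <analyzer>
        <tokenizer class="solr.WhitespaceTokenizerFactory"/>
        <filter class="solr.SynonymFilterFactory" synonyms="collocations.txt"
                ignoreCase="true" expand="false"/>
        <filter class="solr.LowerCaseFilterFactory"/>
      </analyzer>
    </fieldType>

with collocations.txt entries of the form:

    kick the bucket => kick_the_bucket

But we would still prefer to handle this at the parser level, hence the
question about manipulating the chunks.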