Have tried using
*
*
for the field domain_score but still get
0.0 = (MATCH) FunctionQuery(sfloat(domain_score)), product of:
0.0 = sfloat(domain_score)=0.0
1.0 = boost
0.07387746 = queryNorm
Thanks!
Julien
2008/6/20 Julien Nioche <[EMAIL PROTECTED]>:
> Hi guys,
>
> I am using SO
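For context, a boost function on that field is normally passed to the dismax
handler roughly like this (handler name and query term are illustrative; the
field name is the one from this thread):

  http://localhost:8983/solr/select?qt=dismax&q=video&bf=domain_score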
Right, we could start by adding a JIRA issue to at least request the
feature.
-Grant
On Jun 20, 2008, at 12:17 AM, Noble Paul നോബിള് नोब्ळ् wrote:
I can take it up. But should we wait for the feature to 'stabilize'
before adding it to SolrJ? Till then the approach suggested by Yonik
(get
Hey Julien,
What does your actual query look like? The original, the parsed, etc. (I
think a bunch of the variations get output when using debugQuery=true)
There is a DoubleFieldSource in the trunk as of SOLR-324, so that
probably explains why you are seeing the FloatFieldSource (as I
recal
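For reference, tacking the parameter onto the request is enough (URL is
illustrative):

  http://localhost:8983/solr/select?qt=dismax&q=video&debugQuery=true

The response then carries a debug section with the raw query string, the
parsed query, and a per-document score explanation like the one quoted above.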
Hi Grant,
Thanks for your help. I've just found the explanation to my problem: the
fields need to be indexed in order to be used in a bf, which was even stated
clearly in the documentation ;-) Hopefully someone who makes the same
mistake at some point will find this.
I'm now using the SVN trunk ve
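For anyone who hits the same thing: a field used in a boost function has to
be indexed, e.g. a schema.xml declaration along these lines (the sfloat type
matches the explain output above; the attributes are otherwise illustrative):

  <field name="domain_score" type="sfloat" indexed="true" stored="true"/>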
Here we go...
https://issues.apache.org/jira/browse/SOLR-430
On Fri, Jun 20, 2008 at 9:35 PM, Grant Ingersoll <[EMAIL PROTECTED]>
wrote:
> Right, we could start by adding a JIRA issue to at least request the
> feature.
>
> -Grant
>
>
> On Jun 20, 2008, at 12:17 AM, Noble Paul നോബിള് नोब्ळ् wrot
I've seen mention of these filters:
But I don't see them in the 1.2 distribution. Am I looking in the wrong
place? What will the UnicodeNormalizationFilterFactory do for me? I
can't find any documentation on it.
Thanks,
Phil
Apologies for reposting. I should have posted this in a new thread.
I've seen mention of these filters:
But I don't see them in the 1.2 distribution. Am I looking in the wrong
place? What will the UnicodeNormalizationFilterFactory do for me? I
can't find any documentation on it.
Thank
Regarding indexing words with accented and unaccented characters with
positionIncrement zero:
Chris Hostetter wrote:
you don't really need a custom tokenizer -- just a buffered TokenFilter
that clones the original token if it contains accent chars, mutates the
clone, and then emits it next w
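A rough sketch of that idea against the Lucene 2.x Token API (the class name
and the accent-folding helper are illustrative, not an existing filter):

  import java.io.IOException;
  import java.text.Normalizer;
  import org.apache.lucene.analysis.Token;
  import org.apache.lucene.analysis.TokenFilter;
  import org.apache.lucene.analysis.TokenStream;

  /** Emits every original token; if the token contained accented characters,
   *  an accent-stripped clone follows it at the same position (increment 0). */
  public class AccentCloneFilter extends TokenFilter {
    private Token pending; // buffered clone waiting to be emitted

    public AccentCloneFilter(TokenStream input) {
      super(input);
    }

    public Token next() throws IOException {
      if (pending != null) {              // emit the buffered clone first
        Token clone = pending;
        pending = null;
        return clone;
      }
      Token t = input.next();
      if (t == null) return null;
      String text = t.termText();
      String folded = removeAccents(text);
      if (!folded.equals(text)) {         // only clone when something changed
        Token clone = new Token(folded, t.startOffset(), t.endOffset(), t.type());
        clone.setPositionIncrement(0);    // same position as the original token
        pending = clone;
      }
      return t;                           // original token passes through unchanged
    }

    // Strips combining marks after NFD decomposition (requires Java 6).
    private static String removeAccents(String s) {
      return Normalizer.normalize(s, Normalizer.Form.NFD).replaceAll("\\p{M}", "");
    }
  }

A matching factory class would then be referenced from the field type's
analyzer chain in schema.xml.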
Maybe autoCommit is a good option for you. In other words, don't let your
clients issue commits - have Solr commit every N docs or every N minutes/hours.
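A minimal sketch of that in solrconfig.xml (the thresholds here are made up;
maxTime is in milliseconds):

  <updateHandler class="solr.DirectUpdateHandler2">
    <autoCommit>
      <maxDocs>10000</maxDocs>
      <maxTime>60000</maxTime>
    </autoCommit>
  </updateHandler>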
Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
- Original Message
> From: Sébastien Rainville <[EMAIL PROTECTED]
It makes much more sense :P Glad to learn about that option!
thanks,
Sebastien
On Fri, Jun 20, 2008 at 1:22 PM, Otis Gospodnetic <
[EMAIL PROTECTED]> wrote:
> Maybe autoCommit is a good option for you. In other words, don't let your
> clients issue commits - have Solr commit every N docs or e
Thank you for mentioning our product, Walter :-)
> I've worked with the Basis products. Solid, good support.
> Last time I talked to them, they were working on hooking them
> into Lucene.
So, Basis Technology's Rosette Language Platform has what we call
"Base Linguistics" (basically a morpholo
: 1) Overview: Currently, we have around 20,000 documents to index with
: individual doc size around 5k. We have set up faceting on a multi-valued
: field (there will be ~20 facets per document).
: 2) Faceted navigation: I've read that faceted navigation on a
: multi-valued field has some perfo
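For reference, a facet request over a multi-valued field generally has this
shape (field and parameter values are illustrative):

  http://localhost:8983/solr/select?q=*:*&rows=10&facet=true&facet.field=category&facet.limit=20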
For server-side XSLT processing, you need to use the XSLTResponseWriter
and you specify the stylesheet using the "tr" parameter.
For client-side XSLT there is a "stylesheet" param that
causes the XmlResponseWriter to include a stylesheet declaration in the
response, but it's
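For reference, the two approaches look roughly like this (core URL and
stylesheet names are illustrative):

  Server side:  http://localhost:8983/solr/select?q=solr&wt=xslt&tr=example.xsl
  Client side:  http://localhost:8983/solr/select?q=solr&wt=xml&stylesheet=/mystyle.xsl

With wt=xslt the stylesheet named by tr is applied on the server (it is looked
up under the conf/xslt directory); with the stylesheet param the XML response
merely references the stylesheet so the client applies it.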
Tom: how did your talk go?
(I would have come by, but I have class on Wed nights)
If you have any slides online it would be great if you could add a link to
them on the wiki...
http://wiki.apache.org/solr/SolrResources
: Subject: Talk on Solr - Oakland, CA June 18, 2008
-Hoss
: but really, I wasn't planning on having anyone (solr or otherwise) solving my
: needs. I just find it odd that I need to discern the number of returned
: results.
true, but you also have to discern the number of fields returned for each
doc, and the number of values in each of those fields, a
: But we still prefer the usage of DIH package classes without any prefix.
: type="HttpDataSource"
: instead of
: type="solr.HttpDataSource"
:
: But users must be able to load their classes using the "solr." format
I'm not 100% certain what you mean by that last comment, but it seems like
the
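For context, the two spellings under discussion would sit in data-config.xml
roughly like this (everything except the type attribute is illustrative):

  <dataConfig>
    <dataSource type="HttpDataSource" />
    <!-- vs. the prefixed form being requested: type="solr.HttpDataSource" -->
    <document>
      ...
    </document>
  </dataConfig>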
: I'm new to solr (using the 1.3 nightly at the moment) and trying to configure
: it to accept a third-party xml schema at the /update interface. I would
: like to define transformations like those of the DataImportHandler which use
Take a look at SOLR-285 and SOLR-370.
Disclaimer: I wrote th
Hi,
Does anyone know when it will be available in the nightly builds?
Thanks,
William.
On Fri, Jun 20, 2008 at 6:36 PM, Chris Hostetter <[EMAIL PROTECTED]>
wrote:
>
> : But we still prefer the usage of DIH package classes without any prefix.
> : type="HttpDataSource"
> : instead of
> : type="solr.Http
My guess, within the next 2 weeks. :)
Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
- Original Message
> From: William Silva <[EMAIL PROTECTED]>
> To: solr-user@lucene.apache.org
> Sent: Friday, June 20, 2008 7:04:14 PM
> Subject: Re: Slight issue with classloading
Although field collapsing worked fine in my brief testing,
when I put it to work with more documents, I start to get
exceptions. It seems to have something to do with the queries
(or documents, since different queries return different
documents). With some queries, this exception does not happen.
Hi Robert,
We actually had a similar problem to yours (slow highlighting of big
documents). We fixed it by extending the copyField functionality:
https://issues.apache.org/jira/browse/SOLR-538
We just updated the patch. It should work perfectly on trunk.
Please tell us if it answers your p
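For anyone following along, the gist of the SOLR-538 patch is a copyField
that truncates what it copies, declared in schema.xml roughly like this
(field names are illustrative; the maxChars attribute comes from the patch):

  <copyField source="body" dest="body_teaser" maxChars="10000"/>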
Hi guys,
I am using SOLR 2.2. I am trying to boost documents such as this one
(field values as returned by Solr, flattened here):
  9.600311
  1.8212872
  content
  d340da6d1483f028110b0ffc2402c417
  *14730*
  http://www.bebo.com/Video.jsp
  20080529185637
  www.bebo.com
  Video
  20080529195525711
  http://www.bebo.com/Video.jsp
  6
using the *domain_score* field
th
On Jun 19, 2008, at 6:28 PM, Yonik Seeley wrote:
2. I use acts_as_solr and by default it only makes "post" requests, even
for /select. With that setup the response time for most queries, simple or
complex ones, ranged from 150ms to 600ms, with an average of 250ms. I
changed the sele
On Fri, Jun 20, 2008 at 8:32 AM, Erik Hatcher <[EMAIL PROTECTED]>
wrote:
>
> On Jun 19, 2008, at 6:28 PM, Yonik Seeley wrote:
>
>> 2. I use acts_as_solr and by default they only make "post" requests, even
>>> for /select. With that setup the response time for most queries, simple
>>> or
>>> comple
Hi,
Is there a rule of thumb for the maximum number of updates before issuing a
commit?
For example, I'm using a MapReduce job for indexing a table in HBase... but
the problem is that I can't just let the reducers commit whenever they want
or else commits tend to happen at the same time and there
If you start tomcat in the directory where solr is, then your solr
configuration for snapshooter should work. If you have the scripts
directory in your PATH, then you can just use
snapshooter
I always use the full path name so it always works.
Bill
On Thu, Jun 19, 2008 at 10:25 PM, Otis G
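For reference, snapshooter is normally wired in as a postCommit listener in
solrconfig.xml; using an absolute path for exe avoids the working-directory
issue (paths here are illustrative):

  <listener event="postCommit" class="solr.RunExecutableListener">
    <str name="exe">/home/solr/solr/bin/snapshooter</str>
    <str name="dir">.</str>
    <bool name="wait">true</bool>
  </listener>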