You mention "that is one way to do it". Is there another I'm not seeing?
On Jan 10, 2012, at 4:34 PM, Ted Dunning wrote:
> On Tue, Jan 10, 2012 at 5:32 PM, Tanner Postert
> wrote:
>
>> We've had some issues with people searching for a document with the
We've had some issues with people searching for a document with the
search term '200 movies'. The document is actually titled 'two hundred
movies'.
Do we need to add every number to our synonyms dictionary to
accomplish this? Is it best done at index or search time?
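For what it's worth, a minimal sketch of how this is often handled with Solr's synonym filter at index time; the field type name and file contents here are assumptions, not from the thread:

```xml
<!-- schema.xml: add a synonym filter to the index-time analyzer -->
<fieldType name="text" class="solr.TextField">
  <analyzer type="index">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt"
            ignoreCase="true" expand="true"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```

with a synonyms.txt line like `200, two hundred` per mapping. Index-time expansion avoids rewriting every query but does require a reindex, and yes, each number you care about needs its own entry.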
This would be useful to me as well.
Even when searching with q=test, I know it defaults to the default search
field, but it would be helpful to know which field(s) match the query term.
On Thu, Sep 22, 2011 at 3:29 AM, Nicolas Martin wrote:
> Hi everybody,
>
> I need your help to get more informatio
Sure enough, that worked. Could have sworn we had it this way before, but
either way, that fixed it. Thanks.
On Wed, Sep 21, 2011 at 11:01 AM, Tanner Postert
wrote:
> I believe that was the original configuration, but I can switch it back and
> see if that yields any results.
>
>
>
> Did you try to copy into
> the text field directly from the genre field? Instead of the
> genre_search field? Did that yield working queries?
>
> On Wed, Sep 21, 2011 at 12:16 PM, Tanner Postert
> wrote:
> > i have 3 fields that I am working with: genre, genre_search and tex
I have 3 fields that I am working with: genre, genre_search, and text. genre
is a string field which comes from the data source. genre_search is a text
field that is copied from genre, and text is a text field that is copied
from genre_search and a few other fields. Text field is the default search
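The suggestion in the thread (copying into text directly from genre rather than via genre_search) would look roughly like this in schema.xml; the field definitions themselves are assumed:

```xml
<!-- copyField does not chain: one copy never feeds another copy,
     so populate both destinations directly from the source field -->
<copyField source="genre" dest="genre_search"/>
<copyField source="genre" dest="text"/>
```

This matches Solr's documented behavior that copies are made from the original incoming field values, which would explain why copying text from genre_search yielded nothing.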
I've tried to use a spellcheck dictionary built from my own content, but my
content ends up having a lot of misspelled words so the spellcheck ends up
being less than effective. I could use a standard dictionary, but it may
have problems with proper nouns. It also misses phrases. When someone
searc
> Spelling_Dictionary
> text_spelling
> true
> .001
>
>
>
> I have it on my to-do list to look into this further but haven't yet. If
> you decide to try it and can get it to work, please let me know how you do
> it.
>
> James Dyer
> E-Commerce Syst
I am using Solr 1.4.1 (Solr Implementation Version: 1.4.1 955763M - mark -
2010-06-17 18:06:42) to be exact.
I'm trying to implement the GeoSpatial field type by adding to the schema:
but I get the following errors:
org.apache.solr.common.SolrException: Unknown fieldtype 'location'
spec
That worked, though I tried it before; not sure why it didn't work before.
Also, is there a way to query without a q parameter?
I'm just trying to pull back all of the field results where field1:(1 OR 2
OR 3) etc. so I figured I'd use the FQ param for caching purposes because
those queries will likel
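On querying without a q parameter: with the standard (lucene) query parser the usual stand-in is a match-all query, sketched here under the assumption of default request-handler settings:

```
?q=*:*&fq=field1:(1 OR 2 OR 3)
```

q=*:* matches every document, so the fq does the real narrowing and its result set is cached in the filter cache for reuse.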
Trying to figure out how I can run something similar to this for the fq
parameter:
Field1 in (1, 2, 3, 4)
AND
Field2 in (4, 5, 6, 7)
I found some examples on the net that looked like this: &fq=+field1:(1 2 3
4) +field2(4 5 6 7) but that yields no results.
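Two things worth checking in that example (my reading, not confirmed in the thread): field2 is missing the colon before its parenthesized list, and bare terms in the parentheses are only ORed when the default operator is OR. Since separate fq parameters are implicitly ANDed and each is cached on its own, an explicit version might look like:

```
&fq=field1:(1 OR 2 OR 3 OR 4)&fq=field2:(4 OR 5 OR 6 OR 7)
```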
I noticed that your search terms are using caps vs. lower case; are your
search fields perhaps not set to lowercase the terms and/or the search
term?
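For reference, case-insensitive matching is normally handled by putting the lower-case filter in both the index and query analyzers of the field type; the type name here is an assumption:

```xml
<fieldType name="text" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <!-- normalizes both indexed terms and query terms to lower case -->
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```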
On Mon, Feb 28, 2011 at 10:41 AM, mrw wrote:
> Say I have an index with first_name and last_name fields, and also a copy
> field for the full name
I'm using an index based spellcheck dictionary and I was wondering if there
were a way for me to manually remove certain words from the dictionary.
Some of my content has some misspellings, and for example when I search for
the word sherrif (which should be spelled sheriff), I get recommendation
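One possible approach (an assumption on my part, not something suggested in the thread) is to build the dictionary from a dedicated copy of the field and strip known misspellings with a stop filter in that field's analyzer; the file name is made up:

```xml
<!-- in the analyzer of the field the spellcheck dictionary is built from -->
<filter class="solr.StopFilterFactory" ignoreCase="true"
        words="spellcheck_exclusions.txt"/>
```

with spellcheck_exclusions.txt listing one bad word per line, e.g. `sherrif`. The misspellings then never reach the index the dictionary is derived from.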
Right now when I search for 'brake a leg', Solr returns valid results with
no indication of misspelling, which is understandable since all of those
terms are valid words and are probably found in a few pieces of our content.
My question is:
is there any way for it to recognize that the phrase shoul
I updated my data importer.
I used to have:
which wasn't working. But I changed that to
and it is working fine.
On Tue, Feb 15, 2011 at 5:50 PM, Koji Sekiguchi wrote:
> (11/02/16 8:03), Tanner Postert wrote:
>
>> I am using the data import handler and using the HTM
I'm building my spellcheck index from my content and it seems to be working,
but my problem is that there are a few misspelled words in my content. For
example: the word Sheriff is misspelled Sherrif in my content a
couple dozen times (but spelled correctly a couple thousand times). The
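A common knob for exactly this situation is the spellchecker's thresholdTokenFrequency, which excludes terms that occur in fewer than the given fraction of documents from the dictionary, so a few dozen Sherrifs drown under thousands of Sheriffs. A sketch, with illustrative names:

```xml
<lst name="spellchecker">
  <str name="name">default</str>
  <str name="field">text_spelling</str>
  <!-- ignore terms appearing in fewer than 0.1% of documents -->
  <float name="thresholdTokenFrequency">.001</float>
</lst>
```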
Yes, it is possible via ${dataimporter.request.param};
see
http://wiki.apache.org/solr/DataImportHandler#Accessing_request_parameters
On Tue, Feb 15, 2011 at 4:45 PM, Jason Rutherglen <
jason.rutherg...@gmail.com> wrote:
> It'd be nice to be able to pass HTTP parameters into DataImportHandler
> t
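A sketch of what the request-parameter substitution looks like in practice; the table and parameter names are made up:

```xml
<!-- data-config.xml: the HTTP request parameter is referenced in the query -->
<entity name="item"
        query="SELECT * FROM items WHERE type = '${dataimporter.request.itemType}'"/>
```

invoked with something like /dataimport?command=full-import&itemType=movie.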
in the data importer query:
On Tue, Feb 15, 2011 at 3:49 PM, Tanner Postert wrote:
> nevermind, I think I found my answer here:
> http://www.mail-archive.com/solr-user@lucene.apache.org/msg34622.html
>
nevermind, I think I found my answer here:
http://www.mail-archive.com/solr-user@lucene.apache.org/msg34622.html
I will add the HTML stripper to the data importer and see how that goes
On Tue, Feb 15, 2011 at 3:43 PM,
OK, I will look at using that filter factory on my content.
But I was also looking at the stop filter number so I could adjust my mm
parameter based on the number of non-stopwords in the search parameter so I
don't run into the dismax stopword issue. any way around that other than
using a very low
I am trying to see if there is a way to get back the query as Solr actually
searched it, excluding the stopwords. Right now when I search for "the
year in review", I can see in the debug output that the parsed query contains: text:"?
year ? review", but that information is mixed in with all the parsed boosti
I have a multicore system and I am looking to boost results by date, but
only for 1 core. Is this at all possible?
Basically, one core's content is very new and changes all the time, and if
I boost everything by date, that core's content will almost always be
at the top of the results, so I
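Since each core has its own solrconfig.xml, one way to scope the boost (an assumption, not confirmed in the thread) is to put a date boost function only in that core's request-handler defaults, e.g. via dismax's bf; the created_at field name is made up:

```xml
<!-- solrconfig.xml of the one core that should boost by recency -->
<requestHandler name="/select" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="defType">dismax</str>
    <str name="bf">recip(ms(NOW,created_at),3.16e-11,1,1)</str>
  </lst>
</requestHandler>
```

The other cores simply omit the bf default, so their scores are unaffected.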
When I took off the qf=title &
qf=description fields, the results work. I am rebuilding my indexes now.
On Fri, Feb 11, 2011 at 3:20 PM, Tanner Postert wrote:
> looks like that might be the case, if I just do a search for "with"
> including the dismax parameters, it returns n
Looks like that might be the case: if I just do a search for "with"
including the dismax parameters, it returns no results, whereas a
search for 'obsessed' does return results. Is there any way I can get around
this behavior, or do I have something configured wrong?
>
> Might "with" be a sto
I'm having a problem using the dismax query. For example: for the term
"obsessed with winning" I use:
http://localhost:8983/solr/core1/select?q=obsessed+with+winning&fq=code:xyz&shards=localhost:8983/solr/core1,localhost:8983/solr/core2,&rows=10&start=0&defType=dismax&qf=title
^10+description^4+te
I'm having a problem using the dismax query for the term "obsessed with
winning"
http://localhost:8983/solr/core1/select?q=obsessed+with+winning&fq=code:xyz&shards=localhost:8983/solr/core1,localhost:8983/solr/core2,&rows=10&start=0&defType=dismax&qf=title
^10+description^4+text^1&debugQuery=true
the error I am getting is that I have no default value
for ${dataimporter.last_index_time}
should I just define 0000-00-00 00:00:00 as the default for that field?
On Wed, Jan 19, 2011 at 12:45 PM, Markus Jelsma
wrote:
> No, you only need defaults if you use properties that are not defined in
>
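For context, ${dataimporter.last_index_time} is normally referenced only in a deltaQuery, where Solr substitutes the timestamp it recorded in dataimport.properties after the previous run; table and column names below are made up:

```xml
<entity name="item"
        query="SELECT * FROM items"
        deltaQuery="SELECT id FROM items
                    WHERE updated_at &gt; '${dataimporter.last_index_time}'"/>
```

Note the comparison operator must be escaped as &amp;gt; inside the XML attribute.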
I even have to define default values for the dataimport.delta values? That
doesn't seem right.
On Wed, Jan 19, 2011 at 11:57 AM, Markus Jelsma
wrote:
> Hi,
>
> I'm unsure if i completely understand but you first had the error for
> local.code and then set the property in solr.xml? Then of course i
I'm trying to dynamically add a core to a multi core system using the
following command:
http://localhost:8983/solr/admin/cores?action=CREATE&name=items&instanceDir=items&config=data-config.xml&schema=schema.xml&dataDir=data&persist=true
the data-config.xml looks like this:
t
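One thing worth double-checking here (my observation, not from the thread): the config parameter of the CoreAdmin CREATE action names the core's solrconfig.xml, not a DataImportHandler config; data-config.xml is referenced from inside solrconfig.xml instead. A corrected call might look like:

```
http://localhost:8983/solr/admin/cores?action=CREATE&name=items&instanceDir=items&config=solrconfig.xml&schema=schema.xml&dataDir=data&persist=true
```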