I recently upgraded a Solr index from 3.5 to 4.3.0. I'm now having trouble with
the Data Import Handler when using the CachedSqlEntityProcessor.
The first issue I found was that the 'where' option doesn't work anymore.
Instead I am now using 'cacheKey' and 'cacheLookup'.
My next issue is that i
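For reference, the 'where' to 'cacheKey'/'cacheLookup' migration looks roughly like this; entity, table, and column names here are placeholders, not from the original message:

```xml
<!-- Solr 3.x style: nested cached entity joined with 'where' -->
<entity name="item_tags" processor="CachedSqlEntityProcessor"
        query="SELECT item_id, tag FROM tags"
        where="item_id=item.id"/>

<!-- Solr 4.x style: the same join expressed with cacheKey/cacheLookup.
     cacheKey is the column in this (cached) entity; cacheLookup is
     parentEntity.column -->
<entity name="item_tags" processor="CachedSqlEntityProcessor"
        query="SELECT item_id, tag FROM tags"
        cacheKey="item_id" cacheLookup="item.id"/>
```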
Same problem with 4.4.0 RC1.
-Original Message-
From: Alexandre Rafalovitch [mailto:arafa...@gmail.com]
Sent: Sunday, July 21, 2013 5:57 AM
To: solr-user@lucene.apache.org
Subject: Re: DIH nested cached entities not working after upgrade
Could you check with Solr 4.4 RC1:
http://people.a
Hi,
I think this is a pretty common requirement, so I'm hoping someone can easily
point out the solution:
I have an average rating field defined in my schema that is a tdouble and can
be anything from 0 - 5 (including decimals). I am using dismax so I want to
define a boost based on the average rat
Hi
I have seen several questions on this already but haven't been able to sort out
my issue. My problem is that multi-word synonyms aren't behaving as I would
expect. I have copied my field type definition at the bottom of this message,
but the relevant synonym filter is here (used at index time):
Thanks for the response. This almost worked: I created a new field using the
KeywordTokenizerFactory as you suggested. The only problem was that searches
only found documents when quotes were used.
E.g.
synonyms.txt setup like this:
simple syrup,sugar syrup,stock syrup
I indexed a document wit
Thanks for your response. When I don't include the KeywordTokenizerFactory in
the SynonymFilter definition, I get additional term values that I don't want.
e.g. synonyms.txt looks like:
simple syrup,sugar syrup,stock syrup
A document with a value containing 'simple syrup' can now be found when
I suppose I could translate every user query to include the term with quotes.
e.g. if someone searches for stock syrup I send a query like:
q=stock syrup OR "stock syrup"
Seems like a bit of a hack though, is there a better way of doing this?
Zac
-Original Message-
From:
It doesn't seem to do it for me. My field type is:
I am using edism
Are you able to explain how I would create another field to fit my scenario?
-Original Message-
From: O. Klein [mailto:kl...@octoweb.nl]
Sent: Tuesday, February 07, 2012 1:28 PM
To: solr-user@lucene.apache.org
Subject: RE: Multi word synonyms
Well, if you want both multi word and single
Hi,
I have a simple field type that uses the KeywordTokenizerFactory. I would like
to use this so that values in this field are only matched with the full text of
the field.
e.g. If I indexed the text 'chicken stock', searches on this field would only
match when searching for 'chicken stock'. I
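A minimal fieldType along those lines; the type name is a placeholder, and the lowercase filter is an assumption beyond what the message specifies:

```xml
<fieldType name="exact_text" class="solr.TextField">
  <analyzer>
    <!-- Emit the entire field value as a single token, so only
         full-value matches succeed -->
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```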
fieldNorm(field=ingredient_synonyms, doc=0)
Any ideas?
My dismax handler is setup like this:
dismax
explicit
0.01
ingredient_synonyms^0.6
ingredient_synonyms^0.6
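The archive has stripped the XML around the values above; they plausibly correspond to a dismax requestHandler along these lines (the tag names defType, echoParams, tie, qf, and pf are a guess inferred from the values, not from the original message):

```xml
<requestHandler name="/select" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="defType">dismax</str>
    <str name="echoParams">explicit</str>
    <float name="tie">0.01</float>
    <str name="qf">ingredient_synonyms^0.6</str>
    <str name="pf">ingredient_synonyms^0.6</str>
  </lst>
</requestHandler>
```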
Zac
From: Zac Smith
Sent: Thursday, February 09, 2012 12:52 PM
To: solr-user@lucene.apache.o
jsp) does not perform actual query parsing.
One thing to be aware of when using KeywordTokenizer at query time: the query
string (chicken stock) is pre-tokenized on whitespace before it reaches the
keyword tokenizer.
If you use quotes ("chicken stock"), query parser does no
when removing terms from the query:
https://issues.apache.org/jira/browse/SOLR-3128
Feedback welcome.
Thanks
Zac
-Original Message-
From: Zac Smith
Sent: Friday, February 10, 2012 3:30 PM
To: 'solr-user@lucene.apache.org'
Subject: RE: Keyword Tokenizer Phrase Issue
Thanks, that e
Does anyone have an example formula that can be used to sort by a 5 star rating
in SOLR?
I am looking at an example on IMDB's top 250 movie list:
The formula for calculating the Top Rated 250 Titles gives a true Bayesian
estimate:
weighted rating (WR) = (v ÷ (v+m)) × R + (m ÷ (v+m)) × C
where
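The variable definitions were cut off above; in the standard IMDB formulation, R is the item's mean rating, v is its number of votes, m is the minimum votes required to be listed, and C is the mean rating across all items. A quick sketch of the formula:

```python
def weighted_rating(R, v, m, C):
    """Bayesian estimate: blend the item's mean rating R (from v votes)
    with the global mean C, weighted by the vote threshold m."""
    return (v / (v + m)) * R + (m / (v + m)) * C

# An item rated 4.6 from 500 votes, with m=1250 and a global mean of 3.9,
# gets pulled toward the global mean:
print(round(weighted_rating(4.6, 500, 1250, 3.9), 3))  # -> 4.1
```

With few votes the result sits near C; as v grows, the weight shifts toward the item's own rating R.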
I hope this question is being directed to the right place ...
I am trying to use SQLite (v3) as a source for the Data Import Handler. I am
using a SQLite JDBC driver (link below) and this works when used with only
one entity. As soon as I add a sub-entity it falls over with a locked DB error:
I was able to resolve this issue by using a different jdbc driver:
http://www.xerial.org/trac/Xerial/wiki/SQLiteJDBC
-Original Message-
From: Zac Smith [mailto:z...@trinkit.com]
Sent: Friday, April 01, 2011 5:56 PM
To: solr-user@lucene.apache.org
Subject: Using the Data Import Handler
I have come across an issue with the DIH where I get a null exception when
pre-caching entities. I expect my entity to have null values so this is a bit
of a roadblock for me. The issue was described more succinctly in this
discussion:
http://lucene.472066.n3.nabble.com/DataImportHandlerExcepti
Let's say I have a data model that involves books and bookshelves. I have tens
of thousands of books and thousands of bookshelves. There is a many-to-many
relationship between books & bookshelves. All of the books are indexed by SOLR.
I need to be able to query SOLR and get all the books for a give
Hi Zac,
Solr 4.0 (trunk) has support for relationships/JOIN. Have a look:
http://search-lucene.com/?q=solr+join
Otis
Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch Lucene ecosystem
search :: http://search-lucene.com/
- Original Message
> From: Zac Smith
> To:
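With the books/bookshelves model from the earlier message, a join query might look something like this; the field names (book_id, id, shelf_name) and the filter value are placeholders, not from the original thread:

```
q={!join from=book_id to=id}shelf_name:fiction
```

This runs the inner query against the bookshelf documents, collects their book_id values, and returns the book documents whose id matches.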
Ok thanks for the responses. My option #2 will be easier to implement than
having the new doc with combinations so will give it a try. But that has opened
my eyes to different possibilities!
-Original Message-
From: Erick Erickson [mailto:erickerick...@gmail.com]
Sent: Sunday, May 15, 2
How would I specify a filter that covered a rectangular viewport? I have 4
coordinate points for the corners and I want to return everything inside that
area.
My first naive attempt was this:
q=*:*&fq=coords:[44.119141,-125.948638 TO 47.931066,-111.029205]
At first this seems to work OK, except
way to specify the exact coordinates of the bounding box -
http://wiki.apache.org/solr/SpatialSearch#bbox_-_Bounding-box_filter ??
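Note that the bbox filter on that wiki page is distance-based rather than corner-based: it builds a box around a center point. A sketch with placeholder values (the field name coords is from the earlier message; the point and distance are made up):

```
fq={!bbox sfield=coords pt=45.9,-118.4 d=300}
```

For corner-specified rectangles on a LatLonType field, the range syntax from the earlier message (coords:[lowerLeft TO upperRight]) is the usual approach.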
Zac
-Original Message-
From: Zac Smith [mailto:z...@trinkit.com]
Sent: Sunday, May 22, 2011 9:34 PM
To: solr-user@lucene.apache.org
Subject: Spatial Sol
Sounds like you might not be committing the delete. How are you deleting it?
If you run the data import handler with clean=true (which is the default) it
will delete the data for you anyway so you don't need to delete it yourself.
Hope that helps.
-Original Message-
From: antoniosi [mail