Hi,
I am trying to pass empty values to the fq parameter, but passing null (or empty)
doesn't seem to work for fq.
Something like...
q=*:*&fq=(field1:test OR null)
We are trying to make fq more tolerant, so that it does not fail whenever a
particular variable value is not passed.
Ex:
/select?q=*:*&fq=ln
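One thing I am considering (an assumption on my side: Solr 4.3+ with the switch query parser available) is routing the optional value through {!switch}, so that an empty or missing value falls back to a match-all filter instead of failing. The parameter name f_val below is just an illustration:

import org.apache.solr.client.solrj.SolrQuery;

public class TolerantFilterQuery {

    // Builds a query whose fq does not fail when the optional value is empty.
    public static SolrQuery build(String optionalValue) {
        SolrQuery q = new SolrQuery("*:*");

        // If f_val is blank, the 'case' clause kicks in and the filter becomes *:*;
        // otherwise the value itself is parsed as the filter query (default=$f_val).
        q.addFilterQuery("{!switch case='*:*' default=$f_val v=$f_val}");
        q.set("f_val", optionalValue == null ? "" : optionalValue);
        return q;
    }
}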
Jack,
First, thanks a lot for your response.
We hardcode certain queries directly in the search component, as it's easier for us
to make changes to the query on the SOLR side than to change them in the
applications (many applications - mobile, desktop, etc. - use a single SOLR
instance). We don't want to change
I am trying to implement Historical search using SOLR.
Ex:
If I search on address 800 5th Ave and provide a time range, it should list
the name of the person who was living at the address during the time period.
I am trying to figure out a way to store the data without redundancy.
I can do a join
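To make the question concrete, this is roughly the flat model I am comparing against (one document per residency period); the core name and field names below are just placeholders, not an existing schema:

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class HistoricalSearchSketch {
    public static void main(String[] args) throws Exception {
        SolrServer solr = new HttpSolrServer("http://localhost:8983/solr/residency");

        // One document per residency period (person + address + date range).
        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", "res-1");
        doc.addField("person_name", "John Doe");
        doc.addField("address", "800 5th Ave");
        doc.addField("from_date", "1995-01-01T00:00:00Z");
        doc.addField("to_date", "2002-06-30T00:00:00Z");
        solr.add(doc);
        solr.commit();

        // Who lived at 800 5th Ave during 1998-2000? A residency overlaps the
        // requested window when it starts before the window ends and ends after
        // the window starts.
        SolrQuery q = new SolrQuery("address:\"800 5th Ave\"");
        q.addFilterQuery("from_date:[* TO 2000-12-31T23:59:59Z]");
        q.addFilterQuery("to_date:[1998-01-01T00:00:00Z TO *]");
        System.out.println(solr.query(q).getResults());

        solr.shutdown();
    }
}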
Check the link below to get more info on the IndexBasedSpellChecker:
http://searchhub.org/2010/08/31/getting-started-spell-checking-with-apache-lucene-and-solr/
Did you try setting useCompoundFile to true in solrconfig.xml?
Also, try using a lower mergeFactor, which will result in fewer segments and
hence fewer open files.
Also, I assume you can set the limit using the ulimit command.
ex:
ulimit -n 20
Did you look into this link?
http://www.marshut.com/ruzyy/download-and-configure-morphlinesolrsink.html
Currently I am using SOLR 3.5.x. I push updates to SOLR via a queue (ActiveMQ)
and perform a hard commit every 30 minutes (since my index is relatively big,
around 30 million documents). I am thinking of using soft commits to implement
NRT search, but I am worried about the reliability.
For ex: If I
Thanks for your response.
We are planning to move to SOLR 4.3.1 from 3.5.x. Currently we just use hard
commits every 30 minutes (as we are on 3.x), but we want to do soft commits in
the new version of SOLR, and we also want to make the commits more reliable.
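To make sure I understand the API: on 4.x a soft commit can be issued explicitly from SolrJ as in the sketch below (the URL is a placeholder), while durability would still come from the periodic hard commits (our 30-minute schedule, or autoCommit in solrconfig.xml):

import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class SoftCommitSketch {
    public static void main(String[] args) throws Exception {
        SolrServer solr = new HttpSolrServer("http://localhost:8983/solr/collection1");

        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", "42");
        doc.addField("name", "example");
        solr.add(doc);

        // Soft commit: makes recent documents searchable quickly without the cost
        // of a full flush. Signature: commit(waitFlush, waitSearcher, softCommit).
        solr.commit(true, true, true);

        solr.shutdown();
    }
}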
Are you using soft commits heavily? I heard that using soft commits heavily
(every second) and not doing a hard commit for a long time can cause out-of-memory
issues, since SOLR uses a hashmap for the transaction log.
Solr 4.4 is already released!!!
http://lucene.apache.org/solr/
We are using SOLR 4.3.1 but are not using SolrCloud now.
We currently support both push and pull indexing, and we use soft commits for
push indexing. Now, whenever we perform pull indexing (using an indexer
program), the changes made by the push indexing process (during indexing
time) might get lost
Maybe this won't work, but just a thought... Can't you use
PathHierarchyTokenizerFactory and configure it as below?
In this example, however, we see the opposite configuration, so that a query
for Books/NonFic/Science/Physics would match documents containing
Books/NonFic, Books/NonFic/Science, or Books/No
I have used JMX with SOLR before..
http://docs.lucidworks.com/display/solr/Using+JMX+with+Solr
I am currently using SOLR 4.4 but am not planning to use SolrCloud in the very near
future.
I have a 3 master / 3 slave setup. Each master is linked to its corresponding
slave. I have disabled auto polling.
We do both push indexing (using MQ) and pull indexing (using a SolrJ indexer program).
I have enabled soft
I need some advice on the best way to implement batch indexing with soft
commits / push indexing (via queue) with soft commits when using SolrCloud.
I am trying to figure out a way to:
1. Make the push indexing available in almost real time (using soft commits)
without degrading the search / indexing
I am indexing more than 300 million records; it takes less than 7 hours to
index all of them.
Send the documents in batches and also use CUSS (ConcurrentUpdateSolrServer)
for multithreading support.
Ex:
ConcurrentUpdateSolrServer server = new ConcurrentUpdateSolrServer(solrServerUrl, queueSize, threadCount);
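A slightly fuller sketch of that pattern (batching plus CUSS) might look like the following; the URL, batch size, queue size and thread count are placeholder values to tune, not recommendations:

import java.util.ArrayList;
import java.util.List;

import org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class BulkIndexer {
    public static void main(String[] args) throws Exception {
        ConcurrentUpdateSolrServer server =
                new ConcurrentUpdateSolrServer("http://localhost:8983/solr/collection1", 10000, 4);

        List<SolrInputDocument> batch = new ArrayList<SolrInputDocument>();
        for (int i = 0; i < 1000000; i++) {
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", String.valueOf(i));
            doc.addField("name", "record " + i);
            batch.add(doc);

            // Send documents in batches instead of one request per document.
            if (batch.size() == 1000) {
                server.add(batch);
                batch.clear();
            }
        }
        if (!batch.isEmpty()) {
            server.add(batch);
        }

        server.blockUntilFinished(); // wait for the background threads to drain the queue
        server.commit();
        server.shutdown();
    }
}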
We index the data using a queue (from the application) and batch processing (around
10 million documents). We want the data sent by the queue to be visible
instantaneously, even when the delta process is submitting documents. I was
thinking of submitting documents from the queue to one node and using another node
for
I am trying to match keywords with / without whitespace, but one of the cases
always fails.
For ex:
I am indexing 4 documents
name: wal mart
name: walmart
name: WalMart
name: Walmart
Now, searching on name using any of
wal mart
walmart
Walmart
WalMart
should return all the above 4 documents
Is PositionFilter deprecated as of Lucene 4.4? Is there an alternate way to
implement that functionality?
I am trying to use a phonetic algorithm to perform (approximate) search, but I need
some help finalizing the algorithm, since each algorithm has its pros and
cons.
For ex: Most phonetic algorithms match 'tattoo' for the keyword
'Toyota'. Some fail to match 'hedison' when searching for 'Hudson'.
I
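A small sketch like the one below (using Apache Commons Codec, which provides the encoders Solr's PhoneticFilterFactory wraps) can show which of these terms collide under each algorithm; the sample words are just the ones mentioned above:

import org.apache.commons.codec.language.DoubleMetaphone;
import org.apache.commons.codec.language.Metaphone;
import org.apache.commons.codec.language.Soundex;

public class PhoneticCompare {
    public static void main(String[] args) {
        String[] words = { "Toyota", "tattoo", "Hudson", "hedison" };

        Soundex soundex = new Soundex();
        Metaphone metaphone = new Metaphone();
        DoubleMetaphone doubleMetaphone = new DoubleMetaphone();

        // Print each word's code under each encoder; words that share a code
        // will match each other when that encoder is used in the analysis chain.
        for (String w : words) {
            System.out.printf("%-10s soundex=%s metaphone=%s doubleMetaphone=%s%n",
                    w, soundex.encode(w), metaphone.encode(w), doubleMetaphone.encode(w));
        }
    }
}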
I am trying to use the query below to boost the score of the dismax component, but
it doesn't seem to work:
_query_:"{!dismax qf=Fname v=$f_name}"^8.0 OR
_query_:"{!dismax qf=Lname v=$l_name}"^8.0
Can someone let me know a way to boost Dismax / function queries without
using bq?
I suppose you can use SUBSTRING and CHARINDEX to perform your task at the SQL
level, then use the value in another entity in DIH.
I am trying to use range queries to take advantage of their constant scores
on a multivalued field, but I am not sure if range queries support phrase
queries.
Ex:
The range query below works fine:
_query_:"address:([Charlotte TO Charlotte])"^5.5
The query below doesn't work:
_query_:"address:([
Currently we use multiple stopwords.txt, protwords.txt, elevate.xml and
other Solr-related config files. We use Subversion to maintain the various
versions of these files manually. I wanted to check with the forum on
the process being followed to preserve the version history other
than just
Now use DIH to get the data from the MySQL database into SOLR.
http://wiki.apache.org/solr/DataImportHandler
You need to define the field mapping (between MySQL columns and SOLR document
fields) in data-config.xml.
I don't think there's a SOLR-SVN connector available out of the box.
You can write a custom SolrJ indexer program to get the necessary data from
SVN (using a Java API) and add the data to SOLR.
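A minimal sketch of such an indexer might look like the following; the Solr URL and field names are assumptions, and fetchFilesFromSvn() is only a placeholder for whatever SVN client API (SVNKit, for example) you use to read the file contents:

import java.util.Map;

import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class SvnIndexer {

    public static void main(String[] args) throws Exception {
        SolrServer solr = new HttpSolrServer("http://localhost:8983/solr/collection1");

        // Hypothetical helper: path -> file content, fetched via your SVN client of choice.
        Map<String, String> files = fetchFilesFromSvn("http://svn.example.com/repo/trunk");

        for (Map.Entry<String, String> e : files.entrySet()) {
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", e.getKey());        // assumed unique key field
            doc.addField("path", e.getKey());      // assumed field in schema.xml
            doc.addField("content", e.getValue()); // assumed field in schema.xml
            solr.add(doc);
        }

        solr.commit();
        solr.shutdown();
    }

    // Placeholder; implement with an SVN library such as SVNKit.
    private static Map<String, String> fetchFilesFromSvn(String repoUrl) {
        throw new UnsupportedOperationException("SVN access not shown here");
    }
}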
Did you look here?
https://cwiki.apache.org/confluence/display/solr/Working+with+External+Files+and+Processes
I am trying to implement auto-suggest based on a time decay function. I have
a separate index just to store the auto-suggest keywords.
I would be calculating the frequency over time rather than ranking on raw
frequency alone.
I am thinking of using a database to perform the calculation
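To make 'frequency over time' concrete, what I have in mind is an exponentially decayed count along these lines (the one-week half-life is only an assumption for illustration):

import java.util.concurrent.TimeUnit;

public class TimeDecayScore {

    // Exponential time decay: each hit contributes 2^(-age / halfLife), so a hit
    // exactly one half-life old counts half as much as a fresh one.
    public static double decayedScore(long[] hitTimestampsMillis, long nowMillis, long halfLifeMillis) {
        double score = 0.0;
        for (long ts : hitTimestampsMillis) {
            double ageInHalfLives = (double) (nowMillis - ts) / halfLifeMillis;
            score += Math.pow(2.0, -ageInHalfLives);
        }
        return score;
    }

    public static void main(String[] args) {
        long now = System.currentTimeMillis();
        long halfLife = TimeUnit.DAYS.toMillis(7); // assumed half-life of one week

        // Three hits: today, a week ago, and a month ago.
        long[] hits = { now, now - TimeUnit.DAYS.toMillis(7), now - TimeUnit.DAYS.toMillis(30) };

        // The decayed score (roughly 1 + 0.5 + 0.05) could be stored on the
        // suggestion document and used for ranking.
        System.out.println(decayedScore(hits, now, halfLife));
    }
}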
I am using a totally separate core for storing the auto suggest keywords.
Would you be able to send me some more details on your implementation?
Is there a way to sort the returned Autosuggest list based on a particular
value (ex: score)?
I am trying to sort the returned suggestions based on a field that has been
calculated manually, but I am not sure how to use that field for sorting the
suggestions.
I am using SOLR 4.1.0 and perform atomic updates on SOLR documents.
Unfortunately there is a bug in 4.1.0
(https://issues.apache.org/jira/browse/SOLR-4297) that blocks me from using
null="true" for deleting a field through atomic update functionality. Is
there any other way to delete a field other
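One thing that might be worth trying (I have not verified whether the javabin path is hit by the same 4.1.0 bug) is sending the atomic update through SolrJ, where removing a field is expressed as a 'set' to null; the id, field name and URL below are assumptions:

import java.util.Collections;

import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class RemoveFieldExample {
    public static void main(String[] args) throws Exception {
        SolrServer solr = new HttpSolrServer("http://localhost:8983/solr/collection1");

        // Atomic update: 'set' the field to null, which removes it from the document.
        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", "doc-1");                                  // assumed unique key value
        doc.addField("price", Collections.singletonMap("set", null)); // field to delete (assumed name)

        solr.add(doc);
        solr.commit();
        solr.shutdown();
    }
}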
I am using a suggester that uses an external dictionary file for suggestions (as
below).
# This is a sample dictionary file.
iPhone3g
iPhone4	295
iPhone5c	620
iPhone4g	710
Everything works fine except for the fact that the suggester seems to be
case sensitive.
/suggest?q=ip is