Thanks for your reply Erick.
I created a simple field type as below for testing and added 'junk' to the
stopwords, but it doesn't seem to honor it when using fuzzy search.
Btw, I am using qf along with edismax and pass the value in q (sample query
below).
/solr/collection1/select?qf=title_autoCompl
Hi,
Is there a way to use stopwords and fuzzy match in a SOLR query?
The below query matches 'jack' too, and I added 'junk' to the stopwords (in
query) to avoid returning results, but it looks like it's not honoring the
stopwords when using the fuzzy search.
solr/collection1/select?app-qf=title_autoCo
Hi,
I was trying to query a field that has a specific term in it, and to my
surprise the score was different for different documents even though the
field I am searching on contained the exact same terms in all the
documents.
Any idea when this issue would come up?
Note: All the documents conta
The problem with pf2 is that it will return the document if it matches
loosely too, and then I need to do a comparison to see whether the match was
a complete phrase match or not before actually using the result. It would
become a two-step process.
Hi,
I want to do a complete "phrase contain" match.
For ex: Value is stored as below in the multivalued field
1
transfer responsibility
transfer account
Positive cases (when it should return this document):
searchTerms:how to transfer responsibility
searchTerms:show me ways to transfer re
Hi,
I have a requirement where I want to perform the 'contains' match and would
need your help to define the fieldtype and query for this requirement.
Value stored in SOLR:
transfer responsibility
transfer account
Now, I want the above document to be returned for the below keyword when I
searc
I used to use elevate.xml before (in SOLR 4.1) and never noticed this
behavior before (maybe I didn't check these specific use cases where
elevated documents don't contain any searched keyword), but I started
elevating IDs via query param now (using the elevateIds parameter) and I
started noticing t
I was under the impression that the elevate component would only elevate if the
document is part of the returned result set (at some position) for that
searched keyword. Is that true?
I see that the results are elevated even if the elevated document doesn't
match the keyword (score = 0) now. I w
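(For reference, a minimal sketch of elevating by ID at query time; the query term and document IDs below are illustrative, not taken from the original thread.)
/solr/collection1/select?q=memory&enableElevation=true&forceElevation=true&elevateIds=DOC-1,DOC-42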
Walter, it's just that I have a use case (to evaluate one field over another)
for which I am trying out multiple solutions in order to avoid making
multiple calls to SOLR.
I am trying to do a Short-circuit evaluation.
Short-circuit evaluation, minimal evaluation, or McCarthy evaluation (after
John
Thanks Erick. I will check this out.
You are right. I don't care about the score; rather, I want a document
containing a specific term in a specific field to be evaluated first before
checking the next field.
I am trying to figure out a way to form a boolean (||) query in SOLR.
Ideally my expectation is that with the boolean operator ||, if the first term
is true the second term shouldn't be evaluated.
&q=searchTerms:"testing" || matchStemming:"stemming"
works the same as
&q=searchTerms:"testing" OR matchStemming:"st
Is there a way to do pivot grouping (group within a group) in SOLR?
We initially group the results by category, and in turn we are trying to group
the data under one category based on another field. Is there a way to do
that?
Categories (group by)
|--Shop
|---Color
Hi,
I am using the suggester component in SOLR 5.5.1 and sort the matching
suggestions based on a custom field (lookupCount). The below
configuration seems to work fine, but it's returning the matching term even if
the weight is set to 0. Is there a way to restrict returning the matching
term based
It's still throwing an error without quotes.
solr start -e cloud -noprompt -z
localhost:2181,localhost:2182,localhost:2183
Invalid command-line option: localhost:2182
Usage: solr start [-f] [-c] [-h hostname] [-p port] [-d directory] [-z zkHost]
       [-m memory] [-e example] [-s solr.solr.home] [-a "add
Ok, when I run the below command it looks like it's ignoring the double quotes.
solr start -c -z "localhost:2181,localhost:2182,localhost:2183" -e cloud
This interactive session will help you launch a SolrCloud cluster on your
local
workstation.
To begin, how many Solr nodes would you like to run
I downloaded the latest version of SOLR (5.5.0) and also installed ZooKeeper
on ports 2181, 2182, and 2183, and it's running fine.
Now when I try to start the SOLR instance using the below command, it's just
showing help content rather than executing the command.
bin/solr start -e cloud -z localhost:2181,l
Hi,
I am using the KStem factory for stemming. This stemmer converts 'france to
french', 'chinese to china', etc. I am good with this stemming, but I am
trying to boost the results that contain the original term compared to the
stemmed terms. Is this possible?
Thanks,
Learner
Thanks for your reply.
The overall design has changed a little bit.
Now I will be sending the SKU id (the SKU id is in the SOLR document) to an
external API and it will return a new price to me for that SKU based on some
logic (I won't be calculating the new price).
Once I get that value I need to use that ne
Thanks for your response.
Hi,
We have a price field in our SOLR XML feed that we currently use for
sorting. We are planning to introduce discounts based on login credentials
and we have to dynamically calculate price (using base price in SOLR feed)
based on a specific discount returned by an API. Now after the discount is
Hi,
Note: I have very basic knowledge on NLP..
I am working on an answer engine prototype: when the user enters a
keyword and searches for it, we show them the answer corresponding to that
keyword (rather than displaying multiple documents that match the keyword).
For Ex:
When user searches
Thanks for your response.
I fixed this issue by using the
Hi,
I am trying to figure out a way to implement partial-match autosuggest, but
it doesn't work in some cases.
When I search for iphone 5s, I am able to see the below results.
title_new:Apple iPhone 5s - 16GB - Gold
but when I search for iphone gold (in title_new field), I am not able to see
th
I am using the below code to do a partial update (in SOLR 4.2):
Map<String, Object> partialUpdate = new HashMap<String, Object>();
partialUpdate.put("set", newValue);
doc.setField("description", partialUpdate);
server.add(doc);
server.commit();
I am seeing the description value stored with {set=...}. Any idea why this
is getting added?
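(For comparison, a minimal self-contained sketch of a SolrJ atomic "set" update; the core URL, id, and field value are illustrative. Atomic updates also need the uniqueKey to be sent and <updateLog/> enabled in solrconfig.xml, which is one commonly reported cause of the map being stored literally as {set=...} when missing.)

import java.util.Collections;

import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class AtomicUpdateSketch {
    public static void main(String[] args) throws Exception {
        // Illustrative core URL
        HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr/collection1");

        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", "SKU-1");  // the uniqueKey must be sent with an atomic update
        // The Map wrapper tells Solr to "set" this field instead of replacing the whole document
        doc.addField("description", Collections.singletonMap("set", "new description"));

        server.add(doc);
        server.commit();
        server.shutdown();
    }
}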
Thanks for your response.. It's indeed a good idea..I will try that out..
I am using an external field for the price field since it changes every 10 minutes.
I am able to display the price / use range queries to display the documents
based on a price range.
I am trying to see if it's possible to generate facets using the external field.
I understand that faceting requires indexing
We index around 10k documents in SOLR and use the inbuilt suggest functionality
for autocomplete.
We have a field that contains a flag that is used to show or hide the
documents from search results.
I am trying to figure out a way to control the terms added to autosuggest
index (to skip the document
I have a requirement to preserve the 0 after the decimal point. Currently, with
the below field type:
27.50 is stripped as 27.5
27.00 is stripped as 27.0
27.90 is stripped as 27.9
27.5
I also tried using double but even then the 0's are getting stripped.
27.5
Input data:
27.50
Hi,
I need to change the components (inside a request handler) dynamically using
query parameters instead of creating multiple request handlers. Is it
possible to do this on the fly from the query?
For Ex:
change the highlight search component to use different search component
based on a query p
Hi,
I use the below highlight search component in one of my request handlers.
I am trying to figure out a way to change the value of highlight search
component dynamically from the query. Is it possible to modify the
parameters dynamically using the query (without creating another
searchcomponent)
Hi,
This might be a silly question..
I came across the below query online but I couldn't really understand the
bolded part. Can someone help me understand this part of the query?
deviceType_:"Cell" OR deviceType_:"Prepaid" *OR (phone
-data_source_name:("Catalog" OR "Device How To - Interactiv
I see that the threads parameter has been removed from DIH in all versions
starting with SOLR 4.x. Can someone let me know the best way to initiate
indexing in multi-threaded mode when using DIH now? Is there a way to do that?
Dynamically adding fields to the schema has not been released yet.
https://issues.apache.org/jira/browse/SOLR-3251
We used dynamic fields and copy fields for dynamically creating facets.
We had too many dynamic fields (retrieved from a database table) and we had
to make sure that facets exist for the
http://wiki.apache.org/solr/SolrPerformanceFactors
If you do a lot of field based sorting, it is advantageous to add explicitly
warming queries to the "newSearcher" and "firstSearcher" event listeners in
your solrconfig which sort on those fields, so the FieldCache is populated
prior to any querie
The below config file works fine with SQL Server. Make sure you are using the
correct database / server name.
Not sure what you are trying to achieve.
I assume you are trying to return the documents that don't contain any
value in a particular field.
You can use the below query for that:
http://localhost:8983/solr/doc1/select?q=-text:*&debugQuery=on&defType=lucene
I would suggest taking the suggested string and creating another query to
Solr along with the filter parameter.
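(A rough illustration of that two-step flow; the handler names, field, and filter below are made up for the example, not from the original thread.)
/solr/collection1/suggest?q=ipho                          -> returns a suggestion such as "iphone"
/solr/collection1/select?q=title:iphone&fq=inStock:true   -> follow-up query with the filter applied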
I suppose you can implement custom hashing by using the "_shard_" field. I am
not sure about this, but I came across this approach some time back.
At query time, you can specify the "shard.keys" parameter.
check this link..
http://stackoverflow.com/questions/11319465/geoclusters-in-solr
Ok I wrote a custom Java transformer as below. Can someone confirm if this is
the right way?
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import org.apache.solr.handler.dataimport.Context;
import org.apache.solr.handler.dataimport.DataImporter;
import org.apache.solr.h
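(The rest of the class is cut off in the archive; for reference, a minimal sketch of what such a DIH transformer could look like for pairing the lat/long lists. The column names latitude/longitude and the target field geo are assumptions.)

import java.util.ArrayList;
import java.util.List;
import java.util.Map;

import org.apache.solr.handler.dataimport.Context;
import org.apache.solr.handler.dataimport.Transformer;

public class LatLongTransformer extends Transformer {
    @Override
    public Object transformRow(Map<String, Object> row, Context context) {
        Object lat = row.get("latitude");   // e.g. "33.72,34.47"
        Object lon = row.get("longitude");  // e.g. "-117.17,-117.57"
        if (lat == null || lon == null) {
            return row;
        }
        String[] lats = lat.toString().split(",");
        String[] lons = lon.toString().split(",");
        List<String> geo = new ArrayList<String>();
        for (int i = 0; i < Math.min(lats.length, lons.length); i++) {
            // pair each latitude with its longitude as "lat,lon"
            geo.add(lats[i].trim() + "," + lons[i].trim());
        }
        row.put("geo", geo);  // multivalued location field (assumed name)
        return row;
    }
}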
I am trying to combine latitude and longitude data extracted from a text file
using the data import handler.
One document can contain multiple latitudes / longitudes.
My data would be of the format [lat,lat], [long,long]
Example:
[33.7209548950195, 34.474838],[-117.176193237305, -117.573463]
I am cur
Erick,
Thanks a lot for your response.
Just to confirm if I am right, I need to use solr.xml even if I change the
folder structure as below. Am I right?
Do you have any idea when the "discovery-based" core enumeration feature will
be released?
I suppose you can use fq with SpellCheckComponent but I haven't tried it yet.
https://issues.apache.org/jira/browse/SOLR-2010
I think if you use validate=false in schema.xml, field or dynamicField level,
Solr will not disable validation.
I think this only works in Solr 4.3 and above.
Try the below code:
SolrQuery query = new SolrQuery();
query.setQueryType("/admin/luke");
QueryResponse rsp = server.query(query, METHOD.GET);
System.out.println(rsp.getResponse());
Thanks a lot for your response Jack. I figured out the issue; this file is
currently generated by a Perl program and it seems to be a bug in that program.
Thanks anyway.
For some reason I am getting the below error when parsing synonyms using a
synonyms file.
Synonyms File:
http://www.pastebin.ca/2395108
The server encountered an internal error ({msg=SolrCore 'solr' is not
available due to init failure: java.io.IOException: Error parsing synonyms
file:,trace=org.a
Not sure if this is the right way:
I just moved solr.xml outside of the solr directory and made changes to solr.xml
to make it point to the solr directory, and it seems to work fine as before. Can
someone confirm if this is the right way to configure it when running a single
instance of Solr?
I am in the process of migrating SOLR 3.x to 4.3.0.
I am trying to figure out a way to run a single instance of SOLR without
modifying the directory structure. Is it mandatory to have a folder named
collection1 in order for the new SOLR server to work? I see that by default
it always searches the confi
It seems this feature is yet to be implemented.
https://issues.apache.org/jira/browse/SOLR-866
As suggested by Shawn, try to change the JVM; this might resolve your issue.
I had seen this error 'java.lang.VerifyError' before (not specific to SOLR)
when compiling code using JDK 1.7.
After some research I figured out that code compiled using Java 1.7 requires
stack map frame instructions. If y
Can you check whether you have the correct SolrJ client library version in both
Nutch and the Solr server?
Not sure if I understand your situation. I am not sure how you would relate
the data between 2 tables if there's no relationship. Are you trying to just
dump random values from 2 tables into a document? Consider:
Table1: Name | id
peter | 1
john  | 2
mike  | 3
Table2: Title | TitleId
CEO
You don't really need to have a relationship, but the unique id should be
unique in a document. I had mentioned the relationship due to the fact
that the unique key was present only in one table but not in the other.
Check out this link for more information on importing data from multiple tables:
ht
Not sure if this solution will work for you, but this is what I did to
implement nested grouping using SOLR 3.x.
The simple idea behind it is to concatenate 2 fields, index them into a single
field, and group on that field.
http://stackoverflow.com/questions/12202023/field-collapsing-grouping-how-to-ma
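(A rough sketch of that idea with made-up field names: at index time store a combined field such as shop_color = shop + "|" + color, then group on it.)
Indexed values would look like "Shop|Red", "Shop|Blue"; the query then becomes:
/solr/collection1/select?q=*:*&group=true&group.field=shop_color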
The below error clearly says that you have declared a unique id but that
unique id is missing for some documents.
org.apache.solr.common.SolrException: [doc=null] missing required field:
nameid
This is mainly because you are trying to import 2 tables into a
document without any relationship
I have 5 shards that have different data indexed in them (each document has a
unique id).
Now when I perform dynamic updates (push indexing) I need to update the
document corresponding to the unique id that needs to be updated, but I
won't know which core the corresponding document is present in.
I am not sure of the best way to search across multiple collections using SOLR
4.3.
Suppose each collection has its own config files and I perform various
operations on collections individually, but when I search I want the search
to happen across all collections. Can someone let me know how to pe
Might not be a solution, but I had asked a similar question before. Check out
this thread:
http://lucene.472066.n3.nabble.com/Is-there-a-way-to-load-multiple-schema-when-using-zookeeper-td4058358.html
You can create multiple collections and each collection can use completely
different sets of conf
I tested using the new geospatial class; it works fine with the new spatial
type using class="solr.SpatialRecursivePrefixTreeFieldType"
http://wiki.apache.org/solr/SolrAdaptersForLuceneSpatial4
You can dynamically set the boolean value by using a script transformer when
indexing the data. You don't really
I used to face this issue more often when I used CachedSqlEntityProcessor in
DIH.
I then started indexing in batches (by including a where condition) instead of
indexing everything at once.
You can refer to other available options for the MySQL driver:
http://dev.mysql.com/doc/refman/5.0/en/connector
select?q=-location_field:* worked for me
A Solr index does not need a unique key, but almost all indexes use one.
http://wiki.apache.org/solr/UniqueKey
Try the below query, passing the id as id instead of titleid.
A proper dataimport config will look like:
Check out this
http://stackoverflow.com/questions/5549880/using-solr-for-indexing-multiple-languages
http://wiki.apache.org/solr/LanguageAnalysis#French
French stop words file (sample):
http://trac.foswiki.org/browser/trunk/SolrPlugin/solr/multicore/conf/stopwords-fr.txt
Solr includes three stem
That was a very silly mistake. I forgot to add the values to the array before
putting it inside the row. The below code works. Thanks a lot!
Thanks a lot for your response Hoss. I thought about using a ScriptTransformer
too, but just thought of checking if there is any other way to do that.
Btw, for some reason the values are getting overridden even though it's a
multivalued field. Not sure where I am going wrong,
for latlong values
I am trying to combine latitude and longitude data extracted from a text file
using the data import handler, but I am getting the below error whenever I run
my data import with the geo (lat,long) field. The import works fine without the
geo field.
I assume this error is due to the fact
Did you try escaping the double quotes when you are making the HTTP request?
HttpGet request = new HttpGet(getSolrClient().getBaseUrl() + ADMIN_CORE_CONSTRUCT
        + "?action=" + action + "&name=\"" + name + "\"");
HttpResponse response = client.execute(request);
I would use the below method to create a new core on the fly:
CoreAdminResponse e = CoreAdminRequest.createCore("name", "instanceDir",
server);
http://lucene.apache.org/solr/4_3_0/solr-solrj/org/apache/solr/client/solrj/response/CoreAdminResponse.html
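(A slightly fuller, self-contained sketch of that call for SolrJ 4.x; the base URL, core name, and instance directory are illustrative.)

import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.request.CoreAdminRequest;
import org.apache.solr.client.solrj.response.CoreAdminResponse;

public class CreateCoreSketch {
    public static void main(String[] args) throws Exception {
        // CoreAdmin requests go to the Solr root URL, not to a specific core
        HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr");
        CoreAdminResponse resp = CoreAdminRequest.createCore("newcore", "newcore", server);
        System.out.println("status: " + resp.getStatus());
        server.shutdown();
    }
}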
You can refer to this post to use DocTransformers:
http://java.dzone.com/news/solr-40-doctransformers-first
You can also check out this link.
http://lucene.472066.n3.nabble.com/Is-there-a-way-to-remove-caches-in-SOLR-td4061216.html#a4061219
You can use this tool to analyze the logs..
https://github.com/dfdeshom/solr-loganalyzer
We use SolrMeter for performance / stress testing.
https://code.google.com/p/solrmeter/
Why don't you follow this tutorial to set up SOLR on Tomcat:
http://wiki.apache.org/solr/SolrTomcat
solrconfig.xml - the lib directives specified in the configuration file are
the locations where Solr will look for the jars.
solr.xml - in the case of a multi-core setup, you can have a sharedLib for all
the collections. You can add the JDBC driver to the sharedLib folder.
Looks like what you need is pivot facets (a range within a range):
http://wiki.apache.org/solr/SimpleFacetParameters#Pivot_.28ie_Decision_Tree.29_Faceting
Pivot faceting allows you to facet within the results of the parent facet:
Category1 (17)
  item 1 (9)
  item 2 (8)
Category2 (6)
  item 3 (6)
Categor
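(A rough example of the corresponding request; the field names category and item are made up.)
/solr/collection1/select?q=*:*&facet=true&facet.pivot=category,item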
As far as I know, partial update in Solr 4.x doesn't partially update the
Lucene index, but instead removes a document from the index and indexes an
updated one. The underlying Lucene always requires deleting the old
document and indexing the new one.
We usually don't use partial update when updating
I am not an expert on this one, but I would try this: implement the
SolrCoreAware interface and override the inform method to make it core aware,
something like:
public void inform( SolrCore core )
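(A minimal sketch of that interface for illustration; the class name is made up, and whether a particular plugin type is allowed to be SolrCoreAware can depend on the Solr version.)

import org.apache.solr.core.SolrCore;
import org.apache.solr.util.plugin.SolrCoreAware;

public class MyCoreAwareComponent implements SolrCoreAware {
    private SolrCore core;

    @Override
    public void inform(SolrCore core) {
        // Solr calls this once the core has finished initializing
        this.core = core;
    }
}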
Please create a new topic for any new questions..
How are you indexing the documents? Are you using an indexing program?
The below post discusses the same issue..
http://lucene.472066.n3.nabble.com/removing-write-lock-file-in-solr-after-indexing-td3699356.html
Hoss, thanks a lot for the explanation.
We override most of the methods of the query
component (prepare, handleResponses, finishStage, etc.) to incorporate custom
logic, and we set the _responseDocs values based on custom logic (after
filtering out some data) and then we call the parent (super) method (quer
My assumptions were right :)
I was able to fix this error by copying all my custom jars into the
webapp/WEB-INF/lib directory and everything started working.
Hi,
I am overriding the query component and creating a custom component. I am
using _responseDocs from org.apache.solr.handler.component.ResponseBuilder
to get the values. I have my component in the same package
(org.apache.solr.handler.component) to access the _responseDocs value.
Everything works f
I totally missed that..Sorry about that :)...It seems to work fine now...
Not sure if you are looking for this..
http://wiki.apache.org/solr/FieldCollapsing
# has a separate meaning in a URL; you need to encode it.
http://lucene.apache.org/core/3_6_0/queryparsersyntax.html#Escaping%20Special%20Characters.
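(For example, # is URL-encoded as %23; the field name here is made up.)
q=deviceId:%23123 instead of q=deviceId:#123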
Check out:
wiki.apache.org/solr/LanguageAnalysis
For some reason the above site takes a long time to open.
Ok, I removed all my custom components from the findperson request handler.
lucene
explicit
10
AND
person_name_all_i
50
32
*:*
{!switch case='*:*' default=$fq_bbox v=$fps_latlong}
Hoss, for some reason this doesn't work when I pass the latlong value via the
query.
This is the query; it just returns all the values for fname='peter'
(it doesn't filter for Tarmac, Florida).
fl=*,score&rows=10&qt=findperson&fps_latlong=26.22084,-80.29&fps_fname=peter
solrconfig.xml:
Hoss, you read my mind. Thanks a lot for your awesome
explanation! You rock!!!
Erik,
I am trying to enable / disable a part of fq based on a particular value
passed from the query.
For example: if I have a value for the keyword 'where' in the query, then I
would like to enable this fq, else just ignore it.
select?where="New York,NY"
Enable it only when 'where' has some value. (I g
David, I felt like there should be a flag with which we can either throw the
error message or do nothing in case of bad inputs..
I am using the SOLR geospatial capabilities for filtering the results based
on a particular radius (something like below). I have added the below fq
query in solrconfig and am passing the latitude and longitude information
dynamically.
select?q=firstName:john&fq={!bbox%20sfield=geo%20pt=40.279392
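(The fq above is cut off in the archive; for reference, a complete bbox filter typically looks like the following, with an illustrative point and distance.)
select?q=firstName:john&fq={!bbox sfield=geo pt=40.279392,-74.0 d=10}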
Try this:
SKU from
CAT_TABLE WHERE CategoryLevel=1" cacheKey="Cat1.SKU"
cacheLookup="Product.SKU" processor="CachedSqlEntityProcessor">
Also, I'm not sure if you are using the Alpha / Beta release of SOLR 4.0.
In Solr 3.6, 3.6.1, 4.0-Alpha &
Thanks a lot for your response!!!
Try like this...
I was able to make it work as below:
SolrQueryResponse rsp = new SolrQueryResponse();
SolrQueryRequest req = new LocalSolrQueryRequest(_requestHandler.getCore(),
        new ModifiableSolrParams());
SolrRequestInfo.setReque
Why don't you boost during query time?
Something like q=superman&qf=title^2 subject
You can refer: http://wiki.apache.org/solr/SolrRelevancyFAQ
I am trying to migrate the tests for custom SOLR components written for SOLR
3.5.0 to SOLR 4.3.0. This is a simple index test for distributed search /
index (not using SolrCloud, just using shards). For some reason one of the
tests fails with the below error message. The tests pass without any issu