I downloaded the latest version of SOLR (5.5.0) and also installed ZooKeeper
on ports 2181, 2182, and 2183, and it is running fine.
Now when I try to start the SOLR instance using the command below, it just
shows the help content rather than executing the command.
bin/solr start -e cloud -z localhost:2181,l
OK, when I run the command below, it looks like it is ignoring the double quotes.
solr start -c -z "localhost:2181,localhost:2182,localhost:2183" -e cloud
This interactive session will help you launch a SolrCloud cluster on your local workstation.
To begin, how many Solr nodes would you like to run
It still throws an error without the quotes.
solr start -e cloud -noprompt -z localhost:2181,localhost:2182,localhost:2183
Invalid command-line option: localhost:2182
Usage: solr start [-f] [-c] [-h hostname] [-p port] [-d directory] [-z zkHost] [-m memory] [-e example] [-s solr.solr.home] [-a "add
Hi,
We have a price field in our SOLR XML feed that we currently use for
sorting. We are planning to introduce discounts based on login credentials,
and we have to dynamically calculate the price (using the base price in the
SOLR feed) based on a specific discount returned by an API. Now after the discount is
Thanks for your response.
Thanks for your reply.
The overall design has changed a little bit.
Now I will be sending the SKU id (the SKU id is in the SOLR document) to an external
API, and it will return a new price to me for that SKU based on some logic (I
won't be calculating the new price).
Once I get that value I need to use that ne
Hi,
I am using the KStem factory for stemming. This stemmer converts 'france' to
'french', 'chinese' to 'china', etc. I am fine with this stemming, but I am
trying to boost the results that contain the original term over results that
only match the stemmed form. Is this possible?
Thanks,
Learner
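One common approach for the question above (not from the original thread) is to keep an
unstemmed copy of the field, for example via copyField into a hypothetical field
body_exact, and boost it over the stemmed field at query time. A minimal SolrJ sketch
with placeholder field names:

import org.apache.solr.client.solrj.SolrQuery;

// exact (unstemmed) matches in body_exact score higher than stemmed matches in body
SolrQuery query = new SolrQuery("france");
query.set("defType", "edismax");
query.set("qf", "body_exact^2 body");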
Hi,
Note: I have only very basic knowledge of NLP.
I am working on an answer-engine prototype: when the user enters a
keyword and searches for it, we show them the answer corresponding to that
keyword (rather than displaying multiple documents that match the keyword).
For Ex:
When user searches
Hi,
I am using the suggester component in SOLR 5.5.1 and sorting the matching
suggestions based on a custom field (lookupCount). The configuration below
seems to work fine, but it returns the matching term even if
the weight is set to 0. Is there a way to restrict returning the matching
term based
Is there a way to do pivot grouping (group within a group) in SOLR?
We initially group the results by category, and in turn we are trying to group
the data under one category based on another field. Is there a way to do
that?
Categories (group by)
|--Shop
|---Color
Hi,
This might be a silly question.
I came across the query below online, but I couldn't really understand the
bolded part. Can someone help me understand this part of the query?
deviceType_:"Cell" OR deviceType_:"Prepaid" *OR (phone
-data_source_name:("Catalog" OR "Device How To - Interactiv
Hi,
I use the highlight search component below in one of my request handlers.
I am trying to figure out a way to change the values of the highlight search
component dynamically from the query. Is it possible to modify the
parameters dynamically using the query (without creating another
search component)?
Hi,
I need to change the components (inside a request handler) dynamically using
query parameters instead of creating multiple request handlers. Is it
possible to do this on the fly from the query?
For Ex:
change the highlight search component to use a different search component
based on a query p
I have a requirement to preserve the 0 after the decimal point. Currently, with the
field type below,
27.50 is stripped to 27.5
27.00 is stripped to 27.0
27.90 is stripped to 27.9
27.5
I also tried using double, but even then the 0s are getting stripped.
27.5
Input data:
27.50
We index around 10k documents in SOLR and use the built-in suggest functionality
for autocomplete.
We have a field that contains a flag that is used to show or hide the
documents in search results.
I am trying to figure out a way to control the terms added to the autosuggest
index (to skip the document
I am using an external field for the price field since it changes every 10 minutes.
I am able to display the price and use range queries to display the documents
based on a price range.
I am trying to see if it's possible to generate facets using an external field.
I understand that faceting requires indexing
Thanks for your response.. It's indeed a good idea..I will try that out..
I am using the code below to do a partial (atomic) update in SOLR 4.2.
Map<String, Object> partialUpdate = new HashMap<String, Object>();
partialUpdate.put("set", newValue);            // newValue is a placeholder for the new field value
doc.setField("description", partialUpdate);    // doc is the SolrInputDocument being sent
server.add(doc);
server.commit();
I am seeing the description value stored with {set=...} in it. Any idea why this
is getting added?
Hi,
I am trying to figure out a way to implement partial match autosuggest but
it doesn't work in some
cases.
When I search for iphone 5s, I am able to see the below results.
title_new:Apple iPhone 5s - 16GB - Gold
but when I search for iphone gold (in title_new field), I am not able to see
th
Thanks for your response.
I fixed this issue by using the
I have used multiple schema files by using multiple cores, but I am not sure if I
will be able to use multiple schema configurations when integrating SOLR with
ZooKeeper. Can someone please let me know if it's possible and, if so, how?
Hi,
I am currently using SOLR 4.2 to index geospatial data. I have configured my
geospatial field as below.
I just want to make sure that I am using the correct SOLR class for
performing geospatial search, since I am not sure which of the two
classes (LatLonType vs SpatialRecursivePr
I am trying to create performance metrics for SOLR. I don't want the searcher
to warm up when I issue a query since I am trying to collect metrics for
cold search. Is there a way to disable warming?
Hi,
I am trying to override the query component to implement custom logic, but I am
not sure how to get the _responseDocs value since it is not visible in my
class (the field ResponseBuilder._responseDocs is not visible). Is there a
way to get this value?
public class CustomqueryComponent extends
I am in the process of migrating our existing SOLR (version 3.5) to a new
version of SOLR (4.3.0).
Currently we delete documents from SOLR using the code below (deletion
happens via a message queue).
UpdateRequestProcessor processor = _getProcessor(); // return SOLR core
information
Dele
Shawn,
This is indeed a custom SOLR update component used by the message queue to
perform various operations (add, update, delete, etc.). We have implemented
custom logic in this component.
Thanks,
BB
Hi,
We have created lots of custom components using the SOLR 3.5 API. We are now in
the process of migrating the components to work with the SOLR 4.2 API, but it's
becoming very difficult to make the changes to our custom components by
going through the API one by one and understanding each and every method
I am using SOLR 4.3.0 and have created multiple custom components.
I am getting the error below when I run tests (using the SOLR 4.3 test
framework) against one of the custom components. All the tests pass, but I
still get the error below once the test run completes. Can someone help me
resolve this error
I am using SOLR 4.3.0. I am currently getting the error below when running
tests for custom SOLR components. The tests pass without any issues, but I am
getting the error below after the tests are done. Can someone let me know how to
resolve this issue?
thread leaked from SUITE scope at com.solr.acti
Thanks a lot for your response.
I figured out that I was not closing the LocalSolrQueryRequest after handling
the response. The error got resolved after closing the request object.
I am trying to migrate the tests for custom SOLR components written for SOLR
3.5.0 to SOLR 4.3.0. This is a simple index test for distributed search /
indexing (not using SolrCloud, just using shards). For some reason one of the
tests fails with the error message below. The tests pass without any issu
Why don't you boost at query time?
Something like q=superman&qf=title^2 subject
You can refer to: http://wiki.apache.org/solr/SolrRelevancyFAQ
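A SolrJ version of the same query-time boost, as a hedged sketch (dismax and the field
names are taken from the example above):

import org.apache.solr.client.solrj.SolrQuery;

SolrQuery query = new SolrQuery("superman");
query.set("defType", "dismax");        // dismax/edismax honours the qf boosts
query.set("qf", "title^2 subject");    // matches in title count twice as much as in subject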
I was able to make it work as below..
SolrQueryResponse rsp = new SolrQueryResponse();
SolrQueryRequest req = new LocalSolrQueryRequest(_requestHandler.getCore(), new ModifiableSolrParams());
SolrRequestInfo.setReque
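Piecing the snippet above together with the earlier fix ("the error got resolved after
closing the request object"), a hedged sketch of the full pattern might look like this;
_requestHandler.getCore() is assumed from the original snippet:

import org.apache.solr.common.params.ModifiableSolrParams;
import org.apache.solr.request.LocalSolrQueryRequest;
import org.apache.solr.request.SolrQueryRequest;
import org.apache.solr.request.SolrRequestInfo;
import org.apache.solr.response.SolrQueryResponse;

SolrQueryResponse rsp = new SolrQueryResponse();
SolrQueryRequest req = new LocalSolrQueryRequest(
        _requestHandler.getCore(), new ModifiableSolrParams());
SolrRequestInfo.setRequestInfo(new SolrRequestInfo(req, rsp));
try {
    // ... execute the request handler and read rsp here ...
} finally {
    SolrRequestInfo.clearRequestInfo();
    req.close();   // not closing the request is what caused the "thread leaked" test error
}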
Try like this...
Thanks a lot for your response!!!
Try this..
SKU* from
CAT_TABLE WHERE CategoryLevel=1" cacheKey=*"Cat1.SKU"*
cacheLookup="Product.SKU" processor="CachedSqlEntityProcessor">
Also not sure if you are using Alpha / Beta release of SOLR 4.0.
In Solr 3.6, 3.6.1, 4.0-Alpha &
I am using the SOLR geospatial capabilities for filtering results within
a particular radius (something like below). I have added the fq
query below in solrconfig.xml and am passing the latitude and longitude
information dynamically.
select?q=firstName:john&fq={!bbox%20sfield=geo%20pt=40.279392
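The same bbox filter can also be built on the client side instead of being hard-coded in
solrconfig.xml; a hedged SolrJ sketch (sfield name, point, and distance are example values,
the coordinates being the Tarmac, Florida ones used later in this thread):

import org.apache.solr.client.solrj.SolrQuery;

SolrQuery query = new SolrQuery("firstName:john");
// keep documents whose geo field falls within the box around the point; d is the distance in km
query.addFilterQuery("{!bbox sfield=geo pt=26.22084,-80.29 d=10}");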
David, I felt like there should be a flag with which we can either throw the
error message or do nothing in case of bad inputs..
Erik,
I am trying to enable / disable a part of the fq based on a particular value
passed in the query.
For Ex: if I have a value for the keyword 'where' in the query, then I would
like to enable this fq, otherwise just ignore it.
select?where="New York,NY"
Enable it only when 'where' has some value. (I g
Hoss, you read my mind. Thanks a lot for your awesome
explanation! You rock!!!
Hoss, for some reason this doesn't work when I pass the latlong value via the
query.
This is the query; it just returns all the results for fname='peter'
(it doesn't filter for Tarmac, Florida).
fl=*,score&rows=10&qt=findperson&fps_latlong=26.22084,-80.29&fps_fname=peter
*solrconfig.xml*
OK, I removed all my custom components from the findperson request handler.
lucene
explicit
10
AND
person_name_all_i
50
32
*:*
{!switch case='*:*' default=$fq_bbox
v=$fps_latlong}
Check out:
wiki.apache.org/solr/LanguageAnalysis
For some reason the above site takes a long time to open.
# has a special meaning in a URL; you need to encode it:
http://lucene.apache.org/core/3_6_0/queryparsersyntax.html#Escaping%20Special%20Characters.
Not sure if you are looking for this..
http://wiki.apache.org/solr/FieldCollapsing
I totally missed that..Sorry about that :)...It seems to work fine now...
Hi,
I am overriding the query component and creating a custom component. I am
using _responseDocs from org.apache.solr.handler.component.ResponseBuilder
to get the values. I have my component in the same package
(org.apache.solr.handler.component) to access the _responseDocs value.
Everything works f
My assumptions were right :)
I was able to fix this error by copying all my custom jars into the
webapp/WEB-INF/lib directory, and everything started working.
Hoss, thanks a lot for the explanation.
We override most of the methods of the query
component (prepare, handleResponses, finishStage, etc.) to incorporate custom
logic; we set the _responseDocs values based on that custom logic (after
filtering out some data) and then we call the parent (super) method (quer
How are you indexing the documents? Are you using an indexing program?
The post below discusses the same issue:
http://lucene.472066.n3.nabble.com/removing-write-lock-file-in-solr-after-indexing-td3699356.html
Please create a new topic for any new questions..
I am not an expert on this one, but I would try this: implement the
SolrCoreAware interface and override the inform method to make it core aware,
something like:
public void inform( SolrCore core )
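A minimal sketch of what that suggestion could look like (the class name is a placeholder;
whether your particular plugin type is allowed to be SolrCoreAware depends on Solr's
plugin rules):

import org.apache.solr.core.SolrCore;
import org.apache.solr.util.plugin.SolrCoreAware;

public class MyCoreAwareComponent implements SolrCoreAware {

    private SolrCore core;

    @Override
    public void inform(SolrCore core) {
        // called once the core has finished loading; keep the reference for later use
        this.core = core;
    }
}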
As far as I know, partial update in Solr 4.x doesn't partially update the Lucene
index, but instead removes the document from the index and indexes an
updated one. The underlying Lucene always requires deleting the old
document and indexing the new one.
We usually don't use partial updates when updating
Looks like what you need is pivot facets (a range within a range):
http://wiki.apache.org/solr/SimpleFacetParameters#Pivot_.28ie_Decision_Tree.29_Faceting
Pivot faceting allows you to facet within the results of the parent facet:
Category1 (17)
  item 1 (9)
  item 2 (8)
Category2 (6)
  item 3 (6)
Categor
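In SolrJ a pivot like the category/item example above can be requested roughly like this
(a hedged sketch; the field names category and item are placeholders):

import org.apache.solr.client.solrj.SolrQuery;

SolrQuery query = new SolrQuery("*:*");
query.setFacet(true);
query.set("facet.pivot", "category,item");   // facet on item within each category bucket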
solrconfig.xml - the lib directives specified in the configuration file are
the locations where Solr will look for the jars.
solr.xml - in the case of a multi-core setup, you can have a sharedLib for all
the collections; you can add the JDBC driver to the sharedLib folder.
Why don't you follow this tutorial to set up SOLR on Tomcat?
http://wiki.apache.org/solr/SolrTomcat
You can use this tool to analyze the logs:
https://github.com/dfdeshom/solr-loganalyzer
We use SolrMeter for performance / stress testing:
https://code.google.com/p/solrmeter/
You can also check out this link.
http://lucene.472066.n3.nabble.com/Is-there-a-way-to-remove-caches-in-SOLR-td4061216.html#a4061219
You can refer to this post on using DocTransformers:
http://java.dzone.com/news/solr-40-doctransformers-first
I would use the method below to create a new core on the fly:
CoreAdminResponse e = CoreAdminRequest.createCore("name", "instanceDir",
server);
http://lucene.apache.org/solr/4_3_0/solr-solrj/org/apache/solr/client/solrj/response/CoreAdminResponse.html
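For context, a slightly fuller hedged sketch of the same call (the URL and core names are
examples only):

import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.request.CoreAdminRequest;
import org.apache.solr.client.solrj.response.CoreAdminResponse;

HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr");
// the instance directory should already exist with its conf/ files in place
CoreAdminResponse resp = CoreAdminRequest.createCore("newcore", "newcore", server);
System.out.println("status: " + resp.getStatus());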
Did you try escaping the double quotes when making the HTTP request?
HttpGet req = new
HttpGet(\"+getSolrClient().getBaseUrl()+ADMIN_CORE_CONSTRUCT+"?action="+action+"&name="+name+\");
HttpResponse response = client.execute(request);
I am trying to combine latitude and longitude data extracted from a text file
using the data import handler, but I am getting the error below whenever I run my
data import with the geo (lat,long) field. The import works fine without the geo
field.
I assume this error is due to the fact
Thanks a lot for your response, Hoss. I thought about using a ScriptTransformer
too, but just thought I'd check whether there is any other way to do it.
By the way, for some reason the latlong values are getting overridden even
though it's a multivalued field. Not sure where I am going wrong!
That was a very silly mistake. I forgot to add the values to the array before
putting it inside the row; the code below works. Thanks a lot.
Check out this
http://stackoverflow.com/questions/5549880/using-solr-for-indexing-multiple-languages
http://wiki.apache.org/solr/LanguageAnalysis#French
French stop words file (sample):
http://trac.foswiki.org/browser/trunk/SolrPlugin/solr/multicore/conf/stopwords-fr.txt
Solr includes three stem
A Solr index does not need a unique key, but almost all indexes use one.
http://wiki.apache.org/solr/UniqueKey
Try the below query passing id as id instead of titleid..
A proper dataimport config will look like,
select?q=*-location_field:** worked for me
I used to face this issue more often when I used CachedSqlEntityProcessor in
DIH.
I then started indexing in batches (by including a where condition) instead of
indexing everything at once.
You can refer to the other available options for the MySQL driver:
http://dev.mysql.com/doc/refman/5.0/en/connector
I tested using the new geospatial class; it works fine with the new spatial type
using class="solr.SpatialRecursivePrefixTreeFieldType".
http://wiki.apache.org/solr/SolrAdaptersForLuceneSpatial4
You can dynamically set the boolean value by using a script transformer when
indexing the data. You don't really
This might not be a solution, but I had asked a similar question before; check out
this thread:
http://lucene.472066.n3.nabble.com/Is-there-a-way-to-load-multiple-schema-when-using-zookeeper-td4058358.html
You can create multiple collections, and each collection can use completely
different sets of conf
I am not sure of the best way to search across multiple collections using SOLR
4.3.
Suppose each collection has its own config files and I perform various
operations on the collections individually, but when I search I want the search
to happen across all collections. Can someone let me know how to pe
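One option worth checking (not from this thread, and worth verifying against your exact
version): in SolrCloud a single request can, I believe, be spread over several collections
with the "collection" parameter. A hedged SolrJ sketch with example names:

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.CloudSolrServer;

CloudSolrServer server = new CloudSolrServer("localhost:2181");
server.setDefaultCollection("collection1");
SolrQuery query = new SolrQuery("*:*");
query.set("collection", "collection1,collection2");   // search both collections in one request
System.out.println(server.query(query).getResults().getNumFound());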
I have 5 shards that have different data indexed in them (each document has a
unique id).
Now when I perform dynamic updates (push indexing) I need to update the
document corresponding to the unique id that needs to be updated, but I
won't know which core the corresponding document is present in.
The error below clearly says that you have declared a unique id but that
unique id is missing for some documents.
org.apache.solr.common.SolrException: [doc=null] missing required field:
nameid
This is mainly because you are just trying to import 2 tables into a
document without any relationship.
Not sure if this solution will work for you, but this is what I did to
implement nested grouping using SOLR 3.x.
The simple idea behind it is to concatenate 2 fields, index them into a single
field, and group on that field:
http://stackoverflow.com/questions/12202023/field-collapsing-grouping-how-to-ma
You don't really need to have a relationship, but the unique id should be
unique per document. I mentioned the relationship because the unique key was
present only in one table but not the other.
Check out this link for more information on importing data from multiple tables:
ht
Not sure if I understand your situation. I am not sure how you would relate
the data between 2 tables if there's no relationship. Are you trying to just
dump random values from 2 tables into a document? Consider:
Table1: Name   id
        peter  1
        john   2
        mike   3
Table2: Title  TitleId
        CEO
Can you check whether you have the correct SolrJ client library version in both
Nutch and the Solr server?
As suggested by Shawn, try changing the JVM; this might resolve your issue.
I had seen this 'java.lang.VerifyError' error before (not specific to SOLR)
when compiling code using JDK 1.7.
After some research I figured out that code compiled with Java 1.7 requires
stack map frame instructions. If y
It seems this feature is yet to be implemented:
https://issues.apache.org/jira/browse/SOLR-866
I am in the process of migrating SOLR 3.x to 4.3.0.
I am trying to figure out a way to run a single instance of SOLR without
modifying the directory structure. Is it mandatory to have a folder named
collection1 in order for the new SOLR server to work? I see that by default
it always searches for the confi
Not sure if this is the right way: I just moved solr.xml outside of the solr
directory and changed solr.xml to point to the solr directory, and it seems to
work fine as before. Can someone confirm whether this is the right way to
configure things when running a single instance of Solr?
For some reason I am getting the error below when parsing synonyms from the
synonyms file.
Synonyms File:
http://www.pastebin.ca/2395108
The server encountered an internal error ({msg=SolrCore 'solr' is not
available due to init failure: java.io.IOException: Error parsing synonyms
file:,trace=org.a
Thanks a lot for your response, Jack. I figured out the issue: this file is
currently generated by a Perl program, and it seems to be a bug in that program.
Thanks anyway.
Try the below code..
query.setQueryType("/admin/luke");
QueryResponse rsp = server.query( query,METHOD.GET );
System.out.println(rsp.getResponse());
I think if you use validate=false in schema.xml, at the field or dynamicField
level, Solr will skip that validation.
I think this only works in Solr 4.3 and above.
I suppose you can use fq with SpellCheckComponent but I haven't tried it yet.
https://issues.apache.org/jira/browse/SOLR-2010
Erick,
Thanks a lot for your response.
Just to confirm that I have this right: I need to use solr.xml even if I change the
folder structure as below. Am I right?
Do you have any idea when the "discovery-based" core enumeration feature will
be released?
I am trying to combine latitude and longitude data extracted from a text file
using the data import handler.
One document can contain multiple latitudes / longitudes.
My data is of the format [lat,lat],[long,long]
Example:
[33.7209548950195, 34.474838],[-117.176193237305, -117.573463]
I am cur
OK, I wrote a custom Java transformer as below. Can someone confirm whether this
is the right way?
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import org.apache.solr.handler.dataimport.Context;
import org.apache.solr.handler.dataimport.DataImporter;
import org.apache.solr.h
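For reference, a hedged sketch of what such a DIH transformer could look like for the
[lat,lat],[long,long] input described earlier in the thread; the column names latlong_raw
and latlong are placeholders, not from the original post:

import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import org.apache.solr.handler.dataimport.Context;
import org.apache.solr.handler.dataimport.Transformer;

public class LatLongTransformer extends Transformer {

    @Override
    public Object transformRow(Map<String, Object> row, Context context) {
        Object raw = row.get("latlong_raw");
        if (raw == null) {
            return row;
        }
        // expected input: [lat1, lat2, ...],[long1, long2, ...]
        String[] groups = raw.toString().split("\\]\\s*,\\s*\\[");
        if (groups.length != 2) {
            return row;
        }
        String[] lats = groups[0].replace("[", "").split(",");
        String[] longs = groups[1].replace("]", "").split(",");
        List<String> points = new ArrayList<String>();
        for (int i = 0; i < Math.min(lats.length, longs.length); i++) {
            points.add(lats[i].trim() + "," + longs[i].trim());   // "lat,long" as LatLonType expects
        }
        row.put("latlong", points);   // multivalued destination field
        return row;
    }
}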
check this link..
http://stackoverflow.com/questions/11319465/geoclusters-in-solr
I suppose you can implement custom hashing by using the "_shard_" field. I am not
sure about this, but I came across this approach some time back.
At query time, you can specify the "shard.keys" parameter.
I would suggest taking the suggested string and creating another query to
Solr along with the filter parameter.
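A hedged SolrJ sketch of that follow-up query (suggestedTerm, the field name, and the
filter are placeholders):

import org.apache.solr.client.solrj.SolrQuery;

SolrQuery query = new SolrQuery("title:" + suggestedTerm);
query.addFilterQuery("visible:true");   // apply the filter the suggester could not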
Not sure what you are trying to achieve.
I assume you are trying to return the documents that don't contain any
value in a particular field.
You can use the query below for that:
http://localhost:8983/solr/doc1/select?q=-text:*&debugQuery=on&defType=lucene
The config file below works fine with SQL Server. Make sure you are using the
correct database / server name.
http://wiki.apache.org/solr/SolrPerformanceFactors
If you do a lot of field based sorting, it is advantageous to add explicitly
warming queries to the "newSearcher" and "firstSearcher" event listeners in
your solrconfig which sort on those fields, so the FieldCache is populated
prior to any querie
Dynamically adding fields to the schema has not been released yet:
https://issues.apache.org/jira/browse/SOLR-3251
We used dynamicField and copyField for dynamically creating facets.
We had too many dynamic fields (retrieved from a database table) and we had
to make sure that facets exist for the
I see that the threads parameter has been removed from DIH in all versions
starting with SOLR 4.x. Can someone let me know the best way to initiate indexing
in multi-threaded mode when using DIH now? Is there a way to do that?
Hi,
I have SOLR instances running on both Linux and Windows servers (same version,
same index data). Search performance is better on the Windows box compared to
the Linux box.
Some queries take more than 10 seconds on the Linux box but take just a second
on the Windows box. Has anyone encountered this kind of i
Hi,
I am facing an issue with dynamic fields. I have 2 fields (UID and ID) on
which I want to do whole-word search only, so I made those 2 fields of
type 'string'.
I also have a dynamic field with the textgen field type, as below.
This dynamic field seems to capture all the data, including
Hi,
I just want to know whether there will be any overhead / performance degradation
if I use the DisMax search handler instead of the standard search handler.
We are planning to index millions of documents and are not sure whether using
DisMax will slow down search performance. It would be great if someone can