SOLR cloud startup pointing to zookeeper ensemble
I downloaded the latest version of SOLR (5.5.0) and also installed ZooKeeper on ports 2181, 2182 and 2183, and it's running fine. Now when I try to start the SOLR instance using the command below, it just shows the help content rather than executing the command:

bin/solr start -e cloud -z localhost:2181,localhost:2182,localhost:2183 -noprompt

The command below works with one ZooKeeper host:

solr start -e cloud -z localhost:2181 -noprompt

Am I missing anything?

-- View this message in context: http://lucene.472066.n3.nabble.com/SOLR-cloud-startup-poniting-to-zookeeper-ensemble-tp4259023.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: SOLR cloud startup - zookeeper ensemble
Ok, when I run the command below it looks like it's ignoring the double quotes.

solr start -c -z "localhost:2181,localhost:2182,localhost:2183" -e cloud

This interactive session will help you launch a SolrCloud cluster on your local workstation.
To begin, how many Solr nodes would you like to run in your local cluster? (specify 1-4 nodes) [2]: 2
Ok, let's start up 2 Solr nodes for your example SolrCloud cluster.
Please enter the port for node1 [8983]: 8983
Please enter the port for node2 [7574]: 7573
Solr home directory C:\Users\bb728a\Downloads\solr-5.5.0\solr-5.5.0\example\cloud\node1\solr already exists.
C:\Users\bb728a\Downloads\solr-5.5.0\solr-5.5.0\example\cloud\node2 already exists.
Starting up Solr on port 8983 using command:
C:\Users\bb728a\Downloads\solr-5.5.0\solr-5.5.0\bin\solr.cmd start -cloud -p 8983 -s "C:\Users\bb728a\Downloads\solr-5.5.0\solr-5.5.0\example\cloud\node1\solr" -z *localhost:2181,localhost:2182,localhost:2183*

Invalid command-line option: localhost:2182

Usage: solr start [-f] [-c] [-h hostname] [-p port] [-d directory] [-z zkHost] [-m memory] [-e example] [-s solr.solr.home] [-a "additional-options"] [-V]

-- View this message in context: http://lucene.472066.n3.nabble.com/SOLR-cloud-startup-zookeeper-ensemble-tp4259023p4259028.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: SOLR cloud startup pointing to zookeeper ensemble
It's still throwing the error without quotes.

solr start -e cloud -noprompt -z localhost:2181,localhost:2182,localhost:2183

Invalid command-line option: localhost:2182

Usage: solr start [-f] [-c] [-h hostname] [-p port] [-d directory] [-z zkHost] [-m memory] [-e example] [-s solr.solr.home] [-a "additional-options"] [-V]

  -f            Start Solr in foreground; default starts Solr in the background and sends stdout / stderr to solr-PORT-console.log
  -c or -cloud  Start Solr in SolrCloud mode; if -z not supplied, an embedded ZooKeeper instance is started

*Info on using double quotes:*
http://lucene.472066.n3.nabble.com/Solr-5-2-1-setup-zookeeper-ensemble-problem-td4215823.html#a4215877

-- View this message in context: http://lucene.472066.n3.nabble.com/SOLR-cloud-startup-error-zookeeper-ensemble-windows-tp4259023p4259567.html Sent from the Solr - User mailing list archive at Nabble.com.
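If solr.cmd on Windows keeps splitting the -z value at the commas, one workaround that may sidestep command-line parsing entirely is to put the ensemble string in the include file instead. A sketch, assuming the stock include files that ship with Solr 5.5 (adjust paths to your install):

```
REM bin\solr.in.cmd (Windows):
set ZK_HOST=localhost:2181,localhost:2182,localhost:2183

# bin/solr.in.sh (Linux/Mac):
ZK_HOST="localhost:2181,localhost:2182,localhost:2183"
```

With ZK_HOST set in the include file, "bin/solr start -c" should join the ensemble without any -z option on the command line at all.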
Using dynamically calculated value for sorting
Hi, We have a price field in our SOLR XML feed that we currently use for sorting. We are planning to introduce discounts based on login credentials and we have to dynamically calculate price (using base price in SOLR feed) based on a specific discount returned by an API. Now after the discount is calculated we want to sort based on the new price (discounted price). What is the best way to do that? Any ideas would be appreciated. -- View this message in context: http://lucene.472066.n3.nabble.com/Using-dynamically-calculated-value-for-sorting-tp4231950.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: How to preserve 0 after decimal point?
Thanks for your response. -- View this message in context: http://lucene.472066.n3.nabble.com/How-to-preserve-0-after-decimal-point-tp4159295p4231961.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: Using dynamically calculated value for sorting
Thanks for your reply. The overall design has changed a little bit. Now I will be sending the SKU id (the SKU id is in the SOLR document) to an external API, and it will return a new price to me for that SKU based on some logic (I won't be calculating the new price). Once I get that value, I need to use the new price value for sorting. -- View this message in context: http://lucene.472066.n3.nabble.com/Using-dynamically-calculated-value-for-sorting-tp4231950p4232320.html Sent from the Solr - User mailing list archive at Nabble.com.
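If the external API is the only source of truth for the discounted price, one option is to fetch the Solr results, look up each SKU's price from the API response, and re-sort client-side. A minimal sketch in Python; the field names "sku" and "base_price" are assumptions, not your actual schema, and the prices dict stands in for whatever the real pricing API returns:

```python
def resort_by_discounted_price(docs, discounted_prices, ascending=True):
    """Re-sort Solr documents by the per-SKU price from the external API.

    docs: list of Solr documents (dicts), each carrying a 'sku' field.
    discounted_prices: dict mapping SKU id -> discounted price, as returned
    by the (hypothetical) pricing API for this user's login.  Docs whose
    SKU is missing from the API response fall back to the indexed base price.
    """
    def price_of(doc):
        return discounted_prices.get(doc["sku"], doc.get("base_price", 0.0))
    return sorted(docs, key=price_of, reverse=not ascending)

# Example: the API discount inverts the base-price order of the two docs.
docs = [
    {"sku": "A1", "base_price": 100.0},
    {"sku": "B2", "base_price": 80.0},
]
api_prices = {"A1": 60.0}  # A1 is discounted below B2's price
resorted = resort_by_discounted_price(docs, api_prices)
```

Note this only works cleanly if you can fetch the full result set (or a large-enough window), since re-sorting after the fact breaks Solr's own pagination.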
Boost non stemmed keywords (KStem filter)
Hi, I am using the KStem factory for stemming. This stemmer converts 'france to french', 'chinese to china', etc. I am fine with this stemming, but I am trying to boost results that contain the original term over results that match only through the stemmed term. Is this possible? Thanks, Learner -- View this message in context: http://lucene.472066.n3.nabble.com/Boost-non-stemmed-keywords-KStem-filter-tp4240880.html Sent from the Solr - User mailing list archive at Nabble.com.
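A common recipe for this (not specific to KStem) is to index the text twice, once through the stemming chain and once without, and boost the unstemmed copy at query time. A sketch of the schema side, with hypothetical field and type names:

```xml
<!-- text_stemmed: your existing analyzer with KStemFilterFactory -->
<!-- text_unstemmed: same analyzer minus the stemming filter -->
<field name="body" type="text_stemmed" indexed="true" stored="true"/>
<field name="body_exact" type="text_unstemmed" indexed="true" stored="false"/>
<copyField source="body" dest="body_exact"/>
```

With (e)dismax, qf=body_exact^2 body then scores documents containing the original surface form above stemmed-only matches.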
Answer engine - NLP related question
Hi, Note: I have very basic knowledge of NLP.

I am working on an answer engine prototype: when the user enters a keyword and searches for it, we show them the answer corresponding to that keyword (rather than displaying multiple documents that match the keyword).

For example, when a user searches for 'activate phone', we have answerTags tagged in the SOLR documents along with an answer field (that will be displayed as the answer for this keyword):

    answerTags: activate phone, activation, activations, activate
    answer: This is the answer

This works fine when the user searches for the exact keyword tagged in the 'answerTag' field. Now I am trying to figure out a way to match keywords based on part of speech too. Example:

    I want 'how to activate phone' to match 'activate phone' in the answerTags field
    'how to activate' to match 'activate' in the answerTags field

I don't want to add all possible combinations of search keywords in answerTags... rather, the match should be based on part of speech and other NLP techniques. I am trying to figure out a way to standardize the keywords (maybe using NLP) and map them to predefined keywords (maybe rule-based NLP?). I am not sure how to proceed with these kinds of searches. Any insight is appreciated.

-- View this message in context: http://lucene.472066.n3.nabble.com/Answer-engine-NLP-related-question-tp4203730.html Sent from the Solr - User mailing list archive at Nabble.com.
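Short of full NLP, a first approximation of collapsing 'how to activate phone' onto 'activate phone' is to strip function words and keep only content terms before matching against answerTags. A toy sketch in pure Python (a real system would use a POS tagger / lemmatizer such as OpenNLP; the stopword list here is a made-up stand-in):

```python
STOPWORDS = {"how", "to", "do", "i", "a", "an", "the", "my", "can", "you"}

def normalize(query):
    """Reduce a free-text question to its content terms, in order."""
    return tuple(w for w in query.lower().split() if w not in STOPWORDS)

def best_answer(query, answers):
    """Return the answer whose tag matches the normalized query.

    answers: dict mapping answerTag -> answer text.  Tags are normalized
    the same way, so 'how to activate phone' matches the tag
    'activate phone'.  Returns None when no tag matches.
    """
    key = normalize(query)
    for tag, answer in answers.items():
        if normalize(tag) == key:
            return answer
    return None

answers = {"activate phone": "This is the answer"}
result = best_answer("how to activate phone", answers)
```

The same normalization could be applied at index time to the answerTags field (e.g. via a stopword filter), so that Solr itself does the matching rather than application code.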
Is there a way to filter the results based on weight - SOLR suggester?
Hi, I am using the suggester component in SOLR 5.5.1 and sort the matching suggestions based on a custom field (lookupCount). The configuration below seems to work fine, but it returns the matching term even if the weight is set to 0. Is there a way to restrict the matching terms returned based on the weight field? Something like: return only the matching suggestions that don't have the weight field set to 0?

    <searchComponent name="suggest" class="solr.SuggestComponent">
      <lst name="suggester">
        <str name="name">mySuggester</str>
        <str name="lookupImpl">FuzzyLookupFactory</str>
        <str name="dictionaryImpl">DocumentDictionaryFactory</str>
        <str name="field">userQuery</str>
        <str name="weightField">lookupCount</str>
        <str name="suggestAnalyzerFieldType">string</str>
      </lst>
    </searchComponent>

*Current sample output (returns the matching term even though it has weight set to 0):*

/solr/test/suggest?suggest=true&suggest.dictionary=mySuggester&wt=xml&suggest.q=bill&suggest.count=5

    bill (weight 0)
    billing history (weight 69753)

-- View this message in context: http://lucene.472066.n3.nabble.com/Is-there-a-way-to-filter-the-results-based-on-weight-SOLR-suggester-tp4289286.html Sent from the Solr - User mailing list archive at Nabble.com.
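As far as I know, DocumentDictionaryFactory has no minimum-weight option, so one pragmatic fallback is to filter the suggest response client-side. A sketch in Python over the JSON response shape of the suggester (dictionary and query names taken from the config above; treat the exact response layout as an assumption to verify against your wt=json output):

```python
def filter_suggestions(suggest_response, dictionary, query, min_weight=1):
    """Drop suggestions whose weight is below min_weight.

    suggest_response: parsed JSON body of a /suggest request, shaped as
    {"suggest": {dictionary: {query: {"numFound": N, "suggestions": [...]}}}}
    """
    block = suggest_response["suggest"][dictionary][query]
    return [s for s in block["suggestions"] if s.get("weight", 0) >= min_weight]

# Mirror of the sample output from the thread: 'bill' has weight 0.
response = {"suggest": {"mySuggester": {"bill": {
    "numFound": 2,
    "suggestions": [
        {"term": "bill", "weight": 0},
        {"term": "billing history", "weight": 69753},
    ],
}}}}
kept = filter_suggestions(response, "mySuggester", "bill")
```

If you need the filtering server-side, another route is to rebuild the suggester from a query that excludes the zero-weight documents (e.g. a dedicated suggest core or collection).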
Is it possible to do pivot grouping in SOLR?
Is there a way to do pivot grouping (a group within a group) in SOLR? We initially group the results by category, and in turn we are trying to group the data under one category based on another field. Is there a way to do that?

    Categories (group by)
    |-- Shop
    |     |-- Color (group by)
    |-- Support

-- View this message in context: http://lucene.472066.n3.nabble.com/Is-it-possible-to-do-pivot-grouping-in-SOLR-tp4306352.html Sent from the Solr - User mailing list archive at Nabble.com.
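If nested counts (rather than fully grouped result documents) are enough, pivot faceting gives exactly this group-within-group breakdown. A sketch of building such a request; the field names category and color are placeholders for your schema's fields:

```python
from urllib.parse import urlencode

# facet.pivot=category,color returns, for each category value, the
# counts of each color value nested underneath it.
params = {
    "q": "*:*",
    "rows": 0,          # only the facet tree is needed
    "facet": "true",
    "facet.pivot": "category,color",
}
query_string = urlencode(params)
request_url = "/solr/mycollection/select?" + query_string
```

For grouped documents (not just counts) you would still need result grouping, which only supports a single group.field per request.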
Need details on this query
Hi, This might be a silly question. I came across the query below online, but I couldn't really understand the bolded part. Can someone help me understand this part of the query?

deviceType_:"Cell" OR deviceType_:"Prepaid" *OR (phone -data_source_name:("Catalog" OR "Device How To - Interactive" OR "Device How To - StepByStep"))*

Thanks, Barani -- View this message in context: http://lucene.472066.n3.nabble.com/Need-details-on-this-query-tp4153606.html Sent from the Solr - User mailing list archive at Nabble.com.
How to change search component parameters dynamically using query
Hi, I use the highlight search component below in one of my request handlers. I am trying to figure out a way to change the values of the highlight search component dynamically from the query. Is it possible to modify the parameters dynamically using the query (without creating another search component)?

    hl.fragsize: 200
    hl.bs.chars: .
    hl.bs.type: SENTENCE
    hl.bs.language: en
    hl.bs.country: US

-- View this message in context: http://lucene.472066.n3.nabble.com/How-to-change-search-component-parameters-dynamically-using-query-tp4156672.html Sent from the Solr - User mailing list archive at Nabble.com.
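For the standard highlighting parameters this usually needs no extra component: values configured as handler defaults can be overridden per request by sending the same parameter in the query string. A sketch of building such a request (handler and field names are assumptions):

```python
from urllib.parse import urlencode

# Per-request values override the <defaults> configured on the handler,
# so the same highlight component can serve different fragment sizes
# and boundary-scanner settings per query.
params = {
    "q": "phone",
    "hl": "true",
    "hl.fl": "description",
    "hl.fragsize": 50,        # overrides the configured 200
    "hl.bs.type": "WORD",     # overrides SENTENCE
}
query_string = urlencode(params)
```

This only works for defaults; anything configured under invariants is deliberately not overridable from the query.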
Is there a way to modify the request handler parameters dynamically?
Hi, I need to change the components (inside a request handler) dynamically using query parameters instead of creating multiple request handlers. Is it possible to do this on the fly from the query? For example: swap the highlight search component for a different search component based on a query parameter. (From the 'filterbyrole' / 'landingPage' handler configuration:)

    <arr name="components">
      <str>firstRulesComp</str>
      <str>query</str>
      <str>*highlight*</str>
      <str>facet</str>
      <str>spellcheck</str>
      <str>lastRulesComp</str>
      <str>debug</str>
      <str>elevator</str>
    </arr>

-- View this message in context: http://lucene.472066.n3.nabble.com/Is-there-a-way-to-modify-the-request-handler-parameters-dynamically-tp4156697.html Sent from the Solr - User mailing list archive at Nabble.com.
How to preserve 0 after decimal point?
I have a requirement to preserve 0 after the decimal point; currently, with a float field type:

    27.50 is stripped to 27.5
    27.00 is stripped to 27.0
    27.90 is stripped to 27.9

I also tried using double, but even then the 0's are getting stripped.

Input data: 27.50

-- View this message in context: http://lucene.472066.n3.nabble.com/How-to-preserve-0-after-decimal-point-tp4159295.html Sent from the Solr - User mailing list archive at Nabble.com.
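Since float/double fields store the numeric value, and trailing zeros are not part of a float's identity, the usual options are to index a parallel string field for display or to format at render time. Formatting client-side is a one-liner; a sketch in Python:

```python
def display_price(value, decimals=2):
    """Render a numeric price with a fixed number of decimals for display,
    while the float field stays numeric for sorting and range queries."""
    return f"{float(value):.{decimals}f}"

shown = display_price(27.5)  # the value Solr returns for input 27.50
```

If the display form must come back from Solr itself, store the original "27.50" in a stored string field alongside the float field used for sorting/faceting.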
Is there a way to prevent some keywords from being added to autosuggest dictionary?
We index around 10k documents in SOLR and use the built-in suggest functionality for autocomplete. We have a field containing a flag that is used to show or hide documents in search results. I am trying to figure out a way to control the terms added to the autosuggest index (to keep the hidden documents from being added to the autosuggest index) based on the value of that flag. Is there a way to do that? -- View this message in context: http://lucene.472066.n3.nabble.com/Is-there-a-way-to-prevent-some-keywords-from-being-added-to-autosuggest-dictionary-tp4164699.html Sent from the Solr - User mailing list archive at Nabble.com.
How to Facet external fields
I am using an external field for the price field, since it changes every 10 minutes. I am able to display the price and use range queries to select documents within a price range. Now I am trying to see if it's possible to generate facets from the external field. I understand that faceting requires indexed values, and since external file fields are not actually indexed, it's not possible to get facets out of the box. I am thinking of writing multiple range queries, getting the counts, and updating the facets (just for price). I want to know if there is any other solution for this implementation. -- View this message in context: http://lucene.472066.n3.nabble.com/How-to-Facet-external-fields-tp4168653.html Sent from the Solr - User mailing list archive at Nabble.com.
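Since facet.query accepts arbitrary queries, including frange over the external file field evaluated as a function, the range-bucket counts can come back in a single request instead of one query per range. A sketch of building such a request; the field name price is assumed to be the ExternalFileField:

```python
from urllib.parse import urlencode

# One facet.query per bucket; {!frange} evaluates the external file
# field as a function at query time, so no indexed values are needed.
buckets = [(0, 50), (50, 100), (100, 200)]
params = [("q", "*:*"), ("rows", 0), ("facet", "true")]
for lo, hi in buckets:
    params.append(("facet.query", f"{{!frange l={lo} u={hi}}}price"))
query_string = urlencode(params)
```

The response then carries one count per facet.query clause, which the application can render as a price facet.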
Re: How to Facet external fields
Thanks for your response.. It's indeed a good idea..I will try that out.. -- View this message in context: http://lucene.472066.n3.nabble.com/How-to-generate-calculate-facet-counts-for-external-fields-tp4168653p4168790.html Sent from the Solr - User mailing list archive at Nabble.com.
SOLRJ Atomic updates of String field
I am using the code below to do a partial update (in SOLR 4.2):

    Map<String, Object> partialUpdate = new HashMap<String, Object>();
    partialUpdate.put("set", newDescription);
    doc.setField("description", partialUpdate);
    server.add(docs);
    server.commit();

I am seeing the description value below, with {set=...}. Any idea why this is getting added?

{set=The iPhone 6 Plus features a 5.5-inch retina HD display, the A8 chip for faster processing and longer battery life, the M8 motion coprocessor to track speed, distance and elevation, and with an 8MP iSight camera, you can record 1080p HD Video at 60 FPS!}

-- View this message in context: http://lucene.472066.n3.nabble.com/SOLRJ-Atomic-updates-of-String-field-tp4168809.html Sent from the Solr - User mailing list archive at Nabble.com.
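One common cause of seeing the literal {set=...} stored is that the update was indexed as a plain value rather than applied atomically: in Solr 4.x, atomic updates only kick in when the update log is enabled and the schema has a _version_ field (and all non-copyField fields are stored, so the document can be reconstructed). A sketch of the two config prerequisites, in case either is missing:

```
<!-- solrconfig.xml, inside <updateHandler>: -->
<updateLog>
  <str name="dir">${solr.ulog.dir:}</str>
</updateLog>

<!-- schema.xml: -->
<field name="_version_" type="long" indexed="true" stored="true"/>
```

With both in place, the {"set": value} map sent from SolrJ is interpreted as an atomic-update instruction instead of being stored verbatim.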
Partial match autosuggest (match a word occurring anywhere in a field)
Hi, I am trying to figure out a way to implement partial-match autosuggest, but it doesn't work in some cases. When I search for 'iphone 5s', I am able to see the result below:

title_new:Apple iPhone 5s - 16GB - Gold

but when I search for 'iphone gold' (in the title_new field), I am not able to see the above result. Is there a way to implement a full partial match (matching a word occurring anywhere in a field)? Please find below my fieldtype configuration for title_new

-- View this message in context: http://lucene.472066.n3.nabble.com/Partial-match-autosuggest-match-a-word-occurring-anywhere-in-a-field-tp4174660.html Sent from the Solr - User mailing list archive at Nabble.com.
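For matching a word anywhere in the field (rather than only from the start of the whole value), the usual recipe is to tokenize on whitespace and apply edge n-grams per token at index time. A sketch of such a field type; the names are assumptions, not the lost configuration from this message:

```xml
<fieldType name="text_autosuggest" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <!-- grams each token separately: "gold" -> g, go, gol, gold -->
    <filter class="solr.EdgeNGramFilterFactory" minGramSize="1" maxGramSize="25"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```

With this, a query for 'iphone gold' matches each term independently, so word order and position in the title no longer matter.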
Re: Partial match autosuggest (match a word occurring anywhere in a field)
Thanks for your response. I fixed this issue by using the -- View this message in context: http://lucene.472066.n3.nabble.com/Predictive-search-match-a-word-occurring-anywhere-in-a-field-tp4174660p4174822.html Sent from the Solr - User mailing list archive at Nabble.com.
Is there a way to load multiple schema when using zookeeper?
I have used multiple schema files by using multiple cores, but I am not sure if I will be able to use multiple schema configurations when integrating SOLR with ZooKeeper. Can someone please let me know if it's possible and, if so, how? -- View this message in context: http://lucene.472066.n3.nabble.com/Is-there-a-way-to-load-multiple-schema-when-using-zookeeper-tp4058358.html Sent from the Solr - User mailing list archive at Nabble.com.
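In SolrCloud, each collection is linked by name to a configset stored in ZooKeeper, so multiple schemas coexist by uploading each configuration directory under its own config name and pointing each collection at the one it needs. A sketch using the zkcli script that ships with Solr (the script location and the local paths vary by version and install; treat them as assumptions):

```
# Upload two independent configs, each with its own schema.xml:
cloud-scripts/zkcli.sh -zkhost localhost:2181 -cmd upconfig \
    -confdir /path/to/conf1 -confname config1
cloud-scripts/zkcli.sh -zkhost localhost:2181 -cmd upconfig \
    -confdir /path/to/conf2 -confname config2

# Create one collection against each config:
curl "http://localhost:8983/solr/admin/collections?action=CREATE&name=coll1&numShards=1&collection.configName=config1"
curl "http://localhost:8983/solr/admin/collections?action=CREATE&name=coll2&numShards=1&collection.configName=config2"
```

Each collection then loads its schema from its own configset, just as separate cores load separate local schema files.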
solr.LatLonType type vs solr.SpatialRecursivePrefixTreeFieldType
Hi, I am currently using SOLR 4.2 to index geospatial data. I have configured my geospatial field as below. I just want to make sure that I am using the correct SOLR class for performing geospatial search, since I am not sure which of the two classes (LatLonType vs SpatialRecursivePrefixTreeFieldType) will be supported by future versions of SOLR. I assume LatLonType is an upgraded version of SpatialRecursivePrefixTreeFieldType; can someone please confirm if I am right? Thanks, Barani -- View this message in context: http://lucene.472066.n3.nabble.com/solr-LatLonType-type-vs-solr-SpatialRecursivePrefixTreeFieldType-tp4061113.html Sent from the Solr - User mailing list archive at Nabble.com.
Is there a way to remove caches in SOLR?
I am trying to create performance metrics for SOLR. I don't want the searcher to warm up when I issue a query since I am trying to collect metrics for cold search. Is there a way to disable warming? -- View this message in context: http://lucene.472066.n3.nabble.com/Is-there-a-way-to-remove-caches-in-SOLR-tp4061216.html Sent from the Solr - User mailing list archive at Nabble.com.
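For cold-search benchmarking, the main knobs are the three searcher caches and the warm-up hooks in solrconfig.xml. A sketch of turning them off (autowarmCount=0 disables autowarming; also empty out any newSearcher/firstSearcher listener queries configured nearby):

```
<query>
  <filterCache      class="solr.FastLRUCache" size="0" initialSize="0" autowarmCount="0"/>
  <queryResultCache class="solr.LRUCache"     size="0" initialSize="0" autowarmCount="0"/>
  <documentCache    class="solr.LRUCache"     size="0" initialSize="0" autowarmCount="0"/>
  <useColdSearcher>true</useColdSearcher>
</query>
```

Keep in mind the OS page cache will still warm up across runs, so truly cold numbers also require restarting Solr (and, strictly, dropping the OS cache) between measurements.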
How to get ResponseBuilder's _responseDocs value?
Hi, I am trying to override the query component to implement some custom logic, but I am not sure how to get the _responseDocs value, since it is not visible in my class (The field ResponseBuilder._responseDocs is not visible). Is there a way to get this value?

    public class CustomqueryComponent extends QueryComponent {
        ...
        @Override
        public void finishStage(ResponseBuilder rb) {
            if (rb.stage == ResponseBuilder.STAGE_EXECUTE_QUERY && isOptimizableFL(rb))
                rb.rsp.add("response", rb._responseDocs);
            else
                super.finishStage(rb);
        }
    }

Thanks, BB -- View this message in context: http://lucene.472066.n3.nabble.com/How-to-get-ResponseBuilder-s-responseDocs-value-tp4062403.html Sent from the Solr - User mailing list archive at Nabble.com.
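The "not visible" error suggests _responseDocs has default (package) visibility in ResponseBuilder, so it can only be read from classes in the same package. One workaround, admittedly fragile across Solr upgrades, is to declare the custom component in that package; a sketch:

```
// Placing the class in ResponseBuilder's package makes its
// package-private members visible to this subclass.
package org.apache.solr.handler.component;

public class CustomQueryComponent extends QueryComponent {
    // rb._responseDocs is now accessible inside finishStage(rb)
}
```

A cleaner alternative, where it fits, is to read the documents from rb.rsp.getValues() after the stock component has populated the response.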
SOLR 4.3.0 - How to use DeleteUpdateCommand?
I am in the process of migrating our existing SOLR (version 3.5) to the new version of SOLR (4.3.0). Currently we delete documents from SOLR using the code below (deletion happens via a message queue):

    UpdateRequestProcessor processor = _getProcessor(); // returns SOLR core information
    DeleteUpdateCommand deleteCommand = new DeleteUpdateCommand();
    deleteCommand.id = docId;
    deleteCommand.fromPending = true;
    deleteCommand.fromCommitted = true;
    processor.processDelete(deleteCommand);

It looks like the new version of the SOLR API has a different constructor; it requires a SOLR request to be passed in. So I did something like below. I just want to confirm if I am doing this right. Can someone please confirm? Also, is there any migration help document that I can use to overcome migration-related issues?

    NamedList nList = new NamedList();
    nList.add("id", docId);
    LocalSolrQueryRequest req = new LocalSolrQueryRequest(_requestHandler.getCore(), nList);
    UpdateRequestProcessor processor = _getProcessor();
    DeleteUpdateCommand deleteCommand = new DeleteUpdateCommand(req);
    processor.processDelete(deleteCommand);

-- View this message in context: http://lucene.472066.n3.nabble.com/SOLR-4-3-0-How-to-use-DeleteUpdateCommand-tp4062454.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: SOLR 4.3.0 - How to use DeleteUpdateCommand?
Shawn, This is indeed a custom SOLR update component used by the message queue to perform various operations (add, update , delete etc..). We have implemented custom logic in this component... Thanks, BB -- View this message in context: http://lucene.472066.n3.nabble.com/SOLR-4-3-0-Migration-How-to-use-DeleteUpdateCommand-tp4062454p4062477.html Sent from the Solr - User mailing list archive at Nabble.com.
Is there a migration document for migrating SOLR 3.x to SOLR 4.x?
Hi, We have created lots of custom components using the SOLR 3.5 API. We are now in the process of migrating those components to work with the SOLR 4.2 API, but it's becoming very difficult to make the changes to our custom components by going through the API one by one and understanding each and every method. It would be great if there were a migration reference document with the basic high-level code changes that need to be made to old-version code. Is there any migration help document available currently? Something like http://lucene.apache.org/core/4_2_0/MIGRATE.html would be really helpful, but that doesn't seem to be a comprehensive document. Thanks, BB -- View this message in context: http://lucene.472066.n3.nabble.com/Is-there-a-migration-document-for-migration-SOLR-3-x-to-SOLR-4-x-tp4062514.html Sent from the Solr - User mailing list archive at Nabble.com.
SOLR test framework- ERROR: SolrIndexSearcher opens=1 closes=0
I am using SOLR 4.3.0 and have created multiple custom components. I am getting the error below when I run tests (using the SOLR 4.3 test framework) against one of the custom components. All the tests pass, but I still get this error once the test completes. Can someone help me resolve it?

java.lang.AssertionError: ERROR: SolrIndexSearcher opens=1 closes=0
    at __randomizedtesting.SeedInfo.seed([C2DCAC50C9ACBACE]:0)
    at org.junit.Assert.fail(Assert.java:93)
    at org.apache.solr.SolrTestCaseJ4.endTrackingSearchers(SolrTestCaseJ4.java:252)
    at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:101)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:700)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
    at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
    at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
    at java.lang.Thread.run(Thread.java:680)

-- View this message in context: http://lucene.472066.n3.nabble.com/SOLR-test-framework-ERROR-SolrIndexSearcher-opens-1-closes-0-tp4063940.html Sent from the Solr - User mailing list archive at Nabble.com.
SOLR Junit test - How to resolve error - 'thread leaked from SUITE scope'?
I am using SOLR 4.3.0. I am currently getting the error below when running tests for custom SOLR components. The tests pass without any issues, but I get this error after the tests are done. Can someone let me know how to resolve this issue?

thread leaked from SUITE scope at com.solr.activemq.TestWriter:
[junit]  1) Thread[id=19, name=ActiveMQ Scheduler, state=WAITING, group=TGRP-TestWriter]
[junit]     at java.lang.Object.wait(Native Method)
[junit]     at java.lang.Object.wait(Object.java:503)
[junit]     at java.util.TimerThread.mainLoop(Timer.java:526)
[junit]     at java.util.TimerThread.run(Timer.java:505)
[junit]  com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE scope at com.solr.activemq.TestWriter:
[junit]  1) Thread[id=19, name=ActiveMQ Scheduler, state=WAITING, group=TGRP-TestWriter]
[junit]     at java.lang.Object.wait(Native Method)
[junit]     at java.lang.Object.wait(Object.java:503)
[junit]     at java.util.TimerThread.mainLoop(Timer.java:526)
[junit]     at java.util.TimerThread.run(Timer.java:505)
[junit]     at __randomizedtesting.SeedInfo.seed([64E0A7A0D98E09EE]:0)

-- View this message in context: http://lucene.472066.n3.nabble.com/SOLR-Junit-test-How-to-resolve-error-thread-leaked-from-SUITE-scope-tp4064026.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: SOLR test framework- ERROR: SolrIndexSearcher opens=1 closes=0
Thanks a lot for your response. I figured out that I was not closing the LocalSolrQueryRequest after handling the response. The error got resolved after closing the request object. -- View this message in context: http://lucene.472066.n3.nabble.com/SOLR-test-framework-ERROR-SolrIndexSearcher-opens-1-closes-0-tp4063940p4064044.html Sent from the Solr - User mailing list archive at Nabble.com.
SOLR 4.3.0 - SolrCore 'collection1' is not available due to init failure
I am trying to migrate the tests for custom SOLR components written for SOLR 3.5.0 to SOLR 4.3.0. This is a simple index test for distributed search/index (not using SolrCloud, just using shards). For some reason one of the tests fails with the error message below; the tests pass without any issues when run against SOLR 3.5.0. I assume this is searching for a default collection 'collection1'. Can someone help me with this issue?

org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Server at http://127.0.0.1:54093 returned non ok status:500, message:{msg=*SolrCore 'collection1' is not available due to init failure:* Could not load config for solrconfig.xml,trace=org.apache.solr.common.SolrException: SolrCore 'collection1' is not available due to init failure: Could not load config for solrconfig.xml
    at org.apache.solr.core.CoreContainer.getCore(CoreContainer.java:1212)
    at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:248)
    at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:155)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1307)
    at org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:135)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1307)
    at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:453)
    at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:229)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.jav
    at __randomizedtesting.SeedInfo.seed([C5F81A27BEC36971:441E943FC99C094D]:0)
    at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:372)
    at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180)
    at org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:117)
    at org.apache.solr.client.solrj.SolrServer.deleteByQuery(SolrServer.java:285)
    at org.apache.solr.client.solrj.SolrServer.deleteByQuery(SolrServer.java:271)
    at org.apache.solr.BaseDistributedSearchTestCase.del(BaseDistributedSearchTestCase.java:465)
    at com.whitepages.solr.handler.component.WPFastDistributedQueryComponentMergingTest.resetForTest(WPFastDistributedQueryComponentMergingTest.java:82)
    at com.whitepages.solr.handler.component.WPFastDistributedQueryComponentMergingTest.doTest(WPFastDistributedQueryComponentMergingTest.java:60)
    at org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:806)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
    at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
    at org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
    at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
    at com.carrotsearch.randomizedtesting.Randomized
Re: Boosting Documents
Why don't you boost during query time? Something like q=superman&qf=title^2 subject You can refer: http://wiki.apache.org/solr/SolrRelevancyFAQ -- View this message in context: http://lucene.472066.n3.nabble.com/Boosting-Documents-tp4064955p4064966.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: SOLR 4.3.0 Migration- How to use DeleteUpdateCommand?
I was able to make it work as below:

    SolrQueryResponse rsp = new SolrQueryResponse();
    SolrQueryRequest req = new LocalSolrQueryRequest(_requestHandler.getCore(), new ModifiableSolrParams());
    SolrRequestInfo.setRequestInfo(new SolrRequestInfo(req, rsp));
    AddUpdateCommand cmd = new AddUpdateCommand(req);

-- View this message in context: http://lucene.472066.n3.nabble.com/SOLR-4-3-0-Migration-How-to-use-DeleteUpdateCommand-tp4062454p4065027.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: How do I use CachedSqlEntityProcessor?
Try like this... -- View this message in context: http://lucene.472066.n3.nabble.com/How-do-I-use-CachedSqlEntityProcessor-tp4064919p4065030.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: solr.LatLonType type vs solr.SpatialRecursivePrefixTreeFieldType
Thanks a lot for your response!!! -- View this message in context: http://lucene.472066.n3.nabble.com/solr-LatLonType-type-vs-solr-SpatialRecursivePrefixTreeFieldType-tp4061113p4065031.html Sent from the Solr - User mailing list archive at Nabble.com.
RE: How do I use CachedSqlEntityProcessor?
Try this:

    <entity name="Cat1" query="select ... SKU from CAT_TABLE WHERE CategoryLevel=1"
            cacheKey="Cat1.SKU" cacheLookup="Product.SKU"
            processor="CachedSqlEntityProcessor">

Also, I am not sure if you are using an Alpha/Beta release of SOLR 4.0. In Solr 3.6, 3.6.1, 4.0-Alpha & 4.0-Beta, the "cacheKey" parameter was re-named "cachePk". This was renamed back for 4.0 (& 3.6.2, if released). See SOLR-3850. -- View this message in context: http://lucene.472066.n3.nabble.com/How-do-I-use-CachedSqlEntityProcessor-tp4064919p4065116.html Sent from the Solr - User mailing list archive at Nabble.com.
SOLR 4.3.0 - How to make fq optional?
I am using the SOLR geospatial capabilities to filter results within a particular radius (something like below). I have added the fq query below in solrconfig and am passing the latitude and longitude information dynamically:

select?q=firstName:john&fq={!bbox%20sfield=geo%20pt=40.279392,-81.85891723%20d=10}

    _query_:"{firstName=$firstName}"
    _query_:"{!bbox pt=$fps_latlong sfield=geo d=$fps_dist}"

Now when I pass the latitude and longitude data the query works fine, but whenever I don't pass the latitude/longitude data it throws an exception. Is there a way to make the fq optional? Is there a way to ignore spatial queries when the coordinates are not passed? I am looking for something like dismax, which doesn't throw any exceptions.

-- View this message in context: http://lucene.472066.n3.nabble.com/SOLR-4-3-0-How-to-make-fq-optional-tp4066592.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: SOLR 4.3.0 - How to make fq optional?
David, I felt like there should be a flag with which we can either throw the error message or do nothing in case of bad inputs.. -- View this message in context: http://lucene.472066.n3.nabble.com/SOLR-4-3-0-How-to-make-fq-optional-tp4066592p4066610.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: SOLR 4.3.0 - How to make fq optional?
Erik, I am trying to enable / disable a part of the fq based on a particular value passed in the query. For example, if the query has a value for the keyword where, then I would like to enable this fq, else just ignore it.. select?where="New york,NY" Enable it only when where has some value. (I get the value passed in the query inside the fq - using $where) I need something like.. *if($where!=null){* {!bbox pt=$fps_latlong sfield=geo d=$fps_dist} *}* Is it possible to achieve this using the switch query parser? Thanks, BB -- View this message in context: http://lucene.472066.n3.nabble.com/SOLR-4-3-0-How-to-make-fq-optional-tp4066592p4066624.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: SOLR 4.3.0 - How to make fq optional?
Hoss, you read my mind. Thanks a lot for your awesome explanation! You rock!!! -- View this message in context: http://lucene.472066.n3.nabble.com/SOLR-4-3-0-How-to-make-fq-optional-tp4066592p4066630.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: SOLR 4.3.0 - How to make fq optional?
Hoss, for some reason this doesn't work when I pass the latlong value via the query.. This is the query.. It just returns all the values for fname='peter' (it doesn't filter for Tamarac, Florida). fl=*,score&rows=10&qt=findperson&fps_latlong=26.22084,-80.29&fps_fname=peter *solrconfig.xml* {!switch case='*:*' default=$fq_bbox v=$fps_latlong} _query_:"{!bbox pt=$fps_latlong sfield=geo d=$fps_dist}" *Works when used via custom component:* This works fine when the latlong value is passed via a custom component. We have a custom component which gets the location name via the query, looks up the corresponding lat/long coordinates stored in a TSV file and passes the coordinates to the query. *Custom component config:* centroids.tsv fps_where fps_latitude fps_longitude fps_latlong fps_dist 48.2803 1.0 *Custom component query:* fl=*,score&rows=10&*fps_where="new york, ny"*&qt=findperson&fps_latlong=26.22084,-80.29&fps_dist=.10&fps_fname=peter Is it a bug? -- View this message in context: http://lucene.472066.n3.nabble.com/SOLR-4-3-0-How-to-make-fq-optional-tp4066592p4066862.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: SOLR 4.3.0 - How to make fq optional?
Ok.. I removed all my custom components from the findperson request handler..

lucene explicit 10 AND person_name_all_i 50 32 *:*
{!switch case='*:*' default=$fq_bbox v=$fps_latlong}
_query_:"{!bbox pt=$fps_latlong sfield=geo d=$fps_dist}"
query debug

My query: select?fl=*,score&rows=10&qt=findperson&fps_latlong=42.3482,-75.1890 The above query just returns everything back from Solr (it should only return results corresponding to the lat and long values passed in the query)... I even tried changing the below hack, but got the same results. {!bbox pt=$fps_latlong sfield=geo d=$fps_dist} Not sure if I am missing something... -- View this message in context: http://lucene.472066.n3.nabble.com/SOLR-4-3-0-How-to-make-fq-optional-tp4066592p4066872.html Sent from the Solr - User mailing list archive at Nabble.com.
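For reference, a sketch of how the optional-fq hack discussed in this thread is typically wired into solrconfig.xml - the handler name and parameter names mirror the ones used in this message, so treat this as an illustration rather than the poster's exact config:

```xml
<requestHandler name="/findperson" class="solr.SearchHandler">
  <lst name="appends">
    <!-- switch parser: if fps_latlong is missing or blank, the bare "case"
         matches and the filter becomes *:* (match everything); otherwise
         the $fq_bbox param is parsed as the real filter -->
    <str name="fq">{!switch case='*:*' default=$fq_bbox v=$fps_latlong}</str>
  </lst>
  <lst name="invariants">
    <!-- the spatial filter, only reached when fps_latlong is non-empty -->
    <str name="fq_bbox">_query_:"{!bbox pt=$fps_latlong sfield=geo d=$fps_dist}"</str>
  </lst>
</requestHandler>
```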
Re: Support for Mongolian language
Check out.. wiki.apache.org/solr/LanguageAnalysis For some reason the above site takes a long time to open.. -- View this message in context: http://lucene.472066.n3.nabble.com/Support-for-Mongolian-language-tp4066871p4066874.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: Query syntax error: Cannot parse ....
# has a special meaning in a URL.. You need to encode it.. http://lucene.apache.org/core/3_6_0/queryparsersyntax.html#Escaping%20Special%20Characters. -- View this message in context: http://lucene.472066.n3.nabble.com/Query-syntax-error-Cannot-parse-tp4066560p4066879.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: Grouping results based on the field which matched the query
Not sure if you are looking for this.. http://wiki.apache.org/solr/FieldCollapsing -- View this message in context: http://lucene.472066.n3.nabble.com/Grouping-results-based-on-the-field-which-matched-the-query-tp4065670p4066882.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: SOLR 4.3.0 - How to make fq optional?
I totally missed that..Sorry about that :)...It seems to work fine now... -- View this message in context: http://lucene.472066.n3.nabble.com/SOLR-4-3-0-How-to-make-fq-optional-tp4066592p4066891.html Sent from the Solr - User mailing list archive at Nabble.com.
java.lang.IllegalAccessError when invoking protected method from another class in the same package path but different jar.
Hi, I am overriding the query component to create a custom component. I am using _responseDocs from org.apache.solr.handler.component.ResponseBuilder to get the values, and I have my component in the same package (org.apache.solr.handler.component) so it can access the _responseDocs field. Everything works fine when I run the test for this component, but I get the below error when I package the custom component in a jar and place it in the lib directory (inside solr/lib - using the basic Jetty configuration). I assume this is because different class loaders load the classes at runtime. Is there a way to resolve this? java.lang.IllegalAccessError: tried to access field org.apache.solr.handler.component.ResponseBuilder._responseDocs from class org.apache.solr.handler.component.WPFastDistributedQueryComponent
java.lang.RuntimeException: java.lang.IllegalAccessError: tried to access field org.apache.solr.handler.component.ResponseBuilder._responseDocs from class org.apache.solr.handler.component.CustomComponent at org.apache.solr.servlet.SolrDispatchFilter.sendError(SolrDispatchFilter.java:670) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:380) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:155) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1307) at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:453) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137) at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:560) at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231) at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1072) at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:382) at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193) at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1006) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135) at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255) at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154) at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116) at org.eclipse.jetty.server.Server.handle(Server.java:365) at org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:485) at org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53) at org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:926) at org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:988) at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:635) at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235) at org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72) at org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264) at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608) at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543) at java.lang.Thread.run(Thread.java:722) Caused by: java.lang.IllegalAccessError: tried to access field org.apache.solr.handler.component.ResponseBuilder._responseDocs from class org.apache.solr.handler.component.WPFastDistributedQueryComponent at org.apache.solr.handler.component.WPFastDistributedQueryComponent.handleResponses(WPFastDistributedQueryComponent.java:131) at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:311) at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135) at 
org.apache.solr.core.SolrCore.execute(SolrCore.java:1816) at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:656) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:359) ... 26 more -- View this message in context: http://lucene.472066.n3.nabble.com/java-lang-IllegalAccessError-when-invoking-protected-method-from-another-class-in-the-same-package-p-tp4066904.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: java.lang.IllegalAccessError when invoking protected method from another class in the same package path but different jar.
My assumptions were right :) I was able to fix this error by copying all my custom jars into the webapp/WEB-INF/lib directory and everything started working -- View this message in context: http://lucene.472066.n3.nabble.com/java-lang-IllegalAccessError-when-invoking-protected-method-from-another-class-in-the-same-package-p-tp4066904p4066906.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: java.lang.IllegalAccessError when invoking protected method from another class in the same package path but different jar.
Hoss, thanks a lot for the explanation. We override most of the methods of the query component (prepare, handleResponses, finishStage etc..) to incorporate custom logic; we set the _responseDocs values based on that logic (after filtering out a few documents) and then call the parent (super) query component method with the modified _responseDocs. That's the main reason we are using the _responseDocs variable as is.. -- View this message in context: http://lucene.472066.n3.nabble.com/java-lang-IllegalAccessError-when-invoking-protected-method-from-another-class-in-the-same-package-p-tp4066904p4067086.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: solr 4.3: write.lock is not removed
How are you indexing the documents? Are you using an indexing program? The below post discusses the same issue.. http://lucene.472066.n3.nabble.com/removing-write-lock-file-in-solr-after-indexing-td3699356.html -- View this message in context: http://lucene.472066.n3.nabble.com/solr-4-3-write-lock-is-not-removed-tp4066908p4067101.html Sent from the Solr - User mailing list archive at Nabble.com.
RE: Support for Mongolian language
Please create a new topic for any new questions.. -- View this message in context: http://lucene.472066.n3.nabble.com/Support-for-Mongolian-language-tp4066871p4067374.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: How can a Tokenizer be CoreAware?
I am not an expert on this one, but I would try implementing the SolrCoreAware interface and overriding the inform method to make it core aware.. something like ... public void inform( SolrCore core ) -- View this message in context: http://lucene.472066.n3.nabble.com/How-can-a-Tokenizer-be-CoreAware-tp4066735p4067377.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: updating docs in solr cloud hangs
As far as I know, partial update in Solr 4.x doesn't partially update the Lucene index; instead it removes the document from the index and indexes an updated one. The underlying Lucene always requires deleting the old document and indexing the new one.. We usually don't use partial update when updating a huge number of documents. It is really useful for a small number of documents (mostly during push indexing)... -- View this message in context: http://lucene.472066.n3.nabble.com/updating-docs-in-solr-cloud-hangs-tp4067388p4067416.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: Find rows within range of other rows
Looks like what you need is pivot facets... a range within a range. http://wiki.apache.org/solr/SimpleFacetParameters#Pivot_.28ie_Decision_Tree.29_Faceting Pivot faceting allows you to facet within the results of the parent facet:

Category1 (17)
  item 1 (9)
  item 2 (8)
Category2 (6)
  item 3 (6)
Category3 (23)
  item 4 (23)

If you are using an old version of Solr, you can follow these steps.. http://loose-bits.com/2011/09/20/pivot-facets-solr.html -- View this message in context: http://lucene.472066.n3.nabble.com/Find-rows-within-range-of-other-rows-tp4067115p4067428.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: installing & configuring solr over ms sql server - tutorial needed
solrconfig.xml - the lib directives specified in the configuration file are the locations where Solr will look for jars. solr.xml - in a multicore setup you can have a sharedLib for all the cores; you can add the JDBC driver to the sharedLib folder. -- View this message in context: http://lucene.472066.n3.nabble.com/installing-configuring-solr-over-ms-sql-server-tutorial-needed-tp4067344p4067465.html Sent from the Solr - User mailing list archive at Nabble.com.
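A sketch of both options (the directory paths and core names are examples, not from the original message):

```xml
<!-- solrconfig.xml: per-core lib directives; dir is resolved relative
     to the core's instanceDir -->
<lib dir="../../lib" regex=".*\.jar"/>

<!-- solr.xml (legacy style, Solr 4.x): a sharedLib visible to every core;
     drop the JDBC driver jar into that folder -->
<solr persistent="true" sharedLib="lib">
  <cores adminPath="/admin/cores">
    <core name="core0" instanceDir="core0"/>
    <core name="core1" instanceDir="core1"/>
  </cores>
</solr>
```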
Re: installing & configuring solr over ms sql server - tutorial needed
Why don't you follow this tutorial to set up Solr on Tomcat.. http://wiki.apache.org/solr/SolrTomcat -- View this message in context: http://lucene.472066.n3.nabble.com/installing-configuring-solr-over-ms-sql-server-tutorial-needed-tp4067344p4067488.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: Solr query performance tool
You can use this tool to analyze the logs.. https://github.com/dfdeshom/solr-loganalyzer We use SolrMeter for performance / stress testing. https://code.google.com/p/solrmeter/ -- View this message in context: http://lucene.472066.n3.nabble.com/Solr-query-performance-tool-tp4066900p4067869.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: Disable all caches in solr
You can also check out this link. http://lucene.472066.n3.nabble.com/Is-there-a-way-to-remove-caches-in-SOLR-td4061216.html#a4061219 -- View this message in context: http://lucene.472066.n3.nabble.com/Disable-all-caches-in-solr-tp4066517p4067870.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: Custom Response Handler
You can refer to this post on using DocTransformers.. http://java.dzone.com/news/solr-40-doctransformers-first -- View this message in context: http://lucene.472066.n3.nabble.com/Custom-Response-Handler-tp4067558p4067926.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: Creating a new core programmicatically in solr
I would use the below method to create a new core on the fly... CoreAdminResponse resp = CoreAdminRequest.createCore("name", "instanceDir", server); http://lucene.apache.org/solr/4_3_0/solr-solrj/org/apache/solr/client/solrj/response/CoreAdminResponse.html -- View this message in context: http://lucene.472066.n3.nabble.com/Creating-a-new-core-programmicatically-in-solr-tp4068132p4068134.html Sent from the Solr - User mailing list archive at Nabble.com.
RE: Creating a new core programmicatically in solr
Did you try escaping the double quotes when making the HTTP request? Something like: HttpGet request = new HttpGet(getSolrClient().getBaseUrl() + ADMIN_CORE_CONSTRUCT + "?action=" + action + "&name=" + name); HttpResponse response = client.execute(request); -- View this message in context: http://lucene.472066.n3.nabble.com/Creating-a-new-core-programmicatically-in-solr-tp4068132p4068139.html Sent from the Solr - User mailing list archive at Nabble.com.
java.lang.NumberFormatException when adding latitude,longitude using DIH
I am trying to combine latitude and longitude data extracted from a text file using the data import handler, but I get the below error whenever I run my import with the geo (lat,long) field.. The import works fine without the geo field. I assume this error is due to the fact that I am not converting the values to double prior to concatenation.. Do I need to convert the values to double (when concatenating ${x.addr_latitude},${x.addr_longitude})? What's the best way to do that? *Error:* org.apache.solr.common.SolrException: ERROR: [doc=S|0004b7c7-b9c3-4eab-856f-cc233f201ad7] Error adding field 'geo'='[33.7209548950195, 34.474838],[-117.176193237305, -117.573463]' msg=For input string: "[33.7209548950195" at org.apache.solr.update.DocumentBuilder.toDocument(DocumentBuilder.java:306) at org.apache.solr.update.AddUpdateCommand.getLuceneDocument(AddUpdateCommand.java:73) at org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:199) at org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:69) at org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:51) at org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:530) at org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:396) at org.apache.solr.update.processor.LogUpdateProcessor.processAdd(LogUpdateProcessorFactory.java:100) at org.apache.solr.handler.dataimport.SolrWriter.upload(SolrWriter.java:70) at org.apache.solr.handler.dataimport.DataImportHandler$1.upload(DataImportHandler.java:235) at org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:500) at org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:404) at org.apache.solr.handler.dataimport.DocBuilder.doFullDump(DocBuilder.java:319) at org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:227) at 
org.apache.solr.handler.dataimport.DataImporter.doFullImport(DataImporter.java:422) at org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:487) at org.apache.solr.handler.dataimport.DataImporter$1.run(DataImporter.java:468) Caused by: java.lang.NumberFormatException: For input string: "[33.7209548950195" at sun.misc.FloatingDecimal.readJavaFormatString(FloatingDecimal.java:1241) at java.lang.Double.parseDouble(Double.java:540) at com.spatial4j.core.io.ParseUtils.parsePointDouble(ParseUtils.java:101) at com.spatial4j.core.io.ParseUtils.parseLatitudeLongitude(ParseUtils.java:136) at com.spatial4j.core.io.ParseUtils.parseLatitudeLongitude(ParseUtils.java:118) at com.spatial4j.core.io.ShapeReadWriter.readLatCommaLonPoint(ShapeReadWriter.java:162) at com.spatial4j.core.io.ShapeReadWriter.readStandardShape(ShapeReadWriter.java:146) at com.spatial4j.core.io.ShapeReadWriter.readShape(ShapeReadWriter.java:46) at com.spatial4j.core.context.SpatialContext.readShape(SpatialContext.java:195) at org.apache.solr.schema.AbstractSpatialFieldType.parseShape(AbstractSpatialFieldType.java:142) at org.apache.solr.schema.AbstractSpatialFieldType.createFields(AbstractSpatialFieldType.java:118) at org.apache.solr.update.DocumentBuilder.addField(DocumentBuilder.java:186) at org.apache.solr.update.DocumentBuilder.toDocument(DocumentBuilder.java:257) ... 16 more -- View this message in context: http://lucene.472066.n3.nabble.com/java-lang-NumberFormatException-when-adding-latitude-longitude-using-DIH-tp4068223.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: java.lang.NumberFormatException when adding latitude,longitude using DIH
Thanks a lot for your response Hoss.. I thought about using a ScriptTransformer too, but just thought of checking if there is any other way to do it.. Btw, for some reason the values are getting overridden even though it's a multivalued field.. Not sure where I am going wrong!!! For the latlong values - 33.7209548950195,34.474838 -117.176193237305,-117.573463 - only the below value is getting indexed.. 34.474838,-117.573463 *Script transformer:* -- View this message in context: http://lucene.472066.n3.nabble.com/java-lang-NumberFormatException-when-adding-latitude-longitude-using-DIH-tp4068223p4068401.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: java.lang.NumberFormatException when adding latitude,longitude using DIH
That was a very silly mistake. I forgot to add the values to an array before putting it inside the row..the below code works.. Thanks a lot... -- View this message in context: http://lucene.472066.n3.nabble.com/java-lang-NumberFormatException-when-adding-latitude-longitude-using-DIH-tp4068223p4068410.html Sent from the Solr - User mailing list archive at Nabble.com.
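A sketch of a working ScriptTransformer along those lines - the fix is to collect the lat/long pairs into an ArrayList and put the whole list on the row, so the multivalued geo field keeps every pair instead of each put overwriting the previous one (the function, column, and field names here are illustrative):

```xml
<dataConfig>
  <script><![CDATA[
    function buildLatLong(row) {
      var lats = row.get('addr_latitude');    // multivalued column
      var lngs = row.get('addr_longitude');
      var geo  = new java.util.ArrayList();
      // pair up each latitude with its longitude as "lat,lng"
      for (var i = 0; i < lats.size(); i++) {
        geo.add(lats.get(i) + ',' + lngs.get(i));
      }
      // put the whole list at once; putting values one at a time
      // just replaces the previous value
      row.put('geo', geo);
      return row;
    }
  ]]></script>
  <!-- reference it on the entity with transformer="script:buildLatLong" -->
</dataConfig>
```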
Re: Solr 4.3 with Internationalization.
Check out these: http://stackoverflow.com/questions/5549880/using-solr-for-indexing-multiple-languages http://wiki.apache.org/solr/LanguageAnalysis#French French stop words file (sample): http://trac.foswiki.org/browser/trunk/SolrPlugin/solr/multicore/conf/stopwords-fr.txt Solr includes three stemmers for French: one via solr.SnowballPorterFilterFactory, an alternative stemmer (Solr 3.1) via solr.FrenchLightStemFilterFactory, and an even less aggressive approach (Solr 3.1) via solr.FrenchMinimalStemFilterFactory. Solr can also remove elisions via solr.ElisionFilterFactory, and Lucene includes an example stopword list. -- View this message in context: http://lucene.472066.n3.nabble.com/Solr-4-3-with-Internationalization-tp4068368p4068426.html Sent from the Solr - User mailing list archive at Nabble.com.
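A field type sketch pulling those pieces together (the stopwords and elision file paths are examples - use whichever files you actually ship with your config):

```xml
<fieldType name="text_fr" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <!-- strip l', d', qu' etc. before lowercasing and stemming -->
    <filter class="solr.ElisionFilterFactory" ignoreCase="true"
            articles="lang/contractions_fr.txt"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.StopFilterFactory" ignoreCase="true"
            words="stopwords-fr.txt"/>
    <!-- the lighter of the French stemmers mentioned above -->
    <filter class="solr.FrenchLightStemFilterFactory"/>
  </analyzer>
</fieldType>
```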
Re: data-import problem
A Solr index does not need a unique key, but almost all indexes use one. http://wiki.apache.org/solr/UniqueKey Try the below query, passing the id as id instead of titleid.. A proper data-import config will look like this: -- View this message in context: http://lucene.472066.n3.nabble.com/data-import-problem-tp4068345p4068447.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: search for docs where location not present
select?q=-location_field:* worked for me -- View this message in context: http://lucene.472066.n3.nabble.com/search-for-docs-where-location-not-present-tp4068444p4068452.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: Indexing Heavy dataset
I used to face this issue more often when I used CachedSqlEntityProcessor in DIH. I then started indexing in batches (by including a where condition) instead of indexing everything at once.. You can refer to the other available options for the MySQL driver: http://dev.mysql.com/doc/refman/5.0/en/connector-j-reference-configuration-properties.html I also included the below settings in the datasource.. -- View this message in context: http://lucene.472066.n3.nabble.com/Indexing-Heavy-dataset-tp4068279p4068460.html Sent from the Solr - User mailing list archive at Nabble.com.
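One datasource setting that commonly helps with heavy imports is batchSize="-1", which makes DIH pass fetchSize=Integer.MIN_VALUE to the MySQL driver so rows are streamed instead of the whole result set being buffered in memory. A sketch (the URL and credentials are placeholders):

```xml
<dataSource type="JdbcDataSource"
            driver="com.mysql.jdbc.Driver"
            url="jdbc:mysql://localhost:3306/mydb"
            user="db_user"
            password="db_pass"
            batchSize="-1"
            readOnly="true"/>
```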
Re: search for docs where location not present
I tested using the new geospatial class; it works fine with the new spatial type class="solr.SpatialRecursivePrefixTreeFieldType" http://wiki.apache.org/solr/SolrAdaptersForLuceneSpatial4 You can dynamically set the boolean value using a script transformer when indexing the data; you don't really need to store it in the DB. -- View this message in context: http://lucene.472066.n3.nabble.com/search-for-docs-where-location-not-present-tp4068444p4068462.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: Configuring seperate db-data-config.xml per shard
Might not be a solution, but I had asked a similar question before..Check out this thread.. http://lucene.472066.n3.nabble.com/Is-there-a-way-to-load-multiple-schema-when-using-zookeeper-td4058358.html You can create multiple collections and each collection can use completely different sets of configs. You can then have a cloud cluster for performing searches across multiple collections. -- View this message in context: http://lucene.472066.n3.nabble.com/Configuring-seperate-db-data-config-xml-per-shard-tp4068383p4068466.html Sent from the Solr - User mailing list archive at Nabble.com.
Search across multiple collections
I am not sure of the best way to search across multiple collections using Solr 4.3. Each collection has its own config files and I perform various operations on the collections individually, but when I search I want the search to happen across all collections. Can someone let me know how to perform a search on multiple collections? Do I need to use sharding again? -- View this message in context: http://lucene.472066.n3.nabble.com/Search-across-multiple-collections-tp4068469.html Sent from the Solr - User mailing list archive at Nabble.com.
How to update a particular document on multi-shards configuration?
I have 5 shards that have different data indexed in them (each document has a unique id). Now when I perform dynamic updates (push indexing) I need to update the document corresponding to the unique id, but I won't know which core that document is in. I can do a search across all the shards for that unique id and get the document back, but I am not sure how to get the core information corresponding to that document (unless I index that info; this method also requires one extra search to find the document). Is there a way to automatically push the document to the proper core based on the unique id? I am not using SolrCloud yet, just basic sharding. I know this might not be possible without the SolrCloud feature, just thought of getting your inputs.. -- View this message in context: http://lucene.472066.n3.nabble.com/How-to-update-a-particular-document-on-multi-shards-configuration-tp4068476.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: data-import problem
The below error clearly says that you have declared a unique id but that unique id is missing for some documents. org.apache.solr.common.SolrException: [doc=null] missing required field: nameid This is mainly because you are trying to import 2 tables into a document without any relationship between their data. Table 1 has the nameid (unique key), but table 2 has to be joined with table 1 to form a relationship between the 2 tables. You can't just dump the values, since table 2 might have more rows than table 1 (and table 1 has the unique id). I am not sure of your table structure; I am assuming there is a key (e.g. nameid in the title table) that can be used to join the name and title tables. Try something like this.. -- View this message in context: http://lucene.472066.n3.nabble.com/data-import-problem-tp4068345p4068636.html Sent from the Solr - User mailing list archive at Nabble.com.
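A sketch of the joined entities (the table and column names are guesses based on this thread, not the poster's actual schema):

```xml
<document>
  <entity name="name" query="SELECT nameid, name FROM name_table">
    <field column="nameid" name="nameid"/>
    <field column="name"   name="name"/>
    <!-- child rows are fetched per parent row, joined on nameid -->
    <entity name="title"
            query="SELECT title FROM title_table WHERE nameid='${name.nameid}'">
      <field column="title" name="title"/>
    </entity>
  </entity>
</document>
```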
Re: Group by multiple fields
Not sure if this solution will work for you, but this is what I did to implement nested grouping in Solr 3.x. The simple idea behind it is to concatenate the 2 fields into a single field at index time and group on that field.. http://stackoverflow.com/questions/12202023/field-collapsing-grouping-how-to-make-solr-return-intersection-of-2-resultse -- View this message in context: http://lucene.472066.n3.nabble.com/Group-by-multiple-fields-tp4068518p4068638.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: data-import problem
You don't really need to have a relationship, but the unique id should be unique in a document. I had mentioned the relationship because the unique key was present in only one table, not the other.. Check out this link for more information on importing data from multiple tables. http://lucene.472066.n3.nabble.com/Create-index-on-few-unrelated-table-in-Solr-td4068054.html -- View this message in context: http://lucene.472066.n3.nabble.com/data-import-problem-tp4068345p4068650.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: data-import problem
Not sure if I understand your situation.. How would you relate the data between the 2 tables if there is no relationship? Are you trying to just dump random values from the 2 tables into a document? Consider:

Table1:
Name    id
peter   1
john    2
mike    3

Table2:
Title      TitleId
CEO        111
developer  222
Officer    333
Cleaner    444
IT         555

Your document will then look something like (id=1, name=peter, title=CEO), but Peter is a cleaner and not a CEO. -- View this message in context: http://lucene.472066.n3.nabble.com/data-import-problem-tp4068345p4068677.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: nutch 1.4, solr 3.4 configuration error
Can you check if you have the correct SolrJ client library version in both Nutch and the Solr server. -- View this message in context: http://lucene.472066.n3.nabble.com/nutch-1-4-solr-3-4-configuration-error-tp4068724p4068733.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: Solr 4.1 over Websphere errors
As suggested by Shawn, try changing the JVM; this might resolve your issue. I had seen the 'java.lang.VerifyError' before (not specific to Solr) when compiling code using JDK 1.7. After some research I figured out that code compiled using Java 1.7 requires stack map frame instructions. If you wish to modify Java 1.7 class files, you need to use ClassWriter.COMPUTE_FRAMES or MethodVisitor.visitFrame(). I was able to work around the issue by using the Java option "-XX:-UseSplitVerifier".. -- View this message in context: http://lucene.472066.n3.nabble.com/Solr-4-1-over-Websphere-errors-tp4068715p4068735.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: Auto-Suggest, spell check dictionary replication to slave issue
Seems like this feature is yet to be implemented.. https://issues.apache.org/jira/browse/SOLR-866 -- View this message in context: http://lucene.472066.n3.nabble.com/Auto-Suggest-spell-check-dictionary-replication-to-slave-issue-tp4068562p4068739.html Sent from the Solr - User mailing list archive at Nabble.com.
How to ignore folder collection1 when running single instance of SOLR?
I am in the process of migrating Solr 3.x to 4.3.0. I am trying to figure out a way to run a single instance of Solr without modifying the directory structure. Is it mandatory to have a folder named collection1 in order for the new Solr server to work? I see that by default it always looks for the config files in the collection1 folder. Is there a way to force it to ignore the collection1 directory? My directory structure is as below:

SOLR
|___ conf
|___ lib
|___ index

I tried setting home (-Dsolr.solr.home) to point to the solr directory, but it searches for a folder named collection1 under the solr directory. -- View this message in context: http://lucene.472066.n3.nabble.com/How-to-ignore-folder-collection1-when-running-single-instance-of-SOLR-tp4069416.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: How to ignore folder collection1 when running single instance of SOLR?
Not sure if this is the right way; I just moved solr.xml outside of the solr directory and made changes to solr.xml to make it point to the solr directory, and it seems to work fine as before. Can someone confirm if this is the right way to configure things when running a single instance of Solr? -- View this message in context: http://lucene.472066.n3.nabble.com/How-to-ignore-folder-collection1-when-running-single-instance-of-SOLR-tp4069416p4069423.html Sent from the Solr - User mailing list archive at Nabble.com.
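For reference, a single-core legacy solr.xml along those lines might look like the sketch below (the core name and instanceDir are examples; defaultCoreName lets existing URLs that don't mention a core name keep working):

```xml
<solr persistent="true">
  <cores adminPath="/admin/cores" defaultCoreName="mycore">
    <!-- instanceDir is the directory containing conf/ and the index -->
    <core name="mycore" instanceDir="."/>
  </cores>
</solr>
```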
SOLR 4.3.0 synonym filter - parse error - SOLR 4.3.0
For some reason I am getting the below error when parsing synonyms using the synonyms file. Synonyms file: http://www.pastebin.ca/2395108 The server encountered an internal error ({msg=SolrCore 'solr' is not available due to init failure: java.io.IOException: Error parsing synonyms file:,trace=org.apache.solr.common.SolrException: SolrCore 'solr' is not available due to init failure: java.io.IOException: Error parsing synonyms file: at org.apache.solr.core.CoreContainer.getCore(CoreContainer.java:1212) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:248) at I couldn't really find any issue with the synonyms file..Can someone let me know where I am going wrong? -- View this message in context: http://lucene.472066.n3.nabble.com/SOLR-4-3-0-synonym-filter-parse-error-SOLR-4-3-0-tp4069469.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: SOLR 4.3.0 synonym filter - parse error - SOLR 4.3.0
Thanks a lot for your response, Jack. I figured out the issue: the file is generated by a Perl program, and it seems to be a bug in that program. Thanks anyway. -- View this message in context: http://lucene.472066.n3.nabble.com/Re-SOLR-4-3-0-synonym-filter-parse-error-SOLR-4-3-0-tp4069487p4069500.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: How to Reach LukeRequestHandler From Solrj?
Try the code below; it points the request at the LukeRequestHandler (METHOD comes from org.apache.solr.client.solrj.SolrRequest):

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrRequest.METHOD;
import org.apache.solr.client.solrj.response.QueryResponse;

SolrQuery query = new SolrQuery();
query.setQueryType("/admin/luke");
QueryResponse rsp = server.query(query, METHOD.GET);
System.out.println(rsp.getResponse());

-- View this message in context: http://lucene.472066.n3.nabble.com/How-to-Reach-LukeRequestHandler-From-Solrj-tp4069280p4069512.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: SOLR-4641: Schema now throws exception on illegal field parameters.
I think if you use validate=false in schema.xml, at the field or dynamicField level, Solr will skip the validation. I think this only works in Solr 4.3 and above. -- View this message in context: http://lucene.472066.n3.nabble.com/SOLR-4641-Schema-now-throws-exception-on-illegal-field-parameters-tp4069622p4069688.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: Filtering down terms in suggest
I suppose you can use fq with the SpellCheckComponent, but I haven't tried it yet. https://issues.apache.org/jira/browse/SOLR-2010 -- View this message in context: http://lucene.472066.n3.nabble.com/Filtering-down-terms-in-suggest-tp4069627p4069690.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: How to ignore folder collection1 when running single instance of SOLR?
Erick, thanks a lot for your response. Just to confirm: even if I change the folder structure as below, I still need to use solr.xml. Am I right? Do you have any idea when the "discovery-based" core enumeration feature will be released? -- View this message in context: http://lucene.472066.n3.nabble.com/Ignore-folder-collection1-when-running-single-instance-of-SOLR-without-using-solr-xml-tp4069416p4069715.html Sent from the Solr - User mailing list archive at Nabble.com.
Best way to concatenate 2 array pairs - DIH?
I am trying to combine latitude and longitude data extracted from a text file using the DataImportHandler. One document can contain multiple latitudes/longitudes. My data is of the format [lat,lat], [long,long]. Example: [33.7209548950195, 34.474838], [-117.176193237305, -117.573463]. I am currently using a script transformer to concatenate each latitude with its longitude (to form lat/long pairs from the data above). It works fine, but I am not sure a script transformer is a good choice for indexing; I am afraid it might hurt indexing performance, since the script is not compiled code. Can someone let me know if there's another way to implement this? I was thinking of modifying LineEntityProcessor, but I'm not sure that's the right way to add new functionality. Is there any other way of implementing custom functionality? -- View this message in context: http://lucene.472066.n3.nabble.com/Best-way-to-concatenate-2-array-pairs-DIH-tp4069784.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: Best way to concatenate 2 array pairs - DIH?
Ok, I wrote a custom Java transformer as below. Can someone confirm if this is the right way?

import java.util.ArrayList;
import java.util.List;
import java.util.Map;

import org.apache.solr.handler.dataimport.Context;
import org.apache.solr.handler.dataimport.DataImporter;
import org.apache.solr.handler.dataimport.Transformer;

// Custom concatenation transformer
public class ConcatTransformer extends Transformer {

    @Override
    public Object transformRow(Map<String, Object> row, Context context) {
        List<Map<String, String>> fields = context.getAllEntityFields();
        List<String> arrayValue = new ArrayList<String>();
        String columnName = null;
        for (Map<String, String> field : fields) {
            // Check if concatField is specified for this field in data-config.xml
            String concat = field.get("concatField");
            if (concat != null) {
                String[] fieldNames = concat.split(",");
                if (fieldNames.length > 1) {
                    // Get the column name
                    columnName = field.get(DataImporter.COLUMN);
                    // Get the two source fields' values from the current row
                    List<String> value1 = (List<String>) row.get(fieldNames[0]);
                    List<String> value2 = (List<String>) row.get(fieldNames[1]);
                    // Pair up the values position by position
                    if (value1 != null && value2 != null
                            && !value1.isEmpty() && !value2.isEmpty()) {
                        for (int i = 0; i < value1.size(); i++) {
                            if (value1.get(i) != null && value2.get(i) != null
                                    && !value1.get(i).isEmpty() && !value2.get(i).isEmpty()) {
                                arrayValue.add(value1.get(i) + "," + value2.get(i));
                            }
                        }
                    }
                }
            }
        }
        // Put the concatenated pairs back in the current row
        if (columnName != null) {
            row.put(columnName, arrayValue);
        }
        return row;
    }
}

-- View this message in context: http://lucene.472066.n3.nabble.com/Best-way-to-concatenate-2-array-pairs-DIH-tp4069784p4069810.html Sent from the Solr - User mailing list archive at Nabble.com.
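The core pairing step in the transformer above can be reduced to a small pure function, which makes it easy to test outside DIH. A minimal sketch (class and method names are hypothetical):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class LatLonZip {

    // Zip parallel lists of latitudes and longitudes into "lat,lon" strings,
    // the comma-separated form Solr's spatial field types accept.
    public static List<String> zip(List<String> lats, List<String> lons) {
        List<String> pairs = new ArrayList<String>();
        int n = Math.min(lats.size(), lons.size());
        for (int i = 0; i < n; i++) {
            String lat = lats.get(i);
            String lon = lons.get(i);
            // Skip incomplete pairs rather than emitting a malformed point
            if (lat != null && lon != null && !lat.isEmpty() && !lon.isEmpty()) {
                pairs.add(lat + "," + lon);
            }
        }
        return pairs;
    }

    public static void main(String[] args) {
        // Prints the two lat,lon pairs built from the sample data
        System.out.println(zip(Arrays.asList("33.7209548950195", "34.474838"),
                               Arrays.asList("-117.176193237305", "-117.573463")));
    }
}
```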
Re: Solr 4.3 Spatial clustering?
Check this link: http://stackoverflow.com/questions/11319465/geoclusters-in-solr -- View this message in context: http://lucene.472066.n3.nabble.com/Solr-4-3-Spatial-clustering-tp4069941p4069986.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: shardkey
I suppose you can implement custom hashing by using the "_shard_" field. I am not sure about this, but I came across this approach some time back. At query time, you can specify the "shard.keys" parameter. -- View this message in context: http://lucene.472066.n3.nabble.com/shardkey-tp4069940p4069990.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: Filtering down terms in suggest
I would suggest taking the suggested string and issuing another query to Solr along with the filter parameter. -- View this message in context: http://lucene.472066.n3.nabble.com/Filtering-down-terms-in-suggest-tp4069627p4069997.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: What is wrong with this blank query?
Not sure what you are trying to achieve. I assume you are trying to return the documents that don't contain any value in a particular field. You can use the below query for that:

http://localhost:8983/solr/doc1/select?q=-text:*&debugQuery=on&defType=lucene

-- View this message in context: http://lucene.472066.n3.nabble.com/What-is-wrong-with-this-blank-query-tp4069995p4069998.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: Configuring Solr to connect to a SQL server instance
The below config file works fine with SQL Server. Make sure you are using the correct database and server names. -- View this message in context: http://lucene.472066.n3.nabble.com/Configuring-Solr-to-connect-to-a-SQL-server-instance-tp4070005p4070010.html Sent from the Solr - User mailing list archive at Nabble.com.
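The attached config did not survive the list archive; a typical data-config.xml for SQL Server over JDBC looks roughly like this (server, database, credentials, and the query are placeholders to substitute with your own):

```xml
<dataConfig>
  <dataSource type="JdbcDataSource"
              driver="com.microsoft.sqlserver.jdbc.SQLServerDriver"
              url="jdbc:sqlserver://localhost:1433;databaseName=mydb"
              user="solr"
              password="secret"/>
  <document>
    <entity name="item" query="SELECT id, name FROM items">
      <field column="id" name="id"/>
      <field column="name" name="name"/>
    </entity>
  </document>
</dataConfig>
```

The SQL Server JDBC driver jar also has to be on Solr's classpath (e.g. dropped in the core's lib directory).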
Re: Sorting by field is slow
From http://wiki.apache.org/solr/SolrPerformanceFactors: "If you do a lot of field based sorting, it is advantageous to add explicitly warming queries to the 'newSearcher' and 'firstSearcher' event listeners in your solrconfig which sort on those fields, so the FieldCache is populated prior to any queries being executed by your users." -- View this message in context: http://lucene.472066.n3.nabble.com/Sorting-by-field-is-slow-tp4070026p4070028.html Sent from the Solr - User mailing list archive at Nabble.com.
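A warming entry of that shape in solrconfig.xml might look like this; the sort field "price" is only an example, substitute the fields your users actually sort on:

```xml
<listener event="newSearcher" class="solr.QuerySenderListener">
  <arr name="queries">
    <!-- Sort on the hot sort fields so the FieldCache is populated
         before the new searcher starts serving user queries -->
    <lst>
      <str name="q">*:*</str>
      <str name="sort">price asc</str>
    </lst>
  </arr>
</listener>
```

A matching "firstSearcher" listener covers the cold-start case after a restart, when there is no previous searcher to warm from.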
Re: Dynamically create new fields
Dynamically adding fields to the schema has not been released yet: https://issues.apache.org/jira/browse/SOLR-3251 We used a dynamic field and a copy field to dynamically create facets. We had too many dynamic fields (retrieved from a database table) and had to make sure facets existed for the new fields. schema.xml example: This way we were able to access the facets using the field name followed by the keyword 'Facet'. For example, the name field has the facet field nameFacet. -- View this message in context: http://lucene.472066.n3.nabble.com/Dynamically-create-new-fields-tp4070029p4070031.html Sent from the Solr - User mailing list archive at Nabble.com.
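The schema snippet was stripped by the archive; a sketch of the kind of schema.xml entries described above (the '*Facet' suffix convention; field types and names are illustrative):

```xml
<!-- Catch-all dynamic fields coming from the database table -->
<dynamicField name="*" type="text_general" indexed="true" stored="true"/>
<!-- String-typed facet companion for each field, named <field>Facet -->
<dynamicField name="*Facet" type="string" indexed="true" stored="false"/>
<!-- Copy every incoming field into its facet companion -->
<copyField source="*" dest="*Facet"/>
```

With this in place, faceting on nameFacet gives untokenized facet values for whatever was indexed into name.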
The 'threads' parameter in DIH - SOLR 4.3.0
I see that the threads parameter has been removed from DIH in all versions starting with Solr 4.x. Can someone let me know the best way to run indexing in multi-threaded mode when using DIH now? Is there a way to do that? -- View this message in context: http://lucene.472066.n3.nabble.com/The-threads-parameter-in-DIH-SOLR-4-3-0-tp4070315.html Sent from the Solr - User mailing list archive at Nabble.com.
SOLR search performance - Linux vs Windows servers
Hi, I have Solr instances running on both Linux and Windows servers (same version, same index data). Search performance is better on the Windows box than on the Linux box: some queries take more than 10 seconds on Linux but just a second on Windows. Has anyone encountered this kind of issue before? Thanks, BB -- View this message in context: http://lucene.472066.n3.nabble.com/SOLR-search-performance-Linux-vs-Windows-servers-tp901069p901069.html Sent from the Solr - User mailing list archive at Nabble.com.
Question on dynamic fields
Hi, I am facing an issue with dynamic fields. I have 2 fields (UID and ID) on which I want to do whole-word search only, so I made those 2 fields of type 'string'. I also have a dynamic field with the textgen field type, as below. This dynamic field seems to capture all the data, including the data from the UID field. My issue is that I have a copyField called Text, which I use to copy all the necessary static fields plus all dynamic fields into that Text field (UID also seems to get copied, since I used * for the dynamic fields). This Text field is of field type textgen and hence has all kinds of analyzers applied to it. My question: is there a way to avoid UID and ID being copied into this Text field? I want to copy all other fields (including dynamic fields), but not ID and UID. My schema file looks like below, uid text Thanks, BB -- View this message in context: http://lucene.472066.n3.nabble.com/Question-on-dynamic-fields-tp904053p904053.html Sent from the Solr - User mailing list archive at Nabble.com.
Performance related question on DISMAX handler..
Hi, I just want to know if there will be any overhead or performance degradation if I use the DisMax search handler instead of the standard search handler. We are planning to index millions of documents and are not sure whether using DisMax will slow down search performance. It would be great if someone could share their thoughts. Thanks, BB -- View this message in context: http://lucene.472066.n3.nabble.com/Performance-related-question-on-DISMAX-handler-tp914892p914892.html Sent from the Solr - User mailing list archive at Nabble.com.