Solr 4.8 result page display changes and highlighting
Hi everyone, I just installed the Solr 4.8 release and am playing with the DIH and Velocity configuration. I am trying to change the result page to display more fields and to switch the format to tabular, since I have a number of rows to display on one page, if that is possible with the out-of-the-box configuration. I also tried the highlighting feature in 4.8 and it is not working out of the box. Has anyone run into this issue? Please advise; all help is appreciated in advance.
Re: Highlighting not working
Were you ever able to resolve this issue? I am having the same problem: highlighting is not working for me on Solr 4.8.
Solr Deleted Docs Issue
Hi, I am having an issue with my Solr setup. In my solrconfig I have set the merge factor to 10. Now consider the following situation: I have 200 documents in my index and I need to update all 200 of them. If I issue 20 commit operations in total, i.e. I update in batches of 10 docs, merging is done after every 10th update, so the maximum segment count I can have is 10, which is fine. However, even when merging happens the deleted docs are not cleared, and I end up with 100 deleted docs in the index. If this operation is done continuously I will end up with a large number of deleted docs, which will affect the performance of the queries I run against this Solr. Can anyone please tell me whether I have missed a config setting or whether this is expected behaviour?
Re: Solr Deleted Docs Issue
Hi, thanks Erick and Shawn for the reply. Just to clarify: the commit size of 10 was only an example; in production commits are handled via Solr's auto-commit feature. The requirement we have is to store around 20-30 lakh (2-3 million) docs, of which around 5-6 lakh get updated daily. What I have observed is that although the merge factor seems to work, we always end up with around 6 lakh deleted docs in the index daily. On optimizing, all these deleted docs are removed, and we benefit in both memory and query speed. But as I understand it that is only a short-term gain and the situation repeats itself daily. I fail to understand why these deleted docs are not removed from the index on merging. Is there good documentation that explains exactly how merging is done? What can I do to solve this problem other than optimizing?
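Not from the original thread, but as a sketch of the lighter-weight options SolrJ 4.x offered for reclaiming deletes without a full optimize (the exact overloads and the expungeDeletes parameter handling may differ slightly between 4.x releases; the URL is a placeholder):

    // Sketch (assumes a reachable Solr core at the given URL); SolrJ 4.x API.
    import java.io.IOException;
    import org.apache.solr.client.solrj.SolrServerException;
    import org.apache.solr.client.solrj.impl.HttpSolrServer;
    import org.apache.solr.client.solrj.request.AbstractUpdateRequest;
    import org.apache.solr.client.solrj.request.UpdateRequest;

    public class ReclaimDeletes {
        public static void main(String[] args) throws SolrServerException, IOException {
            HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr/mycore"); // hypothetical URL

            // Option 1: a commit with expungeDeletes=true asks Lucene to merge away
            // segments that are mostly deletes, without rewriting the whole index.
            UpdateRequest expunge = new UpdateRequest();
            expunge.setAction(AbstractUpdateRequest.ACTION.COMMIT, true, true);
            expunge.setParam("expungeDeletes", "true");
            expunge.process(server);

            // Option 2: a partial optimize down to N segments (cheaper than optimizing to 1).
            server.optimize(true, true, 5); // waitFlush, waitSearcher, maxSegments
        }
    }

Both are still heavier than a normal commit, so they are usually run off-peak rather than on every update batch.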
Facing issue while implementing connection pooling with solr
Hi, I have a requirement where I want to limit the number of concurrent connections to Solr to, say, 50. So I am trying to configure connection pooling on the HttpClient that is then passed to the HttpSolrServer object. Please find the code below:

    HttpClient httpclient = new DefaultHttpClient();
    httpclient.getParams().setParameter(HttpClientUtil.PROP_MAX_CONNECTIONS_PER_HOST, 50);
    httpclient.getParams().setParameter(HttpClientUtil.PROP_MAX_CONNECTIONS, 50);
    HttpSolrServer httpSolrServer = new HttpSolrServer("solr url", httpclient);
    SolrQuery solrQuery = new SolrQuery("*:*");
    for (int i = 0; i < 1; i++) {
        long numFound = httpSolrServer.query(solrQuery).getResults().getNumFound();
        System.out.println(numFound);
    }

I was expecting only 50 connections to be created from my application to Solr, and then perhaps some slowness until the older connections are freed. However, at regular intervals a new connection is created even though there are connections in a waiting state on the Solr side, and those connections are never used again. Example output:

    tcp 0 0 192.168.0.241:22     192.168.0.109:54120 ESTABLISHED
    tcp 0 0 :::192.168.0.241:8190 :::192.168.0.109:47382 TIME_WAIT
    tcp 0 0 :::192.168.0.241:8190 :::192.168.0.109:47383 ESTABLISHED
    tcp 0 0 :::192.168.0.241:8190 :::192.168.0.109:47371 TIME_WAIT
    tcp 0 0 :::192.168.0.241:8190 :::192.168.0.109:47381 TIME_WAIT

where .109 is the IP where my application runs and .241 is the IP where Solr runs. In this case 192.168.0.109:47382 will never be used again and is eventually terminated by Solr. Am I going wrong somewhere? Any help will be highly appreciated.
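Not part of the original message — a minimal sketch, assuming SolrJ 4.x, of building the HttpClient through HttpClientUtil so that the PROP_MAX_CONNECTIONS* settings are actually read when the pooled client is constructed (this mirrors the HttpClientUtil.createClient approach that appears later in the CloudSolrServer thread); the URL is a placeholder:

    // Sketch: the PROP_MAX_CONNECTIONS* keys are consumed by HttpClientUtil when it
    // builds the client, so build the HttpClient through it rather than setting the
    // keys on a DefaultHttpClient afterwards.
    import org.apache.http.client.HttpClient;
    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.HttpClientUtil;
    import org.apache.solr.client.solrj.impl.HttpSolrServer;
    import org.apache.solr.common.params.ModifiableSolrParams;

    public class PooledClient {
        public static void main(String[] args) throws Exception {
            ModifiableSolrParams params = new ModifiableSolrParams();
            params.set(HttpClientUtil.PROP_MAX_CONNECTIONS, 50);
            params.set(HttpClientUtil.PROP_MAX_CONNECTIONS_PER_HOST, 50);

            HttpClient httpClient = HttpClientUtil.createClient(params); // pooled, limited client
            HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr/collection1", httpClient);

            long numFound = server.query(new SolrQuery("*:*")).getResults().getNumFound();
            System.out.println(numFound);
        }
    }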
Group.ngroup query slower with docValues
Hi, I am planning to use the docValues feature of Solr. I have added docValues="true" to a few fields in my schema on which there is heavy faceting and grouping. While I noticed a considerable improvement in faceting queries, I did not get any improvement in grouping queries; in fact, wherever I used group.ngroups=true in the query there was around a 3-4x degradation in performance. Are there any known issues or possible workarounds for this case?
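For reference (not from the original post), a grouping query of the shape being described might look like this in SolrJ 4.x; the URL and the "category" field name are hypothetical:

    // Sketch of a grouped query with ngroups enabled.
    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.HttpSolrServer;
    import org.apache.solr.client.solrj.response.QueryResponse;

    public class GroupNgroups {
        public static void main(String[] args) throws Exception {
            HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr/collection1");

            SolrQuery q = new SolrQuery("*:*");
            q.set("group", "true");
            q.set("group.field", "category");
            q.set("group.ngroups", "true"); // the part reported to slow down with docValues

            QueryResponse rsp = server.query(q);
            System.out.println(rsp.getGroupResponse().getValues().get(0).getNGroups());
        }
    }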
Re: Group.ngroup query slower with docValues
Hi Shawn, thanks for the reply. But the issue you pointed out talks about the general performance cost of ngroups. What I noticed is that after enabling docValues the performance of group.ngroups degraded by about 2-3x. This is stopping me from using docValues, which otherwise gives a tremendous improvement for faceting and grouping.
Solr cloud view shows core as down after using reload action
Hi, I have a Solr setup using an external ZooKeeper. Whenever there are schema changes to be made, I make the changes and upload the new config via the cloud-scripts. I then reload the core using http://localhost:8190/solr/admin/cores?action=RELOAD&core=coreName. Everything works fine after this: the new changes are reflected and the core is up for reading and writing. However, when I go to the Cloud view of the Solr admin UI it shows the core name in orange, which means the core is down. Can anybody help me resolve this issue?
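For context (not part of the original message), the same reload can be issued from SolrJ through the CoreAdmin API; the URL and core name are placeholders:

    // Sketch: reloading a core through the CoreAdmin API with SolrJ 4.x.
    import org.apache.solr.client.solrj.impl.HttpSolrServer;
    import org.apache.solr.client.solrj.request.CoreAdminRequest;
    import org.apache.solr.client.solrj.response.CoreAdminResponse;

    public class ReloadCore {
        public static void main(String[] args) throws Exception {
            // Point at the Solr root (not at a particular core) for CoreAdmin calls.
            HttpSolrServer admin = new HttpSolrServer("http://localhost:8190/solr");
            CoreAdminResponse rsp = CoreAdminRequest.reloadCore("coreName", admin);
            System.out.println("reload status: " + rsp.getStatus());
        }
    }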
Re: Solr cloud view shows core as down after using reload action
Hi Shalin, there is only one shard for this core. I even tried reloading it using the Collections API, and it is still shown in orange in the Cloud view. However, as I said, the core is working perfectly fine, so there are no exceptions in the logs. It is just the Cloud view that is bothering me.
Re: Solr cloud view shows core as down after using reload action
Hi Shalin, I am using Solr 4.3. The cluster state is as follows (pretty-printed and trimmed for readability; every collection entry has the same shape and the original paste was truncated):

    znode: /clusterstate.json
      version: 186, aversion: 0, children_count: 0, cversion: 0,
      czxid: 39, mzxid: 7446, pzxid: 39, dataLength: 34472, ephemeralOwner: 0,
      ctime: Mon Nov 18 12:30:53 IST 2013 (1384758053678),
      mtime: Mon Nov 18 14:22:00 IST 2013 (1384764720957)

    data — one representative collection:
      "TES-3313973552": {
        "shards": {"shard1": {
          "range": null,
          "state": "active",
          "replicas": {"192.168.0.241:8190_solr_TES-3313973552": {
            "shard": "shard1",
            "state": "active",
            "core": "TES-3313973552",
            "collection": "TES-3313973552",
            "node_name": "192.168.0.241:8190_solr",
            "base_url": "http://192.168.0.241:8190/solr",
            "leader": "true",
            "router": "implicit"}}}}}
      ... ABC-3573014985, S4-1699783846, TES-123456789, NEW-2887178148, -123-43ashd421,
      CHA-1302844885, TES-1247195258, S1-904240846 follow the same structure, all single-shard
      with state "active", spread across nodes 192.168.0.241:8190 and 192.168.0.197:8190 (truncated)
Re: Solr cloud view shows core as down after using reload action
Hi Shalin, core ONL-3117132084 is the one that was reloaded.
Re: Solr cloud view shows core as down after using reload action
Hi Shalin, thanks a lot. Yes, it is the same issue. Due to some restrictions I cannot migrate my product to Solr 4.5 right now, but I will test it offline and post the result here.
Issues faced after docValues migration
Hi, I am using Solr 4.3 and planning to use the docValues feature introduced in Solr 4.2. Although I see a significant improvement in facet and group queries, there is a degradation in group.facet and group.ngroups queries. Has anybody faced a similar issue? Are there any workarounds?
Is it possible to find a leader from a list of cores in solr via java code
Hi, I have a setup of 1 leader and 1 replica, and I have a requirement where I need to find the leader core of the collection. Is there an API in SolrJ by means of which this can be achieved?
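Not part of the original question, but as a sketch of one way this could be done in SolrJ 4.x via the cluster state held in ZooKeeper (method names may vary slightly between 4.x releases; the ZooKeeper address, collection, and shard names are placeholders):

    // Sketch: look up the shard leader from the ZooKeeper cluster state.
    import org.apache.solr.client.solrj.impl.CloudSolrServer;
    import org.apache.solr.common.cloud.Replica;
    import org.apache.solr.common.cloud.ZkStateReader;

    public class FindLeader {
        public static void main(String[] args) throws Exception {
            CloudSolrServer server = new CloudSolrServer("zkhost:2181"); // hypothetical ZK address
            server.connect(); // forces the ZkStateReader to be created and populated

            ZkStateReader reader = server.getZkStateReader();
            // Blocks briefly until a leader is known for the given collection/shard.
            Replica leader = reader.getLeaderRetry("collection1", "shard1");

            System.out.println("leader core URL: " + leader.getStr(ZkStateReader.BASE_URL_PROP)
                    + "/" + leader.getStr(ZkStateReader.CORE_NAME_PROP));
        }
    }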
Re: Is it possible to find a leader from a list of cores in solr via java code
Hi, I have a requirement where I want to write to the leader and read from the replica. The reason is that if a write request is sent to the replica, the replica relays it to the leader, and the leader then relays it to all the replicas. Writing directly to the leader will help me save some network traffic, as my application performs continuous writes.
Re: Is it possible to find a leader from a list of cores in solr via java code
Hi Erick, I just wanted to check that you got my concern right. If I send some documents to the replica core, won't it first have to send the documents to the leader core, which in turn would send them back to the replica cores? If yes, then this leads to additional network traffic, which could be avoided by sending the documents directly to the leader. Please correct me if I have got the concept wrong. Any help is appreciated. Thanks, Vicky
Re: Is it possible to find a leader from a list of cores in solr via java code
Hi, as per the suggestions above I shifted my focus to using CloudSolrServer. In terms of sending updates to the leaders and reducing network traffic it works great. But one problem I faced with CloudSolrServer is that it opens too many connections, as many as five thousand. My code is as follows:

    ModifiableSolrParams params = new ModifiableSolrParams();
    params.set(HttpClientUtil.PROP_MAX_CONNECTIONS, 3);
    params.set(HttpClientUtil.PROP_MAX_CONNECTIONS_PER_HOST, 2);
    HttpClient client = HttpClientUtil.createClient(params);
    LBHttpSolrServer lbServer = new LBHttpSolrServer(client);
    server = new CloudSolrServer(zkHost, lbServer);
    server.setDefaultCollection(defaultCollection);

If there is only one Solr instance up then this works great, but in a 1-shard, 1-replica system it opens up too many connections in a waiting state. Am I doing something incorrect? Any help would be highly appreciated.
Re: Is it possible to find a leader from a list of cores in solr via java code
Hi, I found the solution to the above problem. Sharing the code so that it can help people in the future:

    PoolingClientConnectionManager poolingClientConnectionManager = new PoolingClientConnectionManager();
    poolingClientConnectionManager.setMaxTotal(2);
    poolingClientConnectionManager.setDefaultMaxPerRoute(1);
    HttpClient httpClient = new DefaultHttpClient(poolingClientConnectionManager);
    LBHttpSolrServer lbServer = new LBHttpSolrServer(httpClient);
    server = new CloudSolrServer("zkhost", lbServer);
    server.setDefaultCollection(collectionName);

Thanks a lot to everyone in the thread; your suggestions helped a lot.
Querying a specific core in solr cloud
Hi, I have a requirement where I want to query a specific core on a specific Solr instance. I found the following in the Solr wiki: explicitly specify the addresses of the shards you want to query, e.g. http://localhost:7574/solr/collection1/select?shards=localhost:7574/solr/collection1. Now suppose I have a setup of 1 leader (port 8983) and 1 replica (port 7574), and collection1 is present on 8983: the above query still gives me the correct result. It fails only if port 7574 is down. Can anybody please provide some help on this?
Re: Querying a specific core in solr cloud
I am not bothered about the leader; I just want to check whether a particular core is up on a particular Solr instance. My use case is as follows: I have to create a core on one instance and then there is some DB code. If the DB action fails after the core is created, the entire task is repeated. At this point I need to find out whether or not the core was successfully created on that machine.
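One way to check this (a sketch, not from the thread) is to ask that instance's CoreAdmin STATUS action directly instead of querying; the URL and core name are placeholders:

    // Sketch: check whether a named core exists on a specific instance via CoreAdmin STATUS (SolrJ 4.x).
    import org.apache.solr.client.solrj.impl.HttpSolrServer;
    import org.apache.solr.client.solrj.request.CoreAdminRequest;
    import org.apache.solr.client.solrj.response.CoreAdminResponse;
    import org.apache.solr.common.util.NamedList;

    public class CoreExists {
        public static void main(String[] args) throws Exception {
            HttpSolrServer admin = new HttpSolrServer("http://192.168.0.2:8983/solr"); // the instance to check
            CoreAdminResponse status = CoreAdminRequest.getStatus("x", admin);
            // The status entry for a missing core carries no details such as instanceDir.
            NamedList<Object> coreInfo = status.getCoreStatus("x");
            boolean exists = coreInfo != null && coreInfo.get("instanceDir") != null;
            System.out.println("core x present on this instance: " + exists);
        }
    }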
Re: Querying a specific core in solr cloud
Hi Erick, thanks for the reply. But does &distrib=false work for replicas as well? As I mentioned earlier, I have a setup of 1 leader and 1 replica. If a core is up on either of the instances, querying either instance gives me results even with &distrib=false.
Re: Querying a specific core in solr cloud
Hi, I have also noticed that once the core is up on both machines, &distrib=false works well. Could this be a bug, in that &distrib=false does not behave as expected when the core is down on one instance?
Re: Querying a specific core in solr cloud
Hi Erick, first of all sorry for the late reply. The scenario is as follows:
1. Create a Solr setup on two machines, say ip1 and ip2, with one shard and an external ZooKeeper.
2. Now if I create a core x on the machine with ip1 only and run the queries http://ip1:port1/solr/x/select?q=*:*&distrib=false and http://ip2:port2/solr/x/select?q=*:*&distrib=false, I get the same result from both; that is, the docs are visible. However, the core is not actually present on the instance with ip2, so I was expecting that query to fail.
3. Now if I create the core on machine 2 as well and then run those two queries, the second query gives me 0 results until it comes in sync with ip1. This behaviour is as expected.
Please correct me if my expectations are wrong, and thanks for all the help provided until now.
Re: Querying a specific core in solr cloud
Hi Erick, I did check the logs, and the request is going to ip1 if the core is not present on ip2. This should be a bug, right?
Exception on solr Unload
Hi all, I am getting an exception when unloading a core from Solr. This happens only in the case where the core name and the collection name are the same. I get the exception below using SolrJ as well as the Solr admin UI. I have a configuration of 1 leader and 1 replica, with the core name the same as the collection name on both machines. The exception is as follows:

    819443 [http-8192-3] ERROR org.apache.solr.servlet.SolrDispatchFilter – null:org.apache.solr.common.SolrException: Error trying to proxy request for url: http://192.168.0.44:8192/solr/BAN-1272336463/get
        at org.apache.solr.servlet.SolrDispatchFilter.remoteQuery(SolrDispatchFilter.java:497)
        at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:267)
        at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:155)
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
        at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
        at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:175)
        at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128)
        at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
        at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
        at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:263)
        at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:844)
        at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:584)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:447)
        at java.lang.Thread.run(Thread.java:619)
    Caused by: java.io.FileNotFoundException: http://192.168.0.44:8192/solr/BAN-1272336463/get
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
        at sun.net.www.protocol.http.HttpURLConnection$6.run(HttpURLConnection.java:1368)
        at java.security.AccessController.doPrivileged(Native Method)
        at sun.net.www.protocol.http.HttpURLConnection.getChainedException(HttpURLConnection.java:1362)
        at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1016)
        at org.apache.solr.servlet.SolrDispatchFilter.remoteQuery(SolrDispatchFilter.java:484)
        ... 14 more
    Caused by: java.io.FileNotFoundException: http://192.168.0.44:8192/solr/BAN-1272336463/get
        at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1311)
        at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:373)
        at org.apache.solr.servlet.SolrDispatchFilter.remoteQuery(SolrDispatchFilter.java:472)
        ... 14 more

where coreName = collection name = BAN-1272336463. Is anybody else facing a similar issue, and is there any workaround? One important thing to note is that this does not cause any problem with the core unload itself or with querying on the remaining node. Any help would be appreciated. Regards, Vicky
struggling with solr.WordDelimiterFilterFactory
Hi all, I have a question regarding the use of WordDelimiterFilterFactory. The text field type in my schema uses solr.WordDelimiterFilterFactory in its analysis chain. If I make the query q=Content:speedPost, then docs whose Content contains "speed post" are matched, which is as expected, but docs whose Content contains "speedpost" do not match. Can anybody please point out whether I am going wrong somewhere?
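Not from the original post, but to make the behaviour concrete: a minimal Lucene 4.x sketch of a WordDelimiterFilter chain with splitOnCaseChange and catenateWords enabled. The flags are chosen purely for illustration; the poster's actual schema settings are not shown in the message.

    // Sketch: print what a WordDelimiterFilter with SPLIT_ON_CASE_CHANGE + CATENATE_WORDS
    // emits for "speedPost" vs "speedpost" (Lucene 4.x analysis API).
    import java.io.StringReader;
    import org.apache.lucene.analysis.TokenStream;
    import org.apache.lucene.analysis.core.LowerCaseFilter;
    import org.apache.lucene.analysis.core.WhitespaceTokenizer;
    import org.apache.lucene.analysis.miscellaneous.WordDelimiterFilter;
    import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
    import org.apache.lucene.util.Version;

    public class WdfDemo {
        public static void main(String[] args) throws Exception {
            for (String text : new String[] {"speedPost", "speedpost"}) {
                TokenStream ts = new WhitespaceTokenizer(Version.LUCENE_43, new StringReader(text));
                int flags = WordDelimiterFilter.GENERATE_WORD_PARTS
                          | WordDelimiterFilter.SPLIT_ON_CASE_CHANGE
                          | WordDelimiterFilter.CATENATE_WORDS;
                ts = new WordDelimiterFilter(ts, flags, null); // null = no protected words
                ts = new LowerCaseFilter(Version.LUCENE_43, ts);
                CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
                ts.reset();
                System.out.print(text + " ->");
                while (ts.incrementToken()) {
                    System.out.print(" " + term.toString());
                }
                // camelCase input yields the subwords plus a catenated form;
                // all-lowercase input passes through as a single term.
                System.out.println();
                ts.end();
                ts.close();
            }
        }
    }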
Re: struggling with solr.WordDelimiterFilterFactory
Hi Aloke, I am using the same analyzer for indexing as well as querying, so LowerCaseFilterFactory should apply to both, right?
Re: struggling with solr.WordDelimiterFilterFactory
Hi, another example I found: q=Content:wi-fi doesn't match documents containing the word wifi. I think it is not catenating the query keywords correctly.
Re: struggling with solr.WordDelimiterFilterFactory
Hi, I have created a new index, so reindexing shouldn't be the issue. The Analysis page shows the correct result, and according to it a match should be found, but there is no output from the actual query. The output of debugQuery is as follows:

    content:speedPost
    content:speedPost
    MultiPhraseQuery(content:"(speedpost speed) (post speedpost)")
    content:"(speedpost speed) (post speedpost)"

I don't understand the MultiPhraseQuery output; can anyone suggest a good read on it? Erick, I am searching on the correct field name, but still no output. One surprising fact: if the indexed word is speedPost and I query for speedpost I find a match, but the reverse does not work.
Re: struggling with solr.WordDelimiterFilterFactory
Hi Erick, these are the request handlers defined in my solrconfig.xml.
Re: struggling with solr.WordDelimiterFilterFactory
Hi, another observation while testing: docs having the content field values 1. speedPost, 2. sPeedpost, 3. speEdpost, and 4. speedposT all match the query q=content:speedPost. So basically, if there is even one capital letter anywhere in the word it matches the query; however, content:speedpost with all letters lowercase is not matched.
Re: struggling with solr.WordDelimiterFilterFactory
Hi Aloke, I have multiple fields in my schema that are of type text. I tried the same case on all of them, and it does not work for me on any of them. If possible, could you please post your dummy solrconfig.xml and schema.xml? I can swap them in and check.
Re: struggling with solr.WordDelimiterFilterFactory
Hi Aloke, after taking the schema.xml and solrconfig.xml with the changes you mentioned, it worked fine. However, simply making those changes in my own schema.xml does not work, so it seems there is an issue with some configuration in my solrconfig.xml. I will figure that out and post it here. Anyway, thanks a lot to everyone for being patient and resolving my query. Regards, Vicky
Re: struggling with solr.WordDelimiterFilterFactory
Hi all, there were two fixes for the issue I was facing: 1. changing the schema version from 1.1 to 1.5, OR 2. keeping the version at 1.1 and adding autoGeneratePhraseQueries="false" to the field type. However, the issue is not completely resolved yet. On searching for content:speedPost the debugQuery output is:

    cContent:speedpost cContent:speed cContent:post cContent:speedpost

But if I search for content:"speedPost" the debugQuery output is:

    cContent:"(speedpost speed) (post speedpost)"

and this gives incorrect results.
Re: struggling with solr.WordDelimiterFilterFactory
Hi Erick, I was going to come to that. Now, if I have the word speedpost in the index and I don't use catenation at query time, then a query for speedPost won't fetch results. In that case it might make sense to remove the WordDelimiterFilter from the query analyzer entirely and instead search for a few possible combinations to find all matching docs.
Re: struggling with solr.WordDelimiterFilterFactory
Hi Jack, as mentioned earlier, part of the issue was resolved by the two fixes above, and for the query you mentioned I am getting the same result as you. What is still not working is the query q=content:"speedPost", with the text enclosed in double quotes.
Re: struggling with solr.WordDelimiterFilterFactory
Hi Jack, thanks for the explanation.
updateLog in Solr 4.2
If I disable the update log in Solr 4.2 then I get the following exception:

    SEVERE: :java.lang.NullPointerException
        at org.apache.solr.cloud.ShardLeaderElectionContext.runLeaderProcess(ElectionContext.java:190)
        at org.apache.solr.cloud.LeaderElector.runIamLeaderProcess(LeaderElector.java:156)
        at org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:100)
        at org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:266)
        at org.apache.solr.cloud.ZkController.joinElection(ZkController.java:935)
        at org.apache.solr.cloud.ZkController.register(ZkController.java:761)
        at org.apache.solr.cloud.ZkController.register(ZkController.java:727)
        at org.apache.solr.core.CoreContainer.registerInZk(CoreContainer.java:908)
        at org.apache.solr.core.CoreContainer.registerCore(CoreContainer.java:892)
        at org.apache.solr.core.CoreContainer.register(CoreContainer.java:841)
        at org.apache.solr.core.CoreContainer$3.call(CoreContainer.java:638)
        at org.apache.solr.core.CoreContainer$3.call(CoreContainer.java:629)
        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
        at java.util.concurrent.FutureTask.run(FutureTask.java:138)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
        at java.util.concurrent.FutureTask.run(FutureTask.java:138)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:619)
    Apr 12, 2013 6:39:56 PM org.apache.solr.common.SolrException log
    SEVERE: null:org.apache.solr.common.cloud.ZooKeeperException:
        at org.apache.solr.core.CoreContainer.registerInZk(CoreContainer.java:931)
        at org.apache.solr.core.CoreContainer.registerCore(CoreContainer.java:892)
        at org.apache.solr.core.CoreContainer.register(CoreContainer.java:841)
        at org.apache.solr.core.CoreContainer$3.call(CoreContainer.java:638)
        at org.apache.solr.core.CoreContainer$3.call(CoreContainer.java:629)
        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
        at java.util.concurrent.FutureTask.run(FutureTask.java:138)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
        at java.util.concurrent.FutureTask.run(FutureTask.java:138)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:619)
    Caused by: java.lang.NullPointerException
        at org.apache.solr.cloud.ShardLeaderElectionContext.runLeaderProcess(ElectionContext.java:190)
        at org.apache.solr.cloud.LeaderElector.runIamLeaderProcess(LeaderElector.java:156)
        at org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:100)
        at org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:266)
        at org.apache.solr.cloud.ZkController.joinElection(ZkController.java:935)
        at org.apache.solr.cloud.ZkController.register(ZkController.java:761)
        at org.apache.solr.cloud.ZkController.register(ZkController.java:727)
        at org.apache.solr.core.CoreContainer.registerInZk(CoreContainer.java:908)
        ... 12 more

and Solr fails to start. However, if I add the updateLog back to my solrconfig.xml it starts. Is the updateLog parameter mandatory for Solr 4.2?
Re: updateLog in Solr 4.2
If the updateLog tag is mandatory, then why is it given as a parameter in solrconfig.xml? I mean, by default it should always write update logs to my data directory even if I don't use the updateLog parameter in the config file. Also, the same config file works for Solr 4.0 but not for Solr 4.2. I will be logging a bug for this.
is phrase search possible in solr
I want to do a phrase search in Solr without analyzers being applied to it. For example, if I search for "DelhiDareDevil" (i.e. with double quotes) it should search for the exact text and not apply any analyzers or tokenizers on this field. However, if I search for DelhiDareDevil without quotes it should use the tokenizers and analyzers and split it into something like delhi dare devil. My schema definition for this field is below. Any help would be appreciated.
Re: is phrase search possible in solr
Hi, agreed, it is a typo. And yes, I can use the same set of analyzers and tokenizers for querying as for indexing, but that too will not solve my problem.
Re: is phrase search possible in solr
Hi Jack, making a change in the schema, whether the keyword tokenizer or the copyField option you suggested, would require reindexing the entire data set. Is there an option where, if the query is in double quotes, Solr simply ignores all the tokenizers and analyzers?
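Not from the thread itself, but a sketch of how the copyField approach discussed above could be used from the client side once an exact-match field exists; the field names "content" and "content_exact" are hypothetical, and this does require reindexing into the new field:

    // Sketch: route quoted input to a keyword-tokenized copy field, unquoted input
    // to the analyzed field (SolrJ 4.x).
    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.HttpSolrServer;
    import org.apache.solr.client.solrj.util.ClientUtils;

    public class ExactOrAnalyzed {
        static SolrQuery buildQuery(String userInput) {
            boolean quoted = userInput.length() >= 2
                    && userInput.startsWith("\"") && userInput.endsWith("\"");
            if (quoted) {
                // content_exact would be a copyField target using KeywordTokenizer.
                String phrase = userInput.substring(1, userInput.length() - 1);
                return new SolrQuery("content_exact:\"" + phrase + "\"");
            }
            return new SolrQuery("content:" + ClientUtils.escapeQueryChars(userInput));
        }

        public static void main(String[] args) throws Exception {
            HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr/collection1");
            System.out.println(server.query(buildQuery("\"DelhiDareDevil\"")).getResults().getNumFound());
            System.out.println(server.query(buildQuery("DelhiDareDevil")).getResults().getNumFound());
        }
    }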
Re: is phrase search possible in solr
Hi, if I use the ShingleFilter then all types of queries will be impacted. I want queries within double quotes to be exact searches, but for queries without double quotes all the analyzers and tokenizers should be applied. Is there a setting or configuration in schema.xml that can satisfy this requirement?
Maximum number of facet queries in a single query
Hi, is there any upper limit on the number of facet queries I can include in a single query? Also, is there a performance hit if I include too many facet queries in a single request? Any help would be appreciated.
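For illustration (not from the original post), multiple facet queries are simply added one by one to the request; the URL, field, and range values here are hypothetical:

    // Sketch: attaching several facet.query clauses to one request (SolrJ 4.x).
    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.HttpSolrServer;
    import org.apache.solr.client.solrj.response.QueryResponse;

    public class ManyFacetQueries {
        public static void main(String[] args) throws Exception {
            HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr/collection1");

            SolrQuery q = new SolrQuery("*:*");
            q.setFacet(true);
            q.addFacetQuery("price:[0 TO 100]");
            q.addFacetQuery("price:[100 TO 500]");
            q.addFacetQuery("price:[500 TO *]");

            QueryResponse rsp = server.query(q);
            System.out.println(rsp.getFacetQuery()); // map of facet.query string -> count
        }
    }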
commit in solr4 takes a longer time
Hi all, I recently migrated from Solr 3.6 to Solr 4.0. The documents in my core are constantly being updated, so I issue a commit from code after every 10 thousand docs. However, moving from 3.6 to 4.0 I have noticed that for the same core size it takes about twice as long to commit in Solr 4.0 as in Solr 3.6. Is there any workaround by which I can reduce this time? Any help would be highly appreciated.
Re: commit in solr4 takes a longer time
Hi, I am using 1 shard and two replicas, with around 6 lakh (600,000) documents. My solrconfig.xml is as follows: LUCENE_40 2147483647 simple true 500 1000 5 30 true *:*
Re: commit in solr4 takes a longer time
Hi Sandeep, I made the changes you mentioned and tested again with the same set of docs, but unfortunately the commit time increased.
Re: commit in solr4 takes a longer time
Hi Gopal, I added the openSearcher parameter as you mentioned, but on checking the logs I found that openSearcher was still true on commit. Only when I removed the autoSoftCommit parameter did the openSearcher setting take effect and provide faster updates as well. However, I require soft commits in my application. Any suggestions?
Re: commit in solr4 takes a longer time
My solrconfig.xml is as follows: LUCENE_40 2147483647 simple true 500 1000 5 30 false true *:*
Re: commit in solr4 takes a longer time
Hi all, setting the openSearcher flag to false worked and gave me a visible improvement in commit time. One thing to note is that when using the SolrJ client we have to call server.commit(false, false), which I was doing incorrectly and hence was not able to see the improvement earlier. Thanks everyone.
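For reference (a sketch, not from the thread), the SolrJ commit overloads being discussed: the two booleans are waitFlush and waitSearcher, with an optional third for a soft commit. The URL is a placeholder.

    // Sketch: explicit commit variants in SolrJ 4.x.
    import org.apache.solr.client.solrj.impl.HttpSolrServer;

    public class CommitVariants {
        public static void main(String[] args) throws Exception {
            HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr/collection1");

            server.commit();                   // waitFlush=true, waitSearcher=true
            server.commit(false, false);       // don't block on the flush or on the new searcher
            server.commit(false, false, true); // same, but as a soft commit
        }
    }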
Re: commit in solr4 takes a longer time
Hi, after using the following config — 500 1000 5000 false — when a commit operation fires I am getting the following log line:

    INFO: start commit{,optimize=false,openSearcher=false,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false}

Even though openSearcher is false, waitSearcher is true. Can that be set to false too? Will that give a performance improvement, and what is the config for that?
Re: commit in solr4 takes a longer time
Hi, when an auto commit operation fires I am getting the following log line:

    INFO: start commit{,optimize=false,openSearcher=false,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false}

Setting openSearcher to false definitely gave me a lot of performance improvement, but I was wondering whether waitSearcher can also be set to false and whether that would give a performance gain too.
Solr Replication
Hi, I am using a Solr 4 setup. For backup purposes, once a day I start one additional Tomcat server with cores that have empty data folders, and it acts as a slave server. However, it does not replicate data from the master unless there is a commit on the master. Is there a way to pull data from the master core without firing a commit operation on it?
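Not from the original message, but as a sketch of how the replication handler's fetchindex command can be triggered against the slave core from SolrJ (the URL and core name are placeholders, and /replication must be configured on both master and slave):

    // Sketch: ask a slave core's /replication handler to pull the master's index now.
    import org.apache.solr.client.solrj.impl.HttpSolrServer;
    import org.apache.solr.client.solrj.request.QueryRequest;
    import org.apache.solr.common.params.ModifiableSolrParams;

    public class FetchIndex {
        public static void main(String[] args) throws Exception {
            HttpSolrServer slave = new HttpSolrServer("http://backup-host:8080/solr/mycore");

            ModifiableSolrParams params = new ModifiableSolrParams();
            params.set("command", "fetchindex");

            QueryRequest req = new QueryRequest(params);
            req.setPath("/replication");            // route the request to the ReplicationHandler
            System.out.println(req.process(slave)); // the response reports the command status
        }
    }

Note that replication only transfers committed index generations, so without a commit on the master there is nothing new for the slave to fetch; this sketch only controls when the pull happens.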
Re: Solr Replication
Hi, I have a multi-core setup and there is continuous updating going on in each core. Hence I would rather not take a backup, as it would either require downtime or, if there is write activity during the backup, leave the backup corrupted. Can you please suggest whether there is a cleaner way to handle this?