Re: distributed search is significantly slower than direct search
Hi,

I isolated the case.

Installed on a new machine (2 x Xeon E5410 2.33GHz). I have an environment with 12GB of memory. I assigned 6GB of memory to Solr and I'm not running any other memory-consuming process, so no memory issues should arise.

Removed all indexes apart from two:

emptyCore – empty – used for routing
core1 – holds the stored data – has ~750,000 docs and a size of 400MB

Again, this is a single machine that holds both indexes.

The distributed query

http://localhost:8210/solr/emptyCore/select?rows=5000&q=*:*&shards=127.0.0.1:8210/solr/core1&wt=json

has a QTime of ~3 seconds, and the direct query

http://localhost:8210/solr/core1/select?rows=5000&q=*:*&wt=json

has a QTime of ~15 ms – an order-of-magnitude difference.

I ran the long query several times and got an improvement of about a second (33%), but that's it. I need to better understand why this is happening. I tried looking at the Solr code and debugging the issue, but with no success. The one thing I did notice is that the getFirstMatch method – which receives the doc id, searches the term dictionary and returns the internal id – takes most of the time for some reason.

I am pretty stuck and would appreciate any ideas. My only solution for the moment is to bypass the distributed query and implement code in my own app that directly queries the relevant cores and handles the sorting etc.

Thanks

On Sat, Nov 16, 2013 at 2:39 PM, Michael Sokolov <msoko...@safaribooksonline.com> wrote:

> Did you say what the memory profile of your machine is? How much memory,
> and how large are the shards? This is just a random guess, but it might be
> that if you are memory-constrained, there is a lot of thrashing caused by
> paging (swapping?) in and out the sharded indexes, while a single index can
> be scanned linearly, even if it does need to be paged in.
>
> -Mike
>
> On 11/14/2013 8:10 AM, Elran Dvir wrote:
>
>> Hi,
>>
>> We tried returning just the id field and got exactly the same performance.
>> Our system is distributed, but all shards are on a single machine, so
>> network issues are not a factor.
>> The code where we found Solr spending its time is on the shard and not
>> on the routing core; again, all shards are local.
>> We investigated the getFirstMatch() method and noticed that
>> MultiTermEnum.reset (inside MultiTerm.iterator) and MultiTerm.seekExact
>> take 99% of the time.
>> Inside these methods, the call to
>> BlockTreeTermsReader$FieldReader$SegmentTermsEnum$Frame.loadBlock
>> takes most of the time.
>> Out of the 7-second run these methods take ~5 seconds, and
>> BinaryResponseWriter.write takes the rest (~2 seconds).
>>
>> We tried increasing cache sizes and got hits, but it only improved the
>> query time by a second (~6 seconds), so no major effect.
>> We are not indexing during our tests; the performance is similar.
>> (How do we measure doc size? Is it important, given that the
>> performance is the same when returning only the id field?)
>>
>> We still don't completely understand why the query takes this much longer
>> although the cores are on the same machine.
>>
>> Is there a way to improve the performance (code, configuration, query)?
>>
>> -----Original Message-----
>> From: idokis...@gmail.com [mailto:idokis...@gmail.com] On Behalf Of
>> Manuel Le Normand
>> Sent: Thursday, November 14, 2013 1:30 AM
>> To: solr-user@lucene.apache.org
>> Subject: Re: distributed search is significantly slower than direct search
>>
>> It's surprising that such a query takes this long. I would assume that after
>> consistently trying q=*:* you should be getting cache hits and times should
>> be faster. Check in the admin UI how your query/doc caches perform.
>> Moreover, the query in itself is just asking for the first 5000 docs that
>> were indexed (returning the first [docid]), so it seems all this time is
>> wasted on transfer. Out of these 7 secs, how much is spent in the above
>> method? What do you return by default? How big is every doc you display
>> in your results?
>> It might be that both collections are working on the same resources.
>> Try elaborating your use-case.
>>
>> Anyway, it seems like you just made a test to see what the performance
>> hit in a distributed environment would be, so I'll try to explain some
>> things we encountered in our benchmarks, with a case that at least has
>> a similar number of docs fetched.
>>
>> We reclaim 2000 docs every query, running over 40 shards. This means
>> every shard is actually transferring 2000 docs to our frontend on every
>> document-match request (the first phase you were referring to). Even if
>> lazily loaded, reading 2000 ids (on 40 servers) and lazy-loading the
>> fields is a tough job. Waiting for the slowest shard to respond, then
>> sorting the docs and reloading (lazy or not) the top 2000 docs might
>> take a long time.
>>
>> Our times are 4-8 secs, but it's not possible to compare the cases.
>> We've done a few steps along the way that improved it, steps that led
>> to others. These were our starters:
>>
>> 1. Profile these queries
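The two-phase flow Manuel describes can be pictured with a toy simulation (plain Java with only the standard library; no Solr involved, and the shard layout, ids and field values are invented for illustration). Phase one collects sortable candidates from every shard and merges them into a global top-N; phase two then fetches the stored fields for each merged id, one lookup per id. That second round trip, plus the per-id lookups, is exactly the work a direct single-core query never performs:

```java
import java.util.*;
import java.util.stream.*;

public class TwoPhaseSketch {
    // Phase 1: gather each shard's candidate ids, merge-sort them,
    // and keep only the global top `rows` (here the id doubles as sort key).
    static List<String> phase1TopIds(Map<String, List<String>> shards, int rows) {
        return shards.values().stream()
                .flatMap(List::stream)   // candidates from every shard
                .sorted()                // merge by sort key
                .limit(rows)             // global top N
                .collect(Collectors.toList());
    }

    // Phase 2: fetch stored fields for each merged id -- one lookup per id,
    // which is where the getFirstMatch()/seekExact time goes on a real shard.
    static List<String> phase2Fetch(Map<String, String> storedFields, List<String> ids) {
        List<String> docs = new ArrayList<>();
        for (String id : ids) {
            docs.add(storedFields.get(id)); // repeated `rows` times
        }
        return docs;
    }

    public static void main(String[] args) {
        Map<String, List<String>> shards = Map.of(
                "shard1", List.of("id-001", "id-003"),
                "shard2", List.of("id-002", "id-004"));
        Map<String, String> stored = Map.of(
                "id-001", "doc one", "id-002", "doc two",
                "id-003", "doc three", "id-004", "doc four");

        List<String> top = phase1TopIds(shards, 3);
        System.out.println(top);                      // [id-001, id-002, id-003]
        System.out.println(phase2Fetch(stored, top)); // [doc one, doc two, doc three]
    }
}
```

The simulation is only meant to show where the extra latency structurally comes from: even with all shards on one machine, the coordinator must wait for the slowest shard, merge, and then do a second id-by-id retrieval pass.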
Re: distributed search is significantly slower than direct search
Hi Yuval, quick question: you say that your core has 750k docs and around 400MB? Is this some kind of test dataset, and do you expect it to grow significantly? For an index of this size, I wouldn't use distributed search; a single shard should be fine.

Tomás

On Sun, Nov 17, 2013 at 6:50 AM, Yuval Dotan wrote:

> Hi,
>
> I isolated the case [...]
Re: distributed search is significantly slower than direct search
Hi Tomás,

This is just a test environment, meant only to reproduce the issue I am currently investigating. The number of documents should grow substantially (billions of docs).

On Sun, Nov 17, 2013 at 7:12 PM, Tomás Fernández Löbbe <tomasflo...@gmail.com> wrote:

> Hi Yuval, quick question: you say that your core has 750k docs and
> around 400MB? [...]
Re: distributed search is significantly slower than direct search
You are asking for 5000 docs, right? And that's forcing us to look up 5000 external-to-internal ids. I think this always had a cost, but it's obviously worse if you ask for a ton of results. I don't think a single node has to do this. And if we had something like searcher leases (we will eventually), I think we could avoid it and just use internal ids.

- Mark

On Nov 17, 2013, at 12:44 PM, Yuval Dotan wrote:

> Hi Tomás
> This is just a test environment meant only to reproduce the issue I am
> currently investigating.
> The number of documents should grow substantially (billions of docs). [...]
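Mark's point about the 5000 external-to-internal lookups can be pictured with a toy model (plain Java, standard library only — not the actual Lucene API; the class name and ids are invented for illustration). Each requested external id costs a search of the term dictionary — here a binary search stands in for TermsEnum.seekExact — while a direct query already holds internal docids and skips the loop entirely:

```java
import java.util.*;

public class IdLookupSketch {
    // Toy term dictionary: sorted unique ids; the position plays the role
    // of the internal docid.
    static int externalToInternal(String[] sortedIds, String externalId) {
        // Stand-in for TermsEnum.seekExact: O(log n) per lookup, and in a
        // real index each terms-cache miss means a loadBlock() disk read.
        return Arrays.binarySearch(sortedIds, externalId);
    }

    public static void main(String[] args) {
        String[] termDict = {"id-0001", "id-0002", "id-0003", "id-0004", "id-0005"};
        String[] requested = {"id-0005", "id-0002", "id-0004"};

        int[] internal = new int[requested.length];
        for (int i = 0; i < requested.length; i++) {
            internal[i] = externalToInternal(termDict, requested[i]); // one seek per id
        }
        System.out.println(Arrays.toString(internal)); // [4, 1, 3]

        // A direct (non-distributed) query never runs this loop: it already
        // works with internal docids end to end.
    }
}
```

With rows=5000 this loop runs 5000 times per shard, which matches the profile in the thread: the time lands in seekExact/loadBlock rather than in scoring.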
Install Solr Cloud
New to SolrCloud and looking for some guidance on how to install and configure it. I was also looking at the Bitnami Solr Cloud AMI. Any article or link where there is some help on installation?

My idea is to have 3 VMs on AWS. I am not sure, after building the VMs, how I make a cluster from these 3 machines.

Dinar
Re: Install Solr Cloud
You should start reading here: http://wiki.apache.org/solr/SolrCloud

2013/11/17 dinar dalvi

> New to SOLR cloud and am looking for some guidance on install and
> configure. [...]
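As a starting point beyond the wiki link, a minimal sketch of how a 3-node cluster is typically bootstrapped with Solr 4.x and an external ZooKeeper ensemble. All hostnames, ports, paths and the collection/config names below are placeholders, and the exact flags are worth double-checking against the SolrCloud wiki page — this is a config/ops fragment, not something runnable as-is:

```shell
# On each of the three VMs, start Solr pointed at the same ZooKeeper
# ensemble (zk1/zk2/zk3 are placeholder hostnames):
java -DzkHost=zk1:2181,zk2:2181,zk3:2181 -jar start.jar

# From one machine, upload a config set to ZooKeeper once
# (zkcli.sh ships in example/cloud-scripts in Solr 4.x):
cloud-scripts/zkcli.sh -zkhost zk1:2181 -cmd upconfig \
  -confdir ./conf -confname myconf

# Create a collection spread across the three nodes via the Collections API:
curl "http://vm1:8983/solr/admin/collections?action=CREATE&name=mycollection&numShards=3&replicationFactor=1&collection.configName=myconf"
```

Once the nodes share a zkHost and the collection is created, the cluster membership is handled by ZooKeeper; there is no per-node "join" step.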
Editing config files in the Solr Admin UI
Stefan Matheis and I have conspired to make the various Solr config files editable from the admin UI. Here's the wiki writeup:

https://wiki.apache.org/solr/Editing%20configuration%20files%20in%20the%20admin%20UI

It's not quite complete yet; see the sub-issues attached to https://issues.apache.org/jira/browse/SOLR-5446. They involve adding a reload button, being able to add new files, and a bit of cleanup. In the spirit of getting this into people's hands ASAP, though, I thought I'd let people know. Give it a spin and tell us what you think!

Current 4x and trunk (5x) have this if you download and build.
Datadir Could Not Be Set via System Properties?
I have removed the *dataDir* line from my schema.xml, and I do not have a property defined as *dataDir* in my old-style solr.xml. I have these lines of code:

System.setProperty("solr.data.dir", "/home/somewhere/data");
CoreContainer coreContainer = CoreContainer.createAndLoad(solrHomePath, solrConfigFile);
solrServer = new EmbeddedSolrServer(coreContainer, "collection1"); // An embedded server with a RAMDirectoryFactory

However, the data directory is created at the default place rather than the one I set via system properties. Any ideas?
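One thing worth checking (an assumption about the stock 4.x configuration, not something visible in the message): the `solr.data.dir` system property only takes effect if solrconfig.xml actually references it. The default solrconfig.xml carries a line like the following, and if the element was removed entirely, Solr falls back to `./data` under the core's instance directory regardless of the property:

```xml
<!-- solrconfig.xml: the system property is only consulted when it is
     referenced here; with no <dataDir> element at all, Solr uses the
     default ./data under the core's instance directory. -->
<dataDir>${solr.data.dir:}</dataDir>
```

Also note the property must be set before `CoreContainer.createAndLoad` runs, which the snippet above already does.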
Nutch 1.7 solrdedup error
When trying to delete duplicates after a crawl, I get the following: http://pastebin.com/aQbqmPLm

It happens when running this command in a terminal:

$ bin/nutch solrdedup http://localhost:8983/solr/rockies

Here is my setup:
- Nutch 1.7
- Solr 4.5.0
- java version "1.6.0_51"

On Stack Overflow as well: http://stackoverflow.com/questions/20013630/nutch-1-7-solrdedup-error

Thanks,
Mark
Re: distributed search is significantly slower than direct search
In order to accelerate BinaryResponseWriter.write, we extended this writer class to implement the docid-to-id transformation via docValues (in memory), with no need to access the stored fields for reading the id, and no lazy loading of fields, which also has a cost. That should improve the read rate, as docValues are sequential and should avoid disk IO.

This docValues implementation is accessed during both query stages (as mentioned above) if you ask for ids only, or only once, during the distributed search stage, if you intend to ask for stored fields other than the id.

We just started testing it for performance. I would love to hear any opinions or performance tests for this implementation.

Manu
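Manuel's change can be pictured with a toy model (plain Java, standard library only — not the actual Solr/Lucene API; the class, field and id names are invented for illustration). A docValues column is effectively an in-memory array indexed by internal docid, so turning docids back into ids becomes a sequential array read instead of a per-document stored-field fetch that may hit disk and drag other fields along:

```java
import java.util.*;

public class DocValuesSketch {
    // Toy "docValues" column: ids indexed by internal docid, memory-resident.
    static final String[] ID_DOC_VALUES = {"id-a", "id-b", "id-c", "id-d"};

    // Stored-field path (what BinaryResponseWriter.write normally triggers):
    // a per-document fetch that in a real index can mean disk IO and
    // lazy-loading of unrelated fields.
    static String idFromStoredFields(Map<Integer, Map<String, String>> store, int docid) {
        return store.get(docid).get("id");
    }

    // DocValues path: a plain array read per docid, sequential and disk-free.
    static String idFromDocValues(int docid) {
        return ID_DOC_VALUES[docid];
    }

    public static void main(String[] args) {
        Map<Integer, Map<String, String>> store = new HashMap<>();
        for (int d = 0; d < ID_DOC_VALUES.length; d++) {
            store.put(d, Map.of("id", ID_DOC_VALUES[d], "body", "...stored text..."));
        }
        // Both paths agree on the ids; only the cost profile differs.
        for (int d = 0; d < ID_DOC_VALUES.length; d++) {
            System.out.println(idFromDocValues(d) + " == " + idFromStoredFields(store, d));
        }
    }
}
```

The sketch only illustrates the trade Manuel describes: same answers, but the id column is read as one contiguous structure rather than via per-document stored-field access.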
Re: exceeded limit of maxWarmingSearchers ERROR
Hi Erickson,

Thanks for your reply. I am getting the following error with Liferay Tomcat:

2013/11/18 07:29:42 ERROR com.liferay.portal.search.solr.SolrIndexWriterImpl.deleteDocument(SolrIndexWriterImpl.java:90) [] [liferay/search_writer]
org.apache.solr.common.SolrException: Not Found

Not Found

request: http://10.43.4.155:8080/apache-solr-1.4.1/liferay/update?wt=javabin&version=2.2
    at org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:343)
    at org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:183)
    at com.liferay.portal.search.solr.server.BasicAuthSolrServer.request(BasicAuthSolrServer.java:93)
    at org.apache.solr.client.solrj.request.UpdateRequest.process(UpdateRequest.java:217)
    at org.apache.solr.client.solrj.SolrServer.deleteById(SolrServer.java:97)
    at com.liferay.portal.search.solr.SolrIndexWriterImpl.deleteDocument(SolrIndexWriterImpl.java:83)
    at com.liferay.portal.search.solr.SolrIndexWriterImpl.updateDocument(SolrIndexWriterImpl.java:133)
    at com.liferay.portal.kernel.search.messaging.SearchWriterMessageListener.doReceive(SearchWriterMessageListener.java:86)
    at com.liferay.portal.kernel.search.messaging.SearchWriterMessageListener.receive(SearchWriterMessageListener.java:33)
    at com.liferay.portal.kernel.messaging.InvokerMessageListener.receive(InvokerMessageListener.java:63)
    at com.liferay.portal.kernel.messaging.ParallelDestination$1.run(ParallelDestination.java:61)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
    at java.lang.Thread.run(Thread.java:679)

Can you help me understand why I am getting this error? PFA the error log and the solr-spring.xml files.
Regards,
Lokanadham Ganta

----- Original Message -----
From: "Erick Erickson [via Lucene]"
To: "Loka"
Sent: Friday, November 15, 2013 7:14:26 PM
Subject: Re: exceeded limit of maxWarmingSearchers ERROR

That's a fine place to start. This form:

${solr.autoCommit.maxTime:15000}

just allows you to define a sysvar to override the 15-second default, like

java -Dsolr.autoCommit.maxTime=3 -jar start.jar

On Fri, Nov 15, 2013 at 8:11 AM, Loka <[hidden email]> wrote:

> Hi Erickson,
>
> I have also seen the following from Google; can I use the same:
> false
>
> If the above one is correct to add, can I add the below tags also
> along with the above tag:
>
> 3
> 1
>
> so finally, it will look like:
>
> 3
> 1
> false
>
> Is the above one fine?
>
> Regards,
> Lokanadham Ganta
>
> ----- Original Message -----
> From: "Lokanadham Ganta"
> To: "Erick Erickson [via Lucene]"
> Sent: Friday, November 15, 2013 6:33:20 PM
> Subject: Re: exceeded limit of maxWarmingSearchers ERROR
>
> Erickson,
>
> Thanks for your reply. Before your reply, I had googled, found the
> following, and added it under the updateHandler section of the
> solrconfig.xml file:
>
> 3
> 1
>
> Is the above one fine, or should I go strictly with your suggestion,
> i.e. as below:
>
> <autoCommit>
>   <maxTime>${solr.autoCommit.maxTime:15000}</maxTime>
>   <openSearcher>false</openSearcher>
> </autoCommit>
>
> <autoSoftCommit>
>   <maxTime>${solr.autoSoftCommit.maxTime:1}</maxTime>
> </autoSoftCommit>
>
> Please confirm.
>
> But how can I check how much autowarming I am doing? As of now I have
> set maxWarmingSearchers to 2; should I increase the value?
>
> Regards,
> Lokanadham Ganta
>
> ----- Original Message -----
> From: "Erick Erickson [via Lucene]"
> To: "Loka" <[hidden email]>
> Sent: Friday, November 15, 2013 6:07:12 PM
> Subject: Re: exceeded limit of maxWarmingSearchers ERROR
>
> Where did you get that syntax? I've never seen that before.
>
> What you want to configure is the "maxTime" in your
> autocommit and autosoftcommit sections of solrconfig.xml,
> as:
>
> <autoCommit>
>   <maxTime>${solr.autoCommit.maxTime:15000}</maxTime>
>   <openSearcher>false</openSearcher>
> </autoCommit>
>
> <autoSoftCommit>
>   <maxTime>${solr.autoSoftCommit.maxTime:1}</maxTime>
> </autoSoftCommit>
>
> And you do NOT want to commit from your client.
>
> Depending on how long autowarm takes, you may still see this error,
> so check how much autowarming you're doing, i.e. how you've
> configured the caches in solrconfig.xml and what you
> have for newSearcher and firstSearcher.
>
> I'd start with autowarm numbers of, maybe, 16 or so at most.
>
> Best,
> Erick
>
> On Fri, Nov 15, 2013 at 2:46 AM, Loka <[hidden email]> wrote:
Solr cloud view shows core as down after using reload action
Hi,

I have a Solr setup using an external ZooKeeper. Whenever there are any schema changes to be made, I make those changes and upload the new config via cloud-scripts. I then reload the core using the action:

http://localhost:8190/solr/admin/cores?action=RELOAD&core=coreName

Everything works fine after this. The new changes are reflected, and the core is up for reading and writing. However, when I go to the cloud view of Solr, it shows the core name in orange, which means the core is down.

Can anybody help me in resolving the issue?
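One thing worth trying (an assumption based on how SolrCloud tracks replica state, not something confirmed in the message): the core admin RELOAD acts on a single core and, in some SolrCloud versions, can leave the replica's entry in the cluster state stale, which matches the "orange in the cloud view but actually serving" symptom. The Collections API RELOAD reloads every replica of the collection and keeps the cluster state consistent; `collectionName` below is a placeholder:

```shell
# Core admin reload (what the message uses) acts on one core only:
curl "http://localhost:8190/solr/admin/cores?action=RELOAD&core=coreName"

# Collections API reload reloads all replicas of the collection and
# updates the cluster state as it does so:
curl "http://localhost:8190/solr/admin/collections?action=RELOAD&name=collectionName"
```

If the collection-level reload still shows the replica as down, comparing clusterstate.json in ZooKeeper against the actual core status would be the next step.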