On Mon, Jan 4, 2010 at 7:25 PM, dipti khullar <dipti.khul...@gmail.com> wrote:
> Thanks Shalin.
>
> Following are the relevant details:
>
> There are 2 search servers in a virtualized VMware environment. Each has 2
> instances of Solr running on separate ports in Tomcat.
> Server 1: hosts 1 master (application 1), 1 slave (application 1)
> Server 2: hosts 1 master (application 2), 1 slave (application 1)
>

Have you tried a non-virtualized environment? Virtual instances are not that
great for high I/O throughput environments.

> Both servers have 4 CPUs and 4 GB RAM.
>
> Master:
> - 4GB RAM
> - 1GB JVM heap memory is allocated to Solr
> Slave1/Slave2:
> - 4GB RAM
> - 2GB JVM heap memory is allocated to Solr
>
> Solr details:
> apache-solr version: 1.3.0
> Lucene: 2.4-dev
>
> - autocommit: 50 docs and 5 minutes
> - optimize runs on the master every 7 minutes
> - using postOptimize, we execute snapshooter on the master
> - snappuller/snapinstaller on the 2 slaves runs every 10 minutes
>

You are committing every 5 minutes and optimizing every 7 minutes. Can you
try committing less often?

> Master and Slave1 (solr1) are on a single box and Slave2 (solr2) is on a
> different box. We use HAProxy to load balance query requests between the
> 2 slaves. The master is only used for indexing.
>
> The SolrJ client which is used to query the slave Solr gets timed out and
> there is high CPU usage/load avg. The problem is reported on the slaves for
> application 1. The SolrJ client which queries Solr over HTTP times out
> (10 sec is the timeout value) though in the Solr Tomcat access log we find
> all requests have 200 responses.
> During the time the requests time out, the load avg. of the server goes
> extremely high (10-20).
> The issue gets resolved as soon as we optimize the slave index. In the Solr
> admin, it shows only 4 requests/sec are handled, with a 400 ms response time.
>
> I am attaching solrconfig.xml for both the master and the slaves.
>

There is no autowarming on the slaves, which is probably OK if you are
committing so often. But do you really need to index new documents so often?

--
Regards,
Shalin Shekhar Mangar.
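
As a rough illustration of the "commit less often" suggestion, the autoCommit
block in the master's solrconfig.xml could be relaxed along these lines; the
thresholds shown are assumptions for illustration, not values taken from the
attached configs:

    <!-- master solrconfig.xml: less aggressive autoCommit (illustrative values,
         not the poster's actual settings) -->
    <updateHandler class="solr.DirectUpdateHandler2">
      <autoCommit>
        <maxDocs>10000</maxDocs>    <!-- instead of committing every 50 docs -->
        <maxTime>1800000</maxTime>  <!-- 30 minutes, in milliseconds, instead of 5 minutes -->
      </autoCommit>
    </updateHandler>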
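
The 10-second client timeout described in the thread is usually configured on
the SolrJ side; a minimal sketch against the SolrJ 1.3 HTTP client, where the
slave URL and the timeout values are placeholders:

    import java.net.MalformedURLException;
    import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;

    public class SlaveClient {
        // Hypothetical SolrJ 1.3 setup; slave URL and timeouts are placeholders.
        public static CommonsHttpSolrServer create() throws MalformedURLException {
            CommonsHttpSolrServer slave =
                new CommonsHttpSolrServer("http://slave-host:8080/solr");
            slave.setConnectionTimeout(5000); // ms to open the HTTP connection
            slave.setSoTimeout(10000);        // ms socket read timeout (the 10 sec mentioned above)
            return slave;
        }
    }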
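
If commits do become less frequent, the slaves could afford some autowarming;
a minimal sketch for a slave solrconfig.xml, where the cache sizes and the
warming query are assumptions rather than recommendations from the thread:

    <!-- slave solrconfig.xml: illustrative autowarming; sizes and queries are assumptions -->
    <query>
      <filterCache class="solr.LRUCache" size="512" initialSize="512" autowarmCount="128"/>
      <queryResultCache class="solr.LRUCache" size="512" initialSize="512" autowarmCount="64"/>

      <listener event="newSearcher" class="solr.QuerySenderListener">
        <arr name="queries">
          <lst><str name="q">*:*</str><str name="rows">10</str></lst>
        </arr>
      </listener>
    </query>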