Is there any reason why you have to limit each instance to only 1M documents? If you could put more documents in the same core I think it would dramatically improve your response times.
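For example, the CoreAdmin MERGEINDEXES action can fold several of your existing 1M-document indexes into a single core. A rough sketch (the host, port, core name, and index paths below are placeholders for your own setup):

    http://localhost:8983/solr/admin/cores?action=MERGEINDEXES&core=bigcore&indexDir=/data/solr/core01/data/index&indexDir=/data/solr/core02/data/index

The target core ("bigcore" here) must already exist, the cores involved should not be taking updates while the merge runs, and a commit on the target core afterwards makes the merged documents searchable. Check the CoreAdmin documentation for your Solr version before relying on this.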
-----Original Message-----
From: marship [mailto:mars...@126.com]
Sent: Thursday, 15 July 2010 6:23
To: solr-user
Subject: How to speed up solr search speed

Hi all,

I have a problem with distributed Solr search. I have 76M documents spread over 76 Solr instances, each handling 1M documents. Previously I ran all 76 instances on a single server, and each search took quite a while, mostly 10-20s, to finish. I have now split the instances across 2 servers, 38 on each, and a search takes about 5-10s. 10s is still a bit unacceptable for me.

Based on my observations, the slowness is caused by disk I/O, since all of these instances sit on the same server. When I test each single instance on its own it is very fast, always ~400ms, but in a distributed search some instances report needing 7000+ms. Our servers have plenty of free memory. Is there a way to make Solr use more memory instead of reading the index from the hard disk, e.g. load all the indexes into memory to speed things up?

Any help is welcome. Thanks.

Regards,
Scott
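Regarding loading the indexes into memory: as a rough sketch (assuming a Solr version that ships these directory factories; check what your release supports), the directoryFactory setting in solrconfig.xml controls how index files are read and can be switched from the default to a memory-mapped or RAM-resident implementation:

    <!-- solrconfig.xml: memory-map index files so frequently read portions
         are served from the OS page cache rather than individual disk reads -->
    <directoryFactory name="DirectoryFactory" class="solr.MMapDirectoryFactory"/>

    <!-- or, if a whole 1M-document index comfortably fits in the JVM heap
         (not persistent: the index must be rebuilt after a restart) -->
    <directoryFactory name="DirectoryFactory" class="solr.RAMDirectoryFactory"/>

Often the simplest approach is just to leave enough free RAM for the operating system's file cache, which keeps the hot index files in memory without any configuration change.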