Hi Hien;

Actually, a high index rate is a relative concept. I could index that kind
of data within a few hours, and I aim to index much more data in the same
time soon. I can share my test results when I do.
Thanks;
Furkan KAMACI

On Friday, December 6, 2013, Hien Luu <h...@yahoo.com> wrote:

> Hi Furkan,
>
> Just curious what was the index rate that you were able to achieve?
>
> Regards,
>
> Hien
>
>
> On Thursday, December 5, 2013 3:06 PM, Furkan KAMACI
> <furkankam...@gmail.com> wrote:
>
> Hi;
>
> Erick and Shawn have explained that we need more information about your
> infrastructure. I should add that I had nearly as much test data in my
> SolrCloud as you have, and I did not have any problems except when
> indexing at a very high rate, which was solved with tuning. You should
> optimize your parameters according to your system, so please give us
> more information about it.
>
> Thanks;
> Furkan KAMACI
>
> On Wednesday, December 4, 2013, Shawn Heisey <s...@elyograg.org> wrote:
>
>> On 12/4/2013 6:31 AM, kumar wrote:
>>> I am having almost 5 to 6 crores of indexed documents in solr. And
>>> when i am going to change anything in the configuration file solr
>>> server is going down.
>>
>> If you mean crore and not core, then you are talking about 50 to 60
>> million documents. That's a lot. Solr is perfectly capable of handling
>> that many documents, but you do need to have very good hardware.
>>
>> Even if they are small, your index is likely to be many gigabytes in
>> size. If the documents are large, that might be measured in terabytes.
>> Large indexes require a lot of memory for good performance. This will
>> be discussed in more detail below.
>>
>>> As a new user to solr i can't able to find the exact reason for going
>>> server down.
>>>
>>> I am using cache's in the following way :
>>>
>>> <filterCache class="solr.FastLRUCache"
>>>              size="16384"
>>>              initialSize="4096"
>>>              autowarmCount="4096"/>
>>> <queryResultCache class="solr.FastLRUCache"
>>>                   size="16384"
>>>                   initialSize="4096"
>>>                   autowarmCount="1024"/>
>>>
>>> and i am not using any documentCache, fieldValueCahe's
>>
>> As Erick said, these cache sizes are HUGE. In particular, your
>> autowarmCount values are extremely high.
>>
>>> Whether this can lead any performance issue means going server down.
>>
>> Another thing that Erick pointed out is that you haven't really told us
>> what's happening. When you say that the server goes down, what EXACTLY
>> do you mean?
>>
>>> And i am seeing logging in the server it is showing exception in the
>>> following way
>>>
>>>
>>> Servlet.service() for servlet [default] in context with path [/solr]
>>> threw exception [java.lang.IllegalStateException: Cannot call
>>> sendError() after the response has been committed] with root cause
>>
>> This message comes from your servlet container, not Solr. You're
>> probably using Tomcat, not the included Jetty. There is some indirect
>> evidence that this can be fixed by increasing the servlet container's
>> setting for the maximum number of request parameters.
>>
>> http://forums.adobe.com/message/4590864
>>
>> Here's what I can say without further information:
>>
>> You're likely having performance issues. One potential problem is your
>> insanely high autowarmCount values. Your cache configuration tells Solr
>> that every time you have a soft commit or a hard commit with
>> openSearcher=true, you're going to execute up to 1024 queries and up to
>> 4096 filters from the old caches, in order to warm the new caches. Even
>> if you have an optimal setup, this takes a lot of time. I suspect that
>> you don't have an optimal setup.
>>
>> Another potential problem is that you don't have enough memory for the
>> size of your index.
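[Editor's note: to make the autowarmCount advice above concrete, a far more conservative cache configuration in solrconfig.xml might look like the sketch below. The sizes and autowarm counts are only illustrative starting points, not tuned values for any particular system.]

```xml
<!-- Small caches with low autowarmCount keep post-commit warm-up cheap.
     Grow a cache only if its hit ratio stays low under real traffic. -->
<filterCache class="solr.FastLRUCache"
             size="512"
             initialSize="512"
             autowarmCount="32"/>
<queryResultCache class="solr.FastLRUCache"
                  size="512"
                  initialSize="512"
                  autowarmCount="16"/>
<!-- documentCache entries are tied to internal Lucene doc IDs,
     so this cache is not autowarmed between searchers. -->
<documentCache class="solr.LRUCache"
               size="512"
               initialSize="512"/>
```

With values like these, each new searcher replays at most a few dozen queries and filters instead of thousands, so commits open new searchers quickly.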
>> A number of potential performance problems
>> are discussed on this wiki page:
>>
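[Editor's note: if Tomcat's limit on request parameters is indeed the cause of the sendError() exception mentioned above, it can be raised with the `maxParameterCount` attribute on the HTTP Connector in Tomcat's conf/server.xml. The connector attributes and the value below are illustrative, not a recommendation:]

```xml
<!-- maxParameterCount caps how many request parameters (GET plus POST)
     Tomcat will parse per request; raise it if large Solr requests are
     being rejected. -->
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           maxParameterCount="100000"
           redirectPort="8443"/>
```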