Hello again,

The current situation: after setting the two options so that cores are not loaded on startup, and setting ramBufferSizeMB=32, Tomcat is stable and responsive, and the thread count peaks at about 60. Browsing and storing are fast. I should note that I have many cores, each with a small number of documents. Unfortunately, the problem of a new core taking 20 minutes to create still exists. The next step will be downgrading to Java 7u25. Any other suggestions will be highly appreciated. Thanks in advance.
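For reference, here is a sketch of what those settings might look like. I am assuming the two core-loading options are the loadOnStartup/transient flags on each core in a legacy-style solr.xml (plus transientCacheSize on <cores>), and that the buffer is set via ramBufferSizeMB in each core's solrconfig.xml; the core names and cache size below are only illustrative:

  <!-- solr.xml (legacy format): keep only a bounded number of cores loaded at once -->
  <solr persistent="true">
    <cores adminPath="/admin/cores" transientCacheSize="64">
      <!-- each core is lazy-loaded on first use and eligible for unloading -->
      <core name="core0001" instanceDir="core0001"
            loadOnStartup="false" transient="true"/>
      <!-- ... one <core> entry per core ... -->
    </cores>
  </solr>

  <!-- solrconfig.xml (per core): shrink the indexing buffer back to the old 32MB default -->
  <indexConfig>
    <ramBufferSizeMB>32</ramBufferSizeMB>
  </indexConfig>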
P.S. The previous Solr version from which I updated was 3.6.

Regards,
Atanas Atanasov

On Thu, Apr 10, 2014 at 6:06 PM, Shawn Heisey <s...@elyograg.org> wrote:

> On 4/10/2014 12:40 AM, Atanas Atanasov wrote:
> > I need some help. After updating to Solr 4.4, the Tomcat process is
> > consuming about 2 GB of memory, and CPU usage is about 40% for roughly
> > 10 minutes after startup. However, the bigger problem is that I have
> > about 1000 cores, and it seems that a thread is created for each core.
> > The process has more than 1000 threads and everything is extremely slow.
> > Creating or unloading a core, even one without documents, takes about
> > 20 minutes. Searching is more or less good, but storing also takes a
> > long time.
> > Is there some configuration I missed or got wrong? There aren't many
> > calls. I use 64-bit Tomcat 7, Solr 4.4, and the latest 64-bit Java. The
> > machine has 24 GB of RAM and a 16-core CPU, and is running Windows
> > Server 2008 R2. The index is updated every 30 seconds / 10,000 documents.
> > I hadn't checked the number of threads before the update, because I
> > didn't have to; it was working just fine. Any suggestion will be highly
> > appreciated, thank you in advance.
>
> If creating a core takes 20 minutes, that sounds to me like the JVM is
> doing constant full garbage collections to free up enough memory for
> basic system operation. It could also be explained by temporary work
> threads having to wait to execute because the servlet container will not
> allow them to run.
>
> When indexing is happening, each core will set aside some memory for
> buffering index updates. By default, the value of ramBufferSizeMB is
> 100. If all your cores are indexing at once, multiply the indexing
> buffer by 1000, and you'll require 100GB of heap memory. You'll need to
> greatly reduce that buffer size. This buffer was 32MB by default in 4.0
> and earlier. If you are not setting this value, this change sounds like
> it might fully explain what you are seeing.
>
> https://issues.apache.org/jira/browse/SOLR-4074
>
> What version did you upgrade from? Solr 4.x is a very different beast
> than earlier major versions. I believe there may have been some changes
> made to reduce memory usage in versions after 4.4.0.
>
> The Jetty that comes with Solr is configured to allow 10,000 threads.
> Most people don't have that many, even on a temporary basis, but bad
> things happen when the servlet container will not allow Solr to start as
> many as it requires. I believe that the typical default maxThreads
> value you'll find in a servlet container config is 200.
>
> Erick's right about a 6GB heap being very small for what you are trying
> to do. Putting 1000 cores on one machine is something I would never
> try. If it became a requirement I had to deal with, I wouldn't try it
> unless the machine had a lot more CPU cores, hundreds of gigabytes of
> RAM, and a lot of extremely fast disk space.
>
> If this worked before a Solr upgrade, I'm amazed. Congratulations to
> you for fine work!
>
> NB: Oracle Java 7u25 is what you should be using. 7u40 through 7u51
> have known bugs that affect Solr/Lucene. These should be fixed by 7u60.
> A pre-release of that is available now, and it should be generally
> available in May 2014.
>
> Thanks,
> Shawn
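For readers hitting the thread ceiling Shawn mentions: raising maxThreads is done on the Tomcat connector in conf/server.xml. This is only a sketch; the port and the thread count are assumptions for illustration, not values taken from this thread:

  <!-- conf/server.xml: allow Tomcat more request threads than the default 200 -->
  <Connector port="8080" protocol="HTTP/1.1"
             connectionTimeout="20000"
             maxThreads="10000"
             redirectPort="8443"/>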