Let's back up a bit and ask what your primary goal is. Just indexing a bunch of stuff as fast as possible? By and large, I'd index to a single core with multiple threads rather than the approach you're taking (I'm assuming that there's a MERGEINDEXES somewhere in this process). You should be able to max out your CPUs this way in my experience.
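Roughly what I have in mind is something like the sketch below: a handful of worker threads all feeding the same core through one shared, thread-safe SolrJ client, sending batches rather than single docs and committing once at the end. The core name, thread count, and batch sizes are made up for illustration, and the constructors are the SolrJ 5.x-era ones, so adjust for your own setup:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class SingleCoreIndexer {
  public static void main(String[] args) throws Exception {
    // One thread-safe client pointed at a single core ("master" is a made-up name).
    final HttpSolrClient client =
        new HttpSolrClient("http://localhost:8983/solr/master");
    final int numThreads = 8;         // tune to roughly your CPU count
    final int docsPerThread = 10000;  // example volume only
    final int batchSize = 1000;       // send batches, not one doc per request

    ExecutorService pool = Executors.newFixedThreadPool(numThreads);
    for (int t = 0; t < numThreads; t++) {
      final int threadId = t;
      pool.submit(() -> {
        List<SolrInputDocument> batch = new ArrayList<>(batchSize);
        for (int i = 0; i < docsPerThread; i++) {
          SolrInputDocument doc = new SolrInputDocument();
          doc.addField("id", threadId + "-" + i);
          doc.addField("body_t", "doc " + i + " from thread " + threadId);
          batch.add(doc);
          if (batch.size() >= batchSize) {
            client.add(batch);   // every thread writes to the same core
            batch.clear();
          }
        }
        if (!batch.isEmpty()) {
          client.add(batch);
        }
        return null;             // Callable, so checked exceptions propagate
      });
    }
    pool.shutdown();
    pool.awaitTermination(1, TimeUnit.HOURS);

    client.commit();             // one commit at the end, not per batch
    client.close();
  }
}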
Have you tried just indexing to a single core with multiple indexing clients? And what have your results been? Your issue with 100 cores is probably that you're copying around a bunch of files as you merge, and/or the 100 cores are context switching to no good purpose.

Best,
Erick

On Sun, Oct 25, 2015 at 12:52 PM, Peri Subrahmanya <peri.subrahma...@htcinc.com> wrote:
> Hi,
>
> I wanted to check if the following would work:
>
> 1. Spawn n threads
> 2. Create n cores
> 3. Index n records simultaneously in the n cores
> 4. Merge all the core indexes into a single master core
>
> I have been able to do this successfully for 5 threads (5 cores) with 1,000
> documents each. However, are there any performance parameters that can be
> tweaked to make Solr handle 100 cores with 10K records each? It just seems
> to churn, and I am not sure it will ever finish.
>
> Any ideas on how this might be done differently?
>
> Thanks,
> -Peri
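
For reference, the MERGEINDEXES step in the plan quoted above would look roughly like this through SolrJ's CoreAdmin helper. The core names are made up, and it's worth double-checking the helper's exact signature against the SolrJ version you're on; this is a sketch of the 5.x-era API, not a drop-in implementation:

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.CoreAdminRequest;

public class MergeCores {
  public static void main(String[] args) throws Exception {
    // CoreAdmin calls go against the Solr root URL, not an individual core.
    try (SolrClient admin = new HttpSolrClient("http://localhost:8983/solr")) {
      String targetCore = "master";                    // made-up target core
      String[] srcCores = {"core1", "core2", "core3"}; // made-up source cores
      // Merge by source core name; no raw index directories in this variant.
      CoreAdminRequest.mergeIndexes(targetCore, new String[0], srcCores, admin);
      admin.commit(targetCore);  // make the merged segments searchable
    }
  }
}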