Hi,

as a feasibility study I am trying to run Solr with several thousand cores
in the same shard, so that I have small indexes that can be created and
removed very quickly.
Right now I have a Tomcat instance running with 1,600 cores. Memory and the
open file handle limit have been raised to be sufficient for that scenario.

I am using SolrJ, and I implemented a feeder that uses timer threads to
perform auto commits on each Solr core independently.
Feeding is done to the cores randomly and in parallel. Auto commit is enabled.
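For reference, here is a minimal sketch of the timer-based per-core commit scheduling described above. It only demonstrates the scheduling pattern with the JDK's ScheduledExecutorService; the actual SolrJ commit call (e.g. a request against the per-core URL) is stubbed out as a counter, and the core names are made up:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch: one scheduled task per core issues commits independently of
// the other cores. In the real feeder the commit() body would be a
// SolrJ request against that core's URL instead of a counter.
public class PerCoreCommitter {
    private final ScheduledExecutorService scheduler;
    private final Map<String, Integer> commitCounts = new ConcurrentHashMap<>();

    public PerCoreCommitter(int threads) {
        this.scheduler = Executors.newScheduledThreadPool(threads);
    }

    // Schedule an independent auto-commit timer for a single core.
    public void scheduleCommits(String coreName, long periodMillis) {
        scheduler.scheduleAtFixedRate(
                () -> commit(coreName),
                periodMillis, periodMillis, TimeUnit.MILLISECONDS);
    }

    // Placeholder for the per-core SolrJ commit request.
    private void commit(String coreName) {
        commitCounts.merge(coreName, 1, Integer::sum);
    }

    public int commitsFor(String coreName) {
        return commitCounts.getOrDefault(coreName, 0);
    }

    public void shutdown() throws InterruptedException {
        scheduler.shutdown();
        scheduler.awaitTermination(5, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws Exception {
        PerCoreCommitter committer = new PerCoreCommitter(4);
        for (String core : List.of("core0", "core1", "core2")) {
            committer.scheduleCommits(core, 50); // commit every 50 ms per core
        }
        Thread.sleep(300);
        committer.shutdown();
        System.out.println(committer.commitsFor("core0") > 0);
    }
}
```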

My questions:
Do I need to execute a commit on each core itself, or does a commit to one
dedicated core commit all changes of the whole shard?
Can I keep feeding some cores in parallel while a commit or optimize is being
applied to another core, or does Solr block further indexing requests during
that time?

Because of that many cores, it would be better to mark cores for lazy loading
at creation time. Unfortunately the current implementation of CoreAdminHandler
does not allow setting the 'loadOnStart' parameter of solr.xml. Is there a way
to do this, or do I need to implement my own handler?
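To illustrate what I mean, this is roughly what such an entry would have to look like in solr.xml (the core name and instanceDir are made-up examples, and I am using the attribute name as I understand it from the parameter mentioned above):

```xml
<solr persistent="true">
  <cores adminPath="/admin/cores">
    <!-- A core that should not be loaded at startup, only on first access -->
    <core name="tenant_0042" instanceDir="tenant_0042" loadOnStart="false"/>
  </cores>
</solr>
```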

Does anybody have good or bad experiences with running very many cores?

Thanks and Regards,
Torsten
