Hi Torsten,

On Tue, Jul 12, 2011 at 2:45 PM, Torsten Kunze <torsten.ku...@b-s-s.de> wrote:

> Hi,
>
> as a feasibility study, I am trying to run Solr with many thousands of
> cores in the same shard, to have small indexes that can be created and
> removed very quickly.
> Now I have a Tomcat running with 1,600 cores. Memory and open file handles
> have been adjusted to be sufficient for that scenario.
>
> I am using SolrJ, and I implemented a feeder that uses timer threads to
> perform auto commits on each Solr core independently.
> Feeding is done randomly across the cores in parallel. Auto commit is
> enabled.
>
> My questions:
> Do I need to execute a commit against each core individually, or does a
> commit to one dedicated core commit all changes across the whole shard?
>

You need to execute a commit to each core to commit the updates done on it.
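
For example, with SolrJ each core is addressed through its own URL, and a
commit affects only that core. A minimal sketch (host, port, and core names
are placeholders):

import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;

public class PerCoreCommit {

    public static void main(String[] args) throws Exception {
        // Each core is reached via its own URL; a commit sent to that URL
        // flushes pending updates on that core only.
        SolrServer core1 = new CommonsHttpSolrServer("http://localhost:8080/solr/core1");
        SolrServer core2 = new CommonsHttpSolrServer("http://localhost:8080/solr/core2");

        core1.commit(); // commits core1's pending updates
        core2.commit(); // core2 keeps its own uncommitted updates until this call
    }
}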


> Can I feed some cores in parallel while a commit or optimize is being
> applied to another core, or does Solr block further indexing requests
> during that time?
>

You can feed in parallel; Solr won't block requests to a different core while
one core is committing or optimizing. That said, with this many cores being
fed at once, disk I/O will likely become the bottleneck.
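
Something along these lines should work (a rough sketch assuming cores named
core0 through core9 exist; error handling kept minimal):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class ParallelFeed {

    public static void main(String[] args) throws Exception {
        // One SolrServer per core, created once and shared across threads.
        final SolrServer[] cores = new SolrServer[10];
        for (int i = 0; i < cores.length; i++) {
            cores[i] = new CommonsHttpSolrServer("http://localhost:8080/solr/core" + i);
        }

        ExecutorService pool = Executors.newFixedThreadPool(8);
        for (int i = 0; i < 1000; i++) {
            final int docId = i;
            pool.submit(new Runnable() {
                public void run() {
                    try {
                        SolrInputDocument doc = new SolrInputDocument();
                        doc.addField("id", "doc-" + docId);
                        // An add to one core proceeds even while another
                        // core is busy committing or optimizing.
                        cores[docId % cores.length].add(doc);
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
            });
        }
        pool.shutdown();
    }
}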


>
> Because of the sheer number of cores, it would be better to mark cores for
> lazy loading at creation time. Unfortunately, the current implementation of
> CoreAdminHandler does not allow setting the 'loadOnStart' parameter of
> solr.xml. Is there a way to do this, or do I need to implement my own
> handler?
>
> Does anybody have good or bad experiences with using many, many cores?
>

I've done a fair amount of work with many cores. There are some (very old)
patches as well, but they are probably not useful by themselves. See
http://wiki.apache.org/solr/LotsOfCores
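
On the lazy-loading question: that flag would live on the <core> element in
solr.xml, roughly as below. This is illustrative only -- 'loadOnStart' is the
parameter you mention, and whether it is honored depends on which of those
patches you apply:

<solr persistent="true">
  <cores adminPath="/admin/cores">
    <!-- loadOnStart="false": do not open this core until first use -->
    <core name="core0001" instanceDir="cores/core0001" loadOnStart="false"/>
  </cores>
</solr>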

-- 
Regards,
Shalin Shekhar Mangar.
