The overhead of just opening a core is insignificant relative to actually
using it, so unless you are worried about hitting the max number of open
files limit, it seems unimportant.
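
If you want to put a number on it yourself, here is a rough, untested sketch
along the lines of what Jérôme describes below: it reads heap usage and the
open file descriptor count from the Solr JVM over remote JMX, creates a core
through the CoreAdmin API, and prints the deltas. It assumes Solr was started
with remote JMX enabled on port 18983 and listening on localhost:8983; the
core and instanceDir names are made up, the heap delta is noisy (GC, lazily
created caches), and the FD attribute is only exposed on Unix JVMs.

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class CoreCreationFootprint {
        public static void main(String[] args) throws Exception {
            // Connect to the Solr JVM over remote JMX (assumed JMX port: 18983).
            JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:18983/jmxrmi");
            MBeanServerConnection conn =
                JMXConnectorFactory.connect(url).getMBeanServerConnection();
            MemoryMXBean mem = ManagementFactory.newPlatformMXBeanProxy(
                conn, ManagementFactory.MEMORY_MXBEAN_NAME, MemoryMXBean.class);
            ObjectName os = new ObjectName("java.lang:type=OperatingSystem");

            long heapBefore = mem.getHeapMemoryUsage().getUsed();
            long fdBefore = (Long) conn.getAttribute(os, "OpenFileDescriptorCount");

            // Create a core through the CoreAdmin API (hypothetical names).
            URL create = new URL("http://localhost:8983/solr/admin/cores"
                + "?action=CREATE&name=testcore&instanceDir=testcore");
            HttpURLConnection http = (HttpURLConnection) create.openConnection();
            System.out.println("CREATE returned HTTP " + http.getResponseCode());

            long heapAfter = mem.getHeapMemoryUsage().getUsed();
            long fdAfter = (Long) conn.getAttribute(os, "OpenFileDescriptorCount");

            System.out.println("Heap delta (bytes): " + (heapAfter - heapBefore));
            System.out.println("Open FD delta: " + (fdAfter - fdBefore));
        }
    }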

Otis
Solr & ElasticSearch Support
http://sematext.com/
On Apr 23, 2013 7:46 AM, "Jérôme Étévé" <jerome.et...@gmail.com> wrote:

> Thanks!
>
> Yeah I know about the caching/commit things
>
> My question is more about the impact of the "pure" creation of a Solr core,
> independently of its "usage" memory requirements (like caches and stuff).
>
> From the experiments I did using JMX, it's not measurable, but I might be
> wrong.
>
>
>
> On 23 April 2013 12:25, Guido Medina <guido.med...@temetra.com> wrote:
>
> > I'm not an expert, but to some extent I think it will come down to a few
> > factors:
> >
> >  * How much data is being cached per core.
> >  * If memory is an issue and you still want performance, I/O with little
> >    caching could become the bottleneck (SSDs?)
> >  * Soft commits, which imply opening a new searcher per soft commit (and
> >    per core), will depend on caches.
> >
> > I do believe in the end it will be a direct result of your caching and
> > I/O. If all you care about is performance, caching (memory) could be
> > traded for faster I/O, though soft commits will be sensitive to memory by
> > nature (they depend on caching/memory and low I/O usage).
> >
> > Hope that made sense; I probably crammed too many points of view into a
> > single idea.
> >
> > Guido.
> >
> >
> > On 23/04/13 11:50, Jérôme Étévé wrote:
> >
> >> Hi all,
> >>
> >> We've got quite a lot of (mostly small) Solr cores in our Solr instance.
> >> They all share the same solrconfig.xml and schema.xml (only the data
> >> differs).
> >>
> >> I'm wondering how far I can go in terms of the number of cores. CPU is
> >> not an issue, but memory could be.
> >>
> >> Any idea/guideline about the impact of a new Solr core in a Solr
> >> instance?
> >>
> >> Thanks!
> >>
> >> Jerome.
> >>
> >>
> >
>
>
> --
> Jerome Eteve
> +44(0)7738864546
> http://www.eteve.net/
>
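
P.S. On Guido's soft-commit point above: every commit that opens a new
searcher also re-creates and warms that core's caches, so with many cores the
per-core cache settings in solrconfig.xml tend to matter far more for memory
than the bare cores themselves. A rough, untested sketch of the difference
(the core name testcore is made up; commit, softCommit and openSearcher are
standard update-handler parameters):

    import java.net.HttpURLConnection;
    import java.net.URL;

    public class CommitExample {
        // Fire a commit against one core's update handler (hypothetical core name).
        static int commit(String params) throws Exception {
            URL url = new URL("http://localhost:8983/solr/testcore/update?" + params);
            HttpURLConnection http = (HttpURLConnection) url.openConnection();
            return http.getResponseCode();
        }

        public static void main(String[] args) throws Exception {
            // Soft commit: cheap on disk I/O, but opens a new searcher, so its
            // caches are rebuilt/warmed in heap -- multiplied by the number of cores.
            System.out.println("softCommit -> HTTP " + commit("softCommit=true"));

            // Hard commit: flushes segments to disk (more I/O), here without
            // opening a new searcher at all.
            System.out.println("hard commit -> HTTP "
                + commit("commit=true&openSearcher=false"));
        }
    }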
