Hi Rahul,
It depends. You might have warm-up queries that populate caches when the core is 
loaded. For each loaded core Solr exposes JMX stats, so you can read just those 
without "touching" the core. You could also try some of the existing tools for 
monitoring Solr, but I don't think any of them provides info about cores that are 
not loaded; you would only see them as occupied disk space.
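As a sketch of reading those stats over HTTP instead of JMX: the CoreAdmin STATUS call (`/solr/admin/cores?action=STATUS`) reports numDocs and index size per core without issuing queries. The response below is hypothetical (the field names follow the real STATUS output, the core name `mycore` and the values are made up), and note the `index` section is only present for cores that are actually loaded:

```python
import json

# Hypothetical (trimmed) JSON response from:
#   /solr/admin/cores?action=STATUS&core=mycore&wt=json
# Field names match the CoreAdmin STATUS output; values are invented.
sample = json.loads("""
{
  "status": {
    "mycore": {
      "name": "mycore",
      "index": {"numDocs": 1500000, "maxDoc": 1600000, "sizeInBytes": 10737418240}
    }
  }
}
""")

for name, core in sample["status"].items():
    idx = core.get("index", {})   # absent when the core is not loaded
    print(name, idx.get("numDocs"), idx.get("sizeInBytes"))
```

For an unloaded transient core you would get the entry without the `index` block, which is exactly why monitoring tools cannot report document counts for it.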

HTH,
Emir
--
Monitoring - Log Management - Alerting - Anomaly Detection
Solr & Elasticsearch Consulting Support Training - http://sematext.com/



> On 30 Jan 2020, at 01:01, Rahul Goswami <rahul196...@gmail.com> wrote:
> 
> Hi Shawn,
> Thanks for the inputs. I realize I could have been clearer. By "expensive",
> I mean expensive in terms of memory utilization. E.g., let's say I have a
> core with an index size of 10 GB that is not loaded on startup as per
> configuration. If I load it in order to know the total documents and the
> index size (to gather stats about the Solr server), is the amount of memory
> consumed proportional to the index size in some way?
> 
> Thanks,
> Rahul
> 
> On Wed, Jan 29, 2020 at 6:43 PM Shawn Heisey <apa...@elyograg.org> wrote:
> 
>> On 1/29/2020 3:01 PM, Rahul Goswami wrote:
>>> 1) How expensive is core loading if I am only getting stats like the
>> total
>>> docs and size of the index (no expensive queries)?
>>> 2) Does the memory consumption on core loading depend on the index size ?
>>> 3) What is a reasonable value for transient cache size in a production
>>> setup with above configuration?
>> 
>> What I would do is issue a RELOAD command.  For non-cloud deployments,
>> I'd use the CoreAdmin API.  For cloud deployments, I'd use the
>> Collections API.  To discover the answer, see how long it takes for the
>> response to come back.
>> 
>> The time taken by a RELOAD is likely different from the core-load time at
>> Solr startup ... but it sounds like you're more interested in the numbers
>> for core loading after Solr starts than the ones during startup.
>> 
>> Thanks,
>> Shawn
>> 
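Shawn's RELOAD timing suggestion above can be sketched as follows. This is a minimal example, assuming a local Solr on port 8983 and hypothetical names `mycore` / `mycoll`; it builds the RELOAD URL (CoreAdmin API for standalone, Collections API for SolrCloud) and shows the timing pattern, leaving the actual HTTP call to you:

```python
import time
from urllib.parse import urlencode

SOLR = "http://localhost:8983/solr"  # assumed local Solr; adjust as needed

def reload_url(name, cloud=False):
    """Build the RELOAD URL: CoreAdmin for standalone, Collections API for cloud."""
    if cloud:
        return f"{SOLR}/admin/collections?" + urlencode({"action": "RELOAD", "name": name})
    return f"{SOLR}/admin/cores?" + urlencode({"action": "RELOAD", "core": name})

def timed(fn, *args):
    """Wall-clock a call; with an HTTP GET of the RELOAD URL as fn, the
    elapsed time is a rough proxy for the cost of loading the core."""
    start = time.monotonic()
    result = fn(*args)
    return result, time.monotonic() - start

print(reload_url("mycore"))            # standalone: CoreAdmin API
print(reload_url("mycoll", cloud=True))  # SolrCloud: Collections API
```

Wrapping the real request (e.g. `urllib.request.urlopen`) in `timed` gives the response-time number Shawn describes.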
