>
> Thanks in advance,
> Atita
>
> On Mon, Nov 6, 2017 at 2:19 PM, Daniel Ortega >
> wrote:
>
> > Hi Robert,
> >
> > We use the following stack:
> >
> > - Prometheus to scrape metrics (https://prometheus.io/)
> > - Prometheus node exporter (https://github.com/prometheus/node_exporter)
> > - Prometheus JMX exporter (Cache usage, QPS,
> > Response times...) (https://github.com/prometheus/jmx_exporter)
> > - Grafana to visualize all the data scraped by Prometheus (https://grafana.com/)
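For anyone reproducing this monitoring stack: a minimal Prometheus scrape configuration covering both exporters might look like the sketch below. The job names, host name, and ports are illustrative assumptions, not values from this thread (9100 is the node exporter default; the JMX exporter port is whatever you pass to its javaagent).

```yaml
scrape_configs:
  # Host-level metrics (CPU, memory, disk) from the node exporter
  - job_name: 'node'
    static_configs:
      - targets: ['solr-host:9100']

  # JVM/Solr metrics (cache usage, QPS, response times) via the JMX exporter
  - job_name: 'solr-jmx'
    static_configs:
      - targets: ['solr-host:7070']
```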
> > Best regards,
> > Daniel Ortega
2017-11-06 20:13 GMT+01:00 Petersen, Robert (Contr) <
robert.peters...@ftr.com>:
> PS I knew
I would recommend SolrMeter.
This fork supports SolrCloud:
https://github.com/idealista/solrmeter/blob/master/README.md
Disclaimer: This fork was developed by idealista, the company where I work
On Mon, 4 Sept 2017 at 11:18, Selvam Raman
wrote:
> Hi All,
>
> which is the bes
-and-commit-in-sorlcloud/
>
>
> -Scott
>
> On Thu, Aug 24, 2017 at 10:03 AM, Daniel Ortega <
> danielortegauf...@gmail.com
> > wrote:
>
> > Hi Scott,
> >
> > In our indexing service we are using that client too
> > (org.apache.solr.client.sol
> Can you post your Update Request Processor Chain?
>
>
> -Scott
>
>
> On Wed, Aug 23, 2017 at 4:13 PM, Daniel Ortega <
> danielortegauf...@gmail.com>
> wrote:
>
> > Hi Scott,
> >
> > - *Can you describe the process that queries the DB an
> The client finds out from ZooKeeper which nodes are the shard leaders
> and sends docs directly to them.
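For readers following along: the cloud-aware SolrJ client described above is used roughly as in the sketch below. The ZooKeeper address, collection name, and field names are invented for illustration, and the code needs the solr-solrj dependency plus a running SolrCloud cluster, so treat it as a sketch rather than a runnable sample.

```java
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.common.SolrInputDocument;
import java.util.Collections;
import java.util.Optional;

public class CloudIndexSketch {
    public static void main(String[] args) throws Exception {
        // The client reads cluster state from ZooKeeper, so it can route
        // each document straight to the leader of its shard.
        try (CloudSolrClient client = new CloudSolrClient.Builder(
                Collections.singletonList("zk1:2181"), Optional.empty()).build()) {
            client.setDefaultCollection("mycollection");

            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", "1");
            doc.addField("title_s", "example");

            // commitWithin (in ms) asks Solr to commit inside that window;
            // note the warning elsewhere in this thread that this triggers commits.
            client.add(doc, 10_000);
        }
    }
}
```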
>
>
> -Scott
>
> On Tue, Aug 22, 2017 at 2:16 PM, Daniel Ortega <
> danielortegauf...@gmail.com>
> wrote:
>
> > *Main Problems*
> >
> >
> > We a
*Main Problems*
We are involved in a migration from Solr Master/Slave infrastructure to
SolrCloud infrastructure.
The main problems that we have now are:
- Excessive resource consumption: currently we have 5 instances with 80
processors/768 GB RAM each instance, using SSD drives
> 5> if someone is indexing via SolrJ and specifies a "commitWithin" for
> one of the calls.
>
> Can you guarantee that none of these occur?
>
> Best,
> Erick
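For context on the commit triggers enumerated above: apart from explicit commits and commitWithin from clients, commits are configured in solrconfig.xml. A typical fragment looks like this; the interval values below are illustrative, not taken from this thread.

```xml
<updateHandler class="solr.DirectUpdateHandler2">
  <!-- Hard commit: flushes in-memory segments and truncates the transaction
       log. openSearcher=false means it does not make new docs visible. -->
  <autoCommit>
    <maxTime>60000</maxTime>
    <openSearcher>false</openSearcher>
  </autoCommit>
  <!-- Soft commit: makes new docs searchable without flushing segments. -->
  <autoSoftCommit>
    <maxTime>15000</maxTime>
  </autoSoftCommit>
</updateHandler>
```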
>
> On Thu, Jul 6, 2017 at 6:14 AM, Daniel Ortega
> wrote:
> > Hi Guys,
> >
> > Could some
Hi Guys,
Could someone explain to me why I have segments of 500 KB (with source "flush"
and only 91 documents) when I have a ramBufferSizeMB of 2 GB
and maxBufferedDocs not defined?
Thanks in advance,
Daniel
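(A likely explanation, for readers finding this thread: ramBufferSizeMB is only an upper bound. Any commit, whether explicit, from autoCommit, or from a client's commitWithin, also flushes whatever is currently buffered, however small, so frequent commits produce tiny segments with source "flush" long before the 2 GB buffer ever fills. The relevant solrconfig.xml fragment looks like this; 2048 is the value from the question above.)

```xml
<indexConfig>
  <!-- Upper bound on the in-memory buffer before Lucene flushes a segment.
       Commits flush the buffer too, which is why small "flush" segments
       can appear well below this limit. -->
  <ramBufferSizeMB>2048</ramBufferSizeMB>
</indexConfig>
```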