As detailed below. The collection where we have issues has 16 shards with
2 replicas each.
On Sun, May 10, 2020, 9:10 PM matthew sporleder wrote:
> Why so many shards?
>
> > On May 10, 2020, at 9:09 PM, Ganesh Sethuraman wrote:
> >
> > We are using a dedicated host (CentOS) on EC2 r5.12xlarge ...
We are using a dedicated host (CentOS) on EC2 r5.12xlarge (48 CPU, ~360GB
RAM), 2 nodes. Swappiness is set to 1, with a general-purpose 2TB EBS SSD
volume. JVM heap size is 18GB, with G1 GC enabled. About 92 collections with
an average of 8 shards and 2 replicas each. Most updates come in through
daily batch updates. While ...
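For reference, a minimal sketch (Python with requests; the host, port and collection layout are assumptions, adjust to your cluster) that pulls CLUSTERSTATUS from the Collections API to confirm the shard/replica counts per collection:

    import requests

    # Assumption: a SolrCloud node reachable on localhost:8983.
    SOLR = "http://localhost:8983/solr"

    resp = requests.get(SOLR + "/admin/collections",
                        params={"action": "CLUSTERSTATUS", "wt": "json"},
                        timeout=30)
    resp.raise_for_status()
    collections = resp.json()["cluster"]["collections"]

    for name, info in sorted(collections.items()):
        shards = info["shards"]
        replica_count = sum(len(s["replicas"]) for s in shards.values())
        print("%s: %d shards, %d replicas total" % (name, len(shards), replica_count))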
On 5/10/2020 4:48 PM, Ganesh Sethuraman wrote:
The additional info is that when we execute the test for longer (20 mins) we
are seeing better response times; however, for a short test (5 mins), when we
rerun the test after an hour or so we are seeing slow response times again.
Note that we don't update the ...
Here is a quick update based on your questions, and some additional
information that will help.
The additional info is that when we execute the test for longer (20 mins) we
are seeing better response times; however, for a short test (5 mins), when we
rerun the test after an hour or so we are seeing slow response times again ...
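That pattern (short runs slow, longer runs fast) usually points at cold caches: the first queries after an idle period or a commit pay for loading index data into the OS page cache and Solr's own caches. A rough sketch of timing the same queries cold and then warm; the collection name and queries are placeholders, and real warm-up queries should come from production logs:

    import time
    import requests

    SOLR = "http://localhost:8983/solr"        # assumption: adjust to your node
    COLLECTION = "your_collection"             # assumption: one of the collections
    QUERIES = ["field_a:foo", "field_b:bar"]   # hypothetical; replay from logs

    def timed_query(q):
        start = time.monotonic()
        r = requests.get("%s/%s/select" % (SOLR, COLLECTION),
                         params={"q": q, "rows": 10, "wt": "json"}, timeout=30)
        r.raise_for_status()
        return (time.monotonic() - start) * 1000

    # First pass: OS page cache, filterCache and queryResultCache are cold.
    cold = [timed_query(q) for q in QUERIES]
    # Second pass: the same queries after warm-up.
    warm = [timed_query(q) for q in QUERIES]
    print("cold ms:", [round(t, 1) for t in cold])
    print("warm ms:", [round(t, 1) for t in warm])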
Do not, repeat NOT expungeDelete after each deleteByQuery, it is
a very expensive operation. Perhaps after the nightly batch, but
I doubt that’ll help much anyway.
30% deleted docs is quite normal, and should definitely not
change the response time by a factor of 100! So there’s
some other issue.
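For context, a sketch of the operation in question (collection name and delete query are hypothetical): deleteByQuery with a plain commit, with the expungeDeletes variant shown only commented out:

    import requests

    SOLR = "http://localhost:8983/solr"
    COLLECTION = "your_collection"                    # assumption
    DELETE_QUERY = "expire_date:[* TO NOW-30DAYS]"    # hypothetical query

    # Delete with a plain commit (or rely on autoCommit); no expungeDeletes here.
    r = requests.post("%s/%s/update" % (SOLR, COLLECTION),
                      params={"commit": "true"},
                      json={"delete": {"query": DELETE_QUERY}},
                      timeout=300)
    r.raise_for_status()

    # If expungeDeletes is ever used at all, keep it to (at most) one
    # commit-only request after the nightly batch, never per delete:
    # requests.post("%s/%s/update" % (SOLR, COLLECTION),
    #               params={"commit": "true", "expungeDeletes": "true"},
    #               json={}, timeout=3600)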
On 10/19/2018 7:57 AM, Roopa Rao wrote:
Over the past few months there has been a steady increase in the Solr
response time in our application; yes, there have been enhancements and the
index size has increased.
How do we approach this issue to find the root cause of this slow and
constant increase? What parameters ...
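One way to start narrowing this down (a sketch, assuming Solr 6.4+ where the Metrics API exists; metric names and the response shape vary a little between versions) is to track the /select handler's request-time percentiles over time and correlate them with index growth:

    import requests

    SOLR = "http://localhost:8983/solr"   # assumption: adjust to your node

    # Request per-core timings for the /select handler.
    resp = requests.get(SOLR + "/admin/metrics",
                        params={"group": "core",
                                "prefix": "QUERY./select.requestTimes",
                                "wt": "json"},
                        timeout=30)
    resp.raise_for_status()

    for registry, metrics in resp.json()["metrics"].items():
        for name, stats in metrics.items():
            if isinstance(stats, dict):
                print(registry, name,
                      "p95_ms=%s p99_ms=%s" % (stats.get("p95_ms"), stats.get("p99_ms")))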
On 2/22/2018 10:45 AM, LOPEZ-CORTES Mariano-ext wrote:
For the moment, I have the following information:
12GB is the max Java heap. Total memory I don't know; no direct access to
the host.
2 replicas:
Size 1 = 11.51 GB
Size 2 = 11.82 GB
(Sizes shown in the Core Overview admin GUI)
...
From: Shawn Heisey [mailto:elyog...@elyograg.org]
Sent: Thursday, 22 February 2018 17:06
To: solr-user@lucene.apache.org
Subject: Re: Response time under 1 second?
On 2/22/2018 8:53 AM, LOPEZ-CORTES Mariano-ext wrote:
> With a 3-node cluster, 12GB each, and a corpus of 5GB (CSV format).
>
> ...
On 2/22/2018 8:53 AM, LOPEZ-CORTES Mariano-ext wrote:
With a 3-node cluster, 12GB each, and a corpus of 5GB (CSV format).
Is it better to completely disable the Solr caches? There is enough RAM for
the entire index.
The size of the input data will have an effect on how big the index is,
but it is ...
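Before disabling caches, it may be worth checking their actual hit ratios. A sketch using the MBeans stats endpoint (the core name is a placeholder, and stat key names differ between Solr versions):

    import requests

    SOLR = "http://localhost:8983/solr"
    CORE = "your_core_name"   # assumption: a core name from the admin UI

    resp = requests.get("%s/%s/admin/mbeans" % (SOLR, CORE),
                        params={"stats": "true", "cat": "CACHE",
                                "wt": "json", "json.nl": "map"},
                        timeout=30)
    resp.raise_for_status()
    caches = resp.json()["solr-mbeans"]["CACHE"]

    for name, bean in caches.items():
        stats = bean.get("stats", {}) or {}
        # Key names differ across versions ("hitratio" vs "CACHE.searcher.*.hitratio").
        ratios = {k: v for k, v in stats.items() if "hitratio" in k.lower()}
        print(name, ratios)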
Hi Erik,
thanks for your reply. I made some deeper investigations to track down the
reason for the behavior but wasn't successful so far.
Answers to your questions:
- yes, I completely re-indexed the data
- yes, I'm running a collection of around 5,000 queries coming from our
production logs
Now my ...
Two questions:
1> did you completely re-index under 6x? My guess is "yes", since you
jumped two major versions and 6x won't read a 4x index. If not you may
be getting some performance degradation due to back-compat.
2> Try turning on &debug=timing; that breaks down the time spent in each
component.
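A sketch of what 2> looks like from a client (collection and query are placeholders), reading the per-component prepare/process times out of the debug section:

    import requests

    SOLR = "http://localhost:8983/solr"
    COLLECTION = "your_collection"   # assumption

    resp = requests.get("%s/%s/select" % (SOLR, COLLECTION),
                        params={"q": "some_field:some_value",   # hypothetical query
                                "rows": 10, "debug": "timing", "wt": "json"},
                        timeout=30)
    resp.raise_for_status()
    timing = resp.json()["debug"]["timing"]

    print("total: %s ms" % timing["time"])
    for phase in ("prepare", "process"):
        for component, value in timing[phase].items():
            if component != "time":
                print("  %s / %s: %s ms" % (phase, component, value["time"]))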
Solr's QTime represents the actual time spent on searching, whereas your C#
client's response time is the total time spent sending the HTTP request and
getting back the response (which might also include parsing the results).
Regards
Pravesh
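A small sketch of the difference being described (endpoint and collection are placeholders): QTime comes back in the response header, while the wall-clock time the client sees also includes writing and transferring the response and parsing it:

    import time
    import requests

    SOLR = "http://localhost:8983/solr"
    COLLECTION = "your_collection"   # assumption

    start = time.monotonic()
    resp = requests.get("%s/%s/select" % (SOLR, COLLECTION),
                        params={"q": "*:*", "rows": 100, "fl": "*", "wt": "json"},
                        timeout=30)
    resp.raise_for_status()
    body = resp.json()
    wall_ms = (time.monotonic() - start) * 1000

    # QTime is the server-side search time; the remainder is response writing
    # (stored-field retrieval for &fl), network transfer and client-side parsing.
    print("QTime: %s ms" % body["responseHeader"]["QTime"])
    print("client wall-clock: %.1f ms" % wall_ms)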
Hello,
QTime counts only searching and filtering, but not writing the response,
which includes retrieving the stored fields (&fl=...). So it's quite
reasonable.
On Thu, Jan 17, 2013 at 7:09 AM, 张浓飞 wrote:
> I have a Solr website with about 500 docs (30 fields defined in the schema),
> and a C# client ...
How long are the documents? Indexing a large document can be slow
(although 2 seconds is very slow indeed).
2011/6/22 Rode González (libnova):
> Hi!
>
> We are using Zend Search, based on Lucene. Our indexing of PDF consultations
> takes longer than 2 seconds.
>
> We want to change to Solr to tr...
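On the Solr side, PDFs are normally pushed through the extracting request handler (Tika). A sketch, assuming that handler is enabled in solrconfig.xml, with a hypothetical file and id; committing per document is a common reason indexing "takes seconds" per PDF, so commitWithin is used instead:

    import requests

    SOLR = "http://localhost:8983/solr"
    COLLECTION = "your_collection"   # assumption

    # Hypothetical file and id.
    with open("consultation.pdf", "rb") as f:
        resp = requests.post("%s/%s/update/extract" % (SOLR, COLLECTION),
                             params={"literal.id": "doc-123",
                                     "commitWithin": "60000"},
                             files={"file": ("consultation.pdf", f, "application/pdf")},
                             timeout=120)
    resp.raise_for_status()
    print(resp.json()["responseHeader"])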
Hi Rode,
Have you seen http://wiki.apache.org/solr/SolrPerformanceFactors ?
Steve
> -Original Message-
> From: Rode González (libnova) [mailto:r...@libnova.es]
> Sent: Wednesday, June 22, 2011 11:30 AM
> To: solr-user@lucene.apache.org
> Cc: dan...@silvereme.com; Gonzalo Iglesias; Leo; M...
Yes, non-cached. If I repeat a query the response is fast since the results
are cached.
2009/4/7 Noble Paul നോബിള് नोब्ळ्
> are these the numbers for non-cached requests?
>
> On Tue, Apr 7, 2009 at 11:46 AM, CIF Search wrote:
> > Hi,
> >
> > I have around 10 Solr servers running indexes of around 80-85 GB each ...
are these the numbers for non-cached requests?
On Tue, Apr 7, 2009 at 11:46 AM, CIF Search wrote:
> Hi,
>
> I have around 10 Solr servers running indexes of around 80-85 GB each,
> with 16,000,000 docs each. When I use distrib for querying, I am not
> getting a satisfactory response time.
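For reference, a sketch of that kind of distributed (pre-SolrCloud) request using the shards parameter; the host names, core and query are hypothetical:

    import requests

    # One node acts as the aggregator; shards is a comma-separated list of
    # host:port/solr/<core> entries (no scheme prefix).
    SHARDS = ",".join("solr%d.example.com:8983/solr/core1" % i for i in range(1, 11))

    resp = requests.get("http://solr1.example.com:8983/solr/core1/select",
                        params={"q": "field:value", "rows": 10,
                                "shards": SHARDS, "wt": "json"},
                        timeout=60)
    resp.raise_for_status()
    data = resp.json()
    print("QTime:", data["responseHeader"]["QTime"], "ms",
          "numFound:", data["response"]["numFound"])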