So, basically I made the first mistake by optimizing? At this point, since it
seems I can't stop these optimizations from running, should I just drop all
data and start fresh?
On Mon, Apr 23, 2018 at 01:23 PM, Erick Erickson wrote:
No, it's not "optimizing on its own". At least it better not be.
I only have one core, 'dovecot'. This is a pretty standard config. How do I
stop it from doing all these optimizes? Is there an automatic process that
triggers them?
On Mon, Apr 23, 2018 at 01:25 PM, Shawn Heisey wrote:
On 4/23/2018 11:13 AM, Scott M. wrote:
I recently installed Solr 7.1 and configured it to work with Dovecot for
full-text searching. It works great, but after about 2 days of indexing I
pressed the 'Optimize' button. At that point it had collected about 17 million
documents and it was taking up about 60-70GB of space. It completed ...
No, it's not "optimizing on its own". At least it better not be.
As far as your index growing after optimize, that's the little
"gotcha" with optimize, see:
https://lucidworks.com/2017/10/13/segment-merging-deleted-documents-optimize-may-bad/
This is being addressed in the 7.4 time frame (hopefully) ...
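For what it's worth, an optimize only happens when something sends Solr an
explicit request; with Dovecot setups that is often an external script or cron
job hitting the update handler, along these lines (core name 'dovecot' as
above, default host and port assumed):

    # An explicit optimize request -- Solr never issues this on its own
    curl 'http://localhost:8983/solr/dovecot/update?optimize=true'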
Subject: RE: yet another optimize question
Petersen, Robert [robert.peter...@mail.rakuten.com] wrote:
> We actually have hundreds of facet-able fields, but most are specialized
> and are only faceted upon if the user has drilled into the particular category
> to which they are applicable and so they are only indexed for products
> in those categories ...
Sent: Wednesday, June 19, 2013 10:50 AM
To: solr-user@lucene.apache.org
Subject: Re: yet another optimize question
I generally run with an 8GB heap for a system that does no faceting. 32GB does
seem rather large, but you really should have room for bigger caches.
The Akamai cache will reduce your hit rate ...
From: Walter Underwood [mailto:wun...@wunderwood.org]
Sent: Tuesday, June 18, 2013 6:57 PM
To: solr-user@lucene.apache.org
Subject: Re: yet another optimize question
Your query cache is far too small. Most of the default caches are too small.
We run with 10K entries and get a hit rate around 0.30 across four servers.
This ...
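For reference, a query result cache in that size range is declared in
solrconfig.xml along these lines (the numbers here are illustrative, not the
exact settings from this thread):

    <!-- solrconfig.xml, inside the <query> section: result cache sized for roughly 10K entries -->
    <queryResultCache class="solr.LRUCache"
                      size="10240"
                      initialSize="10240"
                      autowarmCount="512"/>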
... facet fields, eh?
Thanks for the tip.
Thanks
Robi
-Original Message-
From: Andre Bois-Crettez [mailto:andre.b...@kelkoo.com]
Sent: Tuesday, June 18, 2013 3:03 AM
To: solr-user@lucene.apache.org
Subject: Re: yet another optimize question
Recently we had steadily increasing memory usage and OOM due to facets on
dynamic fields.
The default facet.method=fc needs to build a large array of maxdocs ints for
each field (a fieldCache or fieldValueCache entry), whether it is sparsely
populated or not.
Once you have reduced your number of ma ...
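One way to act on that is to override the facet method only for the sparse
fields; facet.method can be set per field with the f.<fieldname>. prefix (the
core and field names below are hypothetical):

    # Use the enum method for a sparse facet field instead of the default fc
    curl 'http://localhost:8983/solr/products/select?q=*:*&facet=true&facet.field=color_s&f.color_s.facet.method=enum'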
> To: solr-user@lucene.apache.org
> Subject: Re: yet another optimize question
>
> Hi Robi,
>
> This goes against the original problem of getting OOMEs, but it looks like
> each of your Solr caches could be a little bigger if you want to eliminate
> evictions, with the qu ...
... already in effect for me?
10
10
Thanks
Robi
-Original Message-
From: Otis Gospodnetic [mailto:otis.gospodne...@gmail.com]
Sent: Monday, June 17, 2013 6:36 PM
To: solr-user@lucene.apache.org
Subject: Re: yet another optimize question
Yes, in one of the example solrconfig ...
> ... minSize=223, acceptableSize=235, cleanupThread=false, autowarmCount=10,
> regenerator=org.apache.solr.search.SolrIndexSearcher$2@36e831d6)
> stats: lookups : 3990
> hits : 3831
> hitratio : 0.96
> inserts : 239
> evictions : 26
> size : 244
> warmupTime : 1
> cumulative_lookups : 5745011
> cumulative_hits : 5496150
> cumulative_hitratio : 0.95
> cumulative_inserts : 351485
> cumulative_evictions : 276308
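For reference, the hitratio figures above are just hits divided by lookups;
for the current searcher that works out to:

    3831 / 3990 ≈ 0.96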
Recently we had steadily increasing memory usage and OOM due to facets
on dynamic fields.
The default facet.method=fc need to build a large array of maxdocs ints
for each field (a fieldCache or fieldValueCahe entry), whether it is
sparsely populated or not.
Once you have reduced your number of ma
... to want to put something like this into my index defaults section?
10
10
Thanks
Robi
-Original Message-
From: Upayavira [mailto:u...@odoko.co.uk]
Sent: Monday, June 17, 2013 12:29 PM
To: solr-user@lucene.apache.org
Subject: Re: yet another optimize question
The key figures are num ...
-Original Message-
From: Otis Gospodnetic [mailto:otis.gospodne...@gmail.com]
Sent: Saturday, June 15, 2013 5:52 AM
To: solr-user@lucene.apache.org
Subject: Re: yet another optimize question
Hi Robi,
I'm going to guess you are seeing smaller heap also simply because you
restarted the JVM recently (hm, you don't say you restarted, maybe I'm
making this up). If you are indeed indexing continuously then you
shouldn't optimize. Lucene will merge segments itself. Lower
mergeFactor will force ...
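For a sketch of where that setting lives (Solr 3.x-era solrconfig.xml; the
values shown are the stock example defaults, not necessarily Robi's):

    <!-- solrconfig.xml (Solr 3.x): merge settings live in the indexDefaults section -->
    <indexDefaults>
      <mergeFactor>10</mergeFactor>
      <ramBufferSizeMB>32</ramBufferSizeMB>
    </indexDefaults>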
Hi guys,
We're on solr 3.6.1 and I've read the discussions about whether to optimize or
not to optimize. I decided to try not optimizing our index as was recommended.
We have a little over 15 million docs in our biggest index and a 32GB heap for
our JVM. So without the optimizes, the index fo ...