Or consider separating frequently changing data into a different core
from the slow moving data, if you can, reducing the amount of data being
pushed around.
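(As a sketch of the two-core layout Upayavira describes, using the legacy solr.xml multicore format from the Solr 3.x/4.x era; the core names here are made up for illustration:)

```xml
<!-- solr.xml (legacy multicore layout, Solr 3.x/4.x); core names are illustrative -->
<solr persistent="true">
  <cores adminPath="/admin/cores">
    <!-- slow-moving catalog data: rarely reindexed, so rarely merged -->
    <core name="catalog" instanceDir="catalog" />
    <!-- frequently updated fields (prices, stock): churn stays in this small core -->
    <core name="volatile" instanceDir="volatile" />
  </cores>
</solr>
```

Queries that need both cores can then join or federate across them, while merges and deletes stay confined to the small, volatile core.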
Upayavira
On Mon, Sep 29, 2014, at 09:16 PM, Bryan Bende wrote:
> You can try lowering the mergeFactor in solrconfig.xml to cause more
> merges
You can try lowering the mergeFactor in solrconfig.xml to cause more merges
to happen during normal indexing, which should result in more deleted
documents being removed from the index, but there is a trade-off
http://wiki.apache.org/solr/SolrPerformanceFactors#mergeFactor
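(For reference, a lowered mergeFactor looks like the fragment below; the value 5 is illustrative, the default is 10, and placement is version-dependent: under <indexDefaults>/<mainIndex> in Solr 3.x, under <indexConfig> in 4.x:)

```xml
<!-- solrconfig.xml: lower mergeFactor so merges run more often during
     normal indexing, reclaiming deleted docs sooner at some indexing cost -->
<indexConfig>
  <mergeFactor>5</mergeFactor>
</indexConfig>
```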
On Mon, Sep 29, 2014
Thanks for replying! Is there anything I could be doing to keep the 14GB
collection with 700k deleted docs from running out of memory when it tries
removing them? Maybe just scheduled off-peak optimize calls with
expungeDeletes? Or is there some other config option I could use?
Yes, expungeDeletes=true will remove all deleted docs from the disk but it
also requires merging all segments that have any deleted docs which, in
your case, could mean a re-write of the entire index. So it'd be an
expensive operation. Usually deletes are removed in the normal course of
indexing as segments are merged.
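(The expungeDeletes commit described above can be sent through the update handler; the collection name "collection1" below is illustrative:)

```shell
# Issue a hard commit that also merges away every segment containing
# deleted docs -- potentially a rewrite of most of the index
curl 'http://localhost:8983/solr/collection1/update?commit=true&expungeDeletes=true'
```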
I'm running into memory issues and wondering if I should be using
expungeDeletes on commits. The server in question at the moment has
450k documents in the collection and represents 15GB on disk. There are
also 700k+ "Deleted Docs" and I'm guessing that is part of the disk
space consumption.
Shawn Heisey [mailto:s...@elyograg.org]
Sent: Wednesday, July 10, 2013 5:34 PM
To: solr-user@lucene.apache.org
Subject: Re: expunging deletes
On 7/10/2013 5:58 PM, Petersen, Robert wrote:
> Using solr 3.6.1 and the following settings, I am trying to run without
> optimizes. I used to optimize nightly, but sometimes the optimize took a
> very long time to complete and slowed down our indexing. We are continuously
> indexing our new or changed data all day and night.
Hi guys,
Using solr 3.6.1 and the following settings, I am trying to run without
optimizes. I used to optimize nightly, but sometimes the optimize took a very
long time to complete and slowed down our indexing. We are continuously
indexing our new or changed data all day and night.
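(One middle ground between nightly full optimizes and none at all is a partial optimize; the update handler accepts a maxSegments parameter, and the target of 8 segments below is illustrative:)

```shell
# Merge down to at most 8 segments instead of 1: cheaper than a full
# optimize, and still purges most deleted docs from the merged segments
curl 'http://localhost:8983/solr/collection1/update?optimize=true&maxSegments=8'
```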
You can drop your mergeFactor to 2 and then run expungeDeletes?
This will make the operation take longer but (assuming you have > 3
segments in your index) should use less transient disk space.
You could also make a custom merge policy that expunges one segment
at a time (even slower, but using even less transient disk space).
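(Short of writing a custom policy, TieredMergePolicy exposes knobs that bias merging toward delete-heavy segments; a sketch, with illustrative values, noting that the setting names vary across Lucene versions, e.g. expungeDeletesPctAllowed in Lucene 3.x became forceMergeDeletesPctAllowed in 4.x:)

```xml
<!-- solrconfig.xml, inside <indexConfig> (4.x naming shown) -->
<mergePolicy class="org.apache.lucene.index.TieredMergePolicy">
  <!-- weight delete-heavy segments more when selecting merges -->
  <double name="reclaimDeletesWeight">4.0</double>
  <!-- during expungeDeletes, only rewrite segments whose deleted-doc
       percentage exceeds this threshold -->
  <double name="forceMergeDeletesPctAllowed">10.0</double>
</mergePolicy>
```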
Due to some emergency maintenance I needed to run delete on a large
number of documents in a 200Gb index.
The problem is that it's taking an inordinately long amount of time (2+
hours so far and counting) and is steadily eating up disk space -
presumably up to 2x index size, which is getting awfully tight.
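(For context, a bulk delete like this is typically sent as a delete-by-query to the update handler; the field and value below are illustrative. The documents are only marked deleted at first, and the disk-space growth comes from the merges that later rewrite those segments:)

```shell
# Delete a large set of documents by query, then commit;
# space is reclaimed only as the affected segments get merged
curl 'http://localhost:8983/solr/collection1/update?commit=true' \
  -H 'Content-Type: text/xml' \
  --data-binary '<delete><query>expired:true</query></delete>'
```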