Thanks for replying! Is there anything I could be doing to keep the 14GB collection from accumulating 700k deleted docs in the first place, so it doesn't run out of memory when it finally tries to remove them? Maybe just scheduled off-peak optimize calls with expungeDeletes? Or is there some other config option I could be using to manage that a little better?
Thanks!
Eric

On Sep 29, 2014, at 9:35 AM, Shalin Shekhar Mangar <shalinman...@gmail.com> wrote:

> Yes, expungeDeletes=true will remove all deleted docs from the disk, but it
> also requires merging all segments that have any deleted docs which, in
> your case, could mean a rewrite of the entire index. So it'd be an
> expensive operation. Usually deletes are removed in the normal course of
> indexing as segments are merged together.
>
> On Sat, Sep 27, 2014 at 8:42 PM, Eric Katherman <e...@knackhq.com> wrote:
>
>> I'm running into memory issues and wondering if I should be using
>> expungeDeletes on commits. The server in question at the moment has 450k
>> documents in the collection and represents 15GB on disk. There are also
>> 700k+ "Deleted Docs" and I'm guessing that is part of the disk space
>> consumption, but I am not having any luck getting that cleared out. I
>> noticed expungeDeletes=false in some of the log output related to
>> commit but didn't try setting it to true yet. Will this clear those deleted
>> documents and recover that space? Or should something else already be
>> managing that but maybe isn't configured correctly?
>>
>> Our data is user-specific; each customer has their own database
>> structure, so it varies with each user. They also add/remove data fairly
>> frequently in many cases. To compare, another collection of the same data
>> type has 1M documents and about 120k deleted docs, but its disk space is
>> only 6.3GB.
>>
>> Hoping someone can share some advice about how to manage this.
>>
>> Thanks,
>> Eric
>
> --
> Regards,
> Shalin Shekhar Mangar.
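For anyone following along, the "scheduled off-peak commit" idea from the question above can be sketched as a small shell script run from cron. Solr's update handler does accept commit=true and expungeDeletes=true as request parameters, but the host, port, and collection name below are assumptions and need to be adapted to your deployment:

```shell
#!/bin/sh
# Sketch: trigger a commit with expungeDeletes during off-peak hours.
# Host/port and collection name are ASSUMED values -- adjust for your setup.
SOLR_URL="http://localhost:8983/solr"   # assumed Solr base URL
COLLECTION="collection1"                # assumed collection name

# Build the update URL that asks Solr to commit and expunge deleted docs.
UPDATE_URL="${SOLR_URL}/${COLLECTION}/update?commit=true&expungeDeletes=true"
echo "$UPDATE_URL"

# From cron (e.g. nightly at 3am) you would then issue:
#   curl "$UPDATE_URL"
# Note this merges every segment containing deletes, so as the reply above
# warns, it can rewrite much of the index -- schedule it accordingly.
```

As a lighter-weight alternative to scheduled expunges, the merge policy itself can be tuned; for example, TieredMergePolicy has a reclaimDeletesWeight setting that biases normal background merges toward segments with many deletes, though the right value depends on the workload.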