Erick,
Thanks for your update.
The problem is that this data remains until every document in the segment is deleted.
I understand this would cause optimize to double-scan the index folder in this case.
We may be able to add some logic that checks the file size and only does this scan
when the file is too big.
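As a rough sketch of that check (the .fnm extension test, the threshold value, and the class name are illustrative only; Directory.listAll() and fileLength() are the only Lucene calls used):

import java.io.IOException;
import org.apache.lucene.store.Directory;

public class FieldInfosSizeCheck {
  // Illustrative threshold only, not an existing Lucene setting.
  private static final long FNM_SIZE_THRESHOLD_BYTES = 10L * 1024 * 1024;

  /** Only trigger the extra scan when the combined .fnm size crosses the threshold. */
  public static boolean shouldScan(Directory dir) throws IOException {
    long totalFnmBytes = 0;
    for (String file : dir.listAll()) {
      if (file.endsWith(".fnm")) {
        totalFnmBytes += dir.fileLength(file);
      }
    }
    return totalFnmBytes > FNM_SIZE_THRESHOLD_BYTES;
  }
}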
Yon
Oh my! I see what your problem is, but I rather doubt that it'll be
addressed. You've obviously been stress-testing the indexing process
and have a bunch of garbage left over that's not getting removed on
optimize. But it's such a unique case that I don't know if anyone is
really interested in fixing it.
For our setup, the file size is 123M. Internally it has 2.6M fields.
The problem is the facet operation; it takes a while to facet.
We are stuck in the call stack below for 11 seconds.
java.util.HashMap.transfer(Unknown Source)
java.util.HashMap.resize(Unknown Source)
java.util.HashMap.addEntry(Unknown Source)
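For reference, the per-segment field counts can be read straight off the reader. A rough sketch against the Lucene 4.x API (the class name and output format are just for illustration):

import org.apache.lucene.index.AtomicReaderContext;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class FieldCountDump {
  public static void main(String[] args) throws Exception {
    Directory dir = FSDirectory.open(new java.io.File(args[0]));
    try (DirectoryReader reader = DirectoryReader.open(dir)) {
      for (AtomicReaderContext ctx : reader.leaves()) {
        // getFieldInfos() still lists fields that only deleted documents ever used
        System.out.println(ctx.reader() + ": fields=" + ctx.reader().getFieldInfos().size());
      }
    }
  }
}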
How big is the fnm file? While you may be technically correct, I'm not
sure it would be worth the effort; I rather expect this file to be
quite small.
Are you seeing a performance issue or is this more in the theoretical realm?
Best,
Erick
On Tue, May 6, 2014 at 1:23 PM, googoo wrote:
I checked the implementation.
In SegmentMerger.mergeFieldInfos:

public void mergeFieldInfos() throws IOException {
  for (AtomicReader reader : mergeState.readers) {
    FieldInfos readerFieldInfos = reader.getFieldInfos();
    for (FieldInfo fi : readerFieldInfos) {
      fieldInfosBuilder.add(fi);
    }
  }
}
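So every FieldInfo from every source reader is added back, whether or not any live document still uses that field. Just to sketch where a filter could go (this is not the Lucene code; liveFieldNames and computeLiveFieldNames are hypothetical, and building that set is the extra pass over live documents that makes this expensive):

java.util.Set<String> liveFieldNames = computeLiveFieldNames(mergeState.readers); // hypothetical helper
for (AtomicReader reader : mergeState.readers) {
  FieldInfos readerFieldInfos = reader.getFieldInfos();
  for (FieldInfo fi : readerFieldInfos) {
    if (liveFieldNames.contains(fi.name)) { // keep only fields a live document still uses
      fieldInfosBuilder.add(fi);
    }
  }
}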