[
https://issues.apache.org/jira/browse/OPENNLP-421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17798123#comment-17798123
]
ASF GitHub Bot commented on OPENNLP-421:
----------------------------------------
rzo1 commented on PR #568:
URL: https://github.com/apache/opennlp/pull/568#issuecomment-1860086040
Here are the JMH results for the interner implementations only:
```bash
Benchmark                              (size)   Mode  Cnt          Score         Error  Units
StringDeduplicationBenchmark.chm            1  thrpt   25   36070236,872 ± 1204122,063  ops/s
StringDeduplicationBenchmark.chm          100  thrpt   25     354445,896 ±    2893,432  ops/s
StringDeduplicationBenchmark.chm        10000  thrpt   25       2323,076 ±      25,596  ops/s
StringDeduplicationBenchmark.chm      1000000  thrpt   25         13,137 ±       0,199  ops/s
StringDeduplicationBenchmark.chmd05         1  thrpt   25   45289035,126 ±  143794,468  ops/s
StringDeduplicationBenchmark.chmd05       100  thrpt   25     433279,497 ±    3165,815  ops/s
StringDeduplicationBenchmark.chmd05     10000  thrpt   25       2692,779 ±       8,173  ops/s
StringDeduplicationBenchmark.chmd05   1000000  thrpt   25         13,413 ±       0,155  ops/s
StringDeduplicationBenchmark.hm             1  thrpt   25   35123958,776 ±  472485,649  ops/s
StringDeduplicationBenchmark.hm           100  thrpt   25     371997,311 ±    6780,622  ops/s
StringDeduplicationBenchmark.hm         10000  thrpt   25       2311,588 ±     115,117  ops/s
StringDeduplicationBenchmark.hm       1000000  thrpt   25         14,073 ±       0,068  ops/s
StringDeduplicationBenchmark.intern         1  thrpt   25   10040026,472 ±   77470,327  ops/s
StringDeduplicationBenchmark.intern       100  thrpt   25      87644,048 ±     844,053  ops/s
StringDeduplicationBenchmark.intern     10000  thrpt   25        764,752 ±      34,300  ops/s
StringDeduplicationBenchmark.intern   1000000  thrpt   25          2,956 ±       0,024  ops/s
StringDeduplicationBenchmark.noop           1  thrpt   25  148719353,102 ±  743493,703  ops/s
StringDeduplicationBenchmark.noop         100  thrpt   25    1491614,947 ±    1587,406  ops/s
StringDeduplicationBenchmark.noop       10000  thrpt   25       9848,732 ±      11,076  ops/s
StringDeduplicationBenchmark.noop     1000000  thrpt   25         78,726 ±       0,064  ops/s
```
https://gist.github.com/rzo1/754b15381041b28718cf44073f4986c5
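In the results above, the ConcurrentHashMap-based variants (chm, chmd05) beat both String.intern() and a plain HashMap. A minimal sketch of such an interner follows; the class name and structure here are illustrative only, not the code actually benchmarked (that lives in the linked PR and gist):

```java
import java.util.concurrent.ConcurrentHashMap;

// Hedged sketch of a ConcurrentHashMap-based string interner. Unlike
// String.intern(), entries live on the ordinary heap and are reclaimed
// together with the map, so no PermGen/Metaspace pressure builds up.
public final class ChmInterner {
    private final ConcurrentHashMap<String, String> pool = new ConcurrentHashMap<>();

    public String intern(String s) {
        // putIfAbsent returns the previously mapped value, or null if s
        // was inserted just now; either way, one canonical instance wins.
        String existing = pool.putIfAbsent(s, s);
        return existing != null ? existing : s;
    }
}
```

Usage: two equal-but-distinct String instances passed through intern() come back as the same reference, which is the deduplication effect the benchmark measures.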
> Large dictionaries cause JVM OutOfMemoryError: PermGen due to String interning
> ------------------------------------------------------------------------------
>
> Key: OPENNLP-421
> URL: https://issues.apache.org/jira/browse/OPENNLP-421
> Project: OpenNLP
> Issue Type: Bug
> Components: Name Finder
> Affects Versions: tools-1.5.2-incubating
> Environment: RedHat 5, JDK 1.6.0_29
> Reporter: Jay Hacker
> Assignee: Richard Zowalla
> Priority: Minor
> Labels: performance
> Original Estimate: 168h
> Remaining Estimate: 168h
>
> The current implementation of StringList:
> https://svn.apache.org/viewvc/incubator/opennlp/branches/opennlp-1.5.2-incubating/opennlp-tools/src/main/java/opennlp/tools/util/StringList.java?view=markup
>
> calls intern() on every String. Presumably this is an attempt to reduce
> memory usage for duplicate tokens. Interned Strings are stored in the JVM's
> permanent generation, which has a small fixed size (seems to be about 83 MB
> on modern 64-bit JVMs:
> [http://www.oracle.com/technetwork/java/javase/tech/vmoptions-jsp-140102.html]).
> Once this fills up, the JVM crashes with an OutOfMemoryError: PermGen
> space.
> The size of the PermGen can be increased with the -XX:MaxPermSize= option to
> the JVM. However, this option is non-standard and not well known, and it
> would be nice if OpenNLP worked out of the box without deep JVM tuning.
> This immediate problem could be fixed by simply not interning Strings.
> Looking at the Dictionary and DictionaryNameFinder code as a whole, however,
> there is a huge amount of room for performance improvement. Currently,
> DictionaryNameFinder.find works something like this:
> for every token in every tokenlist in the dictionary:
>     copy it into a "meta dictionary" of single tokens
> for every possible subsequence of tokens in the sentence:  // of which there are O(N^2)
>     copy the sequence into a new array
>     if the last token is in the "meta dictionary":
>         make a StringList from the tokens
>         look it up in the dictionary
> Dictionary itself is very heavyweight: it's a Set<StringListWrapper>, which
> wraps StringList, which wraps Array<String>. Every entry in the dictionary
> requires at least four allocated objects (in addition to the Strings): Array,
> StringList, StringListWrapper, and HashMap.Entry. Even contains and remove
> allocate new objects!
> From this comment in DictionaryNameFinder:
> // TODO: improve performance here
> It seems like improvements would be welcome. :) Removing some of the object
> overhead would more than make up for interning strings. Should I create a
> new Jira ticket to propose a more efficient design?
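The lookup strategy described in the quoted pseudocode can be sketched roughly as below. This is an assumption-laden simplification: the dictionary is reduced to a Set of token lists, and the class and field names are invented for illustration; the real OpenNLP Dictionary, StringList, and DictionaryNameFinder classes are more involved.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Illustrative sketch of the quoted pseudocode; not OpenNLP's actual code.
public final class NaiveDictionaryFinder {
    private final Set<List<String>> dictionary;
    // Single-token "meta dictionary" used as a cheap pre-filter.
    private final Set<String> metaTokens = new HashSet<>();

    public NaiveDictionaryFinder(Set<List<String>> dictionary) {
        this.dictionary = dictionary;
        for (List<String> entry : dictionary) {
            metaTokens.addAll(entry); // copy every token of every entry
        }
    }

    // Returns [start, end) index pairs of dictionary matches in the sentence.
    public List<int[]> find(String[] sentence) {
        List<int[]> spans = new ArrayList<>();
        // Enumerate all O(N^2) candidate subsequences.
        for (int start = 0; start < sentence.length; start++) {
            for (int end = start + 1; end <= sentence.length; end++) {
                // Skip candidates whose last token never occurs in the dictionary.
                if (!metaTokens.contains(sentence[end - 1])) {
                    continue;
                }
                // Copy the subsequence and look it up, allocating per candidate,
                // which is the overhead the quoted comment complains about.
                List<String> candidate =
                        Arrays.asList(Arrays.copyOfRange(sentence, start, end));
                if (dictionary.contains(candidate)) {
                    spans.add(new int[] { start, end });
                }
            }
        }
        return spans;
    }
}
```

Even with the pre-filter, each surviving candidate still costs a fresh array copy plus a hash lookup, which illustrates why the quoted text sees room for a more allocation-frugal design.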
--
This message was sent by Atlassian Jira
(v8.20.10#820010)