We're using MMSeg with Lucene, but not Solr. Since each SolrCore is
independent, I'm not sure how you can avoid each having a copy of the
dictionary, unless you modified MMSeg to use shared memory. Or maybe I'm
missing something.
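
As a rough illustration of the "share one copy" idea, a minimal sketch (in Java,
not the actual mmseg4j API; the class and method names SharedDictionaryCache and
loadFrom are hypothetical) would be to cache the loaded dictionary in a JVM-wide
static map keyed by its path, so that every core pointing at the same dicPath
reuses the same in-memory instance instead of loading its own:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public final class SharedDictionaryCache {

        // One entry per distinct dictionary path, shared by every core in the JVM.
        private static final Map<String, Object> CACHE = new ConcurrentHashMap<>();

        private SharedDictionaryCache() {}

        public static Object get(String dicPath) {
            // computeIfAbsent loads the dictionary only on the first request;
            // later cores reuse the same in-memory instance.
            return CACHE.computeIfAbsent(dicPath, SharedDictionaryCache::loadFrom);
        }

        private static Object loadFrom(String dicPath) {
            // Placeholder for whatever loader MMSeg actually exposes;
            // swap in the real dictionary-loading call here.
            return new Object();
        }
    }

Whether this is feasible depends on how MMSeg's analyzer factory constructs its
dictionary, so it would likely mean patching the analyzer rather than just
changing Solr configuration.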

On Mon, Oct 8, 2012 at 3:37 AM, liyun <liyun2...@corp.netease.com> wrote:

> Hi all,
> Is anybody using the mmseg analyzer for Chinese word analysis? When we use it
> in Solr multi-core, I find that it loads the dictionary per core, and each
> core costs about 50MB of memory. I think this is a big waste when our JVM has
> only 1GB of memory… Does anyone have a good idea for handling this?
>
> 2012-10-08
>
>
>
> Li Yun
> Software Engineer @ Netease
> Mail: liyun2...@corp.netease.com
> MSN: rockiee...@gmail.com
