On Tue, Jun 9, 2009 at 2:56 PM, revas <revas...@gmail.com> wrote:

> But the spell check component uses the n-gram analyzer and hence should
> work for any language, is this correct? Also, we can refer to an external
> dictionary for suggestions; could this be in any language?
>

Yes, it does use n-grams, but there is an analysis step before the n-grams
are created. For example, if you are creating your spell check index from a
Solr field, SpellCheckComponent uses that field's index-time analyzer. So you
should define your language-specific fields in such a way that the analysis
works correctly for that language.
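
As a rough sketch only (the field, type, and spellchecker names below are
made up, and you would substitute the analyzers appropriate for your
language), the schema.xml side could look something like:

  <fieldType name="text_fr_spell" class="solr.TextField">
    <analyzer>
      <tokenizer class="solr.StandardTokenizerFactory"/>
      <filter class="solr.LowerCaseFilterFactory"/>
      <!-- language-specific filters, e.g. a French stemmer -->
      <filter class="solr.SnowballPorterFilterFactory" language="French"/>
    </analyzer>
  </fieldType>
  <field name="spell_fr" type="text_fr_spell" indexed="true" stored="false"/>

and the spellchecker definition in solrconfig.xml would point at that field:

  <searchComponent name="spellcheck" class="solr.SpellCheckComponent">
    <lst name="spellchecker">
      <str name="name">french</str>
      <str name="field">spell_fr</str>
      <str name="spellcheckIndexDir">./spellchecker_fr</str>
      <str name="buildOnCommit">true</str>
    </lst>
  </searchComponent>

For the external dictionary part of your question, solr.FileBasedSpellChecker
with a sourceLocation pointing to a plain word list (one word per line) can
be used, and that list can be in any language.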


> The open files issue is not because of spell check, as we have not yet
> implemented it. Every time we restart Solr we need to raise the ulimit,
> otherwise it does not work. Is there any workaround to permanently close
> these open files? Does optimizing the index close them?
>

Optimization merges the segments of the index into one big segment, so it
will reduce the number of files. However, during the merge it may temporarily
create many more files. The old files left over after the merge are cleaned
up by Lucene after a while (unless you have changed the defaults in the
deletionPolicy section of solrconfig.xml).
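
For reference, the deletionPolicy block in the stock example solrconfig.xml
looks roughly like this (double-check against your own config, the values
here are just the shipped defaults):

  <deletionPolicy class="solr.SolrDeletionPolicy">
    <str name="maxCommitsToKeep">1</str>
    <str name="maxOptimizedCommitsToKeep">0</str>
  </deletionPolicy>

With those defaults only the latest commit point is kept, so the old segment
files become deletable soon after the merge finishes. To trigger an optimize
you can post an <optimize/> command to the update handler, e.g. (assuming the
default example port):

  curl http://localhost:8983/solr/update --data-binary '<optimize/>' -H 'Content-Type: text/xml'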

-- 
Regards,
Shalin Shekhar Mangar.
