> I had the very same article in mind - how would it be simpler in Solr
> than in Lucene? A spellchecker is pretty much standard in every major

I meant it would be a simpler implementation in Solr because you don't
have to deal with Java or any Lucene APIs. You just create a document
for each "correct" word. For example, the word "lettuce" would have a
document like this:

<doc>
<field name="word">lettuce</field>
<field name="start3">let</field>
<field name="gram3">let ett ttu tuc uce</field>
<field name="end3">uce</field>
<field name="start4">lett</field>
<field name="gram4">lett ettu ttuc tuce</field>
<field name="end4">tuce</field>
</doc>
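
Just as a rough, untested sketch of how those documents could be
generated without touching Java: the little Python script below builds
the add XML for one word. The field names (word, start3, gram3, end3,
start4, gram4, end4) are simply copied from the example above and are
not anything Solr defines on its own; they would have to exist in your
schema.

# Sketch: build a Solr add-document for one dictionary word.
# Field names match the example above; adjust to your own schema.

def ngrams(word, n):
    """Return the character n-grams of a word, in order."""
    return [word[i:i + n] for i in range(len(word) - n + 1)]

def spellcheck_doc(word):
    """Build the <doc> XML for one correct word."""
    fields = [("word", word)]
    for n in (3, 4):
        grams = ngrams(word, n)
        if grams:  # skip sizes longer than the word itself
            fields.append(("start%d" % n, grams[0]))
            fields.append(("gram%d" % n, " ".join(grams)))
            fields.append(("end%d" % n, grams[-1]))
    body = "".join('<field name="%s">%s</field>' % (name, value)
                   for name, value in fields)
    return "<doc>%s</doc>" % body

print(spellcheck_doc("lettuce"))

A batch of these wrapped in <add>...</add> could then be POSTed to
Solr's /update handler to index the whole dictionary.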

Then you query Solr using the same kind of n-gram query the article describes.
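
I haven't worked out exact boosts, but following the article the query
would presumably just OR the misspelling's grams against those fields,
boosting the start and end grams. For a misspelling like "letuce" that
might look something like this (boost values are made up, and it relies
on Solr's default OR operator):

q = start3:let^2 gram3:let gram3:etu gram3:tuc gram3:uce end3:uce^2
    start4:letu^2 gram4:letu gram4:etuc gram4:tuce end4:tuce^2

The top-scoring "word" values would be the suggestions; if needed you
could re-rank the top few by edit distance on the client side, which is
roughly what the Lucene spellchecker does after its own n-gram query.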

Anyway, I haven't done this or tested it, but when I read that article
I thought it would be much easier to implement in Solr, at least for
me, since I already have a database of correct words in Solr.

Kevin
