First, what do you mean "run Lucene/Solr on Hadoop"?

You can use the HdfsDirectoryFactory to store Solr/Lucene
indexes on HDFS. At that point the actual filesystem that
holds the index is transparent to the end user; you just
use Solr as you would if it were using indexes on the local
file system. See:
https://cwiki.apache.org/confluence/display/solr/Running+Solr+on+HDFS
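For illustration, starting Solr with HDFS-backed indexes can be done by
passing a few system properties on startup (a sketch only; the namenode
host/port and paths below are placeholders, and the wiki page above covers
the full set of options and the solrconfig.xml alternative):

```shell
# Illustrative sketch: point Solr's directory factory, lock type,
# data dir, and update log at HDFS. Host, port, and paths are placeholders.
bin/solr start \
  -Dsolr.directoryFactory=HdfsDirectoryFactory \
  -Dsolr.lock.type=hdfs \
  -Dsolr.data.dir=hdfs://namenode:8020/solr/data \
  -Dsolr.updatelog=hdfs://namenode:8020/solr/ulog
```

After that, cores created on that node keep their index and transaction
log on HDFS, and queries/updates work exactly as they would locally.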

If you want to use Map-Reduce to _build_ indexes, see the
MapReduceIndexerTool in the Solr contrib area.
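As a rough sketch of what a MapReduceIndexerTool run looks like (the jar
name, paths, ZooKeeper address, and collection name below are all
placeholders; check the contrib docs for the exact options your version
supports):

```shell
# Illustrative sketch: build index shards with MapReduce from files in
# an HDFS input dir, then merge them into a live SolrCloud collection.
# All hosts, paths, and names are placeholders.
hadoop jar solr-map-reduce-*.jar \
  --morphline-file morphline.conf \
  --zk-host zk1:2181/solr \
  --collection collection1 \
  --output-dir hdfs://namenode:8020/tmp/outdir \
  --go-live \
  hdfs://namenode:8020/indir
```

The --go-live flag is what merges the freshly built shards into the
running collection; without it the indexes are just left in --output-dir.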

Best,
Erick

On Sun, Dec 13, 2015 at 2:50 AM, Dino Chopins <dino.chop...@gmail.com> wrote:
> Hi,
>
> I've tried to figure out how we can run Lucene/SOLR on Hadoop, and found
> several sources. The last pointer is the Apache Blur project, and it is an
> incubating project.
>
> Is there any straightforward implementation of Lucene/SOLR on Hadoop? Or
> best practice of how to incorporate Lucene/SOLR on Hadoop? Thanks.
>
> --
> Regards,
>
> Dino
