Jack,

Sorry, but I don't agree that it's that cut and dried. I've very
successfully worked with terabytes of data in Hadoop that was stored on an
Isilon mounted via NFS, for example. In cases like this, you're using
MapReduce purely for its execution model (which existed long before Hadoop
and HDFS ever did).
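For context, running a MapReduce job against an NFS mount rather than HDFS just means pointing the job at file:// URIs, which Hadoop resolves through its local filesystem implementation. A minimal Hadoop Streaming sketch along those lines, where the /mnt/isilon paths and the mapper/reducer commands are purely illustrative:

```shell
# Run a trivial streaming job over an NFS-mounted share instead of HDFS.
# The file:// scheme tells Hadoop to read and write through the local
# (here, NFS-backed) filesystem; /mnt/isilon is a hypothetical mount point.
hadoop jar "$HADOOP_HOME"/share/hadoop/tools/lib/hadoop-streaming-*.jar \
  -D mapreduce.job.name=nfs-example \
  -input  file:///mnt/isilon/data \
  -output file:///mnt/isilon/out \
  -mapper  /bin/cat \
  -reducer /usr/bin/wc
```

The same idea applies to any InputFormat: as long as the URI scheme maps to a filesystem Hadoop can read, the execution model doesn't care whether the blocks live in HDFS.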


Michael Della Bitta

Applications Developer

o: +1 646 532 3062  | c: +1 917 477 7906

appinions inc.

“The Science of Influence Marketing”

18 East 41st Street

New York, NY 10017

t: @appinions <https://twitter.com/Appinions> | g+:
plus.google.com/appinions
w: appinions.com <http://www.appinions.com/>


On Tue, Jun 25, 2013 at 8:58 AM, Jack Krupansky <j...@basetechnology.com>wrote:

> ???
>
> Hadoop=HDFS
>
> If the data is not in Hadoop/HDFS, just use the normal Solr indexing
> tools, including SolrCell and Data Import Handler, and possibly ManifoldCF.
>
>
> -- Jack Krupansky
>
> -----Original Message----- From: engy.morsy
> Sent: Tuesday, June 25, 2013 8:10 AM
> To: solr-user@lucene.apache.org
> Subject: Re: Solr indexer and Hadoop
>
>
> Thank you Jack. So, I need to convert those nodes holding data to HDFS.
>
>
>
> --
> View this message in context: http://lucene.472066.n3.nabble.com/Solr-indexer-and-Hadoop-tp4072951p4073013.html
> Sent from the Solr - User mailing list archive at Nabble.com.
>
