Hi,

I want to pick up this old thread from the summer (see below). I do understand that Solr is intended for more structured data, and that Nutch is a better fit for less structured content, particularly content fetched by crawlers.

However, Solr's ease of setup and flexible schema make it a viable alternative for enterprise solutions. Indeed, the stated purpose of the project itself is to be an enterprise search platform.

In that respect I agree with the original posting that Solr lacks some of the desired functionality. One can argue that more or less arbitrary data should be structured by the user in a decent application layer. However, an easier-to-use, configurable plugin architecture for filtering and document parsing could make Solr more attractive, and I think many potential users would welcome such additions.

In other words, Solr *could* very well be the right tool for the job in many cases, provided that there is a configurable "pre-Solr" step that can be run on content before it is actually turned into the XML that Solr ingests.
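To make that concrete, here is a rough sketch of the kind of document such a pre-Solr step could emit and then POST to Solr's /update handler. The path, title and content fields simply mirror the "path,title,content" index mentioned in the quoted reply below; the file name and text are made-up placeholders:

  <add>
    <doc>
      <!-- hypothetical output of the pre-Solr text extraction step -->
      <field name="path">/shared/docs/example.pdf</field>
      <field name="title">Example document</field>
      <field name="content">plain text extracted from the binary file</field>
    </doc>
  </add>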

A related design question is to what extent this should be a contract between the XML documents themselves and schema.xml, or whether most of the work should be done in the parser/pre-processing step (i.e. when the XML documents are produced).
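For illustration, one way to lean on the schema side would be a dynamic field declaration in schema.xml along these lines (the "*_t" pattern and the "text" field type are borrowed from the example schema; treat this as a sketch, not a recommendation):

  <!-- any field ending in _t is indexed as text, so the parser is free
       to emit fields such as author_t or filetype_t without schema changes -->
  <dynamicField name="*_t" type="text" indexed="true" stored="true"/>

The alternative is to flatten everything down to a small fixed set of fields in the pre-processing step and keep schema.xml minimal; the dynamic field route keeps the parser simple, while the flattening route keeps the index predictable.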

Your thoughts and feedback are greatly appreciated.

Regards,

Eivind


>> Browsing through the message thread, I tried to find a trail addressing file
>> system crawls. I want to implement an enterprise search over a networked
>> filesystem, crawling all sorts of documents, such as HTML, DOC, PPT and PDF.
>> Nutch provides plugins enabling it to read proprietary formats.
>> Is there support for the same functionality in Solr?

> Solr does not have anything built in for extracting the text out of
> these types of documents.  You could borrow the
> document parsing pieces from Lucene's contrib and Nutch and glue them
> together into your client that speaks to Solr, or perhaps Solr isn't
> the right approach for your needs?   It certainly is possible to add
> these capabilities into Solr, but it would be awkward to have to
> stream binary data into XML documents such that Solr could parse them
> on the server side.

Agreed.  Solr's focus is on indexing "Structured Data".  The support for
dynamic fields certainly allows you to deal with complex structured data,
and somewhat heterogeneous structured data -- but it's still structured
data.  If your goal is to do a lot of crawling of disparate physical
documents, extract the text, and build a "path,title,content" index,
then Nutch is probably your best bet.
