Great minds think alike :-)

Yeah, it is a bit eerie how similar they are, but I think they both aim to solve 
a similar problem (mine started out with the desire to have only one Analyzer 
that I could configure with different filters, believe it or not, and grew from 
there).  The biggest difference I see is that we are search engine agnostic 
(Lucene is just one implementation you could plug in), whereas there is no need 
for Solr to be that.

We use Term Vectors quite a bit; in fact, I was thinking of having a go at a 
patch (so feel free to point me at where to begin)...  Other than that, I 
haven't delved into it as deeply as I would like yet, but that is coming soon.
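
For anyone following along, here is a rough sketch (mine, not from Solr) of how 
we pull term vectors back out with the Lucene 1.9/2.x-era API -- the index path 
and field name are just placeholders, and the field has to have been indexed 
with Field.TermVector.YES:

    import org.apache.lucene.index.IndexReader;
    import org.apache.lucene.index.TermFreqVector;

    public class TermVectorDump {
      public static void main(String[] args) throws Exception {
        // Open an existing index (path is a placeholder)
        IndexReader reader = IndexReader.open("/path/to/index");
        // Term vector for doc 0's "body" field; returns null if the field
        // was not indexed with term vectors enabled
        TermFreqVector tfv = reader.getTermFreqVector(0, "body");
        if (tfv != null) {
          String[] terms = tfv.getTerms();
          int[] freqs = tfv.getTermFrequencies();
          for (int i = 0; i < terms.length; i++) {
            System.out.println(terms[i] + " -> " + freqs[i]);
          }
        }
        reader.close();
      }
    }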

Yonik Seeley <[EMAIL PROTECTED]> wrote:

Grant, I just today got a chance to page through your ApacheCon Lucene
presentation.  I did a double-take when I paged across your "sample
configuration" slide.  Wild how similar some of it looks to Solr's schema!

So since it seems like your stuff has its own schema too, do you see
any features needed for Solr's schema?

-Yonik

=======From Grant's Presentation========
Declare a Tokenizer:
  class="StandardTokenizerWrapper"/>
Declare a Token Filter:
  stopFile="stopwords.dat"/>
Declare an Analyzer:
  test
    standardTokenizer
    stop
Can also use existing Lucene Analyzers
==================================
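
For comparison, my rough reading of the equivalent in Solr's schema.xml looks 
something like this (the field type name and stopword file are placeholders):

    <fieldtype name="text" class="solr.TextField">
      <analyzer>
        <!-- tokenizer first, then the filter chain: same shape as the slide above -->
        <tokenizer class="solr.StandardTokenizerFactory"/>
        <filter class="solr.StopFilterFactory" words="stopwords.txt"/>
      </analyzer>
    </fieldtype>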



----------------------------------------------
Grant Ingersoll
http://www.grantingersoll.com
                