I am doing this. Let's assume that you can make a query so that
each row of the result is one document that you want to index.
Create an XML document in Solr update format in a StringBuilder,
then POST that to the Solr instance. After about 1000 documents,
post a "<commit/>". After all the rows, post a "<commit/>", then
an "<optimize/>".
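
For reference, the update body is plain XML along these lines (field names here are illustrative; your schema defines the real ones):

```xml
<add>
  <doc>
    <field name="id">doc-1</field>
    <field name="text">first document body</field>
  </doc>
  <doc>
    <field name="id">doc-2</field>
    <field name="text">second document body</field>
  </doc>
</add>
```

The `<commit/>` and `<optimize/>` messages are each posted as their own one-element body.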

My documents are rather small, so I batch twenty of them in a
single post. That runs faster than one document per post.
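
A minimal sketch of the loop, assuming a stock Solr install at the default URL and illustrative field names (`id`, `text`); the real code would read rows from the DB query instead of taking them as an argument, and a production version should use a proper XML library or HttpClient rather than hand-built strings:

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class SolrPoster {
    // Assumed update URL; adjust host, port, and path for your install.
    static final String UPDATE_URL = "http://localhost:8983/solr/update";

    // Escape the characters that are special in XML text nodes.
    static String escape(String s) {
        return s.replace("&", "&amp;").replace("<", "&lt;").replace(">", "&gt;");
    }

    // Build one <doc> element; the field names are illustrative.
    static String docXml(String id, String text) {
        return "<doc><field name=\"id\">" + escape(id)
             + "</field><field name=\"text\">" + escape(text) + "</field></doc>";
    }

    // POST a body of Solr update XML; fail loudly on a non-200 response.
    static void post(String xml) throws Exception {
        HttpURLConnection conn =
            (HttpURLConnection) new URL(UPDATE_URL).openConnection();
        conn.setDoOutput(true);
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "text/xml; charset=UTF-8");
        try (OutputStream out = conn.getOutputStream()) {
            out.write(xml.getBytes(StandardCharsets.UTF_8));
        }
        int status = conn.getResponseCode();
        if (status != 200) throw new RuntimeException("Solr returned HTTP " + status);
        conn.disconnect();
    }

    // Batch rows into one <add> per POST, commit periodically,
    // then commit and optimize at the end.
    static void index(Iterable<String[]> rows) throws Exception {
        StringBuilder batch = new StringBuilder("<add>");
        int inBatch = 0, sinceCommit = 0;
        for (String[] row : rows) {
            batch.append(docXml(row[0], row[1]));
            if (++inBatch == 20) {              // twenty documents per POST
                post(batch.append("</add>").toString());
                batch = new StringBuilder("<add>");
                inBatch = 0;
            }
            if (++sinceCommit == 1000) {        // commit every ~1000 documents
                post("<commit/>");
                sinceCommit = 0;
            }
        }
        if (inBatch > 0) post(batch.append("</add>").toString());
        post("<commit/>");
        post("<optimize/>");
    }
}
```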

The whole thing is about two pages of Java. It will get bigger
when I start checking the return codes from the commits.
I would share it, but it uses a custom DB layer, so it wouldn't
work for anyone else.

wunder
==
Walter Underwood
Search Guru, Netflix

On 11/24/06 8:14 AM, "Nicolas St-Laurent" <[EMAIL PROTECTED]> wrote:

> Thank you Bertrand.
> 
> The documentation on Solr is still sparse. I've already looked at
> SolrResources and found some ideas, but not exactly what I need. When my
> solution works, I will document it in the wiki.
> 
> Nicolas
> 
> On 11/24/06 2:48 AM, Bertrand Delacretaz wrote:
> 
>> On 11/23/06, Nicolas St-Laurent <[EMAIL PROTECTED]> wrote:
>> 
>>> ...I index huge Oracle tables with Lucene with a custom made
>>> indexer/search engine. But I would prefer to use Solr instead...
>> 
>> Instead of using Lucene's API directly, with Solr you'll have to add
>> your documents to the index using HTTP POST messages.
>> 
>> There are a few Java clients for Solr floating around on the wiki and
>> in Jira IIRC, but all you need is a POST; any way of doing it is fine
>> (Jakarta HttpClient, for example).
>> 
>> See http://wiki.apache.org/solr/SolrResources for more info.
>> 
>> -Bertrand

