Recently started looking into Solr to solve a problem created before my
time.  We have a dataset of 390,000,000+ records that had a search
written for it using a simple query.  The problem is that the dataset
needs additional indices to keep operating, and the DBA says no go; the
dataset is too large.

I quickly came to the conclusion that they need a search engine,
preferably one that can return some data.

My problem lies in the initial index creation.  Using the
DataImportHandler with JDBC to import 390M records will, I am guessing,
take far longer than I would like and use up quite a few resources.

Is there any way to chunk this data with the DataImportHandler?  If not,
I will just write some code to handle the initial import.
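By "chunk" I mean something along these lines: split the table's id range
into bounded slices and import one slice at a time, so no single query has
to stream all 390M rows.  A minimal sketch, assuming the table has a
numeric primary key `id` (the table name and query here are hypothetical):

```python
def id_chunks(min_id, max_id, chunk_size):
    """Yield inclusive (start, end) id ranges covering [min_id, max_id]."""
    start = min_id
    while start <= max_id:
        end = min(start + chunk_size - 1, max_id)
        yield (start, end)
        start = end + 1


# Each chunk would map to one bounded SQL query, e.g.
#   SELECT ... FROM records WHERE id BETWEEN %s AND %s
# whose rows get posted to Solr's /update handler as one batch,
# with a single commit issued after the final chunk.
if __name__ == "__main__":
    for start, end in id_chunks(1, 390_000_000, 50_000_000):
        print("import ids", start, "through", end)
```

Each slice can be retried or parallelized independently, which is the main
appeal over one giant JDBC result set.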

Thanks

-- 
Eric Myers
