Post your docs in batches of 1000. Create a:

 List<SolrInputDocument> docs

Add 1000 docs to it, call client.add(docs), then clear the list.

Repeat until all 40m are indexed.
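A minimal sketch of that loop, assuming a Solr core at http://localhost:8983/solr/docs and a filename convention like "42_title.txt" (both the URL and the parsing rule are assumptions -- adjust to your setup; listAllFiles() is a hypothetical stand-in for walking your filesystem):

```java
import java.util.ArrayList;
import java.util.List;
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class BulkIndexer {
    static final int BATCH_SIZE = 1000;

    // The "split values from filename" step from the question,
    // shown here as a simple underscore split.
    static SolrInputDocument fromFilename(String filename) {
        String[] parts = filename.split("_");
        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", parts[0]);
        doc.addField("title", parts.length > 1 ? parts[1] : filename);
        return doc;
    }

    public static void main(String[] args) throws Exception {
        SolrClient client = new HttpSolrClient.Builder(
                "http://localhost:8983/solr/docs").build();
        List<SolrInputDocument> docs = new ArrayList<>(BATCH_SIZE);
        for (String filename : listAllFiles()) {  // iterate the 40m files
            docs.add(fromFilename(filename));
            if (docs.size() == BATCH_SIZE) {      // flush every 1000 docs
                client.add(docs);
                docs.clear();                     // reuse the same list
            }
        }
        if (!docs.isEmpty()) client.add(docs);    // flush the remainder
        client.commit();                          // one commit at the end
        client.close();
    }

    // Hypothetical: replace with a real directory walk.
    static Iterable<String> listAllFiles() {
        return List.of();
    }
}
```

There is no need to sleep between batches; the loop itself is cheap, and Solr applies backpressure by blocking the add() call while it processes each batch.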

Upayavira

On Wed, Aug 5, 2015, at 05:07 PM, Mugeesh Husain wrote:
> The filesystem holds about 40 million documents, so the loop will iterate
> 40 million times. Can SolrJ handle a loop of 40m iterations? (Before
> indexing I have to split values from each filename and do some processing,
> then index to Solr.)
> 
> Can it index all 40m continuously, or do I have to sleep for some interval
> in between?
> 
> Will it take the same time as HTTP or bin/post?
> 
> 
> 
> --
> View this message in context:
> http://lucene.472066.n3.nabble.com/Can-Apache-Solr-Handle-TeraByte-Large-Data-tp3656484p4221060.html
> Sent from the Solr - User mailing list archive at Nabble.com.