On 26 December 2012 15:00, bsargurunathan <bsargurunat...@gmail.com> wrote:
> Hi Everyone,
>
> In Solr indexing, I need to index millions of millions of records in a
> single run from an XML file.
> While doing the indexing, I construct the XML file and pass it to Solr.
> Right now I am controlling the record count, and based on that count the
> XML is created and indexed into Solr.
> But the requirement is that the indexing should happen continuously; if I
> do not control the count, I get a SocketException.
>
> So what is the exact solution for this? How can I overcome it?
Sorry, but your question is not very clear. Maybe you could rephrase it?

Yes, it is a good idea to break a large number of records up into multiple XML files. You will need to build an indexing layer that does this batching, and that also recovers gracefully from network errors. This should not be too difficult.

Regards,
Gora

P.S. Are you really serious about millions of millions, i.e., of the order of 10^12, records? If so, you will need to worry about many more things in order to ensure scalability.
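For what it is worth, here is a minimal sketch of such a batching layer in Python. It assumes records arrive as (id, title) pairs and that Solr's standard XML update endpoint lives at http://localhost:8983/solr/update; the batch size, field names, and retry count are illustrative placeholders you would tune for your setup, not part of any official recipe.

```python
# Sketch of a batching indexer: split records into bounded batches,
# render each batch as a Solr <add> XML document, and POST it with
# simple retry on transient network errors.
import urllib.request
from itertools import islice
from xml.sax.saxutils import escape

SOLR_UPDATE_URL = "http://localhost:8983/solr/update"  # assumed endpoint
BATCH_SIZE = 1000  # keep each POST small enough to avoid socket errors


def batches(records, size=BATCH_SIZE):
    """Yield successive lists of at most `size` records."""
    it = iter(records)
    while True:
        chunk = list(islice(it, size))
        if not chunk:
            return
        yield chunk


def to_add_xml(batch):
    """Render one batch of (id, title) pairs as a Solr <add> document."""
    docs = []
    for rec_id, title in batch:
        docs.append(
            '<doc><field name="id">%s</field>'
            '<field name="title">%s</field></doc>'
            % (escape(str(rec_id)), escape(title))
        )
    return "<add>%s</add>" % "".join(docs)


def post_batch(xml, retries=3):
    """POST one batch to Solr, retrying on network errors."""
    for attempt in range(retries):
        try:
            req = urllib.request.Request(
                SOLR_UPDATE_URL,
                data=xml.encode("utf-8"),
                headers={"Content-Type": "text/xml; charset=utf-8"},
            )
            urllib.request.urlopen(req).read()
            return
        except OSError:
            if attempt == retries - 1:
                raise  # give up after the last retry
```

The point is that the record count is handled once, in `batches()`, so the producer can keep emitting records continuously while each POST stays a bounded size; any real version would also send a `<commit/>` at suitable intervals.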