The size of the index is about 300 GB. I am seeing the following error in
the logs:

java.net.SocketTimeoutException: Read timed out
        at java.net.SocketInputStream.socketRead0(Native Method)
        at java.net.SocketInputStream.read(SocketInputStream.java:129)
        at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
        at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
        at org.apache.commons.httpclient.ChunkedInputStream.getChunkSizeFromInputStream(ChunkedInputStream.java:250)
        at org.apache.commons.httpclient.ChunkedInputStream.nextChunk(ChunkedInputStream.java:221)
        at org.apache.commons.httpclient.ChunkedInputStream.read(ChunkedInputStream.java:176)
        at java.io.FilterInputStream.read(FilterInputStream.java:116)
        at org.apache.commons.httpclient.AutoCloseInputStream.read(AutoCloseInputStream.java:108)
        at org.apache.solr.common.util.FastInputStream.refill(FastInputStream.java:68)
        at org.apache.solr.common.util.FastInputStream.read(FastInputStream.java:97)
        at org.apache.solr.common.util.FastInputStream.readFully(FastInputStream.java:122)
        at org.apache.solr.common.util.FastInputStream.readFully(FastInputStream.java:117)
        at org.apache.solr.handler.SnapPuller$FileFetcher.fetchPackets(SnapPuller.java:943)
        at org.apache.solr.handler.SnapPuller$FileFetcher.fetchFile(SnapPuller.java:904)
        at org.apache.solr.handler.SnapPuller.downloadIndexFiles(SnapPuller.java:545)
        at org.apache.solr.handler.SnapPuller.fetchLatestIndex(SnapPuller.java:295)
        at org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:268)
        at org.apache.solr.handler.ReplicationHandler$1.run(ReplicationHandler.java:149)
May 14, 2012 1:45:46 PM org.apache.solr.handler.ReplicationHandler doFetch
SEVERE: SnapPull failed
org.apache.solr.common.SolrException: Unable to download _vvyv.fdt completely. Downloaded 200278016!=208644265
        at org.apache.solr.handler.SnapPuller$FileFetcher.cleanup(SnapPuller.java:1038)
        at org.apache.solr.handler.SnapPuller$FileFetcher.fetchFile(SnapPuller.java:918)
        at org.apache.solr.handler.SnapPuller.downloadIndexFiles(SnapPuller.java:545)
        at org.apache.solr.handler.SnapPuller.fetchLatestIndex(SnapPuller.java:295)
        at org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:268)
        at org.apache.solr.handler.ReplicationHandler$1.run(ReplicationHandler.java:149)


The replication does start, but it is never able to complete, and then it
restarts from the beginning.
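
For reference, the read timeout during the pull could perhaps be worked
around by raising the replication HTTP timeouts on the slave. A minimal
sketch of the slave-side handler config (the masterUrl and the values are
placeholders, not from this setup; the httpConnTimeout/httpReadTimeout
parameter names assume the Solr 3.x SnapPuller, in milliseconds):

```xml
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="slave">
    <!-- placeholder master URL -->
    <str name="masterUrl">http://master:8983/solr/replication</str>
    <str name="pollInterval">00:05:00</str>
    <!-- connect timeout: 5s; read timeout: 60s (illustrative values) -->
    <str name="httpConnTimeout">5000</str>
    <str name="httpReadTimeout">60000</str>
  </lst>
</requestHandler>
```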

Regards,
Rohit
Mobile: +91-9901768202
About Me: http://about.me/rohitg


-----Original Message-----
From: Erick Erickson [mailto:erickerick...@gmail.com] 
Sent: 14 May 2012 18:00
To: solr-user@lucene.apache.org
Subject: Re: Replicating a large solr index

What do your logs show? Solr replication should be robust.
How large is "large"?

You might review:
http://wiki.apache.org/solr/UsingMailingLists

Best
Erick

On Mon, May 14, 2012 at 3:11 AM, Rohit <ro...@in-rev.com> wrote:
> Hi,
>
>
>
> I have a large Solr index which needs to be replicated. The replication
> starts but then keeps breaking and restarting from 0. Is there another
> way to achieve this? I was thinking of using scp to copy the index from
> the master to the slave and then enabling replication; will this work?
>
>
>
>
> Regards,
>
> Rohit
>
>
>

