Hi Erick,

Yes, I have enabled the following settings:
<str name="compression">internal</str>
<!-- The following values are used when the slave connects to the master
     to download the index files. Default values implicitly set as 5000ms
     and 10000ms respectively. The user DOES NOT need to specify these
     unless the bandwidth is extremely low or if there is an extremely
     high latency -->
<str name="httpConnTimeout">5000</str>
<str name="httpReadTimeout">10000</str>

Will try with higher timeouts. I also tried scp and the link didn't break
even once; I was able to copy the entire ~300GB of index files, so I'm not
sure this is a network problem.

Regards,
Rohit
Mobile: +91-9901768202
About Me: http://about.me/rohitg


-----Original Message-----
From: Erick Erickson [mailto:erickerick...@gmail.com] 
Sent: 14 May 2012 20:22
To: solr-user@lucene.apache.org
Subject: Re: Replicating a large solr index

Have you tried modifying the timeout parameters? See:
http://wiki.apache.org/solr/SolrReplication,
the "Slave" section..

Best
Erick

On Mon, May 14, 2012 at 10:30 AM, Rohit <ro...@in-rev.com> wrote:
> The size of the index is about 300GB; I am seeing the following error in
> the logs:
>
> java.net.SocketTimeoutException: Read timed out
>        at java.net.SocketInputStream.socketRead0(Native Method)
>        at java.net.SocketInputStream.read(SocketInputStream.java:129)
>        at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>        at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
>        at org.apache.commons.httpclient.ChunkedInputStream.getChunkSizeFromInputStream(ChunkedInputStream.java:250)
>        at org.apache.commons.httpclient.ChunkedInputStream.nextChunk(ChunkedInputStream.java:221)
>        at org.apache.commons.httpclient.ChunkedInputStream.read(ChunkedInputStream.java:176)
>        at java.io.FilterInputStream.read(FilterInputStream.java:116)
>        at org.apache.commons.httpclient.AutoCloseInputStream.read(AutoCloseInputStream.java:108)
>        at org.apache.solr.common.util.FastInputStream.refill(FastInputStream.java:68)
>        at org.apache.solr.common.util.FastInputStream.read(FastInputStream.java:97)
>        at org.apache.solr.common.util.FastInputStream.readFully(FastInputStream.java:122)
>        at org.apache.solr.common.util.FastInputStream.readFully(FastInputStream.java:117)
>        at org.apache.solr.handler.SnapPuller$FileFetcher.fetchPackets(SnapPuller.java:943)
>        at org.apache.solr.handler.SnapPuller$FileFetcher.fetchFile(SnapPuller.java:904)
>        at org.apache.solr.handler.SnapPuller.downloadIndexFiles(SnapPuller.java:545)
>        at org.apache.solr.handler.SnapPuller.fetchLatestIndex(SnapPuller.java:295)
>        at org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:268)
>        at org.apache.solr.handler.ReplicationHandler$1.run(ReplicationHandler.java:149)
> May 14, 2012 1:45:46 PM org.apache.solr.handler.ReplicationHandler doFetch
> SEVERE: SnapPull failed
> org.apache.solr.common.SolrException: Unable to download _vvyv.fdt completely. Downloaded 200278016!=208644265
>        at org.apache.solr.handler.SnapPuller$FileFetcher.cleanup(SnapPuller.java:1038)
>        at org.apache.solr.handler.SnapPuller$FileFetcher.fetchFile(SnapPuller.java:918)
>        at org.apache.solr.handler.SnapPuller.downloadIndexFiles(SnapPuller.java:545)
>        at org.apache.solr.handler.SnapPuller.fetchLatestIndex(SnapPuller.java:295)
>        at org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:268)
>        at org.apache.solr.handler.ReplicationHandler$1.run(ReplicationHandler.java:149)
>
>
> The replication does start, but it is never able to complete, and then it
> restarts again from the beginning.
>
> Regards,
> Rohit
> Mobile: +91-9901768202
> About Me: http://about.me/rohitg
>
>
> -----Original Message-----
> From: Erick Erickson [mailto:erickerick...@gmail.com]
> Sent: 14 May 2012 18:00
> To: solr-user@lucene.apache.org
> Subject: Re: Replicating a large solr index
>
> What do your logs show? Solr replication should be robust.
> How large is "large"?
>
> You might review:
> http://wiki.apache.org/solr/UsingMailingLists
>
> Best
> Erick
>
> On Mon, May 14, 2012 at 3:11 AM, Rohit <ro...@in-rev.com> wrote:
>> Hi,
>>
>>
>>
>> I have a large Solr index which needs to be replicated. Solr
>> replication starts but then keeps breaking and starting again from 0.
>> Is there another way to achieve this? I was thinking of using scp to
>> copy the index from master to slave and then enable replication --
>> will this work?
>>
>>
>>
>>
>> Regards,
>>
>> Rohit
>>
>>
>>
>
>

