On 9/3/2010 12:37 PM, Jonathan Rochkind wrote:
> Is the OS disk cache something you configure, or something the OS just does
> automatically based on available free RAM? Or does it depend on the exact OS?
> Thinking about the OS disk cache is new to me. Thanks for any tips.

Depends on what you w
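As general background (not from this thread): on Linux the kernel uses otherwise-idle RAM as disk cache automatically, with nothing to configure. A minimal way to watch it on a Linux slave:

    # Report memory in megabytes. The "cached" column is the OS disk cache;
    # it grows to fill free RAM and shrinks on its own whenever the JVM or
    # other processes ask for memory.
    free -m

With 10G of a 12G slave handed to the JVM, only a couple of gigabytes remain to cache a 25-30G index.
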
From: Shawn Heisey [s...@elyograg.org]
Sent: Friday, September 03, 2010 1:46 PM
To: solr-user@lucene.apache.org
Subject: Re: Solr crawls during replication
On 9/2/2010 9:31 AM, Mark wrote:
> Thanks for the suggestions. Our slaves have 12G with 10G dedicated to
> the JVM... too much?
>
> Are the rsync snappuller features still available in 1.4.1? I may try
> that to see if it helps. Configuration of the switches may also be possible.
>
> Also, would you mind expl

Yes, the rsync scripts are still there, and they still work fine. It
definitely helps to be a Unix shell wiz.

You would add an option to the rsync call in the scripts that does
rsync throttling.

Rsync is just a standard copying tool in the SSH tool suite. It's 12
years old and works quite well.
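
As a sketch of what that throttling looks like (the host, port, module name, and snapshot paths below are placeholders, not the actual values from the scripts), rsync's --bwlimit option takes a cap in kilobytes per second:

    # Pull a snapshot from the master, capped at roughly 8 MB/s so the
    # slave's disks and network aren't saturated during replication.
    rsync -av --delete --bwlimit=8192 \
        rsync://master-host:18983/solr/snapshot.20100903120000/ \
        /var/solr/data/snapshot.20100903120000/

The scripts build the command themselves, so the option goes on the rsync line inside snappuller rather than on your own command line.
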
On 8/6/10 5:03 PM, Chris Hostetter wrote:
: We have an index around 25-30G w/ 1 master and 5 slaves. We perform
: replication every 30 mins. During replication the disk I/O obviously shoots up
: on the slaves to the point where all requests routed to that slave take a
: really long time... sometimes to the point of timing out.
:
: Is the