RE: How to accommodate huge data

2014-08-28 Thread Toke Eskildsen
kokatnur.vi...@gmail.com [kokatnur.vi...@gmail.com] On Behalf Of Ethan [eh198...@gmail.com] wrote:
> Before adding swap space, nodes used to shut down due to OOM or crash
> after 2-5 minutes of uptime. By bumping swap space the server came up
> cleanly.
** We have 7GB of heap. I'll need to ask adm…
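For reference, adding swap space on Linux (the mitigation the thread describes) typically looks like the sketch below; the 16G size and `/swapfile` path are illustrative, not from the thread. Note that swap only hides the OOM: swapped-out pages still cost disk I/O, so it does not fix the underlying RAM shortfall.

```shell
# Create and enable a swap file (size and path are illustrative).
sudo fallocate -l 16G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

# Verify the new swap is active.
swapon --show
```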

Re: How to accommodate huge data

2014-08-28 Thread Ethan
On Thu, Aug 28, 2014 at 11:12 AM, Shawn Heisey wrote:
> On 8/28/2014 11:57 AM, Ethan wrote:
> > Our index size is 110GB and growing, crossed RAM capacity of 96GB, and we
> > are seeing a lot of disk and network IO resulting in huge latencies and
> > instability (one of the servers used to shut down…

RE: How to accommodate huge data

2014-08-28 Thread Toke Eskildsen
kokatnur.vi...@gmail.com [kokatnur.vi...@gmail.com] On Behalf Of Ethan [eh198...@gmail.com] wrote:
> Our index size is 110GB and growing, crossed RAM capacity of 96GB, and we
> are seeing a lot of disk and network IO resulting in huge latencies and
> instability (one of the servers used to shut down…

re: How to accommodate huge data

2014-08-28 Thread Chris Morley
Look into SolrCloud.
From: "Ethan"
Sent: Thursday, August 28, 2014 1:59 PM
To: "solr-user"
Subject: How to accommodate huge data
Our index size is 110GB and growing, crossed RAM capacity of 96GB, and we are seeing a lot of…
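The SolrCloud suggestion amounts to sharding the 110GB index across several machines so each node's slice fits in its RAM. A minimal sketch of creating such a collection follows; the collection name, shard count, and replication factor are illustrative assumptions, not values from the thread.

```shell
# Assumes Solr nodes already started in cloud mode against a ZooKeeper
# ensemble, e.g.:
#   bin/solr start -cloud -z zk1:2181,zk2:2181,zk3:2181

# Split the index across 3 shards (with 2 copies of each shard for
# fault tolerance), so each node holds roughly a third of the data.
bin/solr create -c bigindex -shards 3 -replicationFactor 2
```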

Re: How to accommodate huge data

2014-08-28 Thread Shawn Heisey
On 8/28/2014 11:57 AM, Ethan wrote:
> Our index size is 110GB and growing, crossed RAM capacity of 96GB, and we
> are seeing a lot of disk and network IO resulting in huge latencies and
> instability (one of the servers used to shut down and stay in recovery mode
> when restarted). Our admin added sw…

How to accommodate huge data

2014-08-28 Thread Ethan
Our index size is 110GB and growing, crossed RAM capacity of 96GB, and we are seeing a lot of disk and network IO resulting in huge latencies and instability (one of the servers used to shut down and stay in recovery mode when restarted). Our admin added swap space and that seemed to have mitigated the…
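The sizing problem in this message can be sketched numerically. Lucene leans on the OS page cache to keep the index hot, so once the index outgrows the RAM left over after the JVM heap, reads spill to disk and latency climbs. The index, RAM, and heap figures below come from the thread; the OS-overhead figure is an assumption for illustration.

```python
# Rough memory budget for a single Solr node (sizes in GB).
index_size = 110   # on-disk index (from the thread)
total_ram = 96     # physical RAM (from the thread)
jvm_heap = 7       # Solr heap (from the thread)
os_overhead = 2    # OS and other processes -- an assumed figure

# RAM the OS can devote to caching index files, and how far short it falls.
page_cache = total_ram - jvm_heap - os_overhead
shortfall = index_size - page_cache

print(f"RAM available for page cache: {page_cache} GB")   # 87 GB
print(f"Index exceeds cache by:       {shortfall} GB")    # 23 GB
```

With roughly 23GB of the index uncacheable on one box, every query risks cold disk reads, which is consistent with the I/O-driven latency the thread reports and why sharding across machines helps.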