  PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+ COMMAND
 4250 root  20   0  129g  14g 1.9g S  2.0 21.3 17:40.61 java
--
View this message in context:
http://lucene.472066.n3.nabble.com/SolrCloud-OOM-Problem-tp4152389p4152753.html
Sent from the Solr - User mailing list archive at Nabble.com.
On 8/13/2014 5:42 AM, tuxedomoon wrote:
> Have you used a queue to intercept queries and if so what was your
> implementation? We are indexing huge amounts of data from 7 SolrJ instances
> which run independently, so there's a lot of concurrent indexing.
On my setup, the queries come from a Java ...
On 8/13/2014 5:34 AM, tuxedomoon wrote:
> Great info. Can I ask how much data you are handling with that 6G or 7G
> heap?
My dev server is the one with the 7GB heap. My production servers only
handle half the index shards, so they have the smaller heap. Here is
the index size info from my dev server:
Have you used a queue to intercept queries and if so what was your
implementation? We are indexing huge amounts of data from 7 SolrJ instances
which run independently, so there's a lot of concurrent indexing.
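The reply above is cut off, but the usual shape of such a queue is a bounded `BlockingQueue` with the 7 feeds as producers and a single consumer that batches documents before sending them to Solr. A minimal sketch, assuming a stand-in `sendBatch` in place of the real `SolrClient.add()` call; the class name, queue size, and batch size here are all illustrative, not from the thread:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.atomic.AtomicInteger;

public class QueuedIndexer {
    // Bounded queue: producers block when the indexer falls behind,
    // throttling the 7 feeds instead of hammering Solr concurrently.
    static final BlockingQueue<String> QUEUE = new ArrayBlockingQueue<>(1000);
    static final String POISON = "__STOP__";
    static final AtomicInteger indexed = new AtomicInteger();

    // Stand-in for solrClient.add(batch); batching cuts per-request overhead.
    static void sendBatch(List<String> batch) {
        indexed.addAndGet(batch.size());
    }

    public static void main(String[] args) throws InterruptedException {
        final int producers = 7, docsPerProducer = 100;

        Thread consumer = new Thread(() -> {
            List<String> batch = new ArrayList<>();
            int stops = 0;
            try {
                while (stops < producers) {
                    String doc = QUEUE.take();
                    if (doc.equals(POISON)) { stops++; continue; }
                    batch.add(doc);
                    if (batch.size() >= 50) { sendBatch(batch); batch.clear(); }
                }
                if (!batch.isEmpty()) sendBatch(batch);  // flush the tail
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();

        List<Thread> feeds = new ArrayList<>();
        for (int p = 0; p < producers; p++) {
            final int id = p;
            Thread t = new Thread(() -> {
                try {
                    for (int i = 0; i < docsPerProducer; i++)
                        QUEUE.put("doc-" + id + "-" + i);
                    QUEUE.put(POISON);  // one poison pill per producer
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            t.start();
            feeds.add(t);
        }
        for (Thread t : feeds) t.join();
        consumer.join();
        System.out.println("indexed=" + indexed.get());
    }
}
```

The point is that Solr then sees one steady, batched stream rather than 7 bursty ones, which keeps transient heap pressure down during heavy indexing.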
Great info. Can I ask how much data you are handling with that 6G or 7G
heap?
On 8/12/2014 3:12 PM, tuxedomoon wrote:
> I have modified my instances to m2.4xlarge 64-bit with 68.4G memory. Hate to
> ask this but can you recommend Java memory and GC settings for 90G data and
> the above memory? Currently I have
> CATALINA_OPTS="${CATALINA_OPTS} -XX:NewSize=1536m -XX:MaxNewSize=1
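For reference, a sketch of what such settings might look like on a 68G box — not a drop-in answer from the thread. The flags are standard HotSpot CMS-era options (matching Solr 4.x practice); the 8g heap and log path are assumptions, chosen to leave most RAM to the OS disk cache for the 90G index:

```shell
# Sketch only: modest fixed heap, CMS collector, GC logging enabled.
CATALINA_OPTS="${CATALINA_OPTS} \
  -Xms8g -Xmx8g \
  -XX:NewRatio=3 \
  -XX:+UseConcMarkSweepGC -XX:+UseParNewGC \
  -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly \
  -XX:+PrintGCDetails -Xloggc:/var/log/solr-gc.log"
```

Setting -Xms equal to -Xmx avoids resize pauses, and keeping the heap small on purpose leaves the remaining ~60G for the page cache, which matters more than heap for Lucene query speed.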
On Tue, 2014-08-12 at 01:27 +0200, dancoleman wrote:
> My SolrCloud of 3 shard / 3 replicas is having a lot of OOM errors. Here are
> some specs on my setup:
>
> hosts: all are EC2 m1.large with 250G data volumes
Is that 3 (each running a primary and a replica shard) or 6 instances?
> 90G is correct, each host is currently holding that much data.
>
> Are you saying that 32GB to 96GB would be needed for each host? Assuming
> we did not add more shards that is.
If you want good performance and enough memory to give Solr the heap it
will need, yes. Lucene (the search API that Solr is built on) ...
90G is correct, each host is currently holding that much data.
Are you saying that 32GB to 96GB would be needed for each host? Assuming
we did not add more shards that is.
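The 32GB-to-96GB range follows from "heap plus OS disk cache": the low end assumes only the hot fraction of the index stays cached, the high end assumes the whole index fits in RAM. A rough worked version of that arithmetic — the 8G heap and the 25% hot-fraction are assumptions, not numbers from the thread:

```java
public class RamSizing {
    // Lower bound: heap plus page cache for a "hot" fraction of the index.
    static double minGB(double indexGB, double heapGB) {
        return heapGB + 0.25 * indexGB;
    }

    // Comfortable: heap plus the entire on-disk index cached in RAM.
    static double idealGB(double indexGB, double heapGB) {
        return heapGB + indexGB;
    }

    public static void main(String[] args) {
        double indexGB = 90.0;  // per-host index size from the thread
        double heapGB = 8.0;    // assumed Solr heap
        System.out.printf("min~%.0fGB ideal~%.0fGB%n",
                minGB(indexGB, heapGB), idealGB(indexGB, heapGB));
    }
}
```

With those assumptions the bounds come out near 31GB and 98GB per host, which is why adding shards (spreading the 90G over more hosts) is the other lever besides buying RAM.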
On 8/11/2014 5:27 PM, dancoleman wrote:
> My SolrCloud of 3 shard / 3 replicas is having a lot of OOM errors. Here are
> some specs on my setup:
>
> hosts: all are EC2 m1.large with 250G data volumes
> documents: 120M total
> zookeeper: 5 external t1.micros
> Linux "top" command output with no
:80/solr/lighting_products/&df=text&fl=uuid_s,shortid_s,contenttype_s,contentnamespace_s,secondarynamespaces_s_mv,content_name_s,urlkey_s,content_description_t,score&fl=id&start=0&q=spike&ie=UTF-8&bf=recip(ms(NOW/HOUR,updated_dt),3.16e-11,1,1)&q.op=AND&is