>
> Should we change some of them? The mergeScheduler class is empty.
>
> When I go to Core Admin and select our core, I see:
>
> maxCacheMB=48.0 maxMergeSizeMB=4.0
>
> Is that ok, or are the values too low?
>
> Best,
> Pavel
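
For what it's worth, if no mergeScheduler class is configured, Solr falls back
to Lucene's default (ConcurrentMergeScheduler), and 48.0 / 4.0 look like the
stock NRTCachingDirectory defaults rather than values someone turned down. If
you did want to pin the scheduler explicitly, the solrconfig.xml form would be
roughly this (the class shown is simply the Lucene default, not a tuning
suggestion):

    <indexConfig>
      <!-- ConcurrentMergeScheduler is the Lucene default merge scheduler -->
      <mergeScheduler class="org.apache.lucene.index.ConcurrentMergeScheduler"/>
    </indexConfig>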
bq: It does NOT optimize multiple replicas or shards in parallel.
This behavior was changed in 4.10 though, see:
https://issues.apache.org/jira/browse/SOLR-6264
So with 5.0, Pavel is seeing the result of that JIRA, I bet.
I have to agree with Shawn: the optimization step should proceed invisibly.
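
For reference, the kind of single-core optimize request being discussed would
look something like the line below; the host, port, and core name are just
placeholders, not Pavel's actual setup:

    # ask one core to optimize, with distrib=false so the request is not
    # forwarded to other shards/replicas
    curl 'http://node1:8983/solr/core1/update?optimize=true&distrib=false'

As Pavel notes below, though, in his setup the other node still ran an
optimize even with distrib=false.
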
On 3/25/2015 9:08 AM, pavelhladik wrote:
> Our data change frequently, so that's why there are so many deletedDocs.
> The optimized core takes around 50GB on disk, we are now at almost 100GB,
> and I'm looking for the best way to optimize this huge core without
> downtime. I know optimization working in [...] proxy forward requests to
> node2 and optimize cores on node1.
>
> But when I run optimize on node2, node1 is doing the optimization as well,
> even if I use "distrib=false" with curl.
>
> Can you please recommend an architecture for optimizing without downtime?
> Many thanks.