A few weeks ago, optimization in SolrCloud was discussed in this thread:
http://lucene.472066.n3.nabble.com/SolrCloud-optimizing-a-core-triggers-optimization-of-another-td4097499.html#a4098020

That thread covered distributed optimization across a whole collection.
My use case requires manually running an optimize every week or so,
because I delete by query often, the deleted-docs count grows very
large, and the only way to reclaim that space is to optimize.
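
To give an idea, the deletes look roughly like this (the field and query
here are just an example, not my real schema):

curl 'http://localhost:8983/solr/collection1/update?commit=true' \
  -H 'Content-Type: text/xml' \
  --data-binary '<delete><query>timestamp:[* TO NOW-30DAYS]</query></delete>'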

Since I have a pretty steady high load, I can't do it overnight, so I was
thinking of doing it one core at a time, i.e. optimizing shard1_replica1,
then shard1_replica2, and so on, using:

curl
'http://localhost:8983/solr/collection1_shard1_replica1/update?optimize=true&distrib=false'
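
Something like the following sketch is what I had in mind (assuming for
simplicity that all cores are reachable on one host; the core names are
just from my example layout, and since the optimize request blocks until
the merge finishes, each core would be done before the next one starts):

#!/bin/bash
# Optimize one core at a time, so only one replica is merging at any moment.
CORES="collection1_shard1_replica1 collection1_shard1_replica2 collection1_shard2_replica1 collection1_shard2_replica2"
for core in $CORES; do
    echo "Optimizing $core ..."
    curl -s "http://localhost:8983/solr/$core/update?optimize=true&distrib=false&waitSearcher=true"
    echo "Finished $core"
done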

My question is: how would this affect the performance of the system? I
assume all queries routed to the replica being optimized would be very
slow.

Would there be any problems if one replica is optimized and another is not?
Has anybody tried something like this? Any tips or stories?
Thank you!



-----
Thanks,
Michael