Can someone please advise considering my previous answer?
I have an existing SolrCloud 4.4 configured with ZooKeeper.
The current setup is 3 shards; each shard has a leader and a replica, and all
are mapped to the same collection1.
{"collection1":{
"shards":{
"shard1":{
"range":"8000-d554",
"state":"active",
"replicas":{
"core_n
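For reference, the shard layout above can also be dumped programmatically with
SolrJ 4.x, roughly as in the sketch below (the ZooKeeper address is an
assumption; adjust it to your ensemble):

// Sketch: print shard ranges and replica states for collection1 (SolrJ 4.x).
import org.apache.solr.client.solrj.impl.CloudSolrServer;
import org.apache.solr.common.cloud.ClusterState;
import org.apache.solr.common.cloud.Replica;
import org.apache.solr.common.cloud.Slice;

public class ShowClusterState {
  public static void main(String[] args) throws Exception {
    // "localhost:2181" is an assumed ZooKeeper address.
    CloudSolrServer server = new CloudSolrServer("localhost:2181");
    server.connect();
    ClusterState state = server.getZkStateReader().getClusterState();
    for (Slice slice : state.getSlices("collection1")) {
      System.out.println(slice.getName() + " range=" + slice.getRange());
      for (Replica replica : slice.getReplicas()) {
        System.out.println("  " + replica.getName() + " state=" + replica.getStr("state"));
      }
    }
    server.shutdown();
  }
}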
Hi, thanks for the answer.
We installed Solr with the solr.cmd -e cloud utility that comes with the
installation.
The shard names are odd because, in this case, after the installation we
migrated an old index from our other environment (which is a single-node
Solr) and split it with Collection A
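For illustration only, a shard split issued through the Collections API from
SolrJ looks roughly like the sketch below; the collection and shard names are
assumptions, not necessarily the exact call we used:

// Sketch: SPLITSHARD via the Collections API, sent through SolrJ.
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.request.QueryRequest;
import org.apache.solr.common.params.ModifiableSolrParams;

public class SplitShardSketch {
  public static void main(String[] args) throws Exception {
    // Assumed Solr base URL.
    HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr");
    ModifiableSolrParams params = new ModifiableSolrParams();
    params.set("action", "SPLITSHARD");
    params.set("collection", "collection1");
    params.set("shard", "shard1");
    QueryRequest request = new QueryRequest(params);
    request.setPath("/admin/collections");
    System.out.println(server.request(request));
    server.shutdown();
  }
}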
Hello,
We perform frequent deletions from our index, and the deleted documents still
take up space, which greatly increases the index size.
How can we perform an optimization in order to reduce the size?
Please advise,
Thanks.
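For reference, by an optimization we mean an explicit optimize (force merge)
call, e.g. via SolrJ as in the sketch below; the host and collection name are
placeholders:

// Sketch: explicit optimize of collection1 to merge segments and drop deleted docs.
import org.apache.solr.client.solrj.impl.HttpSolrServer;

public class OptimizeIndex {
  public static void main(String[] args) throws Exception {
    // Assumed Solr URL and collection name.
    HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr/collection1");
    server.optimize();  // merges segments and removes deleted documents
    server.shutdown();
  }
}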
Thank you all for your responses,
The thing is that we have a 180G index, and half of it is deleted documents.
We tried to run an optimization in order to shrink the index size, but it
crashes with an 'out of memory' error when the process reaches 120G.
Is it possible to optimize parts of the index?
Please advise,
Hi,
It's not an option for us; all the documents in our index have the same
deletion probability.
Is there any other way to perform an optimization in order to reduce the
index size?
Thanks in advance.
Hi,
We gave 120G to the JVM, while we have 140G of memory on this machine.
We use the default merge policy (TieredMergePolicy), and there are 54
segments in our index.
We tried to perform an optimization with different values of maxSegments
(53 and less), but it didn't help.
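For reference, the maxSegments variant we tried is roughly equivalent to the
SolrJ call sketched below (the URL and the segment count are illustrative):

// Sketch: partial optimize, merging down to at most 53 segments instead of 1.
import org.apache.solr.client.solrj.impl.HttpSolrServer;

public class PartialOptimize {
  public static void main(String[] args) throws Exception {
    // Assumed Solr URL and collection name.
    HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr/collection1");
    // waitFlush=true, waitSearcher=true, maxSegments=53
    server.optimize(true, true, 53);
    server.shutdown();
  }
}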
How much memory do we need for a 180G index?
Hi,
Unfortunately this is the case; we do have hundreds of millions of documents
on one Solr instance/server. All our configs and the schema use the default
settings. Our index size is 180G; does that mean that we need at least 180G
of heap?
Thanks.