Re: solr cloud invalid shard/collection configuration

2015-12-16 Thread ig01
Can someone please advise, considering my previous message?

solr cloud invalid shard/collection configuration

2015-12-14 Thread ig01
I have an existing SolrCloud 4.4 cluster configured with ZooKeeper. The current setup is 3 shards; each shard has a leader and a replica. All are mapped to the same collection1. {"collection1":{ "shards":{ "shard1":{ "range":"8000-d554", "state":"active", "replicas":{ "core_n
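For reference, a complete shard entry in clusterstate.json typically has the shape sketched below; the range, node, and core names here are hypothetical placeholders, not values from the cluster described above:

  {"collection1":{
    "shards":{
      "shard1":{
        "range":"80000000-d554ffff",
        "state":"active",
        "replicas":{
          "core_node1":{
            "state":"active",
            "core":"collection1",
            "node_name":"host1:8983_solr",
            "base_url":"http://host1:8983/solr",
            "leader":"true"}}}}}}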

Re: solr cloud invalid shard/collection configuration

2015-12-14 Thread ig01
Hi, thanks for the answer. We installed Solr with the solr.cmd -e cloud utility that comes with the installation. The shard names are odd because, after the installation, we migrated an old index from our other environment (which is a single-node Solr) and split it with the Collections API
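For reference, a shard split like the one described is normally issued through the Collections API SPLITSHARD action; a minimal sketch, with host, port, and names as illustrative placeholders:

  curl 'http://localhost:8983/solr/admin/collections?action=SPLITSHARD&collection=collection1&shard=shard1'

Splitting shard1 this way produces sub-shards named shard1_0 and shard1_1, which is one common reason shard names look unusual after a migration.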

Frequent deletions

2014-12-31 Thread ig01
Hello, We perform frequent deletions from our index, which greatly inflates the index size (the deleted documents remain in the segments until they are merged away). How can we perform an optimization in order to reduce the size? Please advise. Thanks.
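For reference, a forced merge that reclaims space held by deleted documents can be triggered through the update handler; a minimal sketch, with the URL and collection name as illustrative placeholders:

  curl 'http://localhost:8983/solr/collection1/update?optimize=true&maxSegments=1'

Keep in mind that an optimize rewrites the index and can temporarily require on the order of twice the index size in free disk space.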

Re: Frequent deletions

2015-01-11 Thread ig01
Thank you all for your responses. The thing is that we have a 180G index, and half of it is deleted documents. We tried to run an optimization in order to shrink the index size, but it crashes with 'out of memory' when the process reaches 120G. Is it possible to optimize parts of the index? Please advise.
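One lighter-weight alternative to a full optimize is expungeDeletes, which only merges away segments that contain deleted documents instead of rewriting the whole index; a minimal sketch, assuming the default update handler:

  curl 'http://localhost:8983/solr/collection1/update?commit=true&expungeDeletes=true'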

Re: Frequent deletions

2015-01-11 Thread ig01
Hi, It's not an option for us; all the documents in our index have the same deletion probability. Is there any other solution to perform an optimization in order to reduce the index size? Thanks in advance.
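One option along these lines is to make the default TieredMergePolicy weigh deletions more heavily when picking merges, so space is reclaimed during normal indexing rather than by an explicit optimize; a hedged sketch for a Solr 4.x solrconfig.xml (3.0 is an illustrative value; the Lucene default is 2.0):

  <indexConfig>
    <mergePolicy class="org.apache.lucene.index.TieredMergePolicy">
      <double name="reclaimDeletesWeight">3.0</double>
    </mergePolicy>
  </indexConfig>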

Re: Frequent deletions

2015-01-12 Thread ig01
Hi, We gave 120G to the JVM, while we have 140G of memory on this machine. We use the default merge policy ("TieredMergePolicy"), and there are 54 segments in our index. We tried to perform an optimization with different values of maxSegments (53 and lower), but it didn't help. How much memory do we need for a 180G index?
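As a rough back-of-the-envelope, assuming (as stated earlier in the thread) that about half of the 180G index is deleted documents: the optimized index should come out near 90G, and the merge itself streams segment data from disk rather than loading it into the heap, so heap demand does not scale with index size. A 120G heap on a 140G machine leaves only about 20G for the OS page cache that Lucene depends on for segment I/O; a much smaller heap, with the remainder left to the page cache, is the more common starting point.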

Re: Frequent deletions

2015-01-12 Thread ig01
Hi, Unfortunately this is the case; we do have hundreds of millions of documents on one Solr instance/server. Our configs and schema are all at the default settings. Our index size is 180G; does that mean that we need at least 180G of heap? Thanks.
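For reference, the live versus deleted document counts can be checked through the Luke request handler before and after any merge; a minimal sketch, with the URL as an illustrative placeholder:

  curl 'http://localhost:8983/solr/collection1/admin/luke?numTerms=0'

The response reports numDocs (live documents), maxDoc (live plus deleted), and deletedDocs.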