Re: solr cloud invalid shard/collection configuration

2015-12-16 Thread ig01
Can someone please advise regarding my previous message?





solr cloud invalid shard/collection configuration

2015-12-14 Thread ig01
I have an existing SolrCloud 4.4 cluster configured with ZooKeeper.
The current setup is 3 shards; each shard has a leader and a replica. All
are mapped to the same collection, collection1.

{"collection1":{
"shards":{
  "shard1":{
"range":"8000-d554",
"state":"active",
"replicas":{
  "core_node4":{
"state":"active",
"core":"collection1",
"node_name":"10.200.101.132:9983_solr",
"base_url":"http://10.200.101.132:9983/solr";,
"leader":"true"},
  "core_node7":{
"state":"active",
"core":"collection1",
"node_name":"10.200.101.131:8983_solr",
"base_url":"http://10.200.101.131:8983/solr"}}},
  "shard2":{
"range":"d555-2aa9",
"state":"active",
"replicas":{
  "core_node2":{
"state":"active",
"core":"collection1",
"node_name":"10.200.101.131:9983_solr",
"base_url":"http://10.200.101.131:9983/solr"},
  "core_node5":{
"state":"active",
"core":"collection1",
"node_name":"10.200.101.133:8983_solr",
"base_url":"http://10.200.101.133:8983/solr";,
"leader":"true"}}},
  "shard3":{
"range":"2aaa-7fff",
"state":"active",
"replicas":{
  "core_node3":{
"state":"active",
"core":"collection1",
"node_name":"10.200.101.132:8983_solr",
"base_url":"http://10.200.101.132:8983/solr"},
  "core_node6":{
"state":"active",
"core":"collection1",
"node_name":"10.200.101.133:9983_solr",
"base_url":"http://10.200.101.133:9983/solr";,
"leader":"true",
"router":"compositeId"}}



I have downloaded SolrCloud 5.2.1 and ran solr.cmd, creating almost the same
setup with 2 shards, where each shard has a leader and a replica.
 
{"collection1":{
"replicationFactor":"1",
"shards":{
  "shard1_0":{
"range":"8000-",
"state":"active",
"replicas":{
  "core_node3":{
"core":"collection1_shard1_0_replica1",
"base_url":"http://10.1.20.31:8983/solr";,
"node_name":"10.1.20.31:8983_solr",
"state":"active",
"leader":"true"},
  "core_node5":{
"core":"collection1_shard1_0_replica2",
"base_url":"http://10.1.20.31:7574/solr";,
"node_name":"10.1.20.31:7574_solr",
"state":"active"}}},
  "shard1_1":{
"range":"0-7fff",
"state":"active",
"replicas":{
  "core_node4":{
"core":"collection1_shard1_1_replica1",
"base_url":"http://10.1.20.31:8983/solr";,
"node_name":"10.1.20.31:8983_solr",
"state":"active",
"leader":"true"},
  "core_node6":{
"core":"collection1_shard1_1_replica2",
"base_url":"http://10.1.20.31:7574/solr";,
"node_name":"10.1.20.31:7574_solr",
"state":"active",
"router":{"name":"compositeId"},
"maxShardsPerNode":"1",
"autoAddReplicas":"false",
"autoCreated":"true"}}

The problem is that when I index a document to http://10.1.20.31:8983/solr,
it is only indexed to collection1_shard1_0_replica1; the documents are not
spread to the other shard. Why is that? Is Solr configured correctly?
In the existing environment I see only one core for all shards, while in the
new one there are two cores for each shard. Why?
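
In case it helps to see where documents actually land, here is a minimal
sketch (Python, standard library only) that indexes a few documents and then
queries each core directly with distrib=false. The node URL and core names
are taken from the cluster state above; the uniqueKey field "id" and the
title_s dynamic field are assumptions about the default schema.

import json
import urllib.request

BASE = "http://10.1.20.31:8983/solr"

def post_json(url, payload):
    # POST a JSON body to Solr and return the parsed response.
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))

# Index ten documents with distinct ids; with the compositeId router the
# target shard is derived from the hash of the uniqueKey, so distinct ids
# should fall into both hash ranges.
docs = [{"id": str(i), "title_s": "doc %d" % i} for i in range(10)]
post_json(BASE + "/collection1/update?commit=true", docs)

# Ask each core directly (distrib=false) how many documents it holds.
for core in ("collection1_shard1_0_replica1", "collection1_shard1_1_replica1"):
    url = (BASE + "/" + core +
           "/select?q=*:*&rows=0&distrib=false&wt=json")
    with urllib.request.urlopen(url) as resp:
        num = json.loads(resp.read().decode("utf-8"))["response"]["numFound"]
    print(core, num)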

Please advise.






Re: solr cloud invalid shard/collection configuration

2015-12-14 Thread ig01
Hi, thanks for the answer.


We installed Solr with the solr.cmd -e cloud utility that comes with the
installation.
The shard names are odd because, in this case, after the installation
we migrated an old index from our other environment (which is a single-node
Solr) and split it with the Collections API SPLITSHARD command.
The split completed successfully: documents were spread almost equally
between the two shards and I was able to retrieve our old documents. After that
I deleted the old shard that was split (with the Collections API DELETESHARD
command).
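
For reference, a minimal sketch (Python, standard library only) of the
Collections API calls described above, using the host and shard names from
this thread; they may differ in other setups.

import urllib.request

SOLR = "http://10.1.20.31:8983/solr"

# Split shard1 of collection1 into shard1_0 and shard1_1.
urllib.request.urlopen(
    SOLR + "/admin/collections?action=SPLITSHARD"
           "&collection=collection1&shard=shard1&wt=json")

# SPLITSHARD leaves the parent shard inactive; once the sub-shards are
# active, the parent can be removed.
urllib.request.urlopen(
    SOLR + "/admin/collections?action=DELETESHARD"
           "&collection=collection1&shard=shard1&wt=json")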

Anyway, this behavior is the same for a regular SolrCloud installation
with solr.cmd -e cloud, without any index migration...

We are indexing our documents using the
url="http://10.1.20.31:8983/solr/collection1/".
After the installation we indexed 4 documents and they were all indexed on
the same shard.

Thanks in advance,
Inna.








Frequent deletions

2014-12-31 Thread ig01
Hello,
We perform frequent deletions from our index, which greatly increases the
index size.
How can we perform an optimization in order to reduce the size?
Please advise,
Thanks.
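
A minimal sketch of one way to do it (Python, standard library only),
assuming a collection named collection1 on localhost: an optimize (forced
merge) issued through the update handler rewrites the segments and drops
documents that are only marked as deleted.

import urllib.request

# Forced merge (optimize) of the whole index; deleted documents are
# dropped from the rewritten segments. The URL is an example.
url = ("http://localhost:8983/solr/collection1/update"
       "?optimize=true&waitSearcher=false&wt=json")
with urllib.request.urlopen(url) as resp:
    print(resp.read().decode("utf-8"))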






Re: Frequent deletions

2015-01-11 Thread ig01
Thank you all for your response,
The thing is that we have a 180G index, and half of it is deleted documents.
We tried to run an optimization in order to shrink the index size, but it
crashes with 'out of memory' when the process reaches 120G.
Is it possible to optimize parts of the index?
Please advise what we can do in this situation.
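
If optimizing the whole index in one go is too heavy, a partial merge can be
requested with the maxSegments parameter; a minimal sketch (Python, standard
library only), with the collection URL and segment count as assumptions:

import urllib.request

# Merge down to at most 10 segments instead of a full single-segment
# optimize; deleted documents in the merged segments are dropped.
url = ("http://localhost:8983/solr/collection1/update"
       "?optimize=true&maxSegments=10&waitSearcher=false&wt=json")
with urllib.request.urlopen(url) as resp:
    print(resp.read().decode("utf-8"))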






Re: Frequent deletions

2015-01-11 Thread ig01
Hi,

It's not an option for us; all the documents in our index have the same
deletion probability.
Is there any other solution for performing an optimization in order to reduce
the index size?

Thanks in advance.





Re: Frequent deletions

2015-01-12 Thread ig01
Hi,

We gave 120G to the JVM, while we have 140G of memory on this machine.
We use the default merge policy (TieredMergePolicy), and there are 54
segments in our index.
We tried to perform the optimization with different values of maxSegments
(53 and less); it didn't help.
How much memory do we need to optimize a 180G index?
Does every update delete the document and create a new one?
How does a commit with expungeDeletes=true affect performance?
Currently we do not have a performance issue.
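
For what it's worth, a minimal sketch (Python, standard library only) of a
commit with expungeDeletes=true, which merges away only segments carrying
enough deletions rather than rewriting the whole index; the collection URL is
an assumption.

import urllib.request

# Commit with expungeDeletes=true: merges segments whose deleted-document
# ratio is above the merge policy's threshold, reclaiming space without a
# full optimize. The URL is an example.
url = ("http://localhost:8983/solr/collection1/update"
       "?commit=true&expungeDeletes=true&wt=json")
with urllib.request.urlopen(url) as resp:
    print(resp.read().decode("utf-8"))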

Thanks in advance.





Re: Frequent deletions

2015-01-12 Thread ig01
Hi,

Unfortunately this is the case: we do have hundreds of millions of documents
on one Solr instance/server. All our configs and schema use the default
configurations. Our index size is 180G; does that mean we need at least 180G
of heap?

Thanks.



