Re: Reload core via CoreAdminRequest doesn't work with SolrCloud? (SolrJ)
Yes, but which URL would you use? I'm on SolrCloud; my index is distributed among 10 servers. I was trying to use the SolrJ API, which does seem to work with HttpSolrServer.

Tomás Fernández Löbbe wrote:
> If you need to reload all the cores from a given collection you can use
> the Collections API:
> http://localhost:8983/solr/admin/collections?action=RELOAD&name=mycollection
>
> On Thu, Nov 22, 2012 at 3:17 PM, joe.cohen.m@ wrote:
>
>> Hi,
>> I'm using solr-4.0.0.
>> I'm trying to reload all the cores of a given collection in my SolrCloud
>> cluster. I use it like this:
>>
>> CloudSolrServer server = new CloudSolrServer("zkserver:port");
>> server.setDefaultCollection("collection1");
>> CoreAdminRequest req = new CoreAdminRequest();
>> req.reloadCore("collection1", server);
>>
>> This throws an exception telling me that no live Solr servers are
>> available, listing the servers like this:
>> http://server/solr/collection1
>>
>> Of course, other tasks like adding documents through the CloudSolrServer
>> above work fine. Using reloadCore on an HttpSolrServer also works fine.
>>
>> Any known issue with CloudSolrServer and CoreAdminRequest?
>>
>> Note that I moved to solr-4.0.0 from solr-4.0.0-beta after trying the
>> same thing there also failed, but with a different exception: it failed
>> saying "cannot cast String to Map" in ClusterState.load() (line 300),
>> because the key "range" held a String value instead of a Map object.

--
View this message in context: http://lucene.472066.n3.nabble.com/Reload-core-via-CoreAdminRequest-doesnt-work-with-solr-cloud-solrj-tp4021882p4022249.html
Sent from the Solr - User mailing list archive at Nabble.com.
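For what it's worth, the Collections API RELOAD that Tomás mentions can be sent to any live node of the cluster; it is then distributed to every core of the collection. A minimal dry-run sketch - host, port, and collection name are assumptions, and the actual request is left commented out since it needs a running cluster:

```shell
# Collections API RELOAD reloads every core of a collection on every node.
# Any live node can receive the request. Host/port/collection are examples.
COLLECTION=collection1
RELOAD_URL="http://localhost:8983/solr/admin/collections?action=RELOAD&name=${COLLECTION}"
echo "${RELOAD_URL}"
# curl "${RELOAD_URL}"   # uncomment to issue the reload against a live cluster
```

This sidesteps CoreAdminRequest entirely, which only targets a single node's cores.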
Re: SolrCloud: Very strange behavior when doing atomic updates or document reindexation.
I'm having a similar problem. Did you by any chance try the suggestion here:
https://issues.apache.org/jira/browse/SOLR-4080?focusedCommentId=13498055&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13498055 ?

Rakudten wrote:
> More info:
>
> - I'm trying to update the document by re-indexing the whole document
>   again. I first retrieve the document querying by its id, then delete it
>   by its id, and re-index it including the new changes.
> - At the same time there are other index writing operations.
>
> RESULT: in most cases the document wasn't updated. Bad news... it smells
> like a critical bug.
>
> Regards,
>
> - Luis Cappa.
>
> 2012/11/22 Luis Cappa Banda <luiscappa@>:
>
>> For more details, my indexing app is:
>>
>> 1. Multithreaded.
>> 2. NRT indexing.
>> 3. A web app with a REST API. It receives asynchronous requests that
>>    produce the atomic updates / document reindexations I described
>>    before.
>>
>> I'm pretty sure the wrong behavior is related to CloudSolrServer and to
>> the fact that the index may be modified while another index update is
>> still in progress.
>>
>> Regards,
>>
>> - Luis Cappa.
>>
>> 2012/11/22 Luis Cappa Banda <luiscappa@>:
>>
>>> Hello!
>>>
>>> I'm using a simple test configuration with numShards=1 without any
>>> replica. CloudSolrServer is supposed to forward those index/update
>>> operations properly, isn't it? I tested with a complete document
>>> reindexation (not atomic updates), using the official LBHttpSolrServer
>>> (not my custom BinaryLBHttpSolrServer), and it doesn't work. I think
>>> this is not just a bug in atomic updates via CloudSolrServer but a
>>> general bug when an index changes frequently through
>>> reindexations/updates.
>>>
>>> Regards,
>>>
>>> - Luis Cappa.
>>>
>>> 2012/11/22 Sami Siren <ssiren@>:
>>>
>>>> It might even depend on the cluster layout! Let's say you have 2
>>>> shards (no replicas): if the doc belongs to the node you send it to,
>>>> so that it does not get forwarded to another node, then the update
>>>> should work; when the doc gets forwarded to another node, the problem
>>>> occurs. With replicas it could appear even stranger: the leader might
>>>> have the doc right and the replica not. I only briefly looked at the
>>>> bits that deal with this, so perhaps there's something more involved.
>>>>
>>>> On Thu, Nov 22, 2012 at 8:29 PM, Luis Cappa Banda <luiscappa@> wrote:
>>>>
>>>>> Hi, Sami!
>>>>>
>>>>> But isn't it strange that some documents were updated correctly
>>>>> (atomic updates) and other ones not? Couldn't it be a more serious
>>>>> problem, like some kind of index writer lock, or whatever?
>>>>>
>>>>> Regards,
>>>>>
>>>>> - Luis Cappa.
>>>>>
>>>>> 2012/11/22 Sami Siren <ssiren@>:
>>>>>
>>>>>> I think the problem is that even though you were able to work
>>>>>> around the bug in the client, Solr still uses the XML format
>>>>>> internally, so the atomic update (with a multivalued field) fails
>>>>>> later down the stack. The bug you filed needs to be fixed to get
>>>>>> the problem solved.
>>>>>>
>>>>>> On Thu, Nov 22, 2012 at 8:19 PM, Luis Cappa Banda <luiscappa@> wrote:
>>>>>>
>>>>>>> Hello everyone.
>>>>>>>
>>>>>>> I've started to seriously worry about SolrCloud due to a strange
>>>>>>> behavior that I have detected. The situation is the following:
>>>>>>>
>>>>>>> 1. SolrCloud with one shard and two Solr instances.
>>>>>>> 2. Indexing via SolrJ with CloudSolrServer and a custom
>>>>>>>    BinaryLBHttpSolrServer that uses BinaryRequestWriter to execute
>>>>>>>    atomic updates correctly. See SOLR-4080:
>>>>>>>    https://issues.apache.org/jira/browse/SOLR-4080?focusedCommentId=13498055&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13498055
>>>>>>> 3. An asynchronous process partially updates some document fields.
>>>>>>>    After that operation I automatically execute a commit, so the
>>>>>>>    index must be reloaded.
>>>>>>>
>>>>>>> What I have checked is that, whether using atomic updates or
>>>>>>> complete document reindexations, random documents are not updated,
>>>>>>> even though I saw while debugging that the add() and commit()
>>>>>>> operations were executed correctly and without errors. Has anyone
>>>>>>> experienced a similar behavior? Is it possible that if an index
>>>>>>> update operation didn't finish and CloudSolrServer receives a new
>>>>>>> one, this second update operation doesn't complete?
>>>>>>>
>>>>>>> Thank you in advance.
>>>>>>>
>>>>>>> Regards,
>>>>>>>
>>>>>>> - Luis Cappa
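One way to isolate whether the failure is in the custom SolrJ client or in Solr itself is to issue the same atomic update straight over HTTP, bypassing BinaryLBHttpSolrServer. A dry-run sketch - the collection name, document id, and field names are made up for illustration, and the request line is commented out since it needs a live node:

```shell
# Atomic update JSON: "set" replaces a field's value, "add" appends to a
# multivalued field. All names here (doc1, price_i, tags_ss) are examples.
UPDATE_URL="http://localhost:8983/solr/collection1/update?commit=true"
PAYLOAD='[{"id":"doc1","price_i":{"set":99},"tags_ss":{"add":"updated"}}]'
echo "$PAYLOAD"
# curl "$UPDATE_URL" -H 'Content-Type: application/json' -d "$PAYLOAD"
```

If the update applies correctly this way but not through the custom client, the problem is on the client side; if it is also lost here, it points at the server-side forwarding path Sami describes.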
Re: Struggling with solr 4.0 and zookeeper - multiple solr collections and configs
If I run

  java -classpath example/solr-webapp/WEB-INF/lib/* org.apache.solr.cloud.ZkCLI -cmd bootstrap -zkhost 127.0.0.1:9983 -solrhome example/solr

for each collection, I end up having 3 different configs. But when I start Solr, it is not able to run all 3 collections, each with its own config. It keeps looking for collection2's and collection3's configs under collection1's relative config path.

Erick Erickson wrote:
> On the solr cloud page, admittedly down the page a ways, is the line
> below. Does that apply?
> Best
> Erick
>
> # try bootstrapping all the conf dirs in solr.xml
> java -classpath example/solr-webapp/WEB-INF/lib/* org.apache.solr.cloud.ZkCLI -cmd bootstrap -zkhost 127.0.0.1:9983 -solrhome example/solr
>
> On Wed, Dec 19, 2012 at 1:46 PM, joe.cohen.m@ wrote:
>
>> I'm trying to build the following Solr cluster:
>> 3 collections, with 3 different configuration sets, on multiple servers.
>> It seems that Solr can't use different config trees in ZooKeeper at the
>> same time. Even if I manage to get to a state in which, under the
>> 'configs' node in ZooKeeper, I have 3 config folders with the Solr conf
>> files, when I run Solr it seems to pick one of them and look for the
>> other config files under the single one it picked.
>>
>> Thus I get messages like "no zookeeper node found in
>> /configs/collection1conf/collection2conf/solrconfig.xml", while I was
>> assuming it should see that it has the node
>> /configs/collection2conf/solrconfig.xml.
>>
>> My ZooKeeper configs node looks like:
>> configs/
>> configs/collection1conf
>> configs/collection2conf
>> configs/collection3conf
>>
>> I've tried many different ways of editing solr.xml and none helped:
>> 1. Setting full paths for each collection - an error says invalid path.
>> 2. Setting relative paths for each collection - an error says it can't
>>    find the zookeeper node, because it searches under
>>    defaultcollectionpath + relativepath.
>> 3. Running with only one core - Solr doesn't see the other collections.
>>
>> Any idea? Is this even possible with the current Solr version?
>>
>> Thanks.

--
View this message in context: http://lucene.472066.n3.nabble.com/Struggling-with-solr-4-0-and-zookeeper-multiple-solr-collection-and-configs-tp4028113p4028797.html
Sent from the Solr - User mailing list archive at Nabble.com.
Re: Struggling with solr 4.0 and zookeeper - multiple solr collections and configs
Can you please clarify your answer? You said to try bootstrapping:

> # try bootstrapping all the conf dirs in solr.xml
> java -classpath example/solr-webapp/WEB-INF/lib/* org.apache.solr.cloud.ZkCLI -cmd bootstrap -zkhost 127.0.0.1:9983 -solrhome example/solr

and that is what I tried.

Erick Erickson wrote:
> You haven't indicated whether you tried what I pointed you at; you only
> repeated that you tried bootstrapping on the example directory, _not_
> wherever you put your multi-core configuration.
>
> I've seen the example I mentioned work like a champ. Really, try it.
>
> I'd think about using the latest Solr nightly build too...
>
> Best
> Erick
>
> On Sun, Dec 23, 2012 at 2:55 AM, joe.cohen.m@ wrote:
>
>> If I run
>> java -classpath example/solr-webapp/WEB-INF/lib/* org.apache.solr.cloud.ZkCLI -cmd bootstrap -zkhost 127.0.0.1:9983 -solrhome example/solr
>> for each collection, I end up having 3 different configs.
>> But when I start Solr, it is not able to run all 3 collections, each
>> with its own config. It keeps looking for collection2's and
>> collection3's configs under collection1's relative config path.
>> [...]

--
View this message in context: http://lucene.472066.n3.nabble.com/Struggling-with-solr-4-0-and-zookeeper-multiple-solr-collection-and-configs-tp4028113p4029920.html
Sent from the Solr - User mailing list archive at Nabble.com.
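An alternative to bootstrapping the whole solrhome is ZkCLI's upconfig/linkconfig pair: upload each conf directory under its own config name, then link each collection to its config name in ZooKeeper. The sketch below only prints the commands (dry run); the conf-dir paths and names are assumptions to adjust to your layout, and you would drop the leading echo to actually run them:

```shell
# Upload each config set under its own name, then link each collection to it.
# Paths and config names below are assumptions; adjust to your layout.
ZKHOST=127.0.0.1:9983
CP='example/solr-webapp/WEB-INF/lib/*'
for COLL in collection1 collection2 collection3; do
  # upload this collection's conf dir under a distinct config name
  echo java -classpath "$CP" org.apache.solr.cloud.ZkCLI -cmd upconfig \
    -zkhost "$ZKHOST" -confdir "example/solr/$COLL/conf" -confname "${COLL}conf"
  # associate the collection with that config name in ZooKeeper
  echo java -classpath "$CP" org.apache.solr.cloud.ZkCLI -cmd linkconfig \
    -zkhost "$ZKHOST" -collection "$COLL" -confname "${COLL}conf"
done
```

This keeps the three config trees independent under /configs, which avoids having every collection resolve its files under the single bootstrapped config.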
solr cloud shards and servers issue
Hi,
I have the following scenario: 1 collection across 10 servers, with 10 shards. Each server runs 2 Solr instances; the replication factor is 2.

I want to move one of the instances to another server - that is, kill the Solr process on server X and start a new Solr process on server Y instead. When I kill the Solr process on server X, I can still see that instance in the SolrCloud graph (marked differently). When I start the instance on server Y, it gets attached to another shard instead of joining the shard that is now actually missing an instance.

1. Is there any way to tell Solr/ZooKeeper "forget about that instance"?
2. When starting a new Solr instance, is there any way to tell Solr/ZooKeeper "add this instance to shard X"?

Thanks.

--
View this message in context: http://lucene.472066.n3.nabble.com/solr-cloud-shards-and-servers-issue-tp4021101.html
Sent from the Solr - User mailing list archive at Nabble.com.
Re: solr cloud shards and servers issue
How can I unload a SolrCore after I have killed the running process?

Mark Miller-3 wrote:
> On Nov 19, 2012, at 11:24 AM, joe.cohen.m@ wrote:
>
>> [...]
>> 1. Any way to tell solr/zookeeper - "Forget about that instance"?
>
> Unload the SolrCores involved.
>
>> 2. when running a new solr instance - any way to tell solr/zookeeper -
>>    "add this instance to shard X"?
>
> Specify a shardId when creating the core, or configure it in solr.xml,
> and make it match the shard you want to add to.
>
> - Mark

--
View this message in context: http://lucene.472066.n3.nabble.com/solr-cloud-shards-and-servers-issue-tp4021101p402.html
Sent from the Solr - User mailing list archive at Nabble.com.
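Mark's two suggestions translate into CoreAdmin calls along these lines. The hostnames, core name, and shard id below are made up for illustration, and note that UNLOAD must be sent to a node that is still running, so it is best issued before killing the process. The sketch only builds and prints the URLs; the requests are left commented out:

```shell
# 1. "Forget about that instance": unload the core while its node is still up.
#    Core name and hosts are hypothetical examples.
UNLOAD_URL="http://serverX:8983/solr/admin/cores?action=UNLOAD&core=collection1_shard3"
# 2. Pin the replacement core to a specific shard when creating it on server Y.
CREATE_URL="http://serverY:8983/solr/admin/cores?action=CREATE&name=collection1_shard3&collection=collection1&shard=shard3"
echo "$UNLOAD_URL"
echo "$CREATE_URL"
# curl "$UNLOAD_URL"   # run before stopping the old process
# curl "$CREATE_URL"   # run against the new node on server Y
```

With the shard parameter matching the depleted shard, the new core registers as a replica of that shard instead of being assigned elsewhere.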