On 7/29/2014 2:23 PM, avgxm wrote:
> Is there a correct way to take an existing Solr index
> (core.properties, conf/, data/ directories from a standalone Solr
> instance) and copy it over to a Solr cloud, with shards, without
> having to use import or re-indexing? Does anyone know the proper
> steps to accomplish this type of a move? The target system is
> zookeeper 3.4.6, tomcat 7.0.54, and solr 4.8.1. I have been able to
> copy the data and load the core by executing upconfig, linkconfig to
> zookeeper, and then copying over the core.properties, and conf/ and
> data/ directories, bouncing tomcat. The core comes up and is
> searchable. The cloud pic looks like
> corename ---- shard1 ---- ip_addr:8080. Then, I have tried to use
> split core, split shard, create core, without success to try and add
> shard2 and shard3, either on the same or different hosts. Not sure
> what I'm missing or if this way of reusing the existing data is even
> an option.
You'll need to create a collection with the Collections API (which also
creates the cores) before you try copying anything, and then you'll
want to copy *only* the "data" directory -- the config is in zookeeper
and the core.properties file should already exist.

When you create the collection, you'll likely want numShards on the
CREATE call to be 1, since your existing standalone index is a single
core. replicationFactor should be whatever you want -- if it's 2,
you'll end up with two copies of your index on different servers. If
the collection is named "test", then the core on the first server will
be named test_shard1_replica1, the core on the second server will be
named test_shard1_replica2, and so on. (There's a rough sketch of the
commands in the P.S. below.)

Your zookeeper ensemble should be separate from Solr; don't use the
-DzkRun option.

To put the existing data into the collection (also sketched in the
P.S.):

1) Shut down all the Solr servers.
2) Delete the data and tlog directories from XXXX_shard1_replicaN on
   all the servers.
3) Copy the data directory from the source to the first server, then
   start Solr on that server.
4) Wait a few minutes for everything to stabilize.
5) Start Solr on any other servers.

Thanks,
Shawn
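P.S. In case a concrete example helps, here's roughly what the config
upload and CREATE steps look like. All hostnames, ports, paths, and
the config name "myconf" are placeholders for your own values (zkcli.sh
ships under example/scripts/cloud-scripts in the Solr download):

  # Upload the old core's config to zookeeper -- you said you've
  # already done this; shown for completeness:
  zkcli.sh -zkhost zk1:2181,zk2:2181,zk3:2181 \
    -cmd upconfig -confdir /path/to/old/core/conf -confname myconf

  # Create a one-shard collection named "test" with two replicas,
  # pointing it at that config (no linkconfig should be needed when
  # you pass collection.configName here):
  curl 'http://server1:8080/solr/admin/collections?action=CREATE&name=test&numShards=1&replicationFactor=2&collection.configName=myconf'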
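And the data copy itself, assuming the default instance directory
layout (again, all paths are examples; note that the tlog normally
lives inside data/, so delete it separately only if yours is configured
elsewhere):

  # With Solr (tomcat) stopped everywhere, on *each* server:
  rm -rf /path/to/solr/home/test_shard1_replica*/data

  # On the first server only, copy in the old standalone index:
  cp -r /path/to/old/core/data /path/to/solr/home/test_shard1_replica1/data

  # Start Solr on the first server, give it a few minutes to
  # stabilize, then start the rest.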