Just one suggestion: instead of stopping ZK and removing zoo_data, it's
better to use Solr's zkcli.sh script from cloud-scripts to clear out the
data, e.g.

zkcli.sh -zkhost localhost:9983 -cmd clear /solr

The paths I clear when I want a full clean-up are:

/configs/CONFIG_NAME
/collections/COLLECTION_NAME
/clusterstate.json
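
For example, a full clean-up would then look like this (substitute your
actual config and collection names, and your ZK address):

zkcli.sh -zkhost localhost:9983 -cmd clear /configs/CONFIG_NAME
zkcli.sh -zkhost localhost:9983 -cmd clear /collections/COLLECTION_NAME
zkcli.sh -zkhost localhost:9983 -cmd clear /clusterstate.json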



On Tue, Jan 29, 2013 at 4:14 PM, Mingfeng Yang <mfy...@wisewindow.com> wrote:
> An experiment found that stopping all shards, removing zoo_data (assuming
> your ZooKeeper is used only for this particular SolrCloud; otherwise, be
> cautious), and then starting the instances in order works fine.
>
> Ming
>
>
>
> On Sat, Jan 26, 2013 at 5:31 AM, Per Steffensen <st...@designware.dk> wrote:
>
>> Hi
>>
>> We have actually tested this and found that the following will do it:
>> * Shut down all Solr nodes - make sure the ZKs are still running
>> * For each replica (shard-instance) move its data folder to the new server
>> (if it is not already available to it through some shared storage)
>> * For each replica (shard-instance) also move its solr.xml
>> * Extract clusterstate.json from ZK into a file. Modify that file so that
>> the hosts/IPs and ports are correct for the new setup. Then replace
>> clusterstate.json in ZK with the modified file (see the sketch below)
>> * Start new Solr nodes
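>>
>> For example, with zkcli.sh (a sketch - getfile/putfile are available in
>> newer zkcli.sh versions, otherwise any ZK client will do; adjust the ZK
>> address and file path to your setup):
>>
>> zkcli.sh -zkhost zk1:2181 -cmd getfile /clusterstate.json /tmp/cs.json
>> # edit /tmp/cs.json: update the base_url and node_name entries
>> zkcli.sh -zkhost zk1:2181 -cmd putfile /clusterstate.json /tmp/cs.json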
>>
>> Good luck!
>>
>> Regards, Per Steffensen
>>
>>
>>
>> On 1/26/13 6:56 AM, Mingfeng Yang wrote:
>>
>>> Hi Mark,
>>>
>>> When I did testing with SolrCloud, I found the following.
>>>
>>> 1. I started 4 shards on the same host, on ports 8983, 8973, 8963, and 8953.
>>> 2. Indexed some data.
>>> 3. Shut down all 4 shards.
>>> 4. Started the 4 shards again, all pointing to the same data directories and
>>> using the same configuration, except that now they use ports 8983, 8973,
>>> 7633, and 7648.
>>> 5. Now Solr has problems loading all the cores properly.
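>>>
>>> (For reference, the instances were started with something along the
>>> lines of
>>>
>>>   java -DzkRun -Djetty.port=8983 -jar start.jar
>>>   java -DzkHost=localhost:9983 -Djetty.port=8973 -jar start.jar
>>>
>>> varying only -Djetty.port per instance.)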
>>>
>>> Therefore, I had the impression that ZooKeeper may have a memory of which
>>> hosts correspond to which shards. If I change the host info, it may get
>>> confused.  I could not find any related documentation or discussion about
>>> this issue.
>>>
>>> Thanks,
>>> Ming
>>>
>>>
>>>
>>>
>>> On Fri, Jan 25, 2013 at 5:52 PM, Mark Miller <markrmil...@gmail.com>
>>> wrote:
>>>
>>>> You could do it that way.
>>>>
>>>> I'm not sure why you are worried about the leaders. That shouldn't
>>>> matter.
>>>>
>>>> You could also start up new Solr instances on the new machines as replicas
>>>> of the cores you want to move - then, once they are active, unload the
>>>> cores on the old machine, stop those Solr instances, and remove what's left
>>>> on the filesystem.
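>>>>
>>>> For example, with the CoreAdmin API (hypothetical host and core names):
>>>>
>>>> # on the new machine: create a replica of shard2
>>>> curl 'http://newhost:8983/solr/admin/cores?action=CREATE&name=shard2_replica2&collection=collection1&shard=shard2'
>>>>
>>>> # once it shows as active in clusterstate.json: drop the old core
>>>> curl 'http://oldhost:8983/solr/admin/cores?action=UNLOAD&core=shard2_replica1'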
>>>>
>>>> - Mark
>>>>
>>>> On Jan 25, 2013, at 7:42 PM, Mingfeng Yang <mfy...@wisewindow.com>
>>>> wrote:
>>>>
>>>>> Right now I have an index with four shards on a single EC2 server, each
>>>>> running on a different port.  Now I'd like to migrate three of the shards
>>>>> to independent servers.
>>>>>
>>>>> What should I do to safely accomplish this process?
>>>>>
>>>>> Can I just
>>>>> 1. shut down all four Solr instances,
>>>>> 2. copy three of the shards (indexes) to different servers,
>>>>> 3. launch 4 Solr instances on 4 different servers, each with -zkHost
>>>>> specified, pointing to the ZooKeeper servers?
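>>>>>
>>>>> For step 3, I mean something like (the ZK host list is a placeholder):
>>>>>
>>>>> java -DzkHost=zk1:2181,zk2:2181,zk3:2181 -Djetty.port=8983 -jar start.jar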
>>>>>
>>>>> My impression is that ZooKeeper remembers which shards are leaders.  What
>>>>> I plan to do above may not elect the three new servers as leaders.  If so,
>>>>> what's the correct way to do it?
>>>>>
>>>>> Thanks,
>>>>> Ming
>>>>>
>>>>
>>>>
>>
