Hello,

my problem is closely related to the thread [1], but I didn't find a solution there. I have a resource that is set up as a clone C restricted to two copies (using the clone-max=2 meta attribute), because the resource takes a long time to become ready (although it starts immediately). By keeping a ready instance running as a clone, I can fail over in the time it takes to move an IP resource. I also have a colocation constraint "resource IP with clone C", which makes sure the IP runs on a node with a working instance of C:

Configuration:
 Clone: dummy-clone
  Meta Attrs: clone-max=2 interleave=true
  Resource: dummy (class=ocf provider=heartbeat type=Dummy)
   Operations: start interval=0s timeout=20 (dummy-start-interval-0s)
               stop interval=0s timeout=20 (dummy-stop-interval-0s)
               monitor interval=10 timeout=20 (dummy-monitor-interval-10)
 Resource: ip (class=ocf provider=heartbeat type=Dummy)
  Operations: start interval=0s timeout=20 (ip-start-interval-0s)
              stop interval=0s timeout=20 (ip-stop-interval-0s)
              monitor interval=10 timeout=20 (ip-monitor-interval-10)

Colocation Constraints:
  ip with dummy-clone (score:INFINITY)
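
For reference, this is roughly how the configuration above was created with pcs (a sketch using pcs 0.10-style syntax; resource names match the listing, adjust as needed):

```shell
# Clone of the Dummy agent, limited to two copies, interleaved.
pcs resource create dummy ocf:heartbeat:Dummy \
    op monitor interval=10 timeout=20 \
    clone clone-max=2 interleave=true

# The "IP" resource (a Dummy stand-in here).
pcs resource create ip ocf:heartbeat:Dummy \
    op monitor interval=10 timeout=20

# Keep ip on a node with a running clone instance.
pcs constraint colocation add ip with dummy-clone INFINITY
```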

State:
 Clone Set: dummy-clone [dummy]
     Started: [ sub1.example.org sub3.example.org ]
 ip     (ocf::heartbeat:Dummy): Started sub1.example.org


This works fine until the active node (sub1.example.org) fails. Instead of moving the IP to the passive node (sub3.example.org), which already has a ready clone instance, Pacemaker moves it to a node where it has only just started a fresh instance of the clone (sub2.example.org in my case):

New state:
 Clone Set: dummy-clone [dummy]
     Started: [ sub2.example.org sub3.example.org ]
 ip     (ocf::heartbeat:Dummy): Started sub2.example.org


The documentation states that the cluster will choose a copy based on where the clone is running and the resource's own location preferences, so I don't understand why this is happening. Is there a way to tell Pacemaker to move the IP to a node where a clone instance is already running?
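
If it helps in diagnosing this, the allocation scores behind the placement decision can be inspected with crm_simulate (a sketch, run against the live cluster; I believe --node-fail is available in recent Pacemaker versions):

```shell
# Show the allocation scores Pacemaker computed for the current
# cluster state (-L reads the live CIB, -s shows scores).
crm_simulate -sL

# Simulate a failure of the currently active node and show the
# scores/transition that would result.
crm_simulate -sL --node-fail sub1.example.org
```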

Thanks!
Jan Wrona

[1] http://lists.clusterlabs.org/pipermail/users/2016-November/004540.html
_______________________________________________
Users mailing list: [email protected]
http://lists.clusterlabs.org/mailman/listinfo/users
