[Pacemaker] Fencing dependency between bare metal host and its VMs guest

Tomasz Kontusz tomasz.kontusz at gmail.com
Mon Nov 10 04:07:18 EST 2014

I think the suggestion was to put shooting the host into the VM's fencing path. That way, if you can't get the host to fence the VM (because the host is already dead), you just check whether the host itself was fenced.
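A minimal sketch of that idea as a fencing topology, using the node names from this thread; the stonith device names (fence-one-frontend, fence-nebula1) and the use of the pcs shell are assumptions, not taken from the original configuration:

```shell
# Level 1: try to fence the VM node through its own stonith agent
# (assumed device "fence-one-frontend", e.g. a fence_virsh device
# that talks to the libvirt daemon on the bare metal host)
pcs stonith level add 1 one-frontend fence-one-frontend

# Level 2: if level 1 fails (e.g. the host nebula1 is already dead),
# fall back to fencing the bare metal host itself
# (assumed device "fence-nebula1", e.g. an IPMI/power agent)
pcs stonith level add 2 one-frontend fence-nebula1
```

With this topology, a successful fence of nebula1 at level 2 lets the cluster consider one-frontend fenced even though the VM's own stonith agent could not be reached.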

Daniel Dehennin <daniel.dehennin at baby-gnu.org> wrote:
>Andrei Borzenkov <arvidjaar at gmail.com> writes:
>>> Now I have one issue: when the bare metal host on which the VM is
>>> running dies, the VM is lost and cannot be fenced.
>>> Is there a way to make Pacemaker ACK the fencing of a VM running on
>>> a host when the host itself is fenced?
>> Yes, you can define multiple stonith agents and priorities between
>> them: http://clusterlabs.org/wiki/Fencing_topology
>If I understand correctly, fencing topology is the way to have several
>fencing devices for a node and try them consecutively until one works.
>In my configuration, I group the VM stonith agents with the
>corresponding VM resource, to make them move together[1].
>Here is my use case:
>1. Resource ONE-Frontend-Group runs on nebula1
>2. nebula1 is fenced
>3. Node one-frontend cannot be fenced
>Is there a way to say that the life of node one-frontend is tied to
>the state of resource ONE-Frontend?
>In which case, when node nebula1 is fenced, Pacemaker should be aware
>that resource ONE-Frontend is not running any more, so node
>one-frontend can be considered fenced too.
>Daniel Dehennin
>Retrieve my GPG key: gpg --recv-keys 0xCC1E9E5B7A6FE2DF
>Fingerprint: 3E69 014E 5C23 50E8 9ED6  2AAD CC1E 9E5B 7A6F E2DF

Sent from K-9 Mail.
