[ClusterLabs] VirtualDomain started in two hosts

Oscar Segarra oscar.segarra at gmail.com
Tue Jan 17 11:36:51 UTC 2017


Hi,

I attach my cluster configuration below.

Note that migration_network_suffix=tcp:// is intentional in my environment:
I have edited the mk_migrateuri() function of the VirtualDomain resource
agent so that it builds the correct migration URI, "tcp://vdicnode01-priv":

mk_migrateuri() {
        local target_node
        local migrate_target
        local hypervisor

        target_node="$OCF_RESKEY_CRM_meta_migrate_target"

        # A typical migration URI via a special  migration network looks
        # like "tcp://bar-mig:49152". The port would be randomly chosen
        # by libvirt from the range 49152-49215 if omitted, at least since
        # version 0.7.4 ...
        if [ -n "${OCF_RESKEY_migration_network_suffix}" ]; then
                hypervisor="${OCF_RESKEY_hypervisor%%[+:]*}"
                # Hostname might be a FQDN
                #migrate_target=$(echo ${target_node} | sed -e "s,^\([^.]\+\),\1${OCF_RESKEY_migration_network_suffix},")
                # Local edit: keep the target node name as-is instead of
                # appending the suffix.
                migrate_target=${target_node}
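
Just to illustrate what that change does, here is a minimal sketch of the
upstream transformation with made-up example values ("bar.example.com" and
"-mig" are placeholders, not my hosts):

# Upstream behaviour: insert the suffix after the first label of the host name.
target_node="bar.example.com"
OCF_RESKEY_migration_network_suffix="-mig"
echo "${target_node}" | sed -e "s,^\([^.]\+\),\1${OCF_RESKEY_migration_network_suffix},"
# prints: bar-mig.example.com
# My edit skips this substitution, so the node name is used unchanged.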

My cluster configuration:

[root@vdicnode01 ~]# pcs config
Cluster Name: vdic-cluster
Corosync Nodes:
 vdicnode01-priv vdicnode02-priv
Pacemaker Nodes:
 vdicnode01-priv vdicnode02-priv

Resources:
 Resource: nfs-vdic-mgmt-vm-vip (class=ocf provider=heartbeat type=IPaddr)
  Attributes: ip=192.168.100.200 cidr_netmask=24
  Operations: start interval=0s timeout=20s (nfs-vdic-mgmt-vm-vip-start-interval-0s)
              stop interval=0s timeout=20s (nfs-vdic-mgmt-vm-vip-stop-interval-0s)
              monitor interval=10s (nfs-vdic-mgmt-vm-vip-monitor-interval-10s)
 Resource: nfs-vdic-images-vip (class=ocf provider=heartbeat type=IPaddr)
  Attributes: ip=192.168.100.201 cidr_netmask=24
  Operations: start interval=0s timeout=20s (nfs-vdic-images-vip-start-interval-0s)
              stop interval=0s timeout=20s (nfs-vdic-images-vip-stop-interval-0s)
              monitor interval=10s (nfs-vdic-images-vip-monitor-interval-10s)
 Clone: nfs_setup-clone
  Resource: nfs_setup (class=ocf provider=heartbeat type=ganesha_nfsd)
   Attributes: ha_vol_mnt=/var/run/gluster/shared_storage
   Operations: start interval=0s timeout=5s (nfs_setup-start-interval-0s)
               stop interval=0s timeout=5s (nfs_setup-stop-interval-0s)
               monitor interval=0 timeout=5s (nfs_setup-monitor-interval-0)
 Clone: nfs-mon-clone
  Resource: nfs-mon (class=ocf provider=heartbeat type=ganesha_mon)
   Operations: start interval=0s timeout=40s (nfs-mon-start-interval-0s)
               stop interval=0s timeout=40s (nfs-mon-stop-interval-0s)
                monitor interval=10s timeout=10s (nfs-mon-monitor-interval-10s)
 Clone: nfs-grace-clone
  Meta Attrs: notify=true
  Resource: nfs-grace (class=ocf provider=heartbeat type=ganesha_grace)
   Meta Attrs: notify=true
   Operations: start interval=0s timeout=40s (nfs-grace-start-interval-0s)
               stop interval=0s timeout=40s (nfs-grace-stop-interval-0s)
                monitor interval=5s timeout=10s (nfs-grace-monitor-interval-5s)
 Resource: vm-vdicone01 (class=ocf provider=heartbeat type=VirtualDomain)
  Attributes: hypervisor=qemu:///system config=/mnt/nfs-vdic-mgmt-vm/vdicone01.xml migration_transport=ssh migration_network_suffix=tcp://
  Meta Attrs: allow-migrate=true
  Utilization: cpu=1 hv_memory=512
  Operations: start interval=0s timeout=90 (vm-vdicone01-start-interval-0s)
              stop interval=0s timeout=90 (vm-vdicone01-stop-interval-0s)
              monitor interval=10 timeout=30 (vm-vdicone01-monitor-interval-10)
 Resource: vm-vdicdb01 (class=ocf provider=heartbeat type=VirtualDomain)
  Attributes: hypervisor=qemu:///system config=/mnt/nfs-vdic-mgmt-vm/vdicdb01.xml migration_transport=ssh migration_network_suffix=tcp://
  Meta Attrs: allow-migrate=true
  Utilization: cpu=1 hv_memory=512
  Operations: start interval=0s timeout=90 (vm-vdicdb01-start-interval-0s)
              stop interval=0s timeout=90 (vm-vdicdb01-stop-interval-0s)
              monitor interval=10 timeout=30 (vm-vdicdb01-monitor-interval-10)
 Resource: vm-vdicdb02 (class=ocf provider=heartbeat type=VirtualDomain)
  Attributes: hypervisor=qemu:///system config=/mnt/nfs-vdic-mgmt-vm/vdicdb02.xml migration_network_suffix=tcp:// migration_transport=ssh
  Meta Attrs: allow-migrate=true
  Utilization: cpu=1 hv_memory=512
  Operations: start interval=0s timeout=90 (vm-vdicdb02-start-interval-0s)
              stop interval=0s timeout=90 (vm-vdicdb02-stop-interval-0s)
              monitor interval=10 timeout=30 (vm-vdicdb02-monitor-interval-10)
 Resource: vm-vdicdb03 (class=ocf provider=heartbeat type=VirtualDomain)
  Attributes: hypervisor=qemu:///system config=/mnt/nfs-vdic-mgmt-vm/vdicdb03.xml migration_network_suffix=tcp:// migration_transport=ssh
  Meta Attrs: allow-migrate=true
  Utilization: cpu=1 hv_memory=512
  Operations: start interval=0s timeout=90 (vm-vdicdb03-start-interval-0s)
              stop interval=0s timeout=90 (vm-vdicdb03-stop-interval-0s)
              monitor interval=10 timeout=30 (vm-vdicdb03-monitor-interval-10)

Stonith Devices:
Fencing Levels:

Location Constraints:
  Resource: nfs-grace-clone
    Constraint: location-nfs-grace-clone
      Rule: score=-INFINITY  (id:location-nfs-grace-clone-rule)
        Expression: grace-active ne 1  (id:location-nfs-grace-clone-rule-expr)
  Resource: nfs-vdic-images-vip
    Constraint: location-nfs-vdic-images-vip
      Rule: score=-INFINITY  (id:location-nfs-vdic-images-vip-rule)
        Expression: ganesha-active ne 1  (id:location-nfs-vdic-images-vip-rule-expr)
  Resource: nfs-vdic-mgmt-vm-vip
    Constraint: location-nfs-vdic-mgmt-vm-vip
      Rule: score=-INFINITY  (id:location-nfs-vdic-mgmt-vm-vip-rule)
        Expression: ganesha-active ne 1  (id:location-nfs-vdic-mgmt-vm-vip-rule-expr)
Ordering Constraints:
  start vm-vdicdb01 then start vm-vdicone01 (kind:Mandatory) (id:order-vm-vdicdb01-vm-vdicone01-mandatory)
  start vm-vdicdb02 then start vm-vdicone01 (kind:Mandatory) (id:order-vm-vdicdb02-vm-vdicone01-mandatory)
  start vm-vdicdb03 then start vm-vdicone01 (kind:Mandatory) (id:order-vm-vdicdb03-vm-vdicone01-mandatory)
Colocation Constraints:
  nfs-vdic-mgmt-vm-vip with nfs-vdic-images-vip (score:-1) (id:colocation-nfs-vdic-mgmt-vm-vip-nfs-vdic-images-vip-INFINITY)
  vm-vdicdb03 with vm-vdicdb02 (score:-100) (id:colocation-vm-vdicdb03-vm-vdicdb02--100)
  vm-vdicdb01 with vm-vdicdb03 (score:-100) (id:colocation-vm-vdicdb01-vm-vdicdb03--100)
  vm-vdicdb01 with vm-vdicdb02 (score:-100) (id:colocation-vm-vdicdb01-vm-vdicdb02--100)
  vm-vdicone01 with vm-vdicdb01 (score:-10) (id:colocation-vm-vdicone01-vm-vdicdb01--10)
  vm-vdicone01 with vm-vdicdb02 (score:-10) (id:colocation-vm-vdicone01-vm-vdicdb02--10)
  vm-vdicone01 with vm-vdicdb03 (score:-10) (id:colocation-vm-vdicone01-vm-vdicdb03--10)
Ticket Constraints:

Alerts:
 No alerts defined

Resources Defaults:
 No defaults set
Operations Defaults:
 No defaults set

Cluster Properties:
 cluster-infrastructure: corosync
 cluster-name: vdic-cluster
 dc-version: 1.1.15-11.el7_3.2-e174ec8
 have-watchdog: false
 last-lrm-refresh: 1484610370
 start-failure-is-fatal: false
 stonith-enabled: false
Node Attributes:
 vdicnode01-priv: grace-active=1
 vdicnode02-priv: grace-active=1

Quorum:
  Options:
[root@vdicnode01 ~]#
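
(For context, the negative-score colocation constraints above are what spread
the guests across the two nodes. From memory, they were created with commands
roughly like the one below; the positional score follows the pcs 0.9 syntax on
these nodes and may differ slightly on other pcs versions:)

# Roughly how one of the -100 anti-colocations between two guests is defined:
pcs constraint colocation add vm-vdicdb01 with vm-vdicdb02 -100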


[root@vdicnode01 ~]# pcs status
Cluster name: vdic-cluster
Stack: corosync
Current DC: vdicnode02-priv (version 1.1.15-11.el7_3.2-e174ec8) - partition with quorum
Last updated: Tue Jan 17 12:25:27 2017          Last change: Tue Jan 17 12:25:09 2017 by root via crm_resource on vdicnode01-priv

2 nodes and 12 resources configured

Online: [ vdicnode01-priv vdicnode02-priv ]

Full list of resources:

 nfs-vdic-mgmt-vm-vip   (ocf::heartbeat:IPaddr):        Started vdicnode02-priv
 nfs-vdic-images-vip    (ocf::heartbeat:IPaddr):        Started vdicnode01-priv
 Clone Set: nfs_setup-clone [nfs_setup]
     Started: [ vdicnode01-priv vdicnode02-priv ]
 Clone Set: nfs-mon-clone [nfs-mon]
     Started: [ vdicnode01-priv vdicnode02-priv ]
 Clone Set: nfs-grace-clone [nfs-grace]
     Started: [ vdicnode01-priv vdicnode02-priv ]
 vm-vdicone01   (ocf::heartbeat:VirtualDomain): Started vdicnode01-priv
 vm-vdicdb01    (ocf::heartbeat:VirtualDomain): Started vdicnode02-priv
 vm-vdicdb02    (ocf::heartbeat:VirtualDomain): Started vdicnode01-priv
 vm-vdicdb03    (ocf::heartbeat:VirtualDomain): Started vdicnode02-priv

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
[root@vdicnode01 ~]#
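
(When I want the cluster itself to drive a live migration, instead of calling
virsh directly, I use pcs resource move. A quick sketch of those commands,
using vm-vdicdb01 as the example resource:)

# Let Pacemaker perform the live migration (with allow-migrate=true it calls
# the VirtualDomain agent's migrate_to/migrate_from actions):
pcs resource move vm-vdicdb01 vdicnode02-priv

# Afterwards, remove the temporary location constraint created by "move" so
# the colocation scores apply again:
pcs resource clear vm-vdicdb01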


-------------

[root@vdicnode01 ~]# pcs property list --all
Cluster Properties:
 batch-limit: 0
 cluster-delay: 60s
 cluster-infrastructure: corosync
 cluster-name: vdic-cluster
 cluster-recheck-interval: 15min
 concurrent-fencing: false
 crmd-finalization-timeout: 30min
 crmd-integration-timeout: 3min
 crmd-transition-delay: 0s
 dc-deadtime: 20s
 dc-version: 1.1.15-11.el7_3.2-e174ec8
 default-action-timeout: 20s
 default-resource-stickiness: 0
 election-timeout: 2min
 enable-acl: false
 enable-startup-probes: true
 have-watchdog: false
 is-managed-default: true
 last-lrm-refresh: 1484610370
 load-threshold: 80%
 maintenance-mode: false
 migration-limit: -1
 no-quorum-policy: stop
 node-action-limit: 0
 node-health-green: 0
 node-health-red: -INFINITY
 node-health-strategy: none
 node-health-yellow: 0
 notification-agent: /dev/null
 notification-recipient:
 pe-error-series-max: -1
 pe-input-series-max: 4000
 pe-warn-series-max: 5000
 placement-strategy: default
 remove-after-stop: false
 shutdown-escalation: 20min
 start-failure-is-fatal: false
 startup-fencing: true
 stonith-action: reboot
 stonith-enabled: false
 stonith-timeout: 60s
 stonith-watchdog-timeout: (null)
 stop-all-resources: false
 stop-orphan-actions: true
 stop-orphan-resources: true
 symmetric-cluster: true
Node Attributes:
 vdicnode01-priv: grace-active=1
 vdicnode02-priv: grace-active=1
[root@vdicnode01 ~]#





2017-01-17 11:00 GMT+01:00 emmanuel segura <emi2fast at gmail.com>:

> show your cluster configuration.
>
> 2017-01-17 10:15 GMT+01:00 Oscar Segarra <oscar.segarra at gmail.com>:
> > Hi,
> >
> > Yes, I will try to explain myself better.
> >
> > Initially
> > On node1 (vdicnode01-priv)
> >>virsh list
> > ==============
> > vdicdb01     started
> >
> > On node2 (vdicnode02-priv)
> >>virsh list
> > ==============
> > vdicdb02     started
> >
> > --> Now, I execute the migrate command (outside the cluster <-- not using
> > pcs resource move)
> > virsh migrate --live vdicdb01 qemu:/// qemu+ssh://vdicnode02-priv tcp://vdicnode02-priv
> >
> > Finally
> > On node1 (vdicnode01-priv)
> >>virsh list
> > ==============
> > vdicdb01     started
> >
> > On node2 (vdicnode02-priv)
> >>virsh list
> > ==============
> > vdicdb02     started
> > vdicdb01     started
> >
> > If I query the cluster with pcs status, the cluster thinks resource
> > vm-vdicdb01 is only started on node vdicnode01-priv.
> >
> > Thanks a lot.
> >
> >
> >
> > 2017-01-17 10:03 GMT+01:00 emmanuel segura <emi2fast at gmail.com>:
> >>
> >> sorry,
> >>
> >> But what do you mean when you say you migrated the VM outside of the
> >> cluster? One server outside of your cluster?
> >>
> >> 2017-01-17 9:27 GMT+01:00 Oscar Segarra <oscar.segarra at gmail.com>:
> >> > Hi,
> >> >
> >> > I have configured a two-node cluster where we run 4 KVM guests.
> >> >
> >> > The hosts are:
> >> > vdicnode01
> >> > vdicnode02
> >> >
> >> > And I have created a dedicated network card for cluster management. I
> >> > have created the required entries in /etc/hosts:
> >> > vdicnode01-priv
> >> > vdicnode02-priv
> >> >
> >> > The four guests have colocation rules in order to make them distribute
> >> > proportionally between my two nodes.
> >> >
> >> > The problem I have is that if I migrate a guest outside the cluster (I
> >> > mean using virsh migrate --live ...), the cluster, instead of moving the
> >> > guest back to its original node (following the colocation rules), starts
> >> > the guest again, and suddenly I have the same guest running on both
> >> > nodes, causing XFS corruption in the guest.
> >> >
> >> > Is there any configuration applicable to avoid this unwanted behavior?
> >> >
> >> > Thanks a lot
> >> >
> >>
> >>
> >>
> >> --
> >>   .~.
> >>   /V\
> >>  //  \\
> >> /(   )\
> >> ^`~'^
> >>
> >
> >
>
>
>
> --
>   .~.
>   /V\
>  //  \\
> /(   )\
> ^`~'^
>
> _______________________________________________
> Users mailing list: Users at clusterlabs.org
> http://lists.clusterlabs.org/mailman/listinfo/users
>
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: http://bugs.clusterlabs.org
>