[ClusterLabs] VirtualDomain live migration error

Oscar Segarra oscar.segarra at gmail.com
Wed Oct 18 17:10:09 EDT 2017


Hi Ken,

Finally, I've configured passwordless ssh for root, and everything has
started working fine...
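
For reference, the setup boils down to the usual openssh steps, run as
root on both nodes (a sketch; the hostnames are this cluster's):

  ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa  # if root has no key yet
  ssh-copy-id root@vdicnode02                   # from vdicnode01 (and the
                                                # reverse from vdicnode02)
  ssh root@vdicnode02 true                      # must now return without
                                                # a password prompt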

Thanks a lot,

2017-10-18 22:16 GMT+02:00 Ken Gaillot <kgaillot at redhat.com>:

> On Sat, 2017-09-02 at 01:21 +0200, Oscar Segarra wrote:
> > Hi,
> >
> > I have updated the known_hosts:
> >
> > Now, I get the following error:
> >
> > Sep 02 01:03:41 [1535] vdicnode01        cib:     info:
> > cib_perform_op: +
> >  /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resou
> > rce[@id='vm-vdicdb01']/lrm_rsc_op[@id='vm-vdicdb01_last_0']:
> >  @operation_key=vm-vdicdb01_migrate_to_0, @operation=migrate_to,
> > @crm-debug-origin=cib_action_update, @transition-key=6:27:0:a7fef266-
> > 46c3-429e-ab00-c1a0aab24da5, @transition-magic=-
> > 1:193;6:27:0:a7fef266-46c3-429e-ab00-c1a0aab24da5, @call-id=-1, @rc-
> > code=193, @op-status=-1, @last-run=1504307021, @last-rc-c
> > Sep 02 01:03:41 [1535] vdicnode01        cib:     info:
> > cib_process_request:    Completed cib_modify operation for section
> > status: OK (rc=0, origin=vdicnode01/crmd/77, version=0.169.1)
> > VirtualDomain(vm-vdicdb01)[13085]:      2017/09/02_01:03:41 INFO:
> > vdicdb01: Starting live migration to vdicnode02 (using: virsh --
> > connect=qemu:///system --quiet migrate --live  vdicdb01
> > qemu+ssh://vdicnode02/system ).
> > VirtualDomain(vm-vdicdb01)[13085]:      2017/09/02_01:03:41 ERROR:
> > vdicdb01: live migration to vdicnode02 failed: 1
> > Sep 02 01:03:41 [1537] vdicnode01       lrmd:   notice:
> > operation_finished:     vm-vdicdb01_migrate_to_0:13085:stderr [
> > error: Cannot recv data: Permission denied, please try again. ]
> > Sep 02 01:03:41 [1537] vdicnode01       lrmd:   notice:
> > operation_finished:     vm-vdicdb01_migrate_to_0:13085:stderr [
> > Permission denied, please try again. ]
> > Sep 02 01:03:41 [1537] vdicnode01       lrmd:   notice:
> > operation_finished:     vm-vdicdb01_migrate_to_0:13085:stderr [
> > Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).:
> > Connection reset by peer ]
> > Sep 02 01:03:41 [1537] vdicnode01       lrmd:   notice:
> > operation_finished:     vm-vdicdb01_migrate_to_0:13085:stderr [ ocf-
> > exit-reason:vdicdb01: live migration to vdicnode02 failed: 1 ]
> > Sep 02 01:03:41 [1537] vdicnode01       lrmd:     info: log_finished:
> >   finished - rsc:vm-vdicdb01 action:migrate_to call_id:16 pid:13085
> > exit-code:1 exec-time:119ms queue-time:0ms
> > Sep 02 01:03:41 [1540] vdicnode01       crmd:   notice:
> > process_lrm_event:      Result of migrate_to operation for vm-
> > vdicdb01 on vdicnode01: 1 (unknown error) | call=16 key=vm-
> > vdicdb01_migrate_to_0 confirmed=true cib-update=78
> > Sep 02 01:03:41 [1540] vdicnode01       crmd:   notice:
> > process_lrm_event:      vdicnode01-vm-vdicdb01_migrate_to_0:16 [
> > error: Cannot recv data: Permission denied, please try
> > again.\r\nPermission denied, please try again.\r\nPermission denied
> > (publickey,gssapi-keyex,gssapi-with-mic,password).: Connection reset
> > by peer\nocf-exit-reason:vdicdb01: live migration to vdicnode02
> > failed: 1\n ]
> > Sep 02 01:03:41 [1535] vdicnode01        cib:     info:
> > cib_process_request:    Forwarding cib_modify operation for section
> > status to all (origin=local/crmd/78)
> > Sep 02 01:03:41 [1535] vdicnode01        cib:     info:
> > cib_perform_op: Diff: --- 0.169.1 2
> > Sep 02 01:03:41 [1535] vdicnode01        cib:     info:
> > cib_perform_op: Diff: +++ 0.169.2 (null)
> > Sep 02 01:03:41 [1535] vdicnode01        cib:     info:
> > cib_perform_op: +  /cib:  @num_updates=2
> > Sep 02 01:03:41 [1535] vdicnode01        cib:     info:
> > cib_perform_op: +
> >  /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resou
> > rce[@id='vm-vdicdb01']/lrm_rsc_op[@id='vm-vdicdb01_last_0']:  @crm-
> > debug-origin=do_update_resource, @transition-
> > magic=0:1;6:27:0:a7fef266-46c3-429e-ab00-c1a0aab24da5, @call-id=16,
> > @rc-code=1, @op-status=0, @exec-time=119, @exit-reason=vdicdb01: live
> > migration to vdicnode02 failed: 1
> > Sep 02 01:03:4
> >
> > as root <-- the system prompts for a password
> > [root at vdicnode01 .ssh]# virsh --connect=qemu:///system --quiet
> > migrate --live  vdicdb01 qemu+ssh://vdicnode02/system
> > root at vdicnode02's password:
> >
> > as oneadmin (the user that runs the qemu-kvm process) <-- does not
> > prompt for a password
> > virsh --connect=qemu:///system --quiet migrate --live  vdicdb01
> > qemu+ssh://vdicnode02/system
> >
> > Must I configure a passwordless ssh connection for root in order to
> > make live migration work?
> >
> > Or is there any way to instruct pacemaker to use my oneadmin user
> > for migrations instead of root?
>
> Pacemaker calls the VirtualDomain resource agent as root, but it's up
> to the agent what to do from there. I don't see any user options in
> VirtualDomain or virsh, so I don't think there is a way currently.
>
> I see two options: configure passwordless ssh for root, or copy the
> VirtualDomain resource agent and modify it to use "sudo -u oneadmin"
> when it calls virsh.
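>
> A rough sketch of the second option, assuming the stock agent lives at
> /usr/lib/ocf/resource.d/heartbeat/VirtualDomain (the path and the exact
> virsh call sites may vary by version):
>
>   # copy the agent into a custom provider directory
>   mkdir -p /usr/lib/ocf/resource.d/custom
>   cp /usr/lib/ocf/resource.d/heartbeat/VirtualDomain \
>      /usr/lib/ocf/resource.d/custom/VirtualDomain
>   # route the copied agent's virsh calls through the oneadmin user
>   sed -i 's/\bvirsh /sudo -u oneadmin virsh /g' \
>       /usr/lib/ocf/resource.d/custom/VirtualDomain
>
> The resource would then be configured as ocf:custom:VirtualDomain
> instead of ocf:heartbeat:VirtualDomain.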
>
> We've discussed adding the capability to tell pacemaker to execute a
> resource agent as a particular user. We've already put the plumbing in
> for it, so that lrmd can execute alert agents as the hacluster user.
> All that would be needed is a new resource meta-attribute and the IPC
> API support to use it. It's low priority due to a large backlog at the
> moment, but we'd be happy to take a pull request for it. The resource
> agent would obviously have to be able to work as that user.
>
> >
> > Thanks a lot:
> >
> >
> > 2017-09-01 23:14 GMT+02:00 Ken Gaillot <kgaillot at redhat.com>:
> > > On Fri, 2017-09-01 at 00:26 +0200, Oscar Segarra wrote:
> > > > Hi,
> > > >
> > > >
> > > > Yes, it is....
> > > >
> > > >
> > > > The qemu-kvm process is executed by the oneadmin user.
> > > >
> > > >
> > > > When the cluster tries the live migration, which users come into
> > > > play?
> > > >
> > > >
> > > > oneadmin
> > > > root
> > > > hacluster
> > > >
> > > >
> > > > I have just configured a passwordless ssh connection for oneadmin.
> > > >
> > > >
> > > > Do I need to configure a passwordless ssh connection for any
> > > > other user?
> > > >
> > > >
> > > > What user executes the virsh migrate --live?
> > >
> > > The cluster executes resource actions as root.
> > >
> > > > Is there any way to check the ssh keys?
> > >
> > > I'd just log in once to the host as root from the cluster nodes,
> > > to make sure it works, and accept the host key when asked.
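> > >
> > > For example (assuming root's keys are already distributed; run from
> > > each node toward the other):
> > >
> > >   ssh root@vdicnode02 true   # accept the host key when prompted;
> > >                              # afterwards it should return silently
> > >   # or, to populate known_hosts non-interactively:
> > >   ssh-keyscan vdicnode02 >> /root/.ssh/known_hosts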
> > >
> > > >
> > > > Sorry for all these questions.
> > > >
> > > >
> > > > Thanks a lot
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > > On 1 Sep 2017 at 0:12, "Ken Gaillot" <kgaillot at redhat.com>
> > > > wrote:
> > > >         On Thu, 2017-08-31 at 23:45 +0200, Oscar Segarra wrote:
> > > >         > Hi Ken,
> > > >         >
> > > >         >
> > > >         > Thanks a lot for your quick answer:
> > > >         >
> > > >         >
> > > >         > Regarding SELinux, it is disabled. The firewall is
> > > >         > disabled as well.
> > > >         >
> > > >         >
> > > >         > [root at vdicnode01 ~]# sestatus
> > > >         > SELinux status:                 disabled
> > > >         >
> > > >         >
> > > >         > [root at vdicnode01 ~]# service firewalld status
> > > >         > Redirecting to /bin/systemctl status  firewalld.service
> > > >         > ● firewalld.service - firewalld - dynamic firewall
> > > daemon
> > > >         >    Loaded: loaded
> > > >         (/usr/lib/systemd/system/firewalld.service;
> > > >         > disabled; vendor preset: enabled)
> > > >         >    Active: inactive (dead)
> > > >         >      Docs: man:firewalld(1)
> > > >         >
> > > >         >
> > > >         > On migration, it performs a graceful shutdown and a
> > > >         > start on the new node.
> > > >         >
> > > >         >
> > > >         > I attach the logs when trying to migrate from
> > > >         > vdicnode02 to vdicnode01:
> > > >         >
> > > >         >
> > > >         > vdicnode02 corosync.log:
> > > >         > Aug 31 23:38:17 [1521] vdicnode02        cib:     info:
> > > >         > cib_perform_op: Diff: --- 0.161.2 2
> > > >         > Aug 31 23:38:17 [1521] vdicnode02        cib:     info:
> > > >         > cib_perform_op: Diff: +++ 0.162.0 (null)
> > > >         > Aug 31 23:38:17 [1521] vdicnode02        cib:     info:
> > > >         > cib_perform_op:
> > > >         >
> > > >         --
> > > /cib/configuration/constraints/rsc_location[@id='location-vm-
> > > vdicdb01-vdicnode01--INFINITY']
> > > >         > Aug 31 23:38:17 [1521] vdicnode02        cib:     info:
> > > >         > cib_perform_op: +  /cib:  @epoch=162, @num_updates=0
> > > >         > Aug 31 23:38:17 [1521] vdicnode02        cib:     info:
> > > >         > cib_process_request:    Completed cib_replace operation
> > > for
> > > >         section
> > > >         > configuration: OK (rc=0, origin=vdicnode01/cibadmin/2,
> > > >         > version=0.162.0)
> > > >         > Aug 31 23:38:17 [1521] vdicnode02        cib:     info:
> > > >         > cib_file_backup:        Archived previous version
> > > >         > as /var/lib/pacemaker/cib/cib-65.raw
> > > >         > Aug 31 23:38:17 [1521] vdicnode02        cib:     info:
> > > >         > cib_file_write_with_digest:     Wrote version 0.162.0
> > > of the
> > > >         CIB to
> > > >         > disk (digest: 1f87611b60cd7c48b95b6b788b47f65f)
> > > >         > Aug 31 23:38:17 [1521] vdicnode02        cib:     info:
> > > >         > cib_file_write_with_digest:     Reading cluster
> > > >         configuration
> > > >         > file /var/lib/pacemaker/cib/cib.jt2KPw
> > > >         > (digest: /var/lib/pacemaker/cib/cib.Kwqfpl)
> > > >         > Aug 31 23:38:22 [1521] vdicnode02        cib:     info:
> > > >         > cib_process_ping:       Reporting our current digest to
> > > >         vdicnode01:
> > > >         > dace3a23264934279d439420d5a716cc for 0.162.0
> > > (0x7f96bb26c5c0
> > > >         0)
> > > >         > Aug 31 23:38:27 [1521] vdicnode02        cib:     info:
> > > >         > cib_perform_op: Diff: --- 0.162.0 2
> > > >         > Aug 31 23:38:27 [1521] vdicnode02        cib:     info:
> > > >         > cib_perform_op: Diff: +++ 0.163.0 (null)
> > > >         > Aug 31 23:38:27 [1521] vdicnode02        cib:     info:
> > > >         > cib_perform_op: +  /cib:  @epoch=163
> > > >         > Aug 31 23:38:27 [1521] vdicnode02        cib:     info:
> > > >         > cib_perform_op: ++ /cib/configuration/constraints:
> > > >         <rsc_location
> > > >         > id="location-vm-vdicdb01-vdicnode02--INFINITY"
> > > >         node="vdicnode02"
> > > >         > rsc="vm-vdicdb01" score="-INFINITY"/>
> > > >         > Aug 31 23:38:27 [1521] vdicnode02        cib:     info:
> > > >         > cib_process_request:    Completed cib_replace operation
> > > for
> > > >         section
> > > >         > configuration: OK (rc=0, origin=vdicnode01/cibadmin/2,
> > > >         > version=0.163.0)
> > > >         > Aug 31 23:38:27 [1521] vdicnode02        cib:     info:
> > > >         > cib_file_backup:        Archived previous version
> > > >         > as /var/lib/pacemaker/cib/cib-66.raw
> > > >         > Aug 31 23:38:27 [1521] vdicnode02        cib:     info:
> > > >         > cib_file_write_with_digest:     Wrote version 0.163.0
> > > of the
> > > >         CIB to
> > > >         > disk (digest: 47a548b36746de9275d66cc6aeb0fdc4)
> > > >         > Aug 31 23:38:27 [1521] vdicnode02        cib:     info:
> > > >         > cib_file_write_with_digest:     Reading cluster
> > > >         configuration
> > > >         > file /var/lib/pacemaker/cib/cib.rcgXiT
> > > >         > (digest: /var/lib/pacemaker/cib/cib.7geMfi)
> > > >         > Aug 31 23:38:27 [1523] vdicnode02       lrmd:     info:
> > > >         > cancel_recurring_action:        Cancelling ocf
> > > operation
> > > >         > vm-vdicdb01_monitor_10000
> > > >         > Aug 31 23:38:27 [1526] vdicnode02       crmd:     info:
> > > >         > do_lrm_rsc_op:  Performing
> > > >         > key=6:6:0:fe1a9b0a-816c-4b97-96cb-b90dbf71417a
> > > >         > op=vm-vdicdb01_migrate_to_0
> > > >         > Aug 31 23:38:27 [1523] vdicnode02       lrmd:     info:
> > > >         log_execute:
> > > >         >    executing - rsc:vm-vdicdb01 action:migrate_to
> > > call_id:9
> > > >         > Aug 31 23:38:27 [1526] vdicnode02       crmd:     info:
> > > >         > process_lrm_event:      Result of monitor operation for
> > > >         vm-vdicdb01 on
> > > >         > vdicnode02: Cancelled | call=7 key=vm-
> > > vdicdb01_monitor_10000
> > > >         > confirmed=true
> > > >         > Aug 31 23:38:27 [1521] vdicnode02        cib:     info:
> > > >         > cib_perform_op: Diff: --- 0.163.0 2
> > > >         > Aug 31 23:38:27 [1521] vdicnode02        cib:     info:
> > > >         > cib_perform_op: Diff: +++ 0.163.1 (null)
> > > >         > Aug 31 23:38:27 [1521] vdicnode02        cib:     info:
> > > >         > cib_perform_op: +  /cib:  @num_updates=1
> > > >         > Aug 31 23:38:27 [1521] vdicnode02        cib:     info:
> > > >         > cib_perform_op: +
> > > >         >
> > > /cib/status/node_state[@id='2']/lrm[@id='2']/lrm_resources/lrm_reso
> > > urce[@id='vm-vdicdb01']/lrm_rsc_op[@id='vm-vdicdb01_last_0']:
> > > @operation_key=vm-vdicdb01_migrate_to_0, @operation=migrate_to,
> > > @crm-debug-origin=cib_action_update, @transition-
> > > key=6:6:0:fe1a9b0a-816c-4b97-96cb-b90dbf71417a, @transition-magic=-
> > > 1:193;6:6:0:fe1a9b0a-816c-4b97-96cb-b90dbf71417a, @call-id=-1, @rc-
> > > code=193, @op-status=-1, @last-run=1504215507, @last-rc-cha
> > > >         > Aug 31 23:38:27 [1521] vdicnode02        cib:     info:
> > > >         > cib_process_request:    Completed cib_modify operation
> > > for
> > > >         section
> > > >         > status: OK (rc=0, origin=vdicnode01/crmd/41,
> > > >         version=0.163.1)
> > > >         > VirtualDomain(vm-vdicdb01)[5241]:
> > >  2017/08/31_23:38:27
> > > >         INFO:
> > > >         > vdicdb01: Starting live migration to vdicnode01 (using:
> > > >         virsh
> > > >         > --connect=qemu:///system --quiet migrate --live
> > > vdicdb01
> > > >         qemu
> > > >         > +ssh://vdicnode01/system ).
> > > >         > VirtualDomain(vm-vdicdb01)[5241]:
> > >  2017/08/31_23:38:27
> > > >         ERROR:
> > > >         > vdicdb01: live migration to vdicnode01 failed: 1
> > > >         > Aug 31 23:38:27 [1523] vdicnode02       lrmd:   notice:
> > > >         > operation_finished:     vm-
> > > vdicdb01_migrate_to_0:5241:stderr
> > > >         [ error:
> > > >         > Cannot recv data: Host key verification failed.:
> > > Connection
> > > >         reset by
> > > >         > peer ]
> > > >
> > > >
> > > >         ^^^ There you go. Sounds like the ssh key isn't being
> > > >         accepted. No idea why though.
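> > > >
> > > >         Since the error above is "Host key verification failed",
> > > >         one way to narrow it down is to run the same ssh step by
> > > >         hand as root (a debugging sketch, not taken from the log):
> > > >
> > > >           ssh -v root@vdicnode01 true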
> > > >
> > > >
> > > >
> > > >         > Aug 31 23:38:27 [1523] vdicnode02       lrmd:   notice:
> > > >         > operation_finished:     vm-
> > > vdicdb01_migrate_to_0:5241:stderr
> > > >         > [ ocf-exit-reason:vdicdb01: live migration to
> > > vdicnode01
> > > >         failed: 1 ]
> > > >         > Aug 31 23:38:27 [1523] vdicnode02       lrmd:     info:
> > > >         log_finished:
> > > >         > finished - rsc:vm-vdicdb01 action:migrate_to call_id:9
> > > >         pid:5241
> > > >         > exit-code:1 exec-time:78ms queue-time:0ms
> > > >         > Aug 31 23:38:27 [1526] vdicnode02       crmd:   notice:
> > > >         > process_lrm_event:      Result of migrate_to operation
> > > for
> > > >         vm-vdicdb01
> > > >         > on vdicnode02: 1 (unknown error) | call=9
> > > >         key=vm-vdicdb01_migrate_to_0
> > > >         > confirmed=true cib-update=14
> > > >         > Aug 31 23:38:27 [1526] vdicnode02       crmd:   notice:
> > > >         > process_lrm_event:
> > > >         vdicnode02-vm-vdicdb01_migrate_to_0:9 [ error:
> > > >         > Cannot recv data: Host key verification failed.:
> > > Connection
> > > >         reset by
> > > >         > peer\nocf-exit-reason:vdicdb01: live migration to
> > > vdicnode01
> > > >         failed: 1
> > > >         > \n ]
> > > >         > Aug 31 23:38:27 [1521] vdicnode02        cib:     info:
> > > >         > cib_process_request:    Forwarding cib_modify operation
> > > for
> > > >         section
> > > >         > status to all (origin=local/crmd/14)
> > > >         > Aug 31 23:38:27 [1521] vdicnode02        cib:     info:
> > > >         > cib_perform_op: Diff: --- 0.163.1 2
> > > >         > Aug 31 23:38:27 [1521] vdicnode02        cib:     info:
> > > >         > cib_perform_op: Diff: +++ 0.163.2 (null)
> > > >         > Aug 31 23:38:27 [1521] vdicnode02        cib:     info:
> > > >         > cib_perform_op: +  /cib:  @num_updates=2
> > > >         > Aug 31 23:38:27 [1521] vdicnode02        cib:     info:
> > > >         > cib_perform_op: +
> > > >         >
> > > /cib/status/node_state[@id='2']/lrm[@id='2']/lrm_resources/lrm_reso
> > > urce[@id='vm-vdicdb01']/lrm_rsc_op[@id='vm-vdicdb01_last_0']:
> > > @crm-debug-origin=do_update_resource, @transition-
> > > magic=0:1;6:6:0:fe1a9b0a-816c-4b97-96cb-b90dbf71417a, @call-id=9,
> > > @rc-code=1, @op-status=0, @exec-time=78, @exit-reason=vdicdb01:
> > > live migration to vdicnode01 failed: 1
> > > >         > Aug 31 23:38:27 [1521] vdicnode02        cib:     info:
> > > >         > cib_perform_op:
> > > >         >
> > > >         ++
> > > /cib/status/node_state[@id='2']/lrm[@id='2']/lrm_resources/lrm_reso
> > > urce[@id='vm-vdicdb01']:  <lrm_rsc_op id="vm-
> > > vdicdb01_last_failure_0" operation_key="vm-vdicdb01_migrate_to_0"
> > > operation="migrate_to" crm-debug-origin="do_update_resource"
> > > crm_feature_set="3.0.10" transition-key="6:6:0:fe1a9b0a-816c-4b97-
> > > 96cb-b90dbf71417a" transition-magic="0:1;6:6:0:fe1a9b0a-816c-4b97-
> > > 96cb-b90dbf71417a" exit-reason="vdicdb01: live migration to vdicn
> > > >         > Aug 31 23:38:27 [1521] vdicnode02        cib:     info:
> > > >         > cib_process_request:    Completed cib_modify operation
> > > for
> > > >         section
> > > >         > status: OK (rc=0, origin=vdicnode02/crmd/14,
> > > >         version=0.163.2)
> > > >         > Aug 31 23:38:27 [1526] vdicnode02       crmd:     info:
> > > >         > do_lrm_rsc_op:  Performing
> > > >         > key=2:7:0:fe1a9b0a-816c-4b97-96cb-b90dbf71417a
> > > >         op=vm-vdicdb01_stop_0
> > > >         > Aug 31 23:38:27 [1523] vdicnode02       lrmd:     info:
> > > >         log_execute:
> > > >         >    executing - rsc:vm-vdicdb01 action:stop call_id:10
> > > >         > VirtualDomain(vm-vdicdb01)[5285]:
> > >  2017/08/31_23:38:27
> > > >         INFO:
> > > >         > Issuing graceful shutdown request for domain vdicdb01.
> > > >         > Aug 31 23:38:27 [1521] vdicnode02        cib:     info:
> > > >         > cib_perform_op: Diff: --- 0.163.2 2
> > > >         > Aug 31 23:38:27 [1521] vdicnode02        cib:     info:
> > > >         > cib_perform_op: Diff: +++ 0.163.3 (null)
> > > >         > Aug 31 23:38:27 [1521] vdicnode02        cib:     info:
> > > >         > cib_perform_op: +  /cib:  @num_updates=3
> > > >         > Aug 31 23:38:27 [1521] vdicnode02        cib:     info:
> > > >         > cib_perform_op: +
> > > >         >
> > > /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_reso
> > > urce[@id='vm-vdicdb01']/lrm_rsc_op[@id='vm-vdicdb01_last_0']:
> > > @operation_key=vm-vdicdb01_stop_0, @operation=stop, @transition-
> > > key=4:7:0:fe1a9b0a-816c-4b97-96cb-b90dbf71417a, @transition-
> > > magic=0:0;4:7:0:fe1a9b0a-816c-4b97-96cb-b90dbf71417a, @call-id=6,
> > > @rc-code=0, @last-run=1504215507, @last-rc-change=1504215507,
> > > @exec-time=57
> > > >         > Aug 31 23:38:27 [1521] vdicnode02        cib:     info:
> > > >         > cib_process_request:    Completed cib_modify operation
> > > for
> > > >         section
> > > >         > status: OK (rc=0, origin=vdicnode01/crmd/43,
> > > >         version=0.163.3)
> > > >         > Aug 31 23:38:30 [1523] vdicnode02       lrmd:     info:
> > > >         log_finished:
> > > >         > finished - rsc:vm-vdicdb01 action:stop call_id:10
> > > pid:5285
> > > >         exit-code:0
> > > >         > exec-time:3159ms queue-time:0ms
> > > >         > Aug 31 23:38:30 [1526] vdicnode02       crmd:   notice:
> > > >         > process_lrm_event:      Result of stop operation for
> > > >         vm-vdicdb01 on
> > > >         > vdicnode02: 0 (ok) | call=10 key=vm-vdicdb01_stop_0
> > > >         confirmed=true
> > > >         > cib-update=15
> > > >         > Aug 31 23:38:30 [1521] vdicnode02        cib:     info:
> > > >         > cib_process_request:    Forwarding cib_modify operation
> > > for
> > > >         section
> > > >         > status to all (origin=local/crmd/15)
> > > >         > Aug 31 23:38:30 [1521] vdicnode02        cib:     info:
> > > >         > cib_perform_op: Diff: --- 0.163.3 2
> > > >         > Aug 31 23:38:30 [1521] vdicnode02        cib:     info:
> > > >         > cib_perform_op: Diff: +++ 0.163.4 (null)
> > > >         > Aug 31 23:38:30 [1521] vdicnode02        cib:     info:
> > > >         > cib_perform_op: +  /cib:  @num_updates=4
> > > >         > Aug 31 23:38:30 [1521] vdicnode02        cib:     info:
> > > >         > cib_perform_op: +
> > > >         >
> > > /cib/status/node_state[@id='2']/lrm[@id='2']/lrm_resources/lrm_reso
> > > urce[@id='vm-vdicdb01']/lrm_rsc_op[@id='vm-vdicdb01_last_0']:
> > > @operation_key=vm-vdicdb01_stop_0, @operation=stop, @transition-
> > > key=2:7:0:fe1a9b0a-816c-4b97-96cb-b90dbf71417a, @transition-
> > > magic=0:0;2:7:0:fe1a9b0a-816c-4b97-96cb-b90dbf71417a, @call-id=10,
> > > @rc-code=0, @exec-time=3159
> > > >         > Aug 31 23:38:30 [1521] vdicnode02        cib:     info:
> > > >         > cib_process_request:    Completed cib_modify operation
> > > for
> > > >         section
> > > >         > status: OK (rc=0, origin=vdicnode02/crmd/15,
> > > >         version=0.163.4)
> > > >         > Aug 31 23:38:31 [1521] vdicnode02        cib:     info:
> > > >         > cib_perform_op: Diff: --- 0.163.4 2
> > > >         > Aug 31 23:38:31 [1521] vdicnode02        cib:     info:
> > > >         > cib_perform_op: Diff: +++ 0.163.5 (null)
> > > >         > Aug 31 23:38:31 [1521] vdicnode02        cib:     info:
> > > >         > cib_perform_op: +  /cib:  @num_updates=5
> > > >         > Aug 31 23:38:31 [1521] vdicnode02        cib:     info:
> > > >         > cib_perform_op: +
> > > >         >
> > > /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_reso
> > > urce[@id='vm-vdicdb01']/lrm_rsc_op[@id='vm-vdicdb01_last_0']:
> > > @operation_key=vm-vdicdb01_start_0, @operation=start, @transition-
> > > key=5:7:0:fe1a9b0a-816c-4b97-96cb-b90dbf71417a, @transition-
> > > magic=0:0;5:7:0:fe1a9b0a-816c-4b97-96cb-b90dbf71417a, @call-id=7,
> > > @last-run=1504215510, @last-rc-change=1504215510, @exec-time=528
> > > >         > Aug 31 23:38:31 [1521] vdicnode02        cib:     info:
> > > >         > cib_process_request:    Completed cib_modify operation
> > > for
> > > >         section
> > > >         > status: OK (rc=0, origin=vdicnode01/crmd/44,
> > > >         version=0.163.5)
> > > >         > Aug 31 23:38:31 [1521] vdicnode02        cib:     info:
> > > >         > cib_perform_op: Diff: --- 0.163.5 2
> > > >         > Aug 31 23:38:31 [1521] vdicnode02        cib:     info:
> > > >         > cib_perform_op: Diff: +++ 0.163.6 (null)
> > > >         > Aug 31 23:38:31 [1521] vdicnode02        cib:     info:
> > > >         > cib_perform_op: +  /cib:  @num_updates=6
> > > >         > Aug 31 23:38:31 [1521] vdicnode02        cib:     info:
> > > >         > cib_perform_op:
> > > >         >
> > > >         ++
> > > /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_reso
> > > urce[@id='vm-vdicdb01']:  <lrm_rsc_op id="vm-
> > > vdicdb01_monitor_10000" operation_key="vm-vdicdb01_monitor_10000"
> > > operation="monitor" crm-debug-origin="do_update_resource"
> > > crm_feature_set="3.0.10" transition-key="6:7:0:fe1a9b0a-816c-4b97-
> > > 96cb-b90dbf71417a" transition-magic="0:0;6:7:0:fe1a9b0a-816c-4b97-
> > > 96cb-b90dbf71417a" on_node="vdicnode01" call-id="8" rc-code="0" op-
> > > s
> > > >         > Aug 31 23:38:31 [1521] vdicnode02        cib:     info:
> > > >         > cib_process_request:    Completed cib_modify operation
> > > for
> > > >         section
> > > >         > status: OK (rc=0, origin=vdicnode01/crmd/45,
> > > >         version=0.163.6)
> > > >         > Aug 31 23:38:36 [1521] vdicnode02        cib:     info:
> > > >         > cib_process_ping:       Reporting our current digest to
> > > >         vdicnode01:
> > > >         > 9141ea9880f5a44b133003982d863bc8 for 0.163.6
> > > (0x7f96bb2625a0
> > > >         0)
> > > >         >
> > > >         >
> > > >         >
> > > >         >
> > > >         >
> > > >         >
> > > >         > vdicnode01 - corosync.log
> > > >         > Aug 31 23:38:27 [1531] vdicnode01        cib:     info:
> > > >         > cib_process_request:    Forwarding cib_replace
> > > operation for
> > > >         section
> > > >         > configuration to all (origin=local/cibadmin/2)
> > > >         > Aug 31 23:38:27 [1531] vdicnode01        cib:     info:
> > > >         > cib_perform_op: Diff: --- 0.162.0 2
> > > >         > Aug 31 23:38:27 [1531] vdicnode01        cib:     info:
> > > >         > cib_perform_op: Diff: +++ 0.163.0 (null)
> > > >         > Aug 31 23:38:27 [1531] vdicnode01        cib:     info:
> > > >         > cib_perform_op: +  /cib:  @epoch=163
> > > >         > Aug 31 23:38:27 [1531] vdicnode01        cib:     info:
> > > >         > cib_perform_op: ++ /cib/configuration/constraints:
> > > >         <rsc_location
> > > >         > id="location-vm-vdicdb01-vdicnode02--INFINITY"
> > > >         node="vdicnode02"
> > > >         > rsc="vm-vdicdb01" score="-INFINITY"/>
> > > >         > Aug 31 23:38:27 [1531] vdicnode01        cib:     info:
> > > >         > cib_process_request:    Completed cib_replace operation
> > > for
> > > >         section
> > > >         > configuration: OK (rc=0, origin=vdicnode01/cibadmin/2,
> > > >         > version=0.163.0)
> > > >         > Aug 31 23:38:27 [1536] vdicnode01       crmd:     info:
> > > >         > abort_transition_graph: Transition aborted by
> > > >         > rsc_location.location-vm-vdicdb01-vdicnode02--INFINITY
> > > >         'create':
> > > >         > Non-status change | cib=0.163.0
> > > source=te_update_diff:436
> > > >         > path=/cib/configuration/constraints complete=true
> > > >         > Aug 31 23:38:27 [1536] vdicnode01       crmd:   notice:
> > > >         > do_state_transition:    State transition S_IDLE ->
> > > >         S_POLICY_ENGINE |
> > > >         > input=I_PE_CALC cause=C_FSA_INTERNAL
> > > >         origin=abort_transition_graph
> > > >         > Aug 31 23:38:27 [1531] vdicnode01        cib:     info:
> > > >         > cib_file_backup:        Archived previous version
> > > >         > as /var/lib/pacemaker/cib/cib-85.raw
> > > >         > Aug 31 23:38:27 [1531] vdicnode01        cib:     info:
> > > >         > cib_file_write_with_digest:     Wrote version 0.163.0
> > > of the
> > > >         CIB to
> > > >         > disk (digest: 47a548b36746de9275d66cc6aeb0fdc4)
> > > >         > Aug 31 23:38:27 [1531] vdicnode01        cib:     info:
> > > >         > cib_file_write_with_digest:     Reading cluster
> > > >         configuration
> > > >         > file /var/lib/pacemaker/cib/cib.npBIW2
> > > >         > (digest: /var/lib/pacemaker/cib/cib.bDogoB)
> > > >         > Aug 31 23:38:27 [1535] vdicnode01    pengine:     info:
> > > >         > determine_online_status:        Node vdicnode02 is
> > > online
> > > >         > Aug 31 23:38:27 [1535] vdicnode01    pengine:     info:
> > > >         > determine_online_status:        Node vdicnode01 is
> > > online
> > > >         > Aug 31 23:38:27 [1535] vdicnode01    pengine:     info:
> > > >         native_print:
> > > >         > vm-vdicdb01     (ocf::heartbeat:VirtualDomain): Started
> > > >         vdicnode02
> > > >         > Aug 31 23:38:27 [1535] vdicnode01    pengine:     info:
> > > >         RecurringOp:
> > > >         > Start recurring monitor (10s) for vm-vdicdb01 on
> > > vdicnode01
> > > >         > Aug 31 23:38:27 [1535] vdicnode01    pengine:   notice:
> > > >         LogActions:
> > > >         > Migrate vm-vdicdb01     (Started vdicnode02 ->
> > > vdicnode01)
> > > >         > Aug 31 23:38:27 [1535] vdicnode01    pengine:   notice:
> > > >         > process_pe_message:     Calculated transition 6, saving
> > > >         inputs
> > > >         > in /var/lib/pacemaker/pengine/pe-input-96.bz2
> > > >         > Aug 31 23:38:27 [1536] vdicnode01       crmd:     info:
> > > >         > do_state_transition:    State transition
> > > S_POLICY_ENGINE ->
> > > >         > S_TRANSITION_ENGINE | input=I_PE_SUCCESS
> > > cause=C_IPC_MESSAGE
> > > >         > origin=handle_response
> > > >         > Aug 31 23:38:27 [1536] vdicnode01       crmd:     info:
> > > >         do_te_invoke:
> > > >         > Processing graph 6 (ref=pe_calc-dc-1504215507-24)
> > > derived
> > > >         > from /var/lib/pacemaker/pengine/pe-input-96.bz2
> > > >         > Aug 31 23:38:27 [1536] vdicnode01       crmd:   notice:
> > > >         > te_rsc_command: Initiating migrate_to operation
> > > >         > vm-vdicdb01_migrate_to_0 on vdicnode02 | action 6
> > > >         > Aug 31 23:38:27 [1536] vdicnode01       crmd:     info:
> > > >         > create_operation_update:        cib_action_update:
> > > Updating
> > > >         resource
> > > >         > vm-vdicdb01 after migrate_to op pending (interval=0)
> > > >         > Aug 31 23:38:27 [1531] vdicnode01        cib:     info:
> > > >         > cib_process_request:    Forwarding cib_modify operation
> > > for
> > > >         section
> > > >         > status to all (origin=local/crmd/41)
> > > >         > Aug 31 23:38:27 [1531] vdicnode01        cib:     info:
> > > >         > cib_perform_op: Diff: --- 0.163.0 2
> > > >         > Aug 31 23:38:27 [1531] vdicnode01        cib:     info:
> > > >         > cib_perform_op: Diff: +++ 0.163.1 (null)
> > > >         > Aug 31 23:38:27 [1531] vdicnode01        cib:     info:
> > > >         > cib_perform_op: +  /cib:  @num_updates=1
> > > >         > Aug 31 23:38:27 [1531] vdicnode01        cib:     info:
> > > >         > cib_perform_op: +
> > > >         >
> > > /cib/status/node_state[@id='2']/lrm[@id='2']/lrm_resources/lrm_reso
> > > urce[@id='vm-vdicdb01']/lrm_rsc_op[@id='vm-vdicdb01_last_0']:
> > > @operation_key=vm-vdicdb01_migrate_to_0, @operation=migrate_to,
> > > @crm-debug-origin=cib_action_update, @transition-
> > > key=6:6:0:fe1a9b0a-816c-4b97-96cb-b90dbf71417a, @transition-magic=-
> > > 1:193;6:6:0:fe1a9b0a-816c-4b97-96cb-b90dbf71417a, @call-id=-1, @rc-
> > > code=193, @op-status=-1, @last-run=1504215507, @last-rc-cha
> > > >         > Aug 31 23:38:27 [1531] vdicnode01        cib:     info:
> > > >         > cib_process_request:    Completed cib_modify operation
> > > for
> > > >         section
> > > >         > status: OK (rc=0, origin=vdicnode01/crmd/41,
> > > >         version=0.163.1)
> > > >         > Aug 31 23:38:27 [1531] vdicnode01        cib:     info:
> > > >         > cib_perform_op: Diff: --- 0.163.1 2
> > > >         > Aug 31 23:38:27 [1531] vdicnode01        cib:     info:
> > > >         > cib_perform_op: Diff: +++ 0.163.2 (null)
> > > >         > Aug 31 23:38:27 [1531] vdicnode01        cib:     info:
> > > >         > cib_perform_op: +  /cib:  @num_updates=2
> > > >         > Aug 31 23:38:27 [1531] vdicnode01        cib:     info:
> > > >         > cib_perform_op: +
> > > >         >
> > > /cib/status/node_state[@id='2']/lrm[@id='2']/lrm_resources/lrm_reso
> > > urce[@id='vm-vdicdb01']/lrm_rsc_op[@id='vm-vdicdb01_last_0']:
> > > @crm-debug-origin=do_update_resource, @transition-
> > > magic=0:1;6:6:0:fe1a9b0a-816c-4b97-96cb-b90dbf71417a, @call-id=9,
> > > @rc-code=1, @op-status=0, @exec-time=78, @exit-reason=vdicdb01:
> > > live migration to vdicnode01 failed: 1
> > > >         > Aug 31 23:38:27 [1531] vdicnode01        cib:     info:
> > > >         > cib_perform_op:
> > > >         >
> > > >         ++
> > > /cib/status/node_state[@id='2']/lrm[@id='2']/lrm_resources/lrm_reso
> > > urce[@id='vm-vdicdb01']:  <lrm_rsc_op id="vm-
> > > vdicdb01_last_failure_0" operation_key="vm-vdicdb01_migrate_to_0"
> > > operation="migrate_to" crm-debug-origin="do_update_resource"
> > > crm_feature_set="3.0.10" transition-key="6:6:0:fe1a9b0a-816c-4b97-
> > > 96cb-b90dbf71417a" transition-magic="0:1;6:6:0:fe1a9b0a-816c-4b97-
> > > 96cb-b90dbf71417a" exit-reason="vdicdb01: live migration to vdicn
> > > >         > Aug 31 23:38:27 [1531] vdicnode01        cib:     info:
> > > >         > cib_process_request:    Completed cib_modify operation
> > > for
> > > >         section
> > > >         > status: OK (rc=0, origin=vdicnode02/crmd/14,
> > > >         version=0.163.2)
> > > >         > Aug 31 23:38:27 [1536] vdicnode01       crmd:  warning:
> > > >         > status_from_rc: Action 6 (vm-vdicdb01_migrate_to_0) on
> > > >         vdicnode02
> > > >         > failed (target: 0 vs. rc: 1): Error
> > > >         > Aug 31 23:38:27 [1536] vdicnode01       crmd:   notice:
> > > >         > abort_transition_graph: Transition aborted by operation
> > > >         > vm-vdicdb01_migrate_to_0 'modify' on vdicnode02: Event
> > > >         failed |
> > > >         > magic=0:1;6:6:0:fe1a9b0a-816c-4b97-96cb-b90dbf71417a
> > > >         cib=0.163.2
> > > >         > source=match_graph_event:310 complete=false
> > > >         > Aug 31 23:38:27 [1536] vdicnode01       crmd:     info:
> > > >         > match_graph_event:      Action vm-vdicdb01_migrate_to_0
> > > (6)
> > > >         confirmed
> > > >         > on vdicnode02 (rc=1)
> > > >         > Aug 31 23:38:27 [1536] vdicnode01       crmd:     info:
> > > >         > process_graph_event:    Detected action (6.6)
> > > >         > vm-vdicdb01_migrate_to_0.9=unknown error: failed
> > > >         > Aug 31 23:38:27 [1536] vdicnode01       crmd:  warning:
> > > >         > status_from_rc: Action 6 (vm-vdicdb01_migrate_to_0) on
> > > >         vdicnode02
> > > >         > failed (target: 0 vs. rc: 1): Error
> > > >         > Aug 31 23:38:27 [1536] vdicnode01       crmd:     info:
> > > >         > abort_transition_graph: Transition aborted by operation
> > > >         > vm-vdicdb01_migrate_to_0 'create' on vdicnode02: Event
> > > >         failed |
> > > >         > magic=0:1;6:6:0:fe1a9b0a-816c-4b97-96cb-b90dbf71417a
> > > >         cib=0.163.2
> > > >         > source=match_graph_event:310 complete=false
> > > >         > Aug 31 23:38:27 [1536] vdicnode01       crmd:     info:
> > > >         > match_graph_event:      Action vm-vdicdb01_migrate_to_0
> > > (6)
> > > >         confirmed
> > > >         > on vdicnode02 (rc=1)
> > > >         > Aug 31 23:38:27 [1536] vdicnode01       crmd:     info:
> > > >         > process_graph_event:    Detected action (6.6)
> > > >         > vm-vdicdb01_migrate_to_0.9=unknown error: failed
> > > >         > Aug 31 23:38:27 [1536] vdicnode01       crmd:   notice:
> > > >         run_graph:
> > > >         >    Transition 6 (Complete=1, Pending=0, Fired=0,
> > > Skipped=0,
> > > >         > Incomplete=5,
> > > >         Source=/var/lib/pacemaker/pengine/pe-input-96.bz2):
> > > >         > Complete
> > > >         > Aug 31 23:38:27 [1536] vdicnode01       crmd:     info:
> > > >         > do_state_transition:    State transition
> > > S_TRANSITION_ENGINE
> > > >         ->
> > > >         > S_POLICY_ENGINE | input=I_PE_CALC cause=C_FSA_INTERNAL
> > > >         > origin=notify_crmd
> > > >         > Aug 31 23:38:27 [1535] vdicnode01    pengine:     info:
> > > >         > determine_online_status:        Node vdicnode02 is
> > > online
> > > >         > Aug 31 23:38:27 [1535] vdicnode01    pengine:     info:
> > > >         > determine_online_status:        Node vdicnode01 is
> > > online
> > > >         > Aug 31 23:38:27 [1535] vdicnode01    pengine:  warning:
> > > >         > unpack_rsc_op_failure:  Processing failed op migrate_to
> > > for
> > > >         > vm-vdicdb01 on vdicnode02: unknown error (1)
> > > >         > Aug 31 23:38:27 [1535] vdicnode01    pengine:  warning:
> > > >         > unpack_rsc_op_failure:  Processing failed op migrate_to
> > > for
> > > >         > vm-vdicdb01 on vdicnode02: unknown error (1)
> > > >         > Aug 31 23:38:27 [1535] vdicnode01    pengine:     info:
> > > >         native_print:
> > > >         > vm-vdicdb01     (ocf::heartbeat:VirtualDomain): FAILED
> > > >         > Aug 31 23:38:27 [1535] vdicnode01    pengine:     info:
> > > >         native_print:
> > > >         > 1 : vdicnode01
> > > >         > Aug 31 23:38:27 [1535] vdicnode01    pengine:     info:
> > > >         native_print:
> > > >         > 2 : vdicnode02
> > > >         > Aug 31 23:38:27 [1535] vdicnode01    pengine:    error:
> > > >         > native_create_actions:  Resource vm-vdicdb01
> > > >         (ocf::VirtualDomain) is
> > > >         > active on 2 nodes attempting recovery
> > > >         > Aug 31 23:38:27 [1535] vdicnode01    pengine:  warning:
> > > >         > native_create_actions:  See
> > > >         > http://clusterlabs.org/wiki/FAQ#Resource_is_Too_Active
> > > for
> > > >         more
> > > >         > information.
> > > >         > Aug 31 23:38:27 [1535] vdicnode01    pengine:     info:
> > > >         RecurringOp:
> > > >         > Start recurring monitor (10s) for vm-vdicdb01 on
> > > vdicnode01
> > > >         > Aug 31 23:38:27 [1535] vdicnode01    pengine:   notice:
> > > >         LogActions:
> > > >         > Recover vm-vdicdb01     (Started vdicnode01)
> > > >         > Aug 31 23:38:27 [1535] vdicnode01    pengine:    error:
> > > >         > process_pe_message:     Calculated transition 7 (with
> > > >         errors), saving
> > > >         > inputs in /var/lib/pacemaker/pengine/pe-error-8.bz2
> > > >         > Aug 31 23:38:27 [1536] vdicnode01       crmd:     info:
> > > >         > do_state_transition:    State transition
> > > S_POLICY_ENGINE ->
> > > >         > S_TRANSITION_ENGINE | input=I_PE_SUCCESS
> > > cause=C_IPC_MESSAGE
> > > >         > origin=handle_response
> > > >         > Aug 31 23:38:27 [1536] vdicnode01       crmd:     info:
> > > >         do_te_invoke:
> > > >         > Processing graph 7 (ref=pe_calc-dc-1504215507-26)
> > > derived
> > > >         > from /var/lib/pacemaker/pengine/pe-error-8.bz2
> > > >         > Aug 31 23:38:27 [1536] vdicnode01       crmd:   notice:
> > > >         > te_rsc_command: Initiating stop operation vm-
> > > vdicdb01_stop_0
> > > >         locally
> > > >         > on vdicnode01 | action 4
> > > >         > Aug 31 23:38:27 [1536] vdicnode01       crmd:     info:
> > > >         > do_lrm_rsc_op:  Performing
> > > >         > key=4:7:0:fe1a9b0a-816c-4b97-96cb-b90dbf71417a
> > > >         op=vm-vdicdb01_stop_0
> > > >         > Aug 31 23:38:27 [1533] vdicnode01       lrmd:     info:
> > > >         log_execute:
> > > >         >    executing - rsc:vm-vdicdb01 action:stop call_id:6
> > > >         > Aug 31 23:38:27 [1536] vdicnode01       crmd:   notice:
> > > >         > te_rsc_command: Initiating stop operation vm-
> > > vdicdb01_stop_0
> > > >         on
> > > >         > vdicnode02 | action 2
> > > >         > VirtualDomain(vm-vdicdb01)[5268]:
> > >  2017/08/31_23:38:27
> > > >         INFO:
> > > >         > Domain vdicdb01 already stopped.
> > > >         > Aug 31 23:38:27 [1533] vdicnode01       lrmd:     info:
> > > >         log_finished:
> > > >         > finished - rsc:vm-vdicdb01 action:stop call_id:6
> > > pid:5268
> > > >         exit-code:0
> > > >         > exec-time:57ms queue-time:0ms
> > > >         > Aug 31 23:38:27 [1536] vdicnode01       crmd:   notice:
> > > >         > process_lrm_event:      Result of stop operation for
> > > >         vm-vdicdb01 on
> > > >         > vdicnode01: 0 (ok) | call=6 key=vm-vdicdb01_stop_0
> > > >         confirmed=true
> > > >         > cib-update=43
> > > >         > Aug 31 23:38:27 [1531] vdicnode01        cib:     info:
> > > >         > cib_process_request:    Forwarding cib_modify operation
> > > for
> > > >         section
> > > >         > status to all (origin=local/crmd/43)
> > > >         > Aug 31 23:38:27 [1531] vdicnode01        cib:     info:
> > > >         > cib_perform_op: Diff: --- 0.163.2 2
> > > >         > Aug 31 23:38:27 [1531] vdicnode01        cib:     info:
> > > >         > cib_perform_op: Diff: +++ 0.163.3 (null)
> > > >         > Aug 31 23:38:27 [1531] vdicnode01        cib:     info:
> > > >         > cib_perform_op: +  /cib:  @num_updates=3
> > > >         > Aug 31 23:38:27 [1531] vdicnode01        cib:     info:
> > > >         > cib_perform_op: +
> > > >         >
> > > /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_reso
> > > urce[@id='vm-vdicdb01']/lrm_rsc_op[@id='vm-vdicdb01_last_0']:
> > > @operation_key=vm-vdicdb01_stop_0, @operation=stop, @transition-
> > > key=4:7:0:fe1a9b0a-816c-4b97-96cb-b90dbf71417a, @transition-
> > > magic=0:0;4:7:0:fe1a9b0a-816c-4b97-96cb-b90dbf71417a, @call-id=6,
> > > @rc-code=0, @last-run=1504215507, @last-rc-change=1504215507,
> > > @exec-time=57
> > > >         > Aug 31 23:38:27 [1536] vdicnode01       crmd:     info:
> > > >         > match_graph_event:      Action vm-vdicdb01_stop_0 (4)
> > > >         confirmed on
> > > >         > vdicnode01 (rc=0)
> > > >         > Aug 31 23:38:27 [1531] vdicnode01        cib:     info:
> > > >         > cib_process_request:    Completed cib_modify operation
> > > for
> > > >         section
> > > >         > status: OK (rc=0, origin=vdicnode01/crmd/43,
> > > >         version=0.163.3)
> > > >         > Aug 31 23:38:30 [1531] vdicnode01        cib:     info:
> > > >         > cib_perform_op: Diff: --- 0.163.3 2
> > > >         > Aug 31 23:38:30 [1531] vdicnode01        cib:     info:
> > > >         > cib_perform_op: Diff: +++ 0.163.4 (null)
> > > >         > Aug 31 23:38:30 [1531] vdicnode01        cib:     info:
> > > >         > cib_perform_op: +  /cib:  @num_updates=4
> > > >         > Aug 31 23:38:30 [1531] vdicnode01        cib:     info:
> > > >         > cib_perform_op: +
> > > >         >
> > > /cib/status/node_state[@id='2']/lrm[@id='2']/lrm_resources/lrm_reso
> > > urce[@id='vm-vdicdb01']/lrm_rsc_op[@id='vm-vdicdb01_last_0']:
> > > @operation_key=vm-vdicdb01_stop_0, @operation=stop, @transition-
> > > key=2:7:0:fe1a9b0a-816c-4b97-96cb-b90dbf71417a, @transition-
> > > magic=0:0;2:7:0:fe1a9b0a-816c-4b97-96cb-b90dbf71417a, @call-id=10,
> > > @rc-code=0, @exec-time=3159
> > > >         > Aug 31 23:38:30 [1531] vdicnode01        cib:     info:
> > > >         > cib_process_request:    Completed cib_modify operation
> > > for
> > > >         section
> > > >         > status: OK (rc=0, origin=vdicnode02/crmd/15,
> > > >         version=0.163.4)
> > > >         > Aug 31 23:38:30 [1536] vdicnode01       crmd:     info:
> > > >         > match_graph_event:      Action vm-vdicdb01_stop_0 (2)
> > > >         confirmed on
> > > >         > vdicnode02 (rc=0)
> > > >         > Aug 31 23:38:30 [1536] vdicnode01       crmd:   notice:
> > > >         > te_rsc_command: Initiating start operation
> > > >         vm-vdicdb01_start_0 locally
> > > >         > on vdicnode01 | action 5
> > > >         > Aug 31 23:38:30 [1536] vdicnode01       crmd:     info:
> > > >         > do_lrm_rsc_op:  Performing
> > > >         > key=5:7:0:fe1a9b0a-816c-4b97-96cb-b90dbf71417a
> > > >         op=vm-vdicdb01_start_0
> > > >         > Aug 31 23:38:30 [1533] vdicnode01       lrmd:     info:
> > > >         log_execute:
> > > >         >    executing - rsc:vm-vdicdb01 action:start call_id:7
> > > >         > Aug 31 23:38:31 [1533] vdicnode01       lrmd:     info:
> > > >         log_finished:
> > > >         > finished - rsc:vm-vdicdb01 action:start call_id:7
> > > pid:5401
> > > >         exit-code:0
> > > >         > exec-time:528ms queue-time:0ms
> > > >         > Aug 31 23:38:31 [1536] vdicnode01       crmd:     info:
> > > >         > action_synced_wait:     Managed VirtualDomain_meta-
> > > data_0
> > > >         process 5486
> > > >         > exited with rc=0
> > > >         > Aug 31 23:38:31 [1536] vdicnode01       crmd:   notice:
> > > >         > process_lrm_event:      Result of start operation for
> > > >         vm-vdicdb01 on
> > > >         > vdicnode01: 0 (ok) | call=7 key=vm-vdicdb01_start_0
> > > >         confirmed=true
> > > >         > cib-update=44
> > > >         > Aug 31 23:38:31 [1531] vdicnode01        cib:     info:
> > > >         > cib_process_request:    Forwarding cib_modify operation
> > > for
> > > >         section
> > > >         > status to all (origin=local/crmd/44)
> > > >         > Aug 31 23:38:31 [1531] vdicnode01        cib:     info:
> > > >         > cib_perform_op: Diff: --- 0.163.4 2
> > > >         > Aug 31 23:38:31 [1531] vdicnode01        cib:     info:
> > > >         > cib_perform_op: Diff: +++ 0.163.5 (null)
> > > >         > Aug 31 23:38:31 [1531] vdicnode01        cib:     info:
> > > >         > cib_perform_op: +  /cib:  @num_updates=5
> > > >         > Aug 31 23:38:31 [1531] vdicnode01        cib:     info:
> > > >         > cib_perform_op: +
> > > >         >
> > > /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_reso
> > > urce[@id='vm-vdicdb01']/lrm_rsc_op[@id='vm-vdicdb01_last_0']:
> > > @operation_key=vm-vdicdb01_start_0, @operation=start, @transition-
> > > key=5:7:0:fe1a9b0a-816c-4b97-96cb-b90dbf71417a, @transition-
> > > magic=0:0;5:7:0:fe1a9b0a-816c-4b97-96cb-b90dbf71417a, @call-id=7,
> > > @last-run=1504215510, @last-rc-change=1504215510, @exec-time=528
> > > >         > Aug 31 23:38:31 [1531] vdicnode01        cib:     info:
> > > >         > cib_process_request:    Completed cib_modify operation
> > > for
> > > >         section
> > > >         > status: OK (rc=0, origin=vdicnode01/crmd/44,
> > > >         version=0.163.5)
> > > >         > Aug 31 23:38:31 [1536] vdicnode01       crmd:     info:
> > > >         > match_graph_event:      Action vm-vdicdb01_start_0 (5)
> > > >         confirmed on
> > > >         > vdicnode01 (rc=0)
> > > >         > Aug 31 23:38:31 [1536] vdicnode01       crmd:   notice:
> > > >         > te_rsc_command: Initiating monitor operation
> > > >         vm-vdicdb01_monitor_10000
> > > >         > locally on vdicnode01 | action 6
> > > >         > Aug 31 23:38:31 [1536] vdicnode01       crmd:     info:
> > > >         > do_lrm_rsc_op:  Performing
> > > >         > key=6:7:0:fe1a9b0a-816c-4b97-96cb-b90dbf71417a
> > > >         > op=vm-vdicdb01_monitor_10000
> > > >         > Aug 31 23:38:31 [1536] vdicnode01       crmd:     info:
> > > >         > process_lrm_event:      Result of monitor operation for
> > > >         vm-vdicdb01 on
> > > >         > vdicnode01: 0 (ok) | call=8 key=vm-
> > > vdicdb01_monitor_10000
> > > >         > confirmed=false cib-update=45
> > > >         > Aug 31 23:38:31 [1531] vdicnode01        cib:     info:
> > > >         > cib_process_request:    Forwarding cib_modify operation
> > > for
> > > >         section
> > > >         > status to all (origin=local/crmd/45)
> > > >         > Aug 31 23:38:31 [1531] vdicnode01        cib:     info:
> > > >         > cib_perform_op: Diff: --- 0.163.5 2
> > > >         > Aug 31 23:38:31 [1531] vdicnode01        cib:     info:
> > > >         > cib_perform_op: Diff: +++ 0.163.6 (null)
> > > >         > Aug 31 23:38:31 [1531] vdicnode01        cib:     info:
> > > >         > cib_perform_op: +  /cib:  @num_updates=6
> > > >         > Aug 31 23:38:31 [1531] vdicnode01        cib:     info:
> > > >         > cib_perform_op:
> > > >         >
> > > >         ++
> > > /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_reso
> > > urce[@id='vm-vdicdb01']:  <lrm_rsc_op id="vm-
> > > vdicdb01_monitor_10000" operation_key="vm-vdicdb01_monitor_10000"
> > > operation="monitor" crm-debug-origin="do_update_resource"
> > > crm_feature_set="3.0.10" transition-key="6:7:0:fe1a9b0a-816c-4b97-
> > > 96cb-b90dbf71417a" transition-magic="0:0;6:7:0:fe1a9b0a-816c-4b97-
> > > 96cb-b90dbf71417a" on_node="vdicnode01" call-id="8" rc-code="0" op-
> > > s
> > > >         > Aug 31 23:38:31 [1531] vdicnode01        cib:     info:
> > > >         > cib_process_request:    Completed cib_modify operation
> > > for
> > > >         section
> > > >         > status: OK (rc=0, origin=vdicnode01/crmd/45,
> > > >         version=0.163.6)
> > > >         > Aug 31 23:38:31 [1536] vdicnode01       crmd:     info:
> > > >         > match_graph_event:      Action vm-
> > > vdicdb01_monitor_10000 (6)
> > > >         confirmed
> > > >         > on vdicnode01 (rc=0)
> > > >         > Aug 31 23:38:31 [1536] vdicnode01       crmd:   notice:
> > > >         run_graph:
> > > >         >    Transition 7 (Complete=5, Pending=0, Fired=0,
> > > Skipped=0,
> > > >         > Incomplete=0,
> > > >         Source=/var/lib/pacemaker/pengine/pe-error-8.bz2):
> > > >         > Complete
> > > >         > Aug 31 23:38:31 [1536] vdicnode01       crmd:     info:
> > > >         do_log: Input
> > > >         > I_TE_SUCCESS received in state S_TRANSITION_ENGINE from
> > > >         notify_crmd
> > > >         > Aug 31 23:38:31 [1536] vdicnode01       crmd:   notice:
> > > >         > do_state_transition:    State transition
> > > S_TRANSITION_ENGINE
> > > >         -> S_IDLE
> > > >         > | input=I_TE_SUCCESS cause=C_FSA_INTERNAL
> > > origin=notify_crmd
> > > >         > Aug 31 23:38:36 [1531] vdicnode01        cib:     info:
> > > >         > cib_process_ping:       Reporting our current digest to
> > > >         vdicnode01:
> > > >         > 9141ea9880f5a44b133003982d863bc8 for 0.163.6
> > > (0x7f61cec09270
> > > >         0)
> > > >         >
> > > >         >
> > > >         > Thanks a lot
> > > >         >
> > > >         > 2017-08-31 16:20 GMT+02:00 Ken Gaillot
> > > >         <kgaillot at redhat.com>:
> > > >         >         On Thu, 2017-08-31 at 01:13 +0200, Oscar
> > > >         >         Segarra wrote:
> > > >         >         > Hi,
> > > >         >         >
> > > >         >         >
> > > >         >         > In my environment, I have just two hosts,
> > > >         >         > where the qemu-kvm process is launched by a
> > > >         >         > regular user (oneadmin) - OpenNebula -
> > > >         >         >
> > > >         >         >
> > > >         >         > I have created a VirtualDomain resource that
> > > >         >         > starts and stops the VM perfectly.
> > > >         >         > Nevertheless, when I change the location
> > > >         >         > weight in order to force the migration, it
> > > >         >         > raises a migration failure "error: 1"
> > > >         >         >
> > > >         >         >
> > > >         >         > If I execute the virsh migrate command (that
> > > >         >         > appears in corosync.log) from the command
> > > >         >         > line, it works perfectly.
> > > >         >         >
> > > >         >         >
> > > >         >         > Has anybody experienced the same issue?
> > > >         >         >
> > > >         >         >
> > > >         >         > Thanks in advance for your help
> > > >         >
> > > >         >
> > > >         >         If something works from the command line but
> > > >         >         not when run by a daemon, my first suspicion
> > > >         >         is SELinux. Check the audit log for denials
> > > >         >         around that time.
> > > >         >
> > > >         >         I'd also check the system log and Pacemaker
> > > >         >         detail log around that time to see if there
> > > >         >         is any more information.
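> > > >         >
> > > >         >         For example (generic audit-log checks, not
> > > >         >         specific to this cluster):
> > > >         >
> > > >         >           ausearch -m avc -ts recent
> > > >         >           grep denied /var/log/audit/audit.log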
> > > >         >         --
> > > >         >         Ken Gaillot <kgaillot at redhat.com>
> > > >         >
> > > >         >
> > > >         >
> > > >         >
> > > >         >
> > > >         >         _______________________________________________
> > > >         >         Users mailing list: Users at clusterlabs.org
> > > >         >         http://lists.clusterlabs.org/mailman/listinfo/u
> > > sers
> > > >         >
> > > >         >         Project Home: http://www.clusterlabs.org
> > > >         >         Getting started:
> > > >         >
> > > >          http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> > > >         >         Bugs: http://bugs.clusterlabs.org
> > > >         >
> > > >         >
> > > >
> > > >
> > > >         --
> > > >         Ken Gaillot <kgaillot at redhat.com>
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > >
> > > --
> > > Ken Gaillot <kgaillot at redhat.com>
> > >
> > >
> > >
> > >
> > >
> >
> >
> --
> Ken Gaillot <kgaillot at redhat.com>
>