<div dir="ltr">Hi Ken, <div><br></div><div>Lately, I've configured the passwordless ssh for root and everything has begun working fine...<br></div><div><br></div><div>Thanks a lot, <br></div><div class="gmail_extra"><br><div class="gmail_quote">2017-10-18 22:16 GMT+02:00 Ken Gaillot <span dir="ltr"><<a href="mailto:kgaillot@redhat.com" target="_blank">kgaillot@redhat.com</a>></span>:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><span class="gmail-">On Sat, 2017-09-02 at 01:21 +0200, Oscar Segarra wrote:<br>
> Hi,
>
> I have updated known_hosts.
>
> Now I get the following error:
>
> Sep 02 01:03:41 [1535] vdicnode01        cib:     info: cib_perform_op: +  /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='vm-vdicdb01']/lrm_rsc_op[@id='vm-vdicdb01_last_0']:  @operation_key=vm-vdicdb01_migrate_to_0, @operation=migrate_to, @crm-debug-origin=cib_action_update, @transition-key=6:27:0:a7fef266-46c3-429e-ab00-c1a0aab24da5, @transition-magic=-1:193;6:27:0:a7fef266-46c3-429e-ab00-c1a0aab24da5, @call-id=-1, @rc-code=193, @op-status=-1, @last-run=1504307021, @last-rc-c
> Sep 02 01:03:41 [1535] vdicnode01        cib:     info: cib_process_request:    Completed cib_modify operation for section status: OK (rc=0, origin=vdicnode01/crmd/77, version=0.169.1)
> VirtualDomain(vm-vdicdb01)[13085]:      2017/09/02_01:03:41 INFO: vdicdb01: Starting live migration to vdicnode02 (using: virsh --connect=qemu:///system --quiet migrate --live  vdicdb01 qemu+ssh://vdicnode02/system ).
> VirtualDomain(vm-vdicdb01)[13085]:      2017/09/02_01:03:41 ERROR: vdicdb01: live migration to vdicnode02 failed: 1
> Sep 02 01:03:41 [1537] vdicnode01       lrmd:   notice: operation_finished:     vm-vdicdb01_migrate_to_0:13085:stderr [ error: Cannot recv data: Permission denied, please try again. ]
> Sep 02 01:03:41 [1537] vdicnode01       lrmd:   notice: operation_finished:     vm-vdicdb01_migrate_to_0:13085:stderr [ Permission denied, please try again. ]
> Sep 02 01:03:41 [1537] vdicnode01       lrmd:   notice: operation_finished:     vm-vdicdb01_migrate_to_0:13085:stderr [ Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).: Connection reset by peer ]
> Sep 02 01:03:41 [1537] vdicnode01       lrmd:   notice: operation_finished:     vm-vdicdb01_migrate_to_0:13085:stderr [ ocf-exit-reason:vdicdb01: live migration to vdicnode02 failed: 1 ]
> Sep 02 01:03:41 [1537] vdicnode01       lrmd:     info: log_finished:   finished - rsc:vm-vdicdb01 action:migrate_to call_id:16 pid:13085 exit-code:1 exec-time:119ms queue-time:0ms
> Sep 02 01:03:41 [1540] vdicnode01       crmd:   notice: process_lrm_event:      Result of migrate_to operation for vm-vdicdb01 on vdicnode01: 1 (unknown error) | call=16 key=vm-vdicdb01_migrate_to_0 confirmed=true cib-update=78
> Sep 02 01:03:41 [1540] vdicnode01       crmd:   notice: process_lrm_event:      vdicnode01-vm-vdicdb01_migrate_to_0:16 [ error: Cannot recv data: Permission denied, please try again.\r\nPermission denied, please try again.\r\nPermission denied (publickey,gssapi-keyex,gssapi-with-mic,password).: Connection reset by peer\nocf-exit-reason:vdicdb01: live migration to vdicnode02 failed: 1\n ]
> Sep 02 01:03:41 [1535] vdicnode01        cib:     info: cib_process_request:    Forwarding cib_modify operation for section status to all (origin=local/crmd/78)
> Sep 02 01:03:41 [1535] vdicnode01        cib:     info: cib_perform_op: Diff: --- 0.169.1 2
> Sep 02 01:03:41 [1535] vdicnode01        cib:     info: cib_perform_op: Diff: +++ 0.169.2 (null)
> Sep 02 01:03:41 [1535] vdicnode01        cib:     info: cib_perform_op: +  /cib:  @num_updates=2
> Sep 02 01:03:41 [1535] vdicnode01        cib:     info: cib_perform_op: +  /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='vm-vdicdb01']/lrm_rsc_op[@id='vm-vdicdb01_last_0']:  @crm-debug-origin=do_update_resource, @transition-magic=0:1;6:27:0:a7fef266-46c3-429e-ab00-c1a0aab24da5, @call-id=16, @rc-code=1, @op-status=0, @exec-time=119, @exit-reason=vdicdb01: live migration to vdicnode02 failed: 1
> Sep 02 01:03:4
>
> As root <-- the system prompts for a password:
> [root@vdicnode01 .ssh]# virsh --connect=qemu:///system --quiet
> migrate --live  vdicdb01 qemu+ssh://vdicnode02/system
> root@vdicnode02's password:
>
> As oneadmin (the user that runs the qemu-kvm process) <-- no password
> prompt:
> virsh --connect=qemu:///system --quiet migrate --live  vdicdb01
> qemu+ssh://vdicnode02/system
>
> Must I configure a passwordless ssh connection for root in order to
> make live migration work?
>
> Or is there any way to instruct Pacemaker to use my oneadmin user for
> migrations instead of root?

Pacemaker calls the VirtualDomain resource agent as root, but it's up
to the agent what to do from there. I don't see any user option in
VirtualDomain or virsh, so I don't think there's currently a way.

I see two options: configure passwordless ssh for root, or copy the
VirtualDomain resource agent and modify it to use "sudo -u oneadmin"
when it calls virsh.
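
For what it's worth, a rough sketch of the second option (untested; the
agent path is the usual default, and "VirtualDomainOneadmin" is a name
made up for this example):

  # Copy the stock agent under a new name so package updates
  # don't overwrite the modification
  cp /usr/lib/ocf/resource.d/heartbeat/VirtualDomain \
     /usr/lib/ocf/resource.d/heartbeat/VirtualDomainOneadmin

  # In the copy, prefix the virsh invocations with sudo, e.g. turn
  #   virsh $VIRSH_OPTIONS migrate --live ...
  # into
  #   sudo -u oneadmin virsh $VIRSH_OPTIONS migrate --live ...

Then configure the resource as ocf:heartbeat:VirtualDomainOneadmin
instead of ocf:heartbeat:VirtualDomain. Since the agent already runs as
root, sudo -u oneadmin shouldn't prompt for a password.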

We've discussed adding the capability to tell Pacemaker to execute a
resource agent as a particular user. We've already put the plumbing in
for that, so lrmd can execute alert agents as the hacluster user. All
that would be needed is a new resource meta-attribute and the IPC API
to use it. It's low priority due to a large backlog at the moment, but
we'd be happy to take a pull request for it. The resource agent would
obviously have to be able to work as that user.
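
If that capability ever lands, I'd expect usage to look something like
this (purely hypothetical -- no such meta-attribute exists today):

  # Hypothetical: run this resource's agent as oneadmin instead of root
  pcs resource meta vm-vdicdb01 run-as-user=oneadmin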
<div><div class="gmail-h5"><br>
><br>
> Thanks a lot:<br>
><br>
><br>
> 2017-09-01 23:14 GMT+02:00 Ken Gaillot <<a href="mailto:kgaillot@redhat.com">kgaillot@redhat.com</a>>:<br>
> > On Fri, 2017-09-01 at 00:26 +0200, Oscar Segarra wrote:<br>
> > > Hi,<br>
> > ><br>
> > ><br>
> > > Yes, it is....<br>
> > ><br>
> > ><br>
> > > The qemu-kvm process is executed by the oneadmin user.<br>
> > ><br>
> > ><br>
> > > When I cluster tries the live migration, what users do play?<br>
> > ><br>
> > ><br>
> > > Oneadmin<br>
> > > Root<br>
> > > Hacluster<br>
> > ><br>
> > ><br>
> > > I have just configured pasworless ssh connection with oneadmin.<br>
> > ><br>
> > ><br>
> > > Do I need to configure any other passwordless ssh connection with<br>
> > any<br>
> > > other user?<br>
> > ><br>
> > ><br>
> > > What user executes the virsh migrate - - live?<br>
> ><br>
> > The cluster executes resource actions as root.<br>
> ><br>
> > > Is there any way to check ssk keys?<br>
> ><br>
> > I'd just login once to the host as root from the cluster nodes, to<br>
> > make<br>
> > it sure it works, and accept the host when asked.<br>
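> >
> > For example, from each cluster node (host names as in this thread):
> >
> >   ssh root@vdicnode01 true
> >   ssh root@vdicnode02 true
> >
> > Once the host keys are accepted and key-based auth is in place,
> > these should return without prompting for anything.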
> >
> > >
> > > Sorry for all these questions.
> > >
> > >
> > > Thanks a lot
> > >
> > >
> > >
> > >
> > >
> > >
> > > On 1 Sep 2017 at 0:12, "Ken Gaillot" <kgaillot@redhat.com> wrote:
> > >         On Thu, 2017-08-31 at 23:45 +0200, Oscar Segarra wrote:
> > >         > Hi Ken,
> > >         >
> > >         >
> > >         > Thanks a lot for your quick answer:
> > >         >
> > >         >
> > >         > Regarding SELinux: it is disabled. The firewall is
> > >         > disabled as well.
> > >         >
> > >         >
> > >         > [root@vdicnode01 ~]# sestatus
> > >         > SELinux status:                 disabled
> > >         >
> > >         >
> > >         > [root@vdicnode01 ~]# service firewalld status
> > >         > Redirecting to /bin/systemctl status firewalld.service
> > >         > ● firewalld.service - firewalld - dynamic firewall daemon
> > >         >    Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
> > >         >    Active: inactive (dead)
> > >         >      Docs: man:firewalld(1)
> > >         >
> > >         >
> > >         > On migration, it performs a graceful shutdown and a start
> > >         > on the new node.
> > >         >
> > >         >
> > >         > I attach the logs when trying to migrate from vdicnode02
> > >         > to vdicnode01:
> > >         >
> > >         >
> > >         > vdicnode02 corosync.log:
> > >         > Aug 31 23:38:17 [1521] vdicnode02        cib:     info: cib_perform_op: Diff: --- 0.161.2 2
> > >         > Aug 31 23:38:17 [1521] vdicnode02        cib:     info: cib_perform_op: Diff: +++ 0.162.0 (null)
> > >         > Aug 31 23:38:17 [1521] vdicnode02        cib:     info: cib_perform_op: -- /cib/configuration/constraints/rsc_location[@id='location-vm-vdicdb01-vdicnode01--INFINITY']
> > >         > Aug 31 23:38:17 [1521] vdicnode02        cib:     info: cib_perform_op: +  /cib:  @epoch=162, @num_updates=0
> > >         > Aug 31 23:38:17 [1521] vdicnode02        cib:     info: cib_process_request:    Completed cib_replace operation for section configuration: OK (rc=0, origin=vdicnode01/cibadmin/2, version=0.162.0)
> > >         > Aug 31 23:38:17 [1521] vdicnode02        cib:     info: cib_file_backup:        Archived previous version as /var/lib/pacemaker/cib/cib-65.raw
> > >         > Aug 31 23:38:17 [1521] vdicnode02        cib:     info: cib_file_write_with_digest:     Wrote version 0.162.0 of the CIB to disk (digest: 1f87611b60cd7c48b95b6b788b47f65f)
> > >         > Aug 31 23:38:17 [1521] vdicnode02        cib:     info: cib_file_write_with_digest:     Reading cluster configuration file /var/lib/pacemaker/cib/cib.jt2KPw (digest: /var/lib/pacemaker/cib/cib.Kwqfpl)
> > >         > Aug 31 23:38:22 [1521] vdicnode02        cib:     info: cib_process_ping:       Reporting our current digest to vdicnode01: dace3a23264934279d439420d5a716cc for 0.162.0 (0x7f96bb26c5c00)
> > >         > Aug 31 23:38:27 [1521] vdicnode02        cib:     info: cib_perform_op: Diff: --- 0.162.0 2
> > >         > Aug 31 23:38:27 [1521] vdicnode02        cib:     info: cib_perform_op: Diff: +++ 0.163.0 (null)
> > >         > Aug 31 23:38:27 [1521] vdicnode02        cib:     info: cib_perform_op: +  /cib:  @epoch=163
> > >         > Aug 31 23:38:27 [1521] vdicnode02        cib:     info: cib_perform_op: ++ /cib/configuration/constraints:  <rsc_location id="location-vm-vdicdb01-vdicnode02--INFINITY" node="vdicnode02" rsc="vm-vdicdb01" score="-INFINITY"/>
> > >         > Aug 31 23:38:27 [1521] vdicnode02        cib:     info: cib_process_request:    Completed cib_replace operation for section configuration: OK (rc=0, origin=vdicnode01/cibadmin/2, version=0.163.0)
> > >         > Aug 31 23:38:27 [1521] vdicnode02        cib:     info: cib_file_backup:        Archived previous version as /var/lib/pacemaker/cib/cib-66.raw
> > >         > Aug 31 23:38:27 [1521] vdicnode02        cib:     info: cib_file_write_with_digest:     Wrote version 0.163.0 of the CIB to disk (digest: 47a548b36746de9275d66cc6aeb0fdc4)
> > >         > Aug 31 23:38:27 [1521] vdicnode02        cib:     info: cib_file_write_with_digest:     Reading cluster configuration file /var/lib/pacemaker/cib/cib.rcgXiT (digest: /var/lib/pacemaker/cib/cib.7geMfi)
> > >         > Aug 31 23:38:27 [1523] vdicnode02       lrmd:     info: cancel_recurring_action:        Cancelling ocf operation vm-vdicdb01_monitor_10000
> > >         > Aug 31 23:38:27 [1526] vdicnode02       crmd:     info: do_lrm_rsc_op:  Performing key=6:6:0:fe1a9b0a-816c-4b97-96cb-b90dbf71417a op=vm-vdicdb01_migrate_to_0
> > >         > Aug 31 23:38:27 [1523] vdicnode02       lrmd:     info: log_execute:    executing - rsc:vm-vdicdb01 action:migrate_to call_id:9
> > >         > Aug 31 23:38:27 [1526] vdicnode02       crmd:     info: process_lrm_event:      Result of monitor operation for vm-vdicdb01 on vdicnode02: Cancelled | call=7 key=vm-vdicdb01_monitor_10000 confirmed=true
> > >         > Aug 31 23:38:27 [1521] vdicnode02        cib:     info: cib_perform_op: Diff: --- 0.163.0 2
> > >         > Aug 31 23:38:27 [1521] vdicnode02        cib:     info: cib_perform_op: Diff: +++ 0.163.1 (null)
> > >         > Aug 31 23:38:27 [1521] vdicnode02        cib:     info: cib_perform_op: +  /cib:  @num_updates=1
> > >         > Aug 31 23:38:27 [1521] vdicnode02        cib:     info: cib_perform_op: +  /cib/status/node_state[@id='2']/lrm[@id='2']/lrm_resources/lrm_resource[@id='vm-vdicdb01']/lrm_rsc_op[@id='vm-vdicdb01_last_0']:  @operation_key=vm-vdicdb01_migrate_to_0, @operation=migrate_to, @crm-debug-origin=cib_action_update, @transition-key=6:6:0:fe1a9b0a-816c-4b97-96cb-b90dbf71417a, @transition-magic=-1:193;6:6:0:fe1a9b0a-816c-4b97-96cb-b90dbf71417a, @call-id=-1, @rc-code=193, @op-status=-1, @last-run=1504215507, @last-rc-cha
> > >         > Aug 31 23:38:27 [1521] vdicnode02        cib:     info: cib_process_request:    Completed cib_modify operation for section status: OK (rc=0, origin=vdicnode01/crmd/41, version=0.163.1)
> > >         > VirtualDomain(vm-vdicdb01)[5241]:      2017/08/31_23:38:27 INFO: vdicdb01: Starting live migration to vdicnode01 (using: virsh --connect=qemu:///system --quiet migrate --live  vdicdb01 qemu+ssh://vdicnode01/system ).
> > >         > VirtualDomain(vm-vdicdb01)[5241]:      2017/08/31_23:38:27 ERROR: vdicdb01: live migration to vdicnode01 failed: 1
> > >         > Aug 31 23:38:27 [1523] vdicnode02       lrmd:   notice: operation_finished:     vm-vdicdb01_migrate_to_0:5241:stderr [ error: Cannot recv data: Host key verification failed.: Connection reset by peer ]
> > >
> > >
> > >         ^^^ There you go. Sounds like the ssh key isn't being
> > >         accepted. No idea why though.
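> > >
> > >         One quick way to see what ssh is doing there (an
> > >         illustrative check, run as root on the migrating node,
> > >         since that's the user the agent connects as):
> > >
> > >           ssh -v root@vdicnode01 true 2>&1 | grep -iE 'key|denied'
> > >
> > >         A "Host key verification failed" message points at
> > >         known_hosts rather than at the key pair itself.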
> > >
> > >
> > >
> > >         > Aug 31 23:38:27 [1523] vdicnode02       lrmd:   notice: operation_finished:     vm-vdicdb01_migrate_to_0:5241:stderr [ ocf-exit-reason:vdicdb01: live migration to vdicnode01 failed: 1 ]
> > >         > Aug 31 23:38:27 [1523] vdicnode02       lrmd:     info: log_finished: finished - rsc:vm-vdicdb01 action:migrate_to call_id:9 pid:5241 exit-code:1 exec-time:78ms queue-time:0ms
> > >         > Aug 31 23:38:27 [1526] vdicnode02       crmd:   notice: process_lrm_event:      Result of migrate_to operation for vm-vdicdb01 on vdicnode02: 1 (unknown error) | call=9 key=vm-vdicdb01_migrate_to_0 confirmed=true cib-update=14
> > >         > Aug 31 23:38:27 [1526] vdicnode02       crmd:   notice: process_lrm_event: vdicnode02-vm-vdicdb01_migrate_to_0:9 [ error: Cannot recv data: Host key verification failed.: Connection reset by peer\nocf-exit-reason:vdicdb01: live migration to vdicnode01 failed: 1\n ]
> > >         > Aug 31 23:38:27 [1521] vdicnode02        cib:     info: cib_process_request:    Forwarding cib_modify operation for section status to all (origin=local/crmd/14)
> > >         > Aug 31 23:38:27 [1521] vdicnode02        cib:     info: cib_perform_op: Diff: --- 0.163.1 2
> > >         > Aug 31 23:38:27 [1521] vdicnode02        cib:     info: cib_perform_op: Diff: +++ 0.163.2 (null)
> > >         > Aug 31 23:38:27 [1521] vdicnode02        cib:     info: cib_perform_op: +  /cib:  @num_updates=2
> > >         > Aug 31 23:38:27 [1521] vdicnode02        cib:     info: cib_perform_op: +  /cib/status/node_state[@id='2']/lrm[@id='2']/lrm_resources/lrm_resource[@id='vm-vdicdb01']/lrm_rsc_op[@id='vm-vdicdb01_last_0']:  @crm-debug-origin=do_update_resource, @transition-magic=0:1;6:6:0:fe1a9b0a-816c-4b97-96cb-b90dbf71417a, @call-id=9, @rc-code=1, @op-status=0, @exec-time=78, @exit-reason=vdicdb01: live migration to vdicnode01 failed: 1
> > >         > Aug 31 23:38:27 [1521] vdicnode02        cib:     info: cib_perform_op: ++ /cib/status/node_state[@id='2']/lrm[@id='2']/lrm_resources/lrm_resource[@id='vm-vdicdb01']:  <lrm_rsc_op id="vm-vdicdb01_last_failure_0" operation_key="vm-vdicdb01_migrate_to_0" operation="migrate_to" crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" transition-key="6:6:0:fe1a9b0a-816c-4b97-96cb-b90dbf71417a" transition-magic="0:1;6:6:0:fe1a9b0a-816c-4b97-96cb-b90dbf71417a" exit-reason="vdicdb01: live migration to vdicn
> > >         > Aug 31 23:38:27 [1521] vdicnode02        cib:     info: cib_process_request:    Completed cib_modify operation for section status: OK (rc=0, origin=vdicnode02/crmd/14, version=0.163.2)
> > >         > Aug 31 23:38:27 [1526] vdicnode02       crmd:     info: do_lrm_rsc_op:  Performing key=2:7:0:fe1a9b0a-816c-4b97-96cb-b90dbf71417a op=vm-vdicdb01_stop_0
> > >         > Aug 31 23:38:27 [1523] vdicnode02       lrmd:     info: log_execute:    executing - rsc:vm-vdicdb01 action:stop call_id:10
> > >         > VirtualDomain(vm-vdicdb01)[5285]:      2017/08/31_23:38:27 INFO: Issuing graceful shutdown request for domain vdicdb01.
> > >         > Aug 31 23:38:27 [1521] vdicnode02        cib:     info: cib_perform_op: Diff: --- 0.163.2 2
> > >         > Aug 31 23:38:27 [1521] vdicnode02        cib:     info: cib_perform_op: Diff: +++ 0.163.3 (null)
> > >         > Aug 31 23:38:27 [1521] vdicnode02        cib:     info: cib_perform_op: +  /cib:  @num_updates=3
> > >         > Aug 31 23:38:27 [1521] vdicnode02        cib:     info: cib_perform_op: +  /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='vm-vdicdb01']/lrm_rsc_op[@id='vm-vdicdb01_last_0']:  @operation_key=vm-vdicdb01_stop_0, @operation=stop, @transition-key=4:7:0:fe1a9b0a-816c-4b97-96cb-b90dbf71417a, @transition-magic=0:0;4:7:0:fe1a9b0a-816c-4b97-96cb-b90dbf71417a, @call-id=6, @rc-code=0, @last-run=1504215507, @last-rc-change=1504215507, @exec-time=57
> > >         > Aug 31 23:38:27 [1521] vdicnode02        cib:     info: cib_process_request:    Completed cib_modify operation for section status: OK (rc=0, origin=vdicnode01/crmd/43, version=0.163.3)
> > >         > Aug 31 23:38:30 [1523] vdicnode02       lrmd:     info: log_finished: finished - rsc:vm-vdicdb01 action:stop call_id:10 pid:5285 exit-code:0 exec-time:3159ms queue-time:0ms
> > >         > Aug 31 23:38:30 [1526] vdicnode02       crmd:   notice: process_lrm_event:      Result of stop operation for vm-vdicdb01 on vdicnode02: 0 (ok) | call=10 key=vm-vdicdb01_stop_0 confirmed=true cib-update=15
> > >         > Aug 31 23:38:30 [1521] vdicnode02        cib:     info: cib_process_request:    Forwarding cib_modify operation for section status to all (origin=local/crmd/15)
> > >         > Aug 31 23:38:30 [1521] vdicnode02        cib:     info: cib_perform_op: Diff: --- 0.163.3 2
> > >         > Aug 31 23:38:30 [1521] vdicnode02        cib:     info: cib_perform_op: Diff: +++ 0.163.4 (null)
> > >         > Aug 31 23:38:30 [1521] vdicnode02        cib:     info: cib_perform_op: +  /cib:  @num_updates=4
> > >         > Aug 31 23:38:30 [1521] vdicnode02        cib:     info: cib_perform_op: +  /cib/status/node_state[@id='2']/lrm[@id='2']/lrm_resources/lrm_resource[@id='vm-vdicdb01']/lrm_rsc_op[@id='vm-vdicdb01_last_0']:  @operation_key=vm-vdicdb01_stop_0, @operation=stop, @transition-key=2:7:0:fe1a9b0a-816c-4b97-96cb-b90dbf71417a, @transition-magic=0:0;2:7:0:fe1a9b0a-816c-4b97-96cb-b90dbf71417a, @call-id=10, @rc-code=0, @exec-time=3159
> > >         > Aug 31 23:38:30 [1521] vdicnode02        cib:     info: cib_process_request:    Completed cib_modify operation for section status: OK (rc=0, origin=vdicnode02/crmd/15, version=0.163.4)
> > >         > Aug 31 23:38:31 [1521] vdicnode02        cib:     info: cib_perform_op: Diff: --- 0.163.4 2
> > >         > Aug 31 23:38:31 [1521] vdicnode02        cib:     info: cib_perform_op: Diff: +++ 0.163.5 (null)
> > >         > Aug 31 23:38:31 [1521] vdicnode02        cib:     info: cib_perform_op: +  /cib:  @num_updates=5
> > >         > Aug 31 23:38:31 [1521] vdicnode02        cib:     info: cib_perform_op: +  /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='vm-vdicdb01']/lrm_rsc_op[@id='vm-vdicdb01_last_0']:  @operation_key=vm-vdicdb01_start_0, @operation=start, @transition-key=5:7:0:fe1a9b0a-816c-4b97-96cb-b90dbf71417a, @transition-magic=0:0;5:7:0:fe1a9b0a-816c-4b97-96cb-b90dbf71417a, @call-id=7, @last-run=1504215510, @last-rc-change=1504215510, @exec-time=528
> > >         > Aug 31 23:38:31 [1521] vdicnode02        cib:     info: cib_process_request:    Completed cib_modify operation for section status: OK (rc=0, origin=vdicnode01/crmd/44, version=0.163.5)
> > >         > Aug 31 23:38:31 [1521] vdicnode02        cib:     info: cib_perform_op: Diff: --- 0.163.5 2
> > >         > Aug 31 23:38:31 [1521] vdicnode02        cib:     info: cib_perform_op: Diff: +++ 0.163.6 (null)
> > >         > Aug 31 23:38:31 [1521] vdicnode02        cib:     info: cib_perform_op: +  /cib:  @num_updates=6
> > >         > Aug 31 23:38:31 [1521] vdicnode02        cib:     info: cib_perform_op: ++ /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='vm-vdicdb01']:  <lrm_rsc_op id="vm-vdicdb01_monitor_10000" operation_key="vm-vdicdb01_monitor_10000" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" transition-key="6:7:0:fe1a9b0a-816c-4b97-96cb-b90dbf71417a" transition-magic="0:0;6:7:0:fe1a9b0a-816c-4b97-96cb-b90dbf71417a" on_node="vdicnode01" call-id="8" rc-code="0" op-s
> > >         > Aug 31 23:38:31 [1521] vdicnode02        cib:     info: cib_process_request:    Completed cib_modify operation for section status: OK (rc=0, origin=vdicnode01/crmd/45, version=0.163.6)
> > >         > Aug 31 23:38:36 [1521] vdicnode02        cib:     info: cib_process_ping:       Reporting our current digest to vdicnode01: 9141ea9880f5a44b133003982d863bc8 for 0.163.6 (0x7f96bb2625a00)
> > >         >
> > >         >
> > >         >
> > >         >
> > >         >
> > >         >
> > >         > vdicnode01 - corosync.log:
> > >         > Aug 31 23:38:27 [1531] vdicnode01        cib:     info: cib_process_request:    Forwarding cib_replace operation for section configuration to all (origin=local/cibadmin/2)
> > >         > Aug 31 23:38:27 [1531] vdicnode01        cib:     info: cib_perform_op: Diff: --- 0.162.0 2
> > >         > Aug 31 23:38:27 [1531] vdicnode01        cib:     info: cib_perform_op: Diff: +++ 0.163.0 (null)
> > >         > Aug 31 23:38:27 [1531] vdicnode01        cib:     info: cib_perform_op: +  /cib:  @epoch=163
> > >         > Aug 31 23:38:27 [1531] vdicnode01        cib:     info: cib_perform_op: ++ /cib/configuration/constraints:  <rsc_location id="location-vm-vdicdb01-vdicnode02--INFINITY" node="vdicnode02" rsc="vm-vdicdb01" score="-INFINITY"/>
> > >         > Aug 31 23:38:27 [1531] vdicnode01        cib:     info: cib_process_request:    Completed cib_replace operation for section configuration: OK (rc=0, origin=vdicnode01/cibadmin/2, version=0.163.0)
> > >         > Aug 31 23:38:27 [1536] vdicnode01       crmd:     info: abort_transition_graph: Transition aborted by rsc_location.location-vm-vdicdb01-vdicnode02--INFINITY 'create': Non-status change | cib=0.163.0 source=te_update_diff:436 path=/cib/configuration/constraints complete=true
> > >         > Aug 31 23:38:27 [1536] vdicnode01       crmd:   notice: do_state_transition:    State transition S_IDLE -> S_POLICY_ENGINE | input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph
> > >         > Aug 31 23:38:27 [1531] vdicnode01        cib:     info: cib_file_backup:        Archived previous version as /var/lib/pacemaker/cib/cib-85.raw
> > >         > Aug 31 23:38:27 [1531] vdicnode01        cib:     info: cib_file_write_with_digest:     Wrote version 0.163.0 of the CIB to disk (digest: 47a548b36746de9275d66cc6aeb0fdc4)
> > >         > Aug 31 23:38:27 [1531] vdicnode01        cib:     info: cib_file_write_with_digest:     Reading cluster configuration file /var/lib/pacemaker/cib/cib.npBIW2 (digest: /var/lib/pacemaker/cib/cib.bDogoB)
> > >         > Aug 31 23:38:27 [1535] vdicnode01    pengine:     info: determine_online_status:        Node vdicnode02 is online
> > >         > Aug 31 23:38:27 [1535] vdicnode01    pengine:     info: determine_online_status:        Node vdicnode01 is online
> > >         > Aug 31 23:38:27 [1535] vdicnode01    pengine:     info: native_print: vm-vdicdb01     (ocf::heartbeat:VirtualDomain): Started vdicnode02
> > >         > Aug 31 23:38:27 [1535] vdicnode01    pengine:     info: RecurringOp: Start recurring monitor (10s) for vm-vdicdb01 on vdicnode01
> > >         > Aug 31 23:38:27 [1535] vdicnode01    pengine:   notice: LogActions: Migrate vm-vdicdb01     (Started vdicnode02 -> vdicnode01)
> > >         > Aug 31 23:38:27 [1535] vdicnode01    pengine:   notice: process_pe_message:     Calculated transition 6, saving inputs in /var/lib/pacemaker/pengine/pe-input-96.bz2
> > >         > Aug 31 23:38:27 [1536] vdicnode01       crmd:     info: do_state_transition:    State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE | input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response
> > >         > Aug 31 23:38:27 [1536] vdicnode01       crmd:     info: do_te_invoke: Processing graph 6 (ref=pe_calc-dc-1504215507-24) derived from /var/lib/pacemaker/pengine/pe-input-96.bz2
> > >         > Aug 31 23:38:27 [1536] vdicnode01       crmd:   notice: te_rsc_command: Initiating migrate_to operation vm-vdicdb01_migrate_to_0 on vdicnode02 | action 6
> > >         > Aug 31 23:38:27 [1536] vdicnode01       crmd:     info: create_operation_update:        cib_action_update: Updating resource vm-vdicdb01 after migrate_to op pending (interval=0)
> > >         > Aug 31 23:38:27 [1531] vdicnode01        cib:     info: cib_process_request:    Forwarding cib_modify operation for section status to all (origin=local/crmd/41)
> > >         > Aug 31 23:38:27 [1531] vdicnode01        cib:     info: cib_perform_op: Diff: --- 0.163.0 2
> > >         > Aug 31 23:38:27 [1531] vdicnode01        cib:     info: cib_perform_op: Diff: +++ 0.163.1 (null)
> > >         > Aug 31 23:38:27 [1531] vdicnode01        cib:     info: cib_perform_op: +  /cib:  @num_updates=1
> > >         > Aug 31 23:38:27 [1531] vdicnode01        cib:     info: cib_perform_op: +  /cib/status/node_state[@id='2']/lrm[@id='2']/lrm_resources/lrm_resource[@id='vm-vdicdb01']/lrm_rsc_op[@id='vm-vdicdb01_last_0']:  @operation_key=vm-vdicdb01_migrate_to_0, @operation=migrate_to, @crm-debug-origin=cib_action_update, @transition-key=6:6:0:fe1a9b0a-816c-4b97-96cb-b90dbf71417a, @transition-magic=-1:193;6:6:0:fe1a9b0a-816c-4b97-96cb-b90dbf71417a, @call-id=-1, @rc-code=193, @op-status=-1, @last-run=1504215507, @last-rc-cha
> > >         > Aug 31 23:38:27 [1531] vdicnode01        cib:     info: cib_process_request:    Completed cib_modify operation for section status: OK (rc=0, origin=vdicnode01/crmd/41, version=0.163.1)
> > >         > Aug 31 23:38:27 [1531] vdicnode01        cib:     info: cib_perform_op: Diff: --- 0.163.1 2
> > >         > Aug 31 23:38:27 [1531] vdicnode01        cib:     info: cib_perform_op: Diff: +++ 0.163.2 (null)
> > >         > Aug 31 23:38:27 [1531] vdicnode01        cib:     info: cib_perform_op: +  /cib:  @num_updates=2
> > >         > Aug 31 23:38:27 [1531] vdicnode01        cib:     info: cib_perform_op: +  /cib/status/node_state[@id='2']/lrm[@id='2']/lrm_resources/lrm_resource[@id='vm-vdicdb01']/lrm_rsc_op[@id='vm-vdicdb01_last_0']:  @crm-debug-origin=do_update_resource, @transition-magic=0:1;6:6:0:fe1a9b0a-816c-4b97-96cb-b90dbf71417a, @call-id=9, @rc-code=1, @op-status=0, @exec-time=78, @exit-reason=vdicdb01: live migration to vdicnode01 failed: 1
> > >         > Aug 31 23:38:27 [1531] vdicnode01        cib:     info: cib_perform_op: ++ /cib/status/node_state[@id='2']/lrm[@id='2']/lrm_resources/lrm_resource[@id='vm-vdicdb01']:  <lrm_rsc_op id="vm-vdicdb01_last_failure_0" operation_key="vm-vdicdb01_migrate_to_0" operation="migrate_to" crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" transition-key="6:6:0:fe1a9b0a-816c-4b97-96cb-b90dbf71417a" transition-magic="0:1;6:6:0:fe1a9b0a-816c-4b97-96cb-b90dbf71417a" exit-reason="vdicdb01: live migration to vdicn
> > >         > Aug 31 23:38:27 [1531] vdicnode01        cib:     info: cib_process_request:    Completed cib_modify operation for section status: OK (rc=0, origin=vdicnode02/crmd/14, version=0.163.2)
> > >         > Aug 31 23:38:27 [1536] vdicnode01       crmd:  warning: status_from_rc: Action 6 (vm-vdicdb01_migrate_to_0) on vdicnode02 failed (target: 0 vs. rc: 1): Error
> > >         > Aug 31 23:38:27 [1536] vdicnode01       crmd:   notice: abort_transition_graph: Transition aborted by operation vm-vdicdb01_migrate_to_0 'modify' on vdicnode02: Event failed | magic=0:1;6:6:0:fe1a9b0a-816c-4b97-96cb-b90dbf71417a cib=0.163.2 source=match_graph_event:310 complete=false
> > >         > Aug 31 23:38:27 [1536] vdicnode01       crmd:     info: match_graph_event:      Action vm-vdicdb01_migrate_to_0 (6) confirmed on vdicnode02 (rc=1)
> > >         > Aug 31 23:38:27 [1536] vdicnode01       crmd:     info: process_graph_event:    Detected action (6.6) vm-vdicdb01_migrate_to_0.9=unknown error: failed
> > >         > Aug 31 23:38:27 [1536] vdicnode01       crmd:  warning: status_from_rc: Action 6 (vm-vdicdb01_migrate_to_0) on vdicnode02 failed (target: 0 vs. rc: 1): Error
> > >         > Aug 31 23:38:27 [1536] vdicnode01       crmd:     info: abort_transition_graph: Transition aborted by operation vm-vdicdb01_migrate_to_0 'create' on vdicnode02: Event failed | magic=0:1;6:6:0:fe1a9b0a-816c-4b97-96cb-b90dbf71417a cib=0.163.2 source=match_graph_event:310 complete=false
> > >         > Aug 31 23:38:27 [1536] vdicnode01       crmd:     info: match_graph_event:      Action vm-vdicdb01_migrate_to_0 (6) confirmed on vdicnode02 (rc=1)
> > >         > Aug 31 23:38:27 [1536] vdicnode01       crmd:     info: process_graph_event:    Detected action (6.6) vm-vdicdb01_migrate_to_0.9=unknown error: failed
> > >         > Aug 31 23:38:27 [1536] vdicnode01       crmd:   notice: run_graph:    Transition 6 (Complete=1, Pending=0, Fired=0, Skipped=0, Incomplete=5, Source=/var/lib/pacemaker/pengine/pe-input-96.bz2): Complete
> > >         > Aug 31 23:38:27 [1536] vdicnode01       crmd:     info: do_state_transition:    State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE | input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd
> > >         > Aug 31 23:38:27 [1535] vdicnode01    pengine:     info: determine_online_status:        Node vdicnode02 is online
> > >         > Aug 31 23:38:27 [1535] vdicnode01    pengine:     info: determine_online_status:        Node vdicnode01 is online
> > >         > Aug 31 23:38:27 [1535] vdicnode01    pengine:  warning: unpack_rsc_op_failure:  Processing failed op migrate_to for vm-vdicdb01 on vdicnode02: unknown error (1)
> > >         > Aug 31 23:38:27 [1535] vdicnode01    pengine:  warning: unpack_rsc_op_failure:  Processing failed op migrate_to for vm-vdicdb01 on vdicnode02: unknown error (1)
> > >         > Aug 31 23:38:27 [1535] vdicnode01    pengine:     info: native_print: vm-vdicdb01     (ocf::heartbeat:VirtualDomain): FAILED
> > >         > Aug 31 23:38:27 [1535] vdicnode01    pengine:     info: native_print: 1 : vdicnode01
> > >         > Aug 31 23:38:27 [1535] vdicnode01    pengine:     info: native_print: 2 : vdicnode02
> > >         > Aug 31 23:38:27 [1535] vdicnode01    pengine:    error: native_create_actions:  Resource vm-vdicdb01 (ocf::VirtualDomain) is active on 2 nodes attempting recovery
> > >         > Aug 31 23:38:27 [1535] vdicnode01    pengine:  warning: native_create_actions:  See http://clusterlabs.org/wiki/FAQ#Resource_is_Too_Active for more information.
> > >         > Aug 31 23:38:27 [1535] vdicnode01    pengine:     info: RecurringOp: Start recurring monitor (10s) for vm-vdicdb01 on vdicnode01
> > >         > Aug 31 23:38:27 [1535] vdicnode01    pengine:   notice: LogActions: Recover vm-vdicdb01     (Started vdicnode01)
> > >         > Aug 31 23:38:27 [1535] vdicnode01    pengine:    error: process_pe_message:     Calculated transition 7 (with errors), saving inputs in /var/lib/pacemaker/pengine/pe-error-8.bz2
> > >         > Aug 31 23:38:27 [1536] vdicnode01       crmd:     info: do_state_transition:    State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE | input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response
> > >         > Aug 31 23:38:27 [1536] vdicnode01       crmd:     info: do_te_invoke: Processing graph 7 (ref=pe_calc-dc-1504215507-26) derived from /var/lib/pacemaker/pengine/pe-error-8.bz2
> > >         > Aug 31 23:38:27 [1536] vdicnode01       crmd:   notice: te_rsc_command: Initiating stop operation vm-vdicdb01_stop_0 locally on vdicnode01 | action 4
> > >         > Aug 31 23:38:27 [1536] vdicnode01       crmd:     info: do_lrm_rsc_op:  Performing key=4:7:0:fe1a9b0a-816c-4b97-96cb-b90dbf71417a op=vm-vdicdb01_stop_0
> > >         > Aug 31 23:38:27 [1533] vdicnode01       lrmd:     info: log_execute:    executing - rsc:vm-vdicdb01 action:stop call_id:6
> > >         > Aug 31 23:38:27 [1536] vdicnode01       crmd:   notice: te_rsc_command: Initiating stop operation vm-vdicdb01_stop_0 on vdicnode02 | action 2
> > >         > VirtualDomain(vm-vdicdb01)[5268]:      2017/08/31_23:38:27 INFO: Domain vdicdb01 already stopped.
> > >         > Aug 31 23:38:27 [1533] vdicnode01       lrmd:     info: log_finished: finished - rsc:vm-vdicdb01 action:stop call_id:6 pid:5268 exit-code:0 exec-time:57ms queue-time:0ms
> > >         > Aug 31 23:38:27 [1536] vdicnode01       crmd:   notice: process_lrm_event:      Result of stop operation for vm-vdicdb01 on vdicnode01: 0 (ok) | call=6 key=vm-vdicdb01_stop_0 confirmed=true cib-update=43
> > >         > Aug 31 23:38:27 [1531] vdicnode01        cib:     info: cib_process_request:    Forwarding cib_modify operation for section status to all (origin=local/crmd/43)
> > >         > Aug 31 23:38:27 [1531] vdicnode01        cib:     info: cib_perform_op: Diff: --- 0.163.2 2
> > >         > Aug 31 23:38:27 [1531] vdicnode01        cib:     info: cib_perform_op: Diff: +++ 0.163.3 (null)
> > >         > Aug 31 23:38:27 [1531] vdicnode01        cib:     info: cib_perform_op: +  /cib:  @num_updates=3
> > >         > Aug 31 23:38:27 [1531] vdicnode01        cib:     info: cib_perform_op: +  /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='vm-vdicdb01']/lrm_rsc_op[@id='vm-vdicdb01_last_0']:  @operation_key=vm-vdicdb01_stop_0, @operation=stop, @transition-key=4:7:0:fe1a9b0a-816c-4b97-96cb-b90dbf71417a, @transition-magic=0:0;4:7:0:fe1a9b0a-816c-4b97-96cb-b90dbf71417a, @call-id=6, @rc-code=0, @last-run=1504215507, @last-rc-change=1504215507, @exec-time=57
> > >         > Aug 31 23:38:27 [1536] vdicnode01       crmd:     info: match_graph_event:      Action vm-vdicdb01_stop_0 (4) confirmed on vdicnode01 (rc=0)
> > >         > Aug 31 23:38:27 [1531] vdicnode01        cib:     info: cib_process_request:    Completed cib_modify operation for section status: OK (rc=0, origin=vdicnode01/crmd/43, version=0.163.3)
> > >         > Aug 31 23:38:30 [1531] vdicnode01        cib:     info: cib_perform_op: Diff: --- 0.163.3 2
> > >         > Aug 31 23:38:30 [1531] vdicnode01        cib:     info: cib_perform_op: Diff: +++ 0.163.4 (null)
> > >         > Aug 31 23:38:30 [1531] vdicnode01        cib:     info: cib_perform_op: +  /cib:  @num_updates=4
> > >         > Aug 31 23:38:30 [1531] vdicnode01        cib:     info: cib_perform_op: +  /cib/status/node_state[@id='2']/lrm[@id='2']/lrm_resources/lrm_resource[@id='vm-vdicdb01']/lrm_rsc_op[@id='vm-vdicdb01_last_0']:  @operation_key=vm-vdicdb01_stop_0, @operation=stop, @transition-key=2:7:0:fe1a9b0a-816c-4b97-96cb-b90dbf71417a, @transition-magic=0:0;2:7:0:fe1a9b0a-816c-4b97-96cb-b90dbf71417a, @call-id=10, @rc-code=0, @exec-time=3159
> > >         > Aug 31 23:38:30 [1531] vdicnode01        cib:     info: cib_process_request:    Completed cib_modify operation for section status: OK (rc=0, origin=vdicnode02/crmd/15, version=0.163.4)
> > >         > Aug 31 23:38:30 [1536] vdicnode01       crmd:     info: match_graph_event:      Action vm-vdicdb01_stop_0 (2) confirmed on vdicnode02 (rc=0)
> > >         > Aug 31 23:38:30 [1536] vdicnode01       crmd:   notice: te_rsc_command: Initiating start operation vm-vdicdb01_start_0 locally on vdicnode01 | action 5
> > >         > Aug 31 23:38:30 [1536] vdicnode01       crmd:     info: do_lrm_rsc_op:  Performing key=5:7:0:fe1a9b0a-816c-4b97-96cb-b90dbf71417a op=vm-vdicdb01_start_0
> > >         > Aug 31 23:38:30 [1533] vdicnode01       lrmd:     info: log_execute:    executing - rsc:vm-vdicdb01 action:start call_id:7
> > >         > Aug 31 23:38:31 [1533] vdicnode01       lrmd:     info: log_finished: finished - rsc:vm-vdicdb01 action:start call_id:7 pid:5401 exit-code:0 exec-time:528ms queue-time:0ms
> > >         > Aug 31 23:38:31 [1536] vdicnode01       crmd:     info: action_synced_wait:     Managed VirtualDomain_meta-data_0 process 5486 exited with rc=0
> > >         > Aug 31 23:38:31 [1536] vdicnode01       crmd:   notice: process_lrm_event:      Result of start operation for vm-vdicdb01 on vdicnode01: 0 (ok) | call=7 key=vm-vdicdb01_start_0 confirmed=true cib-update=44
> > >         > Aug 31 23:38:31 [1531] vdicnode01        cib:     info: cib_process_request:    Forwarding cib_modify operation for section status to all (origin=local/crmd/44)
> > >         > Aug 31 23:38:31 [1531] vdicnode01        cib:     info: cib_perform_op: Diff: --- 0.163.4 2
> > >         > Aug 31 23:38:31 [1531] vdicnode01        cib:     info: cib_perform_op: Diff: +++ 0.163.5 (null)
> > >         > Aug 31 23:38:31 [1531] vdicnode01        cib:     info: cib_perform_op: +  /cib:  @num_updates=5
> > >         > Aug 31 23:38:31 [1531] vdicnode01        cib:     info: cib_perform_op: +  /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='vm-vdicdb01']/lrm_rsc_op[@id='vm-vdicdb01_last_0']:  @operation_key=vm-vdicdb01_start_0, @operation=start, @transition-key=5:7:0:fe1a9b0a-816c-4b97-96cb-b90dbf71417a, @transition-magic=0:0;5:7:0:fe1a9b0a-816c-4b97-96cb-b90dbf71417a, @call-id=7, @last-run=1504215510, @last-rc-change=1504215510, @exec-time=528
> > >         > Aug 31 23:38:31 [1531] vdicnode01        cib:     info: cib_process_request:    Completed cib_modify operation for section status: OK (rc=0, origin=vdicnode01/crmd/44, version=0.163.5)
> > >         > Aug 31 23:38:31 [1536] vdicnode01       crmd:     info: match_graph_event:      Action vm-vdicdb01_start_0 (5) confirmed on vdicnode01 (rc=0)
> > >         > Aug 31 23:38:31 [1536] vdicnode01       crmd:   notice: te_rsc_command: Initiating monitor operation vm-vdicdb01_monitor_10000 locally on vdicnode01 | action 6
> > >         > Aug 31 23:38:31 [1536] vdicnode01       crmd:     info: do_lrm_rsc_op:  Performing key=6:7:0:fe1a9b0a-816c-4b97-96cb-b90dbf71417a op=vm-vdicdb01_monitor_10000
> > >         > Aug 31 23:38:31 [1536] vdicnode01       crmd:     info: process_lrm_event:      Result of monitor operation for vm-vdicdb01 on vdicnode01: 0 (ok) | call=8 key=vm-vdicdb01_monitor_10000 confirmed=false cib-update=45
> > >         > Aug 31 23:38:31 [1531] vdicnode01        cib:     info: cib_process_request:    Forwarding cib_modify operation for section status to all (origin=local/crmd/45)
> > >         > Aug 31 23:38:31 [1531] vdicnode01        cib:     info: cib_perform_op: Diff: --- 0.163.5 2
> > >         > Aug 31 23:38:31 [1531] vdicnode01        cib:     info: cib_perform_op: Diff: +++ 0.163.6 (null)
> > >         > Aug 31 23:38:31 [1531] vdicnode01        cib:     info: cib_perform_op: +  /cib:  @num_updates=6
> > >         > Aug 31 23:38:31 [1531] vdicnode01        cib:     info: cib_perform_op: ++ /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='vm-vdicdb01']:  <lrm_rsc_op id="vm-vdicdb01_monitor_10000" operation_key="vm-vdicdb01_monitor_10000" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" transition-key="6:7:0:fe1a9b0a-816c-4b97-96cb-b90dbf71417a" transition-magic="0:0;6:7:0:fe1a9b0a-816c-4b97-96cb-b90dbf71417a" on_node="vdicnode01" call-id="8" rc-code="0" op-s
> > >         > Aug 31 23:38:31 [1531] vdicnode01        cib:     info: cib_process_request:    Completed cib_modify operation for section status: OK (rc=0, origin=vdicnode01/crmd/45, version=0.163.6)
> > >         > Aug 31 23:38:31 [1536] vdicnode01       crmd:     info: match_graph_event:      Action vm-vdicdb01_monitor_10000 (6) confirmed on vdicnode01 (rc=0)
> > >         > Aug 31 23:38:31 [1536] vdicnode01       crmd:   notice: run_graph:    Transition 7 (Complete=5, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-error-8.bz2): Complete
> > >         > Aug 31 23:38:31 [1536] vdicnode01       crmd:     info: do_log: Input I_TE_SUCCESS received in state S_TRANSITION_ENGINE from notify_crmd
> > >         > Aug 31 23:38:31 [1536] vdicnode01       crmd:   notice: do_state_transition:    State transition S_TRANSITION_ENGINE -> S_IDLE | input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd
> > >         > Aug 31 23:38:36 [1531] vdicnode01        cib:     info: cib_process_ping:       Reporting our current digest to vdicnode01: 9141ea9880f5a44b133003982d863bc8 for 0.163.6 (0x7f61cec092700)
> > >         >
> > >         >
> > >         > Thanks a lot
> > >         >
> > >         > 2017-08-31 16:20 GMT+02:00 Ken Gaillot <kgaillot@redhat.com>:
> > >         >         On Thu, 2017-08-31 at 01:13 +0200, Oscar Segarra wrote:
> > >         >         > Hi,
> > >         >         >
> > >         >         >
> > >         >         > In my environment, I have just two hosts, where the
> > >         >         > qemu-kvm process is launched by a regular user
> > >         >         > (oneadmin) - OpenNebula -
> > >         >         >
> > >         >         >
> > >         >         > I have created a VirtualDomain resource that starts
> > >         >         > and stops the VM perfectly. Nevertheless, when I
> > >         >         > change the location weight in order to force the
> > >         >         > migration, it raises a migration failure "error: 1"
> > >         >         >
> > >         >         >
> > >         >         > If I execute the virsh migrate command (that appears
> > >         >         > in corosync.log) from the command line, it works
> > >         >         > perfectly.
> > >         >         >
> > >         >         >
> > >         >         > Has anybody experienced the same issue?
> > >         >         >
> > >         >         >
> > >         >         > Thanks in advance for your help
> > >         >
> > >         >
> > >         >         If something works from the command line but not
> > >         >         when run by a daemon, my first suspicion is
> > >         >         SELinux. Check the audit log for denials around
> > >         >         that time.
> > >         >
> > >         >         I'd also check the system log and the Pacemaker
> > >         >         detail log around that time to see if there is
> > >         >         any more information.
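> > >         >
> > >         >         For example (illustrative commands, assuming
> > >         >         auditd is enabled):
> > >         >
> > >         >           ausearch -m avc -ts recent
> > >         >           grep -i denied /var/log/audit/audit.log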
> > >         >         --
> > >         >         Ken Gaillot <kgaillot@redhat.com>
> > >         >
> > >         >
> > >         >
> > >         >
> > >         >         _______________________________________________
> > >         >         Users mailing list: Users@clusterlabs.org
> > >         >         http://lists.clusterlabs.org/mailman/listinfo/users
> > >         >
> > >         >         Project Home: http://www.clusterlabs.org
> > >         >         Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> > >         >         Bugs: http://bugs.clusterlabs.org
> > >         >
> > >         >
> > >
> > >
> > >         --
> > >         Ken Gaillot <kgaillot@redhat.com>
> > >
> > >
> > >
> > >
> >
> > --
> > Ken Gaillot <kgaillot@redhat.com>
> >
> >
>
>
--
Ken Gaillot <kgaillot@redhat.com>