<div dir="ltr">Hi, <div><br></div><div>I have updated the known_hosts:</div><div><br></div><div>Now, I get the following error:</div><div><br></div><div><div><font face="monospace, monospace">Sep 02 01:03:41 [1535] vdicnode01 cib: info: cib_perform_op: + /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='vm-vdicdb01']/lrm_rsc_op[@id='vm-vdicdb01_last_0']: @operation_key=vm-vdicdb01_migrate_to_0, @operation=migrate_to, @crm-debug-origin=cib_action_update, @transition-key=6:27:0:a7fef266-46c3-429e-ab00-c1a0aab24da5, @transition-magic=-1:193;6:27:0:a7fef266-46c3-429e-ab00-c1a0aab24da5, @call-id=-1, @rc-code=193, @op-status=-1, @last-run=1504307021, @last-rc-c</font></div><div><font face="monospace, monospace">Sep 02 01:03:41 [1535] vdicnode01 cib: info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=vdicnode01/crmd/77, version=0.169.1)</font></div><div><font face="monospace, monospace">VirtualDomain(vm-vdicdb01)[13085]: 2017/09/02_01:03:41 INFO: vdicdb01: Starting live migration to vdicnode02 (using: virsh --connect=qemu:///system --quiet migrate --live vdicdb01 qemu+ssh://vdicnode02/system ).</font></div><div><font face="monospace, monospace">VirtualDomain(vm-vdicdb01)[13085]: 2017/09/02_01:03:41 ERROR: vdicdb01: live migration to vdicnode02 failed: 1</font></div><div><font face="monospace, monospace"> ]p 02 01:03:41 [1537] vdicnode01 lrmd: notice: operation_finished: vm-vdicdb01_migrate_to_0:13085:stderr [ error: Cannot recv data: Permission denied, please try again.</font></div><div><font face="monospace, monospace"> ]p 02 01:03:41 [1537] vdicnode01 lrmd: notice: operation_finished: vm-vdicdb01_migrate_to_0:13085:stderr [ Permission denied, please try again.</font></div><div><font face="monospace, monospace">Sep 02 01:03:41 [1537] vdicnode01 lrmd: notice: operation_finished: vm-vdicdb01_migrate_to_0:13085:stderr [ Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).: Connection reset by peer ]</font></div><div><font face="monospace, monospace">Sep 02 01:03:41 [1537] vdicnode01 lrmd: notice: operation_finished: vm-vdicdb01_migrate_to_0:13085:stderr [ ocf-exit-reason:vdicdb01: live migration to vdicnode02 failed: 1 ]</font></div><div><font face="monospace, monospace">Sep 02 01:03:41 [1537] vdicnode01 lrmd: info: log_finished: finished - rsc:vm-vdicdb01 action:migrate_to call_id:16 pid:13085 exit-code:1 exec-time:119ms queue-time:0ms</font></div><div><font face="monospace, monospace">Sep 02 01:03:41 [1540] vdicnode01 crmd: notice: process_lrm_event: Result of migrate_to operation for vm-vdicdb01 on vdicnode01: 1 (unknown error) | call=16 key=vm-vdicdb01_migrate_to_0 confirmed=true cib-update=78</font></div><div><font face="monospace, monospace">Sep 02 01:03:41 [1540] vdicnode01 crmd: notice: process_lrm_event: vdicnode01-vm-vdicdb01_migrate_to_0:16 [ error: Cannot recv data: Permission denied, please try again.\r\nPermission denied, please try again.\r\nPermission denied (publickey,gssapi-keyex,gssapi-with-mic,password).: Connection reset by peer\nocf-exit-reason:vdicdb01: live migration to vdicnode02 failed: 1\n ]</font></div><div><font face="monospace, monospace">Sep 02 01:03:41 [1535] vdicnode01 cib: info: cib_process_request: Forwarding cib_modify operation for section status to all (origin=local/crmd/78)</font></div><div><font face="monospace, monospace">Sep 02 01:03:41 [1535] vdicnode01 cib: info: cib_perform_op: Diff: --- 0.169.1 2</font></div><div><font face="monospace, monospace">Sep 02 01:03:41 
[1535] vdicnode01 cib: info: cib_perform_op: Diff: +++ 0.169.2 (null)</font></div>
<div><font face="monospace, monospace">Sep 02 01:03:41 [1535] vdicnode01 cib: info: cib_perform_op: + /cib: @num_updates=2</font></div>
<div><font face="monospace, monospace">Sep 02 01:03:41 [1535] vdicnode01 cib: info: cib_perform_op: + /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='vm-vdicdb01']/lrm_rsc_op[@id='vm-vdicdb01_last_0']: @crm-debug-origin=do_update_resource, @transition-magic=0:1;6:27:0:a7fef266-46c3-429e-ab00-c1a0aab24da5, @call-id=16, @rc-code=1, @op-status=0, @exec-time=119, @exit-reason=vdicdb01: live migration to vdicnode02 failed: 1</font></div>
<div><font face="monospace, monospace">Sep 02 01:03:4</font></div></div><div><br></div>
<div>as root <-- system prompts the password<br></div>
<div><div><font face="monospace, monospace">[root@vdicnode01 .ssh]# virsh --connect=qemu:///system --quiet migrate --live vdicdb01 qemu+ssh://vdicnode02/system</font></div><div><font face="monospace, monospace">root@vdicnode02's password:</font></div></div><div><br></div>
<div>as oneadmin (the user that runs the qemu-kvm process) <-- does not prompt the password</div>
<div><font face="monospace, monospace">virsh --connect=qemu:///system --quiet migrate --live vdicdb01 qemu+ssh://vdicnode02/system<br></font></div><div><br></div>
<div>Must I configure a passwordless SSH connection for root in order to make live migration work? (If so, the key setup I would try is sketched after the quoted thread below.)<br></div><div><br></div>
<div>Or is there any way to instruct Pacemaker to use my oneadmin user for migrations instead of root?</div><div><br></div>
<div>Thanks a lot!<br></div><div><br></div></div>
<div class="gmail_extra"><br><div class="gmail_quote">2017-09-01 23:14 GMT+02:00 Ken Gaillot <span dir="ltr"><<a href="mailto:kgaillot@redhat.com" target="_blank">kgaillot@redhat.com</a>></span>:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">On Fri, 2017-09-01 at 00:26 +0200, Oscar Segarra wrote:<br>
> Hi,<br>
><br>
><br>
</span><span class="">> Yes, it is....<br>
><br>
><br>
> The qemu-kvm process is executed by the oneadmin user.<br>
><br>
><br>
> When the cluster tries the live migration, which users come into play?<br>
><br>
><br>
> oneadmin<br>
> root<br>
> hacluster<br>
><br>
><br>
> I have just configured a passwordless ssh connection for oneadmin.<br>
><br>
><br>
> Do I need to configure any other passwordless ssh connection with any<br>
> other user?<br>
><br>
><br>
> What user executes the virsh migrate --live?<br>
<br>
</span>The cluster executes resource actions as root.<br>
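<br>
For example, you can confirm on either node that lrmd, the daemon that<br>
spawns the resource agents (its pid appears in the logs you posted), is<br>
running as root. A quick check, assuming a procps-style ps:<br>
<br>
<font face="monospace, monospace"># ps -C lrmd -o user=,comm=<br>
root     lrmd</font><br>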
<span class=""><br>
> Is there any way to check the ssh keys?<br>
<br>
</span>I'd just log in once to the host as root from each cluster node, to make<br>
sure it works, and accept the host key when asked.<br>
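<br>
Something like this, run as root on each node against the other (a minimal<br>
sketch using your node names; "true" is a no-op, so you just get the<br>
host key prompt once and can accept it):<br>
<br>
<font face="monospace, monospace"># on vdicnode01<br>
ssh root@vdicnode02 true<br>
# on vdicnode02<br>
ssh root@vdicnode01 true</font><br>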
<div class="HOEnZb"><div class="h5"><br>
><br>
> Sorry for all these questions.<br>
><br>
><br>
> Thanks a lot<br>
><br>
><br>
><br>
><br>
><br>
><br>
> On 1 Sep 2017 at 0:12, "Ken Gaillot" <<a href="mailto:kgaillot@redhat.com">kgaillot@redhat.com</a>> wrote:<br>
> On Thu, 2017-08-31 at 23:45 +0200, Oscar Segarra wrote:<br>
> > Hi Ken,<br>
> ><br>
> ><br>
> > Thanks a lot for your quick answer:<br>
> ><br>
> ><br>
> > Regarding selinux, it is disabled. The firewall is disabled as<br>
> well.<br>
> ><br>
> ><br>
> > [root@vdicnode01 ~]# sestatus<br>
> > SELinux status: disabled<br>
> ><br>
> ><br>
> > [root@vdicnode01 ~]# service firewalld status<br>
> > Redirecting to /bin/systemctl status firewalld.service<br>
> > ● firewalld.service - firewalld - dynamic firewall daemon<br>
> > Loaded: loaded<br>
> (/usr/lib/systemd/system/<wbr>firewalld.service;<br>
> > disabled; vendor preset: enabled)<br>
> > Active: inactive (dead)<br>
> > Docs: man:firewalld(1)<br>
> ><br>
> ><br>
> > On migration, it performs a graceful shutdown and a start<br>
> on the new<br>
> > node.<br>
> ><br>
> ><br>
> > I attach the logs when trying to migrate from vdicnode02 to<br>
> > vdicnode01:<br>
> ><br>
> ><br>
> > vdicnode02 corosync.log:<br>
> > Aug 31 23:38:17 [1521] vdicnode02 cib: info:<br>
> > cib_perform_op: Diff: --- 0.161.2 2<br>
> > Aug 31 23:38:17 [1521] vdicnode02 cib: info:<br>
> > cib_perform_op: Diff: +++ 0.162.0 (null)<br>
> > Aug 31 23:38:17 [1521] vdicnode02 cib: info:<br>
> > cib_perform_op:<br>
> ><br>
> -- /cib/configuration/<wbr>constraints/rsc_location[@id='<wbr>location-vm-vdicdb01-<wbr>vdicnode01--INFINITY']<br>
> > Aug 31 23:38:17 [1521] vdicnode02 cib: info:<br>
> > cib_perform_op: + /cib: @epoch=162, @num_updates=0<br>
> > Aug 31 23:38:17 [1521] vdicnode02 cib: info:<br>
> > cib_process_request: Completed cib_replace operation for<br>
> section<br>
> > configuration: OK (rc=0, origin=vdicnode01/cibadmin/2,<br>
> > version=0.162.0)<br>
> > Aug 31 23:38:17 [1521] vdicnode02 cib: info:<br>
> > cib_file_backup: Archived previous version<br>
> > as /var/lib/pacemaker/cib/cib-65.<wbr>raw<br>
> > Aug 31 23:38:17 [1521] vdicnode02 cib: info:<br>
> > cib_file_write_with_digest: Wrote version 0.162.0 of the<br>
> CIB to<br>
> > disk (digest: 1f87611b60cd7c48b95b6b788b47f6<wbr>5f)<br>
> > Aug 31 23:38:17 [1521] vdicnode02 cib: info:<br>
> > cib_file_write_with_digest: Reading cluster<br>
> configuration<br>
> > file /var/lib/pacemaker/cib/cib.<wbr>jt2KPw<br>
> > (digest: /var/lib/pacemaker/cib/cib.<wbr>Kwqfpl)<br>
> > Aug 31 23:38:22 [1521] vdicnode02 cib: info:<br>
> > cib_process_ping: Reporting our current digest to<br>
> vdicnode01:<br>
> > dace3a23264934279d439420d5a716<wbr>cc for 0.162.0 (0x7f96bb26c5c0<br>
> 0)<br>
> > Aug 31 23:38:27 [1521] vdicnode02 cib: info:<br>
> > cib_perform_op: Diff: --- 0.162.0 2<br>
> > Aug 31 23:38:27 [1521] vdicnode02 cib: info:<br>
> > cib_perform_op: Diff: +++ 0.163.0 (null)<br>
> > Aug 31 23:38:27 [1521] vdicnode02 cib: info:<br>
> > cib_perform_op: + /cib: @epoch=163<br>
> > Aug 31 23:38:27 [1521] vdicnode02 cib: info:<br>
> > cib_perform_op: ++ /cib/configuration/<wbr>constraints:<br>
> <rsc_location<br>
> > id="location-vm-vdicdb01-<wbr>vdicnode02--INFINITY"<br>
> node="vdicnode02"<br>
> > rsc="vm-vdicdb01" score="-INFINITY"/><br>
> > Aug 31 23:38:27 [1521] vdicnode02 cib: info:<br>
> > cib_process_request: Completed cib_replace operation for<br>
> section<br>
> > configuration: OK (rc=0, origin=vdicnode01/cibadmin/2,<br>
> > version=0.163.0)<br>
> > Aug 31 23:38:27 [1521] vdicnode02 cib: info:<br>
> > cib_file_backup: Archived previous version<br>
> > as /var/lib/pacemaker/cib/cib-66.<wbr>raw<br>
> > Aug 31 23:38:27 [1521] vdicnode02 cib: info:<br>
> > cib_file_write_with_digest: Wrote version 0.163.0 of the<br>
> CIB to<br>
> > disk (digest: 47a548b36746de9275d66cc6aeb0fd<wbr>c4)<br>
> > Aug 31 23:38:27 [1521] vdicnode02 cib: info:<br>
> > cib_file_write_with_digest: Reading cluster<br>
> configuration<br>
> > file /var/lib/pacemaker/cib/cib.<wbr>rcgXiT<br>
> > (digest: /var/lib/pacemaker/cib/cib.<wbr>7geMfi)<br>
> > Aug 31 23:38:27 [1523] vdicnode02 lrmd: info:<br>
> > cancel_recurring_action: Cancelling ocf operation<br>
> > vm-vdicdb01_monitor_10000<br>
> > Aug 31 23:38:27 [1526] vdicnode02 crmd: info:<br>
> > do_lrm_rsc_op: Performing<br>
> > key=6:6:0:fe1a9b0a-816c-4b97-<wbr>96cb-b90dbf71417a<br>
> > op=vm-vdicdb01_migrate_to_0<br>
> > Aug 31 23:38:27 [1523] vdicnode02 lrmd: info:<br>
> log_execute:<br>
> > executing - rsc:vm-vdicdb01 action:migrate_to call_id:9<br>
> > Aug 31 23:38:27 [1526] vdicnode02 crmd: info:<br>
> > process_lrm_event: Result of monitor operation for<br>
> vm-vdicdb01 on<br>
> > vdicnode02: Cancelled | call=7 key=vm-vdicdb01_monitor_10000<br>
> > confirmed=true<br>
> > Aug 31 23:38:27 [1521] vdicnode02 cib: info:<br>
> > cib_perform_op: Diff: --- 0.163.0 2<br>
> > Aug 31 23:38:27 [1521] vdicnode02 cib: info:<br>
> > cib_perform_op: Diff: +++ 0.163.1 (null)<br>
> > Aug 31 23:38:27 [1521] vdicnode02 cib: info:<br>
> > cib_perform_op: + /cib: @num_updates=1<br>
> > Aug 31 23:38:27 [1521] vdicnode02 cib: info:<br>
> > cib_perform_op: +<br>
> > /cib/status/node_state[@id='2'<wbr>]/lrm[@id='2']/lrm_resources/<wbr>lrm_resource[@id='vm-vdicdb01'<wbr>]/lrm_rsc_op[@id='vm-vdicdb01_<wbr>last_0']: @operation_key=vm-vdicdb01_<wbr>migrate_to_0, @operation=migrate_to, @crm-debug-origin=cib_action_<wbr>update, @transition-key=6:6:0:<wbr>fe1a9b0a-816c-4b97-96cb-<wbr>b90dbf71417a, @transition-magic=-1:193;6:6:<wbr>0:fe1a9b0a-816c-4b97-96cb-<wbr>b90dbf71417a, @call-id=-1, @rc-code=193, @op-status=-1, @last-run=1504215507, @last-rc-cha<br>
> > Aug 31 23:38:27 [1521] vdicnode02 cib: info:<br>
> > cib_process_request: Completed cib_modify operation for<br>
> section<br>
> > status: OK (rc=0, origin=vdicnode01/crmd/41,<br>
> version=0.163.1)<br>
> > VirtualDomain(vm-vdicdb01)[<wbr>5241]: 2017/08/31_23:38:27<br>
> INFO:<br>
> > vdicdb01: Starting live migration to vdicnode01 (using:<br>
> virsh<br>
> > --connect=qemu:///system --quiet migrate --live vdicdb01<br>
> qemu<br>
> > +ssh://vdicnode01/system ).<br>
> > VirtualDomain(vm-vdicdb01)[<wbr>5241]: 2017/08/31_23:38:27<br>
> ERROR:<br>
> > vdicdb01: live migration to vdicnode01 failed: 1<br>
> > Aug 31 23:38:27 [1523] vdicnode02 lrmd: notice:<br>
> > operation_finished: vm-vdicdb01_migrate_to_0:5241:<wbr>stderr<br>
> [ error:<br>
> > Cannot recv data: Host key verification failed.: Connection<br>
> reset by<br>
> > peer ]<br>
><br>
><br>
> ^^^ There you go. Sounds like the host key isn't being<br>
> accepted. No idea<br>
> why though.<br>
><br>
><br>
><br>
> > Aug 31 23:38:27 [1523] vdicnode02 lrmd: notice:<br>
> > operation_finished: vm-vdicdb01_migrate_to_0:5241:<wbr>stderr<br>
> > [ ocf-exit-reason:vdicdb01: live migration to vdicnode01<br>
> failed: 1 ]<br>
> > Aug 31 23:38:27 [1523] vdicnode02 lrmd: info:<br>
> log_finished:<br>
> > finished - rsc:vm-vdicdb01 action:migrate_to call_id:9<br>
> pid:5241<br>
> > exit-code:1 exec-time:78ms queue-time:0ms<br>
> > Aug 31 23:38:27 [1526] vdicnode02 crmd: notice:<br>
> > process_lrm_event: Result of migrate_to operation for<br>
> vm-vdicdb01<br>
> > on vdicnode02: 1 (unknown error) | call=9<br>
> key=vm-vdicdb01_migrate_to_0<br>
> > confirmed=true cib-update=14<br>
> > Aug 31 23:38:27 [1526] vdicnode02 crmd: notice:<br>
> > process_lrm_event:<br>
> vdicnode02-vm-vdicdb01_<wbr>migrate_to_0:9 [ error:<br>
> > Cannot recv data: Host key verification failed.: Connection<br>
> reset by<br>
> > peer\nocf-exit-reason:<wbr>vdicdb01: live migration to vdicnode01<br>
> failed: 1<br>
> > \n ]<br>
> > Aug 31 23:38:27 [1521] vdicnode02 cib: info:<br>
> > cib_process_request: Forwarding cib_modify operation for<br>
> section<br>
> > status to all (origin=local/crmd/14)<br>
> > Aug 31 23:38:27 [1521] vdicnode02 cib: info:<br>
> > cib_perform_op: Diff: --- 0.163.1 2<br>
> > Aug 31 23:38:27 [1521] vdicnode02 cib: info:<br>
> > cib_perform_op: Diff: +++ 0.163.2 (null)<br>
> > Aug 31 23:38:27 [1521] vdicnode02 cib: info:<br>
> > cib_perform_op: + /cib: @num_updates=2<br>
> > Aug 31 23:38:27 [1521] vdicnode02 cib: info:<br>
> > cib_perform_op: +<br>
> > /cib/status/node_state[@id='2'<wbr>]/lrm[@id='2']/lrm_resources/<wbr>lrm_resource[@id='vm-vdicdb01'<wbr>]/lrm_rsc_op[@id='vm-vdicdb01_<wbr>last_0']: @crm-debug-origin=do_update_<wbr>resource, @transition-magic=0:1;6:6:0:<wbr>fe1a9b0a-816c-4b97-96cb-<wbr>b90dbf71417a, @call-id=9, @rc-code=1, @op-status=0, @exec-time=78, @exit-reason=vdicdb01: live migration to vdicnode01 failed: 1<br>
> > Aug 31 23:38:27 [1521] vdicnode02 cib: info:<br>
> > cib_perform_op:<br>
> ><br>
> ++ /cib/status/node_state[@id='2'<wbr>]/lrm[@id='2']/lrm_resources/<wbr>lrm_resource[@id='vm-vdicdb01'<wbr>]: <lrm_rsc_op id="vm-vdicdb01_last_failure_<wbr>0" operation_key="vm-vdicdb01_<wbr>migrate_to_0" operation="migrate_to" crm-debug-origin="do_update_<wbr>resource" crm_feature_set="3.0.10" transition-key="6:6:0:<wbr>fe1a9b0a-816c-4b97-96cb-<wbr>b90dbf71417a" transition-magic="0:1;6:6:0:<wbr>fe1a9b0a-816c-4b97-96cb-<wbr>b90dbf71417a" exit-reason="vdicdb01: live migration to vdicn<br>
> > Aug 31 23:38:27 [1521] vdicnode02 cib: info:<br>
> > cib_process_request: Completed cib_modify operation for<br>
> section<br>
> > status: OK (rc=0, origin=vdicnode02/crmd/14,<br>
> version=0.163.2)<br>
> > Aug 31 23:38:27 [1526] vdicnode02 crmd: info:<br>
> > do_lrm_rsc_op: Performing<br>
> > key=2:7:0:fe1a9b0a-816c-4b97-<wbr>96cb-b90dbf71417a<br>
> op=vm-vdicdb01_stop_0<br>
> > Aug 31 23:38:27 [1523] vdicnode02 lrmd: info:<br>
> log_execute:<br>
> > executing - rsc:vm-vdicdb01 action:stop call_id:10<br>
> > VirtualDomain(vm-vdicdb01)[<wbr>5285]: 2017/08/31_23:38:27<br>
> INFO:<br>
> > Issuing graceful shutdown request for domain vdicdb01.<br>
> > Aug 31 23:38:27 [1521] vdicnode02 cib: info:<br>
> > cib_perform_op: Diff: --- 0.163.2 2<br>
> > Aug 31 23:38:27 [1521] vdicnode02 cib: info:<br>
> > cib_perform_op: Diff: +++ 0.163.3 (null)<br>
> > Aug 31 23:38:27 [1521] vdicnode02 cib: info:<br>
> > cib_perform_op: + /cib: @num_updates=3<br>
> > Aug 31 23:38:27 [1521] vdicnode02 cib: info:<br>
> > cib_perform_op: +<br>
> > /cib/status/node_state[@id='1'<wbr>]/lrm[@id='1']/lrm_resources/<wbr>lrm_resource[@id='vm-vdicdb01'<wbr>]/lrm_rsc_op[@id='vm-vdicdb01_<wbr>last_0']: @operation_key=vm-vdicdb01_<wbr>stop_0, @operation=stop, @transition-key=4:7:0:<wbr>fe1a9b0a-816c-4b97-96cb-<wbr>b90dbf71417a, @transition-magic=0:0;4:7:0:<wbr>fe1a9b0a-816c-4b97-96cb-<wbr>b90dbf71417a, @call-id=6, @rc-code=0, @last-run=1504215507, @last-rc-change=1504215507, @exec-time=57<br>
> > Aug 31 23:38:27 [1521] vdicnode02 cib: info:<br>
> > cib_process_request: Completed cib_modify operation for<br>
> section<br>
> > status: OK (rc=0, origin=vdicnode01/crmd/43,<br>
> version=0.163.3)<br>
> > Aug 31 23:38:30 [1523] vdicnode02 lrmd: info:<br>
> log_finished:<br>
> > finished - rsc:vm-vdicdb01 action:stop call_id:10 pid:5285<br>
> exit-code:0<br>
> > exec-time:3159ms queue-time:0ms<br>
> > Aug 31 23:38:30 [1526] vdicnode02 crmd: notice:<br>
> > process_lrm_event: Result of stop operation for<br>
> vm-vdicdb01 on<br>
> > vdicnode02: 0 (ok) | call=10 key=vm-vdicdb01_stop_0<br>
> confirmed=true<br>
> > cib-update=15<br>
> > Aug 31 23:38:30 [1521] vdicnode02 cib: info:<br>
> > cib_process_request: Forwarding cib_modify operation for<br>
> section<br>
> > status to all (origin=local/crmd/15)<br>
> > Aug 31 23:38:30 [1521] vdicnode02 cib: info:<br>
> > cib_perform_op: Diff: --- 0.163.3 2<br>
> > Aug 31 23:38:30 [1521] vdicnode02 cib: info:<br>
> > cib_perform_op: Diff: +++ 0.163.4 (null)<br>
> > Aug 31 23:38:30 [1521] vdicnode02 cib: info:<br>
> > cib_perform_op: + /cib: @num_updates=4<br>
> > Aug 31 23:38:30 [1521] vdicnode02 cib: info:<br>
> > cib_perform_op: +<br>
> > /cib/status/node_state[@id='2'<wbr>]/lrm[@id='2']/lrm_resources/<wbr>lrm_resource[@id='vm-vdicdb01'<wbr>]/lrm_rsc_op[@id='vm-vdicdb01_<wbr>last_0']: @operation_key=vm-vdicdb01_<wbr>stop_0, @operation=stop, @transition-key=2:7:0:<wbr>fe1a9b0a-816c-4b97-96cb-<wbr>b90dbf71417a, @transition-magic=0:0;2:7:0:<wbr>fe1a9b0a-816c-4b97-96cb-<wbr>b90dbf71417a, @call-id=10, @rc-code=0, @exec-time=3159<br>
> > Aug 31 23:38:30 [1521] vdicnode02 cib: info:<br>
> > cib_process_request: Completed cib_modify operation for<br>
> section<br>
> > status: OK (rc=0, origin=vdicnode02/crmd/15,<br>
> version=0.163.4)<br>
> > Aug 31 23:38:31 [1521] vdicnode02 cib: info:<br>
> > cib_perform_op: Diff: --- 0.163.4 2<br>
> > Aug 31 23:38:31 [1521] vdicnode02 cib: info:<br>
> > cib_perform_op: Diff: +++ 0.163.5 (null)<br>
> > Aug 31 23:38:31 [1521] vdicnode02 cib: info:<br>
> > cib_perform_op: + /cib: @num_updates=5<br>
> > Aug 31 23:38:31 [1521] vdicnode02 cib: info:<br>
> > cib_perform_op: +<br>
> > /cib/status/node_state[@id='1'<wbr>]/lrm[@id='1']/lrm_resources/<wbr>lrm_resource[@id='vm-vdicdb01'<wbr>]/lrm_rsc_op[@id='vm-vdicdb01_<wbr>last_0']: @operation_key=vm-vdicdb01_<wbr>start_0, @operation=start, @transition-key=5:7:0:<wbr>fe1a9b0a-816c-4b97-96cb-<wbr>b90dbf71417a, @transition-magic=0:0;5:7:0:<wbr>fe1a9b0a-816c-4b97-96cb-<wbr>b90dbf71417a, @call-id=7, @last-run=1504215510, @last-rc-change=1504215510, @exec-time=528<br>
> > Aug 31 23:38:31 [1521] vdicnode02 cib: info:<br>
> > cib_process_request: Completed cib_modify operation for<br>
> section<br>
> > status: OK (rc=0, origin=vdicnode01/crmd/44,<br>
> version=0.163.5)<br>
> > Aug 31 23:38:31 [1521] vdicnode02 cib: info:<br>
> > cib_perform_op: Diff: --- 0.163.5 2<br>
> > Aug 31 23:38:31 [1521] vdicnode02 cib: info:<br>
> > cib_perform_op: Diff: +++ 0.163.6 (null)<br>
> > Aug 31 23:38:31 [1521] vdicnode02 cib: info:<br>
> > cib_perform_op: + /cib: @num_updates=6<br>
> > Aug 31 23:38:31 [1521] vdicnode02 cib: info:<br>
> > cib_perform_op:<br>
> ><br>
> ++ /cib/status/node_state[@id='1'<wbr>]/lrm[@id='1']/lrm_resources/<wbr>lrm_resource[@id='vm-vdicdb01'<wbr>]: <lrm_rsc_op id="vm-vdicdb01_monitor_10000" operation_key="vm-vdicdb01_<wbr>monitor_10000" operation="monitor" crm-debug-origin="do_update_<wbr>resource" crm_feature_set="3.0.10" transition-key="6:7:0:<wbr>fe1a9b0a-816c-4b97-96cb-<wbr>b90dbf71417a" transition-magic="0:0;6:7:0:<wbr>fe1a9b0a-816c-4b97-96cb-<wbr>b90dbf71417a" on_node="vdicnode01" call-id="8" rc-code="0" op-s<br>
> > Aug 31 23:38:31 [1521] vdicnode02 cib: info:<br>
> > cib_process_request: Completed cib_modify operation for<br>
> section<br>
> > status: OK (rc=0, origin=vdicnode01/crmd/45,<br>
> version=0.163.6)<br>
> > Aug 31 23:38:36 [1521] vdicnode02 cib: info:<br>
> > cib_process_ping: Reporting our current digest to<br>
> vdicnode01:<br>
> > 9141ea9880f5a44b133003982d863b<wbr>c8 for 0.163.6 (0x7f96bb2625a0<br>
> 0)<br>
> ><br>
> ><br>
> ><br>
> ><br>
> ><br>
> ><br>
> > vdicnode01 - corosync.log<br>
> > Aug 31 23:38:27 [1531] vdicnode01 cib: info:<br>
> > cib_process_request: Forwarding cib_replace operation for<br>
> section<br>
> > configuration to all (origin=local/cibadmin/2)<br>
> > Aug 31 23:38:27 [1531] vdicnode01 cib: info:<br>
> > cib_perform_op: Diff: --- 0.162.0 2<br>
> > Aug 31 23:38:27 [1531] vdicnode01 cib: info:<br>
> > cib_perform_op: Diff: +++ 0.163.0 (null)<br>
> > Aug 31 23:38:27 [1531] vdicnode01 cib: info:<br>
> > cib_perform_op: + /cib: @epoch=163<br>
> > Aug 31 23:38:27 [1531] vdicnode01 cib: info:<br>
> > cib_perform_op: ++ /cib/configuration/<wbr>constraints:<br>
> <rsc_location<br>
> > id="location-vm-vdicdb01-<wbr>vdicnode02--INFINITY"<br>
> node="vdicnode02"<br>
> > rsc="vm-vdicdb01" score="-INFINITY"/><br>
> > Aug 31 23:38:27 [1531] vdicnode01 cib: info:<br>
> > cib_process_request: Completed cib_replace operation for<br>
> section<br>
> > configuration: OK (rc=0, origin=vdicnode01/cibadmin/2,<br>
> > version=0.163.0)<br>
> > Aug 31 23:38:27 [1536] vdicnode01 crmd: info:<br>
> > abort_transition_graph: Transition aborted by<br>
> > rsc_location.location-vm-<wbr>vdicdb01-vdicnode02--INFINITY<br>
> 'create':<br>
> > Non-status change | cib=0.163.0 source=te_update_diff:436<br>
> > path=/cib/configuration/<wbr>constraints complete=true<br>
> > Aug 31 23:38:27 [1536] vdicnode01 crmd: notice:<br>
> > do_state_transition: State transition S_IDLE -><br>
> S_POLICY_ENGINE |<br>
> > input=I_PE_CALC cause=C_FSA_INTERNAL<br>
> origin=abort_transition_graph<br>
> > Aug 31 23:38:27 [1531] vdicnode01 cib: info:<br>
> > cib_file_backup: Archived previous version<br>
> > as /var/lib/pacemaker/cib/cib-85.<wbr>raw<br>
> > Aug 31 23:38:27 [1531] vdicnode01 cib: info:<br>
> > cib_file_write_with_digest: Wrote version 0.163.0 of the<br>
> CIB to<br>
> > disk (digest: 47a548b36746de9275d66cc6aeb0fd<wbr>c4)<br>
> > Aug 31 23:38:27 [1531] vdicnode01 cib: info:<br>
> > cib_file_write_with_digest: Reading cluster<br>
> configuration<br>
> > file /var/lib/pacemaker/cib/cib.<wbr>npBIW2<br>
> > (digest: /var/lib/pacemaker/cib/cib.<wbr>bDogoB)<br>
> > Aug 31 23:38:27 [1535] vdicnode01 pengine: info:<br>
> > determine_online_status: Node vdicnode02 is online<br>
> > Aug 31 23:38:27 [1535] vdicnode01 pengine: info:<br>
> > determine_online_status: Node vdicnode01 is online<br>
> > Aug 31 23:38:27 [1535] vdicnode01 pengine: info:<br>
> native_print:<br>
> > vm-vdicdb01 (ocf::heartbeat:VirtualDomain)<wbr>: Started<br>
> vdicnode02<br>
> > Aug 31 23:38:27 [1535] vdicnode01 pengine: info:<br>
> RecurringOp:<br>
> > Start recurring monitor (10s) for vm-vdicdb01 on vdicnode01<br>
> > Aug 31 23:38:27 [1535] vdicnode01 pengine: notice:<br>
> LogActions:<br>
> > Migrate vm-vdicdb01 (Started vdicnode02 -> vdicnode01)<br>
> > Aug 31 23:38:27 [1535] vdicnode01 pengine: notice:<br>
> > process_pe_message: Calculated transition 6, saving<br>
> inputs<br>
> > in /var/lib/pacemaker/pengine/pe-<wbr>input-96.bz2<br>
> > Aug 31 23:38:27 [1536] vdicnode01 crmd: info:<br>
> > do_state_transition: State transition S_POLICY_ENGINE -><br>
> > S_TRANSITION_ENGINE | input=I_PE_SUCCESS cause=C_IPC_MESSAGE<br>
> > origin=handle_response<br>
> > Aug 31 23:38:27 [1536] vdicnode01 crmd: info:<br>
> do_te_invoke:<br>
> > Processing graph 6 (ref=pe_calc-dc-1504215507-24) derived<br>
> > from /var/lib/pacemaker/pengine/pe-<wbr>input-96.bz2<br>
> > Aug 31 23:38:27 [1536] vdicnode01 crmd: notice:<br>
> > te_rsc_command: Initiating migrate_to operation<br>
> > vm-vdicdb01_migrate_to_0 on vdicnode02 | action 6<br>
> > Aug 31 23:38:27 [1536] vdicnode01 crmd: info:<br>
> > create_operation_update: cib_action_update: Updating<br>
> resource<br>
> > vm-vdicdb01 after migrate_to op pending (interval=0)<br>
> > Aug 31 23:38:27 [1531] vdicnode01 cib: info:<br>
> > cib_process_request: Forwarding cib_modify operation for<br>
> section<br>
> > status to all (origin=local/crmd/41)<br>
> > Aug 31 23:38:27 [1531] vdicnode01 cib: info:<br>
> > cib_perform_op: Diff: --- 0.163.0 2<br>
> > Aug 31 23:38:27 [1531] vdicnode01 cib: info:<br>
> > cib_perform_op: Diff: +++ 0.163.1 (null)<br>
> > Aug 31 23:38:27 [1531] vdicnode01 cib: info:<br>
> > cib_perform_op: + /cib: @num_updates=1<br>
> > Aug 31 23:38:27 [1531] vdicnode01 cib: info:<br>
> > cib_perform_op: +<br>
> > /cib/status/node_state[@id='2'<wbr>]/lrm[@id='2']/lrm_resources/<wbr>lrm_resource[@id='vm-vdicdb01'<wbr>]/lrm_rsc_op[@id='vm-vdicdb01_<wbr>last_0']: @operation_key=vm-vdicdb01_<wbr>migrate_to_0, @operation=migrate_to, @crm-debug-origin=cib_action_<wbr>update, @transition-key=6:6:0:<wbr>fe1a9b0a-816c-4b97-96cb-<wbr>b90dbf71417a, @transition-magic=-1:193;6:6:<wbr>0:fe1a9b0a-816c-4b97-96cb-<wbr>b90dbf71417a, @call-id=-1, @rc-code=193, @op-status=-1, @last-run=1504215507, @last-rc-cha<br>
> > Aug 31 23:38:27 [1531] vdicnode01 cib: info:<br>
> > cib_process_request: Completed cib_modify operation for<br>
> section<br>
> > status: OK (rc=0, origin=vdicnode01/crmd/41,<br>
> version=0.163.1)<br>
> > Aug 31 23:38:27 [1531] vdicnode01 cib: info:<br>
> > cib_perform_op: Diff: --- 0.163.1 2<br>
> > Aug 31 23:38:27 [1531] vdicnode01 cib: info:<br>
> > cib_perform_op: Diff: +++ 0.163.2 (null)<br>
> > Aug 31 23:38:27 [1531] vdicnode01 cib: info:<br>
> > cib_perform_op: + /cib: @num_updates=2<br>
> > Aug 31 23:38:27 [1531] vdicnode01 cib: info:<br>
> > cib_perform_op: +<br>
> > /cib/status/node_state[@id='2'<wbr>]/lrm[@id='2']/lrm_resources/<wbr>lrm_resource[@id='vm-vdicdb01'<wbr>]/lrm_rsc_op[@id='vm-vdicdb01_<wbr>last_0']: @crm-debug-origin=do_update_<wbr>resource, @transition-magic=0:1;6:6:0:<wbr>fe1a9b0a-816c-4b97-96cb-<wbr>b90dbf71417a, @call-id=9, @rc-code=1, @op-status=0, @exec-time=78, @exit-reason=vdicdb01: live migration to vdicnode01 failed: 1<br>
> > Aug 31 23:38:27 [1531] vdicnode01 cib: info:<br>
> > cib_perform_op:<br>
> ><br>
> ++ /cib/status/node_state[@id='2'<wbr>]/lrm[@id='2']/lrm_resources/<wbr>lrm_resource[@id='vm-vdicdb01'<wbr>]: <lrm_rsc_op id="vm-vdicdb01_last_failure_<wbr>0" operation_key="vm-vdicdb01_<wbr>migrate_to_0" operation="migrate_to" crm-debug-origin="do_update_<wbr>resource" crm_feature_set="3.0.10" transition-key="6:6:0:<wbr>fe1a9b0a-816c-4b97-96cb-<wbr>b90dbf71417a" transition-magic="0:1;6:6:0:<wbr>fe1a9b0a-816c-4b97-96cb-<wbr>b90dbf71417a" exit-reason="vdicdb01: live migration to vdicn<br>
> > Aug 31 23:38:27 [1531] vdicnode01 cib: info:<br>
> > cib_process_request: Completed cib_modify operation for<br>
> section<br>
> > status: OK (rc=0, origin=vdicnode02/crmd/14,<br>
> version=0.163.2)<br>
> > Aug 31 23:38:27 [1536] vdicnode01 crmd: warning:<br>
> > status_from_rc: Action 6 (vm-vdicdb01_migrate_to_0) on<br>
> vdicnode02<br>
> > failed (target: 0 vs. rc: 1): Error<br>
> > Aug 31 23:38:27 [1536] vdicnode01 crmd: notice:<br>
> > abort_transition_graph: Transition aborted by operation<br>
> > vm-vdicdb01_migrate_to_0 'modify' on vdicnode02: Event<br>
> failed |<br>
> > magic=0:1;6:6:0:fe1a9b0a-816c-<wbr>4b97-96cb-b90dbf71417a<br>
> cib=0.163.2<br>
> > source=match_graph_event:310 complete=false<br>
> > Aug 31 23:38:27 [1536] vdicnode01 crmd: info:<br>
> > match_graph_event: Action vm-vdicdb01_migrate_to_0 (6)<br>
> confirmed<br>
> > on vdicnode02 (rc=1)<br>
> > Aug 31 23:38:27 [1536] vdicnode01 crmd: info:<br>
> > process_graph_event: Detected action (6.6)<br>
> > vm-vdicdb01_migrate_to_0.9=<wbr>unknown error: failed<br>
> > Aug 31 23:38:27 [1536] vdicnode01 crmd: warning:<br>
> > status_from_rc: Action 6 (vm-vdicdb01_migrate_to_0) on<br>
> vdicnode02<br>
> > failed (target: 0 vs. rc: 1): Error<br>
> > Aug 31 23:38:27 [1536] vdicnode01 crmd: info:<br>
> > abort_transition_graph: Transition aborted by operation<br>
> > vm-vdicdb01_migrate_to_0 'create' on vdicnode02: Event<br>
> failed |<br>
> > magic=0:1;6:6:0:fe1a9b0a-816c-<wbr>4b97-96cb-b90dbf71417a<br>
> cib=0.163.2<br>
> > source=match_graph_event:310 complete=false<br>
> > Aug 31 23:38:27 [1536] vdicnode01 crmd: info:<br>
> > match_graph_event: Action vm-vdicdb01_migrate_to_0 (6)<br>
> confirmed<br>
> > on vdicnode02 (rc=1)<br>
> > Aug 31 23:38:27 [1536] vdicnode01 crmd: info:<br>
> > process_graph_event: Detected action (6.6)<br>
> > vm-vdicdb01_migrate_to_0.9=<wbr>unknown error: failed<br>
> > Aug 31 23:38:27 [1536] vdicnode01 crmd: notice:<br>
> run_graph:<br>
> > Transition 6 (Complete=1, Pending=0, Fired=0, Skipped=0,<br>
> > Incomplete=5,<br>
> Source=/var/lib/pacemaker/<wbr>pengine/pe-input-96.bz2):<br>
> > Complete<br>
> > Aug 31 23:38:27 [1536] vdicnode01 crmd: info:<br>
> > do_state_transition: State transition S_TRANSITION_ENGINE<br>
> -><br>
> > S_POLICY_ENGINE | input=I_PE_CALC cause=C_FSA_INTERNAL<br>
> > origin=notify_crmd<br>
> > Aug 31 23:38:27 [1535] vdicnode01 pengine: info:<br>
> > determine_online_status: Node vdicnode02 is online<br>
> > Aug 31 23:38:27 [1535] vdicnode01 pengine: info:<br>
> > determine_online_status: Node vdicnode01 is online<br>
> > Aug 31 23:38:27 [1535] vdicnode01 pengine: warning:<br>
> > unpack_rsc_op_failure: Processing failed op migrate_to for<br>
> > vm-vdicdb01 on vdicnode02: unknown error (1)<br>
> > Aug 31 23:38:27 [1535] vdicnode01 pengine: warning:<br>
> > unpack_rsc_op_failure: Processing failed op migrate_to for<br>
> > vm-vdicdb01 on vdicnode02: unknown error (1)<br>
> > Aug 31 23:38:27 [1535] vdicnode01 pengine: info:<br>
> native_print:<br>
> > vm-vdicdb01 (ocf::heartbeat:VirtualDomain)<wbr>: FAILED<br>
> > Aug 31 23:38:27 [1535] vdicnode01 pengine: info:<br>
> native_print:<br>
> > 1 : vdicnode01<br>
> > Aug 31 23:38:27 [1535] vdicnode01 pengine: info:<br>
> native_print:<br>
> > 2 : vdicnode02<br>
> > Aug 31 23:38:27 [1535] vdicnode01 pengine: error:<br>
> > native_create_actions: Resource vm-vdicdb01<br>
> (ocf::VirtualDomain) is<br>
> > active on 2 nodes attempting recovery<br>
> > Aug 31 23:38:27 [1535] vdicnode01 pengine: warning:<br>
> > native_create_actions: See<br>
> > <a href="http://clusterlabs.org/wiki/FAQ#Resource_is_Too_Active" rel="noreferrer" target="_blank">http://clusterlabs.org/wiki/<wbr>FAQ#Resource_is_Too_Active</a> for<br>
> more<br>
> > information.<br>
> > Aug 31 23:38:27 [1535] vdicnode01 pengine: info:<br>
> RecurringOp:<br>
> > Start recurring monitor (10s) for vm-vdicdb01 on vdicnode01<br>
> > Aug 31 23:38:27 [1535] vdicnode01 pengine: notice:<br>
> LogActions:<br>
> > Recover vm-vdicdb01 (Started vdicnode01)<br>
> > Aug 31 23:38:27 [1535] vdicnode01 pengine: error:<br>
> > process_pe_message: Calculated transition 7 (with<br>
> errors), saving<br>
> > inputs in /var/lib/pacemaker/pengine/pe-<wbr>error-8.bz2<br>
> > Aug 31 23:38:27 [1536] vdicnode01 crmd: info:<br>
> > do_state_transition: State transition S_POLICY_ENGINE -><br>
> > S_TRANSITION_ENGINE | input=I_PE_SUCCESS cause=C_IPC_MESSAGE<br>
> > origin=handle_response<br>
> > Aug 31 23:38:27 [1536] vdicnode01 crmd: info:<br>
> do_te_invoke:<br>
> > Processing graph 7 (ref=pe_calc-dc-1504215507-26) derived<br>
> > from /var/lib/pacemaker/pengine/pe-<wbr>error-8.bz2<br>
> > Aug 31 23:38:27 [1536] vdicnode01 crmd: notice:<br>
> > te_rsc_command: Initiating stop operation vm-vdicdb01_stop_0<br>
> locally<br>
> > on vdicnode01 | action 4<br>
> > Aug 31 23:38:27 [1536] vdicnode01 crmd: info:<br>
> > do_lrm_rsc_op: Performing<br>
> > key=4:7:0:fe1a9b0a-816c-4b97-<wbr>96cb-b90dbf71417a<br>
> op=vm-vdicdb01_stop_0<br>
> > Aug 31 23:38:27 [1533] vdicnode01 lrmd: info:<br>
> log_execute:<br>
> > executing - rsc:vm-vdicdb01 action:stop call_id:6<br>
> > Aug 31 23:38:27 [1536] vdicnode01 crmd: notice:<br>
> > te_rsc_command: Initiating stop operation vm-vdicdb01_stop_0<br>
> on<br>
> > vdicnode02 | action 2<br>
> > VirtualDomain(vm-vdicdb01)[<wbr>5268]: 2017/08/31_23:38:27<br>
> INFO:<br>
> > Domain vdicdb01 already stopped.<br>
> > Aug 31 23:38:27 [1533] vdicnode01 lrmd: info:<br>
> log_finished:<br>
> > finished - rsc:vm-vdicdb01 action:stop call_id:6 pid:5268<br>
> exit-code:0<br>
> > exec-time:57ms queue-time:0ms<br>
> > Aug 31 23:38:27 [1536] vdicnode01 crmd: notice:<br>
> > process_lrm_event: Result of stop operation for<br>
> vm-vdicdb01 on<br>
> > vdicnode01: 0 (ok) | call=6 key=vm-vdicdb01_stop_0<br>
> confirmed=true<br>
> > cib-update=43<br>
> > Aug 31 23:38:27 [1531] vdicnode01 cib: info:<br>
> > cib_process_request: Forwarding cib_modify operation for<br>
> section<br>
> > status to all (origin=local/crmd/43)<br>
> > Aug 31 23:38:27 [1531] vdicnode01 cib: info:<br>
> > cib_perform_op: Diff: --- 0.163.2 2<br>
> > Aug 31 23:38:27 [1531] vdicnode01 cib: info:<br>
> > cib_perform_op: Diff: +++ 0.163.3 (null)<br>
> > Aug 31 23:38:27 [1531] vdicnode01 cib: info:<br>
> > cib_perform_op: + /cib: @num_updates=3<br>
> > Aug 31 23:38:27 [1531] vdicnode01 cib: info:<br>
> > cib_perform_op: +<br>
> > /cib/status/node_state[@id='1'<wbr>]/lrm[@id='1']/lrm_resources/<wbr>lrm_resource[@id='vm-vdicdb01'<wbr>]/lrm_rsc_op[@id='vm-vdicdb01_<wbr>last_0']: @operation_key=vm-vdicdb01_<wbr>stop_0, @operation=stop, @transition-key=4:7:0:<wbr>fe1a9b0a-816c-4b97-96cb-<wbr>b90dbf71417a, @transition-magic=0:0;4:7:0:<wbr>fe1a9b0a-816c-4b97-96cb-<wbr>b90dbf71417a, @call-id=6, @rc-code=0, @last-run=1504215507, @last-rc-change=1504215507, @exec-time=57<br>
> > Aug 31 23:38:27 [1536] vdicnode01 crmd: info:<br>
> > match_graph_event: Action vm-vdicdb01_stop_0 (4)<br>
> confirmed on<br>
> > vdicnode01 (rc=0)<br>
> > Aug 31 23:38:27 [1531] vdicnode01 cib: info:<br>
> > cib_process_request: Completed cib_modify operation for<br>
> section<br>
> > status: OK (rc=0, origin=vdicnode01/crmd/43,<br>
> version=0.163.3)<br>
> > Aug 31 23:38:30 [1531] vdicnode01 cib: info:<br>
> > cib_perform_op: Diff: --- 0.163.3 2<br>
> > Aug 31 23:38:30 [1531] vdicnode01 cib: info:<br>
> > cib_perform_op: Diff: +++ 0.163.4 (null)<br>
> > Aug 31 23:38:30 [1531] vdicnode01 cib: info:<br>
> > cib_perform_op: + /cib: @num_updates=4<br>
> > Aug 31 23:38:30 [1531] vdicnode01 cib: info:<br>
> > cib_perform_op: +<br>
> > /cib/status/node_state[@id='2'<wbr>]/lrm[@id='2']/lrm_resources/<wbr>lrm_resource[@id='vm-vdicdb01'<wbr>]/lrm_rsc_op[@id='vm-vdicdb01_<wbr>last_0']: @operation_key=vm-vdicdb01_<wbr>stop_0, @operation=stop, @transition-key=2:7:0:<wbr>fe1a9b0a-816c-4b97-96cb-<wbr>b90dbf71417a, @transition-magic=0:0;2:7:0:<wbr>fe1a9b0a-816c-4b97-96cb-<wbr>b90dbf71417a, @call-id=10, @rc-code=0, @exec-time=3159<br>
> > Aug 31 23:38:30 [1531] vdicnode01 cib: info:<br>
> > cib_process_request: Completed cib_modify operation for<br>
> section<br>
> > status: OK (rc=0, origin=vdicnode02/crmd/15,<br>
> version=0.163.4)<br>
> > Aug 31 23:38:30 [1536] vdicnode01 crmd: info:<br>
> > match_graph_event: Action vm-vdicdb01_stop_0 (2)<br>
> confirmed on<br>
> > vdicnode02 (rc=0)<br>
> > Aug 31 23:38:30 [1536] vdicnode01 crmd: notice:<br>
> > te_rsc_command: Initiating start operation<br>
> vm-vdicdb01_start_0 locally<br>
> > on vdicnode01 | action 5<br>
> > Aug 31 23:38:30 [1536] vdicnode01 crmd: info:<br>
> > do_lrm_rsc_op: Performing<br>
> > key=5:7:0:fe1a9b0a-816c-4b97-<wbr>96cb-b90dbf71417a<br>
> op=vm-vdicdb01_start_0<br>
> > Aug 31 23:38:30 [1533] vdicnode01 lrmd: info:<br>
> log_execute:<br>
> > executing - rsc:vm-vdicdb01 action:start call_id:7<br>
> > Aug 31 23:38:31 [1533] vdicnode01 lrmd: info:<br>
> log_finished:<br>
> > finished - rsc:vm-vdicdb01 action:start call_id:7 pid:5401<br>
> exit-code:0<br>
> > exec-time:528ms queue-time:0ms<br>
> > Aug 31 23:38:31 [1536] vdicnode01 crmd: info:<br>
> > action_synced_wait: Managed VirtualDomain_meta-data_0<br>
> process 5486<br>
> > exited with rc=0<br>
> > Aug 31 23:38:31 [1536] vdicnode01 crmd: notice:<br>
> > process_lrm_event: Result of start operation for<br>
> vm-vdicdb01 on<br>
> > vdicnode01: 0 (ok) | call=7 key=vm-vdicdb01_start_0<br>
> confirmed=true<br>
> > cib-update=44<br>
> > Aug 31 23:38:31 [1531] vdicnode01 cib: info:<br>
> > cib_process_request: Forwarding cib_modify operation for<br>
> section<br>
> > status to all (origin=local/crmd/44)<br>
> > Aug 31 23:38:31 [1531] vdicnode01 cib: info:<br>
> > cib_perform_op: Diff: --- 0.163.4 2<br>
> > Aug 31 23:38:31 [1531] vdicnode01 cib: info:<br>
> > cib_perform_op: Diff: +++ 0.163.5 (null)<br>
> > Aug 31 23:38:31 [1531] vdicnode01 cib: info:<br>
> > cib_perform_op: + /cib: @num_updates=5<br>
> > Aug 31 23:38:31 [1531] vdicnode01 cib: info:<br>
> > cib_perform_op: +<br>
> > /cib/status/node_state[@id='1'<wbr>]/lrm[@id='1']/lrm_resources/<wbr>lrm_resource[@id='vm-vdicdb01'<wbr>]/lrm_rsc_op[@id='vm-vdicdb01_<wbr>last_0']: @operation_key=vm-vdicdb01_<wbr>start_0, @operation=start, @transition-key=5:7:0:<wbr>fe1a9b0a-816c-4b97-96cb-<wbr>b90dbf71417a, @transition-magic=0:0;5:7:0:<wbr>fe1a9b0a-816c-4b97-96cb-<wbr>b90dbf71417a, @call-id=7, @last-run=1504215510, @last-rc-change=1504215510, @exec-time=528<br>
> > Aug 31 23:38:31 [1531] vdicnode01 cib: info:<br>
> > cib_process_request: Completed cib_modify operation for<br>
> section<br>
> > status: OK (rc=0, origin=vdicnode01/crmd/44,<br>
> version=0.163.5)<br>
> > Aug 31 23:38:31 [1536] vdicnode01 crmd: info:<br>
> > match_graph_event: Action vm-vdicdb01_start_0 (5)<br>
> confirmed on<br>
> > vdicnode01 (rc=0)<br>
> > Aug 31 23:38:31 [1536] vdicnode01 crmd: notice:<br>
> > te_rsc_command: Initiating monitor operation<br>
> vm-vdicdb01_monitor_10000<br>
> > locally on vdicnode01 | action 6<br>
> > Aug 31 23:38:31 [1536] vdicnode01 crmd: info:<br>
> > do_lrm_rsc_op: Performing<br>
> > key=6:7:0:fe1a9b0a-816c-4b97-<wbr>96cb-b90dbf71417a<br>
> > op=vm-vdicdb01_monitor_10000<br>
> > Aug 31 23:38:31 [1536] vdicnode01 crmd: info:<br>
> > process_lrm_event: Result of monitor operation for<br>
> vm-vdicdb01 on<br>
> > vdicnode01: 0 (ok) | call=8 key=vm-vdicdb01_monitor_10000<br>
> > confirmed=false cib-update=45<br>
> > Aug 31 23:38:31 [1531] vdicnode01 cib: info:<br>
> > cib_process_request: Forwarding cib_modify operation for<br>
> section<br>
> > status to all (origin=local/crmd/45)<br>
> > Aug 31 23:38:31 [1531] vdicnode01 cib: info:<br>
> > cib_perform_op: Diff: --- 0.163.5 2<br>
> > Aug 31 23:38:31 [1531] vdicnode01 cib: info:<br>
> > cib_perform_op: Diff: +++ 0.163.6 (null)<br>
> > Aug 31 23:38:31 [1531] vdicnode01 cib: info:<br>
> > cib_perform_op: + /cib: @num_updates=6<br>
> > Aug 31 23:38:31 [1531] vdicnode01 cib: info:<br>
> > cib_perform_op:<br>
> ><br>
> ++ /cib/status/node_state[@id='1'<wbr>]/lrm[@id='1']/lrm_resources/<wbr>lrm_resource[@id='vm-vdicdb01'<wbr>]: <lrm_rsc_op id="vm-vdicdb01_monitor_10000" operation_key="vm-vdicdb01_<wbr>monitor_10000" operation="monitor" crm-debug-origin="do_update_<wbr>resource" crm_feature_set="3.0.10" transition-key="6:7:0:<wbr>fe1a9b0a-816c-4b97-96cb-<wbr>b90dbf71417a" transition-magic="0:0;6:7:0:<wbr>fe1a9b0a-816c-4b97-96cb-<wbr>b90dbf71417a" on_node="vdicnode01" call-id="8" rc-code="0" op-s<br>
> > Aug 31 23:38:31 [1531] vdicnode01 cib: info:<br>
> > cib_process_request: Completed cib_modify operation for<br>
> section<br>
> > status: OK (rc=0, origin=vdicnode01/crmd/45,<br>
> version=0.163.6)<br>
> > Aug 31 23:38:31 [1536] vdicnode01 crmd: info:<br>
> > match_graph_event: Action vm-vdicdb01_monitor_10000 (6)<br>
> confirmed<br>
> > on vdicnode01 (rc=0)<br>
> > Aug 31 23:38:31 [1536] vdicnode01 crmd: notice:<br>
> run_graph:<br>
> > Transition 7 (Complete=5, Pending=0, Fired=0, Skipped=0,<br>
> > Incomplete=0,<br>
> Source=/var/lib/pacemaker/<wbr>pengine/pe-error-8.bz2):<br>
> > Complete<br>
> > Aug 31 23:38:31 [1536] vdicnode01 crmd: info:<br>
> do_log: Input<br>
> > I_TE_SUCCESS received in state S_TRANSITION_ENGINE from<br>
> notify_crmd<br>
> > Aug 31 23:38:31 [1536] vdicnode01 crmd: notice:<br>
> > do_state_transition: State transition S_TRANSITION_ENGINE<br>
> -> S_IDLE<br>
> > | input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd<br>
> > Aug 31 23:38:36 [1531] vdicnode01 cib: info:<br>
> > cib_process_ping: Reporting our current digest to<br>
> vdicnode01:<br>
> > 9141ea9880f5a44b133003982d863b<wbr>c8 for 0.163.6 (0x7f61cec09270<br>
> 0)<br>
> ><br>
> ><br>
> > Thanks a lot<br>
> ><br>
> > 2017-08-31 16:20 GMT+02:00 Ken Gaillot<br>
> <<a href="mailto:kgaillot@redhat.com">kgaillot@redhat.com</a>>:<br>
> > On Thu, 2017-08-31 at 01:13 +0200, Oscar Segarra<br>
> wrote:<br>
> > > Hi,<br>
> > ><br>
> > ><br>
> > > In my environment, I have just two hosts, where<br>
> qemu-kvm<br>
> > process is<br>
> > > launched by a regular user (oneadmin) - open<br>
> nebula -<br>
> > ><br>
> > ><br>
> > > I have created a VirtualDomain resource that<br>
> starts and<br>
> > stops the VM<br>
> > > perfectly. Nevertheless, when I change the<br>
> location weight<br>
> > in order to<br>
> > > force the migration, It raises a migration failure<br>
> "error:<br>
> > 1"<br>
> > ><br>
> > ><br>
> > > If I execute the virsh migrate command (that<br>
> appears in<br>
> > corosync.log)<br>
> > > from the command line, it works perfectly.<br>
> > ><br>
> > ><br>
> > > Has anybody experienced the same issue?<br>
> > ><br>
> > ><br>
> > > Thanks in advance for your help<br>
> ><br>
> ><br>
> > If something works from the command line but not<br>
> when run by a<br>
> > daemon,<br>
> > my first suspicion is SELinux. Check the audit log<br>
> for denials<br>
> > around<br>
> > that time.<br>
> ><br>
> > I'd also check the system log and Pacemaker detail<br>
> log around<br>
> > that time<br>
> > to see if there is any more information.<br>
> > --<br>
> > Ken Gaillot <<a href="mailto:kgaillot@redhat.com">kgaillot@redhat.com</a>><br>
> ><br>
> ><br>
> ><br>
> ><br>
> ><br>
> > ______________________________<wbr>_________________<br>
> > Users mailing list: <a href="mailto:Users@clusterlabs.org">Users@clusterlabs.org</a><br>
> > <a href="http://lists.clusterlabs.org/mailman/listinfo/users" rel="noreferrer" target="_blank">http://lists.clusterlabs.org/<wbr>mailman/listinfo/users</a><br>
> ><br>
> > Project Home: <a href="http://www.clusterlabs.org" rel="noreferrer" target="_blank">http://www.clusterlabs.org</a><br>
> > Getting started:<br>
> ><br>
> <a href="http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf" rel="noreferrer" target="_blank">http://www.clusterlabs.org/<wbr>doc/Cluster_from_Scratch.pdf</a><br>
> > Bugs: <a href="http://bugs.clusterlabs.org" rel="noreferrer" target="_blank">http://bugs.clusterlabs.org</a><br>
> ><br>
> ><br>
><br>
><br>
> --<br>
> Ken Gaillot <<a href="mailto:kgaillot@redhat.com">kgaillot@redhat.com</a>><br>
><br>
><br>
><br>
><br>
><br>
><br>
<br>
</div></div><span class="HOEnZb"><font color="#888888">--<br>
Ken Gaillot <<a href="mailto:kgaillot@redhat.com">kgaillot@redhat.com</a>><br>
<br>
<br>
<br>
<br>
</font></span></blockquote></div><br></div>
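<div class="gmail_extra"><div>For reference, this is the root-to-root key setup I would try, in case that is the expected fix. A minimal sketch, assuming my node names vdicnode01/vdicnode02 and that password authentication is still enabled for the initial ssh-copy-id:</div><div><br></div>
<div><font face="monospace, monospace"># run as root on vdicnode01; repeat in the other direction on vdicnode02<br>ssh-keygen -t rsa -N '' -f /root/.ssh/id_rsa  # only if root has no key yet<br>ssh-copy-id root@vdicnode02  # install the public key on the peer<br>ssh root@vdicnode02 true  # should now return without a password prompt</font></div><div><br></div>
<div>After that, the virsh migrate command above should run non-interactively as root, which is what the cluster needs when it executes the resource agent.</div></div>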