[ClusterLabs] Resources restart when a node joins in

Citron Vert citron_vert at hotmail.com
Thu Aug 27 03:46:14 EDT 2020


Hi,

Sorry for using this email address; my name is Quentin. Thank you for 
your reply.

I have already tried the stickiness solution (with the deprecated 
value). I also tried the one you gave me, and it does not change anything.
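
For reference, the applied default can be confirmed with something like 
this (assuming the pcs 0.9 commands available on CentOS 7):

    # pcs resource defaults
    # pcs cluster cib | grep -i stickiness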

Resources don't seem to move from node to node (I don't see any changes 
with the crm_mon command).


In the logs I found this line: "error: native_create_actions: 
Resource SERVICE1 is active on 2 nodes"

This led me to contact you, to understand and learn a little more about 
this cluster, and why resources are running on the passive node.
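
For reference, one way to double-check where the cluster believes SERVICE1 
is active, and to clear any stale state before retrying, would be something 
like this (both commands should be available with pacemaker 1.1 / pcs 0.9):

    # crm_resource --resource SERVICE1 --locate
    # pcs resource cleanup SERVICE1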


Attached you will find the logs from the reboot of the passive node, 
along with my cluster configuration.

I think I'm missing something in the configuration or logs that I don't 
understand.


Thank you in advance for your help,

Quentin


On 26/08/2020 at 20:16, Reid Wahl wrote:
> Hi, Citron.
>
> Based on your description, it sounds like some resources **might** be 
> moving from node 1 to node 2, failing on node 2, and then moving back 
> to node 1. If that's what's happening (and even if it's not), then 
> it's probably smart to set some resource stickiness as a resource 
> default. The below command sets a resource stickiness score of 1.
>
>     # pcs resource defaults resource-stickiness=1
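>
> To confirm that the default took effect, listing the configured resource 
> defaults afterwards should show the new value (just a quick sanity check, 
> assuming the same pcs version):
>
>     # pcs resource defaults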
>
> Also note that the "default-resource-stickiness" cluster property is 
> deprecated and should not be used.
>
> Finally, an explicit default resource stickiness score of 0 can 
> interfere with the placement of cloned resource instances. If you 
> don't want any stickiness, then it's better to leave stickiness unset. 
> That way, primitives will have a stickiness of 0, but clone instances 
> will have a stickiness of 1.
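>
> (And if you ever want to remove the default entirely, setting the option 
> to an empty value should do it, if I remember the pcs syntax correctly:)
>
>     # pcs resource defaults resource-stickiness=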
>
> If adding stickiness does not resolve the issue, can you share your 
> cluster configuration and some logs that show the issue happening? Off 
> the top of my head I'm not sure why resources would start and stop on 
> node 2 without moving away from node1, unless they're clone instances 
> that are starting and then failing a monitor operation on node 2.
>
> On Wed, Aug 26, 2020 at 8:42 AM Citron Vert <citron_vert at hotmail.com> wrote:
>
>     Hello,
>     I am contacting you because I have a problem with my cluster and I
>     cannot find (nor understand) any information that can help me.
>
>     I have a 2-node cluster (pacemaker, corosync, pcs) installed on
>     CentOS 7 with a set of configurations.
>     Everything seems to work fine, but here is what happens:
>
>       * Node1 and Node2 are running well, with Node1 as primary
>       * I reboot Node2, which is passive (no changes on Node1)
>       * Node2 comes back into the cluster as passive
>       * corosync logs show resources getting started and then stopped on
>         Node2
>       * the "crm_mon" command shows some resources on Node1 getting
>         restarted
>
>     I don't understand how this is supposed to work.
>     If a node comes back and becomes passive (since Node1 is running as
>     primary), there should be no reason for the resources to be started
>     and then stopped on the new passive node, right?
>
>     One of my resources becomes unstable because it gets started and
>     then stopped too quickly on Node2, which seems to make it restart on
>     Node1 without a failover.
>
>     I have tried several things and solutions proposed by different sites
>     and forums, but without success.
>
>
>     Is there a way to ensure that a node which joins the cluster as
>     passive does not start its own resources?
>
>
>     thanks in advance
>
>
>     Here is some information, just in case:
>
>     $ rpm -qa | grep -E "corosync|pacemaker|pcs"
>     corosync-2.4.5-4.el7.x86_64
>     pacemaker-cli-1.1.21-4.el7.x86_64
>     pacemaker-1.1.21-4.el7.x86_64
>     pcs-0.9.168-4.el7.centos.x86_64
>     corosynclib-2.4.5-4.el7.x86_64
>     pacemaker-libs-1.1.21-4.el7.x86_64
>     pacemaker-cluster-libs-1.1.21-4.el7.x86_64
>
>
>             <nvpair id="cib-bootstrap-options-stonith-enabled" name="stonith-enabled" value="false"/>
>             <nvpair id="cib-bootstrap-options-no-quorum-policy" name="no-quorum-policy" value="ignore"/>
>             <nvpair id="cib-bootstrap-options-dc-deadtime" name="dc-deadtime" value="120s"/>
>             <nvpair id="cib-bootstrap-options-have-watchdog" name="have-watchdog" value="false"/>
>             <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.1.21-4.el7-f14e36fd43"/>
>             <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="corosync"/>
>             <nvpair id="cib-bootstrap-options-cluster-name" name="cluster-name" value="CLUSTER"/>
>             <nvpair id="cib-bootstrap-options-last-lrm-refresh" name="last-lrm-refresh" value="1598446314"/>
>             <nvpair id="cib-bootstrap-options-default-resource-stickiness" name="default-resource-stickiness" value="0"/>
>
>
>
>
>     _______________________________________________
>     Manage your subscription:
>     https://lists.clusterlabs.org/mailman/listinfo/users
>
>     ClusterLabs home: https://www.clusterlabs.org/
>
>
>
> -- 
> Regards,
>
> Reid Wahl, RHCA
> Software Maintenance Engineer, Red Hat
> CEE - Platform Support Delivery - ClusterHA
-------------- next part --------------
$ pcs cluster cib
<cib crm_feature_set="3.0.14" validate-with="pacemaker-2.10" epoch="451" num_updates="2" admin_epoch="0" cib-last-written="Thu Aug 27 08:48:57 2020" update-origin="NODE1" update-client="crmd" update-user="hacluster" have-quorum="1" dc-uuid="2">
  <configuration>
    <crm_config>
      <cluster_property_set id="cib-bootstrap-options">
        <nvpair id="cib-bootstrap-options-stonith-enabled" name="stonith-enabled" value="false"/>
        <nvpair id="cib-bootstrap-options-no-quorum-policy" name="no-quorum-policy" value="ignore"/>
        <nvpair id="cib-bootstrap-options-dc-deadtime" name="dc-deadtime" value="120s"/>
        <nvpair id="cib-bootstrap-options-have-watchdog" name="have-watchdog" value="false"/>
        <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.1.21-4.el7-f14e36fd43"/>
        <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="corosync"/>
        <nvpair id="cib-bootstrap-options-cluster-name" name="cluster-name" value="CLUSTER"/>
        <nvpair id="cib-bootstrap-options-last-lrm-refresh" name="last-lrm-refresh" value="1598510960"/>
      </cluster_property_set>
    </crm_config>
    <nodes>
      <node id="1" uname="NODE1">
        <instance_attributes id="nodes-1"/>
      </node>
      <node id="2" uname="NODE2">
        <instance_attributes id="nodes-2"/>
      </node>
    </nodes>
    <resources>
      <group id="IPV">
        <primitive class="ocf" id="VIRTUALIP" provider="heartbeat" type="IPaddr2">
          <instance_attributes id="VIRTUALIP-instance_attributes">
            <nvpair id="VIRTUALIP-instance_attributes-cidr_netmask" name="cidr_netmask" value="16"/>
            <nvpair id="VIRTUALIP-instance_attributes-ip" name="ip" value="10.130.0.3"/>
          </instance_attributes>
          <operations>
            <op id="VIRTUALIP-monitor-interval-5s" interval="5s" name="monitor"/>
            <op id="VIRTUALIP-start-interval-0s" interval="0s" name="start" timeout="20s"/>
            <op id="VIRTUALIP-stop-interval-0s" interval="0s" name="stop" timeout="20s"/>
          </operations>
        </primitive>
        <primitive class="ocf" id="SOURCEIP" provider="heartbeat" type="IPsrcaddr">
          <instance_attributes id="SOURCEIP-instance_attributes">
            <nvpair id="SOURCEIP-instance_attributes-cidr_netmask" name="cidr_netmask" value="16"/>
            <nvpair id="SOURCEIP-instance_attributes-ipaddress" name="ipaddress" value="10.130.0.3"/>
          </instance_attributes>
          <operations>
            <op id="SOURCEIP-monitor-interval-5s" interval="5s" name="monitor"/>
            <op id="SOURCEIP-start-interval-0s" interval="0s" name="start" timeout="20s"/>
            <op id="SOURCEIP-stop-interval-0s" interval="0s" name="stop" timeout="20s"/>
          </operations>
        </primitive>
      </group>
      <primitive class="systemd" id="SERVICE1" type="service1">
        <operations>
          <op id="SERVICE1-monitor-interval-5s" interval="5s" name="monitor"/>
          <op id="SERVICE1-start-interval-0s" interval="0s" name="start" timeout="20s"/>
          <op id="SERVICE1-stop-interval-0s" interval="0s" name="stop" on-fail="ignore" timeout="20s"/>
        </operations>
      </primitive>
      <primitive class="systemd" id="SERVICE2" type="service2">
        <operations>
          <op id="SERVICE2-monitor-interval-5s" interval="5s" name="monitor"/>
          <op id="SERVICE2-start-interval-0s" interval="0s" name="start" timeout="20s"/>
          <op id="SERVICE2-stop-interval-0s" interval="0s" name="stop" on-fail="ignore" timeout="20s"/>
        </operations>
        <meta_attributes id="SERVICE2-meta_attributes"/>
      </primitive>
      <primitive class="systemd" id="SERVICE3" type="service3">
        <operations>
          <op id="SERVICE3-monitor-interval-5s" interval="5s" name="monitor"/>
          <op id="SERVICE3-start-interval-0s" interval="0s" name="start" timeout="20s"/>
          <op id="SERVICE3-stop-interval-0s" interval="0s" name="stop" on-fail="ignore" timeout="20s"/>
        </operations>
      </primitive>
      <primitive class="systemd" id="SERVICE4" type="service4">
        <operations>
          <op id="SERVICE4-monitor-interval-5s" interval="5s" name="monitor"/>
          <op id="SERVICE4-start-interval-0s" interval="0s" name="start" timeout="20s"/>
          <op id="SERVICE4-stop-interval-0s" interval="0s" name="stop" on-fail="ignore" timeout="20s"/>
        </operations>
        <meta_attributes id="SERVICE4-meta_attributes"/>
      </primitive>
      <primitive class="systemd" id="SERVICE5" type="service5">
        <operations>
          <op id="SERVICE5-monitor-interval-5s" interval="5s" name="monitor"/>
          <op id="SERVICE5-start-interval-0s" interval="0s" name="start" timeout="20s"/>
          <op id="SERVICE5-stop-interval-0s" interval="0s" name="stop" on-fail="ignore" timeout="20s"/>
        </operations>
      </primitive>
      <primitive class="systemd" id="SERVICE6" type="service6">
        <operations>
          <op id="SERVICE6-monitor-interval-5s" interval="5s" name="monitor"/>
          <op id="SERVICE6-start-interval-0s" interval="0s" name="start" timeout="20s"/>
          <op id="SERVICE6-stop-interval-0s" interval="0s" name="stop" on-fail="ignore" timeout="20s"/>
        </operations>
        <meta_attributes id="SERVICE6-meta_attributes"/>
      </primitive>
      <primitive class="systemd" id="SERVICE7" type="service7">
        <operations>
          <op id="SERVICE7-monitor-interval-5s" interval="5s" name="monitor"/>
          <op id="SERVICE7-start-interval-0s" interval="0s" name="start" timeout="20s"/>
          <op id="SERVICE7-stop-interval-0s" interval="0s" name="stop" on-fail="ignore" timeout="20s"/>
        </operations>
      </primitive>
      <primitive class="systemd" id="SERVICE8" type="service8">
        <operations>
          <op id="SERVICE8-monitor-interval-5s" interval="5s" name="monitor"/>
          <op id="SERVICE8-start-interval-0s" interval="0s" name="start" timeout="20s"/>
          <op id="SERVICE8-stop-interval-0s" interval="0s" name="stop" on-fail="ignore" timeout="20s"/>
        </operations>
      </primitive>
      <primitive class="systemd" id="SERVICE9" type="service9">
        <operations>
          <op id="SERVICE9-monitor-interval-5s" interval="5s" name="monitor"/>
          <op id="SERVICE9-start-interval-0s" interval="0s" name="start" timeout="20s"/>
          <op id="SERVICE9-stop-interval-0s" interval="0s" name="stop" on-fail="ignore" timeout="20s"/>
        </operations>
      </primitive>
      <primitive class="systemd" id="SERVICE10" type="service10">
        <operations>
          <op id="SERVICE10-monitor-interval-5s" interval="5s" name="monitor"/>
          <op id="SERVICE10-start-interval-0s" interval="0s" name="start" timeout="20s"/>
          <op id="SERVICE10-stop-interval-0s" interval="0s" name="stop" on-fail="ignore" timeout="20s"/>
        </operations>
      </primitive>
      <primitive class="systemd" id="SERVICE11" type="service11">
        <operations>
          <op id="SERVICE11-monitor-interval-5s" interval="5s" name="monitor"/>
          <op id="SERVICE11-start-interval-0s" interval="0s" name="start" timeout="20s"/>
          <op id="SERVICE11-stop-interval-0s" interval="0s" name="stop" on-fail="ignore" timeout="20s"/>
        </operations>
        <meta_attributes id="SERVICE11-meta_attributes"/>
      </primitive>
      <primitive class="systemd" id="SERVICE12" type="service12">
        <operations>
          <op id="SERVICE12-monitor-interval-5s" interval="5s" name="monitor"/>
          <op id="SERVICE12-start-interval-0s" interval="0s" name="start" timeout="20s"/>
          <op id="SERVICE12-stop-interval-0s" interval="0s" name="stop" on-fail="ignore" timeout="20s"/>
        </operations>
      </primitive>
      <primitive class="systemd" id="SERVICE13" type="service13">
        <operations>
          <op id="SERVICE13-monitor-interval-5s" interval="5s" name="monitor"/>
          <op id="SERVICE13-start-interval-0s" interval="0s" name="start" timeout="20s"/>
          <op id="SERVICE13-stop-interval-0s" interval="0s" name="stop" on-fail="ignore" timeout="20s"/>
        </operations>
      </primitive>
      <clone id="SERVICE14-clone">
        <primitive class="systemd" id="SERVICE14" type="service14">
          <operations>
            <op id="SERVICE14-monitor-interval-10s" interval="10s" name="monitor" timeout="60s"/>
            <op id="SERVICE14-start-interval-0s" interval="0s" name="start" timeout="60s"/>
            <op id="SERVICE14-stop-interval-0s" interval="0s" name="stop" timeout="60s"/>
          </operations>
        </primitive>
      </clone>
      <clone id="SERVICE15-clone">
        <primitive class="systemd" id="SERVICE15" type="service15">
          <operations>
            <op id="SERVICE15-monitor-interval-10s" interval="10s" name="monitor" timeout="60s"/>
            <op id="SERVICE15-start-interval-0s" interval="0s" name="start" timeout="60s"/>
            <op id="SERVICE15-stop-interval-0s" interval="0s" name="stop" timeout="60s"/>
          </operations>
        </primitive>
        <meta_attributes id="SERVICE15-clone-meta_attributes"/>
      </clone>
    </resources>
    <constraints>
      <rsc_colocation id="colocation-SERVICE1-VIRTUALIP-INFINITY" rsc="SERVICE1" score="INFINITY" with-rsc="VIRTUALIP"/>
      <rsc_colocation id="colocation-SERVICE2-SERVICE1-INFINITY" rsc="SERVICE2" score="INFINITY" with-rsc="SERVICE1"/>
      <rsc_colocation id="colocation-SERVICE3-SERVICE1-INFINITY" rsc="SERVICE3" score="INFINITY" with-rsc="SERVICE1"/>
      <rsc_colocation id="colocation-SERVICE4-SERVICE1-INFINITY" rsc="SERVICE4" score="INFINITY" with-rsc="SERVICE1"/>
      <rsc_colocation id="colocation-SERVICE5-VIRTUALIP-INFINITY" rsc="SERVICE5" score="INFINITY" with-rsc="VIRTUALIP"/>
      <rsc_colocation id="colocation-SERVICE6-VIRTUALIP-INFINITY" rsc="SERVICE6" score="INFINITY" with-rsc="VIRTUALIP"/>
      <rsc_colocation id="colocation-SERVICE7-VIRTUALIP-INFINITY" rsc="SERVICE7" score="INFINITY" with-rsc="VIRTUALIP"/>
      <rsc_colocation id="colocation-SERVICE8-VIRTUALIP-INFINITY" rsc="SERVICE8" score="INFINITY" with-rsc="VIRTUALIP"/>
      <rsc_colocation id="colocation-SERVICE9-VIRTUALIP-INFINITY" rsc="SERVICE9" score="INFINITY" with-rsc="VIRTUALIP"/>
      <rsc_colocation id="colocation-SERVICE10-VIRTUALIP-INFINITY" rsc="SERVICE10" score="INFINITY" with-rsc="VIRTUALIP"/>
      <rsc_colocation id="colocation-SERVICE11-VIRTUALIP-INFINITY" rsc="SERVICE11" score="INFINITY" with-rsc="VIRTUALIP"/>
      <rsc_colocation id="colocation-SERVICE12-VIRTUALIP-INFINITY" rsc="SERVICE12" score="INFINITY" with-rsc="VIRTUALIP"/>
      <rsc_colocation id="colocation-SERVICE13-VIRTUALIP-INFINITY" rsc="SERVICE13" score="INFINITY" with-rsc="VIRTUALIP"/>
    </constraints>
    <rsc_defaults>
      <meta_attributes id="rsc_defaults-options">
        <nvpair id="rsc_defaults-options-migration-threshold" name="migration-threshold" value="1"/>
        <nvpair id="rsc_defaults-options-failure-timeout" name="failure-timeout" value="30s"/>
        <nvpair id="rsc_defaults-options-resource-stickiness" name="resource-stickiness" value="1"/>
      </meta_attributes>
    </rsc_defaults>
  </configuration>
  <status>
    <node_state id="1" uname="NODE1" in_ccm="true" crmd="online" crm-debug-origin="do_update_resource" join="member" expected="member">
      <lrm id="1">
        <lrm_resources>
          <lrm_resource id="VIRTUALIP" type="IPaddr2" class="ocf" provider="heartbeat">
            <lrm_rsc_op id="VIRTUALIP_last_0" operation_key="VIRTUALIP_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.14" transition-key="18:333:7:d10dd5e7-af4d-4bba-a226-516824f8f60e" transition-magic="0:7;18:333:7:d10dd5e7-af4d-4bba-a226-516824f8f60e" exit-reason="" on_node="NODE1" call-id="5" rc-code="7" op-status="0" interval="0" last-run="1598510843" last-rc-change="1598510843" exec-time="441" queue-time="0" op-digest="81452beaabf19c71189916d97c5ea2a8"/>
          </lrm_resource>
          <lrm_resource id="SOURCEIP" type="IPsrcaddr" class="ocf" provider="heartbeat">
            <lrm_rsc_op id="SOURCEIP_last_0" operation_key="SOURCEIP_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.14" transition-key="19:333:7:d10dd5e7-af4d-4bba-a226-516824f8f60e" transition-magic="0:7;19:333:7:d10dd5e7-af4d-4bba-a226-516824f8f60e" exit-reason="" on_node="NODE1" call-id="9" rc-code="7" op-status="0" interval="0" last-run="1598510844" last-rc-change="1598510844" exec-time="40" queue-time="0" op-digest="4238bbb83fc1ef2f001ab06eb7d14fe0"/>
          </lrm_resource>
          <lrm_resource id="SERVICE2" type="service2" class="systemd">
            <lrm_rsc_op id="SERVICE2_last_0" operation_key="SERVICE2_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.14" transition-key="21:333:7:d10dd5e7-af4d-4bba-a226-516824f8f60e" transition-magic="0:7;21:333:7:d10dd5e7-af4d-4bba-a226-516824f8f60e" exit-reason="" on_node="NODE1" call-id="17" rc-code="7" op-status="0" interval="0" last-run="1598510844" last-rc-change="1598510844" exec-time="239" queue-time="0" op-digest="f2317cad3d54cec5d7d7aa7d0bf35cf8"/>
          </lrm_resource>
          <lrm_resource id="SERVICE3" type="service3" class="systemd">
            <lrm_rsc_op id="SERVICE3_last_0" operation_key="SERVICE3_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.14" transition-key="22:333:7:d10dd5e7-af4d-4bba-a226-516824f8f60e" transition-magic="0:7;22:333:7:d10dd5e7-af4d-4bba-a226-516824f8f60e" exit-reason="" on_node="NODE1" call-id="21" rc-code="7" op-status="0" interval="0" last-run="1598510844" last-rc-change="1598510844" exec-time="240" queue-time="0" op-digest="f2317cad3d54cec5d7d7aa7d0bf35cf8"/>
          </lrm_resource>
          <lrm_resource id="SERVICE4" type="service4" class="systemd">
            <lrm_rsc_op id="SERVICE4_last_0" operation_key="SERVICE4_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.14" transition-key="23:333:7:d10dd5e7-af4d-4bba-a226-516824f8f60e" transition-magic="0:7;23:333:7:d10dd5e7-af4d-4bba-a226-516824f8f60e" exit-reason="" on_node="NODE1" call-id="25" rc-code="7" op-status="0" interval="0" last-run="1598510844" last-rc-change="1598510844" exec-time="241" queue-time="0" op-digest="f2317cad3d54cec5d7d7aa7d0bf35cf8"/>
          </lrm_resource>
          <lrm_resource id="SERVICE5" type="service5" class="systemd">
            <lrm_rsc_op id="SERVICE5_last_0" operation_key="SERVICE5_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.14" transition-key="24:333:7:d10dd5e7-af4d-4bba-a226-516824f8f60e" transition-magic="0:7;24:333:7:d10dd5e7-af4d-4bba-a226-516824f8f60e" exit-reason="" on_node="NODE1" call-id="29" rc-code="7" op-status="0" interval="0" last-run="1598510844" last-rc-change="1598510844" exec-time="241" queue-time="0" op-digest="f2317cad3d54cec5d7d7aa7d0bf35cf8"/>
          </lrm_resource>
          <lrm_resource id="SERVICE6" type="service6" class="systemd">
            <lrm_rsc_op id="SERVICE6_last_0" operation_key="SERVICE6_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.14" transition-key="25:333:7:d10dd5e7-af4d-4bba-a226-516824f8f60e" transition-magic="0:7;25:333:7:d10dd5e7-af4d-4bba-a226-516824f8f60e" exit-reason="" on_node="NODE1" call-id="33" rc-code="7" op-status="0" interval="0" last-run="1598510844" last-rc-change="1598510844" exec-time="242" queue-time="0" op-digest="f2317cad3d54cec5d7d7aa7d0bf35cf8"/>
          </lrm_resource>
          <lrm_resource id="SERVICE7" type="service7" class="systemd">
            <lrm_rsc_op id="SERVICE7_last_0" operation_key="SERVICE7_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.14" transition-key="18:334:7:d10dd5e7-af4d-4bba-a226-516824f8f60e" transition-magic="0:7;18:334:7:d10dd5e7-af4d-4bba-a226-516824f8f60e" exit-reason="" on_node="NODE1" call-id="38" rc-code="7" op-status="0" interval="0" last-run="1598510844" last-rc-change="1598510844" exec-time="125" queue-time="0" op-digest="f2317cad3d54cec5d7d7aa7d0bf35cf8"/>
          </lrm_resource>
          <lrm_resource id="SERVICE8" type="service8" class="systemd">
            <lrm_rsc_op id="SERVICE8_last_0" operation_key="SERVICE8_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.14" transition-key="19:334:7:d10dd5e7-af4d-4bba-a226-516824f8f60e" transition-magic="0:7;19:334:7:d10dd5e7-af4d-4bba-a226-516824f8f60e" exit-reason="" on_node="NODE1" call-id="42" rc-code="7" op-status="0" interval="0" last-run="1598510844" last-rc-change="1598510844" exec-time="123" queue-time="1" op-digest="f2317cad3d54cec5d7d7aa7d0bf35cf8"/>
          </lrm_resource>
          <lrm_resource id="SERVICE9" type="service9" class="systemd">
            <lrm_rsc_op id="SERVICE9_last_0" operation_key="SERVICE9_stop_0" operation="stop" crm-debug-origin="do_update_resource" crm_feature_set="3.0.14" transition-key="42:336:0:d10dd5e7-af4d-4bba-a226-516824f8f60e" transition-magic="0:0;42:336:0:d10dd5e7-af4d-4bba-a226-516824f8f60e" exit-reason="" on_node="NODE1" call-id="68" rc-code="0" op-status="0" interval="0" last-run="1598510864" last-rc-change="1598510864" exec-time="2006" queue-time="0" op-digest="f2317cad3d54cec5d7d7aa7d0bf35cf8"/>
            <lrm_rsc_op id="SERVICE9_last_failure_0" operation_key="SERVICE9_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.14" transition-key="20:334:7:d10dd5e7-af4d-4bba-a226-516824f8f60e" transition-magic="0:0;20:334:7:d10dd5e7-af4d-4bba-a226-516824f8f60e" exit-reason="" on_node="NODE1" call-id="46" rc-code="0" op-status="0" interval="0" last-run="1598510844" last-rc-change="1598510844" exec-time="122" queue-time="0" op-digest="f2317cad3d54cec5d7d7aa7d0bf35cf8"/>
          </lrm_resource>
          <lrm_resource id="SERVICE10" type="service10" class="systemd">
            <lrm_rsc_op id="SERVICE10_last_0" operation_key="SERVICE10_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.14" transition-key="21:334:7:d10dd5e7-af4d-4bba-a226-516824f8f60e" transition-magic="0:7;21:334:7:d10dd5e7-af4d-4bba-a226-516824f8f60e" exit-reason="" on_node="NODE1" call-id="50" rc-code="7" op-status="0" interval="0" last-run="1598510844" last-rc-change="1598510844" exec-time="121" queue-time="0" op-digest="f2317cad3d54cec5d7d7aa7d0bf35cf8"/>
          </lrm_resource>
          <lrm_resource id="SERVICE11" type="service11" class="systemd">
            <lrm_rsc_op id="SERVICE11_last_0" operation_key="SERVICE11_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.14" transition-key="22:334:7:d10dd5e7-af4d-4bba-a226-516824f8f60e" transition-magic="0:7;22:334:7:d10dd5e7-af4d-4bba-a226-516824f8f60e" exit-reason="" on_node="NODE1" call-id="54" rc-code="7" op-status="0" interval="0" last-run="1598510844" last-rc-change="1598510844" exec-time="118" queue-time="0" op-digest="f2317cad3d54cec5d7d7aa7d0bf35cf8"/>
          </lrm_resource>
          <lrm_resource id="SERVICE12" type="service12" class="systemd">
            <lrm_rsc_op id="SERVICE12_last_0" operation_key="SERVICE12_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.14" transition-key="23:334:7:d10dd5e7-af4d-4bba-a226-516824f8f60e" transition-magic="0:7;23:334:7:d10dd5e7-af4d-4bba-a226-516824f8f60e" exit-reason="" on_node="NODE1" call-id="58" rc-code="7" op-status="0" interval="0" last-run="1598510844" last-rc-change="1598510844" exec-time="117" queue-time="0" op-digest="f2317cad3d54cec5d7d7aa7d0bf35cf8"/>
          </lrm_resource>
          <lrm_resource id="SERVICE13" type="service13" class="systemd">
            <lrm_rsc_op id="SERVICE13_last_0" operation_key="SERVICE13_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.14" transition-key="24:334:7:d10dd5e7-af4d-4bba-a226-516824f8f60e" transition-magic="0:7;24:334:7:d10dd5e7-af4d-4bba-a226-516824f8f60e" exit-reason="" on_node="NODE1" call-id="62" rc-code="7" op-status="0" interval="0" last-run="1598510844" last-rc-change="1598510844" exec-time="118" queue-time="1" op-digest="f2317cad3d54cec5d7d7aa7d0bf35cf8"/>
          </lrm_resource>
          <lrm_resource id="SERVICE14" type="service14" class="systemd">
            <lrm_rsc_op id="SERVICE14_last_0" operation_key="SERVICE14_start_0" operation="start" crm-debug-origin="do_update_resource" crm_feature_set="3.0.14" transition-key="55:336:0:d10dd5e7-af4d-4bba-a226-516824f8f60e" transition-magic="0:0;55:336:0:d10dd5e7-af4d-4bba-a226-516824f8f60e" exit-reason="" on_node="NODE1" call-id="74" rc-code="0" op-status="0" interval="0" last-run="1598510864" last-rc-change="1598510864" exec-time="2209" queue-time="0" op-digest="f2317cad3d54cec5d7d7aa7d0bf35cf8"/>
            <lrm_rsc_op id="SERVICE14_monitor_10000" operation_key="SERVICE14_monitor_10000" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.14" transition-key="56:336:0:d10dd5e7-af4d-4bba-a226-516824f8f60e" transition-magic="0:0;56:336:0:d10dd5e7-af4d-4bba-a226-516824f8f60e" exit-reason="" on_node="NODE1" call-id="76" rc-code="0" op-status="0" interval="10000" last-rc-change="1598510866" exec-time="2" queue-time="0" op-digest="873ed4f07792aa8ff18f3254244675ea"/>
          </lrm_resource>
          <lrm_resource id="SERVICE15" type="service15" class="systemd">
            <lrm_rsc_op id="SERVICE15_last_0" operation_key="SERVICE15_start_0" operation="start" crm-debug-origin="do_update_resource" crm_feature_set="3.0.14" transition-key="63:336:0:d10dd5e7-af4d-4bba-a226-516824f8f60e" transition-magic="0:0;63:336:0:d10dd5e7-af4d-4bba-a226-516824f8f60e" exit-reason="" on_node="NODE1" call-id="75" rc-code="0" op-status="0" interval="0" last-run="1598510864" last-rc-change="1598510864" exec-time="2316" queue-time="0" op-digest="f2317cad3d54cec5d7d7aa7d0bf35cf8"/>
            <lrm_rsc_op id="SERVICE15_monitor_10000" operation_key="SERVICE15_monitor_10000" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.14" transition-key="64:336:0:d10dd5e7-af4d-4bba-a226-516824f8f60e" transition-magic="0:0;64:336:0:d10dd5e7-af4d-4bba-a226-516824f8f60e" exit-reason="" on_node="NODE1" call-id="77" rc-code="0" op-status="0" interval="10000" last-rc-change="1598510866" exec-time="1" queue-time="0" op-digest="873ed4f07792aa8ff18f3254244675ea"/>
          </lrm_resource>
          <lrm_resource id="SERVICE1" type="service1" class="systemd">
            <lrm_rsc_op id="SERVICE1_last_0" operation_key="SERVICE1_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.14" transition-key="20:370:7:d10dd5e7-af4d-4bba-a226-516824f8f60e" transition-magic="0:7;20:370:7:d10dd5e7-af4d-4bba-a226-516824f8f60e" exit-reason="" on_node="NODE1" call-id="193" rc-code="7" op-status="0" interval="0" last-run="1598510961" last-rc-change="1598510961" exec-time="1" queue-time="0" op-digest="f2317cad3d54cec5d7d7aa7d0bf35cf8"/>
          </lrm_resource>
        </lrm_resources>
      </lrm>
      <transient_attributes id="1">
        <instance_attributes id="status-1"/>
      </transient_attributes>
    </node_state>
    <node_state id="2" uname="NODE2" in_ccm="true" crmd="online" crm-debug-origin="do_update_resource" join="member" expected="member">
      <transient_attributes id="2">
        <instance_attributes id="status-2"/>
      </transient_attributes>
      <lrm id="2">
        <lrm_resources>
          <lrm_resource id="SERVICE4" type="service4" class="systemd">
            <lrm_rsc_op id="SERVICE4_last_failure_0" operation_key="SERVICE4_monitor_0" operation="monitor" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-key="7:1079:7:96431d62-66ac-46b4-9fbc-1cd89548a77d" transition-magic="0:0;7:1079:7:96431d62-66ac-46b4-9fbc-1cd89548a77d" exit-reason="" on_node="NODE2" call-id="97" rc-code="0" op-status="0" interval="0" last-run="1598436620" last-rc-change="1598436620" exec-time="1" queue-time="0" op-digest="f2317cad3d54cec5d7d7aa7d0bf35cf8"/>
            <lrm_rsc_op id="SERVICE4_last_0" operation_key="SERVICE4_start_0" operation="start" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-key="33:226:0:d10dd5e7-af4d-4bba-a226-516824f8f60e" transition-magic="0:0;33:226:0:d10dd5e7-af4d-4bba-a226-516824f8f60e" exit-reason="" on_node="NODE2" call-id="385" rc-code="0" op-status="0" interval="0" last-run="1598455186" last-rc-change="1598455186" exec-time="2417" queue-time="0" op-digest="f2317cad3d54cec5d7d7aa7d0bf35cf8"/>
            <lrm_rsc_op id="SERVICE4_monitor_5000" operation_key="SERVICE4_monitor_5000" operation="monitor" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-key="34:226:0:d10dd5e7-af4d-4bba-a226-516824f8f60e" transition-magic="0:0;34:226:0:d10dd5e7-af4d-4bba-a226-516824f8f60e" exit-reason="" on_node="NODE2" call-id="386" rc-code="0" op-status="0" interval="5000" last-rc-change="1598455188" exec-time="2" queue-time="0" op-digest="4811cef7f7f94e3a35a70be7916cb2fd"/>
          </lrm_resource>
          <lrm_resource id="SERVICE9" type="service9" class="systemd">
            <lrm_rsc_op id="SERVICE9_last_0" operation_key="SERVICE9_start_0" operation="start" crm-debug-origin="do_update_resource" crm_feature_set="3.0.14" transition-key="44:336:0:d10dd5e7-af4d-4bba-a226-516824f8f60e" transition-magic="0:0;44:336:0:d10dd5e7-af4d-4bba-a226-516824f8f60e" exit-reason="" on_node="NODE2" call-id="433" rc-code="0" op-status="0" interval="0" last-run="1598510844" last-rc-change="1598510844" exec-time="2079" queue-time="1" op-digest="f2317cad3d54cec5d7d7aa7d0bf35cf8"/>
            <lrm_rsc_op id="SERVICE9_monitor_5000" operation_key="SERVICE9_monitor_5000" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.14" transition-key="2:336:0:d10dd5e7-af4d-4bba-a226-516824f8f60e" transition-magic="0:0;2:336:0:d10dd5e7-af4d-4bba-a226-516824f8f60e" exit-reason="" on_node="NODE2" call-id="434" rc-code="0" op-status="0" interval="5000" last-rc-change="1598510846" exec-time="5" queue-time="0" op-digest="4811cef7f7f94e3a35a70be7916cb2fd"/>
          </lrm_resource>
          <lrm_resource id="SERVICE13" type="service13" class="systemd">
            <lrm_rsc_op id="SERVICE13_last_0" operation_key="SERVICE13_start_0" operation="start" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-key="48:0:0:d10dd5e7-af4d-4bba-a226-516824f8f60e" transition-magic="0:0;48:0:0:d10dd5e7-af4d-4bba-a226-516824f8f60e" exit-reason="" on_node="NODE2" call-id="108" rc-code="0" op-status="0" interval="0" last-run="1598436624" last-rc-change="1598436624" exec-time="2129" queue-time="0" op-digest="f2317cad3d54cec5d7d7aa7d0bf35cf8"/>
            <lrm_rsc_op id="SERVICE13_monitor_5000" operation_key="SERVICE13_monitor_5000" operation="monitor" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-key="51:1:0:d10dd5e7-af4d-4bba-a226-516824f8f60e" transition-magic="0:0;51:1:0:d10dd5e7-af4d-4bba-a226-516824f8f60e" exit-reason="" on_node="NODE2" call-id="110" rc-code="0" op-status="0" interval="5000" last-rc-change="1598436626" exec-time="2" queue-time="0" op-digest="4811cef7f7f94e3a35a70be7916cb2fd"/>
          </lrm_resource>
          <lrm_resource id="SERVICE11" type="service11" class="systemd">
            <lrm_rsc_op id="SERVICE11_last_0" operation_key="SERVICE11_start_0" operation="start" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-key="43:1079:0:96431d62-66ac-46b4-9fbc-1cd89548a77d" transition-magic="0:0;43:1079:0:96431d62-66ac-46b4-9fbc-1cd89548a77d" exit-reason="" on_node="NODE2" call-id="104" rc-code="0" op-status="0" interval="0" last-run="1598436620" last-rc-change="1598436620" exec-time="2060" queue-time="0" op-digest="f2317cad3d54cec5d7d7aa7d0bf35cf8"/>
            <lrm_rsc_op id="SERVICE11_monitor_5000" operation_key="SERVICE11_monitor_5000" operation="monitor" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-key="45:0:0:d10dd5e7-af4d-4bba-a226-516824f8f60e" transition-magic="0:0;45:0:0:d10dd5e7-af4d-4bba-a226-516824f8f60e" exit-reason="" on_node="NODE2" call-id="106" rc-code="0" op-status="0" interval="5000" last-rc-change="1598436624" exec-time="10" queue-time="0" op-digest="4811cef7f7f94e3a35a70be7916cb2fd"/>
          </lrm_resource>
          <lrm_resource id="SERVICE6" type="service6" class="systemd">
            <lrm_rsc_op id="SERVICE6_last_0" operation_key="SERVICE6_start_0" operation="start" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-key="46:1078:0:96431d62-66ac-46b4-9fbc-1cd89548a77d" transition-magic="0:0;46:1078:0:96431d62-66ac-46b4-9fbc-1cd89548a77d" exit-reason="" on_node="NODE2" call-id="85" rc-code="0" op-status="0" interval="0" last-run="1598436614" last-rc-change="1598436614" exec-time="2055" queue-time="0" op-digest="f2317cad3d54cec5d7d7aa7d0bf35cf8"/>
            <lrm_rsc_op id="SERVICE6_monitor_5000" operation_key="SERVICE6_monitor_5000" operation="monitor" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-key="30:1079:0:96431d62-66ac-46b4-9fbc-1cd89548a77d" transition-magic="0:0;30:1079:0:96431d62-66ac-46b4-9fbc-1cd89548a77d" exit-reason="" on_node="NODE2" call-id="99" rc-code="0" op-status="0" interval="5000" last-rc-change="1598436620" exec-time="1" queue-time="0" op-digest="4811cef7f7f94e3a35a70be7916cb2fd"/>
          </lrm_resource>
          <lrm_resource id="SERVICE1" type="service1" class="systemd">
            <lrm_rsc_op id="SERVICE1_last_failure_0" operation_key="SERVICE1_monitor_0" operation="monitor" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-key="19:296:7:d10dd5e7-af4d-4bba-a226-516824f8f60e" transition-magic="0:0;19:296:7:d10dd5e7-af4d-4bba-a226-516824f8f60e" exit-reason="" on_node="NODE2" call-id="397" rc-code="0" op-status="0" interval="0" last-run="1598510455" last-rc-change="1598510455" exec-time="4" queue-time="0" op-digest="f2317cad3d54cec5d7d7aa7d0bf35cf8"/>
            <lrm_rsc_op id="SERVICE1_last_0" operation_key="SERVICE1_start_0" operation="start" crm-debug-origin="do_update_resource" crm_feature_set="3.0.14" transition-key="30:369:0:d10dd5e7-af4d-4bba-a226-516824f8f60e" transition-magic="0:0;30:369:0:d10dd5e7-af4d-4bba-a226-516824f8f60e" exit-reason="" on_node="NODE2" call-id="465" rc-code="0" op-status="0" interval="0" last-run="1598510934" last-rc-change="1598510934" exec-time="2208" queue-time="0" op-digest="f2317cad3d54cec5d7d7aa7d0bf35cf8"/>
            <lrm_rsc_op id="SERVICE1_monitor_5000" operation_key="SERVICE1_monitor_5000" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.14" transition-key="9:369:0:d10dd5e7-af4d-4bba-a226-516824f8f60e" transition-magic="0:0;9:369:0:d10dd5e7-af4d-4bba-a226-516824f8f60e" exit-reason="" on_node="NODE2" call-id="466" rc-code="0" op-status="0" interval="5000" last-rc-change="1598510936" exec-time="21" queue-time="1" op-digest="4811cef7f7f94e3a35a70be7916cb2fd"/>
          </lrm_resource>
          <lrm_resource id="SERVICE2" type="service2" class="systemd">
            <lrm_rsc_op id="SERVICE2_last_0" operation_key="SERVICE2_start_0" operation="start" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-key="29:154:0:d10dd5e7-af4d-4bba-a226-516824f8f60e" transition-magic="0:0;29:154:0:d10dd5e7-af4d-4bba-a226-516824f8f60e" exit-reason="" on_node="NODE2" call-id="249" rc-code="0" op-status="0" interval="0" last-run="1598440560" last-rc-change="1598440560" exec-time="2072" queue-time="0" op-digest="f2317cad3d54cec5d7d7aa7d0bf35cf8"/>
            <lrm_rsc_op id="SERVICE2_monitor_5000" operation_key="SERVICE2_monitor_5000" operation="monitor" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-key="30:154:0:d10dd5e7-af4d-4bba-a226-516824f8f60e" transition-magic="0:0;30:154:0:d10dd5e7-af4d-4bba-a226-516824f8f60e" exit-reason="" on_node="NODE2" call-id="250" rc-code="0" op-status="0" interval="5000" last-rc-change="1598440562" exec-time="1" queue-time="0" op-digest="4811cef7f7f94e3a35a70be7916cb2fd"/>
          </lrm_resource>
          <lrm_resource id="VIRTUALIP" type="IPaddr2" class="ocf" provider="heartbeat">
            <lrm_rsc_op id="VIRTUALIP_last_0" operation_key="VIRTUALIP_start_0" operation="start" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-key="21:1078:0:96431d62-66ac-46b4-9fbc-1cd89548a77d" transition-magic="0:0;21:1078:0:96431d62-66ac-46b4-9fbc-1cd89548a77d" exit-reason="" on_node="NODE2" call-id="76" rc-code="0" op-status="0" interval="0" last-run="1598436612" last-rc-change="1598436612" exec-time="508" queue-time="0" op-digest="81452beaabf19c71189916d97c5ea2a8"/>
            <lrm_rsc_op id="VIRTUALIP_monitor_5000" operation_key="VIRTUALIP_monitor_5000" operation="monitor" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-key="22:1078:0:96431d62-66ac-46b4-9fbc-1cd89548a77d" transition-magic="0:0;22:1078:0:96431d62-66ac-46b4-9fbc-1cd89548a77d" exit-reason="" on_node="NODE2" call-id="77" rc-code="0" op-status="0" interval="5000" last-rc-change="1598436612" exec-time="44" queue-time="0" op-digest="c9f93be4b942aa24f2bef92561fe636f"/>
          </lrm_resource>
          <lrm_resource id="SERVICE3" type="service3" class="systemd">
            <lrm_rsc_op id="SERVICE3_last_0" operation_key="SERVICE3_start_0" operation="start" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-key="37:1078:0:96431d62-66ac-46b4-9fbc-1cd89548a77d" transition-magic="0:0;37:1078:0:96431d62-66ac-46b4-9fbc-1cd89548a77d" exit-reason="" on_node="NODE2" call-id="82" rc-code="0" op-status="0" interval="0" last-run="1598436614" last-rc-change="1598436614" exec-time="2130" queue-time="0" op-digest="f2317cad3d54cec5d7d7aa7d0bf35cf8"/>
            <lrm_rsc_op id="SERVICE3_monitor_5000" operation_key="SERVICE3_monitor_5000" operation="monitor" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-key="38:1078:0:96431d62-66ac-46b4-9fbc-1cd89548a77d" transition-magic="0:0;38:1078:0:96431d62-66ac-46b4-9fbc-1cd89548a77d" exit-reason="" on_node="NODE2" call-id="90" rc-code="0" op-status="0" interval="5000" last-rc-change="1598436616" exec-time="1" queue-time="0" op-digest="4811cef7f7f94e3a35a70be7916cb2fd"/>
          </lrm_resource>
          <lrm_resource id="SERVICE10" type="service10" class="systemd">
            <lrm_rsc_op id="SERVICE10_last_0" operation_key="SERVICE10_start_0" operation="start" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-key="58:1078:0:96431d62-66ac-46b4-9fbc-1cd89548a77d" transition-magic="0:0;58:1078:0:96431d62-66ac-46b4-9fbc-1cd89548a77d" exit-reason="" on_node="NODE2" call-id="91" rc-code="0" op-status="0" interval="0" last-run="1598436616" last-rc-change="1598436616" exec-time="2058" queue-time="0" op-digest="f2317cad3d54cec5d7d7aa7d0bf35cf8"/>
            <lrm_rsc_op id="SERVICE10_monitor_5000" operation_key="SERVICE10_monitor_5000" operation="monitor" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-key="42:1079:0:96431d62-66ac-46b4-9fbc-1cd89548a77d" transition-magic="0:0;42:1079:0:96431d62-66ac-46b4-9fbc-1cd89548a77d" exit-reason="" on_node="NODE2" call-id="103" rc-code="0" op-status="0" interval="5000" last-rc-change="1598436620" exec-time="4" queue-time="0" op-digest="4811cef7f7f94e3a35a70be7916cb2fd"/>
          </lrm_resource>
          <lrm_resource id="SERVICE8" type="service8" class="systemd">
            <lrm_rsc_op id="SERVICE8_last_0" operation_key="SERVICE8_start_0" operation="start" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-key="52:1078:0:96431d62-66ac-46b4-9fbc-1cd89548a77d" transition-magic="0:0;52:1078:0:96431d62-66ac-46b4-9fbc-1cd89548a77d" exit-reason="" on_node="NODE2" call-id="87" rc-code="0" op-status="0" interval="0" last-run="1598436615" last-rc-change="1598436615" exec-time="2055" queue-time="0" op-digest="f2317cad3d54cec5d7d7aa7d0bf35cf8"/>
            <lrm_rsc_op id="SERVICE8_monitor_5000" operation_key="SERVICE8_monitor_5000" operation="monitor" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-key="36:1079:0:96431d62-66ac-46b4-9fbc-1cd89548a77d" transition-magic="0:0;36:1079:0:96431d62-66ac-46b4-9fbc-1cd89548a77d" exit-reason="" on_node="NODE2" call-id="101" rc-code="0" op-status="0" interval="5000" last-rc-change="1598436620" exec-time="1" queue-time="0" op-digest="4811cef7f7f94e3a35a70be7916cb2fd"/>
          </lrm_resource>
          <lrm_resource id="SERVICE7" type="service7" class="systemd">
            <lrm_rsc_op id="SERVICE7_last_0" operation_key="SERVICE7_start_0" operation="start" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-key="49:1078:0:96431d62-66ac-46b4-9fbc-1cd89548a77d" transition-magic="0:0;49:1078:0:96431d62-66ac-46b4-9fbc-1cd89548a77d" exit-reason="" on_node="NODE2" call-id="86" rc-code="0" op-status="0" interval="0" last-run="1598436614" last-rc-change="1598436614" exec-time="2085" queue-time="0" op-digest="f2317cad3d54cec5d7d7aa7d0bf35cf8"/>
            <lrm_rsc_op id="SERVICE7_monitor_5000" operation_key="SERVICE7_monitor_5000" operation="monitor" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-key="33:1079:0:96431d62-66ac-46b4-9fbc-1cd89548a77d" transition-magic="0:0;33:1079:0:96431d62-66ac-46b4-9fbc-1cd89548a77d" exit-reason="" on_node="NODE2" call-id="100" rc-code="0" op-status="0" interval="5000" last-rc-change="1598436620" exec-time="0" queue-time="1" op-digest="4811cef7f7f94e3a35a70be7916cb2fd"/>
          </lrm_resource>
          <lrm_resource id="SERVICE15" type="service15" class="systemd">
            <lrm_rsc_op id="SERVICE15_last_0" operation_key="SERVICE15_start_0" operation="start" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-key="79:1077:0:96431d62-66ac-46b4-9fbc-1cd89548a77d" transition-magic="0:0;79:1077:0:96431d62-66ac-46b4-9fbc-1cd89548a77d" exit-reason="" on_node="NODE2" call-id="73" rc-code="0" op-status="0" interval="0" last-run="1598436522" last-rc-change="1598436522" exec-time="2237" queue-time="0" op-digest="f2317cad3d54cec5d7d7aa7d0bf35cf8"/>
            <lrm_rsc_op id="SERVICE15_monitor_10000" operation_key="SERVICE15_monitor_10000" operation="monitor" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-key="80:1077:0:96431d62-66ac-46b4-9fbc-1cd89548a77d" transition-magic="0:0;80:1077:0:96431d62-66ac-46b4-9fbc-1cd89548a77d" exit-reason="" on_node="NODE2" call-id="75" rc-code="0" op-status="0" interval="10000" last-rc-change="1598436525" exec-time="3" queue-time="0" op-digest="873ed4f07792aa8ff18f3254244675ea"/>
          </lrm_resource>
          <lrm_resource id="SERVICE14" type="service14" class="systemd">
            <lrm_rsc_op id="SERVICE14_last_0" operation_key="SERVICE14_start_0" operation="start" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-key="71:1077:0:96431d62-66ac-46b4-9fbc-1cd89548a77d" transition-magic="0:0;71:1077:0:96431d62-66ac-46b4-9fbc-1cd89548a77d" exit-reason="" on_node="NODE2" call-id="72" rc-code="0" op-status="0" interval="0" last-run="1598436522" last-rc-change="1598436522" exec-time="2053" queue-time="0" op-digest="f2317cad3d54cec5d7d7aa7d0bf35cf8"/>
            <lrm_rsc_op id="SERVICE14_monitor_10000" operation_key="SERVICE14_monitor_10000" operation="monitor" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-key="72:1077:0:96431d62-66ac-46b4-9fbc-1cd89548a77d" transition-magic="0:0;72:1077:0:96431d62-66ac-46b4-9fbc-1cd89548a77d" exit-reason="" on_node="NODE2" call-id="74" rc-code="0" op-status="0" interval="10000" last-rc-change="1598436524" exec-time="1" queue-time="0" op-digest="873ed4f07792aa8ff18f3254244675ea"/>
          </lrm_resource>
          <lrm_resource id="SERVICE5" type="service5" class="systemd">
            <lrm_rsc_op id="SERVICE5_last_0" operation_key="SERVICE5_start_0" operation="start" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-key="43:1078:0:96431d62-66ac-46b4-9fbc-1cd89548a77d" transition-magic="0:0;43:1078:0:96431d62-66ac-46b4-9fbc-1cd89548a77d" exit-reason="" on_node="NODE2" call-id="84" rc-code="0" op-status="0" interval="0" last-run="1598436614" last-rc-change="1598436614" exec-time="2098" queue-time="0" op-digest="f2317cad3d54cec5d7d7aa7d0bf35cf8"/>
            <lrm_rsc_op id="SERVICE5_monitor_5000" operation_key="SERVICE5_monitor_5000" operation="monitor" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-key="27:1079:0:96431d62-66ac-46b4-9fbc-1cd89548a77d" transition-magic="0:0;27:1079:0:96431d62-66ac-46b4-9fbc-1cd89548a77d" exit-reason="" on_node="NODE2" call-id="98" rc-code="0" op-status="0" interval="5000" last-rc-change="1598436620" exec-time="2" queue-time="0" op-digest="4811cef7f7f94e3a35a70be7916cb2fd"/>
          </lrm_resource>
          <lrm_resource id="SOURCEIP" type="IPsrcaddr" class="ocf" provider="heartbeat">
            <lrm_rsc_op id="SOURCEIP_last_0" operation_key="SOURCEIP_start_0" operation="start" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-key="24:1078:0:96431d62-66ac-46b4-9fbc-1cd89548a77d" transition-magic="0:0;24:1078:0:96431d62-66ac-46b4-9fbc-1cd89548a77d" exit-reason="" on_node="NODE2" call-id="78" rc-code="0" op-status="0" interval="0" last-run="1598436612" last-rc-change="1598436612" exec-time="216" queue-time="0" op-digest="4238bbb83fc1ef2f001ab06eb7d14fe0"/>
            <lrm_rsc_op id="SOURCEIP_monitor_5000" operation_key="SOURCEIP_monitor_5000" operation="monitor" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-key="25:1078:0:96431d62-66ac-46b4-9fbc-1cd89548a77d" transition-magic="0:0;25:1078:0:96431d62-66ac-46b4-9fbc-1cd89548a77d" exit-reason="" on_node="NODE2" call-id="79" rc-code="0" op-status="0" interval="5000" last-rc-change="1598436613" exec-time="31" queue-time="0" op-digest="44fe0c6ab563ac3595f20eb3473b89fa"/>
          </lrm_resource>
          <lrm_resource id="SERVICE12" type="service12" class="systemd">
            <lrm_rsc_op id="SERVICE12_last_0" operation_key="SERVICE12_start_0" operation="start" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-key="46:0:0:d10dd5e7-af4d-4bba-a226-516824f8f60e" transition-magic="0:0;46:0:0:d10dd5e7-af4d-4bba-a226-516824f8f60e" exit-reason="" on_node="NODE2" call-id="107" rc-code="0" op-status="0" interval="0" last-run="1598436624" last-rc-change="1598436624" exec-time="2060" queue-time="0" op-digest="f2317cad3d54cec5d7d7aa7d0bf35cf8"/>
            <lrm_rsc_op id="SERVICE12_monitor_5000" operation_key="SERVICE12_monitor_5000" operation="monitor" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-key="48:1:0:d10dd5e7-af4d-4bba-a226-516824f8f60e" transition-magic="0:0;48:1:0:d10dd5e7-af4d-4bba-a226-516824f8f60e" exit-reason="" on_node="NODE2" call-id="109" rc-code="0" op-status="0" interval="5000" last-rc-change="1598436626" exec-time="2" queue-time="0" op-digest="4811cef7f7f94e3a35a70be7916cb2fd"/>
          </lrm_resource>
        </lrm_resources>
      </lrm>
    </node_state>
  </status>
</cib>

-------------- next part --------------
## CLUSTER IS RUNNING 

*****
Stack: corosync
Current DC: NODE2 (version 1.1.21-4.el7-f14e36fd43) - partition with quorum
Last updated: Thu Aug 27 08:42:21 2020
Last change: Thu Aug 27 08:42:18 2020 by hacluster via crmd on NODE1

2 nodes configured
19 resources configured

Online: [ NODE1 NODE2 ]

Active resources:

 Resource Group: IPV
     VIRTUALIP  (ocf::heartbeat:IPaddr2):       Started NODE2
     SOURCEIP   (ocf::heartbeat:IPsrcaddr):      Started NODE2
 SERVICE1   (systemd:service1):    Started NODE2
 SERVICE2   (systemd:service2):    Started NODE2
 SERVICE3   (systemd:service3):    Started NODE2
 SERVICE4   (systemd:service4):    Started NODE2
 SERVICE5   (systemd:service5):    Started NODE2
 SERVICE6   (systemd:service6):    Started NODE2
 SERVICE7   (systemd:service7):    Started NODE2
 SERVICE8   (systemd:service8):    Started NODE2
 SERVICE9   (systemd:service9):    Started NODE2
 SERVICE10  (systemd:service10):   Started NODE2
 SERVICE11  (systemd:service11):   Started NODE2
 SERVICE12  (systemd:service12):   Started NODE2
 SERVICE13  (systemd:service13):   Started NODE2
 Clone Set: SERVICE14-clone [SERVICE14]
     Started: [ NODE1 NODE2 ]
 Clone Set: SERVICE15-clone [SERVICE15]
     Started: [ NODE1 NODE2 ]

*****


## REBOOTING NODE1
# /var/log/cluster/corosync.log

Aug 27 08:46:16 [1331] NODE2       crmd:     info: crm_update_peer_expected:  handle_request: Node NODE1[1] - expected state is now down (was member)
Aug 27 08:46:16 [1331] NODE2       crmd:     info: handle_shutdown_request:   Creating shutdown request for NODE1 (state=S_IDLE)
Aug 27 08:46:16 [1329] NODE2      attrd:     info: attrd_peer_update: Setting shutdown[NODE1]: (null) -> 1598510776 from NODE2
Aug 27 08:46:16 [1329] NODE2      attrd:     info: write_attribute:   Sent CIB request 167 with 1 change for shutdown (id n/a, set n/a)
Aug 27 08:46:16 [1326] NODE2        cib:     info: cib_process_request:       Forwarding cib_modify operation for section status to all (origin=local/attrd/167)
Aug 27 08:46:16 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.2 2
Aug 27 08:46:16 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.434.3 (null)
Aug 27 08:46:16 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @num_updates=3
Aug 27 08:46:16 [1326] NODE2        cib:     info: cib_perform_op:    ++ /cib/status/node_state[@id='1']/transient_attributes[@id='1']/instance_attributes[@id='status-1']:  <nvpair id="status-1-shutdown" name="shutdown" value="1598510776"/>
Aug 27 08:46:16 [1331] NODE2       crmd:     info: abort_transition_graph:    Transition aborted by status-1-shutdown doing create shutdown=1598510776: Transient attribute change | cib=0.434.3 source=abort_unless_down:356 path=/cib/status/node_state[@id='1']/transient_attributes[@id='1']/instance_attributes[@id='status-1'] complete=true
Aug 27 08:46:16 [1331] NODE2       crmd:   notice: do_state_transition:       State transition S_IDLE -> S_POLICY_ENGINE | input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph
Aug 27 08:46:16 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section status: OK (rc=0, origin=NODE2/attrd/167, version=0.434.3)
Aug 27 08:46:16 [1329] NODE2      attrd:     info: attrd_cib_callback:        CIB update 167 result for shutdown: OK | rc=0
Aug 27 08:46:16 [1329] NODE2      attrd:     info: attrd_cib_callback:        * shutdown[NODE1]=1598510776
Aug 27 08:46:16 [1330] NODE2    pengine:   notice: unpack_config:     On loss of CCM Quorum: Ignore
Aug 27 08:46:16 [1330] NODE2    pengine:     info: determine_online_status:   Node NODE1 is shutting down
Aug 27 08:46:16 [1330] NODE2    pengine:     info: determine_online_status:   Node NODE2 is online
Aug 27 08:46:16 [1330] NODE2    pengine:     info: determine_op_status:       Operation monitor found resource SERVICE9 active on NODE1
Aug 27 08:46:16 [1330] NODE2    pengine:     info: determine_op_status:       Operation monitor found resource SERVICE4 active on NODE2
Aug 27 08:46:16 [1330] NODE2    pengine:     info: determine_op_status:       Operation monitor found resource SERVICE1 active on NODE2
Aug 27 08:46:16 [1330] NODE2    pengine:     info: unpack_node_loop:  Node 1 is already processed
Aug 27 08:46:16 [1330] NODE2    pengine:     info: unpack_node_loop:  Node 2 is already processed
Aug 27 08:46:16 [1330] NODE2    pengine:     info: unpack_node_loop:  Node 1 is already processed
Aug 27 08:46:16 [1330] NODE2    pengine:     info: unpack_node_loop:  Node 2 is already processed
Aug 27 08:46:16 [1330] NODE2    pengine:     info: group_print:        Resource Group: IPV
Aug 27 08:46:16 [1330] NODE2    pengine:     info: common_print:           VIRTUALIP  (ocf::heartbeat:IPaddr2):       Started NODE2
Aug 27 08:46:16 [1330] NODE2    pengine:     info: common_print:           SOURCEIP   (ocf::heartbeat:IPsrcaddr):      Started NODE2
Aug 27 08:46:16 [1330] NODE2    pengine:     info: common_print:      SERVICE1        (systemd:service1):     Started NODE2
Aug 27 08:46:16 [1330] NODE2    pengine:     info: common_print:      SERVICE2    (systemd:service2): Started NODE2
Aug 27 08:46:16 [1330] NODE2    pengine:     info: common_print:      SERVICE3       (systemd:service3):  Started NODE2
Aug 27 08:46:16 [1330] NODE2    pengine:     info: common_print:      SERVICE4       (systemd:service4):  Started NODE2
Aug 27 08:46:16 [1330] NODE2    pengine:     info: common_print:      SERVICE5      (systemd:service5): Started NODE2
Aug 27 08:46:16 [1330] NODE2    pengine:     info: common_print:      SERVICE6 (systemd:service6):    Started NODE2
Aug 27 08:46:16 [1330] NODE2    pengine:     info: common_print:      SERVICE7  (systemd:service7):      Started NODE2
Aug 27 08:46:16 [1330] NODE2    pengine:     info: common_print:      SERVICE8   (systemd:service8):  Started NODE2
Aug 27 08:46:16 [1330] NODE2    pengine:     info: common_print:      SERVICE9        (systemd:service9):   Started NODE2
Aug 27 08:46:16 [1330] NODE2    pengine:     info: common_print:      SERVICE10        (systemd:service10):        Started NODE2
Aug 27 08:46:16 [1330] NODE2    pengine:     info: common_print:      SERVICE11       (systemd:service11):       Started NODE2
Aug 27 08:46:16 [1330] NODE2    pengine:     info: common_print:      SERVICE12   (systemd:service12):       Started NODE2
Aug 27 08:46:16 [1330] NODE2    pengine:     info: common_print:      SERVICE13  (systemd:service13):       Started NODE2
Aug 27 08:46:16 [1330] NODE2    pengine:     info: clone_print:        Clone Set: SERVICE14-clone [SERVICE14]
Aug 27 08:46:16 [1330] NODE2    pengine:     info: short_print:            Started: [ NODE1 NODE2 ]
Aug 27 08:46:16 [1330] NODE2    pengine:     info: clone_print:        Clone Set: SERVICE15-clone [SERVICE15]
Aug 27 08:46:16 [1330] NODE2    pengine:     info: short_print:            Started: [ NODE1 NODE2 ]
Aug 27 08:46:16 [1330] NODE2    pengine:     info: native_color:      Resource SERVICE14:0 cannot run anywhere
Aug 27 08:46:16 [1330] NODE2    pengine:     info: native_color:      Resource SERVICE15:0 cannot run anywhere
Aug 27 08:46:16 [1330] NODE2    pengine:   notice: sched_shutdown_op: Scheduling shutdown of node NODE1
Aug 27 08:46:16 [1330] NODE2    pengine:   notice: LogNodeActions:     * Shutdown NODE1
Aug 27 08:46:16 [1330] NODE2    pengine:     info: LogActions:        Leave   VIRTUALIP       (Started NODE2)
Aug 27 08:46:16 [1330] NODE2    pengine:     info: LogActions:        Leave   SOURCEIP        (Started NODE2)
Aug 27 08:46:16 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE1        (Started NODE2)
Aug 27 08:46:16 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE2    (Started NODE2)
Aug 27 08:46:16 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE3       (Started NODE2)
Aug 27 08:46:16 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE4       (Started NODE2)
Aug 27 08:46:16 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE5      (Started NODE2)
Aug 27 08:46:16 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE6 (Started NODE2)
Aug 27 08:46:16 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE7  (Started NODE2)
Aug 27 08:46:16 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE8   (Started NODE2)
Aug 27 08:46:16 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE9        (Started NODE2)
Aug 27 08:46:16 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE10        (Started NODE2)
Aug 27 08:46:16 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE11       (Started NODE2)
Aug 27 08:46:16 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE12   (Started NODE2)
Aug 27 08:46:16 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE13  (Started NODE2)
Aug 27 08:46:16 [1330] NODE2    pengine:   notice: LogAction:  * Stop       SERVICE14:0         (            NODE1 )   due to node availability
Aug 27 08:46:16 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE14:1        (Started NODE2)
Aug 27 08:46:16 [1330] NODE2    pengine:   notice: LogAction:  * Stop       SERVICE15:0                  (            NODE1 )   due to node availability
Aug 27 08:46:16 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE15:1 (Started NODE2)
Aug 27 08:46:16 [1330] NODE2    pengine:   notice: process_pe_message:        Calculated transition 331, saving inputs in /var/lib/pacemaker/pengine/pe-input-1508.bz2
Aug 27 08:46:16 [1331] NODE2       crmd:     info: do_state_transition:       State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE | input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response
Aug 27 08:46:16 [1331] NODE2       crmd:     info: do_te_invoke:      Processing graph 331 (ref=pe_calc-dc-1598510776-962) derived from /var/lib/pacemaker/pengine/pe-input-1508.bz2
Aug 27 08:46:16 [1331] NODE2       crmd:   notice: te_rsc_command:    Initiating stop operation SERVICE14_stop_0 on NODE1 | action 54
Aug 27 08:46:16 [1331] NODE2       crmd:   notice: te_rsc_command:    Initiating stop operation SERVICE15_stop_0 on NODE1 | action 61
Aug 27 08:46:16 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.3 2
Aug 27 08:46:16 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.434.4 (null)
Aug 27 08:46:16 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @num_updates=4
Aug 27 08:46:16 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='SERVICE14']/lrm_rsc_op[@id='SERVICE14_last_0']:  @operation_key=SERVICE14_stop_0, @operation=stop, @transition-key=54:331:0:d10dd5e7-af4d-4bba-a226-516824f8f60e, @transition-magic=-1:193;54:331:0:d10dd5e7-af4d-4bba-a226-516824f8f60e, @call-id=-1, @rc-code=193, @op-status=-1, @last-run=1598510798, @last-rc-change=1598510798, @exec-time=0
Aug 27 08:46:16 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section status: OK (rc=0, origin=NODE1/crmd/198, version=0.434.4)
Aug 27 08:46:16 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.4 2
Aug 27 08:46:16 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.434.5 (null)
Aug 27 08:46:16 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @num_updates=5
Aug 27 08:46:16 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='SERVICE15']/lrm_rsc_op[@id='SERVICE15_last_0']:  @operation_key=SERVICE15_stop_0, @operation=stop, @transition-key=61:331:0:d10dd5e7-af4d-4bba-a226-516824f8f60e, @transition-magic=-1:193;61:331:0:d10dd5e7-af4d-4bba-a226-516824f8f60e, @call-id=-1, @rc-code=193, @op-status=-1, @last-run=1598510798, @last-rc-change=1598510798, @exec-time=0
Aug 27 08:46:16 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section status: OK (rc=0, origin=NODE1/crmd/199, version=0.434.5)
Aug 27 08:46:18 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.5 2
Aug 27 08:46:18 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.434.6 (null)
Aug 27 08:46:18 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @num_updates=6
Aug 27 08:46:18 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='SERVICE14']/lrm_rsc_op[@id='SERVICE14_last_0']:  @transition-magic=0:0;54:331:0:d10dd5e7-af4d-4bba-a226-516824f8f60e, @call-id=186, @rc-code=0, @op-status=0, @exec-time=2052
Aug 27 08:46:18 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section status: OK (rc=0, origin=NODE1/crmd/200, version=0.434.6)
Aug 27 08:46:18 [1331] NODE2       crmd:     info: match_graph_event: Action SERVICE14_stop_0 (54) confirmed on NODE1 (rc=0)
Aug 27 08:46:18 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.6 2
Aug 27 08:46:18 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.434.7 (null)
Aug 27 08:46:18 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @num_updates=7
Aug 27 08:46:18 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='SERVICE15']/lrm_rsc_op[@id='SERVICE15_last_0']:  @transition-magic=0:0;61:331:0:d10dd5e7-af4d-4bba-a226-516824f8f60e, @call-id=188, @rc-code=0, @op-status=0, @exec-time=2104
Aug 27 08:46:18 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section status: OK (rc=0, origin=NODE1/crmd/201, version=0.434.7)
Aug 27 08:46:18 [1331] NODE2       crmd:     info: match_graph_event: Action SERVICE15_stop_0 (61) confirmed on NODE1 (rc=0)
Aug 27 08:46:18 [1331] NODE2       crmd:     info: te_crm_command:    Executing crm-event (68): do_shutdown on NODE1
Aug 27 08:46:18 [1331] NODE2       crmd:   notice: run_graph: Transition 331 (Complete=7, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-1508.bz2): Complete
Aug 27 08:46:18 [1331] NODE2       crmd:     info: do_log:    Input I_TE_SUCCESS received in state S_TRANSITION_ENGINE from notify_crmd
Aug 27 08:46:18 [1331] NODE2       crmd:   notice: do_state_transition:       State transition S_TRANSITION_ENGINE -> S_IDLE | input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd
Aug 27 08:46:18 [1331] NODE2       crmd:     info: pcmk_cpg_membership:       Group crmd event 13: NODE1 (node 1 pid 1261) left via cpg_leave
Aug 27 08:46:18 [1331] NODE2       crmd:     info: crm_update_peer_proc:      pcmk_cpg_membership: Node NODE1[1] - corosync-cpg is now offline
Aug 27 08:46:18 [1331] NODE2       crmd:     info: peer_update_callback:      Client NODE1/peer now has status [offline] (DC=true, changed=4000000)
Aug 27 08:46:18 [1331] NODE2       crmd:     info: controld_delete_node_state:        Deleting transient attributes for node NODE1 (via CIB call 1357) | xpath=//node_state[@uname='NODE1']/transient_attributes
Aug 27 08:46:18 [1331] NODE2       crmd:   notice: peer_update_callback:      do_shutdown of peer NODE1 is complete | op=68
Aug 27 08:46:18 [1331] NODE2       crmd:     info: pcmk_cpg_membership:       Group crmd event 13: NODE2 (node 2 pid 1331) is member
Aug 27 08:46:18 [1326] NODE2        cib:     info: cib_process_request:       Forwarding cib_delete operation for section //node_state[@uname='NODE1']/transient_attributes to all (origin=local/crmd/1357)
Aug 27 08:46:18 [1326] NODE2        cib:     info: cib_process_request:       Forwarding cib_modify operation for section status to all (origin=local/crmd/1358)
Aug 27 08:46:18 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.7 2
Aug 27 08:46:18 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.434.8 b48848cd3dacdd25352599c5c5add0cc
Aug 27 08:46:18 [1326] NODE2        cib:     info: cib_perform_op:    -- /cib/status/node_state[@id='1']/transient_attributes[@id='1']
Aug 27 08:46:18 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @num_updates=8
Aug 27 08:46:18 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_delete operation for section //node_state[@uname='NODE1']/transient_attributes: OK (rc=0, origin=NODE2/crmd/1357, version=0.434.8)
Aug 27 08:46:18 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.8 2
Aug 27 08:46:18 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.434.9 (null)
Aug 27 08:46:18 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @num_updates=9
Aug 27 08:46:18 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib/status/node_state[@id='1']:  @crmd=offline, @crm-debug-origin=peer_update_callback, @join=down, @expected=down
Aug 27 08:46:18 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section status: OK (rc=0, origin=NODE2/crmd/1358, version=0.434.9)
Aug 27 08:46:18 [1329] NODE2      attrd:     info: pcmk_cpg_membership:       Group attrd event 13: NODE1 (node 1 pid 1259) left for unknown reason
Aug 27 08:46:18 [1329] NODE2      attrd:     info: crm_update_peer_proc:      pcmk_cpg_membership: Node NODE1[1] - corosync-cpg is now offline
Aug 27 08:46:18 [1329] NODE2      attrd:   notice: crm_update_peer_state_iter:        Node NODE1 state is now lost | nodeid=1 previous=member source=crm_update_peer_proc
Aug 27 08:46:18 [1329] NODE2      attrd:   notice: attrd_peer_remove: Removing all NODE1 attributes for peer loss
Aug 27 08:46:18 [1329] NODE2      attrd:     info: crm_reap_dead_member:      Removing node with name NODE1 and id 1 from membership cache
Aug 27 08:46:18 [1329] NODE2      attrd:   notice: reap_crm_member:   Purged 1 peer with id=1 and/or uname=NODE1 from the membership cache
Aug 27 08:46:18 [1329] NODE2      attrd:     info: pcmk_cpg_membership:       Group attrd event 13: NODE2 (node 2 pid 1329) is member
Aug 27 08:46:18 [1327] NODE2 stonith-ng:     info: pcmk_cpg_membership:       Group stonith-ng event 13: NODE1 (node 1 pid 1257) left for unknown reason
Aug 27 08:46:18 [1327] NODE2 stonith-ng:     info: crm_update_peer_proc:      pcmk_cpg_membership: Node NODE1[1] - corosync-cpg is now offline
Aug 27 08:46:18 [1327] NODE2 stonith-ng:   notice: crm_update_peer_state_iter:        Node NODE1 state is now lost | nodeid=1 previous=member source=crm_update_peer_proc
Aug 27 08:46:18 [1327] NODE2 stonith-ng:     info: crm_reap_dead_member:      Removing node with name NODE1 and id 1 from membership cache
Aug 27 08:46:18 [1327] NODE2 stonith-ng:   notice: reap_crm_member:   Purged 1 peer with id=1 and/or uname=NODE1 from the membership cache
Aug 27 08:46:18 [1327] NODE2 stonith-ng:     info: pcmk_cpg_membership:       Group stonith-ng event 13: NODE2 (node 2 pid 1327) is member
Aug 27 08:46:18 [1326] NODE2        cib:     info: cib_process_shutdown_req:  Shutdown REQ from NODE1
Aug 27 08:46:18 [1326] NODE2        cib:     info: pcmk_cpg_membership:       Group cib event 13: NODE1 (node 1 pid 1256) left via cpg_leave
Aug 27 08:46:18 [1326] NODE2        cib:     info: crm_update_peer_proc:      pcmk_cpg_membership: Node NODE1[1] - corosync-cpg is now offline
Aug 27 08:46:18 [1326] NODE2        cib:   notice: crm_update_peer_state_iter:        Node NODE1 state is now lost | nodeid=1 previous=member source=crm_update_peer_proc
Aug 27 08:46:18 [1326] NODE2        cib:     info: crm_reap_dead_member:      Removing node with name NODE1 and id 1 from membership cache
Aug 27 08:46:18 [1326] NODE2        cib:   notice: reap_crm_member:   Purged 1 peer with id=1 and/or uname=NODE1 from the membership cache
Aug 27 08:46:18 [1326] NODE2        cib:     info: pcmk_cpg_membership:       Group cib event 13: NODE2 (node 2 pid 1326) is member
Aug 27 08:46:18 [1240] NODE2 pacemakerd:     info: pcmk_cpg_membership:       Group pacemakerd event 13: NODE1 (node 1 pid 1247) left via cpg_leave
Aug 27 08:46:18 [1240] NODE2 pacemakerd:     info: crm_update_peer_proc:      pcmk_cpg_membership: Node NODE1[1] - corosync-cpg is now offline
Aug 27 08:46:18 [1240] NODE2 pacemakerd:     info: pcmk_cpg_membership:       Group pacemakerd event 13: NODE2 (node 2 pid 1240) is member
Aug 27 08:46:18 [1240] NODE2 pacemakerd:     info: mcp_cpg_deliver:   Ignoring process list sent by peer for local node
[1160] NODE2 corosyncnotice  [TOTEM ] A new membership (10.130.10.2:208639) was formed. Members left: 1
[1160] NODE2 corosyncwarning [CPG   ] downlist left_list: 1 received
[1160] NODE2 corosyncnotice  [QUORUM] Members[1]: 2
[1160] NODE2 corosyncnotice  [MAIN  ] Completed service synchronization, ready to provide service.
Aug 27 08:46:19 [1240] NODE2 pacemakerd:     info: pcmk_quorum_notification:  Quorum retained | membership=208639 members=1
Aug 27 08:46:19 [1240] NODE2 pacemakerd:   notice: crm_update_peer_state_iter:        Node NODE1 state is now lost | nodeid=1 previous=member source=crm_reap_unseen_nodes
Aug 27 08:46:19 [1331] NODE2       crmd:     info: pcmk_quorum_notification:  Quorum retained | membership=208639 members=1
Aug 27 08:46:19 [1331] NODE2       crmd:   notice: crm_update_peer_state_iter:        Node NODE1 state is now lost | nodeid=1 previous=member source=crm_reap_unseen_nodes
Aug 27 08:46:19 [1331] NODE2       crmd:     info: peer_update_callback:      Cluster node NODE1 is now lost (was member)
Aug 27 08:46:19 [1331] NODE2       crmd:   notice: peer_update_callback:      do_shutdown of peer NODE1 is complete | op=68
Aug 27 08:46:19 [1326] NODE2        cib:     info: cib_process_request:       Forwarding cib_modify operation for section status to all (origin=local/crmd/1359)
Aug 27 08:46:19 [1326] NODE2        cib:     info: cib_process_request:       Forwarding cib_modify operation for section nodes to all (origin=local/crmd/1362)
Aug 27 08:46:19 [1326] NODE2        cib:     info: cib_process_request:       Forwarding cib_modify operation for section status to all (origin=local/crmd/1363)
Aug 27 08:46:19 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section status: OK (rc=0, origin=NODE2/crmd/1359, version=0.434.9)
Aug 27 08:46:19 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section nodes: OK (rc=0, origin=NODE2/crmd/1362, version=0.434.9)
Aug 27 08:46:19 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.9 2
Aug 27 08:46:19 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.434.10 (null)
Aug 27 08:46:19 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @num_updates=10
Aug 27 08:46:19 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib/status/node_state[@id='1']:  @in_ccm=false, @crm-debug-origin=post_cache_update
Aug 27 08:46:19 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib/status/node_state[@id='2']:  @crm-debug-origin=post_cache_update
Aug 27 08:46:19 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section status: OK (rc=0, origin=NODE2/crmd/1363, version=0.434.10)
Aug 27 08:46:24 [1326] NODE2        cib:     info: cib_process_ping:  Reporting our current digest to NODE2: 6dfb70e00efc6d1de9f85e9eac251ba6 for 0.434.10 (0x563b74317400 0)


# crm_mon snapshot just after NODE1 shut down
*****
Stack: corosync
Current DC: NODE2 (version 1.1.21-4.el7-f14e36fd43) - partition with quorum
Last updated: Thu Aug 27 08:46:21 2020
Last change: Thu Aug 27 08:42:18 2020 by hacluster via crmd on NODE1

2 nodes configured
19 resources configured

Online: [ NODE2 ]
OFFLINE: [ NODE1 ]

Active resources:

 Resource Group: IPV
     VIRTUALIP  (ocf::heartbeat:IPaddr2):       Started NODE2
     SOURCEIP   (ocf::heartbeat:IPsrcaddr):      Started NODE2
SERVICE1        (systemd:service1):     Started NODE2
SERVICE2    (systemd:service2): Started NODE2
SERVICE3       (systemd:service3):  Started NODE2
SERVICE4       (systemd:service4):  Started NODE2
SERVICE5      (systemd:service5): Started NODE2
SERVICE6 (systemd:service6):    Started NODE2
SERVICE7  (systemd:service7):      Started NODE2
SERVICE8   (systemd:service8):  Started NODE2
SERVICE9        (systemd:service9):   Started NODE2
SERVICE10        (systemd:service10):        Started NODE2
SERVICE11       (systemd:service11):       Started NODE2
SERVICE12   (systemd:service12):       Started NODE2
SERVICE13  (systemd:service13):       Started NODE2
 Clone Set: SERVICE14-clone [SERVICE14]
     Started: [ NODE2 ]
 Clone Set: SERVICE15-clone [SERVICE15]
     Started: [ NODE2 ]
*****





## WAITING FOR NODE1 TO COME BACK




# crm_mon snapshot right after NODE1 rejoined
*****
Stack: corosync
Current DC: NODE2 (version 1.1.21-4.el7-f14e36fd43) - partition with quorum
Last updated: Thu Aug 27 08:47:02 2020
Last change: Thu Aug 27 08:42:18 2020 by hacluster via crmd on NODE1

2 nodes configured
19 resources configured

Online: [ NODE1 NODE2 ]

Active resources:

 Resource Group: IPV
     VIRTUALIP  (ocf::heartbeat:IPaddr2):       Started NODE2
     SOURCEIP   (ocf::heartbeat:IPsrcaddr):      Started NODE2
SERVICE1        (systemd:service1):     Stopping[ NODE1 NODE2 ]                      # <-- SERVICE1 is being stopped on BOTH nodes right after NODE1 rejoined
SERVICE2    (systemd:service2): Started NODE2
SERVICE3       (systemd:service3):  Started NODE2
SERVICE4       (systemd:service4):  Started NODE2
SERVICE5      (systemd:service5): Started NODE2
SERVICE6 (systemd:service6):    Started NODE2
SERVICE7  (systemd:service7):      Started NODE2
SERVICE8   (systemd:service8):  Started NODE2
SERVICE9        (systemd:service9):   Started NODE2
SERVICE10        (systemd:service10):        Started NODE2
SERVICE11       (systemd:service11):       Started NODE2
SERVICE12   (systemd:service12):       Started NODE2
SERVICE13  (systemd:service13):       Started NODE2
 Clone Set: SERVICE14-clone [SERVICE14]
     Started: [ NODE2 ]
 Clone Set: SERVICE15-clone [SERVICE15]
     Started: [ NODE2 ]
*****
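
# NOTE: "Stopping[ NODE1 NODE2 ]" suggests the probe run when NODE1 rejoined found
# service1 already running there, so the cluster considers the resource active on
# two nodes and recovers it by stopping it everywhere before starting it again.
# Worth checking on both nodes that the systemd units managed by the cluster are
# not also enabled at boot; if they are, the node starts them itself when it comes
# back. Unit names below are assumed to match the resource definitions
# (e.g. service1.service):

    # systemctl is-enabled service1.service
    # systemctl disable service1.service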



# /var/log/cluster/corosync.log
[1160] NODE2 corosyncnotice  [TOTEM ] A new membership (10.130.10.1:208644) was formed. Members joined: 1
[1160] NODE2 corosyncwarning [CPG   ] downlist left_list: 0 received
[1160] NODE2 corosyncwarning [CPG   ] downlist left_list: 0 received
[1160] NODE2 corosyncnotice  [QUORUM] Members[2]: 1 2
[1160] NODE2 corosyncnotice  [MAIN  ] Completed service synchronization, ready to provide service.
Aug 27 08:46:54 [1240] NODE2 pacemakerd:     info: pcmk_quorum_notification:  Quorum retained | membership=208644 members=2
Aug 27 08:46:54 [1240] NODE2 pacemakerd:   notice: crm_update_peer_state_iter:        Node NODE1 state is now member | nodeid=1 previous=lost source=pcmk_quorum_notification
Aug 27 08:46:54 [1331] NODE2       crmd:     info: pcmk_quorum_notification:  Quorum retained | membership=208644 members=2
Aug 27 08:46:54 [1331] NODE2       crmd:   notice: crm_update_peer_state_iter:        Node NODE1 state is now member | nodeid=1 previous=lost source=pcmk_quorum_notification
Aug 27 08:46:54 [1331] NODE2       crmd:     info: peer_update_callback:      Cluster node NODE1 is now member (was lost)
Aug 27 08:46:54 [1331] NODE2       crmd:   notice: peer_update_callback:      do_shutdown of peer NODE1 is complete | op=68
Aug 27 08:46:54 [1326] NODE2        cib:     info: cib_process_request:       Forwarding cib_modify operation for section status to all (origin=local/crmd/1364)
Aug 27 08:46:54 [1331] NODE2       crmd:   notice: do_state_transition:       State transition S_IDLE -> S_INTEGRATION | input=I_NODE_JOIN cause=C_FSA_INTERNAL origin=check_join_state
Aug 27 08:46:54 [1331] NODE2       crmd:     info: do_dc_join_offer_one:      Making join-1 offers to any unconfirmed nodes because an unknown node joined
Aug 27 08:46:54 [1331] NODE2       crmd:     info: join_make_offer:   Making join-1 offers based on membership event 208644
Aug 27 08:46:54 [1331] NODE2       crmd:     info: join_make_offer:   Not making join-1 offer to already known node NODE2 (confirmed)
Aug 27 08:46:54 [1331] NODE2       crmd:     info: join_make_offer:   Not making join-1 offer to inactive node NODE1
Aug 27 08:46:54 [1331] NODE2       crmd:     info: abort_transition_graph:    Transition aborted: Peer Halt | source=do_te_invoke:150 complete=true
Aug 27 08:46:54 [1331] NODE2       crmd:     info: do_state_transition:       State transition S_INTEGRATION -> S_FINALIZE_JOIN | input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state
Aug 27 08:46:54 [1331] NODE2       crmd:     info: do_state_transition:       State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE | input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state
Aug 27 08:46:54 [1326] NODE2        cib:     info: cib_process_request:       Forwarding cib_modify operation for section nodes to all (origin=local/crmd/1367)
Aug 27 08:46:54 [1326] NODE2        cib:     info: cib_process_request:       Forwarding cib_modify operation for section status to all (origin=local/crmd/1368)
Aug 27 08:46:54 [1331] NODE2       crmd:     info: abort_transition_graph:    Transition aborted: Peer Cancelled | source=do_te_invoke:143 complete=true
Aug 27 08:46:54 [1326] NODE2        cib:     info: cib_process_request:       Forwarding cib_modify operation for section nodes to all (origin=local/crmd/1371)
Aug 27 08:46:54 [1326] NODE2        cib:     info: cib_process_request:       Forwarding cib_modify operation for section status to all (origin=local/crmd/1372)
Aug 27 08:46:54 [1326] NODE2        cib:     info: cib_process_request:       Forwarding cib_modify operation for section cib to all (origin=local/crmd/1373)
Aug 27 08:46:54 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.10 2
Aug 27 08:46:54 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.434.11 (null)
Aug 27 08:46:54 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @num_updates=11
Aug 27 08:46:54 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib/status/node_state[@id='1']:  @crm-debug-origin=peer_update_callback
Aug 27 08:46:54 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section status: OK (rc=0, origin=NODE2/crmd/1364, version=0.434.11)
Aug 27 08:46:54 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section nodes: OK (rc=0, origin=NODE2/crmd/1367, version=0.434.11)
Aug 27 08:46:54 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.11 2
Aug 27 08:46:54 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.434.12 (null)
Aug 27 08:46:54 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @num_updates=12
Aug 27 08:46:54 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib/status/node_state[@id='1']:  @in_ccm=true, @crm-debug-origin=post_cache_update
Aug 27 08:46:54 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section status: OK (rc=0, origin=NODE2/crmd/1368, version=0.434.12)
Aug 27 08:46:54 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section nodes: OK (rc=0, origin=NODE2/crmd/1371, version=0.434.12)
Aug 27 08:46:54 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.12 2
Aug 27 08:46:54 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.434.13 (null)
Aug 27 08:46:54 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @num_updates=13
Aug 27 08:46:54 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib/status/node_state[@id='1']:  @crm-debug-origin=do_state_transition
Aug 27 08:46:54 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib/status/node_state[@id='2']:  @crm-debug-origin=do_state_transition
Aug 27 08:46:54 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section status: OK (rc=0, origin=NODE2/crmd/1372, version=0.434.13)
Aug 27 08:46:54 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section cib: OK (rc=0, origin=NODE2/crmd/1373, version=0.434.13)
Aug 27 08:46:55 [1330] NODE2    pengine:   notice: unpack_config:     On loss of CCM Quorum: Ignore
Aug 27 08:46:55 [1330] NODE2    pengine:     info: determine_online_status:   Node NODE2 is online
Aug 27 08:46:55 [1330] NODE2    pengine:     info: determine_op_status:       Operation monitor found resource SERVICE4 active on NODE2
Aug 27 08:46:55 [1330] NODE2    pengine:     info: determine_op_status:       Operation monitor found resource SERVICE1 active on NODE2
Aug 27 08:46:55 [1330] NODE2    pengine:     info: unpack_node_loop:  Node 2 is already processed
Aug 27 08:46:55 [1330] NODE2    pengine:     info: unpack_node_loop:  Node 2 is already processed
Aug 27 08:46:55 [1330] NODE2    pengine:     info: group_print:        Resource Group: IPV
Aug 27 08:46:55 [1330] NODE2    pengine:     info: common_print:           VIRTUALIP  (ocf::heartbeat:IPaddr2):       Started NODE2
Aug 27 08:46:55 [1330] NODE2    pengine:     info: common_print:           SOURCEIP   (ocf::heartbeat:IPsrcaddr):      Started NODE2
Aug 27 08:46:55 [1330] NODE2    pengine:     info: common_print:      SERVICE1        (systemd:service1):     Started NODE2
Aug 27 08:46:55 [1330] NODE2    pengine:     info: common_print:      SERVICE2    (systemd:service2): Started NODE2
Aug 27 08:46:55 [1330] NODE2    pengine:     info: common_print:      SERVICE3       (systemd:service3):  Started NODE2
Aug 27 08:46:55 [1330] NODE2    pengine:     info: common_print:      SERVICE4       (systemd:service4):  Started NODE2
Aug 27 08:46:55 [1330] NODE2    pengine:     info: common_print:      SERVICE5      (systemd:service5): Started NODE2
Aug 27 08:46:55 [1330] NODE2    pengine:     info: common_print:      SERVICE6 (systemd:service6):    Started NODE2
Aug 27 08:46:55 [1330] NODE2    pengine:     info: common_print:      SERVICE7  (systemd:service7):      Started NODE2
Aug 27 08:46:55 [1330] NODE2    pengine:     info: common_print:      SERVICE8   (systemd:service8):  Started NODE2
Aug 27 08:46:55 [1330] NODE2    pengine:     info: common_print:      SERVICE9        (systemd:service9):   Started NODE2
Aug 27 08:46:55 [1330] NODE2    pengine:     info: common_print:      SERVICE10        (systemd:service10):        Started NODE2
Aug 27 08:46:55 [1330] NODE2    pengine:     info: common_print:      SERVICE11       (systemd:service11):       Started NODE2
Aug 27 08:46:55 [1330] NODE2    pengine:     info: common_print:      SERVICE12   (systemd:service12):       Started NODE2
Aug 27 08:46:55 [1330] NODE2    pengine:     info: common_print:      SERVICE13  (systemd:service13):       Started NODE2
Aug 27 08:46:55 [1330] NODE2    pengine:     info: clone_print:        Clone Set: SERVICE14-clone [SERVICE14]
Aug 27 08:46:55 [1330] NODE2    pengine:     info: short_print:            Started: [ NODE2 ]
Aug 27 08:46:55 [1330] NODE2    pengine:     info: short_print:            Stopped: [ NODE1 ]
Aug 27 08:46:55 [1330] NODE2    pengine:     info: clone_print:        Clone Set: SERVICE15-clone [SERVICE15]
Aug 27 08:46:55 [1330] NODE2    pengine:     info: short_print:            Started: [ NODE2 ]
Aug 27 08:46:55 [1330] NODE2    pengine:     info: short_print:            Stopped: [ NODE1 ]
Aug 27 08:46:55 [1330] NODE2    pengine:     info: native_color:      Resource SERVICE14:1 cannot run anywhere
Aug 27 08:46:55 [1330] NODE2    pengine:     info: native_color:      Resource SERVICE15:1 cannot run anywhere
Aug 27 08:46:55 [1330] NODE2    pengine:     info: LogActions:        Leave   VIRTUALIP       (Started NODE2)
Aug 27 08:46:55 [1330] NODE2    pengine:     info: LogActions:        Leave   SOURCEIP        (Started NODE2)
Aug 27 08:46:55 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE1        (Started NODE2)
Aug 27 08:46:55 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE2    (Started NODE2)
Aug 27 08:46:55 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE3       (Started NODE2)
Aug 27 08:46:55 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE4       (Started NODE2)
Aug 27 08:46:55 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE5      (Started NODE2)
Aug 27 08:46:55 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE6 (Started NODE2)
Aug 27 08:46:55 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE7  (Started NODE2)
Aug 27 08:46:55 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE8   (Started NODE2)
Aug 27 08:46:55 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE9        (Started NODE2)
Aug 27 08:46:55 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE10        (Started NODE2)
Aug 27 08:46:55 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE11       (Started NODE2)
Aug 27 08:46:55 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE12   (Started NODE2)
Aug 27 08:46:55 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE13  (Started NODE2)
Aug 27 08:46:55 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE14:0        (Started NODE2)
Aug 27 08:46:55 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE14:1        (Stopped)
Aug 27 08:46:55 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE15:0 (Started NODE2)
Aug 27 08:46:55 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE15:1 (Stopped)
Aug 27 08:46:55 [1330] NODE2    pengine:   notice: process_pe_message:        Calculated transition 332, saving inputs in /var/lib/pacemaker/pengine/pe-input-1509.bz2
Aug 27 08:46:55 [1331] NODE2       crmd:     info: do_state_transition:       State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE | input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response
Aug 27 08:46:55 [1331] NODE2       crmd:     info: do_te_invoke:      Processing graph 332 (ref=pe_calc-dc-1598510815-968) derived from /var/lib/pacemaker/pengine/pe-input-1509.bz2
Aug 27 08:46:55 [1331] NODE2       crmd:   notice: run_graph: Transition 332 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-1509.bz2): Complete
Aug 27 08:46:55 [1331] NODE2       crmd:     info: do_log:    Input I_TE_SUCCESS received in state S_TRANSITION_ENGINE from notify_crmd
Aug 27 08:46:55 [1331] NODE2       crmd:   notice: do_state_transition:       State transition S_TRANSITION_ENGINE -> S_IDLE | input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd
Aug 27 08:46:56 [1240] NODE2 pacemakerd:     info: pcmk_cpg_membership:       Group pacemakerd event 14: node 1 pid 1254 joined via cpg_join
Aug 27 08:46:56 [1240] NODE2 pacemakerd:     info: pcmk_cpg_membership:       Group pacemakerd event 14: NODE1 (node 1 pid 1254) is member
Aug 27 08:46:56 [1240] NODE2 pacemakerd:     info: crm_update_peer_proc:      pcmk_cpg_membership: Node NODE1[1] - corosync-cpg is now online
Aug 27 08:46:56 [1240] NODE2 pacemakerd:     info: pcmk_cpg_membership:       Group pacemakerd event 14: NODE2 (node 2 pid 1240) is member
Aug 27 08:46:56 [1240] NODE2 pacemakerd:     info: mcp_cpg_deliver:   Ignoring process list sent by peer for local node
Aug 27 08:46:56 [1329] NODE2      attrd:     info: pcmk_cpg_membership:       Group attrd event 14: node 1 pid 1262 joined via cpg_join
Aug 27 08:46:56 [1329] NODE2      attrd:     info: crm_get_peer:      Created entry 90dd23fb-b65a-41cb-99d6-45b858fa9f67/0x562bfd3a2510 for node NODE1/1 (2 total)
Aug 27 08:46:56 [1329] NODE2      attrd:     info: crm_get_peer:      Node 1 is now known as NODE1
Aug 27 08:46:56 [1329] NODE2      attrd:  warning: crm_update_peer_uname:     Node names with capitals are discouraged, consider changing 'NODE1'
Aug 27 08:46:56 [1329] NODE2      attrd:     info: crm_get_peer:      Node 1 has uuid 1
Aug 27 08:46:56 [1329] NODE2      attrd:     info: pcmk_cpg_membership:       Group attrd event 14: NODE1 (node 1 pid 1262) is member
Aug 27 08:46:56 [1329] NODE2      attrd:     info: crm_update_peer_proc:      pcmk_cpg_membership: Node NODE1[1] - corosync-cpg is now online
Aug 27 08:46:56 [1329] NODE2      attrd:   notice: crm_update_peer_state_iter:        Node NODE1 state is now member | nodeid=1 previous=unknown source=crm_update_peer_proc
Aug 27 08:46:56 [1329] NODE2      attrd:     info: pcmk_cpg_membership:       Group attrd event 14: NODE2 (node 2 pid 1329) is member
Aug 27 08:46:56 [1327] NODE2 stonith-ng:     info: pcmk_cpg_membership:       Group stonith-ng event 14: node 1 pid 1260 joined via cpg_join
Aug 27 08:46:56 [1327] NODE2 stonith-ng:     info: crm_get_peer:      Created entry 7da09b35-a6ad-4694-ab1f-f7572dec96be/0x5592dc152db0 for node NODE1/1 (2 total)
Aug 27 08:46:56 [1327] NODE2 stonith-ng:     info: crm_get_peer:      Node 1 is now known as NODE1
Aug 27 08:46:56 [1327] NODE2 stonith-ng:  warning: crm_update_peer_uname:     Node names with capitals are discouraged, consider changing 'NODE1'
Aug 27 08:46:56 [1327] NODE2 stonith-ng:     info: crm_get_peer:      Node 1 has uuid 1
Aug 27 08:46:56 [1327] NODE2 stonith-ng:     info: pcmk_cpg_membership:       Group stonith-ng event 14: NODE1 (node 1 pid 1260) is member
Aug 27 08:46:56 [1327] NODE2 stonith-ng:     info: crm_update_peer_proc:      pcmk_cpg_membership: Node NODE1[1] - corosync-cpg is now online
Aug 27 08:46:56 [1327] NODE2 stonith-ng:   notice: crm_update_peer_state_iter:        Node NODE1 state is now member | nodeid=1 previous=unknown source=crm_update_peer_proc
Aug 27 08:46:56 [1327] NODE2 stonith-ng:     info: pcmk_cpg_membership:       Group stonith-ng event 14: NODE2 (node 2 pid 1327) is member
Aug 27 08:46:57 [1326] NODE2        cib:     info: pcmk_cpg_membership:       Group cib event 14: node 1 pid 1259 joined via cpg_join
Aug 27 08:46:57 [1326] NODE2        cib:     info: crm_get_peer:      Created entry 9b5a2951-192e-49bd-b5eb-eab2bd4873db/0x563b74302580 for node NODE1/1 (2 total)
Aug 27 08:46:57 [1326] NODE2        cib:     info: crm_get_peer:      Node 1 is now known as NODE1
Aug 27 08:46:57 [1326] NODE2        cib:  warning: crm_update_peer_uname:     Node names with capitals are discouraged, consider changing 'NODE1'
Aug 27 08:46:57 [1326] NODE2        cib:     info: crm_get_peer:      Node 1 has uuid 1
Aug 27 08:46:57 [1326] NODE2        cib:     info: pcmk_cpg_membership:       Group cib event 14: NODE1 (node 1 pid 1259) is member
Aug 27 08:46:57 [1326] NODE2        cib:     info: crm_update_peer_proc:      pcmk_cpg_membership: Node NODE1[1] - corosync-cpg is now online
Aug 27 08:46:57 [1326] NODE2        cib:   notice: crm_update_peer_state_iter:        Node NODE1 state is now member | nodeid=1 previous=unknown source=crm_update_peer_proc
Aug 27 08:46:57 [1326] NODE2        cib:     info: pcmk_cpg_membership:       Group cib event 14: NODE2 (node 2 pid 1326) is member
Aug 27 08:46:57 [1331] NODE2       crmd:     info: pcmk_cpg_membership:       Group crmd event 14: node 1 pid 1264 joined via cpg_join
Aug 27 08:46:57 [1331] NODE2       crmd:     info: pcmk_cpg_membership:       Group crmd event 14: NODE1 (node 1 pid 1264) is member
Aug 27 08:46:57 [1331] NODE2       crmd:     info: crm_update_peer_proc:      pcmk_cpg_membership: Node NODE1[1] - corosync-cpg is now online
Aug 27 08:46:57 [1331] NODE2       crmd:     info: peer_update_callback:      Client NODE1/peer now has status [online] (DC=true, changed=4000000)
Aug 27 08:46:57 [1331] NODE2       crmd:     info: te_trigger_stonith_history_sync:   Fence history will be synchronized cluster-wide within 5 seconds
Aug 27 08:46:57 [1331] NODE2       crmd:     info: pcmk_cpg_membership:       Group crmd event 14: NODE2 (node 2 pid 1331) is member
Aug 27 08:46:57 [1331] NODE2       crmd:   notice: do_state_transition:       State transition S_IDLE -> S_INTEGRATION | input=I_NODE_JOIN cause=C_FSA_INTERNAL origin=peer_update_callback
Aug 27 08:46:57 [1331] NODE2       crmd:     info: do_dc_join_offer_one:      Making join-1 offers to any unconfirmed nodes because an unknown node joined
Aug 27 08:46:57 [1331] NODE2       crmd:     info: join_make_offer:   Not making join-1 offer to already known node NODE2 (confirmed)
Aug 27 08:46:57 [1331] NODE2       crmd:     info: join_make_offer:   Sending join-1 offer to NODE1
Aug 27 08:46:57 [1326] NODE2        cib:     info: cib_process_request:       Forwarding cib_modify operation for section status to all (origin=local/crmd/1376)
Aug 27 08:46:57 [1331] NODE2       crmd:     info: abort_transition_graph:    Transition aborted: Peer Halt | source=do_te_invoke:150 complete=true
Aug 27 08:46:57 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.13 2
Aug 27 08:46:57 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.434.14 (null)
Aug 27 08:46:57 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @num_updates=14
Aug 27 08:46:57 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib/status/node_state[@id='1']:  @crmd=online, @crm-debug-origin=peer_update_callback
Aug 27 08:46:57 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section status: OK (rc=0, origin=NODE2/crmd/1376, version=0.434.14)
Aug 27 08:46:57 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_delete operation for section //node_state[@uname='NODE1']/transient_attributes: OK (rc=0, origin=NODE1/attrd/2, version=0.434.14)
Aug 27 08:46:57 [1329] NODE2      attrd:     info: election_count_vote:       election-attrd round 1 (owner node ID 1) pass: vote from NODE1 (Uptime)
Aug 27 08:46:57 [1329] NODE2      attrd:     info: attrd_peer_update: Setting #attrd-protocol[NODE1]: (null) -> 2 from NODE1
Aug 27 08:46:57 [1329] NODE2      attrd:     info: election_check:    election-attrd won by local node
Aug 27 08:46:57 [1329] NODE2      attrd:   notice: attrd_declare_winner:      Recorded local node as attribute writer (was unset)
Aug 27 08:46:57 [1329] NODE2      attrd:     info: write_attribute:   Sent CIB request 168 with 1 change for fail-count-SERVICE1#start_0 (id n/a, set n/a)
Aug 27 08:46:57 [1329] NODE2      attrd:     info: write_attribute:   Processed 2 private changes for #attrd-protocol, id=n/a, set=n/a
Aug 27 08:46:57 [1329] NODE2      attrd:     info: write_attribute:   Sent CIB request 169 with 1 change for fail-count-SERVICE4#start_0 (id n/a, set n/a)
Aug 27 08:46:57 [1329] NODE2      attrd:     info: write_attribute:   Sent CIB request 170 with 1 change for last-failure-SERVICE1#start_0 (id n/a, set n/a)
Aug 27 08:46:57 [1329] NODE2      attrd:     info: write_attribute:   Sent CIB request 171 with 1 change for last-failure-SERVICE4#start_0 (id n/a, set n/a)
Aug 27 08:46:57 [1329] NODE2      attrd:     info: write_attribute:   Sent CIB request 172 with 1 change for last-failure-SERVICE1#stop_0 (id n/a, set n/a)
Aug 27 08:46:57 [1326] NODE2        cib:     info: cib_process_request:       Forwarding cib_modify operation for section status to all (origin=local/attrd/168)
Aug 27 08:46:57 [1326] NODE2        cib:     info: cib_process_request:       Forwarding cib_modify operation for section status to all (origin=local/attrd/169)
Aug 27 08:46:57 [1326] NODE2        cib:     info: cib_process_request:       Forwarding cib_modify operation for section status to all (origin=local/attrd/170)
Aug 27 08:46:57 [1326] NODE2        cib:     info: cib_process_request:       Forwarding cib_modify operation for section status to all (origin=local/attrd/171)
Aug 27 08:46:57 [1326] NODE2        cib:     info: cib_process_request:       Forwarding cib_modify operation for section status to all (origin=local/attrd/172)
Aug 27 08:46:57 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.14 2
Aug 27 08:46:57 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.434.15 (null)
Aug 27 08:46:57 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @num_updates=15
Aug 27 08:46:57 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section status: OK (rc=0, origin=NODE2/attrd/168, version=0.434.15)
Aug 27 08:46:57 [1329] NODE2      attrd:     info: attrd_cib_callback:        CIB update 168 result for fail-count-SERVICE1#start_0: OK | rc=0
Aug 27 08:46:57 [1329] NODE2      attrd:     info: attrd_cib_callback:        * fail-count-SERVICE1#start_0[NODE2]=(null)
Aug 27 08:46:57 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.15 2
Aug 27 08:46:57 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.434.16 (null)
Aug 27 08:46:57 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @num_updates=16
Aug 27 08:46:57 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section status: OK (rc=0, origin=NODE2/attrd/169, version=0.434.16)
Aug 27 08:46:57 [1329] NODE2      attrd:     info: attrd_cib_callback:        CIB update 169 result for fail-count-SERVICE4#start_0: OK | rc=0
Aug 27 08:46:57 [1329] NODE2      attrd:     info: attrd_cib_callback:        * fail-count-SERVICE4#start_0[NODE2]=(null)
Aug 27 08:46:57 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.16 2
Aug 27 08:46:57 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.434.17 (null)
Aug 27 08:46:57 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @num_updates=17
Aug 27 08:46:57 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section status: OK (rc=0, origin=NODE2/attrd/170, version=0.434.17)
Aug 27 08:46:57 [1329] NODE2      attrd:     info: attrd_cib_callback:        CIB update 170 result for last-failure-SERVICE1#start_0: OK | rc=0
Aug 27 08:46:57 [1329] NODE2      attrd:     info: attrd_cib_callback:        * last-failure-SERVICE1#start_0[NODE2]=(null)
Aug 27 08:46:57 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.17 2
Aug 27 08:46:57 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.434.18 (null)
Aug 27 08:46:57 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @num_updates=18
Aug 27 08:46:57 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section status: OK (rc=0, origin=NODE2/attrd/171, version=0.434.18)
Aug 27 08:46:57 [1329] NODE2      attrd:     info: attrd_cib_callback:        CIB update 171 result for last-failure-SERVICE4#start_0: OK | rc=0
Aug 27 08:46:57 [1329] NODE2      attrd:     info: attrd_cib_callback:        * last-failure-SERVICE4#start_0[NODE2]=(null)
Aug 27 08:46:57 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.18 2
Aug 27 08:46:57 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.434.19 (null)
Aug 27 08:46:57 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @num_updates=19
Aug 27 08:46:57 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section status: OK (rc=0, origin=NODE2/attrd/172, version=0.434.19)
Aug 27 08:46:57 [1329] NODE2      attrd:     info: attrd_cib_callback:        CIB update 172 result for last-failure-SERVICE1#stop_0: OK | rc=0
Aug 27 08:46:57 [1329] NODE2      attrd:     info: attrd_cib_callback:        * last-failure-SERVICE1#stop_0[NODE2]=(null)
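
# NOTE: the attrd lines just above rewrite failure attributes for SERVICE1 and
# SERVICE4 (fail-count-*#start_0, last-failure-*), suggesting both services have
# failed to start on NODE2 at some point; the values written here are (null), so
# they may already be cleared. Current fail counts can be listed, and stale ones
# cleaned up, with something like (assuming pcs, as used for the rest of the
# cluster):

    # crm_mon --one-shot --failcounts
    # pcs resource cleanup SERVICE1
    # pcs resource cleanup SERVICE4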
Aug 27 08:46:57 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section nodes: OK (rc=0, origin=NODE1/crmd/5, version=0.434.19)
Aug 27 08:46:58 [1331] NODE2       crmd:     info: join_make_offer:   Sending join-1 offer to NODE1
Aug 27 08:46:58 [1331] NODE2       crmd:     info: join_make_offer:   Sending join-1 offer to NODE2
Aug 27 08:46:58 [1331] NODE2       crmd:     info: abort_transition_graph:    Transition aborted: Node join | source=do_dc_join_offer_one:269 complete=true
Aug 27 08:46:58 [1331] NODE2       crmd:     info: do_dc_join_offer_one:      Waiting on join-1 requests from 2 outstanding nodes
Aug 27 08:46:59 [1331] NODE2       crmd:     info: crm_update_peer_expected:  do_dc_join_filter_offer: Node NODE1[1] - expected state is now member (was down)
Aug 27 08:46:59 [1331] NODE2       crmd:     info: do_state_transition:       State transition S_INTEGRATION -> S_FINALIZE_JOIN | input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_process_request:       Forwarding cib_modify operation for section nodes to all (origin=local/crmd/1379)
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_process_request:       Forwarding cib_modify operation for section nodes to all (origin=local/crmd/1380)
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_process_replace:       Digest matched on replace from NODE2: a73e34b494d0ff4a4eadead9620ea5b4
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_process_replace:       Replaced 0.434.19 with 0.434.19 from NODE2
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_replace operation for section 'all': OK (rc=0, origin=NODE2/crmd/1378, version=0.434.19)
Aug 27 08:46:59 [1331] NODE2       crmd:     info: do_state_transition:       State transition S_FINALIZE_JOIN -> S_INTEGRATION | input=I_JOIN_REQUEST cause=C_HA_MESSAGE origin=route_message
Aug 27 08:46:59 [1331] NODE2       crmd:     info: join_make_offer:   Sending join-1 offer to NODE1
Aug 27 08:46:59 [1331] NODE2       crmd:     info: join_make_offer:   Sending join-1 offer to NODE2
Aug 27 08:46:59 [1331] NODE2       crmd:     info: abort_transition_graph:    Transition aborted: Node join | source=do_dc_join_offer_one:269 complete=true
Aug 27 08:46:59 [1331] NODE2       crmd:     info: do_dc_join_offer_one:      Waiting on join-1 requests from 2 outstanding nodes
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section nodes: OK (rc=0, origin=NODE2/crmd/1379, version=0.434.19)
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section nodes: OK (rc=0, origin=NODE2/crmd/1380, version=0.434.19)
Aug 27 08:46:59 [1331] NODE2       crmd:     info: do_log:    Input I_JOIN_RESULT received in state S_INTEGRATION from route_message
Aug 27 08:46:59 [1331] NODE2       crmd:     info: do_dc_join_ack:    Ignoring out-of-sequence join-1 confirmation from NODE2 (currently welcomed not finalized)
Aug 27 08:46:59 [1331] NODE2       crmd:     info: do_log:    Input I_JOIN_RESULT received in state S_INTEGRATION from route_message
Aug 27 08:46:59 [1331] NODE2       crmd:     info: do_dc_join_ack:    Ignoring out-of-sequence join-1 confirmation from NODE1 (currently welcomed not finalized)
Aug 27 08:46:59 [1331] NODE2       crmd:     info: do_state_transition:       State transition S_INTEGRATION -> S_FINALIZE_JOIN | input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_process_request:       Forwarding cib_modify operation for section nodes to all (origin=local/crmd/1383)
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_process_request:       Forwarding cib_modify operation for section nodes to all (origin=local/crmd/1384)
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_process_replace:       Digest matched on replace from NODE2: a73e34b494d0ff4a4eadead9620ea5b4
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_process_replace:       Replaced 0.434.19 with 0.434.19 from NODE2
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_replace operation for section 'all': OK (rc=0, origin=NODE2/crmd/1382, version=0.434.19)
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section nodes: OK (rc=0, origin=NODE2/crmd/1383, version=0.434.19)
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section nodes: OK (rc=0, origin=NODE2/crmd/1384, version=0.434.19)
Aug 27 08:46:59 [1331] NODE2       crmd:     info: controld_delete_node_state:        Deleting resource history for node NODE1 (via CIB call 1385) | xpath=//node_state[@uname='NODE1']/lrm
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_process_request:       Forwarding cib_delete operation for section //node_state[@uname='NODE1']/lrm to all (origin=local/crmd/1385)
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_process_request:       Forwarding cib_modify operation for section status to all (origin=local/crmd/1386)
Aug 27 08:46:59 [1331] NODE2       crmd:     info: controld_delete_node_state:        Deleting resource history for node NODE2 (via CIB call 1387) | xpath=//node_state[@uname='NODE2']/lrm
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_process_request:       Forwarding cib_delete operation for section //node_state[@uname='NODE2']/lrm to all (origin=local/crmd/1387)
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_process_request:       Forwarding cib_modify operation for section status to all (origin=local/crmd/1388)
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.19 2
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.434.20 (null)
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    -- /cib/status/node_state[@id='1']/lrm[@id='1']
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @num_updates=20
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_delete operation for section //node_state[@uname='NODE1']/lrm: OK (rc=0, origin=NODE2/crmd/1385, version=0.434.20)
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.20 2
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.434.21 (null)
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @num_updates=21
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib/status/node_state[@id='1']:  @crm-debug-origin=do_lrm_query_internal, @join=member, @expected=member
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    ++ /cib/status/node_state[@id='1']:  <lrm id="1"/>
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    ++                                     <lrm_resources/>
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    ++                                   </lrm>
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section status: OK (rc=0, origin=NODE2/crmd/1386, version=0.434.21)
Aug 27 08:46:59 [1331] NODE2       crmd:     info: do_state_transition:       State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE | input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state
Aug 27 08:46:59 [1331] NODE2       crmd:     info: abort_transition_graph:    Transition aborted: Peer Cancelled | source=do_te_invoke:143 complete=true
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.21 2
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.434.22 (null)
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    -- /cib/status/node_state[@id='2']/lrm[@id='2']
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @num_updates=22
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_delete operation for section //node_state[@uname='NODE2']/lrm: OK (rc=0, origin=NODE2/crmd/1387, version=0.434.22)
Aug 27 08:46:59 [1331] NODE2       crmd:     info: abort_transition_graph:    Transition aborted by deletion of lrm[@id='2']: Resource state removal | cib=0.434.22 source=abort_unless_down:370 path=/cib/status/node_state[@id='2']/lrm[@id='2'] complete=true
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.22 2
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.434.23 (null)
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @num_updates=23
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib/status/node_state[@id='2']:  @crm-debug-origin=do_lrm_query_internal
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    ++ /cib/status/node_state[@id='2']:  <lrm id="2"/>
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    ++                                     <lrm_resources>
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    ++                                       <lrm_resource id="SERVICE4" type="service4" class="systemd">
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    ++                                         <lrm_rsc_op id="SERVICE4_last_failure_0" operation_key="SERVICE4_monitor_0" operation="monitor" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-key="7:1079:7:96431d62-66ac-46b4-9fbc-1cd89548a77d" transition-magic="0:0;7:1079:7:96431d62-66ac-46b4-9fbc-1cd89548a77d" exit-reason="" on_node="NODE2" call-id="97" rc-code="0" op-status="0" interval="0" last-ru
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    ++                                         <lrm_rsc_op id="SERVICE4_last_0" operation_key="SERVICE4_start_0" operation="start" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-key="33:226:0:d10dd5e7-af4d-4bba-a226-516824f8f60e" transition-magic="0:0;33:226:0:d10dd5e7-af4d-4bba-a226-516824f8f60e" exit-reason="" on_node="NODE2" call-id="385" rc-code="0" op-status="0" interval="0" last-run="15984551
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    ++                                         <lrm_rsc_op id="SERVICE4_monitor_5000" operation_key="SERVICE4_monitor_5000" operation="monitor" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-key="34:226:0:d10dd5e7-af4d-4bba-a226-516824f8f60e" transition-magic="0:0;34:226:0:d10dd5e7-af4d-4bba-a226-516824f8f60e" exit-reason="" on_node="NODE2" call-id="386" rc-code="0" op-status="0" interval="5000" la
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    ++                                       </lrm_resource>
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    ++                                       <lrm_resource id="SERVICE9" type="service9" class="systemd">
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    ++                                         <lrm_rsc_op id="SERVICE9_last_0" operation_key="SERVICE9_start_0" operation="start" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-key="44:296:0:d10dd5e7-af4d-4bba-a226-516824f8f60e" transition-magic="0:0;44:296:0:d10dd5e7-af4d-4bba-a226-516824f8f60e" exit-reason="" on_node="NODE2" call-id="398" rc-code="0" op-status="0" interval="0" last-run="1598510455" last-rc-change
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    ++                                         <lrm_rsc_op id="SERVICE9_monitor_5000" operation_key="SERVICE9_monitor_5000" operation="monitor" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-key="46:297:0:d10dd5e7-af4d-4bba-a226-516824f8f60e" transition-magic="0:0;46:297:0:d10dd5e7-af4d-4bba-a226-516824f8f60e" exit-reason="" on_node="NODE2" call-id="400" rc-code="0" op-status="0" interval="5000" last-rc-change="1598
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    ++                                       </lrm_resource>
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    ++                                       <lrm_resource id="SERVICE13" type="service13" class="systemd">
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    ++                                         <lrm_rsc_op id="SERVICE13_last_0" operation_key="SERVICE13_start_0" operation="start" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-key="48:0:0:d10dd5e7-af4d-4bba-a226-516824f8f60e" transition-magic="0:0;48:0:0:d10dd5e7-af4d-4bba-a226-516824f8f60e" exit-reason="" on_node="NODE2" call-id="108" rc-code="0" op-status="0" interval="0" last-run="1598436624" last-rc-change="159843
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    ++                                         <lrm_rsc_op id="SERVICE13_monitor_5000" operation_key="SERVICE13_monitor_5000" operation="monitor" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-key="51:1:0:d10dd5e7-af4d-4bba-a226-516824f8f60e" transition-magic="0:0;51:1:0:d10dd5e7-af4d-4bba-a226-516824f8f60e" exit-reason="" on_node="NODE2" call-id="110" rc-code="0" op-status="0" interval="5000" last-rc-change="1598436626"
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    ++                                       </lrm_resource>
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    ++                                       <lrm_resource id="SERVICE11" type="service11" class="systemd">
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    ++                                         <lrm_rsc_op id="SERVICE11_last_0" operation_key="SERVICE11_start_0" operation="start" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-key="43:1079:0:96431d62-66ac-46b4-9fbc-1cd89548a77d" transition-magic="0:0;43:1079:0:96431d62-66ac-46b4-9fbc-1cd89548a77d" exit-reason="" on_node="NODE2" call-id="104" rc-code="0" op-status="0" interval="0" last-run="1598436620" last-rc-ch
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    ++                                         <lrm_rsc_op id="SERVICE11_monitor_5000" operation_key="SERVICE11_monitor_5000" operation="monitor" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-key="45:0:0:d10dd5e7-af4d-4bba-a226-516824f8f60e" transition-magic="0:0;45:0:0:d10dd5e7-af4d-4bba-a226-516824f8f60e" exit-reason="" on_node="NODE2" call-id="106" rc-code="0" op-status="0" interval="5000" last-rc-change="159843
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    ++                                       </lrm_resource>
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    ++                                       <lrm_resource id="SERVICE6" type="service6" class="systemd">
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    ++                                         <lrm_rsc_op id="SERVICE6_last_0" operation_key="SERVICE6_start_0" operation="start" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-key="46:1078:0:96431d62-66ac-46b4-9fbc-1cd89548a77d" transition-magic="0:0;46:1078:0:96431d62-66ac-46b4-9fbc-1cd89548a77d" exit-reason="" on_node="NODE2" call-id="85" rc-code="0" op-status="0" interval="0" last-run="1598436614"
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    ++                                         <lrm_rsc_op id="SERVICE6_monitor_5000" operation_key="SERVICE6_monitor_5000" operation="monitor" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-key="30:1079:0:96431d62-66ac-46b4-9fbc-1cd89548a77d" transition-magic="0:0;30:1079:0:96431d62-66ac-46b4-9fbc-1cd89548a77d" exit-reason="" on_node="NODE2" call-id="99" rc-code="0" op-status="0" interval="5000" last-
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    ++                                       </lrm_resource>
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    ++                                       <lrm_resource id="SERVICE1" type="service1" class="systemd">
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    ++                                         <lrm_rsc_op id="SERVICE1_last_failure_0" operation_key="SERVICE1_monitor_0" operation="monitor" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-key="19:296:7:d10dd5e7-af4d-4bba-a226-516824f8f60e" transition-magic="0:0;19:296:7:d10dd5e7-af4d-4bba-a226-516824f8f60e" exit-reason="" on_node="NODE2" call-id="397" rc-code="0" op-status="0" interval="0" last-run="1598510455" la
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    ++                                         <lrm_rsc_op id="SERVICE1_last_0" operation_key="SERVICE1_start_0" operation="start" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-key="30:328:0:d10dd5e7-af4d-4bba-a226-516824f8f60e" transition-magic="0:0;30:328:0:d10dd5e7-af4d-4bba-a226-516824f8f60e" exit-reason="" on_node="NODE2" call-id="425" rc-code="0" op-status="0" interval="0" last-run="1598510531" last-rc-change
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    ++                                         <lrm_rsc_op id="SERVICE1_monitor_5000" operation_key="SERVICE1_monitor_5000" operation="monitor" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-key="20:328:0:d10dd5e7-af4d-4bba-a226-516824f8f60e" transition-magic="0:0;20:328:0:d10dd5e7-af4d-4bba-a226-516824f8f60e" exit-reason="" on_node="NODE2" call-id="426" rc-code="0" op-status="0" interval="5000" last-rc-change="1598
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    ++                                       </lrm_resource>
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    ++                                       <lrm_resource id="SERVICE2" type="service2" class="systemd">
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    ++                                         <lrm_rsc_op id="SERVICE2_last_0" operation_key="SERVICE2_start_0" operation="start" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-key="29:154:0:d10dd5e7-af4d-4bba-a226-516824f8f60e" transition-magic="0:0;29:154:0:d10dd5e7-af4d-4bba-a226-516824f8f60e" exit-reason="" on_node="NODE2" call-id="249" rc-code="0" op-status="0" interval="0" last-run="15
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    ++                                         <lrm_rsc_op id="SERVICE2_monitor_5000" operation_key="SERVICE2_monitor_5000" operation="monitor" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-key="30:154:0:d10dd5e7-af4d-4bba-a226-516824f8f60e" transition-magic="0:0;30:154:0:d10dd5e7-af4d-4bba-a226-516824f8f60e" exit-reason="" on_node="NODE2" call-id="250" rc-code="0" op-status="0" interval="50
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    ++                                       </lrm_resource>
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    ++                                       <lrm_resource id="VIRTUALIP" type="IPaddr2" class="ocf" provider="heartbeat">
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    ++                                         <lrm_rsc_op id="VIRTUALIP_last_0" operation_key="VIRTUALIP_start_0" operation="start" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-key="21:1078:0:96431d62-66ac-46b4-9fbc-1cd89548a77d" transition-magic="0:0;21:1078:0:96431d62-66ac-46b4-9fbc-1cd89548a77d" exit-reason="" on_node="NODE2" call-id="76" rc-code="0" op-status="0" interval="0" last-run="1598436612" last-rc-cha
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    ++                                         <lrm_rsc_op id="VIRTUALIP_monitor_5000" operation_key="VIRTUALIP_monitor_5000" operation="monitor" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-key="22:1078:0:96431d62-66ac-46b4-9fbc-1cd89548a77d" transition-magic="0:0;22:1078:0:96431d62-66ac-46b4-9fbc-1cd89548a77d" exit-reason="" on_node="NODE2" call-id="77" rc-code="0" op-status="0" interval="5000" last-rc-change="1
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    ++                                       </lrm_resource>
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    ++                                       <lrm_resource id="SERVICE3" type="service3" class="systemd">
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    ++                                         <lrm_rsc_op id="SERVICE3_last_0" operation_key="SERVICE3_start_0" operation="start" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-key="37:1078:0:96431d62-66ac-46b4-9fbc-1cd89548a77d" transition-magic="0:0;37:1078:0:96431d62-66ac-46b4-9fbc-1cd89548a77d" exit-reason="" on_node="NODE2" call-id="82" rc-code="0" op-status="0" interval="0" l
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    ++                                         <lrm_rsc_op id="SERVICE3_monitor_5000" operation_key="SERVICE3_monitor_5000" operation="monitor" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-key="38:1078:0:96431d62-66ac-46b4-9fbc-1cd89548a77d" transition-magic="0:0;38:1078:0:96431d62-66ac-46b4-9fbc-1cd89548a77d" exit-reason="" on_node="NODE2" call-id="90" rc-code="0" op-status="0" i
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    ++                                       </lrm_resource>
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    ++                                       <lrm_resource id="SERVICE10" type="service10" class="systemd">
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    ++                                         <lrm_rsc_op id="SERVICE10_last_0" operation_key="SERVICE10_start_0" operation="start" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-key="58:1078:0:96431d62-66ac-46b4-9fbc-1cd89548a77d" transition-magic="0:0;58:1078:0:96431d62-66ac-46b4-9fbc-1cd89548a77d" exit-reason="" on_node="NODE2" call-id="91" rc-code="0" op-status="0" interval="0" last-run="1598436616" last-rc-chang
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    ++                                         <lrm_rsc_op id="SERVICE10_monitor_5000" operation_key="SERVICE10_monitor_5000" operation="monitor" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-key="42:1079:0:96431d62-66ac-46b4-9fbc-1cd89548a77d" transition-magic="0:0;42:1079:0:96431d62-66ac-46b4-9fbc-1cd89548a77d" exit-reason="" on_node="NODE2" call-id="103" rc-code="0" op-status="0" interval="5000" last-rc-change="15
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    ++                                       </lrm_resource>
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    ++                                       <lrm_resource id="SERVICE8" type="service8" class="systemd">
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    ++                                         <lrm_rsc_op id="SERVICE8_last_0" operation_key="SERVICE8_start_0" operation="start" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-key="52:1078:0:96431d62-66ac-46b4-9fbc-1cd89548a77d" transition-magic="0:0;52:1078:0:96431d62-66ac-46b4-9fbc-1cd89548a77d" exit-reason="" on_node="NODE2" call-id="87" rc-code="0" op-status="0" interval="0" last-run="1598436615" las
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    ++                                         <lrm_rsc_op id="SERVICE8_monitor_5000" operation_key="SERVICE8_monitor_5000" operation="monitor" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-key="36:1079:0:96431d62-66ac-46b4-9fbc-1cd89548a77d" transition-magic="0:0;36:1079:0:96431d62-66ac-46b4-9fbc-1cd89548a77d" exit-reason="" on_node="NODE2" call-id="101" rc-code="0" op-status="0" interval="5000" last-rc-
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    ++                                       </lrm_resource>
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    ++                                       <lrm_resource id="SERVICE7" type="service7" class="systemd">
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    ++                                         <lrm_rsc_op id="SERVICE7_last_0" operation_key="SERVICE7_start_0" operation="start" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-key="49:1078:0:96431d62-66ac-46b4-9fbc-1cd89548a77d" transition-magic="0:0;49:1078:0:96431d62-66ac-46b4-9fbc-1cd89548a77d" exit-reason="" on_node="NODE2" call-id="86" rc-code="0" op-status="0" interval="0" last-run="1598436614" l
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    ++                                         <lrm_rsc_op id="SERVICE7_monitor_5000" operation_key="SERVICE7_monitor_5000" operation="monitor" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-key="33:1079:0:96431d62-66ac-46b4-9fbc-1cd89548a77d" transition-magic="0:0;33:1079:0:96431d62-66ac-46b4-9fbc-1cd89548a77d" exit-reason="" on_node="NODE2" call-id="100" rc-code="0" op-status="0" interval="5000" last-r
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    ++                                       </lrm_resource>
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    ++                                       <lrm_resource id="SERVICE15" type="service15" class="systemd">
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    ++                                         <lrm_rsc_op id="SERVICE15_last_0" operation_key="SERVICE15_start_0" operation="start" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-key="79:1077:0:96431d62-66ac-46b4-9fbc-1cd89548a77d" transition-magic="0:0;79:1077:0:96431d62-66ac-46b4-9fbc-1cd89548a77d" exit-reason="" on_node="NODE2" call-id="73" rc-code="0" op-status="0" interval="0" last-run="1598436522" last-rc-change="159
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    ++                                         <lrm_rsc_op id="SERVICE15_monitor_10000" operation_key="SERVICE15_monitor_10000" operation="monitor" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-key="80:1077:0:96431d62-66ac-46b4-9fbc-1cd89548a77d" transition-magic="0:0;80:1077:0:96431d62-66ac-46b4-9fbc-1cd89548a77d" exit-reason="" on_node="NODE2" call-id="75" rc-code="0" op-status="0" interval="10000" last-rc-change="159843
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    ++                                       </lrm_resource>
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    ++                                       <lrm_resource id="SERVICE14" type="service14" class="systemd">
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    ++                                         <lrm_rsc_op id="SERVICE14_last_0" operation_key="SERVICE14_start_0" operation="start" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-key="71:1077:0:96431d62-66ac-46b4-9fbc-1cd89548a77d" transition-magic="0:0;71:1077:0:96431d62-66ac-46b4-9fbc-1cd89548a77d" exit-reason="" on_node="NODE2" call-id="72" rc-code="0" op-status="0" interval="0" last-run="1598436522" l
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    ++                                         <lrm_rsc_op id="SERVICE14_monitor_10000" operation_key="SERVICE14_monitor_10000" operation="monitor" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-key="72:1077:0:96431d62-66ac-46b4-9fbc-1cd89548a77d" transition-magic="0:0;72:1077:0:96431d62-66ac-46b4-9fbc-1cd89548a77d" exit-reason="" on_node="NODE2" call-id="74" rc-code="0" op-status="0" interval="10000" last
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    ++                                       </lrm_resource>
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    ++                                       <lrm_resource id="SERVICE5" type="service5" class="systemd">
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    ++                                         <lrm_rsc_op id="SERVICE5_last_0" operation_key="SERVICE5_start_0" operation="start" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-key="43:1078:0:96431d62-66ac-46b4-9fbc-1cd89548a77d" transition-magic="0:0;43:1078:0:96431d62-66ac-46b4-9fbc-1cd89548a77d" exit-reason="" on_node="NODE2" call-id="84" rc-code="0" op-status="0" interval="0" last-run="1598436614" last-rc-c
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    ++                                         <lrm_rsc_op id="SERVICE5_monitor_5000" operation_key="SERVICE5_monitor_5000" operation="monitor" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-key="27:1079:0:96431d62-66ac-46b4-9fbc-1cd89548a77d" transition-magic="0:0;27:1079:0:96431d62-66ac-46b4-9fbc-1cd89548a77d" exit-reason="" on_node="NODE2" call-id="98" rc-code="0" op-status="0" interval="5000" last-rc-change=
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    ++                                       </lrm_resource>
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    ++                                       <lrm_resource id="SOURCEIP" type="IPsrcaddr" class="ocf" provider="heartbeat">
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    ++                                         <lrm_rsc_op id="SOURCEIP_last_0" operation_key="SOURCEIP_start_0" operation="start" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-key="24:1078:0:96431d62-66ac-46b4-9fbc-1cd89548a77d" transition-magic="0:0;24:1078:0:96431d62-66ac-46b4-9fbc-1cd89548a77d" exit-reason="" on_node="NODE2" call-id="78" rc-code="0" op-status="0" interval="0" last-run="1598436612" last-rc-chang
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    ++                                         <lrm_rsc_op id="SOURCEIP_monitor_5000" operation_key="SOURCEIP_monitor_5000" operation="monitor" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-key="25:1078:0:96431d62-66ac-46b4-9fbc-1cd89548a77d" transition-magic="0:0;25:1078:0:96431d62-66ac-46b4-9fbc-1cd89548a77d" exit-reason="" on_node="NODE2" call-id="79" rc-code="0" op-status="0" interval="5000" last-rc-change="159
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    ++                                       </lrm_resource>
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    ++                                       <lrm_resource id="SERVICE12" type="service12" class="systemd">
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    ++                                         <lrm_rsc_op id="SERVICE12_last_0" operation_key="SERVICE12_start_0" operation="start" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-key="46:0:0:d10dd5e7-af4d-4bba-a226-516824f8f60e" transition-magic="0:0;46:0:0:d10dd5e7-af4d-4bba-a226-516824f8f60e" exit-reason="" on_node="NODE2" call-id="107" rc-code="0" op-status="0" interval="0" last-run="1598436624" last-rc-
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    ++                                         <lrm_rsc_op id="SERVICE12_monitor_5000" operation_key="SERVICE12_monitor_5000" operation="monitor" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-key="48:1:0:d10dd5e7-af4d-4bba-a226-516824f8f60e" transition-magic="0:0;48:1:0:d10dd5e7-af4d-4bba-a226-516824f8f60e" exit-reason="" on_node="NODE2" call-id="109" rc-code="0" op-status="0" interval="5000" last-rc-change
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    ++                                       </lrm_resource>
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    ++                                     </lrm_resources>
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_perform_op:    ++                                   </lrm>
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section status: OK (rc=0, origin=NODE2/crmd/1388, version=0.434.23)
Aug 27 08:46:59 [1331] NODE2       crmd:     info: abort_transition_graph:    Transition aborted: LRM Refresh | source=process_resource_updates:294 complete=true
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_process_request:       Forwarding cib_modify operation for section nodes to all (origin=local/crmd/1391)
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_process_request:       Forwarding cib_modify operation for section status to all (origin=local/crmd/1392)
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_process_request:       Forwarding cib_modify operation for section cib to all (origin=local/crmd/1393)
Aug 27 08:46:59 [1326] NODE2        cib:     info: cib_file_backup:   Archived previous version as /var/lib/pacemaker/cib/cib-54.raw
Aug 27 08:47:00 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section nodes: OK (rc=0, origin=NODE2/crmd/1391, version=0.434.23)
Aug 27 08:47:00 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.23 2
Aug 27 08:47:00 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.434.24 (null)
Aug 27 08:47:00 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @num_updates=24
Aug 27 08:47:00 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib/status/node_state[@id='1']:  @crm-debug-origin=do_state_transition
Aug 27 08:47:00 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib/status/node_state[@id='2']:  @crm-debug-origin=do_state_transition
Aug 27 08:47:00 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section status: OK (rc=0, origin=NODE2/crmd/1392, version=0.434.24)
Aug 27 08:47:00 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section cib: OK (rc=0, origin=NODE2/crmd/1393, version=0.434.24)
Aug 27 08:47:00 [1326] NODE2        cib:     info: cib_file_write_with_digest:        Wrote version 0.434.0 of the CIB to disk (digest: 5698b0502258efe142e4750fe055e120)
Aug 27 08:47:00 [1326] NODE2        cib:     info: cib_file_write_with_digest:        Reading cluster configuration file /var/lib/pacemaker/cib/cib.xnvdSF (digest: /var/lib/pacemaker/cib/cib.m8Q2QM)
Aug 27 08:47:00 [1326] NODE2        cib:     info: cib_file_backup:   Archived previous version as /var/lib/pacemaker/cib/cib-55.raw
Aug 27 08:47:00 [1326] NODE2        cib:     info: cib_file_write_with_digest:        Wrote version 0.434.0 of the CIB to disk (digest: 6230f319a9bccbd7c4b84b80c2299632)
Aug 27 08:47:00 [1326] NODE2        cib:     info: cib_file_write_with_digest:        Reading cluster configuration file /var/lib/pacemaker/cib/cib.AaHAsG (digest: /var/lib/pacemaker/cib/cib.Zq949N)
Aug 27 08:47:01 [1330] NODE2    pengine:   notice: unpack_config:     On loss of CCM Quorum: Ignore
Aug 27 08:47:01 [1330] NODE2    pengine:     info: determine_online_status:   Node NODE1 is online
Aug 27 08:47:01 [1330] NODE2    pengine:     info: determine_online_status:   Node NODE2 is online
Aug 27 08:47:01 [1330] NODE2    pengine:     info: determine_op_status:       Operation monitor found resource SERVICE4 active on NODE2
Aug 27 08:47:01 [1330] NODE2    pengine:     info: determine_op_status:       Operation monitor found resource SERVICE1 active on NODE2
Aug 27 08:47:01 [1330] NODE2    pengine:     info: unpack_node_loop:  Node 1 is already processed
Aug 27 08:47:01 [1330] NODE2    pengine:     info: unpack_node_loop:  Node 2 is already processed
Aug 27 08:47:01 [1330] NODE2    pengine:     info: unpack_node_loop:  Node 1 is already processed
Aug 27 08:47:01 [1330] NODE2    pengine:     info: unpack_node_loop:  Node 2 is already processed
Aug 27 08:47:01 [1330] NODE2    pengine:     info: group_print:        Resource Group: IPV
Aug 27 08:47:01 [1330] NODE2    pengine:     info: common_print:           VIRTUALIP  (ocf::heartbeat:IPaddr2):       Started NODE2
Aug 27 08:47:01 [1330] NODE2    pengine:     info: common_print:           SOURCEIP   (ocf::heartbeat:IPsrcaddr):      Started NODE2
Aug 27 08:47:01 [1330] NODE2    pengine:     info: common_print:      SERVICE1        (systemd:service1):     Started NODE2
Aug 27 08:47:01 [1330] NODE2    pengine:     info: common_print:      SERVICE2    (systemd:service2): Started NODE2
Aug 27 08:47:01 [1330] NODE2    pengine:     info: common_print:      SERVICE3       (systemd:service3):  Started NODE2
Aug 27 08:47:01 [1330] NODE2    pengine:     info: common_print:      SERVICE4       (systemd:service4):  Started NODE2
Aug 27 08:47:01 [1330] NODE2    pengine:     info: common_print:      SERVICE5      (systemd:service5): Started NODE2
Aug 27 08:47:01 [1330] NODE2    pengine:     info: common_print:      SERVICE6 (systemd:service6):    Started NODE2
Aug 27 08:47:01 [1330] NODE2    pengine:     info: common_print:      SERVICE7  (systemd:service7):      Started NODE2
Aug 27 08:47:01 [1330] NODE2    pengine:     info: common_print:      SERVICE8   (systemd:service8):  Started NODE2
Aug 27 08:47:01 [1330] NODE2    pengine:     info: common_print:      SERVICE9        (systemd:service9):   Started NODE2
Aug 27 08:47:01 [1330] NODE2    pengine:     info: common_print:      SERVICE10        (systemd:service10):        Started NODE2
Aug 27 08:47:01 [1330] NODE2    pengine:     info: common_print:      SERVICE11       (systemd:service11):       Started NODE2
Aug 27 08:47:01 [1330] NODE2    pengine:     info: common_print:      SERVICE12   (systemd:service12):       Started NODE2
Aug 27 08:47:01 [1330] NODE2    pengine:     info: common_print:      SERVICE13  (systemd:service13):       Started NODE2
Aug 27 08:47:01 [1330] NODE2    pengine:     info: clone_print:        Clone Set: SERVICE14-clone [SERVICE14]
Aug 27 08:47:01 [1330] NODE2    pengine:     info: short_print:            Started: [ NODE2 ]
Aug 27 08:47:01 [1330] NODE2    pengine:     info: short_print:            Stopped: [ NODE1 ]
Aug 27 08:47:01 [1330] NODE2    pengine:     info: clone_print:        Clone Set: SERVICE15-clone [SERVICE15]
Aug 27 08:47:01 [1330] NODE2    pengine:     info: short_print:            Started: [ NODE2 ]
Aug 27 08:47:01 [1330] NODE2    pengine:     info: short_print:            Stopped: [ NODE1 ]
Aug 27 08:47:01 [1330] NODE2    pengine:     info: RecurringOp:        Start recurring monitor (10s) for SERVICE14:1 on NODE1
Aug 27 08:47:01 [1330] NODE2    pengine:     info: RecurringOp:        Start recurring monitor (10s) for SERVICE15:1 on NODE1
Aug 27 08:47:01 [1330] NODE2    pengine:     info: LogActions:        Leave   VIRTUALIP       (Started NODE2)
Aug 27 08:47:01 [1330] NODE2    pengine:     info: LogActions:        Leave   SOURCEIP        (Started NODE2)
Aug 27 08:47:01 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE1        (Started NODE2)
Aug 27 08:47:01 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE2    (Started NODE2)
Aug 27 08:47:01 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE3       (Started NODE2)
Aug 27 08:47:01 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE4       (Started NODE2)
Aug 27 08:47:01 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE5      (Started NODE2)
Aug 27 08:47:01 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE6 (Started NODE2)
Aug 27 08:47:01 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE7  (Started NODE2)
Aug 27 08:47:01 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE8   (Started NODE2)
Aug 27 08:47:01 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE9        (Started NODE2)
Aug 27 08:47:01 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE10        (Started NODE2)
Aug 27 08:47:01 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE11       (Started NODE2)
Aug 27 08:47:01 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE12   (Started NODE2)
Aug 27 08:47:01 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE13  (Started NODE2)
Aug 27 08:47:01 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE14:0        (Started NODE2)
Aug 27 08:47:01 [1330] NODE2    pengine:   notice: LogAction:  * Start      SERVICE14:1         (            NODE1 )
Aug 27 08:47:01 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE15:0 (Started NODE2)
Aug 27 08:47:01 [1330] NODE2    pengine:   notice: LogAction:  * Start      SERVICE15:1                  (            NODE1 )
Aug 27 08:47:01 [1330] NODE2    pengine:   notice: process_pe_message:        Calculated transition 333, saving inputs in /var/lib/pacemaker/pengine/pe-input-1510.bz2
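
The policy engine has just calculated transition 333 and saved its inputs to pe-input-1510.bz2. If that file is still present on NODE2, the same placement decisions can be replayed offline with crm_simulate (a sketch; the path is taken from the log line above):

    # crm_simulate --simulate --xml-file /var/lib/pacemaker/pengine/pe-input-1510.bz2

This prints the cluster status the scheduler saw and the actions it decided on, including the probe (monitor_0) operations it is about to order on the rejoining node NODE1.
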
Aug 27 08:47:01 [1331] NODE2       crmd:     info: do_state_transition:       State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE | input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response
Aug 27 08:47:01 [1331] NODE2       crmd:     info: do_te_invoke:      Processing graph 333 (ref=pe_calc-dc-1598510821-982) derived from /var/lib/pacemaker/pengine/pe-input-1510.bz2
Aug 27 08:47:01 [1331] NODE2       crmd:   notice: te_rsc_command:    Initiating monitor operation VIRTUALIP_monitor_0 on NODE1 | action 18
Aug 27 08:47:01 [1331] NODE2       crmd:   notice: te_rsc_command:    Initiating monitor operation SOURCEIP_monitor_0 on NODE1 | action 19
Aug 27 08:47:01 [1331] NODE2       crmd:   notice: te_rsc_command:    Initiating monitor operation SERVICE1_monitor_0 on NODE1 | action 20
Aug 27 08:47:01 [1331] NODE2       crmd:   notice: te_rsc_command:    Initiating monitor operation SERVICE2_monitor_0 on NODE1 | action 21
Aug 27 08:47:01 [1331] NODE2       crmd:   notice: te_rsc_command:    Initiating monitor operation SERVICE3_monitor_0 on NODE1 | action 22
Aug 27 08:47:01 [1331] NODE2       crmd:   notice: te_rsc_command:    Initiating monitor operation SERVICE4_monitor_0 on NODE1 | action 23
Aug 27 08:47:01 [1331] NODE2       crmd:   notice: te_rsc_command:    Initiating monitor operation SERVICE5_monitor_0 on NODE1 | action 24
Aug 27 08:47:01 [1331] NODE2       crmd:   notice: te_rsc_command:    Initiating monitor operation SERVICE6_monitor_0 on NODE1 | action 25
Aug 27 08:47:01 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.24 2
Aug 27 08:47:01 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.434.25 (null)
Aug 27 08:47:01 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @num_updates=25
Aug 27 08:47:01 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib/status/node_state[@id='1']:  @crm-debug-origin=do_update_resource
Aug 27 08:47:01 [1326] NODE2        cib:     info: cib_perform_op:    ++ /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources:  <lrm_resource id="VIRTUALIP" type="IPaddr2" class="ocf" provider="heartbeat"/>
Aug 27 08:47:01 [1326] NODE2        cib:     info: cib_perform_op:    ++                                                                <lrm_rsc_op id="VIRTUALIP_last_0" operation_key="VIRTUALIP_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.14" transition-key="18:333:7:d10dd5e7-af4d-4bba-a226-516824f8f60e" transition-magic="-1:193;18:333:7:d10dd5e7-af4d-4bba-a226-516824f8f60e" exit-reason="" on_node="NODE1" call-id="-1" rc-code="193" op-status="-1" interval="0"
Aug 27 08:47:01 [1326] NODE2        cib:     info: cib_perform_op:    ++                                                              </lrm_resource>
Aug 27 08:47:01 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section status: OK (rc=0, origin=NODE1/crmd/12, version=0.434.25)
Aug 27 08:47:01 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.25 2
Aug 27 08:47:01 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.434.26 (null)
Aug 27 08:47:01 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @num_updates=26
Aug 27 08:47:01 [1326] NODE2        cib:     info: cib_perform_op:    ++ /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources:  <lrm_resource id="SOURCEIP" type="IPsrcaddr" class="ocf" provider="heartbeat"/>
Aug 27 08:47:01 [1326] NODE2        cib:     info: cib_perform_op:    ++                                                                <lrm_rsc_op id="SOURCEIP_last_0" operation_key="SOURCEIP_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.14" transition-key="19:333:7:d10dd5e7-af4d-4bba-a226-516824f8f60e" transition-magic="-1:193;19:333:7:d10dd5e7-af4d-4bba-a226-516824f8f60e" exit-reason="" on_node="NODE1" call-id="-1" rc-code="193" op-status="-1" interval="0" la
Aug 27 08:47:01 [1326] NODE2        cib:     info: cib_perform_op:    ++                                                              </lrm_resource>
Aug 27 08:47:01 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section status: OK (rc=0, origin=NODE1/crmd/13, version=0.434.26)
Aug 27 08:47:01 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.26 2
Aug 27 08:47:01 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.434.27 (null)
Aug 27 08:47:01 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @num_updates=27
Aug 27 08:47:01 [1326] NODE2        cib:     info: cib_perform_op:    ++ /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources:  <lrm_resource id="SERVICE1" type="service1" class="systemd"/>
Aug 27 08:47:01 [1326] NODE2        cib:     info: cib_perform_op:    ++                                                                <lrm_rsc_op id="SERVICE1_last_0" operation_key="SERVICE1_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.14" transition-key="20:333:7:d10dd5e7-af4d-4bba-a226-516824f8f60e" transition-magic="-1:193;20:333:7:d10dd5e7-af4d-4bba-a226-516824f8f60e" exit-reason="" on_node="NODE1" call-id="-1" rc-code="193" op-status="-1" interval="0" la
Aug 27 08:47:01 [1326] NODE2        cib:     info: cib_perform_op:    ++                                                              </lrm_resource>
Aug 27 08:47:01 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section status: OK (rc=0, origin=NODE1/crmd/14, version=0.434.27)
Aug 27 08:47:01 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.27 2
Aug 27 08:47:01 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.434.28 (null)
Aug 27 08:47:01 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @num_updates=28
Aug 27 08:47:01 [1326] NODE2        cib:     info: cib_perform_op:    ++ /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources:  <lrm_resource id="SERVICE2" type="service2" class="systemd"/>
Aug 27 08:47:01 [1326] NODE2        cib:     info: cib_perform_op:    ++                                                                <lrm_rsc_op id="SERVICE2_last_0" operation_key="SERVICE2_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.14" transition-key="21:333:7:d10dd5e7-af4d-4bba-a226-516824f8f60e" transition-magic="-1:193;21:333:7:d10dd5e7-af4d-4bba-a226-516824f8f60e" exit-reason="" on_node="NODE1" call-id="-1" rc-code="193" op-sta
Aug 27 08:47:01 [1326] NODE2        cib:     info: cib_perform_op:    ++                                                              </lrm_resource>
Aug 27 08:47:01 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section status: OK (rc=0, origin=NODE1/crmd/15, version=0.434.28)
Aug 27 08:47:01 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.28 2
Aug 27 08:47:01 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.434.29 (null)
Aug 27 08:47:01 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @num_updates=29
Aug 27 08:47:01 [1326] NODE2        cib:     info: cib_perform_op:    ++ /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources:  <lrm_resource id="SERVICE3" type="service3" class="systemd"/>
Aug 27 08:47:01 [1326] NODE2        cib:     info: cib_perform_op:    ++                                                                <lrm_rsc_op id="SERVICE3_last_0" operation_key="SERVICE3_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.14" transition-key="22:333:7:d10dd5e7-af4d-4bba-a226-516824f8f60e" transition-magic="-1:193;22:333:7:d10dd5e7-af4d-4bba-a226-516824f8f60e" exit-reason="" on_node="NODE1" call-id="-1" rc-code="1
Aug 27 08:47:01 [1326] NODE2        cib:     info: cib_perform_op:    ++                                                              </lrm_resource>
Aug 27 08:47:01 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section status: OK (rc=0, origin=NODE1/crmd/16, version=0.434.29)
Aug 27 08:47:01 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.29 2
Aug 27 08:47:01 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.434.30 (null)
Aug 27 08:47:01 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @num_updates=30
Aug 27 08:47:01 [1326] NODE2        cib:     info: cib_perform_op:    ++ /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources:  <lrm_resource id="SERVICE4" type="service4" class="systemd"/>
Aug 27 08:47:01 [1326] NODE2        cib:     info: cib_perform_op:    ++                                                                <lrm_rsc_op id="SERVICE4_last_0" operation_key="SERVICE4_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.14" transition-key="23:333:7:d10dd5e7-af4d-4bba-a226-516824f8f60e" transition-magic="-1:193;23:333:7:d10dd5e7-af4d-4bba-a226-516824f8f60e" exit-reason="" on_node="NODE1" call-id="-1" rc-code="193" op-status="-
Aug 27 08:47:01 [1326] NODE2        cib:     info: cib_perform_op:    ++                                                              </lrm_resource>
Aug 27 08:47:01 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section status: OK (rc=0, origin=NODE1/crmd/17, version=0.434.30)
Aug 27 08:47:01 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.30 2
Aug 27 08:47:01 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.434.31 (null)
Aug 27 08:47:01 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @num_updates=31
Aug 27 08:47:01 [1326] NODE2        cib:     info: cib_perform_op:    ++ /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources:  <lrm_resource id="SERVICE5" type="service5" class="systemd"/>
Aug 27 08:47:01 [1326] NODE2        cib:     info: cib_perform_op:    ++                                                                <lrm_rsc_op id="SERVICE5_last_0" operation_key="SERVICE5_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.14" transition-key="24:333:7:d10dd5e7-af4d-4bba-a226-516824f8f60e" transition-magic="-1:193;24:333:7:d10dd5e7-af4d-4bba-a226-516824f8f60e" exit-reason="" on_node="NODE1" call-id="-1" rc-code="193" op-status="-1" interval="0
Aug 27 08:47:01 [1326] NODE2        cib:     info: cib_perform_op:    ++                                                              </lrm_resource>
Aug 27 08:47:01 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section status: OK (rc=0, origin=NODE1/crmd/18, version=0.434.31)
Aug 27 08:47:01 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.31 2
Aug 27 08:47:01 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.434.32 (null)
Aug 27 08:47:01 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @num_updates=32
Aug 27 08:47:01 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='SERVICE1']/lrm_rsc_op[@id='SERVICE1_last_0']:  @transition-magic=0:0;20:333:7:d10dd5e7-af4d-4bba-a226-516824f8f60e, @call-id=13, @rc-code=0, @op-status=0, @exec-time=4
Aug 27 08:47:01 [1326] NODE2        cib:     info: cib_perform_op:    ++ /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='SERVICE1']:  <lrm_rsc_op id="SERVICE1_last_failure_0" operation_key="SERVICE1_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.14" transition-key="20:333:7:d10dd5e7-af4d-4bba-a226-516824f8f60e" transition-magic="0:0;20:333:7:d10dd5e7-af4d-4bba-a226-516824f8f60e" exit-reason="" on_node="NODE1" call-id="13" rc-code="0"
Aug 27 08:47:01 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section status: OK (rc=0, origin=NODE1/crmd/19, version=0.434.32)
Aug 27 08:47:01 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.32 2
Aug 27 08:47:01 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.434.33 (null)
Aug 27 08:47:01 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @num_updates=33
Aug 27 08:47:01 [1326] NODE2        cib:     info: cib_perform_op:    ++ /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources:  <lrm_resource id="SERVICE6" type="service6" class="systemd"/>
Aug 27 08:47:01 [1326] NODE2        cib:     info: cib_perform_op:    ++                                                                <lrm_rsc_op id="SERVICE6_last_0" operation_key="SERVICE6_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.14" transition-key="25:333:7:d10dd5e7-af4d-4bba-a226-516824f8f60e" transition-magic="-1:193;25:333:7:d10dd5e7-af4d-4bba-a226-516824f8f60e" exit-reason="" on_node="NODE1" call-id="-1" rc-code="193" op-status="-1" i
Aug 27 08:47:01 [1326] NODE2        cib:     info: cib_perform_op:    ++                                                              </lrm_resource>
Aug 27 08:47:01 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section status: OK (rc=0, origin=NODE1/crmd/20, version=0.434.33)
Aug 27 08:47:01 [1331] NODE2       crmd:  warning: status_from_rc:    Action 20 (SERVICE1_monitor_0) on NODE1 failed (target: 7 vs. rc: 0): Error
Aug 27 08:47:01 [1331] NODE2       crmd:   notice: abort_transition_graph:    Transition aborted by operation SERVICE1_monitor_0 'modify' on NODE1: Event failed | magic=0:0;20:333:7:d10dd5e7-af4d-4bba-a226-516824f8f60e cib=0.434.32 source=match_graph_event:299 complete=false
Aug 27 08:47:01 [1331] NODE2       crmd:     info: match_graph_event: Action SERVICE1_monitor_0 (20) confirmed on NODE1 (rc=0)
Aug 27 08:47:01 [1331] NODE2       crmd:     info: process_graph_event:       Detected action (333.20) SERVICE1_monitor_0.13=ok: failed
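
The probe of SERVICE1 on NODE1 came back with rc 0 ("running") where the cluster expected rc 7 ("not running"): the systemd service is already active on NODE1 even though Pacemaker never started it there. One common cause is the unit being enabled at boot outside cluster control; a quick check, assuming the unit name "service1" taken from the lrm_resource entries above:

    # systemctl is-enabled service1
    # systemctl disable service1

Systemd units managed as cluster resources are normally left disabled so that only Pacemaker starts and stops them.
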
Aug 27 08:47:01 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.33 2
Aug 27 08:47:01 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.434.34 (null)
Aug 27 08:47:01 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @num_updates=34
Aug 27 08:47:01 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='SOURCEIP']/lrm_rsc_op[@id='SOURCEIP_last_0']:  @transition-magic=0:7;19:333:7:d10dd5e7-af4d-4bba-a226-516824f8f60e, @call-id=9, @rc-code=7, @op-status=0, @last-run=1598510844, @last-rc-change=1598510844, @exec-time=40
Aug 27 08:47:01 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section status: OK (rc=0, origin=NODE1/crmd/21, version=0.434.34)
Aug 27 08:47:01 [1331] NODE2       crmd:     info: match_graph_event: Action SOURCEIP_monitor_0 (19) confirmed on NODE1 (rc=7)
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.34 2
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.434.35 (null)
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @num_updates=35
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='VIRTUALIP']/lrm_rsc_op[@id='VIRTUALIP_last_0']:  @transition-magic=0:7;18:333:7:d10dd5e7-af4d-4bba-a226-516824f8f60e, @call-id=5, @rc-code=7, @op-status=0, @exec-time=441
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section status: OK (rc=0, origin=NODE1/crmd/22, version=0.434.35)
Aug 27 08:47:02 [1331] NODE2       crmd:     info: match_graph_event: Action VIRTUALIP_monitor_0 (18) confirmed on NODE1 (rc=7)
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.35 2
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.434.36 (null)
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @num_updates=36
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='SERVICE2']/lrm_rsc_op[@id='SERVICE2_last_0']:  @transition-magic=0:7;21:333:7:d10dd5e7-af4d-4bba-a226-516824f8f60e, @call-id=17, @rc-code=7, @op-status=0, @exec-time=239
Aug 27 08:47:02 [1331] NODE2       crmd:     info: match_graph_event: Action SERVICE2_monitor_0 (21) confirmed on NODE1 (rc=7)
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section status: OK (rc=0, origin=NODE1/crmd/23, version=0.434.36)
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.36 2
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.434.37 (null)
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @num_updates=37
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='SERVICE3']/lrm_rsc_op[@id='SERVICE3_last_0']:  @transition-magic=0:7;22:333:7:d10dd5e7-af4d-4bba-a226-516824f8f60e, @call-id=21, @rc-code=7, @op-status=0, @exec-time=240
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section status: OK (rc=0, origin=NODE1/crmd/24, version=0.434.37)
Aug 27 08:47:02 [1331] NODE2       crmd:     info: match_graph_event: Action SERVICE3_monitor_0 (22) confirmed on NODE1 (rc=7)
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.37 2
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.434.38 (null)
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @num_updates=38
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='SERVICE4']/lrm_rsc_op[@id='SERVICE4_last_0']:  @transition-magic=0:7;23:333:7:d10dd5e7-af4d-4bba-a226-516824f8f60e, @call-id=25, @rc-code=7, @op-status=0, @exec-time=241
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section status: OK (rc=0, origin=NODE1/crmd/25, version=0.434.38)
Aug 27 08:47:02 [1331] NODE2       crmd:     info: match_graph_event: Action SERVICE4_monitor_0 (23) confirmed on NODE1 (rc=7)
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.38 2
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.434.39 (null)
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @num_updates=39
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='SERVICE5']/lrm_rsc_op[@id='SERVICE5_last_0']:  @transition-magic=0:7;24:333:7:d10dd5e7-af4d-4bba-a226-516824f8f60e, @call-id=29, @rc-code=7, @op-status=0, @exec-time=241
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section status: OK (rc=0, origin=NODE1/crmd/26, version=0.434.39)
Aug 27 08:47:02 [1331] NODE2       crmd:     info: match_graph_event: Action SERVICE5_monitor_0 (24) confirmed on NODE1 (rc=7)
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.39 2
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.434.40 (null)
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @num_updates=40
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='SERVICE6']/lrm_rsc_op[@id='SERVICE6_last_0']:  @transition-magic=0:7;25:333:7:d10dd5e7-af4d-4bba-a226-516824f8f60e, @call-id=33, @rc-code=7, @op-status=0, @exec-time=242
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section status: OK (rc=0, origin=NODE1/crmd/27, version=0.434.40)
Aug 27 08:47:02 [1331] NODE2       crmd:     info: match_graph_event: Action SERVICE6_monitor_0 (25) confirmed on NODE1 (rc=7)
Aug 27 08:47:02 [1331] NODE2       crmd:   notice: run_graph: Transition 333 (Complete=8, Pending=0, Fired=0, Skipped=9, Incomplete=17, Source=/var/lib/pacemaker/pengine/pe-input-1510.bz2): Stopped
Aug 27 08:47:02 [1331] NODE2       crmd:     info: do_state_transition:       State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE | input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd
Aug 27 08:47:02 [1330] NODE2    pengine:   notice: unpack_config:     On loss of CCM Quorum: Ignore
Aug 27 08:47:02 [1330] NODE2    pengine:     info: determine_online_status:   Node NODE1 is online
Aug 27 08:47:02 [1330] NODE2    pengine:     info: determine_online_status:   Node NODE2 is online
Aug 27 08:47:02 [1330] NODE2    pengine:     info: determine_op_status:       Operation monitor found resource SERVICE1 active on NODE1
Aug 27 08:47:02 [1330] NODE2    pengine:     info: determine_op_status:       Operation monitor found resource SERVICE1 active on NODE1
Aug 27 08:47:02 [1330] NODE2    pengine:     info: determine_op_status:       Operation monitor found resource SERVICE4 active on NODE2
Aug 27 08:47:02 [1330] NODE2    pengine:     info: determine_op_status:       Operation monitor found resource SERVICE1 active on NODE2
Aug 27 08:47:02 [1330] NODE2    pengine:     info: unpack_node_loop:  Node 1 is already processed
Aug 27 08:47:02 [1330] NODE2    pengine:     info: unpack_node_loop:  Node 2 is already processed
Aug 27 08:47:02 [1330] NODE2    pengine:     info: unpack_node_loop:  Node 1 is already processed
Aug 27 08:47:02 [1330] NODE2    pengine:     info: unpack_node_loop:  Node 2 is already processed
Aug 27 08:47:02 [1330] NODE2    pengine:     info: group_print:        Resource Group: IPV
Aug 27 08:47:02 [1330] NODE2    pengine:     info: common_print:           VIRTUALIP  (ocf::heartbeat:IPaddr2):       Started NODE2
Aug 27 08:47:02 [1330] NODE2    pengine:     info: common_print:           SOURCEIP   (ocf::heartbeat:IPsrcaddr):      Started NODE2
Aug 27 08:47:02 [1330] NODE2    pengine:     info: common_print:      SERVICE1        (systemd:service1):     Started
Aug 27 08:47:02 [1330] NODE2    pengine:     info: common_print:              1 : NODE1
Aug 27 08:47:02 [1330] NODE2    pengine:     info: common_print:              2 : NODE2
Aug 27 08:47:02 [1330] NODE2    pengine:     info: common_print:      SERVICE2    (systemd:service2): Started NODE2
Aug 27 08:47:02 [1330] NODE2    pengine:     info: common_print:      SERVICE3       (systemd:service3):  Started NODE2
Aug 27 08:47:02 [1330] NODE2    pengine:     info: common_print:      SERVICE4       (systemd:service4):  Started NODE2
Aug 27 08:47:02 [1330] NODE2    pengine:     info: common_print:      SERVICE5      (systemd:service5): Started NODE2
Aug 27 08:47:02 [1330] NODE2    pengine:     info: common_print:      SERVICE6 (systemd:service6):    Started NODE2
Aug 27 08:47:02 [1330] NODE2    pengine:     info: common_print:      SERVICE7  (systemd:service7):      Started NODE2
Aug 27 08:47:02 [1330] NODE2    pengine:     info: common_print:      SERVICE8   (systemd:service8):  Started NODE2
Aug 27 08:47:02 [1330] NODE2    pengine:     info: common_print:      SERVICE9        (systemd:service9):   Started NODE2
Aug 27 08:47:02 [1330] NODE2    pengine:     info: common_print:      SERVICE10        (systemd:service10):        Started NODE2
Aug 27 08:47:02 [1330] NODE2    pengine:     info: common_print:      SERVICE11       (systemd:service11):       Started NODE2
Aug 27 08:47:02 [1330] NODE2    pengine:     info: common_print:      SERVICE12   (systemd:service12):       Started NODE2
Aug 27 08:47:02 [1330] NODE2    pengine:     info: common_print:      SERVICE13  (systemd:service13):       Started NODE2
Aug 27 08:47:02 [1330] NODE2    pengine:     info: clone_print:        Clone Set: SERVICE14-clone [SERVICE14]
Aug 27 08:47:02 [1330] NODE2    pengine:     info: short_print:            Started: [ NODE2 ]
Aug 27 08:47:02 [1330] NODE2    pengine:     info: short_print:            Stopped: [ NODE1 ]
Aug 27 08:47:02 [1330] NODE2    pengine:     info: clone_print:        Clone Set: SERVICE15-clone [SERVICE15]
Aug 27 08:47:02 [1330] NODE2    pengine:     info: short_print:            Started: [ NODE2 ]
Aug 27 08:47:02 [1330] NODE2    pengine:     info: short_print:            Stopped: [ NODE1 ]
Aug 27 08:47:02 [1330] NODE2    pengine:    error: native_create_actions:     Resource SERVICE1 is active on 2 nodes (attempting recovery)
Aug 27 08:47:02 [1330] NODE2    pengine:   notice: native_create_actions:     See https://wiki.clusterlabs.org/wiki/FAQ#Resource_is_Too_Active for more information
Aug 27 08:47:02 [1330] NODE2    pengine:     info: RecurringOp:        Start recurring monitor (5s) for SERVICE1 on NODE2
Aug 27 08:47:02 [1330] NODE2    pengine:     info: RecurringOp:        Start recurring monitor (10s) for SERVICE14:1 on NODE1
Aug 27 08:47:02 [1330] NODE2    pengine:     info: RecurringOp:        Start recurring monitor (10s) for SERVICE15:1 on NODE1
Aug 27 08:47:02 [1330] NODE2    pengine:     info: LogActions:        Leave   VIRTUALIP       (Started NODE2)
Aug 27 08:47:02 [1330] NODE2    pengine:     info: LogActions:        Leave   SOURCEIP        (Started NODE2)
Aug 27 08:47:02 [1330] NODE2    pengine:   notice: LogAction:  * Move       SERVICE1                 ( NODE1 -> NODE2 )
Aug 27 08:47:02 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE2    (Started NODE2)
Aug 27 08:47:02 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE3       (Started NODE2)
Aug 27 08:47:02 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE4       (Started NODE2)
Aug 27 08:47:02 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE5      (Started NODE2)
Aug 27 08:47:02 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE6 (Started NODE2)
Aug 27 08:47:02 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE7  (Started NODE2)
Aug 27 08:47:02 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE8   (Started NODE2)
Aug 27 08:47:02 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE9        (Started NODE2)
Aug 27 08:47:02 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE10        (Started NODE2)
Aug 27 08:47:02 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE11       (Started NODE2)
Aug 27 08:47:02 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE12   (Started NODE2)
Aug 27 08:47:02 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE13  (Started NODE2)
Aug 27 08:47:02 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE14:0        (Started NODE2)
Aug 27 08:47:02 [1330] NODE2    pengine:   notice: LogAction:  * Start      SERVICE14:1         (            NODE1 )
Aug 27 08:47:02 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE15:0 (Started NODE2)
Aug 27 08:47:02 [1330] NODE2    pengine:   notice: LogAction:  * Start      SERVICE15:1                  (            NODE1 )
Aug 27 08:47:02 [1330] NODE2    pengine:    error: process_pe_message:        Calculated transition 334 (with errors), saving inputs in /var/lib/pacemaker/pengine/pe-error-171.bz2
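
"Attempting recovery" is Pacemaker's default handling of a multiply-active resource (governed by the multiple-active resource meta-attribute, whose default is stop_start): stop the resource on every node where it was found, then start it on one, which is exactly the restart visible in crm_mon. To see where the cluster currently believes SERVICE1 is running, one option (crm_resource ships with the Pacemaker CLI tools):

    # crm_resource --resource SERVICE1 --locate
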
Aug 27 08:47:02 [1331] NODE2       crmd:     info: do_state_transition:       State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE | input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response
Aug 27 08:47:02 [1331] NODE2       crmd:     info: do_te_invoke:      Processing graph 334 (ref=pe_calc-dc-1598510822-991) derived from /var/lib/pacemaker/pengine/pe-error-171.bz2
Aug 27 08:47:02 [1331] NODE2       crmd:   notice: te_rsc_command:    Initiating stop operation SERVICE1_stop_0 locally on NODE2 | action 36
Aug 27 08:47:02 [1328] NODE2       lrmd:     info: cancel_recurring_action:   Cancelling systemd operation SERVICE1_status_5000
Aug 27 08:47:02 [1331] NODE2       crmd:     info: do_lrm_rsc_op:     Performing key=36:334:0:d10dd5e7-af4d-4bba-a226-516824f8f60e op=SERVICE1_stop_0
Aug 27 08:47:02 [1328] NODE2       lrmd:     info: log_execute:       executing - rsc:SERVICE1 action:stop call_id:428
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_process_request:       Forwarding cib_modify operation for section status to all (origin=local/crmd/1398)
Aug 27 08:47:02 [1331] NODE2       crmd:   notice: te_rsc_command:    Initiating stop operation SERVICE1_stop_0 on NODE1 | action 35
Aug 27 08:47:02 [1331] NODE2       crmd:   notice: te_rsc_command:    Initiating monitor operation SERVICE7_monitor_0 on NODE1 | action 18
Aug 27 08:47:02 [1331] NODE2       crmd:   notice: te_rsc_command:    Initiating monitor operation SERVICE8_monitor_0 on NODE1 | action 19
Aug 27 08:47:02 [1331] NODE2       crmd:   notice: te_rsc_command:    Initiating monitor operation SERVICE9_monitor_0 on NODE1 | action 20
Aug 27 08:47:02 [1331] NODE2       crmd:   notice: te_rsc_command:    Initiating monitor operation SERVICE10_monitor_0 on NODE1 | action 21
Aug 27 08:47:02 [1331] NODE2       crmd:   notice: te_rsc_command:    Initiating monitor operation SERVICE11_monitor_0 on NODE1 | action 22
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.40 2
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.434.41 (null)
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @num_updates=41
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib/status/node_state[@id='2']:  @crm-debug-origin=do_update_resource
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib/status/node_state[@id='2']/lrm[@id='2']/lrm_resources/lrm_resource[@id='SERVICE1']/lrm_rsc_op[@id='SERVICE1_last_0']:  @operation_key=SERVICE1_stop_0, @operation=stop, @crm-debug-origin=do_update_resource, @transition-key=36:334:0:d10dd5e7-af4d-4bba-a226-516824f8f60e, @transition-magic=-1:193;36:334:0:d10dd5e7-af4d-4bba-a226-516824f8f60e, @call-id=-1, @rc-code=193, @op-status=-1, @last-run=1598510822, @last-rc-change=1598510822, @
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section status: OK (rc=0, origin=NODE2/crmd/1398, version=0.434.41)
Aug 27 08:47:02 [1331] NODE2       crmd:   notice: te_rsc_command:    Initiating monitor operation SERVICE12_monitor_0 on NODE1 | action 23
Aug 27 08:47:02 [1331] NODE2       crmd:   notice: te_rsc_command:    Initiating monitor operation SERVICE13_monitor_0 on NODE1 | action 24
Aug 27 08:47:02 [1331] NODE2       crmd:     info: process_lrm_event: Result of monitor operation for SERVICE1 on NODE2: Cancelled | call=426 key=SERVICE1_monitor_5000 confirmed=true
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.41 2
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.434.42 (null)
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @num_updates=42
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='SERVICE1']/lrm_rsc_op[@id='SERVICE1_last_0']:  @operation_key=SERVICE1_stop_0, @operation=stop, @transition-key=35:334:0:d10dd5e7-af4d-4bba-a226-516824f8f60e, @transition-magic=-1:193;35:334:0:d10dd5e7-af4d-4bba-a226-516824f8f60e, @call-id=-1, @rc-code=193, @op-status=-1, @exec-time=0
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section status: OK (rc=0, origin=NODE1/crmd/28, version=0.434.42)
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.42 2
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.434.43 (null)
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @num_updates=43
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    ++ /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources:  <lrm_resource id="SERVICE7" type="service7" class="systemd"/>
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    ++                                                                <lrm_rsc_op id="SERVICE7_last_0" operation_key="SERVICE7_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.14" transition-key="18:334:7:d10dd5e7-af4d-4bba-a226-516824f8f60e" transition-magic="-1:193;18:334:7:d10dd5e7-af4d-4bba-a226-516824f8f60e" exit-reason="" on_node="NODE1" call-id="-1" rc-code="193" op-status="-1" int
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    ++                                                              </lrm_resource>
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section status: OK (rc=0, origin=NODE1/crmd/29, version=0.434.43)
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.43 2
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.434.44 (null)
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @num_updates=44
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    ++ /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources:  <lrm_resource id="SERVICE8" type="service8" class="systemd"/>
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    ++                                                                <lrm_rsc_op id="SERVICE8_last_0" operation_key="SERVICE8_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.14" transition-key="19:334:7:d10dd5e7-af4d-4bba-a226-516824f8f60e" transition-magic="-1:193;19:334:7:d10dd5e7-af4d-4bba-a226-516824f8f60e" exit-reason="" on_node="NODE1" call-id="-1" rc-code="193" op-status="-1" inter
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    ++                                                              </lrm_resource>
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section status: OK (rc=0, origin=NODE1/crmd/30, version=0.434.44)
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.44 2
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.434.45 (null)
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @num_updates=45
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    ++ /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources:  <lrm_resource id="SERVICE9" type="service9" class="systemd"/>
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    ++                                                                <lrm_rsc_op id="SERVICE9_last_0" operation_key="SERVICE9_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.14" transition-key="20:334:7:d10dd5e7-af4d-4bba-a226-516824f8f60e" transition-magic="-1:193;20:334:7:d10dd5e7-af4d-4bba-a226-516824f8f60e" exit-reason="" on_node="NODE1" call-id="-1" rc-code="193" op-status="-1" interval="0" la
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    ++                                                              </lrm_resource>
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section status: OK (rc=0, origin=NODE1/crmd/31, version=0.434.45)
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.45 2
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.434.46 (null)
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @num_updates=46
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    ++ /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources:  <lrm_resource id="SERVICE10" type="service10" class="systemd"/>
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    ++                                                                <lrm_rsc_op id="SERVICE10_last_0" operation_key="SERVICE10_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.14" transition-key="21:334:7:d10dd5e7-af4d-4bba-a226-516824f8f60e" transition-magic="-1:193;21:334:7:d10dd5e7-af4d-4bba-a226-516824f8f60e" exit-reason="" on_node="NODE1" call-id="-1" rc-code="193" op-status="-1" interval="0" la
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    ++                                                              </lrm_resource>
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section status: OK (rc=0, origin=NODE1/crmd/32, version=0.434.46)
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.46 2
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.434.47 (null)
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @num_updates=47
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    ++ /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources:  <lrm_resource id="SERVICE11" type="service11" class="systemd"/>
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    ++                                                                <lrm_rsc_op id="SERVICE11_last_0" operation_key="SERVICE11_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.14" transition-key="22:334:7:d10dd5e7-af4d-4bba-a226-516824f8f60e" transition-magic="-1:193;22:334:7:d10dd5e7-af4d-4bba-a226-516824f8f60e" exit-reason="" on_node="NODE1" call-id="-1" rc-code="193" op-status="-1" interval="0"
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    ++                                                              </lrm_resource>
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section status: OK (rc=0, origin=NODE1/crmd/33, version=0.434.47)
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.47 2
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.434.48 (null)
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @num_updates=48
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    ++ /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources:  <lrm_resource id="SERVICE12" type="service12" class="systemd"/>
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    ++                                                                <lrm_rsc_op id="SERVICE12_last_0" operation_key="SERVICE12_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.14" transition-key="23:334:7:d10dd5e7-af4d-4bba-a226-516824f8f60e" transition-magic="-1:193;23:334:7:d10dd5e7-af4d-4bba-a226-516824f8f60e" exit-reason="" on_node="NODE1" call-id="-1" rc-code="193" op-status="-1" inter
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    ++                                                              </lrm_resource>
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section status: OK (rc=0, origin=NODE1/crmd/34, version=0.434.48)
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.48 2
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.434.49 (null)
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @num_updates=49
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    ++ /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources:  <lrm_resource id="SERVICE13" type="service13" class="systemd"/>
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    ++                                                                <lrm_rsc_op id="SERVICE13_last_0" operation_key="SERVICE13_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.14" transition-key="24:334:7:d10dd5e7-af4d-4bba-a226-516824f8f60e" transition-magic="-1:193;24:334:7:d10dd5e7-af4d-4bba-a226-516824f8f60e" exit-reason="" on_node="NODE1" call-id="-1" rc-code="193" op-status="-1" interval="0" last-r
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    ++                                                              </lrm_resource>
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section status: OK (rc=0, origin=NODE1/crmd/35, version=0.434.49)
Aug 27 08:47:02 [1328] NODE2       lrmd:     info: systemd_exec_result:       Call to stop passed: /org/freedesktop/systemd1/job/42670
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.49 2
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.434.50 (null)
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @num_updates=50
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='SERVICE7']/lrm_rsc_op[@id='SERVICE7_last_0']:  @transition-magic=0:7;18:334:7:d10dd5e7-af4d-4bba-a226-516824f8f60e, @call-id=38, @rc-code=7, @op-status=0, @exec-time=125
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section status: OK (rc=0, origin=NODE1/crmd/36, version=0.434.50)
Aug 27 08:47:02 [1331] NODE2       crmd:     info: match_graph_event: Action SERVICE7_monitor_0 (18) confirmed on NODE1 (rc=7)
Aug 27 08:47:02 [1331] NODE2       crmd:   notice: te_rsc_command:    Initiating monitor operation SERVICE14:1_monitor_0 on NODE1 | action 25
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.50 2
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.434.51 (null)
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @num_updates=51
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='SERVICE8']/lrm_rsc_op[@id='SERVICE8_last_0']:  @transition-magic=0:7;19:334:7:d10dd5e7-af4d-4bba-a226-516824f8f60e, @call-id=42, @rc-code=7, @op-status=0, @exec-time=123, @queue-time=1
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section status: OK (rc=0, origin=NODE1/crmd/37, version=0.434.51)
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.51 2
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.434.52 (null)
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @num_updates=52
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='SERVICE9']/lrm_rsc_op[@id='SERVICE9_last_0']:  @transition-magic=0:0;20:334:7:d10dd5e7-af4d-4bba-a226-516824f8f60e, @call-id=46, @rc-code=0, @op-status=0, @exec-time=122
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    ++ /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='SERVICE9']:  <lrm_rsc_op id="SERVICE9_last_failure_0" operation_key="SERVICE9_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.14" transition-key="20:334:7:d10dd5e7-af4d-4bba-a226-516824f8f60e" transition-magic="0:0;20:334:7:d10dd5e7-af4d-4bba-a226-516824f8f60e" exit-reason="" on_node="NODE1" call-id="46" rc-code="0"
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section status: OK (rc=0, origin=NODE1/crmd/38, version=0.434.52)
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.52 2
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.434.53 (null)
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @num_updates=53
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='SERVICE10']/lrm_rsc_op[@id='SERVICE10_last_0']:  @transition-magic=0:7;21:334:7:d10dd5e7-af4d-4bba-a226-516824f8f60e, @call-id=50, @rc-code=7, @op-status=0, @exec-time=121
Aug 27 08:47:02 [1331] NODE2       crmd:     info: match_graph_event: Action SERVICE8_monitor_0 (19) confirmed on NODE1 (rc=7)
Aug 27 08:47:02 [1331] NODE2       crmd:  warning: status_from_rc:    Action 20 (SERVICE9_monitor_0) on NODE1 failed (target: 7 vs. rc: 0): Error
Aug 27 08:47:02 [1331] NODE2       crmd:   notice: abort_transition_graph:    Transition aborted by operation SERVICE9_monitor_0 'modify' on NODE1: Event failed | magic=0:0;20:334:7:d10dd5e7-af4d-4bba-a226-516824f8f60e cib=0.434.52 source=match_graph_event:299 complete=false
Aug 27 08:47:02 [1331] NODE2       crmd:     info: match_graph_event: Action SERVICE9_monitor_0 (20) confirmed on NODE1 (rc=0)
Aug 27 08:47:02 [1331] NODE2       crmd:     info: process_graph_event:       Detected action (334.20) SERVICE9_monitor_0.46=ok: failed
Aug 27 08:47:02 [1331] NODE2       crmd:     info: match_graph_event: Action SERVICE10_monitor_0 (21) confirmed on NODE1 (rc=7)
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section status: OK (rc=0, origin=NODE1/crmd/39, version=0.434.53)
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.53 2
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.434.54 (null)
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @num_updates=54
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='SERVICE11']/lrm_rsc_op[@id='SERVICE11_last_0']:  @transition-magic=0:7;22:334:7:d10dd5e7-af4d-4bba-a226-516824f8f60e, @call-id=54, @rc-code=7, @op-status=0, @exec-time=118
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section status: OK (rc=0, origin=NODE1/crmd/40, version=0.434.54)
Aug 27 08:47:02 [1331] NODE2       crmd:     info: match_graph_event: Action SERVICE11_monitor_0 (22) confirmed on NODE1 (rc=7)
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.54 2
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.434.55 (null)
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @num_updates=55
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='SERVICE12']/lrm_rsc_op[@id='SERVICE12_last_0']:  @transition-magic=0:7;23:334:7:d10dd5e7-af4d-4bba-a226-516824f8f60e, @call-id=58, @rc-code=7, @op-status=0, @exec-time=117
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section status: OK (rc=0, origin=NODE1/crmd/41, version=0.434.55)
Aug 27 08:47:02 [1331] NODE2       crmd:     info: match_graph_event: Action SERVICE12_monitor_0 (23) confirmed on NODE1 (rc=7)
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.55 2
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.434.56 (null)
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @num_updates=56
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='SERVICE13']/lrm_rsc_op[@id='SERVICE13_last_0']:  @transition-magic=0:7;24:334:7:d10dd5e7-af4d-4bba-a226-516824f8f60e, @call-id=62, @rc-code=7, @op-status=0, @exec-time=118, @queue-time=1
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section status: OK (rc=0, origin=NODE1/crmd/42, version=0.434.56)
Aug 27 08:47:02 [1331] NODE2       crmd:     info: match_graph_event: Action SERVICE13_monitor_0 (24) confirmed on NODE1 (rc=7)
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.56 2
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.434.57 (null)
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @num_updates=57
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    ++ /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources:  <lrm_resource id="SERVICE14" type="service14" class="systemd"/>
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    ++                                                                <lrm_rsc_op id="SERVICE14_last_0" operation_key="SERVICE14_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.14" transition-key="25:334:7:d10dd5e7-af4d-4bba-a226-516824f8f60e" transition-magic="-1:193;25:334:7:d10dd5e7-af4d-4bba-a226-516824f8f60e" exit-reason="" on_node="NODE1" call-id="-1" rc-code="193" op-status="-1" int
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    ++                                                              </lrm_resource>
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section status: OK (rc=0, origin=NODE1/crmd/43, version=0.434.57)
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.57 2
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.434.58 (null)
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @num_updates=58
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='SERVICE14']/lrm_rsc_op[@id='SERVICE14_last_0']:  @transition-magic=0:7;25:334:7:d10dd5e7-af4d-4bba-a226-516824f8f60e, @call-id=67, @rc-code=7, @op-status=0, @exec-time=46
Aug 27 08:47:02 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section status: OK (rc=0, origin=NODE1/crmd/44, version=0.434.58)
Aug 27 08:47:02 [1331] NODE2       crmd:     info: match_graph_event: Action SERVICE14_monitor_0 (25) confirmed on NODE1 (rc=7)
Aug 27 08:47:04 [1331] NODE2       crmd:   notice: process_lrm_event: Result of stop operation for SERVICE1 on NODE2: 0 (ok) | call=428 key=SERVICE1_stop_0 confirmed=true cib-update=1399
Aug 27 08:47:04 [1326] NODE2        cib:     info: cib_process_request:       Forwarding cib_modify operation for section status to all (origin=local/crmd/1399)
Aug 27 08:47:04 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.58 2
Aug 27 08:47:04 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.434.59 (null)
Aug 27 08:47:04 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @num_updates=59
Aug 27 08:47:04 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib/status/node_state[@id='2']/lrm[@id='2']/lrm_resources/lrm_resource[@id='SERVICE1']/lrm_rsc_op[@id='SERVICE1_last_0']:  @transition-magic=0:0;36:334:0:d10dd5e7-af4d-4bba-a226-516824f8f60e, @call-id=428, @rc-code=0, @op-status=0, @exec-time=2072
Aug 27 08:47:04 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section status: OK (rc=0, origin=NODE2/crmd/1399, version=0.434.59)
Aug 27 08:47:04 [1331] NODE2       crmd:     info: match_graph_event: Action SERVICE1_stop_0 (36) confirmed on NODE2 (rc=0)
Aug 27 08:47:09 [1326] NODE2        cib:     info: cib_process_ping:  Reporting our current digest to NODE2: dff1c3d3f4cfcb4d59dd341bb0268506 for 0.434.59 (0x563b7430a810 0)
Aug 27 08:47:22 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.59 2
Aug 27 08:47:22 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.434.60 (null)
Aug 27 08:47:22 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @num_updates=60
Aug 27 08:47:22 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='SERVICE1']/lrm_rsc_op[@id='SERVICE1_last_0']:  @transition-magic=2:198;35:334:0:d10dd5e7-af4d-4bba-a226-516824f8f60e, @call-id=34, @rc-code=198, @op-status=2, @exec-time=19986, @queue-time=1
Aug 27 08:47:22 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='SERVICE1']/lrm_rsc_op[@id='SERVICE1_last_failure_0']:  @operation_key=SERVICE1_stop_0, @operation=stop, @transition-key=35:334:0:d10dd5e7-af4d-4bba-a226-516824f8f60e, @transition-magic=2:198;35:334:0:d10dd5e7-af4d-4bba-a226-516824f8f60e, @call-id=34, @rc-code=198, @op-status=2, @exec-time=19986, @queue-time=1
Aug 27 08:47:22 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section status: OK (rc=0, origin=NODE1/crmd/45, version=0.434.60)
Aug 27 08:47:22 [1331] NODE2       crmd:  warning: status_from_rc:    Action 35 (SERVICE1_stop_0) on NODE1 failed (target: 0 vs. rc: 198): Error
Aug 27 08:47:22 [1331] NODE2       crmd:     info: match_graph_event: Action SERVICE1_stop_0 (35) confirmed on NODE1 (rc=198, ignoring failure)
Aug 27 08:47:22 [1331] NODE2       crmd:     info: update_failcount:  Updating last failure for SERVICE1 on NODE1 after failed stop: rc=198 (update=INFINITY, time=1598510842)
Aug 27 08:47:22 [1331] NODE2       crmd:     info: process_graph_event:       Detected action (334.35) SERVICE1_stop_0.34=OCF_TIMEOUT: failed
Aug 27 08:47:22 [1331] NODE2       crmd:   notice: run_graph: Transition 334 (Complete=11, Pending=0, Fired=0, Skipped=3, Incomplete=10, Source=/var/lib/pacemaker/pengine/pe-error-171.bz2): Stopped
Aug 27 08:47:22 [1331] NODE2       crmd:     info: do_state_transition:       State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE | input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd
Aug 27 08:47:22 [1329] NODE2      attrd:     info: attrd_peer_update: Setting last-failure-SERVICE1#stop_0[NODE1]: (null) -> 1598510842 from NODE2
Aug 27 08:47:22 [1329] NODE2      attrd:     info: write_attribute:   Sent CIB request 173 with 2 changes for last-failure-SERVICE1#stop_0 (id n/a, set n/a)
Aug 27 08:47:22 [1326] NODE2        cib:     info: cib_process_request:       Forwarding cib_modify operation for section status to all (origin=local/attrd/173)
Aug 27 08:47:22 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.60 2
Aug 27 08:47:22 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.434.61 (null)
Aug 27 08:47:22 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @num_updates=61
Aug 27 08:47:22 [1326] NODE2        cib:     info: cib_perform_op:    ++ /cib/status/node_state[@id='1']:  <transient_attributes id="1"/>
Aug 27 08:47:22 [1326] NODE2        cib:     info: cib_perform_op:    ++                                     <instance_attributes id="status-1">
Aug 27 08:47:22 [1326] NODE2        cib:     info: cib_perform_op:    ++                                       <nvpair id="status-1-last-failure-SERVICE1.stop_0" name="last-failure-SERVICE1#stop_0" value="1598510842"/>
Aug 27 08:47:22 [1326] NODE2        cib:     info: cib_perform_op:    ++                                     </instance_attributes>
Aug 27 08:47:22 [1326] NODE2        cib:     info: cib_perform_op:    ++                                   </transient_attributes>
Aug 27 08:47:22 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section status: OK (rc=0, origin=NODE2/attrd/173, version=0.434.61)
Aug 27 08:47:22 [1329] NODE2      attrd:     info: attrd_cib_callback:        CIB update 173 result for last-failure-SERVICE1#stop_0: OK | rc=0
Aug 27 08:47:22 [1329] NODE2      attrd:     info: attrd_cib_callback:        * last-failure-SERVICE1#stop_0[NODE2]=(null)
Aug 27 08:47:22 [1329] NODE2      attrd:     info: attrd_cib_callback:        * last-failure-SERVICE1#stop_0[NODE1]=1598510842
Aug 27 08:47:22 [1330] NODE2    pengine:   notice: unpack_config:     On loss of CCM Quorum: Ignore
Aug 27 08:47:22 [1331] NODE2       crmd:     info: abort_transition_graph:    Transition aborted by transient_attributes.1 'create': Transient attribute change | cib=0.434.61 source=abort_unless_down:356 path=/cib/status/node_state[@id='1'] complete=true
Aug 27 08:47:22 [1330] NODE2    pengine:     info: determine_online_status:   Node NODE1 is online
Aug 27 08:47:22 [1330] NODE2    pengine:     info: determine_online_status:   Node NODE2 is online
Aug 27 08:47:22 [1330] NODE2    pengine:  warning: unpack_rsc_op:     Pretending the failure of SERVICE1_stop_0 (rc=198) on NODE1 succeeded
Aug 27 08:47:22 [1330] NODE2    pengine:  warning: unpack_rsc_op:     Pretending the failure of SERVICE1_stop_0 (rc=198) on NODE1 succeeded
Aug 27 08:47:22 [1330] NODE2    pengine:     info: determine_op_status:       Operation monitor found resource SERVICE9 active on NODE1
Aug 27 08:47:22 [1330] NODE2    pengine:     info: determine_op_status:       Operation monitor found resource SERVICE9 active on NODE1
Aug 27 08:47:22 [1330] NODE2    pengine:     info: determine_op_status:       Operation monitor found resource SERVICE4 active on NODE2
Aug 27 08:47:22 [1330] NODE2    pengine:     info: determine_op_status:       Operation monitor found resource SERVICE1 active on NODE2
Aug 27 08:47:22 [1330] NODE2    pengine:     info: unpack_node_loop:  Node 1 is already processed
Aug 27 08:47:22 [1330] NODE2    pengine:     info: unpack_node_loop:  Node 2 is already processed
Aug 27 08:47:22 [1330] NODE2    pengine:     info: unpack_node_loop:  Node 1 is already processed
Aug 27 08:47:22 [1330] NODE2    pengine:     info: unpack_node_loop:  Node 2 is already processed
Aug 27 08:47:22 [1330] NODE2    pengine:     info: group_print:        Resource Group: IPV
Aug 27 08:47:22 [1330] NODE2    pengine:     info: common_print:           VIRTUALIP  (ocf::heartbeat:IPaddr2):       Started NODE2
Aug 27 08:47:22 [1330] NODE2    pengine:     info: common_print:           SOURCEIP   (ocf::heartbeat:IPsrcaddr):      Started NODE2
Aug 27 08:47:22 [1330] NODE2    pengine:     info: common_print:      SERVICE1        (systemd:service1):     Stopped (failure ignored)
Aug 27 08:47:22 [1330] NODE2    pengine:     info: common_print:      SERVICE2    (systemd:service2): Started NODE2
Aug 27 08:47:22 [1330] NODE2    pengine:     info: common_print:      SERVICE3       (systemd:service3):  Started NODE2
Aug 27 08:47:22 [1330] NODE2    pengine:     info: common_print:      SERVICE4       (systemd:service4):  Started NODE2
Aug 27 08:47:22 [1330] NODE2    pengine:     info: common_print:      SERVICE5      (systemd:service5): Started NODE2
Aug 27 08:47:22 [1330] NODE2    pengine:     info: common_print:      SERVICE6 (systemd:service6):    Started NODE2
Aug 27 08:47:22 [1330] NODE2    pengine:     info: common_print:      SERVICE7  (systemd:service7):      Started NODE2
Aug 27 08:47:22 [1330] NODE2    pengine:     info: common_print:      SERVICE8   (systemd:service8):  Started NODE2
Aug 27 08:47:22 [1330] NODE2    pengine:     info: common_print:      SERVICE9        (systemd:service9):   Started
Aug 27 08:47:22 [1330] NODE2    pengine:     info: common_print:              1 : NODE1
Aug 27 08:47:22 [1330] NODE2    pengine:     info: common_print:              2 : NODE2
Aug 27 08:47:22 [1330] NODE2    pengine:     info: common_print:      SERVICE10        (systemd:service10):        Started NODE2
Aug 27 08:47:22 [1330] NODE2    pengine:     info: common_print:      SERVICE11       (systemd:service11):       Started NODE2
Aug 27 08:47:22 [1330] NODE2    pengine:     info: common_print:      SERVICE12   (systemd:service12):       Started NODE2
Aug 27 08:47:22 [1330] NODE2    pengine:     info: common_print:      SERVICE13  (systemd:service13):       Started NODE2
Aug 27 08:47:22 [1330] NODE2    pengine:     info: clone_print:        Clone Set: SERVICE14-clone [SERVICE14]
Aug 27 08:47:22 [1330] NODE2    pengine:     info: short_print:            Started: [ NODE2 ]
Aug 27 08:47:22 [1330] NODE2    pengine:     info: short_print:            Stopped: [ NODE1 ]
Aug 27 08:47:22 [1330] NODE2    pengine:     info: clone_print:        Clone Set: SERVICE15-clone [SERVICE15]
Aug 27 08:47:22 [1330] NODE2    pengine:     info: short_print:            Started: [ NODE2 ]
Aug 27 08:47:22 [1330] NODE2    pengine:     info: short_print:            Stopped: [ NODE1 ]
Aug 27 08:47:22 [1330] NODE2    pengine:     info: RecurringOp:        Start recurring monitor (5s) for SERVICE1 on NODE2
Aug 27 08:47:22 [1330] NODE2    pengine:    error: native_create_actions:     Resource SERVICE9 is active on 2 nodes (attempting recovery)
Aug 27 08:47:22 [1330] NODE2    pengine:   notice: native_create_actions:     See https://wiki.clusterlabs.org/wiki/FAQ#Resource_is_Too_Active for more information
Aug 27 08:47:22 [1330] NODE2    pengine:     info: RecurringOp:        Start recurring monitor (5s) for SERVICE9 on NODE2
Aug 27 08:47:22 [1330] NODE2    pengine:     info: RecurringOp:        Start recurring monitor (10s) for SERVICE14:1 on NODE1
Aug 27 08:47:22 [1330] NODE2    pengine:     info: RecurringOp:        Start recurring monitor (10s) for SERVICE15:1 on NODE1
Aug 27 08:47:22 [1330] NODE2    pengine:     info: LogActions:        Leave   VIRTUALIP       (Started NODE2)
Aug 27 08:47:22 [1330] NODE2    pengine:     info: LogActions:        Leave   SOURCEIP        (Started NODE2)
Aug 27 08:47:22 [1330] NODE2    pengine:   notice: LogAction:  * Start      SERVICE1                 (            NODE2 )
Aug 27 08:47:22 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE2    (Started NODE2)
Aug 27 08:47:22 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE3       (Started NODE2)
Aug 27 08:47:22 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE4       (Started NODE2)
Aug 27 08:47:22 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE5      (Started NODE2)
Aug 27 08:47:22 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE6 (Started NODE2)
Aug 27 08:47:22 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE7  (Started NODE2)
Aug 27 08:47:22 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE8   (Started NODE2)
Aug 27 08:47:22 [1330] NODE2    pengine:   notice: LogAction:  * Move       SERVICE9                 ( NODE1 -> NODE2 )
Aug 27 08:47:22 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE10        (Started NODE2)
Aug 27 08:47:22 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE11       (Started NODE2)
Aug 27 08:47:22 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE12   (Started NODE2)
Aug 27 08:47:22 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE13  (Started NODE2)
Aug 27 08:47:22 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE14:0        (Started NODE2)
Aug 27 08:47:22 [1330] NODE2    pengine:   notice: LogAction:  * Start      SERVICE14:1         (            NODE1 )
Aug 27 08:47:22 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE15:0 (Started NODE2)
Aug 27 08:47:22 [1330] NODE2    pengine:   notice: LogAction:  * Start      SERVICE15:1                  (            NODE1 )
Aug 27 08:47:22 [1330] NODE2    pengine:    error: process_pe_message:        Calculated transition 335 (with errors), saving inputs in /var/lib/pacemaker/pengine/pe-error-172.bz2
Aug 27 08:47:22 [1331] NODE2       crmd:     info: handle_response:   pe_calc calculation pe_calc-dc-1598510842-1002 is obsolete
Aug 27 08:47:22 [1330] NODE2    pengine:   notice: unpack_config:     On loss of CCM Quorum: Ignore
Aug 27 08:47:22 [1330] NODE2    pengine:     info: determine_online_status:   Node NODE1 is online
Aug 27 08:47:22 [1330] NODE2    pengine:     info: determine_online_status:   Node NODE2 is online
Aug 27 08:47:22 [1330] NODE2    pengine:  warning: unpack_rsc_op:     Pretending the failure of SERVICE1_stop_0 (rc=198) on NODE1 succeeded
Aug 27 08:47:22 [1330] NODE2    pengine:  warning: unpack_rsc_op:     Pretending the failure of SERVICE1_stop_0 (rc=198) on NODE1 succeeded
Aug 27 08:47:22 [1330] NODE2    pengine:     info: determine_op_status:       Operation monitor found resource SERVICE9 active on NODE1
Aug 27 08:47:22 [1330] NODE2    pengine:     info: determine_op_status:       Operation monitor found resource SERVICE9 active on NODE1
Aug 27 08:47:22 [1330] NODE2    pengine:     info: determine_op_status:       Operation monitor found resource SERVICE4 active on NODE2
Aug 27 08:47:22 [1330] NODE2    pengine:     info: determine_op_status:       Operation monitor found resource SERVICE1 active on NODE2
Aug 27 08:47:22 [1330] NODE2    pengine:     info: unpack_node_loop:  Node 1 is already processed
Aug 27 08:47:22 [1330] NODE2    pengine:     info: unpack_node_loop:  Node 2 is already processed
Aug 27 08:47:22 [1330] NODE2    pengine:     info: unpack_node_loop:  Node 1 is already processed
Aug 27 08:47:22 [1330] NODE2    pengine:     info: unpack_node_loop:  Node 2 is already processed
Aug 27 08:47:22 [1330] NODE2    pengine:     info: group_print:        Resource Group: IPV
Aug 27 08:47:22 [1330] NODE2    pengine:     info: common_print:           VIRTUALIP  (ocf::heartbeat:IPaddr2):       Started NODE2
Aug 27 08:47:22 [1330] NODE2    pengine:     info: common_print:           SOURCEIP   (ocf::heartbeat:IPsrcaddr):      Started NODE2
Aug 27 08:47:22 [1330] NODE2    pengine:     info: common_print:      SERVICE1        (systemd:service1):     Stopped (failure ignored)
Aug 27 08:47:22 [1330] NODE2    pengine:     info: common_print:      SERVICE2    (systemd:service2): Started NODE2
Aug 27 08:47:22 [1330] NODE2    pengine:     info: common_print:      SERVICE3       (systemd:service3):  Started NODE2
Aug 27 08:47:22 [1330] NODE2    pengine:     info: common_print:      SERVICE4       (systemd:service4):  Started NODE2
Aug 27 08:47:22 [1330] NODE2    pengine:     info: common_print:      SERVICE5      (systemd:service5): Started NODE2
Aug 27 08:47:22 [1330] NODE2    pengine:     info: common_print:      SERVICE6 (systemd:service6):    Started NODE2
Aug 27 08:47:22 [1330] NODE2    pengine:     info: common_print:      SERVICE7  (systemd:service7):      Started NODE2
Aug 27 08:47:22 [1330] NODE2    pengine:     info: common_print:      SERVICE8   (systemd:service8):  Started NODE2
Aug 27 08:47:22 [1330] NODE2    pengine:     info: common_print:      SERVICE9        (systemd:service9):   Started
Aug 27 08:47:22 [1330] NODE2    pengine:     info: common_print:              1 : NODE1
Aug 27 08:47:22 [1330] NODE2    pengine:     info: common_print:              2 : NODE2
Aug 27 08:47:22 [1330] NODE2    pengine:     info: common_print:      SERVICE10        (systemd:service10):        Started NODE2
Aug 27 08:47:22 [1330] NODE2    pengine:     info: common_print:      SERVICE11       (systemd:service11):       Started NODE2
Aug 27 08:47:22 [1330] NODE2    pengine:     info: common_print:      SERVICE12   (systemd:service12):       Started NODE2
Aug 27 08:47:22 [1330] NODE2    pengine:     info: common_print:      SERVICE13  (systemd:service13):       Started NODE2
Aug 27 08:47:22 [1330] NODE2    pengine:     info: clone_print:        Clone Set: SERVICE14-clone [SERVICE14]
Aug 27 08:47:22 [1330] NODE2    pengine:     info: short_print:            Started: [ NODE2 ]
Aug 27 08:47:22 [1330] NODE2    pengine:     info: short_print:            Stopped: [ NODE1 ]
Aug 27 08:47:22 [1330] NODE2    pengine:     info: clone_print:        Clone Set: SERVICE15-clone [SERVICE15]
Aug 27 08:47:22 [1330] NODE2    pengine:     info: short_print:            Started: [ NODE2 ]
Aug 27 08:47:22 [1330] NODE2    pengine:     info: short_print:            Stopped: [ NODE1 ]
Aug 27 08:47:22 [1330] NODE2    pengine:     info: RecurringOp:        Start recurring monitor (5s) for SERVICE1 on NODE2
Aug 27 08:47:22 [1330] NODE2    pengine:    error: native_create_actions:     Resource SERVICE9 is active on 2 nodes (attempting recovery)
Aug 27 08:47:22 [1330] NODE2    pengine:   notice: native_create_actions:     See https://wiki.clusterlabs.org/wiki/FAQ#Resource_is_Too_Active for more information
Aug 27 08:47:22 [1330] NODE2    pengine:     info: RecurringOp:        Start recurring monitor (5s) for SERVICE9 on NODE2
Aug 27 08:47:22 [1330] NODE2    pengine:     info: RecurringOp:        Start recurring monitor (10s) for SERVICE14:1 on NODE1
Aug 27 08:47:22 [1330] NODE2    pengine:     info: RecurringOp:        Start recurring monitor (10s) for SERVICE15:1 on NODE1
Aug 27 08:47:22 [1330] NODE2    pengine:     info: LogActions:        Leave   VIRTUALIP       (Started NODE2)
Aug 27 08:47:22 [1330] NODE2    pengine:     info: LogActions:        Leave   SOURCEIP        (Started NODE2)
Aug 27 08:47:22 [1330] NODE2    pengine:   notice: LogAction:  * Start      SERVICE1                 (            NODE2 )
Aug 27 08:47:22 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE2    (Started NODE2)
Aug 27 08:47:22 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE3       (Started NODE2)
Aug 27 08:47:22 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE4       (Started NODE2)
Aug 27 08:47:22 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE5      (Started NODE2)
Aug 27 08:47:22 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE6 (Started NODE2)
Aug 27 08:47:22 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE7  (Started NODE2)
Aug 27 08:47:22 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE8   (Started NODE2)
Aug 27 08:47:22 [1330] NODE2    pengine:   notice: LogAction:  * Move       SERVICE9                 ( NODE1 -> NODE2 )
Aug 27 08:47:22 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE10        (Started NODE2)
Aug 27 08:47:22 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE11       (Started NODE2)
Aug 27 08:47:22 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE12   (Started NODE2)
Aug 27 08:47:22 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE13  (Started NODE2)
Aug 27 08:47:22 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE14:0        (Started NODE2)
Aug 27 08:47:22 [1330] NODE2    pengine:   notice: LogAction:  * Start      SERVICE14:1         (            NODE1 )
Aug 27 08:47:22 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE15:0 (Started NODE2)
Aug 27 08:47:22 [1330] NODE2    pengine:   notice: LogAction:  * Start      SERVICE15:1                  (            NODE1 )
Aug 27 08:47:22 [1330] NODE2    pengine:    error: process_pe_message:        Calculated transition 336 (with errors), saving inputs in /var/lib/pacemaker/pengine/pe-error-173.bz2
Aug 27 08:47:22 [1331] NODE2       crmd:     info: do_state_transition:       State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE | input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response
Aug 27 08:47:22 [1331] NODE2       crmd:     info: do_te_invoke:      Processing graph 336 (ref=pe_calc-dc-1598510842-1003) derived from /var/lib/pacemaker/pengine/pe-error-173.bz2
Aug 27 08:47:22 [1331] NODE2       crmd:   notice: te_rsc_command:    Initiating start operation SERVICE1_start_0 locally on NODE2 | action 26
Aug 27 08:47:22 [1331] NODE2       crmd:     info: do_lrm_rsc_op:     Performing key=26:336:0:d10dd5e7-af4d-4bba-a226-516824f8f60e op=SERVICE1_start_0
Aug 27 08:47:22 [1326] NODE2        cib:     info: cib_process_request:       Forwarding cib_modify operation for section status to all (origin=local/crmd/1402)
Aug 27 08:47:22 [1328] NODE2       lrmd:     info: log_execute:       executing - rsc:SERVICE1 action:start call_id:429
Aug 27 08:47:22 [1331] NODE2       crmd:   notice: te_rsc_command:    Initiating stop operation SERVICE9_stop_0 locally on NODE2 | action 43
Aug 27 08:47:22 [1328] NODE2       lrmd:     info: cancel_recurring_action:   Cancelling systemd operation SERVICE9_status_5000
Aug 27 08:47:22 [1331] NODE2       crmd:     info: do_lrm_rsc_op:     Performing key=43:336:0:d10dd5e7-af4d-4bba-a226-516824f8f60e op=SERVICE9_stop_0
Aug 27 08:47:22 [1326] NODE2        cib:     info: cib_process_request:       Forwarding cib_modify operation for section status to all (origin=local/crmd/1403)
Aug 27 08:47:22 [1328] NODE2       lrmd:     info: log_execute:       executing - rsc:SERVICE9 action:stop call_id:431
Aug 27 08:47:22 [1331] NODE2       crmd:   notice: te_rsc_command:    Initiating stop operation SERVICE9_stop_0 on NODE1 | action 42
Aug 27 08:47:22 [1331] NODE2       crmd:   notice: te_rsc_command:    Initiating monitor operation SERVICE15:1_monitor_0 on NODE1 | action 17
Aug 27 08:47:22 [1331] NODE2       crmd:     info: process_lrm_event: Result of monitor operation for SERVICE9 on NODE2: Cancelled | call=400 key=SERVICE9_monitor_5000 confirmed=true
Aug 27 08:47:22 [1331] NODE2       crmd:   notice: te_rsc_command:    Initiating start operation SERVICE14:1_start_0 on NODE1 | action 55
Aug 27 08:47:22 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.61 2
Aug 27 08:47:22 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.434.62 (null)
Aug 27 08:47:22 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @num_updates=62
Aug 27 08:47:22 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib/status/node_state[@id='2']/lrm[@id='2']/lrm_resources/lrm_resource[@id='SERVICE1']/lrm_rsc_op[@id='SERVICE1_last_0']:  @operation_key=SERVICE1_start_0, @operation=start, @transition-key=26:336:0:d10dd5e7-af4d-4bba-a226-516824f8f60e, @transition-magic=-1:193;26:336:0:d10dd5e7-af4d-4bba-a226-516824f8f60e, @call-id=-1, @rc-code=193, @op-status=-1, @last-run=1598510842, @last-rc-change=1598510842, @exec-time=0
Aug 27 08:47:22 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section status: OK (rc=0, origin=NODE2/crmd/1402, version=0.434.62)
Aug 27 08:47:22 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.62 2
Aug 27 08:47:22 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.434.63 (null)
Aug 27 08:47:22 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @num_updates=63
Aug 27 08:47:22 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib/status/node_state[@id='2']/lrm[@id='2']/lrm_resources/lrm_resource[@id='SERVICE9']/lrm_rsc_op[@id='SERVICE9_last_0']:  @operation_key=SERVICE9_stop_0, @operation=stop, @crm-debug-origin=do_update_resource, @transition-key=43:336:0:d10dd5e7-af4d-4bba-a226-516824f8f60e, @transition-magic=-1:193;43:336:0:d10dd5e7-af4d-4bba-a226-516824f8f60e, @call-id=-1, @rc-code=193, @op-status=-1, @last-run=1598510842, @last-rc-change=1598510842, @
Aug 27 08:47:22 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section status: OK (rc=0, origin=NODE2/crmd/1403, version=0.434.63)
Aug 27 08:47:22 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.63 2
Aug 27 08:47:22 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.434.64 (null)
Aug 27 08:47:22 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @num_updates=64
Aug 27 08:47:22 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='SERVICE9']/lrm_rsc_op[@id='SERVICE9_last_0']:  @operation_key=SERVICE9_stop_0, @operation=stop, @transition-key=42:336:0:d10dd5e7-af4d-4bba-a226-516824f8f60e, @transition-magic=-1:193;42:336:0:d10dd5e7-af4d-4bba-a226-516824f8f60e, @call-id=-1, @rc-code=193, @op-status=-1, @last-run=1598510864, @last-rc-change=1598510864, @exec-time=0
Aug 27 08:47:22 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section status: OK (rc=0, origin=NODE1/crmd/46, version=0.434.64)
Aug 27 08:47:22 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.64 2
Aug 27 08:47:22 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.434.65 (null)
Aug 27 08:47:22 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @num_updates=65
Aug 27 08:47:22 [1326] NODE2        cib:     info: cib_perform_op:    ++ /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources:  <lrm_resource id="SERVICE15" type="service15" class="systemd"/>
Aug 27 08:47:22 [1326] NODE2        cib:     info: cib_perform_op:    ++                                                                <lrm_rsc_op id="SERVICE15_last_0" operation_key="SERVICE15_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.14" transition-key="17:336:7:d10dd5e7-af4d-4bba-a226-516824f8f60e" transition-magic="-1:193;17:336:7:d10dd5e7-af4d-4bba-a226-516824f8f60e" exit-reason="" on_node="NODE1" call-id="-1" rc-code="193" op-status="-1" interval="0" last-run
Aug 27 08:47:22 [1326] NODE2        cib:     info: cib_perform_op:    ++                                                              </lrm_resource>
Aug 27 08:47:22 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section status: OK (rc=0, origin=NODE1/crmd/47, version=0.434.65)
Aug 27 08:47:22 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.65 2
Aug 27 08:47:22 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.434.66 (null)
Aug 27 08:47:22 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @num_updates=66
Aug 27 08:47:22 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='SERVICE14']/lrm_rsc_op[@id='SERVICE14_last_0']:  @operation_key=SERVICE14_start_0, @operation=start, @transition-key=55:336:0:d10dd5e7-af4d-4bba-a226-516824f8f60e, @transition-magic=-1:193;55:336:0:d10dd5e7-af4d-4bba-a226-516824f8f60e, @call-id=-1, @rc-code=193, @op-status=-1, @last-run=1598510864, @last-rc-change=1598510864, @exec-time=0
Aug 27 08:47:22 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section status: OK (rc=0, origin=NODE1/crmd/48, version=0.434.66)
Aug 27 08:47:22 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.66 2
Aug 27 08:47:22 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.434.67 (null)
Aug 27 08:47:22 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @num_updates=67
Aug 27 08:47:22 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='SERVICE15']/lrm_rsc_op[@id='SERVICE15_last_0']:  @transition-magic=0:7;17:336:7:d10dd5e7-af4d-4bba-a226-516824f8f60e, @call-id=73, @rc-code=7, @op-status=0, @exec-time=34
Aug 27 08:47:22 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section status: OK (rc=0, origin=NODE1/crmd/49, version=0.434.67)
Aug 27 08:47:22 [1331] NODE2       crmd:     info: match_graph_event: Action SERVICE15_monitor_0 (17) confirmed on NODE1 (rc=7)
Aug 27 08:47:22 [1331] NODE2       crmd:   notice: te_rsc_command:    Initiating start operation SERVICE15:1_start_0 on NODE1 | action 63
Aug 27 08:47:22 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.67 2
Aug 27 08:47:22 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.434.68 (null)
Aug 27 08:47:22 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @num_updates=68
Aug 27 08:47:22 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='SERVICE15']/lrm_rsc_op[@id='SERVICE15_last_0']:  @operation_key=SERVICE15_start_0, @operation=start, @transition-key=63:336:0:d10dd5e7-af4d-4bba-a226-516824f8f60e, @transition-magic=-1:193;63:336:0:d10dd5e7-af4d-4bba-a226-516824f8f60e, @call-id=-1, @rc-code=193, @op-status=-1, @exec-time=0
Aug 27 08:47:22 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section status: OK (rc=0, origin=NODE1/crmd/50, version=0.434.68)
Aug 27 08:47:22 [1328] NODE2       lrmd:     info: systemd_exec_result:       Call to start passed: /org/freedesktop/systemd1/job/42671
Aug 27 08:47:22 [1328] NODE2       lrmd:     info: systemd_exec_result:       Call to stop passed: /org/freedesktop/systemd1/job/42812
Aug 27 08:47:24 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.68 2
Aug 27 08:47:24 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.434.69 (null)
Aug 27 08:47:24 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @num_updates=69
Aug 27 08:47:24 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='SERVICE9']/lrm_rsc_op[@id='SERVICE9_last_0']:  @transition-magic=0:0;42:336:0:d10dd5e7-af4d-4bba-a226-516824f8f60e, @call-id=68, @rc-code=0, @op-status=0, @exec-time=2006
Aug 27 08:47:24 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section status: OK (rc=0, origin=NODE1/crmd/51, version=0.434.69)
Aug 27 08:47:24 [1331] NODE2       crmd:     info: match_graph_event: Action SERVICE9_stop_0 (42) confirmed on NODE1 (rc=0)
Aug 27 08:47:24 [1331] NODE2       crmd:   notice: process_lrm_event: Result of start operation for SERVICE1 on NODE2: 0 (ok) | call=429 key=SERVICE1_start_0 confirmed=true cib-update=1404
Aug 27 08:47:24 [1326] NODE2        cib:     info: cib_process_request:       Forwarding cib_modify operation for section status to all (origin=local/crmd/1404)
Aug 27 08:47:24 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.69 2
Aug 27 08:47:24 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.434.70 (null)
Aug 27 08:47:24 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @num_updates=70
Aug 27 08:47:24 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib/status/node_state[@id='2']/lrm[@id='2']/lrm_resources/lrm_resource[@id='SERVICE1']/lrm_rsc_op[@id='SERVICE1_last_0']:  @transition-magic=0:0;26:336:0:d10dd5e7-af4d-4bba-a226-516824f8f60e, @call-id=429, @rc-code=0, @op-status=0, @exec-time=2063
Aug 27 08:47:24 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section status: OK (rc=0, origin=NODE2/crmd/1404, version=0.434.70)
Aug 27 08:47:24 [1331] NODE2       crmd:     info: match_graph_event: Action SERVICE1_start_0 (26) confirmed on NODE2 (rc=0)
Aug 27 08:47:24 [1331] NODE2       crmd:   notice: te_rsc_command:    Initiating monitor operation SERVICE1_monitor_5000 locally on NODE2 | action 27
Aug 27 08:47:24 [1331] NODE2       crmd:     info: do_lrm_rsc_op:     Performing key=27:336:0:d10dd5e7-af4d-4bba-a226-516824f8f60e op=SERVICE1_monitor_5000
Aug 27 08:47:24 [1326] NODE2        cib:     info: cib_process_request:       Forwarding cib_modify operation for section status to all (origin=local/crmd/1405)
Aug 27 08:47:24 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.70 2
Aug 27 08:47:24 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.434.71 (null)
Aug 27 08:47:24 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @num_updates=71
Aug 27 08:47:24 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib/status/node_state[@id='2']/lrm[@id='2']/lrm_resources/lrm_resource[@id='SERVICE1']/lrm_rsc_op[@id='SERVICE1_monitor_5000']:  @crm-debug-origin=do_update_resource, @transition-key=27:336:0:d10dd5e7-af4d-4bba-a226-516824f8f60e, @transition-magic=-1:193;27:336:0:d10dd5e7-af4d-4bba-a226-516824f8f60e, @call-id=-1, @rc-code=193, @op-status=-1, @last-rc-change=1598510844, @exec-time=0, @queue-time=0
Aug 27 08:47:24 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section status: OK (rc=0, origin=NODE2/crmd/1405, version=0.434.71)
Aug 27 08:47:24 [1331] NODE2       crmd:     info: process_lrm_event: Result of monitor operation for SERVICE1 on NODE2: 0 (ok) | call=432 key=SERVICE1_monitor_5000 confirmed=false cib-update=1406
Aug 27 08:47:24 [1326] NODE2        cib:     info: cib_process_request:       Forwarding cib_modify operation for section status to all (origin=local/crmd/1406)
Aug 27 08:47:24 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.71 2
Aug 27 08:47:24 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.434.72 (null)
Aug 27 08:47:24 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @num_updates=72
Aug 27 08:47:24 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib/status/node_state[@id='2']/lrm[@id='2']/lrm_resources/lrm_resource[@id='SERVICE1']/lrm_rsc_op[@id='SERVICE1_monitor_5000']:  @transition-magic=0:0;27:336:0:d10dd5e7-af4d-4bba-a226-516824f8f60e, @call-id=432, @rc-code=0, @op-status=0, @exec-time=2
Aug 27 08:47:24 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section status: OK (rc=0, origin=NODE2/crmd/1406, version=0.434.72)
Aug 27 08:47:24 [1331] NODE2       crmd:     info: match_graph_event: Action SERVICE1_monitor_5000 (27) confirmed on NODE2 (rc=0)
Aug 27 08:47:24 [1331] NODE2       crmd:   notice: process_lrm_event: Result of stop operation for SERVICE9 on NODE2: 0 (ok) | call=431 key=SERVICE9_stop_0 confirmed=true cib-update=1407
Aug 27 08:47:24 [1326] NODE2        cib:     info: cib_process_request:       Forwarding cib_modify operation for section status to all (origin=local/crmd/1407)
Aug 27 08:47:24 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.72 2
Aug 27 08:47:24 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.434.73 (null)
Aug 27 08:47:24 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @num_updates=73
Aug 27 08:47:24 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib/status/node_state[@id='2']/lrm[@id='2']/lrm_resources/lrm_resource[@id='SERVICE9']/lrm_rsc_op[@id='SERVICE9_last_0']:  @transition-magic=0:0;43:336:0:d10dd5e7-af4d-4bba-a226-516824f8f60e, @call-id=431, @rc-code=0, @op-status=0, @exec-time=2194
Aug 27 08:47:24 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section status: OK (rc=0, origin=NODE2/crmd/1407, version=0.434.73)
Aug 27 08:47:24 [1331] NODE2       crmd:     info: match_graph_event: Action SERVICE9_stop_0 (43) confirmed on NODE2 (rc=0)
Aug 27 08:47:24 [1331] NODE2       crmd:   notice: te_rsc_command:    Initiating start operation SERVICE9_start_0 locally on NODE2 | action 44
Aug 27 08:47:24 [1331] NODE2       crmd:     info: do_lrm_rsc_op:     Performing key=44:336:0:d10dd5e7-af4d-4bba-a226-516824f8f60e op=SERVICE9_start_0
Aug 27 08:47:24 [1328] NODE2       lrmd:     info: log_execute:       executing - rsc:SERVICE9 action:start call_id:433
Aug 27 08:47:24 [1326] NODE2        cib:     info: cib_process_request:       Forwarding cib_modify operation for section status to all (origin=local/crmd/1408)
Aug 27 08:47:24 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.73 2
Aug 27 08:47:24 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.434.74 (null)
Aug 27 08:47:24 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @num_updates=74
Aug 27 08:47:24 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib/status/node_state[@id='2']/lrm[@id='2']/lrm_resources/lrm_resource[@id='SERVICE9']/lrm_rsc_op[@id='SERVICE9_last_0']:  @operation_key=SERVICE9_start_0, @operation=start, @transition-key=44:336:0:d10dd5e7-af4d-4bba-a226-516824f8f60e, @transition-magic=-1:193;44:336:0:d10dd5e7-af4d-4bba-a226-516824f8f60e, @call-id=-1, @rc-code=193, @op-status=-1, @last-run=1598510844, @last-rc-change=1598510844, @exec-time=0
Aug 27 08:47:24 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section status: OK (rc=0, origin=NODE2/crmd/1408, version=0.434.74)
Aug 27 08:47:24 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.74 2
Aug 27 08:47:24 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.434.75 (null)
Aug 27 08:47:24 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @num_updates=75
Aug 27 08:47:24 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='SERVICE14']/lrm_rsc_op[@id='SERVICE14_last_0']:  @transition-magic=0:0;55:336:0:d10dd5e7-af4d-4bba-a226-516824f8f60e, @call-id=74, @rc-code=0, @op-status=0, @exec-time=2209
Aug 27 08:47:24 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section status: OK (rc=0, origin=NODE1/crmd/52, version=0.434.75)
Aug 27 08:47:24 [1331] NODE2       crmd:     info: match_graph_event: Action SERVICE14_start_0 (55) confirmed on NODE1 (rc=0)
Aug 27 08:47:24 [1331] NODE2       crmd:   notice: te_rsc_command:    Initiating monitor operation SERVICE14:1_monitor_10000 on NODE1 | action 56
Aug 27 08:47:24 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.75 2
Aug 27 08:47:24 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.434.76 (null)
Aug 27 08:47:24 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @num_updates=76
Aug 27 08:47:24 [1326] NODE2        cib:     info: cib_perform_op:    ++ /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='SERVICE14']:  <lrm_rsc_op id="SERVICE14_monitor_10000" operation_key="SERVICE14_monitor_10000" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.14" transition-key="56:336:0:d10dd5e7-af4d-4bba-a226-516824f8f60e" transition-magic="-1:193;56:336:0:d10dd5e7-af4d-4bba-a226-516824f8f60e" exit-reason="" on_node="NODE1" c
Aug 27 08:47:24 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section status: OK (rc=0, origin=NODE1/crmd/53, version=0.434.76)
Aug 27 08:47:24 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.76 2
Aug 27 08:47:24 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.434.77 (null)
Aug 27 08:47:24 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @num_updates=77
Aug 27 08:47:24 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='SERVICE14']/lrm_rsc_op[@id='SERVICE14_monitor_10000']:  @transition-magic=0:0;56:336:0:d10dd5e7-af4d-4bba-a226-516824f8f60e, @call-id=76, @rc-code=0, @op-status=0, @exec-time=2
Aug 27 08:47:24 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section status: OK (rc=0, origin=NODE1/crmd/54, version=0.434.77)
Aug 27 08:47:24 [1331] NODE2       crmd:     info: match_graph_event: Action SERVICE14_monitor_10000 (56) confirmed on NODE1 (rc=0)
Aug 27 08:47:24 [1328] NODE2       lrmd:     info: systemd_exec_result:       Call to start passed: /org/freedesktop/systemd1/job/42813
Aug 27 08:47:24 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.77 2
Aug 27 08:47:24 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.434.78 (null)
Aug 27 08:47:24 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @num_updates=78
Aug 27 08:47:24 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='SERVICE15']/lrm_rsc_op[@id='SERVICE15_last_0']:  @transition-magic=0:0;63:336:0:d10dd5e7-af4d-4bba-a226-516824f8f60e, @call-id=75, @rc-code=0, @op-status=0, @exec-time=2316
Aug 27 08:47:24 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section status: OK (rc=0, origin=NODE1/crmd/55, version=0.434.78)
Aug 27 08:47:24 [1331] NODE2       crmd:     info: match_graph_event: Action SERVICE15_start_0 (63) confirmed on NODE1 (rc=0)
Aug 27 08:47:24 [1331] NODE2       crmd:   notice: te_rsc_command:    Initiating monitor operation SERVICE15:1_monitor_10000 on NODE1 | action 64
Aug 27 08:47:24 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.78 2
Aug 27 08:47:24 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.434.79 (null)
Aug 27 08:47:24 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @num_updates=79
Aug 27 08:47:24 [1326] NODE2        cib:     info: cib_perform_op:    ++ /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='SERVICE15']:  <lrm_rsc_op id="SERVICE15_monitor_10000" operation_key="SERVICE15_monitor_10000" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.14" transition-key="64:336:0:d10dd5e7-af4d-4bba-a226-516824f8f60e" transition-magic="-1:193;64:336:0:d10dd5e7-af4d-4bba-a226-516824f8f60e" exit-reason="" on_node="NODE1" call-id="-1" rc-code="193" o
Aug 27 08:47:24 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section status: OK (rc=0, origin=NODE1/crmd/56, version=0.434.79)
Aug 27 08:47:24 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.79 2
Aug 27 08:47:24 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.434.80 (null)
Aug 27 08:47:24 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @num_updates=80
Aug 27 08:47:24 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='SERVICE15']/lrm_rsc_op[@id='SERVICE15_monitor_10000']:  @transition-magic=0:0;64:336:0:d10dd5e7-af4d-4bba-a226-516824f8f60e, @call-id=77, @rc-code=0, @op-status=0, @exec-time=1
Aug 27 08:47:24 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section status: OK (rc=0, origin=NODE1/crmd/57, version=0.434.80)
Aug 27 08:47:24 [1331] NODE2       crmd:     info: match_graph_event: Action SERVICE15_monitor_10000 (64) confirmed on NODE1 (rc=0)
Aug 27 08:47:26 [1331] NODE2       crmd:   notice: process_lrm_event: Result of start operation for SERVICE9 on NODE2: 0 (ok) | call=433 key=SERVICE9_start_0 confirmed=true cib-update=1409
Aug 27 08:47:26 [1326] NODE2        cib:     info: cib_process_request:       Forwarding cib_modify operation for section status to all (origin=local/crmd/1409)
Aug 27 08:47:26 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.80 2
Aug 27 08:47:26 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.434.81 (null)
Aug 27 08:47:26 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @num_updates=81
Aug 27 08:47:26 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib/status/node_state[@id='2']/lrm[@id='2']/lrm_resources/lrm_resource[@id='SERVICE9']/lrm_rsc_op[@id='SERVICE9_last_0']:  @transition-magic=0:0;44:336:0:d10dd5e7-af4d-4bba-a226-516824f8f60e, @call-id=433, @rc-code=0, @op-status=0, @exec-time=2079, @queue-time=1
Aug 27 08:47:26 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section status: OK (rc=0, origin=NODE2/crmd/1409, version=0.434.81)
Aug 27 08:47:26 [1331] NODE2       crmd:     info: match_graph_event: Action SERVICE9_start_0 (44) confirmed on NODE2 (rc=0)
Aug 27 08:47:26 [1331] NODE2       crmd:   notice: te_rsc_command:    Initiating monitor operation SERVICE9_monitor_5000 locally on NODE2 | action 2
Aug 27 08:47:26 [1331] NODE2       crmd:     info: do_lrm_rsc_op:     Performing key=2:336:0:d10dd5e7-af4d-4bba-a226-516824f8f60e op=SERVICE9_monitor_5000
Aug 27 08:47:26 [1326] NODE2        cib:     info: cib_process_request:       Forwarding cib_modify operation for section status to all (origin=local/crmd/1410)
Aug 27 08:47:26 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.81 2
Aug 27 08:47:26 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.434.82 (null)
Aug 27 08:47:26 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @num_updates=82
Aug 27 08:47:26 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib/status/node_state[@id='2']/lrm[@id='2']/lrm_resources/lrm_resource[@id='SERVICE9']/lrm_rsc_op[@id='SERVICE9_monitor_5000']:  @crm-debug-origin=do_update_resource, @transition-key=2:336:0:d10dd5e7-af4d-4bba-a226-516824f8f60e, @transition-magic=-1:193;2:336:0:d10dd5e7-af4d-4bba-a226-516824f8f60e, @call-id=-1, @rc-code=193, @op-status=-1, @last-rc-change=1598510846, @exec-time=0
Aug 27 08:47:26 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section status: OK (rc=0, origin=NODE2/crmd/1410, version=0.434.82)
Aug 27 08:47:26 [1331] NODE2       crmd:     info: process_lrm_event: Result of monitor operation for SERVICE9 on NODE2: 0 (ok) | call=434 key=SERVICE9_monitor_5000 confirmed=false cib-update=1411
Aug 27 08:47:26 [1326] NODE2        cib:     info: cib_process_request:       Forwarding cib_modify operation for section status to all (origin=local/crmd/1411)
Aug 27 08:47:26 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.82 2
Aug 27 08:47:26 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.434.83 (null)
Aug 27 08:47:26 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @num_updates=83
Aug 27 08:47:26 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib/status/node_state[@id='2']/lrm[@id='2']/lrm_resources/lrm_resource[@id='SERVICE9']/lrm_rsc_op[@id='SERVICE9_monitor_5000']:  @transition-magic=0:0;2:336:0:d10dd5e7-af4d-4bba-a226-516824f8f60e, @call-id=434, @rc-code=0, @op-status=0, @exec-time=5
Aug 27 08:47:26 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section status: OK (rc=0, origin=NODE2/crmd/1411, version=0.434.83)
Aug 27 08:47:26 [1331] NODE2       crmd:     info: match_graph_event: Action SERVICE9_monitor_5000 (2) confirmed on NODE2 (rc=0)
Aug 27 08:47:26 [1331] NODE2       crmd:   notice: run_graph: Transition 336 (Complete=15, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-error-173.bz2): Complete
Aug 27 08:47:26 [1331] NODE2       crmd:     info: do_log:    Input I_TE_SUCCESS received in state S_TRANSITION_ENGINE from notify_crmd
Aug 27 08:47:26 [1331] NODE2       crmd:   notice: do_state_transition:       State transition S_TRANSITION_ENGINE -> S_IDLE | input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd
Aug 27 08:47:27 [1329] NODE2      attrd:     info: attrd_peer_update: Setting last-failure-SERVICE1#stop_0[NODE1]: 1598510842 -> (null) from NODE2
Aug 27 08:47:27 [1329] NODE2      attrd:     info: write_attribute:   Sent CIB request 174 with 2 changes for last-failure-SERVICE1#stop_0 (id n/a, set n/a)
Aug 27 08:47:27 [1326] NODE2        cib:     info: cib_process_request:       Forwarding cib_modify operation for section status to all (origin=local/attrd/174)
Aug 27 08:47:27 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.83 2
Aug 27 08:47:27 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.434.84 b548502a016f55db1c8a902a6483c308
Aug 27 08:47:27 [1326] NODE2        cib:     info: cib_perform_op:    -- /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='SERVICE1']
Aug 27 08:47:27 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @num_updates=84
Aug 27 08:47:27 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_delete operation for section //node_state[@uname='NODE1'] /lrm/lrm_resources/lrm_resource[@id='SERVICE1']: OK (rc=0, origin=NODE1/crmd/58, version=0.434.83)
Aug 27 08:47:27 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.83 2
Aug 27 08:47:27 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.434.84 (null)
Aug 27 08:47:27 [1326] NODE2        cib:     info: cib_perform_op:    -- /cib/status/node_state[@id='1']/transient_attributes[@id='1']/instance_attributes[@id='status-1']/nvpair[@id='status-1-last-failure-SERVICE1.stop_0']
Aug 27 08:47:27 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @num_updates=84
Aug 27 08:47:27 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section status: OK (rc=0, origin=NODE2/attrd/174, version=0.434.84)
Aug 27 08:47:27 [1329] NODE2      attrd:     info: attrd_cib_callback:        CIB update 174 result for last-failure-SERVICE1#stop_0: OK | rc=0
Aug 27 08:47:27 [1329] NODE2      attrd:     info: attrd_cib_callback:        * last-failure-SERVICE1#stop_0[NODE2]=(null)
Aug 27 08:47:27 [1329] NODE2      attrd:     info: attrd_cib_callback:        * last-failure-SERVICE1#stop_0[NODE1]=(null)
Aug 27 08:47:27 [1331] NODE2       crmd:     info: abort_transition_graph:    Transition aborted by deletion of nvpair[@id='status-1-last-failure-SERVICE1.stop_0']: Transient attribute change | cib=0.434.84 source=abort_unless_down:370 path=/cib/status/node_state[@id='1']/transient_attributes[@id='1']/instance_attributes[@id='status-1']/nvpair[@id='status-1-last-failure-SERVICE1.stop_0'] complete=true
Aug 27 08:47:27 [1331] NODE2       crmd:   notice: do_state_transition:       State transition S_IDLE -> S_POLICY_ENGINE | input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph
Aug 27 08:47:27 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.84 2
Aug 27 08:47:27 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.434.85 (null)
Aug 27 08:47:27 [1326] NODE2        cib:     info: cib_perform_op:    -- /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='SERVICE1']
Aug 27 08:47:27 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @num_updates=85
Aug 27 08:47:27 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_delete operation for section //node_state[@uname='NODE1'] /lrm/lrm_resources/lrm_resource[@id='SERVICE1']: OK (rc=0, origin=NODE1/crmd/59, version=0.434.85)
Aug 27 08:47:27 [1326] NODE2        cib:     info: cib_perform_op:    Diff: --- 0.434.85 2
Aug 27 08:47:27 [1326] NODE2        cib:     info: cib_perform_op:    Diff: +++ 0.435.0 (null)
Aug 27 08:47:27 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib:  @epoch=435, @num_updates=0
Aug 27 08:47:27 [1326] NODE2        cib:     info: cib_perform_op:    +  /cib/configuration/crm_config/cluster_property_set[@id='cib-bootstrap-options']/nvpair[@id='cib-bootstrap-options-last-lrm-refresh']:  @value=1598510869
Aug 27 08:47:27 [1326] NODE2        cib:     info: cib_process_request:       Completed cib_modify operation for section crm_config: OK (rc=0, origin=NODE1/crmd/61, version=0.435.0)
Aug 27 08:47:27 [1331] NODE2       crmd:     info: abort_transition_graph:    Transition aborted by deletion of lrm_resource[@id='SERVICE1']: Resource state removal | cib=0.434.85 source=abort_unless_down:370 path=/cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='SERVICE1'] complete=true
Aug 27 08:47:27 [1331] NODE2       crmd:     info: abort_transition_graph:    Transition aborted by cib-bootstrap-options-last-lrm-refresh doing modify last-lrm-refresh=1598510869: Configuration change | cib=0.435.0 source=te_update_diff_v2:522 path=/cib/configuration/crm_config/cluster_property_set[@id='cib-bootstrap-options']/nvpair[@id='cib-bootstrap-options-last-lrm-refresh'] complete=true
Aug 27 08:47:27 [1330] NODE2    pengine:   notice: unpack_config:     On loss of CCM Quorum: Ignore
Aug 27 08:47:27 [1330] NODE2    pengine:     info: determine_online_status:   Node NODE1 is online
Aug 27 08:47:27 [1330] NODE2    pengine:     info: determine_online_status:   Node NODE2 is online
Aug 27 08:47:27 [1330] NODE2    pengine:  warning: unpack_rsc_op:     Pretending the failure of SERVICE1_stop_0 (rc=198) on NODE1 succeeded
Aug 27 08:47:27 [1330] NODE2    pengine:  warning: unpack_rsc_op:     Pretending the failure of SERVICE1_stop_0 (rc=198) on NODE1 succeeded
Aug 27 08:47:27 [1330] NODE2    pengine:     info: determine_op_status:       Operation monitor found resource SERVICE9 active on NODE1
Aug 27 08:47:27 [1330] NODE2    pengine:     info: determine_op_status:       Operation monitor found resource SERVICE4 active on NODE2
Aug 27 08:47:27 [1330] NODE2    pengine:     info: determine_op_status:       Operation monitor found resource SERVICE1 active on NODE2
Aug 27 08:47:27 [1330] NODE2    pengine:     info: unpack_node_loop:  Node 1 is already processed
Aug 27 08:47:27 [1330] NODE2    pengine:     info: unpack_node_loop:  Node 2 is already processed
Aug 27 08:47:27 [1330] NODE2    pengine:     info: unpack_node_loop:  Node 1 is already processed
Aug 27 08:47:27 [1330] NODE2    pengine:     info: unpack_node_loop:  Node 2 is already processed
Aug 27 08:47:27 [1330] NODE2    pengine:     info: group_print:        Resource Group: IPV
Aug 27 08:47:27 [1330] NODE2    pengine:     info: common_print:           VIRTUALIP  (ocf::heartbeat:IPaddr2):       Started NODE2
Aug 27 08:47:27 [1330] NODE2    pengine:     info: common_print:           SOURCEIP   (ocf::heartbeat:IPsrcaddr):      Started NODE2
Aug 27 08:47:27 [1330] NODE2    pengine:     info: common_print:      SERVICE1        (systemd:service1):     Started NODE2 (failure ignored)
Aug 27 08:47:27 [1330] NODE2    pengine:     info: common_print:      SERVICE2    (systemd:service2): Started NODE2
Aug 27 08:47:27 [1330] NODE2    pengine:     info: common_print:      SERVICE3       (systemd:service3):  Started NODE2
Aug 27 08:47:27 [1330] NODE2    pengine:     info: common_print:      SERVICE4       (systemd:service4):  Started NODE2
Aug 27 08:47:27 [1330] NODE2    pengine:     info: common_print:      SERVICE5      (systemd:service5): Started NODE2
Aug 27 08:47:27 [1330] NODE2    pengine:     info: common_print:      SERVICE6 (systemd:service6):    Started NODE2
Aug 27 08:47:27 [1330] NODE2    pengine:     info: common_print:      SERVICE7  (systemd:service7):      Started NODE2
Aug 27 08:47:27 [1330] NODE2    pengine:     info: common_print:      SERVICE8   (systemd:service8):  Started NODE2
Aug 27 08:47:27 [1330] NODE2    pengine:     info: common_print:      SERVICE9        (systemd:service9):   Started NODE2
Aug 27 08:47:27 [1330] NODE2    pengine:     info: common_print:      SERVICE10        (systemd:service10):        Started NODE2
Aug 27 08:47:27 [1330] NODE2    pengine:     info: common_print:      SERVICE11       (systemd:service11):       Started NODE2
Aug 27 08:47:27 [1330] NODE2    pengine:     info: common_print:      SERVICE12   (systemd:service12):       Started NODE2
Aug 27 08:47:27 [1330] NODE2    pengine:     info: common_print:      SERVICE13  (systemd:service13):       Started NODE2
Aug 27 08:47:27 [1330] NODE2    pengine:     info: clone_print:        Clone Set: SERVICE14-clone [SERVICE14]
Aug 27 08:47:27 [1330] NODE2    pengine:     info: short_print:            Started: [ NODE1 NODE2 ]
Aug 27 08:47:27 [1330] NODE2    pengine:     info: clone_print:        Clone Set: SERVICE15-clone [SERVICE15]
Aug 27 08:47:27 [1330] NODE2    pengine:     info: short_print:            Started: [ NODE1 NODE2 ]
Aug 27 08:47:27 [1330] NODE2    pengine:     info: LogActions:        Leave   VIRTUALIP       (Started NODE2)
Aug 27 08:47:27 [1330] NODE2    pengine:     info: LogActions:        Leave   SOURCEIP        (Started NODE2)
Aug 27 08:47:27 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE1        (Started NODE2)
Aug 27 08:47:27 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE2    (Started NODE2)
Aug 27 08:47:27 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE3       (Started NODE2)
Aug 27 08:47:27 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE4       (Started NODE2)
Aug 27 08:47:27 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE5      (Started NODE2)
Aug 27 08:47:27 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE6 (Started NODE2)
Aug 27 08:47:27 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE7  (Started NODE2)
Aug 27 08:47:27 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE8   (Started NODE2)
Aug 27 08:47:27 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE9        (Started NODE2)
Aug 27 08:47:27 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE10        (Started NODE2)
Aug 27 08:47:27 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE11       (Started NODE2)
Aug 27 08:47:27 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE12   (Started NODE2)
Aug 27 08:47:27 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE13  (Started NODE2)
Aug 27 08:47:27 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE14:0        (Started NODE1)
Aug 27 08:47:27 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE14:1        (Started NODE2)
Aug 27 08:47:27 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE15:0 (Started NODE1)
Aug 27 08:47:27 [1330] NODE2    pengine:     info: LogActions:        Leave   SERVICE15:1 (Started NODE2)
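
The transition traced above ends with "run_graph: Transition 336 ... Source=/var/lib/pacemaker/pengine/pe-error-173.bz2", so the scheduler input that produced these start/stop operations is saved on NODE2. As a minimal sketch (assuming the standard Pacemaker CLI tools are installed on the node), crm_simulate can replay that saved CIB snapshot offline and print the placement scores and the actions the scheduler would take, which should show why SERVICE1 and SERVICE9 were scheduled to be stopped and started again on NODE2:

    # crm_simulate --simulate --show-scores --xml-file=/var/lib/pacemaker/pengine/pe-error-173.bz2

The pe-error prefix (rather than pe-input) indicates the scheduler hit an error while computing this transition, so the replay output is a reasonable starting point for working out why the resources were started and stopped on NODE2 when it rejoined.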

