<div dir="ltr">Sure thing. Just to highlight the differences from before: current constraints config, also the mail-services group is growing with systemd resources.<div><br></div><div>What happened: mail2 was running all resources, then I killed the amavisd master process.</div><div><div><br></div><div>Best regards,</div><div>Lorand</div><div><br></div><div><div>Location Constraints:<br></div><div>Ordering Constraints:</div><div>  promote mail-clone then start fs-services (kind:Mandatory)</div><div>  promote spool-clone then start fs-services (kind:Mandatory)</div><div>  start network-services then start fs-services (kind:Mandatory)</div><div>  start fs-services then start mail-services (kind:Mandatory)</div><div>Colocation Constraints:</div><div>  fs-services with spool-clone (score:INFINITY) (rsc-role:Started) (with-rsc-role:Master)</div><div>  fs-services with mail-clone (score:INFINITY) (rsc-role:Started) (with-rsc-role:Master)</div><div>  mail-services with fs-services (score:INFINITY)</div><div>  network-services with mail-services (score:INFINITY)</div><div>  </div><div>Group: mail-services</div><div>  Resource: amavisd (class=systemd type=amavisd)</div><div>   Operations: monitor interval=60s (amavisd-monitor-interval-60s)</div><div>  Resource: spamassassin (class=systemd type=spamassassin)</div><div>   Operations: monitor interval=60s (spamassassin-monitor-interval-60s)</div><div>  Resource: clamd (class=systemd type=clamd@amavisd)</div><div>   Operations: monitor interval=60s (clamd-monitor-interval-60s)</div><div><br></div><div><br></div><div><br></div><div>Cluster name: mailcluster<br></div><div>Last updated: Fri Mar 18 10:43:57 2016          Last change: Fri Mar 18 10:40:28 2016 by hacluster via crmd on mail1</div><div>Stack: corosync</div><div>Current DC: mail2 (version 1.1.13-10.el7_2.2-44eb2dd) - partition with quorum</div><div>2 nodes and 10 resources configured</div><div><br></div><div>Online: [ mail1 mail2 ]</div><div><br></div><div>Full list of resources:</div><div><br></div><div> Resource Group: network-services</div><div>     virtualip-1        (ocf::heartbeat:IPaddr2):       Stopped</div><div> Master/Slave Set: spool-clone [spool]</div><div>     Masters: [ mail2 ]</div><div>     Slaves: [ mail1 ]</div><div> Master/Slave Set: mail-clone [mail]</div><div>     Masters: [ mail2 ]</div><div>     Slaves: [ mail1 ]</div><div> Resource Group: fs-services</div><div>     fs-spool   (ocf::heartbeat:Filesystem):    Stopped</div><div>     fs-mail    (ocf::heartbeat:Filesystem):    Stopped</div><div> Resource Group: mail-services</div><div>     amavisd    (systemd:amavisd):      Stopped</div><div>     spamassassin       (systemd:spamassassin): Stopped</div><div>     clamd      (systemd:clamd@amavisd):        Stopped</div><div><br></div><div>Failed Actions:</div><div>* amavisd_monitor_60000 on mail2 'not running' (7): call=2499, status=complete, exitreason='none',</div><div>    last-rc-change='Fri Mar 18 10:42:29 2016', queued=0ms, exec=0ms</div><div><br></div><div><br></div><div>PCSD Status:</div><div>  mail1: Online</div><div>  mail2: Online</div><div><br></div><div>Daemon Status:</div><div>  corosync: active/enabled</div><div>  pacemaker: active/enabled</div><div>  pcsd: active/enabled</div><div><br></div><div><br></div><div><br></div><div><cib crm_feature_set="3.0.10" validate-with="pacemaker-2.3" epoch="277" num_updates="22" admin_epoch="0" cib-last-written="Fri Mar 18 10:40:28 2016" update-origin="mail1" update-client="crmd" update-user="hacluster" have-quorum="1" 
dc-uuid="2"><br></div><div>  <configuration></div><div>    <crm_config></div><div>      <cluster_property_set id="cib-bootstrap-options"></div><div>        <nvpair id="cib-bootstrap-options-have-watchdog" name="have-watchdog" value="false"/></div><div>        <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.1.13-10.el7_2.2-44eb2dd"/></div><div>        <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="corosync"/></div><div>        <nvpair id="cib-bootstrap-options-cluster-name" name="cluster-name" value="mailcluster"/></div><div>        <nvpair id="cib-bootstrap-options-stonith-enabled" name="stonith-enabled" value="false"/></div><div>        <nvpair id="cib-bootstrap-options-pe-error-series-max" name="pe-error-series-max" value="1024"/></div><div>        <nvpair id="cib-bootstrap-options-pe-warn-series-max" name="pe-warn-series-max" value="1024"/></div><div>        <nvpair id="cib-bootstrap-options-pe-input-series-max" name="pe-input-series-max" value="1024"/></div><div>        <nvpair id="cib-bootstrap-options-no-quorum-policy" name="no-quorum-policy" value="ignore"/></div><div>        <nvpair id="cib-bootstrap-options-cluster-recheck-interval" name="cluster-recheck-interval" value="5min"/></div><div>        <nvpair id="cib-bootstrap-options-last-lrm-refresh" name="last-lrm-refresh" value="1458294028"/></div><div>        <nvpair id="cib-bootstrap-options-default-resource-stickiness" name="default-resource-stickiness" value="infinity"/></div><div>      </cluster_property_set></div><div>    </crm_config></div><div>    <nodes></div><div>      <node id="1" uname="mail1"></div><div>        <instance_attributes id="nodes-1"/></div><div>      </node></div><div>      <node id="2" uname="mail2"></div><div>        <instance_attributes id="nodes-2"/></div><div>      </node></div><div>    </nodes></div><div>    <resources></div><div>      <group id="network-services"></div><div>        <primitive class="ocf" id="virtualip-1" provider="heartbeat" type="IPaddr2"></div><div>          <instance_attributes id="virtualip-1-instance_attributes"></div><div>            <nvpair id="virtualip-1-instance_attributes-ip" name="ip" value="10.20.64.10"/></div><div>            <nvpair id="virtualip-1-instance_attributes-cidr_netmask" name="cidr_netmask" value="24"/></div><div>            <nvpair id="virtualip-1-instance_attributes-nic" name="nic" value="lan0"/></div><div>          </instance_attributes></div><div>          <operations></div><div>            <op id="virtualip-1-start-interval-0s" interval="0s" name="start" timeout="20s"/></div><div>            <op id="virtualip-1-stop-interval-0s" interval="0s" name="stop" timeout="20s"/></div><div>            <op id="virtualip-1-monitor-interval-30s" interval="30s" name="monitor"/></div><div>          </operations></div><div>        </primitive></div><div>      </group></div><div>      <master id="spool-clone"></div><div>        <primitive class="ocf" id="spool" provider="linbit" type="drbd"></div><div>          <instance_attributes id="spool-instance_attributes"></div><div>            <nvpair id="spool-instance_attributes-drbd_resource" name="drbd_resource" value="spool"/></div><div>          </instance_attributes></div><div>          <operations></div><div>            <op id="spool-start-interval-0s" interval="0s" name="start" timeout="240"/></div><div>            <op id="spool-promote-interval-0s" interval="0s" name="promote" timeout="90"/></div><div>            <op id="spool-demote-interval-0s" 
interval="0s" name="demote" timeout="90"/></div><div>            <op id="spool-stop-interval-0s" interval="0s" name="stop" timeout="100"/></div><div>            <op id="spool-monitor-interval-10s" interval="10s" name="monitor"/></div><div>          </operations></div><div>        </primitive></div><div>        <meta_attributes id="spool-clone-meta_attributes"></div><div>          <nvpair id="spool-clone-meta_attributes-master-max" name="master-max" value="1"/></div><div>          <nvpair id="spool-clone-meta_attributes-master-node-max" name="master-node-max" value="1"/></div><div>          <nvpair id="spool-clone-meta_attributes-clone-max" name="clone-max" value="2"/></div><div>          <nvpair id="spool-clone-meta_attributes-clone-node-max" name="clone-node-max" value="1"/></div><div>          <nvpair id="spool-clone-meta_attributes-notify" name="notify" value="true"/></div><div>        </meta_attributes></div><div>      </master></div><div>      <master id="mail-clone"></div><div>        <primitive class="ocf" id="mail" provider="linbit" type="drbd"></div><div>          <instance_attributes id="mail-instance_attributes"></div><div>            <nvpair id="mail-instance_attributes-drbd_resource" name="drbd_resource" value="mail"/></div><div>          </instance_attributes></div><div>          <operations></div><div>            <op id="mail-start-interval-0s" interval="0s" name="start" timeout="240"/></div><div>            <op id="mail-promote-interval-0s" interval="0s" name="promote" timeout="90"/></div><div>            <op id="mail-demote-interval-0s" interval="0s" name="demote" timeout="90"/></div><div>            <op id="mail-stop-interval-0s" interval="0s" name="stop" timeout="100"/></div><div>            <op id="mail-monitor-interval-10s" interval="10s" name="monitor"/></div><div>          </operations></div><div>        </primitive></div><div>        <meta_attributes id="mail-clone-meta_attributes"></div><div>          <nvpair id="mail-clone-meta_attributes-master-max" name="master-max" value="1"/></div><div>          <nvpair id="mail-clone-meta_attributes-master-node-max" name="master-node-max" value="1"/></div><div>          <nvpair id="mail-clone-meta_attributes-clone-max" name="clone-max" value="2"/></div><div>          <nvpair id="mail-clone-meta_attributes-clone-node-max" name="clone-node-max" value="1"/></div><div>          <nvpair id="mail-clone-meta_attributes-notify" name="notify" value="true"/></div><div>        </meta_attributes></div><div>      </master></div><div>      <group id="fs-services"></div><div>        <primitive class="ocf" id="fs-spool" provider="heartbeat" type="Filesystem"></div><div>          <instance_attributes id="fs-spool-instance_attributes"></div><div>            <nvpair id="fs-spool-instance_attributes-device" name="device" value="/dev/drbd0"/></div><div>            <nvpair id="fs-spool-instance_attributes-directory" name="directory" value="/var/spool/postfix"/></div><div>            <nvpair id="fs-spool-instance_attributes-fstype" name="fstype" value="ext4"/></div><div>            <nvpair id="fs-spool-instance_attributes-options" name="options" value="nodev,nosuid,noexec"/></div><div>          </instance_attributes></div><div>          <operations></div><div>            <op id="fs-spool-start-interval-0s" interval="0s" name="start" timeout="60"/></div><div>            <op id="fs-spool-stop-interval-0s" interval="0s" name="stop" timeout="60"/></div><div>            <op id="fs-spool-monitor-interval-20" interval="20" name="monitor" 
timeout="40"/></div><div>          </operations></div><div>        </primitive></div><div>        <primitive class="ocf" id="fs-mail" provider="heartbeat" type="Filesystem"></div><div>          <instance_attributes id="fs-mail-instance_attributes"></div><div>            <nvpair id="fs-mail-instance_attributes-device" name="device" value="/dev/drbd1"/></div><div>            <nvpair id="fs-mail-instance_attributes-directory" name="directory" value="/var/spool/mail"/></div><div>            <nvpair id="fs-mail-instance_attributes-fstype" name="fstype" value="ext4"/></div><div>            <nvpair id="fs-mail-instance_attributes-options" name="options" value="nodev,nosuid,noexec"/></div><div>          </instance_attributes></div><div>          <operations></div><div>            <op id="fs-mail-start-interval-0s" interval="0s" name="start" timeout="60"/></div><div>            <op id="fs-mail-stop-interval-0s" interval="0s" name="stop" timeout="60"/></div><div>            <op id="fs-mail-monitor-interval-20" interval="20" name="monitor" timeout="40"/></div><div>          </operations></div><div>        </primitive></div><div>      </group></div><div>      <group id="mail-services"></div><div>        <primitive class="systemd" id="amavisd" type="amavisd"></div><div>          <instance_attributes id="amavisd-instance_attributes"/></div><div>          <operations></div><div>            <op id="amavisd-monitor-interval-60s" interval="60s" name="monitor"/></div><div>          </operations></div><div>        </primitive></div><div>        <primitive class="systemd" id="spamassassin" type="spamassassin"></div><div>          <instance_attributes id="spamassassin-instance_attributes"/></div><div>          <operations></div><div>            <op id="spamassassin-monitor-interval-60s" interval="60s" name="monitor"/></div><div>          </operations></div><div>        </primitive></div><div>        <primitive class="systemd" id="clamd" type="clamd@amavisd"></div><div>          <instance_attributes id="clamd-instance_attributes"/></div><div>          <operations></div><div>            <op id="clamd-monitor-interval-60s" interval="60s" name="monitor"/></div><div>          </operations></div><div>        </primitive></div><div>      </group></div><div>    </resources></div><div>    <constraints></div><div>      <rsc_order first="mail-clone" first-action="promote" id="order-mail-clone-fs-services-mandatory" then="fs-services" then-action="start"/></div><div>      <rsc_order first="spool-clone" first-action="promote" id="order-spool-clone-fs-services-mandatory" then="fs-services" then-action="start"/></div><div>      <rsc_order first="network-services" first-action="start" id="order-network-services-fs-services-mandatory" then="fs-services" then-action="start"/></div><div>      <rsc_order first="fs-services" first-action="start" id="order-fs-services-mail-services-mandatory" then="mail-services" then-action="start"/></div><div>      <rsc_colocation id="colocation-fs-services-spool-clone-INFINITY" rsc="fs-services" rsc-role="Started" score="INFINITY" with-rsc="spool-clone" with-rsc-role="Master"/></div><div>      <rsc_colocation id="colocation-fs-services-mail-clone-INFINITY" rsc="fs-services" rsc-role="Started" score="INFINITY" with-rsc="mail-clone" with-rsc-role="Master"/></div><div>      <rsc_colocation id="colocation-mail-services-fs-services-INFINITY" rsc="mail-services" score="INFINITY" with-rsc="fs-services"/></div><div>      <rsc_colocation id="colocation-network-services-mail-services-INFINITY" 
rsc="network-services" score="INFINITY" with-rsc="mail-services"/></div><div>    </constraints></div><div>    <op_defaults></div><div>      <meta_attributes id="op_defaults-options"></div><div>        <nvpair id="op_defaults-options-on-fail" name="on-fail" value="restart"/></div><div>      </meta_attributes></div><div>    </op_defaults></div><div>    <rsc_defaults></div><div>      <meta_attributes id="rsc_defaults-options"></div><div>        <nvpair id="rsc_defaults-options-migration-threshold" name="migration-threshold" value="1"/></div><div>      </meta_attributes></div><div>    </rsc_defaults></div><div>  </configuration></div><div>  <status></div><div>    <node_state id="1" uname="mail1" in_ccm="true" crmd="online" crm-debug-origin="do_update_resource" join="member" expected="member"></div><div>      <transient_attributes id="1"></div><div>        <instance_attributes id="status-1"></div><div>          <nvpair id="status-1-shutdown" name="shutdown" value="0"/></div><div>          <nvpair id="status-1-probe_complete" name="probe_complete" value="true"/></div><div>          <nvpair id="status-1-last-failure-fs-mail" name="last-failure-fs-mail" value="1458145164"/></div><div>          <nvpair id="status-1-last-failure-amavisd" name="last-failure-amavisd" value="1458144572"/></div><div>          <nvpair id="status-1-master-spool" name="master-spool" value="10000"/></div><div>          <nvpair id="status-1-master-mail" name="master-mail" value="10000"/></div><div>        </instance_attributes></div><div>      </transient_attributes></div><div>      <lrm id="1"></div><div>        <lrm_resources></div><div>          <lrm_resource id="virtualip-1" type="IPaddr2" class="ocf" provider="heartbeat"></div><div>            <lrm_rsc_op id="virtualip-1_last_0" operation_key="virtualip-1_stop_0" operation="stop" crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" transition-key="13:3651:0:ae755a85-c250-498f-9c94-ddd8a7e2788a" transition-magic="0:0;13:3651:0:ae755a85-c250-498f-9c94-ddd8a7e2788a" on_node="mail1" call-id="1930" rc-code="0" op-status="0" interval="0" last-run="1458292925" last-rc-change="1458292925" exec-time="285" queue-time="0" op-digest="28a9f5254eca47bbb2a9892a336ab8d6"/></div><div>            <lrm_rsc_op id="virtualip-1_monitor_30000" operation_key="virtualip-1_monitor_30000" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" transition-key="13:3390:0:ae755a85-c250-498f-9c94-ddd8a7e2788a" transition-magic="0:0;13:3390:0:ae755a85-c250-498f-9c94-ddd8a7e2788a" on_node="mail1" call-id="1886" rc-code="0" op-status="0" interval="30000" last-rc-change="1458216597" exec-time="46" queue-time="0" op-digest="c2158e684c2fe8758a545e9a9387caed"/></div><div>          </lrm_resource></div><div>          <lrm_resource id="mail" type="drbd" class="ocf" provider="linbit"></div><div>            <lrm_rsc_op id="mail_last_failure_0" operation_key="mail_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" transition-key="9:3026:7:ae755a85-c250-498f-9c94-ddd8a7e2788a" transition-magic="0:0;9:3026:7:ae755a85-c250-498f-9c94-ddd8a7e2788a" on_node="mail1" call-id="1451" rc-code="0" op-status="0" interval="0" last-run="1458128284" last-rc-change="1458128284" exec-time="72" queue-time="0" op-digest="98235597a9743aebee92a6c373a068d5"/></div><div>            <lrm_rsc_op id="mail_last_0" operation_key="mail_start_0" operation="start" crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" 
transition-key="50:3669:0:ae755a85-c250-498f-9c94-ddd8a7e2788a" transition-magic="0:0;50:3669:0:ae755a85-c250-498f-9c94-ddd8a7e2788a" on_node="mail1" call-id="2014" rc-code="0" op-status="0" interval="0" last-run="1458294003" last-rc-change="1458294003" exec-time="270" queue-time="0" op-digest="98235597a9743aebee92a6c373a068d5"/></div><div>            <lrm_rsc_op id="mail_monitor_10000" operation_key="mail_monitor_10000" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" transition-key="50:3670:0:ae755a85-c250-498f-9c94-ddd8a7e2788a" transition-magic="0:0;50:3670:0:ae755a85-c250-498f-9c94-ddd8a7e2788a" on_node="mail1" call-id="2019" rc-code="0" op-status="0" interval="10000" last-rc-change="1458294004" exec-time="79" queue-time="0" op-digest="57464d93900365abea1493a8f6b22159"/></div><div>          </lrm_resource></div><div>          <lrm_resource id="spool" type="drbd" class="ocf" provider="linbit"></div><div>            <lrm_rsc_op id="spool_last_failure_0" operation_key="spool_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" transition-key="9:3028:7:ae755a85-c250-498f-9c94-ddd8a7e2788a" transition-magic="0:0;9:3028:7:ae755a85-c250-498f-9c94-ddd8a7e2788a" on_node="mail1" call-id="1459" rc-code="0" op-status="0" interval="0" last-run="1458128289" last-rc-change="1458128289" exec-time="73" queue-time="0" op-digest="dbbf364a9d070ebe47b97831a0be60f4"/></div><div>            <lrm_rsc_op id="spool_last_0" operation_key="spool_start_0" operation="start" crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" transition-key="20:3669:0:ae755a85-c250-498f-9c94-ddd8a7e2788a" transition-magic="0:0;20:3669:0:ae755a85-c250-498f-9c94-ddd8a7e2788a" on_node="mail1" call-id="2015" rc-code="0" op-status="0" interval="0" last-run="1458294003" last-rc-change="1458294003" exec-time="266" queue-time="0" op-digest="dbbf364a9d070ebe47b97831a0be60f4"/></div><div>            <lrm_rsc_op id="spool_monitor_10000" operation_key="spool_monitor_10000" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" transition-key="19:3670:0:ae755a85-c250-498f-9c94-ddd8a7e2788a" transition-magic="0:0;19:3670:0:ae755a85-c250-498f-9c94-ddd8a7e2788a" on_node="mail1" call-id="2018" rc-code="0" op-status="0" interval="10000" last-rc-change="1458294004" exec-time="80" queue-time="0" op-digest="97f3ae82d78b8755a2179c6797797580"/></div><div>          </lrm_resource></div><div>          <lrm_resource id="fs-spool" type="Filesystem" class="ocf" provider="heartbeat"></div><div>            <lrm_rsc_op id="fs-spool_last_0" operation_key="fs-spool_stop_0" operation="stop" crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" transition-key="78:3651:0:ae755a85-c250-498f-9c94-ddd8a7e2788a" transition-magic="0:0;78:3651:0:ae755a85-c250-498f-9c94-ddd8a7e2788a" on_node="mail1" call-id="1928" rc-code="0" op-status="0" interval="0" last-run="1458292923" last-rc-change="1458292923" exec-time="1258" queue-time="0" op-digest="54f97a4890ac973bd096580098e40914"/></div><div>            <lrm_rsc_op id="fs-spool_monitor_20000" operation_key="fs-spool_monitor_20000" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" transition-key="69:3392:0:ae755a85-c250-498f-9c94-ddd8a7e2788a" transition-magic="0:0;69:3392:0:ae755a85-c250-498f-9c94-ddd8a7e2788a" on_node="mail1" call-id="1896" rc-code="0" op-status="0" interval="20000" last-rc-change="1458216598" exec-time="47" queue-time="0" 
op-digest="e85a7e24c0c0b05f5d196e3d363e4dfc"/></div><div>          </lrm_resource></div><div>          <lrm_resource id="fs-mail" type="Filesystem" class="ocf" provider="heartbeat"></div><div>            <lrm_rsc_op id="fs-mail_last_0" operation_key="fs-mail_stop_0" operation="stop" crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" transition-key="81:3651:0:ae755a85-c250-498f-9c94-ddd8a7e2788a" transition-magic="0:0;81:3651:0:ae755a85-c250-498f-9c94-ddd8a7e2788a" on_node="mail1" call-id="1926" rc-code="0" op-status="0" interval="0" last-run="1458292923" last-rc-change="1458292923" exec-time="85" queue-time="1" op-digest="57adf8df552907571679154e346a4403"/></div><div>            <lrm_rsc_op id="fs-mail_monitor_20000" operation_key="fs-mail_monitor_20000" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" transition-key="71:3392:0:ae755a85-c250-498f-9c94-ddd8a7e2788a" transition-magic="0:0;71:3392:0:ae755a85-c250-498f-9c94-ddd8a7e2788a" on_node="mail1" call-id="1898" rc-code="0" op-status="0" interval="20000" last-rc-change="1458216598" exec-time="67" queue-time="0" op-digest="ad82e3ec600949a8e869e8afe9a21fef"/></div><div>          </lrm_resource></div><div>          <lrm_resource id="amavisd" type="amavisd" class="systemd"></div><div>            <lrm_rsc_op id="amavisd_last_0" operation_key="amavisd_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" transition-key="9:3674:7:ae755a85-c250-498f-9c94-ddd8a7e2788a" transition-magic="0:7;9:3674:7:ae755a85-c250-498f-9c94-ddd8a7e2788a" on_node="mail1" call-id="2026" rc-code="7" op-status="0" interval="0" last-run="1458294028" last-rc-change="1458294028" exec-time="5" queue-time="0" op-digest="f2317cad3d54cec5d7d7aa7d0bf35cf8"/></div><div>          </lrm_resource></div><div>          <lrm_resource id="spamassassin" type="spamassassin" class="systemd"></div><div>            <lrm_rsc_op id="spamassassin_last_0" operation_key="spamassassin_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" transition-key="10:3674:7:ae755a85-c250-498f-9c94-ddd8a7e2788a" transition-magic="0:7;10:3674:7:ae755a85-c250-498f-9c94-ddd8a7e2788a" on_node="mail1" call-id="2030" rc-code="7" op-status="0" interval="0" last-run="1458294028" last-rc-change="1458294028" exec-time="5" queue-time="0" op-digest="f2317cad3d54cec5d7d7aa7d0bf35cf8"/></div><div>          </lrm_resource></div><div>          <lrm_resource id="clamd" type="clamd@amavisd" class="systemd"></div><div>            <lrm_rsc_op id="clamd_last_0" operation_key="clamd_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" transition-key="11:3674:7:ae755a85-c250-498f-9c94-ddd8a7e2788a" transition-magic="0:7;11:3674:7:ae755a85-c250-498f-9c94-ddd8a7e2788a" on_node="mail1" call-id="2034" rc-code="7" op-status="0" interval="0" last-run="1458294028" last-rc-change="1458294028" exec-time="7" queue-time="0" op-digest="f2317cad3d54cec5d7d7aa7d0bf35cf8"/></div><div>          </lrm_resource></div><div>        </lrm_resources></div><div>      </lrm></div><div>    </node_state></div><div>    <node_state id="2" uname="mail2" in_ccm="true" crmd="online" crm-debug-origin="do_update_resource" join="member" expected="member"></div><div>      <transient_attributes id="2"></div><div>        <instance_attributes id="status-2"></div><div>          <nvpair id="status-2-shutdown" name="shutdown" value="0"/></div><div>          <nvpair id="status-2-last-failure-spool" 
name="last-failure-spool" value="1457364470"/></div><div>          <nvpair id="status-2-probe_complete" name="probe_complete" value="true"/></div><div>          <nvpair id="status-2-last-failure-mail" name="last-failure-mail" value="1457527103"/></div><div>          <nvpair id="status-2-last-failure-fs-spool" name="last-failure-fs-spool" value="1457524256"/></div><div>          <nvpair id="status-2-last-failure-fs-mail" name="last-failure-fs-mail" value="1457611139"/></div><div>          <nvpair id="status-2-last-failure-amavisd" name="last-failure-amavisd" value="1458294149"/></div><div>          <nvpair id="status-2-master-mail" name="master-mail" value="10000"/></div><div>          <nvpair id="status-2-master-spool" name="master-spool" value="10000"/></div><div>          <nvpair id="status-2-fail-count-amavisd" name="fail-count-amavisd" value="1"/></div><div>        </instance_attributes></div><div>      </transient_attributes></div><div>      <lrm id="2"></div><div>        <lrm_resources></div><div>          <lrm_resource id="virtualip-1" type="IPaddr2" class="ocf" provider="heartbeat"></div><div>            <lrm_rsc_op id="virtualip-1_last_failure_0" operation_key="virtualip-1_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" transition-key="11:3024:7:ae755a85-c250-498f-9c94-ddd8a7e2788a" transition-magic="0:0;11:3024:7:ae755a85-c250-498f-9c94-ddd8a7e2788a" on_node="mail2" call-id="1904" rc-code="0" op-status="0" interval="0" last-run="1458128280" last-rc-change="1458128280" exec-time="49" queue-time="0" op-digest="28a9f5254eca47bbb2a9892a336ab8d6"/></div><div>            <lrm_rsc_op id="virtualip-1_last_0" operation_key="virtualip-1_stop_0" operation="stop" crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" transition-key="14:3677:0:ae755a85-c250-498f-9c94-ddd8a7e2788a" transition-magic="0:0;14:3677:0:ae755a85-c250-498f-9c94-ddd8a7e2788a" on_node="mail2" call-id="2513" rc-code="0" op-status="0" interval="0" last-run="1458294156" last-rc-change="1458294156" exec-time="51" queue-time="0" op-digest="28a9f5254eca47bbb2a9892a336ab8d6"/></div><div>            <lrm_rsc_op id="virtualip-1_monitor_30000" operation_key="virtualip-1_monitor_30000" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" transition-key="12:3664:0:ae755a85-c250-498f-9c94-ddd8a7e2788a" transition-magic="0:0;12:3664:0:ae755a85-c250-498f-9c94-ddd8a7e2788a" on_node="mail2" call-id="2425" rc-code="0" op-status="0" interval="30000" last-rc-change="1458293985" exec-time="48" queue-time="0" op-digest="c2158e684c2fe8758a545e9a9387caed"/></div><div>          </lrm_resource></div><div>          <lrm_resource id="mail" type="drbd" class="ocf" provider="linbit"></div><div>            <lrm_rsc_op id="mail_last_failure_0" operation_key="mail_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" transition-key="11:3026:7:ae755a85-c250-498f-9c94-ddd8a7e2788a" transition-magic="0:8;11:3026:7:ae755a85-c250-498f-9c94-ddd8a7e2788a" on_node="mail2" call-id="1911" rc-code="8" op-status="0" interval="0" last-run="1458128284" last-rc-change="1458128284" exec-time="79" queue-time="0" op-digest="98235597a9743aebee92a6c373a068d5"/></div><div>            <lrm_rsc_op id="mail_last_0" operation_key="mail_promote_0" operation="promote" crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" transition-key="41:3652:0:ae755a85-c250-498f-9c94-ddd8a7e2788a" 
transition-magic="0:0;41:3652:0:ae755a85-c250-498f-9c94-ddd8a7e2788a" on_node="mail2" call-id="2333" rc-code="0" op-status="0" interval="0" last-run="1458292925" last-rc-change="1458292925" exec-time="41" queue-time="0" op-digest="98235597a9743aebee92a6c373a068d5"/></div><div>          </lrm_resource></div><div>          <lrm_resource id="spool" type="drbd" class="ocf" provider="linbit"></div><div>            <lrm_rsc_op id="spool_last_failure_0" operation_key="spool_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" transition-key="11:3028:7:ae755a85-c250-498f-9c94-ddd8a7e2788a" transition-magic="0:8;11:3028:7:ae755a85-c250-498f-9c94-ddd8a7e2788a" on_node="mail2" call-id="1917" rc-code="8" op-status="0" interval="0" last-run="1458128289" last-rc-change="1458128289" exec-time="73" queue-time="0" op-digest="dbbf364a9d070ebe47b97831a0be60f4"/></div><div>            <lrm_rsc_op id="spool_last_0" operation_key="spool_promote_0" operation="promote" crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" transition-key="14:3652:0:ae755a85-c250-498f-9c94-ddd8a7e2788a" transition-magic="0:0;14:3652:0:ae755a85-c250-498f-9c94-ddd8a7e2788a" on_node="mail2" call-id="2332" rc-code="0" op-status="0" interval="0" last-run="1458292925" last-rc-change="1458292925" exec-time="45" queue-time="0" op-digest="dbbf364a9d070ebe47b97831a0be60f4"/></div><div>          </lrm_resource></div><div>          <lrm_resource id="fs-mail" type="Filesystem" class="ocf" provider="heartbeat"></div><div>            <lrm_rsc_op id="fs-mail_last_failure_0" operation_key="fs-mail_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" transition-key="11:3150:7:ae755a85-c250-498f-9c94-ddd8a7e2788a" transition-magic="0:0;11:3150:7:ae755a85-c250-498f-9c94-ddd8a7e2788a" on_node="mail2" call-id="2281" rc-code="0" op-status="0" interval="0" last-run="1458145187" last-rc-change="1458145187" exec-time="77" queue-time="1" op-digest="57adf8df552907571679154e346a4403"/></div><div>            <lrm_rsc_op id="fs-mail_last_0" operation_key="fs-mail_stop_0" operation="stop" crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" transition-key="81:3677:0:ae755a85-c250-498f-9c94-ddd8a7e2788a" transition-magic="0:0;81:3677:0:ae755a85-c250-498f-9c94-ddd8a7e2788a" on_node="mail2" call-id="2509" rc-code="0" op-status="0" interval="0" last-run="1458294155" last-rc-change="1458294155" exec-time="78" queue-time="0" op-digest="57adf8df552907571679154e346a4403"/></div><div>            <lrm_rsc_op id="fs-mail_monitor_20000" operation_key="fs-mail_monitor_20000" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" transition-key="76:3664:0:ae755a85-c250-498f-9c94-ddd8a7e2788a" transition-magic="0:0;76:3664:0:ae755a85-c250-498f-9c94-ddd8a7e2788a" on_node="mail2" call-id="2429" rc-code="0" op-status="0" interval="20000" last-rc-change="1458293985" exec-time="62" queue-time="0" op-digest="ad82e3ec600949a8e869e8afe9a21fef"/></div><div>          </lrm_resource></div><div>          <lrm_resource id="fs-spool" type="Filesystem" class="ocf" provider="heartbeat"></div><div>            <lrm_rsc_op id="fs-spool_last_failure_0" operation_key="fs-spool_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" transition-key="10:3150:7:ae755a85-c250-498f-9c94-ddd8a7e2788a" transition-magic="0:0;10:3150:7:ae755a85-c250-498f-9c94-ddd8a7e2788a" on_node="mail2" call-id="2277" rc-code="0" op-status="0" interval="0" 
last-run="1458145187" last-rc-change="1458145187" exec-time="81" queue-time="0" op-digest="54f97a4890ac973bd096580098e40914"/></div><div>            <lrm_rsc_op id="fs-spool_last_0" operation_key="fs-spool_stop_0" operation="stop" crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" transition-key="79:3677:0:ae755a85-c250-498f-9c94-ddd8a7e2788a" transition-magic="0:0;79:3677:0:ae755a85-c250-498f-9c94-ddd8a7e2788a" on_node="mail2" call-id="2511" rc-code="0" op-status="0" interval="0" last-run="1458294155" last-rc-change="1458294155" exec-time="1220" queue-time="0" op-digest="54f97a4890ac973bd096580098e40914"/></div><div>            <lrm_rsc_op id="fs-spool_monitor_20000" operation_key="fs-spool_monitor_20000" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" transition-key="74:3664:0:ae755a85-c250-498f-9c94-ddd8a7e2788a" transition-magic="0:0;74:3664:0:ae755a85-c250-498f-9c94-ddd8a7e2788a" on_node="mail2" call-id="2427" rc-code="0" op-status="0" interval="20000" last-rc-change="1458293985" exec-time="44" queue-time="0" op-digest="e85a7e24c0c0b05f5d196e3d363e4dfc"/></div><div>          </lrm_resource></div><div>          <lrm_resource id="amavisd" type="amavisd" class="systemd"></div><div>            <lrm_rsc_op id="amavisd_last_failure_0" operation_key="amavisd_monitor_60000" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" transition-key="86:3675:0:ae755a85-c250-498f-9c94-ddd8a7e2788a" transition-magic="0:7;86:3675:0:ae755a85-c250-498f-9c94-ddd8a7e2788a" on_node="mail2" call-id="2499" rc-code="7" op-status="0" interval="60000" last-run="1458294028" last-rc-change="1458294149" exec-time="0" queue-time="0" op-digest="4811cef7f7f94e3a35a70be7916cb2fd"/></div><div>            <lrm_rsc_op id="amavisd_last_0" operation_key="amavisd_stop_0" operation="stop" crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" transition-key="7:3677:0:ae755a85-c250-498f-9c94-ddd8a7e2788a" transition-magic="0:0;7:3677:0:ae755a85-c250-498f-9c94-ddd8a7e2788a" on_node="mail2" call-id="2507" rc-code="0" op-status="0" interval="0" last-run="1458294153" last-rc-change="1458294153" exec-time="2068" queue-time="0" op-digest="f2317cad3d54cec5d7d7aa7d0bf35cf8"/></div><div>            <lrm_rsc_op id="amavisd_monitor_60000" operation_key="amavisd_monitor_60000" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" transition-key="86:3675:0:ae755a85-c250-498f-9c94-ddd8a7e2788a" transition-magic="0:0;86:3675:0:ae755a85-c250-498f-9c94-ddd8a7e2788a" on_node="mail2" call-id="2499" rc-code="0" op-status="0" interval="60000" last-rc-change="1458294028" exec-time="2" queue-time="0" op-digest="4811cef7f7f94e3a35a70be7916cb2fd"/></div><div>          </lrm_resource></div><div>          <lrm_resource id="spamassassin" type="spamassassin" class="systemd"></div><div>            <lrm_rsc_op id="spamassassin_last_failure_0" operation_key="spamassassin_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" transition-key="14:3674:7:ae755a85-c250-498f-9c94-ddd8a7e2788a" transition-magic="0:0;14:3674:7:ae755a85-c250-498f-9c94-ddd8a7e2788a" on_node="mail2" call-id="2494" rc-code="0" op-status="0" interval="0" last-run="1458294028" last-rc-change="1458294028" exec-time="11" queue-time="0" op-digest="f2317cad3d54cec5d7d7aa7d0bf35cf8"/></div><div>            <lrm_rsc_op id="spamassassin_last_0" operation_key="spamassassin_stop_0" operation="stop" crm-debug-origin="do_update_resource" 
crm_feature_set="3.0.10" transition-key="87:3677:0:ae755a85-c250-498f-9c94-ddd8a7e2788a" transition-magic="0:0;87:3677:0:ae755a85-c250-498f-9c94-ddd8a7e2788a" on_node="mail2" call-id="2505" rc-code="0" op-status="0" interval="0" last-run="1458294151" last-rc-change="1458294151" exec-time="2072" queue-time="0" op-digest="f2317cad3d54cec5d7d7aa7d0bf35cf8"/></div><div>            <lrm_rsc_op id="spamassassin_monitor_60000" operation_key="spamassassin_monitor_60000" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" transition-key="89:3675:0:ae755a85-c250-498f-9c94-ddd8a7e2788a" transition-magic="0:0;89:3675:0:ae755a85-c250-498f-9c94-ddd8a7e2788a" on_node="mail2" call-id="2500" rc-code="0" op-status="0" interval="60000" last-rc-change="1458294028" exec-time="1" queue-time="0" op-digest="4811cef7f7f94e3a35a70be7916cb2fd"/></div><div>          </lrm_resource></div><div>          <lrm_resource id="clamd" type="clamd@amavisd" class="systemd"></div><div>            <lrm_rsc_op id="clamd_last_failure_0" operation_key="clamd_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" transition-key="15:3674:7:ae755a85-c250-498f-9c94-ddd8a7e2788a" transition-magic="0:0;15:3674:7:ae755a85-c250-498f-9c94-ddd8a7e2788a" on_node="mail2" call-id="2498" rc-code="0" op-status="0" interval="0" last-run="1458294028" last-rc-change="1458294028" exec-time="10" queue-time="0" op-digest="f2317cad3d54cec5d7d7aa7d0bf35cf8"/></div><div>            <lrm_rsc_op id="clamd_last_0" operation_key="clamd_stop_0" operation="stop" crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" transition-key="88:3677:0:ae755a85-c250-498f-9c94-ddd8a7e2788a" transition-magic="0:0;88:3677:0:ae755a85-c250-498f-9c94-ddd8a7e2788a" on_node="mail2" call-id="2503" rc-code="0" op-status="0" interval="0" last-run="1458294149" last-rc-change="1458294149" exec-time="2085" queue-time="0" op-digest="f2317cad3d54cec5d7d7aa7d0bf35cf8"/></div><div>            <lrm_rsc_op id="clamd_monitor_60000" operation_key="clamd_monitor_60000" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" transition-key="92:3675:0:ae755a85-c250-498f-9c94-ddd8a7e2788a" transition-magic="0:0;92:3675:0:ae755a85-c250-498f-9c94-ddd8a7e2788a" on_node="mail2" call-id="2501" rc-code="0" op-status="0" interval="60000" last-rc-change="1458294029" exec-time="2" queue-time="0" op-digest="4811cef7f7f94e3a35a70be7916cb2fd"/></div><div>          </lrm_resource></div><div>        </lrm_resources></div><div>      </lrm></div><div>    </node_state></div><div>  </status></div><div></cib></div><div><br></div><div>   </div></div></div><div><br></div><div><br></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Mar 17, 2016 at 8:30 PM, Ken Gaillot <span dir="ltr"><<a href="mailto:kgaillot@redhat.com" target="_blank">kgaillot@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">On 03/16/2016 11:20 AM, Lorand Kelemen wrote:<br>
> Dear Ken,
>
> I already modified the startup as suggested during testing, thanks! I
> swapped the postfix ocf resource for the amavisd systemd resource, as the
> latter also controls postfix startup as it turns out, and having both
> resources in the mail-services group causes conflicts (postfix is detected
> as not running).
>
> Still experiencing the same behaviour: killing amavisd returns an rc=7 for
> the monitoring operation on the "victim" node, which sounds logical, but the
> logs contain the same: amavisd and virtualip cannot run anywhere.
>
> I made sure systemd is clean (amavisd = inactive, not running instead of
> failed) and also reset the failcount on all resources before killing
> amavisd.
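>
> (For the record, the pre-test cleanup was roughly this, on both nodes,
> until no failed actions or failcounts remained:
>
> # systemctl reset-failed amavisd
> # pcs resource cleanup
> # pcs resource failcount reset amavisd
> )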
>
> How can I make sure to have a clean state for the resources besides the
> above actions?

What you did is fine. I'm not sure why amavisd and virtualip can't run.
Can you show the output of "cibadmin -Q" when the cluster is in that state?

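Capturing a one-shot status with fail counts next to the raw CIB makes it
easier to correlate, e.g. (the file name is just an example):

# crm_mon -1rf
# cibadmin -Q > /tmp/cib-failed-state.xml
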
> Also note: when causing a filesystem resource to fail (e.g. with unmount),
> the failover happens successfully, all resources are started on the
> "survivor" node.
>
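> (That filesystem failure was triggered simply with a lazy unmount on the
> active node, e.g. "umount -l /var/spool/mail"; the next Filesystem monitor
> returns rc=7 and the whole stack fails over.)
>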
> Best regards,
> Lorand
>
>
> On Wed, Mar 16, 2016 at 4:34 PM, Ken Gaillot <kgaillot@redhat.com> wrote:
>
>> On 03/16/2016 05:49 AM, Lorand Kelemen wrote:
>>> Dear Ken,
>>>
>>> Thanks for the reply! I lowered migration-threshold to 1 and rearranged
>>> the constraints like you suggested (exact commands below the listing):
>>>
>>> Location Constraints:
>>> Ordering Constraints:
>>>   promote mail-clone then start fs-services (kind:Mandatory)
>>>   promote spool-clone then start fs-services (kind:Mandatory)
>>>   start fs-services then start network-services (kind:Mandatory)
>>
>> Certainly not a big deal, but I would change the above constraint to
>> start fs-services then start mail-services. The IP doesn't care whether
>> the filesystems are up yet or not, but postfix does.
>>
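>> Something like this should do it (look up the existing constraint's id
>> with "pcs constraint --full" first; the placeholder below is not a real id):
>>
>> # pcs constraint remove <id-of-the-fs-services-then-network-services-order>
>> # pcs constraint order start fs-services then start mail-services
>>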
>>>   start network-services then start mail-services (kind:Mandatory)
>>> Colocation Constraints:
>>>   fs-services with spool-clone (score:INFINITY) (rsc-role:Started)
>>> (with-rsc-role:Master)
>>>   fs-services with mail-clone (score:INFINITY) (rsc-role:Started)
>>> (with-rsc-role:Master)
>>>   network-services with mail-services (score:INFINITY)
>>>   mail-services with fs-services (score:INFINITY)
>>>
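>>> (Applied roughly as:
>>>
>>> # pcs resource defaults migration-threshold=1
>>> # pcs constraint order promote mail-clone then start fs-services
>>> # pcs constraint order promote spool-clone then start fs-services
>>> # pcs constraint order start fs-services then start network-services
>>> # pcs constraint order start network-services then start mail-services
>>> )
>>>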
>>> Now virtualip and postfix become stopped; I guess these are the relevant
>>> lines, but I also attach the full logs:
>>>
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:     info:
>>> native_color: Resource postfix cannot run anywhere
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:     info:
>>> native_color: Resource virtualip-1 cannot run anywhere
>>>
>>> Interesting, will try to play around with ordering - colocation, the
>>> solution must be in these settings...
>>>
>>> Best regards,
>>> Lorand
>>>
>>> Mar 16 11:38:06 [7415] HWJ-626.domain.local        cib:     info:
>>> cib_perform_op:       Diff: --- 0.215.7 2
>>> Mar 16 11:38:06 [7415] HWJ-626.domain.local        cib:     info:
>>> cib_perform_op:       Diff: +++ 0.215.8 (null)
>>> Mar 16 11:38:06 [7415] HWJ-626.domain.local        cib:     info:
>>> cib_perform_op:       +  /cib:  @num_updates=8
>>> Mar 16 11:38:06 [7415] HWJ-626.domain.local        cib:     info:
>>> cib_perform_op:       ++
>>>
>> /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='postfix']:
>>>  <lrm_rsc_op id="postfix_last_failure_0"
>>> operation_key="postfix_monitor_45000" operation="monitor"
>>> crm-debug-origin="do_update_resource" crm_feature_set="3.0.10"
>>> transition-key="86:2962:0:ae755a85-c250-498f-9c94-ddd8a7e2788a"
>>> transition-magic="0:7;86:2962:0:ae755a85-c250-498f-9c94-ddd8a7e2788a"
>>> on_node="mail1" call-id="1333" rc-code="7"
>>> Mar 16 11:38:06 [7420] HWJ-626.domain.local       crmd:     info:
>>> abort_transition_graph:       Transition aborted by postfix_monitor_45000
>>> 'create' on mail1: Inactive graph
>>> (magic=0:7;86:2962:0:ae755a85-c250-498f-9c94-ddd8a7e2788a, cib=0.215.8,
>>> source=process_graph_event:598, 1)
>>> Mar 16 11:38:06 [7420] HWJ-626.domain.local       crmd:     info:
>>> update_failcount:     Updating failcount for postfix on mail1 after
>> failed
>>> monitor: rc=7 (update=value++, time=1458124686)
>>
>> I don't think your constraints are causing problems now; the above
>> message indicates that the postfix resource failed. Postfix may not be
>> able to run anywhere because it's already failed on both nodes, and the
>> IP would be down because it has to be colocated with postfix, and
>> postfix can't run.
>>
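>> A quick way to confirm that theory is to check the failcount on each node:
>>
>> # pcs resource failcount show postfix
>>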
>> The rc=7 above indicates that the postfix agent's monitor operation
>> returned 7, which is "not running". I'd check the logs for postfix errors.
>>
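>> For instance (assuming the stock RHEL/CentOS log locations):
>>
>> # postfix status
>> # tail -n 100 /var/log/maillog
>>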
>>> Mar 16 11:38:06 [7420] HWJ-626.domain.local       crmd:     info:
>>> process_graph_event:  Detected action (2962.86)
>>> postfix_monitor_45000.1333=not running: failed
>>> Mar 16 11:38:06 [7418] HWJ-626.domain.local      attrd:     info:
>>> attrd_client_update:  Expanded fail-count-postfix=value++ to 1
>>> Mar 16 11:38:06 [7415] HWJ-626.domain.local        cib:     info:
>>> cib_process_request:  Completed cib_modify operation for section status:
>> OK
>>> (rc=0, origin=mail1/crmd/253, version=0.215.8)
>>> Mar 16 11:38:06 [7418] HWJ-626.domain.local      attrd:     info:
>>> attrd_peer_update:    Setting fail-count-postfix[mail1]: (null) -> 1 from
>>> mail2
>>> Mar 16 11:38:06 [7420] HWJ-626.domain.local       crmd:   notice:
>>> do_state_transition:  State transition S_IDLE -> S_POLICY_ENGINE [
>>> input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
>>> Mar 16 11:38:06 [7418] HWJ-626.domain.local      attrd:     info:
>>> write_attribute:      Sent update 406 with 2 changes for
>>> fail-count-postfix, id=<n/a>, set=(null)
>>> Mar 16 11:38:06 [7418] HWJ-626.domain.local      attrd:     info:
>>> attrd_peer_update:    Setting last-failure-postfix[mail1]: 1458124291 ->
>>> 1458124686 from mail2
>>> Mar 16 11:38:06 [7418] HWJ-626.domain.local      attrd:     info:
>>> write_attribute:      Sent update 407 with 2 changes for
>>> last-failure-postfix, id=<n/a>, set=(null)
>>> Mar 16 11:38:06 [7415] HWJ-626.domain.local        cib:     info:
>>> cib_process_request:  Forwarding cib_modify operation for section status
>> to
>>> master (origin=local/attrd/406)
>>> Mar 16 11:38:06 [7415] HWJ-626.domain.local        cib:     info:
>>> cib_process_request:  Forwarding cib_modify operation for section status
>> to
>>> master (origin=local/attrd/407)
>>> Mar 16 11:38:06 [7415] HWJ-626.domain.local        cib:     info:
>>> cib_perform_op:       Diff: --- 0.215.8 2
>>> Mar 16 11:38:06 [7415] HWJ-626.domain.local        cib:     info:
>>> cib_perform_op:       Diff: +++ 0.215.9 (null)
>>> Mar 16 11:38:06 [7415] HWJ-626.domain.local        cib:     info:
>>> cib_perform_op:       +  /cib:  @num_updates=9
>>> Mar 16 11:38:06 [7415] HWJ-626.domain.local        cib:     info:
>>> cib_perform_op:       ++
>>>
>> /cib/status/node_state[@id='1']/transient_attributes[@id='1']/instance_attributes[@id='status-1']:
>>>  <nvpair id="status-1-fail-count-postfix" name="fail-count-postfix"
>>> value="1"/>
>>> Mar 16 11:38:06 [7415] HWJ-626.domain.local        cib:     info:
>>> cib_process_request:  Completed cib_modify operation for section status:
>> OK
>>> (rc=0, origin=mail2/attrd/406, version=0.215.9)
>>> Mar 16 11:38:06 [7415] HWJ-626.domain.local        cib:     info:
>>> cib_perform_op:       Diff: --- 0.215.9 2
>>> Mar 16 11:38:06 [7415] HWJ-626.domain.local        cib:     info:
>>> cib_perform_op:       Diff: +++ 0.215.10 (null)
>>> Mar 16 11:38:06 [7415] HWJ-626.domain.local        cib:     info:
>>> cib_perform_op:       +  /cib:  @num_updates=10
>>> Mar 16 11:38:06 [7415] HWJ-626.domain.local        cib:     info:
>>> cib_perform_op:       +
>>>
>> /cib/status/node_state[@id='1']/transient_attributes[@id='1']/instance_attributes[@id='status-1']/nvpair[@id='status-1-last-failure-postfix']:
>>>  @value=1458124686
>>> Mar 16 11:38:06 [7418] HWJ-626.domain.local      attrd:     info:
>>> attrd_cib_callback:   Update 406 for fail-count-postfix: OK (0)
>>> Mar 16 11:38:06 [7418] HWJ-626.domain.local      attrd:     info:
>>> attrd_cib_callback:   Update 406 for fail-count-postfix[mail1]=1: OK (0)
>>> Mar 16 11:38:06 [7415] HWJ-626.domain.local        cib:     info:
>>> cib_process_request:  Completed cib_modify operation for section status:
>> OK
>>> (rc=0, origin=mail2/attrd/407, version=0.215.10)
>>> Mar 16 11:38:06 [7418] HWJ-626.domain.local      attrd:     info:
>>> attrd_cib_callback:   Update 406 for fail-count-postfix[mail2]=(null): OK
>>> (0)
>>> Mar 16 11:38:06 [7418] HWJ-626.domain.local      attrd:     info:
>>> attrd_cib_callback:   Update 407 for last-failure-postfix: OK (0)
>>> Mar 16 11:38:06 [7418] HWJ-626.domain.local      attrd:     info:
>>> attrd_cib_callback:   Update 407 for
>>> last-failure-postfix[mail1]=1458124686: OK (0)
>>> Mar 16 11:38:06 [7418] HWJ-626.domain.local      attrd:     info:
>>> attrd_cib_callback:   Update 407 for
>>> last-failure-postfix[mail2]=1457610376: OK (0)
>>> Mar 16 11:38:06 [7420] HWJ-626.domain.local       crmd:     info:
>>> abort_transition_graph:       Transition aborted by
>>> status-1-fail-count-postfix, fail-count-postfix=1: Transient attribute
>>> change (create cib=0.215.9, source=abort_unless_down:319,
>>>
>> path=/cib/status/node_state[@id='1']/transient_attributes[@id='1']/instance_attributes[@id='status-1'],
>>> 1)
>>> Mar 16 11:38:06 [7420] HWJ-626.domain.local       crmd:     info:
>>> abort_transition_graph:       Transition aborted by
>>> status-1-last-failure-postfix, last-failure-postfix=1458124686: Transient
>>> attribute change (modify cib=0.215.10, source=abort_unless_down:319,
>>>
>> path=/cib/status/node_state[@id='1']/transient_attributes[@id='1']/instance_attributes[@id='status-1']/nvpair[@id='status-1-last-failure-postfix'],
>>> 1)
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:   notice:
>>> unpack_config:        On loss of CCM Quorum: Ignore
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:     info:
>>> determine_online_status:      Node mail1 is online
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:     info:
>>> determine_online_status:      Node mail2 is online
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:     info:
>>> determine_op_status:  Operation monitor found resource mail:0 active in
>>> master mode on mail1
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:     info:
>>> determine_op_status:  Operation monitor found resource spool:0 active in
>>> master mode on mail1
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:     info:
>>> determine_op_status:  Operation monitor found resource fs-spool active on
>>> mail1
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:     info:
>>> determine_op_status:  Operation monitor found resource fs-spool active on
>>> mail1
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:     info:
>>> determine_op_status:  Operation monitor found resource fs-mail active on
>>> mail1
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:     info:
>>> determine_op_status:  Operation monitor found resource fs-mail active on
>>> mail1
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:  warning:
>>> unpack_rsc_op_failure:        Processing failed op monitor for postfix on
>>> mail1: not running (7)
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:     info:
>>> determine_op_status:  Operation monitor found resource spool:1 active in
>>> master mode on mail2
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:     info:
>>> determine_op_status:  Operation monitor found resource mail:1 active in
>>> master mode on mail2
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:     info:
>>> group_print:   Resource Group: network-services
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:     info:
>>> native_print:      virtualip-1        (ocf::heartbeat:IPaddr2):
>>  Started
>>> mail1
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:     info:
>>> clone_print:   Master/Slave Set: spool-clone [spool]
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:     info:
>>> short_print:       Masters: [ mail1 ]
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:     info:
>>> short_print:       Slaves: [ mail2 ]
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:     info:
>>> clone_print:   Master/Slave Set: mail-clone [mail]
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:     info:
>>> short_print:       Masters: [ mail1 ]
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:     info:
>>> short_print:       Slaves: [ mail2 ]
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:     info:
>>> group_print:   Resource Group: fs-services
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:     info:
>>> native_print:      fs-spool   (ocf::heartbeat:Filesystem):    Started
>> mail1
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:     info:
>>> native_print:      fs-mail    (ocf::heartbeat:Filesystem):    Started
>> mail1
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:     info:
>>> group_print:   Resource Group: mail-services
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:     info:
>>> native_print:      postfix    (ocf::heartbeat:postfix):       FAILED
>> mail1
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:     info:
>>> master_color: Promoting mail:0 (Master mail1)
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:     info:
>>> master_color: mail-clone: Promoted 1 instances of a possible 1 to master
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:     info:
>>> master_color: Promoting spool:0 (Master mail1)
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:     info:
>>> master_color: spool-clone: Promoted 1 instances of a possible 1 to master
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:     info:
>>> RecurringOp:   Start recurring monitor (45s) for postfix on mail1
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:     info:
>>> LogActions:   Leave   virtualip-1     (Started mail1)
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:     info:
>>> LogActions:   Leave   spool:0 (Master mail1)
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:     info:
>>> LogActions:   Leave   spool:1 (Slave mail2)
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:     info:
>>> LogActions:   Leave   mail:0  (Master mail1)
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:     info:
>>> LogActions:   Leave   mail:1  (Slave mail2)
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:     info:
>>> LogActions:   Leave   fs-spool        (Started mail1)
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:     info:
>>> LogActions:   Leave   fs-mail (Started mail1)
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:   notice:
>>> LogActions:   Recover postfix (Started mail1)
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:   notice:
>>> process_pe_message:   Calculated Transition 2963:
>>> /var/lib/pacemaker/pengine/pe-input-330.bz2
>>> Mar 16 11:38:06 [7420] HWJ-626.domain.local       crmd:     info:
>>> handle_response:      pe_calc calculation pe_calc-dc-1458124686-5541 is
>>> obsolete
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:   notice:
>>> unpack_config:        On loss of CCM Quorum: Ignore
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:     info:
>>> determine_online_status:      Node mail1 is online
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:     info:
>>> determine_online_status:      Node mail2 is online
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:     info:
>>> determine_op_status:  Operation monitor found resource mail:0 active in
>>> master mode on mail1
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:     info:
>>> determine_op_status:  Operation monitor found resource spool:0 active in
>>> master mode on mail1
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:     info:
>>> determine_op_status:  Operation monitor found resource fs-spool active on
>>> mail1
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:     info:
>>> determine_op_status:  Operation monitor found resource fs-spool active on
>>> mail1
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:     info:
>>> determine_op_status:  Operation monitor found resource fs-mail active on
>>> mail1
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:     info:
>>> determine_op_status:  Operation monitor found resource fs-mail active on
>>> mail1
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:  warning:
>>> unpack_rsc_op_failure:        Processing failed op monitor for postfix on
>>> mail1: not running (7)
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:     info:
>>> determine_op_status:  Operation monitor found resource spool:1 active in
>>> master mode on mail2
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:     info:
>>> determine_op_status:  Operation monitor found resource mail:1 active in
>>> master mode on mail2
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:     info:
>>> group_print:   Resource Group: network-services
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:     info:
>>> native_print:      virtualip-1        (ocf::heartbeat:IPaddr2):
>>  Started
>>> mail1
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:     info:
>>> clone_print:   Master/Slave Set: spool-clone [spool]
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:     info:
>>> short_print:       Masters: [ mail1 ]
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:     info:
>>> short_print:       Slaves: [ mail2 ]
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:     info:
>>> clone_print:   Master/Slave Set: mail-clone [mail]
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:     info:
>>> short_print:       Masters: [ mail1 ]
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:     info:
>>> short_print:       Slaves: [ mail2 ]
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:     info:
>>> group_print:   Resource Group: fs-services
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:     info:
>>> native_print:      fs-spool   (ocf::heartbeat:Filesystem):    Started
>> mail1
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:     info:
>>> native_print:      fs-mail    (ocf::heartbeat:Filesystem):    Started
>> mail1
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:     info:
>>> group_print:   Resource Group: mail-services
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:     info:
>>> native_print:      postfix    (ocf::heartbeat:postfix):       FAILED
>> mail1
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:     info:
>>> get_failcount_full:   postfix has failed 1 times on mail1
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:  warning:
>>> common_apply_stickiness:      Forcing postfix away from mail1 after 1
>>> failures (max=1)
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:     info:
>>> master_color: Promoting mail:0 (Master mail1)
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:     info:
>>> master_color: mail-clone: Promoted 1 instances of a possible 1 to master
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:     info:
>>> master_color: Promoting spool:0 (Master mail1)
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:     info:
>>> master_color: spool-clone: Promoted 1 instances of a possible 1 to master
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:     info:
>>> rsc_merge_weights:    fs-mail: Rolling back scores from postfix
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:     info:
>>> rsc_merge_weights:    postfix: Rolling back scores from virtualip-1
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:     info:
>>> native_color: Resource postfix cannot run anywhere
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:     info:
>>> native_color: Resource virtualip-1 cannot run anywhere
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:   notice:
>>> LogActions:   Stop    virtualip-1     (mail1)
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:     info:
>>> LogActions:   Leave   spool:0 (Master mail1)<br>
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:     info:<br>
>>> LogActions:   Leave   spool:1 (Slave mail2)<br>
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:     info:<br>
>>> LogActions:   Leave   mail:0  (Master mail1)<br>
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:     info:<br>
>>> LogActions:   Leave   mail:1  (Slave mail2)<br>
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:     info:<br>
>>> LogActions:   Leave   fs-spool        (Started mail1)<br>
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:     info:<br>
>>> LogActions:   Leave   fs-mail (Started mail1)<br>
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:   notice:<br>
>>> LogActions:   Stop    postfix (mail1)<br>
>>> Mar 16 11:38:06 [7420] HWJ-626.domain.local       crmd:     info:<br>
>>> do_state_transition:  State transition S_POLICY_ENGINE -><br>
>>> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE<br>
>>> origin=handle_response ]<br>
>>> Mar 16 11:38:06 [7419] HWJ-626.domain.local    pengine:   notice:<br>
>>> process_pe_message:   Calculated Transition 2964:<br>
>>> /var/lib/pacemaker/pengine/pe-input-331.bz2<br>
>>> Mar 16 11:38:06 [7420] HWJ-626.domain.local       crmd:     info:<br>
>>> do_te_invoke: Processing graph 2964 (ref=pe_calc-dc-1458124686-5542)<br>
>>> derived from /var/lib/pacemaker/pengine/pe-input-331.bz2<br>
>>> Mar 16 11:38:06 [7420] HWJ-626.domain.local       crmd:   notice:<br>
>>> te_rsc_command:       Initiating action 5: stop postfix_stop_0 on mail1<br>
>>> Mar 16 11:38:07 [7415] HWJ-626.domain.local        cib:     info:<br>
>>> cib_perform_op:       Diff: --- 0.215.10 2<br>
>>> Mar 16 11:38:07 [7415] HWJ-626.domain.local        cib:     info:<br>
>>> cib_perform_op:       Diff: +++ 0.215.11 (null)<br>
>>> Mar 16 11:38:07 [7415] HWJ-626.domain.local        cib:     info:<br>
>>> cib_perform_op:       +  /cib:  @num_updates=11<br>
>>> Mar 16 11:38:07 [7415] HWJ-626.domain.local        cib:     info:<br>
>>> cib_perform_op:       +<br>
>>><br>
>> /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='postfix']/lrm_rsc_op[@id='postfix_last_0']:<br>
>>>  @operation_key=postfix_stop_0, @operation=stop,<br>
>>> @transition-key=5:2964:0:ae755a85-c250-498f-9c94-ddd8a7e2788a,<br>
>>> @transition-magic=0:0;5:2964:0:ae755a85-c250-498f-9c94-ddd8a7e2788a,<br>
>>> @call-id=1335, @last-run=1458124686, @last-rc-change=1458124686,<br>
>>> @exec-time=435<br>
>>> Mar 16 11:38:07 [7420] HWJ-626.domain.local       crmd:     info:<br>
>>> match_graph_event:    Action postfix_stop_0 (5) confirmed on mail1 (rc=0)<br>
>>> Mar 16 11:38:07 [7415] HWJ-626.domain.local        cib:     info:<br>
>>> cib_process_request:  Completed cib_modify operation for section status:<br>
>> OK<br>
>>> (rc=0, origin=mail1/crmd/254, version=0.215.11)<br>
>>> Mar 16 11:38:07 [7420] HWJ-626.domain.local       crmd:   notice:<br>
>>> te_rsc_command:       Initiating action 12: stop virtualip-1_stop_0 on<br>
>> mail1<br>
>>> Mar 16 11:38:07 [7415] HWJ-626.domain.local        cib:     info:<br>
>>> cib_perform_op:       Diff: --- 0.215.11 2<br>
>>> Mar 16 11:38:07 [7415] HWJ-626.domain.local        cib:     info:<br>
>>> cib_perform_op:       Diff: +++ 0.215.12 (null)<br>
>>> Mar 16 11:38:07 [7415] HWJ-626.domain.local        cib:     info:<br>
>>> cib_perform_op:       +  /cib:  @num_updates=12<br>
>>> Mar 16 11:38:07 [7415] HWJ-626.domain.local        cib:     info:<br>
>>> cib_perform_op:       +<br>
>>><br>
>> /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='virtualip-1']/lrm_rsc_op[@id='virtualip-1_last_0']:<br>
>>>  @operation_key=virtualip-1_stop_0, @operation=stop,<br>
>>> @transition-key=12:2964:0:ae755a85-c250-498f-9c94-ddd8a7e2788a,<br>
>>> @transition-magic=0:0;12:2964:0:ae755a85-c250-498f-9c94-ddd8a7e2788a,<br>
>>> @call-id=1337, @last-run=1458124687, @last-rc-change=1458124687,<br>
>>> @exec-time=56<br>
>>> Mar 16 11:38:07 [7420] HWJ-626.domain.local       crmd:     info:<br>
>>> match_graph_event:    Action virtualip-1_stop_0 (12) confirmed on mail1<br>
>>> (rc=0)<br>
>>> Mar 16 11:38:07 [7415] HWJ-626.domain.local        cib:     info:<br>
>>> cib_process_request:  Completed cib_modify operation for section status:<br>
>> OK<br>
>>> (rc=0, origin=mail1/crmd/255, version=0.215.12)<br>
>>> Mar 16 11:38:07 [7420] HWJ-626.domain.local       crmd:   notice:<br>
>>> run_graph:    Transition 2964 (Complete=7, Pending=0, Fired=0, Skipped=0,<br>
>>> Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-331.bz2):<br>
>> Complete<br>
>>> Mar 16 11:38:07 [7420] HWJ-626.domain.local       crmd:     info: do_log:<br>
>>>     FSA: Input I_TE_SUCCESS from notify_crmd() received in state<br>
>>> S_TRANSITION_ENGINE<br>
>>> Mar 16 11:38:07 [7420] HWJ-626.domain.local       crmd:   notice:<br>
>>> do_state_transition:  State transition S_TRANSITION_ENGINE -> S_IDLE [<br>
>>> input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]<br>
>>> Mar 16 11:38:12 [7415] HWJ-626.domain.local        cib:     info:<br>
>>> cib_process_ping:     Reporting our current digest to mail2:<br>
>>> ed43bc3ecf0f15853900ba49fc514870 for 0.215.12 (0x152b110 0)<br>
>>><br>
>>><br>
>>> On Mon, Mar 14, 2016 at 6:44 PM, Ken Gaillot <kgaillot@redhat.com> wrote:
>>>
>>>> On 03/10/2016 09:49 AM, Lorand Kelemen wrote:
>>>>> Dear List,
>>>>>
>>>>> After creating and testing a simple 2-node active-passive drbd+postfix cluster, nearly everything works flawlessly (standby, failure of a filesystem resource + failover, split-brain + manual recovery). However, when deliberately killing the postfix instance, failover does not occur once the migration threshold is reached; the resources revert to the Stopped state (except the master/slave drbd resource, which works as expected).
>>>>>
>>>>> Ordering and colocation are configured, STONITH and quorum are disabled. The goal is to always have one node running all the resources and to fail over to the passive node at any sign of error, nothing fancy.
>>>>>
>>>>> Is my configuration wrong, or am I hitting a bug?
>>>>>
>>>>> All software is from the CentOS 7 + ELRepo repositories.
>>>>
>>>> With these versions, you can set "two_node: 1" in /etc/corosync/corosync.conf (which will be done automatically if you used "pcs cluster setup" initially), and then you don't need to ignore quorum in pacemaker.
>>>>
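For reference, the relevant corosync.conf fragment would look roughly like this (a sketch based on the versions discussed above; check the existing quorum section of your file before editing):

    quorum {
        provider: corosync_votequorum
        # two-node mode: the cluster stays quorate when one node is lost,
        # so no-quorum-policy=ignore is no longer needed on the Pacemaker side
        two_node: 1
    }
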
>>>>> Regarding STONITH: the machines are running on free ESXi instances on separate hosts, so the VMware fencing agents won't work, because in the free version the API is read-only.
>>>>> Still trying to figure out a way to go; until then, manual recovery + huge ARP cache times on the upstream firewall...
>>>>>
>>>>> Please find the pe-input*.bz2 files attached, logs and config below. The situation: on node mail1, postfix was killed 3 times (the migration threshold); it should have failed over to mail2. When killing a filesystem resource three times, failover happens flawlessly.
>>>>>
>>>>> Thanks for your input!
>>>>>
>>>>> Best regards,
>>>>> Lorand
>>>>>
>>>>>
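For anyone reproducing this, the failure was injected by killing postfix and letting the 45s monitor catch it. The exact commands aren't given in the thread, but the idea is roughly (the pkill pattern is an assumption):

    # on the active node: kill the postfix master process (assumed command)
    pkill -f /usr/libexec/postfix/master
    # watch the fail count climb toward migration-threshold=3
    pcs resource failcount show postfix
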
>>>>> Cluster Name: mailcluster
>>>>> Corosync Nodes:
>>>>>  mail1 mail2
>>>>> Pacemaker Nodes:
>>>>>  mail1 mail2
>>>>>
>>>>> Resources:
>>>>>  Group: network-services
>>>>>   Resource: virtualip-1 (class=ocf provider=heartbeat type=IPaddr2)
>>>>>    Attributes: ip=10.20.64.10 cidr_netmask=24 nic=lan0
>>>>>    Operations: start interval=0s timeout=20s (virtualip-1-start-interval-0s)
>>>>>                stop interval=0s timeout=20s (virtualip-1-stop-interval-0s)
>>>>>                monitor interval=30s (virtualip-1-monitor-interval-30s)
>>>>>  Master: spool-clone
>>>>>   Meta Attrs: master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
>>>>>   Resource: spool (class=ocf provider=linbit type=drbd)
>>>>>    Attributes: drbd_resource=spool
>>>>>    Operations: start interval=0s timeout=240 (spool-start-interval-0s)
>>>>>                promote interval=0s timeout=90 (spool-promote-interval-0s)
>>>>>                demote interval=0s timeout=90 (spool-demote-interval-0s)
>>>>>                stop interval=0s timeout=100 (spool-stop-interval-0s)
>>>>>                monitor interval=10s (spool-monitor-interval-10s)
>>>>>  Master: mail-clone
>>>>>   Meta Attrs: master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
>>>>>   Resource: mail (class=ocf provider=linbit type=drbd)
>>>>>    Attributes: drbd_resource=mail
>>>>>    Operations: start interval=0s timeout=240 (mail-start-interval-0s)
>>>>>                promote interval=0s timeout=90 (mail-promote-interval-0s)
>>>>>                demote interval=0s timeout=90 (mail-demote-interval-0s)
>>>>>                stop interval=0s timeout=100 (mail-stop-interval-0s)
>>>>>                monitor interval=10s (mail-monitor-interval-10s)
>>>>>  Group: fs-services
>>>>>   Resource: fs-spool (class=ocf provider=heartbeat type=Filesystem)
>>>>>    Attributes: device=/dev/drbd0 directory=/var/spool/postfix fstype=ext4 options=nodev,nosuid,noexec
>>>>>    Operations: start interval=0s timeout=60 (fs-spool-start-interval-0s)
>>>>>                stop interval=0s timeout=60 (fs-spool-stop-interval-0s)
>>>>>                monitor interval=20 timeout=40 (fs-spool-monitor-interval-20)
>>>>>   Resource: fs-mail (class=ocf provider=heartbeat type=Filesystem)
>>>>>    Attributes: device=/dev/drbd1 directory=/var/spool/mail fstype=ext4 options=nodev,nosuid,noexec
>>>>>    Operations: start interval=0s timeout=60 (fs-mail-start-interval-0s)
>>>>>                stop interval=0s timeout=60 (fs-mail-stop-interval-0s)
>>>>>                monitor interval=20 timeout=40 (fs-mail-monitor-interval-20)
>>>>>  Group: mail-services
>>>>>   Resource: postfix (class=ocf provider=heartbeat type=postfix)
>>>>>    Operations: start interval=0s timeout=20s (postfix-start-interval-0s)
>>>>>                stop interval=0s timeout=20s (postfix-stop-interval-0s)
>>>>>                monitor interval=45s (postfix-monitor-interval-45s)
>>>>>
>>>>> Stonith Devices:
>>>>> Fencing Levels:
>>>>>
>>>>> Location Constraints:
>>>>> Ordering Constraints:
>>>>>   start network-services then promote mail-clone (kind:Mandatory) (id:order-network-services-mail-clone-mandatory)
>>>>>   promote mail-clone then promote spool-clone (kind:Mandatory) (id:order-mail-clone-spool-clone-mandatory)
>>>>>   promote spool-clone then start fs-services (kind:Mandatory) (id:order-spool-clone-fs-services-mandatory)
>>>>>   start fs-services then start mail-services (kind:Mandatory) (id:order-fs-services-mail-services-mandatory)
>>>>> Colocation Constraints:
>>>>>   network-services with spool-clone (score:INFINITY) (rsc-role:Started) (with-rsc-role:Master) (id:colocation-network-services-spool-clone-INFINITY)
>>>>>   network-services with mail-clone (score:INFINITY) (rsc-role:Started) (with-rsc-role:Master) (id:colocation-network-services-mail-clone-INFINITY)
>>>>>   network-services with fs-services (score:INFINITY) (id:colocation-network-services-fs-services-INFINITY)
>>>>>   network-services with mail-services (score:INFINITY) (id:colocation-network-services-mail-services-INFINITY)
>>>>
>>>> I'm not sure whether it's causing your issue, but I would make the constraints reflect the logical relationships better.
>>>>
>>>> For example, network-services only needs to be colocated with mail-services logically; it's mail-services that needs to be with fs-services, and fs-services that needs to be with spool-clone/mail-clone master. In other words, don't make the highest-level resource depend on everything else; make each level depend on the level below it.
>>>>
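Expressed with pcs, that per-level colocation chain might look like this (a sketch only, using the resource names from the config above; the existing constraints would first need to be removed with "pcs constraint remove <id>"):

    # each level is colocated only with the level directly below it
    pcs constraint colocation add mail-services with fs-services INFINITY
    pcs constraint colocation add fs-services with master spool-clone INFINITY
    pcs constraint colocation add fs-services with master mail-clone INFINITY
    pcs constraint colocation add network-services with mail-services INFINITY
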
>>>> Also, I would guess that the virtual IP only needs to be ordered before mail-services, and mail-clone and spool-clone could both be ordered before fs-services, rather than ordering mail-clone before spool-clone.
>>>>
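And the matching ordering, under the same assumptions (both DRBD masters before the filesystems, filesystems before the services, and the virtual IP only before mail-services):

    pcs constraint order promote spool-clone then start fs-services
    pcs constraint order promote mail-clone then start fs-services
    pcs constraint order start fs-services then start mail-services
    pcs constraint order start network-services then start mail-services
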
>>>>> Resources Defaults:
>>>>>  migration-threshold: 3
>>>>> Operations Defaults:
>>>>>  on-fail: restart
>>>>>
>>>>> Cluster Properties:
>>>>>  cluster-infrastructure: corosync
>>>>>  cluster-name: mailcluster
>>>>>  cluster-recheck-interval: 5min
>>>>>  dc-version: 1.1.13-10.el7_2.2-44eb2dd
>>>>>  default-resource-stickiness: infinity
>>>>>  have-watchdog: false
>>>>>  last-lrm-refresh: 1457613674
>>>>>  no-quorum-policy: ignore
>>>>>  pe-error-series-max: 1024
>>>>>  pe-input-series-max: 1024
>>>>>  pe-warn-series-max: 1024
>>>>>  stonith-enabled: false
>>>>>
>>>>>
>>>>> Mar 10 13:37:20 [7415] HWJ-626.domain.local        cib:     info: cib_perform_op:       Diff: --- 0.197.15 2
>>>>> Mar 10 13:37:20 [7415] HWJ-626.domain.local        cib:     info: cib_perform_op:       Diff: +++ 0.197.16 (null)
>>>>> Mar 10 13:37:20 [7415] HWJ-626.domain.local        cib:     info: cib_perform_op:       +  /cib:  @num_updates=16
>>>>> Mar 10 13:37:20 [7415] HWJ-626.domain.local        cib:     info: cib_perform_op:       +  /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='postfix']/lrm_rsc_op[@id='postfix_last_failure_0']:  @transition-key=4:1234:0:ae755a85-c250-498f-9c94-ddd8a7e2788a, @transition-magic=0:7;4:1234:0:ae755a85-c250-498f-9c94-ddd8a7e2788a, @call-id=1274, @last-rc-change=1457613440
>>>>> Mar 10 13:37:20 [7420] HWJ-626.domain.local       crmd:     info: abort_transition_graph:       Transition aborted by postfix_monitor_45000 'modify' on mail1: Inactive graph (magic=0:7;4:1234:0:ae755a85-c250-498f-9c94-ddd8a7e2788a, cib=0.197.16, source=process_graph_event:598, 1)
>>>>> Mar 10 13:37:20 [7420] HWJ-626.domain.local       crmd:     info: update_failcount:     Updating failcount for postfix on mail1 after failed monitor: rc=7 (update=value++, time=1457613440)
>>>>> Mar 10 13:37:20 [7418] HWJ-626.domain.local      attrd:     info: attrd_client_update:  Expanded fail-count-postfix=value++ to 3
>>>>> Mar 10 13:37:20 [7415] HWJ-626.domain.local        cib:     info: cib_process_request:  Completed cib_modify operation for section status: OK (rc=0, origin=mail1/crmd/196, version=0.197.16)
>>>>> Mar 10 13:37:20 [7418] HWJ-626.domain.local      attrd:     info: attrd_peer_update:    Setting fail-count-postfix[mail1]: 2 -> 3 from mail2
>>>>> Mar 10 13:37:20 [7418] HWJ-626.domain.local      attrd:     info: write_attribute:      Sent update 400 with 2 changes for fail-count-postfix, id=<n/a>, set=(null)
>>>>> Mar 10 13:37:20 [7415] HWJ-626.domain.local        cib:     info: cib_process_request:  Forwarding cib_modify operation for section status to master (origin=local/attrd/400)
>>>>> Mar 10 13:37:20 [7420] HWJ-626.domain.local       crmd:     info: process_graph_event:  Detected action (1234.4) postfix_monitor_45000.1274=not running: failed
>>>>> Mar 10 13:37:20 [7418] HWJ-626.domain.local      attrd:     info: attrd_peer_update:    Setting last-failure-postfix[mail1]: 1457613347 -> 1457613440 from mail2
>>>>> Mar 10 13:37:20 [7420] HWJ-626.domain.local       crmd:   notice: do_state_transition:  State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
>>>>> Mar 10 13:37:20 [7418] HWJ-626.domain.local      attrd:     info: write_attribute:      Sent update 401 with 2 changes for last-failure-postfix, id=<n/a>, set=(null)
>>>>> Mar 10 13:37:20 [7415] HWJ-626.domain.local        cib:     info: cib_perform_op:       Diff: --- 0.197.16 2
>>>>> Mar 10 13:37:20 [7415] HWJ-626.domain.local        cib:     info: cib_perform_op:       Diff: +++ 0.197.17 (null)
>>>>> Mar 10 13:37:20 [7415] HWJ-626.domain.local        cib:     info: cib_perform_op:       +  /cib:  @num_updates=17
>>>>> Mar 10 13:37:20 [7415] HWJ-626.domain.local        cib:     info: cib_perform_op:       +  /cib/status/node_state[@id='1']/transient_attributes[@id='1']/instance_attributes[@id='status-1']/nvpair[@id='status-1-fail-count-postfix']:  @value=3
>>>>> Mar 10 13:37:20 [7415] HWJ-626.domain.local        cib:     info: cib_process_request:  Completed cib_modify operation for section status: OK (rc=0, origin=mail2/attrd/400, version=0.197.17)
>>>>> Mar 10 13:37:20 [7420] HWJ-626.domain.local       crmd:     info: abort_transition_graph:       Transition aborted by status-1-fail-count-postfix, fail-count-postfix=3: Transient attribute change (modify cib=0.197.17, source=abort_unless_down:319, path=/cib/status/node_state[@id='1']/transient_attributes[@id='1']/instance_attributes[@id='status-1']/nvpair[@id='status-1-fail-count-postfix'], 1)
>>>>> Mar 10 13:37:20 [7418] HWJ-626.domain.local      attrd:     info: attrd_cib_callback:   Update 400 for fail-count-postfix: OK (0)
>>>>> Mar 10 13:37:20 [7418] HWJ-626.domain.local      attrd:     info: attrd_cib_callback:   Update 400 for fail-count-postfix[mail1]=3: OK (0)
>>>>> Mar 10 13:37:20 [7418] HWJ-626.domain.local      attrd:     info: attrd_cib_callback:   Update 400 for fail-count-postfix[mail2]=(null): OK (0)
>>>>> Mar 10 13:37:21 [7415] HWJ-626.domain.local        cib:     info: cib_process_request:  Forwarding cib_modify operation for section status to master (origin=local/attrd/401)
>>>>> Mar 10 13:37:21 [7415] HWJ-626.domain.local        cib:     info: cib_perform_op:       Diff: --- 0.197.17 2
>>>>> Mar 10 13:37:21 [7415] HWJ-626.domain.local        cib:     info: cib_perform_op:       Diff: +++ 0.197.18 (null)
>>>>> Mar 10 13:37:21 [7415] HWJ-626.domain.local        cib:     info: cib_perform_op:       +  /cib:  @num_updates=18
>>>>> Mar 10 13:37:21 [7415] HWJ-626.domain.local        cib:     info: cib_perform_op:       +  /cib/status/node_state[@id='1']/transient_attributes[@id='1']/instance_attributes[@id='status-1']/nvpair[@id='status-1-last-failure-postfix']:  @value=1457613440
>>>>> Mar 10 13:37:21 [7415] HWJ-626.domain.local        cib:     info: cib_process_request:  Completed cib_modify operation for section status: OK (rc=0, origin=mail2/attrd/401, version=0.197.18)
>>>>> Mar 10 13:37:21 [7418] HWJ-626.domain.local      attrd:     info: attrd_cib_callback:   Update 401 for last-failure-postfix: OK (0)
>>>>> Mar 10 13:37:21 [7418] HWJ-626.domain.local      attrd:     info: attrd_cib_callback:   Update 401 for last-failure-postfix[mail1]=1457613440: OK (0)
>>>>> Mar 10 13:37:21 [7418] HWJ-626.domain.local      attrd:     info: attrd_cib_callback:   Update 401 for last-failure-postfix[mail2]=1457610376: OK (0)
>>>>> Mar 10 13:37:21 [7420] HWJ-626.domain.local       crmd:     info: abort_transition_graph:       Transition aborted by status-1-last-failure-postfix, last-failure-postfix=1457613440: Transient attribute change (modify cib=0.197.18, source=abort_unless_down:319, path=/cib/status/node_state[@id='1']/transient_attributes[@id='1']/instance_attributes[@id='status-1']/nvpair[@id='status-1-last-failure-postfix'], 1)
>>>>> Mar 10 13:37:21 [7419] HWJ-626.domain.local    pengine:   notice: unpack_config:        On loss of CCM Quorum: Ignore
>>>>> Mar 10 13:37:21 [7419] HWJ-626.domain.local    pengine:     info: determine_online_status:      Node mail1 is online
>>>>> Mar 10 13:37:21 [7419] HWJ-626.domain.local    pengine:     info: determine_online_status:      Node mail2 is online
>>>>> Mar 10 13:37:21 [7419] HWJ-626.domain.local    pengine:     info: determine_op_status:  Operation monitor found resource mail:0 active in master mode on mail1
>>>>> Mar 10 13:37:21 [7419] HWJ-626.domain.local    pengine:     info: determine_op_status:  Operation monitor found resource spool:0 active in master mode on mail1
>>>>> Mar 10 13:37:21 [7419] HWJ-626.domain.local    pengine:     info: determine_op_status:  Operation monitor found resource fs-spool active on mail1
>>>>> Mar 10 13:37:21 [7419] HWJ-626.domain.local    pengine:     info: determine_op_status:  Operation monitor found resource fs-mail active on mail1
>>>>> Mar 10 13:37:21 [7419] HWJ-626.domain.local    pengine:  warning: unpack_rsc_op_failure:        Processing failed op monitor for postfix on mail1: not running (7)
>>>>> Mar 10 13:37:21 [7419] HWJ-626.domain.local    pengine:     info: determine_op_status:  Operation monitor found resource spool:1 active in master mode on mail2
>>>>> Mar 10 13:37:21 [7419] HWJ-626.domain.local    pengine:     info: determine_op_status:  Operation monitor found resource mail:1 active in master mode on mail2
>>>>> Mar 10 13:37:21 [7419] HWJ-626.domain.local    pengine:     info: group_print:   Resource Group: network-services
>>>>> Mar 10 13:37:21 [7419] HWJ-626.domain.local    pengine:     info: native_print:      virtualip-1        (ocf::heartbeat:IPaddr2):       Started mail1
>>>>> Mar 10 13:37:21 [7419] HWJ-626.domain.local    pengine:     info: clone_print:   Master/Slave Set: spool-clone [spool]
>>>>> Mar 10 13:37:21 [7419] HWJ-626.domain.local    pengine:     info: short_print:       Masters: [ mail1 ]
>>>>> Mar 10 13:37:21 [7419] HWJ-626.domain.local    pengine:     info: short_print:       Slaves: [ mail2 ]
>>>>> Mar 10 13:37:21 [7419] HWJ-626.domain.local    pengine:     info: clone_print:   Master/Slave Set: mail-clone [mail]
>>>>> Mar 10 13:37:21 [7419] HWJ-626.domain.local    pengine:     info: short_print:       Masters: [ mail1 ]
>>>>> Mar 10 13:37:21 [7419] HWJ-626.domain.local    pengine:     info: short_print:       Slaves: [ mail2 ]
>>>>> Mar 10 13:37:21 [7419] HWJ-626.domain.local    pengine:     info: group_print:   Resource Group: fs-services
>>>>> Mar 10 13:37:21 [7419] HWJ-626.domain.local    pengine:     info: native_print:      fs-spool   (ocf::heartbeat:Filesystem):    Started mail1
>>>>> Mar 10 13:37:21 [7419] HWJ-626.domain.local    pengine:     info: native_print:      fs-mail    (ocf::heartbeat:Filesystem):    Started mail1
>>>>> Mar 10 13:37:21 [7419] HWJ-626.domain.local    pengine:     info: group_print:   Resource Group: mail-services
>>>>> Mar 10 13:37:21 [7419] HWJ-626.domain.local    pengine:     info: native_print:      postfix    (ocf::heartbeat:postfix):       FAILED mail1
>>>>> Mar 10 13:37:21 [7419] HWJ-626.domain.local    pengine:     info: get_failcount_full:   postfix has failed 3 times on mail1
>>>>> Mar 10 13:37:21 [7419] HWJ-626.domain.local    pengine:  warning: common_apply_stickiness:      Forcing postfix away from mail1 after 3 failures (max=3)
>>>>> Mar 10 13:37:21 [7419] HWJ-626.domain.local    pengine:     info: master_color: Promoting mail:0 (Master mail1)
>>>>> Mar 10 13:37:21 [7419] HWJ-626.domain.local    pengine:     info: master_color: mail-clone: Promoted 1 instances of a possible 1 to master
>>>>> Mar 10 13:37:21 [7419] HWJ-626.domain.local    pengine:     info: master_color: Promoting spool:0 (Master mail1)
>>>>> Mar 10 13:37:21 [7419] HWJ-626.domain.local    pengine:     info: master_color: spool-clone: Promoted 1 instances of a possible 1 to master
>>>>> Mar 10 13:37:21 [7419] HWJ-626.domain.local    pengine:     info: rsc_merge_weights:    postfix: Rolling back scores from virtualip-1
>>>>> Mar 10 13:37:21 [7419] HWJ-626.domain.local    pengine:     info: native_color: Resource virtualip-1 cannot run anywhere
>>>>> Mar 10 13:37:21 [7419] HWJ-626.domain.local    pengine:     info: RecurringOp:   Start recurring monitor (45s) for postfix on mail2
>>>>> Mar 10 13:37:21 [7419] HWJ-626.domain.local    pengine:   notice: LogActions:   Stop    virtualip-1     (mail1)
>>>>> Mar 10 13:37:21 [7419] HWJ-626.domain.local    pengine:     info: LogActions:   Leave   spool:0 (Master mail1)
>>>>> Mar 10 13:37:21 [7419] HWJ-626.domain.local    pengine:     info: LogActions:   Leave   spool:1 (Slave mail2)
>>>>> Mar 10 13:37:21 [7419] HWJ-626.domain.local    pengine:     info: LogActions:   Leave   mail:0  (Master mail1)
>>>>> Mar 10 13:37:21 [7419] HWJ-626.domain.local    pengine:     info: LogActions:   Leave   mail:1  (Slave mail2)
>>>>> Mar 10 13:37:21 [7419] HWJ-626.domain.local    pengine:   notice: LogActions:   Stop    fs-spool        (Started mail1)
>>>>> Mar 10 13:37:21 [7419] HWJ-626.domain.local    pengine:   notice: LogActions:   Stop    fs-mail (Started mail1)
>>>>> Mar 10 13:37:21 [7419] HWJ-626.domain.local    pengine:   notice: LogActions:   Stop    postfix (Started mail1)
>>>>> Mar 10 13:37:21 [7419] HWJ-626.domain.local    pengine:   notice: process_pe_message:   Calculated Transition 1235: /var/lib/pacemaker/pengine/pe-input-302.bz2
>>>>> Mar 10 13:37:21 [7420] HWJ-626.domain.local       crmd:     info: handle_response:      pe_calc calculation pe_calc-dc-1457613441-3756 is obsolete
>>>>> Mar 10 13:37:21 [7419] HWJ-626.domain.local    pengine:   notice: unpack_config:        On loss of CCM Quorum: Ignore
>>>>> Mar 10 13:37:21 [7419] HWJ-626.domain.local    pengine:     info: determine_online_status:      Node mail1 is online
>>>>> Mar 10 13:37:21 [7419] HWJ-626.domain.local    pengine:     info: determine_online_status:      Node mail2 is online
>>>>> Mar 10 13:37:21 [7419] HWJ-626.domain.local    pengine:     info: determine_op_status:  Operation monitor found resource mail:0 active in master mode on mail1
>>>>> Mar 10 13:37:21 [7419] HWJ-626.domain.local    pengine:     info: determine_op_status:  Operation monitor found resource spool:0 active in master mode on mail1
>>>>> Mar 10 13:37:21 [7419] HWJ-626.domain.local    pengine:     info: determine_op_status:  Operation monitor found resource fs-spool active on mail1
>>>>> Mar 10 13:37:21 [7419] HWJ-626.domain.local    pengine:     info: determine_op_status:  Operation monitor found resource fs-mail active on mail1
>>>>> Mar 10 13:37:21 [7419] HWJ-626.domain.local    pengine:  warning: unpack_rsc_op_failure:        Processing failed op monitor for postfix on mail1: not running (7)
>>>>> Mar 10 13:37:21 [7419] HWJ-626.domain.local    pengine:     info: determine_op_status:  Operation monitor found resource spool:1 active in master mode on mail2
>>>>> Mar 10 13:37:21 [7419] HWJ-626.domain.local    pengine:     info: determine_op_status:  Operation monitor found resource mail:1 active in master mode on mail2
>>>>> Mar 10 13:37:21 [7419] HWJ-626.domain.local    pengine:     info: group_print:   Resource Group: network-services
>>>>> Mar 10 13:37:21 [7419] HWJ-626.domain.local    pengine:     info: native_print:      virtualip-1        (ocf::heartbeat:IPaddr2):       Started mail1
>>>>> Mar 10 13:37:21 [7419] HWJ-626.domain.local    pengine:     info: clone_print:   Master/Slave Set: spool-clone [spool]
>>>>> Mar 10 13:37:21 [7419] HWJ-626.domain.local    pengine:     info: short_print:       Masters: [ mail1 ]
>>>>> Mar 10 13:37:21 [7419] HWJ-626.domain.local    pengine:     info: short_print:       Slaves: [ mail2 ]
>>>>> Mar 10 13:37:21 [7419] HWJ-626.domain.local    pengine:     info: clone_print:   Master/Slave Set: mail-clone [mail]
>>>>> Mar 10 13:37:21 [7419] HWJ-626.domain.local    pengine:     info: short_print:       Masters: [ mail1 ]
>>>>> Mar 10 13:37:21 [7419] HWJ-626.domain.local    pengine:     info: short_print:       Slaves: [ mail2 ]
>>>>> Mar 10 13:37:21 [7419] HWJ-626.domain.local    pengine:     info: group_print:   Resource Group: fs-services
>>>>> Mar 10 13:37:21 [7419] HWJ-626.domain.local    pengine:     info: native_print:      fs-spool   (ocf::heartbeat:Filesystem):    Started mail1
>>>>> Mar 10 13:37:21 [7419] HWJ-626.domain.local    pengine:     info: native_print:      fs-mail    (ocf::heartbeat:Filesystem):    Started mail1
>>>>> Mar 10 13:37:21 [7419] HWJ-626.domain.local    pengine:     info: group_print:   Resource Group: mail-services
>>>>> Mar 10 13:37:21 [7419] HWJ-626.domain.local    pengine:     info: native_print:      postfix    (ocf::heartbeat:postfix):       FAILED mail1
>>>>> Mar 10 13:37:21 [7419] HWJ-626.domain.local    pengine:     info: get_failcount_full:   postfix has failed 3 times on mail1
>>>>> Mar 10 13:37:21 [7419] HWJ-626.domain.local    pengine:  warning: common_apply_stickiness:      Forcing postfix away from mail1 after 3 failures (max=3)
>>>>> Mar 10 13:37:21 [7419] HWJ-626.domain.local    pengine:     info: master_color: Promoting mail:0 (Master mail1)
>>>>> Mar 10 13:37:21 [7419] HWJ-626.domain.local    pengine:     info: master_color: mail-clone: Promoted 1 instances of a possible 1 to master
>>>>> Mar 10 13:37:21 [7419] HWJ-626.domain.local    pengine:     info: master_color: Promoting spool:0 (Master mail1)
>>>>> Mar 10 13:37:21 [7419] HWJ-626.domain.local    pengine:     info: master_color: spool-clone: Promoted 1 instances of a possible 1 to master
>>>>> Mar 10 13:37:21 [7419] HWJ-626.domain.local    pengine:     info: rsc_merge_weights:    postfix: Rolling back scores from virtualip-1
>>>>> Mar 10 13:37:21 [7419] HWJ-626.domain.local    pengine:     info: native_color: Resource virtualip-1 cannot run anywhere
>>>>> Mar 10 13:37:21 [7419] HWJ-626.domain.local    pengine:     info: RecurringOp:   Start recurring monitor (45s) for postfix on mail2
>>>>> Mar 10 13:37:21 [7419] HWJ-626.domain.local    pengine:   notice: LogActions:   Stop    virtualip-1     (mail1)
>>>>> Mar 10 13:37:21 [7419] HWJ-626.domain.local    pengine:     info: LogActions:   Leave   spool:0 (Master mail1)
>>>>> Mar 10 13:37:21 [7419] HWJ-626.domain.local    pengine:     info: LogActions:   Leave   spool:1 (Slave mail2)
>>>>> Mar 10 13:37:21 [7419] HWJ-626.domain.local    pengine:     info: LogActions:   Leave   mail:0  (Master mail1)
>>>>> Mar 10 13:37:21 [7419] HWJ-626.domain.local    pengine:     info: LogActions:   Leave   mail:1  (Slave mail2)
>>>>> Mar 10 13:37:21 [7419] HWJ-626.domain.local    pengine:   notice: LogActions:   Stop    fs-spool        (Started mail1)
>>>>> Mar 10 13:37:21 [7419] HWJ-626.domain.local    pengine:   notice: LogActions:   Stop    fs-mail (Started mail1)
>>>>> Mar 10 13:37:21 [7419] HWJ-626.domain.local    pengine:   notice: LogActions:   Stop    postfix (Started mail1)
>>>>> Mar 10 13:37:21 [7420] HWJ-626.domain.local       crmd:     info: do_state_transition:  State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
>>>>> Mar 10 13:37:21 [7419] HWJ-626.domain.local    pengine:   notice: process_pe_message:   Calculated Transition 1236: /var/lib/pacemaker/pengine/pe-input-303.bz2
>>>>> Mar 10 13:37:21 [7420] HWJ-626.domain.local       crmd:     info: do_te_invoke: Processing graph 1236 (ref=pe_calc-dc-1457613441-3757) derived from /var/lib/pacemaker/pengine/pe-input-303.bz2
>>>>> Mar 10 13:37:21 [7420] HWJ-626.domain.local       crmd:   notice: te_rsc_command:       Initiating action 12: stop virtualip-1_stop_0 on mail1
>>>>> Mar 10 13:37:21 [7420] HWJ-626.domain.local       crmd:   notice: te_rsc_command:       Initiating action 5: stop postfix_stop_0 on mail1
>>>>> Mar 10 13:37:21 [7415] HWJ-626.domain.local        cib:     info: cib_perform_op:       Diff: --- 0.197.18 2
>>>>> Mar 10 13:37:21 [7415] HWJ-626.domain.local        cib:     info: cib_perform_op:       Diff: +++ 0.197.19 (null)
>>>>> Mar 10 13:37:21 [7415] HWJ-626.domain.local        cib:     info: cib_perform_op:       +  /cib:  @num_updates=19
>>>>> Mar 10 13:37:21 [7415] HWJ-626.domain.local        cib:     info: cib_perform_op:       +  /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='virtualip-1']/lrm_rsc_op[@id='virtualip-1_last_0']:  @operation_key=virtualip-1_stop_0, @operation=stop, @transition-key=12:1236:0:ae755a85-c250-498f-9c94-ddd8a7e2788a, @transition-magic=0:0;12:1236:0:ae755a85-c250-498f-9c94-ddd8a7e2788a, @call-id=1276, @last-run=1457613441, @last-rc-change=1457613441, @exec-time=66
>>>>> Mar 10 13:37:21 [7415] HWJ-626.domain.local        cib:     info: cib_process_request:  Completed cib_modify operation for section status: OK (rc=0, origin=mail1/crmd/197, version=0.197.19)
>>>>> Mar 10 13:37:21 [7420] HWJ-626.domain.local       crmd:     info: match_graph_event:    Action virtualip-1_stop_0 (12) confirmed on mail1 (rc=0)
>>>>> Mar 10 13:37:21 [7415] HWJ-626.domain.local        cib:     info: cib_perform_op:       Diff: --- 0.197.19 2
>>>>> Mar 10 13:37:21 [7415] HWJ-626.domain.local        cib:     info: cib_perform_op:       Diff: +++ 0.197.20 (null)
>>>>> Mar 10 13:37:21 [7415] HWJ-626.domain.local        cib:     info: cib_perform_op:       +  /cib:  @num_updates=20
>>>>> Mar 10 13:37:21 [7415] HWJ-626.domain.local        cib:     info: cib_perform_op:       +  /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='postfix']/lrm_rsc_op[@id='postfix_last_0']:  @operation_key=postfix_stop_0, @operation=stop, @transition-key=5:1236:0:ae755a85-c250-498f-9c94-ddd8a7e2788a, @transition-magic=0:0;5:1236:0:ae755a85-c250-498f-9c94-ddd8a7e2788a, @call-id=1278, @last-run=1457613441, @last-rc-change=1457613441, @exec-time=476
>>>>> Mar 10 13:37:21 [7420] HWJ-626.domain.local       crmd:     info: match_graph_event:    Action postfix_stop_0 (5) confirmed on mail1 (rc=0)
>>>>> Mar 10 13:37:21 [7420] HWJ-626.domain.local       crmd:   notice: te_rsc_command:       Initiating action 79: stop fs-mail_stop_0 on mail1
>>>>> Mar 10 13:37:21 [7415] HWJ-626.domain.local        cib:     info: cib_process_request:  Completed cib_modify operation for section status: OK (rc=0, origin=mail1/crmd/198, version=0.197.20)
>>>>> Mar 10 13:37:21 [7415] HWJ-626.domain.local        cib:     info: cib_perform_op:       Diff: --- 0.197.20 2
>>>>> Mar 10 13:37:21 [7415] HWJ-626.domain.local        cib:     info: cib_perform_op:       Diff: +++ 0.197.21 (null)
>>>>> Mar 10 13:37:21 [7415] HWJ-626.domain.local        cib:     info: cib_perform_op:       +  /cib:  @num_updates=21
>>>>> Mar 10 13:37:21 [7415] HWJ-626.domain.local        cib:     info: cib_perform_op:       +  /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='fs-mail']/lrm_rsc_op[@id='fs-mail_last_0']:  @operation_key=fs-mail_stop_0, @operation=stop, @transition-key=79:1236:0:ae755a85-c250-498f-9c94-ddd8a7e2788a, @transition-magic=0:0;79:1236:0:ae755a85-c250-498f-9c94-ddd8a7e2788a, @call-id=1280, @last-run=1457613441, @last-rc-change=1457613441, @exec-time=88, @queue-time=1
>>>>> Mar 10 13:37:21 [7415] HWJ-626.domain.local        cib:     info: cib_process_request:  Completed cib_modify operation for section status: OK (rc=0, origin=mail1/crmd/199, version=0.197.21)
>>>>> Mar 10 13:37:21 [7420] HWJ-626.domain.local       crmd:     info: match_graph_event:    Action fs-mail_stop_0 (79) confirmed on mail1 (rc=0)
>>>>> Mar 10 13:37:21 [7420] HWJ-626.domain.local       crmd:   notice: te_rsc_command:       Initiating action 77: stop fs-spool_stop_0 on mail1
>>>>> Mar 10 13:37:21 [7415] HWJ-626.domain.local        cib:     info: cib_perform_op:       Diff: --- 0.197.21 2
>>>>> Mar 10 13:37:21 [7415] HWJ-626.domain.local        cib:     info: cib_perform_op:       Diff: +++ 0.197.22 (null)
>>>>> Mar 10 13:37:21 [7415] HWJ-626.domain.local        cib:     info: cib_perform_op:       +  /cib:  @num_updates=22
>>>>> Mar 10 13:37:21 [7415] HWJ-626.domain.local        cib:     info: cib_perform_op:       +  /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='fs-spool']/lrm_rsc_op[@id='fs-spool_last_0']:  @operation_key=fs-spool_stop_0, @operation=stop, @transition-key=77:1236:0:ae755a85-c250-498f-9c94-ddd8a7e2788a, @transition-magic=0:0;77:1236:0:ae755a85-c250-498f-9c94-ddd8a7e2788a, @call-id=1282, @last-run=1457613441, @last-rc-change=1457613441, @exec-time=86
>>>>> Mar 10 13:37:21 [7415] HWJ-626.domain.local        cib:     info: cib_process_request:  Completed cib_modify operation for section status: OK (rc=0, origin=mail1/crmd/200, version=0.197.22)
>>>>> Mar 10 13:37:21 [7420] HWJ-626.domain.local       crmd:     info: match_graph_event:    Action fs-spool_stop_0 (77) confirmed on mail1 (rc=0)
>>>>> Mar 10 13:37:21 [7420] HWJ-626.domain.local       crmd:  warning: run_graph:    Transition 1236 (Complete=11, Pending=0, Fired=0, Skipped=0, Incomplete=1, Source=/var/lib/pacemaker/pengine/pe-input-303.bz2): Terminated
>>>>> Mar 10 13:37:21 [7420] HWJ-626.domain.local       crmd:  warning: te_graph_trigger:     Transition failed: terminated
>>>>> Mar 10 13:37:21 [7420] HWJ-626.domain.local       crmd:   notice: print_graph:  Graph 1236 with 12 actions: batch-limit=12 jobs, network-delay=0ms
>>>>> Mar 10 13:37:21 [7420] HWJ-626.domain.local       crmd:   notice: print_synapse:        [Action   16]: Completed pseudo op network-services_stopped_0     on N/A (priority: 0, waiting: none)
>>>>> Mar 10 13:37:21 [7420] HWJ-626.domain.local       crmd:   notice: print_synapse:        [Action   15]: Completed pseudo op network-services_stop_0        on N/A (priority: 0, waiting: none)
>>>>> Mar 10 13:37:21 [7420] HWJ-626.domain.local       crmd:   notice: print_synapse:        [Action   12]: Completed rsc op virtualip-1_stop_0              on mail1 (priority: 0, waiting: none)
>>>>> Mar 10 13:37:21 [7420] HWJ-626.domain.local       crmd:   notice: print_synapse:        [Action   84]: Completed pseudo op fs-services_stopped_0          on N/A (priority: 0, waiting: none)
>>>>> Mar 10 13:37:21 [7420] HWJ-626.domain.local       crmd:   notice: print_synapse:        [Action   83]: Completed pseudo op fs-services_stop_0             on N/A (priority: 0, waiting: none)
>>>>> Mar 10 13:37:21 [7420] HWJ-626.domain.local       crmd:   notice: print_synapse:        [Action   77]: Completed rsc op fs-spool_stop_0             on mail1 (priority: 0, waiting: none)
>>>>> Mar 10 13:37:21 [7420] HWJ-626.domain.local       crmd:   notice: print_synapse:        [Action   79]: Completed rsc op fs-mail_stop_0              on mail1 (priority: 0, waiting: none)
>>>>> Mar 10 13:37:21 [7420] HWJ-626.domain.local       crmd:   notice: print_synapse:        [Action   90]: Completed pseudo op mail-services_stopped_0        on N/A (priority: 0, waiting: none)
>>>>> Mar 10 13:37:21 [7420] HWJ-626.domain.local       crmd:   notice: print_synapse:        [Action   89]: Completed pseudo op mail-services_stop_0           on N/A (priority: 0, waiting: none)
>>>>> Mar 10 13:37:21 [7420] HWJ-626.domain.local       crmd:   notice: print_synapse:        [Action   86]: Pending rsc op postfix_monitor_45000             on mail2 (priority: 0, waiting: none)
>>>>> Mar 10 13:37:21 [7420] HWJ-626.domain.local       crmd:   notice: print_synapse:         * [Input 85]: Unresolved dependency rsc op postfix_start_0 on mail2
>>>>> Mar 10 13:37:21 [7420] HWJ-626.domain.local       crmd:   notice: print_synapse:        [Action    5]: Completed rsc op postfix_stop_0              on mail1 (priority: 0, waiting: none)
>>>>> Mar 10 13:37:21 [7420] HWJ-626.domain.local       crmd:   notice: print_synapse:        [Action    8]: Completed pseudo op all_stopped              on N/A (priority: 0, waiting: none)
>>>>> Mar 10 13:37:21 [7420] HWJ-626.domain.local       crmd:     info: do_log:     FSA: Input I_TE_SUCCESS from notify_crmd() received in state S_TRANSITION_ENGINE
>>>>> Mar 10 13:37:21 [7420] HWJ-626.domain.local       crmd:   notice: do_state_transition:  State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
>>>>> Mar 10 13:37:26 [7415] HWJ-626.domain.local        cib:     info: cib_process_ping:     Reporting our current digest to mail2: 3896ee29cdb6ba128330b0ef6e41bd79 for 0.197.22 (0x1544a30 0)