[Pacemaker] fence_vmware_soap fence does reboot VM instead of powering it off

Andrew Beekhof andrew at beekhof.net
Mon Aug 26 21:37:43 EDT 2013


Specifying 'action="off"' for a fence device was not possible prior to 1.1.10: we perform other actions on the devices too (monitor, status, list), so the value was unconditionally overwritten.
You should instead be using this cluster property, from "man pengine":

       stonith-action = enum [reboot]
           Action to send to STONITH device

           Allowed values: reboot, poweroff, off
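
For example, with the crm shell used elsewhere in this thread, the property can be set cluster-wide (a minimal sketch; verify the syntax against your crm version):

    # Tell the cluster to power fenced nodes off instead of rebooting them
    crm configure property stonith-action=off

    # Confirm the property landed in cib-bootstrap-options
    crm configure show | grep stonith-action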


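Incidentally, the device-level "off" action can also be tested by hand with the agent's own command-line flags, as shown in the metadata you pasted below (a sketch using the placeholder values from your config; substitute real credentials):

    # Manually power the VM off via the ESX SOAP API (hypothetical values)
    fence_vmware_soap -a x.x.x.x -l administrator -p password -z -n tstcaps01 -o off
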
On 13/08/2013, at 9:27 PM, Mistina Michal <Michal.Mistina at virte.sk> wrote:

> Hi.
> I am using a 2-node cluster. The nodes are virtual machines running on ESX 5.1.
> Installed:
> - RHEL 6.3
> - pacemaker-1.1.7-6
> - fence-agents-3.1.5-25.el6_4.2
>  
> During the test, resources are running on VM tstcaps02, and fencing is done with the fence_vmware_soap agent.
> If I unplug the “network cable” of VM tstcaps01 to test fencing, VM tstcaps02 does perform fencing, but not the action I expected. I would like the action to be "off" so the fenced VM stays powered off. I thought I had configured that action correctly, but fencing performs a reboot instead.
>  
> What could be wrong in my CIB setup, and how can I correct it if there is a mistake?
> Also, does anybody know why the fence agent tries to list all virtual machines on the ESX server during fencing, as seen in /var/log/messages?
>  
> [root at tstcaps02 ~]# crm configure show
> node tstcaps01
> node tstcaps02
> primitive drbd_pg ocf:linbit:drbd \
>         params drbd_resource="postgres" \
>         op monitor interval="15" role="Master" \
>         op monitor interval="16" role="Slave" \
>         op start interval="0" timeout="240" \
>         op stop interval="0" timeout="120"
> primitive pg_fs ocf:heartbeat:Filesystem \
>         params device="/dev/vg_local-lv_pgsql/lv_pgsql" directory="/var/lib/pgsql/9.2/data" options="noatime,nodiratime" fstype="xfs" \
>         op start interval="0" timeout="60" \
>         op stop interval="0" timeout="120"
> primitive pg_lsb lsb:postgresql-9.2 \
>         op monitor interval="30" timeout="60" \
>         op start interval="0" timeout="120" \
>         op stop interval="0" timeout="120" \
>         meta target-role="Started"
> primitive pg_lvm ocf:heartbeat:LVM \
>         params volgrpname="vg_local-lv_pgsql" \
>         op start interval="0" timeout="30" \
>         op stop interval="0" timeout="30"
> primitive pg_vip ocf:heartbeat:IPaddr2 \
>         params ip="192.168.106.19" iflabel="tstcapsvip" \
>         op monitor interval="5"
> primitive vm-fence-tstcaps01 stonith:fence_vmware_soap \
>         params ipaddr="x.x.x.x" login="administrator" passwd="password" port="tstcaps01" ssl="1" retry_on="20" shell_timeout="10" login_timeout="10" action="off" verbose="true"
> primitive vm-fence-tstcaps02 stonith:fence_vmware_soap \
>         params ipaddr="x.x.x.x" login="administrator" passwd="password" port="tstcaps02" ssl="1" retry_on="20" shell_timeout="10" login_timeout="10" action="off" verbose="true"
> group PGServer pg_lvm pg_fs pg_lsb pg_vip \
>         meta target-role="Started"
> ms ms_drbd_pg drbd_pg \
>         meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
> location cli-prefer-PGServer PGServer \
>         rule $id="cli-prefer-rule-PGServer" inf: #uname eq tstcaps02
> location l-st-tstcaps01 vm-fence-tstcaps01 -inf: tstcaps01
> location l-st-tstcaps02 vm-fence-tstcaps02 -inf: tstcaps02
> location master-prefer-node1 pg_vip 50: tstcaps01
> colocation col_pg_drbd inf: PGServer ms_drbd_pg:Master
> order ord_pg inf: ms_drbd_pg:promote PGServer:start
> property $id="cib-bootstrap-options" \
>         dc-version="1.1.7-6.el6-148fccfd5985c5590cc601123c6c16e966b85d14" \
>         cluster-infrastructure="openais" \
>         expected-quorum-votes="4" \
>         stonith-enabled="true" \
>         no-quorum-policy="ignore" \
>         maintenance-mode="false" \
>         last-lrm-refresh="1376385351"
> rsc_defaults $id="rsc-options" \
>         resource-stickiness="100"
>  
> [root at tstcaps02 ~]# stonith_admin -a fence_vmware_soap -M
> <resource-agent name="fence_vmware_soap" shortdesc="Fence agent for VMWare over SOAP API">
>   <longdesc>fence_vmware_soap is an I/O Fencing agent which can be used with the virtual machines managed by VMWare products that have SOAP API v4.1+.
> .P
> Name of virtual machine (-n / port) has to be used in inventory path format (e.g. /datacenter/vm/Discovered virtual machine/myMachine). In the cases when name of yours VM is unique you can use it instead. Alternatively you can always use UUID (-U / uuid) to access virtual machine.</longdesc>
>   <vendor-url>http://www.vmware.com</vendor-url>
>   <parameters>
>     <parameter name="action" unique="0" required="1">
>       <getopt mixed="-o, --action=<action>"/>
>       <content type="string" default="reboot"/>
>       <shortdesc lang="en">Fencing Action</shortdesc>
>     </parameter>
>     <parameter name="ipaddr" unique="0" required="1">
>       <getopt mixed="-a, --ip=<ip>"/>
>       <content type="string"/>
>       <shortdesc lang="en">IP Address or Hostname</shortdesc>
>     </parameter>
>     <parameter name="login" unique="0" required="1">
>       <getopt mixed="-l, --username=<name>"/>
>       <content type="string"/>
>       <shortdesc lang="en">Login Name</shortdesc>
>     </parameter>
>     <parameter name="passwd" unique="0" required="0">
>       <getopt mixed="-p, --password=<password>"/>
>       <content type="string"/>
>       <shortdesc lang="en">Login password or passphrase</shortdesc>
>     </parameter>
>     <parameter name="passwd_script" unique="0" required="0">
>       <getopt mixed="-S, --password-script=<script>"/>
>       <content type="string"/>
>       <shortdesc lang="en">Script to retrieve password</shortdesc>
>     </parameter>
>     <parameter name="ssl" unique="0" required="0">
>       <getopt mixed="-z, --ssl"/>
>       <content type="boolean"/>
>       <shortdesc lang="en">SSL connection</shortdesc>
>     </parameter>
>     <parameter name="port" unique="0" required="0">
>       <getopt mixed="-n, --plug=<id>"/>
>       <content type="string"/>
>       <shortdesc lang="en">Physical plug number or name of virtual machine</shortdesc>
>     </parameter>
>     <parameter name="uuid" unique="0" required="0">
>       <getopt mixed="-U, --uuid"/>
>       <content type="string"/>
>       <shortdesc lang="en">The UUID of the virtual machine to fence.</shortdesc>
>     </parameter>
>     <parameter name="ipport" unique="0" required="0">
>       <getopt mixed="-u, --ipport=<port>"/>
>       <content type="string"/>
>       <shortdesc lang="en">TCP port to use for connection with device</shortdesc>
>     </parameter>
>     <parameter name="verbose" unique="0" required="0">
>       <getopt mixed="-v, --verbose"/>
>       <content type="boolean"/>
>       <shortdesc lang="en">Verbose mode</shortdesc>
>     </parameter>
>     <parameter name="debug" unique="0" required="0">
>       <getopt mixed="-D, --debug-file=<debugfile>"/>
>       <content type="string"/>
>       <shortdesc lang="en">Write debug information to given file</shortdesc>
>     </parameter>
>     <parameter name="version" unique="0" required="0">
>       <getopt mixed="-V, --version"/>
>       <content type="boolean"/>
>       <shortdesc lang="en">Display version information and exit</shortdesc>
>     </parameter>
>     <parameter name="help" unique="0" required="0">
>       <getopt mixed="-h, --help"/>
>       <content type="boolean"/>
>       <shortdesc lang="en">Display help and exit</shortdesc>
>     </parameter>
>     <parameter name="separator" unique="0" required="0">
>       <getopt mixed="-C, --separator=<char>"/>
>       <content type="string" default=","/>
>       <shortdesc lang="en">Separator for CSV created by operation list</shortdesc>
>     </parameter>
>     <parameter name="power_timeout" unique="0" required="0">
>       <getopt mixed="--power-timeout"/>
>       <content type="string" default="20"/>
>       <shortdesc lang="en">Test X seconds for status change after ON/OFF</shortdesc>
>     </parameter>
>     <parameter name="shell_timeout" unique="0" required="0">
>       <getopt mixed="--shell-timeout"/>
>       <content type="string" default="3"/>
>       <shortdesc lang="en">Wait X seconds for cmd prompt after issuing command</shortdesc>
>     </parameter>
>     <parameter name="login_timeout" unique="0" required="0">
>       <getopt mixed="--login-timeout"/>
>       <content type="string" default="5"/>
>       <shortdesc lang="en">Wait X seconds for cmd prompt after login</shortdesc>
>     </parameter>
>     <parameter name="power_wait" unique="0" required="0">
>       <getopt mixed="--power-wait"/>
>       <content type="string" default="0"/>
>       <shortdesc lang="en">Wait X seconds after issuing ON/OFF</shortdesc>
>     </parameter>
>     <parameter name="delay" unique="0" required="0">
>       <getopt mixed="--delay"/>
>       <content type="string" default="0"/>
>       <shortdesc lang="en">Wait X seconds before fencing is started</shortdesc>
>     </parameter>
>     <parameter name="retry_on" unique="0" required="0">
>       <getopt mixed="--retry-on"/>
>       <content type="string" default="1"/>
>       <shortdesc lang="en">Count of attempts to retry power on</shortdesc>
>     </parameter>
>   </parameters>
>   <actions>
>     <action name="on"/>
>     <action name="off"/>
>     <action name="reboot"/>
>     <action name="status"/>
>     <action name="list"/>
>     <action name="monitor"/>
>     <action name="metadata"/>
>     <action name="stop" timeout="20s"/>
>     <action name="start" timeout="20s"/>
>   </actions>
> </resource-agent>
>  
> [root at tstcaps02 ~]# less /var/log/messages
> ….omitted…
> Aug 13 13:20:40 TSTCAPS02 corosync[1464]:   [TOTEM ] A processor failed, forming new configuration.
> Aug 13 13:20:41 TSTCAPS02 cib[1470]:     info: ais_dispatch_message: Membership 2176: quorum still lost
> Aug 13 13:20:41 TSTCAPS02 cib[1470]:     info: crm_update_peer: Node tstcaps01: id=174762176 state=lost (new) addr=r(0) ip(192.168.106.10)  votes=1 born=2172 seen=2172 proc=00000000000000000000000000111312
> Aug 13 13:20:41 TSTCAPS02 crmd[1475]:     info: ais_dispatch_message: Membership 2176: quorum still lost
> Aug 13 13:20:41 TSTCAPS02 crmd[1475]:     info: ais_status_callback: status: tstcaps01 is now lost (was member)
> Aug 13 13:20:41 TSTCAPS02 crmd[1475]:     info: crm_update_peer: Node tstcaps01: id=174762176 state=lost (new) addr=r(0) ip(192.168.106.10)  votes=1 born=2172 seen=2172 proc=00000000000000000000000000111312
> Aug 13 13:20:41 TSTCAPS02 cib[1470]:     info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/1191, version=0.528.34): ok (rc=0)
> Aug 13 13:20:41 TSTCAPS02 crmd[1475]:     info: crmd_ais_dispatch: Setting expected votes to 4
> Aug 13 13:20:41 TSTCAPS02 cib[1470]:     info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/1194, version=0.528.36): ok (rc=0)
> Aug 13 13:20:41 TSTCAPS02 crmd[1475]:  warning: match_down_event: No match for shutdown action on tstcaps01
> Aug 13 13:20:41 TSTCAPS02 crmd[1475]:     info: te_update_diff: Stonith/shutdown of tstcaps01 not matched
> Aug 13 13:20:41 TSTCAPS02 crmd[1475]:     info: abort_transition_graph: te_update_diff:234 - Triggered transition abort (complete=1, tag=node_state, id=tstcaps01, magic=NA, cib=0.528.35) : Node failure
> Aug 13 13:20:41 TSTCAPS02 crmd[1475]:   notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
> Aug 13 13:20:41 TSTCAPS02 corosync[1464]:   [TOTEM ] A processor joined or left the membership and a new membership was formed.
> Aug 13 13:20:41 TSTCAPS02 pengine[1474]:     info: unpack_config: Startup probes: enabled
> Aug 13 13:20:41 TSTCAPS02 pengine[1474]:   notice: unpack_config: On loss of CCM Quorum: Ignore
> Aug 13 13:20:41 TSTCAPS02 pengine[1474]:     info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
> Aug 13 13:20:41 TSTCAPS02 pengine[1474]:     info: unpack_domains: Unpacking domains
> Aug 13 13:20:41 TSTCAPS02 pengine[1474]:     info: determine_online_status: Node tstcaps02 is online
> Aug 13 13:20:41 TSTCAPS02 pengine[1474]:  warning: pe_fence_node: Node tstcaps01 will be fenced because it is un-expectedly down
> Aug 13 13:20:41 TSTCAPS02 pengine[1474]:     info: determine_online_status_fencing: #011ha_state=active, ccm_state=false, crm_state=online, join_state=member, expected=member
> Aug 13 13:20:41 TSTCAPS02 pengine[1474]:  warning: determine_online_status: Node tstcaps01 is unclean
> Aug 13 13:20:41 TSTCAPS02 pengine[1474]:     info: group_print:  Resource Group: PGServer
> Aug 13 13:20:41 TSTCAPS02 pengine[1474]:     info: native_print:      pg_lvm#011(ocf::heartbeat:LVM):#011Started tstcaps02
> Aug 13 13:20:41 TSTCAPS02 pengine[1474]:     info: native_print:      pg_fs#011(ocf::heartbeat:Filesystem):#011Started tstcaps02
> Aug 13 13:20:41 TSTCAPS02 pengine[1474]:     info: native_print:      pg_lsb#011(lsb:postgresql-9.2):#011Started tstcaps02
> Aug 13 13:20:41 TSTCAPS02 pengine[1474]:     info: native_print:      pg_vip#011(ocf::heartbeat:IPaddr2):#011Started tstcaps02
> Aug 13 13:20:41 TSTCAPS02 pengine[1474]:     info: clone_print:  Master/Slave Set: ms_drbd_pg [drbd_pg]
> Aug 13 13:20:41 TSTCAPS02 pengine[1474]:     info: short_print:      Masters: [ tstcaps02 ]
> Aug 13 13:20:41 TSTCAPS02 pengine[1474]:     info: short_print:      Slaves: [ tstcaps01 ]
> Aug 13 13:20:41 TSTCAPS02 pengine[1474]:     info: native_print: vm-fence-tstcaps01#011(stonith:fence_vmware_soap):#011Started tstcaps02
> Aug 13 13:20:41 TSTCAPS02 pengine[1474]:     info: native_print: vm-fence-tstcaps02#011(stonith:fence_vmware_soap):#011Started tstcaps01
> Aug 13 13:20:41 TSTCAPS02 pengine[1474]:     info: native_color: Resource drbd_pg:0 cannot run anywhere
> Aug 13 13:20:41 TSTCAPS02 corosync[1464]:   [CPG   ] chosen downlist: sender r(0) ip(192.168.106.11) ; members(old:2 left:1)
> Aug 13 13:20:41 TSTCAPS02 pengine[1474]:     info: master_color: Promoting drbd_pg:1 (Master tstcaps02)
> Aug 13 13:20:41 TSTCAPS02 pengine[1474]:     info: master_color: ms_drbd_pg: Promoted 1 instances of a possible 1 to master
> Aug 13 13:20:41 TSTCAPS02 pengine[1474]:     info: native_color: Resource vm-fence-tstcaps02 cannot run anywhere
> Aug 13 13:20:41 TSTCAPS02 corosync[1464]:   [MAIN  ] Completed service synchronization, ready to provide service.
> Aug 13 13:20:41 TSTCAPS02 pengine[1474]:  warning: custom_action: Action drbd_pg:0_stop_0 on tstcaps01 is unrunnable (offline)
> Aug 13 13:20:41 TSTCAPS02 pengine[1474]:  warning: custom_action: Marking node tstcaps01 unclean
> Aug 13 13:20:41 TSTCAPS02 pengine[1474]:  warning: custom_action: Action drbd_pg:0_stop_0 on tstcaps01 is unrunnable (offline)
> Aug 13 13:20:41 TSTCAPS02 pengine[1474]:  warning: custom_action: Marking node tstcaps01 unclean
> Aug 13 13:20:41 TSTCAPS02 pengine[1474]:  warning: custom_action: Action vm-fence-tstcaps02_stop_0 on tstcaps01 is unrunnable (offline)
> Aug 13 13:20:41 TSTCAPS02 pengine[1474]:  warning: custom_action: Marking node tstcaps01 unclean
> Aug 13 13:20:41 TSTCAPS02 pengine[1474]:  warning: stage6: Scheduling Node tstcaps01 for STONITH
> Aug 13 13:20:41 TSTCAPS02 pengine[1474]:     info: native_stop_constraints: drbd_pg:0_stop_0 is implicit after tstcaps01 is fenced
> Aug 13 13:20:41 TSTCAPS02 pengine[1474]:     info: native_stop_constraints: Creating secondary notification for drbd_pg:0_stop_0
> Aug 13 13:20:41 TSTCAPS02 pengine[1474]:     info: native_stop_constraints: vm-fence-tstcaps02_stop_0 is implicit after tstcaps01 is fenced
> Aug 13 13:20:41 TSTCAPS02 pengine[1474]:     info: LogActions: Leave   pg_lvm#011(Started tstcaps02)
> Aug 13 13:20:41 TSTCAPS02 pengine[1474]:     info: LogActions: Leave   pg_fs#011(Started tstcaps02)
> Aug 13 13:20:41 TSTCAPS02 pengine[1474]:     info: LogActions: Leave   pg_lsb#011(Started tstcaps02)
> Aug 13 13:20:41 TSTCAPS02 pengine[1474]:     info: LogActions: Leave   pg_vip#011(Started tstcaps02)
> Aug 13 13:20:41 TSTCAPS02 pengine[1474]:   notice: LogActions: Stop    drbd_pg:0#011(tstcaps01)
> Aug 13 13:20:41 TSTCAPS02 pengine[1474]:     info: LogActions: Leave   drbd_pg:1#011(Master tstcaps02)
> Aug 13 13:20:41 TSTCAPS02 pengine[1474]:     info: LogActions: Leave   vm-fence-tstcaps01#011(Started tstcaps02)
> Aug 13 13:20:41 TSTCAPS02 pengine[1474]:   notice: LogActions: Stop    vm-fence-tstcaps02#011(tstcaps01)
> Aug 13 13:20:41 TSTCAPS02 rsyslogd-2177: imuxsock begins to drop messages from pid 1475 due to rate-limiting
> Aug 13 13:20:41 TSTCAPS02 pengine[1474]:  warning: process_pe_message: Transition 200: WARNINGs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-warn-58.bz2
> Aug 13 13:20:41 TSTCAPS02 pengine[1474]:   notice: process_pe_message: Configuration WARNINGs found during PE processing.  Please run "crm_verify -L" to identify issues.
> Aug 13 13:20:41 TSTCAPS02 lrmd: [1472]: info: rsc:drbd_pg:1:503: notify
> Aug 13 13:20:41 TSTCAPS02 stonith-ng[1471]:     info: initiate_remote_stonith_op: Initiating remote operation reboot for tstcaps01: ba67f838-1461-414a-b309-adbfaff9a029
> Aug 13 13:20:41 TSTCAPS02 lrmd: [1472]: info: Managed drbd_pg:1:notify process 5332 exited with return code 0.
> Aug 13 13:20:47 TSTCAPS02 kernel: d-con postgres: PingAck did not arrive in time.
> Aug 13 13:20:47 TSTCAPS02 kernel: d-con postgres: peer( Secondary -> Unknown ) conn( Connected -> NetworkFailure ) pdsk( UpToDate -> DUnknown )
> Aug 13 13:20:47 TSTCAPS02 kernel: block drbd0: new current UUID D64F405226C19FB1:3E088B92FFDCE195:DA3DF4ABF79829FF:DA3CF4ABF79829FF
> Aug 13 13:20:47 TSTCAPS02 kernel: d-con postgres: asender terminated
> Aug 13 13:20:47 TSTCAPS02 kernel: d-con postgres: Terminating drbd_a_postgres
> Aug 13 13:20:47 TSTCAPS02 kernel: d-con postgres: Connection closed
> Aug 13 13:20:47 TSTCAPS02 kernel: d-con postgres: conn( NetworkFailure -> Unconnected )
> Aug 13 13:20:47 TSTCAPS02 kernel: d-con postgres: receiver terminated
> Aug 13 13:20:47 TSTCAPS02 kernel: d-con postgres: Restarting receiver thread
> Aug 13 13:20:47 TSTCAPS02 kernel: d-con postgres: receiver (re)started
> Aug 13 13:20:47 TSTCAPS02 kernel: d-con postgres: conn( Unconnected -> WFConnection )
> Aug 13 13:20:47 TSTCAPS02 kernel: d-con postgres: helper command: /sbin/drbdadm fence-peer postgres
> Aug 13 13:20:47 TSTCAPS02 crm-fence-peer.sh[5430]: invoked for postgres
> Aug 13 13:20:48 TSTCAPS02 cib[1470]:     info: cib:diff: - <cib admin_epoch="0" epoch="528" num_updates="36" />
> Aug 13 13:20:48 TSTCAPS02 cib[1470]:     info: cib:diff: + <cib epoch="529" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.6" update-origin="tstcaps02" update-client="cibadmin" cib-last-written="Tue Aug 13 13:15:29 2013" have-quorum="0" dc-uuid="tstcaps02" >
> Aug 13 13:20:48 TSTCAPS02 cib[1470]:     info: cib:diff: +   <configuration >
> Aug 13 13:20:48 TSTCAPS02 cib[1470]:     info: cib:diff: +     <constraints >
> Aug 13 13:20:48 TSTCAPS02 cib[1470]:     info: cib:diff: +       <rsc_location rsc="ms_drbd_pg" id="drbd-fence-by-handler-postgres-ms_drbd_pg" __crm_diff_marker__="added:top" >
> Aug 13 13:20:48 TSTCAPS02 cib[1470]:     info: cib:diff: +         <rule role="Master" score="-INFINITY" id="drbd-fence-by-handler-postgres-rule-ms_drbd_pg" >
> Aug 13 13:20:48 TSTCAPS02 cib[1470]:     info: cib:diff: +           <expression attribute="#uname" operation="ne" value="tstcaps02" id="drbd-fence-by-handler-postgres-expr-ms_drbd_pg" />
> Aug 13 13:20:48 TSTCAPS02 cib[1470]:     info: cib:diff: +         </rule>
> Aug 13 13:20:48 TSTCAPS02 cib[1470]:     info: cib:diff: +       </rsc_location>
> Aug 13 13:20:48 TSTCAPS02 cib[1470]:     info: cib:diff: +     </constraints>
> Aug 13 13:20:48 TSTCAPS02 cib[1470]:     info: cib:diff: +   </configuration>
> Aug 13 13:20:48 TSTCAPS02 cib[1470]:     info: cib:diff: + </cib>
> Aug 13 13:20:48 TSTCAPS02 rsyslogd-2177: imuxsock lost 82 messages from pid 1475 due to rate-limiting
> Aug 13 13:20:48 TSTCAPS02 cib[1470]:     info: cib_process_request: Operation complete: op cib_create for section constraints (origin=local/cibadmin/2, version=0.529.1): ok (rc=0)
> Aug 13 13:20:48 TSTCAPS02 crmd[1475]:     info: abort_transition_graph: te_update_diff:126 - Triggered transition abort (complete=0, tag=diff, id=(null), magic=NA, cib=0.529.1) : Non-status change
> Aug 13 13:20:48 TSTCAPS02 crm-fence-peer.sh[5430]: INFO peer is reachable, my disk is UpToDate: placed constraint 'drbd-fence-by-handler-postgres-ms_drbd_pg'
> Aug 13 13:20:48 TSTCAPS02 kernel: d-con postgres: helper command: /sbin/drbdadm fence-peer postgres exit code 4 (0x400)
> Aug 13 13:20:48 TSTCAPS02 kernel: d-con postgres: fence-peer helper returned 4 (peer was fenced)
> Aug 13 13:20:48 TSTCAPS02 kernel: d-con postgres: pdsk( DUnknown -> Outdated )
> Aug 13 13:20:53 TSTCAPS02 stonith-ng[1471]:     info: can_fence_host_with_device: Refreshing port list for vm-fence-tstcaps01
> Aug 13 13:20:53 TSTCAPS02 stonith-ng[1471]:  warning: parse_host_line: Could not parse (13 21): [107.25],42224003-b614-5eb2-f141-5437fc8319d8
> Aug 13 13:20:53 TSTCAPS02 stonith-ng[1471]:  warning: parse_host_line: Could not parse (13 21): [107.29],4222719f-7bdc-84b2-4494-848a29c2bd5f
> Aug 13 13:20:53 TSTCAPS02 stonith-ng[1471]:  warning: parse_host_line: Could not parse (13 21): [107.27],4222da62-3c55-37f8-f6b8-239657892914
> Aug 13 13:20:53 TSTCAPS02 stonith-ng[1471]:  warning: parse_host_line: Could not parse (0 1): [ MEDI WIN7 32-bit  - MSDN],42223e4a-9541-2326-2a21-3b3532756b47
> Aug 13 13:20:53 TSTCAPS02 stonith-ng[1471]:  warning: parse_host_line: Could not parse (13 22): [105.233],42220acd-6e21-4380-9b81-89d86f14317d
> Aug 13 13:20:53 TSTCAPS02 stonith-ng[1471]:  warning: parse_host_line: Could not parse (12 20): [106.28],422235ab-83c4-c0b7-812b-bc5b7019aff7
> Aug 13 13:20:53 TSTCAPS02 stonith-ng[1471]:  warning: parse_host_line: Could not parse (9 17): [106.21],42223377-1443-a44c-1dc0-815c2542898e
> Aug 13 13:20:53 TSTCAPS02 stonith-ng[1471]:  warning: parse_host_line: Could not parse (13 21): [106.15],4222ac70-92c3-bddf-b524-24d848080cb2
> Aug 13 13:20:53 TSTCAPS02 stonith-ng[1471]:  warning: parse_host_line: Could not parse (12 20): [106.29],4222ac1b-41df-2e8f-c2f1-a7fb66e47751
> Aug 13 13:20:53 TSTCAPS02 stonith-ng[1471]:  warning: parse_host_line: Could not parse (0 1): [ MEDI W2K8 R2 SP1 STD - MSDN ],4222dc65-6752-b1b4-c0f7-38c94cd5609a
> Aug 13 13:20:53 TSTCAPS02 stonith-ng[1471]:  warning: parse_host_line: Could not parse (30 31): ],4222dc65-6752-b1b4-c0f7-38c94cd5609a
> Aug 13 13:20:53 TSTCAPS02 stonith-ng[1471]:  warning: parse_host_line: Could not parse (12 20): [106.52],4222aa80-0fe6-66c4-8d11-fea5f547b566
> Aug 13 13:20:53 TSTCAPS02 stonith-ng[1471]:  warning: parse_host_line: Could not parse (13 21): [106.14],422249fc-a902-ba5c-deb0-e6db6198b984
> Aug 13 13:20:53 TSTCAPS02 stonith-ng[1471]:  warning: parse_host_line: Could not parse (18 25): [106.2],4222851c-1a9d-021a-4e16-9f8adc5bcc42
> Aug 13 13:20:53 TSTCAPS02 stonith-ng[1471]:  warning: parse_host_line: Could not parse (17 26): [105.242],42228b51-4ef6-f9b8-b64a-882d68023074
> Aug 13 13:20:53 TSTCAPS02 stonith-ng[1471]:  warning: parse_host_line: Could not parse (20 29): [105.230],42223dcd-22c1-a0f7-c629-5c4489e2c55d
> Aug 13 13:20:53 TSTCAPS02 stonith-ng[1471]:  warning: parse_host_line: Could not parse (14 23): [105.210],42224816-cdf7-8016-747c-a45b8869d239
> Aug 13 13:20:53 TSTCAPS02 stonith-ng[1471]:  warning: parse_host_line: Could not parse (0 1): [ MEDI - WinXP with SP3 - MSDN ],4222238c-c927-3af1-f2e7-e0dd374d373b
> Aug 13 13:20:53 TSTCAPS02 stonith-ng[1471]:  warning: parse_host_line: Could not parse (31 32): ],4222238c-c927-3af1-f2e7-e0dd374d373b
> Aug 13 13:20:53 TSTCAPS02 stonith-ng[1471]:  warning: parse_host_line: Could not parse (21 30): [105.231],4222308c-41c7-02e9-3b20-c6df71838db9
> Aug 13 13:20:53 TSTCAPS02 stonith-ng[1471]:  warning: parse_host_line: Could not parse (25 28): !!! [105.235],422283ac-c5d9-4bf1-96eb-a57d8d18c118
> Aug 13 13:20:53 TSTCAPS02 stonith-ng[1471]:  warning: parse_host_line: Could not parse (29 38): [105.235],422283ac-c5d9-4bf1-96eb-a57d8d18c118
> Aug 13 13:20:53 TSTCAPS02 stonith-ng[1471]:  warning: parse_host_line: Could not parse (17 26): [105.241],4222a40f-d91a-0e4f-2292-ef92c4836bb5
> Aug 13 13:20:53 TSTCAPS02 stonith-ng[1471]:  warning: parse_host_line: Could not parse (0 1): [ W2K3 R2 ENT 32-bit ENG ],4233c1c8-e0f9-26f3-b854-6376ec6b1d1c
> Aug 13 13:20:53 TSTCAPS02 stonith-ng[1471]:  warning: parse_host_line: Could not parse (25 26): ],4233c1c8-e0f9-26f3-b854-6376ec6b1d1c
> Aug 13 13:20:53 TSTCAPS02 stonith-ng[1471]:  warning: parse_host_line: Could not parse (12 20): [106.13],42222137-0d67-ac9b-e3b6-11fb6d2c33e0
> Aug 13 13:20:53 TSTCAPS02 stonith-ng[1471]:  warning: parse_host_line: Could not parse (17 26): [105.243],42222a9a-7440-6d19-b654-42c08a2abd69
> Aug 13 13:20:53 TSTCAPS02 stonith-ng[1471]:  warning: parse_host_line: Could not parse (0 1): [ MEDI W2K8 R2 SP1 ENT - MSDN ],42227507-c4fd-c5aa-b7d7-4ececd284f84
> Aug 13 13:20:53 TSTCAPS02 stonith-ng[1471]:  warning: parse_host_line: Could not parse (30 31): ],42227507-c4fd-c5aa-b7d7-4ececd284f84
> Aug 13 13:20:53 TSTCAPS02 stonith-ng[1471]:  warning: parse_host_line: Could not parse (0 1): [ MEDI_gw_chckpnt ],4222f42e-58c6-dc59-2a00-10041ad5ac08
> Aug 13 13:20:53 TSTCAPS02 stonith-ng[1471]:  warning: parse_host_line: Could not parse (18 19): ],4222f42e-58c6-dc59-2a00-10041ad5ac08
> Aug 13 13:20:53 TSTCAPS02 stonith-ng[1471]:  warning: parse_host_line: Could not parse (13 22): [105.234],422295e3-644e-8b51-a373-e7f166b2fd5d
> Aug 13 13:20:53 TSTCAPS02 stonith-ng[1471]:  warning: parse_host_line: Could not parse (0 1): [ MEDI WIN7 64-bit - MSDN ],4222289e-0bd2-4280-c0f4-548fd42e7eab
> Aug 13 13:20:53 TSTCAPS02 stonith-ng[1471]:  warning: parse_host_line: Could not parse (26 27): ],4222289e-0bd2-4280-c0f4-548fd42e7eab
> Aug 13 13:20:53 TSTCAPS02 stonith-ng[1471]:  warning: parse_host_line: Could not parse (13 22): [105.232],42228f9d-615f-1c3b-2158-d3ad08d40357
> Aug 13 13:20:53 TSTCAPS02 stonith-ng[1471]:  warning: parse_host_line: Could not parse (9 17): [106.20],422285ba-6a31-0832-1b38-a910031cd057
> Aug 13 13:20:53 TSTCAPS02 stonith-ng[1471]:  warning: parse_host_line: Could not parse (17 26): [105.240],4222b273-68e7-379d-b874-6a47211e9449
> Aug 13 13:20:53 TSTCAPS02 stonith-ng[1471]:  warning: parse_host_line: Could not parse (13 21): [107.28],4222cbc8-565d-eee1-4430-555b059663d0
> Aug 13 13:20:53 TSTCAPS02 stonith-ng[1471]:  warning: parse_host_line: Could not parse (13 22): [105.236],4222115e-789a-66dd-95e9-786ec0d84ec0
> Aug 13 13:20:53 TSTCAPS02 stonith-ng[1471]:  warning: parse_host_line: Could not parse (13 21): [107.26],4222fb16-fadc-9031-8e3d-110225505a0f
> Aug 13 13:20:53 TSTCAPS02 stonith-ng[1471]:  warning: parse_host_line: Could not parse (12 20): [106.12],42226bf9-8e78-9356-773c-ecde31cf2fa2
> Aug 13 13:20:53 TSTCAPS02 stonith-ng[1471]:  warning: parse_host_line: Could not parse (12 20): [106.51],4222ae99-f1d9-9811-d72b-10e875c58f56
> Aug 13 13:20:53 TSTCAPS02 stonith-ng[1471]:     info: can_fence_host_with_device: vm-fence-tstcaps01 can fence tstcaps01: dynamic-list
> Aug 13 13:20:53 TSTCAPS02 stonith-ng[1471]:     info: call_remote_stonith: Requesting that tstcaps02 perform op reboot tstcaps01
> Aug 13 13:20:53 TSTCAPS02 stonith-ng[1471]:     info: can_fence_host_with_device: vm-fence-tstcaps01 can fence tstcaps01: dynamic-list
> Aug 13 13:20:53 TSTCAPS02 stonith-ng[1471]:     info: stonith_fence: Found 1 matching devices for 'tstcaps01'
> Aug 13 13:20:53 TSTCAPS02 stonith-ng[1471]:     info: stonith_command: Processed st_fence from tstcaps02: rc=-1
>  
> Best regards,
> Michal Mistina
>  
> _______________________________________________
> Pacemaker mailing list: Pacemaker at oss.clusterlabs.org
> http://oss.clusterlabs.org/mailman/listinfo/pacemaker
> 
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: http://bugs.clusterlabs.org
