[Pacemaker] Question about the resource to fence a node

Kazunori INOUE kazunori.inoue3 at gmail.com
Wed Nov 13 08:45:19 UTC 2013


2013/11/13 Andrew Beekhof <andrew at beekhof.net>:
>
> On 16 Oct 2013, at 8:51 am, Andrew Beekhof <andrew at beekhof.net> wrote:
>
>>
>> On 15/10/2013, at 8:24 PM, Kazunori INOUE <kazunori.inoue3 at gmail.com> wrote:
>>
>>> Hi,
>>>
>>> I'm using pacemaker-1.1 (the latest devel).
>>> I started the resources (f1 and f2), which fence vm3, on vm1.
>>>
>>> $ crm_mon -1
>>> Last updated: Tue Oct 15 15:16:37 2013
>>> Last change: Tue Oct 15 15:16:21 2013 via crmd on vm1
>>> Stack: corosync
>>> Current DC: vm1 (3232261517) - partition with quorum
>>> Version: 1.1.11-0.284.6a5e863.git.el6-6a5e863
>>> 3 Nodes configured
>>> 3 Resources configured
>>>
>>> Online: [ vm1 vm2 vm3 ]
>>>
>>> pDummy (ocf::pacemaker:Dummy): Started vm3
>>> Resource Group: gStonith3
>>>    f1 (stonith:external/libvirt):     Started vm1
>>>    f2 (stonith:external/ssh): Started vm1
>>>
>>>
>>> "reset" of f1 which hasn't been started on vm2 was performed when vm3 is fenced.
>>>
>>> $ ssh vm3 'rm -f /var/run/Dummy-pDummy.state'
>>> $ for i in vm1 vm2; do ssh $i 'hostname; egrep " reset | off "
>>> /var/log/ha-log'; done
>>> vm1
>>> Oct 15 15:17:35 vm1 stonith-ng[14870]:  warning: log_operation:
>>> f2:15076 [ Performing: stonith -t external/ssh -T reset vm3 ]
>>> Oct 15 15:18:06 vm1 stonith-ng[14870]:  warning: log_operation:
>>> f2:15464 [ Performing: stonith -t external/ssh -T reset vm3 ]
>>> vm2
>>> Oct 15 15:17:16 vm2 stonith-ng[9160]:  warning: log_operation: f1:9273
>>> [ Performing: stonith -t external/libvirt -T reset vm3 ]
>>> Oct 15 15:17:46 vm2 stonith-ng[9160]:  warning: log_operation: f1:9588
>>> [ Performing: stonith -t external/libvirt -T reset vm3 ]
>>>
>>> Is this the expected behavior?
>>
>> Yes, although the host on which the device is started usually gets priority.
>> I will try to find some time to look through the report to see why this didn't happen.
>
> Reading through this again, it sounds like it should be fixed by your earlier pull request:
>
>    https://github.com/beekhof/pacemaker/commit/6b4bfd6
>
> Yes?

No.
That commit only prevents fencing from devices that cannot start because their score is -INFINITY.
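(For reference, my understanding of that commit's effect is roughly the sketch below. This is only an illustration with made-up names, not the actual code from 6b4bfd6.)

    /* Illustrative sketch only, not the real Pacemaker code: a host where
     * the fencing device has score -INFINITY can never start it, so that
     * host must not be asked to execute the fencing action. */
    #include <stdio.h>
    #include <stdbool.h>
    #include <limits.h>

    #define SCORE_NEG_INF INT_MIN        /* stand-in for -INFINITY */

    struct device_on_host {
        const char *host;
        int         score;               /* allocation score of the device there */
    };

    static bool eligible_to_fence(const struct device_on_host *d)
    {
        return d->score > SCORE_NEG_INF; /* skip hosts where it cannot start */
    }

    int main(void)
    {
        struct device_on_host candidates[] = {
            { "vm1", 100 },
            { "vm2", SCORE_NEG_INF },
        };
        for (int i = 0; i < 2; i++) {
            printf("%s: %s\n", candidates[i].host,
                   eligible_to_fence(&candidates[i]) ? "eligible" : "skipped");
        }
        return 0;
    }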

Now that you mention it, I don't think this behavior is fully working yet:
> although the host on which the device is started usually gets priority.

The query replies do report whether the device (resource) is started
(that information is carried in "st_monitor_verified"), but I think
stonith_choose_peer() is called at the wrong time.

If stonith_choose_peer() were called only after all of the st_query
replies had been received, the started (verified) device would be used.
However, the case where a peer never replies also has to be handled, so
I have not been able to fix this yet.
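
To illustrate the idea, here is a minimal standalone sketch (made-up
names and types, not the actual stonith-ng code) of choosing the
executing peer only after every st_query reply has arrived, preferring
a peer whose device is verified (started):

    /* Illustrative sketch only: pick a peer after all replies are in,
     * preferring one whose device reported st_monitor_verified="1". */
    #include <stdio.h>
    #include <stdbool.h>

    typedef struct {
        const char *host;      /* peer that answered the st_query       */
        int         ndevices;  /* devices it can use against the target */
        bool        verified;  /* st_monitor_verified from its reply    */
    } query_result_t;

    static const query_result_t *choose_peer(const query_result_t *r, int n)
    {
        const query_result_t *fallback = NULL;

        for (int i = 0; i < n; i++) {
            if (r[i].ndevices == 0) {
                continue;              /* this peer cannot fence the target  */
            }
            if (r[i].verified) {
                return &r[i];          /* device is started here: prefer it  */
            }
            if (fallback == NULL) {
                fallback = &r[i];      /* capable, but device not verified   */
            }
        }
        return fallback;
    }

    int main(void)
    {
        /* Replies as seen in the trace below: vm3 has no device, vm2 has an
         * unverified F1, vm1 has a verified (started) F1. */
        query_result_t replies[] = {
            { "vm3", 0, false },
            { "vm2", 1, false },
            { "vm1", 1, true  },
        };
        const query_result_t *peer = choose_peer(replies, 3);

        printf("would ask %s to fence vm3\n", peer ? peer->host : "(nobody)");
        return 0;
    }

In the trace below, vm2's reply arrives before vm1's and carries
st_monitor_verified="0", yet vm2 is chosen immediately; with the
ordering above, vm1's verified F1 would win instead.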

Nov 13 13:47:08 [15883] vm1       crmd: (te_actions.c:140   )  notice:
te_fence_node: Executing reboot fencing operation (13) on vm3
(timeout=60000)
Nov 13 13:47:08 [15879] vm1 stonith-ng: (      main.c:121   )   trace:
st_ipc_dispatch: Flags 0/0 for command 94 from crmd.15883
Nov 13 13:47:08 [15879] vm1 stonith-ng: (      main.c:133   )   trace:
st_ipc_dispatch: Client[inbound]   <stonith_command
__name__="stonith_command" t="stonith-ng"
st_async_id="e6cedc3f-e233-4a66-9291-ce359bd76aad" st_op="st_fence"
st_callid="2" st_callopt="0" st_timeout="60"
st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad"
st_clientname="crmd.15883" st_clientnode="vm1">
Nov 13 13:47:08 [15879] vm1 stonith-ng: (      main.c:133   )   trace:
st_ipc_dispatch: Client[inbound]     <st_calldata>
Nov 13 13:47:08 [15879] vm1 stonith-ng: (      main.c:133   )   trace:
st_ipc_dispatch: Client[inbound]       <stonith_api_fence
st_target="vm3" st_device_action="reboot" st_timeout="60"
st_tolerance="0"/>
Nov 13 13:47:08 [15879] vm1 stonith-ng: (      main.c:133   )   trace:
st_ipc_dispatch: Client[inbound]     </st_calldata>
Nov 13 13:47:08 [15879] vm1 stonith-ng: (      main.c:133   )   trace:
st_ipc_dispatch: Client[inbound]   </stonith_command>
Nov 13 13:47:08 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug:
stonith_command: Processing st_fence 94 from crmd.15883 (
 0)
Nov 13 13:47:08 [15879] vm1 stonith-ng: (  commands.c:1875  )  notice:
handle_request: Client crmd.15883.e6cedc3f wants to fence (reboot)
'vm3' with device '(any)'
Nov 13 13:47:08 [15879] vm1 stonith-ng: (    remote.c:1412  )   trace:
stonith_check_fence_tolerance: tolerance=0, remote_op_list=(nil)
Nov 13 13:47:08 [15879] vm1 stonith-ng: (    remote.c:593   )   trace:
create_remote_stonith_op: Created 696fb2c3-e11a-4124-ba9b-bafc9ab28426
Nov 13 13:47:08 [15879] vm1 stonith-ng: (    remote.c:623   )   trace:
create_remote_stonith_op: Generated new stonith op:
696fb2c3-e11a-4124-ba9b-bafc9ab28426 - reboot of vm3 for crmd.15883
Nov 13 13:47:08 [15879] vm1 stonith-ng: (    remote.c:474   )   trace:
merge_duplicates: Must be for different clients: crmd.15883
Nov 13 13:47:08 [15879] vm1 stonith-ng: (    remote.c:437   )   trace:
stonith_topology_next: Attempting fencing level 1 for vm3 (1 devices)
- crmd.15883@vm1.696fb2c3
Nov 13 13:47:08 [15879] vm1 stonith-ng: (    remote.c:689   )  notice:
initiate_remote_stonith_op: Initiating remote operation reboot for
vm3: 696fb2c3-e11a-4124-ba9b-bafc9ab28426 (0)
Nov 13 13:47:08 [15879] vm1 stonith-ng: (  commands.c:2063  )    info:
stonith_command: Processed st_fence from crmd.15883: Operation now in
progress (-115)
Nov 13 13:47:08 [15883] vm1       crmd: (     graph.c:336   )   debug:
run_graph: Transition 2 (Complete=0, Pending=1, Fired=1, Skipped=0,
Incomplete=5, Source=/var/lib/pacemaker/pengine/pe-warn-0.bz2):
In-progress
Nov 13 13:47:08 [15879] vm1 stonith-ng: (      main.c:169   )   trace:
stonith_peer_callback: Peer[inbound]   <stonith_command
__name__="stonith_command" t="stonith-ng"
st_async_id="696fb2c3-e11a-4124-ba9b-bafc9ab28426" st_op="st_query"
st_callid="2" st_callopt="0"
st_remote_op="696fb2c3-e11a-4124-ba9b-bafc9ab28426" st_target="vm3"
st_device_action="reboot" st_origin="vm1"
st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad"
st_clientname="crmd.15883" st_timeout="60" src="vm1"/>
Nov 13 13:47:08 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug:
stonith_command: Processing st_query 0 from vm1 (               0)
Nov 13 13:47:08 [15879] vm1 stonith-ng: (    remote.c:576   )   debug:
create_remote_stonith_op: 696fb2c3-e11a-4124-ba9b-bafc9ab28426 already
exists
Nov 13 13:47:08 [15879] vm1 stonith-ng: (  commands.c:1296  )   debug:
stonith_query: Query   <stonith_command __name__="stonith_command"
t="stonith-ng" st_async_id="696fb2c3-e11a-4124-ba9b-bafc9ab28426"
st_op="st_query" st_callid="2" st_callopt="0"
st_remote_op="696fb2c3-e11a-4124-ba9b-bafc9ab28426" st_target="vm3"
st_device_action="reboot" st_origin="vm1"
st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad"
st_clientname="crmd.15883" st_timeout="60" src="vm1"/>
Nov 13 13:47:08 [15879] vm1 stonith-ng: (  commands.c:1650  )   trace:
stonith_construct_reply: Creating a basic reply
Nov 13 13:47:08 [15879] vm1 stonith-ng: (  commands.c:1199  )   debug:
get_capable_devices: Searching through 1 devices to see what is
capable of action (reboot) for target vm3
Nov 13 13:47:08 [15879] vm1 stonith-ng: (  commands.c:288   )   debug:
schedule_stonith_command: Scheduling list on F1 for stonith-ng
(timeout=60s)
Nov 13 13:47:08 [15879] vm1 stonith-ng: (  commands.c:2063  )    info:
stonith_command: Processed st_query from vm1: OK (0)
Nov 13 13:47:08 [15879] vm1 stonith-ng: ( st_client.c:576   )    info:
stonith_action_create: Initiating action list for agent fence_legacy
(target=(null))
Nov 13 13:47:08 [15879] vm1 stonith-ng: ( st_client.c:739   )   debug:
internal_stonith_action_execute: forking
Nov 13 13:47:08 [15882] vm1    pengine: (   pengine.c:175   ) warning:
process_pe_message: Calculated Transition 2:
/var/lib/pacemaker/pengine/pe-warn-0.bz2
Nov 13 13:47:08 [15879] vm1 stonith-ng: ( st_client.c:785   )   debug:
internal_stonith_action_execute: sending args
Nov 13 13:47:08 [15879] vm1 stonith-ng: (  commands.c:246   )   debug:
stonith_device_execute: Operation list on F1 now running with
pid=16051, timeout=60s
Nov 13 13:47:08 [15879] vm1 stonith-ng: (      main.c:169   )   trace:
stonith_peer_callback: Peer[inbound]   <st-reply
st_origin="stonith_construct_reply" t="stonith-ng" st_rc="0"
st_op="st_query" st_callid="2"
st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad"
st_clientname="crmd.15883"
st_remote_op="696fb2c3-e11a-4124-ba9b-bafc9ab28426" st_callopt="0"
src="vm3">
Nov 13 13:47:08 [15879] vm1 stonith-ng: (      main.c:169   )   trace:
stonith_peer_callback: Peer[inbound]     <st_calldata>
Nov 13 13:47:08 [15879] vm1 stonith-ng: (      main.c:169   )   trace:
stonith_peer_callback: Peer[inbound]
<stonith_query_capable_device_cb st_target="vm3"
st-available-devices="0"/>
Nov 13 13:47:08 [15879] vm1 stonith-ng: (      main.c:169   )   trace:
stonith_peer_callback: Peer[inbound]     </st_calldata>
Nov 13 13:47:08 [15879] vm1 stonith-ng: (      main.c:169   )   trace:
stonith_peer_callback: Peer[inbound]   </st-reply>
Nov 13 13:47:08 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug:
stonith_command: Processing st_query reply 0 from vm3 (
0)
Nov 13 13:47:08 [15879] vm1 stonith-ng: (    remote.c:1137  )   trace:
process_remote_stonith_query: Query result from vm3 (0 devices)
Nov 13 13:47:08 [15879] vm1 stonith-ng: (  commands.c:2063  )    info:
stonith_command: Processed st_query reply from vm3: OK (0)
Nov 13 13:47:08 [15879] vm1 stonith-ng: (      main.c:169   )   trace:
stonith_peer_callback: Peer[inbound]   <st-reply
st_origin="stonith_construct_reply" t="stonith-ng" st_rc="0"
st_op="st_query" st_callid="2"
st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad"
st_clientname="crmd.15883"
st_remote_op="696fb2c3-e11a-4124-ba9b-bafc9ab28426" st_callopt="0"
src="vm2">
Nov 13 13:47:08 [15879] vm1 stonith-ng: (      main.c:169   )   trace:
stonith_peer_callback: Peer[inbound]     <st_calldata>
Nov 13 13:47:08 [15879] vm1 stonith-ng: (      main.c:169   )   trace:
stonith_peer_callback: Peer[inbound]
<stonith_query_capable_device_cb st_target="vm3"
st-available-devices="1">
Nov 13 13:47:08 [15879] vm1 stonith-ng: (      main.c:169   )   trace:
stonith_peer_callback: Peer[inbound]         <st_device_id id="F1"
namespace="heartbeat" agent="fence_legacy" st_monitor_verified="0"/>
Nov 13 13:47:08 [15879] vm1 stonith-ng: (      main.c:169   )   trace:
stonith_peer_callback: Peer[inbound]
</stonith_query_capable_device_cb>
Nov 13 13:47:08 [15879] vm1 stonith-ng: (      main.c:169   )   trace:
stonith_peer_callback: Peer[inbound]     </st_calldata>
Nov 13 13:47:08 [15879] vm1 stonith-ng: (      main.c:169   )   trace:
stonith_peer_callback: Peer[inbound]   </st-reply>
Nov 13 13:47:08 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug:
stonith_command: Processing st_query reply 0 from vm2 (
0)
Nov 13 13:47:08 [15879] vm1 stonith-ng: (    remote.c:1154  )    info:
process_remote_stonith_query: Query result 2 of 3 from vm2 (1 devices)
Nov 13 13:47:08 [15879] vm1 stonith-ng: (    remote.c:1196  )   trace:
process_remote_stonith_query: All topology devices found
Nov 13 13:47:08 [15879] vm1 stonith-ng: (    remote.c:943   )   trace:
call_remote_stonith: State for vm3.crmd.158:
696fb2c3-e11a-4124-ba9b-bafc9ab28426 0
Nov 13 13:47:08 [15879] vm1 stonith-ng: (    remote.c:904   )   trace:
report_timeout_period: Reporting timeout for crmd.15883.696fb2c3
Nov 13 13:47:08 [15879] vm1 stonith-ng: (      main.c:343   )   trace:
do_stonith_async_timeout_update: timeout update is 72 for client
e6cedc3f-e233-4a66-9291-ce359bd76aad and call id 2
Nov 13 13:47:08 [15879] vm1 stonith-ng: (    remote.c:955   )    info:
call_remote_stonith: Total remote op timeout set to 60 for fencing of
node vm3 for crmd.15883.696fb2c3
Nov 13 13:47:08 [15879] vm1 stonith-ng: (    remote.c:786   )   trace:
stonith_choose_peer: Checking for someone to fence vm3 with F1
Nov 13 13:47:08 [15879] vm1 stonith-ng: (    remote.c:757   )   trace:
find_best_peer: Removing F1 from vm2 (1 remaining)
Nov 13 13:47:08 [15879] vm1 stonith-ng: (    remote.c:984   )    info:
call_remote_stonith: Requesting that vm2 perform op reboot vm3 with F1
for crmd.15883 (72s)
Nov 13 13:47:08 [15879] vm1 stonith-ng: (  commands.c:2063  )    info:
stonith_command: Processed st_query reply from vm2: OK (0)
Nov 13 13:47:08 [15879] vm1 stonith-ng: ( st_client.c:678   )   debug:
stonith_action_async_done: Child process 16051 performing action
'list' exited with rc 0
Nov 13 13:47:08 [15879] vm1 stonith-ng: (  commands.c:749   )    info:
dynamic_list_search_cb: Refreshing port list for F1
Nov 13 13:47:08 [15879] vm1 stonith-ng: (  commands.c:409   )   trace:
parse_host_line: Processing 3 bytes: [vm3]
Nov 13 13:47:08 [15879] vm1 stonith-ng: (  commands.c:436   )   trace:
parse_host_line: Adding 'vm3'
Nov 13 13:47:08 [15879] vm1 stonith-ng: (  commands.c:409   )   trace:
parse_host_line: Processing 11 bytes: [success:  0]
Nov 13 13:47:08 [15879] vm1 stonith-ng: (  commands.c:436   )   trace:
parse_host_line: Adding 'success'
Nov 13 13:47:08 [15879] vm1 stonith-ng: (  commands.c:436   )   trace:
parse_host_line: Adding '0'
Nov 13 13:47:08 [15879] vm1 stonith-ng: (  commands.c:480   )   trace:
parse_host_list: Parsed 3 entries from 'vm3
success:  0
'
Nov 13 13:47:08 [15879] vm1 stonith-ng: (  commands.c:1037  )   debug:
search_devices_record_result: Finished Search. 1 devices can perform
action (reboot) on node vm3
Nov 13 13:47:08 [15879] vm1 stonith-ng: (  commands.c:1254  )   debug:
stonith_query_capable_device_cb: Found 1 matching devices for 'vm3'
Nov 13 13:47:08 [15879] vm1 stonith-ng: (  commands.c:1260  )   trace:
stonith_query_capable_device_cb: Attaching query list output
Nov 13 13:47:08 [15879] vm1 stonith-ng: (  commands.c:213   )   trace:
stonith_device_execute: Nothing further to do for F1
Nov 13 13:47:08 [15879] vm1 stonith-ng: (      main.c:169   )   trace:
stonith_peer_callback: Peer[inbound]   <st-reply
st_origin="stonith_construct_reply" t="stonith-ng" st_rc="0"
st_op="st_query" st_callid="2"
st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad"
st_clientname="crmd.15883"
st_remote_op="696fb2c3-e11a-4124-ba9b-bafc9ab28426" st_callopt="0"
src="vm1">
Nov 13 13:47:08 [15879] vm1 stonith-ng: (      main.c:169   )   trace:
stonith_peer_callback: Peer[inbound]     <st_calldata>
Nov 13 13:47:08 [15879] vm1 stonith-ng: (      main.c:169   )   trace:
stonith_peer_callback: Peer[inbound]
<stonith_query_capable_device_cb st_target="vm3"
st-available-devices="1">
Nov 13 13:47:08 [15879] vm1 stonith-ng: (      main.c:169   )   trace:
stonith_peer_callback: Peer[inbound]         <st_device_id id="F1"
namespace="heartbeat" agent="fence_legacy" st_monitor_verified="1"/>
Nov 13 13:47:08 [15879] vm1 stonith-ng: (      main.c:169   )   trace:
stonith_peer_callback: Peer[inbound]
</stonith_query_capable_device_cb>
Nov 13 13:47:08 [15879] vm1 stonith-ng: (      main.c:169   )   trace:
stonith_peer_callback: Peer[inbound]     </st_calldata>
Nov 13 13:47:08 [15879] vm1 stonith-ng: (      main.c:169   )   trace:
stonith_peer_callback: Peer[inbound]   </st-reply>
Nov 13 13:47:08 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug:
stonith_command: Processing st_query reply 0 from vm1 (
0)
Nov 13 13:47:08 [15879] vm1 stonith-ng: (    remote.c:1154  )    info:
process_remote_stonith_query: Query result 3 of 3 from vm1 (1 devices)
Nov 13 13:47:08 [15879] vm1 stonith-ng: (    remote.c:1177  )   trace:
process_remote_stonith_query: Peer vm1 has confirmed a verified device
F1
Nov 13 13:47:08 [15879] vm1 stonith-ng: (  commands.c:2063  )    info:
stonith_command: Processed st_query reply from vm1: OK (0)
Nov 13 13:47:12 [15879] vm1 stonith-ng: (      main.c:169   )   trace:
stonith_peer_callback: Peer[inbound]   <st-reply st_origin="vm1"
t="stonith-ng" st_op="st_fence" st_device_id="F1"
st_remote_op="696fb2c3-e11a-4124-ba9b-bafc9ab28426"
st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad"
st_clientname="crmd.15883" st_target="vm3" st_device_action="st_fence"
st_callid="2" st_callopt="0" st_rc="-201" st_output="Performing:
stonith -t external/libvirt -T reset vm3\nfailed: vm3 5\n" src="vm2"/>
Nov 13 13:47:12 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug:
stonith_command: Processing st_fence reply 0 from vm2 (
0)
Nov 13 13:47:12 [15879] vm1 stonith-ng: (    remote.c:1303  )  notice:
process_remote_stonith_exec: Call to F1 for vm3 on behalf of
crmd.15883@vm1: Generic Pacemaker error (-201)
Nov 13 13:47:12 [15879] vm1 stonith-ng: (    remote.c:443   )  notice:
stonith_topology_next: All fencing options to fence vm3 for
crmd.15883@vm1.696fb2c3 failed
Nov 13 13:47:12 [15879] vm1 stonith-ng: (    remote.c:160   )   trace:
bcast_result_to_peers: Broadcasting result to peers
Nov 13 13:47:12 [15879] vm1 stonith-ng: (  commands.c:2063  )    info:
stonith_command: Processed st_fence reply from vm2: OK (0)
Nov 13 13:47:12 [15879] vm1 stonith-ng: (      main.c:169   )   trace:
stonith_peer_callback: Peer[inbound]   <st-reply t="st_notify"
subt="broadcast" st_op="st_notify" count="1" src="vm1">
Nov 13 13:47:12 [15879] vm1 stonith-ng: (      main.c:169   )   trace:
stonith_peer_callback: Peer[inbound]     <st_calldata>
Nov 13 13:47:12 [15879] vm1 stonith-ng: (      main.c:169   )   trace:
stonith_peer_callback: Peer[inbound]       <st_notify_fence state="4"
st_rc="-201" st_target="vm3" st_device_action="reboot"
st_delegate="vm2" st_remote_op="696fb2c3-e11a-4124-ba9b-bafc9ab28426"
st_origin="vm1" st_clientid="e6cedc3f-e233-4a66-9291-ce359bd76aad"
st_clientname="crmd.15883"/>
Nov 13 13:47:12 [15879] vm1 stonith-ng: (      main.c:169   )   trace:
stonith_peer_callback: Peer[inbound]     </st_calldata>
Nov 13 13:47:12 [15879] vm1 stonith-ng: (      main.c:169   )   trace:
stonith_peer_callback: Peer[inbound]   </st-reply>
Nov 13 13:47:12 [15879] vm1 stonith-ng: (  commands.c:2049  )   debug:
stonith_command: Processing st_notify reply 0 from vm1 (
0)
Nov 13 13:47:12 [15879] vm1 stonith-ng: (    remote.c:1281  )   debug:
process_remote_stonith_exec: Marking call to reboot for vm3 on behalf
of crmd.15883@696fb2c3-e11a-4124-ba9b-bafc9ab28426.vm1: Generic
Pacemaker error (-201)
Nov 13 13:47:12 [15879] vm1 stonith-ng: (    remote.c:297   )   error:
remote_op_done: Operation reboot of vm3 by vm2 for
crmd.15883@vm1.696fb2c3: Generic Pacemaker error
Nov 13 13:47:12 [15879] vm1 stonith-ng: (  commands.c:1650  )   trace:
stonith_construct_reply: Creating a basic reply
Nov 13 13:47:12 [15879] vm1 stonith-ng: (  commands.c:1666  )   trace:
stonith_construct_reply: Attaching reply output
Nov 13 13:47:12 [15879] vm1 stonith-ng: (      main.c:241   )   trace:
do_local_reply: Sending response
Nov 13 13:47:12 [15879] vm1 stonith-ng: (      main.c:244   )   trace:
do_local_reply: Sending callback to request originator
Nov 13 13:47:12 [15879] vm1 stonith-ng: (      main.c:263   )   trace:
do_local_reply: Sending an event to crmd.15883

Best Regards,
Kazunori INOUE

>
>> I'm kind of swamped at the moment though.
>>
>>>
>>> Best Regards,
>>> Kazunori INOUE
>>> <stopped_resource_performed_reset.tar.bz2>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: stonith_choose.tar.bz2
Type: application/x-bzip2
Size: 254324 bytes
Desc: not available
URL: <https://lists.clusterlabs.org/pipermail/pacemaker/attachments/20131113/c9bb741e/attachment-0004.bz2>

