<div dir="ltr">In my case this does not work - read my original post. So I wonder if there is a pacemaker bug (version 1.1.9-2db99f1). Killing pengine and stonithd on the node which is supposed to "shoot" seems to resolve the problem, though of course this is not a solution.<div>
<br></div><div>I also tested two separate stonith resources, one on each node. This stonith'ing works fine with this configuration. Is there something "wrong" about doing it this way?<br><div><br></div><div><br>
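A rough crmsh sketch of that two-resource layout, with the negative colocation keeping the two agents on different nodes (the resource names, hostnames and device path are illustrative assumptions, not taken from this thread):

```shell
# Two SBD stonith primitives sharing the same SBD disk; the negative
# colocation forces them onto different nodes, so each node always has
# a local agent able to shoot the peer. Names/paths are hypothetical.
crm configure primitive st-sbd-1 stonith:external/sbd \
    params sbd_device="/dev/disk/by-id/my-sbd-disk"
crm configure primitive st-sbd-2 stonith:external/sbd \
    params sbd_device="/dev/disk/by-id/my-sbd-disk"
crm configure colocation st-sbd-apart -inf: st-sbd-1 st-sbd-2
```

If desired, each primitive could additionally be restricted to fencing only the other node (e.g. via `pcmk_host_list`), so that an agent is never asked to shoot the node it runs on.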
</div><div>Best regards</div><div>Jan</div><div><br></div><div><br></div><div><br></div><div><br></div></div></div><div class="gmail_extra"><br><br><div class="gmail_quote">On Tue, Aug 6, 2013 at 11:50 AM, Dejan Muhamedagic <span dir="ltr"><<a href="mailto:dejanmm@fastmail.fm" target="_blank">dejanmm@fastmail.fm</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi,<br>
<div class="im"><br>
On Tue, Aug 06, 2013 at 11:22:56AM +0200, Andreas Mock wrote:<br>
> Hi Dejan,<br>
><br>
> can you explain how the SDB agent works, when this resource<br>
> is running on exactly that node which has to be stonithed?<br>
<br>
</div>It's actually in the hands of the resource manager to take care<br>
of that. Pacemaker is going to start the stonith resource in<br>
case another node is to be fenced.<br>
<br>
Thanks,<br>
<br>
Dejan<br>
<div class="HOEnZb"><div class="h5"><br>
> Thank you in advance.<br>
><br>
> Best regards<br>
> Andreas Mock<br>
><br>
><br>
> -----Original Message-----<br>
> From: Dejan Muhamedagic [mailto:<a href="mailto:dejanmm@fastmail.fm">dejanmm@fastmail.fm</a>]<br>
> Sent: Tuesday, August 6, 2013 11:15<br>
> To: The Pacemaker cluster resource manager<br>
> Subject: Re: [Pacemaker] Problems with SBD fencing<br>
><br>
> Hi,<br>
><br>
> On Thu, Aug 01, 2013 at 07:58:55PM +0200, Jan Christian Kaldestad wrote:<br>
> > Thanks for the explanation. But I'm quite confused about the SBD stonith<br>
> > resource configuration, as the SBD fencing wiki clearly states:<br>
> > "The sbd agent does not need to and should not be cloned. If all of your<br>
> > nodes run SBD, as is most likely, not even a monitor action provides a real<br>
> > benefit, since the daemon would suicide the node if there was a problem."<br>
> ><br>
> > and also this thread<br>
> ><br>
> > <a href="http://oss.clusterlabs.org/pipermail/pacemaker/2012-March/013507.html" target="_blank">http://oss.clusterlabs.org/pipermail/pacemaker/2012-March/013507.html</a> mentions<br>
> > that there should be only one SBD resource configured.<br>
> ><br>
> > Can someone please clarify? Should I configure 2 separate SBD resources,<br>
> > one for each cluster node?<br>
><br>
> No. One sbd resource is sufficient.<br>
><br>
> Thanks,<br>
><br>
> Dejan<br>
><br>
> ><br>
> > --<br>
> > Best regards<br>
> > Jan<br>
> ><br>
> ><br>
> > On Thu, Aug 1, 2013 at 6:47 PM, Andreas Mock <<a href="mailto:andreas.mock@web.de">andreas.mock@web.de</a>> wrote:<br>
> ><br>
> > > Hi Jan,<br>
> > ><br>
> > > first of all I don't know the SBD fencing infrastructure (I just read the<br>
> > > article you linked). But as far as I understand, the "normal" fencing<br>
> > > (initiated on behalf of pacemaker) is done in the following way.<br>
> > ><br>
> > > The SBD fencing resource (agent) writes a request for self-stonithing into<br>
> > > one or more SBD partitions where the SBD daemon is listening and hopefully<br>
> > > reacting on it.<br>
> > > So, I'm pretty sure (without knowing) that you have to configure the<br>
> > > stonith agent in a way that pacemaker knows how to talk to the stonith agent<br>
> > > to kill a certain cluster node.<br>
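The disk-based message flow described here can also be exercised by hand with the sbd tool (the device path and node name below are illustrative examples, not values from this thread):

```shell
# Inspect the per-node message slots on the shared SBD partition
# (device path is a hypothetical example).
sbd -d /dev/disk/by-id/my-sbd-disk list

# Write a "reset" request into the target node's slot; the sbd
# daemon on that node polls the disk and self-fences on receipt.
sbd -d /dev/disk/by-id/my-sbd-disk message node2 reset
```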
> > ><br>
> > > What is the problem in your scenario: the agent which should be contacted<br>
> > > to stonith node2 is/was running on node2 and can't be connected<br>
> > > anymore.<br>
> > ><br>
> > > Because of that, stonith agent configuration is most of the time done the<br>
> > > following way in a two-node cluster:<br>
> > > On every node runs a stonith agent. The stonith agent is configured to<br>
> > > stonith the OTHER node. You have to be sure that this is technically<br>
> > > always possible.<br>
> > > This can be achieved with resource clones or - which is IMHO simpler - in<br>
> > > a 2-node environment with two stonith resources and a negative colocation<br>
> > > constraint.<br>
> > ><br>
> > > As far as I know there is also a self-stonith safety belt implemented,<br>
> > > in a way that a stonith agent on a node to be shot is never contacted.<br>
> > > (Do I remember correctly?)<br>
> > ><br>
> > > I'm sure this may solve your problem.<br>
> > ><br>
> > > Best regards<br>
> > > Andreas Mock<br>
> > ><br>
> > ><br>
> > > *From:* Jan Christian Kaldestad [mailto:<a href="mailto:janck76@gmail.com">janck76@gmail.com</a>]<br>
> > > *Sent:* Thursday, August 1, 2013 15:46<br>
> > > *To:* <a href="mailto:pacemaker@oss.clusterlabs.org">pacemaker@oss.clusterlabs.org</a><br>
> > > *Subject:* [Pacemaker] Problems with SBD fencing<br>
> > ><br>
> > ><br>
> > ><br>
> > > Hi,<br>
> > ><br>
> > ><br>
> > > I am evaluating the SLES HA Extension 11 SP3 product. The cluster<br>
> > > consists of 2 nodes (active/passive), using an SBD stonith resource on a<br>
> > > shared SAN disk, configured according to<br>
> > > <a href="http://www.linux-ha.org/wiki/SBD_Fencing" target="_blank">http://www.linux-ha.org/wiki/SBD_Fencing</a><br>
> > ><br>
> > > The SBD daemon is running on both nodes, and the stonith resource (defined<br>
> > > as a primitive) is running on one node only.<br>
> > > There is also a monitor operation for the stonith resource<br>
> > > (interval=36000, timeout=20)<br>
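That setup corresponds roughly to a single uncloned primitive like the following, using the `stonith_sbd` resource name that appears in the logs; the device path is an illustrative assumption:

```shell
# One (uncloned) SBD stonith primitive, as the SBD_Fencing wiki
# recommends; only the resource name matches the logs below, the
# device path is a hypothetical example.
crm configure primitive stonith_sbd stonith:external/sbd \
    params sbd_device="/dev/disk/by-id/my-sbd-disk" \
    op monitor interval="36000" timeout="20"
crm configure property stonith-enabled="true"
```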
> > ><br>
> > > I am having some problems getting failover/fencing to work as expected<br>
> in<br>
> > > the following scenario:<br>
> > > - Node 1 is running the resources that I created (except stonith)<br>
> > > - Node 2 is running the stonith resource<br>
> > > - Disconnect Node 2 from the network by bringing the interface down<br>
> > > - Node 2 status changes to UNCLEAN (offline), but the stonith resource<br>
> > > does not switch over to Node 1 and Node 2 does not reboot as I would<br>
> expect.<br>
> > > - Checking the logs on Node 1, I notice the following:<br>
> > > Aug 1 12:00:01 slesha1n1i-u pengine[8915]: warning: pe_fence_node:<br>
> Node<br>
> > > slesha1n2i-u will be fenced because the node is no longer part of the<br>
> > > cluster<br>
> > > Aug 1 12:00:01 slesha1n1i-u pengine[8915]: warning:<br>
> > > determine_online_status: Node slesha1n2i-u is unclean<br>
> > > Aug 1 12:00:01 slesha1n1i-u pengine[8915]: warning: custom_action:<br>
> > > Action stonith_sbd_stop_0 on slesha1n2i-u is unrunnable (offline)<br>
> > > Aug 1 12:00:01 slesha1n1i-u pengine[8915]: warning: stage6:<br>
> Scheduling<br>
> > > Node slesha1n2i-u for STONITH<br>
> > > Aug 1 12:00:01 slesha1n1i-u pengine[8915]: notice: LogActions: Move<br>
> > > stonith_sbd (Started slesha1n2i-u -> slesha1n1i-u)<br>
> > > ...<br>
> > > Aug 1 12:00:01 slesha1n1i-u crmd[8916]: notice: te_fence_node:<br>
> > > Executing reboot fencing operation (24) on slesha1n2i-u (timeout=60000)<br>
> > > Aug 1 12:00:01 slesha1n1i-u stonith-ng[8912]: notice:<br>
> handle_request:<br>
> > > Client crmd.8916.3144546f wants to fence (reboot) 'slesha1n2i-u' with<br>
> > > device '(any)'<br>
> > > Aug 1 12:00:01 slesha1n1i-u stonith-ng[8912]: notice:<br>
> > > initiate_remote_stonith_op: Initiating remote operation reboot for<br>
> > > slesha1n2i-u: 8c00ff7b-2986-4b2a-8b4a-760e8346349b (0)<br>
> > > Aug 1 12:00:01 slesha1n1i-u stonith-ng[8912]: error:<br>
> remote_op_done:<br>
> > > Operation reboot of slesha1n2i-u by slesha1n1i-u for<br>
> > > crmd.8916@slesha1n1i-u.8c00ff7b: No route to host<br>
> > > Aug 1 12:00:01 slesha1n1i-u crmd[8916]: notice:<br>
> > > tengine_stonith_callback: Stonith operation<br>
> > > 3/24:3:0:8a0f32b2-f91c-4cdf-9cee-1ba9b6e187ab: No route to host (-113)<br>
> > > Aug 1 12:00:01 slesha1n1i-u crmd[8916]: notice:<br>
> > > tengine_stonith_callback: Stonith operation 3 for slesha1n2i-u failed<br>
> (No<br>
> > > route to host): aborting transition.<br>
> > > Aug 1 12:00:01 slesha1n1i-u crmd[8916]: notice:<br>
> > > tengine_stonith_notify: Peer slesha1n2i-u was not terminated<br>
> > > (st_notify_fence) by slesha1n1i-u for slesha1n1i-u: No route to host<br>
> > > (ref=8c00ff7b-2986-4b2a-8b4a-760e8346349b) by client crmd.8916<br>
> > > Aug 1 12:00:01 slesha1n1i-u crmd[8916]: notice: run_graph:<br>
> Transition<br>
> > > 3 (Complete=1, Pending=0, Fired=0, Skipped=5, Incomplete=0,<br>
> > > Source=/var/lib/pacemaker/pengine/pe-warn-15.bz2): Stopped<br>
> > > Aug 1 12:00:01 slesha1n1i-u pengine[8915]: notice: unpack_config: On<br>
> > > loss of CCM Quorum: Ignore<br>
> > > Aug 1 12:00:01 slesha1n1i-u pengine[8915]: warning: pe_fence_node:<br>
> Node<br>
> > > slesha1n2i-u will be fenced because the node is no longer part of the<br>
> > > cluster<br>
> > > Aug 1 12:00:01 slesha1n1i-u pengine[8915]: warning:<br>
> > > determine_online_status: Node slesha1n2i-u is unclean<br>
> > > Aug 1 12:00:01 slesha1n1i-u pengine[8915]: warning: custom_action:<br>
> > > Action stonith_sbd_stop_0 on slesha1n2i-u is unrunnable (offline)<br>
> > > Aug 1 12:00:01 slesha1n1i-u pengine[8915]: warning: stage6:<br>
> Scheduling<br>
> > > Node slesha1n2i-u for STONITH<br>
> > > Aug 1 12:00:01 slesha1n1i-u pengine[8915]: notice: LogActions: Move<br>
> > > stonith_sbd (Started slesha1n2i-u -> slesha1n1i-u)<br>
> > > ...<br>
> > > Aug 1 12:00:02 slesha1n1i-u crmd[8916]: notice:<br>
> too_many_st_failures:<br>
> > > Too many failures to fence slesha1n2i-u (11), giving up<br>
> > ><br>
> > ><br>
> > ><br>
> > ><br>
> > ><br>
> > > - Then I bring Node 1 online again and start the cluster service,<br>
> checking<br>
> > > logs:<br>
> > > Aug 1 12:31:13 slesha1n1i-u corosync[8905]: [CLM ] CLM<br>
> CONFIGURATION<br>
> > > CHANGE<br>
> > > Aug 1 12:31:13 slesha1n1i-u corosync[8905]: [CLM ] New<br>
> Configuration:<br>
> > > Aug 1 12:31:13 slesha1n1i-u corosync[8905]: [CLM ] r(0)<br>
> ip(x.x.x.x)<br>
> > > Aug 1 12:31:13 slesha1n1i-u corosync[8905]: [CLM ] Members Left:<br>
> > > Aug 1 12:31:13 slesha1n1i-u corosync[8905]: [CLM ] Members Joined:<br>
> > > Aug 1 12:31:13 slesha1n1i-u corosync[8905]: [pcmk ] notice:<br>
> > > pcmk_peer_update: Transitional membership event on ring 376: memb=1,<br>
> new=0,<br>
> > > lost=0<br>
> > > Aug 1 12:31:13 slesha1n1i-u corosync[8905]: [pcmk ] info:<br>
> > > pcmk_peer_update: memb: slesha1n1i-u 168824371<br>
> > > Aug 1 12:31:13 slesha1n1i-u corosync[8905]: [CLM ] CLM<br>
> CONFIGURATION<br>
> > > CHANGE<br>
> > > Aug 1 12:31:13 slesha1n1i-u corosync[8905]: [CLM ] New<br>
> Configuration:<br>
> > > Aug 1 12:31:13 slesha1n1i-u corosync[8905]: [CLM ] r(0)<br>
> ip(x.x.x.x)<br>
> > > Aug 1 12:31:13 slesha1n1i-u corosync[8905]: [CLM ] r(0)<br>
> ip(y.y.y.y)<br>
> > > Aug 1 12:31:13 slesha1n1i-u corosync[8905]: [CLM ] Members Left:<br>
> > > Aug 1 12:31:13 slesha1n1i-u corosync[8905]: [CLM ] Members Joined:<br>
> > > Aug 1 12:31:13 slesha1n1i-u cib[8911]: notice: ais_dispatch_message:<br>
> > > Membership 376: quorum acquired<br>
> > > Aug 1 12:31:13 slesha1n1i-u corosync[8905]: [CLM ] r(0)<br>
> ip(y.y.y.y)<br>
> > > Aug 1 12:31:13 slesha1n1i-u crmd[8916]: notice:<br>
> ais_dispatch_message:<br>
> > > Membership 376: quorum acquired<br>
> > > Aug 1 12:31:13 slesha1n1i-u cib[8911]: notice:<br>
> crm_update_peer_state:<br>
> > > crm_update_ais_node: Node slesha1n2i-u[168824372] - state is now member<br>
> > > (was lost)<br>
> > > Aug 1 12:31:13 slesha1n1i-u corosync[8905]: [pcmk ] notice:<br>
> > > pcmk_peer_update: Stable membership event on ring 376: memb=2, new=1,<br>
> lost=0<br>
> > > Aug 1 12:31:13 slesha1n1i-u corosync[8905]: [pcmk ] info:<br>
> > > update_member: Node 168824372/slesha1n2i-u is now: member<br>
> > > Aug 1 12:31:13 slesha1n1i-u corosync[8905]: [pcmk ] info:<br>
> > > pcmk_peer_update: NEW: slesha1n2i-u 168824372<br>
> > > Aug 1 12:31:13 slesha1n1i-u crmd[8916]: notice:<br>
> crm_update_peer_state:<br>
> > > crm_update_ais_node: Node slesha1n2i-u[168824372] - state is now member<br>
> > > (was lost)<br>
> > > Aug 1 12:31:13 slesha1n1i-u corosync[8905]: [pcmk ] info:<br>
> > > pcmk_peer_update: MEMB: slesha1n1i-u 168824371<br>
> > > Aug 1 12:31:13 slesha1n1i-u crmd[8916]: notice:<br>
> peer_update_callback:<br>
> > > Node return implies stonith of slesha1n2i-u (action 24) completed<br>
> > > Aug 1 12:31:13 slesha1n1i-u corosync[8905]: [pcmk ] info:<br>
> > > pcmk_peer_update: MEMB: slesha1n2i-u 168824372<br>
> > > Aug 1 12:31:13 slesha1n1i-u corosync[8905]: [pcmk ] info:<br>
> > > send_member_notification: Sending membership update 376 to 2 children<br>
> > > Aug 1 12:31:13 slesha1n1i-u corosync[8905]: [TOTEM ] A processor<br>
> joined<br>
> > > or left the membership and a new membership was formed.<br>
> > > Aug 1 12:31:13 slesha1n1i-u corosync[8905]: [pcmk ] info:<br>
> > > update_member: 0x69f2f0 Node 168824372 (slesha1n2i-u) born on: 376<br>
> > > Aug 1 12:31:13 slesha1n1i-u corosync[8905]: [pcmk ] info:<br>
> > > send_member_notification: Sending membership update 376 to 2 children<br>
> > > Aug 1 12:31:13 slesha1n1i-u crmd[8916]: notice: crm_update_quorum:<br>
> > > Updating quorum status to true (call=119)<br>
> > > Aug 1 12:31:13 slesha1n1i-u corosync[8905]: [CPG ] chosen downlist:<br>
> > > sender r(0) ip(x.x.x.x) ; members(old:1 left:0)<br>
> > > Aug 1 12:31:13 slesha1n1i-u corosync[8905]: [MAIN ] Completed<br>
> service<br>
> > > synchronization, ready to provide service.<br>
> > > Aug 1 12:31:13 slesha1n1i-u crmd[8916]: notice:<br>
> too_many_st_failures:<br>
> > > Too many failures to fence slesha1n2i-u (13), giving up<br>
> > > Aug 1 12:31:13 slesha1n1i-u crmd[8916]: notice:<br>
> too_many_st_failures:<br>
> > > Too many failures to fence slesha1n2i-u (13), giving up<br>
> > > Aug 1 12:31:14 slesha1n1i-u mgmtd: [8917]: info: CIB query: cib<br>
> > > Aug 1 12:31:14 slesha1n1i-u corosync[8905]: [pcmk ] info:<br>
> > > update_member: Node slesha1n2i-u now has process list:<br>
> > > 00000000000000000000000000151302 (1381122)<br>
> > > Aug 1 12:31:14 slesha1n1i-u corosync[8905]: [pcmk ] info:<br>
> > > send_member_notification: Sending membership update 376 to 2 children<br>
> > > Aug 1 12:31:14 slesha1n1i-u corosync[8905]: [pcmk ] info:<br>
> > > update_member: Node slesha1n2i-u now has process list:<br>
> > > 00000000000000000000000000141302 (1315586)<br>
> > > Aug 1 12:31:14 slesha1n1i-u corosync[8905]: [pcmk ] info:<br>
> > > send_member_notification: Sending membership update 376 to 2 children<br>
> > > Aug 1 12:31:14 slesha1n1i-u corosync[8905]: [pcmk ] info:<br>
> > > update_member: Node slesha1n2i-u now has process list:<br>
> > > 00000000000000000000000000101302 (1053442)<br>
> > > Aug 1 12:31:14 slesha1n1i-u corosync[8905]: [pcmk ] info:<br>
> > > send_member_notification: Sending membership update 376 to 2 children<br>
> > > Aug 1 12:31:15 slesha1n1i-u crmd[8916]: notice: do_state_transition:<br>
> > > State transition S_IDLE -> S_INTEGRATION [ input=I_NODE_JOIN<br>
> > > cause=C_HA_MESSAGE origin=route_message ]<br>
> > > Aug 1 12:31:15 slesha1n1i-u crmd[8916]: notice:<br>
> too_many_st_failures:<br>
> > > Too many failures to fence slesha1n2i-u (13), giving up<br>
> > > Aug 1 12:31:15 slesha1n1i-u crmd[8916]: notice:<br>
> too_many_st_failures:<br>
> > > Too many failures to fence slesha1n2i-u (13), giving up<br>
> > ><br>
> > > - Cluster status changes to Online for both nodes, but the stonith<br>
> > > resource won't start on any of the nodes.<br>
> > > - Trying to start the resource manually, but no success.<br>
> > > - Trying to restart the corosync process on Node 1 (rcopenais restart),<br>
> > > but it hangs forever. Checking logs:<br>
> > > Aug 1 12:42:08 slesha1n1i-u corosync[8905]: [SERV ] Unloading all<br>
> > > Corosync service engines.<br>
> > > Aug 1 12:42:08 slesha1n1i-u corosync[8905]: [pcmk ] notice:<br>
> > > pcmk_shutdown: Shuting down Pacemaker<br>
> > > Aug 1 12:42:08 slesha1n1i-u corosync[8905]: [pcmk ] notice:<br>
> > > stop_child: Sent -15 to mgmtd: [8917]<br>
> > > Aug 1 12:42:08 slesha1n1i-u mgmtd: [8917]: info: mgmtd is shutting<br>
> down<br>
> > > Aug 1 12:42:08 slesha1n1i-u mgmtd: [8917]: info: final_crm:<br>
> client_id=1<br>
> > > cib_name=live<br>
> > > Aug 1 12:42:08 slesha1n1i-u mgmtd: [8917]: info: final_crm:<br>
> client_id=2<br>
> > > cib_name=live<br>
> > > Aug 1 12:42:08 slesha1n1i-u mgmtd: [8917]: debug: [mgmtd] stopped<br>
> > > Aug 1 12:42:08 slesha1n1i-u corosync[8905]: [pcmk ] notice:<br>
> > > pcmk_shutdown: mgmtd confirmed stopped<br>
> > > Aug 1 12:42:08 slesha1n1i-u corosync[8905]: [pcmk ] notice:<br>
> > > stop_child: Sent -15 to crmd: [8916]<br>
> > > Aug 1 12:42:08 slesha1n1i-u crmd[8916]: notice: crm_shutdown:<br>
> > > Requesting shutdown, upper limit is 1200000ms<br>
> > > Aug 1 12:42:08 slesha1n1i-u attrd[8914]: notice:<br>
> attrd_trigger_update:<br>
> > > Sending flush op to all hosts for: shutdown (1375353728)<br>
> > > Aug 1 12:42:08 slesha1n1i-u attrd[8914]: notice:<br>
> attrd_perform_update:<br>
> > > Sent update 22: shutdown=1375353728<br>
> > > Aug 1 12:42:08 slesha1n1i-u crmd[8916]: notice:<br>
> too_many_st_failures:<br>
> > > Too many failures to fence slesha1n2i-u (13), giving up<br>
> > > Aug 1 12:42:08 slesha1n1i-u crmd[8916]: warning: do_log: FSA: Input<br>
> > > I_TE_SUCCESS from abort_transition_graph() received in state<br>
> S_POLICY_ENGINE<br>
> > > Aug 1 12:42:38 slesha1n1i-u corosync[8905]: [pcmk ] notice:<br>
> > > pcmk_shutdown: Still waiting for crmd (pid=8916, seq=6) to terminate...<br>
> > > Aug 1 12:43:08 slesha1n1i-u corosync[8905]: [pcmk ] notice:<br>
> > > pcmk_shutdown: Still waiting for crmd (pid=8916, seq=6) to terminate...<br>
> > ><br>
> > > ...<br>
> > ><br>
> > > - Finally I kill the corosync process on Node 1 (killall -9 corosync),<br>
> > > then corosync restarts.<br>
> > > - Checking status. All resources are up and running on Node 1, and the<br>
> > > stonith resource is running on Node 2 again.<br>
> > ><br>
> > ><br>
> > ><br>
> > > I have tested the same scenario several times. Sometimes the fencing<br>
> > > mechanism works as expected, but other times the stonith resource is not<br>
> > > transferred to Node 1 - as described here. So I need some assistance to<br>
> > > overcome this problem.<br>
> > ><br>
> > ><br>
> > ><br>
> > > --<br>
> > > Best regards<br>
> > > Jan<br>
> > ><br>
> > > _______________________________________________<br>
> > > Pacemaker mailing list: <a href="mailto:Pacemaker@oss.clusterlabs.org">Pacemaker@oss.clusterlabs.org</a><br>
> > > <a href="http://oss.clusterlabs.org/mailman/listinfo/pacemaker" target="_blank">http://oss.clusterlabs.org/mailman/listinfo/pacemaker</a><br>
> > ><br>
> > > Project Home: <a href="http://www.clusterlabs.org" target="_blank">http://www.clusterlabs.org</a><br>
> > > Getting started: <a href="http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf" target="_blank">http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf</a><br>
> > > Bugs: <a href="http://bugs.clusterlabs.org" target="_blank">http://bugs.clusterlabs.org</a><br>
> > ><br>
> > ><br>
> ><br>
> ><br>
> > --<br>
> > Best regards<br>
> > Jan Christian<br>
><br>
> > _______________________________________________<br>
> > Pacemaker mailing list: <a href="mailto:Pacemaker@oss.clusterlabs.org">Pacemaker@oss.clusterlabs.org</a><br>
> > <a href="http://oss.clusterlabs.org/mailman/listinfo/pacemaker" target="_blank">http://oss.clusterlabs.org/mailman/listinfo/pacemaker</a><br>
> ><br>
> > Project Home: <a href="http://www.clusterlabs.org" target="_blank">http://www.clusterlabs.org</a><br>
> > Getting started: <a href="http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf" target="_blank">http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf</a><br>
> > Bugs: <a href="http://bugs.clusterlabs.org" target="_blank">http://bugs.clusterlabs.org</a><br>
><br>
><br>
> _______________________________________________<br>
> Pacemaker mailing list: <a href="mailto:Pacemaker@oss.clusterlabs.org">Pacemaker@oss.clusterlabs.org</a><br>
> <a href="http://oss.clusterlabs.org/mailman/listinfo/pacemaker" target="_blank">http://oss.clusterlabs.org/mailman/listinfo/pacemaker</a><br>
><br>
> Project Home: <a href="http://www.clusterlabs.org" target="_blank">http://www.clusterlabs.org</a><br>
> Getting started: <a href="http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf" target="_blank">http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf</a><br>
> Bugs: <a href="http://bugs.clusterlabs.org" target="_blank">http://bugs.clusterlabs.org</a><br>
><br>
><br>
> _______________________________________________<br>
> Pacemaker mailing list: <a href="mailto:Pacemaker@oss.clusterlabs.org">Pacemaker@oss.clusterlabs.org</a><br>
> <a href="http://oss.clusterlabs.org/mailman/listinfo/pacemaker" target="_blank">http://oss.clusterlabs.org/mailman/listinfo/pacemaker</a><br>
><br>
> Project Home: <a href="http://www.clusterlabs.org" target="_blank">http://www.clusterlabs.org</a><br>
> Getting started: <a href="http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf" target="_blank">http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf</a><br>
> Bugs: <a href="http://bugs.clusterlabs.org" target="_blank">http://bugs.clusterlabs.org</a><br>
<br>
_______________________________________________<br>
Pacemaker mailing list: <a href="mailto:Pacemaker@oss.clusterlabs.org">Pacemaker@oss.clusterlabs.org</a><br>
<a href="http://oss.clusterlabs.org/mailman/listinfo/pacemaker" target="_blank">http://oss.clusterlabs.org/mailman/listinfo/pacemaker</a><br>
<br>
Project Home: <a href="http://www.clusterlabs.org" target="_blank">http://www.clusterlabs.org</a><br>
Getting started: <a href="http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf" target="_blank">http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf</a><br>
Bugs: <a href="http://bugs.clusterlabs.org" target="_blank">http://bugs.clusterlabs.org</a><br>
</div></div></blockquote></div><br><br clear="all"><div><br></div>-- <br>Best regards<div>Jan Christian</div>
</div>