Yes, I missed this line in the logs: <span style="color:rgb(255,0,0)">info: RA output: (mysqld:stop:stderr) /usr/lib/ocf/resource.d//heartbeat/mysql: line 45: /usr/lib/ocf/lib/heartbeat/ocf-shellfuncs: No such file or directory<br>
<br><font color="#000000">Some libs were missing on my second node; I don't know why (same yum install on both nodes). I copied them over from the first node and now it works fine.<br>I'm sorry for the inconvenience, and thank you very much for your help; it is much appreciated!<br>
<br><br></font></span><br>
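For anyone hitting the same "No such file or directory" error: the agent builds its include path from OCF_ROOT, so each node can run the same quick check. A minimal sketch, assuming the usual default OCF_ROOT of /usr/lib/ocf; the helper name check_shellfuncs is mine, and the lib/heartbeat path is the one from the error above (older agents source .ocf-shellfuncs from resource.d/heartbeat instead):

```shell
# Recompute the include path the mysql agent tries to source and report
# whether it is readable. OCF_ROOT defaults to /usr/lib/ocf on most installs.
check_shellfuncs() {
    _root="${OCF_ROOT:-/usr/lib/ocf}"
    _funcs="$_root/lib/heartbeat/ocf-shellfuncs"
    if [ -r "$_funcs" ]; then
        echo "$_funcs: present"
    else
        echo "$_funcs: MISSING"
    fi
}

check_shellfuncs
```

If the file is missing, reinstalling the package that ships it (heartbeat or resource-agents, depending on the distribution) is cleaner than copying files between nodes; afterwards the INFINITY fail-count still has to be cleared, e.g. with crm resource cleanup mysqld.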
<br><div class="gmail_quote">2012/3/23 emmanuel segura <span dir="ltr"><<a href="mailto:emi2fast@gmail.com">emi2fast@gmail.com</a>></span><br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Try looking at line 45 of your resource agent; I have this in a cluster with mysql:<br><br>=====================================================<br># Initialization:<br><br>: ${OCF_FUNCTIONS_DIR=${OCF_ROOT}/resource.d/heartbeat}<br>
. ${OCF_FUNCTIONS_DIR}/.ocf-shellfuncs<br>=====================================================<br><br><div class="gmail_quote">On 23 March 2012 at 14:17, coma <span dir="ltr"><<a href="mailto:coma.inf@gmail.com" target="_blank">coma.inf@gmail.com</a>></span> wrote:<div>
<div class="h5"><br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">It's done.<br><br>Here is the log from the second node; thanks for your help, it is very appreciated:<br><br>Mar 23 13:54:53 node2 attrd: [3102]: info: find_hash_entry: Creating hash entry for last-failure-mysqld<br>
Mar 23 13:54:53 node2 attrd: [3102]: info: attrd_perform_update: Delaying operation last-failure-mysqld=<null>: cib not connected<br>
Mar 23 13:54:53 node2 attrd: [3102]: info: find_hash_entry: Creating hash entry for fail-count-mysqld<br>Mar 23 13:54:53 node2 attrd: [3102]: info: attrd_perform_update: Delaying operation fail-count-mysqld=<null>: cib not connected<br>
Mar 23 13:54:53 node2 lrmd: [3101]: debug: on_msg_add_rsc:client [3104] adds resource mysqld<br>Mar 23 13:54:53 node2 crmd: [3104]: info: do_lrm_rsc_op: Performing key=10:46:7:1f6f7a59-8e04-46d9-8a47-4b1ada0e6ea1 op=mysqld_monitor_0 )<br>
Mar 23 13:54:53 node2 lrmd: [3101]: debug: on_msg_perform_op:2359: copying parameters for rsc mysqld<br>Mar 23 13:54:53 node2 lrmd: [3101]: debug: on_msg_perform_op: add an operation operation monitor[6] on ocf::mysql::mysqld for client 3104, its parameters: socket=[/var/lib/mysql/mysql.sock] binary=[/usr/bin/mysqld_safe] group=[mysql] CRM_meta_timeout=[20000] crm_feature_set=[3.0.5] pid=[/var/run/mysqld/mysqld.pid] user=[mysql] config=[/etc/my.cnf] datadir=[/data/mysql/databases] to the operation list.<br>
Mar 23 13:54:53 node2 lrmd: [3101]: info: rsc:mysqld:6: probe<br>Mar 23 13:54:53 node2 lrmd: [3101]: WARN: Managed mysqld:monitor process 3223 exited with return code 1.<br>Mar 23 13:54:53 node2 lrmd: [3101]: info: RA output: (mysqld:monitor:stderr) /usr/lib/ocf/resource.d//heartbeat/mysql: line 45: /usr/lib/ocf/lib/heartbeat/ocf-shellfuncs: No such file or directory<br>
Mar 23 13:54:53 node2 crmd: [3104]: debug: create_operation_update: do_update_resource: Updating resouce mysqld after complete monitor op (interval=0)<br>Mar 23 13:54:53 node2 crmd: [3104]: info: process_lrm_event: LRM operation mysqld_monitor_0 (call=6, rc=1, cib-update=10, confirmed=true) unknown error<br>
Mar 23 13:54:54 node2 crmd: [3104]: info: do_lrm_rsc_op: Performing key=3:47:0:1f6f7a59-8e04-46d9-8a47-4b1ada0e6ea1 op=mysqld_stop_0 )<br>Mar 23 13:54:54 node2 lrmd: [3101]: debug: on_msg_perform_op: add an operation operation stop[7] on ocf::mysql::mysqld for client 3104, its parameters: crm_feature_set=[3.0.5] to the operation list.<br>
Mar 23 13:54:54 node2 lrmd: [3101]: info: rsc:mysqld:7: stop<br><span style="color:rgb(255,0,0)">Mar 23 13:54:54 node2 lrmd: [3101]: info: RA output: (mysqld:stop:stderr) /usr/lib/ocf/resource.d//heartbeat/mysql: line 45: /usr/lib/ocf/lib/heartbeat/ocf-shellfuncs: No such file or directory</span> <span style="color:rgb(255,0,0)" lang="en">-> I just saw this; I will look into it.</span><br>
Mar 23 13:54:54 node2 lrmd: [3101]: WARN: Managed mysqld:stop process 3241 exited with return code 1.<br>Mar 23 13:54:54 node2 crmd: [3104]: debug: create_operation_update: do_update_resource: Updating resouce mysqld after complete stop op (interval=0)<br>
Mar 23 13:54:54 node2 crmd: [3104]: info: process_lrm_event: LRM operation mysqld_stop_0 (call=7, rc=1, cib-update=12, confirmed=true) unknown error<br>Mar 23 13:54:54 node2 attrd: [3102]: debug: attrd_local_callback: update message from node1: fail-count-mysqld=INFINITY<br>
Mar 23 13:54:54 node2 attrd: [3102]: debug: attrd_local_callback: New value of fail-count-mysqld is INFINITY<br>Mar 23 13:54:54 node2 attrd: [3102]: info: attrd_trigger_update: Sending flush op to all hosts for: fail-count-mysqld (INFINITY)<br>
Mar 23 13:54:54 node2 attrd: [3102]: info: attrd_perform_update: Delaying operation fail-count-mysqld=INFINITY: cib not connected<br>Mar 23 13:54:54 node2 attrd: [3102]: debug: attrd_local_callback: update message from node1: last-failure-mysqld=1332507294<br>
Mar 23 13:54:54 node2 attrd: [3102]: debug: attrd_local_callback: New value of last-failure-mysqld is 1332507294<br>Mar 23 13:54:54 node2 attrd: [3102]: info: attrd_trigger_update: Sending flush op to all hosts for: last-failure-mysqld (1332507294)<br>
Mar 23 13:54:54 node2 attrd: [3102]: info: attrd_perform_update: Delaying operation last-failure-mysqld=1332507294: cib not connected<br>Mar 23 13:54:54 node2 attrd: [3102]: info: attrd_trigger_update: Sending flush op to all hosts for: last-failure-mysqld (1332507294)<br>
Mar 23 13:54:54 node2 cib: [3100]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='node2']//transient_attributes//nvpair[@name='last-failure-mysqld'] does not exist<br>Mar 23 13:54:54 node2 attrd: [3102]: info: attrd_perform_update: Sent update 4: last-failure-mysqld=1332507294<br>
Mar 23 13:54:54 node2 attrd: [3102]: info: attrd_trigger_update: Sending flush op to all hosts for: fail-count-mysqld (INFINITY)<br>Mar 23 13:54:54 node2 cib: [3100]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='node2']//transient_attributes//nvpair[@name='fail-count-mysqld'] does not exist<br>
Mar 23 13:54:54 node2 attrd: [3102]: info: attrd_perform_update: Sent update 11: fail-count-mysqld=INFINITY<br>Mar 23 13:54:54 node2 attrd: [3102]: debug: attrd_cib_callback: Update 4 for last-failure-mysqld=1332507294 passed<br>
Mar 23 13:54:54 node2 attrd: [3102]: debug: attrd_cib_callback: Update 11 for fail-count-mysqld=INFINITY passed<div><div><br><br><br><br><div class="gmail_quote">2012/3/23 emmanuel segura <span dir="ltr"><<a href="mailto:emi2fast@gmail.com" target="_blank">emi2fast@gmail.com</a>></span><br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">The first thing you can do is eliminate this: "location master-prefer-node-1 Cluster-VIP 25: node1"<br>
<br>
because you already have your virtual IP in the group, and I would like to see the log from the second node.<br><br>Thanks :-)<br>
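If a node preference is still wanted, it belongs on the whole group (or on the master role) rather than on one member of the group. A sketch in crm shell syntax; the new constraint name is illustrative:

```shell
# remove the constraint that targets only the VIP member of the group
crm configure delete master-prefer-node-1
# optionally prefer node1 for the whole mysql group instead
crm configure location mysql-prefer-node1 mysql 25: node1
```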
<br><div class="gmail_quote">On 23 March 2012 at 13:42, coma <span dir="ltr"><<a href="mailto:coma.inf@gmail.com" target="_blank">coma.inf@gmail.com</a>></span> wrote:<div><div><br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Thank you for your responses.<br><br>I have fixed my migration-threshold problem with the lsb:mysqld resource (I can see migration-threshold=2 in the crm_mon failcounts, so that is OK), but failover still does not happen when mysql fails (it works fine when a node fails or goes to standby).<br>
So I tried the ocf resource agent; it works fine on my first node but fails on my second node with an unknown error.<br><br>crm_mon --failcount:<br><br>Failed actions:<br> mysqld_monitor_0 (node=node2, call=6, rc=1, status=complete): unknown error<br>
 mysqld_stop_0 (node=node2, call=7, rc=1, status=complete): unknown error<br><br><br>I have exactly the same mysql package versions and configuration on both nodes (with proper permissions); corosync/heartbeat and pacemaker are the same versions too:<br>
<br>corosynclib-1.2.7-1.1.el5<br>corosync-1.2.7-1.1.el5<br>pacemaker-libs-1.1.5-1.1.el5<br>pacemaker-1.1.5-1.1.el5<br>heartbeat-3.0.3-2.3.el5<br>heartbeat-libs-3.0.3-2.3.el5<br>heartbeat-debuginfo-3.0.2-2.el5<br><br>So I don't understand why it works on one node but not on the second.<br>
<br><br>Resource config:<br><br>primitive mysqld ocf:heartbeat:mysql \<br> params binary="/usr/bin/mysqld_safe" config="/etc/my.cnf" \<br> user="mysql" group="mysql" pid="/var/run/mysqld/mysqld.pid" \<br>
datadir="/data/mysql/databases" socket="/var/lib/mysql/mysql.sock" \<br> op start interval="0" timeout="120" \<br> op stop interval="0" timeout="120" \<br>
op monitor interval="30" timeout="30" depth="0" \<br> target-role="Started"<br><br><br>And the same with this (I created a test database/table and granted the test user on it):<br>
<br>primitive mysqld ocf:heartbeat:mysql \<br> params binary="/usr/bin/mysqld_safe" config="/etc/my.cnf" \<br> datadir="/data/mysql/databases" user="mysql" \<br> pid="/var/run/mysqld/mysqld.pid" socket="/var/lib/mysql/mysql.sock" \<br>
test_passwd="test" test_table="Cluster.dbcheck" test_user="test" \<br> op start interval="0" timeout="120" \<br> op stop interval="0" timeout="120" \<br>
op monitor interval="30s" timeout="30s" OCF_CHECK_LEVEL="1" \<br> meta migration-threshold="3" target-role="Started"<br><br><br>Full config:<br><br>node node2 \<br>
attributes standby="off"<br>node node1 \<br> attributes standby="off"<br>primitive Cluster-VIP ocf:heartbeat:IPaddr2 \<br> params ip="x.x.x.x" broadcast="x.x.x.x" nic="eth0" cidr_netmask="21" iflabel="VIP1" \<br>
op monitor interval="10s" timeout="20s" \<br> meta is-managed="true"<br>primitive datavg ocf:heartbeat:LVM \<br> params volgrpname="datavg" exclusive="true" \<br>
op start interval="0" timeout="30" \<br> op stop interval="0" timeout="30"<br>primitive drbd_mysql ocf:linbit:drbd \<br> params drbd_resource="drbd-mysql" \<br>
op monitor interval="15s"<br>primitive fs_mysql ocf:heartbeat:Filesystem \<br> params device="/dev/datavg/data" directory="/data" fstype="ext3"<br>primitive mysqld ocf:heartbeat:mysql \<br>
params binary="/usr/bin/mysqld_safe" config="/etc/my.cnf" user="mysql" group="mysql" pid="/var/run/mysqld/mysqld.pid" datadir="/data/mysql/databases" socket="/var/lib/mysql/mysql.sock" \<br>
op start interval="0" timeout="120" \<br> op stop interval="0" timeout="120" \<br> op monitor interval="30" timeout="30" depth="0"<br>
group mysql datavg fs_mysql Cluster-VIP mysqld<br>ms ms_drbd_mysql drbd_mysql \<br> meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"<br>
location master-prefer-node-1 Cluster-VIP 25: node1<br>colocation mysql_on_drbd inf: mysql ms_drbd_mysql:Master<br>order mysql_after_drbd inf: ms_drbd_mysql:promote mysql:start<br>property $id="cib-bootstrap-options" \<br>
dc-version="1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f" \<br> cluster-infrastructure="openais" \<br> expected-quorum-votes="2" \<br> stonith-enabled="false" \<br>
no-quorum-policy="ignore" \<br> last-lrm-refresh="1332504626"<br>rsc_defaults $id="rsc-options" \<br> resource-stickiness="100"<br><br><br><br><br><br><div class="gmail_quote">
2012/3/22 Andreas Kurz <span dir="ltr"><<a href="mailto:andreas@hastexo.com" target="_blank">andreas@hastexo.com</a>></span><br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div>
On 03/22/2012 03:23 PM, coma wrote:<br>
> Thank you for your responses,<br>
><br>
> I have added the migration-threshold on my mysqld resource; when I kill<br>
> or manually stop mysql on one node, there is no failover to the second<br>
> node.<br>
> Also, when I look at crm_mon --failcounts, I can see "mysqld:<br>
> migration-threshold=1000000 fail-count=1000000", so I don't understand<br>
> why migration-threshold does not equal 2?<br>
><br>
> Migration summary:<br>
> * Node node1:<br>
> mysqld: migration-threshold=1000000 fail-count=1000000<br>
> * Node node2:<br>
><br>
> Failed actions:<br>
> mysqld_monitor_10000 (node=node1, call=90, rc=7, status=complete):<br>
> not running<br>
> mysqld_stop_0 (node=node1, call=93, rc=1, status=complete): unknown<br>
> error<br>
<br>
</div>The lsb init script you are using does not seem to be LSB compliant ...<br>
it looks like it returns an error when stopping an already-stopped mysql.<br>
<br>
<a href="http://clusterlabs.org/doc/en-US/Pacemaker/1.1/html-single/Pacemaker_Explained/index.html#ap-lsb" target="_blank">http://clusterlabs.org/doc/en-US/Pacemaker/1.1/html-single/Pacemaker_Explained/index.html#ap-lsb</a><br>
<br>
Fix the script ... or better use the ocf resource agent.<br>
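For reference, LSB requires "stop" on an already-stopped service to still exit 0, which is exactly the case Pacemaker's probe/stop cycle triggers here. A minimal sketch of a compliant stop function; the PIDFILE default and the kill logic are illustrative, not the actual mysqld init script:

```shell
PIDFILE="${PIDFILE:-/var/run/mysqld/mysqld.pid}"

# LSB-compliant stop: kill the daemon if it is running, but exit 0
# either way -- stopping a stopped service is a success, not an error.
stop() {
    if [ -f "$PIDFILE" ] && kill -0 "$(cat "$PIDFILE")" 2>/dev/null; then
        kill "$(cat "$PIDFILE")"
        rm -f "$PIDFILE"
    fi
    return 0
}
```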
<div><br>
Regards,<br>
Andreas<br>
<br>
--<br>
Need help with Pacemaker?<br>
<a href="http://www.hastexo.com/now" target="_blank">http://www.hastexo.com/now</a><br>
<br>
><br>
><br>
><br>
</div><div><div>> configuration:<br>
><br>
> node node1 \<br>
> attributes standby="off"<br>
> node node2 \<br>
> attributes standby="off"<br>
> primitive Cluster-VIP ocf:heartbeat:IPaddr2 \<br>
> params ip="x.x.x.x" broadcast="x.x.x.x" nic="eth0"<br>
> cidr_netmask="21" iflabel="VIP1" \<br>
> op monitor interval="10s" timeout="20s" \<br>
> meta is-managed="true"<br>
> primitive datavg ocf:heartbeat:LVM \<br>
> params volgrpname="datavg" exclusive="true" \<br>
> op start interval="0" timeout="30" \<br>
> op stop interval="0" timeout="30"<br>
> primitive drbd_mysql ocf:linbit:drbd \<br>
> params drbd_resource="drbd-mysql" \<br>
> op monitor interval="15s"<br>
> primitive fs_mysql ocf:heartbeat:Filesystem \<br>
> params device="/dev/datavg/data" directory="/data" fstype="ext3"<br>
> primitive mysqld lsb:mysqld \<br>
> op monitor interval="10s" timeout="30s" \<br>
> op start interval="0" timeout="120" \<br>
> op stop interval="0" timeout="120" \<br>
> meta target-role="Started" migration-threshold="2"<br>
> failure-timeout="20s"<br>
> group mysql datavg fs_mysql Cluster-VIP mysqld<br>
> ms ms_drbd_mysql drbd_mysql \<br>
> meta master-max="1" master-node-max="1" clone-max="2"<br>
> clone-node-max="1" notify="true"<br>
> location master-prefer-node-1 Cluster-VIP 25: node1<br>
> colocation mysql_on_drbd inf: mysql ms_drbd_mysql:Master<br>
> order mysql_after_drbd inf: ms_drbd_mysql:promote mysql:start<br>
> property $id="cib-bootstrap-options" \<br>
><br>
> dc-version="1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f" \<br>
> cluster-infrastructure="openais" \<br>
> expected-quorum-votes="2" \<br>
> stonith-enabled="false" \<br>
> no-quorum-policy="ignore" \<br>
> last-lrm-refresh="1332425337"<br>
> rsc_defaults $id="rsc-options" \<br>
> resource-stickiness="100"<br>
><br>
><br>
><br>
</div></div>> 2012/3/22 Andreas Kurz <<a href="mailto:andreas@hastexo.com" target="_blank">andreas@hastexo.com</a> <mailto:<a href="mailto:andreas@hastexo.com" target="_blank">andreas@hastexo.com</a>>><br>
<div>><br>
> On 03/22/2012 01:51 PM, coma wrote:<br>
> > Ah yes, thank you, the mysql service status is now monitored, but the<br>
> > failover is not performed?<br>
><br>
> As long as local restarts are successful there is no need for a failover<br>
> ... there is migration-threshold to limit the local restart attempts.<br>
><br>
> Regards,<br>
> Andreas<br>
><br>
> --<br>
> Need help with Pacemaker?<br>
> <a href="http://www.hastexo.com/now" target="_blank">http://www.hastexo.com/now</a><br>
><br>
> ><br>
> ><br>
> ><br>
> > 2012/3/22 emmanuel segura <<a href="mailto:emi2fast@gmail.com" target="_blank">emi2fast@gmail.com</a><br>
</div>> <mailto:<a href="mailto:emi2fast@gmail.com" target="_blank">emi2fast@gmail.com</a>> <mailto:<a href="mailto:emi2fast@gmail.com" target="_blank">emi2fast@gmail.com</a><br>
<div>> <mailto:<a href="mailto:emi2fast@gmail.com" target="_blank">emi2fast@gmail.com</a>>>><br>
> ><br>
> > sorry<br>
> > I think you missed the op monitor operation in your primitive<br>
> definition<br>
> ><br>
> ><br>
> ><br>
> > Il giorno 22 marzo 2012 11:52, emmanuel segura<br>
> <<a href="mailto:emi2fast@gmail.com" target="_blank">emi2fast@gmail.com</a> <mailto:<a href="mailto:emi2fast@gmail.com" target="_blank">emi2fast@gmail.com</a>><br>
</div>> > <mailto:<a href="mailto:emi2fast@gmail.com" target="_blank">emi2fast@gmail.com</a> <mailto:<a href="mailto:emi2fast@gmail.com" target="_blank">emi2fast@gmail.com</a>>>> ha<br>
<div>> scritto:<br>
> ><br>
> > I think you missed the op monitor operation in your primitive<br>
> definition<br>
> ><br>
> > Il giorno 22 marzo 2012 11:33, coma <<a href="mailto:coma.inf@gmail.com" target="_blank">coma.inf@gmail.com</a><br>
> <mailto:<a href="mailto:coma.inf@gmail.com" target="_blank">coma.inf@gmail.com</a>><br>
</div>> > <mailto:<a href="mailto:coma.inf@gmail.com" target="_blank">coma.inf@gmail.com</a> <mailto:<a href="mailto:coma.inf@gmail.com" target="_blank">coma.inf@gmail.com</a>>>><br>
<div><div>> ha scritto:<br>
> ><br>
> > Hello,<br>
> ><br>
> > I have a question about mysql service monitoring in a<br>
> > MySQL HA cluster with pacemaker and DRBD.<br>
> > I have set up a configuration to allow failover between<br>
> > two nodes; it works fine when a node is offline (or<br>
> standby),<br>
> > but I want to know if it is possible to monitor the mysql<br>
> > service and perform a failover if mysql is stopped or<br>
> > unavailable?<br>
> ><br>
> > Thank you in advance for any response.<br>
> ><br>
> > My crm configuration:<br>
> ><br>
> > node node1 \<br>
> > attributes standby="off"<br>
> > node node2 \<br>
> > attributes standby="off"<br>
> > primitive Cluster-VIP ocf:heartbeat:IPaddr2 \<br>
> > params ip="x.x.x.x" broadcast="x.x.x.x" nic="eth0"<br>
> > cidr_netmask="21" iflabel="VIP1" \<br>
> > op monitor interval="10s" timeout="20s" \<br>
> > meta is-managed="true"<br>
> > primitive datavg ocf:heartbeat:LVM \<br>
> > params volgrpname="datavg" exclusive="true" \<br>
> > op start interval="0" timeout="30" \<br>
> > op stop interval="0" timeout="30"<br>
> > primitive drbd_mysql ocf:linbit:drbd \<br>
> > params drbd_resource="drbd-mysql" \<br>
> > op monitor interval="15s"<br>
> > primitive fs_mysql ocf:heartbeat:Filesystem \<br>
> > params device="/dev/datavg/data" directory="/data"<br>
> > fstype="ext3"<br>
> > primitive mysqld lsb:mysqld<br>
> > group mysql datavg fs_mysql Cluster-VIP mysqld<br>
> > ms ms_drbd_mysql drbd_mysql \<br>
> > meta master-max="1" master-node-max="1"<br>
> > clone-max="2" clone-node-max="1" notify="true"<br>
> > location master-prefer-node-1 Cluster-VIP 25: node1<br>
> > colocation mysql_on_drbd inf: mysql ms_drbd_mysql:Master<br>
> > order mysql_after_drbd inf: ms_drbd_mysql:promote<br>
> mysql:start<br>
> > property $id="cib-bootstrap-options" \<br>
> ><br>
> ><br>
> dc-version="1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f"<br>
> > \<br>
> > cluster-infrastructure="openais" \<br>
> > expected-quorum-votes="2" \<br>
> > stonith-enabled="false" \<br>
> > no-quorum-policy="ignore" \<br>
> > last-lrm-refresh="1332254494"<br>
> > rsc_defaults $id="rsc-options" \<br>
> > resource-stickiness="100"<br>
> ><br>
> ><br>
> > _______________________________________________<br>
> > Pacemaker mailing list: <a href="mailto:Pacemaker@oss.clusterlabs.org" target="_blank">Pacemaker@oss.clusterlabs.org</a><br>
> <mailto:<a href="mailto:Pacemaker@oss.clusterlabs.org" target="_blank">Pacemaker@oss.clusterlabs.org</a>><br>
</div></div>> > <mailto:<a href="mailto:Pacemaker@oss.clusterlabs.org" target="_blank">Pacemaker@oss.clusterlabs.org</a><br>
<div>> <mailto:<a href="mailto:Pacemaker@oss.clusterlabs.org" target="_blank">Pacemaker@oss.clusterlabs.org</a>>><br>
> > <a href="http://oss.clusterlabs.org/mailman/listinfo/pacemaker" target="_blank">http://oss.clusterlabs.org/mailman/listinfo/pacemaker</a><br>
> ><br>
> > Project Home: <a href="http://www.clusterlabs.org" target="_blank">http://www.clusterlabs.org</a><br>
> > Getting started:<br>
> > <a href="http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf" target="_blank">http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf</a><br>
> > Bugs: <a href="http://bugs.clusterlabs.org" target="_blank">http://bugs.clusterlabs.org</a><br>
> ><br>
> ><br>
> ><br>
> ><br>
> > --<br>
> > esta es mi vida e me la vivo hasta que dios quiera<br>
> ><br>
> ><br>
> ><br>
> ><br>
> ><br>
<div><div>
> ><br>
> ><br>
> ><br>
> ><br>
><br>
><br>
><br>
><br>
><br>
><br>
><br>
<br>
<br>
</div></div><br>
<br></blockquote></div><br>
<br></blockquote></div></div></div><div><div><br><br clear="all"><br>-- <br>esta es mi vida e me la vivo hasta que dios quiera<br>
</div></div><br>
<br></blockquote></div><br>
</div></div><br>
<br></blockquote></div></div></div><div class="HOEnZb"><div class="h5"><br><br clear="all"><br>-- <br>esta es mi vida e me la vivo hasta que dios quiera<br>
</div></div><br>
<br></blockquote></div><br>