The first thing you can do is remove this constraint: "location master-prefer-node-1 Cluster-VIP 25: node1",
because your virtual IP is already part of the group. I would also like to see the log from the second node.

Thanks :-)
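P.S. Something along these lines should do it (a rough sketch; if you still want node1 preferred, point the score at the group rather than at the VIP):

    # drop the constraint that targets the VIP directly
    crm configure delete master-prefer-node-1
    # optional: prefer node1 for the whole group instead
    crm configure location master-prefer-node-1 mysql 25: node1

And once you have changed things, clear the old failures, e.g. "crm resource cleanup mysqld", so a stale fail-count does not keep the resource away from a node.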

On 23 March 2012 at 13:42, coma <coma.inf@gmail.com> wrote:
Thank you for your responses,

I have fixed my migration-threshold problem with the lsb:mysqld resource (I can see migration-threshold=2 in the crm_mon failcounts, so that is OK), but failover still does not happen when mysql fails (it works fine when the node fails or goes to standby).
So I have tried the ocf resource agent; it works fine on my first node but fails on my second node with an unknown error.

crm_mon --failcounts:

Failed actions:
    mysqld_monitor_0 (node=node2, call=6, rc=1, status=complete): unknown error
    mysqld_stop_0 (node=node2, call=7, rc=1, status=complete): unknown error

I have exactly the same mysql package versions and configuration on both nodes (with proper permissions); corosync/heartbeat and pacemaker are at the same versions too:

corosynclib-1.2.7-1.1.el5
corosync-1.2.7-1.1.el5
pacemaker-libs-1.1.5-1.1.el5
pacemaker-1.1.5-1.1.el5
heartbeat-3.0.3-2.3.el5
heartbeat-libs-3.0.3-2.3.el5
heartbeat-debuginfo-3.0.2-2.el5

So I do not understand why it works on one node but not on the second.
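I guess the next thing I can try is to run the agent by hand on node2 to see the real error behind "unknown error". A rough sketch, assuming the usual resource-agents layout under /usr/lib/ocf and reusing the primitive parameters below:

    export OCF_ROOT=/usr/lib/ocf OCF_RESOURCE_INSTANCE=mysqld
    export OCF_RESKEY_binary=/usr/bin/mysqld_safe OCF_RESKEY_config=/etc/my.cnf
    export OCF_RESKEY_user=mysql OCF_RESKEY_group=mysql
    export OCF_RESKEY_pid=/var/run/mysqld/mysqld.pid
    export OCF_RESKEY_datadir=/data/mysql/databases
    export OCF_RESKEY_socket=/var/lib/mysql/mysql.sock
    /usr/lib/ocf/resource.d/heartbeat/mysql monitor; echo $?

The message it prints and the exit code should say more than the lrmd summary does.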

Resource config:

primitive mysqld ocf:heartbeat:mysql \
    params binary="/usr/bin/mysqld_safe" config="/etc/my.cnf" \
        user="mysql" group="mysql" pid="/var/run/mysqld/mysqld.pid" \
        datadir="/data/mysql/databases" socket="/var/lib/mysql/mysql.sock" \
    op start interval="0" timeout="120" \
    op stop interval="0" timeout="120" \
    op monitor interval="30" timeout="30" depth="0" \
    meta target-role="Started"

And the same result with the following (I have created a test database/table and granted the test user on it):

primitive mysqld ocf:heartbeat:mysql \
    params binary="/usr/bin/mysqld_safe" config="/etc/my.cnf" \
        datadir="/data/mysql/databases" user="mysql" \
        pid="/var/run/mysqld/mysqld.pid" socket="/var/lib/mysql/mysql.sock" \
        test_passwd="test" test_table="Cluster.dbcheck" test_user="test" \
    op start interval="0" timeout="120" \
    op stop interval="0" timeout="120" \
    op monitor interval="30s" timeout="30s" OCF_CHECK_LEVEL="1" \
    meta migration-threshold="3" target-role="Started"

Full config:

node node2 \
    attributes standby="off"
node node1 \
    attributes standby="off"
primitive Cluster-VIP ocf:heartbeat:IPaddr2 \
    params ip="x.x.x.x" broadcast="x.x.x.x" nic="eth0" cidr_netmask="21" iflabel="VIP1" \
    op monitor interval="10s" timeout="20s" \
    meta is-managed="true"
primitive datavg ocf:heartbeat:LVM \
    params volgrpname="datavg" exclusive="true" \
    op start interval="0" timeout="30" \
    op stop interval="0" timeout="30"
primitive drbd_mysql ocf:linbit:drbd \
    params drbd_resource="drbd-mysql" \
    op monitor interval="15s"
primitive fs_mysql ocf:heartbeat:Filesystem \
    params device="/dev/datavg/data" directory="/data" fstype="ext3"
primitive mysqld ocf:heartbeat:mysql \
    params binary="/usr/bin/mysqld_safe" config="/etc/my.cnf" user="mysql" group="mysql" pid="/var/run/mysqld/mysqld.pid" datadir="/data/mysql/databases" socket="/var/lib/mysql/mysql.sock" \
    op start interval="0" timeout="120" \
    op stop interval="0" timeout="120" \
    op monitor interval="30" timeout="30" depth="0"
group mysql datavg fs_mysql Cluster-VIP mysqld
ms ms_drbd_mysql drbd_mysql \
    meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
location master-prefer-node-1 Cluster-VIP 25: node1
colocation mysql_on_drbd inf: mysql ms_drbd_mysql:Master
order mysql_after_drbd inf: ms_drbd_mysql:promote mysql:start
property $id="cib-bootstrap-options" \
    dc-version="1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f" \
    cluster-infrastructure="openais" \
    expected-quorum-votes="2" \
    stonith-enabled="false" \
    no-quorum-policy="ignore" \
    last-lrm-refresh="1332504626"
rsc_defaults $id="rsc-options" \
    resource-stickiness="100"

2012/3/22 Andreas Kurz <andreas@hastexo.com>:
On 03/22/2012 03:23 PM, coma wrote:
> Thank you for your responses,
>
> I have added the migration-threshold on my mysqld resource; when I kill
> or manually stop mysql on one node, there is no failover to the second
> node.
> Also, when I look at crm_mon --failcounts, I can see "mysqld:
> migration-threshold=1000000 fail-count=1000000", so I don't understand
> why migration-threshold does not equal 2?
>
> Migration summary:
> * Node node1:
>    mysqld: migration-threshold=1000000 fail-count=1000000
> * Node node2:
>
> Failed actions:
>     mysqld_monitor_10000 (node=node1, call=90, rc=7, status=complete): not running
>     mysqld_stop_0 (node=node1, call=93, rc=1, status=complete): unknown error

The lsb init script you are using does not seem to be LSB compliant ...
it looks like it returns an error when stopping an already stopped mysql.

http://clusterlabs.org/doc/en-US/Pacemaker/1.1/html-single/Pacemaker_Explained/index.html#ap-lsb

Fix the script ... or, better, use the ocf resource agent.
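A quick check for that: per LSB, stopping a service that is already stopped must still exit 0. A sketch, assuming the script is /etc/init.d/mysqld:

    /etc/init.d/mysqld stop        # with mysql already stopped
    echo $?                        # an LSB-compliant script prints 0 here, not an error code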

Regards,
Andreas

--
Need help with Pacemaker?
http://www.hastexo.com/now

>
> configuration:
>
> node node1 \
>     attributes standby="off"
> node node2 \
>     attributes standby="off"
> primitive Cluster-VIP ocf:heartbeat:IPaddr2 \
>     params ip="x.x.x.x" broadcast="x.x.x.x" nic="eth0" cidr_netmask="21" iflabel="VIP1" \
>     op monitor interval="10s" timeout="20s" \
>     meta is-managed="true"
> primitive datavg ocf:heartbeat:LVM \
>     params volgrpname="datavg" exclusive="true" \
>     op start interval="0" timeout="30" \
>     op stop interval="0" timeout="30"
> primitive drbd_mysql ocf:linbit:drbd \
>     params drbd_resource="drbd-mysql" \
>     op monitor interval="15s"
> primitive fs_mysql ocf:heartbeat:Filesystem \
>     params device="/dev/datavg/data" directory="/data" fstype="ext3"
> primitive mysqld lsb:mysqld \
>     op monitor interval="10s" timeout="30s" \
>     op start interval="0" timeout="120" \
>     op stop interval="0" timeout="120" \
>     meta target-role="Started" migration-threshold="2" failure-timeout="20s"
> group mysql datavg fs_mysql Cluster-VIP mysqld
> ms ms_drbd_mysql drbd_mysql \
>     meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
> location master-prefer-node-1 Cluster-VIP 25: node1
> colocation mysql_on_drbd inf: mysql ms_drbd_mysql:Master
> order mysql_after_drbd inf: ms_drbd_mysql:promote mysql:start
> property $id="cib-bootstrap-options" \
>     dc-version="1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f" \
>     cluster-infrastructure="openais" \
>     expected-quorum-votes="2" \
>     stonith-enabled="false" \
>     no-quorum-policy="ignore" \
>     last-lrm-refresh="1332425337"
> rsc_defaults $id="rsc-options" \
>     resource-stickiness="100"
>
> 2012/3/22 Andreas Kurz <andreas@hastexo.com>:
>
> On 03/22/2012 01:51 PM, coma wrote:
> > Ah yes, thank you, the mysql service status is now monitored, but the
> > failover is not performed?
>
> As long as local restarts are successful there is no need for a failover
> ... there is migration-threshold to limit local restart tries.
>
> Regards,
> Andreas
>
> --
> Need help with Pacemaker?
> http://www.hastexo.com/now
><br>
> ><br>
> ><br>
> ><br>
> > 2012/3/22 emmanuel segura <<a href="mailto:emi2fast@gmail.com" target="_blank">emi2fast@gmail.com</a><br>
</div>> <mailto:<a href="mailto:emi2fast@gmail.com" target="_blank">emi2fast@gmail.com</a>> <mailto:<a href="mailto:emi2fast@gmail.com" target="_blank">emi2fast@gmail.com</a><br>
<div>> <mailto:<a href="mailto:emi2fast@gmail.com" target="_blank">emi2fast@gmail.com</a>>>><br>
> ><br>
> > sorry<br>
> > I think you missed the op monitor operetion in your primitive<br>
> definition<br>
> ><br>
> ><br>
> ><br>
> > Il giorno 22 marzo 2012 11:52, emmanuel segura<br>
> <<a href="mailto:emi2fast@gmail.com" target="_blank">emi2fast@gmail.com</a> <mailto:<a href="mailto:emi2fast@gmail.com" target="_blank">emi2fast@gmail.com</a>><br>
</div>> > <mailto:<a href="mailto:emi2fast@gmail.com" target="_blank">emi2fast@gmail.com</a> <mailto:<a href="mailto:emi2fast@gmail.com" target="_blank">emi2fast@gmail.com</a>>>> ha<br>
<div>> scritto:<br>
> ><br>
> > I think you missed the op monitor operetion you primitive<br>
> definition<br>
> ><br>
> > Il giorno 22 marzo 2012 11:33, coma <<a href="mailto:coma.inf@gmail.com" target="_blank">coma.inf@gmail.com</a><br>
> <mailto:<a href="mailto:coma.inf@gmail.com" target="_blank">coma.inf@gmail.com</a>><br>
</div>> > <mailto:<a href="mailto:coma.inf@gmail.com" target="_blank">coma.inf@gmail.com</a> <mailto:<a href="mailto:coma.inf@gmail.com" target="_blank">coma.inf@gmail.com</a>>>><br>
<div><div>> ha scritto:<br>
> ><br>
> > Hello,
> >
> > I have a question about mysql service monitoring in a MySQL HA cluster with pacemaker and DRBD.
> > I have set up a configuration that allows failover between two nodes; it works fine when a node is offline (or standby), but I would like to know whether it is possible to monitor the mysql service and perform a failover if mysql is stopped or unavailable?
> >
> > Thank you in advance for any response.
> >
> > My crm configuration:
> >
> > node node1 \
> >     attributes standby="off"
> > node node2 \
> >     attributes standby="off"
> > primitive Cluster-VIP ocf:heartbeat:IPaddr2 \
> >     params ip="x.x.x.x" broadcast="x.x.x.x" nic="eth0" cidr_netmask="21" iflabel="VIP1" \
> >     op monitor interval="10s" timeout="20s" \
> >     meta is-managed="true"
> > primitive datavg ocf:heartbeat:LVM \
> >     params volgrpname="datavg" exclusive="true" \
> >     op start interval="0" timeout="30" \
> >     op stop interval="0" timeout="30"
> > primitive drbd_mysql ocf:linbit:drbd \
> >     params drbd_resource="drbd-mysql" \
> >     op monitor interval="15s"
> > primitive fs_mysql ocf:heartbeat:Filesystem \
> >     params device="/dev/datavg/data" directory="/data" fstype="ext3"
> > primitive mysqld lsb:mysqld
> > group mysql datavg fs_mysql Cluster-VIP mysqld
> > ms ms_drbd_mysql drbd_mysql \
> >     meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
> > location master-prefer-node-1 Cluster-VIP 25: node1
> > colocation mysql_on_drbd inf: mysql ms_drbd_mysql:Master
> > order mysql_after_drbd inf: ms_drbd_mysql:promote mysql:start
> > property $id="cib-bootstrap-options" \
> >     dc-version="1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f" \
> >     cluster-infrastructure="openais" \
> >     expected-quorum-votes="2" \
> >     stonith-enabled="false" \
> >     no-quorum-policy="ignore" \
> >     last-lrm-refresh="1332254494"
> > rsc_defaults $id="rsc-options" \
> >     resource-stickiness="100"
> >
> >
> >
> > --
> > esta es mi vida e me la vivo hasta que dios quiera
> >

_______________________________________________
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org

--
esta es mi vida e me la vivo hasta que dios quiera