Hi,<br><br>Thanks for the answer below; that clears up why it wasn't working without a stonith device.<br><br>I'm wondering whether I would still need a stonith device if we plan to have two redundant NICs on each node (each connected to a different switch) for the LAN connection, plus one NIC for the DRBD sync connection. The two redundant LAN interfaces would be bonded together so they share the same IP. If one LAN link goes down, the other would still be up, so there would be no split-brain scenario; we would just have to make sure we fix the failed link before the remaining one has a chance to fail and cause a split brain.<br>
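<br>Roughly what I have in mind for the bond, as an untested sketch using RHEL/CentOS-style ifcfg files (the bond IP, the device names and the active-backup mode are placeholders, nothing here is configured yet):<br>
<br>
# /etc/sysconfig/network-scripts/ifcfg-bond0 (hypothetical)<br>
DEVICE=bond0<br>
IPADDR=10.10.10.30<br>
NETMASK=255.255.255.0<br>
ONBOOT=yes<br>
BOOTPROTO=none<br>
BONDING_OPTS="mode=active-backup miimon=100"<br>
<br>
# /etc/sysconfig/network-scripts/ifcfg-eth0 (and the same for the second slave NIC)<br>
DEVICE=eth0<br>
MASTER=bond0<br>
SLAVE=yes<br>
ONBOOT=yes<br>
BOOTPROTO=none<br>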
<br>I realize there's still a single point of failure there, since the bonded interface could fail as a whole. I don't think the company will spring for dedicated stonith hardware, and I don't see how I could use fence_ipmilan/IPMI as a stonith device in my two-node primary/secondary setup. My understanding here is still far from where it should be, but my impression is that a stonith device would only reboot a failed node if the DRBD sync connection is down.<br>
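<br>The closest I could piece together from the fence_ipmilan docs is something like the crm snippet below (completely untested, and assuming the fence-agents package is installed; the BMC addresses and credentials are made up, and the location constraints are meant to keep each node from fencing itself), but I'm not sure it fits our setup:<br>
<br>
primitive st-staging1 stonith:fence_ipmilan \<br>
params pcmk_host_list="staging1.dev.applepeak.com" ipaddr="10.10.10.201" \<br>
login="admin" passwd="secret" lanplus="true" \<br>
op monitor interval="60s"<br>
primitive st-staging2 stonith:fence_ipmilan \<br>
params pcmk_host_list="staging2.dev.applepeak.com" ipaddr="10.10.10.202" \<br>
login="admin" passwd="secret" lanplus="true" \<br>
op monitor interval="60s"<br>
location l-st-staging1 st-staging1 -inf: staging1.dev.applepeak.com<br>
location l-st-staging2 st-staging2 -inf: staging2.dev.applepeak.com<br>
property stonith-enabled="true"<br>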
<br>Thanks,<br>Charles<br><br><div class="gmail_quote">On Thu, Sep 29, 2011 at 10:25 AM, Dejan Muhamedagic <span dir="ltr"><<a href="mailto:dejanmm@fastmail.fm">dejanmm@fastmail.fm</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;">
Hi,<br>
<br>
On Thu, Sep 29, 2011 at 09:30:55AM -0300, Charles Richard wrote:<br>
> Here it is attached.<br>
><br>
> I also see the following two errors in the node 2 logs, which I assume mean<br>
> the problem is really that node1 is not getting demoted, though I'm not sure<br>
> why:<br>
><br>
> Error 1:<br>
> Sep 28 19:53:20 staging2 drbd[8587]: ERROR: mysqld: Called drbdadm -c<br>
> /etc/drbd.conf primary mysqld<br>
> Sep 28 19:53:20 staging2 drbd[8587]: ERROR: mysqld: Exit code 11<br>
> Sep 28 19:53:20 staging2 drbd[8587]: ERROR: mysqld: Command output:<br>
> Sep 28 19:53:20 staging2 lrmd: [1442]: info: RA output:<br>
> (drbd_mysql:1:promote:stdout)<br>
> Sep 28 19:53:22 staging2 lrmd: [1442]: info: RA output:<br>
> (drbd_mysql:1:promote:stderr) 0: State change failed: (-1) Multiple<br>
> primaries not allowed by config<br>
><br>
> Error 2:<br>
> Sep 28 19:53:27 staging2 kernel: d-con mysqld: Requested state change failed<br>
> by peer: Refusing to be Primary while peer is not outdated (-7)<br>
> Sep 28 19:53:27 staging2 kernel: d-con mysqld: peer( Primary -> Unknown )<br>
> conn( Connected -> Disconnecting ) disk( UpToDate -> Outdated ) pdsk(<br>
> UpToDate -> DUnknown )<br>
> Sep 28 19:53:27 staging2 kernel: d-con mysqld: meta connection shut down by<br>
> peer.<br>
><br>
> Also, failover works fine if I reboot either machine. The outdated machine<br>
> comes back up as secondary. The scenario where I get the errors above is<br>
> when I pull the network cable from the primary. Is it a stonith device that<br>
> should be protecting against this scenario and potentially rebooting the<br>
> primary?<br>
<br>
Yes. That's the only way for the cluster to keep sanity in case<br>
of split-brain caused by pulling the network cable.<br>
<br>
Thanks,<br>
<br>
Dejan<br>
<br>
> Feels like I'm getting so close to getting this working!<br>
><br>
> Thanks!<br>
> Charles<br>
><br>
> On Thu, Sep 29, 2011 at 4:15 AM, Andrew Beekhof <<a href="mailto:andrew@beekhof.net">andrew@beekhof.net</a>> wrote:<br>
><br>
> > Could you attach /var/lib/pengine/pe-input-3802.bz2 from staging1?<br>
> > That would tell us why.<br>
> ><br>
> > On Mon, Sep 26, 2011 at 10:28 PM, Charles Richard<br>
> > <<a href="mailto:chachi.richard@gmail.com">chachi.richard@gmail.com</a>> wrote:<br>
> > > Hi,<br>
> > ><br>
> > > I'm finally making some headway with my Pacemaker install, but now that<br>
> > > crm_mon doesn't return errors anymore and crm_verify is clean, I'm having<br>
> > > a problem where my master won't get promoted. Not sure what to do with<br>
> > > this one; any suggestions? Here's the log snippet and the config files:<br>
> > ><br>
> > > Sep 26 04:06:12 staging1 crmd: [1686]: info: crm_timer_popped: PEngine<br>
> > > Recheck Timer (I_PE_CALC) just popped!<br>
> > > Sep 26 04:06:12 staging1 crmd: [1686]: info: do_state_transition: State<br>
> > > transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC<br>
> > cause=C_TIMER_POPPED<br>
> > > origin=crm_timer_popped ]<br>
> > > Sep 26 04:06:12 staging1 crmd: [1686]: info: do_state_transition:<br>
> > Progressed<br>
> > > to state S_POLICY_ENGINE after C_TIMER_POPPED<br>
> > > Sep 26 04:06:12 staging1 crmd: [1686]: info: do_state_transition: All 2<br>
> > > cluster nodes are eligible to run resources.<br>
> > > Sep 26 04:06:12 staging1 crmd: [1686]: info: do_pe_invoke: Query 106:<br>
> > > Requesting the current CIB: S_POLICY_ENGINE<br>
> > > Sep 26 04:06:12 staging1 crmd: [1686]: info: do_pe_invoke_callback:<br>
> > Invoking<br>
> > > the PE: query=106, ref=pe_calc-dc-1317020772-95, seq=2564, quorate=1<br>
> > > Sep 26 04:06:12 staging1 pengine: [1685]: info: unpack_config: Startup<br>
> > > probes: enabled<br>
> > > Sep 26 04:06:12 staging1 pengine: [1685]: notice: unpack_config: On loss<br>
> > of<br>
> > > CCM Quorum: Ignore<br>
> > > Sep 26 04:06:12 staging1 pengine: [1685]: info: unpack_config: Node<br>
> > scores:<br>
> > > 'red' = -INFINITY, 'yellow' = 0, 'green' = 0<br>
> > > Sep 26 04:06:12 staging1 pengine: [1685]: info: unpack_domains: Unpacking<br>
> > > domains<br>
> > > Sep 26 04:06:12 staging1 pengine: [1685]: info: determine_online_status:<br>
> > > Node <a href="http://staging1.dev.applepeak.com" target="_blank">staging1.dev.applepeak.com</a> is online<br>
> > > Sep 26 04:06:12 staging1 pengine: [1685]: info: determine_online_status:<br>
> > > Node <a href="http://staging2.dev.applepeak.com" target="_blank">staging2.dev.applepeak.com</a> is online<br>
> > > Sep 26 04:06:12 staging1 pengine: [1685]: notice: group_print: Resource<br>
> > > Group: mysql<br>
> > > Sep 26 04:06:12 staging1 pengine: [1685]: notice: native_print:<br>
> > > fs_mysql#011(ocf::heartbeat:Filesystem):#011Stopped<br>
> > > Sep 26 04:06:12 staging1 pengine: [1685]: notice: native_print:<br>
> > > ip_mysql#011(ocf::heartbeat:IPaddr2):#011Stopped<br>
> > > Sep 26 04:06:12 staging1 pengine: [1685]: notice: native_print:<br>
> > > mysqld#011(lsb:mysqld):#011Stopped<br>
> > > Sep 26 04:06:12 staging1 pengine: [1685]: notice: clone_print:<br>
> > Master/Slave<br>
> > > Set: ms_drbd_mysql<br>
> > > Sep 26 04:06:12 staging1 pengine: [1685]: notice: short_print:<br>
> > Stopped:<br>
> > > [ drbd_mysql:0 drbd_mysql:1 ]<br>
> > > Sep 26 04:06:12 staging1 pengine: [1685]: info: master_color:<br>
> > ms_drbd_mysql:<br>
> > > Promoted 0 instances of a possible 1 to master<br>
> > > Sep 26 04:06:12 staging1 pengine: [1685]: info: native_merge_weights:<br>
> > > fs_mysql: Rolling back scores from ip_mysql<br>
> > > Sep 26 04:06:12 staging1 pengine: [1685]: info: native_merge_weights:<br>
> > > ip_mysql: Rolling back scores from mysqld<br>
> > > Sep 26 04:06:12 staging1 pengine: [1685]: info: master_color:<br>
> > ms_drbd_mysql:<br>
> > > Promoted 0 instances of a possible 1 to master<br>
> > > Sep 26 04:06:12 staging1 pengine: [1685]: notice: LogActions: Leave<br>
> > resource<br>
> > > fs_mysql#011(Stopped)<br>
> > > Sep 26 04:06:12 staging1 pengine: [1685]: notice: LogActions: Leave<br>
> > resource<br>
> > > ip_mysql#011(Stopped)<br>
> > > Sep 26 04:06:12 staging1 pengine: [1685]: notice: LogActions: Leave<br>
> > resource<br>
> > > mysqld#011(Stopped)<br>
> > > Sep 26 04:06:12 staging1 pengine: [1685]: notice: LogActions: Leave<br>
> > resource<br>
> > > drbd_mysql:0#011(Stopped)<br>
> > > Sep 26 04:06:12 staging1 pengine: [1685]: notice: LogActions: Leave<br>
> > resource<br>
> > > drbd_mysql:1#011(Stopped)<br>
> > > Sep 26 04:06:12 staging1 crmd: [1686]: info: do_state_transition: State<br>
> > > transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS<br>
> > > cause=C_IPC_MESSAGE origin=handle_response ]<br>
> > > Sep 26 04:06:12 staging1 crmd: [1686]: info: unpack_graph: Unpacked<br>
> > > transition 72: 0 actions in 0 synapses<br>
> > > Sep 26 04:06:12 staging1 crmd: [1686]: info: do_te_invoke: Processing<br>
> > graph<br>
> > > 72 (ref=pe_calc-dc-1317020772-95) derived from<br>
> > > /var/lib/pengine/pe-input-3802.bz2<br>
> > > Sep 26 04:06:12 staging1 crmd: [1686]: info: run_graph:<br>
> > > ====================================================<br>
> > > Sep 26 04:06:12 staging1 crmd: [1686]: notice: run_graph: Transition 72<br>
> > > (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0,<br>
> > > Source=/var/lib/pengine/pe-input-3802.bz2): Complete<br>
> > > Sep 26 04:06:12 staging1 crmd: [1686]: info: te_graph_trigger: Transition<br>
> > 72<br>
> > > is now complete<br>
> > > Sep 26 04:06:12 staging1 crmd: [1686]: info: notify_crmd: Transition 72<br>
> > > status: done - <null><br>
> > > Sep 26 04:06:12 staging1 crmd: [1686]: info: do_state_transition: State<br>
> > > transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS<br>
> > > cause=C_FSA_INTERNAL origin=notify_crmd ]<br>
> > > Sep 26 04:06:12 staging1 crmd: [1686]: info: do_state_transition:<br>
> > Starting<br>
> > > PEngine Recheck Timer<br>
> > > Sep 26 04:06:12 staging1 pengine: [1685]: info: process_pe_message:<br>
> > > Transition 72: PEngine Input stored in:<br>
> > /var/lib/pengine/pe-input-3802.bz2<br>
> > > Sep 26 04:15:09 staging1 cib: [1682]: info: cib_stats: Processed 1<br>
> > > operations (0.00us average, 0% utilization) in the last 10min<br>
> > ><br>
> > > My drbd config file:<br>
> > ><br>
> > > resource mysqld {<br>
> > ><br>
> > > protocol C;<br>
> > ><br>
> > > startup { wfc-timeout 0; degr-wfc-timeout 120; }<br>
> > ><br>
> > > disk { on-io-error detach; }<br>
> > ><br>
> > ><br>
> > > on staging1 {<br>
> > ><br>
> > > device /dev/drbd0;<br>
> > ><br>
> > > disk /dev/vg_staging1/lv_data;<br>
> > ><br>
> > > meta-disk internal;<br>
> > ><br>
> > > address <a href="http://10.10.20.1:7788" target="_blank">10.10.20.1:7788</a>;<br>
> > ><br>
> > > }<br>
> > ><br>
> > > on staging2 {<br>
> > ><br>
> > > device /dev/drbd0;<br>
> > ><br>
> > > disk /dev/vg_staging2/lv_data;<br>
> > ><br>
> > > meta-disk internal;<br>
> > ><br>
> > > address <a href="http://10.10.20.2:7788" target="_blank">10.10.20.2:7788</a>;<br>
> > ><br>
> > > }<br>
> > ><br>
> > > }<br>
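<br>(Side note while re-reading this: if we do end up adding fencing, the DRBD 8.x user guide also describes resource-level fencing hooks for Pacemaker, roughly the stanzas below inside the resource section. This is not applied here, and the handler paths assume the stock LINBIT scripts are installed at their usual location:)<br>
<br>
disk {<br>
  on-io-error detach;<br>
  fencing resource-only;<br>
}<br>
handlers {<br>
  fence-peer "/usr/lib/drbd/crm-fence-peer.sh";<br>
  after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";<br>
}<br>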
> > ><br>
> > > corosync.conf:<br>
> > ><br>
> > > compatibility: whitetank<br>
> > ><br>
> > > aisexec {<br>
> > > user: root<br>
> > > group: root<br>
> > > }<br>
> > ><br>
> > > totem {<br>
> > > version: 2<br>
> > > secauth: off<br>
> > > threads: 0<br>
> > > interface {<br>
> > > ringnumber: 0<br>
> > > bindnetaddr: 10.10.10.0<br>
> > > mcastaddr: 226.94.1.1<br>
> > > mcastport: 5405<br>
> > > }<br>
> > > }<br>
> > ><br>
> > > logging {<br>
> > > fileline: off<br>
> > > to_stderr: no<br>
> > > to_logfile: no<br>
> > > to_syslog: yes<br>
> > > logfile: /var/log/cluster/corosync.log<br>
> > > debug: off<br>
> > > timestamp: on<br>
> > > logger_subsys {<br>
> > > subsys: AMF<br>
> > > debug: off<br>
> > > }<br>
> > > }<br>
> > ><br>
> > > amf {<br>
> > > mode: disabled<br>
> > > }<br>
> > ><br>
> > > service {<br>
> > > #Load Pacemaker<br>
> > > name: pacemaker<br>
> > > ver: 0<br>
> > > use_mgmtd: yes<br>
> > > }<br>
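<br>(Another aside: corosync can apparently run a second, redundant ring, e.g. over the DRBD sync network, which would be one more way to reduce the single-point-of-failure worry above. A rough, untried sketch of the totem section follows -- only the 10.10.20.0 network comes from drbd.conf; the second ring's multicast address and port are guesses:)<br>
<br>
totem {<br>
  version: 2<br>
  secauth: off<br>
  threads: 0<br>
  rrp_mode: passive<br>
  interface {<br>
    ringnumber: 0<br>
    bindnetaddr: 10.10.10.0<br>
    mcastaddr: 226.94.1.1<br>
    mcastport: 5405<br>
  }<br>
  interface {<br>
    ringnumber: 1<br>
    bindnetaddr: 10.10.20.0<br>
    mcastaddr: 226.94.1.2<br>
    mcastport: 5407<br>
  }<br>
}<br>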
> > ><br>
> > > And my crm config:<br>
> > ><br>
> > > node <a href="http://staging1.dev.applepeak.com" target="_blank">staging1.dev.applepeak.com</a><br>
> > > node <a href="http://staging2.dev.applepeak.com" target="_blank">staging2.dev.applepeak.com</a><br>
> > > primitive drbd_mysql ocf:linbit:drbd \<br>
> > > params drbd_resource="mysqld" \<br>
> > > op monitor interval="15s" \<br>
> > > op start interval="0" timeout="240s" \<br>
> > > op stop interval="0" timeout="100s"<br>
> > > primitive fs_mysql ocf:heartbeat:Filesystem \<br>
> > > params device="/dev/drbd0" directory="/opt/data/mysql/data/mysql"<br>
> > > fstype="ext4" \<br>
> > > op start interval="0" timeout="60s" \<br>
> > > op stop interval="0" timeout="60s"<br>
> > > primitive ip_mysql ocf:heartbeat:IPaddr2 \<br>
> > > params ip="10.10.10.31" nic="eth0"<br>
> > > primitive mysqld lsb:mysqld<br>
> > > group mysql fs_mysql ip_mysql mysqld<br>
> > > ms ms_drbd_mysql drbd_mysql \<br>
> > > meta master-max="1" master-node-max="1" clone-max="2"<br>
> > > clone-node-max="1" notify="true"<br>
> > > colocation mysql_on_drbd inf: mysql ms_drbd_mysql:Master<br>
> > > order mysql_after_drbd inf: ms_drbd_mysql:promote mysql:start<br>
> > > property $id="cib-bootstrap-options" \<br>
> > > dc-version="1.1.2-f059ec7ced7a86f18e5490b67ebf4a0b963bccfe" \<br>
> > > cluster-infrastructure="openais" \<br>
> > > expected-quorum-votes="2" \<br>
> > > stonith-enabled="false" \<br>
> > > last-lrm-refresh="1316961847" \<br>
> > > stop-all-resources="true" \<br>
> > > no-quorum-policy="ignore"<br>
> > > rsc_defaults $id="rsc-options" \<br>
> > > resource-stickiness="100"<br>
> > ><br>
> > > Thanks,<br>
> > > Charles<br>
> > ><br>
<br>
_______________________________________________<br>
Pacemaker mailing list: <a href="mailto:Pacemaker@oss.clusterlabs.org">Pacemaker@oss.clusterlabs.org</a><br>
<a href="http://oss.clusterlabs.org/mailman/listinfo/pacemaker" target="_blank">http://oss.clusterlabs.org/mailman/listinfo/pacemaker</a><br>
<br>
Project Home: <a href="http://www.clusterlabs.org" target="_blank">http://www.clusterlabs.org</a><br>
Getting started: <a href="http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf" target="_blank">http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf</a><br>
Bugs: <a href="http://developerbugs.linux-foundation.org/enter_bug.cgi?product=Pacemaker" target="_blank">http://developerbugs.linux-foundation.org/enter_bug.cgi?product=Pacemaker</a><br>
</blockquote></div><br>