<div dir="ltr"><div><div>Hello Andrea<br><br></div>Can you show me your multipath.conf?<br><br></div>Thanks<br></div><div class="gmail_extra"><br><br><div class="gmail_quote">2013/5/2 andrea cuozzo <span dir="ltr"><<a href="mailto:andrea.cuozzo@sysma.it" target="_blank">andrea.cuozzo@sysma.it</a>></span><br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div link="blue" vlink="purple" lang="IT"><div><p class="MsoNormal"><span style="font-size:12.0pt" lang="EN-US">Hi,<u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt" lang="EN-US"> </span></p><p class="MsoNormal"><span style="font-size:12.0pt" lang="EN-US">This is my first time asking for help on a mailing list; I hope I won't make any netiquette mistakes. I could really use some help with SBD. Here's my scenario:</span></p>
<p class="MsoNormal"><span style="font-size:12.0pt" lang="EN-US"> </span></p><p class="MsoNormal"><span style="font-size:12.0pt" lang="EN-US">I have three clusters with a similar configuration: two physical servers with fibre channel shared storage, 4 resources (IP address, ext3 filesystem, Oracle listener, Oracle database) configured in a group, and external/sbd as the STONITH device. The operating system is SLES 11 SP1; the cluster components come from the SLES 11 SP1 HA package, at these versions:</span></p>
<p class="MsoNormal"><span style="font-size:12.0pt" lang="EN-US"><u></u> <u></u></span></p><p class="MsoNormal"><span style="font-size:12.0pt" lang="EN-US">openais: 1.1.4-5.6.3<u></u><u></u></span></p><p class="MsoNormal">
<span style="font-size:12.0pt" lang="EN-US">pacemaker: 1.1.5-5.9.11.1<u></u><u></u></span></p><p class="MsoNormal"><span style="font-size:12.0pt" lang="EN-US">resource-agents: 3.9.3-0.4.26.1<u></u><u></u></span></p><p class="MsoNormal">
<span style="font-size:12.0pt" lang="EN-US">cluster-glue: 1.0.8-0.4.4.1<u></u><u></u></span></p><p class="MsoNormal"><span style="font-size:12.0pt" lang="EN-US">corosync: 1.3.3-0.3.1<u></u><u></u></span></p><p class="MsoNormal">
<span style="font-size:12.0pt" lang="EN-US">csync2: 1.34-0.2.39</span></p><p class="MsoNormal"><span style="font-size:12.0pt" lang="EN-US"> </span></p><p class="MsoNormal"><span style="font-size:12.0pt" lang="EN-US">Each of the three clusters works fine for a couple of days; then both servers of one cluster, at the same time, start the SBD "WARN: Latency: No liveness for" countdown and restart. It happens at different hours and under different server loads (even at night, when the servers are close to 0% load). No two clusters have ever gone down at the same time. Their syslogs are spotless: the only warnings before the reboots are the SBD liveness-countdown messages. The SAN department can't see anything wrong on their side; the SAN is used by many other servers, and no one seems to be experiencing similar problems.</span></p>
<p class="MsoNormal"><span style="font-size:12.0pt" lang="EN-US"><u></u> <u></u></span></p><p class="MsoNormal"><span style="font-size:12.0pt" lang="EN-US">Hardware<u></u><u></u></span></p><p class="MsoNormal"><span style="font-size:12.0pt" lang="EN-US"><u></u> <u></u></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt" lang="EN-US">Cluster 1 and Cluster 2: two IBM blades, QLogic QMI2582 (one card, two ports), Brocade blade center FC switch, SAN switch, HP P9500 SAN <u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt" lang="EN-US">Cluster 3: two IBM x3650, QLogic QLE2560 (two cards per server), SAN switch, HP P9500 SAN<u></u><u></u></span></p><p class="MsoNormal"><span style="font-size:12.0pt" lang="EN-US"><u></u> <u></u></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt" lang="EN-US">Each cluster has a 50 GB LUN on the HP P9500 SAN (the SAN is shared, the LUNs are separate): partition 1 (7.8 MB) for SBD, partition 2 (49.99 GB) for Oracle on ext3</span></p>
<p class="MsoNormal"><span style="font-size:12.0pt" lang="EN-US"><u></u> <u></u></span></p><p class="MsoNormal"><span style="font-size:12.0pt" lang="EN-US">What I have done so far:<u></u><u></u></span></p><p class="MsoNormal">
<span style="font-size:12.0pt" lang="EN-US"><u></u> <u></u></span></p><p class="MsoNormal"><span style="font-size:12.0pt" lang="EN-US">- introduced options qla2xxx ql2xmaxqdepth=16 qlport_down_retry=1 ql2xloginretrycount=5 ql2xextended_error_logging=1 in /etc/modprobe.conf.local (and mkinitrd and restarted the servers)<u></u><u></u></span></p>
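As a sanity check, the live module parameters can be read back from sysfs after the mkinitrd and reboot. A sketch, assuming the stock qla2xxx parameter names; the SYSFS_ROOT override exists only so the helper can be exercised on a machine without the HBA:

```shell
#!/bin/sh
# Read back the qla2xxx options to confirm they survived mkinitrd + reboot.
# SYSFS_ROOT is overridable only so this sketch can be tried without the HBA.
SYSFS_ROOT="${SYSFS_ROOT:-/sys/module/qla2xxx/parameters}"

show_qla2xxx_params() {
    for p in ql2xmaxqdepth qlport_down_retry ql2xloginretrycount ql2xextended_error_logging; do
        if [ -r "$SYSFS_ROOT/$p" ]; then
            printf '%s = %s\n' "$p" "$(cat "$SYSFS_ROOT/$p")"
        else
            printf '%s = <not exposed>\n' "$p"
        fi
    done
}
```

If the values printed do not match modprobe.conf.local, the options most likely did not make it into the initrd.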
<p class="MsoNormal"><span style="font-size:12.0pt" lang="EN-US">- verified with the SAN department that the Qlogic firmware of my HBAs is compliant with their needs<u></u><u></u></span></p><p class="MsoNormal"><span style="font-size:12.0pt" lang="EN-US">- configured multipath.conf as per HP specifications for the OPEN-V type of SAN<u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt" lang="EN-US">- verified multipathd works as expected: shutting down one port at a time, the links stay up on the other port; shutting down both, the cluster fails over to the other node</span></p>
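For reference, HP's OPEN-V guidance generally produces a multipath.conf of the following shape. This is an illustrative sketch only, not the literal file: the alias and WWID are placeholders, and the authoritative attribute values for a given firmware level come from HP's own documentation:

```
# /etc/multipath.conf -- illustrative sketch for an HP P9500 (OPEN-V) LUN
defaults {
    user_friendly_names  yes
}
devices {
    device {
        vendor                "HP"
        product               "OPEN-.*"
        path_grouping_policy  multibus
        path_checker          tur
        failback              immediate
        no_path_retry         18
    }
}
multipaths {
    multipath {
        wwid   360060e80xxxxxxxxxxxxxxxxxxxxxxxx   # placeholder WWID
        alias  san   # partitions then appear as /dev/mapper/san_part1, san_part2
    }
}
```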
<p class="MsoNormal"><span style="font-size:12.0pt" lang="EN-US">- configured SBD to use the watchdog device (softdog) and the first partition of the LUN; all relevant tests confirm SBD works as expected (list, dump, message test, message exit; killing the sbd process reboots the server). Here's my /etc/sysconfig/sbd:</span></p>
<p class="MsoNormal"><span style="font-size:12.0pt" lang="EN-US"> </span></p><p class="MsoNormal"><span style="font-size:12.0pt" lang="EN-US">server1:~ # cat /etc/sysconfig/sbd</span></p><p class="MsoNormal">
<span style="font-size:12.0pt" lang="EN-US">SBD_DEVICE="/dev/mapper/san_part1"<u></u><u></u></span></p><p class="MsoNormal"><span style="font-size:12.0pt" lang="EN-US">SBD_OPTS="-W"<u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt" lang="EN-US"> </span></p><p class="MsoNormal"><span style="font-size:12.0pt" lang="EN-US">- doubled the default values for Timeout (watchdog) and Timeout (msgwait), setting them to 10 and 20, while the stonith timeout is 60s</span></p>
<p class="MsoNormal"><span style="font-size:12.0pt" lang="EN-US"> </span></p><p class="MsoNormal"><span style="font-size:12.0pt" lang="EN-US">server1:~ # sbd -d /dev/mapper/san_part1 dump</span></p>
<p class="MsoNormal"><span style="font-size:12.0pt" lang="EN-US">==Dumping header on disk /dev/mapper/san_part1<u></u><u></u></span></p><p class="MsoNormal"><span style="font-size:12.0pt" lang="EN-US">Header version : 2<u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt" lang="EN-US">Number of slots : 255<u></u><u></u></span></p><p class="MsoNormal"><span style="font-size:12.0pt" lang="EN-US">Sector size : 512<u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt" lang="EN-US">Timeout (watchdog) : 10<u></u><u></u></span></p><p class="MsoNormal"><span style="font-size:12.0pt" lang="EN-US">Timeout (allocate) : 2<u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt" lang="EN-US">Timeout (loop) : 1<u></u><u></u></span></p><p class="MsoNormal"><span style="font-size:12.0pt" lang="EN-US">Timeout (msgwait) : 20<u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt" lang="EN-US">==Header on disk /dev/mapper/san_part1 is dumped<u></u><u></u></span></p><p class="MsoNormal"><span style="font-size:12.0pt" lang="EN-US"><u></u> <u></u></span></p>
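The relationship I aimed for when picking those values can be written down as a small sketch: msgwait at least twice the watchdog timeout, and the cluster's stonith timeout comfortably above msgwait. The helper below just encodes that rule of thumb:

```shell
#!/bin/sh
# Sanity-check the sbd timeout relationships:
#   msgwait >= 2 * watchdog, and the stonith timeout > msgwait.
check_sbd_timeouts() {
    watchdog=$1
    msgwait=$2
    stonith=$3
    [ "$msgwait" -ge $((watchdog * 2)) ] || { echo "msgwait too small"; return 1; }
    [ "$stonith" -gt "$msgwait" ]        || { echo "stonith timeout too small"; return 1; }
    echo "ok"
}
```

With the values from the dump above, `check_sbd_timeouts 10 20 60` prints `ok`.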
<p class="MsoNormal"><span style="font-size:12.0pt" lang="EN-US">I've even tested with 60 and 120 for Timeout (watchdog) and Timeout (msgwait); when the problem happened again, the servers went all the way through the 60-second countdown to reboot.</span></p>
<p class="MsoNormal"><span style="font-size:12.0pt" lang="EN-US"> </span></p><p class="MsoNormal"><span style="font-size:12.0pt" lang="EN-US">Borrowing the idea from <a href="http://www.gossamer-threads.com/lists/linuxha/users/79213" target="_blank">http://www.gossamer-threads.com/lists/linuxha/users/79213</a>, I'm monitoring access times on the SBD partition of all three clusters: the average time to execute the dump command is 30 ms, with spikes over 100 ms a couple of times an hour. There's no gradual rise from the average when the problem comes, though. Here's what it looked like the last time (the dump command runs every 2 seconds):</span></p>
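The monitoring itself is just a timing loop. A sketch of the approach, with the timing helper kept generic; the actual probed command is the sbd dump shown above, so the loop is left commented out because it needs the device:

```shell
#!/bin/sh
# Time an arbitrary command in milliseconds (needs GNU date for %N).
probe_ms() {
    start=$(date +%s%N)
    "$@" >/dev/null 2>&1
    end=$(date +%s%N)
    echo $(( (end - start) / 1000000 ))
}

# Every 2 seconds, log how long the SBD header read takes:
# while true; do
#     printf '%s %sms\n' "$(date '+%F %T')" \
#         "$(probe_ms sbd -d /dev/mapper/san_part1 dump)" >> /var/log/sbd-latency.log
#     sleep 2
# done
```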
<p class="MsoNormal"><span style="font-size:12.0pt" lang="EN-US"><u></u> <u></u></span></p><p class="MsoNormal"><span style="font-size:12.0pt" lang="EN-US">...<u></u><u></u></span></p><p class="MsoNormal"><span style="font-size:12.0pt" lang="EN-US">real 0m0.031s<u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt" lang="EN-US">real 0m0.031s<u></u><u></u></span></p><p class="MsoNormal"><span style="font-size:12.0pt" lang="EN-US">real 0m0.030s<u></u><u></u></span></p><p class="MsoNormal">
<span style="font-size:12.0pt" lang="EN-US">real 0m0.030s<u></u><u></u></span></p><p class="MsoNormal"><span style="font-size:12.0pt" lang="EN-US">real 0m0.030s<u></u><u></u></span></p><p class="MsoNormal"><span style="font-size:12.0pt" lang="EN-US">real 0m0.030s<u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt" lang="EN-US">real 0m0.031s &lt;-- last record in the file; no more logging, the server reboots after the Timeout (watchdog) period</span></p>
<p class="MsoNormal"><span style="font-size:12.0pt" lang="EN-US">...</span></p><p class="MsoNormal"><span style="font-size:12.0pt" lang="EN-US"> </span></p><p class="MsoNormal"><span style="font-size:12.0pt" lang="EN-US">Right before the last cluster reboot I was monitoring Oracle I/O to its datafiles, to check whether Oracle could still access its partition (on the same LUN as the SBD one) when the SBD countdown started, and thus tell an SBD-only problem from a general LUN-access problem. There was no sign of Oracle I/O trouble during the countdown; Oracle seems to have stopped interacting with the I/O monitoring software the very moment the servers rebooted (all the servers involved share a common time server, but I can't be 100% sure they were in sync when I checked).</span></p>
<p class="MsoNormal"><span style="font-size:12.0pt" lang="EN-US"> </span></p><p class="MsoNormal"><span style="font-size:12.0pt" lang="EN-US">I'm in close contact with the SAN department; the problem might well be the servers losing access to the LUN for some fibre channel issue that doesn't yet show in their SAN logs, but I'd like to be 100% certain the cluster configuration is good. Here are my SBD-related questions:</span></p>
<p class="MsoNormal"><span style="font-size:12.0pt" lang="EN-US"> </span></p><p class="MsoNormal"><span style="font-size:12.0pt" lang="EN-US">- is the 1 MB size for the SBD partition strictly mandatory? The SLES 11 SP1 HA documentation says: "In an environment where all nodes have access to shared storage, a small partition (1MB) is formated for the use with SBD", while <a href="http://linux-ha.org/wiki/SBD_Fencing" target="_blank">http://linux-ha.org/wiki/SBD_Fencing</a> suggests no size for it. At OS setup the SLES partitioner wouldn't let us create a 1 MB partition (too small); the smallest available size was 7.8 MB. Can this difference in size introduce the random problem we're experiencing?</span></p>
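For perspective on the size question: assuming one 512-byte sector per slot plus a header sector (an assumption I'm inferring from the dump output above, which shows 255 slots and a 512-byte sector size, not from the SBD source), the space sbd actually needs is tiny, so 7.8 MB would leave ample headroom:

```shell
#!/bin/sh
# Rough sbd on-disk footprint, assuming one 512-byte sector per slot
# plus one header sector (assumption inferred from the dump output).
SECTOR_SIZE=512
SLOTS=255
HEADER_SECTORS=1
total=$(( (SLOTS + HEADER_SECTORS) * SECTOR_SIZE ))
echo "$total bytes"   # well below even a 1 MB partition
```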
<p class="MsoNormal"><span style="font-size:12.0pt" lang="EN-US"> </span></p><p class="MsoNormal"><span style="font-size:12.0pt" lang="EN-US">- at <a href="http://www.gossamer-threads.com/lists/linuxha/pacemaker/84951" target="_blank">http://www.gossamer-threads.com/lists/linuxha/pacemaker/84951</a> Mr. Lars Marowsky-Bree says: "The new SBD versions will not become stuck on IO anymore". Is the SBD version I'm using one that can become stuck on IO? I've checked, without luck, for SLES HA packages newer than the ones I'm using, but SBD becoming stuck on IO really sounds like something that would apply to my case.</span></p>
<p class="MsoNormal"><span style="font-size:12.0pt" lang="EN-US"><u></u> <u></u></span></p><p class="MsoNormal"><span style="font-size:12.0pt" lang="EN-US"><u></u> <u></u></span></p><p class="MsoNormal"><span style="font-size:12.0pt">Thanks and best regards.<u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt"><u></u> <u></u></span></p><p class="MsoNormal"><span style="font-size:12.0pt"><u></u> <u></u></span></p><p class="MsoNormal"><span style="font-size:12.0pt"><u></u> <u></u></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt"><u></u> <u></u></span></p></div></div><br>_______________________________________________<br>
Pacemaker mailing list: <a href="mailto:Pacemaker@oss.clusterlabs.org">Pacemaker@oss.clusterlabs.org</a><br>
<a href="http://oss.clusterlabs.org/mailman/listinfo/pacemaker" target="_blank">http://oss.clusterlabs.org/mailman/listinfo/pacemaker</a><br>
<br>
Project Home: <a href="http://www.clusterlabs.org" target="_blank">http://www.clusterlabs.org</a><br>
Getting started: <a href="http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf" target="_blank">http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf</a><br>
Bugs: <a href="http://bugs.clusterlabs.org" target="_blank">http://bugs.clusterlabs.org</a><br>
<br></blockquote></div><br><br clear="all"><br>-- <br>this is my life and I live it for as long as God wills
</div>