<font size=2 face="sans-serif">Hi,</font>
<br>
<br><font size=2 face="sans-serif">Thanks for the answer. I tested this
now; the problem is that mdadm hangs completely when we simulate the failure
of one storage. (We already tried two ways: 1. removing the mapping; 2.
removing one path and then disabling the remaining path via the port on
the SAN switch, which is nearly the same as a total failure of the storage.)</font>
<br>
<br><font size=2 face="sans-serif">So I can't get the output of mdadm
because it hangs.</font>
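One way to still see the array state while mdadm --detail blocks: /proc/mdstat is served from kernel memory, so reading it does not touch the dead storage paths, and the mdadm call itself can be bounded with coreutils timeout. A minimal sketch, assuming /dev/md0 as in the config below:

```shell
# The kernel's own view of the arrays; reading /proc/mdstat does not
# block on dead storage paths the way mdadm --detail can.
cat /proc/mdstat 2>/dev/null || echo "/proc/mdstat not available on this host"

# Bound the potentially hanging mdadm query with coreutils timeout (10s).
# Exit status 124 means the timeout fired; anything else is mdadm's own code.
rc=0
if command -v mdadm >/dev/null 2>&1; then
    timeout 10 mdadm --detail --test /dev/md0 || rc=$?
else
    rc=127   # mdadm is not installed on this host
fi
echo "mdadm exit status: $rc"
```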
<br>
<br><font size=2 face="sans-serif">I think it must be a problem with mdadm.
This is my mdadm.conf:</font>
<br>
<br><font size=2 face="sans-serif">"DEVICE /dev/mapper/3600a0b800050c94e000007874d2e0028_part1
/dev/mapper/3600a0b8000511f54000014b14d2df1b1_part1 /dev/mapper/3600a0b800050c94e000007874d2e0028_part2
/dev/mapper/3600a0b8000511f54000014b14d2df1b1_part2 /dev/mapper/3600a0b800050c94e000007874d2e0028_part3
/dev/mapper/3600a0b8000511f54000014b14d2df1b1_part3</font>
<br><font size=2 face="sans-serif">ARRAY /dev/md0 metadata=0.90 UUID=c411c076:bb28916f:d50a93ef:800dd1f0</font>
<br><font size=2 face="sans-serif">ARRAY /dev/md1 metadata=0.90 UUID=522279fa:f3cdbe3a:d50a93ef:800dd1f0</font>
<br><font size=2 face="sans-serif">ARRAY /dev/md2 metadata=0.90 UUID=01e07d7d:5305e46c:d50a93ef:800dd1f0"</font>
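For what it's worth, once `mdadm --detail --test` does return, its exit status already distinguishes a degraded array from a dead one. If I read mdadm(8) correctly, the codes are 0 = active/clean, 1 = degraded, 2 = not functional, 4 = error getting device information; a small sketch:

```shell
# Map the exit status of `mdadm --detail --test /dev/mdX` to a message.
# Codes as documented in mdadm(8); unmapping one box should yield 1.
describe_md_status() {
    case "$1" in
        0) echo "array is active and clean" ;;
        1) echo "array is degraded (at least one failed device)" ;;
        2) echo "array is not functional (multiple failed devices)" ;;
        *) echo "error getting device information" ;;
    esac
}

describe_md_status 1   # prints: array is degraded (at least one failed device)
```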
<br>
<br><font size=2 face="sans-serif">kr Patrik</font>
<br>
<br><font size=2 face="sans-serif"><br>
</font><font size=2 color=#5f5f5f face="sans-serif">Mit freundlichen Grüßen
/ Best Regards<br>
<b><br>
Patrik Rapposch, BSc</b><br>
System Administration<br>
<b><br>
KNAPP Systemintegration GmbH</b><br>
Waltenbachstraße 9<br>
8700 Leoben, Austria <br>
Phone: +43 3842 805-915<br>
Fax: +43 3842 805-500<br>
patrik.rapposch@knapp.com <br>
</font><a href="http://www.KNAPP.com"><font size=2 color=#5f5f5f face="sans-serif">www.KNAPP.com</font></a><font size=2 color=#5f5f5f face="sans-serif">
<br>
<br>
Commercial register number: FN 138870x<br>
Commercial register court: Leoben</font><font size=2 face="sans-serif"><br>
</font><font size=1 color=#d2d2d2 face="sans-serif"><br>
The information in this e-mail (including any attachment) is confidential
and intended to be for the use of the addressee(s) only. If you have received
the e-mail by mistake, any disclosure, copy, distribution or use of the
contents of the e-mail is prohibited, and you must delete the e-mail from
your system. As e-mail can be changed electronically KNAPP assumes no responsibility
for any alteration to this e-mail or its attachments. KNAPP has taken every
reasonable precaution to ensure that any attachment to this e-mail has
been swept for virus. However, KNAPP does not accept any liability for
damage sustained as a result of such attachment being virus infected and
strongly recommend that you carry out your own virus check before opening
any attachment.</font>
<br>
<br>
<br>
<table width=100%>
<tr valign=top>
<td width=40%><font size=1 face="sans-serif"><b>Holger Teutsch <holger.teutsch@web.de></b>
</font>
<p><font size=1 face="sans-serif">06.03.2011 19:56</font>
<table border>
<tr valign=top>
<td bgcolor=white>
<div align=center><font size=1 face="sans-serif">Please reply to<br>
The Pacemaker cluster resource manager <pacemaker@oss.clusterlabs.org></font></div></table>
<br>
<td width=59%>
<table width=100%>
<tr valign=top>
<td>
<div align=right><font size=1 face="sans-serif">To</font></div>
<td><font size=1 face="sans-serif">The Pacemaker cluster resource manager
<pacemaker@oss.clusterlabs.org></font>
<tr valign=top>
<td>
<div align=right><font size=1 face="sans-serif">Cc</font></div>
<td>
<tr valign=top>
<td>
<div align=right><font size=1 face="sans-serif">Subject</font></div>
<td><font size=1 face="sans-serif">Re: [Pacemaker] FW: time pressure -
software raid cluster, raid1 resource agent, help needed</font></table>
<br>
<table>
<tr valign=top>
<td>
<td></table>
<br></table>
<br>
<br>
<br><tt><font size=2>On Sun, 2011-03-06 at 12:40 +0100, Patrik.Rapposch@knapp.com
wrote:<br>
Hi,<br>
I assume the basic problem is in your RAID configuration.<br>
<br>
If you unmap one box, the devices should not be in status FAILED but<br>
degraded.<br>
<br>
So what is the exit status of<br>
<br>
mdadm --detail --test /dev/md0<br>
<br>
after unmapping ?<br>
<br>
Furthermore I would start with one isolated group containing the<br>
RAID, LVM, and FS to keep it simple.<br>
<br>
Regards<br>
Holger<br>
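(A sketch of such an isolated group in crm shell; the resource names, volume group, and mount point below are illustrative assumptions, not taken from the attached cib.xml:)

```shell
# Hypothetical crm shell sketch: one group holding RAID, LVM, and the
# filesystem. Names and device paths are examples only.
crm configure primitive p_raid1 ocf:heartbeat:Raid1 \
    params raidconf="/etc/mdadm.conf" raiddev="/dev/md0" \
    op monitor interval="30s" timeout="60s"
crm configure primitive p_lvm ocf:heartbeat:LVM \
    params volgrpname="vg_data" \
    op monitor interval="30s" timeout="60s"
crm configure primitive p_fs ocf:heartbeat:Filesystem \
    params device="/dev/vg_data/lv_data" directory="/srv/data" fstype="ext3" \
    op monitor interval="30s" timeout="60s"
crm configure group g_storage p_raid1 p_lvm p_fs
```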
<br>
> Hi, <br>
> <br>
> <br>
> does anyone have an idea about that? I only have the servers until next<br>
> week Friday, so to my regret I am under time pressure :(<br>
> <br>
> <br>
> <br>
> As I already wrote, I would appreciate and test any of your ideas.<br>
> Also, if someone has already built clusters with lvm-mirror, I would be<br>
> happy to get a CIB or some configuration examples.<br>
> <br>
> Thank you very much in advance.<br>
> <br>
> kr Patrik<br>
> <br>
> Patrik.Rapposch@knapp.com<br>
> 03.03.2011 15:11 Please reply to The Pacemaker cluster resource<br>
> manager<br>
> <br>
> To pacemaker@oss.clusterlabs.org<br>
> Cc <br>
> Bcc <br>
> Subject [Pacemaker] software raid cluster, raid1 resource agent, help<br>
> needed<br>
> <br>
> <br>
> Good Day, <br>
> <br>
> I have a 2-node active/passive cluster which is connected to two IBM<br>
> 4700 storages. I configured 3 RAIDs and I use the Raid1 resource<br>
> agent for managing the RAID1s in the cluster. <br>
> When I now disable the mapping of one storage, to simulate the failure<br>
> of one storage, the Raid1 resources change to the state "FAILED" and the<br>
> second node then takes over the resources and is able to start the<br>
> RAID devices. <br>
> <br>
> So I am confused why the active node can't keep the Raid1 resources<br>
> while the formerly passive node can take them over and start them<br>
> correctly. <br>
> <br>
> I would really appreciate your advice, or maybe someone already has an<br>
> example configuration for Raid1 with two storages.<br>
> <br>
> Thank you very much in advance. Attached you can find my cib.xml. <br>
> <br>
> kr Patrik <br>
> <br>
> <br>
> <br>
> <br>
> _______________________________________________<br>
> Pacemaker mailing list: Pacemaker@oss.clusterlabs.org<br>
> </font></tt><a href=http://oss.clusterlabs.org/mailman/listinfo/pacemaker><tt><font size=2>http://oss.clusterlabs.org/mailman/listinfo/pacemaker</font></tt></a><tt><font size=2><br>
> <br>
> Project Home: </font></tt><a href=http://www.clusterlabs.org/><tt><font size=2>http://www.clusterlabs.org</font></tt></a><tt><font size=2><br>
> Getting started:<br>
> </font></tt><a href=http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf><tt><font size=2>http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf</font></tt></a><tt><font size=2><br>
> Bugs:<br>
> </font></tt><a href="http://developerbugs.linux-foundation.org/enter_bug.cgi?product=Pacemaker"><tt><font size=2>http://developerbugs.linux-foundation.org/enter_bug.cgi?product=Pacemaker</font></tt></a><tt><font size=2><br>
<br>
<br>
<br>
</font></tt>
<br>