Is there a good source for information on HPC clusters? Also, does Pacemaker support HPC?

--- On Fri, 12/2/11, Florian Haas <florian@hastexo.com> wrote:

> From: Florian Haas <florian@hastexo.com>
> Subject: Re: [Pacemaker] Where to install applications
> To: "The Pacemaker cluster resource manager" <pacemaker@oss.clusterlabs.org>
> Date: Friday, December 2, 2011, 2:41 PM
>
> On Fri, Dec 2, 2011 at 5:35 PM, Charles DeVoe <scarecrow_57@yahoo.com> wrote:
> >
> > We are building a 4-node active/active cluster, which I believe is
> > the same as High Performance.
>
> Not quite. That's still an HA cluster with some scale-out capability.
> HPC is a slightly different ballgame.
>
> > The cluster has a SAN formatted with GFS2. The discussion is whether
> > to install the applications on the shared drive and point each
> > machine to that install point, or to install the applications
> > locally.
>
> Your call, really.
>
> Slapping all applications onto the shared storage means that every
> time you update that piece of software, you essentially have to
> restart everything all at once -- but only once. So for updates
> you'll normally have downtime. If everything goes nicely, you're back
> up very quickly. If something breaks, you're down for some time.
>
> Putting just the data on shared storage, and the applications on
> individual nodes, means you're capable of "rolling" upgrades, where
> you update your software node by node -- but then again, you have to
> do it on every node.
> If everything works on the first try, you'll normally take a bit
> longer than with the first approach. If something breaks on the
> upgrade of your first node, well, you shut it down, go back to square
> one, and find and fix the root cause while the other three nodes
> continue to hum along.
>
> I for one much prefer the second approach.
>
> Cheers,
> Florian
>
> --
> Want to know how we've helped others?
> http://www.hastexo.com/shoutbox
target="_blank">http://www.clusterlabs.org</a><br>Getting started: <a href="http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf" target="_blank">http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf</a><br>Bugs: <a href="http://bugs.clusterlabs.org" target="_blank">http://bugs.clusterlabs.org</a><br></div></blockquote></td></tr></table>