Hi,

On Tue, Jan 25, 2011 at 2:41 PM, Robert van Leeuwen <vanleeuwen@stone-it.com> wrote:
<div class="im">> > The only thing to do that remains would be a daemon that switches off<br>
> > unused machines to save energy. But this could be done using STONITH<br>
> > agents.<br>
> ><br>
> > Basically this would be an option to make cloud computing really green!<br>
> ><br>
> > Please mail me your comments about this idea. Thanks.<br>
> ><br>
> > Cheers,<br>
><br>
> No reply, no comments? Nothing at all?<br>

I for one think that the type of green computing you're trying to envision makes a lot of sense. I know that Red Hat is one of the powers behind this project, and they already have something like this vision, but more integrated and with all the nice GUIs already implemented in RHEV.

Yes, I second the point that it adds complexity, but then you can just split the large cluster into several smaller clusters (if it really comes to that). Anyway, there are solutions to this kind of issue.

I was thinking of another extension of this idea, taking the example of OpenVZ-based VMs, which can have limits set on their CPU usage, either as a percentage or as cpuunits. The utilization feature could (theoretically) allow packing multiple VMs together based on their CPU usage. Say you've got a quad-core CPU; that would mean 400%, and allocating 100% per VM (i.e. each VM gets a CPU core) you could put a maximum of 4 VMs per node (or whatever other scenario one might think of, this is just an example).
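
To make the scenario concrete, here's a rough, untested sketch of how I imagine it in crm shell syntax (the node names, the cpu_pct attribute name, and the veid are made up for the example; ManageVE is the OpenVZ resource agent from resource-agents):

# hypothetical 4-core nodes: capacity as percent of total CPU
node node1 utilization cpu_pct=400
node node2 utilization cpu_pct=400

# each OpenVZ VM declares that it needs one full core (100%)
primitive vm1 ocf:heartbeat:ManageVE \
        params veid=101 \
        utilization cpu_pct=100

# place resources according to utilization; "minimal" concentrates
# them on as few nodes as possible, which is what you'd want before
# powering idle nodes down
property placement-strategy=minimal

On the OpenVZ side, the per-VM cap itself would be something like "vzctl set 101 --cpulimit 100 --save", if I remember the syntax correctly.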

Actually, when I first read about the utilization feature I immediately thought of this scenario.

Regards,
Dan
<div class="im">
<br>
</div>Hello Michael,<br>
<br>
Although in theory the idea is sound I use the cluster suite primarily for customers demanding high availability.<br>
Adding complexity, turning server's off & on and moving resources around probably won't be beneficial to the uptime off the resources & hosts.<br>
Turning nodes off also effects the number of quorum votes.<br>
If you have a major disaster when a part of the cluster is power-ed down you might create an scenario where the cluster can not recover itself but it would have recovered if all nodes were still running.<br>
<br>
I could see these features being very useful in non-ha environments like testing/development but most off MY customers cannot use this feature or would rather pay the power bill...<br>
<br>
Just my 2 cents,<br>
<font color="#888888"><br>
Robert van Leeuwen<br>
</font><div><div></div><div class="h5"><br>

--
Dan Frîncu
CCNA, RHCE