[Pacemaker] Does pingd works on openais?

Lars Marowsky-Bree lmb at suse.de
Tue Mar 18 10:58:30 EDT 2008

On 2008-03-10T11:13:51, Atanas Dyulgerov <atanas.dyulgerov at postpath.com> wrote:

> STONITH brutally shuts down a node. To do that you need redundant
> communication lines, smart power devices and definitely a local
> cluster. For a geographically separated cluster with remote nodes
> STONITH is not applicable. The method is called Node Fencing and, as I
> said, it has too many obstacles.

All fencing methods require a means of communicating with the device; in
the case of WAN clusters, the link between the (replicating) storage
arrays (for example) will also be cut - resource fencing is unavailable
for the same reasons as STONITH.

WAN clusters require the concept of self-fencing after loss of site
connectivity.
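To make the self-fencing idea concrete, here is a minimal sketch of the
decision a node at an isolated site has to make (illustrative shell, not
Pacemaker code; in practice the "self-fence" branch would be a
watchdog-triggered reboot):

```shell
# Sketch of a split-site node's self-fencing policy (illustrative only).
# Prints the action the node should take.
self_fence_decision() {
    have_quorum=$1      # yes/no
    grace_expired=$2    # yes/no
    if [ "$have_quorum" = yes ]; then
        # Still in the majority partition: keep running resources.
        echo run
    elif [ "$grace_expired" = no ]; then
        # Give the WAN link a short grace period to heal.
        echo standby
    else
        # Grace period over and still no quorum: a node nobody can
        # reach cannot be fenced remotely, so it must fence itself.
        echo self-fence
    fi
}
```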

All discussions regarding node- versus resource-level fencing are
correct and hold true, but, as Andrew explained, we could already
support resource fencing if the RAs implemented it - if you set
no-quorum-policy=ignore, you've turned the cluster into a
resource-driven model, like SteelEye's LifeKeeper.
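For reference, that switch is a single cluster property; with
crm_attribute it would be set roughly like this (the property name and
value are real, treat the exact invocation as a sketch - it needs a
running cluster):

```shell
# Tell the CRM to ignore quorum loss: every partition keeps managing
# its resources, i.e. a resource-driven rather than quorum-driven model.
crm_attribute --type crm_config --name no-quorum-policy --update ignore
```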

Yet, the wide-area versus metro-area versus local data center cluster
discussion is quite separate from this.

> As for me a better choice is the 'resource locking' option, aka
> Resource Fencing. Instead of killing the errant node, the cluster CRM
> just fences/locks its I/O access to the shared storage resource until
> the cluster messaging system reports back a successful service stop on
> that node. It perfectly suits a DR cluster, with no need for
> additional communication lines. A more elegant solution!

Again, that's not quite true; see above. How does the resource itself
ensure fencing if the links are all cut? If you have an EMC Symmetrix
with built-in replication, that has exactly the same problem - you need
a tie-breaker to decide how to continue and where. Whether you then do
resource fencing or node fencing is not necessarily much different.

> Heartbeat/Pacemaker does not support resource fencing. To fence
> resources, the resources have to support locking features by
> themselves. You cannot lock something that cannot be locked. Software
> iSCSI targets do not have locking mechanisms, whereas GNBD does.
> However, GNBD locking features work with RedHat ClusterSuite only.

iSCSI targets in theory support SCSI reservations. Yet that only works
if the target is centralized (and thus a SPoF) - if you replicate it,
see above.
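For completeness: driving such reservations from Linux is usually done
with sg_persist from sg3_utils, roughly like this against a single,
centralized target (the keys and device path are made up):

```shell
# Register our key and take a "write exclusive, registrants only"
# (type 5) reservation on the shared LUN.
sg_persist --out --register --param-sark=0xabc1 /dev/sdb
sg_persist --out --reserve --param-rk=0xabc1 --prout-type=5 /dev/sdb

# To fence a failed peer, a surviving node preempts that peer's key
# (our key 0xabc1, victim's key 0xabc2):
sg_persist --out --preempt --param-rk=0xabc1 --param-sark=0xabc2 \
    --prout-type=5 /dev/sdb
```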

> I don't understand what you mean by saying "Pacemaker already "locks"
> the resources to one node, and orders fencing ...". Which resources
> does Pacemaker lock?

It's internal, and not exposed.

> What I'm trying to say is that the resource fencing is at least as
> important as node fencing. Pacemaker should be able to support the
> feature like RedHat Cluster Suite does. Resource locking should be
> supported in CRM/Pacemaker as well as in the resource by itself (gnbd,
> iscsi, drbd, etc...).

True, resource-level fencing is desirable, and the RAs should do it
automatically. Possibly, certain extensions to Pacemaker might also be
useful here, though I think that's not the biggest obstacle.
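As a sketch of what "the RAs should do it" might mean, a stop action
could revoke storage access after stopping the service (the helper names
here are hypothetical; the OCF exit-code variables are real):

```shell
# Illustrative OCF-style stop action with resource-level fencing.
my_ra_stop() {
    # Stop the application first (hypothetical helper).
    app_stop || return $OCF_ERR_GENERIC

    # Then revoke this node's access to the shared device, so a stale
    # process cannot write after failover (hypothetical helper; in
    # reality a SCSI reservation, array API call, or similar).
    revoke_storage_access "$OCF_RESKEY_device" || return $OCF_ERR_GENERIC

    return $OCF_SUCCESS
}
```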

> >> I don't have SAN. I'm looking for a cheaper solution. The reason
> >> I'm not using NFS is the slower performance compared to the fastest
> >> GNBD and iSCSI.
> >Have you _benchmarked_ that for your workload?
> Yes, I have benchmarked the application performance. With GNBD it had
> the best score, then came iSCSI, and NFS was last.

That I find quite surprising. I'd like to duplicate that benchmark
eventually; do you have the benchmark description and results available?


Teamlead Kernel, SuSE Labs, Research and Development
SUSE LINUX Products GmbH, GF: Markus Rex, HRB 16746 (AG Nürnberg)
"Experience is the name everyone gives to their mistakes." -- Oscar Wilde
