[Pacemaker] Seeking suggestions for cluster configuration of HA iSCSI target and initiators

Phil Frost phil at macprofessionals.com
Mon Jul 16 13:34:47 EDT 2012


On 07/16/2012 01:14 PM, Digimer wrote:
> I've only tested this a little, so please take it as a general
> suggestion rather than strong advice.
>
> I created a two-node cluster, using Red Hat's high-availability add-on,
> using DRBD to keep the data replicated between the two "SAN" nodes and
> tgtd to export the LUNs. I had a virtual IP on the cluster to act as the
> target IP and I had DRBD in dual-primary mode with clustered LVM (so I
> had DRBD as the PV and exported the space from the LVs).
>
> Then I built a second cluster of five nodes to host KVM VMs. The
> underlying nodes used clustered LVM as well, but this time the LUNs were
> the PVs. I carved this space up into an LV per VM and made the VMs the HA
> service. Again using RH HA-Addon.
>
> In this setup, I was able to fail over the SAN without losing any VMs. I
> even messed up the fencing on the SAN cluster once, which meant it took
> 30s to fail over, and I didn't lose the VMs. So to the minimal extent I
> tested it, it worked excellently.
>
> I have some very rough notes on this setup. They're not fit for public
> consumption at all, but if you'd like I'll send them to you directly.
> They include the configurations which might help as a template or similar.
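
(For anyone following along: the dual-primary piece described above is
DRBD's allow-two-primaries mode. A minimal sketch of that part, with
invented hostnames, devices, and addresses -- not Digimer's actual
config:

    resource r0 {
        protocol C;                  # synchronous replication; required for dual-primary
        net {
            allow-two-primaries;     # both "SAN" nodes may be Primary at once
        }
        startup {
            become-primary-on both;
        }
        on san1 {
            device    /dev/drbd0;    # this device becomes the clustered-LVM PV
            disk      /dev/sdb1;
            address   192.168.2.1:7788;
            meta-disk internal;
        }
        on san2 {
            device    /dev/drbd0;
            disk      /dev/sdb1;
            address   192.168.2.2:7788;
            meta-disk internal;
        }
    }

Note that dual-primary is only safe with working fencing, which is why
the broken-fencing test above is a good stress of the design.)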

This sounds similar to what I have, except I'm doing it with only one 
cluster. The reason I'm using one cluster is twofold:

1) the storage is replicated between only two nodes, and I wish to avoid 
a two-node cluster so I can have a useful quorum.

2) my IO load is not high and my budget is low, so the storage nodes can 
also run VMs without being overloaded. Being able to fall back on them 
when too many VM nodes have failed is a robustness win.
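
For concreteness, the shape of my configuration is roughly the following 
(the resource names, IQN, and addresses here are illustrative, not my 
actual values):

    primitive p_target ocf:heartbeat:iSCSITarget \
            params iqn="iqn.2012-07.com.example:storage"
    primitive p_lu ocf:heartbeat:iSCSILogicalUnit \
            params target_iqn="iqn.2012-07.com.example:storage" lun="1" \
                   path="/dev/drbd0"
    primitive p_ip ocf:heartbeat:IPaddr2 \
            params ip="192.168.1.10" cidr_netmask="24"
    # the IP comes up last, so initiators only ever see a fully
    # configured target
    group g_target p_target p_lu p_ip
    primitive p_initiator ocf:heartbeat:iscsi \
            params portal="192.168.1.10:3260" \
                   target="iqn.2012-07.com.example:storage" \
            op monitor interval="30s"
    clone cl_initiator p_initiator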

As I have things configured, *usually* I can initiate a failover of the 
target and everything is fine. The problem comes when I'm unlucky and an 
initiator's monitor action runs while the target failover is in 
progress. It's easy to get unlucky if something is horribly wrong and 
the target is down longer than a normal failover. It's also possible, 
though harder, to get unlucky simply by issuing "crm resource migrate 
iscsitarget" at the wrong instant. My availability requirements aren't 
so high that I couldn't treat the occasional long-term target failure as 
a special case, but it's pretty horrible that a simple planned migration 
of the target can uncleanly reboot all the VMs on one node.

I've been studying the iscsi RA since my first post, and it now seems to 
me that the "failure" in the monitor action isn't actually in the 
monitor action at all. Rather, it appears that for *all* actions, the RA 
first performs a "discovery" step, and that's what is failing. I'm not 
really sure what this step is for, or why I need it. Is it simply to 
find an unspecified portal for a given IQN? Is it therefore useless in 
my case, since I've explicitly specified the portal in the resource 
parameters?
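
If I'm reading the RA correctly, the distinction is roughly the 
difference between these two iscsiadm invocations (portal and IQN 
invented, as above):

    # discovery: ask the portal which targets it serves; this needs the
    # portal to be reachable, so it fails while the target is mid-failover
    iscsiadm -m discovery -t sendtargets -p 192.168.1.10:3260

    # login: names both the portal and the IQN explicitly, so no
    # discovery is required
    iscsiadm -m node -T iqn.2012-07.com.example:storage \
            -p 192.168.1.10:3260 --login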

If I were to disable the "discovery" step, what are people's thoughts on 
the case where the target is operational but the initiator, for some 
reason (say, a network failure), can't reach it? In this case, assume 
Pacemaker knows the target is up; is there a way to encourage it to 
attempt migrating the initiator to another node?
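
One idea I've had (a sketch only, building on the hypothetical names 
above): give the initiator a low migration-threshold, so that a monitor 
failure bans that node's clone instance and anything colocated with it 
moves elsewhere:

    # after a single failure, ban this node's initiator instance until
    # failure-timeout expires; colocated VMs then move with it
    primitive p_initiator ocf:heartbeat:iscsi \
            params portal="192.168.1.10:3260" \
                   target="iqn.2012-07.com.example:storage" \
            op monitor interval="30s" \
            meta migration-threshold="1" failure-timeout="600s"
    # p_vm stands for any VM resource, e.g. ocf:heartbeat:VirtualDomain
    colocation co_vm_with_initiator inf: p_vm cl_initiator

But I don't know whether that's the right approach here, hence the 
question.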
