[ClusterLabs] (ELI5) Physical disk XXXXXXX does not have the inquiry data (SCSI page 83h VPD descriptor) that is required by failover clustering

Jason A Ramsey jason at eramsey.org
Wed Sep 7 11:53:48 EDT 2016


Anyone who follows this mailing list at all has probably noticed that I’m building a 2-node HA iSCSI target on RHEL 6 using Pacemaker/Corosync (and CMAN, I guess) and the available tgt SCSI tools, to serve as shared storage for some Windows Server Failover Cluster nodes. After a great deal of trial and error I finally have this working, but the disks are not passing cluster validation on the Windows side. Initially I was getting an error that the iSCSI target didn’t support SCSI-3 Persistent Reservations, which I was able to get around using the fence_scsi stonith module. Now validation fails with the error in the subject line: the disk is missing the SCSI page 83h VPD (device identification) descriptor. I’ve found some extremely detailed conversations about this error on the internet, but, frankly, I simply don’t follow them. Could someone ELI5 how to fix this **without scst, lio, or lio-t**? Thanks!
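
As a sanity check, the descriptor Windows is complaining about can be read back from any Linux initiator that’s logged into the target, using sg3_utils (/dev/sdc below is just a placeholder for whatever device node the LUN comes up as):

# sg_vpd --page=di /dev/sdc
# sg_inq --page=0x83 /dev/sdc

sg_vpd decodes the Device Identification VPD page (0x83) if the target returns one, and sg_inq dumps the same page raw. The persistent-reservation side can be double-checked the same way with:

# sg_persist --in --read-keys --device=/dev/sdc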

# pcs status
Cluster name: cluster
Stack: cman
Current DC: node1 (version 1.1.15-1.9a34920.git.el6-9a34920) - partition with quorum
Last updated: Wed Sep  7 11:47:32 2016
Last change: Tue Sep  6 10:55:21 2016 by root via cibadmin on node1

2 nodes configured
8 resources configured

Online: [ node1 node2 ]

Full list of resources:

 Master/Slave Set: cluster-fs2o [cluster-fs1o]
     Masters: [ node1 ]
     Slaves: [ node2 ]
 cluster-vip      (ocf::heartbeat:IPaddr2):          Started node1
 cluster-lvm      (ocf::heartbeat:LVM):              Started node1
 cluster-tgt      (ocf::heartbeat:iSCSITarget):      Started node1
 cluster-lun1     (ocf::heartbeat:iSCSILogicalUnit): Started node1
 cluster-lun2     (ocf::heartbeat:iSCSILogicalUnit): Started node1
 cluster-fence    (stonith:fence_scsi):              Started node2

PCSD Status:
  node1: Online
  node2: Online

# cat /etc/cluster/cluster.conf
<cluster config_version="9" name="cluster">
  <fence_daemon/>
  <clusternodes>
    <clusternode name="node1" nodeid="1">
      <fence>
        <method name="pcmk-method">
          <device name="pcmk-redirect" port="node1"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="node2" nodeid="2">
      <fence>
        <method name="pcmk-method">
          <device name="pcmk-redirect" port="node2"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <cman broadcast="no" expected_votes="1" transport="udpu" two_node="1"/>
  <fencedevices>
    <fencedevice agent="fence_pcmk" name="pcmk-redirect"/>
  </fencedevices>
  <rm>
    <failoverdomains/>
    <resources/>
  </rm>
</cluster>

# pcs config show
Cluster Name: cluster
Corosync Nodes:
node1 node2
Pacemaker Nodes:
node1 node2

Resources:
Master: cluster-fs2o
  Meta Attrs: master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
  Resource: cluster-fs1o (class=ocf provider=linbit type=drbd)
   Attributes: drbd_resource=targetfs
   Operations: start interval=0s timeout=240 (cluster-fs1o-start-interval-0s)
               promote interval=0s timeout=90 (cluster-fs1o-promote-interval-0s)
               demote interval=0s timeout=90 (cluster-fs1o-demote-interval-0s)
               stop interval=0s timeout=100 (cluster-fs1o-stop-interval-0s)
               monitor interval=10s (cluster-fs1o-monitor-interval-10s)
Resource: cluster-vip (class=ocf provider=heartbeat type=IPaddr2)
  Attributes: ip=10.30.96.100 cidr_netmask=32 nic=eth0
  Operations: start interval=0s timeout=20s (cluster-vip-start-interval-0s)
              stop interval=0s timeout=20s (cluster-vip-stop-interval-0s)
              monitor interval=30s (cluster-vip-monitor-interval-30s)
Resource: cluster-lvm (class=ocf provider=heartbeat type=LVM)
  Attributes: volgrpname=targetfs
  Operations: start interval=0s timeout=30 (cluster-lvm-start-interval-0s)
              stop interval=0s timeout=30 (cluster-lvm-stop-interval-0s)
              monitor interval=10s timeout=30 (cluster-lvm-monitor-interval-10s)
Resource: cluster-tgt (class=ocf provider=heartbeat type=iSCSITarget)
  Attributes: iqn=iqn.2016-08.local.hsinauth.test:targetfs tid=1 incoming_username=iscsi incoming_password=@4TAt9-laObIrdeR
  Operations: start interval=0s timeout=10 (cluster-tgt-start-interval-0s)
              stop interval=0s timeout=10 (cluster-tgt-stop-interval-0s)
              monitor interval=10s timeout=20s (cluster-tgt-monitor-interval-10s)
Resource: cluster-lun1 (class=ocf provider=heartbeat type=iSCSILogicalUnit)
  Attributes: target_iqn=iqn.2016-08.local.hsinauth.test:targetfs lun=1 path=/dev/targetfs/lun1
  Operations: start interval=0s timeout=10 (cluster-lun1-start-interval-0s)
              stop interval=0s timeout=10 (cluster-lun1-stop-interval-0s)
              monitor interval=10 (cluster-lun1-monitor-interval-10)
Resource: cluster-lun2 (class=ocf provider=heartbeat type=iSCSILogicalUnit)
  Attributes: target_iqn=iqn.2016-08.local.hsinauth.test:targetfs lun=2 path=/dev/targetfs/lun2
  Operations: start interval=0s timeout=10 (cluster-lun2-start-interval-0s)
              stop interval=0s timeout=10 (cluster-lun2-stop-interval-0s)
              monitor interval=10 (cluster-lun2-monitor-interval-10)

Stonith Devices:
Resource: cluster-fence (class=stonith type=fence_scsi)
  Attributes: devices=/dev/targetfs/lun1,/dev/targetfs/lun2
  Meta Attrs: provides=unfencing
  Operations: monitor interval=60s (cluster-fence-monitor-interval-60s)
Fencing Levels:

Location Constraints:
Ordering Constraints:
  promote cluster-fs2o then start cluster-lvm (kind:Mandatory) (id:order-cluster-fs2o-cluster-lvm-mandatory)
  start cluster-vip then start cluster-lvm (kind:Mandatory) (id:order-cluster-vip-cluster-lvm-mandatory)
  start cluster-lvm then start cluster-tgt (kind:Mandatory) (id:order-cluster-lvm-cluster-tgt-mandatory)
  start cluster-tgt then start cluster-lun1 (kind:Mandatory) (id:order-cluster-tgt-cluster-lun1-mandatory)
  start cluster-tgt then start cluster-lun2 (kind:Mandatory) (id:order-cluster-tgt-cluster-lun2-mandatory)
Colocation Constraints:
  cluster-vip with cluster-fs2o (score:INFINITY) (with-rsc-role:Master) (id:colocation-cluster-vip-cluster-fs2o-INFINITY)
  cluster-lvm with cluster-fs2o (score:INFINITY) (with-rsc-role:Master) (id:colocation-cluster-lvm-cluster-fs2o-INFINITY)
  cluster-tgt with cluster-fs2o (score:INFINITY) (with-rsc-role:Master) (id:colocation-cluster-tgt-cluster-fs2o-INFINITY)
  cluster-lun1 with cluster-fs2o (score:INFINITY) (with-rsc-role:Master) (id:colocation-cluster-lun1-cluster-fs2o-INFINITY)
  cluster-lun2 with cluster-fs2o (score:INFINITY) (with-rsc-role:Master) (id:colocation-cluster-lun2-cluster-fs2o-INFINITY)

Resources Defaults:
resource-stickiness: 100
Operations Defaults:
No defaults set

Cluster Properties:
cluster-infrastructure: cman
dc-version: 1.1.15-1.9a34920.git.el6-9a34920
default-resource-stickiness: 200
have-watchdog: false
last-lrm-refresh: 1472233020
no-quorum-policy: ignore
stonith-enabled: false

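In case it helps the next person searching: the only knob I’ve turned up so far is that ocf:heartbeat:iSCSILogicalUnit has optional scsi_id and scsi_sn parameters, which (for the tgt implementation) get passed through to tgtadm and feed the unit’s VPD identification data. A sketch of what setting them would look like, assuming the resource-agents build on RHEL 6 exposes those parameters (the ID/serial values here are made up):

# pcs resource update cluster-lun1 scsi_id=targetfs.1 scsi_sn=0a1b2c3d-lun1
# pcs resource update cluster-lun2 scsi_id=targetfs.2 scsi_sn=0a1b2c3d-lun2
# tgtadm --lld iscsi --mode target --op show | grep -A2 'SCSI ID'

No idea yet whether Windows will accept the T10-style descriptor tgt derives from these; if someone knows better, please say so.
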
--

[ jR ]

  there is no path to greatness; greatness is the path