[ClusterLabs] questions about fence_scsi
Stefan Krueger
Shadow_7 at gmx.net
Fri Jun 15 03:14:48 EDT 2018
Hello,
As far as I understand, I can use fence_scsi on a two-node cluster: when one node fences the other, the fenced node should lose access to the shared devices, correct? I have a two-node cluster with shared JBODs and have configured fence_scsi, but I can still use/mount all of these devices on both nodes. Did I do something wrong?
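(For context on what to check: fence_scsi works with SCSI-3 persistent reservations — each cluster node registers a key on the shared devices, and fencing preempts the victim's key. As far as I know the reservation type it uses is "Write Exclusive, registrants only", so reads from an unregistered node are not necessarily blocked. Whether registrations and a reservation are actually in place can be inspected with sg_persist from the sg3_utils package; the device path below is just one of the devices from the config further down, as an illustration:)

```shell
# List the keys registered on one of the shared devices.
# With both nodes unfenced and registered, two keys should appear.
sg_persist --in --read-keys --device=/dev/disk/by-vdev/j3d03-hdd

# Show the active reservation. If no reservation is held at all,
# fencing cannot block anything on this device.
sg_persist --in --read-reservation --device=/dev/disk/by-vdev/j3d03-hdd
```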
My Config:
pcs resource
Resource Group: zfs-storage
vm_storage (ocf::heartbeat:ZFS): Started zfs-serv3
ha-ip (ocf::heartbeat:IPaddr2): Started zfs-serv3
root@zfs-serv4:~# pcs config
Cluster Name: zfs-vmstorage
Corosync Nodes:
zfs-serv3 zfs-serv4
Pacemaker Nodes:
zfs-serv3 zfs-serv4
Resources:
Group: zfs-storage
Resource: vm_storage (class=ocf provider=heartbeat type=ZFS)
Attributes: pool=vm_storage importargs="-d /dev/disk/by-vdev/"
Operations: monitor interval=5s timeout=30s (vm_storage-monitor-interval-5s)
start interval=0s timeout=90 (vm_storage-start-interval-0s)
stop interval=0s timeout=90 (vm_storage-stop-interval-0s)
Resource: ha-ip (class=ocf provider=heartbeat type=IPaddr2)
Attributes: ip=172.16.101.73 cidr_netmask=16
Operations: start interval=0s timeout=20s (ha-ip-start-interval-0s)
stop interval=0s timeout=20s (ha-ip-stop-interval-0s)
monitor interval=10s timeout=20s (ha-ip-monitor-interval-10s)
Stonith Devices:
Resource: fence-vm_storage (class=stonith type=fence_scsi)
Attributes: pcmk_monitor_action=metadata pcmk_host_list=172.16.101.74,172.16.101.75 devices=" /dev/disk/by-vdev/j3d03-hdd /dev/disk/by-vdev/j4d03-hdd /dev/disk/by-vdev/j3d04-hdd /dev/disk/by-vdev/j4d04-hdd /dev/disk/by-vdev/j3d05-hdd /dev/disk/by-vdev/j4d05-hdd /dev/disk/by-vdev/j3d06-hdd /dev/disk/by-vdev/j4d06-hdd /dev/disk/by-vdev/j3d07-hdd /dev/disk/by-vdev/j4d07-hdd /dev/disk/by-vdev/j3d08-hdd /dev/disk/by-vdev/j4d08-hdd /dev/disk/by-vdev/j3d09-hdd /dev/disk/by-vdev/j4d09-hdd /dev/disk/by-vdev/j3d10-hdd /dev/disk/by-vdev/j4d10-hdd /dev/disk/by-vdev/j3d11-hdd /dev/disk/by-vdev/j4d11-hdd /dev/disk/by-vdev/j3d12-hdd /dev/disk/by-vdev/j4d12-hdd /dev/disk/by-vdev/j3d13-hdd /dev/disk/by-vdev/j4d13-hdd /dev/disk/by-vdev/j3d14-hdd /dev/disk/by-vdev/j4d14-hdd /dev/disk/by-vdev/j3d15-hdd /dev/disk/by-vdev/j4d15-hdd /dev/disk/by-vdev/j3d16-hdd /dev/disk/by-vdev/j4d16-hdd /dev/disk/by-vdev/j3d17-hdd /dev/disk/by-vdev/j4d17-hdd /dev/disk/by-vdev/j3d18-hdd /dev/disk/by-vdev/j4d18-hdd /dev/disk/by-vdev/j3d19-hdd /dev/disk/by-vdev/j4d19-hdd log /dev/disk/by-vdev/j3d00-ssd /dev/disk/by-vdev/j4d00-ssd cache /dev/disk/by-vdev/j3d02-ssd"
Meta Attrs: provides=unfencing
Operations: monitor interval=60s (fence-vm_storage-monitor-interval-60s)
Fencing Levels:
Location Constraints:
Ordering Constraints:
Colocation Constraints:
Ticket Constraints:
Alerts:
No alerts defined
Resources Defaults:
resource-stickiness: 100
Operations Defaults:
No defaults set
Cluster Properties:
cluster-infrastructure: corosync
cluster-name: zfs-vmstorage
dc-version: 1.1.16-94ff4df
have-watchdog: false
last-lrm-refresh: 1528814481
no-quorum-policy: ignore
Quorum:
Options:
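(A sketch of how the fencing behavior itself could be verified, using the node names from the config above — run from the node that should survive; note this will actually fence the target node:)

```shell
# Manually trigger fencing of zfs-serv4; fence_scsi should preempt
# its key on all devices listed in the stonith resource.
pcs stonith fence zfs-serv4

# Afterwards only the surviving node's key should remain registered,
# and writes from the fenced node should be rejected.
sg_persist --in --read-keys --device=/dev/disk/by-vdev/j3d03-hdd
```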
Thanks for help!
Best regards
Stefan