[ClusterLabs] questions about fence_scsi
Stefan Krueger
Shadow_7 at gmx.net
Fri Jun 15 08:47:09 EDT 2018
Hello Andrei,
thanks for this hint. At the moment I am trying to solve this with a colocation constraint (but it doesn't work, see the mailing list).
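(A sketch of what such a colocation constraint could look like with pcs - this assumes the intent is to tie the fence_scsi resource to the zfs-storage group, which may not match what was actually tried; resource names are taken from the config quoted below:)

  # keep the fencing resource on the node that runs the storage group
  pcs constraint colocation add fence-vm_storage with zfs-storage INFINITY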
Best regards
> Sent: Friday, 15 June 2018 at 10:32
> From: "Andrei Borzenkov" <arvidjaar at gmail.com>
> To: "Cluster Labs - All topics related to open-source clustering welcomed" <users at clusterlabs.org>
> Subject: Re: [ClusterLabs] questions about fence_scsi
>
> On Fri, Jun 15, 2018 at 11:18 AM, Andrei Borzenkov <arvidjaar at gmail.com> wrote:
> > On Fri, Jun 15, 2018 at 10:14 AM, Stefan Krueger <Shadow_7 at gmx.net> wrote:
> >> Hello,
> >>
> >> so far as I understand I can use fence_scsi on a two-node cluster; if the fence is running on one node, the other node has no access to these devices, correct?
> >
> > If I parse this sentence correctly - no, that's not correct to the
> > best of my knowledge. All active nodes have access to the shared
> > resource - only when a node fails will it be fenced (i.e. access to
> > the devices revoked), and unfenced again (i.e. access granted) when
> > the node comes back.
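(fence_scsi is based on SCSI-3 persistent reservations, so the effect of fencing/unfencing can be checked directly on one of the shared devices with sg_persist from the sg3_utils package - a minimal sketch, using a device path from the config quoted below:)

  # list the registered keys - each unfenced node should have one
  sg_persist --in --read-keys --device=/dev/disk/by-vdev/j3d03-hdd
  # show the reservation itself (holder and type)
  sg_persist --in --read-reservation --device=/dev/disk/by-vdev/j3d03-hdd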
> >
>
> If you really want to allow only one node at a time to access the
> device, you should look at something like the sg_persist RA. See the
> example in the SLES documentation (scroll down to the end of the page):
>
> https://www.suse.com/documentation/sle-ha-12/book_sleha/data/sec_ha_storage_protect_fencing.html
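(The example there is written for the crm shell; the idea is a promotable sg_persist resource where only the master holds the SCSI reservation. A rough sketch - the device path and resource names below are placeholders, the exact parameters are on that page:)

  # sketch only: one sg_persist instance per device, promoted on one node
  primitive sg ocf:heartbeat:sg_persist \
      params devs="/dev/sdc" reservation_type=1 \
      op monitor interval=60 timeout=60
  ms ms-sg sg \
      meta master-max=1 notify=true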
>
>
> >> I have a 2-node cluster with shared JBODs and configured fence_scsi, but I can still use/mount all these devices on both nodes. Did I do something wrong?
> >> My Config:
> >>
> >> pcs resource
> >> Resource Group: zfs-storage
> >> vm_storage (ocf::heartbeat:ZFS): Started zfs-serv3
> >> ha-ip (ocf::heartbeat:IPaddr2): Started zfs-serv3
> >> root at zfs-serv4:~# pcs config
> >> Cluster Name: zfs-vmstorage
> >> Corosync Nodes:
> >> zfs-serv3 zfs-serv4
> >> Pacemaker Nodes:
> >> zfs-serv3 zfs-serv4
> >>
> >> Resources:
> >> Group: zfs-storage
> >> Resource: vm_storage (class=ocf provider=heartbeat type=ZFS)
> >> Attributes: pool=vm_storage importargs="-d /dev/disk/by-vdev/"
> >> Operations: monitor interval=5s timeout=30s (vm_storage-monitor-interval-5s)
> >> start interval=0s timeout=90 (vm_storage-start-interval-0s)
> >> stop interval=0s timeout=90 (vm_storage-stop-interval-0s)
> >> Resource: ha-ip (class=ocf provider=heartbeat type=IPaddr2)
> >> Attributes: ip=172.16.101.73 cidr_netmask=16
> >> Operations: start interval=0s timeout=20s (ha-ip-start-interval-0s)
> >> stop interval=0s timeout=20s (ha-ip-stop-interval-0s)
> >> monitor interval=10s timeout=20s (ha-ip-monitor-interval-10s)
> >>
> >> Stonith Devices:
> >> Resource: fence-vm_storage (class=stonith type=fence_scsi)
> >> Attributes: pcmk_monitor_action=metadata pcmk_host_list=172.16.101.74,172.16.101.75 devices=" /dev/disk/by-vdev/j3d03-hdd /dev/disk/by-vdev/j4d03-hdd /dev/disk/by-vdev/j3d04-hdd /dev/disk/by-vdev/j4d04-hdd /dev/disk/by-vdev/j3d05-hdd /dev/disk/by-vdev/j4d05-hdd /dev/disk/by-vdev/j3d06-hdd /dev/disk/by-vdev/j4d06-hdd /dev/disk/by-vdev/j3d07-hdd /dev/disk/by-vdev/j4d07-hdd /dev/disk/by-vdev/j3d08-hdd /dev/disk/by-vdev/j4d08-hdd /dev/disk/by-vdev/j3d09-hdd /dev/disk/by-vdev/j4d09-hdd /dev/disk/by-vdev/j3d10-hdd /dev/disk/by-vdev/j4d10-hdd /dev/disk/by-vdev/j3d11-hdd /dev/disk/by-vdev/j4d11-hdd /dev/disk/by-vdev/j3d12-hdd /dev/disk/by-vdev/j4d12-hdd /dev/disk/by-vdev/j3d13-hdd /dev/disk/by-vdev/j4d13-hdd /dev/disk/by-vdev/j3d14-hdd /dev/disk/by-vdev/j4d14-hdd /dev/disk/by-vdev/j3d15-hdd /dev/disk/by-vdev/j4d15-hdd /dev/disk/by-vdev/j3d16-hdd /dev/disk/by-vdev/j4d16-hdd /dev/disk/by-vdev/j3d17-hdd /dev/disk/by-vdev/j4d17-hdd /dev/disk/by-vdev/j3d18-hdd /dev/disk/by-vdev/j4d18-hdd /dev/disk/by-vdev/j3d19-hdd /dev/disk/by-vdev/j4d19-hdd log /dev/disk/by-vdev/j3d00-ssd /dev/disk/by-vdev/j4d00-ssd cache /dev/disk/by-vdev/j3d02-ssd"
> >> Meta Attrs: provides=unfencing
> >> Operations: monitor interval=60s (fence-vm_storage-monitor-interval-60s)
> >> Fencing Levels:
> >>
> >> Location Constraints:
> >> Ordering Constraints:
> >> Colocation Constraints:
> >> Ticket Constraints:
> >>
> >> Alerts:
> >> No alerts defined
> >>
> >> Resources Defaults:
> >> resource-stickiness: 100
> >> Operations Defaults:
> >> No defaults set
> >>
> >> Cluster Properties:
> >> cluster-infrastructure: corosync
> >> cluster-name: zfs-vmstorage
> >> dc-version: 1.1.16-94ff4df
> >> have-watchdog: false
> >> last-lrm-refresh: 1528814481
> >> no-quorum-policy: ignore
> >>
> >> Quorum:
> >> Options:
> >>
> >>
> >> Thanks for help!
> >> Best regards
> >> Stefan
> _______________________________________________
> Users mailing list: Users at clusterlabs.org
> https://lists.clusterlabs.org/mailman/listinfo/users
>
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: http://bugs.clusterlabs.org
>