[ClusterLabs] resource stopped unexpectedly

Stefan Krueger Shadow_7 at gmx.net
Wed Jun 13 02:50:14 EDT 2018


Hello,

I have a problem with my cluster. When I use 'pcs cluster standby serv3' it moves all resources to serv4, and that works fine. But when I restart a node, the resource ha-ip becomes stopped and I don't know why. Can somebody give me a hint why this happens and how to resolve it?
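
To make the steps concrete, this is roughly what I do (the exact node I restart and the timing vary between tries; serv3/serv4 above are zfs-serv3/zfs-serv4 in the output below):

 pcs cluster standby zfs-serv3     # failover test: everything moves to zfs-serv4, works fine
 pcs cluster unstandby zfs-serv3   # bring the node back into the cluster
 # ... later, reboot one of the nodes from the OS ...
 pcs status                        # after the reboot, ha-ip is shown as Stopped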

By the way, I followed this guide: https://github.com/ewwhite/zfs-ha/wiki
The log file is here (I guess it is too long for the mailing list): https://paste.debian.net/hidden/2e001867/

Thanks for your help!

best regards
Stefan


pcs status
Cluster name: zfs-vmstorage
Stack: corosync
Current DC: zfs-serv3 (version 1.1.16-94ff4df) - partition with quorum
Last updated: Tue Jun 12 16:56:45 2018
Last change: Tue Jun 12 16:44:52 2018 by hacluster via crm_attribute on zfs-serv3

2 nodes configured
3 resources configured

Online: [ zfs-serv3 zfs-serv4 ]

Full list of resources:

 fence-vm_storage       (stonith:fence_scsi):   Started zfs-serv3
 Resource Group: zfs-storage
     vm_storage (ocf::heartbeat:ZFS):   Started zfs-serv3
     ha-ip      (ocf::heartbeat:IPaddr2):       Stopped

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
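
If more detail is useful, I can also post a one-shot status including inactive resources and fail counts, e.g.:

 crm_mon -1rf   # one shot, show inactive resources and fail counts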



pcs config
Cluster Name: zfs-vmstorage
Corosync Nodes:
 zfs-serv3 zfs-serv4
Pacemaker Nodes:
 zfs-serv3 zfs-serv4

Resources:
 Group: zfs-storage
  Resource: vm_storage (class=ocf provider=heartbeat type=ZFS)
   Attributes: pool=vm_storage importargs="-d /dev/disk/by-vdev/"
   Operations: monitor interval=5s timeout=30s (vm_storage-monitor-interval-5s)
               start interval=0s timeout=90 (vm_storage-start-interval-0s)
               stop interval=0s timeout=90 (vm_storage-stop-interval-0s)
  Resource: ha-ip (class=ocf provider=heartbeat type=IPaddr2)
   Attributes: ip=172.16.101.73 cidr_netmask=16
   Operations: start interval=0s timeout=20s (ha-ip-start-interval-0s)
               stop interval=0s timeout=20s (ha-ip-stop-interval-0s)
               monitor interval=10s timeout=20s (ha-ip-monitor-interval-10s)

Stonith Devices:
 Resource: fence-vm_storage (class=stonith type=fence_scsi)
  Attributes: pcmk_monitor_action=metadata pcmk_host_list=172.16.101.74,172.16.101.75 devices=" /dev/disk/by-vdev/j3d03-hdd /dev/disk/by-vdev/j4d03-hdd /dev/disk/by-vdev/j3d04-hdd /dev/disk/by-vdev/j4d04-hdd /dev/disk/by-vdev/j3d05-hdd /dev/disk/by-vdev/j4d05-hdd /dev/disk/by-vdev/j3d06-hdd /dev/disk/by-vdev/j4d06-hdd /dev/disk/by-vdev/j3d07-hdd /dev/disk/by-vdev/j4d07-hdd /dev/disk/by-vdev/j3d08-hdd /dev/disk/by-vdev/j4d08-hdd /dev/disk/by-vdev/j3d09-hdd /dev/disk/by-vdev/j4d09-hdd /dev/disk/by-vdev/j3d10-hdd /dev/disk/by-vdev/j4d10-hdd /dev/disk/by-vdev/j3d11-hdd /dev/disk/by-vdev/j4d11-hdd /dev/disk/by-vdev/j3d12-hdd /dev/disk/by-vdev/j4d12-hdd /dev/disk/by-vdev/j3d13-hdd /dev/disk/by-vdev/j4d13-hdd /dev/disk/by-vdev/j3d14-hdd /dev/disk/by-vdev/j4d14-hdd /dev/disk/by-vdev/j3d15-hdd /dev/disk/by-vdev/j4d15-hdd /dev/disk/by-vdev/j3d16-hdd /dev/disk/by-vdev/j4d16-hdd /dev/disk/by-vdev/j3d17-hdd /dev/disk/by-vdev/j4d17-hdd /dev/disk/by-vdev/j3d18-hdd /dev/disk/by-vdev/j4d18-hdd /dev/disk/by-vdev/j3d19-hdd /dev/disk/by-vdev/j4d19-hdd log /dev/disk/by-vdev/j3d00-ssd /dev/disk/by-vdev/j4d00-ssd cache /dev/disk/by-vdev/j3d02-ssd"
  Meta Attrs: provides=unfencing 
  Operations: monitor interval=60s (fence-vm_storage-monitor-interval-60s)
Fencing Levels:

Location Constraints:
Ordering Constraints:
Colocation Constraints:
Ticket Constraints:

Alerts:
 No alerts defined

Resources Defaults:
 resource-stickiness: 100
Operations Defaults:
 No defaults set

Cluster Properties:
 cluster-infrastructure: corosync
 cluster-name: zfs-vmstorage
 dc-version: 1.1.16-94ff4df
 have-watchdog: false
 last-lrm-refresh: 1528814481
 no-quorum-policy: ignore

Quorum:
  Options:
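
One thing I'm not sure about: the Ordering and Colocation sections above are empty, but as far as I understand it the group already implies ordering and colocation between its members, roughly equivalent to these explicit constraints (shown only for illustration, I have not configured them):

 pcs constraint order vm_storage then ha-ip
 pcs constraint colocation add ha-ip with vm_storage INFINITY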


