[ClusterLabs] Current config

Digimer lists at alteeve.ca
Tue Aug 11 15:09:32 EDT 2015


On 11/08/15 02:01 PM, Streeter, Michelle N wrote:
> Here is my current cluster config in a two node virtual using NFS.   Is
> there any other resources I need to add and why?

Without knowing the details of your site, any advice is necessarily limited
and fairly general.

> Cluster Name: CNAS
> 
> Corosync Nodes:
> 
> nas01 nas02
> 
> Pacemaker Nodes:
> 
> nas01 nas02

Do these match the host names returned by 'uname -n' (or at least the
short form of the FQDN)? It's not strictly required, but it is
recommended.
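
A quick way to check on each node (a rough sketch; the exact output will
of course vary):

  uname -n       # the OS host name
  crm_node -n    # the name the cluster stack is using for this node

Both should line up with the 'nas01' / 'nas02' names above.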

> Resources:
> 
> Group: nfsgroup
> 
>   Resource: nfsshare (class=ocf provider=heartbeat type=Filesystem)
> 
>    Attributes: device=/dev/sdb1 directory=/data fstype=ext4
> 
>    Operations: start interval=0s timeout=60 (nfsshare-start-timeout-60)
> 
>                stop interval=0s timeout=60 (nfsshare-stop-timeout-60)
> 
>                monitor interval=20 timeout=40 (nfsshare-monitor-interval-20)
> 
>   Resource: nfsServer (class=ocf provider=heartbeat type=nfsserver)
> 
>    Attributes: nfs_shared_infodir=/data/nfsinfo nfs_no_notify=true
> 
>    Operations: start interval=0s timeout=40 (nfsServer-start-timeout-40)
> 
>                stop interval=0s timeout=20s (nfsServer-stop-timeout-20s)
> 
>                monitor interval=10 timeout=20s (nfsServer-monitor-interval-10)
> 
>   Resource: NAS (class=ocf provider=heartbeat type=IPaddr2)
> 
>    Attributes: ip=192.168.56.110 cidr_netmask=24
> 
>    Operations: start interval=0s timeout=20s (NAS-start-timeout-20s)
> 
>                stop interval=0s timeout=20s (NAS-stop-timeout-20s)
> 
>                monitor interval=10s timeout=20s (NAS-monitor-interval-10s)
> 
>  
> 
> Stonith Devices:
> 
> Resource: nasapc (class=stonith type=fence_apc_snmp)
> 
>   Attributes: ipaddr=1.1.1.1 pcmk_host_map=em74nas01:1;em74nas02:2 pcmk_host_check=static-list pcmk_host_list=nas01,nas02 login=apc passwd=apc
> 
>   Operations: monitor interval=60s (nasapc-monitor-interval-60s)

Have you tested this? It looks like you have a single PDU feeding both
nodes. If so, be aware that the PDU itself is a single point of failure.
Also make sure the power cables are never moved to different outlets, or
you will get a false-positive fence (the wrong node being powered off),
which could be disastrous. Do the nodes not have IPMI that you could use
as the primary fence method, with the PDU as a backup?
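
If they do have IPMI, a rough sketch of what that could look like with
pcs, using IPMI as level 1 and the existing PDU as level 2 (the IPMI
addresses and credentials below are placeholders, and the exact
fence_ipmilan parameter names depend on your fence-agents version):

  pcs stonith create fence_nas01_ipmi fence_ipmilan \
      ipaddr=10.0.0.1 login=admin passwd=secret lanplus=1 \
      pcmk_host_list=nas01 op monitor interval=60s
  pcs stonith create fence_nas02_ipmi fence_ipmilan \
      ipaddr=10.0.0.2 login=admin passwd=secret lanplus=1 \
      pcmk_host_list=nas02 op monitor interval=60s

  # try IPMI first; fall back to the PDU only if IPMI fails
  pcs stonith level add 1 nas01 fence_nas01_ipmi
  pcs stonith level add 1 nas02 fence_nas02_ipmi
  pcs stonith level add 2 nas01 nasapc
  pcs stonith level add 2 nas02 nasapc

Those levels are what would populate the currently-empty 'Fencing Levels'
section of your config.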

> Fencing Levels:
> 
>  
> 
> Location Constraints:
> 
> Ordering Constraints:
> 
> Colocation Constraints:
> 
>  
> 
> Cluster Properties:
> 
> cluster-infrastructure: classic openais (with plugin)
> 
> dc-version: 1.1.9-2.2-2db99f1

You should really upgrade to 1.1.10 or newer.

> expected-quorum-votes: 2
> 
> no-quorum-policy: ignore
> 
> stonith-enabled: false

Very, very bad. This means the PDU fencing, though defined, will never
be used. You must enable stonith (and then test it by crashing each node
in turn!) in order to prevent split-brains.
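
Roughly, that would be something like (the crash test assumes you can
afford to reboot the node):

  pcs property set stonith-enabled=true

  # ask the cluster to fence a node and confirm it really power-cycles
  pcs stonith fence nas02

  # then simulate a hard crash on one node and watch the survivor fence it
  echo c > /proc/sysrq-trigger

If the surviving node fences its peer and recovers the nfsgroup
resources, you know recovery will actually work when something fails for
real.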

> Michelle Streeter



-- 
Digimer
Papers and Projects: https://alteeve.ca/w/
What if the cure for cancer is trapped in the mind of a person without
access to education?



