[ClusterLabs] Using dedicated interface for the Cluster
Sergio Pelissari
sonared at gmail.com
Sun Jun 11 17:47:19 EDT 2017
Hello, I'm trying to set up the cluster to use a dedicated interface for cluster traffic. The interface chosen was eth1, but after configuring the cluster the resources didn't start. When I move the cluster configuration back to eth0, everything works normally. Does anyone have an idea what might be wrong?
# Here is my configuration
totem {
    version: 2
    cluster_name: HA
    token: 5000
    token_retransmits_before_loss_const: 20
    join: 1000
    consensus: 7500
    max_messages: 20
    secauth: off
    transport: udpu

    interface {
        member {
            memberaddr: cluster00
        }
        member {
            memberaddr: cluster01
        }
        ringnumber: 0
        bindnetaddr: 1.1.1.1
        mcastport: 5405
    }
}
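
(For comparison, this is roughly the shape I would expect the interface section to take when ring0 is bound to the eth1 network; 1.1.1.0 below is only a placeholder for whatever subnet eth1 actually uses, and my understanding is that with transport udpu plus a nodelist the member entries may be redundant:)

    interface {
        ringnumber: 0
        bindnetaddr: 1.1.1.0
        mcastport: 5405
    }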
logging {
    fileline: off
    to_stderr: yes
    to_logfile: yes
    logfile: /var/log/corosync/corosync.log
    to_syslog: yes
    syslog_facility: daemon
    syslog_priority: info
    debug: off
}
quorum {
    provider: corosync_votequorum
    expected_votes: 2
    two_node: 1
}
nodelist {
    node {
        ring0_addr: cluster00
        nodeid: 1
    }
    node {
        ring0_addr: cluster01
        nodeid: 2
    }
}
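
(One thing I still have to double-check on my side: the ring0_addr hostnames need to resolve to the eth1 addresses on every node, otherwise corosync keeps talking over eth0. The /etc/hosts lines below are only an illustration with made-up eth1 addresses:)

    # /etc/hosts on both nodes (example addresses on the eth1 subnet)
    192.168.100.10   cluster00
    192.168.100.11   cluster01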
# My resources
 Group: HA
  Resource: VIP (class=ocf provider=heartbeat type=IPaddr2)
   Attributes: ip=1.1.1.15 cidr_netmask=24
   Operations: monitor interval=10s timeout=20s (VIP-monitor-interval-10s)
 Clone: Glusterd-Service-Clone
  Meta Attrs: clone-max=2 interleave=true
  Resource: Glusterd-Service (class=systemd type=glusterfs-server)
   Operations: stop interval=0s timeout=60s (Glusterd-Service-stop-interval-0s)
               monitor interval=60s timeout=10s (Glusterd-Service-monitor-interval-60s)
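
(In case it helps to reproduce, resources with this shape can be created roughly as follows; this is a sketch rather than the exact commands I ran:)

    pcs resource create VIP ocf:heartbeat:IPaddr2 ip=1.1.1.15 cidr_netmask=24 \
        op monitor interval=10s timeout=20s --group HA
    pcs resource create Glusterd-Service systemd:glusterfs-server \
        op monitor interval=60s timeout=10s op stop interval=0s timeout=60s
    pcs resource clone Glusterd-Service clone-max=2 interleave=true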
# My constraints
Location Constraints:
  Resource: fence_impmi_cluster00
    Disabled on: cluster00 (score:-INFINITY) (id:Avoid_fencing_cluster00)
  Resource: fence_impmi_cluster01
    Disabled on: cluster01 (score:-INFINITY) (id:Avoid_fencing_cluster01)
Ordering Constraints:
  start VIP then start Glusterd-Service-Clone (kind:Mandatory) (id:VIP_BEFORE_Glusterd-Service-Clone)
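
(For reference, constraints like the ones above can be created with pcs commands roughly like these; the ids come out slightly different when pcs generates them automatically:)

    pcs constraint location fence_impmi_cluster00 avoids cluster00
    pcs constraint location fence_impmi_cluster01 avoids cluster01
    pcs constraint order start VIP then start Glusterd-Service-Clone kind=Mandatory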
# pcs resource
shell# pcs resource
 Resource Group: HA
     VIP        (ocf::heartbeat:IPaddr2):       Stopped
 Clone Set: Glusterd-Service-Clone [Glusterd-Service]
     Stopped: [ cluster00 cluster01 kfc6666red0 ]
# pcs status
root@lab0:~$ pcs status
Cluster name: HA
Stack: corosync
Current DC: cluster01 (version 1.1.16-94ff4df) - partition with quorum
Last updated: Sun Jun 11 17:42:45 2017
Last change: Sat Jun 10 15:10:26 2017 by root via cibadmin on cluster01

3 nodes configured
5 resources configured

Node lab0: UNCLEAN (offline)
Online: [ cluster00 cluster01 ]

Full list of resources:

 fence_impmi_cluster00  (stonith:external/ipmi):        Stopped
 fence_impmi_cluster01  (stonith:external/ipmi):        Stopped
 Resource Group: HA
     VIP        (ocf::heartbeat:IPaddr2):       Stopped
 Clone Set: Glusterd-Service-Clone [Glusterd-Service]
     Stopped: [ cluster00 cluster01 lab0 ]

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: failed/enabled
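
(To narrow this down I plan to check which address corosync actually binds to on each node and what the membership looks like, with commands along these lines; I haven't included their output here:)

    corosync-cfgtool -s                # ring status and bound address per ring
    corosync-cmapctl | grep members    # membership as corosync sees it
    pcs status corosync                # membership as pcs sees it
    ip -o -4 addr show dev eth1        # eth1 address/subnet on each node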