<div dir="ltr"><div>Hello, I'm trying to set up the cluster to use one dedicated interface. The interface I chose was eth1, but after reconfiguring the cluster the resources didn't start; when I move the cluster configuration back to eth0 it works normally. Does anyone have any idea what's wrong?</div>
<div><br></div>
<div># Here is my configuration</div>
<pre>
totem {
    version: 2
    cluster_name: HA
    token: 5000
    token_retransmits_before_loss_const: 20
    join: 1000
    consensus: 7500
    max_messages: 20
    secauth: off
    transport: udpu
    interface {
        member {
            memberaddr: cluster00
        }
        member {
            memberaddr: cluster01
        }
        ringnumber: 0
        bindnetaddr: 1.1.1.1
        mcastport: 5405
    }
}

logging {
    fileline: off
    to_stderr: yes
    to_logfile: yes
    logfile: /var/log/corosync/corosync.log
    to_syslog: yes
    syslog_facility: daemon
    syslog_priority: info
    debug: off
}

quorum {
    provider: corosync_votequorum
    expected_votes: 2
    two_node: 1
}

nodelist {
    node {
        ring0_addr: cluster00
        nodeid: 1
    }
    node {
        ring0_addr: cluster01
        nodeid: 2
    }
}
</pre>
<div># My resources</div>
<pre>
Group: HA
  Resource: VIP (class=ocf provider=heartbeat type=IPaddr2)
   Attributes: ip=1.1.1.15 cidr_netmask=24
   Operations: monitor interval=10s timeout=20s (VIP-monitor-interval-10s)
Clone: Glusterd-Service-Clone
  Meta Attrs: clone-max=2 interleave=true
  Resource: Glusterd-Service (class=systemd type=glusterfs-server)
   Operations: stop interval=0s timeout=60s (Glusterd-Service-stop-interval-0s)
               monitor interval=60s timeout=10s (Glusterd-Service-monitor-interval-60s)
</pre>
<div># My constraints</div>
<pre>
Resource: fence_impmi_cluster00
  Disabled on: cluster00 (score:-INFINITY) (id:Avoid_fencing_cluster00)
Resource: fence_impmi_cluster01
  Disabled on: cluster01 (score:-INFINITY) (id:Avoid_fencing_cluster01)
Ordering Constraints:
  start VIP then start Glusterd-Service-Clone (kind:Mandatory) (id:VIP_BEFORE_Glusterd-Service-Clone)
</pre>
<div># pcs resource</div>
<pre>
shell# pcs resource
 Resource Group: HA
     VIP        (ocf::heartbeat:IPaddr2):       Stopped
 Clone Set: Glusterd-Service-Clone [Glusterd-Service]
     Stopped: [ cluster00 cluster01 kfc6666red0 ]
</pre>
<div># pcs status</div>
<pre>
root@lab0:~$ pcs status
Cluster name: HA
Stack: corosync
Current DC: cluster01 (version 1.1.16-94ff4df) - partition with quorum
Last updated: Sun Jun 11 17:42:45 2017
Last change: Sat Jun 10 15:10:26 2017 by root via cibadmin on cluster01

3 nodes configured
5 resources configured

Node lab0: UNCLEAN (offline)
Online: [ cluster00 cluster01 ]

Full list of resources:

 fence_impmi_cluster00  (stonith:external/ipmi):        Stopped
 fence_impmi_cluster01  (stonith:external/ipmi):        Stopped
 Resource Group: HA
     VIP        (ocf::heartbeat:IPaddr2):       Stopped
 Clone Set: Glusterd-Service-Clone [Glusterd-Service]
     Stopped: [ cluster00 cluster01 lab0 ]

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: failed/enabled
</pre>
</div>
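<div dir="ltr"><div># For context, the eth1 attempt looked roughly like the sketch below. The 10.0.1.0/24 subnet and the *-eth1 hostnames are placeholders I made up for this example, not the real values; the idea was that bindnetaddr points at the eth1 network address and the ring0_addr names resolve to the eth1 addresses on each node.</div>
<pre>
totem {
    # ... other totem options unchanged ...
    transport: udpu
    interface {
        ringnumber: 0
        # placeholder: the *network* address of the eth1 subnet, not a host address
        bindnetaddr: 10.0.1.0
        mcastport: 5405
    }
}

nodelist {
    node {
        # placeholder name; must resolve to this node's eth1 address
        ring0_addr: cluster00-eth1
        nodeid: 1
    }
    node {
        ring0_addr: cluster01-eth1
        nodeid: 2
    }
}
</pre>
</div>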