Hello,

Following the SUSE Linux Enterprise High Availability Extension 12 SP1 documentation,
I got a Pacemaker cluster up and running with the Hawk GUI.

I use an LV as the DRBD backing store and put a PV/VG/LVs on the drbd0 block device, as
described in the DRBD 8.4 user guide, chapter 10, "Using LVM with DRBD".
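
For reference, the nested-PV setup depends on the LVM filter from that chapter, so that LVM
sees the PV only through /dev/drbd0 and not through the backing LV; mine is along these lines
(an excerpt; the exact patterns depend on the device layout):

    # /etc/lvm/lvm.conf (devices section): accept DRBD devices, reject the rest
    filter = [ "a|/dev/drbd.*|", "r|.*|" ]
    # don't cache device state, since the DRBD device comes and goes
    write_cache_state = 0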

DRBD itself is configured, tested and working: I can manually make one node the primary
and access the DRBD block device, or switch roles and access it from the other node.

I'm having problems putting DRBD under Pacemaker control. "crm status" shows:

<div style="padding-left:36pt;"> crm(live)# status</div>
<div style="padding-left:36pt;"> Last updated: Fri Aug 5 04:59:57 2016 Last change: Fri Aug 5 04:59:45 2016 by hacluster via crmd on ars2</div>
<div style="padding-left:36pt;"> Stack: corosync</div>
<div style="padding-left:36pt;"> Current DC: ars1 (version 1.1.13-10.4-6f22ad7) - partition with quorum</div>
<div style="padding-left:36pt;"> 2 nodes and 5 resources configured</div>
<div style="padding-left:36pt;"> </div>
<div style="padding-left:36pt;"> Online: [ ars1 ars2 ]</div>
<div style="padding-left:36pt;"> </div>
<div style="padding-left:36pt;"> Full list of resources:</div>
<div style="padding-left:36pt;"> </div>
<div style="padding-left:36pt;"> Resource Group: cluster-mgmt</div>
<div style="padding-left:36pt;"> virtual-ip-mgmt (ocf::heartbeat:IPaddr2): Started ars1</div>
<div style="padding-left:36pt;"> mgmt (systemd:hawk): Started ars1</div>
<div style="padding-left:36pt;"> Resource Group: ars-services-1</div>
<div style="padding-left:36pt;"> virtual-ip (ocf::heartbeat:IPaddr2): Started ars2</div>
<div style="padding-left:36pt;"> myapache (systemd:apache2): Started ars2</div>
<div style="padding-left:36pt;"> drbd-data (ocf::linbit:drbd): FAILED (unmanaged)[ ars1 ars2 ]</div>
<div style="padding-left:36pt;"> </div>
<div style="padding-left:36pt;"> Failed Actions:</div>
<div style="padding-left:36pt;"> * drbd-data_stop_0 on ars1 'not configured' (6): call=40, status=complete, exitreason='none',</div>
<div style="padding-left:36pt;"> last-rc-change='Fri Aug 5 04:59:46 2016', queued=0ms, exec=29ms</div>
<div style="padding-left:36pt;"> * drbd-data_stop_0 on ars2 'not configured' (6): call=31, status=complete, exitreason='none',</div>
<div style="padding-left:36pt;"> last-rc-change='Fri Aug 5 04:59:43 2016', queued=0ms, exec=23ms</div>

These are the relevant errors from the Pacemaker log file:

<div style="padding-left:36pt;">Aug 05 06:46:39 [2526] ars1 pengine: warning: unpack_rsc_op_failure: Processing failed op stop for dbrd-data on ars1: not configured (6)</div>
<div style="padding-left:36pt;">Aug 05 06:46:39 [2526] ars1 pengine: error: unpack_rsc_op: Preventing dbrd-data from re-starting anywhere: operation stop failed 'not configured' (6)</div>
<div style="padding-left:36pt;">Aug 05 06:46:39 [2526] ars1 pengine: warning: unpack_rsc_op_failure: Processing failed op stop for dbrd-data on ars1: not configured (6)</div>
<div style="padding-left:36pt;">Aug 05 06:46:39 [2526] ars1 pengine: error: unpack_rsc_op: Preventing dbrd-data from re-starting anywhere: operation stop failed 'not configured' (6)</div>
<div style="padding-left:36pt;">Aug 05 06:46:39 [2526] ars1 pengine: warning: process_rsc_state: Cluster configured not to stop active orphans. dbrd-data must be stopped manually on ars1</div>
<div style="padding-left:36pt;">Aug 05 06:46:39 [2526] ars1 pengine: info: native_add_running: resource dbrd-data isnt managed</div>
<div style="padding-left:36pt;">Aug 05 06:46:39 [2526] ars1 pengine: warning: unpack_rsc_op_failure: Processing failed op stop for drbd-data on ars1: not configured (6)</div>
<div style="padding-left:36pt;">Aug 05 06:46:39 [2526] ars1 pengine: error: unpack_rsc_op: Preventing drbd-data from re-starting anywhere: operation stop failed 'not configured' (6)</div>
<div style="padding-left:36pt;">Aug 05 06:46:39 [2526] ars1 pengine: warning: unpack_rsc_op_failure: Processing failed op stop for drbd-data on ars1: not configured (6)</div>
<div style="padding-left:36pt;">Aug 05 06:46:39 [2526] ars1 pengine: error: unpack_rsc_op: Preventing drbd-data from re-starting anywhere: operation stop failed 'not configured' (6)</div>
<div style="padding-left:36pt;">Aug 05 06:46:39 [2526] ars1 pengine: info: native_add_running: resource drbd-data isnt managed</div>

It looks as though Pacemaker doesn't know how to stop/start DRBD on each node, but I'm not
sure what commands or scripts I might have to tell it about.
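
From the DRBD user guide's Pacemaker chapter I gather that the ocf:linbit:drbd agent is meant
to run as a master/slave (multi-state) resource rather than as a plain primitive, so I suspect
the configuration should look roughly like the sketch below (untested; the ms-drbd-data name
is my own invention, and I'm assuming the DRBD resource is called drbd0, as in my drbdadm
command):

    # inside "crm configure"
    primitive drbd-data ocf:linbit:drbd \
        params drbd_resource="drbd0" \
        op monitor interval="29s" role="Master" \
        op monitor interval="31s" role="Slave"
    ms ms-drbd-data drbd-data \
        meta master-max="1" master-node-max="1" \
        clone-max="2" clone-node-max="1" notify="true"

Is that the missing piece, or is there more to it?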

Manually, I would start drbd.service on both nodes, make one node the primary, rescan the PVs,
activate the VG, and finally mount the file systems from the now-available LVs:

<div style="padding-left:36pt;">systemctl start drbd.service</div>
<div style="padding-left:36pt;">drbdadm primary drbd0</div>
<div style="padding-left:36pt;">pvscan –cache </div>
<div style="padding-left:36pt;">vgchange -a y replicated</div>

Can anyone tell me how to convince Pacemaker to properly control DRBD on both the master
and the slave node?

Thanks,
Darren