Can't I set up a primary/primary state with Pacemaker?
I'm running a single disk now; next I want to put it in primary/primary state.

With drbdadm the disk works very well. The drbd+ocfs2 stack is already
working, but now I want Pacemaker to start the drbd and ocfs2/o2cb
daemons, set the drbd disks primary/primary, mount the ocfs2 partition
and then start the virtual machine...
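
Roughly, the stack I want Pacemaker to drive, written as crm
constraints (just my sketch; the o2cb/filesystem/vm names are made up):

order o-drbd-before-o2cb inf: ms-drbd11:promote o2cb-clone:start
order o-o2cb-before-fs inf: o2cb-clone ocfs2-fs-clone
order o-fs-before-vm inf: ocfs2-fs-clone vm-xen
colocation c-fs-on-drbd-master inf: ocfs2-fs-clone ms-drbd11:Master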

The drbd, ocfs2 and vm parts are OK; only the Pacemaker piece is
missing for me to finish my graduation project... :(

On Fri, May 15, 2009 at 12:01 PM, Dejan Muhamedagic <dejanmm@fastmail.fm> wrote:
Hi,

On Fri, May 15, 2009 at 08:54:31AM -0300, Rafael Emerick wrote:
> Hi, Dejan
>
> The first problem is solved, but now I have another.
> When I try to start the ms-drbd11 resource I don't get any error, but in
> crm_mon I see:
>
> ============
> Last updated: Fri May 15 08:44:11 2009
> Current DC: node1 (57e0232d-5b78-4a1a-976e-e5335ba8266d) - partition with quorum
> Version: 1.0.3-b133b3f19797c00f9189f4b66b513963f9d25db9
> 2 Nodes configured, unknown expected votes
> 2 Resources configured.
> ============
>
> Online: [ node1 node2 ]
>
> Clone Set: drbdinit
>     Started: [ node1 node2 ]
>
> Failed actions:
>     drbd11:0_start_0 (node=node1, call=9, rc=1, status=complete): unknown error
>     drbd11_start_0 (node=node1, call=17, rc=1, status=complete): unknown error
>     drbd11:1_start_0 (node=node2, call=9, rc=1, status=complete): unknown error
>     drbd11_start_0 (node=node2, call=16, rc=1, status=complete): unknown error
>
> So, in the messages log file, I get:
>
>
> May 15 08:25:03 node1 pengine: [4749]: WARN: unpack_resources: No STONITH
> resources have been defined
> May 15 08:25:03 node1 pengine: [4749]: info: determine_online_status: Node
> node1 is online
> May 15 08:25:03 node1 pengine: [4749]: info: unpack_rsc_op: drbd11:0_start_0
> on node1 returned 1 (unknown error) instead of the expected value: 0 (ok)
> May 15 08:25:03 node1 pengine: [4749]: WARN: unpack_rsc_op: Processing
> failed op drbd11:0_start_0 on node1: unknown error
> May 15 08:25:03 node1 pengine: [4749]: WARN: process_orphan_resource:
> Nothing known about resource drbd11 running on node1
> May 15 08:25:03 node1 pengine: [4749]: info: log_data_element:
> create_fake_resource: Orphan resource <primitive id="drbd11" type="drbd"
> class="ocf" provider="heartbeat" />
> May 15 08:25:03 node1 pengine: [4749]: info: process_orphan_resource: Making
> sure orphan drbd11 is stopped
> May 15 08:25:03 node1 pengine: [4749]: info: unpack_rsc_op: drbd11_start_0
> on node1 returned 1 (unknown error) instead of the expected value: 0 (ok)
> May 15 08:25:03 node1 pengine: [4749]: WARN: unpack_rsc_op: Processing
> failed op drbd11_start_0 on node1: unknown error
> May 15 08:25:03 node1 pengine: [4749]: info: determine_online_status: Node
> node2 is online
> May 15 08:25:03 node1 pengine: [4749]: info: find_clone: Internally renamed
> drbdi:0 on node2 to drbdi:1
> May 15 08:25:03 node1 pengine: [4749]: info: unpack_rsc_op: drbd11:1_start_0
> on node2 returned 1 (unknown error) instead of the expected value: 0 (ok)
> May 15 08:25:03 node1 pengine: [4749]: WARN: unpack_rsc_op: Processing
> failed op drbd11:1_start_0 on node2: unknown error
> May 15 08:25:03 node1 pengine: [4749]: info: unpack_rsc_op: drbd11_start_0
> on node2 returned 1 (unknown error) instead of the expected value: 0 (ok)
> May 15 08:25:03 node1 pengine: [4749]: WARN: unpack_rsc_op: Processing
> failed op drbd11_start_0 on node2: unknown error
> May 15 08:25:03 node1 pengine: [4749]: notice: clone_print: Clone Set:
> drbdinit
> May 15 08:25:03 node1 pengine: [4749]: notice: print_list: Started: [
> node1 node2 ]
> May 15 08:25:03 node1 pengine: [4749]: notice: clone_print: Master/Slave
> Set: ms-drbd11
> May 15 08:25:03 node1 pengine: [4749]: notice: print_list: Stopped: [
> drbd11:0 drbd11:1 ]
> May 15 08:25:03 node1 pengine: [4749]: info: get_failcount: ms-drbd11 has
> failed 1000000 times on node1
> May 15 08:25:03 node1 pengine: [4749]: WARN: common_apply_stickiness:
> Forcing ms-drbd11 away from node1 after 1000000 failures (max=1000000)
> May 15 08:25:03 node1 pengine: [4749]: info: get_failcount: drbd11 has
> failed 1000000 times on node1
> May 15 08:25:03 node1 pengine: [4749]: WARN: common_apply_stickiness:
> Forcing drbd11 away from node1 after 1000000 failures (max=1000000)
> May 15 08:25:03 node1 pengine: [4749]: info: get_failcount: ms-drbd11 has
> failed 1000000 times on node2
> May 15 08:25:03 node1 pengine: [4749]: WARN: common_apply_stickiness:
> Forcing ms-drbd11 away from node2 after 1000000 failures (max=1000000)
> May 15 08:25:03 node1 pengine: [4749]: info: get_failcount: drbd11 has
> failed 1000000 times on node2
> May 15 08:25:03 node1 pengine: [4749]: WARN: common_apply_stickiness:
> Forcing drbd11 away from node2 after 1000000 failures (max=1000000)
> May 15 08:25:03 node1 pengine: [4749]: WARN: native_color: Resource drbd11:0
> cannot run anywhere
> May 15 08:25:03 node1 pengine: [4749]: WARN: native_color: Resource drbd11:1
> cannot run anywhere
> May 15 08:25:03 node1 pengine: [4749]: info: master_color: ms-drbd11:
> Promoted 0 instances of a possible 1 to master
> May 15 08:25:03 node1 pengine: [4749]: notice: LogActions: Leave resource
> drbdi:0 (Started node1)
> May 15 08:25:03 node1 pengine: [4749]: notice: LogActions: Leave resource
> drbdi:1 (Started node2)
> May 15 08:25:03 node1 pengine: [4749]: notice: LogActions: Leave resource
> drbd11:0 (Stopped)
> May 15 08:25:03 node1 pengine: [4749]: notice: LogActions: Leave resource
> drbd11:1 (Stopped)
>
>
> I had this problem with Heartbeat v2, and now I'm using Pacemaker with the
> same error.
> My idea is that the crm manages the drbd, ocfs2 and vmxen

Can ocfs2 run on top of drbd? In that case you need a master/master
resource. What you have is master/slave.
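
For dual-primary, the ms resource needs master-max set to 2, roughly
like this (a sketch only, reusing your resource names):

ms ms-drbd11 drbd11 \
    meta master-max="2" master-node-max="1" clone-max="2" \
        clone-node-max="1" notify="true"

and drbd itself must permit it in the net section of its config:

net {
    allow-two-primaries;
}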
<div class="im"><br>
> resources to maintain them working...<br>
<br>
It does, but this is a resource-level problem. Funny that the
logs don't show much. You'll have to try by hand using drbdadm.
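
E.g., something like this on each node (a sketch, using your resource
name; both nodes can go primary only once allow-two-primaries is set):

drbdadm up drbd11
drbdadm primary drbd11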
<div class="im"><br>
> To drbd resource init, the Sonith must be configured?<br>
<br>
</div>You must have stonith, in particular since it's shared storage.<br>
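
For testing, this can be something as simple as external/ssh (just a
sketch, never use ssh-based stonith in production):

crm configure primitive st-ssh stonith:external/ssh \
    params hostlist="node1 node2"
crm configure clone st-ssh-clone st-ssh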

Also, set

crm configure property no-quorum-policy=ignore

Thanks,

Dejan

> Thank you!
>
> On Fri, May 15, 2009 at 7:02 AM, Dejan Muhamedagic <dejanmm@fastmail.fm> wrote:
> >
> > Hi,
> >
> > On Fri, May 15, 2009 at 06:47:37AM -0300, Rafael Emerick wrote:
> > > Hi, Dejan
> > >
> > > thanks for the attention;
> > > my cib xml conf follows.
> > > I am a newbie with pacemaker, any hint is very welcome! :D
> >
> > The CIB as seen by crm:
> >
> > primitive drbd11 ocf:heartbeat:drbd \
> >     params drbd_resource="drbd11" \
> >     op monitor interval="59s" role="Master" timeout="30s" \
> >     op monitor interval="60s" role="Slave" timeout="30s" \
> >     meta target-role="started" is-managed="true"
> > ms ms-drbd11 drbd11 \
> >     meta clone-max="2" notify="true" globally-unique="false"
> >         target-role="stopped"
> >
> > The target-role attribute is defined for both the primitive and
> > the container (ms). You should remove the former:
> >
> > crm configure edit drbd11
> >
> > and remove all meta attributes (the whole "meta" part). And don't
> > forget to remove the backslash in the line above it.
> >
> > Thanks,
> >
> > Dejan
> >
> > > thank you very much
> > > for the help
> > >
> > >
> > > On Fri, May 15, 2009 at 4:46 AM, Dejan Muhamedagic <dejanmm@fastmail.fm>
> > > wrote:
> > >
> > > > Hi,
> > > >
> > > > On Thu, May 14, 2009 at 05:13:50PM -0300, Rafael Emerick wrote:
> > > > > Hi, Dejan
> > > > >
> > > > > There are not two sets of meta-attributes.
> > > > >
> > > > > I removed ms-drbd11 and added it again, and the error is the same:
> > > > > Error performing operation: Required data for this CIB API call not
> > > > > found
> > > >
> > > > Can you please post your CIB, as XML.
> > > >
> > > > Thanks,
> > > >
> > > > Dejan
> > > >
> > > > >
> > > > > Thanks,
> > > > >
> > > > >
> > > > > On Thu, May 14, 2009 at 3:43 PM, Dejan Muhamedagic <dejanmm@fastmail.fm>
> > > > > wrote:
> > > > > >
> > > > > > Hi,
> > > > > >
> > > > > > On Thu, May 14, 2009 at 03:18:15PM -0300, Rafael Emerick wrote:
> > > > > > > Hi,
> > > > > > >
> > > > > > > I'm trying to make a cluster with xen-ha using drbd and ocfs2...
> > > > > > >
> > > > > > > I want crm to manage all the resources (xen machines, drbd disks
> > > > > > > and the ocfs2 filesystem).
> > > > > > >
> > > > > > > First, I created a clone lsb resource to init drbd with the GUI
> > > > > > > interface. Now, I'm following this manual
> > > > > > > http://clusterlabs.org/wiki/DRBD_HowTo_1.0 to
> > > > > > > set up the drbd disk management and afterwards create the ocfs2
> > > > > > > filesystem.
> > > > > > >
> > > > > > > So, when I run:
> > > > > > > # crm resource start ms-drbd11
> > > > > > > # Multiple attributes match name=target-role
> > > > > > > #   Value: stopped (id=ms-drbd11-meta_attributes-target-role)
> > > > > > > #   Value: started (id=drbd11-meta_attributes-target-role)
> > > > > > > # Error performing operation: Required data for this CIB API call
> > > > > > > not found
> > > > > >
> > > > > > As it says, there are multiple matches for the attribute. Don't
> > > > > > know how it came to be. Perhaps you can
> > > > > >
> > > > > > crm configure edit ms-drbd11
> > > > > >
> > > > > > and drop one of them. It could also be that there are two sets of
> > > > > > meta-attributes.
> > > > > >
> > > > > > If crm can't edit the resource (in that case please report it)
> > > > > > then you can try:
> > > > > >
> > > > > > crm configure edit xml ms-drbd11
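> > > > > >
> > > > > > or delete the extra attribute directly (syntax from memory, so
> > > > > > do check crm_resource --help):
> > > > > >
> > > > > > crm_resource --meta -r drbd11 -d target-role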
> > > > > >
> > > > > > Thanks,
> > > > > >
> > > > > > Dejan
> > > > > >
> > > > > > > My messages:
> > > > > > > May 14 15:07:11 node1 pengine: [4749]: info: get_failcount:
> > > > > > > ms-drbd11 has failed 1000000 times on node2
> > > > > > > May 14 15:07:11 node1 pengine: [4749]: WARN: common_apply_stickiness:
> > > > > > > Forcing ms-drbd11 away from node2 after 1000000 failures (max=1000000)
> > > > > > > May 14 15:07:11 node1 pengine: [4749]: WARN: native_color: Resource
> > > > > > > drbd11:0 cannot run anywhere
> > > > > > > May 14 15:07:11 node1 pengine: [4749]: WARN: native_color: Resource
> > > > > > > drbd11:1 cannot run anywhere
> > > > > > > May 14 15:07:11 node1 pengine: [4749]: info: master_color: ms-drbd11:
> > > > > > > Promoted 0 instances of a possible 1 to master
> > > > > > > May 14 15:07:11 node1 pengine: [4749]: notice: LogActions: Leave
> > > > > > > resource drbdi:0 (Started node1)
> > > > > > > May 14 15:07:11 node1 pengine: [4749]: notice: LogActions: Leave
> > > > > > > resource drbdi:1 (Started node2)
> > > > > > > May 14 15:07:11 node1 pengine: [4749]: notice: LogActions: Leave
> > > > > > > resource drbd11:0 (Stopped)
> > > > > > > May 14 15:07:11 node1 pengine: [4749]: notice: LogActions: Leave
> > > > > > > resource drbd11:1 (Stopped)
> > > > > > >
> > > > > > >
> > > > > > > Thank you for any help!
> > > > > >

_______________________________________________
Pacemaker mailing list
Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker