[Pacemaker] Wrong stack o2cb

Andrew Beekhof andrew at beekhof.net
Wed Dec 16 10:59:29 UTC 2009


I suspect you forgot to stop o2cb from being started when the node comes up.
If you really want to be sure, just delete /etc/init.d/o2cb.

Otherwise, you can use the chkconfig commands listed in the PDF (and
then reboot the node).
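
For reference, a rough sketch of those steps (this assumes the stock
init scripts are named o2cb and ocfs2, and the sysfs check only works
once the ocfs2 modules are loaded):

  # stop the legacy o2cb stack and keep it from starting at boot
  /etc/init.d/o2cb stop
  chkconfig o2cb off
  chkconfig ocfs2 off

  # after rebooting, the stack in use should read "pcmk", not "o2cb"
  cat /sys/fs/ocfs2/cluster_stack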

2009/12/15 Поляченко Владимир Владимирович <strafer.admin at gmail.com>:
> Hi all (sorry for my English; I can read and understand it, but not
> write it well).
> I am configuring a cluster on Fedora 12 (following the "Cluster from
> Scratch - Apache in Fedora 11" manual).
>
> The packages are from the Fedora repo:
>
> [root at server1 /]# rpm -q pacemaker ocfs2-tools ocfs2-tools-pcmk dlm-pcmk heartbeat corosync resource-agents drbd
>
> pacemaker-1.0.5-4.fc12.i686
> ocfs2-tools-1.4.3-3.fc12.i686
> ocfs2-tools-pcmk-1.4.3-3.fc12.i686
> dlm-pcmk-3.0.6-1.fc12.i686
> heartbeat-3.0.0-0.5.0daab7da36a8.hg.fc12.i686
> corosync-1.2.0-1.fc12.i686
> resource-agents-3.0.6-1.fc12.i686
> drbd-8.3.6-2.fc12.i686
>
> The configuration is Active/Active. The problem is the following (from /var/log/messages):
>
> Dec 15 16:07:21 server1 crmd: [1189]: info: te_rsc_command: Initiating action 4: monitor o2cb:0_monitor_0 on server1 (local)
> Dec 15 16:07:21 server1 crmd: [1189]: info: do_lrm_rsc_op: Performing key=4:91:7:78a6a7b0-ef15-434f-8aaf-e00cd0f9d6ef op=o2cb:0_monitor_0 )
> Dec 15 16:07:21 server1 lrmd: [1186]: info: rsc:o2cb:0:101: monitor
> Dec 15 16:07:21 server1 o2cb[20999]: ERROR: Wrong stack o2cb
> Dec 15 16:07:21 server1 lrmd: [1186]: info: RA output: (o2cb:0:monitor:stderr) 2009/12/15_16:07:21 ERROR: Wrong stack o2cb
> Dec 15 16:07:21 server1 crmd: [1189]: info: process_lrm_event: LRM operation o2cb:0_monitor_0 (call=101, rc=5, cib-update=430, confirmed=true) not installed
> Dec 15 16:07:21 server1 crmd: [1189]: WARN: status_from_rc: Action 4 (o2cb:0_monitor_0) on server1 failed (target: 7 vs. rc: 5): Error
> Dec 15 16:07:21 server1 crmd: [1189]: info: abort_transition_graph: match_graph_event:272 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=o2cb:0_monitor_0, magic=0:5;4:91:7:78a6a7b0-ef15-434f-8aaf-e00cd0f9d6ef, cib=0.329.2) : Event failed
> Dec 15 16:07:21 server1 crmd: [1189]: info: update_abort_priority: Abort priority upgraded from 0 to 1
> Dec 15 16:07:21 server1 crmd: [1189]: info: update_abort_priority: Abort action done superceeded by restart
> Dec 15 16:07:21 server1 crmd: [1189]: info: match_graph_event: Action o2cb:0_monitor_0 (4) confirmed on server1 (rc=4)
> Dec 15 16:07:21 server1 crmd: [1189]: info: te_rsc_command: Initiating action 3: probe_complete probe_complete on server1 (local) - no waiting
>
> However, the /dev/drbd1 resource mounts without problems (the nodes are
> online; the mount does not start automatically, so I mount it manually).
>
> crm config (only the relevant lines):
> ---------------------------------
> primitive DataFS ocf:heartbeat:Filesystem \
>        params device="/dev/drbd/by-res/data" directory="/opt" fstype="ocfs2" \
>        meta target-role="Started"
> primitive ServerData ocf:linbit:drbd \
>        params drbd_resource="data"
> primitive dlm ocf:pacemaker:controld \
>        op monitor interval="120s"
> primitive o2cb ocf:ocfs2:o2cb \
>        op monitor interval="120s"
> ms ServerDataClone ServerData \
>        meta master-max="2" master-node-max="1" clone-max="2" \
>        clone-node-max="1" notify="true"
> clone dlm-clone dlm \
>        meta interleave="true"
> clone o2cb-clone o2cb \
>        meta interleave="true"
> colocation o2cb-with-dlm inf: o2cb-clone dlm-clone
> order start-o2cb-after-dlm inf: dlm-clone o2cb-clone
> -------------------------
> I created /etc/ocfs2/cluster.conf:
> -------------------------
> node:
>        name = server1
>        cluster = ocfs2
>        number = 0
>        ip_address = 10.10.10.1
>        ip_port = 7777
>
> node:
>        name = server2
>        cluster = ocfs2
>        number = 1
>        ip_address = 10.10.10.2
>        ip_port = 7777
>
> cluster:
>        name = ocfs2
>        node_count = 2
> -----------------------------
> How do I resolve this problem?
>
> _______________________________________________
> Pacemaker mailing list
> Pacemaker at oss.clusterlabs.org
> http://oss.clusterlabs.org/mailman/listinfo/pacemaker
>



