Hello,

First of all, I'd like to ask a general question:

Has anybody successfully set up a cLVM cluster with Pacemaker and run
it in production?

Now back to the concrete problem:

I configured two interfaces for corosync:

root@bbzclnode04:~# corosync-cfgtool -s
Printing ring status.
Local node ID 897624256
RING ID 0
        id      = 192.168.128.53
        status  = ring 0 active with no faults
RING ID 1
        id      = 192.168.129.23
        status  = ring 1 active with no faults

RRP is set to passive.

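For reference, a totem section for such a two-ring setup would look
roughly like this - a minimal sketch, with the two networks taken from
the ring status above; the mcastaddr/mcastport values here are only
placeholders, not my literal config:

totem {
        version: 2
        rrp_mode: passive
        interface {
                ringnumber: 0
                bindnetaddr: 192.168.128.0
                mcastaddr: 226.94.1.1
                mcastport: 5405
        }
        interface {
                ringnumber: 1
                bindnetaddr: 192.168.129.0
                mcastaddr: 226.94.2.1
                mcastport: 5407
        }
}
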
I also made some changes to my cib:

node bbzclnode04
node bbzclnode06
node bbzclnode07
primitive clvm ocf:lvm2:clvmd \
        params daemon_timeout="30" \
        meta target-role="Started"
primitive dlm ocf:pacemaker:controld \
        meta target-role="Started"
group dlm-clvm dlm clvm
clone dlm-clvm-clone dlm-clvm \
        meta interleave="true" ordered="true"
property $id="cib-bootstrap-options" \
        dc-version="1.1.5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f" \
        cluster-infrastructure="openais" \
        expected-quorum-votes="3" \
        no-quorum-policy="ignore" \
        stonith-enabled="false" \
        last-lrm-refresh="1322643084"

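As a sanity check, the configuration can be validated against the live
CIB with crm_verify, which ships with Pacemaker:

crm_verify -L -V
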
I cleaned up and restarted the resources - but nothing changed:

crm(live)resource# cleanup dlm-clvm-clone
Cleaning up dlm:0 on bbzclnode04
Cleaning up dlm:0 on bbzclnode06
Cleaning up dlm:0 on bbzclnode07
Cleaning up clvm:0 on bbzclnode04
Cleaning up clvm:0 on bbzclnode06
Cleaning up clvm:0 on bbzclnode07
Cleaning up dlm:1 on bbzclnode04
Cleaning up dlm:1 on bbzclnode06
Cleaning up dlm:1 on bbzclnode07
Cleaning up clvm:1 on bbzclnode04
Cleaning up clvm:1 on bbzclnode06
Cleaning up clvm:1 on bbzclnode07
Cleaning up dlm:2 on bbzclnode04
Cleaning up dlm:2 on bbzclnode06
Cleaning up dlm:2 on bbzclnode07
Cleaning up clvm:2 on bbzclnode04
Cleaning up clvm:2 on bbzclnode06
Cleaning up clvm:2 on bbzclnode07
Waiting for 19 replies from the CRMd................... OK

crm_mon:

============
Last updated: Wed Nov 30 10:15:09 2011
Stack: openais
Current DC: bbzclnode04 - partition with quorum
Version: 1.1.5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f
3 Nodes configured, 3 expected votes
1 Resources configured.
============

Online: [ bbzclnode04 bbzclnode06 bbzclnode07 ]

Failed actions:
    clvm:1_start_0 (node=bbzclnode06, call=11, rc=1, status=complete): unknown error
    clvm:0_start_0 (node=bbzclnode04, call=11, rc=1, status=complete): unknown error
    clvm:2_start_0 (node=bbzclnode07, call=11, rc=1, status=complete): unknown error

When I look in the log, there is a message telling me that another
clvmd process may already be running - but that is not the case:

"clvmd could not create local socket Another clvmd is probably
already running"

Or is it a permission problem when writing to the filesystem? Is there
a way to get rid of it?

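If it really is a stale socket, something like this should show it - a
sketch assuming the default socket path /var/run/lvm/clvmd.sock
(adjust if your build uses a different one):

# is a clvmd process actually running?
ps ax | grep "[c]lvmd"
# is there a leftover socket from a previous run?
ls -l /var/run/lvm/clvmd.sock
# if no clvmd is running, remove the stale socket and retry the start
rm -f /var/run/lvm/clvmd.sock
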
Should I use a different distro - or install from source?

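Before going that far, it probably makes sense to run the resource
agent by hand and watch it fail outside of Pacemaker - e.g. with
ocf-tester from the resource-agents package (assuming the agent is
installed under /usr/lib/ocf):

ocf-tester -n clvm -o daemon_timeout=30 /usr/lib/ocf/resource.d/lvm2/clvmd
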
On 24.11.2011 22:59, Andreas Kurz wrote:
<pre wrap="">Hello,
On 11/24/2011 10:12 PM, Vadim Bulst wrote:
</pre>
<blockquote type="cite">
<pre wrap="">Hi Andreas,
I changed my cib:
node bbzclnode04
node bbzclnode06
node bbzclnode07
primitive clvm ocf:lvm2:clvmd \
params daemon_timeout="30"
primitive dlm ocf:pacemaker:controld
group g_lock dlm clvm
clone g_lock-clone g_lock \
meta interleave="true"
property $id="cib-bootstrap-options" \
dc-version="1.1.5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f" \
cluster-infrastructure="openais" \
expected-quorum-votes="3" \
no-quorum-policy="ignore" \
stonith-enabled="false" \
last-lrm-refresh="1322049979
but no luck at all.
</pre>
</blockquote>
<pre wrap="">I assume you did at least a cleanup on clvm and it still does not work
... next step would be to grep for ERROR in your cluster log and look
for other suspicious messages to find out why clvm is not that motivated
to start.
</pre>
<blockquote type="cite">
<pre wrap="">"And use Corosync 1.4.x with redundant rings and automatic ring recovery
feature enabled."
I got two interfaces per server - there are bonded together and bridged
for virtualization. Only one untagged vlan. I tried to give a tagged
Vlan Bridge a Address but didn't worked. My network conf looks like that:
</pre>
</blockquote>
<pre wrap="">One ore two extra nics are quite affordable today to build e.g. a direct
connection between the nodes (if possible)
Regards,
Andreas
</pre>

--
Kind regards,
Vadim Bulst

System Administrator, BBZ
Biotechnologisch-Biomedizinisches Zentrum
Universität Leipzig
Deutscher Platz 5, 04103 Leipzig
Tel.: 0341 97 - 31 307
Fax: 0341 97 - 31 309