<div dir="ltr"><br><div class="gmail_extra">2016-09-13 23:14 GMT+02:00 Ken Gaillot <span dir="ltr"><<a target="_blank" href="mailto:kgaillot@redhat.com">kgaillot@redhat.com</a>></span>:<br><div class="gmail_quote"><blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex" class="gmail_quote"><span>On 09/13/2016 03:27 PM, Gienek Nowacki wrote:<br>
> > Hi,
> >
> > I'm still testing (before putting it into production) a solution with
> > pacemaker+corosync+drbd+dlm+gfs2 on CentOS 7 in a dual-primary configuration.
> >
> > I have two nodes, wirt1v and wirt2v - each node has an LVM partition
> > with DRBD (/dev/drbd2) and a filesystem mounted as /virtfs2. The /virtfs2
> > filesystems hold the images of the virtual machines.
> >
> > My problem is this: I can't start the cluster and the resources on one
> > node only (a cold start) when the second node is completely powered off.
>
> "two_node: 1" implies "wait_for_all: 1" in corosync.conf; see the
> votequorum(5) man page for details.
>
> This is a safeguard against the situation where the other node is up,
> but not reachable from the newly starting node.
>
> You can get around this by setting "wait_for_all: 0" and relying on
> Pacemaker's fencing to resolve that situation. But if so, be careful
> about starting Pacemaker when the nodes can't see each other, because
> each will try to fence the other.
(...)

Yes, this is the solution to my problem - now it works as I expected.

Many thanks for your answer,
Gienek
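
For anyone running into the same cold-start behaviour: a minimal corosync.conf quorum section along the lines Ken describes might look like the sketch below. Other sections (totem, nodelist, logging) are omitted, and the comments are illustrative rather than quoted from the thread.

    quorum {
        provider: corosync_votequorum

        # Two-node special case: the cluster stays quorate with only
        # one node, but two_node implicitly enables wait_for_all.
        two_node: 1

        # Explicitly disabling wait_for_all lets a single node gain
        # quorum on a cold start while its peer is powered off.
        # Only do this with working fencing (stonith) in Pacemaker;
        # otherwise a split can leave both nodes running the same
        # resources.
        wait_for_all: 0
    }

After changing corosync.conf on both nodes and restarting corosync, the effective settings can be checked with corosync-quorumtool; with the change in place its flags should no longer include WaitForAll (exact output varies by corosync version).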