<div dir="ltr"><div dir="ltr"><div dir="ltr"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Fencing in clustering is always required, but unlike pacemaker that lets<br>you turn it off and take your chances, DLM doesn't.</blockquote><div><br></div><div>As a matter of fact, DLM has a setting "enable_fencing=0|1" for what that's worth. </div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">You must have<br>working fencing for DLM (and anything using it) to function correctly.<br></blockquote><div><br></div><div>We do have fencing enabled in the cluster; we've tested both node level fencing and resource fencing; DLM behaved identically in both scenarios, until we set it to 'enable_fencing=0' in the dlm.conf file. </div><div> <br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Basically, cluster config changes (node declared lost), dlm informed and<br>blocks, fence attempt begins and loops until it succeeds, on success,<br>informs DLM, dlm reaps locks held by the lost node and normal operation<br>continues.<br></blockquote><div>This isn't quite what I was seeing in the logs. The "failed" node would be fenced off, pacemaker appeared to be sane, reporting services running on the running nodes, but once the failed node was seen as missing by dlm (dlm_controld), dlm would request fencing, from what I can tell by the log entry. Here is an example of the suspect log entry:</div><div>Sep 26 09:41:35 pcmk-test-1 dlm_controld[837]: 38 fence request 2 pid 1446 startup time 1537969264 fence_all dlm_stonith<br></div><div> <br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">This isn't a question of node count or other configuration concerns.<br>It's simply that you must have proper fencing for DLM.</blockquote><div><br></div><div>Can you speak more to what "proper fencing" is for DLM? </div><div><br></div><div>Best,</div><div>-Pat</div><div><br></div><div> </div></div></div></div><br><div class="gmail_quote"><div dir="ltr">On Mon, Oct 1, 2018 at 12:30 PM Digimer <<a href="mailto:lists@alteeve.ca">lists@alteeve.ca</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">On 2018-10-01 12:04 PM, Ferenc Wágner wrote:<br>
Best,
-Pat

On Mon, Oct 1, 2018 at 12:30 PM Digimer <lists@alteeve.ca> wrote:
On 2018-10-01 12:04 PM, Ferenc Wágner wrote:
> Patrick Whitney <pwhitney@luminoso.com> writes:
> 
>> I have a two node (test) cluster running corosync/pacemaker with DLM
>> and CLVM.
>>
>> I was running into an issue where when one node failed, the remaining node
>> would appear to do the right thing, from the pcmk perspective, that is.
>> It would create a new cluster (of one) and fence the other node, but
>> then, rather surprisingly, DLM would see the other node offline, and it
>> would go offline itself, abandoning the lockspace.
>>
>> I changed my DLM settings to "enable_fencing=0", disabling DLM fencing, and
>> our tests are now working as expected.
> 
> I'm running a larger Pacemaker cluster with standalone DLM + cLVM (that
> is, they are started by systemd, not by Pacemaker). I've seen weird DLM
> fencing behavior, but not what you describe above (though I ran with
> more than two nodes from the very start). Actually, I don't even
> understand how it occurred to you to disable DLM fencing to fix that...

Fencing in clustering is always required, but unlike pacemaker that lets
you turn it off and take your chances, DLM doesn't. You must have
working fencing for DLM (and anything using it) to function correctly.

Basically, cluster config changes (node declared lost), dlm informed and
blocks, fence attempt begins and loops until it succeeds, on success,
informs DLM, dlm reaps locks held by the lost node and normal operation
continues.

This isn't a question of node count or other configuration concerns.
It's simply that you must have proper fencing for DLM.

>> I'm a little concerned I have masked an issue by doing this, as in all
>> of the tutorials and docs I've read, there is no mention of having to
>> configure DLM whatsoever.
> 
> Unfortunately it's very hard to come by any reliable info about DLM. I
> had a couple of enlightening exchanges with David Teigland (its primary
> author) on this list, he is very helpful indeed, but I'm still very far
> from having a working understanding of it.
> 
> But I've been running with --enable_fencing=0 for years without issues,
> leaving all fencing to Pacemaker. Note that manual cLVM operations are
> the only users of DLM here, so delayed fencing does not cause any
> problems, the cluster services do not depend on DLM being operational (I
> mean it can stay frozen for several days -- as it happened in a couple
> of pathological cases). GFS2 would be a very different thing, I guess.
> 

-- 
Digimer
Papers and Projects: https://alteeve.com/w/
"I am, somehow, less interested in the weight and convolutions of
Einstein’s brain than in the near certainty that people of equal talent
have lived and died in cotton fields and sweatshops." - Stephen Jay Gould

-- 
Patrick Whitney
DevOps Engineer -- Tools