On 1/12/21 12:46 PM, Steffen Vinther Sørensen wrote:
> Yes.
>
> 'pcs cluster stop --all' + reboot all nodes
Thanks! That is the ultimate action ;-)
Just starting the cluster via pcs would probably already have had the effect of making the pending fence actions go away.
But we should still try to reproduce the issue somehow, as it shouldn't happen.
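
If you want to look at or drop what the fencer still holds in RAM without restarting anything, something like this should do (a sketch - the --cleanup option requires a pacemaker build with fence-history cleanup support, so it may not be there on older 1.1.x packages):

    stonith_admin --history '*'             # show the fencing history known for all nodes
    stonith_admin --cleanup --history '*'   # clear that history, leftovers included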

Klaus
    <blockquote type="cite"
cite="mid:CALhdMBgkiNyDFp5qSMBNv9RAD1-0HSpxkpHHzE2Ea43pYVxKFw@mail.gmail.com">
      <div dir="ltr">
        <div><br>
        </div>
        <div>/Steffen</div>
      </div>
      <br>
      <div class="gmail_quote">
        <div dir="ltr" class="gmail_attr">On Tue, Jan 12, 2021 at 11:43
          AM Klaus Wenninger <<a href="mailto:kwenning@redhat.com"
            moz-do-not-send="true">kwenning@redhat.com</a>> wrote:<br>
        </div>
        <blockquote class="gmail_quote" style="margin:0px 0px 0px
          0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
          <div>
            <div>On 1/12/21 11:23 AM, Steffen Vinther Sørensen wrote:<br>
            </div>
            <blockquote type="cite">
              <div dir="ltr">Hello Hideo.
                <div><br>
                </div>
                <div>I am overwhelmed by how serious this group is
                  taking good care of issues. </div>
                <div><br>
                </div>
                <div>For your information, the 'pending fencing action'
                  status disappeared after bringing the nodes offline,
                  and during that I found some gfs2 errors that were
                  fixed by fsck.gfs2, and since then my cluster has been
                  acting very stable. <br>
                </div>
              </div>
            </blockquote>
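>>>
>>> Roughly, for the record (the device path here is just an example, not our actual LV), the check has to happen with gfs2 unmounted on all nodes:
>>>
>>>     pcs cluster stop --all                 # take everything down so gfs2 is unmounted cluster-wide
>>>     fsck.gfs2 -y /dev/cluster_vg/gfs2_lv   # example volume; -y answers yes to all repairs
>>>     pcs cluster start --all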
>> By bringing offline you mean shutting down pacemaker?
>> That would be expected, as fence-history is kept solely in RAM.
>> The history-knowledge is synced between the nodes, so the history is just lost if all nodes are down at the same time.
>> Unfortunately that mechanism keeps unwanted leftovers around as well.
>>
>> Regards,
>> Klaus
            <blockquote type="cite">
              <div dir="ltr">
                <div><br>
                </div>
                <div>If I can provide more info let me know. </div>
                <div><br>
                </div>
                <div>/Steffen</div>
              </div>
              <br>
              <div class="gmail_quote">
                <div dir="ltr" class="gmail_attr">On Tue, Jan 12, 2021
                  at 3:45 AM <<a
                    href="mailto:renayama19661014@ybb.ne.jp"
                    target="_blank" moz-do-not-send="true">renayama19661014@ybb.ne.jp</a>>
                  wrote:<br>
                </div>
                <blockquote class="gmail_quote" style="margin:0px 0px
                  0px 0.8ex;border-left:1px solid
                  rgb(204,204,204);padding-left:1ex">Hi Steffen,<br>
                  <br>
                  I've been experimenting with it since last weekend,
                  but I haven't been able to reproduce the same
                  situation.<br>
                  It seems that the cause is that the reproduction
                  method cannot be limited.<br>
                  <br>
                  Can I attach a problem log?<br>
                  <br>
                  Best Regards,<br>
                  Hideo Yamauchi.<br>
>>>>
>>>> ----- Original Message -----
>>>>> From: Klaus Wenninger <kwenning@redhat.com>
>>>>> To: Steffen Vinther Sørensen <svinther@gmail.com>; Cluster Labs - All topics related to open-source clustering welcomed <users@clusterlabs.org>
>>>>> Cc:
>>>>> Date: 2021/1/7, Thu 21:42
>>>>> Subject: Re: [ClusterLabs] Pending Fencing Actions shown in pcs status
>>>>>
>>>>> On 1/7/21 1:13 PM, Steffen Vinther Sørensen wrote:
>>>>>> Hi Klaus,
>>>>>>
>>>>>> Yes, then the status does sync to the other nodes. Also it looks like there are some hostname resolving problems in play here, maybe causing problems; here are my notes from restarting pacemaker etc.
>>>>> I don't think there are hostname resolving problems.
>>>>> The messages you are seeing, which look as if there were, are caused by using -EHOSTUNREACH as the error code to fail a pending fence action when a node that is just coming up sees a pending action that is claimed to be handled by itself.
>>>>> Back then I chose that error code because none of the available ones really matched, and it was urgent for some reason, so introducing something new was too risky at that stage.
>>>>> It would probably make sense to introduce something more descriptive.
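>>>>>
>>>>> That error code is also where the otherwise confusing log text comes from - strerror() renders EHOSTUNREACH as exactly the string you saw, which you can check with a one-liner:
>>>>>
>>>>>     python -c 'import os, errno; print(os.strerror(errno.EHOSTUNREACH))'
>>>>>     No route to host
>>>>>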
>>>>> Back then the issue was triggered by fenced crashing and being restarted - so not a node restart but just fenced restarting.
>>>>> And it looks as if building the failed-message failed somehow.
>>>>> So that could be the reason why the pending action persists.
>>>>> That would be something else than what we solved with Bug 5401.
>>>>> But what triggers the logs below might as well just be a follow-up issue after the Bug 5401 thing.
>>>>> Will try to find time for a deeper look later today.
>>>>>
>>>>> Klaus
>>>>>>
>>>>>> pcs cluster standby kvm03-node02.avigol-gcs.dk
>>>>>> pcs cluster stop kvm03-node02.avigol-gcs.dk
>>>>>> pcs status
>>>>>>
>>>>>> Pending Fencing Actions:
>>>>>> * reboot of kvm03-node02.avigol-gcs.dk pending: client=crmd.37819, origin=kvm03-node03.avigol-gcs.dk
>>>>>>
>>>>>> # From logs on all 3 nodes:
>>>>>> Jan 07 12:48:18 kvm03-node03 stonith-ng[37815]:  warning: received pending action we are supposed to be the owner but it's not in our records -> fail it
>>>>>> Jan 07 12:48:18 kvm03-node03 stonith-ng[37815]:    error: Operation 'reboot' targeting kvm03-node02.avigol-gcs.dk on <no-one> for crmd.37819@kvm03-node03.avigol-gcs.dk.56a3018c: No route to host
>>>>>> Jan 07 12:48:18 kvm03-node03 stonith-ng[37815]:    error: stonith_construct_reply: Triggered assert at commands.c:2406 : request != NULL
>>>>>> Jan 07 12:48:18 kvm03-node03 stonith-ng[37815]:  warning: Can't create a sane reply
>>>>>> Jan 07 12:48:18 kvm03-node03 crmd[37819]:   notice: Peer kvm03-node02.avigol-gcs.dk was not terminated (reboot) by <anyone> on behalf of crmd.37819: No route to host
>>>>>>
>>>>>> pcs cluster start kvm03-node02.avigol-gcs.dk
>>>>>> pcs status (now outputs the same on all 3 nodes)
>>>>>>
>>>>>> Failed Fencing Actions:
>>>>>> * reboot of kvm03-node02.avigol-gcs.dk failed: delegate=, client=crmd.37819, origin=kvm03-node03.avigol-gcs.dk,
>>>>>>     last-failed='Thu Jan  7 12:48:18 2021'
>>>>>>
>>>>>> pcs cluster unstandby kvm03-node02.avigol-gcs.dk
>>>>>>
>>>>>> # Now libvirtd refuses to start
>>>>>>
>>>>>> Jan 07 12:51:44 kvm03-node02 dnsmasq[20884]: read /etc/hosts - 8 addresses
>>>>>> Jan 07 12:51:44 kvm03-node02 dnsmasq[20884]: read /var/lib/libvirt/dnsmasq/default.addnhosts - 0 addresses
>>>>>> Jan 07 12:51:44 kvm03-node02 dnsmasq-dhcp[20884]: read /var/lib/libvirt/dnsmasq/default.hostsfile
>>>>>> Jan 07 12:51:44 kvm03-node02 libvirtd[24091]: 2021-01-07 11:51:44.729+0000: 24160: info : libvirt version: 4.5.0, package: 36.el7_9.3 (CentOS BuildSystem <http://bugs.centos.org>, 2020-11-16-16:25:20, x86-01.bsys.centos.org)
>>>>>> Jan 07 12:51:44 kvm03-node02 libvirtd[24091]: 2021-01-07 11:51:44.729+0000: 24160: info : hostname: kvm03-node02
>>>>>> Jan 07 12:51:44 kvm03-node02 libvirtd[24091]: 2021-01-07 11:51:44.729+0000: 24160: error : qemuMonitorOpenUnix:392 : failed to connect to monitor socket: Connection refused
>>>>>> Jan 07 12:51:44 kvm03-node02 libvirtd[24091]: 2021-01-07 11:51:44.729+0000: 24159: error : qemuMonitorOpenUnix:392 : failed to connect to monitor socket: Connection refused
>>>>>> Jan 07 12:51:44 kvm03-node02 libvirtd[24091]: 2021-01-07 11:51:44.730+0000: 24161: error : qemuMonitorOpenUnix:392 : failed to connect to monitor socket: Connection refused
>>>>>> Jan 07 12:51:44 kvm03-node02 libvirtd[24091]: 2021-01-07 11:51:44.730+0000: 24162: error : qemuMonitorOpenUnix:392 : failed to connect to monitor socket: Connection refused
>>>>>>
>>>>>> pcs status
>>>>>>
>>>>>> Failed Resource Actions:
>>>>>> * libvirtd_start_0 on kvm03-node02.avigol-gcs.dk 'unknown error' (1): call=142, status=complete, exitreason='',
>>>>>>     last-rc-change='Thu Jan  7 12:51:44 2021', queued=0ms, exec=2157ms
>>>>>>
>>>>>> Failed Fencing Actions:
>>>>>> * reboot of kvm03-node02.avigol-gcs.dk failed: delegate=, client=crmd.37819, origin=kvm03-node03.avigol-gcs.dk,
>>>>>>     last-failed='Thu Jan  7 12:48:18 2021'
>>>>>>
>>>>>> # from /etc/hosts on all 3 nodes:
>>>>>>
>>>>>> 172.31.0.31    kvm03-node01 kvm03-node01.avigol-gcs.dk
>>>>>> 172.31.0.32    kvm03-node02 kvm03-node02.avigol-gcs.dk
>>>>>> 172.31.0.33    kvm03-node03 kvm03-node03.avigol-gcs.dk
>>>>>>
>>>>>> On Thu, Jan 7, 2021 at 11:15 AM Klaus Wenninger <kwenning@redhat.com> wrote:
>>>>>>> Hi Steffen,
>>>>>>>
>>>>>>> If you just see the leftover pending-action on one node, it would be interesting whether restarting pacemaker on one of the other nodes syncs it to all of the nodes.
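>>>>>>>
>>>>>>> For example, on one of the nodes that does not show the entry:
>>>>>>>
>>>>>>>     pcs cluster stop kvm03-node02.avigol-gcs.dk
>>>>>>>     pcs cluster start kvm03-node02.avigol-gcs.dk
>>>>>>>
>>>>>>> and then compare the 'pcs status' output on all three nodes.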
>>>>>>>
>>>>>>> Regards,
>>>>>>> Klaus
>>>>>>>
>>>>>>> On 1/7/21 9:54 AM, renayama19661014@ybb.ne.jp wrote:
>>>>>>>> Hi Steffen,
>>>>>>>>
>>>>>>>>> Unfortunately I'm not sure about the exact scenario. But I have been doing some recent experiments with node standby/unstandby stop/start, to get the procedures right for updating node rpms etc.
>>>>>>>>>
>>>>>>>>> Later I noticed the unsettling "pending fencing actions" status msg.
>>>>>>>> Okay!
>>>>>>>>
>>>>>>>> We will repeat the standby and unstandby steps in the same way to check.
>>>>>>>> We will start checking after tomorrow, so I think it will take some time, until next week.
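>>>>>>>>
>>>>>>>> A sketch of the kind of sequence meant, using the node from this thread:
>>>>>>>>
>>>>>>>>     pcs cluster standby kvm03-node02.avigol-gcs.dk
>>>>>>>>     pcs cluster stop kvm03-node02.avigol-gcs.dk
>>>>>>>>     pcs cluster start kvm03-node02.avigol-gcs.dk
>>>>>>>>     pcs cluster unstandby kvm03-node02.avigol-gcs.dk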
>>>>>>>>
>>>>>>>> Many thanks,
>>>>>>>> Hideo Yamauchi.
>>>>>>>>
>>>>>>>> ----- Original Message -----
>>>>>>>>> From: "renayama19661014@ybb.ne.jp" <renayama19661014@ybb.ne.jp>
>>>>>>>>> To: Reid Wahl <nwahl@redhat.com>; Cluster Labs - All topics related to open-source clustering welcomed <users@clusterlabs.org>
>>>>>>>>> Cc:
>>>>>>>>> Date: 2021/1/7, Thu 17:51
>>>>>>>>> Subject: Re: [ClusterLabs] Pending Fencing Actions shown in pcs status
>>>>>>>>>
>>>>>>>>> Hi Steffen,
>>>>>>>>> Hi Reid,
>>>>>>>>>
>>>>>>>>> The fencing history is kept inside stonith-ng and is not written to the cib.
>>>>>>>>> However, getting the entire cib and having it sent to us will help in reproducing the problem.
>>>>>>>>>
>>>>>>>>> Best Regards,
>>>>>>>>> Hideo Yamauchi.
>>>>>>>>>
>>>>>>>>> ----- Original Message -----
>>>>>>>>>> From: Reid Wahl <nwahl@redhat.com>
>>>>>>>>>> To: renayama19661014@ybb.ne.jp; Cluster Labs - All topics related to open-source clustering welcomed <users@clusterlabs.org>
>>>>>>>>>> Date: 2021/1/7, Thu 17:39
>>>>>>>>>> Subject: Re: [ClusterLabs] Pending Fencing Actions shown in pcs status
>>>>>>>>>>
>>>>>>>>>> Hi, Steffen. Those attachments don't contain the CIB. They contain the `pcs config` output. You can get the cib with `pcs cluster cib $(hostname).cib.xml`.
>>>>>>>>>> Granted, it's possible that this fence action information wouldn't be in the CIB at all. It might be stored in fencer memory.
>>>>>>>>>>
>>>>>>>>>> On Thu, Jan 7, 2021 at 12:26 AM <renayama19661014@ybb.ne.jp> wrote:
>>>>>>>>>>> Hi Steffen,
>>>>>>>>>>>
>>>>>>>>>>>> Here CIB settings attached (pcs config show) for all 3 of my nodes (all 3 seem 100% identical), node03 is the DC.
>>>>>>>>>>> Thank you for the attachment.
>>>>>>>>>>>
>>>>>>>>>>> What is the scenario when this situation occurs?
>>>>>>>>>>> In what steps did the problem appear when fencing was performed (or failed)?
>>>>>>>>>>>
>>>>>>>>>>> Best Regards,
>>>>>>>>>>> Hideo Yamauchi.
>>>>>>>>>>>
>>>>>>>>>>> ----- Original Message -----
>>>>>>>>>>>> From: Steffen Vinther Sørensen <svinther@gmail.com>
>>>>>>>>>>>> To: renayama19661014@ybb.ne.jp; Cluster Labs - All topics related to open-source clustering welcomed <users@clusterlabs.org>
>>>>>>>>>>>> Cc:
>>>>>>>>>>>> Date: 2021/1/7, Thu 17:05
>>>>>>>>>>>> Subject: Re: [ClusterLabs] Pending Fencing Actions shown in pcs status
>>>>>>>>>>>>
>>>>>>>>>>>> Hi Hideo,
>>>>>>>>>>>>
>>>>>>>>>>>> Here CIB settings attached (pcs config show) for all 3 of my nodes (all 3 seem 100% identical), node03 is the DC.
>>>>>>>>>>>>
>>>>>>>>>>>> Regards
>>>>>>>>>>>> Steffen
>>>>>>>>>>>>
>>>>>>>>>>>> On Thu, Jan 7, 2021 at 8:06 AM <renayama19661014@ybb.ne.jp> wrote:
>>>>>>>>>>>>> Hi Steffen,
>>>>>>>>>>>>> Hi Reid,
>>>>>>>>>>>>>
>>>>>>>>>>>>> I also checked the CentOS source rpm and it seems to include a fix for the problem.
>>>>>>>>>>>>> As Steffen suggested, if you share your CIB settings, I might know something.
>>>>>>>>>>>>> If this issue is the same one as that fix addresses, the display will only appear on the DC node and will not affect operation.
>>>>>>>>>>>>> The pending actions shown will remain for a long time, but will not have a negative impact on the cluster.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Best Regards,
>>>>>>>>>>>>> Hideo Yamauchi.
>>>>>>>>>>>>>
>>>>>>>>>>>>> ----- Original Message -----
>>>>>>>>>>>>>> From: Reid Wahl <nwahl@redhat.com>
>>>>>>>>>>>>>> To: Cluster Labs - All topics related to open-source clustering welcomed <users@clusterlabs.org>
>>>>>>>>>>>>>> Cc:
>>>>>>>>>>>>>> Date: 2021/1/7, Thu 15:58
>>>>>>>>>>>>>> Subject: Re: [ClusterLabs] Pending Fencing Actions shown in pcs status
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> It's supposedly fixed in that version.
>>>>>>>>>>>>>>   - https://bugzilla.redhat.com/show_bug.cgi?id=1787749
>>>>>>>>>>>>>>   - https://access.redhat.com/solutions/4713471
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> So you may be hitting a different issue (unless there's a bug in the pcmk 1.1 backport of the fix).
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> I may be a little bit out of my area of knowledge here, but can you share the CIBs from nodes 1 and 3? Maybe Hideo, Klaus, or Ken has some insight.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Wed, Jan 6, 2021 at 10:53 PM Steffen Vinther Sørensen <svinther@gmail.com> wrote:
>>>>>>>>>>>>>>> Hi Hideo,
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> If the fix is not going to make it into the CentOS7 pacemaker version, I guess the stable approach to take advantage of it is to build the cluster on another OS than CentOS7? A little late for that in this case though :)
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Regards
>>>>>>>>>>>>>>> Steffen
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Thu, Jan 7, 2021 at 7:27 AM <renayama19661014@ybb.ne.jp> wrote:
>>>>>>>>>>>>>>>> Hi Steffen,
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> The problem addressed by the fix Reid pointed out is what is affecting you.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Since the fencing action requested by the DC node exists only on the DC node, such an event occurs.
>>>>>>>>>>>>>>>> You will need to use a pacemaker with that fix to resolve the issue.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Best Regards,
>>>>>>>>>>>>>>>> Hideo Yamauchi.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> ----- Original Message -----
>>>>>>>>>>>>>>>>> From: Reid Wahl <nwahl@redhat.com>
>>>>>>>>>>>>>>>>> To: Cluster Labs - All topics related to open-source clustering welcomed <users@clusterlabs.org>
>>>>>>>>>>>>>>>>> Cc:
>>>>>>>>>>>>>>>>> Date: 2021/1/7, Thu 15:07
>>>>>>>>>>>>>>>>> Subject: Re: [ClusterLabs] Pending Fencing Actions shown in pcs status
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Hi, Steffen. Are your cluster nodes all running the same Pacemaker versions? This looks like Bug 5401[1], which is fixed by upstream commit df71a07[2]. I'm a little bit confused about why it only shows up on one out of three nodes though.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> [1] https://bugs.clusterlabs.org/show_bug.cgi?id=5401
>>>>>>>>>>>>>>>>> [2] https://github.com/ClusterLabs/pacemaker/commit/df71a07
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On Tue, Jan 5, 2021 at 8:31 AM Steffen Vinther Sørensen <svinther@gmail.com> wrote:
>>>>>>>>>>>>>>>>>> Hello
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> node 1 is showing this in 'pcs status'
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Pending Fencing Actions:
>>>>>>>>>>>>>>>>>> * reboot of kvm03-node02.avigol-gcs.dk pending: client=crmd.37819, origin=kvm03-node03.avigol-gcs.dk
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> node 2 and node 3 output no such thing (node 3 is the DC)
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Google is not much help; how can I investigate this further and get rid of such a terrifying status message?
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Regards
>>>>>>>>>>>>>>>>>> Steffen
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>> Regards,
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Reid Wahl, RHCA
>>>>>>>>>>>>>>>>> Senior Software Maintenance Engineer, Red Hat
>>>>>>>>>>>>>>>>> CEE - Platform Support Delivery - ClusterHA
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> --
>>>>>>>>>>>>>> Regards,
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Reid Wahl, RHCA
>>>>>>>>>>>>>> Senior Software Maintenance Engineer, Red Hat
>>>>>>>>>>>>>> CEE - Platform Support Delivery - ClusterHA
>>>>>>>>>>
>>>>>>>>>> --
>>>>>>>>>> Regards,
>>>>>>>>>>
>>>>>>>>>> Reid Wahl, RHCA
>>>>>>>>>> Senior Software Maintenance Engineer, Red Hat
>>>>>>>>>> CEE - Platform Support Delivery - ClusterHA