On Thu, Feb 17, 2022 at 12:38 PM Ulrich Windl <Ulrich.Windl@rz.uni-regensburg.de> wrote:
> >>> Klaus Wenninger <kwenning@redhat.com> wrote on 17.02.2022 at 10:49 in message
> <CALrDAo0UngyYybnv9xwve9V4suXvjOn-y8c8vD51ZR5LT1OpKw@mail.gmail.com>:
> ...
> >> For completeness: Yes, sbd did recover:
> >> Feb 14 13:01:42 h18 sbd[6615]: warning: cleanup_servant_by_pid: Servant for /dev/disk/by-id/dm-name-SBD_1-3P1 (pid: 6619) has terminated
> >> Feb 14 13:01:42 h18 sbd[6615]: warning: cleanup_servant_by_pid: Servant for /dev/disk/by-id/dm-name-SBD_1-3P2 (pid: 6621) has terminated
> >> Feb 14 13:01:42 h18 sbd[31668]: /dev/disk/by-id/dm-name-SBD_1-3P1: notice: servant_md: Monitoring slot 4 on disk /dev/disk/by-id/dm-name-SBD_1-3P1
> >> Feb 14 13:01:42 h18 sbd[31669]: /dev/disk/by-id/dm-name-SBD_1-3P2: notice: servant_md: Monitoring slot 4 on disk /dev/disk/by-id/dm-name-SBD_1-3P2
> >> Feb 14 13:01:49 h18 sbd[6615]: notice: inquisitor_child: Servant /dev/disk/by-id/dm-name-SBD_1-3P1 is healthy (age: 0)
> >> Feb 14 13:01:49 h18 sbd[6615]: notice: inquisitor_child: Servant /dev/disk/by-id/dm-name-SBD_1-3P2 is healthy (age: 0)
> >>
> >
> > Good to see that!
> > Did you try several times?
>
> Well, we only have two fabrics, and the server is in production, so each
> fabric was interrupted just once (to change the cabling).
> sbd survived.

Yup - sometimes the entities that would have to be failed are just too large
to have them as part of the playground/sandbox :-(
>
> Second fabric:
> Feb 14 13:03:51 h18 kernel: qla2xxx [0000:01:00.0]-500b:2: LOOP DOWN detected (2 7 0 0).
> Feb 14 13:03:57 h18 multipathd[5180]: SBD_1-3P2: remaining active paths: 3
> Feb 14 13:03:57 h18 multipathd[5180]: SBD_1-3P2: remaining active paths: 2
>
> Feb 14 13:05:18 h18 kernel: qla2xxx [0000:01:00.0]-500a:2: LOOP UP detected (8 Gbps).
> Feb 14 13:05:22 h18 multipathd[5180]: SBD_1-3P2: sdr - tur checker reports path is up
> Feb 14 13:05:22 h18 multipathd[5180]: SBD_1-3P2: remaining active paths: 3
> Feb 14 13:05:23 h18 multipathd[5180]: SBD_1-3P2: sdae - tur checker reports path is up
> Feb 14 13:05:23 h18 multipathd[5180]: SBD_1-3P2: remaining active paths: 4
> Feb 14 13:05:25 h18 multipathd[5180]: SBD_1-3P1: sdl - tur checker reports path is up
> Feb 14 13:05:25 h18 multipathd[5180]: SBD_1-3P1: remaining active paths: 3
> Feb 14 13:05:26 h18 multipathd[5180]: SBD_1-3P1: sdo - tur checker reports path is up
> Feb 14 13:05:26 h18 multipathd[5180]: SBD_1-3P1: remaining active paths: 4
>
> So this time multipath reacted before sbd noticed anything (which is how it
> should have worked in the first place).

That depends on how you'd like it to behave. You are free to configure the
I/O timeout so that sbd wouldn't see the outage at all; or, if you'd rather
have some notice in the sbd logs - or the added reliability of kicking off
another read attempt instead of waiting for a first, possibly doomed, one to
finish - you give it enough time to retry within your msgwait timeout.
Unfortunately it isn't possible to have one-size-fits-all defaults here, but
feedback is welcome so that we can do a little tweaking to make them fit a
larger audience. I remember a case where devices stalled for 50 s during a
firmware update and that mustn't trigger fencing - definitely a case that
can't be covered by defaults.
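For illustration, a hedged example (device path and numbers are placeholders,
not recommendations) of how those timeouts are set when initializing a device:

# (Re)initializes the sbd header on the device - destroys existing slots!
# -1 sets the watchdog timeout, -4 the msgwait timeout (init-time only).
# A msgwait of roughly twice the per-read I/O timeout leaves room for one
# retried read before a peer assumes its poison pill has taken effect.
sbd -d /dev/disk/by-id/dm-name-SBD_1-3P1 -1 15 -4 30 create
# Inspect what actually landed in the on-disk header:
sbd -d /dev/disk/by-id/dm-name-SBD_1-3P1 dump

The trade-off: a larger msgwait buys retry headroom but also lengthens how
long a peer waits before it considers a fenced node dead.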
>
> > I have some memory that, when testing with the kernel mentioned before,
> > the behavior changed after a couple of timeouts and it wasn't able to
> > create the read request anymore (without the fix mentioned) - I assume
> > some kind of resource depletion due to previously hanging attempts not
> > being destroyed properly.
>
> That can be a nasty race condition, too, however. (I have had my share of
> signal handlers, threads and race conditions.)
> Of course cruder programming errors are possible, too.

It was a single-threaded process, and the effect was gone once the API was
handled properly - I mean the different behavior after a couple of retries
was gone. The basic issue was persistent with that kernel.
> Debugging can be very hard, but dmsetup can create bad disks for testing
> for you ;-)
> # Each table line is: <start sector> <length in sectors> <target>.
> # Sectors 8 and 16 return I/O errors; all other sectors read as zeros.
> DEV=bad_disk
> dmsetup create "$DEV" <<EOF
> 0 8 zero
> 8 1 error
> 9 7 zero
> 16 1 error
> 17 255 zero
> EOF

We need to impose the problem dynamically; otherwise sbd wouldn't come up in
the first place - which is of course a useful test in itself as well.
At the moment regressions.sh uses wipe_table to impose an error dynamically,
but simultaneously on all blocks. The periodic read goes to just a single
block anyway (more accurately, to the header as well), so we should be fine
with that.
I saw that device-mapper also offers a possibility to delay I/O. That looks
useful for a CI test case that simulates what we saw here - even multiple
times in a row, without upsetting customers ;-)
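Something along these lines, perhaps - an untested sketch, where the device
name, backing device and delay are made-up placeholders:

# Swap a dm-delay mapping into an existing test device, let reads stall,
# then restore the original table - a transient I/O hang on demand.
DEV=sbd_test                    # hypothetical device-mapper test device
BACKING=/dev/loop0              # hypothetical backing device
SECTORS=$(blockdev --getsz "/dev/mapper/$DEV")
ORIG_TABLE=$(dmsetup table "$DEV")
dmsetup suspend "$DEV"
echo "0 $SECTORS delay $BACKING 0 50000" | dmsetup load "$DEV"  # delay all I/O by 50 s
dmsetup resume "$DEV"
sleep 60    # long enough for sbd's read to run into its timeout
dmsetup suspend "$DEV"
echo "$ORIG_TABLE" | dmsetup load "$DEV"
dmsetup resume "$DEV"

Regards,
Klaus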
>
> Regards,
> Ulrich
> ...