[ClusterLabs] pacemaker: 1.1.23 20sec timeout on cluster with disc I/O write delays
Windl, Ulrich
u.windl at ukr.de
Mon Mar 23 09:49:59 UTC 2026
I think you should provide more information, such as your SBD configuration, syslog messages, etc.
Usually, node fencing via SBD works by writing a message to the victim's slot on a shared disk.
Once the message is written, SBD/pacemaker expects the node to self-fence soon. However, multiple timeouts are configurable.
Asking an AI, I got this (treat with some care):
SBD Timeout Parameters in Linux Pacemaker Clusters
In a Pacemaker cluster using SBD (STONITH Block Device) for fencing, there are four primary configurable timeout parameters, and they have strict interdependencies. Understanding these relationships is critical for reliable cluster operation.
________________________________
Timeout Parameters and Their Interdependencies
* SBD_WATCHDOG_TIMEOUT
  Where: /etc/sysconfig/sbd (SBD daemon config)
  Purpose: hardware watchdog timeout; triggers node self-fence if no kick is received
  Default: 5 seconds
* msgwait
  Where: SBD device metadata (SBD device level)
  Purpose: time window for message delivery to a node's slot on the SBD device
  Default: set during device initialization
* stonith-timeout
  Where: Pacemaker CIB (global cluster property)
  Purpose: maximum time Pacemaker waits for a STONITH action (reboot/off) to complete
  Default: 60 seconds
* stonith-watchdog-timeout
  Where: Pacemaker CIB (global cluster property)
  Purpose: time after which Pacemaker assumes fencing has completed via the watchdog (diskless SBD only)
  Default: 0 (disabled by default)
________________________________
Critical Interdependencies
The timeout parameters have strict mathematical relationships that must be maintained for proper cluster behavior:
For Disk-Based SBD (with shared storage devices):
msgwait >= (watchdog_timeout × 2)
stonith-timeout >= msgwait + 20%
Example: If watchdog timeout is 30 seconds:
* msgwait must be at least 60 seconds
* stonith-timeout must be at least 72 seconds (60 + 20%)
For Diskless SBD (watchdog-only, no shared storage):
stonith-watchdog-timeout >= (SBD_WATCHDOG_TIMEOUT × 2)
stonith-timeout >= stonith-watchdog-timeout + 20%
Example: If SBD_WATCHDOG_TIMEOUT is 5 seconds:
* stonith-watchdog-timeout must be at least 10 seconds
* stonith-timeout must be at least 12 seconds (10 + 20%)
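These relationships can be sketched as simple arithmetic (the variable names below are illustrative shell variables, not sbd or pacemaker option names):

```shell
# Illustrative arithmetic only -- these names are shell variables,
# not sbd or pacemaker options.

# Disk-based SBD with a 30 s watchdog timeout:
WATCHDOG=30
MSGWAIT=$(( WATCHDOG * 2 ))            # msgwait >= 2 x watchdog          -> 60
STONITH=$(( MSGWAIT + MSGWAIT / 5 ))   # stonith-timeout >= msgwait + 20% -> 72

# Diskless SBD with SBD_WATCHDOG_TIMEOUT=5:
SBD_WD=5
SWT=$(( SBD_WD * 2 ))                  # stonith-watchdog-timeout >= 2 x watchdog -> 10
STONITH2=$(( SWT + SWT / 5 ))          # stonith-timeout >= swt + 20%             -> 12

echo "disk-based: msgwait>=${MSGWAIT}s stonith-timeout>=${STONITH}s"
echo "diskless:   stonith-watchdog-timeout>=${SWT}s stonith-timeout>=${STONITH2}s"
```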
________________________________
How These Parameters Interact
Watchdog Timeout (SBD_WATCHDOG_TIMEOUT)
This is the foundation of the timeout hierarchy. It represents how long the hardware watchdog will wait for a "kick" (heartbeat) from the SBD daemon before forcibly resetting the node. If storage latency or system issues prevent the SBD daemon from operating, the node self-fences after this timeout expires.
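For example, a node on high-latency storage might carry something like this in /etc/sysconfig/sbd (values are illustrative; note that for disk-based SBD the watchdog timeout stored on the device at initialization time is what applies):

```shell
# /etc/sysconfig/sbd (excerpt; values are examples, device path is an assumption)
SBD_DEVICE="/dev/sdc"
SBD_WATCHDOG_TIMEOUT=30
```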
Message Wait Timeout (msgwait)
This is set in the SBD device metadata during initialization and defines the grace period for a fencing message to be acknowledged as delivered to the target node's slot. It must be at least twice the watchdog timeout to ensure the node has time to detect the fencing message and self-fence gracefully before the watchdog triggers.
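The on-disk timeouts can be inspected with `sbd dump` and are set at initialization time; a sketch, assuming /dev/sdc is the SBD device:

```shell
# Show the header of an existing SBD device, including the msgwait timeout:
sbd -d /dev/sdc dump

# (Re)initialize with a 30 s watchdog timeout and a 60 s msgwait
# (-1 = watchdog timeout, -4 = msgwait; this WIPES the SBD header):
sbd -d /dev/sdc -1 30 -4 60 create
```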
STONITH Timeout (stonith-timeout)
This is a Pacemaker cluster property that controls how long the cluster waits for the fencing action to complete. It must exceed msgwait by at least 20% to allow sufficient time for the message to be delivered and processed. If this timeout is too short, the cluster may consider the fencing action failed and retry, causing unnecessary delays.
STONITH Watchdog Timeout (stonith-watchdog-timeout)
This parameter is only used for diskless SBD and tells Pacemaker how long to wait before assuming a node has already self-fenced via the watchdog. It must be at least twice the SBD_WATCHDOG_TIMEOUT to provide a safety margin. Setting it to 0 (the default) disables this assumption, which is appropriate only for disk-based SBD configurations.
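In Pacemaker these are plain cluster properties; continuing the 30 s watchdog / 60 s msgwait example (the values are illustrative, not recommendations):

```shell
# Disk-based SBD: stonith-timeout >= msgwait + 20%, so 72 s here
pcs property set stonith-timeout=72

# Diskless SBD only; leave at 0 when fencing via shared-disk slots:
pcs property set stonith-watchdog-timeout=10
```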
________________________________
Critical Warnings
Pay attention to these constraints:
* Do not set stonith-watchdog-timeout until SBD is configured and running on every node, including Pacemaker Remote nodes.
* If stonith-timeout < stonith-watchdog-timeout in diskless SBD, nodes can become stuck in an UNCLEAN state, blocking failover.
* For multipath or iSCSI setups, the watchdog timeout should account for path failure detection and failover time. The max_polling_interval in /etc/multipath.conf must be less than the watchdog timeout.
* Changing watchdog timeout requires coordinating changes across all dependent timeouts to maintain the mathematical relationships.
* Storage latency is the primary driver of watchdog timeout values; high-latency storage requires longer timeouts, which cascades into longer msgwait and stonith-timeout values.
The interdependency structure ensures that each timeout layer provides sufficient time for the layer below it to complete, preventing race conditions and cluster deadlock scenarios.
Kind regards,
Ulrich Windl
From: Users <users-bounces at clusterlabs.org> On Behalf Of Dmytro Poliarush via Users
Sent: Tuesday, March 17, 2026 12:32 PM
To: users at clusterlabs.org
Cc: Dmytro Poliarush <Dmytro_Poliarush at epam.com>
Subject: [EXT] [EXT] [ClusterLabs] pacemaker: 1.1.23 20sec timeout on cluster with disc I/O write delays
Hi all,
I need some guidance on pacemaker 1.1.23.
I'm chasing a stubborn issue in a 2-node, 2-disk SBD cluster.
When running a manual fencing test with the `pcs stonith fence` command, I observe an error:
```
Error: unable to fence '<nodehostname>'
```
The error manifests each time at around the 20-second mark (I assume this is a timeout).
The `time` command is used to track how long the execution runs: `time pcs stonith fence`.
Here is an example:
```
[root at node1 ~]# time pcs stonith fence --debug node2
Running: /usr/sbin/stonith_admin -B node2
Return Value: 194
--Debug Output Start--
--Debug Output End--
Error: unable to fence 'node2'
real 0m20.791s
user 0m0.063s
sys 0m0.033s
[root at node1 ~]#
```
For investigation, I've set up a testing cluster with two VirtualBox VMs.
The behaviour was NOT observed on the testing cluster until I intentionally added disk write delays with the dmsetup tool on one of the nodes.
Here is an example of setting a 22-second write delay:
```
# Create: read delay = 0 ms, write delay = 22000 ms
# Table format: delay <dev> <start> <read_ms> <dev> <start> <write_ms>
SIZE=$(blockdev --getsize /dev/sdc)   # device size in 512-byte sectors
dmsetup --noudevsync create slow-sdc --table "0 ${SIZE} delay /dev/sdc 0 0 /dev/sdc 0 22000"
dmsetup mknodes
```
NOTE that tests with delays up to (and including) 19 seconds pass:
```
[root at node1 ~]# ./suspend-resume-slow-sdc-delay-write.sh 20000
[root at node1 ~]# dmsetup table slow-sdc
0 262144 delay 8:32 0 0 8:32 0 20000
[root at node1 ~]# time pcs stonith fence --debug node2
Running: /usr/sbin/stonith_admin -B node2
Return Value: 194
--Debug Output Start--
--Debug Output End--
Error: unable to fence 'node2'
real 0m20.588s
user 0m0.088s
sys 0m0.021s
[root at node1 ~]# ./suspend-resume-slow-sdc-delay-write.sh 19000
++ blockdev --getsize /dev/sdc
+ SIZE=262144
++ lsblk -dn -o MAJ:MIN /dev/sdc
+ MAJMIN=' 8:32 '
+ dmsetup suspend slow-sdc
+ dmsetup reload slow-sdc --table '0 262144 delay /dev/sdc 0 0 /dev/sdc 0 19000'
+ dmsetup resume slow-sdc
+ dmsetup table slow-sdc
0 262144 delay 8:32 0 0 8:32 0 19000
[root at node1 ~]# pcs stonith history cleanup; pcs stonith cleanup # pcs-cleanup-error-cleanup
cleaning up fencing-history for node *
Cleaned up all resources on all nodes
[root at node1 ~]#
[root at node1 ~]# time pcs stonith fence --debug node2
Running: /usr/sbin/stonith_admin -B node2
Return Value: 0
--Debug Output Start--
--Debug Output End--
Node: node2 fenced
real 0m19.869s
user 0m0.098s
sys 0m0.035s
[root at node1 ~]#
```
So here is my question:
I assume there is a 20-second timeout value hardcoded somewhere in the pacemaker 1.1.23 sources.
This hardcoded value impacts manual fencing in the case of disk I/O delays (and maybe in some other cases).
I expect that increasing this timeout can mitigate clusters with disk I/O issues similar to the ones described above.
Please note this timeout is NOT stonith-timeout or stonith-watchdog-timeout.
Could you please comment on whether that is a meaningful assumption, and where the 20-second timeout comes from?
Regards, Dmytro