[ClusterLabs] resource going to blocked status while we restart service via systemctl twice

S Sathish S s.s.sathish@ericsson.com
Mon Apr 17 03:25:18 EDT 2023


Hi Team,

The TEST_node1 resource goes into blocked status when we restart the service via systemctl twice in quick succession, i.e. before the first systemctl command has completed.
We did not see this issue on the older Pacemaker version 2.0.2; we only observe it on the latest Pacemaker version 2.1.15.

[root@node1 ~]# pcs resource status TEST_node1
  * TEST_node1      (ocf::provider:TEST_RA):  Started node1
[root@node1 ~]# systemctl restart TESTec
[root@node1 ~]# cat /var/pid/TEST.pid
271466
[root@node1 ~]# systemctl restart TESTec
[root@node1 ~]# cat /var/pid/TEST.pid
271466
[root@node1 ~]# pcs resource status TEST_node1
  * TEST_node1      (ocf::provider:TEST_RA):  FAILED node1 (blocked)
[root@node1 ~]#
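For what it's worth, the pidfile being unchanged after the second restart suggests the second systemctl call was issued while the first restart was still in flight. One possible workaround is a guard that waits for the unit to settle before restarting again. This is only a sketch, and it assumes the systemd unit really is named TESTec as in the transcript:

```shell
# Sketch: wait until the previous restart has settled before restarting again.
# Assumption: the systemd unit is named TESTec, as in the transcript above.
while ! systemctl is-active --quiet TESTec; do
    sleep 1
done
systemctl restart TESTec
```

This does not fix the underlying behavior change between Pacemaker versions; it only avoids overlapping restarts from the shell.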


[root@node1 ~]# pcs resource config TEST_node1
Resource: TEST_node1 (class=ocf provider=provider type=TEST_RA)
  Meta Attributes: TEST_node1-meta_attributes
    failure-timeout=120s
    migration-threshold=5
    priority=60
  Operations:
    migrate_from: TEST_node1-migrate_from-interval-0s
      interval=0s
      timeout=20
    migrate_to: TEST_node1-migrate_to-interval-0s
      interval=0s
      timeout=20
    monitor: TEST_node1-monitor-interval-10s
      interval=10s
      timeout=120s
      on-fail=restart
    reload: TEST_node1-reload-interval-0s
      interval=0s
      timeout=20
    start: TEST_node1-start-interval-0s
      interval=0s
      timeout=120s
      on-fail=restart
    stop: TEST_node1-stop-interval-0s
      interval=0s
      timeout=120s
      on-fail=block
[root@node1 ~]#
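Note that the stop operation is configured with on-fail=block, so if a stop fails or times out (which can happen when the service is being restarted underneath Pacemaker by raw systemctl), the cluster takes no further action on the resource until the failure is cleared manually. A hedged recovery sketch, assuming the resource name TEST_node1 as above:

```shell
# Sketch: recover a resource left "blocked" after a failed stop.
# on-fail=block means Pacemaker will not act on the resource again
# until its failure record is cleared.
pcs resource cleanup TEST_node1     # clear the failure record
pcs resource restart TEST_node1     # restart under Pacemaker's control
```

Restarting via `pcs resource restart` instead of raw `systemctl restart` keeps the cluster aware of the operation and avoids racing the recurring monitor; in general, cluster-managed services should not be restarted directly through systemd.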

Thanks and Regards,
S Sathish S