[ClusterLabs] trigger something at ?

lejeczek peljasz at yahoo.co.uk
Fri Feb 9 01:15:12 EST 2024



On 31/01/2024 16:37, lejeczek via Users wrote:
>
>
> On 31/01/2024 16:06, Jehan-Guillaume de Rorthais wrote:
>> On Wed, 31 Jan 2024 16:02:12 +0100
>> lejeczek via Users <users at clusterlabs.org> wrote:
>>
>>>
>>> On 29/01/2024 17:22, Ken Gaillot wrote:
>>>> On Fri, 2024-01-26 at 13:55 +0100, lejeczek via Users 
>>>> wrote:
>>>>> Hi guys.
>>>>>
>>>>> Is it possible to trigger some... action - I'm
>>>>> thinking specifically of shutdown/start.
>>>>> If not from within the cluster then perhaps from
>>>>> outside of it.
>>>>> I would like to create/remove constraints when the
>>>>> cluster starts & stops, respectively.
>>>>>
>>>>> many thanks, L.
>>>>>
>>>> You could use node status alerts for that, but it's 
>>>> risky for alert
>>>> agents to change the configuration (since that may 
>>>> result in more
>>>> alerts and potentially some sort of infinite loop).
>>>>
>>>> Pacemaker has no concept of a full cluster start/stop, 
>>>> only node
>>>> start/stop. You could approximate that by checking 
>>>> whether the node
>>>> receiving the alert is the only active node.
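
(just to spell that idea out for myself - a rough, untested sketch;
the script path and the constraint are mine, and as noted above,
changing the configuration from an alert agent is risky)

   #!/bin/sh
   # hypothetical alert agent, e.g. /usr/local/bin/paf-constraints-alert.sh
   # Pacemaker hands the event to the agent via CRM_alert_* variables
   [ "$CRM_alert_kind" = "node" ] || exit 0
   case "$CRM_alert_desc" in
       member)
           # a node came up; if it is the only active node, treat this
           # as "the cluster is starting" and add the extra constraint
           if [ "$(crm_node -p | wc -w)" -eq 1 ]; then
               pcs constraint location PGSQL-PAF-5438-clone prefers ubusrv1=1002
           fi
           ;;
       lost)
           # a node went away; the constraint could be removed again here
           # (look up the real constraint id with: pcs constraint --full)
           ;;
   esac
   exit 0

and it would be registered with something like:

-> $ pcs alert create path=/usr/local/bin/paf-constraints-alert.sh
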
>>>>
>>>> Another possibility would be to write a resource agent 
>>>> that does what
>>>> you want and order everything else after it. However 
>>>> it's even more
>>>> risky for a resource agent to modify the configuration.
>>>>
>>>> Finally you could write a systemd unit to do what you 
>>>> want and order it
>>>> after pacemaker.
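
(a minimal sketch of that systemd variant, assuming hypothetical
helper scripts that add/remove the constraints with pcs)

   # /etc/systemd/system/paf-constraints.service
   [Unit]
   Description=Extra PAF constraints, tied to the cluster lifecycle
   After=pacemaker.service
   BindsTo=pacemaker.service

   [Service]
   Type=oneshot
   RemainAfterExit=yes
   # these two scripts are placeholders - they would run
   # "pcs constraint ..." to add/remove the extra constraints
   ExecStart=/usr/local/bin/add-paf-constraints.sh
   ExecStop=/usr/local/bin/remove-paf-constraints.sh

   [Install]
   WantedBy=multi-user.target

With After=/BindsTo= it starts once pacemaker is up, and its ExecStop
runs before (or whenever) pacemaker itself stops.
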
>>>>
>>>> What's wrong with leaving the constraints permanently 
>>>> configured?
>>> yes, that would be for a node start/stop
>>> I struggle with using constraints to move the pgsql (PAF)
>>> master onto a given node - it seems that co/locating paf's
>>> master results in trouble (replication breaks) at/after node
>>> shutdown/reboot (not always, but way too often)
>> What? What's wrong with colocating PAF's masters exactly?
>> How does it break any
>> replication? What are these constraints you are dealing with?
>>
>> Could you share your configuration?
> Constraints beyond/above what is required by the PAF agent
> itself, say...
> you have multiple pgSQL clusters with PAF - thus multiple
> (separate, one per pgSQL cluster) masters - and you want to
> spread/balance those across the HA cluster
> (or in other words - avoid having more than 1 pgsql master
> per HA node)
> I've tried the ones below; they move the master onto the chosen
> node but... then the issues I mentioned.
>
> -> $ pcs constraint location PGSQL-PAF-5438-clone prefers 
> ubusrv1=1002
> or
> -> $ pcs constraint colocation set PGSQL-PAF-5435-clone 
> PGSQL-PAF-5434-clone PGSQL-PAF-5433-clone role=Master 
> require-all=false setoptions score=-1000
>
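
(side note on removing such an extra constraint again - the id below
is only a guess, the real one shows up with the first command)

-> $ pcs constraint --full
-> $ pcs constraint remove location-PGSQL-PAF-5438-clone-ubusrv1-1002
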
Wanted to share an observation - not a measurement of
anything, I did not take any - about the latest pgSQL
version, which I put in place of version 14 that I had been
using all this time.
(also with that upgrade - from Postgres' own repos - came an
update of PAF)
So, with pgSQL ver. 16 and everything else the same, the
paf/pgSQL resources now behave a lot better and survive just
fine all those cases - with the extra constraints in place
(!) of course - where previously replication would fail.

