[ClusterLabs] colocate Redis - weird

lejeczek peljasz at yahoo.co.uk
Wed Dec 20 05:16:17 EST 2023



On 19/12/2023 19:13, lejeczek via Users wrote:
> hi guys,
>
> Is this below not the weirdest thing?
>
> -> $ pcs constraint ref PGSQL-PAF-5435
> Resource: PGSQL-PAF-5435
>   colocation-HA-10-1-1-84-PGSQL-PAF-5435-clone-INFINITY
>   colocation-REDIS-6385-clone-PGSQL-PAF-5435-clone-INFINITY
>   order-PGSQL-PAF-5435-clone-HA-10-1-1-84-Mandatory
>   order-PGSQL-PAF-5435-clone-HA-10-1-1-84-Mandatory-1
>   colocation_set_PePePe
>
> Here the Redis master should follow the pgSQL master.
> With such a constraint:
>
> -> $ pcs resource status PGSQL-PAF-5435
>   * Clone Set: PGSQL-PAF-5435-clone [PGSQL-PAF-5435] (promotable):
>     * Promoted: [ ubusrv1 ]
>     * Unpromoted: [ ubusrv2 ubusrv3 ]
> -> $ pcs resource status REDIS-6385-clone
>   * Clone Set: REDIS-6385-clone [REDIS-6385] (promotable):
>     * Unpromoted: [ ubusrv1 ubusrv2 ubusrv3 ]
>
> If I remove that constraint:
> -> $ pcs constraint delete colocation-REDIS-6385-clone-PGSQL-PAF-5435-clone-INFINITY
> -> $ pcs resource status REDIS-6385-clone
>   * Clone Set: REDIS-6385-clone [REDIS-6385] (promotable):
>     * Promoted: [ ubusrv1 ]
>     * Unpromoted: [ ubusrv2 ubusrv3 ]
>
> and now I can manually move the Redis master around; the master 
> moves to each server just fine.
> Then I add that constraint again:
>
> -> $ pcs constraint colocation add master REDIS-6385-clone with master PGSQL-PAF-5435-clone
>
> and the same thing happens all over again...
>
>
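Just to restate what that constraint means: with the INFINITY colocation between the promoted roles, the REDIS-6385 master is only allowed on the node where PGSQL-PAF-5435 is promoted (ubusrv1 above), so Redis staying unpromoted everywhere suggests its promotion score on that node is not positive. A rough way to look at that - only a sketch, and the exact output differs between Pacemaker/pcs versions:

-> $ pcs constraint --full                  # all constraints with their IDs and scores
-> $ crm_simulate -sL | grep -i REDIS-6385  # allocation/promotion scores the scheduler computes
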
What might it be about that one node? The resource was removed and 
created anew, yet the cluster insists on keeping the master there.
I can manually move the master anywhere, but if I _clear_ the 
resource - with no constraints in place - the cluster moves it back 
to the same node.
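
Two things I keep double-checking here, though these are only guesses: that no cli-prefer-*/cli-ban-* location constraints are left over from earlier moves, and what the per-node attributes look like when the master snaps back:

-> $ pcs constraint location --full  # leftover cli-prefer-* / cli-ban-* entries would show up here
-> $ crm_mon -1A                     # one-shot status with node attributes (master-* scores usually appear here)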

I wonder about: a) "transient" node attributes and b) whether this 
cluster is somewhat broken.
On a) - is there somewhere we can read more about those? (not the 
code/internals)
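
In the meantime, transient (reboot-lifetime) attributes can at least be queried from the CLI; a minimal sketch, assuming the Redis agent stores its promotion score under the usual master-<resource> name - that attribute name is my guess, not taken from this cluster's config:

-> $ attrd_updater -Q -n master-REDIS-6385 -N ubusrv1                  # ask the attribute manager directly (attribute name assumed)
-> $ crm_attribute -N ubusrv1 -n master-REDIS-6385 -l reboot --query   # same attribute, reboot (transient) lifetime
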
thanks, L.