[ClusterLabs] Can't have 2 nodes as master with galera resource agent

Tomas Jelinek tojeline@redhat.com
Fri Dec 11 11:08:20 EST 2020


On 11 Dec 2020 at 15:10, Andrei Borzenkov wrote:
> On 11 Dec 2020 at 16:13, Raphael Laguerre wrote:
>> Hello,
>>
>> I'm trying to set up a two-node cluster with two Galera instances. I use the ocf:heartbeat:galera resource agent; however, after I create the resource, only one node appears in the master role, while the other cannot be promoted and stays in the slave role. I expect both nodes to have a mysqld instance running and synchronized in a Galera cluster. Could you help me, please? When I do a debug-promote, it seems that mysqld is started on node-01 and shut down just after, but I don't understand why. If I launch the Galera cluster manually by running "galera_new_cluster" on one node and "systemctl start mariadb" on the second node, it works properly (I can write on both nodes and they are synchronized).
>>
>> Here is the scenario that led to the current situation:
>> I did :
>>
>> pcs resource create r_galera ocf:heartbeat:galera enable_creation=true wsrep_cluster_address="gcomm://192.168.0.1,192.168.0.2" cluster_host_map="node-01:192.168.0.1;node-02:192.168.0.2" promotable meta master-max=2 promoted-max=2
>>

Try it like this:

pcs resource create r_galera ocf:heartbeat:galera enable_creation=true \
  wsrep_cluster_address="gcomm://192.168.0.1,192.168.0.2" \
  cluster_host_map="node-01:192.168.0.1;node-02:192.168.0.2" promotable \
  master-max=2 promoted-max=2

i.e. drop "meta" after "promotable".

Options written after "meta" go to the primitive resource, options 
written after "promotable" (or "clone") go to the promotable (or clone) 
resource.
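
If the resource has already been created, the misplaced attributes can also be moved without recreating it. A sketch, assuming the clone id r_galera-clone visible in the logs below; these commands need a running cluster, and on some pcs-0.10 releases "pcs resource config" may still be spelled "pcs resource show --full":

```shell
# Show the configuration of the clone, including meta attributes
# on both the clone and the wrapped primitive
pcs resource config r_galera-clone

# Unset the misplaced attributes on the primitive (empty value = remove)
pcs resource meta r_galera master-max= promoted-max=

# Set promoted-max on the clone, where pacemaker actually reads it
pcs resource meta r_galera-clone promoted-max=2
```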

> 
> Promotable, promoted-max must be set on clone, not on primitive. From logs
> 
>        <clone id="r_galera-clone">
>          <primitive class="ocf" id="r_galera" provider="heartbeat"
> type="galera">
> ...
>            <meta_attributes id="r_galera-meta_attributes">
>              <nvpair id="r_galera-meta_attributes-master-max"
> name="master-max" value="2"/>
>              <nvpair id="r_galera-meta_attributes-promoted-max"
> name="promoted-max" value="2"/>
>            </meta_attributes>
> 
> Those are resource (primitive) attributes
> ...
>          </primitive>
>          <meta_attributes id="r_galera-clone-meta_attributes">
>            <nvpair id="r_galera-clone-meta_attributes-promotable"
> name="promotable" value="true"/>
>          </meta_attributes>
>        </clone>
> 
> 
> And the clone attributes are at their defaults (1 master), so pacemaker
> promotes only one node, the first.
> 
> Dec 11 11:35:23 node-02 pacemaker-schedulerd[5304] (color_promotable)
> info: r_galera-clone: Promoted 1 instances of a possible 1 to master
> 
> The resource on the second node correctly sets its master score, but
> pacemaker cannot promote more than one node.
> 
> Your pcs invocation lacks the --master switch (and in general it looks
> strange; I am not sure how you managed to create a clone with this
> command, but I am not familiar enough with pcs):
> 
> pcs resource create r_galera ocf:heartbeat:galera ... --master meta
> master-max=2 promoted-max=2

I guess Raphael is using pcs-0.10.x which brings a new syntax. There is 
no --master in pcs-0.10.x.
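
For comparison, the two generations of syntax look roughly like this (a sketch only; the "..." stands for the galera options from the original command, and the pcs-0.9.x form is shown purely for reference):

```shell
# pcs-0.9.x: master/slave resource, clone meta options after --master
pcs resource create r_galera ocf:heartbeat:galera ... --master meta \
    master-max=2 promoted-max=2

# pcs-0.10.x: promotable clone, clone options directly after "promotable"
pcs resource create r_galera ocf:heartbeat:galera ... promotable \
    promoted-max=2
```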


Regards,
Tomas

> 
> 
>> and I got:
>>
>> ============================================================================================
>> root@node-01:~# pcs status
>> Cluster name: cluster-ha-mariadb
>> Stack: corosync
>> Current DC: node-02 (version 2.0.1-9e909a5bdd) - partition with quorum
>> Last updated: Fri Dec 11 11:38:12 2020
>> Last change: Fri Dec 11 11:35:18 2020 by root via cibadmin on node-01
>>
>> 2 nodes configured
>> 3 resources configured
>>
>> Online: [ node-01 node-02 ]
>>
>> Full list of resources:
>>
>> r_vip (ocf::heartbeat:IPaddr2): Started node-01
>> Clone Set: r_galera-clone [r_galera] (promotable)
>>     Masters: [ node-02 ]
>>     Slaves: [ node-01 ]
>>
>> Daemon Status:
>> corosync: active/disabled
>> pacemaker: active/disabled
>> pcsd: active/enabled
>> ============================================================================================
>>
>> Please find attached the cib.xml, the pacemaker logs, the syslog and the mysql logs from the time of the creation of the resource for node-01 and node-02. No mysql logs were generated after the resource creation on node-01.
>>
>> Here is some information about my environment and configuration (apart from IP and hostname, both nodes are identical):
>>
>> ============================================================================================
>> root@node-01:~# cat /etc/debian_version
>> 10.7
>>
>> root@node-01:~# uname -a
>> Linux node-01 4.19.0-13-amd64 #1 SMP Debian 4.19.160-2 (2020-11-28) x86_64 GNU/Linux
>>
>> root@node-01:~# dpkg -l corosync pacemaker pcs pacemaker-cli-utils mariadb-server
>> Desired=Unknown/Install/Remove/Purge/Hold
>> | Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
>> |/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
>> ||/ Name Version Architecture Description
>> +++-===================-===================-============-=====================================================================
>> ii corosync 3.0.1-2+deb10u1 amd64 cluster engine daemon and utilities
>> ii mariadb-server 1:10.3.27-0+deb10u1 all MariaDB database server (metapackage depending on the latest version)
>> ii pacemaker 2.0.1-5+deb10u1 amd64 cluster resource manager
>> ii pacemaker-cli-utils 2.0.1-5+deb10u1 amd64 cluster resource manager command line utilities
>> ii pcs 0.10.1-2 all Pacemaker Configuration System
>>
>> root@node-01:~# cat /etc/mysql/mariadb.conf.d/50-galera.cnf
>> [galera]
>>
>> wsrep_provider = /usr/lib/libgalera_smm.so
>> wsrep_cluster_address = gcomm://192.168.0.1,192.168.0.2
>> #wsrep_cluster_address = dummy://192.168.0.1,192.168.0.2
>> binlog_format = ROW
>> innodb_autoinc_lock_mode = 2
>> innodb_doublewrite = 1
>> wsrep_on = ON
>> default-storage-engine = innodb
>> wsrep_node_address = 192.168.0.1
>> wsrep-debug = 1
>> wsrep_cluster_name="ha-cluster"
>> wsrep_node_name="node-01"
>> wsrep_provider_options='pc.ignore_sb=TRUE;gcs.fc_limit=256;gcs.fc_factor=0.99;gcs.fc_master_slave=YES;'
>>
>> root@node-01:~# grep -Ev '^#.*' /etc/mysql/mariadb.conf.d/50-server.cnf
>>
>> [server]
>>
>> [mysqld]
>>
>> user = mysql
>> pid-file = /run/mysqld/mysqld.pid
>> socket = /run/mysqld/mysqld.sock
>> basedir = /usr
>> datadir = /var/lib/mysql
>> tmpdir = /tmp
>> lc-messages-dir = /usr/share/mysql
>>
>> bind-address = 0.0.0.0
>>
>> query_cache_size = 16M
>>
>> log_error = /var/log/mysql/error.log
>> expire_logs_days = 10
>>
>> character-set-server = utf8mb4
>> collation-server = utf8mb4_general_ci
>> ============================================================================================
>>
>> Thank you,
>>
>> Best regards,
>>
>> Raphaël Laguerre
>>
>>
>> _______________________________________________
>> Manage your subscription:
>> https://lists.clusterlabs.org/mailman/listinfo/users
>>
>> ClusterLabs home: https://www.clusterlabs.org/
>>
> 


