[Pacemaker] CentOS 6 - after update pacemaker floods log with warnings

Andrew Beekhof andrew at beekhof.net
Mon Oct 27 09:42:25 UTC 2014


> On 27 Oct 2014, at 6:42 pm, Andrew <nitr0 at seti.kr.ua> wrote:
> 
> Nobody calls pacemakerd by hand or in a script - maybe this comes from resource monitoring?

Oh, I remember now - pgsql does it for some reason.
There was a thread about it a while back; I forget the details, but there is probably a workaround.
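If you want to confirm which agent is responsible, grepping the installed OCF agents for direct pacemakerd invocations usually narrows it down. A self-contained sketch (the throwaway directory only stands in for /usr/lib/ocf/resource.d/ so the example runs anywhere; on a real node point the grep at the actual agent tree):

```shell
# On a real node: grep -rl 'pacemakerd' /usr/lib/ocf/resource.d/
# Here we build a throwaway OCF-style tree so the example is self-contained.
tmp=$(mktemp -d)
mkdir -p "$tmp/fresh"
printf '#!/bin/sh\npacemakerd --features >/dev/null\n' > "$tmp/fresh/pgsql"

# List every agent file that shells out to pacemakerd
grep -rl 'pacemakerd' "$tmp"

rm -rf "$tmp"
```

Each hit is an agent whose monitor operation forks a fresh pacemakerd, which matches the new PID on every log line below.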

> Logging increased after the update (and the pacemakerd log lines appeared); nothing else was changed in the config.
> 
> I'll try to reboot the nodes (to finish the system update) - maybe that will change something...
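Until the agent is fixed, a syslog-side filter can at least keep /var/log/messages usable. Something like the following legacy-format rsyslog rule should work (untested here, and it only hides the symptom - the agent still forks pacemakerd on every monitor):

```
# /etc/rsyslog.d/pacemaker-flood.conf
# Discard the repeated crm_add_logfile notices (legacy rsyslog property
# filter; the trailing '~' is the discard action). Restart rsyslog after
# dropping this file in place.
:msg, contains, "crm_add_logfile" ~
```

Note this does nothing for /var/log/cluster/corosync.log, which is written by Pacemaker directly rather than via syslog.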
> 
> On 27.10.2014 02:08, Andrew Beekhof wrote:
>> Someone is calling pacemakerd over and over and over.  Don't do that.
>> 
>>> On 26 Oct 2014, at 7:35 am, Andrew <nitr0 at seti.kr.ua> wrote:
>>> 
>>> Hi all.
>>> After upgrading CentOS to current (Pacemaker 1.1.8-7.el6 to 1.1.10-14.el6_5.3), Pacemaker produces tons of logs - nearly 20 GB per day. What could cause this behavior?
>>> 
>>> Running config:
>>> node node2.cluster \
>>>    attributes p_mysql_mysql_master_IP="192.168.253.4" \
>>>    attributes p_pgsql-data-status="STREAMING|SYNC"
>>> node node1.cluster \
>>>    attributes p_mysql_mysql_master_IP="192.168.253.5" \
>>>    attributes p_pgsql-data-status="LATEST"
>>> primitive ClusterIP ocf:heartbeat:IPaddr \
>>>    params ip="192.168.253.254" nic="br0" cidr_netmask="24" \
>>>    op monitor interval="2s" \
>>>    meta target-role="Started"
>>> primitive mysql_reader_vip ocf:heartbeat:IPaddr2 \
>>>    params ip="192.168.253.63" nic="br0" cidr_netmask="24" \
>>>    op monitor interval="10s" \
>>>    meta target-role="Started"
>>> primitive mysql_writer_vip ocf:heartbeat:IPaddr2 \
>>>    params ip="192.168.253.64" nic="br0" cidr_netmask="24" \
>>>    op monitor interval="10s" \
>>>    meta target-role="Started"
>>> primitive p_mysql ocf:percona:mysql \
>>>    params config="/etc/my.cnf" pid="/var/lib/mysql/mysqld.pid" socket="/var/run/mysqld/mysqld.sock" replication_user="***user***" replication_passwd="***passwd***" max_slave_lag="60" evict_outdated_slaves="false" binary="/usr/libexec/mysqld" test_user="***user***" test_passwd="***password***" enable_creation="true" \
>>>    op monitor interval="5s" role="Master" timeout="30s" OCF_CHECK_LEVEL="1" \
>>>    op monitor interval="2s" role="Slave" timeout="30s" OCF_CHECK_LEVEL="1" \
>>>    op start interval="0" timeout="120s" \
>>>    op stop interval="0" timeout="120s"
>>> primitive p_nginx ocf:heartbeat:nginx \
>>>    params configfile="/etc/nginx/nginx.conf" httpd="/usr/sbin/nginx" \
>>>    op start interval="0" timeout="60s" on-fail="restart" \
>>>    op monitor interval="10s" timeout="30s" on-fail="restart" depth="0" \
>>>    op monitor interval="30s" timeout="30s" on-fail="restart" depth="10" \
>>>    op stop interval="0" timeout="120s"
>>> primitive p_perl-fpm ocf:fresh:daemon \
>>>    params binfile="/usr/local/bin/perl-fpm" cmdline_options="-u nginx -g nginx -x 180 -t 16 -d -P /var/run/perl-fpm/perl-fpm.pid" pidfile="/var/run/perl-fpm/perl-fpm.pid" \
>>>    op start interval="0" timeout="30s" \
>>>    op monitor interval="10" timeout="20s" depth="0" \
>>>    op stop interval="0" timeout="30s"
>>> primitive p_pgsql ocf:fresh:pgsql \
>>>    params pgctl="/usr/pgsql-9.1/bin/pg_ctl" psql="/usr/pgsql-9.1/bin/psql" pgdata="/var/lib/pgsql/9.1/data/" start_opt="-p 5432" rep_mode="sync" node_list="node2.cluster node1.cluster" restore_command="cp /var/lib/pgsql/9.1/wal_archive/%f %p" primary_conninfo_opt="keepalives_idle=60 keepalives_interval=5 keepalives_count=5 password=***passwd***" repuser="***user***" master_ip="192.168.253.32" stop_escalate="0" \
>>>    op start interval="0" timeout="120s" on-fail="restart" \
>>>    op monitor interval="7s" timeout="60s" on-fail="restart" \
>>>    op monitor interval="2s" role="Master" timeout="60s" on-fail="restart" \
>>>    op promote interval="0" timeout="120s" on-fail="restart" \
>>>    op demote interval="0" timeout="120s" on-fail="stop" \
>>>    op stop interval="0" timeout="120s" on-fail="block" \
>>>    op notify interval="0" timeout="90s"
>>> primitive p_radius_ip ocf:heartbeat:IPaddr2 \
>>>    params ip="10.255.0.33" nic="lo" cidr_netmask="32" \
>>>    op monitor interval="10s"
>>> primitive p_radiusd ocf:fresh:daemon \
>>>    params binfile="/usr/sbin/radiusd" pidfile="/var/run/radiusd/radiusd.pid" \
>>>    op start interval="0" timeout="30s" \
>>>    op monitor interval="10" timeout="20s" depth="0" \
>>>    op stop interval="0" timeout="30s"
>>> primitive p_web_ip ocf:heartbeat:IPaddr2 \
>>>    params ip="10.255.0.32" nic="lo" cidr_netmask="32" \
>>>    op monitor interval="10s"
>>> primitive pgsql_reader_vip ocf:heartbeat:IPaddr2 \
>>>    params ip="192.168.253.31" nic="br0" cidr_netmask="24" \
>>>    meta resource-stickiness="1" \
>>>    op start interval="0" timeout="60s" on-fail="restart" \
>>>    op monitor interval="10s" timeout="60s" on-fail="restart" \
>>>    op stop interval="0" timeout="60s" on-fail="block"
>>> primitive pgsql_writer_vip ocf:heartbeat:IPaddr2 \
>>>    params ip="192.168.253.32" nic="br0" cidr_netmask="24" \
>>>    meta migration-threshold="0" \
>>>    op start interval="0" timeout="60s" on-fail="restart" \
>>>    op monitor interval="10s" timeout="60s" on-fail="restart" \
>>>    op stop interval="0" timeout="60s" on-fail="block"
>>> group gr_http p_nginx p_perl-fpm
>>> ms ms_MySQL p_mysql \
>>>    meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true" globally-unique="false" target-role="Started"
>>> ms ms_Postgresql p_pgsql \
>>>    meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true" target-role="Started"
>>> clone cl_http gr_http \
>>>    meta clone-max="2" clone-node-max="1" target-role="Started"
>>> clone cl_radiusd p_radiusd \
>>>    meta clone-max="2" clone-node-max="1" target-role="Started"
>>> location loc-mysql-no-reader-vip mysql_reader_vip \
>>>    rule $id="loc-mysql-no-reader-vip-rule" -inf: readable eq 0 \
>>>    rule $id="loc-mysql-no-reader-vip-rule-0" -inf: not_defined readable
>>> location mysql_master_location ms_MySQL \
>>>    rule $id="mysql_master_location-rule" $role="master" 110: #uname eq node2.cluster \
>>>    rule $id="mysql_master_location-rule-0" $role="master" 100: #uname eq node1.cluster \
>>>    rule $id="mysql_master_location-rule-1" $role="master" -inf: defined fail-count-mysql_master_vip
>>> location pgsql_master_location ms_Postgresql \
>>>    rule $id="pgsql_master_location-rule" $role="master" 110: #uname eq node2.cluster \
>>>    rule $id="pgsql_master_location-rule-0" $role="master" 100: #uname eq node1.cluster \
>>>    rule $id="pgsql_master_location-rule-1" $role="master" -inf: defined fail-count-pgsql_master_vip
>>> location pgsql_reader_vip_location pgsql_reader_vip \
>>>    rule $id="pgsql_reader_vip_location-rule" 200: p_pgsql-status eq HS:sync \
>>>    rule $id="pgsql_reader_vip_location-rule-0" 100: p_pgsql-status eq PRI \
>>>    rule $id="pgsql_reader_vip_location-rule-1" -inf: not_defined p_pgsql-status \
>>>    rule $id="pgsql_reader_vip_location-rule-2" -inf: p_pgsql-status ne HS:sync and p_pgsql-status ne PRI
>>> colocation mysql_reader_vip_on_slave 500: mysql_reader_vip ms_MySQL:Slave
>>> colocation mysql_writer_vip_on_master inf: mysql_writer_vip ms_MySQL:Master
>>> colocation radius_ip_not_with_mysql_master -200: p_radius_ip ms_MySQL:Master
>>> colocation radius_ip_on_clone 500: p_radius_ip cl_radiusd
>>> colocation rsc_colocation-2 inf: pgsql_writer_vip ms_Postgresql:Master
>>> colocation web_ip_not_with_mysql_master -200: p_web_ip ms_MySQL:Master
>>> colocation web_ip_on_clone 500: p_web_ip cl_http
>>> order ms_MySQL_demote_before_vip inf: ms_MySQL:demote mysql_writer_vip:stop symmetrical=false
>>> order ms_MySQL_promote_before_vip inf: ms_MySQL:promote mysql_writer_vip:start symmetrical=false
>>> order ms_Postgresql_demote_before_vip 0: ms_Postgresql:demote pgsql_writer_vip:stop symmetrical=false
>>> order ms_Postgresql_promote_before_vip 0: ms_Postgresql:promote pgsql_writer_vip:start symmetrical=false
>>> property $id="cib-bootstrap-options" \
>>>    dc-version="1.1.10-14.el6_5.3-368c726" \
>>>    cluster-infrastructure="classic openais (with plugin)" \
>>>    expected-quorum-votes="2" \
>>>    no-quorum-policy="ignore" \
>>>    stonith-enabled="false" \
>>>    last-lrm-refresh="1414189544"
>>> property $id="mysql_replication" \
>>>    p_mysql_REPL_INFO="192.168.253.5|mysqld-bin.000472|106"
>>> rsc_defaults $id="rsc-options" \
>>>    resource-stickiness="100"
>>> 
>>> 
>>> /var/log/messages:
>>> 
>>> Oct 25 23:02:07 node2 pacemakerd[19345]:   notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
>>> Oct 25 23:02:14 node2 pacemakerd[19982]:   notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
>>> Oct 25 23:02:22 node2 pacemakerd[20443]:   notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
>>> Oct 25 23:02:29 node2 pacemakerd[21097]:   notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
>>> Oct 25 23:02:36 node2 pacemakerd[21580]:   notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
>>> Oct 25 23:02:43 node2 pacemakerd[22266]:   notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
>>> Oct 25 23:02:50 node2 pacemakerd[22727]:   notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
>>> Oct 25 23:02:57 node2 pacemakerd[23264]:   notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
>>> Oct 25 23:02:58 node2 attrd[14925]:   notice: attrd_trigger_update: Sending flush op to all hosts for: master-p_mysql (58)
>>> Oct 25 23:02:58 node2 attrd[14925]:   notice: attrd_perform_update: Sent update 1083: master-p_mysql=58
>>> Oct 25 23:03:00 node2 attrd[14925]:   notice: attrd_trigger_update: Sending flush op to all hosts for: master-p_mysql (60)
>>> Oct 25 23:03:00 node2 attrd[14925]:   notice: attrd_perform_update: Sent update 1085: master-p_mysql=60
>>> Oct 25 23:03:04 node2 pacemakerd[23868]:   notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
>>> Oct 25 23:03:12 node2 pacemakerd[24425]:   notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
>>> Oct 25 23:03:19 node2 pacemakerd[25019]:   notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
>>> Oct 25 23:03:26 node2 pacemakerd[25502]:   notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
>>> Oct 25 23:03:33 node2 pacemakerd[26149]:   notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
>>> Oct 25 23:03:40 node2 pacemakerd[26703]:   notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
>>> Oct 25 23:03:47 node2 pacemakerd[27240]:   notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
>>> Oct 25 23:03:54 node2 pacemakerd[27779]:   notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
>>> Oct 25 23:04:02 node2 pacemakerd[28294]:   notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
>>> Oct 25 23:04:09 node2 pacemakerd[28981]:   notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
>>> Oct 25 23:04:16 node2 pacemakerd[29467]:   notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
>>> Oct 25 23:04:23 node2 pacemakerd[29928]:   notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
>>> Oct 25 23:04:30 node2 pacemakerd[30575]:   notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
>>> Oct 25 23:04:37 node2 pacemakerd[31127]:   notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
>>> Oct 25 23:04:44 node2 pacemakerd[31744]:   notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
>>> Oct 25 23:04:52 node2 pacemakerd[32205]:   notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
>>> Oct 25 23:04:59 node2 pacemakerd[388]:   notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
>>> Oct 25 23:05:06 node2 pacemakerd[921]:   notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
>>> Oct 25 23:05:13 node2 pacemakerd[1487]:   notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
>>> Oct 25 23:05:20 node2 pacemakerd[2108]:   notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
>>> Oct 25 23:05:27 node2 pacemakerd[2591]:   notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
>>> Oct 25 23:05:34 node2 pacemakerd[3243]:   notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
>>> Oct 25 23:05:42 node2 pacemakerd[3797]:   notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
>>> Oct 25 23:05:49 node2 pacemakerd[4335]:   notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
>>> Oct 25 23:05:56 node2 pacemakerd[4874]:   notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
>>> Oct 25 23:06:03 node2 pacemakerd[5389]:   notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
>>> Oct 25 23:06:10 node2 pacemakerd[6077]:   notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
>>> Oct 25 23:06:17 node2 pacemakerd[6560]:   notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
>>> Oct 25 23:06:24 node2 pacemakerd[7172]:   notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
>>> Oct 25 23:06:32 node2 pacemakerd[7698]:   notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
>>> Oct 25 23:06:39 node2 pacemakerd[8328]:   notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
>>> Oct 25 23:06:46 node2 pacemakerd[8869]:   notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
>>> Oct 25 23:06:53 node2 pacemakerd[9330]:   notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
>>> 
>>> /var/log/cluster/corosync.log:
>>> 
>>> Oct 25 23:02:07 [19345] node2 pacemakerd:     info: crm_log_init:     Changed active directory to /var/lib/pacemaker/cores/root
>>> Oct 25 23:02:07 [19345] node2 pacemakerd:     info: crm_xml_cleanup:     Cleaning up memory from libxml2
>>> Oct 25 23:02:07 [14922] node2        cib:     info: crm_client_new:     Connecting 0x16ace20 for uid=0 gid=0 pid=19384 id=39f57ce0-e43b-4d09-849c-a9a7eb0ca3be
>>> Oct 25 23:02:07 [14922] node2        cib:     info: cib_process_request:     Completed cib_query operation for section 'all': OK (rc=0, origin=local/crm_resource/2, version=0.332.131)
>>> Oct 25 23:02:07 [14922] node2        cib:     info: crm_client_destroy:     Destroying 0 events
>>> Oct 25 23:02:07 [14922] node2        cib:     info: crm_client_new:     Connecting 0x16ace20 for uid=0 gid=0 pid=19410 id=48e1362e-30c9-461d-936b-564f2e79fc8e
>>> Oct 25 23:02:07 [14922] node2        cib:     info: cib_process_request:     Completed cib_query operation for section 'all': OK (rc=0, origin=local/crm_resource/2, version=0.332.131)
>>> Oct 25 23:02:07 [14922] node2        cib:     info: crm_client_destroy:     Destroying 0 events
>>> Oct 25 23:02:07 [14922] node2        cib:     info: crm_client_new:     Connecting 0x16ace20 for uid=0 gid=0 pid=19492 id=37aacb63-1247-464a-aab9-0ebefa65dcc0
>>> Oct 25 23:02:07 [14922] node2        cib:     info: cib_process_request:     Completed cib_query operation for section 'all': OK (rc=0, origin=local/crm_mon/2, version=0.332.131)
>>> Oct 25 23:02:07 [14922] node2        cib:     info: crm_client_destroy:     Destroying 0 events
>>> Oct 25 23:02:07 [14922] node2        cib:     info: crm_client_new:     Connecting 0x16ace20 for uid=0 gid=0 pid=19513 id=5c08f731-f255-459c-b781-b6b1e1b22c36
>>> Oct 25 23:02:07 [14922] node2        cib:     info: cib_process_request:     Completed cib_query operation for section nodes: OK (rc=0, origin=local/crm_attribute/2, version=0.332.131)
>>> Oct 25 23:02:07 [14922] node2        cib:     info: crm_client_new:     Connecting 0x18bc9f0 for uid=0 gid=0 pid=19514 id=78ae2c2b-dbb7-411a-9606-d5fb7d51fc0c
>>> Oct 25 23:02:07 [14922] node2        cib:     info: cib_process_request:     Completed cib_query operation for section //cib/configuration/nodes//node[@id='node2']//instance_attributes//nvpair[@name='p_pgsql-data-status']: OK (rc=0, origin=local/crm_attribute/3, version=0.332.131)
>>> Oct 25 23:02:07 [14922] node2        cib:     info: crm_client_destroy:     Destroying 0 events
>>> Oct 25 23:02:07 [14922] node2        cib:     info: cib_process_request:     Completed cib_query operation for section nodes: OK (rc=0, origin=local/crm_attribute/2, version=0.332.131)
>>> Oct 25 23:02:07 [14922] node2        cib:     info: crm_client_destroy:     Destroying 0 events
>>> Oct 25 23:02:07 [14922] node2        cib:     info: crm_client_new:     Connecting 0x16ace20 for uid=0 gid=0 pid=19517 id=107440db-ecd1-4121-921e-5cf425f2fc68
>>> Oct 25 23:02:07 [14922] node2        cib:     info: cib_process_request:     Completed cib_query operation for section nodes: OK (rc=0, origin=local/crm_attribute/2, version=0.332.131)
>>> Oct 25 23:02:07 [14922] node2        cib:     info: cib_process_request:     Completed cib_query operation for section //cib/status//node_state[@id='node2']//transient_attributes//nvpair[@name='readable']: OK (rc=0, origin=local/crm_attribute/3, version=0.332.131)
>>> Oct 25 23:02:07 [14922] node2        cib:     info: crm_client_destroy:     Destroying 0 events
>>> Oct 25 23:02:07 [14922] node2        cib:     info: crm_client_new:     Connecting 0x16ace20 for uid=0 gid=0 pid=19519 id=d41cfbd7-5a80-4b96-aedb-c0315ca5e1b3
>>> Oct 25 23:02:07 [14922] node2        cib:     info: cib_process_request:     Completed cib_query operation for section nodes: OK (rc=0, origin=local/crm_attribute/2, version=0.332.131)
>>> Oct 25 23:02:07 [14922] node2        cib:     info: cib_process_request:     Completed cib_query operation for section //cib/configuration/crm_config//cluster_property_set[@id='mysql_replication']//nvpair[@name='p_mysql_REPL_INFO']: OK (rc=0, origin=local/crm_attribute/3, version=0.332.131)
>>> Oct 25 23:02:07 [14922] node2        cib:     info: crm_client_destroy:     Destroying 0 events
>>> Oct 25 23:02:10 [14922] node2        cib:     info: crm_client_new:     Connecting 0x16ace20 for uid=0 gid=0 pid=19582 id=af2faf32-f2d5-4640-89e4-fbd0ae4257d7
>>> Oct 25 23:02:10 [14922] node2        cib:     info: cib_process_request:     Completed cib_query operation for section 'all': OK (rc=0, origin=local/crm_resource/2, version=0.332.131)
>>> Oct 25 23:02:10 [14922] node2        cib:     info: crm_client_destroy:     Destroying 0 events
>>> Oct 25 23:02:10 [14922] node2        cib:     info: crm_client_new:     Connecting 0x16ace20 for uid=0 gid=0 pid=19587 id=59a979fe-4c2d-4659-8a54-951055559088
>>> Oct 25 23:02:10 [14922] node2        cib:     info: cib_process_request:     Completed cib_query operation for section 'all': OK (rc=0, origin=local/crm_resource/2, version=0.332.131)
>>> Oct 25 23:02:10 [14922] node2        cib:     info: crm_client_destroy:     Destroying 0 events
>>> Oct 25 23:02:10 [14922] node2        cib:     info: crm_client_new:     Connecting 0x16ace20 for uid=0 gid=0 pid=19617 id=91befa69-b2fa-4a61-93b2-732d289cd22c
>>> Oct 25 23:02:10 [14922] node2        cib:     info: cib_process_request:     Completed cib_query operation for section nodes: OK (rc=0, origin=local/crm_attribute/2, version=0.332.131)
>>> Oct 25 23:02:10 [14922] node2        cib:     info: crm_client_destroy:     Destroying 0 events
>>> Oct 25 23:02:10 [14922] node2        cib:     info: crm_client_new:     Connecting 0x16ace20 for uid=0 gid=0 pid=19620 id=c37c6b07-c02d-4abb-a3c6-0dc8a33d4e85
>>> Oct 25 23:02:10 [14922] node2        cib:     info: cib_process_request:     Completed cib_query operation for section nodes: OK (rc=0, origin=local/crm_attribute/2, version=0.332.131)
>>> Oct 25 23:02:10 [14922] node2        cib:     info: cib_process_request:     Completed cib_query operation for section //cib/status//node_state[@id='node2']//transient_attributes//nvpair[@name='readable']: OK (rc=0, origin=local/crm_attribute/3, version=0.332.131)
>>> Oct 25 23:02:10 [14922] node2        cib:     info: crm_client_destroy:     Destroying 0 events
>>> Oct 25 23:02:10 [14922] node2        cib:     info: crm_client_new:     Connecting 0x16ace20 for uid=0 gid=0 pid=19622 id=2d924d1e-4ae9-4a6c-a13a-150fa1016ebf
>>> Oct 25 23:02:10 [14922] node2        cib:     info: cib_process_request:     Completed cib_query operation for section nodes: OK (rc=0, origin=local/crm_attribute/2, version=0.332.131)
>>> Oct 25 23:02:10 [14922] node2        cib:     info: cib_process_request:     Completed cib_query operation for section //cib/configuration/crm_config//cluster_property_set[@id='mysql_replication']//nvpair[@name='p_mysql_REPL_INFO']: OK (rc=0, origin=local/crm_attribute/3, version=0.332.131)
>>> Oct 25 23:02:10 [14922] node2        cib:     info: crm_client_destroy:     Destroying 0 events
>>> Oct 25 23:02:12 [14922] node2        cib:     info: crm_client_new:     Connecting 0x16ace20 for uid=0 gid=0 pid=19675 id=5ee58cdd-867a-4d1e-b311-8ae31b1c905b
>>> Oct 25 23:02:12 [14922] node2        cib:     info: cib_process_request:     Completed cib_query operation for section 'all': OK (rc=0, origin=local/crm_resource/2, version=0.332.131)
>>> Oct 25 23:02:12 [14922] node2        cib:     info: crm_client_destroy:     Destroying 0 events
>>> Oct 25 23:02:12 [14922] node2        cib:     info: crm_client_new:     Connecting 0x16ace20 for uid=0 gid=0 pid=19680 id=1a678060-11a0-4716-bb6d-93b5b80061e4
>>> Oct 25 23:02:12 [14922] node2        cib:     info: cib_process_request:     Completed cib_query operation for section 'all': OK (rc=0, origin=local/crm_resource/2, version=0.332.131)
>>> Oct 25 23:02:12 [14922] node2        cib:     info: crm_client_destroy:     Destroying 0 events
>>> Oct 25 23:02:12 [14922] node2        cib:     info: crm_client_new:     Connecting 0x16ace20 for uid=0 gid=0 pid=19710 id=0fc04bb5-cd12-4d4c-a83e-fb75ef87f5b6
>>> Oct 25 23:02:12 [14922] node2        cib:     info: cib_process_request:     Completed cib_query operation for section nodes: OK (rc=0, origin=local/crm_attribute/2, version=0.332.131)
>>> Oct 25 23:02:12 [14922] node2        cib:     info: crm_client_destroy:     Destroying 0 events
>>> Oct 25 23:02:12 [14922] node2        cib:     info: crm_client_new:     Connecting 0x16ace20 for uid=0 gid=0 pid=19713 id=5c53cb57-e6b6-4aa0-b909-4891c1c23505
>>> Oct 25 23:02:12 [14922] node2        cib:     info: cib_process_request:     Completed cib_query operation for section nodes: OK (rc=0, origin=local/crm_attribute/2, version=0.332.131)
>>> Oct 25 23:02:12 [14922] node2        cib:     info: cib_process_request:     Completed cib_query operation for section //cib/status//node_state[@id='node2']//transient_attributes//nvpair[@name='readable']: OK (rc=0, origin=local/crm_attribute/3, version=0.332.131)
>>> Oct 25 23:02:12 [14922] node2        cib:     info: crm_client_destroy:     Destroying 0 events
>>> Oct 25 23:02:12 [14922] node2        cib:     info: crm_client_new:     Connecting 0x16ace20 for uid=0 gid=0 pid=19715 id=18f32dd5-4ff4-4f0e-af9e-6872128f8932
>>> Oct 25 23:02:12 [14922] node2        cib:     info: cib_process_request:     Completed cib_query operation for section nodes: OK (rc=0, origin=local/crm_attribute/2, version=0.332.131)
>>> Oct 25 23:02:12 [14922] node2        cib:     info: cib_process_request:     Completed cib_query operation for section //cib/configuration/crm_config//cluster_property_set[@id='mysql_replication']//nvpair[@name='p_mysql_REPL_INFO']: OK (rc=0, origin=local/crm_attribute/3, version=0.332.131)
>>> Oct 25 23:02:12 [14922] node2        cib:     info: crm_client_destroy:     Destroying 0 events
>>> ........
>>> 
>>> _______________________________________________
>>> Pacemaker mailing list: Pacemaker at oss.clusterlabs.org
>>> http://oss.clusterlabs.org/mailman/listinfo/pacemaker
>>> 
>>> Project Home: http://www.clusterlabs.org
>>> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
>>> Bugs: http://bugs.clusterlabs.org
>> 
> 
> 
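For what it's worth, the /var/log/messages excerpt shows a fresh pacemakerd PID roughly every 7 seconds. A quick way to measure the rate from the full log (self-contained sketch using three lines from the excerpt above; feed it the real file instead of the here-doc):

```shell
# Count pacemakerd start notices per minute; $3 is the HH:MM:SS field of
# the default syslog timestamp. Output order is unspecified, so sort it
# if you need chronological buckets.
awk '/ pacemakerd\[[0-9]+\]:/ { split($3, t, ":"); n[t[1] ":" t[2]]++ }
     END { for (m in n) print m, n[m] }' <<'EOF'
Oct 25 23:02:07 node2 pacemakerd[19345]:   notice: crm_add_logfile: ...
Oct 25 23:02:14 node2 pacemakerd[19982]:   notice: crm_add_logfile: ...
Oct 25 23:03:04 node2 pacemakerd[23868]:   notice: crm_add_logfile: ...
EOF
```

At ~8 launches per minute, each dragging a burst of cib/crm_attribute traffic into corosync.log, the 20 GB/day figure is plausible.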




