[ClusterLabs] resources in an unmanaged status
Ken Gaillot
kgaillot at redhat.com
Fri Nov 9 10:30:19 EST 2018
On Fri, 2018-11-09 at 14:37 +0100, Stefan K wrote:
> Hello,
>
> I've the following setup:
>
> crm conf sh
> node 1: zfs-serv3 \
> attributes
> node 2: zfs-serv4 \
> attributes maintenance=on
> primitive ha-ip IPaddr2 \
> params ip=192.168.2.10 cidr_netmask=24 nic=bond0 \
> op start interval=0s timeout=20s \
> op stop interval=0s timeout=20s \
> op monitor interval=10s timeout=20s \
> meta target-role=Started
> primitive iscsi-lun00 iSCSILogicalUnit \
> params implementation=lio-t target_iqn="iqn.2003-
> 01.org.linux-iscsi.vm-storage.x8664:sn.cf6fa665ec23" lun=0
> lio_iblock=0 path="/dev/zvol/vm_storage/zfs-vol1"
> primitive iscsi-lun01 iSCSILogicalUnit \
> params implementation=lio-t target_iqn="iqn.2003-
> 01.org.linux-iscsi.vm-storage.x8664:sn.cf6fa665ec23" lun=1
> lio_iblock=1 path="/dev/zvol/vm_storage/zfs-vol2"
> primitive iscsi-lun02 iSCSILogicalUnit \
> params implementation=lio-t target_iqn="iqn.2003-
> 01.org.linux-iscsi.vm-storage.x8664:sn.cf6fa665ec23" lun=2
> lio_iblock=2 path="/dev/zvol/vm_storage/zfs-vol3"
> primitive iscsi-server iSCSITarget \
> params implementation=lio-t iqn="iqn.2003-01.org.linux-
> iscsi.vm-storage.x8664:sn.cf6fa665ec23" portals="192.168.2.10:3260"
> allowed_initiators="iqn.1998-01.com.vmware:brainslug9-75488000
> iqn.1998-01.com.vmware:brainslug8-05897000 iqn.1998-
> 01.com.vmware:brainslug7-592b0000 iqn.1998-01.com.vmware:brainslug10-
> 5564c000" \
> meta
> primitive resIPMI-zfs3 stonith:external/ipmi \
> params hostname=zfs-serv3 ipaddr=172.xx.xx.xx userid=user
> passwd=pw interface=lan priv=OPERATOR pcmk_delay_max=20 \
> op monitor interval=60s \
> meta
> primitive resIPMI-zfs4 stonith:external/ipmi \
> params hostname=zfs-serv4 ipaddr=172.xx.xx.xx userid=user
> passwd=pw interface=lan priv=OPERATOR pcmk_delay_max=20 \
> op monitor interval=60s \
> meta
> primitive vm_storage ZFS \
> params pool=vm_storage importargs="-d /dev/disk/by-vdev/" \
> op monitor interval=5s timeout=30s \
> op start interval=0s timeout=90 \
> op stop interval=0s timeout=90 \
> meta target-role=Started
> location location-resIPMI-zfs3-zfs-serv3--INFINITY resIPMI-zfs3 -inf:
> zfs-serv3
> location location-resIPMI-zfs4-zfs-serv4--INFINITY resIPMI-zfs4 -inf:
> zfs-serv4
> colocation pcs_rsc_colocation_set_ha-ip_vm_storage_iscsi-server inf:
> ha-ip vm_storage iscsi-server iscsi-lun00 iscsi-lun01 iscsi-lun02
> order pcs_rsc_order_set_ha-ip_iscsi-server_vm_storage ha-ip:stop
> iscsi-lun00:stop iscsi-lun01:stop iscsi-lun02:stop iscsi-server:stop
> vm_storage:stop symmetrical=false
> order pcs_rsc_order_set_iscsi-server_vm_storage_ha-ip
> vm_storage:start iscsi-server:start iscsi-lun00:start iscsi-
> lun01:start iscsi-lun02:start ha-ip:start symmetrical=false
> property cib-bootstrap-options: \
> have-watchdog=false \
> dc-version=1.1.16-94ff4df \
> cluster-infrastructure=corosync \
> cluster-name=zfs-vmstorage \
> no-quorum-policy=stop \
> stonith-enabled=true \
> last-lrm-refresh=1541768433
> rsc_defaults rsc_defaults-options: \
> resource-stickiness=100
>
>
> If I put a node into maintenance, the resources become unmanaged. If
> I shut down a node, the resources migrate correctly. Can somebody
> please tell me what is wrong here? Here are the logs from the node
> that I set to maintenance:
Funny you mention this. I just recently had to investigate how single-
node maintenance mode works, and started documenting it in Pacemaker
Explained for the next release. I'm including this warning:
"Restarting pacemaker on a node that is in single-node maintenance mode
will likely lead to undesirable effects. If maintenance is set as a
transient attribute, it will be erased when pacemaker is stopped, which
will immediately take the node out of maintenance mode and likely get
it fenced. Even if permanent, if pacemaker is restarted, any resources
active on the node will have their local history erased when the node
rejoins, so the cluster will no longer consider them running on the
node and thus will consider them managed again, leading them to be
started elsewhere. This behavior might be improved in a future
release."
I didn't see any resource migration with just stopping pacemaker on the
maintenance node, though I wouldn't be surprised if there are issues.
My feeling at this point is that pacemaker/corosync should not be
stopped on a node that is in single-node maintenance mode; that mode
should only be used to perform maintenance on the services running
there while the cluster software stays up. Cluster-wide maintenance
mode is the preferred route when the cluster software itself needs to
be stopped.
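Something along these lines before the cluster software is stopped
(crmsh syntax as a sketch; pcs has an equivalent property command):

    # put the entire cluster into maintenance first
    crm configure property maintenance-mode=true

    # ... stop pacemaker/corosync, do the maintenance, start them again ...

    # then hand control of the resources back to the cluster
    crm configure property maintenance-mode=false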
FYI in your logs here, I don't see any resources being moved. The
recurring monitors are cancelled, but that is intentional in
maintenance mode.
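When the maintenance work is done, clear the node attribute rather
than restarting pacemaker on that node; with crmsh that would be
something like (double-check against your 1.1.16-era tools):

    # take zfs-serv4 out of single-node maintenance again
    crm node ready zfs-serv4

    # or clear the attribute directly
    crm_attribute --node zfs-serv4 --name maintenance --update off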
>
> Nov 09 14:23:08 [30104] zfs-serv4 crmd: info:
> crm_timer_popped: PEngine Recheck Timer (I_PE_CALC) just popped
> (900000ms)
> Nov 09 14:23:08 [30104] zfs-serv4 crmd: notice:
> do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE |
> input=I_PE_CALC cause=C_TIMER_POPPED origin=crm_timer_popped
> Nov 09 14:23:08 [30104] zfs-serv4 crmd: info:
> do_state_transition: Progressed to state S_POLICY_ENGINE after
> C_TIMER_POPPED
> Nov 09 14:23:08 [30103] zfs-serv4 pengine: info:
> process_pe_message: Input has not changed since last time, not
> saving to disk
> Nov 09 14:23:08 [30103] zfs-serv4 pengine: info:
> determine_online_status_fencing: Node zfs-serv4 is active
> Nov 09 14:23:08 [30103] zfs-serv4 pengine: info:
> determine_online_status: Node zfs-serv4 is online
> Nov 09 14:23:08 [30103] zfs-serv4 pengine: info:
> determine_online_status_fencing: Node zfs-serv3 is active
> Nov 09 14:23:08 [30103] zfs-serv4 pengine: info:
> determine_online_status: Node zfs-serv3 is online
> Nov 09 14:23:08 [30103] zfs-serv4 pengine: info: native_print:
> ha-ip (ocf::heartbeat:IPaddr2): Started zfs-serv4
> Nov 09 14:23:08 [30103] zfs-serv4 pengine: info: native_print:
> resIPMI-zfs4 (stonith:external/ipmi): Started zfs-serv3
> Nov 09 14:23:08 [30103] zfs-serv4 pengine: info: native_print:
> resIPMI-zfs3 (stonith:external/ipmi): Started zfs-serv4
> Nov 09 14:23:08 [30103] zfs-serv4 pengine: info: native_print:
> vm_storage (ocf::heartbeat:ZFS): Started zfs-serv4
> Nov 09 14:23:08 [30103] zfs-serv4 pengine: info: native_print:
> iscsi-server (ocf::heartbeat:iSCSITarget): Started zfs-serv4
> Nov 09 14:23:08 [30103] zfs-serv4 pengine: info: native_print:
> iscsi-lun00 (ocf::heartbeat:iSCSILogicalUnit): Started
> zfs-serv4
> Nov 09 14:23:08 [30103] zfs-serv4 pengine: info: native_print:
> iscsi-lun01 (ocf::heartbeat:iSCSILogicalUnit): Started
> zfs-serv4
> Nov 09 14:23:08 [30103] zfs-serv4 pengine: info: native_print:
> iscsi-lun02 (ocf::heartbeat:iSCSILogicalUnit): Started
> zfs-serv4
> Nov 09 14:23:08 [30103] zfs-serv4 pengine: info: LogActions:
> Leave ha-ip (Started zfs-serv4)
> Nov 09 14:23:08 [30103] zfs-serv4 pengine: info: LogActions:
> Leave resIPMI-zfs4 (Started zfs-serv3)
> Nov 09 14:23:08 [30103] zfs-serv4 pengine: info: LogActions:
> Leave resIPMI-zfs3 (Started zfs-serv4)
> Nov 09 14:23:08 [30103] zfs-serv4 pengine: info: LogActions:
> Leave vm_storage (Started zfs-serv4)
> Nov 09 14:23:08 [30103] zfs-serv4 pengine: info: LogActions:
> Leave iscsi-server (Started zfs-serv4)
> Nov 09 14:23:08 [30103] zfs-serv4 pengine: info: LogActions:
> Leave iscsi-lun00 (Started zfs-serv4)
> Nov 09 14:23:08 [30103] zfs-serv4 pengine: info: LogActions:
> Leave iscsi-lun01 (Started zfs-serv4)
> Nov 09 14:23:08 [30103] zfs-serv4 pengine: info: LogActions:
> Leave iscsi-lun02 (Started zfs-serv4)
> Nov 09 14:23:08 [30103] zfs-serv4 pengine: notice:
> process_pe_message: Calculated transition 59, saving inputs in
> /var/lib/pacemaker/pengine/pe-input-335.bz2
> Nov 09 14:23:08 [30104] zfs-serv4 crmd: info:
> do_state_transition: State transition S_POLICY_ENGINE ->
> S_TRANSITION_ENGINE | input=I_PE_SUCCESS cause=C_IPC_MESSAGE
> origin=handle_response
> Nov 09 14:23:08 [30104] zfs-serv4 crmd: info: do_te_invoke:
> Processing graph 59 (ref=pe_calc-dc-1541769788-220) derived from
> /var/lib/pacemaker/pengine/pe-input-335.bz2
> Nov 09 14:23:08 [30104] zfs-serv4 crmd: notice: run_graph:
> Transition 59 (Complete=0, Pending=0, Fired=0, Skipped=0,
> Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-335.bz2):
> Complete
> Nov 09 14:23:08 [30104] zfs-serv4 crmd: info: do_log: Input
> I_TE_SUCCESS received in state S_TRANSITION_ENGINE from notify_crmd
> Nov 09 14:23:08 [30104] zfs-serv4 crmd: notice:
> do_state_transition: State transition S_TRANSITION_ENGINE ->
> S_IDLE | input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd
> Nov 09 14:23:36 [30099] zfs-serv4 cib: info:
> cib_process_request: Forwarding cib_modify operation for section
> nodes to all (origin=local/crm_attribute/4)
> Nov 09 14:23:36 [30099] zfs-serv4 cib: info:
> cib_perform_op: Diff: --- 0.270.0 2
> Nov 09 14:23:36 [30099] zfs-serv4 cib: info:
> cib_perform_op: Diff: +++ 0.271.0 34fd566a02387aeadd17968e21ef9079
> Nov 09 14:23:36 [30099] zfs-serv4 cib: info:
> cib_perform_op: + /cib: @epoch=271
> Nov 09 14:23:36 [30099] zfs-serv4 cib: info:
> cib_perform_op: ++
> /cib/configuration/nodes/node[@id='2']/instance_attributes[@id='nodes
> -2']: <nvpair id="nodes-2-maintenance" name="maintenance"
> value="on"/>
> Nov 09 14:23:36 [30099] zfs-serv4 cib: info:
> cib_process_request: Completed cib_modify operation for section
> nodes: OK (rc=0, origin=zfs-serv4/crm_attribute/4, version=0.271.0)
> Nov 09 14:23:36 [30104] zfs-serv4 crmd: info:
> abort_transition_graph: Transition aborted by nodes-2-maintenance
> doing create maintenance=on: Configuration change | cib=0.271.0
> source=te_update_diff:444
> path=/cib/configuration/nodes/node[@id='2']/instance_attributes[@id='
> nodes-2'] complete=true
> Nov 09 14:23:36 [30104] zfs-serv4 crmd: notice:
> do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE |
> input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph
> Nov 09 14:23:36 [30103] zfs-serv4 pengine: info:
> unpack_status: Node zfs-serv4 is in maintenance-mode
> Nov 09 14:23:36 [30103] zfs-serv4 pengine: info:
> determine_online_status_fencing: Node zfs-serv4 is active
> Nov 09 14:23:36 [30103] zfs-serv4 pengine: info:
> determine_online_status: Node zfs-serv4 is maintenance
> Nov 09 14:23:36 [30103] zfs-serv4 pengine: info:
> determine_online_status_fencing: Node zfs-serv3 is active
> Nov 09 14:23:36 [30103] zfs-serv4 pengine: info:
> determine_online_status: Node zfs-serv3 is online
> Nov 09 14:23:36 [30103] zfs-serv4 pengine: info:
> native_add_running: resource iscsi-lun00 isn't managed
> Nov 09 14:23:36 [30103] zfs-serv4 pengine: info:
> native_add_running: resource iscsi-lun01 isn't managed
> Nov 09 14:23:36 [30103] zfs-serv4 pengine: info:
> native_add_running: resource iscsi-lun02 isn't managed
> Nov 09 14:23:36 [30103] zfs-serv4 pengine: info:
> native_add_running: resource ha-ip isn't managed
> Nov 09 14:23:36 [30103] zfs-serv4 pengine: info:
> native_add_running: resource resIPMI-zfs3 isn't managed
> Nov 09 14:23:36 [30103] zfs-serv4 pengine: info:
> native_add_running: resource iscsi-server isn't managed
> Nov 09 14:23:36 [30103] zfs-serv4 pengine: info:
> native_add_running: resource vm_storage isn't managed
> Nov 09 14:23:36 [30103] zfs-serv4 pengine: info: native_print:
> ha-ip (ocf::heartbeat:IPaddr2): Started zfs-serv4
> (unmanaged)
> Nov 09 14:23:36 [30103] zfs-serv4 pengine: info: native_print:
> resIPMI-zfs4 (stonith:external/ipmi): Started zfs-serv3
> Nov 09 14:23:36 [30103] zfs-serv4 pengine: info: native_print:
> resIPMI-zfs3 (stonith:external/ipmi): Started zfs-serv4
> (unmanaged)
> Nov 09 14:23:36 [30103] zfs-serv4 pengine: info: native_print:
> vm_storage (ocf::heartbeat:ZFS): Started zfs-serv4
> (unmanaged)
> Nov 09 14:23:36 [30103] zfs-serv4 pengine: info: native_print:
> iscsi-server (ocf::heartbeat:iSCSITarget): Started zfs-serv4
> (unmanaged)
> Nov 09 14:23:36 [30103] zfs-serv4 pengine: info: native_print:
> iscsi-lun00 (ocf::heartbeat:iSCSILogicalUnit): Started
> zfs-serv4 (unmanaged)
> Nov 09 14:23:36 [30103] zfs-serv4 pengine: info: native_print:
> iscsi-lun01 (ocf::heartbeat:iSCSILogicalUnit): Started
> zfs-serv4 (unmanaged)
> Nov 09 14:23:36 [30103] zfs-serv4 pengine: info: native_print:
> iscsi-lun02 (ocf::heartbeat:iSCSILogicalUnit): Started
> zfs-serv4 (unmanaged)
> Nov 09 14:23:36 [30103] zfs-serv4 pengine: info: CancelXmlOp:
> Action ha-ip_monitor_10000 on zfs-serv4 will be stopped:
> maintenance mode
> Nov 09 14:23:36 [30103] zfs-serv4 pengine: info: CancelXmlOp:
> Action resIPMI-zfs3_monitor_60000 on zfs-serv4 will be stopped:
> maintenance mode
> Nov 09 14:23:36 [30103] zfs-serv4 pengine: info: CancelXmlOp:
> Action vm_storage_monitor_5000 on zfs-serv4 will be stopped:
> maintenance mode
> Nov 09 14:23:36 [30103] zfs-serv4 pengine: info: native_color:
> Unmanaged resource ha-ip allocated to zfs-serv4: active
> Nov 09 14:23:36 [30103] zfs-serv4 pengine: info: native_color:
> Unmanaged resource resIPMI-zfs3 allocated to zfs-serv4: active
> Nov 09 14:23:36 [30103] zfs-serv4 pengine: info:
> rsc_merge_weights: vm_storage: Rolling back scores from iscsi-
> server
> Nov 09 14:23:36 [30103] zfs-serv4 pengine: info: native_color:
> Unmanaged resource vm_storage allocated to zfs-serv4: active
> Nov 09 14:23:36 [30103] zfs-serv4 pengine: info:
> rsc_merge_weights: iscsi-server: Rolling back scores from iscsi-
> lun00
> Nov 09 14:23:36 [30103] zfs-serv4 pengine: info: native_color:
> Unmanaged resource iscsi-server allocated to zfs-serv4: active
> Nov 09 14:23:36 [30103] zfs-serv4 pengine: info:
> rsc_merge_weights: iscsi-lun00: Rolling back scores from iscsi-
> lun01
> Nov 09 14:23:36 [30103] zfs-serv4 pengine: info: native_color:
> Unmanaged resource iscsi-lun00 allocated to zfs-serv4: active
> Nov 09 14:23:36 [30103] zfs-serv4 pengine: info:
> rsc_merge_weights: iscsi-lun01: Rolling back scores from iscsi-
> lun02
> Nov 09 14:23:36 [30103] zfs-serv4 pengine: info: native_color:
> Unmanaged resource iscsi-lun01 allocated to zfs-serv4: active
> Nov 09 14:23:36 [30103] zfs-serv4 pengine: info: native_color:
> Unmanaged resource iscsi-lun02 allocated to zfs-serv4: active
> Nov 09 14:23:36 [30103] zfs-serv4 pengine: info: LogActions:
> Leave ha-ip (Started unmanaged)
> Nov 09 14:23:36 [30103] zfs-serv4 pengine: info: LogActions:
> Leave resIPMI-zfs4 (Started zfs-serv3)
> Nov 09 14:23:36 [30103] zfs-serv4 pengine: info: LogActions:
> Leave resIPMI-zfs3 (Started unmanaged)
> Nov 09 14:23:36 [30103] zfs-serv4 pengine: info: LogActions:
> Leave vm_storage (Started unmanaged)
> Nov 09 14:23:36 [30103] zfs-serv4 pengine: info: LogActions:
> Leave iscsi-server (Started unmanaged)
> Nov 09 14:23:36 [30103] zfs-serv4 pengine: info: LogActions:
> Leave iscsi-lun00 (Started unmanaged)
> Nov 09 14:23:36 [30103] zfs-serv4 pengine: info: LogActions:
> Leave iscsi-lun01 (Started unmanaged)
> Nov 09 14:23:36 [30103] zfs-serv4 pengine: info: LogActions:
> Leave iscsi-lun02 (Started unmanaged)
> Nov 09 14:23:36 [30103] zfs-serv4 pengine: notice:
> process_pe_message: Calculated transition 60, saving inputs in
> /var/lib/pacemaker/pengine/pe-input-336.bz2
> Nov 09 14:23:36 [30104] zfs-serv4 crmd: info:
> do_state_transition: State transition S_POLICY_ENGINE ->
> S_TRANSITION_ENGINE | input=I_PE_SUCCESS cause=C_IPC_MESSAGE
> origin=handle_response
> Nov 09 14:23:36 [30104] zfs-serv4 crmd: info: do_te_invoke:
> Processing graph 60 (ref=pe_calc-dc-1541769816-221) derived from
> /var/lib/pacemaker/pengine/pe-input-336.bz2
> Nov 09 14:23:36 [30104] zfs-serv4 crmd: notice:
> te_rsc_command: Initiating cancel operation ha-ip_monitor_10000
> locally on zfs-serv4 | action 1
> Nov 09 14:23:36 [30101] zfs-serv4 lrmd: info:
> cancel_recurring_action: Cancelling ocf operation ha-
> ip_monitor_10000
> Nov 09 14:23:36 [30104] zfs-serv4 crmd: notice:
> te_rsc_command: Initiating cancel operation resIPMI-
> zfs3_monitor_60000 locally on zfs-serv4 | action 2
> Nov 09 14:23:36 [30104] zfs-serv4 crmd: notice:
> te_rsc_command: Initiating cancel operation vm_storage_monitor_5000
> locally on zfs-serv4 | action 3
> Nov 09 14:23:36 [30101] zfs-serv4 lrmd: info:
> cancel_recurring_action: Cancelling ocf operation
> vm_storage_monitor_5000
> Nov 09 14:23:36 [30104] zfs-serv4 crmd: info:
> process_lrm_event: Result of monitor operation for ha-ip on zfs-
> serv4: Cancelled | call=214 key=ha-ip_monitor_10000 confirmed=true
> Nov 09 14:23:36 [30099] zfs-serv4 cib: info:
> cib_process_request: Forwarding cib_delete operation for section
> status to all (origin=local/crmd/421)
> Nov 09 14:23:36 [30104] zfs-serv4 crmd: info:
> process_lrm_event: Result of monitor operation for resIPMI-zfs3
> on zfs-serv4: Cancelled | call=141 key=resIPMI-zfs3_monitor_60000
> confirmed=true
> Nov 09 14:23:36 [30104] zfs-serv4 crmd: info:
> process_lrm_event: Result of monitor operation for vm_storage on
> zfs-serv4: Cancelled | call=209 key=vm_storage_monitor_5000
> confirmed=true
> Nov 09 14:23:36 [30099] zfs-serv4 cib: info:
> cib_process_request: Forwarding cib_delete operation for section
> status to all (origin=local/crmd/422)
> Nov 09 14:23:36 [30099] zfs-serv4 cib: info:
> cib_process_request: Forwarding cib_delete operation for section
> status to all (origin=local/crmd/423)
> Nov 09 14:23:36 [30099] zfs-serv4 cib: info:
> cib_perform_op: Diff: --- 0.271.0 2
> Nov 09 14:23:36 [30099] zfs-serv4 cib: info:
> cib_perform_op: Diff: +++ 0.271.1 (null)
> Nov 09 14:23:36 [30099] zfs-serv4 cib: info:
> cib_perform_op: --
> /cib/status/node_state[@id='2']/lrm[@id='2']/lrm_resources/lrm_resour
> ce[@id='ha-ip']/lrm_rsc_op[@id='ha-ip_monitor_10000']
> Nov 09 14:23:36 [30099] zfs-serv4 cib: info:
> cib_perform_op: + /cib: @num_updates=1
> Nov 09 14:23:36 [30099] zfs-serv4 cib: info:
> cib_process_request: Completed cib_delete operation for section
> status: OK (rc=0, origin=zfs-serv4/crmd/421, version=0.271.1)
> Nov 09 14:23:36 [30104] zfs-serv4 crmd: info:
> te_update_diff: Cancellation of ha-ip_monitor_10000 on 2 confirmed
> (1)
> Nov 09 14:23:36 [30099] zfs-serv4 cib: info:
> cib_perform_op: Diff: --- 0.271.1 2
> Nov 09 14:23:36 [30099] zfs-serv4 cib: info:
> cib_perform_op: Diff: +++ 0.271.2 (null)
> Nov 09 14:23:36 [30099] zfs-serv4 cib: info:
> cib_perform_op: --
> /cib/status/node_state[@id='2']/lrm[@id='2']/lrm_resources/lrm_resour
> ce[@id='resIPMI-zfs3']/lrm_rsc_op[@id='resIPMI-zfs3_monitor_60000']
> Nov 09 14:23:36 [30099] zfs-serv4 cib: info:
> cib_perform_op: + /cib: @num_updates=2
> Nov 09 14:23:36 [30099] zfs-serv4 cib: info:
> cib_process_request: Completed cib_delete operation for section
> status: OK (rc=0, origin=zfs-serv4/crmd/422, version=0.271.2)
> Nov 09 14:23:36 [30104] zfs-serv4 crmd: info:
> te_update_diff: Cancellation of resIPMI-zfs3_monitor_60000 on 2
> confirmed (2)
> Nov 09 14:23:36 [30099] zfs-serv4 cib: info:
> cib_perform_op: Diff: --- 0.271.2 2
> Nov 09 14:23:36 [30099] zfs-serv4 cib: info:
> cib_perform_op: Diff: +++ 0.271.3 (null)
> Nov 09 14:23:36 [30099] zfs-serv4 cib: info:
> cib_perform_op: --
> /cib/status/node_state[@id='2']/lrm[@id='2']/lrm_resources/lrm_resour
> ce[@id='vm_storage']/lrm_rsc_op[@id='vm_storage_monitor_5000']
> Nov 09 14:23:36 [30099] zfs-serv4 cib: info:
> cib_perform_op: + /cib: @num_updates=3
> Nov 09 14:23:36 [30099] zfs-serv4 cib: info:
> cib_process_request: Completed cib_delete operation for section
> status: OK (rc=0, origin=zfs-serv4/crmd/423, version=0.271.3)
> Nov 09 14:23:36 [30104] zfs-serv4 crmd: info:
> te_update_diff: Cancellation of vm_storage_monitor_5000 on 2
> confirmed (3)
> Nov 09 14:23:36 [30104] zfs-serv4 crmd: notice: run_graph:
> Transition 60 (Complete=3, Pending=0, Fired=0, Skipped=0,
> Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-336.bz2):
> Complete
> Nov 09 14:23:36 [30104] zfs-serv4 crmd: info: do_log: Input
> I_TE_SUCCESS received in state S_TRANSITION_ENGINE from notify_crmd
> Nov 09 14:23:36 [30104] zfs-serv4 crmd: notice:
> do_state_transition: State transition S_TRANSITION_ENGINE ->
> S_IDLE | input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd
> Nov 09 14:23:36 [30099] zfs-serv4 cib: info:
> cib_file_backup: Archived previous version as
> /var/lib/pacemaker/cib/cib-10.raw
> Nov 09 14:23:36 [30099] zfs-serv4 cib: info:
> cib_file_write_with_digest: Wrote version 0.271.0 of the CIB to
> disk (digest: deb7df3b23d3b413719bef796b82e552)
> Nov 09 14:23:36 [30099] zfs-serv4 cib: info:
> cib_file_write_with_digest: Reading cluster configuration file
> /var/lib/pacemaker/cib/cib.RYZvcL (digest:
> /var/lib/pacemaker/cib/cib.swyMZG)
> Nov 09 14:23:41 [30099] zfs-serv4 cib: info:
> cib_process_ping: Reporting our current digest to zfs-serv4:
> 087bbb193e0824a48379d0025a06a387 for 0.271.3 (0x558c43b22850 0)
>
> at the same time on the other node:
> Nov 09 14:08:53 [9630] zfs-serv3 cib: info:
> cib_process_ping: Reporting our current digest to zfs-serv4:
> 81c440e08f9ab611967de64ba7b6ce46 for 0.270.0 (0x556faf4e9220 0)
> Nov 09 14:24:16 [9630] zfs-serv3 cib: info:
> cib_perform_op: Diff: --- 0.270.0 2
> Nov 09 14:24:16 [9630] zfs-serv3 cib: info:
> cib_perform_op: Diff: +++ 0.271.0 34fd566a02387aeadd17968e21ef9079
> Nov 09 14:24:16 [9630] zfs-serv3 cib: info:
> cib_perform_op: + /cib: @epoch=271
> Nov 09 14:24:16 [9630] zfs-serv3 cib: info:
> cib_perform_op: ++
> /cib/configuration/nodes/node[@id='2']/instance_attributes[@id='nodes
> -2']: <nvpair id="nodes-2-maintenance" name="maintenance"
> value="on"/>
> Nov 09 14:24:16 [9630] zfs-serv3 cib: info:
> cib_process_request: Completed cib_modify operation for section
> nodes: OK (rc=0, origin=zfs-serv4/crm_attribute/4, version=0.271.0)
> Nov 09 14:24:16 [9630] zfs-serv3 cib: info:
> cib_perform_op: Diff: --- 0.271.0 2
> Nov 09 14:24:16 [9630] zfs-serv3 cib: info:
> cib_perform_op: Diff: +++ 0.271.1 (null)
>
> thanks for the help!
> best regards
> Stefan
--
Ken Gaillot <kgaillot at redhat.com>