[ClusterLabs] crm resource trace
Xin Liang
XLiang at suse.com
Sun Oct 23 22:29:30 EDT 2022
Hi Bernd,
Which versions of crmsh and SLE are you running?
Regards,
Xin
________________________________
From: Users <users-bounces at clusterlabs.org> on behalf of Lentes, Bernd <bernd.lentes at helmholtz-muenchen.de>
Sent: Monday, October 17, 2022 6:43 PM
To: Pacemaker ML <users at clusterlabs.org>
Subject: Re: [ClusterLabs] crm resource trace
Hi,
I am trying to find out why the resource is sometimes restarted and sometimes not.
Unpredictable behaviour is something I expect from Windows, not from Linux.
Below you see two runs of "crm resource trace <resource>".
In the first case the resource is restarted, in the second it is not.
The command I used is identical in both cases.
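(For comparison, a minimal way to capture the exact vm-genetrap definition the scheduler sees before and after each trace call, assuming cibadmin's --xpath option is available in this Pacemaker release and using /tmp paths only as placeholders, would be something like:

cibadmin --query --xpath "//primitive[@id='vm-genetrap']" > /tmp/vm-genetrap.before.xml
crm resource trace vm-genetrap
cibadmin --query --xpath "//primitive[@id='vm-genetrap']" > /tmp/vm-genetrap.after.xml
diff -u /tmp/vm-genetrap.before.xml /tmp/vm-genetrap.after.xml

That should show exactly which trace_ra attributes each run added.)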
ha-idg-2:~/trace-untrace # date; crm resource trace vm-genetrap
Fri Oct 14 19:05:51 CEST 2022
INFO: Trace for vm-genetrap is written to /var/lib/heartbeat/trace_ra/
INFO: Trace set, restart vm-genetrap to trace non-monitor operations
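(As a quick check that trace files are actually being written, and the exact subdirectory layout under trace_ra may differ per agent, so this is only a sketch:

ls -ltR /var/lib/heartbeat/trace_ra/ | head -20

should list the newest trace files for the traced operations.)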
==================================================================================
1st try:
Oct 14 19:05:52 [25996] ha-idg-1 cib: info: cib_perform_op: Diff: --- 7.28974.3 2
Oct 14 19:05:52 [25996] ha-idg-1 cib: info: cib_perform_op: Diff: +++ 7.28975.0 299af44e1c8a3867f9e7a4b25f2c3d6a
Oct 14 19:05:52 [25996] ha-idg-1 cib: info: cib_perform_op: + /cib: @epoch=28975, @num_updates=0
Oct 14 19:05:52 [25996] ha-idg-1 cib: info: cib_perform_op: ++ /cib/configuration/resources/primitive[@id='vm-genetrap']/operations/op[@id='vm-genetrap-monitor-30']: <instance_attributes id="vm-genetrap-monitor-30-instance_attributes"/>
Oct 14 19:05:52 [25996] ha-idg-1 cib: info: cib_perform_op: ++ <nvpair name="trace_ra" value="1" id="vm-genetrap-monitor-30-instance_attributes-trace_ra"/>
Oct 14 19:05:52 [25996] ha-idg-1 cib: info: cib_perform_op: ++ </instance_attributes>
Oct 14 19:05:52 [25996] ha-idg-1 cib: info: cib_perform_op: ++ /cib/configuration/resources/primitive[@id='vm-genetrap']/operations/op[@id='vm-genetrap-stop-0']: <instance_attributes id="vm-genetrap-stop-0-instance_attributes"/>
Oct 14 19:05:52 [25996] ha-idg-1 cib: info: cib_perform_op: ++ <nvpair name="trace_ra" value="1" id="vm-genetrap-stop-0-instance_attributes-trace_ra"/>
Oct 14 19:05:52 [25996] ha-idg-1 cib: info: cib_perform_op: ++ </instance_attributes>
Oct 14 19:05:52 [25996] ha-idg-1 cib: info: cib_perform_op: ++ /cib/configuration/resources/primitive[@id='vm-genetrap']/operations/op[@id='vm-genetrap-start-0']: <instance_attributes id="vm-genetrap-start-0-instance_attributes"/>
Oct 14 19:05:52 [25996] ha-idg-1 cib: info: cib_perform_op: ++ <nvpair name="trace_ra" value="1" id="vm-genetrap-start-0-instance_attributes-trace_ra"/>
Oct 14 19:05:52 [25996] ha-idg-1 cib: info: cib_perform_op: ++ </instance_attributes>
Oct 14 19:05:52 [25996] ha-idg-1 cib: info: cib_perform_op: ++ /cib/configuration/resources/primitive[@id='vm-genetrap']/operations/op[@id='vm-genetrap-migrate_from-0']: <instance_attributes id="vm-genetrap-migrate_from-0-instance_attributes"/>
Oct 14 19:05:52 [25996] ha-idg-1 cib: info: cib_perform_op: ++ <nvpair name="trace_ra" value="1" id="vm-genetrap-migrate_from-0-instance_attributes-trace_ra"/>
Oct 14 19:05:52 [25996] ha-idg-1 cib: info: cib_perform_op: ++ </instance_attributes>
Oct 14 19:05:52 [25996] ha-idg-1 cib: info: cib_perform_op: ++ /cib/configuration/resources/primitive[@id='vm-genetrap']/operations/op[@id='vm-genetrap-migrate_to-0']: <instance_attributes id="vm-genetrap-migrate_to-0-instance_attributes"/>
Oct 14 19:05:52 [25996] ha-idg-1 cib: info: cib_perform_op: ++ <nvpair name="trace_ra" value="1" id="vm-genetrap-migrate_to-0-instance_attributes-trace_ra"/>
Oct 14 19:05:52 [25996] ha-idg-1 cib: info: cib_perform_op: ++ </instance_attributes>
Oct 14 19:05:52 [26001] ha-idg-1 crmd: info: abort_transition_graph: Transition 791 aborted by instance_attributes.vm-genetrap-monitor-30-instance_attributes 'create': Configuration change | cib=7.28975.0 source=te_update_diff_v2:483 path=/cib/configuration/resources/primitive[@id='vm-genetrap']/operations/op[@id='vm-genetrap-monitor-30'] complete=true
Oct 14 19:05:52 [26001] ha-idg-1 crmd: notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE | input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph
Oct 14 19:05:52 [25996] ha-idg-1 cib: info: cib_process_request: Completed cib_apply_diff operation for section 'all': OK (rc=0, origin=ha-idg-2/cibadmin/2, version=7.28975.0)
Oct 14 19:05:52 [25997] ha-idg-1 stonith-ng: info: update_cib_stonith_devices_v2: Updating device list from the cib: create op[@id='vm-genetrap-monitor-30']
Oct 14 19:05:52 [25997] ha-idg-1 stonith-ng: info: cib_devices_update: Updating devices to version 7.28975.0
Oct 14 19:05:52 [25997] ha-idg-1 stonith-ng: notice: unpack_config: On loss of CCM Quorum: Ignore
Oct 14 19:05:52 [25996] ha-idg-1 cib: info: cib_file_backup: Archived previous version as /var/lib/pacemaker/cib/cib-68.raw
Oct 14 19:05:52 [25996] ha-idg-1 cib: info: cib_file_write_with_digest: Wrote version 7.28975.0 of the CIB to disk (digest: d1ef5a98039f28697320c1eba4ca02cc)
Oct 14 19:05:52 [25996] ha-idg-1 cib: info: cib_file_write_with_digest: Reading cluster configuration file /var/lib/pacemaker/cib/cib.WIMMCF (digest: /var/lib/pacemaker/cib/cib.NfxNwG)
Oct 14 19:05:52 [26000] ha-idg-1 pengine: notice: unpack_config: On loss of CCM Quorum: Ignore
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: determine_online_status_fencing: Node ha-idg-1 is active
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: determine_online_status: Node ha-idg-1 is online
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: determine_online_status_fencing: Node ha-idg-2 is active
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: determine_online_status: Node ha-idg-2 is online
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: unpack_node_loop: Node 1084777482 is already processed
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: unpack_node_loop: Node 1084777492 is already processed
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: unpack_node_loop: Node 1084777482 is already processed
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: unpack_node_loop: Node 1084777492 is already processed
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: common_print: fence_ilo_ha-idg-2 (stonith:fence_ilo2): Started ha-idg-1
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: common_print: fence_ilo_ha-idg-1 (stonith:fence_ilo4): Started ha-idg-1
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: clone_print: Clone Set: cl_share [gr_share]
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: short_print: Started: [ ha-idg-1 ha-idg-2 ]
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: clone_print: Clone Set: ClusterMon-clone [ClusterMon-SMTP]
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: short_print: Stopped (disabled): [ ha-idg-1 ha-idg-2 ]
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: common_print: vm-mausdb (ocf::lentes:VirtualDomain): Started ha-idg-1
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: common_print: vm-sim (ocf::lentes:VirtualDomain): Started ha-idg-1
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: common_print: vm-geneious (ocf::lentes:VirtualDomain): Started ha-idg-1
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: common_print: vm-idcc-devel (ocf::lentes:VirtualDomain): Started ha-idg-1
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: common_print: vm-genetrap (ocf::lentes:VirtualDomain): Started ha-idg-1
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: common_print: vm-mouseidgenes (ocf::lentes:VirtualDomain): Started ha-idg-1
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: common_print: vm-greensql (ocf::lentes:VirtualDomain): Started ha-idg-1
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: common_print: vm-severin (ocf::lentes:VirtualDomain): Started ha-idg-1
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: common_print: ping_19216810010 (ocf::pacemaker:ping): Stopped (disabled)
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: common_print: ping_19216810020 (ocf::pacemaker:ping): Stopped (disabled)
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: common_print: vm_crispor (ocf::heartbeat:VirtualDomain): Stopped (unmanaged)
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: common_print: vm-dietrich (ocf::lentes:VirtualDomain): Started ha-idg-1
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: common_print: vm-pathway (ocf::lentes:VirtualDomain): Started ha-idg-1
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: common_print: vm-crispor-server (ocf::lentes:VirtualDomain): Started ha-idg-1
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: common_print: vm-geneious-license (ocf::lentes:VirtualDomain): Started ha-idg-1
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: common_print: vm-nc-mcd (ocf::lentes:VirtualDomain): Started ha-idg-1
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: common_print: vm-amok (ocf::lentes:VirtualDomain): Started ha-idg-1
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: common_print: vm-geneious-license-mcd (ocf::lentes:VirtualDomain): Started ha-idg-1
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: common_print: vm-documents-oo (ocf::lentes:VirtualDomain): Started ha-idg-1
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: common_print: fs_test_ocfs2 (ocf::lentes:Filesystem.new): Started ha-idg-2
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: common_print: vm-ssh (ocf::lentes:VirtualDomain): Started ha-idg-1
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: common_print: vm_snipanalysis (ocf::lentes:VirtualDomain): Stopped (disabled, unmanaged)
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: common_print: vm-seneca (ocf::lentes:VirtualDomain): Started ha-idg-1
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: common_print: vm-photoshop (ocf::lentes:VirtualDomain): Started ha-idg-1
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: common_print: vm-check-mk (ocf::lentes:VirtualDomain): Started ha-idg-1
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: common_print: vm-encore (ocf::lentes:VirtualDomain): Started ha-idg-1
--------------------------------------------------------------------------------------------------------------------
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: rsc_action_digest_cmp: Parameters to vm-genetrap_start_0 on ha-idg-1 changed: was e2eeb4e5d1604535fabae9ce5407d685 vs. now 516b745764a83d26e0d73daf2c65ca38 (reload:3.0.14) 0:0;82:692:0:167bea02-e39a-4fbc-a09f-3ba4d704c4f9
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: check_action_definition: params:reload <parameters migration_network_suffix="-private" migrate_options="--verbose" migration_transport="ssh" shutdown_mode="acpi,agent" config="/mnt/share/vm_genetrap.xml" hypervisor="qemu:///system" trace_ra="1"/>
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: rsc_action_digest_cmp: Parameters to vm-genetrap_monitor_30000 on ha-idg-1 changed: was 2c5e72e3ebb855036a484cb7e2823f92 vs. now d81c72a6c99d1a5c2defaa830fb82b23 (reschedule:3.0.14) 0:0;83:692:0:167bea02-e39a-4fbc-a09f-3ba4d704c4f9
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: check_action_definition: params:reload <parameters migration_network_suffix="-private" migrate_options="--verbose" migration_transport="ssh" shutdown_mode="acpi,agent" config="/mnt/share/vm_genetrap.xml" hypervisor="qemu:///system" trace_ra="1" CRM_meta_timeout="25000"/>
---------------------------------------------------------------------------------------------------------------------
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: pcmk__native_allocate: Resource ClusterMon-SMTP:0 cannot run anywhere
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: pcmk__native_allocate: Resource ClusterMon-SMTP:1 cannot run anywhere
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: pcmk__native_allocate: Resource ping_19216810010 cannot run anywhere
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: pcmk__native_allocate: Resource ping_19216810020 cannot run anywhere
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: pcmk__native_allocate: Unmanaged resource vm_crispor allocated to no node: inactive
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: pcmk__native_allocate: Unmanaged resource vm_snipanalysis allocated to no node: inactive
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: RecurringOp: Start recurring monitor (30s) for vm-genetrap on ha-idg-1
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: LogActions: Leave fence_ilo_ha-idg-2 (Started ha-idg-1)
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: LogActions: Leave fence_ilo_ha-idg-1 (Started ha-idg-1)
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: LogActions: Leave dlm:0 (Started ha-idg-1)
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: LogActions: Leave clvmd:0 (Started ha-idg-1)
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: LogActions: Leave gfs2_share:0 (Started ha-idg-1)
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: LogActions: Leave gfs2_snap:0 (Started ha-idg-1)
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: LogActions: Leave fs_ocfs2:0 (Started ha-idg-1)
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: LogActions: Leave dlm:1 (Started ha-idg-2)
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: LogActions: Leave clvmd:1 (Started ha-idg-2)
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: LogActions: Leave gfs2_share:1 (Started ha-idg-2)
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: LogActions: Leave gfs2_snap:1 (Started ha-idg-2)
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: LogActions: Leave fs_ocfs2:1 (Started ha-idg-2)
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: LogActions: Leave ClusterMon-SMTP:0 (Stopped)
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: LogActions: Leave ClusterMon-SMTP:1 (Stopped)
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: LogActions: Leave vm-mausdb (Started ha-idg-1)
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: LogActions: Leave vm-sim (Started ha-idg-1)
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: LogActions: Leave vm-geneious (Started ha-idg-1)
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: LogActions: Leave vm-idcc-devel (Started ha-idg-1)
Oct 14 19:05:52 [26000] ha-idg-1 pengine: notice: LogAction: * Restart vm-genetrap ( ha-idg-1 ) due to resource definition change
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: LogActions: Leave vm-mouseidgenes (Started ha-idg-1)
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: LogActions: Leave vm-greensql (Started ha-idg-1)
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: LogActions: Leave vm-severin (Started ha-idg-1)
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: LogActions: Leave ping_19216810010 (Stopped)
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: LogActions: Leave ping_19216810020 (Stopped)
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: LogActions: Leave vm_crispor (Stopped unmanaged)
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: LogActions: Leave vm-dietrich (Started ha-idg-1)
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: LogActions: Leave vm-pathway (Started ha-idg-1)
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: LogActions: Leave vm-crispor-server (Started ha-idg-1)
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: LogActions: Leave vm-geneious-license (Started ha-idg-1)
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: LogActions: Leave vm-nc-mcd (Started ha-idg-1)
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: LogActions: Leave vm-amok (Started ha-idg-1)
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: LogActions: Leave vm-geneious-license-mcd (Started ha-idg-1)
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: LogActions: Leave vm-documents-oo (Started ha-idg-1)
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: LogActions: Leave fs_test_ocfs2 (Started ha-idg-2)
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: LogActions: Leave vm-ssh (Started ha-idg-1)
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: LogActions: Leave vm_snipanalysis (Stopped unmanaged)
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: LogActions: Leave vm-seneca (Started ha-idg-1)
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: LogActions: Leave vm-photoshop (Started ha-idg-1)
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: LogActions: Leave vm-check-mk (Started ha-idg-1)
Oct 14 19:05:52 [26000] ha-idg-1 pengine: info: LogActions: Leave vm-encore (Started ha-idg-1)
restart !!!!!
======================================================================================
2nd try:
ha-idg-2:~/trace-untrace # cibadmin -Q|grep trace|grep genetrap
ha-idg-2:~/trace-untrace #
ha-idg-2:~/trace-untrace # date; crm resource trace vm-genetrap
Fri Oct 14 19:26:32 CEST 2022
INFO: Trace for vm-genetrap is written to /var/lib/heartbeat/trace_ra/
INFO: Trace set, restart vm-genetrap to trace non-monitor operations
Oct 14 19:26:33 [25996] ha-idg-1 cib: info: cib_perform_op: Diff: --- 7.28977.1 2
Oct 14 19:26:33 [25996] ha-idg-1 cib: info: cib_perform_op: Diff: +++ 7.28978.0 941fea150a15ecf82f00290ff9ecae0e
Oct 14 19:26:33 [25996] ha-idg-1 cib: info: cib_perform_op: + /cib: @epoch=28978, @num_updates=0
Oct 14 19:26:33 [25996] ha-idg-1 cib: info: cib_perform_op: ++ /cib/configuration/resources/primitive[@id='vm-genetrap']/operations/op[@id='vm-genetrap-monitor-30']: <instance_attributes id="vm-genetrap-monitor-30-instance_attributes"/>
Oct 14 19:26:33 [25996] ha-idg-1 cib: info: cib_perform_op: ++ <nvpair name="trace_ra" value="1" id="vm-genetrap-monitor-30-instance_attributes-trace_ra"/>
Oct 14 19:26:33 [25996] ha-idg-1 cib: info: cib_perform_op: ++ </instance_attributes>
Oct 14 19:26:33 [25996] ha-idg-1 cib: info: cib_perform_op: ++ /cib/configuration/resources/primitive[@id='vm-genetrap']/operations/op[@id='vm-genetrap-stop-0']: <instance_attributes id="vm-genetrap-stop-0-instance_attributes"/>
Oct 14 19:26:33 [25996] ha-idg-1 cib: info: cib_perform_op: ++ <nvpair name="trace_ra" value="1" id="vm-genetrap-stop-0-instance_attributes-trace_ra"/>
Oct 14 19:26:33 [25996] ha-idg-1 cib: info: cib_perform_op: ++ </instance_attributes>
Oct 14 19:26:33 [25996] ha-idg-1 cib: info: cib_perform_op: ++ /cib/configuration/resources/primitive[@id='vm-genetrap']/operations/op[@id='vm-genetrap-start-0']: <instance_attributes id="vm-genetrap-start-0-instance_attributes"/>
Oct 14 19:26:33 [25996] ha-idg-1 cib: info: cib_perform_op: ++ <nvpair name="trace_ra" value="1" id="vm-genetrap-start-0-instance_attributes-trace_ra"/>
Oct 14 19:26:33 [25996] ha-idg-1 cib: info: cib_perform_op: ++ </instance_attributes>
Oct 14 19:26:33 [25996] ha-idg-1 cib: info: cib_perform_op: ++ /cib/configuration/resources/primitive[@id='vm-genetrap']/operations/op[@id='vm-genetrap-migrate_from-0']: <instance_attributes id="vm-genetrap-migrate_from-0-instance_attributes"/>
Oct 14 19:26:33 [25996] ha-idg-1 cib: info: cib_perform_op: ++ <nvpair name="trace_ra" value="1" id="vm-genetrap-migrate_from-0-instance_attributes-trace_ra"/>
Oct 14 19:26:33 [25996] ha-idg-1 cib: info: cib_perform_op: ++ </instance_attributes>
Oct 14 19:26:33 [25996] ha-idg-1 cib: info: cib_perform_op: ++ /cib/configuration/resources/primitive[@id='vm-genetrap']/operations/op[@id='vm-genetrap-migrate_to-0']: <instance_attributes id="vm-genetrap-migrate_to-0-instance_attributes"/>
Oct 14 19:26:33 [25996] ha-idg-1 cib: info: cib_perform_op: ++ <nvpair name="trace_ra" value="1" id="vm-genetrap-migrate_to-0-instance_attributes-trace_ra"/>
Oct 14 19:26:33 [25996] ha-idg-1 cib: info: cib_perform_op: ++ </instance_attributes>
Oct 14 19:26:33 [26001] ha-idg-1 crmd: info: abort_transition_graph: Transition 797 aborted by instance_attributes.vm-genetrap-monitor-30-instance_attributes 'create': Configuration change | cib=7.28978.0 source=te_update_diff_v2:483 path=/cib/configuration/resources/primitive[@id='vm-genetrap']/operations/op[@id='vm-genetrap-monitor-30'] complete=true
Oct 14 19:26:33 [26001] ha-idg-1 crmd: notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE | input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph
Oct 14 19:26:33 [25996] ha-idg-1 cib: info: cib_process_request: Completed cib_apply_diff operation for section 'all': OK (rc=0, origin=ha-idg-2/cibadmin/2, version=7.28978.0)
Oct 14 19:26:33 [25997] ha-idg-1 stonith-ng: info: update_cib_stonith_devices_v2: Updating device list from the cib: create op[@id='vm-genetrap-monitor-30']
Oct 14 19:26:33 [25997] ha-idg-1 stonith-ng: info: cib_devices_update: Updating devices to version 7.28978.0
Oct 14 19:26:33 [25997] ha-idg-1 stonith-ng: notice: unpack_config: On loss of CCM Quorum: Ignore
Oct 14 19:26:33 [25996] ha-idg-1 cib: info: cib_file_backup: Archived previous version as /var/lib/pacemaker/cib/cib-71.raw
Oct 14 19:26:33 [25996] ha-idg-1 cib: info: cib_file_write_with_digest: Wrote version 7.28978.0 of the CIB to disk (digest: 69dc34584ba70cfb6868a00a3c59c45f)
Oct 14 19:26:33 [25996] ha-idg-1 cib: info: cib_file_write_with_digest: Reading cluster configuration file /var/lib/pacemaker/cib/cib.JQq4Ul (digest: /var/lib/pacemaker/cib/cib.GeC2uK)
Oct 14 19:26:33 [26000] ha-idg-1 pengine: notice: unpack_config: On loss of CCM Quorum: Ignore
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: determine_online_status_fencing: Node ha-idg-1 is active
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: determine_online_status: Node ha-idg-1 is online
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: determine_online_status_fencing: Node ha-idg-2 is active
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: determine_online_status: Node ha-idg-2 is online
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: determine_op_status: Operation monitor found resource vm-genetrap active on ha-idg-1
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: determine_op_status: Operation monitor found resource vm-genetrap active on ha-idg-1
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: unpack_node_loop: Node 1084777482 is already processed
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: unpack_node_loop: Node 1084777492 is already processed
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: unpack_node_loop: Node 1084777482 is already processed
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: unpack_node_loop: Node 1084777492 is already processed
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: common_print: fence_ilo_ha-idg-2 (stonith:fence_ilo2): Started ha-idg-1
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: common_print: fence_ilo_ha-idg-1 (stonith:fence_ilo4): Started ha-idg-1
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: clone_print: Clone Set: cl_share [gr_share]
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: short_print: Started: [ ha-idg-1 ha-idg-2 ]
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: clone_print: Clone Set: ClusterMon-clone [ClusterMon-SMTP]
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: short_print: Stopped (disabled): [ ha-idg-1 ha-idg-2 ]
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: common_print: vm-mausdb (ocf::lentes:VirtualDomain): Started ha-idg-1
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: common_print: vm-sim (ocf::lentes:VirtualDomain): Started ha-idg-1
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: common_print: vm-geneious (ocf::lentes:VirtualDomain): Started ha-idg-1
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: common_print: vm-idcc-devel (ocf::lentes:VirtualDomain): Started ha-idg-1
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: common_print: vm-genetrap (ocf::lentes:VirtualDomain): Started ha-idg-1
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: common_print: vm-mouseidgenes (ocf::lentes:VirtualDomain): Started ha-idg-1
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: common_print: vm-greensql (ocf::lentes:VirtualDomain): Started ha-idg-1
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: common_print: vm-severin (ocf::lentes:VirtualDomain): Started ha-idg-1
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: common_print: ping_19216810010 (ocf::pacemaker:ping): Stopped (disabled)
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: common_print: ping_19216810020 (ocf::pacemaker:ping): Stopped (disabled)
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: common_print: vm_crispor (ocf::heartbeat:VirtualDomain): Stopped (unmanaged)
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: common_print: vm-dietrich (ocf::lentes:VirtualDomain): Started ha-idg-1
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: common_print: vm-pathway (ocf::lentes:VirtualDomain): Started ha-idg-1
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: common_print: vm-crispor-server (ocf::lentes:VirtualDomain): Started ha-idg-1
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: common_print: vm-geneious-license (ocf::lentes:VirtualDomain): Started ha-idg-1
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: common_print: vm-nc-mcd (ocf::lentes:VirtualDomain): Started ha-idg-1
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: common_print: vm-amok (ocf::lentes:VirtualDomain): Started ha-idg-1
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: common_print: vm-geneious-license-mcd (ocf::lentes:VirtualDomain): Started ha-idg-1
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: common_print: vm-documents-oo (ocf::lentes:VirtualDomain): Started ha-idg-1
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: common_print: fs_test_ocfs2 (ocf::lentes:Filesystem.new): Started ha-idg-2
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: common_print: vm-ssh (ocf::lentes:VirtualDomain): Started ha-idg-1
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: common_print: vm_snipanalysis (ocf::lentes:VirtualDomain): Stopped (disabled, unmanaged)
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: common_print: vm-seneca (ocf::lentes:VirtualDomain): Started ha-idg-1
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: common_print: vm-photoshop (ocf::lentes:VirtualDomain): Started ha-idg-1
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: common_print: vm-check-mk (ocf::lentes:VirtualDomain): Started ha-idg-1
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: common_print: vm-encore (ocf::lentes:VirtualDomain): Started ha-idg-1
-----------------------------------------------------------------------------------------------
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: rsc_action_digest_cmp: Parameters to vm-genetrap_monitor_30000 on ha-idg-1 changed: was 2c5e72e3ebb855036a484cb7e2823f92 vs. now d81c72a6c99d1a5c2defaa830fb82b23 (reschedule:3.0.14) 0:0;28:797:0:167bea02-e39a-4fbc-a09f-3ba4d704c4f9
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: check_action_definition: params:reload <parameters migration_network_suffix="-private" migrate_options="--verbose" migration_transport="ssh" shutdown_mode="acpi,agent" config="/mnt/share/vm_genetrap.xml" hypervisor="qemu:///system" trace_ra="1" CRM_meta_timeout="25000"/>
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: pcmk__native_allocate: Resource ClusterMon-SMTP:0 cannot run anywhere
------------------------------------------------------------------------------------------------
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: pcmk__native_allocate: Resource ClusterMon-SMTP:1 cannot run anywhere
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: pcmk__native_allocate: Resource ping_19216810010 cannot run anywhere
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: pcmk__native_allocate: Resource ping_19216810020 cannot run anywhere
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: pcmk__native_allocate: Unmanaged resource vm_crispor allocated to no node: inactive
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: pcmk__native_allocate: Unmanaged resource vm_snipanalysis allocated to no node: inactive
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: RecurringOp: Start recurring monitor (30s) for vm-genetrap on ha-idg-1
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: LogActions: Leave fence_ilo_ha-idg-2 (Started ha-idg-1)
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: LogActions: Leave fence_ilo_ha-idg-1 (Started ha-idg-1)
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: LogActions: Leave dlm:0 (Started ha-idg-1)
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: LogActions: Leave clvmd:0 (Started ha-idg-1)
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: LogActions: Leave gfs2_share:0 (Started ha-idg-1)
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: LogActions: Leave gfs2_snap:0 (Started ha-idg-1)
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: LogActions: Leave fs_ocfs2:0 (Started ha-idg-1)
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: LogActions: Leave dlm:1 (Started ha-idg-2)
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: LogActions: Leave clvmd:1 (Started ha-idg-2)
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: LogActions: Leave gfs2_share:1 (Started ha-idg-2)
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: LogActions: Leave gfs2_snap:1 (Started ha-idg-2)
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: LogActions: Leave fs_ocfs2:1 (Started ha-idg-2)
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: LogActions: Leave ClusterMon-SMTP:0 (Stopped)
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: LogActions: Leave ClusterMon-SMTP:1 (Stopped)
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: LogActions: Leave vm-mausdb (Started ha-idg-1)
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: LogActions: Leave vm-sim (Started ha-idg-1)
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: LogActions: Leave vm-geneious (Started ha-idg-1)
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: LogActions: Leave vm-idcc-devel (Started ha-idg-1)
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: LogActions: Leave vm-genetrap (Started ha-idg-1)
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: LogActions: Leave vm-mouseidgenes (Started ha-idg-1)
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: LogActions: Leave vm-greensql (Started ha-idg-1)
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: LogActions: Leave vm-severin (Started ha-idg-1)
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: LogActions: Leave ping_19216810010 (Stopped)
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: LogActions: Leave ping_19216810020 (Stopped)
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: LogActions: Leave vm_crispor (Stopped unmanaged)
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: LogActions: Leave vm-dietrich (Started ha-idg-1)
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: LogActions: Leave vm-pathway (Started ha-idg-1)
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: LogActions: Leave vm-crispor-server (Started ha-idg-1)
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: LogActions: Leave vm-geneious-license (Started ha-idg-1)
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: LogActions: Leave vm-nc-mcd (Started ha-idg-1)
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: LogActions: Leave vm-amok (Started ha-idg-1)
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: LogActions: Leave vm-geneious-license-mcd (Started ha-idg-1)
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: LogActions: Leave vm-documents-oo (Started ha-idg-1)
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: LogActions: Leave fs_test_ocfs2 (Started ha-idg-2)
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: LogActions: Leave vm-ssh (Started ha-idg-1)
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: LogActions: Leave vm_snipanalysis (Stopped unmanaged)
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: LogActions: Leave vm-seneca (Started ha-idg-1)
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: LogActions: Leave vm-photoshop (Started ha-idg-1)
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: LogActions: Leave vm-check-mk (Started ha-idg-1)
Oct 14 19:26:33 [26000] ha-idg-1 pengine: info: LogActions: Leave vm-encore (Started ha-idg-1)
no restart !!!
The only difference I can see is the section I marked with "----------".
But I don't understand why it is different.
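(One way to dig further, sketched here with a placeholder file name and assuming the scheduler inputs for the two transitions computed after each trace call are still saved under /var/lib/pacemaker/pengine/, would be to replay the two pe-input files with crm_simulate and compare what it decides for vm-genetrap:

crm_simulate -x /var/lib/pacemaker/pengine/pe-input-NNN.bz2 -S

Replace pe-input-NNN.bz2 with the file belonging to each transition.)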
Bernd