[Pacemaker] Failover after fail on Ethernet Fails (unSolved again)

Stefan Kelemen Stefan.Kelemen at gmx.de
Wed Apr 21 03:41:56 EDT 2010


So I made a complete hb_report (attached) covering ten minutes, during which I disconnected the Ethernet cable (no failover occurred) and reconnected it.
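For reference, the command that produces such a report is a one-liner; a hedged sketch, assuming the cluster-glue hb_report tool with its -f/-t (from/to time) options, and with placeholder times and destination (the actual window and path used for the attachment are not shown here):

```
hb_report -f "2010-04-21 09:00" -t "2010-04-21 09:10" /tmp/failover-report
```

The -f/-t options bracket the incident (here, the cable pull and re-plug), and the resulting tarball is what gets attached to the list.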





-------- Original Message --------
> Date: Wed, 21 Apr 2010 08:24:12 +0200
> From: Andrew Beekhof <andrew at beekhof.net>
> To: The Pacemaker cluster resource manager <pacemaker at oss.clusterlabs.org>
> Subject: Re: [Pacemaker] Failover after fail on Ethernet Fails (unSolved again)

> On Tue, Apr 20, 2010 at 5:06 PM, Stefan Kelemen <Stefan.Kelemen at gmx.de>
> wrote:
> > Simpler, yes: I see no ping-request errors, but it still does not fail
> > over; not with the group, not with ms_drbd, not with ....
> >
> > Config with ping instead of pingd
> 
> Can you create a hb_report covering the time during which the problem
> occurred?
> The config alone isn't enough to know what's going on.
> 
> >
> > --------------------
> >
> > primitive pri_pingsys ocf:pacemaker:ping \
> >        params host_list="192.168.1.1 / 192.168.4.10" multiplier="100" dampen="5" \
> >        op monitor interval="15"
> > group group_t3 pri_FS_drbd_t3 pri_IP_Cluster pri_apache_Dienst
> > ms ms_drbd_service pri_drbd_Dienst \
> >        meta notify="true" target-role="Started"
> > clone clo_ping pri_pingsys \
> >        meta globally_unique="false" interleave="true" target-role="Started"
> > location loc_drbd_on_connected_node ms_drbd_service \
> >        rule $id="loc_group_t3_on_connected_node-rule" ping: defined ping
> > colocation col_apache_after_drbd inf: group_t3 ms_drbd_service:Master
> > order ord_apache_after_drbd inf: ms_drbd_service:promote group_t3:start
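[Editor's note: the ping examples in the Pacemaker documentation usually forbid nodes that lost connectivity outright, instead of only scoring on a defined attribute. A hedged sketch of that alternative rule (the attribute name here is an assumption; it must match the agent's `name` parameter, which for ocf:pacemaker:ping defaults to `pingd`, not `ping`):

```
location loc_drbd_on_connected_node ms_drbd_service \
        rule -inf: not_defined pingd or pingd lte 0
```

With such a rule, a node whose ping attribute is missing or zero can no longer run the master at all, rather than merely scoring lower.]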
> >
> >
> >
> > -------- Original Message --------
> >> Date: Mon, 19 Apr 2010 17:28:48 +0200
> >> From: Andrew Beekhof <andrew at beekhof.net>
> >> To: The Pacemaker cluster resource manager <pacemaker at oss.clusterlabs.org>
> >> Subject: Re: [Pacemaker] Failover after fail on Ethernet Fails (unSolved again)
> >
> >> I would seriously consider giving the ping agent a go instead of pingd.
> >> It uses the ping utility that comes with your system and so is far
> >> simpler.
> >>
> >> On Mon, Apr 19, 2010 at 4:38 PM, Stefan Kelemen <Stefan.Kelemen at gmx.de>
> >> wrote:
> >> > Those are test servers; the hostnames of these machines are only in
> >> > /etc/hosts
> >> >
> >> > -------- Original Message --------
> >> >> Date: Mon, 19 Apr 2010 15:00:33 +0200
> >> >> From: Andrew Beekhof <andrew at beekhof.net>
> >> >> To: The Pacemaker cluster resource manager <pacemaker at oss.clusterlabs.org>
> >> >> Subject: Re: [Pacemaker] Failover after fail on Ethernet Fails (unSolved again)
> >> >
> >> >> On Mon, Apr 19, 2010 at 2:27 PM, Stefan Kelemen
> <Stefan.Kelemen at gmx.de>
> >> >> wrote:
> >> >> > While collecting the logs for the various states, I found that the
> >> >> > failover works, but...
> >> >> > When I reconnect the Ethernet cable, the resources go offline.
> >> >> > All resources are stopped, and cleanup doesn't work.
> >> >> > After a reboot everything is back to normal.
> >> >> >
> >> >> > Hm, I see many errors from pingd; what are those?
> >> >>
> >> >> Well, they're warnings, not errors, but basically they're saying that
> >> >> it can't look up the hostname it was given,
> >> >> presumably because the link to the DNS server is also down.
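[Editor's note: the lookup failure is easy to reproduce outside the cluster. A minimal sketch, under the assumption that pingd resolves each host_list entry with getaddrinfo before opening a socket, so any token that is not a numeric address, possibly including the stray "/" in host_list="192.168.1.1 / 192.168.4.10" above, or any DNS name while the uplink is down, fails with exactly this message:

```python
import socket

# A numeric address resolves locally without DNS; a non-address token
# needs DNS (or /etc/hosts) and raises "Name or service not known"
# when neither can supply it.
for host in ["192.168.1.1", "/"]:
    try:
        socket.getaddrinfo(host, None)
        print(host, "resolves")
    except socket.gaierror as err:
        print(host, "->", err)
```

Listing only numeric addresses in host_list sidesteps the lookup entirely.]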
> >> >>
> >> >> >
> >> >> > ---------------
> >> >> >
> >> >> > /var/log/messages  from deconnect to reconnect
> >> >> >
> >> >> > Apr 19 14:15:11 server1 pingd: [2837]: WARN: ping_open: getaddrinfo: Name or service not known
> >> >> > Apr 19 14:15:12 server1 pingd: [2837]: WARN: ping_open: getaddrinfo: Name or service not known
> >> >> > Apr 19 14:15:13 server1 pingd: [2837]: WARN: ping_open: getaddrinfo: Name or service not known
> >> >> > Apr 19 14:15:14 server1 pingd: [2837]: WARN: ping_open: getaddrinfo: Name or service not known
> >> >> > Apr 19 14:15:15 server1 pingd: [2837]: WARN: ping_open: getaddrinfo: Name or service not known
> >> >> > Apr 19 14:15:16 server1 pingd: [2837]: WARN: ping_open: getaddrinfo: Name or service not known
> >> >> > Apr 19 14:15:17 server1 pingd: [2837]: WARN: ping_open: getaddrinfo: Name or service not known
> >> >> > Apr 19 14:15:18 server1 pingd: [2837]: WARN: ping_open: getaddrinfo: Name or service not known
> >> >> > Apr 19 14:15:19 server1 pingd: [2837]: WARN: ping_open: getaddrinfo: Name or service not known
> >> >> > Apr 19 14:15:20 server1 pingd: [2837]: WARN: ping_open: getaddrinfo: Name or service not known
> >> >> > Apr 19 14:15:21 server1 pingd: [2837]: WARN: ping_open: getaddrinfo: Name or service not known
> >> >> > Apr 19 14:15:22 server1 pingd: [2837]: WARN: ping_open: getaddrinfo: Name or service not known
> >> >> > Apr 19 14:15:23 server1 kernel: [  440.288130] eth1: link down
> >> >> > Apr 19 14:15:25 server1 pingd: [2837]: info: stand_alone_ping:
> Node
> >> >> 192.168.1.1 is unreachable (read)
> >> >> > Apr 19 14:15:27 server1 attrd: [2646]: info: attrd_trigger_update:
> >> >> Sending flush op to all hosts for: pingd (200)
> >> >> > Apr 19 14:15:27 server1 attrd: [2646]: info: attrd_ha_callback:
> flush
> >> >> message from server1
> >> >> > Apr 19 14:15:27 server1 attrd: [2646]: info: attrd_perform_update:
> >> Sent
> >> >> update 24: pingd=200
> >> >> > Apr 19 14:15:28 server1 pingd: [2837]: info: stand_alone_ping:
> Node
> >> >> 192.168.1.1 is unreachable (read)
> >> >> > Apr 19 14:15:33 server1 heartbeat: [2586]: WARN: node server2: is
> >> dead
> >> >> > Apr 19 14:15:33 server1 heartbeat: [2586]: info: Link server2:eth1
> >> dead.
> >> >> > Apr 19 14:15:33 server1 crmd: [2647]: notice:
> >> crmd_ha_status_callback:
> >> >> Status update: Node server2 now has status [dead] (DC=false)
> >> >> > Apr 19 14:15:33 server1 crmd: [2647]: info: crm_update_peer_proc:
> >> >> server2.ais is now offline
> >> >> > Apr 19 14:15:33 server1 cib: [2643]: info: mem_handle_event: Got
> an
> >> >> event OC_EV_MS_NOT_PRIMARY from ccm
> >> >> > Apr 19 14:15:33 server1 crmd: [2647]: info: mem_handle_event: Got
> an
> >> >> event OC_EV_MS_NOT_PRIMARY from ccm
> >> >> > Apr 19 14:15:33 server1 crmd: [2647]: info: mem_handle_event:
> >> >> instance=2, nodes=2, new=2, lost=0, n_idx=0, new_idx=0, old_idx=4
> >> >> > Apr 19 14:15:33 server1 cib: [2643]: info: mem_handle_event:
> >> instance=2,
> >> >> nodes=2, new=2, lost=0, n_idx=0, new_idx=0, old_idx=4
> >> >> > Apr 19 14:15:33 server1 crmd: [2647]: info: crmd_ccm_msg_callback:
> >> >> Quorum lost after event=NOT PRIMARY (id=2)
> >> >> > Apr 19 14:15:33 server1 cib: [2643]: info: cib_ccm_msg_callback:
> >> >> Processing CCM event=NOT PRIMARY (id=2)
> >> >> > Apr 19 14:15:33 server1 ccm: [2642]: info:
> ccm_state_sent_memlistreq:
> >> >> directly callccm_compute_and_send_final_memlist()
> >> >> > Apr 19 14:15:33 server1 ccm: [2642]: info: Break tie for 2 nodes
> >> cluster
> >> >> > Apr 19 14:15:33 server1 cib: [2643]: info: mem_handle_event: Got
> an
> >> >> event OC_EV_MS_INVALID from ccm
> >> >> > Apr 19 14:15:33 server1 cib: [2643]: info: mem_handle_event: no
> >> >> mbr_track info
> >> >> > Apr 19 14:15:33 server1 cib: [2643]: info: mem_handle_event: Got
> an
> >> >> event OC_EV_MS_NEW_MEMBERSHIP from ccm
> >> >> > Apr 19 14:15:33 server1 cib: [2643]: info: mem_handle_event:
> >> instance=3,
> >> >> nodes=1, new=0, lost=1, n_idx=0, new_idx=1, old_idx=3
> >> >> > Apr 19 14:15:33 server1 cib: [2643]: info: cib_ccm_msg_callback:
> >> >> Processing CCM event=NEW MEMBERSHIP (id=3)
> >> >> > Apr 19 14:15:33 server1 cib: [2643]: info: crm_update_peer: Node
> >> >> server2: id=1 state=lost (new) addr=(null) votes=-1 born=1 seen=2
> >> >> proc=00000000000000000000000000000302
> >> >> > Apr 19 14:15:33 server1 crmd: [2647]: info: mem_handle_event: Got
> an
> >> >> event OC_EV_MS_INVALID from ccm
> >> >> > Apr 19 14:15:33 server1 crmd: [2647]: info: mem_handle_event: no
> >> >> mbr_track info
> >> >> > Apr 19 14:15:33 server1 crmd: [2647]: info: mem_handle_event: Got
> an
> >> >> event OC_EV_MS_NEW_MEMBERSHIP from ccm
> >> >> > Apr 19 14:15:33 server1 crmd: [2647]: info: mem_handle_event:
> >> >> instance=3, nodes=1, new=0, lost=1, n_idx=0, new_idx=1, old_idx=3
> >> >> > Apr 19 14:15:33 server1 crmd: [2647]: info: crmd_ccm_msg_callback:
> >> >> Quorum (re)attained after event=NEW MEMBERSHIP (id=3)
> >> >> > Apr 19 14:15:33 server1 crmd: [2647]: info: ccm_event_detail: NEW
> >> >> MEMBERSHIP: trans=3, nodes=1, new=0, lost=1 n_idx=0, new_idx=1,
> >> old_idx=3
> >> >> > Apr 19 14:15:33 server1 crmd: [2647]: info: ccm_event_detail:
> >> >> #011CURRENT: server1 [nodeid=0, born=3]
> >> >> > Apr 19 14:15:33 server1 crmd: [2647]: info: ccm_event_detail:
> >> #011LOST:
> >> >>    server2 [nodeid=1, born=1]
> >> >> > Apr 19 14:15:33 server1 crmd: [2647]: info: crm_update_peer: Node
> >> >> server2: id=1 state=lost (new) addr=(null) votes=-1 born=1 seen=2
> >> >> proc=00000000000000000000000000000200
> >> >> > Apr 19 14:15:33 server1 crmd: [2647]: WARN: check_dead_member: Our
> DC
> >> >> node (server2) left the cluster
> >> >> > Apr 19 14:15:33 server1 crmd: [2647]: info: do_state_transition:
> >> State
> >> >> transition S_NOT_DC -> S_ELECTION [ input=I_ELECTION
> >> cause=C_FSA_INTERNAL
> >> >> origin=check_dead_member ]
> >> >> > Apr 19 14:15:33 server1 crmd: [2647]: info: update_dc: Unset DC
> >> server2
> >> >> > Apr 19 14:15:33 server1 ccm: [2642]: info: Break tie for 2 nodes
> >> cluster
> >> >> > Apr 19 14:15:34 server1 crmd: [2647]: info: do_state_transition:
> >> State
> >> >> transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC
> >> >> cause=C_FSA_INTERNAL origin=do_election_check ]
> >> >> > Apr 19 14:15:34 server1 crmd: [2647]: info: do_te_control:
> >> Registering
> >> >> TE UUID: d6a0122e-5574-4d0c-b15b-1bd452b9062c
> >> >> > Apr 19 14:15:34 server1 crmd: [2647]: WARN:
> >> >> cib_client_add_notify_callback: Callback already present
> >> >> > Apr 19 14:15:34 server1 crmd: [2647]: info: set_graph_functions:
> >> Setting
> >> >> custom graph functions
> >> >> > Apr 19 14:15:34 server1 crmd: [2647]: info: unpack_graph: Unpacked
> >> >> transition -1: 0 actions in 0 synapses
> >> >> > Apr 19 14:15:34 server1 crmd: [2647]: info: start_subsystem:
> Starting
> >> >> sub-system "pengine"
> >> >> > Apr 19 14:15:34 server1 pengine: [8287]: info: Invoked:
> >> >> /usr/lib/heartbeat/pengine
> >> >> > Apr 19 14:15:34 server1 pengine: [8287]: info: main: Starting
> pengine
> >> >> > Apr 19 14:15:37 server1 crmd: [2647]: info: do_dc_takeover: Taking
> >> over
> >> >> DC status for this partition
> >> >> > Apr 19 14:15:37 server1 cib: [2643]: info: cib_process_readwrite:
> We
> >> are
> >> >> now in R/W mode
> >> >> > Apr 19 14:15:37 server1 cib: [2643]: info: cib_process_request:
> >> >> Operation complete: op cib_master for section 'all'
> >> (origin=local/crmd/27,
> >> >> version=0.643.36): ok (rc=0)
> >> >> > Apr 19 14:15:37 server1 cib: [2643]: info: cib_process_request:
> >> >> Operation complete: op cib_modify for section cib
> >> (origin=local/crmd/28,
> >> >> version=0.643.36): ok (rc=0)
> >> >> > Apr 19 14:15:37 server1 cib: [2643]: info: cib_process_request:
> >> >> Operation complete: op cib_modify for section crm_config
> >> (origin=local/crmd/30,
> >> >> version=0.643.36): ok (rc=0)
> >> >> > Apr 19 14:15:37 server1 crmd: [2647]: info: join_make_offer:
> Making
> >> join
> >> >> offers based on membership 3
> >> >> > Apr 19 14:15:37 server1 crmd: [2647]: info: do_dc_join_offer_all:
> >> >> join-1: Waiting on 1 outstanding join acks
> >> >> > Apr 19 14:15:37 server1 crmd: [2647]: info: te_connect_stonith:
> >> >> Attempting connection to fencing daemon...
> >> >> > Apr 19 14:15:37 server1 cib: [2643]: info: cib_process_request:
> >> >> Operation complete: op cib_modify for section crm_config
> >> (origin=local/crmd/32,
> >> >> version=0.643.36): ok (rc=0)
> >> >> > Apr 19 14:15:38 server1 crmd: [2647]: info: te_connect_stonith:
> >> >> Connected
> >> >> > Apr 19 14:15:38 server1 crmd: [2647]: info: config_query_callback:
> >> >> Checking for expired actions every 900000ms
> >> >> > Apr 19 14:15:38 server1 crmd: [2647]: info: update_dc: Set DC to
> >> server1
> >> >> (3.0.1)
> >> >> > Apr 19 14:15:38 server1 crmd: [2647]: info: do_state_transition:
> >> State
> >> >> transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED
> >> >> cause=C_FSA_INTERNAL origin=check_join_state ]
> >> >> > Apr 19 14:15:38 server1 crmd: [2647]: info: do_state_transition:
> All
> >> 1
> >> >> cluster nodes responded to the join offer.
> >> >> > Apr 19 14:15:38 server1 crmd: [2647]: info: do_dc_join_finalize:
> >> join-1:
> >> >> Syncing the CIB from server1 to the rest of the cluster
> >> >> > Apr 19 14:15:38 server1 cib: [2643]: info: cib_process_request:
> >> >> Operation complete: op cib_sync for section 'all'
> >> (origin=local/crmd/35,
> >> >> version=0.643.36): ok (rc=0)
> >> >> > Apr 19 14:15:38 server1 cib: [2643]: info: cib_process_request:
> >> >> Operation complete: op cib_modify for section nodes
> >> (origin=local/crmd/36,
> >> >> version=0.643.36): ok (rc=0)
> >> >> > Apr 19 14:15:39 server1 pingd: [2837]: WARN: ping_open: getaddrinfo: Name or service not known
> >> >> > Apr 19 14:15:39 server1 crmd: [2647]: info: do_dc_join_ack:
> join-1:
> >> >> Updating node state to member for server1
> >> >> > Apr 19 14:15:39 server1 cib: [2643]: info: cib_process_request:
> >> >> Operation complete: op cib_delete for section
> >> //node_state[@uname='server1']/lrm
> >> >> (origin=local/crmd/37, version=0.643.37): ok (rc=0)
> >> >> > Apr 19 14:15:39 server1 crmd: [2647]: info: erase_xpath_callback:
> >> >> Deletion of "//node_state[@uname='server1']/lrm": ok (rc=0)
> >> >> > Apr 19 14:15:39 server1 crmd: [2647]: info: do_state_transition:
> >> State
> >> >> transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED
> >> >> cause=C_FSA_INTERNAL origin=check_join_state ]
> >> >> > Apr 19 14:15:39 server1 crmd: [2647]: info: populate_cib_nodes_ha:
> >> >> Requesting the list of configured nodes
> >> >> > Apr 19 14:15:40 server1 crmd: [2647]: info: do_state_transition:
> All
> >> 1
> >> >> cluster nodes are eligible to run resources.
> >> >> > Apr 19 14:15:40 server1 crmd: [2647]: info: do_dc_join_final:
> >> Ensuring
> >> >> DC, quorum and node attributes are up-to-date
> >> >> > Apr 19 14:15:40 server1 crmd: [2647]: info: crm_update_quorum:
> >> Updating
> >> >> quorum status to true (call=41)
> >> >> > Apr 19 14:15:40 server1 crmd: [2647]: info:
> abort_transition_graph:
> >> >> do_te_invoke:191 - Triggered transition abort (complete=1) : Peer
> >> Cancelled
> >> >> > Apr 19 14:15:40 server1 crmd: [2647]: info: do_pe_invoke: Query
> 42:
> >> >> Requesting the current CIB: S_POLICY_ENGINE
> >> >> > Apr 19 14:15:40 server1 attrd: [2646]: info: attrd_local_callback:
> >> >> Sending full refresh (origin=crmd)
> >> >> > Apr 19 14:15:40 server1 attrd: [2646]: info: attrd_trigger_update:
> >> >> Sending flush op to all hosts for: master-pri_drbd_Dienst:0 (10000)
> >> >> > Apr 19 14:15:40 server1 cib: [2643]: info: cib_process_request:
> >> >> Operation complete: op cib_modify for section nodes
> >> (origin=local/crmd/39,
> >> >> version=0.643.38): ok (rc=0)
> >> >> > Apr 19 14:15:40 server1 crmd: [2647]: WARN: match_down_event: No
> >> match
> >> >> for shutdown action on 5262f929-1082-4a85-aa05-7bd1992f15be
> >> >> > Apr 19 14:15:40 server1 crmd: [2647]: info: te_update_diff:
> >> >> Stonith/shutdown of 5262f929-1082-4a85-aa05-7bd1992f15be not matched
> >> >> > Apr 19 14:15:40 server1 crmd: [2647]: info:
> abort_transition_graph:
> >> >> te_update_diff:191 - Triggered transition abort (complete=1,
> >> tag=node_state,
> >> >> id=5262f929-1082-4a85-aa05-7bd1992f15be, magic=NA, cib=0.643.39) :
> Node
> >> >> failure
> >> >> > Apr 19 14:15:40 server1 crmd: [2647]: info: do_pe_invoke: Query
> 43:
> >> >> Requesting the current CIB: S_POLICY_ENGINE
> >> >> > Apr 19 14:15:40 server1 crmd: [2647]: info:
> abort_transition_graph:
> >> >> need_abort:59 - Triggered transition abort (complete=1) : Non-status
> >> change
> >> >> > Apr 19 14:15:40 server1 crmd: [2647]: info: need_abort: Aborting
> on
> >> >> change to admin_epoch
> >> >> > Apr 19 14:15:40 server1 crmd: [2647]: info: do_pe_invoke: Query
> 44:
> >> >> Requesting the current CIB: S_POLICY_ENGINE
> >> >> > Apr 19 14:15:40 server1 cib: [2643]: info: log_data_element:
> >> cib:diff: -
> >> >> <cib dc-uuid="5262f929-1082-4a85-aa05-7bd1992f15be" admin_epoch="0"
> >> >> epoch="643" num_updates="39" />
> >> >> > Apr 19 14:15:40 server1 cib: [2643]: info: log_data_element:
> >> cib:diff: +
> >> >> <cib dc-uuid="3e20966a-ed64-4972-8f5a-88be0977f759" admin_epoch="0"
> >> >> epoch="644" num_updates="1" />
> >> >> > Apr 19 14:15:40 server1 cib: [2643]: info: cib_process_request:
> >> >> Operation complete: op cib_modify for section cib
> >> (origin=local/crmd/41,
> >> >> version=0.644.1): ok (rc=0)
> >> >> > Apr 19 14:15:40 server1 attrd: [2646]: info: attrd_trigger_update:
> >> >> Sending flush op to all hosts for: probe_complete (true)
> >> >> > Apr 19 14:15:40 server1 attrd: [2646]: info: attrd_trigger_update:
> >> >> Sending flush op to all hosts for: terminate (<null>)
> >> >> > Apr 19 14:15:40 server1 attrd: [2646]: info: attrd_trigger_update:
> >> >> Sending flush op to all hosts for: master-pri_drbd_Dienst:1 (<null>)
> >> >> > Apr 19 14:15:40 server1 pengine: [8287]: notice: unpack_config: On
> >> loss
> >> >> of CCM Quorum: Ignore
> >> >> > Apr 19 14:15:40 server1 pengine: [8287]: info: unpack_config: Node
> >> >> scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
> >> >> > Apr 19 14:15:40 server1 pengine: [8287]: WARN: unpack_nodes: Blind
> >> >> faith: not fencing unseen nodes
> >> >> > Apr 19 14:15:40 server1 pengine: [8287]: info:
> >> determine_online_status:
> >> >> Node server1 is online
> >> >> > Apr 19 14:15:40 server1 pengine: [8287]: notice: clone_print:
> >> >>  Master/Slave Set: ms_drbd_service
> >> >> > Apr 19 14:15:40 server1 pengine: [8287]: notice: short_print:
> >> >>  Masters: [ server1 ]
> >> >> > Apr 19 14:15:40 server1 pengine: [8287]: notice: short_print:
> >> >>  Stopped: [ pri_drbd_Dienst:1 ]
> >> >> > Apr 19 14:15:40 server1 pengine: [8287]: notice: group_print:
> >>  Resource
> >> >> Group: group_t3
> >> >> > Apr 19 14:15:40 server1 pengine: [8287]: notice: native_print:
> >> >>  pri_FS_drbd_t3#011(ocf::heartbeat:Filesystem):#011Started server1
> >> >> > Apr 19 14:15:40 server1 pengine: [8287]: notice: native_print:
> >> >>  pri_IP_Cluster#011(ocf::heartbeat:IPaddr2):#011Started server1
> >> >> > Apr 19 14:15:40 server1 pengine: [8287]: notice: native_print:
> >> >>  pri_apache_Dienst#011(ocf::heartbeat:apache):#011Started server1
> >> >> > Apr 19 14:15:40 server1 pengine: [8287]: notice: clone_print:
>  Clone
> >> >> Set: clo_pingd
> >> >> > Apr 19 14:15:40 server1 pengine: [8287]: notice: short_print:
> >> >>  Started: [ server1 ]
> >> >> > Apr 19 14:15:40 server1 pengine: [8287]: notice: short_print:
> >> >>  Stopped: [ pri_pingd:1 ]
> >> >> > Apr 19 14:15:40 server1 crmd: [2647]: info: do_pe_invoke_callback:
> >> >> Invoking the PE: query=44, ref=pe_calc-dc-1271679340-11, seq=3,
> >> quorate=1
> >> >> > Apr 19 14:15:40 server1 pengine: [8287]: info:
> native_merge_weights:
> >> >> ms_drbd_service: Rolling back scores from pri_FS_drbd_t3
> >> >> > Apr 19 14:15:40 server1 pengine: [8287]: info:
> native_merge_weights:
> >> >> ms_drbd_service: Rolling back scores from pri_IP_Cluster
> >> >> > Apr 19 14:15:40 server1 pengine: [8287]: WARN: native_color:
> Resource
> >> >> pri_drbd_Dienst:1 cannot run anywhere
> >> >> > Apr 19 14:15:40 server1 pengine: [8287]: info:
> native_merge_weights:
> >> >> ms_drbd_service: Rolling back scores from pri_FS_drbd_t3
> >> >> > Apr 19 14:15:40 server1 pengine: [8287]: info:
> native_merge_weights:
> >> >> ms_drbd_service: Rolling back scores from pri_IP_Cluster
> >> >> > Apr 19 14:15:40 server1 pengine: [8287]: info: master_color:
> >> Promoting
> >> >> pri_drbd_Dienst:0 (Master server1)
> >> >> > Apr 19 14:15:40 server1 pengine: [8287]: info: master_color:
> >> >> ms_drbd_service: Promoted 1 instances of a possible 1 to master
> >> >> > Apr 19 14:15:40 server1 pengine: [8287]: info: master_color:
> >> Promoting
> >> >> pri_drbd_Dienst:0 (Master server1)
> >> >> > Apr 19 14:15:40 server1 pengine: [8287]: info: master_color:
> >> >> ms_drbd_service: Promoted 1 instances of a possible 1 to master
> >> >> > Apr 19 14:15:40 server1 pengine: [8287]: info:
> native_merge_weights:
> >> >> pri_FS_drbd_t3: Rolling back scores from pri_IP_Cluster
> >> >> > Apr 19 14:15:40 server1 pengine: [8287]: WARN: native_color:
> Resource
> >> >> pri_pingd:1 cannot run anywhere
> >> >> > Apr 19 14:15:40 server1 pengine: [8287]: notice: LogActions: Leave
> >> >> resource pri_drbd_Dienst:0#011(Master server1)
> >> >> > Apr 19 14:15:40 server1 pengine: [8287]: notice: LogActions: Leave
> >> >> resource pri_drbd_Dienst:1#011(Stopped)
> >> >> > Apr 19 14:15:40 server1 pengine: [8287]: notice: LogActions: Leave
> >> >> resource pri_FS_drbd_t3#011(Started server1)
> >> >> > Apr 19 14:15:40 server1 pengine: [8287]: notice: LogActions: Leave
> >> >> resource pri_IP_Cluster#011(Started server1)
> >> >> > Apr 19 14:15:40 server1 pengine: [8287]: notice: LogActions: Leave
> >> >> resource pri_apache_Dienst#011(Started server1)
> >> >> > Apr 19 14:15:40 server1 pengine: [8287]: notice: LogActions: Leave
> >> >> resource pri_pingd:0#011(Started server1)
> >> >> > Apr 19 14:15:40 server1 pengine: [8287]: notice: LogActions: Leave
> >> >> resource pri_pingd:1#011(Stopped)
> >> >> > Apr 19 14:15:40 server1 attrd: [2646]: info: attrd_trigger_update:
> >> >> Sending flush op to all hosts for: shutdown (<null>)
> >> >> > Apr 19 14:15:40 server1 attrd: [2646]: info: attrd_trigger_update:
> >> >> Sending flush op to all hosts for: pingd (200)
> >> >> > Apr 19 14:15:40 server1 crmd: [2647]: info: do_state_transition:
> >> State
> >> >> transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [
> input=I_PE_SUCCESS
> >> >> cause=C_IPC_MESSAGE origin=handle_response ]
> >> >> > Apr 19 14:15:40 server1 crmd: [2647]: info: unpack_graph: Unpacked
> >> >> transition 0: 0 actions in 0 synapses
> >> >> > Apr 19 14:15:40 server1 crmd: [2647]: info: do_te_invoke:
> Processing
> >> >> graph 0 (ref=pe_calc-dc-1271679340-11) derived from
> >> >> /var/lib/pengine/pe-warn-489.bz2
> >> >> > Apr 19 14:15:40 server1 crmd: [2647]: info: run_graph:
> >> >> ====================================================
> >> >> > Apr 19 14:15:40 server1 crmd: [2647]: notice: run_graph:
> Transition 0
> >> >> (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0,
> >> >> Source=/var/lib/pengine/pe-warn-489.bz2): Complete
> >> >> > Apr 19 14:15:40 server1 crmd: [2647]: info: te_graph_trigger:
> >> Transition
> >> >> 0 is now complete
> >> >> > Apr 19 14:15:40 server1 crmd: [2647]: info: notify_crmd:
> Transition 0
> >> >> status: done - <null>
> >> >> > Apr 19 14:15:40 server1 crmd: [2647]: info: do_state_transition:
> >> State
> >> >> transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS
> >> >> cause=C_FSA_INTERNAL origin=notify_crmd ]
> >> >> > Apr 19 14:15:40 server1 crmd: [2647]: info: do_state_transition:
> >> >> Starting PEngine Recheck Timer
> >> >> > Apr 19 14:15:40 server1 cib: [8437]: info: write_cib_contents:
> >> Archived
> >> >> previous version as /var/lib/heartbeat/crm/cib-63.raw
> >> >> > Apr 19 14:15:40 server1 pengine: [8287]: WARN: process_pe_message:
> >> >> Transition 0: WARNINGs found during PE processing. PEngine Input
> stored
> >> in:
> >> >> /var/lib/pengine/pe-warn-489.bz2
> >> >> > Apr 19 14:15:40 server1 pengine: [8287]: info: process_pe_message:
> >> >> Configuration WARNINGs found during PE processing.  Please run
> >> "crm_verify -L"
> >> >> to identify issues.
> >> >> > Apr 19 14:15:40 server1 cib: [8437]: info: write_cib_contents:
> Wrote
> >> >> version 0.644.0 of the CIB to disk (digest:
> >> 762bf4c3d783ca09d0d61dbdece13737)
> >> >> > Apr 19 14:15:40 server1 cib: [8437]: info: retrieveCib: Reading
> >> cluster
> >> >> configuration from: /var/lib/heartbeat/crm/cib.3T4HBe (digest:
> >> >> /var/lib/heartbeat/crm/cib.TXzgWB)
> >> >> > Apr 19 14:15:40 server1 attrd: [2646]: info: attrd_ha_callback:
> flush
> >> >> message from server1
> >> >> > Apr 19 14:15:40 server1 attrd: [2646]: info: attrd_ha_callback:
> flush
> >> >> message from server1
> >> >> > Apr 19 14:15:40 server1 attrd: [2646]: info: attrd_ha_callback:
> flush
> >> >> message from server1
> >> >> > Apr 19 14:15:40 server1 attrd: [2646]: info: attrd_ha_callback:
> flush
> >> >> message from server1
> >> >> > Apr 19 14:15:40 server1 attrd: [2646]: info: attrd_perform_update:
> >> Sent
> >> >> update 37: pingd=200
> >> >> > Apr 19 14:15:40 server1 crmd: [2647]: info:
> abort_transition_graph:
> >> >> te_update_diff:146 - Triggered transition abort (complete=1,
> >> >> tag=transient_attributes, id=3e20966a-ed64-4972-8f5a-88be0977f759,
> >> magic=NA, cib=0.644.2) :
> >> >> Transient attribute: update
> >> >> > Apr 19 14:15:40 server1 crmd: [2647]: info: do_state_transition:
> >> State
> >> >> transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC
> >> cause=C_FSA_INTERNAL
> >> >> origin=abort_transition_graph ]
> >> >> > Apr 19 14:15:40 server1 crmd: [2647]: info: do_state_transition:
> All
> >> 1
> >> >> cluster nodes are eligible to run resources.
> >> >> > Apr 19 14:15:40 server1 crmd: [2647]: info: do_pe_invoke: Query
> 45:
> >> >> Requesting the current CIB: S_POLICY_ENGINE
> >> >> > Apr 19 14:15:40 server1 pengine: [8287]: notice: unpack_config: On
> >> loss
> >> >> of CCM Quorum: Ignore
> >> >> > Apr 19 14:15:40 server1 pengine: [8287]: info: unpack_config: Node
> >> >> scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
> >> >> > Apr 19 14:15:40 server1 pengine: [8287]: WARN: unpack_nodes: Blind
> >> >> faith: not fencing unseen nodes
> >> >> > Apr 19 14:15:40 server1 pengine: [8287]: info:
> >> determine_online_status:
> >> >> Node server1 is online
> >> >> > Apr 19 14:15:40 server1 pengine: [8287]: notice: clone_print:
> >> >>  Master/Slave Set: ms_drbd_service
> >> >> > Apr 19 14:15:40 server1 pengine: [8287]: notice: short_print:
> >> >>  Masters: [ server1 ]
> >> >> > Apr 19 14:15:40 server1 pengine: [8287]: notice: short_print:
> >> >>  Stopped: [ pri_drbd_Dienst:1 ]
> >> >> > Apr 19 14:15:40 server1 pengine: [8287]: notice: group_print:
> >>  Resource
> >> >> Group: group_t3
> >> >> > Apr 19 14:15:40 server1 pengine: [8287]: notice: native_print:
> >> >>  pri_FS_drbd_t3#011(ocf::heartbeat:Filesystem):#011Started server1
> >> >> > Apr 19 14:15:40 server1 pengine: [8287]: notice: native_print:
> >> >>  pri_IP_Cluster#011(ocf::heartbeat:IPaddr2):#011Started server1
> >> >> > Apr 19 14:15:40 server1 pengine: [8287]: notice: native_print:
> >> >>  pri_apache_Dienst#011(ocf::heartbeat:apache):#011Started server1
> >> >> > Apr 19 14:15:40 server1 pengine: [8287]: notice: clone_print:
>  Clone
> >> >> Set: clo_pingd
> >> >> > Apr 19 14:15:40 server1 pengine: [8287]: notice: short_print:
> >> >>  Started: [ server1 ]
> >> >> > Apr 19 14:15:40 server1 pengine: [8287]: notice: short_print:
> >> >>  Stopped: [ pri_pingd:1 ]
> >> >> > Apr 19 14:15:40 server1 pengine: [8287]: info:
> native_merge_weights:
> >> >> ms_drbd_service: Rolling back scores from pri_FS_drbd_t3
> >> >> > Apr 19 14:15:40 server1 pengine: [8287]: WARN: native_color:
> Resource
> >> >> pri_drbd_Dienst:1 cannot run anywhere
> >> >> > Apr 19 14:15:40 server1 pengine: [8287]: WARN: native_color:
> Resource
> >> >> pri_drbd_Dienst:0 cannot run anywhere
> >> >> > Apr 19 14:15:40 server1 pengine: [8287]: info:
> native_merge_weights:
> >> >> ms_drbd_service: Rolling back scores from pri_FS_drbd_t3
> >> >> > Apr 19 14:15:40 server1 pengine: [8287]: info: master_color:
> >> >> ms_drbd_service: Promoted 0 instances of a possible 1 to master
> >> >> > Apr 19 14:15:40 server1 pengine: [8287]: info: master_color:
> >> >> ms_drbd_service: Promoted 0 instances of a possible 1 to master
> >> >> > Apr 19 14:15:40 server1 pengine: [8287]: info:
> native_merge_weights:
> >> >> pri_FS_drbd_t3: Rolling back scores from pri_IP_Cluster
> >> >> > Apr 19 14:15:40 server1 pengine: [8287]: WARN: native_color:
> Resource
> >> >> pri_FS_drbd_t3 cannot run anywhere
> >> >> > Apr 19 14:15:40 server1 pengine: [8287]: info:
> native_merge_weights:
> >> >> pri_IP_Cluster: Rolling back scores from pri_apache_Dienst
> >> >> > Apr 19 14:15:40 server1 pengine: [8287]: WARN: native_color:
> Resource
> >> >> pri_IP_Cluster cannot run anywhere
> >> >> > Apr 19 14:15:40 server1 pengine: [8287]: WARN: native_color:
> Resource
> >> >> pri_apache_Dienst cannot run anywhere
> >> >> > Apr 19 14:15:40 server1 pengine: [8287]: WARN: native_color:
> Resource
> >> >> pri_pingd:1 cannot run anywhere
> >> >> > Apr 19 14:15:40 server1 pengine: [8287]: notice: LogActions:
> Demote
> >> >> pri_drbd_Dienst:0#011(Master -> Stopped server1)
> >> >> > Apr 19 14:15:40 server1 pengine: [8287]: notice: LogActions: Stop
> >> >> resource pri_drbd_Dienst:0#011(server1)
> >> >> > Apr 19 14:15:40 server1 pengine: [8287]: notice: LogActions: Leave
> >> >> resource pri_drbd_Dienst:1#011(Stopped)
> >> >> > Apr 19 14:15:40 server1 pengine: [8287]: notice: LogActions: Stop
> >> >> resource pri_FS_drbd_t3#011(server1)
> >> >> > Apr 19 14:15:40 server1 pengine: [8287]: notice: LogActions: Stop
> >> >> resource pri_IP_Cluster#011(server1)
> >> >> > Apr 19 14:15:40 server1 pengine: [8287]: notice: LogActions: Stop
> >> >> resource pri_apache_Dienst#011(server1)
> >> >> > Apr 19 14:15:40 server1 pengine: [8287]: notice: LogActions: Leave
> >> >> resource pri_pingd:0#011(Started server1)
> >> >> > Apr 19 14:15:40 server1 pengine: [8287]: notice: LogActions: Leave
> >> >> resource pri_pingd:1#011(Stopped)
> >> >> > Apr 19 14:15:40 server1 crmd: [2647]: info: do_pe_invoke_callback:
> >> >> Invoking the PE: query=45, ref=pe_calc-dc-1271679340-12, seq=3,
> >> quorate=1
> >> >> > Apr 19 14:15:40 server1 crmd: [2647]: info: do_state_transition:
> >> State
> >> >> transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [
> input=I_PE_SUCCESS
> >> >> cause=C_IPC_MESSAGE origin=handle_response ]
> >> >> > Apr 19 14:15:40 server1 crmd: [2647]: info: unpack_graph: Unpacked
> >> >> transition 1: 23 actions in 23 synapses
> >> >> > Apr 19 14:15:40 server1 crmd: [2647]: info: do_te_invoke:
> Processing
> >> >> graph 1 (ref=pe_calc-dc-1271679340-12) derived from
> >> >> /var/lib/pengine/pe-warn-490.bz2
> >> >> > Apr 19 14:15:40 server1 crmd: [2647]: info: te_pseudo_action:
> Pseudo
> >> >> action 29 fired and confirmed
> >> >> > Apr 19 14:15:40 server1 crmd: [2647]: info: te_pseudo_action:
> Pseudo
> >> >> action 38 fired and confirmed
> >> >> > Apr 19 14:15:40 server1 crmd: [2647]: info: te_rsc_command:
> >> Initiating
> >> >> action 35: stop pri_apache_Dienst_stop_0 on server1 (local)
> >> >> > Apr 19 14:15:40 server1 lrmd: [2644]: info: cancel_op: operation
> >> >> monitor[18] on ocf::apache::pri_apache_Dienst for client 2647, its
> >> parameters:
> >> >> CRM_meta_interval=[15000] CRM_meta_timeout=[120000]
> >> crm_feature_set=[3.0.1]
> >> >> port=[80] CRM_meta_name=[monitor]
> >> configfile=[/etc/apache2/apache2.conf]
> >> >> httpd=[/usr/sbin/apache2]  cancelled
> >> >> > Apr 19 14:15:40 server1 crmd: [2647]: info: do_lrm_rsc_op:
> Performing
> >> >> key=35:1:0:d6a0122e-5574-4d0c-b15b-1bd452b9062c
> >> op=pri_apache_Dienst_stop_0 )
> >> >> > Apr 19 14:15:40 server1 lrmd: [2644]: info:
> rsc:pri_apache_Dienst:19:
> >> >> stop
> >> >> > Apr 19 14:15:40 server1 crmd: [2647]: info: process_lrm_event: LRM
> >> >> operation pri_apache_Dienst_monitor_15000 (call=18, status=1,
> >> cib-update=0,
> >> >> confirmed=true) Cancelled
> >> >> > Apr 19 14:15:40 server1 crmd: [2647]: info: te_rsc_command:
> >> Initiating
> >> >> action 60: notify pri_drbd_Dienst:0_pre_notify_demote_0 on server1
> >> (local)
> >> >> > Apr 19 14:15:40 server1 crmd: [2647]: info: do_lrm_rsc_op:
> Performing
> >> >> key=60:1:0:d6a0122e-5574-4d0c-b15b-1bd452b9062c
> >> op=pri_drbd_Dienst:0_notify_0
> >> >> )
> >> >> > Apr 19 14:15:40 server1 lrmd: [2644]: info:
> rsc:pri_drbd_Dienst:0:20:
> >> >> notify
> >> >> > Apr 19 14:15:40 server1 pengine: [8287]: WARN: process_pe_message:
> >> >> Transition 1: WARNINGs found during PE processing. PEngine Input
> stored
> >> in:
> >> >> /var/lib/pengine/pe-warn-490.bz2
> >> >> > Apr 19 14:15:40 server1 pengine: [8287]: info: process_pe_message:
> >> >> Configuration WARNINGs found during PE processing.  Please run
> >> "crm_verify -L"
> >> >> to identify issues.
> >> >> > Apr 19 14:15:41 server1 crmd: [2647]: info: process_lrm_event: LRM
> >> >> operation pri_drbd_Dienst:0_notify_0 (call=20, rc=0, cib-update=46,
> >> >> confirmed=true) ok
> >> >> > Apr 19 14:15:41 server1 crmd: [2647]: info: match_graph_event:
> Action
> >> >> pri_drbd_Dienst:0_pre_notify_demote_0 (60) confirmed on server1
> (rc=0)
> >> >> > Apr 19 14:15:41 server1 crmd: [2647]: info: te_pseudo_action:
> Pseudo
> >> >> action 30 fired and confirmed
> >> >> > Apr 19 14:15:41 server1 pingd: [2837]: info: stand_alone_ping:
> Node
> >> >> 192.168.4.10 is unreachable (read)
> >> >> > Apr 19 14:15:42 server1 pingd: [2837]: info: stand_alone_ping:
> Node
> >> >> 192.168.4.10 is unreachable (read)
> >> >> > Apr 19 14:15:42 server1 lrmd: [2644]: info: RA output:
> >> >> (pri_apache_Dienst:stop:stderr)
> >> /usr/lib/ocf/resource.d//heartbeat/apache: line 437: kill:
> >> >> (3247) - No such process
> >> >> > Apr 19 14:15:42 server1 apache[8438]: INFO: Killing apache PID
> 3247
> >> >> > Apr 19 14:15:42 server1 apache[8438]: INFO: apache stopped.
> >> >> > Apr 19 14:15:42 server1 crmd: [2647]: info: process_lrm_event: LRM
> >> >> operation pri_apache_Dienst_stop_0 (call=19, rc=0, cib-update=47,
> >> >> confirmed=true) ok
> >> >> > Apr 19 14:15:42 server1 crmd: [2647]: info: match_graph_event:
> Action
> >> >> pri_apache_Dienst_stop_0 (35) confirmed on server1 (rc=0)
> >> >> > Apr 19 14:15:42 server1 crmd: [2647]: info: te_rsc_command:
> >> Initiating
> >> >> action 34: stop pri_IP_Cluster_stop_0 on server1 (local)
> >> >> > Apr 19 14:15:42 server1 lrmd: [2644]: info: cancel_op: operation monitor[16] on ocf::IPaddr2::pri_IP_Cluster for client 2647, its parameters: CRM_meta_interval=[3000] ip=[192.168.1.253] cidr_netmask=[24] CRM_meta_timeout=[120000] crm_feature_set=[3.0.1] CRM_meta_name=[monitor] nic=[eth1]  cancelled
> >> >> > Apr 19 14:15:42 server1 crmd: [2647]: info: do_lrm_rsc_op:
> Performing
> >> >> key=34:1:0:d6a0122e-5574-4d0c-b15b-1bd452b9062c
> >> op=pri_IP_Cluster_stop_0 )
> >> >> > Apr 19 14:15:42 server1 lrmd: [2644]: info: rsc:pri_IP_Cluster:21:
> >> stop
> >> >> > Apr 19 14:15:42 server1 crmd: [2647]: info: process_lrm_event: LRM
> >> >> operation pri_IP_Cluster_monitor_3000 (call=16, status=1,
> cib-update=0,
> >> >> confirmed=true) Cancelled
> >> >> > Apr 19 14:15:42 server1 IPaddr2[8533]: INFO: IP status = ok,
> IP_CIP=
> >> >> > Apr 19 14:15:42 server1 IPaddr2[8533]: INFO: ip -f inet addr
> delete
> >> >> 192.168.1.253/24 dev eth1
> >> >> > Apr 19 14:15:42 server1 crmd: [2647]: info: process_lrm_event: LRM
> >> >> operation pri_IP_Cluster_stop_0 (call=21, rc=0, cib-update=48,
> >> confirmed=true)
> >> >> ok
> >> >> > Apr 19 14:15:42 server1 crmd: [2647]: info: match_graph_event:
> Action
> >> >> pri_IP_Cluster_stop_0 (34) confirmed on server1 (rc=0)
> >> >> > Apr 19 14:15:42 server1 crmd: [2647]: info: te_rsc_command:
> >> Initiating
> >> >> action 33: stop pri_FS_drbd_t3_stop_0 on server1 (local)
> >> >> > Apr 19 14:15:42 server1 crmd: [2647]: info: do_lrm_rsc_op:
> Performing
> >> >> key=33:1:0:d6a0122e-5574-4d0c-b15b-1bd452b9062c
> >> op=pri_FS_drbd_t3_stop_0 )
> >> >> > Apr 19 14:15:42 server1 lrmd: [2644]: info: rsc:pri_FS_drbd_t3:22:
> >> stop
> >> >> > Apr 19 14:15:42 server1 Filesystem[8577]: INFO: Running stop for
> >> >> /dev/drbd0 on /mnt/drbd_daten
> >> >> > Apr 19 14:15:42 server1 Filesystem[8577]: INFO: Trying to unmount
> >> >> /mnt/drbd_daten
> >> >> > Apr 19 14:15:42 server1 Filesystem[8577]: INFO: unmounted
> >> >> /mnt/drbd_daten successfully
> >> >> > Apr 19 14:15:42 server1 crmd: [2647]: info: process_lrm_event: LRM
> >> >> operation pri_FS_drbd_t3_stop_0 (call=22, rc=0, cib-update=49,
> >> confirmed=true)
> >> >> ok
> >> >> > Apr 19 14:15:42 server1 crmd: [2647]: info: match_graph_event:
> Action
> >> >> pri_FS_drbd_t3_stop_0 (33) confirmed on server1 (rc=0)
> >> >> > Apr 19 14:15:42 server1 crmd: [2647]: info: te_pseudo_action:
> Pseudo
> >> >> action 39 fired and confirmed
> >> >> > Apr 19 14:15:42 server1 crmd: [2647]: info: te_pseudo_action:
> Pseudo
> >> >> action 27 fired and confirmed
> >> >> > Apr 19 14:15:42 server1 crmd: [2647]: info: te_rsc_command:
> >> Initiating
> >> >> action 7: demote pri_drbd_Dienst:0_demote_0 on server1 (local)
> >> >> > Apr 19 14:15:42 server1 crmd: [2647]: info: do_lrm_rsc_op:
> Performing
> >> >> key=7:1:0:d6a0122e-5574-4d0c-b15b-1bd452b9062c
> >> op=pri_drbd_Dienst:0_demote_0
> >> >> )
> >> >> > Apr 19 14:15:42 server1 lrmd: [2644]: info:
> rsc:pri_drbd_Dienst:0:23:
> >> >> demote
> >> >> > Apr 19 14:15:42 server1 kernel: [  459.806546] block drbd0: role(
> >> >> Primary -> Secondary )
> >> >> > Apr 19 14:15:42 server1 lrmd: [2644]: info: RA output:
> >> >> (pri_drbd_Dienst:0:demote:stdout)
> >> >> > Apr 19 14:15:42 server1 crmd: [2647]: info: process_lrm_event: LRM
> >> >> operation pri_drbd_Dienst:0_demote_0 (call=23, rc=0, cib-update=50,
> >> >> confirmed=true) ok
> >> >> > Apr 19 14:15:42 server1 crmd: [2647]: info: match_graph_event:
> Action
> >> >> pri_drbd_Dienst:0_demote_0 (7) confirmed on server1 (rc=0)
> >> >> > Apr 19 14:15:42 server1 crmd: [2647]: info: te_pseudo_action:
> Pseudo
> >> >> action 28 fired and confirmed
> >> >> > Apr 19 14:15:42 server1 crmd: [2647]: info: te_pseudo_action:
> Pseudo
> >> >> action 31 fired and confirmed
> >> >> > Apr 19 14:15:42 server1 crmd: [2647]: info: te_rsc_command:
> >> Initiating
> >> >> action 61: notify pri_drbd_Dienst:0_post_notify_demote_0 on server1
> >> (local)
> >> >> > Apr 19 14:15:42 server1 crmd: [2647]: info: do_lrm_rsc_op:
> Performing
> >> >> key=61:1:0:d6a0122e-5574-4d0c-b15b-1bd452b9062c
> >> op=pri_drbd_Dienst:0_notify_0
> >> >> )
> >> >> > Apr 19 14:15:42 server1 lrmd: [2644]: info:
> rsc:pri_drbd_Dienst:0:24:
> >> >> notify
> >> >> > Apr 19 14:15:42 server1 lrmd: [2644]: info: RA output:
> >> >> (pri_drbd_Dienst:0:notify:stdout)
> >> >> > Apr 19 14:15:42 server1 crmd: [2647]: info: process_lrm_event: LRM
> >> >> operation pri_drbd_Dienst:0_notify_0 (call=24, rc=0, cib-update=51,
> >> >> confirmed=true) ok
> >> >> > Apr 19 14:15:42 server1 crmd: [2647]: info: match_graph_event:
> Action
> >> >> pri_drbd_Dienst:0_post_notify_demote_0 (61) confirmed on server1
> (rc=0)
> >> >> > Apr 19 14:15:42 server1 crmd: [2647]: info: te_pseudo_action:
> Pseudo
> >> >> action 32 fired and confirmed
> >> >> > Apr 19 14:15:42 server1 crmd: [2647]: info: te_pseudo_action:
> Pseudo
> >> >> action 17 fired and confirmed
> >> >> > Apr 19 14:15:42 server1 crmd: [2647]: info: te_rsc_command:
> >> Initiating
> >> >> action 59: notify pri_drbd_Dienst:0_pre_notify_stop_0 on server1
> >> (local)
> >> >> > Apr 19 14:15:42 server1 crmd: [2647]: info: do_lrm_rsc_op:
> Performing
> >> >> key=59:1:0:d6a0122e-5574-4d0c-b15b-1bd452b9062c
> >> op=pri_drbd_Dienst:0_notify_0
> >> >> )
> >> >> > Apr 19 14:15:42 server1 lrmd: [2644]: info:
> rsc:pri_drbd_Dienst:0:25:
> >> >> notify
> >> >> > Apr 19 14:15:42 server1 crmd: [2647]: info: process_lrm_event: LRM
> >> >> operation pri_drbd_Dienst:0_notify_0 (call=25, rc=0, cib-update=52,
> >> >> confirmed=true) ok
> >> >> > Apr 19 14:15:42 server1 crmd: [2647]: info: match_graph_event:
> Action
> >> >> pri_drbd_Dienst:0_pre_notify_stop_0 (59) confirmed on server1 (rc=0)
> >> >> > Apr 19 14:15:42 server1 crmd: [2647]: info: te_pseudo_action:
> Pseudo
> >> >> action 18 fired and confirmed
> >> >> > Apr 19 14:15:42 server1 crmd: [2647]: info: te_pseudo_action:
> Pseudo
> >> >> action 15 fired and confirmed
> >> >> > Apr 19 14:15:42 server1 crmd: [2647]: info: te_rsc_command:
> >> Initiating
> >> >> action 8: stop pri_drbd_Dienst:0_stop_0 on server1 (local)
> >> >> > Apr 19 14:15:42 server1 crmd: [2647]: info: do_lrm_rsc_op:
> Performing
> >> >> key=8:1:0:d6a0122e-5574-4d0c-b15b-1bd452b9062c
> >> op=pri_drbd_Dienst:0_stop_0 )
> >> >> > Apr 19 14:15:42 server1 lrmd: [2644]: info:
> rsc:pri_drbd_Dienst:0:26:
> >> >> stop
> >> >> > Apr 19 14:15:42 server1 kernel: [  460.066902] block drbd0: peer(
> >> >> Secondary -> Unknown ) conn( Connected -> Disconnecting ) pdsk(
> >> UpToDate ->
> >> >> DUnknown )
> >> >> > Apr 19 14:15:42 server1 kernel: [  460.067035] block drbd0:
> asender
> >> >> terminated
> >> >> > Apr 19 14:15:42 server1 kernel: [  460.067039] block drbd0:
> >> Terminating
> >> >> asender thread
> >> >> > Apr 19 14:15:42 server1 lrmd: [2644]: info: RA output:
> >> >> (pri_drbd_Dienst:0:stop:stdout)
> >> >> > Apr 19 14:15:42 server1 kernel: [  460.076574] block drbd0:
> >> Connection
> >> >> closed
> >> >> > Apr 19 14:15:42 server1 kernel: [  460.076589] block drbd0: conn(
> >> >> Disconnecting -> StandAlone )
> >> >> > Apr 19 14:15:42 server1 kernel: [  460.076605] block drbd0:
> receiver
> >> >> terminated
> >> >> > Apr 19 14:15:42 server1 kernel: [  460.076608] block drbd0:
> >> Terminating
> >> >> receiver thread
> >> >> > Apr 19 14:15:42 server1 kernel: [  460.076689] block drbd0: disk(
> >> >> UpToDate -> Diskless )
> >> >> > Apr 19 14:15:42 server1 kernel: [  460.076722] block drbd0:
> >> >> drbd_bm_resize called with capacity == 0
> >> >> > Apr 19 14:15:42 server1 kernel: [  460.076787] block drbd0:
> worker
> >> >> terminated
> >> >> > Apr 19 14:15:42 server1 kernel: [  460.076789] block drbd0:
> >> Terminating
> >> >> worker thread
> >> >> > Apr 19 14:15:42 server1 crm_attribute: [8761]: info: Invoked:
> >> >> crm_attribute -N server1 -n master-pri_drbd_Dienst:0 -l reboot -D
> >> >> > Apr 19 14:15:42 server1 attrd: [2646]: info: attrd_trigger_update:
> >> >> Sending flush op to all hosts for: master-pri_drbd_Dienst:0 (<null>)
> >> >> > Apr 19 14:15:42 server1 attrd: [2646]: info: attrd_perform_update:
> >> Sent
> >> >> delete 39: node=3e20966a-ed64-4972-8f5a-88be0977f759,
> >> >> attr=master-pri_drbd_Dienst:0, id=<n/a>, set=(null), section=status
> >> >> > Apr 19 14:15:42 server1 crmd: [2647]: info: abort_transition_graph: te_update_diff:157 - Triggered transition abort (complete=0, tag=transient_attributes, id=3e20966a-ed64-4972-8f5a-88be0977f759, magic=NA, cib=0.644.10) : Transient attribute: removal
> >> >> > Apr 19 14:15:42 server1 crmd: [2647]: info: update_abort_priority:
> >> Abort
> >> >> priority upgraded from 0 to 1000000
> >> >> > Apr 19 14:15:42 server1 crmd: [2647]: info: update_abort_priority:
> >> Abort
> >> >> action done superceeded by restart
> >> >> > Apr 19 14:15:42 server1 lrmd: [2644]: info: RA output:
> >> >> (pri_drbd_Dienst:0:stop:stdout)
> >> >> > Apr 19 14:15:42 server1 crmd: [2647]: info: process_lrm_event: LRM
> >> >> operation pri_drbd_Dienst:0_stop_0 (call=26, rc=0, cib-update=53,
> >> >> confirmed=true) ok
> >> >> > Apr 19 14:15:42 server1 crmd: [2647]: info: match_graph_event:
> Action
> >> >> pri_drbd_Dienst:0_stop_0 (8) confirmed on server1 (rc=0)
> >> >> > Apr 19 14:15:42 server1 crmd: [2647]: info: te_pseudo_action:
> Pseudo
> >> >> action 16 fired and confirmed
> >> >> > Apr 19 14:15:42 server1 crmd: [2647]: info: te_pseudo_action:
> Pseudo
> >> >> action 19 fired and confirmed
> >> >> > Apr 19 14:15:42 server1 crmd: [2647]: info: te_pseudo_action:
> Pseudo
> >> >> action 20 fired and confirmed
> >> >> > Apr 19 14:15:42 server1 crmd: [2647]: info: run_graph:
> >> >> ====================================================
> >> >> > Apr 19 14:15:42 server1 crmd: [2647]: notice: run_graph:
> Transition 1
> >> >> (Complete=22, Pending=0, Fired=0, Skipped=1, Incomplete=0,
> >> >> Source=/var/lib/pengine/pe-warn-490.bz2): Stopped
> >> >> > Apr 19 14:15:42 server1 crmd: [2647]: info: te_graph_trigger:
> >> Transition
> >> >> 1 is now complete
> >> >> > Apr 19 14:15:42 server1 crmd: [2647]: info: do_state_transition:
> >> State
> >> >> transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC
> >> >> cause=C_FSA_INTERNAL origin=notify_crmd ]
> >> >> > Apr 19 14:15:42 server1 crmd: [2647]: info: do_state_transition:
> All
> >> 1
> >> >> cluster nodes are eligible to run resources.
> >> >> > Apr 19 14:15:42 server1 crmd: [2647]: info: do_pe_invoke: Query
> 54:
> >> >> Requesting the current CIB: S_POLICY_ENGINE
> >> >> > Apr 19 14:15:42 server1 attrd: [2646]: info: attrd_ha_callback:
> flush
> >> >> message from server1
> >> >> > Apr 19 14:15:42 server1 pengine: [8287]: notice: unpack_config: On
> >> loss
> >> >> of CCM Quorum: Ignore
> >> >> > Apr 19 14:15:42 server1 pengine: [8287]: info: unpack_config: Node
> >> >> scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
> >> >> > Apr 19 14:15:42 server1 pengine: [8287]: WARN: unpack_nodes: Blind
> >> >> faith: not fencing unseen nodes
> >> >> > Apr 19 14:15:42 server1 pengine: [8287]: info:
> >> determine_online_status:
> >> >> Node server1 is online
> >> >> > Apr 19 14:15:42 server1 pengine: [8287]: notice: clone_print:
> >> >>  Master/Slave Set: ms_drbd_service
> >> >> > Apr 19 14:15:42 server1 pengine: [8287]: notice: short_print:
> >> >>  Stopped: [ pri_drbd_Dienst:0 pri_drbd_Dienst:1 ]
> >> >> > Apr 19 14:15:42 server1 pengine: [8287]: notice: group_print:
> >>  Resource
> >> >> Group: group_t3
> >> >> > Apr 19 14:15:42 server1 pengine: [8287]: notice: native_print:
> >> >>  pri_FS_drbd_t3#011(ocf::heartbeat:Filesystem):#011Stopped
> >> >> > Apr 19 14:15:42 server1 pengine: [8287]: notice: native_print:
> >> >>  pri_IP_Cluster#011(ocf::heartbeat:IPaddr2):#011Stopped
> >> >> > Apr 19 14:15:42 server1 pengine: [8287]: notice: native_print:
> >> >>  pri_apache_Dienst#011(ocf::heartbeat:apache):#011Stopped
> >> >> > Apr 19 14:15:42 server1 pengine: [8287]: notice: clone_print:
>  Clone
> >> >> Set: clo_pingd
> >> >> > Apr 19 14:15:42 server1 pengine: [8287]: notice: short_print:
> >> >>  Started: [ server1 ]
> >> >> > Apr 19 14:15:42 server1 pengine: [8287]: notice: short_print:
> >> >>  Stopped: [ pri_pingd:1 ]
> >> >> > Apr 19 14:15:42 server1 pengine: [8287]: info:
> native_merge_weights:
> >> >> ms_drbd_service: Rolling back scores from pri_FS_drbd_t3
> >> >> > Apr 19 14:15:42 server1 pengine: [8287]: WARN: native_color:
> Resource
> >> >> pri_drbd_Dienst:0 cannot run anywhere
> >> >> > Apr 19 14:15:42 server1 pengine: [8287]: WARN: native_color:
> Resource
> >> >> pri_drbd_Dienst:1 cannot run anywhere
> >> >> > Apr 19 14:15:42 server1 pengine: [8287]: info:
> native_merge_weights:
> >> >> ms_drbd_service: Rolling back scores from pri_FS_drbd_t3
> >> >> > Apr 19 14:15:42 server1 pengine: [8287]: info: master_color:
> >> >> ms_drbd_service: Promoted 0 instances of a possible 1 to master
> >> >> > Apr 19 14:15:42 server1 pengine: [8287]: info: master_color:
> >> >> ms_drbd_service: Promoted 0 instances of a possible 1 to master
> >> >> > Apr 19 14:15:42 server1 pengine: [8287]: info:
> native_merge_weights:
> >> >> pri_FS_drbd_t3: Rolling back scores from pri_IP_Cluster
> >> >> > Apr 19 14:15:42 server1 pengine: [8287]: WARN: native_color:
> Resource
> >> >> pri_FS_drbd_t3 cannot run anywhere
> >> >> > Apr 19 14:15:42 server1 pengine: [8287]: info:
> native_merge_weights:
> >> >> pri_IP_Cluster: Rolling back scores from pri_apache_Dienst
> >> >> > Apr 19 14:15:42 server1 pengine: [8287]: WARN: native_color:
> Resource
> >> >> pri_IP_Cluster cannot run anywhere
> >> >> > Apr 19 14:15:42 server1 pengine: [8287]: WARN: native_color:
> Resource
> >> >> pri_apache_Dienst cannot run anywhere
> >> >> > Apr 19 14:15:42 server1 pengine: [8287]: WARN: native_color:
> Resource
> >> >> pri_pingd:1 cannot run anywhere
> >> >> > Apr 19 14:15:42 server1 pengine: [8287]: notice: LogActions: Leave
> >> >> resource pri_drbd_Dienst:0#011(Stopped)
> >> >> > Apr 19 14:15:42 server1 pengine: [8287]: notice: LogActions: Leave
> >> >> resource pri_drbd_Dienst:1#011(Stopped)
> >> >> > Apr 19 14:15:42 server1 pengine: [8287]: notice: LogActions: Leave
> >> >> resource pri_FS_drbd_t3#011(Stopped)
> >> >> > Apr 19 14:15:42 server1 pengine: [8287]: notice: LogActions: Leave
> >> >> resource pri_IP_Cluster#011(Stopped)
> >> >> > Apr 19 14:15:42 server1 pengine: [8287]: notice: LogActions: Leave
> >> >> resource pri_apache_Dienst#011(Stopped)
> >> >> > Apr 19 14:15:42 server1 pengine: [8287]: notice: LogActions: Leave
> >> >> resource pri_pingd:0#011(Started server1)
> >> >> > Apr 19 14:15:42 server1 pengine: [8287]: notice: LogActions: Leave
> >> >> resource pri_pingd:1#011(Stopped)
> >> >> > Apr 19 14:15:42 server1 crmd: [2647]: info: do_pe_invoke_callback:
> >> >> Invoking the PE: query=54, ref=pe_calc-dc-1271679342-21, seq=3,
> >> quorate=1
> >> >> > Apr 19 14:15:42 server1 crmd: [2647]: info: do_state_transition:
> >> State
> >> >> transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [
> input=I_PE_SUCCESS
> >> >> cause=C_IPC_MESSAGE origin=handle_response ]
> >> >> > Apr 19 14:15:42 server1 crmd: [2647]: info: unpack_graph: Unpacked
> >> >> transition 2: 0 actions in 0 synapses
> >> >> > Apr 19 14:15:42 server1 crmd: [2647]: info: do_te_invoke:
> Processing
> >> >> graph 2 (ref=pe_calc-dc-1271679342-21) derived from
> >> >> /var/lib/pengine/pe-warn-491.bz2
> >> >> > Apr 19 14:15:42 server1 crmd: [2647]: info: run_graph:
> >> >> ====================================================
> >> >> > Apr 19 14:15:42 server1 crmd: [2647]: notice: run_graph:
> Transition 2
> >> >> (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0,
> >> >> Source=/var/lib/pengine/pe-warn-491.bz2): Complete
> >> >> > Apr 19 14:15:42 server1 crmd: [2647]: info: te_graph_trigger:
> >> Transition
> >> >> 2 is now complete
> >> >> > Apr 19 14:15:42 server1 crmd: [2647]: info: notify_crmd:
> Transition 2
> >> >> status: done - <null>
> >> >> > Apr 19 14:15:42 server1 crmd: [2647]: info: do_state_transition:
> >> State
> >> >> transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS
> >> >> cause=C_FSA_INTERNAL origin=notify_crmd ]
> >> >> > Apr 19 14:15:42 server1 crmd: [2647]: info: do_state_transition:
> >> >> Starting PEngine Recheck Timer
> >> >> > Apr 19 14:15:42 server1 pengine: [8287]: WARN: process_pe_message:
> >> >> Transition 2: WARNINGs found during PE processing. PEngine Input
> stored
> >> in:
> >> >> /var/lib/pengine/pe-warn-491.bz2
> >> >> > Apr 19 14:15:42 server1 pengine: [8287]: info: process_pe_message:
> >> >> Configuration WARNINGs found during PE processing.  Please run
> >> "crm_verify -L"
> >> >> to identify issues.
> >> >> > Apr 19 14:15:45 server1 pingd: [2837]: info: stand_alone_ping:
> Node
> >> >> 192.168.1.1 is unreachable (read)
> >> >> > Apr 19 14:15:46 server1 pingd: [2837]: info: stand_alone_ping:
> Node
> >> >> 192.168.1.1 is unreachable (read)
> >> >> > Apr 19 14:15:48 server1 attrd: [2646]: info: attrd_trigger_update:
> >> >> Sending flush op to all hosts for: pingd (0)
> >> >> > Apr 19 14:15:48 server1 attrd: [2646]: info: attrd_ha_callback:
> flush
> >> >> message from server1
> >> >> > Apr 19 14:15:48 server1 attrd: [2646]: info: attrd_perform_update:
> >> Sent
> >> >> update 42: pingd=0
> >> >> > Apr 19 14:15:48 server1 crmd: [2647]: info: abort_transition_graph: te_update_diff:146 - Triggered transition abort (complete=1, tag=transient_attributes, id=3e20966a-ed64-4972-8f5a-88be0977f759, magic=NA, cib=0.644.12) : Transient attribute: update
> >> >> > Apr 19 14:15:48 server1 crmd: [2647]: info: do_state_transition:
> >> State
> >> >> transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC
> >> cause=C_FSA_INTERNAL
> >> >> origin=abort_transition_graph ]
> >> >> > Apr 19 14:15:48 server1 crmd: [2647]: info: do_state_transition:
> All
> >> 1
> >> >> cluster nodes are eligible to run resources.
> >> >> > Apr 19 14:15:48 server1 crmd: [2647]: info: do_pe_invoke: Query
> 55:
> >> >> Requesting the current CIB: S_POLICY_ENGINE
> >> >> > Apr 19 14:15:48 server1 pengine: [8287]: notice: unpack_config: On
> >> loss
> >> >> of CCM Quorum: Ignore
> >> >> > Apr 19 14:15:48 server1 pengine: [8287]: info: unpack_config: Node
> >> >> scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
> >> >> > Apr 19 14:15:48 server1 pengine: [8287]: WARN: unpack_nodes: Blind
> >> >> faith: not fencing unseen nodes
> >> >> > Apr 19 14:15:48 server1 pengine: [8287]: info:
> >> determine_online_status:
> >> >> Node server1 is online
> >> >> > Apr 19 14:15:48 server1 pengine: [8287]: notice: clone_print:
> >> >>  Master/Slave Set: ms_drbd_service
> >> >> > Apr 19 14:15:48 server1 pengine: [8287]: notice: short_print:
> >> >>  Stopped: [ pri_drbd_Dienst:0 pri_drbd_Dienst:1 ]
> >> >> > Apr 19 14:15:48 server1 pengine: [8287]: notice: group_print:
> >>  Resource
> >> >> Group: group_t3
> >> >> > Apr 19 14:15:48 server1 pengine: [8287]: notice: native_print:
> >> >>  pri_FS_drbd_t3#011(ocf::heartbeat:Filesystem):#011Stopped
> >> >> > Apr 19 14:15:48 server1 pengine: [8287]: notice: native_print:
> >> >>  pri_IP_Cluster#011(ocf::heartbeat:IPaddr2):#011Stopped
> >> >> > Apr 19 14:15:48 server1 pengine: [8287]: notice: native_print:
> >> >>  pri_apache_Dienst#011(ocf::heartbeat:apache):#011Stopped
> >> >> > Apr 19 14:15:48 server1 pengine: [8287]: notice: clone_print:
>  Clone
> >> >> Set: clo_pingd
> >> >> > Apr 19 14:15:48 server1 pengine: [8287]: notice: short_print:
> >> >>  Started: [ server1 ]
> >> >> > Apr 19 14:15:48 server1 pengine: [8287]: notice: short_print:
> >> >>  Stopped: [ pri_pingd:1 ]
> >> >> > Apr 19 14:15:48 server1 pengine: [8287]: info:
> native_merge_weights:
> >> >> ms_drbd_service: Rolling back scores from pri_FS_drbd_t3
> >> >> > Apr 19 14:15:48 server1 pengine: [8287]: WARN: native_color:
> Resource
> >> >> pri_drbd_Dienst:0 cannot run anywhere
> >> >> > Apr 19 14:15:48 server1 pengine: [8287]: WARN: native_color:
> Resource
> >> >> pri_drbd_Dienst:1 cannot run anywhere
> >> >> > Apr 19 14:15:48 server1 pengine: [8287]: info:
> native_merge_weights:
> >> >> ms_drbd_service: Rolling back scores from pri_FS_drbd_t3
> >> >> > Apr 19 14:15:48 server1 pengine: [8287]: info: master_color:
> >> >> ms_drbd_service: Promoted 0 instances of a possible 1 to master
> >> >> > Apr 19 14:15:48 server1 pengine: [8287]: info: master_color:
> >> >> ms_drbd_service: Promoted 0 instances of a possible 1 to master
> >> >> > Apr 19 14:15:48 server1 pengine: [8287]: info:
> native_merge_weights:
> >> >> pri_FS_drbd_t3: Rolling back scores from pri_IP_Cluster
> >> >> > Apr 19 14:15:48 server1 pengine: [8287]: WARN: native_color:
> Resource
> >> >> pri_FS_drbd_t3 cannot run anywhere
> >> >> > Apr 19 14:15:48 server1 pengine: [8287]: info:
> native_merge_weights:
> >> >> pri_IP_Cluster: Rolling back scores from pri_apache_Dienst
> >> >> > Apr 19 14:15:48 server1 pengine: [8287]: WARN: native_color:
> Resource
> >> >> pri_IP_Cluster cannot run anywhere
> >> >> > Apr 19 14:15:48 server1 pengine: [8287]: WARN: native_color:
> Resource
> >> >> pri_apache_Dienst cannot run anywhere
> >> >> > Apr 19 14:15:48 server1 pengine: [8287]: WARN: native_color:
> Resource
> >> >> pri_pingd:1 cannot run anywhere
> >> >> > Apr 19 14:15:48 server1 pengine: [8287]: notice: LogActions: Leave
> >> >> resource pri_drbd_Dienst:0#011(Stopped)
> >> >> > Apr 19 14:15:48 server1 pengine: [8287]: notice: LogActions: Leave
> >> >> resource pri_drbd_Dienst:1#011(Stopped)
> >> >> > Apr 19 14:15:48 server1 pengine: [8287]: notice: LogActions: Leave
> >> >> resource pri_FS_drbd_t3#011(Stopped)
> >> >> > Apr 19 14:15:48 server1 pengine: [8287]: notice: LogActions: Leave
> >> >> resource pri_IP_Cluster#011(Stopped)
> >> >> > Apr 19 14:15:48 server1 pengine: [8287]: notice: LogActions: Leave
> >> >> resource pri_apache_Dienst#011(Stopped)
> >> >> > Apr 19 14:15:48 server1 pengine: [8287]: notice: LogActions: Leave
> >> >> resource pri_pingd:0#011(Started server1)
> >> >> > Apr 19 14:15:48 server1 pengine: [8287]: notice: LogActions: Leave
> >> >> resource pri_pingd:1#011(Stopped)
> >> >> > Apr 19 14:15:48 server1 crmd: [2647]: info: do_pe_invoke_callback:
> >> >> Invoking the PE: query=55, ref=pe_calc-dc-1271679348-22, seq=3,
> >> quorate=1
> >> >> > Apr 19 14:15:48 server1 crmd: [2647]: info: do_state_transition:
> >> State
> >> >> transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [
> input=I_PE_SUCCESS
> >> >> cause=C_IPC_MESSAGE origin=handle_response ]
> >> >> > Apr 19 14:15:48 server1 crmd: [2647]: info: unpack_graph: Unpacked
> >> >> transition 3: 0 actions in 0 synapses
> >> >> > Apr 19 14:15:48 server1 crmd: [2647]: info: do_te_invoke:
> Processing
> >> >> graph 3 (ref=pe_calc-dc-1271679348-22) derived from
> >> >> /var/lib/pengine/pe-warn-492.bz2
> >> >> > Apr 19 14:15:48 server1 crmd: [2647]: info: run_graph:
> >> >> ====================================================
> >> >> > Apr 19 14:15:48 server1 crmd: [2647]: notice: run_graph:
> Transition 3
> >> >> (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0,
> >> >> Source=/var/lib/pengine/pe-warn-492.bz2): Complete
> >> >> > Apr 19 14:15:48 server1 crmd: [2647]: info: te_graph_trigger:
> >> Transition
> >> >> 3 is now complete
> >> >> > Apr 19 14:15:48 server1 crmd: [2647]: info: notify_crmd:
> Transition 3
> >> >> status: done - <null>
> >> >> > Apr 19 14:15:48 server1 crmd: [2647]: info: do_state_transition:
> >> State
> >> >> transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS
> >> >> cause=C_FSA_INTERNAL origin=notify_crmd ]
> >> >> > Apr 19 14:15:48 server1 crmd: [2647]: info: do_state_transition:
> >> >> Starting PEngine Recheck Timer
> >> >> > Apr 19 14:15:48 server1 pengine: [8287]: WARN: process_pe_message:
> >> >> Transition 3: WARNINGs found during PE processing. PEngine Input
> stored
> >> in:
> >> >> /var/lib/pengine/pe-warn-492.bz2
> >> >> > Apr 19 14:15:48 server1 pengine: [8287]: info: process_pe_message:
> >> >> Configuration WARNINGs found during PE processing.  Please run
> >> "crm_verify -L"
> >> >> to identify issues.
> >> >> > Apr 19 14:15:57 server1 pingd: [2837]: WARN: ping_open:
> getaddrinfo:
> >> >> Name or service not known
> >> >> > Apr 19 14:15:59 server1 pingd: [2837]: info: stand_alone_ping:
> Node
> >> >> 192.168.4.10 is unreachable (read)
> >> >> > Apr 19 14:16:00 server1 pingd: [2837]: info: stand_alone_ping:
> Node
> >> >> 192.168.4.10 is unreachable (read)
> >> >> > Apr 19 14:16:03 server1 pingd: [2837]: info: stand_alone_ping:
> Node
> >> >> 192.168.1.1 is unreachable (read)
> >> >> > Apr 19 14:16:04 server1 pingd: [2837]: info: stand_alone_ping:
> Node
> >> >> 192.168.1.1 is unreachable (read)
> >> >> > Apr 19 14:16:15 server1 pingd: [2837]: WARN: ping_open:
> getaddrinfo:
> >> >> Name or service not known
> >> >> > Apr 19 14:16:17 server1 pingd: [2837]: info: stand_alone_ping:
> Node
> >> >> 192.168.4.10 is unreachable (read)
> >> >> > Apr 19 14:16:18 server1 pingd: [2837]: info: stand_alone_ping:
> Node
> >> >> 192.168.4.10 is unreachable (read)
> >> >> > Apr 19 14:16:21 server1 pingd: [2837]: info: stand_alone_ping:
> Node
> >> >> 192.168.1.1 is unreachable (read)
> >> >> > Apr 19 14:16:22 server1 pingd: [2837]: info: stand_alone_ping:
> Node
> >> >> 192.168.1.1 is unreachable (read)
> >> >> > Apr 19 14:16:29 server1 kernel: [  507.095214] eth1: link up,
> >> 100Mbps,
> >> >> full-duplex, lpa 0xC1E1
> >> >> > Apr 19 14:16:30 server1 pingd: [2837]: WARN: ping_open:
> getaddrinfo:
> >> >> Name or service not known
> >> >> > Apr 19 14:16:30 server1 pingd: [2837]: WARN: ping_open:
> getaddrinfo:
> >> >> Name or service not known
> >> >> > Apr 19 14:16:30 server1 heartbeat: [2586]: info: For information
> on
> >> >> cluster partitions, See URL: http://linux-ha.org/wiki/Split_Brain
> >> >> > Apr 19 14:16:30 server1 heartbeat: [2586]: WARN: Deadtime value
> may
> >> be
> >> >> too small.
> >> >> > Apr 19 14:16:30 server1 heartbeat: [2586]: info: See FAQ for
> >> information
> >> >> on tuning deadtime.
> >> >> > Apr 19 14:16:30 server1 heartbeat: [2586]: info: URL:
> >> >> http://linux-ha.org/wiki/FAQ#Heavy_Load
> >> >> > Apr 19 14:16:30 server1 heartbeat: [2586]: info: Link server2:eth1
> >> up.
> >> >> > Apr 19 14:16:30 server1 heartbeat: [2586]: WARN: Late heartbeat:
> Node
> >> >> server2: interval 68040 ms
> >> >> > Apr 19 14:16:30 server1 heartbeat: [2586]: info: Status update for
> >> node
> >> >> server2: status active
> >> >> > Apr 19 14:16:30 server1 crmd: [2647]: notice:
> >> crmd_ha_status_callback:
> >> >> Status update: Node server2 now has status [active] (DC=true)
> >> >> > Apr 19 14:16:30 server1 crmd: [2647]: info: crm_update_peer_proc:
> >> >> server2.ais is now online
> >> >> > Apr 19 14:16:30 server1 cib: [2643]: WARN: cib_peer_callback:
> >> Discarding
> >> >> cib_apply_diff message (27f) from server2: not in our membership
> >> >> > Apr 19 14:16:31 server1 pingd: [2837]: WARN: ping_open:
> getaddrinfo:
> >> >> Name or service not known
> >> >> > Apr 19 14:16:32 server1 pingd: [2837]: WARN: ping_open:
> getaddrinfo:
> >> >> Name or service not known
> >> >> > Apr 19 14:16:32 server1 crmd: [2647]: WARN: crmd_ha_msg_callback:
> >> >> Ignoring HA message (op=noop) from server2: not in our membership
> list
> >> (size=1)
> >> >> > Apr 19 14:16:32 server1 crmd: [2647]: info: mem_handle_event: Got
> an
> >> >> event OC_EV_MS_INVALID from ccm
> >> >> > Apr 19 14:16:32 server1 crmd: [2647]: info: mem_handle_event: no
> >> >> mbr_track info
> >> >> > Apr 19 14:16:32 server1 crmd: [2647]: info: mem_handle_event: Got
> an
> >> >> event OC_EV_MS_NEW_MEMBERSHIP from ccm
> >> >> > Apr 19 14:16:32 server1 crmd: [2647]: info: mem_handle_event:
> >> >> instance=5, nodes=2, new=1, lost=0, n_idx=0, new_idx=2, old_idx=4
> >> >> > Apr 19 14:16:32 server1 crmd: [2647]: info: crmd_ccm_msg_callback:
> >> >> Quorum (re)attained after event=NEW MEMBERSHIP (id=5)
> >> >> > Apr 19 14:16:32 server1 crmd: [2647]: info: ccm_event_detail: NEW
> >> >> MEMBERSHIP: trans=5, nodes=2, new=1, lost=0 n_idx=0, new_idx=2,
> >> old_idx=4
> >> >> > Apr 19 14:16:32 server1 crmd: [2647]: info: ccm_event_detail:
> >> >> #011CURRENT: server2 [nodeid=1, born=1]
> >> >> > Apr 19 14:16:32 server1 crmd: [2647]: info: ccm_event_detail:
> >> >> #011CURRENT: server1 [nodeid=0, born=5]
> >> >> > Apr 19 14:16:32 server1 crmd: [2647]: info: ccm_event_detail:
> >> #011NEW:
> >> >>     server2 [nodeid=1, born=1]
> >> >> > Apr 19 14:16:32 server1 crmd: [2647]: info: ais_status_callback:
> >> status:
> >> >> server2 is now member (was lost)
> >> >> > Apr 19 14:16:32 server1 crmd: [2647]: info: crm_update_peer: Node
> >> >> server2: id=1 state=member (new) addr=(null) votes=-1 born=1 seen=5
> >> >> proc=00000000000000000000000000000202
> >> >> > Apr 19 14:16:32 server1 crmd: [2647]: info: populate_cib_nodes_ha:
> >> >> Requesting the list of configured nodes
> >> >> > Apr 19 14:16:32 server1 cib: [2643]: WARN: cib_peer_callback:
> >> Discarding
> >> >> cib_apply_diff message (287) from server2: not in our membership
> >> >> > Apr 19 14:16:32 server1 cib: [2643]: WARN: cib_peer_callback:
> >> Discarding
> >> >> cib_apply_diff message (289) from server2: not in our membership
> >> >> > Apr 19 14:16:32 server1 cib: [2643]: info: mem_handle_event: Got
> an
> >> >> event OC_EV_MS_INVALID from ccm
> >> >> > Apr 19 14:16:32 server1 cib: [2643]: info: mem_handle_event: no
> >> >> mbr_track info
> >> >> > Apr 19 14:16:32 server1 cib: [2643]: info: mem_handle_event: Got
> an
> >> >> event OC_EV_MS_NEW_MEMBERSHIP from ccm
> >> >> > Apr 19 14:16:32 server1 cib: [2643]: info: mem_handle_event:
> >> instance=5,
> >> >> nodes=2, new=1, lost=0, n_idx=0, new_idx=2, old_idx=4
> >> >> > Apr 19 14:16:32 server1 cib: [2643]: info: cib_ccm_msg_callback:
> >> >> Processing CCM event=NEW MEMBERSHIP (id=5)
> >> >> > Apr 19 14:16:32 server1 cib: [2643]: info: crm_update_peer: Node
> >> >> server2: id=1 state=member (new) addr=(null) votes=-1 born=1 seen=5
> >> >> proc=00000000000000000000000000000302
> >> >> > Apr 19 14:16:32 server1 cib: [2643]: info: cib_process_request:
> >> >> Operation complete: op cib_delete for section
> >> //node_state[@uname='server2']/lrm
> >> >> (origin=local/crmd/57, version=0.644.14): ok (rc=0)
> >> >> > Apr 19 14:16:32 server1 cib: [2643]: info: cib_process_request:
> >> >> Operation complete: op cib_delete for section
> >> >> //node_state[@uname='server2']/transient_attributes
> >> (origin=local/crmd/58, version=0.644.15): ok (rc=0)
> >> >> > Apr 19 14:16:33 server1 pingd: [2837]: WARN: ping_open:
> getaddrinfo:
> >> >> Name or service not known
> >> >> > Apr 19 14:16:33 server1 crmd: [2647]: info:
> abort_transition_graph:
> >> >> te_update_diff:267 - Triggered transition abort (complete=1,
> >> tag=lrm_rsc_op,
> >> >> id=pri_FS_drbd_t3_monitor_0,
> >> >> magic=0:7;11:0:7:6090ba02-c064-4d80-9222-bf77b7011e17, cib=0.644.14)
> :
> >> Resource op removal
> >> >> > Apr 19 14:16:33 server1 crmd: [2647]: info: erase_xpath_callback:
> >> >> Deletion of "//node_state[@uname='server2']/lrm": ok (rc=0)
> >> >> > Apr 19 14:16:33 server1 crmd: [2647]: info:
> abort_transition_graph:
> >> >> te_update_diff:157 - Triggered transition abort (complete=1,
> >> >> tag=transient_attributes, id=5262f929-1082-4a85-aa05-7bd1992f15be,
> >> magic=NA, cib=0.644.15) :
> >> >> Transient attribute: removal
> >> >> > Apr 19 14:16:33 server1 crmd: [2647]: info: erase_xpath_callback:
> >> >> Deletion of "//node_state[@uname='server2']/transient_attributes":
> ok
> >> (rc=0)
> >> >> > Apr 19 14:16:33 server1 crmd: [2647]: info: do_state_transition:
> >> State
> >> >> transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC
> >> cause=C_FSA_INTERNAL
> >> >> origin=abort_transition_graph ]
> >> >> > Apr 19 14:16:33 server1 crmd: [2647]: info: do_state_transition:
> >> >> Membership changed: 3 -> 5 - join restart
> >> >> > Apr 19 14:16:33 server1 crmd: [2647]: info: do_pe_invoke: Query
> 61:
> >> >> Requesting the current CIB: S_POLICY_ENGINE
> >> >> > Apr 19 14:16:33 server1 crmd: [2647]: info: do_state_transition:
> >> State
> >> >> transition S_POLICY_ENGINE -> S_INTEGRATION [ input=I_NODE_JOIN
> >> >> cause=C_FSA_INTERNAL origin=do_state_transition ]
> >> >> > Apr 19 14:16:33 server1 crmd: [2647]: info: update_dc: Unset DC
> >> server1
> >> >> > Apr 19 14:16:33 server1 crmd: [2647]: info: join_make_offer:
> Making
> >> join
> >> >> offers based on membership 5
> >> >> > Apr 19 14:16:33 server1 crmd: [2647]: info: do_dc_join_offer_all:
> >> >> join-2: Waiting on 2 outstanding join acks
> >> >> > Apr 19 14:16:33 server1 cib: [2643]: info: cib_process_request:
> >> >> Operation complete: op cib_modify for section nodes
> >> (origin=local/crmd/59,
> >> >> version=0.644.15): ok (rc=0)
> >> >> > Apr 19 14:16:33 server1 crmd: [2647]: info: do_state_transition:
> >> State
> >> >> transition S_INTEGRATION -> S_ELECTION [ input=I_ELECTION
> >> >> cause=C_FSA_INTERNAL origin=crmd_ha_msg_filter ]
> >> >> > Apr 19 14:16:33 server1 crmd: [2647]: WARN: do_log: FSA: Input
> >> >> I_JOIN_OFFER from route_message() received in state S_ELECTION
> >> >> > Apr 19 14:16:33 server1 cib: [2643]: WARN: cib_process_diff: Diff
> >> >> 0.643.50 -> 0.643.51 not applied to 0.644.16: current "epoch" is
> >> greater than
> >> >> required
> >> >> > Apr 19 14:16:34 server1 pingd: [2837]: WARN: ping_open:
> getaddrinfo:
> >> >> Name or service not known
> >> >> > Apr 19 14:16:34 server1 crmd: [2647]: info: do_state_transition:
> >> State
> >> >> transition S_ELECTION -> S_RELEASE_DC [ input=I_RELEASE_DC
> >> >> cause=C_FSA_INTERNAL origin=do_election_count_vote ]
> >> >> > Apr 19 14:16:34 server1 crmd: [2647]: info: do_dc_release: DC role
> >> >> released
> >> >> > Apr 19 14:16:34 server1 pengine: [8287]: info:
> crm_signal_dispatch:
> >> >> Invoking handler for signal 15: Terminated
> >> >> > Apr 19 14:16:34 server1 crmd: [2647]: info: stop_subsystem: Sent
> >> -TERM
> >> >> to pengine: [8287]
> >> >> > Apr 19 14:16:34 server1 crmd: [2647]: info: do_te_control:
> >> Transitioner
> >> >> is now inactive
> >> >> > Apr 19 14:16:34 server1 crmd: [2647]: info: do_te_control:
> >> Disconnecting
> >> >> STONITH...
> >> >> > Apr 19 14:16:34 server1 crmd: [2647]: info:
> >> >> tengine_stonith_connection_destroy: Fencing daemon disconnected
> >> >> > Apr 19 14:16:34 server1 crmd: [2647]: notice: Not currently
> >> connected.
> >> >> > Apr 19 14:16:34 server1 crmd: [2647]: WARN: do_log: FSA: Input
> >> >> I_RELEASE_DC from do_election_count_vote() received in state
> >> S_RELEASE_DC
> >> >> > Apr 19 14:16:34 server1 crmd: [2647]: info: do_dc_release: DC role
> >> >> released
> >> >> > Apr 19 14:16:34 server1 crmd: [2647]: info: stop_subsystem: Sent
> >> -TERM
> >> >> to pengine: [8287]
> >> >> > Apr 19 14:16:34 server1 crmd: [2647]: info: do_te_control:
> >> Transitioner
> >> >> is now inactive
> >> >> > Apr 19 14:16:34 server1 crmd: [2647]: WARN: do_log: FSA: Input
> >> >> I_RELEASE_DC from do_election_count_vote() received in state
> >> S_RELEASE_DC
> >> >> > Apr 19 14:16:34 server1 crmd: [2647]: info: do_dc_release: DC role
> >> >> released
> >> >> > Apr 19 14:16:34 server1 crmd: [2647]: info: stop_subsystem: Sent
> >> -TERM
> >> >> to pengine: [8287]
> >> >> > Apr 19 14:16:34 server1 crmd: [2647]: info: do_te_control:
> >> Transitioner
> >> >> is now inactive
> >> >> > Apr 19 14:16:34 server1 crmd: [2647]: WARN: do_log: FSA: Input
> >> >> I_RELEASE_DC from do_election_count_vote() received in state
> >> S_RELEASE_DC
> >> >> > Apr 19 14:16:34 server1 crmd: [2647]: info: do_dc_release: DC role
> >> >> released
> >> >> > Apr 19 14:16:34 server1 crmd: [2647]: info: stop_subsystem: Sent
> >> -TERM
> >> >> to pengine: [8287]
> >> >> > Apr 19 14:16:34 server1 crmd: [2647]: info: do_te_control:
> >> Transitioner
> >> >> is now inactive
> >> >> > Apr 19 14:16:34 server1 crmd: [2647]: WARN: do_log: FSA: Input
> >> >> I_RELEASE_DC from do_election_count_vote() received in state
> >> S_RELEASE_DC
> >> >> > Apr 19 14:16:34 server1 crmd: [2647]: info: do_dc_release: DC role
> >> >> released
> >> >> > Apr 19 14:16:34 server1 crmd: [2647]: info: stop_subsystem: Sent
> >> -TERM
> >> >> to pengine: [8287]
> >> >> > Apr 19 14:16:34 server1 crmd: [2647]: info: do_te_control:
> >> Transitioner
> >> >> is now inactive
> >> >> > Apr 19 14:16:34 server1 crmd: [2647]: info: do_state_transition:
> >> State
> >> >> transition S_RELEASE_DC -> S_PENDING [ input=I_RELEASE_SUCCESS
> >> >> cause=C_FSA_INTERNAL origin=do_dc_release ]
> >> >> > Apr 19 14:16:34 server1 crmd: [2647]: info: crmdManagedChildDied:
> >> >> Process pengine:[8287] exited (signal=0, exitcode=0)
> >> >> > Apr 19 14:16:34 server1 crmd: [2647]: info: pe_msg_dispatch:
> Received
> >> >> HUP from pengine:[8287]
> >> >> > Apr 19 14:16:34 server1 crmd: [2647]: info: pe_connection_destroy:
> >> >> Connection to the Policy Engine released
> >> >> > Apr 19 14:16:34 server1 cib: [2643]: info: cib_process_readwrite:
> We
> >> are
> >> >> now in R/O mode
> >> >> > Apr 19 14:16:35 server1 pingd: [2837]: WARN: ping_open:
> getaddrinfo:
> >> >> Name or service not known
> >> >> > Apr 19 14:16:36 server1 pingd: [2837]: WARN: ping_open:
> getaddrinfo:
> >> >> Name or service not known
> >> >> > Apr 19 14:16:37 server1 pingd: [2837]: WARN: ping_open:
> getaddrinfo:
> >> >> Name or service not known
> >> >> > Apr 19 14:16:38 server1 pingd: [2837]: WARN: ping_open:
> getaddrinfo:
> >> >> Name or service not known
> >> >> > Apr 19 14:16:38 server1 crmd: [2647]: info: update_dc: Set DC to
> >> server2
> >> >> (3.0.1)
> >> >> > Apr 19 14:16:39 server1 pingd: [2837]: WARN: ping_open:
> getaddrinfo:
> >> >> Name or service not known
> >> >> > Apr 19 14:16:39 server1 cib: [2643]: info: cib_process_request:
> >> >> Operation complete: op cib_sync for section 'all'
> >> (origin=server2/crmd/74,
> >> >> version=0.644.16): ok (rc=0)
> >> >> > Apr 19 14:16:40 server1 pingd: [2837]: WARN: ping_open:
> getaddrinfo:
> >> >> Name or service not known
> >> >> > Apr 19 14:16:41 server1 crmd: [2647]: info: do_state_transition:
> >> State
> >> >> transition S_PENDING -> S_NOT_DC [ input=I_NOT_DC cause=C_HA_MESSAGE
> >> >> origin=do_cl_join_finalize_respond ]
> >> >> > Apr 19 14:16:41 server1 attrd: [2646]: info: attrd_local_callback:
> >> >> Sending full refresh (origin=crmd)
> >> >> > Apr 19 14:16:41 server1 attrd: [2646]: info: attrd_trigger_update:
> >> >> Sending flush op to all hosts for: master-pri_drbd_Dienst:0 (<null>)
> >> >> > Apr 19 14:16:41 server1 attrd: [2646]: info: attrd_trigger_update:
> >> >> Sending flush op to all hosts for: probe_complete (true)
> >> >> > Apr 19 14:16:41 server1 attrd: [2646]: info: attrd_trigger_update:
> >> >> Sending flush op to all hosts for: terminate (<null>)
> >> >> > Apr 19 14:16:41 server1 attrd: [2646]: info: attrd_trigger_update:
> >> >> Sending flush op to all hosts for: master-pri_drbd_Dienst:1 (<null>)
> >> >> > Apr 19 14:16:41 server1 attrd: [2646]: info: attrd_trigger_update:
> >> >> Sending flush op to all hosts for: shutdown (<null>)
> >> >> > Apr 19 14:16:41 server1 attrd: [2646]: info: attrd_trigger_update:
> >> >> Sending flush op to all hosts for: pingd (200)
> >> >> > Apr 19 14:16:41 server1 pingd: [2837]: WARN: ping_open:
> getaddrinfo:
> >> >> Name or service not known
> >> >> > Apr 19 14:16:41 server1 attrd: [2646]: info: attrd_ha_callback:
> flush
> >> >> message from server2
> >> >> > Apr 19 14:16:41 server1 attrd: [2646]: info: attrd_ha_callback:
> flush
> >> >> message from server2
> >> >> > Apr 19 14:16:41 server1 attrd: [2646]: info: attrd_ha_callback:
> flush
> >> >> message from server2
> >> >> > Apr 19 14:16:41 server1 attrd: [2646]: info: attrd_ha_callback:
> flush
> >> >> message from server2
> >> >> > Apr 19 14:16:41 server1 attrd: [2646]: info: attrd_ha_callback:
> flush
> >> >> message from server2
> >> >> > Apr 19 14:16:41 server1 attrd: [2646]: info: attrd_ha_callback:
> flush
> >> >> message from server2
> >> >> > Apr 19 14:16:41 server1 attrd: [2646]: info: attrd_perform_update:
> >> Sent
> >> >> update 56: pingd=200
> >> >> > Apr 19 14:16:41 server1 attrd: [2646]: info: attrd_ha_callback:
> flush
> >> >> message from server1
> >> >> > Apr 19 14:16:41 server1 attrd: [2646]: info: attrd_ha_callback:
> flush
> >> >> message from server1
> >> >> > Apr 19 14:16:41 server1 attrd: [2646]: info: attrd_ha_callback:
> flush
> >> >> message from server1
> >> >> > Apr 19 14:16:41 server1 attrd: [2646]: info: attrd_ha_callback:
> flush
> >> >> message from server1
> >> >> > Apr 19 14:16:41 server1 attrd: [2646]: info: attrd_ha_callback:
> flush
> >> >> message from server1
> >> >> > Apr 19 14:16:41 server1 attrd: [2646]: info: attrd_perform_update:
> >> Sent
> >> >> update 62: pingd=200
> >> >> > Apr 19 14:16:42 server1 pingd: [2837]: WARN: ping_open:
> getaddrinfo:
> >> >> Name or service not known
> >> >> > Apr 19 14:16:43 server1 pingd: [2837]: WARN: ping_open:
> getaddrinfo:
> >> >> Name or service not known
> >> >> > Apr 19 14:16:43 server1 attrd: [2646]: info: attrd_ha_callback:
> flush
> >> >> message from server2
> >> >> > Apr 19 14:16:43 server1 attrd: [2646]: info: attrd_ha_callback:
> flush
> >> >> message from server2
> >> >> > Apr 19 14:16:43 server1 attrd: [2646]: info: attrd_ha_callback:
> flush
> >> >> message from server2
> >> >> > Apr 19 14:16:43 server1 attrd: [2646]: info: attrd_ha_callback:
> flush
> >> >> message from server2
> >> >> > Apr 19 14:16:43 server1 attrd: [2646]: info: attrd_ha_callback:
> flush
> >> >> message from server2
> >> >> > Apr 19 14:16:43 server1 attrd: [2646]: info: attrd_ha_callback:
> flush
> >> >> message from server2
> >> >> > Apr 19 14:16:43 server1 cib: [8805]: info: write_cib_contents:
> >> Archived
> >> >> previous version as /var/lib/heartbeat/crm/cib-64.raw
> >> >> > Apr 19 14:16:43 server1 cib: [8805]: info: write_cib_contents:
> Wrote
> >> >> version 0.645.0 of the CIB to disk (digest:
> >> a9f8f622cb29207ef4bbcc7c0e1cab21)
> >> >> > Apr 19 14:16:43 server1 cib: [8805]: info: retrieveCib: Reading
> >> cluster
> >> >> configuration from: /var/lib/heartbeat/crm/cib.1YTxsI (digest:
> >> >> /var/lib/heartbeat/crm/cib.ULo5OR)
> >> >> > Apr 19 14:16:44 server1 pingd: [2837]: WARN: ping_open:
> getaddrinfo:
> >> >> Name or service not known
> >> >> > Apr 19 14:16:44 server1 attrd: [2646]: info: attrd_ha_callback:
> flush
> >> >> message from server2
> >> >> > Apr 19 14:16:45 server1 pingd: [2837]: WARN: ping_open:
> getaddrinfo:
> >> >> Name or service not known
> >> >> > Apr 19 14:16:45 server1 attrd: [2646]: info: attrd_ha_callback:
> flush
> >> >> message from server2
> >> >> > Apr 19 14:16:46 server1 pingd: [2837]: WARN: ping_open:
> getaddrinfo:
> >> >> Name or service not known
> >> >> >
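A note on the repeated "ping_open: getaddrinfo: Name or service not known" warnings above: the host_list value in the CIB below is "192.168.1.1 \ 192.168.4.10" (and "192.168.1.1 / 192.168.4.10" in the crm config earlier in the thread). The ping/pingd agents split host_list on whitespace, so the literal "\" or "/" is handed to getaddrinfo as a hostname, which fails exactly this way. A sketch of the primitive with a plain space-separated list (untested against this cluster, resource names taken from the config above):

primitive pri_pingsys ocf:pacemaker:ping \
        params host_list="192.168.1.1 192.168.4.10" multiplier="100" dampen="5s" \
        op monitor interval="15" timeout="20"

With the separator removed, each entry resolves as a real address and the getaddrinfo warnings should stop.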
> >> >> > ---------------
> >> >> > cibadmin -Q
> >> >> >
> >> >> >
> >> >> > <cib validate-with="pacemaker-1.0" crm_feature_set="3.0.1"
> >> >> have-quorum="1" dc-uuid="5262f929-1082-4a85-aa05-7bd1992f15be"
> >> admin_epoch="0"
> >> >> epoch="645" num_updates="11">
> >> >> >  <configuration>
> >> >> >    <crm_config>
> >> >> >      <cluster_property_set id="cib-bootstrap-options">
> >> >> >        <nvpair id="cib-bootstrap-options-dc-version"
> >> >> name="dc-version"
> >> value="1.0.8-2c98138c2f070fcb6ddeab1084154cffbf44ba75"/>
> >> >> >        <nvpair
> id="cib-bootstrap-options-cluster-infrastructure"
> >> >> name="cluster-infrastructure" value="Heartbeat"/>
> >> >> >        <nvpair id="cib-bootstrap-options-no-quorum-policy"
> >> >> name="no-quorum-policy" value="ignore"/>
> >> >> >        <nvpair name="default-resource-stickiness"
> >> >> id="cib-bootstrap-options-default-resource-stickiness" value="100"/>
> >> >> >        <nvpair name="last-lrm-refresh"
> >> >> id="cib-bootstrap-options-last-lrm-refresh" value="1271678801"/>
> >> >> >        <nvpair id="cib-bootstrap-options-startup-fencing"
> >> >> name="startup-fencing" value="false"/>
> >> >> >        <nvpair id="cib-bootstrap-options-stonith-enabled"
> >> >> name="stonith-enabled" value="false"/>
> >> >> >        <nvpair
> id="cib-bootstrap-options-default-action-timeout"
> >> >> name="default-action-timeout" value="120s"/>
> >> >> >      </cluster_property_set>
> >> >> >    </crm_config>
> >> >> >    <nodes>
> >> >> >      <node type="normal" uname="server1"
> >> >> id="3e20966a-ed64-4972-8f5a-88be0977f759">
> >> >> >        <instance_attributes
> >> >> id="nodes-3e20966a-ed64-4972-8f5a-88be0977f759">
> >> >> >          <nvpair name="standby"
> >> >> id="nodes-3e20966a-ed64-4972-8f5a-88be0977f759-standby"
> value="off"/>
> >> >> >        </instance_attributes>
> >> >> >      </node>
> >> >> >      <node type="normal" uname="server2"
> >> >> id="5262f929-1082-4a85-aa05-7bd1992f15be">
> >> >> >        <instance_attributes
> >> >> id="nodes-5262f929-1082-4a85-aa05-7bd1992f15be">
> >> >> >          <nvpair name="standby"
> >> >> id="nodes-5262f929-1082-4a85-aa05-7bd1992f15be-standby"
> value="off"/>
> >> >> >        </instance_attributes>
> >> >> >      </node>
> >> >> >    </nodes>
> >> >> >    <resources>
> >> >> >      <master id="ms_drbd_service">
> >> >> >        <meta_attributes id="ms_drbd_service-meta_attributes">
> >> >> >          <nvpair id="ms_drbd_service-meta_attributes-notify"
> >> >> name="notify" value="true"/>
> >> >> >          <nvpair
> >> id="ms_drbd_service-meta_attributes-target-role"
> >> >> name="target-role" value="Started"/>
> >> >> >        </meta_attributes>
> >> >> >        <primitive class="ocf" id="pri_drbd_Dienst"
> >> provider="linbit"
> >> >> type="drbd">
> >> >> >          <instance_attributes
> >> >> id="pri_drbd_Dienst-instance_attributes">
> >> >> >            <nvpair
> >> >> id="pri_drbd_Dienst-instance_attributes-drbd_resource"
> >> name="drbd_resource" value="t3"/>
> >> >> >          </instance_attributes>
> >> >> >          <operations>
> >> >> >            <op id="pri_drbd_Dienst-monitor-15" interval="15"
> >> >> name="monitor"/>
> >> >> >            <op id="pri_drbd_Dienst-start-0" interval="0"
> >> >> name="start" timeout="240"/>
> >> >> >            <op id="pri_drbd_Dienst-stop-0" interval="0"
> >> >> name="stop" timeout="100"/>
> >> >> >          </operations>
> >> >> >        </primitive>
> >> >> >      </master>
> >> >> >      <group id="group_t3">
> >> >> >        <primitive class="ocf" id="pri_FS_drbd_t3"
> >> >> provider="heartbeat" type="Filesystem">
> >> >> >          <instance_attributes
> >> >> id="pri_FS_drbd_t3-instance_attributes">
> >> >> >            <nvpair
> >> id="pri_FS_drbd_t3-instance_attributes-device"
> >> >> name="device" value="/dev/drbd0"/>
> >> >> >            <nvpair
> >> >> id="pri_FS_drbd_t3-instance_attributes-directory" name="directory"
> >> value="/mnt/drbd_daten"/>
> >> >> >            <nvpair
> >> id="pri_FS_drbd_t3-instance_attributes-fstype"
> >> >> name="fstype" value="ext3"/>
> >> >> >            <nvpair
> >> id="pri_FS_drbd_t3-instance_attributes-options"
> >> >> name="options" value="noatime"/>
> >> >> >          </instance_attributes>
> >> >> >        </primitive>
> >> >> >        <primitive class="ocf" id="pri_IP_Cluster"
> >> >> provider="heartbeat" type="IPaddr2">
> >> >> >          <instance_attributes
> >> >> id="pri_IP_Cluster-instance_attributes">
> >> >> >            <nvpair
> id="pri_IP_Cluster-instance_attributes-ip"
> >> >> name="ip" value="192.168.1.253"/>
> >> >> >            <nvpair
> >> >> id="pri_IP_Cluster-instance_attributes-cidr_netmask"
> >> name="cidr_netmask" value="24"/>
> >> >> >            <nvpair
> id="pri_IP_Cluster-instance_attributes-nic"
> >> >> name="nic" value="eth1"/>
> >> >> >          </instance_attributes>
> >> >> >          <operations>
> >> >> >            <op id="pri_IP_Cluster-monitor-3" interval="3"
> >> >> name="monitor"/>
> >> >> >          </operations>
> >> >> >        </primitive>
> >> >> >        <primitive class="ocf" id="pri_apache_Dienst"
> >> >> provider="heartbeat" type="apache">
> >> >> >          <operations>
> >> >> >            <op id="pri_apache_Dienst-monitor-15"
> interval="15"
> >> >> name="monitor"/>
> >> >> >          </operations>
> >> >> >          <instance_attributes
> >> >> id="pri_apache_Dienst-instance_attributes">
> >> >> >            <nvpair
> >> >> id="pri_apache_Dienst-instance_attributes-configfile"
> name="configfile"
> >> value="/etc/apache2/apache2.conf"/>
> >> >> >            <nvpair
> >> >> id="pri_apache_Dienst-instance_attributes-httpd" name="httpd"
> >> value="/usr/sbin/apache2"/>
> >> >> >            <nvpair
> >> id="pri_apache_Dienst-instance_attributes-port"
> >> >> name="port" value="80"/>
> >> >> >          </instance_attributes>
> >> >> >        </primitive>
> >> >> >      </group>
> >> >> >      <clone id="clo_pingd">
> >> >> >        <meta_attributes id="clo_pingd-meta_attributes">
> >> >> >          <nvpair
> id="clo_pingd-meta_attributes-globally-unique"
> >> >> name="globally-unique" value="false"/>
> >> >> >        </meta_attributes>
> >> >> >        <primitive class="ocf" id="pri_pingd"
> provider="pacemaker"
> >> >> type="pingd">
> >> >> >          <instance_attributes
> >> id="pri_pingd-instance_attributes">
> >> >> >            <nvpair id="pri_pingd-instance_attributes-name"
> >> >> name="name" value="pingd"/>
> >> >> >            <nvpair
> id="pri_pingd-instance_attributes-host_list"
> >> >> name="host_list" value="192.168.1.1 \ 192.168.4.10"/>
> >> >> >            <nvpair
> >> id="pri_pingd-instance_attributes-multiplier"
> >> >> name="multiplier" value="100"/>
> >> >> >            <nvpair id="pri_pingd-instance_attributes-dampen"
> >> >> name="dampen" value="5s"/>
> >> >> >          </instance_attributes>
> >> >> >          <operations>
> >> >> >            <op id="pri_pingd-monitor-15s" interval="15s"
> >> >> name="monitor" timeout="20s"/>
> >> >> >          </operations>
> >> >> >        </primitive>
> >> >> >      </clone>
> >> >> >    </resources>
> >> >> >    <constraints>
> >> >> >      <rsc_order first="ms_drbd_service" first-action="promote"
> >> >> id="ord_apache_after_drbd" score="INFINITY" then="group_t3"
> >> then-action="start"/>
> >> >> >      <rsc_colocation id="col_apache_after_drbd" rsc="group_t3"
> >> >> score="INFINITY" with-rsc="ms_drbd_service" with-rsc-role="Master"/>
> >> >> >      <rsc_location id="loc_drbd_on_conected_node"
> >> >> rsc="ms_drbd_service">
> >> >> >        <rule id="loc_drbd_on_conected_node-rule"
> >> >> score-attribute="ping">
> >> >> >          <expression attribute="pingd"
> >> >> id="loc_drbd_on_conected_node-expression" operation="defined"/>
> >> >> >        </rule>
> >> >> >      </rsc_location>
> >> >> >    </constraints>
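One thing to check in the constraints above: the rule uses score-attribute="ping", but the attribute the clone actually publishes (see pingd="200" in the status section below) is named "pingd", so the rule likely contributes no score even when its "defined" expression matches. The pattern usually recommended for connectivity-based failover instead pushes the resource away from nodes with no reachable ping targets; a hedged sketch in crm syntax, keeping the names from this config:

location loc_drbd_on_connected_node ms_drbd_service \
        rule $id="loc_drbd_on_connected_node-rule" -inf: not_defined pingd or pingd lte 0

This way the attribute name in the rule matches the one pingd writes, and a node that loses all ping targets is scored -INFINITY rather than merely not preferred.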
> >> >> >    <rsc_defaults/>
> >> >> >    <op_defaults/>
> >> >> >  </configuration>
> >> >> >  <status>
> >> >> >    <node_state uname="server2" ha="active" in_ccm="true"
> >> crmd="online"
> >> >> join="member" expected="member" shutdown="0"
> >> >> id="5262f929-1082-4a85-aa05-7bd1992f15be"
> >> crm-debug-origin="do_update_resource">
> >> >> >      <transient_attributes
> >> id="5262f929-1082-4a85-aa05-7bd1992f15be">
> >> >> >        <instance_attributes
> >> >> id="status-5262f929-1082-4a85-aa05-7bd1992f15be">
> >> >> >          <nvpair
> >> >> id="status-5262f929-1082-4a85-aa05-7bd1992f15be-probe_complete"
> >> name="probe_complete" value="true"/>
> >> >> >          <nvpair
> >> >> id="status-5262f929-1082-4a85-aa05-7bd1992f15be-pingd" name="pingd"
> >> value="200"/>
> >> >> >        </instance_attributes>
> >> >> >      </transient_attributes>
> >> >> >      <lrm id="5262f929-1082-4a85-aa05-7bd1992f15be">
> >> >> >        <lrm_resources>
> >> >> >          <lrm_resource id="pri_apache_Dienst" type="apache"
> >> >> class="ocf" provider="heartbeat">
> >> >> >            <lrm_rsc_op id="pri_apache_Dienst_monitor_0"
> >> >> operation="monitor" crm-debug-origin="build_active_RAs"
> >> crm_feature_set="3.0.1"
> >> >> transition-key="13:0:7:6090ba02-c064-4d80-9222-bf77b7011e17"
> >> >> transition-magic="0:7;13:0:7:6090ba02-c064-4d80-9222-bf77b7011e17"
> >> call-id="5" rc-code="7"
> >> >> op-status="0" interval="0" last-run="1271678969"
> >> last-rc-change="1271678969"
> >> >> exec-time="800" queue-time="0"
> >> >> op-digest="592a5e45deff3022dc73ad2b8b690624"/>
> >> >> >            <lrm_rsc_op id="pri_apache_Dienst_start_0"
> >> >> operation="start" crm-debug-origin="build_active_RAs"
> >> crm_feature_set="3.0.1"
> >> >> transition-key="37:4:0:6090ba02-c064-4d80-9222-bf77b7011e17"
> >> >> transition-magic="0:0;37:4:0:6090ba02-c064-4d80-9222-bf77b7011e17"
> >> call-id="20" rc-code="0"
> >> >> op-status="0" interval="0" last-run="1271679345"
> >> last-rc-change="1271679345"
> >> >> exec-time="1820" queue-time="0"
> >> op-digest="592a5e45deff3022dc73ad2b8b690624"/>
> >> >> >            <lrm_rsc_op id="pri_apache_Dienst_monitor_15000"
> >> >> operation="monitor" crm-debug-origin="build_active_RAs"
> >> crm_feature_set="3.0.1"
> >> >> transition-key="38:4:0:6090ba02-c064-4d80-9222-bf77b7011e17"
> >> >> transition-magic="0:0;38:4:0:6090ba02-c064-4d80-9222-bf77b7011e17"
> >> call-id="21"
> >> >> rc-code="0" op-status="0" interval="15000" last-run="1271679391"
> >> >> last-rc-change="1271679346" exec-time="140" queue-time="0"
> >> >> op-digest="80b94a8e74a4ac7d41102e0b2bec9129"/>
> >> >> >            <lrm_rsc_op id="pri_apache_Dienst_stop_0"
> >> >> operation="stop" crm-debug-origin="do_update_resource"
> >> crm_feature_set="3.0.1"
> >> >> transition-key="37:5:0:6090ba02-c064-4d80-9222-bf77b7011e17"
> >> >> transition-magic="0:0;37:5:0:6090ba02-c064-4d80-9222-bf77b7011e17"
> >> call-id="22" rc-code="0"
> >> >> op-status="0" interval="0" last-run="1271679402"
> >> last-rc-change="1271679402"
> >> >> exec-time="1350" queue-time="0"
> >> op-digest="592a5e45deff3022dc73ad2b8b690624"/>
> >> >> >          </lrm_resource>
> >> >> >          <lrm_resource id="pri_IP_Cluster" type="IPaddr2"
> >> >> class="ocf" provider="heartbeat">
> >> >> >            <lrm_rsc_op id="pri_IP_Cluster_monitor_0"
> >> >> operation="monitor" crm-debug-origin="build_active_RAs"
> >> crm_feature_set="3.0.1"
> >> >> transition-key="12:0:7:6090ba02-c064-4d80-9222-bf77b7011e17"
> >> >> transition-magic="0:7;12:0:7:6090ba02-c064-4d80-9222-bf77b7011e17"
> >> call-id="4" rc-code="7"
> >> >> op-status="0" interval="0" last-run="1271678969"
> >> last-rc-change="1271678969"
> >> >> exec-time="510" queue-time="0"
> >> op-digest="ab9506a065e05252980696cd889aac20"/>
> >> >> >            <lrm_rsc_op id="pri_IP_Cluster_start_0"
> >> >> operation="start" crm-debug-origin="build_active_RAs"
> >> crm_feature_set="3.0.1"
> >> >> transition-key="35:4:0:6090ba02-c064-4d80-9222-bf77b7011e17"
> >> >> transition-magic="0:0;35:4:0:6090ba02-c064-4d80-9222-bf77b7011e17"
> >> call-id="18" rc-code="0"
> >> >> op-status="0" interval="0" last-run="1271679344"
> >> last-rc-change="1271679344"
> >> >> exec-time="80" queue-time="0"
> >> op-digest="ab9506a065e05252980696cd889aac20"/>
> >> >> >            <lrm_rsc_op id="pri_IP_Cluster_monitor_3000"
> >> >> operation="monitor" crm-debug-origin="build_active_RAs"
> >> crm_feature_set="3.0.1"
> >> >> transition-key="36:4:0:6090ba02-c064-4d80-9222-bf77b7011e17"
> >> >> transition-magic="0:0;36:4:0:6090ba02-c064-4d80-9222-bf77b7011e17"
> >> call-id="19" rc-code="0"
> >> >> op-status="0" interval="3000" last-run="1271679399"
> >> >> last-rc-change="1271679345" exec-time="50" queue-time="0"
> >> >> op-digest="f3cc891049deae7705d949e4e254a7f1"/>
> >> >> >            <lrm_rsc_op id="pri_IP_Cluster_stop_0"
> >> operation="stop"
> >> >> crm-debug-origin="do_update_resource" crm_feature_set="3.0.1"
> >> >> transition-key="36:5:0:6090ba02-c064-4d80-9222-bf77b7011e17"
> >> >> transition-magic="0:0;36:5:0:6090ba02-c064-4d80-9222-bf77b7011e17"
> >> call-id="24" rc-code="0"
> >> >> op-status="0" interval="0" last-run="1271679403"
> >> last-rc-change="1271679403"
> >> >> exec-time="80" queue-time="0"
> >> op-digest="ab9506a065e05252980696cd889aac20"/>
> >> >> >          </lrm_resource>
> >> >> >          <lrm_resource id="pri_drbd_Dienst:1" type="drbd"
> >> >> class="ocf" provider="linbit">
> >> >> >            <lrm_rsc_op id="pri_drbd_Dienst:1_monitor_0"
> >> >> operation="monitor" crm-debug-origin="build_active_RAs"
> >> crm_feature_set="3.0.1"
> >> >> transition-key="10:0:7:6090ba02-c064-4d80-9222-bf77b7011e17"
> >> >> transition-magic="0:7;10:0:7:6090ba02-c064-4d80-9222-bf77b7011e17"
> >> call-id="2" rc-code="7"
> >> >> op-status="0" interval="0" last-run="1271678969"
> >> last-rc-change="1271678969"
> >> >> exec-time="670" queue-time="0"
> >> >> op-digest="da70ef5b9aed870d7c0944ce6ee989e2"/>
> >> >> >            <lrm_rsc_op id="pri_drbd_Dienst:1_start_0"
> >> >> operation="start" crm-debug-origin="build_active_RAs"
> >> crm_feature_set="3.0.1"
> >> >> transition-key="7:1:0:6090ba02-c064-4d80-9222-bf77b7011e17"
> >> >> transition-magic="0:0;7:1:0:6090ba02-c064-4d80-9222-bf77b7011e17"
> >> call-id="8" rc-code="0"
> >> >> op-status="0" interval="0" last-run="1271678971"
> >> last-rc-change="1271678971"
> >> >> exec-time="300" queue-time="0"
> >> op-digest="da70ef5b9aed870d7c0944ce6ee989e2"/>
> >> >> >            <lrm_rsc_op id="pri_drbd_Dienst:1_promote_0"
> >> >> operation="promote" crm-debug-origin="build_active_RAs"
> >> crm_feature_set="3.0.1"
> >> >> transition-key="9:4:0:6090ba02-c064-4d80-9222-bf77b7011e17"
> >> >> transition-magic="0:0;9:4:0:6090ba02-c064-4d80-9222-bf77b7011e17"
> >> call-id="15" rc-code="0"
> >> >> op-status="0" interval="0" last-run="1271679332"
> >> last-rc-change="1271679332"
> >> >> exec-time="11230" queue-time="0"
> >> >> op-digest="da70ef5b9aed870d7c0944ce6ee989e2"/>
> >> >> >            <lrm_rsc_op
> >> >> id="pri_drbd_Dienst:1_post_notify_promote_0" operation="notify"
> >> crm-debug-origin="build_active_RAs"
> >> >> crm_feature_set="3.0.1"
> >> transition-key="65:4:0:6090ba02-c064-4d80-9222-bf77b7011e17"
> >> >> transition-magic="0:0;65:4:0:6090ba02-c064-4d80-9222-bf77b7011e17"
> >> call-id="16"
> >> >> rc-code="0" op-status="0" interval="0" last-run="1271679344"
> >> >> last-rc-change="1271679344" exec-time="70" queue-time="0"
> >> >> op-digest="da70ef5b9aed870d7c0944ce6ee989e2"/>
> >> >> >            <lrm_rsc_op
> >> id="pri_drbd_Dienst:1_pre_notify_demote_0"
> >> >> operation="notify" crm-debug-origin="do_update_resource"
> >> >> crm_feature_set="3.0.1"
> >> transition-key="62:5:0:6090ba02-c064-4d80-9222-bf77b7011e17"
> >> >> transition-magic="0:0;62:5:0:6090ba02-c064-4d80-9222-bf77b7011e17"
> >> call-id="23"
> >> >> rc-code="0" op-status="0" interval="0" last-run="1271679401"
> >> >> last-rc-change="1271679401" exec-time="160" queue-time="0"
> >> >> op-digest="da70ef5b9aed870d7c0944ce6ee989e2"/>
> >> >> >            <lrm_rsc_op id="pri_drbd_Dienst:1_demote_0"
> >> >> operation="demote" crm-debug-origin="do_update_resource"
> >> crm_feature_set="3.0.1"
> >> >> transition-key="9:5:0:6090ba02-c064-4d80-9222-bf77b7011e17"
> >> >> transition-magic="0:0;9:5:0:6090ba02-c064-4d80-9222-bf77b7011e17"
> >> call-id="26" rc-code="0"
> >> >> op-status="0" interval="0" last-run="1271679403"
> >> last-rc-change="1271679403"
> >> >> exec-time="100" queue-time="0"
> >> op-digest="da70ef5b9aed870d7c0944ce6ee989e2"/>
> >> >> >            <lrm_rsc_op
> >> id="pri_drbd_Dienst:1_post_notify_demote_0"
> >> >> operation="notify" crm-debug-origin="do_update_resource"
> >> >> crm_feature_set="3.0.1"
> >> transition-key="63:5:0:6090ba02-c064-4d80-9222-bf77b7011e17"
> >> >> transition-magic="0:0;63:5:0:6090ba02-c064-4d80-9222-bf77b7011e17"
> >> call-id="27"
> >> >> rc-code="0" op-status="0" interval="0" last-run="1271679403"
> >> >> last-rc-change="1271679403" exec-time="100" queue-time="0"
> >> >> op-digest="da70ef5b9aed870d7c0944ce6ee989e2"/>
> >> >> >            <lrm_rsc_op
> id="pri_drbd_Dienst:1_pre_notify_stop_0"
> >> >> operation="notify" crm-debug-origin="do_update_resource"
> >> >> crm_feature_set="3.0.1"
> >> transition-key="59:6:0:6090ba02-c064-4d80-9222-bf77b7011e17"
> >> >> transition-magic="0:0;59:6:0:6090ba02-c064-4d80-9222-bf77b7011e17"
> >> call-id="28"
> >> >> rc-code="0" op-status="0" interval="0" last-run="1271679403"
> >> >> last-rc-change="1271679403" exec-time="50" queue-time="0"
> >> >> op-digest="da70ef5b9aed870d7c0944ce6ee989e2"/>
> >> >> >            <lrm_rsc_op id="pri_drbd_Dienst:1_stop_0"
> >> >> operation="stop" crm-debug-origin="do_update_resource"
> >> crm_feature_set="3.0.1"
> >> >> transition-key="7:6:0:6090ba02-c064-4d80-9222-bf77b7011e17"
> >> >> transition-magic="0:0;7:6:0:6090ba02-c064-4d80-9222-bf77b7011e17"
> >> call-id="29" rc-code="0"
> >> >> op-status="0" interval="0" last-run="1271679403"
> >> last-rc-change="1271679403"
> >> >> exec-time="90" queue-time="0"
> >> op-digest="da70ef5b9aed870d7c0944ce6ee989e2"/>
> >> >> >          </lrm_resource>
> >> >> >          <lrm_resource id="pri_pingd:1" type="pingd"
> class="ocf"
> >> >> provider="pacemaker">
> >> >> >            <lrm_rsc_op id="pri_pingd:1_monitor_0"
> >> >> operation="monitor" crm-debug-origin="build_active_RAs"
> >> crm_feature_set="3.0.1"
> >> >> transition-key="14:0:7:6090ba02-c064-4d80-9222-bf77b7011e17"
> >> >> transition-magic="0:7;14:0:7:6090ba02-c064-4d80-9222-bf77b7011e17"
> >> call-id="6" rc-code="7"
> >> >> op-status="0" interval="0" last-run="1271678969"
> >> last-rc-change="1271678969"
> >> >> exec-time="30" queue-time="1000"
> >> op-digest="ab58a89887adc76008fe441640ea2c3e"/>
> >> >> >            <lrm_rsc_op id="pri_pingd:1_start_0"
> >> operation="start"
> >> >> crm-debug-origin="build_active_RAs" crm_feature_set="3.0.1"
> >> >> transition-key="39:1:0:6090ba02-c064-4d80-9222-bf77b7011e17"
> >> >> transition-magic="0:0;39:1:0:6090ba02-c064-4d80-9222-bf77b7011e17"
> >> call-id="7" rc-code="0" op-status="0"
> >> >> interval="0" last-run="1271678971" last-rc-change="1271678971"
> >> >> exec-time="60" queue-time="0"
> >> op-digest="ab58a89887adc76008fe441640ea2c3e"/>
> >> >> >            <lrm_rsc_op id="pri_pingd:1_monitor_15000"
> >> >> operation="monitor" crm-debug-origin="build_active_RAs"
> >> crm_feature_set="3.0.1"
> >> >> transition-key="40:1:0:6090ba02-c064-4d80-9222-bf77b7011e17"
> >> >> transition-magic="0:0;40:1:0:6090ba02-c064-4d80-9222-bf77b7011e17"
> >> call-id="9" rc-code="0"
> >> >> op-status="0" interval="15000" last-run="1271679391"
> >> last-rc-change="1271678971"
> >> >> exec-time="20" queue-time="0"
> >> >> op-digest="723c145d6f1d33caccebfa26c5fda578"/>
> >> >> >          </lrm_resource>
> >> >> >          <lrm_resource id="pri_FS_drbd_t3" type="Filesystem"
> >> >> class="ocf" provider="heartbeat">
> >> >> >            <lrm_rsc_op id="pri_FS_drbd_t3_monitor_0"
> >> >> operation="monitor" crm-debug-origin="build_active_RAs"
> >> crm_feature_set="3.0.1"
> >> >> transition-key="11:0:7:6090ba02-c064-4d80-9222-bf77b7011e17"
> >> >> transition-magic="0:7;11:0:7:6090ba02-c064-4d80-9222-bf77b7011e17"
> >> call-id="3" rc-code="7"
> >> >> op-status="0" interval="0" last-run="1271678969"
> >> last-rc-change="1271678969"
> >> >> exec-time="350" queue-time="0"
> >> op-digest="0948723f8c5b98b0d6330e30199bfe83"/>
> >> >> >            <lrm_rsc_op id="pri_FS_drbd_t3_start_0"
> >> >> operation="start" crm-debug-origin="build_active_RAs"
> >> crm_feature_set="3.0.1"
> >> >> transition-key="34:4:0:6090ba02-c064-4d80-9222-bf77b7011e17"
> >> >> transition-magic="0:0;34:4:0:6090ba02-c064-4d80-9222-bf77b7011e17"
> >> call-id="17" rc-code="0"
> >> >> op-status="0" interval="0" last-run="1271679344"
> >> last-rc-change="1271679344"
> >> >> exec-time="230" queue-time="0"
> >> op-digest="0948723f8c5b98b0d6330e30199bfe83"/>
> >> >> >            <lrm_rsc_op id="pri_FS_drbd_t3_stop_0"
> >> operation="stop"
> >> >> crm-debug-origin="do_update_resource" crm_feature_set="3.0.1"
> >> >> transition-key="35:5:0:6090ba02-c064-4d80-9222-bf77b7011e17"
> >> >> transition-magic="0:0;35:5:0:6090ba02-c064-4d80-9222-bf77b7011e17"
> >> call-id="25" rc-code="0"
> >> >> op-status="0" interval="0" last-run="1271679403"
> >> last-rc-change="1271679403"
> >> >> exec-time="130" queue-time="0"
> >> op-digest="0948723f8c5b98b0d6330e30199bfe83"/>
> >> >> >          </lrm_resource>
> >> >> >        </lrm_resources>
> >> >> >      </lrm>
> >> >> >    </node_state>
> >> >> >    <node_state uname="server1" ha="active" in_ccm="true"
> >> crmd="online"
> >> >> join="member" expected="member" shutdown="0"
> >> >> id="3e20966a-ed64-4972-8f5a-88be0977f759"
> >> crm-debug-origin="do_state_transition">
> >> >> >      <transient_attributes
> >> id="3e20966a-ed64-4972-8f5a-88be0977f759">
> >> >> >        <instance_attributes
> >> >> id="status-3e20966a-ed64-4972-8f5a-88be0977f759">
> >> >> >          <nvpair
> >> >> id="status-3e20966a-ed64-4972-8f5a-88be0977f759-probe_complete"
> >> name="probe_complete" value="true"/>
> >> >> >          <nvpair
> >> >> id="status-3e20966a-ed64-4972-8f5a-88be0977f759-pingd" name="pingd"
> >> value="200"/>
> >> >> >        </instance_attributes>
> >> >> >      </transient_attributes>
> >> >> >      <lrm id="3e20966a-ed64-4972-8f5a-88be0977f759">
> >> >> >        <lrm_resources>
> >> >> >          <lrm_resource id="pri_apache_Dienst" type="apache"
> >> >> class="ocf" provider="heartbeat">
> >> >> >            <lrm_rsc_op id="pri_apache_Dienst_monitor_0"
> >> >> operation="monitor" crm-debug-origin="build_active_RAs"
> >> crm_feature_set="3.0.1"
> >> >> transition-key="7:0:7:6090ba02-c064-4d80-9222-bf77b7011e17"
> >> >> transition-magic="0:7;7:0:7:6090ba02-c064-4d80-9222-bf77b7011e17"
> >> call-id="5" rc-code="7"
> >> >> op-status="0" interval="0" last-run="1271678969"
> >> last-rc-change="1271678969"
> >> >> exec-time="630" queue-time="10"
> >> op-digest="592a5e45deff3022dc73ad2b8b690624"/>
> >> >> >            <lrm_rsc_op id="pri_apache_Dienst_start_0"
> >> >> operation="start" crm-debug-origin="build_active_RAs"
> >> crm_feature_set="3.0.1"
> >> >> transition-key="42:3:0:6090ba02-c064-4d80-9222-bf77b7011e17"
> >> >> transition-magic="0:0;42:3:0:6090ba02-c064-4d80-9222-bf77b7011e17"
> >> call-id="17" rc-code="0"
> >> >> op-status="0" interval="0" last-run="1271678982"
> >> last-rc-change="1271678982"
> >> >> exec-time="1840" queue-time="0"
> >> op-digest="592a5e45deff3022dc73ad2b8b690624"/>
> >> >> >            <lrm_rsc_op id="pri_apache_Dienst_stop_0"
> >> >> operation="stop" crm-debug-origin="build_active_RAs"
> >> crm_feature_set="3.0.1"
> >> >> transition-key="35:1:0:d6a0122e-5574-4d0c-b15b-1bd452b9062c"
> >> >> transition-magic="0:0;35:1:0:d6a0122e-5574-4d0c-b15b-1bd452b9062c"
> >> call-id="19" rc-code="0"
> >> >> op-status="0" interval="0" last-run="1271679340"
> >> last-rc-change="1271679340"
> >> >> exec-time="1260" queue-time="0"
> >> op-digest="592a5e45deff3022dc73ad2b8b690624"/>
> >> >> >          </lrm_resource>
> >> >> >          <lrm_resource id="pri_IP_Cluster" type="IPaddr2"
> >> >> class="ocf" provider="heartbeat">
> >> >> >            <lrm_rsc_op id="pri_IP_Cluster_monitor_0"
> >> >> operation="monitor" crm-debug-origin="build_active_RAs"
> >> crm_feature_set="3.0.1"
> >> >> transition-key="6:0:7:6090ba02-c064-4d80-9222-bf77b7011e17"
> >> >> transition-magic="0:7;6:0:7:6090ba02-c064-4d80-9222-bf77b7011e17"
> >> call-id="4" rc-code="7"
> >> >> op-status="0" interval="0" last-run="1271678969"
> >> last-rc-change="1271678969"
> >> >> exec-time="350" queue-time="0"
> >> op-digest="ab9506a065e05252980696cd889aac20"/>
> >> >> >            <lrm_rsc_op id="pri_IP_Cluster_start_0"
> >> >> operation="start" crm-debug-origin="build_active_RAs"
> >> crm_feature_set="3.0.1"
> >> >> transition-key="40:3:0:6090ba02-c064-4d80-9222-bf77b7011e17"
> >> >> transition-magic="0:0;40:3:0:6090ba02-c064-4d80-9222-bf77b7011e17"
> >> call-id="15" rc-code="0"
> >> >> op-status="0" interval="0" last-run="1271678980"
> >> last-rc-change="1271678980"
> >> >> exec-time="100" queue-time="0"
> >> op-digest="ab9506a065e05252980696cd889aac20"/>
> >> >> >            <lrm_rsc_op id="pri_IP_Cluster_stop_0"
> >> operation="stop"
> >> >> crm-debug-origin="build_active_RAs" crm_feature_set="3.0.1"
> >> >> transition-key="34:1:0:d6a0122e-5574-4d0c-b15b-1bd452b9062c"
> >> >> transition-magic="0:0;34:1:0:d6a0122e-5574-4d0c-b15b-1bd452b9062c"
> >> call-id="21" rc-code="0"
> >> >> op-status="0" interval="0" last-run="1271679341"
> >> last-rc-change="1271679341"
> >> >> exec-time="60" queue-time="0"
> >> op-digest="ab9506a065e05252980696cd889aac20"/>
> >> >> >          </lrm_resource>
> >> >> >          <lrm_resource id="pri_drbd_Dienst:0" type="drbd"
> >> >> class="ocf" provider="linbit">
> >> >> >            <lrm_rsc_op id="pri_drbd_Dienst:0_monitor_0"
> >> >> operation="monitor" crm-debug-origin="build_active_RAs"
> >> crm_feature_set="3.0.1"
> >> >> transition-key="4:0:7:6090ba02-c064-4d80-9222-bf77b7011e17"
> >> >> transition-magic="0:7;4:0:7:6090ba02-c064-4d80-9222-bf77b7011e17"
> >> call-id="2" rc-code="7"
> >> >> op-status="0" interval="0" last-run="1271678969"
> >> last-rc-change="1271678969"
> >> >> exec-time="450" queue-time="0"
> >> op-digest="da70ef5b9aed870d7c0944ce6ee989e2"/>
> >> >> >            <lrm_rsc_op id="pri_drbd_Dienst:0_start_0"
> >> >> operation="start" crm-debug-origin="build_active_RAs"
> >> crm_feature_set="3.0.1"
> >> >> transition-key="5:1:0:6090ba02-c064-4d80-9222-bf77b7011e17"
> >> >> transition-magic="0:0;5:1:0:6090ba02-c064-4d80-9222-bf77b7011e17"
> >> call-id="8" rc-code="0"
> >> >> op-status="0" interval="0" last-run="1271678972"
> >> last-rc-change="1271678972"
> >> >> exec-time="370" queue-time="0"
> >> op-digest="da70ef5b9aed870d7c0944ce6ee989e2"/>
> >> >> >            <lrm_rsc_op id="pri_drbd_Dienst:0_promote_0"
> >> >> operation="promote" crm-debug-origin="build_active_RAs"
> >> crm_feature_set="3.0.1"
> >> >> transition-key="9:2:0:6090ba02-c064-4d80-9222-bf77b7011e17"
> >> >> transition-magic="0:0;9:2:0:6090ba02-c064-4d80-9222-bf77b7011e17"
> >> call-id="12" rc-code="0"
> >> >> op-status="0" interval="0" last-run="1271678976"
> >> last-rc-change="1271678976"
> >> >> exec-time="120" queue-time="0"
> >> op-digest="da70ef5b9aed870d7c0944ce6ee989e2"/>
> >> >> >            <lrm_rsc_op id="pri_drbd_Dienst:0_demote_0"
> >> >> operation="demote" crm-debug-origin="build_active_RAs"
> >> crm_feature_set="3.0.1"
> >> >> transition-key="7:1:0:d6a0122e-5574-4d0c-b15b-1bd452b9062c"
> >> >> transition-magic="0:0;7:1:0:d6a0122e-5574-4d0c-b15b-1bd452b9062c"
> >> call-id="23" rc-code="0"
> >> >> op-status="0" interval="0" last-run="1271679341"
> >> last-rc-change="1271679341"
> >> >> exec-time="90" queue-time="0"
> >> op-digest="da70ef5b9aed870d7c0944ce6ee989e2"/>
> >> >> >            <lrm_rsc_op
> id="pri_drbd_Dienst:0_pre_notify_stop_0"
> >> >> operation="notify" crm-debug-origin="build_active_RAs"
> >> >> crm_feature_set="3.0.1"
> >> transition-key="59:1:0:d6a0122e-5574-4d0c-b15b-1bd452b9062c"
> >> >> transition-magic="0:0;59:1:0:d6a0122e-5574-4d0c-b15b-1bd452b9062c"
> >> call-id="25"
> >> >> rc-code="0" op-status="0" interval="0" last-run="1271679341"
> >> >> last-rc-change="1271679341" exec-time="50" queue-time="0"
> >> >> op-digest="da70ef5b9aed870d7c0944ce6ee989e2"/>
> >> >> >            <lrm_rsc_op id="pri_drbd_Dienst:0_stop_0"
> >> >> operation="stop" crm-debug-origin="build_active_RAs"
> >> crm_feature_set="3.0.1"
> >> >> transition-key="8:1:0:d6a0122e-5574-4d0c-b15b-1bd452b9062c"
> >> >> transition-magic="0:0;8:1:0:d6a0122e-5574-4d0c-b15b-1bd452b9062c"
> >> call-id="26" rc-code="0"
> >> >> op-status="0" interval="0" last-run="1271679341"
> >> last-rc-change="1271679341"
> >> >> exec-time="100" queue-time="0"
> >> op-digest="da70ef5b9aed870d7c0944ce6ee989e2"/>
> >> >> >          </lrm_resource>
> >> >> >          <lrm_resource id="pri_pingd:0" type="pingd"
> class="ocf"
> >> >> provider="pacemaker">
> >> >> >            <lrm_rsc_op id="pri_pingd:0_monitor_0"
> >> >> operation="monitor" crm-debug-origin="build_active_RAs"
> >> crm_feature_set="3.0.1"
> >> >> transition-key="8:0:7:6090ba02-c064-4d80-9222-bf77b7011e17"
> >> >> transition-magic="0:7;8:0:7:6090ba02-c064-4d80-9222-bf77b7011e17"
> >> call-id="6" rc-code="7"
> >> >> op-status="0" interval="0" last-run="1271678970"
> >> last-rc-change="1271678970"
> >> >> exec-time="40" queue-time="1000"
> >> op-digest="ab58a89887adc76008fe441640ea2c3e"/>
> >> >> >            <lrm_rsc_op id="pri_pingd:0_start_0"
> >> operation="start"
> >> >> crm-debug-origin="build_active_RAs" crm_feature_set="3.0.1"
> >> >> transition-key="37:1:0:6090ba02-c064-4d80-9222-bf77b7011e17"
> >> >> transition-magic="0:0;37:1:0:6090ba02-c064-4d80-9222-bf77b7011e17"
> >> call-id="7" rc-code="0" op-status="0"
> >> >> interval="0" last-run="1271678971" last-rc-change="1271678971"
> >> >> exec-time="70" queue-time="0"
> >> op-digest="ab58a89887adc76008fe441640ea2c3e"/>
> >> >> >            <lrm_rsc_op id="pri_pingd:0_monitor_15000"
> >> >> operation="monitor" crm-debug-origin="build_active_RAs"
> >> crm_feature_set="3.0.1"
> >> >> transition-key="49:2:0:6090ba02-c064-4d80-9222-bf77b7011e17"
> >> >> transition-magic="0:0;49:2:0:6090ba02-c064-4d80-9222-bf77b7011e17"
> >> call-id="10" rc-code="0"
> >> >> op-status="0" interval="15000" last-run="1271679395"
> >> >> last-rc-change="1271678975" exec-time="20" queue-time="0"
> >> >> op-digest="723c145d6f1d33caccebfa26c5fda578"/>
> >> >> >          </lrm_resource>
> >> >> >          <lrm_resource id="pri_FS_drbd_t3" type="Filesystem"
> >> >> class="ocf" provider="heartbeat">
> >> >> >            <lrm_rsc_op id="pri_FS_drbd_t3_monitor_0"
> >> >> operation="monitor" crm-debug-origin="build_active_RAs"
> >> crm_feature_set="3.0.1"
> >> >> transition-key="5:0:7:6090ba02-c064-4d80-9222-bf77b7011e17"
> >> >> transition-magic="0:7;5:0:7:6090ba02-c064-4d80-9222-bf77b7011e17"
> >> call-id="3" rc-code="7"
> >> >> op-status="0" interval="0" last-run="1271678969"
> >> last-rc-change="1271678969"
> >> >> exec-time="380" queue-time="0"
> >> op-digest="0948723f8c5b98b0d6330e30199bfe83"/>
> >> >> >            <lrm_rsc_op id="pri_FS_drbd_t3_start_0"
> >> >> operation="start" crm-debug-origin="build_active_RAs"
> >> crm_feature_set="3.0.1"
> >> >> transition-key="39:3:0:6090ba02-c064-4d80-9222-bf77b7011e17"
> >> >> transition-magic="0:0;39:3:0:6090ba02-c064-4d80-9222-bf77b7011e17"
> >> call-id="14" rc-code="0"
> >> >> op-status="0" interval="0" last-run="1271678979"
> >> last-rc-change="1271678979"
> >> >> exec-time="330" queue-time="0"
> >> op-digest="0948723f8c5b98b0d6330e30199bfe83"/>
> >> >> >            <lrm_rsc_op id="pri_FS_drbd_t3_stop_0"
> >> operation="stop"
> >> >> crm-debug-origin="build_active_RAs" crm_feature_set="3.0.1"
> >> >> transition-key="33:1:0:d6a0122e-5574-4d0c-b15b-1bd452b9062c"
> >> >> transition-magic="0:0;33:1:0:d6a0122e-5574-4d0c-b15b-1bd452b9062c"
> >> call-id="22" rc-code="0"
> >> >> op-status="0" interval="0" last-run="1271679341"
> >> last-rc-change="1271679341"
> >> >> exec-time="140" queue-time="0"
> >> op-digest="0948723f8c5b98b0d6330e30199bfe83"/>
> >> >> >          </lrm_resource>
> >> >> >        </lrm_resources>
> >> >> >      </lrm>
> >> >> >    </node_state>
> >> >> >  </status>
> >> >> > </cib>
> >> >> >
> >> >> >
> >> >> > --------------
> >> >> >
> >> >> > the clear config
> >> >> >
> >> >> > crm(live)configure# show
> >> >> > node $id="3e20966a-ed64-4972-8f5a-88be0977f759" server1 \
> >> >> >        attributes standby="off"
> >> >> > node $id="5262f929-1082-4a85-aa05-7bd1992f15be" server2 \
> >> >> >        attributes standby="off"
> >> >> > primitive pri_FS_drbd_t3 ocf:heartbeat:Filesystem \
> >> >> >        params device="/dev/drbd0" directory="/mnt/drbd_daten"
> >> >> fstype="ext3" options="noatime"
> >> >> > primitive pri_IP_Cluster ocf:heartbeat:IPaddr2 \
> >> >> >        params ip="192.168.1.253" cidr_netmask="24" nic="eth1"
> \
> >> >> >        op monitor interval="3"
> >> >> > primitive pri_apache_Dienst ocf:heartbeat:apache \
> >> >> >        op monitor interval="15" \
> >> >> >        params configfile="/etc/apache2/apache2.conf"
> >> >> httpd="/usr/sbin/apache2" port="80"
> >> >> > primitive pri_drbd_Dienst ocf:linbit:drbd \
> >> >> >        params drbd_resource="t3" \
> >> >> >        op monitor interval="15" \
> >> >> >        op start interval="0" timeout="240" \
> >> >> >        op stop interval="0" timeout="100"
> >> >> > primitive pri_pingd ocf:pacemaker:pingd \
> >> >> >        params name="pingd" host_list="192.168.1.1 \
> 192.168.4.10"
> >> >> multiplier="100" dampen="5s" \
> >> >> >        op monitor interval="15s" timeout="20s"
> >> >> > group group_t3 pri_FS_drbd_t3 pri_IP_Cluster pri_apache_Dienst
> >> >> > ms ms_drbd_service pri_drbd_Dienst \
> >> >> >        meta notify="true" target-role="Started"
> >> >> > clone clo_pingd pri_pingd \
> >> >> >        meta globally-unique="false"
> >> >> > location loc_drbd_on_conected_node ms_drbd_service \
> >> >> >        rule $id="loc_drbd_on_conected_node-rule" ping: defined
> >> pingd
> >> >> > colocation col_apache_after_drbd inf: group_t3
> ms_drbd_service:Master
> >> >> > order ord_apache_after_drbd inf: ms_drbd_service:promote
> >> group_t3:start
> >> >> > property $id="cib-bootstrap-options" \
> >> >> >
> >>  dc-version="1.0.8-2c98138c2f070fcb6ddeab1084154cffbf44ba75" \
> >> >> >        cluster-infrastructure="Heartbeat" \
> >> >> >        no-quorum-policy="ignore" \
> >> >> >        default-resource-stickiness="100" \
> >> >> >        last-lrm-refresh="1271679788" \
> >> >> >        startup-fencing="false" \
> >> >> >        stonith-enabled="false" \
> >> >> >        default-action-timeout="120s"
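
One detail worth comparing against the configuration above: the location rule only tests `defined pingd`, and that expression stays true even when every ping target is unreachable, because the attribute then still exists with the value 0. The pattern documented in Pacemaker Explained instead pushes resources away from a node whose connectivity attribute is missing or has dropped to zero. A sketch of that variant, reusing the names from the config above (the rule body is an assumption about the intended behaviour, not something posted in this thread):

```
# Hypothetical variant of loc_drbd_on_conected_node: forbid the node
# (-inf score) when its pingd attribute is undefined or has fallen to 0,
# instead of merely preferring nodes where the attribute is defined.
location loc_drbd_on_conected_node ms_drbd_service \
        rule $id="loc_drbd_on_conected_node-rule" -inf: not_defined pingd or pingd lte 0
```

With `multiplier="100"` and two ping targets, a fully connected node carries pingd=200, one reachable target gives 100, and only the loss of all targets drives the score to 0 and triggers the `-inf` rule.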
> >> >> >
> >> >> > --
> >> >> > FREE for all GMX members: the maxdome Movie-FLAT!
> >> >> > Activate it now at http://portal.gmx.net/de/go/maxdome01
> >> >> >
> >> >> > _______________________________________________
> >> >> > Pacemaker mailing list: Pacemaker at oss.clusterlabs.org
> >> >> > http://oss.clusterlabs.org/mailman/listinfo/pacemaker
> >> >> >
> >> >> > Project Home: http://www.clusterlabs.org
> >> >> > Getting started:
> >> http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> >> >> >
> >> >>
> >> >
> >>
> >
> >
> 

-------------- next part --------------
A non-text attachment was scrubbed...
Name: report_21.04.2010_09.40.tar.bz2
Type: application/octet-stream
Size: 763956 bytes
Desc: not available
URL: <https://lists.clusterlabs.org/pipermail/pacemaker/attachments/20100421/845d5ec4/attachment-0003.obj>

