[Pacemaker] Pacemaker Digest, Vol 39, Issue 53

William Wells WilliamWells at winn-dixie.com
Fri Feb 18 08:23:07 EST 2011


Does this include the BladeCenter serial number?

Will


-----Original Message-----
From: pacemaker-request at oss.clusterlabs.org
[mailto:pacemaker-request at oss.clusterlabs.org] 
Sent: Thursday, February 17, 2011 10:52 AM
To: pacemaker at oss.clusterlabs.org
Subject: Pacemaker Digest, Vol 39, Issue 53

Send Pacemaker mailing list submissions to
	pacemaker at oss.clusterlabs.org

To subscribe or unsubscribe via the World Wide Web, visit
	http://oss.clusterlabs.org/mailman/listinfo/pacemaker
or, via email, send a message with subject or body 'help' to
	pacemaker-request at oss.clusterlabs.org

You can reach the person managing the list at
	pacemaker-owner at oss.clusterlabs.org

When replying, please edit your Subject line so it is more specific
than "Re: Contents of Pacemaker digest..."


Today's Topics:

   1. Re: Need help in starting resources - (Senftleben, Stefan, ITSC)
   2. Re: Need help in starting resources - (Michael Schwartzkopff)
   3. Re: Need help in starting resources - (Senftleben, Stefan, ITSC)
   4. Re: Need help in starting resources - (Michael Schwartzkopff)
   5. Re: Need help in starting resources - (Senftleben, Stefan, ITSC)
   6. Re: Need help in starting resources - (Senftleben, Stefan, ITSC)


----------------------------------------------------------------------

Message: 1
Date: Thu, 17 Feb 2011 15:37:44 +0000
From: "Senftleben, Stefan, ITSC" <Stefan.Senftleben at ITSC.de>
To: The Pacemaker cluster resource manager
	<pacemaker at oss.clusterlabs.org>
Subject: Re: [Pacemaker] Need help in starting resources -
Message-ID:
	<23D539E7B48E46408218BC46F8FC453103A29A at 00EX05.ITSCDOM.intern>
Content-Type: text/plain; charset="iso-8859-1"

Sorry, here is the configuration, but no logs are written.


Config:

crm configure show
node lxds05 \
        attributes standby="off"
node lxds07 \
        attributes standby="off"
primitive apache2 ocf:heartbeat:apache \
        params configfile="/services/etc/apache2/apache2.conf" httpd="/usr/sbin/apache2" \
        op monitor interval="15s" \
        op start interval="0" timeout="240s" \
        op stop interval="0" timeout="240s" \
        meta target-role="Started"
primitive drbd_disk ocf:linbit:drbd \
        params drbd_resource="nagios" \
        op monitor interval="20s" role="Slave" timeout="240s" \
        op monitor interval="10s" role="Master" timeout="240s"
primitive drbd_disk1 ocf:linbit:drbd \
        params drbd_resource="pnp4nagios" \
        op monitor interval="20s" role="Slave" timeout="240s" \
        op monitor interval="10s" role="Master" timeout="240s"
primitive drbd_disk2 ocf:linbit:drbd \
        params drbd_resource="services" \
        op monitor interval="20s" role="Slave" timeout="240s" \
        op monitor interval="10s" role="Master" timeout="240s"
primitive fs_drbd ocf:heartbeat:Filesystem \
        params device="/dev/drbd0" directory="/usr/local/nagios" fstype="ext3" \
        op monitor interval="15s" \
        op start interval="0" timeout="240s" \
        op stop interval="0" timeout="360s"
primitive fs_drbd1 ocf:heartbeat:Filesystem \
        params device="/dev/drbd1" directory="/usr/local/pnp4nagios" fstype="ext3" \
        op monitor interval="15s" \
        op start interval="0" timeout="240s" \
        op stop interval="0" timeout="360s"
primitive fs_drbd2 ocf:heartbeat:Filesystem \
        params device="/dev/drbd2" directory="/services/etc" fstype="ext3" \
        op monitor interval="15s" \
        op start interval="0" timeout="240s" \
        op stop interval="0" timeout="360s"
primitive ip1 ocf:heartbeat:IPaddr2 \
        params ip="192.168.138.88" nic="eth5" cidr_netmask="24" \
        op monitor interval="10s" timeout="20s" \
        meta target-role="Started"
primitive mailto ocf:heartbeat:MailTo \
        params email="GB2NagiosSysWin_Mngr at itsc.de" \
        op monitor interval="10" timeout="10" depth="0"
primitive nagios ocf:naprax:nagios \
        params configfile="/usr/local/nagios/etc/nagios.cfg" nagios="/usr/local/nagios/bin/nagios" \
        op monitor interval="15s" \
        op start interval="0" timeout="240s" \
        op stop interval="0" timeout="240s" \
        meta target-role="Started"
primitive pingd ocf:pacemaker:pingd \
        params host_list="192.168.138.2 192.168.138.139 192.168.128.105 192.168.128.164" multiplier="100" dampen="5s" \
        op monitor interval="15s" timeout="5s"
primitive syslogng ocf:heartbeat:syslog-ng \
        params configfile="/etc/syslog-ng/syslog-ng.conf" syslog_ng_binary="/usr/sbin/syslog-ng" \
        op monitor interval="10s" timeout="60s" depth="0"
primitive syslogng2mysql ocf:heartbeat:anything \
        params binfile="/usr/local/scripts/system/syslog-ng2mysql" pidfile="/var/run/syslog-ng2mysql.pid" \
        op start interval="0" timeout="20s" \
        op stop interval="0" timeout="20s" \
        op monitor interval="30s" timeout="60s" depth="0" \
        meta target-role="Stopped"
group nagios-group fs_drbd fs_drbd1 fs_drbd2 ip1 apache2 nagios mailto \
        meta target-role="Started"
ms ms_drbd drbd_disk \
        meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true" target-role="Started"
ms ms_drbd1 drbd_disk1 \
        meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
ms ms_drbd2 drbd_disk2 \
        meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
clone pingdclone pingd \
        meta globally-unique="false" target-role="Started"
clone syslogng-clone syslogng \
        meta globally-unique="false" target-role="Started"
location cli-prefer-ip1 nagios-group \
        rule $id="cli-prefer-rule-ip1" inf: #uname eq lxds07 and #uname eq lxds05
location nagios-group_on_connected_node nagios-group \
        rule $id="pingd-rule" pingd: defined pingd
colocation drbd_on_disks inf: ms_drbd ms_drbd1 ms_drbd2 nagios-group
order mount_after_drbd inf: ms_drbd:promote nagios-group:start
order mount_after_drbd1 inf: ms_drbd1:promote nagios-group:start
order mount_after_drbd2 inf: ms_drbd2:promote nagios-group:start
property $id="cib-bootstrap-options" \
        dc-version="1.0.8-042548a451fce8400660f6031f4da6f0223dd5dd" \
        cluster-infrastructure="openais" \
        expected-quorum-votes="2" \
        stonith-enabled="false" \
        no-quorum-policy="ignore"
rsc_defaults $id="rsc-options" \
        resource-stickiness="100"
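One thing worth noticing in the dump above: syslogng2mysql carries meta target-role="Stopped", so the cluster keeps it down on purpose. A quick way to spot role-pinned resources in a saved dump (editor's sketch, run here against a two-resource excerpt of the configuration above; on a node you would pipe `crm configure show` into it instead):

```shell
# Editor's sketch: list resources pinned by target-role="Stopped".
# Live usage would be:  crm configure show | grep -B2 'target-role="Stopped"'
# Here it runs against a short excerpt of the configuration quoted above.
grep -B1 'target-role="Stopped"' <<'EOF'
primitive syslogng2mysql ocf:heartbeat:anything \
        meta target-role="Stopped"
group nagios-group fs_drbd fs_drbd1 fs_drbd2 ip1 apache2 nagios mailto \
        meta target-role="Started"
EOF
```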


Kind regards

pp. Stefan Senftleben
System Administrator
Server Based Application department
____________________________________________

itsc GmbH

Rothwiese 5
30559 Hannover

Tel.: 0511-27071-227
Fax: 0511-27071-555-227
Mobil: 0163-3327043

Mail: Stefan.Senftleben at itsc.de
Internet: www.itsc.de

________________________________________________________________________
___________

- itsc GmbH -
- Managing Director: Martin Behmann, Deputy: Stefan Kreit - Registered
office: Hannover -
- Registry court: Hannover HRB 57585 - Tax number: 25/202/00736 -
VAT ID: DE 198329821 - IK 1 01 73 21 43 -
________________________________________________________________________
___________


-----Original Message-----
From: Michael Schwartzkopff [mailto:misch at clusterbau.com] 
Sent: Thursday, February 17, 2011 15:57
To: The Pacemaker cluster resource manager
Subject: Re: [Pacemaker] Need help in starting resources -

On Thursday 17 February 2011 15:48:50 Senftleben, Stefan, ITSC wrote:
> Hello,
> 
> today I switched the master in my active-passive cluster with "crm
> node standby lxds05". The resources started cleanly on lxds07.
> After the system updates and a reboot of lxds05, I tried to set
> lxds05 online with "crm node online lxds05". I tried to restart the
> corosync services, but the cluster is in the following situation. It
> seems that the resources on lxds05 were stopped. Can anyone help me,
> please?
> 
> Stefan
> 
> ============
> Last updated: Thu Feb 17 15:32:42 2011
> Stack: openais
> Current DC: lxds07 - partition with quorum
> Version: 1.0.8-042548a451fce8400660f6031f4da6f0223dd5dd
> 2 Nodes configured, 2 expected votes
> 7 Resources configured.
> ============
> 
> Online: [ lxds05 lxds07 ]
> 
>  Master/Slave Set: ms_drbd
>      Masters: [ lxds07 ]
>      Stopped: [ drbd_disk:0 ]
>  Master/Slave Set: ms_drbd1
>      Masters: [ lxds07 ]
>      Stopped: [ drbd_disk1:0 ]
>  Master/Slave Set: ms_drbd2
>      Masters: [ lxds07 ]
>      Stopped: [ drbd_disk2:1 ]
>  Resource Group: nagios-group
>      fs_drbd    (ocf::heartbeat:Filesystem):    Started lxds07
>      fs_drbd1   (ocf::heartbeat:Filesystem):    Started lxds07
>      fs_drbd2   (ocf::heartbeat:Filesystem):    Started lxds07
>      ip1        (ocf::heartbeat:IPaddr2):       Started lxds07
>      apache2    (ocf::heartbeat:apache):        Started lxds07
>      nagios     (ocf::naprax:nagios):   Started lxds07
>      mailto     (ocf::heartbeat:MailTo):        Started lxds07
>  Clone Set: pingdclone
>      Started: [ lxds07 ]
>      Stopped: [ pingd:0 ]
>  Clone Set: syslogng-clone
>      Started: [ lxds07 ]
>      Stopped: [ syslogng:0 ]
> 
> Migration summary:
> * Node lxds05:
> * Node lxds07:  pingd=400

Config? Logs?

--
Dr. Michael Schwartzkopff
Guardinistr. 63
81375 München

Tel: (0163) 172 50 98




------------------------------

Message: 2
Date: Thu, 17 Feb 2011 16:41:36 +0100
From: Michael Schwartzkopff <misch at clusterbau.com>
To: The Pacemaker cluster resource manager
	<pacemaker at oss.clusterlabs.org>
Subject: Re: [Pacemaker] Need help in starting resources -
Message-ID: <201102171641.36697.misch at clusterbau.com>
Content-Type: text/plain; charset="iso-8859-1"

On Thursday 17 February 2011 16:37:44 Senftleben, Stefan, ITSC wrote:
> Sorry, here is the configuration, but no logs are written.


Hard to believe, but let's try:

What logs are written if you enter

crm resource cleanup ms_drbd


-- 
Dr. Michael Schwartzkopff
Guardinistr. 63
81375 München

Tel: (0163) 172 50 98
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 198 bytes
Desc: This is a digitally signed message part.
URL: <http://oss.clusterlabs.org/pipermail/pacemaker/attachments/20110217/9cc80870/attachment-0001.sig>

------------------------------

Message: 3
Date: Thu, 17 Feb 2011 15:42:45 +0000
From: "Senftleben, Stefan, ITSC" <Stefan.Senftleben at ITSC.de>
To: The Pacemaker cluster resource manager
	<pacemaker at oss.clusterlabs.org>
Subject: Re: [Pacemaker] Need help in starting resources -
Message-ID:
	<23D539E7B48E46408218BC46F8FC453103A2CF at 00EX05.ITSCDOM.intern>
Content-Type: text/plain; charset="us-ascii"

How do I start a resource on the second node, like the
Slave-DRBD-Resource?

It seems to me that the CIB has marked all resources on lxds05 as
stopped, because the process of moving the resources to the other node
was interrupted.

Online: [ lxds05 lxds07 ]

 Master/Slave Set: ms_drbd
     Masters: [ lxds07 ]
     Stopped: [ drbd_disk:0 ]




------------------------------

Message: 4
Date: Thu, 17 Feb 2011 16:45:01 +0100
From: Michael Schwartzkopff <misch at clusterbau.com>
To: The Pacemaker cluster resource manager
	<pacemaker at oss.clusterlabs.org>
Subject: Re: [Pacemaker] Need help in starting resources -
Message-ID: <201102171645.02272.misch at clusterbau.com>
Content-Type: text/plain; charset="iso-8859-1"

On Thursday 17 February 2011 16:42:45 Senftleben, Stefan, ITSC wrote:
> How do I start a resource on the second node, like the
> Slave-DRBD-Resource?
>
> It seems to me that the cib has marked all resources on lxds05 as
> stopped, because of the interrupted process of pushing the resources
> to the other node.
> 
> Online: [ lxds05 lxds07 ]
> 
>  Master/Slave Set: ms_drbd
>      Masters: [ lxds07 ]
>      Stopped: [ drbd_disk:0 ]
> 
> 
> _______________________________________________
> Pacemaker mailing list: Pacemaker at oss.clusterlabs.org
> http://oss.clusterlabs.org/mailman/listinfo/pacemaker
> 
> Project Home: http://www.clusterlabs.org
> Getting started:
> http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs:
> http://developerbugs.linux-foundation.org/enter_bug.cgi?product=Pacemaker

What is the state of DRBD?

cat /proc/drbd

-- 
Dr. Michael Schwartzkopff
Guardinistr. 63
81375 München

Tel: (0163) 172 50 98
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 198 bytes
Desc: This is a digitally signed message part.
URL: <http://oss.clusterlabs.org/pipermail/pacemaker/attachments/20110217/32cbed0c/attachment-0001.sig>

------------------------------

Message: 5
Date: Thu, 17 Feb 2011 15:50:32 +0000
From: "Senftleben, Stefan, ITSC" <Stefan.Senftleben at ITSC.de>
To: The Pacemaker cluster resource manager
	<pacemaker at oss.clusterlabs.org>
Subject: Re: [Pacemaker] Need help in starting resources -
Message-ID:
	<23D539E7B48E46408218BC46F8FC453103A2F5 at 00EX05.ITSCDOM.intern>
Content-Type: text/plain; charset="iso-8859-1"

root at lxds05:~# crm
crm(live)# resource
crm(live)resource# cleanup
usage: cleanup <rsc> [<node>]
crm(live)resource# cleanup ms_drbd
Cleaning up drbd_disk:0 on lxds05
Cleaning up drbd_disk:1 on lxds05
Cleaning up drbd_disk:0 on lxds07
Cleaning up drbd_disk:1 on lxds07


root at lxds07:~# cat /proc/drbd
version: 8.3.7 (api:88/proto:86-91)
GIT-hash: ea9e28dbff98e331a62bcbcc63a6135808fe2917 build by root at lxds07,
2010-11-15 16:58:34
 0: cs:WFConnection ro:Primary/Unknown ds:UpToDate/DUnknown C r----
    ns:89420 nr:12748380 dw:13728500 dr:241233 al:76 bm:45 lo:0 pe:0
ua:0 ap:0 ep:1 wo:b oos:91228
 1: cs:WFConnection ro:Primary/Unknown ds:UpToDate/DUnknown C r----
    ns:186260 nr:91457012 dw:97298760 dr:2059717 al:192939 bm:187047
lo:10 pe:0 ua:0 ap:9 ep:1 wo:b oos:891104
 2: cs:WFConnection ro:Primary/Unknown ds:UpToDate/DUnknown C r----
    ns:2928 nr:927992 dw:998844 dr:7001 al:50 bm:14 lo:0 pe:0 ua:0 ap:0
ep:1 wo:b oos:52140
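All three devices on lxds07 sit in cs:WFConnection with ro:Primary/Unknown, i.e. the node is waiting for its peer's DRBD to reappear after the reboot. A throwaway filter for spotting that at a glance (editor's sketch; `check_drbd` is a made-up helper name, and the heredoc feeds it the excerpt above instead of a live /proc/drbd):

```shell
# Editor's sketch: flag DRBD devices that are not in the Connected state.
# On a live node you would feed it /proc/drbd; here it runs against the
# excerpt pasted above.
check_drbd() {
    awk '/^ *[0-9]+: cs:/ {
        dev = $1;   sub(":", "", dev)       # " 0:" -> "0"
        state = $2; sub("cs:", "", state)   # "cs:WFConnection" -> "WFConnection"
        if (state != "Connected")
            printf "device %s: %s (peer not reachable yet)\n", dev, state
    }'
}

check_drbd <<'EOF'
 0: cs:WFConnection ro:Primary/Unknown ds:UpToDate/DUnknown C r----
 1: cs:WFConnection ro:Primary/Unknown ds:UpToDate/DUnknown C r----
 2: cs:WFConnection ro:Primary/Unknown ds:UpToDate/DUnknown C r----
EOF
```

If every device stays in WFConnection like this, the usual next step (an assumption about this setup, not something stated in the thread) is to verify that DRBD is actually running on lxds05 and that the replication network between the nodes is up.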




-----Original Message-----
From: Michael Schwartzkopff [mailto:misch at clusterbau.com] 
Sent: Thursday, February 17, 2011 16:42
To: The Pacemaker cluster resource manager
Subject: Re: [Pacemaker] Need help in starting resources -

On Thursday 17 February 2011 16:37:44 Senftleben, Stefan, ITSC wrote:
> Sorry, here is the configuration, but no logs are written.


Hard to believe, but let's try:

What logs are written if you enter

crm resource cleanup ms_drbd


-- 
Dr. Michael Schwartzkopff
Guardinistr. 63
81375 München

Tel: (0163) 172 50 98




------------------------------

Message: 6
Date: Thu, 17 Feb 2011 15:52:17 +0000
From: "Senftleben, Stefan, ITSC" <Stefan.Senftleben at ITSC.de>
To: The Pacemaker cluster resource manager
	<pacemaker at oss.clusterlabs.org>
Subject: Re: [Pacemaker] Need help in starting resources -
Message-ID:
	<23D539E7B48E46408218BC46F8FC453103A30D at 00EX05.ITSCDOM.intern>
Content-Type: text/plain; charset="us-ascii"

Okay, you are right. In the syslog of lxds07 I did a grep on "crmd" and
"lrmd":

Feb 17 15:32:42 lxds07 crmd: [1083]: info: do_pe_invoke: Waiting for the
PE to connect
Feb 17 16:23:50 lxds07 crmd: [1083]: info: handle_shutdown_request:
Creating shutdown request for lxds05 (state=S_POLICY_ENGINE)
Feb 17 16:23:50 lxds07 crmd: [1083]: info: do_pe_invoke: Waiting for the
PE to connect
Feb 17 16:23:50 lxds07 crmd: [1083]: info: abort_transition_graph:
te_update_diff:146 - Triggered transition abort (complete=1,
tag=transient_attributes, id=lxds05, magic=NA, cib=0.3590.9) : Transient
attribute: update
Feb 17 16:23:50 lxds07 crmd: [1083]: info: do_pe_invoke: Waiting for the
PE to connect
Feb 17 16:24:53 lxds07 crmd: [1083]: notice: ais_dispatch: Membership
2484: quorum lost
Feb 17 16:24:53 lxds07 crmd: [1083]: info: ais_status_callback: status:
lxds05 is now lost (was member)
Feb 17 16:24:53 lxds07 crmd: [1083]: info: crm_update_peer: Node lxds05:
id=1821026496 state=lost (new) addr=r(0) ip(192.168.138.108)  votes=1
born=2480 seen=2480 proc=00000000000000000000000000013312
Feb 17 16:24:53 lxds07 crmd: [1083]: info: erase_node_from_join: Removed
node lxds05 from join calculations: welcomed=0 itegrated=0 finalized=0
confirmed=1
Feb 17 16:24:53 lxds07 crmd: [1083]: info: crm_update_quorum: Updating
quorum status to false (call=209)
Feb 17 16:24:53 lxds07 cib: [1079]: info: cib_process_request: Operation
complete: op cib_modify for section nodes (origin=local/crmd/207,
version=0.3590.9): ok (rc=0)
Feb 17 16:24:53 lxds07 cib: [1079]: info: cib_process_request: Operation
complete: op cib_modify for section cib (origin=local/crmd/209,
version=0.3591.1): ok (rc=0)
Feb 17 16:24:53 lxds07 crmd: [1083]: info: crm_ais_dispatch: Setting
expected votes to 2
Feb 17 16:24:53 lxds07 crmd: [1083]: info: do_pe_invoke: Waiting for the
PE to connect
Feb 17 16:24:53 lxds07 crmd: [1083]: WARN: match_down_event: No match
for shutdown action on lxds05
Feb 17 16:24:53 lxds07 crmd: [1083]: info: te_update_diff:
Stonith/shutdown of lxds05 not matched
Feb 17 16:24:53 lxds07 crmd: [1083]: info: abort_transition_graph:
te_update_diff:191 - Triggered transition abort (complete=1,
tag=node_state, id=lxds05, magic=NA, cib=0.3590.10) : Node failure
Feb 17 16:24:53 lxds07 crmd: [1083]: info: abort_transition_graph:
need_abort:59 - Triggered transition abort (complete=1) : Non-status
change
Feb 17 16:24:53 lxds07 crmd: [1083]: info: need_abort: Aborting on
change to have-quorum
Feb 17 16:24:53 lxds07 crmd: [1083]: info: do_pe_invoke: Waiting for the
PE to connect
Feb 17 16:24:53 lxds07 crmd: [1083]: WARN: register_fsa_input_adv:
do_pe_invoke stalled the FSA with pending inputs
Feb 17 16:24:53 lxds07 cib: [1079]: info: cib_process_request: Operation
complete: op cib_modify for section crm_config (origin=local/crmd/211,
version=0.3591.1): ok (rc=0)
Feb 17 16:32:10 lxds07 crmd: [1083]: notice: ais_dispatch: Membership
2488: quorum acquired
Feb 17 16:32:10 lxds07 crmd: [1083]: info: ais_status_callback: status:
lxds05 is now member (was lost)
Feb 17 16:32:10 lxds07 crmd: [1083]: info: crm_update_peer: Node lxds05:
id=1821026496 state=member (new) addr=r(0) ip(192.168.138.108)  votes=1
born=2480 seen=2488 proc=00000000000000000000000000013312
Feb 17 16:32:10 lxds07 crmd: [1083]: info: crm_update_quorum: Updating
quorum status to true (call=216)
Feb 17 16:32:10 lxds07 cib: [1079]: info: cib_process_request: Operation
complete: op cib_delete for section //node_state[@uname='lxds05']/lrm
(origin=local/crmd/212, version=0.3591.2): ok (rc=0)
Feb 17 16:32:10 lxds07 cib: [1079]: info: cib_process_request: Operation
complete: op cib_delete for section
//node_state[@uname='lxds05']/transient_attributes
(origin=local/crmd/213, version=0.3591.3): ok (rc=0)
Feb 17 16:32:10 lxds07 cib: [1079]: info: cib_process_request: Operation
complete: op cib_modify for section nodes (origin=local/crmd/214,
version=0.3591.3): ok (rc=0)
Feb 17 16:32:10 lxds07 cib: [1079]: info: cib_process_request: Operation
complete: op cib_modify for section cib (origin=local/crmd/216,
version=0.3592.1): ok (rc=0)
Feb 17 16:32:10 lxds07 crmd: [1083]: info: crm_ais_dispatch: Setting
expected votes to 2
Feb 17 16:32:10 lxds07 crmd: [1083]: info: do_pe_invoke: Waiting for the
PE to connect
Feb 17 16:32:10 lxds07 crmd: [1083]: info: erase_xpath_callback:
Deletion of "//node_state[@uname='lxds05']/lrm": ok (rc=0)
Feb 17 16:32:10 lxds07 crmd: [1083]: info: abort_transition_graph:
te_update_diff:157 - Triggered transition abort (complete=1,
tag=transient_attributes, id=lxds05, magic=NA, cib=0.3591.3) : Transient
attribute: removal
Feb 17 16:32:10 lxds07 crmd: [1083]: info: erase_xpath_callback:
Deletion of "//node_state[@uname='lxds05']/transient_attributes": ok
(rc=0)
Feb 17 16:32:10 lxds07 crmd: [1083]: info: abort_transition_graph:
need_abort:59 - Triggered transition abort (complete=1) : Non-status
change
Feb 17 16:32:10 lxds07 crmd: [1083]: info: need_abort: Aborting on
change to have-quorum
Feb 17 16:32:10 lxds07 crmd: [1083]: info: ais_dispatch: Membership
2488: quorum retained
Feb 17 16:32:10 lxds07 cib: [1079]: info: cib_process_request: Operation
complete: op cib_modify for section crm_config (origin=local/crmd/218,
version=0.3592.1): ok (rc=0)
Feb 17 16:32:10 lxds07 cib: [1079]: info: cib_process_request: Operation
complete: op cib_modify for section nodes (origin=local/crmd/219,
version=0.3592.1): ok (rc=0)
Feb 17 16:32:10 lxds07 crmd: [1083]: info: crm_ais_dispatch: Setting
expected votes to 2
Feb 17 16:32:10 lxds07 crmd: [1083]: info: do_pe_invoke: Waiting for the
PE to connect
Feb 17 16:32:10 lxds07 crmd: [1083]: WARN: register_fsa_input_adv:
do_pe_invoke stalled the FSA with pending inputs
Feb 17 16:32:10 lxds07 crmd: [1083]: info: do_pe_invoke: Waiting for the
PE to connect
Feb 17 16:32:10 lxds07 crmd: [1083]: info: do_pe_invoke: Waiting for the
PE to connect
Feb 17 16:32:10 lxds07 cib: [1079]: info: cib_process_request: Operation
complete: op cib_modify for section crm_config (origin=local/crmd/222,
version=0.3592.1): ok (rc=0)
Feb 17 16:32:11 lxds07 crmd: [1083]: info: do_pe_invoke: Waiting for the
PE to connect
Feb 17 16:32:12 lxds07 crmd: [1083]: info: do_state_transition: State
transition S_POLICY_ENGINE -> S_INTEGRATION [ input=I_NODE_JOIN
cause=C_HA_MESSAGE origin=route_message ]
Feb 17 16:32:12 lxds07 crmd: [1083]: info: update_dc: Unset DC lxds07
Feb 17 16:32:12 lxds07 crmd: [1083]: info: do_dc_join_offer_all: A new
node joined the cluster
Feb 17 16:32:12 lxds07 crmd: [1083]: info: join_make_offer: Making join
offers based on membership 2488
Feb 17 16:32:12 lxds07 crmd: [1083]: info: do_dc_join_offer_all: join-6:
Waiting on 2 outstanding join acks
Feb 17 16:32:12 lxds07 crmd: [1083]: info: do_pe_invoke: Waiting for the
PE to connect
Feb 17 16:32:12 lxds07 crmd: [1083]: info: update_dc: Set DC to lxds07
(3.0.1)
Feb 17 16:32:12 lxds07 crmd: [1083]: info: do_pe_invoke: Waiting for the
PE to connect
Feb 17 16:32:12 lxds07 crmd: [1083]: info: do_pe_invoke: Waiting for the
PE to connect
Feb 17 16:32:12 lxds07 crmd: [1083]: info: do_pe_invoke: Waiting for the
PE to connect
Feb 17 16:32:12 lxds07 crmd: [1083]: info: do_pe_invoke: Waiting for the
PE to connect
Feb 17 16:32:12 lxds07 crmd: [1083]: WARN: register_fsa_input_adv:
do_pe_invoke stalled the FSA with pending inputs
Feb 17 16:32:12 lxds07 crmd: [1083]: info: do_state_transition: State
transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED
cause=C_FSA_INTERNAL origin=check_join_state ]
Feb 17 16:32:12 lxds07 crmd: [1083]: info: do_state_transition: All 2
cluster nodes responded to the join offer.
Feb 17 16:32:12 lxds07 crmd: [1083]: info: do_dc_join_finalize: join-6:
Syncing the CIB from lxds07 to the rest of the cluster
Feb 17 16:32:12 lxds07 crmd: [1083]: info: do_pe_invoke: Waiting for the
PE to connect
Feb 17 16:32:12 lxds07 cib: [1079]: info: cib_process_request: Operation
complete: op cib_sync for section 'all' (origin=local/crmd/224,
version=0.3592.1): ok (rc=0)
Feb 17 16:32:12 lxds07 cib: [1079]: info: cib_process_request: Operation
complete: op cib_modify for section nodes (origin=local/crmd/225,
version=0.3592.1): ok (rc=0)
Feb 17 16:32:12 lxds07 cib: [1079]: info: cib_process_request: Operation
complete: op cib_modify for section nodes (origin=local/crmd/226,
version=0.3592.1): ok (rc=0)
Feb 17 16:32:12 lxds07 cib: [1079]: info: cib_process_request: Operation
complete: op cib_delete for section
//node_state[@uname='lxds05']/transient_attributes
(origin=lxds05/crmd/6, version=0.3592.1): ok (rc=0)
Feb 17 16:32:12 lxds07 crmd: [1083]: info: do_pe_invoke: Waiting for the
PE to connect
Feb 17 16:32:12 lxds07 crmd: [1083]: info: do_dc_join_ack: join-6:
Updating node state to member for lxds05
Feb 17 16:32:12 lxds07 crmd: [1083]: info: do_pe_invoke: Waiting for the
PE to connect
Feb 17 16:32:12 lxds07 crmd: [1083]: info: do_dc_join_ack: join-6:
Updating node state to member for lxds07
Feb 17 16:32:12 lxds07 crmd: [1083]: info: do_pe_invoke: Waiting for the
PE to connect
Feb 17 16:32:12 lxds07 cib: [1079]: info: cib_process_request: Operation
complete: op cib_delete for section //node_state[@uname='lxds05']/lrm
(origin=local/crmd/227, version=0.3592.1): ok (rc=0)
Feb 17 16:32:12 lxds07 crmd: [1083]: info: erase_xpath_callback:
Deletion of "//node_state[@uname='lxds05']/lrm": ok (rc=0)
Feb 17 16:32:12 lxds07 crmd: [1083]: info: do_state_transition: State
transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED
cause=C_FSA_INTERNAL origin=check_join_state ]
Feb 17 16:32:12 lxds07 crmd: [1083]: info: do_state_transition: All 2
cluster nodes are eligible to run resources.
Feb 17 16:32:12 lxds07 crmd: [1083]: info: do_dc_join_final: Ensuring
DC, quorum and node attributes are up-to-date
Feb 17 16:32:12 lxds07 crmd: [1083]: info: crm_update_quorum: Updating
quorum status to true (call=233)
Feb 17 16:32:12 lxds07 crmd: [1083]: info: abort_transition_graph:
do_te_invoke:191 - Triggered transition abort (complete=1) : Peer
Cancelled
Feb 17 16:32:12 lxds07 attrd: [1081]: info: attrd_local_callback:
Sending full refresh (origin=crmd)
Feb 17 16:32:12 lxds07 crmd: [1083]: info: do_pe_invoke: Waiting for the
PE to connect
Feb 17 16:32:12 lxds07 crmd: [1083]: WARN: register_fsa_input_adv:
do_pe_invoke stalled the FSA with pending inputs
Feb 17 16:32:12 lxds07 crmd: [1083]: info: do_pe_invoke: Waiting for the
PE to connect
Feb 17 16:32:12 lxds07 crmd: [1083]: info: abort_transition_graph:
te_update_diff:267 - Triggered transition abort (complete=1,
tag=lrm_rsc_op, id=apache2_monitor_0,
magic=0:7;24:3:7:5f4f5692-97ac-43d6-8be7-15d1fa0f67d8, cib=0.3592.3) :
Resource op removal
Feb 17 16:32:12 lxds07 crmd: [1083]: info: do_pe_invoke: Waiting for the
PE to connect
Feb 17 16:32:12 lxds07 cib: [1079]: info: cib_process_request: Operation
complete: op cib_delete for section //node_state[@uname='lxds07']/lrm
(origin=local/crmd/229, version=0.3592.3): ok (rc=0)
Feb 17 16:32:12 lxds07 crmd: [1083]: info: erase_xpath_callback:
Deletion of "//node_state[@uname='lxds07']/lrm": ok (rc=0)
Feb 17 16:32:12 lxds07 crmd: [1083]: info: te_update_diff: Detected LRM
refresh - 13 resources updated: Skipping all resource events
Feb 17 16:32:12 lxds07 crmd: [1083]: info: abort_transition_graph:
te_update_diff:227 - Triggered transition abort (complete=1, tag=diff,
id=(null), magic=NA, cib=0.3592.4) : LRM Refresh
Feb 17 16:32:12 lxds07 crmd: [1083]: info: do_pe_invoke: Waiting for the
PE to connect
Feb 17 16:32:12 lxds07 cib: [1079]: info: cib_process_request: Operation
complete: op cib_modify for section nodes (origin=local/crmd/231,
version=0.3592.4): ok (rc=0)
Feb 17 16:32:12 lxds07 cib: [1079]: info: cib_process_request: Operation
complete: op cib_modify for section cib (origin=local/crmd/233,
version=0.3592.4): ok (rc=0)
Feb 17 16:35:46 lxds07 crmd: [1083]: info: do_pe_invoke: Waiting for the
PE to connect
Feb 17 16:35:46 lxds07 crmd: [1083]: info: do_pe_invoke: Waiting for the
PE to connect
Feb 17 16:35:46 lxds07 crmd: [1083]: info: do_pe_invoke: Waiting for the
PE to connect
Feb 17 16:35:46 lxds07 crmd: [1083]: info: abort_transition_graph:
te_update_diff:146 - Triggered transition abort (complete=1,
tag=transient_attributes, id=lxds05, magic=NA, cib=0.3592.5) : Transient
attribute: update
Feb 17 16:35:46 lxds07 crmd: [1083]: info: do_pe_invoke: Waiting for the
PE to connect


lrmd:

Feb 17 15:34:49 lxds07 lrmd: [1080]: info: rsc:syslogng:1:27: monitor
Feb 17 16:03:12 lxds07 lrmd: [1080]: info: rsc:drbd_disk1:1:51: monitor
Feb 17 16:03:15 lxds07 lrmd: [1080]: info: rsc:fs_drbd2:61: monitor
Feb 17 16:03:15 lxds07 lrmd: [1080]: info: rsc:fs_drbd1:59: monitor
Feb 17 16:03:15 lxds07 lrmd: [1080]: info: rsc:fs_drbd:57: monitor
Feb 17 16:03:19 lxds07 lrmd: [1080]: info: rsc:apache2:65: monitor
Feb 17 16:03:21 lxds07 lrmd: [1080]: info: rsc:drbd_disk:1:50: monitor
Feb 17 16:03:23 lxds07 lrmd: [1080]: info: rsc:drbd_disk2:0:55: monitor
Feb 17 16:03:23 lxds07 lrmd: [1080]: info: rsc:ip1:63: monitor
Feb 17 16:03:33 lxds07 lrmd: [1080]: info: rsc:mailto:69: monitor
Feb 17 16:03:34 lxds07 lrmd: [1080]: info: rsc:nagios:67: monitor
Feb 17 16:32:41 lxds07 lrmd: [1080]: info: rsc:pingd:1:20: monitor
Feb 17 16:34:57 lxds07 lrmd: [1080]: info: rsc:syslogng:1:27: monitor
Feb 17 16:49:02 lxds07 crmd: [1083]: WARN: log_data_element:
do_lrm_invoke: bad input <create_request_adv origin="send_lrm_rsc_op"
t="crmd" version="3.0.1" subt="request"
reference="lrm_delete-crm_resource-1297957742-1" crm_task="lrm_delete"
crm_sys_to="lrmd" crm_sys_from="1758_crm_resource" crm_host_to="lxds07"
src="lxds05" seq="6" >
Feb 17 16:49:02 lxds07 crmd: [1083]: info: send_direct_ack: ACK'ing
resource op drbd_disk:0_delete_60000 from 0:0:crm-resource-1758:
lrm_invoke-lrmd-1297957742-54
Feb 17 16:49:03 lxds07 crmd: [1083]: info: send_direct_ack: ACK'ing
resource op drbd_disk:1_delete_60000 from 0:0:crm-resource-1760:
lrm_invoke-lrmd-1297957743-55
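Most of the crmd output above is info-level repetition; the two WARN entries (match_down_event finding no shutdown to match for lxds05, and the rejected lrm_delete request) are the interesting part. A one-liner to filter for them (editor's sketch, run here against two lines quoted from the log above; on the node you would grep the syslog itself):

```shell
# Editor's sketch: keep only warning/error entries from crmd/lrmd output.
# Live usage would be:
#   grep -E 'crmd|lrmd' /var/log/syslog | grep -E 'WARN|ERROR'
grep -E 'WARN|ERROR' <<'EOF'
Feb 17 16:24:53 lxds07 crmd: [1083]: WARN: match_down_event: No match for shutdown action on lxds05
Feb 17 16:32:41 lxds07 lrmd: [1080]: info: rsc:pingd:1:20: monitor
EOF
```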




------------------------------

_______________________________________________
Pacemaker mailing list
Pacemaker at oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker


End of Pacemaker Digest, Vol 39, Issue 53
*****************************************
