[Pacemaker] Pacemaker Digest, Vol 45, Issue 55
leopoldo tosi
leopoldotosi at yahoo.it
Tue Aug 30 12:39:53 UTC 2011
Hi Shravan Mishra,
I've found what was wrong in my configuration:
it was caused by the Apache SSL module being included from httpd.conf and httpd-std.conf.
After commenting those includes out, everything works.
I don't know why the ocf:heartbeat:apache resource agent tries to load it for server-status.
Thanks a lot for the help.
Bye, poldolo
/usr/local/apache2/conf/httpd.conf
#<IfModule mod_ssl.c>
# Include conf/ssl.conf
#</IfModule>
/usr/local/apache2/conf/httpd-std.conf
#
# <IfModule mod_ssl.c>
# Include conf/ssl.conf
# </IfModule>
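The reason commenting these out helps shows up in the traces below: the resource agent flattens Include directives and takes the first Listen value it finds, so with ssl.conf included it derives port 443 instead of 80. A self-contained sketch of that behavior (temporary files and simplified parsing are mine; the real agent uses a fuller awk program, visible in the stop trace further down):

```shell
#!/bin/sh
# Sketch: why the agent picked port 443 while the mod_ssl include was active.
# The parsing here is deliberately simplified (one level of Include, paths
# relative to the conf dir); the actual agent handles more cases.
tmpdir=$(mktemp -d)
cat > "$tmpdir/ssl.conf" <<'EOF'
Listen 443
EOF
cat > "$tmpdir/httpd.conf" <<'EOF'
ServerRoot "/usr/local/apache2"
Include ssl.conf
Listen 80
EOF
# Flatten includes, then take the first Listen directive:
port=$(cd "$tmpdir" && while read -r key val; do
    case "$key" in
        Include) cat "$val" ;;      # inline the included file
        *)       echo "$key $val" ;;
    esac
done < httpd.conf | awk '/^Listen/ { print $2; exit }')
echo "$port"   # 443: the SSL port wins; with the include commented out, 80
rm -rf "$tmpdir"
```

With the `Include conf/ssl.conf` lines commented out, the first Listen the agent sees is the plain-HTTP one, and the server-status check targets the right port.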
2531:Aug 30 13:34:01 server01 lrmd: [1656]: info: RA output: (apa2:start:stderr) + eval ServerRoot="/usr/local/apache2" PidFile=logs/httpd.pid Listen=443
2532:+ ServerRoot=/usr/local/apache2 PidFile=logs/httpd.pid Listen=443
2547:Aug 30 13:34:01 server01 lrmd: [1656]: info: RA output: (apa2:start:stderr) + eval ServerRoot="/usr/local/apache2" PidFile=logs/httpd.pid Listen=443#012+ ServerRoot=/usr/local/apache2 PidFile=logs/httpd.pid Listen=443#012+ PidFile=/usr/local/apache2/logs/httpd.pid#012+ CheckPort #012+ ocf_is_decimal #012+ false#012+ CheckPort #012+ ocf_is_decimal #012+ false#012+ CheckPort 80#012+ ocf_is_decimal 80#012+ true#012+ [ 80 -gt 0 ]#012+ PORT=80#012+ break
2561:Aug 30 13:34:01 server01 lrmd: [1656]: info: RA output: (apa2:start:stderr) 443
2562:Aug 30 13:34:01 server01 lrmd: [1656]: info: RA output: (apa2:start:stderr) 443
2566:Aug 30 13:34:01 server01 lrmd: [1656]: info: RA output: (apa2:start:stderr) + Listen=localhost:443
2569:Aug 30 13:34:01 server01 lrmd: [1656]: info: RA output: (apa2:start:stderr) + Listen=localhost:443#012+ [ X = X ]
2669:Aug 30 13:34:02 server01 lrmd: [1656]: info: RA output: (apa2:start:stderr) xlocalhost:443
2670:Aug 30 13:34:02 server01 lrmd: [1656]: info: RA output: (apa2:start:stderr) xlocalhost:443
2677:Aug 30 13:34:02 server01 lrmd: [1656]: info: RA output: (apa2:start:stderr) #012+ echo http://localhost:443
2679:+ echo http://localhost:443
2681:Aug 30 13:34:02 server01 lrmd: [1656]: info: RA output: (apa2:start:stderr) + STATUSURL=http://localhost:443/server-status#012+ test /usr/local/apache2/logs/httpd.pid#012+ : OK#012+ start_apache#012+ silent_status#012+ [ -f /usr/local/apache2/logs/httpd.pid ]#012+ : No pid file#012+ false#012+ ocf_run /usr/sbin/httpd -DSTATUS -f /usr/local/apache2/conf/httpd.conf
2682:Aug 30 13:34:02 server01 lrmd: [1656]: info: RA output: (apa2:start:stderr) + STATUSURL=http://localhost:443/server-status
2919:Aug 30 13:34:03 server01 lrmd: [1656]: info: RA output: (apa2:start:stderr) echo wget#012+ ourhttpclient=wget#012+ monitor_apache_basic#012+ [ -z http://localhost:443/server-status ]#012+ [ -z wget ]
2923:+ [ -z http://localhost:443/server-status ]
2934:Aug 30 13:34:03 server01 lrmd: [1656]: info: RA output: (apa2:start:stderr) http://localhost:443/server-status
2935:Aug 30 13:34:03 server01 lrmd: [1656]: info: RA output: (apa2:start:stderr) http://localhost:443/server-status
2943:Aug 30 13:34:03 server01 lrmd: [1656]: info: RA output: (apa2:start:stderr) + auth=#012+ cl_opts=-O- -q -L --no-proxy --bind-address=127.0.0.1 #012+ [ x != x ]#012+ wget -O- -q -L --no-proxy --bind-address=127.0.0.1 http://localhost:443/server-status
2947:+ wget -O- -q -L --no-proxy --bind-address=127.0.0.1 http://localhost:443/server-status
3393:Aug 30 13:34:05 server01 lrmd: [1656]: info: RA output: (apa2:stop:stderr) + eval ServerRoot="/usr/local/apache2" PidFile=logs/httpd.pid Listen=443
7661:Aug 30 13:39:50 server01 lrmd: [1656]: info: RA output: (apa2:stop:stderr) echo 443
7663:Aug 30 13:39:50 server01 lrmd: [1656]: info: RA output: (apa2:stop:stderr) + Listen=localhost:443
7727:Aug 30 13:39:50 server01 lrmd: [1656]: info: RA output: (apa2:stop:stderr) + Listen=localhost:443#012+ [ X = X ]#012+ FindLocationForHandler /usr/local/apache2/conf/httpd.conf server-status#012+ PerlScript=while (<>) {#012#011/<Location "?([^ >"]+)/i && ($loc=$1);#012#011/SetHandler +server-status/i && print "$loc\n"; #012 }#012+ tail -1#012+ perl -e while (<>) {#012#011/<Location "?([^ >"]+)/i && ($loc=$1);#012#011/SetHandler +server-status/i && print "$loc\n"; #012 }#012+ apachecat /usr/local/apache2/conf/httpd.conf#012+ awk #012#011function procline() {#012#011#011split($0,a);#012#011#011if( a[1]~/^[Ii]nclude$/ ) {#012#011#011#011procinclude(a[2]);#012#011#011} else {#012#011#011#011if( a[1]=="ServerRoot" ) {#012#011#011#011#011rootdir=a[2];#012#011#011#011#011gsub("\"","",rootdir);#012#011#011#011}#012#011#011#011print;#012#011#011}#012#011}#012#011function printfile(infile, a) {#012#011#011while( (getline<infile) > 0 )
{#012#011#011#011procline();#012#011#011}#012#011#011close(infile);#012#011}#012#011function allfiles(dir, cmd,f) {#012#011#011cmd="find -L "dir" -type f";#012#011#011while( ( cmd | getline f ) > 0 ) {#012#011#011#011printfile(f);#012#011#011}#012#011#011close(cmd);#012#011}#012#011function listfiles(pattern, cmd,f) {#012#011#011cmd="ls "pattern" 2>/dev/null";#012#011#011while( ( cmd | getline f ) > 0 ) {#012#011#011#011printfile(f);#012#011#011}#012#011#011close(cmd);#012#011}#012#011function procinclude(spec) {#012#011#011if( rootdir!="" && spec!~/^\// ) {#012#011#011#011spec=rootdir"/"spec;#012#011#011}#012#011#011if( isdir(spec) ) {#012#011#011#011allfiles(spec); # read all files in a directory (and subdirs)#012#011#011} else {#012#011#011#011listfiles(spec); # there could be jokers#012#011#011}#012#011}#012#011function isdir(s) {#012#011#011return !system("test -d \""s"\"");#012#011}#012#011{ procline(); }#012#011
/usr/local/apache2/conf/httpd.conf#012+ sed s/#.*//;s/[[:blank:]]*$//;s/^[[:blank:]]*//#012+ grep -v ^$
7733:Aug 30 13:39:50 server01 lrmd: [1656]: info: RA output: (apa2:stop:stderr) buildlocalurl#012+ [ xlocalhost:443 != x ]#012+ echo http://localhost:443
7735:+ [ xlocalhost:443 != x ]
7736:+ echo http://localhost:443
7738:Aug 30 13:39:50 server01 lrmd: [1656]: info: RA output: (apa2:stop:stderr) + STATUSURL=http://localhost:443/server-status
7745:Aug 30 13:39:50 server01 lrmd: [1656]: info: RA output: (apa2:stop:stderr) + STATUSURL=http://localhost:443/server-status#012+ test /usr/local/apache2/logs/httpd.pid#012+ : OK#012+ stop_apache#012+ silent_status#012+ [ -f /usr/local/apache2/logs/httpd.pid ]
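In the trace above (#012 is syslog's escape for an embedded newline, #011 for a tab), the agent ends up with STATUSURL=http://localhost:443/server-status: a plain-HTTP URL aimed at the SSL port it picked up from ssl.conf. A minimal sketch of how that URL is assembled (simplified from the agent's buildlocalurl step; the Listen value mirrors the trace):

```shell
#!/bin/sh
# Sketch of the status-URL construction seen in the trace above.
listen="localhost:443"                      # value the agent derived from ssl.conf
statusurl="http://${listen}/server-status"  # note: http scheme, but SSL port
echo "$statusurl"
# The monitor then effectively runs, on the node:
#   wget -O- -q -L --no-proxy --bind-address=127.0.0.1 "$statusurl"
# which cannot succeed: the listener on 443 expects TLS, not plain HTTP.
```

That scheme/port mismatch is why the wget monitor call failed until the SSL include was commented out.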
leopoldo tosi
--- On Sun, 28/8/11, pacemaker-request at oss.clusterlabs.org <pacemaker-request at oss.clusterlabs.org> wrote:
> From: pacemaker-request at oss.clusterlabs.org <pacemaker-request at oss.clusterlabs.org>
> Subject: Pacemaker Digest, Vol 45, Issue 55
> To: pacemaker at oss.clusterlabs.org
> Date: Sunday, 28 August, 2011, 4:28
> Today's Topics:
>
>    1. IPaddr2 resource IP unavailable on 'lo' interface for brief
>       period after start (Patrick H.)
>    2. Re: group depending on clones restarting unnescessary
>       (Michael Schwartzkopff)
>    3. apche cannot run anywhere (leopoldo tosi)
>    4. Re: apche cannot run anywhere (Shravan Mishra)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Sat, 27 Aug 2011 02:22:08 -0600
> From: "Patrick H." <pacemaker at feystorm.net>
> To: pacemaker at oss.clusterlabs.org
> Subject: [Pacemaker] IPaddr2 resource IP unavailable on
> 'lo' interface
> for brief period after start
> Message-ID: <4E58A930.2000907 at feystorm.net>
> Content-Type: text/plain; charset="iso-8859-1";
> Format="flowed"
>
> So the issue is that whenever I start up an IP with an IPaddr2
> resource, the IP is unavailable when attempting to connect via the lo
> interface for approximately 21 seconds after the resource is started.
>
> What I am doing is starting up the IP resource; then I have another
> resource that tries to start, but prior to starting it does a status
> check by connecting to that IP on a TCP port to see if the service is
> up, and if it isn't up, it starts it. I should immediately get a
> 'connection refused' message, as the service isn't running; however, I
> don't. Instead the resource times out, as I have the startup timeout
> set to 20 seconds, and connection attempts won't give 'connection
> refused' until after 21 seconds. However, I can try to connect from
> another host on the network and I immediately get 'connection refused'
> as expected, even while the box trying to connect to itself is still
> not working.
>
> But it gets even more interesting. I did a tcpdump on the eth0
> interface (the interface the IPaddr2 resource IP is on) on the box
> running the resources, and I get the following when the resources
> start up (presumably triggered by the box trying to connect for the
> status check):
> 01:12:21.423330 arp who-has 192.168.2.21 (Broadcast) tell 192.168.2.21
> 01:12:22.428523 arp who-has 192.168.2.21 (Broadcast) tell 192.168.2.21
> 01:12:23.429342 arp who-has 192.168.2.21 (Broadcast) tell 192.168.2.21
> Notice that my box seems to be having a slight identity crisis
> (192.168.2.21 is the IPaddr2 resource).
>
> Also, when I tcpdump on the lo interface I get the following:
> 01:15:41.837719 IP 192.168.2.11.37284 > 192.168.2.21.25565: S 1770941237:1770941237(0) win 32792 <mss 16396,sackOK,timestamp 190131056 0,nop,wscale 4>
> 01:15:44.845531 IP 192.168.2.11.37284 > 192.168.2.21.25565: S 1770941237:1770941237(0) win 32792 <mss 16396,sackOK,timestamp 190134064 0,nop,wscale 4>
> which indicates that the box clearly isn't responding
> (192.168.2.11 is the box's normal IP).
>
> As mentioned earlier, after 21 seconds I start getting 'connection
> refused' when attempting to connect. The packets are still going over
> the lo interface at this point, so nothing changes. Additionally, an
> ARP reply never does come back on eth0 or lo; it just magically starts
> working. I could bump up my timeout to something higher, but I would
> really prefer to get this issue solved.
>
> ------------------------------
>
> Message: 2
> Date: Sat, 27 Aug 2011 15:27:08 +0200
> From: Michael Schwartzkopff <misch at clusterbau.com>
> To: The Pacemaker cluster resource manager
> <pacemaker at oss.clusterlabs.org>
> Subject: Re: [Pacemaker] group depending on clones
> restarting
> unnescessary
> Message-ID: <201108271527.09426.misch at clusterbau.com>
> Content-Type: text/plain; charset="iso-8859-1"
>
> > On Aug 26, 2011, at 2:24 PM, Michael Schwartzkopff
> wrote:
> > > Hi,
> > >
> > > I set up HA NFS Server according to the HOWTO
> from linbit. Basically it
> > > is a clone of the NFS server and a clone of the
> root filesystem. A group
> > > of the Filesystem, the exportfs and the ip
> address depends on a DRBD and
> > > the root- exportfs clone.
> > >
> > > See below for the configuration.
> > >
> > > Let's say the group runs on node A and I put node B into standby;
> > > everything looks good. But when I set node B online again, the
> > > NFS-group restarts, although it runs on node A, which is not
> > > touched by the restart of the second half of the clone.
> > >
> > > Any explanation? I tried to set the clone interleave and
> > > non-globally-unique options, but nothing helps.
> > >
> > > Thanks for any hints.
> >
> > How about :
> >
> > order ord_Root_NFS 0: cloneExportRoot groupNFS
>
> Thanks for the solution!
>
> --
> Dr. Michael Schwartzkopff
> Guardinistr. 63
> 81375 München
>
> Tel: (0163) 172 50 98
>
> ------------------------------
>
> Message: 3
> Date: Sat, 27 Aug 2011 17:18:20 +0100 (BST)
> From: leopoldo tosi <leopoldotosi at yahoo.it>
> To: pacemaker at oss.clusterlabs.org
> Subject: [Pacemaker] apche cannot run anywhere
> Message-ID:
> <1314461900.97760.YahooMailClassic at web29502.mail.ird.yahoo.com>
> Content-Type: text/plain; charset=utf-8
>
> I'm doing some tests with Apache, but it doesn't run.
> Can someone help me, please?
>
> leopoldo tosi
> Last updated: Sat Aug 27 18:09:26 2011
> Stack: openais
> Current DC: server01 - partition with quorum
> Version: 1.0.9-74392a28b7f31d7ddc86689598bd23114f58978b
> 2 Nodes configured, 2 expected votes
> 1 Resources configured.
> ============
>
> Online: [ server01 server02 ]
>
> Resource Group: group1
>     ip1      (ocf::heartbeat:IPaddr2):  Started server01
>     apache2  (ocf::heartbeat:apache2):  Stopped
>
> Failed actions:
>     apache2_start_0 (node=server02, call=5, rc=1, status=complete): unknown error
>     apache2_start_0 (node=server01, call=5, rc=1, status=complete): unknown error
>
>
>
> ptest[2141]: 2011/08/27_18:01:19 debug: unpack_config: STONITH timeout: 60000
> ptest[2141]: 2011/08/27_18:01:19 debug: unpack_config: STONITH of failed nodes is disabled
> ptest[2141]: 2011/08/27_18:01:19 debug: unpack_config: Stop all active resources: false
> ptest[2141]: 2011/08/27_18:01:19 debug: unpack_config: Cluster is symmetric - resources can run anywhere by default
> ptest[2141]: 2011/08/27_18:01:19 debug: unpack_config: Default stickiness: 0
> ptest[2141]: 2011/08/27_18:01:19 debug: unpack_config: On loss of CCM Quorum: Stop ALL resources
> ptest[2141]: 2011/08/27_18:01:19 info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
> ptest[2141]: 2011/08/27_18:01:19 info: determine_online_status: Node server02 is online
> ptest[2141]: 2011/08/27_18:01:19 debug: unpack_rsc_op: apache2_start_0 on server02 returned 1 (unknown error) instead of the expected value: 0 (ok)
> ptest[2141]: 2011/08/27_18:01:19 WARN: unpack_rsc_op: Processing failed op apache2_start_0 on server02: unknown error (1)
> ptest[2141]: 2011/08/27_18:01:19 info: determine_online_status: Node server01 is online
> ptest[2141]: 2011/08/27_18:01:19 debug: unpack_rsc_op: apache2_start_0 on server01 returned 1 (unknown error) instead of the expected value: 0 (ok)
> ptest[2141]: 2011/08/27_18:01:19 WARN: unpack_rsc_op: Processing failed op apache2_start_0 on server01: unknown error (1)
> ptest[2141]: 2011/08/27_18:01:19 notice: group_print: Resource Group: group1
> ptest[2141]: 2011/08/27_18:01:19 notice: native_print: ip1 (ocf::heartbeat:IPaddr2): Started server01
> ptest[2141]: 2011/08/27_18:01:19 notice: native_print: apache2 (ocf::heartbeat:apache2): Stopped
> ptest[2141]: 2011/08/27_18:01:19 info: get_failcount: apache2 has failed INFINITY times on server01
> ptest[2141]: 2011/08/27_18:01:19 WARN: common_apply_stickiness: Forcing apache2 away from server01 after 1000000 failures (max=1000000)
> ptest[2141]: 2011/08/27_18:01:19 info: get_failcount: apache2 has failed INFINITY times on server02
> ptest[2141]: 2011/08/27_18:01:19 WARN: common_apply_stickiness: Forcing apache2 away from server02 after 1000000 failures (max=1000000)
> ptest[2141]: 2011/08/27_18:01:19 info: native_merge_weights: ip1: Rolling back scores from apache2
> ptest[2141]: 2011/08/27_18:01:19 debug: native_assign_node: Assigning server01 to ip1
> ptest[2141]: 2011/08/27_18:01:19 debug: native_assign_node: All nodes for resource apache2 are unavailable, unclean or shutting down (server02: 1, -1000000)
> ptest[2141]: 2011/08/27_18:01:19 debug: native_assign_node: Could not allocate a node for apache2
> ptest[2141]: 2011/08/27_18:01:19 info: native_color: Resource apache2 cannot run anywhere
> ptest[2141]: 2011/08/27_18:01:19 notice: LogActions: Leave resource ip1 (Started server01)
> ptest[2141]: 2011/08/27_18:01:19 notice: LogActions: Leave resource apache2 (Stopped)
>
>
> Aug 27 18:00:55 server01 pengine: [1322]: info: native_merge_weights: ip1: Rolling back scores from apache2
> Aug 27 18:00:55 server01 pengine: [1322]: debug: native_assign_node: All nodes for resource apache2 are unavailable, unclean or shutting down (server02: 1, -1000000)
> Aug 27 18:00:55 server01 pengine: [1322]: debug: native_assign_node: Could not allocate a node for apache2
> Aug 27 18:00:55 server01 pengine: [1322]: info: native_color: Resource apache2 cannot run anywhere
> Aug 27 18:00:55 server01 pengine: [1322]: notice: LogActions: Leave resource apache2 (Stopped)
>
>
> Aug 27 17:59:55 server02 attrd: [1203]: debug: attrd_cib_callback: Update 51 for fail-count-apache2=INFINITY passed
> Aug 27 17:59:56 server02 cib: [1201]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='server02']//nvpair[@name='last-failure-apache2'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[3])
> Aug 27 17:59:56 server02 attrd: [1203]: debug: log_data_element: find_nvpair_attr: Match <nvpair id="status-server02-last-failure-apache2" name="last-failure-apache2" value="1314452988" />
> Aug 27 17:59:56 server02 attrd: [1203]: debug: attrd_cib_callback: Update 54 for last-failure-apache2=1314452988 passed
>
>
>
>
>
>
>
>
> ------------------------------
>
> Message: 4
> Date: Sat, 27 Aug 2011 13:29:38 -0400
> From: Shravan Mishra <shravan.mishra at gmail.com>
> To: The Pacemaker cluster resource manager
> <pacemaker at oss.clusterlabs.org>
> Subject: Re: [Pacemaker] apche cannot run anywhere
> Message-ID:
> <CABNhDQy2GJB6Ls3DbWr_YN1E-UkzFKCCMdFdSR_wCoy1xVs8MA at mail.gmail.com>
> Content-Type: text/plain; charset="iso-8859-1"
>
> Try checking whether stonith is enabled; if it is, disable it:
>
> crm_attribute -G -t crm_config -n stonith-enabled
>
> Or define stonith resources before running your resources.
>
>
> Thanks
> Shravan
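Shravan's check above can be taken a step further: the logs in the original post show fail-count-apache2=INFINITY on both nodes, so even after the underlying config problem is fixed, the resource stays pinned off until the fail-counts are cleared. A sketch using standard Pacemaker-1.0-era commands (resource and property names as in this thread; run on a cluster node):

```shell
# Check, and optionally disable for testing, the stonith property:
crm_attribute -G -t crm_config -n stonith-enabled
crm_attribute -t crm_config -n stonith-enabled -v false
# Clear the accumulated fail-counts so apache2 is allowed to run again:
crm resource cleanup apache2
# Then re-check placement:
crm_mon -1
```

Disabling stonith is only sensible in a test setup; production clusters should configure fencing instead.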
>
> On Sat, Aug 27, 2011 at 12:18 PM, leopoldo tosi <leopoldotosi at yahoo.it>wrote:
>
> > I'm doing some tests with Apache, but it doesn't run.
> > Can someone help me, please?
> >
> > leopoldo tosi
> > [...]
>
> ------------------------------
>
> _______________________________________________
> Pacemaker mailing list
> Pacemaker at oss.clusterlabs.org
> http://oss.clusterlabs.org/mailman/listinfo/pacemaker
>
>
> End of Pacemaker Digest, Vol 45, Issue 55
> *****************************************
>