<div dir="ltr"><div><div><div>Hi <br></div>Yes the problem was solved, it was the Linux Kernel that started Postgres when the failed server came up again, I disabled the automatic start with chkconfig and that solved the problem, I will take out 172.16.0.5 from the conf file,<br></div>THANKS SO MUCH for all the help, I will do a blog post on how this is done on SLES 11 SP3 and Postgres 9.3 and will post the URL for the group, in case it will help someone out there, thanks again for all the help!<br></div>Regards<br></div><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Mar 18, 2015 at 3:58 AM, NAKAHIRA Kazutomo <span dir="ltr"><<a href="mailto:nakahira_kazutomo_b1@lab.ntt.co.jp" target="_blank">nakahira_kazutomo_b1@lab.ntt.co.jp</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi,<br>
<br>
As Brestan pointed out, it is expected behaviour that the old master cannot come back up as a slave on its own.<br>
<br>
BTW, this behaviour is different from the original problem.<br>
It seems from the logs that the promote action succeeded on cl2_lb1 after cl1_lb1 was powered off.<br>
Was the original problem resolved?<br>
<br>
And cl2_lb1's postgresql.conf has the following problem.<br>
<br>
2015-03-17 07:34:28 SAST DETAIL:  The failed archive command was: cp pg_xlog/0000001D00000008000000C2 172.16.0.5:/pgtablespace/archive/0000001D00000008000000C2<br>
<br>
"172.16.0.5" must be eliminated from the archive_command directive in the postgresql.conf.<br>
<br>
Best regards,<br>
Kazutomo NAKAHIRA<span class=""><br>
<br>
On 2015/03/18 5:00, Rainer Brestan wrote:<br>
</span><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">
Yes, that's the expected behaviour.<br>
Takatoshi Matsuo describes in his papers why a former master can't come up as a<br>
slave without risking data corruption.<br>
And you do not get an indication from Postgres that the data on disk is corrupted.<br>
Therefore, he created the lock file mechanism to prevent a former master from<br>
starting up.<br>
Taking the base backup from the master discards any possibly wrong data on the<br>
slave, and the removed lock file indicates this to the resource agent.<br>
To shorten the discussion about "how this can be automated within the resource<br>
agent": there is no clean way of handling this for very large databases, for<br>
which this can take hours.<br>
What you should do is make the base backup in a temporary directory and then<br>
rename it to the directory name the Postgres instance requires once the base<br>
backup has finished successfully (yes, this requires twice the hard disk space).<br>
Otherwise you might lose everything if your master breaks during the base backup.<br>
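A rough sketch, using the paths and master address from the procedure quoted below (the temporary directory name is only an example):<br>
$ pg_basebackup -h 192.168.2.3 -U postgres -D /var/lib/pgsql/data.tmp -X stream -P<br>
$ rm -rf /var/lib/pgsql/data        # only after the base backup finished successfully<br>
$ mv /var/lib/pgsql/data.tmp /var/lib/pgsql/data<br>
$ rm /var/lib/pgsql/tmp/PGSQL.lock<br>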
Rainer<br></span>
*Sent:* Tuesday, 17 March 2015 at 12:16<br>
*From:* "Wynand Jansen van Vuuren" <<a href="mailto:esawyja@gmail.com" target="_blank">esawyja@gmail.com</a>><br>
*To:* "Cluster Labs - All topics related to open-source clustering welcomed"<br>
<<a href="mailto:users@clusterlabs.org" target="_blank">users@clusterlabs.org</a>><br>
*Subject:* Re: [ClusterLabs] Postgres streaming VIP-REP not coming up on slave<div><div class="h5"><br>
Hi<br>
OK, I found this particular problem: when the failed node comes up again,<br>
Postgres is started automatically at boot. I have disabled this, and now the VIPs and Postgres remain<br>
on the new MASTER, but the failed node does not come up as a slave, i.e. there is<br>
no sync between the new master and the slave. Is this the expected behavior? The<br>
only way I can get it back into slave mode is to follow the procedure in the wiki<br>
<br>
# su - postgres<br>
$ rm -rf /var/lib/pgsql/data/<br>
$ pg_basebackup -h 192.168.2.3 -U postgres -D /var/lib/pgsql/data -X stream -P<br>
$ rm /var/lib/pgsql/tmp/PGSQL.lock<br>
$ exit<br>
# pcs resource cleanup msPostgresql<br>
<br>
Looking forward to your reply please<br>
Regards<br>
On Tue, Mar 17, 2015 at 7:55 AM, Wynand Jansen van Vuuren <<a href="mailto:esawyja@gmail.com" target="_blank">esawyja@gmail.com</a>><br>
wrote:<br>
<br>
     Hi Nakahira,<br>
     I finally got around testing this, below is the initial state<br>
<br>
     cl1_lb1:~ # crm_mon -1 -Af<br>
     Last updated: Tue Mar 17 07:31:58 2015<br>
     Last change: Tue Mar 17 07:31:12 2015 by root via crm_attribute on cl1_lb1<br>
     Stack: classic openais (with plugin)<br>
     Current DC: cl1_lb1 - partition with quorum<br>
     Version: 1.1.9-2db99f1<br>
     2 Nodes configured, 2 expected votes<br>
     6 Resources configured.<br>
<br>
<br>
     Online: [ cl1_lb1 cl2_lb1 ]<br>
<br>
       Resource Group: master-group<br>
           vip-master    (ocf::heartbeat:IPaddr2):    Started cl1_lb1<br>
           vip-rep    (ocf::heartbeat:IPaddr2):    Started cl1_lb1<br>
           CBC_instance    (ocf::heartbeat:cbc):    Started cl1_lb1<br>
           failover_MailTo    (ocf::heartbeat:MailTo):    Started cl1_lb1<br>
       Master/Slave Set: msPostgresql [pgsql]<br>
           Masters: [ cl1_lb1 ]<br>
           Slaves: [ cl2_lb1 ]<br>
<br>
     Node Attributes:<br>
     * Node cl1_lb1:<br>
          + master-pgsql                        : 1000<br>
          + pgsql-data-status                   : LATEST<br>
          + pgsql-master-baseline               : 00000008BE000000<br>
          + pgsql-status                        : PRI<br>
     * Node cl2_lb1:<br>
          + master-pgsql                        : 100<br>
          + pgsql-data-status                   : STREAMING|SYNC<br>
          + pgsql-status                        : HS:sync<br>
<br>
     Migration summary:<br>
     * Node cl2_lb1:<br>
     * Node cl1_lb1:<br>
     cl1_lb1:~ #<br>
     ###### - I then did an init 0 on the master node, cl1_lb1<br>
<br>
     cl1_lb1:~ # init 0<br>
     cl1_lb1:~ #<br>
     Connection closed by foreign host.<br>
<br>
     Disconnected from remote host(cl1_lb1) at 07:36:18.<br>
<br>
     Type `help' to learn how to use Xshell prompt.<br>
     [c:\~]$<br>
     ###### - This was OK, as the slave took over and became master<br>
<br>
     cl2_lb1:~ # crm_mon -1 -Af<br>
     Last updated: Tue Mar 17 07:35:04 2015<br>
     Last change: Tue Mar 17 07:34:29 2015 by root via crm_attribute on cl2_lb1<br>
     Stack: classic openais (with plugin)<br>
     Current DC: cl2_lb1 - partition WITHOUT quorum<br>
     Version: 1.1.9-2db99f1<br>
     2 Nodes configured, 2 expected votes<br>
     6 Resources configured.<br>
<br>
<br>
     Online: [ cl2_lb1 ]<br>
     OFFLINE: [ cl1_lb1 ]<br>
<br>
       Resource Group: master-group<br>
           vip-master    (ocf::heartbeat:IPaddr2):    Started cl2_lb1<br>
           vip-rep    (ocf::heartbeat:IPaddr2):    Started cl2_lb1<br>
           CBC_instance    (ocf::heartbeat:cbc):    Started cl2_lb1<br>
           failover_MailTo    (ocf::heartbeat:MailTo):    Started cl2_lb1<br>
       Master/Slave Set: msPostgresql [pgsql]<br>
           Masters: [ cl2_lb1 ]<br>
           Stopped: [ pgsql:1 ]<br>
<br>
     Node Attributes:<br>
     * Node cl2_lb1:<br>
          + master-pgsql                        : 1000<br>
          + pgsql-data-status                   : LATEST<br>
          + pgsql-master-baseline               : 00000008C2000090<br>
          + pgsql-status                        : PRI<br>
<br>
     Migration summary:<br>
     * Node cl2_lb1:<br>
     cl2_lb1:~ #<br>
     And the logs from Postgres and Corosync are attached<br>
     ###### - I then restarted the original Master cl1_lb1 and started Corosync<br>
     manually<br>
     Once the original Master cl1_lb1 was up and Corosync was running, the status<br>
     below resulted; notice there are no VIPs and no Postgres<br>
     ###### - Still working below<br>
<br>
     cl2_lb1:~ # crm_mon -1 -Af<br>
     Last updated: Tue Mar 17 07:36:55 2015<br>
     Last change: Tue Mar 17 07:34:29 2015 by root via crm_attribute on cl2_lb1<br>
     Stack: classic openais (with plugin)<br>
     Current DC: cl2_lb1 - partition WITHOUT quorum<br>
     Version: 1.1.9-2db99f1<br>
     2 Nodes configured, 2 expected votes<br>
     6 Resources configured.<br>
<br>
<br>
     Online: [ cl2_lb1 ]<br>
     OFFLINE: [ cl1_lb1 ]<br>
<br>
       Resource Group: master-group<br>
           vip-master    (ocf::heartbeat:IPaddr2):    Started cl2_lb1<br>
           vip-rep    (ocf::heartbeat:IPaddr2):    Started cl2_lb1<br>
           CBC_instance    (ocf::heartbeat:cbc):    Started cl2_lb1<br>
           failover_MailTo    (ocf::heartbeat:MailTo):    Started cl2_lb1<br>
       Master/Slave Set: msPostgresql [pgsql]<br>
           Masters: [ cl2_lb1 ]<br>
           Stopped: [ pgsql:1 ]<br>
<br>
     Node Attributes:<br>
     * Node cl2_lb1:<br>
          + master-pgsql                        : 1000<br>
          + pgsql-data-status                   : LATEST<br>
          + pgsql-master-baseline               : 00000008C2000090<br>
          + pgsql-status                        : PRI<br>
<br>
     Migration summary:<br>
     * Node cl2_lb1:<br>
<br>
     ###### - After original master is up and Corosync running on cl1_lb1<br>
<br>
     cl2_lb1:~ # crm_mon -1 -Af<br>
     Last updated: Tue Mar 17 07:37:47 2015<br>
     Last change: Tue Mar 17 07:37:21 2015 by root via crm_attribute on cl1_lb1<br>
     Stack: classic openais (with plugin)<br>
     Current DC: cl2_lb1 - partition with quorum<br>
     Version: 1.1.9-2db99f1<br>
     2 Nodes configured, 2 expected votes<br>
     6 Resources configured.<br>
<br>
<br>
     Online: [ cl1_lb1 cl2_lb1 ]<br>
<br>
<br>
     Node Attributes:<br>
     * Node cl1_lb1:<br>
          + master-pgsql                        : -INFINITY<br>
          + pgsql-data-status                   : LATEST<br>
          + pgsql-status                        : STOP<br>
     * Node cl2_lb1:<br>
          + master-pgsql                        : -INFINITY<br>
          + pgsql-data-status                   : DISCONNECT<br>
          + pgsql-status                        : STOP<br>
<br>
     Migration summary:<br>
     * Node cl2_lb1:<br>
         pgsql:0: migration-threshold=1 fail-count=2 last-failure='Tue Mar 17<br>
     07:37:26 2015'<br>
     * Node cl1_lb1:<br>
         pgsql:0: migration-threshold=1 fail-count=2 last-failure='Tue Mar 17<br>
     07:37:26 2015'<br>
<br>
     Failed actions:<br>
          pgsql_monitor_4000 (node=cl2_lb1, call=735, rc=7, status=complete): not<br>
     running<br>
          pgsql_monitor_4000 (node=cl1_lb1, call=42, rc=7, status=complete): not<br>
     running<br>
     cl2_lb1:~ #<br>
     ##### - No VIPs up<br>
<br>
     cl2_lb1:~ # ping 172.28.200.159<br>
     PING 172.28.200.159 (172.28.200.159) 56(84) bytes of data.<br></div></div>
      From 172.28.200.168 icmp_seq=1 Destination Host Unreachable<span class=""><br>
      From 172.28.200.168 icmp_seq=1 Destination Host Unreachable<br>
      From 172.28.200.168 icmp_seq=2 Destination Host Unreachable<br>
      From 172.28.200.168 icmp_seq=3 Destination Host Unreachable<br>
     ^C<br>
     --- 172.28.200.159 ping statistics ---<br>
     5 packets transmitted, 0 received, +4 errors, 100% packet loss, time 4024ms<br>
     , pipe 3<br>
     cl2_lb1:~ # ping 172.16.0.5<br>
     PING 172.16.0.5 (172.16.0.5) 56(84) bytes of data.<br></span>
      From 172.16.0.3 icmp_seq=1 Destination Host Unreachable<div><div class="h5"><br>
      From 172.16.0.3 icmp_seq=1 Destination Host Unreachable<br>
      From 172.16.0.3 icmp_seq=2 Destination Host Unreachable<br>
      From 172.16.0.3 icmp_seq=3 Destination Host Unreachable<br>
      From 172.16.0.3 icmp_seq=5 Destination Host Unreachable<br>
      From 172.16.0.3 icmp_seq=6 Destination Host Unreachable<br>
      From 172.16.0.3 icmp_seq=7 Destination Host Unreachable<br>
     ^C<br>
     --- 172.16.0.5 ping statistics ---<br>
     8 packets transmitted, 0 received, +7 errors, 100% packet loss, time 7015ms<br>
     , pipe 3<br>
     cl2_lb1:~ #<br>
<br>
     Any ideas please, or is it a case of recovering the original master manually<br>
     before starting Corosync etc?<br>
     All logs are attached<br>
     Regards<br>
     On Mon, Mar 16, 2015 at 11:01 AM, Wynand Jansen van Vuuren<br>
     <<a href="mailto:esawyja@gmail.com" target="_blank">esawyja@gmail.com</a>> wrote:<br>
<br>
          Thanks for the advice. I have a demo on this now, so I don't want to<br>
          test it now; I will do so tomorrow and forward the logs, many thanks!!<br>
         On Mon, Mar 16, 2015 at 10:54 AM, NAKAHIRA Kazutomo<br>
         <<a href="mailto:nakahira_kazutomo_b1@lab.ntt.co.jp" target="_blank">nakahira_kazutomo_b1@lab.ntt.<u></u>co.jp</a>> wrote:<br>
<br>
             Hi,<br>
<br>
             > do you suggest that I take it out? or should I look at the problem where<br>
             > cl2_lb1 is not being promoted?<br>
<br>
             You should look at the problem where cl2_lb1 is not being promoted.<br>
              And I will look at it if you send me the ha-log and PostgreSQL's log.<br>
<br>
             Best regards,<br>
             Kazutomo NAKAHIRA<br>
<br>
<br>
             On 2015/03/16 17:18, Wynand Jansen van Vuuren wrote:<br>
<br>
                 Hi Nakahira,<br>
                 Thanks so much for the info, this setting was as the wiki page<br>
                 suggested,<br>
                 do you suggest that I take it out? or should I look at the<br>
                 problem where<br>
                 cl2_lb1 is not being promoted?<br>
                 Regards<br>
<br>
                 On Mon, Mar 16, 2015 at 10:15 AM, NAKAHIRA Kazutomo <<br>
                 <a href="mailto:nakahira_kazutomo_b1@lab.ntt.co.jp" target="_blank">nakahira_kazutomo_b1@lab.ntt.<u></u>co.jp</a>> wrote:<br>
<br>
                     Hi,<br>
<br>
                         Notice there is no VIPs, looks like the VIPs depends on<br>
                         some other<br>
<br>
                     resource<br>
<br>
                         to start 1st?<br>
<br>
<br>
                      The following constraint means that "master-group" cannot start<br>
                      without a master of the msPostgresql resource.<br>
<br>
                      colocation rsc_colocation-1 inf: master-group msPostgresql:Master<br>
<br>
                      After you power off cl1_lb1, msPostgresql on cl2_lb1 is not promoted<br>
                      and no master exists in your cluster.<br>
<br>
                      It means that "master-group" cannot run anywhere.<br>
<br>
                     Best regards,<br>
                     Kazutomo NAKAHIRA<br>
<br>
<br>
                     On 2015/03/16 16:48, Wynand Jansen van Vuuren wrote:<br>
<br>
                         Hi<br>
                         When I start out cl1_lb1 (Cluster 1 load balancer 1) is<br>
                         the master as<br>
                         below<br>
                         cl1_lb1:~ # crm_mon -1 -Af<br>
                         Last updated: Mon Mar 16 09:44:44 2015<br>
                         Last change: Mon Mar 16 08:06:26 2015 by root via<br>
                         crm_attribute on cl1_lb1<br>
                         Stack: classic openais (with plugin)<br>
                         Current DC: cl2_lb1 - partition with quorum<br>
                         Version: 1.1.9-2db99f1<br>
                         2 Nodes configured, 2 expected votes<br>
                         6 Resources configured.<br>
<br>
<br>
                         Online: [ cl1_lb1 cl2_lb1 ]<br>
<br>
                             Resource Group: master-group<br>
                                 vip-master    (ocf::heartbeat:IPaddr2):<br>
                         Started cl1_lb1<br>
                                 vip-rep    (ocf::heartbeat:IPaddr2):    Started<br>
                         cl1_lb1<br>
                                 CBC_instance    (ocf::heartbeat:cbc):    Started<br>
                         cl1_lb1<br>
                                 failover_MailTo    (ocf::heartbeat:MailTo):<br>
                         Started cl1_lb1<br>
                             Master/Slave Set: msPostgresql [pgsql]<br>
                                 Masters: [ cl1_lb1 ]<br>
                                 Slaves: [ cl2_lb1 ]<br>
<br>
                         Node Attributes:<br>
                         * Node cl1_lb1:<br>
                                + master-pgsql                        : 1000<br>
                                + pgsql-data-status                   : LATEST<br>
                                + pgsql-master-baseline               :<br>
                         00000008B90061F0<br>
                                + pgsql-status                        : PRI<br>
                         * Node cl2_lb1:<br>
                                + master-pgsql                        : 100<br>
                                + pgsql-data-status                   :<br>
                         STREAMING|SYNC<br>
                                + pgsql-status                        : HS:sync<br>
<br>
                         Migration summary:<br>
                         * Node cl2_lb1:<br>
                         * Node cl1_lb1:<br>
                         cl1_lb1:~ #<br>
<br>
                          If I then do a power off on cl1_lb1 (master), Postgres<br>
                          moves to cl2_lb1 (Cluster 2 load balancer 1), but the<br>
                          VIP-MASTER and VIP-REP are not pingable<br>
                          from the NEW master (cl2_lb1); it stays like this below<br>
                         cl2_lb1:~ # crm_mon -1 -Af<br>
                         Last updated: Mon Mar 16 07:32:07 2015<br>
                         Last change: Mon Mar 16 07:28:53 2015 by root via<br>
                         crm_attribute on cl1_lb1<br>
                         Stack: classic openais (with plugin)<br>
                         Current DC: cl2_lb1 - partition WITHOUT quorum<br>
                         Version: 1.1.9-2db99f1<br>
                         2 Nodes configured, 2 expected votes<br>
                         6 Resources configured.<br>
<br>
<br>
                         Online: [ cl2_lb1 ]<br>
                         OFFLINE: [ cl1_lb1 ]<br>
<br>
                             Master/Slave Set: msPostgresql [pgsql]<br>
                                 Slaves: [ cl2_lb1 ]<br>
                                 Stopped: [ pgsql:1 ]<br>
<br>
                         Node Attributes:<br>
                         * Node cl2_lb1:<br>
                                + master-pgsql                        : -INFINITY<br>
                                + pgsql-data-status                   : DISCONNECT<br>
                                + pgsql-status                        : HS:alone<br>
<br>
                         Migration summary:<br>
                         * Node cl2_lb1:<br>
                         cl2_lb1:~ #<br>
<br>
                          Notice there are no VIPs; it looks like the VIPs depend on<br>
                          some other resource to start first?<br>
                         Thanks for the reply!<br>
<br>
<br>
                         On Mon, Mar 16, 2015 at 9:42 AM, NAKAHIRA Kazutomo <<br>
                         <a href="mailto:nakahira_kazutomo_b1@lab.ntt.co.jp" target="_blank">nakahira_kazutomo_b1@lab.ntt.<u></u>co.jp</a>> wrote:<br>
<br>
                            Hi,<br>
<br>
<br>
                                fine, cl2_lb1 takes over and acts as a slave, but<br>
                             the VIPs does not come<br>
<br>
<br>
                              cl2_lb1 acts as a slave? It is not a master?<br>
                              The VIPs come up with the master of the msPostgresql resource.<br>
<br>
                              If the promote action failed on cl2_lb1, then<br>
                              please send the ha-log and PostgreSQL's log.<br>
<br>
                             Best regards,<br>
                             Kazutomo NAKAHIRA<br>
<br>
<br>
                             On 2015/03/16 16:09, Wynand Jansen van Vuuren wrote:<br>
<br>
                                Hi all,<br>
<br>
<br>
                                  I have 2 nodes with 2 interfaces each. ETH0 is<br>
                                  used for an application, CBC, that writes to the<br>
                                  Postgres DB on the VIP-MASTER 172.28.200.159;<br>
                                  ETH1 is used for the Corosync configuration and<br>
                                  VIP-REP. Everything works, but if the master,<br>
                                  currently on cl1_lb1, has a catastrophic failure,<br>
                                  like a power down, the VIPs do not start on the<br>
                                  slave. The Postgres part works fine: cl2_lb1 takes<br>
                                  over and acts as a slave, but the VIPs do not come<br>
                                  up. If I test it manually, i.e. kill the<br>
                                  application 3 times on the master, the switchover<br>
                                  is smooth, and the same if I kill Postgres on the<br>
                                  master, but when there is a power failure on the<br>
                                  master, the VIPs stay down. If I then delete the<br>
                                  attributes pgsql-data-status="LATEST" and<br>
                                  pgsql-data-status="STREAMING|SYNC" on the slave<br>
                                  after the power off on the master and restart<br>
                                  everything, then the VIPs come up on the slave.<br>
                                  Any ideas please?<br>
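                                  (For reference, I clear those attributes roughly like this,<br>
                                  once per node; crm_attribute is the Pacemaker tool, and the<br>
                                  node name here is just an example:)<br>
                                  # crm_attribute -l forever -N cl2_lb1 -n pgsql-data-status -D<br>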
                                 I'm using this setup<br>
                                 <a href="http://clusterlabs.org/wiki/PgSQL_Replicated_Cluster" target="_blank">http://clusterlabs.org/wiki/<u></u>PgSQL_Replicated_Cluster</a><br>
<br>
                                 With this configuration below<br>
                                 node cl1_lb1 \<br>
                                             attributes pgsql-data-status="LATEST"<br>
                                 node cl2_lb1 \<br>
                                             attributes<br>
                                 pgsql-data-status="STREAMING|<u></u>SYNC"<br>
                                 primitive CBC_instance ocf:heartbeat:cbc \<br>
                                             op monitor interval="60s"<br>
                                 timeout="60s" on-fail="restart" \<br>
                                             op start interval="0s" timeout="60s"<br>
                                 on-fail="restart" \<br>
                                             meta target-role="Started"<br>
                                 migration-threshold="3"<br>
                                 failure-timeout="60s"<br>
                                 primitive failover_MailTo ocf:heartbeat:MailTo \<br>
                                             params email="<a href="mailto:wynandj@rorotika.com" target="_blank">wynandj@rorotika.com</a>"<br>
                                 subject="Cluster Status<br>
                                 change<br>
                                 - " \<br>
                                             op monitor interval="10"<br>
                                 timeout="10" dept="0"<br>
                                 primitive pgsql ocf:heartbeat:pgsql \<br>
                                             params<br>
                                 pgctl="/opt/app/PostgreSQL/9.<u></u>3/bin/pg_ctl"<br>
                                 psql="/opt/app/PostgreSQL/9.3/<u></u>bin/psql"<br>
                                 config="/opt/app/pgdata/9.3/<u></u>postgresql.conf"<br>
                                 pgdba="postgres"<br>
                                 pgdata="/opt/app/pgdata/9.3/" start_opt="-p<br>
                                 5432" rep_mode="sync"<br>
                                 node_list="cl1_lb1 cl2_lb1" restore_command="cp<br>
                                 /pgtablespace/archive/%f<br>
                                 %p" primary_conninfo_opt="<u></u>keepalives_idle=60<br>
                                 keepalives_interval=5<br>
                                 keepalives_count=5" master_ip="172.16.0.5"<br>
                                 restart_on_promote="false"<br>
                                 logfile="/var/log/OCF.log" \<br>
                                             op start interval="0s" timeout="60s"<br>
                                 on-fail="restart" \<br>
                                             op monitor interval="4s"<br>
                                 timeout="60s" on-fail="restart" \<br>
                                             op monitor interval="3s"<br>
                                 role="Master" timeout="60s"<br>
                                 on-fail="restart" \<br>
                                             op promote interval="0s"<br>
                                 timeout="60s" on-fail="restart" \<br>
                                             op demote interval="0s"<br>
                                 timeout="60s" on-fail="stop" \<br>
                                             op stop interval="0s" timeout="60s"<br>
                                 on-fail="block" \<br>
                                             op notify interval="0s" timeout="60s"<br>
                                 primitive vip-master ocf:heartbeat:IPaddr2 \<br>
                                             params ip="172.28.200.159"<br>
                                 nic="eth0" iflabel="CBC_VIP"<br>
                                 cidr_netmask="24" \<br>
                                             op start interval="0s" timeout="60s"<br>
                                 on-fail="restart" \<br>
                                             op monitor interval="10s"<br>
                                 timeout="60s" on-fail="restart" \<br>
                                             op stop interval="0s" timeout="60s"<br>
                                 on-fail="block" \<br>
                                             meta target-role="Started"<br>
                                 primitive vip-rep ocf:heartbeat:IPaddr2 \<br>
                                             params ip="172.16.0.5" nic="eth1"<br>
                                 iflabel="REP_VIP"<br>
                                 cidr_netmask="24" \<br>
                                             meta migration-threshold="0"<br>
                                 target-role="Started" \<br>
                                             op start interval="0s" timeout="60s"<br>
                                 on-fail="stop" \<br>
                                             op monitor interval="10s"<br>
                                 timeout="60s" on-fail="restart" \<br>
                                             op stop interval="0s" timeout="60s"<br>
                                 on-fail="restart"<br>
                                 group master-group vip-master vip-rep<br>
                                 CBC_instance failover_MailTo<br>
                                 ms msPostgresql pgsql \<br>
                                             meta master-max="1"<br>
                                 master-node-max="1" clone-max="2"<br>
                                 clone-node-max="1" notify="true"<br>
                                 colocation rsc_colocation-1 inf: master-group<br>
                                 msPostgresql:Master<br>
                                 order rsc_order-1 0: msPostgresql:promote<br>
                                 master-group:start<br>
                                 symmetrical=false<br>
                                 order rsc_order-2 0: msPostgresql:demote<br>
                                 master-group:stop<br>
                                 symmetrical=false<br>
                                 property $id="cib-bootstrap-options" \<br>
                                             dc-version="1.1.9-2db99f1" \<br>
                                              cluster-infrastructure="classic<br>
                                 openais (with plugin)" \<br>
                                             expected-quorum-votes="2" \<br>
                                             no-quorum-policy="ignore" \<br>
                                             stonith-enabled="false" \<br>
                                              cluster-recheck-interval="1min" \<br>
                                             crmd-transition-delay="0s" \<br>
                                             last-lrm-refresh="1426485983"<br>
                                             rsc_defaults $id="rsc-options" \<br>
                                             resource-stickiness="INFINITY" \<br>
                                             migration-threshold="1"<br>
                                 #vim:set syntax=pcmk<br>
<br>
                                 Any ideas please, I'm lost......<br>
<br>
                     --<br>
              NTT Open Source Software Center<br>
              Kazutomo NAKAHIRA<br>
              TEL: 03-5860-5135 FAX: 03-5463-6490<br>
              Mail: <a href="mailto:nakahira_kazutomo_b1@lab.ntt.co.jp" target="_blank">nakahira_kazutomo_b1@lab.ntt.co.jp</a><br>
<br>
<br>
<br>
             --<br>
              NTT Open Source Software Center<br>
              Kazutomo NAKAHIRA<br>
              TEL: 03-5860-5135 FAX: 03-5463-6490<br>
              Mail: <a href="mailto:nakahira_kazutomo_b1@lab.ntt.co.jp" target="_blank">nakahira_kazutomo_b1@lab.ntt.co.jp</a><br>
<br>
</div></div></blockquote><div class="HOEnZb"><div class="h5">
<br>
<br>
<br>
_______________________________________________<br>
Users mailing list: <a href="mailto:Users@clusterlabs.org" target="_blank">Users@clusterlabs.org</a><br>
<a href="http://clusterlabs.org/mailman/listinfo/users" target="_blank">http://clusterlabs.org/mailman/listinfo/users</a><br>
<br>
Project Home: <a href="http://www.clusterlabs.org" target="_blank">http://www.clusterlabs.org</a><br>
Getting started: <a href="http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf" target="_blank">http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf</a><br>
Bugs: <a href="http://bugs.clusterlabs.org" target="_blank">http://bugs.clusterlabs.org</a><br>
</div></div></blockquote></div><br></div>