<html>
  <head>
    <meta content="text/html; charset=UTF-8" http-equiv="Content-Type">
  </head>
  <body text="#000000" bgcolor="#FFFFFF">
    <font face="Microsoft Sans Serif">Hello.<br>
      <br>
      We have been looking into using the Corosync/Pacemaker stack to
      build a <br>
      high-availability cluster of PostgreSQL servers with automatic
      failover.<br>
      <br>
      We are using Corosync (2.3.4) as the messaging layer and the
      stateful master/slave <br>
      resource agent (pgsql) with Pacemaker (1.1.12) on CentOS 7.1.<br>
      <br>
      Things work well for a static cluster, where membership is
      defined up front. <br>
      However, we need to be able to seamlessly add new nodes to the
      cluster and remove <br>
      existing ones from it, without service interruption - and here we
      ran into a problem.<br>
      <br>
      Is it possible to add a new node dynamically, without interruption?<br>
      <br>
      Do you know of a way to add a new node to the cluster without this
      disruption - perhaps a specific command or something else?</font><br>
    <br>
    <div class="moz-cite-prefix">On 05.10.2015 13:19, Nikolay Popov wrote:<br>
    </div>
    <blockquote cite="mid:56124E99.1000301@postgrespro.ru" type="cite">
      <meta http-equiv="content-type" content="text/html; charset=UTF-8">
      Hello.<br>
      <br>
      The cluster goes into STOP status when I add/remove the new cluster
      node <small><font face="Courier New">pi05</font></small> after running the <small><font
          face="Courier New">update pgsql</font></small>
      command:<br>
      <br>
      How can I add a node without the cluster going into STOP? <br>
      <br>
      These are the commands I run, step by step:<br>
      <br>
      <small><font face="Courier New"># pcs cluster auth pi01 pi02 pi03 pi05 -u
          hacluster -p hacluster<br>
          <br>
          pi01: Authorized<br>
          pi02: Authorized<br>
          pi03: Authorized<br>
          pi05: Authorized</font></small><br>
      <br>
      <small><font face="Courier New"># pcs cluster node add pi05
          --start<br>
          <br>
          pi01: Corosync updated<br>
          pi02: Corosync updated<br>
          pi03: Corosync updated<br>
          pi05: Succeeded<br>
          pi05: Starting Cluster...</font></small><br>
      <br>
      <small><font face="Courier New"># pcs resource show --full<br>
          <br>
           Group: master-group<br>
            Resource: vip-master (class=ocf provider=heartbeat
          type=IPaddr2)<br>
             Attributes: ip=192.168.242.100 nic=eth0 cidr_netmask=24<br>
             Operations: start interval=0s timeout=60s on-fail=restart
          (vip-master-start-interval-0s)<br>
                         monitor interval=10s timeout=60s
          on-fail=restart (vip-master-monitor-interval-10s)<br>
                         stop interval=0s timeout=60s on-fail=block
          (vip-master-stop-interval-0s)<br>
            Resource: vip-rep (class=ocf provider=heartbeat
          type=IPaddr2)<br>
             Attributes: ip=192.168.242.101 nic=eth0 cidr_netmask=24<br>
             Meta Attrs: migration-threshold=0<br>
             Operations: start interval=0s timeout=60s on-fail=stop
          (vip-rep-start-interval-0s)<br>
                         monitor interval=10s timeout=60s
          on-fail=restart (vip-rep-monitor-interval-10s)<br>
                         stop interval=0s timeout=60s on-fail=ignore
          (vip-rep-stop-interval-0s)<br>
           Master: msPostgresql<br>
            Meta Attrs: master-max=1 master-node-max=1 clone-max=3
          clone-node-max=1 notify=true<br>
            Resource: pgsql (class=ocf provider=heartbeat type=pgsql)<br>
             Attributes: pgctl=/usr/pgsql-9.5/bin/pg_ctl
          psql=/usr/pgsql-9.5/bin/psql pgdata=/var/lib/pgsql/9.5/data/
          rep_mode=sync node_list="pi01 pi02 pi03" restore_command="cp
          /var/lib/pgsql/9.5/data/wal_archive/%f %p"
          primary_conninfo_opt="user=repl password=super-pass-for-repl
          keepalives_idle=60 keepalives_interval=5 keepalives_count=5"
          master_ip=192.168.242.100 restart_on_promote=true
          check_wal_receiver=true<br>
             Operations: start interval=0s timeout=60s on-fail=restart
          (pgsql-start-interval-0s)<br>
                         monitor interval=4s timeout=60s on-fail=restart
          (pgsql-monitor-interval-4s)<br>
                         monitor role=Master timeout=60s on-fail=restart
          interval=3s (pgsql-monitor-interval-3s-role-Master)<br>
                         promote interval=0s timeout=60s on-fail=restart
          (pgsql-promote-interval-0s)<br>
                         demote interval=0s timeout=60s on-fail=stop
          (pgsql-demote-interval-0s)<br>
                         stop interval=0s timeout=60s on-fail=block
          (pgsql-stop-interval-0s)<br>
                         notify interval=0s timeout=60s
          (pgsql-notify-interval-0s)<br>
          <br>
          <br>
          # pcs resource update msPostgresql pgsql master-max=1
          master-node-max=1 clone-max=4 clone-node-max=1 notify=true</font></small><br>
      <br>
      <small><font face="Courier New"># pcs resource update pgsql pgsql
          node_list="pi01 pi02 pi03 pi05"<br>
          <br>
          # crm_mon -Afr1<br>
          <br>
          Last updated: Fri Oct  2 17:07:05 2015          Last change:
          Fri Oct  2 17:06:37 2015<br>
           by root via cibadmin on pi01<br>
          Stack: corosync<br>
          Current DC: pi02 (version 1.1.13-a14efad) - partition with
          quorum<br>
          4 nodes and 9 resources configured<br>
          <br>
          Online: [ pi01 pi02 pi03 pi05 ]<br>
          <br>
          Full list of resources:<br>
          <br>
           Resource Group: master-group<br>
               vip-master (ocf::heartbeat:IPaddr2):       Stopped<br>
               vip-rep    (ocf::heartbeat:IPaddr2):       Stopped<br>
           Master/Slave Set: msPostgresql [pgsql]<br>
               Slaves: [ pi02 ]<br>
               Stopped: [ pi01 pi03 pi05 ]<br>
           fence-pi01     (stonith:fence_ssh):    Started pi02<br>
           fence-pi02     (stonith:fence_ssh):    Started pi01<br>
           fence-pi03     (stonith:fence_ssh):    Started pi01<br>
          <br>
          Node Attributes:<br>
          * Node pi01:<br>
              + master-pgsql                      : -INFINITY<br>
              + pgsql-data-status                 : STREAMING|SYNC<br>
              + pgsql-status                      : STOP<br>
          * Node pi02:<br>
              + master-pgsql                      : -INFINITY<br>
              + pgsql-data-status                 : LATEST<br>
              + pgsql-status                      : STOP<br>
          * Node pi03:<br>
              + master-pgsql                      : -INFINITY<br>
              + pgsql-data-status                 : STREAMING|POTENTIAL<br>
              + pgsql-status                      : STOP<br>
          * Node pi05:<br>
              + master-pgsql                      : -INFINITY<br>
              + pgsql-status                      : STOP<br>
          <br>
          Migration Summary:<br>
          * Node pi01:<br>
          * Node pi03:<br>
          * Node pi02:<br>
          * Node pi05:</font></small><br>
      <br>
      After some time, it started working:<br>
      <br>
      <small><font face="Courier New">Every 2.0s: crm_mon
          -Afr1                                                Fri Oct 
          2 17:04:36 2015<br>
          <br>
          Last updated: Fri Oct  2 17:04:36 2015          Last change:
          Fri Oct  2 17:04:07 2015 by root via<br>
           cibadmin on pi01<br>
          Stack: corosync<br>
          Current DC: pi02 (version 1.1.13-a14efad) - partition with
          quorum<br>
          4 nodes and 9 resources configured<br>
          <br>
          Online: [ pi01 pi02 pi03 pi05 ]<br>
          <br>
          Full list of resources:<br>
          <br>
           Resource Group: master-group<br>
               vip-master (ocf::heartbeat:IPaddr2):       Started pi02<br>
               vip-rep    (ocf::heartbeat:IPaddr2):       Started pi02<br>
           Master/Slave Set: msPostgresql [pgsql]<br>
               Masters: [ pi02 ]<br>
               Slaves: [ pi01 pi03 pi05 ]<br>
          <br>
           fence-pi01     (stonith:fence_ssh):    Started pi02<br>
           fence-pi02     (stonith:fence_ssh):    Started pi01<br>
           fence-pi03     (stonith:fence_ssh):    Started pi01<br>
          <br>
          Node Attributes:<br>
          * Node pi01:<br>
              + master-pgsql                      : 100<br>
              + pgsql-data-status                 : STREAMING|SYNC<br>
              + pgsql-receiver-status             : normal<br>
              + pgsql-status                      : HS:sync<br>
          * Node pi02:<br>
              + master-pgsql                      : 1000<br>
              + pgsql-data-status                 : LATEST<br>
              + pgsql-master-baseline             : 0000000008000098<br>
              + pgsql-receiver-status             : ERROR<br>
              + pgsql-status                      : PRI<br>
          * Node pi03:<br>
              + master-pgsql                      : -INFINITY<br>
              + pgsql-data-status                 : STREAMING|POTENTIAL<br>
              + pgsql-receiver-status             : normal<br>
              + pgsql-status                      : HS:potential<br>
           * Node pi05:<br>
               + master-pgsql                      : -INFINITY<br>
               + pgsql-data-status                 : STREAMING|POTENTIAL<br>
               + pgsql-receiver-status             : normal<br>
               + pgsql-status                      : HS:potential<br>
          <br>
          Migration Summary:<br>
          * Node pi01:<br>
          * Node pi03:<br>
          * Node pi02:<br>
          * Node pi05:</font></small><br>
      <br>
      <br>
      <pre class="moz-signature" cols="72">-- 
Nikolay Popov
</pre>
      <br>
      <fieldset class="mimeAttachmentHeader"></fieldset>
      <br>
      <pre wrap="">_______________________________________________
Users mailing list: <a class="moz-txt-link-abbreviated" href="mailto:Users@clusterlabs.org">Users@clusterlabs.org</a>
<a class="moz-txt-link-freetext" href="http://clusterlabs.org/mailman/listinfo/users">http://clusterlabs.org/mailman/listinfo/users</a>

Project Home: <a class="moz-txt-link-freetext" href="http://www.clusterlabs.org">http://www.clusterlabs.org</a>
Getting started: <a class="moz-txt-link-freetext" href="http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf">http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf</a>
Bugs: <a class="moz-txt-link-freetext" href="http://bugs.clusterlabs.org">http://bugs.clusterlabs.org</a>
</pre>
    </blockquote>
    <br>
    <pre class="moz-signature" cols="72">-- 
Nikolay Popov
<a class="moz-txt-link-abbreviated" href="mailto:n.popov@postgrespro.ru">n.popov@postgrespro.ru</a>
Postgres Professional: <a class="moz-txt-link-freetext" href="http://www.postgrespro.com">http://www.postgrespro.com</a>
The Russian Postgres Company</pre>
  </body>
</html>