<div dir="ltr"><div><div><div>Hi<br></div>OK, I found this particular problem: when the failed node comes up again, the init system starts Postgres automatically. I have disabled this, and now the VIPs and Postgres remain on the new MASTER, but the failed node does not come back up as a slave, i.e. there is no sync between the new master and the slave. Is this the expected behavior? The only way I can get it back into slave mode is to follow the procedure in the wiki:<br><pre># su - postgres
$ rm -rf /var/lib/pgsql/data/
$ pg_basebackup -h 192.168.2.3 -U postgres -D /var/lib/pgsql/data -X stream -P
$ rm /var/lib/pgsql/tmp/PGSQL.lock
$ exit
# pcs resource cleanup msPostgresql</pre><br></div>Looking forward to your reply please<br></div>Regards<br><br></div><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Mar 17, 2015 at 7:55 AM, Wynand Jansen van Vuuren <span dir="ltr"><<a href="mailto:esawyja@gmail.com" target="_blank">esawyja@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><div><div><div><div><div><div><div>Hi Nakahira,<br></div>I finally got around testing this, below is the initial state<span class=""><br><br>cl1_lb1:~ # crm_mon -1 -Af<br></span>Last updated: Tue Mar 17 07:31:58 2015<br>Last change: Tue Mar 17 07:31:12 2015 by root via crm_attribute on cl1_lb1<span class=""><br>Stack: classic openais (with plugin)<br></span>Current DC: cl1_lb1 - partition with quorum<span class=""><br>Version: 1.1.9-2db99f1<br>2 Nodes configured, 2 expected votes<br>6 Resources configured.<br><br><br>Online: [ cl1_lb1 cl2_lb1 ]<br><br> Resource Group: master-group<br> vip-master (ocf::heartbeat:IPaddr2): Started cl1_lb1 <br> vip-rep (ocf::heartbeat:IPaddr2): Started cl1_lb1 <br> CBC_instance (ocf::heartbeat:cbc): Started cl1_lb1 <br> failover_MailTo (ocf::heartbeat:MailTo): Started cl1_lb1 <br> Master/Slave Set: msPostgresql [pgsql]<br> Masters: [ cl1_lb1 ]<br> Slaves: [ cl2_lb1 ]<br><br>Node Attributes:<br>* Node cl1_lb1:<br> + master-pgsql : 1000 <br> + pgsql-data-status : LATEST <br></span> + pgsql-master-baseline : 00000008BE000000<span class=""><br> + pgsql-status : PRI <br>* Node cl2_lb1:<br> + master-pgsql : 100 <br> + pgsql-data-status : STREAMING|SYNC<br> + pgsql-status : HS:sync <br><br>Migration summary:<br>* Node cl2_lb1: <br>* Node cl1_lb1: <br>cl1_lb1:~ #<br><br></span></div>###### - I then did a init 0 on the master node, cl1_lb1<br><br>cl1_lb1:~ # init 0<br>cl1_lb1:~ # <br>Connection closed by foreign host.<br><br>Disconnected from remote host(cl1_lb1) at 07:36:18.<br><br>Type 
`help' to learn how to use Xshell prompt.<br>[c:\~]$ <br><br></div>###### - This was ok as the slave took over, became master<span class=""><br><br>cl2_lb1:~ # crm_mon -1 -Af<br></span>Last updated: Tue Mar 17 07:35:04 2015<br>Last change: Tue Mar 17 07:34:29 2015 by root via crm_attribute on cl2_lb1<span class=""><br>Stack: classic openais (with plugin)<br>Current DC: cl2_lb1 - partition WITHOUT quorum<br>Version: 1.1.9-2db99f1<br>2 Nodes configured, 2 expected votes<br>6 Resources configured.<br><br><br>Online: [ cl2_lb1 ]<br>OFFLINE: [ cl1_lb1 ]<br><br></span> Resource Group: master-group<br> vip-master (ocf::heartbeat:IPaddr2): Started cl2_lb1 <br> vip-rep (ocf::heartbeat:IPaddr2): Started cl2_lb1 <br> CBC_instance (ocf::heartbeat:cbc): Started cl2_lb1 <br> failover_MailTo (ocf::heartbeat:MailTo): Started cl2_lb1 <br> Master/Slave Set: msPostgresql [pgsql]<br> Masters: [ cl2_lb1 ]<span class=""><br> Stopped: [ pgsql:1 ]<br><br>Node Attributes:<br>* Node cl2_lb1:<br></span><span class=""> + master-pgsql : 1000 <br> + pgsql-data-status : LATEST <br></span> + pgsql-master-baseline : 00000008C2000090<br> + pgsql-status : PRI <br><span class=""><br>Migration summary:<br>* Node cl2_lb1: <br>cl2_lb1:~ #<br><br></span></div>And the logs from Postgres and Corosync are attached<br><br></div>###### - I then restarted the original Master cl1_lb1 and started Corosync manually<br><br></div>Once the original Master cl1_lb1 was up and Corosync running, the status below happened, notice no VIPs and Postgres<br><br></div>###### - Still working below<span class=""><br><br>cl2_lb1:~ # crm_mon -1 -Af<br></span>Last updated: Tue Mar 17 07:36:55 2015<br>Last change: Tue Mar 17 07:34:29 2015 by root via crm_attribute on cl2_lb1<span class=""><br>Stack: classic openais (with plugin)<br>Current DC: cl2_lb1 - partition WITHOUT quorum<br>Version: 1.1.9-2db99f1<br>2 Nodes configured, 2 expected votes<br>6 Resources configured.<br><br><br>Online: [ cl2_lb1 ]<br>OFFLINE: [ cl1_lb1 
]<br><br></span> Resource Group: master-group<br> vip-master (ocf::heartbeat:IPaddr2): Started cl2_lb1 <br> vip-rep (ocf::heartbeat:IPaddr2): Started cl2_lb1 <br> CBC_instance (ocf::heartbeat:cbc): Started cl2_lb1 <br> failover_MailTo (ocf::heartbeat:MailTo): Started cl2_lb1 <br> Master/Slave Set: msPostgresql [pgsql]<br> Masters: [ cl2_lb1 ]<span class=""><br> Stopped: [ pgsql:1 ]<br><br>Node Attributes:<br>* Node cl2_lb1:<br></span><span class=""> + master-pgsql : 1000 <br> + pgsql-data-status : LATEST <br></span> + pgsql-master-baseline : 00000008C2000090<br> + pgsql-status : PRI <br><span class=""><br>Migration summary:<br>* Node cl2_lb1: <br><br><br></span></div>###### - After original master is up and Corosync running on cl1_lb1<br><div><span class=""><br>cl2_lb1:~ # crm_mon -1 -Af<br></span>Last updated: Tue Mar 17 07:37:47 2015<br>Last change: Tue Mar 17 07:37:21 2015 by root via crm_attribute on cl1_lb1<span class=""><br>Stack: classic openais (with plugin)<br>Current DC: cl2_lb1 - partition with quorum<br>Version: 1.1.9-2db99f1<br>2 Nodes configured, 2 expected votes<br>6 Resources configured.<br><br><br>Online: [ cl1_lb1 cl2_lb1 ]<br><br><br></span><span class="">Node Attributes:<br>* Node cl1_lb1:<br></span><div> + master-pgsql : -INFINITY <br> + pgsql-data-status : LATEST <br> + pgsql-status : STOP <br><span class="">* Node cl2_lb1:<br> + master-pgsql : -INFINITY <br> + pgsql-data-status : DISCONNECT<br></span> + pgsql-status : STOP <br><span class=""><br>Migration summary:<br>* Node cl2_lb1: <br></span> pgsql:0: migration-threshold=1 fail-count=2 last-failure='Tue Mar 17 07:37:26 2015'<br>* Node cl1_lb1: <br> pgsql:0: migration-threshold=1 fail-count=2 last-failure='Tue Mar 17 07:37:26 2015'<br><br>Failed actions:<br> pgsql_monitor_4000 (node=cl2_lb1, call=735, rc=7, status=complete): not running<br> pgsql_monitor_4000 (node=cl1_lb1, call=42, rc=7, status=complete): not running<br>cl2_lb1:~ # <br><br></div><div>##### - No VIPs 
up<br></div><div><br>cl2_lb1:~ # ping 172.28.200.159<br>PING 172.28.200.159 (172.28.200.159) 56(84) bytes of data.<br>From <a href="http://172.28.200.168" target="_blank">172.28.200.168</a>: icmp_seq=1 Destination Host Unreachable<br>From 172.28.200.168 icmp_seq=1 Destination Host Unreachable<br>From 172.28.200.168 icmp_seq=2 Destination Host Unreachable<br>From 172.28.200.168 icmp_seq=3 Destination Host Unreachable<br>^C<br>--- 172.28.200.159 ping statistics ---<br>5 packets transmitted, 0 received, +4 errors, 100% packet loss, time 4024ms<br>, pipe 3<br>cl2_lb1:~ # ping 172.16.0.5<br>PING 172.16.0.5 (172.16.0.5) 56(84) bytes of data.<br>From <a href="http://172.16.0.3" target="_blank">172.16.0.3</a>: icmp_seq=1 Destination Host Unreachable<br>From 172.16.0.3 icmp_seq=1 Destination Host Unreachable<br>From 172.16.0.3 icmp_seq=2 Destination Host Unreachable<br>From 172.16.0.3 icmp_seq=3 Destination Host Unreachable<br>From 172.16.0.3 icmp_seq=5 Destination Host Unreachable<br>From 172.16.0.3 icmp_seq=6 Destination Host Unreachable<br>From 172.16.0.3 icmp_seq=7 Destination Host Unreachable<br>^C<br>--- 172.16.0.5 ping statistics ---<br>8 packets transmitted, 0 received, +7 errors, 100% packet loss, time 7015ms<br>, pipe 3<br>cl2_lb1:~ # <br><div><div><div><div><br><br></div><div>Any ideas please, or is it a case of recovering the original master manually before starting Corosync etc.?<br></div><div>All logs are attached<br><br></div><div>Regards<br><br></div></div></div></div></div></div></div><div class="HOEnZb"><div class="h5"><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Mar 16, 2015 at 11:01 AM, Wynand Jansen van Vuuren <span dir="ltr"><<a href="mailto:esawyja@gmail.com" target="_blank">esawyja@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Thanks for the advice. I have a demo on this now, so I don't want to test this now; I will do so tomorrow
and forward the logs, many thanks!!<br></div><div><div><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Mar 16, 2015 at 10:54 AM, NAKAHIRA Kazutomo <span dir="ltr"><<a href="mailto:nakahira_kazutomo_b1@lab.ntt.co.jp" target="_blank">nakahira_kazutomo_b1@lab.ntt.co.jp</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi,<span><br>
<br>
> do you suggest that I take it out? or should I look at the problem where<br>
> cl2_lb1 is not being promoted?<br>
<br></span>
You should look at the problem where cl2_lb1 is not being promoted.<br>
And I will look into it if you send me the ha-log and PostgreSQL's log.<br>
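For reference, the logs in question would typically be gathered like this (paths taken from the configuration quoted later in this thread plus common defaults; the pg_log location is an assumption, adjust to your install):<br>
<pre># pacemaker/corosync messages (the ha-log); location depends on your syslog setup
grep -E 'pgsql|pengine|crmd' /var/log/messages
# pgsql resource-agent log, per the logfile= parameter in this thread's config
tail -n 200 /var/log/OCF.log
# PostgreSQL server log, under the configured pgdata (assumes logging_collector)
ls /opt/app/pgdata/9.3/pg_log/</pre>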
<br>
Best regards,<br>
Kazutomo NAKAHIRA<div><div><br>
<br>
On 2015/03/16 17:18, Wynand Jansen van Vuuren wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Hi Nakahira,<br>
Thanks so much for the info; this setting was as the wiki page suggested.<br>
Do you suggest that I take it out, or should I look at the problem where<br>
cl2_lb1 is not being promoted?<br>
Regards<br>
<br>
On Mon, Mar 16, 2015 at 10:15 AM, NAKAHIRA Kazutomo <<br>
<a href="mailto:nakahira_kazutomo_b1@lab.ntt.co.jp" target="_blank">nakahira_kazutomo_b1@lab.ntt.<u></u>co.jp</a>> wrote:<br>
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Hi,<br>
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Notice there are no VIPs; it looks like the VIPs depend on some other<br>
</blockquote>
resource<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
to start first?<br>
</blockquote>
<br>
The following constraint means that "master-group" cannot start<br>
without a master of the msPostgresql resource.<br>
<br>
colocation rsc_colocation-1 inf: master-group msPostgresql:Master<br>
<br>
After you power off cl1_lb1, msPostgresql on cl2_lb1 is not promoted,<br>
so no master exists in your cluster.<br>
<br>
This means that "master-group" cannot run anywhere.<br>
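In pcmk syntax, that dependency is the pair of constraint lines below (taken from the configuration quoted later in this thread; the comments are added interpretation):<br>
<pre># master-group may only run on the node where msPostgresql is Master
colocation rsc_colocation-1 inf: master-group msPostgresql:Master
# and it may only start after a successful promote
order rsc_order-1 0: msPostgresql:promote master-group:start symmetrical=false</pre>
So if promotion never happens on the surviving node, the whole group, VIPs included, stays stopped.<br>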
<br>
Best regards,<br>
Kazutomo NAKAHIRA<br>
<br>
<br>
On 2015/03/16 16:48, Wynand Jansen van Vuuren wrote:<br>
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Hi<br>
When I start out, cl1_lb1 (Cluster 1 load balancer 1) is the master, as<br>
below:<br>
cl1_lb1:~ # crm_mon -1 -Af<br>
Last updated: Mon Mar 16 09:44:44 2015<br>
Last change: Mon Mar 16 08:06:26 2015 by root via crm_attribute on cl1_lb1<br>
Stack: classic openais (with plugin)<br>
Current DC: cl2_lb1 - partition with quorum<br>
Version: 1.1.9-2db99f1<br>
2 Nodes configured, 2 expected votes<br>
6 Resources configured.<br>
<br>
<br>
Online: [ cl1_lb1 cl2_lb1 ]<br>
<br>
Resource Group: master-group<br>
vip-master (ocf::heartbeat:IPaddr2): Started cl1_lb1<br>
vip-rep (ocf::heartbeat:IPaddr2): Started cl1_lb1<br>
CBC_instance (ocf::heartbeat:cbc): Started cl1_lb1<br>
failover_MailTo (ocf::heartbeat:MailTo): Started cl1_lb1<br>
Master/Slave Set: msPostgresql [pgsql]<br>
Masters: [ cl1_lb1 ]<br>
Slaves: [ cl2_lb1 ]<br>
<br>
Node Attributes:<br>
* Node cl1_lb1:<br>
+ master-pgsql : 1000<br>
+ pgsql-data-status : LATEST<br>
+ pgsql-master-baseline : 00000008B90061F0<br>
+ pgsql-status : PRI<br>
* Node cl2_lb1:<br>
+ master-pgsql : 100<br>
+ pgsql-data-status : STREAMING|SYNC<br>
+ pgsql-status : HS:sync<br>
<br>
Migration summary:<br>
* Node cl2_lb1:<br>
* Node cl1_lb1:<br>
cl1_lb1:~ #<br>
<br>
If I then do a power off on cl1_lb1 (master), Postgres moves to cl2_lb1<br>
(Cluster 2 load balancer 1), but the VIP-MASTER and VIP-REP are not<br>
pingable<br>
from the NEW master (cl2_lb1); it stays like this below:<br>
cl2_lb1:~ # crm_mon -1 -Af<br>
Last updated: Mon Mar 16 07:32:07 2015<br>
Last change: Mon Mar 16 07:28:53 2015 by root via crm_attribute on cl1_lb1<br>
Stack: classic openais (with plugin)<br>
Current DC: cl2_lb1 - partition WITHOUT quorum<br>
Version: 1.1.9-2db99f1<br>
2 Nodes configured, 2 expected votes<br>
6 Resources configured.<br>
<br>
<br>
Online: [ cl2_lb1 ]<br>
OFFLINE: [ cl1_lb1 ]<br>
<br>
Master/Slave Set: msPostgresql [pgsql]<br>
Slaves: [ cl2_lb1 ]<br>
Stopped: [ pgsql:1 ]<br>
<br>
Node Attributes:<br>
* Node cl2_lb1:<br>
+ master-pgsql : -INFINITY<br>
+ pgsql-data-status : DISCONNECT<br>
+ pgsql-status : HS:alone<br>
<br>
Migration summary:<br>
* Node cl2_lb1:<br>
cl2_lb1:~ #<br>
<br>
Notice there are no VIPs; it looks like the VIPs depend on some other<br>
resource<br>
to start first?<br>
Thanks for the reply!<br>
<br>
<br>
On Mon, Mar 16, 2015 at 9:42 AM, NAKAHIRA Kazutomo <<br>
<a href="mailto:nakahira_kazutomo_b1@lab.ntt.co.jp" target="_blank">nakahira_kazutomo_b1@lab.ntt.<u></u>co.jp</a>> wrote:<br>
<br>
Hi,<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<br>
fine, cl2_lb1 takes over and acts as a slave, but the VIPs do not come<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<br>
</blockquote>
<br>
cl2_lb1 acts as a slave? Is it not a master?<br>
The VIPs come up together with the master of the msPostgresql resource.<br>
<br>
If the promote action failed on cl2_lb1, then<br>
please send the ha-log and PostgreSQL's log.<br>
<br>
Best regards,<br>
Kazutomo NAKAHIRA<br>
<br>
<br>
On 2015/03/16 16:09, Wynand Jansen van Vuuren wrote:<br>
<br>
Hi all,<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<br>
I have 2 nodes with 2 interfaces each. ETH0 is used for an application,<br>
CBC, that writes to the Postgres DB on the VIP-MASTER 172.28.200.159;<br>
ETH1 is used for the Corosync configuration and VIP-REP. Everything works,<br>
but if the master, currently cl1_lb1, has a catastrophic failure such as a<br>
power-down, the VIPs do not start on the slave. The Postgres part works<br>
fine: cl2_lb1 takes over and acts as a slave, but the VIPs do not come up.<br>
If I test it manually, i.e. kill the application 3 times on the master,<br>
the switchover is smooth; the same if I kill Postgres on the master. But<br>
when there is a power failure on the master, the VIPs stay down. If I then<br>
delete the attributes pgsql-data-status="LATEST" and<br>
pgsql-data-status="STREAMING|SYNC" on the slave after powering off the<br>
master and restart everything, then the VIPs come up on the slave. Any<br>
ideas please?<br>
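For reference, that manual reset can be expressed with crm_attribute; a hypothetical sketch (node name from this setup; attribute handling may differ by pgsql RA version, so verify before running):<br>
<pre># delete the stale replication-status attribute recorded on the slave
crm_attribute -l forever -N cl2_lb1 -n pgsql-data-status -D
# then clear the recorded failures so the resources are re-probed
crm resource cleanup msPostgresql</pre>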
I'm using this setup<br>
<a href="http://clusterlabs.org/wiki/PgSQL_Replicated_Cluster" target="_blank">http://clusterlabs.org/wiki/<u></u>PgSQL_Replicated_Cluster</a><br>
<br>
With this configuration below<br>
node cl1_lb1 \<br>
attributes pgsql-data-status="LATEST"<br>
node cl2_lb1 \<br>
attributes pgsql-data-status="STREAMING|<u></u>SYNC"<br>
primitive CBC_instance ocf:heartbeat:cbc \<br>
op monitor interval="60s" timeout="60s" on-fail="restart" \<br>
op start interval="0s" timeout="60s" on-fail="restart" \<br>
meta target-role="Started" migration-threshold="3"<br>
failure-timeout="60s"<br>
primitive failover_MailTo ocf:heartbeat:MailTo \<br>
params email="<a href="mailto:wynandj@rorotika.com" target="_blank">wynandj@rorotika.com</a>" subject="Cluster Status<br>
change<br>
- " \<br>
op monitor interval="10" timeout="10" dept="0"<br>
primitive pgsql ocf:heartbeat:pgsql \<br>
params pgctl="/opt/app/PostgreSQL/9.<u></u>3/bin/pg_ctl"<br>
psql="/opt/app/PostgreSQL/9.3/<u></u>bin/psql"<br>
config="/opt/app/pgdata/9.3/<u></u>postgresql.conf" pgdba="postgres"<br>
pgdata="/opt/app/pgdata/9.3/" start_opt="-p 5432" rep_mode="sync"<br>
node_list="cl1_lb1 cl2_lb1" restore_command="cp /pgtablespace/archive/%f<br>
%p" primary_conninfo_opt="<u></u>keepalives_idle=60 keepalives_interval=5<br>
keepalives_count=5" master_ip="172.16.0.5" restart_on_promote="false"<br>
logfile="/var/log/OCF.log" \<br>
op start interval="0s" timeout="60s" on-fail="restart" \<br>
op monitor interval="4s" timeout="60s" on-fail="restart" \<br>
op monitor interval="3s" role="Master" timeout="60s"<br>
on-fail="restart" \<br>
op promote interval="0s" timeout="60s" on-fail="restart" \<br>
op demote interval="0s" timeout="60s" on-fail="stop" \<br>
op stop interval="0s" timeout="60s" on-fail="block" \<br>
op notify interval="0s" timeout="60s"<br>
primitive vip-master ocf:heartbeat:IPaddr2 \<br>
params ip="172.28.200.159" nic="eth0" iflabel="CBC_VIP"<br>
cidr_netmask="24" \<br>
op start interval="0s" timeout="60s" on-fail="restart" \<br>
op monitor interval="10s" timeout="60s" on-fail="restart" \<br>
op stop interval="0s" timeout="60s" on-fail="block" \<br>
meta target-role="Started"<br>
primitive vip-rep ocf:heartbeat:IPaddr2 \<br>
params ip="172.16.0.5" nic="eth1" iflabel="REP_VIP"<br>
cidr_netmask="24" \<br>
meta migration-threshold="0" target-role="Started" \<br>
op start interval="0s" timeout="60s" on-fail="stop" \<br>
op monitor interval="10s" timeout="60s" on-fail="restart" \<br>
op stop interval="0s" timeout="60s" on-fail="restart"<br>
group master-group vip-master vip-rep CBC_instance failover_MailTo<br>
ms msPostgresql pgsql \<br>
meta master-max="1" master-node-max="1" clone-max="2"<br>
clone-node-max="1" notify="true"<br>
colocation rsc_colocation-1 inf: master-group msPostgresql:Master<br>
order rsc_order-1 0: msPostgresql:promote master-group:start<br>
symmetrical=false<br>
order rsc_order-2 0: msPostgresql:demote master-group:stop<br>
symmetrical=false<br>
property $id="cib-bootstrap-options" \<br>
dc-version="1.1.9-2db99f1" \<br>
cluster-infrastructure="<u></u>classic openais (with plugin)" \<br>
expected-quorum-votes="2" \<br>
no-quorum-policy="ignore" \<br>
stonith-enabled="false" \<br>
cluster-recheck-interval="<u></u>1min" \<br>
crmd-transition-delay="0s" \<br>
last-lrm-refresh="1426485983"<br>
rsc_defaults $id="rsc-options" \<br>
resource-stickiness="INFINITY" \<br>
migration-threshold="1"<br>
#vim:set syntax=pcmk<br>
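One detail in the defaults above matters for this failover behavior: a single failed operation bans the resource from that node until a manual cleanup (same lines as above, with comments added as interpretation):<br>
<pre># resource-stickiness=INFINITY: never move a running resource voluntarily
# migration-threshold=1: one failed op bans the resource from that node
#   until a manual "crm resource cleanup"
rsc_defaults $id="rsc-options" \
        resource-stickiness="INFINITY" \
        migration-threshold="1"</pre>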
<br>
Any ideas please, I'm lost......<br>
<br>
<br>
<br>
______________________________<u></u>_________________<br>
Users mailing list: <a href="mailto:Users@clusterlabs.org" target="_blank">Users@clusterlabs.org</a><br>
<a href="http://clusterlabs.org/mailman/listinfo/users" target="_blank">http://clusterlabs.org/<u></u>mailman/listinfo/users</a><br>
<br>
Project Home: <a href="http://www.clusterlabs.org" target="_blank">http://www.clusterlabs.org</a><br>
Getting started: <a href="http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf" target="_blank">http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf</a><br>
Bugs: <a href="http://bugs.clusterlabs.org" target="_blank">http://bugs.clusterlabs.org</a><br>
<br>
<br>
<br>
</blockquote>
<br>
<br>
<br>
</blockquote>
<br>
<br>
<br>
<br>
</blockquote>
<br>
--<br>
NTT Open Source Software Center<br>
Kazutomo NAKAHIRA<br>
TEL: 03-5860-5135 FAX: 03-5463-6490<br>
Mail: <a href="mailto:nakahira_kazutomo_b1@lab.ntt.co.jp" target="_blank">nakahira_kazutomo_b1@lab.ntt.<u></u>co.jp</a><br>
<br>
<br>
<br>
<br>
</blockquote>
<br>
<br>
<br>
<br>
</blockquote>
<br>
<br>
-- <br>
NTT Open Source Software Center<br>
Kazutomo NAKAHIRA<br>
TEL: 03-5860-5135 FAX: 03-5463-6490<br>
Mail: <a href="mailto:nakahira_kazutomo_b1@lab.ntt.co.jp" target="_blank">nakahira_kazutomo_b1@lab.ntt.<u></u>co.jp</a><br>
<br>
<br>
</div></div></blockquote></div><br></div>
</div></div></blockquote></div><br></div>
</div></div></blockquote></div><br></div>