[Pacemaker] Problems with colocation/order with drbd three-node setup

Schmidt, Torsten torsten.schmidt at tecdoc.net
Fri Oct 16 09:49:01 UTC 2009


My first post on this list; it will be quite a long one :)

environment:
OS: RHEL 5.4 x86_64

drbd83.x86_64              8.3.2-6.el5_3
kmod-drbd83.x86_64         8.3.2-6.el5_3
openais.x86_64             0.80.6-8.el5_4.1
heartbeat.x86_64           3.0.0-33.2
resource-agents.x86_64     1.0-31.4
pacemaker.x86_64           1.0.5-4.1
pacemaker-libs.x86_64      1.0.5-4.1


I would like to mention that I first successfully implemented a two-node active/passive setup with Pacemaker, with help from the DRBD User's Guide.

My next attempt was the three-node setup (with one backup server outside of the HA cluster).
It is up and running when I bring it up by hand!
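
Roughly what I do by hand on the active node, for reference (from memory, so treat it as a sketch; resource names match the drbd.conf further below):
-------------------------------------------------------------------------------------------------------------------------
# lower-level resource first
drbdadm up mysqldb
drbdadm primary mysqldb

# then the stacked resource on top of it
drbdadm --stacked up mysqlstack
drbdadm --stacked primary mysqlstack
-------------------------------------------------------------------------------------------------------------------------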

My problems start with the Pacemaker configuration; here I'm lost.

I defined the primitive and master/slave resources (without complaints from crm configure ptest).

After adding the colocation and ordering constraints, ptest complains:
-------------------------------------------------------------------------------------------------------------------------
crm(live)configure# ptest
ptest[15059]: 2009/10/16_11:27:17 WARN: native_color: Resource res.ip.mysql cannot run anywhere
ptest[15059]: 2009/10/16_11:27:17 WARN: native_color: Resource res.drbd.mysqlstack:0 cannot run anywhere
-------------------------------------------------------------------------------------------------------------------------

-------------------------------------------------------------------------------------------------------------------------
crm(live)configure# show
node mysqlHA1.cologne.tecdoc.local
node mysqlHA2.cologne.tecdoc.local
primitive res.drbd.mysqldb ocf:linbit:drbd \
        params drbd_resource="mysqldb"
primitive res.drbd.mysqlstack ocf:linbit:drbd \
        params drbd_resource="mysqlstack"
primitive res.ip.mysql ocf:heartbeat:IPaddr2 \
        params ip="172.30.2.10" nic="eth0" \
        op monitor interval="2s" timeout="0.5s"
ms ms.drbd.mysqldb res.drbd.mysqldb \
        meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true" globally_unique="false"
ms ms.drbd.mysqlstack res.drbd.mysqlstack \
        meta master-max="1" master-node-max="1" clone-max="1" clone-node-max="1" notify="true" globally_unique="false"
colocation co.ms.drbd.mysqlstack_on_ms.drbd.mysqldb inf: ms.drbd.mysqlstack ms.drbd.mysqldb:Master
colocation co.ms.drbd.mysqlstack_on_res.ip.mysql inf: ms.drbd.mysqlstack res.ip.mysql
colocation co.res.ip.mysql_on_ms.drbd.mysqldb_master inf: res.ip.mysql ms.drbd.mysqldb:Master
order o.ip.mysql_before_ms.drbd.mysqlstack inf: res.ip.mysql ms.drbd.mysqlstack:start
order o.ms.drbd.mysqldb_before_ms.drbd.mysqlstack inf: ms.drbd.mysqldb:promote ms.drbd.mysqlstack:start
property $id="cib-bootstrap-options" \
        stonith-enabled="false"
-------------------------------------------------------------------------------------------------------------------------

I have no idea why the cluster IP cannot run anywhere (which of course prevents res.drbd.mysqlstack from starting).
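
In case someone wants to dig into this: I assume the allocation scores would show where the -INFINITY placement comes from. As far as I understand the man pages, something like this should print them (I'm not sure it tells more than the warnings above):
-------------------------------------------------------------------------------------------------------------------------
# allocation scores against the live CIB
ptest -L -s

# or more verbose scheduler output
crm_verify -L -VV
-------------------------------------------------------------------------------------------------------------------------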


If you're interested, here is my /etc/drbd.conf:
-------------------------------------------------------------------------------------------------------------------------
global {
  usage-count no;
}
common {
  syncer {
                rate 10M;
                verify-alg sha1;
        }
  protocol C;
}
resource mysqldb {
  handlers {
    fence-peer          "/usr/lib/drbd/crm-fence-peer.sh";
    after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
    split-brain         "/usr/lib/drbd/notify-split-brain.sh root";
  }
  startup {
    degr-wfc-timeout 60;    # 1 minute
    outdated-wfc-timeout 2;  # 2 seconds.
  }
  disk {
    on-io-error   detach;
  }
  net {
    cram-hmac-alg "sha1";
    shared-secret "secret";
    after-sb-0pri disconnect;
    after-sb-1pri disconnect;
    after-sb-2pri disconnect;
    rr-conflict disconnect;
  }
  syncer {
    rate 20M;
    al-extents 257;
  }
  device    /dev/drbd0;
  disk      /dev/sdb;
  meta-disk internal;
  on mysqlHA1.cologne.tecdoc.local {
    address    10.6.0.127:7788;
  }
  on mysqlHA2.cologne.tecdoc.local {
    address   10.6.0.128:7788;
  }
}
resource mysqlstack {
  protocol A;
  device /dev/drbd10;
  stacked-on-top-of mysqldb {
    address 172.30.2.10:7789;   # cluster IP
  }
  on mysqlHAoffsite.cologne.tecdoc.local {
    disk      /dev/sdb;
    address   172.30.2.78:7789; # public IP of the backup node
    meta-disk internal;
  }
}
-------------------------------------------------------------------------------------------------------------------------
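
For completeness: when testing by hand, I check the state like this (again, just a sketch):
-------------------------------------------------------------------------------------------------------------------------
cat /proc/drbd                        # kernel view of all DRBD devices
drbdadm role mysqldb                  # local/peer role of the lower resource
drbdadm --stacked role mysqlstack     # role of the stacked resource
drbdadm --stacked cstate mysqlstack   # connection state towards the backup node
-------------------------------------------------------------------------------------------------------------------------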

With kind regards,
Torsten Schmidt
System Manager

