[Pacemaker] Config Sanity Check - Postgres, OCFS2 etc

Serge Dubrouski sergeyfd at gmail.com
Thu Feb 10 20:39:06 EST 2011


I won't talk about the other parts, but the approach to the pgsql
configuration is incorrect. You shouldn't create a new RA for each of
your instances, as you seem to be doing:

primitive PGSQL_DEPOT ocf:heartbeat:pgsql.depot \

Instead, you should pass a different set of parameters to the same
agent for each of your instances:

primitive PGSQL_DEPOT ocf:heartbeat:pgsql \
      params pgdata="/var/lib/pgsql/depot" \
      pghost="depot_vip" \
      pgdb="depot"

and so on.
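
For instance, the audit database would just be the same agent with its
own parameter values. A minimal sketch, assuming hypothetical pgdata,
pghost and pgdb values analogous to yours (and if two instances can
ever run on the same node, give each its own pgport, matching that
instance's postgresql.conf, so the postmasters don't collide):

primitive PGSQL_AUDIT ocf:heartbeat:pgsql \
      params pgdata="/var/lib/pgsql/audit" \
      pghost="audit_vip" \
      pgdb="audit" \
      pgport="5433"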

Check out the pgsql how-to page on the clusterlabs web site.
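
One more note: your pgsql primitives only define start and stop
operations; without a recurring monitor op Pacemaker never checks that
the database is actually still running. A minimal sketch (the
interval/timeout values are only illustrative):

primitive PGSQL_DEPOT ocf:heartbeat:pgsql \
      params pgdata="/var/lib/pgsql/depot" pghost="depot_vip" pgdb="depot" \
      op monitor interval="30s" timeout="30s"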




On Thu, Feb 10, 2011 at 6:27 PM, David Morton <davidmorton78 at gmail.com> wrote:
> Afternoon all,
> We're cutting over from OpenSUSE with plain Heartbeat on ext3 (two-node
> active/passive) to SLES with Pacemaker / Corosync and OCFS2 in a split-role
> active/passive configuration (three databases, two on one server and one on
> the other, each group able to fail over to the other node).
> As this is my first experience with the new Pacemaker / Corosync stack and
> OCFS2, I would like to get the configuration validated by more experienced
> users to ensure there will be no big issues.
> I also have some related queries:
> 1) On a latest-generation x86 IBM server, what is the best / most
> appropriate STONITH resource to use for control via the built-in IMM
> interface?
> 2) Is there an OCF-compliant resource agent for JavaDB / Derby? Currently
> I am using a nasty init script for its start / stop. I can't seem to find
> one anywhere.
> 3) Does the use of a group command negate the need for a colocation
> command?
> 4) Do you have to be very careful with the timeout values you specify
> relative to things like SAN multipath failover times, bonded interface
> convergence times, and OCFS2 / filesystem-related timeouts? i.e. can
> Pacemaker end up with things chasing their tail, where a small blip in
> connectivity turns into a major outage?
> Also a couple of unrelated queries which I'm sure somebody will know the
> answers to:
> 5) Is the use of CLVM mandatory for the OCFS2 filesystem, or is it simply
> used if you wish to use logical volumes with online resizing capability?
> 6) Is there danger in having a SAN disk visible to two servers at once
> (with a non-clustered filesystem such as ext3) but only ever mounted on
> one? This is the scenario we will be using, and ext3 gives performance
> gains over OCFS2, so if there is no danger it would be preferable to use.
> The current config is below for review:
> node company-prod-db-001
> node company-prod-db-002
> primitive DERBYDB lsb:derby
> primitive FS_DB_DEPOT ocf:heartbeat:Filesystem \
>         params device="-LDB_DEPOT" directory="/DB_DEPOT" fstype="ocfs2" options="acl" \
>         op start interval="0" timeout="60" \
>         op stop interval="0" timeout="60"
> primitive FS_DB_ESP_AUDIT ocf:heartbeat:Filesystem \
>         params device="-LDB_ESP_AUDIT" directory="/DB_ESP_AUDIT" fstype="ocfs2" options="acl" \
>         op start interval="0" timeout="60" \
>         op stop interval="0" timeout="60"
> primitive FS_LOGS_DEPOT ocf:heartbeat:Filesystem \
>         params device="-LLOGS_DEPOT" directory="/LOGS_DEPOT" fstype="ocfs2" options="data=writeback,noatime,acl" \
>         op start interval="0" timeout="60" \
>         op stop interval="0" timeout="60"
> primitive FS_LOGS_ESP_AUDIT ocf:heartbeat:Filesystem \
>         params device="-LLOGS_ESP_AUDIT" directory="/LOGS_ESP_AUDIT" fstype="ocfs2" options="data=writeback,noatime,acl" \
>         op start interval="0" timeout="60" \
>         op stop interval="0" timeout="60"
> primitive IP_DEPOT_15 ocf:heartbeat:IPaddr2 \
>         params ip="192.168.15.93" cidr_netmask="24" \
>         op monitor interval="30s"
> primitive IP_DEPOT_72 ocf:heartbeat:IPaddr2 \
>         params ip="192.168.72.93" cidr_netmask="24" \
>         op monitor interval="30s"
> primitive IP_ESP_AUDIT_15 ocf:heartbeat:IPaddr2 \
>         params ip="192.168.15.92" cidr_netmask="24" \
>         op monitor interval="30s"
> primitive IP_ESP_AUDIT_72 ocf:heartbeat:IPaddr2 \
>         params ip="192.168.72.92" cidr_netmask="24" \
>         op monitor interval="30s"
> primitive PGSQL_AUDIT ocf:heartbeat:pgsql.audit \
>         op start interval="0" timeout="120" \
>         op stop interval="0" timeout="120"
> primitive PGSQL_DEPOT ocf:heartbeat:pgsql.depot \
>         op start interval="0" timeout="120" \
>         op stop interval="0" timeout="120"
> primitive PGSQL_ESP ocf:heartbeat:pgsql.esp \
>         op start interval="0" timeout="120" \
>         op stop interval="0" timeout="120"
> group DEPOT FS_LOGS_DEPOT FS_DB_DEPOT IP_DEPOT_15 IP_DEPOT_72 DERBYDB PGSQL_DEPOT
> group ESP_AUDIT FS_LOGS_ESP_AUDIT FS_DB_ESP_AUDIT IP_ESP_AUDIT_15 IP_ESP_AUDIT_72 PGSQL_AUDIT PGSQL_ESP
> location LOC_DEPOT DEPOT 25: company-prod-db-001
> location LOC_ESP_AUDIT ESP_AUDIT 25: company-prod-db-002
> property $id="cib-bootstrap-options" \
>         dc-version="1.1.2-2e096a41a5f9e184a1c1537c82c6da1093698eb5" \
>         cluster-infrastructure="openais" \
>         expected-quorum-votes="2" \
>         no-quorum-policy="ignore" \
>         stonith-enabled="false" \
>         start-failure-is-fatal="false"
> rsc_defaults $id="rsc-options" \
>         resource-stickiness="100"



-- 
Serge Dubrouski.



