Thanks Serge, I have changed the resource agents as you suggested and everything appears to be working fine. Example below is from 'crm configure show'. Still looking for advice on the other items. Anyone?<div>
<br></div><div><div>primitive PGSQL_DEPOT ocf:heartbeat:pgsql \</div><div> params pgdata="/DB_DEPOT/depot/dbdata/data/" pgport="5433" pgdba="depot" \</div><div> op start interval="0" timeout="120" \</div>
<div> op stop interval="0" timeout="120"</div></div><div><br></div><div><div class="gmail_quote">On Fri, Feb 11, 2011 at 2:39 PM, Serge Dubrouski <span dir="ltr"><<a href="mailto:sergeyfd@gmail.com">sergeyfd@gmail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;">I won't talk about the other parts, but the approach to the pgsql configuration is<br>
incorrect. You shouldn't create a new RA for each of your instances,<br>
as it seems you are trying to do:<br>
<div class="im"><br>
primitive PGSQL_DEPOT ocf:heartbeat:pgsql.depot \<br>
<br>
</div>Instead, you should use a different set of parameters for each of<br>
your instances:<br>
<br>
primitive PGSQL_DEPOT ocf:heartbeat:pgsql \<br>
params pgdata='/var/lib/pgsql/depot' \<br>
pghost='depot_vip' \<br>
pgdb='depot'<br>
<br>
and so on.<br>
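For example, the other instances could reuse the same RA with their own<br>
parameter sets (the paths, VIP name and port here are only illustrative):<br>
<br>
primitive PGSQL_AUDIT ocf:heartbeat:pgsql \<br>
params pgdata='/var/lib/pgsql/audit' \<br>
pghost='audit_vip' \<br>
pgdb='audit' pgport='5434'<br>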
<br>
Check out the How-To page for pgsql on the clusterlabs web site.<br>
<div><div></div><div class="h5"><br>
<br>
<br>
<br>
On Thu, Feb 10, 2011 at 6:27 PM, David Morton <<a href="mailto:davidmorton78@gmail.com">davidmorton78@gmail.com</a>> wrote:<br>
> Afternoon all,<br>
> We're cutting over from OpenSUSE with plain Heartbeat on ext3 (two-node<br>
> active/passive) to SLES with Pacemaker / Corosync and OCFS2, in a split-role<br>
> active/passive configuration (three databases, two on one server and<br>
> one on the other, each able to fail over to the other node).<br>
> As this is my first experience with the new Pacemaker / Corosync stack and<br>
> OCFS2 I would like to get the configuration validated by more experienced<br>
> users to ensure there will be no big issues.<br>
> Also I have some related queries:<br>
> 1) On a latest-generation x86 IBM server, what is the best / most appropriate STONITH<br>
> resource to use for control via the built-in IMM interface?<br>
> 2) Is there an OCF-compliant resource agent for JavaDB / Derby? Currently<br>
> I am using a nasty init script for its start / stop. I can't seem to find one<br>
> anywhere.<br>
> 3) Does the use of a group command negate the use of a colocation command ?<br>
> 4) Do you have to be very careful with the timeout values relative to<br>
> things like SAN multipath failover times, bonded interface convergence<br>
> times and OCFS2 / filesystem related timeouts? i.e. can Pacemaker end up<br>
> chasing its tail, where a small blip in connectivity turns into a major<br>
> outage?<br>
> Also an unrelated query which I'm sure somebody will know the answer to:<br>
> 5) Is the use of CLVM mandatory for the OCFS2 filesystem, or is it simply<br>
> used if you wish to use logical volumes with online resizing capability ?<br>
> 6) Is there danger in having a SAN disk visible to two servers at once (with<br>
> a non-clustered filesystem such as ext3) but only ever mounted on one? This<br>
> is the scenario we will be using, and ext3 gives performance gains over OCFS2,<br>
> so if there is no danger it would be preferable to use.<br>
> Current config is as per the below for review:<br>
> node company-prod-db-001<br>
> node company-prod-db-002<br>
> primitive DERBYDB lsb:derby<br>
> primitive FS_DB_DEPOT ocf:heartbeat:Filesystem \<br>
> params device="-LDB_DEPOT" directory="/DB_DEPOT" fstype="ocfs2"<br>
> options="acl" \<br>
> op start interval="0" timeout="60" \<br>
> op stop interval="0" timeout="60"<br>
> primitive FS_DB_ESP_AUDIT ocf:heartbeat:Filesystem \<br>
> params device="-LDB_ESP_AUDIT" directory="/DB_ESP_AUDIT"<br>
> fstype="ocfs2" options="acl" \<br>
> op start interval="0" timeout="60" \<br>
> op stop interval="0" timeout="60"<br>
> primitive FS_LOGS_DEPOT ocf:heartbeat:Filesystem \<br>
> params device="-LLOGS_DEPOT" directory="/LOGS_DEPOT" fstype="ocfs2"<br>
> options="data=writeback,noatime,acl" \<br>
> op start interval="0" timeout="60" \<br>
> op stop interval="0" timeout="60"<br>
> primitive FS_LOGS_ESP_AUDIT ocf:heartbeat:Filesystem \<br>
> params device="-LLOGS_ESP_AUDIT" directory="/LOGS_ESP_AUDIT"<br>
> fstype="ocfs2" options="data=writeback,noatime,acl" \<br>
> op start interval="0" timeout="60" \<br>
> op stop interval="0" timeout="60"<br>
> primitive IP_DEPOT_15 ocf:heartbeat:IPaddr2 \<br>
> params ip="192.168.15.93" cidr_netmask="24" \<br>
> op monitor interval="30s"<br>
> primitive IP_DEPOT_72 ocf:heartbeat:IPaddr2 \<br>
> params ip="192.168.72.93" cidr_netmask="24" \<br>
> op monitor interval="30s"<br>
> primitive IP_ESP_AUDIT_15 ocf:heartbeat:IPaddr2 \<br>
> params ip="192.168.15.92" cidr_netmask="24" \<br>
> op monitor interval="30s"<br>
> primitive IP_ESP_AUDIT_72 ocf:heartbeat:IPaddr2 \<br>
> params ip="192.168.72.92" cidr_netmask="24" \<br>
> op monitor interval="30s"<br>
> primitive PGSQL_AUDIT ocf:heartbeat:pgsql.audit \<br>
> op start interval="0" timeout="120" \<br>
> op stop interval="0" timeout="120"<br>
> primitive PGSQL_DEPOT ocf:heartbeat:pgsql.depot \<br>
> op start interval="0" timeout="120" \<br>
> op stop interval="0" timeout="120"<br>
> primitive PGSQL_ESP ocf:heartbeat:pgsql.esp \<br>
> op start interval="0" timeout="120" \<br>
> op stop interval="0" timeout="120"<br>
> group DEPOT FS_LOGS_DEPOT FS_DB_DEPOT IP_DEPOT_15 IP_DEPOT_72 DERBYDB<br>
> PGSQL_DEPOT<br>
> group ESP_AUDIT FS_LOGS_ESP_AUDIT FS_DB_ESP_AUDIT IP_ESP_AUDIT_15<br>
> IP_ESP_AUDIT_72 PGSQL_AUDIT PGSQL_ESP<br>
> location LOC_DEPOT DEPOT 25: company-prod-db-001<br>
> location LOC_ESP_AUDIT ESP_AUDIT 25: company-prod-db-002<br>
> property $id="cib-bootstrap-options" \<br>
> dc-version="1.1.2-2e096a41a5f9e184a1c1537c82c6da1093698eb5" \<br>
> cluster-infrastructure="openais" \<br>
> expected-quorum-votes="2" \<br>
> no-quorum-policy="ignore" \<br>
> stonith-enabled="false" \<br>
> start-failure-is-fatal="false"<br>
> rsc_defaults $id="rsc-options" \<br>
> resource-stickiness="100"<br>
</div></div>> _______________________________________________<br>
> Pacemaker mailing list: <a href="mailto:Pacemaker@oss.clusterlabs.org">Pacemaker@oss.clusterlabs.org</a><br>
> <a href="http://oss.clusterlabs.org/mailman/listinfo/pacemaker" target="_blank">http://oss.clusterlabs.org/mailman/listinfo/pacemaker</a><br>
><br>
> Project Home: <a href="http://www.clusterlabs.org" target="_blank">http://www.clusterlabs.org</a><br>
> Getting started: <a href="http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf" target="_blank">http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf</a><br>
> Bugs:<br>
> <a href="http://developerbugs.linux-foundation.org/enter_bug.cgi?product=Pacemaker" target="_blank">http://developerbugs.linux-foundation.org/enter_bug.cgi?product=Pacemaker</a><br>
><br>
><br>
<br>
<br>
<br>
--<br>
Serge Dubrouski.<br>
<br>
_______________________________________________<br>
Pacemaker mailing list: <a href="mailto:Pacemaker@oss.clusterlabs.org">Pacemaker@oss.clusterlabs.org</a><br>
<a href="http://oss.clusterlabs.org/mailman/listinfo/pacemaker" target="_blank">http://oss.clusterlabs.org/mailman/listinfo/pacemaker</a><br>
<br>
Project Home: <a href="http://www.clusterlabs.org" target="_blank">http://www.clusterlabs.org</a><br>
Getting started: <a href="http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf" target="_blank">http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf</a><br>
Bugs: <a href="http://developerbugs.linux-foundation.org/enter_bug.cgi?product=Pacemaker" target="_blank">http://developerbugs.linux-foundation.org/enter_bug.cgi?product=Pacemaker</a><br>
</blockquote></div><br></div>