[ClusterLabs] Cluster resources migration from CMAN to Pacemaker
Jan Pokorný
jpokorny at redhat.com
Sat Jan 30 03:48:03 CET 2016
On 27/01/16 19:41 +0100, Jan Pokorný wrote:
> On 27/01/16 11:04 -0600, Ken Gaillot wrote:
>> On 01/27/2016 02:34 AM, jaspal singla wrote:
>>> 1) In CMAN, there was meta attribute - autostart=0 (This parameter disables
>>> the start of all services when RGManager starts). Is there any way for such
>>> behavior in Pacemaker?
>
> Please be more careful about the descriptions; autostart=0 specified
> at the given resource group ("service" or "vm" tag) means just not to
> start anything contained in this very one automatically (also upon
> new resources being defined, IIUIC), definitely not "all services".
>
> [...]
>
>> I don't think there's any exact replacement for autostart in pacemaker.
>> Probably the closest is to set target-role=Stopped before stopping the
>> cluster, and set target-role=Started when services are desired to be
>> started.
Besides is-managed=false (as currently used in clufter), I also looked
at outright disabling the "start" action, but that turned out to be
a naive approach, caused by unclear documentation.
Pushing for a bit more clarity (hopefully):
https://github.com/ClusterLabs/pacemaker/pull/905
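For illustration, the target-role approach Ken suggested could look
like this with pcs (the resource name "dummy" is hypothetical, not
from the original configuration):

```shell
# Approximate rgmanager's autostart=0: keep the resource defined
# but prevent the cluster from starting it
pcs resource meta dummy target-role=Stopped

# ...and later, once starting is desired:
pcs resource meta dummy target-role=Started
```

Note this is per-resource (or per-group) state that persists in the
CIB, so it has to be toggled explicitly, unlike rgmanager's one-shot
startup behavior.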
>>> 2) Please put some alternatives to exclusive=0 and __independent_subtree?
>>> what we have in Pacemaker instead of these?
(the exclusive property was discussed in the other subthread; as
a recap, no extra effort is needed to achieve exclusive=0, while
exclusive=1 is currently a show-stopper in clufter, as neither
candidate approach is versatile enough)
> For __independent_subtree, each component must be a separate pacemaker
> resource, and the constraints between them would depend on exactly what
> you were trying to accomplish. The key concepts here are ordering
> constraints, colocation constraints, kind=Mandatory/Optional (for
> ordering constraints), and ordered sets.
Current approach in clufter as of the next branch:
- __independent_subtree=1 -> do nothing special (hardly can be
  improved?)
- __independent_subtree=2 -> for that very resource, set operations
  as follows:
    monitor interval=60s on-fail=ignore
    stop interval=0 on-fail=stop
Groups carrying such resources are not unrolled into primitives plus
constraints, as the above might suggest (also, the default
kind=Mandatory for the underlying order constraints should fit well).
Please holler if this is not sound.
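If a group were unrolled into separate primitives instead (as the
quoted paragraph hints), the __independent_subtree=2 semantics could
alternatively be approximated with constraints; a hypothetical sketch,
with made-up resource names A and B:

```shell
# Start B after A, but kind=Optional means a failure/restart of A
# does not force B to stop and restart with it
pcs constraint order start A then start B kind=Optional

# Still keep both members on the same node, as in the original group
pcs constraint colocation add B with A
```

This trades the group's simplicity for finer-grained control, which is
why clufter sticks with the per-operation on-fail settings above.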
So, when put together with some other changes/fixes, the currently
suggested/informative sequence of pcs commands goes like this:
pcs cluster auth ha1-105.test.com
pcs cluster setup --start --name HA1-105_CLUSTER ha1-105.test.com \
  --consensus 12000 --token 10000 --join 60
sleep 60
pcs cluster cib tmp-cib.xml --config
pcs -f tmp-cib.xml property set stonith-enabled=false
pcs -f tmp-cib.xml \
  resource create RESOURCE-script-FSCheck \
  lsb:../../..//data/Product/HA/bin/FsCheckAgent.py \
  op monitor interval=30s
pcs -f tmp-cib.xml \
  resource create RESOURCE-script-NTW_IF \
  lsb:../../..//data/Product/HA/bin/NtwIFAgent.py \
  op monitor interval=30s
pcs -f tmp-cib.xml \
  resource create RESOURCE-script-CTM_RSYNC \
  lsb:../../..//data/Product/HA/bin/RsyncAgent.py \
  op monitor interval=30s on-fail=ignore stop interval=0 on-fail=stop
pcs -f tmp-cib.xml \
  resource create RESOURCE-script-REPL_IF \
  lsb:../../..//data/Product/HA/bin/ODG_IFAgent.py \
  op monitor interval=30s on-fail=ignore stop interval=0 on-fail=stop
pcs -f tmp-cib.xml \
  resource create RESOURCE-script-ORACLE_REPLICATOR \
  lsb:../../..//data/Product/HA/bin/ODG_ReplicatorAgent.py \
  op monitor interval=30s on-fail=ignore stop interval=0 on-fail=stop
pcs -f tmp-cib.xml \
  resource create RESOURCE-script-CTM_SID \
  lsb:../../..//data/Product/HA/bin/OracleAgent.py \
  op monitor interval=30s
pcs -f tmp-cib.xml \
  resource create RESOURCE-script-CTM_SRV \
  lsb:../../..//data/Product/HA/bin/CtmAgent.py \
  op monitor interval=30s
pcs -f tmp-cib.xml \
  resource create RESOURCE-script-CTM_APACHE \
  lsb:../../..//data/Product/HA/bin/ApacheAgent.py \
  op monitor interval=30s
pcs -f tmp-cib.xml \
  resource create RESOURCE-script-CTM_HEARTBEAT \
  lsb:../../..//data/Product/HA/bin/HeartBeat.py \
  op monitor interval=30s
pcs -f tmp-cib.xml \
  resource create RESOURCE-script-FLASHBACK \
  lsb:../../..//data/Product/HA/bin/FlashBackMonitor.py \
  op monitor interval=30s
pcs -f tmp-cib.xml \
  resource group add SERVICE-ctm_service-GROUP RESOURCE-script-FSCheck \
  RESOURCE-script-NTW_IF RESOURCE-script-CTM_RSYNC \
  RESOURCE-script-REPL_IF RESOURCE-script-ORACLE_REPLICATOR \
  RESOURCE-script-CTM_SID RESOURCE-script-CTM_SRV \
  RESOURCE-script-CTM_APACHE
pcs -f tmp-cib.xml resource \
  meta SERVICE-ctm_service-GROUP is-managed=false
pcs -f tmp-cib.xml \
  resource group add SERVICE-ctm_heartbeat-GROUP \
  RESOURCE-script-CTM_HEARTBEAT
pcs -f tmp-cib.xml resource \
  meta SERVICE-ctm_heartbeat-GROUP migration-threshold=3 \
  failure-timeout=900
pcs -f tmp-cib.xml \
  resource group add SERVICE-ctm_monitoring-GROUP \
  RESOURCE-script-FLASHBACK
pcs -f tmp-cib.xml resource \
  meta SERVICE-ctm_monitoring-GROUP migration-threshold=3 \
  failure-timeout=900
pcs cluster cib-push tmp-cib.xml --config
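Not part of the sequence above, but as a possible extra safety step
(my suggestion, not something clufter emits), the offline CIB could be
sanity-checked before pushing:

```shell
# Validate the modified CIB file for configuration errors
# prior to pcs cluster cib-push
crm_verify --xml-file tmp-cib.xml
```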
Any suggestions welcome...
--
Jan (Poki)