[ClusterLabs] Re: Resource clone groups - resource deletion

Ulrich Windl Ulrich.Windl at rz.uni-regensburg.de
Wed May 11 09:28:13 UTC 2016


Hi!

I think the solution is to switch from the sequential dependencies implied by groups to parallel dependencies, split into separate ordering and colocation constraints using resource sets.
In crm shell this is written as "order any inf: first ( then1 then2 )" and "colocation any inf: ( then1 then2 ) first".
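Roughly, a minimal sketch of what I mean (p-first, p-then1 and p-then2 are placeholder resource names, not anything from your configuration):

    # p-then1 and p-then2 both depend on p-first, but not on each other,
    # so a failure of one "then" resource does not stop the other
    order o-parallel inf: p-first ( p-then1 p-then2 )
    colocation c-parallel inf: ( p-then1 p-then2 ) p-first

Resources inside the parentheses form a set with sequential=false, so they start in parallel and only depend on the resource outside the set.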

Would this help? We had a similar problem with NFS exports...

Regards,
Ulrich

>>> Adrian Saul <Adrian.Saul at tpgtelecom.com.au> wrote on 10.05.2016 at 10:06 in
message <4b4a569ea9234f96b9d865cdd4cd085c at TPG-TC2-EXCH05.tpg.local>:

> Hi,
>  I am building a service providing ALUA-based iSCSI targets, using 
> Pacemaker for configuration management and failover of target groups.  This 
> is done with our own custom resource scripts.
> To keep the config clean I have organised resources into groups - as I create 
> resources they are added to the groups, and the groups are either clone or 
> master groups to apply the configuration across the cluster:
> 
> devicegroups -  master/slave group for controlling the ALUA states
> targets - iSCSI targets - clone group
> devices - iSCSI backing devices - clone group
> hostgroups - iSCSI hostgroups and LUN mappings - clone group
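(For illustration, a minimal crm shell sketch of the layout described above; the primitive names are hypothetical and the actual resource agents are omitted:)

    # placeholder primitives; real definitions use site-specific agents
    group g-devicegroups p-alua            # ALUA state control
    ms ms-devicegroups g-devicegroups meta interleave=true
    group g-targets p-target1 p-target2    # iSCSI targets
    clone cl-targets g-targets meta interleave=true
    group g-devices p-dev1 p-dev2          # iSCSI backing devices
    clone cl-devices g-devices meta interleave=true
    group g-hostgroups p-hg1 p-hg2         # hostgroups / LUN mappings
    clone cl-hostgroups g-hostgroups meta interleave=true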
> 
> There are constraints that order the devicegroups group before targets, 
> targets before hostgroups, and devices before hostgroups.  I have enabled 
> interleaving as well.
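(Again for illustration, the ordering described here might look roughly like this in crm shell, using the hypothetical clone names from the sketch above:)

    # devicegroups before targets, targets before hostgroups,
    # devices before hostgroups
    order o-dg-targets inf: ms-devicegroups cl-targets
    order o-targets-hg inf: cl-targets cl-hostgroups
    order o-devices-hg inf: cl-devices cl-hostgroups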
> 
> This works reasonably well, but there are two issues.  The first is that a 
> configuration error in one resource can take down the entire group rather 
> than just the faulty resource - for example, a typo in the attributes of a 
> device resource that points it at a non-existent device can cause it to fail, 
> taking down the devices group and, via the order constraints, the hostgroups too.  
> This turns a small mistake into a big one, as devices go offline unnecessarily.
> 
> The second is that deleting a resource from the group (using pcs 
> resource delete) appears to cause the policy engine to stop the entire group 
> and its dependencies, remove the resource, then start them all again - which is very 
> disruptive to iSCSI traffic.  For example, if I remove a device that is no 
> longer in use by a hostgroup, it triggers a stop of the devices and 
> hostgroups groups, removes the device, then starts them all again.
> 
> Is there a way I can avoid these two behaviours while maintaining this simpler 
> group-level config, or am I simply using groups the wrong way?
> 
> The alternative, I am guessing, would be to make each resource its own clone 
> instead and use finer-grained per-resource constraints rather than 
> wholesale group constraints. That is possible, but it would require more 
> careful configuration management to ensure constraints and resources stay 
> properly associated.
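(For comparison, that per-resource alternative might look like this in crm shell, again with placeholder names; each device gets its own clone and its own constraints, so a failure or a deletion only touches that one device:)

    clone cl-dev1 p-dev1 meta interleave=true
    clone cl-dev2 p-dev2 meta interleave=true
    # each device clone is ordered before the hostgroups independently
    order o-dev1-hg inf: cl-dev1 cl-hostgroups
    order o-dev2-hg inf: cl-dev2 cl-hostgroups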
> 
> Thanks,
> 
>  Adrian
> 
