[ClusterLabs] Antw: Re: Antw: [EXT] Suggestions for multiple NFS mounts as LSB script

Ulrich Windl Ulrich.Windl at rz.uni-regensburg.de
Tue Jun 30 02:33:18 EDT 2020


>>> Tony Stocker <akostocker at gmail.com> wrote on 29.06.2020 at 19:20 in message
<CACLi31UFwKGf8KuyoucbMYXhNP9o2vLiQ8ew28yPKVWDe6v-Xg at mail.gmail.com>:
> On Mon, Jun 29, 2020 at 11:08 AM Ulrich Windl
> <Ulrich.Windl at rz.uni-regensburg.de> wrote:
>>
>> You could construct a script that generates the commands needed, so it
>> would be rather easy to handle.
> 
> True. The initial population wouldn't be that burdensome. I was
> thinking of later when my coworkers have to add/remove mounts. I,
> honestly, don't want to be involved in that any more than I must.
> Currently they just make changes in their script and all is well.

Well, it all depends:
you could have a configuration file with lines like

Action Configuration...

where "Action" describes what to do (e.g. Add/Remove/Keep) and
"Configuration..." describes the details. The script would then create the
needed actions (script commands) and maybe execute them, as in the sketch below.
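
A minimal sketch of that idea, assuming one ocf:heartbeat:Filesystem resource
per NFS mount and pcs as the cluster shell (the file format, resource names,
server, exports and options below are made up for illustration):

  # /etc/nfs-mounts.conf -- one mount per line: Action Name Device Directory Options
  Add     projects  nfssrv:/export/projects  /data/projects  rw,soft
  Add     archive   nfssrv:/export/archive   /data/archive   ro,soft
  Remove  oldstuff  -                        -               -

  #!/bin/sh
  # Print the pcs commands implied by /etc/nfs-mounts.conf; pipe the output
  # through 'sh' only after it has been reviewed.
  CONF=/etc/nfs-mounts.conf
  while read -r action name device directory options; do
      case "$action" in
          ""|\#*) continue ;;                    # skip blank lines and comments
          Add)
              echo pcs resource create "fs_$name" ocf:heartbeat:Filesystem \
                  device="$device" directory="$directory" \
                  fstype=nfs options="$options" --group nfs_mounts ;;
          Remove)
              echo pcs resource delete "fs_$name" ;;
          Keep)
              ;;                                 # already configured, nothing to do
          *)
              echo "unknown action '$action'" >&2 ;;
      esac
  done < "$CONF"

Your coworkers would then only ever touch the configuration file, and the
generated commands can be reviewed before anything hits the cluster.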

You could also (once you have a setup) use a graphical frontend like Hawk
to enable and disable services (adding and removing is a bit more tricky).


> But more than anything I don't want them mucking about with Pacemaker
> commands (which means I would have to do updates) since once they
> break things, I'm the one who would have to fix it and explain how it
> wasn't my fault.
> 
>>
>>
>> Have you considered using automount? It's like fstab, but mounts on demand
>> instead of mounting everything automatically at boot.
> 
> We looked at it a few years ago, but it didn't seem to react too well
> to being used in a file server (https/ftps) role and so we abandoned
> it.

So far the topic here was NFS, though...
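
For plain NFS mounts, on-demand mounting is exactly what autofs provides; a
rough sketch with placeholder server, export and mount-point names:

  # /etc/auto.master -- one indirect map for the NFS data mounts
  /data  /etc/auto.nfs  --timeout=300

  # /etc/auto.nfs -- key  options  server:/export
  projects  -rw,soft  nfssrv:/export/projects
  archive   -ro,soft  nfssrv:/export/archive

  # After 'systemctl reload autofs', /data/projects is mounted on first
  # access and unmounted again after 300 seconds of inactivity.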


> 
>>
>>
>> The most interesting part seems to be the question of how you define (and
>> detect) a failure that will cause a node switch.
> 
> That is a VERY good question! When you have 130+ mounts, how many failures
> is the critical number? If a single one fails, do you suddenly move
> everything to the other node (even though it's just as likely to fail
> there)? Do you just monitor and issue complaints? At the moment
> there's zero checking of this, so until someone complains that they
> can't reach something, we don't know that the mount isn't working
> properly -- so apparently I guess it's not viewed as that critical.

With manual checking you don't need a cluster: just set up both machines and
run one of them.
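
If the answer is "only a few mounts are really critical", one Pacemaker-level
way to express that would be per-resource meta attributes; a hypothetical pcs
sketch (resource names are placeholders):

  # A failure of the critical home filesystem forces a move after one failure,
  # while a less important mount is only restarted locally and never migrates.
  pcs resource meta fs_home     migration-threshold=1
  pcs resource meta fs_scratch  migration-threshold=INFINITY failure-timeout=300s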


> But at the very least, the main home directory for the https/ftps file
> server operations should be operational, or else it's all moot.

Actually, I wrote a monitoring plugin that can detect even hanging NFS mounts
;-) (see attachment)
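
The attached plugin itself is not reproduced here, but the usual trick for
catching a hung NFS mount is to query the mount point with a hard timeout; a
minimal sketch (mount point and timeout are just examples, not the plugin):

  #!/bin/sh
  # Exit 0 if the mount point answers within 5 seconds, 2 if it hangs or fails.
  MOUNTPOINT=${1:-/data/projects}
  if timeout 5 stat -f "$MOUNTPOINT" >/dev/null 2>&1; then
      echo "OK: $MOUNTPOINT responds"
  else
      echo "CRITICAL: $MOUNTPOINT does not respond (hung or not mounted?)"
      exit 2
  fi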

> 
> Is ocf_tester still available? I installed via 'yum' from the High
> Availability repository and don't see it. I also did a 'yum
> whatprovides *bin/ocf-tester' and no package came back. Do I have to
> manually download it from somewhere? If so, could someone provide a link
> to the most up-to-date source?

In SLES (12) it's part of the resource-agents package:
> rpm -qf /usr/sbin/ocf-tester
resource-agents-4.3.018.a7fb5035-3.45.1.x86_64
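
For reference, once the binary is installed, a quick check of the Filesystem
agent with test parameters might look like this (device and directory are
placeholders; note that ocf-tester really starts and stops the resource while
testing):

  ocf-tester -n test_fs \
      -o device=nfssrv:/export/projects \
      -o directory=/data/projects \
      -o fstype=nfs \
      /usr/lib/ocf/resource.d/heartbeat/Filesystem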

Regards,
Ulrich


> 
> Thanks!



-------------- next part --------------
A non-text attachment was scrubbed...
Name: IOTW-NFS.png
Type: image/png
Size: 40868 bytes
Desc: not available
URL: <http://lists.clusterlabs.org/pipermail/users/attachments/20200630/92d2b420/attachment-0001.png>

