[ClusterLabs] Corosync 2.3.6 is available at corosync.org!

Christine Caulfield ccaulfie at redhat.com
Thu Jun 16 12:57:41 UTC 2016


On 16/06/16 13:54, Vladislav Bogdanov wrote:
> 16.06.2016 15:28, Christine Caulfield wrote:
>> On 16/06/16 13:22, Vladislav Bogdanov wrote:
>>> Hi,
>>>
>>> 16.06.2016 14:09, Jan Friesse wrote:
>>>> I am pleased to announce the latest maintenance release of Corosync
>>>> 2.3.6 available immediately from our website at
>>>> http://build.clusterlabs.org/corosync/releases/.
>>> [...]
>>>> Christine Caulfield (9):
>>> [...]
>>>>         Add some more RO keys
>>>
>>> Is there a strong reason to make quorum.wait_for_all read-only?
>>>
>>
>> It's almost a no-op for documentation purposes. corosync has never
>> looked at that value after startup anyway. This just makes sure that an
>> error will be returned if an attempt is made to change it.
> 
> But it does look at it on a config reload, which allows
> wait_for_all_status to change from 0 to 1, though not vice versa. And
> reload does not honour "ro" - I thought it did. That's fine.
> IIUC, even after this change everything still works as I expect
> (I actually had not looked at that part of the code before):
> 

It doesn't .. or if it does, it's a bug! There should be no way to
change wait_for_all once a node is booted. Doing so threatens quorum.

Chrissie


> Setting wait_for_all to 0 and two_node to 1 in the config (neither was
> set at all prior to that) and then reloading leaves
> wait_for_all_status=0 and the NODE_FLAGS_WFASTATUS bit unset in flags.
> But setting wait_for_all to 1 after that (followed by another reload)
> sets wait_for_all_status=1 and the NODE_FLAGS_WFASTATUS bit.
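[The reload/inspect cycle described above can be sketched from the shell. This is an illustration only, assuming the stock corosync 2.x tools on a running node; the exact cmap key name under the runtime.votequorum namespace may differ between versions:]

```
# after editing the quorum section of corosync.conf, reload it
corosync-cfgtool -R

# inspect the resulting quorum state; "WaitForAll" should appear in
# the Flags line once wait_for_all_status has been set to 1
corosync-quorumtool -s
corosync-cmapctl runtime.votequorum
```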
> 
> Great, thank you!
> 
> Vladislav
> 
>>
>> Chrissie
>>
>>> In one of our products I use the following (fully automated) steps
>>> to migrate from a one-node to a two-node setup:
>>>
>>> == Mark the second node "being joined"
>>> * set quorum.wait_for_all to 0 so the cluster keeps functioning if a
>>> node is rebooted or loses power
>>> * set quorum.two_node to 1
>>> * add the second node to corosync.conf
>>> * reload corosync on the first node
>>> * configure fencing in pacemaker (for both nodes)
>>> * copy corosync.{key,conf} to the second node
>>> * enable/start corosync on the second node
>>> * set quorum.wait_for_all to 1
>>> * copy corosync.conf to the second node again
>>> * reload corosync on both nodes
>>> == Only at this point mark the second node "joined"
>>> * enable/start pacemaker on the second node
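[During the join window in the steps above, the quorum section of corosync.conf would look roughly like this. A sketch, not a drop-in configuration:]

```
quorum {
    provider: corosync_votequorum
    # temporary state while the second node joins:
    two_node: 1
    wait_for_all: 0    # switched back to 1 once the second node is up
}
```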
>>>
>>> I realize that all of this is a little paranoid, but it is handy
>>> when you want to guard against problems you are not yet aware of.
>>>
>>> Best regards,
>>> Vladislav
>>>
>>>
>>> _______________________________________________
>>> Users mailing list: Users at clusterlabs.org
>>> http://clusterlabs.org/mailman/listinfo/users
>>>
>>> Project Home: http://www.clusterlabs.org
>>> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
>>> Bugs: http://bugs.clusterlabs.org
>>
> 
