[Pacemaker] [Question and Problem] In a vSphere 5.1 environment, pengine blocks on I/O for a long time when the shared disk fails.

Andrew Beekhof andrew at beekhof.net
Thu May 16 23:24:11 EDT 2013


On 17/05/2013, at 10:27 AM, renayama19661014 at ybb.ne.jp wrote:

> Hi Andrew,
> Hi Vladislav,
> 
> I will test whether this fix is effective for this problem.
> * https://github.com/beekhof/pacemaker/commit/eb6264bf2db395779e65dadf1c626e050a388c59
> 

Doubtful - it just reduces code duplication.
But it would also be a single place to put a deployment-specific patch :)

> Best Regards,
> Hideo Yamauchi.
> 
> --- On Thu, 2013/5/16, Andrew Beekhof <andrew at beekhof.net> wrote:
> 
>> 
>> On 16/05/2013, at 3:49 PM, Vladislav Bogdanov <bubble at hoster-ok.com> wrote:
>> 
>>> 16.05.2013 02:46, Andrew Beekhof wrote:
>>>> 
>>>> On 15/05/2013, at 6:44 PM, Vladislav Bogdanov <bubble at hoster-ok.com> wrote:
>>>> 
>>>>> 15.05.2013 11:18, Andrew Beekhof wrote:
>>>>>> 
>>>>>> On 15/05/2013, at 5:31 PM, Vladislav Bogdanov <bubble at hoster-ok.com> wrote:
>>>>>> 
>>>>>>> 15.05.2013 10:25, Andrew Beekhof wrote:
>>>>>>>> 
>>>>>>>> On 15/05/2013, at 3:50 PM, Vladislav Bogdanov <bubble at hoster-ok.com> wrote:
>>>>>>>> 
>>>>>>>>> 15.05.2013 08:23, Andrew Beekhof wrote:
>>>>>>>>>> 
>>>>>>>>>> On 15/05/2013, at 3:11 PM, renayama19661014 at ybb.ne.jp wrote:
>>>>>>>>>> 
>>>>>>>>>>> Hi Andrew,
>>>>>>>>>>> 
>>>>>>>>>>> Thank you for comments.
>>>>>>>>>>> 
>>>>>>>>>>>>> The guest is located on the shared disk.
>>>>>>>>>>>> 
>>>>>>>>>>>> What is on the shared disk?  The whole OS or app-specific data (i.e. nothing pacemaker needs directly)?
>>>>>>>>>>> 
>>>>>>>>>>> The shared disk holds the whole OS and all the data.
>>>>>>>>>> 
>>>>>>>>>> Oh. I can imagine that being problematic.
>>>>>>>>>> Pacemaker really isn't designed to function without disk access.
>>>>>>>>>> 
>>>>>>>>>> You might be able to get away with it if you turn off saving PE files to disk though.
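>>>>>>>>>> 
>>>>>>>>>> For example (assuming the pe-*-series-max cluster properties
>>>>>>>>>> accept 0 to disable the corresponding series - please verify
>>>>>>>>>> against your Pacemaker version):
>>>>>>>>>> 
>>>>>>>>>>     crm configure property pe-input-series-max=0
>>>>>>>>>>     crm configure property pe-warn-series-max=0
>>>>>>>>>>     crm configure property pe-error-series-max=0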
>>>>>>>>> 
>>>>>>>>> I store CIB and PE files on tmpfs, and sync them to remote storage
>>>>>>>>> (CIFS) with an lsyncd level-1 config (which I can share on request).
>>>>>>>>> It copies critical data like cib.xml, and moves everything else,
>>>>>>>>> symlinking it back to the original place. The same technique may
>>>>>>>>> apply here, but with a local fs instead of CIFS.
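>>>>>>>>> 
>>>>>>>>> In shell terms the effect is roughly the following (paths made up
>>>>>>>>> for illustration):
>>>>>>>>> 
>>>>>>>>>     # critical files: plain copy, the original stays on tmpfs
>>>>>>>>>     cp /dev/shm/pacemaker/cib.xml /mnt/cifs/pacemaker/cib.xml
>>>>>>>>>     # everything else: move to remote storage, leave a symlink behind
>>>>>>>>>     mv /dev/shm/pacemaker/pe-input-42.bz2 /mnt/cifs/pacemaker/
>>>>>>>>>     ln -s /mnt/cifs/pacemaker/pe-input-42.bz2 /dev/shm/pacemaker/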
>>>>>>>>> 
>>>>>>>>> Btw, the following patch is needed for that; otherwise pacemaker
>>>>>>>>> overwrites the remote files instead of creating new ones on tmpfs:
>>>>>>>>> 
>>>>>>>>> --- a/lib/common/xml.c  2011-02-11 11:42:37.000000000 +0100
>>>>>>>>> +++ b/lib/common/xml.c  2011-02-24 15:07:48.541870829 +0100
>>>>>>>>> @@ -529,6 +529,8 @@ write_file(const char *string, const char *filename)
>>>>>>>>>       return -1;
>>>>>>>>>   }
>>>>>>>>> 
>>>>>>>>> +    unlink(filename);
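>>>>>>>>> 
>>>>>>>>> For illustration, a minimal standalone sketch of the pattern the
>>>>>>>>> patch relies on (unlink_and_write is a hypothetical name, not the
>>>>>>>>> actual pacemaker function):
>>>>>>>>> 
>>>>>>>>> #include <stdio.h>
>>>>>>>>> #include <unistd.h>
>>>>>>>>> 
>>>>>>>>> static int
>>>>>>>>> unlink_and_write(const char *string, const char *filename)
>>>>>>>>> {
>>>>>>>>>     FILE *fp;
>>>>>>>>> 
>>>>>>>>>     /* fopen(..., "w") follows an existing symlink and truncates its
>>>>>>>>>      * target (here: the synced copy on remote storage).  Removing
>>>>>>>>>      * the symlink first makes fopen() create a fresh regular file
>>>>>>>>>      * on the local tmpfs instead. */
>>>>>>>>>     unlink(filename);   /* ENOENT is fine: file may not exist yet */
>>>>>>>>> 
>>>>>>>>>     fp = fopen(filename, "w");
>>>>>>>>>     if (fp == NULL) {
>>>>>>>>>         return -1;
>>>>>>>>>     }
>>>>>>>>>     fputs(string, fp);
>>>>>>>>>     fclose(fp);
>>>>>>>>>     return 0;
>>>>>>>>> }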
>>>>>>>> 
>>>>>>>> Seems like it should be safe to include for normal operation.
>>>>>>> 
>>>>>>> Exactly.
>>>>>> 
>>>>>> Small flaw in that logic... write_file() is not used anywhere.
>>>>> 
>>>>> Heh, thanks for spotting this.
>>>>> 
>>>>> I recall write_file() was used by the pengine, and some other function
>>>>> by the CIB. You probably optimized that but forgot to remove the unused
>>>>> function, which is why I was sure the patch was still valid. And I ran
>>>>> tests (a CIFS storage outage simulation) only after the initial patch,
>>>>> not in recent years, which is why I didn't notice the regression - the
>>>>> storage uses pacemaker too ;) .
>>>>> 
>>>>> This should go into write_xml_file() (and probably into other places
>>>>> just before fopen(..., "w"), e.g. the series file).
>>>> 
>>>> I've consolidated the code; however, adding the unlink() would break
>>>> things for anyone intentionally symlinking cib.xml from somewhere else
>>>> (like a git repo).
>>>> So I'm not so sure I should make the unlink() change :(
>>> 
>>> Agree.
>>> I originally made it specific to pengine files.
>>> Which do you prefer: a simple wrapper in xml.c (e.g.
>>> unlink_and_write_xml_file()), or just adding an unlink() call to the
>>> pengine before it calls write_xml_file()?
>> 
>> The last one :)
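>> 
>> I.e., roughly the following (a hedged sketch of the agreed approach, not
>> the actual commit; variable names are illustrative):
>> 
>>     /* in the pengine, just before saving a transition input: break any
>>      * symlink left behind by lsyncd so that write_xml_file() creates a
>>      * fresh file on the tmpfs.  Done only here, so an intentionally
>>      * symlinked cib.xml keeps working. */
>>     unlink(filename);
>>     write_xml_file(xml, filename, TRUE);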




