[Pacemaker] Pacemaker is not automatically mounting the DRBD partitions

Cristiane França cristianedefranca at gmail.com
Fri Feb 15 00:41:14 UTC 2013


Hi Emmanuel,

Thank you very much!
I changed my pacemaker config as you suggested and the problem was solved.
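
In case it helps anyone searching the archives: the change was to point
drbd_resource at the resource names from /etc/drbd.d/ instead of the
device names, along these lines (the resource names here are placeholders):

primitive drbd_home ocf:linbit:drbd \
        params drbd_resource="home" \
        op monitor interval="15s"
primitive drbd_sistema ocf:linbit:drbd \
        params drbd_resource="sistema" \
        op monitor interval="15s"
primitive drbd_database ocf:linbit:drbd \
        params drbd_resource="database" \
        op monitor interval="15s"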

Thanks.
Cristiane


On Thu, Feb 14, 2013 at 4:38 PM, emmanuel segura <emi2fast at gmail.com> wrote:

> Hello Cristiane
>
> You need to change your pacemaker config (the drbd primitives) like this:
>
> example:
>
>
> primitive drbd_home ocf:linbit:drbd \
>         params drbd_resource="home" \
>         op monitor interval="15s"
>
> In the drbd_resource parameter, put the name of your DRBD resource as it
> is defined in your DRBD configuration, not the name of the device.
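>
> For example, if /etc/drbd.d/home.res defines a resource like this (the
> disk and the addresses below are just placeholders):
>
> resource home {
>         device    /dev/drbd1;
>         disk      /dev/sda5;
>         meta-disk internal;
>         on primario {
>                 address 192.168.0.1:7789;
>         }
>         on secundario {
>                 address 192.168.0.2:7789;
>         }
> }
>
> then the matching primitive needs drbd_resource="home", while the
> Filesystem primitive keeps using device="/dev/drbd1".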
>
> Thanks
>
>
> 2013/2/14 Cristiane França <cristianedefranca at gmail.com>
>
>> Hi,
>> I configured the resources with the option is-managed="true".
>>
>> crm(live)configure# edit ms_drbd_home
>> ms ms_drbd_home drbd_home \
>>         meta is-managed="true" master-max="1" master-node-max="1" \
>>         clone-max="2" clone-node-max="1" notify="true"
>>
>>
>>
>> But the problem remains:
>>
>>
>>  Master/Slave Set: ms_drbd_home [drbd_home]
>>      drbd_home:1 (ocf::linbit:drbd): Slave primario (unmanaged) FAILED
>>      Stopped: [ drbd_home:0 ]
>>  Master/Slave Set: ms_drbd_sistema [drbd_sistema]
>>      drbd_sistema:0 (ocf::linbit:drbd): Slave primario (unmanaged) FAILED
>>      Stopped: [ drbd_sistema:1 ]
>>  Master/Slave Set: ms_drbd_database [drbd_database]
>>      drbd_database:0 (ocf::linbit:drbd): Slave primario (unmanaged) FAILED
>>      Stopped: [ drbd_database:1 ]
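>>
>> If I understand the output correctly, these are failed stop operations,
>> and with stonith-enabled="false" Pacemaker keeps such resources
>> unmanaged regardless of is-managed until the failures are cleaned up,
>> for example with:
>>
>> crm resource cleanup ms_drbd_home
>> crm resource cleanup ms_drbd_sistema
>> crm resource cleanup ms_drbd_database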
>>
>>
>> regards,
>>
>>
>>
>> On Thu, Feb 14, 2013 at 11:21 AM, emmanuel segura <emi2fast at gmail.com>wrote:
>>
>>> Hello Cristiane
>>>
>>> I think your pacemaker config doesn't reference the resources defined in
>>> your drbd config.
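>>>
>>> rc=5 in the failed actions is OCF_ERR_INSTALLED ("not installed"); for
>>> ocf:linbit:drbd that usually means it cannot find a DRBD resource with
>>> the name given in drbd_resource. You can check which names DRBD
>>> actually knows about with something like:
>>>
>>> drbdadm dump all | grep '^resource'
>>> cat /proc/drbd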
>>>
>>>  2013/2/14 Cristiane França <cristianedefranca at gmail.com>
>>>
>>>> Hello,
>>>> I installed Pacemaker (1.1.7-6) and DRBD (8.4.2-2) on my CentOS 6.3
>>>> server (kernel 2.6.32-279.19.1, 64-bit).
>>>> I'm having the following problem:
>>>> Pacemaker is not automatically mounting the DRBD partitions or
>>>> deciding which machine is the primary.
>>>> Where do I configure it to mount the partitions?
>>>>
>>>> my server configuration:
>>>>
>>>> node primario
>>>> node secundario
>>>> primitive ClusterIP ocf:heartbeat:IPaddr2 \
>>>>         params ip="192.168.0.110" cidr_netmask="32" \
>>>>         op monitor interval="30s"
>>>> primitive database_fs ocf:heartbeat:Filesystem \
>>>>         params device="/dev/drbd3" directory="/database" fstype="ext4"
>>>> primitive drbd_database ocf:linbit:drbd \
>>>>         params drbd_resource="drbd3" \
>>>>         op monitor interval="15s"
>>>> primitive drbd_home ocf:linbit:drbd \
>>>>         params drbd_resource="drbd1" \
>>>>         op monitor interval="15s"
>>>> primitive drbd_sistema ocf:linbit:drbd \
>>>>         params drbd_resource="drbd2" \
>>>>         op monitor interval="15s"
>>>> primitive home_fs ocf:heartbeat:Filesystem \
>>>>         params device="/dev/drbd1" directory="/home" fstype="ext4"
>>>> primitive sistema_fs ocf:heartbeat:Filesystem \
>>>>         params device="/dev/drbd2" directory="/sistema" fstype="ext4"
>>>> ms ms_drbd_database drbd_database \
>>>>         meta master-max="1" master-node-max="1" clone-max="2" \
>>>>         clone-node-max="1" notify="true"
>>>> ms ms_drbd_home drbd_home \
>>>>         meta master-max="1" master-node-max="1" clone-max="2" \
>>>>         clone-node-max="1" notify="true"
>>>> ms ms_drbd_sistema drbd_sistema \
>>>>         meta master-max="1" master-node-max="1" clone-max="2" \
>>>>         clone-node-max="1" notify="true"
>>>> colocation database_on_drbd inf: database_fs ms_drbd_database:Master
>>>> colocation fs_on_drbd inf: home_fs ms_drbd_home:Master
>>>> colocation sistema_on_drbd inf: sistema_fs ms_drbd_sistema:Master
>>>> order database_after_drbd inf: ms_drbd_database:promote database_fs:start
>>>> order fs_after_drbd inf: ms_drbd_home:promote home_fs:start
>>>> order sistema_after_drbd inf: ms_drbd_sistema:promote sistema_fs:start
>>>> property $id="cib-bootstrap-options" \
>>>>         dc-version="1.1.7-6.el6-148fccfd5985c5590cc601123c6c16e966b85d14" \
>>>>         cluster-infrastructure="openais" \
>>>>         stonith-enabled="false" \
>>>>         no-quorum-policy="ignore" \
>>>>         expected-quorum-votes="2" \
>>>>         last-lrm-refresh="1360756132"
>>>> rsc_defaults $id="rsc-options" \
>>>>         resource-stickiness="100"
>>>>
>>>>
>>>>
>>>>
>>>> ============
>>>> Last updated: Thu Feb 14 10:21:47 2013
>>>> Last change: Thu Feb 14 09:45:16 2013 via cibadmin on primario
>>>> Stack: openais
>>>> Current DC: primario - partition with quorum
>>>> Version: 1.1.7-6.el6-148fccfd5985c5590cc601123c6c16e966b85d14
>>>> 2 Nodes configured, 2 expected votes
>>>> 10 Resources configured.
>>>> ============
>>>>
>>>> Online: [ secundario primario ]
>>>>
>>>>  ClusterIP (ocf::heartbeat:IPaddr2): Started primario
>>>>  Master/Slave Set: ms_drbd_home [drbd_home]
>>>>      drbd_home:0 (ocf::linbit:drbd): Slave secundario (unmanaged) FAILED
>>>>      drbd_home:1 (ocf::linbit:drbd): Slave primario (unmanaged) FAILED
>>>>  Master/Slave Set: ms_drbd_sistema [drbd_sistema]
>>>>      drbd_sistema:0 (ocf::linbit:drbd): Slave primario (unmanaged) FAILED
>>>>      drbd_sistema:1 (ocf::linbit:drbd): Slave secundario (unmanaged) FAILED
>>>>  Master/Slave Set: ms_drbd_database [drbd_database]
>>>>      drbd_database:0 (ocf::linbit:drbd): Slave primario (unmanaged) FAILED
>>>>      drbd_database:1 (ocf::linbit:drbd): Slave secundario (unmanaged) FAILED
>>>>
>>>> Failed actions:
>>>>     drbd_database:0_stop_0 (node=primario, call=23, rc=5, status=complete): not installed
>>>>     drbd_home:1_stop_0 (node=primario, call=8, rc=5, status=complete): not installed
>>>>     drbd_sistema:0_stop_0 (node=primario, call=22, rc=5, status=complete): not installed
>>>>     drbd_home:0_stop_0 (node=secundario, call=18, rc=5, status=complete): not installed
>>>>     drbd_sistema:1_stop_0 (node=secundario, call=20, rc=5, status=complete): not installed
>>>>     drbd_database:1_stop_0 (node=secundario, call=19, rc=5, status=complete): not installed
>>>>
>>>>
>>>>
>>>> I'm sorry for my English.
>>>> Cristiane
>>>>
>>>>
>>>>
>>>
>>>
>>> --
>>> this is my life and I live it as long as God wills
>>>
>>>
>>
>>
>
>
> --
> this is my life and I live it as long as God wills
>
>