[Pacemaker] Pacemaker is not automatically mounting the DRBD partitions

Cristiane França cristianedefranca at gmail.com
Thu Feb 14 14:00:27 UTC 2013


Hello Emmanuel,

My drbd.conf:

global {
        usage-count yes;
}

common {
        syncer { rate 100M; }
        protocol C;
}

resource home {
        meta-disk internal;
        device  /dev/drbd1;
        startup {
                wfc-timeout         0;  ## Infinite!
                degr-wfc-timeout 120;   ## 2 minutes
        }
        disk {
                on-io-error   detach;
        }
        net {
        }
        syncer {
                rate 100M;
        }
        on primario {
                disk   /dev/sdb1;
                address  10.0.0.10:7767;
        }
        on secundario {
                disk   /dev/sdb1;
                address  10.0.0.20:7767;
        }
}

resource sistema {
        meta-disk internal;
        device  /dev/drbd2;
        handlers {
                pri-on-incon-degr "echo o > /proc/sysrq-trigger ; halt -f";
                pri-lost-after-sb "echo o > /proc/sysrq-trigger ; halt -f";
                local-io-error "echo o > /proc/sysrq-trigger ; halt -f";
                outdate-peer "/usr/sbin/drbd-peer-outdater";
        }
        startup {
                degr-wfc-timeout 120;
        }
        disk {
                on-io-error   detach;
        }
        net {
                after-sb-0pri disconnect;
                after-sb-1pri disconnect;
                after-sb-2pri disconnect;
                rr-conflict disconnect;
        }
        syncer {
                rate 100M;
                al-extents 257;
        }
        on primario {
                disk   /dev/sdb2;
                address  10.0.0.10:7768;
        }
        on secundario {
                disk   /dev/sdb2;
                address  10.0.0.20:7768;
        }
}


resource database {
        meta-disk internal;
        device  /dev/drbd3;
        startup {
                wfc-timeout         0;  ## Infinite!
                degr-wfc-timeout 120;   ## 2 minutes
        }
        disk {
                on-io-error   detach;
        }
        net {
        }
        syncer {
                rate 100M;
        }
        on primario {
                disk   /dev/sdb3;
                address  10.0.0.10:7769;
        }
        on secundario {
                disk   /dev/sdb3;
                address  10.0.0.20:7769;
        }
}
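
(As a sanity check outside of Pacemaker, the state of these resources can be
inspected directly with the standard DRBD 8.x tools, using the resource names
defined in the config above, e.g. "home":

        # overall state of all DRBD resources
        cat /proc/drbd

        # role, connection state and disk state of a single resource
        drbdadm role home
        drbdadm cstate home
        drbdadm dstate home
)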



My log (/var/log/cluster/corosync.log):

Feb 14 10:04:27 [10866] primario    pengine:  warning:
common_apply_stickiness:         Forcing ms_drbd_database away from
primario after 1000000 failures (max=1000000)
Feb 14 10:04:27 [10866] primario    pengine:  warning:
common_apply_stickiness:         Forcing ms_drbd_database away from
primario after 1000000 failures (max=1000000)
Feb 14 10:04:27 [10866] primario    pengine:  warning: should_dump_input:
    Ignoring requirement that drbd_home:0_pre_notify_stop_0 comeplete
before ms_drbd_home_confirmed-pre_notify_stop_0: unmanaged failed resources
cannot prevent clone shutdown
Feb 14 10:04:27 [10866] primario    pengine:  warning: should_dump_input:
    Ignoring requirement that drbd_home:1_pre_notify_stop_0 comeplete
before ms_drbd_home_confirmed-pre_notify_stop_0: unmanaged failed resources
cannot prevent clone shutdown
Feb 14 10:04:27 [10866] primario    pengine:  warning: should_dump_input:
    Ignoring requirement that drbd_home:0_stop_0 comeplete before
ms_drbd_home_stopped_0: unmanaged failed resources cannot prevent clone
shutdown
Feb 14 10:04:27 [10866] primario    pengine:  warning: should_dump_input:
    Ignoring requirement that drbd_home:1_stop_0 comeplete before
ms_drbd_home_stopped_0: unmanaged failed resources cannot prevent clone
shutdown
Feb 14 10:04:27 [10866] primario    pengine:  warning: should_dump_input:
    Ignoring requirement that drbd_sistema:0_pre_notify_stop_0 comeplete
before ms_drbd_sistema_confirmed-pre_notify_stop_0: unmanaged failed
resources cannot prevent clone shutdown
Feb 14 10:04:27 [10866] primario    pengine:  warning: should_dump_input:
    Ignoring requirement that drbd_sistema:1_pre_notify_stop_0 comeplete
before ms_drbd_sistema_confirmed-pre_notify_stop_0: unmanaged failed
resources cannot prevent clone shutdown
Feb 14 10:04:27 [10866] primario    pengine:  warning: should_dump_input:
    Ignoring requirement that drbd_sistema:0_stop_0 comeplete before
ms_drbd_sistema_stopped_0: unmanaged failed resources cannot prevent clone
shutdown
Feb 14 10:04:27 [10866] primario    pengine:  warning: should_dump_input:
    Ignoring requirement that drbd_sistema:1_stop_0 comeplete before
ms_drbd_sistema_stopped_0: unmanaged failed resources cannot prevent clone
shutdown
Feb 14 10:04:27 [10866] primario    pengine:  warning: should_dump_input:
    Ignoring requirement that drbd_database:0_pre_notify_stop_0 comeplete
before ms_drbd_database_confirmed-pre_notify_stop_0: unmanaged failed
resources cannot prevent clone shutdown
Feb 14 10:04:27 [10866] primario    pengine:  warning: should_dump_input:
    Ignoring requirement that drbd_database:1_pre_notify_stop_0 comeplete
before ms_drbd_database_confirmed-pre_notify_stop_0: unmanaged failed
resources cannot prevent clone shutdown
Feb 14 10:04:27 [10866] primario    pengine:  warning: should_dump_input:
    Ignoring requirement that drbd_database:0_stop_0 comeplete before
ms_drbd_database_stopped_0: unmanaged failed resources cannot prevent clone
shutdown
Feb 14 10:04:27 [10866] primario    pengine:  warning: should_dump_input:
    Ignoring requirement that drbd_database:1_stop_0 comeplete before
ms_drbd_database_stopped_0: unmanaged failed resources cannot prevent clone
shutdown
Feb 14 10:04:27 [10866] primario    pengine:   notice: process_pe_message:
     Transition 112: PEngine Input stored in:
/var/lib/pengine/pe-input-179.bz2
Feb 14 10:04:27 [10867] primario       crmd:   notice: do_state_transition:
    State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [
input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Feb 14 10:04:27 [10867] primario       crmd:     info: do_te_invoke:
 Processing graph 112 (ref=pe_calc-dc-1360847067-237) derived from
/var/lib/pengine/pe-input-179.bz2


....

Feb 14 10:19:27 [10866] primario    pengine:   notice: unpack_config:   On
loss of CCM Quorum: Ignore
Feb 14 10:19:27 [10866] primario    pengine:   notice: unpack_rsc_op:
Operation monitor found resource ClusterIP active on primario
Feb 14 10:19:27 [10866] primario    pengine:   notice: unpack_rsc_op:
Preventing ms_drbd_database from re-starting on primario: operation stop
failed 'not installed' (rc=5)
Feb 14 10:19:27 [10866] primario    pengine:  warning: unpack_rsc_op:
Processing failed op drbd_database:0_last_failure_0 on primario: not
installed (5)
Feb 14 10:19:27 [10866] primario    pengine:   notice: unpack_rsc_op:
Preventing ms_drbd_home from re-starting on primario: operation stop failed
'not installed' (rc=5)
Feb 14 10:19:27 [10866] primario    pengine:  warning: unpack_rsc_op:
Processing failed op drbd_home:1_last_failure_0 on primario: not installed
(5)
Feb 14 10:19:27 [10866] primario    pengine:   notice: unpack_rsc_op:
Preventing ms_drbd_sistema from re-starting on primario: operation stop
failed 'not installed' (rc=5)
Feb 14 10:19:27 [10866] primario    pengine:  warning: unpack_rsc_op:
Processing failed op drbd_sistema:0_last_failure_0 on primario: not
installed (5)
Feb 14 10:19:27 [10866] primario    pengine:   notice: unpack_rsc_op:
Preventing ms_drbd_home from re-starting on secundario: operation stop
failed 'not installed' (rc=5)
Feb 14 10:19:27 [10866] primario    pengine:  warning: unpack_rsc_op:
Processing failed op drbd_home:0_last_failure_0 on secundario: not
installed (5)
Feb 14 10:19:27 [10866] primario    pengine:   notice: unpack_rsc_op:
Preventing ms_drbd_sistema from re-starting on secundario: operation stop
failed 'not installed' (rc=5)
Feb 14 10:19:27 [10866] primario    pengine:  warning: unpack_rsc_op:
Processing failed op drbd_sistema:1_last_failure_0 on secundario: not
installed (5)
Feb 14 10:19:27 [10866] primario    pengine:   notice: unpack_rsc_op:
Preventing ms_drbd_database from re-starting on secundario: operation stop
failed 'not installed' (rc=5)
Feb 14 10:19:27 [10866] primario    pengine:  warning: unpack_rsc_op:
Processing failed op drbd_database:1_last_failure_0 on secundario: not
installed (5)
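
(For what it's worth, the "1000000 failures (max=1000000)" messages above mean
the fail count for the ms_drbd_* resources has hit INFINITY, so the cluster
keeps them away from the nodes until the failures are cleared. Once the
underlying problem is fixed, they can be cleaned up with the crm shell, the
same tool used for the configuration below, for example:

        # clear the recorded failures for one master/slave set
        crm resource cleanup ms_drbd_home

        # or with the lower-level tool
        crm_resource --cleanup --resource ms_drbd_home

        # then re-check the cluster status
        crm_mon -1
)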


Cristiane


On Thu, Feb 14, 2013 at 10:33 AM, emmanuel segura <emi2fast at gmail.com> wrote:

> Hello Cristiane,
>
> Can you post your cluster logs and your DRBD config?
>
> Thanks
> 2013/2/14 Cristiane França <cristianedefranca at gmail.com>
>
>> Hello,
>> I installed Pacemaker (1.1.7-6) and DRBD (8.4.2-2) on my CentOS 6.3
>> server (kernel 2.6.32-279.19.1, 64-bit).
>> I'm having the following problem:
>> Pacemaker is not automatically mounting the DRBD partitions or
>> deciding which machine should be the primary.
>> Where is the mounting of the partitions configured?
>>
>> My cluster configuration:
>>
>> node primario
>> node secundario
>> primitive ClusterIP ocf:heartbeat:IPaddr2 \
>>         params ip="192.168.0.110" cidr_netmask="32" \
>>         op monitor interval="30s"
>> primitive database_fs ocf:heartbeat:Filesystem \
>>         params device="/dev/drbd3" directory="/database" fstype="ext4"
>> primitive drbd_database ocf:linbit:drbd \
>>         params drbd_resource="drbd3" \
>>         op monitor interval="15s"
>> primitive drbd_home ocf:linbit:drbd \
>>         params drbd_resource="drbd1" \
>>         op monitor interval="15s"
>> primitive drbd_sistema ocf:linbit:drbd \
>>         params drbd_resource="drbd2" \
>>         op monitor interval="15s"
>> primitive home_fs ocf:heartbeat:Filesystem \
>>         params device="/dev/drbd1" directory="/home" fstype="ext4"
>> primitive sistema_fs ocf:heartbeat:Filesystem \
>>         params device="/dev/drbd2" directory="/sistema" fstype="ext4"
>> ms ms_drbd_database drbd_database \
>>         meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
>> ms ms_drbd_home drbd_home \
>>         meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
>> ms ms_drbd_sistema drbd_sistema \
>>         meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
>> colocation database_on_drbd inf: database_fs ms_drbd_database:Master
>> colocation fs_on_drbd inf: home_fs ms_drbd_home:Master
>> colocation sistema_on_drbd inf: sistema_fs ms_drbd_sistema:Master
>> order database_after_drbd inf: ms_drbd_database:promote database_fs:start
>> order fs_after_drbd inf: ms_drbd_home:promote home_fs:start
>> order sistema_after_drbd inf: ms_drbd_sistema:promote sistema_fs:start
>> property $id="cib-bootstrap-options" \
>>         dc-version="1.1.7-6.el6-148fccfd5985c5590cc601123c6c16e966b85d14" \
>>         cluster-infrastructure="openais" \
>>         stonith-enabled="false" \
>>         no-quorum-policy="ignore" \
>>         expected-quorum-votes="2" \
>>         last-lrm-refresh="1360756132"
>> rsc_defaults $id="rsc-options" \
>>         resource-stickiness="100"
>>
>>
>>
>>
>> ============
>> Last updated: Thu Feb 14 10:21:47 2013
>> Last change: Thu Feb 14 09:45:16 2013 via cibadmin on primario
>> Stack: openais
>> Current DC: primario - partition with quorum
>> Version: 1.1.7-6.el6-148fccfd5985c5590cc601123c6c16e966b85d14
>> 2 Nodes configured, 2 expected votes
>> 10 Resources configured.
>> ============
>>
>> Online: [ secundario primario ]
>>
>>  ClusterIP (ocf::heartbeat:IPaddr2): Started primario
>>  Master/Slave Set: ms_drbd_home [drbd_home]
>>      drbd_home:0 (ocf::linbit:drbd): Slave secundario (unmanaged) FAILED
>>      drbd_home:1 (ocf::linbit:drbd): Slave primario (unmanaged) FAILED
>>  Master/Slave Set: ms_drbd_sistema [drbd_sistema]
>>      drbd_sistema:0 (ocf::linbit:drbd): Slave primario (unmanaged) FAILED
>>      drbd_sistema:1 (ocf::linbit:drbd): Slave secundario (unmanaged) FAILED
>>  Master/Slave Set: ms_drbd_database [drbd_database]
>>      drbd_database:0 (ocf::linbit:drbd): Slave primario (unmanaged) FAILED
>>      drbd_database:1 (ocf::linbit:drbd): Slave secundario (unmanaged) FAILED
>>
>> Failed actions:
>>     drbd_database:0_stop_0 (node=primario, call=23, rc=5, status=complete): not installed
>>     drbd_home:1_stop_0 (node=primario, call=8, rc=5, status=complete): not installed
>>     drbd_sistema:0_stop_0 (node=primario, call=22, rc=5, status=complete): not installed
>>     drbd_home:0_stop_0 (node=secundario, call=18, rc=5, status=complete): not installed
>>     drbd_sistema:1_stop_0 (node=secundario, call=20, rc=5, status=complete): not installed
>>     drbd_database:1_stop_0 (node=secundario, call=19, rc=5, status=complete): not installed
>>
>>
>>
>> I'm sorry for my English.
>> Cristiane
>>
>>
>>
>
>
> --
> this is my life and I live it as long as God wills
>