[Pacemaker] problem configuring DRBD resource in "Floating Peers" mode

Димитър Бойн DBOYN at POSTPATH.COM
Wed May 27 02:23:39 UTC 2009


Hi,
My ultimate goal is to run a pool of servers/nodes, any of which should be able to take over any of a set of floating DRBD peers.
I am starting small, with only two nodes on site A up and running, but I cannot get the DRBD resource to start.
 
Anyone, please help! :-)
 
I am running CentOS 5.3 
c001mlb_node01a:root >uname -a
Linux c001mlb_node01a 2.6.18-128.el5 #1 SMP Wed Jan 21 10:41:14 EST 2009 x86_64 x86_64 x86_64 GNU/Linux
 
Using:
rpm -ivh http://download.opensuse.org/repositories/server:/ha-clustering/CentOS_5/x86_64/libnet-1.1.2.1-1.1.x86_64.rpm
rpm -ivh http://download.opensuse.org/repositories/server:/ha-clustering/CentOS_5/x86_64/libheartbeat2-2.99.2-8.1.x86_64.rpm
rpm -ivh http://download.opensuse.org/repositories/server:/ha-clustering/CentOS_5/x86_64/libopenais2-0.80.5-13.1.x86_64.rpm
rpm -ivh http://download.opensuse.org/repositories/server:/ha-clustering/CentOS_5/x86_64/heartbeat-common-2.99.2-8.1.x86_64.rpm
rpm -ivh http://download.opensuse.org/repositories/server:/ha-clustering/CentOS_5/x86_64/heartbeat-resources-2.99.2-8.1.x86_64.rpm
rpm -ivh http://download.opensuse.org/repositories/server:/ha-clustering/CentOS_5/x86_64/heartbeat-2.99.2-8.1.x86_64.rpm
rpm -ivh http://download.opensuse.org/repositories/server:/ha-clustering/CentOS_5/x86_64/openais-0.80.5-13.1.x86_64.rpm
rpm -ivh http://download.opensuse.org/repositories/server:/ha-clustering/CentOS_5/x86_64/libpacemaker3-1.0.3-2.2.x86_64.rpm
rpm -ivh http://download.opensuse.org/repositories/server:/ha-clustering/CentOS_5/x86_64/pacemaker-1.0.3-2.2.x86_64.rpm

rpm -ivh http://download.opensuse.org/repositories/server:/ha-clustering/CentOS_5/x86_64/libopenais2-0.80.5-13.1.x86_64.rpm
rpm -ivh http://download.opensuse.org/repositories/server:/ha-clustering/CentOS_5/x86_64/openais-0.80.5-13.1.x86_64.rpm
 
rpm -ihv http://mirror.centos.org/centos-5/5.3/extras/x86_64/RPMS/kmod-drbd-8.0.13-2.x86_64.rpm
rpm -ihv http://mirror.centos.org/centos-5/5.3/extras/x86_64/RPMS/drbd82-8.2.6-1.el5.centos.x86_64.rpm
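(To double-check what actually ended up installed, something like the following should do; this is just a quick sanity check I would run, not part of the setup itself:)

rpm -qa | grep -Ei 'pacemaker|heartbeat|openais|drbd'
modinfo drbd | grep ^version
cat /proc/drbd | head -2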
 
If I change the node's hostname appropriately (so that it matches one of the "on" sections in drbd.conf below), I can start DRBD by hand just fine.
 
Here are my configuration files:
 
c001mlb_node01a:root >cat /etc/drbd.conf
#
# please have a a look at the example configuration file in
# /usr/share/doc/drbd82/drbd.conf
#
resource drbd0 {
 
        protocol C;
        disk    {
                on-io-error detach;
                }
        syncer  {
                rate 100M;
                al-extents 127;
                }
        net     {
                after-sb-0pri discard-older-primary;
                after-sb-1pri consensus;
                after-sb-2pri violently-as0p;
                rr-conflict disconnect;
                }
 
        on node_0 {
                      device /dev/drbd0;
                      disk /dev/mpath/SL7E2083700018-EMC-SATA-AX4-5i-LUN0;
                      address 192.168.80.213:7788;
                      meta-disk internal;
                      }
        on node_1 {
                      device /dev/drbd0;
                      disk /dev/ppsdvg/ppsdlv;
                      address 192.168.80.186:7788;
                      meta-disk internal;
                      }
                }
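
(To spell out what I mean by "change the hostname appropriately": if I temporarily set the hostname to match one of the "on" sections above, DRBD comes up by hand, roughly like this on the first box; typed from memory, output omitted:)

c001mlb_node01a:root >hostname node_0
c001mlb_node01a:root >service drbd start
c001mlb_node01a:root >cat /proc/drbd
c001mlb_node01a:root >hostname c001mlb_node01a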
 
c001mlb_node01a:root >cibadmin -Q
<cib validate-with="pacemaker-1.0" crm_feature_set="3.0.1" have-quorum="1" dc-uuid="c001mlb_node01a" admin_epoch="0" epoch="250" num_updates="7">
  <configuration>
    <crm_config>
      <cluster_property_set id="cib-bootstrap-options">
        <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.0.3-b133b3f19797c00f9189f4b66b513963f9d25db9"/>
        <nvpair id="cib-bootstrap-options-expected-quorum-votes" name="expected-quorum-votes" value="2"/>
      </cluster_property_set>
      <cluster_property_set id="cluster_property_set">
        <nvpair id="symmetric-cluster" name="symmetric-cluster" value="true"/>
        <nvpair id="no-quorum-policy" name="no-quorum-policy" value="ignore"/>
        <nvpair id="stonith-enabled" name="stonith-enabled" value="false"/>
      </cluster_property_set>
    </crm_config>
    <nodes>
      <node id="c001mlb_node01a" uname="c001mlb_node01a" type="normal">
        <instance_attributes id="nodes-c001mlb_node01a">
          <nvpair id="nodes-c001mlb_node01a-site" name="site" value="a"/>
        </instance_attributes>
      </node>
      <node id="c001mlb_node02a" uname="c001mlb_node02a" type="normal">
        <instance_attributes id="nodes-c001mlb_node02a">
          <nvpair id="nodes-c001mlb_node02a-site" name="site" value="a"/>
        </instance_attributes>
      </node>
      <node id="c001mlb_node03a" uname="c001mlb_node03a" type="normal">
        <instance_attributes id="nodes-c001mlb_node03a">
          <nvpair id="nodes-c001mlb_node03a-site" name="site" value="a"/>
        </instance_attributes>
      </node>
      <node id="c001mlb_node04a" uname="c001mlb_node04a" type="normal">
        <instance_attributes id="nodes-c001mlb_node04a">
          <nvpair id="nodes-c001mlb_node04a-site" name="site" value="a"/>
        </instance_attributes>
      </node>
      <node id="c001mlb_node05a" uname="c001mlb_node05a" type="normal">
        <instance_attributes id="nodes-c001mlb_node05a">
          <nvpair id="nodes-c001mlb_node05a-site" name="site" value="a"/>
        </instance_attributes>
      </node>
      <node id="c001mlb_node06a" uname="c001mlb_node06a" type="normal">
        <instance_attributes id="nodes-c001mlb_node06a">
          <nvpair id="nodes-c001mlb_node06a-site" name="site" value="a"/>
        </instance_attributes>
      </node>
      <node id="c001mlb_node07a" uname="c001mlb_node07a" type="normal">
        <instance_attributes id="nodes-c001mlb_node07a">
          <nvpair id="nodes-c001mlb_node07a-site" name="site" value="a"/>
        </instance_attributes>
      </node>
      <node id="c001mlb_node08a" uname="c001mlb_node08a" type="normal">
        <instance_attributes id="nodes-c001mlb_node08a">
          <nvpair id="nodes-c001mlb_node08a-site" name="site" value="a"/>
        </instance_attributes>
      </node>
      <node id="c001mlb_node09a" uname="c001mlb_node09a" type="normal">
        <instance_attributes id="nodes-c001mlb_node09a">
          <nvpair id="nodes-c001mlb_node09a-site" name="site" value="a"/>
        </instance_attributes>
      </node>
      <node id="c001mlb_node10a" uname="c001mlb_node10a" type="normal">
        <instance_attributes id="nodes-c001mlb_node10a">
          <nvpair id="nodes-c001mlb_node10a-site" name="site" value="a"/>
        </instance_attributes>
      </node>
      <node id="c001mlb_node11a" uname="c001mlb_node11a" type="normal">
        <instance_attributes id="nodes-c001mlb_node11a">
          <nvpair id="nodes-c001mlb_node11a-site" name="site" value="a"/>
        </instance_attributes>
      </node>
      <node id="c001mlb_node12a" uname="c001mlb_node12a" type="normal">
        <instance_attributes id="nodes-c001mlb_node12a">
          <nvpair id="nodes-c001mlb_node12a-site" name="site" value="a"/>
        </instance_attributes>
      </node>
      <node id="c001mlb_node13a" uname="c001mlb_node13a" type="normal">
        <instance_attributes id="nodes-c001mlb_node13a">
          <nvpair id="nodes-c001mlb_node13a-site" name="site" value="a"/>
        </instance_attributes>
      </node>
      <node id="c001mlb_node01b" uname="c001mlb_node01b" type="normal">
        <instance_attributes id="nodes-c001mlb_node01b">
          <nvpair id="nodes-c001mlb_node01b-site" name="site" value="b"/>
        </instance_attributes>
      </node>
      <node id="c001mlb_node02b" uname="c001mlb_node02b" type="normal">
        <instance_attributes id="nodes-c001mlb_node02b">
          <nvpair id="nodes-c001mlb_node02b-site" name="site" value="b"/>
        </instance_attributes>
      </node>
      <node id="c001mlb_node03b" uname="c001mlb_node03b" type="normal">
        <instance_attributes id="nodes-c001mlb_node03b">
          <nvpair id="nodes-c001mlb_node03b-site" name="site" value="b"/>
        </instance_attributes>
      </node>
      <node id="c001mlb_node04b" uname="c001mlb_node04b" type="normal">
        <instance_attributes id="nodes-c001mlb_node04b">
          <nvpair id="nodes-c001mlb_node04b-site" name="site" value="b"/>
        </instance_attributes>
      </node>
      <node id="c001mlb_node05b" uname="c001mlb_node05b" type="normal">
        <instance_attributes id="nodes-c001mlb_node05b">
          <nvpair id="nodes-c001mlb_node05b-site" name="site" value="b"/>
        </instance_attributes>
      </node>
      <node id="c001mlb_node06b" uname="c001mlb_node06b" type="normal">
        <instance_attributes id="nodes-c001mlb_node06b">
          <nvpair id="nodes-c001mlb_node06b-site" name="site" value="b"/>
        </instance_attributes>
      </node>
      <node id="c001mlb_node07b" uname="c001mlb_node07b" type="normal">
        <instance_attributes id="nodes-c001mlb_node07b">
          <nvpair id="nodes-c001mlb_node07b-site" name="site" value="b"/>
        </instance_attributes>
      </node>
      <node id="c001mlb_node08b" uname="c001mlb_node08b" type="normal">
        <instance_attributes id="nodes-c001mlb_node08b">
          <nvpair id="nodes-c001mlb_node08b-site" name="site" value="b"/>
        </instance_attributes>
      </node>
      <node id="c001mlb_node09b" uname="c001mlb_node09b" type="normal">
        <instance_attributes id="nodes-c001mlb_node09b">
          <nvpair id="nodes-c001mlb_node09b-site" name="site" value="b"/>
        </instance_attributes>
      </node>
      <node id="c001mlb_node10b" uname="c001mlb_node10b" type="normal">
        <instance_attributes id="nodes-c001mlb_node10b">
          <nvpair id="nodes-c001mlb_node10b-site" name="site" value="b"/>
        </instance_attributes>
      </node>
      <node id="c001mlb_node11b" uname="c001mlb_node11b" type="normal">
        <instance_attributes id="nodes-c001mlb_node11b">
          <nvpair id="nodes-c001mlb_node11b-site" name="site" value="b"/>
        </instance_attributes>
      </node>
      <node id="c001mlb_node12b" uname="c001mlb_node12b" type="normal">
        <instance_attributes id="nodes-c001mlb_node12b">
          <nvpair id="nodes-c001mlb_node12b-site" name="site" value="b"/>
        </instance_attributes>
      </node>
      <node id="c001mlb_node13b" uname="c001mlb_node13b" type="normal">
        <instance_attributes id="nodes-c001mlb_node13b">
          <nvpair id="nodes-c001mlb_node13b-site" name="site" value="b"/>
        </instance_attributes>
      </node>
    </nodes>
    <resources>
      <primitive class="ocf" provider="heartbeat" type="IPaddr2" id="ip-c001drbd01a">
        <instance_attributes id="ia-ip-c001drbd01a">
          <nvpair id="ia-ip-c001drbd01a-ip" name="ip" value="192.168.80.213"/>
          <nvpair id="ia-ip-c001drbd01a-nic" name="nic" value="eth0:0"/>
        </instance_attributes>
        <operations>
          <op id="op-ip-c001drbd01a" name="monitor" interval="21s" timeout="5s"/>
        </operations>
        <meta_attributes id="ip-c001drbd01a-meta_attributes">
          <nvpair id="ip-c001drbd01a-meta_attributes-target-role" name="target-role" value="started"/>
        </meta_attributes>
      </primitive>
      <master id="ms-drbd0">
        <meta_attributes id="ma-ms-drbd0">
          <nvpair id="ma-ms-drbd0-1" name="clone_max" value="2"/>
          <nvpair id="ma-ms-drbd0-2" name="clone_node_max" value="1"/>
          <nvpair id="ma-ms-drbd0-3" name="master_max" value="1"/>
          <nvpair id="ma-ms-drbd0-4" name="master_node_max" value="1"/>
          <nvpair id="ma-ms-drbd0-5" name="notify" value="yes"/>
          <nvpair id="ma-ms-drbd0-6" name="globally_unique" value="true"/>
          <nvpair id="ma-ms-drbd0-7" name="target_role" value="started"/>
        </meta_attributes>
        <primitive class="ocf" provider="heartbeat" type="drbd" id="drbd0">
          <instance_attributes id="ia-drbd0">
            <nvpair id="ia-drbd0-1" name="drbd_resource" value="drbd0"/>
            <nvpair id="ia-drbd0-2" name="clone_overrides_hostname" value="yes"/>
          </instance_attributes>
          <operations>
            <op id="op-drbd0-1" name="monitor" interval="59s" timeout="10s" role="Master"/>
            <op id="op-drbd0-2" name="monitor" interval="60s" timeout="10s" role="Slave"/>
          </operations>
          <meta_attributes id="drbd0-meta_attributes">
            <nvpair name="target-role" id="drbd0-meta_attributes-target-role" value="Started"/>
          </meta_attributes>
        </primitive>
        <meta_attributes id="ms-drbd0-meta_attributes">
          <nvpair name="target-role" id="ms-drbd0-meta_attributes-target-role" value="started"/>
        </meta_attributes>
      </master>
      <primitive class="ocf" provider="heartbeat" type="IPaddr2" id="ip-c001drbd01b">
        <instance_attributes id="ia-ip-c001drbd01b">
          <nvpair id="ia-ip-c001drbd01b-ip" name="ip" value="192.168.80.186"/>
          <nvpair id="ia-ip-c001drbd01b-nic" name="nic" value="eth0:0"/>
        </instance_attributes>
        <operations>
          <op id="op-ip-c001drbd01b" name="monitor" interval="21s" timeout="5s"/>
        </operations>
      </primitive>
    </resources>
    <constraints>
      <rsc_location id="location-ip-c001drbd01a" rsc="ip-c001drbd01a">
        <rule id="ip-c001drbd01a-rule" score="-INFINITY">
          <expression id="exp-ip-c001drbd01a-rule" value="b" attribute="site" operation="eq"/>
        </rule>
      </rsc_location>
      <rsc_location id="location-ip-c001drbd01b" rsc="ip-c001drbd01b">
        <rule id="ip-c001drbd01b-rule" score="-INFINITY">
          <expression id="exp-ip-c001drbd01b-rule" value="a" attribute="site" operation="eq"/>
        </rule>
      </rsc_location>
      <rsc_location id="drbd0-master-1" rsc="ms-drbd0">
        <rule id="drbd0-master-on-c001mlb_node01a" role="master" score="100">
          <expression id="expression-1" attribute="#uname" operation="eq" value="c001mlb_node01a"/>
        </rule>
      </rsc_location>
      <rsc_order id="order-drbd0-after-ip-c001drbd01a" first="ip-c001drbd01a" then="ms-drbd0" score="1"/>
      <rsc_order id="order-drbd0-after-ip-c001drbd01b" first="ip-c001drbd01b" then="ms-drbd0" score="1"/>
      <rsc_colocation rsc="ip-c001drbd01a" score="INFINITY" id="colocate-drbd0-ip-c001drbd01a" with-rsc="ms-drbd0"/>
      <rsc_colocation rsc="ip-c001drbd01b" score="INFINITY" id="colocate-drbd0-ip-c001drbd01b" with-rsc="ms-drbd0"/>
    </constraints>
  </configuration>
  <status>
    <node_state uname="c001mlb_node01a" ha="active" in_ccm="true" crmd="online" join="member" shutdown="0" expected="member" id="c001mlb_node01a" crm-debug-origin="do_update_resource">
      <transient_attributes id="c001mlb_node01a">
        <instance_attributes id="status-c001mlb_node01a">
          <nvpair id="status-c001mlb_node01a-probe_complete" name="probe_complete" value="true"/>
          <nvpair id="status-master-drbd0:0-c001mlb_node01a" name="master-drbd0:0" value="5"/>
        </instance_attributes>
      </transient_attributes>
      <lrm id="c001mlb_node01a">
        <lrm_resources>
          <lrm_resource id="ip-c001drbd01a" type="IPaddr2" class="ocf" provider="heartbeat">
            <lrm_rsc_op id="ip-c001drbd01a_monitor_0" operation="monitor" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.1" transition-key="5:0:7:cb9ea1f4-bf01-47af-bf6d-7bf4a5fa63d6" transition-magic="0:7;5:0:7:cb9ea1f4-bf01-47af-bf6d-7bf4a5fa63d6" call-id="2" rc-code="7" op-status="0" interval="0" last-run="1243362296" last-rc-change="1243362296" exec-time="130" queue-time="0" op-digest="6ac57a5bf2ae895cd84e7007731b6714"/>
            <lrm_rsc_op id="ip-c001drbd01a_start_0" operation="start" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.1" transition-key="10:0:0:cb9ea1f4-bf01-47af-bf6d-7bf4a5fa63d6" transition-magic="0:0;10:0:0:cb9ea1f4-bf01-47af-bf6d-7bf4a5fa63d6" call-id="7" rc-code="0" op-status="0" interval="0" last-run="1243362296" last-rc-change="1243362296" exec-time="180" queue-time="0" op-digest="6ac57a5bf2ae895cd84e7007731b6714"/>
            <lrm_rsc_op id="ip-c001drbd01a_monitor_21000" operation="monitor" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.1" transition-key="11:0:0:cb9ea1f4-bf01-47af-bf6d-7bf4a5fa63d6" transition-magic="0:0;11:0:0:cb9ea1f4-bf01-47af-bf6d-7bf4a5fa63d6" call-id="8" rc-code="0" op-status="0" interval="21000" last-run="1243362402" last-rc-change="1243362297" exec-time="150" queue-time="0" op-digest="fbed438503a3bbfcf2299123a52669f0"/>
          </lrm_resource>
          <lrm_resource id="ip-c001drbd01b" type="IPaddr2" class="ocf" provider="heartbeat">
            <lrm_rsc_op id="ip-c001drbd01b_monitor_0" operation="monitor" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.1" transition-key="7:0:7:cb9ea1f4-bf01-47af-bf6d-7bf4a5fa63d6" transition-magic="0:7;7:0:7:cb9ea1f4-bf01-47af-bf6d-7bf4a5fa63d6" call-id="4" rc-code="7" op-status="0" interval="0" last-run="1243362296" last-rc-change="1243362296" exec-time="180" queue-time="0" op-digest="a6d240a21b81c9776b45597f7fb943e1"/>
          </lrm_resource>
          <lrm_resource id="drbd0:0" type="drbd" class="ocf" provider="heartbeat">
            <lrm_rsc_op id="drbd0:0_monitor_0" operation="monitor" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.1" transition-key="6:0:7:cb9ea1f4-bf01-47af-bf6d-7bf4a5fa63d6" transition-magic="0:7;6:0:7:cb9ea1f4-bf01-47af-bf6d-7bf4a5fa63d6" call-id="3" rc-code="7" op-status="0" interval="0" last-run="1243362296" last-rc-change="1243362296" exec-time="150" queue-time="0" op-digest="61b0cfb88a3bcb6d6adeeb37791e380a"/>
            <lrm_rsc_op id="drbd0:0_pre_notify_promote_0" operation="notify" crm-debug-origin="do_update_resource" crm_feature_set="3.0.1" transition-key="142:3:0:c58a32ec-ae57-4bc8-8a1e-5d7069c2f2bd" transition-magic="0:0;142:3:0:c58a32ec-ae57-4bc8-8a1e-5d7069c2f2bd" call-id="25" rc-code="0" op-status="0" interval="0" last-run="1243364098" last-rc-change="1243364098" exec-time="70" queue-time="0" op-digest="61b0cfb88a3bcb6d6adeeb37791e380a"/>
            <lrm_rsc_op id="drbd0:0_post_notify_start_0" operation="notify" crm-debug-origin="do_update_resource" crm_feature_set="3.0.1" transition-key="141:3:0:c58a32ec-ae57-4bc8-8a1e-5d7069c2f2bd" transition-magic="0:0;141:3:0:c58a32ec-ae57-4bc8-8a1e-5d7069c2f2bd" call-id="23" rc-code="0" op-status="0" interval="0" last-run="1243364097" last-rc-change="1243364097" exec-time="160" queue-time="0" op-digest="61b0cfb88a3bcb6d6adeeb37791e380a"/>
            <lrm_rsc_op id="drbd0:0_post_notify_promote_0" operation="notify" crm-debug-origin="do_update_resource" crm_feature_set="3.0.1" transition-key="143:3:0:c58a32ec-ae57-4bc8-8a1e-5d7069c2f2bd" transition-magic="0:0;143:3:0:c58a32ec-ae57-4bc8-8a1e-5d7069c2f2bd" call-id="24" rc-code="0" op-status="0" interval="0" last-run="1243364098" last-rc-change="1243364098" exec-time="160" queue-time="150" op-digest="61b0cfb88a3bcb6d6adeeb37791e380a"/>
          </lrm_resource>
        </lrm_resources>
      </lrm>
    </node_state>
    <node_state uname="c001mlb_node02a" ha="active" in_ccm="true" crmd="online" join="member" shutdown="0" id="c001mlb_node02a" expected="member" crm-debug-origin="do_update_resource">
      <lrm id="c001mlb_node02a">
        <lrm_resources>
          <lrm_resource id="drbd0:1" type="drbd" class="ocf" provider="heartbeat">
            <lrm_rsc_op id="drbd0:1_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.1" transition-key="7:0:7:c58a32ec-ae57-4bc8-8a1e-5d7069c2f2bd" transition-magic="0:7;7:0:7:c58a32ec-ae57-4bc8-8a1e-5d7069c2f2bd" call-id="3" rc-code="7" op-status="0" interval="0" last-run="1243362403" last-rc-change="1243362403" exec-time="100" queue-time="0" op-digest="61b0cfb88a3bcb6d6adeeb37791e380a"/>
            <lrm_rsc_op id="drbd0:1_post_notify_start_0" operation="notify" crm-debug-origin="do_update_resource" crm_feature_set="3.0.1" transition-key="144:3:0:c58a32ec-ae57-4bc8-8a1e-5d7069c2f2bd" transition-magic="0:0;144:3:0:c58a32ec-ae57-4bc8-8a1e-5d7069c2f2bd" call-id="13" rc-code="0" op-status="0" interval="0" last-run="1243364083" last-rc-change="1243364083" exec-time="1140" queue-time="0" op-digest="61b0cfb88a3bcb6d6adeeb37791e380a"/>
            <lrm_rsc_op id="drbd0:1_post_notify_promote_0" operation="notify" crm-debug-origin="do_update_resource" crm_feature_set="3.0.1" transition-key="146:3:0:c58a32ec-ae57-4bc8-8a1e-5d7069c2f2bd" transition-magic="0:0;146:3:0:c58a32ec-ae57-4bc8-8a1e-5d7069c2f2bd" call-id="14" rc-code="0" op-status="0" interval="0" last-run="1243364084" last-rc-change="1243364084" exec-time="1170" queue-time="1140" op-digest="61b0cfb88a3bcb6d6adeeb37791e380a"/>
            <lrm_rsc_op id="drbd0:1_pre_notify_promote_0" operation="notify" crm-debug-origin="do_update_resource" crm_feature_set="3.0.1" transition-key="145:3:0:c58a32ec-ae57-4bc8-8a1e-5d7069c2f2bd" transition-magic="0:0;145:3:0:c58a32ec-ae57-4bc8-8a1e-5d7069c2f2bd" call-id="15" rc-code="0" op-status="0" interval="0" last-run="1243364086" last-rc-change="1243364086" exec-time="70" queue-time="1120" op-digest="61b0cfb88a3bcb6d6adeeb37791e380a"/>
          </lrm_resource>
          <lrm_resource id="ip-c001drbd01b" type="IPaddr2" class="ocf" provider="heartbeat">
            <lrm_rsc_op id="ip-c001drbd01b_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.1" transition-key="8:0:7:c58a32ec-ae57-4bc8-8a1e-5d7069c2f2bd" transition-magic="0:7;8:0:7:c58a32ec-ae57-4bc8-8a1e-5d7069c2f2bd" call-id="4" rc-code="7" op-status="0" interval="0" last-run="1243362403" last-rc-change="1243362403" exec-time="130" queue-time="0" op-digest="a6d240a21b81c9776b45597f7fb943e1"/>
          </lrm_resource>
          <lrm_resource id="ip-c001drbd01a" type="IPaddr2" class="ocf" provider="heartbeat">
            <lrm_rsc_op id="ip-c001drbd01a_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.1" transition-key="6:0:7:c58a32ec-ae57-4bc8-8a1e-5d7069c2f2bd" transition-magic="0:7;6:0:7:c58a32ec-ae57-4bc8-8a1e-5d7069c2f2bd" call-id="2" rc-code="7" op-status="0" interval="0" last-run="1243362403" last-rc-change="1243362403" exec-time="240" queue-time="0" op-digest="6ac57a5bf2ae895cd84e7007731b6714"/>
          </lrm_resource>
        </lrm_resources>
      </lrm>
      <transient_attributes id="c001mlb_node02a">
        <instance_attributes id="status-c001mlb_node02a">
          <nvpair id="status-c001mlb_node02a-probe_complete" name="probe_complete" value="true"/>
          <nvpair id="status-master-drbd0:1-c001mlb_node02a" name="master-drbd0:1" value="5"/>
        </instance_attributes>
      </transient_attributes>
    </node_state>
  </status>
</cib>
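
(For readability, the ms-drbd0 definition above would look roughly like this in crm shell syntax; this is just my hand transcription of the XML, with the attribute names kept exactly as they appear there, so treat it as a sketch:)

primitive drbd0 ocf:heartbeat:drbd \
        params drbd_resource="drbd0" clone_overrides_hostname="yes" \
        meta target-role="Started" \
        op monitor interval="59s" timeout="10s" role="Master" \
        op monitor interval="60s" timeout="10s" role="Slave"
ms ms-drbd0 drbd0 \
        meta clone_max="2" clone_node_max="1" master_max="1" master_node_max="1" \
        notify="yes" globally_unique="true" target_role="started"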
 
c001mlb_node02a:root >crm_mon -r -V -i 2
 
============
Last updated: Tue May 26 19:03:16 2009
Current DC: c001mlb_node01a - partition with quorum
Version: 1.0.3-b133b3f19797c00f9189f4b66b513963f9d25db9
26 Nodes configured, 2 expected votes
3 Resources configured.
============
 
Online: [ c001mlb_node01a c001mlb_node02a ]
OFFLINE: [ c001mlb_node03a c001mlb_node04a c001mlb_node05a c001mlb_node06a c001mlb_node07a c001mlb_node08a c001mlb_node09a c001mlb_node10a c001mlb_node11a c001mlb_node12a c001mlb_node13a c001mlb_node01b c001mlb_node02b c001mlb_node03b c001mlb_node04b c001mlb_node05b c001mlb_node06b c001mlb_node07b c001mlb_node08b c001mlb_node09b c001mlb_node10b c001mlb_node11b c001mlb_node12b c001mlb_node13b ]
 
Full list of resources:
 
ip-c001drbd01a  (ocf::heartbeat:IPaddr2):       Started c001mlb_node01a
Master/Slave Set: ms-drbd0
        Stopped: [ drbd0:0 drbd0:1 drbd0:2 drbd0:3 drbd0:4 drbd0:5 drbd0:6 drbd0:7 drbd0:8 drbd0:9 drbd0:10 drbd0:11 drbd0:12 drbd0:13 drbd0:14 drbd0:15 drbd0:16 drbd0:17 drbd0:18 drbd0:19 drbd0:20 drbd0:21 drbd0:22 drbd0:23 drbd0:24 drbd0:25 ]
ip-c001drbd01b  (ocf::heartbeat:IPaddr2):       Stopped
 
Thanks!
 
./Dimitar Boyn
 
P.S.
Here is what I get if I try to start the resource manually:
c001mlb_node01a:root >crm resource start drbd0
 
".
May 26 19:14:48 c001mlb_node01a crm_resource: [31520]: info: Invoked: crm_resource --meta -r drbd0 -p target-role -v Started
May 26 19:14:48 c001mlb_node01a cib: [25731]: info: log_data_element: cib:diff: - <cib admin_epoch= 0 epoch= 252 num_updates= 1 >
May 26 19:14:48 c001mlb_node01a cib: [25731]: info: log_data_element: cib:diff: -   <configuration >
May 26 19:14:48 c001mlb_node01a cib: [25731]: info: log_data_element: cib:diff: -     <resources >
May 26 19:14:48 c001mlb_node01a cib: [25731]: info: log_data_element: cib:diff: -       <master id= ms-drbd0 >
May 26 19:14:48 c001mlb_node01a cib: [25731]: info: log_data_element: cib:diff: -         <primitive id= drbd0 >
May 26 19:14:48 c001mlb_node01a cib: [25731]: info: log_data_element: cib:diff: -           <meta_attributes id= drbd0-meta_attributes >
May 26 19:14:48 c001mlb_node01a cib: [25731]: info: log_data_element: cib:diff: -             <nvpair value= Stopped id= drbd0-meta_attributes-target-role />
May 26 19:14:48 c001mlb_node01a cib: [25731]: info: log_data_element: cib:diff: -           </meta_attributes>
May 26 19:14:48 c001mlb_node01a cib: [25731]: info: log_data_element: cib:diff: -         </primitive>
May 26 19:14:48 c001mlb_node01a cib: [25731]: info: log_data_element: cib:diff: -       </master>
May 26 19:14:48 c001mlb_node01a cib: [25731]: info: log_data_element: cib:diff: -     </resources>
May 26 19:14:48 c001mlb_node01a cib: [25731]: info: log_data_element: cib:diff: -   </configuration>
May 26 19:14:48 c001mlb_node01a crmd: [25735]: info: abort_transition_graph: need_abort:59 - Triggered transition abort (complete=1) : Non-status change
May 26 19:14:48 c001mlb_node01a cib: [25731]: info: log_data_element: cib:diff: - </cib>
May 26 19:14:48 c001mlb_node01a crmd: [25735]: info: need_abort: Aborting on change to admin_epoch
May 26 19:14:48 c001mlb_node01a cib: [25731]: info: log_data_element: cib:diff: + <cib admin_epoch= 0 epoch= 253 num_updates= 1 >
May 26 19:14:48 c001mlb_node01a crmd: [25735]: info: do_state_transition: State transition S_IDLE ->S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
May 26 19:14:48 c001mlb_node01a cib: [25731]: info: log_data_element: cib:diff: +   <configuration >
May 26 19:14:48 c001mlb_node01a crmd: [25735]: info: do_state_transition: All 2 cluster nodes are eligible to run resources.
May 26 19:14:48 c001mlb_node01a cib: [25731]: info: log_data_element: cib:diff: +     <resources >
May 26 19:14:48 c001mlb_node01a crmd: [25735]: ERROR: do_fsa_action: Action A_DC_TIMER_STOP took 812418852s to complete
May 26 19:14:48 c001mlb_node01a cib: [25731]: info: log_data_element: cib:diff: +       <master id= ms-drbd0 >
May 26 19:14:48 c001mlb_node01a crmd: [25735]: ERROR: do_fsa_action: Action A_INTEGRATE_TIMER_STOP took 812418851s to complete
May 26 19:14:48 c001mlb_node01a cib: [25731]: info: log_data_element: cib:diff: +         <primitive id= drbd0 >
May 26 19:14:48 c001mlb_node01a crmd: [25735]: ERROR: do_fsa_action: Action A_FINALIZE_TIMER_STOP took 812418850s to complete
May 26 19:14:48 c001mlb_node01a cib: [25731]: info: log_data_element: cib:diff: +           <meta_attributes id= drbd0-meta_attributes >
May 26 19:14:48 c001mlb_node01a crmd: [25735]: info: do_pe_invoke: Query 73: Requesting the current CIB: S_POLICY_ENGINE
May 26 19:14:48 c001mlb_node01a cib: [25731]: info: log_data_element: cib:diff: +             <nvpair value= Started id= drbd0-meta_attributes-target-role />
May 26 19:14:48 c001mlb_node01a crmd: [25735]: ERROR: do_fsa_action: Action A_PE_INVOKE took 812418850s to complete
May 26 19:14:48 c001mlb_node01a cib: [25731]: info: log_data_element: cib:diff: +           </meta_attributes>
May 26 19:14:48 c001mlb_node01a cib: [25731]: info: log_data_element: cib:diff: +         </primitive>
May 26 19:14:48 c001mlb_node01a cib: [25731]: info: log_data_element: cib:diff: +       </master>
May 26 19:14:48 c001mlb_node01a cib: [25731]: info: log_data_element: cib:diff: +     </resources>
May 26 19:14:48 c001mlb_node01a cib: [25731]: info: log_data_element: cib:diff: +   </configuration>
May 26 19:14:48 c001mlb_node01a cib: [25731]: info: log_data_element: cib:diff: + </cib>
May 26 19:14:48 c001mlb_node01a cib: [25731]: info: cib_process_request: Operation complete: op cib_modify for section resources (origin=local/crm_resource/4, version=0.253.1): ok (rc=0)
May 26 19:14:48 c001mlb_node01a pengine: [25734]: notice: unpack_config: On loss of CCM Quorum: Ignore
May 26 19:14:48 c001mlb_node01a pengine: [25734]: info: determine_online_status: Node c001mlb_node01a is online
May 26 19:14:48 c001mlb_node01a crmd: [25735]: info: do_pe_invoke_callback: Invoking the PE: ref=pe_calc-dc-1243365288-55, seq=19460, quorate=1
May 26 19:14:48 c001mlb_node01a cib: [31521]: info: write_cib_contents: Archived previous version as /var/lib/heartbeat/crm/cib-6.raw
May 26 19:14:48 c001mlb_node01a pengine: [25734]: info: determine_online_status: Node c001mlb_node02a is online
May 26 19:14:48 c001mlb_node01a pengine: [25734]: notice: native_print: ip-c001drbd01a  (ocf::heartbeat:IPaddr2):       Stopped
May 26 19:14:48 c001mlb_node01a cib: [31521]: info: write_cib_contents: Wrote version 0.253.0 of the CIB to disk (digest: 2d74f55807e87a2e9c64d91fe4a3525e)
May 26 19:14:48 c001mlb_node01a pengine: [25734]: notice: clone_print: Master/Slave Set: ms-drbd0
May 26 19:14:48 c001mlb_node01a cib: [31521]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.2Ij02L (digest: /var/lib/heartbeat/crm/cib.G64hBN)
May 26 19:14:48 c001mlb_node01a pengine: [25734]: notice: print_list:   Stopped: [ drbd0:0 drbd0:1 drbd0:2 drbd0:3 drbd0:4 drbd0:5 drbd0:6 drbd0:7 drbd0:8 drbd0:9 drbd0:10 drbd0:11 drbd0:12 drbd0:13 drbd0:14 drbd0:15 drbd0:16 drbd0:17 drbd0:18 drbd0:19 drbd0:20 drbd0:21 drbd0:22 drbd0:23 drbd0:24 drbd0:25 ]
May 26 19:14:48 c001mlb_node01a pengine: [25734]: notice: native_print: ip-c001drbd01b  (ocf::heartbeat:IPaddr2):       Stopped
May 26 19:14:48 c001mlb_node01a pengine: [25734]: WARN: native_color: Resource drbd0:2 cannot run anywhere
May 26 19:14:48 c001mlb_node01a pengine: [25734]: WARN: native_color: Resource drbd0:3 cannot run anywhere
May 26 19:14:48 c001mlb_node01a pengine: [25734]: WARN: native_color: Resource drbd0:4 cannot run anywhere
May 26 19:14:48 c001mlb_node01a pengine: [25734]: WARN: native_color: Resource drbd0:5 cannot run anywhere
May 26 19:14:48 c001mlb_node01a pengine: [25734]: WARN: native_color: Resource drbd0:6 cannot run anywhere
May 26 19:14:48 c001mlb_node01a pengine: [25734]: WARN: native_color: Resource drbd0:7 cannot run anywhere
May 26 19:14:48 c001mlb_node01a pengine: [25734]: WARN: native_color: Resource drbd0:8 cannot run anywhere
May 26 19:14:48 c001mlb_node01a pengine: [25734]: WARN: native_color: Resource drbd0:9 cannot run anywhere
May 26 19:14:48 c001mlb_node01a pengine: [25734]: WARN: native_color: Resource drbd0:10 cannot run anywhere
May 26 19:14:48 c001mlb_node01a pengine: [25734]: WARN: native_color: Resource drbd0:11 cannot run anywhere
May 26 19:14:48 c001mlb_node01a pengine: [25734]: WARN: native_color: Resource drbd0:12 cannot run anywhere
May 26 19:14:48 c001mlb_node01a pengine: [25734]: WARN: native_color: Resource drbd0:13 cannot run anywhere
May 26 19:14:48 c001mlb_node01a pengine: [25734]: WARN: native_color: Resource drbd0:14 cannot run anywhere
May 26 19:14:48 c001mlb_node01a pengine: [25734]: WARN: native_color: Resource drbd0:15 cannot run anywhere
May 26 19:14:48 c001mlb_node01a pengine: [25734]: WARN: native_color: Resource drbd0:16 cannot run anywhere
May 26 19:14:48 c001mlb_node01a pengine: [25734]: WARN: native_color: Resource drbd0:17 cannot run anywhere
May 26 19:14:48 c001mlb_node01a pengine: [25734]: WARN: native_color: Resource drbd0:18 cannot run anywhere
May 26 19:14:48 c001mlb_node01a pengine: [25734]: WARN: native_color: Resource drbd0:19 cannot run anywhere
May 26 19:14:48 c001mlb_node01a pengine: [25734]: WARN: native_color: Resource drbd0:20 cannot run anywhere
May 26 19:14:48 c001mlb_node01a pengine: [25734]: WARN: native_color: Resource drbd0:21 cannot run anywhere
May 26 19:14:48 c001mlb_node01a pengine: [25734]: WARN: native_color: Resource drbd0:22 cannot run anywhere
May 26 19:14:48 c001mlb_node01a pengine: [25734]: WARN: native_color: Resource drbd0:23 cannot run anywhere
May 26 19:14:48 c001mlb_node01a pengine: [25734]: WARN: native_color: Resource drbd0:24 cannot run anywhere
May 26 19:14:48 c001mlb_node01a pengine: [25734]: WARN: native_color: Resource drbd0:25 cannot run anywhere
May 26 19:14:48 c001mlb_node01a pengine: [25734]: info: master_color: Promoting drbd0:0 (Stopped c001mlb_node01a)
May 26 19:14:48 c001mlb_node01a pengine: [25734]: info: master_color: ms-drbd0: Promoted 1 instances of a possible 1 to master
May 26 19:14:48 c001mlb_node01a pengine: [25734]: info: master_color: ms-drbd0: Promoted 1 instances of a possible 1 to master
May 26 19:14:48 c001mlb_node01a pengine: [25734]: WARN: native_color: Resource ip-c001drbd01b cannot run anywhere
May 26 19:14:48 c001mlb_node01a pengine: [25734]: notice: RecurringOp:  Start recurring monitor (21s) for ip-c001drbd01a on c001mlb_node01a
May 26 19:14:48 c001mlb_node01a pengine: [25734]: notice: RecurringOp:  Start recurring monitor (59s) for drbd0:0 on c001mlb_node01a
May 26 19:14:48 c001mlb_node01a pengine: [25734]: notice: RecurringOp:  Start recurring monitor (60s) for drbd0:1 on c001mlb_node02a
May 26 19:14:48 c001mlb_node01a pengine: [25734]: notice: RecurringOp:  Start recurring monitor (59s) for drbd0:0 on c001mlb_node01a
May 26 19:14:48 c001mlb_node01a pengine: [25734]: notice: RecurringOp:  Start recurring monitor (60s) for drbd0:1 on c001mlb_node02a
May 26 19:14:48 c001mlb_node01a pengine: [25734]: notice: LogActions: Start ip-c001drbd01a      (c001mlb_node01a)
May 26 19:14:48 c001mlb_node01a pengine: [25734]: notice: LogActions: Start drbd0:0     (c001mlb_node01a)
May 26 19:14:48 c001mlb_node01a pengine: [25734]: notice: LogActions: Start drbd0:1     (c001mlb_node02a)
May 26 19:14:48 c001mlb_node01a pengine: [25734]: notice: LogActions: Leave resource drbd0:10   (Stopped)
May 26 19:14:48 c001mlb_node01a pengine: [25734]: notice: LogActions: Leave resource drbd0:11   (Stopped)
May 26 19:14:48 c001mlb_node01a pengine: [25734]: notice: LogActions: Leave resource drbd0:12   (Stopped)
May 26 19:14:48 c001mlb_node01a pengine: [25734]: notice: LogActions: Leave resource drbd0:13   (Stopped)
May 26 19:14:48 c001mlb_node01a pengine: [25734]: notice: LogActions: Leave resource drbd0:14   (Stopped)
May 26 19:14:48 c001mlb_node01a pengine: [25734]: notice: LogActions: Leave resource drbd0:15   (Stopped)
May 26 19:14:48 c001mlb_node01a pengine: [25734]: notice: LogActions: Leave resource drbd0:16   (Stopped)
May 26 19:14:48 c001mlb_node01a pengine: [25734]: notice: LogActions: Leave resource drbd0:17   (Stopped)
May 26 19:14:48 c001mlb_node01a pengine: [25734]: notice: LogActions: Leave resource drbd0:18   (Stopped)
May 26 19:14:48 c001mlb_node01a pengine: [25734]: notice: LogActions: Leave resource drbd0:19   (Stopped)
May 26 19:14:48 c001mlb_node01a pengine: [25734]: notice: LogActions: Leave resource drbd0:2    (Stopped)
May 26 19:14:48 c001mlb_node01a pengine: [25734]: notice: LogActions: Leave resource drbd0:20   (Stopped)
May 26 19:14:48 c001mlb_node01a pengine: [25734]: notice: LogActions: Leave resource drbd0:21   (Stopped)
May 26 19:14:48 c001mlb_node01a pengine: [25734]: notice: LogActions: Leave resource drbd0:22   (Stopped)
May 26 19:14:48 c001mlb_node01a pengine: [25734]: notice: LogActions: Leave resource drbd0:23   (Stopped)
May 26 19:14:48 c001mlb_node01a pengine: [25734]: notice: LogActions: Leave resource drbd0:24   (Stopped)
May 26 19:14:48 c001mlb_node01a pengine: [25734]: notice: LogActions: Leave resource drbd0:25   (Stopped)
May 26 19:14:48 c001mlb_node01a pengine: [25734]: notice: LogActions: Leave resource drbd0:3    (Stopped)
May 26 19:14:48 c001mlb_node01a pengine: [25734]: notice: LogActions: Leave resource drbd0:4    (Stopped)
May 26 19:14:48 c001mlb_node01a pengine: [25734]: notice: LogActions: Leave resource drbd0:5    (Stopped)
May 26 19:14:48 c001mlb_node01a pengine: [25734]: notice: LogActions: Leave resource drbd0:6    (Stopped)
May 26 19:14:48 c001mlb_node01a pengine: [25734]: notice: LogActions: Leave resource drbd0:7    (Stopped)
May 26 19:14:48 c001mlb_node01a pengine: [25734]: notice: LogActions: Leave resource drbd0:8    (Stopped)
May 26 19:14:48 c001mlb_node01a pengine: [25734]: notice: LogActions: Leave resource drbd0:9    (Stopped)
May 26 19:14:48 c001mlb_node01a pengine: [25734]: notice: LogActions: Leave resource ip-c001drbd01b     (Stopped)
May 26 19:14:48 c001mlb_node01a crmd: [25735]: ERROR: do_fsa_action: Action A_LOG    took 812418819s to complete
May 26 19:14:48 c001mlb_node01a crmd: [25735]: info: do_state_transition: State transition S_POLICY_ENGINE ->S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
May 26 19:14:48 c001mlb_node01a crmd: [25735]: ERROR: do_fsa_action: Action A_DC_TIMER_STOP took 812418818s to complete
May 26 19:14:48 c001mlb_node01a crmd: [25735]: ERROR: do_fsa_action: Action A_INTEGRATE_TIMER_STOP took 812418818s to complete
May 26 19:14:48 c001mlb_node01a pengine: [25734]: WARN: process_pe_message: Transition 7: WARNINGs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-warn-771.bz2
May 26 19:14:48 c001mlb_node01a crmd: [25735]: ERROR: do_fsa_action: Action A_FINALIZE_TIMER_STOP took 812418818s to complete
May 26 19:14:48 c001mlb_node01a pengine: [25734]: info: process_pe_message: Configuration WARNINGs found during PE processing.  Please run  crm_verify -L to identify issues.
May 26 19:14:48 c001mlb_node01a crmd: [25735]: info: unpack_graph: Unpacked transition 7: 17 actions in 17 synapses
May 26 19:14:48 c001mlb_node01a crmd: [25735]: info: do_te_invoke: Processing graph 7 (ref=pe_calc-dc-1243365288-55) derived from /var/lib/pengine/pe-warn-771.bz2
May 26 19:14:48 c001mlb_node01a crmd: [25735]: ERROR: do_fsa_action: Action A_TE_INVOKE took 812418817s to complete
May 26 19:14:48 c001mlb_node01a crmd: [25735]: info: te_rsc_command: Initiating action 5: start ip-c001drbd01a_start_0 on c001mlb_node01a (local)
May 26 19:14:48 c001mlb_node01a crmd: [25735]: info: do_lrm_rsc_op: Performing key=5:7:0:c58a32ec-ae57-4bc8-8a1e-5d7069c2f2bd op=ip-c001drbd01a_start_0 )
May 26 19:14:48 c001mlb_node01a lrmd: [25732]: info: rsc:ip-c001drbd01a: start
May 26 19:14:48 c001mlb_node01a crmd: [25735]: info: te_pseudo_action: Pseudo action 14 fired and confirmed
May 26 19:14:48 c001mlb_node01a crmd: [25735]: info: te_pseudo_action: Pseudo action 15 fired and confirmed
May 26 19:14:48 c001mlb_node01a crmd: [25735]: info: te_pseudo_action: Pseudo action 16 fired and confirmed
May 26 19:14:48 c001mlb_node01a crmd: [25735]: info: te_pseudo_action: Pseudo action 25 fired and confirmed
May 26 19:14:48 c001mlb_node01a crmd: [25735]: info: te_pseudo_action: Pseudo action 28 fired and confirmed
May 26 19:14:48 c001mlb_node01a crmd: [25735]: info: te_rsc_command: Initiating action 141: notify drbd0:0_post_notify_start_0 on c001mlb_node01a (local)
May 26 19:14:48 c001mlb_node01a crmd: [25735]: info: do_lrm_rsc_op: Performing key=141:7:0:c58a32ec-ae57-4bc8-8a1e-5d7069c2f2bd op=drbd0:0_notify_0 )
May 26 19:14:48 c001mlb_node01a lrmd: [25732]: info: rsc:drbd0:0: notify
May 26 19:14:48 c001mlb_node01a crmd: [25735]: info: te_rsc_command: Initiating action 143: notify drbd0:0_post_notify_promote_0 on c001mlb_node01a (local)
May 26 19:14:48 c001mlb_node01a lrmd: [25732]: info: RA output: (ip-c001drbd01a:start:stderr) eth0:0: warning: name may be invalid
May 26 19:14:48 c001mlb_node01a crmd: [25735]: info: do_lrm_rsc_op: Performing key=143:7:0:c58a32ec-ae57-4bc8-8a1e-5d7069c2f2bd op=drbd0:0_notify_0 )
May 26 19:14:48 c001mlb_node01a lrmd: [25732]: info: RA output: (drbd0:0:notify:stderr) 2009/05/26_19:14:48 INFO: drbd0: Using hostname node_0
May 26 19:14:48 c001mlb_node01a lrmd: [25732]: info: RA output: (ip-c001drbd01a:start:stderr) 2009/05/26_19:14:48 INFO: ip -f inet addr add 192.168.80.213/32 brd 192.168.80.213 dev eth0 label eth0:0
May 26 19:14:48 c001mlb_node01a lrmd: [25732]: info: RA output: (ip-c001drbd01a:start:stderr) 2009/05/26_19:14:48 INFO: ip link set eth0 up 2009/05/26_19:14:48 INFO: /usr/lib64/heartbeat/send_arp -i 200 -r 5 -p /var/run/heartbeat/rsctmp/send_arp/send_arp-192.168.80.213 eth0 192.168.80.213 auto not_used not_used
May 26 19:14:48 c001mlb_node01a crmd: [25735]: info: te_rsc_command: Initiating action 144: notify drbd0:1_post_notify_start_0 on c001mlb_node02a
May 26 19:14:48 c001mlb_node01a crmd: [25735]: info: te_rsc_command: Initiating action 146: notify drbd0:1_post_notify_promote_0 on c001mlb_node02a
May 26 19:14:48 c001mlb_node01a crm_master: [31607]: info: Invoked: /usr/sbin/crm_master -l reboot -v 5
May 26 19:14:48 c001mlb_node01a crmd: [25735]: info: process_lrm_event: LRM operation ip-c001drbd01a_start_0 (call=30, rc=0, cib-update=74, confirmed=true) complete ok
May 26 19:14:49 c001mlb_node01a crmd: [25735]: info: match_graph_event: Action ip-c001drbd01a_start_0 (5) confirmed on c001mlb_node01a (rc=0)
May 26 19:14:49 c001mlb_node01a crmd: [25735]: info: te_rsc_command: Initiating action 6: monitor ip-c001drbd01a_monitor_21000 on c001mlb_node01a (local)
May 26 19:14:49 c001mlb_node01a crmd: [25735]: info: do_lrm_rsc_op: Performing key=6:7:0:c58a32ec-ae57-4bc8-8a1e-5d7069c2f2bd op=ip-c001drbd01a_monitor_21000 )
May 26 19:14:49 c001mlb_node01a lrmd: [25732]: info: rsc:drbd0:0: notify
May 26 19:14:49 c001mlb_node01a lrmd: [25732]: info: RA output: (drbd0:0:notify:stdout) 0 Trying master-drbd0:0=5 update via attrd
May 26 19:14:49 c001mlb_node01a lrmd: [25732]: info: RA output: (ip-c001drbd01a:monitor:stderr) eth0:0: warning: name may be invalid
May 26 19:14:49 c001mlb_node01a lrmd: [25732]: info: RA output: (drbd0:0:notify:stderr) 2009/05/26_19:14:49 INFO: drbd0: Using hostname node_0
May 26 19:14:49 c001mlb_node01a crmd: [25735]: info: process_lrm_event: LRM operation drbd0:0_notify_0 (call=31, rc=0, cib-update=75, confirmed=true) complete ok
May 26 19:14:49 c001mlb_node01a crm_master: [31675]: info: Invoked: /usr/sbin/crm_master -l reboot -v 5
May 26 19:14:49 c001mlb_node01a crmd: [25735]: info: process_lrm_event: LRM operation ip-c001drbd01a_monitor_21000 (call=33, rc=0, cib-update=76, confirmed=false) complete ok
May 26 19:14:49 c001mlb_node01a crmd: [25735]: info: match_graph_event: Action drbd0:0_post_notify_start_0 (141) confirmed on c001mlb_node01a (rc=0)
May 26 19:14:49 c001mlb_node01a crmd: [25735]: info: match_graph_event: Action ip-c001drbd01a_monitor_21000 (6) confirmed on c001mlb_node01a (rc=0)
May 26 19:14:49 c001mlb_node01a lrmd: [25732]: info: RA output: (drbd0:0:notify:stdout) 0 Trying master-drbd0:0=5 update via attrd
May 26 19:14:49 c001mlb_node01a crmd: [25735]: info: process_lrm_event: LRM operation drbd0:0_notify_0 (call=32, rc=0, cib-update=77, confirmed=true) complete ok
May 26 19:14:49 c001mlb_node01a crmd: [25735]: info: match_graph_event: Action drbd0:0_post_notify_promote_0 (143) confirmed on c001mlb_node01a (rc=0)
May 26 19:14:50 c001mlb_node01a crmd: [25735]: info: match_graph_event: Action drbd0:1_post_notify_start_0 (144) confirmed on c001mlb_node02a (rc=0)
May 26 19:14:50 c001mlb_node01a crmd: [25735]: info: te_pseudo_action: Pseudo action 17 fired and confirmed
May 26 19:14:50 c001mlb_node01a crmd: [25735]: info: te_pseudo_action: Pseudo action 26 fired and confirmed
May 26 19:14:50 c001mlb_node01a crmd: [25735]: info: te_rsc_command: Initiating action 142: notify drbd0:0_pre_notify_promote_0 on c001mlb_node01a (local)
May 26 19:14:50 c001mlb_node01a crmd: [25735]: info: do_lrm_rsc_op: Performing key=142:7:0:c58a32ec-ae57-4bc8-8a1e-5d7069c2f2bd op=drbd0:0_notify_0 )
May 26 19:14:50 c001mlb_node01a lrmd: [25732]: info: rsc:drbd0:0: notify
May 26 19:14:50 c001mlb_node01a crmd: [25735]: info: te_rsc_command: Initiating action 145: notify drbd0:1_pre_notify_promote_0 on c001mlb_node02a
May 26 19:14:50 c001mlb_node01a lrmd: [25732]: info: RA output: (drbd0:0:notify:stderr) 2009/05/26_19:14:50 INFO: drbd0: Using hostname node_0
May 26 19:14:50 c001mlb_node01a crmd: [25735]: info: process_lrm_event: LRM operation drbd0:0_notify_0 (call=34, rc=0, cib-update=78, confirmed=true) complete ok
May 26 19:14:50 c001mlb_node01a crmd: [25735]: info: match_graph_event: Action drbd0:0_pre_notify_promote_0 (142) confirmed on c001mlb_node01a (rc=0)
May 26 19:14:51 c001mlb_node01a crmd: [25735]: info: match_graph_event: Action drbd0:1_post_notify_promote_0 (146) confirmed on c001mlb_node02a (rc=0)
May 26 19:14:51 c001mlb_node01a crmd: [25735]: info: te_pseudo_action: Pseudo action 29 fired and confirmed
May 26 19:14:51 c001mlb_node01a crmd: [25735]: info: match_graph_event: Action drbd0:1_pre_notify_promote_0 (145) confirmed on c001mlb_node02a (rc=0)
May 26 19:14:51 c001mlb_node01a crmd: [25735]: info: te_pseudo_action: Pseudo action 27 fired and confirmed
May 26 19:14:51 c001mlb_node01a crmd: [25735]: info: run_graph: ====================================================
May 26 19:14:51 c001mlb_node01a crmd: [25735]: notice: run_graph: Transition 7 (Complete=17, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-warn-771.bz2): Complete
May 26 19:14:51 c001mlb_node01a crmd: [25735]: info: te_graph_trigger: Transition 7 is now complete
May 26 19:14:51 c001mlb_node01a crmd: [25735]: info: notify_crmd: Transition 7 status: done - <null>
May 26 19:14:51 c001mlb_node01a crmd: [25735]: ERROR: do_fsa_action: Action A_LOG    took 812418558s to complete
May 26 19:14:51 c001mlb_node01a crmd: [25735]: info: do_state_transition: State transition S_TRANSITION_ENGINE ->S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
May 26 19:14:51 c001mlb_node01a crmd: [25735]: info: do_state_transition: Starting PEngine Recheck Timer
May 26 19:14:51 c001mlb_node01a crmd: [25735]: ERROR: do_fsa_action: Action A_DC_TIMER_STOP took 812418558s to complete
May 26 19:14:51 c001mlb_node01a crmd: [25735]: ERROR: do_fsa_action: Action A_INTEGRATE_TIMER_STOP took 812418558s to complete
May 26 19:14:51 c001mlb_node01a crmd: [25735]: ERROR: do_fsa_action: Action A_FINALIZE_TIMER_STOP took 812418558s to complete
May 26 19:15:10 c001mlb_node01a lrmd: [25732]: info: RA output: (ip-c001drbd01a:monitor:stderr) eth0:0: warning: name may be invalid
May 26 19:15:52 c001mlb_node01a last message repeated 2 times
