[ClusterLabs] DRBD problem

Adam Kuśmirek amkusmirek at gmail.com
Thu Apr 30 09:02:42 UTC 2015


Hello,

I have a two-node cluster with two resource groups that I want to run on
separate nodes.
Each resource group contains a filesystem replicated with DRBD.

[root@pbx-fs-dc ~]# pcs status
Cluster name: frontend
Last updated: Thu Apr 30 10:23:48 2015
Last change: Thu Apr 30 10:09:24 2015
Stack: corosync
Current DC: pbx-fs-cluster-drc (2) - partition with quorum
Version: 1.1.12-a14efad
2 Nodes configured
13 Resources configured


Online: [ pbx-fs-cluster-dc pbx-fs-cluster-drc ]

Full list of resources:

 pbx-fs-cluster-dc-ipmi (stonith:fence_ipmilan):        Started pbx-fs-cluster-drc
 pbx-fs-cluster-drc-ipmi        (stonith:fence_ipmilan):        Started pbx-fs-cluster-dc
 Master/Slave Set: Rec01Clone [Rec01Data]
     Masters: [ pbx-fs-cluster-dc ]
     Slaves: [ pbx-fs-cluster-drc ]
 Master/Slave Set: Rec02Clone [Rec02Data]
     Masters: [ pbx-fs-cluster-drc ]
     Slaves: [ pbx-fs-cluster-dc ]
 Resource Group: Frontend01
     RecFS01    (ocf::heartbeat:Filesystem):    Started pbx-fs-cluster-dc
     voipIP01   (ocf::heartbeat:IPaddr2):       Started pbx-fs-cluster-dc
     coreIP01   (ocf::heartbeat:IPaddr2):       Started pbx-fs-cluster-dc
 Resource Group: Frontend02
     RecFS02    (ocf::heartbeat:Filesystem):    Started pbx-fs-cluster-drc
     voipIP02   (ocf::heartbeat:IPaddr2):       Started pbx-fs-cluster-drc
     coreIP02   (ocf::heartbeat:IPaddr2):       Started pbx-fs-cluster-drc
 Resource Group: ModCC01
     modccIP01  (ocf::heartbeat:IPaddr2):       Started pbx-fs-cluster-dc

PCSD Status:
  pbx-fs-cluster-dc: Online
  pbx-fs-cluster-drc: Online

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled


[root@pbx-fs-dc ~]# pcs constraint show
Location Constraints:
  Resource: Rec01Clone
    Constraint: location-Rec01Clone
      Rule: score=INFINITY role=master
        Expression: uname eq pbx-fs-cluster-dc
  Resource: Rec02Clone
    Constraint: location-Rec02Clone
      Rule: score=INFINITY role=master
        Expression: uname eq pbx-fs-cluster-drc
  Resource: modccIP01
    Enabled on: pbx-fs-cluster-dc (score:INFINITY)
Ordering Constraints:
  promote Rec01Clone then start RecFS01 (kind:Mandatory)
  promote Rec02Clone then start RecFS02 (kind:Mandatory)
Colocation Constraints:
  RecFS01 with Rec01Clone (score:INFINITY) (with-rsc-role:Master)
  RecFS02 with Rec02Clone (score:INFINITY) (with-rsc-role:Master)
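
(Note: the constraints above say nothing about DRBD's own resource-level
fencing. When DRBD runs under Pacemaker, the DRBD documentation recommends
letting DRBD place and remove a constraint that blocks promotion of an
outdated peer. A sketch of what that typically looks like in the resource
configuration — the resource name rec02 matches the devices below; the
script paths are the usual drbd-utils locations on RHEL/CentOS 7 and may
differ on other distributions:)

```
resource rec02 {
  disk {
    # Ask the cluster manager before acting when the peer is lost.
    fencing resource-only;
  }
  handlers {
    # Shipped with drbd-utils: adds a location constraint that prevents
    # promotion of the outdated peer, and removes it after resync.
    fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
    after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
  }
}
```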



[root@pbx-fs-dc ~]# cat /proc/drbd
version: 8.4.6 (api:1/proto:86-101)
GIT-hash: 833d830e0152d1e457fa7856e71e11248ccf3f70 build by phil@Build64R7, 2015-04-10 05:13:52

 1: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
    ns:2052 nr:0 dw:2048 dr:4992 al:1 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
 2: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r-----
    ns:0 nr:140 dw:140 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0


When one node fails (e.g. pbx-fs-cluster-drc; simulated below by putting it
into standby), all resources move correctly to the second node
(pbx-fs-cluster-dc):

[root@pbx-fs-drc ~]# pcs cluster standby pbx-fs-cluster-drc

[root@pbx-fs-drc ~]# pcs status
Cluster name: frontend
Last updated: Thu Apr 30 10:33:24 2015
Last change: Thu Apr 30 10:33:07 2015
Stack: corosync
Current DC: pbx-fs-cluster-drc (2) - partition with quorum
Version: 1.1.12-a14efad
2 Nodes configured
13 Resources configured


Node pbx-fs-cluster-drc (2): standby
Online: [ pbx-fs-cluster-dc ]

Full list of resources:

 pbx-fs-cluster-dc-ipmi (stonith:fence_ipmilan):        Started pbx-fs-cluster-dc
 pbx-fs-cluster-drc-ipmi        (stonith:fence_ipmilan):        Started pbx-fs-cluster-dc
 Master/Slave Set: Rec01Clone [Rec01Data]
     Masters: [ pbx-fs-cluster-dc ]
     Stopped: [ pbx-fs-cluster-drc ]
 Master/Slave Set: Rec02Clone [Rec02Data]
     Masters: [ pbx-fs-cluster-dc ]
     Stopped: [ pbx-fs-cluster-drc ]
 Resource Group: Frontend01
     RecFS01    (ocf::heartbeat:Filesystem):    Started pbx-fs-cluster-dc
     voipIP01   (ocf::heartbeat:IPaddr2):       Started pbx-fs-cluster-dc
     coreIP01   (ocf::heartbeat:IPaddr2):       Started pbx-fs-cluster-dc
 Resource Group: Frontend02
     RecFS02    (ocf::heartbeat:Filesystem):    Started pbx-fs-cluster-dc
     voipIP02   (ocf::heartbeat:IPaddr2):       Started pbx-fs-cluster-dc
     coreIP02   (ocf::heartbeat:IPaddr2):       Started pbx-fs-cluster-dc
 Resource Group: ModCC01
     modccIP01  (ocf::heartbeat:IPaddr2):       Started pbx-fs-cluster-dc

PCSD Status:
  pbx-fs-cluster-dc: Online
  pbx-fs-cluster-drc: Online

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

[root@pbx-fs-dc ~]# cat /proc/drbd
version: 8.4.6 (api:1/proto:86-101)
GIT-hash: 833d830e0152d1e457fa7856e71e11248ccf3f70 build by phil@Build64R7, 2015-04-10 05:13:52

 1: cs:WFConnection ro:Primary/Unknown ds:UpToDate/DUnknown C r-----
    ns:2052 nr:0 dw:2048 dr:4992 al:1 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
 2: cs:WFConnection ro:Primary/Unknown ds:UpToDate/DUnknown C r-----
    ns:0 nr:152 dw:2200 dr:1620 al:1 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:2048


Up to this point everything is OK.

When I bring the failed node (pbx-fs-cluster-drc) back to life, the
resources that prefer this node move to it as expected, but DRBD ends up in
a disconnected (StandAlone) state.

[root@pbx-fs-dc ~]# pcs status
Cluster name: frontend
Last updated: Thu Apr 30 10:56:32 2015
Last change: Thu Apr 30 10:40:26 2015
Stack: corosync
Current DC: pbx-fs-cluster-drc (2) - partition with quorum
Version: 1.1.12-a14efad
2 Nodes configured
13 Resources configured


Online: [ pbx-fs-cluster-dc pbx-fs-cluster-drc ]

Full list of resources:

 pbx-fs-cluster-dc-ipmi (stonith:fence_ipmilan):        Started pbx-fs-cluster-dc
 pbx-fs-cluster-drc-ipmi        (stonith:fence_ipmilan):        Started pbx-fs-cluster-drc
 Master/Slave Set: Rec01Clone [Rec01Data]
     Masters: [ pbx-fs-cluster-dc ]
     Slaves: [ pbx-fs-cluster-drc ]
 Master/Slave Set: Rec02Clone [Rec02Data]
     Masters: [ pbx-fs-cluster-drc ]
     Slaves: [ pbx-fs-cluster-dc ]
 Resource Group: Frontend01
     RecFS01    (ocf::heartbeat:Filesystem):    Started pbx-fs-cluster-dc
     voipIP01   (ocf::heartbeat:IPaddr2):       Started pbx-fs-cluster-dc
     coreIP01   (ocf::heartbeat:IPaddr2):       Started pbx-fs-cluster-dc
 Resource Group: Frontend02
     RecFS02    (ocf::heartbeat:Filesystem):    Started pbx-fs-cluster-drc
     voipIP02   (ocf::heartbeat:IPaddr2):       Started pbx-fs-cluster-drc
     coreIP02   (ocf::heartbeat:IPaddr2):       Started pbx-fs-cluster-drc
 Resource Group: ModCC01
     modccIP01  (ocf::heartbeat:IPaddr2):       Started pbx-fs-cluster-dc

PCSD Status:
  pbx-fs-cluster-dc: Online
  pbx-fs-cluster-drc: Online

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

[root@pbx-fs-drc ~]# cat /proc/drbd
version: 8.4.6 (api:1/proto:86-101)
GIT-hash: 833d830e0152d1e457fa7856e71e11248ccf3f70 build by phil@Build64R7, 2015-04-10 05:13:52

 1: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r-----
    ns:0 nr:164 dw:164 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
 2: cs:StandAlone ro:Primary/Unknown ds:UpToDate/DUnknown   r-----
    ns:0 nr:0 dw:2076 dr:1944 al:2 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:2060

What is worse, it seems that the cluster did not sync the devices before
promoting the resource on the node that was brought back, so all changes
made during the outage were lost.
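
(For context: DRBD only resolves a split brain on its own if automatic
recovery policies are configured in the resource's net section; with the
defaults it detects the split brain and drops the connection, which matches
the StandAlone state above. A sketch of such policies, based on the DRBD 8.4
manual — use with care, since they deliberately discard data on the side
chosen as the victim:)

```
resource rec02 {
  net {
    # No primaries after split brain: keep the side that made changes.
    after-sb-0pri discard-zero-changes;
    # One primary: discard the secondary's changes.
    after-sb-1pri discard-secondary;
    # Two primaries: never auto-resolve, just disconnect.
    after-sb-2pri disconnect;
  }
}
```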

Below you can find system logs:

Apr 30 10:40:26 pbx-fs-dc crmd[3815]: notice: process_lrm_event: Operation
pbx-fs-cluster-drc-ipmi_stop_0: ok (node=pbx-fs-cluster-dc, call=122, rc=0,
cib-update=69, confirmed=true)
Apr 30 10:40:26 pbx-fs-dc crmd[3815]: notice: process_lrm_event: Operation
Rec02Data_notify_0: ok (node=pbx-fs-cluster-dc, call=126, rc=0,
cib-update=0, confirmed=true)
Apr 30 10:40:26 pbx-fs-dc crmd[3815]: notice: process_lrm_event: Operation
Rec01Data_notify_0: ok (node=pbx-fs-cluster-dc, call=125, rc=0,
cib-update=0, confirmed=true)
Apr 30 10:40:26 pbx-fs-dc IPaddr2(coreIP02)[40952]: INFO: IP status = ok,
IP_CIP=
Apr 30 10:40:26 pbx-fs-dc crmd[3815]: notice: process_lrm_event: Operation
coreIP02_stop_0: ok (node=pbx-fs-cluster-dc, call=124, rc=0, cib-update=70,
confirmed=true)
Apr 30 10:40:26 pbx-fs-dc IPaddr2(voipIP02)[41057]: INFO: IP status = ok,
IP_CIP=
Apr 30 10:40:26 pbx-fs-dc crmd[3815]: notice: process_lrm_event: Operation
voipIP02_stop_0: ok (node=pbx-fs-cluster-dc, call=128, rc=0, cib-update=71,
confirmed=true)
Apr 30 10:40:27 pbx-fs-dc Filesystem(RecFS02)[41112]: INFO: Running stop
for /dev/drbd2 on /data/rec02
Apr 30 10:40:27 pbx-fs-dc Filesystem(RecFS02)[41112]: INFO: Trying to
unmount /data/rec02
Apr 30 10:40:27 pbx-fs-dc Filesystem(RecFS02)[41112]: ERROR: Couldn't
unmount /data/rec02; trying cleanup with TERM
Apr 30 10:40:27 pbx-fs-dc Filesystem(RecFS02)[41112]: INFO: sending signal
TERM to: root      5756  5664  0 09:47 pts/0    Ss+    0:00 -bash
Apr 30 10:40:28 pbx-fs-dc Filesystem(RecFS02)[41112]: ERROR: Couldn't
unmount /data/rec02; trying cleanup with TERM
Apr 30 10:40:28 pbx-fs-dc Filesystem(RecFS02)[41112]: INFO: sending signal
TERM to: root      5756  5664  0 09:47 pts/0    Ss+    0:00 -bash
Apr 30 10:40:29 pbx-fs-dc Filesystem(RecFS02)[41112]: ERROR: Couldn't
unmount /data/rec02; trying cleanup with TERM
Apr 30 10:40:29 pbx-fs-dc Filesystem(RecFS02)[41112]: INFO: sending signal
TERM to: root      5756  5664  0 09:47 pts/0    Ss+    0:00 -bash
Apr 30 10:40:30 pbx-fs-dc crmd[3815]: notice: process_lrm_event: Operation
Rec01Data_notify_0: ok (node=pbx-fs-cluster-dc, call=131, rc=0,
cib-update=0, confirmed=true)
Apr 30 10:40:30 pbx-fs-dc kernel: drbd rec01: Handshake successful: Agreed
network protocol version 101
Apr 30 10:40:30 pbx-fs-dc kernel: drbd rec01: Agreed to support TRIM on
protocol level
Apr 30 10:40:30 pbx-fs-dc kernel: drbd rec01: conn( WFConnection ->
WFReportParams )
Apr 30 10:40:30 pbx-fs-dc kernel: drbd rec01: Starting ack_recv thread
(from drbd_r_rec01 [30637])
Apr 30 10:40:30 pbx-fs-dc kernel: block drbd1: drbd_sync_handshake:
Apr 30 10:40:30 pbx-fs-dc kernel: block drbd1: self
D1738D6AEB0D2ED7:A79967DFC9150419:2694B00ED5C0E490:2693B00ED5C0E490 bits:0
flags:0
Apr 30 10:40:30 pbx-fs-dc kernel: block drbd1: peer
A79967DFC9150418:0000000000000000:2694B00ED5C0E490:2693B00ED5C0E490 bits:0
flags:0
Apr 30 10:40:30 pbx-fs-dc kernel: block drbd1: uuid_compare()=1 by rule 70
Apr 30 10:40:30 pbx-fs-dc kernel: block drbd1: peer( Unknown -> Secondary )
conn( WFReportParams -> WFBitMapS ) pdsk( DUnknown -> Consistent )
Apr 30 10:40:30 pbx-fs-dc kernel: block drbd1: send bitmap stats
[Bytes(packets)]: plain 0(0), RLE 23(1), total 23; compression: 100.0%
Apr 30 10:40:30 pbx-fs-dc kernel: block drbd1: receive bitmap stats
[Bytes(packets)]: plain 0(0), RLE 23(1), total 23; compression: 100.0%
Apr 30 10:40:30 pbx-fs-dc kernel: block drbd1: helper command:
/sbin/drbdadm before-resync-source minor-1
Apr 30 10:40:30 pbx-fs-dc kernel: block drbd1: helper command:
/sbin/drbdadm before-resync-source minor-1 exit code 0 (0x0)
Apr 30 10:40:30 pbx-fs-dc kernel: block drbd1: conn( WFBitMapS ->
SyncSource ) pdsk( Consistent -> Inconsistent )
Apr 30 10:40:30 pbx-fs-dc kernel: block drbd1: Began resync as SyncSource
(will sync 0 KB [0 bits set]).
Apr 30 10:40:30 pbx-fs-dc kernel: block drbd1: updated sync UUID
D1738D6AEB0D2ED7:A79A67DFC9150419:A79967DFC9150419:2694B00ED5C0E490
Apr 30 10:40:30 pbx-fs-dc kernel: block drbd1: Resync done (total 1 sec;
paused 0 sec; 0 K/sec)
Apr 30 10:40:30 pbx-fs-dc kernel: block drbd1: updated UUIDs
D1738D6AEB0D2ED7:0000000000000000:A79A67DFC9150419:A79967DFC9150419
Apr 30 10:40:30 pbx-fs-dc kernel: block drbd1: conn( SyncSource ->
Connected ) pdsk( Inconsistent -> UpToDate )
Apr 30 10:40:31 pbx-fs-dc Filesystem(RecFS02)[41112]: ERROR: Couldn't
unmount /data/rec02; trying cleanup with KILL
Apr 30 10:40:31 pbx-fs-dc Filesystem(RecFS02)[41112]: INFO: sending signal
KILL to: root      5756  5664  0 09:47 pts/0    Ss+    0:00 -bash
Apr 30 10:40:32 pbx-fs-dc Filesystem(RecFS02)[41112]: INFO: unmounted
/data/rec02 successfully
Apr 30 10:40:32 pbx-fs-dc lrmd[3812]: notice: operation_finished:
RecFS02_stop_0:41112:stderr [ umount: /data/rec02: target is busy. ]
Apr 30 10:40:32 pbx-fs-dc lrmd[3812]: notice: operation_finished:
RecFS02_stop_0:41112:stderr [         (In some cases useful info about
processes that use ]
Apr 30 10:40:32 pbx-fs-dc lrmd[3812]: notice: operation_finished:
RecFS02_stop_0:41112:stderr [          the device is found by lsof(8) or
fuser(1)) ]
Apr 30 10:40:32 pbx-fs-dc lrmd[3812]: notice: operation_finished:
RecFS02_stop_0:41112:stderr [ ocf-exit-reason:Couldn't unmount /data/rec02;
trying cleanup with TERM ]
Apr 30 10:40:32 pbx-fs-dc lrmd[3812]: notice: operation_finished:
RecFS02_stop_0:41112:stderr [ umount: /data/rec02: target is busy. ]
Apr 30 10:40:32 pbx-fs-dc lrmd[3812]: notice: operation_finished:
RecFS02_stop_0:41112:stderr [         (In some cases useful info about
processes that use ]
Apr 30 10:40:32 pbx-fs-dc lrmd[3812]: notice: operation_finished:
RecFS02_stop_0:41112:stderr [          the device is found by lsof(8) or
fuser(1)) ]
Apr 30 10:40:32 pbx-fs-dc lrmd[3812]: notice: operation_finished:
RecFS02_stop_0:41112:stderr [ ocf-exit-reason:Couldn't unmount /data/rec02;
trying cleanup with TERM ]
Apr 30 10:40:32 pbx-fs-dc lrmd[3812]: notice: operation_finished:
RecFS02_stop_0:41112:stderr [ umount: /data/rec02: target is busy. ]
Apr 30 10:40:32 pbx-fs-dc lrmd[3812]: notice: operation_finished:
RecFS02_stop_0:41112:stderr [         (In some cases useful info about
processes that use ]
Apr 30 10:40:32 pbx-fs-dc lrmd[3812]: notice: operation_finished:
RecFS02_stop_0:41112:stderr [          the device is found by lsof(8) or
fuser(1)) ]
Apr 30 10:40:32 pbx-fs-dc lrmd[3812]: notice: operation_finished:
RecFS02_stop_0:41112:stderr [ ocf-exit-reason:Couldn't unmount /data/rec02;
trying cleanup with TERM ]
Apr 30 10:40:32 pbx-fs-dc lrmd[3812]: notice: operation_finished:
RecFS02_stop_0:41112:stderr [ umount: /data/rec02: target is busy. ]
Apr 30 10:40:32 pbx-fs-dc lrmd[3812]: notice: operation_finished:
RecFS02_stop_0:41112:stderr [         (In some cases useful info about
processes that use ]
Apr 30 10:40:32 pbx-fs-dc lrmd[3812]: notice: operation_finished:
RecFS02_stop_0:41112:stderr [          the device is found by lsof(8) or
fuser(1)) ]
Apr 30 10:40:32 pbx-fs-dc lrmd[3812]: notice: operation_finished:
RecFS02_stop_0:41112:stderr [ ocf-exit-reason:Couldn't unmount /data/rec02;
trying cleanup with KILL ]
Apr 30 10:40:32 pbx-fs-dc crmd[3815]: notice: process_lrm_event: Operation
RecFS02_stop_0: ok (node=pbx-fs-cluster-dc, call=130, rc=0, cib-update=72,
confirmed=true)
Apr 30 10:40:32 pbx-fs-dc crmd[3815]: notice: process_lrm_event: Operation
Rec02Data_notify_0: ok (node=pbx-fs-cluster-dc, call=132, rc=0,
cib-update=0, confirmed=true)
Apr 30 10:40:32 pbx-fs-dc kernel: block drbd2: role( Primary -> Secondary )
Apr 30 10:40:32 pbx-fs-dc kernel: block drbd2: bitmap WRITE of 2 pages took
0 jiffies
Apr 30 10:40:32 pbx-fs-dc kernel: block drbd2: 2080 KB (520 bits) marked
out-of-sync by on disk bit-map.
Apr 30 10:40:32 pbx-fs-dc crmd[3815]: notice: process_lrm_event: Operation
Rec02Data_demote_0: ok (node=pbx-fs-cluster-dc, call=133, rc=0,
cib-update=73, confirmed=true)
Apr 30 10:40:32 pbx-fs-dc crmd[3815]: notice: process_lrm_event: Operation
Rec02Data_notify_0: ok (node=pbx-fs-cluster-dc, call=134, rc=0,
cib-update=0, confirmed=true)
Apr 30 10:40:32 pbx-fs-dc crmd[3815]: notice: process_lrm_event: Operation
Rec02Data_notify_0: ok (node=pbx-fs-cluster-dc, call=135, rc=0,
cib-update=0, confirmed=true)
Apr 30 10:40:35 pbx-fs-dc crmd[3815]: notice: process_lrm_event: Operation
Rec02Data_notify_0: ok (node=pbx-fs-cluster-dc, call=136, rc=0,
cib-update=0, confirmed=true)
Apr 30 10:40:35 pbx-fs-dc crmd[3815]: notice: process_lrm_event: Operation
Rec02Data_notify_0: ok (node=pbx-fs-cluster-dc, call=137, rc=0,
cib-update=0, confirmed=true)
Apr 30 10:40:36 pbx-fs-dc crmd[3815]: notice: process_lrm_event: Operation
Rec02Data_notify_0: ok (node=pbx-fs-cluster-dc, call=138, rc=0,
cib-update=0, confirmed=true)
Apr 30 10:40:36 pbx-fs-dc crmd[3815]: notice: process_lrm_event: Operation
Rec02Data_monitor_60000: ok (node=pbx-fs-cluster-dc, call=139, rc=0,
cib-update=74, confirmed=false)
Apr 30 10:40:36 pbx-fs-dc kernel: drbd rec02: Handshake successful: Agreed
network protocol version 101
Apr 30 10:40:36 pbx-fs-dc kernel: drbd rec02: Agreed to support TRIM on
protocol level
Apr 30 10:40:36 pbx-fs-dc kernel: drbd rec02: conn( WFConnection ->
WFReportParams )
Apr 30 10:40:36 pbx-fs-dc kernel: drbd rec02: Starting ack_recv thread
(from drbd_r_rec02 [19050])
Apr 30 10:40:36 pbx-fs-dc kernel: block drbd2: drbd_sync_handshake:
Apr 30 10:40:36 pbx-fs-dc kernel: block drbd2: self
71B04D381988A024:8D0BCDF6F65527CA:914700D99A398BB8:914600D99A398BB9
bits:520 flags:0
Apr 30 10:40:36 pbx-fs-dc kernel: block drbd2: peer
E8F800BB587A0E57:8D0BCDF6F65527CA:914700D99A398BB9:914600D99A398BB9 bits:0
flags:0
Apr 30 10:40:36 pbx-fs-dc kernel: block drbd2: uuid_compare()=100 by rule 90
Apr 30 10:40:36 pbx-fs-dc kernel: block drbd2: helper command:
/sbin/drbdadm initial-split-brain minor-2
Apr 30 10:40:36 pbx-fs-dc kernel: block drbd2: helper command:
/sbin/drbdadm initial-split-brain minor-2 exit code 0 (0x0)
Apr 30 10:40:36 pbx-fs-dc kernel: block drbd2: Split-Brain detected but
unresolved, dropping connection!
Apr 30 10:40:36 pbx-fs-dc kernel: block drbd2: helper command:
/sbin/drbdadm split-brain minor-2
Apr 30 10:40:36 pbx-fs-dc kernel: block drbd2: helper command:
/sbin/drbdadm split-brain minor-2 exit code 0 (0x0)
Apr 30 10:40:36 pbx-fs-dc kernel: drbd rec02: conn( WFReportParams ->
Disconnecting )
Apr 30 10:40:36 pbx-fs-dc kernel: drbd rec02: error receiving ReportState,
e: -5 l: 0!
Apr 30 10:40:36 pbx-fs-dc kernel: drbd rec02: ack_receiver terminated
Apr 30 10:40:36 pbx-fs-dc kernel: drbd rec02: Terminating drbd_a_rec02
Apr 30 10:40:36 pbx-fs-dc kernel: drbd rec02: Connection closed
Apr 30 10:40:36 pbx-fs-dc kernel: drbd rec02: conn( Disconnecting ->
StandAlone )
Apr 30 10:40:36 pbx-fs-dc kernel: drbd rec02: receiver terminated
Apr 30 10:40:36 pbx-fs-dc kernel: drbd rec02: Terminating drbd_r_rec02
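
(The StandAlone state on rec02 normally has to be cleared by hand. A sketch
of the usual manual split-brain recovery, assuming the copy on the node that
stayed up should win and the other node's changes are discarded — verify
which side's data must survive before running anything like this:)

```
# On the split-brain victim (the node whose changes will be thrown away):
drbdadm disconnect rec02
drbdadm secondary rec02
drbdadm connect --discard-my-data rec02

# On the survivor, if its connection is also StandAlone:
drbdadm connect rec02
```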


Can you help, please?

Regards
Adam Kusmirek