[ClusterLabs] Antw: Re: SuSE12SP3 HAE SBD Communication Issue

Ulrich Windl Ulrich.Windl at rz.uni-regensburg.de
Thu Dec 27 19:15:43 UTC 2018


Hi!

Offline a SCSI disk: "echo offline > /sys/block/sd<X>/device/state". The opposite is not "online", BTW, but: "echo running > /sys/block/sd<X>/device/state".
You could also try "echo "scsi remove-single-device <MAGIC>" > /proc/scsi/scsi", where MAGIC is (AFAIR) "HOST BUS TARGET LUN".
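
For illustration (the device name sdb and the "2 0 0 1" host/bus/target/lun
address below are placeholders, not values from your setup):

  # take one path offline
  echo offline > /sys/block/sdb/device/state
  # bring it back online
  echo running > /sys/block/sdb/device/state
  # or remove the device entirely via the legacy /proc interface
  echo "scsi remove-single-device 2 0 0 1" > /proc/scsi/scsi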


Regards,
Ulrich



>>> Fulong Wang <fulong.wang at hotmail.com> 24.12.18 7:10 >>>
Yan, Klaus and everyone,

Merry Christmas!!!


Many thanks for your advice!
I added the "-v" param in "SBD_OPTS", but didn't see any apparent change in the system message log,  am i looking at a wrong place?

By the way, we want to test that when the disk access paths (multipath devices)
are lost, sbd can fence the node automatically.
What's your recommendation for this scenario?
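
For context, the kind of test we have in mind is roughly the following sketch
(the multipath map name "sbdlun" and the sdX path names are only placeholders):

  # show which paths make up the multipath device used by sbd
  multipath -ll sbdlun
  # offline every underlying path so the map loses all access to the disk
  echo offline > /sys/block/sdb/device/state
  echo offline > /sys/block/sdc/device/state
  # sbd should then self-fence the node once it can no longer read its
  # slot within the configured watchdog/msgwait timeouts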


[screenshots attached]


The "crm node fence"  did the work.

[cid:1454a9c9-fd84-4aae-9625-600c756ab587]


[cid:3917dddb-ce98-430b-9cfc-d02cc9569748]



[cid:c0fa78fd-49fa-4780-b24b-27bf85db0796]




Regards
Fulong

________________________________
From: Gao,Yan <ygao at suse.com>
Sent: Friday, December 21, 2018 20:43
To: kwenning at redhat.com; Cluster Labs - All topics related to open-source clustering welcomed; Fulong Wang
Subject: Re: [ClusterLabs] SuSE12SP3 HAE SBD Communication Issue

First thanks for your reply, Klaus!

On 2018/12/21 10:09, Klaus Wenninger wrote:
> On 12/21/2018 08:15 AM, Fulong Wang wrote:
>> Hello Experts,
>>
>> I'm new to this mailing list.
>> Please kindly forgive me if this mail has disturbed you!
>>
>> Our company is currently evaluating the usage of SuSE HAE on the x86
>> platform.
>> When simulating the storage disaster fail-over, I found that the SBD
>> communication functioned normally on SuSE11 SP4 but abnormally on
>> SuSE12 SP3.
>
> I have no experience with SBD on SLES but I know that handling of the
> logging verbosity-levels has changed recently in the upstream-repo.
> Given that it was done by Yan Gao iirc I'd assume it went into SLES.
> So changing the verbosity of the sbd-daemon might get you back
> these logs.
Yes, I think it's the issue. Could you please retrieve the latest
maintenance update for SLE12SP3 and try? Otherwise of course you could
temporarily enable verbose/debug logging by adding a couple of "-v" into
"SBD_OPTS" in /etc/sysconfig/sbd.

But frankly, it makes more sense to manually trigger fencing, for example
by "crm node fence", and see if it indeed works correctly.

> And of course you can use the list command on the other node
> to verify as well.
The "test" message in the slot might get overwritten soon by a "clear"
if the sbd daemon is running.
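
As an illustration (the device path and the node name "node2" below are
placeholders for your actual SBD device and peer node):

  # from one node: write a test message into the peer's slot
  sbd -d /dev/disk/by-id/<sbd-device> message node2 test
  # on the peer: inspect the slots; a running sbd daemon logs the test
  # message and then writes "clear" back into its own slot
  sbd -d /dev/disk/by-id/<sbd-device> list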

Regards,
   Yan


>
> Klaus
>
>> The SBD device was added during the initialization of the first
>> cluster node.
>>
>> I have requested help from the SuSE guys, but they haven't given me any
>> valuable feedback yet!
>>
>>
>> Below are some screenshots to explain what i have encountered.
>> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>>
>> On a SuSE11 SP4 HAE cluster, I ran the sbd test command as below:
>>
>>
>> Then some information showed up in the local system
>> message log:
>>
>>
>>
>> On the second node, we can see that the communication is normal by:
>>
>>
>>
>> But when I turned to a SuSE12 SP3 HAE cluster and ran the same command as
>> above:
>>
>>
>>
>> I didn't get any response in the system message log.
>>
>>
>> "systemctl status sbd" also doesn't give me any clue on this.
>>
>>
>>
>> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>>
>> What could be the reason for this abnormal behavior? Are there any
>> problems with my setup?
>> Any suggestions are appreciated!
>>
>> Thanks!
>>
>>
>> Regards
>> FuLong
>>
>>


