[ClusterLabs] Antw: Re: lvm on shared storage and a lot of...
Ulrich Windl
Ulrich.Windl at rz.uni-regensburg.de
Tue Apr 18 10:22:01 EDT 2017
>>> lejeczek <peljasz at yahoo.co.uk> wrote on 18.04.2017 at 16:14 in message
<bc95febd-8269-f2f7-30a7-8729b44b0f01 at yahoo.co.uk>:
>
> On 18/04/17 14:45, Digimer wrote:
>> On 18/04/17 07:31 AM, lejeczek wrote:
>>> .. device_block & device_unblock in dmesg.
>>>
>>> and I see that the LVM resource would fail.
>>> This to me seems to happen randomly, or I fail to spot a pattern.
>>>
>>> Shared storage is a sas3 enclosure.
>>> I believe I follow docs on LVM to the letter. I don't know what could be
>>> the problem.
>>>
>>> would you suggest ways to troubleshoot it? Is it faulty/failing hardware?
>>>
>>> many thanks,
>>> L.
>> LVM or clustered LVM?
>>
> no clvmd
> The resource does start and the fs mounts, but when
> I start using it more intensively I get more of the
> block/unblock messages; after a while the mountpoint
> resource fails, and then the LVM resource too.
> It only gets worse afterwards: even after I deleted the
> resources, I began to see, e.g.:
>
> [ 6242.606870] sd 7:0:32:0: device_unblock and setting to
> running, handle(0x002c)
> [ 6334.248617] sd 7:0:18:0: [sdy] tag#0 FAILED Result:
> hostbyte=DID_OK driverbyte=DRIVER_SENSE
> [ 6334.248633] sd 7:0:18:0: [sdy] tag#0 Sense Key : Not
> Ready [current]
> [ 6334.248640] sd 7:0:18:0: [sdy] tag#0 Add. Sense: Logical
> unit is in process of becoming ready
> [ 6334.248647] sd 7:0:18:0: [sdy] tag#0 CDB: Read(10) 28 00
> 00 00 00 00 00 00 08 00
> [ 6334.248652] blk_update_request: I/O error, dev sdy, sector 0
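When the kernel logs device_block/device_unblock like this, the affected SCSI devices are temporarily taken offline by the transport layer. A quick way to correlate the log with the current device state is to read sysfs directly; this is a minimal sketch using the standard Linux sysfs layout (the device name sdy from the log above is just an example, and the loop simply skips hosts with no SCSI disks):

```shell
#!/bin/sh
# Print the SCSI midlayer state of every sd* device.
# "running" is normal; "blocked" or "offline" matches the
# device_block messages seen in dmesg.
for state in /sys/block/sd*/device/state; do
    [ -e "$state" ] || continue     # no SCSI disks on this machine
    printf '%s: %s\n' "$state" "$(cat "$state")"
done
echo "scan complete"
```

Running this repeatedly while exercising the filesystem would show whether the same targets keep flipping to blocked, which points at a transport (cabling/expander) problem rather than LVM itself.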
Silly question: Do you have a multi-initiator setup where both initiators use the same ID? Do your initiators have the highest priority (over the targets)?
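On a SAS topology the initiator identity can be checked from sysfs without vendor tools; each phy exposes its SAS address there. This is a sketch assuming the standard Linux sas_phy sysfs class (it prints nothing on machines without SAS hardware), so the two cluster nodes can be compared for duplicate addresses:

```shell
#!/bin/sh
# Print the SAS address of each local phy. Run on both nodes and
# compare: two initiators sharing one address on the same enclosure
# can produce exactly this kind of intermittent device blocking.
for addr in /sys/class/sas_phy/phy-*/sas_address; do
    [ -e "$addr" ] || continue      # no SAS phys present
    printf '%s: %s\n' "${addr%/sas_address}" "$(cat "$addr")"
done
echo "done"
```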
Regards,
Ulrich