[ClusterLabs] serious problem with iSCSILogicalUnit
Andrei Borzenkov
arvidjaar at gmail.com
Mon Dec 16 12:38:44 EST 2019
On 16.12.2019 18:26, Stefan K wrote:
> I think I got it..
>
> It looks like (A)
> order pcs_rsc_order_set_iscsi-server_haip iscsi-server:start iscsi-lun00:start iscsi-lun01:start iscsi-lun02:start ha-ip:start symmetrical=false
It is different from the configuration you showed originally.
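For comparison, here are the two start-order sets side by side (copied by hand from the two mails, untested): in the original post iscsi-lun00 was ordered *before* the target, while variant (A) above starts the target first.

```shell
# From the original configuration (lun00 ordered before the target):
#   order ... iscsi-lun00:start iscsi-server:start iscsi-lun01:start \
#       iscsi-lun02:start ha-ip:start symmetrical=false
# Variant (A) above (target ordered first):
#   order ... iscsi-server:start iscsi-lun00:start iscsi-lun01:start \
#       iscsi-lun02:start ha-ip:start symmetrical=false
```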
> order pcs_rsc_order_set_haip_iscsi-server ha-ip:stop iscsi-lun02:stop iscsi-lun01:stop iscsi-lun00:stop iscsi-server:stop symmetrical=false
>
> and (B)
> order pcs_rsc_order_set_iscsi-server_vm_storage_ha-ip iscsi-server iscsi-lun00 iscsi-lun01 iscsi-lun02 ha-ip
>
> don't have the same meaning?!
> Because with (A) it doesn't work, but with (B) it works as expected. Can somebody explain this behavior to me?
>
Your original configuration was not symmetrical, which may explain it.
You never said anything about changing the configuration, so it is unclear
what you tested - the original statement or the statement you show now.
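If I read the set semantics correctly, the practical difference is that (A) uses symmetrical=false, so each constraint is only enforced in the stated direction and Pacemaker does not derive a matching stop/start cycle for the dependents when iscsi-server restarts, whereas (B) leaves the default symmetrical=true, where the reverse stop order is derived automatically. A minimal sketch of the symmetric form, in crm shell syntax (untested, and the constraint name is made up for illustration):

```shell
# Sketch (untested): a single symmetric ordered set, as in variant (B).
# With the default symmetrical=true, Pacemaker derives the reverse stop
# order itself, so a restart of iscsi-server (e.g. after changing
# allowed_initiators) should also stop and restart the LUNs and the IP.
crm configure order iscsi_stack_order \
    iscsi-server iscsi-lun00 iscsi-lun01 iscsi-lun02 ha-ip
```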
> best regards
> Stefan
>
>
> On Thursday, December 12, 2019 4:19:19 PM CET Stefan K wrote:
>> So it looks like it restarts the iSCSITarget but not the iSCSILogicalUnit. That makes sense, more or less, because I changed something in the iSCSITarget, but the logical units are necessary for a working iSCSI setup.. here is the log output from when I change/add the iqn..
>>
>> Dec 12 16:08:21 [7051] ha-test1 cib: info: cib_process_request: Forwarding cib_apply_diff operation for section 'all' to all (origin=local/cibadmin/2)
>> Dec 12 16:08:21 [7051] ha-test1 cib: info: cib_perform_op: Diff: --- 0.58.3 2
>> Dec 12 16:08:21 [7051] ha-test1 cib: info: cib_perform_op: Diff: +++ 0.59.0 c5423fcdc276ad43361aeb4c8081f7f4
>> Dec 12 16:08:21 [7051] ha-test1 cib: info: cib_perform_op: + /cib: @epoch=59, @num_updates=0
>> Dec 12 16:08:21 [7051] ha-test1 cib: info: cib_perform_op: + /cib/configuration/resources/primitive[@id='iscsi-server']/instance_attributes[@id='iscsi-server-instance_attributes']/nvpair[@id='iscsi-server-instance_attributes-allowed_initiators']: @value=iqn.1998-01.com.vmware:brainslug9-75488e35 iqn.1998-01.com.vmware:brainslug10-5564u4325 iqn.1993-08.org.debian:01:fee35be01c4d iqn.1998-01.com.vmware:brainslug10-34ad648763 iqn.1998-01.com.vmware:brainslug66-75488e12 iqn.1998-01.com.vmware:brai
>> Dec 12 16:08:21 [7051] ha-test1 cib: info: cib_process_request: Completed cib_apply_diff operation for section 'all': OK (rc=0, origin=ha-test1/cibadmin/2, version=0.59.0)
>> Dec 12 16:08:21 [7051] ha-test1 cib: info: cib_file_backup: Archived previous version as /var/lib/pacemaker/cib/cib-69.raw
>> Dec 12 16:08:21 [7056] ha-test1 crmd: info: do_lrm_rsc_op: Performing key=12:116:0:9fd4e826-f0ba-4864-8861-2c585d644d1c op=iscsi-server_stop_0
>> Dec 12 16:08:21 [7053] ha-test1 lrmd: info: log_execute: executing - rsc:iscsi-server action:stop call_id:62
>> Dec 12 16:08:21 [7051] ha-test1 cib: info: cib_file_write_with_digest: Wrote version 0.59.0 of the CIB to disk (digest: dcbab759c4d0e7f38234434bfbe7ca8e)
>> Dec 12 16:08:21 [7051] ha-test1 cib: info: cib_file_write_with_digest: Reading cluster configuration file /var/lib/pacemaker/cib/cib.6YgOXO (digest: /var/lib/pacemaker/cib/cib.gOKlWj)
>> iSCSITarget(iscsi-server)[4524]: 2019/12/12_16:08:22 INFO: Deleted Target iqn.2003-01.org.linux-iscsi.vm-storage.x8664:sn.cf6fa66tgyh3.
>> Dec 12 16:08:22 [7053] ha-test1 lrmd: info: log_finished: finished - rsc:iscsi-server action:stop call_id:62 pid:4524 exit-code:0 exec-time:293ms queue-time:0ms
>> Dec 12 16:08:22 [7056] ha-test1 crmd: notice: process_lrm_event: Result of stop operation for iscsi-server on ha-test1: 0 (ok) | call=62 key=iscsi-server_stop_0 confirmed=true cib-update=58
>> Dec 12 16:08:22 [7051] ha-test1 cib: info: cib_process_request: Forwarding cib_modify operation for section status to all (origin=local/crmd/58)
>> Dec 12 16:08:22 [7051] ha-test1 cib: info: cib_perform_op: Diff: --- 0.59.0 2
>> Dec 12 16:08:22 [7051] ha-test1 cib: info: cib_perform_op: Diff: +++ 0.59.1 (null)
>> Dec 12 16:08:22 [7051] ha-test1 cib: info: cib_perform_op: + /cib: @num_updates=1
>> Dec 12 16:08:22 [7051] ha-test1 cib: info: cib_perform_op: + /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='iscsi-server']/lrm_rsc_op[@id='iscsi-server_last_0']: @operation_key=iscsi-server_stop_0, @operation=stop, @transition-key=12:116:0:9fd4e826-f0ba-4864-8861-2c585d644d1c, @transition-magic=0:0;12:116:0:9fd4e826-f0ba-4864-8861-2c585d644d1c, @call-id=62, @last-run=1576163301, @last-rc-change=1576163301, @exec-time=293
>> Dec 12 16:08:22 [7051] ha-test1 cib: info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=ha-test1/crmd/58, version=0.59.1)
>> Dec 12 16:08:22 [7056] ha-test1 crmd: info: do_lrm_rsc_op: Performing key=3:116:0:9fd4e826-f0ba-4864-8861-2c585d644d1c op=iscsi-server_start_0
>> Dec 12 16:08:22 [7053] ha-test1 lrmd: info: log_execute: executing - rsc:iscsi-server action:start call_id:63
>> iSCSITarget(iscsi-server)[4564]: 2019/12/12_16:08:22 INFO: Parameter auto_add_default_portal is now 'false'.
>> iSCSITarget(iscsi-server)[4564]: 2019/12/12_16:08:22 INFO: Created target iqn.2003-01.org.linux-iscsi.vm-storage.x8664:sn.cf6fa66tgyh3. Created TPG 1.
>> iSCSITarget(iscsi-server)[4564]: 2019/12/12_16:08:22 INFO: Using default IP port 3260 Created network portal 172.16.101.166:3260.
>> iSCSITarget(iscsi-server)[4564]: 2019/12/12_16:08:22 INFO: Created Node ACL for iqn.1998-01.com.vmware:brainslug9-75488e35
>> iSCSITarget(iscsi-server)[4564]: 2019/12/12_16:08:23 INFO: Created Node ACL for iqn.1998-01.com.vmware:brainslug10-5564u4325
>> iSCSITarget(iscsi-server)[4564]: 2019/12/12_16:08:23 INFO: Created Node ACL for iqn.1993-08.org.debian:01:fee35be01c4d
>> iSCSITarget(iscsi-server)[4564]: 2019/12/12_16:08:23 INFO: Created Node ACL for iqn.1998-01.com.vmware:brainslug10-34ad648763
>> iSCSITarget(iscsi-server)[4564]: 2019/12/12_16:08:23 INFO: Created Node ACL for iqn.1998-01.com.vmware:brainslug66-75488e12
>> iSCSITarget(iscsi-server)[4564]: 2019/12/12_16:08:23 INFO: Created Node ACL for iqn.1998-01.com.vmware:brainslug99-5564u4123
>> iSCSITarget(iscsi-server)[4564]: 2019/12/12_16:08:24 INFO: Parameter authentication is now '0'.
>> Dec 12 16:08:24 [7053] ha-test1 lrmd: info: log_finished: finished - rsc:iscsi-server action:start call_id:63 pid:4564 exit-code:0 exec-time:1781ms queue-time:0ms
>> Dec 12 16:08:24 [7056] ha-test1 crmd: info: action_synced_wait: Managed iSCSITarget_meta-data_0 process 4695 exited with rc=0
>> Dec 12 16:08:24 [7051] ha-test1 cib: info: cib_process_request: Forwarding cib_modify operation for section status to all (origin=local/crmd/59)
>> Dec 12 16:08:24 [7056] ha-test1 crmd: notice: process_lrm_event: Result of start operation for iscsi-server on ha-test1: 0 (ok) | call=63 key=iscsi-server_start_0 confirmed=true cib-update=59
>> Dec 12 16:08:24 [7051] ha-test1 cib: info: cib_perform_op: Diff: --- 0.59.1 2
>> Dec 12 16:08:24 [7051] ha-test1 cib: info: cib_perform_op: Diff: +++ 0.59.2 (null)
>> Dec 12 16:08:24 [7051] ha-test1 cib: info: cib_perform_op: + /cib: @num_updates=2
>> Dec 12 16:08:24 [7051] ha-test1 cib: info: cib_perform_op: + /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='iscsi-server']/lrm_rsc_op[@id='iscsi-server_last_0']: @operation_key=iscsi-server_start_0, @operation=start, @transition-key=3:116:0:9fd4e826-f0ba-4864-8861-2c585d644d1c, @transition-magic=0:0;3:116:0:9fd4e826-f0ba-4864-8861-2c585d644d1c, @call-id=63, @last-run=1576163302, @last-rc-change=1576163302, @exec-time=1781, @op-digest=549b6bf42c2d944da4df2c1d2d675b
>> Dec 12 16:08:24 [7051] ha-test1 cib: info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=ha-test1/crmd/59, version=0.59.2)
>> Dec 12 16:08:29 [7051] ha-test1 cib: info: cib_process_ping: Reporting our current digest to ha-test2: 8bf64451f91add76d89a608b6f51a214 for 0.59.2 (0x55998d145df0 0)
>>
>>
>>
>>
>> On Wednesday, December 11, 2019 3:58:51 PM CET Stefan K wrote:
>>> Hello,
>>>
>>> I have a working HA setup with iSCSI and ZFS, but last week I added an iSCSI allowed initiator, and then it happened - my whole VMware infrastructure failed because iSCSI stopped working.. today I have time to take a closer look into this..
>>>
>>> I created 2 VMs and put (more or less) the same config into them.
>>> What I did:
>>> - I created an iSCSI target with allowed initiators
>>> - I created iSCSI logical units
>>>
>>> but I got this:
>>>
>>> targetcli
>>> targetcli shell version 2.1.fb43
>>> Copyright 2011-2013 by Datera, Inc and others.
>>> For help on commands, type 'help'.
>>>
>>> /> ls
>>> o- / ......................................................................................................................... [...]
>>> o- backstores .............................................................................................................. [...]
>>> | o- block .................................................................................................. [Storage Objects: 3]
>>> | | o- iscsi-lun00 .................................................................. [/dev/loop1 (1.0GiB) write-thru deactivated]
>>> | | o- iscsi-lun01 .................................................................. [/dev/loop2 (1.0GiB) write-thru deactivated]
>>> | | o- iscsi-lun02 ................................................................. [/dev/loop3 (0 bytes) write-thru deactivated]
>>> | o- fileio ................................................................................................. [Storage Objects: 0]
>>> | o- pscsi .................................................................................................. [Storage Objects: 0]
>>> | o- ramdisk ................................................................................................ [Storage Objects: 0]
>>> o- iscsi ............................................................................................................ [Targets: 1]
>>> | o- iqn.2003-01.org.linux-iscsi.vm-storage.x8664:sn.cf6fa66tgyh3 ...................................................... [TPGs: 1]
>>> | o- tpg1 ............................................................................................... [no-gen-acls, no-auth]
>>> | o- acls .......................................................................................................... [ACLs: 4]
>>> | | o- iqn.1993-08.org.debian:01:fee35be01c4d ............................................................... [Mapped LUNs: 0]
>>> | | o- iqn.1998-01.com.vmware:brainslug10-34ad648763 ........................................................ [Mapped LUNs: 0]
>>> | | o- iqn.1998-01.com.vmware:brainslug10-5564u4325 ......................................................... [Mapped LUNs: 0]
>>> | | o- iqn.1998-01.com.vmware:brainslug9-75488e35 ........................................................... [Mapped LUNs: 0]
>>> | o- luns .......................................................................................................... [LUNs: 0]
>>> | o- portals .................................................................................................... [Portals: 1]
>>> | o- 172.16.101.166:3260 .............................................................................................. [OK]
>>> o- loopback ......................................................................................................... [Targets: 0]
>>> o- vhost ............................................................................................................ [Targets: 0]
>>>
>>>
>>> Here you can see the LUNs are missing. When I move the resource to the other node, the LUNs are shown again; if I then add/remove/change an "allowed_initiators" entry, it happens again - all LUNs are gone. And that is a very serious problem for us.
>>>
>>> So my question is: did I misconfigure something, or is this a bug? My Pacemaker config looks like the following:
>>>
>>> crm conf sh
>>> node 1: ha-test1 \
>>> attributes \
>>> attributes standby=off maintenance=off
>>> node 2: ha-test2 \
>>> attributes \
>>> attributes standby=off
>>> primitive ha-ip IPaddr2 \
>>> params ip=172.16.101.166 cidr_netmask=24 nic=ens192 \
>>> op start interval=0s timeout=20s \
>>> op stop interval=0s timeout=20s \
>>> op monitor interval=10s timeout=20s \
>>> meta target-role=Started
>>> primitive iscsi-lun00 iSCSILogicalUnit \
>>> params implementation=lio-t target_iqn="iqn.2003-01.org.linux-iscsi.vm-storage.x8664:sn.cf6fa66tgyh3" lun=0 lio_iblock=0 path="/dev/loop1" \
>>> op start interval=0 trace_ra=1 \
>>> op stop interval=0 trace_ra=1 \
>>> meta target-role=Started
>>> primitive iscsi-lun01 iSCSILogicalUnit \
>>> params implementation=lio-t target_iqn="iqn.2003-01.org.linux-iscsi.vm-storage.x8664:sn.cf6fa66tgyh3" lun=1 lio_iblock=1 path="/dev/loop2" \
>>> meta
>>> primitive iscsi-lun02 iSCSILogicalUnit \
>>> params implementation=lio-t target_iqn="iqn.2003-01.org.linux-iscsi.vm-storage.x8664:sn.cf6fa66tgyh3" lun=2 lio_iblock=2 path="/dev/loop3" \
>>> meta
>>> primitive iscsi-server iSCSITarget \
>>> params implementation=lio-t iqn="iqn.2003-01.org.linux-iscsi.vm-storage.x8664:sn.cf6fa66tgyh3" portals="172.16.101.166:3260" allowed_initiators="iqn.1998-01.com.vmware:brainslug9-75488e35 iqn.1998-01.com.vmware:brainslug10-5564u4325 iqn.1993-08.org.debian:01:fee35be01c4d iqn.1998-01.com.vmware:brainslug10-34ad648763" \
>>> meta
>>> colocation pcs_rsc_colocation_set_ha-ip_vm_storage_iscsi-server inf: ha-ip iscsi-server iscsi-lun00 iscsi-lun01 iscsi-lun02
>>> order pcs_rsc_order_set_ha-ip_iscsi-server_vm_storage ha-ip:stop iscsi-lun00:stop iscsi-lun01:stop iscsi-lun02:stop iscsi-server:stop symmetrical=false
>>> order pcs_rsc_order_set_iscsi-server_vm_storage_ha-ip iscsi-lun00:start iscsi-server:start iscsi-lun01:start iscsi-lun02:start ha-ip:start symmetrical=false
>>> property cib-bootstrap-options: \
>>> have-watchdog=false \
>>> dc-version=1.1.16-94ff4df \
>>> cluster-infrastructure=corosync \
>>> cluster-name=ha-vmstorage \
>>> no-quorum-policy=stop \
>>> stonith-enabled=false \
>>> last-lrm-refresh=1576056627
>>> rsc_defaults rsc_defaults-options: \
>>> resource-stickiness=100
>>>
>>>
>>> The system is running on Debian Stretch.
>>>
>>> Thank you very much for your help!
>>>
>>> best regards
>>> Stefan
>>>
>>>
>>> _______________________________________________
>>> Manage your subscription:
>>> https://lists.clusterlabs.org/mailman/listinfo/users
>>>
>>> ClusterLabs home: https://www.clusterlabs.org/
>>>
>>
>>
>>
>>
>
>
>
>