Recently I wrote an article on configuring a Shared Storage Pool (SSP) on the VIOS. In this article, we're going to take an active LPAR (VIOC) and migrate all of its PVs over to SSP-backed devices. This will allow us to do Live Partition Mobility from one VIOS to another. In this case the cluster has already been created and it spans 3 VIOS nodes.
First, on the VIOS we'll need to determine which virtual slots are in use and what the LPAR's current maximum number of virtual slots is. The LPAR name in this example is "dev3".
$ lshwres -r virtualio --rsubtype slot --level slot |grep dev3
slot_num=0,lpar_name=dev3,lpar_id=2,config=serial,state=1,drc_name=U7778.23X.06ABCDA-V2-C0
slot_num=1,lpar_name=dev3,lpar_id=2,config=serial,state=1,drc_name=U7778.23X.06ABCDA-V2-C1
slot_num=2,lpar_name=dev3,lpar_id=2,config=scsi,state=1,drc_name=U7778.23X.06ABCDA-V2-C2
slot_num=3,lpar_name=dev3,lpar_id=2,config=reserved,state=0,drc_name=U7778.23X.06ABCDA-V2-C3
slot_num=4,lpar_name=dev3,lpar_id=2,config=eth,state=1,drc_name=U7778.23X.06ABCDA-V2-C4
$ lshwres -r virtualio --rsubtype slot --level lpar | grep dev3
lpar_name=dev3,lpar_id=2,curr_max_virtual_slots=10,pend_max_virtual_slots=10
OK, the LPAR "dev3" is currently configured for a maximum of 10 virtual slots. The system in question has 4 PVs that we want to move, and the plan is a one-to-one mapping of disk to virtual SCSI adapter for performance. We can put one of the disks on the already existing (and currently used) vSCSI slot, and then add three more. Since the last slot above is #4, we'll add slots 5-7.
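In this case 10 slots is plenty. If the LPAR were already at its maximum, you would have to raise that first; a rough sketch, assuming an IVM-managed system where the default profile name matches the LPAR name (on an HMC you would also pass -m <managed system>, and the new maximum only takes effect once the partition is reactivated):
$ chsyscfg -r prof -i "name=dev3,max_virtual_slots=20"
Now add the three extra slots: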
$ chhwres -r virtualio --rsubtype scsi -p dev3 -o a -s 5
/usr/ios/lpm/sbin/lpmdrmgr drmgr -c slot -s 'U7778.23X.06ABCDA-V1-C20' -a
U7778.23X.06ABCDA-V1-C20
U7778.23X.06ABCDA-V2-C5
$ chhwres -r virtualio --rsubtype scsi -p dev3 -o a -s 6
/usr/ios/lpm/sbin/lpmdrmgr drmgr -c slot -s 'U7778.23X.06ABCDA-V1-C21' -a
U7778.23X.06ABCDA-V1-C21
U7778.23X.06ABCDA-V2-C6
$ chhwres -r virtualio --rsubtype scsi -p dev3 -o a -s 7
/usr/ios/lpm/sbin/lpmdrmgr drmgr -c slot -s 'U7778.23X.06ABCDA-V1-C22' -a
U7778.23X.06ABCDA-V1-C22
U7778.23X.06ABCDA-V2-C7
Confirm they were created with:
$ lsmap -all |grep 0x00000002
vhost0 U7778.23X.06ABCDA-V1-C15 0x00000002
vhost3 U7778.23X.06ABCDA-V1-C20 0x00000002
vhost4 U7778.23X.06ABCDA-V1-C21 0x00000002
vhost7 U7778.23X.06ABCDA-V1-C22 0x00000002
Here I grepped for LPAR ID 2 (0x00000002); adjust that to match the LPAR ID in your environment.
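If you're not sure of the LPAR ID, it can be looked up with lssyscfg (illustrative output shown for the example LPAR):
$ lssyscfg -r lpar -F name,lpar_id | grep dev3
dev3,2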
Next, let's create the LUs (Logical Units) as backing devices. These will become the PVs on the client system, and we'll create them against the vhosts (virtual SCSI server adapters) listed above.
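Before creating the LUs, it's worth confirming the pool has enough free space; a quick check, assuming the cluster and pool are both named omega as in this example:
$ lssp -clustername omega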
$ lu -create -clustername omega -sp omega -lu dev3_rootvg -size 25G -vadapter vhost0
Lu Name:dev3_rootvg
Lu Udid:fa9eb775c2d8511c32ffff95e6576ce5
Assigning logical unit 'dev3_rootvg' as a backing device.
VTD:vtscsi1
$ lu -create -clustername omega -sp omega -lu dev3_d1 -size 24G -vadapter vhost3
Lu Name:dev3_d1
Lu Udid:c7303b2ab21789026021c895a89f8947
Assigning logical unit 'dev3_d1' as a backing device.
VTD:vtscsi2
$ lu -create -clustername omega -sp omega -lu dev3_d2 -size 24G -vadapter vhost4
Lu Name:dev3_d2
Lu Udid:9fa396afc76dedc4c2e7e44004ebcb41
Assigning logical unit 'dev3_d2' as a backing device.
VTD:vtscsi3
$ lu -create -clustername omega -sp omega -lu dev3_d3 -size 24G -vadapter vhost7
Lu Name:dev3_d3
Lu Udid:b50b5454c02ef012a87dc6e19b8d897e
Assigning logical unit 'dev3_d3' as a backing device.
VTD:vtscsi4
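The new LUs can also be listed from any node in the cluster; a quick sanity check using the same cluster and pool names:
$ lssp -clustername omega -sp omega -bd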
On the VIOC, we’ll look at the existing (relevant) resources, and then scan the bus for our new additions.
dev3:~# lsdev |grep vscsi
vscsi0 Available Virtual SCSI Client Adapter
dev3:~# lspv
hdisk0 0009cbaac48a8574 rootvg active
hdisk1 0009cbaac4c2e4a9 othervg active
hdisk2 0009cbaac4c2e4ee othervg active
hdisk5 0009cbaac4c2e534 othervg active
dev3:~# cfgmgr
dev3:~# lspv
hdisk0 0009cbaac48a8574 rootvg active
hdisk1 0009cbaac4c2e4a9 othervg active
hdisk2 0009cbaac4c2e4ee othervg active
hdisk5 0009cbaac4c2e534 othervg active
hdisk3 none None
hdisk4 none None
hdisk6 none None
hdisk7 none None
dev3:~# lsdev |grep vscsi
vscsi0 Available Virtual SCSI Client Adapter
vscsi1 Available Virtual SCSI Client Adapter
vscsi2 Available Virtual SCSI Client Adapter
vscsi3 Available Virtual SCSI Client Adapter
Next, update a couple of attributes on the new disks: hcheck_interval enables periodic MPIO path health checking, and queue_depth raises the number of I/Os the disk driver will keep in flight.
dev3:~# for i in 3 4 6 7 ; do chdev -l hdisk${i} -a hcheck_interval=30 -a queue_depth=32 ; done
hdisk3 changed
hdisk4 changed
hdisk6 changed
hdisk7 changed
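You can confirm the new values stuck with lsattr, using the same hdisk numbers as above:
dev3:~# for i in 3 4 6 7 ; do lsattr -El hdisk${i} -a hcheck_interval -a queue_depth ; done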
= Migrate rootvg =
dev3:~# migratepv hdisk0 hdisk3
0516-1011 migratepv: Logical volume hd5 is labeled as a boot logical volume.
0516-1246 migratepv: If hd5 is the boot logical volume, please run 'chpv -c hdisk0'
as root user to clear the boot record and avoid a potential boot
off an old boot image that may reside on the disk from which this
logical volume is moved/removed.
migratepv: boot logical volume hd5 migrated. Please remember to run
bosboot, specifying /dev/hdisk3 as the target physical boot device.
Also, run bootlist command to modify bootlist to include /dev/hdisk3.
dev3:~# chpv -c hdisk0
dev3:~# bosboot -ad /dev/hdisk3
bosboot: Boot image is 51228 512 byte blocks.
dev3:~# bootlist -m normal hdisk3
dev3:~# bootlist -m normal -o
hdisk3 blv=hd5 pathid=0
Confirmation:
dev3:~# ls -l /dev/rhdisk3 /dev/ipldevice
crw------- 2 root system 13, 5 Sep 30 11:43 /dev/ipldevice
crw------- 2 root system 13, 5 Sep 30 11:43 /dev/rhdisk3
dev3:~# ls -l /dev/rhd5 /dev/ipl_blv
crw-rw---- 2 root system 10, 1 May 27 11:18 /dev/ipl_blv
crw-rw---- 2 root system 10, 1 May 27 11:18 /dev/rhd5
dev3:~# lsvg -p rootvg
rootvg:
PV_NAME PV STATE TOTAL PPs FREE PPs FREE DISTRIBUTION
hdisk0 active 800 800 160..160..160..160..160
hdisk3 active 799 102 17..13..00..00..72
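hdisk0 now shows all of its PPs free, so nothing is left on it. A belt-and-suspenders check before it gets removed later is to make sure no logical volumes remain:
dev3:~# lspv -l hdisk0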
= Migrate PVs for othervg =
dev3:~# extendvg othervg hdisk4 hdisk6 hdisk7
0516-1254 extendvg: Changing the PVID in the ODM.
0516-1254 extendvg: Changing the PVID in the ODM.
0516-1254 extendvg: Changing the PVID in the ODM.
dev3:~# migratepv hdisk1 hdisk4 hdisk6 hdisk7
dev3:~# migratepv hdisk2 hdisk4 hdisk6 hdisk7
dev3:~# migratepv hdisk5 hdisk4 hdisk6 hdisk7
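These migrations can take a while depending on how much data is on the disks. Once they complete, it doesn't hurt to confirm the old PVs no longer hold any logical volumes:
dev3:~# for i in hdisk1 hdisk2 hdisk5 ; do lspv -l $i ; done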
= Get EMC VPD information on old PVs =
Get the VPD information so that, once you remove the disks, they can be reclaimed by the Storage Administrator.
From the VIOS:
$ lsmap -vadapter vhost0
## Yours may be a different vhost adapter. If you don't know which one, use lsmap -all instead.
SVSA Physloc Client Partition ID
--------------- -------------------------------------------- ------------------
vhost0 U7778.23X.06ABCDA-V1-C15 0x00000002
VTD vtopt1
Status Available
LUN 0x8500000000000000
Backing device
Physloc
Mirrored N/A
VTD vtscsi1
Status Available
LUN 0x8600000000000000
Backing device dev3_rootvg.fa9eb775c2d8511c32ffff95e6576ce5
Physloc
Mirrored N/A
VTD vtscsi44
Status Available
LUN 0x8100000000000000
Backing device hdisk91
Physloc U78A5.001.WIH8668-P1-C12-T2-W500009720849B924-L2E000000000000
Mirrored false
VTD vtscsi45
Status Available
LUN 0x8200000000000000
Backing device hdisk92
Physloc U78A5.001.WIH8668-P1-C12-T2-W500009720849B924-L2F000000000000
Mirrored false
VTD vtscsi46
Status Available
LUN 0x8300000000000000
Backing device hdisk93
Physloc U78A5.001.WIH8668-P1-C12-T2-W500009720849B924-L30000000000000
Mirrored false
VTD vtscsi47
Status Available
LUN 0x8400000000000000
Backing device hdisk94
Physloc U78A5.001.WIH8668-P1-C12-T2-W500009720849B924-L31000000000000
Mirrored false
In the case above, the PVs that will be removed and reclaimed by the Storage Administrator are hdisk91 through hdisk94.
= Proper removal of PVs from VIOC =
Note: It would be advisable to reboot the LPAR/VIOC first to make sure it comes back up cleanly on the new disks. Assuming it does, you can proceed with removing the old storage and providing the EMC VPD to the storage admin for reclaiming. On AIX, the EMC VPD will be empty (or contain useless data) if you haven't previously installed the AIX_ODM_Definitions from EMC.
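A quick way to check whether the EMC ODM filesets are present on the VIOS (run from the root shell; exact fileset names vary by array family, so treat the grep as a loose match):
# lslpp -l | grep -i emc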
dev3:~# reducevg rootvg hdisk0
dev3:~# reducevg othervg hdisk1
dev3:~# reducevg othervg hdisk2
dev3:~# reducevg othervg hdisk5
dev3:~# for i in hdisk0 hdisk1 hdisk2 hdisk5; do rmdev -dl $i ; done
hdisk0 deleted
hdisk1 deleted
hdisk2 deleted
hdisk5 deleted
OK, the disks have been removed from their volume groups and deleted from the VIOC. Now they need to be removed from the VIOS.
= Remove PVs from the VIOS =
The PVs have been removed from the client LPAR (VIOC), so their virtual mappings on the VIOS need to be removed as well. Since you already identified the backing devices (hdisk91 through hdisk94) above, run these commands in the restricted shell:
$ rmvdev -vtd vtscsi44
$ rmvdev -vtd vtscsi45
$ rmvdev -vtd vtscsi46
$ rmvdev -vtd vtscsi47
Next, get the EMC VPD information (to provide to the storage admin). These are standard AIX commands run in a loop, so do this from the root shell on the VIOS (oem_setup_env) rather than the restricted shell:
# for i in {91..94} ; do lscfg -vpl hdisk${i} |grep VPD ; done
LIC Node VPD................0C43
LIC Node VPD................0C44
LIC Node VPD................0C45
LIC Node VPD................0C46
Clear the PVID from the disks with:
# for i in {91..94} ; do chdev -l hdisk${i} -a pv=clear ; done
Now delete the PVs with:
# for i in {91..94} ; do rmdev -dl hdisk${i} ; done
hdisk91 deleted
hdisk92 deleted
hdisk93 deleted
hdisk94 deleted
Congratulations. You have migrated all of the LUNs over to SSP-backed logical units. You can now use the IVM web interface to migrate the LPAR from one VIOS to another.
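If you prefer the command line over the web interface, the same move can be validated and then run with migrlpar; a sketch assuming the target managed system is named vios2 (substitute your own target, and add the remote --ip/-u options if the target is managed by a different IVM):
$ migrlpar -o v -t vios2 -p dev3
$ migrlpar -o m -t vios2 -p dev3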