Pre-Requisites:
* VIOS version 2.2.0.11 SP6 or later
* FQDN configured on the VIOS servers
* no virtual optical devices mapped (remove them prior to migrating)
* FQDN and short name in /etc/hosts (see the example after this list)
* update /etc/netsvc.conf to have hosts=local,bind so that the local hosts file is checked before DNS
* same VLAN ID on all of the VIOS machines (accessible in the IVM under Shared Virtual Ethernet, Ethernet Bridge)
* SAN LUNs assigned to multiple VIOS servers
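For reference, here is a minimal sketch of the name-resolution entries on each VIOS; the IP addresses and the vios2 entry are placeholders (only vios1.company.x.com appears later in this walkthrough):
# /etc/hosts (on every VIOS in the cluster)
10.1.1.11   vios1.company.x.com   vios1
10.1.1.12   vios2.company.x.com   vios2
# /etc/netsvc.conf
hosts=local,bind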
Recently, IBM added the ability to use shared storage pools with PowerVM (VIOS), as of VIOS 2.2.0.11 SP6. This gives one the ability to set up shared storage across two or more VIOS servers, which enables Live Partition Mobility (moving a partition while the system is live, with no impact to production and no need for an outage window).
To configure this, you will need the SAN administrator to assign the shared storage to two or more VIOS servers. Once that is done, we'll scan for the new devices on each VIOS and configure them.
VIOS 1
Perform a cfgmgr to find the storage. If you already have quite a few disks, you may wish to run something like this instead:
lspv > /tmp/lspv.1 ; cfgmgr ; lspv > /tmp/lspv.2 ; diff /tmp/lspv.1 /tmp/lspv.2 | grep hdisk
Once the disks are added in, you may need to configure some specific disk attributes. In my case, I make the following changes to each of the newly discovered disks:
chdev -l hdiskX -a algorithm=round_robin -a reserve_policy=no_reserve
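If cfgmgr turned up several new disks, a quick one-liner saves some typing. This is just a sketch using the hdisk names from the rename step below; substitute whichever hdisks you actually discovered (the padmin shell is Korn-based, so this loop syntax should work there):
for d in hdisk1 hdisk2 hdisk3; do chdev -l $d -a algorithm=round_robin -a reserve_policy=no_reserve; done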
Now, as these disks will be used for shared storage, it is wise to rename them so nobody attempts to re-purpose them later and causes you grief. Example:
rendev -l hdisk1 -n repo1
rendev -l hdisk2 -n shared01
rendev -l hdisk3 -n shared02
Note: When allocating LUNs to the VIOS for shared storage, one disk is used as the repository and the remaining disks become the clustered storage. One might think the repository is a single point of failure; however, the repository information is stored in multiple locations, so rebuilding the repository disk is typically a non-issue.
Assuming the disks assigned to you weren't previously repurposed, you can validate that they are available for use with:
$ lspv -free
If the newly added disks are not showing up, then chances are they have previously stored LVM information residing within the VGDA. A quick way to check this is with the command: readvgda hdiskX | more
If you do in fact have previous VGDA information residing on the physical volume, validate that it isn't in use somewhere else. Assuming it is not in use, you can clear it with: chpv -C hdiskX
Now, let’s move on to creating the shared storage pool with a command like:
cluster -create -clustername test -repopvs repo1 -spname testpool -sppvs shared01 shared02 -hostname vios1.company.x.com
After a short wait, the cluster should be created. Verify with:
cluster -list
CLUSTER_NAME: test
CLUSTER_ID: ea2ca9923b6c11e5832d00892a1j664ab
$ cluster -status -clustername test
Cluster Name         State
test                 OK

Node Name            MTM                Partition Num  State  Pool State
vios1                xxxx-xxXyyyyyyyy   1              OK     OK
(the first group of x's is the machine type, the second set is the model, and the y's are the serial number of the unit)
Next, you will create a backing device with a command like:
mkbdsp -clustername test -sp testpool 25G -bd migtest
Lu Name:migtest
Lu Udid:906a06330f107ba83892366bce76a33b
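If you want to confirm the logical unit exists and check its usage, listing the backing devices in the pool should show it; a sketch, assuming the lssp flags behave the same at your VIOS level:
lssp -clustername test -sp testpool -bd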
The next step is to assign this backing device (virtual disk) to an LPAR. You'll need an LPAR for this part, so if you don't have one, go create one now.
Assign virtual disk to the LPAR
$ mkbdsp -clustername test -sp testpool -bd migtest -vadapter vhost0 -tn migtest_rootvg
Assigning logical unit 'migtest' as a backing device.
VTD:migtest_rootvg
Now it is just a matter of installing an OS on that disk however you see fit. If you're using a DVD image from the VIOS media repository, just create the virtual optical device and assign away. Assuming this is a new VIOS with no media repository yet, first check the free space in rootvg:
$ lssp
Pool       Size(mb)   Free(mb)   Alloc Size(mb)   BDs   Type
rootvg     76800      27968      64               0     LVPOOL
This shows just over 27GB of free space in rootvg, so for this example we'll create a 15GB virtual media repository.
mkrep -sp rootvg -size 15G
Next, you would upload an AIX DVD disk image file to vios1:/var/vio/VMLibrary. You may need to chown padmin /var/vio/VMLibrary first.
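Any transfer method works; here is a sketch using scp, where the ISO filename is just a placeholder for whatever AIX install media you have:
scp aix_install.iso padmin@vios1.company.x.com:/var/vio/VMLibrary/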
List your virtual optical devices with: lsvopt
and see the repository and its available disk image files with: lsrep
Now, you may need to create a new virtual optical device for the LPAR to install the OS from. This is done using a command similar to:
mkvdev -fbo -vadapter vhost0 -dev testDVD
Substitute your virtual server adapter for vhost0. The -dev flag gives it a name of your choosing; if you omit -dev testDVD, the VIOS will assign its own name to the virtual optical device.
With that done, you will want to make sure the virtual disk you created earlier is assigned to this LPAR. For this example, I'll assume the partition name is migtest and that it has a partition ID of 20 (represented by 0x14 below).
$ lsmap -all
SVSA            Physloc                                      Client Partition ID
--------------- -------------------------------------------- -------------------
vhost0          Uxxx.xxX.yyyyyyy-V1-C11                      0x00000014
If it isn't already mapped, assign it with:
mkbdsp -clustername test -sp testpool -bd migtest -vadapter vhost0 -tn migtest_rootvg
This is then viewable with the command: lsmap -all
or specifically for a certain virtual server adapter: lsmap -vadapter vhost0
Now load the install media into the virtual optical device, boot the LPAR, and install the OS (or do a NIM restore, whatever).
Load the DVD image into the VTD:
loadopt -disk image_name -vtd testDVD
(substitute the image file name shown by lsrep for image_name)
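Once the install no longer needs the media, the image can be ejected from the virtual optical device; a sketch using the device name created above:
unloadopt -vtd testDVD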
At the moment, you have created a single-node shared storage pool. Run the following command to add another VIOS node (per the prerequisites, use the node's FQDN):
cluster -addnode -clustername test -hostname vios2.company.x.com
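Once the node joins, re-running the earlier status command should show both nodes with a State and Pool State of OK:
$ cluster -status -clustername test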
Now that both VIOS nodes are configured, you should be able to do the migration from one to the other.
One would do the migration from within the IVM (the web interface for managing the VIOS).
Note: You should be aware of a few findings.
1) In the event that your VIOS servers are at different ioslevels, you should create the cluster on the node with the oldest ioslevel (check each node with the ioslevel command, shown after these notes); otherwise, you won't be able to join the older node to the cluster.
2) Only the padmin user has the ability to create a Logical Unit (LU) of 1GB or more. This appears to be by design, and granting it to another user would probably require editing RBAC permissions.
3) Each VIOS can be a member of ONLY one cluster (you can't have multiple clusters on a single VIOS).
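For finding number 1, compare levels by running this as padmin on each node before creating the cluster:
$ ioslevel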
Some additional commands that could be useful.
To add a disk to an existing shared storage pool:
chsp -add -clustername test -sp testpool hdisk54
To remove a disk from an existing shared storage pool:
pv -remove -clustername test -sp testpool -pv hdisk54
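After adding or removing a physical volume, you can confirm the pool's new size and free space by listing the pools in the cluster; a sketch, assuming lssp supports this form at your VIOS level:
lssp -clustername test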