Prerequisites:
- All VIOS nodes are below 2.2.6.31 (old DB standard)
- Shared Storage Pools are being used
- vSCSI devices are used (optional)
- An additional PV disk to install VIOS 3.1 on
- Systems managed by a Hardware Management Console
- SAN drivers disk (if required)
NOTE: VIOS 3.1 is based on AIX 7.2 TL3, so any NIM servers in the environment will also need to be running at least AIX 7.2 TL3 SP3.
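A quick way to confirm a NIM master's level (assuming you have root SSH access to it) is to run: oslevel -s
The output should report 7200-03-03 or later.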
Reference:
- https://www.ibm.com/support/pages/node/1110195#80 # Nigel’s info on the upgrade
- https://www.ibm.com/support/pages/upgrading-vios-31 # IBM Upgrade steps
- http://www.bit.ly/powersystemsvug # VIOS 3.1 features
- https://www.ibm.com/support/pages/node/795288 # generate the VIOS 3.1.0.0 mksysb
- https://www.ibm.com/support/pages/how-clone-powervm-vios-rootvg # how to clone a PowerVM VIOS rootvg
- https://www.ibm.com/support/knowledgecenter/TI0003M/p8hb1/p8hb1_vios_backup_backup.htm #backing up the Virtual I/O Server
In the environment this was tested in, EMC SAN disks were being used, and in particular they were using EMC’s AIX ODM definitions. These drivers provide more functionality than the standard AIX MPIO drivers. However, because they are used, a custom VIOS mksysb image will need to be generated.
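If you are unsure which EMC ODM filesets an existing VIOS is using, a quick check from the root shell (oem_setup_env) is:
lslpp -L | grep -i emc #(list any installed EMC ODM filesets)
lsdev -Cc disk #(confirm the disks are configured with the EMC device description)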
Phase I: Download latest VIO image
- Download the latest VIOS version from IBM’s Entitled Software Support (as of this writing it was 3.1.1.25). Preferably download the flash ISO image.
- Transfer the ISO to an FTP or NFS resource (it will be uploaded to the HMC later)
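For example, to copy the ISO to an NFS server (the file and host names below are hypothetical):
scp vios_3.1.1.25_flash.iso user@nimserver:/export/isos/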
Phase II: Create Virtual I/O Server partition
The next step is to create a Virtual I/O Server partition (if re-using an existing VIO partition, this phase can be skipped). Open a web browser, navigate to the URL of your HMC, and log in with your HMC credentials. Default: hscroot:PASSWORD
ex. https://hmc1.example.com
Next create vio profile in HMC by navigating to:
Resources, All Systems, <managed system>, Virtual I/O Servers
Click the “Create Virtual I/O Server” button. Navigate through the menus, providing the name for the partition, processor, memory, and Physical I/O requirements, and finally press the Finish button. We will refer to this partition as “myvio31”.
Next you will want to upload the ISO file (downloaded from phase I). In the HMC navigate to “HMC Management”, then “Templates and OS Images”. Select the “VIOS Images” tab and click the link for “Manage Virtual I/O Server Image”. Click the “Import New Virtual I/O Server Image” button to add in a new ISO image. Indicate how you are going to add it in (NFS, FTP, etc) and navigate through the menus. When you are done, the ISO file will be available from the HMC.
Phase III: Install VIO using installios on HMC
The VIO can be installed from the HMC GUI or the HMC CLI (command-line interface). This example will install the VIOS using the HMC CLI, so open an SSH connection to your HMC:
ex. ssh hscroot@hmc1.example.com
Next, type installios
and hit enter to start the interactive installation, and provide the relevant details.
- Select Managed System
- Select VIO Partition
- Select Profile
- Select iso source: /extra/viosimages/<uploadedfile.iso>/dvdimage.v1.iso
- Provide network IP address for new vio server
- Provide network netmask
- Provide network gateway
- Network Client Speed – set as auto
- Network Client Duplex – set as auto
- Numeric VLAN tag priority (press Enter if not applicable)
- Numeric tag identifier (press Enter if not applicable)
- Select the network device to use (make sure you select the one with an IP from the network card or the loopback adapter, depending on your use case)
- Select the proper device from the discovered network devices
- Provide second ISO absolute path (if required)
With the steps above, the VIO should install properly. If it doesn’t, check the log to see if it was because the “network wasn’t pingable”. If so, this usually indicates either that you selected the wrong network device OR, on occasion, a bad ISO file.
If you get an error about the “label”, try running the following: installios -F -e -R default3
and then repeat the installios steps above.
Phase IV: Configure new VIO
Once the VIO server is built, you will want to connect to it using a virtual network console from the HMC. So first connect to the HMC with: ssh hscroot@hmc1.example.com
and then run vtmenu
selecting the managed server, and then the target VIO server “myvio31”.
Log in to the VIO server with the default username of padmin. At that point, it will ask you to set a new password for the padmin user and to accept the license agreement.
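If you ever need to accept the license outside of that first-login prompt, it can also be done manually from the padmin shell with: license -accept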
The next step (in this use case) is to download and install EMC’s AIX ODM definitions. These are available through Dell EMC’s web portal. Once downloaded, untar the tarball and install the proper filesets. This is dependent on what is required at your site: it could be EMC CLARiiON, EMC Symmetrix, EMC XtremIO, etc. Install any other requirements (such as bash) until you are ready to create a mksysb of the system. This can be done with: mksysb -i filename.mksysb
Next, copy that mksysb to another system, such as an FTP/NFS resource.
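As a rough sketch (the tarball name, fileset name, and NFS server below are placeholders; install whichever EMC filesets your site actually requires), the driver install, mksysb creation, and copy to an NFS resource look something like this:
$ oem_setup_env #(drop to the root shell)
# mkdir /tmp/emc; cd /tmp/emc; tar -xvf /tmp/EMC.AIX.X.Y.Z.tar #(placeholder tarball name)
# inutoc . #(build the .toc file installp needs)
# installp -acgXd . EMC.Symmetrix.aix.rte #(placeholder fileset name)
# mount nimserver:/export/viobackup /mnt #(placeholder NFS resource)
# mksysb -i /mnt/vios31_emc.mksysb #(write the custom mksysb straight to the NFS mount)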
Phase V: Upgrade current VIO to latest 2.2.6 version
In the test environment and production environment there are two different levels: the test environment is at 2.2.6.x, and the production environment is at 2.2.5.x. So you’ll want to use IBM’s Fix Central to download the latest VIO level(s). As of this writing, the latest is 2.2.6.65, which is a service pack for 2.2.6.61. So the following will need to be downloaded:
a) for vios 2.2.5.X , download 2.2.6.10, 2.2.6.61 and 2.2.6.65
b) for vios 2.2.6.x, download 2.2.6.61 and 2.2.6.65
When you download the VIOS bff files above, they will come with the ck_sum.bff file, which is used for calculating and checking the CRC checksums. Note: if you don’t replace /home/padmin/ck_sum.bff with the copy shipped in the downloaded VIOS filesets, you can end up with an older version of ck_sum.bff in /home/padmin, and older versions of ck_sum.bff can give false positives for CRC errors in the VIOS filesets.
Typically one will run: cp vios_downloaded_fileset_path/ck_sum.bff /home/padmin && chmod 755 /home/padmin/ck_sum.bff
This is then followed up by testing the files with: ck_sum.bff vios_downloaded_fileset_path
Place the required filesets in one directory (depending on whether a or b above applies), and perform the upgrade from there. As an SSP is being used, you will need to perform rolling updates. This means you’ll stop cluster services on one of the cluster nodes, perform the upgrade, and then add it back in afterwards. Typically the DBN node is used to start/stop cluster services on any of the nodes, and it is also upgraded last. To determine which node is the DBN, log in to one of the cluster nodes and run: cluster -status -verbose | grep -p DBN
Log in to the DBN node, and stop the cluster services for node X with: clstartstop -stop -n clustername -m hostname
Next, log in to node X. Determine whether there are any loaded virtual optical devices by running: lsvopt
while in the restricted padmin shell. If there are any, you will have to unload them first: unloadopt -vtd vtdname
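If several media images are loaded, a rough one-liner to unload them all (this assumes the default lsvopt column layout, so sanity-check the output on your system first) is:
$ for vtd in $(lsvopt | grep -v "No Media" | awk 'NR>1 {print $1}'); do unloadopt -vtd $vtd; done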
For the purpose of this example, the vios upgrade filesets were placed in /upgrade. So to perform the upgrade run:
updateios -commit
updateios -install -accept -dev /upgrade
Once the system has been upgraded, reboot the server with: shutdown -restart
Once the system comes back up, confirm you are now at the correct level with: ioslevel
Repeat this for ALL nodes in the cluster (DBN node last). When the last node (the DBN) is done, it will perform a background DB conversion, so it is recommended that you let that finish PRIOR to migrating any VIO servers to VIOS 3.1.
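To sanity-check the cluster once the last node is done, the same command used earlier to find the DBN can be run from any node; each node in the verbose output should now show the new level (the exact field names vary slightly between VIOS levels):
$ cluster -status -verbose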
Phase VI: Migrate VIO to 3.1
Now that all of the cluster node members have been upgraded to the latest 2.2.6.x version, you’re ready to start migrating VIO systems to VIO 3.1. This is a “complete” new overwrite / install, so a lot of custom things won’t come across. For example, any additional files on the system, RPM installs, repository ISO images, extra shells, custom NTP configurations, tunables, etc. Therefore, you’ll want to make a backup of the system, and also determine which files you want transferred across. The viosupgrade utility can look in a specific text file, and copy all of those files across during the migration.
HOWEVER, note that IF a listed file doesn’t exist, the migration will stop there. So you will want to confirm the file(s) exist first, prior to adding them to the migration text file. A script can be used here to add only the files you want (and that actually exist) to the migration text file.
The viosupgrade utility is used to perform the migration from vios 2.2.6.X to 3.1. This viosupgrade will use the custom mksysb you created earlier. However, before getting into that I will list a couple of things I discovered while going through this process.
- The viosupgrade utility has no way of renaming physical volume disks away from the AIX standard naming convention of hdiskXXX (so if you had a disk named nfs_os, it will come back discovered as hdiskXXX).
- By default, the viosupgrade restore does NOT pass the proper arguments to restore Logical Units (PVs from the Shared Storage Pool).
- The viosupgrade may or may not capture the old PV names in its migration log file.
Some recommended things to capture PRIOR to doing the VIOS upgrade:
- Capture the PV information:
lsdev -Cc disk |grep -v SAS | awk '{print "lscfg -vpl " $1}' |sh > /tmp/vioxxx_pv.txt
- Capture cluster0 attributes:
lsattr -El cluster0 > /tmp/vioxxx_cluster0.txt
- Capture all the virtual mappings:
/usr/ios/cli/ioscli lsmap -all > /tmp/vioxxx_lsmap_all.txt
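A few additional captures that tend to be useful later (run from the root shell) are shown below. Note that anything written to /tmp stays on the old rootvg, so copy these files off the server (or add them to the files-to-save list) if you want them after the reinstall.
lsdev -Cc adapter > /tmp/vioxxx_adapters.txt #(physical and virtual adapters)
netstat -rn > /tmp/vioxxx_routes.txt #(routing table)
ifconfig -a > /tmp/vioxxx_ifconfig.txt #(interface and IP configuration)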
You may wish to grab other things besides the above. For example, people will usually grab some files for networking and whatnot and migrate them across. This can easily be accomplished with a simple bash script that checks whether each file exists first:
#!/bin/bash
# Build the list of files for viosupgrade -g, including only files that exist.
f=/home/padmin/filestosave.txt
> "$f"
for file in /etc/motd /etc/netsvc.conf /etc/inetd.conf /etc/hosts \
            /etc/environment /etc/profile /etc/inittab /etc/resolv.conf /etc/ntp.conf
do
    [ -e "$file" ] && echo "$file" >> "$f"
done
In my example case, I performed viosupgrades both while the system was part of the cluster and while it was NOT part of the cluster. Because I use custom PV device names which I want to keep, it didn’t make any sense to keep the VIO node a part of the cluster. This is because the viosupgrade tool will NOT rename the PVs back to the original names, and I’d end up with hdiskXXX names in the cluster. Amongst different node members, hdisk10 could actually reference different disks, based on which VIO node it’s on. I prefer any PVs shared across multiple VIO servers to have the same PV name. This makes things easier when troubleshooting issues, removing disks, etc.
Due to this, I run a viosbr backup PRIOR to running the viosupgrade (and save it to an NFS resource). That file can be used later to restore the information.
NOTE: The viosupgrade should be performed from a remote virtual console, not over an SSH session.
ssh hscroot@hmc1.example.com
vtmenu (select managed system, then vio partition)
IBM’s recommendation is to perform the VIOS upgrade to version 3.1.x onto an alternate rootvg disk, so we will perform a viosupgrade to an alternate disk. In this example, the “alternate disk” has been renamed to hdisk999. First mount an NFS resource, perform a backup of the existing VIO server, and then perform the migration.
# nfso -o nfs_use_reserved_ports=1 -o portcheck=1
# mount nimserver:/export/viobackup /mnt
# cp /mnt/filename.mksysb /home/padmin/ #(copy custom mksysb to hard drive)
# exit #(to get back to a restricted shell)
$ viosbr -backup -file /mnt/viox_22665 -clustername cluster1
Now that the VIO is backed up (you remembered to back up the PV and lsmap information, right?), we will remove the virtual mappings and remove the node from the cluster.
# /usr/ios/cli/ioscli lsmap -all | grep VTD | grep -v "NO VIRTUAL" | awk '{print "/usr/ios/cli/ioscli rmvdev -vtd " $2}' | sh
# ssh padmin@DBN
$ cluster -rmnode -clustername cluster1 -hostname nodex
Now perform a viosupgrade: (within restricted shell)
$ viosupgrade -l -i /home/padmin/filename.mksysb -a hdisk999 -g /home/padmin/filestosave.txt
The viosupgrade utility will do the installation to the alternate disk supplied with the -a switch, and back up the files listed in /home/padmin/filestosave.txt so they are carried across to the new installation.
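While the upgrade is in progress (once you can log back in between reboots), its status can be checked with: viosupgrade -l -q
(assuming your viosupgrade level supports the -q query flag).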
The system will reboot 2-3 times. Eventually it will make it back to a login prompt. Log in and look around. If you were using custom PV disk names, you will probably want to rename them back, using the VPD to determine which disk is which (a minimal rendev sketch follows the cluster commands below). Once they are back, log in to the DBN and add the node back into the cluster:
ssh padmin@DBN
cluster -addnode -clustername cluster1 -hostname nodex
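For the PV renames mentioned above (do this before re-adding the node to the cluster), a minimal sketch, assuming the saved vioxxx_pv.txt shows that the disk now named hdisk12 was originally nfs_os:
$ oem_setup_env
# lscfg -vpl hdisk12 #(compare the serial/VPD data against the saved vioxxx_pv.txt)
# rendev -l hdisk12 -n nfs_os #(rename the device back to its custom name; the disk must not be in use)
# exit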
Now you’ll want to perform one or both restores, depending on which virtual devices you need restored.
To restore just the virtual scsi devices (no SSP LUs) run:
# nfso -o nfs_use_reserved_ports=1 -o portcheck=1
# mount nfsserver:/export/viobackup /mnt
# exit
$ viosbr -restore -file /mnt/viox_22665.cluster1.tar.gz -clustername cluster1 -curnode -type vscsi
To restore JUST the SSP LU devices (this can be done before or after the vSCSI devices are restored, if required):
$ viosbr -restore -file /mnt/viox_22665.cluster1.tar.gz -clustername cluster1 -curnode -xmlvtds
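After the restores, compare the mappings against the pre-migration capture with: lsmap -all
and check the overall cluster state with: cluster -status -verbose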
NOTE: If the DBN changes PRIOR to getting all systems upgraded to VIOS 3.1, you may start getting cluster effective level errors. This is normal until ALL nodes are upgraded to VIOS 3.1.