Jump Menu
- Create Virtual Media Repository
- Create Virtual Optical Device
- Mount / unmount images to Virtual Optical Device
- Create Disk Mapping to LPAR
IBM Power systems are typically configured with one or more Virtual I/O Servers (VIOS). The VIOS allows you to share resources between logical partitions (LPARs). It runs AIX under the hood and provides a restricted Korn shell with a limited set of commands (typically set up using aliases). Some commands are the same as their AIX counterparts, while others are unique to the VIOS.
To connect to the VIOS, you will “normally” use the VIOS administrator account (padmin). You will usually connect via virtual console, or using openssh. The virtual console connection is made from the Integrated Virtualization Manager (IVM), the Hardware Management Console (HMC) or Systems Directory Management Console (SDMC). This depends on what your organization has in place.
If you connect using OpenSSH, you will typically connect with:
ssh padmin@fqdn_or_ip_address
Once you are logged on, you will be within the restricted Korn shell environment. Some of the things you will be unable to do in the restricted shell are:
- Change directory below /home/padmin
- Set variables for SHELL, ENV or PATH
- Run any commands that start with a forward slash (/)
- Redirect output using >, >|, <>, or >>. Capturing output therefore has to be done by piping into the tee command
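For example, since > is blocked, output is captured by piping into tee instead (the output file name here is just an illustration):

```shell
# Redirection with > is blocked in the restricted shell,
# so capture command output by piping it into tee.
lsmap -all | tee /home/padmin/lsmap_output.txt
```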
To ‘break-out’ of the restricted shell, one would simply run: oem_setup_env
The VIOS commands are similar to their AIX counterparts, and thus easy to learn. The first thing you may wish to run is the help command, which gives an overview of the available commands within the restricted shell. To get more detailed help on a specific command, prepend the command with help.
example:
help lsvg
Usage: lsvg [-map | -lv | -pv] VolumeGroup … [-field FieldName …]
[-fmt delimiter]
lsvg
Displays information about volume groups.
-map Displays information about the mapping of logical and
physical volumes in the volume group.
-field Specifies a list of fields to be displayed.
-fmt Divides output by a user-specified delimiter.
-lv Displays information about logical volumes in the
volume group.
-pv Displays information about physical volumes in the
volume group.
You may also wish to look at the manual pages of the command (if they exist) by prepending the command with man. For example: man lsvg
Create Virtual Media Repository
In preparation for using a virtual optical device, a media repository has to be configured first. Once the media repository is set up, you can upload disk image files (.iso) into it. These are then mapped to a “virtual optical device”. The LPAR sees that virtual optical device as if it were a DVD drive, so the image is bootable, etc.
To determine if you already have a Virtual Media repository, ssh into the VIO server and run: lsrep
If a virtual media repository has previously been created, it will display the files within the /var/vio/VMLibrary directory. It will also indicate the size of the Media Repository and how much space is remaining.
If the Media Repository doesn’t exist, you’ll get an error message indicating that. If that is the case, you can create a Media Repository with:
mkrep -sp rootvg -size 20G
The -sp flag specifies the Storage Pool to use, and -size sets the size of the media repository (in this case, 20 GB).
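Once the repository exists, you can add an ISO image to it with mkvopt. This sketch assumes the image was first copied to /home/padmin:

```shell
# Copy an ISO into the media repository (read-only)
mkvopt -name AIX_7100-04-04_DVD_1_of_2.iso \
       -file /home/padmin/AIX_7100-04-04_DVD_1_of_2.iso -ro

# Confirm the image now appears in the repository
lsrep
```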
Creating a Virtual SCSI device
A virtual SCSI device is used in a server/client relationship. You create a device on the VIO server(s) and another on the LPAR. There is a direct mapping relationship here, so that the VIOS can communicate properly. This virtual SCSI device can then be used to map virtual storage. Once this is in place, you’ll be able to install an OS from a virtual optical device to a virtual physical volume.
These virtual SCSI devices can be created dynamically (using LPAR properties), or by modifying the LPAR’s profile. If the changes are made to a running LPAR by modifying its profile, then the LPAR will have to be power cycled (powered down, then re-activated from the profile).
The standard rule of thumb here is:
1) Create virtual SCSI device(s) on the VIO server(s) dynamically
2) Create virtual SCSI device(s) on the LPAR by modifying its profile (if the LPAR is powered off; otherwise create VSCSI devices dynamically)
These virtual mappings can be created in the HMC Enhanced+ web interface or on the HMC command line. If the managed server is NOT managed by an HMC, then the additions can be made directly from the VIO server.
The HMC Enhanced+ interface will create the VSCSI devices in the background when one “assigns” a virtual optical device or a virtual physical volume. Unfortunately, in my experience the HMC Enhanced+ interface isn’t dependable, and it may or may not create all of the required VSCSI adapters.
To create things from the HMC (command-line), you would need the following tidbits of information:
- HMC IP address / FQDN
- HMC super user account login details
- name of managed system
- name of vio server(s)
- name of target LPAR
Assuming you have all of this information, you would start with:
ssh hscroot@HMC_FQDN
Collect the names of ALL of the managed servers on this HMC:
lssyscfg -r sys -F name
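From there, a server VSCSI adapter can be added dynamically on the VIOS with chhwres. The managed-system, VIOS, LPAR, and slot numbers below are assumptions for illustration; substitute your own:

```shell
# List the LPARs on a given managed system
lssyscfg -r lpar -m managed_sys1 -F name,lpar_id

# Dynamically add a server VSCSI adapter (slot 10) on the VIOS,
# paired with client slot 4 on the LPAR named test1
chhwres -r virtualio -m managed_sys1 -o a -p vios1 \
        --rsubtype scsi -s 10 \
        -a "adapter_type=server,remote_lpar_name=test1,remote_slot_num=4"
```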
I use a script on the HMC called ezh, which provides a menu-driven front end to many of these commands.
Creating a Virtual Optical Device
One may wish to have access to virtual optical disk images (such as AIX 7.1 DVD #1). This is done by first creating a Media Repository, then uploading disk image files into said repository (files are saved to /var/vio/VMLibrary/). Additionally, you can use the mkdvd command on an AIX LPAR to create a bootable ISO image. If you have a NIM master server, you will already have a number of mksysb files; you can use the mkdvd command to create a bootable ISO disk image from a pre-existing mksysb image as well. The next step is to create a virtual optical device and assign it to a specific vhost adapter (which is configured to a specific LPAR).
Assuming you already created a virtual SCSI adapter called vhost0 for an lpar named test1, here is a command for creating said virtual optical device:
mkvdev -fbo -vadapter vhost0 -dev test1_cd0
-fbo indicates to create a file backed optical device.
-vadapter <devicename> – Create and assign to devicename
-dev <vtdname> – Give a friendly name to virtual optical device
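You can verify the new device and its mappings with lsmap (vhost0 matches the example above):

```shell
# Show all virtual devices mapped to the vhost0 adapter,
# including the file-backed optical device just created
lsmap -vadapter vhost0
```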
Mount disk to Virtual Optical device
Now that you have a virtual media repository created, ISO image files uploaded, and one or more virtual optical devices created, you’ll want to be able to mount and unmount disk images so they are usable within the underlying LPAR.
lsrep – List available disk images
lsvopt – List available Virtual Optical Devices
Assuming you have an available virtual optical device and a disk image, one can mount the media. That media can then be accessible within the LPAR. If it’s bootable, you can have the LPAR boot from the virtual optical device. Otherwise, one would just mount the media within the running Linux / AIX operating system.
example:
loadopt -disk AIX_7100-04-04_DVD_1_of_2.iso -vtd test1_cd0
unloadopt -vtd test1_cd0 -release
The first command above loads the ISO file into the virtual optical device named test1_cd0. The second command unloads the ISO file from the virtual optical device; the -release flag forces the unload even if the client LPAR holds a reserve on the device.
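On the AIX client LPAR side, the loaded media shows up as a CD device that can be mounted read-only. The device and mount-point names below are assumptions; yours may differ:

```shell
# On the AIX LPAR: discover the new device, then mount the media
cfgmgr
mount -v cdrfs -o ro /dev/cd0 /mnt
```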
Create disk mapping to LPAR
Now that you have an LPAR created, and have already created a virtual optical device (if required), you’re ready to create / associate a LUN / PV to the LPAR. In the case of VSCSI disks, the SAN administrator would have already created a LUN and zoned it to one (or more) VIO servers based on the WWN from the Fibre adapter. In the case of Shared Storage Pools (SSP), you have a number of LUNs assigned to the VIO servers; those LUNs are put into an SSP, and a number of VIO servers are clustered together. You are then responsible for creating and assigning a Logical Unit (LU) to the LPAR. Below are some examples of creating mappings to the VSCSI or LU devices.
mkvdev -vdev hdisk200 -vadapter vhost0 -dev test1_rootvg1
lu -create -lu test5_rootvg -tier ssd -size 50G -vadapter vhost1
1st command – Maps hdisk200 to virtual adapter vhost0, with a friendly name of test1_rootvg1. This is helpful in determining the LPAR it should belong to.
2nd command – Creates a 50GB Logical Unit named test5_rootvg from the ssd tier (an SSP can have multiple tiers, with a default of system), assigning it to the virtual adapter vhost1. The VTD (Virtual Target Device) isn’t used in this instance to give it a friendly name, as the LU already has one.
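Either style of mapping can be verified afterwards from the VIOS (the vhost names match the examples above):

```shell
# Show the disk / LU mappings for the adapters used above
lsmap -vadapter vhost0
lsmap -vadapter vhost1

# For Shared Storage Pools, list the Logical Units as well
lu -list
```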