A PowerVM VIOS can be configured with an IP address, and can also be configured with an httpd daemon listening (so you can connect to it via https). However, this depends on the configuration. Some corporations don't assign an IP address at all (reducing their fingerprint), whereas others do, and they may or may not also enable the https / GUI functionality. If no IP address is assigned, one accesses the VIOS through the virtual console facility of the HMC / IVM.
VIOS provides virtual storage and shared ethernet to logical partitions. Once the physical adapters have been assigned to the VIOS, they can be shared by one (or more) partitions. This makes it possible to 'share' the physical optical device among the different LPARs: one only has to associate a disk image (.iso) with a virtual optical device to make it available on that LPAR. This is accomplished by first creating an online disk repository on the VIOS, then uploading one (or more) disk image (.iso) files to that repository.
Virtual ethernet enables inter-partition communication on a private network and, if required, bridging it to the public network by using a Shared Ethernet Adapter (SEA). Using virtual LANs (VLANs) is quite normal here. A Virtual Local Area Network (VLAN) enables an ethernet switch to create sub-groups within a single physical network, isolating them from one another. The VIOS determines what traffic goes to which network / partition based on the VLAN ID. Multiple partitions using the same VLAN ID can communicate with one another, eliminating the requirement to have an individual physical network adapter per LPAR.
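As a rough sketch of the bridging piece (the adapter names are placeholders: assume ent0 is the physical adapter and ent4 is the virtual trunk adapter with PVID 1), the SEA is created on the VIOS along these lines:
mkvdev -sea ent0 -vadapter ent4 -default ent4 -defaultid 1   # bridge the virtual adapter out over the physical one
lsmap -all -net                                              # verify the SEA / virtual adapter mapping
The resulting SEA device then carries the internal VLAN(s) out to the physical network.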
fig. 1
Additionally, the VIOS enables the use of Virtual SCSI (vscsi), which allows disks to be exported to the LPARs as "virtual devices". The VIOS supports exporting these disks in the following forms:
- VSCSI disk backed by a Physical Volume (PV)
- VSCSI disk backed by a Logical Volume (LV)
- VSCSI disk backed by a file
1. VSCSI disk backed by a Physical Volume (PV)
This is typically a LUN from a SAN that has been zoned to the VIO server. The VIO server then creates a virtual mapping of that LUN to a specific LPAR. This is a one-to-one mapping: the entire PV is presented to the LPAR.
2. VSCSI disk backed by a Logical Volume (LV)
I have never done this myself; however, one would create an LV on the VIOS, and that LV is then presented to the LPAR as a physical volume (a rough example is shown below, after item 3).
3. VSCSI disk backed by a file
This is the standard practice when creating a virtual optical device on the VIOS for a specific LPAR. One can then 'associate' a disk image (.iso) with the virtual optical device, and the LPAR sees the loaded ISO disk image. ex. mkvdev -fbo -vadapter vhost0 -dev webserv1_dvd
The -dev flag above gives the device a friendly name, which helps identify which LPAR the optical device is associated with.
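To put the whole virtual optical workflow together (the repository size, the image name aix73_base, and the vhost0 / webserv1_dvd names are just placeholders for this sketch):
mkrep -sp rootvg -size 20G                                      # create the virtual media repository
mkvopt -name aix73_base -file /home/padmin/aix73_base.iso -ro   # import the .iso into the repository
mkvdev -fbo -vadapter vhost0 -dev webserv1_dvd                  # create the file-backed virtual optical device
loadopt -vtd webserv1_dvd -disk aix73_base                      # 'insert' the image into the virtual drive
lsrep                                                           # list the repository and what is loaded where
When the install / update is finished, unloadopt -vtd webserv1_dvd 'ejects' the image again.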
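And for the LV-backed case from item 2, a minimal sketch (assuming a volume group called datavg already exists on the VIOS; the names are placeholders) would be something like:
mklv -lv lpar1_lv01 datavg 20G                             # carve a 20G LV out of datavg
mkvdev -vdev lpar1_lv01 -vadapter vhost0 -dev lpar1_vd01   # present the LV to the LPAR as a vscsi disk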
Virtual Storage
The standard practice is to carve out LUN(s) / disks from the Storage Area Network (SAN) and zone them to the VIOS (or VIOSes, if using multiples), unless using NPIV. On the VIOS, you can scan for the devices using cfgdev (or cfgmgr if not in the restricted shell). By default the disks are discovered as hdiskXXX devices, whereby AIX/VIOS uses the next available hdisk number. Once the physical volumes (PVs) are on the system, you apply any MPIO disk attribute changes (if applicable) and assign the PVs as virtual SCSI devices to an LPAR. You could instead add the PVs into a Shared Storage Pool (SSP). Historically, there was also an option in the VIOS to add the vscsi to a "disk pool". You also have the option of using an LV as a vscsi disk. Each has its advantages and disadvantages.
Fig 2 above shows LUNs zoned to dual VIOSes and then presented to an LPAR. Note: don't expect the VIO hdisk name to match the LPAR hdisk name; they usually don't.
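As a rough example of the flow just described (hdisk5, the reserve_policy value, and the vhost0 / webserv1_rvg names are assumptions; the actual MPIO attributes depend on your storage vendor):
cfgdev                                                   # scan for the newly zoned LUNs
lspv                                                     # confirm the new hdisk devices showed up
chdev -dev hdisk5 -attr reserve_policy=no_reserve        # example MPIO attribute change (dual-VIOS setups)
mkvdev -vdev hdisk5 -vadapter vhost0 -dev webserv1_rvg   # map the whole PV to the LPAR's vhost
lsmap -vadapter vhost0                                   # verify the mapping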
- Vscsi disk mapping to a Physical Volume. This performs the worst, as the disk subsystem in AIX uses queuing, and once a disk hits its maximum queue depth the I/O queues up (causing I/O wait states). The advantage of this type is that, in a DR scenario, the LUN / PV can be re-zoned to alternate hardware. Once on the new hardware, one creates a new LPAR and the vscsi disk mappings, and the system can be brought back online. This is typically a very short outage (most of the time is spent re-zoning the disks on the storage side).
- Logical Units (LU) from a shared storage pool. The PVs are added into a "shared disk pool", then LUs are carved out to certain sizes and allocated to the LPAR. This scenario performs better because the LU uses multiple disk queues from each of its underlying PVs; it will typically give a 30+% advantage over vscsi-to-PV mappings. The disadvantage here is that, in a DR scenario, the disks can NOT be allocated to another system (because the LUs are only a chunk of the underlying disks in the pool, and would be locked by the SSP, making them unavailable). A rough example is shown after this list.
- N_Port ID Virtualization (NPIV). This one is a bit different. The LPAR has a 'virtual fibre adapter' allocated to it from the VIOS. This virtual fibre adapter has a specific World Wide Port Name (WWPN) address, and the LUN from the SAN is zoned to this specific WWPN. The WWPN is a unique hardware address (similar in function to a network adapter's MAC address). This configuration also performs well and is often weighed against LUs in an SSP. The key difference is that the I/O is not handled by the VIOS; other than providing the virtual fibre adapter, the VIOS does no management of the I/O.
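For the shared storage pool case above, a minimal sketch (all of the cluster, pool, disk, and LU names here are placeholders):
cluster -create -clustername viocluster -repopvs hdisk10 -spname viossp -sppvs hdisk11 hdisk12 -hostname vios1   # one-time: create the cluster and pool
mkbdsp -clustername viocluster -sp viossp 100G -bd webserv1_lu -vadapter vhost0   # carve a 100G LU and map it to the LPAR's vhost
lssp -clustername viocluster -sp viossp -bd                                       # list the LUs in the pool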
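And for NPIV, the virtual fibre channel server adapter (vfchost0 below, with fcs0 as the physical port; both are assumed names) is created via the HMC partition profile and then mapped on the VIOS:
lsnports                                # check which physical FC ports are NPIV capable and have free WWPNs
vfcmap -vadapter vfchost0 -fcp fcs0     # tie the LPAR's virtual FC adapter to the physical port
lsmap -all -npiv                        # verify the NPIV mappings and the client WWPNs
The LUNs are then zoned on the SAN side to the client WWPNs, and the LPAR sees the disks directly.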