This is off the cuff, and is not a technical walkthrough. It should be enough for you to teach yourself, assuming you have a system to hack on.
IBM’s POWER8 docs are missing almost everything. I don’t understand how they can call them docs at all. They want you to use some really picky tools that are cumbersome and not flexible in all the right ways.
The IBM POWER7 docs are close, but are missing the SR-IOV info. Your best bet is to skim through this, and stop when you find the bits you want (concepts, config):
The high-level gist of building a VIO environment is as follows:
- Connect the managed system to the HMC
- Clear managed system profile data
- Build a couple of VIO servers:
- 6 GB RAM, 3 virtual procs, 0.3 processing units of entitlement, 255 CPU weight
- At least one storage and one network adapter
- You can use SR-IOV to share an ethernet adapter from firmware if needed
- One virtual ethernet trunk for each separate physical network. Assign VLANs here
- One virtual ethernet non-trunk for each VLAN you want an IP address on (ideal, but you can also hang IPs and VLANs directly from AIX)
- One virtual SCSI server adapter for each client LPAR that will need a virtual CD-ROM, virtual tape, or legacy virtual SCSI disk (higher CPU load).
- One virtual fibre adapter for each client port (usually two per client on each VIO server, but can be anywhere from 1 to 8)
- Upload the VIO base media into the HMC media repository
- Install the VIO server from the HMC
- SSH into the HMC, and use vtmenu to get a console on the VIO server, then rebuild the VIO networking (see the sketch just after this group of steps)
- Remove all en, et, ent, and hba devices, then run cfgdev to rediscover everything (cfgdev is the padmin equivalent of cfgmgr)
- mkvdev -lnagg for any etherchannel bonded pairs needed for the Shared Ethernet Adapter(s)
- mkvdev -sea to build any shared ethernet adapters (ethernet bridge from virtual switch to physical port)
- mkvdev -lnagg for any etherchannel bonded pairs needed for local IP communication
- mkvdev -vlan for any additional VLANs hanging directly off an SEA rather than through a virtual ethernet client adapter
- mktcpip to configure your primary interface, gateway, etc
- Add any extra IP addresses.
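Roughly, that rebuild looks like this from the padmin shell. This is a sketch, not a recipe: all of the entX/enX device numbers, the VLAN ID, and the addresses are made-up examples, and the device numbers that mkvdev assigns will vary, so check each step with lsdev before the next.

```
$ rmdev -dev ent4 -recursive    # repeat for each leftover en/et/ent/hba device
$ cfgdev                        # rediscover the hardware
$ mkvdev -lnagg ent0 ent1 -attr mode=8023ad     # bonded pair feeding the SEA (say it becomes ent6)
$ mkvdev -sea ent6 -vadapter ent4 -default ent4 -defaultid 1   # bridge virtual trunk ent4 to the physical side (becomes ent7)
$ mkvdev -vlan ent7 -tagid 42                   # extra VLAN hanging directly off the SEA (becomes ent8)
$ mktcpip -hostname vios1 -inetaddr 10.1.1.10 -interface en8 -netmask 255.255.255.0 -gateway 10.1.1.1
```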
- Build your Client LPARs
- Memory and CPU as desired
- For virtual ethernet, just pick the virtual switch and VLAN that you need. If that VLAN does not exist on any VIO trunk adapter, then you need to fix that first.
- Virtual SCSI client adapter
- this needs the VIO server partition ID and the VIO server slot number added to it for the firmware connection.
- The VIO server virtual SCSI adapter needs the same mapping back to the client LPAR id and slot.
- There may be some GUI improvements that add all of this for you, but the GUI was garbage for so long that I just do it all manually.
- Virtual fibre adapter – this maps back and forth to the VIO server virtual fibre adapter the same way VSCSI does (slot format sketched below).
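For reference, the client/server pairing is just slot bookkeeping. In HMC profile-attribute syntax it looks roughly like this; this is my recollection of the field order (slot/type/remote LPAR ID/remote LPAR name/remote slot/required), and every name and ID here is hypothetical, so check the mksyscfg/chsyscfg man pages on your HMC:

```
# Client LPAR (ID 4) profile: client slot 3 -> VIOS partition 1 (vios1), server slot 103
virtual_scsi_adapters=3/client/1/vios1/103/1
# Matching VIOS profile entry: server slot 103 -> client partition 4 (lpar1), client slot 3
virtual_scsi_adapters=103/server/4/lpar1/3/1
# Virtual fibre pairs up the same way; leaving the WWPN field empty lets firmware generate them
virtual_fc_adapters=4/client/1/vios1/104//1
```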
- SSH into the VIO server
- make virtual optical devices attached to the "vhost" (virtual SCSI) adapter if needed
- Use vfcmap to map the "vfchost" adapters to real "fcs" ports. This requires the ports to be NPIV capable (8 Gbit or newer) and logged into an NPIV-capable switch (lsnports). See the sketch after this group.
- Zone any LUNs
- lsnportlogin can give you the WWNs for the clients, or you can get them from the client profile data manually
- You can use OpenFirmware’s “ioinfo” to light up a port to force it to log in to the switch.
- If the LPAR is down, you can use “chnportlogin” from the HMC to log in all ports for that client.
- You can also zone directly to the VIO server, and “mkvdev” to map them as vscsi disks (higher CPU load on VIO server, and kind of a pain in the rump).
- Note that LPM requires any VSCSI LUNs to be mapped to all VIO servers in advance.
- Note that LPM requires any NPIV LUNs to be mapped to the secondary WWNs in advance
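The mapping and port-login pieces above look roughly like this. Device names and the managed system name are placeholders, and the two HMC commands are written from memory, so verify the flags against your HMC's man pages:

```
# On the VIO server (padmin):
$ mkvdev -fbo -vadapter vhost0          # file-backed virtual optical on a vSCSI host adapter
$ lsnports                              # which fcs ports are NPIV capable and logged into the fabric
$ vfcmap -vadapter vfchost0 -fcp fcs0   # tie a client's virtual fibre adapter to a real port

# On the HMC:
$ lsnportlogin -m MY_SYSTEM --filter lpar_names=lpar1   # list the client's WWPNs
$ chnportlogin -o login -m MY_SYSTEM -p lpar1           # log in all ports for a down LPAR so you can zone
```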
- SSH into the VIO server
- Make sure lsmap -all and lsmap -all -npiv show whatever mapping is required (see the sketch below)
- Make sure loadopt has mounted any ISO images as virtual CDROMs if needed
- You can also just mask an alt_disk_install LUN from a source host.
- You can also use NIM to do a network install
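The verification pass is quick. A sketch, assuming a media repository already exists (mkrep) and using a made-up ISO name:

```
$ lsmap -all             # vhost adapters: vtscsi/vtopt devices and their backing devices
$ lsmap -all -npiv       # vfchost adapters: which fcs port, and whether the client is logged in
$ mkvopt -name aix73.iso -file /home/padmin/aix73.iso   # copy the ISO into the media repository
$ loadopt -disk aix73.iso -vtd vtopt0                   # "insert the CD" for the client
```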
- Activate the LPAR profile.
- If you did not open a vterm from SSH into the HMC, then you can do it from the activate GUI.
- You can use SMS to pick your boot device
- Install or boot as desired
- Reconfigure your network as normal
- smitty tcpip, or "chdev -l en0" and "chdev -l inet0" with appropriate flags (example after this list)
- Tune everything as desired.
- If it was a Linux install, then that has its own config options.
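For the chdev route on an AIX client, something like this. The addresses are examples, and the inet0 route string is the ODM form as I remember it, so sanity-check against "lsattr -El inet0" first:

```
# IP address and mask live on the interface device
chdev -l en0 -a netaddr=10.1.1.20 -a netmask=255.255.255.0 -a state=up
# Hostname and the default route live on inet0
chdev -l inet0 -a hostname=lpar1
chdev -l inet0 -a route="net,-hopcount,0,,0,10.1.1.1"
```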
SR-IOV can be used instead of Shared Ethernet above.
It allows you to share a single PCI NIC or a single ethernet port between LPARs. It uses less CPU on the VIO server, and has lower latency for your LPARs. It's sort of the Next Generation of network virtualization, though there are some restrictions on its use; it's best to review all of the info and decide up front, and it is worth your time to do so. If you want to use an SEA on SR-IOV, you still only have one VIO server per port, but you can have different ports on different VIO servers. When sharing among all clients and VIO servers without an SEA, understand that the percentage capacity is a guaranteed minimum, not a cap. Leave it low unless you have some critical workload that needs to crowd out everyone else.

Some of the best URLs today when I look up "SR-IOV vNIC vio howto" are as follows:
CLI and Automation
If you want to build a whole bunch of VIO clients and servers at once, it may be worth the effort to do it from the HMC CLI. It gets really complicated, but once you have it set up, you can adjust and rebuild things quickly. It also lets you manually specify WWNs for your LPARs in case there are collisions, or if you are rebuilding and need to keep the same numbers. A sketch follows.
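Here is roughly what a scripted client build looks like. The attribute names are from the HMC mksyscfg docs as I remember them, everything else (system name, slots, WWPNs) is hypothetical, and the triple-quote escaping around virtual_fc_adapters is the part that usually bites people:

```
# One long line on the HMC – wrapped here for readability only.
mksyscfg -r lpar -m MY_SYSTEM -i 'name=lpar1,profile_name=normal,lpar_env=aixlinux,
  min_mem=2048,desired_mem=8192,max_mem=16384,proc_mode=shared,min_proc_units=0.1,
  desired_proc_units=0.3,max_proc_units=3.0,min_procs=1,desired_procs=3,max_procs=6,
  sharing_mode=uncap,uncap_weight=128,"virtual_eth_adapters=2/0/42//0/1",
  "virtual_fc_adapters=""3/client/1/vios1/103/c0507601a2b30000,c0507601a2b30001/1"""'
```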
The VIO server can be installed with alt_disk_copy, from NIM, from physical CD, or from the HMC. The CLI version is called "installios", and you MUST specify the MAC address of the boot adapter for it to work properly. Without CLI options, installios will prompt you for all of the info.
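Something like this, with made-up names and addresses; run installios with no arguments and it walks you through the same questions interactively:

```
installios -s MY_SYSTEM -p vios1 -r default_profile \
  -d /home/hscroot/vios_media/ -i 10.1.1.10 -S 255.255.255.0 \
  -g 10.1.1.1 -m 00:09:6b:dd:02:e8
```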