AIX Dog Threads broken – DO NOT USE

After the -1811 builds of AIX (2018, week 11), I found that enabling dog threads (thread=on, yes, or any number) caused severe disk latency problems. Lightly loaded systems would not notice, but any database would suffer horribly.

In VIO 2.x (AIX 6.1), I would use “thread=on” for the SEA (and really any en## device), and it was fine.
Well, VIO 3.1 uses AIX 7.2.3.3, which is build 19-15 (2019, week 15).
Using thread=on there caused:
• Load sharing to fail
• Pings from vio2 to some hosts to fail
• Rebooting one VIO to cause a permanent network outage for both VIOs and their clients
• The symptoms to change each time the network was reconfigured / rebuilt

One thing I tinkered with was load sharing for one VLAN.
I gave up on it during troubleshooting, but on my next build, I plan to re-test it.
The idea is two virtual switches. Both VIOs have a trunk adapter for each switch, both carrying the same VLAN.
Both trunks on the same VIO (for the same SEA) have to use the same trunk priority.
Configure the SEA with ha_mode=sharing (instead of auto), and both trunks.
Prod LPARs go on one switch, non-prod LPARs on the other.
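A minimal sketch of that SEA build on one VIO server, assuming hypothetical device names (ent0 = physical or etherchannel, ent4/ent5 = the two trunk adapters on vswitch1/vswitch2, ent6 = control channel), and with threading left off per the problem above:

mkvdev -sea ent0 -vadapter ent4,ent5 -default ent4 -defaultid 1 \
    -attr ha_mode=sharing ctl_chan=ent6 thread=0

Whether sharing actually balances across two trunks carrying the same VLAN on different virtual switches is exactly the open question; the command itself is just the ordinary SEA failover/sharing form.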

All of the IBM docs only show this with different PVIDs on each trunk, in which case they can be on the same switch.
Maybe it’s a pipe dream that this could work, but the sharing is between trunks, not by VLAN.
The kicker would be if the SEA is not a proper bridge, but instead uses some kind of shortcut for packet handling.

This issue was found to be a problem on Emulex 10Gbit Ethernet adapters. We also have ongoing network hangs on non-shared Broadcom adapters. Intel is the gold standard, but IBM does not certify/resell the Intel 10Gbit cards for AIX on POWER. The Mellanox cards cannot network boot (maybe that changes on POWER9?).

In the meantime, IBM is killing developerWorks and not porting the content to the new community blogs. This is going to be a HUGE loss of technical reference material.


Replacing TSM / ISP server

I ran into an issue where a primary TSM 7.1.8 server was broken, and it was easier to just move all of the clients over to the secondary 8.1.5 server. These used the new TLS encryption, and I kept running into issues.

    —————

First, I physically shut down the old server and updated the new server to take over its IP as an alias by editing /etc/network/interfaces (Ubuntu 16.04 LTS).
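For reference, the alias stanza was something along these lines (interface name and addresses are illustrative; 192.168.1.99 stands in for the old server’s address), followed by an ifup eth0:0:

# /etc/network/interfaces -- add the old server's address as an alias on the primary NIC
auto eth0:0
iface eth0:0 inet static
    address 192.168.1.99
    netmask 255.255.255.0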

    —————

Various errors included:
ANR8599W The connection with host address:host port failed due to an untrusted server certificate. An attempt to reconnect and establish certificate trust might follow.

ANR2284S The server master encryption key has changed. Passwords protected with the previous master encryption key are not available.

I had to make the server trust itself again with:[LINK]
dsmcert -add -server TSM -file /home/tsminst1/cert256.arm

    —————

This error:
ANR0456W Session rejected for server DT – the server name at 192.168.1.99, 1500 does not match.

I removed the server:
REMOVE SERVER DT

    —————

These errors:
ANR1651E Server information for DT is not available.
ANR4377E Session failure, target server DT is not defined on the source server.
ANR3151E Configuration refresh failed with configuration manager DT.

I disabled the replication configuration:
REMOVE REPLNODE *
Q REPLSERVER
REMOVE REPLSERVER {GUID}
DEL SUBSCRIPTION DEFAULT_PROFILE

    —————

These errors:
ANS1695E The certificate is not valid.
ANS1592E Failed to initialize SSL protocol.
ANS8023E Unable to establish session with server.
ANR3335W Unable to distribute certificate to for session .
ANR8599W The connection with 192.168.1.2:40250 failed due to an untrusted server certificate. An attempt to reconnect and establish certificate trust might follow.

On the clients, I had to remove /opt/tivoli/tsm/client/ba/bin/dsmcert.* (or C:\Program Files\Tivoli\TSM\baclient\dsmcert.* on Windows) per http://andrewjtobiason.com/index.php/2018/08/16/resolving-ssl-errors/

However, I also had to restore dsmcert / dsmcert.exe, which the wildcard clobbered in the process.

    —————

And on the server, allow keys to be swapped again:[LINK]
UPD NODE * SESSIONSECURITY=TRANSITIONAL
UPD ADMIN * SESSIONSECURITY=TRANSITIONAL
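To confirm the change took effect, the detailed queries should show the Session Security field back at Transitional (placeholders in braces; field name and abbreviations per the 8.1 documentation, so verify on your level):

Q NODE {nodename} F=D
Q ADMIN {adminname} F=D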

Then swap keys from the client side by opening a session:
dsmadmc

    —————

NOTE: I also removed /etc/adsm/* during troubleshooting, but that was not needed. That just led to me having to re-enter passwords. Simply deleting the cert database corrected the problem on other clients.

    —————

I tried to put these steps in dependency order. I was sort of all over the place when I did it, and might have missed something. I just could not get all of the right info from any one documentation source.


AIXPCM vs SDDPCM

AIX geeks: when converting from SDDPCM to AIXPCM, after you uninstall the driver, you also need to uninstall the host attachment fileset. On 2% of our migrations, mksysb/alt clones failed to find the boot disk (LED 554).

The filesets in question are devices.fcp.disk.ibm.mpio.rte (host attachment) and devices.sddpcm.* (driver).
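Something like this on the host being converted, once nothing is using the SDDPCM devices (exact sddpcm fileset names vary by AIX level, so list them first):

lslpp -l "devices.sddpcm*" devices.fcp.disk.ibm.mpio.rte    # confirm exactly what is installed
installp -u devices.sddpcm.72.rte devices.fcp.disk.ibm.mpio.rte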

Note that from SVC 7.6.1 with AIX 6.1 TL8, AIX 7.1 TL3, or AIX 7.2, you MAY switch to AIXPCM. On POWER9 and later, you MUST switch to AIXPCM.

The rm script comes from storage development, but the manage_disk_drivers command comes from AIX development. Either is okay, but the AIX one does not require opening a PMR.
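A minimal sketch of the AIX-side switch, assuming SVC/Storwize-backed storage (use the -l listing to see which device families apply to you); the change takes effect at the next reboot:

manage_disk_drivers -l                        # list device families and their current/supported drivers
manage_disk_drivers -d IBMSVC -o AIX_AAPCM    # use the native AIXPCM for the SVC family
shutdown -Fr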

Best reference:

https://www.ibm.com/developerworks/community/blogs/cgaix/entry/One_Path_Control_Module_to_Rule_Them_All


IBM PCIe3 Ethernet Decoder

Some modern Ethernet adapter decoder wheel information.

PCIe2 4-Port (10GbE SFP+ & 1GbE RJ45) Adapter
PCIe2 4-Port Adapter (10GbE SFP+) (e4148a1614109304)
PCIe2 4-Port Adapter (1GbE RJ45) (e4148a1614109404)
FRU Number.................. 00E2715
Customer Card ID Number..... 2CC3
Part Number................. 00E2719
Feature Code/Marketing ID... EN0S or EN0T
Make/Model: Broadcom NetXtreme II 10Gb
Code Name: Shiner

PCIe3 10GbE RoCE Converged Host Bus Adapter (b31507101410eb04)
RoCE Converged Network Adapter
2-Port 10GbE RoCE SR PCIe 3.0
Part Number................. 00RX871
FRU Number.................. 00RX875
Feature Code/Marketing ID... EC2M / EC2N / EC37 / EC38 / EL3X / EL40
Customer Card ID Number..... 57BE
Make/Model: Mellanox ConnectX-3 Pro / Gen 3.0 x8 10GbE
Code Name: Core

PCIe3 4-Port 10GbE SR Adapter (df1020e21410e304)
PCIe3 4-port 10GbE SR Adapter, NIC PF
Part Number................. 00ND462 or 00ND464 or 00ND468 or 00RX863
Feature Code/Marketing ID... EN15 or EN16
Customer Card ID Number..... 2CE3
Make/Model: Emulex OneConnect NIC (Lancer) (rev 30)

I believe the 4-port 1gbit PCI cards are Intel.

Emulex, Brocade, and Broadcom are now all owned by the same company. Currently known as “Broadcom Inc.”, the parent was formerly Avago, whose lineage runs back through Agilent to the semiconductor division of HP.


AIX types of ethernet interfaces

AIX shows a lot of different info in different places.  This is because AIX predates the time when everyone had RJ45 ethernet ports.

HBA represents a high-function PCI adapter that speaks multiple protocols, and which can sometimes be configured to provide ENT devices.  Primary candidates are “Integrated Virtual Ethernet” on POWER6 (and some POWER7) servers, as well as RoCE adapters, which are “RDMA over Converged Ethernet”, with RDMA being “Remote Direct Memory Access”.  Basically, InfiniBand-style adapters which can use Ethernet at the link layer.

ENT represents the “physical port”, though that is not always the case; I’ll explain more below.  There is one of these for every Ethernet port visible to the operating system.

EN represents the “Ethernet II” protocol device for IP communication.  This is the standard today, also known as “DIX Ethernet”, named after DEC, Intel, and Xerox.  This is where you will normally put your IP address.  There is one of these for every ENT device.

ET represents an IEEE 802.3 protocol device.  This would have been used in the days of Novell networking, or with the SNA protocol.  Almost no one uses this anymore, but I’m sure there’s an AIX 3.2.5U2 Micro Channel server running with this somewhere in the bottom of an old government facility, with coaxial cables and barrel terminators.  Really, I don’t know why this is still needed on anything produced in the last 20 years.  There is one of these for every ENT device.

INET is for config options that affect the entire TCP/IP stack, such as persistent routes, the hostname, and whether you are bypassing ODM for config of your network (rare).  There is only one of these per system, and it is always inet0 unless someone gets cheeky.

There are other ways to get IP devices, such as IP over Fibre Channel, IP over Infiniband, IP over ATM, over FDDI, over serial or parallel, etc.  These are less common, so I’m not going into them here.

Generally, you may have a stack like this:

ent0    physical ethernet port
ent1    physical ethernet port
ent2    Etherchannel (Static, or LACP bond created out of both of the above)
ent3    Virtual Ethernet (Connects to a virtual, firmware-only switch)
ent4    Shared Ethernet (VIO server only, a software bridge between a virtual and a physical adapter)
ent5    VLAN (an additional VLAN port configured off of any of the above)
en0     IP interface – unused, because ent0 is aggregated into the EtherChannel ent2
en1     IP interface – also unused, for the same reason
en2     IP interface – also unused, because ent2 is the physical backing device for the SEA
en3     IP interface – also unused, because ent3 is the virtual side of the SEA
en4     IP interface hanging off of ENT4 – this can be skipped, and a virtual ethernet used
en5     IP interface hanging off of ENT5 – this can be skipped, and a virtual ethernet used

Each device type has its own parameters.  You can use “lsattr -El $device”, “netstat -in”, and “entstat -d $device” to get the details.  Note that entstat wants to be run on the top device, not the bottom device.  Start with where the IP address is assigned, and it will show the subdevices, virtual connections, etc.
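For example, against the stack above (device names from that example; attribute names vary a little by level):

lsattr -El en4           # IP address, netmask, state for the SEA's IP interface
lsattr -El ent4          # SEA attributes: pvid_adapter, virt_adapters, ha_mode, thread, ...
entstat -d ent4 | more   # stats for the SEA plus every physical/virtual adapter beneath it
netstat -in              # one line per configured IP interface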


High Level VIO/Client build

This is off the cuff, and is not a technical walkthrough. This is enough for you to teach yourself assuming you have a system to hack on.

IBM’s POWER8 docs are missing almost everything. I don’t understand how they can call them docs at all. They want you to use some really picky tools that are cumbersome and not flexible in all the right ways.

The IBM POWER7 docs are close, but are missing the SR-IOV info. Your best bet is to skim through those, and stop when you find the bits you want (concepts, config):

The high-level gist of building a VIO environment is as follows:

  • Configure to HMC
  • Clear managed system profile data
  • Build a couple VIO servers:
    • 6 GB RAM, 3 virtual procs, 0.3 processing units of entitlement, 255 CPU weight
    • At least one storage and one network adapter
    • You can use SR-IOV to share an ethernet adapter from firmware if needed
    • One virtual ethernet trunk for each separate physical network.  Assign VLANs here
    • One virtual ethernet non-trunk for each VLAN you want an IP address on (ideal, but you can also hang IPs and VLANs directly from AIX)
    • One virtual SCSI server adapter for each client LPAR that will need virtual CDROM, Virtual Tape, or legacy Virtual SCSI disk (higher CPU load).
    • One virtual fibre adapter for each client port (usually two per client on each VIO server, but can be anywhere from 1 to 8)
  • Upload the VIO base media into the HMC media repository
  • Install the VIO server from the HMC
  • SSH into the HMC, and use vtmenu to rebuild the VIO networking
    • Remove all en, et, ent, hba devices, then cfgmgr
    • mkvdev -lnagg for any etherchannel bonded pairs needed for the Shared Ethernet Adapter(s)
    • mkvdev -sea to build any shared ethernet adapters (ethernet bridge from virtual switch to physical port) – see the command sketch after this list
    • mkvdev -lnagg for any etherchannel bonded pairs needed for local IP communication
    • mkvdev -vlan for any additional VLANs hanging directly off an SEA rather than through a virtual ethernet client adapter
    • mktcpip to configure your primary interface, gateway, etc
    • Add any extra IP addresses.
  • Build your Client LPARs
    • Memory and CPU as desired
    • Virtual ethernet just picks the switch and VLAN that you need.  If this does not exist on any VIO trunk adapters, then you need to fix that.
    • Virtual SCSI client adapter
      • this needs the VIO server partition ID, and the VIO server slot number added to it for the firmware connection.
      • The VIO server virtual SCSI adapter needs the same mapping back to the client LPAR id and slot.
      • There may be some GUI improvements to add this all for you, but it’s been decades of garbage for so long that I just do it all manually.
    • Virtual Fibre adapter – This maps back and forth to the VIO server virtual fibre similar to how VSCSI did.
  • SSH into the VIO server
    • make virtual optical devices attached to the “vhost” (virtual SCSI) adapters if needed
    • Use vfcmap to map the “vfchost” adapters to real “fcs” ports.  This requires the ports to be NPIV capable (8Gbit or newer) and logged into an NPIV-capable switch (lsnports).
  • Zone any LUNs
    • lsnportlogin can give you the WWNs for the clients, or you can get it from the client profile data manually
    • You can use OpenFirmware’s “ioinfo” to light up a port to force it to log in to the switch.
    • If the LPAR is down, you can use “chnportlogin” from the HMC to log in all ports for that client.
    • You can also zone directly to the VIO server, and “mkvdev” to map them as vscsi disks (higher CPU load on VIO server, and kind of a pain in the rump).
    • Note that LPM requires any VSCSI LUNs to be mapped to all VIO servers in advance.
    • Note that LPM requires any NPIV LUNs to be mapped to the secondary WWNs in advance
  • SSH into the VIO server
    • Make sure lsmap and lsmap -npiv show whatever mapping is required
    • Make sure loadopt has mounted any ISO images as virtual CDROMs if needed
    • You can also just mask an alt_disk_install LUN from a source host.
    • You can also use NIM to do a network install
  • Activate the LPAR profile.
    • If you did not open a vterm from SSH into the HMC, then you can do it from the activate GUI.
    • You can use SMS to pick your boot device
    • Install or boot as desired
    • Reconfigure your network as normal
      • smitty tcpip or “chdev -l en0” and “chdev -l inet0” with appropriate flags
      • Tune everything as desired.
      • If it was a Linux install, then that has its own config options.
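As a rough command-level sketch of the VIO server steps above (padmin shell; every device name, IP, and VLAN here is hypothetical, and the resulting entX numbers will differ on your system):

# Networking: bond, bridge, VLAN, IP
mkvdev -lnagg ent0,ent1 -attr mode=8023ad                    # LACP etherchannel, say it becomes ent5
mkvdev -sea ent5 -vadapter ent3 -default ent3 -defaultid 1   # bridge trunk ent3 over the bond, say ent6
mkvdev -vlan ent6 -tagid 20                                  # extra VLAN hanging off the SEA, say ent7
mktcpip -hostname vio1 -inetaddr 192.168.1.11 -interface en7 -netmask 255.255.255.0 -gateway 192.168.1.1

# Storage: virtual optical and NPIV mappings
mkvdev -fbo -vadapter vhost0                   # file-backed virtual optical for the client on vhost0
loadopt -vtd vtopt0 -disk aix72_base.iso       # mount an ISO from the virtual media repository
vfcmap -vadapter vfchost0 -fcp fcs0            # map a client virtual FC adapter to a real NPIV port
lsmap -all                                     # verify the VSCSI mappings
lsmap -all -npiv                               # verify the NPIV mappings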

SR-IOV can be used instead of Shared Ethernet above. 

It allows you to share a single PCI NIC or a single ethernet port between LPARs.  It uses less CPU on the VIO server, and has lower latency for your LPARs.  It’s sort of the next generation of network virtualization, though there are some restrictions on its use.  It’s best to review all of the info and decide up front; it is worth your time to do so.  If you want to use an SEA on SR-IOV, you still only have one VIO server per port, but you can have different ports on different VIO servers.  When sharing among all clients and VIO servers without an SEA, understand that the capacity percentage is a guaranteed minimum, not a cap.  Leave it low unless you have some critical workload that needs to crowd out everyone else. Some of the best URLs today when I look up “SR-IOV vNIC vio howto” are as follows:

CLI and Automation

If you want to build a whole bunch of VIO clients and servers at once, it may be worth the effort to do it from the HMC CLI.  It gets really complicated, but once you have it set up, you can adjust and rebuild things quickly.  This also lets you manually specify WWNs for your LPARs in case there are collisions, or if you are rebuilding and need to keep the same numbers.
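For a flavor of it, creating a client LPAR from the HMC CLI looks roughly like this (managed system name and attribute values entirely made up; the -i string is one long comma-separated list, and list-type values go in double quotes):

mksyscfg -r lpar -m Server-9009-42A-SN1234567 -i 'name=testlpar1,profile_name=default,lpar_env=aixlinux,min_mem=2048,desired_mem=8192,max_mem=16384,proc_mode=shared,min_proc_units=0.1,desired_proc_units=0.3,max_proc_units=3.0,min_procs=1,desired_procs=3,max_procs=6,sharing_mode=uncap,uncap_weight=128,"virtual_eth_adapters=10/0/1//0/0"'

Pinning WWPNs is done the same way through the virtual_fc_adapters attribute; the quoting for that one is finicky, so crib from the mksyscfg examples on your HMC level.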

The VIO server can be installed with alt_disk_copy, or from NIM, or from physical CD, or from the HMC.  The CLI version is called “installios” and you MUST specify the MAC address of the boot adapter for it to work properly. Without CLI options, installios will prompt you for all of the info.
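Run with no arguments, installios prompts for everything; a non-interactive call looks roughly like this (names and addresses hypothetical, and the flags are from memory, so check the usage output on your HMC level first):

installios -s Server-9009-42A-SN1234567 -p vios1 -r default_profile -d /dev/cdrom \
  -i 192.168.1.21 -S 255.255.255.0 -g 192.168.1.1 -m 00:09:6b:dd:02:e8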

 


AIX 7.2.3.1 breaks GSKit 8.0.50.89

AIX 7.2.3 breaks GSKit8, up through GP29 (8.0.50.89).

This affects TSM/Spectrum Protect, Content Manager, Tivoli Directory Server, WebSphere, DB2, Informix, IBM HTTP Server, etc.

Before a reboot, everything still works, which implies the change is in the kernel.

We found it on TSM, with AIX 7200-03-01-1838 and Spectrum Protect server 8.1.6.0.
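To check whether a given box is at the affected combination, something along these lines works:

oslevel -s                 # the 7.2.3 levels (e.g. 7200-03-01-1838) are the suspect ones
lslpp -L | grep -i gsk     # GSKit8 filesets; 8.0.50.89 (GP29) and earlier are affected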

Application crash and DBX follow below.

ANR7800I DSMSERV generated at 12:17:13 on Sep 11 2018.
IBM Spectrum Protect for AIX
Version 8, Release 1, Level 6.000
Licensed Materials - Property of IBM
(C) Copyright IBM Corporation 1990, 2018.
All rights reserved.
U.S. Government Users Restricted Rights - Use, duplication or disclosure
restricted by GSA ADP Schedule Contract with IBM Corporation.

ANR7801I Subsystem process ID is 10944920.
ANR0900I Processing options file /home/tsminst1/dsmserv.opt.
ANR7811I Using instance directory /home/tsminst1.
Illegal instruction(coredump)

# dbx /opt/tivoli/tsm/server/bin/dsmserv core.10944896.28165312
Type 'help' for help.
[using memory image in core.10944896.28165312]
reading symbolic information ...warning: no source compiled with -g

Illegal instruction (illegal opcode) in . at 0x0 ($t1)
warning: Unable to access address 0x0 from core

(dbx) where
.() at 0x0
gsk_src_create__FPPvPv(??, ??) at 0x9000000015b6d88
__ct__8GSKMutexFv(??) at 0x9000000018d664c
__ct__20GSKPasswordEncryptorFv(??) at 0x9000000018cb248
__ct__7gsk_envFv(??) at 0x900000000aaa6b0
GskEnvironmentOpen__FPPvb(??, ??) at 0x900000000ab14c4
gsk_environment_open(??) at 0x900000000ab277c
IPRA.$CheckGSKVersion() at 0x100eecf68
tlsInit() at 0x100eecd70
main(??, ??) at 0x10000112c

(dbx) th
thread state-k wchan state-u k-tid mode held scope function

$t1 run running 41877977 k no sys
$t2 run blocked 21234465 u no sys _cond_wait_global
$t3 run running 24380103 u no sys waitpid


dsmserv fails to start if LDAP is inaccessible

IBM, and the white books, say this is working as designed.
• If LDAP dies, dsmserv stays up without it.
• If LDAP dies, and dsmserv restarts, it refuses to come up.
• ANR3103E https://www.ibm.com/support/knowledgecenter/SSEQVQ_8.1.5/srv.msgs/b_msgs_server.pdf
• The workaround is to remove LDAPURL from dsmserv.opt, or wait for LDAP to become accessible.
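In dsmserv.opt that amounts to commenting the directive out until LDAP is reachable again (a leading asterisk marks a comment; the URL here is purely illustrative):

* LDAPURL ldaps://ldap.example.com:636/ou=tsm,dc=example,dc=com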

On a multi-homed server, any links serving LDAP should be fault tolerant.

Any server using LDAP should have fault tolerant LDAP servers.

Here’s where to vote on getting the start-up limitation changed.
http://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe&CR_ID=121985


AIX 7.2 crash removing adapters from etherchannel

If I remove the first main adapter and re-add it, then I can add/remove either adapter or the IP interface after that.

If I remove the second main adapter and re-add it, then I cannot remove the first, and dropping the IP interface crashes the system.

So, assuming adapter_names=ent2,ent6

This works everywhere:
/usr/lib/methods/ethchan_config -d ent17 ent2
/usr/lib/methods/ethchan_config -a ent17 ent2
/usr/lib/methods/ethchan_config -d ent17 ent6
/usr/lib/methods/ethchan_config -a ent17 ent6
/usr/sbin/rmdev -Rl en17
/usr/sbin/mkdev -l en17
/usr/sbin/cfgmgr
# Any combination of the above works after the first adapter has been removed/re-added once.

And this crashes everywhere:
/usr/lib/methods/ethchan_config -d ent17 ent6
/usr/lib/methods/ethchan_config -a ent17 ent6
# crashed here on one server
/usr/lib/methods/ethchan_config -d ent17 ent2
ethchan_config: 0950-021 Unable to delete adapter ent2 from the
EtherChannel because it could not be found, errno = 2
/usr/sbin/rmdev -Rl en17

# crashed here on several other servers

Crash analysis follows:

(96)> stat
SYSTEM_CONFIGURATION:
CHRP_SMP_PCI POWER_PC POWER_8 machine with 160 available CPU(s) (64-bit
registers)

SYSTEM STATUS:
sysname... AIX
nodename.. testnode001
release... 2
version... 7
build date Mar 2 2018
build time 13:02:46
label..... 1809C_72H
machine... 00DEADBEEF00
nid....... FBCAFE4C
time of crash: Wed May 9 04:45:59 2018
age of system: 25 day, 10 hr., 54 min., 41 sec.
xmalloc debug: enabled
FRRs active... 0
FRRs started.. 0

CRASH INFORMATION:
CPU 96 CSA F00000002FF47600 at time of crash, error code for LEDs:
30000000
pvthread+1A0E00 STACK:
[00009324].unlock_enable_mem+000018 ()
[06058D54]shientdd:entcore_disable_tx_timeout_timers@AF123_105+000074
(??, ??)
[060592E8]shientdd:entcore_suspend_nic+000028 (??, ??)
[0605FB20]shientdd:entcore_suspend+0001E0 (??, ??, ??)
[06129A68]shientdd:entcore_close_common+000668 (??)
[0612A0B0]shientdd:entcore_close+000490 (??)
[060103CC]shientdd:shi2ent_close+00000C (??)
[F1000000C04911C0]ethchandd:ethchan_close+0001A0 (??)
[00014D70].hkey_legacy_gate+00004C ()
[0057A914]ns_free+000074 (??)
[00014F50].kernel_add_gate_cstack+000030 ()
[069E503C]if_en:en_ioctl+0002DC (??, ??, ??)
[0057126C]if_detach+0001CC (??)
[0056E1DC]ifioctl+00081C (F00000002FF473D0, 8020696680206966,
00000000066EB8A0)
[005EA764]soo_ioctl+0005C4 (??, ??, ??)
[007A4754]common_ioctl+000114 (??, ??, ??, ??)
[00003930]syscall+000228 ()
[kdb_get_virtual_memory] no real storage @ 2FF22358
[D011C92C]D011C92C ()
[kdb_read_mem] no real storage @ FFFFFFFFFFF5D60

(96)> status | grep -v wait
CPU INTR TID TSLOT PID PSLOT PROC_NAME
96 20E03BF 6670 380324 3128 ifconfig

(96)> vmlog
Most recent VMM errorlog entry
Error id = DSI_PROC
Exception DSISR/ISISR = 000000000A000000
Exception srval = 00007FFFFFFFD080
Exception virt addr = 0000000000000004
Exception value = 00000086 EXCEPT_PROT

0x86:
Protection exception. An attempt was made to write to a protected
address in memory

(96)> th -n ifconfig
SLOT NAME STATE TID PRI RQ CPUID CL WCHAN
pvthread+1A0E00 6670*ifconfig RUN 20E03BF 03E 96 0
shientdd:.entcore_disable_tx_timeout_timers AF123_105+000074
bla < .unlock_enable>
.
2390 ! SUNLOCK(TX_QUEUE_SLOCK, tx_pri);
.

---- NDD INFO ----( F1000B003952B410)----
name............. ent6 alias............ en6
ndd_next......... 0000000000000000
ndd_flags........ 00610812
(BROADCAST!NOECHO!64BIT!CHECKSUM_OFFLOAD)
ndd_2_flags...... 00000930
(IPV6_LARGESEND!IPV6_CHECKSUM_OFFLOAD!LARGE_RECEIVE!ECHAN_ELEM)

(96)> print entcore_acs_t F1000B00393F0000
struct entcore_acs_t
struct entcore_tx_queue_t
< ...>
struct entcore_ras_cb_t *ffdc_ras_cb = 0xF1000B0039537D40;
struct entcore_tx_atomics_t *atomics = 0x0000000000000000;
struct mbuf *overflow_queue = 0x0000000000000000;
struct mbuf *overflow_queue_tail = 0x0000000000000000;
uint64_t ofq_cnt = 0x0000000000000000;
struct entcore_lock_info_t *p_lock_info = 0x0000000000000000;
void *p_acs = 0xF1000B00393F0000; NULL so DSI

(96)> dd F1000B00393F78D0
F1000B00393F78D0: 0000000000000000 < - p_lock_info

(96)> xm F1000B00393F78D0
Page Information:
heap_vaddr = F1000B0000000000
P_allocrange (range of 2 or more allocated full pages)
page........... 00003937 start.. F1000B00393F0000 page_cnt....... 0017
allocated_size. 00170000 pd_size........ 00010000 pinned......... yes
XMDBG: ALLOC_RECORD

Allocation Record:
F1000B00E4306600: addr......... F1000B00393F0000 allocated pinned
F1000B00E4306600: req_size..... 1458712 act_size..... 1507328
F1000B00E4306600: tid.......... 033F0187 comm......... cfgshien
XMDBG: ALLOC_RECORD
Trace during xmalloc() on CPU 00
0604FCB0(.entcore_allocate_acs+000310)
060129C4(.entcore_config_state_machine+
0601A884(.entcore_perform_init+0000A4)

Free History:
105D 40.955808 SHIENTDD GEN: L3 Close__B d1=F1000B00393F0000
105D 40.955808 SHIENTDD GEN: L3 CloseC_B d1=F1000B00393F0000
105D 40.955809 SHIENTDD GEN: L3 HwClos_B d1=F1000B00393F0000
105D 40.955810 SHIENTDD GEN: L3 HwClos_B -HW| d1=0000000000000000
105D 40.955810 SHIENTDD GEN: L3 HwClos10 -HW| d1=0000000000000000
105D 40.955810 SHIENTDD GEN: L3 HwClos_E -HW| d1=0000000000000000
105D 40.955811 SHIENTDD GEN: L3 HwClos_E d1=0000000000000000

< ...>

105D 41.039269 SHIENTDD GEN: L3 CloseC_E d1=F1000B00393F0000
105D 41.039269 SHIENTDD GEN: L3 Close__E d1=0000000000000000
105D 41.039273 SHIENTDD GEN: L3 Close__B d1=F1000B00393F0000

another close ? >>

105D 41.039273 SHIENTDD GEN: L3 CloseC_B d1=F1000B00393F0000
105D 41.039274 SHIENTDD GEN: L3 HwClos_B d1=F1000B00393F0000
105D 41.039275 SHIENTDD GEN: L3 HwClos_B -HW| d1=0000000000000000
105D 41.039275 SHIENTDD GEN: L3 HwClos10 -HW| d1=0000000000000000
105D 41.039276 SHIENTDD GEN: L3 HwClos_E -HW| d1=0000000000000000
105D 41.039276 SHIENTDD GEN: L3 HwClos_E d1=0000000000000000
105D 41.039276 SHIENTDD GEN: L3 Suspnd_B d1=F1000B00393F0000
105D 41.039279 SHIENTDD GEN: L3 MctSyn_B d1=F1000B00393F0000
105D 41.039281 SHIENTDD GEN: L3 MctSyn_E d1=0000000000000000
END

It seems that two closes happened, which would have led to a double free, and the crash.

A debug efix was tested for 2 weeks on 24 systems; the problem was resolved and the patch was stable.

APAR IJ06720 was generated, and a public efix will be released for it.


Spectrum Protect – container vulnerability

We ran into an issue where a level-zero operator became root, and cleaned up some TSM dedupe-pool containers so he’d stop getting full filesystem alerts.

Things this exposed:

How does someone that green get full, unmonitored root access?
* They gave false information about timestamps when defending their actions.
* Their senior tech lead was content to advise them not to move or delete files without contacting the app owner.
* Imagine if this had been a customer facing database server!

In ISP/TSM, once extents are marked damaged, a new backup of that extent will replace it.
* Good TDP4VT CTL files and other incrementals will send missing files.
* TDP for VMWare full backups fail if the control file backup is damaged.
* Damaged extents do not mark files as damaged or missing.

Replicate Node will back-propagate damaged files.
* Damaged extents do not mark files as damaged or missing.

Also, in case you missed that:
* Damaged extents do not mark files as damaged or missing.

For real, IBM says:
* Damaged extents do not mark files as damaged or missing.
* “That might cause a whole bunch of duplicates to be ingested and processed.”

IBM’s option is to use REPAIR STGPOOL (rough sketch after this list).
* Requires a prior PROTECT STGPOOL (similar to BACKUP STGPOOL and RESTORE STGPOOL).
* PROTECT STGPOOL can go to a container copy on tape, a container copy on FILE, or a container primary on the replica target server.
* PROTECT STGPOOL cannot go to a cloud pool
* STGRULE TIERING only processes files, not PROTECT extents.
* PROTECT STGPOOL cannot go to a cloud pool that way either.
* There is NO WAY to use cloud storage pool to protect a container pool from damage.
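The repair flow looks roughly like this, assuming a directory-container pool called DEDUPPOOL that has been protected to a replication target (command and parameter names from memory of the 8.1 docs, so verify against your level):

PROTECT STGPOOL DEDUPPOOL                          (run routinely, before the damage happens)
AUDIT CONTAINER STGPOOL=DEDUPPOOL ACTION=SCANALL   (find and mark the damaged extents)
REPAIR STGPOOL DEDUPPOOL SRCLOCATION=REPLSERVER    (pull good copies back from the protect target)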

EXCEPTION: Damaged extents can be replaced by REPLICATE NODE into a pool.
* You can DISABLE SESSIONS, and reverse the replication config.
* Replicate node that way will perform a FULL READ of the source pool.

There is a Request For Enhancement from November, 2017 for TYPE=CLOUD POOLTYPE=COPY.
* That would be a major code effort, but would solve this major hole.
* That has not gotten a blink from product engineering.
* Not even an “under review”, nor “No Way”, nor “maybe sometime”.

Alternatives for PROTECT into CLOUD might be:
* Don’t use cloud. Double the amount of local disk space, and replicate to another datacenter.
* Use NFS (We would need to build a beefy VM, and configure KRB5 at both ends, so we could do NFSv4 encrypted).
* Use CIFS (the host is on AIX, which does not support CIFS v3. Linux conversion up front before we had bulk data was given a big NO.)
* Use azfusefs (Again, it’s not Linux)

Anyway, maybe in 2019 this can be resolved, but this is the sort of thing that really REALLY was poorly documented, and did not get the time and resources to be tested in advance. This is the sort of thing that angers everyone at every level.

REFERENCE (NFS mount options): hard,intr,nfsvers=4,tcp,rsize=1048576,wsize=1048576,bg,noatime