We ran into an issue where a level-zero operator became root and "cleaned up" some TSM dedupe-pool containers so they would stop getting full-filesystem alerts.
Things this exposed:
* How does someone that green get full, unmonitored root access?
* They gave false information about timestamps when defending their actions.
* Their senior tech lead was content to merely advise that they not move or delete files without contacting the app owner.
* Imagine if this had been a customer-facing database server!
In ISP/TSM, once an extent is marked damaged, a new backup of that extent will replace it. (A command sketch for checking damage follows this list.)
* Good TDP4VT CTL files and other incremental backups will resend the missing files.
* TDP for VMware full backups fail if the control-file backup is damaged.
* Damaged extents do not mark files as damaged or missing.
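To check what the server has actually flagged, QUERY DAMAGED and AUDIT CONTAINER are the relevant commands. A minimal sketch, assuming a directory-container pool named DEDUPPOOL1 (the pool name is hypothetical):

    /* List damaged containers/extents in the pool */
    QUERY DAMAGED DEDUPPOOL1 TYPE=CONTAINER
    /* Re-scan every container in the pool and mark damaged extents */
    AUDIT CONTAINER STGPOOL=DEDUPPOOL1 ACTION=SCANALL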
REPLICATE NODE will back-propagate damaged files.
* Damaged extents do not mark files as damaged or missing.
Also, in case you missed that:
* Damaged extents do not mark files as damaged or missing.
For real, IBM says:
* Damaged extents do not mark files as damaged or missing.
* “That might cause a whole bunch of duplicates to be ingested and processed.”
IBM’s option is to use REPAIR STGPOOL (sketched after this list).
* Requires a prior PROTECT STGPOOL (similar to BACKUP STGPOOL and RESTORE STGPOOL).
* PROTECT STGPOOL can go to a container copy on tape, a container copy on FILE, or a container primary on the replica target server.
* PROTECT STGPOOL cannot go directly to a cloud pool.
* STGRULE TIERING only processes files, not PROTECT extents, so PROTECT data cannot reach a cloud pool that way either.
* There is NO WAY to use a cloud storage pool to protect a container pool from damage.
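For reference, a minimal sketch of the supported PROTECT/REPAIR path, assuming a directory-container pool DEDUPPOOL1, a tape device class TAPEDEV, and a container-copy pool COPYPOOL1 (all names hypothetical; SRCLOCATION=LOCAL needs a reasonably recent server level):

    /* Define a container-copy pool on tape and attach it to the directory pool */
    DEFINE STGPOOL COPYPOOL1 TAPEDEV POOLTYPE=COPYCONTAINER MAXSCRATCH=100
    UPDATE STGPOOL DEDUPPOOL1 PROTECTSTGPOOL=COPYPOOL1
    /* Protect locally to the container-copy pool... */
    PROTECT STGPOOL DEDUPPOOL1 TYPE=LOCAL
    /* ...or to the replica target server (requires SET REPLSERVER first) */
    PROTECT STGPOOL DEDUPPOOL1 TYPE=REPLSERVER
    /* After damage, repair from whichever protection copy exists */
    REPAIR STGPOOL DEDUPPOOL1 SRCLOCATION=LOCAL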
EXCEPTION: Damaged extents can be replaced by REPLICATE NODE running from the replica server back into the damaged pool (rough sketch after this list).
* You can DISABLE SESSIONS and reverse the replication configuration.
* REPLICATE NODE run in that direction will perform a FULL READ of the source pool.
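A rough sketch of that reverse-replication repair, assuming node NODE1, a damaged source server SRV_A, and a replica server SRV_B (all names hypothetical; the exact REPLMODE flip for the node is in IBM’s failback documentation):

    /* On SRV_B (the replica): quiesce client traffic, point replication back at SRV_A */
    DISABLE SESSIONS CLIENT
    SET REPLSERVER SRV_A
    /* The node must first be switched to a sending state on SRV_B,
       e.g. UPDATE NODE NODE1 REPLMODE=SYNCSEND per the failback docs */
    REPLICATE NODE NODE1
    /* This reads the entire source pool; intact extents on SRV_B
       replace the damaged copies back on SRV_A */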
There is a Request for Enhancement from November 2017 asking for TYPE=CLOUD POOLTYPE=COPY.
* That would be a major code effort, but it would close this major hole.
* It has not gotten so much as a blink from product engineering.
* Not even an “under review”, a “No Way”, or a “maybe sometime”.
Alternatives for PROTECT into CLOUD might be:
* Don’t use cloud. Double the amount of local disk space, and replicate to another datacenter.
* Use NFS (we would need to build a beefy VM and configure KRB5 at both ends so we could run NFSv4 encrypted; mount options are under REFERENCE below).
* Use CIFS (the host is AIX, which does not support CIFS v3; converting to Linux up front, before we had bulk data, was given a big NO).
* Use azfusefs (again, the host is AIX, not Linux).
Anyway, maybe in 2019 this can be resolved, but this is the sort of thing that really, REALLY was poorly documented and did not get the time and resources to be tested in advance. This is the sort of thing that angers everyone at every level.
REFERENCE: NFS client mount options used: hard,intr,nfsvers=4,tcp,rsize=1048576,wsize=1048576,bg,noatime
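For illustration, those options in an actual mount command (server name and paths are hypothetical):

    mount -o hard,intr,nfsvers=4,tcp,rsize=1048576,wsize=1048576,bg,noatime \
        nfsserver:/export/tsmprotect /tsm/protect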