IBM SVC / Storwize DRP – Still a no-go

IBM released “Data Reduction Pools” (DRP) for the SAN Volume Controller family, which includes the Storwize, FlashSystem, and related products, back in early 2018. The internal flag for it is “deduplication”, but it can be used for thin, compressed, or deduplicated volumes. You can put thick (fully allocated) volumes in them too, but that is more of a “block off this space” arrangement.
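
For context, the pool-level flag and the per-volume options look roughly like this on the Spectrum Virtualize CLI (the names, sizes, and extent size are placeholders, and the -deduplicated flag on mkvdisk assumes a recent 8.x code level):

mkmdiskgrp -name DRP_POOL -ext 1024 -datareduction yes
mkvdisk -mdiskgrp DRP_POOL -name THIN_VOL -size 100 -unit gb -rsize 2% -autoexpand
mkvdisk -mdiskgrp DRP_POOL -name DEDUP_VOL -size 100 -unit gb -rsize 2% -autoexpand -deduplicated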

DRP is a “log structured array”, meaning reads are random but writes are always sequential. When you enable DRP, it inserts into the upper cache (the front-end cache). Because of this, it adds 3-4 ms of latency right off the bat. Some of the higher-end controllers can reduce that to 2-3 ms, but it never goes away. Worse, this affects ALL storage pools (mdisk groups), not just the ones built with DRP enabled. IBM product engineering says this is not a defect; it’s just the way it is, because the data has to be twiddled on the CPU.

In late 2018, I found one of their code defects, which has since been resolved; however, that defect highlighted a troubling design choice. DRP is a whole-cluster thing, not a per-I/O-group thing, nor a per-pool thing. That means if you trigger a defect that kills DRP for one pool, it kills DRP for every pool. If you’re in a HyperSwap or stretched cluster, and something happens (a future defect, maybe) that takes one site’s DRP offline while the nodes are still live, that could take out BOTH sites.

This year, I found out that their sizing partner, IntelliMagic, does not understand DRP. When a “Disk Magic” report comes out, it can recommend DRP. It cannot tell how much cache affects things, so it just recommends maximum cache. It cannot tell how much DRP affects things, so it cannot recommend against it for latency-sensitive workloads. The IBM technical sales team is trying to work around this by simply recommending the next model up whenever DRP is to be used.

However, that is not enough. DRP garbage collection is VERY heavy-handed, and there is no way to throttle it. It’s a log structured array, so when you re-write a block on a LUN, the old block gets invalidated, the old chunk gets read, the new data is packed in, and the result is written to the end of the array. This is sequential for each major section of the pool, and the code path adds further latency on top. We found that a normal system with a 25% write mix would have its throughput suppressed by about 80% under DRP. Latency during normal operations would fluctuate from 6 to 18 ms, and that’s as seen by the client. Any major I/O operation on the client (big rewrites, or freeing large blocks of space), or any vdisk (LUN) deletion, would cause latency to fluctuate from 16 to 85 ms (completely unusable).

Once DRP is enabled, it affects the whole system. You cannot turn it off for a pool; you have to delete that pool. You cannot migrate a vdisk out of, nor into, a DRP pool; you have to mirror it to a new pool, then delete or split the old copy. The funny thing is, that’s just what a migrate does at all but the lowest levels. It’s been two years; that really should have been folded into the standard migration commands by now.
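
For reference, the mirror-and-split dance looks roughly like this from the Spectrum Virtualize CLI (the pool name, vdisk name, and copy ID are placeholders; check the real copy IDs with lsvdiskcopy before removing anything):

addvdiskcopy -mdiskgrp NEW_POOL MY_VDISK
lsvdisksyncprogress MY_VDISK
rmvdiskcopy -copy 0 MY_VDISK

(Use splitvdiskcopy instead of rmvdiskcopy if you want to keep the old copy as a separate vdisk.)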

Once you have everything split, and all of the hosts are running on the new, non-DRP pool, that is still not enough. All of the performance problems will still be there. It is only once you delete the old pool that performance returns to normal. Since vdisk deletions take several hours each, and running more than 4 at a time causes severe, deadly latency, it’s best to just verify nothing is in use and force-delete the whole DRP-enabled pool. Risky, but otherwise you could spend weeks deleting the old LUNs by hand due to how slowly they clear.
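
If you do go that route, the force removal is a one-liner (the pool name is a placeholder; this destroys every vdisk still in the pool, so triple-check host mappings first):

rmmdiskgrp -force OLD_DRP_POOL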

The sales materials, the technical whitepapers, and the gritty Redbooks all recommend DRP as the best thing ever. Staff and books alike shy away from highlighting any of these limitations. Other products handle deduplication without this sort of negative impact, but here, even with no deduplication in use, just the OPTION of using dedupe makes the system unusable.

Maybe there will be a “fix” or a “rewrite”, but I’ve been burned hard, with severe customer-satisfaction fallout, over two years. This negatively impacts trust, and DRP is not the only IBM storage product causing me trust issues.

I expect that this is the fallout from Ginni Rometty’s command, “Don’t try to protect the past”. IBM’s vision is stated as “Cloud, Data, and Watson”. It will be interesting to see exactly how that translates into a future structure. IBM has been divesting a lot of pieces lately. Rometty ran services, and that’s what she knows as Chair and CEO. Anything she sees as a commodity, or as low margin, will be spun off. (IBM seems to need at least a 6:1 revenue-to-cost ratio in order to operate. There are a lot of layers.) I’m starting to wonder if Storage and POWER will be sold off to Lenovo. I see hints, but no roadmaps, nothing concrete.


Spectrum Protect – container vulnerability

We ran into an issue where a level-zero operator became root and “cleaned up” some TSM dedupe-pool containers so he’d stop getting full-filesystem alerts.

Things exposed:

* How does someone that green get full, unmonitored root access?
* They gave false information about timestamps when defending their actions.
* Their senior tech lead was content to merely advise that they not move or delete files without contacting the application owner.
* Imagine if this had been a customer-facing database server!

In ISP/TSM, once extents are marked damaged, a new backup of that extent will replace it.
* Good TDP4VT CTL files and other incrementals will send missing files.
* TDP for VMware full backups fail if the control file backup is damaged.
* Damaged extents do not mark files as damaged or missing.

Replicate Node will back-propagate damaged files.
* Damaged extents do not mark files as damaged or missing.

Also, in case you missed that:
* Damaged extents do not mark files as damaged or missing.

For real, IBM says:
* Damaged extents do not mark files as damaged or missing.
* “That might cause a whole bunch of duplicates to be ingested and processed.”
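
A sketch of how you might at least see the damage from the admin CLI (the pool name DEDUP is a placeholder, and exact parameters vary by server level):

QUERY DAMAGED DEDUP
AUDIT CONTAINER STGPOOL=DEDUP ACTION=SCANALL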

IBM’s option is to use REPAIR STGPOOL.
* Requires a prior PROTECT STGPOOL (similar to BACKUP STGPOOL and RESTORE STGPOOL).
* PROTECT STGPOOL can go to a container copy on tape, a container copy on FILE, or a container primary on the replica target server.
* PROTECT STGPOOL cannot go to a cloud pool.
* STGRULE TIERING only processes files, not PROTECT extents.
* PROTECT STGPOOL cannot go to a cloud pool that way either.
* There is NO WAY to use cloud storage pool to protect a container pool from damage.
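
For completeness, the supported repair path looks something like this (the pool name is a placeholder, it assumes a replication target server is already defined with SET REPLSERVER, and SRCLOCATION is how newer levels choose the replica server versus a local container-copy pool):

PROTECT STGPOOL DEDUP
REPAIR STGPOOL DEDUP SRCLOCATION=REPLSERVER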

EXCEPTION: Damaged extents can be replaced by REPLICATE NODE into a pool.
* You can DISABLE SES, and reverse the replication config.
* Replicate node that way will perform a FULL READ of the source pool.
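
The skeleton of that workaround looks something like this (server and node names are placeholders; in practice each node’s replication configuration has to be flipped as well, and the full read of the source pool makes it slow):

On the damaged source server:
DISABLE SESSIONS

On the replica target server:
SET REPLSERVER ORIGINAL_SERVER
REPLICATE NODE *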

There is a Request For Enhancement from November 2017 for TYPE=CLOUD POOLTYPE=COPY.
* That would be a major code effort, but would solve this major hole.
* That has not gotten a blink from product engineering.
* Not even an “under review”, nor “No Way”, nor “maybe sometime”.

Alternatives for PROTECT into CLOUD might be:
* Don’t use cloud. Double the amount of local disk space, and replicate to another datacenter.
* Use NFS (We would need to build a beefy VM, and configure KRB5 at both ends, so we could do NFSv4 encrypted).
* Use CIFS (the host is on AIX, which does not support CIFS v3. Linux conversion up front before we had bulk data was given a big NO.)
* Use azfusefs (again, the host is not Linux)

Anyway, maybe in 2019 this can be resolved, but this is the sort of thing that really REALLY was poorly documented, and did not get the time and resources to be tested in advance. This is the sort of thing that angers everyone at every level.

REFERENCE: hard,intr,nfsvers=4,tcp,rsize=1048576,wsize=1048576,bg,noatime
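
Dropped into a Linux-style mount line, just as a sketch (the server, export, and mount point are placeholders; an AIX client would use vers=4 instead of nfsvers=4):

mount -o hard,intr,nfsvers=4,tcp,rsize=1048576,wsize=1048576,bg,noatime nfsserver:/export/tsmprotect /tsm/protect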


TSM dedup BACKUP STGPOOL performance

BACKUP STGPOOL for dedupe runs about 6x slower than direct tape to tape.
Why?

1) First, the database has a huge number of random reads for dedupe rehydration.
Tack on any Dedup Deletion activity (SHOW DEDUPDELETEINFO) and anything else that’s competing for DB IOPS.
FIX: Put the database on SSD or RAM backed storage.
NOTE: Vendor SSD stats are usually lies. Sustained performance is more like 4,500-12,000 IOPS each (divided by 2 for RAID-1/10, or by 3.5 for RAID-5/6).
FIX: Increase server memory and provide more of it to the DB2 bufferpools.
NOTE: This might require manually changing bufferpools, limiting filesystem cache, etc.
FIX: Provide large amounts of cache for the database containers.
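
As a sketch, the main server-side memory knob is DBMEMPERCENT in dsmserv.opt (the 40 below is only an illustrative value, not a recommendation; once DB2 has the memory, its self-tuning generally handles the individual bufferpools):

DBMEMPERCENT 40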

2) Next, the file class, while sequential, still has a large number of random read IOPS.
The TSM server has no read-ahead for this: it reads the chunks in order, one at a time, rather than requesting a huge buffer full of chunks.
As such, streaming speed will be limited by DB latency, file-class latency, and actual read I/O times.
FIX: Reduce the latency for your file class
FIX: Reduce the latency for your database
FIX: Don’t do anything else during BACKUP STGPOOL.
FIX: Run your EXPIRE INVENTORY and IDENTIFY DUPLICATE after, not before.
FIX: Submit a Design Change Request (DCR) for larger chunk read cache to be used for BACKUP STGPOOL.
FIX: Submit a Design Change Request (DCR) for larger tape write buffer.

3) Last, tape buffer underruns can kill performance.
If the write buffer empties, then the tape will stop.
Before it begins again, the tape has to be repositioned backward.
For LTO drives, the minimum streaming write speed is usually around 50 MB/sec.
Anything less, and you have latency and tape life consumed by “shoe-shining”.
FIX: Fix/improve issues 1 and 2 above.
FIX: Submit a design change request to allow TSM to interleave more threads onto the same tape at once.
FIX: Use tape drives with lower minimum speeds to prevent underruns.
FIX: Don’t use tape. Use virtual tape, another dedupe disk pool, or a replica target TSM server.

4) Check TSM server instrumentation.
This will show you where your time is spent, and what to upgrade next.
INSTRUMENTATION BEGIN
BACKUP STGPOOL DEDUP COPYPOOL
(wait several minutes)
INSTRUMENTATION END FILE=/tsm/instrumentation.out


TSM and NDMP

NDMP backups into a TSM storage pool will not be deduplicated.
If you set ENABLENASDEDUPE YES, that only affects NetApp backups.
IBM doesn’t make the NDMP code, so they don’t support deduplication of anything but NetApp.
That means neither IBM’s v7000 Unified backups, nor any other NDMP device, get deduplicated.

As such, go ahead and have your NDMP backups go to a DISK pool or direct to tape.
Sending to your dedupe pool will just clog things up.
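
A sketch of steering NDMP (filer-to-server) backups at a plain DISK pool instead; the domain, policy set, management class, pool, and volume names are all placeholders:

DEFINE STGPOOL NDMP_DISK DISK
DEFINE VOLUME NDMP_DISK /tsmdisk/ndmp_vol01.dsm FORMATSIZE=102400
DEFINE COPYGROUP NASDOMAIN STANDARD STANDARD STANDARD TYPE=BACKUP DESTINATION=NDMP_DISK
ACTIVATE POLICYSET NASDOMAIN STANDARD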