0x8007003B timeout copying large file to Samba server

PROBLEM: 0x8007003B timeout copying large file to Samba server

SOLUTION: This is an smb.conf issue, fixed with this line:

strict allocate = no
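
In context, a minimal sketch of where that lands in smb.conf (the reload command is just one of several ways to pick up the change):

[global]
    # Allocate blocks only as data arrives, instead of pre-allocating the whole file
    strict allocate = no

Then reload Samba (for example, smbcontrol all reload-config) and retry the copy.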

DESCRIPTION: I had this issue for a long time, and mostly the web mocks people, tells them to do stupid things, or is generally unhelpful.  Lots of claims about 2GB limits, “your network”, “your firewall”, “turn off DPI”, and so on, none of it applicable to me.  I just accepted it, but decided to dig a little deeper today.

The exact amount of data written before it failed would vary, but the size reported by ls would always be the full file size.  On higher-performance filesystems such as XFS, ext4, and JFS, all on NVMe arrays, I found I could get about 55GB allocated before the timeout.  On spinning disk it was much less, which is probably why so many people fell down the rabbit hole of claiming 2GB limits and the like.

Strict Allocate = YES tells Samba to allocate the entire file on disk as soon as it is requested, which is what Windows asks for.  Samba says “OK, hold on”, and then the client times out waiting.  Some people used PowerShell on a client to raise the SMB client session timeout to 600 seconds, or whatever, but that’s not really ideal, since it does not scale.
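
For reference, that client-side workaround looks something like this, run in an elevated PowerShell on each Windows client, which is exactly why it doesn’t scale:

# Check the current SMB client session timeout (the default is 60 seconds)
Get-SmbClientConfiguration | Select-Object SessionTimeout

# Raise it to 600 seconds so the server has time to finish pre-allocating
Set-SmbClientConfiguration -SessionTimeout 600 -Force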

Strict Allocate = NO says to use normal UNIX semantics, where the file has no pre-allocated blocks and blocks are allocated only as the data comes in.  This starts with a fully sparse file, and the copy progress on the Windows client starts moving immediately.  This is what we want for large files.  If it were only small files, we wouldn’t care.
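
You can watch the sparse allocation happen from the server side while the copy runs; the path and file name here are just placeholders:

# Apparent (logical) size -- jumps to the full file size almost immediately
ls -lh /srv/share/bigfile.bin

# Blocks actually allocated so far -- grows as the data comes in
du -h /srv/share/bigfile.bin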

I made this a global change.  I don’t need fully pre-allocated, non-sparse files on my file server.  It’s possible someone writing databases might need that, and then you’d want to make sure you didn’t feed data faster than the kernel can allocate blocks.  Another one of those “multiple filesystems” kinds of solutions; strict allocate can also be set per share, as in the sketch below.
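
Since strict allocate is a per-share parameter, a split setup is also possible; these share names and paths are made up:

[files]
    path = /srv/files
    # Sparse files; blocks allocated as data arrives
    strict allocate = no

[dbfiles]
    path = /srv/dbfiles
    # Fully pre-allocated, non-sparse files for this share only
    strict allocate = yes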

When you play with tunables, you run into things that people don’t really know how to troubleshoot.  That’s what this is for, just so it shows up in web searches.


IBM SVC / Storwize DRP – Still a no-go

IBM released “Data Reduction Pools” (DRP) for their SAN Volume Controller code, which runs the Storwize, FlashSystem, and related products, back in early 2018.  The internal flag for it is “deduplication”, but it can be used for thin, compressed, or deduplicated volumes.  You can put thick volumes in them too, but that is more of a “block off this space” thing.

DRP is a “log structured device”, meaning reads are random but writes are always sequential.  When you enable DRP, it inserts itself into the high cache (front-end cache).  Because of this, it adds 3-4ms of latency right off the bat.  Some of the higher-end controllers can reduce that to 2-3ms, but it never goes away.  Also, this affects ALL storage pools (mdisk groups), not just the ones built with DRP enabled.  IBM product engineering says this is not a defect.  It’s just the way it is, because you have to twiddle the data on the CPU.

In late 2018, I found one of their code defects, which has since been resolved; however, that defect highlighted a troubling design choice.  DRP is a whole-cluster thing, not a per-I/O-group thing, nor a per-pool thing.  That means if you trigger a defect that kills DRP for one pool, it kills DRP for every pool.  If you’re in a HyperSwap or stretched cluster, and something happens (a future defect, maybe) to take one site’s DRP offline while the nodes are still live, that could take out BOTH sites.

This year, I found out that their sizing partner, IntelliMagic, does not understand DRP.  When a “Disk Magic” report comes out, it can recommend DRP.  It cannot tell how much cache affects things, so it just recommends maximum cache.  It cannot tell how much DRP affects things, so it cannot recommend against it for latency-sensitive workloads.  The IBM technical sales team is trying to work around this by simply recommending the next model up whenever DRP is to be used.

However, that is not enough.  DRP garbage collection is VERY heavy-handed, and there is no way to throttle it.  It’s a log-structured array, so when you re-write a block on a LUN, the old block gets invalidated, the old chunk gets read, new data is packed in, and the result is written to the end of the array.  This is sequential for each major section of the pool, and it has additional latency added by the code.  We found that a normal workload of 25% writes would be suppressed by about 80% under DRP.  Latency during normal operations would fluctuate from 6 to 18 ms, and that’s as seen by the client.  Any major I/O operation on the client (big rewrites, or big block freeing), or any vdisk (LUN) deletion, would cause latency to fluctuate from 16 to 85 ms (completely unusable).

Once DRP is enabled, it affects the whole system.  You cannot turn it off for a pool; you have to delete that pool.  You cannot migrate a vdisk out of or into a DRP; you have to mirror it to a new pool, then delete or split the old copy.  The funny thing is, that’s just what a migrate does at all but the lowest levels.  It’s been two years; that really should have been folded into the standard migration commands by now.
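
For what it’s worth, the manual version looks roughly like this on the CLI; the vdisk and pool names are placeholders, and the exact flags should be checked against your code level:

# Add a second copy of the vdisk in the new, non-DRP pool and watch it sync
addvdiskcopy -mdiskgrp NEW_POOL myvdisk
lsvdisksyncprogress myvdisk

# Once the new copy is synchronized, remove the original copy (copy 0 here) from the DRP pool
rmvdiskcopy -copy 0 myvdisk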

Once you have everything split, and all of the hosts are running on the new, non-DRP pool, that is still not enough.  All of the performance problems will still be there.  Only once you delete the old pool does performance return to normal.  Since vdisk deletions take several hours each, and running more than four at a time causes severe, deadly latency, it’s best to just verify nothing is in use and force-delete the whole DRP-enabled pool.  Risky, but otherwise you could spend weeks deleting the old LUNs by hand, given how slowly they clear.
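
In CLI terms, that last step is roughly the following; the pool name is a placeholder, and -force discards anything still left in the pool, so check the lsvdisk output carefully first:

# Confirm nothing is still living in the old DRP pool
lsvdisk -filtervalue mdisk_grp_name=OLD_DRP_POOL

# Remove the whole pool in one operation instead of deleting vdisks one at a time
rmmdiskgrp -force OLD_DRP_POOL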

The sales materials, technical whitepapers, and the gritty Redbooks all recommend DRP as the best thing ever.  Staff and books shy away from highlighting any of these limitations.  Other products handle deduplication without this sort of negative impact, but here, even with no deduplication in use, just the OPTION of using dedupe makes the system unusable.

Maybe there will be a “fix” or a “rewrite”, but I’ve been burned hard, with severe customer-satisfaction fallout, over two years.  This negatively impacts trust, and it is not the only IBM storage product causing me trust issues.

I expect that this is the fallout from Ginni Rometty’s command, “Don’t try to protect the past”.  IBM’s vision is stated as “Cloud, Data, and Watson”.  It will be interesting to see exactly how that translates into a future structure.  IBM has been divesting a lot of pieces lately.  Rometty ran services, and that’s what she knows as Chair and CEO.  Anything she sees as a commodity or low margin will be spun off.  (IBM seems to need at least a 6:1 revenue-to-cost ratio in order to operate.  There are a lot of layers.)  I’m starting to wonder if Storage and POWER will be sold off to Lenovo.  I see hints, but no roadmaps, nothing concrete.


TSM dedup BACKUP STGPOOL performance

BACKUP STGPOOL for dedupe runs about 6x slower than direct tape to tape.
Why?

1) First, the database has a huge number of random reads for dedupe rehydration.
Tack on any Dedup Deletion activity (SHOW DEDUPDELETEINFO) and anything else that’s competing for DB IOPS.
FIX: Put the database on SSD or RAM backed storage.
NOTE: SSD spec-sheet stats are usually lies.  Sustained performance is 4,500-12,000 IOPS each, divided by 2 for RAID-1/10, or by about 3.5 for RAID-5/6.
FIX: Increase server memory and provide more of it to the DB2 bufferpools.
NOTE: This might require manually changing bufferpools, limiting filesystem cache, etc.
FIX: Provide large amounts of cache for the database containers.
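
A quick way to see whether the database cache is keeping up, from the administrative command line (field names vary slightly by server level):

QUERY DB FORMAT=DETAILED

Look at the buffer pool hit ratio in the output; if it sits much below 98-99% during BACKUP STGPOOL, more bufferpool memory and faster database volumes are the first upgrades to make.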

2) Next, the file class, while sequential, still has a large number of random read IOPS.
TSM Server has no read ahead for this. It reads the chunks in order, rather than requesting a huge buffer full of chunks.
As such, streaming speed will be limited by DB latency, file-class latency, and actual read IO times.
FIX: Reduce the latency for your file class
FIX: Reduce the latency for your database
FIX: Don’t do anything else during BACKUP STGPOOL.
FIX: Run your EXPIRE INVENTORY and IDENTIFY DUPLICATES after, not before (see the schedule sketch just below this list).
FIX: Submit a Design Change Request (DCR) for larger chunk read cache to be used for BACKUP STGPOOL.
FIX: Submit a Design Change Request (DCR) for larger tape write buffer.
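
As a sketch of that ordering, administrative schedules can enforce “stgpool backup first, housekeeping after”; the schedule names and start times are made up, and DEDUP/COPYPOOL are the pool names from the example in item 4 below:

DEFINE SCHEDULE BK_COPYPOOL TYPE=ADMINISTRATIVE CMD="BACKUP STGPOOL DEDUP COPYPOOL" ACTIVE=YES STARTTIME=06:00
DEFINE SCHEDULE EXP_INV TYPE=ADMINISTRATIVE CMD="EXPIRE INVENTORY" ACTIVE=YES STARTTIME=14:00
DEFINE SCHEDULE ID_DUP TYPE=ADMINISTRATIVE CMD="IDENTIFY DUPLICATES DEDUP" ACTIVE=YES STARTTIME=16:00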

3) Last, tape buffer underruns can kill performance.
If the write buffer empties, then the tape will stop.
Before it begins again, the tape has to be repositioned backward.
For LTO drives, usually the minimum write speed is 50MB/sec.
Anything less, and you have latency and tape life consumed by “shoe shining”.
FIX: Fix/improve issues 1 and 2 above.
FIX: Submit a design change request to allow TSM to interleave more threads onto the same tape at once.
FIX: Use tape drives with lower minimum speeds to prevent underruns
FIX: Don’t use tape. Use virtual tape, another dedupe disk pool, or a replica target TSM server.

4) Check TSM server instrumentation.
This will show you where your time is spent, and what to upgrade next.
INSTRUMENTATION BEGIN
BACKUP STGPOOL DEDUP COPYPOOL
(wait several minutes)
INSTRUMENTATION END FILE=/tsm/instrumentation.out