mdadm: fewer, larger devices

I could not find anyone online who was confident that an mdadm array can be reshaped onto a smaller number of larger devices; plenty of recent posts said you cannot do this.  I made it happen, and the biggest concern is making sure you provide enough space on the new devices.  mdadm's safety warnings help with this.  I did have to resize my new partitions a couple of times during the process.

I did this because my rootvg needed to move to NVMe, and I only had room for 4 devices, vs the 5 on SATA.  The OS I used was Debian 10 Buster, but this should work on any vaguely contemporary GNU/Linux distribution.

There are always risks with reshaping arrays and LVM, so I recommend you back up your data.
There are always risks with reshaping arrays and LVM, so I recommend you back up your data.
There are always risks with reshaping arrays and LVM, so I recommend you back up your data.
There are always risks with reshaping arrays and LVM, so I recommend you back up your data.

First, build the new NVMe partitions
I have p1 for /boot (not UEFI yet, and I’m on LILO still, so unused right now).
I have p2 for rootvg, and p3 for ssddatavg.

parted /dev/nvme0n1
mklabel gpt
y
mkpart boot ext4 4096s 300MB
set 1 raid on
set 1 boot on
mkpart root 300MB 80GB
set 2 raid on
### From a later resize pass: remove partition 3, grow partition 2, then recreate partition 3
rm 3
resizepart 2 80G
mkpart datassd 80G 100%
set 3 raid on
print
quit

Repeat for the other devices so they match.
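
A hedged way to script the same layout onto the other three devices (this assumes they are nvme1n1 through nvme3n1 and that the same sizes apply; adjust before running):

### Apply the same partition layout to the remaining NVMe devices
for d in /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 ; do
  parted -s $d mklabel gpt
  parted -s $d mkpart boot ext4 4096s 300MB
  parted -s $d set 1 raid on
  parted -s $d set 1 boot on
  parted -s $d mkpart root 300MB 80GB
  parted -s $d set 2 raid on
  parted -s $d mkpart datassd 80GB 100%
  parted -s $d set 3 raid on
done
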
My devices looked like this after:

Model: INTEL SSDPEKNW020T8 (nvme)
Disk /dev/nvme3n1: 2048GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name     Flags
 1      2097kB  300MB   298MB                boot     boot, esp
 2      300MB   80.0GB  79.7GB               root     raid
 3      80.0GB  2048GB  1968GB               datassd  raid

Clear superblocks if needed
If you are retrying after 37 attempts, these commands may come in handy:

### wipe superblock
for i in /dev/nvme?n1p1 ; do mdadm --zero-superblock $i ; done

### Wipe FS
for i in /dev/nvme?n1p1 ; do dd bs=256k count=4k if=/dev/zero of=$i ; done
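
If you would rather clear everything in one pass, wipefs should handle both the md superblock and any filesystem signature (hedged alternative; same caveat about pointing it at the right partitions):

### Wipe all signatures (md superblock, filesystem) in one shot
for i in /dev/nvme?n1p1 ; do wipefs -a $i ; done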

Rebuild /boot – high level
This part is incomplete, because I have not changed my host to UEFI mode yet.  The reference I followed is good, but it did not get me all the way there.

### Make new array and filesystem
mdadm --create --verbose /dev/md3 --level=1 --raid-devices=4 /dev/nvme*p1
mkfs.ext4 /dev/md3
mount /dev/md3 /mnt
rsync -avSP /boot/ /mnt/

### Install GRUB2
mkdir /boot/grub

apt update
apt-get install grub2
### From dpkg-reconfigure: kopt=nvme_core.default_ps_max_latency_us=0

### Make the basic config
[root@ns1: /root]
/bin/bash# grub-mkconfig -o /boot/grub/grub.cfg
Generating grub configuration file …
Found linux image: /boot/vmlinuz-4.19.0-10-amd64
Found initrd image: /boot/initrd.img-4.19.0-10-amd64
Found linux image: /boot/vmlinuz-4.19.0-5-amd64
Found initrd image: /boot/initrd.img-4.19.0-5-amd64
done

### Install the bootloader
[root@ns1: /root]
/bin/bash# grub-install /dev/md3
Installing for i386-pc platform.
grub-install: warning: File system `ext2' doesn't support embedding.
grub-install: error: embedding is not possible, but this is required for cross-disk install.

[root@ns1: /root]
/bin/bash# grub-install /dev/nvme0n1
Installing for i386-pc platform.
grub-install: warning: this GPT partition label contains no BIOS Boot Partition; embedding won’t be possible.
grub-install: error: embedding is not possible, but this is required for RAID and LVM install.

I need to convert to UEFI before installing the bootloader will work.  I also rsync'd my old /boot into the new array, etc.  That is moot until this is corrected.

Swap out my SATA members with SSD

Original members are 37GB each, and the new ones are 77GB.  With RAID6, usable capacity is (members - 2) x member size, so 5 x 37GB gives about 111GB usable; matching that with only 4 members needs roughly 56GB each, and I kept coming up a few gigs short trying to match the size exactly (5×37 vs 4×57).  It was time to go bigger anyway.

The goal is to fail a drive, remove it, then add a larger SSD replacement, one member at a time. After the last small drive is removed, we reshape the array while it is degraded, because we don’t have a 5th device to add.

### Replace the first device
mdadm -f /dev/md1 /dev/sda2

mdadm -r /dev/md1 /dev/sda2
mdadm --add /dev/md1 /dev/nvme0n1p2

### wait until it’s done rebuilding
#mdadm --wait /dev/md1
while grep re /proc/mdstat ; do sleep 20 ; date ; done
mdadm -f /dev/md1 /dev/sdb2
mdadm -r /dev/md1 /dev/sdb2
mdadm --add /dev/md1 /dev/nvme1n1p2

### wait until it’s done rebuilding
#mdadm --wait /dev/md1
sleep 1 ; while grep re /proc/mdstat ; do sleep 20 ; date ; done
mdadm -f /dev/md1 /dev/sdc2
mdadm -r /dev/md1 /dev/sdc2
mdadm --add /dev/md1 /dev/nvme2n1p2

### wait until it’s done rebuilding
#sleep 1 ; while grep re /proc/mdstat ; do sleep 20 ; date ; done
mdadm --wait /dev/md1
mdadm -f /dev/md1 /dev/sdd2
mdadm -r /dev/md1 /dev/sdd2
mdadm --add /dev/md1 /dev/nvme3n1p2

### Remove last smaller device
#sleep 1 ; while grep re /proc/mdstat ; do sleep 20 ; date ; done
mdadm --wait /dev/md1
mdadm -f /dev/md1 /dev/sde2
mdadm -r /dev/md1 /dev/sde2

Reshape the array

Check that the reduced array size will still be larger than the LVM space used on your PV.

[root@ns1: /root]
/bin/bash# mdadm --grow /dev/md1 --raid-devices=4 --backup-file=/storage/backup
mdadm: this change will reduce the size of the array.
use --grow --array-size first to truncate array.
e.g. mdadm --grow /dev/md1 --array-size 155663872

[root@ns1: /root]
/bin/bash# pvs /dev/md1
  PV        VG     Fmt  Attr PSize   PFree
  /dev/md1  rootvg lvm2 a--  102.50g 8.75g

If you come up short, you can shrink a PV a little, but often there are used blocks scattered around.  There is no defrag for LVM, so you would have to manually migrate extents.  I was too lazy to do that, and instead grew my PV from 103GB to 155GB.  I kind of need the space anyway.

# pvresize --setphysicalvolumesize 102G /dev/md1
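
If you do go the shrink route and there are allocated extents past the new end of the PV, you can migrate them by hand with pvmove.  A minimal sketch; the PE ranges below are purely illustrative:

### See which physical extents are allocated and where they sit
pvs -v --segments /dev/md1

### Move extents that live past the intended new end of the PV
### into free space earlier on the same PV
pvmove --alloc anywhere /dev/md1:26000-26239 /dev/md1:2000-2239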

Final reshape here

Now that I know the size mdadm wants to use, I use exactly that (or smaller, as long as it is still larger than the PV size currently set).

mdadm --grow /dev/md1 --array-size 155663872
mdadm --grow /dev/md1 --raid-devices=4 --backup-file=/storage/backup1
sleep 1 ; while grep re /proc/mdstat ; do sleep 20 ; date ; done

One of the drives was stuck as a spare.

This is not guaranteed to happen, but it does happen sometimes.  It is just an annoyance, and one of the many reasons using RAID6 is much better than RAID5.  Errors can also be identified more reliably than with RAID5, among other things.  Just use RAID6 for 4 drives and up; I promise it’s worth it.  3 drives can be RAID5, or RAID10 on Linux, but that’s not ideal.  Also, if you have a random-write-intensive workload, you can use RAID10 to save some IOPS, at the expense of more drives used to protect larger arrays and inferior protection (e.g., it is possible to lose 2 drives on a 6-drive RAID10 and still lose data, if they are both copies of the same data).

[root@ns1: /root]
/bin/bash# mdadm /dev/md1 --remove faulty

[root@ns1: /root]
/bin/bash# mdadm --detail /dev/md1
/dev/md1:
State : active, degraded

Number  Major  Minor  RaidDevice  State
   0     259     15        0      active sync   /dev/nvme2n1p2
   1     259     17        1      active sync   /dev/nvme3n1p2
   2     259     11        2      active sync   /dev/nvme0n1p2
   -       0      0        3      removed

   4     259     13        -      spare         /dev/nvme1n1p2

Remove and re-add the spare

The fix was easy.  I just removed and re-added the drive that was stuck as a spare.

[root@ns1: /root]
/bin/bash# mdadm /dev/md1 --remove /dev/nvme1n1p2
mdadm: hot removed /dev/nvme1n1p2 from /dev/md1

[root@ns1: /root]
/bin/bash# mdadm /dev/md1 --add /dev/nvme1n1p2
mdadm: hot added /dev/nvme1n1p2

Check status on rebuilding
[root@ns1: /root]
/bin/bash# mdadm --detail /dev/md1
/dev/md1:
State : active, degraded, recovering

Number  Major  Minor  RaidDevice  State
   0     259     15        0      active sync        /dev/nvme2n1p2
   1     259     17        1      active sync        /dev/nvme3n1p2
   2     259     11        2      active sync        /dev/nvme0n1p2
   4     259     13        3      spare rebuilding   /dev/nvme1n1p2

Alternatively, the rebuild might have been frozen; check and kick it like this:
cat /sys/block/md1/md/sync_action
frozen
echo idle > /sys/block/md1/md/sync_action
echo recover > /sys/block/md1/md/sync_action

Grow to any extra space

Once it is done recovering and/or resyncing, you can grow into any additional space.  Since we used the value above to set the size “smaller”, we do not have to do this.  Note that when resizing up, it is technically possible to overrun the bitmap, so this example drops the bitmap during the resize.  That is a risk you’ll have to weigh: a power outage during a reshape without a bitmap could be a bad day.

mdadm --grow /dev/md1 --bitmap none
mdadm --grow /dev/md1 --size max
mdadm --wait /dev/md1
mdadm --grow /dev/md1 --bitmap internal

Expand LVM to use the new space

[root@ns1: /root]
/bin/bash# pvresize /dev/md1

[root@ns1: /root]
/bin/bash# pvs
  PV        VG     Fmt  Attr PSize    PFree
  /dev/md1  rootvg lvm2 a--  <148.38g 54.62g
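
With free extents back in the VG, any LV can be grown into them.  A quick sketch; the LV name and size are hypothetical:

### Grow an LV by 20G and resize its filesystem in one step
lvextend -r -L +20G /dev/rootvg/somelv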

Other Notes 1:

I also dropped and re-added a drive with sectors pending reallocation.  That is entirely unrelated to the reshaping above, but I’ll dump the log here for my own reference.

### See the errors
/bin/bash# smartctl -a /dev/sda
smartctl 6.6 2017-11-05 r4594 [x86_64-linux-4.19.0-10-amd64] (local build)
Copyright (C) 2002-17, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family: Western Digital Red
Device Model: WDC WD30EFRX-68EUZN0
Firmware Version: 82.00A82
User Capacity: 3,000,592,982,016 bytes [3.00 TB]
Sector Sizes: 512 bytes logical, 4096 bytes physical
Rotation Rate: 5400 rpm

Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG    VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED RAW_VALUE
  5 Reallocated_Sector_Ct   0x0033  200   200   140    Pre-fail Always  -           1
196 Reallocated_Event_Count 0x0032  199   199   000    Old_age  Always  -           1
197 Current_Pending_Sector  0x0032  200   200   000    Old_age  Always  -           2
198 Offline_Uncorrectable   0x0030  200   200   000    Old_age  Offline -           0

### See what arrays use this disk
[root@ns1: /root]
/bin/bash# cat /proc/mdstat | grep -p sda
md0 : active raid1 sda1[4] sdd1[1] sde1[3] sdc1[2] sdb1[0]
271296 blocks [5/5] [UUUUU]
bitmap: 0/1 pages [0KB], 65536KB chunk

md2 : active raid6 sda3[3] sdb3[2] sdd3[4] sdc3[1] sde3[0]
8682399744 blocks level 6, 512k chunk, algorithm 2 [5/5] [UUUUU]
bitmap: 0/11 pages [0KB], 131072KB chunk

### Remove and re/add so it re-writes
[root@ns1: /root]
/bin/bash# mdadm /dev/md0 --fail /dev/sda1
mdadm: set /dev/sda1 faulty in /dev/md0

[root@ns1: /root]
/bin/bash# mdadm /dev/md0 --remove /dev/sda1
mdadm: hot removed /dev/sda1 from /dev/md0

[root@ns1: /root]
/bin/bash# mdadm /dev/md0 --add /dev/sda1
mdadm: hot added /dev/sda1

[root@ns1: /root]
/bin/bash# cat /proc/mdstat | grep -p sda
md0 : active raid1 sda1[5] sdd1[1] sde1[3] sdc1[2] sdb1[0]
271296 blocks [5/4] [UUUU_]
[=================>…] recovery = 87.3% (237440/271296) finish=0.0min speed=118720K/sec
bitmap: 0/1 pages [0KB], 65536KB chunk

md2 : active raid6 sda3[3] sdb3[2] sdd3[4] sdc3[1] sde3[0]
8682399744 blocks level 6, 512k chunk, algorithm 2 [5/5] [UUUUU]
bitmap: 0/11 pages [0KB], 131072KB chunk

### Remove/Readd the bigger array member
[root@ns1: /root]
/bin/bash# mdadm /dev/md2 --fail /dev/sda3
mdadm: set /dev/sda3 faulty in /dev/md2

[root@ns1: /root]
/bin/bash# mdadm /dev/md2 --remove /dev/sda3
mdadm: hot removed /dev/sda3 from /dev/md2

[root@ns1: /root]
/bin/bash# mdadm /dev/md2 --add /dev/sda3
mdadm: hot added /dev/sda3

Other Notes 2:

I also made a new array on partition 3.  That is entirely unrelated to the reshaping above, but I’ll dump the log here for my own reference.

[root@ns1: /root]
/bin/bash# mdadm /dev/md4 --create -l 6 -n 4 /dev/nvme?n1p3
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md4 started.

[root@ns1: /root]
/bin/bash# pvcreate /dev/md4
Physical volume "/dev/md4" successfully created.

[root@ns1: /root]
/bin/bash# vgcreate ssdvg /dev/md4 -Ay -Zn
Volume group “ssdvg” successfully created

[root@ns1: /root]
/bin/bash# vgs
  VG     #PV #LV #SN Attr   VSize    VFree
  datavg   1   7   0 wz--n- <8.09t   704.12g
  rootvg   1   7   0 wz--n- <148.38g 54.62g
  ssdvg    1   0   0 wz--n- 3.58t    3.58t

[root@ns1: /root]
/bin/bash# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid10]
md4 : active raid6 nvme3n1p3[3] nvme2n1p3[2] nvme1n1p3[1] nvme0n1p3[0]
3844282368 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/4] [UUUU]
[>………………..] resync = 1.0% (20680300/1922141184) finish=216.5min speed=146336K/sec
bitmap: 15/15 pages [60KB], 65536KB chunk

md3 : active raid1 nvme3n1p1[3] nvme2n1p1[2] nvme1n1p1[1] nvme0n1p1[0]
289792 blocks super 1.2 [4/4] [UUUU]

md0 : active raid1 sda1[4] sdd1[1] sde1[3] sdc1[2] sdb1[0]
271296 blocks [5/5] [UUUUU]
bitmap: 0/1 pages [0KB], 65536KB chunk

md1 : active raid6 nvme1n1p2[3] nvme3n1p2[1] nvme2n1p2[0] nvme0n1p2[2]
155663872 blocks level 6, 256k chunk, algorithm 2 [4/4] [UUUU]
bitmap: 1/1 pages [4KB], 65536KB chunk

md2 : active raid6 sda3[5] sdb3[2] sdd3[4] sdc3[1] sde3[0]
8682399744 blocks level 6, 512k chunk, algorithm 2 [5/4] [UUU_U]
[=>……………….] recovery = 6.6% (191733452/2894133248) finish=349.9min speed=128713K/sec
bitmap: 0/11 pages [0KB], 131072KB chunk

unused devices: <none>


lancache

TL;DR: I now only have to download Microsoft and Steam updates once for all 13 systems in the house.
 
I finally set up a LAN cache. I got tired of Windows Update sneaking in and eating all of my bandwidth, killing movies, etc. We have 4 regular Steam clients, plus 3 that don’t run very often, and we have 13 Windows 10 systems. It seems like the settings always revert, and the machines update whenever they want, at 100% bandwidth, a few months after I set the throttles low.
 
https://lancache.net/ caches Steam, Windows updates, and several others. It was much easier to set up than a Squid web proxy on my router. This should make it so anything that gets downloaded only downloads once. I only have 200GB to throw at it right now, but that should help a bunch. I need to set it to auto-start on boot and give it more space eventually, but I’m just really happy it’s working now, with Apache, sslh, and dnsmasq still working on the same host.
 
My router already pointed to my server for DNS (dnsmasq), so I could manually override things. I added a second IP address and modified lancache.yml to put all of the lancache services on the new IP address only. I updated dnsmasq.conf to forward to the lancache DNS only, because it was not obeying the fallback rules.
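
Roughly what that looked like; the address and interface below are examples rather than my exact config, and the lancache services themselves get bound to the same address in lancache.yml:

### Add a second IP just for lancache (address and interface are examples)
ip addr add 192.168.1.251/24 dev eth0

### /etc/dnsmasq.conf: hand all queries to the lancache DNS instance
###   no-resolv
###   server=192.168.1.251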
 
This means if lancache dies, I have to edit dnsmasq to keep the home network functional. So many layers.

Spacetime Dream

I had one of those dreams during waking that was both vivid and meaningful. It was in a village, with many people around. Standing inside, but near the lip of a basin. There were trees, complex terrain, but not a lot of big rocks. Everything was lush and green in the late afternoon, early evening. Before dusk, but no direct beams of light seen.

The basin wasn’t actually a basin. It was curved spacetime. Gravity did not “feel” strange, but it did “look” like we would walk down into the basin to go forward. Left and right seemed normal. Perhaps the Earth had become a Mobius Strip in spacetime, not a torus. We were looking across the gap from the inside.

There was no Sun in this universe. It just became brighter and darker over time. Shadows were always towards the center of the strip, but if you looked behind you, you would see the curve of the Earth going upwards.

We were looking up at the moon. The moon was physically smaller, but seemed much larger because of how close it was. It took up maybe 5-10 degrees of arc worth of the sky. The moon travelled perpetually in the center, because really, it stayed stationary. The Earth’s surface rotated continually, almost flowed, across the Mobius, dragging atmosphere with it.

Left to right, there were lots of cirrus clouds, with a thick line of them. You could see the clouds striking and billowing against the line of clouds. No dust was coming off of the moon, and it was very dense — more massive than IRL.

In the distance, instead of blackness, or stars, it was the darkest blue. You could see the lights of night-time, creating an outline of the entire continent of Africa, isolated, without Europe nearby. It was almost directly across it seemed, but it was also up vertically because of the curvature.

We understood, all of this in a way, because we had grown up with it. This was still an amazing sight, just like IRL how people get excited for a solar eclipse, or a blood moon, or a comet, or a meteor shower, or any other less common movements.


reducevg very slow

This is an APAR, but really it’s just a description of expected behavior. reducevg sends the equivalent of TRIM commands, and on a storage array that means writing nulls, so on a big LUN or a busy array this can take a long time. If you do not need to worry about that, you can disable the space reclaim:

ioo -o dk_lbp_enabled=0
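
To see the current value first, and to make the change stick across reboots, the standard ioo flags should work (verify on your AIX level):

### Show the current setting
ioo -o dk_lbp_enabled

### Change it now and persist it for the next boot
ioo -p -o dk_lbp_enabled=0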

Here is the IBM doc about it.

 

IJ23045: REDUCEVG UNCLEAR ON DELAY WHEN WAITING FOR INFLIGHT RECLAIM REQ APPLIES TO AIX 7100-05

 

A fix is available

APAR status

  • Closed as program error.

Error description

  • reducevg may be unclear, why there is some delay
    when waiting on inflight reclaim requests.
    

Local fix

  • Disable space reclamation by running:
    ioo -o dk_lbp_enabled=0
    

Problem summary

  • reducevg may be unclear, why there is some delay
    when waiting on inflight reclaim requests.
    

Problem conclusion

  • reducevg displays a message in case there are space reclamation
    IOs in flight, to indicate that reducevg may take some time to
    complete.

TSM SP Remove ReplServer

PROBLEM:
Every 5.5 minutes, this shows up in the actlog

08/13/20 08:05:25 ANR1663E Open Server: Server OLDSERVER not defined
08/13/20 08:05:25 ANR1651E Server information for OLDSERVER is not available.
08/13/20 08:05:25 ANR4377E Session failure, target server OLDSERVER is not defined on the source server.
08/13/20 08:05:25 ANR1663E Open Server: Server OLDSERVER not defined
08/13/20 08:05:25 ANR1651E Server information for OLDSERVER is not available.
08/13/20 08:05:25 ANR4377E Session failure, target server OLDSERVER is not defined on the source server.
08/13/20 08:05:26 ANR1663E Open Server: Server OLDSERVER not defined
08/13/20 08:05:26 ANR1651E Server information for OLDSERVER is not available.
08/13/20 08:05:26 ANR4377E Session failure, target server OLDSERVER is not defined on the source server.
08/13/20 08:05:28 ANR1663E Open Server: Server OLDSERVER not defined
08/13/20 08:05:28 ANR1651E Server information for OLDSERVER is not available.
08/13/20 08:05:28 ANR4377E Session failure, target server OLDSERVER is not defined on the source server.

SOLUTION:
QUERY REPLSERVER shows the GUID of the old server.
REMOVE REPLSERVER <GUID> causes the errors to stop.
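
For reference, the whole thing from the admin command line looks roughly like this; the GUID below is made up, so use the one QUERY REPLSERVER reports for OLDSERVER:

tsm: SERVER1> QUERY REPLSERVER
tsm: SERVER1> REMOVE REPLSERVER 12.34.56.78.9a.bc.de.f0.12.34.56.78.9a.bc.de.f0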


SVC, StorWize, FlashSystem, Spectrum Virtualize – replace a drive

I always forget, so here’s a reminder…

When you replace a drive on one of these, mdisk arrays do not auto-rebuild.

If the GUI fix procedures go away, or never show up, or whatever causes the replacement drive to not get included as a new drive in the mdisk, you can do this manually.

First, look for the candidate or spare drive you want to use.
lsdrive
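
If the drive list is long, filtering on the use column helps (hedged; the filter syntax is from memory):
lsdrive -filtervalue use=candidate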

Then, make sure that drive ID is a candidate:
chdrive -use candidate 72

Then, find the missing member:
lsarraymember mdisk1

Then, set the new drive to use that missing member ID:
charraymember -member 31 -newdrive 72 mdisk1

You can watch the progress of the rebuild:
lsarraymemberprogress mdisk1


Dovecot recompress

I was getting an error about the file size being too large.

May 4 17:42:57 ns1 dovecot: imap(jdavis)<21859><XXXXXXXXX/YYYYYYYYY>: Error: Corrupted record in index cache file /home/jdavis/Maildir/.Archivedir/dovecot.index.cache: UID 1: Broken physical size in mailbox Archivedir: read(zlib(/home/jdavis/Maildir/.Archivedir/cur/1111111111.M555555P333V000000000000FD05I0001A11F_2212.mailhost,S=9794:2,SZ,Z)) failed: Cached message size larger than expected (9794 > 3254, box=Archivedir, UID=1)

I might have clobbered some things while trying to fix it, so I restored a backup to maildir.tmp, and did the following to try to repair/rebuild.

################################
### Clean up the restored mail repo
################################
cd /storage/uploads/CustomerImages/mailtemp/Maildir.tmp
for i in .[a-zA-Z]*/cur/* ; do rm cur/`basename $i` ; done

IFS=$'\n'
for i in $(find . -type f); do
   if file "$i" |grep gzip >/dev/null; then
      # echo "Extracting GZIP:" "$i" 
      mv "$i" "$i".gz
      gunzip "$i".gz
   fi
done &

for i in $(find . -type f); do
   if file "$i" |grep bzip2 >/dev/null; then
      # echo "Extracting BZIP2:" "$i"
      bunzip2 -q "$i"
      mv "$i".out "$(echo $i |sed 's/.out//')"
   fi
done &



################################
### Copy in the missing or damaged files
################################
cd /home/jdavis/Maildir
for i in .[a-zA-Z]* [a-z]* ; do rsync -avS --partial /storage/uploads/CustomerImages/mailtemp/Maildir.tmp/Maildir/"${i}" ./ ; done
for i in .[a-zA-Z]*/cur/* ; do rm cur/`basename $i` ; done

IFS=$'\n'
for i in $(find . -type f); do
   if file "$i" |grep gzip >/dev/null; then
      # echo "Extracting GZIP:" "$i" 
      mv "$i" "$i".gz
      gunzip "$i".gz
   fi
done &

for i in $(find . -type f); do
   if file "$i" |grep bzip2 >/dev/null; then
      # echo "Extracting BZIP2:" "$i"
      bunzip2 -q "$i"
      mv "$i".out "$(echo $i |sed 's/.out//')"
   fi
done &


################################
### Now, remove duplicates
################################
find /storage/uploads/CustomerImages/mailtemp/Maildir.tmp /home/jdavis/Maildir -type d -exec fdupes -dNI {} \;



################################
### Now, recompress it all
################################
compress_maildir () {
   cd $1
   DIRS=`find -maxdepth 2 -type d -name cur`
   for dir in $DIRS; do
      echo $dir
      cd $dir
      FILES=`find -type f -name "*,S=*" -not -regex ".*:2,.*Z.*"`
      #compress all files
      for FILE in $FILES; do
         NEWFILE=../tmp/${FILE}
         #echo bzip $FILE $NEWFILE
         if ! bzip2 -9 $FILE -c > $NEWFILE; then
            echo compressing failed
            exit -1;
         fi
         #reset mtime
         if ! touch -r $FILE $NEWFILE; then
            echo setting time failed
            exit -1
         fi
      done
      echo Locking $dir/..
      if PID=`/usr/lib/dovecot/maildirlock .. 120`; then
         #locking successful, moving compressed files
         for FILE in $FILES; do
            NEWFILE=../tmp/${FILE}
            if [ -s $FILE ] && [ -s $NEWFILE ]; then
               echo mv $FILE $NEWFILE
               mv $FILE /tmp
               mv $NEWFILE ${FILE}Z
            else
               echo mv failed
               exit -1
            fi
         done
         kill $PID
      else
         echo lock failed
         exit -1
      fi
      cd - >/dev/null
   done
}


################################
### Actually RUN the script to compress all maildir files
################################
./compress_maildir /home/jdavis/Maildir/ &
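
After the recompress, Dovecot’s index cache may still hold the old sizes. A hedged final step, assuming doveadm is available on this Dovecot version:

### Force Dovecot to rebuild the indexes for all of this user's mailboxes
doveadm force-resync -u jdavis '*'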

Related: http://omnitech.net/news/2015/11/14/compressed-dovecot-maildir/


Light and Disinfectant

So, time to crank up the UV and disinfectants into our lungs, huh? Remember folks, if you die, the virus dies too!

TRANSCRIPT: Donald J. Trump said on 4/23/20: “Supposing we hit the body with a tremendous, whether it’s ultraviolet or just very powerful light, and, I think you said, that hasn’t been checked but you’re going to test it? And then I said, supposing you brought the light INSIDE the body, which you can do either through the skin or, uh, in some other way….and then I see the disinfectant, where it knocks it out in a minute, one minute, and is there a way we can do something like that? Uh, by injection inside, or almost a cleaning, because you see it gets in the lungs and it does a tremendous number on the lungs, so it would be interesting to check that, but you’re going to have to use medical doctors for that. But it sounds interesting to me.”

EXPLANATION: UV and disinfectants kill C19, and he was suggesting that introducing those to the inside of a body may be helpful. He said Birx would be looking into that.

There is zero justification for trying to say what he suggested might be valid. It is so obviously ignorant that it is valid to dismiss it on the spot. Birx’s obvious pain on her face while having to listen to this makes perfect sense.

The suggestions are also careless for not being clear to the moderately large number of people who absolutely will interpret this to mean they should resume drinking diluted bleach.

If this kind of thing worked, then no infectious disease would ever be possible anymore. It’s like saying we should boil people to cure the disease. Sure, it would destroy the germs, but also destroy the person’s cells, aka them.

The people defending what he said are doing so out of ideological loyalty, or blind faith, or abject ignorance. Think of the Golgafrinchans and their wheel. It is not worth your time trying to get understanding into their minds. Either someone understands, or they do not.


Death Rates Falling

US death rate trends since the pandemic declaration (03-11) and national emergency (03-13).  Chart attached for various areas:
Downward Death Rates
 
The raw numbers (deaths, new infections) are:
169.49% 175.73%
122.00% 139.65%
125.82% 133.45%
135.83% 130.53%
132.37% 131.24%
127.90% 123.07%
133.43% 122.40%
128.34% 127.45%
130.77% 121.26%
128.15% 119.50%
121.77% 115.98%
120.71% 114.85%
130.05% 116.29%
122.82% 113.39%
124.57% 114.10%
119.59% 113.20%
118.63% 112.07%
114.42% 109.14%
112.10% 108.76%
117.98% 108.08%

SARS2 and Animals

This CNET article discusses research on SARS-CoV-2 in several common animals, as well as a bit of history on the virus, its animal origins, and a quick summary to date.  A limited number of animal species have been tested and proven to be infectable by the virus.  There is no proof yet of humans catching it back from animals.  Very little study has been done on communicability, and it was initially thought to be no risk.
 
The gist of the research paper is that adult and juvenile cats can get it the same as humans; young cats and ferrets can get it just in the upper respiratory tract (sinuses, tonsils), but not the lower respiratory tract. Dogs can technically get it, but are not very susceptible, and it does not stay in them long. Ducks, pigs, pigeons, etc. are not susceptible at all.
 
Another group did computer modeling of 253 animals’ ACE2 receptors to see what other animals we should investigate as possible transmission vectors.  
  • Human, Flying Fox, Horseshoe Bat, Lynx, Civet, Cat, Swine, Pangolin, Cow, Buffalo, Mustela (ferrets, weasels, etc.), Goat, and Sheep ACE2 clustered with human ACE2.
  • Mice, birds, reptiles, etc were not, and mice were proven not susceptible.
  • Civet and Bat have been implicated in SARS1 sources, and Pangolin and Bat for SARS2 sources.

     SARS2 ACE2 Phylogeny Chart

This chart shows the ACE2 receptors that conserve the same binding sites as humans. They suspect that 50% and above “could” harbor the virus, but that birds generally are not a reservoir for betacoronaviruses. We see in other research that dogs, at 90%, are poor carriers, and clear the virus within 4 days. Swine were not actually susceptible, and ferrets were not able to get it into their lungs. The mechanism of those differences is unknown.

SARS2 ACE2 Phylogeny similarities