Spectrum Protect – container vulnerability

We ran into an issue where a level-zero operator became root and deleted some TSM dedupe-pool containers so he would stop getting full-filesystem alerts.

Issues this exposed:

How does someone that green get full, unmonitored root access?

  • They gave false information about timestamps when defending their actions.
  • Their senior tech lead was content to merely advise that they not move or delete files without contacting the app owner.
  • Imagine if this had been a customer-facing database server!

In ISP/TSM, once an extent is marked damaged, a new backup of that extent will replace it.

  • Good TDP4VT CTL files and other incrementals will send missing files.
  • TDP for VMware full backups fail if the control file backup is damaged.
  • Damaged extents do not mark files as damaged or missing.

Replicate Node will back-propagate damaged files.

  • Damaged extents do not mark files as damaged or missing.

Also, in case you missed that:

  • Damaged extents do not mark files as damaged or missing.

For real, IBM says:

  • Damaged extents do not mark files as damaged or missing.
  • “That might cause a whole bunch of duplicates to be ingested and processed.”

IBM’s option is to use REPAIR STGPOOL (a rough command sketch follows these lists).

  • Requires a prior PROTECT STGPOOL (similar to BACKUP STGPOOL and RESTORE STGPOOL).
  • PROTECT STGPOOL can go to a container copy on tape, a container copy on FILE, or a container primary on the replica target server.
  • PROTECT STGPOOL cannot go to a cloud pool.
  • STGRULE TIERING only processes files, not PROTECT extents.
  • PROTECT STGPOOL cannot get to a cloud pool that way either.
  • There is NO WAY to use a cloud storage pool to protect a container pool from damage.

EXCEPTION: Damaged extents can be replaced by REPLICATE NODE into a pool.

  • You can DISABLE SES, and reverse the replication config.
  • Replicate node that way will perform a FULL READ of the source pool.
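
For my own notes, the repair path looks roughly like this from dsmadmc. The pool name is a placeholder and the syntax is from memory, so verify with HELP PROTECT STGPOOL and HELP REPAIR STGPOOL before trusting it:

/* Placeholder pool name; syntax from memory, check HELP before running */
/* Protect to the replica server (default), or TYPE=LOCAL to a container-copy pool */
PROTECT STGPOOL deduppool
/* Mark the bad extents if they are not already flagged */
AUDIT CONTAINER STGPOOL=deduppool ACTION=MARKDAMAGED
/* Pull clean copies back; SRCLOCATION must match wherever PROTECT wrote to */
REPAIR STGPOOL deduppool SRCLOCATION=REPLSERVER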

There is a Request For Enhancement from November 2017 for TYPE=CLOUD POOLTYPE=COPY.

  • That would be a major code effort, but would solve this major hole.
  • That has not gotten a blink from product engineering.
  • Not even an “under review”, nor “No Way”, nor “maybe sometime”.

Alternatives for PROTECT into CLOUD might be:

  • Don’t use cloud. Double the amount of local disk space, and replicate to another datacenter.
  • Use NFS (we would need to build a beefy VM and configure KRB5 at both ends so we could do encrypted NFSv4).
  • Use CIFS (the host is on AIX, which does not support CIFS v3; a Linux conversion up front, before we had bulk data, was given a big NO).
  • Use azfusefs (again, it’s not Linux).

Anyway, maybe in 2019 this can be resolved, but this is the sort of thing that was really, REALLY poorly documented, and did not get the time and resources to be tested in advance. It is the sort of thing that angers everyone at every level.

REFERENCE: hard,intr,nfsvers=4,tcp,rsize=1048576,wsize=1048576,bg,noatime
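
For context, on a Linux client those options would land in /etc/fstab roughly like this (server name and mount point are made up):

# Hypothetical server and mount point, using the option string above
nfsserver:/export/tsmprotect  /tsm/protect  nfs  hard,intr,nfsvers=4,tcp,rsize=1048576,wsize=1048576,bg,noatime  0  0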


Biology Rambles with Khai

biology = whole organisms
microbiology = whole tiny organisms
cellular biology = whole cells of complex organisms
molecular biology = The machinery inside of cells
biochemistry = The chemicals of organisms, both inside and outside of cells
organic chemistry = chemistry involving carbon and hydrogen.

There’s more chemistry, and underlying chemistry is physics. Biology is an application of chemistry, and chemistry is an application of physics.

When you look at molecular biology, you see what looks like program counters and 3D printers. Most of the bits inside of cells are literally physical machines, aided along by having the right shape and the electrostatic charge needed to pull the right pieces out of the semi-random soup and move, attach, or detach them.

When you exercise, you literally dump calcium into pockets in a muscle cell, and those fit into a lock on a little machine that ratchets down a little rope, each click caused by an ATP molecule floating in, connecting, getting snapped, and having a tiny bit of electric charge transferred.

The fun thing is that ATP is also required to pump the calcium back out of that little pocket, so when you’ve depleted ATP, the calcium stays put and the muscle stays contracted. There is a complex loop for making ATP, and a whole lot of ways to keep that cycle (the citric acid or Krebs cycle) going, but the fastest way to make ATP is with oxygen and glucose.

Oxygen input often lags behind, and you can always burn up oxygen faster than you can replenish it. When this happens, muscle cells get stuck contracted, and they keep trying to contract, competing for ATP in this depleted state. This is one of the ways a muscle cramp occurs.

This rambling brought to you by brain inputs triggered by talking to Khai about his AP biology test today, and how excited he is about all of the machinery inside of cells.


ANR3114E LDAP error 81. Failure to connect to the LDAP server

This used to be on IBM’s website, but it disappeared. It is referenced all over the net and needs to still exist. I could only find it in the Wayback Machine, so I’m adding another copy to the internet.

2013 SOURCE: www-01.ibm.com/support/docview.wss?uid=swg21656339

Problem(Abstract)

When the SET LDAPUSER command is used, the connection can fail with:

ANR3114E LDAP error 81 (Can’t contact LDAP server)

Cause

The user common name (CN) in the SET LDAPUSER command contains a space or the ldapurl option is incorrectly specified.

Diagnosing the problem

Collect a trace of the Tivoli Storage Manager Server using the following trace classes:
session verbdetail ldap ldapcache unicode

More information about tracing the server can be found here: Enabling a trace for the server or storage agent

The following errors are reported within the trace:

11:02:04.127 [44][output.c][7531][PutConsoleMsg]:ANR2017I Administrator ADMIN issued command: SET LDAPPASSWORD ?***? ~
11:02:04.171 [44][ldapintr.c][548][ldapInit]:Entry: ldapUserNew =      CN=tsm user,OU=TSM,DC=ds,DC=example,DC=com
11:02:04.173 [44][ldapintr.c][5851][LdapHandleErrorEx]:Entry: LdapOpenSession(ldapintr.c:2340) ldapFunc = ldap_start_tls_s_np, ldapRc = 81, ld = 0000000001B0CAB0
11:02:04.174 [44][ldapintr.c][5867][LdapHandleErrorEx]:ldap_start_tls_s_np returned LDAP code 81(Can't contact LDAP server), LDAP Server message ((null)), and possible GSKIT SSL/TLS error 0(Success)
11:02:04.174 [44][output.c][7531][PutConsoleMsg]:ANR3114E LDAP error 81 (Can't contact LDAP server) occurred during ldap_start_tls_s_np.~
11:02:04.174 [44][ldapintr.c][6079][LdapHandleErrorEx]:Exit: rc = 2339, LdapOpenSession(ldapintr.c:2340), ldapFunc = ldap_start_tls_s_np, ldapRc = 81, ld = 0000000001B0CAB0
11:02:04.174 [44][ldapintr.c][1580][ldapCloseSession]:Entry: sessP = 0000000009B99CD0
11:02:04.175 [44][ldapintr.c][3159][LdapFreeSess]:Entry: sessP = 0000000009B99CD0
11:02:04.175 [44][ldapintr.c][2449][LdapOpenSession]:Exit: rc = 2339, ldapHandleP = 000000000AFDE740, bindDn =                              (CN=tsm user,OU=TSM,DC=ds,DC=example,DC=com)
11:02:04.175 [44][output.c][7531][PutConsoleMsg]:ANR3103E Failure occurred while initializing LDAP directory services.~
11:02:04.175 [44][ldapintr.c][856][ldapInit]:Exit: rc = 2339
11:02:04.175 [44][output.c][7531][PutConsoleMsg]:ANR2732E Unable to communicate with the external LDAP directory server.~

Resolving the problem

  • In the trace provided, the common name (CN) contains a space. (CN=tsm user,OU=TSM,DC=ds,DC=example,DC=com)

    Remove the space in the common name when using the SET LDAPUSER command. For example:

    SET LDAPUSER “CN=tsmuser,OU=TSM,DC=ds,DC=example,DC=com”

  • Use an LDAP connection utility such as ldp.exe to ensure the ldapurl option is correct and the LDAP server is accepting connections

    <ldapurl> port 636, check the box for SSL

    Verify there are no errors in the output
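
Not from the technote, but if you would rather test from the server side, OpenLDAP’s ldapsearch can do the same check. The host and DNs below are placeholders:

# Placeholder host and DNs; a clean base-object result means the URL, TLS, and bind DN are all good
ldapsearch -H ldaps://ldap.ds.example.com:636 \
  -D "CN=tsmuser,OU=TSM,DC=ds,DC=example,DC=com" -W \
  -b "DC=ds,DC=example,DC=com" -s base "(objectclass=*)"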


Class-M Asteroids

John W. posted about a Voyager episode (Emanations) where they found a Class-M Asteroid, and the question came up as to whether such a thing would even be possible.

In the episode, there were multiple Class-M asteroids around a Class-D Planet. Class-D is a small, rocky, barren planet, and Class-M means nickel-iron core, water, atmosphere, and overall suitable for human life.

This sent me off into research into what the minimum size might be for a human-habitable planet. Sea-level pressure on Earth is 101.3 kPa, with 21% oxygen. The minimum partial pressure of oxygen is 16 kPa, below which we cannot adapt. (At 15 kPa we lose cognitive function and peripheral vision, and it gets worse the lower it goes.)

Pure oxygen atmosphere at 16kPa is not feasible either, because of dehydration and fire risks, plus oxygen toxicity. Bump up to 25% nitrogen to mitigate that. We also need a little moisture, at least 30% RH. At room temperature, we’re looking at another 1kPa.

Lastly, we need a buffer, because we’ll breathe out CO2, and we cannot have more than 5%, though Earth normal is 0.04 kPa. In a spacecraft, the buffer would depend on air circulation, reaction times, etc. Mir used 34 kPa total atmospheric pressure, with 25% nitrogen, 75% oxygen, and the CO2 and H2O under 1% absolute.

On a dwarf planet or planet, the buffer would be the difference in pressures across the habitable zones of the planet. Maybe the highest peak would be 14kPa O2, maybe 21kPa total. Maybe the lowest trench would be 36kPa total, and 24kPa O2.

For a third of the pressure, we’re looking at roughly a third of surface gravity. Assuming a density similar to Earth’s, this could be about the size of Mercury. You still want a molten core, so you can have a magnetosphere. Either the rock is young (relatively speaking), or it’s a moon getting tidal heating. The other options, high radioactivity or being much closer to the star, have issues for survivability.
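
The scaling I’m leaning on here: at constant density, surface gravity grows linearly with radius and mass with its cube, so roughly

g = \frac{GM}{r^2} = \frac{4}{3}\pi G \rho r \quad\Rightarrow\quad \frac{g}{g_\oplus} \approx \frac{\rho}{\rho_\oplus}\,\frac{r}{R_\oplus}, \qquad \frac{M}{M_\oplus} \approx \frac{\rho}{\rho_\oplus}\left(\frac{r}{R_\oplus}\right)^{3}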

With a gravity of around 3.3 m/s² and a 34 kPa mean surface air pressure, humans would be limited to the bottom 4 km of atmosphere. That’s totally reasonable. Maybe people get altitude sickness at 2 km instead of 6-7 km. Fine.

Planet diameter is, again, close to a third, more like 38-39%. So, we’re talking about 4900 km in diameter, and about 3.3E23 kg (about 5.5% of Earth). Our largest known asteroid is 1000km, and our largest known dwarf planet is 2600km.

For our solar system, a “Class M” asteroid would not be possible. If you get big enough, you move from asteroid to dwarf planet, though that term was not in use during that part of Voyager’s production. However, if it were that big, they would have just called it a small planet, or even a moon, since they’re orbiting a Class-D planet. NOTE: We have M-type asteroids, which are “metallic”. Ceres is a G-type, which is carbon rich.

If there were a higher density core, you could reduce the size requirements. Remember that asteroids are basically shattered planets, or proto-planets that never could accrete, so you could have one that used to be mostly core material. The earth is 35% iron, 30% oxygen, 15% silicon, 13% magnesium, and 7% other stuff by mass.

Something with higher percentages of other stuff, such as late generation supernovae fragments might be possible. You could get three times the density out of things like Osmium, Platinum, and Gold without being toxic, though at some point, you’re looking at terraforming rather than evolved life, and it would be extra extra rare.

You couldn’t just do a 2600km 60% iron planetoid, because you wouldn’t be able to have enough silicate to protect the surface. The planet would cool too quickly unless it were close to the star (like Mercury). You’d have a very hot side, and a very cold side, or a very narrow window of access on astronomical and geological scales. It would be very magnetic (and conductive), and very reactive with water (part of Class-M is lots of water). The limit is probably somewhere around 40%. Even 43% is only 20-30% increased density. Even Mercury at 65% is too big to be even a Dwarf Planet. The density here is just not enough to bring gravity up to our target on a sub-3000km body.

You might be able to scrape by with 25% gravity, and a breathing apparatus, or if it were a really young planet, or had some other source for keeping the atmosphere relatively thick, but that gets so much more difficult to find AND call “Class-M”.

A Chthonian planet that got shattered by impact might work, if it were somehow put into a reasonable orbit. It could have very high density for a small core, 4x that of the Earth. But, they get that size by being a gas giant, then having the atmosphere stripped by being in too close of an orbit to a star. So then, it would have to be shattered, and one of those pieces would have to be ejected into a stable orbit inside of the habitable zone. That really means TWO collisions, one to transfer, and one to remove eccentricity. Not very likely, but maybe even smaller than Ceres might work. You’d need a more radioactive core, otherwise the planet would cool way too quickly. But, a radioactive core, on a shattered planetoid, would have a radioactive surface. Not Class-M.

A planet that had a Platinum inner core, and iron outer core might work. I’m thinking 30% Platinum, 20% iron, 20% oxygen, 12% silicon, 8% magnesium, and 10% other stuff (lots of carbon since it’s a smaller planet) could totally work on a 1200km planetoid. Though, this type of body would not really be a “Class-M” body by Trek standards (nickel-iron core), but it might be close enough. “Exotic Class-M” maybe. Might be an issue for heavy metal poisoning, since not all of the increased platinum would be in the core. Excess surface level might mean excess platinum salts… etc.

We still have to be concerned with loss of atmosphere through interaction with other asteroids, solar radiation, maintaining the magnetic field, etc. The planet would not be habitable for nearly as long, and probably would not evolve life on its own. This asteroid did not. The only life it held was dead bodies (basically a cemetery planetoid). It’s possible even the atmosphere came from the subspace voids, though why the moons had atmospheres, but the central planet did not, does not seem rational to me. The lower pressure means water boils at 60C, which limits our range of surface temperatures (and increases cooking times!)

I did not calculate the scale height, nor any of the stuff to get exact with all of this. I might be off. This is just my mental gymnastics after too much dinner caffeine.


Tea Tree Soap 2018

Soap is done. It processed really fast, which scares me a little. I did use a much better stick blender. The pH came out a perfect 10 on one kind of strip, and 9.9 on the other.
 
I worry I’ll have trouble with it slumping whenever I remove the molds. I’m going to let it use up counter space in the kitchen until mid-week, and if it’s still soft, I’ll chill it before freeing it. I should have made this batch in January.
 
Anyway, I’m hopeful, though I overestimated the mold capacity. My old molds have self destructed (HDPE pipe, split), so I have some new, silicone ones. I used three, plus four Solo cups. Don’t judge. This is SCIENCE! 

Note to self, 3L of molds will not hold the soap created from 3L of oils, because, you know, another liter of water, plus some additives, plus the steam trapped in the soap during casting.  (Those molds were domed over with excess soap, and I tapped repeatedly to settle the bubbles.)

 
ps, I only burned myself once with a glob of molten soap!
 
 
 

Gridcoin Compiles on Xenial

I know it might not mean anything to the non-techs, and it might seem insignificant to the uber-techs.

I have successfully built the Gridcoin client on Xenial 16.04.4 with proper libraries, and confirmed it sees my testnet wallet, and overall is just working like it should. (No testplan per se.)

This is a major accomplishment for me, and makes me very excited. It takes 16 minutes to compile with 3 cores of i7-6820HQ CPU @ 2.70GHz and 3.2GB RAM.

I get some QT warnings, but they do not seem to break anything:

/usr/include/x86_64-linux-gnu/qt5/QtCore/qlogging.h:112:73: note: in expansion of macro ‘Q_ATTRIBUTE_FORMAT_PRINTF’
void critical(CategoryFunction catFunc, const char *msg, ...) const Q_ATTRIBUTE_FORMAT_PRINTF(3, 4);

Here is my build environment setup procedure.

### Xenial Build Environment References
http://wiki.gridcoin.us/Linux_guide
https://raw.githubusercontent.com/gridcoin/Gridcoin-Research/master/CompilingGridcoinOnLinux.txt
Google searches for libqt5charts for xenial (KDE Neon distribution)

https://88plug.com/linux/install-berkeley-4-8-db-libs-on-ubuntu-16-04

##########################################
### Install libqt5charts5-dev and related files from Neon LTS
cat <<'EOF' | sudo tee /etc/apt/sources.list.d/kde-neon-archive-xenial.list
deb http://archive.neon.kde.org/testing-qt xenial main
deb http://archive.neon.kde.org/user/lts xenial main
EOF
sudo apt update
sudo apt upgrade
sudo apt install ntp git build-essential curl libcurl4-openssl-dev libcurl3-dev libssl-dev libzip-dev libzip4 libdb-dev libdb++-dev \
libdb4.8-dev libdb4.8++-dev debhelper devscripts automake libtool pkg-config libprotobuf-dev protobuf-compiler libminiupnpc-dev \
autotools-dev libevent-dev bsdmainutils software-properties-common libboost-all-dev libqt5gui5 libqt5core5a libqt5dbus5 qttools5-dev \
libqrencode-dev qt-sdk qtcreator libqt5charts5-dev qt5-default qttools5-dev-tools libqt5webkit5-dev
sudo apt-get autoremove

### Change the distro back to Ubuntu from Neon
echo "DISTRIB_ID=Ubuntu" | sudo tee -a /etc/lsb-release

### Install BDB 4.8 on Xenial
sudo add-apt-repository ppa:bitcoin/bitcoin ## Stable
#sudo add-apt-repository ppa:bitcoin/rc ## testing
sudo apt update
sudo apt install libdb4.8-dev libdb4.8++-dev

##########################################
### New download
cd ~
git clone https://github.com/gridcoin/Gridcoin-Research
cd ~/Gridcoin-Research

### Optional if issues
git config --global http.sslverify false

##########################################
### Refresh download – master, hotfix, staging
cd ~/Gridcoin-Research
make clean
git fetch --all
git reset --hard origin/master

### Build Daemon
cd ~/Gridcoin-Research/src
make clean
mkdir obj
chmod 755 leveldb/build_detect_platform
make -j3 -f makefile.unix USE_UPNP=-
strip gridcoinresearchd
install -m 755 gridcoinresearchd ~/.Gridcoinresearchd/testnet/gridcoinresearchd
### The above probably wants sudo, and a target of /usr/bin

### Build GUI
cd ~/Gridcoin-Research
rm -f build/o.*
qmake gridcoinresearch.pro "USE_UPNP=-"
make -j3
strip gridcoinresearch
install -m 755 gridcoinresearch ~/.Gridcoinresearchd/testnet/gridcoinresearch
### The above probably wants sudo, and a target of /usr/bin

##########################################
### Refresh download, development
cd ~/Gridcoin-Research
make clean
git fetch --all
git reset --hard origin/development

### Build Autotools
cd ~/Gridcoin-Research
./autogen.sh
#./configure --with-incompatible-bdb ### If you do not have BDB 4.8
./configure
date ; make -j3 ; date
make install

###############################


GRC VM Template

Installed base OS

Ubuntu 16.04.4 LTS  because only LTS releases are worthy.  No auto-updates.
Copy over home directory and /etc/apt from my current TESTNET system

Split /home /usr /var /tmp into separate LVs.

lvcreate, edit /etc/fstab, mount on temp space, copy over, move old dir, reboot, remove old dir.

Shrunk root filesystem

grub
e
root=/dev/ram0 rw
^X

Wait for mdadm to finish complaining.
alias ll='ls -laF'
mkdir /mnt
lvm pvscan
lvm vgscan
lvm vgchange -a y
e2fsck -f /dev/ubuntu-vg/root
mount /dev/ubuntu-vg/root /mnt
cd /mnt
cp -a lib lib64 bin sbin /
cd /
umount /mnt
e2fsck -f /dev/ubuntu-vg/root
resize2fs -M /dev/ubuntu-vg/root
lvreduce -L 1120M /dev/ubuntu-vg/root
e2fsck -f /dev/ubuntu-vg/root
resize2fs /dev/ubuntu-vg/root
umount -a

power off VM since halt and reboot do not work.

Cleared free space

lvcreate -l 100%FREE -n deleteme ubuntu-vg
dd if=/dev/zero of=/dev/ubuntu-vg/deleteme bs=256k
lvremove /dev/ubuntu-vg/deleteme
swapoff /dev/ubuntu-vg/swap_1
dd if=/dev/zero of=/dev/ubuntu-vg/swap_1 bs=256k
mkswap /dev/ubuntu-vg/swap_1
for i in / /home /var /tmp /boot /usr ; do dd if=/dev/zero of=${i}/deleteme bs=256k ; rm ${i}/deleteme ; done
halt -p

Compacted

"\Program Files\Oracle\VirtualBox\VBoxManage.exe" modifyhd --compact "Xenial GRC Build.vdi"

Plans

Snapshot, test building.


ubuntu shrink root

/var, /usr, /home, and /tmp were all fairly easy to replace live.

/ is a special case.  I did the following:

grub
e
root=/dev/ram0 rw
^X
Wait for mdadm to finish complaining.
alias ll='ls -laF'
mkdir /mnt
lvm pvscan
lvm vgscan
lvm vgchange -a y
e2fsck -f /dev/ubuntu-vg/root
mount /dev/ubuntu-vg/root /mnt
cd /mnt
cp -a lib lib64 bin sbin /
cd /
umount /mnt
e2fsck -f /dev/ubuntu-vg/root
resize2fs -M /dev/ubuntu-vg/root
lvreduce -L 1120M /dev/ubuntu-vg/root
e2fsck -f /dev/ubuntu-vg/root
resize2fs /dev/ubuntu-vg/root
umount -a

Cryptocurrency

This is the Bitcoin write-up my BIL asked for. It’s a bit wordy, but hopefully it has info useful to you.

 

First and foremost: don’t buy more than you are willing to lose outright.
Consider bitcoins to be the highest-risk place to put money other than leaving gold coins on your front porch. This isn’t just the usual “oh, disclaimer.” This is a real warning. There is no protection in a crypto market. Zero. There are no rules against insider trading. Theft of video-game money is not really treated as theft at all.

 

Economy dynamics.
The value of anything comes from holding on to something that someone else wants. The more you want to hold it, the more they have to pay to get it.

If no one wants something, then it has zero value. If only one person wants it, then it is only valuable to that person.

Currencies, real, fiat, and crypto, have value because people feel they represent something. In the past, it represented precious metals. Now, fiat currencies represent a share of the value of the country which issues them. Stocks are a share of the value of the company that issued them. Bonds are a share of the future debt payments of the company that issued them.

Usually, bonds track with the value of the payout, plus or minus a little based on the interest payments. Stocks track about 6x the value of the company. Currency exchange rates are a little more fuzzy, but similar to stocks, they track the relative value of the contributions of that country.

“Price” is just a way of quantifying an exchange rate. Deflation causes scarcity and economic collapse if unchecked. Basically, if you have 100 coins, and never more, but population and productivity grows, that means the value of goods would continually drop with relation to the value of money. You end up in a deflationary spiral, and people starve, economies collapse, etc. This is a risk for Bitcoin, because the currency is limited to 21m coins total. As more people try to get into crypto, it drives up the “price”.

If GDP or its equivalent falls, then the opposite happens: rapid inflation. You can see this in Zimbabwe, where wheelbarrows of dollars are used, and trillion-dollar banknotes were issued years ago.

Also, you can decrease the value of your debts by inflation. Most promissory notes say, for example, 100 dollars, not “1% of your productivity.” So, if you control the value of your currency, and the promissory notes are in that currency, you can just issue another 900 dollars. Suddenly, your debt is worth 10% of what it was before.

In the end, you want slow, steady inflation, 2-4% per year, to ensure a stable economy.

 

What is Bitcoin?
Bitcoin is accepted for some real-world transactions. In the end though, it’s still basically video-game money. People use it for money laundering and other sorts of illegal activities. Its value is based on the idea of privacy, or decentralization. All it would take to destroy it is for people to realize it has no value. There is no underlying premise by which it has a real value of its own.

Bitcoin is the grand-daddy. It’s 9 years old. It uses a blockchain to track transactions. The blockchain is literally that: a chain of blocks. Think of it as a database that is purely linear. You start at block zero, and go to block infinity. There are never random writes, only appends. It’s like a transaction log without a normal database on it.

The way it works is that every transaction is held by every other node in the network. After X number of megs, you pack the transactions up, generate a cryptographic hash for that block, and issue it to the network. It’s signed by your set of keys, so you get some credit for the computation. Your block gets passed around to everyone else, but so do a bunch of competing blocks. Eventually, “the best block” is selected and added to the blockchain. Winners get the payout, and some cryptos pay out to secondary, tertiary, etc., since they confirmed that the transaction computed to the same result as what everyone else got. People who compute a different hash have their block thrown away. It’s a majority-rule type thing to ensure that no one person can insert a false transaction. The complexity of the hash increases with time (block number) so that new computers do not inflate the currency too fast.
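
As a toy illustration of the chaining (not the real Bitcoin block format, which hashes a binary header with a nonce, twice), each hash covers the previous block’s hash plus the new transactions:

# Toy example only; shows why changing an old block changes every hash after it
prev_hash=$(echo "block 0: genesis" | sha256sum | cut -d' ' -f1)
echo "prev:$prev_hash tx:alice->bob:0.5 tx:carol->dave:1.2" | sha256sum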

Along with all of that, multiple people often issue the same block at the same time with slightly different transactions in it. This is called a fork. Basically, the blockchain splits in two (or more). Forks are consolidated based on whoever has the most peers/votes. If your transaction was stuck on a fork, and did not propagate to all of the other forks, then if your fork is abandoned, your transaction rolls back. It’s completely undone. This is why transactions require confirmations. If you accepted a transaction based on zero confirmations, the other party could keep reloading their wallet, sending the same coins over and over, getting their hamburger, and walking away, with you getting no payment once the forks consolidate. But if you have 500 confirmations, you can be pretty sure that the whole network has accepted your transaction, even though there are tens of thousands of nodes online at any given time.

 

The wallet is just sets of keys.
The address is a hash of your public key. Every address has a pair of keys to sign the messages you issue, proving you are you. The public key is usable by anyone, but the private key is only usable by you (hopefully). You can sign messages like with PGP, or you sign transactions on the coin network. “Yes, I promise I am giving away 0.00001 coin to address sdoyf3486t9f032h082h. Signed, w0ty982ty0t8yeiwf”. The value of your wallet is just the tally of all of the transactions for your addresses.
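
The PGP comparison, concretely; this is gpg signing a stand-in message, not an actual coin client:

# gpg stands in for the wallet here; a coin client does the same thing with its own keys
echo "I am giving 0.00001 coin to address sdoyf3486t9f032h082h" | gpg --clearsign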

 

What are altcoins?
Every other crypto uses some of the Bitcoin source code. Most of them are literally a fork of Bitcoin. As discussed above, forks can happen naturally, like cancer, and usually are eradicated by the protocol as it merges transactions. Forks can also happen if the protocol changes: if not everyone accepts the new protocol, they cannot join that network. Or, the merge protocol can be broken. As time went on, there were forks of forks. Some forked at block zero on purpose, before being issued. Some forked at a live block due to political issues (BCH and ETC are both political forks). When forks happen later in the chain, everyone who had coins at the time of the fork has coins on both networks. Some are great, and some are scams.

 

How secure is it?
It’s not. The blockchains themselves could be secure, but from time to time some defect happens, and everyone downloads a snapshot of the blockchain to merge the forks. You’re taking it on faith that this one source has not tampered with the blockchain. There is no SEC regulation, because it’s not real currency. The IRS does consider it an asset, so you have to pay capital gains (or claim losses), so keep track of your purchases, earnings, mining, etc.

Also, many coins have foundation wallets. Basically, before the currency was issued, someone computed the first million hashes and got the starting 10% of the possible coins. Someone holds that, and can spend it, for free. Some do good things, such as pay for developers to keep working on the source code, but there is no restriction (and no confiscating funds) if they decide to go all hedonistic about it. Of the 1800 coins out there, 90% of them are scams: literally pump-and-dump schemes where a handful of people convince unknowing people to give up valuable currency for junk currency. It’s the same as if you convinced people to give you Euros for Monopoly money.

Some people consider it a speculation market. We saw this last year, with a huge peak in December. If you look at the long-term value of each crypto currency, you can see how this happens from time to time. Often, it’s one person manipulating the market, buying, making insider trades, etc, then cashing out. Some people ride the wave, and time the market, to make good money. Some people do not.

Then there are the “hacks”. A major exchange accumulates a bunch of crypto, and once they run low on cash, they are “hacked”. All of their coins are drained off. There is no insurance, so everyone is just SOL. (Mt. Gox is a major example.) Most people recommend keeping your coins in your own wallet, but if you lose that, your coins are also gone. If someone gets access to it, then they can steal all of your coins. Beyond that, Bitcoin’s blockchain is already 150GB. Lightning and SegWit promise to help, but only going forward. You can also run a partial wallet, but then you are assuming all of the old blocks are true.

 

Fees, exchange rates, etc.
In the end, you’ll find price tracks relative to Bitcoin. Bitcoin is THE currency, but it’s long in the tooth, and takes too much CPU to continue. Eventually, all of the tech advances of the top coins will fold into a new altcoin, and the old coins will fall out of favor. Maybe it will be a fork like BCH (BCH is bitcoin 1.0, BTC is 2.0. 1.0 was supposed to die, but some people didn’t jump over.)

Lastly, most credit cards are treating it as cash, so if you buy starting this month, many of them will put it at the highest interest rate, add transaction fees, etc. YMMV, but pulling from debit or from a bank transfer may be the best option going forward. Or, you can trade services and stuff privately for crypto. There are forums that let you buy and sell crypto without going through your bank. Usually it involves meeting someone in person. There are also a few ATMs here and there which can do BTC transactions, but not a whole lot of them do BTC to USD.

My thought is, if buying in, go for something you can actually purchase directly. BTC, ETH and LTC are all low-drama and directly accessible. Wait for the bottom to hit, and wait for the next bubble before selling out. But assume that the money you put in might completely evaporate while waiting. As with all investing, diversify. I have about $200 in crypto, including the electricity to fold proteins and get paid for it, plus a mutual fund with 5% bitcoin exposure. I would never want more than 5% of my holdings in crypto, and getting it in a mutual fund maybe is not as much gain, but also not as much risk. Also, so much easier to sell a mutual fund for cash (though harder to evade taxes, etc.)

 

So then, why do crypto at all?
For some people, it’s just for fun. It’s a videogame with points. Bitcoin has name recognition. Litecoin is basically all the things they should have done to Bitcoin. Ethereum has contracts. Gridcoin has scientific research. IOTA has microtransactions for small electronics. Ripple has interbank trading support. Some people just believe in the value of holding the currency, and supporting the network. (Just HODL it!) Whatever you do, research the history, vision, and mission of the currency. Look into the community, developers, etc. If you don’t get the warm fuzzies, move along.

 

END OF LINE
So much ramble, and not as organized as I’d like. Hopefully there is some value in this braindump for you.


If my brain were a CPU

5GHz, one core, 32 stage pipeline, multiple level branch predictors, out of order execution,

complete pipeline and cache flush on context switch.

64MB level-1 cache

No Level 2 cache

8 bit memory bus, with a 128-bit page select

All devices are memory mapped.

No I/O bus.

Fans and heat sinks clogged with dust.


BOINC Xenial Drivers

Any time I want to apply updates, here is the cycle I have to do to keep nVidia and ATI/AMD drivers happy for BOINC.

#####################################################
BOINC drivers on Xenial
#####################################################

### Update the kernel
apt-get update
apt-get install linux-lowlatency-hwe-16.04 linux-tools-lowlatency-hwe-16.04 linux-image-lowlatency-hwe-16.04 linux-headers-lowlatency-hwe-16.04
apt-get dist-upgrade
apt-get autoremove

### For AMD/ATI driver issues:
Testing: https://launchpad.net/~paulo-miguel-dias/+archive/ubuntu/mesa/
OR stable https://launchpad.net/~paulo-miguel-dias/+archive/ubuntu/pkppa
apt-get install ppa-purge
ppa-purge ppa:paulo-miguel-dias/pkppa
ppa-purge ppa:paulo-miguel-dias/mesa
add-apt-repository ppa:paulo-miguel-dias/mesa
apt-get update
apt-get dist-upgrade
apt-get install boinc-client-opencl clinfo mesa-opencl-icd

### For nvidia driver issues:
apt-get purge xserver-xorg-video-nouveau
apt-get purge nvidia*
apt-get install boinc-client-nvidia-cuda
update-alternatives --config gl_conf
update-alternatives --config x86_64-linux-gnu_gl_conf
ldconfig
update-initramfs -u
nvidia-xconfig

### Virtualbox crashes
apt-get purge virtualbox*
https://www.virtualbox.org/wiki/Linux_Downloads

### Reboot for the drivers if necessary
shutdown -r now

If I were running an X server, there is nvidia-xconfig, but I run headless (mostly) and connect to boincmgr over an SSH tunnel.
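
For reference, that tunnel is just local port forwarding of the BOINC RPC port (31416 by default); the host name here is made up:

# Forward the local manager to the headless client's RPC port
ssh -L 31416:localhost:31416 user@boinc-host
# then point boincmgr at localhost, using the password from gui_rpc_auth.cfg on the remote side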


What is Evil?

What is evil? It’s not some force, roaming the land. It is a dark spot inside each of us. It’s an absence of love.

Be a better person. Don’t take joy in someone else’s suffering for their faults. Protect yourself, and those who are unable to protect themselves, and fill the rest of your heart with love and compassion.

Yes, we all get angry, frustrated, and generally are all horrible on the inside sometimes. That’s okay, but it’s not okay to wallow in it. It’s our responsibility to interrupt those feelings, and try to remember better ones. Try to live in a way that creates those better ones.


AIX JFS2 autoresize

computersarefun put in a request for AIX to auto-grow/shrink filesystems.
Ref: https://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe&CR_ID=114789

This seems more like a monitoring thing than an operating system thing.
Also, handling this as a thin LUN is probably better where possible.
Here is an example script.

Potential improvements:

  • Notifications on exceptions
  • Config file to track different settings per filesystem
  • Also check iused / ifree to handle tiny-files
  • Run as a daemon vs from cron.
  • Explicit lists of filesystems, or include/exclude lists
#!/bin/ksh
# Run this from cron every minute to automatically resize JFS2 filesystems.
# Incorrect limits could cause size flapping for small filesystems.
# We skip things we cannot reduce.

MINFREEPCT=10
MAXFREEPCT=70
MINPPFREE=10

LVLIST=`mount | grep jfs2 | grep /dev/ | awk '{print $1;}' | cut -f 3 -d /`
for lv in $LVLIST ; do

  ## Pull the df line for this LV; skip anything df does not report.
  df -gv 2>/dev/null | grep "/$lv " | read device size used free pct iused ifree ipct mountpoint || continue
  ## Free-space percentage, derived from the %used column to stay integer-only.
  FREEPCT=$(( 100 - ${pct%\%} ))
  VG=`lslv $lv 2>/dev/null | grep "VOLUME GROUP:" | awk '{print $6;}'`
  PPSIZE=`lsvg $VG 2>/dev/null | grep 'PP SIZE' | awk '{print $6;}'`
  [[ $PPSIZE -gt 0 ]] || continue
  #
  ## Below the low-water mark: grow by one PP if the VG has PPs to spare.
  if [[ $FREEPCT -lt $MINFREEPCT ]] ; then
     FREEPPS=`lsvg $VG | grep FREE | awk '{print $6;}'`
     [[ $FREEPPS -gt $MINPPFREE ]] && chfs -a size=+1 $mountpoint
     continue
  fi
  #
  ## Above the high-water mark: shrink by one PP.
  [[ $FREEPCT -gt $MAXFREEPCT ]] && chfs -a size=-${PPSIZE}M $mountpoint
  #

done
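
To run it the way the header comments suggest, the crontab entry looks something like this (the path is wherever you drop the script):

* * * * * /usr/local/bin/jfs2_autoresize.ksh >/dev/null 2>&1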


Security Defect in Intel, ARM and AMD processors

THE RISK:
The defect allows a user process to read any system memory.
A VM can read memory from the host or another guest in some environments.

WHAT IS AFFECTED:
This does NOT affect POWER/PPC architecture.
Only some of this affects AMD, and only in some modes.
Almost every ARM and Intel processor since 1995 is affected.
That includes desktops, laptops, servers, cellphones, routers, automobiles with Sync/Onstar/autopilot, etc.

DISCOVERY:
This defect was reported in June 2017 but, due to its pervasiveness, has been embargoed.
It is only fully described now because patch notes leaked the problem.

THE FIX:
The actual fix would be replacement of the affected CPUs with new silicon, which does not exist yet.
There is a partial software workaround which decreases system performance.

TECHNICAL:
The issue is that the processor does not perform access checking prior to loading data into L1 cache.
Due to this design issue, data can be forced into L1 cache, and read, before access is denied by the TLB.
It’s fairly slow, at around 2k/second, but a long-running process can harvest everything.

Hardware Statuses:
• ARM has provided workarounds to vendors, but it’s up to them to implement
• Intel’s CEO sold off as much of his stock as possible last year after glowing projections.
• Not a peep from AMD.
• POWER/PPC is not affected.

Software Statuses:
• Windows included a partial workaround in the November security rollup.
• MacOS released a partial workaround in December’s 10.13.2.
• Linux included a partial workaround in mainline kernels 4.14.11 and 4.15 (a quick check is sketched after this list).
• The workarounds decrease performance between 1% and 45% depending on the workload.
• Cloud providers are scheduling maintenance January 2018.
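
On Linux, a quick way to see whether the workaround is active; these interfaces only exist on kernels new enough to carry the patches, so an empty result does not prove anything either way:

# Newer kernels expose per-vulnerability status files
grep . /sys/devices/system/cpu/vulnerabilities/* 2>/dev/null
# Kernels with KPTI log a "page table isolation" line at boot
dmesg | grep -i isolation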

More Reading:
• Community: https://spectreattack.com
• Google: https://googleprojectzero.blogspot.com/2018/01/reading-privileged-memory-with-side.html
• Workaround: https://en.wikipedia.org/wiki/Kernel_page-table_isolation
• AMD: https://www.amd.com/en/corporate/speculative-execution
• A better write-up: https://techcrunch.com/2018/01/03/kernel-panic-what-are-meltdown-and-spectre-the-bugs-affecting-nearly-every-computer-and-device/
• Outlet that broke the embargo: https://www.theregister.co.uk/2018/01/02/intel_cpu_design_flaw/


tsm server status

I ordered the new backup server on October 27.
Initial setup gave intermittent app crashes, so it was not ready to go live yet.
I ran BOINC on it for a day, and at one point, all tasks died at once.

Syslog showed EDAC errors starting 11 days after I got the system, calling out CPU#1Channel#2_DIMM#0

This matches CPU1, DIMM1 on the board (ie, DIMMs are ordered backwards in Linux from printed labels).

I swapped all of CPU1 DIMMS with CPU0 DIMMs to troubleshoot.

Problem went away. 99% chance this was just a slightly loose DIMM from shipping.

Aside from that, the system has been awesome. I’ve run DB2, Spectrum Protect, and BOINC on here. For BOINC, the fans stay on low at 66% and 50% on a warm day, and 66%/66% on a cool day.

TLDR – remember to re-seat your DIMMs after shipping. System is stable otherwise.

Here are logs and system queries:

Nov 7 15:00:43 tsm kernel: [929582.997825] EDAC MC1: 1 CE error on CPU#1Channel#2_DIMM#0 (channel:2 slot:0 page:0x0 offset:0x0 grain:8 syndrome:0x0)
...
Nov 14 19:59:05 tsm kernel: [1552272.728748] EDAC MC1: 7112 CE error on CPU#1Channel#2_DIMM#0 (channel:2 slot:0 page:0x0 offset:0x0 grain:8 syndrome:0x0)

/bin/bash# ll -d /sys/devices/system/edac/mc/mc1/dimm*
drwxr-xr-x 3 root root 0 Nov 14 20:07 /sys/devices/system/edac/mc/mc1/dimm0/
drwxr-xr-x 3 root root 0 Nov 14 20:07 /sys/devices/system/edac/mc/mc1/dimm3/
drwxr-xr-x 3 root root 0 Nov 14 20:07 /sys/devices/system/edac/mc/mc1/dimm6/

/bin/bash# cat /sys/devices/system/edac/mc/mc1/dimm6/dimm_label
CPU#1Channel#2_DIMM#0

/bin/bash# cat /sys/devices/system/edac/mc/mc1/dimm6/dimm_location
channel 2 slot 0

/bin/bash# cat /sys/devices/system/edac/mc/mc1/dimm6/dimm_mem_type
Registered-DDR3

/bin/bash# cat /sys/devices/system/edac/mc/mc1/dimm6/size
8192

/bin/bash# cat /sys/devices/system/edac/mc/mc1/mc_name
i7 core #1

/bin/bash# cat /sys/devices/system/edac/mc/mc1/ce_count
1197602807

/bin/bash# cat /sys/devices/system/edac/mc/mc0/mc_name
i7 core #0

/bin/bash# cat /sys/devices/system/edac/mc/mc0/ce_count
0

/bin/bash# uptime

20:15:26 up 17 days, 23:28,  2 users,  load average: 0.01, 0.40, 2.64

Power off and back on, and now BIOS shows:

209-Memory warning condition (WARN_DQS_TEST) detected slot CPU1 DIMM1
209-Memory warning condition (WARN_DQS_TEST) detected slot CPU1 DIMM1
209-Memory warning condition (rd dq dqs) detected slot CPU1 DIMM1
203-Memory module failed self-test and failing rank was disabled slot CPU1 DIMM1

The following configuration options were automatically updated:

Memory:40960 MB


Using ESD precautions, I moved all DIMMs from CPU1 bank to CPU0 bank.
All errors went away.

Loose DIMM. False alarm.
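
If you want to keep an eye on the counters going forward, the sysfs paths above make it a one-liner:

# Print corrected-error counts per memory controller once a minute
watch -n 60 'grep . /sys/devices/system/edac/mc/mc*/ce_count'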


Protect initial install

This is happiness…

tsminst1@tsm:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 16.04.3 LTS
Release: 16.04
Codename: xenial

/bin/bash# for i in /dev/sd? ; do smartctl -a $i ; done | grep 'Device Model'
Device Model: Samsung SSD 850 EVO 250GB
Device Model: WDC WD30EFRX-68EUZN0
Device Model: Samsung SSD 850 EVO 250GB
Device Model: WDC WD30EFRX-68EUZN0
Device Model: WDC WD30EFRX-68EUZN0

tsminst1@tsm:~$ dsmserv format dbdir=/tsm/db01,/tsm/db02,/tsm/db03,/tsm/db04,/tsm/db05,/tsm/db06,/tsm/db07,/tsm/db08 \
> activelogsize=8192 activelogdirectory=/tsm/log archlogdirectory=/tsm/logarch

ANR7800I DSMSERV generated at 11:32:48 on Sep 19 2017.

IBM Spectrum Protect for Linux/x86_64
Version 8, Release 1, Level 3.000

Licensed Materials – Property of IBM

(C) Copyright IBM Corporation 1990, 2017.
All rights reserved.
U.S. Government Users Restricted Rights – Use, duplication or disclosure
restricted by GSA ADP Schedule Contract with IBM Corporation.

ANR7801I Subsystem process ID is 29286.
ANR0900I Processing options file /home/tsminst1/dsmserv.opt.
ANR0010W Unable to open message catalog for language en_US.UTF-8. The default language message catalog will be used.
ANR7814I Using instance directory /home/tsminst1.
ANR3339I Default Label in key data base is TSM Server SelfSigned SHA Key.
ANR4726I The ICC support module has been loaded.
ANR0152I Database manager successfully started.
ANR2976I Offline DB backup for database TSMDB1 started.
ANR2974I Offline DB backup for database TSMDB1 completed successfully.
ANR0992I Server’s database formatting complete.
ANR0369I Stopping the database manager because of a server shutdown.


New data protection

Upgrading TSM server from Q9650 Core 2 Quad 3.0GHz, 8GB DDR2 on Win 2008R2.

New system is HP Z600, two-socket, 6-core 2.66GHz Xeon X5650 and 48GB of RAM. Wattage is the same per socket, but two sockets now. 3x the cores, 4x the performance.

SSDs for DB and Log are also moving to EVO 850 from Corsair M100. I’ll set up a container pool to replace the dedupe file class, and put that on 3x 3TB RAID5 instead of 2x RAID1.

OS will be Ubuntu 16.04.2 LTS. I’d like to just use Debian 9.1, but Debian and long-term-support seem to not be synonymous. I’d hate to run a patch update and have everything break, then fight with debian testing repo to try to get it all back to normal. Plus, I have no Ubuntu boxes, only Debian. It’ll give me a chance to see what operational differences I run into.

Old TSM is 6.4. New will be “Spectrum Protect” 8.1.3. Yes, the billions spent to rebrand to the same name as Charter Cable’s rebrand really seems like money well spent.

Anyway, since I lost the offsite replication provider for the dedupe file pool, and it was having trouble keeping up anyway, this will let me change to server-side encryption and object storage. We’ll see which provider wins out on price once everything is rededuped properly.

If the fan noise is not too bad, maybe this platform can be considered for a low-cost upgrade to the kids’ game machines. Though, these are heavy, with 2 big handles on the top.

Also, really, something new enough to have USB3 on the motherboard is probably better. I have some laptops picked out, but that’s re-buying every component, including ones that are presently decent. *sigh*


Calories per Mile

This is a SWAG for calories burned cycling 12-15 mph (a small script version follows the list):

  • Divide your feet climbed by 10.
  • Divide again by your average MPH.
  • Add that to your total miles.
  • Multiply the new number by the weight in pounds of you, your bike, and everything you’re carrying.
  • Multiply the new number by 0.105 (or divide by 9.5).
  • (Use 0.115 or 8.7 if you only know your own naked wake-up weight.)
  • That is pretty close to your calories burned for the ride.
  • Baseline here is me, 6’5″, anywhere between 250 and 290 pounds plus bike weight (any bike).
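
Here is that SWAG as a one-liner; the ride numbers are just an example:

# Example ride: 20 miles, 800 ft of climb, 14 mph average, 310 lb total (rider + bike + gear)
awk -v miles=20 -v feet=800 -v mph=14 -v lbs=310 \
  'BEGIN { printf "~%.0f kcal\n", (miles + (feet/10)/mph) * lbs * 0.105 }'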

BMR is not a part of this SWAG:

  • BMR is how much you burn in 24 hours of sleeping.
  • Most people are around 9kcal per pound per day.
  • BMR is not based on your activity level (see TDEE).
  • BMR is based on your microcellular efficiency, and is influenced by hormones.
  • If you are on severe caloric restriction, it goes down.
  • Thyroid issues can affect this either way.
  • Baseline here is me, at 285 pounds, and averaging 2550 kcal per day.

Faster speeds pick up disproportionately more wind resistance (rough numbers in the sketch after this list).

  • Twice the airspeed has four times the wind drag.
  • Higher density altitude has proportionally less drag.
  • Shorter and narrower-shouldered people have less wind drag.
  • Fatter people are slightly more aerodynamic, so the increased wind profile is not THAT much of an issue.
  • Cycling 10mph into a 5mph headwind has as much wind drag as cycling 20mph with a 5mph tailwind.
  • 12-16mph is the 50% transition for wind vs other factors on flat ground. (13mph for me at 6 sqft)
  • Baseline here is me, with about 6 square feet of frontal area, about 22 Watts at 10mph, and about 150 Watts at 20mph, just for wind.
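
Rough check of those wind numbers; the drag coefficient is my assumption (around 0.65 for an upright rider), not a measured value:

# P = 0.5 * rho * Cd * A * v^3, with A = 6 sq ft converted to m^2
awk 'BEGIN {
  A = 6 * 0.0929; Cd = 0.65; rho = 1.225
  for (mph = 10; mph <= 20; mph += 10) {
    v = mph * 0.44704
    printf "%d mph: ~%.0f W just for wind\n", mph, 0.5 * rho * Cd * A * v^3
  }
}'

That lands close to the 22 W and 150 W baseline above.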

Rolling resistance is a big part of drag.

  • Increases linearly with speed (2x speed is 2x the rolling drag).
  • Lower weight is better (because tiny bumps have to push you UP over them).
  • Race tires can be half the CRR of average tires.
  • Wider tires are better by around 1% per mm with 23mm as baseline.
  • Baseline is me, at 310 total, 31W at 10mph, or 61W at 20mph on 1% grade.

Routes with less uphill than downhill will cost fewer calories.

  • Increases linearly with speed (2x speed is 2x the gravity drag).
  • 1% uphill is 2x the drag of rolling resistance. 2% is 4x.
  • Lower weight people do way better on both gravity and CRR.
  • Baseline is me, at 310 total, 62W at 10mph vs 124W at 20mph on 1% grade.

Good links:


Docker Debian autoinstall fails

Debian (and Ubuntu and others) use apt, aptitude, apt-get, and dpkg. apt currently requires the Release keys to match in a complex way. Mondo, Docker, and many other projects have problems making a repo actually work. The telltale failure is similar to this:

W: The repository 'https://apt.dockerproject.org/repo debian-stretch Release' does not have a Release file.
N: Data from such a repository can't be authenticated and is therefore potentially dangerous to use.
N: See apt-secure(8) manpage for repository creation and user configuration details.
E: Failed to fetch https://apt.dockerproject.org/repo/dists/debian-stretch/testing/binary-i386/Packages
E: Some index files failed to download. They have been ignored, or old ones used instead.
[root@ns1:/etc/apt/sources.list.d]

 

You can manually work around this by changing your sources.list to use HTTP instead of HTTPS, but scripts such as Ubiquiti’s Universal Network Management Server installer will replace that:

curl -fsSL https://raw.githubusercontent.com/Ubiquiti-App/UNMS/master/install.sh > /tmp/unms_install.sh && sudo bash /tmp/unms_install.sh
branch=master
version=0.10.3
Downloading installation package for version 0.10.3.
Setting VERSION=0.10.3
Download and install Docker
# Executing docker install script, commit: 490beaa
+ sh -c 'apt-get update -qq >/dev/null'
+ sh -c 'apt-get install -y -qq apt-transport-https ca-certificates curl software-properties-common >/dev/null'
+ sh -c 'curl -fsSL "https://download.docker.com/linux/debian/gpg" | apt-key add -qq - >/dev/null'
Warning: apt-key output should not be parsed (stdout is not a terminal)
+ sh -c 'echo "deb [arch=amd64] https://download.docker.com/linux/debian stretch edge" > /etc/apt/sources.list.d/docker.list'
+ '[' debian = debian ']'
+ '[' stretch = wheezy ']'
+ sh -c 'apt-get update -qq >/dev/null'
W: The repository 'https://download.docker.com/linux/debian stretch Release' does not have a Release file.
E: Failed to fetch https://download.docker.com/linux/debian/dists/stretch/edge/binary-amd64/Packages
E: Some index files failed to download. They have been ignored, or old ones used instead.

 

A more stable workaround is to force apt back into the old mode of not caring if the Release certs are perfectly matched to the file server:

cat <<'EOF' >>/etc/apt/apt.conf.d/01docker
Acquire::https::apt.dockerproject.org::Verify-Peer "false";
Acquire::https::download.docker.com::Verify-Peer "false";
EOF

 

Now, the install works fine:

curl -fsSL https://raw.githubusercontent.com/Ubiquiti-App/UNMS/master/install.sh > /tmp/unms_install.sh \
  && sudo bash /tmp/unms_install.sh
branch=master
version=0.10.3
Downloading installation package for version 0.10.3.
Setting VERSION=0.10.3
Download and install Docker
# Executing docker install script, commit: 490beaa
+ sh -c 'apt-get update -qq >/dev/null'
+ sh -c 'apt-get install -y -qq apt-transport-https ca-certificates curl software-properties-common >/dev/null'
+ sh -c 'curl -fsSL "https://download.docker.com/linux/debian/gpg" | apt-key add -qq - >/dev/null'
Warning: apt-key output should not be parsed (stdout is not a terminal)
+ sh -c 'echo "deb [arch=amd64] https://download.docker.com/linux/debian stretch edge" > /etc/apt/sources.list.d/docker.list'
+ '[' debian = debian ']'
+ '[' stretch = wheezy ']'
+ sh -c 'apt-get update -qq >/dev/null'
+ sh -c 'apt-get install -y -qq docker-ce >/dev/null'
+ sh -c 'docker version'
Client:
 Version:      17.09.0-ce
 API version:  1.32
 Go version:   go1.8.3
 Git commit:   afdb6d4
 Built:        Tue Sep 26 22:42:09 2017
 OS/Arch:      linux/amd64

Server:
 Version:      17.09.0-ce
 API version:  1.32 (minimum version 1.12)
 Go version:   go1.8.3
 Git commit:   afdb6d4
 Built:        Tue Sep 26 22:40:48 2017
 OS/Arch:      linux/amd64
 Experimental: false
If you would like to use Docker as a non-root user, you should now consider
adding your user to the "docker" group with something like:

  sudo usermod -aG docker your-user

Remember that you will have to log out and back in for this to take effect!

WARNING: Adding a user to the "docker" group will grant the ability to run
         containers which can be used to obtain root privileges on the
         docker host.
         Refer to https://docs.docker.com/engine/security/security/#docker-daemon-attack-surface
         for more information.
Docker version: 17.09.0
./install-full.sh: line 470: ((: 17 < 1
        || (17 == 1 && 09: value too great for base (error token is "09")
Download and install Docker compose.
Docker Compose version: 1.9
Creating user unms.
Skipping 0.8.0 permission fix
Preparing templates
Creating docker-compose.yml
Pulling docker images.
Pulling redis (redis:3.2.8-alpine)...
3.2.8-alpine: Pulling from library/redis
cfc728c1c558: Pull complete
8eda5cfd7e0a: Pull complete
8acb752a319b: Pull complete
955021cea791: Pull complete
d301d906247c: Pull complete
ff438d9e11c6: Pull complete
Digest: sha256:262d8bd214e74cebb3a0573e0f3a042aa3ddade36cf39a4891dd1b05b636bc55
Status: Downloaded newer image for redis:3.2.8-alpine
Pulling postgres (postgres:9.6.1-alpine)...
9.6.1-alpine: Pulling from library/postgres
0a8490d0dfd3: Pull complete
b6475055d17e: Pull complete
ba55801edf3d: Pull complete
f132014bbab8: Pull complete
9775497ec4a5: Pull complete
678be380896e: Pull complete
31e4998cc9ec: Pull complete
Digest: sha256:fa48df82694141793fb0cd52b9a93a3618ba03e5814e11dbf0dd43797f4d4cf7
Status: Downloaded newer image for postgres:9.6.1-alpine
Pulling rabbitmq (rabbitmq:3)...
3: Pulling from library/rabbitmq
bc95e04b23c0: Pull complete
2e65f0b00e4c: Pull complete
f2bd80317989: Pull complete
7b05ca830283: Pull complete
0bb5a4bbcce5: Pull complete
cf840d8999f6: Pull complete
be339ca44883: Pull complete
ce35cd9f9b5b: Pull complete
a4fe32a0a00d: Pull complete
77408ca9e94e: Pull complete
db03407a1aba: Pull complete
Digest: sha256:9a0de56d27909c518f448314d430f8eda3ad479fc459d908ff8b281c4dfc1c00
Status: Downloaded newer image for rabbitmq:3
Pulling unms (ubnt/unms:0.10.3)...
0.10.3: Pulling from ubnt/unms
627beaf3eaaf: Pull complete
5fc32359ecb8: Pull complete
2b99ae07dd66: Pull complete
99c9d1420b38: Pull complete
b65b0ba413b8: Pull complete
86bd816c9566: Pull complete
32ebfd822bb4: Pull complete
Digest: sha256:5dc99a77ee8bb4d09f02da715ec3142283ce44d5e91b8f515b5694ffb25d6c3c
Status: Downloaded newer image for ubnt/unms:0.10.3
Checking available ports
Port 80 is already in use, please choose a different HTTP port for UNMS. [8080]:
Port 8080 is already in use, please choose a different HTTP port for UNMS. [8080]: 8888
Port 443 is already in use, please choose a different HTTPS port for UNMS. [8443]:
Port 8443 is already in use, please choose a different HTTPS port for UNMS. [8443]: 8883
Creating data volumes.
Will mount /home/unms/data
Creating docker-compose.yml
Deploying templates
Writing config file
no crontab for unms
no crontab for unms
Deleting obsolete firmwares...
Downloading new firmwares...
Downloading e50-1.9.7-hotfix.3.170831.tar
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 74.6M  100 74.6M    0     0  5502k      0  0:00:13  0:00:13 --:--:-- 5870k
Downloading e100-1.9.7-hotfix.3.170831.tar
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 80.8M  100 80.8M    0     0  5692k      0  0:00:14  0:00:14 --:--:-- 5859k
Downloading e200-1.9.7-hotfix.3.170831.tar
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 80.7M  100 80.7M    0     0  5725k      0  0:00:14  0:00:14 --:--:-- 5873k
Downloading e1000-1.9.7-hotfix.3.170831.tar
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 81.7M  100 81.7M    0     0  5705k      0  0:00:14  0:00:14 --:--:-- 5867k
Downloading e600-1.0.2.170728.tar
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 86.8M  100 86.8M    0     0  5738k      0  0:00:15  0:00:15 --:--:-- 5871k
Downloading SFU-1.2.0.171003.bin
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 15.0M  100 15.0M    0     0  4663k      0  0:00:03  0:00:03 --:--:-- 4664k
Downloading XC-8.3.2.170901.bin
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 9046k  100 9046k    0     0  5219k      0  0:00:01  0:00:01 --:--:-- 5216k
Downloading XC-8.3.2-cs.170901.bin
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 9046k  100 9046k    0     0  5218k      0  0:00:01  0:00:01 --:--:-- 5219k
Downloading WA-8.3.2.170901.bin
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 9028k  100 9028k    0     0  5327k      0  0:00:01  0:00:01 --:--:-- 5329k
Downloading WA-8.3.2-cs.170901.bin
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 9028k  100 9028k    0     0  5006k      0  0:00:01  0:00:01 --:--:-- 5004k
Downloading TI-6.0.7.170908.bin
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 7083k  100 7083k    0     0  4917k      0  0:00:01  0:00:01 --:--:-- 4915k
Downloading TI-6.0.7-cs.170908.bin
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 7083k  100 7083k    0     0  5181k      0  0:00:01  0:00:01 --:--:-- 5185k
Downloading XM-6.0.7.170908.bin
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 7389k  100 7389k    0     0  5218k      0  0:00:01  0:00:01 --:--:-- 5222k
Downloading XM.6.0.7-cs.170908.bin
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 7389k  100 7389k    0     0  5963k      0  0:00:01  0:00:01 --:--:-- 5959k
Downloading XW.v6.0.7.170908.bin
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 7227k  100 7227k    0     0  5224k      0  0:00:01  0:00:01 --:--:-- 5225k
Downloading XW-6.0.7-cs.170908.bin
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 7227k  100 7227k    0     0  5075k      0  0:00:01  0:00:01 --:--:-- 5071k
Starting docker containers.
Creating network "unms_internal" with the default driver
Creating network "unms_public" with the default driver
Building fluentd
Step 1/6 : FROM fluent/fluentd:v0.12-latest
v0.12-latest: Pulling from fluent/fluentd
019300c8a437: Pull complete
d30279f73a02: Pull complete
fd39bd5a5dae: Pull complete
4dacb8d2bb26: Pull complete
963e933724db: Pull complete
8b4dd4e99009: Pull complete
59bedb222c2c: Pull complete
Digest: sha256:9b10ed70251fda1cd91c92f07a3ae74059adb1bdad6fc51cfcfe42272a9e78e8
Status: Downloaded newer image for fluent/fluentd:v0.12-latest
 ---> 4fce39752458
Step 2/6 : USER root
 ---> Running in 8f315349c16e
 ---> 84398611a0ad
Removing intermediate container 8f315349c16e
Step 3/6 : COPY entrypoint.sh /
 ---> 157af3140182
Step 4/6 : RUN apk add --no-cache --update su-exec     && apk add --no-cache dumb-init --repository http://dl-cdn.alpinelinux.org/alpine/edge/community/     && chmod +x /entrypoint.sh
 ---> Running in fbdef19d9e1a
fetch http://dl-cdn.alpinelinux.org/alpine/v3.5/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.5/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.5/community/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.5/community/x86_64/APKINDEX.tar.gz
OK: 27 MiB in 24 packages
fetch http://dl-cdn.alpinelinux.org/alpine/edge/community/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.5/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.5/community/x86_64/APKINDEX.tar.gz
OK: 27 MiB in 24 packages
 ---> e82e4e7e156f
Removing intermediate container fbdef19d9e1a
Step 5/6 : ENTRYPOINT /entrypoint.sh
 ---> Running in 3a0455e845ef
 ---> 7581bd63c44f
Removing intermediate container 3a0455e845ef
Step 6/6 : CMD fluentd -c /fluentd/etc/$FLUENTD_CONF -p /fluentd/plugins $FLUENTD_OPT
 ---> Running in 13c6baad173b
 ---> 97647e174228
Removing intermediate container 13c6baad173b
Successfully built 97647e174228
Successfully tagged unms_fluentd:latest
WARNING: Image for service fluentd was built because it did not already exist.
 To rebuild this image you must use `docker-compose build` or `docker-compose up --build`.
Creating unms-fluentd
Creating unms-redis
Creating unms-rabbitmq
Creating unms-postgres
Creating unms
Removing old images
Current image: ubnt/unms:0.10.3
All UNMS images: ubnt/unms:0.10.3
Images to remove: ''
No old images found
Waiting for UNMS to start
CONTAINER ID      IMAGE                   COMMAND                  CREATED             STATUS           PORTS                                            NAMES
6e814af4ffc5      ubnt/unms:0.10.3        "/usr/bin/dumb-ini..."   8 seconds ago       Up 3 seconds     0.0.0.0:8888->8080/tcp, 0.0.0.0:8883->8443/tcp   unms
01f61e7d9ae8      postgres:9.6.1-alpine   "/docker-entrypoin..."   10 seconds ago      Up 7 seconds                                                      unms-postgres
99261993de75      rabbitmq:3              "docker-entrypoint..."   10 seconds ago      Up 6 seconds                                                      unms-rabbitmq
21bb0d5db0e1      redis:3.2.8-alpine      "docker-entrypoint..."   10 seconds ago      Up 7 seconds                                                      unms-redis
cdb0b878b633      unms_fluentd            "/entrypoint.sh /b..."   11 seconds ago      Up 1 second      5140/tcp, 127.0.0.1:24224->24224/tcp             unms-fluentd
UNMS is running
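
For a quick sanity check after a run like the one above, something along these lines works (a minimal sketch; it assumes the ports chosen during this install, 8888 for HTTP and 8883 for HTTPS, and that curl is available on the docker host):

  # Show just the containers the installer started
  docker ps --filter "name=unms"

  # The web UI should answer on the HTTPS port picked above (8883 here).
  # -k skips certificate verification, since a fresh install is likely still on a self-signed cert.
  curl -kI https://localhost:8883/

  # The plain-HTTP port (8888 here) is the one remapped from container port 8080.
  curl -I http://localhost:8888/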

WHAT DO YOU GET WHEN YOU MULTIPLY SIX BY NINE

Today, at 8:22am, I turned 42 years old. I set up a mini-monitor and airflow for the new server location. I’m really happy with all that. Planted some squash. Watched about half of “Kill Switch”, an FPS-style sci-fi movie. I made some lemon battery videos. I had lunch with my sister. I had cupcakes and coffee with Erica and Khai. I sync’d up with my team lead over the phone. All of that happened in a different order. Lastly, I’m about to go watch Max’s marching band performance, to see their progression. This has been a nice, chill day so far.



Lemon Battery Videos

Construction of a lemon battery – for kids
https://youtu.be/7tnaqfvU3r0

Info about voltage of a lemon battery for interested kids.
https://youtu.be/GXsJ1YESo7g

Juice batteries in glasses – magnesium cell has more power – for older kids.
https://youtu.be/hSVYXLlBq7w

Hydrogen Gas Production – just a fun clip for us to see bubbles.
https://youtu.be/99CUidqNoW4


IBM Download Director is a beast

I’m sure this will all change in a week, but until then, here is a reference for how to uninstall Download Director, or forcibly reinstall it.

There was no support, no Google help, no IBM search help, etc. After all the usual steps, I went to a system without an existing DD installation.

You can force-reinstall Download Director from here:
https://www-03.ibm.com/isc/esd/dswdown/dldirector/installation_en.html

You can manually run DD here, but I don’t know how to feed it packages:
https://www14.software.ibm.com/dldirector/IBMDownloadDirectorApp.jnlp

There is info on how to uninstall DD here:
https://www-03.ibm.com/isc/esd/dswdown/dldirector/uninstall_en.html

I’m sure these URLs will change in the next forced web redesign, but for now, this should help people with broken DD installs.

Reinstall info is obscured in convoluted JavaScript, but here’s the uninstall information:

Windows
How to uninstall

  • Open a new cmd window, paste the following command and hit enter:
  • reg DELETE HKCU\Software\Classes\ibmddp /f && rmdir %HOMEPATH%\AppData\Local\IBM\DD /S /Q
  • You should see a “The operation completed successfully.” message.

How to verify if Download Director is installed

  • Open a new cmd window, paste the following command and hit enter:
  • (reg query HKCU\Software\Classes\ibmddp 1> NUL 2>&1 && IF EXIST %HOMEPATH%\AppData\Local\IBM\DD\DownloadDirectorLauncher.exe (echo DD Installed) else (echo DD not installed)) || echo DD not installed
  • You should see either “DD installed” or “DD not installed”.

Linux
How to uninstall

  • Open a new terminal window, paste the following command and hit enter:
  • xdg-mime uninstall ~/.local/share/applications/ibm-downloaddirector.desktop && rm -rf ~/.local/share/applications/ibm-downloaddirector.desktop ~/.config/download-director/
  • If no errors are displayed, the operation completed successfully.

How to verify if Download Director is installed

  • Open a new terminal window, paste the following command and hit enter:
  • [[ -f ~/.local/share/applications/ibm-downloaddirector.desktop || -f ~/.config/download-director/DownloadDirectorLauncher.sh ]] && echo "DD installed" || echo "DD not installed"
  • You should see either “DD installed” or “DD not installed”.

Mac
How to uninstall

  • Open the “Terminal” app, paste the following command and hit enter:
  • rm -rf ~/Applications/DownloadDirectorLauncher.app/
  • If no errors are displayed, the operation completed successfully.

How to verify if Download Director is installed

  • Open the “Terminal” app, paste the following command and hit enter:
  • [[ -d ~/Applications/DownloadDirectorLauncher.app/ ]] && echo "DD installed" || echo "DD not installed"
  • You should see either “DD installed” or “DD not installed”.

Why I wrote this up:
I find myself stuck with IBM due to the value of legacy skills vs transitioning to newer skills.
Periodically, IBM makes changes to their webpage, or code download system.
Often, these leave things inconsistent (claims that HTTP can be used, but it’s no longer available).
Worse, forced tools will stop working, and the IBM solution is to wipe your entire browser config and start over.

IBM has decided it’s better to force people to use Download Director instead of any standard protocol.
IBM’s mantra is “It worked for me in the lab, so if it doesn’t work for you, tough patooties.”
There is no escalation to people who make decisions. This has been an ongoing issue for a decade.
No one cares, except a few of the ubertechs supporting things, but they have no sway.

I’ve been using HTTP for a while, but they pulled that, so I had to use DD.
This time, DD gave me an error that JavaWS could not be started.
So I uninstalled all Java, reinstalled the newest, and DD said I had no Java installed.

There were no Google hits to help, no IBM pages to help, and IBM search is useless as always.
Of the pages I found, none of them had contact forms, because that costs money.
There is no uninstall tool for Download Director.
There is no Browser Extension, no OS uninstall tool.
Removing the AppData folder does not help.

I went to a clean system, and wrote down all that I could find during a new code download attempt.
There is actually a webpage for this, but it is not indexed anywhere. That’s linked above.
That’s what this post is about.

Note that this is not acceptable in any way, and is one of the many reasons people are leaving IBM for open standards.
It’s not about “The Cloud”. It’s about IBM having so many layers between the decision-makers and the workers that they are out of touch. They have no idea how to be a tech business anymore, and are run by people who are content to gut the reputation of IBM so as to report a short-term improvement in gross profit. Zero interest in the long term.


Omnitech DP server lives

Ten days ago, the drive enclosure for the TSM server failed during a storm. The enclosure is an RSV-S5 from 2010. The PSU died, and it seems to be a specialty part that costs $250. A newer version of the enclosure is $180 from Sans Digital. This is a bulk data server, so a 4-bay box was fine. I picked up a Mediasonic Probox 4-bay JBOD with eSATA and USB3 ports. It’s a faster port multiplier, has better functionality, and takes half the volume on the server shelf.

I still plan to migrate everything to Linux on Spectrum Protect 8, with container pools, and maybe use Glacier for off-site storage. This is compounded by CrashPlan ditching their non-business plans, and never being able to finish a sync anyway. I really need a better way to store off-site DR data. Box for a critical chunk is okay. Google and Dropbox for active data are okay. But for an off-site DR pool, either of those would be too expensive. Plus, SP8 is much more chunk-aware; I’d hate for a CDP product to revert a chunk, or be constantly out of sync.
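
For the eventual SP8 build, the core of a container pool setup is only a couple of admin commands, roughly like this via dsmadmc (a sketch only — the pool name, directory path, and credentials are placeholders, not anything configured yet):

  # Placeholder admin credentials; a directory-container pool dedupes inline to local disk
  dsmadmc -id=admin -password=XXXX "define stgpool contpool stgtype=directory"

  # Give the pool at least one filesystem directory to write containers into
  dsmadmc -id=admin -password=XXXX "define stgpooldirectory contpool /tsm/cont01"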

