To the best of my knowledge, everything is back online now, but not all of the hardware is, so some things will not be as fast as usual.
We have four physical hosts providing the services, with numerous virtual machines on those hosts. Two hosts are i7-6700K 4-core/8-thread systems with 64GB of RAM, one is an i7-6850K 6-core/12-thread system with 128GB of RAM, and one is an i9-10980XE 18-core/36-thread system with 256GB of memory. Of these four machines the last is really the big workhorse, as it has more CPU than, and as much memory as, all of the other machines combined. It also has dual NVMe drives in RAID and 16TB hard drives in RAID. I had intended this to be the model for our next generation of servers. Even though this CPU design is five years old, Intel has not since produced any non-Xeon CPU capable of addressing this much memory, and memory is our biggest constraint.
The newest system has become unstable, and unstable in a bad way: instead of merely rebooting and coming back up, it hard hangs. I have had this very issue with this server before, and the last time around it turned out to be a bad power supply. But in the meantime the Asus motherboard also developed an issue where it would not see one of the memory channels. That is typical of a bent pin on the CPU socket, except that I had not had the CPU out of the machine.
So I bought a replacement Asus motherboard, and it had exactly the same issue. Asus support told me the memory I was using was not compatible (even though it had previously worked), so at that point I decided to try another company and went with an ASRock motherboard. That motherboard ran four hours and then died with a puff of smoke. Upon examination, it had melted the solder connections between the power circuitry and the CPU socket. The i9-10980XE is an extremely hungry chip and can draw as much as 540 watts with all cores fully busy at 4.8GHz. Even though the ASRock motherboard was designed for the i9-xx9xx series of CPUs, it was designed for earlier models that had fewer cores, addressed less memory, and so were not as power hungry.
So I then bought a Gigabyte board. I went this route because, like the Asus boards, these are designed for overclocking and thus have much more robust power and ground traces to handle the requirements of monster CPUs. Initially all was well; it ran stable at 4.8GHz with all cores loaded and no issues.
However, after a bit of operation it started locking up. When I checked the temperatures they were high, even though I had previously tested under full load and the CPU never got hotter than 62C. What had happened is that I had not used enough thermal paste and an air gap had developed between the CPU heat spreader and the cooler, right in the middle of the heat spreader, so cores near that area were overheating. I fixed that, but it still was not entirely stable.
The power supply I had originally used, which subsequently died, was a Thermaltake. When it failed, I replaced it with a Gigabyte PSU, my thinking being that since Gigabyte makes components designed for overclocking, this PSU should be, like the motherboard, more robust. Apparently my thinking was wrong. Net wisdom seems to suggest the best units are now made by Seasonic; I actually ordered through Phanteks, but the supply is a rebranded Seasonic. This time around I went with a slightly higher power rating so it will be less taxed. The prior two supplies were 1000-watt units, which, with a CPU maxing out at 540 watts, a very minimal graphics card at maybe 50 watts, perhaps 100 watts worth of drives, and another 100 worth of fans, should have been enough, but at full load that is running at the upper end of its capability. So this time around I bought a 1200-watt unit so it has a bit more overhead.
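For those who like the numbers, the power budget I was working from looks roughly like this; the component figures are the estimates from the paragraph above, not measured draw, sketched here in Python:

    # Rough power budget for the i9-10980XE box; all figures are my own
    # estimates from the text above, not measured draw.
    components = {
        "CPU, all cores busy at 4.8GHz": 540,
        "minimal graphics card": 50,
        "drives": 100,
        "fans": 100,
    }

    load = sum(components.values())            # about 790 watts estimated
    for psu_watts in (1000, 1200):
        headroom = (psu_watts - load) / psu_watts * 100
        print(f"{psu_watts}W supply: {load}W load, {headroom:.0f}% headroom")

The 1000-watt units leave only about 21% headroom at full load; the 1200-watt unit gives roughly 34%.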
This power supply will arrive Monday. Then at 4:00AM Sunday morning we disappeared from the Internet. The machine I was using as a router, which had also been rock solid, died. So I moved the network to another machine with dual NICs, but one of those NICs was a Realtek, and the Linux Realtek drivers do not work well and cannot operate at 1Gb/s, so it had to run at 100Mb/s. That proved totally inadequate: lots of packet loss and very bad performance.
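If you want to see what a NIC has actually negotiated, the kernel exposes it in sysfs; a tiny sketch, where the interface name is only a placeholder:

    # Check the negotiated link speed of a NIC via sysfs (Linux only).
    # "enp3s0" is a placeholder; substitute the real interface name.
    from pathlib import Path

    iface = "enp3s0"
    try:
        speed = int(Path(f"/sys/class/net/{iface}/speed").read_text())
        print(f"{iface}: {speed} Mb/s")        # 100 here is the Realtek problem
    except (OSError, ValueError):
        print(f"{iface}: link down or speed not reported")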
I went back to the co-lo and took the network card out of the failed machine (Ice) and put it into Iglulik; when I powered Iglulik back up it would not boot. I took the card out, and it still would not boot, so I put the card into a third machine and then that machine would not boot either. So now I was in a situation with three dead machines plus one that periodically locks up and has an interface that only works at 100Mb/s, so I moved the net back to that machine and proceeded to try to diagnose the others. The easiest machine to get back online was Igloo. I could get a grub prompt but not boot fully into Linux, but the fact that I could get to a grub prompt suggested the hardware was OK and just the boot configuration had gotten mangled, so I repaired the initramfs and re-installed grub and it came up and ran. This at least allowed us to have DNS and a working incoming mail server, although it could not accept SSL-encrypted connections because the encryption certificates were not available.
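For the technically curious, that repair amounts to roughly the usual chroot-from-rescue-media sequence; this is a sketch with placeholder device names, assuming a Debian-family system, not the exact commands from that night:

    # Sketch of the grub/initramfs repair from rescue media, assuming a
    # Debian-family system.  Device names are placeholders.  Run as root.
    import subprocess

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    root_dev, efi_dev = "/dev/sda2", "/dev/sda1"     # hypothetical partitions

    run(["mount", root_dev, "/mnt"])
    run(["mount", efi_dev, "/mnt/boot/efi"])
    for fs in ("dev", "proc", "sys"):                # give the chroot live kernel trees
        run(["mount", "--bind", f"/{fs}", f"/mnt/{fs}"])

    # Rebuild the initramfs and re-install the boot loader inside the chroot.
    run(["chroot", "/mnt", "update-initramfs", "-u", "-k", "all"])
    run(["chroot", "/mnt", "grub-install", "/dev/sda"])
    run(["chroot", "/mnt", "update-grub"])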
I brought Iglulik back next, and this machine is particularly important because it has the /home directories and the SSL certificates. I could not even get a grub prompt, and what is more, I could only see six of the seven drives present on the machine. Everything on this machine is RAIDed except the root partition, because at the time I did this build I did not know of any way to boot off software MDADM RAID. I have since figured that out, and so Inuvik is 100% RAIDed except for the EFI partition, and even that is replicated, just manually rather than by software RAID. So of the seven drives, was it one of the RAIDed drives that failed? No, the drive with the un-RAIDed root partition failed.
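The manual replication of the EFI partition does not need to be fancy; one way to do it is simply to keep a second ESP on another drive and copy the live one over it whenever it changes. A rough sketch of that idea, with placeholder device names:

    # One way to mirror the EFI system partition by hand, since the ESP is
    # the piece mdadm does not cover.  Device names are placeholders; this
    # assumes both ESPs are the same size and the spare is not mounted.
    import subprocess

    live_esp, spare_esp = "/dev/nvme0n1p1", "/dev/nvme1n1p1"

    subprocess.run(
        ["dd", f"if={live_esp}", f"of={spare_esp}", "bs=4M", "conv=fsync"],
        check=True,
    )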
So I replaced the drive and then tried to restore from backups. The problem is that when I mounted the partition labeled backup, there was nothing on it. At this point I began to wonder if I had been hit by some malicious virus, but at any rate I had backups on my home workstation as well, as a guard against a ransomware attack. I tried to restore from those, but they were corrupt. Now I was faced with having to rebuild from scratch, which could potentially take weeks. But then I mounted all the partitions and found that the one labeled libvirt, supposed to hold images for the virtual machines, actually contained the backups, and I was able to restore.
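The filesystem labels are what finally gave it away; listing what each label actually points at is a one-liner, roughly:

    # List what each filesystem label actually points at; this is how a
    # partition labeled "libvirt" can turn out to hold the backups.
    import os

    by_label = "/dev/disk/by-label"
    for label in sorted(os.listdir(by_label)):
        target = os.path.realpath(os.path.join(by_label, label))
        print(f"{label:20s} -> {target}")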
While it was restoring (at this point I had been up for nearly 48 hours straight, and at 65 I don't handle that well anymore) I slept for four hours. When I got up the restoration was finished, but the machine still would not boot. There was something wrong with the initramfs, but I could not determine exactly what. Eventually I noticed it was trying to mount the wrong root partition UUID: when I restored the system the file systems had to be re-created and so had new UUIDs. I fixed the fstab and it still would not work, and I was up all night again last night chasing this down. I finally discovered that /etc/initramfs-tools/conf.d/resume had the wrong UUID in it as well and fixed that. Now the machine was bootable and ran.

Because I knew it was going to have to do more than it was originally intended to do until I get the remaining machines repaired, I attempted to remove some unnecessary bloatware. For example CUPS, which is not needed since we do no remote printing from this machine, and which has a security issue that makes it wise not to have it on servers anyway. Also the Bluetooth software; the machine has no Bluetooth hardware, so that didn't make a lot of sense, and I removed it. Then I found wpa_supplicant. I didn't know what it was for, so I looked it up, and the material said it was for managing wireless connections. Well, there is no wireless hardware either, so I removed it, and then the machine got very, very sick. What the online material didn't tell me is that wpa_supplicant is also a back end to NetworkManager and is tied into the dbus daemon, and removing it breaks both. With both broken, the machine was so insane that I had a very hard time getting wpa_supplicant re-installed, and even once it was re-installed it still did not work. I finally determined it was necessary to re-install NetworkManager as well, and got it working. I took the machine back to the co-lo facility around 7am and installed it.
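For the record, the UUID chase described above boils down to something like the following; the UUIDs and file list here are placeholders and assumptions, not the real values:

    # Hunt down stale UUID references after a filesystem is re-created.
    # The UUIDs are placeholders; the real new one comes from `blkid`.
    from pathlib import Path

    old_uuid = "00000000-0000-0000-0000-000000000000"   # UUID on the dead drive
    new_uuid = "11111111-1111-1111-1111-111111111111"   # UUID of the rebuilt filesystem

    # The two files that bit me: fstab and the initramfs resume hint.
    for path in (Path("/etc/fstab"),
                 Path("/etc/initramfs-tools/conf.d/resume")):
        if path.exists() and old_uuid in path.read_text():
            path.write_text(path.read_text().replace(old_uuid, new_uuid))
            print(f"updated {path}")

    # After changing either file, rebuild the initramfs:  update-initramfs -u -k all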
This was enough to mostly get us back into service, except that I had to pull backups from the other machines to get their functionality up on this hardware. So I got things restoring and went to sleep, got up after about four hours and started those services which had been recovered, mostly the virtual private servers, went back to bed, slept another six hours, got up, and restored the remaining things to service.
Now during this time, particularly the second day into it, Tuesday, I got some calls while I was frantically trying to figure out what was wrong with Iglulik even after I had replaced the drive and restored from backups, and I was somewhat rude to a few people. I apologize for this, but like I said earlier, at 65 I do not have the endurance I had at 25.
And I've got a bunch of hate calls and hate mail about our reliability, but here is my dilemma: I have not raised prices in 30 years, yet if I even whisper a hint at doing that, people threaten to bolt. By the same token, the only way to provide more reliability is more redundancy. For example, with enough disk to maintain near-time copies of the home directories and mail directories, it would have been possible to maintain at least minimal services through all of this. I ask people to refer more customers to us, because that is another way to increase income and provide some of these things, but that only happens minimally, and this is a rather niche operation so I do understand.
And on the opposite side of the hate mail and calls, I also got calls from people who appreciated my efforts, and I want you to know I appreciate your patience.