I recently wondered how long it had been since I added ethitter.com to the Strict Transport Security preload list, as it’s been several years. I added my domain without fully understanding the consequences, though I’ve been fortunate to avoid any problems that could’ve resulted.
Getting back to my original question, version control made it easy to find August 18, 2014 as the date my domain landed in Chromium’s list: https://src.chromium.org/viewvc/chrome?view=revision&revision=290306. At the time, it was one of fewer than 1,000 domains in the preload list, the bulk of which consisted of Google’s own domains. There are currently over 40,000 preloaded domains in Chrome/Chromium. It’s a good thing preloading hasn’t been an issue, because it takes a while to get off the list.
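For reference, getting on the preload list requires serving a Strict-Transport-Security response header that opts in explicitly. Here's a minimal sketch of that check, assuming hstspreload.org's stated requirements (a max-age of at least one year, plus the includeSubDomains and preload directives); this is an illustration, not the submission service's actual validator:

```python
# Sketch: check whether an HSTS header satisfies the preload requirements
# (max-age >= 1 year, includeSubDomains, preload). Directive syntax per
# RFC 6797; the one-year threshold reflects hstspreload.org's stated minimum.

def hsts_preload_ready(header: str) -> bool:
    directives = [d.strip().lower() for d in header.split(";")]
    max_age = 0
    for d in directives:
        if d.startswith("max-age="):
            try:
                max_age = int(d.split("=", 1)[1])
            except ValueError:
                return False
    return (
        max_age >= 31536000  # at least one year, in seconds
        and "includesubdomains" in directives
        and "preload" in directives
    )

print(hsts_preload_ready("max-age=63072000; includeSubDomains; preload"))  # True
print(hsts_preload_ready("max-age=300; includeSubDomains"))                # False
```

The permanence described above is exactly why the requirements are strict: once a browser ships with your domain baked in, every visitor is forced to HTTPS until a future release removes it.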
I took a break from my ongoing “convert everything to Ansible” project to use the base I’ve established there to install something new: GitLab Runner using Docker Machine. I finally have CI/CD built into my GitLab instance: https://git.ethitter.com/debian/eth-log-alerting/pipelines! 🎉
Continue reading docker-machine doesn’t require Docker locally
I’ve had entirely too much fun with Ansible this weekend. Looking forward to drastically improving how I manage my servers!
Two weeks ago, the VPS that hosts this site moved to a machine that had been patched for the Spectre vulnerabilities. Immediately, I began receiving warnings about high load, and these alerts continued unabated for over a week. I tried moving services to other hosts, and I reduced the resources allocated to php-fpm, all to no avail.
As I continued to monitor and debug the situation, fail2ban regularly appeared among the top resource consumers, but I didn’t think much of it; fail2ban has always been a voracious resource user, but it’s an indispensable tool, so removing it wasn’t an option.
Continue reading Restoring performance after Spectre updates
When using DHCP, most routers allow individual IPv4 addresses to be assigned to specific devices. In my case, I do so for my Raspberry Pis, making Home Assistant accessible at a domain name rather than trying to remember an IP address.
Continue reading Why routers don’t support IPv6 reservations
Earlier today, I switched from custom PHP builds to sury.org's PHP 7.2 builds. I've been using his builds on my Photon server for some time now, and I've lost interest in maintaining my own builds, so this seemed like a natural progression.
Previously, I used curl to trigger dyndnsd updates via my Raspberry Pis. This worked well for many months, but it lacked IPv6 support, as dyndnsd was inferring my IP from the request. Fortunately, the daemon accepts parameters for IPv4 and IPv6 addresses, so I wrote a Go program to handle regular updates. It still relies on cron, but it passes explicit IP values and moves all options to a configuration file.
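The update itself is just an authenticated HTTP GET with the addresses passed as query parameters. Here's a minimal Python sketch of building such a URL; the /nic/update path and the myip/myip6 parameter names are assumptions modeled on the DynDNS-style update protocol, and the hostnames are placeholders, so check your daemon's documentation before relying on them:

```python
# Sketch: build a dyndnsd-style update URL with explicit IPv4/IPv6 values,
# rather than letting the daemon infer the address from the connection.
# Endpoint path and parameter names are assumptions; hostnames are examples.
from urllib.parse import urlencode

def build_update_url(base, hostname, ipv4=None, ipv6=None):
    params = {"hostname": hostname}
    if ipv4:
        params["myip"] = ipv4
    if ipv6:
        params["myip6"] = ipv6
    return f"{base}/nic/update?{urlencode(params)}"

url = build_update_url(
    "https://dyndns.example.com",  # hypothetical dyndnsd server
    "home.example.com",
    ipv4="203.0.113.7",
    ipv6="2001:db8::7",
)
print(url)
```

A real client would send this request with HTTP basic auth from a cron job, which is essentially what the Go program does while reading its values from a configuration file.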
The client is available from https://git.ethitter.com/open-source/dyndnsd-client. I don't provide built binaries yet, but I'd like to soon.
If your ISP doesn't support IPv6, or if you run multiple daemons on the same network, options are available for your situation. Take a look at the readme for more.
Hopefully someone else finds this useful!
Yesterday, after moving my GitLab instance, I noticed that the public clone of my Home Assistant configurations was a bit stale, so I decided that it was time to refresh.
In so doing, I also discovered that I was a few releases behind (three, to be exact), and that those intervening releases included several breaking changes. Fortunately, updating my configurations to support Home Assistant 0.57.3 also resolved several longstanding bugs.
Continue reading Another Home Assistant Update
After a few successful months of testing Packet.net, I've once again moved git.ethitter.com. The decision was purely financial: my GitLab instance doesn't receive enough traffic to warrant Packet.net's pricing. As far as reliability and value were concerned, Packet.net was excellent. I would've appreciated built-in backups, but otherwise, I have no complaints about the service.
It will likely come as little surprise that git.ethitter.com is back on Linode. Compared to Digital Ocean, Linode is slightly more generous with its resources, and GitLab wants all the resources it can get.
The migration itself was quite easy, with most of the time spent preparing the server; GitLab's backup/restore process did most of the hard work. Now I just have to finish the ancillary setup, like monitoring.