Two weeks ago, I noted that I was preparing to switch from PHP 7.0 to 7.1. It took me a bit more time than expected, thanks to a segmentation fault that appeared in 7.1 when using OPcache.
Category: Tutorials
Adding Brotli support to nginx
Last year, Google released a successor to the deflate compression algorithm: Brotli. Chrome adopted it in version 51, and Firefox in version 44 (see Can I use…). On the webserver side, however, nginx doesn’t support it natively, so Google provides the ngx_brotli module, making it just a matter of compiling nginx.
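The build can be sketched as follows; the nginx version, paths, and extra configure flags below are illustrative assumptions, not details from the post:

```shell
# Fetch the module (with its bundled brotli sources) and the nginx source.
git clone --recurse-submodules https://github.com/google/ngx_brotli.git
wget https://nginx.org/download/nginx-1.10.1.tar.gz
tar xzf nginx-1.10.1.tar.gz

cd nginx-1.10.1
# Add ngx_brotli alongside whatever flags your existing build already uses.
./configure --add-module=../ngx_brotli
make && sudo make install
```

Once installed, the module is enabled in nginx.conf with its own directives, e.g. `brotli on;` and `brotli_types text/css application/javascript;`.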
With your own authoritative DNS, dynamic DNS is easy
At the beginning of the year, I wrote about using nsd3 to run my own nameservers: “Authoritative DNS with redundancy, using nsd and Debian Wheezy”. That post focused on the public-facing benefits of running my own nameservers, notably the flexibility it gives me with regard to record types and update frequency.
As I’ve added more and more services to the Raspberry Pis running on our home network, that flexibility has revealed another benefit: assigning a domain name to the network’s ever-changing IP address. Time Warner doesn’t offer static IPs for consumer accounts, which complicates using our router’s native VPN functionality. To make it convenient to connect to our home network from afar, I’ve employed an open-source script and a custom DNS zone to provide dynamic DNS via my own servers.
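The post’s own script isn’t reproduced here, but the core of any zone-file-based dynamic DNS setup is the same: detect the current public IP, rewrite the zone’s A record, bump the SOA serial, and reload the nameserver. A minimal Python sketch of the zone-rewriting step (the record name, zone layout, and serial format are assumptions, not details from the post):

```python
import datetime
import re


def bump_serial(serial, today=None):
    """Advance a YYYYMMDDnn-style SOA serial: stay on today's date and
    increment the revision, or start a fresh date at revision 00."""
    today = today or datetime.date.today()
    prefix = int(today.strftime("%Y%m%d"))
    if serial // 100 >= prefix:
        return serial + 1  # same day: bump the two-digit revision
    return prefix * 100    # new day: YYYYMMDD00


def update_zone(zone_text, host, new_ip, today=None):
    """Point host's A record at new_ip and bump the zone's serial."""
    zone_text = re.sub(
        rf"^({re.escape(host)}\s+IN\s+A\s+)\S+",
        rf"\g<1>{new_ip}",
        zone_text,
        flags=re.M,
    )
    return re.sub(
        r"\d{10}(?=\s*;\s*serial)",
        lambda m: str(bump_serial(int(m.group(0)), today)),
        zone_text,
    )
```

A real cron job would first fetch the public IP from an external service, rewrite the zone file only when the address has actually changed, and finish by reloading nsd (under nsd3, something like `nsdc rebuild && nsdc reload`).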
Continue reading With your own authoritative DNS, dynamic DNS is easy
Compiling nginx with OpenSSL 1.0.2 to maintain HTTP/2 support
Chrome 51 disabled support for NPN, or Next Protocol Negotiation, the mechanism that millions of nginx servers needed to establish HTTP/2 connections with Chrome users. For anyone running nginx compiled against OpenSSL 1.0.1, Chrome 51 users are still connecting over SSL, but only via the legacy HTTP/1.1 specification, which lacks the performance benefits HTTP/2 imparts.
Both the nginx project and Mattias Geniar provide lengthier explanations of what changed in Chrome 51:
- https://www.nginx.com/blog/supporting-http2-google-chrome-users/
- https://ma.ttias.be/day-google-chrome-disables-http2-nearly-everyone-may-31st-2016/
For those wondering how to restore HTTP/2 support for Chrome 51 users, there is but one answer: switch nginx to OpenSSL 1.0.2. While OpenSSL 1.0.1 is only receiving security updates (and will stop receiving any updates after December 31, 2016), OpenSSL 1.0.2 is actively maintained and receiving new features, including the successor to NPN, which nginx supports: ALPN, or Application-layer Protocol Negotiation.
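Switching needn’t mean installing OpenSSL 1.0.2 system-wide; nginx’s configure script can compile a specific OpenSSL source tree in statically. A sketch, with version numbers and paths as placeholder assumptions rather than details from the post:

```shell
# Fetch an OpenSSL 1.0.2 release and the nginx source.
wget https://www.openssl.org/source/openssl-1.0.2h.tar.gz
tar xzf openssl-1.0.2h.tar.gz
wget https://nginx.org/download/nginx-1.10.1.tar.gz
tar xzf nginx-1.10.1.tar.gz

cd nginx-1.10.1
# --with-openssl tells nginx to build and link this OpenSSL itself,
# leaving the distribution's system OpenSSL untouched.
./configure \
    --with-http_ssl_module \
    --with-http_v2_module \
    --with-openssl=../openssl-1.0.2h
make && sudo make install
```

Afterward, `nginx -V` should report the 1.0.2 version string, and Chrome 51 can negotiate HTTP/2 via ALPN again.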
Continue reading Compiling nginx with OpenSSL 1.0.2 to maintain HTTP/2 support
Restricted SFTP access in Debian
As I’ll elaborate on in a few days1, when I added rate-limiting to nginx, I unintentionally blocked some legitimate traffic. Rather than make exceptions for these sources, I chose to provide certain services with read-only SFTP access to the specific directories they require.
It’s worth noting that in my case, I needed to grant particular users, not user groups, access to certain directories; none of these special users needs access to the same items. As a result, the following is tailored to user-level access to discrete directories, but the same approach can be built with groups instead. I won’t detail that here, though what follows should be sufficient to extrapolate how it would work for groups and shared directories.
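A common way to achieve this on Debian is OpenSSH’s internal-sftp server combined with a per-user Match block. The excerpt doesn’t show the post’s exact configuration, so treat this as a sketch; the username and path are hypothetical:

```
# /etc/ssh/sshd_config
# Use the in-process SFTP server so the chroot needs no binaries inside it.
Subsystem sftp internal-sftp

# Hypothetical restricted account and directory.
Match User backup-feed
    # The chroot target must be owned by root and not writable by the user.
    ChrootDirectory /srv/sftp/backup-feed
    # -R forces a read-only SFTP session.
    ForceCommand internal-sftp -R
    AllowTcpForwarding no
    X11Forwarding no
```

With this in place, `backup-feed` can list and download files beneath /srv/sftp/backup-feed but cannot write anywhere, open a shell, or tunnel traffic.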
Continue reading Restricted SFTP access in Debian
- That post started as an introduction to this one, then approached 500 words, which called for excision. ↩
Rate limiting: another way I guard against brute-force logins
For the last few weeks, the VPS powering this site has received an increase in nefarious traffic arriving via IPv6. Perhaps unsurprisingly, much of this traffic came as brute-force login attempts against my WordPress site, and its arrival over IPv6 was key.
As I noted in my post on login monitoring, I already employ fail2ban, in conjunction with Konstantin Kovshenin’s technique for blocking failed WP logins. Unfortunately, fail2ban only supports IPv4; that gap is the only reason I even noticed this uptick in login attempts, or needed to address it at all.
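The post’s actual configuration sits behind the cut, but nginx’s stock limit_req machinery is the usual tool for this, and unlike fail2ban it is address-family agnostic, since it keys on the raw client address. A minimal sketch, with the zone name, size, and rates as assumptions:

```
# http {} context: allow one request per second per client, keyed on the
# binary client address (works identically for IPv4 and IPv6 peers).
limit_req_zone $binary_remote_addr zone=login:10m rate=1r/s;

# server {} context: apply the limit only where brute forcing happens.
location = /wp-login.php {
    limit_req zone=login burst=3 nodelay;
    # ... existing fastcgi_pass / include directives ...
}
```

Requests beyond the burst are answered with an error (503 by default in older nginx releases) instead of reaching PHP, which is what makes over-eager clients, legitimate or not, suddenly visible.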
Continue reading Rate limiting: another way I guard against brute-force logins
Generating a CSR with SAN at the command line
Lately, I’ve explored creating my own CSRs for use with Let’s Encrypt, so I can control the common name and subject names. I’m neurotic enough that I can’t bear to let Let’s Encrypt decide.
Including additional domains, a technique known as Subject Alternative Names or subjectAltName (SAN), requires a configuration file to pass the relevant arguments to OpenSSL.
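As a sketch of that configuration file and the accompanying command (the domains and filenames are placeholders, not the post’s own values):

```shell
# Minimal OpenSSL request config with a SAN section.
cat > san.cnf <<'EOF'
[req]
default_bits       = 2048
prompt             = no
distinguished_name = dn
req_extensions     = req_ext

[dn]
CN = example.com

[req_ext]
subjectAltName = @alt_names

[alt_names]
DNS.1 = example.com
DNS.2 = www.example.com
EOF

# Generate a fresh key and a CSR carrying both names.
openssl req -new -newkey rsa:2048 -nodes \
    -keyout example.key -out example.csr -config san.cnf
```

`openssl req -in example.csr -noout -text` should then show both domains under “X509v3 Subject Alternative Name”.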
Dropping IPv6 traffic during a brute-force attempt
ip6tables -A INPUT -s 2002:5bc8:d05::5bc8:d05 -j DROP
2002:5bc8:d05::5bc8:d05 recently attempted a brute-force login against this network’s wp-login.php. The above abated that effort. 😂
Backing up a Gmail address with gmvault
Despite all I’ve done to move my email to my own domain and hosting, inevitably some messages still arrive in the Gmail account I’ve had for more than a decade. I’ve already configured the account to send replies from my new addresses, but I also wanted to archive the 215,000+ messages already stored with Google, along with anything new that arrived there.
Options considered before gmvault
One solution is Google’s Takeout service, which will produce an archive of everything stored in Gmail (and many of Google’s other services, too!), but the process is manual and can be very slow. Takeouts can only be created through a web interface; downloading the archive requires doing so in the browser (for authentication reasons); and since Takeout doesn’t create incremental backups, every message is included in every export. An archive of just my Gmail account takes about 29 hours for Google to prepare, amounts to nearly 7 GB (in gzipped tar format), and takes several hours to download to my laptop. It then takes several more hours to upload the archive to my backup server. While I’m willing to undertake this process once a month to back up all the services Takeout supports, which entails two files totaling around 30 GB, Takeout is impractical for regular exports of a frequently changing service like Gmail.
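gmvault’s command line handles both the initial export and the incremental follow-ups. The address below is a placeholder, and the OAuth authorization prompt gmvault raises on its first run is omitted here:

```shell
# First run: full sync of the mailbox into a local database
# (~/gmvault-db by default).
gmvault sync example@gmail.com

# Subsequent runs: quick mode fetches only recent changes, which makes
# it suitable for an unattended cron job.
gmvault sync -t quick example@gmail.com
```

Because the local database is incremental, each follow-up run transfers only new or changed messages rather than the entire 215,000-message archive.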
Automatically renewing a lot of Let’s Encrypt certificates
I’ve been experimenting lately with Let’s Encrypt for SSL certificates, contemplating whether it can replace my StartSSL Class 2 wildcards.
For those unfamiliar with Let’s Encrypt, it’s a free certificate authority1 aimed at simplifying the process of making a site available via a secure connection. If you’re reading this on ethitter.com, your browser’s address bar will display a lock icon, the text https, or some other indicator that the connection is secure.
Until Let’s Encrypt launched its public beta in December 2015, acquiring a certificate involved many steps; at times, considerable cost; and terminology many find confusing. Let’s Encrypt intends to address these issues, and effectively does so in at least one way.
Continue reading Automatically renewing a lot of Let’s Encrypt certificates
- An organization that provides the trusted verification that makes secure certificates secure. ↩