Planning for the post that Matt links to

For most of the time that I’ve had my multisite network and the underlying infrastructure that I’ve written about lately, I’ve been overly focused on performance and scalability.

I say “overly focused” because, on a good day, I average about 50 views here on ethitter.com. I write about exceedingly technical (or exceedingly uninteresting) topics, so that’s no surprise.

It’s also no surprise that my two most-popular posts are both about Automattic: the first announcing my hiring, the second declaring that Matt will have to fire me to be free of me. Interest in our hiring process and company culture far exceeds that which exists for my blathering.

When Matt retweeted the latter post back in January, my heart paused, then skipped into overdrive. Beyond the excitement of Matt recognizing my post, I immediately feared the embarrassment of my site crashing.

As it turns out, I had nothing to worry about. While meaningful for this humble site, the pageviews were insignificant as far as the infrastructure was concerned. No resource-usage alerts fired, nor did my provider inform me that I’d exceeded my plan’s allotments. Between Redis-based object and page caching, nginx microcaching, and a robust CDN, there was really no cause for concern.

Read on for some basic details.

Peeling back the layers

To start, I’ve implemented both object and full-page caching in WordPress, backed by Redis. This was undoubtedly the most significant factor in my site sustaining that day’s traffic: together, these layers shielded both the database and the PHP workers from excessive load. Memcached could just as easily have replaced Redis, but I already run the latter for GitLab.
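For context, wiring WordPress to Redis usually comes down to a drop-in plugin plus a few constants in wp-config.php. The sketch below assumes the Redis Object Cache plugin and a Redis server on localhost; the values are illustrative, not necessarily what I use.

```php
// wp-config.php — sketch assuming the Redis Object Cache plugin
// and a local Redis instance; values are illustrative.
define( 'WP_CACHE', true );             // enables the advanced-cache.php page-cache drop-in
define( 'WP_REDIS_HOST', '127.0.0.1' );
define( 'WP_REDIS_PORT', 6379 );
define( 'WP_REDIS_DATABASE', 0 );       // keep WordPress keys apart from other apps' data
```

With the plugin’s object-cache.php drop-in in place, WordPress routes transients and other object-cache reads through Redis instead of hitting MySQL on every request.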

In addition to those WordPress-powered caches, I’ve taken advantage of nginx’s native caching abilities, since it’s already my webserver. During peak traffic, nginx doesn’t even need to contact WordPress to fulfill a logged-out visitor’s request, which lets it handle load very efficiently. Because I only need this protection when traffic is high, I’ve implemented an approach known as “microcaching,” which uses very short cache durations; under my configuration, the nginx caches expire after 30 seconds. I referenced Zach Brown’s guide when setting this up, consulting Google to create a dedicated tmpfs folder for even better performance.
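A microcache along these lines can be sketched with nginx’s FastCGI cache. The paths, zone name, and PHP-FPM socket below are illustrative assumptions, not my exact configuration:

```nginx
# In the http block: define the cache zone.
# /var/cache/nginx/microcache can be mounted as tmpfs for in-memory storage.
fastcgi_cache_path /var/cache/nginx/microcache levels=1:2
                   keys_zone=microcache:10m inactive=60s;
fastcgi_cache_key "$scheme$request_method$host$request_uri";

server {
    # Bypass the cache for logged-in users and comment authors.
    set $skip_cache 0;
    if ($http_cookie ~* "wordpress_logged_in|comment_author") {
        set $skip_cache 1;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass unix:/run/php/php-fpm.sock;

        fastcgi_cache microcache;
        fastcgi_cache_valid 200 301 302 30s;  # the 30-second microcache window
        fastcgi_cache_bypass $skip_cache;
        fastcgi_no_cache $skip_cache;
        fastcgi_cache_use_stale updating error timeout;
    }
}
```

An fstab entry such as `tmpfs /var/cache/nginx/microcache tmpfs defaults,size=100m 0 0` keeps the cache directory in memory; `fastcgi_cache_use_stale updating` also helps under load by serving a stale copy while one request refreshes the cache.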

Lastly, all static assets are served from a CDN, benefiting visitors as much as my server. Scripts and stylesheets are concatenated and minified before KeyCDN stores them, while images are resized and served via Photon (which retrieves them from the CDN). I also use KeyCDN’s “origin shield” feature, which protects my site against cache stampedes by limiting which of its servers connect to mine.
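Pointing a WordPress site’s assets at a pull-zone CDN often amounts to rewriting enqueued URLs. A minimal sketch, using a hypothetical zone hostname (the function name and hostname are placeholders, not my actual setup):

```php
// Hypothetical sketch: rewrite enqueued script and style URLs to a CDN pull zone.
// 'zone-1234.kxcdn.com' is an illustrative placeholder hostname.
function eth_rewrite_asset_src( $src ) {
    return str_replace( home_url(), 'https://zone-1234.kxcdn.com', $src );
}
add_filter( 'style_loader_src',  'eth_rewrite_asset_src' );
add_filter( 'script_loader_src', 'eth_rewrite_asset_src' );
```

Because the CDN pulls from the origin on a cache miss, no assets need to be uploaded anywhere; the origin shield then ensures only a small set of CDN edge nodes ever make those pulls.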
