Dog-whistle politics

I pretty recently learned the phrase dog-whistle politics. The idea is that certain phrases carry hidden, extra meaning for certain people. The Wikipedia page gives states’ rights as an example, where political comments often have a more nuanced, semi-concealed meaning.

I’m not sure if it’s properly the same concept, but one apparent example of this I’ve become really interested in is the “Black lives matter” and “All lives matter” phrases. “Black lives matter” became a common refrain after Michael Brown’s shooting, and came to encompass a general frustration (probably too tame of a term) at the apparent disregard for how many people of color were shot by police. And, much like the proper definitions of feminism, I think that’s a cause that everyone should support.

But then, “All lives matter” and “Police lives matter” became common counter-arguments. And I saw many tweets along the lines of, “People who don’t attack cops don’t get shot. #policelivesmatter.” It started to be associated with people who argued that Darren Wilson was innocent (or even, in some people’s strange opinions, “a hero”), and that Michael Brown pretty much deserved to be shot. (To be clear, that is not my opinion.)

I remember being very upset upon reading a tweet that said something like, “On 9/11, many police officers knowingly ran INTO the Twin Towers. #policelivesmatter” And I realized that it had reached the point where the actual words used were entirely irrelevant.

The literal meaning, the one a person unfamiliar with the backstory would take away, was one that everyone would agree with: many heroes in the NYPD willingly gave their lives on 9/11, and saying that their lives matter is so patently obvious that it seems weird to even mention.

But the reason it seems so weird to mention is that there’s a lot of hidden meaning, or at least meaning that I read into it. It reads like a counter to the “black lives matter” people, at a time when police brutality was being discussed heavily in the news. What I read wasn’t a lot different from, “Black people need to quit complaining about being disproportionately harmed by police violence. I side with the police who choked Eric Garner.”

The point here isn’t whether I correctly read the meaning, nor who is correct. I’m merely fascinated by how some terms or concepts can become so incredibly charged that people read into them meanings that aren’t contained in the actual words said. Because of the specific phrasing and the timing/context of a comment, I took a tweet expressing gratitude for NYPD officers who gave their lives on 9/11 as an appallingly racist, hateful message. And that is utterly fascinating to me.

But this isn’t isolated. Conversations about the Confederate flag, “religious freedom,” or “women’s rights” often conjure up extremely strong emotions and opinions, even where they’re not necessarily intended. And just try to have a rational conversation about gun control or the Second Amendment, or immigration policy. The terms are so charged with meanings you likely don’t even intend.

My Ghetto CDN

I’m not sure yet if this is a good idea or not, nor if I want to pay for it long-term, but I’m playing with an experiment that I think is kind of neat. (But maybe I’m biased.)

Background

For a long time now, I’ve been using CloudFront to serve static assets (images, CSS, JS, etc.) on the blogs, and a few other sites I host. That content never changes, so I can offload it to a global Content Delivery Network. IMHO it’s win-win: visitors get a faster experience because most of those assets come from a server near them (CloudFront has a pretty extensive network), and my server is relieved of serving most images, so it only handles requests for the blog pages themselves.

Except, serving images is easy; they’re just static files, and I’ve got plenty of bandwidth. What’s hard is serving blog pages: they’re dynamically-generated, and involve database queries, parsing templates, and so forth… Easily 100ms or more, even when things go well. What I’d really like is to cache those files, and just remove them from cache when something changes. And I’ve actually had that in place for a while now, using varnish in front of the blogs. It’s worked very well; more than 90% of visits are served out of cache. (And a decent bit of the 10% that miss are things that can’t be cached, like form POSTs.) It alleviates backend load, and makes the site much faster when cache hits occur, which is most of the time.
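As a rough sketch of the core idea, here’s what the relevant bits of a varnish config might look like. This is a hypothetical fragment (Varnish 4 syntax; the ACL address is made up), not my actual config: cache GETs, pass form POSTs through, and accept PURGE requests from the backend so the cache can be invalidated quickly.

```vcl
# Hypothetical sketch of the caching/purging logic described above.
acl purgers {
    "127.0.0.1";         # made-up address of the origin/blog server
}

sub vcl_recv {
    if (req.method == "PURGE") {
        if (!client.ip ~ purgers) {
            return (synth(405, "Not allowed"));
        }
        return (purge);   # invalidate the cached object for this URL
    }
    if (req.method != "GET" && req.method != "HEAD") {
        return (pass);    # form POSTs and the like go straight to the backend
    }
}
```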

But doing this requires a lot of control over the cache, because I need to be able to quickly invalidate the cache. CloudFront doesn’t make that easy, and they also don’t support IPv6. What I really wanted to do was run varnish on multiple servers around the world myself. But directing people to the closest server isn’t easy. Or, at least, that’s what I thought.

Amazon’s Latency-Based Routing

Amazon (more specifically, AWS) has supported latency-based routing for a while now. If you run instances in, say, Amazon’s Virginia/DC (us-east-1) region and in their Ireland (eu-west-1) data centers, you can set up LBR records for a domain pointing to both, and DNS queries will be answered with the IP address of whichever region has the lowest latency to the user (well, to the user’s DNS server).

It turns out that, although the latency is in reference to AWS data centers, your IPs don’t actually have to point to data centers.

So I set up the following:

  • NYC (at Digital Ocean), mapped to us-east-1 (AWS’s DC/Virginia region)
  • Frankfurt, Germany (at Vultr), mapped to eu-central-1 (AWS’s Frankfurt region)
  • Los Angeles (at Vultr), mapped to us-west-1 (AWS’s “N. California” region)
  • Singapore (at Digital Ocean), mapped to ap-southeast-1 (AWS’s Singapore region)

The locations aren’t quite 1:1, but I realized that it doesn’t actually matter. Los Angeles isn’t exactly Northern California, but the latency difference is insignificant, and the alternative was all of that traffic going to Boston, so it’s a major improvement.
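For a sense of what these records look like, here’s a hypothetical Route 53 change batch (domain, identifiers, and IPs are all made up). Each location gets an A record with the same name but a distinct SetIdentifier, plus a Region that Route 53 uses for the latency comparison:

```json
{
  "Comment": "Hypothetical latency-based routing records",
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "www.example.com.",
        "Type": "A",
        "SetIdentifier": "nyc",
        "Region": "us-east-1",
        "TTL": 60,
        "ResourceRecords": [{"Value": "203.0.113.10"}]
      }
    },
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "www.example.com.",
        "Type": "A",
        "SetIdentifier": "fra",
        "Region": "eu-central-1",
        "TTL": 60,
        "ResourceRecords": [{"Value": "203.0.113.20"}]
      }
    }
  ]
}
```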

Doing this in DNS isn’t perfect, either: if you are in Japan and use a DNS server in Kansas, you’re going to get records as if you’re in Kansas. But that’s an insane setup you shouldn’t use, and again, it doesn’t entirely matter. You’re generally going to get routed to the closest location, and when you don’t, it’s not really a big deal. Worst case, you see perhaps 300ms latency.

Purging

It turns out that there’s a Multi-Varnish HTTP Purge plugin, which seems to work. The downside is that it’s slow: not because of anything wrong with the plugin, but because, whenever a page changes, WordPress now has to make connections to four servers across the planet.

I want to hack together a little API to accept purge requests and return immediately, and then execute them in the background, and in parallel. (And why not log the time it takes to return in statsd?)
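A minimal sketch of what that little API might look like, assuming a thread pool for the parallel background work (the host names here are made up for illustration, and the `send` parameter exists just to make the fan-out testable):

```python
# Hypothetical sketch: queue PURGE requests to every edge node in the
# background, in parallel, and return immediately.
import http.client
from concurrent.futures import ThreadPoolExecutor

EDGE_NODES = ["nyc.example.com", "fra.example.com",
              "lax.example.com", "sin.example.com"]  # made-up hosts

_pool = ThreadPoolExecutor(max_workers=8)

def _purge_one(host, path):
    """Send an HTTP PURGE for `path` to a single varnish node."""
    try:
        conn = http.client.HTTPConnection(host, 80, timeout=5)
        conn.request("PURGE", path, headers={"Host": "blog.example.com"})
        status = conn.getresponse().status
        conn.close()
        return host, status
    except OSError as exc:
        return host, exc

def purge(path, nodes=EDGE_NODES, send=None):
    """Queue purges for all edge nodes; returns futures immediately.

    `send` can be swapped out (e.g. in tests) for any callable
    taking (host, path).
    """
    send = send or _purge_one
    return [_pool.submit(send, host, path) for host in nodes]
```

The caller (WordPress, via a shim) gets an immediate return, while the slow cross-planet connections happen off the request path; timing each `send` and shipping it to statsd would slot in naturally inside `_purge_one`.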

Debugging, and future ideas

I added a little bit of VCL so that /EDGE_LOCATION will return a page showing your current location.
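Something along these lines, as a hypothetical fragment (Varnish 4 syntax; the "nyc" string would differ per node):

```vcl
# Hypothetical sketch: answer /EDGE_LOCATION with a synthetic response
# identifying this edge node.
sub vcl_recv {
    if (req.url == "/EDGE_LOCATION") {
        return (synth(750, "nyc"));
    }
}

sub vcl_synth {
    if (resp.status == 750) {
        set resp.status = 200;
        set resp.http.Content-Type = "text/plain";
        synthetic("Edge location: " + resp.reason);
        return (deliver);
    }
}
```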

I think I’m going to leave things this way for a while and see how things go. I don’t really need a global CDN in front of my site, but with Vultr and Digital Ocean both having instances in the $5-10 range, it’s fairly cheap to experiment with for a while.

Ideally, I’d like to do a few things:

  1. Enable HTTP/2 support, so all of a page’s assets can be fetched over a single connection. Early reports on its performance are very positive.
  2. Play around with keeping backend connections open from the edge nodes to my backend server, to speed up requests that miss cache.
  3. Play around with something like wanproxy to try to de-dupe/speed up traffic between edge nodes and my backend server.

Update

I just tried Pingdom’s Full Page Test tool, after making sure the Germany edge node had this post in cache.

[Screenshot: Pingdom Full Page Test results, 2015-03-06]

The page loaded in 291ms, after fetching all images and stylesheets. I’d take that number anywhere! But for my site, loaded from Stockholm? I’m really pleased with that.