Why airplanes don’t fly in straight lines…

…and other information about airplane engine failures.

I’ve sometimes wondered why airplanes don’t seem to fly in straight lines. I once saw someone give a seemingly-simple explanation: the Earth is round. While that’s true, it doesn’t really answer the question. When I flew from New York to Hong Kong, “curvature of the Earth” didn’t explain why we practically flew through the Arctic Circle. It was certainly not the most efficient path.

I suspect there are many components to the answer, and perhaps the earth’s curvature factors in a bit. I also suspect that weather and wind factor in. But there’s one big, glaring reason that I’ve found: a twin-engine airplane must, at all times, be able to reach an airport on a single engine within a certain period of time. Early on, the limit was 60 minutes, though that figure has gone up over time.

The idea was simple—with only two engines, if one fails, you want to be able to land pretty quickly. So the FAA set a limit of 60 minutes. This surely had all sorts of positive safety implications, but it was also inconvenient: it led to some circuitous routes and left some areas simply unreachable in a twin-engine plane. Over time, apparently, evidence allowed these rules to be relaxed. For one, it turns out that a plane is capable of flying just fine on a single engine.

For example, here is a rather chilling video of an airplane (a Boeing 757) ingesting a bird into one of its engines during takeoff, causing the engine to spew flames until it is shut down:

It continues its takeoff normally, declares an emergency, and lands normally a few minutes later. If you ignore the flames during takeoff and the inspection by the fire department upon landing, it looks entirely normal. Also fascinating to me is how the pilot seems entirely calm during the whole situation, and how half of the radio traffic is just about how they’ll be able to exit the runway normally so other flights shouldn’t need to divert, and how they plan to taxi back.

While a video of an airplane engine spewing fire might not inspire a lot of confidence, what’s intriguing to me is that the plane flew just fine with only one engine operating. It didn’t begin flying sideways or have difficulty landing as I might have naively expected. Hence the initial justification of the 60-minute rule—if an engine failed mid-flight, pilots would be able to safely fly to the nearest airport with only one engine.

Over time, apparently, evidence showed that spontaneous failure of an engine mid-flight was extremely uncommon, and that the 60-minute limit was excessively conservative and made many flights impractical. Since then, allowances of up to four hours have been granted, and newer planes are being certified for times in excess of five hours. But many older planes are still limited to shorter times, hence the seemingly-odd routes they take—they need to stay within range of airports.

Pirates

After reading about a series of pirate attacks last year—back then an almost laughably bizarre occurrence—I became interested in the concept of modern piracy, something I, like many average citizens, was unaware still went on. I picked up a copy of John Burnett’s Dangerous Waters: Modern Piracy and Terror on the High Seas after hearing him talk on NPR, but didn’t get far into it.

Recent events revived my interest, and I made some headway in the book this weekend. It turns out that piracy has been a major problem for ships in third-world waters, which is troubling since many major international shipping lanes pass right through those areas. No ship is immune, from small sailboats to “VLCCs”: Very Large Crude Carriers, commercial oil tankers rivaling our military’s biggest ships in size. As we learned from the recent hostage situation, pirates tend to be destitute teenagers from poverty-stricken nations who have little to lose and everything to gain.

This afternoon, I read an interesting observation: some private ships, including cruise ships, are known to employ “heavies,” gun-toting mercenaries, to protect the ship and those onboard. Guns are otherwise uncommon: there are many thorny legal issues, including the need to declare them to customs when docking in a foreign port, at which point they’re seized until you leave again; the fact that pulling a gun on pirates, unless you’re a well-trained marksman, is likely to get you shot; and the fact that, on many of the oil tankers, a single stray round could blow the whole ship up.

So imagine my surprise when I checked out Google News and saw that an Italian cruise liner off the coast of Somalia had actually used its heavies to deter pirates. Besides idle fascination with the escalating pirate wars, I think this is a good thing: if pirates are becoming brazen enough to fire on cruise ships, there’s a much more pressing need for the international community to aggressively put an end to piracy. Piracy is no longer an obscure issue affecting an incredibly small number of commercial ships, but something threatening anyone on a boat in international waters, and this latest incident is likely to prompt an even greater escalation in anti-piracy defenses.

Georgia

This is getting way too confusing. All along, when someone said “Georgia” I thought of the southern state. And then there was a war in Georgia with Russia, and I was just geographically astute enough to know that they meant the country.

So that raged on for a while, and now, whenever I hear about something happening in Georgia, I think of the country.

So now, people in Georgia claim to have found Bigfoot. And I was thinking, their country is kind of insane-sounding. Like, one day most people have never heard of it. And then one day President Bush visits and someone hurls a grenade at him. But it’s apparently a dud, and no one notices until afterwards anyway… (Talk about failing at terrorism.) And then we all forget about the country again. And then Russia invades, confusing everyone who had assumed both that the news was talking about the US state and that Russia was a nice country that wouldn’t go starting wars. And then their war ends. And then like the next day they find Bigfoot.

But it seems that it’s actually our Georgia that found Bigfoot.

*proud to be an American*

Building an Improvised CDN

From my “Random ideas I wish I had the resources to try out…” file…

The way the “pretty big” sites work is that they have a cluster of servers… A few are database servers, many are webservers, and a few are front-end caches. The theory is that the webservers do the ‘heavy lifting’ to generate a page… But many pages, such as the main page of a news site, Wikipedia, or even these blogs, don’t need to be generated every time. The main page only updates every now and then. So you have a caching server, which basically handles all of the connections. If the page is in cache (and still valid), it’s served right then and there. If the page isn’t in cache, the caching server fetches it from the backend servers, serves it up, and adds it to the cache.
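
To make that hit-or-miss flow concrete, here’s a minimal sketch of what a front-end cache does, in Python rather than any real caching server’s code; the backend address and the 30-second lifetime are made-up values:

```python
import time
import urllib.request

BACKEND = "http://127.0.0.1:8080"  # hypothetical backend webserver
TTL = 30                           # seconds a cached page stays valid

cache = {}  # path -> (timestamp, body)

def serve(path):
    """Return the page body, hitting the backend only on a cache miss."""
    entry = cache.get(path)
    if entry and time.time() - entry[0] < TTL:
        return entry[1]  # cache hit: serve it right then and there
    body = urllib.request.urlopen(BACKEND + path).read()  # miss: ask the backend
    cache[path] = (time.time(), body)  # remember it for the next viewer
    return body
```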

The way the “really big” sites work is that they have many data centers across the country and your browser hits the closest one. This improves load times and adds redundancy (data centers do periodically go offline: The Planet did it just last week when a transformer inside blew up and the fire marshals made them shut down all the generators). Depending on whether they’re filthy rich or not, they’ll either use GeoIP-based DNS or have elaborate routing going on. Many companies offer these services, by the way. It’s called a CDN, or Content Delivery Network. Akamai is the most obvious one, though you’ve probably used Limelight before, too, along with some other less-prominent ones.

I’ve been toying with SilverStripe a bit, which is very spiffy, but it has one fatal flaw in my mind: its out-of-box performance is atrocious. I was testing it on a VPS I hadn’t used before, so I don’t have a good frame of reference, but I got between 4 and 6 pages/second under benchmarking. That was after I turned on MySQL query caching and installed APC. Of course, I was using SilverStripe to build pages that would probably stay unchanged for weeks at a time. The 4-6 pages/second is similar to how WordPress behaved before I worked on optimizing it. For what it’s worth, static content (that is, stuff that doesn’t require talking to databases and running code) can be served at 300-1,000 pages/second on my server, as some benchmarks I ran demonstrated.

I thought of two main ways to improve SilverStripe’s performance. (Well, a third option, too: realize that no one will visit my SilverStripe site and leave it as-is. But that’s no fun.) The first is to ‘fix’ SilverStripe itself. With WordPress, I tweaked MySQL and set up APC (which gave a bigger boost than with SilverStripe, but still not a huge gain). But then I ended up coding the main page from scratch, and it uses memcache to store the generated page in RAM for a period of time. Instantly, benchmarking showed that I could handle hundreds of pages a second on the meager hardware I’m hosted on. (Soon to change…)
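
The actual main page here is WordPress (so PHP), but the pattern is simple enough to sketch in Python. This assumes a memcached instance on the default local port and uses the pymemcache client; the key name, TTL, and render function are placeholders:

```python
from pymemcache.client.base import Client

memcache = Client(("127.0.0.1", 11211))  # assumes memcached running locally on the default port
CACHE_KEY = "front_page"                 # placeholder key for the generated main page
CACHE_TTL = 30                           # seconds before the page gets regenerated

def front_page():
    page = memcache.get(CACHE_KEY)
    if page is None:
        page = render_front_page()                       # the slow, database-heavy part
        memcache.set(CACHE_KEY, page, expire=CACHE_TTL)  # keep the result in RAM for a while
    return page

def render_front_page():
    # Stand-in for the real template and database work.
    return b"<html>...generated page...</html>"
```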

The other option, and one that may actually be preferable, is to just run the software normally, but stick it behind a cache. This might not be an instant fix, as I’m guessing the generated pages are tagged to not allow caching, but that can be fixed. (Aside: people seem to love setting huge expiry times for cached data, like having it cached for an hour. The main page here caches data for 30 seconds, which means that, worst case, the backend would be handling two pages a minute. Although if there were a network involved, I might bump it up or add a way to selectively purge pages from the cache.) squid is the most commonly-used one, but I’ve also heard interesting things about varnish, which was tailor-made for this purpose and is supposed to be a lot more efficient. There’s also pound, which seems interesting, but doesn’t cache on its own. varnish doesn’t yet support gzip compression of pages, which I think would be a major boost in throughput. (Although at the cost of server resources, of course… Unless you could get it working with a hardware gzip card!)
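
On the “tagged to not allow caching” point: the usual fix is just to send a friendlier Cache-Control header, so whatever sits in front (squid, varnish, or the browser) is allowed to keep the page for a short while. A rough sketch, using Flask purely for illustration and reusing the 30-second figure from above:

```python
from flask import Flask, make_response

app = Flask(__name__)

@app.route("/")
def front_page():
    resp = make_response("<html>...generated page...</html>")
    # Allow any cache between us and the viewer to reuse this response
    # for up to 30 seconds before going back to the backend.
    resp.headers["Cache-Control"] = "public, max-age=30"
    return resp
```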

But then I started thinking… That caching frontend doesn’t have to be local! Pick up a machine in another data center as a ‘reverse proxy’ for your site. Viewers hit that, and it keeps an up-to-date copy of the page in its cache. Pick up a cheap server when some host is having a sale and set it up.

But then, you can take it one step further, and pick up boxes to act as your caches in multiple data centers. One on the East Coast, one in the South, one on the West Coast, and one in Europe. (Or whatever your needs call for.) Use PowerDNS with GeoIP to direct viewers to the closest cache. (Indeed, this is what Wikipedia does: they have servers in Florida, the Netherlands, and Korea… DNS hands out the closest server based on where your IP is registered.) You can also keep DNS records with a fairly short TTL, so if one of the cache servers goes offline, you can just pull it from the pool and it’ll stop receiving traffic. You can also use the cache nodes themselves as DNS servers, to help make sure DNS is highly redundant.
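
I haven’t built this, but the GeoIP piece boils down to “map the viewer’s IP to the nearest node.” Here’s a toy sketch of that lookup in Python, using MaxMind’s geoip2 reader for illustration; the node addresses, database file, and continent-to-node mapping are all made up, and in practice a PowerDNS GeoIP backend would be doing the equivalent for you:

```python
import geoip2.database  # MaxMind's GeoIP2 reader, used here purely for illustration
import geoip2.errors

# One cache node per region (made-up documentation addresses).
NODES = {
    "NA": "192.0.2.10",  # East Coast box
    "EU": "192.0.2.20",  # European box
    "AS": "192.0.2.30",  # Asian box
}
DEFAULT_NODE = "192.0.2.10"

reader = geoip2.database.Reader("GeoLite2-Country.mmdb")  # hypothetical database path

def closest_node(client_ip: str) -> str:
    """Return the cache node for the continent the client's IP is registered in."""
    try:
        continent = reader.country(client_ip).continent.code
    except geoip2.errors.AddressNotFoundError:
        return DEFAULT_NODE
    return NODES.get(continent, DEFAULT_NODE)
```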

It seems to me that it’d be a fairly promising idea, although I think there are some potential kinks you’d have to work out. (Given that you’ll probably have 20-100ms latency in retrieving cache misses, do you set a longer cache duration? But then, do you have to wait an hour for your urgent change to get pushed out? Can you flush only one item from the cache? What about uncacheable content, such as when users have to log in? How do you monitor many nodes to make sure they’re serving the right data? Will ISPs obey your DNS TTLs? Most of these things have obvious solutions, really, but the point is that it’s not an off-the-shelf solution, but something you’d have to mold to fit your exact setup.)
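
As for flushing only one item: both squid and varnish can be configured to honor an HTTP PURGE request, so a selective purge across all the nodes can be a short script. A sketch, assuming the caches have been set up to accept PURGE from your address; the node list and hostname are placeholders:

```python
import urllib.error
import urllib.request

# The cache nodes fronting the site (placeholder addresses).
CACHE_NODES = ["192.0.2.10", "192.0.2.20", "192.0.2.30"]

def purge(path: str, host: str = "example.com") -> None:
    """Ask every cache node to drop its copy of a single URL."""
    for node in CACHE_NODES:
        req = urllib.request.Request(
            f"http://{node}{path}",
            method="PURGE",           # only honored if the cache is configured for it
            headers={"Host": host},   # which site's copy of the page to purge
        )
        try:
            urllib.request.urlopen(req, timeout=5)
        except urllib.error.HTTPError as err:
            print(f"{node}: HTTP {err.code}")  # e.g. 405 if PURGE isn't allowed

purge("/")  # push an urgent change to the front page out everywhere
```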

Aside: I’d like to put nginx, lighttpd, and Apache in a face-off. I’m reading good things about nginx.

Broken Windows

Last night we were unloading a shopping cart. When we finished, the place to return it was pretty far away. But there were about ten other shopping carts littering the parking lot nearby, so I said, “Meh, what’s one more?”

As we got in the car, I proclaimed, “Broken Windows in action!” I think people were confused and assumed I was referring to a literal broken window. Instead, I was referring to the Broken Windows Theory, which is an interesting read. The basic premise is that researchers watched an abandoned warehouse. For weeks, no one vandalized the building. One day, one of the researchers (deliberately) broke one of the windows. In short order, vandals knocked out the rest of the windows. The theory is used a lot in policing, but I think it has applications in many other places. Such as parking lots: if you’re diligent about bringing in carts, I’d argue that you’d avoid people doing what I did. (I also felt the same way at the bowling alley: if we frequently picked up candy wrappers and popcorn from the floor, the place seemed pretty clean. If we slacked, it felt like the place was being trashed by everyone in short order.)

The theory does have its detractors, but it also has strange people who see applications of it in parking lots. Enjoy the photo of chives; it has nothing to do with anything, but I just took it and I like it.

Chives

Obama Wins!

Ed.: Because the blogs have been slow, and because this is a hot topic, I’ve fudged the date on this to appear to have been published two days later, so it will stay on the main page a bit longer.

Obama Logo

It looks like Obama is the Democratic nominee, while Hillary Clinton, the woman who has twice alluded to Obama being assassinated (okay, the first time was a speaker at her event, not her), has conceded that she’d be open to running as his VP.

I’d be happier with an Obama-Richardson ticket, but people are calling Obama-Clinton the fastest way to try to heal the wounds this election cycle saw. In her defense, if she doesn’t get him assassinated, she’d make an excellent VP.

Needless to say, I’ll be watching the news tonight for what may be two very historic speeches: Obama’s victory speech and Hillary’s concession speech. (It seems like it was just weeks ago that Obama gave his “concession speech” that was anything but a concession speech, in New Hampshire, which led to the Yes We Can Song.)

The AP story is hot off the press, and many MSM outlets aren’t carrying it yet. Whether that’s because the polls don’t close for two hours, because it’s not factual, or just because MSM isn’t as obsessed with checking Google News as I am remains to be seen.

Update: It seems that Hillary hasn’t conceded quite yet. Honestly, I’m not sure how the AP is so sure that Obama’s won yet.

Update 2: USA Today has a good piece suggesting that, while Obama might do it tonight, it’s still about 30 delegates premature. And they also have this good article on exactly how the AP story was put together.

Update 3: You can follow the whole Google News thread.

Strange Antenna Challenge

You know those times when you decide to let yourself surf aimlessly? And an hour later, you have absolutely no idea how you got to where you did?

I found the K0S Strange Antenna Contest page from 2003, where some ham radio operators started using, well, strange things as antennas. Who’d think that a ladder works well? (No no, not ladder line, but an actual ladder.) In fact, after working some people off of a ladder, they got an even better idea, and stood several ladders up, using them to support a pair of extension ladders laid horizontally, forming a ladder dipole, with impressive results. Sadly, they report that combining two shopping carts to make a dipole did not get them any contacts, nor did a basketball hoop.

This has me wondering what else would work… An aluminum chain link fence? A railing? Train tracks? Power lines? (Kidding on that one. Please do not try to attach anything to power lines.) Curtain rods? A couple of cars? A section of guardrail? A metal lamppost?

I poked around the site some more to see if they did it in subsequent years. And they did. 2004, for example, saw my joke about using two cars come to fruition. (Okay, so they beat me to it by four years.) 2005 saw someone use a bronze statue, and, the next year, he was at it again with railroad tracks, albeit not full ones, but some sort of art exhibit / monument. (Aside: I’m pretty certain that trying to hook up a bunch of wires to train tracks would arouse a bit of suspicion from the police.) 2006 also saw a pair of exercise machines being used, with a note that they weren’t very effective, but also the apt comment, “On the other hand, we did in fact make two contacts with a pair of exercise machines standing only a few inches above the earth!” And, confusing everything I know about antennas, someone used a tree. And a football stadium (with a story about how the university police were initially slightly suspicious about someone getting out of their car and hooking wires up to the stadium for some reason). 2007 saw a bridge used as an antenna.

And 2008? Well, see, here’s the best thing. The 2008 Challenge is this weekend!

Of course, as a Technician-class licensee, I don’t have many HF privileges… The Technician license was (before the Morse code requirement was eliminated for all license classes) the only class that didn’t require a Morse code exam, so it’s somewhat ironic that almost all of the new HF privileges Techs were given are in the CW portions of various bands. I do get 28.3-28.5 MHz now, allowing SSB on HF…

Time to hit the books, I think. (I think mine–and that one–might be outdated, actually. Looks like the question pool got revised in 2007.) There are always sample exams online, and the feedback can be helpful. Study a bit and take an exam a day, and then review your answers. (Theoretically, actually, you could just learn the answers to each question without understanding the concepts, though that’s really missing the spirit and point of ham radio.)

So I Can Close the Tab

I came across Ken Rockwell’s site the other day and, as I browsed around, found his interesting mention of the Casio EX-F1. I’ve “graduated” from integrated point-and-shoots to digital SLRs, although this camera costs more than my digital SLR and three lenses put together.

Photographically, it’s mediocre. 6 megapixels. Except you don’t buy this thing for its resolution. You crave it because:

  • 60 frames per second at 6 megapixels. (Note that most movies are shot at 24 fps.)
  • “[S]tereo HDTV movies,” although I confess that I’m not quite sure what that means.
  • Continuous shooting mode, where it’s constantly shooting at 60fps and, when you hit the shutter, saves the frames around that moment. Thus, you can actually get shots from before you click the shutter. (See the sketch after this list.)
  • A maximum shutter speed of 1/40,000 second. That is not a typo.
  • 60 frames per second is ridiculous. But if you can take a cut in resolution, you can go further, all the way to 1,200 frames per second at a pitiful 336×96.
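
That pre-capture mode is easiest to picture as a ring buffer: the camera keeps overwriting the last second or so of frames, and pressing the shutter just freezes the buffer. A toy sketch of the idea (the buffer length and frame objects are invented for illustration, not Casio’s firmware):

```python
from collections import deque

FPS = 60
ring = deque(maxlen=FPS)  # holds roughly the last second of frames; oldest are discarded

def on_new_frame(frame):
    ring.append(frame)  # runs 60 times a second, whether or not you're "shooting"

def on_shutter_press():
    # Freeze and save everything captured just *before* the button was pressed.
    return list(ring)
```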

Actually, 336×96 isn’t just tiny, it’s a really weird size. I’ve resized (and cropped) a random photo of mine down to 336×96:

336×96 Pixels

In conclusion… 6 megapixel camera, with a long zoom lens equivalent to 36-432mm. And it’s an HDTV video camera. And it’s got the crazy bonus of letting you use shutter speeds of 1/40,000 of a second, and capture low-res video at 1,200 frames/second. I wouldn’t carry it as my main camera (though it would probably be entirely usable for that), but I’d love one of these in my bag for video and such.

I also wonder about the “trickle-down” effect. Although really, more like the “trickle-out” effect. Nikon’s D3 will give pretty clean shots at ISO 6400, something no other camera even tries to offer. It goes up to ISO 25,600. Canon and Nikon are very close when it comes to the frames-per-second rate of their high-end digital SLRs: 7-9 frames/second. (Hint: get rid of the shutter, which is useless on a digital camera where you can just “read” the sensor for a given period of time.)

Companies keep focusing on packing more and more megapixels into smaller and smaller sensors. As I’ve said before, I have a 20×30″ print from my 6-megapixel camera. (Cropped a bit, too, actually.) I only “upgraded” to my 10-megapixel XTi because the old one broke and you can’t buy a 6-megapixel SLR anymore. Maybe, just maybe, we’ve seen an end to the megapixel arms race. We exceeded the resolution you could squeeze out of film a long time ago, and now we’re giving medium format a run for its money. When I go to buy a new SLR in maybe five years, I don’t want it to be more than 10 megapixels. But I hope that it goes a lot further than 6 megapixels. And if a “prosumer” point-and-shoot camera does 60 frames per second at full resolution, all of a sudden 3 frames per second on an SLR looks pathetic. Similarly, I’m unaware of any still camera (aside from maybe weird scientific/engineering stuff) that will take a 1/40,000-second exposure, or any flash that’s capable of running at 7 frames per second.

(That said, I’m having a hard time figuring out when you’d need a 1/40,000-second exposure. I only hit my camera’s 1/2,000-second limit when I’m too lazy to stop the lens down…)