Location Error vs. Time Error

This post christens my newest category, Thinking Aloud. It’s meant to house random thoughts that pop into my head, versus fully fleshed-out ideas. Thus it’s meant more as an invitation for comments than something factual or informative, and is likely full of errors…

Aside from “time geeks,” those who deal with it professionally, and those intimately familiar with the technical details, most people are probably unaware that each of the GPS satellites carries an atomic clock on board. This is necessary because the system works, in a nutshell, by triangulating your position from various satellites, and an integral detail is knowing precisely when each satellite sent its signal and where it was at that moment. More precise time means a more precise location, and there’s not much margin for error here. The GPS satellites are also synchronized daily to the “main” atomic clock (actually a bunch of atomic clocks based on a few different standards), so the net result is that the time from a GPS satellite is accurate down to the nanosecond level: they’re within a few billionths of a second of the true time. Of course, GPS units, since they don’t cost millions of dollars, rarely output time this accurately, so even the best units seem to have “only” microsecond accuracy, or time down to a millionth of a second. Still, that’s pretty darn precise.
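
To put those numbers in perspective: radio signals travel at roughly the speed of light, about 30 centimeters per nanosecond, so clock error and ranging error are nearly interchangeable. A quick back-of-the-envelope sketch (my own arithmetic, nothing GPS-specific):

    C = 299_792_458  # speed of light, m/s

    def range_error_m(clock_error_s: float) -> float:
        """Distance error implied by a given clock error."""
        return C * clock_error_s

    print(range_error_m(1e-9))  # 1 nanosecond  -> ~0.3 m
    print(range_error_m(1e-6))  # 1 microsecond -> ~300 m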

Thus many–in fact, most–of the stratum 1 NTP servers in the world derive their time from GPS, since it’s now pretty affordable and incredibly accurate.

The problem is that GPS isn’t perfect. Anyone with a GPS probably knows this. It’s liable to be anywhere from a foot off to something like a hundred feet off. This server (I feel bad linking, having just seen what colocation prices out there are like) keeps a scatter plot of its coordinates as reported by GPS. This basically shows the random noise (some would call it jitter) of the signal: the small inaccuracies in GPS are what result in the fixed server seemingly moving around.

We know that an error in location will also cause (or, really, is caused by) an error in time, even if it’s minuscule.

So here’s the wondering-aloud part: we know that the server is not moving. (Or at least, we can reasonably assume it’s not.) So suppose we define one position as “right,” and any deviation from it as inaccurate. We could do what they did with Differential GPS and “precision-survey” the location, which would be very expensive. But we could also go the cheap way and just take an average. It looks like the center of that scatter graph is around -26.01255, 28.11445. (Unless I’m being dense, that graph seems ‘sideways’ from how we typically view a map, but I digress. The latitude was also stripped of its sign, which put it in Egypt… But again, I digress.)

So suppose we just defined that as the “correct” location, since it’s a good central value. Could we not write code to take the difference in reported location and translate it into a shift in time? Say that six meters east is the same as running 2 microseconds fast? (Totally arbitrary example.) I think the complicating factor wouldn’t be whether it’s possible, but knowing what to use as the ‘true’ reference, since if you picked an inaccurate assumed-accurate location, you’d essentially be introducing error, albeit a constant one. The big question, though, is whether it’s worth it: GPS is quite accurate as it is. I’m a perfectionist, so there’s no such thing as “good enough” time, but I have to wonder whether the benefit would even show up.
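
If I were to try it, the naive version might look something like the sketch below: average a pile of reported fixes to define the “correct” spot, then turn each new fix’s deviation into an equivalent time offset. The flat-earth conversion factors are mine, and a real receiver solves for time and position jointly, so treat this as a sketch of the idea rather than something that would actually discipline a clock.

    import math

    C = 299_792_458          # speed of light, m/s
    M_PER_DEG_LAT = 111_320  # rough meters per degree of latitude

    def mean_fix(fixes):
        """Average a list of (lat, lon) fixes to define the 'correct' location."""
        lat = sum(f[0] for f in fixes) / len(fixes)
        lon = sum(f[1] for f in fixes) / len(fixes)
        return lat, lon

    def time_offset_s(fix, reference):
        """Translate a position deviation into an equivalent one-way time error."""
        dlat_m = (fix[0] - reference[0]) * M_PER_DEG_LAT
        dlon_m = (fix[1] - reference[1]) * M_PER_DEG_LAT * math.cos(math.radians(reference[0]))
        return math.hypot(dlat_m, dlon_m) / C

    fixes = [(-26.01250, 28.11440), (-26.01260, 28.11450), (-26.01255, 28.11445)]
    ref = mean_fix(fixes)                              # stands in for the scatter plot's center
    print(time_offset_s((-26.01260, 28.11450), ref))   # a few dozen nanoseconds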

Building an Improvised CDN

From my “Random ideas I wish I had the resources to try out…” file…

The way the “pretty big” sites work is that they have a cluster of servers… A few are database servers, many are webservers, and a few are front-end caches. The theory is that the webservers do the ‘heavy lifting’ to generate a page… But many pages, such as the main page of a news site, Wikipedia, or even these blogs, don’t need to be generated every time; the main page only updates every now and then. So you have a caching server, which basically handles all of the connections. If the page is in cache (and still valid), it’s served right then and there. If the page isn’t in cache, the cache gets it from the backend servers, serves it up, and then adds it to the cache.
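
In sketch form, the frontend’s whole job is roughly this (a toy example; the TTL, the in-memory store, and the backend URL handling are all placeholders):

    import time
    import urllib.request

    CACHE = {}   # url -> (expires_at, body)
    TTL = 30     # seconds a cached page stays "valid"

    def serve(url: str) -> bytes:
        """Serve from cache if fresh; otherwise hit the backend and remember the result."""
        hit = CACHE.get(url)
        if hit and hit[0] > time.time():
            return hit[1]                              # cache hit: no backend work at all
        body = urllib.request.urlopen(url).read()      # cache miss: ask a backend server
        CACHE[url] = (time.time() + TTL, body)
        return body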

The way the “really big” sites work is that they have many data centers across the country and your browser hits the closest one. This improves load times and adds redundancy (data centers do periodically go offline: The Planet went down just last week when a transformer inside blew up and the fire marshals made them shut down all the generators). Depending on whether they’re filthy rich or not, they’ll either use GeoIP-based DNS or have elaborate routing going on. Many companies offer these services, by the way. It’s called a CDN, or Content Delivery Network. Akamai is the most obvious one, though you’ve probably used Limelight before, too, along with some other less-prominent ones.

I’ve been toying with SilverStripe a bit, which is very spiffy, but it has one fatal flaw in my mind: its out-of-box performance is atrocious. I was testing it on a VPS I hadn’t used before, so I don’t have a good frame of reference, but I got between 4 and 6 pages/second under benchmarking. That was after I turned on MySQL query caching and installed APC. Of course, I was using SilverStripe to build pages that would probably stay unchanged for weeks at a time. The 4-6 pages/second is similar to how WordPress behaved before I worked on optimizing it. For what it’s worth, static content (that is, stuff that doesn’t require talking to databases and running code) can be served at 300-1,000 pages/second on my server, per some benchmarks I ran.

There were two main ways I thought of to enhance SilverStripe’s performance. (Well, a third option, too: realize that no one will visit my SilverStripe site and leave it as-is. But that’s no fun.) The first is to ‘fix’ SilverStripe itself. With WordPress, I tweaked MySQL and set up APC (which gave a bigger boost than it did with SilverStripe, but still not a huge gain). But then I ended up coding the main page from scratch, and it uses memcache to store the generated page in RAM for a period of time. Instantly, benchmarking showed that I could handle hundreds of pages a second on the meager hardware I’m hosted on. (Soon to change…)
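
The memcache trick is roughly the following, sketched in Python rather than the PHP I actually used; generate_page() is a hypothetical stand-in for the expensive CMS/database work, and the address is wherever your memcached listens:

    from pymemcache.client.base import Client   # assumes the pymemcache library

    mc = Client(("127.0.0.1", 11211))

    def generate_page() -> bytes:
        """Hypothetical stand-in for the real page-building work."""
        return b"<html>...</html>"

    def cached_page(key: str, ttl: int = 30) -> bytes:
        """Return the generated page, regenerating it at most once per ttl seconds."""
        page = mc.get(key)
        if page is None:
            page = generate_page()
            mc.set(key, page, expire=ttl)
        return page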

The other option, and one that may actually be preferable, is to just run the software normally, but stick it behind a cache. This might not be an instant fix, as I’m guessing the generated pages are tagged to not allow caching, but that can be changed. (Aside: people seem to love setting huge expiry times for cached data, like having it cached for an hour. The main page here caches data for 30 seconds, which means that, worst case, the backend would be handling two pages a minute. Although if there were a network involved, I might bump it up or add a way to selectively purge pages from the cache.) squid is the most commonly used cache, but I’ve also heard interesting things about varnish, which was tailor-made for this purpose and is supposed to be a lot more efficient. There’s also pound, which seems interesting but doesn’t cache on its own. varnish doesn’t yet support gzip compression of pages, which I think would be a major boost in throughput. (Although at the cost of server resources, of course… Unless you could get it working with a hardware gzip card!)
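
The “tagged to not allow caching” part usually just comes down to response headers: if the application sends something like Cache-Control: no-cache, a well-behaved proxy refuses to store the page. Whatever the stack, the fix amounts to sending a short, explicit lifetime instead; a minimal sketch of the kind of headers I mean (the 30-second figure mirrors what I do here):

    def cache_headers(seconds: int = 30) -> dict:
        """Headers telling a shared cache (squid, varnish, etc.) it may keep the page briefly."""
        return {"Cache-Control": f"public, s-maxage={seconds}, max-age={seconds}"}

    # Worst case with a 30-second lifetime: the backend regenerates the page
    # about twice a minute, no matter how much traffic the frontend absorbs.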

But then I started thinking… That caching frontend doesn’t have to be local! Pick up a machine in another data center to act as a ‘reverse proxy’ for your site. Viewers hit that, and it keeps an up-to-date copy of your pages in its cache. Grab a cheap server the next time some host is having a sale and set it up.

But then, you can take it one step further, and pick up boxes to act as your caches in multiple data centers. One on the East Coast, one in the South, one on the West Coast, and one in Europe. (Or whatever your needs call for.) Use PowerDNS with GeoIP to direct viewers to the closest cache. (Indeed, this is what Wikipedia does: they have servers in Florida, the Netherlands, and Korea… DNS hands out the closest server based on where your IP is registered.) You can also keep DNS records with a fairly short TTL, so if one of the cache servers goes offline, you can just pull it from the pool and it’ll stop receiving traffic. You can also use the cache nodes themselves as DNS servers, to help make sure DNS is highly redundant.
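
Stripping away the actual PowerDNS plumbing, the per-query decision the GeoIP trick makes is roughly this (node names, addresses, and the continent mapping are all made up for illustration; the real lookup would be done by the DNS server against a geolocation database):

    # Hypothetical node list; in reality this lives in the DNS server's config.
    NODES = {
        "us-east": "192.0.2.10",
        "us-west": "192.0.2.20",
        "europe":  "192.0.2.30",
    }
    HEALTHY = set(NODES)   # pull a node out of this set and it stops getting traffic

    def pick_node(client_continent: str) -> str:
        """Return the closest healthy cache node's IP, falling back to any healthy one."""
        preferred = "europe" if client_continent == "EU" else "us-east"
        if preferred in HEALTHY:
            return NODES[preferred]
        return NODES[next(iter(HEALTHY))]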

It seems to me that it’d be a fairly promising idea, although I think there are some potential kinks you’d have to work out. (Given that you’ll probably have 20-100 ms of latency in retrieving cache misses, do you set a longer cache duration? But then, do you have to wait an hour for your urgent change to get pushed out? Can you flush only one item from the cache? What about uncacheable content, such as when users have to log in? How do you monitor many nodes to make sure they’re serving the right data? Will ISPs obey your DNS’s TTL records? Most of these things have obvious solutions, really, but the point is that it’s not an off-the-shelf solution; it’s something you’d have to mold to fit your exact setup.)

Aside: I’d like to put nginx, lighttpd, and Apache in a face-off. I’m reading good things about nginx.

Windows Login, Verbose Mode

I made a bunch of changes all at once, and suddenly my system froze when I tried to log in, just saying “Loading your personal settings…”

For a long time, I’ve wanted Windows to show me exactly what it was doing, since “Loading your personal settings…” means nothing. Is it choking on a config file? Trying to reconnect to the network share that doesn’t exist anymore? Is my new anti-virus software conflicting with the old?

I’m still not entirely satisfied, but it turns out that Windows does support extended messages in the login dialog: under HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Policies\System, create a DWORD called “VerboseStatus” and set it to 1. (And, per some of the online guides, make sure you don’t have a “DisableStatusMessages” value, or at least make sure it’s set to 0.)
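
If you’d rather script it than click through regedit, a sketch using Python’s winreg module (run from an elevated prompt) would look something like this:

    import winreg

    # Enable verbose status messages during login/logout.
    key_path = r"Software\Microsoft\Windows\CurrentVersion\Policies\System"
    with winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE, key_path) as key:
        winreg.SetValueEx(key, "VerboseStatus", 0, winreg.REG_DWORD, 1)
        # Per the guides, DisableStatusMessages (if present) should be 0.
        winreg.SetValueEx(key, "DisableStatusMessages", 0, winreg.REG_DWORD, 0)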

Now, instead of getting “Loading your personal settings…” I can see exactly what file it’s loading. To be honest, that wasn’t at all helpful in this particular case, but it’s a setting I’m going to leave on.

As an aside, don’t ever run two anti-virus programs at once. I’m pretty sure that’s the problem. Safe Mode doesn’t let you remove software (stupid! stupid! That’s why I needed to get into Safe Mode), but I remembered the old msconfig (Start -> Run -> “msconfig”), where I was able to disable both anti-virus applications, along with some other services that I really don’t need running in the background anyway. And now it works like a charm.

Strange Antenna Challenge

You know those times when you decide to let yourself surf aimlessly? And an hour later, you have absolutely no idea how you got to where you did?

I found the K0S Strange Antenna Contest page from 2003, where some ham radio operators started using, well, strange things as antennas. Who’d think that a ladder works well? (No no, not ladder line, but an actual ladder.) In fact, after working some people off of a ladder, they got an even better idea, and stood several ladders up, using them to support a pair of extension ladders laid horizontally, forming a ladder dipole, with impressive results. Sadly, they report that combining two shopping carts to make a dipole did not get them any contacts, nor did a basketball hoop.

This has me wondering what else would work… An aluminum chain link fence? A railing? Train tracks? Power lines? (Kidding on that one. Please do not try to attach anything to power lines.) Curtain rods? A couple of cars? A section of guardrail? A metal lamppost?

I poked around the site some more, to see if they did it in subsequent years. And they did. 2004, for example, saw my joke about using two cars come to fruition. (Okay, so they beat me to it by four years.) 2005 saw someone use a bronze statue, and, the next year, he was at it again with railroad tracks, albeit not tracks in actual service, but some sort of art exhibit / monument. (Aside: I’m pretty certain that trying to hook a bunch of wires up to real train tracks would arouse a bit of suspicion from the police.) 2006 also saw a pair of exercise machines being used, with a note that they weren’t very effective, but also the apt comment, “On the other hand, we did in fact make two contacts with a pair of exercise machines standing only a few inches above the earth!” And, confusing everything I know about antennas, someone used a tree. And a football stadium (with a comment about how the university police were initially slightly suspicious about someone getting out of their car and hooking wires up to the stadium for some reason). 2007 saw a bridge used as an antenna.

And 2008? Well, see, here’s the best thing. The 2008 Challenge is this weekend!

Of course, as a Technician-class licensee, I don’t have many HF privileges… The Technician license was (before the Morse code requirement was eliminated for all license classes) the only class that didn’t require a Morse code exam, so it’s somewhat ironic that almost all of the new HF privileges Techs were given are in the CW portions of various bands. I do get 28.3-28.5 MHz now, allowing SSB on HF…

Time to hit the books, I think. (I think mine–and that one–might be outdated, actually. Looks like the question pool got revised in 2007.) There are always sample exams online, and the feedback can be helpful. Study a bit and take an exam a day, and then review your answers. (Theoretically, actually, you could just learn the answers to each question without understanding the concepts, though that’s really missing the spirit and point of ham radio.)

AIM

I frankly don’t use AIM that much these days, but will often sign on and think, “Wow, lots of people are on tonight!” or, “Wow, almost no one is on tonight!” So I just wanted to list my thought process after noticing this:

  1. I’d be interested in seeing a graph of my “buddies” online over time.
  2. It wouldn’t be too hard to write a little script to sit on AIM 24/7 and watch this.
  3. If I was doing that, I might as well log each time someone signed on and off, which would let me answer those “I wonder if x has been online at all lately?” questions.
  4. As long as I have a stalker bot going, it’d be even more interesting to grab their away message text and buddy profile.
  5. And as long as I’m doing that, I might as well add support for using diff to show changes in the above between any two points in time. (A rough sketch of this follows the list.)
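
Here’s a rough sketch of items 3 through 5, with the part that actually speaks AIM waved away behind a hypothetical fetch_profiles() call (that’s the hard half; the logging and diffing are easy):

    import difflib
    import time

    def fetch_profiles() -> dict:
        """Hypothetical: return {screen_name: away message + profile text} for everyone online."""
        raise NotImplementedError("this is the part that would actually speak AIM")

    snapshots = []   # list of (timestamp, {screen_name: text})

    def take_snapshot():
        """Run this every few minutes from the 24/7 stalker bot."""
        snapshots.append((time.time(), fetch_profiles()))

    def diff_buddy(name: str, then: int, now: int) -> str:
        """Show how a buddy's away message/profile changed between two snapshots."""
        old = snapshots[then][1].get(name, "").splitlines()
        new = snapshots[now][1].get(name, "").splitlines()
        return "\n".join(difflib.unified_diff(old, new, lineterm=""))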

Is there anything that can’t be graphed? Or made into a shell script?

Big Iron

I keep coming across things like this eBay listing. Sun Enterprise 4500, 12 SPARC processors (400 MHz, 4MB cache) and 12 GB of RAM. This one looks to have a couple Gigabit fiber NICs, too. (Although it’s fiber, so you’d need a pricier switch to use it on a “normal” copper home LAN.)

Even if you foolishly assume that a 400 MHz SPARC is no better than a 400 MHz Celeron, with 12 processors, this is still a net of 4.8 GHz. With a dozen processors, this is clearly best for something that’s very multi-threaded.

Of course, there’s one problem: these machines use SCSI disks. SCSI’s great and all, but it’s expensive, and you can be sure that, if this machine even comes with hard drives (none are listed?), they’re 9GB. So pick up one of these. What’s that you say? Oh, it’s ATA and won’t work with SCSI? No problem!

Nowhere that I see does Sun mention whether Solaris 10 / OpenSolaris will run on older hardware, but I assume it will. Some Linux distros also excel at running on platforms like SPARC.

Now the real question: how much electricity does this thing use?

Tip o’ the Day

The Web Developer toolbar, which is (1) the #1 hit on Google for “Web Developer,” and (2) now compatible with Firefox 3 beta, is totally awesome. You may recall that, in the past, if you had text after a bulleted list or similar on this page, the text would suddenly be mashed together. I never took the time to fully look into it, but it always irked me.

A quick “Outline… Outline Block Level Elements” drew colored boxes around each element of the page, which was exceptionally helpful. This showed the problem: posts start off inside a <p> tag, and adding a list or similar closes the <p> tag. This would have been an easy catch, except that the list looked fine; on closer review, that’s because the lists specified the same line-spacing, so they looked right even though the markup wasn’t. While I most likely could have solved this by staring at the code for a long time, Web Developer made it much easier to spot: the first bit of text was inside one box, followed by the list, but the text after it was floating outside, leading to a quick “Oh, I should look at how the <div> is set up” thought, which ended up being exactly the problem. (There’s a bit of excessive space now, but that’s caused by me using PHP to inject linebreaks.)

Web Developer also includes a lot of other useful tools, including the ability to edit the HTML of the page you’re viewing, view server headers, resize non-resizeable frames, show page comments, change GETs to POSTs and vice versa, and much more. Whether you do design full-time or just occasionally fix things, it’s worth having. And you can’t beat the fact that it’s free.

Web Compression

I’ve alluded before to using gzip compression on a webserver. HTML is very compressible, so servers moving tremendous amounts of text/HTML would see a major reduction in bandwidth. (Images and such would not see much of a benefit, as they’re already compressed.)

As an example, I downloaded the main page of Wikipedia, retrieving only the HTML and none of the supporting elements (graphics, stylesheets, external JavaScript). It’s 53,190 bytes. (This, frankly, isn’t a lot.) After running it through “gzip -9” (strongest compression), it’s 13,512 bytes, just shy of a 75% reduction in size.
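
The measurement is easy to reproduce; a quick sketch, assuming you’ve saved the page’s HTML to a local file (the filename here is made up):

    import gzip
    from pathlib import Path

    html = Path("wikipedia_main_page.html").read_bytes()    # assumed filename
    packed = gzip.compress(html, compresslevel=9)            # same idea as "gzip -9"

    print(len(html), len(packed), 1 - len(packed) / len(html))
    # For my copy: 53,190 bytes -> 13,512 bytes, just shy of a 75% reduction.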

There are a few problems with gzip, though:

  • Not all clients support it, although frankly, I think most do. This isn’t a huge deal, though, as the client and server “negotiate” the content encoding, so it’ll only be used if it’s supported. (There’s a tiny sketch of that negotiation after this list.)
  • Not all servers support it. I don’t believe IIS supports it at all, although I could be wrong. Apache/PHP will merrily do it, but it has to be enabled, which means that lazy server admins won’t turn it on.
  • Although it really shouldn’t work this way, it looks to me as if the server will buffer the whole page, compress it, and then send it. (gzip does support ‘streaming’ compression, working in blocks.) Thus if you have a page that’s slow to generate (e.g., it runs complex database queries that can’t be cached), it will feel even worse: users get a blank page and then the whole thing suddenly appears in front of them.
  • There’s overhead involved, so it looks like some admins keep it off due to server load. (Aside: it looks like Wikipedia compresses everything, even dynamically-generated content.)
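
The negotiation from the first bullet boils down to checking one request header; a minimal sketch of the server’s side of it:

    import gzip

    def negotiate(body: bytes, accept_encoding: str = ""):
        """Compress only if the client advertised gzip in its Accept-Encoding header."""
        if "gzip" in accept_encoding.lower():
            return gzip.compress(body, compresslevel=6), {"Content-Encoding": "gzip"}
        return body, {}   # clients that didn't ask just get plain HTML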

But I’ve come across something interesting… A Hardware gzip Compression Card, apparently capable of handling 3 Gbits/second. I can’t find it for sale anywhere, nor a price mentioned, but I think it would be interesting to set up a sort of squid proxy that would sit between clients and the back-end servers, seamlessly compressing outgoing content to save bandwidth.

The Dream Network

Periodically I come across deals for computers that are very tempting. I’m not necessarily in the market right away: I’m going to keep my laptop until I’ve been working long enough that I can afford something stellar. It’s silly to “upgrade” a little bit. But every time I see these deals, I think of the various ways I could set things up… My “ideal (but realistic) computer” would actually be a network:

  • Network infrastructure: Gigabit Ethernet, switched, over Cat6. 10GigE and fiber are cool, but really not worth the cost for a home network.
  • A server machine. It needn’t be anything too powerful, and could (should) be something that doesn’t use a ton of electricity. The machine would run Linux and serve multiple roles:
    • Fileserver. It’d have a handful (4-6?) of 500GB disks, running RAID. While performance is important, it’s important to me that this thing be very ‘safe’ and not lose data. (Actually, in a very ideal setup, there’d be two fileservers for maximum redundancy, but my goal with this setup is to be reasonable. What interests me, though, is that I think it’d be possible to use an uncommon but awesome network file system like Coda or AFS, but also have some network shares on top of that service that ‘look normal,’ so Windows could just connect to an M: drive or whatnot, merrily oblivious to the fact that the fileserver is actually a network of two machines.) It’s important that the machine have gobs of free space, so that I can rip every CD and DVD I own, save every photo I take, and back up my computers, without ever worrying about being almost out of disk space. It’s also important to be hyper-organized here, and have one “share” for music, one “share” for photos I’ve taken, etc.
    • Internet gateway. It’d act as my router/firewall to the Internet, and also do stuff like DNS caching. It may or may not serve as a caching proxy; I tend to only notice caches when they act up, but then again, it might be quite helpful.
    • Timeserver. For about $100 you can get a good GPS with PPS (pulse-per-second) output and keep time down to a microsecond. Hook it up to the serial port of this machine, and have your local machine sync to that for unnecessarily accurate time. (Actually, it looks like you can do PTP in software with reasonable accuracy?)
    • Asterisk machine, potentially taking in an analog phone line and also VoIP services, and giving me a nice IP-based system to use, blending them all so it’s transparent how they’re coming in. It would also do stuff like voicemail, call routing/forwarding, etc. For added fun, it could be made to do faxes: receive them and save them as a PDF, and act as a “printer” for outgoing faxes. The code’s there to do this already.
    • Printserver. If you have multiple machines, it’s best to hang your printer(s) off of an always-on server. It could speak CUPS or the like to Linux, and simultaneously share the printer for Windows hosts.
    • MythTV backend? But most likely not; I’d prefer to offload that to a more powerful machine, rather than bogging down a server.
  • Primary desktop. Surprisingly, a quad-core system, 4 GB of RAM, and a 24″ LCD can be had for around $1,000 these days. That’s all I need in a system. I have my Logitech G15, which is all the keyboard I need. My concern is with what to run… These days I make use of Windows and Linux pretty heavily. I think virtualization will be mature enough by the time I’m actually going for a setup like this to allow me to get a Linux-based Xen host and run Windows inside of a virtual machine with no performance degradation. (This is actually mostly possible already, but as Andrew will attest, Xen can still have some kinks….) The system should have a big monitor. It’d be interesting to put something like an 8GB solid-state drive in it and use that for a super-fast boot, but the jury’s still out on whether it’s worthwhile. (I guess that some places are pushing SSD under some special name to make Windows boot instantly, but the reviews I’ve heard suggest that it gives a nominal improvement at best.)
  • Secondary desktop. Pay attention for a while to the short bursts of time when you can’t use your computer. The system locks up for a bit, or it’s just unbearably slow while the disks spin up and get a massive file, or you have to reboot, or you’re playing a full-screen game and die and wait 15 seconds to respawn, or….. In this “ideal setup,” I’d have a second machine. It needn’t be anything special; in fact, it could be the cheapest machine possible. It’d basically run Firefox, AIM/IRC, Picasa (off of the network fileserver), iTunes, and the like. For the sake of completeness, it should probably run whatever the other system doesn’t, out of Linux, XP, and Vista.

Google Charts

Have you guys seen Google Charts? It’s a quirky little API I didn’t know existed until I saw a passing allusion to it. You pass it a specially-crafted URL (which is pretty much the definition of an API) and it generates a PNG image.

Here’s a fairly random line graph. My labeling of the axes makes no sense, but it was nonsensical data anyway.

One of the cooler things they support is a “map” type of chart, like this US map. The URL is a bit tricky, though this one of South America is easier to understand: chco (presumably “CHart COlor”) sets the colors, with the first being the ‘default’ color, and chld lists the countries that map onto those colors: UYVECO is Uruguay, Venezuela, and Colombia.
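
Assembling one of those URLs is really just string-building. Here’s a sketch of roughly how the South America example might be put together; the chco and chld parameters follow the description above, while the base URL, chart size, and data string are my guesses from the documented examples, so double-check them against the API docs:

    from urllib.parse import urlencode

    params = {
        "cht": "t",                      # "t" is the map chart type
        "chtm": "south_america",         # which map to draw
        "chs": "440x220",                # image size in pixels
        "chco": "f5f5f5,edf0d4,13390a",  # default color, then the gradient endpoints
        "chld": "UYVECO",                # Uruguay, Venezuela, Colombia
        "chd": "s:ATf",                  # simple-encoded data values, one per country
    }
    print("http://chart.apis.google.com/chart?" + urlencode(params))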

What has me particularly interested is that I’ve recently installed code to watch connections to my NTP servers. Here’s my Texas box, a stratum 2 server in the NTP pool (pool.ntp.org). I bumped it up to list a 10 Mbps connection speed to signal that I could handle a lot more queries than the average, although it’s still nowhere near its limit. In addition to the stats you see there, it keeps a “dump file” of every single connection. (As an aside, this strikes me as inefficient and I want to write an SQL interface to keep aggregate stats… But that’s very low-priority right now.)

Further, I have some IPGeo code I played with. More than meets the eye, actually: a separate database can give a city/state in addition to the country. (It comes from the free MaxMind GeoLite City database.) Thus I could, in theory, parse the log file created, match each IP to a state, and plot that on a US map.
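
A sketch of that pipeline, using today’s tooling rather than what I was actually playing with: the geoip2 library and a GeoLite2-City database instead of the old GeoLite City files, and an assumed dump-file format with a client IP at the start of each line:

    from collections import Counter

    import geoip2.database   # pip install geoip2; reads the free GeoLite2-City database
    import geoip2.errors

    def clients_by_state(dump_path: str, db_path: str = "GeoLite2-City.mmdb") -> Counter:
        """Count clients per US state from a dump file with one client IP per line."""
        states = Counter()
        with geoip2.database.Reader(db_path) as reader, open(dump_path) as dump:
            for line in dump:
                parts = line.split()
                if not parts:
                    continue
                try:
                    rec = reader.city(parts[0])
                except geoip2.errors.AddressNotFoundError:
                    continue
                if rec.country.iso_code == "US" and rec.subdivisions.most_specific.iso_code:
                    states[rec.subdivisions.most_specific.iso_code] += 1
        return states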

This reminds me that I never posted… I set up NTP on the second server Andrew and I got, where we’re intending to consolidate everything, but haven’t had time yet. It sat unused for a while, keeping exceptionally good time. So, with Andrew’s approval, I submitted it to the NTP pool. I set the bandwidth to 3 Mbps, lower than the 10 Mbps my Texas box is at.

I was somewhat surprised to see it handling significantly more NTP queries. (The stats aren’t online, since the box isn’t running a webserver, but for those in the know, ~/ntp/ntp_clients_stats | less produces the same type of output seen here.) It turns out that a flaw in the IP-geolocation code that assigns servers to the pool’s ‘zones’ somehow decided our server was in Brazil. Strangely, while the United States has 550 servers in the pool (at last count), South America has 16. Thus I got a much greater share of the traffic. It’s still low: at its peak it looks like we might use 2GB of bandwidth.

So there are a few graphs I think would be interesting:

  • A line graph of the number of clients served over time. Using Google Charts would save me from having to deal with RRDTool / MRTG.
  • A map of South American countries, colored to show which of the countries are querying the server most frequently. (The same could be done for my US server, on a state-by-state basis.)