Location Error vs. Time Error

This post christens my newest category, Thinking Aloud. It’s meant to house random thoughts that pop into my head rather than fully fleshed-out ideas, so consider it more an invitation for comments than anything factual or informative, and it’s likely full of errors…

Aside from “time geeks,” those who deal with it professionally, and those intimately familiar with the technical details, most people are probably unaware that each of the GPS satellites carries an atomic clock on board. This is necessary because of how the system works: in a nutshell, it triangulates your position from several satellites, and an integral detail is knowing precisely where each satellite is at a given time. More precise time means a more precise location, and there’s not much margin for error here. The GPS satellites are also synchronized daily to the “main” atomic clock (actually an ensemble of atomic clocks based on a few different standards), so the net result is that the time from a GPS satellite is accurate down to the nanosecond level: within a few billionths of a second of the true time. Of course, GPS units, since they don’t cost millions of dollars, rarely output time this accurately, so even the best units seem to have “only” microsecond accuracy, or time down to a millionth of a second. Still, that’s pretty darn precise.
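To put those numbers in perspective: the signals travel at the speed of light, which covers roughly 30 centimeters per nanosecond, so a clock error maps directly onto a ranging error. Here’s a quick back-of-the-envelope sketch in Python; the speed of light is the only real constant, and the sample error values are just illustrative.

```python
# Back-of-the-envelope: how a clock error translates into a ranging error.
# C is the speed of light in a vacuum; the sample errors are arbitrary.
C = 299_792_458  # meters per second

for label, seconds in [("1 nanosecond", 1e-9),
                       ("100 nanoseconds", 100e-9),
                       ("1 microsecond", 1e-6)]:
    print(f"{label:>15}: ~{C * seconds:,.1f} meters of range error")
```

That works out to about 0.3 meters per nanosecond and roughly 300 meters per microsecond, which is why the satellites need clocks that good.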

Thus many (in fact, most) of the stratum 1 NTP servers in the world derive their time from GPS, since it’s now pretty affordable and incredibly accurate.

The problem is that GPS isn’t perfect. Anyone with a GPS unit probably knows this: it’s liable to be anywhere from a foot off to something like a hundred feet off. This server (I feel bad linking, having just seen what colocation prices out there are like) keeps a scatter plot of its coordinates as reported by GPS. It basically shows the random noise (some would call it jitter) in the signal: the small inaccuracies in GPS make a server that isn’t moving appear to wander around.

We know that an error in location will also cause (or, really, is caused by) an error in time, even if it’s minuscule.

So here’s the wondering-aloud part: we know that the server is not moving. (Or at least, we can reasonably assume it isn’t.) So suppose we define one position as “right,” and treat any deviation from it as error. We could do what they do with Differential GPS and precision-survey the location, which would be very expensive. But we could also go the cheap route and just take an average of the reported fixes. It looks like the center of that scatter graph is around -26.01255, 28.11445. (Unless I’m being dense, that graph seems ‘sideways’ from how we typically view a map, but I digress. The latitude was also stripped of its sign, which would put the server in Egypt… but again, I digress.)
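Here’s a minimal sketch of that cheap approach, assuming we’ve been logging (latitude, longitude) fixes from a receiver we know isn’t moving. The fixes list below is made up, and the degrees-to-meters conversion is only a rough flat-earth approximation near the reference latitude:

```python
import math

# Hypothetical log of GPS fixes from a receiver that isn't actually moving.
fixes = [
    (-26.012548, 28.114446),
    (-26.012551, 28.114459),
    (-26.012556, 28.114441),
    (-26.012549, 28.114452),
]

# The "cheap" reference position: just the arithmetic mean of the fixes.
ref_lat = sum(lat for lat, _ in fixes) / len(fixes)
ref_lon = sum(lon for _, lon in fixes) / len(fixes)

# Rough conversion from degrees to meters near the reference latitude.
M_PER_DEG_LAT = 111_320.0
m_per_deg_lon = M_PER_DEG_LAT * math.cos(math.radians(ref_lat))

# How far each fix scatters from the averaged position, in meters.
for lat, lon in fixes:
    north = (lat - ref_lat) * M_PER_DEG_LAT
    east = (lon - ref_lon) * m_per_deg_lon
    print(f"offset: {east:+.2f} m east, {north:+.2f} m north")

print(f"reference position: {ref_lat:.6f}, {ref_lon:.6f}")
```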

So suppose we just defined that as the “correct” location, since it’s a good central value. Could we not write code to take the difference between the reported location and that reference and translate it into a shift in time? Say that six meters east is the same as running two microseconds fast? (A totally arbitrary example.) I think the complicating factor wouldn’t be whether it’s possible, but knowing what to treat as the ‘true’ location (and thus the true time), since if the assumed-accurate position you picked were actually off, you’d essentially be introducing error, albeit a constant one. The big question, though, is whether it’s worth it: GPS is quite accurate as it is. I’m a perfectionist, so there’s no such thing as “good enough” time, but I have to wonder whether the benefit would even show up.
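For what it’s worth, the translation itself is just distance divided by the speed of light, which puts the real scale at nanoseconds rather than microseconds: six meters is only about 20 ns. A hedged sketch of that conversion follows; the naive_time_shift helper is my own illustrative name, and it deliberately ignores how a real receiver solves for position and clock bias jointly across all the satellites in view. It’s only meant to show the magnitude.

```python
C = 299_792_458  # speed of light, meters per second

def naive_time_shift(offset_meters: float) -> float:
    """Treat a position offset as if it were pure ranging error along one
    line of sight, and express it as an equivalent clock shift in seconds."""
    return offset_meters / C

# The six-meters-east example from above.
print(f"6 m of position error ~ {naive_time_shift(6.0) * 1e9:.1f} ns of clock error")
```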

2 thoughts on “Location Error vs. Time Error”

  1. Speaking of GPS, I continue to be surprised by the problems with height above sea level. I rode a ferry and watched the reading swing from 40 feet above sea level to 17 feet below as the ferry crossed calm water. Of course sea level in general is a moving target (no pun intended), since it rises and falls with the tide. But that much in a few seconds? 🙂
