Is Noon 12am or 12pm?

The question in the title is something that has always confused me.

But I learned the correct answer today: it is neither.

The precise instant that is “noon” is the “M” in AM and PM (ante- and post-meridiem). So noon is properly “12M,” though the spoilsports at Wikipedia term this usage “antiquated.” (They do go on to call the modern US GPO style manual incorrect, though.) More modern usage appears to be to just say “12 noon.” But if you’re referring specifically to noon, “12am” or “12pm” are both incorrect. (In the same way that the year 2000 was neither pre-2000 nor post-2000.)

This does clear something up for me, though! My confusion has typically been about the whole hour of 12:xx. Is 12:30 (30 minutes past noon) a.m. or p.m.?

It turns out that I have been wrong. I assumed it was a.m., and that the switch happened when the 12 rolled over to 1. But the pedantic clarification of what a.m. and p.m. means makes this suddenly intuitive:

11:59:59 a.m.
12:00:00 M
12:00:01 p.m.

The equivalent for midnight is less clear, but it seems as though “12 p.m.” is generally accepted, with 12:00:01 being a.m.

Now you know.

One Weird Trick to Never Run out of Batteries

(No, no, I’m not actually selling anything, and this isn’t spam. But some of us have been parodying the weird “One weird trick” ads that no one understands. I did it in my post about getting NFS working, and my friend and colleague Tomas introduced his Gerrit expand comments bookmarklet as “one weird trick” to make it more usable. This one is about batteries.)

For years, I’ve been plagued by what to do about the things I own that use AA/AAA batteries. On one hand, disposable batteries are really convenient, because they last years in storage, and are always ready. But the idea of throwing away batteries weekly becomes morally objectionable the more you think about it. On the other hand, you have rechargeable batteries. They can be used hundreds of times, solving (or at least exponentially reducing) the disposal issue I have. But I’ll often endure a wait of a few hours when the batteries in something die and I have to go hastily recharge them. So often I’ll charge things when the batteries aren’t really all that low, so that they won’t run out at an inconvenient time, and that’s just a hassle.

So the “one weird trick” is simple: buy a ton of Eneloop batteries, more than you need, and use them first-in, first-out. Throw depleted batteries on the charger as you grab the new ones to ensure you never run out. It’s the best of both worlds. You always have fully-charged batteries on hand, but instead of throwing the depleted ones in the trash, you put them in the charger. Here it is in a diagram:

Sequence diagram
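And, because I think in code, here's the same flow as a toy queue (purely illustrative; the battery labels are made up):

```python
from collections import deque

reserve = deque(["AA-01", "AA-02", "AA-03", "AA-04"])  # charged, oldest first
charger = []                                            # depleted cells being recharged

def swap(dead_cells):
    """Grab the oldest charged cells and put the dead ones on the charger right away."""
    fresh = [reserve.popleft() for _ in dead_cells]
    charger.extend(dead_cells)
    return fresh

def charging_finished(cells):
    """Recharged cells go to the back of the reserve line, never the front."""
    for cell in cells:
        charger.remove(cell)
    reserve.extend(cells)
```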

I really can’t begin to describe how incredible this is. It’s like this amazing luxury I have. But there are some things I figured out along the way:

  • You need to get something that’s got a very low self-discharge rate. This is the selling point of the Eneloops: you can charge them and they’ll still be good a year down the road. (They’re not immune to self-discharge, but it’s greatly reduced.) This system just will not work with ordinary (non-low-self-discharge) rechargeables: when you charge a battery and return it to storage, it will be drained by the time you use it. Seriously, this bullet point is absolutely critical.
  • Get all the same type of battery. Different brands, or maybe even different models of the same brand, discharge at different rates. It becomes a lowest common denominator situation if you mix them. If you must use different brands/models, don’t mix them in the same device.
  • Buy a ton of batteries. I figured I’d need maybe ten AA batteries. I actually have about 25-30 in circulation, four in reserve (charged and ready to go), and four that I just replaced. You think you got everything, and then the batteries in something else die and you realize you need more. Because they last a long time between charges, having a dozen extra batteries is not a problem. Having too few batteries is.
  • You must force yourself to always put the dead batteries on the charger as you grab new ones. Keep the reserve (charged batteries) and the charger next to each other. This is absolutely essential. You will just have a pile of dead batteries otherwise.
  • Go strictly first-in, first-out on batteries. Work left-to-right, front-to-back, or whatever. I’ve often been tempted to take the batteries off the charger and use them, since they’re “fresher” than the ones waiting in reserve. But the whole point of this system is that the ones in reserve are perfectly good. Once you start skipping over batteries, that guarantee starts to break down.
  • If you have new disposable batteries in reserve, use them first. It pains me to use them and then throw them away when I have perfectly-good rechargeable batteries, but the alternative is to let them sit unused until they expire and then throw them away anyway. Use them while they’re good, knowing that you will never purchase a disposable battery again.

Eneloop batteries are really expensive. $2-3 per battery. But this is absolutely a case of paying for quality. It’s also a wise investment—you’re going to use these hundreds of times, such that the per-use cost of each battery is probably under a cent. I highly, highly recommend the Eneloop brand. There may now be other low-self-discharge batteries, but I haven’t tried them. Whatever you do, be absolutely certain that batteries can last a month or two without losing much of their charge before you buy them. It is a complete waste of your money otherwise.

I have some specific recommendations, too:
  • A good charger. I have, and would recommend, the LaCrosse BC-700. It is not particularly intuitive to use, but it’s not too tough to master. It allows you to discharge and then recharge batteries, which can help extend capacity early on. ([citation needed] on that, but I’ve read it in multiple places.) However, this is just the model I bought and that I like; any quality charger should work fine.
  • A container for the batteries. I don’t have a specific product recommendation here, though the linked set is good. (Note that some retail-packaged Eneloop batteries come with these, so you might not need to purchase them separately.) I have a container that holds 8 that I use for my main reserve set, spilling over into 4-packs when it fills. I do recommend that you get one that holds only a single layer of batteries, to make the first-in, first-out system easy and intuitive: take batteries from the right and insert them on the left, with no worrying about top and bottom rows or anything. Really any container will do, but don’t just leave batteries lying around, or it will become a mess and you won’t know what’s what.

This all sounds so silly, but it’s incredibly useful. I would never go back to any other system.

Thinking Like an Engineer

Lately a lot of my work as a web developer has been way at the back-end, and, for whatever reason, it tends to focus heavily on third parties. I spent a while fixing a bizarre intermittent error with our credit card processor, moved on to connecting with Facebook, and am now working on a major rewrite of the API client we use to talk to our e-mail provider. Sometimes it starts to bleed over into my personal life.

This kind of turned into crazy-person babble, but I haven’t posted in a while, so here goes a perhaps-horrifying look into how my mind works:

  • Driving home one night, I went through the FastLane / EZPass lane, as I often do. Only this time, instead of thinking, “I hate that I have to slow down for this,” I started thinking about latency. Latency is one of the biggest enemies of people working with third parties. It was at the crux of our problems with the credit card processor: we’d store a card and immediately try to charge it, when sometimes we had to wait “a little bit” before the card was available throughout their system to be charged. So I had to introduce a retry loop with exponential backoff. (There’s a rough sketch of that after the list.) The email API work has major hurdles around latency and timeouts. We’ve moved almost all of it into a background queue so that it doesn’t delay page load, but even then we have intermittent issues with timeouts. So driving through the FastLane lane that night, I slowed to about 45, and thought how remarkable it was that, even at that speed, it was able to read the ID off my transponder, look it up in a remote database somewhere, and come back with a value on what to do. I’d have assumed that they’d just queue the requests to charge my account, but if my prepaid balance is low, I get a warning light shown. It seems that there’s actually a remote call. It’s got to happen in a split-second, though, and that’s pretty impressive. I wonder how they do it. I thought a lot about this, actually.
  • I work on the fourth floor of a building with one, slow elevator. A subsection of Murphy’s Law indicates that the elevator will always be on the exact opposite floor: when I’m on the first floor, it’s on the fourth, for example. So one day while waiting for the elevator, I started thinking that it needed an API. I could, from my desk, summon it to our floor to lessen my wait time. Likewise, I could build an iPhone app allowing me to call the elevator as I was walking towards it. The issue of people obnoxiously calling the elevator way too early seems like a problem, but I think it’s okay — if you call it too soon, it will arrive, and then someone else will call it and you’ll miss out entirely. It’s in everyone’s interest to call it “just right” or err on the side of a very slight wait.
  • While thinking more about the elevator API, I started thinking about how elevators aren’t really object-oriented. (I’m pretty sure that’s never been written before.) It seems an elevator is really pretty procedural, running something like goToFloor(4). The obvious object would be Floors, but that’s not really right. You’re not adding Floors to the building, or even changing properties of Floors. The object is really CallRequest, and it would take two attributes: an origin and a direction. “Come to floor two, I’m going up.” It made me think that there are some places where being object-oriented just doesn’t make a ton of sense.
  • You really want to add authentication, too. To get to our floor, you need to swipe your badge. The elevator API needs to account for the fact that some requests require validating a user’s credentials to see if they’re authorized to make the request they’re making.
  • “Code an elevator” would actually be an interesting programming assignment. But I fear it’s too far removed from most normal coding. I started thinking that you’d want to sort CallRequests in some manner, use some algorithms, and then iterate over CallRequests. I think you actually want to throw out that concept. You have a tri-state to control direction: Up, Down, and Idle. Then you have two arrays: UpwardCalls and DownwardCalls. They don’t even need to be sorted. As you near a floor, you see if UpwardCalls contains that floor. If so, you stop. If not, you continue. If you’ve reached the highest floor in UpwardCalls, you check to see if DownwardCalls has any elements. If so, you set your direction to Down and repeat the same procedure for DownwardCalls. If there are no DownwardCalls, you set your state to Idle. (There’s a rough sketch of this loop after the list.) The problem is that this is really not how I’m used to thinking. I want to iterate over CallRequests as they come in, but this means that the elevator is going all over the place. The person on the 4th floor wants to go to the 2nd, so we make that happen. But right as they put that request in, the person on the 3rd wants to go to the 1st. So you’d go 4 to 2 to 3 to 1. “Fair” queuing, but ridiculously inefficient. The obvious fix: on your way from the 4th to the 2nd, stop on the 3rd to pick the person up.
  • I wonder how things work when you have multiple elevators. In big buildings you’ll often have something like 8 elevators. I’m far too tired to try to figure out the ideal way to handle that. They need to be smart enough to have a common queue so that I don’t have to hit “Up” on all eight elevators and just take whatever comes first, but deciding what elevator can service my request first is interesting. I kind of think it’s another case of elevators not being the same as the programming I’m used to, and it’s just whatever elevator happens to pass my floor in its service going in the right direction. But what if there’s an idle elevator? Can it get to me first, or will an already-running elevator get there first? Do you start the idle elevator first and make it event-driven? What if the already-running elevator has someone request another floor between its present location and my floor? You’d need to recompute. You’re probably better off dispatching an idle elevator and just giving me whatever gets there first.
  • You then need to figure out what’s important. If you have an idle elevator that can get to me more expediently than an already-running elevator, but the wait time wouldn’t be that much longer, do you start up the idle elevator, or do you save power and have me wait? How do you define that wait? Is this something elevator-engineers actually tune?
  • I think you want to track the source of a request — whether it came from within the elevator or from the external button on a floor. If it’s within the elevator, you obviously need to stop, or the elevator has effectively “kidnapped” the person. But if it’s an external button, you might want to just skip it and let another elevator get to it, if you have a bunch of CallRequests you’re working through. Ideally, you’d also approximate the occupancy of the elevator based on the weight (from reading the load on the motors?), and when the elevator was at perhaps 75% capacity, stop processing new external requests.
  • Should the elevator controller try to be proactive? It might keep a running log of the most “popular” floors out of, say, the last 50 CallRequests, and, when it was done processing all CallRequests, go to whatever the most popular was and sit idle there? Or perhaps it should work its way towards the middle floor? If you had multiple elevators you could split them apart that way. Is it worth the power conservation?
  • The office thermostat should have an API, too. (I bet the good ones do. But we don’t have those.) Thermostats are a pain to use. readTemperature and setTemperature are the obvious API calls, though advanced thermostats would have a TemperaturePlan concept.
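Two of those bullets lend themselves to code. First, the retry loop with exponential backoff from the credit-card story. This is a generic sketch, not our actual billing code; charge_card and the exception name are placeholders:

```python
import random
import time

class TemporarilyUnavailable(Exception):
    """Stand-in for 'the card isn't visible throughout their system yet.'"""

def charge_with_backoff(charge_card, max_attempts=5, base_delay=0.5):
    """Call charge_card(), retrying on transient failures with exponentially longer waits."""
    for attempt in range(max_attempts):
        try:
            return charge_card()
        except TemporarilyUnavailable:
            if attempt == max_attempts - 1:
                raise  # out of attempts; let the caller deal with it
            # Wait 0.5s, 1s, 2s, 4s, ... plus a little jitter so a pile of
            # retries from different workers don't all land at the same instant.
            time.sleep(base_delay * 2 ** attempt + random.uniform(0, 0.1))
```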
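Second, the single-elevator loop with the direction tri-state and the two piles of calls. This is a toy that mirrors the description above and deliberately glosses over the awkward cases (like a call in the opposite direction sitting beyond the last call in the current one):

```python
class Elevator:
    def __init__(self):
        self.floor = 1
        self.direction = "idle"      # tri-state: "up", "down", or "idle"
        self.upward_calls = set()    # floors to stop at while heading up
        self.downward_calls = set()  # floors to stop at while heading down

    def call(self, floor, direction):
        """A CallRequest: an origin floor and a direction ("come to 2, I'm going up")."""
        (self.upward_calls if direction == "up" else self.downward_calls).add(floor)
        if self.direction == "idle" and floor != self.floor:
            self.direction = "up" if floor > self.floor else "down"

    def step(self):
        """Move one floor in the current direction, stopping wherever it's requested."""
        if self.direction == "idle":
            return
        self.floor += 1 if self.direction == "up" else -1
        calls = self.upward_calls if self.direction == "up" else self.downward_calls
        if self.floor in calls:
            calls.discard(self.floor)
            print(f"stopping at floor {self.floor}")
        if not calls:
            # Nothing left in this direction: reverse if the other pile has
            # anything in it, otherwise go idle.
            other = self.downward_calls if self.direction == "up" else self.upward_calls
            self.direction = ("down" if self.direction == "up" else "up") if other else "idle"

# The example from above: already heading down from the 4th floor, the car
# picks up the 3rd-floor call on the way to the 2nd instead of backtracking.
e = Elevator()
e.floor, e.direction = 4, "down"
e.downward_calls.update({2, 3})
while e.direction != "idle":
    e.step()    # prints: stopping at floor 3, then stopping at floor 2
```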

Disk Throughput

I think I’ve alluded earlier to the fact that I’ve been trying to speed up some systems at home, and how some of them are really slow. (I’m starting to suspect Norton, actually, but more on that when I find out more.)

I just came across this spiffy application, which will write and then read a test file to measure disk performance. My laptop gets 27.1 MB/sec sequential write, 41 MB/sec sequential read, and 29.9 MB/sec random read. This was on a 1 GB file; it wanted to do a ~4 GB file, but I really didn’t feel like spending the time. I suspect the goal is to make sure that it’s not being “fooled” by caching, but I figured 1 GB was sufficient for that. Some of the results show read speeds of 600+ MB/sec, which is most definitely coming from cache. (That said, this is a more “real-life” test… Just don’t think you have a hard drive that does 800 MB/sec reads!)
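Out of curiosity, here’s roughly what such a tool does, as a quick Python sketch (the file name is arbitrary, and the read pass will largely be served from the OS cache unless the file is much bigger than RAM, which is exactly the “fooled by caching” problem):

```python
import os
import time

CHUNK = b"\0" * (1024 * 1024)   # 1 MB of zeros

def disk_throughput(path="testfile.bin", size_mb=1024):
    """Write size_mb sequentially, then read it back; return (write, read) in MB/s."""
    start = time.time()
    with open(path, "wb") as f:
        for _ in range(size_mb):
            f.write(CHUNK)
        f.flush()
        os.fsync(f.fileno())     # make sure it actually hit the disk
    write_mbps = size_mb / (time.time() - start)

    start = time.time()
    with open(path, "rb") as f:
        while f.read(len(CHUNK)):
            pass
    read_mbps = size_mb / (time.time() - start)

    os.remove(path)
    return write_mbps, read_mbps

print(disk_throughput())   # ~1 GB test, like the one above
```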

Location Error vs. Time Error

This post christens my newest category, Thinking Aloud. It’s meant to house random thoughts that pop into my head, versus fully fleshed-out ideas. Thus it’s meant more as an invitation for comments than something factual or informative, and is likely full of errors…

Aside from “time geeks,” those who deal with it professionally, and those intricately familiar with the technical details, most people are probably unaware that each of the GPS satellites carries an atomic clock on board. This is necessary because the system works, in a nutshell, by triangulating your position from various satellites, where an integral detail is knowing precisely where the satellite is at a given time. More precise time means a more precise location, and there’s not much margin of error here. The GPS satellites are also synchronized daily to the “main” atomic clock (actually a bunch of atomic clocks based on a few different standards), so the net result is that the time from a GPS satellite is accurate down to the nanosecond level: they’re within a few billionths of a second of the true time. Of course, GPS units, since they don’t cost millions of dollars, rarely output time this accurately, so even the best units seem to have “only” microsecond accuracy, or time down to a millionth of a second. Still, that’s pretty darn precise.

Thus many–in fact, most–of the stratum 1 NTP servers in the world derive their time from GPS, since it’s now pretty affordable and incredibly accurate.

The problem is that GPS isn’t perfect. Anyone with a GPS probably knows this. It’s liable to be anywhere from a foot off to something like a hundred feet off. This server (I feel bad linking, having just seen what colocation prices out there are like) keeps a scatter plot of its coordinates as reported by GPS. This basically shows the random noise (some would call it jitter) of the signal: the small inaccuracies in GPS are what result in the fixed server seemingly moving around.

We know that an error in location will also cause (or, really, is caused by) an error in time, even if it’s minuscule.

So here’s the wondering aloud part: we know that the server is not moving. (Or at least, we can reasonably assume it’s not.) So suppose we define one position as “right,” and any deviation in that as inaccurate. We could do what they did with Differential GPS and “precision-survey” the location, which would be very expensive. But we could also go for the cheap way, and just take an average. It looks like the center of that scatter graph is around -26.01255, 28.11445. (Unless I’m being dense, that graph seems ‘sideways’ from how we typically view a map, but I digress. The latitude was also stripped of its sign, which put it in Egypt… But again, I digress.)

So suppose we just defined that as the “correct” location, as it’s a good median value. Could we not write code to take the difference in reported location and translate it into a shift in time? Say that six meters East is the same as running 2 microseconds fast? (Totally arbitrary example.) I think the complicating factor wouldn’t be whether it was possible, but knowing what to use as the “true” location, since if you picked an inaccurate assumed-accurate location, you’d essentially be introducing error, albeit a constant one. The big question, though, is whether it’s worth it: GPS is quite accurate as it is. I’m a perfectionist, so there’s no such thing as “good enough” time, but I have to wonder whether the benefit would even show up.
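For scale on that admittedly arbitrary example: range error and timing error in GPS are tied together by the speed of light, so, ignoring satellite geometry (dilution of precision), a few meters of position error corresponds to tens of nanoseconds, not microseconds. A quick back-of-the-envelope:

```python
C = 299_792_458  # speed of light, m/s

def meters_to_nanoseconds(meters):
    """Timing error equivalent to a given range error, to first order."""
    return meters / C * 1e9

print(meters_to_nanoseconds(6))   # ~20 ns for a 6-meter offset
```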

Building an Improvised CDN

From my “Random ideas I wish I had the resources to try out…” file…

The way the “pretty big” sites work is that they have a cluster of servers… A few are database servers, many are webservers, and a few are front-end caches. The theory is that the webservers do the ‘heavy lifting’ to generate a page… But many pages, such as the main page of the news, Wikipedia, or even these blogs, don’t need to be generated every time. The main page only updates every now and then. So you have a caching server, which basically handles all of the connections. If the page is in cache (and still valid), it’s served right then and there. If the page isn’t in cache, it will get the page from the backend servers and serve it up, and then add it to the cache.
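In code, that front-end cache logic is tiny. A sketch, where render_page() stands in for whatever the backend webservers actually do and the 30-second lifetime is just an example:

```python
import time

CACHE_TTL = 30    # seconds a cached page stays valid
_cache = {}       # url -> (expires_at, html)

def get_page(url, render_page):
    """Serve from cache while fresh; otherwise regenerate via the backend and store it."""
    now = time.time()
    entry = _cache.get(url)
    if entry and entry[0] > now:        # hit, and still valid
        return entry[1]
    html = render_page(url)             # miss (or expired): do the heavy lifting
    _cache[url] = (now + CACHE_TTL, html)
    return html
```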

The way the “really big” sites work is that they have many data centers across the country and your browser hits the closest one. This improves load times and adds redundancy (data centers do periodically go offline: The Planet did it just last week when a transformer inside blew up and the fire marshals made them shut down all the generators). Depending on whether they’re filthy rich or not, they’ll either use GeoIP-based DNS, or have elaborate routing going on. Many companies offer these services, by the way. It’s called a CDN, or Content Delivery Network. Akamai is the most obvious one, though you’ve probably used LimeLight before, too, along with some other less-prominent ones.

I’ve been toying with SilverStripe a bit, which is very spiffy, but it has one fatal flaw in my mind: its out-of-box performance is atrocious. I was testing it on a VPS I hadn’t used before, so I don’t have a good frame of reference, but I got between 4 and 6 pages/second under benchmarking. That was after I turned on MySQL query caching and installed APC. Of course, I was using SilverStripe to build pages that would probably stay unchanged for weeks at a time. The 4-6 pages/second is similar to how WordPress behaved before I worked on optimizing it. For what it’s worth, my server can push static content (that is, stuff that doesn’t require talking to databases and running code) at 300-1000 pages/second, as some benchmarks I did demonstrated.

There were two main ways to enhance SilverStripe’s performance that I thought of. (Well, a third option, too: realize that no one will visit my SilverStripe site and leave it as-is. But that’s no fun.) The first is to ‘fix’ SilverStripe itself. With WordPress, I tweaked MySQL and set up APC (which gave a bigger boost than with SilverStripe, but still not a huge gain). But then I ended up coding the main page from scratch, and it uses memcache to store the generated page in RAM for a period of time. Instantly, benchmarking showed that I could handle hundreds of pages a second on the meager hardware I’m hosted on. (Soon to change…)

The other option, and one that may actually be preferable, is to just run the software normally, but stick it behind a cache. This might not be an instant fix, as I’m guessing the generated pages are tagged to not allow caching, but that can be fixed. (Aside: people seem to love setting huge expiry times for cached data, like having it cached for an hour. The main page here caches data for 30 seconds, which means that, worst case, the backend would be handling two pages a minute. Although if there were a network involved, I might bump it up or add a way to selectively purge pages from the cache.) squid is the most commonly-used one, but I’ve also heard interesting things about varnish, which was tailor-made for this purpose and is supposed to be a lot more efficient. There’s also pound, which seems interesting, but doesn’t cache on its own. varnish doesn’t yet support gzip compression of pages, which I think would be a major boost in throughput. (Although at the cost of server resources, of course… Unless you could get it working with a hardware gzip card!)

But then I started thinking… That caching frontend doesn’t have to be local! Pick up a machine in another data center as a ‘reverse proxy’ for your site. Viewers hit that, and it will keep an updated page in its cache. Pick a server up when someone’s having a sale and set it up.

But then, you can take it one step further, and pick up boxes to act as your caches in multiple data centers. One on the East Coast, one in the South, one on the West Coast, and one in Europe. (Or whatever your needs call for.) Use PowerDNS with GeoIP to direct viewers to the closest cache. (Indeed, this is what Wikipedia does: they have servers in Florida, the Netherlands, and Korea… DNS hands out the closest server based on where your IP is registered.) You can also keep DNS records with a fairly short TTL, so if one of the cache servers goes offline, you can just pull it from the pool and it’ll stop receiving traffic. You can also use the cache nodes themselves as DNS servers, to help make sure DNS is highly redundant.

It seems to me that it’d be a fairly promising idea, although I think there are some potential kinks you’d have to work out. (Given that you’ll probably have 20-100ms latency in retrieving cache misses, do you set a longer cache duration? But then, do you have to wait an hour for your urgent change to get pushed out? Can you flush only one item from the cache? What about uncacheable content, such as when users have to log in? How do you monitor many nodes to make sure they’re serving the right data? Will ISPs obey your DNS’s TTL records? Most of these things have obvious solutions, really, but the point is that it’s not an off-the-shelf solution, but something you’d have to mold to fit your exact setup.)

Aside: I’d like to put nginx, lighttpd, and Apache in a face-off. I’m reading good things about nginx.

Broken Windows

Last night we were unloading a shopping cart. When we were done, the cart return was pretty far away. But there were about ten other shopping carts littering the parking lot nearby, so I said, “Meh, what’s one more?”

As we got in the car, I proclaimed, “Broken Windows in action!” I think people were confused and assumed I was referring to a literal window which was broken. Instead, I was referring to the Broken Windows Theory, which is an interesting read. The basic premise is that researchers watched an abandoned warehouse. For weeks, no one vandalized the building. One day, one of the researchers (deliberately) broke one of the windows. In short order, vandals knocked out the rest of the windows. The theory is used a lot in policing, but I think it has applications in many other places. Such as parking lots: if you’re diligent in bringing in carts, I’d argue that you’d keep people from doing what I did. (I also felt the same way at the bowling alley: if we frequently picked up candy wrappers and popcorn from the floor, the place seemed pretty clean. If we slacked, it felt like the place was being trashed by everyone in short order.)

The theory does have its detractors, but it also has strange people who see applications of it in parking lots. Enjoy the photo of chives, which has nothing to do with anything; I just took it and I like it.

Chives

Obama Wins!

Ed.: Because the blogs have been slow, and because this is a hot topic, I’ve fudged the date on this to appear to have been published two days later, so it will stay on the main page a bit longer.

Obama Logo

It looks like Obama is the Democratic nominee, while Hillary Clinton, the woman who has twice alluded to Obama being assassinated (okay, the first time it was a speaker at her event, not her), has conceded that she’d be open to running as his VP.

I’d be happier with an Obama-Richardson ticket, but people are calling Obama-Clinton the fastest way to try to heal the wounds this election cycle saw. In her defense, if she doesn’t get him assassinated, she’d make an excellent VP.

Needless to say, I’ll be watching the news tonight for what may be two very historic speeches: Obama’s victory speech and Hillary’s concession speech. (It seems like it was just weeks ago that Obama gave his “concession speech” that was anything but a concession speech, in New Hampshire, which led to the Yes We Can Song.)

The AP story is hot off the press, and many MSM outlets aren’t carrying it yet. Whether that’s because the polls don’t close for two hours, because it’s not factual, or just because MSM isn’t as obsessed with checking Google News as I am remains to be seen.

Update: It seems that Hillary hasn’t conceded quite yet. Honestly, I’m not sure how the AP is so sure that Obama’s won yet.

Update 2: USA Today has a good piece suggesting that, while Obama might do it tonight, it’s still about 30 delegates premature. And they also have this good article on exactly how the AP story was put together.

Update 3: You can follow the whole Google News thread.

Missing the Point

This comic was pretty funny, and the age/2 + 7 formula got tossed around a lot by my roommates.

Of course, it gives us the minimum age one can date without being creepy. At 22, it’s [(22/2) + 7], or 18. (I, however, maintain that this discrepancy would, in fact, be creepy.)

But what about the upper age limit? The formula itself is silent on this, but we can easily do some substitution to make it work. If the minimum acceptable age (“M”) is your own age (“A”) divided by two, plus 7, we get:

M = A/2 + 7

We typically solve for M, knowing A. However, the oldest person I could date would have my A as their M, e.g.:

22 = A/2 + 7

With this realization, it’s a simple Algebra 1 question. Subtract 7 from both sides and then multiply by two.

Thus, the maximum age one can date is 2(A-7), where A is your age. For me, it’d be 2(22-7), or 30.
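The same arithmetic as a tiny function (nothing here beyond the formula above):

```python
def dating_range(age):
    """Non-creepy dating range implied by the age/2 + 7 rule."""
    return age / 2 + 7, 2 * (age - 7)

print(dating_range(22))   # (18.0, 30)
print(dating_range(14))   # (14.0, 14) -- the "identity" age discussed below
```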

What interests me, though, is that this means I’m allowed to go back four years, but forward eight, within the margin of creepiness.

I built a spreadsheet for people aged 1 to 100 showing this and various other statistics. It’s online here as an HTML document. A few interesting trends emerge that aren’t intuitively obvious working with just the formulas:

  • The formula doesn’t make any sense below age 14.
  • Age 14 is a sort of ‘identity,’ when you’re first able to start non-creepily dating people, apparently, without breaking any laws of mathematics. At age 14, you can’t date anyone older, nor younger, than 14.
  • From there on out, every year you age adds 0.5 to the minimum age you can date, while adding 2 to the maximum age. Thus at 22, I can date 18-30. When I turn 23, my new range will be 18.5 to 32. (At age 100, you can date anyone between 57 and 186. Because dating anyone over 186 would definitely be creepy.)
  • As you can see, the two don’t grow at the same speed; the upper age grows four times as fast as the lower age. An interesting side-effect of this is that this means that, as time goes on, your age becomes radically different than the median age. By the time you reach 100, you’re 21.5 years younger than the median age of people you can date.

SLRs

I think the best thing about SLRs isn’t their elimination (well, exponential reduction) of shutter lag, nor the support for high ISOs, nor even the advanced exposure and metering modes. It’s that even at relatively high f-numbers (f/5.6), you can keep a shallow depth of field. Consider this photograph:

Something-flower

(Does anyone know what type of flower this is, BTW?) The photo wouldn’t be half as good if everything were in focus, as a normal camera would have rendered it. But by throwing the distracting (and ugly!) background out of focus, the shot comes out a lot better. I don’t entirely love the depth of field on this one; I wish you could see a little more of the plant clearly (which would have required that I stop the lens down a bit more), but I also wish the background were even further out of focus (which would have required that I open up the lens a bit more). BTW, a little bit of HDR going on here, as it wasn’t the best lighting.

mIMG_2571

There’s another example. Too shallow, or at least, I should have manually selected an autofocus point on the left, so that all the caterpillars were in focus. But the background (green and purple bushes) is pleasantly blurred, keeping your attention on the tree.

m-IMG_2593

Here I totally disregarded the rule of thirds. I like it anyway. The other leaves were pretty nearby, so they’re only slightly out of focus. But again, it draws your attention in closer.

m-IMG_2596

There’s the best example. The trees in the background were across the street, and thus extremely out of focus. The camera focused on the leaves, which are tack sharp.

And now, I’m going to go finish mowing the lawn. There were just too many photo opportunities I noticed… 😉

Although I’m attending a Fishercats game tonight… It’ll be my first time with an SLR there. Let’s see how that goes.