Good Enough is Good Enough

Often, it seems that quality and quantity are inversely proportional. You can spend all day doing lots of quick things poorly, or you can spend all day doing one thing really well. Most people would tell you that quality is what matters, so you should spend all day doing that one thing really well.

Sometimes, I’m sure those people are right. If you’re assembling an airplane, please take as much time as you need. But increasingly, I find the focus on perfection to be an obstacle. Guy Kawasaki is famous for his “Don’t worry, be crappy” quote. He doesn’t mean that you should show up to work late, give a half-hearted attempt at doing your job, take a 2-hour lunch, and then leave early. The point is just that you should focus on getting something done, and worry about perfecting things when it becomes necessary.

There are really a lot of reasons to focus on being “good enough”:

  • Why try to “finish” something before getting user feedback or full testing? From the PlentyOfFish Architecture article on HighScalability.com comes this quote: “The development process is: come up with an idea. Throw it up within 24 hours. It kind of half works. See what user response is by looking at what they actually do on the site.” Maybe they like it, and then you can perfect it. Maybe they find bugs you would have missed anyway, and you can fix them. Or maybe they hate it, so you take the feature down, not having wasted too much time perfecting it in the first place.
  • Why waste time fine-tuning something that doesn’t need it? As a mundane example, I added an admin tool to something at work, and tried to figure out how to gracefully handle the fact that it would do an awful full table scan against one of our biggest tables, because it depended on running a SELECT on a field with no index. The solution? Do nothing. The tool is used infrequently enough that we just wait a few seconds for the results. It’s not user-facing, and it doesn’t impact the site’s performance, just that particular page. Adding the proper indexes would have taken considerably more time, and the only payoff would have been shaving half a second off the load time of a tool used a couple of times a week. Why bother?
  • Don’t spend too much time on the small things. Just as a pastry chef wouldn’t spend all day keeping the front of his store immaculate, it’s not really my place to pour copious resources into perfecting an unimportant feature. Build something that works. If it’s an important task, do it very well. If not, it’s taking time away from you doing a better job on something more important. (Don’t mistake this for doing a bad job. The pastry chef with a small shop wouldn’t allow the front of the shop to become a disgusting mess that scared customers away, but he shouldn’t spend twenty hours a week polishing the floor and shampooing the carpets, either. Do a good enough job on the non-priorities, and focus on doing a really good job on what you do best.)
  • You might not even know what you’re building. I’m doing some rapid prototyping of some new features, and the specifications change considerably all the time. This is sort of an extension of the first point, really. As we flesh out the prototype a bit more, the specifications firm up a little bit more. Just as you wouldn’t work on hanging the blinds while you were still putting up the frame of a house, it’s not a good use of your time to work on tweaking the performance of features that are still being developed. Again, don’t do a bad job and build something that can never scale without being reimplemented from scratch, because you’re shooting yourself in the foot. But don’t do a perfect job building the most efficient interface ever on something that has an excellent chance of being scrapped.

Fairly tangential to this, and yet the same general concept, is the Pareto principle (the 80/20 rule). What I find fascinating is the areas where it’s considerably more distorted. Spam is a good example, really, of something that’s more like the 99.9/0.1 rule. Both here and at work, spam has been a massive problem. But if you focus on stopping the 99.9% of spammers, it turns out to be extremely simple. The ability to block registrations from an IP or range, the ability to quarantine posts containing certain keywords, and a throttle on what new users can do have practically eliminated spam as a concern. The people that would sign up and start spam-bombing every user on the site still try every now and again, but find that it doesn’t work.
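Those three rules are simple enough to sketch in a few lines. The networks, keywords, and limits below are all made-up placeholders, not the real values we used:

```python
import ipaddress

BLOCKED_NETS = [ipaddress.ip_network("203.0.113.0/24")]  # example range only
FLAGGED_KEYWORDS = {"viagra", "casino"}                   # hypothetical keyword list
NEW_ACCOUNT_AGE = 24 * 3600   # accounts younger than a day get throttled
NEW_ACCOUNT_LIMIT = 5         # max posts per hour for new accounts

def allow_post(ip, text, account_age_secs, posts_last_hour):
    """Apply the three basic rules: IP block, keyword quarantine, new-user throttle."""
    addr = ipaddress.ip_address(ip)
    if any(addr in net for net in BLOCKED_NETS):
        return "blocked"
    if any(word in text.lower() for word in FLAGGED_KEYWORDS):
        return "quarantined"
    if account_age_secs < NEW_ACCOUNT_AGE and posts_last_hour >= NEW_ACCOUNT_LIMIT:
        return "throttled"
    return "allowed"
```

Nothing here is clever, and a determined spammer could work around every rule. That’s the point: the 99.9% never do.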

We spent a while discussing some of these things. “If we block their IP, won’t they just use a proxy server? Should the limit be x or y messages, and over what time period?” At the end of the day, though, an unreasonably huge amount of spam can be stopped by a few really basic rules. In theory, spammers can just get a new IP, or can exploit a few things we identified as possible vulnerabilities. In reality, a handful of very basic features made spam volume drop orders of magnitude. Rather than spending all day working through a growing backlog of spammers, we click a few buttons every now and then to delete the few who bother to try. It’s somewhat like greylisting with SMTP: in theory, spammers have had years to work around it, and it should take 30 minutes of coding to make their spam software pass greylisting. In reality, something like 95% of people who get greylisted (at an inbox that gets 100% spam) either don’t try again at all, or they try again with totally different information and get rejected again.
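The whole greylisting trick fits in a few lines: temporarily reject the first delivery attempt from an unseen (client IP, sender, recipient) triple, and accept a retry after a delay. Real MTAs retry; most spam software doesn’t. This is a minimal sketch, and the retry window here is an arbitrary placeholder:

```python
import time

RETRY_WINDOW = 300  # seconds the sender must wait before retrying (placeholder)

seen = {}  # (client IP, sender, recipient) -> timestamp of first attempt

def greylist_check(ip, sender, recipient, now=None):
    """Return an SMTP-style reply code: 451 (try again later) or 250 (accepted)."""
    now = now if now is not None else time.time()
    triple = (ip, sender, recipient)
    if triple not in seen:
        seen[triple] = now
        return 451  # temporary failure; a legitimate MTA will retry
    if now - seen[triple] < RETRY_WINDOW:
        return 451  # retried too soon
    return 250  # accepted on a proper retry
```

Note that a retry with “totally different information” is a new triple, so it gets temporarily rejected all over again, which matches the behavior described above.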

I feel compelled to repeat that none of this is saying you should do anything but your best. You should always do your best, but often, doing your best means doing a good enough (still acceptable) job on the things that distract you from what actually creates value. If you slack off or cut too many corners, you’re not doing a good enough job. Thus, doing good enough is necessarily good enough.
