Televisions

LCD and plasma TVs are becoming increasingly popular, costing between $1,000 and $3,000.

If you have that budget in mind, something I’ve wanted to do for a long time suddenly becomes viable: buying a projector and mounting it on the ceiling. Of course, only very high-end projectors will do the full 1920×1080 of 1080i and 1080p, but 1024×768 is very doable for under $1,000, and the difference in resolution shouldn’t be all that noticeable. And then you’ve got something like a 100″ screen. Wow-a-wee-wow!

The caveat, of course, is that few (if any?) projectors include tuners, so you’d have to set up a PC for that, something like a Mythbox. But one can be put together for around $500, and that assumes you don’t already have a spare computer with a tuner card or two.

Deal of the Day

I just saw this link on a site I frequent: a Compaq laptop with a dual-core chip, 1 GB RAM, 80 GB disk, 15.4″ LCD, DVD burner, and integrated wireless… $300 after rebate ($440 before). (I had no idea HP still made Compaq-branded machines.)

For the same price, they’ve got a desktop system… It’s “just” an Athlon (with no apparent details), but it comes with 2 GB RAM and a 250 GB disk, plus a DVD burner. (Throw in a tuner card and you might have a nice Mythbox.)

CompUSA has a 22″ Acer LCD (1680×1050) for $200, although it seems the deal ends today. (I thought they went out of business?)

Internet Radio

I remain a fan of SomaFM, a network of awesome streaming music.

Two interesting things I’ve come across, though:

The first is AACPlus. The webpage makes it look like a minor little project, but it’s being used by a number of streaming stations, and it’s supported by VLC and WinAmp, among others. What makes it notable is that I’m listening to a 48 kbps stream of one of Soma’s stations right now… and it sounds better than a 128 kbps stream. You can apparently drop to 32 kbps and still get just slightly less than CD quality, and at 24 kbps it’s still on par with MP3 streams. It works out great: it delivers high-quality audio to me, and it instantly at least doubles the number of listeners they can handle, since bandwidth is almost always the “limiting reactant” with streaming audio.
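To put numbers on that bandwidth point, here’s a back-of-the-envelope sketch. The 10 Mbps outbound pipe is a made-up figure for illustration, not Soma’s actual capacity:

```shell
# How many simultaneous listeners fit in a fixed outbound pipe at each bitrate?
# 10 Mbps = 10,000 kbps (assumed capacity; ignores protocol overhead).
bandwidth_kbps=10000
for rate in 128 48 32 24; do
  echo "$rate kbps stream: $((bandwidth_kbps / rate)) simultaneous listeners"
done
```

At 48 kbps you fit well over twice the listeners a 128 kbps stream allows, which is the “instantly doubles (at least)” claim above.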

In other news, the RIAA is apparently having luck getting Congress to raise webcasting rates enormously again… In some instances they’ll apparently go up by more than 1,000%. If you check out the Soma site, they’re coming up short on funding every month, pleading for donations to stay online, and a lot of streaming radio sites are in the same position. They’re barely staying online as it is. Raising their rates by a factor of ten is going to kill Internet radio.

Problems

Here are the types of things my mind picked up on today that no one else on the face of the planet would notice, much less care about:

  • mot.com (MOT is Motorola’s stock ticker) resolves to 192.168.0.110, a non-routable (RFC 1918) internal IP.
  • Doing a traceroute from here to mot.com, the packets pass through six routers (four at school, two upstream) before they start getting dropped. In my mind, every single router should check for impossible conditions like that and drop the packets. At a minimum, our edge router should do this filtering, as should the first upstream router.
  • One of Waltham’s firefighters transmits a sidetone when he keys up, in addition to his MDC data. This is a weird problem. What’s supposed to happen is that the radio transmits a little data burst at the start of each transmission, identifying his radio. The exchange takes about 200 ms, so while the radio transmits the burst, it beeps at the user to indicate that they shouldn’t start talking yet; when it stops, it starts transmitting his audio. In his case, both the sidetone and the data burst are going out over the air.
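The filtering I’d want on that first point amounts to a bogon check: drop anything destined for private address space. A minimal sketch in shell, covering the RFC 1918 ranges (10/8, 172.16/12, 192.168/16):

```shell
# Return success (0) if the dotted-quad address falls in RFC 1918 private space.
is_private() {
  case "$1" in
    10.*)                                   return 0 ;;  # 10.0.0.0/8
    172.1[6-9].*|172.2[0-9].*|172.3[01].*)  return 0 ;;  # 172.16.0.0/12
    192.168.*)                              return 0 ;;  # 192.168.0.0/16
    *)                                      return 1 ;;
  esac
}

is_private 192.168.0.110 && echo "192.168.0.110: should never leave the LAN"
is_private 72.36.178.234 || echo "72.36.178.234: fine to route"
```

A real router would do this on interface ACLs rather than in shell, of course; this just shows how cheap the check is.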

Tweaking SQL

I was thinking last night about solid-state drives. In their current form, they’re really not that much faster in terms of throughput: a decent number are actually slower than ATA disks if you measure them in MB/sec. Where they shine is seek time, where they’re at least 100 times faster. So what they’re ideally suited for in a server environment right now is anything with lots of random reads, where you find yourself jumping all over the disk: for example, a setup with lots and lots of small files scattered across the disk.

Many database workloads would be similar. The database for this blog sees a lot of sequential reads: you’re always retrieving the most recent entries, so the reads tend to be fairly close together. But there are lots of ways to slice the data that don’t result in reading neighboring rows or walking the table. (What really matters is how the data is laid out on disk, not how MySQL presents it, but I’m assuming they’re one and the same.) Say I view my “Computers” category: that’s going to read from all over the table. A solid-state disk might give you a nifty boost there. So I think it’d be fun to buy a solid-state disk and use it in an SQL server. I wager you’d see a fairly notable boost in performance, especially in situations where you’re not just reading sequential rows.
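A crude way to see the two access patterns (not a real benchmark; assumes bash for $RANDOM, and on a warm page cache the timings converge, but the scattered pattern is what seek time punishes on spinning disks):

```shell
# Make a 64 MiB test file (16,384 blocks of 4 KiB).
dd if=/dev/zero of=/tmp/seektest bs=1M count=64 2>/dev/null

# Sequential: blocks 0..63 in order, like pulling the most recent blog entries.
# (Wrap each loop in time(1) to compare on a cold cache.)
for i in $(seq 0 63); do
  dd if=/tmp/seektest bs=4k skip=$i count=1 of=/dev/null 2>/dev/null
done

# Scattered: 64 pseudo-random blocks across the file, like pulling one
# category's posts from all over the table.
for i in $(seq 0 63); do
  dd if=/tmp/seektest bs=4k skip=$((RANDOM % 16384)) count=1 of=/dev/null 2>/dev/null
done

rm -f /tmp/seektest
```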

But here’s the cool link of this post. I’m not sure exactly what goes on technically, but they use solid-state drives, getting the instant seek time, while also getting incredible throughput: 1.5 GB/sec is the slowest product they offer. There may be striping going on, but even then, with drives doing 30 MB/sec, that would take 50 drives. The lower-end products look to be machines with enormous RAM (16-128 GB), plus some provisions to make the memory non-volatile. But they’ve also got bigger servers that handle multiple terabytes of Flash storage and still pull 2 GB/sec of throughput, which they pretty clearly state doesn’t count what’s cached in RAM (which should be even faster).

I want one.

Contains Bitterant

Like most enlightened geeks, I love freeze spray. Err, canned air. The stuff you use to blow dust out of your computer’s fan. It’s very handy in that use.

But turn it upside down and you’re spraying something cold enough to give you frostbite. This is the off-label use, and it probably accounts for three-quarters of what freeze spray (canned air, I mean) is used for. You can harass friends (actually pretty dangerous), or deal with misbehaving components. My external hard drive, which has been acting flaky, runs extremely warm, to the point that I worry I might burn myself if I touch it again. So I hit it with some freeze spray. (This is probably not sound practice: I have a feeling hard drives don’t like going from 110+ degrees to -30 in a second. But then again, a short blast of freeze spray doesn’t do much but lower the temperature slightly.)

The real problem, though, is inhalant abuse. I’m really not sure why people would do this; the stuff is so incredibly useful that you’d have to already be high to think wasting it was a good idea. But companies have started adding a “bitterant.” I know this for two reasons: the first is that they mention it on the label, and the second is that the bitterant floats around the room. After cooling down the hard drive, I had a disgusting bitter taste in my mouth. So now, as a legitimate user of their product, I’m trying to find a way to get it without bitterant, because it leaves me disgusted every time I use it.

Bringing Down the Web

Engadget (but, strangely, no mainstream news sites?) is reporting that a fourth undersea fiber cable has been cut in the Middle East.

People are now starting to draw the conclusion I drew after the second cut: something fishy is going on. (Err, no pun intended there…) Undersea cables don’t get cut that often, but four of them in a week, all serving the same war-torn region?

Someone is pretty clearly trying to cut that part of the world off from the Internet, and they’re doing a pretty good job. Fortunately, the Internet has always been designed to route around failures like this, but it seems they’ve taken out a huge chunk of the backbone to some parts of the world. There was an earthquake in that region, too, though. But still, I’m suspicious.

Of course, some are saying that the fourth line wasn’t actually cut, but just suffered technical issues unrelated to the undersea cable itself. But still, I’m calling shenanigans. I’m just not sure which motive is at play: are they resisting Western influence? Trying to block technology? Obsessed with censorship? There are multiple possible motives, just as there are many, many possible culprits.

Although I have to hand it to them: those undersea cables look incredibly resilient, and I can’t imagine that too many people know where every single one is located.

Closed Source

As much as I love open-source software, I tend to shy away from the die-hard “open source or bust” crowd. I use closed-source (“restricted”) drivers when need be, and they usually work better, since the vendors can optimize them.

I’m quite frustrated, though, with ATI. The closed-source fglrx drivers give good performance but have some major problems. Namely, they just don’t work with Xen. I’ve been looking to set up some virtual machines, but I have to choose between having VMs and having working video drivers.

And hibernate / software suspend has never worked. It turns out this is also a known bug caused by the closed-source fglrx drivers.

It turns out the Ubuntu kernel team is aware of both of these issues and is trying to find fixes. But the problems lie in a closed-source module, so their hands are tied.

Argh!

Do you have the time?

I’ve been running an NTP server on this host for quite some time now, but as of yesterday, I’m a member of the pool.ntp.org project. pool.ntp.org is a round-robin DNS service: requests for pool.ntp.org are answered with IPs drawn from a huge block of listed time servers, balancing the load across a pool of about 1,500 NTP servers around the world. The official “entry” for this server is my IP (72.36.178.234), but ntpd is actually listening on all IPs right now, so using blogs.n1zyy.com or ttwagner.com will work too.

I’m currently synced to Stratum 2 servers, but I think that, after I finish up some open tasks (“real work,” versus playing with time servers), I’m going to look at requesting permission to sync to Stratum 1 servers. Stratums (err, strata) are basically tiers. “Stratum 1” refers to a server directly connected to a reference clock: something like a GPS receiver (GPS obtains extremely accurate time, since having the correct time is an important part of how GPS works, so GPS actually broadcasts the time from an atomic clock) or WWV (transmitted over HF radio). Stratum 2 servers get their time from Stratum 1 servers, and so on. As I sync to a network of Stratum 2 clocks, I become a Stratum 3 server. Moving up a stratum generally implies more accurate time, as there are fewer intermediaries to skew the results, although we’re talking milliseconds of difference. There aren’t an awful lot of Stratum 2 servers, so syncing to a Stratum 1 server would help to round out the Stratum 2 list. (It would be fun to become a Stratum 1 server, but as a Stratum 2 operator says of his data center, “they’re not going to let me drill a hole in the ceiling to run an antenna [for the GPS] to the roof.”)

For those of you with UNIX systems, take advantage of this! You can sync to me directly (72.36.178.234) or indirectly (through the pool.ntp.org cluster). (Windows can sync to an NTP server as well; it’s just not a standard feature.)
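For reference, a minimal ntp.conf sketch for syncing against the pool. The pool hostnames are the standard vendor names; the driftfile path is a common default but varies by distribution:

```conf
# /etc/ntp.conf (sketch)
driftfile /var/lib/ntp/ntp.drift

# Three pool servers; iburst speeds up the initial sync.
server 0.pool.ntp.org iburst
server 1.pool.ntp.org iburst
server 2.pool.ntp.org iburst
```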

Web Design

I’ve redone ttwagner.com. It’s no longer a random integer between 0 and 255, but instead a decent-looking site, and I’ve integrated some of the cool things I’m hosting there as well. Along the way I came across a few interesting things I wanted to point out.

The world DNS page is incredibly intensive to generate and, since it’s not dynamic, there’s no sense in generating it on every request. So I used the command wget http://localhost/blah/index.php -O index.html to “download” the output and save it as index.html in the web directory. Voilà: the server now hands out the HTML file rather than executing the script.

But the HTML output was frankly hideous. The page was written as a “You know, I bet I could do…” type thing, written to fill some spare time (once upon a time, I had lots of it), so I’d given no attention to outputting readable HTML. It was valid code and all; it just didn’t have linebreaks or anything of the sort, which made it a nightmare to read. But I really didn’t want to rewrite my script to clean up its output just so I could download it again.

So I installed tidy (which sometimes goes by “htmltidy,” including in the name of the Gentoo package). The -m flag tells it to modify the file in place (as opposed to writing to standard output). The code looks much cleaner now; it’s not indented, but I can live with that!

I also found that mod_rewrite is useful in ways I hadn’t envisioned. I developed everything in a subdirectory (/newmain) and then used an .htaccess override to make it “look” like the main page (at ttwagner.com/ ). This simplifies things greatly, since moving the files would have complicated my existing directory structure. (It’s imperfect: you “end up” in /newmain anyway, but my goal isn’t to hide that directory, just to keep the main page from being blank.)
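A sketch of what that .htaccess override can look like. My actual rules may differ; this is just the general shape, assuming mod_rewrite is enabled:

```apache
# .htaccess at the site root (sketch)
RewriteEngine On
# Send requests for the bare site root into /newmain/.
# [R] makes it a visible redirect, which is why you "end up" in /newmain.
RewriteRule ^$ /newmain/ [R,L]
```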

I’ve also found I Like Jack Daniel’s. (Potential future employers: note the missing “that” in that sentence, which changes the meaning completely!) The site is a brilliant compendium of useful information, focusing generally on Apache, PHP, MySQL, and gzip. The “world DNS” page was quite large, so I decided to start using gzip compression, and he lists a quick, simple, and surefire way to get it working. (The one downside, and it’s really a fundamental ‘flaw’ with compression in general, is that the browser can’t draw the page until the whole transfer is complete. This has an interesting effect as you wait for the page to load: it just sits there not doing much of anything, and then, in an instant, displays the whole page.) It may be possible to flush the output buffer more often, resulting in “progressive” page loading, but this would be complicated, introduce overhead, and, if done enough to be noticeable, defeat the point of compression. (Extreme example: imagine taking a text file, splitting it into lots and lots of one-byte files, and then compressing each of them individually. Net compression: zero. Net overhead: massive!)
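The tiny-files example is easy to demonstrate: gzip a small file whole, then split it into 16-byte fragments (close enough to the one-byte extreme) and gzip each piece. Exact sizes will vary with the input, but the per-piece total should come out larger than the uncompressed original:

```shell
# Compare gzipping one file against gzipping many tiny fragments of it.
mkdir -p /tmp/gzdemo && cd /tmp/gzdemo
seq 1 2000 > whole.txt              # ~9 KB of highly compressible text

gzip -c whole.txt > whole.gz        # one compressed stream: much smaller
split -b 16 whole.txt piece.        # hundreds of 16-byte fragments
for f in piece.*; do gzip "$f"; done  # each gets its own gzip header/trailer

echo "original:   $(wc -c < whole.txt) bytes"
echo "one gzip:   $(wc -c < whole.gz) bytes"
echo "many gzips: $(cat piece.*.gz | wc -c) bytes"
cd / && rm -rf /tmp/gzdemo
```

The per-fragment header and trailer overhead swamps any compression, which is exactly why flushing too aggressively defeats the point.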