I’ve always liked the idea of rewarding douchebaggery with more douchebaggery. And one bit of douchebaggery that really bugs me is that, when you run a webserver, it’s constantly getting requests for pages that have never existed. What’s going on is that people are probing for common vulnerabilities. I don’t have a /phpmyadmin, but I get multiple requests a day for it. (I do have PHPMyAdmin, but it’s up to date, secure, and at an obscure URL.) Same goes for awstats.
What I’ve always wanted to do is respond to these requests with complete garbage. Unending garbage. My long-time dream was to link a page to /dev/random, a “file” in Linux that’s just, well, random. (It’s actually a device, a software random number generator.) The problem is that serving it directly is full of problems, and, when you finally get it working, you’ll find the webserver is smart enough to treat it as a device, not a file.
So I took the lazy route and just created a 500MB file. dd copies raw data, so you point it at /dev/urandom as the input and a file with a .html extension as the output. I had it read 500 “blocks” of 1MB each. Granted, this is a total waste of disk space, but right now I have some spare space.
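The one-liner looks something like this (the filename is my choice; /dev/urandom is the non-blocking variant, which matters here because /dev/random can stall on large reads while it waits for entropy):

```shell
# 500 blocks of 1MB each = 500MB of random bytes, dressed up as HTML
dd if=/dev/urandom of=garbage.html bs=1M count=500
```

Since the content is random bytes rather than markup, any browser or script that fetches it just gets an endless stream of gibberish.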
Of course, I was left with a few resources I was concerned about: namely, RAM, CPU time, and network activity. I use thttpd for this idiotic venture, which lets me throttle network activity. I’ve got it at 16 KB/sec right now. (Which is an effective 128 kbps.) This ensures that if it gets hit a lot it won’t put me over my (1,000 GB!) bandwidth allocation.
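thttpd’s throttling lives in a separate throttle table passed with the -t flag; per its man page, each line pairs a wildcard pattern with a bytes-per-second cap. A sketch of capping just the garbage file at 16 KB/sec (the filenames here are assumptions):

```
# /etc/thttpd/throttle.conf
# pattern          bytes-per-second
**/garbage.html    16384
```

Then start the server with something like `thttpd -t /etc/thttpd/throttle.conf`, alongside whatever other flags you already use.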
Apparently, though, this throttling solves the problem: at first glance, it looks like thttpd is smart enough to read the file in 16KB chunks and send them out, as opposed to slurping the whole thing into memory, which would kill me on CPU time and RAM. So the net result is relatively minimal resource utilization.
Currently, it’s just sitting there at an obscure URL. But my eventual plan is to set up a /awstats and a /phpmyadmin and a /admin and a /drupal and have them all throw a redirect to this file.
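As far as I know thttpd has no built-in redirect directive, but it does support CGI (enabled with a `-c` pattern), and the CGI spec says a script that emits a `Location` header gets turned into a redirect by the server. So one hedged sketch, with hypothetical paths, is a tiny shell stub installed at each trap URL:

```shell
#!/bin/sh
# Hypothetical CGI stub, installed at e.g. /phpmyadmin (path is an
# assumption). Per the CGI spec, a Location header followed by a
# blank line tells the server to redirect the client there.
emit_redirect() {
    printf 'Location: %s\r\n\r\n' "$1"
}

# Assumed location of the 500MB garbage file
emit_redirect "/obscure/garbage.html"
```

Each trap path would just be a copy (or symlink) of this script pointing at the same target.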
The other bonus is that, at 16KB/sec, if a human gets there, they can just hit “stop” in their browser long before anything bad happens. But, if it works as intended, infected systems looking to spread their worms/viruses won’t be smart enough to think, “This is complete gibberish and I’ve been downloading it for 30 minutes now,” and the endless download will derail their attempts at propagating.
It’s not in motion yet, though… But I’ll keep you posted.