Shipping

How does one go about shipping a 65-pound box? Does FedEx have offices like the Post Office where I can mail it? Is it cool with them if I wheel the box in on a dolly?

I’m looking into trying to ship it from work, since UPS and FedEx will come to our offices to pick things up, whereas they probably won’t come to my home. I tried to get an estimate and accidentally discovered a useless bit of information: shipping a 1x1x1″ box weighing 65 pounds, with a declared value of $1, from Boston to Chicago costs $26.27 via ground, and up to $267.60 to overnight it.

What I want to know is why FedEx thinks a 1x1x1″ box weighing 65 pounds—clearly a revolutionary scientific breakthrough—would be worth $1? Failing that, I’m curious about why they would assume a box weighing 65 pounds would measure 1x1x1″, and never ask me what its actual size is? Furthermore, I wonder if it’s even possible to ship a 1x1x1″ box? Could you fit all the information on it? Does anyone else really want to try?

I Gotta Feeling

A local DJ was talking the other day about how The Black Eyed Peas are always at the top of the charts, and how it seems like it’s probably a record.

For the past month, Boom Boom Pow has been up there, and last week, I Gotta Feeling surpassed it. And then there was even more attention on the Black Eyed Peas after trashy blogger Perez Hilton went off on them, insulting Fergie and calling will.i.am a “faggot,” and then got punched in the face by some unknown person whom he erroneously named as will.i.am himself… (And I’ve yet to hear from a single person who was saddened by his assault.)

The Black Eyed Peas have an official YouTube channel. Most artists sue to keep their music off of YouTube, while the Black Eyed Peas upload their own music there and draw millions of views. I was looking through, and found a bunch of hits that I’d forgotten were the Black Eyed Peas’.

Fergie and will.i.am, both tremendous successes in their own right, are members of the Black Eyed Peas. will.i.am’s Yes We Can became one of the most viral videos of the 2008 campaigns, and was even performed live at Invesco Field. But We Are the Ones wasn’t quite as big a hit, even though it should have been. And I just bought It’s a New Day, the post-election followup.

It does not appear possible to buy stock in a band.

Rant

Why does Debian (and thus, implicitly, MySQL) feel compelled to replace the “root” user in MySQL with “debian-sys-maint”? If there were a debian-sys-maint account for system stuff and a “root” user for administering MySQL, it would be fine. But instead, the only user created is debian-sys-maint, so you’re left to either (a) suffer through a bizarre setup for no good reason, or (b) manually add the grant for user root, as sketched below.
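
For anyone hitting the same wall, option (b) boils down to one command. This is a sketch: /etc/mysql/debian.cnf is where the Debian packages stash the debian-sys-maint credentials, and the password here is obviously a placeholder you should change:

  # connect with the packaged debian-sys-maint credentials, then grant root full access
  mysql --defaults-file=/etc/mysql/debian.cnf -e \
    "GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost' IDENTIFIED BY 'pickarealpassword' WITH GRANT OPTION; FLUSH PRIVILEGES;"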

Plus, I’m trying to install cacti, and the Debian package blows up because it tries to let root create the database, not debian-sys-maint.

Argh!

Resizing an ext3 Xen disk image

I’ve seen this a few other places and it didn’t quite work. Let’s say you download a half-gig barebones Linux VM to use with Xen. It runs great, but you want more than 500MB of disk space. Here’s exactly what I had to do (the whole sequence is also collected into a single sketch after the list):

  • Shut down the virtual machine and make sure nothing is using the disk image!
  • Make a backup of the current image (cp imagefile.img imagefile.img_backup) in case anything goes awry
  • Make a blank file with dd of the size you want to add, e.g., dd if=/dev/zero of=10gig.img bs=1024 count=10000000 for a 10GB image. This may take a bit.
  • Append that to the end of your file, e.g., cat 10gig.img >> imagefile.img
  • “Mount” it as a loopback device: losetup /dev/loop0 imagefile.img
  • Run a forced filesystem check; resize2fs insists on a recent clean check, and growing a filesystem this way is bizarre enough that it may turn up complaints: e2fsck -f /dev/loop0
  • After that completes, resize the filesystem. This may take a while: resize2fs /dev/loop0
  • “Delete” the loopback device (think “unmount”): losetup -d /dev/loop0
  • Start your VM up, and it should now see 10GB extra space!
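
And here’s the whole sequence collected into one sketch, using the same made-up file names and ~10GB size from the list above. Don’t run it against an image a VM is still using:

  cp imagefile.img imagefile.img_backup                  # safety net
  dd if=/dev/zero of=10gig.img bs=1024 count=10000000    # ~10GB of zeros; this takes a while
  cat 10gig.img >> imagefile.img                         # tack the blank space onto the image
  losetup /dev/loop0 imagefile.img                       # expose the image as a block device
  e2fsck -f /dev/loop0                                   # forced check; resize2fs requires it
  resize2fs /dev/loop0                                   # grow the filesystem to fill the image
  losetup -d /dev/loop0                                  # tear down the loop device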

Mindlessly Repeating Crap

Has anyone ever noticed that you’ll ask a question online, and people just parrot back things they’ve heard? I found a decent deal on an SSD, and was trying to poke around and see if I’d actually see a big boost. Here’s a summary of what I’ve found so far:

  • SSDs have no seek time so they’re good for random reads. I knew that. But I don’t do an abnormal number of random reads. I load big files and write to swap.
  • SSDs have a limited lifespan. This drives me crazy because it’s extremely misleading. It’s technically true, but it seems to imply that hard drives last forever. Among the people who have actually done testing, it seems that most solid-state disks outlast conventional hard drives.
  • SSDs are wicked fast if you stripe data across eight of them. The problem here is that (a) it’s totally unhelpful in figuring out if an SSD will give my laptop increased throughput or not, and (b) ANYTHING is wicked fast if you stripe data across eight of them. (Okay, so maybe not floppies.)

What drives me crazy is that you’ll ask a question about the gross throughput on SSDs, and people come in yapping about how you shouldn’t use an SSD because of the limited number of write cycles, or someone recommends them because their friend’s neighbor has an 8-disk SSD stripe set and he said it’s fast. Asking about actual throughput from the SSDs has gotten me a lot of silence back, actually.
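
For what it’s worth, the number I actually care about, raw sequential throughput, is easy enough to measure yourself. A rough sketch: the read test needs root, and the write test creates (and removes) a 1GB scratch file, so run it on the disk you’re actually curious about:

  hdparm -t /dev/sda                                          # buffered sequential read speed
  dd if=/dev/zero of=./ddtest bs=1M count=1024 oflag=direct   # sequential writes, bypassing the page cache
  rm ./ddtest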

In other news, you cannot use a 2.5″ SAS drive in your laptop. Both are “serial,” but one is still ATA and the other SCSI. (Some SAS backplanes, however, will also support SATA drives.) We have a server with some goofy 2.5″ SAS drives, and I was hoping I could upgrade my laptop to a 73GB, 15K RPM disk. Not possible.

Control Disk Bandwidth with dm-ioband

One problem I run into sometimes is that multiple things on a server are trying to access the same disk. This is pretty unavoidable, and often you want the “default,” which is for them to share the disk equally. If I have two webserver threads, they should get equal treatment.

But sometimes this isn’t the case. A while back I had to make a backup of the local 600GB partition that one of our NFS servers was exporting, and the rsync was really killing NFS performance. I was able to “nice it down” a bit and make it work.
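
I don’t remember exactly which knobs I turned, but the general recipe for “nicing down” a bulk rsync like that looks something like this (the idle I/O class needs the CFQ scheduler, and the paths are made up):

  # run the backup at the lowest CPU and I/O priority so NFS clients win any contention
  ionice -c3 nice -n 19 rsync -a /exports/bigvolume/ /backups/bigvolume/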

dm-ioband is another interesting option that I just came across, though. It does “bandwidth quotas” for disks. I can see a lot of places where this might be handy. I’m curious about exactly how it works and what the full implications are, but it’s a feature that’s needed in many places and not normally available.

The Two Types of Passwords

While setting up login credentials that would be used to have a script on one machine talk to a remote machine, I had an epiphany. There are two types of passwords: the ones you have to remember and type often, and the ones you don’t.

I’d really add a third category: the ones you occasionally have to type and therefore ought to know. I let Firefox and Thunderbird remember most of my passwords, but I still need to know them, since I’m not always at this computer. And then there are the ones I use every day that aren’t saved anywhere, so I know them by heart.

But there’s that last category: the passwords you don’t have to remember. They’re either hardcoded into a script somewhere, or they’re set and utterly forgotten. And here’s the point of all my babbling: if you never have to remember the password, why is it the least bit guessable? If I were setting up an account to be shared between several coworkers, “s3cr3t” might be cute. But no human will ever type the passwords I’ve been setting, so why not use 30 characters of banging on the keyboard, with mixed case, numbers, and symbols galore?
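
You don’t even have to do the banging yourself; either of these will spit out a password no human will ever guess or need to remember (the character set in the second one is just an example):

  openssl rand -base64 30
  tr -dc 'A-Za-z0-9_!@#%^&*' < /dev/urandom | head -c 30; echo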

But going a step further, a lot of things, like my bank login, are things that (1) Firefox usually remembers, and (2) I can have e-mailed to me if I forget them. Why not do the same there?

And an obligatory shout-out of shame to American Express, which still prohibits their customers from setting passwords longer than 8 characters. Seriously, guys, that would have been lame in 1997.

Team Cymru

Team Cymru is a pretty nifty site. I’ve found their IP-to-ASN mapping to be very helpful in the past, but just noticed that they also compile stats on malware, including a queryable malware hash database. It doesn’t aim to capture everything, but it looks like it could be a nice complement in identifying known badware.
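
Both services are queryable with nothing fancier than whois. Roughly like this; I’m going from memory, so check their documentation for the exact formats, and the hash below is just a placeholder for an MD5 you’ve computed from a suspect file:

  # IP-to-ASN mapping
  whois -h whois.cymru.com " -v 198.6.1.65"
  # Malware Hash Registry
  whois -h hash.cymru.com 733a48a9cb49651d72fe824ca91e8d00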

For all of the amazingly helpful services they provide, they seem to keep a pretty low profile in the community.

Slowloris, Part II

I posted earlier about how it seemed like one could use haproxy in front of Apache to help mitigate the damage that slowloris (Slow Loris) can do. In this (Part II), I put my money where my mouth was and tried it. And, unsurprisingly, it turns out that haproxy rocks.

I had thought this would be a quick task for iptables: add an iptables rule with a connlimit clause so there couldn’t be more than 16 simultaneous connections to port 80 from any given IP. It wouldn’t guard against a distributed attack, but then again, a distributed attack is a much more complicated beast. If you’re just trying to block Slowloris, or, in more general terms, one buffoon trying to exhaust your servers’ resources, iptables makes good sense. Except it doesn’t work.

The rule should look something like iptables -I RH-Firewall-1-INPUT 10 -p tcp --syn --dport 80 -m connlimit --connlimit-above 16 -j DROP. But that comes back for me (CentOS 5.3) with a cryptic error: iptables: Unknown error 4294967295.

This posting summarizes the issue, but the short version is that you need to do a fair amount of work before that rule will actually function. The solution I almost went for was to roll a new kernel by hand. But that gets hairy when you’re running a Xen host with PAE, and it also means you’re overriding your package manager. Entirely possible, but it’s paving the road for headaches later on.

But we’re not totally helpless. The other day I had posted about haproxy, a nifty software load balancer. It turns out I was right in my speculation the other day: it can buffer connections until they’re ready. Normally it makes no sense to run a load balancer in front of a single server. If anything, it’s one more thing to fail. But when it comes to defending against people trying really lame attacks against Apache, it turns out that haproxy’s ability to buffer connections is a life saver. The client will talk to haproxy, which will sit there buffering the client’s request. Once they’ve issued a full request, it will send it off to Apache. Since haproxy is much better at handling high volumes of connections, this means that it can be buffering a lot of connections without using any of Apache’s resources. (And, if you know what you’re doing, you can also make haproxy kill off connections that have been idling.)

For testing, I installed a super-basic Apache setup on the Linux box I’m provisioning. Literally, I just did a “yum install httpd” and started it up without any config changes, serving nothing but the main page.

Then I used two other machines. I installed the slowloris package on my Linux laptop and ran it with straight defaults, too. It tries to open 1,000 sockets against a webserver, which will quickly tie up all the workers on a default Apache install. On another laptop, I pulled up a web browser and tried viewing the default page served up by Apache. Without slowloris running, it, quite obviously, worked great. I started slowloris, and immediately the site became unreachable. The browser just sat there waiting, getting no response. Looking on the server, there were about 425 (?) connections to port 80 showing up in netstat. I let it run for a while to hold things open, and there was no indication on the server that anything was wrong. Essentially no load. I stopped the script, the connections went away, and everything went back to working just great. Kind of scary how easy it is to do, actually.
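
For the record, counting those connections on the server is a one-liner; something like:

  netstat -ant | grep ':80 ' | grep -c ESTABLISHED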

So then I set out to change things up: I’d move Apache to port 8080 and firewall it off so no one external could get to it. And I’d have haproxy listen on port 80, “load balancing” against the Apache instance on port 8080. Clients connect to port 80 like before, not aware that haproxy is actually sitting out in front of Apache.
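
Hiding Apache takes about two lines. A sketch: either bind it to loopback outright, or leave it on all interfaces and firewall the port (note that older iptables releases want the negation written as “-i ! lo”):

  # in httpd.conf: listen on 8080, loopback only
  Listen 127.0.0.1:8080
  # or, if Apache stays on all interfaces, reject anything hitting 8080 from outside the box
  iptables -I INPUT -p tcp --dport 8080 ! -i lo -j REJECT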

It looks like the rpmforge repo has an haproxy package, but I couldn’t get yum to find it in short order. Since this was just a proof of concept, I downloaded the binary, gunzipped it, and chmod +x’ed it. You’ll want to point it at a config file; I modified the one here for my use. Here’s the config I ended up using:

global
  daemon
  maxconn       150      # Should be set, but can be raised quite high

defaults
  mode              http
  clitimeout        3000       # maximum inactivity time on the client side
  srvtimeout        3000       # maximum inactivity time on the server side
  timeout connect   3000        # maximum time to wait for a connection attempt to a server to succeed
  timeout http-request 5000   # give up on clients that take more than 5 seconds to send a complete request

  option            httpclose     # disable keepalive (HAProxy does not yet support the HTTP keep-alive mode)
  option            abortonclose  # enable early dropping of aborted requests from pending queue
  option            httpchk       # enable HTTP protocol to check on servers health
  option            forwardfor    # enable insert of X-Forwarded-For headers

  balance roundrobin            # each server is used in turns, according to assigned weight

  #stats enable                  # enable web-stats at /haproxy?stats
  #stats auth        admin:pass  # force HTTP Auth to view stats
  #stats refresh     5s          # refresh rate of stats page

listen blah *:80
  # - equal weights on all servers
  # - maxconn will queue requests at HAProxy if limit is reached
  # - minconn dynamically scales the connection concurrency (bound by maxconn) depending on size of HAProxy queue
  # - check health every 20000 milliseconds (20 seconds)

  server web1 127.0.0.1:8080 weight 1 minconn 3 maxconn 6 check inter 20000

This is extremely basic and just forms a proof-of-concept. Most of the values are fairly quick guesses. But between two boxes on my LAN, I didn’t care. Obviously, tweak and customize before using this for real.
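
Starting it with that config is just a matter of pointing the binary at the file; the -c flag does a syntax check, which is worth running first:

  ./haproxy -c -f ./haproxy.cfg    # validate the configuration
  ./haproxy -f ./haproxy.cfg       # run it (it daemonizes, thanks to the "daemon" line)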

But it does exactly what I hoped. When you connect to port 80, you get haproxy. Once you send a valid request, it goes off to Apache and gets it for you. (It “load balances” across the only server there is, which also happens to be on the same machine.) So slowloris sits there hammering out connections, and Apache doesn’t ever get them since they’re incomplete. haproxy just sits there rolling its eyes at the dumb client that keeps issuing incomplete requests. Meanwhile, a real client comes along, sends a complete request, and haproxy sends it right through.

Note that this does currently have a maximum ceiling set on connections (and quite a low one, at that), so you’re not entirely out of the woods. You can raise it, but don’t set it arbitrarily high or you’ll be worse off when a determined attack forces it to eat up all of the system’s memory. Another issue is that this configuration just forwards requests, which means your application won’t see the client’s external IP: everything arrives as 127.0.0.1, with the real address in an X-Forwarded-For header. It doesn’t have to be that way; you just need to play with the config.
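
If all you need is sane logging, one workaround (a sketch; the log path and format name are just examples) is to have Apache log the X-Forwarded-For header in place of the connection’s address:

  # in httpd.conf: log the forwarded-for address instead of %h
  LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b" xff
  CustomLog logs/access_log xff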

Edit: See the comments. I threw this together mostly as a proof of concept. Willy Tarreau, who knows a thing or two about haproxy (*grin*), pointed out some flaws in my initial configuration. I’ve updated it a bit, but you’re really doing yourself a disservice if you blindly copy the configuration here and enable it on a production server. Check out the haproxy site for more in-depth information about exactly how to configure it. Of note, I only accept 150 connections (which may be too low for you), and I close any connection that hasn’t finished sending its request within 5 seconds. That’s way too long for a simple GET, but I’ve seen servers at work time out after 15 minutes when someone on a dialup connection attempts to upload myriad photos. Depending on what you do, 5 seconds is probably either too long or way too short. I commented out the lines enabling stats; you might want to turn them back on, but probably not with the default password. And as I said earlier, Igvita is where I drew my inspiration for the haproxy configuration; definitely check that out for more.

Apache, squid, etc. “vulnerability”

There’s a bit of buzz around slowloris, which aims to take down webservers through resource starvation using a low-bandwidth DoS attack. It’s actually somewhat like a SYN flood, but it targets HTTP servers specifically, not the TCP stack. Basically, it opens many HTTP connections and “stutters” its requests, forcing the server to hold a large number of concurrent requests open. What’s interesting is that the apparent “defense,” limiting the maximum number of threads the webserver can have, actually makes the attack even easier: if you configure Apache to not serve more than 150 concurrent connections, an attacker just needs to grab 150 concurrent connections and send data just slowly enough not to time out.

FreeBSD has accf_http (possibly sometimes known as HTTPReady?) that will “proxy” connections by buffering the request until the full one comes in. It doesn’t seem like it was meant for security per se, so much as to help cut down the load on webservers. As is pointed out elsewhere, accf_http doesn’t handle POST requests. Even though 95% (made-up stat) of new UNIX-ish server tools are aimed at Linux users, I sometimes feel like BSD gets all the really cool ones like accf_http and spamd and OpenBGPD and so on. We Linux people get good virtualization support, though. 🙂
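
For the curious, using accf_http is just a kernel module plus a directive in Apache 2.2; something like this, going from the docs rather than a FreeBSD box in front of me:

  kldload accf_http             # load the accept filter module
  # then, in httpd.conf:
  AcceptFilter http httpready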

IIS and lighttpd aren’t affected; squid, Apache, and a few I haven’t heard of are. Of note, squid is a proxy server, not a webserver, but it’s sometimes used as a “reverse proxy” in front of webservers. From its description, varnish seems as if it may be vulnerable, too. I’m curious about why IIS and lighttpd aren’t affected. lighttpd is meant to be super-fast and pretty small, so I wonder if it just doesn’t launch new threads to handle connections. I don’t know about IIS.

What seems to kill Apache is that it’s configured to start refusing incoming connections after a set limit (150 is a common default), which is meant to keep it from dying under heavy load. I have a hunch that removing this limit would make things worse, though, since you’d then be able to exhaust “real” resources (e.g., drive the machine into swap).
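
For reference, that knob is MaxClients in the prefork MPM section of httpd.conf; 150 is the value mentioned above, though your distro’s stock config may differ:

  <IfModule prefork.c>
      MaxClients 150
  </IfModule>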

As a cheap, temporary hack, you can probably get creative with firewall rules and limit the number of concurrent connections any one IP can have open with you. This makes good sense to do anyway, but it won’t stop someone with access to many machines from doing this. Actually, at a glance, haproxy looks to buffer incoming connections. Might be interesting to let it sit in front of a single Apache box (which seemingly makes no sense: “load balancing” across one server?) and see how it performs.