I posted earlier about how it seemed like one could use haproxy in front of Apache to help mitigate the damage that slowloris (Slow Loris) can do. In this (Part II), I put my money where my mouth was and tried it. And, unsurprisingly, it turns out that haproxy rocks.
I had thought this would be a quick task for iptables: add an iptables rule with a connlimit clause so there couldn’t be more than 16 simultaneous connections to port 80 from any given IP. It wouldn’t guard against a distributed attack, but then again, a distributed attack is a much more complicated beast. If you’re just trying to block Slowloris, or, in more general terms, one buffoon trying to exhaust your server’s resources, iptables makes good sense. Except it doesn’t work.
The rule should look something like iptables -I RH-Firewall-1-INPUT 10 -p tcp --syn --dport 80 -m connlimit --connlimit-above 16 -j DROP. But that comes back for me (CentOS 5.3) with a cryptic error: iptables: Unknown error 4294967295.
This posting summarizes the issue, but the short version is that you need to do lots of work if you want this to work. The solution I almost went for was to roll a new kernel by hand. But this gets hairy when you’re running a Xen host with PAE, and it also means you’re overriding your package manager. Entirely possible, but it’s paving the road for headaches later on.
But we’re not totally helpless. The other day I posted about haproxy, a nifty software load balancer, and it turns out my speculation was right: it can buffer connections until they’re ready. Normally it makes no sense to run a load balancer in front of a single server. If anything, it’s one more thing to fail. But when it comes to defending against people trying really lame attacks against Apache, it turns out that haproxy’s ability to buffer connections is a life saver. The client talks to haproxy, which sits there buffering the client’s request. Once the client has issued a full request, haproxy sends it off to Apache. Since haproxy is much better at handling high volumes of connections, it can buffer a lot of connections without using any of Apache’s resources. (And, if you know what you’re doing, you can also make haproxy kill off connections that have been idling.)
For testing, I installed a super-basic Apache setup on the Linux box I’m provisioning. Literally, I just did a “yum install httpd” and started it up without any config changes, serving nothing but the main page.
Then I used two other machines. I installed the slowloris package on my Linux laptop and ran it with straight defaults, too. It tries to open 1,000 sockets against a webserver, which will quickly tie up all the workers on a default Apache install. On another laptop, I pulled up a web browser and tried viewing the default page served up by Apache. Without slowloris running, it, quite obviously, worked great. I started slowloris, and immediately, the site became unreachable. The browser just sat there waiting, getting no response. Looking on the server, there were about 425 (?) connections to port 80 showing up in netstat. I let it run for a while to hold things open, and there was no indication on the server that anything was wrong. Essentially no load. I stopped the script and the connections went away and everything went back to working just great. Kind of scary how easy it is to do, actually.
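If you want to watch this from the server side, counting connections to port 80 is enough. A rough one-liner, assuming a Linux net-tools netstat (field positions can vary on other systems):

```shell
# Count sockets whose local address (column 4 of `netstat -ant` output) ends in :80.
netstat -ant | awk '$4 ~ /:80$/' | wc -l
```

With slowloris running against a stock Apache you should see the number jump while load stays near zero.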
So then I set out to change things up: I’d move Apache to port 8080 and firewall it off so no one external could get to it. And I’d have haproxy listen on port 80, “load balancing” against the Apache instance on port 8080. Clients connect to port 80 like before, not aware that haproxy is actually sitting out in front of Apache.
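Concretely, that’s a one-line change in httpd.conf plus a firewall rule. A sketch, assuming the stock CentOS config location and an external interface of eth0 (adjust both for your setup):

```
# /etc/httpd/conf/httpd.conf: bind Apache to 8080 instead of 80
Listen 8080

# Drop external traffic to 8080 so only haproxy (on localhost) can reach it:
#   iptables -I RH-Firewall-1-INPUT -p tcp -i eth0 --dport 8080 -j DROP
```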
It looks like the rpmforge repo has an haproxy package, but I couldn’t get yum to find it in short order. Since this was for a proof of concept, I just downloaded the binary, gunzipped it, and chmod +x’ed it. You’ll want to point it to a config file; I modified the one here for my use. Here’s the config I ended up using:
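For the record, the quick-and-dirty install amounted to something like this. The URL and filename are placeholders, not a real download path; grab the current release for your architecture from the haproxy site:

```shell
# Fetch a prebuilt haproxy binary, unpack it, and start it against a config file.
# (haproxy-linux-i586.gz and the URL are illustrative only.)
wget http://example.com/haproxy-linux-i586.gz
gunzip haproxy-linux-i586.gz
chmod +x haproxy-linux-i586
./haproxy-linux-i586 -f /etc/haproxy.cfg
```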
global
    daemon
    maxconn 150                # Should be set, but can be raised quite high

defaults
    mode http
    clitimeout 3000            # maximum inactivity time on the client side
    srvtimeout 3000            # maximum inactivity time on the server side
    timeout connect 3000       # maximum time to wait for a connection attempt to a server to succeed
    timeout http-request 5000  # Close HTTP sessions after 5 seconds
    option httpclose           # disable keepalive (HAProxy does not yet support the HTTP keep-alive mode)
    option abortonclose        # enable early dropping of aborted requests from pending queue
    option httpchk             # enable HTTP protocol to check on servers health
    option forwardfor          # enable insert of X-Forwarded-For headers
    balance roundrobin         # each server is used in turns, according to assigned weight
    #stats enable              # enable web-stats at /haproxy?stats
    #stats auth admin:pass     # force HTTP Auth to view stats
    #stats refresh 5s          # refresh rate of stats page

listen blah *:80
    # - equal weights on all servers
    # - maxconn will queue requests at HAProxy if limit is reached
    # - minconn dynamically scales the connection concurrency (bound by maxconn) depending on size of HAProxy queue
    # - check health every 20000 milliseconds
    server web1 127.0.0.1:8080 weight 1 minconn 3 maxconn 6 check inter 20000
This is extremely basic and just forms a proof-of-concept. Most of the values are fairly quick guesses. But between two boxes on my LAN, I didn’t care. Obviously, tweak and customize before using this for real.
But it does exactly what I hoped. When you connect to port 80, you get haproxy. Once you send a valid request, it goes off to Apache and gets it for you. (It “load balances” across the only server there is, which also happens to be on the same machine.) So slowloris sits there hammering out connections, and Apache doesn’t ever get them since they’re incomplete. haproxy just sits there rolling its eyes at the dumb client that keeps issuing incomplete requests. Meanwhile, a real client comes along, sends a complete request, and haproxy sends it right through.
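You can watch this behavior by hand: send haproxy an incomplete request and note that Apache’s access log stays silent, then complete the request and watch it go through. A sketch with netcat (the -q flag is the GNU netcat version; "server" is a stand-in for your host):

```shell
# An incomplete request: headers never terminated by a blank line.
# haproxy buffers this; nothing hits Apache on port 8080.
printf 'GET / HTTP/1.0\r\n' | nc -q 2 server 80

# A complete request (the blank line ends the headers) is forwarded immediately.
printf 'GET / HTTP/1.0\r\n\r\n' | nc -q 2 server 80
```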
Note that this does currently have a maximum ceiling set on connections (and quite low, at that), so you’re not entirely out of the woods. You can raise it, but don’t set it arbitrarily high or you’ll be worse off when a determined attack forces it to eat up all the system’s memory. Another issue is that this configuration just forwards requests, which means your application won’t see the external IP: everything comes through as 127.0.0.1, though with an X-Forwarded-For header carrying the real address. It doesn’t have to be that way; you just need to play with the config.
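If you just need the real address in your logs (as opposed to inside the application), one option is to log the X-Forwarded-For header that haproxy inserts. A sketch for httpd.conf, using mod_log_config:

```
# Log the X-Forwarded-For value where the client IP (%h) would normally go.
LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b" proxied
CustomLog logs/access_log proxied
```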
Edit: See the comments. I threw this together as a proof of concept, mostly. Willy Tarreau, who knows a thing or two about haproxy (*grin*), pointed out some flaws in my initial configuration. I’ve updated it a bit, but you’re really doing yourself a disservice if you blindly copy the configuration here and enable it on a production server. Check out the haproxy site for more in-depth information about exactly how to configure it. Of note, I only accept 150 connections (may be too low for you), and I close any HTTP connection that’s lasted 5 seconds. This is way too long for a simple GET, but I’ve seen servers at work time out after 15 minutes when someone on a dialup connection attempts to upload myriad photos. Depending on what you do, 5 seconds is probably too long or way too short. I commented out the lines enabling stats; you might want to turn them back on, but probably not with the default password. And as I said earlier, Igvita is where I drew my inspiration for the haproxy configuration; definitely check that out for more.