Removing Duplicates from MySQL

I Googled that a while ago. How do you remove duplicate rows from a SQL database? (MySQL specifically.)

If you have a ‘simple’ duplicate setup, it’s pretty easy, and lots of other sites go into great detail. (And in that case, you might look into how to require that fields be unique so you don’t have that problem anymore…)
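For the ‘simple’ case, that usually just means adding a unique index once the existing duplicates are cleaned out. A minimal sketch, with made-up names (some_table and email here are hypothetical):

-- Hypothetical example: once any existing duplicates are gone,
-- MySQL will reject new rows that would duplicate this column.
ALTER TABLE some_table ADD UNIQUE KEY uniq_email (email);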

In our case, we ran into a more obscure problem where we expected the combination of two fields to be unique: a summary table of user_id and month. (Nothing secret there, but I’ll refer to the table as “tablename,” since blogging about our production database schemas seems like a bad idea…) A brief bug introduced some duplicates. My task? Remove the duplicate rows.

So I went about it somewhat backwards, and started wondering how to do a SELECT DISTINCT across two columns while still getting the whole row back. You can’t, really. Someone recommended something clever: CONCAT() across the two field names, and a DISTINCT on that: SELECT DISTINCT(CONCAT(user_id, month))… After toying with that a bit, I was able to create a temporary table housing a de-duped version of our data:


-- One row per (user_id, month) combination; the CONCAT() is just a grouping key.
-- (If the two values could run together ambiguously, add a separator: CONCAT(user_id, '-', month).)
CREATE TEMPORARY TABLE nodupes
SELECT DISTINCT(CONCAT(user_id, month)) asdf, tablename.*
FROM tablename
GROUP BY asdf;

But how does that help? The problem with temporary tables is that they’re, uhh, temporary. If you lived on the edge, you could probably “DROP TABLE original_table; CREATE TABLE original_table (SELECT * FROM temp_table)”, but this has a few horrible problems. One is that, for a window of time, the only copy of your data is a temporary table that vanishes if your session dies; if anything went wrong, you’d lose the data. Furthermore, in our case, this is a live, busy table. Dropping it in production is awful, and it would take many minutes to repopulate it since it’s a huge table.

So then it was clear what I wanted to do: something like DELETE FROM original_table WHERE id NOT IN(SELECT id FROM temp_table_with_no_dupes), except not using an IN() against a table with hundreds of thousands of rows. And this is where it got tricky. The key is to use that temporary table to derive a second temporary table, housing only the duplicates. (Actually, speaking of “the key,” create one on your temporary table: CREATE INDEX gofast ON nodupes(id); — you’ll need this soon, as it’s the difference between the next query taking 10 seconds and me aborting it after 30 minutes.)

The temporary table of duplicate rows is basically derived by joining your ‘real’ table against the de-duped list, with a rather obscure trick: a LEFT JOIN where you keep only the rows that find no match (WHERE nd.id IS NULL). So here’s what I did:


-- Every row of tablename whose id isn't one of the "keeper" rows in nodupes, i.e. the extra copies we want to delete.
CREATE TEMPORARY TABLE dupes
(SELECT tablename.*
FROM tablename
LEFT OUTER JOIN nodupes nd ON nd.id=tablename.id
WHERE nd.id IS NULL);

For bonus points, we’re selecting from the real table, so the silly CONCAT() field doesn’t come through into this temporary table.

So once again, I momentarily flashed a sly grin. This is going to be easy! DELETE FROM tablename WHERE id IN(SELECT id FROM dupes);. But then my sly grin faded. It turns out that there are two problems here. The first is that there are still a lot of rows. In my case, I was doing an IN() with about 6,000 rows. MySQL doesn’t like that. But there’s a bigger problem. (Thanks to Andrew for confirming that this is a real bug: #9090.) MySQL doesn’t use indexes when you use a subquery like that. It seems the bug is pretty narrow in scope, but this is precisely it. So that query is awful, basically doing 6,000 full table scans. And on a really big table.
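If you want to see the problem for yourself, EXPLAIN the SELECT equivalent of that DELETE. This is just a sketch (the exact plan output varies by MySQL version), but what you’re looking for is a full scan on tablename rather than a lookup on its index:

-- Sketch: inspect the plan for the SELECT version of the subquery-based delete.
-- A full table scan on tablename here is the index-ignoring behavior described above.
EXPLAIN
SELECT tablename.id
FROM tablename
WHERE tablename.id IN (SELECT id FROM dupes);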

So the solution comes from something I’ve never even thought about doing before. MySQL calls it the “multi-delete.” I call it the, “Holy smokes, this could go really, really wrong if I’m not careful” delete.


-- Don't run this yet.
-- Deletes every row of tablename that has a matching id in dupes.
DELETE bb
FROM tablename AS bb
RIGHT JOIN dupes d
ON (d.id=bb.id);

That’s the command you want, but there are a few caveats. First, change the DELETE bb into a SELECT * and run it to see what happens. Make sure it’s only grabbing the duplicate rows. You’re not able to use a LIMIT in a multi-delete query, so you have to be extremely careful here. Second, we tested this multiple times on various test databases before running it against our master database, and I was still uneasy when it came time to hit enter. Take backups before you do this. And finally, in many cases, MySQL will expect an explicit “USE database_name” before the multi-delete, where database_name is the database housing the table you’re deleting from. Keep that in mind if you get an error that it can’t find the table, even if you explicitly reference it (via database_name.table_name).
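To make that dry run concrete, here’s the same query with the DELETE bb swapped for a SELECT (I use bb.* so that only the rows slated for deletion come back), plus a quick count to compare against dupes:

-- Dry run: this should return exactly the rows the multi-delete would remove.
SELECT bb.*
FROM tablename AS bb
RIGHT JOIN dupes d
ON (d.id=bb.id);

-- The row count above should match this:
SELECT COUNT(*) FROM dupes;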

Conficker

People are already calling Conficker the worst worm in years, even though bloggers in Europe (where it’s already April 1) are saying it’s not doing anything.

It looks like ESET, makers of the fine NOD32 anti-virus software, have a page on how to remove Conficker, including what seems to be a free tool for its removal. Based on my limited reading, it seems that installing the latest security patches for Windows pretty much renders you immune. But we all know people who don’t do that.

It looks like Conficker alters some Windows network internals, causing it to exhibit some different fingerprint characteristics when probed, so tools like Nessus and nmap are apparently able to detect it. Nessus is a gigantic pain to set up, though, and I don’t have any machines with Conficker on my home LAN, so I can’t confirm this.

Viewing all cron jobs

Periodically I run into the situation where I’m trying to find a cron job on a particular machine, but I can’t remember which user owns it. At least on CentOS, it’s easy:

cat /var/spool/cron/* will show all the crons; the crontab command doesn’t seem to support listing every user’s jobs at once. The downside is that cat just mashes them all into one list, which is only useful if you don’t care which user a job runs as. Usually I do. Here’s a simple little script to format the output a little bit:

# Print each user's crontab, labeled with the username (the filename).
for f in /var/spool/cron/*; do
        echo "Viewing crons for $(basename "$f")"
        echo "--------------------------------------"
        cat "$f"
        echo
done

SHOW CREATE TABLE

Silly MySQL trick of the day. This is probably no stranger to those who work with MySQL all the time, but it’s a nice trick to have up your sleeve.

For reasons I still can’t wrap my head around, one of the MySQL tables we need for Cacti just up and vanished. It’s not really used for anything essential, so it’s not a great loss, just really annoying. So I need to recreate it. Fortunately, I have MySQL running on another machine.

mysql> desc poller_output;
+---------------+-----------------------+------+-----+---------------------+-------+
| Field         | Type                  | Null | Key | Default             | Extra |
+---------------+-----------------------+------+-----+---------------------+-------+
| local_data_id | mediumint(8) unsigned | NO   | PRI | 0                   |       |
| rrd_name      | varchar(19)           | NO   | PRI | NULL                |       |
| time          | datetime              | NO   | PRI | 0000-00-00 00:00:00 |       |
| output        | text                  | NO   |     | NULL                |       |
+---------------+-----------------------+------+-----+---------------------+-------+
4 rows in set (0.00 sec)

It’s not that complicated. I could take a second and write out the CREATE TABLE statement I’d need to run on the machine missing the table. But here’s a neat trick (run on the system with the table, obviously):

mysql> show create table poller_output\G
*************************** 1. row ***************************
       Table: poller_output
Create Table: CREATE TABLE `poller_output` (
  `local_data_id` mediumint(8) unsigned NOT NULL default '0',
  `rrd_name` varchar(19) NOT NULL default '',
  `time` datetime NOT NULL default '0000-00-00 00:00:00',
  `output` text NOT NULL,
  PRIMARY KEY  (`local_data_id`,`rrd_name`,`time`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1
1 row in set (0.00 sec)

Why write out the CREATE TABLE statement when we can just let MySQL tell us exactly how to re-create a table? I initially discovered this with the considerably more complex issue of views (SHOW CREATE VIEW), but it extends to tables, too. So now we can just copy-and-paste. Voila!

Well, not voila for me, but voila for anyone whose filesystem hasn’t gone corrupt.

Deal

Kingston (who I never knew made CF cards) has a 4GB CompactFlash card for $24 at NewEgg. At 8MB/sec. writes and 10MB/sec. reads, it’s hardly blazing, but if you don’t do any high-volume activity, it’s not a bad deal.

Actually, if you’re not looking for speed or name brands…

They seem better-suited for an ultra-low-power PC than a camera, of course: no one needs a 32GB CF card, and the type of people who would ‘need’ one are the type of people who couldn’t put up with anything but the highest throughput available.

A New Take on RAM and Swap?

A really random thought just popped into my head…

UNIX systems have their “swap” partition, a disk partition where unused stuff in RAM is “swapped out” to disk to make room for newer stuff in RAM. Of course, no hard drive is as fast as RAM, so you obviously want lots of RAM so you never touch swap. Windows has the same concept but calls it a paging file.

But what if your disk were as fast as RAM? I remain fascinated by OCZ’s new 64GB SSD, which has an almost-zero seek time and throughput rivaling the best of hard drives. (Though I’ve yet to read any actual reviews, as it seems to have still not quite shipped.) I suspect that, given factors like the direct bus between your CPU and RAM, and all the work on boosting RAM “speeds,” a solid-state disk wouldn’t literally be as fast as RAM. But I also think that the difference between having more RAM and “swapping out” to SSD would be somewhat negligible.

I think it’d be interesting to test the extent of this… Plunk an SSD (one with high throughput!) into a system, and run it on as little memory as you can. (Though I think you’d be hard-pressed to find anything smaller than 256MB DIMMs these days, and even those might be hard to find? I wonder if Linux has a facility for deliberately overlooking a quantity of its RAM? The mem= kernel boot parameter seems like it would do exactly that.) And with that set up, go crazy with stuff to ensure that you’re using several gigs of “memory.”

We can sit around all day and measure bus speeds and Mbps throughput and latency and seek time, but I think the only meaningful measure would be to try this and see how it feels. I have a hunch that it wouldn’t be that big of a slowdown (compared to RAM), but that the biggest problem would be ensuring your SSD was on a separate controller/bus/channel, so you didn’t obliterate your hard drive performance. While it’s easy to get an affordable system with a couple gigs of RAM now, RAM remains expensive if you need a decent amount of it. Buying a system with 64GB of RAM would still be extraordinarily expensive, but with a 64GB SSD for under $300, you could imitate it fairly well.

Deal!

I’d posted before about my interest in picking up a low-capacity SSD card for my laptop, to drastically speed up disk access. (This actually has nothing to do with my recent posts about slow hard drives…)

Newegg seems to have a 64GB, 2.5″ SATA SSD for $240 after rebate. Interestingly, from the specs, it seems as if not only are the seek times nil (on account of being solid-state), but the throughput exceeds that of your average hard disk. It won’t be released for four days, however. (Found via FatWallet, which also links to a review here.)

For those who aren’t major geeks, SSD is short for “solid-state disk.” Your ordinary hard drive is a bunch of spinning platters, whereas solid-state is the technology you see in a USB thumb drive or the like: no moving parts. The major benefit of SSDs thus far has been seek time: with a normal hard disk, the drive has to find the right spot on the platter and read it. Seek times average 8-10ms on most normal drives, but that adds up quickly with fragmentation or concurrent I/O. With an SSD, there are no moving parts, so “seek time” is pretty much non-existent: files are ready instantly. Early SSDs seemed to not be capable of moving as much data (in terms of MB/sec), though, meaning that SSDs were great for lots of small “random” access, but not so hot for handling big, contiguous files. Now, it’s looking as if OCZ has made SSDs kick butt over normal hard drives, and somehow offered the product at a fraction of what it normally costs. (This 64GB SSD is more normally-priced, to give you an idea of why they haven’t caught on so quickly.)

Incidentally, today I came across deals on two different notebooks for about $700, both of which have 4GB RAM, but 1280×800-pixel screens. The RAM is incredible, as are most of the other specs (though they’re 5400RPM drives), but I think you can do much better on the resolution.

Disk Throughput

I think I’ve alluded earlier to the fact that I’ve been trying to speed up some systems at home, and how some of them are really slow. (I’m starting to suspect Norton, actually, but more on that when I find out more.)

I just came across this spiffy application, which will write and then read a test file to measure disk performance. My laptop gets 27.1 MB/sec. (sequential) write, 41 MB/sec. sequential read, and 29.9 MB/sec. random reads. This was on a 1GB file; it wanted to do a ~4 GB file, but I really didn’t feel like spending the time. I suspect the goal is to make sure that it’s not being “fooled” by caching, but I figured 1GB was sufficient for that. Some of the results show read speeds of 600+ MB/sec., which is most definitely coming from cache. (That said, this is a more “real-life” test… Just don’t think you have a hard drive that does 800MB/sec. reads!)

Location Error vs. Time Error

This post christens my newest category, Thinking Aloud. It’s meant to house random thoughts that pop into my head, versus fully fleshed-out ideas. Thus it’s meant more as an invitation for comments than something factual or informative, and is likely full of errors…

Aside from “time geeks,” those who deal with it professionally, and those intricately familiar with the technical details, most people are probably unaware that each of the GPS satellites carries an atomic clock on board. This is necessary because the system works, in a nutshell, by triangulating your position from various satellites, and an integral detail is knowing precisely where each satellite is at a given time. More precise time means a more precise location, and there’s not much margin of error here. The GPS satellites are also synchronized daily to the “main” atomic clock (actually a bunch of atomic clocks based on a few different standards), so the net result is that the time from a GPS satellite is accurate down to the nanosecond level: they’re within a few billionths of a second of the true time. Of course, GPS units, since they don’t cost millions of dollars, rarely output time this accurately, so even the best units seem to have “only” microsecond accuracy, or time down to a millionth of a second. Still, that’s pretty darn precise.

Thus many–in fact, most–of the stratum 1 NTP servers in the world derive their time from GPS, since it’s now pretty affordable and incredibly accurate.

The problem is that GPS isn’t perfect. Anyone with a GPS probably knows this. It’s liable to be anywhere from a foot off to something like a hundred feet off. This server (I feel bad linking, having just seen what colocation prices out there are like) keeps a scatter plot of its coordinates as reported by GPS. This basically shows the random noise (some would call it jitter) of the signal: the small inaccuracies in GPS are what result in the fixed server seemingly moving around.

We know that an error in location will also cause (or, really, is caused by) an error in time, even if it’s minuscule.

So here’s the wondering-aloud part: we know that the server is not moving. (Or at least, we can reasonably assume it’s not.) So suppose we define one position as “right,” and any deviation from that as inaccurate. We could do what they did with Differential GPS and “precision-survey” the location, which would be very expensive. But we could also go the cheap way, and just take an average. It looks like the center of that scatter graph is around -26.01255, 28.11445. (Unless I’m being dense, that graph seems ‘sideways’ from how we typically view a map, but I digress. The latitude was also stripped of its sign, which put it in Egypt… But again, I digress.)

So suppose we just defined that as the “correct” location, as it’s a good median value. Could we not write code to take the difference in reported location and translate it into a shift in time? Say that six meters East is the same as running 2 microseconds fast? (Totally arbitrary example.) I think the complicating factor wouldn’t be whether it was possible, but knowing what to use as ‘true time,’ since if you picked an inaccurate assumed-accurate location, you’d essentially be introducing error, albeit a constant one. The big question, though, is whether it’s worth it: GPS is quite accurate as it is. I’m a perfectionist, so there’s no such thing as “good enough” time, but I have to wonder whether the benefit would even show up.
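For a rough sense of scale (back-of-the-envelope on my part, not anything out of the GPS spec), a position error translates into a time error through the speed of light:

Δt ≈ Δd / c ≈ 6 m ÷ (3 × 10^8 m/s) ≈ 20 ns

So the six-meters-East example would really be on the order of tens of nanoseconds, not microseconds, which is tiny compared to the microsecond-level output of most GPS units and makes me wonder all the more whether the correction would ever show up.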

Building an Improvised CDN

From my “Random ideas I wish I had the resources to try out…” file…

The way the “pretty big” sites work is that they have a cluster of servers… A few are database servers, many are webservers, and a few are front-end caches. The theory is that the webservers do the ‘heavy lifting’ to generate a page… But many pages, such as the main page of a news site, Wikipedia, or even these blogs, don’t need to be generated every time. The main page only updates every now and then. So you have a caching server, which basically handles all of the connections. If the page is in cache (and still valid), it’s served right then and there. If the page isn’t in cache, the cache gets the page from the backend servers, serves it up, and then adds it to the cache.

The way the “really big” sites work is that they have many data centers across the country and your browser hits the closest one. This improves load times and adds redundancy (data centers do periodically go offline: The Planet went down just last week when a transformer inside blew up and the fire marshals made them shut down all the generators). Depending on whether they’re filthy rich or not, they’ll either use GeoIP-based DNS or have elaborate routing going on. Many companies offer these services, by the way. It’s called a CDN, or Content Delivery Network. Akamai is the most obvious one, though you’ve probably used LimeLight before, too, along with some other less-prominent ones.

I’ve been toying with SilverStripe a bit, which is very spiffy, but it has one fatal flaw in my mind: its out-of-box performance is atrocious. I was testing it on a VPS I hadn’t used before, so I don’t have a good frame of reference, but I got between 4 and 6 pages/second under benchmarking. That was after I turned on MySQL query caching and installed APC. Of course, I was using SilverStripe to build pages that would probably stay unchanged for weeks at a time. The 4-6 pages/second is similar to how WordPress behaved before I worked on optimizing it. For what it’s worth, static content (that is, stuff that doesn’t require talking to databases and running code) can handle 300-1000 pages/second on my server, as some benchmarks I ran demonstrated.

There were two main ways I thought of to enhance SilverStripe’s performance. (Well, a third option, too: realize that no one will visit my SilverStripe site and leave it as-is. But that’s no fun.) The first is to ‘fix’ SilverStripe itself. With WordPress, I tweaked MySQL and set up APC (which gave a bigger boost than with SilverStripe, but still not a huge gain). But then I ended up coding the main page from scratch, and it uses memcache to store the generated page in RAM for a period of time. Instantly, benchmarking showed that I could handle hundreds of pages a second on the meager hardware I’m hosted on. (Soon to change…)

The other option, and one that may actually be preferable, is to just run the software normally, but stick it behind a cache. This might not be an instant fix, as I’m guessing the generated pages are tagged to not allow caching, but that can be fixed. (Aside: people seem to love setting huge expiry times for cached data, like having it cached for an hour. The main page here caches data for 30 seconds, which means that, worst case, the backend would be handling two pages a minute. Although if there were a network involved, I might bump it up or add a way to selectively purge pages from the cache.) squid is the most commonly-used one, but I’ve also heard interesting things about varnish, which was tailor-made for this purpose and is supposed to be a lot more efficient. There’s also pound, which seems interesting, but doesn’t cache on its own. varnish doesn’t yet support gzip compression of pages, which I think would be a major boost in throughput. (Although at the cost of server resources, of course… Unless you could get it working with a hardware gzip card!)

But then I started thinking… That caching frontend doesn’t have to be local! Pick up a machine in another data center as a ‘reverse proxy’ for your site. Viewers hit that, and it will keep an updated page in its cache. Pick a server up when someone’s having a sale and set it up.

But then, you can take it one step further, and pick up boxes to act as your caches in multiple data centers. One on the East Coast, one in the South, one on the West Coast, and one in Europe. (Or whatever your needs call for.) Use PowerDNS with GeoIP to direct viewers to the closest cache. (Indeed, this is what Wikipedia does: they have servers in Florida, the Netherlands, and Korea… DNS hands out the closest server based on where your IP is registered.) You can also keep DNS records with a fairly short TTL, so if one of the cache servers goes offline, you can just pull it from the pool and it’ll stop receiving traffic. You can also use the cache nodes themselves as DNS servers, to help make sure DNS is highly redundant.

It seems to me that it’d be a fairly promising idea, although I think there are some potential kinks you’d have to work out. (Given that you’ll probably have 20-100ms latency in retrieving cache misses, do you set a longer cache duration? But then, do you have to wait an hour for your urgent change to get pushed out? Can you flush only one item from the cache? What about uncacheable content, such as when users have to log in? How do you monitor many nodes to make sure they’re serving the right data? Will ISPs obey your DNS’s TTL records? Most of these things have obvious solutions, really, but the point is that it’s not an off-the-shelf solution, but something you’d have to mold to fit your exact setup.)

Aside: I’d like to put nginx, lighttpd, and Apache in a face-off. I’m reading good things about nginx.