iPhone Price Reductions

Apple’s reduced the price on the 8GB iPhone by $200 — and decided to drop the 4GB model altogether. Until supplies are exhausted, the 4GB iPhone is available at the Apple store for $299. I’m having a hard time coming up with a reason to get one. Except for that cancellation fee. And crippled Bluetooth. And no SDK. Hmm…

A big small camera

Sigma is releasing the DP1, a compact camera with a “full-scale sensor” (about 7-12x larger than most compact digitals), a 28mm (equivalent) F/4 prime lens, shutter and aperture priority or manual exposure control, and a 2.5″ LCD.

I want that in my pocket now, not later.

Children++

Tomorrow my wife is being induced, and, theoretically, our third child will be born. A girl (or so we’ve been told).

When a coworker called a few days ago to let us know that his wife would be induced later that evening, one of my teammates inquired as to the proper response in such a situation.

“Is ‘Good luck!’ acceptable?” he wondered. “How does that come across? There’d be nothing quite like saying, ‘Hope she doesn’t die!’”

MySQL Query Profiling

Apparently MySQL quietly (logical fallacy: I assume it was quiet because I didn’t hear anything about it) slipped a query profiler patch into MySQL 5.0 (as early as 5.0.37) that will give you time statistics about each step of a query’s lifetime. From the aforementioned:

mysql> show profile for query 1;
+--------------------+------------+
| Status             | Duration   |
+--------------------+------------+
| (initialization)   | 0.00006300 |
| Opening tables     | 0.00001400 |
| System lock        | 0.00000600 |
| Table lock         | 0.00001000 |
| init               | 0.00002200 |
| optimizing         | 0.00001100 |
| statistics         | 0.00009300 |
| preparing          | 0.00001700 |
| executing          | 0.00000700 |
| Sending data       | 0.00016800 |
| end                | 0.00000700 |
| query end          | 0.00000500 |
| freeing items      | 0.00001200 |
| closing tables     | 0.00000800 |
| logging slow query | 0.00000400 |
+--------------------+------------+
15 rows in set (0.00 sec)

This looks immensely useful! And while the MySQL documentation is mute on the matter, according to comments on the blog of the original author, the times reported aren’t specific to a thread. In other words, you’ll only get reliable numbers when there’s only one thread running.
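From what I’ve read, the profiler is off by default and has to be enabled per session; the session that produced output like the above would look roughly like this (the query itself is just an example):

```sql
-- Enable per-session profiling, run something, then inspect it.
SET profiling = 1;
SELECT COUNT(*) FROM sessions;  -- any query; table name is illustrative
SHOW PROFILES;                  -- lists recent queries with their ids
SHOW PROFILE FOR QUERY 1;       -- per-step timings, as shown above
```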

Note: this seems to have been left out of the version of MySQL available in Gentoo portage. We’re not sure why, but we have some good minds (i.e., not mine) trying to figure out where it went. Consequently, I haven’t actually used it yet.

Pressing WordPress

This post is designed to inspire our beloved server administrator to get some sort of caching installed.

So I finally decided that enough was enough: I wasn’t going to wait around until I spontaneously combusted with the motivation to build my own blogging/CMS system and finish my site. Last night I downloaded WordPress and began hacking my design into their template framework.

While testing it all out, I noticed that the responses seemed pretty slow. I wasn’t sure if it was the wireless network (I’d been wrangling with it a few hours earlier), so this evening I decided to do some benchmarking. I am something of a performance freak, after all. (So I lied about my intentions at the beginning of the post… who cares?) I’d already looked at some of the code (quite hideous, in my personal opinion), so I had a feeling things wouldn’t be pretty out of the box.

I was right: the default installation managed a measly 4 requests per second. First I installed APC, which, under Ubuntu, requires installing the PEAR and php5-dev packages, then running sudo pecl install apc. The addition of byte-code caching pushed it up to 13 requests/second. Clearly, the code was suffering from runtime — not compilation — issues.
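For reference, the whole APC recipe on that Ubuntu box boiled down to something like this (paths and exact package names are from memory, so treat it as a sketch):

```shell
# Build prerequisites, then APC itself via PECL.
sudo apt-get install php-pear php5-dev
sudo pecl install apc
# Load the extension and restart Apache to pick it up.
echo "extension=apc.so" | sudo tee -a /etc/php5/apache2/php.ini
sudo /etc/init.d/apache2 restart
```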

I didn’t have any real desire to delve too deep into the code, so I opted for the prebuilt WP-Cache plugin. And this one was worth the money: just by enabling the plugin I jumped to nearly 500 requests/second. Note that that’s 125 times better than where I started. (Out of curiosity, I also ran with caching on and APC off; about 200 requests/second.)

In short: if you’re running WordPress and have any self-respect (*grin*), install APC and WP-Cache.

Query Uncache

Apparently, in MySQL 5.0.36, there was a bug that prevented INSERT INTO ... ON DUPLICATE KEY UPDATE ... queries from flushing the query cache for the table they were modifying. According to the MySQL manual:

If a table changes, all cached queries that use the table become invalid and are removed from the cache. This includes queries that use MERGE tables that map to the changed table. A table can be changed by many types of statements, such as INSERT, UPDATE, DELETE, TRUNCATE, ALTER TABLE, DROP TABLE, or DROP DATABASE.

But the evidence speaks for itself:

mysql> insert into sessions values ('fooh', 'blah', now())
  on duplicate key update session_data=values(session_data);
Query OK, 1 row affected (0.00 sec)

mysql> select * from sessions where session_id='fooh';
+------------+--------------+---------------------+
| session_id | session_data | date_modified       |
+------------+--------------+---------------------+
| fooh       | blah         | 2007-07-27 07:41:57 | 
+------------+--------------+---------------------+
1 row in set (0.00 sec)

mysql> insert into sessions values ('fooh', 'blah2', now())
  on duplicate key update session_data=values(session_data);
Query OK, 0 rows affected (0.00 sec)

mysql> select * from sessions where session_id='fooh';
+------------+--------------+---------------------+
| session_id | session_data | date_modified       |
+------------+--------------+---------------------+
| fooh       | blah         | 2007-07-27 07:41:57 | 
+------------+--------------+---------------------+
1 row in set (0.00 sec)

mysql> select sql_no_cache * from sessions where session_id='fooh';
+------------+--------------+---------------------+
| session_id | session_data | date_modified       |
+------------+--------------+---------------------+
| fooh       | blah2        | 2007-07-27 07:41:57 | 
+------------+--------------+---------------------+
1 row in set (0.00 sec)

Now, for us the fix is as simple as adding the SQL_NO_CACHE hint to our queries (or upgrading our version of MySQL, possibly). And, actually, adding it to the session queries isn’t a bad idea anyway — there’s not really any point in attempting to cache data from a table that gets written to on every page load. This could free up space in the query cache for other data that might have a chance of sticking.

As far as I can tell, this bug was fixed in 5.0.41 (at the latest).

Getting Exaile to sing

If you want to install Exaile on your Gentoo box AND have it play MP3s (and perhaps this goes for all multimedia applications), you’ll want to enable the mad USE flag.
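A minimal sketch of what that looks like, assuming the usual package.use setup (the package atom is my guess; check emerge -s exaile for the real one):

```shell
# Enable the mad USE flag (MP3 decoding via libmad) for Exaile only.
echo "media-sound/exaile mad" >> /etc/portage/package.use
# Rebuild with the new flag.
emerge -av media-sound/exaile
```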

Oh, and so far I’d recommend Exaile if you’re looking for a player that doesn’t come with a lot of baggage — like Gnome/KDE libraries. It’s fitting into my XFCE4 desktop very nicely and has all of the features you’d (or at least, I’d) expect from a player.

Emerging XDebug 2.0.0

XDebug has finally (after ~4 years, according to their website) gone 2.0. Unfortunately, there’s no ebuild for the new version in Gentoo’s Portage. Luckily, they’re really easy to create, as all you have to do is change the name of the file.

The only trick, however, is that you’ll want to set it up in a portage overlay. But this is easy to set up, too.

Creating an Overlay

I created my overlay in /usr/local/portage; it gives me a convenient place to store hacked up ebuilds (like this one). First, create the directory:

# mkdir /usr/local/portage
# cd /usr/local/portage

Now you need to add this directory to the PORTDIR_OVERLAY variable in /etc/make.conf. Multiple values are separated by spaces.
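Concretely, the line in /etc/make.conf ends up as:

```shell
# /etc/make.conf — list additional overlays in the same quoted string
PORTDIR_OVERLAY="/usr/local/portage"
```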

Building the ebuilds

# cd /usr/local/portage/
# mkdir dev-php5
# cd dev-php5
# mkdir xdebug
# cp /usr/portage/dev-php5/xdebug/xdebug-2.0.0-rc4.ebuild ./xdebug-2.0.0.ebuild
# ebuild ./xdebug-2.0.0.ebuild digest

The xdebug ebuild also depends on the xdebug-client package being the same version, so create one of those as well:

# cd /usr/local/portage/
# mkdir dev-php
# cd dev-php
# mkdir xdebug-client
# cp /usr/portage/dev-php/xdebug-client/xdebug-client-2.0.0-rc4.ebuild ./xdebug-client-2.0.0.ebuild
# ebuild ./xdebug-client-2.0.0.ebuild digest

Done

And now you should be ready to emerge the package like usual!

Including benchmarks

As part of the namespace discussion, the performance difference between including multiple files and concatenating those files into a single file came up, mainly because of the aforementioned limitation.

In response to an argument for concatenation, Rasmus Lerdorf (original author of PHP) said:

Note that what you are saving by combining the files into one is just a single stat syscall per file. And that can be alleviated by setting apc.stat=0 in your config. That of course means you need to restart your server or flush your cache whenever you want to update the files. In apc.stat=0 mode you gain nothing by merging all the files.

Now, I’d always theorized that concatenation could present a measurable performance increase, so I was a little miffed by this statement. Consequently, I ventured forth to prove it out.

Setup

First, I generated 100 files each containing a single class. The files and classes were named consecutively, 1-100. I then created one script that dynamically included each file, another that statically included each file, and one that contained all of the files concatenated together. I used Apache Bench and had APC enabled. I ran one set of tests with apc.stat=0 and one with apc.stat=1. The tests were run on an AMD Sempron 2600+ with 512MB of RAM, running Apache 2.0.58 with mpm-prefork and PHP 5.1.6.
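I didn’t keep the generator script, but the fixtures are easy to reproduce; a sketch (file and class names are illustrative):

```shell
# Generate 100 one-class files, plus the three include strategies.
mkdir -p classes
for i in $(seq 1 100); do
  printf '<?php\nclass C%d {}\n' "$i" > "classes/$i.php"
done
# Dynamic: build the include path in a loop at runtime.
printf '<?php\nfor ($i = 1; $i <= 100; $i++) include "classes/$i.php";\n' > dynamic.php
# Static: one literal include line per file.
{ echo '<?php'; for i in $(seq 1 100); do echo "include 'classes/$i.php';"; done; } > static.php
# Concatenated: everything in one file, keeping a single opening tag.
{ echo '<?php'; sed '/^<?php$/d' classes/*.php; } > concat.php
```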

Results

I was a little surprised by the results, because they were better than I’d ever theorized: even with apc.stat=0, the concatenated file’s request times were less than half those of the static list.

Dynamic

Requests per second:    247.86 [#/sec] (mean)
Time per request:       40.346 [ms] (mean)

Static

Requests per second:    279.86 [#/sec] (mean)
Time per request:       35.732 [ms] (mean)

Concatenated

Requests per second:    605.07 [#/sec] (mean)
Time per request:       16.527 [ms] (mean)

Also of note: the dynamic includes didn’t really lose by much to the static includes. That should make people using __autoload (like me, at work) happy (it does).

Apparently, there’s still some performance to be gained by concatenating your libraries into a single file, even with a byte-code cache. Now, granted, you may not always include 100 files, or your entire library. However, by analyzing your usage patterns, you could always create a single file containing the classes you use the most, and leave those only occasionally used to your autoload implementation.

Solving the wrong problem

I’m going to apologize in advance for bringing this up again, but there’s been more talk on the php.internals list about the namespace patch that was applied to PHP 6. And somewhere in the discussion (erm, I think discussion implies that there are two opinions given equal weight), someone linked to the original post that started this round.

Main assumption of the model is that the problem that we are to solve is the problem of the very long class names in PHP libraries.

Suddenly, I realized what was wrong: they set out to solve the wrong problem!

Namespaces reduce typing? No. Namespaces allow the same symbol to be used in two contexts. Importing a namespace, thereby saving keystrokes, is just a nice (and popular) addition.
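To make the distinction concrete, here’s a contrived sketch (the class and namespace names are invented, and it uses the :: separator from the patch under discussion, so take the syntax with a grain of salt):

```php
<?php
// In db_lib.php: one library's Connection class.
namespace DbLib;
class Connection {}

// In http_lib.php: another library's Connection class. No collision:
// that is the actual point of namespaces.
namespace HttpLib;
class Connection {}

// In application code: both symbols coexist, addressed by full name.
// Importing one (saving keystrokes) is merely a convenience on top.
$db   = new DbLib::Connection();
$http = new HttpLib::Connection();
```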

I’m not giving up hope, though. Maybe, just maybe, one of us will cut through the fog.
