I've been working on the performance of my Drupal site and thought I'd share my results to date in case they might help someone else. My baseline performance using out-of-the-box settings was around 28 requests per second, and the end result is a rate of 68 requests per second.
The site, www.kidpub.com, is lightly loaded, with between 400 and 600 visits per day for a total of 3,000 to 4,000 pageviews per day. The system is a dedicated 1.8GHz Celeron with 500M of RAM and a 100Mb Ethernet connection. It's running CentOS, Apache 2.0.52, the latest build of Drupal 5, and MySQL 5.
I looked at three main areas for performance tweaking: Apache, MySQL, and PHP. Most of the changes are pretty simple but result in good performance gains.
Benchmarking Apache with ab
Most of my tuning was done in Apache. I found that the default settings for the web server were not appropriate for a small system such as mine; in many cases they were 10x higher than I needed. I used the ab utility to benchmark server performance; for example:
ab -n 500 -c 50 http://www.kidpub.com/latest/
This makes 500 requests for /latest with 50 concurrent users. The output of ab looks like this:
Document Path: /latest
Document Length: 25166 bytes
Concurrency Level: 50
Time taken for tests: 7.344847 seconds
Complete requests: 500
Failed requests: 0
Write errors: 0
Total transferred: 10614843 bytes
HTML transferred: 10405231 bytes
Requests per second: 68.07 [#/sec] (mean)
Time per request: 734.485 [ms] (mean)
Time per request: 14.690 [ms] (mean, across all concurrent requests)
Transfer rate: 1411.33 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 7 15.5 0 59
Processing: 44 650 210.2 729 1110
Waiting: 2 636 204.9 711 1109
Total: 50 658 207.2 747 1123
Percentage of the requests served within a certain time (ms)
100% 1123 (longest request)
This is one of my heavier pages. You can see that 99% of the requests were served in under a second, with a total throughput of about 68 requests per second. My goal was to bring this representative page down from its old value of 5s average response time to closer to 1s.
Most of the tuning done in Apache was around the number of httpd processes started and kept alive (my server uses the prefork MPM). Here's the prefork section from httpd.conf:
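A minimal version of that stanza, using the values discussed in the next few paragraphs:

```apache
<IfModule prefork.c>
StartServers          3
MinSpareServers       2
MaxSpareServers       3
MaxClients           15
MaxRequestsPerChild 5000
</IfModule>
```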
The big problem for a small server is that the default values for these settings are quite high. For example, MaxClients is the total number of httpd child processes that may be running simultaneously. The default value is 150. The issue for Drupal is that each of those child processes is going to allocate a large chunk of physical memory...I typically see 15M to 20M per process on my server. Twenty httpd processes, each allocating 20M, would consume 400M of the 500M of physical memory and the system would start to swap. Add another twenty and the system will be thrashing.
As a rule of thumb, I allocate 50% of the available physical memory to Apache. So, 250M / 20M per process = 12 httpd processes maximum. I add a few more for headroom under the assumption that the median memory footprint is smaller than 20M. This allows plenty of room for MySQL, PHP, and the operating system.
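That rule of thumb is easy to script. This is just a sketch of the arithmetic above; on a live box, something like `ps -C httpd -o rss=` will show the actual per-process resident sizes to plug in:

```shell
# Rule-of-thumb sizing for MaxClients: give Apache half of
# physical RAM, then divide by the worst-case httpd footprint.
TOTAL_RAM_MB=500        # physical memory on this box
PER_PROCESS_MB=20       # worst-case httpd resident size
APACHE_BUDGET_MB=$((TOTAL_RAM_MB / 2))
echo $((APACHE_BUDGET_MB / PER_PROCESS_MB))   # prints 12
```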
Based on this stanza in httpd.conf, then, my server will initially start 3 httpd processes (StartServers). Based on load, there should always be at least 2 processes idle and waiting for a request (MinSpareServers) but no more than 3 idle httpd processes (MaxSpareServers). At no time will there be more than 15 httpd processes running (MaxClients).
MaxRequestsPerChild is set relatively low (5000 requests). When a process exceeds MaxRequestsPerChild, it is killed and, if necessary, a new process is spawned to replace it. I do this to minimize the size of each process...an httpd process will allocate memory if needed, but it doesn't release it. For example, if the very first request served by a process is a big one, say 25M, and the remaining requests are small, say 10M, the process will hang on to its 25M of memory. By limiting the total number of requests a process can serve, I'll get a fresh process with a small memory allocation periodically. This has to be balanced against the cost of process creation, but the impact is minimal.
By default, KeepAlive is turned off. By turning it on (KeepAlive On), we allow a single TCP connection to make multiple requests without dropping the connection. For Drupal this is important, since each page typically has several elements on it, and a single hit on that page might make multiple requests. If KeepAlive is off, each element request will result in a new TCP connection and its associated overhead. The total number of requests allowed per connection is set by MaxKeepAliveRequests, here set to 100. The number of seconds the connection will stay up is set by KeepAliveTimeout, and I've set it to twice the length of my longest average page request.
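The corresponding httpd.conf lines look like this. The 2-second timeout is an example value, derived from the roughly 1-second worst-case page shown in the ab output above:

```apache
KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 2
```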
Changes to MySQL configuration cover two areas: caching and connections. These changes are made in /etc/my.cnf.
MySQL limits the maximum number of simultaneous connections to the database. Each httpd process can open a connection, so I've set the limit to the largest number of httpd processes I expect to see (MaxClients), or 15. I've also set aside 64M of memory to hold cached queries; on my site, there are several large queries that change infrequently, and the idea is that these will be served from cache when possible.
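In my.cnf, under the [mysqld] section, that works out to the following. The query_cache_type line is shown for completeness (1 means "cache every cacheable query", which is the usual default in MySQL 5):

```ini
[mysqld]
max_connections  = 15
query_cache_type = 1
query_cache_size = 64M
```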
You can examine the state of the cache from the mysql command line:
mysql> show status like 'Qcache%';
+-------------------------+----------+
| Variable_name           | Value    |
+-------------------------+----------+
| Qcache_free_blocks      | 9990     |
| Qcache_free_memory      | 34431360 |
| Qcache_hits             | 2165383  |
| Qcache_inserts          | 461500   |
| Qcache_lowmem_prunes    | 113692   |
| Qcache_not_cached       | 1894     |
| Qcache_queries_in_cache | 28203    |
| Qcache_total_blocks     | 66628    |
+-------------------------+----------+
8 rows in set (0.00 sec)
Here we can see that there have been over 2 million cache hits and that about half of the allocated cache is still free (Qcache_free_memory). Some cached queries have been evicted to make room for new ones (Qcache_lowmem_prunes), so it might be useful to increase the value of query_cache_size in my.cnf slightly.
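You can also experiment with a larger cache at runtime before committing the change to my.cnf; the 96M figure here is just an example, and the SET GLOBAL statement requires the SUPER privilege:

```sql
-- Try a larger cache without restarting mysqld.
SET GLOBAL query_cache_size = 96 * 1024 * 1024;

-- Gauge effectiveness: compare cache hits against total SELECTs.
SHOW STATUS LIKE 'Qcache_hits';
SHOW STATUS LIKE 'Com_select';
```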
Performance tuning is a bit of an art. For my site, these changes had a significant impact, more than doubling throughput (from 28 to 68 requests per second) and placing safeguards around resource consumption. Be sure to benchmark the site before you start tuning so that you have a baseline value to compare against.