A few days ago I had an annoying out-of-disk-space issue with the blog server. This was, of course, of my own making. It’s hosted on Amazon’s EC2, but they only provide the bare metal for the server and disk; keeping it all healthy is up to me.
But, in my haste to diagnose and fix the problem I wound up screwing up my MySQL config. Naturally, I didn’t know this at first.
Normally, most page accesses are served from cache, so the average user wouldn’t see the problem unless they were the unlucky schlub who tripped over an uncached page. But that didn’t change the fact that something was amiss. A look at top confirmed it: mysqld had racked up almost half an hour of CPU time in half a day, compared to almost nothing before.
The first thing that came to mind was to fire up MySQL Workbench to see how things were doing. It looked pretty dire: 0% of queries were hitting the query cache. This is a bad thing.
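If you don’t have Workbench handy, the hit rate can also be worked out from MySQL’s status counters (`SHOW GLOBAL STATUS LIKE 'Qcache_hits'` and `LIKE 'Com_select'`). A common formula is hits divided by hits plus the SELECTs that actually ran. A minimal sketch of the arithmetic, with made-up counter values:

```python
# Query cache hit rate from MySQL status counters.
# The counter values below are hypothetical; in practice they come from
# SHOW GLOBAL STATUS LIKE 'Qcache_hits' and LIKE 'Com_select'.

def query_cache_hit_rate(qcache_hits, com_select):
    """Fraction of SELECTs answered from the query cache.

    Com_select counts the SELECTs that missed the cache and were
    actually executed, so hits / (hits + Com_select) is the hit rate.
    """
    total = qcache_hits + com_select
    if total == 0:
        return 0.0  # a fresh server has no traffic yet
    return qcache_hits / total

# Hypothetical numbers: 7000 cache hits vs 3000 executed SELECTs.
print(f"{query_cache_hit_rate(7000, 3000):.0%}")  # → 70%
```

A rate near 0% means nearly every SELECT is being parsed and executed from scratch, which is exactly the symptom I was seeing.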
Opening up /etc/my.cnf is a nightmare for someone not well versed in the art of MySQL configuration. Lots of things to tweak, and lots of tweaks that can have negative effects. A bit more googling led me to mysqltuner.pl. A few iterations of running that Perl script got me to a good place. Now my hit rate is almost 70% instead of 0, and things are a lot snappier.
This wasn’t really an in-depth troubleshooting session, but I figured I’d throw this out there in case it helps someone.
In the end, the config I wound up with was the my-medium.cnf file with a few tweaks. The top section now looks like this:
key_buffer_size = 16M
max_allowed_packet = 1M
table_open_cache = 64
sort_buffer_size = 512K
net_buffer_length = 8K
read_buffer_size = 256K
read_rnd_buffer_size = 512K
myisam_sort_buffer_size = 8M
query_cache_size = 16M
query_cache_limit = 32M
tmp_table_size = 128M
max_heap_table_size = 128M
thread_cache_size = 4
table_cache = 128
From what I can tell this seems to be working fine without overloading the machine’s memory. (The machine has a relatively limited 1.7GB.)
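The usual back-of-the-envelope check here is global buffers plus per-connection buffers times the connection limit. A rough sketch using the values above; note that max_connections = 100 is an assumption (it isn’t set in my config excerpt), and this ignores per-operation allocations like tmp_table_size and the MyISAM repair buffer:

```python
# Rough worst-case MySQL memory estimate from the config above:
# global buffers + per-connection buffers * max_connections.
# max_connections = 100 is an assumed value, not from my config.

MB, KB = 1024 * 1024, 1024

global_buffers = (
    16 * MB      # key_buffer_size
    + 16 * MB    # query_cache_size
)

per_connection = (
    512 * KB     # sort_buffer_size
    + 256 * KB   # read_buffer_size
    + 512 * KB   # read_rnd_buffer_size
    + 8 * KB     # net_buffer_length
)

max_connections = 100  # assumed; check SHOW VARIABLES LIKE 'max_connections'

worst_case = global_buffers + per_connection * max_connections
print(f"{worst_case / MB:.0f} MB worst case")  # → 158 MB worst case
```

Even in the worst case that leaves plenty of headroom on a 1.7GB machine, which matches what I’m seeing in practice.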