
Category: MySQL

Differences in Hardware/Software for a Database Server

I previously posted about the migration we performed for a customer for their email server.

This weekend we performed a cutover for another client from an older dedicated MySQL database server to a newer piece of hardware. First, the graph:

[Graph: load average before and after the cutover]

Where the old system sat under an almost continuous load average of 1.0, the new system seems almost bored.

Specs for old DB Server

  • Dell PowerEdge 2850
  • 2GB RAM
  • 2 x 3.6GHz Xeon with Hyperthreading enabled
  • 3 x 146GB 15K rpm SCSI – PERC RAID 5
  • Debian Linux 4 – 2.4.31-bf2.4 SMP – 32-bit kernel
  • MySQL 4.1.11

Specs for new DB Server

  • Dell PowerEdge 2950
  • 8GB RAM
  • 2 x dual-core 3.0GHz Xeon with Hyperthreading disabled
  • 7 x 146GB 10K rpm 2.5″ SAS drives – PERC RAID 5
  • Debian Linux 6 – 2.6.32-5-amd64 SMP – 64-bit kernel
  • MySQL 5.0.51a (soon to be 5.1)

The client is understandably very happy with the result.


Simple Checkout of phpMyAdmin with git

When setting up web servers for clients, I’ll usually configure and secure an installation of phpMyAdmin to give them easy access to the MySQL database server.

I also want to make sure it is easy to update that installation to the latest stable version. In the past this was easily done by initially checking out the STABLE tag from their Subversion repository with a command of this style:

svn co

and then when I wanted to update, it was a simple matter of running:

svn update

Well, the phpMyAdmin folks switched over to using git as their SCM system, and unfortunately they didn’t post any instructions on how a web server administrator who is never going to change or commit code back into the system would perform the same action using git. It took me about an hour of searching, reading, digesting how git works, and cursing, but I finally came up with the following set of git commands to check out the STABLE branch of phpMyAdmin:

git clone --depth=1 git://
cd phpmyadmin
git remote update
git fetch
git checkout --track -b PMASTABLE origin/STABLE

What that means is: clone the main repository from the specified path, drop into the newly created repository directory, refresh what git knows about the remote, and finally create a new branch in the local repository called PMASTABLE that tracks the remote branch “origin/STABLE”.

The “--depth=1” option tells git not to copy the entire history, only the most recent revision (a shallow clone).

So from here on, I should be able to merely run “git pull” in that repository and it should update to the latest STABLE.
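To sanity-check the recipe without touching the real phpMyAdmin repository, the same clone/fetch/checkout sequence can be exercised against a throwaway local repository. Everything below is a stand-in (names, paths, the “upstream” repo itself), not the actual phpMyAdmin remote:

```shell
#!/bin/sh
# Build a throwaway "upstream" repo with a STABLE branch, then repeat the
# post's clone/fetch/checkout sequence against it. (--depth=1 is skipped
# here because git ignores shallow depth for local-path clones.)
set -e
tmp=$(mktemp -d)
cd "$tmp"

git init -q upstream
cd upstream
git config user.email demo@example.com
git config user.name demo
echo "hello" > README
git add README
git commit -qm "initial"
git branch STABLE            # the branch we want to track
cd ..

git clone -q upstream phpmyadmin
cd phpmyadmin
git remote update >/dev/null
git fetch -q
git checkout -q --track -b PMASTABLE origin/STABLE
git rev-parse --abbrev-ref HEAD    # prints: PMASTABLE
```

From then on, a plain “git pull” in that directory fast-forwards PMASTABLE to whatever origin/STABLE has become.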

Hopefully, others will find this useful.

UPDATE – Mar 13, 2014
I updated the above commands to show the correct GitHub URL and two more commands to make sure the local repo sees the STABLE branch before trying to create a local branch.

The Great Leap Beyond One – Creating Scalable PHP Web Applications

I gave a presentation to the Dallas PHP user group on May 11, 2010 on Creating Scalable PHP Web Applications.

Download the presentation in PDF.

Here is a basic outline:

  • Introduction
    • Traditional single-server and dedicated-DB two-server data flows.
    • What does it mean to be Scalable, Available and Redundant?
  • Planning your Delivery Architecture.
    • Delivery Resource Types – html/image/pdf/email/rss
    • URL types and origins for main text/html, static images, user generated media
  • Delivery Architecture Components
    • Web Servers
    • Database Systems
    • Load Balancers
    • Caching systems
    • PHP Application Code
  • Web Server systems
    • Make fast and agile and identical
    • Key concept: Web systems must be thought of as being Disposable.
    • Storage of source and non-source delivery resources
    • Deployment of web servers – OS/PHP Load, Code Deployment/Updates
  • Database systems
    • Hardest to Scale, throw money at this problem
    • Replication and caching layers can extend life/performance of primary database.
    • Make a plan to deal with Primary Failure – what in site will/won’t work.
    • Make a plan to deal with Primary Recovery
    • Redundant MySQL Overview
    • Caching Layers Overview
  • Load Balancers
    • Hardware/Software List
    • Primary Features
    • Secondary Features
    • Example Service/Content rule Pseudo-Config
  • PHP Component Code changes
    • Sessions
      • Custom Session Data Handler
      • Basics and Gotchas
      • Example Session SQL Table
    • Non-Source File & Data Storage
      • Uploaded images/documents (avatars/photos)
      • System generated files (chart images for emails)
      • System Generated Data (calculated results data)
      • Data pulled from external system (RSS feed cache)
      • Store into shared system accessible by all front-ends
      • Admin system for forced pushes/cleanouts. Monitoring.
    • Resource Delivery
      • Simple and complex examples.
      • Code for abstracting URL generation – App::URL('logo.jpg', 'branded')
      • Example of complex URL structures.
      • Delivery issues with CSS and JavaScript
      • Serving SSL protected content with references to external static media; needs SSL too!
      • Using ErrorDocument to create a Just-In-Time delivery system to media servers.
    • Periodic Process Execution
      • Using Queues and Semaphores to control execution.

Solving the MySQL on Windows Open File limit – VMWare Linux

This is a continuation of the saga of helping a customer of ours with their MySQL on Windows issues.

The basic premise is that MySQL 5 running under Windows has problems with large numbers of connections/open files.

We initially presented our client with 2 choices for solving their problem:

  1. Setup MySQL on a different server running Linux
  2. Move their database backend from MySQL to Microsoft SQL Server

Option 1 would require a non-trivial capital outlay, plus time to set up the server and migrate the data over.

Option 2 held more interest for them as a longer term solution as they were interested in some of the powerful reporting and analysis tools. However, the server that all of this was running on turned out to present a problem in the following way:

The system is a Dell PowerEdge 2950 with dual dual-core Xeons (with HyperThreading enabled, resulting in 8 logical CPUs), 8GB RAM and running Windows 2003 Server x64. For a general ballpark estimate of pricing for MS SQL Server, I went to the Dell website, configured a similar machine, and went to select a SQL Server as an add-on. Due to the way that Microsoft prices SQL Server for a web environment, you must license it for each processor installed in your server. A dual-processor licensed SQL Server 2005 running under Windows x64 turned out to be in excess of $14,000 on that page.

Needless to say, the customer had a bit of sticker shock. Not only would they have to plop down FOURTEEN grand, they would still need man-hours to work out kinks in transitioning the backend of the app from MySQL. Getting a fully loaded separate server for half that cost running Linux/MySQL was looking more attractive.

I was chatting with a friend about the customer’s dilemma and he had a brilliant suggestion: run a Linux instance under VMWare on the existing hardware.

I broached the idea with the client and they were game to give it a shot. The server was at this point extremely underutilized and was only limited by the specific software implementations. They were already running MySQL here anyway, so putting a different wrapper around it in order to escape those limits was worth investigating.

The server needed some more disk space so they ordered 6 more of the 146GB 2.5″ SAS drive modules like the 2 that were already installed in a mirrored pair. These are great little devices for getting a decent number of spindles into a database server.

While they did that, I installed the latest Release Candidate of VMWare Server 2.0 and set up a Debian-based Linux install with the latest MySQL 5.0 binary available. During testing we hit some interesting processor-affinity issues in the VM environment; after a very good experience and excellent responses in the VMWare community forums, I tweaked the config to pin that virtual machine to processors 5-8, which correspond to all of the logical CPUs on the second socket. This left the first socket’s worth of CPUs for OS and IIS/ASP usage, and helps avoid the cache issues that occur with multi-CPU systems and multi-threaded processes.

I ended up configuring the VM with 3.5GB RAM and 2 processors, and tweaking the MySQL config to make good use of those resources for their InnoDB-based tables.
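For reference, a my.cnf sketch in that spirit; the values here are illustrative round numbers for a 3.5GB/2-CPU VM, not the client’s actual settings:

```ini
[mysqld]
# Give InnoDB the bulk of the VM's 3.5GB of RAM
innodb_buffer_pool_size = 2G
innodb_log_file_size = 256M
# Durable flushing on every commit; relax to 2 only if losing
# up to a second of commits on a crash is acceptable
innodb_flush_log_at_trx_commit = 1
max_connections = 500
```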

The system has been running very reliably for over 2 months and is making very good use of the resources on that box. Load average in the VM stays well under 1.0, and the CPUs in the Windows system are easily handling all the different loads.

Overall I spent probably 8-10 hours installing, tweaking the config, and helping them migrate their data. This was much less expensive than the other options, and they got to make use of their existing hardware and software configurations.

A very positive experience and outcome.

MySQL 5 limits on Windows OS – 2048 max open files

A customer of ours recently asked us to help them troubleshoot some performance problems they have been having with their ASP/MySQL based solution. They are running the latest version of MySQL 5 under Windows 2003 Server 64-bit edition. Their IIS/ASP based application is using an ODBC connection to connect to MySQL.

They have recently added a partner that is sending them sales leads through an API call. There are occasional peaks in this traffic where the incoming connections into IIS far exceed what they have been able to handle in concurrent open connections into MySQL. Tracking this down through the application stack, we looked at several low-level file/network system items:

  1. Ephemeral port exhaustion
  2. Stale socket disposal
  3. Max Open Files

#1 and #2 have caused issues for another customer in the past, but this time it turned out to be #3. Even though they had set an appropriate value for max-connections in their my.ini file:

max-connections = 4000

MySQL was artificially reducing these settings at runtime to much lower than was needed to support this flash traffic. Upon startup, MySQL would emit this into the Application Event Log:

Changed limits: max_open_files: 2048 max_connections: 1910 table_cache: 64

According to this bug report on the MySQL site, approximately this formula is used to determine the maximum open files for MySQL:

max_open_files >= 10 + max_connections + table_cache * 2

Plugging in the Windows cap of 2048 and the reduced table_cache of 64 gives 2048 - 10 - 2 * 64 = 1910, exactly the max_connections reported above.
This condition has been reported as far back as 2006. A fix that switched MySQL from using the C Runtime Library in the Windows binary for opening files to using the native Win32 API calls was completed in mid June 2008. However, this will only be rolled into the MySQL 6 line of development.

We looked at increasing this limit to something higher than 2048, but according to this site, that value is hardcoded. This particular customer has at least one thing going for them: their exclusive use of the InnoDB engine. This engine uses native calls, so the table cache value does not need to be very high and they can maximize the number of open descriptors available for incoming connections.
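Given that formula, a quick shell sketch shows how the hard 2048-descriptor cap produces the numbers in the event-log line; the only assumption here is the arithmetic itself, no MySQL API is involved:

```shell
#!/bin/sh
# Back out the effective max_connections from the descriptor cap, per
# max_open_files >= 10 + max_connections + 2 * table_cache
effective_max_connections ()
{
    cap=$1
    table_cache=$2
    echo $(( cap - 10 - 2 * table_cache ))
}

effective_max_connections 2048 64    # prints: 1910
```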

Realistic Options:

  • Artificially limit the maximum number of incoming connections to IIS so as to limit the pass-through to MySQL. This leaves serious amounts of this particular server's processing power untapped.
  • Move to MySQL running on a non-Windows OS
  • Move their backend to MS SQL Server (see the documented differences).

MySQL Replication Slave Control shell script

I set up a replication slave at our office to a MySQL server running at our colo, and the master server is pretty busy. So busy that, even with the compressed-protocol option turned on, the stream was taking a good 60-70 kbps out of the available bandwidth of our T1. Since it isn’t critical that this data be slaved in real time, I made a small shell script that takes parameters for starting and stopping either the sql or io threads:

#!/bin/sh

# MySQL credentials for the slave (placeholder values; set for your environment)
USER="dbuser"
PASSWD="secret"

SCRIPT="/usr/local/mysql/bin/mysql -u$USER -p$PASSWD -e"

StartService ()
{
    case $1 in
      sql ) $SCRIPT "START SLAVE SQL_THREAD" ;;
      io  ) $SCRIPT "START SLAVE IO_THREAD"  ;;
      *   ) echo "$0: unknown start argument: $1" ;;
    esac
}

StopService ()
{
    case $1 in
      sql ) $SCRIPT "STOP SLAVE SQL_THREAD" ;;
      io  ) $SCRIPT "STOP SLAVE IO_THREAD"  ;;
      *   ) echo "$0: unknown stop argument: $1" ;;
    esac
}

CheckCommand ()
{
    case $1 in
      start ) StartService "$2" ;;
      stop  ) StopService "$2"  ;;
      *     ) echo "$0: unknown argument: $1" ;;
    esac
}

CheckCommand "$1" "$2"

I can call this with argument pairs like “start io” or “stop sql”.

I set up a couple of crontab entries: one to start the io thread at 8 PM and one to stop it at 6 AM.

The sql thread will always be running.
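The crontab entries amount to something like this (the script name and path here are placeholders):

```
# m  h  dom mon dow  command
0   20  *   *   *    /usr/local/bin/slave-ctl start io   # resume the io thread at 8 PM
0   6   *   *   *    /usr/local/bin/slave-ctl stop io    # pause it at 6 AM
```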

Convert from US Short Date to SQL date using MySQL parsing

I had a customer submit a batch file to be processed in one of our systems, and they handed me an Excel file with short dates (mm/dd/yy). Sheesh, this is 2007; can we not get full 4-digit years, please!

So, I wanted to work out a method for quickly converting those date strings within MySQL.

Assume a table of:

  • BirthDate char(16)
  • BDMonth tinyint
  • BDDay tinyint
  • BDYear smallint

UPDATE MyDatesTable
SET BDMonth = SUBSTRING_INDEX(BirthDate, '/', 1),
    BDDay   = SUBSTRING_INDEX(SUBSTRING_INDEX(BirthDate, '/', 2), '/', -1),
    BDYear  = SUBSTRING_INDEX(BirthDate, '/', -1) + 1900;

UPDATE MyDatesTable
SET BirthDate = CONCAT(BDYear, '-', BDMonth, '-', BDDay);

ALTER TABLE MyDatesTable CHANGE BirthDate BirthDate datetime;

(Note that the +1900 assumes every two-digit year is 19xx.)
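The same split can be mimicked outside MySQL; this small shell sketch (the function name is mine) mirrors the SUBSTRING_INDEX logic:

```shell
#!/bin/sh
# Split a US short date (mm/dd/yy) on '/' and rebuild yyyy-mm-dd,
# assuming, like the SQL above, that every two-digit year is 19xx.
us_short_to_iso ()
{
    month=$(echo "$1" | cut -d/ -f1)
    day=$(echo "$1" | cut -d/ -f2)
    yy=$(echo "$1" | cut -d/ -f3)
    year=$(expr "$yy" + 1900)    # expr keeps a leading zero from reading as octal
    echo "${year}-${month}-${day}"
}

us_short_to_iso "07/24/85"    # prints: 1985-07-24
```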

2 digits years are SOOO last century… 🙂