May 17th, 2010

The Great Leap Beyond One – Creating Scalable PHP Web Applications

On May 11, 2010, I gave a presentation to the Dallas PHP user group on Creating Scalable PHP Web Applications.

Download the presentation in PDF.

Here is a basic outline:

Posted by Brian Blood as Content Networking, Database, Hardware, MySQL, php, Servers, Web App Development at 11:47 PM UTC

No Comments »

April 30th, 2007

Cisco CSS Content Smart Switch – Refurbishment

(or, the device formerly known as the ArrowPoint Content Smart™ Web switch)

Back in the heady days of the dot-com boom, you needed to be able to assure that you could handle a large amount of traffic from all those visitors you just knew were coming to your web property. To do that, your web application needed to scale, which meant load balancing across any number of servers. ArrowPoint was the darling of this market, founded in 1997 and scooped up by the Cisco mother ship in May 2000 for a whopping $6.1 billion in stock. (Yes, that’s billion; you know, real money.) That was 491.94 times their revenue at the time. Obviously, Cisco wanted into this market in a big way.

So, ArrowPoint had a nice lineup of Layer 7 switches, both for the big guys (the CS-800) and the not-so-big (the CS-100 line). (Slashdot used a CS-100 at one point and then upgraded to a CS-800 after a DDoS killed it.)

[Photos: the CS-800 and CS-100]

These devices’ primary purpose was to distribute IP packets amongst a server farm based on whatever criteria you could think of. You could make your content rules as simple (basic IP address, any port) or as complex (testing for the existence of a specific cookie, or a portion of a URL on a specific domain) as you wanted.
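To give a flavor of what those rules looked like, here is a minimal sketch of a CSS/WebNS-style configuration, written from memory; the owner, service names, and addresses are all made up, and the exact command syntax varies by WebNS version:

```
! Hypothetical example -- names and addresses are made up.
! Two back-end web servers, health-checked over HTTP:
service web1
  ip address 10.1.1.11
  keepalive type http
  active

service web2
  ip address 10.1.1.12
  keepalive type http
  active

! A content rule matching any URL on the VIP, balanced round-robin:
owner example
  content webfarm
    vip address 192.168.10.5
    protocol tcp
    port 80
    url "/*"
    add service web1
    add service web2
    balance roundrobin
    active
```

More elaborate rules keyed on cookies or URL substrings follow the same shape: match criteria on the content rule, then a list of services to send matching flows to.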

After Cisco bought them, the primary change they made was to paint the case their standard blue. They also added a few models with GBICs and more buffer RAM on each port (CSS11155). They even added a model that used a PCMCIA Flash Disk (e.g. CSS11154-FD-AC) instead of the 2 or 4GB IDE drive that holds the OS (known as the NS), the configuration, and the logs.

We had been using one of those first Cisco-based versions (locked flash version 4.0, operational flash 4.0) for the sell.com server farm. We were about to deploy a new load-balanced app for another client, so we decided to do some refurbishing of our supply of CSSen. I picked up a couple off eBay, one of the Cisco versions and one of the older ArrowPoint versions, for under $500 and started tearing them apart. When new, these devices went for upwards of $20,000!

Some interesting tidbits

RAM:

There are two slots on the CSS motherboard for installed RAM. The slots are underneath the daughtercard (if yours has one) that provides the additional ports (2 GBIC, 4 x 100FX, 4 x 100BT), so in order to upgrade the RAM, you will need to pull this card out temporarily. It’s held in by about 3 screws and comes out without too much fuss.

The chip to use is a Micron 128MB 100MHz SDRAM module (MT8LSDT1664HG-10EB1), so with 2 of these the CSS will have 256MB. I picked up a couple for under $15 each.

Disk system:

The IDE drive in the non-FD (Flash Disk) models is usually a Fujitsu 2, 4, or 6GB drive. The NS and logs take up a very small part of the space on these disks, so we decided to replace the only non-solid-state part of the CSS (not counting the fans) with some newer, more reliable technology. I found a CompactFlash-to-IDE adapter for under $20 and a 2GB CompactFlash card for about $60. I did some research into the long-term reliability and durability of CompactFlash. There are industrial-strength CF cards, but they are about 5-10 times as expensive. The major technological consideration with CF cards is the use of single-level cell (SLC) vs. multi-level cell (MLC) memory. For long-term reliability, you want SLC: the electronics on the card will actually monitor the health of the cells and relocate data as problems are found, and SLC CF is also rated for a higher number of writes and has a higher MTBF. Good explanation here: DailyTech – Solid-state Drives Ready for Prime Time

So, with a 2GB Kingston Elite Pro “disk” installed, we merely use the Offline Diagnostic Menu (accessible from the console port) to format the new disk, then use the boot-from-FTP function to pull down an updated NS (an ADI, or ArrowPoint Distribution Image) onto the disk, and it’s ready to start configuring.

The FD model of CSS comes with a PCMCIA to IDE sled in the place of the hard drive. Inserted into that slot is a 350MB SanDisk PCMCIA flash card. We’ve purchased the 1.2GB version of these cards and done the same process as above. Flash goodness all around.

One interesting note: I expected to see a decent amount of savings in amps when replacing an actual hard disk drive with a flash drive, but curiously, I didn’t. The device pulled about 0.92 amps (110V) with the hard drive and only went down to 0.85A with the flash drive. It’s interesting that a device of this type pulls so much current in the first place; most of the switches we utilize typically draw in the 0.3A range or less. I guess that could be related to why we see a higher failure rate with the power supplies.

Summary

In the end, we had some new/spare load balancers that have been cleaned up, upgraded, and made more reliable. Not bad for a couple hundred dollars spent.

Posted by Brian Blood as Content Networking, Hardware, Routers and Firewalls at 5:56 PM UTC

No Comments »

April 4th, 2007

Intel MacMini Blade Server Followup

A followup to my article, which generated a lot of traffic and comments.

Point of clarification: the Minis in this system are only doing one thing: executing PHP.

They don’t serve up image files or store anything persistent on disk. They all communicate to a central database system that has proper redundancy built in.

Since the Minis in this application don’t use any local disk resources other than to store PHP code, if a Mini were to fail or fall out of the set for some reason, the system as a whole still runs. The redundancy you would normally have built into a bigger, beefier server (and pay extra for) is handled by just having more servers. Google does the same thing: when a server in their system dies, they really don’t care too much, because they built their redundancy in through sheer numbers. For the price of 2 well-outfitted Intel Xserves, I can purchase 12 MacMinis, get 2-3 times the processing power, use less power (~4.2A), and have a LOT more redundancy.
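To make “nothing persistent on local disk” concrete, here is a minimal sketch (not the client’s actual code) of the usual PHP trick: push session state into the central database so any Mini can serve any request. The table layout and connection details are hypothetical:

```php
<?php
// Hypothetical: keep PHP sessions in the central MySQL database so the
// front-end Minis stay stateless and interchangeable.
// Assumed table: CREATE TABLE sessions
//   (id VARCHAR(64) PRIMARY KEY, data TEXT, updated TIMESTAMP);

$db = new mysqli('db.example.internal', 'webapp', 'secret', 'appdb');

session_set_save_handler(
    function () { return true; },              // open: nothing to do
    function () { return true; },              // close: nothing to do
    function ($id) use ($db) {                 // read a session by id
        $stmt = $db->prepare('SELECT data FROM sessions WHERE id = ?');
        $stmt->bind_param('s', $id);
        $stmt->execute();
        $stmt->bind_result($data);
        return $stmt->fetch() ? $data : '';
    },
    function ($id, $data) use ($db) {          // write/overwrite a session
        $stmt = $db->prepare(
            'REPLACE INTO sessions (id, data, updated) VALUES (?, ?, NOW())');
        $stmt->bind_param('ss', $id, $data);
        return $stmt->execute();
    },
    function ($id) use ($db) {                 // destroy on logout
        $stmt = $db->prepare('DELETE FROM sessions WHERE id = ?');
        $stmt->bind_param('s', $id);
        return $stmt->execute();
    },
    function ($maxlifetime) use ($db) {        // garbage-collect stale rows
        return (bool) $db->query(
            'DELETE FROM sessions WHERE updated < NOW() - INTERVAL '
            . (int) $maxlifetime . ' SECOND');
    }
);

session_start();
```

With state centralized like this (the same goes for uploads, which belong on shared storage), a failed Mini simply drops out of the rotation and nothing is lost.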

Based on the wording of articles linking to my post (and perhaps my own wild performance claims), there seemed to be an inordinate amount of chatter focusing on how the MacMini was faster than the Xserve G5 in CPU performance, when that wasn’t the primary concept I wanted to convey about why the Minis are a good fit for a front-end web farm. (Perhaps better writing skills would help.)

My main thrust was that the MacMini’s balance between price, performance, power consumption, heat generation, and reliability (Apple has, on average, produced better and longer-lasting hardware) was hard to match.

When compared against traditional blade systems like those offered by IBM, Dell, HP, etc., it’s obvious one loses certain features, primarily centralized management tools and a unified chassis for power and network cabling, amongst others. However, those hardware systems cost more overall.

In deploying networked systems, really in life, one must balance certain choices.

For this application for this client, the MacMini has the right balance in all those critical areas.


Some example hardware configs I threw together for comparison of performance, cost, power:

- Dell PowerEdge 1950 config
- Dual Quad-Core CPU blade
- Intel Xserve, dual DualCore 3.0GHz
- MacMini 1.66GHz Core Duo, 2GB RAM, bigger hard drive

For comparison’s sake, I’m going to use the Mini’s 1.66GHz Core Duo CPU as a base unit. One Mini has 2 of these “core-units”. Also, I tried to maintain 1GB of RAM per core.

I’m going to make up another value: Power Ratio. This is amps per core-unit. For example, a MacMini has a Power Ratio of 0.15.
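As arithmetic, the metric is just amps divided by core-units; a quick sketch using the Mini’s numbers and the blade figure from the analysis below:

```php
<?php
// Power Ratio = amps drawn per core-unit
// (one core-unit = one core of a 1.66GHz Core Duo).
function power_ratio(float $amps, int $coreUnits): float
{
    return $amps / $coreUnits;
}

// One Mini: roughly 0.3A for 2 core-units.
printf("MacMini: %.3f A/core-unit\n", power_ratio(0.3, 2));    // 0.150

// The blade chassis: roughly 12 core-units per 1A.
printf("Blade:   %.3f A/core-unit\n", power_ratio(1.0, 12));   // 0.083
```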

So, assuming a minimum of two systems for redundancy:

Some interesting choices here. Not exactly a scientific analysis, but the Mini definitely holds its own against the single-unit servers. Against the blade systems, it’s a closer call, especially with the ability to choose 2 x Quad-Core CPUs for the Dell blades. The best part of the blade system is that its power usage grows much more slowly per unit of performance than any other system’s. 12 core-units per 1A of power is an excellent value: a Power Ratio of 0.083.


Our client has expressed interest in adding 2-3 more Minis to the set, and I think we will give Debian/Ubuntu a shot this time for comparison.

One person in the Ars Technica comment section did make an interesting point as to why you would want to run an OS X machine for a web front end: WebObjects.

So, thanks for reading everyone, it’s always interesting to throw something out there and see what has to be said about it.

Brian

Posted by Brian Blood as Content Networking, Servers at 6:09 PM UTC

3 Comments »

March 29th, 2007

Intel MacMinis – The OS X Blade Server

A really good client of ours has been colocating with us since late 2003. They’ve grown their web application from an Xserve 1.0GHz DP G4 to an Xserve 2.0GHz DP G5, then moved their database off to a separate big hardware-RAIDed Dell server.

They came to us about a year ago (May 2006) and said they were getting a big new client who wanted to run their entire site on their system, and they were going to need a load-balanced system with plenty of power and scalability. Earlier in the year (Feb 2006), Apple had introduced their second-generation MacMini, which now sported the new Intel Core Duo chips along with Gigabit Ethernet. At that time, we were also concerned about increased power usage in our cage, so we picked up an Intel MacMini 1.66GHz Core Duo, had it upgraded to 2GB of RAM (the G4s could only handle 1GB), and started to really put it through its paces.

It turned out to be a real winner.

The final configuration we ended up with was:

MacMini Intel 1.66GHz Core Duo, 2GB RAM, with the stock Seagate 80GB 5400rpm SATA 2.5″ notebook drive replaced with a Hitachi E7K100 60GB 7200rpm SATA drive. These are the drives that IBM puts in its blade servers, as they are rated for 24/7 usage.

Total cost: ~$1,000 each. (We bought the Apple RAM.)

We worked with the client to help refactor their web application so that it could be properly load balanced. Changes were necessary in the following areas:

After all the hardware and software was ready, we set up the content rules for the load balancer and turned it on. It was very gratifying to see the Minis perform very well even under adverse load conditions. (The big client sends out large email newsletter runs that bring flash crowds to the site.)

One of the more interesting experiences we’ve had with this system was when we migrated their Xserve G5 in as web server #4 in the load-balanced group. Mind you, this is not a puny box; we even upgraded its drive system to a hardware RAID 5 set. Since that time, we have had to periodically adjust the weighting rules on the load balancer to give more and more priority to sending hits to the Intel Minis instead of the Xserve G5. We are now at a 3:3:3:1 ratio, and the Xserve is finally at a lower overall load average than the Minis.
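For reference, weighted balancing on a CSS-style box looks roughly like the sketch below. This is a hypothetical illustration from memory (made-up names and addresses, and the exact weight syntax varies by WebNS version), not our actual config:

```
! Hypothetical weighted round-robin rule -- names/addresses are made up.
owner client
  content webfarm
    vip address 192.168.20.10
    protocol tcp
    port 80
    balance weightedrr
    add service mini1 weight 3
    add service mini2 weight 3
    add service mini3 weight 3
    add service xserve-g5 weight 1
    active
```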

Yes, you read that right: the Intel MacMini is somewhere between 2 and 3 times faster than an Xserve G5 in raw CPU performance.

And it uses one-fifth the power. With a nice sliding rack tray, you can easily get 6 of them into a 2U space. (Excuse the cabling mess.)

[Photo: MacMinis 1-3]


We’ve also considered a different configuration whereby we set the Minis on their sides and “stack” them horizontally. The Mini is right at 2 inches tall and 6.5 inches square, so accounting for some space for airflow and cabling, you could get 7, maybe 8 of them in a row, resulting in a 4U-tall set. With the right mounting, you could easily get 2, maybe 3 of these rows on a sliding tray. You do have to account for the external power supply, but that separation actually works out as a major benefit, as the cables could be run so that you have a single U of dedicated space for the power supplies, with some directional cooling airflow over them.

Result: With only 2 rows of Minis, you get:


The Intel dual-Xeon Xserve looks promising as well in terms of raw CPU performance, but I’ve seen reports that it suffers from the same high power consumption as any other dual-Xeon system: ~3A. That is TEN TIMES the power consumption of a MacMini, so for a high-density web farm it is not a better solution. Better to utilize that Intel Xserve as a database server to take advantage of its greater RAM capacity (32GB), increased threading (dual dual-core), and 64-bit capability.

Related CPU performance anecdote: we have another client with a Compressor video-encoding grid made up of Intel MacMinis, and he found out (Apple confirmed it as well) that he needed to remove the PowerMac G5 DP from that grid, as it was the weakest link!

In summary, a MacMini-based farm is a powerful solution for almost any web application. You get low cost, low power, and the ease of use and security of an OS X-based system. A very compelling formula. Our client is very happy and is looking to add 2 more MacMinis in the near future. We’ve built this type of system for another client, and they are extremely happy with the performance of their web application, too. If your organization is interested in a MacMini web farm, please contact us for a quote.

Some additional links regarding MacMinis

I’ve posted a followup to this article.

We did add 3 more to this setup a couple months later. Here is the gaggle:
[Photo: the six Minis]

Posted by Brian Blood as Content Networking, Servers, Web App Development at 9:54 AM UTC

15 Comments »