April 30th, 2007

Cisco CSS Content Smart Switch – Refurbishment

(or, the device formerly known as the ArrowPoint Content Smart™ Web switch)

Back in the heady days of the dot-com boom, you needed to be able to assure everyone that you could handle the flood of traffic from all those visitors you just knew were coming to your web property. That meant your web application needed to scale, which meant load balancing across any number of servers. ArrowPoint was the darling of this market: founded in 1997 and scooped up by the Cisco mothership in May 2000 for a whopping $6.1 billion in stock. (Yes, that’s billion; you know, real money.) That was 491.94 times their then-current revenue. Obviously, Cisco wanted into this market in a big way.

So, ArrowPoint had a nice lineup of Layer 7 switches, both for the big guys (the CS-800) and the not-so-big (the CS-100 line). (Slashdot used a CS-100 at one point and then upgraded to a CS-800 after a DDoS killed it.)

[Images: the CS-800 and CS-100]

These devices’ primary purpose was to distribute IP packets among a server farm based on whatever criteria you could think of. You could make your content rules as simple (basic IP address, any port) or as complex (testing for the existence of a specific cookie, or a portion of a URL on a specific domain) as you wanted.
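For flavor, here is a sketch of what a simple content rule looked like in the CSS CLI. The names and addresses are made up, and the syntax is from memory of the WebNS command line, so treat it as illustrative rather than copy-paste ready:

```
service web1
  ip address 192.168.10.11
  active

service web2
  ip address 192.168.10.12
  active

owner acme
  content http-rule
    vip address 192.168.10.100
    protocol tcp
    port 80
    url "/*"
    add service web1
    add service web2
    active
```

A fancier rule might match only a cookie or a URL substring instead of the catch-all "/*".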

After Cisco bought them, the primary change they made was to turn the case their standard blue. They also added a few models with GBICs and more buffer RAM on each port (CSS11155), and even a model that used a PCMCIA Flash Disk (e.g. the CSS11154-FD-AC) instead of the 2 or 4GB IDE drive that is used to hold the OS (known as the NS), the configuration, and the logs.

We had been using one of those first Cisco-branded versions (locked flash ver. 4.0, operational flash 4.0) for the sell.com server farm. We were about to deploy a new load-balanced app for another client, so we decided to do some refurbishing of our supply of CSSen. I picked up a couple off eBay, one of the Cisco versions and one of the older ArrowPoint versions, for under $500 and started tearing them apart. When these devices were new, they went for upwards of $20,000!

Some interesting tidbits

RAM:

There are two slots on the CSS motherboard for RAM. The slots are underneath the daughtercard (if yours has one) that provides the additional ports (2 GBIC, 4 x 100FX, or 4 x 100BT), so in order to upgrade the RAM, you will need to pull this card out temporarily. It’s held in with about 3 screws and comes out without too much fuss.

The chip to use is a Micron 128MB 100MHz module, part MT8LSDT1664HG-10EB1, so with 2 of these the CSS will have 256MB. I picked up a couple for under $15 each.

Disk system:

The IDE drive in the non-FD (Flash Disk) models is usually a Fujitsu 2, 4 or 6GB drive. The NS and logs take up a very small part of the space on these disks, so we decided to replace the only non-solid-state part of the CSS (not counting the fans) with some newer, more reliable technology. I found a CompactFlash-to-IDE adapter for under $20 and a 2GB CompactFlash card for about $60. I did some research into the long-term reliability and durability of CompactFlash. There are industrial-strength CF cards, but they are about 5-10 times as expensive. The major technological consideration with CF cards is the use of single-level cell (SLC) vs. multi-level cell (MLC) memory. For long-term reliability, you want SLC: the electronics on the card actually monitor the health of the cells and relocate data as problems are found, and SLC flash is also rated for a higher number of writes and has a higher MTBF. Good explanation here: DailyTech – Solid-state Drives Ready for Prime Time

So, with a 2GB Kingston Elite Pro “disk” installed, we merely use the Offline Diagnostic Menu, accessible from the console port, to format the new disk, then use the boot-from-FTP function to pull down an updated NS (an ADI, or ArrowPoint Distribution Image) onto the disk, and it’s ready to start configuring.

The FD model of the CSS comes with a PCMCIA-to-IDE sled in place of the hard drive. Inserted into that slot is a 350MB SanDisk PCMCIA flash card. We’ve purchased the 1.2GB version of these cards and done the same process as above. Flash goodness all around.

One interesting note: I expected to see a decent amount of savings in amps when replacing an actual hard disk drive with a flash drive, but curiously, I didn’t. The device pulled about 0.92 amps (at 110V) with the hard drive and only went down to 0.85A with the flash drive. It’s interesting that a device of this type pulls so much current in the first place; most of the switches we use typically draw 0.3A or less. I suspect that may be related to why we see a higher failure rate in the power supplies.

Summary

In the end, we ended up with some new/spare load balancers that have been cleaned up, upgraded and made more reliable. Not bad for a couple hundred dollars spent.

Posted by Brian Blood as Content Networking, Hardware, Routers and Firewalls at 5:56 PM UTC


April 28th, 2007

FileMaker Scripting – PrivilegeSetName

Quick FileMaker Pro tip:

If you are trying to implement some privilege-set-specific behavior and you are depending on a call to Get(PrivilegeSetName), be aware that if you set the “Run Script with Full Privileges” option on that script, the function does not return the privilege set name of the current user; instead, it returns [Full Access].
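A sketch of the gotcha and one workaround, written as FileMaker script steps (the script and privilege set names here are made up for illustration):

```
# Script “Route By Privilege” has “Run Script with Full Privileges” enabled.
# Inside it, Get ( PrivilegeSetName ) always returns "[Full Access]",
# so capture the real name in the caller and pass it in as a parameter.

# Caller script (runs as the actual user):
Perform Script [ "Route By Privilege" ; Parameter: Get ( PrivilegeSetName ) ]

# “Route By Privilege”:
If [ Get ( ScriptParameter ) = "DataEntry" ]
    # privilege-set-specific behavior here
End If
```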

Took me about 10 minutes to figure out why my script was not executing. This actually turned out to be a good thing since it forced me to factor my script a bit more and thereby create a more flexible function.

One other thing that is a bit frustrating: there is no way (other than setting it at startup and referring to a global) to abstract or define a constant for a privilege set name. If you need to make heavy use of privilege-set-specific branching, picking good privilege set names at the beginning is crucial.

Posted by Brian Blood as Database, FileMaker at 1:08 PM UTC


April 27th, 2007

Tiger FTP Server problems – Bad Security Update

A poster to the Apple Mac OS X Server mailing list confirmed the problem with the FTP server in Tiger Server after 10.4.9.

This is a big screwup by Apple.

— BEGIN POST —

I’ve been facing the very same issue at a customer’s site. The FTP service was set to “FTP Root and Share Points” and was working fine until I applied the most recent security update. Now, when connecting to this FTP box, I’m sent to the file system root (/). Of course, I can connect, but permissions don’t let me copy anything there. I had to twist this setup in a big way for it to (kind of) work. More investigation to come.

Well, after I tested this deeper this morning, I can tell you what happened.

The 2007-004 Security Update replaced the ftp.plist in /System/Library/LaunchDaemons on Mac OS X Server with the version from Mac OS X *Client*. The installer does not check whether it is installing on Client or Server, and it is the same update for both.

But, of course, the FTP services on Client and Server are *very* different. With the client ftp.plist on the server, it is ftpd that gets launched, not xftpd.

The solution is to replace the ftp.plist with a previous version from Mac OS X Server. If you don’t have one, here is its content:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
        <key>Label</key>
        <string>com.apple.xftpd</string>
        <key>Program</key>
        <string>/usr/libexec/xftpd</string>
        <key>ProgramArguments</key>
        <array>
                <string>xftpd</string>
                <string>-a</string>
        </array>
        <key>Sockets</key>
        <dict>
                <key>Listeners</key>
                <dict>
                        <key>SockPassive</key>
                        <true/>
                        <key>SockServiceName</key>
                        <string>ftp</string>
                        <key>SockType</key>
                        <string>SOCK_STREAM</string>
                </dict>
        </dict>
        <key>inetdCompatibility</key>
        <dict>
                <key>Wait</key>
                <false/>
        </dict>
</dict>
</plist>

Restart the server (relaunching the FTP service is not enough), and you should be up and running.
— END POST —
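The difference between the client and server plists boils down to the Program key. As a quick sanity check, here is a minimal sketch using Python’s standard plistlib to confirm which daemon a given ftp.plist will launch (the plist content is abbreviated from the server version quoted above):

```python
import plistlib

# Abbreviated launchd job definition, matching the server ftp.plist above
plist_xml = b"""<?xml version="1.0" encoding="UTF-8"?>
<plist version="1.0">
<dict>
    <key>Label</key><string>com.apple.xftpd</string>
    <key>Program</key><string>/usr/libexec/xftpd</string>
</dict>
</plist>"""

job = plistlib.loads(plist_xml)

# The server plist must launch xftpd; the bad update swapped in the client's ftpd
daemon = job["Program"].rsplit("/", 1)[-1]
print(job["Label"], "launches", daemon)
assert daemon == "xftpd"
```

Pointing the same check at a client plist would report ftpd, which is exactly the symptom described in the post.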

Thanks to Guillaume Gete

related url:

Apple Discussion Board thread

UPDATE: Looks like Apple has posted a Security Update to address this.

Posted by Brian Blood as OS X Server at 11:44 AM UTC


April 23rd, 2007

Panther Server to Tiger Server – Wrong FTP daemon

We upgraded our main shared hosting Xserve this weekend from Panther Server to Tiger Server.

The next day we started to get some complaints about weird directory changes. Since we didn’t move anything around, we were sure these customers were crazy or something. ;-)  We also had some issues with permissions not being set correctly on newly uploaded items: Apache couldn’t see them. I had fought with the OS X Server FTP umask settings in the past and ended up having to lock the ftpaccess file, because even opening the FTP screen in Server Admin would blow away my defumask setting. (I wanted a umask of 0002.)

In the course of investigating these issues, we found that the Tiger upgrader had kept the old FTP server launch info around and was using ftpd instead of xftpd. I figured this out by comparing /System/Library/LaunchDaemons/ftp.plist against the one on a system that had had a fresh 10.4 Server install. Once I copied over the correct directives and did a launchctl unload and load, FTP service behavior returned to “normal”.
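For reference, reloading the job after fixing the plist looked roughly like this (paths as above; a sketch from memory, not a transcript):

```
sudo launchctl unload /System/Library/LaunchDaemons/ftp.plist
sudo launchctl load /System/Library/LaunchDaemons/ftp.plist
```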

Ugh. Thanks, Apple.

Now, I have to go back to all the systems we’ve upgraded from 10.3 to 10.4 and make sure this nonsense hasn’t happened anywhere else.

Posted by Brian Blood as OS X Server at 3:49 PM UTC


April 4th, 2007

Intel MacMini Blade Server Followup

A follow-up to my article, which generated a lot of traffic and comment.

Point of clarification: the Minis in this system are only doing one thing: executing PHP.

They don’t serve up image files or store anything persistent on disk. They all communicate to a central database system that has proper redundancy built in.

Since the Minis in this application don’t use any local disk resources other than to store PHP code, if a Mini were to fail or fall out of the set for some reason, the system as a whole still runs. The redundancy you would normally build into a bigger, beefier server (and pay extra for) is handled by simply having more servers. Google does the same thing: when a server in their system dies, they really don’t care too much, because they built their redundancy in through sheer numbers. For the price of 2 well-outfitted Intel Xserves, I can purchase 12 MacMinis, use less power (~4.2A), get 2-3 times more processing power, and have a LOT more redundancy.

Based on the wording of articles linking to my post (and perhaps my own wild performance claims), there was an inordinate amount of chatter focusing on how the MacMini was faster than the Xserve G5 in CPU performance, when that wasn’t the primary point I wanted to convey about why the Minis are a good fit for a front-end web farm. (Perhaps better writing skills would help.)

My main thrust was that the balance between price, performance, power consumption, heat generation and reliability (Apple has on average produced better and longer lasting hardware) of the MacMini was hard to match.

When compared against traditional blade systems like those offered by IBM, Dell, HP, etc., you obviously lose certain features, primarily centralized management tools and a unified chassis for power and network cabling, among others. However, those hardware systems cost more overall.

In deploying networked systems, really in life, one must balance certain choices.

For this application for this client, the MacMini has the right balance in all those critical areas.


Some example hardware configs I threw together for comparison of performance, cost, power:

Dell PowerEdge 1950 config
Dual quad-core CPU blade
Intel Xserve, dual dual-core 3.0GHz
MacMini 1.66GHz Core Duo, 2GB RAM, bigger hard drive

For comparison’s sake, I’m going to use the Mini’s 1.66GHz Core Duo CPU as a base unit: one Mini has 2 of these “core-units”. Also, I tried to maintain 1GB of RAM per core.

I’m going to make up another value: Power Ratio, which is amps per core-unit. For example, a MacMini has a Power Ratio of 0.15.
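The arithmetic is simple enough to sketch out (the amp figures are the ones quoted in these posts):

```python
def power_ratio(amps: float, core_units: int) -> float:
    """Amps drawn per core-unit (one 1.66GHz Core Duo core)."""
    return amps / core_units

# A MacMini draws roughly 0.3A and contains 2 core-units
print(round(power_ratio(0.3, 2), 3))   # 0.15
# A blade chassis delivering 12 core-units per 1A of draw
print(round(power_ratio(1.0, 12), 3))  # 0.083
```

Lower is better: fewer amps for the same amount of CPU.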

So, assuming a minimum of two systems for redundancy:

Some interesting choices here. Not exactly a scientific analysis, but the Mini definitely holds its own against the single-unit servers. Against the blade systems, it’s a closer call, especially with the ability to choose 2 x quad-core CPUs for the Dell blades. The best part of the blade system is that its power usage grows much more slowly per unit of performance than any other system: 12 core-units per 1A of power is an excellent value, a Power Ratio of 0.083.


Our client has expressed interest in adding 2-3 more Minis to the set, and I think we will give Debian/Ubuntu a shot this time for comparison.

One person in the Ars Technica comment section did make an interesting point about why you would want to run an OS X machine as a web front end: WebObjects.

So, thanks for reading everyone, it’s always interesting to throw something out there and see what has to be said about it.

Brian

Posted by Brian Blood as Content Networking, Servers at 6:09 PM UTC
