February 14th, 2011

Differences in Hardware/Software for an Email Server

One of our customers is running our ECMSquared Email server solution and recently decided they had outgrown the platform it was installed on. Mailbox access was slow, webmail was slow, and the system felt constantly overloaded.

When planning an upgrade like this, you have to budget not only for the hardware but also for the expert’s time, and this customer was on a tight budget. They decided that spending money on our services to make sure the transition went smoothly was a higher priority than getting the biggest, fanciest hardware rig. After all, this is email: a service that may not seem critical, but it’s the first thing people notice when it stops working correctly. So we put together a proposal for the migration.

Old system: Apple Xserve G5 – 2x 2.0GHz G5 – 6GB RAM – 3 x 250GB SATA H/W RAID 5, running Tiger Server.

Upgrading the OS on the system from Tiger to Leopard Server should have yielded some performance gains, especially with the finer-grained kernel locking introduced in Leopard, but with the main issue being slow mailbox access, we felt the file system was going to remain the biggest bottleneck. HFS+ doesn’t handle thousands of files in a single directory very efficiently, and having to enumerate a directory like that on every delivery and every POP3/IMAP access was taking its toll. Also, with Apple discontinuing PPC support and the demise of the Xserve, we assessed the longevity of this hardware as low.
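You can get a feel for the enumeration cost with a crude test: time an unsorted listing of one big maildir-style directory (path hypothetical; ls -f skips sorting, so the file system’s directory lookup dominates the time):

time ls -f /Volumes/MailStore/user/jsmith/cur | wc -l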

The decision was made to go to a Linux-based system running ext3 as the file system. Obviously this opened up the hardware choices quite a bit.

A mail server is very much like a database server in that the biggest bottleneck is almost always disk throughput, not CPU or network. Given the customer’s budget concerns, we wanted to get them the biggest, fastest drive array the budget allowed. There aren’t a lot of choices for bigger/faster hard drives at a reasonable price, so we ended up choosing 3 x 146GB 10k RPM SCSI drives in a RAID 5 array.

New system: Dell PowerEdge 1750 – 2x 3.2GHz Xeon – 8GB RAM – 3 x 146GB new SCSI drives in H/W RAID 5.

Obviously this is relatively old hardware, but we were able to get everything procured, along with some spare drives, for ~$600.

We installed Debian Lenny and a custom-compiled version of Exim onto the system (a build sketch follows below) and ran several days of testing.
Then we migrated their system over late one night and everything went smoothly.
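For the curious, compiling Exim from source follows the usual Exim pattern; a rough sketch with illustrative version, mirror, and package names, not our exact build:

apt-get install build-essential libpcre3-dev libdb-dev
wget ftp://ftp.exim.org/pub/exim/exim4/exim-4.72.tar.gz
tar xzf exim-4.72.tar.gz && cd exim-4.72
cp src/EDITME Local/Makefile
# edit Local/Makefile: BIN_DIRECTORY, CONFIGURE_FILE and EXIM_USER at minimum
make && make install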

The change in the hardware/OS/file system stack produced the following graph of the system’s load average:

[Graph: Load Average, before and after the migration]

You can see how dramatic the difference in server load is from before. The customer is very happy with the snappiness of the system now.

Even though the server hardware is a bit older, it’s applying the right resources in the right spots that makes things run very smoothly.

We expect many more years of usage from this system.

Posted by Brian Blood as Hardware, Linux, Mail Server, Servers at 10:48 PM UTC


November 4th, 2010

First Dead MacMini Power Supply

I was at the datacenter we are moving out of this evening, rearranging power connections for some straggler customers so I could free up some power feeds. As part of that process I was unplugging and replugging power supplies in a load-balanced set of MacMinis, and when I went to turn one of them back on, it would not power on. It turns out the power supply must have died, perhaps from a spike. I’ve worked with MacMinis of varying designs for many years now, easily over 50 of them, and this is the first one I’ve had die.

I guess that’s a good track record.

Posted by Brian Blood as Hardware at 12:11 AM UTC


July 26th, 2010

Ubuntu Server 10.04 on a Dell PowerEdge 2450

We had a Dell PowerEdge 2450 lying around doing nothing, and a friend asked me to set up a server for him so he’d have a dedicated system to do some Drupal work. I said, no problem… Boy, was I in for it.

I downloaded the Server ISO and burned it. After upgrading the RAM from 1GB to 2GB and setting up the 3 x 18GB 10k RPM SCSI disks in a RAID 5, I booted from the fresh disc. The Ubuntu installer came up, but when it dropped into the Debian base installation and tried to load components from the CD, it got stuck about 17% of the way through, saying it could no longer read the CD-ROM. So I tried burning another copy… Same thing.

OK, this system is pretty old, so I swap out the older CD-ROM for a tray-load DVD-ROM. Same thing, but at 21%. Grrr.

I try a THIRD CD burn in a different burner and it still halts at 21%. I pop into the pseudo-shell in the installer and try an ls on the /cdrom directory. I get some Input/Output error lines for docs, isolinux and some other items, but I do get some output lines from that directory…

OK, now I’m wondering if my ISO got corrupted in the initial download. Unfortunately, Ubuntu does NOT provide MD5 checksums for their ISO images, at least not directly on the page where you download them.
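The checksums do live on the release mirrors, though, just not linked from the download page. Checking is quick (file name assumes the 32-bit 10.04 server image):

wget http://releases.ubuntu.com/10.04/MD5SUMS
md5sum ubuntu-10.04-server-i386.iso
grep server-i386 MD5SUMS

If the two hashes match, the ISO is fine and the problem is the burn or the drive.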

Let’s ask the Google. Apparently others have had this same issue since at least the 7.0 series. The Minimal CD works, but there doesn’t seem to be a way to install the Server edition from it.

I finally find a post (see link below) where success was had by using a SECOND copy of the installer in a USB-connected CD-ROM drive. The system boots off the internal CD but pulls all the material from the CD in the USB drive.

It is finishing the install as I type this.

Wow, what a rabbit hole!

Just another example of: “Linux is free if your time is worth nothing.”

Posted by Brian Blood as Hardware, Linux, Servers at 9:05 PM UTC


June 18th, 2010

Speaking at Dallas TechFest 2010

If you will be in the DFW area at the end of July, please come see the talk I will be giving in the 3rd session of the PHP track, on Building Scalable PHP Web Applications.

The conference will be at the University of Texas at Dallas on July 30.

http://dallastechfest.com/Tracks/PHP/tabid/74/Default.aspx

Brian

Posted by Brian Blood as php, Servers, Soap Box, Web App Development at 4:21 PM UTC


May 17th, 2010

The Great Leap Beyond One – Creating Scalable PHP Web Applications

I gave a presentation to the Dallas PHP user group on May 11, 2010 on Creating Scalable PHP Web Applications.

Download the presentation in PDF.

Here is a basic outline:

Posted by Brian Blood as Content Networking, Database, Hardware, MySQL, php, Servers, Web App Development at 11:47 PM UTC


April 7th, 2010

Leopard Server Upgrade – postfix not logging or delivering

We have a development server for a client that was recently upgraded from Tiger Server to Leopard Server. This system holds the Subversion repository and the staging sites for their hosted application. One of the configured pieces is that whenever someone commits to the SVN repository, a post-commit hook sends a message to all the developers with the information from the revision commit. Email on this system is handled by the Apple built-in postfix. When the system was upgraded, we noticed that we no longer received our SVN commit messages. Investigating, I found two things that needed fixing.
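For context, the hook itself is the usual svnlook-plus-mail pattern; a minimal sketch with a hypothetical recipient address, saved as hooks/post-commit in the repository:

#!/bin/sh
# Mail the commit metadata and changed paths for each new revision.
REPOS="$1"
REV="$2"
{
  /usr/bin/svnlook info "$REPOS" -r "$REV"
  echo ""
  /usr/bin/svnlook changed "$REPOS" -r "$REV"
} | /usr/bin/mail -s "SVN commit r$REV" dev-team@example.com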

My first problem was that the logging postfix was sending to syslogd was very sparse, so I checked through all the settings twice, in Server Admin and directly in the main.cf and master.cf files. It took me a while, but I finally looked at the /etc/syslog.conf file and found that the facility entry for mail was set to mail.warn. I checked the Server Admin setting for the SMTP log level and set it to Debug.
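In other words, the syslog.conf line:

mail.warn                                    /var/log/mail.log

was only letting warnings and above through, and it needed to be something chattier, e.g.:

mail.info                                    /var/log/mail.log

(Log path per the OS X default; Server Admin’s Debug setting corresponds to an even more verbose level.)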

Second problem: now that logging was fixed, I could see that the relayhost set in the config was rejecting the messages. So not only were the original messages being rejected, the bounce messages were being bounced. Essentially anything being sent was dying a quick death. I fixed the relayhost setting, tried another message, and BAM, message delivered.
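For anyone hunting for the same setting, relayhost lives in /etc/postfix/main.cf (host name hypothetical); the brackets tell postfix to skip the MX lookup on that name:

relayhost = [mail.example.com]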

Upgrading from Tiger to Leopard is an important step to take, but as with all upgrades, you really must go through all your settings once again to verify their correctness.

Posted by Brian Blood as OS X Server, Servers at 3:54 PM UTC


March 11th, 2010

Site to Site VPN with Mac OS X Server and a NetScreen

A client needs to have a Site to Site VPN between a server at their office and a NetScreen at their colo.

I did a fresh install of Leopard Server, fully and cleanly updated to 10.5.8, running on a G4 MacMini, to make sure I could configure both sides properly.
My test server is on a clean public static IP address on the built-in ethernet.
The secondary ethernet is a USB Ethernet adapter for the private side of the network.

System has no issues until…..

I used the s2svpnadmin CLI tool to create a new shared-secret IPSec tunnel to a NetScreen at our colo.
Very basic setup, nothing fancy (not that the tool lets you do anything fancy).

After creating the config, I started getting these entries in my system.log:

Mar 10 12:55:56 test1 vpnd[1614]: Server 'TestColo' starting...
Mar 10 12:55:56 test1 TestColo[1614]: 2010-03-10 12:55:56 CST    Server 'TestColo' starting...
Mar 10 12:55:56 test1 vpnd[1614]: Listening for connections...
Mar 10 12:55:56 test1 TestColo[1614]: 2010-03-10 12:55:56 CST    Listening for connections...
Mar 10 12:55:57 test1 ReportCrash[1615]: Formulating crash report for process vpnd[1614]
Mar 10 12:55:57 test1 com.apple.launchd[1] (TestColo[1614]): Exited abnormally: Bus error
Mar 10 12:55:57 test1 com.apple.launchd[1] (TestColo): Throttling respawn: Will start in 9 seconds
Mar 10 12:55:57 test1 ReportCrash[1615]: Saved crashreport to /Library/Logs/CrashReporter/vpnd_2010-03-10-125556_MacServe-Test1.crash using uid: 0 gid: 0, euid: 0 egid: 0

and looking at the crash report:

Process:         vpnd [1614]
Path:            /usr/sbin/vpnd
Identifier:      vpnd
Version:         ??? (???)
Code Type:       PPC (Native)
Parent Process:  launchd [1]

Date/Time:       2010-03-10 12:55:56.252 -0600
OS Version:      Mac OS X Server 10.5.8 (9L34)
Report Version:  6
Anonymous UUID:  7E25DC5D-7D93-42B5-8F69-F7C823244418

Exception Type:  EXC_BAD_ACCESS (SIGBUS)
Exception Codes: KERN_PROTECTION_FAILURE at 0x0000000000000000
Crashed Thread:  0

Thread 0 Crashed:
0   ???                               0x00000000 0 + 0
1   vpnd                              0x0000444c accept_connections + 1280
2   vpnd                              0x00002a08 main + 1572
3   vpnd                              0x00001a48 start + 68
4   ???                               0x00000000 0 + 0

Thread 0 crashed with PPC Thread State 32:
srr0: 0x00000000  srr1: 0x4200f030   dar: 0x000513b0 dsisr: 0x42000000

…. etc. etc.

I do NOT have the VPN service “running”.

I did find this post on Apple discussions:

http://discussions.apple.com/thread.jspa?threadID=1491028#7116067

and followed the poster’s directions for manually starting the tunnel.
I still get a bit of fussing, but no crash.
I checked the IPSec SA/SPD info with setkey -PD, ran some basic pings across the network, and the tunnel is active.

The crashing doesn’t seem to be CPU-architecture dependent, as my system is PPC and the OP on the Apple board is using an x86 machine.

Kind of a bummer. It looks like there is probably some really simple issue here, as the crash apparently happens very early in the setup process, in accept_connections.

Hopefully this will help someone in the future.

Oh and FYI:

Leopard Server IPSec parameters for a Shared Secret based VPN:

Phase 1: Diffie-Hellman Group 2, 3DES, MD5, lifetime: 28800

Phase 2: No Perfect Forward Secrecy; Encapsulated Packet (no AH); AES128 encryption; SHA1 hash; lifetime: 3600; Compression: Deflate (this is optional)
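Those line up with ScreenOS’s predefined proposal names, so the NetScreen side can be configured with something along these lines (gateway names, secret, and address hypothetical, and treat the exact syntax as an assumption to verify against your ScreenOS version):

set ike gateway "office-gw" address 203.0.113.10 main preshare "sharedsecret" proposal "pre-g2-3des-md5"
set vpn "office-vpn" gateway "office-gw" proposal "nopfs-esp-aes128-sha"

pre-g2-3des-md5 and nopfs-esp-aes128-sha are the predefined phase 1 and phase 2 proposals matching the parameters above.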

Posted by Brian Blood as OS X Server, Routers and Firewalls at 11:22 AM UTC


February 25th, 2009

Optimizing a NetScreen 5GT as a Transparent Firewall

We have some Windows-based servers that we colocate for some clients.

We’ve always insisted that those devices sit behind some sort of protection, and for a long time we used a Cisco 2621 as a screening router for a smaller subnet of our main address space. Any traffic that wanted to reach the protected IPs was routed through this device, and we applied access-list screening both inwards and outwards.

Over time, this device became unable to handle the traffic being pushed through it, and we decided to replace it. We had a 10-user model NetScreen 5GT that was untasked, and since we had only a handful of devices on that protected subnet, we found a new home for the 5GT as a transparent firewall for those systems.

The protected subnet was compartmentalized with the use of a non-tagging VLAN on our main Cisco customer-attach switch, so segregation of a broadcast domain was not an issue. We merely needed to configure the 5GT into Layer 2 mode and set up the right policies for both directions of traffic.

I like to filter bogons on our network, so I started there. In this context, any traffic originating from the Untrusted side with a source IP that exists on the Trusted side can also be considered a bogon, so I made sure that rule was in place as well. Since defined addresses must be defined in terms of a security zone, I had to set up our protected IPs in both zones so I could define the correct policies.

One problem I did run up against is the way sessions are handled in ScreenOS. The maximum number of sessions this model of NetScreen can track is 2064, and in a busy period after installing the device we got close to that limit. The solution was to drop the timeout values for POP3 (one of the servers is a mail server) and HTTP/HTTPS in the Predefined Services section down to very low values. This ensures a faster turnover of entries in the session table and keeps it further away from the limit. It does mean a bit more work for the CPU, but the NetScreen’s ASICs are up to the challenge.
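We made the change in the web UI; from memory, the CLI equivalent is along these lines (timeouts are in minutes; verify the syntax against your ScreenOS docs before relying on it):

set service "POP3" timeout 5
set service "HTTP" timeout 5

get session will show you how full the table is at any moment.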

It has turned out to be a very good switch to better hardware, and managing access policies in the ScreenOS web management is much easier than the Cisco ACL approach. My main gripe there is that to make a change to an access list, I have to remove it from the interface, remove it from the router, add the new access list back to the router, then reapply it to the interface. A very tedious chore.

Posted by Brian Blood as Routers and Firewalls at 12:47 AM UTC


January 8th, 2009

noatime for Mac OS X Server boot disk

The new G4 MacMini with the SSD is running beautifully. However, there is one little detail I’d like to take care of to help prolong the life of the SSD: disabling atime updating in the file system.

When we build out Linux servers, one of the configuration changes we always make is to add the noatime flag to the mount options for the file systems. atime is the last-access timestamp, and it is really useless in a server environment.
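On Linux it’s a one-word addition per file system in /etc/fstab; a typical line (device and other options illustrative):

/dev/sda1   /   ext3   defaults,noatime,errors=remount-ro   0   1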

After some empirical testing…..

Under Tiger:

# mount -uvw -o noatime /
/dev/disk0s3 on / (local, journaled)

No effect. It even produced this entry in the system.log:

Jan 8 14:19:27 vpn KernelEventAgent[34]: tid 00000000 received unknown event (256)

Under Leopard:

# mount -vuw -o noatime /
/dev/disk4 on / (hfs, local, journaled, noatime)

where it looks to be supported…

The test is to check the last access time with ls -lu, then simply cat the file, then run ls -lu again.
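Spelled out (file path hypothetical); if the access time shown by the second ls -lu is unchanged after the cat, noatime took effect:

# ls -lu /Users/admin/testfile
# cat /Users/admin/testfile > /dev/null
# ls -lu /Users/admin/testfile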

I guess I’ll need to upgrade the Mini to Leopard Server!

Posted by Brian Blood as Hardware, OS X Server at 3:27 PM UTC


December 20th, 2008

Solving the MySQL on Windows Open File limit – VMWare Linux

This is a continuation of the saga of helping a customer of ours with their MySQL on Windows issues.

The basic premise is that MySQL 5 running under Windows has problems with large numbers of connections/open files.

We initially presented our client with 2 choices for solving their problem:

  1. Setup MySQL on a different server running Linux
  2. Move their database backend from MySQL to Microsoft SQL Server

Option 1 would require a non-trivial capital outlay, plus the time to set up the new machine and migrate the data over.

Option 2 held more interest for them as a longer-term solution, as they were interested in some of the powerful reporting and analysis tools. However, the server that all of this was running on turned out to present a problem, in the following way:

The system is a Dell PowerEdge 2950 with dual dual-core Xeons (HyperThreading enabled, resulting in 8 logical CPUs), 8GB RAM, running Windows Server 2003 x64. For a general ballpark estimate of pricing for MS SQL Server, I went to the Dell website, configured a similar machine, and selected SQL Server as an add-on. Due to the way Microsoft prices SQL Server for a web environment, you must license it for each processor installed in your server. A dual-processor-licensed SQL Server 2005 running under Windows x64 turned out to be in excess of $14,000 on that page.

Needless to say, the customer had a bit of sticker shock. Not only would they have to plop down FOURTEEN grand, they would still need the man-hours to work out the kinks of transitioning the backend of the app from MySQL. Getting a fully loaded separate server running Linux/MySQL for half that cost was looking more attractive.

I was chatting with a friend about the customer’s dilemma and he had a brilliant suggestion: run a Linux instance under VMware on the existing hardware.

I broached the idea with the client and they were game to give it a shot. The server was at this point extremely underutilized and was limited only by the specific software implementations. They were already running MySQL there anyway, so putting a different wrapper around it in order to escape those limits was worth investigating.

The server needed some more disk space, so they ordered 6 more of the 146GB 2.5″ SAS drive modules, like the 2 already installed in a mirrored pair. These are great little devices for getting a decent number of spindles into a database server.

While they did that, I installed the latest release candidate of VMware Server 2.0 and set up a Debian-based Linux install with the latest MySQL 5.0 binary available. During testing we had some interesting processor-affinity issues in the VM environment, and after a very good experience and excellent responses in the VMware community forums, I tweaked the config to pin that virtual machine to processors 5-8, which correspond to all of the logical CPUs on the second socket. This left the first socket’s worth of CPUs for OS and IIS/ASP usage. Doing this helps avoid the cache issues that occur with multi-CPU systems and multi-threaded processes.
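In the .vmx file the pinning looks roughly like this; sched.cpu.affinity is the option I’ve seen for this, and VMX CPU numbering is zero-based, so processors 5-8 become 4-7. Treat this as an approximation of our config, not a copy of it:

sched.cpu.affinity = "4,5,6,7"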

I ended up configuring the VM with 3.5GB RAM and 2 processors, and tweaking the MySQL config to make good use of those resources for their InnoDB-based tables.
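The interesting my.cnf knobs for a dedicated InnoDB box of this size look something like the following (values illustrative, not our exact tuning):

[mysqld]
# give InnoDB the bulk of the VM's 3.5GB
innodb_buffer_pool_size = 2G
innodb_log_file_size = 256M
# flush the log at every commit for durability
innodb_flush_log_at_trx_commit = 1
innodb_file_per_table
# headroom for the connection counts that choked on Windows
max_connections = 500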

The system has been running very reliably for over 2 months and is making very good use of the resources on that machine. The load average in the VM stays well under 1.0, and the CPUs in the Windows system easily handle all the different loads.

Overall I spent probably 8-10 hours installing and tweaking the config and helping them migrate their data. This was much less expensive than the other options, and they got to make use of their existing hardware and software configurations.

A very positive experience and outcome.

Posted by Brian Blood as Database, Hardware, Linux, MySQL, Servers at 2:41 PM UTC

