January 25th, 2007

Slow laptops – Drive speed vs Drive size, RAM, VM

Some good friends of mine emailed me asking the following:

Does it cause pain and/or destruction to our machines in any way to
force quit applications?

Andy-san and I are both too impatient to wait for laptops to close
3 programs so we can start up two others.
Force quit much faster. It is very snap-snap, but we worry about
snap-snap ohhh… you break it.

My reply:

Basic question:
Why are you closing the applications in the first place?

The OS will reallocate RAM to your active applications on an as-needed basis.

Also, you don’t HAVE to wait for an application to finish quitting before doing something else. Use Cmd-Tab to bring up the application switcher, hit tab to scoot over to the app you want to quit (still holding down the Cmd key)…. then hit the Q key and that will send a quit command to the app. You could do all of this while staying in your current application.

In general, I would NOT do what you are doing in force quitting. The most immediate thing I’d be concerned about is corrupting documents, pref files, etc…..

Two things to consider for boosting performance on your laptop:

1. More RAM.
This will ALWAYS help and is worth more than anything.

2. Faster and BIGGER disk.

I just read a review of laptop drives and their relative performance.

They were doing tests of the nominally sized 100 GB 7200 rpm drives and comparing them to the bigger and slower (5400rpm) drives. What they found was very interesting. In general, yes, the 7200 rpm drives did usually beat out the slower 5400 rpm drives, but they then redid the same tests when the drives were loaded with 74GB of STUFF. The 100GB 7200rpm drive at 74% full showed a considerable drop in performance compared to the 160GB 5400 rpm drive which was only 50% full. Enough of a drop to make the drives perform about the same.

Now, to bring the relevance back to your situation…..

As you get more and more applications/data open on your system, the OS will actually save out sections of RAM onto your disk and read them back in as necessary. So, the more RAM you have, the less likely it is that the OS will need to swap out to disk. Ultimately, over time, as you use your computer it will almost assuredly need to swap things in and out (that’s what the spinning beach ball means when you flip back to an app that’s been sitting in the background for a while: the OS is swapping its data back into RAM from the hard drive).

So, the FASTER the hard drive is, the faster it can SWAP. Again, you want to avoid swapping, and that is where more RAM comes in. (See the vicious cycle here?)
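If you’re curious whether a machine is actually hitting swap, here’s a quick check from Terminal (a rough sketch; the counter names vary a bit between OS X releases):

# Show virtual memory counters; a steadily climbing "Pageouts" number
# means the system is writing memory out to disk, i.e. swapping.
vm_stat

# The swap files themselves live here; several large swapfiles is another hint.
ls -lh /private/var/vm/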

If you have older laptops, it’s likely you have even slower hard drives in them (4200 rpm), further aggravating your performance issues.

Clear as mud?

Brian

Posted by Brian Blood as General at 4:41 PM UTC


January 7th, 2007

Software Update Server – using launchd and sed to ensure off-net transfers

Like most hosting companies, I imagine, we have a public-side network (10 and 100 Mbit connections) and a private-side network (GigE).
We monitor the public-side switch ports for billing purposes, and we try to keep any local server-to-server traffic on the private-side network: things like backups, database queries, mail routing, etc.

Well, with the Software Update Server in Tiger Server we now have the ability to update a large set of systems using a single downloaded copy of updates. So using the nice little utility Software Update Enabler built by Andrew Wellington, we can point all our Xserves running Tiger Server to our single SUS through the backside network.
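As I understand it, that utility is essentially just setting the software update catalog URL preference on each client, which you could also do by hand along these lines (a sketch only; the hostname is a placeholder for your SUS’s private-side name):

sudo defaults write /Library/Preferences/com.apple.SoftwareUpdate CatalogURL "http://your.sus.host:8088/index.sucatalog"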

However….

The SUS is really just a customized instance of Apache whose content is updated by the swupd_sync process, which is run periodically. It connects to the Apple SUS (an Akamai cache somewhere close), compares the “index.sucatalog” file, which is merely a big plist XML file, against its local copy, and downloads the missing pieces.

When the swupd_sync process is done, it replaces the hostname/prefix directory portion of all the URLs in that file with its own hostname value, like so:

<key>English</key>
<string>http://swcdn.apple.com/content/downloads/55/22/022-2725/252QW5yQ2d3BLTgrZn8CVB5KGj7YtyZt3b/022-2725.English.dist</string>
is transformed to:

<key>English</key>
<string>http://server.yoursus.com:8088/022-2725/252QW5yQ2d3BLTgrZn8CVB5KGj7YtyZt3b/022-2725.English.dist</string>

So here’s the problem.

That “server.yoursus.com” hostname is more than likely the hostname of the public interface of your server. When a Tiger Xserve connects to our SUS (through its backside interface), it will read the index.sucatalog file, but when it goes to download the actual update files, it will be using those URLs that swupd_sync generated, which means the downloads will travel through the public-side interfaces and completely negate the purpose of running the updates through the backside network.

Now, I have asked the OS X Server product manager to allow a preference in the Software Update panel of Server Admin to set the hostname that swupd_sync uses in constructing those URLs, but that feature hasn’t been added yet, so I had to come up with a workaround of my own to ensure that those URLs in the index.sucatalog file refer to a private-side network interface.

Enter launchd and sed

The first thing to do was to come up with a script that altered those URLs. I needed a quick-and-dirty find-and-replace on a text file, and not much out there is simpler at that than sed.

My simple shell script “alterswuphost.sh” for altering index.sucatalog, which lives in /usr/share/swupd/html:
#!/bin/sh -
#

cd /usr/share/swupd/html;
sed -i .bkp '/server/s/server\.yoursus\.com/serverINTERNAL\.yoursus\.com/g' index.sucatalog;
exit 0;

This says: for lines that contain “server”, find instances of “server.yoursus.com” and change them to “serverINTERNAL.yoursus.com”.

The -i .bkp switch makes a backup copy of the file (index.sucatalog.bkp) before running the find/replace.
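A quick way to sanity-check the script by hand (assuming you’ve saved it where the launchd job below expects it and made it executable):

chmod +x /Server/Scripts/alterswuphost.sh
sudo /Server/Scripts/alterswuphost.sh
grep -c serverINTERNAL.yoursus.com /usr/share/swupd/html/index.sucatalog
# should report a non-zero count; the untouched copy is left in index.sucatalog.bkp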

Now, I needed some way to know when I should run this script, and this is where launchd came into play.
I created a new launchd agent to run whenever launchd noticed that a sync had completed, using the WatchPaths parameter:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>info.networkjack.alterswupdhost</string>
    <key>ProgramArguments</key>
    <array>
        <string>/Server/Scripts/alterswuphost.sh</string>
    </array>
    <key>WatchPaths</key>
    <array>
        <string>/Library/Logs/SoftwareUpdateServer.log</string>
    </array>
    <key>OnDemand</key>
    <true/>
</dict>
</plist>

which I put into /Library/LaunchAgents/info.networkjack.alterswupdhost

and activated with:

launchctl load /Library/LaunchAgents/info.networkjack.alterswupdhost
So when swupd_sync updates its log file, launchd will follow up by running this script, which alters the hostnames with sed.

The Launch Agent watches “/Library/Logs/SoftwareUpdateServer.log” since that’s the most reliable of all the files to watch.
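If you want to verify the whole chain, you can check that the job is loaded and then poke the watched file by hand (just a quick test, not something you’d need regularly):

launchctl list | grep info.networkjack.alterswupdhost
sudo touch /Library/Logs/SoftwareUpdateServer.log
# a moment later the catalog should show the internal hostname again:
grep -c serverINTERNAL.yoursus.com /usr/share/swupd/html/index.sucatalog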

and that’s it!

I have this set up now and it seems to work OK.

Posted by Brian Blood as OS X Server at 2:36 AM UTC


January 3rd, 2007

Debian Linux (Etch) Software RAID 1

Having built a couple of other Debian servers using software RAID 1, but not recalling exactly how I got it to work, I decided to actually document the results here.

So I needed to build up a system that we could dump really large drives into for some customers so they could do offsite backups. We had started doing this using FireWire drives attached to a G4 running Panther Server, but it started to get a bit messy, and FireWire busses can sometimes be a bit finicky.

We had a rackmount system with 8 hot-swap IDE bays in it, powered by an old AMD board with 6 PCI slots, and it was perfect for what we needed. We had it at the colo for doing backups there, but the RAID card had some issues, so we had pulled it and it had been sitting on a shelf for the past year.

I took a SATA PCI card (fake RAID, don’t get me started) and mounted two 160GB SATA drives (that we had pulled from two different PowerMac G5s) into one of the internal drive cages. This gave me 2 nice big disks to create my boot system with.

Booting from an RC1 business-card install of Debian Etch, I got to the Partition Disks section of the install. This is the really tricky part, because if you don’t do things in the right order, the partitioner will not be able to set things up correctly and produce something you can actually install onto.

Here is the basic outline of what I ended up with in terms of partitions:

/boot
/
swap

However!!! Not all of these are created the same way.

So, let’s start with our two physical disks, sda and sdb.

On each of these I created two actual partitions:

  1. one small partition at the beginning (around 64MB) that will be used for the boot mount
  2. the rest of the disk that will be used for everything else

So I ended up with 4 partitions: sda1, sda2, sdb1, and sdb2. All 4 of these partitions have their type set for use with software RAID.
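If you were doing this outside the installer, the important detail is that each of those partitions carries the “Linux raid autodetect” partition type (0xfd). A quick way to double-check what you ended up with:

fdisk -l /dev/sda
fdisk -l /dev/sdb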

Next, I created 2 new software RAID 1 devices using the corresponding partitions on each disk (a rough mdadm equivalent is sketched after the list below):

  1. The first RAID 1 device I formatted as ext3 and designated its mount point as /boot. Nothing more needs to be done with this device.
  2. The second RAID 1 device I did NOT format, but designated it for use by the Logical Volume Manager (LVM). All remaining volumes will be created from this device.
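For reference, outside the installer the same two arrays could be built by hand with mdadm, roughly like this (a sketch only; the device names assume the layout above):

# md0 = small mirror for /boot, md1 = big mirror handed over to LVM
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
cat /proc/mdstat          # watch the mirrors sync up
mkfs.ext3 /dev/md0        # this one becomes /boot; md1 is left unformatted for LVM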

Proceeding into the LVM screens, I did the following (a command-line sketch follows this list):

  1. I created a single Logical Volume Group using the single RAID 1 device I made from the sda2 and sdb2 partitions.
  2. I then created two Logical Volumes from this one LVG: sys (most of the disk) and swap (the ending 8GB of space).
  3. I formatted the sys LV as ext3 and designated it to be the root mount point /.
  4. I designated the swap LV as (surprise) swap.
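The rough command-line equivalent of those LVM steps would be something like this (another sketch; “vg0” is just a placeholder volume group name and the sizes are approximate for a 160GB mirror):

pvcreate /dev/md1                # the big RAID 1 device becomes an LVM physical volume
vgcreate vg0 /dev/md1            # one volume group on top of it
lvcreate -L 140G -n sys vg0      # most of the disk for the root filesystem
lvcreate -L 8G -n swap vg0       # the ending 8GB for swap
mkfs.ext3 /dev/vg0/sys           # format the root volume
mkswap /dev/vg0/swap             # set up the swap volume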

Once this tree of RAID 1 devices and LVG/LVs was in place, I had no problem installing Debian and continuing on with setting up my big drive box. I will use PureFTPd and netatalk (for reasons I will explain in another article) for the server-side daemons.

Posted by Brian Blood as Linux, Servers at 10:17 PM UTC
