October 14th, 2014

snmp-mibs-downloader screenos unpack error

Debian wheezy system, trying to get some SNMP object names showing up in my snmpwalk output.

% download-mibs



inflating: /tmp/tmp.4hFK5WH1Uf/6.3mib/6.3mib/standards/mib-rfc3896.txt
cat: /tmp/tmp.4hFK5WH1Uf/snmpv2/NS-ADDR.mib: No such file or directory
WARNING: Module(s) not found: NETSCREEN-ADDR-MIB

 

The fix: modify /etc/snmp-mibs-downloader/screenoslist, changing the ARCHDIR line from:

ARCHDIR=snmpv2

to:

ARCHDIR=6.3mib/6.3mib/snmpv2
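
If you’d rather script it, something like this makes the same edit and re-runs the download (a sketch, assuming the stock file has that single ARCHDIR line):

sed -i 's|^ARCHDIR=snmpv2$|ARCHDIR=6.3mib/6.3mib/snmpv2|' /etc/snmp-mibs-downloader/screenoslist
download-mibs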

 

Posted by Brian Blood as Linux at 3:21 PM UTC

No Comments »

August 18th, 2013

GPLPV drivers in an ISO

We host Windows-based virtual servers on Xen host machines, and to get the best performance from these systems we add the GPL’ed paravirtualization (GPLPV) drivers available on the Meadowcourt page.

Maybe I’m doing it wrong or the hard way, but when doing a fresh install of a Windows system, the only way I had of getting the drivers onto the new domU was to create a ‘throwaway’ model=e1000 VIF and use it to download the drivers onto the system (all the while accessing the system via the VNC framebuffer). Only after that could I properly configure the new and final interface(s).

I’ve come up with a slightly easier/faster way of getting the drivers onto the system: put them into an ISO image and attach it to the domU so it looks like a CD-ROM from which I can run the driver installers.

Here is the result for anyone to download and do the same:

 GPLPV-20130818.iso

Here are some snippets from the xen cfg:

disk = [
 "phy:/dev/vm5vg1/win7bench-disk,hda,w",
# "file:/storage/iso/GPLPV-20130818.iso,ioemu:hdc:cdrom,r",
# "file:/storage/iso/SW_DVD5_Win_Pro_7w_SP1_64BIT_English_-2_MLF_X17-59279.ISO,ioemu:hdc:cdrom,r",
 ]
vif = [
# "bridge=xenbr999,model=e1000,vifname=win7bench-colo",
 "bridge=xenbr3,type=paravirtualized,vifname=win7bench-dmz",
 "bridge=xenbr999,type=paravirtualized,vifname=win7bench-colo",
 ]

I used Windows XP under VMware Fusion on my Mac to download the drivers and created the ISO with MagicISO. I saved that to a directory shared with the Mac and then scp’ed the file to the Linux server. How’s that for geek?
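
These days I would probably skip the Windows/MagicISO step entirely and build the ISO right on the Linux host; a minimal sketch, assuming the driver installers have already been downloaded into a local directory:

genisoimage -J -R -V GPLPV -o /storage/iso/GPLPV-20130818.iso /path/to/gplpv-drivers/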

 

OK, this shell script looks even geekier:

https://github.com/tmartinx/xenlivecd/blob/master/arch-common/32-gen-windows-gplpv-drivers-iso.sh

 

Posted by Brian Blood as Servers, Virtualization at 2:05 PM UTC

No Comments »

June 9th, 2012

Mini-PCIe SSD in MacMini

I’ve been playing with ports/drives/SSDs in MacMinis recently, and I thought I would attempt something I hadn’t seen anyone else try.

Replace the Mini-PCIe AirPort card in an Intel MacMini with an SSD, the idea being that this would become the boot device alongside the 2 internally mirrored drives.

So I picked up this item off eBay: a SuperTalent 32 GB internal Solid State Drive (FEM32GFDL).

These are primarily meant for a Dell Mini 9, but I hoped the Mac would see it as a regular drive. This MacMini is running 10.6.8 Server.

Tore MacMini open, installed card, put everything back together and booted the system.

Unfortunately the drive was not recognized.

It didn’t show up in Disk Utility. I did look through the device tree with IORegistryExplorer and found what looked like an entry for the card, but apparently there aren’t any drivers for this type of device (essentially a PATA bus/drive presented via the PCIe interface).

The next 2 stops down the road of MacMini mania: the hack of routing one of the internal SATA ports out through the case to an eSATA device (a Drobo S, likely), and trying the Mini-PCIe card that adds 2 SATA ports (with the same external eSATA connection).


Posted by Brian Blood as Hardware at 4:03 PM UTC

No Comments »

August 30th, 2011

QuickTime Streaming on Lion Server

Apple, in its infinite wisdom, has decided that we no longer need QuickTime Streaming as a function of OS X Server. A client recently purchased a new MacMini Server (Lion Server), and in the process of working through migrating data and services off some old Xserve G4s we found that QTSS was missing.

There is the Darwin Streaming Server project at Mac OS Forge, so downloading and installing the latest build there (6.0.3) seemed like the proper course of action. One glitch, though: the installer refuses to proceed against any OS install that is OS X Server.

Since those checks are done with scripts, the workaround is to copy the DarwinStreamingServer.pkg installer bundle to your hard drive where it can be edited.

In the script DarwinStreamingServer.pkg/Contents/Resources/VolumeCheck, the first stanza of checks starts on line 29 and begins with a test of $INSTALLED_SERVER_OS_VERS…

…comment that entire stanza out with # and the Installer will then let you install DSS on your Lion Server machine.
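
Roughly, the whole workaround looks like this (a sketch; the mounted volume name and destination are assumptions):

cp -R /Volumes/DarwinStreamingServer/DarwinStreamingServer.pkg ~/Desktop/
# comment out the $INSTALLED_SERVER_OS_VERS stanza with '#'
nano ~/Desktop/DarwinStreamingServer.pkg/Contents/Resources/VolumeCheck
open ~/Desktop/DarwinStreamingServer.pkg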

This might also prove useful:
chmod +ai "qtss allow list,search,readattr,readextattr,readsecurity" QTSSMoviesDir

Posted by Brian Blood as OS X Server, Servers at 11:41 AM UTC

1 Comment »

July 8th, 2011

When Search Algorithms Go Wrong

I was searching for some laptop drives this morning on Google Shopping, and the term “Seagate Momentus” produced this as the first hit:

Lays Potato Drives

Not sure how I’ll fit a bag of yummy chips into a MacMini, but I’ll give it a try, Google!!!

 

Posted by Brian Blood as General, Text Munging at 11:28 AM UTC

No Comments »

June 11th, 2011

Lone Star PHP 2011 Presentation – Anecdotal D&D

Thanks to everyone who came to my presentation today at the Lone Star PHP conference.

Here is my presentation in PDF form.

Anecdotal Development and Deployment

Please let me know if you have any questions.

Brian

 

Posted by Brian Blood as General, MySQL, php, Servers, Web App Development at 2:53 PM UTC

No Comments »

May 23rd, 2011

Differences in Hardware/Software for a Database Server

I previously posted about the migration we performed for a customer’s email server.

This weekend we performed a cutover for another client from an older dedicated MySQL database server to a newer piece of hardware. First, the graph:

 

And so where the old system was under an almost continuous load of 1.0, the new system now seems almost bored.

Specs for old DB Server

Specs for new DB Server

The client is understandably very happy with the result.

 

Posted by Brian Blood as Database, Hardware, Linux, MySQL, Servers at 11:31 AM UTC

No Comments »

April 17th, 2011

Speaking at LoneStar PHP conference – June 11

Please come to the LoneStar PHP conference on June 11 where I will be giving a talk about setups/workflows for development and deployment of PHP based web sites/applications.

Where: Crowne Plaza Suites – Coit/635
When: Saturday, June 11, 2011 – 8:00am – 5:30pm
How much: a bargain at $60, and lunch is provided.

 

Posted by Brian Blood as php, Servers, Soap Box, Web App Development at 11:39 AM UTC

No Comments »

April 5th, 2011

Simple Checkout of phpMyAdmin with git

When setting up webservers for clients, I’ll usually configure and secure an installation of phpMyAdmin to allow them easy access to the MySQL database server.

I would also want to make sure it was easy to update that installation to the latest stable version. In the past this was easily done by initially checking out the STABLE tag from their Subversion repository with a command in this style:

svn co https://phpmyadmin.svn.sourceforge.net/svnroot/phpmyadmin/tags/STABLE/phpMyAdmin

and then when I wanted to update, it was a simple matter of running:

svn update

Well, the phpMyAdmin folks switched over to using git as their SCM system, and unfortunately they didn’t post any instructions on how a web server administrator who was never going to be changing/committing code back into the system would perform the same action using git. It took me about an hour of searching, reading, digesting how git works, and cursing, but I finally came up with the following set of git commands needed to check out the STABLE branch of phpMyAdmin:

git clone --depth=1 git://github.com/phpmyadmin/phpmyadmin.git
cd phpmyadmin
git remote update
git fetch
git checkout --track -b PMASTABLE origin/STABLE

…what that means is: clone the main repository from the specified path, drop into the newly created repository directory, refresh the list of remote branches, and finally create a new branch in the local repository called PMASTABLE that will track the remote repository’s branch “origin/STABLE”.

The "--depth=1" parameter tells git not to copy the full history of changes, only the most recent set.

So, from here on, I should be able to merely run: “git pull” on that repository and it should update it to the latest STABLE.
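
For what it’s worth, git can also do this in a single step; a sketch, not the commands I originally used:

git clone --depth=1 --branch STABLE https://github.com/phpmyadmin/phpmyadmin.git
cd phpmyadmin
git pull   # later, to pick up the latest STABLE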

Hopefully, others will find this useful.

UPDATE – Mar 13, 2014
I updated the above commands to show the correct GitHub URL and two more commands to make sure the local repo sees the STABLE branch before trying to create a local branch.

Posted by Brian Blood as Database, MySQL, php, Servers, Web App Development, Web Software at 9:24 PM UTC

5 Comments »

March 31st, 2011

Debian 6.0 Squeeze on Xserve G5 with 4TB

As far as Apple servers go, Xserve G5s are now in a tight spot.

  1. Good: Reasonably fast CPUs; they are certainly powerful enough for almost any internet-based web site
  2. Good: They can take 8/16GB of RAM
  3. Good: They support SATA disks so at least you can buy modern replacements.
  4. Bad: Mac OS X Server support halted at Leopard
  5. Good/Bad: Hardware RAID works, for the most part.
  6. Good: FireWire 800
  7. Bad: Only one power supply
  8. Bad: Only 2 slots for expansion cards. No built-in video.
  9. Good/Bad: Market value is pretty low right now.

I’m not one to let extra server hardware lie around; I’ll find a use for it. I still have Xserve G4s in production. However, I’d like to see a more up-to-date, leaner OS run on it, and Debian keeps a very good PowerPC port up to date. With the latest rev, 6.0, just released, I thought I would combine the two and see what results. My main goal is to be able to continue using these machines for certain specific tasks and not have to rely on Apple to keep the OS up to date, as Leopard support will surely be dropped pretty soon.

Some uses I can think of immediately:

  1. Dedicated MySQL replication slave – with enough disk space and RAM, I can create multiple instances of MySQL configured to replicate from our different Master Servers and perform mysqldumps for backup purposes on the slaves instead of the masters.
  2. Dedicated SpamAssassin, ClamAV scanners.
  3. Bulk mail relay/mailing list server.
  4. DNS resolver
  5. Bulk File Delivery/FTP Server
  6. Bulk Backup storage.
  7. iSCSI target for some of the Xen-based virtualization we have been doing. Makes it easy to back up the logical volume for a domU: just mount the iSCSI target from within dom0 and dd the domU’s LV over to an image file on the Xserve G5 (a sketch of that follows below).
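
A minimal sketch of that backup idea; the device names, LV name and mount point here are purely illustrative:

# in dom0, with the Xserve G5's iSCSI target already logged in and mounted
mount /dev/sdc1 /mnt/g5-backup
dd if=/dev/vg0/somedomU-disk of=/mnt/g5-backup/somedomU-disk.img bs=4M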

The first goal is to determine how easy it is to install/manage this kind of setup. The second is to see how well the system performs under load.

As for configuration, the main thing I’m curious about is whether the Hardware RAID PCI card works and is manageable from within Debian. I would likely choose not to use that card, as doing so would require more stringent/expensive disk options and would take up a PCI slot. In the end, I’ll likely lean towards a small SSD for the boot volume and a software RAID mirror of 2 largish SATA drives in the other bays. I don’t expect to use this system for large amounts of transactional work, so something reliable and large is the goal, as we want to extend the life of this system another couple of years.

Partitioning and Basic Install

Using just a plain 80GB SATA disk in Bay 1, I was able to install from the NetInstall CD without issue. The critical item appears to be creating the right set of partitions so that OpenFirmware will boot the system correctly:

  1. 32k Apple partition map – created automatically by the partitioner
  2. 1MB partition for the yaboot boot loader
  3. 200MB partition for /boot
  4. 78GB partition for /
  5. 1.9GB partition for swap

Things installed smoothly and without issue. Worked like a normal Debian install should. System booted fairly quickly; shutdown and cold-boot worked as expected.

This partitioning setup comes from: XserveHowTo

Hardware RAID PCI card compatibility

I dropped in one of these cards and a set of 3 Apple-firmware drives I had lying around, and booted the system off the CD. Unfortunately, I immediately started getting spurious keyboard/video/network failures. No great loss: continuing to use that card with bigger drives would require buying expensive Apple-firmware drives. No thanks. This is a simple bulk data server, so I pulled the card out, and that leaves me a slot for another NIC or some other card.

Software RAID and the SSD Boot disk

Linux also has a software RAID 5 capability, so the goal will be to use 2TB SATA disks in each of the three drive bays, then use software RAID5 to create a 4TB array. One important thing to note when putting these newer, bigger disks into the Xserve: make sure to put the jumper on the drive’s pins to force SATA 1 (1.5Gbps) mode. Otherwise the SATA bus on the Xserve will not recognize the drive; your tray will simply have a continuously lit activity LED.

With the 3 drive bays occupied by the 2TB drives, instead of configuring and installing the OS on the RAID5 array, I thought I would be clever and put a simple little 2.5″ SSD into a caddy that replaces the optical drive and that would serve as my boot drive.

The optical drive in the Xserve G5 is an IDE model, but no worry: you can purchase a caddy with an IDE host interface and a SATA disk interface; the caddy has an IDE/SATA bridge built into it.

I happened to have a 32GB IDE 2.5″ SSD, so I got a straight IDE/IDE caddy. Ultimately, you will want to have that drive in place when you run the installer, which turns out to be not so easy, but it is doable.

The general outline for this install is: perform a hard-drive media install with an HFS-formatted SATA disk in Bay 1. Install to the SSD in the optical caddy, then set up the MD RAID5 device comprising the 3 x 2TB disks AFTER you get the system set up and running on the SSD. Because of the peculiarities of OpenFirmware and the yaboot boot loader, it’s much simpler to get the system installed onto what will be the final configuration.

Hard Disk Based Install

Basic outline:

  1. Format a new HFS volume on a SATA disk.
  2. Copy the initrd.gz, vmlinux, yaboot and yaboot.conf files onto the disk.
  3. Place the disk into an ADM tray and insert it into bay 1.
  4. Have a USB stick with the Broadcom firmware deb package on it plugged into the server.
  5. Boot the machine into OpenFirmware (Cmd-Option-O-F).
  6. Issue the command: boot hd:3,yaboot (if that doesn’t work, try: boot hd:4,yaboot)
  7. Choose “install” from the yaboot screen.
  8. Perform a standard Debian installation.

From this point on, there aren’t really any differences between this system and any other Debian install.

RAID and LVM

After installation is complete, shut the system down and insert the three 2TB disks and boot back up.

Install MDADM and LVM packages:

apt-get install mdadm lvm2

Basic steps for creating the RAID 5 array:

  1. Set up a partition table and a single Linux RAID partition on each 2TB drive.
  2. Create the RAID5 array (a quick status check follows below):   mdadm --create /dev/md0 --level=raid5 --raid-devices=3 /dev/sda2 /dev/sdb2 /dev/sdc2
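
Once the array is created it begins an initial sync; a couple of the usual commands to keep an eye on it (not from my original notes):

cat /proc/mdstat
mdadm --detail /dev/md0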

Logical Volume Manager setup:

The idea here is to grab the entire 4TB logical disk and designate it as a “physical volume”, upon which we will put a single Volume Group for LVM to manage. That way we can create separate Logical Volumes within the VG for different purposes. iSCSI target support will want to use an LV, so we can easily carve out a 1TB section of the Volume Group and do the same for potentially other purposes.

Basic LVM setup commands:

  1. pvcreate /dev/md0
  2. vgcreate vg1 /dev/md0

Here are the resulting disks and volume group:

root@debppc:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/hda4              26G  722M   24G   3% /
tmpfs                 492M     0  492M   0% /lib/init/rw
udev                  486M  204K  486M   1% /dev
tmpfs                 492M     0  492M   0% /dev/shm
/dev/hda3             185M   31M  146M  18% /boot
root@debppc:~# vgs
 VG   #PV #LV #SN Attr   VSize VFree
 vg1    1   0   0 wz--n- 3.64t 3.64t

Then:

lvcreate --size=1T --name iSCSIdiskA vg1
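
If a Logical Volume were destined for local use rather than iSCSI export, it would be formatted and mounted like any other block device; a sketch with a hypothetical second LV and mount point:

lvcreate --size=500G --name bulkstore vg1
mkfs.ext4 /dev/vg1/bulkstore
mkdir -p /srv/bulk
mount /dev/vg1/bulkstore /srv/bulk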

So to recap the layers involved here:

  1. 3 physical 2TB disks, sda, sdb, sdc
  2. Linux RAID type partition on each, sda2, sdb2, sdc2 (powerpc partitioning likes to put a 32K Apple partition at the start)
  3. Software RAID5 combining the three RAID partitions into a single multi-disk device: md0
  4. LVM taking the entire md0 device as a Physical Volume for LVM
  5. A single Volume Group: vg1 built on that md0 Physical Volume
  6. A single 1TB Logical Volume carved out of the 4TB Volume Group
  7. iSCSI could then share that Logical Volume out as a “disk”: a block-level device mounted on another computer, which can format/partition it as it sees fit. That could even be a Mac with an iSCSI initiator driver, and it could be in another country, since it’s all mounted over an IP network. (A sketch of the target configuration follows below.)
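
With the iscsitarget (IET) package mentioned in the summary below, exporting that LV comes down to a couple of lines in ietd.conf; the IQN here is a made-up example:

Target iqn.2011-03.local.debppc:vg1.iscsidiska
    Lun 0 Path=/dev/vg1/iSCSIdiskA,Type=blockio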

Performance

It used to be that running RAID5 in software was “inconceivable!”, but the Linux folks have latched onto the great SIMD engines that chip manufacturers have put into their products over the years, and they use that hardware directly to accelerate RAID xor/parity operations. From dmesg:

[   14.298984] raid6: int64x1   1006 MB/s
[   14.370982] raid6: int64x2   1598 MB/s
[   14.442974] raid6: int64x4   1769 MB/s
[   14.514972] raid6: int64x8   1697 MB/s
[   14.586965] raid6: altivecx1  2928 MB/s
[   14.658965] raid6: altivecx2  3631 MB/s
[   14.730951] raid6: altivecx4  4550 MB/s
[   14.802961] raid6: altivecx8  3859 MB/s
[   14.807759] raid6: using algorithm altivecx4 (4550 MB/s)
[   14.816033] xor: measuring software checksum speed
[   14.838951]    8regs     :  5098.000 MB/sec
[   14.862951]    8regs_prefetch:  4606.000 MB/sec
[   14.886951]    32regs    :  5577.000 MB/sec
[   14.910951]    32regs_prefetch:  4828.000 MB/sec
[   14.915087] xor: using function: 32regs (5577.000 MB/sec)

So, running software RAID 5 should have minimal effect on the overall performance of this machine.
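
A crude way to sanity-check raw sequential read speed off the finished array (not a benchmark from this post, just a quick check; hdparm may need to be installed first):

hdparm -t /dev/md0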

Summary

I’ve been running the system for a couple of weeks now; a couple of observations:

  1. the software RAID 5 has not been a big deal as far as I can tell.
  2. after installing the hfsplus debian package I was able to attach and mount a firewire drive with data I wanted to move over quickly from a Mac OS X Server.
  3. I installed and compiled in the iscsitarget kernel module and started creating iscsi target volumes for use on some other servers. very nice.
  4. I configured my network interfaces using some clever ip route statements I found, to attach and dedicate a different gigabit NIC for iSCSI purposes even though both interfaces are on the same subnet (the gist is sketched below).
  5. The performance on the system is adequate, but not stupendous. I was copying about 300GB worth of MySQL tables from a database server using an iSCSI target volume and the load on the Xserve stayed around 8 for a couple of hours. Whether that’s good/bad I’m not sure.
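
The gist of those routing statements is policy routing: give the iSCSI NIC its own routing table so its traffic always leaves on that interface. A sketch with made-up interface names and addresses:

# eth1 is the dedicated iSCSI NIC; both NICs sit on 192.168.10.0/24
ip route add 192.168.10.0/24 dev eth1 src 192.168.10.21 table 10
ip rule add from 192.168.10.21 table 10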

Overall, it’s been an interesting exercise and I’m really glad I could repurpose the machine into such a useful item.

Posted by Brian Blood as Hardware, Linux, Servers at 12:47 PM UTC

No Comments »
