
Network Jack Posts

Juniper vSRX Virtual Router on Xen

We’ve been extensive users of Juniper router/firewall products for years; the NetScreen/SSG line of systems, all the way from NetScreen 10s through SSG350s. As this line of products nears the end of its life cycle, we are looking to learn and implement a new platform. The JunOS platform was the nearest cousin, as it shared the same hardware platform; you could convert an SSG320 to a J2320 with a simple software/sticker change. However, the earlier versions of JunOS didn’t sit well with me in how they “did” things. So we waited.

Around the JunOS v9/10/11 timeframe, Juniper rebuilt JunOS to use the same flow-based security architecture that I had grown to appreciate in ScreenOS. I purchased an SRX100 to do some testing on, and it looked like a better and better system. The latest v15 line has some pretty great features, so I’ve been watching the pricing on the SRX240, as that is roughly comparable to the SSG320s that we mostly deploy.

For one of our customers who does a ton of very-low-bandwidth VPNs, adding another 1U hardware router that can do at most 500 or 1000 concurrent VPNs seemed like a potentially losing battle in terms of rack space. As we already make extensive use of virtualization, the fact that Juniper provides a virtualized SRX was very intriguing: we could set up a couple of decently powered 1U Intel servers, each running 5-6 vSRX instances supporting 800-1000 VPNs apiece. That’s a decent win.

So I wanted to try out the vSRX, but Juniper only provides container files pre-built for VMware or KVM. We run Xen exclusively; not XenServer, but bare Debian Xen. Hand-configuring bridges and xen.cfg files is the level of detail/control that gives us a very robust architecture.

To that end, I figured there must be a way to run the vSRX, at least in full HVM mode, on Xen to test out its capabilities. So I downloaded the KVM container, which is a qcow2-format file, and went about installing and setting it up. Here are some of the particulars of how I did that.

For domU storage, we use LVM exclusively, so to convert the qcow2 format into a logical volume you have to use qemu-img. “QCOW” stands for QEMU Copy On Write; it is essentially a sparse image, whereas regular LVM logical volumes are not. So I had to interrogate the file to see how big of an LV I needed to create:

# qemu-img info junos-vsrx-vmdisk-15.1X49-D20.2.qcow2
image: junos-vsrx-vmdisk-15.1X49-D20.2.qcow2
file format: qcow2
virtual size: 16G (17179869184 bytes)
disk size: 2.7G
cluster_size: 65536
Format specific information:
compat: 0.10

So, a 16G LV it is:

lvcreate --size=17179869184B --name=srx1-disk vm2fast1

Then write that qcow image into the lv:

qemu-img convert -O host_device junos-vsrx-vmdisk-15.1X49-D20.2.qcow2 /dev/vm2fast1/srx1-disk
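The two steps above can be scripted so the LV is always created to match the image’s virtual size exactly. A minimal sketch, assuming a qemu-img new enough to emit JSON (the image, VG, and LV names mirror the ones used above):

```shell
# Print the virtual size (in bytes) of a qcow2 image, so an LV of
# exactly that size can be created before qemu-img convert is run.
qcow_virtual_size() {
    qemu-img info --output=json "$1" |
        sed -n 's/.*"virtual-size": *\([0-9][0-9]*\).*/\1/p' | head -n1
}

# Usage:
#   SIZE=$(qcow_virtual_size junos-vsrx-vmdisk-15.1X49-D20.2.qcow2)
#   lvcreate --size="${SIZE}B" --name=srx1-disk vm2fast1
#   qemu-img convert -O host_device junos-vsrx-vmdisk-15.1X49-D20.2.qcow2 /dev/vm2fast1/srx1-disk
```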

Now for the VM config file. I tried many different variations of pvhvm, PV NICs, virtio NICs, etc., but this is the only config that ever produced a usable system with one management interface (the first one is set as the default upon initial setup), one Untrust interface, and one DMZ interface:


builder = 'hvm'
device_model_version = 'qemu-xen'

name = 'srx1'
vcpus = '2'
memory = '4096'
pool = 'Pool-CPU1'
cpu_weight = 384

# vSRX runs JunOS as a nested VM inside a Juniper Linux wrapper,
# so nested HVM must be enabled
nestedhvm = 1

disk = [
    # the LV we wrote the qcow2 image into
    'phy:/dev/vm2fast1/srx1-disk,xvda,w',
]

# Networking -- bridge names here are illustrative; the first VIF
# becomes the management interface, then Untrust and DMZ
vif = [
    'bridge=xenbr0',
    'bridge=xenbr1',
    'bridge=xenbr2',
]

vfb = [ 'type=vnc,vncdisplay=3,vncpasswd=VNCsecret,keymap=en-us' ]

# Behaviour
on_poweroff = 'destroy'
on_reboot = 'restart'
on_crash = 'restart'

I even made a screen recording via the VNC session of the full startup and then shutdown (via the JunOS CLI: request system power-off).


So, the interesting thing here is: JunOS is based on FreeBSD. In order to deliver it as a widely usable virtual machine on the major platforms, Juniper wrapped its FreeBSD/JunOS in a Linux layer (Juniper Linux) and runs JunOS as a virtual machine inside that. That is why they require you to enable nested HVM. Crazy.
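Before enabling nested HVM for the guest, it’s worth confirming that the dom0 hardware exposes hardware virtualization at all. A quick check on any Linux dom0:

```shell
# Look for the hardware-virtualization CPU flags (Intel VT-x = vmx,
# AMD-V = svm) that Xen needs in order to offer nested HVM to the vSRX.
if grep -Eq 'vmx|svm' /proc/cpuinfo; then
    echo "hardware virtualization available"
else
    echo "no vmx/svm flag found"
fi
```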

I have had some trouble on restarts getting the two ge-0/0 interfaces to stay visible to the underlying JunOS. I will definitely need to stand up a KVM-based host to do some more testing, so that the virtio-based interfaces can be fully paravirtualized.

Juniper sells licensing for the vSRX based on a bandwidth and feature-set model. The base license gives you 10Mbps of bandwidth, which would definitely cover the 1000 tunnels our client would want to deploy on each vSRX. A perpetual base vSRX license runs about $1500, which is not bad. An SRX240 currently goes for about $2000-$2400; add in a support contract and a full 1U of rack space for each one, and the vSRX looks like a good deal.


snmp-mibs-downloader screenos unpack error

A Debian wheezy system, trying to get some SNMP names into my snmpwalk output.

% download-mibs

inflating: /tmp/tmp.4hFK5WH1Uf/6.3mib/6.3mib/standards/mib-rfc3896.txt
cat: /tmp/tmp.4hFK5WH1Uf/snmpv2/NS-ADDR.mib: No such file or directory
WARNING: Module(s) not found: NETSCREEN-ADDR-MIB


Modify /etc/snmp-mibs-downloader/screenoslist and change the ARCHDIR line so that it matches the directory the archive actually unpacks into:
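Based on the paths in the unpack error above (the archive extracts into a nested 6.3mib/6.3mib tree while the script looks under snmpv2), the edit is presumably along these lines; both values here are assumptions, so verify them against your installed copy of the file:

```
# /etc/snmp-mibs-downloader/screenoslist
# before (approximate):
ARCHDIR=snmpv2
# after:
ARCHDIR=6.3mib/6.3mib/snmpv2
```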





GPLPV drivers in an ISO

We host Windows based virtual servers on Xen host machines and to get the best performance from these systems, we add in the GPL’ed Paravirtualization drivers available on the Meadowcourt page.

Maybe I’m doing it wrong, or the hard way, but when doing a fresh install of a Windows system, the only way of getting the drivers onto the new domU was to make a ‘throwaway’ model=e1000 VIF which I could use to download the drivers onto the system. Then, after getting the drivers onto the system (all the while accessing it via the VNC framebuffer), I could properly configure the new and final interface(s).

I’ve come up with a slightly easier/faster way of getting the drivers onto the system: put them into an ISO image and attach it to the domU so it looks like a CD-ROM, from which I can run the driver installers.

Here is the result for anyone to download and do the same:


Here are some snippets from the xen cfg:

disk = [
# "file:/storage/iso/GPLPV-20130818.iso,ioemu:hdc:cdrom,r",
# "file:/storage/iso/SW_DVD5_Win_Pro_7w_SP1_64BIT_English_-2_MLF_X17-59279.ISO,ioemu:hdc:cdrom,r",
]

vif = [
# "bridge=xenbr999,model=e1000,vifname=win7bench-colo",
]

I used Windows XP under VMware Fusion on my Mac to download the drivers and created the ISO with MagicISO. I saved that to a directory shared with the Mac and then scp’ed the file over to the Linux server. How’s that for geek?


OK, this shell script looks even geekier:
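A minimal sketch of such a script, assuming genisoimage (Debian) or mkisofs is installed; the paths in the example are illustrative:

```shell
# make_driver_iso DIR OUT -- wrap the driver files in DIR into an ISO
# image at OUT so it can be attached to a domU as a CD-ROM.
make_driver_iso() {
    if command -v genisoimage >/dev/null 2>&1; then
        tool=genisoimage
    elif command -v mkisofs >/dev/null 2>&1; then
        tool=mkisofs
    else
        echo "no genisoimage/mkisofs found, skipping" >&2
        return 0
    fi
    # -J (Joliet) and -r (Rock Ridge) keep long filenames intact on Windows
    "$tool" -J -r -quiet -o "$2" "$1"
}

# Example (paths are illustrative):
# make_driver_iso /storage/drivers/gplpv /storage/iso/GPLPV-20130818.iso
```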


Mini-PCIe SSD in MacMini

I’ve been playing with ports/drives/SSDs in MacMinis recently, and I thought I would attempt something that I hadn’t seen anyone else try.

Replace the Mini-PCIe AirPort card in an Intel MacMini with an SSD, the idea being that this would become the boot device in front of the 2 mirrored internal drives.

So I picked up this item off eBay: SuperTalent 32 GB Internal Solid State Drive (FEM32GFDL).

These are primarily meant for a Dell Mini 9, but I hoped the Mac would see it as a regular drive. This MacMini is running 10.6.8 Server.

I tore the MacMini open, installed the card, put everything back together, and booted the system.

Unfortunately, the drive was not recognized; it didn’t show up in Disk Utility. I did look through the device tree with IORegistryExplorer and found what looked like an entry for the card, but apparently there aren’t any drivers for this type of device (essentially a PATA bus/drive presented via the Mini-PCIe interface).

The next 2 stops down the road of MacMini mania are the hack of connecting one of the internal SATA ports out through the case to an eSATA device (likely a Drobo S), and trying the Mini-PCIe card that adds 2 SATA ports (with the same external eSATA connection).




QuickTime Streaming on Lion Server

Apple, in its infinite wisdom, has decided that we no longer need QuickTime Streaming as a function of OS X Server. A client recently purchased a new MacMini Server (Lion Server), and in the process of migrating data and services off some old Xserve G4s we found that QTSS was missing.

There is the Darwin Streaming Server project at Mac OS Forge, so downloading and installing the latest build there (6.0.3) seemed like the proper course of action. One glitch, though: the installer refuses to proceed against any OS install that is OS X Server.

Since those checks are done with scripts, the workaround is to copy the DarwinStreamingServer.pkg installer bundle to your hard drive where it can be edited.

In the script DarwinStreamingServer.pkg/Contents/Resources/VolumeCheck, the first stanza of checks, beginning on line 29 with a check of $INSTALLED_SERVER_OS_VERS, is the culprit.

Comment that entire stanza out with # and the Installer will then let you install DSS on your Lion Server install.
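A quick way to do that from the Terminal; the stanza’s end line (39 here) is an assumption, so check it against your copy of the script before running:

```shell
# comment_stanza FILE FIRST LAST -- prefix every line in FIRST..LAST
# with '#', keeping a .bak copy of the original file.
comment_stanza() {
    sed -i.bak "${2},${3} s/^/#/" "$1"
}

# Example (end line 39 is a guess -- verify against your VolumeCheck):
# comment_stanza DarwinStreamingServer.pkg/Contents/Resources/VolumeCheck 29 39
```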

This might also prove useful:
chmod +ai "qtss allow list,search,readattr,readextattr,readsecurity" QTSSMoviesDir

Differences in Hardware/Software for a Database Server

I previously posted about the migration we performed for a customer for their email server.

This weekend we performed a cutover for another client from an older dedicated MySQL database server to a newer piece of hardware. First, the graph:


And so, where the old system was under an almost continuous load of 1.0, the new system seems almost bored.

Specs for old DB Server

  • Dell PowerEdge 2850
  • 2GB RAM
  • 2 x 3.6GHz Xeon with Hyper-Threading enabled
  • 3 x 146GB 15K RPM SCSI – PERC RAID 5
  • Debian Linux 4 – 2.4.31-bf2.4 SMP – 32-bit kernel
  • MySQL 4.1.11

Specs for new DB Server

  • Dell PowerEdge 2950
  • 8GB RAM
  • 2 x dual-core 3.0GHz Xeon with HT disabled
  • 7 x 146GB 10K RPM 2.5″ SAS drives – PERC RAID 5
  • Debian Linux 6 – 2.6.32-5-amd64 SMP – 64-bit kernel
  • MySQL 5.0.51a (soon to be 5.1)

The client is understandably very happy with the result.


Speaking at LoneStar PHP conference – June 11

Please come to the LoneStar PHP conference on June 11, where I will be giving a talk about setups/workflows for the development and deployment of PHP-based web sites/applications.

Where: Crowne Plaza Suites – Coit/635
When: Saturday, June 11, 2011, 8:00am – 5:30pm
How: A bargain at $60, and lunch is provided.


Simple Checkout of phpMyAdmin with git

When setting up webservers for clients, I’ll usually configure and secure an installation of phpMyAdmin to allow them easy access to the MySQL database server.

I would also want to make sure it was easy to update that installation to the latest stable version. In the past, this was easily done by initially checking out the STABLE tag from their Subversion repository using a command of this style:

svn co

and then when I wanted to update, it was a simple matter of running:

svn update

Well, the phpMyAdmin folks switched over to using git as their SCM system and, unfortunately, they didn’t post any instructions on how a web server administrator who is never going to change or commit code back into the system would perform the same action using git. It took me about an hour of searching, reading, digesting how git works, and cursing, but I finally came up with the following set of git commands to check out the STABLE branch of phpMyAdmin:

git clone --depth=1 git://
cd phpmyadmin
git remote update
git fetch
git checkout --track -b PMASTABLE origin/STABLE

…what that means is: clone the main repository from the specified path, drop into the newly created repository directory, refresh the remote refs so the local repo can see the STABLE branch, and finally create a new local branch called PMASTABLE that tracks the remote branch “origin/STABLE”.

The “--depth=1” parameter tells git not to copy the entire change history, only the most recent set of changes (a shallow clone).

So, from here on, I should be able to merely run: “git pull” on that repository and it should update it to the latest STABLE.

Hopefully, others will find this useful.

UPDATE – Mar 13, 2014
I updated the above commands to show the correct GitHub URL and two more commands to make sure the local repo sees the STABLE branch before trying to create a local branch.