Having built a couple of other Debian servers using software RAID 1, but not recalling exactly how I got it to work, I decided to actually document the results here.
So I needed to build up a system that we could dump really large drives into for some customers so they can do offsite backups. We had started doing this using FireWire drives attached to a G4 running Panther server, but it started to get a bit messy and sometimes FireWire busses can be a bit finicky.
We had a rackmount system that had 8 hot swap IDE bays in it, powered by an old AMD board with 6 PCI slots and it was perfect for what we needed. We had it at the colo for doing backups there, but the RAID card had some issues, so we had pulled it and it was sitting on a shelf for the past year.
I took a SATA PCI card (fake raid, don’t get me started) and mounted two 160GB SATA drives (that we had pulled from two different PowerMac G5s) into one of the internal drive cages. This gave me 2 nice big disks to create my boot system with.
Booting from an RC1 biz-card install of Debian Etch, I got to the Partition Disks section of the install. This is the really tricky part, because if you don't do things in the right order, the partitioner will not be able to set things up correctly and produce something you can actually install onto.
Here is the basic outline of what I ended up with in terms of partitions:
However: not all mounts are created equal.
So, let's start with our two physical disks, sda and sdb.
On each of these I created two actual partitions:
- one small partition at the beginning (around 64MB) that will be used for the boot mount
- the rest of the disk that will be used for everything else
This left me with 4 partitions: sda1, sda2, sdb1, and sdb2. All 4 of these partitions have their type set to Linux software RAID.
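The installer's partitioner does all of this through its menus, but for reference, a roughly equivalent layout done from a shell might look like the sketch below. The device names and the sfdisk invocation are assumptions for illustration, not what the installer literally runs; type `fd` is the MBR code for Linux RAID autodetect.

```shell
# Sketch only: lay out one small ~64MB partition plus one partition
# covering the rest of the disk, both typed for software RAID.
sfdisk /dev/sda <<EOF
,64M,fd
,,fd
EOF

# Copy the exact same partition table onto the second disk,
# so sdb1/sdb2 mirror sda1/sda2.
sfdisk -d /dev/sda | sfdisk /dev/sdb
```

Keeping the two disks partitioned identically is what makes the later RAID 1 pairing straightforward.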
Next, I created 2 new software RAID 1 devices using the corresponding partitions on each disk:
- The first RAID 1 device I formatted as ext3 and designated its mount point as /boot. Nothing more needs to be done with this device.
- The second RAID 1 device I did NOT format, but designated it to be used for the Logical Volume Manager (LVM). All remaining partitions will be created from this device.
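Again, the installer handles this behind its "Configure software RAID" screens; the equivalent mdadm commands would look roughly like this (device names md0/md1 are assumptions):

```shell
# Pair the small partitions into one mirror, the large ones into another.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2

# md0 gets a filesystem right away -- it will be mounted as /boot.
mkfs.ext3 /dev/md0

# md1 is deliberately left unformatted; it becomes the LVM physical volume.
```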
Proceeding into the LVM screens I did the following:
- I created a single Volume Group using the single RAID 1 device I made from the sda2 and sdb2 partitions.
- I then created two Logical Volumes from this one Volume Group: sys (most of the disk) and swap (the last 8GB of space).
- I formatted the sys LV as ext3 and designated it to be the root mount point /
- I designated the swap LV as (surprise) swap
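The LVM screens boil down to something like the following sketch. The Volume Group name (vg0) is an assumption; the installer picks its own names through its dialogs.

```shell
# Turn the unformatted RAID 1 device into an LVM physical volume,
# then build a single Volume Group on top of it.
pvcreate /dev/md1
vgcreate vg0 /dev/md1

# Carve out the two Logical Volumes: 8GB for swap,
# and sys gets everything that's left.
lvcreate -L 8G -n swap vg0
lvcreate -l 100%FREE -n sys vg0

# Format sys as ext3 (the root filesystem) and initialize swap.
mkfs.ext3 /dev/vg0/sys
mkswap /dev/vg0/swap
```

Creating the fixed-size swap LV first and then handing 100%FREE to sys avoids having to do the size arithmetic by hand.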
Once this tree of RAID 1 devices, the Volume Group, and its Logical Volumes was in place, I had no problem installing Debian and continuing on with setting up my big drive box. I will use PureFTPd and netatalk (for reasons I will explain in another article) as the server-side daemons.
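Once the system is up, the whole stack is easy to sanity-check from a shell; these are standard status commands, shown here as a quick reference:

```shell
# Both mirrors should report [UU] -- two healthy members each.
cat /proc/mdstat

# Confirm the LVM layers: one PV (the big RAID device),
# one VG, and the two LVs carved from it.
pvs
vgs
lvs

# /boot should sit on the small mirror, / on the sys LV.
df -h /boot /
```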