LittleFe - A portable cluster for parallel and distributed computing education

Abstract
LittleFe is a complete six-node Beowulf-style portable computational cluster designed as an "educational appliance" for teaching high performance computing (HPC) and computational science. The entire package costs less than $3,000USD, weighs less than 50 pounds, travels easily as checked baggage, and sets up in about 5 minutes. Combined with the Bootable Cluster CD (BCCD) system software, LittleFe provides a ready-to-run platform for parallel and distributed computing education in settings that lack access to HPC resources. This document describes LittleFe's motivation, design, hardware, packaging, and software, and provides parts manifests and assembly instructions.
Keywords: LittleFe, Beowulf cluster, high performance computing, computational science education, Bootable Cluster CD (BCCD), parallel and distributed computing.

Contents

1  LittleFe - Overview
    1.1  Motivation
    1.2  Overall Design
    1.3  Hardware
        1.3.1  Mainboard
        1.3.2  Storage
        1.3.3  Network Fabric
        1.3.4  Power
        1.3.5  Cooling
    1.4  Packaging
        1.4.1  Frame
        1.4.2  Traveling Case
    1.5  Assembly and Testing
    1.6  Software
    1.7  Status
2  LittleFe - Parts Manifests
    2.1  Parts Manifests
3  LittleFe - Assembly Instructions
    3.1  Overview
    3.2  Hardware Assembly
        3.2.1  Frame
        3.2.2  Wiring
        3.2.3  Network
        3.2.4  CPU and Disk Cards
        3.2.5  BIOS Configuration
        3.2.6  Testing
    3.3  Software Installation
        3.3.1  BCCD
        3.3.2  Liberation
        3.3.3  Testing
        3.3.4  Adding Functionality

List of Tables

    2.1  LittleFe v3 Computer Parts Manifest
    2.2  LittleFe v3 Traveling Case Parts Manifest

List of Figures

    1.1  An early version 3 production unit almost ready for deployment.
    1.2  A demonstration of LittleFe at the Oklahoma Supercomputing Symposium in 2006.
    3.1  Rails, card-edge guides, and end plates. Note that only one of the end plates is prepped for the 110/220VAC line input switch and the shorter spacing between the holes in the rails and the end of the rails at the bottom of the picture.
    3.2  Assembly of the first set of rails and card-edge guides. Note the shorter hole spacing to the right, the 110/220VAC regulated power supply mounting bracket, and the larger holes at the bottom of the fourth card-edge guide from the left.
    3.3  Assembly of the second set of rails and card-edge guides. Note the shorter hole spacing to the left and the absence of the 110/220VAC regulated power supply mounting bracket.
    3.4  Card-edge supports and the cross-tie bar mounted to the card-edge guides.
    3.5  End plate showing placement of small and large rubber feet.
    3.6  End plate showing placement of 110/220VAC line input switch.
    3.7  Completed end plate and rail assemblies.
    3.8  110/220VAC regulated power supply mounting.
    3.9  110/220VAC regulated power supply connection block showing ground, neutral, and load lugs (the three on the far right).
    3.10  Routing and mounting the 110/220VAC feed line.
    3.11  Network switch mounted to the plate with the first two network jumpers installed.
    3.12  Routing the network jumpers.
    3.13  Securing the network jumpers.
    3.14  The angle bracket which holds the 12VDC input feed is highlighted with a green laser on the left-hand side of the figure.
    3.15  The on-board ATX power supply is highlighted on the right with a green laser, the 12VDC input connector is on the left in the bracket.
    3.16  The power switch mounted on top of the audio/PS2 block.
    3.17  Overhead view showing the routing of the power switch cables to the header pins located in the upper right-hand corner of the board near the 12VDC input connector.
    3.18  Detail showing the cable management for the Molex connectors on the side of the mainboard.
    3.19  The five compute mainboard cards installed in the frame.
    3.20  The CD/DVD drive, disk drive, and disk card with the O rings mounted on it.
    3.21  The drives mounted on the disk card.
    3.22  Drive and power cabling for the disk drive card.
    3.23  Installing the head node mainboard card and disk card into the frame. Note the orientation of the disk card.

Chapter 1
LittleFe - Overview

LittleFe is a complete multi-node Beowulf-style [Brown, 2008] portable computational cluster designed as an "educational appliance" for substantially reducing the friction associated with teaching high performance computing (HPC) and computational science in a variety of settings. The entire package weighs less than 50 pounds, easily travels via checked baggage, and sets up in 5 minutes. Working with colleagues Paul Gray, Thomas Murphy, and David Joiner, I am jointly responsible for the design and primarily responsible for the engineering and production of LittleFe.
LittleFe's design grew out of our work building stationary clusters and our experience teaching workshops in a variety of places that lacked parallel computational facilities. Once we had some gear and some experience moving it around, we worked through three different approaches before arriving at the system described here. The principal design constraints for LittleFe are:
  • $3,000USD total cost
  • Less than 50lb (including the Pelican travel case)
  • Less than 5 minutes to setup
  • Minimal power consumption; less than 100 Watts peak, 80 Watts average
The current production LittleFe design is composed of the following major components:
  • 6 mainboards (Mini-ITX, 1-2GHz CPU, 512MB-1GB RAM, 100Mb/1Gb ethernet)
  • 6 12VDC-ATX power supplies
  • 1 320 Watt 110VAC-12VDC switching power supply
  • 1 40GB 7200RPM ATA disk drive (2.5" form factor)
  • 1 DVD/CD optical drive (slim-line form factor)
  • 1 8-port 100Mb/1Gb ethernet switch
  • 1 rack assembly
  • 1 Pelican 1610 travel case
  • Fasteners, cabling, and mounting hardware
The $3,000USD cost per unit includes about 10 hours of student labor to assemble and test each unit. This includes liberating the Bootable Cluster CD image onto the disk drive and configuring the users. The mainboards, CPUs, and RAM comprise the bulk of the cost. With all 6 nodes idling, LittleFe draws about 80 Watts of power (about the same as an incandescent light bulb). When running a CPU-intensive molecular dynamics simulation on every node, LittleFe draws about 88 Watts. See Chapter 2, LittleFe - Parts Manifests, for detailed parts manifests, cost estimates, and sources.
Figure 1.1: An early version 3 production unit almost ready for deployment.

1.1  Motivation

One of the principal challenges in computational science and HPC education is that many institutions do not have access to HPC platforms for demonstrations and laboratories. Paul Gray's Bootable Cluster CD (BCCD) project [Gray, 2004] has made great strides in this area by making it possible to non-destructively, and with little effort, convert a computer lab of Windows or Macintosh computers into an ad hoc cluster for educational use.
LittleFe takes that concept one step further by merging the BCCD with an inexpensive design for a 6-8 node portable computational cluster. The result is a machine that weighs less than 50 pounds, easily and safely travels via checked baggage on airlines, and sets up in 5 minutes wherever there is a 110VAC outlet and a wall to project an image on. The BCCD's package management feature supports curriculum modules in a variety of natural science disciplines, making the combination of LittleFe and the BCCD a ready-to-run solution for computational science and HPC education.
LittleFe's principal advantage is resource availability for computational science education. To teach a realistic curriculum in computational science, there must be guaranteed and predictable access to HPC resources. There are currently two common barriers to this access. The first is policy: local HPC resources are typically allocated under a "research first, pedagogy second" prioritization scheme, which often precludes the use of "compute it now" science applications in the classroom. The second is the capital and ongoing maintenance costs associated with owning an HPC resource. This affects most mid-size and smaller educational institutions and is particularly acute in liberal arts environments, community colleges, and K-12 settings.
While relatively low-cost Beowulf-style clusters have improved this situation somewhat, HPC resource ownership is still out of reach for many educational institutions. LittleFe's total cost is less than $3,000USD, making it easily affordable by a wide variety of K-16 schools. This is particularly important for institutions which serve traditionally under-served groups; typically they have access to fewer technology resources than other schools.
LittleFe's second important feature is ease of use, both technically and educationally. Our adoption of the BCCD as the software distribution toolkit makes it possible to smoothly and rapidly advance from bare hardware to science. Further, we have minimized ongoing maintenance since both hardware and software are standardized. Paul Gray, from the University of Northern Iowa, and a number of our student research assistants have successfully maintained the BCCD for many years now via a highly responsive listserv and a well-organized web presence, http://bccd.net.
Portability is useful in a variety of settings: workshops, conferences, outreach events, and the like. It also serves educators well, whether for illustrating principles in the K-12 arena or for passing the unit easily from one college classroom to another.

1.2  Overall Design

The first LittleFe consisted of eight Travla Mini-ITX VIA computers placed in a nearly indestructible Pelican case. To use it, one would take all of the nodes, networking gear, power supplies, and so on out of the case and set them up on a table. Each node was a complete computer with its own hard drive. While this design met the portability, cost, and low-power design goals, it was overweight, and deployment was both time-consuming and error-prone.
Successive versions of LittleFe have moved to a physical architecture where the compute nodes are bare Mini-ITX mainboards mounted in a custom-designed frame, which in turn is housed in a Pelican traveling case. To accomplish this we stripped the Travla nodes down, using only their mainboards, and replaced their relatively large power supplies with daughter-board style units that mount directly to the mainboard's ATX power connector. These changes saved both space and weight. Current LittleFes use diskless compute nodes; only the head node has a disk drive. Removing seven disk drives from the system reduced power consumption considerably and further reduced the weight and packaging complexity.

1.3  Hardware

1.3.1  Mainboard

Basing our design around the Mini-ITX mainboard form factor standard has served us well. Currently there is significant demand in the industry for systems of this size, which yields rapid evolution, a wide range of choices from multiple vendors such as VIA, Intel, and Advanced Micro Devices (AMD), and consequently low price points.
Smaller boards such as the PC-104 form factor, while using less electrical power, lack the computational power to be useful. Larger boards, while offering much more computational power, would be impractical in terms of both the physical packaging and electrical power consumption.
When specifying the mainboard, care should be taken to ensure that the overall height is less than the inter-board spacing in the frame (see the diagrams in Chapter 3, LittleFe - Assembly Instructions).
The head node should have a minimum of 1GB of RAM since it will be a file server for itself and all the compute nodes. Compute nodes should have at least 512MB of RAM.

1.3.2  Storage

All of the persistent storage devices are attached to the head node. Due to packaging and weight constraints, the disk drive must be a 2.5" form factor unit. The disk drive can be ATA or SATA; spindle speed, buffer size, and overall transfer rate are the most important criteria. Since there is only a single disk in each LittleFe, it should be a fast one. For most applications a 60GB disk drive is sufficient.
The speed of the CD/DVD drive is not particularly important, as it is usually only used to load software. The drive must be of the slim-line form factor to fit alongside the disk drive on the frame.

1.3.3  Network Fabric

While 100Mb networking is sufficient for some applications, the availability and cost of Mini-ITX mainboards with 1Gb NICs make them very attractive for most new units.
Any small unmanaged 8-12 port network switch which accepts 9-12VDC input can usually be mounted in the frame. Some LittleFe units sport a small WiFi access point, allowing a group of people to interact with a simulation from their laptops.

1.3.4  Power

110/220VAC line input is brought to a frame-mounted fused switch and then routed to a 320 Watt 12VDC regulated power supply. This minimizes the amount of high-voltage wiring in the system and provides the source for powering the daughter-board ATX power supplies located on each mainboard (120 Watt on the head node, 80 Watt on the compute nodes; see Table 2.1). With a peak draw of about 100 Watts, the primary power supply is generously sized for cool operation and increased reliability and longevity. For the head node, the daughter-board ATX power supply provides Molex connectors for the disk drive and CD/DVD drive.
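As a rough, back-of-the-envelope check on that headroom (using only the figures above): a 100 Watt peak draw against a 320 Watt rating means the primary supply runs at about 100/320 ≈ 31% of its rated capacity, and the 12VDC distribution carries roughly 100 W / 12 V ≈ 8.3 A at peak.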

1.3.5  Cooling

LittleFe's open frame, vertically mounted board design promotes a significant amount of natural cooling. This reduces the need for additional fans, further reducing the power consumption profile and the amount of system noise. LittleFe is quiet enough to use even when sitting in the arrivals lounge at an airport.
The Pelican traveling case, with the lid open and nothing packed around LittleFe, provides enough circulation that the unit can be run without removing it from the case. This is a particularly useful feature when using LittleFe in the field; for example when collecting, analyzing, and visualizing data from an attached water parameter probe.

1.4  Packaging

1.4.1  Frame

The frame is made of .080" smooth plate aluminum with punches for the rail mounts, case mounts, and line-input power switch. The rails and board guides are also made of standard aluminum stock with pre-drilled mounting holes. See Chapter 3, LittleFe - Assembly Instructions, for a diagram of the frame.
The mainboards are mounted on 1/4" AA luan plywood plates using 1/8" nylon standoffs. We explored many other materials for the mainboard plates, particularly aluminum and a variety of plastics and polymers. None could match the strength to weight ratio, cost, or ease of use associated with high quality plywood.

1.4.2  Traveling Case

The system is shock-mounted in a Pelican 1610 traveling case using a two-part rubber cup and plug system. The cups are mounted on the floor and lid of the Pelican case. The plugs are mounted on the top and bottom of the frame's end plates. When you place the frame in the case, the plugs nestle in the cups on the floor. When the lid is closed, its cups encase the plugs on top. This gives the system support and shock resistance in each direction. We have tested this system extensively, both in field trials and by examining the results of approximately 25 commercial flights where LittleFe traveled as checked baggage. While there is occasional distortion of an end plate if an excessive amount of baggage is placed on top of the Pelican case, on the whole the system appears to function adequately.
In terms of overall weight, LittleFe, its traveling case, and any accessories together can be no more than 50lbs. This is the maximum allowed by airlines before heavy baggage surcharges apply. Practically, it is also about the maximum amount that most people can safely maneuver around an airport or school building. One advantage of the Pelican 1610 is that it has built-in wheels and a retractable tow handle.

1.5  Assembly and Testing

Assembling LittleFe consists of the following steps:
  1. Assembling the frame and rails
  2. Mounting the 110/220VAC regulated power supply to the frame
  3. Mounting the network switch and installing the network cabling
  4. Mounting the mainboards to the cards
  5. Mounting the power supplies and switches to the mainboards
  6. Installing the up-link NIC on the head-node
  7. Installing the mainboards in the cage
  8. Cabling the power supplies
  9. Mounting the disk drive and CD/DVD drive to the frame and installing the power and data cables
  10. Plugging in the monitor, keyboard, and mouse
  11. Performing the initial power-up tests
  12. Configuring the BIOS on 5 of the mainboards to boot over the LAN via PXE
Only basic hand tools (screwdrivers, pliers, wire cutters, an adjustable wrench, a drill, and a soldering iron) are needed to fully assemble a unit. Most people budget a full day to do a complete assembly, test, and software installation. With practice, and if nothing goes wrong, a single unit can be assembled in about 4 hours. See Chapter 3, LittleFe - Assembly Instructions, for detailed step-by-step instructions and the URL of a video that illustrates the assembly of a unit.

1.6  Software

Early versions of LittleFe used the Debian Linux distribution as the basis for the system software. This was augmented by a wide variety of system, communication, and computational science packages, each of which had to be installed and configured on each of the nodes. Even with cluster management software such as the C3 tools, this was still a time-consuming process.
One of the primary goals of this project has been to reduce the friction associated with using HPC resources for computational science education. This friction is made up of the time and knowledge required to configure and maintain HPC resources. To this end, LittleFe's system software was re-designed to use Paul Gray's Bootable Cluster CD distribution [Gray, 2004]. The BCCD comes ready-to-run with many of the system and scientific software tools necessary to support a wide range of computational science education.

1.7  Status

Funding from TeraGrid, the SuperComputing Conference, and private sources has enabled my group to put about 15 LittleFe units into production as of this writing. LittleFe units are used in a variety of contexts: undergraduate computer science education at Earlham and other colleges and universities, K-12 science outreach and engagement programs, the SuperComputing Education Program's workshops and conference program, and the Dine'h Grid project of the Navajo Nation in Crownpoint, New Mexico.
LittleFe is very much a work in progress. Over the past three years, my colleagues and I have done extensive work in this area prototyping and testing fundamental design considerations, developing power and cooling solutions within a narrow design envelope, and porting and developing software laboratories for education, outreach, and training. As Moore's "Law" [Moore, 1965] continues to hold true, we reconsider design choices in an effort to make LittleFe smaller, cheaper, more powerful computationally, and lower in power consumption.
For more information about LittleFe see the chapter Results and Future Work.
Figure 1.2: A demonstration of LittleFe at the Oklahoma Supercomputing Symposium in 2006.

Chapter 2
LittleFe - Parts Manifests

2.1  Parts Manifests

The following parts manifests (Tables 2.1 and 2.2) capture the v3 production series of LittleFe, circa 2007.
As with any reference design based on digital technology, the particular details of the mainboards, disks, and networking components must be revisited every 6-12 months. While this creates extra work, the pace of change favors LittleFe in the long run, since it serves to drive performance up and cost down. The basic framework remains constant while the technology it encloses continually evolves.
Table 2.1: LittleFe v3 Computer Parts Manifest
Component      Part Number                       Qty  Each (USD)  Per Unit (USD)  Source
Mainboard      VIA CN10000                        6      173.00        1,038.00   Logic Supply
Memory         DDR2 533 memory 1GB                1      122.00          122.00   Logic Supply
Memory         DDR2 533 memory 512MB              5       64.00          320.00   Logic Supply
Power supply   Pico PSU 120W                      1       49.00           49.00   Logic Supply
Power supply   Pico PSU 80W                       5       39.00          195.00   Logic Supply
Frame          Aluminum ends and rails            1      100.00          100.00   Locally supplied
Switch         D-Link DSS-8+ 10/100 switch        1       17.00           17.00   NewEgg
Power supply   MeanWell SP-320-12                 1       90.00           90.00   PowerGate
Jumpers        1 per motherboard plus 1 uplink    7        2.00           14.00   Locally supplied
Disk drive     Hitachi Travelstar 7K100           1      100.00          100.00   Directron
CD drive       Panasonic CW-8124-B CD/DVD         1       77.00           77.00   Logic Supply
NIC            Low-profile 10/100 PCI card        1       12.50           12.50   Logic Supply
Well nuts      Feet for the frame                 8        1.65           13.20   Ace Hardware
Aluminum       1/2" x 1/2" angle, in feet        12        1.00           12.00   Ace Hardware
Retainers      Hitch pins                         8        0.12            0.96   Ace Hardware
Standoffs      Nylon, mainboards and switch      28        0.12            3.36   Ace Hardware
12V input      Lead, mainboard and switch         7        1.90           13.30   Mouser
110/220VAC     Line input and switch              1       14.00           14.00   Mouser
IDE-IDE        Motherboard to 3.5" IDE cable      1       10.00           10.00   Logic Supply
IDE-LPFF       Motherboard to 2.5" IDE cable      1       10.00           10.00   Logic Supply
Power control  Case front panel switch            6       10.00           60.00   Xoxide.com
Cards          Luan plywood mounting cards        8        0.50            4.00   Locally supplied
Table 2.2: LittleFe v3 Traveling Case Parts Manifest
Component  Part Number    Qty  Each (USD)  Per Unit (USD)  Source
Case       Pelican 1610    1      173.00          173.00   Commonly available
Cups       Case mounting   8        0.20            1.60   Ace Hardware

Chapter 3
LittleFe - Assembly Instructions

3.1  Overview

Assembling LittleFe from a parts kit requires only a basic knowledge of hand tools and computer components. You will need large and small flat-blade and Phillips screwdrivers, pliers, and cable ties to complete the assembly. In addition to these illustrated instructions there is a narrated video of the assembly process available at http://LittleFe.net.

3.2  Hardware Assembly

3.2.1  Frame

The frame is assembled from the rails, card-edge guides, and end plates; see Figure 3.1.
Figure 3.1: Rails, card-edge guides, and end plates. Note that only one of the end plates is prepped for the 110/220VAC line input switch and the shorter spacing between the holes in the rails and the end of the rails at the bottom of the picture.
The rails and card-edge guides are assembled first. Lay out the rails on a table, lining up the holes so that the ends with the shorter hole spacing are together and to the right. Lay the card-edge guides on top of the rails so that the large hole faces away from you; see Figure 3.2.
Figure 3.2: Assembly of the first set of rails and card-edge guides. Note the shorter hole spacing to the right, the 110/220VAC regulated power supply mounting bracket, and the larger holes at the bottom of the fourth card-edge guide from the left.
Using the flat-head screws and nylon lock nuts, mount the card-edge guides to the rails. The bracket which holds the regulated power supply should be placed on the left end of one assembly, with its right-most mounting hole on the fourth card-edge guide from the left. Two of the card-edge guides have larger holes at the bottom; these should be placed fourth from the left (this is where the cross-tie bar goes). Before tightening the screws, be sure the rails are aligned properly; the easiest way to do this is with a carpenter's square.
The second rail and card-edge assembly is a mirror image of the first; that is, the shorter hole spacing is to the left on the table as you assemble it. This ensures that, when combined with the end plates, the card-edge guides face each other; see Figure 3.3.
Figure 3.3: Assembly of the second set of rails and card-edge guides. Note the shorter hole spacing to the left and the absence of the 110/220VAC regulated power supply mounting bracket.
Next, mount a card-edge support to each card-edge guide. These go in the large hole in the bottom of each card-edge guide, with the threaded end facing out, and are secured with nylon lock nuts. The fourth card-edge guide from the left on each rail set is for the cross-tie bar; see Figure 3.4.
Figure 3.4: Card-edge supports and the cross-tie bar mounted to the card-edge guides.
Next the end plates are assembled. This consists of mounting the rubber feet on the top and bottom and, on the end plate prepped for it, mounting the 110/220VAC line input switch. The smaller rubber feet go on the top and the larger feet on the bottom; see Figure 3.5. The line input switch mounts with two screws; make sure that the switch is mounted so that it is accessible from the outside of the frame; see Figure 3.6.
Figure 3.5: End plate showing placement of small and large rubber feet.
Figure 3.6: End plate showing placement of 110/220VAC line input switch.
Now that the rail/card-edge assemblies and end plates are complete, the frame can be assembled. Note that the end plate with the 110/220VAC line input switch should be attached to the end of the rails with the longer hole spacing; see Figure 3.7.
Figure 3.7: Completed end plate and rail assemblies.
The frame is now ready for the installation of the high voltage wiring, low voltage wiring, and 110/220VAC regulated power supply.

3.2.2  Wiring

The 110/220VAC regulated power supply is mounted to the bracket with two screws and lock washers. The end with the connection block should face to the right as you look at the frame from the outside; see Figure 3.8. The 110/220VAC power supply cable should be attached to the line input switch and then to the marked terminal locations on the regulated power supply. Take care to ensure that load (black), neutral (white), and ground (green) are all properly connected; see Figure 3.9. Routing and securing the 110/220VAC feed line can be seen in Figure 3.10.
Figure 3.8: 110/220VAC regulated power supply mounting.
Figure 3.9: 110/220VAC regulated power supply connection block showing ground, neutral, and load lugs (the three on the far right).
Figure 3.10: Routing and mounting the 110/220VAC feed line.

3.2.3  Network

The network switch should be mounted to the card with the indicator lights facing up and the network connections facing down. Before inserting the network card into the frame, install the network jumpers starting with port 1. Note that the cables are numbered: lf0 goes to port 1, lf1 to port 2, and so on; see Figure 3.11.
Figure 3.11: Network switch mounted to the plate with the first two network jumpers installed.
Place the network card in the frame, routing the cables underneath the rails in preparation for securing them, as illustrated in Figure 3.12. Secure the cables to each rail with nylon cable ties, as shown in Figure 3.13. Take care to ensure that the network jumpers each have a smooth path with no sharp bends or kinks.
Figure 3.12: Routing the network jumpers.
Figure 3.13: Securing the network jumpers.

3.2.4  CPU and Disk Cards

The mainboards are mounted to the cards using bolts, 1/8" nylon spacers, and nylon lock nuts. The nuts should go on the card side and the bolt heads on the mainboard side. The corner opposite the power supply connector uses a 1/16" nylon spacer and a flat metal washer in conjunction with the angle bracket; this provides a mount for the 12VDC input connector; see Figure 3.14. Note the stacking order: the nylon spacer should be in contact with the mainboard, then the angle bracket, then the flat metal washer, which is in contact with the card.
Figure 3.14: The angle bracket which holds the 12VDC input feed is highlighted with a green laser on the left-hand side of the figure.
Once the mainboard is mounted to the card, the on-board ATX power supply can be inserted into the mainboard connector and the 12VDC input connector can be mounted to the angle bracket; see Figure 3.15.
Figure 3.15: The on-board ATX power supply is highlighted on the right with a green laser, the 12VDC input connector is on the left in the bracket.
The power switch is mounted with a single screw and nylon lock nut to the top of the audio out/PS2 block on the front of the mainboard. The wires are routed around the heat sink, and then the connectors are placed onto the header pins located on the back of the mainboard near the angle bracket; see Figures 3.16 and 3.17.
Figure 3.16: The power switch mounted on top of the audio/PS2 block.
Figure 3.17: Overhead view showing the routing of the power switch cables to the header pins located in the upper right-hand corner of the board near the 12VDC input connector.
The connectors from the power switch are labeled power switch, HDD LED, reset switch, and so on. See the mainboard manual for the pin-out of the header to which they are connected.
The uplink NIC uses the PCI bus connector on the head node's mainboard. After removing the RJ-45 socket bezel from the card, install the card in the PCI connector. A very small cable tie can be used to keep the card from wiggling free of the connector.
Cable management is done using nylon cable ties. Re-usable cable ties, i.e. ones with a release tab, facilitate easy disassembly and reassembly, which in turn makes it possible to use LittleFe to show students the inner workings of a computational system. The extra Molex connectors are secured along the side of the mainboard with one cable tie on each mounting screw. The wires for the power switch are included in the bundle at the rear of the board; see Figure 3.18.
Figure 3.18: Detail showing the cable management for the Molex connectors on the side of the mainboard.
Once the on-board ATX power supplies and power switches are mounted, the five compute mainboard cards can be installed in the frame; see Figure 3.19.
Figure 3.19: The five compute mainboard cards installed in the frame.
The CD/DVD drive and disk drive are mounted to the drive card using O rings, as illustrated in Figures 3.20 and 3.21. Note the orientation of the drives; this is important for the cabling between the drive card and the head node mainboard card.
Figure 3.20: The CD/DVD drive, disk drive, and disk card with the O rings mounted on it.
Figure 3.21: The drives mounted on the disk card.
Attach the drive cables to the drives and then to the head node mainboard, as illustrated in Figure 3.22.
Figure 3.22: Drive and power cabling for the disk drive card.
Once the drive card has been assembled and cabled to the head node mainboard, those two cards can be installed in the frame, as illustrated in Figure 3.23.
Figure 3.23: Installing the head node mainboard card and disk card into the frame. Note the orientation of the disk card.

3.2.5  BIOS Configuration

The five compute nodes should have their BIOSs configured to boot over the LAN using PXE. The head node BIOS should be set to boot from the IDE disk first and the CD/DVD drive second. For testing and debugging purposes, all of the compute node BIOSs should be set to boot from CD/DVD second.

3.2.6  Testing

A basic test of the system can be performed by booting from an ISO such as the BCCD. While this only tests the head node, it does ensure that the disk subsystem is functioning correctly, and it prepares us for installing the BCCD in the next section. If you have access to a USB CD/DVD drive you can test each of the compute nodes in a similar fashion.

3.3  Software Installation

3.3.1  BCCD

LittleFe runs the Bootable Cluster CD (BCCD) image [Gray, 2004]. The BCCD project is reworking its web presence as of January 2009. Currently there are two sources of information and software for the BCCD: the 2.x codeline is available at http://bccd.net, and what is known as the NG release, or 3.x codeline, is available at http://cluster.earlham.edu/trac/bccd-ng. When the dust settles, the URLs in the following sections will be updated with the correct, stable URLs for each codeline.

3.3.2  Liberation

Liberation is the process of taking the BCCD 3.x live ISO and installing the software image onto the disk drive attached to LittleFe's head node.
Instructions for performing an initial liberation and subsequent updates can be found at http://cluster.earlham.edu/trac/bccd-ng/wiki/InstallInstructions.

3.3.3  Testing

There is a small set of software package tests available for the BCCD 3.x codeline at http://cluster.earlham.edu/trac/bccd-ng/wiki/Tests. Users are encouraged to contribute tests for software packages they add to their BCCD installation; this is easy to do with, for example, Debian's apt environment. A minimal sketch of the simplest useful whole-cluster test appears below.
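As an illustration, the following C program has each MPI process report its rank and the node it is running on. This is a hypothetical sketch, not part of the BCCD test suite; it assumes an MPI implementation and its compiler wrapper (such as mpicc) are available, which the BCCD provides.

    /* hello-nodes.c - minimal MPI connectivity check (hypothetical example).
       Each rank reports which node it landed on, confirming that the
       compute nodes booted and are reachable over the network. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int rank, size, len;
        char name[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id     */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total process count   */
        MPI_Get_processor_name(name, &len);    /* node hostname         */

        printf("Rank %d of %d running on %s\n", rank, size, name);

        MPI_Finalize();
        return 0;
    }

Compiled with mpicc and launched with mpirun across all six nodes, each rank should report a different hostname; a hostname that never appears points at a node that failed to PXE boot or at a network cabling problem.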

3.3.4  Adding Functionality

Since the BCCD 3.x codeline is built on the Debian Linux distribution, it is easy to customize your local installation of the BCCD with Debian's apt environment. More information is available via the man command.
If you would like to build your own live ISO based on the BCCD 3.x codeline with additional functionality or configuration information it's easy to do so. Step-by-step instructions for building from source are available at http://cluster.earlham.edu/trac/bccd-ng/wiki/DevelopmentInstructions.

Bibliography

[Brown, 2008]
Brown, R. G. (2008). Engineering a Beowulf-style compute cluster. Physics Department, Duke University, Durham, NC.
[Gray, 2004]
Gray, P. (2004). The Bootable Cluster CD (BCCD). Web site: http://bccd.cs.uni.edu.
[Moore, 1965]
Moore, G. E. (1965). Cramming more components onto integrated circuits. Electronics, 38(8).


