
Revision as of 19:20, 30 November 2013 by Charliep (Talk | contribs)

Al-Salam is the working name for the Earlham Computer Science Department's upcoming cluster computer.


Installation Notes


I'll be maintaining a script, /root/install/, that will also serve as a log. I'm also following along with the BobSCEd-new logs to keep the two clusters consistent.


Have done

Installed compilers and supporting libraries:

yum install -y \
    gcc.x86_64 gcc-c++.x86_64 gcc-gfortran.x86_64 \
    gcc44.x86_64 gcc44-c++.x86_64 gcc44-gfortran.x86_64 \
    apr.x86_64 apr-devel.x86_64 expat-devel.x86_64 \
    blas.x86_64 dhcp.x86_64
cluster al-salam {
static_routes="bs0 as0"
route_as0=""

        IN  A
        IN  CNAME  as0
        IN  CNAME  as0

acl al-salam {
        ;       // Al-Salam internal network
        ;       // Al-Salam headnode
};

view al-salam {
        match-clients { al-salam; };

        zone "al-salam.loc" {
                type master;
                allow-transfer { none; };
                file "master/al-salam.loc";
        };

        zone "" {
                type master;
                allow-transfer { none; };
                file "master/";
        };

        zone "" {
                type master;
                allow-transfer { servers; };
                file "master/";
        };

        zone "234.28.159.IN-ADDR.ARPA" {
                type master;
                allow-transfer { servers; };
                file "master/";
        };

        zone "." {
                type hint;
                file "master/named.root";
        };
};
150 IN  PTR
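The 234.28.159.IN-ADDR.ARPA zone above is the reverse-lookup zone for the internal network; its name is just the network's first three octets in reverse order. A small sketch of the derivation (assuming the internal network is 159.28.234.0/24, read back from the zone name itself):

```python
# Build the in-addr.arpa reverse-zone name for a /24 network.
# The 159.28.234.0/24 network is an assumption inferred from the
# 234.28.159.IN-ADDR.ARPA zone declared above.
octets = "159.28.234.0".split(".")[:3]     # ["159", "28", "234"]
zone = ".".join(reversed(octets)) + ".IN-ADDR.ARPA"
print(zone)  # 234.28.159.IN-ADDR.ARPA
```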
        subnet netmask {

                option routers              ;
                option subnet-mask          ;
                option domain-name          "al-salam.loc";
                option domain-name-servers  ;

                next-server                 ;
                filename "pxelinux.0";

                host { hardware ethernet 00:30:48:F2:99:DC; fixed-address; }
                host { hardware ethernet 00:30:48:F3:0D:32; fixed-address; }
                host { hardware ethernet 00:30:48:F2:99:DA; fixed-address; }
                host { hardware ethernet 00:30:48:F2:99:CC; fixed-address; }
                host { hardware ethernet 00:30:48:F2:99:C4; fixed-address; }
                host { hardware ethernet 00:30:48:F2:9A:06; fixed-address; }
                host { hardware ethernet 00:30:48:F3:0D:30; fixed-address; }
                host { hardware ethernet 00:30:48:F2:99:D6; fixed-address; }
                host { hardware ethernet 00:30:48:F2:99:C6; fixed-address; }
                host { hardware ethernet 00:30:48:F2:9A:0A; fixed-address; }
                host { hardware ethernet 00:30:48:F2:99:E0; fixed-address; }
                host { hardware ethernet 00:30:48:F2:99:A2; fixed-address; }
        }
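Each host stanza above pins one static lease to a compute-node NIC. A quick sanity check (MACs copied from the list above) that there are twelve distinct addresses with no copy-paste duplicates:

```python
# Sanity-check the static-lease MAC list from dhcpd.conf:
# entries must be unique, one per compute node.
macs = [
    "00:30:48:F2:99:DC", "00:30:48:F3:0D:32", "00:30:48:F2:99:DA",
    "00:30:48:F2:99:CC", "00:30:48:F2:99:C4", "00:30:48:F2:9A:06",
    "00:30:48:F3:0D:30", "00:30:48:F2:99:D6", "00:30:48:F2:99:C6",
    "00:30:48:F2:9A:0A", "00:30:48:F2:99:E0", "00:30:48:F2:99:A2",
]
assert len(macs) == len(set(macs)) == 12, "duplicate or missing MAC"
print(len(macs), "unique MACs")
```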
$ yum install -y dhcp
$ vi /etc/sysconfig/dhcrelay

# Command line options here
INTERFACES="eth0 eth1"   # in our layout both interfaces are required; originally only one was listed here
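For reference, a complete /etc/sysconfig/dhcrelay on this vintage of Red Hat-style systems has two knobs: the interfaces to listen on and the DHCP server(s) to relay to. A sketch; the server value below is a placeholder, not our actual headnode address:

```shell
# /etc/sysconfig/dhcrelay -- sketch; DHCPSERVERS value is a placeholder
INTERFACES="eth0 eth1"
DHCPSERVERS="<headnode-ip>"
```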
 vi exports # add entries (check to make sure they aren't already covered by existing rules)
 vi hosts.allow # add entries
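As a sketch of what those entries look like (the /cluster export path and the daemon list are hypothetical, not our actual rules; the al-salam.loc domain is from the DNS config above):

```
# /etc/exports -- hypothetical entry; path and client spec are placeholders
/cluster    *.al-salam.loc(rw,sync,no_root_squash)

# /etc/hosts.allow -- hypothetical entry
portmap mountd nfsd : .al-salam.loc
```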

Compute nodes

Latest Overarching Questions

Parts List

  1. Nodes - case, motherboard(s), power supply, CPU, RAM, GPGPU cards
  2. Switch - managed, cut-through
    1. Fitz: Having a hard time finding anyone who sells cut-through switches
      1. How about this store-and-forward switch from HP?
  3. Power distribution - rack-mount PDUs

Tentative Specifications



Specialty Nodes

Educationally, we could expect to get significant use out of GPGPUs, but their production use is limited. Increasing the variance of our architecture landscape would be a bonus to education.




Quick breakdown


Quote         | CPU                            | RAM             | GPU           | Local disk | Shared chassis | Remote mgmt       | Size (just nodes) | Price
ION #61116    | 72x 2.4GHz Intel E5530         | 108GB PC3-10600 | 2 Tesla C1060 | Yes        | No             | No                | 9U                | $33,173.20
ION #61164    | 80x 2.4GHz Intel E5530         | 120GB PC3-10600 | 2 Tesla C1060 | Yes        | Yes            | No                | 6U                | $33,054.30
SM #174536    | 80x 2.4GHz Intel E5530         | 120GB DDR3-1333 | 2 Tesla C1060 | Yes        | Yes            | IPMI              | 6U                | $30,078.00
Newegg #1     | 128x 2.4GHz Intel E5530        | 192GB DDR3-1333 | None          | Yes        | No             | No                | 16U               | $32,910.56
Newegg #2     | 112x 2.4GHz Intel E5530        | 168GB DDR3-1333 | 4 Tesla C1060 | Yes        | No             | IPMI on GPU nodes | 12U               | $34,696.78
Intel List #1 | 100x 2.4GHz Intel E5530        | 144GB DDR3-1333 | 2 Tesla C1060 | Yes        | No             | IPMI              | 12U               | $35,846.00
AMD List #1   | 156x 2.0GHz AMD Opteron 2350   | 160GB DDR2-800  | 2 Tesla C1060 | Yes        | No             | IPMI              | 20U               | $35,275.00
AMD List #2   | 126x 2.6GHz AMD Opteron 2435   | 120GB DDR2-800  | 2 Tesla C1060 | Yes        | No             | IPMI              | 10U               | $33,755.00
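One way to compare the quotes is dollars per CPU core. A quick sketch over the core counts and prices from the breakdown above:

```python
# Price per CPU core for each quote in the quick breakdown.
# (cores, total price) pairs copied from the table above.
quotes = {
    "ION #61116":    (72,  33173.20),
    "ION #61164":    (80,  33054.30),
    "SM #174536":    (80,  30078.00),
    "Newegg #1":     (128, 32910.56),
    "Newegg #2":     (112, 34696.78),
    "Intel List #1": (100, 35846.00),
    "AMD List #1":   (156, 35275.00),
    "AMD List #2":   (126, 33755.00),
}
per_core = {name: price / cores for name, (cores, price) in quotes.items()}
for name, dollars in sorted(per_core.items(), key=lambda kv: kv[1]):
    print(f"{name:14s} ${dollars:7.2f}/core")
```

By this metric the Opteron 2350 list is cheapest per core, though the quotes differ in clock speed, RAM per core, and GPU count, so it's only one axis of the comparison.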

Power distribution

Model   | Vendor    | Size | Capabilities | Input power                               | Output power   | Price
PDU1220 | TrippLite | 1U   | Dumb         | 20A, 1x NEMA 5-20P                        | 13x NEMA 5-20R | $195
PDUMH20 | TrippLite | 1U   | Metered      | 20A, 1x NEMA L5-20P w/ NEMA 5-20P adapter | 12x NEMA 5-20R | $230
AP9563  | APC       | 1U   | Dumb         | 20A, 1x NEMA 5-20P                        | 10x NEMA 5-20R | $120
AP7801  | APC       | 1U   | Metered      | 20A, 1x NEMA 5-20P                        | 8x NEMA 5-20R  | $380
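The same per-unit comparison works for the PDUs: dollars per outlet, using the outlet counts and prices above.

```python
# Price per outlet for each PDU candidate.
# (outlets, price) pairs copied from the power-distribution table above.
pdus = {
    "PDU1220": (13, 195.0),
    "PDUMH20": (12, 230.0),
    "AP9563":  (10, 120.0),
    "AP7801":  (8,  380.0),
}
per_outlet = {name: price / outlets for name, (outlets, price) in pdus.items()}
for name, dollars in sorted(per_outlet.items(), key=lambda kv: kv[1]):
    print(f"{name:8s} ${dollars:5.2f}/outlet")
```

The dumb units are far cheaper per outlet; metering carries a large premium, especially on the AP7801.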

ION Computer Systems Quotation #61116

ION Computer Systems Quotation #61164

Silicon Mechanics Quote #174536

Newegg Quote #1

Newegg Quote #2 (the one we purchased?)

Intel List #1

AMD List #1

AMD List #2
