Al-salam

From Earlham Cluster Department

Damascus is the working name for the Earlham Computer Science Department's upcoming cluster computer.
At the moment Damascus exists only as a $40,000 grant and a growing list of tentative specifications:
== Latest Overarching Questions ==
*Should we build this machine ourselves?
*#Are we wasting our money and a learning opportunity by letting the vendor do the building for us?
*#If it is cheaper, would it be a useful experience for the students this coming semester to take a large collection of hardware and make it into a cluster?
*How much GPGPU hardware, if any, do we want? Zero, one, or two nodes' worth?
*Do we want a high-bandwidth/low-latency network?
**We do not; it would cost more than it is worth.
*What software stack do we want to run? Vendor-supplied or the BCCD?
**Both: a vendor-supplied base with a BCCD virtual machine.
*Do compute nodes have spinning disk?
**Compute nodes have a spinning disk.
*What's on the local persistent store? /tmp? An entire OS?

== Tentative Specifications ==

=== Budget ===

* $35,000 (leaving $5,000 for discretionary spending)

=== Nodes ===

* Intel Nehalem processors
* 4-core processors minimum
** Six-core processors are still too expensive
* 1.5GB RAM per core

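As a sanity check on the RAM spec, 1.5GB per core on a dual-socket, quad-core node works out to 12GB per node, which matches the 12GB configurations in the vendor quotes below. A quick sketch of the arithmetic:

```python
# RAM sizing for one node under the spec above.
cores_per_cpu = 4      # quad-core Nehalem (e.g. Xeon E5530)
cpus_per_node = 2      # dual-socket nodes, per the vendor quotes
ram_per_core_gb = 1.5  # the spec above

ram_per_node_gb = cores_per_cpu * cpus_per_node * ram_per_core_gb
print(ram_per_node_gb)  # 12.0, matching the quoted 12GB per node
```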
=== Specialty Nodes ===

* Two nodes should support CUDA GPGPU

Educationally, we could expect to get significant use out of GPGPUs, but production use would be limited. Increasing the variety of the architecture landscape would be a bonus for education.

=== Network ===

* Gigabit Ethernet fabric with switch

=== Disk ===

* Spinning disk

=== OS ===

* Virtual BCCD on top of the vendor-supplied base OS.

== ION Computer Systems Quotation #61116  ==

* 2 ION G10 Server with GPU: $4,972.00 each
** (2) Intel® Quad-Core Xeon® processor E5530 (2.40GHz, 8MB Cache, 5.86GT/s, 80W)
** 12GB RAM [Bank 1 of 2: (6) 2GB ECC PC3-10600 1333MHz 2-rank DDR3 RDIMM modules][Smart]
** Total memory: 12GB DDR3-1333
** Configured with 1 RAID set/array
** Seagate SV35.3 250GB, 7200RPM, SATA 3Gb/s for SDVR, 3.5" disk
** (1) NVIDIA Tesla C1060 with 4GB DDR3
** Dual Intel Gigabit Server NICs with IOAT2, integrated

* 7 ION G10 Server without GPU: $3,697.00 each
** (2) Intel® Quad-Core Xeon® processor E5530 (2.40GHz, 8MB Cache, 5.86GT/s, 80W)
** 12GB RAM [Bank 1 of 2: (6) 2GB ECC PC3-10600 1333MHz 2-rank DDR3 RDIMM modules][Smart]
** Total memory: 12GB DDR3-1333
** Configured with 1 RAID set/array
** Seagate SV35.3 250GB, 7200RPM, SATA 3Gb/s for SDVR, 3.5" disk
** Dual Intel Gigabit Server NICs with IOAT2, integrated

* Networking fabric
** Network not included

* Other stuff
** scorpion: ION bootable USB flash device for troubleshooting
** 3-year Next Business Day onsite repair service by Source Support
** Default load for testing (Service Partition + CentOS 5.3 for Intel64)

* Price tag: $33,173.20

== ION Computer Systems Quotation #61164  ==

* 2 ION G10 Server with GPU: $4,972.00 each
** (2) Intel® Quad-Core Xeon® processor E5530 (2.40GHz, 8MB Cache, 5.86GT/s, 80W)
** 12GB RAM [Bank 1 of 2: (6) 2GB ECC PC3-10600 1333MHz 2-rank DDR3 RDIMM modules][Smart]
** Total memory: 12GB DDR3-1333
** Configured with 1 RAID set/array
** Seagate SV35.3 250GB, 7200RPM, SATA 3Gb/s for SDVR, 3.5" disk
** (1) NVIDIA Tesla C1060 with 4GB DDR3
** Dual Intel Gigabit Server NICs with IOAT2, integrated

* 4 ION T11 DualNode: $6,477.00 each
** (2x2) Intel® Quad-Core Xeon® processor E5530 (2.40GHz, 8MB Cache, 5.86GT/s, 80W)
** Total memory: 12GB DDR3-1333 per node
** Configured as separate disks: no RAID, no redundancy
** Seagate Constellation 160GB, 7200RPM, SATA 3Gb/s NCQ, 2.5" disk
** Dual Intel Gigabit Server NICs with IOAT2, integrated
** These nodes are modular: one can be unplugged and worked on while the others keep running.

* Network
** Network not included

* Other stuff
** scorpion: ION bootable USB flash device for troubleshooting
** 3-year Next Business Day onsite repair service by Source Support
** Default load for testing (Service Partition + CentOS 5.3 for Intel64)

* Price tag: $33,054.30

==Silicon Mechanics Quote #174536==

* 2x Rackform iServ R4410: $11,043.00 each ($10,601.00 each with education) [http://www.siliconmechanics.com/quotes/174536?confirmation=879366549 link]
** Shared Chassis: the following chassis resources are shared by all 4 compute nodes
** External Optical Drive: no item selected
** Power Supply: Shared, Redundant 1400W Power Supply with PMBus - 80 PLUS Gold Certified
** Rail Kit: Quick-Release Rail Kit for Square Holes, 26.5-36.4 inches
** Compute Nodes x4
*** CPU: 2 x Intel Xeon E5530 Quad-Core 2.40GHz, 8MB Cache, 5.86GT/s QPI
*** RAM: 12GB (6 x 2GB) Operating at 1333MHz Max (DDR3-1333 ECC Unbuffered DIMMs)
*** NIC: Intel 82576 Dual-Port Gigabit Ethernet Controller - Integrated
*** Management: Integrated IPMI with KVM over LAN
*** Hot-Swap Drive 1: 250GB Western Digital RE3 (3.0Gb/s, 7.2Krpm, 16MB Cache) SATA

* 2x Rackform iServ R350-GPU: $5,196.00 each ($4,433.00 each with education) [http://www.siliconmechanics.com/quotes/174542?confirmation=712641984 link]
** CPU: 2 x Intel Xeon E5530 Quad-Core 2.40GHz, 8MB Cache, 5.86GT/s QPI
** RAM: 12GB (6 x 2GB) Operating at 1333MHz Max (DDR3-1333 ECC Unbuffered DIMMs)
** NIC: Intel 82576 Dual-Port Gigabit Ethernet Controller - Integrated
** Management: Integrated IPMI 2.0 & KVM with Dedicated LAN
** GPU: 1U System with 1 x Tesla C1060 GPU, Actively Cooled
** LP PCIe x4 2.0 (x16 Slot): no item selected
** Hot-Swap Drive 1: 250GB Seagate Barracuda ES.2 (3Gb/s, 7.2Krpm, 32MB Cache, NCQ) SATA
** Power Supply: 1400W Power Supply with PMBus - 80 PLUS Gold Certified
** Rail Kit: 1U Rail Kit

* Price tag: $32,478 ($30,078 with education)

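A rough comparison of the three quotes is possible from the node counts and price tags above. This is only a sketch: it assumes each ION T11 DualNode counts as two nodes and each R4410 chassis holds four, ignores the network fabric (which none of the quotes include), and uses the education price for Silicon Mechanics.

```python
# Rough cost-per-core comparison of the three quotes above.
# Every node is dual-socket quad-core (2 x Xeon E5530 = 8 cores/node).
CORES_PER_NODE = 2 * 4

quotes = {
    # name: (quoted total price, node count)
    "ION #61116":        (33173.20, 2 + 7),      # 2 GPU + 7 plain G10 servers
    "ION #61164":        (33054.30, 2 + 4 * 2),  # 2 GPU servers + 4 DualNodes (2 nodes each)
    "Silicon Mechanics": (30078.00, 2 * 4 + 2),  # 2 R4410 chassis (4 nodes each) + 2 GPU nodes
}

for name, (price, nodes) in quotes.items():
    cores = nodes * CORES_PER_NODE
    print(f"{name}: {nodes} nodes, {cores} cores, ${price / cores:,.2f}/core")
```

All three totals fit inside the $35,000 hardware budget; the two 80-core quotes differ mainly in price and in how the plain compute nodes are packaged.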
* Questions
** Can we lose the hot-swappability to save money?
** Do we need to get a gigabit switch?
*** Would Cairo do?

Revision as of 18:28, 16 December 2009
