Cluster Information

From Earlham Cluster Department

Revision as of 19:48, 23 January 2010 by Amweeden06 (Talk | contribs)

Current To-Do

Dates indicate the last meeting at which we discussed the item.

 * 2 y-axis scales (runtime and problem size)
 * error bars for left and right y-axes with checkboxes for each
 * eps file for the poster
 * Check new GalaxSee on BobSCEd
 * Update configure.ac based on Fitz's notes from Kraken
 * Status on pople?
 * Remember Big Red has a debug queue
 * GalaxSee -- after the stats changes, GalaxSee slowed down significantly.
   * Double-check on BobSCEd and find the algorithmic problem
 * wiki page
 * Meeting Wednesday @ 11 EST
 * Better not to spend money on next-business-day support; losing a node is not a big deal.
 * BobSCEd already has IB.  Any value to having CUDA and IB in the same nodes?
 * Add MPI binding to matrix
 * Get Fitz's liberation instructions into wiki
 * Get Kevin's VirtualBox instructions into wiki
 * PXE booting -- verify that the nodes booted, that you can ssh to them, and that the run matrix works
 * Test all boot options
 * CUDA LittleFe is now online -- test CUDA boot options
 * Send /etc/bccd-revision with each email
 * Send output of netstat -rn and /sbin/ifconfig -a with each email
 * Run Matrix
 * For the future: scripts to boot and change BIOS settings, a watchdog timer, a 'test' mode in the BCCD, and automatic emails about errors
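The per-email bug-report items above (/etc/bccd-revision, netstat -rn, /sbin/ifconfig -a) can be gathered by a single script. A minimal sketch; the output filename bccd-report.txt is an assumption, and each command is guarded because not every node will have it:

```shell
#!/bin/sh
# Collect the diagnostics the to-do list asks for into one file
# that can be attached to each test email.
OUT=bccd-report.txt
{
  echo "=== /etc/bccd-revision ==="
  cat /etc/bccd-revision 2>/dev/null || echo "(no /etc/bccd-revision on this node)"
  echo "=== netstat -rn ==="
  netstat -rn 2>/dev/null || echo "(netstat unavailable)"
  echo "=== ifconfig -a ==="
  /sbin/ifconfig -a 2>/dev/null || echo "(ifconfig unavailable)"
} > "$OUT"
echo "wrote $OUT"
```

Running it before sending a report leaves everything in one file, so the email body only needs to attach bccd-report.txt.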

Summer of Fun (2009)

An external doc for GalaxSee
Documentation for OpenSim GalaxSee

What's in the database?

np (X-XX) ranges recorded per program and interconnect:

                                     acl0-5   bs0-5 GigE   bs0-5 IB
GalaxSee (MPI)                       2-20     2-48         2-48
area-under-curve (MPI, openmpi)      2-12     2-48         2-48
area-under-curve (Hybrid, openmpi)   2-20     2-48         2-48

What works so far? B = builds, R = runs, W = works

                   area under curve                 GalaxSee (standalone)
                   Serial   MPI   OpenMP   Hybrid   Serial   MPI   OpenMP   Hybrid
acls               BRW      BRW   BRW      BRW      BR
bobsced0           BRW      BRW   BRW      BRW      BR
c13                BR
BigRed             BRW      BRW   BRW      BRW
Sooner             BRW      BRW   BRW      BRW
pople
Charlie's laptop   BR

To Do

Implementations of area under the curve

GalaxSee Goals

GalaxSee - scale to petascale with MPI and OpenMP hybrid.

LittleFe

Notes from May 21, 2009 Review

BobSCEd Upgrade

Build a new image for BobSCEd:

  1. One of the OS versions supported for Gaussian09 on EM64T [v11.1]: Red Hat Enterprise Linux 5.3; SuSE Linux 9.3, 10.3, or 11.1; or SuSE Linux Enterprise 10 (see the G09 platform list). Note: CentOS 5.3 runs the RHEL Gaussian binaries OK.
  2. Firmware update?
  3. C3 tools and configuration [v4.0.1]
  4. Ganglia and configuration [v3.1.2]
  5. PBS and configuration [v2.3.16]
  6. /cluster/bobsced local to bs0
  7. /cluster/... passed-through to compute nodes
  8. Large local scratch space on each node
  9. Gaussian09
  10. WebMO and configuration [v9.1] - Gamess, Gaussian, Mopac, Tinker
  11. Infiniband and configuration
  12. GNU toolchain with OpenMPI and MPICH [GCC v4.4.0], [OpenMPI v1.3.2] [MPICH v1.2.7p1]
  13. Intel toolchain with OpenMPI and native libraries
  14. Sage with do-dads (see Charlie)
  15. Systemimager for the client nodes?

Installed:

Fix the broken nodes.

(Old) To Do

BCCD Liberation

Curriculum Modules

LittleFe

Infrastructure

SC Education

Current Projects

Past Projects

General Stuff

Items Particular to a Specific Cluster

Curriculum Modules

Possible Future Projects

Archive
