OU Supercomputing Center for Education & Research
OSCER Resources


Cluster Supercomputer
Dell Linux cluster
Intel Xeon E5405 ("Harpertown")
sooner.oscer.ou.edu

Cluster Hardware

Performance
  GFLOPs (billions of floating point operations per second)
  34,450.24 GFLOPs theoretical peak
  28,030 GFLOPs sustained
  (High Performance Linpack benchmark for the Top 500 list of the fastest
  supercomputers in the world; debuted at #90 worldwide, #47 in the US,
  #14 among US academic supercomputers, #10 among US academic supercomputers
  excluding national centers, and #2 in the Big 12.)
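A theoretical peak of this kind is essentially nodes x sockets x cores x
clock rate x floating point operations per cycle, summed over the machine's
compute nodes (the exact set of nodes counted in the official figure is not
broken out here). As a minimal sketch in C, the contribution of the 486
general-use compute nodes alone, assuming the usual 4 double-precision
FLOPs per cycle per core for these Xeons:

    #include <stdio.h>

    /* Peak GFLOPs of the 486 general-use compute nodes only; the published
       total also includes the other compute nodes and the "phat" nodes.
       Assumption: 4 double-precision FLOPs/cycle/core (SSE add + multiply). */
    int main(void) {
        const int nodes = 486;
        const int sockets = 2;          /* per node */
        const int cores = 4;            /* per socket */
        const double clock_ghz = 2.0;
        const int flops_per_cycle = 4;  /* assumed */
        printf("%.2f GFLOPs\n",
               nodes * sockets * cores * clock_ghz * flops_per_cycle);
        return 0;
    }

This prints 31104.00 GFLOPs for that subset of the machine.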
Racks
  31 x Dell PowerEdge 4210 42U (info sheet)
Nodes
  Compute Nodes, General Use
    486 x Harpertown E5405, 2.0 GHz, Dell PowerEdge 1950 III (spec sheet)
  Compute Nodes, General Use
    32 x Harpertown E5405, 2.0 GHz, Dell PowerEdge 2900 III (spec sheet)
  Compute Nodes, Owned by Prof. Yang Hong
    8 x Harpertown E5405, 2.0 GHz, Dell PowerEdge 1950 III (spec sheet)
  Compute Nodes, Owned by Zewdu Tessema
    2 x Harpertown E5405, 2.0 GHz, Dell PowerEdge 1950 III (spec sheet)
  Compute Nodes, Owned by Prof. Kurt Marfurt
    3 x Clovertown E5345, 2.33 GHz, Dell PowerEdge 1950 III (spec sheet)
    3 x Harpertown E5430, 2.66 GHz, Dell PowerEdge 1950 III (spec sheet)
  "Phat" Nodes
    2 x Tigerton E7340, 2.4 GHz, Dell PowerEdge R900 (spec sheet)
  Head Nodes (interactive logins)
    2 x Harpertown E5405, 2.0 GHz, Dell PowerEdge 2950 III (spec sheet)
  Grid Service Nodes
    2 x Harpertown E5405, 2.0 GHz, Dell PowerEdge 2950 III (spec sheet)
  Global NFS Home Storage Nodes
    4 x Harpertown E5405, 2.0 GHz, Dell PowerEdge 2950 III (spec sheet)
  Global NFS Slow Scratch Storage Nodes
    4 x Harpertown E5405, 2.0 GHz, Dell PowerEdge 2950 III (spec sheet)
  Management Nodes (scheduler, etc.)
    2 x Harpertown E5405, 2.0 GHz, Dell PowerEdge 2950 III (spec sheet)
  Cluster Hardware Management Node
    1 x Harpertown E5405, 2.0 GHz, Dell PowerEdge 2950 III (spec sheet)
  InfiniBand Subnet Management Nodes
    2 x Harpertown E5405, 2.0 GHz, Dell PowerEdge 2950 III (spec sheet)
  Global NFS Hong Storage Node
    1 x Harpertown E5405, 2.0 GHz, Dell PowerEdge 2950 III (spec sheet)
  Global NFS Tessema Storage Node
    1 x Harpertown E5405, 2.0 GHz, Dell PowerEdge 2950 III (spec sheet)
  Global NFS Marfurt Storage Node
    1 x Clovertown E5345, 2.33 GHz, Dell PowerEdge 2950 III (spec sheet)

GPGPU
  6 x NVIDIA Tesla S1070 servers (24 cards total), 1.296 GHz
CPUs
  Intel Xeon E5405 "Harpertown"
    2.00 GHz
    EM64T
    quad core per chip
    2 x 6 MB L2 cache (data + instructions, each 6 MB shared between 2 cores)
    32 KB L1 instruction cache per core
    32 KB L1 data cache per core
    1333 MHz Front Side Bus
    45 nm
    80 W
    additional specs
    2 sockets per node
    486 compute nodes (1950), 32 compute nodes (2900), and 29 support nodes (2950) owned by OSCER
    8 compute nodes (1950) and 1 support node (2950) owned by Prof. Yang Hong
    2 compute nodes (1950) owned by Zewdu Tessema
  Intel Xeon E5430 "Harpertown"
    2.66 GHz
    EM64T
    quad core per chip
    2 x 6 MB L2 cache (data + instructions, each 6 MB shared between 2 cores)
    32 KB L1 instruction cache per core
    32 KB L1 data cache per core
    1333 MHz Front Side Bus
    45 nm
    80 W
    additional specs
    2 sockets per node
    3 compute nodes (1950) owned by Prof. Kurt Marfurt
  Intel Xeon E5345 "Clovertown"
    2.33 GHz
    EM64T
    quad core per chip
    2 x 4 MB L2 cache (data + instructions, each 4 MB shared between 2 cores)
    32 KB L1 instruction cache per core
    32 KB L1 data cache per core
    1333 MHz Front Side Bus
    65 nm
    80 W
    additional specs
    2 sockets per node
    3 compute nodes (1950) and 1 support node (2950) owned by Prof. Kurt Marfurt
  Intel Xeon MP E7340 "Tigerton"
    2.40 GHz
    EM64T
    quad core per chip
    2 x 4 MB L2 cache (data + instructions, each 4 MB shared between 2 cores)
    32 KB L1 instruction cache per core
    32 KB L1 data cache per core
    1066 MHz Front Side Bus
    65 nm
    80 W
    4 sockets per node
    2 "phat" nodes (R900) owned by OSCER
Chipset
  Intel 5000X: 1 per node in compute nodes and support nodes
  Intel 7300: 1 per node in "phat" nodes
Main Memory
  DDR2-667 (1333 MHz or 1066 MHz Front Side Bus)
  User Accessible (536 nodes):
      486 Compute (OSCER)          x 16 GB each (1333 MHz FSB)
    +  32 Compute (OSCER)          x 16 GB each (1333 MHz FSB)
    +   8 Compute (Hong)           x 16 GB each (1333 MHz FSB)
    +   2 Compute (Tessema)        x 16 GB each (1333 MHz FSB)
    +   3 Compute (Marfurt, E5345) x 16 GB each (1333 MHz FSB)
    +   3 Compute (Marfurt, E5430) x 16 GB each (1333 MHz FSB)
    +   2 "Phat" (OSCER)           x 128 GB each (1066 MHz FSB)
    = 8,800 GB total
  Management (24 nodes plus 4 spares):
      2 Head                            x 16 GB each
    + 2 Grid Service                    x 16 GB each
    + 4 Global NFS Home Storage         x 16 GB each
    + 4 Global NFS Scratch Storage      x 16 GB each
    + 2 Management (scheduler, etc.)    x 16 GB each
    + 1 Cluster Hardware Management     x 16 GB each
    + 2 InfiniBand Subnet Management    x 16 GB each
    + 3 LDAP/license Management         x 16 GB each
    + 1 Backup Service                  x 16 GB each
    + 1 Prof. Yang Hong Filesystem      x 16 GB each
    + 1 Zewdu Tessema Filesystem        x 16 GB each
    + 1 Prof. Kurt Marfurt Filesystem   x 16 GB each
    = 384 GB total
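As a quick sanity check of the user-accessible memory total above (a minimal
sketch in C; node counts taken from the table):

    #include <stdio.h>

    /* 534 nodes at 16 GB each plus 2 "phat" nodes at 128 GB each. */
    int main(void) {
        int standard_nodes = 486 + 32 + 8 + 2 + 3 + 3;
        int phat_nodes = 2;
        printf("%d GB\n", standard_nodes * 16 + phat_nodes * 128);  /* 8800 GB */
        return 0;
    }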
Hard Disk, Global, User Accessible
  Fast Scratch
    Panasas ActiveStor 5000 (data sheet)
    Quantity: 6 shelves
    Attached via 10GigE
    Peak speed 3.3 GB/sec
    SATA 7200 RPM, PanFS, RAID5 plus a hot spare
    120 x 1 TB = 120 TB raw, ~100 TB usable
  Home
    Storage Area Network: Dell/EMC PowerVault AX-150 (spec sheet)
    Quantity: 1, attached to 4 support nodes
    SATA 7200 RPM, NFS, RAID5 plus a hot spare
    12 x 750 GB = 9 TB raw, 5.7 TB usable
  Slow Scratch
    Direct Attached Storage: Dell PowerVault MD-3000 (spec sheet)
    Quantity: 2, each attached to 2 support nodes
    SATA 7200 RPM, NFS, RAID5 plus a hot spare
    15 x 750 GB = 11.25 TB raw each; 7.8 and 10.7 TB usable
  Prof. Yang Hong Storage
    Direct Attached Storage: Dell PowerVault MD-1000 (spec sheet)
    Quantity: 1, attached to 1 support node
    SATA 7200 RPM, NFS, RAID5 plus a hot spare
    15 x 750 GB = 11.25 TB raw, 7.9 TB usable
  Zewdu Tessema Storage
    Direct Attached Storage: Dell PowerVault MD-3000 (spec sheet)
    Quantity: 1, attached to 1 support node
    SATA 7200 RPM, NFS, RAID5 plus a hot spare
    15 x 1 TB = 15 TB raw, 11.8 TB usable
  Prof. Kurt Marfurt Storage
    Direct Attached Storage: Dell PowerVault MD-1000 (spec sheet)
    Quantity: 1, attached to 1 support node
    SATA 7200 RPM, NFS, RAID5 plus a hot spare
    15 x 1 TB = 15 TB raw, 11.9 TB usable
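For the arrays above, "RAID5 plus a hot spare" means one disk's worth of
capacity goes to parity and one whole disk is held in reserve, so usable
space is roughly (disks - 2) x disk size, before filesystem overhead and
the TB (10^12 bytes) vs TiB (2^40 bytes) conversion. A minimal sketch in C:

    #include <stdio.h>

    /* Approximate usable capacity of a RAID5 array with one dedicated hot
       spare: subtract 1 disk for parity and 1 for the spare. Filesystem
       overhead and TB/TiB conversion reduce this further, which is why the
       figures above (e.g., 7.8 TB usable from 15 x 750 GB) come out lower. */
    double raid5_usable_tb(int disks, double disk_tb) {
        return (disks - 2) * disk_tb;
    }

    int main(void) {
        printf("%.2f TB\n", raid5_usable_tb(15, 0.75));  /* 9.75 TB */
        printf("%.2f TB\n", raid5_usable_tb(15, 1.0));   /* 13.00 TB */
        return 0;
    }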
Hard Disk, Global, Management
  Virtualization Management
    Storage Area Network: Dell/EMC PowerVault AX4-5F (spec sheet)
    Quantity: 1, attached to ? support nodes
    SATA 7200 RPM, NFS, RAID5 plus a hot spare
    12 x 750 GB = 9 TB raw, ? TB usable
  Backup System Disk Cache
    Direct Attached Storage: Dell PowerVault MD-3000 (spec sheet)
    Quantity: 1, attached to 2 support nodes
    SATA 7200 RPM, NFS, RAID5 plus a hot spare
    15 x 750 GB = 11.25 TB raw, ? TB usable
Hard Disk, Local, User Accessible
  Compute, owned by OSCER (518)
  Compute, owned by Prof. Yang Hong (8)
  Compute, owned by Zewdu Tessema (2)
    OS & local scratch
    SATA 7200 RPM 3.5"
    Each node: 1 x 250 GB raw, ~180 GB usable
    Total: 128.9 TB raw, 92.8 TB usable
  Compute, owned by Prof. Kurt Marfurt (3)
    OS & local scratch
    SATA 7200 RPM ?"
    Each node: 1 x ? GB raw, ? GB usable
    Total: ? GB raw, ? GB usable
  Phat, owned by OSCER (2)
    OS & local scratch
    SAS 15,000 RPM 3.5"
    Each node: 3 x 146 GB, RAID5 plus a hot spare, ? GB usable
    Total: 876 GB raw, ? GB usable
Hard Disk, Local, Management
  Head (2)
  Global NFS Home Storage (4)
  Global NFS Slow Scratch Storage (4)
  Global NFS Prof. Yang Hong Storage (1)
  Management (2)
  Cluster Hardware Management (1)
  InfiniBand Subnet Management (2)
  LDAP/license (2)
  Backup (2)
  Spare (3)
    OS
    SAS 15,000 RPM 3.5"
    Each node: 3 x 146 GB, ? GB usable
    Total: 10,950 GB raw, ? GB usable
  Global NFS Prof. Kurt Marfurt Storage (1)
    OS
    SAS 15,000 RPM 3.5"
    Each node: ? x ? GB, ? GB usable
    Total: 292 GB raw, 124 GB usable
  Grid Service (2)
  Spare (1)
    OS
    SATA 7200 RPM 3.5"
    Each node: 6 x 750 GB, ? GB usable
    Total: 18,000 GB raw, ? GB usable
Interconnects
  High Performance: InfiniBand
    1 x QLogic 9240 fabric director switch (data sheet)
    37 x QLogic 9024 edge switches (data sheet)
    567 x QLogic QLE7240 HCA cards (data sheet)
    Cables: copper within each 9024, fiber from the 9024s to the 9240
  I/O: Gigabit Ethernet
    Force10 Networks E1200i (spec sheet)
    GigE, 2:1 oversubscribed, 624 ports
    10GigE, 4:1 oversubscribed, 16 ports
  Management: 100 Mbps Ethernet
    1 x Dell PowerConnect 5448 (GigE, 48 ports) (spec sheet)
    13 x Dell PowerConnect 3548 (100 Mbps, 48 ports) (spec sheet)

Cluster Software

Operating System: Red Hat Enterprise Linux 5.0 (kernel 2.6.18)
Message Passing: MPICH2, MVAPICH, OpenMPI
Scheduler: Platform LSF HPC
Compilers:
  GNU: gcc, g++, gfortran
  Intel: ifort (Fortran 77/90), icc (C), icpc (C++)
  NAG: f95 (Fortran 90/95; front end to gcc)
  Portland Group: pgf90, pgf77, pghpf, pgcc, pgCC
Numerical Libraries: BLAS, LAPACK, ScaLAPACK, Intel Math Kernel Library, IMSL
Parallel Debugger: TotalView
Code Analyzers: LintPlus for C, Fortran Lint
Other Software: purchases to be announced
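As a usage illustration of the stack above, a typical workflow compiles an
MPI code with a wrapper compiler and submits it through LSF, for example
"mpicc hello.c -o hello" followed by "bsub -n 16 mpirun ./hello". The
wrapper name, bsub options, and mpirun invocation here are generic
illustrations, not OSCER's documented procedure; the exact form depends on
which MPI (MPICH2, MVAPICH, or OpenMPI) and which LSF queue are configured
locally. A minimal MPI example in C:

    #include <stdio.h>
    #include <mpi.h>

    /* Each rank reports its rank number and the total number of ranks. */
    int main(int argc, char **argv) {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        printf("Hello from rank %d of %d\n", rank, size);
        MPI_Finalize();
        return 0;
    }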


This resource and its availability are subject to change without notice (although notice will be given if at all possible).


Copyright (C) 2005-2010 University of Oklahoma