OSCER Resources

Cluster Hardware

Performance:
    34,450.24 GFLOPs theoretical peak
    28,030 GFLOPs sustained on the High Performance Linpack benchmark
    (the benchmark used for the Top 500 list of the fastest supercomputers
    in the world). Top 500 debut: #90 worldwide, #47 in the US, #14 among
    US academic supercomputers, #10 among US academic supercomputers
    excluding national centers, #2 in the Big 12.
    (GFLOPs = billions of floating point operations per second)
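As an illustration of where a theoretical peak figure comes from (a sketch, not the exact accounting behind the published number): each core of the Xeons listed below can retire up to 4 double-precision floating point operations per cycle via SSE, so a dual-socket, quad-core, 2.00 GHz E5405 node peaks at 8 x 2.00 x 4 = 64 GFLOPs.

```python
# Theoretical peak arithmetic: cores x clock (GHz) x flops/cycle/core.
# The 4 flops/cycle/core figure (SSE: two 2-wide double-precision units)
# applies to the Harpertown/Clovertown/Tigerton Xeons in this cluster.
def peak_gflops(cores, ghz, flops_per_cycle=4):
    """Theoretical peak performance in GFLOPs."""
    return cores * ghz * flops_per_cycle

# One dual-socket quad-core E5405 compute node at 2.00 GHz:
print(peak_gflops(cores=2 * 4, ghz=2.00))  # 64.0 GFLOPs
```

Summing this per-node figure over all the node types in the CPUs section lands close to the quoted theoretical peak.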
Racks:
    31 x Dell PowerEdge 4210 42U (info sheet)

Nodes:
    GPGPU: NVIDIA Tesla S1070 servers, 1.296 GHz
        6 servers (24 cards)
    CPUs:
        Intel Xeon E5405 "Harpertown", 2.00 GHz, EM64T
            quad core per chip, 2 sockets per node
            2 x 6 MB L2 cache (data + instructions, each 6 MB shared
            between 2 cores)
            32 KB L1 instruction cache per core
            32 KB L1 data cache per core
            1333 MHz Front Side Bus, 45 nm, 80 W
            (additional specs)
            486 compute nodes (1950), 32 compute nodes (2900) and
            29 support nodes (2950) owned by OSCER
            8 compute nodes (1950) and 1 support node (2950) owned by
            Prof. Yang Hong
            2 compute nodes (1950) owned by Zewdu Tessema
        Intel Xeon E5430 "Harpertown", 2.66 GHz, EM64T
            quad core per chip, 2 sockets per node
            2 x 6 MB L2 cache (data + instructions, each 6 MB shared
            between 2 cores)
            32 KB L1 instruction cache per core
            32 KB L1 data cache per core
            1333 MHz Front Side Bus, 45 nm, 80 W
            (additional specs)
            3 compute nodes (1950) owned by Prof. Kurt Marfurt
        Intel Xeon E5345 "Clovertown", 2.33 GHz, EM64T
            quad core per chip, 2 sockets per node
            2 x 4 MB L2 cache (data + instructions, each 4 MB shared
            between 2 cores)
            32 KB L1 instruction cache per core
            32 KB L1 data cache per core
            1333 MHz Front Side Bus, 65 nm, 80 W
            (additional specs)
            3 compute nodes (1950) and 1 support node (2950) owned by
            Prof. Kurt Marfurt
        Intel Xeon MP E7340 "Tigerton", 2.40 GHz, EM64T
            quad core per chip, 4 sockets per node
            2 x 4 MB L2 cache (data + instructions, each 4 MB shared
            between 2 cores)
            32 KB L1 instruction cache per core
            32 KB L1 data cache per core
            1066 MHz Front Side Bus, 65 nm, 80 W
            2 fat nodes (R900) owned by OSCER
Chipset:

Main Memory:
    DDR2 667 MHz
    1333 MHz or 1066 MHz Front Side Bus
    User Accessible (536 nodes):
          486 Compute (OSCER)   x  16 GB each (1333 MHz FSB)
        +  32 Compute (OSCER)   x  16 GB each (1333 MHz FSB)
        +   8 Compute (Hong)    x  16 GB each (1333 MHz FSB)
        +   2 Compute (Tessema) x  16 GB each (1333 MHz FSB)
        +   3 Compute (Marfurt) x  16 GB each (1333 MHz FSB)
        +   3 Compute (Marfurt) x  16 GB each (1333 MHz FSB)
        +   2 "Phat" (OSCER)    x 128 GB each (1066 MHz FSB)
        = 8800 GB total
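The user-accessible memory total follows directly from the node counts above; a quick check:

```python
# Verify the user-accessible main memory total from the node counts above.
nodes_16gb = 486 + 32 + 8 + 2 + 3 + 3   # compute nodes with 16 GB each
nodes_128gb = 2                          # "Phat" nodes with 128 GB each
total_gb = nodes_16gb * 16 + nodes_128gb * 128
print(nodes_16gb + nodes_128gb, "nodes,", total_gb, "GB")  # 536 nodes, 8800 GB
```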
    Management (24 nodes plus 4 spares):
          2 Head                          x 16 GB each
        + 2 Grid Service                  x 16 GB each
        + 4 Global NFS Home Storage       x 16 GB each
        + 4 Global NFS Scratch Storage    x 16 GB each
        + 2 Management (scheduler etc)    x 16 GB each
        + 1 Cluster Hardware Management   x 16 GB each
        + 2 Infiniband Subnet Management  x 16 GB each
        + 3 LDAP/license Management       x 16 GB each
        + 1 Backup Service                x 16 GB each
        + 1 Prof. Yang Hong Filesystem    x 16 GB each
        + 1 Zewdu Tessema Filesystem     x 16 GB each
        + 1 Prof. Kurt Marfurt Filesystem x 16 GB each
        = 384 GB total
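Summing the itemized management node counts (the 4 spares are not itemized here) gives 24 nodes at 16 GB each, i.e. 384 GB:

```python
# Verify the management memory total from the itemized node counts above.
counts = {
    "Head": 2, "Grid Service": 2,
    "Global NFS Home Storage": 4, "Global NFS Scratch Storage": 4,
    "Management (scheduler etc)": 2, "Cluster Hardware Management": 1,
    "Infiniband Subnet Management": 2, "LDAP/license Management": 3,
    "Backup Service": 1, "Prof. Yang Hong Filesystem": 1,
    "Zewdu Tessema Filesystem": 1, "Prof. Kurt Marfurt Filesystem": 1,
}
nodes = sum(counts.values())
print(nodes, "nodes,", nodes * 16, "GB")  # 24 nodes, 384 GB
```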
Hard Disk, Global, User Accessible:

Hard Disk, Global, Management:

Hard Disk, Local, User Accessible:
    Compute, owned by OSCER (518)
    Compute, owned by Prof. Yang Hong (8)
    Compute, owned by Zewdu Tessema (2)
        OS & local scratch: SATA, 7200 RPM, 3.5"
        Each node: 1 x 250 GB raw, ~180 GB usable
        Total: 128.9 TB raw, 92.8 TB usable
    Compute, owned by Prof. Kurt Marfurt (3)
        OS & local scratch: SATA, 7200 RPM, ?"
        Each node: 1 x ? GB raw, ? GB usable
        Total: ? GB raw, ? GB usable
    Phat, owned by OSCER (2)
        OS & local scratch: SAS, 15,000 RPM, 3.5"
        Each node: 3 x 146 GB, RAID5 plus a hot spare, ? GB usable
        Total: 876 GB raw, ? GB usable
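The Phat-node raw figure follows from the per-node layout (the hot spare is not counted toward the raw total):

```python
# Raw local disk on the two "Phat" nodes: 3 active 146 GB SAS disks each.
raw_gb = 2 * 3 * 146
print(raw_gb)  # 876 GB
```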
Hard Disk, Local, Management:
    Head (2)
    Global NFS Home Storage (4)
    Global NFS Slow Scratch Storage (4)
    Global NFS Prof. Yang Hong Storage (1)
    Management (2)
    Cluster Hardware Management (1)
    Infiniband Subnet Management (2)
    LDAP/license (2)
    Backup (2)
    Spare (3)
        OS: SAS, 15,000 RPM, 3.5"
        Each node: 3 x 146 GB, ? GB usable
        Total: 10,950 GB raw, ? GB usable
    Global NFS Prof. Kurt Marfurt Storage (1)
        OS: SAS, 15,000 RPM, 3.5"
        Each node: ? x ? GB, ? GB usable
        Total: 292 GB raw, 124 GB usable
    Grid Service (2)
    Spare (1)
        OS: SATA, 7200 RPM, 3.5"
        Each node: 6 x 750 GB, ? GB usable
        Total: 18,000 GB raw, ? GB usable
Interconnects:

Cluster Software

Operating System:
    Red Hat Enterprise Linux 5.0 (kernel 2.6.18)

Message Passing:
    MPICH2, MVAPICH, OpenMPI

Scheduler:
    Platform LSF HPC
Compilers:
    GNU: gcc, g++, gfortran
    Intel: ifort (Fortran 77/90), icc (C), icpc (C++)
    NAG: f95 (Fortran 90/95; front end to gcc)
    Portland Group: pgf90, pgf77, pghpf, pgcc, pgCC
Numerical Libraries:
    BLAS, LAPACK, ScaLAPACK
    Intel Math Kernel Library
    IMSL

Parallel Debugger:
    TotalView

Code Analyzer:
    LintPlus for C, Fortran Lint
Other software purchases to be announced.

This resource and its availability are subject to change without notice
(although notice will be given if at all possible).