OU Supercomputing Center for Education & Research

OSCER Resources


Cluster Supercomputer
Dell Xeon E5-2650 ("Sandy Bridge") Linux Cluster
boomer.oscer.ou.edu

Cluster Hardware

Performance: 109,072.64 GFLOPs theoretical peak, excluding accelerators
(GFLOPs = billions of floating point operations per second; a worked breakdown follows the node list below)

Racks: 21 x Dell PowerEdge 4220 42U
Nodes

Compute Nodes, General Use: quantity 334
Sandy Bridge E5-2650, oct core, 2.0 GHz; 32 GB RAM; 1 x 250 GB SATA 2.5"; Dell PowerEdge R620 (spec sheet)

Accelerator Nodes, General Use: quantity 18
Sandy Bridge E5-2650, oct core, 2.0 GHz; 32 GB RAM; 1 x 500 GB SATA 3.5"; Dell PowerEdge R720 (spec sheet)
3 of these nodes each have dual NVIDIA Tesla Fermi M2075 GPU cards

Compute Nodes, owned by Prof. Uli Hansmann: quantity 32
Sandy Bridge E5-2650, oct core, 2.0 GHz; 32 GB RAM; 1 x 250 GB SATA 2.5"; Dell PowerEdge R620 (spec sheet)

Compute Nodes, owned by Dr. Lou Wicker: quantity 16
Sandy Bridge E5-2650, oct core, 2.0 GHz; 64 GB RAM; 1 x 250 GB SATA 2.5"; Dell PowerEdge R620 (spec sheet)

Compute Nodes, owned by Prof. Barbara Capogrosso Sansone (NOTE: NO INFINIBAND): quantity 11
Sandy Bridge E5-2650, oct core, 2.0 GHz; 32 GB RAM; 1 x 250 GB SATA 2.5"; Dell PowerEdge R620 (spec sheet)

Compute Nodes, owned by Prof. Kurt Marfurt: quantity 9
Sandy Bridge E5-2650, oct core, 2.0 GHz; 32 GB RAM; 1 x 250 GB SATA 2.5"; Dell PowerEdge R620 (spec sheet)

Compute Nodes, owned by Prof. Berrien Moore & Sean Crowell: quantity 2
Sandy Bridge E5-2650, oct core, 2.0 GHz; 64 GB RAM; 1 x 250 GB SATA 2.5"; Dell PowerEdge R620 (spec sheet)

Compute Nodes, owned by Prof. Peter Lamb: quantity 2
Sandy Bridge E5-2650, oct core, 2.0 GHz; 32 GB RAM; 1 x 250 GB SATA 2.5"; Dell PowerEdge R620 (spec sheet)

Compute Node, owned by Prof. Dee Wu: quantity 1
Sandy Bridge E5-2650, oct core, 2.0 GHz; 64 GB RAM; 1 x 250 GB SATA 2.5"; Dell PowerEdge R620 (spec sheet)

Fat Node: quantity 1
Westmere E7-4830, oct core, 2.13 GHz; 1 TB RAM; 2 x 300 GB SAS 6 Gbps 2.5"; Dell PowerEdge R910 (spec sheet)

Login Nodes (editing, compiling, etc.): quantity 3
Sandy Bridge E5-2650, oct core, 2.0 GHz; 32 GB RAM; 1 x 250 GB SATA 2.5"; Dell PowerEdge R620 (spec sheet)

Grid Service Node: quantity 1
Sandy Bridge E5-2650, oct core, 2.0 GHz; 64 GB RAM; 12 x 3 TB Near-line SAS 6 Gbps 3.5"; Dell PowerEdge R720xd (spec sheet)

Global NFS Home/Slow Scratch Diskfull Nodes: quantity 6
Sandy Bridge E5-2620, hex core, 2.0 GHz; 64 GB RAM; 12 x 3 TB Near-line SAS 6 Gbps 3.5"; Dell PowerEdge R720xd (spec sheet)

Virtualization Nodes (scheduler, etc.): quantity 3
Sandy Bridge E5-2620, hex core, 2.0 GHz; 64 GB RAM; 2 x 600 GB SAS 6 Gbps 2.5"; Dell PowerEdge R720 (spec sheet)

Web/SMTP Node (web server, e-mail, etc.): quantity 1
Sandy Bridge E5-2620, hex core, 2.0 GHz; 64 GB RAM; 2 x 600 GB SAS 6 Gbps 2.5"; Dell PowerEdge R720 (spec sheet)

Administration Node: quantity 1
Sandy Bridge E5-2620, hex core, 2.0 GHz; 64 GB RAM; 2 x 600 GB SAS 6 Gbps 2.5"; Dell PowerEdge R720 (spec sheet)

Backup Non-diskfull Node (disk backups to tape): quantity 1
Sandy Bridge E5-2620, hex core, 2.0 GHz; 64 GB RAM; 2 x 600 GB SAS 6 Gbps 2.5"; Dell PowerEdge R720 (spec sheet)

Backup Diskfull Node (disk backups to tape): quantity 1
Sandy Bridge E5-2620, hex core, 2.0 GHz; 64 GB RAM; 12 x 3 TB Near-line SAS 6 Gbps 3.5"; Dell PowerEdge R720xd (spec sheet)

Satellite Node (provisioning): quantity 1
Sandy Bridge E5-2620, hex core, 2.0 GHz; 64 GB RAM; 12 x 3 TB Near-line SAS 6 Gbps 3.5"; Dell PowerEdge R720xd (spec sheet)

Spare Diskfull Nodes: quantity 2
Sandy Bridge E5-2650, oct core, 2.0 GHz; 64 GB RAM; 12 x 3 TB Near-line SAS 6 Gbps 3.5"; Dell PowerEdge R720xd (spec sheet)

InfiniBand Subnet Management Nodes: quantity 2
Sandy Bridge E5-2620, hex core, 2.0 GHz; 32 GB RAM; 1 x 250 GB SATA 2.5"; Dell PowerEdge R620 (spec sheet)

Data Handler Nodes: quantity 2
Sandy Bridge E5-2620, hex core, 2.0 GHz; 32 GB RAM; 1 x 250 GB SATA 2.5"; Dell PowerEdge R620 (spec sheet)

Boomer Archive Nodes: quantity 2
Sandy Bridge E5-2620, hex core, 2.0 GHz; 32 GB RAM; 1 x 250 GB SATA 2.5"; Dell PowerEdge R620 (spec sheet)

Archive SFTP Nodes: quantity 2
Sandy Bridge E5-2620, hex core, 2.0 GHz; 32 GB RAM; 1 x 250 GB SATA 2.5"; Dell PowerEdge R620 (spec sheet)

Global NFS Hansmann Diskfull Node: quantity 1
Sandy Bridge E5-2620, hex core, 2.0 GHz; 64 GB RAM; 12 x 3 TB Near-line SAS 6 Gbps 3.5"; Dell PowerEdge R720xd (spec sheet)

Global NFS Marfurt Diskfull Node: quantity 1
Sandy Bridge E5-2620, hex core, 2.0 GHz; 64 GB RAM; 12 x 3 TB Near-line SAS 6 Gbps 3.5"; Dell PowerEdge R720xd (spec sheet)

Global NFS Lamb Diskfull Node: quantity 1
Sandy Bridge E5-2620, hex core, 2.0 GHz; 64 GB RAM; 12 x 3 TB Near-line SAS 6 Gbps 3.5"; Dell PowerEdge R720xd (spec sheet)

Global NFS Wicker Diskfull Node: quantity 1
Sandy Bridge E5-2620, hex core, 2.0 GHz; 64 GB RAM; 12 x 3 TB Near-line SAS 6 Gbps 3.5"; Dell PowerEdge R720xd (spec sheet)
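
The peak performance figure quoted above is consistent with the node counts and CPU specifications on this page, assuming the usual 8 double-precision floating point operations per clock cycle for each Sandy Bridge core (AVX) and 4 for each Westmere core (SSE):

  425 dual-socket E5-2650 nodes (334 + 18 + 32 + 16 + 11 + 9 + 2 + 2 + 1) x 16 cores x 2.0 GHz x 8 FLOPs/cycle = 108,800 GFLOPs
+   1 quad-socket E7-4830 fat node x 32 cores x 2.13 GHz x 4 FLOPs/cycle = 272.64 GFLOPs
= 109,072.64 GFLOPs theoretical peak (GPU accelerators excluded)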
CPUs
Xeon E5-2650 "Sandy Bridge"
2.00 GHz
oct core per chip
20 MB L3 cache
(data + instructions, shared among all cores)
8 x 256 KB L2 cache
(data + instructions)
32 KB L1 instruction cache per core
32 KB L1 data cache per core
8.0 GT/sec QPI
32 nm
95 W
2 sockets per node
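(That is, 2 x 8 = 16 cores and 2 x 20 MB = 40 MB of L3 cache per compute node.)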
 
Owned by OSCER:
334 compute nodes (R620)
18 GPU nodes (R720)
3 login nodes (R620)
1 grid node (R720xd)
2 spare diskfull nodes (R720xd)
 
Owned by Prof. Uli Hansmann:
32 compute nodes (R620)
 
Owned by Dr. Lou Wicker:
16 compute nodes (R620)
 
Owned by Prof. Barbara Capogrosso Sansone:
11 compute nodes (R620)
 
Owned by Prof. Kurt Marfurt:
9 compute nodes (R620)
 
Owned by Prof. Peter Lamb:
2 compute nodes (R620)
 
Owned by Prof. Berrien Moore and Sean Crowell:
2 compute nodes (R620)
 
Owned by Prof. Dee Wu:
1 compute node (R620)
Xeon E5-2620 "Sandy Bridge"
2.00 GHz
hex core per chip
15 MB L3 cache
(data + instructions, shared among all cores)
6 x 256 KB L2 cache
(data + instructions)
32 KB L1 instruction cache per core
32 KB L1 data cache per core
7.2 GT/sec QPI
32 nm
95 W
2 sockets per node
 
Owned by OSCER:
2 Infiniband subnet manager nodes (R620)
2 data handler nodes (R620)
2 Boomer archive nodes (R620)
2 archive sftp nodes (R620)
3 virtualization nodes (R720)
1 web/smtp node (R720)
1 administration node (R720)
1 backup non-diskfull node (R720)
6 home/scratch nodes (R720xd)
1 satellite node (R720xd)
1 backup diskfull node (R720xd)
 
Owned by Prof. Uli Hansmann:
1 diskfull node (R720xd)
 
Owned by Dr. Lou Wicker:
1 diskfull node (R720xd)
 
Owned by Prof. Kurt Marfurt:
1 diskfull node (R720xd)
 
Owned by Prof. Peter Lamb:
1 diskfull node (R720xd)
Xeon E7-4830 "Westmere"
2.13 GHz
oct core per chip
24 MB L3 cache
(data + instructions, shared among all cores)
8 x 256 KB L2 cache
(data + instructions)
32 KB L1 instruction cache per core
32 KB L1 data cache per core
4 x 6.4 GT/sec QPI
32 nm
105 W
additional specs
4 sockets per node
 
1 fat node (R910) owned by OSCER
Chipset
Intel C600 series: Compute Nodes, GPU Nodes, Support Nodes, Diskfull Nodes (1 per node)
Intel 7510: Fat Node (1 per node)
Main Memory
DDR3 667 MHz; 1333 MHz or 1066 MHz Front Side Bus
User Accessible
(536 nodes)
  486 Compute (OSCER) x 16 GB each (1333)
+ 32 Compute (OSCER) x 16 GB each (1333)
+ 8 Compute (Hong) x 16 GB each (1333)
+ 2 Compute (Lamb) x 16 GB each (1333)
+ 3 Compute (Marfurt) x 16 GB each (1333)
+ 3 Compute (Marfurt) x 16 GB each (1333)
+ 2 Fat (OSCER) x 128 GB each (1066)
= 8800 GB total
Management (24 nodes plus 4 spares)
  2 Head x 16 GB each
+ 2 Grid Service x 16 GB each
+ 4 Global NFS Home Storage x 16 GB each
+ 4 Global NFS Scratch Storage x 16 GB each
+ 2 Management (scheduler etc.) x 16 GB each
+ 1 Cluster Hardware Management x 16 GB each
+ 2 Infiniband Subnet Management x 16 GB each
+ 3 LDAP/license Management x 16 GB each
+ 1 Backup Service x 16 GB each
+ 1 Prof. Yang Hong Filesystem x 16 GB each
+ 1 Peter Lamb Filesystem x 16 GB each
+ 1 Prof. Kurt Marfurt Filesystem x 16 GB each
= 384 GB total
Hard Disk, Global, User Accessible

Fast Scratch / Home
Panasas ActiveStor 11 (data sheet); quantity 6 shelves; attached via 10GigE; peak speed 3.3 GB/sec
SATA 7200 RPM; PanFS; RAID5 plus a hot spare
120 x 2 TB = 240 TB raw, ~180 TB usable

Slow Scratch, Diskfull Nodes
Dell PowerEdge R720xd (spec sheet); quantity 6
Near-line SAS 6 Gbps 7200 RPM; NFS
RAID1 for OS: 2 x 2 TB = ~1.8 TB usable
RAID6 for file storage: 10 x 2 TB = ~7 TB usable each

Slow Scratch, Direct Attached Storage
Dell PowerVault MD-3000 (spec sheet); quantity 2; attached to 2 support nodes each
SATA 7200 RPM; NFS; RAID5 plus a hot spare
15 x 750 GB = 11.25 TB raw each, 7.8 and 10.7 TB usable
Prof. Yang Hong Storage, Direct Attached Storage
Dell PowerVault MD-1000 (spec sheet); quantity 1; attached to 1 support node
SATA 7200 RPM; NFS; RAID5 plus a hot spare
15 x 750 GB = 11.25 TB raw, 7.9 TB usable

Prof. Peter Lamb Storage, Direct Attached Storage
Dell PowerVault MD-3000 (spec sheet); quantity 1; attached to 1 support node
SATA 7200 RPM; NFS; RAID5 plus a hot spare
15 x 1 TB = 15 TB raw, 11.8 TB usable

Prof. Kurt Marfurt Storage, Direct Attached Storage
Dell PowerVault MD-1000 (spec sheet); quantity 1; attached to 1 support node
SATA 7200 RPM; NFS; RAID5 plus a hot spare
15 x 1 TB = 15 TB raw, 11.9 TB usable
Hard Disk, Global, Management

Virtualization, Storage Area Network, Management
Dell/EMC PowerVault AX4-5F (spec sheet); quantity 1; attached to ? support nodes
SATA 7200 RPM; NFS; RAID5 plus a hot spare
12 x 750 GB = 9 TB raw, ? TB usable

Backup System Disk Cache, Direct Attached Storage, Management
Dell PowerVault MD-3000 (spec sheet); quantity 1; attached to 2 support nodes
SATA 7200 RPM; NFS; RAID5 plus a hot spare
15 x 750 GB = 11.25 TB raw, ? TB usable
Hard Disk, Local, User Accessible

Compute, owned by OSCER (518); Prof. Yang Hong (8); Prof. Peter Lamb (2)
OS & local scratch: SATA 7200 RPM 3.5"
Each node: 1 x 250 GB raw, ~180 GB usable
Total: 128.9 TB raw, 92.8 TB usable

Compute, owned by Prof. Kurt Marfurt (3)
OS & local scratch: SATA 7200 RPM ?"
Each node: 1 x ? GB raw, ? GB usable
Total: ? GB raw, ? GB usable

Fat, owned by OSCER (2)
OS & local scratch: SAS 15,000 RPM 3.5"
Each node: 3 x 146 GB, RAID5 plus a hot spare, ? GB usable
Total: 876 GB raw, ? GB usable
Hard Disk, Local, Management

Head (2); Global NFS Home Storage (4); Global NFS Slow Scratch Storage (4); Global NFS Prof. Yang Hong Storage (1); Management (2); Cluster Hardware Management (1); Infiniband Subnet Management (2); LDAP/license (2); Backup (2); Spare (3)
OS: SAS 15,000 RPM 3.5"
Each node: 3 x 146 GB, ? GB usable
Total: 10,950 GB raw, ? GB usable

Global NFS Prof. Kurt Marfurt Storage (1)
OS: SAS 15,000 RPM 3.5"
Each node: ? x ? GB, ? GB usable
Total: 292 GB raw, 124 GB usable

Grid Service (2); Spare (1)
OS: SATA 7200 RPM 3.5"
Each node: 6 x 750 GB, ? GB usable
Total: 18,000 GB raw, ? GB usable
Interconnects

High Performance: InfiniBand
QLogic 9240 fabric director switch, quantity 1 (data sheet)
QLogic 9024 edge switches, quantity 37 (data sheet)
QLogic QLE7240 HCA cards, quantity 567 (data sheet)
Cables: copper within each 9024, fiber from the 9024s to the 9240
I/O: Gigabit Ethernet
Force10 Networks E1200i (spec sheet)
GigE, 2:1 oversubscribed, 624 ports
10GigE, 4:1 oversubscribed, 16 ports

Management: 100 Mbps Ethernet
Dell PowerConnect 5448: GigE, 48 ports, quantity 1 (spec sheet)
Dell PowerConnect 3548: 100 Mbps, 48 ports, quantity 13 (spec sheet)

Cluster Software

Operating System: Red Hat Enterprise Linux 5.0 (kernel 2.6.18)
Message Passing: MPICH2, MVAPICH, OpenMPI (a minimal example appears after this list)
Scheduler: Platform HPC Enterprise Edition
Compilers:
GNU: gcc, g++, gfortran
Intel: ifort (Fortran 77/90), icc (C), icpc (C++)
NAG: nagf95 (Fortran 90/95; front end to gcc)
Portland Group: pgf90, pgf77, pghpf, pgcc, pgCC
Numerical Libraries: BLAS, LAPACK, ScaLAPACK, Intel Math Kernel Library, IMSL
Parallel Debugger: TotalView
Other software purchases to be announced
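
To illustrate the message-passing stacks listed above, here is a minimal MPI "hello world" in C. It is a generic sketch, not an OSCER-specific recipe: it assumes only the standard MPI C API and the mpicc compiler wrapper that MPICH2, MVAPICH, and OpenMPI all provide (the underlying C compiler can be gcc, icc, or pgcc from the list above).

#include <mpi.h>
#include <stdio.h>

/* Each MPI process (rank) prints its ID and the total number of processes. */
int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);               /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of ranks */

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();                       /* shut down the MPI runtime */
    return 0;
}

Typically such a program is compiled on a login node (for example, mpicc hello.c -o hello) and launched across compute nodes through the scheduler rather than run interactively.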


This resource and its availability are subject to change without notice (although notice will be given if at all possible).

Copyright (C) 2005-2010 University of Oklahoma