OSCER Resources
Currently, OSCER hardware resources are housed in the Stephenson Research & Technology Center on the OU Norman campus.
Cluster Hardware
[Figure: logical layout of the interconnect]
Performance: 1,080 GFLOP/s theoretical peak; 606.9 GFLOP/s sustained (HPL)
    (see the derivation note below)

Racks: 11 standard 42U racks

Nodes:
    Compute: 135
    Head (interactive logins): 2
    Storage (NFS): 6
        FiberChannel, connected to the FAStT500 disk server: 2
    Management (runs the batch manager): 1

CPUs: Pentium4 Xeon DP "Prestonia", 2.0 GHz, 512 KB L2 cache; 2 per node

Motherboard/Chipset: Supermicro/Intel 860; 1 per node
    (compute nodes without SCSI controller; non-compute nodes with SCSI controller)

Main Memory: RDRAM; 2 GB per node

Hard Disk (EIDE except as noted):
    Compute (OS & local scratch): 1 x 40 GB each; 5,400 GB total
    Head (OS): 2 x 36 GB SCSI (RAID1) each; 146 GB total
    Storage (OS): 2 x 36 GB SCSI (RAID1) each; 438 GB total
    Storage (Global User Space, NFS):
        1 x 6-bay RAID (RAID5 + 1 hot spare), 200 GB per drive,
            800 GB usable space;
        2 x 12-bay RAID (RAID5 + 1 hot spare), 200 GB per drive,
            2,000 GB usable space each RAID, 4,000 GB usable space total
        (see the usable-space note below)
    Storage (Global Home, NFS): 1,298 GB
    Storage (Global Scratch, NFS): 2,643 GB
    Management (OS): 2 x 36 GB SCSI (RAID1); 73 GB total
    Management (logging): 2 x 120 GB; 240 GB total
    GRAND TOTAL (OS, management, home, scratch): 11,200 GB

Interconnect: Myrinet2000 (primary); 100 Mbps Ethernet (backup); all nodes

Programming Model: Message Passing (see the example below)
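How the peak figure arises: the Pentium4 Xeon can execute 2 double-precision floating-point operations per clock cycle (via SSE2), so the usual peak calculation over the 135 compute nodes reproduces the published number:

    135 nodes x 2 CPUs/node x 2.0 GHz x 2 FLOPs/cycle = 1,080 GFLOP/s

The sustained HPL (High Performance Linpack) result of 606.9 GFLOP/s is about 56% of that peak.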
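Usable-space arithmetic for the RAID units above: assuming one drive's capacity goes to RAID5 parity and one drive is held out as the hot spare, usable space is (bays - 2) x drive capacity, which matches the published figures:

    6-bay:  (6 - 2) x 200 GB = 800 GB usable
    12-bay: (12 - 2) x 200 GB = 2,000 GB usable per unit; 4,000 GB for the pair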
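In the message-passing model, processes on different nodes share no memory; all data moves through explicit sends and receives, normally via MPI on a cluster of this kind. Below is a minimal sketch in C, assuming an MPI implementation is installed locally (the file name and the mpicc/mpirun invocations are illustrative, not OSCER documentation):

    /* ping.c: rank 0 sends one integer to rank 1, which prints it. */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, value = 42;
        MPI_Status status;

        MPI_Init(&argc, &argv);               /* start the MPI runtime */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's rank   */

        if (rank == 0) {
            /* No shared memory across nodes: data moves only by messages. */
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
            printf("rank 1 received %d from rank 0\n", value);
        }

        MPI_Finalize();                       /* shut down cleanly */
        return 0;
    }

Compile and launch with something like "mpicc ping.c -o ping" then "mpirun -np 2 ./ping"; the exact commands depend on the MPI installation.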
Cluster Software
This resource and its availability are
subject to change without notice
(although notice will be given if at all possible).