
Oklahoma Supercomputing Symposium 2011






KEYNOTE SPEAKER

Dr. Barry I. Schneider

Program Director for Cyberinfrastructure
Office of Cyberinfrastructure
National Science Foundation

Keynote Topic: "XSEDE: An Advanced Cyberinfrastructure for US Scientists and Engineers"

Slides: available after the Symposium

Keynote Talk Abstract

On July 1, 2011, the TeraGrid project was succeeded by the National Science Foundation's eXtreme Digital (XD) program, opening a new chapter in cyberinfrastructure by creating the most advanced, powerful, and robust collection of integrated advanced digital resources and services in the world. This talk will introduce the new program and the XSEDE project, which reaches beyond TeraGrid in depth, breadth and, most importantly, potential scientific impact.

XSEDE will establish an increasingly virtualized approach to the provision of high-end digital services, providing a common framework for researchers at all levels of sophistication and creating a seamless environment from the desktop to local university resources to national resources. XSEDE is interested in engaging all NSF-supported researchers, as well as the rest of the open scientific community, to more effectively support their research and educational objectives requiring high-end digital services.

Biography

Dr. Barry I. Schneider is a Program Director for the National Science Foundation's Office of Cyberinfrastructure, specifically for the eXtreme Digital (XD) program. He received his Bachelor's in Chemistry from Brooklyn College, his Master's in Chemistry from Yale University and his PhD in Theoretical Chemistry from the University of Chicago. Before coming to the NSF, he worked at Los Alamos National Laboratory (LANL) in the Theoretical Division and at GTE Laboratories as a member of the Technical Staff, and since 1992 he has held visiting appointments at LANL and at the National Institute of Standards and Technology (NIST).


PLENARY SPEAKERS

Henry Neeman

Director
OU Supercomputing Center for Education & Research (OSCER)
Information Technology
University of Oklahoma

Topic: "OSCER State of the Center Address"

Slides:   PowerPoint2007

Talk Abstract

The OU Supercomputing Center for Education & Research (OSCER) celebrates its 10th anniversary on August 31, 2011. In this report, we examine what OSCER is, what OSCER does, what OSCER has accomplished in its first decade, and where OSCER is going.

Biography

Dr. Henry Neeman is the Director of the OU Supercomputing Center for Education & Research and an adjunct assistant professor in the School of Computer Science at the University of Oklahoma. He received his BS in computer science and his BA in statistics with a minor in mathematics from the State University of New York at Buffalo in 1987, his MS in CS from the University of Illinois at Urbana-Champaign in 1990 and his PhD in CS from UIUC in 1996. Prior to coming to OU, Dr. Neeman was a postdoctoral research associate at the National Center for Supercomputing Applications at UIUC, and before that served as a graduate research assistant both at NCSA and at the Center for Supercomputing Research & Development.

In addition to his own teaching and research, Dr. Neeman collaborates with dozens of research groups, applying High Performance Computing techniques in fields such as numerical weather prediction, bioinformatics and genomics, data mining, high energy physics, astronomy, nanotechnology, petroleum reservoir management, river basin modeling and engineering optimization. He serves as an ad hoc advisor to student researchers in many of these fields.

Dr. Neeman's research interests include high performance computing, scientific computing, parallel and distributed computing and computer science education.

Douglas Cline

Manager, Aerodynamics and Computational Fluid Dynamics
Engineering Division
Lockheed Martin Aeronautics Company
Topic: "Industrial-Strength High Performance Computing for Science and Engineering"
Slides: coming soon

Talk Abstract

Anyone studying computational science in the 1970s and 1980s should be considered the ultimate optimist, for all technology trends indicated that the computational speed and memory required to perform physically relevant and useful computer simulations were many decades away. Without a doubt, the greatest single impact on computational science over the past 25 years has been the rise of commodity-based high performance computing systems. Technical fields that were once thought to be hopeless for lack of computing power were transformed in a few short years, enabling a new generation of computational scientists and engineers to perform scientific and engineering simulations with unprecedented speed and fidelity. This presentation will focus on some of the technology drivers that gave rise to massively parallel computing and the "radical" concept of scalable software, and on how issues relevant twenty-five years ago continue to shape the future of high performance computing.

Biography

Coming soon

Leesa Brieger

Senior Research Software Developer
Renaissance Computing Institute (RENCI)
University of North Carolina at Chapel Hill
Topic: "iRODS and Large-Scale Data Management"
Slides:     PPTX   PDF

Talk Abstract

While data management may hold little interest for many scientists, there is an increasing need for data management plans and the technology to implement them. Aside from NSF mandates, requirements of traceability and reproducibility, sharing and publishing, and even concerns of legal liability are causing growing numbers of scientists to embrace metadata, versioning, and overall data policy in a way that would warm an archivist's heart.

iRODS is a technology that supports the execution of event-driven, data-side services as a means of implementing policy-based data management. Groups with large-scale data challenges are particularly motivated adopters of iRODS. This talk will include a general introduction to iRODS, as well as a description of some of the impact it is having on the management of large-scale scientific data.
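
For readers unfamiliar with the model, here is a minimal sketch of what "event-driven, data-side policy" means in plain Python. It is a conceptual illustration only: the event name "put" and all function names are hypothetical, and this is not iRODS's actual rule language or API.

    # Conceptual sketch of event-driven, policy-based data management.
    # Illustrative Python only -- NOT iRODS rule syntax or its API.
    policies = {}   # event name -> list of policy actions for that event

    def on(event):
        """Register a policy action to run whenever `event` fires."""
        def register(action):
            policies.setdefault(event, []).append(action)
            return action
        return register

    @on("put")      # hypothetical event: a file was ingested into the grid
    def record_provenance(obj):
        obj.setdefault("metadata", {})["ingest_policy"] = "recorded"

    @on("put")
    def replicate(obj):
        obj["replicas"] = obj.get("replicas", 0) + 1   # replication policy

    def fire(event, obj):
        # The data grid, not the user, invokes the registered policies.
        for action in policies.get(event, []):
            action(obj)

    data_object = {"name": "survey.dat"}
    fire("put", data_object)
    print(data_object)   # metadata and replica count applied by policy

The point of the pattern is that metadata, replication, and similar policies run automatically at the data side on every ingest, rather than relying on each user to remember them.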

Biography

As Senior Research Software Developer at RENCI, Leesa Brieger bootstrapped the irods@renci group there and is now the Business Development/Outreach lead for that team. She first joined the DICE group at SDSC and now works closely with DICE at UNC Chapel Hill. Her background is in numerical analysis (a Bachelor's in math at UC Berkeley and a Master's in applied math at the University of Utah), which paved the way to many years of computational science and HPC - in materials science (EPFL, Lausanne, Switzerland), environmental modeling and geophysics (CRS4, Sardinia, Italy), and grid computing and astronomical mosaicking (SDSC) - before she fell in with the data crowd.

Stephen Wheat

Senior Director, High Performance Computing Worldwide Business Operations
Intel

Topic: "On Bringing HPC Home for Growth"

Slides:   PDF

Talk Abstract

At last year's Oklahoma Supercomputing Symposium, I compared the relatively young HPC market segment to much older and more mature spaces, asking why the HPC community appears to be prematurely in the relatively non-vibrant phase of a segment's life cycle. I noted that our Red Ocean experience is likely due to the fact that our "Ocean" has not flowed over the barriers to adoption of HPC technology by a broader base of HPC participants and beneficiaries.

In this talk, I will review the significant events that have transpired and the progress our community has made in resolving those barriers since our last meeting. We'll look at the scope of potential impact in our continued pursuit, laying out a roadmap of how we arrive at the Blue Ocean. I'll describe a model of community action that will get us there, at the scale of engagement required.

As we consider the magnified adoption of HPC technology, the potential enormity of HPC's impact at the Every Person level not only comes into focus but also moves into the realm of the possible. In so doing, it creates that much more excitement and motivation to excel in our overall pursuit of advancing the technology, as it's not just for the "us" we know now, but the much larger "us" of the near future.

The non-hidden agenda of this talk is to enlist the efforts of others who would make the navigation to the Blue Ocean a reality.

Biography

Dr. Stephen Wheat is the Senior Director for the HPC Worldwide Business Operations directorate within Intel's HPC Business Unit. He is responsible for driving the development of Intel's HPC strategy and the pursuit of that strategy through platform architecture, eco-system development and collaborations. While in this role, Dr. Wheat has influenced the deployment of several Top10 systems and many more Top500 HPC systems.

Dr. Wheat has a wide breadth of experience that gives him a unique perspective in understanding large scale HPC deployments. He was the Advanced Development manager for the Storage Components Division, the manager of the RAID Products Development group, the manager of the Workstation Products Group software and validation groups, and manager of the Supercomputing Systems Division (SSD) operating systems software group. At SSD, he was a Product Line Architect and was the systems software architect for the ASCI Red system.

Before joining Intel in 1995, Dr. Wheat worked at Sandia National Laboratories, performing leading research in distributed systems software, where he created and led the SUNMOS and PUMA/Cougar programs. Dr. Wheat is a 1994 Gordon Bell Prize winner and has been awarded Intel's prestigious Achievement Award. He has a patent in Dynamic Load Balancing in HPC systems. He has also twice been honored as one of HPCwire's People to Watch, in 2006 and 2011.

Dr. Wheat holds a Ph.D. in Computer Science and has several publications on the subjects of load balancing, inter-process communication, and parallel I/O in large-scale HPC systems. Outside of Intel, he is a commercial multi-engine pilot and an FAA-certified multi-engine and instrument flight instructor.


PANELISTS

Daniel Andresen

Associate Professor
Department of Computing & Information Sciences
Kansas State University

Panel Topic: "The Impact of OSCER on OU, Oklahoma and Beyond"

Panel Abstract

The OU Supercomputing Center for Education & Research (OSCER) recently celebrated its 10th anniversary. What has been the impact of OSCER on OU, Oklahoma, the region and the country? What impacts are anticipated in the future? In this panel, we'll examine the role that OSCER plays and its value to the research and education community.

Talk Topic: "The Emerging Role of the Great Plains Network in Bridging Campuses to Regional and National Cyberinfrastructure Resources"
(with Rick McMullen)

Talk Abstract

Regional Optical Networks (RONs) are a critical piece of Cyberinfrastructure (CI) for connecting researchers to each other and to computing and data resources. In addition to network connectivity, RONs are uniquely situated to provide a broader range of support to help researchers and support staff bridge the gap between computing facilities in their laboratories and on their campuses, and the leading or peak computing facilities available at national centers. Recently the Great Plains Network (GPN) made broad support for regional CI a strategic priority, through a new Cyberinfrastructure Program Committee. In this talk, we will discuss the Great Plains Network and the process by which it is evolving to support a broader range of CI needs in the region. A recent set of reports by the National Science Foundation (NSF) Advisory Committee for Cyberinfrastructure (ACCI) included a Task Force on Campus Bridging report, which has deep implications for the GPN CI Program and other RON-based CI support programs in advancing CI priorities at the campus and national levels.

Biography

Daniel Andresen, Ph.D., is an associate professor at Kansas State University. His research includes embedded and distributed computing, biomedical systems, and high performance scientific computing. Dr. Andresen coordinates the activities of the K-State research computing cluster, Beocat, and advises the local ACM chapter. He is a National Science Foundation CAREER award winner, and has been granted research funding from the NSF, the Defense Advanced Research Projects Agency (DARPA), and industry. He is a member of the Association for Computing Machinery, the IEEE Computer Society, the Electronic Frontier Foundation, and the American Society for Engineering Education.

Dimitrios Papavassiliou

Professor
School of Chemical, Biological & Materials Engineering
University of Oklahoma

Panel Topic: "The Impact of OSCER on OU, Oklahoma and Beyond"

Panel Abstract

The OU Supercomputing Center for Education & Research (OSCER) recently celebrated its 10th anniversary. What has been the impact of OSCER on OU, Oklahoma, the region and the country? What impacts are anticipated in the future? In this panel, we'll examine the role that OSCER plays and its value to the research and education community.

Talk Topic: "Design of Thermally Conducting Nanocomposites with Computations"
Slides:   available after the Symposium

Talk Abstract

Carbon nanotubes and graphene sheets, which exhibit exceptionally high thermal conductivities, appear to be promising fillers for manufacturing thermally conducting nanocomposites. However, this promise has not yet been fulfilled: experimentally, the effective thermal conductivity of carbon nanocomposites has been found to be much lower than theoretically expected. The reason for this behavior is the presence of an interfacial resistance to heat transfer at the nanoinclusion-matrix interface. This resistance, also known as the Kapitza resistance, can become a dominant factor in the transport of heat. In this presentation, the macroscopically observed thermal properties of nanocomposites will be examined through a combination of mesoscopic and molecular scale simulations. The simulation results can not only lead to insights about the physics of nano-scale heat transport and the role of the Kapitza resistance, but can also guide the design of nanocomposite materials with superior thermal properties.
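
(For context, a standard textbook definition not taken from the talk: the Kapitza resistance R_K relates the temperature jump across the interface to the heat flux through it,

    R_K = \Delta T / q ,

so the larger R_K is, the more each nanoinclusion boundary impedes heat flow, no matter how conductive the filler itself is.)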

Biography

Dimitrios Papavassiliou is a Presidential Professor in the School of Chemical, Biological & Materials Engineering at the University of Oklahoma. He received a BS degree from the Aristotle University of Thessaloniki, and MS and PhD degrees from the University of Illinois at Urbana-Champaign. He joined OU in 1999, after 2.5 years with Mobil's Upstream Strategic Research Center in Dallas, Texas. His research contributions are in the area of computations and numerical methods for turbulent flows and flows in porous media, in the area of micro- and nano-flows, and, lately, in the area of biologically relevant flows. Dimitrios has co-authored over 70 journal articles and book chapters, and he has presented his work in more than 120 conference presentations and more than 20 invited talks to academe and industry. His research group has received funding from federal sources (NSF, DoE, ONR, AFOSR), private sources (ACS-PRF) and industrial consortia.

Dan Weber

Computer Scientist
Software Group (76 SMXG)
Tinker Air Force Base

Panel Topic: "The Impact of OSCER on OU, Oklahoma and Beyond"

Panel Abstract

The OU Supercomputing Center for Education & Research (OSCER) recently celebrated its 10th anniversary. What has been the impact of OSCER on OU, Oklahoma, the region and the country? What impacts are anticipated in the future? In this panel, we'll examine the role that OSCER plays and its value to the research and education community.

Biography

Dr. Dan Weber has 25 years of experience in modeling and simulation of physical systems such as severe weather and, most recently, computational electromagnetics (CEM). In addition to performing research and writing numerous papers on thunderstorms and computer software optimization techniques targeted at massively parallel computers, he has taught courses in weather forecasting techniques and severe and unusual weather. He has held positions with the National Weather Service, at the University of Oklahoma (OU) and in private industry. Dr. Weber is currently employed as a Computer Scientist with the 76th Software Maintenance Group (SMXG) at Tinker Air Force Base, where he supports flight simulators. He is also leading efforts to develop HPC capabilities within the 76th SMXG and to model and simulate weather phenomena and radio waves in support of the war fighter.

Dr. Weber earned undergraduate and graduate degrees in Meteorology and Geology from the University of Utah and a doctoral degree in Meteorology from OU. His current research interests include optimization of models on General Purpose computation on Graphics Processing Units (GPGPU) technology and urban weather prediction. Dr. Weber has participated in several forensic weather projects and has supported several real-time weather forecasting efforts via the installation and optimization of a state-of-the-art weather prediction system that he helped develop at OU.

Dr. James P. Wicksted

Professor
Noble Research Fellow
Director of Multidisciplinary Research
Department of Physics
Oklahoma State University
Associate Director
Oklahoma EPSCoR

Panel Topic: "The Impact of OSCER on OU, Oklahoma and Beyond"

Slides:   PowerPoint

Panel Abstract

The OU Supercomputing Center for Education & Research (OSCER) recently celebrated its 10th anniversary. What has been the impact of OSCER on OU, Oklahoma, the region and the country? What impacts are anticipated in the future? In this panel, we'll examine the role that OSCER plays and its value to the research and education community.

Biography

Dr. James P. Wicksted received his B.A. degree (1975) from New York University and his M.A. (1978) and Ph.D. (1983) from the City University of New York. He became a member of the Department of Physics at Oklahoma State University in 1985, where he is currently a full professor and a Noble Research Fellow in optical materials. He is also the Director of Multidisciplinary Research in the College of Arts & Sciences at OSU.

His current research interests include the optical studies of various types of nanoparticle complexes that have potential biosensing and biomedical applications. Dr. Wicksted has also collaborated with the Biomedical Engineering Center at the University of Texas Medical Branch in Galveston since 1992, where he has worked with medical doctors and bioengineers on the noninvasive applications of lasers in diagnosing disease.

Dr. Wicksted is the associate director of the Oklahoma NSF EPSCoR Program and the Director of the Oklahoma DOE EPSCoR Program. He is currently the principal investigator of a $15 million Research Infrastructure Improvement Grant funded by the NSF EPSCoR Program.

OTHER PLENARY SPEAKERS TO BE ANNOUNCED


BREAKOUT SPEAKERS

Alex Barclay

Director
Tulsa Community Supercomputing Center
Tulsa Research Partners
Oklahoma Innovation Institute

Topic: "An Introduction to the Tulsa Community Supercomputer"

Slides: available after the Symposium

Talk Abstract

Coming soon

Biography

Coming soon

David Bigham

Field Systems Engineer
Isilon Systems

Topic: "Scale Out NAS for High Performance Applications"

Slides: available after the Symposium

Talk Abstract

File-based application workflows are creating tremendous pressure on today's data storage systems. The introduction of compute clusters and multi-core processors has shifted the performance bottleneck from application processing to data access, pushing traditional storage systems beyond their means. In this brief talk, we will explore the use of a clustered scale-out storage architecture with High Performance Compute clusters.

Biography

David Bigham is a Field Systems Engineer with Isilon Systems, a subsidiary of EMC Corporation and a leader in scale-out NAS solutions. Before joining Isilon, David worked at AT&T as a member of the Virtualization Team responsible for deploying and supporting a large distributed server and desktop virtualization infrastructure. Before that, he held various positions at Dobson Communications Corporation, including architecture and design responsibilities within IT, prior to the company's acquisition by AT&T.

Keith Brewster

Senior Research Scientist and Associate Director
Center for Analysis & Prediction of Storms
University of Oklahoma

Topic: "Nowcasting and Short-term Forecasting of Thunderstorms and Severe Weather using OSCER"

Slides:   PowerPoint   PDF

Talk Abstract

Coming soon

Biography

Keith Brewster is a Senior Research Scientist at the Center for Analysis and Prediction of Storms at the University of Oklahoma and an Adjunct Associate Professor in the OU School of Meteorology. His research involves data assimilation of advanced observing systems for high resolution numerical weather analysis and prediction, including data from Doppler radars, satellites, wind profilers, aircraft and surface mesonet systems. He earned an M.S. and Ph.D. in Meteorology from the University of Oklahoma and a B.S. from the University of Utah.

Dana Brunson

Senior Systems Engineer
High Performance Computing Center
Adjunct Associate Professor
Department of Computer Science
Oklahoma State University

Panel Topic: "Engaging Campuses through Extreme Science and Engineering Discovery Environment (XSEDE)"
(with S. Kay Hunt and Jeff Pummill)

Panel Slides: available after the Symposium

Talk Topic: "How I Got a Grant for a Supercomputer"

Talk Slides: available after the Symposium

Panel Abstract

This panel will provide an overview of XSEDE (the National Science Foundation follow-on to TeraGrid) with an emphasis on how XSEDE is working with campuses to support the computational science and engineering and HPC needs of campus researchers and educators. The presenters include the XSEDE Campus Champions coordinator, along with two Campus Champions. The presenters will be able to assist the participants in learning how campuses, researchers, educators, and students can gain access to the resources of XSEDE. The session will include a presentation about campus engagement through XSEDE, followed by a Q&A session.

Talk Abstract

Oklahoma State University was recently awarded a National Science Foundation Major Research Instrumentation (MRI) grant to acquire a High Performance Compute Cluster, "Cowboy." This breakout session will include an overview of the MRI program and a walk-through of writing a successful proposal.

Biography

Dana Brunson oversees the High Performance Computing Center and is an adjunct associate professor in the Computer Science Department at Oklahoma State University (OSU). Before transitioning to High Performance Computing in the fall of 2007, she taught mathematics and served as systems administrator for the OSU Mathematics Department. She earned her Ph.D. in Numerical Analysis at the University of Texas at Austin in 2005 and her M.S. and B.S. in Mathematics from OSU. In addition, Dana is serving on the ad hoc committee for OSU's new Bioinformatics Graduate Certificate program and is the XSEDE Campus Champion for OSU.

Brian Cremeans

Informatics Analyst
Information Technology
University of Oklahoma

Topic: "Informatics Services and Infrastructure for Research Support"

Slides: available after the Symposium

Abstract

When fields of scientific study become enriched with an abundance of diverse data or expand the scope and complexity of their interests, the workflows for managing, accessing, and processing that data often need to improve. The CyberCommons Ecological Forecasting project involves several such disciplines, many projects, and a wide array of highly varied data sets and challenges. This requires the researchers to make use of new technologies to promote data exploration, data discovery, and collaboration. Here we will discuss some of the approaches taken, lessons learned, and improvements made while working on the CyberCommons project, and will explore how we can extend such approaches to enable scientists to make better use of data and computational resources.

Biography

Brian Cremeans is an Informatics Analyst at the University of Oklahoma, where he supports several cross-disciplinary research projects. He received his BS in Computer Science with a minor in Mathematics (2002), MS in Computer Science (2007), and MA in Mathematics (2007), all from OU. He is currently pursuing a PhD in Computer Science at OU. Prior to joining OU Information Technology's Informatics team, he worked as a System Analyst in OU Outreach and as a Research Associate in the OU School of Meteorology. He currently is working on the CyberCommons Ecological Forecasting Project and applications of cloud frameworks and virtualization to research support.

Larry Fisher

Owner
Creative Consultants

Topic: "Career Development -- Your Six Choices"

Slides:   available after the Symposium

Talk Abstract

We have six choices concerning the direction our careers will take. This presentation covers those six choices and includes a right-brain exercise to help participants determine the personal impact that their career choice might have on their future. This is a fun, participative session.

Biography

Larry Fisher is owner of Creative Consultants, a management training and development consulting company. He was formerly Assistant Administrator for Human Resource Development Services, Office of Personnel Management, State of Oklahoma, where he administered a statewide management training and professional development program for state employees. He also worked at the University of Oklahoma in administration and management development, and as a visiting lecturer for the College of Business Administration and the Political Science Department. He has consulted nationally for many private and public organizations. He is known nationally through memberships in the American Society for Training and Development (ASTD), the National Association for Government Trainers (NAGTAD), and the International Association for Continuing Education and Training (IACET). He served as national president of NAGTAD, a commissioner for IACET, and president of the Oklahoma City Chapter of ASTD. He has taught for Oklahoma State University, Rose State College in Oklahoma, the University of Phoenix, and the Keller Graduate School of DeVry University. He attained the status of Certified Personnel Professional for the State of Oklahoma. Larry has a B.S. in Chemistry and an M.A. in Public Administration. He has completed all coursework for a Ph.D. in Political Science.

Brian Forbes

Senior Solution Architect
Mellanox Technologies
Topic: "Paving the Road to Exascale Computing"
Slides: available after the Symposium

Talk Abstract

Petascale and Exascale systems will span tens of thousands of nodes, all connected together via high speed connectivity solutions. With the growing size of clusters and the growing number of CPU/GPU cores per cluster node, the interconnect needs to provide not only the highest throughput and lowest latency, but also to offload the processing units (CPUs, GPUs) from the communications work, in order to deliver the desired efficiency and scalability. Mellanox scalable HPC solutions accelerate MPI and SHMEM environments with smart offloading techniques, and deliver the needed infrastructure for faster and more efficient GPU communications. This presentation will cover the latest technology and solutions from Mellanox that connect the world's fastest supercomputers, and a roadmap for next-generation InfiniBand speeds.
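
As a rough illustration of why offload matters (a hypothetical sketch using mpi4py, not Mellanox code), non-blocking MPI calls let the interconnect move data while the CPU keeps computing; offload hardware is what makes that overlap real rather than deferred work for the CPU:

    # Hypothetical sketch: communication/computation overlap with mpi4py.
    # With hardware offload, the network adapter can progress the transfer
    # while the CPU does the local work below.
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    send_buf = np.full(1_000_000, rank, dtype='d')
    recv_buf = np.empty(1_000_000, dtype='d')

    # Post a non-blocking exchange with the neighboring ranks.
    requests = [comm.Isend(send_buf, dest=(rank + 1) % size),
                comm.Irecv(recv_buf, source=(rank - 1) % size)]

    local_sum = send_buf.sum()       # useful work while data is in flight
    MPI.Request.Waitall(requests)    # block only when the data is needed
    total = local_sum + recv_buf.sum()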

Biography

Brian Forbes is a Senior Solution Architect at Mellanox Technologies, a leading supplier of InfiniBand and 40 Gigabit Ethernet solutions. He was an original member of the InfiniBand engineering team, chairing the Systems Management working group and the Routing sub-group. As a member of the Brocade Communications technology team, he contributed to the Internet Engineering Task Force's iSCSI specification as well as to ANSI's SCSI work. At Tandem/Compaq, he was part of the ServerNet and NT cluster product development teams. Prior to that, Brian held various engineering positions at Burroughs/Unisys, including Director of Development for their PC division.

Jim Glover

Adjunct Instructor
Business Technologies
Oklahoma State University - Oklahoma City

Topic: "A Day in the Life of an Oklahoma Information Technology Mentorship Program Volunteer"

Slides: available after the Symposium

Talk Abstract

The Oklahoma Information Technology Mentorship Program exposes Oklahoma students to the practical day-to-day life of IT professionals. Students learn more about careers they may be considering, and the IT community gains newcomers who better understand what it takes to succeed professionally. It turns out that these relatively obvious benefits are the tip of an iceberg of opportunities for all concerned.

Biography

Jim Glover is Network Manager at the Oklahoma Department of Corrections, and an Adjunct Instructor at Oklahoma State University-Oklahoma City. He received his BS in Computer Science from Louisiana Tech University in 1990. He has been in IT since 1980, serving first as a mainframe operator, then as a system administrator on systems ranging from microcomputers to mainframes, and now at the Department of Corrections, where he designs, maintains, and manages network connections for locations across the state of Oklahoma. Outside of work, he is active in Amateur Radio, and likes to dabble in photography.

Jim Gutowski

Business Development Manager
Research Computing Solutions
Dell

Topic: "Dell Research Computing: Giving XSEDE Researchers the POWER TO DO MORE"

Slides: available after the Symposium

Talk Abstract

Dell collaborates with many XSEDE (TeraGrid / XD) institutions that both provide and consume high performance computing resources. This presentation focuses on enabling research and scientific discovery through the implementation of Dell HPC solutions as XSEDE resources, including clusters at the Texas Advanced Computing Center (TACC), the National Center for Supercomputing Applications (NCSA) and Cornell University. TeraGrid users can run MATLAB on the Cornell MATLAB cluster, run a wide range of applications on the HPC/GPU cluster iFORGE at NCSA, and do remote visualization and data analysis (VDA) on TACC's Longhorn, the largest dedicated VDA machine in the world. This machine opens new possibilities for remote and collaborative visualization. This presentation will provide an overview of these XSEDE resources and the underlying Dell HPC solutions that power them. It will also provide a glimpse of Stampede, a 10 PetaFLOPS resource that will come online in February 2013. This talk will also cover how researchers, including those not in XSEDE, can build their own high performance clusters as well as access HPC cloud computing resources with Dell.

Biography

Jim Gutowski is Dell's HPC business development manager for research computing in the US. He is an HPC veteran, with over 20 years in technical and high performance computing, including 16 years at Hewlett-Packard, several years in start-up companies, 3 years at Sun Microsystems, and the past 2 years at Dell. He's had a variety of sales and marketing roles in those companies, and started his career as an engineer at McDonnell Douglas (now part of Boeing) in design of commercial aircraft. Jim is an engineering graduate of, and is fanatical about, the University of Notre Dame, and also has an MBA from the University of California Los Angeles (UCLA). He lives in Fort Collins, Colorado.

S. Kay Hunt

XSEDE Campus Champion Project Director
Purdue University
Panel Topic: "Engaging Campuses through Extreme Science and Engineering Discovery Environment (XSEDE)"
(with Dana Brunson and Jeff Pummill)
Slides: available after the Symposium

Panel Abstract

This panel will provide an overview of XSEDE (the National Science Foundation follow-on to TeraGrid) with an emphasis on how XSEDE is working with campuses to support the computational science and engineering and HPC needs of campus researchers and educators. The presenters include the XSEDE Campus Champions coordinator, along with two Campus Champions. The presenters will be able to assist the participants in learning how campuses, researchers, educators, and students can gain access to the resources of XSEDE. The session will include a presentation about campus engagement through XSEDE, followed by a Q&A session.

Biography

Kay Hunt is the project coordinator for the XSEDE Campus Champions program, a national project sponsored by the National Science Foundation. The Campus Champions program supports campus representatives as the local source of knowledge about high performance computing opportunities and resources. She has responsibility for over 130 Campus Champions located at over 100 institutions, who develop relationships between and among faculty and staff. The knowledge and assistance provided by the Champions empower campus researchers, educators, and students to advance scientific discovery.

Hunt's responsibilities with XSEDE, in addition to the Campus Champion program, include working with the Education and Outreach group. Hunt's primary focus areas at Purdue University are project management, communications, and outreach.

Hunt has been with Purdue University for over 35 years and has many years of experience in information technology and scientific research. She received her Bachelor of Arts degree in Mathematics from Indiana University.

Kirk Jordan

Emerging Solutions Executive
Computational Science Center
T. J. Watson Research
IBM
Topic: "HPC Directions Toward Exascale: An Application Orientation"

Slides:   PDF

Abstract

Learn how IBM is addressing the challenges involved in achieving Petascale and Exascale performance on high end system platforms, running real workloads to obtain significant results in science, engineering, business and social policy, and partnering and collaborating with key clients on the most challenging applications and workloads.

Biography

Dr. Kirk E. Jordan is the Emerging Solutions Executive in the Computational Science Center at IBM T.J. Watson Research Center. He has vast experience in high performance and parallel computing. The Computational Science Center is addressing the challenges involved in achieving Petascale and Exascale performance on IBM's very high end system platforms, running real workloads to obtain significant results in science, engineering, business and social policy, and partnering and collaborating with key IBM clients on the most challenging applications and workloads on these large systems. Dr. Jordan oversees development of applications for IBM's advanced computing architectures, investigates and develops concepts for new areas of growth involving high performance computing (HPC), and provides leadership in high-end computing and simulation in such areas as computational fluid dynamics, systems biology and high-end visualization. At IBM, he has held several positions promoting HPC and high performance visualization, including leading technical efforts in the Deep Computing organization within IBM's Systems and Technology Group, managing IBM's University Relations SUR (Shared University Research) Program and leading IBM's Healthcare and Life Sciences Strategic Relationships and Institutes of Innovation Programs. In addition to his IBM responsibilities, Jordan maintains his visibility as a computational applied mathematician in the high-performance computing community. He is active on national and international committees on science and high-performance computing issues and has received several awards for his work on supercomputers. His main research interests lie in the efficient use of advanced-architecture computers for simulation and modeling, especially in the areas of systems biology and physical phenomena. He has authored numerous papers on performance analysis of advanced computer architectures and investigated methods that exploit these architectures. Areas in which he has published include interactive visualization on parallel computers, parallel domain decomposition for reservoir/groundwater simulation, turbulent convection flows, parallel spectral methods, multigrid techniques, wave propagation, systems biology and tumor modeling.

Nicholas F. Materer

Associate Professor
Department of Chemistry
Oklahoma State University
Topic: "Computational Studies of Surface Reaction: From Small Cluster to Nanoporous Materials"

Talk Abstract

The use of clusters to model surface reactions can provide insight into the bonding, reaction mechanisms and expected products. As we strive for better and better predictions, larger clusters and the concurrent increase in computational cost are becoming an increasingly important issue. One example in my group is the adsorption of ClCN on multiple Si-dimer clusters. We show how larger clusters are required to describe the chemical reaction on the Si(100) surface. Another example is our work modeling reactions on Mo surfaces with mixed oxidation states. Even small clusters containing 6 and 10 Mo atoms (Mo6O23H10 and Mo10O36H12) have a very large number of electrons, even with the use of pseudopotentials. On these clusters, density-functional studies indicate that HOOH adsorbs molecularly on the Mo(VI) clusters, while the Mo(VI)/Mo(V) clusters can decompose the peroxide. Finally, we are investigating hexamethylene triperoxide diamine (HMTD), triacetone triperoxide (TATP), trinitrotoluene (TNT), and cyclotrimethylene trinitramine (RDX) adsorbed inside mesoporous silica (MCM-41) and nanoporous anodized alumina. These systems are extremely large and contain upwards of one thousand atoms. Useful results have been obtained by utilizing a hybrid quantum mechanics/molecular mechanics (QM/MM) approach.

Biography

In 1990, I completed a Bachelor of Science in Chemistry with Honors at the University of Missouri-Columbia. I received my Ph.D. in 1995 from the University of California, Berkeley under the guidance of Professor Gabor Somorjai and Dr. Michel Van Hove. After UC Berkeley, I took a postdoctoral fellow position in the group of Stephen R. Leone at JILA and the Department of Chemistry and Biochemistry at the University of Colorado. Since 1998, I have been a faculty member in the Department of Chemistry at OSU. In the summer of 2004, I was promoted to Associate Professor. I also recently received the College of Arts and Sciences Junior Faculty Award for Scholarly Excellence. I have published over forty papers in peer-reviewed journals. My research involves experimental surface science studies of interfacial chemistry and physics. Ongoing projects include exploration of the surface chemistry of silicon with organic molecules, chemical mechanisms for the detection and desensitization of explosives, and corrosion sensors for our aging infrastructure.

Donald F. (Rick) McMullen

Senior Scientist
Research & Graduate Studies
Department of Electrical Engineering & Computer Science
University of Kansas

Talk Topic: "The Emerging Role of the Great Plains Network in Bridging Campuses to Regional and National Cyberinfrastructure Resources"
(with Dan Andresen)

Talk Abstract

Regional Optical Networks (RONs) are a critical piece of Cyberinfrastructure (CI) for connecting researchers to each other and to computing and data resources. In addition to network connectivity, RONs are uniquely situated to provide a broader range of support to help researchers and support staff bridge the gap between computing facilities in their laboratories and on their campuses, and the leading or peak computing facilities available at national centers. Recently the Great Plains Network (GPN) made broad support for regional CI a strategic priority, through a new Cyberinfrastructure Program Committee. In this talk, we will discuss the Great Plains Network and the process by which it is evolving to support a broader range of CI needs in the region. A recent set of reports by the National Science Foundation (NSF) Advisory Committee for Cyberinfrastructure (ACCI) included a Task Force on Campus Bridging report, which has deep implications for the GPN CI Program and other RON-based CI support programs in advancing CI priorities at the campus and national levels.

Biography

Rick McMullen is a Senior Scientist at the University of Kansas, where he plans and develops research computing technologies and services. Representing KU regionally and nationally in research computing and networking organizations, Rick serves as Chair of the Great Plains Network Cyberinfrastructure Program Committee and works closely with GPN and KanREN to support the development of regional CI that works with and supports campus research goals.

Prior to coming to KU, Rick was Director and Principal Scientist of the Knowledge Acquisition and Projection Lab in the Pervasive Technology Institute at Indiana University, a founding faculty member of the Indiana University School of Informatics and Computing, and adjunct faculty in the Computer Science Department. He has served as an Investigator on major international networking projects and is currently PI or co-PI on several National Science Foundation network and cyberinfrastructure development projects. His research interests include sensor networks, high performance research networking, and Artificial Intelligence applications that support knowledge management and decision-making in scientific research collaborations. Rick's background is in Chemistry. He received a Ph.D. in 1982 from Indiana University.

Charlie Peck

Associate Professor
Department of Computer Science
Earlham College
Topic: "LittleFe: The HPC Education Appliance"
Slides:   available after the Symposium

Talk Abstract

Many institutions have little or no access to parallel computing platforms for in-class computational science or parallel programming and distributed computing education — yet key concepts, motivated by science, are taught more effectively and memorably on an actual parallel platform. LittleFe is a complete 6 node Beowulf-style portable HPC cluster — designed specifically for education — that weighs less than 50 pounds, easily travels, and sets up in 5 minutes. Current generation LittleFe hardware includes multicore processors and General Purpose Graphics Processing Units (GPGPU) capability, enabling support for shared memory parallelism, distributed memory parallelism, GPGPU parallelism, and hybrid models. Leveraging the Bootable Cluster CD project, LittleFe is an affordable, powerful, ready-to-run, computational science, parallel programming, and distributed computing educational appliance.

Biography

Charlie teaches computer science at Earlham College in Richmond, IN. He is also the nominal leader of Earlham's Cluster Computing Group. His research interests include parallel and distributed computing, computational science, and education. Working with colleagues, Charlie is co-PI for the LittleFe project. During the summer, he often teaches parallel and distributed computing workshops for undergraduate science faculty under the auspices of the National Computational Science Institute and the SC (Supercomputing Conference) Education Program.

Jeff Pummill

Manager for Cyberinfrastructure Enablement
Arkansas High Performance Computing Center
University of Arkansas

Panel Topic: "Engaging Campuses through Extreme Science and Engineering Discovery Environment (XSEDE)"
(with S. Kay Hunt and Dana Brunson)

Panel Slides: available after the Symposium

Topic: "Birds of a Feather Session: HPC System Administrator Town Hall"

Birds of a Feather Slides: available after the Symposium

Panel Abstract

This panel will provide an overview of XSEDE (the National Science Foundation follow-on to TeraGrid) with an emphasis on how XSEDE is working with campuses to support the computational science and engineering and HPC needs of campus researchers and educators. The presenters include the XSEDE Campus Champions coordinator, along with two Campus Champions. The presenters will be able to assist the participants in learning how campuses, researchers, educators, and students can gain access to the resources of XSEDE. The session will include a presentation about campus engagement through XSEDE, followed by a Q&A session.

Birds of a Feather Abstract

This Birds of a Feather session will provide an opportunity to get together with other HPC managers and system administrators to discuss success stories and areas needing improvement, or simply to ask questions about best practices with a group of peers. Bring your comments, critiques and questions, and expect a lively discussion.

Biography
Jeff Pummill is the Manager for Cyberinfrastructure Enablement at the University of Arkansas. He has supported the high performance computing activities at the University of Arkansas since 2005, serving first as Senior Linux Cluster Administrator before his current role, and has more than a decade of experience in managing high performance computing resources. Jeff is also the XSEDE Campus Champion for the University of Arkansas, and is a very active contributor at the national level on the Campus Champion Leadership Team.

Lina Sawalha

Graduate Research Assistant
School of Electrical & Computer Engineering
University of Oklahoma

Topic: "Phase-aware Thread Scheduling for Heterogeneous Systems from Multicore Processors to the Cloud"

Slides:   PDF

Talk Abstract

Heterogeneous systems offer significant advantages over homogeneous ones in terms of both increased power efficiency and performance. Unfortunately, such systems also create unique challenges in effective mapping of jobs to processing cores. The greater the difference between systems, the more complex this problem becomes. This work focuses on scheduling jobs for heterogeneous systems, starting from heterogeneous cores on the same processor chip and later extending similar techniques to heterogeneous nodes in the cloud. Previous scheduling approaches for heterogeneous multicore processors sampled performance while permuting the schedule across each type of core each time a change in application behavior was detected. However, frequent performance sampling on each type of core may be impractical.

We propose a new thread scheduling approach for heterogeneous systems that uses not simply the detection of a program behavior change, but the identification and recording of unique phase behaviors. We highlight the correlation between the execution phases of an application and the performance of those phases on a particular core/processor type. We present mechanisms that exploit this correlation between program phases and appropriate scheduling decisions, and demonstrate that near-optimal mapping of thread segments to processor cores can be performed without frequently sampling the performance of each thread on each processor core type. A similar approach can be exploited to improve the mapping of jobs to different nodes in the cloud. Using replication and thread migration, the best-performing node for each execution phase of a long-running thread can be found. This leads to improved performance over more random assignments of jobs to nodes in the cloud.
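
A minimal sketch of the idea (hypothetical Python, not the authors' implementation; all names are illustrative): each phase is sampled once per core type, recorded, and every later occurrence of that phase is scheduled from the recorded table instead of being re-sampled.

    # Hypothetical sketch of phase-aware scheduling; names are illustrative.
    phase_table = {}   # phase signature -> {core_type: measured performance}

    def pick_core(thread, phase, core_types, sample):
        """Choose a core type for `thread` as it enters `phase`.

        `sample(thread, core_type)` measures performance (e.g. IPC) and is
        expensive, so it runs at most once per (phase, core_type) pair.
        """
        if phase not in phase_table:
            # New phase: sample it once on each core type and record it.
            phase_table[phase] = {c: sample(thread, c) for c in core_types}
        perf = phase_table[phase]
        # Recurring phase: reuse the recorded behavior -- no re-sampling.
        return max(perf, key=perf.get)

The same table-driven idea extends to the cloud case described in the abstract: briefly replicate a long-running job on candidate node types, record per-phase performance, then migrate to the best node for each phase.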

Biography

Lina Sawalha received the BS degree in computer engineering from Jordan University of Science and Technology in 2006 and the MS degree in Electrical & Computer Engineering from the University of Oklahoma (OU) in 2009. Her research interests include computer architecture, microarchitecture, hardware design and high performance computing. She is a PhD candidate in electrical and computer engineering at OU, currently working on scheduling for and design of heterogeneous multicore processors. She is a student member of the IEEE and ACM. She is also a member of Eta Kappa Nu, Golden Key International Honour Society and the Jordan Engineers Association.

James E. Stine, Jr.

Professor
School of Electrical & Computer Engineering
Oklahoma State University

Topic: "Using Parallel Computing to Design and Simulate Multi-Core System on Chip Architectures"

Slides: available after the Symposium

Talk Abstract

Coming soon

Biography

Coming soon

Luis M. Vicente

Associate Professor
MSEE Program Coordinator
Sponsor Research Office Coordinator
Department of Electrical & Computer Engineering and Computer Science
Polytechnic University of Puerto Rico

Topic: "The First Workshop Between OSCER and PUPR: Buildout of a LittleFe in the Caribbean"

Slides: available after the Symposium

Talk Abstract

In 2011, the University of Oklahoma (OU) and the Polytechnic University of Puerto Rico (PUPR) worked with the National Computational Science Institute (NCSI) to organize a weeklong tutorial workshop on Intermediate Parallel Programming & Cluster Computing, held jointly via videoconferencing between the two institutions and broadcast to remote offsite participants nationwide. The workshop included the first ever LittleFe Buildout event, during which six LittleFe units were built at OU and three were built at PUPR. We will discuss the particulars of the workshop — the first of its kind in Puerto Rico — as well as the challenges that the build teams faced in building the LittleFe units. We will also talk about what was learned from this experience, and how to leverage this new understanding, not only to make these events sustainable but also to increase the participation and engagement of faculty and students.

Biography

Dr. Luis M. Vicente is an Associate Professor and Coordinator of the MS EE program at the Polytechnic University of Puerto Rico (PUPR). He received his Ph.D. in Electrical & Computer Engineering from the University of Missouri-Columbia in May 2009. From February 1990 to February 2003, Dr. Vicente worked in industry with the Aerospace Division, SENER Group, Spain. He also worked with Voyetra Inc., New York, and with SIEMENS Corp., Madrid. From February 2003 to June 2009, he served as an Assistant Professor at PUPR. In 2009, Dr. Vicente was promoted to Associate Professor and coordinator of the Masters Program in Electrical Engineering at PUPR. In 2011, he was appointed Sponsored Research Office Coordinator. His research interests include beamforming, array processing, statistical signal processing, adaptive filters and High Performance Computing for signal processing. As a graduate thesis advisor, he has already graduated two students; currently, he is advising three graduate students in digital signal processing, high performance computing and parallel processing.

Justin M. Wozniak

Assistant Computer Scientist
Mathematics and Computer Science Division
Argonne National Laboratory

Topic: "Swift: Parallel Scripting for Clusters, Clouds and Supercomputers"

Slides: available after the Symposium

Abstract

An important tool in the development of scientific applications is "composition": the construction of a complete application from constituent software packages, tools, and libraries. The Swift language allows users to compose complex, highly concurrent scripts that integrate programs and data sources into a unified dataflow program. This talk will be a highly practical overview of the use of cluster and high-performance computing resources, and the way they may be targeted for use by Swift. Several use cases will be presented that demonstrate Swift's ability to rapidly develop distributed applications that make use of a wide variety of computing systems.
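
To make the dataflow idea concrete, here is a sketch in plain Python rather than Swift's own syntax (the function names are stand-ins, not Swift APIs): independent tasks are launched concurrently, and a downstream step runs as soon as the results it depends on are ready.

    # Plain-Python sketch of the dataflow/composition pattern (not Swift code).
    from concurrent.futures import ProcessPoolExecutor

    def simulate(param):            # stand-in for an external science program
        return param * param

    def analyze(results):           # stand-in for a post-processing step
        return sum(results)

    if __name__ == "__main__":
        with ProcessPoolExecutor() as pool:
            # "foreach"-style concurrency: all simulations launch at once.
            futures = [pool.submit(simulate, p) for p in range(10)]
            # analyze() consumes each result as it becomes available.
            print(analyze(f.result() for f in futures))

In a dataflow language the dependency ordering is implicit in the script itself, which is what lets the same program scale from a workstation pool to clusters, clouds, and supercomputers.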

Biography

Justin Wozniak received a Ph.D. in computer science and engineering from the University of Notre Dame in 2008. He holds an MMath from the University of Waterloo and a B.Sc. from the University of Illinois at Urbana-Champaign. Wozniak joined the Mathematics and Computer Science Division at Argonne National Laboratory in spring 2008 and the staff of the Computation Institute at the University of Chicago in winter 2009. He has designed and implemented novel distributed storage infrastructures for scientific applications, combining techniques from high-performance computing, grid computing, and Internet computing. To gain insight into next-generation computing systems, he has developed simulators to study the effects of policy changes in large-scale system software such as schedulers and storage management algorithms. Currently, he is developing multiple technologies to support the rapid development of scalable applications and to provide portable, efficient access to the largest computing installations.

Tom Zahniser

Director, HPC Systems Engineering
QLogic Corporation

Topic: "MPI Performance on AMD & Intel Platforms: Not All IB is Created Equal"

Slides: available after the Symposium

Talk Abstract

QLogic InfiniBand products provide the highest performing, most scalable systems deployed throughout the world, a result of the architectural approach QLogic implemented to address high performance computing (HPC) challenges. This presentation will explain this architectural approach and how it improves application performance and scalability. In addition, you will learn why Lawrence Livermore National Laboratory (LLNL) deployed 4,000 nodes of QLogic IB in 2010, why the Tri-Labs (LLNL, Sandia National Laboratories and Los Alamos National Laboratory) selected QLogic IB for their 20,000-node Tri-Laboratory Linux Capacity Cluster (TLCC2) deployments in 2011-2012, and why you will want to request QLogic IB for your next deployment.

Biography

Tom Zahniser is the Director of HPC Systems Engineering at QLogic Corporation, a leading supplier of high performance network infrastructure solutions which include Fibre Channel, Ethernet and InfiniBand based offerings. Tom focuses specifically on the HPC InfiniBand solutions and began his InfiniBand career in 2001 with InfiniCon Systems/SilverStorm Technologies as their first Systems Engineer. In 2006, SilverStorm was acquired by QLogic. Tom spent the first 7 years of his career as a systems programmer for Burroughs/Unisys, followed by 5 years at IBM Global Services as an I/T Specialist. He then moved to Indonesia for 3 years and worked as a consultant for Axiom.

OTHER BREAKOUT SPEAKERS TO BE ANNOUNCED

