
The Impact of NITRD: Two Decades of Game-Changing Breakthroughs in Network and Information Technology -- Expanding Possibilities Ahead

February 16, 2012 - Washington, DC

 


High Performance Computing in Science and Engineering: The Tree and the Fruit



 

Summary

In the last 20 years:

  • Large-scale simulation (“theoretical experiments”) has emerged as a third mode of scientific discovery, engineering design, and policy support, complementing theory and experiment.

  • We have witnessed phenomenal improvements in the capability and cost-effectiveness of scientific simulation: thousand-fold performance improvements and thousand-fold cost reductions per delivered flop per decade, sustained over more than two decades, as measured by the Gordon Bell Prizes.

  • Computer engineering (as in Moore’s Law) and algorithmic invention (as in complexity reduction to accomplish the same task) have had complementary roles in extending the predictive power of simulation.

  • A “tall essential stack” of developments, from ancient to recent, has contributed to the ubiquity of large-scale simulation in computer science and engineering: scientific modeling, numerical analysis, computer architecture, and software engineering, allowing people who are expert in something else (e.g., cellular biology, chemical process engineering) to compute like experts.

  • Investment in community standards and libraries that encapsulate expertise for scientific productivity (e.g., MPI, PETSc, VisIt) has been important, and the agencies participating in the Federal Networking and Information Technology Research and Development (NITRD) Program have played essential roles in supporting these achievements and in reinforcing the career reward structures that sustain their development.

  • There have been many examples of performance thresholds beyond which simulations yield qualitatively new predictive power and understanding, with economic benefits (e.g., Boeing wing design, Alzheimer’s disease mechanisms) and security benefits (e.g., stockpile stewardship, earthquake damage prevention).
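The role of algorithmic invention noted above (“complexity reduction to accomplish the same task”) can be illustrated with a minimal sketch, not from the talk: the classic Thomas algorithm solves a tridiagonal linear system, such as arises from a 1D Poisson model problem, in O(n) operations, whereas dense Gaussian elimination applied to the same system costs O(n³). The function name and model problem here are illustrative choices, not anything specific to NITRD programs.

```python
# Illustrative sketch: exploiting structure to reduce complexity.
# The Thomas algorithm solves a tridiagonal system in O(n), versus
# O(n^3) for dense Gaussian elimination on the same system.

def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a, diagonal b,
    super-diagonal c, and right-hand side d, in O(n) operations."""
    n = len(d)
    cp = [0.0] * n  # modified super-diagonal
    dp = [0.0] * n  # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):          # forward elimination sweep
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):  # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# 1D Poisson model problem: tridiag(-1, 2, -1) x = 1 on n = 5 points.
n = 5
a = [0.0] + [-1.0] * (n - 1)   # sub-diagonal (a[0] unused)
b = [2.0] * n                  # main diagonal
c = [-1.0] * (n - 1) + [0.0]   # super-diagonal (c[-1] unused)
d = [1.0] * n
x = thomas_solve(a, b, c, d)   # known exact solution: i*(n+1-i)/2
```

The same mathematical answer is produced either way; only the operation count changes, which is precisely the kind of gain that has compounded with hardware improvements over the decades.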

In the years ahead:

  • It is critically important that we continue to push simulation to extreme scales.

  • Open-source development, with its boundary-less free contributions, will continue to be important, as it pressures leaders to exploit their thin advantage at any given moment to stay ahead.

  • We must leverage the lessons of the last 20 years. For example, ideas that at first seemed “pure” and “curiosity-driven” but subsequently reappeared as critical practical enabling technologies (e.g., space-filling curves as a means of laying out memory, the Schwarz alternating procedure as a means of producing concurrency) motivate a portfolio that includes long-term, blue-sky investment alongside short-term, mission-driven investment.

  • There will increasingly be synergies between large-scale data sets and large-scale simulation: inversion, data assimilation, and uncertainty quantification, whose fusion promises vast improvements in accuracy, understanding, and speed.
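One of the “blue-sky” ideas cited above, the space-filling curve as a means of laying out memory, can be sketched in a few lines; this is an illustrative example (the function name is a hypothetical choice), not code from the talk. A Morton (Z-order) curve interleaves the bits of a point’s coordinates so that cells that are close in 2D space tend to land at nearby memory addresses, improving cache locality.

```python
# Illustrative sketch: Morton (Z-order) space-filling curve for 2D layout.

def morton_index(x, y):
    """Interleave the bits of (x, y) to produce the Z-order index."""
    z = 0
    for bit in range(16):  # enough for 16-bit coordinates
        z |= ((x >> bit) & 1) << (2 * bit)      # x bits -> even positions
        z |= ((y >> bit) & 1) << (2 * bit + 1)  # y bits -> odd positions
    return z

# Traversing a 4x4 grid in Morton order visits each 2x2 sub-block
# contiguously, so spatial neighbors cluster in memory.
order = sorted(((x, y) for x in range(4) for y in range(4)),
               key=lambda p: morton_index(*p))
```

The first four cells visited are (0,0), (1,0), (0,1), (1,1), i.e., one complete 2x2 block, which is the locality property that made such curves practical for memory layout and mesh partitioning.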

Ultimately, the U.S. innovation ecosystem — spanning government, industry, and academia — makes advances in supercomputing possible, and provides the U.S. with significant advantages over other nations.

David Keyes

David Keyes, formerly the Fu Foundation Chair of Applied Mathematics at Columbia University, is the inaugural dean of the Mathematical and Computer Sciences and Engineering Division at the King Abdullah University of Science and Technology. With backgrounds in engineering, applied mathematics, and computer science, Dr. Keyes works at the algorithmic interface between parallel computing and the numerical analysis of partial differential equations, across a variety of applications. Newton-Krylov-Schwarz parallel implicit methods, introduced in a 1993 paper, are now widely used throughout computational physics and engineering and scale to the edge of today's distributed memory multiprocessors. Dr. Keyes, who earned a B.S.E. in Mechanical Engineering at Princeton and a Ph.D. in Applied Mathematics at Harvard, is a former NSF Graduate Research Fellow and Presidential Young Investigator grantee, a Fellow of the Society for Industrial and Applied Mathematics (SIAM), and has been awarded ACM's Gordon Bell Prize and IEEE's Sidney Fernbach Prize. He has edited several US federal agency reports on high performance computing and has served on the advisory committees of the Office of CyberInfrastructure and the Mathematical and Physical Sciences Directorate of NSF. In 2011, SIAM awarded Dr. Keyes its Prize for Distinguished Service to the Profession for his leadership and advocacy of high performance computing in science and engineering.


The materials on this webpage, including speakers' slides and videos, are copyright the author(s).
Permission is granted for non-commercial use with credit to the author(s) and the Computing Community Consortium (CCC).