Performance Modeling for Future Computing Technologies

Speaker: Torsten Hoefler (ETH Zurich)

Time: June 11, 10:00-11:00

Location: East Main Building, Room 10-103

Abstract:

We are close to a phase change in the computing industry. The demand for computing power is steadily increasing with the advent of (deep) learning and other high-performance computing techniques. However, the end of Dennard scaling and Moore's law forces us to design for parallelism and specialization to improve compute performance. Because the resulting complexity of programming is enormous, we propose the use of mathematical models to understand the performance requirements of practical algorithms. We first show pitfalls of seemingly simple performance measurements, followed by a methodology for designing close-to-optimal programs. We also showcase a mathematical system design methodology for high-performance networks. All these examples testify to the value of modeling in practical high-performance computing. We expect that a broader use of these techniques, together with the development of a solid theory of parallel performance, will lead to deep insights on many fronts.

Bio:

Torsten Hoefler directs the Scalable Parallel Computing Laboratory (SPCL) at D-INFK, ETH Zurich. He received his PhD in 2007 from Indiana University and took his first professorial appointment in 2011 at the University of Illinois at Urbana-Champaign. Torsten served as the lead for performance modeling and analysis in the US NSF Blue Waters project at NCSA/UIUC. Since 2013, he has been a professor of computer science at ETH Zurich, and he has held visiting positions at Argonne National Laboratory, Sandia National Laboratories, and Microsoft Research Redmond (Station Q). Dr. Hoefler's research aims at understanding the performance of parallel computing systems, ranging from parallel computer architecture through parallel programming to parallel algorithms. He is also active in the application areas of weather and climate simulation as well as machine learning, with a focus on distributed deep learning. In those areas, he has coordinated dozens of funded projects and holds an ERC Starting Grant on Data-Centric Parallel Programming.