Technical Sponsors


IEEE
IEEE Computer Society (IEEE CS)
IEEE Washington DC Section
IEEE Technical Committee on Scalable Computing (TCSC)
HPCL


 

Invited Speakers

 

Invited Speaker 1: Brad Chamberlain, Cray Inc.

Title: The Future of PGAS Programming from a Chapel Perspective

Date: September 17th

Abstract:

PGAS programming models have been around for a couple of decades now, yet without ever having achieved the tipping point of broad adoption. As we approach exascale computing, many users and vendors have been looking at PGAS programming models with renewed interest --- particularly as the alternatives become less applicable and attractive in the face of deeper memory hierarchies and heterogeneous computing.

In this talk, I'll outline my vision for what PGAS programming models must do in order to move from their current niche into broad use within HPC or even mainstream computing.  In doing so, I'll relate these requirements to the efforts being undertaken by the Chapel team and Cray Inc. as we approach exascale computing.  Finally, I'll explain why I think the question for the PGAS community is not "Whether" they will ever become broadly adopted --- I believe this to be an inevitability.  Rather, I think the key questions are simply "When?" and "What can we do to accelerate that process?"

Bio:

Bradford Chamberlain is a Principal Engineer at Cray Inc. where he works on parallel programming models, focusing primarily on the design and implementation of the Chapel language in his role as technical lead for that project.  Brad received his Ph.D. in Computer Science & Engineering from the University of Washington in 2001 where his work focused on the design and implementation of the ZPL parallel array language.  His thesis explored the concept of 'regions' in ZPL --- a first-class index set supporting global-view data parallelism and a syntactic performance model. Brad remains associated with the University of Washington as an affiliate faculty member.  He received his Bachelor's degree in Computer Science with honors from Stanford University in 1992.


 

Invited Speaker 2: Mike Chu, AMD 

Title: Supporting PGAS and future programming models with the Heterogeneous System Architecture (HSA)

Date: September 18th

Abstract:

The Heterogeneous System Architecture (HSA) is a multi-vendor specification for programming systems with CPUs, GPUs, and other accelerators, including coherent memory sharing and user-level task scheduling. HSA forms the underlying platform for AMD’s heterogeneous processors. One of the main goals of HSA is to enable programming models that better utilize the heterogeneous components within a system to improve portability, usability, and performance. This presentation will first provide an overview of HSA and the features it makes available to programming models and runtime environments. This will be followed by a discussion of possible future extensions to HSA, currently being studied by AMD Research, to help further enable PGAS and other future programming models.

 

Bio:

Michael Chu is a Senior Member of Technical Staff in the AMD Research lab of Advanced Micro Devices. He has been at AMD since 2011, focusing on programming models and runtime systems, as well as the Heterogeneous System Architecture (HSA), which underlies all of AMD’s future heterogeneous processors. Prior to joining AMD, Michael worked at Microsoft on the Parallel Computing Platform team, developing the Concurrency Runtime. Michael received his Ph.D. in Computer Engineering from the University of Michigan in 2007, with research focused on compiler techniques for code and data partitioning.


 

Invited Speaker 3: Richard Graham, Mellanox Technologies Inc.

Title: InfiniBand – A View Towards Extreme-Scale Networking Technology

Date: September 18th

Abstract: 

Partitioned Global Address Space (PGAS) programming models, such as OpenSHMEM and MPI-3, have been gaining traction in recent years, with the promise of a low-overhead, scalable communication paradigm.  This talk will focus on the technological strides Mellanox Technologies has been making, in the context of support for PGAS programming models, in the design and implementation of scalable and robust InfiniBand technologies on the road to extreme-scale computing.  It will present technologies such as the scalable Dynamically Connected (DC) transport, support for non-contiguous remote memory operations, network-hardware-managed interdependent data flows via the Cross-Channel synchronization capabilities, and support for adaptive routing.  Finally, it will describe how these capabilities are used in the implementation of high-performance communication libraries such as UCX and FCA, which provide point-to-point and collective communication, respectively.

Bio:

Dr. Richard Graham is a Staff Architect at Mellanox Technologies Inc. His primary focus is on the High Performance Computing market, working on OFED and communication middleware architecture issues as they relate to extreme-scale computing.  Prior to moving to Mellanox, Rich spent thirteen years at Los Alamos National Laboratory and Oak Ridge National Laboratory in computer science technical and administrative roles, with a technical focus on communication libraries and application analysis tools.  He is a co-founder of the Open MPI collaboration and was chairman of the MPI 3.0 standardization effort.


 

Invited Speaker 4: Jeff Hammond, Intel

Title: Lessons learned from using MPI-3 as a PGAS runtime system

Date: September 17th

Abstract: 

MPI-3 RMA introduced features specifically designed to enable PGAS programming models, such as OpenSHMEM, Global Arrays and UPC.  Implementing two of these models (OpenSHMEM and Global Arrays) using MPI-3 RMA has revealed a number of interesting properties in both the MPI standard and well-known implementations.  In this talk, I will describe the lessons learned in the course of developing these projects in hopes of enabling developers and end-users to make more effective use of MPI-3 as a PGAS runtime system.
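
For readers less familiar with this mapping, the short C sketch below illustrates the basic idea: an MPI-3 window allocated with MPI_Win_allocate stands in for an OpenSHMEM-style symmetric heap, and a blocking put is expressed as MPI_Put followed by MPI_Win_flush inside a passive-target epoch. This is only a minimal illustration of the technique, not code from the implementations discussed in the talk, and it omits the datatype handling, ordering, and atomicity concerns a real runtime must address.

    /* Minimal sketch: an OpenSHMEM-style blocking put layered on MPI-3 RMA.
     * Illustrative only; not taken from any of the implementations above. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* "Symmetric heap": each rank exposes one long through a window. */
        long *heap;
        MPI_Win win;
        MPI_Win_allocate(sizeof(long), sizeof(long), MPI_INFO_NULL,
                         MPI_COMM_WORLD, &heap, &win);
        heap[0] = -1;
        MPI_Barrier(MPI_COMM_WORLD);        /* initialization done everywhere */

        /* Passive-target epoch kept open, as a PGAS runtime typically would. */
        MPI_Win_lock_all(0, win);

        if (rank == 0 && size > 1) {
            /* shmem_long_p-style put: write 42 into rank 1's "heap". */
            long value = 42;
            MPI_Put(&value, 1, MPI_LONG, 1 /* target rank */, 0 /* disp */,
                    1, MPI_LONG, win);
            MPI_Win_flush(1, win);          /* wait for remote completion */
        }

        MPI_Barrier(MPI_COMM_WORLD);        /* order the put before the read */
        MPI_Win_sync(win);                  /* make remote updates locally visible */

        if (rank == 1)
            printf("rank 1 sees heap[0] = %ld\n", heap[0]);

        MPI_Win_unlock_all(win);
        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }

In a full PGAS runtime these calls would sit behind shmem_put-like wrappers, with the window allocation and passive-target epoch managed once at startup rather than per operation.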

Bio:

Jeff Hammond is a Research Scientist in the Parallel Computing Lab at Intel Labs. His research interests include one-sided and global-view programming models, load balancing for irregular algorithms, and shared- and distributed-memory tensor contractions. He has a long-standing interest in enabling the simulation of physical phenomena, primarily the behavior of molecules and materials at atomistic resolution, with massively parallel computing.


 

Invited Speaker 5: Olivier Tardieu, IBM

Title: Resilient and Elastic APGAS

Date: September 17th

Abstract:

The APGAS programming model (Asynchronous Partitioned Global Address Space) is a simple but powerful model of concurrency and distribution, known primarily as the foundation of the X10 programming language but also developed for Java and Scala. APGAS combines PGAS with asynchrony. The data in an application is logically partitioned into places. The computation is organized into lightweight asynchronous tasks. APGAS can express both regular and irregular parallelism, within and across shared-memory nodes in a distributed system. Recently APGAS has been enriched to support failure-aware and elastic programming. Resilient applications can detect the loss of a place and implement recovery strategies. Elastic applications can dynamically add places to a running instance. In this talk, I will give an introduction to resilient and elastic APGAS, discussing design principles, implementation efforts, and applications.

Bio:

Dr. Tardieu is a Research Staff Member at IBM's T.J. Watson Research Center, NY, USA. He is one of the designers of the X10 programming language and is currently leading the design and implementation of the APGAS runtime. His research interests include parallel programming models and languages, HPC systems, software safety, and fault tolerance. He received a Ph.D. in Computer Science from Ecole des Mines de Paris, France (2004) and is a graduate of Ecole Polytechnique, France (1998).