University of Illinois at Urbana-Champaign, National Center for Supercomputing Applications, Urbana, IL
Harvard University, Cambridge, MA
Louisiana State University, Center for Computation & Technology, Baton Rouge, LA
Pittsburgh Supercomputing Center, Pittsburgh, PA
Princeton University, Princeton Institute for Computational Science and Engineering, Princeton, NJ
Rutgers University, Piscataway, NJ
University of California Los Angeles, Los Angeles, CA
University of Michigan, Ann Arbor, MI
University of Oklahoma, Norman, OK
University of South Carolina, Columbia, SC
University of Tennessee Knoxville, Knoxville, TN
University of Utah, Salt Lake City, UT
July 10-13, 2012
Studying many current GPU computing applications, we have learned that the limits of an application's scalability are often tied to some combination of memory bandwidth saturation, memory contention, imbalanced data distribution, and data structure/algorithm interactions. Successful GPU application developers often adjust their data structures and problem formulations specifically for massive threading, and execute their threads leveraging shared on-chip memory resources for greater impact. We looked for patterns among those transformations and present here the seven most common and crucial algorithm and data optimization techniques we discovered. Each can improve the performance of applicable kernels by 2-10X on current processors while improving future scalability.
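As a hedged illustration of leveraging shared on-chip memory, the sketch below shows one widely used transformation of this kind: tiling a dense matrix multiply so that each thread block stages operands in shared memory instead of re-reading global memory. The kernel name, the 16x16 tile, and the assumption that the matrix dimension n is a multiple of the tile size are illustrative choices, not drawn from the course materials.

    #define TILE 16

    // Tiled matrix multiply: C = A * B for square n x n matrices,
    // assuming n is a multiple of TILE (a simplification for clarity).
    __global__ void matmul_tiled(const float *A, const float *B,
                                 float *C, int n)
    {
        // Each block cooperatively stages one tile of A and one of B
        // in shared memory, cutting global-memory traffic by ~TILE x.
        __shared__ float As[TILE][TILE];
        __shared__ float Bs[TILE][TILE];

        int row = blockIdx.y * TILE + threadIdx.y;
        int col = blockIdx.x * TILE + threadIdx.x;
        float acc = 0.0f;

        for (int t = 0; t < n / TILE; ++t) {
            // Coalesced loads: adjacent threads read adjacent addresses.
            As[threadIdx.y][threadIdx.x] = A[row * n + t * TILE + threadIdx.x];
            Bs[threadIdx.y][threadIdx.x] = B[(t * TILE + threadIdx.y) * n + col];
            __syncthreads();  // tile fully loaded before any thread reads it

            for (int k = 0; k < TILE; ++k)
                acc += As[threadIdx.y][k] * Bs[k][threadIdx.x];
            __syncthreads();  // done with this tile before overwriting it
        }
        C[row * n + col] = acc;
    }

Each element of A and B is loaded from global memory once per tile rather than once per multiply-add, which is exactly the kind of data-structure/threading co-design the techniques above generalize.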
High-performance computing clusters are increasingly built with heterogeneous parallel computing nodes to achieve higher power efficiency and computational throughput. Petascale systems such as Blue Waters and Titan will come online this year with both multicore CPUs and many-core GPUs. These systems will provide unprecedented capabilities for conducting computational experiments of historic significance. Upcoming Exascale systems are expected to embrace even more heterogeneity to overcome power limitations. While the computing community is racing to build tools and libraries to ease the use of these heterogeneous parallel computing systems, effective and confident use of these systems will always require knowledge of their low-level programming interfaces. This course introduces researchers in computational science and engineering disciplines to the essence of these programming interfaces (CUDA, OpenMP, and MPI) and to how they should orchestrate the use of these interfaces to achieve their application goals. The course is unique in that it is application oriented and introduces only the underlying computer science and engineering knowledge necessary to solidify understanding. The one-week course will serve as a quick start for researchers who want to begin using heterogeneous parallel computing systems ranging from laptops to Exascale clusters. It also provides a strong foundation for students to take full-semester courses in parallel programming interfaces and techniques.
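To make the orchestration concrete, here is a minimal sketch (not course material) of how MPI and CUDA might be combined on a heterogeneous node: each MPI rank binds to a distinct GPU before launching device work. The kernel name scale, the problem size N, and the round-robin device assignment are illustrative assumptions; a real application would partition and exchange data across ranks as well. It would be built with an MPI compiler wrapper around nvcc.

    #include <mpi.h>
    #include <cuda_runtime.h>

    // Placeholder device work: scale a vector in place.
    __global__ void scale(float *x, float a, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) x[i] *= a;
    }

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank, ndev;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        cudaGetDeviceCount(&ndev);
        cudaSetDevice(rank % ndev);   // one GPU per rank, round-robin (assumption)

        const int N = 1 << 20;        // illustrative problem size
        float *d_x;
        cudaMalloc(&d_x, N * sizeof(float));
        cudaMemset(d_x, 0, N * sizeof(float));
        scale<<<(N + 255) / 256, 256>>>(d_x, 2.0f, N);
        cudaDeviceSynchronize();      // wait for device work before exiting

        cudaFree(d_x);
        MPI_Finalize();
        return 0;
    }

The essential orchestration point is the order of operations: establish the MPI rank first, use it to select a device, and only then allocate memory and launch kernels on that device.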
NOTE: Students are required to provide their own laptops.