Hartree Centre at SC20
At this year’s virtual Supercomputing 2020 conference, the Hartree Centre is organising and co-organising several workshops. Check them out here!
Since 2014, the Best Practices for HPC Training workshop series at SC has provided a global forum for addressing common challenges and solutions for enhancing HPC training and education, for sharing information through large and small group discussions, and for fostering collaboration opportunities. The seventh workshop (BPHTE20), an effort coordinated by the ACM SIGHPC Education Chapter, is a full-day workshop focused on extending collaborations among practitioners from traditional and emerging fields, exploring the challenges of developing and deploying HPC training and education, and identifying new challenges and opportunities presented by the latest HPC platforms. The workshop will provide opportunities for disseminating results, understanding recent challenges to the effectiveness of HPC education and training materials, and promoting collaboration among HPC educators, trainers and users. Papers, lightning talks and demos will be presented by members of the international community, and the workshop papers, extended lightning talk and demo abstracts will be published in a special issue of the Journal of Computational Science Education.
Evguenia Alexandrova, Hartree Centre
Scott Lathrop, Shodor Education Foundation and University of Illinois
Julia Mullen, Massachusetts Institute of Technology (MIT)
Nitin Sukhija, Slippery Rock University of Pennsylvania
Novel scalable scientific algorithms are needed to enable key science applications to exploit the computational power of large-scale systems. These extreme-scale algorithms need to hide network and memory latency, achieve very high computation/communication overlap with minimal communication, and have no synchronisation points. With the advent of Big Data and AI, the need for such scalable mathematical methods and algorithms, able to handle data- and compute-intensive applications at scale, becomes even more important. Scientific algorithms for multi-petaflop and exaflop systems also need to be fault tolerant and fault resilient, since the probability of faults increases with scale. Finally, with the advent of heterogeneous compute nodes that employ standard processors as well as GPGPUs, scientific algorithms need to match these architectures to extract the most performance. Key science applications require novel mathematics, mathematical models and system software that address the scalability and resilience challenges of current- and future-generation extreme-scale HPC systems.
Vassil Alexandrov, Hartree Centre
Jack Dongarra, University of Tennessee and Oak Ridge National Laboratory (ORNL)
Christian Engelmann, Oak Ridge National Laboratory (ORNL)
Al Geist, Oak Ridge National Laboratory (ORNL)
Python remains one of the fastest-growing programming languages, with large communities of users in academia and industry. Its high-level syntax lowers the barrier to entry and improves productivity, making it the “go-to” language for data science and machine learning, while also becoming increasingly popular in high-performance and distributed computing.
PyHPC returns to Supercomputing to bring together researchers, developers and Python practitioners to share their experiences of using Python across a broad spectrum of disciplines and applications. The goal of the workshop is to provide a platform for the community to present novel Python applications from a wide range of disciplines, to enable topical discussions regarding the use of Python, and to share experiences of using Python in scientific computing and education.
William Scullin, University of Rochester, Laboratory for Laser Energetics
Neelofer Banglawala, Edinburgh Parallel Computing Centre (EPCC)
Rosa M. Badia, Barcelona Supercomputing Center
James Clark, Hartree Centre