# Quantum and Hybrid Quantum-Classical Computing Approaches Workshop 2023 Agenda

Date: 20 September 2023

Introduction and welcome remarks by chairs

30 minutes talk + 5 minutes Q&A

**Abstract:** The integration of quantum processing units (QPUs) into current and future high-performance computing (HPC) systems affords new opportunities for accelerating scientific application workflows. Here we highlight some of the scientific and technical opportunities afforded by today's cloud-based quantum computing systems as well as future node-integrated QPUs. We examine application workflows based on machine learning, quantum simulation, and device validation, for which the performance of the HPC and QPU system can be assessed. These applications give rise to distinctive workflows that highlight current challenges for QPU integration and yield expectations for a software stack that enables fine-grained control of quantum-enabled applications as well as diagnostics of the runtime environment. We conclude with future system designs that may overcome these challenges to enable quantum-accelerated HPC applications that surpass their conventional counterparts.

2 speakers x 15 minutes talk + 5 minutes Q&A each

*Alexander Wennersteen* (PASQAL) – HPC-QC integration for neutral-atom-based QPUs: an application perspective

**Abstract:** Programmable arrays of Rydberg neutral atoms are one of the most versatile and promising architectures for quantum computing and quantum simulation. The architecture supports both digital and analog execution of quantum programs. Moreover, non-solid-state devices such as those based on neutral atoms are particularly hard to manufacture and transport. For the foreseeable future, this means that access to both QPUs and emulators through a unified cloud-based access model, rather than on-premise solutions, is very attractive and a prerequisite for developing a full quantum solution that can be integrated into an industrial workflow. In this talk we give an overview of neutral-atom technology, present a state-of-the-art cloud-based quantum platform in which a standard GPU-based, Slurm-managed HPC cluster is an integral part, and discuss how particular quantum hardware properties shape the cloud platform and QC-HPC integration initiatives. Lastly, we consider how neutral-atom-specific algorithms and this platform interact when creating complete industrial quantum solutions and seeking practical quantum advantage.

*Bob Fletcher and Tim Rogers* (IonQ) – Quantum Development: Benchmarks, Simulation, and Hybrid Solutions

**Abstract:** Following the quantum hype cycle, it's easy to be confused by all the industry and technical claims and counterclaims. We will unpack the hype with independent benchmark results and explain what's important to deliver real quantum application insight. Quantum simulators are great toolsets for the near-term development of quantum algorithms: they validate that today's algorithms produce accurate results. However, these simulations are only practical for small subsets of problems and use cases, since simulators can handle only a limited number of qubits in a reasonable timeframe. Whether financially impractical in 2024 or technically impossible in 2025, the problems that quantum computers will be able to solve will soon be too large to simulate. IonQ's hybrid approaches let our quantum hardware solve the piece of the puzzle at which quantum computing excels. We focus on the computationally expensive slice of the HPC workflow to drive a quantum speedup that delivers business value, then engage conventional HPC systems to perform the other workflow steps, resulting in a better holistic solution across the spectrum of hardware. Boston Consulting Group claims that ninety percent of the quantum market value capture will happen with the early adopters. Let us show you the path to quantum application leadership.

2 speakers x 15 minutes talk + 5 minutes Q&A each

*Matthias Beuerle* (IQM) – Integrating Quantum Computers into HPC: Work in Progress and Lessons Learned

**Abstract:** Quantum computers hold the promise of significantly outperforming classical computers for certain HPC workloads. Furthermore, there are a variety of ideas for combining quantum computers with classical HPC to enhance the utility of both resources. These two points indicate that quantum computers will become widespread in the HPC landscape once the technology matures to the point of providing quantum advantage. It is therefore critical to ensure their proper integration. In this talk, we discuss the efforts taken at IQM to achieve HPC integration of quantum computers. This entails everything from making sure quantum computers have an appropriate form factor, to adapting quantum computing software to HPC needs, to the continuous operation of quantum computers. Finally, we also highlight areas where quantum workloads benefit from the presence of classical HPC resources and vice versa.

*Edric Matwiejew, Christopher Harris, Marco De La Pierre, Ugo Varetto and Jingbo Wang* (Pawsey and University of Western Australia) – HIP-Accelerated Simulation for Quantum Variational Algorithm Design on Large Supercomputers

**Abstract:** We present QuOp_Wavefront, a GPU-accelerated framework for simulating Quantum Variational Algorithms (QVAs). QVAs are a class of hybrid quantum-classical algorithms in which a classically parameterised quantum kernel is tuned via classical optimisation techniques. These algorithms are robust to noise and have a flexible structure that targets near-term Noisy Intermediate-Scale Quantum (NISQ) processors. Accordingly, QVAs have been increasingly spotlighted as a practical application for near-term quantum processors, promising to solve complex computational problems in logistics, chemistry, finance and more. Our contribution extends QuOp_MPI (J. Comput. Sci. 62, 101711, 2022), a scalable framework for designing and simulating QVAs on massively parallel systems. That framework provided a user-friendly object-oriented Python interface, allowing researchers with minimal experience in parallel computing to explore novel QVAs at large scales, with a backend optimised for the structure of unitary operations common to QVAs. It enabled the simulation of their fundamental unitary dynamics with sophisticated state and operator generation and flexible optimisation schemes. QuOp_Wavefront integrates cutting-edge GPU-accelerated methods into the existing QuOp_MPI codebase, allowing it to leverage both shared- and distributed-memory parallelism and to scale simulations from personal computers to supercomputers. The open-source HIP programming model is used so that the new framework can run on both AMD and NVIDIA GPUs, and potentially on any accelerator that HIP supports in the future. The performance of this new framework is compared against other established libraries for QVA simulation, and its application at scale is demonstrated on Setonix, a 30-petaflop HPE Cray EX system hosted at the Pawsey Supercomputing Research Centre.
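The layered unitary structure common to QVAs can be sketched in a few lines of NumPy. This is an illustrative statevector simulation of one QAOA-style layer, not QuOp_Wavefront's actual API; the function name and encoding are assumptions.

```python
import numpy as np

def qva_layer(state, qualities, gamma, beta):
    """Apply one phase-separation + mixing layer of a QAOA-style QVA.
    `qualities` holds the diagonal of the problem Hamiltonian."""
    n = int(np.log2(state.size))
    # Phase separation: diagonal unitary exp(-i * gamma * Q)
    state = np.exp(-1j * gamma * qualities) * state
    # Transverse-field mixer exp(-i * beta * X) applied to each qubit
    u = np.array([[np.cos(beta), -1j * np.sin(beta)],
                  [-1j * np.sin(beta), np.cos(beta)]])
    for q in range(n):
        state = state.reshape(2 ** q, 2, 2 ** (n - 1 - q))
        state = np.einsum('ab,ibj->iaj', u, state).reshape(-1)
    return state

n = 4
qualities = np.random.default_rng(0).integers(0, 8, 2 ** n).astype(float)
state = np.full(2 ** n, 2 ** (-n / 2), dtype=complex)   # equal superposition
state = qva_layer(state, qualities, gamma=0.4, beta=0.3)
expectation = float(np.real(np.vdot(state, qualities * state)))
```

A classical optimiser would tune `gamma` and `beta` to minimise `expectation`; frameworks such as QuOp_MPI distribute the statevector across MPI ranks instead of holding it in one array.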

Coffee break

30 minutes talk + 5 minutes Q&A

**Abstract:** Over the last few decades, quantum information processing has emerged as a gateway towards new, powerful approaches to scientific computing. Quantum technologies are experiencing rapid development and could lead to effective solutions in different domains, including physics, chemistry, and life sciences, as well as optimisation, artificial intelligence, and finance. To achieve this goal, noise-resilient quantum algorithms together with error-mitigation schemes have been proposed and implemented in hybrid workflows, with the aim of improving the synergies between quantum and classical computational platforms. In this talk, I will review the state of the art and recent progress in the field, both in terms of hardware and software, and present a broad spectrum of potential applications, with a focus on natural sciences and machine learning.

2 speakers x 15 minutes talk + 5 minutes Q&A each

*Emre Sahin* (The Hartree Centre) – Integrating Quantum Machine Learning with Computational Histopathology for Cancer Slides Classification

**Abstract:** In this project, we propose the use of Quantum Computing in Computational Pathology (QC-CP) to improve the accuracy of disease diagnosis and treatment selection. By utilizing Quantum Machine Learning (QML) tools, we aim to enhance traditional ML approaches in the field of histopathology. Specifically, using quantum kernel methods in Support Vector Machines (SVMs) for image classification, we aim to exploit the high dimensionality of the quantum Hilbert space to linearly separate data into the right diagnostic classes. The method involves feeding low-resolution cancer slides to a deep neural network to extract embedded features, which are then reduced and encoded into a quantum circuit to produce a quantum kernel. The results show that this method, with reduced image definition and fewer features, can perform just as well as a domain-specific histopathology Graph Neural Network (GNN) fed with high-resolution slides, and in some instances classifies slides more accurately. This is an important first step towards the integration of QC with CP, suggesting that QML may provide matching or improved classification results using low-resolution slides, potentially reducing the training time of the model.
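The kernel construction described above can be illustrated with a toy fidelity kernel: encode each reduced feature vector into a quantum state and take squared state overlaps as the Gram matrix for an SVM. The angle encoding and function names below are illustrative assumptions, not the project's actual circuit.

```python
import numpy as np

def feature_state(x):
    """Angle-encode a feature vector into a product state (illustrative encoding)."""
    state = np.array([1.0 + 0j])
    for xi in x:
        qubit = np.array([np.cos(xi / 2), np.sin(xi / 2)], dtype=complex)
        state = np.kron(state, qubit)
    return state

def quantum_kernel(X):
    """Gram matrix K[i, j] = |<psi(x_i)|psi(x_j)>|^2, usable as a precomputed SVM kernel."""
    states = [feature_state(x) for x in X]
    return np.abs(np.array([[np.vdot(a, b) for b in states] for a in states])) ** 2

rng = np.random.default_rng(1)
X = rng.uniform(0, np.pi, size=(5, 4))   # 5 samples, 4 reduced features each
K = quantum_kernel(X)
```

The resulting matrix `K` is symmetric with unit diagonal, as required of a fidelity kernel; on hardware each entry would be estimated from measurement statistics rather than computed exactly.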

*Hugo Wallner, Alex Martin, Omar Bacarreza and William Clements* (ORCA Computing) – Building large-scale hybrid quantum generative models

**Abstract:** We present a hybrid quantum/classical generative modelling algorithm that scales to large quantum processors and to multiple GPUs. This algorithm leverages currently available photonic quantum processors, which are combined with classical neural networks in a generative adversarial network (GAN) architecture in which the quantum processor provides the inputs to a classical generator. We show that this algorithm can provide higher-quality results than a purely classical generative model on a toy dataset. To demonstrate the scalability of this approach, we train this algorithm on the CIFAR-10 image dataset using a large-scale photonic quantum processor involving over 50 photons and a classical computation node with eight NVIDIA A100 GPUs.

Lunch break

30 minutes talk + 5 minutes Q&A

**Abstract:** Several benchmarks have been proposed to holistically measure quantum computing performance. While some have focused on the end user's perspective (e.g., application-oriented benchmarks), the real industrial value, taking into account the physical footprint of the quantum processor, is not discussed. Different use cases come with different requirements for size, weight, power consumption, or data privacy, while demanding that certain thresholds of fidelity, speed, problem size, or precision be surpassed. This paper aims to incorporate these characteristics into a concept coined quantum utility, which demonstrates the effectiveness and practicality of quantum computers for various applications where quantum advantage (defined as being faster, more accurate, or demanding less energy) is achieved over a classical machine of similar size, weight, and cost. To successively pursue quantum utility, a level-based classification scheme, constituted as application readiness levels (ARLs), as well as extended classification labels are introduced. These are demonstratively applied to different quantum applications from the fields of quantum chemistry, quantum simulation, quantum machine learning, and data analysis, followed by a brief discussion.

3 speakers x 15 minutes talk + 5 minutes Q&A each

*Phillip Kerger, David Bernal Neira, Zoe Gonzalez Izquierdo and Eleanor Rieffel* (Johns Hopkins University Dept. of Applied Math / NASA QuAIL / USRA RIACS) – Asymptotically Faster Quantum Distributed Algorithms for Approximate Steiner Trees and Directed Minimum Spanning Trees

**Abstract:** The CONGEST and CONGEST-CLIQUE models have been carefully studied to represent situations where the communication bandwidth between processors in a network is severely limited. Messages of only O(log(n)) bits of information each may be sent between processors in each round. The quantum versions of these models allow the processors instead to communicate and compute with quantum bits under the same bandwidth limitations. This leads to the following natural research question: What problems can be solved more efficiently in these quantum models than in the classical ones? Building on existing work, we contribute to this question in two ways. Firstly, we present two algorithms in the Quantum CONGEST-CLIQUE model of distributed computation that succeed with high probability; one for producing an approximately optimal Steiner Tree, and one for producing an exact directed minimum spanning tree, each of which uses $\tilde{O}(n^{1/4})$ rounds of communication and $\tilde{O}(n^{9/4})$ messages, where $n$ is the number of nodes in the network. The algorithms thus achieve a lower asymptotic round and message complexity than any known algorithms in the classical CONGEST-CLIQUE model. At a high level, we achieve these results by combining classical algorithmic frameworks with quantum subroutines. An existing framework for using a distributed version of Grover’s search algorithm to accelerate triangle finding lies at the core of the asymptotic speed-up. Secondly, we carefully characterize the constants and logarithmic factors involved in our algorithms as well as related algorithms, otherwise commonly obscured by $\tilde{O}$ notation. The analysis shows that some improvements are needed to render both our and existing related quantum and classical algorithms practical, as their asymptotic speed-ups only help for very large values of $n$.
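At the core of the asymptotic speed-up is Grover's search. A minimal, non-distributed statevector sketch of the underlying amplitude amplification, assuming a single marked item, shows why roughly (π/4)√N oracle calls suffice:

```python
import numpy as np

def grover_probability(n_qubits, marked, iterations):
    """Statevector simulation of Grover's search; returns the
    probability of measuring the marked item."""
    N = 2 ** n_qubits
    state = np.full(N, 1 / np.sqrt(N))        # uniform superposition
    for _ in range(iterations):
        state[marked] *= -1                   # oracle: phase-flip the marked item
        state = 2 * state.mean() - state      # diffusion: inversion about the mean
    return float(state[marked] ** 2)

n = 6
optimal = int(round(np.pi / 4 * np.sqrt(2 ** n)))   # ~ (pi/4) * sqrt(N) iterations
p = grover_probability(n, marked=5, iterations=optimal)
```

After the optimal number of iterations the marked item is found with probability close to 1, versus 1/N for a single classical query; the distributed algorithms in the talk embed this subroutine into triangle finding under CONGEST-CLIQUE bandwidth limits.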

*Jiří Tomčala* (IT4Innovations, VSB – Technical University of Ostrava) – Three Ways of Quantum Integer Factorization and Their Implementation

**Abstract:** This talk describes the implementation of two well-known quantum integer factorization methods and, in addition, proposes a new, third way, along with some examples of its use. For all three methods, the measured results of their circuits will be shown. The first method is Shor's algorithm, where a quantum circuit is used only to find the period of a specially assembled modular exponentiation function. The rest of the algorithm is preprocessing and post-processing that can be done on a classical computer, so one could say that this is a hybrid algorithm. During this talk, new results of the successful factorization of the number 119, performed by this algorithm on a real 7-qubit quantum computer, will be presented. It is worth mentioning that the highest number previously factorized by this algorithm on a real quantum computer was 35. The second method is variational quantum factorization (VQF). In this method, preprocessing builds a system of equations based on the classical binary multiplication of two numbers whose product is the number to be factorized. This system of equations is then used to form a Hamiltonian in such a way that its ground state corresponds to the solution of the equation system. Only then does hybrid quantum optimization come into play, iteratively finding this ground state. The advantage over Shor's algorithm is that this method does not require a fault-tolerant quantum computer; on the other hand, it approaches the solution iteratively using a classical computer, which reduces the potential quantum advantage. The new, third method employs Grover's search algorithm in an unusual way to create a quantum factorization circuit. A key part of this approach is the quantum implementation of a multiplying oracle. The main advantage of this method over the previous ones is that it can be used to assemble a relatively simple universal factorization circuit. Moreover, almost no preprocessing and post-processing is required. This method is still under development, but the first results will be shown during this talk.
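The hybrid split in Shor's algorithm can be made concrete. Everything below is the classical pre/post-processing; the brute-force `order` function is a stand-in for the quantum period-finding step, and 119 = 7 × 17 is the number factorized in the talk.

```python
from math import gcd

def order(a, N):
    """Multiplicative order of a mod N -- the period that the quantum
    circuit finds via phase estimation (brute-forced here classically)."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_factor(N, a):
    """Classical pre/post-processing wrapped around period finding."""
    g = gcd(a, N)
    if g > 1:                       # lucky guess: a already shares a factor with N
        return g, N // g
    r = order(a, N)
    if r % 2 == 1:
        raise ValueError("odd period; pick another a")
    y = pow(a, r // 2, N)
    if y == N - 1:
        raise ValueError("trivial square root; pick another a")
    return gcd(y - 1, N), gcd(y + 1, N)

p, q = shor_factor(119, 2)
```

For N = 119 and base a = 2 the period is r = 24, so the factors fall out of gcd(2^12 ± 1, 119); on hardware only the `order` call would run on the QPU.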

*Sean Greenaway, Sukin Sim and William Pol* (PsiQuantum) – Practical strategies for implementing matrix functions via quantum signal processing

**Abstract:** At a high level, quantum information processing can be written as abstract transformations of quantum states. While quantum algorithms are usually written in highly complex, theoretical, procedural terms, recent theoretical advances have shown that such transformations can be represented by simple matrix functions. In particular, quantum signal processing (QSP) is a modern protocol for efficiently implementing a wide class of useful matrix polynomials on a quantum computer. However, while QSP is well established theoretically in the literature, many practical challenges and open questions remain around its use in any given quantum application. In this talk, I will start by providing a pedagogical introduction to quantum signal processing and compare it to previous approaches in the literature, before providing explicit details on using QSP to implement useful target functions, with a particular emphasis on the practical considerations required for such an implementation. Specifically, I will highlight the challenges associated with maximizing the fidelity and success probability of realizing a given polynomial, and provide strategies for maximizing these values.
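A minimal worked example of the QSP principle: with all phase angles set to zero, d applications of the single-qubit signal operator W(a) produce the Chebyshev polynomial T_d(a) = cos(d·arccos a) in the top-left matrix entry. This is an illustrative special case, not a general QSP phase-finding routine.

```python
import numpy as np

def signal_operator(a):
    """Single-qubit signal unitary W(a) = exp(i * arccos(a) * X)."""
    b = np.sqrt(1.0 - a ** 2)
    return np.array([[a, 1j * b],
                     [1j * b, a]])

def qsp_response(a, d):
    """With all QSP phases zero, d applications of W(a) give T_d(a)
    as the real part of the top-left entry of the product."""
    U = np.linalg.matrix_power(signal_operator(a), d)
    return U[0, 0].real

a, d = 0.3, 5
val = qsp_response(a, d)
target = np.cos(d * np.arccos(a))   # Chebyshev polynomial T_5(0.3)
```

Choosing nontrivial phase angles between the W(a) factors is what lets QSP realize more general polynomials; finding those angles stably is one of the practical challenges the talk addresses.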

Coffee break

30 minutes talk + 5 minutes Q&A

**Abstract:** Atom Computing is creating a quantum processing platform based on nuclear spin qubits. The system makes use of optical tweezers to assemble and individually manipulate a two-dimensional register. We will explore progress on the Atom Computing hardware platform and the potential of the technology to create scalable quantum computing solutions. Applications and benchmarks suitable for neutral atom quantum computing will be addressed.

2 speakers x 15 minutes talk + 5 minutes Q&A each

*Hamid Tebyanian* (University of York) – Hassle-free Extra Randomness from quantum state's identicalness with untrusted components

**Abstract:** This paper investigates a semi-device-independent protocol for quantum randomness generation built on the prepare-and-measure scenario, based on the on-off-keying encoding scheme and with various detection methods, i.e., homodyne, heterodyne, and single-photon detection. The security estimation is based on lower-bounding the guessing probability in the general case and is numerically optimized using semi-definite programming. Additionally, a practical, easy-to-implement optical setup is presented, which can be built from commercial off-the-shelf components.
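For intuition on lower-bounding the guessing probability, the two-pure-state special case has a closed form, the Helstrom bound. This sketch is purely illustrative and far simpler than the paper's SDP-based bound for general adversaries:

```python
import numpy as np

def helstrom_guess_prob(overlap):
    """Optimal probability of distinguishing two equiprobable pure states
    with overlap |<psi0|psi1>| = overlap (Helstrom bound)."""
    return 0.5 * (1.0 + np.sqrt(1.0 - overlap ** 2))

def min_entropy(p_guess):
    """Certifiable randomness per round: H_min = -log2(p_guess)."""
    return -np.log2(p_guess)

p = helstrom_guess_prob(0.6)   # partially overlapping signal states
h = min_entropy(p)             # bits of extractable randomness per round
```

The closer to identical the prepared states are (overlap near 1), the closer the guessing probability gets to 1/2 and the more randomness each round certifies, which is the intuition behind the protocol's title.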

*Francesca Schiavello* (The Hartree Centre) – An Evolutionary Parallel QAOA

**Abstract:** I will present some exploratory work on a hybrid quantum-classical algorithm based on the variational quantum algorithm QAOA. In this work, the quantum part remains the traditional one, while the classical side uses an evolutionary algorithm for the optimization. Originally, the optimization of these circuits was done using a gradient-descent or linear algorithm, as widely used in other optimization and machine learning applications. The motivations for straying from this standard are multiple. Firstly, QAOA, like many variational quantum algorithms, notoriously suffers from barren plateaus, where vanishing gradients mean optimization algorithms are likely to get stuck in local optima. An evolutionary algorithm, on the other hand, does not suffer from this problem, as it does not rely on gradients at all for its evolution; rather, it harnesses an exploratory power, much like simulated annealing. Secondly, because of this exploratory property, an evolutionary algorithm can be embarrassingly parallel: each individual solution in the population can be evaluated independently before being mutated. In this case, a population of n solutions can be run through the circuit and have their parameters optimized in parallel, whereas a gradient-descent approach has to compute the cost functions sequentially, updating the parameters at each evaluation. The total number of evaluations in an evolutionary algorithm will not necessarily be lower than in gradient descent; rather, run in parallel, the evaluations require less time to compute a solution for large enough problems. The presentation will be divided as follows: first, an introduction to both the original QAOA algorithm and evolutionary algorithm theory for background; then the structure and methodology used to optimize this hybrid quantum-classical pair; and lastly, some preliminary results.
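A minimal sketch of the evolutionary loop described above, with a classical toy cost standing in for the QAOA circuit evaluation. All names and hyperparameters here are illustrative, not the talk's actual setup; the key point is that `cost(pop)` scores every candidate independently, so it could be dispatched to parallel circuit evaluations.

```python
import numpy as np

rng = np.random.default_rng(7)

def cost(params):
    """Stand-in for a QAOA circuit's expectation value. Any black-box
    cost works, since the optimizer never asks for gradients."""
    return np.sum((params - 1.0) ** 2, axis=-1)

def evolve(pop_size=32, n_params=4, generations=80, sigma=0.3):
    """Simple (mu, lambda)-style evolution strategy: the whole population
    is scored in one batch -- the embarrassingly parallel step."""
    pop = rng.uniform(-np.pi, np.pi, size=(pop_size, n_params))
    for _ in range(generations):
        scores = cost(pop)                               # parallelizable evaluations
        parents = pop[np.argsort(scores)[: pop_size // 4]]
        children = np.repeat(parents, 4, axis=0)         # 4 offspring per parent
        pop = children + rng.normal(0.0, sigma, size=children.shape)
        sigma *= 0.95                                    # anneal the mutation strength
    return pop[np.argmin(cost(pop))]

best = evolve()
```

Because no gradient is ever formed, a flat (barren-plateau) landscape stalls this loop less severely than gradient descent, and each generation's evaluations can run concurrently across simulators or QPU shots.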

Closing remarks
