Improving near-term quantum algorithms with techniques from machine learning
Author(s)
Smith, Alistair
Type
Thesis or dissertation
Abstract
Quantum computers may one day enable us to solve many classically intractable problems. However, current devices are too noisy for full fault-tolerant quantum computation. Near-term quantum algorithms attempt to extract useful performance from these devices and are a crucial benchmarking tool for device comparison and development.
In this thesis we introduce several methods, primarily based on machine learning, for reducing the quantum resources required at various stages of near-term quantum algorithms. First, we introduce a tensor network approach for designing low-depth, parameter-efficient ansatz circuits for modest-sized variational algorithms. Next, we turn to the implementation of these variational algorithms and show that a Bayesian optimizer using a quantum kernel-based surrogate model greatly reduces the number of circuits submitted to the device.
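To illustrate the surrogate-based optimization loop described above, here is a minimal Python sketch. It is not the thesis's implementation: the quantum kernel is replaced by a classical RBF kernel (via scikit-learn), and the cost function is a toy stand-in for an expectation value that would be estimated on hardware. All names and parameter values are illustrative assumptions.

    # Minimal sketch: Bayesian optimization of a variational cost with a
    # kernel-based surrogate. An RBF kernel stands in for a quantum kernel.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    rng = np.random.default_rng(1)

    def cost(theta):
        """Toy stand-in for a noisy, device-estimated expectation value."""
        return np.cos(theta[0]) * np.sin(theta[1]) + 0.01 * rng.standard_normal()

    # Initial random evaluations; on a real device each costs circuit shots.
    X = rng.uniform(0, 2 * np.pi, size=(5, 2))
    y = np.array([cost(x) for x in X])

    for _ in range(20):
        # Fit the surrogate model to all evaluations gathered so far.
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0)).fit(X, y)
        # Lower-confidence-bound acquisition over random candidate points:
        cand = rng.uniform(0, 2 * np.pi, size=(256, 2))
        mu, sigma = gp.predict(cand, return_std=True)
        x_next = cand[np.argmin(mu - 2.0 * sigma)]
        X = np.vstack([X, x_next])
        y = np.append(y, cost(x_next))

    print("best cost found:", y.min())

Because each surrogate update reuses every previous evaluation, the optimizer needs far fewer fresh cost evaluations, which is exactly what reduces the number of circuits submitted to the device.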
We then consider the problem of efficiently characterizing an algorithm's output quantum state, introducing a neural network architecture that learns, and samples from, the state's outcome distributions in arbitrary local measurement bases.
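The following PyTorch sketch conveys the idea of a basis-conditioned outcome model in the simplest possible setting: a single qubit in a fixed state, with a small MLP that takes an encoding of the measurement basis angle and outputs the outcome probability. The architecture, angle encoding, and toy target are illustrative assumptions, not the architecture from the thesis.

    # Minimal sketch: a network conditioned on the local measurement basis
    # that learns the outcome distribution of a fixed state from shot data.
    # Toy setting: the |+> state measured along n = (sin(phi), 0, cos(phi)).
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    def true_p1(phi):
        """Analytic P(outcome 1) for |+> in the rotated basis (toy target)."""
        return 0.5 * (1 - torch.sin(phi))

    net = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1))
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)

    for step in range(500):
        phi = 2 * torch.pi * torch.rand(256)            # random bases
        outcomes = torch.bernoulli(true_p1(phi))        # simulated shot data
        feats = torch.stack([torch.cos(phi), torch.sin(phi)], dim=1)
        logits = net(feats).squeeze(1)
        loss = nn.functional.binary_cross_entropy_with_logits(logits, outcomes)
        opt.zero_grad(); loss.backward(); opt.step()

    # Query the learned model in a basis not tied to the training draws:
    phi = torch.tensor([0.3])
    p1 = torch.sigmoid(net(torch.stack([torch.cos(phi), torch.sin(phi)], 1)))
    print("learned P(1):", p1.item(), " true P(1):", true_p1(phi).item())

Once trained, such a model can be sampled in any local basis, so the state need not be re-measured on the device for every new basis of interest.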
Finally, we demonstrate how a simple bit-flipping strategy can be used to simplify the effective readout errors experienced on a noisy device. As a result, far fewer calibration measurements are needed to mitigate these errors in post-processing, enabling more accurate quantitative results from near-term algorithms.
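A minimal NumPy sketch of the bit-flip averaging idea, under assumed error rates: randomly flipping each qubit before readout (and undoing the flip in post-processing) symmetrizes the asymmetric readout errors, so each qubit's noise is described by a single parameter rather than two.

    # Minimal sketch: bit-flip averaging symmetrizes readout errors.
    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative asymmetric readout errors (assumed values):
    # p01 = P(read 1 | true 0), p10 = P(read 0 | true 1).
    p01, p10 = 0.02, 0.08

    def readout(bits):
        """Simulate noisy readout of an array of classical bits."""
        flip_prob = np.where(bits == 0, p01, p10)
        flips = rng.random(bits.shape) < flip_prob
        return np.where(flips, 1 - bits, bits)

    def readout_averaged(bits):
        """Bit-flip averaging: randomly apply X before measurement (modelled
        here as flipping the classical bit) and undo the flip afterwards."""
        pre = rng.random(bits.shape) < 0.5
        raw = readout(np.where(pre, 1 - bits, bits))
        return np.where(pre, 1 - raw, raw)

    n = 200_000
    zeros, ones = np.zeros(n, dtype=int), np.ones(n, dtype=int)
    print("raw:      P(1|0) = %.4f  P(0|1) = %.4f"
          % (readout(zeros).mean(), 1 - readout(ones).mean()))
    print("averaged: P(1|0) = %.4f  P(0|1) = %.4f"
          % (readout_averaged(zeros).mean(), 1 - readout_averaged(ones).mean()))
    # Both averaged rates converge to (p01 + p10) / 2, so a single error
    # parameter per qubit suffices to invert the readout noise.

With only one error parameter per qubit to estimate, the calibration overhead drops sharply compared with characterizing the full asymmetric confusion matrix.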
Version
Open Access
Date Issued
2023-06-13
Date Awarded
2023-09-01
Copyright Statement
Attribution-NonCommercial 4.0 International Licence (CC BY-NC)
Advisor
Kim, Myungshik
Sponsor
Engineering and Physical Sciences Research Council
Samsung Advanced Institute of Technology
Grant Number
EP/P510257/1
Publisher Department
Physics
Publisher Institution
Imperial College London
Qualification Level
Doctoral
Qualification Name
Doctor of Philosophy (PhD)