Partial differential equation solvers are required to solve the Navier-Stokes equations for fluid flow. Recently, algorithms have been proposed to simulate fluid dynamics on quantum computers. Fault-tolerant quantum devices might enable exponential speedups over classical algorithms. However, current and foreseeable quantum hardware introduces noise into computations, requiring algorithms that make judicious use of quantum resources: shallower circuit depths and fewer qubits. Under these restrictions, variational algorithms are more appropriate and robust. This work presents a hybrid quantum-classical algorithm for the incompressible Navier-Stokes equations. A classical device performs the nonlinear computations, and a quantum one uses a variational solver for the pressure Poisson equation. A lid-driven cavity problem benchmarks the method. We verify the algorithm via noise-free simulation and test it on noisy IBM superconducting quantum hardware. The results show that this approach can achieve high fidelity, even on current quantum devices. Multigrid preconditioning of the Poisson problem helps avoid local minima and reduces the resource requirements for the quantum device. A quantum state readout technique called HTree is used for the first time on a physical problem. HTree is appropriate for real-valued problems and achieves linear complexity in the qubit count, making the Navier-Stokes solve further tractable on current quantum devices. We compare the quantum resources required by near-term and fault-tolerant solvers to determine the quantum hardware requirements for fluid simulations with complexity improvements.
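The hybrid structure described above can be sketched classically. The block below is a minimal stand-in, assuming a uniform grid with homogeneous Dirichlet boundary conditions: the pressure Poisson solve (the step the abstract delegates to a quantum variational solver) is done here with plain Jacobi relaxation. The function name and grid setup are illustrative, not taken from the paper.

```python
import numpy as np

def pressure_poisson(rhs, h, n_iter=5000):
    """Jacobi relaxation for the 2-D Poisson equation lap(p) = rhs on a
    uniform grid with spacing h and p = 0 on the boundary. In the hybrid
    algorithm this linear solve is the step handed to the quantum solver."""
    p = np.zeros_like(rhs)
    for _ in range(n_iter):
        # five-point stencil; the right-hand side uses the previous iterate
        p[1:-1, 1:-1] = 0.25 * (p[2:, 1:-1] + p[:-2, 1:-1]
                                + p[1:-1, 2:] + p[1:-1, :-2]
                                - h**2 * rhs[1:-1, 1:-1])
    return p

# manufactured solution: p = sin(pi x) sin(pi y), so rhs = -2 pi^2 p
n = 33
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
exact = np.sin(np.pi * X) * np.sin(np.pi * Y)
p = pressure_poisson(-2.0 * np.pi**2 * exact, h)
```

Replacing the Jacobi loop with a variational quantum linear solver (optionally multigrid-preconditioned, as in the abstract) leaves the rest of the time-stepping loop unchanged, which is what makes the hybrid split natural.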
Quantum computing may provide an advantage in solving classical optimization problems. One promising algorithm is the quantum approximate optimization algorithm (QAOA). There have been many proposals for improving this algorithm, such as using an initial state informed by classical approximate solutions. A variant of QAOA called ADAPT-QAOA constructs the ansatz dynamically and can speed up convergence. However, it frequently converges to excited states, which correspond to local minima in the energy landscape, limiting its performance. In this work, we propose starting ADAPT-QAOA from an initial state inspired by a classical approximation algorithm. Through numerical simulations, we show that this new algorithm can reach the same accuracy with fewer layers than both standard QAOA and the original ADAPT-QAOA, and it appears less prone to converging to excited states.
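For readers unfamiliar with the baseline algorithm being modified, the block below is a minimal statevector sketch of standard MaxCut QAOA (cost layer as a diagonal phase, mixer as single-qubit RX rotations). It is a generic illustration of the QAOA circuit structure, not the ADAPT-QAOA construction or the warm-start proposed in the abstract.

```python
import numpy as np
from itertools import product

def maxcut_qaoa_energy(edges, n, gammas, betas):
    """Depth-p MaxCut QAOA by dense statevector simulation.
    The cost C(z) counts cut edges; the mixer applies RX(2*beta) per qubit."""
    # diagonal of the cost Hamiltonian over all 2^n bitstrings
    zs = np.array(list(product([0, 1], repeat=n)))
    cost = np.array([sum(z[i] != z[j] for i, j in edges) for z in zs], dtype=float)
    psi = np.full(2**n, 2**(-n / 2), dtype=complex)      # |+>^n initial state
    for g, b in zip(gammas, betas):
        psi = np.exp(-1j * g * cost) * psi               # cost layer
        for q in range(n):                               # mixer layer
            psi = np.moveaxis(psi.reshape([2] * n), q, 0)
            a, c = psi[0].copy(), psi[1].copy()
            psi[0] = np.cos(b) * a - 1j * np.sin(b) * c  # RX(2b) on qubit q
            psi[1] = -1j * np.sin(b) * a + np.cos(b) * c
            psi = np.moveaxis(psi, 0, q).reshape(-1)
    return float(np.real(np.sum(cost * np.abs(psi) ** 2)))

triangle = [(0, 1), (1, 2), (0, 2)]
e_zero = maxcut_qaoa_energy(triangle, 3, [0.0], [0.0])   # uniform state: mean cut
```

ADAPT-QAOA replaces the fixed mixer with operators selected layer by layer from a pool; the warm-start proposal changes the `|+>^n` initial state, leaving the rest of this structure intact.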
Many-qubit Mølmer-Sørensen (MS) interactions applied to trapped ions offer unique capabilities for quantum information processing, with applications including quantum simulation and the quantum approximate optimization algorithm (QAOA). Here, we develop a physical model to describe many-qubit MS interactions under four sources of experimental noise: vibrational mode frequency fluctuations, laser power fluctuations, thermal initial vibrational states, and state preparation and measurement errors. The model parameterizes these errors from simple experimental measurements, without free parameters. We validate the model against experiments that implement sequences of MS interactions on two $^{171}$Yb$^+$ ions. The model shows reasonable agreement after several MS interactions, as quantified by the reduced chi-squared statistic $\chi^2_\mathrm{red} \approx 2$. As an application, we examine MaxCut QAOA experiments on three and six ions. The experimental performance is quantified by approximation ratios that are $91\%$ and $83\%$ of the optimal theoretical values. Our model predicts $0.93^{+0.03}_{-0.02}$ and $0.95^{+0.04}_{-0.03}$, respectively, with the disagreement in the latter value attributable to secondary noise sources beyond those considered in our analysis. With realistic experimental improvements that reduce measurement error and radial trap frequency variations, the model achieves approximation ratios that are $99\%$ of optimal. Incorporating these improvements into future experiments is expected to reveal new aspects of noise for further modeling and experimental refinement.
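The model-validation metric quoted above, the reduced chi-squared statistic, is standard and easy to state in code. This is a generic textbook definition, not code from the paper; the argument names are illustrative.

```python
import numpy as np

def reduced_chi_squared(observed, predicted, sigma, n_params):
    """chi^2 per degree of freedom. Values near 1 indicate that the
    model's error bars are consistent with the scatter of the data;
    values near 2, as reported above, indicate modest underfitting."""
    resid = (np.asarray(observed) - np.asarray(predicted)) / np.asarray(sigma)
    dof = resid.size - n_params          # degrees of freedom
    return float(np.sum(resid**2) / dof)
```

Because the paper's model has no free parameters, `n_params = 0` and every data point contributes a full degree of freedom.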
We generalize the Quantum Approximate Optimization Algorithm (QAOA) of Farhi et al. (2014) to allow for arbitrary separable initial states with corresponding mixers such that the starting state is the most excited state of the mixing Hamiltonian. We demonstrate this version of QAOA, which we call QAOA-warmest, by simulating Max-Cut on weighted graphs. We initialize the starting state as a warm-start using $2$- and $3$-dimensional approximations obtained from randomized projections of solutions to Max-Cut's semi-definite program, and define a warm-start-dependent custom mixer. We show that these warm-starts initialize the QAOA circuit with constant-factor approximations of $0.658$ for $2$-dimensional and $0.585$ for $3$-dimensional warm-starts for graphs with non-negative edge weights, improving upon the previously known trivial worst-case bound at $p=0$ (i.e., $0.5$ for standard initialization). These factors in fact lower bound the approximation achieved for Max-Cut at higher circuit depths, since we also show that QAOA-warmest with any separable initial state converges to Max-Cut in the adiabatic limit as $p\rightarrow \infty$. However, the choice of warm-start significantly impacts the rate of convergence to Max-Cut, and we show empirically that our warm-starts achieve faster convergence than existing approaches. Additionally, our numerical simulations show higher-quality cuts compared to standard QAOA, the classical Goemans-Williamson algorithm, and a warm-started QAOA without custom mixers for an instance library of $1148$ graphs (up to $11$ nodes) at depth $p=8$. We further show that QAOA-warmest outperforms the standard QAOA of Farhi et al. in experiments on current IBM-Q and Quantinuum hardware.
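The $p=0$ guarantees above come from evaluating the expected cut of the separable warm-start state itself. The block below is a minimal sketch of that evaluation, assuming each qubit has already been assigned a polar angle in the x-z plane (the rounding from the semi-definite program's vectors to angles is not shown, and the function name is illustrative).

```python
import numpy as np

def warmstart_expected_cut(weighted_edges, thetas):
    """Expected cut value at circuit depth p = 0 for a separable
    warm-start. Qubit i is prepared at polar angle thetas[i] in the
    x-z plane, so <Z_i> = cos(theta_i); for a product state
    <Z_i Z_j> = <Z_i><Z_j>, and each edge (i, j, w) contributes
    w * (1 - <Z_i><Z_j>) / 2."""
    return sum(w * 0.5 * (1.0 - np.cos(thetas[i]) * np.cos(thetas[j]))
               for i, j, w in weighted_edges)

# antipodal angles cut an edge with certainty; equatorial angles give 1/2
cut_full = warmstart_expected_cut([(0, 1, 1.0)], [0.0, np.pi])
cut_rand = warmstart_expected_cut([(0, 1, 1.0)], [np.pi / 2, np.pi / 2])
```

Setting every angle to $\pi/2$ recovers the standard $|+\rangle^{\otimes n}$ initialization and its $0.5$ worst-case factor, which is the baseline the warm-starts improve upon.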
To quantify the relative performance of different testbed quantum computing devices, it is useful to benchmark them using a common protocol. While some benchmarks rely on the performance of random circuits and are generic in nature, here we instead propose and implement a practical, application-based benchmark. In particular, our protocol calculates the ground-state energy in the single-particle subspace of a 1-D Fermi-Hubbard model, a problem that is efficient to solve classically. We provide a quantum ansatz for the problem that is provably able to probe the full single-particle subspace for a 1-D chain of general length and that scales efficiently in the number of gates and measurements. Finally, we demonstrate and analyze the benchmark performance on superconducting and ion-trap testbed hardware from three hardware vendors, with up to 24 qubits.
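The classical efficiency claimed above is easy to see: with a single particle the Hubbard interaction term never acts, so the exact answer comes from diagonalizing an $L \times L$ hopping matrix. This sketch assumes an open chain with uniform hopping $t$; names and conventions are illustrative.

```python
import numpy as np

def single_particle_ground_energy(L, t=1.0):
    """Ground-state energy of the single-particle sector of an open
    1-D chain. With one particle the on-site Hubbard interaction is
    inactive, so the problem reduces to the L x L hopping matrix."""
    H = -t * (np.eye(L, k=1) + np.eye(L, k=-1))   # nearest-neighbour hopping
    return float(np.linalg.eigvalsh(H)[0])
```

The result can be checked against the closed form for an open chain, $E_0 = -2t\cos(\pi/(L+1))$, which is what makes this benchmark's target value exactly known for any device size.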
The rapid progress of noisy intermediate-scale quantum (NISQ) computing underscores the need to test and evaluate new devices and applications. Quantum chemistry is a key application area for these devices and therefore serves as an important benchmark for current and future quantum computer performance. Previous benchmarks in this field have focused on variational methods for computing ground and excited states of various molecules, including a benchmarking suite focused on the performance of computing ground states of alkali hydrides under an array of error mitigation methods. Here, we outline state-of-the-art methods to reach chemical accuracy in hybrid quantum-classical electronic structure calculations of alkali hydride molecules on NISQ devices from IBM. We demonstrate how to extend the reach of variational eigensolvers with new symmetry-preserving Ansätze. Next, we outline how to use quantum imaginary time evolution and Lanczos methods as complements to variational techniques, highlighting the advantages of each approach. Finally, we demonstrate a new error mitigation method that uses systematic error cancellation via hidden inverse gate constructions, improving the performance of typical variational algorithms. These results show that electronic structure calculations on quantum computers have advanced rapidly to routine chemical accuracy for simple molecules, only a few years after their inception, and they point to further rapid progress toward larger molecules as the power of NISQ devices grows.
We present methods for constructing any target coupling graph using limited global controls in an Ising-like quantum spin system. Our approach is motivated by implementing the quantum approximate optimization algorithm (QAOA) on trapped-ion quantum hardware to find approximate solutions to Max-Cut. We present a mathematical description of the problem and provide approximately optimal algorithmic constructions that generate arbitrary unweighted coupling graphs with $n$ nodes in $O(n)$ global entangling operations and weighted graphs with $m$ edges in $O(m)$ operations. These upper bounds are not tight in general, so we also formulate a mixed-integer program that solves the graph coupling problem to optimality. In numerical experiments on small graphs with $n\le8$, the mixed-integer program finds optimal sequences that use fewer operations than the algorithmic constructions. Noisy simulations of Max-Cut QAOA show that our implementation is less susceptible to noise than the standard gate-based compilation.
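The synthesis idea can be illustrated with a toy linear-algebra model: treat each global entangling operation as an all-to-all Ising coupling restricted to a chosen subset of qubits, with a signed weight, and ask whether a signed combination of such operations reproduces a target adjacency matrix. This is an illustrative abstraction of the problem statement, not the paper's constructions or its mixed-integer program.

```python
import numpy as np

def coupling_from_global_ops(ops, n):
    """Sum the couplings produced by a sequence of global operations.
    Each op is (participants, weight): every pair i != j of qubits in
    `participants` picks up an Ising coupling of strength `weight`."""
    J = np.zeros((n, n))
    for subset, w in ops:
        s = np.zeros(n)
        s[list(subset)] = 1.0
        J += w * (np.outer(s, s) - np.diag(s))   # off-diagonal pairs only
    return J

# path graph 0-1-2 from two ops: a full triangle, then cancel edge (0, 2)
J = coupling_from_global_ops([({0, 1, 2}, 1.0), ({0, 2}, -1.0)], 3)
```

Minimizing the number of such operations for a given target `J` is exactly the combinatorial problem the abstract formulates as a mixed-integer program.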
The variational quantum eigensolver (VQE) is currently the flagship algorithm for solving electronic structure problems on near-term quantum computers. This hybrid quantum/classical algorithm involves implementing a sequence of parameterized gates on quantum hardware to generate a target quantum state, and then measuring the expectation value of the molecular Hamiltonian. Due to finite coherence times and frequent gate errors, the number of gates that can be implemented remains limited on current quantum devices, preventing accurate applications to systems with significant entanglement, such as strongly correlated molecules. In this work, we propose an alternative algorithm (which we refer to as ctrl-VQE) where the quantum circuit used for state preparation is removed entirely and replaced by a quantum control routine which variationally shapes a pulse to drive the initial Hartree-Fock state to the full CI target state. As with VQE, the objective function optimized is the expectation value of the qubit-mapped molecular Hamiltonian. However, by removing the quantum circuit, the coherence times required for state preparation can be drastically reduced by directly optimizing the pulses. We demonstrate the potential of this method numerically by directly optimizing pulse shapes which accurately model the dissociation curves of the hydrogen molecule (covalent bond) and helium hydride ion (ionic bond), and we compute the single point energy for LiH with four transmons.
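The ctrl-VQE objective described above, variationally shaping a pulse so that the driven state minimizes the Hamiltonian expectation value, can be sketched in a toy single-qubit setting. Everything below is a hypothetical stand-in: a made-up 2x2 "molecular" Hamiltonian, a piecewise-constant $\sigma_y$ drive, and a generic classical optimizer in place of the paper's pulse parameterization and transmon model.

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

sigma_y = np.array([[0.0, -1.0j], [1.0j, 0.0]])
H_mol = np.array([[1.0, 0.5], [0.5, -1.0]])   # toy qubit-mapped Hamiltonian

def pulse_energy(amps, dt=0.2):
    """Evolve |0> under a piecewise-constant sigma_y drive (one amplitude
    per time slice) and return <H_mol> -- the ctrl-VQE cost function."""
    psi = np.array([1.0, 0.0], dtype=complex)
    for a in amps:
        psi = expm(-1j * a * dt * sigma_y) @ psi
    return float(np.real(psi.conj() @ H_mol @ psi))

# classically optimize the four slice amplitudes of the pulse
result = minimize(pulse_energy, x0=0.1 * np.ones(4), method="Nelder-Mead")
```

The exact ground energy of `H_mol` is $-\sqrt{1.25} \approx -1.118$; the optimized pulse drives the initial state onto (a close approximation of) that ground state, mirroring how ctrl-VQE drives Hartree-Fock toward the full CI state without any gate decomposition.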
One of the most promising applications of noisy intermediate-scale quantum computers is the simulation of molecular Hamiltonians using the variational quantum eigensolver (VQE). We show that encoding symmetries of the simulated Hamiltonian in the VQE ansatz reduces both classical and quantum resources compared to other widely available ansätze. Through simulations of the H$_2$ molecule, we verify that these improvements persist in the presence of noise. These simulations are performed with IBM software using noise models from real devices. We also demonstrate how these techniques can be used to find molecular excited states of various symmetries using a noisy processor, and we use error mitigation techniques to further improve the quality of our results.
The variational quantum eigensolver is one of the most promising approaches for performing chemistry simulations using noisy intermediate-scale quantum (NISQ) processors. The efficiency of this algorithm depends crucially on the ability to prepare multi-qubit trial states on the quantum processor that either include, or at least closely approximate, the actual energy eigenstates of the problem being simulated while avoiding states that have little overlap with them. Symmetries play a central role in determining the best trial states. Here, we present efficient state preparation circuits that respect particle number, total spin, spin projection, and time-reversal symmetries. These circuits contain the minimal number of variational parameters needed to fully span the appropriate symmetry subspace dictated by the chemistry problem while avoiding all irrelevant sectors of Hilbert space. We show how to construct these circuits for arbitrary numbers of orbitals, electrons, and spin quantum numbers, and we provide explicit decompositions and gate counts in terms of standard gate sets in each case. We test our circuits in quantum simulations of the H$_2$ and LiH molecules and find that they outperform standard state preparation methods in terms of both accuracy and circuit depth.
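The resource savings from restricting to a symmetry sector can be quantified with simple counting. The sketch below computes the dimension of the fixed particle-number and $S_z$ sector (a standard combinatorial fact, not code from the paper); the gap between this dimension and the full $2^{2M}$ qubit Hilbert space is what symmetry-respecting circuits exploit.

```python
from math import comb

def symmetry_subspace_dim(n_orbitals, n_up, n_down):
    """Dimension of the sector with fixed electron number and S_z:
    up- and down-spin orbital occupations are chosen independently."""
    return comb(n_orbitals, n_up) * comb(n_orbitals, n_down)

# minimal-basis H2: 2 spatial orbitals, 1 up + 1 down electron
dim_h2 = symmetry_subspace_dim(2, 1, 1)       # 4 states out of 2^4 = 16
```

A circuit confined to this sector needs at most `dim - 1` real variational parameters to reach any state in it (up to global phase), compared with exponentially many for an unconstrained ansatz.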
While relatively easy to engineer, static transverse coupling between a qubit and a cavity mode satisfies the criteria for a quantum non-demolition (QND) measurement only if the coupling between the qubit and cavity is much less than their mutual detuning. This can put significant limits on the speed of the measurement, requiring trade-offs in the circuit design between coupling, detuning, and decoherence introduced by the cavity mode. Here, we study a circuit in which the qubit-cavity and cavity-feedline couplings can be turned on and off, which helps to isolate the qubit. We do not rely on the rotating-wave or dispersive approximations, but instead solve the full transverse interaction between the qubit and the cavity mode. We show that by carefully choosing the detuning and interaction time, we can exploit a recurrence in the qubit-cavity dynamics in a way that makes it possible to perform very fast, high-fidelity QND measurements. Here, the qubit measurement is performed more like a gate operation between the qubit and the cavity, where the cavity state can be amplified, squeezed, and released in a time-sequenced fashion. We also show that the non-demolition property of the off-resonant approximation breaks down much faster than its dispersive property, suggesting that many of the dispersive measurements to date have been implemented outside the QND regime.
One candidate for converting quantum information from microwave to optical frequencies is a single atom that interacts with a superconducting microwave resonator on one hand and an optical cavity on the other. The large electric dipole moments and microwave transition frequencies of Rydberg states allow them to couple strongly to superconducting devices. Lasers can then be used to connect a Rydberg transition to an optical transition to realize the conversion. Since the fundamental source of noise in this process is spontaneous emission from the atomic levels, the resulting control problem involves choosing the pulse shapes of the driving lasers so as to maximize the transfer rate while minimizing this loss. Here we consider the concrete example of a cesium atom, along with two specific choices for the levels used in the conversion cycle. Under the assumption that spontaneous emission is the only significant source of errors, we use numerical optimization to determine the likely rates for reliable quantum communication achievable with this device. These rates are on the order of a few megaqubits per second.
Claudio Parazzoli, Benjamin Koltenbah, David Gerwe, Paul Idell, Bryan Gard, Richard Birrittella, S. M. Hashemi Rafsanjani, Mohammad Mirhosseini, O. S. Magaña-Loaiza, Jonathan Dowling, Christofer Gerry, Robert Boyd, Barbara Capron
Long-baseline interferometry (LBI) is used to reconstruct images of faint thermal objects. The image quality, for a given exposure time, is in general limited by a low signal-to-noise ratio (SNR). We show theoretically that a significant increase of the SNR in LBI is possible by adding photons to, or subtracting photons from, the thermal beam. At low photon counts, photon addition-subtraction technology strongly enhances the image quality. We have experimentally realized a nondeterministic physical protocol for photon subtraction, and our theoretical predictions are supported by the experimental results.
Bryan T. Gard, Dong Li, Chenglong You, Kaushik P. Seshadreesan, Richard Birrittella, Jerome Luine, Seyed Mohammad Hashemi Rafsanjani, Mohammad Mirhosseini, Omar S. Magaña-Loaiza, Benjamin E. Koltenbah, Claudio G. Parazzoli, Barbara A. Capron, Robert W. Boyd, Christopher C. Gerry, Hwang Lee, Jonathan P. Dowling
Probabilistic amplification through photon addition at the output of a Mach-Zehnder interferometer is discussed for a coherent input state. When a signal-to-noise ratio metric is considered, nondeterministic, noiseless amplification of a coherent state shows improvement over a standard coherent state for the general addition of $m$ photons. The efficiency of realizable implementations of photon addition is also considered, showing how the collected statistics of a post-selected state depend on this efficiency. We also consider the effects of photon loss and inefficient detectors.
Seyed Mohammad Hashemi Rafsanjani, Mohammad Mirhosseini, Omar S. Magaña-Loaiza, Bryan T. Gard, Richard Birrittella, B. E. Koltenbah, C. G. Parazzoli, Barbara A. Capron, Christopher C. Gerry, Jonathan P. Dowling, Robert W. Boyd
We propose and implement a quantum procedure for enhancing the sensitivity with which one can determine the phase shift experienced by a weak light beam possessing thermal statistics in passing through an interferometer. Our procedure entails subtracting exactly one photon (a procedure that can be generalized to $m$ photons) from the light field exiting an interferometer containing a phase-shifting element in one of its arms. As a consequence of the photon subtraction, and somewhat surprisingly, the mean photon number and signal-to-noise ratio of the resulting light field are increased, leading to enhanced interferometry. This method can be used to increase measurement sensitivity in a variety of practical applications, including forming the image of an object illuminated only by weak thermal light.
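The "somewhat surprising" increase in mean photon number is a standard property of photon-subtracted thermal light and can be verified numerically: subtracting one photon from a thermal (geometric) photon-number distribution exactly doubles the mean, $\bar n \to 2\bar n$. The block below is a generic Fock-basis illustration of that fact, not a simulation of the experiment.

```python
import numpy as np

def subtract_photon(p):
    """Photon subtraction on a Fock-diagonal photon-number distribution:
    applying the annihilation operator and renormalizing maps
    p_n -> (n+1) p_{n+1} (up to normalization)."""
    q = np.arange(1, p.size) * p[1:]
    q = q / q.sum()
    return np.concatenate([q, [0.0]])   # pad to keep the array length

nbar = 1.5
n = np.arange(200)                       # Fock-space truncation
thermal = nbar**n / (1.0 + nbar)**(n + 1)   # geometric (thermal) statistics
subtracted = subtract_photon(thermal)
```

The conditioning on a successful subtraction event is what raises the mean: bright realizations of the thermal field are more likely to yield a subtracted photon, so the post-selected ensemble is brighter.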
The use of an interferometer to perform ultra-precise parameter estimation under noisy conditions is a challenging task. Here we discuss nearly optimal measurement schemes for a well-known, sensitive input state: squeezed vacuum combined with coherent light. We find that a single-mode intensity measurement, while the simplest and able to beat the shot-noise limit, is outperformed by other measurement schemes in the low-power regime; at high powers, however, it is outperformed only by a small factor. Specifically, we confirm that the optimal measurement choice under lossless conditions is the parity measurement. In addition, we discuss the performance of several other common measurement schemes in the presence of photon loss, detector inefficiency, phase drift, and thermal photon noise. We conclude that, with noise considerations, homodyne detection remains near optimal in both the low- and high-power regimes. Surprisingly, some of the other investigated measurement schemes, including the previously optimal parity measurement, do not remain even near optimal once noise is introduced.
This dissertation serves as a general introduction to Wigner functions, phase space, and quantum metrology, but also strives to be useful as a how-to guide for those who wish to use continuous variables to describe quantum states of light and optical interferometry. We discuss the advantages of Wigner functions and their use in describing many quantum states of light. Throughout our metrology discussions, we also examine various quantum limits and use quantum Fisher information to establish optimal bounds. When applicable, we discuss quantum Gaussian information and how it relates to our Wigner function treatment. The remainder of the discussion focuses on the effects of photon addition and subtraction on various states of light and analyzes the nondeterministic nature of this process. We use examples of $m$-photon additions to a coherent state and discuss the properties of an $m$-photon-subtracted thermal state. We also argue that this process must always be nondeterministic, since otherwise it would permit violations of quantum limits. We show that using phase measurement as one's metric is much more restrictive, which limits the usefulness of photon addition and subtraction. When we consider SNR, however, we find improved SNR statistics at the cost of increased measurement time, and in this case we also quantify the efficiency of the photon addition and subtraction process.
We theoretically investigate the phase sensitivity of parity detection on an SU(1,1) interferometer fed with a coherent state combined with a squeezed vacuum state. This interferometer uses two parametric amplifiers, rather than beam splitters, for beam splitting and recombination. We show that the phase-estimation sensitivity approaches the Heisenberg limit and give the corresponding optimal condition. Moreover, we derive the quantum Cramér-Rao bound of the SU(1,1) interferometer.
Boson-sampling is a simplified model of quantum computing that may hold the key to implementing the first post-classical quantum computer. It is a non-universal model that is significantly more straightforward to build than any universal quantum computer proposed so far. We begin this chapter by motivating boson-sampling and discussing the history of linear optics quantum computing. We then summarize the boson-sampling formalism, discuss what a sampling problem is, explain why boson-sampling is easier than linear optics quantum computing, and discuss the Extended Church-Turing thesis. Next, sampling with other classes of quantum optical states is analyzed. Finally, we discuss the feasibility of building a boson-sampling device using existing technology.
Aaronson and Arkhipov recently used computational complexity theory to argue that classical computers very likely cannot efficiently simulate linear, multimode, quantum-optical interferometers with arbitrary Fock-state inputs [Aaronson and Arkhipov, Theory Comput. 9, 143 (2013)]. Here we present an elementary argument that utilizes only techniques from quantum optics. We explicitly construct the Hilbert space for such an interferometer and show that its dimension scales exponentially with all the physical resources. We also show in a simple example just how the Schrödinger and Heisenberg pictures of quantum theory, while mathematically equivalent, are not in general computationally equivalent. Finally, we conclude our argument by comparing the symmetry requirements of multiparticle bosonic to fermionic interferometers and, using simple physical reasoning, connect the nonsimulatability of the bosonic device to the complexity of computing the permanent of a large matrix.
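The closing point above, that the hardness of simulating the bosonic device connects to computing matrix permanents, can be made concrete: the best general-purpose exact algorithm, Ryser's inclusion-exclusion formula, still takes exponential time. The implementation below is the standard textbook formula, included for illustration.

```python
import numpy as np
from itertools import combinations

def permanent(A):
    """Ryser's formula: perm(A) = (-1)^n * sum over nonempty column
    subsets S of (-1)^|S| * prod_i (sum_{j in S} A[i, j]).
    Cost is O(2^n * n^2) -- exponential, unlike the determinant."""
    n = A.shape[0]
    total = 0.0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            total += (-1) ** r * np.prod(A[:, list(cols)].sum(axis=1))
    return (-1) ** n * total
```

For an all-ones $n \times n$ matrix the permanent is $n!$, which grows factorially; this is the quantity that governs the output amplitudes of a Fock-state interferometer and underlies the nonsimulatability argument.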
We show a simulation of quantum random walks with multiple photons using a staggered array of 50/50 beam splitters, with a bank of detectors at any desired level. We discuss the multiphoton interference effects inherent to this setup and introduce one-, two-, and threefold coincidence detection schemes. Feynman diagrams are used to explain intuitively the unique multiphoton interference effects of these quantum random walks.
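For the single-photon case, the staggered splitter array can be sketched as a discrete-time quantum walk whose "coin" is the beam-splitter matrix. The block below is a minimal illustration under that standard identification (one common beam-splitter phase convention is assumed); it does not cover the multiphoton interference or coincidence schemes discussed above.

```python
import numpy as np

def splitter_array_walk(levels):
    """Single-photon output probabilities after `levels` rows of a
    staggered 50/50 beam-splitter array, modeled as a discrete-time
    quantum walk with coin [[1, 1j], [1j, 1]] / sqrt(2)."""
    coin = np.array([[1.0, 1.0j], [1.0j, 1.0]]) / np.sqrt(2.0)
    # amp[k, d]: amplitude at rail k travelling in direction d (0=up, 1=down)
    amp = np.zeros((2 * levels + 1, 2), dtype=complex)
    amp[levels, 0] = 1.0                      # photon enters on the centre rail
    for _ in range(levels):
        amp = amp @ coin.T                    # each splitter mixes the two modes
        shifted = np.zeros_like(amp)
        shifted[:-1, 0] = amp[1:, 0]          # transmitted component moves up
        shifted[1:, 1] = amp[:-1, 1]          # reflected component moves down
        amp = shifted
    return np.sum(np.abs(amp) ** 2, axis=1)   # detector probabilities per rail
```

Unlike a classical Galton board, the amplitudes of distinct paths to the same detector interfere, which is what produces the non-binomial spreading characteristic of quantum walks.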