Forecasting demand for assets and services arises in many markets, and highly accurate predictive models provide a competitive advantage. However, training machine learning models incurs high computational costs, which can limit the prediction models that can be trained with the available computational capacity. In this context, this paper presents an approach to training demand prediction models using quantum neural networks. For this purpose, a quantum neural network was used to forecast demand for vehicle financing. A classical recurrent neural network was trained for comparison, and the results show a similar predictive capacity between the classical and quantum models, with the quantum model using fewer training parameters and converging in fewer steps. Quantum computing techniques therefore offer a promising way to overcome the limitations of traditional machine learning approaches when training predictive models for complex market dynamics.
Ben W. Reichardt, David Aasen, Rui Chao, Alex Chernoguzov, Wim van Dam, John P. Gaebler, Dan Gresh, Dominic Lucchetti, Michael Mills, Steven A. Moses, Brian Neyenhuis, Adam Paetznick, Andres Paz, Peter E. Siegfried, Marcus P. da Silva, Krysta M. Svore, Zhenghan Wang, Matt Zanner
A critical milestone for quantum computers is to demonstrate fault-tolerant computation that outperforms computation on physical qubits. The tesseract subsystem color code protects four logical qubits in 16 physical qubits, to distance four. Using the tesseract code on Quantinuum's trapped-ion quantum computers, we prepare high-fidelity encoded graph states on up to 12 logical qubits, beneficially combining for the first time fault-tolerant error correction and computation. We also protect encoded states through up to five rounds of error correction. Using performant quantum software and hardware together allows moderate-depth logical quantum circuits to have an order of magnitude less error than the equivalent unencoded circuits.
We demonstrate the first end-to-end integration of high-performance computing (HPC), reliable quantum computing, and AI in a case study on catalytic reactions producing chiral molecules. We present a hybrid computation workflow to determine the strongly correlated reaction configurations and estimate, for one such configuration, its active site's ground state energy. We combine 1) the use of HPC tools like AutoRXN and AutoCAS to systematically identify the strongly correlated chemistry within a large chemical space with 2) the use of logical qubits in the quantum computing stage to prepare the quantum ground state of the strongly correlated active site, demonstrating the advantage of logical qubits compared to physical qubits, and 3) the use of optimized quantum measurements of the logical qubits with so-called classical shadows to accurately predict various properties of the ground state including energies. The combination of HPC, reliable quantum computing, and AI in this demonstration serves as a proof of principle of how future hybrid chemistry applications will require integration of large-scale quantum computers with classical computing to be able to provide a measurable quantum advantage.
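To illustrate the classical-shadows step in 3) above, here is a minimal, self-contained sketch (not the paper's actual workflow): random single-qubit Pauli-basis measurements of a simulated two-qubit state, combined with the standard per-qubit inverse-channel reconstruction, are used to estimate Pauli expectation values. The state, observables, and snapshot count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Single-qubit rotations mapping the X, Y, or Z eigenbasis onto the computational basis
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
Sdg = np.array([[1, 0], [0, -1j]], dtype=complex)
BASIS_ROT = {"X": H, "Y": H @ Sdg, "Z": np.eye(2, dtype=complex)}

def shadow_estimate(state, observable, n_snapshots=20000):
    """Estimate tr(rho O) for a tensor-product Pauli observable (e.g. "XX", "ZI")
    from random Pauli-basis measurements of the pure state `state` (a statevector)."""
    n = len(observable)
    total = 0.0
    for _ in range(n_snapshots):
        bases = rng.choice(["X", "Y", "Z"], size=n)
        U = np.array([[1.0 + 0j]])
        for b in bases:
            U = np.kron(U, BASIS_ROT[b])
        probs = np.abs(U @ state) ** 2
        outcome = rng.choice(2 ** n, p=probs / probs.sum())
        # Per-qubit snapshot 3 U^dag |b><b| U - I contracted against the observable:
        # identity factors give 1, matching Pauli factors give +/-3, mismatches give 0.
        val = 1.0
        for q, (b, o) in enumerate(zip(bases, observable)):
            if o == "I":
                continue
            if o != b:
                val = 0.0
                break
            bit = (outcome >> (n - 1 - q)) & 1
            val *= 3.0 * (1 - 2 * bit)
        total += val
    return total / n_snapshots

# Two-qubit Bell state: expect <XX> = <ZZ> = 1 and <ZI> = 0, up to shot noise.
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
for obs in ["XX", "ZZ", "ZI"]:
    print(obs, round(shadow_estimate(bell, obs), 2))
```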
M. P. da Silva, C. Ryan-Anderson, J. M. Bello-Rivas, A. Chernoguzov, J. M. Dreiling, C. Foltz, F. Frachon, J. P. Gaebler, T. M. Gatterman, L. Grans-Samuelsson, D. Hayes, N. Hewitt, J. Johansen, D. Lucchetti, M. Mills, S. A. Moses, B. Neyenhuis, A. Paz, J. Pino, P. Siegfried, et al (7)
The promise of quantum computers hinges on the ability to scale to large system sizes, e.g., to run quantum computations consisting of more than 100 million operations fault-tolerantly. This in turn requires suppressing errors to levels inversely proportional to the size of the computation. As a step towards this ambitious goal, we present experiments on a trapped-ion QCCD processor where, through the use of fault-tolerant encoding and error correction, we are able to suppress logical error rates to levels below the physical error rates. In particular, we entangled logical qubits encoded in the [[7,1,3]] code with error rates 9.8 times to 500 times lower than at the physical level, and entangled logical qubits encoded in a [[12,2,4]] code with error rates 4.7 times to 800 times lower than at the physical level, depending on the judicious use of post-selection. Moreover, we demonstrate repeated error correction with the [[12,2,4]] code, with logical error rates below physical circuit baselines corresponding to repeated CNOTs, and show evidence that the error rate per error correction cycle, which consists of over 100 physical CNOTs, approaches the error rate of two physical CNOTs. These results signify an important transition from noisy intermediate scale quantum computing to reliable quantum computing, and demonstrate advanced capabilities toward large-scale fault-tolerant quantum computing.
We devise a new realization of the surface code on a rectangular lattice of qubits utilizing single-qubit and nearest-neighbor two-qubit Pauli measurements and three auxiliary qubits per plaquette. This realization gains substantial advantages over prior pairwise measurement-based realizations of the surface code. It has a short operation period of 4 steps and our performance analysis for a standard circuit noise model yields a high fault-tolerance threshold of approximately $0.66\% $. The syndrome extraction circuits avoid bidirectional hook errors, so we can achieve full code distance by choosing appropriate boundary conditions. We also construct variants of the syndrome extraction circuits that entirely prevent hook errors, at the cost of larger circuit depth. This achieves full distance regardless of boundary conditions, with only a modest decrease in the threshold. Furthermore, we propose an efficient strategy for dealing with dead components (qubits and measurements) in our surface code realization, which can be adopted more generally for other surface code realizations. This new surface code realization is highly optimized for Majorana-based hardware, accounting for constraints imposed by layouts and the implementation of measurements, making it competitive with the recently proposed Floquet codes.
Several variational quantum circuit approaches to machine learning have been proposed in recent years, with one promising class of variational algorithms involving tensor networks operating on states resulting from local feature maps. In contrast, a random feature approach known as quantum kitchen sinks provides comparable performance, but leverages non-local feature maps. Here we combine these two approaches by proposing a new circuit ansatz where a tree tensor network coherently processes the non-local feature maps of quantum kitchen sinks, and we run numerical experiments to empirically evaluate the performance of the new ansatz on image classification. From the perspective of classification performance, we find that simply combining quantum kitchen sinks with tensor networks yields no qualitative improvements. However, the addition of feature optimization greatly boosts performance, leading to state-of-the-art quantum classifiers for image classification that require only shallow circuits and a small number of qubits -- both well within reach of near-term quantum devices.
Quantum error correction is crucial for any quantum computing platform to achieve truly scalable quantum computation. The surface code and its variants have been considered the most promising quantum error correction scheme due to their high threshold, low overhead, and relatively simple structure that can naturally be implemented in many existing qubit architectures, such as superconducting qubits. The recent development of Floquet codes offers another promising approach. By going beyond the usual paradigm of stabilizer codes, Floquet codes achieve similar performance while being constructed entirely from two-qubit measurements. This makes them particularly suitable for platforms where two-qubit measurements can be implemented directly, such as measurement-only topological qubits based on Majorana zero modes (MZMs). Here, we explain how two variants of Floquet codes can be implemented on MZM-based architectures without any auxiliary qubits for syndrome measurement and with shallow syndrome extraction sequences. We then numerically demonstrate their favorable performance. In particular, we show that they improve the threshold for scalable quantum computation in MZM-based systems by an order of magnitude, and significantly reduce space and time overheads below threshold.
The typical model for measurement noise in quantum error correction is to randomly flip the binary measurement outcome. In experiments, measurements yield much richer information - e.g., continuous current values, discrete photon counts - which is then mapped into binary outcomes by discarding some of this information. In this work, we consider methods to incorporate all of this richer information, typically called soft information, into the decoding of quantum error correction codes, and in particular the surface code. We describe how to modify both the Minimum Weight Perfect Matching and Union-Find decoders to leverage soft information, and demonstrate these soft decoders outperform the standard (hard) decoders that can only access the binary measurement outcomes. Moreover, we observe that the soft decoder achieves a threshold 25\% higher than any hard decoder for phenomenological noise with Gaussian soft measurement outcomes. We also introduce a soft measurement error model with amplitude damping, in which measurement time leads to a trade-off between measurement resolution and additional disturbance of the qubits. Under this model we observe that the performance of the surface code is very sensitive to the choice of the measurement time - for a distance-19 surface code, a five-fold increase in measurement time can lead to a thousand-fold increase in logical error rate. Moreover, the measurement time that minimizes the physical error rate is distinct from the one that minimizes the logical performance, pointing to the benefits of jointly optimizing the physical and quantum error correction layers.
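As a concrete illustration of how soft information can enter a matching decoder, the sketch below (with hypothetical Gaussian noise parameters, not the paper's model or numbers) converts a continuous measurement outcome into a posterior flip probability and the corresponding log-odds edge weight: outcomes near the threshold become cheap for the matching decoder to flip, while clean outcomes remain expensive.

```python
import numpy as np

def soft_flip_probability(x, mu=1.0, sigma=0.4):
    """Posterior probability that the hard-thresholded outcome is wrong, assuming
    outcomes are drawn from Gaussians centered at +/- mu with width sigma and
    thresholded at zero (illustrative noise parameters, not from the paper)."""
    llr = 2.0 * mu * np.abs(x) / sigma**2  # log-likelihood ratio favoring the hard decision
    return 1.0 / (1.0 + np.exp(llr))

def matching_edge_weight(x, mu=1.0, sigma=0.4):
    """Convert the soft flip probability into the -log-odds form typically used
    as an edge weight in minimum-weight perfect matching."""
    p = soft_flip_probability(x, mu, sigma)
    return np.log((1.0 - p) / p)

for outcome in [0.05, 0.5, 1.0]:
    print(f"outcome {outcome:4.2f} -> weight {matching_edge_weight(outcome):6.2f}")
```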
Marcos Allende, Diego López León, Sergio Cerón, Antonio Leal, Adrián Pareja, Marcelo Da Silva, Alejandro Pardo, Duncan Jones, David Worrall, Ben Merriman, Jonathan Gilmore, Nick Kitchener, Salvador E. Venegas-Andraca
This paper describes the work carried out by the Inter-American Development Bank, the IDB Lab, LACChain, Cambridge Quantum Computing (CQC), and Tecnologico de Monterrey to identify and eliminate quantum threats in blockchain networks. The advent of quantum computing threatens internet protocols and blockchain networks because they utilize non-quantum resistant cryptographic algorithms. When quantum computers become robust enough to run Shor's algorithm on a large scale, the most used asymmetric algorithms, utilized for digital signatures and message encryption, such as RSA, (EC)DSA, and (EC)DH, will no longer be secure. Quantum computers will be able to break them within a short period of time. Similarly, Grover's algorithm provides a quadratic advantage for mining blocks in certain consensus protocols such as proof of work. Today, there are hundreds of billions of dollars denominated in cryptocurrencies that rely on blockchain ledgers, as well as thousands of blockchain-based applications storing value in blockchain networks. Cryptocurrencies and blockchain-based applications require solutions that guarantee quantum resistance in order to preserve the integrity of data and assets in their public and immutable ledgers. We have designed and developed a layer-two solution to secure the exchange of information between blockchain nodes over the internet and introduced a second signature in transactions using post-quantum keys. Our versatile solution can be applied to any blockchain network. In our implementation, quantum entropy was provided via the IronBridge Platform from CQC and we used LACChain Besu as the blockchain network.
Semi-device-independent quantum key distribution aims to achieve a balance between the highest level of security, device independence, and experimental feasibility. Semi-quantum key distribution presents an intriguing approach that seeks to minimize users' reliance on quantum operations while maintaining security, thus enabling the development of simplified and hardware fault-tolerant quantum protocols. In this work, we introduce a coherence-based, semi-device-independent, semi-quantum key distribution protocol built upon a noise-robust version of a coherence equality game that witnesses various types of coherence. Security is proven in the bounded quantum storage model, requiring users to implement only classical operations, specifically fixed-basis detections.
Errors in quantum logic gates are usually modeled by quantum process matrices (CPTP maps). But process matrices can be opaque and unwieldy. We show how to transform a gate's process matrix into an error generator that represents the same information more usefully. We construct a basis of simple and physically intuitive elementary error generators, classify them, and show how to represent any gate's error generator as a mixture of elementary error generators with various rates. Finally, we show how to build a large variety of reduced models for gate errors by combining elementary error generators and/or entire subsectors of generator space. We conclude with a few examples of reduced models, including one with just $9N^2$ parameters that describes almost all commonly predicted errors on an N-qubit processor.
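A minimal sketch of the basic transformation, assuming the convention that the noisy gate factors as $G_{\rm noisy} = e^{L}\,G_{\rm ideal}$ in a superoperator representation such as the Pauli transfer matrix (the toy gate and noise rate below are illustrative, and the matrix logarithm carries the usual branch-cut caveats):

```python
import numpy as np
from scipy.linalg import logm

def error_generator(G_noisy, G_ideal):
    """Post-gate error generator L defined by G_noisy = expm(L) @ G_ideal,
    with both gates given as superoperator (e.g. Pauli transfer) matrices."""
    return logm(G_noisy @ np.linalg.inv(G_ideal))

# Toy single-qubit example: an ideal Z rotation followed by weak depolarization.
theta, p = 0.3, 0.01
Rz = np.array([[1, 0, 0, 0],
               [0, np.cos(theta), -np.sin(theta), 0],
               [0, np.sin(theta),  np.cos(theta), 0],
               [0, 0, 0, 1]])
depol = np.diag([1.0, 1 - p, 1 - p, 1 - p])
L = error_generator(depol @ Rz, Rz)
print(np.round(L.real, 4))  # small generator dominated by the depolarizing rates
```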
In this paper, we present the Quantum Information Software Developer Kit (Qiskit) for teaching quantum computing to undergraduate students with basic knowledge of the postulates of quantum mechanics. We focus on presenting the construction of the programs on any common laptop or desktop computer and their execution on real quantum processors through remote access to the quantum hardware available on the IBM Quantum Experience platform. The codes are made available throughout the text so that readers, even with little experience in scientific computing, can reproduce them and adapt the methods discussed in this paper to address their own quantum computing projects. The results presented are in agreement with theoretical predictions and show the effectiveness of the Qiskit package as a robust classroom working tool for the introduction of applied concepts of quantum computing and quantum information theory.
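For flavor, a minimal program of the kind the paper describes, here sampling a Bell state on a local simulator instead of the IBM Quantum Experience hardware; the exact imports depend on the installed Qiskit version (a recent release with qiskit-aer is assumed):

```python
from qiskit import QuantumCircuit
from qiskit_aer import AerSimulator

qc = QuantumCircuit(2, 2)
qc.h(0)          # put qubit 0 in superposition
qc.cx(0, 1)      # entangle qubits 0 and 1
qc.measure([0, 1], [0, 1])

result = AerSimulator().run(qc, shots=1024).result()
print(result.get_counts())   # roughly a 50/50 mix of '00' and '11'
```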
The set of all electronic states that can be expressed as a single Slater determinant forms a submanifold, isomorphic to the Grassmannian, of the projective Hilbert space of wave functions. We explore this fact by using tools of the Riemannian geometry of Grassmannians, as described by Absil et al. [Acta App. Math. 80, 199 (2004)], to propose an algorithm that converges to a Slater determinant that is a critical point of the overlap function with a correlated wave function. This algorithm can be applied to quantify the entanglement or correlation of a wave function. We show that this algorithm is equivalent to the Newton method using the standard parametrization of Slater determinants by orbital rotations, but it can be more efficiently implemented because the orbital basis used to express the correlated wave function is kept fixed throughout the iterations. We present the equations of this method for a general configuration interaction wave function and for a wave function with up to double excitations over a reference determinant. Applications of this algorithm to selected electronic systems are also presented and discussed.
In order to support near-term applications of quantum computing, a new compute paradigm has emerged--the quantum-classical cloud--in which quantum computers (QPUs) work in tandem with classical computers (CPUs) via a shared cloud infrastructure. In this work, we enumerate the architectural requirements of a quantum-classical cloud platform, and present a framework for benchmarking its runtime performance. In addition, we walk through two platform-level enhancements, parametric compilation and active qubit reset, that specifically optimize a quantum-classical architecture to support variational hybrid algorithms (VHAs), the most promising applications of near-term quantum hardware. Finally, we show that integrating these two features into the Rigetti Quantum Cloud Services (QCS) platform results in considerable improvements to the latencies that govern algorithm runtime.
Near-term applications of quantum information processors will rely on optimized circuit implementations to minimize gate depth and therefore mitigate the impact of gate errors in noisy intermediate-scale quantum (NISQ) computers. More expressive gate sets can significantly reduce the gate depth of generic circuits. Similarly, structured algorithms can benefit from a gate set that more directly matches the symmetries of the problem. The XY interaction generates a family of gates that provides expressiveness well tailored to quantum chemistry as well as to combinatorial optimization problems, while also offering reductions in circuit depth for more generic circuits. Here we implement the full family of XY entangling gates in a transmon-based superconducting qubit architecture. We use a composite pulse scheme that requires calibration of only a single gate pulse and maintains constant gate time for all members of the family. This allows us to maintain a high fidelity implementation of the gate across all entangling angles. The average fidelity of gates sampled from this family ranges from $95.67 \pm 0.60\%$ to $99.01 \pm 0.15\%$, with a median fidelity of $97.35 \pm 0.17\%$, which approaches the coherence-limited gate fidelity of the qubit pair. We furthermore demonstrate the utility of XY in a quantum approximate optimization algorithm in enabling circuit depth reductions compared to the CZ-only case.
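In one common convention (assumed here, since the abstract does not fix one), the XY family interpolates between the identity and iSWAP:

\[
XY(\theta)=\begin{pmatrix}1&0&0&0\\ 0&\cos\frac{\theta}{2}&i\sin\frac{\theta}{2}&0\\ 0&i\sin\frac{\theta}{2}&\cos\frac{\theta}{2}&0\\ 0&0&0&1\end{pmatrix},\qquad XY(\pi)=i\mathrm{SWAP},\qquad XY(\pi/2)=\sqrt{i\mathrm{SWAP}}.
\]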
With superconducting transmon qubits --- a promising platform for quantum information processing --- two-qubit gates can be performed using AC signals to modulate a tunable transmon's frequency via magnetic flux through its SQUID loop. However, frequency tunability introduces an additional dephasing mechanism from magnetic fluctuations. In this work, we experimentally study the contribution of instrumentation noise to flux instability and the resulting error rate of parametrically activated two-qubit gates. Specifically, we measure the qubit coherence time under flux modulation while injecting broadband noise through the flux control channel. We model the noise's effect using a dephasing rate model that matches well to the measured rates, and use it to prescribe a noise floor required to achieve a desired two-qubit gate infidelity. Finally, we demonstrate that low-pass filtering the AC signal used to drive two-qubit gates between the first and second harmonic frequencies can reduce qubit sensitivity to flux noise at the AC sweet spot (ACSS), confirming an earlier theoretical prediction. The framework we present to determine the instrumentation noise floors required for high-fidelity two-qubit entangling gates should be extensible to other quantum information processing systems.
In state-of-the-art quantum computing platforms, including superconducting qubits and trapped ions, imperfections in two-qubit entangling gates are the dominant contribution of error to system-wide performance. Recently, a novel two-qubit parametric gate was proposed and demonstrated with superconducting transmon qubits. This gate is activated through RF modulation of the transmon frequency and can be operated at an amplitude where the performance is first-order insensitive to flux noise. In this work we experimentally validate the existence of this AC sweet spot and demonstrate its dependence on white noise power from room temperature electronics. With these factors in place, we measure coherence-limited entangling-gate fidelities as high as $99.2 \pm 0.15\%$.
The ubiquitous presence of $1/f$ flux noise was a significant barrier to long coherence times in superconducting qubits until the development of qubits that could operate in static, flux-noise-insensitive configurations commonly referred to as `sweet spots'. Several proposals for entangling gates in superconducting qubits tune the flux bias away from these spots, thus reintroducing the dephasing problem to varying degrees. Here we revisit one such proposal, where interactions are parametrically activated by rapidly modulating the flux bias of the qubits around these sweet spots, and study the effect of modulation on the sensitivity to flux noise. We explicitly calculate how dephasing rates depend on different components of the flux-noise spectrum, and show that, while these parametric gates are insensitive to $1/f$ flux noise, dephasing rates are increased under modulation and dominated by white noise. Remarkably, we find that simple filtering of the flux control signal allows entangling gates to operate at a novel sweet spot for dephasing under flux modulation. This sweet spot, which we dub the AC sweet spot, is insensitive to $1/f$ flux noise, and much less sensitive to white noise in the control electronics, allowing for interactions whose quality is limited only by higher order effects and other sources of noise.
Noisy intermediate-scale quantum computing devices are an exciting platform for the exploration of the power of near-term quantum applications. Performing nontrivial tasks in such devices requires a fundamentally different approach than what would be used on an error-corrected quantum computer. One such approach is to use hybrid algorithms, where problems are reduced to a parameterized quantum circuit that is often optimized in a classical feedback loop. Here we describe one such hybrid algorithm for machine learning tasks by building upon the classical algorithm known as random kitchen sinks. Our technique, called quantum kitchen sinks, uses quantum circuits to nonlinearly transform classical inputs into features that can then be used in a number of machine learning algorithms. We demonstrate the power and flexibility of this proposal by using it to solve binary classification problems for synthetic datasets as well as handwritten digits from the MNIST database. Using the Rigetti quantum virtual machine, we show that small quantum circuits provide significant performance lift over standard linear classical algorithms, reducing classification error rates from 50% to $<0.1\%$, and from $4.1\%$ to $1.4\%$ in these two examples, respectively. Further, we are able to run the MNIST classification problem, using full-sized MNIST images, on a Rigetti quantum processing unit, finding a modest performance lift over the linear baseline.
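A simplified simulation of the quantum kitchen sinks idea (single-qubit episodes with dense random projections, rather than the multi-qubit sparse circuits used in the paper; the dataset and hyperparameters are illustrative): classical inputs are mapped to random rotation angles, the sampled measurement bits become features, and a linear classifier is trained on top.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def qks_features(X, n_episodes=200, scale=1.0):
    """Quantum-kitchen-sinks-style features: each episode applies RX(omega.x + beta)
    to |0> (simulated analytically) and records one sampled Z-basis measurement bit."""
    n_samples, dim = X.shape
    omegas = scale * rng.normal(size=(n_episodes, dim))
    betas = rng.uniform(0, 2 * np.pi, size=n_episodes)
    thetas = X @ omegas.T + betas                 # shape (n_samples, n_episodes)
    p1 = np.sin(thetas / 2) ** 2                  # P(measure 1) after RX(theta) on |0>
    return (rng.uniform(size=p1.shape) < p1).astype(float)

# Toy nonlinear problem: two concentric rings, not linearly separable in raw coordinates.
n = 400
radii = np.concatenate([rng.normal(1.0, 0.1, n), rng.normal(2.0, 0.1, n)])
angles = rng.uniform(0, 2 * np.pi, 2 * n)
X = np.c_[radii * np.cos(angles), radii * np.sin(angles)]
y = np.concatenate([np.zeros(n), np.ones(n)])

Phi = qks_features(X)
clf = LogisticRegression(max_iter=1000).fit(Phi, y)
print("training accuracy on the nonlinear features:", clf.score(Phi, y))
```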
The promise of quantum computing with imperfect qubits relies on the ability of a quantum computing system to scale cheaply through error correction and fault-tolerance. While fault-tolerance requires relatively mild assumptions about the nature of qubit errors, the overhead associated with coherent and non-Markovian errors can be orders of magnitude larger than the overhead associated with purely stochastic Markovian errors. One proposal to address this challenge is to randomize the circuits of interest, shaping the errors to be stochastic Pauli errors but leaving the aggregate computation unaffected. The randomization technique can also suppress couplings to slow degrees of freedom associated with non-Markovian evolution. Here we demonstrate the implementation of Pauli-frame randomization in a superconducting circuit system, exploiting a flexible programming and control infrastructure to achieve this with low effort. We use high-accuracy gate-set tomography to characterize in detail the properties of the circuit error, with and without the randomization procedure, which allows us to make rigorous statements about Markovianity as well as the nature of the observed errors. We demonstrate that randomization suppresses signatures of non-Markovian evolution to statistically insignificant levels, from a Markovian model violation ranging from $43\sigma$ to $1987\sigma$, down to violations between $0.3\sigma$ and $2.7\sigma$ under randomization. Moreover, we demonstrate that, under randomization, the experimental errors are well described by a Pauli error model, with model violations that are similarly insignificant (between $0.8\sigma$ and $2.7\sigma$). Importantly, all these improvements in the model accuracy were obtained without degradation to fidelity, and with some improvements to error rates as quantified by the diamond norm.
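A toy single-qubit illustration of the randomization idea (not the experiment's compilation pipeline): each gate is dressed with fresh random Paulis, and their net effect is tracked as a classical frame, so the overall computation is unchanged while the errors experienced by the hardware are scrambled toward stochastic Pauli noise.

```python
import numpy as np

rng = np.random.default_rng(2)

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
PAULIS = [I2, X, Y, Z]

def randomize(circuit):
    """Dress each gate G_k as P_k G_k P_{k-1}^dagger with fresh random Paulis.
    The final Pauli frame is returned so it can be undone in software
    (e.g. by reinterpreting measurement outcomes)."""
    frame = I2
    dressed = []
    for G in circuit:
        P = PAULIS[rng.integers(4)]
        dressed.append(P @ G @ frame.conj().T)
        frame = P
    return dressed, frame

def product(gates):
    U = np.eye(2, dtype=complex)
    for G in gates:
        U = G @ U
    return U

# Check: up to the classically tracked final frame, the randomized circuit
# implements exactly the same unitary as the original one.
circuit = [np.linalg.qr(rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)))[0]
           for _ in range(6)]
dressed, frame = randomize(circuit)
assert np.allclose(frame.conj().T @ product(dressed), product(circuit))
print("randomized circuit matches the original up to the tracked frame")
```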
The measure of quantum entanglement is determined for any spin-1/2 Heisenberg dimer, either ferromagnetic or antiferromagnetic, in the presence of an external magnetic field. The physical quantity proposed as a measure of thermal quantum entanglement is the distance between states defined through the Hilbert-Schmidt norm. It is shown that for ferromagnetic systems there is no entanglement at all. In contrast, antiferromagnetic spin-1/2 dimers exhibit entanglement, even under an applied magnetic field, for temperatures below the decoherence temperature -- the temperature above which the entanglement vanishes. In addition, the decoherence temperature is shown to be proportional to the exchange coupling constant and independent of the applied magnetic field; consequently, the entanglement may not be destroyed by external magnetic fields -- the phenomenon of \textit{magnetic shielding} of quantum entangled states. This effect is discussed for the binuclear nitrosyl iron complex [Fe$_2$(SC$_3$H$_5$N$_2$)$_2$(NO)$_4$] and it is foreseen that the quantum entanglement survives even under high magnetic fields of the order of several tesla.
J. S. Otterbach, R. Manenti, N. Alidoust, A. Bestwick, M. Block, B. Bloom, S. Caldwell, N. Didier, E. Schuyler Fried, S. Hong, P. Karalekas, C. B. Osborn, A. Papageorge, E. C. Peterson, G. Prawiroatmodjo, N. Rubin, Colm A. Ryan, D. Scarabelli, M. Scheer, E. A. Sete, et al (10)
Machine learning techniques have led to broad adoption of a statistical model of computing. The statistical distributions natively available on quantum processors are a superset of those available classically. Harnessing this attribute has the potential to accelerate or otherwise improve machine learning relative to purely classical performance. A key challenge toward that goal is learning to hybridize classical computing resources and traditional learning techniques with the emerging capabilities of general purpose quantum processors. Here, we demonstrate such hybridization by training a 19-qubit gate model processor to solve a clustering problem, a foundational challenge in unsupervised learning. We use the quantum approximate optimization algorithm in conjunction with a gradient-free Bayesian optimization to train the quantum machine. This quantum/classical hybrid algorithm shows robustness to realistic noise, and we find evidence that classical optimization can be used to train around both coherent and incoherent imperfections.
We present a complete study of the influence of cavity size on the spontaneous decay of an excited atomic state, with the atom roughly approximated by a harmonic oscillator. We confine the oscillator-field system in a perfectly reflective spherical cavity of radius $R$ and work in the formalism of dressed coordinates and states, which allows us to perform non-perturbative calculations for the probability of the atom to decay spontaneously from the first excited state to the ground state. In free space, $R\to\infty$, we obtain the known exact results, and for sufficiently small $R$ we develop a power expansion calculation in this parameter. Furthermore, for arbitrary cavity radius, we perform numerical computations and show complete agreement with the exact results for $R\to\infty$ and with the power expansion results for small cavities, demonstrating the robustness of our results. We find that, in general, the spontaneous decay of an excited state of the atom increases with the cavity radius and vice versa. For sufficiently small cavities the atom practically does not undergo spontaneous decay, whereas for large cavities the spontaneous decay approaches the free-space $R\to\infty$ value. On the other hand, for some particular values of the cavity radius, at which the cavity is in resonance with the natural frequency of the atom, the spontaneous decay transition probability is enhanced compared to the free-space case. Finally, we show how the spontaneous decay probability goes from oscillatory behaviour in time, for finite cavity radius, to an almost exponential decay in free space.
The contribution from quantum vacuum fluctuations of a real massless scalar field to the motion of a test particle that interacts with the field in the presence of a perfectly reflecting flat boundary is investigated here. There are no quantum-induced dispersions in the motion of the particle when it is alone in empty space. However, when a reflecting wall is introduced, dispersions occur with a magnitude that depends on how fast the system evolves between the two scenarios. A possible way of implementing this process would be by means of an idealized sudden switching, for which the transition occurs instantaneously. Although the sudden process is a simple and mathematically convenient idealization, it introduces some divergences into the results, particularly at a time corresponding to a round trip of a light signal between the particle and the wall. It is shown that the use of smooth switching functions, besides regularizing such divergences, enables us to better understand the behavior of the quantum dispersions induced on the motion of the particle. Furthermore, the act of modifying the vacuum state of the system leads to a change in the particle energy that depends on how fast the transition between these states is implemented. Possible implications of these results for the similar case of an electric charge near a perfectly conducting wall are discussed.
In many experiments on microscopic quantum systems, it is implicitly assumed that when a macroscopic procedure or "instruction" is repeated many times -- perhaps in different contexts -- each application results in the same microscopic quantum operation. But in practice, the microscopic effect of a single macroscopic instruction can easily depend on its context. If undetected, this can lead to unexpected behavior and unreliable results. Here, we design and analyze several tests to detect context-dependence. They are based on invariants of matrix products, and while they can be as data intensive as quantum process tomography, they do not require tomographic reconstruction, and are insensitive to imperfect knowledge about the experiments. We also construct a measure of how unitary (reversible) an operation is, and show how to estimate the volume of physical states accessible by a quantum operation.
S. Caldwell, N. Didier, C. A. Ryan, E. A. Sete, A. Hudson, P. Karalekas, R. Manenti, M. Reagor, M. P. da Silva, R. Sinclair, E. Acala, N. Alidoust, J. Angeles, A. Bestwick, M. Block, B. Bloom, A. Bradley, C. Bui, L. Capelluto, R. Chilcott, et al (42)
We describe and implement a family of entangling gates activated by radio-frequency flux modulation applied to a tunable transmon that is statically coupled to a neighboring transmon. The effect of this modulation is the resonant exchange of photons directly between levels of the two-transmon system, obviating the need for mediating qubits or resonator modes and allowing for the full utilization of all qubits in a scalable architecture. The resonance condition is selective in both the frequency and amplitude of modulation and thus alleviates frequency crowding. We demonstrate the use of three such resonances to produce entangling gates that enable universal quantum computation: one iSWAP gate and two distinct controlled Z gates. We report interleaved randomized benchmarking results indicating gate error rates of 6% for the iSWAP (duration 135 ns) and 9% for the controlled Z gates (durations 175 ns and 270 ns), limited largely by qubit coherence.
M. Reagor, C. B. Osborn, N. Tezak, A. Staley, G. Prawiroatmodjo, M. Scheer, N. Alidoust, E. A. Sete, N. Didier, M. P. da Silva, E. Acala, J. Angeles, A. Bestwick, M. Block, B. Bloom, A. Bradley, C. Bui, S. Caldwell, L. Capelluto, R. Chilcott, et al (39)
We show that parametric coupling techniques can be used to generate selective entangling interactions for multi-qubit processors. By inducing coherent population exchange between adjacent qubits under frequency modulation, we implement a universal gateset for a linear array of four superconducting qubits. An average process fidelity of $\mathcal{F}=93\%$ is estimated for three two-qubit gates via quantum process tomography. We establish the suitability of these techniques for computation by preparing a four-qubit maximally entangled state and comparing the estimated state fidelity against the expected performance of the individual entangling gates. In addition, we prepare an eight-qubit register in all possible bitstring permutations and monitor the fidelity of a two-qubit gate across one pair of these qubits. Across all such permutations, an average fidelity of $\mathcal{F}=91.6\pm2.6\%$ is observed. These results thus offer a path to a scalable architecture with high selectivity and low crosstalk.
Building a scalable quantum computer requires developing appropriate models to understand and verify its complex quantum dynamics. We focus on superconducting quantum processors based on transmons for which full numerical simulations are already challenging at the level of qubytes. It is thus highly desirable to develop accurate methods of modeling qubit networks that do not rely solely on numerical computations. Using systematic perturbation theory to large orders in the transmon regime, we derive precise analytic expressions of the transmon parameters. We apply our results to the case of parametrically-modulated transmons to study recently-implemented parametrically-activated entangling gates.
Several platforms are currently being explored for simulating physical systems whose complexity increases faster than polynomially with the number of particles or degrees of freedom in the system. Defects and vacancies in semiconductors or dielectric materials, magnetic impurities embedded in solid helium \cite{lemeshko13}, atoms in optical lattices, photons, trapped ions and superconducting qubits are among the candidates for predicting the behaviour of spin glasses, spin-liquids, and classical magnetism, among other phenomena with practical technological applications. Here we investigate the potential of polariton graphs as an efficient simulator for finding the global minimum of the $XY$ Hamiltonian. By imprinting polariton condensate lattices of bespoke geometries we show that we can simulate a large variety of systems undergoing the U(1) symmetry breaking transitions. We realise various magnetic phases, such as ferromagnetic, anti-ferromagnetic, and frustrated spin configurations on unit cells of various lattices: square, triangular, linear and a disordered graph. Our results provide a route to study unconventional superfluids, spin-liquids, the Berezinskii-Kosterlitz-Thouless phase transition, and classical magnetism, among the many systems that are described by the $XY$ Hamiltonian.
The main promise of quantum computing is to efficiently solve certain problems that are prohibitively expensive for a classical computer. Most problems with a proven quantum advantage involve the repeated use of a black box, or oracle, whose structure encodes the solution. One measure of the algorithmic performance is the query complexity, i.e., the scaling of the number of oracle calls needed to find the solution with a given probability. Few-qubit demonstrations of quantum algorithms, such as Deutsch-Jozsa and Grover, have been implemented across diverse physical systems such as nuclear magnetic resonance, trapped ions, optical systems, and superconducting circuits. However, at the small scale, these problems can already be solved classically with a few oracle queries, and the attainable quantum advantage is modest. Here we solve an oracle-based problem, known as learning parity with noise, using a five-qubit superconducting processor. Running classical and quantum algorithms on the same oracle, we observe a large gap in query count in favor of quantum processing. We find that this gap grows by orders of magnitude as a function of the error rates and the problem size. This result demonstrates that, while complex fault-tolerant architectures will be required for universal quantum computing, a quantum advantage already emerges in existing noisy systems.
The minimum probability of error (MPE) measurement discriminates between a set of candidate quantum states with the minimum average error probability allowed by quantum mechanics. Conditions for a measurement to be MPE were derived by Yuen, Kennedy and Lax (YKL). MPE measurements have been found for states that form a single orbit under a group action, i.e., there is a transitive group action on the states in the set. For such state sets, termed geometrically uniform (GU) by Forney, it was shown that the `pretty good measurement' (PGM) attains the MPE. Even so, evaluating the actual probability of error (and other performance metrics) attained by the PGM on a GU set involves inverting large matrices, and is not easy in general. Our first contribution is a formula for the MPE and conditional probabilities of GU sets, using group representation theory. Next, we consider sets of pure states that have multiple orbits under the group action. Such states are termed compound geometrically uniform (CGU). MPE measurements for general CGU sets are not known. In this paper, we show how our representation-theoretic description of optimal measurements for GU sets naturally generalizes to the CGU case. We show how to compute the MPE measurement for CGU sets by reducing the problem to solving a few simultaneous equations. The number of equations depends on the sizes of the multiplicity space of irreducible representations. For many common group representations (such as those of several practical good linear codes), this is much more tractable than solving large semi-definite programs---which is what is needed to solve the YKL conditions numerically for arbitrary state sets. We show how to evaluate MPE measurements for CGU states for some examples relevant to quantum-limited classical optical communication.
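A small numerical sketch of the pretty good measurement for pure states (illustrative only; the trine states below form a geometrically uniform set for which the PGM is known to attain the minimum probability of error, with success probability 2/3):

```python
import numpy as np

def pretty_good_measurement(states, priors):
    """PGM elements Pi_i = rho^{-1/2} p_i |psi_i><psi_i| rho^{-1/2}, with the
    inverse square root taken on the support of rho = sum_i p_i |psi_i><psi_i|."""
    rho = sum(p * np.outer(s, s.conj()) for p, s in zip(priors, states))
    vals, vecs = np.linalg.eigh(rho)
    inv_sqrt = sum(np.outer(v, v.conj()) / np.sqrt(l)
                   for l, v in zip(vals, vecs.T) if l > 1e-12)
    return [inv_sqrt @ (p * np.outer(s, s.conj())) @ inv_sqrt
            for p, s in zip(priors, states)]

def success_probability(states, priors):
    povm = pretty_good_measurement(states, priors)
    return sum(p * np.real(s.conj() @ Pi @ s)
               for p, Pi, s in zip(priors, povm, states))

# Three equiprobable "trine" states on the Bloch equator (a geometrically uniform set).
angles = [0, 2 * np.pi / 3, 4 * np.pi / 3]
states = [np.array([np.cos(a / 2), np.sin(a / 2)], dtype=complex) for a in angles]
print("P(success):", success_probability(states, [1 / 3] * 3))  # expected 2/3
```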
Typical quantum gate tomography protocols struggle with a self-consistency problem: the gate operation cannot be reconstructed without knowledge of the initial state and final measurement, but such knowledge cannot be obtained without well-characterized gates. A recently proposed technique, known as randomized benchmarking tomography (RBT), sidesteps this self-consistency problem by designing experiments to be insensitive to preparation and measurement imperfections. We implement this proposal in a superconducting qubit system, using a number of experimental improvements including implementing each of the elements of the Clifford group in single `atomic' pulses and custom control hardware to enable large overhead protocols. We show a robust reconstruction of several single-qubit quantum gates, including a unitary outside the Clifford group. We demonstrate that RBT yields physical gate reconstructions that are consistent with fidelities obtained by randomized benchmarking.
The Brownian motion of a test particle interacting with a quantum scalar field in the presence of a perfectly reflecting boundary is studied in (1 + 1)-dimensional flat spacetime. Particularly, the expressions for dispersions in velocity and position of the particle are explicitly derived and their behaviors examined. The results are similar to those corresponding to an electric charge interacting with a quantum electromagnetic field near a reflecting plane boundary, mainly regarding the divergent behavior of the dispersions at the origin (where the boundary is placed), and at the time interval corresponding to a round trip of a light pulse between the particle and the boundary. We close by addressing some effects of allowing the position of the particle to fluctuate.
We present methods and results of shot-by-shot correlation of noisy measurements to extract entangled state and process tomography in a superconducting qubit architecture. We show that averaging continuous values, rather than counting discrete thresholded values, is a valid tomographic strategy and is in fact the better choice in the low signal-to-noise regime. We show that the effort to measure $N$-body correlations from individual measurements scales exponentially with $N$, but with sufficient signal-to-noise the approach remains viable for few-body correlations. We provide a new protocol to optimally account for the transient behavior of pulsed measurements. Despite single-shot measurement fidelity that is less than perfect, we demonstrate appropriate processing to extract and verify entangled states and processes.
We describe how randomized benchmarking can be used to reconstruct the unital part of any trace-preserving quantum map, which in turn is sufficient for the full characterization of any unitary evolution, or more generally, any unital trace-preserving evolution. This approach inherits randomized benchmarking's robustness to preparation, measurement, and gate imperfections, therefore avoiding systematic errors caused by these imperfections. We also extend these techniques to efficiently estimate the average fidelity of a quantum map to unitary maps outside of the Clifford group. The unitaries we consider correspond to large circuits commonly used as building blocks to achieve scalable, universal, and fault-tolerant quantum computation. Hence, we can efficiently verify all such subcomponents of a circuit-based universal quantum computer. In addition, we rigorously bound the time and sampling complexities of randomized benchmarking procedures, proving that the required non-linear estimation problem can be solved efficiently.
Laser light is widely used for communication and sensing applications, so the optimal discrimination of coherent states--the quantum states of light emitted by a laser--has immense practical importance. However, quantum mechanics imposes a fundamental limit on how well different coherent states can be distinguished, even with perfect detectors, and limits such discrimination to have a finite minimum probability of error. While conventional optical receivers lead to error rates well above this fundamental limit, Dolinar found an explicit receiver design involving optical feedback and photon counting that can achieve the minimum probability of error for discriminating any two given coherent states. The generalization of this construction to larger sets of coherent states has proven to be challenging, evidencing that there may be a limitation inherent to a linear-optics-based adaptive measurement strategy. In this Letter, we show how to achieve optimal discrimination of any set of coherent states using a resource-efficient quantum computer. Our construction leverages a recent result on discriminating multi-copy quantum hypotheses (arXiv:1201.6625) and properties of coherent states. Furthermore, our construction is reusable, composable, and applicable to designing quantum-limited processing of coherent-state signals to optimize any metric of choice. As illustrative examples, we analyze the performance of discriminating a ternary alphabet, and show how the quantum circuit of a receiver designed to discriminate a binary alphabet can be reused in discriminating multimode hypotheses. Finally, we show our result can be used to achieve the quantum limit on the rate of classical information transmission on a lossy optical channel, which is known to exceed the Shannon rate of all conventional optical receivers.
Sideband transitions have been shown to generate controllable interaction between superconducting qubits and microwave resonators. Up to now, these transitions have been implemented with voltage drives on the qubit or the resonator, with the significant disadvantage that such implementations only lead to second-order sideband transitions. Here we propose an approach to achieve first-order sideband transitions by relying on controlled oscillations of the qubit frequency using a flux-bias line. Not only can first-order transitions be significantly faster, but the same technique can be employed to implement other tunable qubit-resonator and qubit-qubit interactions. We discuss in detail how such first-order sideband transitions can be used to implement a high fidelity controlled-NOT operation between two transmons coupled to the same resonator.
Easwar Magesan, Jay M. Gambetta, B. R. Johnson, Colm A. Ryan, Jerry M. Chow, Seth T. Merkel, Marcus P. da Silva, George A. Keefe, Mary B. Rothwell, Thomas A. Ohki, Mark B. Ketchen, M. Steffen
We describe a scalable experimental protocol for obtaining estimates of the error rate of individual quantum computational gates. This protocol, in which random Clifford gates are interleaved between a gate of interest, provides a bounded estimate of the average error of the gate under test so long as the average variation of the noise affecting the full set of Clifford gates is small. This technique takes into account both state preparation and measurement errors and is scalable in the number of qubits. We apply this protocol to a superconducting qubit system and find gate errors that compare favorably with the gate errors extracted via quantum process tomography.
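The point estimate underlying this protocol, as commonly stated (the paper also derives bounds accounting for the variation of the noise across the Clifford gates), compares the interleaved and reference decay parameters:

\[
r_{\mathcal{C}} \approx \frac{(d-1)\left(1 - p_{\bar{\mathcal{C}}}/p\right)}{d},
\]

where $p$ is the depolarizing decay parameter fitted to the reference (Clifford-only) sequences, $p_{\bar{\mathcal{C}}}$ is the decay parameter with the gate of interest interleaved, and $d = 2^n$ for $n$ qubits.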
Experimental implementations of quantum information processing have now reached a level of sophistication where quantum process tomography is impractical. The number of experimental settings as well as the computational cost of the data post-processing now translates to days of effort to characterize even experiments with as few as 8 qubits. Recently a more practical approach to determine the fidelity of an experimental quantum process has been proposed, where the experimental data is compared directly to an ideal process using Monte Carlo sampling. Here we present an experimental implementation of this scheme in a circuit quantum electrodynamics setup to determine the fidelity of two-qubit gates, such as the CPHASE and the CNOT gate, and three-qubit gates, such as the Toffoli gate and two sequential CPHASE gates.
Due to the technical difficulty of building large quantum computers, it is important to be able to estimate how faithful a given implementation is to an ideal quantum computer. The common approach of completely characterizing the computation process via quantum process tomography requires an exponential amount of resources, and thus is not practical even for relatively small devices. We address this problem by demonstrating that twirling experiments, previously used to efficiently characterize the average fidelity of quantum memories, can easily be adapted to estimate the average fidelity of the experimental implementation of important quantum computation processes, such as unitaries in the Clifford group, in a practical and efficient manner applicable to current quantum devices. Using this procedure, we demonstrate state-of-the-art coherent control of an ensemble of magnetic moments of nuclear spins in a single crystal solid by implementing the encoding operation for a 3 qubit code with only a 1% degradation in average fidelity, discounting preparation and measurement errors. We also highlight one of the advances that was instrumental in achieving such high fidelity control.
The quantum Toffoli gate allows universal reversible classical computation. It is also an important primitive in many quantum circuits and quantum error correction schemes. Here we demonstrate the realization of a Toffoli gate with three superconducting transmon qubits coupled to a microwave resonator. By exploiting the third energy level of the transmon qubit, the number of elementary gates needed for the implementation of the Toffoli gate, as well as the total gate time can be reduced significantly in comparison to theoretical proposals using two-level systems only. We characterize the performance of the gate by full process tomography and Monte Carlo process certification. The gate fidelity is found to be $68.5\pm0.5$%.
Teleportation of a quantum state may be used for distributing entanglement between distant qubits in quantum communication and for quantum computation. Here we demonstrate the implementation of a teleportation protocol, up to the single-shot measurement step, with superconducting qubits coupled to a microwave resonator. Using full quantum state tomography and evaluating an entanglement witness, we show that the protocol generates a genuine tripartite entangled state of all three qubits. Calculating the projection of the measured density matrix onto the basis states of two qubits allows us to reconstruct the teleported state. Repeating this procedure for a complete set of input states we find an average output state fidelity of 88%.
Quantum tomography is the main method used to assess the quality of quantum information processing devices, but its complexity presents a major obstacle for the characterization of even moderately large systems. The number of experimental settings required to extract complete information about a device grows exponentially with its size, and so does the running time for processing the data generated by these experiments. Part of the problem is that tomography generates much more information than is usually sought. Taking a more targeted approach, we develop schemes that enable (i) estimating the fidelity of an experiment relative to a theoretical ideal description, and (ii) learning which description within a reduced subset best matches the experimental data. Both these approaches yield a significant reduction in resources compared to tomography. In particular, we demonstrate that fidelity can be estimated from a number of simple experimental settings that is independent of the system size, removing an important roadblock for the experimental study of larger quantum information processing units.
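The flavor of fidelity estimator involved can be summarized as follows (with $W_k$ the Pauli operators and $d$ the Hilbert-space dimension; the notation is assumed for illustration):

\[
F(\rho,\sigma) = \mathrm{tr}(\rho\sigma) = \sum_k \chi_\rho(k)\,\chi_\sigma(k), \qquad \chi_\rho(k) = \frac{\mathrm{tr}(W_k\rho)}{\sqrt{d}},
\]

so sampling $k$ with probability $\chi_\sigma(k)^2$, which is computable from the ideal state $\sigma$, and measuring only the corresponding $W_k$ yields the unbiased estimate $F = \mathbb{E}\!\left[\chi_\rho(k)/\chi_\sigma(k)\right]$ from a number of experimental settings that does not grow with system size.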
C. Lang, D. Bozyigit, C. Eichler, L. Steffen, J. M. Fink, A. A. Abdumalikov Jr., M. Baur, S. Filipp, M. P. da Silva, A. Blais, A. Wallraff
Creating a train of single photons and monitoring its propagation and interaction is challenging in most physical systems, as photons generally interact very weakly with other systems. However, when confining microwave frequency photons in a transmission line resonator, effective photon-photon interactions can be mediated by qubits embedded in the resonator. Here, we observe the phenomenon of photon blockade through second-order correlation function measurements. The experiments clearly demonstrate antibunching in a continuously pumped source of single microwave photons measured using microwave beam splitters, linear amplifiers, and quadrature amplitude detectors. We also investigate resonance fluorescence and Rayleigh scattering in Mollow-triplet-like spectra.
Correlations are important tools in the characterization of quantum fields. They can be used to describe statistical properties of the fields, such as bunching and anti-bunching, as well as to perform field state tomography. Here we analyse experiments by Bozyigit et al. [arXiv:1002.3738] where correlation functions can be observed using the measurement records of linear detectors (i.e. quadrature measurements), instead of relying on intensity or number detectors. We also describe how large amplitude noise introduced by these detectors can be quantified and subtracted from the data. This enables, in particular, the observation of first- and second-order coherence functions of microwave photon fields generated using circuit quantum-electrodynamics and propagating in superconducting transmission lines under the condition that noise is sufficiently low.
D. Bozyigit, C. Lang, L. Steffen, J. M. Fink, M. Baur, R. Bianchetti, P. J. Leek, S. Filipp, M. P. da Silva, A. Blais, A. Wallraff
At optical frequencies the radiation produced by a source, such as a laser, a black body or a single photon source, is frequently characterized by analyzing the temporal correlations of emitted photons using single photon counters. At microwave frequencies, however, there are no efficient single photon counters yet. Instead, well developed linear amplifiers allow for efficient measurement of the amplitude of an electromagnetic field. Here, we demonstrate how the properties of a microwave single photon source can be characterized using correlation measurements of the emitted radiation with such detectors. We also demonstrate the cooling of a thermal field stored in a cavity, an effect which we detect using a cross-correlation measurement of the radiation emitted at the two ends of the cavity.
In this paper we present results illustrating the power and flexibility of one-bit teleportations in quantum bus computation. We first show a scheme to perform a universal set of gates on continuous variable modes, which we call a quantum bus or qubus, using controlled phase-space rotations, homodyne detection, ancilla qubits and single qubit measurement. The resource usage for this scheme is lower than any previous scheme to date. We then illustrate how one-bit teleportations into a qubus can be used to encode qubit states into a quantum repetition code, which in turn can be used as an efficient method for producing GHZ states that can be used to create large cluster states. Each of these schemes can be modified so that teleportation measurements are post-selected to yield outputs with higher fidelity, without changing the physical parameters of the system.
The task of finding a correctable encoding that protects against some physical quantum process is in general hard. Two main obstacles are that an exponential number of experiments are needed to gain complete information about the quantum process, and known algorithmic methods for finding correctable encodings involve operations on exponentially large matrices. However, we show that in some cases it is possible to find such encodings with only partial information about the quantum process. Such useful partial information can be systematically extracted by averaging the channel under the action of a set of unitaries in a process known as "twirling". In this paper we prove that correctable encodings for a twirled channel are also correctable for the original channel. We investigate the particular case of twirling over the set of Pauli operators and qubit permutations, and show that the resulting quantum operation can be characterized experimentally in a scalable manner. We also provide a postprocessing scheme for finding unitarily correctable codes for these twirled channels which does not involve exponentially large matrices.
J. Baugh, J. Chamilliard, C. M. Chandrashekar, M. Ditty, A. Hubbard, R. Laflamme, M. Laforest, D. Maslov, O. Moussa, C. Negrevergne, M. Silva, S. Simmons, C. A. Ryan, D. G. Cory, J. S. Hodges, C. Ramanathan
This paper describes recent progress using nuclear magnetic resonance (NMR) as a platform for implementing quantum information processing (QIP) tasks. The basic ideas of NMR QIP are detailed, examining the successes and limitations of liquid and solid state experiments. Finally, a future direction for implementing quantum processors is suggested, utilizing both nuclear and electron spin degrees of freedom.
A major goal of developing high-precision control of many-body quantum systems is to realise their potential as quantum computers. Probably the most significant obstacle in this direction is the problem of "decoherence": the extreme fragility of quantum systems to environmental noise and other control limitations. The theory of fault-tolerant quantum error correction has shown that quantum computation is possible even in the presence of decoherence provided that the noise affecting the quantum system satisfies certain well-defined theoretical conditions. However, existing methods for noise characterisation have become intractable already for the systems that are controlled in today's labs. In this paper we introduce a technique based on symmetrisation that enables direct experimental characterisation of key properties of the decoherence affecting a multi-body quantum system. Our method reduces the number of experiments required by existing methods from exponential to polynomial in the number of subsystems. We demonstrate the application of this technique to the optimisation of control over nuclear spins in the solid state.
Dec 13 2006
quant-ph arXiv:quant-ph/0612097v1
In this paper we investigate stabilizer quantum error correction codes using controlled phase rotations of strong coherent probe states. We explicitly describe two methods to measure the Pauli operators which generate the stabilizer group of a quantum code. First, we show how to measure a Pauli operator acting on physical qubits using a single coherent state with large average photon number, displacement operations, and photon detection. Second, we show how to measure the stabilizer operators fault-tolerantly by the deterministic preparation of coherent cat states along with one-bit teleportations between a qubit-like encoding of coherent states and physical qubits.
Nov 29 2006
quant-ph arXiv:quant-ph/0611273v2
We discuss a simple variant of the one-way quantum computing model [R. Raussendorf and H.-J. Briegel, PRL 86, 5188, 2001], called the Pauli measurement model, where measurements are restricted to be along the eigenbases of the Pauli X and Y operators, while auxiliary qubits can be prepared both in the $\ket{+_{\pi/4}} := \frac{1}{\sqrt{2}}(\ket{0}+e^{i\pi/4}\ket{1})$ state and in the usual $\ket{+} := \frac{1}{\sqrt{2}}(\ket{0}+\ket{1})$ state. We prove the universality of this quantum computation model, and establish a standardization procedure which permits all entanglement and state preparation to be performed at the beginning of computation. This leads us to develop a direct approach to fault-tolerance by simple transformations of the entanglement graph and preparation operations, while error correction is performed naturally via syndrome-extracting teleportations.
Nov 14 2006
quant-ph arXiv:quant-ph/0611123v1
We compare the principles and experimental results of two different QPSK signal detection configurations, photon counting and super homodyning, for applications in fiber-optic Quantum Key Distribution (QKD) systems operating at telecom wavelength, using the BB84 protocol.
Nov 10 2006
quant-ph arXiv:quant-ph/0611100v1
We present a QKD system with faint pulses using self-homodyne coherent detection in optical fibers at 1543 nm. The BB84 protocol key is encoded in the optical phase using a two-electrode Mach-Zehnder modulator, producing a QPSK modulation.
Nov 10 2006
quant-ph arXiv:quant-ph/0611102v1
We present the integration of the optical and electronic subsystems of a BB84-QKD fiber link. A high-speed FPGA modem generates the random QPSK sequences for a fiber-optic delayed self-homodyne scheme using APD detectors.
Feb 17 2005
quant-ph arXiv:quant-ph/0502101v1
We calculate the error threshold for the linear optics quantum computing proposal by Knill, Laflamme and Milburn [Nature 409, pp. 46--52 (2001)] under an error model where photon detectors have efficiency <100% but all other components -- such as single photon sources, beam splitters and phase shifters -- are perfect and introduce no errors. We make use of the fact that the error model induced by the lossy hardware is that of an erasure channel, i.e., the error locations are always known. Using a method based on a Markov chain description of the error correction procedure, our calculations show that, with the 7 qubit CSS quantum code, the gate error threshold for fault tolerant quantum computation is bounded below by a value between 1.78% and 11.5% depending on the construction of the entangling gates.
May 20 2004
quant-ph arXiv:quant-ph/0405112v1
Using an error model motivated by the Knill, Laflamme, Milburn proposal for efficient linear optics quantum computing [Nature 409, 46--52, 2001], error rate thresholds for erasure errors caused by imperfect photon detectors using a 7 qubit code are derived and verified through simulation. A novel method -- based on a Markov chain description of the erasure correction procedure -- is developed and used to calculate the recursion relation describing the error rate at different encoding levels, from which the threshold is derived, matching threshold predictions by Knill, Laflamme and Milburn [quant-ph/0006120, 2000]. In particular, the erasure threshold for a gate failure rate of the same order as the measurement failure rate is found to be above 1.78%.