Simulating time evolution under quantum Hamiltonians is one of the most natural applications of quantum computers. We introduce TE-PAI, which simulates time evolution exactly by sampling random quantum circuits for the purpose of estimating observable expectation values, at the cost of an increased number of circuit repetitions. The approach builds on the Probabilistic Angle Interpolation (PAI) technique and we prove that it simulates time evolution without discretisation or algorithmic error while achieving optimally shallow circuit depths that saturate the Lieb-Robinson bound. Another significant advantage of TE-PAI is that it only requires executing random circuits that consist of Pauli rotation gates with only two kinds of rotation angles, $\pm\Delta$ and $\pi$, along with measurements. While TE-PAI is highly beneficial for NISQ devices, we additionally develop an optimised early fault-tolerant implementation using catalyst circuits and repeat-until-success teleportation, concluding that the approach requires orders of magnitude fewer T-states than conventional techniques, such as Trotterization -- we estimate that $3 \times 10^{5}$ T states are sufficient for the fault-tolerant simulation of a $100$-qubit Heisenberg spin Hamiltonian. Furthermore, TE-PAI allows for a highly configurable trade-off between circuit depth and measurement overhead by adjusting the rotation angle $\Delta$ arbitrarily. We expect that the approach will be a major enabler in the late NISQ and early fault-tolerant periods as it can compensate for circuit-depth and qubit-number limitations through an increased number of circuit repetitions.
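To make the sampling rule concrete, the following identity is an illustrative reconstruction consistent with the description above (the paper's exact decomposition and normalisation conventions may differ). Any Pauli-rotation channel $\mathcal{R}_\phi(\rho) = e^{-i\phi P/2} \rho\, e^{i\phi P/2}$ with target angle $\theta \in [0, \Delta]$ is a quasiprobability mixture of the three implementable operations,
\[
\mathcal{R}_\theta = a_1\,\mathrm{id} + a_2\,\mathcal{R}_{\Delta} + a_3\,\mathcal{R}_{\pi},
\qquad
a_1 = \frac{\cos(\theta/2)\sin\!\big((\Delta-\theta)/2\big)}{\sin(\Delta/2)},
\quad
a_2 = \frac{\sin\theta}{\sin\Delta},
\quad
a_3 = -\frac{\sin(\theta/2)\sin\!\big((\Delta-\theta)/2\big)}{\cos(\Delta/2)},
\]
with $a_1 + a_2 + a_3 = 1$. Sampling operation $k$ with probability $|a_k|/\gamma$, where $\gamma = |a_1| + |a_2| + |a_3|$, and multiplying measurement outcomes by $\gamma\,\mathrm{sign}(a_k)$ reproduces the exact rotation on average (negative target angles use $-\Delta$ by symmetry); the measurement overhead is governed by $\gamma \geq 1$, which is tuned via $\Delta$.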
Covariance root finding with classical shadows (CoVaR) was recently introduced as a new paradigm for training variational quantum circuits. Common approaches, such as variants of the Variational Quantum Eigensolver, aim to optimise a non-linear classical cost function and thus suffer from, e.g., poor local minima, high shot requirements and barren plateaus. In contrast, CoVaR fully exploits powerful classical shadows and finds joint roots of a very large number of covariances using only a logarithmic number of shots and linearly scaling classical HPC compute resources. As a result, CoVaR has been demonstrated to be particularly robust against local traps; however, its main limitation has been that it requires a sufficiently good initial state. We address this limitation by introducing an adiabatic morphing of the target Hamiltonian and demonstrate in a broad range of application examples that CoVaR can successfully prepare eigenstates of the target Hamiltonian when no initial warm start is known. CoVaR succeeds even when Hamiltonian energy gaps are very small -- this is in stark contrast to adiabatic evolution and phase estimation algorithms where circuit depths scale inversely with the Hamiltonian energy gaps. On the other hand, when the energy gaps are relatively small, adiabatic CoVaR may converge to higher excited states as opposed to a targeted specific low-lying state. Nevertheless, we exploit this feature of adiabatic CoVaR and demonstrate that it can be used to map out the low-lying spectrum of a Hamiltonian, which can be useful in practical applications, such as estimating thermal properties or in high-energy physics.
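For concreteness, the mechanics can be summarised as follows (our notation; the linear schedule is the natural choice and the paper's may differ). A state $|\psi(\vec{\theta})\rangle$ is an eigenstate of $H$ precisely when all covariances vanish for a sufficiently expressive operator pool $\{O_k\}$, and adiabatic CoVaR solves this root-finding problem along a morphing schedule,
\[
f_k(\vec{\theta}) = \langle H O_k \rangle - \langle H \rangle \langle O_k \rangle = 0 \;\;\text{for all } k,
\qquad
H \mapsto H(s_j) = (1-s_j)\, H_{\mathrm{init}} + s_j\, H_{\mathrm{target}},
\]
where $0 = s_0 < s_1 < \dots < s_M = 1$ and the root found at step $j$ serves as the warm start for step $j+1$.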
We explore the important task of applying a phase $\exp(i f(x))$ to a computational basis state $\left| x \right>$. The closely related task of rotating a target qubit by an angle depending on $f(x)$ is also studied. Such operations are key in many quantum subroutines, and often the function $f$ can be well-approximated by a piecewise linear function. Examples range from the application of diagonal Hamiltonian terms (such as the Coulomb interaction) in grid-based many-body simulation, to derivative pricing algorithms. Here we exploit a parallelisation of the piecewise approach so that all constituent elementary rotations are performed simultaneously, that is, we achieve a total rotation depth of one. Moreover, we explore the use of recursive catalyst 'towers' to implement these elementary rotations efficiently. Depending on the choice of implementation strategy, we find a depth as low as $O(\log n + \log S)$ for a register of $n$ qubits and a piecewise approximation of $S$ sections. In the limit of multiple repetitions of the oracle, we find that catalyst tower approaches have an $O(S \cdot n)$ T-count, whereas linear interpolation with QROM has an $O(n^{\log_2 3})$ T-count.
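To see why a single linear section costs only rotation depth one (an illustrative numpy check, not the paper's catalyst-based construction): writing $x = \sum_k 2^k x_k$, the phase $\exp(i a x)$ factorises into $n$ commuting single-qubit phase gates that can all be applied simultaneously.

```python
import numpy as np

n = 4               # register size (illustrative)
a, b = 0.37, 1.1    # one linear section f(x) = a*x + b (hypothetical values)

# Single-qubit phase gate on bit k contributes exp(i*a*2^k) when x_k = 1.
gates = [np.diag([1.0, np.exp(1j * a * 2**k)]) for k in range(n)]

# Tensor them together (qubit 0 = least significant bit here).
U = np.array([[1.0]])
for g in gates:
    U = np.kron(g, U)
U = np.exp(1j * b) * U    # global phase accounts for the offset b

# Check against the target diagonal exp(i f(x)) on all basis states.
x = np.arange(2**n)
assert np.allclose(np.diag(U), np.exp(1j * (a * x + b)))
print("depth-one phase oracle matches exp(i f(x)) on all basis states")
```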
Reducing the effect of errors is essential for reliable quantum computation. Quantum error mitigation (QEM) and quantum error correction (QEC) are two frameworks that have been proposed to address this task, each with its respective challenges: sampling costs and inability to recover the state for QEM, and qubit overheads for QEC. In this work, we combine ideas from these two frameworks and introduce information-theoretic machinery called a quantum filter that can purify or correct quantum channels. We provide an explicit construction of a filter that can correct arbitrary types of noise in an $n$-qubit Clifford circuit using $2n$ ancillary qubits based on a commutation-derived error detection circuit. We also show that this filtering scheme can partially purify noise in non-Clifford gates (e.g. T and CCZ gates). In contrast to QEC, this scheme works in an error-reduction sense because it does not require prior encoding of the input state into a QEC code and requires only a single instance of the target channel. Under the assumptions of clean ancillary qubits, this scheme overcomes the exponential sampling overhead in QEM because it can deterministically correct the error channel without discarding any result. We further propose an ancilla-efficient Pauli filter which can remove nearly all the low-weight Pauli components of the error channel in a Clifford circuit using only 2 ancillary qubits, similarly to flag error correction codes. We prove that for local depolarising noise, this filter can achieve a quadratic reduction in the average infidelity of the channel. The Pauli filter can also be used to convert an unbiased error channel into a completely biased error channel and thus is compatible with biased-noise QEC codes which have high code capacity. These examples demonstrate the utility of the quantum filter as an efficient error-reduction technique.
Simulating time evolution is one of the most natural applications of quantum computers and is thus one of the most promising prospects for achieving practical quantum advantage. Here we develop quantum algorithms to extract thermodynamic properties by estimating the density of states (DOS), a central object in quantum statistical mechanics. We introduce key innovations that significantly improve the practicality and extend the generality of previous techniques. First, our approach allows one to estimate the DOS for a specific subspace of the full Hilbert space. This is crucial for fermionic systems, since fermion-to-qubit mappings partition the full Hilbert space into subspaces of fixed particle number, on which both canonical and grand canonical ensemble properties depend. Second, in our approach, by time evolving very simple, random initial states (e.g. random computational basis states), we can exactly recover the DOS on average. Third, due to circuit-depth limitations, we only reconstruct the DOS up to a convolution with a Gaussian window -- thus all imperfections that shift the energy levels by less than the width of the convolution window will not significantly affect the estimated DOS. For these reasons we find the approach is a promising candidate for early quantum advantage as even short-time, noisy dynamics yield a semi-quantitative reconstruction of the DOS (convolution with a broad Gaussian window), while early fault-tolerant devices will likely enable higher-resolution DOS reconstruction through longer time evolution. We demonstrate the practicality of our approach in representative Fermi-Hubbard and spin models and find that our approach is highly robust to algorithmic errors in the time evolution and to gate noise. We show that our approach is compatible with NISQ-friendly variational methods, introducing a new technique for variational time evolution in noisy DOS computations.
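A minimal numerical sketch of the reconstruction pipeline (our illustration, with a dense random matrix standing in for the Hamiltonian and exact dynamics standing in for the quantum computer): the Fourier transform of $\mathrm{Tr}\, e^{-iHt}$, estimated from random computational basis states and damped by a Gaussian window, yields the Gaussian-convolved DOS.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 6, 2**6
H = rng.normal(size=(d, d)); H = (H + H.T) / 2       # toy Hamiltonian (illustrative)
evals, evecs = np.linalg.eigh(H)

sigma = 0.5                                          # Gaussian window width
ts = np.linspace(-8/sigma, 8/sigma, 801)             # the window kills large |t|
dt = ts[1] - ts[0]

# Estimate Tr[exp(-iHt)]/d by averaging <x|exp(-iHt)|x> over random basis states.
xs = rng.integers(d, size=50)
vx = evecs[xs, :]                                    # amplitudes <x|E_j>
trace_est = np.mean(np.abs(vx)**2 @ np.exp(-1j * np.outer(evals, ts)), axis=0)

# Gaussian-filtered DOS: inverse Fourier transform of the windowed trace signal.
Es = np.linspace(evals.min() - 2, evals.max() + 2, 400)
window = np.exp(-sigma**2 * ts**2 / 2)
dos = np.real(np.exp(1j * np.outer(Es, ts)) @ (window * trace_est)) * d * dt / (2*np.pi)

exact = np.exp(-(Es[:, None] - evals[None, :])**2 / (2*sigma**2)).sum(1) / (sigma*np.sqrt(2*np.pi))
print("max deviation from the exact smoothed DOS:", np.abs(dos - exact).max())
```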
We introduce a post-processing technique for classical shadow measurement data that enhances the precision of ground state estimation through high-dimensional subspace expansion; the dimensionality is only limited by the amount of classical post-processing resources rather than by quantum resources. Crucial steps of our approach are the efficient identification of useful observables from shadow data, followed by our regularised subspace expansion that is designed to be numerically stable even when using noisy data. We analytically investigate noise propagation within our method, and upper bound the statistical fluctuations due to the limited number of snapshots in classical shadows. In numerical simulations, our method can achieve a reduction in the energy estimation errors in many cases, sometimes by more than an order of magnitude. We also demonstrate that our performance improvements are robust against both coherent errors (bad initial state) and gate noise in the state-preparation circuits. Furthermore, performance is guaranteed to be at least as good as -- and in many cases better than -- direct energy estimation without using additional quantum resources; the approach is thus a very natural alternative for estimating ground state energies directly from classical shadow data.
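The regularised subspace-expansion step can be sketched as follows (an illustrative implementation: the operator pool, the threshold `eps` and the way matrix elements would be estimated from shadows are all stand-ins for the paper's choices).

```python
import numpy as np

def regularised_subspace_energy(Hmat, Smat, eps=1e-6):
    """Solve the generalised eigenproblem H c = E S c after discarding
    near-null directions of the overlap matrix S -- a simple regularisation
    in the spirit described above (thresholding details may differ)."""
    s, V = np.linalg.eigh(Smat)
    keep = s > eps * s.max()                   # drop ill-conditioned directions
    T = V[:, keep] / np.sqrt(s[keep])          # map to an orthonormal basis
    return np.linalg.eigvalsh(T.conj().T @ Hmat @ T)[0]

# Toy usage: a noisy approximation to the ground state, expanded by a pool of
# diagonal 'Pauli-like' operators (stand-ins for observables found from shadows).
rng = np.random.default_rng(1)
d = 32
H = rng.normal(size=(d, d)); H = (H + H.T) / 2
gs = np.linalg.eigh(H)[1][:, 0]
psi = gs + 0.2 * rng.normal(size=d); psi /= np.linalg.norm(psi)   # imperfect state
pool = [np.eye(d)] + [np.diag(rng.choice([-1.0, 1.0], d)) for _ in range(15)]
basis = np.array([A @ psi for A in pool])
Hmat, Smat = basis @ H @ basis.T, basis @ basis.T
print("expanded:", regularised_subspace_energy(Hmat, Smat),
      " direct:", psi @ H @ psi, " true:", np.linalg.eigvalsh(H)[0])
```

Because the identity is included in the pool, the expanded energy is never worse than the direct estimate, matching the guarantee stated above.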
Successful implementations of quantum technologies require protocols and algorithms that use as few quantum resources as possible. Many applications require a desired quantum operation, such as rotation gates in quantum computing or broadband pulses in NMR or MRI applications, that is not feasible to directly implement or would require longer coherence times than achievable. This work develops an approach that enables -- at the cost of a modestly increased measurement repetition rate -- the exact implementation of such operations. One proceeds by first building a library of a large number of different approximations to the desired gate operation; by randomly selecting these operations according to a pre-optimised probability distribution, one can on average implement the desired operation with a rigorously controllable approximation error. The approach relies on sophisticated tools from convex optimisation to efficiently find optimal probability distributions. A diverse spectrum of applications is demonstrated: (a) exactly synthesising rotations in fault-tolerant quantum computers using only short T-depth circuits and (b) synthesising broadband and band-selective pulses of superior performance in quantum optimal control, with (c) further applications in NMR or MRI. The approach is very general and a broad spectrum of practical applications in quantum technologies is explicitly demonstrated.
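The optimisation at the heart of the method can be sketched as a small convex program (our illustration with a hypothetical three-gate library; the work uses more sophisticated convex-optimisation tools and channel-level distance measures). The average channel of a random mixture is compared to the target in superoperator form.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

def su2(theta, axis):
    """Rotation exp(-i*theta*(axis . sigma)/2)."""
    x, y, z = axis / np.linalg.norm(axis)
    P = np.array([[z, x - 1j*y], [x + 1j*y, -z]])
    return np.cos(theta/2) * np.eye(2) - 1j * np.sin(theta/2) * P

target = su2(0.30, np.array([0, 0, 1.0]))                      # desired rotation
library = [su2(t, np.array([0, 0, 1.0])) for t in (0.0, 0.25, 0.5)]  # available gates

# Channel (superoperator) form: U (.) U^dagger  ->  kron(U, U.conj()).
S = [np.kron(U, U.conj()).ravel() for U in library]
t = np.kron(target, target.conj()).ravel()

cost = lambda p: np.linalg.norm(sum(pi * Si for pi, Si in zip(p, S)) - t)**2
res = minimize(cost, x0=np.ones(3)/3, method="SLSQP", bounds=[(0, 1)]*3,
               constraints={"type": "eq", "fun": lambda p: p.sum() - 1})
print("optimal mixing probabilities:", res.x.round(4), " residual:", res.fun)
```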
Extracting classical information from quantum systems is of fundamental importance, and classical shadows allow us to extract a large amount of information using relatively few measurements. Conventional shadow estimators are unbiased and thus approach the true mean in the infinite-sample limit. In this work, we consider a biased scheme and show that intentionally introducing a bias by rescaling the conventional classical shadow estimators can reduce the error in the finite-sample regime. The approach is straightforward to implement and requires no quantum resources. We analytically characterise the average-case as well as worst- and best-case scenarios, and rigorously prove that it is, in principle, always worth biasing the estimators. We illustrate our approach in a quantum simulation task of a $12$-qubit spin-ring problem and demonstrate how estimating expected values of non-local perturbations can be significantly more efficient using our biased scheme.
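The essence of the bias-variance trade-off can be stated in a one-parameter toy model (our illustration of the rescaling mechanism; the work treats the actual shadow estimators). For an unbiased estimator $\bar{X}$ with mean $\mu$ and variance $v$, the rescaled estimator $\lambda \bar{X}$ has
\[
\mathrm{MSE}(\lambda) = \lambda^2 v + (\lambda - 1)^2 \mu^2,
\qquad
\lambda^\star = \frac{\mu^2}{\mu^2 + v},
\qquad
\mathrm{MSE}(\lambda^\star) = \frac{\mu^2 v}{\mu^2 + v} < v = \mathrm{MSE}(1),
\]
so some shrinkage $\lambda^\star < 1$ strictly reduces the finite-sample error whenever $v > 0$; the practical subtlety is choosing the rescaling without exact knowledge of $\mu$ and $v$.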
Classical simulation of quantum computers is an irreplaceable step in the design of quantum algorithms. Exponential simulation costs demand the use of high-performance computing techniques, and in particular distribution, whereby the quantum state description is partitioned between a network of cooperating computers -- necessary for the exact simulation of more than approximately 30 qubits. Distributed computing is notoriously difficult, requiring bespoke algorithms dissimilar to their serial counterparts with different resource considerations, and which appear to restrict the utility of a quantum simulator. This manuscript presents a plethora of novel algorithms for distributed full-state simulation of gates, operators, noise channels and other calculations in digital quantum computers. We show how a simple, common but seemingly restrictive distribution model actually permits a rich set of advanced facilities including Pauli gadgets, many-controlled many-target general unitaries, density matrices, general decoherence channels, and partial traces. These algorithms include asymptotically (polynomially) improved simulations of exotic gates, and thorough motivations for high-performance computing techniques which will be useful even for non-distributed simulators. Our results are derived in language familiar to a quantum information theory audience, and our algorithms are formalised for the scientific simulation community. We have implemented all algorithms herein presented in an isolated, minimalist C++ project, hosted open-source on GitHub under a permissive MIT license, with extensive testing. This manuscript aims both to significantly improve the high-performance quantum simulation tools available, and to offer a thorough introduction to, and derivation of, full-state simulation techniques.
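To illustrate the core communication pattern (a self-contained toy in which Python lists stand in for network ranks; a real implementation would exchange buffers with MPI_Sendrecv): a gate acting on a qubit whose index falls outside the local-chunk addressing requires each node to exchange its entire chunk with exactly one partner node.

```python
import numpy as np

n, p = 5, 2              # 5 qubits distributed over 2^2 = 4 "nodes"
local_n = n - p          # each node stores 2^(n-p) amplitudes
rng = np.random.default_rng(3)
psi = rng.normal(size=2**n) + 1j * rng.normal(size=2**n)
psi /= np.linalg.norm(psi)
chunks = [psi[r * 2**local_n:(r + 1) * 2**local_n].copy() for r in range(2**p)]

def apply_gate(chunks, g, q):
    """Apply a 2x2 gate g to qubit q of the distributed state (bit 0 = LSB)."""
    if q < local_n:                      # local: amplitude pairs live on one node
        for c in chunks:
            c2 = c.reshape(-1, 2**(q + 1))
            lo, hi = c2[:, :2**q].copy(), c2[:, 2**q:].copy()
            c2[:, :2**q] = g[0, 0] * lo + g[0, 1] * hi
            c2[:, 2**q:] = g[1, 0] * lo + g[1, 1] * hi
    else:                                # bit q is encoded in the node index
        bit = q - local_n
        for r in range(len(chunks)):
            partner = r ^ (1 << bit)     # unique node holding the paired amplitudes
            if r < partner:              # one full-chunk exchange per pair
                a, b = chunks[r], chunks[partner]
                chunks[r] = g[0, 0] * a + g[0, 1] * b
                chunks[partner] = g[1, 0] * a + g[1, 1] * b

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
apply_gate(chunks, H, q=4)               # communicating case: q >= local_n
assert np.allclose(np.concatenate(chunks), np.kron(H, np.eye(2**4)) @ psi)
print("distributed gate application matches the monolithic simulation")
```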
Quantum computing requires a universal set of gate operations; regarding gates as rotations, any rotation angle must be possible. However, a real device may only be capable of $B$ bits of resolution, i.e. it might support only $2^B$ possible variants of a given physical gate. Naive discretisation of an algorithm's gates to the nearest available options causes coherent errors, while decomposing an impermissible gate into several allowed operations increases circuit depth. Conversely, demanding higher $B$ can greatly complexify hardware. Here we explore an alternative: Probabilistic Angle Interpolation (PAI). This effectively implements any desired, continuously parametrised rotation by randomly choosing one of three discretised gate settings and postprocessing individual circuit outputs. The approach is particularly relevant for near-term applications where one would in any case average over many runs of circuit executions to estimate expected values. While PAI increases that sampling cost, we prove that a) the approach is optimal in the sense that PAI achieves the least possible overhead and b) the overhead is remarkably modest even with thousands of parametrised gates and only $7$ bits of resolution available. This is a profound relaxation of engineering requirements for first-generation quantum computers where even $5-6$ bits of resolution may suffice and, as we demonstrate, the approach is many orders of magnitude more efficient than prior techniques. Moreover we conclude that, even for more mature late-NISQ hardware, no more than $9$ bits will be necessary.
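The following sketch reconstructs the interpolation identity and sampling rule from the description above (our own derivation -- normalisation conventions in the paper may differ) and verifies it numerically for an $R_x$ rotation on a coarse 3-bit angle grid.

```python
import numpy as np

def pai_coefficients(theta, omega):
    """Quasiprobabilities for realising the Pauli-rotation *channel* of angle
    theta from grid angles t1, t1+omega and t1+pi (our reconstruction of the
    interpolation identity; the paper's conventions may differ)."""
    t1 = omega * np.floor(theta / omega)
    d = theta - t1                                    # 0 <= d < omega
    a = np.array([np.cos(d/2) * np.sin((omega - d)/2) / np.sin(omega/2),
                  np.sin(d) / np.sin(omega),
                  -np.sin(d/2) * np.sin((omega - d)/2) / np.cos(omega/2)])
    return t1, a                                      # a sums to exactly 1

def sample_pai(theta, bits, rng):
    """Pick one implementable angle and the reweighting factor it carries."""
    omega = 2 * np.pi / 2**bits
    t1, a = pai_coefficients(theta, omega)
    gamma = np.abs(a).sum()                           # sampling overhead
    k = rng.choice(3, p=np.abs(a) / gamma)
    return (t1, t1 + omega, t1 + np.pi)[k], gamma * np.sign(a[k])

# Sanity check: reweighted outcomes reproduce <Z> after Rx(theta) exactly
# on average, despite only 2^3 = 8 gate settings being available.
rng = np.random.default_rng(4)
theta, shots = 0.8, 50_000
X, Z = np.array([[0., 1.], [1., 0.]]), np.diag([1., -1.])
vals = []
for _ in range(shots):
    angle, w = sample_pai(theta, bits=3, rng=rng)
    U = np.cos(angle/2) * np.eye(2) - 1j * np.sin(angle/2) * X
    vals.append(w * (U[:, 0].conj() @ Z @ U[:, 0]).real)
print("PAI estimate:", np.mean(vals), " exact:", np.cos(theta))
```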
Classical shadows enable us to learn many properties of a quantum state $\rho$ with very few measurements. However, near-term and early fault-tolerant quantum computers will only be able to prepare noisy quantum states $\rho$ and it is thus a considerable challenge to efficiently learn properties of an ideal, noise-free state $\rho_{id}$. We consider error mitigation techniques, such as Probabilistic Error Cancellation (PEC), Zero Noise Extrapolation (ZNE) and Symmetry Verification (SV), which have been developed for mitigating errors in single expected-value measurements, and generalise them for mitigating errors in classical shadows. We find that PEC is the most natural candidate and thus develop a thorough theoretical framework for PEC shadows with the following rigorous theoretical guarantees: PEC shadows are an unbiased estimator for the ideal quantum state $\rho_{id}$; the sample complexity for simultaneously predicting many linear properties of $\rho_{id}$ is identical to that of the conventional shadows approach up to a multiplicative factor which is the sample overhead due to error mitigation. Due to efficient post-processing of shadows, this overhead does not depend directly on the number of qubits but rather grows exponentially with the number of noisy gates. The broad set of tools introduced in this work may be instrumental in exploiting near-term and early fault-tolerant quantum computers: We demonstrate in detailed numerical simulations a range of practical applications of quantum computers that will significantly benefit from our techniques.
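A single-qubit illustration of the reweighting that PEC introduces (standard inverse-depolarising quasiprobabilities; folding these weights into shadow snapshots follows the paper and is only hinted at here): sampling a correction Pauli after the noisy operation and carrying a signed weight makes the estimator unbiased for the ideal value.

```python
import numpy as np

# Single-qubit depolarising noise with error probability p (p/3 for each Pauli).
p = 0.08
lam = 1 - 4*p/3                          # contraction: D(rho) = lam*rho + (1-lam)*I/2
I2, X = np.eye(2), np.array([[0., 1.], [1., 0.]])
Y, Z = np.array([[0., -1j], [1j, 0.]]), np.diag([1., -1.])
paulis = [I2, X, Y, Z]

# Quasiprobability decomposition of the inverse channel, D^{-1} = sum_k q_k P_k . P_k
q = np.array([(3 + lam) / (4*lam)] + 3 * [-(1 - lam) / (4*lam)])
gamma = np.abs(q).sum()                  # PEC sampling overhead

rho = np.array([[0.9, 0.3], [0.3, 0.1]])                    # toy ideal state
noisy = (1 - p)*rho + (p/3)*sum(P @ rho @ P for P in paulis[1:])

rng = np.random.default_rng(5)
est, shots = 0.0, 50_000
for _ in range(shots):                   # sample a correction Pauli, carry the weight
    k = rng.choice(4, p=np.abs(q) / gamma)
    corrected = paulis[k] @ noisy @ paulis[k]
    est += gamma * np.sign(q[k]) * np.real(np.trace(Z @ corrected)) / shots
print("PEC:", round(est, 3), " ideal:", np.trace(Z @ rho).real,
      " unmitigated:", np.trace(Z @ noisy).real)
```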
Shallow quantum circuits are believed to be the most promising candidates for achieving early practical quantum advantage -- this has motivated the development of a broad range of error mitigation techniques whose performance generally improves when the quantum state is well approximated by a global depolarising (white) noise model. While it has been crucial for demonstrating quantum supremacy that random circuits scramble local noise into global white noise -- a property that has been proved rigorously -- we investigate to what degree practical shallow quantum circuits scramble local noise into global white noise. We define two key metrics as (a) density matrix eigenvalue uniformity and (b) commutator norm. While the former determines the distance from white noise, the latter determines the performance of purification-based error mitigation. We derive analytical approximate bounds on their scaling and find that in most cases they match numerical results well. On the other hand, we simulate a broad class of practical quantum circuits and find that white noise is in certain cases a bad approximation, posing significant limitations on the performance of some of the simpler error mitigation schemes. On a positive note, we find in all cases that the commutator norm is sufficiently small, guaranteeing a very good performance of purification-based error mitigation. Lastly, we identify techniques that may decrease both metrics, such as increasing the dimensionality of the dynamical Lie algebra by gate insertions or randomised compiling.
We present shadow spectroscopy as a simulator-agnostic quantum algorithm for estimating energy gaps using very few circuit repetitions (shots) and no extra resources (ancilla qubits) beyond performing time evolution and measurements. The approach builds on the fundamental feature that every observable property of a quantum system must evolve according to the same harmonic components: we can reveal them by post-processing classical shadows of time-evolved quantum states to extract a large number of time-periodic signals $N_o\propto 10^8$, whose frequencies correspond to Hamiltonian energy differences with Heisenberg-limited precision. We provide strong analytical guarantees that (a) quantum resources scale as $O(\log N_o)$, while the classical computational complexity is linear $O(N_o)$, (b) the signal-to-noise ratio increases with the number of processed signals as $\propto \sqrt{N_o}$, and (c) spectral peak positions are immune to reasonable levels of noise. We demonstrate our approach on model spin systems and the excited-state conical intersection of molecular CH$_2$ and verify that our method is indeed intuitively easy to use in practice, robust against gate noise, amenable to a new type of algorithmic error mitigation technique, and requires orders of magnitude fewer shots than typical near-term quantum algorithms -- as few as 10 shots per timestep are sufficient. Finally, we measured a high-quality, experimental shadow spectrum of a spin chain on readily available IBM quantum computers, achieving the same precision as in noise-free simulations without using any advanced error mitigation, and verified scalability in tensor-network simulations of up to 100-qubit systems.
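The post-processing pipeline can be illustrated on synthetic data (a toy stand-in for real shadow signals: we fabricate noisy signals oscillating at known gaps rather than generating them from shadows): averaging the power spectra of many signals boosts the peaks at the energy differences above the shot-noise floor.

```python
import numpy as np

rng = np.random.default_rng(6)
T, dt = 200, 0.1
t = np.arange(T) * dt
gaps = [1.7, 3.2]                      # toy Hamiltonian energy differences

# Emulate N_o observable signals: each oscillates at the energy gaps with
# random amplitudes/phases, buried under heavy per-timestep shot noise.
N_o = 2000
signals = sum(rng.normal(size=(N_o, 1)) * np.cos(g*t + rng.uniform(0, 2*np.pi, (N_o, 1)))
              for g in gaps) + 3.0 * rng.normal(size=(N_o, T))

signals -= signals.mean(axis=1, keepdims=True)   # remove the DC component
spectra = np.abs(np.fft.rfft(signals, axis=1))**2
avg = spectra.mean(axis=0)                       # SNR grows like sqrt(N_o)
freqs = 2*np.pi * np.fft.rfftfreq(T, d=dt)       # angular frequencies

peaks = freqs[np.argsort(avg)[-2:]]
print("recovered gaps:", np.sort(peaks), " true:", gaps)
```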
Quantum computers will be able to solve important problems with significant polynomial and exponential speedups over their classical counterparts, for instance in option pricing in finance and in real-space molecular chemistry simulations. However, key applications can only achieve their potential speedup if their inputs are prepared efficiently. We effectively solve the important problem of efficiently preparing quantum states following arbitrary continuous (as well as more general) functions with complexity logarithmic in the desired resolution, and with rigorous error bounds. This is enabled by the development of a fundamental subroutine based on the simulation of rank-1 projectors. Combined with diverse techniques from quantum information processing, this subroutine enables us to present a broad set of tools for solving practical tasks, such as state preparation, numerical integration of Lipschitz continuous functions, and superior sampling from probability density functions. As a result, our work has significant implications in a wide range of applications, for instance in financial forecasting and in quantum simulation.
Exploiting near-term quantum computers and achieving practical value is a considerable and exciting challenge. The most prominent candidates, variational algorithms, typically aim to find the ground state of a Hamiltonian by minimising a single classical (energy) surface that is sampled by a quantum computer. Here we introduce a method we call CoVaR, an alternative means to exploit the power of variational circuits: we find eigenstates by finding joint roots of a polynomially growing number of properties of the quantum state, namely covariance functions between the Hamiltonian and an operator pool of our choice. The most remarkable feature of our CoVaR approach is that it allows us to fully exploit the extremely powerful classical shadow techniques, i.e., we simultaneously estimate a very large number ($>10^4-10^7$) of covariances. We randomly select covariances and estimate analytical derivatives at each iteration, applying a stochastic Levenberg-Marquardt step via a large but tractable linear system of equations that we solve with a classical computer. We prove that the cost in quantum resources per iteration is comparable to a standard gradient estimation; however, we observe in numerical simulations a very significant improvement by many orders of magnitude in convergence speed. CoVaR is directly analogous to stochastic gradient-based optimisations of paramount importance to classical machine learning, while we also offload significant but tractable work onto the classical processor.
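The iteration can be sketched generically (a standard Levenberg-Marquardt update applied to a toy root-finding problem; in the actual algorithm $f$ collects sampled covariances and $J$ is estimated analytically from shadows).

```python
import numpy as np

def covar_lm_step(theta, f, J, lam=1e-2):
    """One Levenberg-Marquardt update towards the joint roots f(theta) = 0,
    given an (estimated) Jacobian J = df/dtheta.  (Generic LM step; the
    damping schedule used in the paper may differ.)"""
    A = J.T @ J + lam * np.eye(J.shape[1])
    return theta - np.linalg.solve(A, J.T @ f)

# Toy usage: a small nonlinear system standing in for the covariance roots.
f_fun = lambda th: np.array([th[0]**2 + th[1]**2 - 1.0, th[0] - th[1]])
def jac(th, eps=1e-6):            # finite differences here; classical shadows
    e = np.eye(2) * eps           # provide these derivatives analytically
    return np.array([(f_fun(th + ei) - f_fun(th)) / eps for ei in e]).T

theta = np.array([1.0, 0.5])
for _ in range(30):
    theta = covar_lm_step(theta, f_fun(theta), jac(theta))
print("root:", theta, " residual:", np.linalg.norm(f_fun(theta)))
```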
Any architecture for practical quantum computing must be scalable. An attractive approach is to create multiple cores, computing regions of fixed size that are well-spaced but interlinked with communication channels. This exploded architecture can relax the demands associated with a single monolithic device: the complexity of control, cooling and power infrastructure as well as the difficulties of cross-talk suppression and near-perfect component yield. Here we explore interlinked multicore architectures through analytic and numerical modelling. While elements of our analysis are relevant to diverse platforms, our focus is on semiconductor electron spin systems in which numerous cores may exist on a single chip. We model shuttling and microwave-based interlinks and estimate the achievable fidelities, finding values that are encouraging but markedly inferior to intra-core operations. We therefore introduce optimised entanglement purification to enable high-fidelity communication, finding that $99.5\%$ fidelity is a very realistic goal. We then assess the prospects for quantum advantage using such devices in the NISQ-era and beyond: we simulate recently proposed exponentially powerful error mitigation schemes in the multicore environment and conclude that these techniques impressively suppress imperfections in both the inter- and intra-core operations.
Although near-term quantum devices have no comprehensive solution for correcting errors, numerous techniques have been proposed for achieving practical value. Two works have recently introduced the very promising Error Suppression by Derangements (ESD) and Virtual Distillation (VD) techniques. These approaches exponentially suppress errors and ultimately allow one to measure expectation values in the pure state that is the dominant eigenvector of the noisy quantum state. Interestingly, this dominant eigenvector is, however, different from the ideal computational state, and it is the aim of the present work to comprehensively explore the following fundamental question: how significantly different are these two pure states? The motivation for this work is two-fold. First, comprehensively understanding the effect of this coherent mismatch is of fundamental importance for the successful exploitation of noisy quantum devices. As such, the present work rigorously establishes that in practically relevant scenarios the coherent mismatch is exponentially less severe than the incoherent decay of the fidelity -- where the latter can be suppressed exponentially via the ESD/VD technique. Second, the above question is closely related to central problems in mathematics, such as bounding eigenvalues of a sum of two matrices (Weyl inequalities) -- the solution of which was a major breakthrough. The present work can be viewed as a first step towards extending the Weyl inequalities to eigenvectors of a sum of two matrices -- and completely resolves this problem for the special case of the considered density matrices.
As quantum computers mature, quantum error correcting codes (QECs) will be adopted in order to suppress errors to any desired level $E$ at a cost in qubit count that is merely poly-logarithmic in $1/E$. However, in the NISQ era, the complexity and scale required to adopt even the smallest QEC code are prohibitive. Instead, error mitigation techniques have been employed; typically these do not require an increase in qubit count but cannot provide exponential error suppression. Here we show that, for the crucial case of estimating expectation values of observables (key to almost all NISQ algorithms), one can indeed achieve an effective exponential suppression. We introduce the Error Suppression by Derangement (ESD) approach: by increasing the qubit count by a factor of $n\geq 2$, the error is suppressed exponentially as $Q^n$ where $Q<1$ is a suppression factor that depends on the entropy of the errors. The ESD approach takes $n$ independently prepared circuit outputs and applies a controlled derangement operator to create a state whose symmetries prevent erroneous states from contributing to expected values. The approach is therefore `NISQ-friendly' as it is modular in the main computation and requires only a shallow circuit that bridges the $n$ copies immediately prior to measurement. Imperfections in our derangement circuit do degrade performance and therefore we propose an approach to mitigate this effect to arbitrary precision due to the remarkable properties of derangements: (a) they decompose into a linear number of elementary gates, limiting the impact of noise, and (b) they are highly resilient to noise and the effect of imperfections on them is (almost) trivial. In numerical simulations validating our approach we confirm error suppression below $10^{-6}$ for circuits consisting of several hundred noisy gates (two-qubit gate error $0.5\%$) using no more than $n=4$ circuit copies.
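What the derangement measurement estimates can be captured in a few lines of numpy (our illustration of the underlying quantity, not of the circuit): with $n$ copies one effectively measures $\mathrm{Tr}[O\rho^n]/\mathrm{Tr}[\rho^n]$, which converges to the dominant-eigenvector expectation as $n$ grows -- this is also the coherent-mismatch quantity studied in the preceding abstract.

```python
import numpy as np

rng = np.random.default_rng(8)
d = 8
ideal = rng.normal(size=d) + 1j * rng.normal(size=d)
ideal /= np.linalg.norm(ideal)

# A noisy state: mostly the ideal state plus a random error contribution.
E = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
err = E @ E.conj().T; err /= np.trace(err)
rho = 0.9 * np.outer(ideal, ideal.conj()) + 0.1 * err

O = np.diag(rng.normal(size=d))                 # some observable

def vd(rho, O, n):
    """What ESD/VD estimates with n copies: Tr[O rho^n] / Tr[rho^n]."""
    rn = np.linalg.matrix_power(rho, n)
    return np.real(np.trace(O @ rn) / np.trace(rn))

dom = np.linalg.eigh(rho)[1][:, -1]             # dominant eigenvector of rho
for n in (1, 2, 4):
    print(f"n={n}:", vd(rho, O, n))
print("dominant eigenvector:", np.real(dom.conj() @ O @ dom),
      " ideal state:", np.real(ideal.conj() @ O @ ideal))
```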
Variational algorithms have particular relevance for near-term quantum computers but require non-trivial parameter optimisations. Here we propose Analytic Descent: Given that the energy landscape must have a certain simple form in the local region around any reference point, it can be efficiently approximated in its entirety by a classical model -- we support these observations with rigorous, complexity-theoretic arguments. One can classically analyse this approximate function in order to directly `jump' to the (estimated) minimum, before determining a more refined function if necessary. We derive an optimal measurement strategy and generally prove that the asymptotic resource cost of a `jump' corresponds to only a single gradient vector evaluation.
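The single-parameter special case makes the `jump' concrete (our illustration: the paper's classical model is multivariate and treats many parameters jointly). For a gate generated by a Pauli word, the energy is exactly of the form $A + B\cos\theta + C\sin\theta$, so three evaluations determine the landscape everywhere and its minimum can be located classically.

```python
import numpy as np

def energy(theta):                       # toy stand-in for a quantum evaluation
    return 1.3 + 0.8 * np.cos(theta - 0.6)

# Three evaluations determine the classical model everywhere ...
A = (energy(np.pi/2) + energy(-np.pi/2)) / 2
B = energy(0.0) - A
C = (energy(np.pi/2) - energy(-np.pi/2)) / 2

# ... so we can classically 'jump' straight to the model's minimum,
# since B*cos(t) + C*sin(t) is minimised at t = atan2(C, B) + pi.
theta_star = np.arctan2(C, B) + np.pi
print("jumped to", theta_star, " E =", energy(theta_star),
      " (true minimum 0.5 at", 0.6 + np.pi, ")")
```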
Quantum devices are preparing increasingly complex entangled quantum states. How can one effectively study these states in light of their increasing dimensions? Phase spaces such as Wigner functions provide a suitable framework. We focus on phase spaces for finite-dimensional quantum states of single qudits or permutationally symmetric states of multiple qubits. We present methods to efficiently compute the corresponding phase-space functions which are at least an order of magnitude faster than traditional methods. Quantum many-body states in much larger dimensions can now be effectively studied by experimentalists and theorists using these phase-space techniques.
Variational quantum algorithms are promising tools for near-term quantum computers as their shallow circuits are robust to experimental imperfections. Their practical applicability, however, strongly depends on how many times their circuits need to be executed to sufficiently reduce shot noise. We consider metric-aware quantum algorithms: variational algorithms that use a quantum computer to efficiently estimate both a matrix and a vector object. For example, the recently introduced quantum natural gradient approach uses the quantum Fisher information matrix as a metric tensor to correct the gradient vector for the co-dependence of the circuit parameters. We rigorously characterise and upper bound the number of measurements required to determine an iteration step to a fixed precision, and propose a general approach for optimally distributing samples between matrix and vector entries. Finally, we establish that the number of circuit repetitions needed for estimating the quantum Fisher information matrix is asymptotically negligible for an increasing number of iterations and qubits.
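The sample-distribution principle can be illustrated with a textbook allocation rule (our simplified stand-in: the paper's scheme additionally weights matrix versus vector entries by how their errors propagate into the update step). Minimising the total variance $\sum_i v_i/n_i$ under a fixed budget $\sum_i n_i = N$ gives $n_i \propto \sqrt{v_i}$.

```python
import numpy as np

def optimal_shots(variances, N):
    """Neyman-style allocation: minimise sum_i v_i/n_i subject to sum_i n_i = N."""
    w = np.sqrt(variances)
    return N * w / w.sum()

v = np.array([4.0, 1.0, 0.25])           # per-entry single-shot variances (toy)
n_opt = optimal_shots(v, N=1000)
n_uni = np.full(3, 1000 / 3)
print("total variance, optimal:", (v / n_opt).sum(), " uniform:", (v / n_uni).sum())
```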
Variational quantum algorithms are promising tools whose efficacy depends on their optimisation method. For noise-free unitary circuits, the quantum generalisation of natural gradient descent has been introduced and shown to be equivalent to imaginary time evolution: the approach is effective due to a metric tensor reconciling the classical parameter space with the device's Hilbert space. Here we generalise quantum natural gradient to consider arbitrary quantum states (both mixed and pure) via completely positive maps; thus our circuits can incorporate both imperfect unitary gates and fundamentally non-unitary operations such as measurements. We employ the quantum Fisher information (QFI) as the core metric in the space of density operators. A modification of the Error Suppression by Derangements (ESD) and Virtual Distillation (VD) techniques enables an accurate and experimentally-efficient approximation of the QFI via the Hilbert-Schmidt metric tensor using prior results on the dominant eigenvector of noisy quantum states. Our rigorous proof also establishes the fundamental observation that the geometry of typical noisy quantum states is (approximately) identical in either the Hilbert-Schmidt metric or as characterised by the QFI. In numerical simulations of noisy quantum circuits we demonstrate the practicality of our approach and confirm it can significantly outperform other variational techniques.
Quantum technologies exploit entanglement to enhance various tasks beyond their classical limits, including computation, communication and measurements. Quantum metrology aims to increase the precision of a measured quantity that is estimated in the presence of statistical errors using entangled quantum states. We present a novel approach for finding (near) optimal states for metrology in the presence of noise, using variational techniques as a tool for efficiently searching the classically intractable high-dimensional space of quantum states. We comprehensively explore systems consisting of up to 9 qubits and find new highly entangled states that are not symmetric under permutations and that non-trivially outperform previously known states by up to a constant factor of 2. We consider a range of environmental noise models; while passive quantum states cannot achieve a fundamentally superior scaling (as established by prior asymptotic results), we do observe a significant absolute quantum advantage. We finally outline a possible experimental setup for variational quantum metrology which can be implemented in near-term hardware.
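For context, the figure of merit being optimised can be evaluated with the standard pure-state quantum Fisher information formula; the sketch below (our illustration, not the paper's noisy-metrology code) reproduces the Heisenberg and standard quantum limits for GHZ and product probes.

```python
import numpy as np

def qfi_pure(psi, A):
    """Quantum Fisher information of a pure probe state for generator A:
    F_Q = 4 (<A^2> - <A>^2)."""
    mean = np.real(psi.conj() @ A @ psi)
    return 4 * (np.real(psi.conj() @ A @ A @ psi) - mean**2)

N = 4
Z = np.diag([1.0, -1.0])
A = sum(np.kron(np.kron(np.eye(2**k), Z/2), np.eye(2**(N - k - 1)))
        for k in range(N))                       # collective generator sum_k Z_k/2

ghz = np.zeros(2**N); ghz[0] = ghz[-1] = 1/np.sqrt(2)
product = np.ones(2**N) / np.sqrt(2**N)          # the |+>^N probe
print("GHZ:", qfi_pure(ghz, A), " (Heisenberg limit N^2 =", N**2, ")")
print("product:", qfi_pure(product, A), " (standard quantum limit N =", N, ")")
```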
Phase spaces as given by the Wigner distribution function provide a natural description of infinite-dimensional quantum systems. They are an important tool in quantum optics and have been widely applied in the context of time-frequency analysis and pseudo-differential operators. Phase-space distribution functions are usually specified via integral transformations or convolutions which can be averted and subsumed by (displaced) parity operators proposed in this work. Building on earlier work for Wigner distribution functions [A. Grossmann, Comm. Math. Phys. 48(3), 191 (1976)], parity operators give rise to a general class of distribution functions in the form of quantum-mechanical expectation values. This enables us to precisely characterize the mathematical existence of general phase-space distribution functions. We then relate these distribution functions to the so-called Cohen class [L. Cohen, J. Math. Phys. 7(5), 781 (1966)] and recover various quantization schemes and distribution functions from the literature. The parity-operator approach is also applied to the Born-Jordan distribution which originates from the Born-Jordan quantization [M. Born, P. Jordan, Z. Phys. 34(1), 858 (1925)]. The corresponding parity operator is written as a weighted average of both displacements and squeezing operators and we determine its generalized spectral decomposition. This leads to an efficient computation of the Born-Jordan parity operator in the number-state basis and example quantum states reveal unique features of the Born-Jordan distribution.
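For the special case of the Wigner function, the displaced-parity prescription is easy to state and compute; below is a minimal numerical illustration in a truncated Fock basis (our sketch of the general principle -- the Born-Jordan parity operator constructed in the work is more involved).

```python
import numpy as np
from scipy.linalg import expm

# Wigner function from the displaced parity operator,
#   W(alpha) = (2/pi) Tr[ rho D(alpha) P D(alpha)^dagger ],   P = (-1)^n,
# evaluated in a truncated Fock basis (truncation is the only approximation).
dim = 40
a = np.diag(np.sqrt(np.arange(1, dim)), k=1)        # annihilation operator
parity = np.diag((-1.0) ** np.arange(dim))

def wigner(rho, alpha):
    D = expm(alpha * a.conj().T - np.conj(alpha) * a)   # displacement operator
    return (2/np.pi) * np.real(np.trace(rho @ D @ parity @ D.conj().T))

# Example: the n=1 Fock state, whose Wigner function is negative at the origin.
rho = np.zeros((dim, dim)); rho[1, 1] = 1.0
print("W(0) =", wigner(rho, 0.0), " (exact: -2/pi =", -2/np.pi, ")")
print("W(1) =", wigner(rho, 1.0))
```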
We study continuous phase spaces of single spins and develop a complete description of their time evolution. The time evolution is completely specified by so-called star products. We explicitly determine these star products for general spin numbers using a simplified approach which applies spin-weighted spherical harmonics. This approach naturally relates phase spaces of increasing spin number to their quantum-optical limit and allows for efficient approximations of the time evolution for large spin numbers. We also approximate phase-space representations of certain quantum states that are challenging to calculate for large spin numbers. All of these applications are explored in concrete examples and we outline extensions to coupled spin systems.
Continuous phase spaces have become a powerful tool for describing, analyzing, and tomographically reconstructing quantum states in quantum optics and beyond. A plethora of these phase-space techniques are known; however, a thorough understanding of their relations has been lacking for finite-dimensional quantum states. We present a unified approach to continuous phase-space representations which highlights their relations and tomography. The infinite-dimensional case from quantum optics is then recovered in the large-spin limit.
Phase-space representations as given by Wigner functions are a powerful tool for representing the quantum state and characterizing its time evolution in the case of infinite-dimensional quantum systems and have been widely used in quantum optics and beyond. Continuous phase spaces have also been studied for finite-dimensional quantum systems such as spin systems. However, much less is known for finite-dimensional, coupled systems, and we present a complete theory of Wigner functions for this case. In particular, we provide a self-contained Wigner formalism for describing and predicting the time evolution of coupled spins which lends itself to visualizing the high-dimensional structure of multi-partite quantum states. We completely treat the case of an arbitrary number of coupled spins 1/2, thereby establishing the equation of motion using Wigner functions. The explicit form of the time evolution is then calculated for up to three spins 1/2. The underlying physical principles of our Wigner representations for coupled spin systems are illustrated with multiple examples which are easily translatable to other experimental scenarios.