Free-fermionic states, also known as fermionic Gaussian states, are an important class of quantum states ubiquitous in physics. They are uniquely and efficiently described by their correlation matrix. In practical experiments, however, the correlation matrix can only be estimated with finite accuracy. This raises the question: how does the error in estimating the correlation matrix affect the trace-distance error of the state? We show that if the correlation matrix is known up to an error $\varepsilon$, the trace-distance error also scales as $\varepsilon$ (and vice versa). Specifically, we provide bounds on the distance between (both pure and mixed) free-fermionic states in terms of the distance between their correlation matrices. Our analysis also extends to the case where one of the states may not be free-fermionic. We then leverage these bounds to derive significant advances in property testing and tomography of free-fermionic states. Property testing involves determining whether an unknown state is close to or far from being free-fermionic. We first demonstrate that any algorithm capable of testing arbitrary (possibly mixed) free-fermionic states is necessarily inefficient. We then present an efficient algorithm for testing low-rank free-fermionic states. For free-fermionic state tomography, we provide sample-complexity bounds in the pure-state scenario that substantially improve over the previous literature, and we generalize the efficient algorithm to mixed states, discussing its noise robustness.
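To make the correlation-matrix picture concrete, the following sketch (our own illustrative NumPy code, assuming the standard Majorana conventions in which a pure-state correlation matrix $\Gamma$ satisfies $\Gamma^T = -\Gamma$ and $\Gamma^2 = -\mathbb{1}$) constructs a valid pure-state correlation matrix and shows that a small rotation of the modes perturbs it by an amount of order $\varepsilon$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3                                    # fermionic modes, i.e. 2n Majorana operators

# Random orthogonal change of Majorana basis via QR decomposition
O, _ = np.linalg.qr(rng.normal(size=(2 * n, 2 * n)))

# Block-diagonal reference: direct sum of n copies of [[0, 1], [-1, 0]]
sigma = np.kron(np.eye(n), np.array([[0.0, 1.0], [-1.0, 0.0]]))

gamma = O @ sigma @ O.T                  # correlation matrix of a pure free-fermionic state

# Pure-state conditions: antisymmetry and gamma squared equal to minus identity
assert np.allclose(gamma, -gamma.T)
assert np.allclose(gamma @ gamma, -np.eye(2 * n))

# Perturb the modes by a small orthogonal rotation (Cayley transform of an
# antisymmetric generator); the correlation-matrix error is of order eps
eps = 1e-3
A = rng.normal(size=(2 * n, 2 * n))
X = eps * (A - A.T) / 2                  # small antisymmetric generator
R = (np.eye(2 * n) - X) @ np.linalg.inv(np.eye(2 * n) + X)  # orthogonal near identity
gamma_eps = R @ gamma @ R.T

print(np.linalg.norm(gamma_eps - gamma, 2))  # small, of order eps
```

The perturbed matrix is again a valid pure-state correlation matrix, since conjugation by an orthogonal matrix preserves both defining conditions.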
Daniel Miller, Kyano Levi, Lukas Postler, Alex Steiner, Lennart Bittel, Gregory A. L. White, Yifan Tang, Eric J. Kuehnke, Antonio A. Mele, Sumeet Khatri, Lorenzo Leone, Jose Carrasco, Christian D. Marciniak, Ivan Pogorelov, Milena Guevara-Bertsch, Robert Freund, Rainer Blatt, Philipp Schindler, Thomas Monz, Martin Ringbauer, et al.
Throughout its history, the theory of quantum error correction has heavily benefited from translating classical concepts into the quantum setting. In particular, classical notions of weight enumerators, which relate to the performance of an error-correcting code, and MacWilliams' identity, which helps to compute enumerators, have been generalized to the quantum case. In this work, we establish a distinct relationship between the theoretical machinery of quantum weight enumerators and a seemingly unrelated physics experiment: we prove that Rains' quantum shadow enumerators - a powerful mathematical tool - arise as probabilities of observing fixed numbers of triplets in a Bell sampling experiment. This insight allows us to develop a rigorous framework for the direct measurement of quantum weight enumerators, thus enabling experimental and theoretical studies of the entanglement structure of any quantum error-correcting code or state under investigation. On top of that, we derive concrete sample-complexity bounds and physically motivated robustness guarantees against unavoidable experimental imperfections. Finally, we experimentally demonstrate the possibility of directly measuring weight enumerators on a trapped-ion quantum computer. Our experimental findings are in good agreement with theoretical predictions and illuminate how entanglement theory and quantum error correction can cross-fertilize each other once Bell sampling experiments are combined with the theoretical machinery of quantum weight enumerators.
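As a minimal illustration of the Bell-sampling primitive underlying this connection (not the paper's protocol itself), note that two identical copies of a pure qubit state lie entirely in the symmetric subspace, so a Bell measurement on them can never return the antisymmetric singlet outcome. The short script below verifies this numerically:

```python
import numpy as np

rng = np.random.default_rng(1)

# A random single-qubit pure state
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)

# Two identical copies live in the symmetric subspace of C^2 tensor C^2
two_copies = np.kron(psi, psi)

# The singlet |Psi^-> = (|01> - |10>)/sqrt(2) is antisymmetric
singlet = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)

# Hence Bell sampling two identical pure copies never yields the singlet
p_singlet = abs(np.vdot(singlet, two_copies)) ** 2
print(p_singlet)  # 0 up to floating-point error
```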
Lie groups, and therefore Lie algebras, are fundamental structures in quantum physics that determine the space of possible trajectories of evolving systems. However, classification and characterization methods for these structures are often impractical for larger systems. In this work, we provide a comprehensive classification of the Lie algebras generated by an arbitrary set of Pauli operators, from which an efficient method to characterize them follows. By mapping the problem to a graph setting, we identify a reduced set of equivalence classes: the free-fermionic Lie algebra, the set of all anti-symmetric Paulis on n qubits, the Lie algebra of symplectic Paulis on n qubits, and the space of all Pauli operators on n qubits, as well as controlled versions thereof. Moreover, out of these, we distinguish six Clifford-inequivalent cases and find a simple set of canonical operators for each, which allows us to give a physical interpretation of the dynamics of each class. Our findings reveal a no-go result for the existence of small Lie algebras beyond the free-fermionic case in the Pauli setting and offer efficiently computable criteria for the universality and extendibility of gate sets. These results have significant implications for a number of fields, including quantum control, quantum machine learning, and the classical simulation of quantum circuits.
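The commutator structure of Pauli strings can be explored directly in the binary symplectic representation: two strings anticommute iff their symplectic product is 1, and the commutator of two anticommuting Paulis is, up to a phase, the Pauli whose bit vector is the XOR of theirs. The following sketch (our own illustrative code, tracking Pauli strings only modulo phases) collects the set of strings spanning the Lie closure of a generating set:

```python
import numpy as np
from itertools import combinations

def omega(p, q, n):
    """Symplectic product: 1 iff the Pauli strings p and q anticommute."""
    return int(np.dot(p[:n], q[n:]) + np.dot(p[n:], q[:n])) % 2

def lie_closure(generators, n):
    """Pauli strings (modulo phases) spanning the Lie algebra they generate."""
    basis = {tuple(g) for g in generators}
    changed = True
    while changed:
        changed = False
        for p, q in combinations([np.array(b) for b in basis], 2):
            if omega(p, q, n):       # anticommuting => nonzero commutator
                r = tuple(p ^ q)     # the product Pauli: XOR of bit vectors
                if r not in basis:
                    basis.add(r)
                    changed = True
    return basis

# Pauli strings as length-2n binary vectors (x-part | z-part) on n = 2 qubits
X1 = np.array([1, 0, 0, 0])          # X on qubit 1
Z1 = np.array([0, 0, 1, 0])          # Z on qubit 1
closure = lie_closure([X1, Z1], n=2)
print(len(closure))                  # X1, Z1, and their commutator Y1
```

Since the commutator of two Pauli strings is always a single Pauli string up to a scalar, the real span of the collected strings (times $i$) is the generated Lie algebra.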
Quantum state tomography, aimed at deriving a classical description of an unknown state from measurement data, is a fundamental task in quantum physics. In this work, we analyse the ultimate achievable performance of tomography of continuous-variable systems, such as bosonic and quantum optical systems. We prove that tomography of these systems is extremely inefficient in terms of time resources, much more so than tomography of finite-dimensional systems: not only does the minimum number of state copies needed for tomography scale exponentially with the number of modes, but it also exhibits a dramatic scaling with the trace-distance error, even for low-energy states, in stark contrast with the finite-dimensional case. On a more positive note, we prove that tomography of Gaussian states is efficient. To accomplish this, we answer a fundamental question for the field of continuous-variable quantum information: if we know with a certain error the first and second moments of an unknown Gaussian state, what is the resulting trace-distance error that we make on the state? Lastly, we demonstrate that tomography of non-Gaussian states prepared through Gaussian unitaries and a few local non-Gaussian evolutions is efficient and experimentally feasible.
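As a small numerical companion to the question of first and second moments, the sketch below (our own illustrative code, assuming the common convention $x = (a + a^\dagger)/\sqrt{2}$ with vacuum variance $1/2$, and using a truncated Fock space) recovers the moments of a coherent state:

```python
import numpy as np

d = 40                                        # Fock-space truncation
a = np.diag(np.sqrt(np.arange(1, d)), 1)      # annihilation operator
alpha = 0.7 + 0.3j                            # example coherent-state amplitude

# Fock coefficients of |alpha>: exp(-|alpha|^2/2) * alpha^m / sqrt(m!)
coeffs = np.zeros(d, dtype=complex)
coeffs[0] = 1.0
for m in range(1, d):
    coeffs[m] = coeffs[m - 1] * alpha / np.sqrt(m)
coeffs *= np.exp(-abs(alpha) ** 2 / 2)

# Quadratures with [x, p] = i and vacuum variance 1/2
x = (a + a.conj().T) / np.sqrt(2)
p = (a - a.conj().T) / (1j * np.sqrt(2))

mean_x = np.real(np.vdot(coeffs, x @ coeffs))  # sqrt(2) * Re(alpha)
mean_p = np.real(np.vdot(coeffs, p @ coeffs))  # sqrt(2) * Im(alpha)
var_x = np.real(np.vdot(coeffs, x @ (x @ coeffs))) - mean_x ** 2  # 1/2

print(mean_x, mean_p, var_x)
```

The truncation error is negligible here because the Fock coefficients of a coherent state with $|\alpha| < 1$ decay faster than factorially.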
Magic-state resource theory is a powerful tool with applications in quantum error correction, many-body physics, and classical simulation of quantum dynamics. Despite its broad scope, finding tractable resource monotones has been challenging. Stabilizer entropies have recently emerged as promising candidates (being easily computable and experimentally measurable detectors of nonstabilizerness) though their status as true resource monotones has been an open question ever since. In this Letter, we establish the monotonicity of stabilizer entropies for $\alpha \geq 2$ within the context of magic-state resource theory restricted to pure states. Additionally, we show that linear stabilizer entropies serve as strong monotones. Furthermore, we extend stabilizer entropies to mixed states as monotones via convex roof constructions, whose computational evaluation significantly outperforms optimization over stabilizer decompositions for low-rank density matrices. As a direct corollary, we provide improved conversion bounds between resource states, revealing a preferred direction of conversion between magic states. These results conclusively validate the use of stabilizer entropies within magic-state resource theory and establish them as the only known family of monotones that are experimentally measurable and computationally tractable.
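For a single qubit, a stabilizer entropy can be evaluated by brute force over the four Paulis. The sketch below (our own illustrative code, using one common normalization of the stabilizer Rényi entropy, in which stabilizer states have zero entropy) reproduces the expected values for a stabilizer state and the magic $T$ state:

```python
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0])
paulis = [I, X, Y, Z]

def stabilizer_renyi_entropy(psi, alpha=2):
    """Stabilizer Renyi entropy M_alpha of a single-qubit pure state psi."""
    d = 2
    # Probability distribution over Paulis: <psi|P|psi>^2 / d
    xi = np.array([np.real(np.vdot(psi, P @ psi)) ** 2 / d for P in paulis])
    return np.log2(np.sum(xi ** alpha)) / (1 - alpha) - np.log2(d)

zero = np.array([1.0, 0.0])                                # stabilizer state
t = np.array([1.0, np.exp(1j * np.pi / 4)]) / np.sqrt(2)   # T|+> magic state

print(stabilizer_renyi_entropy(zero))  # 0 for any stabilizer state
print(stabilizer_renyi_entropy(t))     # log2(4/3), about 0.415
```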
Free-fermionic states, also known as matchgates or Gaussian states, are a fundamental class of quantum states due to their efficient classical simulability and their crucial role across various domains of physics. With the advent of quantum devices, experiments now yield data from quantum states, including estimates of expectation values. We establish that deciding whether a given dataset, formed by estimates of a few Majorana correlation functions, can be consistent with a free-fermionic state is an NP-complete problem. Our result also extends to datasets formed by estimates of Pauli expectation values. This is in stark contrast to the case of stabilizer states, where the analogous problem can be solved efficiently. Moreover, our results directly imply that free-fermionic states are computationally hard to properly PAC-learn, where PAC-learning of quantum states is a learning framework introduced by Aaronson. Remarkably, this is the first class of classically simulable quantum states shown to have this property.
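For contrast with the hardness result, which concerns partial (and noisy) data, checking whether a complete candidate Majorana correlation matrix is consistent with some (generally mixed) free-fermionic state is easy: it suffices that the matrix be antisymmetric with operator norm at most 1. A minimal sketch of this full-matrix check, in our own illustrative code:

```python
import numpy as np

def is_valid_correlation_matrix(gamma, tol=1e-9):
    """Check whether gamma is the Majorana correlation matrix of some
    (generally mixed) free-fermionic state: antisymmetric with all
    singular values at most 1."""
    antisymmetric = np.allclose(gamma, -gamma.T, atol=tol)
    bounded = np.linalg.norm(gamma, 2) <= 1 + tol  # largest singular value
    return antisymmetric and bounded

# Valid: correlation matrix of the one-mode vacuum
vacuum = np.array([[0.0, 1.0], [-1.0, 0.0]])
# Invalid: "correlations" stronger than any quantum state allows
too_big = 2.0 * vacuum

print(is_valid_correlation_matrix(vacuum))   # True
print(is_valid_correlation_matrix(too_big))  # False
```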
Variational Quantum Algorithms (VQAs), such as the Quantum Approximate Optimization Algorithm (QAOA) of [Farhi, Goldstone, Gutmann, 2014], have seen intense study towards near-term applications on quantum hardware. A crucial parameter for VQAs is the \emph{depth} of the variational ``ansatz'' used -- the smaller the depth, the more amenable the ansatz is to near-term quantum hardware in that it gives the circuit a chance to be fully executed before the system decoheres. In this work, we show that approximating the optimal depth for a given VQA ansatz is intractable. Formally, we show that for any constant $\epsilon>0$, it is QCMA-hard to approximate the optimal depth of a VQA ansatz within multiplicative factor $N^{1-\epsilon}$, for $N$ denoting the encoding size of the VQA instance. (Here, Quantum Classical Merlin-Arthur (QCMA) is a quantum generalization of NP.) We then show that this hardness persists in the even ``simpler'' QAOA-type settings. To our knowledge, this yields the first natural QCMA-hard-to-approximate problems.
Many optimization methods for training variational quantum algorithms are based on estimating gradients of the cost function. Due to the statistical nature of quantum measurements, this estimation requires many circuit evaluations, which is a crucial bottleneck of the whole approach. We propose a new gradient estimation method to mitigate this measurement challenge and reduce the required measurement rounds. Within a Bayesian framework and based on the generalized parameter shift rule, we use prior information about the circuit to find an estimation strategy that minimizes expected statistical and systematic errors simultaneously. We demonstrate that this approach can significantly outperform traditional gradient estimation methods, reducing the required measurement rounds by up to an order of magnitude for a common QAOA setup. Our analysis also shows that an estimation via finite differences can outperform the parameter shift rule in terms of gradient accuracy for small and moderate measurement budgets.
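In the noiseless limit, the standard two-point parameter-shift rule is exact for gates generated by a Pauli operator, while finite differences carry an $O(h^2)$ discretization bias; the statistical trade-off analyzed in the paper arises only once shot noise is added, which the toy sketch below (our own illustrative code) does not model:

```python
import numpy as np

def cost(theta):
    """<0| RY(theta)^dag Z RY(theta) |0> = cos(theta) for a single qubit."""
    psi = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    return psi @ np.diag([1.0, -1.0]) @ psi

def parameter_shift(theta):
    # Exact gradient for a gate generated by a Pauli: shifts of +-pi/2
    return (cost(theta + np.pi / 2) - cost(theta - np.pi / 2)) / 2

def finite_difference(theta, h=1e-4):
    # Central difference: O(h^2) bias, but smaller shot-noise amplification
    return (cost(theta + h) - cost(theta - h)) / (2 * h)

theta = 0.8
print(parameter_shift(theta), finite_difference(theta), -np.sin(theta))
```

With a finite measurement budget, the division by $2h$ amplifies statistical noise, which is precisely the regime where the Bayesian analysis of the estimation strategy becomes relevant.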
We are interested in how quantum data can allow for practical solutions to otherwise difficult computational problems. A notoriously difficult phenomenon from quantum many-body physics is the emergence of many-body localization (MBL). So far, it has evaded a comprehensive analysis. In particular, numerical studies are challenged by the exponential growth of the Hilbert space dimension. As many of these studies rely on exact diagonalization of the system's Hamiltonian, only small system sizes are accessible. In this work, we propose a highly flexible neural-network-based learning approach that, once given training data, circumvents any computationally expensive step. In this way, we can efficiently estimate common indicators of MBL such as the adjacent gap ratio or entropic quantities. Our estimator can be trained on data from various system sizes at once, which grants the ability to extrapolate from smaller to larger ones. Moreover, using transfer learning, we show that already a two-dimensional feature vector is sufficient to obtain several different indicators at various energy densities at once. We hope that our approach can be applied to large-scale quantum experiments to provide new insights into quantum many-body physics.
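The adjacent gap ratio mentioned above is straightforward to compute from a spectrum. The sketch below (our own illustrative code, with a random-matrix spectrum and uncorrelated levels as stand-ins for the ergodic and localized phases) reproduces the familiar reference values of roughly 0.53 (GOE) and 0.39 (Poisson):

```python
import numpy as np

def adjacent_gap_ratio(energies):
    """Mean adjacent-gap ratio r = <min(s_n, s_{n+1}) / max(s_n, s_{n+1})>,
    where s_n are the consecutive level spacings of the sorted spectrum."""
    s = np.diff(np.sort(energies))
    s = s[s > 0]
    r = np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:])
    return r.mean()

rng = np.random.default_rng(2)

# Ergodic-phase proxy: spectrum of a GOE random matrix (r approx 0.53)
A = rng.normal(size=(1000, 1000))
goe = np.linalg.eigvalsh((A + A.T) / 2)

# Localized-phase proxy: uncorrelated (Poissonian) levels (r approx 0.39)
poisson = rng.uniform(size=1000)

print(adjacent_gap_ratio(goe), adjacent_gap_ratio(poisson))
```

The ratio is scale invariant, so no unfolding of the spectrum is needed, which is one reason it is a popular MBL indicator.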
Variational quantum algorithms are proposed to solve relevant computational problems on near-term quantum devices. Popular versions are variational quantum eigensolvers and quantum approximate optimization algorithms that solve ground state problems from quantum chemistry and binary optimization problems, respectively. They are based on the idea of using a classical computer to train a parameterized quantum circuit. We show that the corresponding classical optimization problems are NP-hard. Moreover, the hardness is robust in the sense that, for every polynomial-time algorithm, there are instances for which the relative error resulting from the classical optimization problem can be arbitrarily large, assuming P $\neq$ NP. Even for classically tractable systems composed of only logarithmically many qubits or free fermions, we show the optimization to be NP-hard. This elucidates that the classical optimization is intrinsically hard and does not merely inherit the hardness from the ground state problem. Our analysis shows that the training landscape can have many persistent local minima that are far from optimal. This means that gradient-based and higher-order descent algorithms will generally converge to far-from-optimal solutions.