Photonics provides a viable path to a scalable fault-tolerant quantum computer. The natural framework for this platform is measurement-based quantum computation, where fault-tolerant graph states supersede traditional quantum error-correcting codes. However, the existing formalism for foliation, the construction of fault-tolerant graph states, does not reveal how certain properties, such as single-shot error correction, manifest in the measurement-based setting. We introduce the fault complex, a representation of dynamic quantum error correction protocols particularly well-suited to describe foliation. Our approach enables precise computation of fault tolerance properties of foliated codes and provides insights into circuit-based quantum computation. Analyzing the fault complex leads to improved thresholds for three- and four-dimensional toric codes, a generalization of stability experiments, and the existence of single-shot lattice surgery with higher-dimensional topological codes.
Many routines that one might want to run on a quantum computer benefit from adaptive circuits, which rely on mid-circuit measurements and feed-forward operations. Any such measurement has to be compiled into a sequence of elementary gates involving only a small number of qubits. In this work, we formalize dynamical weight reduction (DWR) schemes, in which a high-weight Pauli measurement is decomposed into a sequence of lower-weight measurements at the cost of additional auxiliary qubits. We first present our main method, deforming a ZX diagram that represents the measurement we want to compile. We then give a general recipe that constructs a DWR on a given connectivity whenever the underlying connectivity graph fulfills certain necessary conditions. Further, we construct a family of DWR schemes using a given number of auxiliary qubits, with indications that the schemes we present are optimal in terms of the spacetime resource overhead needed for a DWR. We highlight three examples that achieve either a constant time or a constant space overhead. Finally, we discuss different trade-offs between space and time overhead and how they might be chosen differently at different levels of abstraction within a (fault-tolerant) quantum computation. This work showcases the flexibility in compiling a measurement circuit in terms of lower-weight measurements using deformations of ZX diagrams, and can find applications in quantum error correction, quantum simulation, and near-term quantum computing tasks where the quality of a computation depends strongly on the physical implementation of a given logical operation.
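As a concrete (non-fault-tolerant) illustration of the idea behind weight reduction, the following numpy sketch compiles a weight-4 $Z$-type Pauli measurement into four weight-2 interactions with a single auxiliary qubit. This is the textbook syndrome-extraction circuit, not the paper's ZX-based DWR construction; all variable names are our own.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4                      # data qubits; one extra ancilla qubit (index n)
dim = 2 ** (n + 1)

# Random 4-qubit data state; ancilla initialized to |0>.
data = rng.normal(size=2**n) + 1j * rng.normal(size=2**n)
data /= np.linalg.norm(data)
state = np.kron(data, np.array([1.0, 0.0]))  # ancilla is the least significant bit

def apply_cnot(state, c):
    """CNOT with data qubit c as control and the ancilla as target."""
    out = np.zeros_like(state)
    for b in range(dim):
        if (b >> (n - c)) & 1:          # bit of qubit c (MSB-first layout)
            out[b ^ 1] += state[b]      # flip the ancilla bit
        else:
            out[b] += state[b]
    return out

# Compile the weight-4 measurement into four weight-2 interactions.
for c in range(n):
    state = apply_cnot(state, c)

# Measure the ancilla: outcome m is the Z1Z2Z3Z4 measurement result (-1)^m.
p1 = np.sum(np.abs(state[1::2]) ** 2)   # probability of ancilla = 1
m = int(rng.random() < p1)
post = state[m::2] / np.sqrt(p1 if m else 1 - p1)  # post-measurement data state

# Verify: the data register is an eigenstate of Z⊗Z⊗Z⊗Z with eigenvalue (-1)^m.
Z4 = np.diag([(-1.0) ** bin(b).count("1") for b in range(2 ** n)])
print(np.allclose(Z4 @ post, (-1) ** m * post))  # True
```

The four CNOTs copy the parity of the data qubits onto the ancilla, so a single-qubit measurement reproduces the statistics and back-action of the weight-4 Pauli measurement.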
Quantum error correction (QEC) is essential for protecting quantum information against noise, yet understanding the structure of the Knill-Laflamme (KL) coefficients $\lambda_{ij}$ from the condition $PE_i^\dagger E_j P = \lambda_{ij} P$ remains challenging, particularly for nonadditive codes. In this work, we introduce the signature vector $\vec{\lambda}(P)$, composed of the off-diagonal KL coefficients $\lambda_{ij}$, where each coefficient corresponds to equivalence classes of errors counted only once. We define its Euclidean norm $\lambda^*(P)$ as a scalar measure representing the total strength of error correlations within the code subspace defined by the projector $P$. We parameterize $P$ on a Stiefel manifold and formulate an optimization problem based on the KL conditions to systematically explore possible values of $\lambda^*$. Moreover, we show that, for $((n,K,d))$ codes, $\lambda^*$ is invariant under local unitary transformations. Applying our approach to the $((6, 2, 3))$ quantum code, we find that $\lambda^*_{\text{min}} = \sqrt{0.6}$ and $\lambda^*_{\text{max}} = 1$, with $\lambda^* = 1$ corresponding to a known degenerate stabilizer code. We construct continuous families of new nonadditive codes parameterized by vectors in $\mathbb{R}^5$, with $\lambda^*$ varying over the interval $[\sqrt{0.6}, 1]$. For the $((7, 2, 3))$ code, we identify $\lambda^*_{\text{min}} = 0$ (corresponding to the non-degenerate Steane code) and $\lambda^*_{\text{max}} = \sqrt{7}$ (corresponding to the permutation-invariant code by Pollatsek and Ruskai), and we demonstrate continuous paths connecting these extremes via cyclic codes characterized solely by $\lambda^*$. Our findings provide new insights into the structure of quantum codes, advance the theoretical foundations of QEC, and open new avenues for investigating intricate relationships between code subspaces and error correlations.
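To make the KL condition concrete, the following numpy sketch checks $PE_i^\dagger E_j P = \lambda_{ij} P$ for the 3-qubit bit-flip repetition code against single-qubit $X$ errors. This toy example is our own, not one of the codes studied above; its off-diagonal coefficients all vanish, so the analogue of $\lambda^*$ for this code and error set is zero.

```python
import numpy as np

I = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])

def kron(*ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

# Codespace projector of the 3-qubit bit-flip code: span{|000>, |111>}
zero = np.zeros(8); zero[0] = 1.0
one = np.zeros(8); one[7] = 1.0
P = np.outer(zero, zero) + np.outer(one, one)

# Correctable errors: identity and the three single-qubit bit flips
errors = [kron(I, I, I), kron(X, I, I), kron(I, X, I), kron(I, I, X)]

lam = np.zeros((4, 4))
for i, Ei in enumerate(errors):
    for j, Ej in enumerate(errors):
        M = P @ Ei.conj().T @ Ej @ P
        lam[i, j] = np.trace(M).real / np.trace(P).real
        assert np.allclose(M, lam[i, j] * P)  # KL condition holds

print(np.round(lam, 6))  # the identity matrix: the code is non-degenerate
```

Here every $\lambda_{ij}$ with $i \neq j$ is zero because any product of two distinct errors flips two qubits and maps the codespace to an orthogonal subspace.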
We consider the problem of testing whether an unknown $n$-qubit quantum state $|\psi\rangle$ is a stabilizer state, with only single-copy access. We give an algorithm solving this problem using $O(n)$ copies, and conversely prove that $\Omega(\sqrt{n})$ copies are required for any algorithm. The main observation behind our algorithm is that when repeatedly measuring in a randomly chosen stabilizer basis, stabilizer states are the most likely among the set of all pure states to exhibit linear dependencies in measurement outcomes. Our algorithm is designed to probe deviations from this extremal behavior. For the lower bound, we first reduce stabilizer testing to the task of distinguishing random stabilizer states from the maximally mixed state. We then argue that, without loss of generality, it is sufficient to consider measurement strategies that a) lie in the commutant of the tensor action of the Clifford group and b) satisfy a Positive Partial Transpose (PPT) condition. By leveraging these constraints, together with novel results on the partial transposes of the generators of the Clifford commutant, we derive the lower bound on the sample complexity.
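A minimal numpy illustration of the extremal behavior the algorithm probes: computational-basis outcomes of a GHZ (stabilizer) state are confined to a low-dimensional GF(2) subspace, while outcomes of a Haar-random state generically have full GF(2) rank. This toy comparison fixes the measurement basis and is not the algorithm above, which measures in randomly chosen stabilizer bases.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 8
shots = n + 1

def gf2_rank(vectors):
    """Rank over GF(2) of bitstrings encoded as python ints."""
    pivots = {}                        # leading-bit position -> basis vector
    for v in vectors:
        while v:
            p = v.bit_length() - 1
            if p not in pivots:
                pivots[p] = v
                break
            v ^= pivots[p]             # eliminate the leading bit
    return len(pivots)

# GHZ state: computational-basis outcomes are 00...0 or 11...1 only.
ghz_shots = [int(rng.integers(0, 2)) * (2**n - 1) for _ in range(shots)]

# Haar-random state: outcomes are generic bitstrings.
psi = rng.normal(size=2**n) + 1j * rng.normal(size=2**n)
probs = np.abs(psi) ** 2
probs /= probs.sum()
haar_shots = [int(b) for b in rng.choice(2**n, size=shots, p=probs)]

print(gf2_rank(ghz_shots), gf2_rank(haar_shots))  # low rank vs. (near-)full rank
```

Sampling $n+1$ outcomes guarantees a linear dependency for any state; the point is that stabilizer states exhibit dependencies far earlier than generic pure states.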
We identify regimes where post-selection can be used scalably in quantum error correction (QEC) to improve performance. We use statistical mechanical models to analytically quantify the performance and thresholds of post-selected QEC, with a focus on the surface code. Based on the non-equilibrium magnetization of these models, we identify a simple heuristic technique for post-selection that does not require a decoder. Along with performance gains, this heuristic allows us to derive analytic expressions for post-selected conditional logical thresholds and abort thresholds of surface codes. We find that such post-selected QEC is characterised by four distinct thermodynamic phases, and detail the implications of this phase space for practical, scalable quantum computation.
The existence of pseudorandom unitaries (PRUs) -- efficient quantum circuits that are computationally indistinguishable from Haar-random unitaries -- has been a central open question, with significant implications for cryptography, complexity theory, and fundamental physics. In this work, we close this question by proving that PRUs exist, assuming that any quantum-secure one-way function exists. We establish this result for both (1) the standard notion of PRUs, which are secure against any efficient adversary that makes queries to the unitary $U$, and (2) a stronger notion of PRUs, which are secure even against adversaries that can query both the unitary $U$ and its inverse $U^\dagger$. In the process, we prove that any algorithm that makes queries to a Haar-random unitary can be efficiently simulated on a quantum computer, up to inverse-exponential trace distance.
Quantum many-body systems provide a unique platform for exploring the rich interplay between chaos, randomness, and complexity. In a recently proposed paradigm known as deep thermalization, random quantum states of system A are generated by performing projective measurements on system B following chaotic Hamiltonian evolution acting jointly on AB. In this scheme, the randomness of the projected state ensemble arises from the intrinsic randomness of the outcomes when B is measured. Here we propose a modified scheme, in which classical randomness injected during the protocol is converted by quantum chaos into quantum randomness of the resulting state ensemble. We show that for generic chaotic systems this conversion is optimal in that each bit of injected classical entropy generates as much additional quantum randomness as adding an extra qubit to B. This significantly enhances the randomness of the projected ensemble without imposing additional demands on the quantum hardware. Our scheme can be easily implemented on typical analog quantum simulators, providing a more scalable route for generating quantum randomness valuable for many applications. In particular, we demonstrate that the accuracy of a shadow tomography protocol can be substantially improved.
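The baseline (unmodified) projected-ensemble construction can be sketched in a few lines of numpy: with a Haar-random joint state standing in for chaotic evolution, measuring B in the computational basis yields an ensemble of pure states on A whose second moment is close to that of the Haar ensemble. The system sizes and the Haar stand-in are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)
nA, nB = 1, 8
dA, dB = 2**nA, 2**nB

# Haar-random joint state on AB (a stand-in for chaotic evolution of AB)
psi = rng.normal(size=(dA, dB)) + 1j * rng.normal(size=(dA, dB))
psi /= np.linalg.norm(psi)

# Projected ensemble: measuring B in the computational basis gives outcome b
# with probability p_b and leaves A in the pure state psi[:, b] / ||psi[:, b]||
states, probs = [], []
for b in range(dB):
    p = np.linalg.norm(psi[:, b]) ** 2
    probs.append(p)
    states.append(psi[:, b] / np.sqrt(p))

# Second moment of the ensemble vs. the Haar (2-design) value (I + SWAP)/(d(d+1))
rho2 = sum(p * np.kron(np.outer(s, s.conj()), np.outer(s, s.conj()))
           for p, s in zip(probs, states))
SWAP = np.eye(4)[[0, 2, 1, 3]]
haar2 = (np.eye(4) + SWAP) / (dA * (dA + 1))
print(np.linalg.norm(rho2 - haar2))  # small: the ensemble approximates Haar
```

The deviation shrinks as B grows; the scheme above then boosts the effective randomness further by injecting classical entropy rather than enlarging B.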
Pseudoentangled states are defined by their ability to hide their entanglement structure: they are indistinguishable from random states to any observer with polynomial resources, yet can have much less entanglement than random states. Existing constructions of pseudoentanglement based on phase- and/or subset-states are limited in the entanglement structures they can hide: e.g., the states may have low entanglement on a single cut, on all cuts at once, or on local cuts in one dimension. Here we introduce new constructions of pseudoentangled states based on (pseudo)random tensor networks that afford much more flexibility in the achievable entanglement structures. We illustrate our construction with the simplest example of a matrix product state, realizable as a staircase circuit of pseudorandom unitary gates, which exhibits pseudo-area-law scaling of entanglement in one dimension. We then generalize our construction to arbitrary tensor network structures that admit an isometric realization. A notable application of this result is the construction of pseudoentangled `holographic' states whose entanglement entropy obeys a Ryu-Takayanagi `minimum-cut' formula, answering a question posed in [Aaronson et al., arXiv:2211.00747].
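A minimal numpy sketch of the staircase picture: one layer of Haar-random two-qubit gates (standing in for pseudorandom ones) applied to the all-zeros state produces a bond-dimension-2 matrix product state, so the entanglement entropy of every contiguous cut is bounded by one ebit. This is our own toy demonstration of the area-law scaling, not the pseudorandomness argument itself.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 8
psi = np.zeros(2**n, dtype=complex)
psi[0] = 1.0

def haar_unitary(d):
    """Haar-random d x d unitary via QR with phase correction."""
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, r = np.linalg.qr(z)
    ph = np.diag(r)
    return q * (ph / np.abs(ph))

def apply_2q(psi, U, k):
    """Apply a two-qubit gate U to qubits k, k+1 (MSB-first ordering)."""
    T = psi.reshape(2**k, 4, 2**(n - k - 2))
    return np.einsum('ba,iaj->ibj', U, T).reshape(-1)

# One staircase layer: gates on (0,1), (1,2), ..., (n-2, n-1)
for k in range(n - 1):
    psi = apply_2q(psi, haar_unitary(4), k)

def cut_entropy(psi, m):
    """Entanglement entropy (in bits) across the cut after the first m qubits."""
    s = np.linalg.svd(psi.reshape(2**m, 2**(n - m)), compute_uv=False)
    p = s**2
    p = p[p > 1e-12]
    return -np.sum(p * np.log2(p))

ents = [cut_entropy(psi, m) for m in range(1, n)]
print(np.round(ents, 3))  # each entry <= 1: area-law entanglement
```

Each cut is crossed by exactly one gate, which acts on a qubit still in $|0\rangle$ at that moment, so the Schmidt rank across any cut is at most 2.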
Understanding quantum noise is an essential step towards building practical quantum information processing systems. Pauli noise is a useful model that has been widely applied in quantum benchmarking, error mitigation, and error correction. Despite intensive study, most existing works focus on learning Pauli noise channels associated with some specific gates rather than treating the gate set as a whole. A learning algorithm that is self-consistent, complete, and efficient at the same time is yet to be established. In this work, we study the task of gate set Pauli noise learning, where a set of quantum gates, state preparation, and measurements all suffer from unknown Pauli noise channels with a customized noise ansatz. Using tools from algebraic graph theory, we analytically characterize the self-consistently learnable degrees of freedom for Pauli noise models with arbitrary linear ansatz, and design experiments to efficiently learn all the learnable information. Specifically, we show that all learnable information about the gate noise can be learned to relative precision, under mild assumptions on the noise ansatz. We then demonstrate the flexibility of our theory by applying it to concrete physically motivated ansätze (such as spatially local or quasi-local noise) and experimentally relevant gate sets (such as parallel CZ gates). These results not only enhance the theoretical understanding of quantum noise learning, but also provide a feasible recipe for characterizing existing and near-future quantum information processing devices.
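The single-qubit case illustrates the Pauli-fidelity picture that underlies such learning schemes: the fidelities are a signed (Walsh-Hadamard-type) transform of the error probabilities, with signs fixed by (anti)commutation. This numpy sketch is standard background, not the gate-set protocol described above.

```python
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0 + 0j, -1.0])
paulis = [I, X, Y, Z]

# A random single-qubit Pauli channel with error probabilities (p_I, p_X, p_Y, p_Z)
rng = np.random.default_rng(2)
p = rng.dirichlet(np.ones(4))

def channel(rho):
    return sum(pa * P @ rho @ P.conj().T for pa, P in zip(p, paulis))

# Pauli fidelities: lam_b = Tr(P_b Lambda(P_b)) / 2
lam = np.array([np.trace(P @ channel(P)).real / 2 for P in paulis])

# They equal a signed transform of p: lam_b = sum_a (-1)^{<a,b>} p_a,
# with sign +1 when P_a and P_b commute and -1 when they anticommute.
signs = np.array([[1 if np.allclose(Pa @ Pb, Pb @ Pa) else -1
                   for Pa in paulis] for Pb in paulis])
print(np.allclose(lam, signs @ p))  # True
```

Learning protocols estimate the fidelities `lam` (e.g., from decay rates) and invert this transform to recover the error probabilities; the gate-set question is which combinations of such fidelities are learnable self-consistently.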
Understanding the computational complexity of learning efficient classical programs in various learning models has been a fundamental and important question in classical computational learning theory. In this work, we study the computational complexity of quantum state learning, which can be seen as a quantum generalization of distributional learning introduced by Kearns et al. [STOC94]. Previous works by Chung and Lin [TQC21], and Bădescu and O'Donnell [STOC21] study the sample complexity of quantum state learning and show that polynomial copies are sufficient if the unknown quantum states are promised to be efficiently generatable. However, their algorithms are inefficient, and the computational complexity of this learning problem remains unresolved. Here, we focus on the setting where the states are promised to be efficiently generatable. We show that if the unknown quantum states are promised to be pure and efficiently generatable, then there exists a quantum polynomial-time algorithm $A$ and a language $L \in \mathsf{PP}$ such that $A^L$ can learn a classical description of the state. We also observe a connection between the hardness of learning quantum states and quantum cryptography. We show that the existence of one-way state generators with pure state outputs is equivalent to the average-case hardness of learning pure states. Additionally, we show that the existence of EFI implies the average-case hardness of learning mixed states.