This paper introduces a method for calculating the quantum relative entropy of channels, an essential quantity in quantum channel discrimination and resource theories of quantum channels. Building on recent developments in the optimization of relative entropy for quantum states [Kossmann and Schwonnek, arXiv:2404.17016], we develop a discretized linearization of the integral representation of the relative entropy of states, enabling us to handle maximization of the relative entropy of a channel over input states. Our approach extends previous work on minimizing relative entropy to the more complicated domain of maximization. It also provides efficiently computable upper and lower bounds that sandwich the true value with any desired precision, leading to a practical method for computing the relative entropy of channels.
Estimating the ground-state energy of Hamiltonians is a fundamental task for which it is believed that quantum computers can be helpful. Several approaches have been proposed toward this goal, including algorithms based on quantum phase estimation and hybrid quantum-classical optimizers involving parameterized quantum circuits, the latter falling under the umbrella of the variational quantum eigensolver. Here, we analyze the performance of quantum Boltzmann machines for this task; this less explored ansatz is based on parameterized thermal states and is not known to suffer from the barren-plateau problem. We delineate a hybrid quantum-classical algorithm for this task and rigorously prove that it converges to an $\varepsilon$-approximate stationary point of the energy function optimized over parameter space, while using a number of parameterized-thermal-state samples that is polynomial in $\varepsilon^{-1}$, the number of parameters, and the norm of the Hamiltonian being optimized. Our algorithm estimates the gradient of the energy function efficiently by means of a novel quantum circuit construction that combines classical sampling, Hamiltonian simulation, and the Hadamard test, thus overcoming a key obstacle to quantum Boltzmann machine learning that has been left open since [Amin \textit{et al.}, Phys.~Rev.~X \textbf{8}, 021050 (2018)]. Additionally supporting our main claims are calculations of the gradient and Hessian of the energy function, as well as an upper bound on the matrix elements of the latter that is used in the convergence analysis.
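The gradient calculation mentioned above can be sketched via a standard Duhamel-type derivation. The notation below (a generic ansatz $G(\theta)=\sum_j \theta_j G_j$) is illustrative and not necessarily the paper's exact parameterization:

```latex
% Parameterized thermal state and energy function (illustrative ansatz)
\begin{align}
  \rho(\theta) &= \frac{e^{G(\theta)}}{\operatorname{Tr}\!\big[e^{G(\theta)}\big]},
  \qquad G(\theta) = \sum_j \theta_j G_j,
  \qquad E(\theta) = \operatorname{Tr}\!\big[H \rho(\theta)\big], \\
  \partial_{\theta_j} E(\theta)
  &= \int_0^1 \mathrm{d}s\,
     \frac{\operatorname{Tr}\!\big[H\, e^{s G(\theta)}\, G_j\, e^{(1-s) G(\theta)}\big]}
          {\operatorname{Tr}\!\big[e^{G(\theta)}\big]}
   - \operatorname{Tr}\!\big[H \rho(\theta)\big]\,
     \operatorname{Tr}\!\big[\rho(\theta)\, G_j\big].
\end{align}
```

The first term follows from Duhamel's formula for $\partial_{\theta_j} e^{G(\theta)}$ and the second from differentiating the partition function; terms of this form are amenable to estimation by sampling $s$ and running Hadamard tests, consistent with the circuit construction described in the abstract.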
Quantum networks consist of various quantum technologies, spread across vast distances, and involve many users at the same time. Certifying the functioning and efficiency of the individual components is a task that is well studied and widely performed. However, the power of quantum networks can only be realized by integrating all the required quantum technologies and platforms across a large number of users. In this work, we demonstrate how to certify the distillable entanglement available in multipartite states produced by quantum networks, without relying on the physical realization of their constituent components. We do so by using the paradigm of device independence.
In quantum computing, knowing the symmetries a given system or state obeys or disobeys is often useful. For example, Hamiltonian symmetries may limit allowed state transitions or simplify learning parameters in machine learning applications, and certain asymmetric quantum states are known to be resourceful in various applications. Symmetry testing algorithms provide a means to identify and quantify these properties with respect to a representation of a group. In this paper, we present a collection of quantum algorithms that realize projections onto the symmetric subspace, as well as the asymmetric subspace, of quantum systems. We describe how this can be modified to realize an antisymmetric projection as well, and we show how projectors can be combined in a systematic way to effectively measure various projections in a single quantum circuit. Using these constructions, we demonstrate applications such as testing for Werner-state symmetry and estimating Schmidt ranks of bipartite states, supported by experimental data from IBM Quantum systems. This work underscores the pivotal role of symmetry in simplifying quantum calculations and advancing quantum information tasks.
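The projections discussed above can be illustrated classically by group averaging: the projector onto the symmetric subspace of a unitary representation $\{U(g)\}$ is $P = \frac{1}{|G|}\sum_g U(g)$. The following minimal numerical sketch (a hypothetical two-qubit example with the order-2 representation $\{I, \mathrm{SWAP}\}$, not the paper's circuits) checks the basic projector identities:

```python
import numpy as np

# Group averaging over {I, SWAP} on two qubits yields the projector onto the
# symmetric subspace; subtracting instead yields the antisymmetric projector.
I4 = np.eye(4)
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=float)
P_sym = (I4 + SWAP) / 2    # projector onto the symmetric subspace
P_asym = (I4 - SWAP) / 2   # projector onto the antisymmetric subspace

assert np.allclose(P_sym @ P_sym, P_sym)   # idempotent
assert np.allclose(P_sym + P_asym, I4)     # resolution of the identity
# Subspace dimensions: 3 (symmetric) and 1 (antisymmetric, the singlet)
print(int(round(np.trace(P_sym))), int(round(np.trace(P_asym))))  # 3 1
```

The trace of each projector gives the dimension of the corresponding subspace, matching the familiar triplet/singlet decomposition of two qubits.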
Quantum communication relies on the existence of high-quality quantum channels to exchange information. In practice, however, all communication links are affected by noise from the environment. Here we investigate the ability of quantum channels to perform quantum communication tasks by restricting the participants to use only local operations and one-way classical communication (one-way LOCC) along with the available quantum channel. In particular, a channel can be used to distill a highly entangled state between two parties, which further enables quantum or private communication. In this work, we invoke the framework of superchannels to study the distillation of a resourceful quantum state, such as a maximally entangled state or a private state, using multiple instances of a point-to-point quantum channel. We use the idea of $k$-extendibility to obtain a semidefinite relaxation of the set of one-way LOCC superchannels and define a class of entanglement measures for quantum channels that decrease monotonically under such superchannels; therefore these measures, dubbed collectively the ``unextendible entanglement of a channel'', yield upper bounds on several communication-theoretic quantities of interest in the regimes of resource distillation and zero error. We then generalize the formalism of $k$-extendibility to bipartite superchannels, thus obtaining functions that are monotone under two-extendible superchannels. This allows us to analyze probabilistic distillation of ebits or secret key bits from a bipartite state when using a resourceful quantum channel. Moreover, we propose semidefinite programs to evaluate several of these quantities, providing a computationally feasible method of comparison between quantum channels for resource distillation.
Quantum state exclusion is an operational task that has significance in studying foundational questions related to interpreting quantum theory. In such a task, one is given a system whose state is randomly selected from a finite set, and the goal is to identify a state from the set that is not the true state of the system. An error, i.e., an unsuccessful exclusion, occurs if and only if the state identified is the true state. In this paper, we study the optimal error probability of quantum state exclusion and its error exponent -- the rate at which the error probability decays asymptotically -- from an information-theoretic perspective. Our main finding is a single-letter upper bound on the error exponent of state exclusion given by the multivariate log-Euclidean Chernoff divergence, and we prove that this improves upon the best previously known upper bound. We also extend our analysis to the more complicated task of quantum channel exclusion, and we establish a single-letter and efficiently computable upper bound on its error exponent, even assuming the use of adaptive strategies. We derive both upper bounds, for state and channel exclusion, based on one-shot analysis and formulate them as a type of multivariate divergence measure called a barycentric Chernoff divergence. Moreover, our result on channel exclusion has implications in two important special cases. First, for the special case of two hypotheses, our upper bound provides the first known efficiently computable upper bound on the error exponent of symmetric binary channel discrimination. Second, for the special case of classical channels, we show that our upper bound is achievable by a nonadaptive strategy, thus determining the exact error exponent of classical channel exclusion and generalising a similar result on symmetric binary classical channel discrimination.
A quantum generalized divergence by definition satisfies the data-processing inequality; as such, the relative decrease in such a divergence under the action of a quantum channel is at most one. This relative decrease is formally known as the contraction coefficient of the channel and the divergence. Interestingly, there exist combinations of channels and divergences for which the contraction coefficient is strictly less than one. Furthermore, understanding the contraction coefficient is fundamental for the study of statistical tasks under privacy constraints. To this end, here we establish upper bounds on contraction coefficients for the hockey-stick divergence under privacy constraints, where privacy is quantified with respect to the quantum local differential privacy (QLDP) framework, and we fully characterize the contraction coefficient for the trace distance under privacy constraints. With the machinery developed, we also determine an upper bound on the contraction of both the Bures distance and quantum relative entropy relative to the normalized trace distance, under QLDP constraints. Next, we apply our findings to establish bounds on the sample complexity of quantum hypothesis testing under privacy constraints. Furthermore, we study various scenarios in which the sample complexity bounds are tight, while providing order-optimal quantum channels that achieve those bounds. Lastly, we show how private quantum channels provide fairness and Holevo information stability in quantum learning settings.
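The hockey-stick divergence mentioned above is $E_\gamma(\rho\Vert\sigma) = \operatorname{Tr}[(\rho - \gamma\sigma)_+]$, the sum of the positive eigenvalues of $\rho - \gamma\sigma$. A minimal numerical sketch (a toy commuting-state example, not the paper's contraction-coefficient machinery):

```python
import numpy as np

def hockey_stick(rho, sigma, gamma):
    """Hockey-stick divergence E_gamma(rho||sigma) = Tr[(rho - gamma*sigma)_+],
    computed as the sum of positive eigenvalues of rho - gamma*sigma."""
    eigs = np.linalg.eigvalsh(rho - gamma * sigma)
    return float(np.sum(eigs[eigs > 0]))

# For commuting (diagonal) states this reduces to sum_i max(p_i - gamma*q_i, 0).
rho = np.diag([0.9, 0.1])
sigma = np.diag([0.5, 0.5])
print(hockey_stick(rho, sigma, 1.0))  # 0.4 (equals the trace distance at gamma = 1)
print(hockey_stick(rho, sigma, 2.0))  # 0.0
```

At $\gamma = 1$ the quantity recovers the normalized trace distance between density matrices, which is the reference divergence used in the contraction bounds above.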
The measured relative entropies of quantum states and channels find operational significance in quantum information theory as achievable error rates in hypothesis testing tasks. They are of interest in the near term, as they correspond to hybrid quantum-classical strategies with technological requirements far less challenging to implement than required by the most general strategies allowed by quantum mechanics. In this paper, we prove that these measured relative entropies can be calculated efficiently by means of semi-definite programming, by making use of variational formulas for the measured relative entropies of states and semi-definite representations of the weighted geometric mean and the operator connection of the logarithm. Not only do the semi-definite programs output the optimal values of the measured relative entropies of states and channels, but they also provide numerical characterizations of optimal strategies for achieving them, which is of significant practical interest for designing hypothesis testing protocols.
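One variational formula of the kind exploited above expresses the measured relative entropy (in nats) as $D_M(\rho\Vert\sigma) = \sup_{\omega > 0} \operatorname{Tr}[\rho\ln\omega] + 1 - \operatorname{Tr}[\sigma\omega]$. A toy classical check (commuting states, where the optimizer is $\omega = p/q$ and the value collapses to the classical relative entropy); this is an illustration of the variational principle, not the paper's semi-definite programs:

```python
import numpy as np

# Commuting case: measured relative entropy = classical KL divergence.
p = np.array([0.7, 0.3])
q = np.array([0.4, 0.6])

kl = float(np.sum(p * np.log(p / q)))  # classical relative entropy (nats)

# Variational form: sup_omega  sum_x p(x) ln omega(x) + 1 - sum_x q(x) omega(x);
# the supremum is attained at omega = p/q.
omega = p / q
variational = float(np.sum(p * np.log(omega)) + 1.0 - np.sum(q * omega))

print(abs(kl - variational) < 1e-12)  # True
```

Any other feasible $\omega$ gives a lower value, which is what makes such formulas suitable objective functions for semi-definite programming.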
Matrix geometric means between two positive definite matrices can be defined equivalently from distinct perspectives -- as solutions to certain nonlinear systems of equations, as points along geodesics in Riemannian geometry, and as solutions to certain optimisation problems. This diversity already suggests the potential for varied applications, as well as for acting as a bridge between different domains. Here we devise new quantum subroutines to efficiently prepare quantum unitary operators that embed the standard matrix geometric mean and its generalisation, the weighted matrix geometric mean. This enables the construction of solutions to the algebraic Riccati equation, which is an important class of nonlinear systems of equations that appears in machine learning, optimal control, estimation, and filtering. Using these subroutines, we present a new class of quantum learning algorithms called quantum geometric mean metric learning. This has applications in efficiently finding the best distance measure and solving classification problems in the weakly supervised limit and for anomaly detection, for both classical and quantum problems. We also show how our method can be generalised to a particular $p$th-order system of nonlinear equations. These quantum subroutines for matrix geometric means are also useful in other areas of quantum information. For example, we show how to use them in the estimation of geometric Rényi relative entropies and the Uhlmann fidelity by means of the Fuchs-Caves observable. In particular, our quantum algorithms for estimating the Uhlmann and Matsumoto fidelities have optimal dependence on the precision. Finally, we provide a BQP-complete problem based on matrix geometric means that can be solved by our subroutines, thus characterising their computational capability.
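The link between the geometric mean and the Riccati equation can be checked classically: $X = A \# B = A^{1/2}(A^{-1/2} B A^{-1/2})^{1/2} A^{1/2}$ is the unique positive definite solution of $X A^{-1} X = B$. The following is a toy numerical verification of that identity (the helper `sqrtm_psd` and random instances are illustrative, not the quantum subroutines themselves):

```python
import numpy as np

def sqrtm_psd(M):
    """Principal square root of a symmetric positive definite matrix."""
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.sqrt(w)) @ V.T

def geometric_mean(A, B):
    """Matrix geometric mean A # B = A^{1/2} (A^{-1/2} B A^{-1/2})^{1/2} A^{1/2}."""
    As = sqrtm_psd(A)
    Ais = np.linalg.inv(As)
    return As @ sqrtm_psd(Ais @ B @ Ais) @ As

# X = A # B solves the algebraic Riccati equation X A^{-1} X = B.
rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3)); A = M @ M.T + 3 * np.eye(3)
N = rng.standard_normal((3, 3)); B = N @ N.T + 3 * np.eye(3)
X = geometric_mean(A, B)
print(np.allclose(X @ np.linalg.inv(A) @ X, B))  # True
```

The identity follows by direct substitution: with $C = A^{-1/2} B A^{-1/2}$, one has $X A^{-1} X = A^{1/2} C A^{1/2} = B$.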
The main contribution of our paper is to introduce a number of multivariate quantum fidelities and show that they satisfy several desirable properties that are natural extensions of those of the Uhlmann and Holevo fidelities. We propose three variants that reduce to the average pairwise fidelity for commuting states: average pairwise $z$-fidelities, the multivariate semi-definite programming (SDP) fidelity, and a multivariate fidelity inspired by an existing secrecy measure. The second one is obtained by extending the SDP formulation of the Uhlmann fidelity to more than two states. All three of these variants satisfy the following properties: (i) reduction to multivariate classical fidelities for commuting states, (ii) the data-processing inequality, (iii) invariance under permutations of the states, (iv) boundedness in the interval $[0,1]$, together with faithfulness (their values are equal to one if and only if all the states are equal) and orthogonality (their values are equal to zero if and only if the states are mutually orthogonal to each other), (v) direct-sum property, (vi) joint concavity, and (vii) uniform continuity bounds under certain conditions. Furthermore, we establish inequalities relating these different variants, indeed clarifying that all these definitions coincide with the average pairwise fidelity for commuting states. Lastly, we introduce another multivariate fidelity called multivariate log-Euclidean fidelity, which is a quantum generalization of the Matusita multivariate fidelity. We also show that it satisfies most of the desirable properties listed above, it is a function of a multivariate log-Euclidean divergence, and has an operational interpretation in terms of quantum hypothesis testing with an arbitrarily varying null hypothesis.
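The common classical baseline referenced above is the average pairwise fidelity, where each pairwise term is the classical fidelity $F(p,q) = \left(\sum_x \sqrt{p(x) q(x)}\right)^2$. A toy evaluation (an illustrative classical computation, not one of the paper's quantum variants):

```python
import numpy as np

def classical_fidelity(p, q):
    """Classical fidelity F(p, q) = (sum_x sqrt(p(x) q(x)))^2."""
    return float(np.sum(np.sqrt(np.asarray(p) * np.asarray(q))) ** 2)

def average_pairwise_fidelity(dists):
    """Average of the classical fidelity over all unordered pairs."""
    r = len(dists)
    pairs = [(i, j) for i in range(r) for j in range(i + 1, r)]
    return sum(classical_fidelity(dists[i], dists[j]) for i, j in pairs) / len(pairs)

dists = [[0.5, 0.5], [0.7, 0.3], [0.9, 0.1]]
print(round(average_pairwise_fidelity(dists), 4))
```

For commuting density matrices, each of the three multivariate variants above evaluates to exactly this quantity on the diagonals.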
A pure state of fixed Hamming weight is a superposition of computational basis states such that each bitstring in the superposition has the same number of ones. Given a Hilbert space of the form $\mathcal{H} = (\mathbb{C}^2)^{\otimes n}$, or an $n$-qubit system, the identity operator can be decomposed as a sum of projectors onto subspaces of fixed Hamming weight. In this work, we propose several quantum algorithms that realize a coherent Hamming weight projective measurement on an input pure state, meaning that the post-measurement state of the algorithm is the projection of the input state onto the corresponding subspace of fixed Hamming weight. We analyze a depth-width trade-off for the corresponding quantum circuits, allowing for a depth reduction of the circuits at the cost of more control qubits. For an $n$-qubit input, the depth-optimal algorithm uses $O(n)$ control qubits and the corresponding circuit has depth $O(\log (n))$, assuming that we have the ability to perform qubit resets. Furthermore, the proposed algorithm construction uses only one- and two-qubit gates.
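The decomposition of the identity into fixed-Hamming-weight projectors, and the resulting measurement statistics, can be checked with a small classical simulation (an illustrative sketch of the measurement semantics, not the paper's circuits):

```python
import numpy as np
from itertools import product

def hamming_projector(n, k):
    """Projector onto the span of n-qubit basis states of Hamming weight k."""
    d = 2 ** n
    P = np.zeros((d, d))
    for idx, bits in enumerate(product([0, 1], repeat=n)):
        if sum(bits) == k:
            P[idx, idx] = 1.0
    return P

n = 3
projs = [hamming_projector(n, k) for k in range(n + 1)]
assert np.allclose(sum(projs), np.eye(2 ** n))  # they resolve the identity

# Measuring |+>^n coherently: outcome k occurs with probability C(n,k)/2^n,
# and the post-measurement state is the normalized projection.
plus = np.ones(2 ** n) / np.sqrt(2 ** n)
probs = [float(plus @ P @ plus) for P in projs]
print([round(p, 4) for p in probs])  # [0.125, 0.375, 0.375, 0.125]
```

The outcome distribution is binomial, matching the weight distribution of uniformly random $n$-bit strings.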
The probabilistic one-way distillable secret key is equal to the largest expected rate at which perfect secret key bits can be probabilistically distilled from a bipartite state by means of local operations and one-way classical communication. Here we define the set of super two-extendible states and prove that an arbitrary state in this set cannot be used for probabilistic one-way secret-key distillation. This broad class of states includes both erased states and all full-rank states. Comparing the probabilistic one-way distillable secret key with the more commonly studied approximate one-way distillable secret key, our results demonstrate an extreme gap between them for many states of interest, with the approximate one-way distillable secret key being much larger. Our findings naturally extend to probabilistic one-way entanglement distillation, with similar conclusions.
Quantum hypothesis testing (QHT) has been traditionally studied from the information-theoretic perspective, wherein one is interested in the optimal decay rate of error probabilities as a function of the number of samples of an unknown state. In this paper, we study the sample complexity of QHT, wherein the goal is to determine the minimum number of samples needed to reach a desired error probability. By making use of the wealth of knowledge that already exists in the literature on QHT, we characterize the sample complexity of binary QHT in the symmetric and asymmetric settings, and we provide bounds on the sample complexity of multiple QHT. In more detail, we prove that the sample complexity of symmetric binary QHT depends logarithmically on the inverse error probability and inversely on the negative logarithm of the fidelity. As a counterpart of the quantum Stein's lemma, we also find that the sample complexity of asymmetric binary QHT depends logarithmically on the inverse type II error probability and inversely on the quantum relative entropy, provided that the type II error probability is sufficiently small. We then provide lower and upper bounds on the sample complexity of multiple QHT, and it remains an intriguing open question to improve these bounds. The final part of our paper outlines and reviews how the sample complexity of QHT is relevant to a broad swathe of research areas and can enhance understanding of many fundamental concepts, including quantum algorithms for simulation and search, quantum learning and classification, and foundations of quantum mechanics. As such, we view our paper as an invitation to researchers coming from different communities to study and contribute to the problem of sample complexity of QHT, and we outline a number of open directions for future research.
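A heuristic reading of the symmetric-setting scaling stated above is $n \sim \log(1/\varepsilon) / (-\log F(\rho,\sigma))$, with constants omitted. The toy calculator below (an illustrative back-of-the-envelope estimate, not the paper's exact characterization) evaluates this for a pair of commuting qubit states:

```python
import numpy as np

def fidelity(rho, sigma):
    """Uhlmann fidelity F = (Tr sqrt(sqrt(rho) sigma sqrt(rho)))^2 via eigendecompositions."""
    w, V = np.linalg.eigh(rho)
    w = np.clip(w, 0, None)
    sr = V @ np.diag(np.sqrt(w)) @ V.conj().T          # sqrt(rho)
    wi = np.clip(np.linalg.eigvalsh(sr @ sigma @ sr), 0, None)
    return float(np.sum(np.sqrt(wi)) ** 2)

rho = np.diag([0.8, 0.2])
sigma = np.diag([0.6, 0.4])
F = fidelity(rho, sigma)
eps = 1e-3
# Rough copy-count scale from the stated dependence (constants suppressed):
print(np.log(1 / eps) / (-np.log(F)))
```

For nearly indistinguishable states ($F \to 1$) the denominator vanishes, so the required number of copies diverges, consistent with the inverse dependence stated above.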
In this paper, we develop the resource theory of quantum secret key. Operating under the assumption that entangled states with zero distillable key do not exist, we define the key cost of a quantum state, as well as that of a quantum device. We study its properties through the lens of a quantity that we call the key of formation. The main result of our paper is that the regularized key of formation is an upper bound on the key cost of a quantum state. The core protocol underlying this result is privacy dilution, which converts states containing ideal privacy into ones with diluted privacy. Next, we show that the key cost is bounded from below by the regularized relative entropy of entanglement, which implies the irreversibility of the privacy creation-distillation process for a specific class of states. We further focus on mixed-state analogues of pure quantum states in the domain of privacy, and we prove that a number of entanglement measures are equal to each other for these states, similar to the case of pure entangled states. The privacy cost and distillable key in the single-shot regime exhibit a yield-cost relation, and basic consequences for quantum devices are also provided.
With a view toward addressing the explosive growth in the computational demands of nuclear structure and reactions modeling, we develop a novel quantum algorithm for neutron-nucleus simulations with general potentials, which provides acceptable bound-state energies even in the presence of noise, through the noise-resilient training method. In particular, the algorithm can now solve for any band-diagonal to full Hamiltonian matrices, as needed to accommodate a general central potential. This includes exponential Gaussian-like potentials and ab initio inter-cluster potentials (optical potentials). The approach can also accommodate the complete form of the chiral effective-field-theory nucleon-nucleon potentials used in ab initio nuclear calculations. We make this potential available for three different qubit encodings, namely the one-hot (OHE), binary (BE), and Gray (GE) encodings, and we provide a comprehensive analysis of the number of Pauli terms and commuting sets involved. We find that the GE allows for an efficient scaling of the model-space size $N$ (or number of basis states used) and is more resource efficient not only for tridiagonal Hamiltonians, but also for band-diagonal Hamiltonians having bandwidth up to $N$. We introduce a new commutativity scheme called distance-grouped commutativity (DGC) and compare its performance with the well-known qubit-commutativity (QC) scheme. We lay out the explicit grouping of Pauli strings and the diagonalizing unitary under the DGC scheme, and we find that it outperforms the QC scheme, at the cost of a more complex diagonalizing unitary. Lastly, we provide the first solutions of neutron-alpha dynamics from quantum simulations suitable for NISQ processors, using an optical potential rooted in first principles, and a study of the bound-state physics in neutron-carbon systems, along with a comparison of the efficacy of the OHE and GE.
Dephasing is a prominent noise mechanism that afflicts quantum information carriers, and it is one of the main challenges towards realizing useful quantum computation, communication, and sensing. Here we consider discrimination and estimation of bosonic dephasing channels, when using the most general adaptive strategies allowed by quantum mechanics. We reduce these difficult quantum problems to simple classical ones based on the probability densities defining the bosonic dephasing channels. By doing so, we rigorously establish the optimal performance of various distinguishability and estimation tasks and construct explicit strategies to achieve this performance. To the best of our knowledge, this is the first example of a non-Gaussian bosonic channel for which there are exact solutions for these tasks.
Solving optimization problems is a key task for which quantum computers could possibly provide a speedup over the best known classical algorithms. Particular classes of optimization problems including semi-definite programming (SDP) and linear programming (LP) have wide applicability in many domains of computer science, engineering, mathematics, and physics. Here we focus on semi-definite and linear programs for which the dimensions of the variables involved are exponentially large, so that standard classical SDP and LP solvers are not helpful for such large-scale problems. We propose the QSlack and CSlack methods for estimating their optimal values, respectively, which work by 1) introducing slack variables to transform inequality constraints to equality constraints, 2) transforming a constrained optimization to an unconstrained one via the penalty method, and 3) replacing the optimizations over all possible non-negative variables by optimizations over parameterized quantum states and parameterized probability distributions. Under the assumption that the SDP and LP inputs are efficiently measurable observables, it follows that all terms in the resulting objective functions are efficiently estimable by either a quantum computer in the SDP case or a quantum or probabilistic computer in the LP case. Furthermore, by making use of SDP and LP duality theory, we prove that these methods provide a theoretical guarantee that, if one could find global optima of the objective functions, then the resulting values sandwich the true optimal values from both above and below. Finally, we showcase the QSlack and CSlack methods on a variety of example optimization problems and discuss details of our implementation, as well as the resulting performance. We find that our implementations of both the primal and dual for these problems approach the ground truth, typically achieving errors of order $10^{-2}$.
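Steps 1) and 2) above can be illustrated on a tiny classical LP (a hypothetical toy instance, not the QSlack/CSlack implementation): the problem $\min x$ subject to $x \geq 1$, $x \geq 0$ becomes, after adding a slack variable $s \geq 0$ with $x - s = 1$ and a quadratic penalty of weight $\mu$, the objective $f(x,s) = x + \mu (x - s - 1)^2$ over nonnegative variables:

```python
# Projected gradient descent on the penalized objective f(x, s) = x + mu*(x - s - 1)^2,
# keeping x, s >= 0 by clipping. Illustrative parameters; not the paper's method.
mu, eta = 50.0, 1e-3
x, s = 0.0, 0.0
for _ in range(20000):
    r = x - s - 1.0                               # residual of the equality constraint
    x = max(0.0, x - eta * (1.0 + 2.0 * mu * r))  # d f / d x = 1 + 2*mu*r
    s = max(0.0, s - eta * (-2.0 * mu * r))       # d f / d s = -2*mu*r
print(round(x, 2))  # about 0.99; the penalty biases slightly below the true optimum x* = 1
```

The small bias (here $x^* = 1 - 1/(2\mu)$) shrinks as the penalty weight $\mu$ grows, which is the standard trade-off of the penalty method.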
The variational quantum eigensolver (VQE) is a hybrid quantum--classical variational algorithm that produces an upper-bound estimate of the ground-state energy of a Hamiltonian. As quantum computers become more powerful and go beyond the reach of classical brute-force simulation, it is important to assess the quality of solutions produced by them. Here we propose a dual variational quantum eigensolver (dual-VQE) that produces a lower-bound estimate of the ground-state energy. As such, VQE and dual-VQE can serve as quality checks on their solutions; in the ideal case, the VQE upper bound and the dual-VQE lower bound form an interval containing the true optimal value of the ground-state energy. The idea behind dual-VQE is to employ semi-definite programming duality to rewrite the ground-state optimization problem as a constrained maximization problem, which itself can be bounded from below by an unconstrained optimization problem to be solved by a variational quantum algorithm. When using a convex combination ansatz in conjunction with a classical generative model, the quantum computational resources needed to evaluate the objective function of dual-VQE are no greater than those needed for that of VQE. We simulated the performance of dual-VQE on the transverse-field Ising model, and found that, for the example considered, while dual-VQE training is slower and noisier than VQE, it approaches the true value with error of order $10^{-2}$.
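The duality behind dual-VQE can be seen in miniature: the ground-state energy of $H$ equals $\max\{\lambda : H - \lambda I \succeq 0\}$, the SDP dual of $\min_\rho \operatorname{Tr}[H\rho]$. A toy classical check on a hypothetical two-qubit transverse-field Ising instance (exact diagonalization stands in for the variational optimization):

```python
import numpy as np

# Two-qubit transverse-field Ising Hamiltonian H = -ZZ - g(XI + IX), g = 0.5.
Z = np.diag([1.0, -1.0]); X = np.array([[0.0, 1.0], [1.0, 0.0]]); I = np.eye(2)
g = 0.5
H = -np.kron(Z, Z) - g * (np.kron(X, I) + np.kron(I, X))

lam_min = float(np.linalg.eigvalsh(H)[0])  # primal optimum: ground-state energy
# Dual feasibility: H - lambda*I is PSD exactly when lambda <= lam_min.
assert np.all(np.linalg.eigvalsh(H - (lam_min - 1e-9) * np.eye(4)) >= -1e-8)
print(round(lam_min, 4))  # -1.4142
```

Any dual-feasible $\lambda$ is a certified lower bound on the ground-state energy, which is precisely the kind of certificate dual-VQE aims to produce variationally.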
We consider stellar interferometry in the continuous-variable (CV) quantum information formalism and use the quantum Fisher information (QFI) to characterize the performance of three key strategies: direct interferometry (DI), local heterodyne measurement, and a CV teleportation-based strategy. In the lossless regime, we show that a squeezing parameter of $r\approx 2$ (18 dB) is required to reach $\approx$ 95\% of the QFI achievable with DI; such a squeezing level is beyond what has been achieved experimentally. In the low-loss regime, the CV teleportation strategy becomes inferior to DI, and the performance gap widens as loss increases. Curiously, in the high-loss regime, a small region of loss exists where the CV teleportation strategy slightly outperforms both DI and local heterodyne, representing a transition in the optimal strategy. We describe this advantage as limited because it occurs for a small region of loss, and the magnitude of the advantage is also small. We argue that practical difficulties further impede achieving any quantum advantage, limiting the merits of a CV teleportation-based strategy for stellar interferometry.
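The squeezing level quoted above uses the standard convention that a squeezing parameter $r$ reduces the quadrature variance by $e^{-2r}$, so the level in decibels is $10\log_{10}(e^{2r})$:

```python
import math

# Convert a squeezing parameter r to the conventional dB figure.
r = 2.0
db = 10.0 * math.log10(math.exp(2.0 * r))
print(round(db, 1))  # 17.4, i.e., roughly the 18 dB quoted above
```

For comparison, demonstrated squeezing levels to date are well below this figure, which is why the text describes $r \approx 2$ as beyond current experiments.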
In this paper, we investigate the problem of simulating open system dynamics governed by the well-known Lindblad master equation. In our prequel paper, we introduced an input model in which Lindblad operators are encoded into pure quantum states, called program states, and we also introduced a method, called wave matrix Lindbladization, for simulating Lindbladian evolution by means of interacting the system of interest with these program states. Therein, we focused on a simple case in which the Lindbladian consists of only one Lindblad operator and a Hamiltonian. Here, we extend the method to simulating general Lindbladians and other cases in which a Lindblad operator is expressed as a linear combination or a polynomial of the operators encoded into the program states. We propose quantum algorithms for all these cases and also investigate their sample complexity, i.e., the number of program states needed to simulate a given Lindbladian evolution approximately. Finally, we demonstrate that our quantum algorithms provide an efficient route for simulating Lindbladian evolution relative to full tomography of encoded operators, by proving that the sample complexity for tomography is dependent on the dimension of the system, whereas the sample complexity of wave matrix Lindbladization is dimension independent.
Testing the symmetries of quantum states and channels provides a way to assess their usefulness for different physical, computational, and communication tasks. Here, we establish several complexity-theoretic results that classify the difficulty of symmetry-testing problems involving a unitary representation of a group and a state or a channel that is being tested. In particular, we prove that various such symmetry-testing problems are complete for BQP, QMA, QSZK, QIP(2), QIP_EB(2), and QIP, thus spanning the prominent classes of the quantum interactive proof hierarchy and forging a non-trivial connection between symmetry and quantum computational complexity. Finally, we prove the inclusion of two Hamiltonian symmetry-testing problems in QMA and QAM, while leaving it as an intriguing open question to determine whether these problems are complete for these classes.
The concept of antidistinguishability of quantum states has been studied to investigate foundational questions in quantum mechanics. It is also called quantum state elimination, because the goal of such a protocol is to guess which state, among finitely many chosen at random, the system is not prepared in (that is, it can be thought of as the first step in a process of elimination). Antidistinguishability has been used to investigate the reality of quantum states, ruling out $\psi$-epistemic ontological models of quantum mechanics [Pusey et al., Nat. Phys., 8(6):475-478, 2012]. Thus, due to the established importance of antidistinguishability in quantum mechanics, exploring it further is warranted. In this paper, we provide a comprehensive study of the optimal error exponent -- the rate at which the optimal error probability decays to zero asymptotically -- for classical and quantum antidistinguishability. We derive an exact expression for the optimal error exponent in the classical case and show that it is given by the multivariate classical Chernoff divergence. Our work thus provides this divergence with a meaningful operational interpretation as the optimal error exponent for antidistinguishing a set of probability measures. For the quantum case, we provide several bounds on the optimal error exponent: a lower bound given by the best pairwise Chernoff divergence of the states, a single-letter semi-definite programming upper bound, and lower and upper bounds in terms of minimal and maximal multivariate quantum Chernoff divergences. It remains an open problem to obtain an explicit expression for the optimal error exponent for quantum antidistinguishability.
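The multivariate classical Chernoff divergence referenced above is $C(p_1,\ldots,p_r) = -\log \min_{s \in \Delta_r} \sum_x \prod_i p_i(x)^{s_i}$, where $\Delta_r$ is the probability simplex. A brute-force numerical sketch (a coarse grid search over the simplex, purely illustrative):

```python
import numpy as np
from itertools import product

def multivariate_chernoff(dists, grid=100):
    """Multivariate classical Chernoff divergence
       C(p_1,...,p_r) = -log min_{s in simplex} sum_x prod_i p_i(x)^{s_i},
       estimated by a grid search over the (r-1)-dimensional simplex."""
    P = np.array(dists)          # shape (r, |X|), each row a distribution
    r = P.shape[0]
    best = np.inf
    for idx in product(range(grid + 1), repeat=r - 1):
        if sum(idx) > grid:
            continue
        s = np.array(list(idx) + [grid - sum(idx)]) / grid
        best = min(best, float(np.sum(np.prod(P ** s[:, None], axis=0))))
    return -np.log(best)

p = [0.9, 0.1]; q = [0.2, 0.8]; u = [0.5, 0.5]
print(round(multivariate_chernoff([p, q, u]), 4))
```

For $r = 2$ this reduces to the familiar binary Chernoff information, and it vanishes whenever two of the distributions coincide, since the minimum over the simplex then reaches one.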
Symmetry is an important and unifying notion in many areas of physics. In quantum mechanics, it is possible to eliminate degrees of freedom from a system by leveraging symmetry to identify the possible physical transitions. This allows us to simplify calculations and characterize potentially complicated dynamics of the system with relative ease. Previous works have focused on devising quantum algorithms to ascertain symmetries by means of fidelity-based symmetry measures. In our present work, we develop alternative symmetry testing quantum algorithms that are efficiently implementable on quantum computers. Our approach estimates asymmetry measures based on the Hilbert--Schmidt distance, which is significantly easier, in a computational sense, than using fidelity as a metric. The method is derived to measure symmetries of states, channels, Lindbladians, and measurements. We apply this method to a number of scenarios involving open quantum systems, including the amplitude damping channel and a spin chain, and we test for symmetries within and outside the finite symmetry group of the Hamiltonian and Lindblad operators.
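An asymmetry measure of the Hilbert-Schmidt kind described above can be sketched classically as $\Vert U \rho U^\dagger - \rho \Vert_{\mathrm{HS}}^2$, which vanishes exactly when $\rho$ commutes with the symmetry unitary $U$ (a toy single-qubit check, not the paper's estimation circuits):

```python
import numpy as np

def hs_asymmetry(rho, U):
    """Squared Hilbert-Schmidt distance ||U rho U^dag - rho||_HS^2."""
    delta = U @ rho @ U.conj().T - rho
    return float(np.real(np.trace(delta.conj().T @ delta)))

Z = np.diag([1.0, -1.0])
rho_diag = np.diag([0.7, 0.3])             # commutes with Z: symmetric
plus = np.array([[0.5, 0.5], [0.5, 0.5]])  # |+><+|: maximally Z-asymmetric
print(hs_asymmetry(rho_diag, Z), hs_asymmetry(plus, Z))  # 0.0 2.0
```

Because the Hilbert-Schmidt distance is a polynomial in the state, quantities of this form can be estimated from swap-type tests, which is what makes them computationally easier targets than fidelity-based measures.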
The single-letter characterisation of the entanglement-assisted capacity of a quantum channel is one of the seminal results of quantum information theory. In this paper, we consider a modified communication scenario in which the receiver is allowed an additional, `inconclusive' measurement outcome, and we employ an error metric given by the error probability in decoding the transmitted message conditioned on a conclusive measurement result. We call this setting postselected communication and the ensuing highest achievable rates the postselected capacities. Here, we provide a precise single-letter characterisation of postselected capacities in the setting of entanglement assistance as well as the more general nonsignalling assistance, establishing that they are both equal to the channel's projective mutual information -- a variant of mutual information based on the Hilbert projective metric. We do so by establishing bounds on the one-shot postselected capacities, with a lower bound that makes use of a postselected teleportation-based protocol and an upper bound in terms of the postselected hypothesis testing relative entropy. As such, we obtain fundamental limits on a channel's ability to communicate even when this strong resource of postselection is allowed, implying limitations on communication even when the receiver has access to postselected closed timelike curves.
Density Matrix Exponentiation is a technique for simulating Hamiltonian dynamics when the Hamiltonian to be simulated is available as a quantum state. In this paper, we present a natural analogue of this technique for simulating Markovian dynamics governed by the well-known Lindblad master equation. For this purpose, we first propose an input model in which a Lindblad operator $L$ is encoded into a quantum state $\psi$. Then, given access to $n$ copies of the state $\psi$, the task is to simulate the corresponding Markovian dynamics for time $t$. We propose a quantum algorithm for this task, called Wave Matrix Lindbladization, and we also investigate its sample complexity. We show that our algorithm uses $n = O(t^2/\varepsilon)$ samples of $\psi$ to achieve the target dynamics, with an approximation error of $O(\varepsilon)$.
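For reference, the target dynamics here is the Lindblad master equation $\dot\rho = -i[H,\rho] + L\rho L^\dagger - \tfrac{1}{2}\{L^\dagger L, \rho\}$. The following is a minimal classical Euler integrator for a single-qubit amplitude-damping example (a baseline for what the quantum algorithm simulates, not the algorithm itself); the step size and damping rate are arbitrary choices.

```python
import numpy as np

def lindblad_step(rho, H, L, dt):
    """One Euler step of drho/dt = -i[H,rho] + L rho L^dag - (1/2){L^dag L, rho}."""
    comm = -1j * (H @ rho - rho @ H)
    LdL = L.conj().T @ L
    diss = L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL)
    return rho + dt * (comm + diss)

# Amplitude damping: L = sqrt(gamma) |0><1|, starting from the excited state
gamma, dt, steps = 1.0, 1e-3, 2000
H = np.zeros((2, 2))
L = np.sqrt(gamma) * np.array([[0.0, 1.0], [0.0, 0.0]])
rho = np.array([[0.0, 0.0], [0.0, 1.0]], dtype=complex)
for _ in range(steps):
    rho = lindblad_step(rho, H, L, dt)
```

The evolution is trace-preserving, and the excited-state population decays approximately as $e^{-\gamma t}$.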
Entropy measures quantify the amount of information and correlation present in a quantum system. In practice, when the quantum state is unknown and only copies thereof are available, one must resort to the estimation of such entropy measures. Here we propose a variational quantum algorithm for estimating the von Neumann and Rényi entropies, as well as the measured relative entropy and measured Rényi relative entropy. Our approach first parameterizes a variational formula for the measure of interest by a quantum circuit and a classical neural network, and then optimizes the resulting objective over parameter space. Numerical simulations of our quantum algorithm are provided, using a noiseless quantum simulator. The algorithm provides accurate estimates of the various entropy measures for the examples tested, which renders it a promising approach for use in downstream tasks.
We propose a versatile privacy framework for quantum systems, termed quantum pufferfish privacy (QPP). Inspired by classical pufferfish privacy, our formulation generalizes and addresses limitations of quantum differential privacy by offering flexibility in specifying private information, feasible measurements, and domain knowledge. We show that QPP can be equivalently formulated in terms of the Datta-Leditzky information spectrum divergence, thus providing the first operational interpretation thereof. We reformulate this divergence as a semi-definite program and derive several properties of it, which are then used to prove convexity, composability, and post-processing of QPP mechanisms. Parameters that guarantee QPP of the depolarization mechanism are also derived. We analyze the privacy-utility tradeoff of general QPP mechanisms and, again, study the depolarization mechanism as an explicit instance. The QPP framework is then applied to privacy auditing for identifying privacy violations via a hypothesis testing pipeline that leverages quantum algorithms. Connections to quantum fairness and other quantum divergences are also explored and several variants of QPP are examined.
The fidelity-based smooth min-relative entropy is a distinguishability measure that has appeared in a variety of contexts in prior work on quantum information, including resource theories like thermodynamics and coherence. Here we provide a comprehensive study of this quantity. First we prove that it satisfies several basic properties, including the data-processing inequality. We also establish connections between the fidelity-based smooth min-relative entropy and other widely used information-theoretic quantities, including smooth min-relative entropy and smooth sandwiched Rényi relative entropy, of which the sandwiched Rényi relative entropy and smooth max-relative entropy are special cases. After that, we use these connections to establish the second-order asymptotics of the fidelity-based smooth min-relative entropy and all smooth sandwiched Rényi relative entropies, finding that the first-order term is the quantum relative entropy and the second-order term involves the quantum relative entropy variance. Utilizing the properties derived, we also show how the fidelity-based smooth min-relative entropy provides one-shot bounds for operational tasks in general resource theories in which the target state is mixed, with a particular example being randomness distillation. The above observations then lead to second-order expansions of the upper bounds on distillable randomness, as well as the precise second-order asymptotics of the distillable randomness of particular classical-quantum states. Finally, we establish semi-definite programs for smooth max-relative entropy and smooth conditional min-entropy, as well as a bilinear program for the fidelity-based smooth min-relative entropy, which we subsequently use to explore the tightness of a bound relating the last to the first.
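The two quantities appearing in the second-order expansion above are the quantum relative entropy $D(\rho\|\sigma) = \mathrm{Tr}[\rho(\log\rho - \log\sigma)]$ and the relative entropy variance $V(\rho\|\sigma) = \mathrm{Tr}[\rho(\log\rho - \log\sigma)^2] - D(\rho\|\sigma)^2$. A minimal sketch computing both, assuming full-rank states (natural logarithm):

```python
import numpy as np

def _logm(rho):
    """Matrix logarithm of a full-rank Hermitian matrix via eigendecomposition."""
    w, V = np.linalg.eigh(rho)
    return V @ np.diag(np.log(w)) @ V.conj().T

def relative_entropy_and_variance(rho, sigma):
    """D(rho||sigma) = Tr[rho (log rho - log sigma)] and
    V(rho||sigma) = Tr[rho (log rho - log sigma)^2] - D^2, in nats."""
    delta = _logm(rho) - _logm(sigma)
    D = float(np.real(np.trace(rho @ delta)))
    V = float(np.real(np.trace(rho @ delta @ delta))) - D ** 2
    return D, V

p = np.diag([0.75, 0.25])
q = np.diag([0.5, 0.5])
D, V = relative_entropy_and_variance(p, q)
```

For commuting states, both quantities reduce to their classical counterparts evaluated on the eigenvalue distributions.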
Quantifying entanglement is an important task by which the resourcefulness of a quantum state can be measured. Here, we develop a quantum algorithm that tests for and quantifies the separability of a general bipartite state by using the quantum steering effect, the latter initially discovered by Schrödinger. Our separability test consists of a distributed quantum computation involving two parties: a computationally limited client, who prepares a purification of the state of interest, and a computationally unbounded server, who tries to steer the reduced systems to a probabilistic ensemble of pure product states. To design a practical algorithm, we replace the role of the server with a combination of parameterized unitary circuits and classical optimization techniques to perform the necessary computation. The result is a variational quantum steering algorithm (VQSA), a modified separability test that is implementable on quantum computers that are available today. We then simulate our VQSA on noisy quantum simulators and find favorable convergence properties on the examples tested. We also develop semidefinite programs, executable on classical computers, that benchmark the results obtained from our VQSA. Thus, our findings provide a meaningful connection between steering, entanglement, quantum algorithms, and quantum computational complexity theory. They also demonstrate the value of a parameterized mid-circuit measurement in a VQSA.
The pretty good measurement is a fundamental analytical tool in quantum information theory, giving a method for inferring the classical label that identifies a quantum state chosen probabilistically from an ensemble. Identifying and constructing the pretty good measurement for the class of bosonic Gaussian states is of immediate practical relevance in quantum information processing tasks. Holevo recently showed that the pretty good measurement for a bosonic Gaussian ensemble is a bosonic Gaussian measurement that attains the accessible information of the ensemble (IEEE Trans. Inf. Theory, 66(9):5634-564, 2020). In this paper, we provide an alternate proof of Gaussianity of the pretty good measurement for a Gaussian ensemble of multimode bosonic states, with a focus on establishing an explicit and efficiently computable Gaussian description of the measurement. We also compute an explicit form of the mean square error of the pretty good measurement, which is relevant when using it for parameter estimation. A generalization of the pretty good measurement is a quantum instrument, called the pretty good instrument. We prove that the post-measurement state of the pretty good instrument is a faithful Gaussian state if the input state is a faithful Gaussian state whose covariance matrix satisfies a certain condition. Combined with our previous finding for the pretty good measurement and provided that the same condition holds, it follows that the expected output state is a faithful Gaussian state as well. In this case, we compute an explicit Gaussian description of the post-measurement and expected output states. Our findings imply that the pretty good instrument for bosonic Gaussian ensembles is no longer merely an analytical tool, but that it can also be implemented experimentally in quantum optics laboratories.
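For orientation, recall the finite-dimensional pretty good measurement for an ensemble $\{p_i, \rho_i\}$: it has POVM elements $E_i = \bar\rho^{-1/2} p_i \rho_i \bar\rho^{-1/2}$ with $\bar\rho = \sum_i p_i \rho_i$. The sketch below is this finite-dimensional analogue (the paper treats the bosonic Gaussian case, which this does not capture):

```python
import numpy as np

def pgm(priors, states):
    """Pretty good measurement: E_i = rho_bar^{-1/2} p_i rho_i rho_bar^{-1/2},
    with the inverse square root taken on the support of rho_bar."""
    rho_bar = sum(p * r for p, r in zip(priors, states))
    w, V = np.linalg.eigh(rho_bar)
    inv_sqrt = V @ np.diag([1 / np.sqrt(x) if x > 1e-12 else 0.0 for x in w]) @ V.conj().T
    return [inv_sqrt @ (p * r) @ inv_sqrt for p, r in zip(priors, states)]

def success_probability(priors, states, povm):
    return float(np.real(sum(p * np.trace(E @ r)
                             for p, r, E in zip(priors, states, povm))))

# Two nonorthogonal pure qubit states with equal priors
theta = np.pi / 8
psi0 = np.array([np.cos(theta), np.sin(theta)])
psi1 = np.array([np.cos(theta), -np.sin(theta)])
states = [np.outer(v, v.conj()) for v in (psi0, psi1)]
povm = pgm([0.5, 0.5], states)
```

For two equiprobable pure states, the pretty good measurement is known to achieve the optimal (Helstrom) success probability $\tfrac{1}{2}(1 + \sqrt{1 - |\langle\psi_0|\psi_1\rangle|^2})$.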
The distillable randomness of a bipartite quantum state is an information-theoretic quantity equal to the largest net rate at which shared randomness can be distilled from the state by means of local operations and classical communication. This quantity has been widely used as a measure of classical correlations, and one version of it is equal to the regularized Holevo information of the ensemble that results from measuring one share of the state. However, due to the regularization, the distillable randomness is difficult to compute in general. To address this problem, we define measures of classical correlations and prove a number of their properties, most importantly that they serve as upper bounds on the distillable randomness of an arbitrary bipartite state. We then further bound these measures from above by some that are efficiently computable by means of semi-definite programming, we evaluate one of them for the example of an isotropic state, and we remark on the relation to quantities previously proposed in the literature.
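Since the Holevo information of the measured ensemble is the quantity being regularized above, a single-copy sketch may help fix ideas: $\chi = S(\sum_i p_i \rho_i) - \sum_i p_i S(\rho_i)$, with $S$ the von Neumann entropy (here in bits).

```python
import numpy as np

def von_neumann_entropy(rho):
    """Von Neumann entropy in bits."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log2(w)))

def holevo_information(priors, states):
    """chi = S(sum_i p_i rho_i) - sum_i p_i S(rho_i)."""
    rho_bar = sum(p * r for p, r in zip(priors, states))
    return von_neumann_entropy(rho_bar) - sum(
        p * von_neumann_entropy(r) for p, r in zip(priors, states))

ens = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]  # orthogonal pure states
```

An equiprobable ensemble of two orthogonal pure states carries one bit of Holevo information; an ensemble of identical states carries none.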
Bidirectional quantum teleportation is a fundamental protocol for exchanging quantum information between two parties. Specifically, the two individuals make use of a shared resource state as well as local operations and classical communication (LOCC) to swap quantum states. In this work, we concisely highlight the contributions of our companion paper [Siddiqui and Wilde, arXiv:2010.07905]. We develop two different ways of quantifying the error of nonideal bidirectional teleportation by means of the normalized diamond distance and the channel infidelity. We then establish that the values given by both metrics are equal for this task. Additionally, by relaxing the set of operations allowed from LOCC to those that completely preserve the positivity of the partial transpose, we obtain semidefinite programming lower bounds on the error of nonideal bidirectional teleportation. We evaluate these bounds for some key examples -- isotropic states and when there is no resource state at all. In both cases, we find an analytical solution. The second example establishes a benchmark for classical versus quantum bidirectional teleportation. Another example that we investigate consists of two Bell states that have been sent through a generalized amplitude damping channel (GADC). For this scenario, we find an analytical expression for the error, as well as a numerical solution that agrees with the former up to numerical precision.
Continuous-variable (CV) teleportation is a foundational protocol in quantum information science. A number of experiments have been designed to simulate ideal teleportation under realistic conditions. In this paper, we detail an analytical approach for determining optimal input states for quantifying the performance of CV unidirectional and bidirectional teleportation. The metric that we consider for quantifying performance is the energy-constrained channel fidelity between ideal teleportation and its experimental implementation; accordingly, we focus on determining the input states that best distinguish the ideal process from the experimental one. We prove that, under certain energy constraints, the optimal input state in unidirectional, as well as bidirectional, teleportation is a finite entangled superposition of twin-Fock states saturating the energy constraint. Moreover, we prove that, under the same constraints, the optimal states are unique; that is, there is no other optimal finite entangled superposition of twin-Fock states.
We study a variant of quantum hypothesis testing wherein an additional 'inconclusive' measurement outcome is added, allowing one to abstain from attempting to discriminate the hypotheses. The error probabilities are then conditioned on a successful attempt, with inconclusive trials disregarded. We completely characterise this task in both the single-shot and asymptotic regimes, providing exact formulas for the optimal error probabilities. In particular, we prove that the asymptotic error exponent of discriminating any two quantum states $\rho$ and $\sigma$ is given by the Hilbert projective metric $D_{\max}(\rho\|\sigma) + D_{\max}(\sigma \| \rho)$ in asymmetric hypothesis testing, and by the Thompson metric $\max \{ D_{\max}(\rho\|\sigma), D_{\max}(\sigma \| \rho) \}$ in symmetric hypothesis testing. This endows these two quantities with fundamental operational interpretations in quantum state discrimination. Our findings extend to composite hypothesis testing, where we show that the asymmetric error exponent with respect to any convex set of density matrices is given by a regularisation of the Hilbert projective metric. We apply our results also to quantum channels, showing that no advantage is gained by employing adaptive or even more general discrimination schemes over parallel ones, in both the asymmetric and symmetric settings. Our state discrimination results make use of no properties specific to quantum mechanics and are also valid in general probabilistic theories.
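The quantities in the error exponents above are built from the max-relative entropy, $D_{\max}(\rho\|\sigma) = \log \lambda_{\max}(\sigma^{-1/2}\rho\,\sigma^{-1/2})$ for full-rank $\sigma$. A numerical sketch of $D_{\max}$ and the two induced metrics (in bits; the choice of logarithm base is a convention):

```python
import numpy as np

def d_max(rho, sigma):
    """Max-relative entropy D_max(rho||sigma) in bits, assuming sigma full rank."""
    w, V = np.linalg.eigh(sigma)
    s_inv_half = V @ np.diag(w ** -0.5) @ V.conj().T
    lam = np.linalg.eigvalsh(s_inv_half @ rho @ s_inv_half).max()
    return float(np.log2(lam))

def hilbert_projective(rho, sigma):
    """Hilbert projective metric: D_max(rho||sigma) + D_max(sigma||rho)."""
    return d_max(rho, sigma) + d_max(sigma, rho)

def thompson(rho, sigma):
    """Thompson metric: max{D_max(rho||sigma), D_max(sigma||rho)}."""
    return max(d_max(rho, sigma), d_max(sigma, rho))

p = np.diag([0.8, 0.2])
q = np.diag([0.5, 0.5])
```

For commuting states, $D_{\max}$ reduces to $\log \max_i (p_i/q_i)$ on the eigenvalue distributions.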
The quantum relative entropy is known to play a key role in determining the asymptotic convertibility of quantum states in general resource-theoretic settings, often constituting the unique monotone that is relevant in the asymptotic regime. We show that this is no longer the case when one allows stochastic protocols that may only succeed with some probability, in which case the quantum relative entropy is insufficient to characterize the rates of asymptotic state transformations, and a new entropic quantity based on a regularization of the Hilbert projective metric comes into play. Such a scenario is motivated by a setting where the cost associated with transformations of quantum states, typically taken to be the number of copies of a given state, is instead identified with the size of the quantum memory needed to realize the protocol. Our approach allows for constructing transformation protocols that achieve strictly higher rates than those imposed by the relative entropy. Focusing on the task of resource distillation, we give broadly applicable strong converse bounds on the asymptotic rates of probabilistic distillation protocols, and show them to be tight in relevant settings such as entanglement distillation with non-entangling operations. This generalizes and extends previously known limitations that only applied to deterministic protocols. Our methods are based on recent results for probabilistic one-shot transformations as well as a new asymptotic equipartition property for the projective relative entropy.
The task of learning a quantum circuit to prepare a given mixed state is a fundamental quantum subroutine. We present a variational quantum algorithm (VQA) to learn mixed states that is suitable for near-term hardware. Our algorithm represents a generalization of previous VQAs that aimed at learning preparation circuits for pure states. We consider two different ansätze for compiling the target state; the first is based on learning a purification of the state and the second on representing it as a convex combination of pure states. In both cases, the resources required to store and manipulate the compiled state grow with the rank of the approximation. Thus, by learning a lower rank approximation of the target state, our algorithm provides a means of compressing a state for more efficient processing. As a byproduct of our algorithm, one effectively learns the principal components of the target state, and hence our algorithm further provides a new method for principal component analysis. We investigate the efficacy of our algorithm through extensive numerical implementations, showing that typical random states and thermal states of many-body systems may be learnt this way. Additionally, we demonstrate on quantum hardware how our algorithm can be used to study hardware noise-induced states.
The mixedness of one share of a pure bipartite state determines whether the overall state is a separable, unentangled state. Here we consider quantum computational tests of mixedness, and we derive an exact expression of the acceptance probability of such tests as the number of copies of the state becomes larger. We prove that the analytical form of this expression is given by the cycle index polynomial of the symmetric group $S_k$, which is itself related to the Bell polynomials. After doing so, we derive a family of quantum separability tests, each of which is generated by a finite group; for all such algorithms, we show that the acceptance probability is determined by the cycle index polynomial of the group. Finally, we produce and analyze explicit circuit constructions for these tests, showing that the tests corresponding to the symmetric and cyclic groups can be executed with $O(k^2)$ and $O(k\log(k))$ controlled-SWAP gates, respectively, where $k$ is the number of copies of the state being tested.
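The cycle index polynomial of $S_k$ obeys the recursion $Z_{S_k} = \tfrac{1}{k}\sum_{j=1}^{k} a_j Z_{S_{k-j}}$ with $Z_{S_0}=1$. The sketch below evaluates it at the power sums $\mathrm{Tr}[\rho^j]$, which (by the standard permutation-trace identity $\mathrm{Tr}[W_\pi \rho^{\otimes k}] = \prod_{\text{cycles } c} \mathrm{Tr}[\rho^{|c|}]$) gives $\mathrm{Tr}[\Pi_{\mathrm{sym}}\rho^{\otimes k}]$, our reading of the symmetric-group test's acceptance probability:

```python
import numpy as np

def cycle_index_Sk(a):
    """Cycle index polynomial Z_{S_k} at a = (a_1, ..., a_k), via the
    recursion Z_{S_m} = (1/m) * sum_{j=1}^{m} a_j * Z_{S_{m-j}}."""
    k = len(a)
    Z = [1.0]
    for m in range(1, k + 1):
        Z.append(sum(a[j - 1] * Z[m - j] for j in range(1, m + 1)) / m)
    return Z[k]

def acceptance_probability(rho, k):
    """Acceptance probability of the S_k test on k copies of rho,
    taken here as Z_{S_k}(Tr[rho], Tr[rho^2], ..., Tr[rho^k])."""
    powers = [float(np.real(np.trace(np.linalg.matrix_power(rho, j))))
              for j in range(1, k + 1)]
    return cycle_index_Sk(powers)
```

For a pure reduced state the test accepts with probability one, and for $k=2$ the expression reduces to the familiar SWAP-test value $(1 + \mathrm{Tr}[\rho^2])/2$.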
A colloquial interpretation of entropy is that it is the knowledge gained upon learning the outcome of a random experiment. Conditional entropy is then interpreted as the knowledge gained upon learning the outcome of one random experiment after learning the outcome of another, possibly statistically dependent, random experiment. In the classical world, entropy and conditional entropy take only non-negative values, consistent with the intuition that one has regarding the aforementioned interpretations. However, for certain entangled states, one obtains negative values when evaluating commonly accepted and information-theoretically justified formulas for the quantum conditional entropy, leading to the confounding conclusion that one can know less than nothing in the quantum world. Here, we introduce a physically motivated framework for defining quantum conditional entropy, based on two simple postulates inspired by the second law of thermodynamics (non-decrease of entropy) and extensivity of entropy, and we argue that all plausible definitions of quantum conditional entropy should respect these two postulates. We then prove that all plausible quantum conditional entropies take on negative values for certain entangled states, so that it is inevitable that one can know less than nothing in the quantum world. All of our arguments are based on constructions of physical processes that respect the first postulate, the one inspired by the second law of thermodynamics.
The ideal realization of quantum teleportation relies on having access to a maximally entangled state; however, in practice, such an ideal state is typically not available and one can instead only realize an approximate teleportation. With this in mind, we present a method to quantify the performance of approximate teleportation when using an arbitrary resource state. More specifically, after framing the task of approximate teleportation as an optimization of a simulation error over one-way local operations and classical communication (LOCC) channels, we establish a semi-definite relaxation of this optimization task by instead optimizing over the larger set of two-PPT-extendible channels. The main analytical calculations in our paper consist of exploiting the unitary covariance symmetry of the identity channel to establish a significant reduction of the computational cost of this latter optimization. Next, by exploiting known connections between approximate teleportation and quantum error correction, we also apply these concepts to establish bounds on the performance of approximate quantum error correction over a given quantum channel. Finally, we evaluate our bounds for various examples of resource states and channels.
There is a folkloric belief that a depth-$\Theta(m)$ quantum circuit is needed to estimate the trace of the product of $m$ density matrices (i.e., a multivariate trace), a subroutine crucial to applications in condensed matter and quantum information science. We prove that this belief is overly conservative by constructing a constant quantum-depth circuit for the task, inspired by Shor's method of error correction. Furthermore, our circuit demands only local gates in two dimensions -- we show how to implement it in a highly parallelized way on an architecture similar to that of Google's Sycamore processor. With these features, our algorithm brings the central task of multivariate trace estimation closer to the capabilities of near-term quantum processors. We instantiate the latter application with a theorem on estimating nonlinear functions of quantum states with "well-behaved" polynomial approximations.
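The quantity the circuit estimates, $\mathrm{Tr}[\rho_1\rho_2\cdots\rho_m]$, can of course be computed directly by matrix multiplication when classical descriptions are available; a minimal reference implementation:

```python
import numpy as np

def multivariate_trace(rhos):
    """Tr[rho_1 rho_2 ... rho_m], computed directly by matrix multiplication.
    (Classical reference for the quantity the constant-depth circuit estimates;
    note the result can be complex for m >= 3.)"""
    prod = rhos[0]
    for r in rhos[1:]:
        prod = prod @ r
    return complex(np.trace(prod))
```

For $m$ copies of a pure state the multivariate trace is $1$, and for $m$ copies of a state $\rho$ it equals $\mathrm{Tr}[\rho^m]$.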
We investigate the performance of parallel and adaptive quantum channel discrimination strategies for a finite number of channel uses. It has recently been shown that, in the asymmetric setting with asymptotically vanishing type I error probability, adaptive strategies are asymptotically not more powerful than parallel ones. We extend this result to the non-asymptotic regime with finitely many channel uses, by explicitly constructing a parallel strategy for any given adaptive strategy, and bounding the difference in their performances, measured in terms of the decay rate of the type II error probability per channel use. We further show that all parallel strategies can be optimized over in time polynomial in the number of channel uses, and hence our result can also be used to obtain a poly-time-computable asymptotically tight upper bound on the performance of general adaptive strategies.
The capacities of noisy quantum channels capture the ultimate rates of information transmission across quantum communication lines, and the quantum capacity plays a key role in determining the overhead of fault-tolerant quantum computation platforms. In the case of bosonic systems, central to many applications, no closed formulas for these capacities were known for bosonic dephasing channels, a key class of non-Gaussian channels modelling, e.g., noise affecting superconducting circuits or fiber-optic communication channels. Here we provide the first exact calculation of the quantum, private, two-way assisted quantum, and secret-key agreement capacities of all bosonic dephasing channels. We prove that they are equal to the relative entropy of the distribution underlying the channel to the uniform distribution. Our result solves a problem that has been open for over a decade, having been posed originally by [Jiang & Chen, Quantum and Nonlinear Optics 244, 2010].
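To make the capacity formula concrete, the sketch below evaluates the relative entropy $D(p\,\|\,\mathrm{uniform}) = \int_0^{2\pi} p(\phi)\ln\bigl(2\pi\,p(\phi)\bigr)\,d\phi$ (in nats) on a discretized phase-noise density; the grid size and the two example densities are illustrative assumptions, not the paper's examples.

```python
import numpy as np

N = 1000
dphi = 2 * np.pi / N  # bin width of the phase grid on [0, 2*pi)

def capacity_dephasing(p):
    """Discretized relative entropy D(p || uniform) on [0, 2*pi), in nats,
    where p[i] is the density in bin i.  Per the quoted result, this equals
    the quantum capacity of the corresponding bosonic dephasing channel."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log(2 * np.pi * p[mask]) * dphi))

p_uniform = np.full(N, 1 / (2 * np.pi))  # complete dephasing: zero capacity
p_peaked = np.zeros(N)
p_peaked[0] = 1 / dphi                   # (near-)deterministic phase
```

Complete dephasing (uniform phase noise) yields zero capacity, while a sharply peaked distribution yields a capacity that grows as the distribution narrows.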
Symmetries in a Hamiltonian play an important role in quantum physics because they correspond directly with conserved quantities of the related system. In this paper, we propose quantum algorithms capable of testing whether a Hamiltonian exhibits symmetry with respect to a group. We demonstrate that familiar expressions of Hamiltonian symmetry in quantum mechanics correspond directly with the acceptance probabilities of our algorithms. We execute one of our symmetry-testing algorithms on existing quantum computers for simple examples of both symmetric and asymmetric cases.
In this note, I define error exponents and strong converse exponents for the tasks of distinguishability distillation and dilution. These are counterparts to the one-shot distillable distinguishability and the one-shot distinguishability cost, as previously defined in the resource theory of asymmetric distinguishability. I show that they can be evaluated by semi-definite programming, establish a number of their properties, bound them using Rényi relative entropies, and relate them to each other.
The Rains relative entropy of a bipartite quantum state is the tightest known upper bound on its distillable entanglement -- which has a crisp physical interpretation of entanglement as a resource -- and it is efficiently computable by convex programming. It has not been known to be a selective entanglement monotone in its own right. In this work, we strengthen the interpretation of the Rains relative entropy by showing that it is monotone under the action of selective operations that completely preserve the positivity of the partial transpose, a class of operations that reasonably quantifies entanglement. That is, we prove that the Rains relative entropy of an ensemble generated by such an operation does not exceed the Rains relative entropy of the initial state in expectation, giving rise to the smallest, most conservative known computable selective entanglement monotone. Additionally, we show that this is true not only for the original Rains relative entropy, but also for Rains relative entropies derived from various Rényi relative entropies. As an application of these findings, we prove, in both the non-asymptotic and asymptotic settings, that the probabilistic approximate distillable entanglement of a state is bounded from above by various Rains relative entropies.
A semidefinite program (SDP) is a particular kind of convex optimization problem with applications in operations research, combinatorial optimization, quantum information science, and beyond. In this work, we propose variational quantum algorithms for approximately solving SDPs. For one class of SDPs, we provide a rigorous analysis of their convergence to approximate locally optimal solutions, under the assumption that they are weakly constrained (i.e., $N\gg M$, where $N$ is the dimension of the input matrices and $M$ is the number of constraints). We also provide algorithms for a more general class of SDPs that requires fewer assumptions. Finally, we numerically simulate our quantum algorithms for applications such as MaxCut, and the results of these simulations provide evidence that convergence still occurs in noisy settings.
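For readers unfamiliar with the SDP form, here is a toy instance whose optimum is known in closed form: minimize $\mathrm{Tr}[CX]$ over $X \succeq 0$ with $\mathrm{Tr}[X] = 1$, whose value is $\lambda_{\min}(C)$, attained at the projector onto the minimal eigenvector. The sketch reads the answer off an eigendecomposition rather than running an SDP solver or the paper's variational algorithm:

```python
import numpy as np

def min_eig_sdp(C):
    """Solve the toy SDP: min Tr[C X] s.t. X >= 0, Tr[X] = 1.
    Optimum = lambda_min(C), optimizer = projector onto minimal eigenvector."""
    w, V = np.linalg.eigh(C)       # eigenvalues in ascending order
    v = V[:, 0]
    X = np.outer(v, v.conj())
    return float(w[0]), X

C = np.array([[2.0, 1.0], [1.0, 2.0]])
val, X = min_eig_sdp(C)
```

The feasibility conditions and the optimal value can be checked directly on the returned optimizer.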
Quantum error correction (QEC) is a procedure by which the quantum state of a system is protected against a known type of noise, by preemptively adding redundancy to that state. Such a procedure is commonly used in quantum computing when thermal noise is present. Interestingly, thermal noise has also been known to play a central role in quantum thermodynamics (QTD). This fact hints at the applicability of certain QTD statements in the QEC of thermal noise, which has been discussed previously in the context of Maxwell's demon. In this article, we view QEC as a quantum heat engine with a feedback controller (i.e., a demon). We derive an upper bound on the measurement heat dissipated during the error-identification stage in terms of the Groenewold information gain, thereby providing the latter with a physical meaning also when it is negative. Further, we derive the second law of thermodynamics in the context of this QEC engine, operating with general quantum measurements. Finally, we show that, under a set of physically motivated assumptions, this leads to a fundamental triple trade-off relation, which implies a trade-off between the maximum achievable fidelity of QEC and the super-Carnot efficiency that heat engines with feedback controllers have been known to possess. A similar trade-off relation occurs for the thermodynamic efficiency of the QEC engine and the efficacy of the quantum measurement used for error identification.
In this work, we introduce multipartite intrinsic non-locality as a method for quantifying resources in the multipartite scenario of device-independent (DI) conference key agreement. We prove that multipartite intrinsic non-locality is additive, convex, and monotone under a class of free operations called local operations and common randomness. As one of our technical contributions, we establish a chain rule for two variants of multipartite mutual information, which we then use to prove that multipartite intrinsic non-locality is additive. This chain rule may be of independent interest in other contexts. All of these properties of multipartite intrinsic non-locality are helpful in establishing the main result of our paper: multipartite intrinsic non-locality is an upper bound on secret key rate in the general multipartite scenario of DI conference key agreement. We discuss various examples of DI conference key protocols and compare our upper bounds for these protocols with known lower bounds. Finally, we calculate upper bounds on recent experimental realizations of DI quantum key distribution.
We introduce an axiomatic approach for characterizing quantum conditional entropy. Our approach relies on two physically motivated axioms: monotonicity under conditional majorization and additivity. We show that these two axioms provide sufficient structure to enable us to derive several key properties applicable to all quantum conditional entropies studied in the literature. Specifically, we prove that any quantum conditional entropy must be negative on certain entangled states and must equal $-\log(d)$ on $d \times d$ maximally entangled states. We also prove the non-negativity of conditional entropy on separable states, and we provide a generic definition for the dual of a quantum conditional entropy. Finally, we develop an operational approach for characterizing quantum conditional entropy via games of chance, and we show that, for the classical case, this complementary approach yields the same ordering as the axiomatic approach.
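The benchmark value $-\log(d)$ can be verified for the standard conditional von Neumann entropy $H(A|B) = H(AB) - H(B)$ (one of the conditional entropies covered by the axioms); a small sketch in bits:

```python
import numpy as np

def entropy(rho):
    """Von Neumann entropy in bits."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log2(w)))

def conditional_entropy(rho_ab, dA, dB):
    """Conditional von Neumann entropy H(A|B) = H(AB) - H(B)."""
    rho_b = np.trace(rho_ab.reshape(dA, dB, dA, dB), axis1=0, axis2=2)
    return entropy(rho_ab) - entropy(rho_b)

d = 3
phi = np.eye(d).reshape(d * d) / np.sqrt(d)  # maximally entangled |Phi> = sum_i |ii>/sqrt(d)
rho_me = np.outer(phi, phi.conj())
```

On the $d \times d$ maximally entangled state this evaluates to $-\log_2(d)$, while on separable states (e.g., the maximally mixed product state) it is non-negative.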
Although it is well known that the amount of resources that can be asymptotically distilled from a quantum state or channel does not exceed the resource cost needed to produce it, the corresponding relation in the non-asymptotic regime hitherto has not been well understood. Here, we establish a quantitative relation between the one-shot distillable resource yield and dilution cost in terms of transformation errors involved in these processes. Notably, our bound is applicable to quantum state and channel manipulation with respect to any type of quantum resource and any class of free transformations thereof, encompassing broad types of settings, including entanglement, quantum thermodynamics, and quantum communication. We also show that our techniques provide strong converse bounds relating the distillable resource and resource dilution cost in the asymptotic regime. Moreover, we introduce a class of channels that generalize twirling maps encountered in many resource theories, and by directly connecting it with resource quantification, we compute analytically several smoothed resource measures and improve our one-shot yield--cost bound in relevant theories. We use these operational insights to exactly evaluate important measures for various resource states in the resource theory of magic states.
It is known that a party with access to a Deutschian closed timelike curve (D-CTC) can perfectly distinguish multiple non-orthogonal quantum states. In this paper, we propose a practical method for discriminating multiple non-orthogonal states, by using a previously known quantum circuit designed to simulate D-CTCs. This method relies on multiple copies of an input state, multiple iterations of the circuit, and a fixed set of unitary operations. We first characterize the performance of this circuit and study its asymptotic behavior. We also show how it can be equivalently recast as a local, adaptive circuit that may be implemented simply in an experiment. Finally, we prove that our state discrimination strategy achieves the multiple Chernoff bound when discriminating an arbitrary set of pure qubit states.
Quantum information theory sets the ultimate limits for any information-processing task. In rangefinding and LIDAR, the presence or absence of a target can be tested by detecting different states at the receiver. In this Letter, we use quantum hypothesis testing for an unknown coherent-state return signal in order to derive the limits of symmetric and asymmetric error probabilities of single-shot ranging experiments. We engineer a single measurement independent of the range, which in some cases saturates the quantum bound and for others is presumably the best measurement to approach it. In addition, we verify the theoretical predictions by performing numerical simulations. This work bridges the gap between quantum information and the engineering of quantum sensors; it will contribute to devising better ranging sensors, as well as setting the path toward practical limits for other quantum tasks.
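A useful baseline for this setting is the known-amplitude case: for two pure states with priors $p_0, p_1$ the Helstrom bound gives minimum error probability $\tfrac{1}{2}\bigl(1 - \sqrt{1 - 4 p_0 p_1 |\langle\psi_0|\psi_1\rangle|^2}\bigr)$, and for target-absent (vacuum) versus target-present (coherent state $|\alpha\rangle$) the squared overlap is $e^{-|\alpha|^2}$. (This is the textbook bound, not the Letter's unknown-amplitude analysis.)

```python
import numpy as np

def helstrom_error(overlap_sq, p0=0.5):
    """Minimum error probability for discriminating two pure states with
    prior p0 and squared overlap |<psi0|psi1>|^2 = overlap_sq."""
    return 0.5 * (1 - np.sqrt(1 - 4 * p0 * (1 - p0) * overlap_sq))

def vacuum_vs_coherent_error(alpha_sq, p0=0.5):
    """Vacuum (target absent) vs coherent state |alpha> (target present),
    with |<0|alpha>|^2 = exp(-|alpha|^2) and alpha_sq = |alpha|^2."""
    return helstrom_error(np.exp(-alpha_sq), p0)
```

Orthogonal states are perfectly distinguishable, identical states are not, and the error probability decays rapidly with the mean photon number $|\alpha|^2$ of the return signal.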
The performance of a quantum information processing protocol is ultimately judged by distinguishability measures that quantify how distinguishable the actual result of the protocol is from the ideal case. The most prominent distinguishability measures are those based on the fidelity and trace distance, due to their physical interpretations. In this paper, we propose and review several algorithms for estimating distinguishability measures based on trace distance and fidelity. The algorithms can be used for distinguishing quantum states, channels, and strategies (the last also known in the literature as "quantum combs"). The fidelity-based algorithms offer novel physical interpretations of these distinguishability measures in terms of the maximum probability with which a single prover (or competing provers) can convince a verifier to accept the outcome of an associated computation. We simulate many of these algorithms by using a variational approach with parameterized quantum circuits. We find that the simulations converge well in both the noiseless and noisy scenarios, for all examples considered. Furthermore, the noisy simulations exhibit resilience to parameter noise. Finally, we establish a strong relationship between various quantum computational complexity classes and distance estimation problems.
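As a point of reference for what these algorithms estimate, both measures have closed-form expressions when the states are known classically: the trace distance is half the trace norm of the difference, and the fidelity is given by Uhlmann's formula $F(\rho,\sigma)=(\operatorname{Tr}\sqrt{\sqrt{\rho}\,\sigma\sqrt{\rho}})^2$ (the squared convention is assumed here). The following is a minimal numpy sketch of this classical baseline computation, not of the quantum estimation algorithms themselves; all function names are illustrative.

```python
import numpy as np

def psd_sqrt(m):
    """Square root of a positive semidefinite Hermitian matrix via eigendecomposition."""
    w, v = np.linalg.eigh(m)
    w = np.clip(w, 0.0, None)  # guard against tiny negative eigenvalues
    return (v * np.sqrt(w)) @ v.conj().T

def trace_distance(rho, sigma):
    """T(rho, sigma) = (1/2) ||rho - sigma||_1, via eigenvalues of the Hermitian difference."""
    return 0.5 * np.abs(np.linalg.eigvalsh(rho - sigma)).sum()

def fidelity(rho, sigma):
    """Uhlmann fidelity F = (Tr sqrt(sqrt(rho) sigma sqrt(rho)))^2 (squared convention)."""
    s = psd_sqrt(rho)
    return np.real(np.trace(psd_sqrt(s @ sigma @ s))) ** 2

# Example: the pure state |0><0| versus the maximally mixed qubit state
ket0 = np.array([[1.0], [0.0]])
rho = ket0 @ ket0.T       # |0><0|
sigma = np.eye(2) / 2     # maximally mixed state
print(trace_distance(rho, sigma))  # 0.5
print(fidelity(rho, sigma))        # 0.5
```

Both quantities lie in $[0,1]$, coinciding at the extremes: identical states give trace distance 0 and fidelity 1, while orthogonal states give trace distance 1 and fidelity 0.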
Resource theories in quantum information science are helpful for the study and quantification of the performance of information-processing tasks that involve quantum systems. These resource theories also find applications in other areas of study; e.g., the resource theories of entanglement and coherence have found use and implications in the study of quantum thermodynamics and memory effects in quantum dynamics. In this paper, we introduce the resource theory of unextendibility, which is associated with the inability to extend the quantum entanglement in a given quantum state to multiple parties. The free states in this resource theory are the k-extendible states, and the free channels are the k-extendible channels, which preserve the class of k-extendible states. We make use of this resource theory to derive non-asymptotic upper bounds on the rate at which quantum communication or entanglement preservation is possible by utilizing an arbitrary quantum channel a finite number of times, along with the assistance of k-extendible channels at no cost. We then show that the bounds obtained are significantly tighter than previously known bounds for quantum communication over both the depolarizing and erasure channels.
Symmetry is a unifying concept in physics. In quantum information and beyond, it is known that quantum states possessing symmetry are not useful for certain information-processing tasks. For example, states that commute with a Hamiltonian realizing a time evolution are not useful for timekeeping during that evolution, and bipartite states that are highly extendible are not strongly entangled and thus not useful for basic tasks like teleportation. Motivated by this perspective, this paper details several quantum algorithms that test the symmetry of quantum states and channels. For the case of testing Bose symmetry of a state, we show that there is a simple and efficient quantum algorithm, while the tests for other kinds of symmetry rely on the aid of a quantum prover. We prove that the acceptance probability of each algorithm is equal to the maximum symmetric fidelity of the state being tested, thus giving a firm operational meaning to these latter resource quantifiers. Special cases of the algorithms test for incoherence or separability of quantum states. We evaluate the performance of these algorithms on choice examples by using the variational approach to quantum algorithms, replacing the quantum prover with a parameterized circuit. We demonstrate this approach for numerous examples using the IBM quantum noiseless and noisy simulators, and we observe that the algorithms perform well in the noiseless case and exhibit noise resilience in the noisy case. We also show that the maximum symmetric fidelities can be calculated by semi-definite programs, which is useful for benchmarking the performance of these algorithms for sufficiently small examples. Finally, we establish various generalizations of the resource theory of asymmetry, with the upshot being that the acceptance probabilities of the algorithms are resource monotones and thus well motivated from the resource-theoretic perspective.
The distillable entanglement of a bipartite quantum state does not exceed its entanglement cost. This well-known inequality can be understood as a second law of entanglement dynamics in the asymptotic regime of entanglement manipulation, excluding the possibility of perpetual entanglement extraction machines that generate boundless entanglement from a finite reserve. In this paper, I establish a refined second law of entanglement dynamics that holds for the non-asymptotic regime of entanglement manipulation.
The quantum relative entropy is a measure of the distinguishability of two quantum states, and it is a unifying concept in quantum information theory: many information measures, such as entropy, conditional entropy, mutual information, and entanglement measures, can be realized from it. As such, there has been broad interest in generalizing the notion to further understand its most basic properties, one of which is the data-processing inequality. The quantum $f$-divergence of Petz is one generalization of the quantum relative entropy, and it also leads to other relative entropies, such as the Petz--Rényi relative entropies. In this contribution, I introduce the optimized quantum $f$-divergence as a related generalization of the quantum relative entropy. I prove that it satisfies the data-processing inequality, and the method of proof relies upon the operator Jensen inequality, similar to Petz's original approach. Interestingly, the sandwiched Rényi relative entropies are particular examples of the optimized $f$-divergence. Thus, one benefit of this approach is that there is now a single, unified method for establishing the data-processing inequality for both the Petz--Rényi and sandwiched Rényi relative entropies, for the full range of parameters for which it is known to hold.
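For concreteness, the sandwiched Rényi relative entropy referred to above has the following standard form (recalled here from the literature, for positive definite $\sigma$ and $\alpha\in(0,1)\cup(1,\infty)$), recovering the quantum relative entropy in the limit $\alpha\to 1$:

```latex
\widetilde{D}_{\alpha}(\rho\Vert\sigma)
  := \frac{1}{\alpha-1}\,
  \log \operatorname{Tr}\!\left[
    \left( \sigma^{\frac{1-\alpha}{2\alpha}}\, \rho\, \sigma^{\frac{1-\alpha}{2\alpha}} \right)^{\alpha}
  \right],
\qquad
\lim_{\alpha\to 1} \widetilde{D}_{\alpha}(\rho\Vert\sigma)
  = D(\rho\Vert\sigma)
  = \operatorname{Tr}\!\left[\rho\left(\log\rho - \log\sigma\right)\right].
```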
We develop a resource theory of symmetric distinguishability, the fundamental objects of which are elementary quantum information sources, i.e., sources that emit one of two possible quantum states with given prior probabilities. Such a source can be represented by a classical-quantum state of a composite system $XA$, corresponding to an ensemble of two quantum states, with $X$ being classical and $A$ being quantum. We study the resource theory for two different classes of free operations: $(i)$ ${\rm{CPTP}}_A$, which consists of quantum channels acting only on $A$, and $(ii)$ conditional doubly stochastic (CDS) maps acting on $XA$. We introduce the notion of symmetric distinguishability of an elementary source and prove that it is a monotone under both these classes of free operations. We study the tasks of distillation and dilution of symmetric distinguishability, both in the one-shot and asymptotic regimes. We prove that in the asymptotic regime, the optimal rate of converting one elementary source to another is equal to the ratio of their quantum Chernoff divergences, under both these classes of free operations. This imparts a new operational interpretation to the quantum Chernoff divergence. We also obtain interesting operational interpretations of the Thompson metric, in the context of the dilution of symmetric distinguishability.
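The quantum Chernoff divergence governing these asymptotic conversion rates is $\xi(\rho,\sigma) = -\log \min_{0\le s\le 1}\operatorname{Tr}[\rho^{s}\sigma^{1-s}]$. A rough numerical sketch via grid search over $s$ follows; the function names and the grid-search approach are illustrative choices, not taken from the paper.

```python
import numpy as np

def psd_power(m, p):
    """m**p for a PSD Hermitian matrix, via eigenvalue functional calculus."""
    w, v = np.linalg.eigh(m)
    w = np.clip(w, 0.0, None)
    return (v * w**p) @ v.conj().T

def chernoff_divergence(rho, sigma, grid=2001):
    """Quantum Chernoff divergence xi = -log min_{0<=s<=1} Tr[rho^s sigma^(1-s)],
    approximated by minimizing over an evenly spaced grid of s values."""
    vals = [np.real(np.trace(psd_power(rho, s) @ psd_power(sigma, 1.0 - s)))
            for s in np.linspace(0.0, 1.0, grid)]
    return -np.log(min(vals))

# Commuting (classical) example: two biased coins embedded as diagonal states
rho = np.diag([0.5, 0.5])
sigma = np.diag([0.9, 0.1])
print(chernoff_divergence(rho, sigma))  # strictly positive for distinct states
```

Note that $\xi$ vanishes if and only if the states coincide, and it is symmetric in its two arguments, since the minimization over $s$ is invariant under $s \mapsto 1-s$.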
Quantum teleportation is a primitive in several important applications, including quantum communication, quantum computation, error correction, and quantum networks. In this work, we propose an optimal test for the performance of continuous-variable (CV) quantum teleportation in terms of the energy-constrained channel fidelity between ideal CV teleportation and its experimental implementation. Work prior to ours considered suboptimal tests of the performance of CV teleportation, focusing instead on its performance for particular states, such as ensembles of coherent states, squeezed states, cat states, etc. Here we prove that the optimal state for testing CV teleportation is an entangled superposition of twin Fock states. We establish this result by reducing the problem of estimating the energy-constrained channel fidelity between ideal CV teleportation and its experimental approximation to a quadratic program and solving it. As an additional result, we obtain an analytical solution to the energy-constrained diamond distance between a photodetector and its experimental approximation. These results are relevant for experiments that make use of CV teleportation and photodetectors.
This is a preliminary version of a book in progress on the theory of quantum communication. We adopt an information-theoretic perspective throughout and give a comprehensive account of fundamental results in quantum communication theory from the past decade (and earlier), with an emphasis on the modern one-shot-to-asymptotic approach that underlies much of today's state-of-the-art research in this field. In Part I, we cover mathematical preliminaries and provide a detailed study of quantum mechanics from an information-theoretic perspective. We also provide an extensive and thorough review of quantum entropies, and we devote an entire chapter to the study of entanglement measures. Equipped with these essential tools, in Part II we study classical communication (with and without entanglement assistance), entanglement distillation, quantum communication, secret key distillation, and private communication. In Part III, we cover the latest developments in feedback-assisted communication tasks, such as quantum and classical feedback-assisted communication, LOCC-assisted quantum communication, and secret key agreement.
Bidirectional teleportation is a fundamental protocol for exchanging quantum information between two parties by means of a shared resource state and local operations and classical communication (LOCC). Here we develop two seemingly different ways of quantifying the simulation error of unideal bidirectional teleportation by means of the normalized diamond distance and the channel infidelity, and we prove that they are equivalent. By relaxing the set of operations allowed from LOCC to those that completely preserve the positivity of the partial transpose, we obtain semi-definite programming lower bounds on the simulation error of unideal bidirectional teleportation. We evaluate these bounds for several key examples: when there is no resource state at all and for isotropic and Werner states, in each case finding an analytical solution. The first aforementioned example establishes a benchmark for classical versus quantum bidirectional teleportation. Another example consists of a resource state resulting from the action of a generalized amplitude damping channel on two Bell states, for which we find an analytical expression for the simulation error. We then evaluate the performance of some schemes for bidirectional teleportation due to Kiktenko et al. and find that they are suboptimal and do not go beyond the aforementioned classical limit. We offer a scheme alternative to theirs that is provably optimal. Finally, we generalize the whole development to the setting of bidirectional controlled teleportation, in which there is an additional assisting party who helps with the exchange of quantum information, and we establish semi-definite programming lower bounds on the simulation error for this task. More generally, we provide semi-definite programming lower bounds on the performance of bipartite and multipartite channel simulation using a shared resource state and LOCC.
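The semi-definite programming bounds mentioned here rest on the fact that the diamond norm is itself SDP-computable. One standard formulation, due to Watrous and recalled from the literature (with $J(\Phi)$ the Choi operator of a Hermiticity-preserving map $\Phi$ and $\operatorname{Tr}_B$ the partial trace over the output system), is

```latex
\left\Vert \Phi \right\Vert_{\diamond}
 = \min_{Y_0,\,Y_1 \,\geq\, 0}\;
   \tfrac{1}{2}\left\Vert \operatorname{Tr}_B Y_0 \right\Vert_{\infty}
 + \tfrac{1}{2}\left\Vert \operatorname{Tr}_B Y_1 \right\Vert_{\infty}
\quad\text{subject to}\quad
\begin{pmatrix}
  Y_0 & -J(\Phi) \\
  -J(\Phi)^{\dagger} & Y_1
\end{pmatrix} \geq 0.
```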
We introduce various measures of forward classical communication for bipartite quantum channels. Since a point-to-point channel is a special case of a bipartite channel, the measures reduce to measures of classical communication for point-to-point channels. As it turns out, these reduced measures have been reported in prior work of Wang et al. on bounding the classical capacity of a quantum channel. As applications, we show that the measures are upper bounds on the forward classical capacity of a bipartite channel. The reduced measures are upper bounds on the classical capacity of a point-to-point quantum channel assisted by a classical feedback channel. Some of the various measures can be computed by semi-definite programming.
One of the fundamental tasks in quantum metrology is to estimate multiple parameters embedded in a noisy process, i.e., a quantum channel. In this paper, we study fundamental limits to quantum channel estimation via the concept of amortization and the right logarithmic derivative (RLD) Fisher information value. Our key technical result is the proof of a chain-rule inequality for the RLD Fisher information value, which implies that amortization, i.e., access to a catalyst state family, does not increase the RLD Fisher information value of quantum channels. This technical result leads to a fundamental and efficiently computable limitation for multiparameter channel estimation in the sequential setting, in terms of the RLD Fisher information value. As a consequence, we conclude that if the RLD Fisher information value is finite, then Heisenberg scaling is unattainable in the multiparameter setting.
The optimized quantum $f$-divergences form a family of distinguishability measures that includes the quantum relative entropy and the sandwiched Rényi relative quasi-entropy as special cases. In this paper, we establish physically meaningful refinements of the data-processing inequality for the optimized $f$-divergence. In particular, the refinements state that the absolute difference between the optimized $f$-divergence and its channel-processed version is an upper bound on how well one can recover a quantum state acted upon by a quantum channel, whenever the recovery channel is taken to be a rotated Petz recovery channel. Not only do these results lead to physically meaningful refinements of the data-processing inequality for the sandwiched Rényi relative entropy, but they also have implications for perfect reversibility (i.e., quantum sufficiency) of the optimized $f$-divergences. Along the way, we improve upon previous physically meaningful refinements of the data-processing inequality for the standard $f$-divergence, as established in recent work of Carlen and Vershynina [arXiv:1710.02409, arXiv:1710.08080]. Finally, we extend the definition of the optimized $f$-divergence, its data-processing inequality, and all of our recoverability results to the general von Neumann algebraic setting, so that all of our results can be employed in physical settings beyond those confined to the most common finite-dimensional setting of interest in quantum information theory.
Quantum entanglement is a key physical resource in quantum information processing that allows for performing basic quantum tasks such as teleportation and quantum key distribution, which are impossible in the classical world. Ever since the rise of quantum information theory, it has been an open problem to quantify entanglement in an information-theoretically meaningful way. In particular, every previously defined entanglement measure bearing a precise information-theoretic meaning is not known to be efficiently computable, or if it is efficiently computable, then it is not known to have a precise information-theoretic meaning. In this Letter, we meet this challenge by introducing an entanglement measure that has a precise information-theoretic meaning as the exact cost required to prepare an entangled state when two distant parties are allowed to perform quantum operations that completely preserve the positivity of the partial transpose. Additionally, this entanglement measure is efficiently computable by means of a semidefinite program, and it bears a number of useful properties such as additivity and faithfulness. Our results bring key insights into the fundamental entanglement structure of arbitrary quantum states, and they can be used directly to assess and quantify the entanglement produced in quantum-physical experiments.
The Petz recovery channel plays an important role in quantum information science as an operation that approximately reverses the effect of a quantum channel. The pretty good measurement is a special case of the Petz recovery channel, and it allows for near-optimal state discrimination. A hurdle to the experimental realization of these vaunted theoretical tools is the lack of a systematic and efficient method to implement them. This paper sets out to rectify this lack: using the recently developed tools of quantum singular value transformation and oblivious amplitude amplification, we provide a quantum algorithm to implement the Petz recovery channel when given the ability to perform the channel that one wishes to reverse. Moreover, we prove that, in some sense, our quantum algorithm's usage of the channel implementation cannot be improved by more than a quadratic factor. Our quantum algorithm also provides a procedure to perform pretty good measurements when given multiple copies of the states that one is trying to distinguish.
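For reference, the Petz recovery channel being implemented has the closed form $\mathcal{P}_{\sigma,\mathcal{N}}(\omega) = \sigma^{1/2}\,\mathcal{N}^{\dagger}\big(\mathcal{N}(\sigma)^{-1/2}\,\omega\,\mathcal{N}(\sigma)^{-1/2}\big)\,\sigma^{1/2}$. Below is a small numpy sketch of this map in matrix form, purely as a classical sanity check and not the quantum-circuit construction of the paper; the amplitude-damping example and all names are illustrative choices.

```python
import numpy as np

def apply_channel(kraus, rho):
    """Apply a channel N(rho) = sum_i K_i rho K_i^dagger."""
    return sum(K @ rho @ K.conj().T for K in kraus)

def apply_adjoint(kraus, X):
    """Apply the adjoint map N^dagger(X) = sum_i K_i^dagger X K_i."""
    return sum(K.conj().T @ X @ K for K in kraus)

def psd_power(m, p):
    """m**p for a PSD Hermitian matrix, acting as zero on the kernel."""
    w, v = np.linalg.eigh(m)
    wp = np.zeros_like(w)
    mask = w > 1e-12
    wp[mask] = w[mask] ** p
    return (v * wp) @ v.conj().T

def petz_recovery(kraus, sigma, omega):
    """P_{sigma,N}(omega) = sigma^{1/2} N^dag(N(sigma)^{-1/2} omega N(sigma)^{-1/2}) sigma^{1/2}."""
    ns_inv_half = psd_power(apply_channel(kraus, sigma), -0.5)
    s_half = psd_power(sigma, 0.5)
    return s_half @ apply_adjoint(kraus, ns_inv_half @ omega @ ns_inv_half) @ s_half

# Sanity check: the Petz map perfectly recovers sigma itself from N(sigma)
p = 0.3  # amplitude-damping parameter (illustrative)
kraus = [np.array([[1.0, 0.0], [0.0, np.sqrt(1 - p)]]),
         np.array([[0.0, np.sqrt(p)], [0.0, 0.0]])]
sigma = np.array([[0.6, 0.2], [0.2, 0.4]])
print(np.allclose(petz_recovery(kraus, sigma, apply_channel(kraus, sigma)), sigma))  # True
```

The final check reflects a defining property of the Petz map: it always sends $\mathcal{N}(\sigma)$ back to $\sigma$ exactly, and it approximately reverses the channel on states close to $\sigma$.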
Quantum channel estimation and discrimination are fundamentally related information-processing tasks of interest in quantum information science. In this paper, we analyze these tasks by employing the right logarithmic derivative Fisher information and the geometric Rényi relative entropy, respectively, and we also identify connections between these distinguishability measures. A key result of our paper is that a chain-rule property holds for the right logarithmic derivative Fisher information and the geometric Rényi relative entropy for the interval $\alpha\in(0,1)$ of the Rényi parameter $\alpha$. In channel estimation, these results imply a condition for the unattainability of Heisenberg scaling, while in channel discrimination, they lead to improved bounds on error rates in the Chernoff and Hoeffding error exponent settings. More generally, we introduce the amortized quantum Fisher information as a conceptual framework for analyzing general sequential protocols that estimate a parameter encoded in a quantum channel, and we use this framework, beyond the aforementioned application, to show that Heisenberg scaling is not possible when a parameter is encoded in a classical-quantum channel. We then identify a number of other conceptual and technical connections between the tasks of estimation and discrimination and the distinguishability measures involved in analyzing each. As part of this work, we present a detailed overview of the geometric Rényi relative entropy of quantum states and channels, as well as its properties, which may be of independent interest.
Recently, the resource theory of asymmetric distinguishability for quantum strategies was introduced by [Wang et al., Phys. Rev. Research 1, 033169 (2019)]. The fundamental objects in the resource theory are pairs of quantum strategies, which are generalizations of quantum channels that provide a framework to describe an arbitrary quantum interaction. In the present paper, we provide semi-definite program characterizations of the one-shot operational quantities in this resource theory. We then apply these semi-definite programs to study the advantage conferred by adaptive strategies in discrimination and distinguishability distillation of generalized amplitude damping channels. We find that there are significant gaps between what can be accomplished with an adaptive strategy versus a non-adaptive strategy.
What is the minimum number of guesses needed on average to correctly guess a realization of a random variable? The answer to this question led to the introduction of the notion of a quantity called guesswork by Massey in 1994, which can be viewed as an alternate security criterion to entropy. In this paper, we consider the guesswork in the presence of quantum side information, and show that a general sequential guessing strategy is equivalent to performing a single measurement and choosing a guessing strategy from the outcome. We use this result to deduce entropic one-shot and asymptotic bounds on the guesswork in the presence of quantum side information, and to formulate a semi-definite program (SDP) to calculate the quantity. We evaluate the guesswork for a simple example involving the BB84 states, both numerically and analytically, and prove a continuity result that certifies the security of slightly imperfect key states when the guesswork is used as the security criterion.
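In the absence of side information, Massey's guesswork reduces to a simple rule: guess outcomes in decreasing order of probability, giving $G(X) = \sum_{i} i \, p_{(i)}$ with $p_{(1)} \ge p_{(2)} \ge \cdots$. The following quick sketch covers only this classical baseline; handling quantum side information requires the semi-definite program developed in the paper.

```python
from fractions import Fraction

def guesswork(probs):
    """Expected number of guesses when guessing outcomes in decreasing
    probability order: G = sum_i i * p_(i), optimal by an exchange argument."""
    ordered = sorted(probs, reverse=True)
    return sum((i + 1) * p for i, p in enumerate(ordered))

# Uniform distribution on 4 outcomes: G = (1 + 2 + 3 + 4)/4 = 5/2
print(guesswork([Fraction(1, 4)] * 4))  # 5/2

# A biased distribution: guessing the likely outcome first pays off
print(guesswork([0.7, 0.1, 0.1, 0.1]))  # approximately 1.6
```

The guesswork ranges from 1 (a deterministic variable) up to $(n+1)/2$ for the uniform distribution on $n$ outcomes, which is why it serves as an alternative security criterion to entropy: it directly counts an adversary's expected effort.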
This paper introduces coherent quantum channel discrimination as a coherent version of conventional quantum channel discrimination. Coherent channel discrimination is phrased here as a quantum interactive proof system between a verifier and a prover, wherein the goal of the prover is to distinguish two channels called in superposition in order to distill a Bell state at the end. The key measure considered here is the success probability of distilling a Bell state, and I prove that this success probability does not increase under the action of a quantum superchannel, thus establishing this measure as a fundamental measure of channel distinguishability. Also, I establish some bounds on this success probability in terms of the success probability of conventional channel discrimination. Finally, I provide an explicit semi-definite program that can compute the success probability.