Channel resolvability concerns the minimum resolution needed to approximate the channel output. We study the resolvability of classical-quantum channels in two settings: for the channel output generated by the worst-case input, and for a fixed independent and identically distributed (i.i.d.) input. The direct part of the worst-input setting is derived from sequential hypothesis testing, as it involves non-i.i.d.~inputs. The strong converse of the worst-input setting is obtained via the connection to identification codes. For the fixed-input setting, while the direct part follows from the known quantum soft covering result, we exploit the recent alternative quantum Sanov theorem to solve the strong converse.
In 1927, during the fifth Solvay Conference, Einstein and Bohr described a double-slit interferometer with a "movable slit" that can detect the momentum recoil of a single photon. Here, we report a faithful realization of the Einstein-Bohr interferometer using a single atom in an optical tweezer, cooled to the motional ground state in three dimensions. The single atom has an intrinsic momentum uncertainty comparable to that of a single photon and serves as a movable slit at the minimum of the Heisenberg uncertainty principle. The atom's momentum wavefunction is dynamically tunable via the tweezer laser power, which enables observation of a reduced interferometric visibility in a shallower trap, demonstrating the quantum nature of this interferometer. We further identify classical noise due to atom heating and precession, illustrating a quantum-to-classical transition.
The trade-offs between error probabilities in quantum hypothesis testing are by now well understood in the centralized setting, but much less is known in distributed settings. Here, we study a distributed binary hypothesis testing problem for inferring a bipartite quantum state shared between two remote parties, where one of these parties communicates classical information to the tester at zero rate (while the other party communicates classical or quantum information to the tester at zero rate or higher). As our main contribution, we derive an efficiently computable single-letter formula for the Stein exponent of this problem when the state under the alternative hypothesis is a product state. For the general case, we show that the Stein exponent is given by a multi-letter expression involving a max-min optimization of the regularized measured relative entropy. While this becomes single-letter in the fully classical case, we further prove that the same single-letterization fails for classical-quantum states in general. As a key tool for proving the converse direction of our results, we develop a quantum version of the blowing-up lemma, which may be of independent interest.
We determine the exact error and strong converse exponents of shared-randomness-assisted channel simulation in worst-case total-variation distance. Namely, we find that these exponents can be written as simple optimizations over the Rényi channel mutual information. Strikingly, and in stark contrast to channel coding, there are no critical rates, allowing a tight characterization for arbitrary rates below and above the simulation capacity. We derive our results by asymptotically expanding the meta-converse for channel simulation [Cao \textit{et al.}, IEEE Trans.~Inf.~Theory (2024)], which corresponds to non-signaling-assisted codes. We prove this to be asymptotically tight by employing the approximation algorithms from [Berta \textit{et al.}, Proc.~IEEE ISIT (2024)], which show how to round any non-signaling-assisted strategy to a strategy that only uses shared randomness. Notably, this implies that additional quantum entanglement assistance changes neither the error nor the strong converse exponents.
\textit{Quantum discord} can demonstrate \textit{quantum nonlocality} in the context of a \textit{semi-device-independent} Bell or steering scenario, i.e., by assuming only the Hilbert-space dimension. This work addresses which aspect of \textit{bipartite coherence} is essential to such semi-device-independent quantum information tasks going beyond standard Bell nonlocality or quantum steering. It has been shown that the \textit{global coherence} of a single system can be transformed into \textit{bipartite entanglement}. However, global coherence can also be present in quantum discord. At the same time, discord can display bipartite coherence locally, i.e., only in one subsystem or in both subsystems. Thus, the global coherence of bipartite separable states is defined here as a form of bipartite coherence that is not reducible to local coherence in either or both of the subsystems. To answer the above-mentioned question, we show that global coherence is necessary to demonstrate the semi-device-independent nonlocality of quantum discord in Bell or steering scenarios. From this result, it follows that any \textit{local operations} of the form $\Phi_A \otimes \Phi_B$ that may create \textit{coherence} locally are \textit{free operations} in the resource theory of semi-device-independent nonlocality of discord. As a byproduct, we identify the precise quantum resource for the quantum communication task of \textit{remote state preparation} using two-qubit separable states.
In this work, we consider decoupling a bipartite quantum state via a general quantum channel. We propose a joint state-channel decoupling approach to obtain a one-shot error-exponent bound without smoothing, in which the trace distance is used to measure how good the decoupling is. The established exponent is expressed as a sum of two sandwiched Rényi entropies, one quantifying the amount of initial correlation between the state and the environment, and the other characterizing the effectiveness of the quantum channel. This gives an explicit exponential decay of the decoupling error in the whole achievable region, which was missing in previous results [Commun. Math. Phys. 328, 2014]. Moreover, it strengthens the error-exponent bound obtained in a recent work [IEEE Trans. Inf. Theory, 69(12), 2023] for the exponent arising from the channel part. As an application, we establish a one-shot error-exponent bound for quantum channel coding given by a sandwiched Rényi coherent information.
We propose an innovative scheme to efficiently prepare strong mechanical squeezing by exploiting the synergy of two-tone driving and parametric pumping in an optomechanical system. With a suitable choice of system parameters, the proposal offers the following prominent advantages: the squeezing of the cavity field induced by the optical parametric amplifier can be transferred to the mechanical oscillator, which has already been squeezed by the two-tone driving, so the degree of squeezing of the mechanical oscillator surpasses that obtained by either mechanism alone; the joint mechanism can enhance the degree of squeezing significantly and break the 3 dB mechanical squeezing limit, which is particularly evident in the range where the red/blue-detuned ratio is sub-optimal; and the mechanical squeezing achieved through this distinctive joint mechanism exhibits notable robustness against both thermal noise and the decay of the mechanical oscillator. Our scheme offers a versatile and efficient approach for generating strong mechanical squeezing across a wide range of conditions.
Collective quantum states, such as subradiant and superradiant states, are useful for controlling optical responses in many-body quantum systems. In this work, we study novel collective quantum phenomena in waveguide-coupled Bragg atom arrays with inhomogeneous frequencies. For atoms without free-space dissipation, collectively induced transparency is produced by destructive quantum interference between subradiant and superradiant states. In a large Bragg atom array, multi-frequency photon transparency can be obtained by considering atoms with different frequencies. Interestingly, we find collectively induced absorption (CIA) by studying the influence of free-space dissipation on photon transport. Tunable atomic frequencies nontrivially modify the decay rates of subradiant states. When the decay rate of a subradiant state equals the free-space dissipation, photon absorption reaches its limit at a certain frequency. In other words, photon absorption is enhanced even at low free-space dissipation, distinct from previous photon detection schemes. We also show multi-frequency CIA by properly adjusting atomic frequencies. Our work presents a way to manipulate collective quantum states and exotic optical properties in waveguide QED systems.
Quantum state discrimination is an important problem in many information-processing tasks. In this work, we are concerned with finding its best possible sample complexity when the states are preprocessed by a quantum channel that is required to be locally differentially private. To that end, we provide achievability and converse bounds for different settings, including symmetric state discrimination in various regimes and the asymmetric case. Along the way, we also prove new sample complexity bounds for the general unconstrained setting. An important tool in this endeavor is a set of new entropy inequalities that we believe to be of independent interest.
Quantum hypothesis testing (QHT) has been traditionally studied from the information-theoretic perspective, wherein one is interested in the optimal decay rate of error probabilities as a function of the number of samples of an unknown state. In this paper, we study the sample complexity of QHT, wherein the goal is to determine the minimum number of samples needed to reach a desired error probability. By making use of the wealth of knowledge that already exists in the literature on QHT, we characterize the sample complexity of binary QHT in the symmetric and asymmetric settings, and we provide bounds on the sample complexity of multiple QHT. In more detail, we prove that the sample complexity of symmetric binary QHT depends logarithmically on the inverse error probability and inversely on the negative logarithm of the fidelity. As a counterpart to the quantum Stein's lemma, we also find that the sample complexity of asymmetric binary QHT depends logarithmically on the inverse type II error probability and inversely on the quantum relative entropy, provided that the type II error probability is sufficiently small. We then provide lower and upper bounds on the sample complexity of multiple QHT; improving these bounds remains an intriguing open question. The final part of our paper outlines and reviews how the sample complexity of QHT is relevant to a broad swathe of research areas and can enhance understanding of many fundamental concepts, including quantum algorithms for simulation and search, quantum learning and classification, and foundations of quantum mechanics. As such, we view our paper as an invitation to researchers coming from different communities to study and contribute to the problem of sample complexity of QHT, and we outline a number of open directions for future research.
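In symbols, the two binary characterizations stated in this abstract read as follows (a schematic restatement, with $F$ the fidelity, $D$ the quantum relative entropy, $\varepsilon$ the target error probability, and $\varepsilon_{\mathrm{II}}$ the type II error probability; the precise constants and regimes are in the paper):

```latex
\[
n_{\mathrm{sym}}(\varepsilon) \;=\;
\Theta\!\left( \frac{\log(1/\varepsilon)}{-\log F(\rho,\sigma)} \right),
\qquad
n_{\mathrm{asym}}(\varepsilon_{\mathrm{II}}) \;\approx\;
\frac{\log(1/\varepsilon_{\mathrm{II}})}{D(\rho \,\|\, \sigma)}
\quad \text{for sufficiently small } \varepsilon_{\mathrm{II}}.
\]
```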
In this paper, we consider standard quantum information decoupling, in which Alice aims to decouple her system from the environment by local operations and by discarding some of her subsystems. To achieve $\varepsilon$-decoupling with the trace distance as the error criterion, we establish a near-optimal one-shot characterization of the largest dimension of the remainder system in terms of the conditional $(1-\varepsilon)$-hypothesis-testing entropy. When the underlying system is independently and identically prepared, our result leads to the matched second-order rate as well as the matched moderate-deviation rate. As an application, we find an achievability bound for the entanglement distillation protocol, where the objective is for Alice and Bob to transform their quantum state into a maximally entangled state of the largest possible dimension using only local operations and one-way classical communication.
In this work, maximal $\alpha$-leakage is introduced to quantify how much a quantum adversary can learn about any sensitive information in data upon observing its disturbed version via a quantum privacy mechanism. We first show that an adversary's maximal expected $\alpha$-gain under an optimal measurement is characterized by the measured conditional Rényi entropy. This can be viewed as a parametric generalization of König et al.'s famous guessing-probability formula [IEEE Trans. Inf. Theory, 55(9), 2009]. We then prove that the $\alpha$-leakage and maximal $\alpha$-leakage of a quantum privacy mechanism are determined by the measured Arimoto information and the measured Rényi capacity, respectively. Various properties of maximal $\alpha$-leakage, such as the data-processing inequality and a composition property, are established as well. Moreover, we show that the regularized $\alpha$-leakage and regularized maximal $\alpha$-leakage of independent and identical quantum privacy mechanisms coincide with the $\alpha$-tilted sandwiched Rényi information and the sandwiched Rényi capacity, respectively.
Strong converse theorems concern impossibility results in information theory. In particular, Mosonyi and Ogawa established a one-shot strong converse bound for quantum hypothesis testing [Commun. Math. Phys., 334(3), 2014], which serves as a primitive tool for establishing a variety of tight strong converse theorems in quantum information theory. In this short note, we give an alternative one-line proof of this bound via the variational expression of measured Rényi divergences [Lett. Math. Phys., 107(12), 2017]. We then show that the variational expression is a direct consequence of Hölder's inequality.
We study the coherence of two coupled spin qubits in the presence of a bath of nuclear spins, simulated using the generalized cluster correlation expansion (gCCE) method. In our model, two electron spin qubits coupled by isotropic exchange or magnetic dipolar interactions interact with an environment of random nuclear spins. We study the time evolution of the two-qubit reduced density matrix (RDM) and the resulting decay of its off-diagonal elements, corresponding to decoherence, which allows us to calculate gate fidelity in the pure-dephasing regime. We contrast decoherence under free evolution with that under applied dynamical decoupling pulses. Moreover, we study the dependence of decoherence on the external magnetic field and on system parameters that mimic realistic spin qubits, with an emphasis on magnetic molecules. Lastly, we comment on the applicability and limitations of gCCE in simulating nuclear-spin-induced two-qubit relaxation processes.
With the recent progress in quantum algorithms, much of the existing literature claims exponential quantum advantage over classical counterparts. However, many of these successes hinge on the assumption that arbitrary states can be efficiently prepared in quantum circuits. In reality, crafting a circuit to prepare a generic $n$-qubit quantum state demands an operation count on the order of $\mathcal{O}(2^n)$, which is prohibitively demanding for a quantum algorithm to demonstrate its advantage over a classical one. To tackle this data-loading problem, numerous strategies have been put forward. Nonetheless, most of these approaches consider only a very simple and easy-to-implement circuit structure, which has been shown to suffer from serious optimization issues. In this study, we harness quantum circuits as Born machines to generate probability distributions. Drawing inspiration from methods used to investigate electronic structure in quantum chemistry and condensed matter physics, we present a novel algorithm, "Adaptive Circuit Learning of Born Machine" (ACLBM), that dynamically expands the ansatz circuit. Our algorithm is tailored to selectively integrate the two-qubit entangling gates that best capture the complex entanglement present within the target state. Empirical results underscore the proficiency of our approach in encoding real-world data through amplitude embedding, demonstrating not only compliance with but also enhancement over the performance benchmarks set by previous research.
Consider the problem of minimizing an expected logarithmic loss over either the probability simplex or the set of quantum density matrices. This problem includes tasks such as solving the Poisson inverse problem, computing the maximum-likelihood estimate for quantum state tomography, and approximating positive semi-definite matrix permanents with the currently tightest approximation ratio. Although the optimization problem is convex, standard iteration complexity guarantees for first-order methods do not directly apply due to the absence of Lipschitz continuity and smoothness in the loss function. In this work, we propose a stochastic first-order algorithm named $B$-sample stochastic dual averaging with the logarithmic barrier. For the Poisson inverse problem, our algorithm attains an $\varepsilon$-optimal solution in $\smash{\tilde{O}}(d^2/\varepsilon^2)$ time, matching the state of the art, where $d$ denotes the dimension. When computing the maximum-likelihood estimate for quantum state tomography, our algorithm yields an $\varepsilon$-optimal solution in $\smash{\tilde{O}}(d^3/\varepsilon^2)$ time. This improves on the time complexities of existing stochastic first-order methods by a factor of $d^{\omega-2}$ and those of batch methods by a factor of $d^2$, where $\omega$ denotes the matrix multiplication exponent. Numerical experiments demonstrate that empirically, our algorithm outperforms existing methods with explicit complexity guarantees.
Rydberg microwave (MW) sensors are superior to conventional antenna-based techniques because of their wide operating frequency range and outstanding potential sensitivity. Here, we demonstrate a Rydberg microwave receiver with a high sensitivity of $62\,\mathrm{nV}\,\mathrm{cm}^{-1}\,\mathrm{Hz}^{-1/2}$ and a broad instantaneous bandwidth of up to $10.2\,\mathrm{MHz}$. This excellent performance is achieved by amplifying one of the generated sideband waves through the strong coupling field in the six-wave-mixing process of the Rydberg superheterodyne receiver, in good agreement with our theoretical predictions. Our system, which possesses a uniquely enhanced instantaneous bandwidth and high-sensitivity features that can be improved further, will promote the application of Rydberg microwave electrometry in radar and communication.
We consider privacy amplification against quantum side information by using regular random binning as an effective extractor. For constant-type sources, we obtain error exponent and strong converse bounds in terms of the so-called quantum Augustin information. Via type decomposition, we then recover the error exponent for independent and identically distributed sources proved by Dupuis [arXiv:2105.05342]. As an application, we obtain an achievable secrecy exponent for classical-quantum wiretap channel coding in terms of the Augustin information, which solves an open problem in [IEEE Trans.~Inf.~Theory, 65(12):7985, 2019]. Our approach is to establish an operational equivalence between privacy amplification and quantum soft covering; this may be of independent interest.
Carbon nanoribbon or nanographene qubit arrays can facilitate quantum-to-quantum transduction between light, charge, and spin, making them an excellent testbed for fundamental science in quantum coherent systems and for the construction of higher-level qubit circuits. In this work, we study spin decoherence due to coupling with a surrounding nuclear spin bath of an electronic molecular spin of a vanadyl phthalocyanine (VOPc) molecule integrated on an armchair-edged graphene nanoribbon (GNR). Density functional theory (DFT) is used to obtain ground-state atomic configurations. Decay of spin coherence in Hahn echo experiments is then simulated using the cluster correlation expansion method with a spin Hamiltonian involving hyperfine and electric field gradient tensors calculated from DFT. We find that the decoherence time $T_2$ is anisotropic with respect to magnetic field orientation and is determined only by the hydrogen nuclear spins on both VOPc and the GNR. Large electron spin echo envelope modulation (ESEEM) due to nitrogen and vanadium nuclear spins is present in specific field ranges and can be completely suppressed by tuning the magnetic field. The relation between these field ranges and the hyperfine interactions is analyzed. The effects of interactions with the nuclear quadrupole moments are also studied, validating the applicability and limitations of the spin Hamiltonian when they are disregarded.
Assuming that the input set of the channel is finite and the constraint set $\mathcal{A}$ is polyhedral, i.e., describable by (possibly multiple but) finitely many linear constraints, the mutual information is bounded from above by a decreasing affine function of the square of the distance between the input distribution and the set of all capacity-achieving input distributions $\Pi_{\mathcal{A}}$, on small enough neighborhoods of $\Pi_{\mathcal{A}}$; the proof uses an identity due to Topsøe and Pinsker's inequality. Counterexamples demonstrating the nonexistence of such a quadratic bound are provided for the case of infinitely many linear constraints and the case of infinite input sets. Using Taylor's theorem with the remainder term, rather than Pinsker's inequality, and invoking Moreau's decomposition theorem, the exact characterization of the slowest decrease of the mutual information with the distance to $\Pi_{\mathcal{A}}$ is determined on small neighborhoods of $\Pi_{\mathcal{A}}$. Corresponding results for classical-quantum channels are established under a separable output Hilbert space assumption for the quadratic bound and under a finite-dimensional output Hilbert space assumption for the exact characterization. Implications of these observations for the channel coding problem and applications of the proof techniques to related problems are discussed.
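Schematically, the quadratic bound described in this abstract has the following form (an illustrative restatement only: the constrained capacity $C_{\mathcal{A}}$, the constant $K > 0$, and the choice of the total-variation distance are assumptions of this sketch, not claims about the paper's exact statement):

```latex
\[
I(p) \;\le\; C_{\mathcal{A}} \;-\; K \Big( \min_{q \in \Pi_{\mathcal{A}}} \| p - q \|_{1} \Big)^{2},
\qquad p \text{ in a small enough neighborhood of } \Pi_{\mathcal{A}}.
\]
```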
Convex splitting is a powerful technique in quantum information theory used in proving the achievability of numerous information-processing protocols such as quantum state redistribution and quantum network channel coding. In this work, we establish a one-shot error exponent and a one-shot strong converse for convex splitting with trace distance as an error criterion. Our results show that the derived error exponent (strong converse exponent) is positive if and only if the rate is in (outside) the achievable region. This leads to new one-shot exponent results in various tasks such as communication over quantum wiretap channels, secret key distillation, one-way quantum message compression, quantum measurement simulation, and quantum channel coding with side information at the transmitter. We also establish a near-optimal one-shot characterization of the sample complexity for convex splitting, which yields matched second-order asymptotics. This then leads to stronger one-shot analysis in many quantum information-theoretic tasks.
We show that the communication cost of quantum broadcast channel simulation under free entanglement assistance between the sender and the receivers is asymptotically characterized by an efficiently computable single-letter formula in terms of the channel's multipartite mutual information. Our core contribution is a new one-shot achievability result for multipartite quantum state splitting via multipartite convex splitting. As part of this, we face a general instance of the quantum joint typicality problem with arbitrarily overlapping marginals. The crucial technical ingredient to sidestep this difficulty is a conceptually novel multipartite mean-zero decomposition lemma, together with employing recently introduced complex interpolation techniques for sandwiched Rényi divergences. Moreover, we establish an exponential convergence of the simulation error when the communication costs are within the interior of the capacity region. As the costs approach the boundary of the capacity region moderately quickly, we show that the error still vanishes asymptotically.
In maximum-likelihood quantum state tomography, both the sample size and dimension grow exponentially with the number of qubits. It is therefore desirable to develop a stochastic first-order method, just like stochastic gradient descent for modern machine learning, to compute the maximum-likelihood estimate. To this end, we propose an algorithm called stochastic mirror descent with the Burg entropy. Its expected optimization error vanishes at an $O ( \sqrt{ ( 1 / t ) d \log t } )$ rate, where $d$ and $t$ denote the dimension and the number of iterations, respectively. Its per-iteration time complexity is $O ( d^3 )$, independent of the sample size. To the best of our knowledge, this is currently the computationally fastest stochastic first-order method for maximum-likelihood quantum state tomography.
We theoretically study how a scattered electron can entangle molecular spin qubits (MSQs). This requires solving the inelastic transport of a single electron through a scattering region described by a tight-binding interacting Hamiltonian. We accomplish this using a Green's function solution. We can model realistic physical implementations of MSQs by parameterizing the tight-binding Hamiltonian with first-principles descriptions of magnetic anisotropy and exchange interactions. We find that for two-MSQ systems with inversion symmetry, the spin degree of freedom of the scattered electron offers probabilistic control of the degree of entanglement between the MSQs.
Qubit mapping is a critical aspect of implementing quantum circuits on real hardware devices. Currently, existing algorithms for qubit mapping encounter difficulties when dealing with larger circuit sizes involving hundreds of qubits. In this paper, we introduce an innovative qubit mapping algorithm, Duostra, tailored to address the challenge of implementing large-scale quantum circuits on real hardware devices with limited connectivity. Duostra operates by efficiently determining optimal paths for two-qubit gates and inserting SWAP gates accordingly to implement the two-qubit operations on real devices. Together with two heuristic scheduling algorithms, the Limitedly-Exhaustive (LE) Search and the Shortest-Path (SP) Estimation, it yields results of good quality within a reasonable runtime, thereby striving toward achieving quantum advantage. Experimental results showcase our algorithm's superiority, especially for large circuits beyond the NISQ era. For example, on large circuits with more than 50 qubits, we reduce the mapping cost by an average of 21.75% relative to the virtual best results among QMAP, t|ket>, Qiskit, and SABRE. Moreover, for mid-size circuits such as the SABRE-large benchmark, we improve the mapping costs by 4.5%, 5.2%, 16.3%, 20.7%, and 25.7% compared to QMAP, TOQM, t|ket>, Qiskit, and SABRE, respectively.
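As an illustrative sketch of the routing primitive such mappers build on (this is not the Duostra algorithm itself; the coupling-graph format, the `mapping` dictionary, and both helper names are assumptions made for illustration), a shortest-path SWAP-insertion step can be written as:

```python
from collections import deque

def shortest_path(coupling, src, dst):
    """BFS over the device coupling graph; returns the physical-qubit path src..dst."""
    prev = {src: None}
    queue = deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            path = []
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nb in coupling[node]:
            if nb not in prev:
                prev[nb] = node
                queue.append(nb)
    raise ValueError("qubits are not connected on the device")

def route_two_qubit_gate(coupling, mapping, q0, q1):
    """Insert SWAPs along a shortest path until logical qubits q0, q1 sit on
    adjacent physical qubits; mutates `mapping` (logical -> physical) in place
    and returns the list of physical SWAPs applied."""
    path = shortest_path(coupling, mapping[q0], mapping[q1])
    inv = {p: l for l, p in mapping.items()}   # physical -> logical
    swaps = []
    # walk q0 along the path, stopping one step short so it ends adjacent to q1
    for a, b in zip(path[:-2], path[1:-1]):
        swaps.append((a, b))
        la, lb = inv.get(a), inv.get(b)
        inv[a], inv[b] = lb, la
        if la is not None:
            mapping[la] = b
        if lb is not None:
            mapping[lb] = a
    return swaps
```

On a four-qubit line device with logical qubits 'A' and 'B' on the two endpoints, this inserts two SWAPs and leaves the pair on adjacent physical qubits; production mappers like Duostra add scheduling heuristics (e.g., the LE Search and SP Estimation above) on top of this basic step.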
Consider an online convex optimization problem where the loss functions are self-concordant barriers, smooth relative to a convex function $h$, and possibly non-Lipschitz. We analyze the regret of online mirror descent with $h$. Then, based on the result, we prove the following in a unified manner. Denote by $T$ the time horizon and $d$ the parameter dimension. 1. For online portfolio selection, the regret of $\widetilde{\text{EG}}$, a variant of exponentiated gradient due to Helmbold et al., is $\tilde{O} ( T^{2/3} d^{1/3} )$ when $T > 4 d / \log d$. This improves on the original $\tilde{O} ( T^{3/4} d^{1/2} )$ regret bound for $\widetilde{\text{EG}}$. 2. For online portfolio selection, the regret of online mirror descent with the logarithmic barrier is $\tilde{O}(\sqrt{T d})$. The regret bound is the same as that of Soft-Bayes due to Orseau et al. up to logarithmic terms. 3. For online learning of quantum states with the logarithmic loss, the regret of online mirror descent with the log-determinant function is also $\tilde{O} ( \sqrt{T d} )$. Its per-iteration time is shorter than that of all existing algorithms we know.
Achievability in information theory refers to demonstrating a coding strategy that accomplishes a prescribed performance benchmark for the underlying task. In quantum information theory, the Hayashi-Nagaoka operator inequality is an essential technique for proving a wealth of one-shot achievability bounds, since it effectively resembles a union bound in various problems. In this work, we show that the pretty-good measurement naturally plays the role of a union bound as well. A judicious application of it considerably simplifies the derivation of one-shot achievability for classical-quantum (c-q) channel coding via an elegant three-line proof. The proposed analysis enjoys the following favorable features. (i) The established one-shot bound admits a closed-form expression as in the celebrated Holevo-Helstrom theorem. Namely, the error probability of sending $M$ messages through a c-q channel is upper bounded by the minimum error of distinguishing the joint channel input-output state against $(M-1)$ decoupled product states. (ii) Our bound directly yields asymptotic results in the large-deviation, small-deviation, and moderate-deviation regimes in a unified manner. (iii) The coefficients incurred in applying the Hayashi-Nagaoka operator inequality are no longer needed, so the derived one-shot bound sharpens existing results relying on that inequality. In particular, we obtain the tightest known achievable $\epsilon$-one-shot capacity for c-q channel coding, improving the third-order coding rate in the asymptotic scenario. (iv) Our result holds for infinite-dimensional Hilbert spaces. (v) The proposed method applies to deriving one-shot achievability for classical data compression with quantum side information, entanglement-assisted classical communication over quantum channels, and various quantum network information-processing protocols.
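Written out, feature (i) above takes a Holevo-Helstrom-style form (a schematic restatement consistent with the description in this abstract, not the paper's exact statement: $\rho_{XB}$ denotes the joint channel input-output state, $\rho_X \otimes \rho_B$ its decoupled product, and $0 \le T \le \mathbb{1}$ a binary test):

```latex
\[
P_{\mathrm{err}}(M) \;\le\;
\min_{0 \le T \le \mathbb{1}}
\Big\{ \operatorname{Tr}\!\big[ (\mathbb{1} - T)\, \rho_{XB} \big]
\;+\; (M-1)\, \operatorname{Tr}\!\big[ T \, ( \rho_X \otimes \rho_B ) \big] \Big\}.
\]
```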
Owing to their unique electronic spin properties, nitrogen-vacancy (NV) centers hosted in diamond have emerged as powerful quantum sensors for various physical parameters and biological species. In this work, a miniature optical-fiber quantum probe, configured by chemically modifying nanodiamond NV centers onto the surface of a cone fiber tip, is developed. Using the continuous-wave optically detected magnetic resonance method and a lock-in amplification technique, we find that the sensing performance of the probe can be engineered by varying the nanodiamond dispersion concentration and the modification duration in the chemical modification process. Combined with a pair of magnetic flux concentrators, the magnetic field detection sensitivity of the probe is significantly enhanced to $0.57\,\mathrm{nT/Hz^{1/2}}$ @ 1 Hz, a new record among fiber magnetometers based on nanodiamond NV centers. Taking Gd$^{3+}$ as a demonstration, the capability of the probe for paramagnetic species detection is also shown experimentally. Our work provides a new approach to developing NV centers as quantum probes featuring high integration, miniature size, multifunctionality, and high sensitivity.
We study quantum soft covering and privacy amplification against quantum side information. The former task aims to approximate a quantum state by sampling from a prior distribution and querying a quantum channel. The latter task aims to extract uniform and independent randomness against quantum adversaries. For both tasks, we use the trace distance to measure the closeness between the processed state and the ideal target state. We show that the minimal number of samples for achieving an $\varepsilon$-covering is given by the $(1-\varepsilon)$-hypothesis-testing information (up to additional logarithmic additive terms), while the maximal extractable randomness for an $\varepsilon$-secret extractor is characterized by the conditional $(1-\varepsilon)$-hypothesis-testing entropy. When performing independent and identical repetitions of the tasks, our one-shot characterizations lead to tight asymptotic expansions of the above-mentioned operational quantities. We establish their second-order rates, given by the quantum mutual information variance and the quantum conditional information variance, respectively. Moreover, our results extend to the moderate deviation regime, yielding the optimal asymptotic rates when the trace distances vanish at sub-exponential speed. Our proof technique is a direct analysis of the trace distance without smoothing.
How well can we approximate a quantum channel output state using a random codebook of a certain size? In this work, we study the quantum soft covering problem. Namely, we use a random codebook with codewords independently sampled from a prior distribution and send them through a classical-quantum channel to approximate the target state. When using a random codebook sampled from an independent and identically distributed prior with a rate above the quantum mutual information, we show that the expected trace distance between the codebook-induced state and the target state decays with an exponent given by the sandwiched Rényi information. On the other hand, when the rate of the codebook size is below the quantum mutual information, the trace distance converges to one exponentially fast. We obtain similar results when using a random constant composition codebook, in which case the error exponent is expressed by the sandwiched Augustin information. In addition to the above large deviation analysis, our results also hold in the moderate deviation regime. That is, we show that even when the rate of the codebook size approaches the quantum mutual information moderately quickly, the trace distance still vanishes asymptotically.
We establish a one-shot strong converse bound for privacy amplification against quantum side information using the trace distance as a security criterion. This strong converse bound implies that, in the independent and identically distributed scenario, the trace distance converges to one exponentially fast at every finite blocklength when the rate of the extracted randomness exceeds the quantum conditional entropy. The established one-shot bound has applications to bounding the information leakage of classical-quantum wiretap channel coding and private communication over quantum channels. That is, the trace distance between the joint state of Alice and the eavesdropper and its decoupled state vanishes as the rate of randomness used in hashing exceeds the quantum mutual information. On the other hand, the trace distance converges to one when the rate is below the quantum mutual information, resulting in an exponential strong converse. Our result also leads to an exponential strong converse for entropy accumulation, which complements a recent result by Dupuis [arXiv:2105.05342]. Lastly, our result and its applications extend to the moderate deviation regime. Namely, we characterize the asymptotic behaviors of the trace distances when the associated rates approach the fundamental thresholds with speeds slower than $O(1/\sqrt{n})$.
Simulating electronic structure on a quantum computer requires encoding of fermionic systems onto qubits. Common encoding methods transform a fermionic system of $N$ spin-orbitals into an $N$-qubit system, but many qubit basis states do not respect the required conditions and symmetries of the system, so the qubit Hilbert space in this case contains unphysical states and thus cannot be fully utilized. We propose a generalized qubit-efficient encoding (QEE) scheme that requires the qubit number to be only logarithmic in the number of configurations that satisfy the required conditions and symmetries. For the case of considering only the particle-conserving and singlet configurations, we reduce the qubit count to an upper bound of $\mathcal O(m\log_2N)$, where $m$ is the number of particles. This QEE scheme is demonstrated on an H$_2$ molecule in the 6-31G basis set and a LiH molecule in the STO-3G basis set using fewer qubits than the common encoding methods. We calculate the ground-state energy surfaces using a variational quantum eigensolver algorithm with a hardware-efficient ansatz circuit. We choose a hardware-efficient ansatz since most of the Hilbert space in our scheme is spanned by desired configurations, so a heuristic search for an eigenstate is sensible. The simulations are performed on IBM Quantum machines and the Qiskit simulator with a noise model implemented from an IBM Quantum machine. Using measurement error mitigation and error-free linear extrapolation, we demonstrate that most of the distributions of the extrapolated energies obtained with our QEE scheme agree with the exact results obtained by Hamiltonian diagonalization in the given basis sets within chemical accuracy. Our proposed scheme and results show the feasibility of quantum simulations for larger molecular systems in the noisy intermediate-scale quantum (NISQ) era.
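To make the qubit counting concrete, here is a small illustrative sketch (our own, not code from the paper) comparing a compact encoding's qubit count with the one-qubit-per-spin-orbital count, restricting to particle-number- and $S_z$-conserving configurations; the singlet restriction discussed in the abstract would reduce the count further.

```python
from math import comb, ceil, log2

def qee_qubit_count(n_spatial, n_up, n_dn):
    """Qubits needed to index every particle-number- and Sz-conserving
    configuration, versus one qubit per spin-orbital (e.g. Jordan-Wigner)."""
    n_configs = comb(n_spatial, n_up) * comb(n_spatial, n_dn)
    return ceil(log2(n_configs)), 2 * n_spatial

# H2 in 6-31G: 4 spatial orbitals, one electron per spin sector
print(qee_qubit_count(4, 1, 1))   # → (4, 8)
# LiH in STO-3G: 6 spatial orbitals, two electrons per spin sector
print(qee_qubit_count(6, 2, 2))   # → (8, 12)
```

The saving grows with system size, since the configuration count scales combinatorially while the encoding needs only its logarithm.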
We propose an iterative algorithm that computes the maximum-likelihood estimate in quantum state tomography. The optimization error of the algorithm converges to zero at an $O((1/k)\log D)$ rate, where $k$ denotes the number of iterations and $D$ denotes the dimension of the quantum state. The per-iteration computational complexity of the algorithm is $O(D^3 + ND^2)$, where $N$ denotes the number of measurement outcomes. The algorithm can be considered as a parameter-free correction of the $R\rho R$ method [A. I. Lvovsky. Iterative maximum-likelihood reconstruction in quantum homodyne tomography. \textit{J. Opt. B: Quantum Semiclass. Opt.} 2004] [G. Molina-Terriza et al. Triggered qutrits for quantum communication protocols. \textit{Phys. Rev. Lett.} 2004].
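For context, the classic (uncorrected) $R\rho R$ fixed-point iteration referenced above can be sketched in a few lines. This is our own toy illustration with a hypothetical Pauli POVM and synthetic data, not the paper's corrected algorithm.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def pauli_povm():
    """Informationally complete qubit POVM: the six eigenprojectors of
    X, Y, Z, each weighted by 1/3 so the elements sum to the identity."""
    elems = []
    for P in (X, Y, Z):
        _, vecs = np.linalg.eigh(P)
        for k in range(2):
            v = vecs[:, k:k + 1]
            elems.append((v @ v.conj().T) / 3)
    return elems

def rho_r_rho(freqs, povm, n_iter=2000):
    """Plain R*rho*R fixed-point iteration for maximum-likelihood state
    tomography, starting from the maximally mixed state."""
    d = povm[0].shape[0]
    rho = np.eye(d, dtype=complex) / d
    for _ in range(n_iter):
        probs = np.array([np.trace(E @ rho).real for E in povm])
        # R weighs each POVM element by the ratio of observed to
        # predicted frequency; R = identity at a likelihood maximum.
        R = sum(f / p * E for f, p, E in zip(freqs, probs, povm))
        rho = R @ rho @ R
        rho /= np.trace(rho).real
    return rho
```

With exact (noise-free) frequencies from an informationally complete POVM, the iteration drives the estimate toward the true state; the paper's contribution is a parameter-free correction with a provable convergence rate, which this sketch does not implement.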
The unitary coupled cluster (UCC) approximation is one of the more promising wave-function ansätze for electronic structure calculations on quantum computers via the variational quantum eigensolver algorithm. However, for large systems with many orbitals, the required number of UCC factors still leads to very deep quantum circuits, which can be challenging to implement. Based on the observation that most UCC amplitudes are small for weakly correlated molecules, we devise an algorithm that employs a Taylor expansion in the small amplitudes, trading off circuit depth for extra measurements. Strong correlations can be taken into account by performing the expansion about a small set of UCC factors, which are treated exactly. Near equilibrium, the Taylor series expansion often works well without the need to include any exact factors; as the molecule is stretched and correlations increase, we find only a small number of factors need to be treated exactly.
The factorized form of the unitary coupled cluster ansatz is a popular state preparation ansatz for electronic structure calculations of molecules on quantum computers. It often is viewed as an approximation (based on the Trotter product formula) for the conventional unitary coupled cluster operator. In this work, we show that the factorized form is quite flexible, allowing one to range from conventional configuration interaction, to conventional unitary coupled cluster, to efficient approximations that lie in between these two. The variational minimization of the energy often allows simpler factorized unitary coupled cluster approximations to achieve high accuracy, even if they do not accurately approximate the Trotter product formula. This is similar to how quantum approximate optimization algorithms can achieve high accuracy with a small number of levels.
Quantum information quantities play a substantial role in characterizing operational quantities in various quantum information-theoretic problems. We consider numerical computation of four quantum information quantities: the Petz-Augustin information, the sandwiched Augustin information, the conditional sandwiched Rényi entropy, and the sandwiched Rényi information. Computing these quantities requires minimizing some order-$\alpha$ quantum Rényi divergences over the set of quantum states. Although the optimization problems are convex, they violate the standard bounded gradient/Hessian conditions in the literature, so existing convex optimization methods and their convergence guarantees do not directly apply. In this paper, we propose a new class of convex optimization methods called mirror descent with the Polyak step size. We prove their convergence under a weak condition, showing that they provably converge when minimizing quantum Rényi divergences. Numerical experiments show that entropic mirror descent with the Polyak step size converges fast in minimizing quantum Rényi divergences.
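As a toy classical instance of the method named above, entropic mirror descent with the Polyak step size can be sketched on the probability simplex. This is our own illustration on a commutative objective (cross entropy), not the paper's quantum Rényi setting; the objective and step-size rule are stated in the comments.

```python
import numpy as np

def entropic_md_polyak(f, grad, f_star, dim, n_iter=3000):
    """Entropic mirror descent over the probability simplex with the Polyak
    step size eta_k = (f(p_k) - f_star) / ||grad f(p_k)||_inf^2."""
    p = np.full(dim, 1.0 / dim)               # start at the uniform distribution
    for _ in range(n_iter):
        g = grad(p)
        gap = f(p) - f_star
        if gap <= 0:                           # optimal up to rounding error
            break
        eta = gap / np.max(np.abs(g)) ** 2
        p = p * np.exp(-eta * (g - g.min()))   # shift exponent for stability
        p /= p.sum()                           # renormalize onto the simplex
    return p

# Toy objective: cross entropy f(p) = -sum_i q_i log p_i, minimized at
# p = q with optimal value equal to the Shannon entropy H(q).
q = np.array([0.5, 0.3, 0.2])
f = lambda p: -np.sum(q * np.log(p))
grad = lambda p: -q / p
p_opt = entropic_md_polyak(f, grad, f_star=-np.sum(q * np.log(q)), dim=3)
```

The Polyak rule needs the optimal value $f^\star$ (here the entropy of $q$), which is available in closed form for the divergence-minimization problems the paper studies.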
In this paper, we study the problem of learning an unknown quantum circuit of a certain structure. If the unknown target is an $n$-qubit Clifford circuit, we devise an efficient algorithm to reconstruct its circuit representation using $O(n^2)$ queries to it. For decades, it has been unknown how to handle circuits beyond the Clifford group, since the stabilizer formalism cannot be applied in this case. Herein, we study quantum circuits of $T$-depth one on the computational basis. We show that the output state of a $T$-depth-one circuit \textit{of full $T$-rank} can be represented by a stabilizer pseudomixture with a specific algebraic structure. Using Pauli and Bell measurements on copies of the output states, we can generate a hypothesis circuit that is equivalent to the unknown target circuit on computational basis states as input. If the number of $T$ gates of the target is of the order $O(\log n)$, our algorithm requires $O(n^2)$ queries to it and produces its equivalent circuit representation on the computational basis in time $O(n^3)$. With an additional $O(4^{3n})$ classical computation, we can derive an exact description of the target for arbitrary input states. Our results greatly extend the previously known fact that stabilizer states can be efficiently identified based on the stabilizer formalism.
The ability to design quantum systems that decouple from environmental noise sources is highly desirable for development of quantum technologies with optimal coherence. The chemical tunability of electronic states in magnetic molecules combined with advanced electron spin resonance techniques provides excellent opportunities to address this problem. Indeed, so-called clock transitions (CTs) have been shown to protect molecular spin qubits from magnetic noise, giving rise to significantly enhanced coherence. Here we conduct a spectroscopic and computational investigation of this physics, focusing on the role of the nuclear bath. Away from the CT, linear coupling to the nuclear degrees of freedom causes a modulation and decay of electronic coherence, as quantified via electron spin echo signals generated experimentally and $\textit{in silico}$. Meanwhile, the effective hyperfine interaction vanishes at the CT, resulting in electron-nuclear decoupling and an absence of quantum information leakage to the nuclear bath, providing opportunities to characterize other decoherence sources.
We propose a logical qubit based on the Blume-Capel model: a higher-spin generalization of the Ising chain that allows for an on-site anisotropy preserving rotational invariance around the Ising axis. We show that such a spin-3/2 Blume-Capel model can also support localized Majorana bound states at the ends of the chain. Inspired by known braiding protocols for these Majorana bound states, upon appropriate manipulation of the system parameters, we demonstrate a set of universal gate operations acting on qubits encoded in the doubly degenerate ground states of the chain.
We study quantum hypothesis testing between orthogonal states under restricted local measurements in the many-copy scenario. First, for testing an arbitrary multipartite entangled pure state against its orthogonal complement state via local operations and classical communication (LOCC), we prove that the optimal average error probability always decays exponentially in the number of copies. Second, we provide a sufficient condition for LOCC operations to achieve the same performance as positive-partial-transpose (PPT) operations. We further show that testing a maximally entangled state against its orthogonal complement and testing extremal Werner states both fulfill this condition. Hence, we determine the explicit expressions for the optimal average error probability, the optimal trade-off between the type-I and type-II errors, and the associated Chernoff, Stein, Hoeffding, and strong converse exponents. Then, we show an infinite asymptotic separation between separable (SEP) and PPT operations by providing a pair of states constructed from an unextendible product basis (UPB): the states can be distinguished perfectly by PPT operations, while the optimal error probability under SEP operations admits an exponential lower bound. On the technical side, we prove this result by providing a quantitative version of the well-known statement that the tensor product of UPBs is a UPB.
We have designed and implemented a straightforward method to deterministically measure the temperature of a selected segment of a cold atom ensemble, and we have also developed an upgrade in the form of nondestructive thermometry. The essence is to monitor the thermal expansion of the targeted cold atoms after labeling them by manipulating their internal states, and the nondestructive property relies upon nearly lossless detection via driving a cycling transition. For cold atoms subject to isotropic laser cooling, this method has the unique capability of addressing only the atoms on the optical detection axis within the enclosure, which is exactly the part that matters in major applications such as atomic clocks and quantum sensing. Furthermore, our results confirm the sub-Doppler cooling features of isotropic laser cooling, and we have investigated the relevant cooling properties. Meanwhile, we have applied the recently developed optical configuration with the cooling laser injected in the form of hollow beams, which helps to enhance the cooling performance and accumulate more cold atoms in the central regions.
Using a separable many-body variational wavefunction, we formulate a self-consistent effective Hamiltonian theory for fermionic many-body systems. The theory is applied to the two-dimensional Hubbard model as an example to demonstrate its capability and computational effectiveness. Most remarkably, for the two-dimensional Hubbard model, a highly unconventional quadruple-fermion, non-Cooper-pair order parameter is discovered.
We consider a distributed quantum hypothesis testing problem with communication constraints, in which the two hypotheses correspond to two different states of a bipartite quantum system, multiple identical copies of which are shared between Alice and Bob. They are allowed to perform local operations on their respective systems and send quantum information to Charlie at limited rates. By performing measurements on the systems he receives, Charlie needs to infer which of the two states the original bipartite state was in, that is, which of the two hypotheses is true. We prove that the Stein exponent for this problem is given by a regularized quantum relative entropy. The latter reduces to a single-letter formula when the alternative hypothesis consists of the products of the marginals of the null hypothesis, and no rate constraint is imposed on Bob. Our proof relies on certain properties of the so-called quantum information bottleneck function. The second part of this paper concerns the general problem of finding finite blocklength strong converse bounds in quantum network information theory. In the classical case, the analogue of this problem has been reformulated in terms of the so-called image size characterization problem. Here, we extend this problem to the classical-quantum setting and prove a second-order strong converse bound for it. As a by-product, we obtain a similar bound on the Stein exponent for distributed hypothesis testing in the special case in which the bipartite system is a classical-quantum system, as well as for the task of quantum source coding with compressed classical side information. Our proofs use a recently developed tool from quantum functional inequalities, namely, the tensorization property of reverse hypercontractivity for the quantum depolarizing semigroup.
We consider the transmission of classical information through a degraded broadcast channel, whose outputs are two quantum systems, with the state of one being a degraded version of the other. Yard et al. proved that the capacity region of such a channel is contained in a region characterized by certain entropic quantities. We prove that this region satisfies the strong converse property, that is, the maximal probability of error incurred in transmitting information at rates lying outside this region converges to one exponentially in the number of uses of the channel. In establishing this result, we prove a second-order Fano-type inequality, which might be of independent interest. A powerful analytical tool which we employ in our proofs is the tensorization property of the quantum reverse hypercontractivity for the quantum depolarizing semigroup.
Rényi and Augustin information are generalizations of mutual information defined via the Rényi divergence, playing a significant role in evaluating the performance of information processing tasks by virtue of their connection to error exponent analysis. In quantum information theory, there are three generalizations of the classical Rényi divergence -- the Petz, sandwiched, and log-Euclidean versions -- that possess meaningful operational interpretations. However, the associated quantum Rényi and Augustin information are much less explored than their classical counterparts, and the lack of crucial properties hinders applications of these quantities to error exponent analysis in the quantum regime. The goal of this paper is to analyze fundamental properties of the Rényi and Augustin information from a noncommutative measure-theoretic perspective. Firstly, we prove the uniform equicontinuity of all three quantum versions of the Rényi and Augustin information, which yields the joint continuity of these quantities in the order and the prior input distribution. Secondly, we establish the concavity of the scaled Rényi and Augustin information in the region $s\in(-1,0)$ for both the Petz and sandwiched versions. This settles open questions raised by Holevo [IEEE Trans.~Inf.~Theory, 46(6):2256--2261, 2000] and by Mosonyi and Ogawa [Commun.~Math.~Phys., 355(1):373--426, 2017]. As for applications, we show that the strong converse exponent in classical-quantum channel coding satisfies a minimax identity, which means that the strong converse exponent can be attained by the best constant composition code. The established concavity is further employed to prove an entropic duality between classical data compression with quantum side information and classical-quantum channel coding, and a Fenchel duality in joint source-channel coding with quantum side information.
In this paper, we establish an interesting duality between two different quantum information-processing tasks, namely, classical source coding with quantum side information, and channel coding over c-q channels. The duality relates the optimal error exponents of these two tasks, generalizing the classical results of Ahlswede and Dueck. We establish duality both at the operational level and at the level of the entropic quantities characterizing these exponents. For the latter, the duality is given by an exact relation, whereas for the former, duality manifests itself in the following sense: an optimal coding strategy for one task can be used to construct an optimal coding strategy for the other task. Along the way, we derive a bound on the error exponent for c-q channel coding with constant composition codes which might be of independent interest.
Reconfigurable photonic circuits have applications ranging from next-generation computer architectures to quantum networks, coherent radar and optical metamaterials. However, complete reconfigurability is only currently practical on millimetre-scale device footprints. Here, we overcome this barrier by developing an on-chip high quality microcavity with resonances that can be electrically tuned across a full free spectral range (FSR). FSR tuning allows resonance with any source or emitter, or between any number of networked microcavities. We achieve it by integrating nanoelectronic actuation with strong optomechanical interactions that create a highly strain-dependent effective refractive index. This allows low voltages and sub-nanowatt power consumption. We demonstrate a basic reconfigurable photonic network, bringing the microcavity into resonance with an arbitrary mode of a microtoroidal optical cavity across a telecommunications fibre link. Our results have applications beyond photonic circuits, including widely tuneable integrated lasers, reconfigurable optical filters for telecommunications and astronomy, and on-chip sensor networks.
In this paper, we analyze classical data compression with quantum side information (also known as the classical-quantum Slepian-Wolf protocol) in the so-called large and moderate deviation regimes. In the non-asymptotic setting, the protocol involves compressing classical sequences of finite length $n$ and decoding them with the assistance of quantum side information. In the large deviation regime, the compression rate is fixed, and we obtain bounds on the error exponent function, which characterizes the minimal probability of error as a function of the rate. Devetak and Winter showed that the asymptotic data compression limit for this protocol is given by a conditional entropy. For any protocol with a rate below this quantity, the probability of error converges to one asymptotically and its speed of convergence is given by the strong converse exponent function. We obtain finite blocklength bounds on this function, and determine exactly its asymptotic value. In the moderate deviation regime for the compression rate, the latter is no longer considered to be fixed. It is allowed to depend on the blocklength $n$, but assumed to decay slowly to the asymptotic data compression limit. Starting from a rate above this limit, we determine the speed of convergence of the error probability to zero and show that it is given in terms of the conditional information variance. Our results complement earlier results obtained by Tomamichel and Hayashi, in which they analyzed the so-called small deviation regime of this protocol.
We study lower bounds on the optimal error probability in classical coding over classical-quantum channels at rates below the capacity, commonly termed quantum sphere-packing bounds. Winter and Dalai have derived such bounds for classical-quantum channels; however, the exponents in their bounds only coincide when the channel is classical. In this paper, we show that these two exponents admit a variational representation and are related by the Golden-Thompson inequality, reaffirming that Dalai's expression is stronger in general classical-quantum channels. Second, we establish a sphere-packing bound for classical-quantum channels, which significantly improves Dalai's prefactor from the order of subexponential to polynomial. Furthermore, the gap between the obtained error exponent for constant composition codes and the best known classical random coding exponent vanishes in the order of $o(\log n / n)$, indicating our sphere-packing bound is almost exact in the high rate regime. Finally, for a special class of symmetric classical-quantum channels, we can completely characterize its optimal error probability without the constant composition code assumption. The main technical contributions are two converse Hoeffding bounds for quantum hypothesis testing and the saddle-point properties of error exponent functions.
In this work, we study the tradeoffs between the error probabilities of classical-quantum channels and the blocklength $n$ when the transmission rates approach the channel capacity at a rate slower than $1/\sqrt{n}$, a research topic known as moderate deviation analysis. We show that the optimal error probability vanishes under this rate convergence. Our main technical contributions are a tight quantum sphere-packing bound, obtained via Chaganty and Sethuraman's concentration inequality in strong large deviation theory, and asymptotic expansions of error-exponent functions. Moderate deviation analysis for quantum hypothesis testing is also established. The converse directly follows from our channel coding result, while the achievability relies on a martingale inequality.
We provide a sphere-packing lower bound on the optimal error probability at finite blocklengths when coding over a symmetric classical-quantum channel. Our result shows that the pre-factor can be significantly improved from subexponential to polynomial order. The established pre-factor is essentially optimal because it matches the best known random coding upper bound in the classical case. Our approach relies on a sharp concentration inequality from strong large deviation theory and crucial properties of the error-exponent function.
The auxiliary function of a classical channel appears in two fundamental quantities that upper and lower bound the error probability, respectively. A crucial property of the auxiliary function is its concavity, which leads to several important results in finite blocklength analysis. In this paper, we prove that the auxiliary function of a classical-quantum channel enjoys the same concavity property, extending an earlier partial result to its full generality. The key component of our proof is a beautiful result on geometric means of operators.
We derive new characterisations of the matrix $\mathrm{\Phi}$-entropy functionals introduced in [Electron.~J.~Probab., 19(20): 1--30, 2014]. Notably, all known equivalent characterisations of the classical $\Phi$-entropies have their matrix correspondences. Next, we propose an operator-valued generalisation of the matrix $\Phi$-entropy functionals, and prove their subadditivity under the Löwner partial ordering. Our results demonstrate that the subadditivity of operator-valued $\Phi$-entropies is equivalent to the convexity of various related functions. This can be used to establish an interesting result in quantum information theory: the matrix $\Phi$-entropy of a quantum ensemble is monotone under unital quantum channels. Finally, we derive the operator Efron-Stein inequality to bound the operator-valued variance of a random matrix.
In the study of Markovian processes, one of the principal achievements is the equivalence between the $\Phi$-Sobolev inequalities and an exponential decrease of the $\Phi$-entropies. In this work, we develop a framework of Markov semigroups on matrix-valued functions and generalize the above equivalence to the exponential decay of matrix $\Phi$-entropies. This result also specializes to spectral gap inequalities and modified logarithmic Sobolev inequalities in the random matrix setting. To establish the main result, we define a non-commutative generalization of the carré du champ operator, and prove a de Bruijn's identity for matrix-valued functions. The proposed Markov semigroups acting on matrix-valued functions have immediate applications in the characterization of the dynamical evolution of quantum ensembles. We consider two special cases of quantum unital channels, namely, the depolarizing channel and the phase-damping channel. In the former, since there exists a unique equilibrium state, we show that the matrix $\Phi$-entropy of the resulting quantum ensemble decays exponentially as time goes on. Consequently, we obtain a stronger notion of monotonicity of the Holevo quantity - the Holevo quantity of the quantum ensemble decays exponentially in time and the convergence rate is determined by the modified log-Sobolev inequalities. However, in the latter, the matrix $\Phi$-entropy of the quantum ensemble that undergoes the phase-damping Markovian evolution generally will not decay exponentially. This is because there are multiple equilibrium states for such a channel. Finally, we also consider examples of statistical mixing of Markov semigroups on matrix-valued functions. We can explicitly calculate the convergence rate of a Markovian jump process defined on Boolean hypercubes, and provide upper bounds of the mixing time on these types of examples.
Sobolev-type inequalities have been extensively studied in the frameworks of real-valued functions and non-commutative $\mathbb{L}_p$ spaces, and have proven useful in bounding the time evolution of classical/quantum Markov processes, among many other applications. In this paper, we consider yet another fundamental setting - matrix-valued functions - and prove new Sobolev-type inequalities for them. Our technical contributions are two-fold: (i) we establish a series of matrix Poincaré inequalities for separably convex functions and general functions with Gaussian unitary ensembles inputs; and (ii) we derive $\Phi$-Sobolev inequalities for matrix-valued functions defined on Boolean hypercubes and for those with Gaussian distributions. Our results recover the corresponding classical inequalities (i.e.~real-valued functions) when the matrix has one dimension. Finally, as an application of our technical outcomes, we derive the upper bounds for a fundamental entropic quantity - the Holevo quantity - in quantum information science since classical-quantum channels are a special instance of matrix-valued functions. This is obtained through the equivalence between the constants in the strong data processing inequality and the $\Phi$-Sobolev inequality.
Quantum machine learning has received significant attention in recent years, and promising progress has been made in the development of quantum algorithms to speed up traditional machine learning tasks. In this work, however, we focus on investigating the information-theoretic upper bounds of sample complexity - how many training samples are sufficient to predict the future behaviour of an unknown target function. This kind of problem is, arguably, one of the most fundamental problems in statistical learning theory, and the bounds for practical settings can be completely characterised by a simple measure of complexity. Our main result is that, for learning an unknown quantum measurement, the upper bound, given by the fat-shattering dimension, is linearly proportional to the dimension of the underlying Hilbert space. Learning an unknown quantum state becomes a dual problem to ours, and as a byproduct, we recover Aaronson's famous result [Proc. R. Soc. A 463:3089-3144 (2007)] solely using a classical machine learning technique. In addition, other famous complexity measures like covering numbers and Rademacher complexities are derived explicitly. We connect these measures of sample complexity with various areas in quantum information science, e.g. quantum state/measurement tomography, quantum state discrimination and quantum random access codes, which may be of independent interest. Lastly, with the assistance of the general Bloch-sphere representation, we show that learning quantum measurements/states can be mathematically formulated as a neural network. Consequently, classical ML algorithms can be applied to efficiently accomplish these two quantum learning tasks.
In this paper, combining the infinite time-evolving block decimation (iTEBD) algorithm with Bell-type inequalities, we investigate multipartite quantum nonlocality in an infinite one-dimensional quantum spin-1/2 XXZ system. A high hierarchy of multipartite nonlocality can be observed in the gapless phase of the model, whereas only the lowest hierarchy of multipartite nonlocality is observed in most regions of the gapped anti-ferromagnetic phase. Thereby, Bell-type inequalities disclose different correlation structures in the two phases of the system. Furthermore, at the infinite-order quantum phase transition (QPT) point of the model (a Kosterlitz-Thouless QPT), the correlation measures always show a local minimum value, regardless of the length of the subchains. This indicates that a relatively low hierarchy of multipartite nonlocality would be observed at the infinite-order QPT point in a Bell-type experiment. The result is in contrast to existing results for the second-order QPT in the one-dimensional XY model, where a high hierarchy of multipartite nonlocality has been observed. Thus, multipartite nonlocality provides an alternative perspective to distinguish between these two kinds of QPTs. Reliable clues for the existence of tripartite quantum entanglement have also been found.
Zhao-Yu Sun, Yan-E Liao, Bin Guo, Hai-Lin Huang, Duo Zhang, Jian Xu, Bi-Fu Zhan, Yu-Yin Wu, Hong-Guang Cheng, Guo-Zhi Wen, Chao Fang, Cheng-Bo Duan, Bo Wang

In this paper, we study global quantum discord (GQD) in infinite-size spin chains. For this purpose, in the framework of matrix product states (MPSs), we propose an effective procedure to calculate the GQD (denoted $G_n$) of consecutive $n$-site subchains in infinite chains. For a spin-1/2 three-body interaction model, whose ground state can be exactly expressed as an MPS, we use the procedure to study $G_n$ with $n$ up to $24$. Then, for a spin-1/2 XXZ chain, we first use the infinite time-evolving block decimation (iTEBD) algorithm to obtain the approximate wavefunction in the form of an MPS, and then compute $G_n$ with $n$ up to $18$. In both models, $G_n$ shows an interesting linear growth as $n$ increases, that is, $G_n = kn + b$. Moreover, in non-critical regions the slope $k$ of $G_n$ converges very fast, while in critical regions it converges relatively slowly; these behaviors are explained in a clear physical picture in terms of short-range and long-range correlations. Based on these results, we propose to use $G_n/n$ to describe the global correlations in infinite chains. $G_n/n$ has a twofold physical meaning. Firstly, it can be regarded as "global discord per site", in analogy to "energy per site" or "magnetization per site" in quantum magnetic systems. Secondly, $G_n/n$ (when $n$ is large enough) describes the quantum correlation between a single site and an $(n-1)$-site block. Finally, we apply our theory to an exactly soluble infinite-size spin XY chain that lies beyond the matrix product formalism and whose Hamiltonian reduces to the transverse-field Ising model and the XX model. The relation between GQD and quantum phase transitions in these models is discussed.