Large-scale quantum computation must be performed in a fault-tolerant manner. A crucial issue in fault-tolerant quantum computing (FTQC) is reducing the overhead of implementing logical gates. Recently proposed correlated decoding and ``algorithmic fault tolerance'' achieve fast logical gates that enable universal quantum computation. However, for circuits involving mid-circuit measurements and feedback, this approach is incompatible with window-based decoding, which is a natural requirement for handling large-scale circuits. In this Letter, we propose an alternative architecture that employs delayed fixup circuits, integrating window-based correlated decoding with fast transversal gates. This design significantly reduces both the frequency and duration of correlated decoding, while maintaining support for constant-time logical gates and universality across a broad class of quantum codes. More importantly, through the spatial parallelism of windows, this architecture adapts well to time-optimal FTQC, making it particularly useful for large-scale computation. Using Shor's algorithm as an example, we explore the application of our architecture and reveal the promising potential of fast transversal gates for performing large-scale quantum computing tasks with acceptable overhead on physical systems such as ion traps.
Probing coherent quantum dynamics in light-matter interactions at the microscopic level requires high-repetition-rate isolated attosecond pulses (IAPs) in pump-probe experiments. To date, the generation of IAPs has been mainly limited to the kilohertz regime. In this work, we experimentally achieve attosecond control of extreme-ultraviolet (XUV) high harmonics in the wide-bandgap dielectric MgO, driven by a synthesized field of two femtosecond pulses at 800 nm and 2000 nm with relative phase stability. The resulting quasi-continuous harmonic plateau, with ~9 eV spectral width centered around 16.5 eV photon energy, can be tuned by the two-color phase and supports the generation of an IAP (~730 attoseconds), confirmed by numerical simulations based on three-band semiconductor Bloch equations. Leveraging the high-repetition-rate driver laser and the moderate intensity requirements of solid-state high-harmonic generation, we achieve IAP production at an unprecedented megahertz repetition rate, paving the way toward all-solid compact XUV sources for IAP generation.
Gaurav Gyawali, Tyler Cochran, Yuri Lensky, Eliott Rosenberg, Amir H. Karamlou, Kostyantyn Kechedzhi, Julia Berndtsson, Tom Westerhout, Abraham Asfaw, Dmitry Abanin, Rajeev Acharya, Laleh Aghababaie Beni, Trond I. Andersen, Markus Ansmann, Frank Arute, Kunal Arya, Nikita Astrakhantsev, Juan Atalaya, Ryan Babbush, Brian Ballard, et al (200) One of the most challenging problems in the computational study of localization in quantum many-body systems is to capture the effects of rare events, which requires sampling over exponentially many disorder realizations. We implement a procedure on a quantum processor, leveraging quantum parallelism, to efficiently sample over all disorder realizations. We observe localization without disorder in quantum many-body dynamics in one and two dimensions: perturbations do not diffuse even though both the generator of evolution and the initial states are fully translationally invariant. The disorder strength, as well as its density, can be readily tuned using the initial state. Furthermore, we demonstrate the versatility of our platform by measuring Rényi entropies. Our method could also be extended to higher moments of the physical observables and to disorder learning.
Quantum secret sharing (QSS) plays a significant role in multiparty quantum communication and is a crucial component of future quantum multiparty computing networks. It is therefore highly valuable to develop a QSS protocol that offers both information-theoretic security and validation in real optical systems under a finite-key regime. In this work, we propose a three-user QSS protocol based on phase-encoding technology. By adopting symmetric procedures for the two players, our protocol resolves the security loopholes introduced by asymmetric basis choice without requiring prior knowledge of the identity of the malicious player. Kato's concentration inequality is exploited to provide security against coherent attacks with the finite-key effect. Moreover, the practicality of our protocol has been validated under 30 dB of channel loss and a 5-km fiber transmission distance. Our protocol achieves secure key rates ranging from 432 to 192 bps for different pulse intensities and basis-selection probabilities. Offering enhanced security and practicality, our protocol stands as an essential element for the realization of quantum multiparty computing networks.
Tyler A. Cochran, Bernhard Jobst, Eliott Rosenberg, Yuri D. Lensky, Gaurav Gyawali, Norhan Eassa, Melissa Will, Dmitry Abanin, Rajeev Acharya, Laleh Aghababaie Beni, Trond I. Andersen, Markus Ansmann, Frank Arute, Kunal Arya, Abraham Asfaw, Juan Atalaya, Ryan Babbush, Brian Ballard, Joseph C. Bardin, Andreas Bengtsson, et al (172) Lattice gauge theories (LGTs) can be employed to understand a wide range of phenomena, from elementary particle scattering in high-energy physics to effective descriptions of many-body interactions in materials. Studying dynamical properties of emergent phases can be challenging as it requires solving many-body problems that are generally beyond perturbative limits. We investigate the dynamics of local excitations in a $\mathbb{Z}_2$ LGT using a two-dimensional lattice of superconducting qubits. We first construct a simple variational circuit which prepares low-energy states that have a large overlap with the ground state; then we create particles with local gates and simulate their quantum dynamics via a discretized time evolution. As the effective magnetic field is increased, our measurements show signatures of transitioning from deconfined to confined dynamics. For confined excitations, the magnetic field induces a tension in the string connecting them. Our method allows us to experimentally image string dynamics in a (2+1)D LGT from which we uncover two distinct regimes inside the confining phase: for weak confinement the string fluctuates strongly in the transverse direction, while for strong confinement transverse fluctuations are effectively frozen. In addition, we demonstrate a resonance condition at which dynamical string breaking is facilitated. Our LGT implementation on a quantum processor presents a novel set of techniques for investigating emergent particle and string dynamics.
Zhe Ding, Zhousheng Chen, Xiaodong Fan, Weihui Zhang, Jun Fu, Yumeng Sun, Zhi Cheng, Zhiwei Yu, Kai Yang, Yuxin Li, Xing Liu, Pengfei Wang, Ya Wang, Jianhua Jiang, Hualing Zeng, Changgan Zeng, Guosheng Shi, Fazhan Shi, Jiangfeng Du The one-dimensional side gate based on graphene edges shows a significant capability for reducing the channel length of field-effect transistors, further increasing the integration density of semiconductor devices. The nanoscale electric field distribution near the edge sets the physical limit of the effective channel length; however, imaging it under ambient conditions is still lacking, a critical gap for the practical deployment of semiconductor devices. Here, we used scanning nitrogen-vacancy microscopy to investigate the electric field distribution near the edges of single-layer graphene. Real-space scanning maps of photo-charged floating graphene flakes were acquired with a spatial resolution of $\sim$ 10 nm, and the electric edge effect was quantitatively studied by analyzing the NV spin energy level shifts due to the electric Stark effect. Since the graphene flakes are isolated from external electric sources, we developed a theory based on the photo-thermionic effect to explain the charge transfer from graphene to the oxygen-terminated diamond probe with a disordered distribution of charge traps. Real-time tracing of electric fields detected the photo-thermionic emission process and the recombination process of the emitted electrons. This study provides a new perspective on graphene-based one-dimensional gates and opto-electronics with nanoscale real-space imaging and, moreover, offers a novel method to tune the chemical environment of diamond surfaces based on optical charge transfer.
Quantum computers, exploiting quantum parallelism and entanglement, excel at cryptanalysis and big-data processing. However, they are not yet fully developed, and their performance requires further evaluation; results from classical computers, especially for simulating quantum phase transitions, are still needed as references. Two-dimensional frustrated lattice systems can be chosen for studying quantum phase transitions. Currently, significant progress has been made in the study of frustrated square and triangular lattices using classical computers, while research on hexagonal lattices is limited. This paper consists of four parts. The first part introduces the background of quantum computers and the concept of quantum phase transitions, along with the selection of order parameters on hexagonal lattices. The second part elaborates on the ideas of the quantum Monte Carlo algorithm. The third part presents numerical simulations, exploring the impact of different transverse magnetic fields on the order parameters under low-temperature conditions and showing results for various lattice sizes. The fourth part summarizes and looks ahead, comparing the results with those for square and triangular lattices as well as with relevant theoretical analyses.
Rajeev Acharya, Laleh Aghababaie-Beni, Igor Aleiner, Trond I. Andersen, Markus Ansmann, Frank Arute, Kunal Arya, Abraham Asfaw, Nikita Astrakhantsev, Juan Atalaya, Ryan Babbush, Dave Bacon, Brian Ballard, Joseph C. Bardin, Johannes Bausch, Andreas Bengtsson, Alexander Bilmes, Sam Blackwell, Sergio Boixo, Gina Bortoli, et al (229) Quantum error correction provides a path to reach practical quantum computing by combining multiple physical qubits into a logical qubit, where the logical error rate is suppressed exponentially as more qubits are added. However, this exponential suppression only occurs if the physical error rate is below a critical threshold. In this work, we present two surface code memories operating below this threshold: a distance-7 code and a distance-5 code integrated with a real-time decoder. The logical error rate of our larger quantum memory is suppressed by a factor of $\Lambda$ = 2.14 $\pm$ 0.02 when increasing the code distance by two, culminating in a 101-qubit distance-7 code with 0.143% $\pm$ 0.003% error per cycle of error correction. This logical memory is also beyond break-even, exceeding its best physical qubit's lifetime by a factor of 2.4 $\pm$ 0.3. We maintain below-threshold performance when decoding in real time, achieving an average decoder latency of 63 $\mu$s at distance-5 up to a million cycles, with a cycle time of 1.1 $\mu$s. To probe the limits of our error-correction performance, we run repetition codes up to distance-29 and find that logical performance is limited by rare correlated error events occurring approximately once every hour, or 3 $\times$ 10$^9$ cycles. Our results present device performance that, if scaled, could realize the operational requirements of large scale fault-tolerant quantum algorithms.
Qi Zhou, Zi-Hao Mei, Han-Qing Shi, Liang-Liang Guo, Xiao-Yan Yang, Yun-Jie Wang, Xiao-Fan Xu, Cheng Xue, Wei-Cheng Kong, Jun-Chao Wang, Yu-Chun Wu, Zhao-Yun Chen, Guo-Ping Guo Quantum computing holds immense potential for addressing a myriad of intricate challenges, which is significantly amplified when scaled to thousands of qubits. However, a major challenge lies in developing an efficient and scalable quantum control system. To address this, we propose a novel Hierarchical MicroArchitecture (HiMA) designed to facilitate qubit scaling and exploit quantum process-level parallelism. This microarchitecture is based on three core elements: (i) discrete qubit-level drive and readout, (ii) a process-based hierarchical trigger mechanism, and (iii) multiprocessing with a staggered triggering technique to enable efficient quantum process-level parallelism. We implement HiMA as a control system for a 72-qubit tunable superconducting quantum processing unit, serving a public quantum cloud computing platform, which is capable of expanding to 6144 qubits through three-layer cascading. In our benchmarking tests, HiMA achieves up to a 4.89x speedup under a 5-process parallel configuration. Consequently, to the best of our knowledge, we have achieved the highest CLOPS (Circuit Layer Operations Per Second), reaching up to 43,680, across all publicly available platforms.
Continuous-time quantum walks (CTQWs) play a crucial role in quantum computing, especially for designing quantum algorithms. However, how to efficiently implement CTQWs is a challenging issue. In this paper, we study implementation of CTQWs on sparse graphs, i.e., constructing efficient quantum circuits for implementing the unitary operator $e^{-iHt}$, where $H=\gamma A$ ($\gamma$ is a constant and $A$ corresponds to the adjacency matrix of a graph). Our result is, for a $d$-sparse graph with $N$ vertices and evolution time $t$, we can approximate $e^{-iHt}$ by a quantum circuit with gate complexity $(d^3 \|H\| t N \log N)^{1+o(1)}$, compared to the general Pauli decomposition, which scales like $(\|H\| t N^4 \log N)^{1+o(1)}$. For sparse graphs, for instance, $d=O(1)$, we obtain a noticeable improvement. Interestingly, our technique is related to graph decomposition. More specifically, we decompose the graph into a union of star graphs, and correspondingly, the Hamiltonian $H$ can be represented as the sum of some Hamiltonians $H_j$, where each $e^{-iH_jt}$ is a CTQW on a star graph which can be implemented efficiently.
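As a concrete illustration of the decomposition idea, the sketch below (our own toy example, not the paper's circuit construction) greedily decomposes a graph's adjacency matrix into star graphs, so that $H=\gamma A=\sum_j H_j$, and checks that a first-order Trotter product of the star-graph walks $e^{-iH_jt/r}$ approximates the full walk $e^{-iHt}$. The 5-cycle graph, the values of $\gamma$ and $t$, and the Trotter step count $r$ are illustrative choices.

```python
import numpy as np
from scipy.linalg import expm

def star_decomposition(A):
    """Greedily peel off star graphs: pick a vertex, take all of its
    remaining edges as one star, and repeat until no edges are left."""
    A = A.copy()
    n = A.shape[0]
    stars = []
    for c in range(n):
        nbrs = np.nonzero(A[c])[0]
        if len(nbrs) == 0:
            continue
        S = np.zeros_like(A)
        S[c, nbrs] = A[c, nbrs]
        S[nbrs, c] = A[nbrs, c]
        stars.append(S)
        A -= S
    return stars

# Adjacency matrix of a 5-cycle (an illustrative sparse graph).
n = 5
A = np.zeros((n, n))
for v in range(n):
    A[v, (v + 1) % n] = A[(v + 1) % n, v] = 1.0

gamma, t, r = 1.0, 0.5, 64
stars = star_decomposition(A)
assert np.allclose(sum(stars), A)  # the stars sum back to A

# Exact walk vs. first-order Trotter product of the star-graph walks.
U_exact = expm(-1j * gamma * A * t)
step = np.eye(n, dtype=complex)
for S in stars:
    step = step @ expm(-1j * gamma * S * t / r)
U_trotter = np.linalg.matrix_power(step, r)

err = np.linalg.norm(U_exact - U_trotter, 2)
```

In the paper's setting each $e^{-iH_jt}$ would be implemented directly as an efficient quantum circuit on a star graph; here the matrix exponentials merely verify that the decomposition reproduces the full walk up to the usual first-order Trotter error, which shrinks as $r$ grows.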
The intensity correlations due to imperfect modulation during the quantum-state preparation in a measurement-device-independent quantum key distribution (MDI QKD) system compromise its security performance. Therefore, it is crucial to assess the impact of intensity correlations on the practical security of MDI QKD systems. In this work, we propose a theoretical model that quantitatively analyzes the secure key rate of MDI QKD systems under intensity correlations. Furthermore, we apply the theoretical model to a practical MDI QKD system with measured intensity correlations, which shows that the system struggles to generate keys efficiently under this model. We also explore the boundary conditions of intensity correlations to generate secret keys. This study extends the security analysis of intensity correlations to MDI QKD protocols, providing a methodology to evaluate the practical security of MDI QKD systems.
Chuang Zhou, Yang Li, Li Ma, Jie Yang, Wei Huang, Ao Sun, Heng Wang, Yujie Luo, Yong Li, Ziyang Chen, Francis C. M. Lau, Yichen Zhang, Song Yu, Hong Guo, Bingjie Xu An integrated error-correction scheme with high throughput, low frame error rate (FER) and high reconciliation efficiency under low signal-to-noise ratio (SNR) is one of the major bottlenecks to realizing high-performance and low-cost continuous-variable quantum key distribution (CV-QKD). To solve this long-standing problem, a novel two-stage error-correction method with limited precision, suitable for integration given limited on-chip hardware resources while maintaining excellent decoding performance, is proposed and experimentally verified on a commercial FPGA. Compared to state-of-the-art results, the error-correction throughput can be improved by more than one order of magnitude given FER < 0.1 based on the proposed method, where 544.03 Mbps and 393.33 Mbps real-time error correction is achieved for typical code rates of 0.2 and 0.1, respectively. Moreover, compared with the traditional decoding method, the secure key rate (SKR) for CV-QKD under the composable security framework can be improved by 140.09% and 122.03% using the proposed two-stage decoding method for code rates 0.2 and 0.1, supporting 32.70 Mbps and 5.66 Mbps real-time SKR under typical transmission distances of 25 km and 50 km, respectively. These record-breaking results pave the way for large-scale deployment of high-rate integrated CV-QKD systems in metropolitan quantum secure networks.
The large-scale deployment of quantum secret sharing (QSS) in quantum networks is currently challenging due to the requirements for generating and distributing multipartite entangled states. Here we present an efficient source-independent QSS protocol utilizing entangled photon pairs in quantum networks. Through the post-matching method, in which measurement events in the same basis are matched, the key rate is almost independent of the number of participants. In addition, the unconditional security of our QSS against internal and external eavesdroppers can be proved by introducing an equivalent virtual protocol. Our protocol offers significant performance and technical advantages for future quantum networks.
Quantum learning tasks often leverage randomly sampled quantum circuits to characterize unknown systems. An efficient approach known as "circuit reusing," where each circuit is executed multiple times, reduces the cost compared to implementing new circuits. This work investigates the optimal reusing parameter that minimizes the variance of measurement outcomes for a given experimental cost. We establish a theoretical framework connecting the variance of experimental estimators with the reusing parameter R. An optimal R is derived when the implemented circuits and their noise characteristics are known. Additionally, we introduce a near-optimal reusing strategy that is applicable even without prior knowledge of circuits or noise, achieving variances close to the theoretical minimum. To validate our framework, we apply it to randomized benchmarking and analyze the optimal R for various typical noise channels. We further conduct experiments on a superconducting platform, revealing a non-linear relationship between R and the cost, contradicting previous assumptions in the literature. Our theoretical framework successfully incorporates this non-linearity and accurately predicts the experimentally observed optimal R. These findings underscore the broad applicability of our approach to experimental realizations of quantum learning protocols.
We introduce a novel technique for enhancing the robustness of light-pulse atom interferometers against the pulse infidelities that typically limit their sensitivities. The technique uses quantum optimal control to favorably harness the multipath interference of the stray trajectories produced by imperfect atom-optics operations. We apply this method to a resonant atom interferometer and achieve thousand-fold phase amplification, representing a fifty-fold improvement over the performance observed without optimized control. Moreover, we find that spurious interference can arise from the interplay of spontaneous emission and many-pulse sequences and demonstrate optimization strategies to mitigate this effect. Given the ubiquity of spontaneous emission in quantum systems, these results may be valuable for improving the performance of a diverse array of quantum sensors. We anticipate our findings will significantly benefit the performance of matter-wave interferometers for a variety of applications, including dark matter, dark energy, and gravitational wave detection.
Photon triplet generation based on third-order spontaneous parametric down-conversion remains an experimental challenge. The challenge stems from trade-offs between source brightness and instrument noise. This work presents a probability theory of coincidence detection to address the detection limit in source characterization. We use Bayes' theorem to model the instruments as a noisy communication channel and apply statistical inference to identify the minimum detectable coincidence rate. A triplet generation rate of 1-100 Hz is required for source characterization performed over 1-72 hours using superconducting nanowire single-photon detectors.
Quantum digital signatures (QDSs), which distribute and measure quantum states by key generation protocols and then sign messages via classical data processing, are a key area of interest in quantum cryptography. However, the practical implementation of a QDS network has many challenges, including complex interference technical requirements, linear channel loss of quantum state transmission, and potential side-channel attacks on detectors. Here, we propose an asynchronous measurement-device-independent (MDI) QDS protocol with asynchronous two-photon interference strategy and one-time universal hashing method. The two-photon interference approach protects our protocol against all detector side-channel attacks and relaxes the difficulty of experiment implementation, while the asynchronous strategy effectively reduces the equivalent channel loss to its square root. Compared to previous MDI-QDS schemes, our protocol shows several orders of magnitude performance improvements and doubling of transmission distance when processing multi-bit messages. Our findings present an efficient and practical MDI-QDS scheme, paving the way for large-scale data processing with non-repudiation in quantum networks.
Sheng Zhang, Peng Duan, Yun-Jie Wang, Tian-Le Wang, Peng Wang, Ren-Ze Zhao, Xiao-Yan Yang, Ze-An Zhao, Liang-Liang Guo, Yong Chen, Hai-Feng Zhang, Lei Du, Hao-Ran Tao, Zhi-Fei Li, Yuan Wu, Zhi-Long Jia, Wei-Cheng Kong, Zhao-Yun Chen, Yu-Chun Wu, Guo-Ping Guo In the NISQ era, achieving large-scale quantum computing demands compact circuits to mitigate decoherence and gate error accumulation. Quantum operations with diverse degrees of freedom hold promise for circuit compression, but conventional approaches encounter challenges in simultaneously adjusting multiple parameters. Here, we propose a transition composite gate (TCG) scheme grounded in state-selective transition path engineering, enabling more expressive conditional operations. We experimentally validate a controlled unitary (CU) gate as an example, with independent and continuous parameters. By adjusting the parameters of the $\rm X^{12}$ gate, we obtain the CU family with fidelities ranging from 95.2% to 99.0%, characterized by quantum process tomography (QPT). To demonstrate the capability of circuit compression, we use the TCG scheme to prepare 3-qubit Greenberger-Horne-Zeilinger (GHZ) and W states, with fidelities of 96.77% and 95.72%, achieving reductions in circuit depth of about 40% and 44%, respectively, compared with using CZ gates only. Moreover, we show that the short-path TCG (SPTCG) can further reduce the state-preparation circuit time cost. The TCG scheme exhibits advantages in certain quantum circuits and shows significant potential for large-scale quantum algorithms.
Given the limitations on the number of qubits in current NISQ devices, the implementation of large-scale quantum algorithms on such devices is challenging, prompting research into distributed quantum computing. This paper focuses on the issue of excessive communication complexity in distributed quantum computing oriented towards quantum circuits. To reduce the number of quantum state transmissions, i.e., the transmission cost, in distributed quantum circuits, a circuit partitioning method based on the QUBO model is proposed, coupled with the lookahead method for transmission cost optimization. Initially, the problem of distributed quantum circuit partitioning is transformed into a graph minimum cut problem. The QUBO model, which can be accelerated by quantum algorithms, is introduced to minimize the number of quantum gates between QPUs and the transmission cost. Subsequently, the dynamic lookahead strategy for the selection of transmission qubits is proposed to optimize the transmission cost in distributed quantum circuits. Finally, through numerical simulations, the impact of different circuit partitioning indicators on the transmission cost is explored, and the proposed method is evaluated on benchmark circuits. Experimental results demonstrate that the proposed circuit partitioning method has a shorter runtime compared with current circuit partitioning methods. Additionally, the transmission cost optimized by the proposed method is significantly lower than that of current transmission cost optimization methods, achieving noticeable improvements across different numbers of partitions.
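The min-cut-as-QUBO step can be sketched as follows (a minimal toy model of our own, not the paper's exact formulation): qubits of a circuit become graph vertices, edge weights count the two-qubit gates between them, a binary variable $x_i$ assigns qubit $i$ to one of two QPUs, and a quadratic penalty (with a made-up strength `lam`) keeps the partition balanced. A brute-force solver stands in for the quantum or classical QUBO solver.

```python
import itertools
import numpy as np

def cut_qubo(w, lam):
    """Build a QUBO matrix Q such that, for a binary vector x,
    x^T Q x = (cut size) + lam * ((sum_i x_i) - n/2)^2, up to a constant."""
    n = w.shape[0]
    Q = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            # Cut term: w_ij * (x_i + x_j - 2 x_i x_j) is 1 iff the edge crosses.
            Q[i, i] += w[i, j]
            Q[j, j] += w[i, j]
            Q[i, j] += -2 * w[i, j]
    # Balance penalty expanded with x_i^2 = x_i; the constant n^2/4 is dropped.
    for i in range(n):
        Q[i, i] += lam * (1 - n)
        for j in range(i + 1, n):
            Q[i, j] += 2 * lam
    return Q

def solve_brute_force(Q):
    """Exhaustively minimize x^T Q x over binary vectors (toy sizes only)."""
    n = Q.shape[0]
    best_x, best_val = None, np.inf
    for bits in itertools.product([0, 1], repeat=n):
        x = np.array(bits)
        val = x @ Q @ x
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val

# 6-qubit toy circuit: two tightly coupled triangles joined by a single gate.
w = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    w[i, j] = w[j, i] = 1.0

Q = cut_qubo(w, lam=2.0)
x, _ = solve_brute_force(Q)
cut = sum(w[i, j] for i in range(6) for j in range(i + 1, 6) if x[i] != x[j])
```

On this toy instance the minimizer separates the two triangles, cutting only the single joining gate; in the paper's setting the lookahead strategy would then further optimize which qubit states are actually transmitted.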
Solving combinatorial optimization problems using variational quantum algorithms (VQAs) represents one of the most promising applications of the NISQ era. However, the limited trainability of VQAs could hinder their scalability to large problem sizes. In this paper, we improve the trainability of the variational quantum eigensolver (VQE) by utilizing convex interpolation to solve portfolio optimization. The idea is inspired by the observation that the Dicke state possesses an inherent clustering property: in the overall distribution trend, a state with a larger Hamming distance from the ground state intuitively has a greater energy gap from the ground state energy. Based on convex interpolation, the location of the ground state can be estimated by learning the properties of a small subset of basis states in the Hilbert space. This naturally motivates the strategies of close-to-solution initialization, a regular cost-function landscape, and recursive ansatz equilibrium partition. The successful implementation of a $40$-qubit experiment using only $10$ superconducting qubits demonstrates the effectiveness of our proposals. Furthermore, the quantum inspiration has also spurred the development of a prototype greedy algorithm. Extensive numerical simulations indicate that hybridizing the VQE with greedy algorithms achieves mutual complementarity, combining the advantages of both global and local optimization methods. Our proposals can be extended to improve the trainability of solving other large-scale combinatorial optimization problems that are widely used in real applications, paving the way to unleashing the quantum advantages of NISQ computers in the near future.
Quantum digital signatures (QDS), which utilize correlated bit strings among the sender and recipients, guarantee the authenticity, integrity and non-repudiation of classical messages based on quantum laws. Continuous-variable (CV) quantum protocols with heterodyne and homodyne measurement have obvious advantages of low-cost implementation and easy wavelength-division multiplexing. However, security analyses in previous research have been limited to proofs against collective attacks in finite-size scenarios. Moreover, existing multi-bit CV QDS schemes have primarily focused on adapting single-bit protocols for simplicity of the security proof, often sacrificing signature efficiency. Here, we introduce a CV QDS protocol designed to withstand general coherent attacks through the use of a cutting-edge fidelity test function, while achieving high signature efficiency by employing a refined one-time universal hashing signing technique. Our protocol is proved to be robust against finite-size effects and excess noise in quantum channels. Simulation results demonstrate a reduction of over 6 orders of magnitude in signature length for a megabit message signing task compared to existing CV QDS protocols, and this advantage grows with the message size. Our work offers a solution with enhanced security and efficiency, paving the way for large-scale deployment of CV QDS in future quantum networks.
Yuxin Li, Zhe Ding, Chen Wang, Haoyu Sun, Zhousheng Chen, Pengfei Wang, Ya Wang, Ming Gong, Hualing Zeng, Fazhan Shi, Jiangfeng Du Critical fluctuations play a fundamental role in determining the spin orders of low-dimensional quantum materials, especially the recently discovered two-dimensional (2D) magnets. Here we employ the quantum decoherence imaging technique utilizing nitrogen-vacancy centers in diamond to explore the critical magnetic fluctuations and the associated temporal spin noise in the van der Waals magnet $\rm{Fe_{3}GeTe_{2}}$. We show that the critical fluctuation contributes a random magnetic field characterized by the noise spectra, which change dramatically near the critical temperature $T_c$. A theoretical model describing this phenomenon is developed, showing that the spectral density exhibits $1/f$ noise near $T_c$, while away from this point it behaves like white noise. The crossover temperature between these two regimes is determined by varying the distance between the sample and the diamond. This work provides a new way to study critical fluctuations and to extract some of the critical exponents, which may greatly deepen our understanding of criticality in a wide range of physical systems.
Quantum conferencing enables multiple nodes within a quantum network to share a secure group key for private message broadcasting. The key rate, however, is limited by the repeaterless capacity to distribute multiparticle entangled states across the network. Currently, in the finite-size regime, no feasible schemes utilizing existing experimental techniques can overcome the fundamental rate-distance limit of quantum conferencing in quantum networks without repeaters. Here, we propose a practical, multi-field scheme that breaks this limit, involving virtually establishing Greenberger-Horne-Zeilinger states through post-measurement coincidence matching. This proposal features a measurement-device-independent characteristic and can directly scale to support any number of users. Simulations show that the fundamental limitation on the group key rate can be overcome in a reasonable running time of sending $10^{14}$ pulses. We predict that it offers an efficient design for long-distance broadcast communication in future quantum networks.
Xian-He Zhao, Han-Sen Zhong, Feng Pan, Zi-Han Chen, Rong Fu, Zhongling Su, Xiaotong Xie, Chaoxing Zhao, Pan Zhang, Wanli Ouyang, Chao-Yang Lu, Jian-Wei Pan, Ming-Cheng Chen Random quantum circuit sampling serves as a benchmark to demonstrate quantum computational advantage. Recent progress in classical algorithms, especially those based on tensor network methods, has significantly reduced the classical simulation time and challenged the claim of the first-generation quantum advantage experiments. However, in terms of generating uncorrelated samples, time-to-solution, and energy consumption, previous classical simulation experiments still underperform the \textit{Sycamore} processor. Here we report an energy-efficient classical simulation algorithm, using 1432 GPUs to simulate quantum random circuit sampling, which generates uncorrelated samples with a higher linear cross-entropy score and is 7 times faster than the 53-qubit \textit{Sycamore} experiment. We propose a post-processing algorithm to reduce the overall complexity and integrate state-of-the-art high-performance general-purpose GPUs to achieve two orders of magnitude lower energy consumption compared to previous works. Our work provides the first unambiguous experimental evidence to refute \textit{Sycamore}'s claim of quantum advantage, and redefines the boundary of quantum computational advantage using random circuit sampling.
Quantum conference key agreement (QCKA) enables the unconditionally secure distribution of conference keys among multiple participants. Due to challenges in high-fidelity preparation and long-distance distribution of multi-photon entanglement, entanglement-based QCKA faces severe limitations in both key rate and scalability. Here, we propose a source-independent QCKA scheme utilizing the post-matching method, feasible within an entangled-photon-pair distribution network. We introduce an equivalent protocol that virtually distributes multi-photon entanglement to provide an unconditional security proof, even in the case of coherent attacks. For the symmetric star network, compared with the previous $n$-photon entanglement protocol, the conference key rate is improved from $O(\eta^{n})$ to $O(\eta^{2})$, where $\eta$ is the transmittance from the entanglement source to one participant. Simulation results show that the performance of our protocol has advantages of multiple orders of magnitude at intercity distances. We anticipate that our approach will demonstrate its potential in the implementation of quantum networks.
Quantum conference key agreement facilitates secure communication among multiple parties through multipartite entanglement and is anticipated to be an important cryptographic primitive for future quantum networks. However, the experimental complexity and low efficiency associated with the synchronous detection of multipartite entangled states have significantly hindered their practical application. In this work, we propose a measurement-device-independent conference key agreement protocol that utilizes asynchronous Greenberger-Horne-Zeilinger state measurement. This approach achieves a linear scaling of the conference key rate among multiple parties, exhibiting performance similar to that of the single-repeater scheme in quantum networks. The asynchronous measurement strategy bypasses the need for complex global phase-locking technologies, while extending the intercity transmission distance with composable security in the finite-key regime. Additionally, our work showcases the advantages of the asynchronous pairing concept in multiparty quantum entanglement.
Great progress has been made in quantum computing in recent years, providing opportunities to overcome the scarcity of computational resources in many scientific computations such as computational fluid dynamics (CFD). In this work, we exploit the potential of quantum computing for CFD and propose a hybrid classical-quantum computing CFD framework to release the power of current quantum computing. In this framework, traditional CFD solvers are coupled with quantum linear algebra libraries in weak form to achieve collaborative computation between classical and quantum computing. The quantum linear solver provides high-precision solutions and scalable problem sizes for linear systems, and is designed to be easily callable for solving linear algebra systems, similar to classical linear libraries, thus enabling seamless integration into existing CFD solvers. Several typical cases are performed to validate the feasibility of the proposed framework and the correctness of quantum linear algorithms in CFD.
Satellite-to-ground quantum communication constitutes the cornerstone of the global quantum network, heralding the future of quantum information. Continuous-variable quantum key distribution is a strong candidate for space-ground quantum communication due to its simplicity, stability, and ease of implementation, especially its robustness to background light noise in space. Recently, the discrete-modulated continuous-variable protocol has garnered increased attention, owing to its lower implementation requirements, acceptable secure key rate, and pronounced compatibility with existing infrastructure. Here, we derive key rates for discrete-modulated continuous-variable quantum key distribution protocols in free-space channel environments under various conditions through numerical simulation, revealing the viability of its application in satellite-to-ground communication.
Multipartite entanglement is one of the crucial resources in quantum information processing tasks such as quantum metrology, quantum computing, and quantum communications. It is essential to verify not only multipartite entanglement but also the entanglement structure, both in fundamental theories and in the applications of quantum information technologies. However, detecting entanglement structures, including entanglement depth, entanglement intactness, and entanglement stretchability, has proven challenging, especially for general states and large-scale quantum systems. Using partitions of the tensor product space, we propose a systematic method to construct powerful entanglement witnesses that better identify multipartite entanglement structures. In addition, an efficient algorithm using semi-definite programming and a gradient descent algorithm are designed to detect the entanglement structure from the inner polytope of the convex set containing all states with the same entanglement structure. We demonstrate with detailed examples that our criteria perform better than other known ones. Our results may be applied to many quantum information processing tasks.
Zhao-Yun Chen, Teng-Yang Ma, Chuang-Chao Ye, Liang Xu, Ming-Yang Tan, Xi-Ning Zhuang, Xiao-Fan Xu, Yun-Jie Wang, Tai-Ping Sun, Yong Chen, Lei Du, Liang-Liang Guo, Hai-Feng Zhang, Hao-Ran Tao, Tian-Le Wang, Xiao-Yan Yang, Ze-An Zhao, Peng Wang, Sheng Zhang, Chi Zhang, et al (12) Quantum computational fluid dynamics (QCFD) offers a promising alternative to classical computational fluid dynamics (CFD) by leveraging quantum algorithms for higher efficiency. This paper introduces a comprehensive QCFD method, including an iterative method, "Iterative-QLS", that suppresses errors in the quantum linear solver, and a subspace method to scale the solution to a larger size. We implement our method on a superconducting quantum computer, demonstrating successful simulations of steady Poiseuille flow and unsteady acoustic wave propagation. The Poiseuille flow simulation achieved a relative error of less than $0.2\%$, and the unsteady acoustic wave simulation solved a 5043-dimensional matrix. We emphasize the utilization of the quantum-classical hybrid approach in applications of near-term quantum computers. By adapting to quantum hardware constraints and offering scalable solutions for large-scale CFD problems, our method paves the way for practical applications of near-term quantum computers in computational science.
Quantum machine learning has demonstrated significant potential in solving practical problems, particularly in statistics-focused areas such as data science and finance. However, challenges remain in preparing and learning statistical models on a quantum processor due to issues with trainability and interpretability. In this letter, we utilize the maximum entropy principle to design a statistics-informed parameterized quantum circuit (SI-PQC) for the efficient preparation and training of quantum computational statistical models, including arbitrary distributions and their weighted mixtures. The SI-PQC features a static structure with trainable parameters, enabling in-depth optimized circuit compilation, exponential reductions in resource and time consumption, and improved trainability and interpretability for learning quantum states and classical model parameters simultaneously. As an efficient subroutine for preparation and learning in various quantum algorithms, the SI-PQC addresses the input bottleneck and facilitates the injection of prior knowledge.
Huan-Yu Liu, Xiaoshui Lin, Zhao-Yun Chen, Cheng Xue, Tai-Ping Sun, Qing-Song Li, Xi-Ning Zhuang, Yun-Jie Wang, Yu-Chun Wu, Ming Gong, Guo-Ping Guo The rapid development of quantum computers has enabled demonstrations of quantum advantages on various tasks. However, real quantum systems are always dissipative due to their inevitable interaction with the environment, and the resulting non-unitary dynamics make quantum simulation challenging with only unitary quantum gates. In this work, we present an innovative and scalable method to simulate open quantum systems using quantum computers. We define an adjoint density matrix as a counterpart of the true density matrix, which reduces to a mixed-unitary quantum channel and thus can be effectively sampled using quantum computers. This method has several benefits, including no need for auxiliary qubits and noteworthy scalability. Moreover, accurate long-time simulation can also be achieved as the adjoint density matrix and the true dissipated one converge to the same state. Finally, we present deployments of this theory in the dissipative quantum $XY$ model for the evolution of correlation and entropy with short-time dynamics and the disordered Heisenberg model for many-body localization with long-time dynamics. This work promotes the study of real-world many-body dynamics with quantum computers, highlighting the potential to demonstrate practical quantum advantages.
Trond I. Andersen, Nikita Astrakhantsev, Amir H. Karamlou, Julia Berndtsson, Johannes Motruk, Aaron Szasz, Jonathan A. Gross, Alexander Schuckert, Tom Westerhout, Yaxing Zhang, Ebrahim Forati, Dario Rossi, Bryce Kobrin, Agustin Di Paolo, Andrey R. Klots, Ilya Drozdov, Vladislav D. Kurilovich, Andre Petukhov, Lev B. Ioffe, Andreas Elben, et al (207) Understanding how interacting particles approach thermal equilibrium is a major challenge of quantum simulators. Unlocking the full potential of such systems toward this goal requires flexible initial state preparation, precise time evolution, and extensive probes for final state characterization. We present a quantum simulator comprising 69 superconducting qubits which supports both universal quantum gates and high-fidelity analog evolution, with performance beyond the reach of classical simulation in cross-entropy benchmarking experiments. Emulating a two-dimensional (2D) XY quantum magnet, we leverage a wide range of measurement techniques to study quantum states after ramps from an antiferromagnetic initial state. We observe signatures of the classical Kosterlitz-Thouless phase transition, as well as strong deviations from Kibble-Zurek scaling predictions attributed to the interplay between quantum and classical coarsening of the correlated domains. This interpretation is corroborated by injecting variable energy density into the initial state, which enables studying the effects of the eigenstate thermalization hypothesis (ETH) in targeted parts of the eigenspectrum. Finally, we digitally prepare the system in pairwise-entangled dimer states and image the transport of energy and vorticity during thermalization. These results establish the efficacy of superconducting analog-digital quantum processors for preparing states across many-body spectra and unveiling their thermalization dynamics.
Fault-tolerant quantum computing (FTQC) is essential for achieving large-scale practical quantum computation. Implementing arbitrary FTQC requires the execution of a universal gate set on logical qubits, which is highly challenging. In particular, in superconducting systems, two-qubit gates on surface code logical qubits have not been realized. Here, we experimentally implement a logical CNOT gate as well as arbitrary single-qubit rotation gates on distance-2 surface codes using the superconducting quantum processor \textit{Wukong}, thereby demonstrating a universal logical gate set. In the experiment, we design encoding circuits to prepare the required logical states, where the fidelities of the fault-tolerantly prepared logical states surpass those of the physical states. Furthermore, we demonstrate the transversal CNOT gate between two logical qubits and fault-tolerantly prepare four logical Bell states, all with fidelities exceeding those of Bell states on the physical qubits. Using the logical CNOT gate and an ancilla logical state, an arbitrary single-qubit rotation gate is implemented through gate teleportation. All logical gates are characterized on a complete state set and their fidelities are evaluated by logical Pauli transfer matrices. The implementation of a universal logical gate set and entangled logical states beyond physical fidelity marks a significant step towards FTQC on superconducting quantum processors.
Fighting against noise is crucial for NISQ devices to demonstrate practical quantum applications. In this work, we give a new paradigm of quantum error mitigation based on the vectorization of density matrices. Different from existing quantum error mitigation methods, which try to distill noiseless information from noisy quantum states, our proposal directly changes the way information is encoded and maps the density matrices of noisy quantum states to noiseless pure states, realized by a novel and NISQ-friendly measurement protocol and a classical post-processing procedure. Our protocol requires no knowledge of the noise model, no ability to tune the noise strength, and no ancilla qubits for complicated controlled unitaries. Under our encoding, NISQ devices always prepare pure quantum states, which are highly desired resources for variational quantum algorithms to perform well in many tasks. We show how this protocol fits naturally into variational quantum algorithms. We give several concrete ansatz constructions suitable for our proposal and provide theoretical analysis of the sampling complexity, expressibility, and trainability. We also discuss how this protocol is influenced by large noise and how it can be combined with other quantum error mitigation protocols. The effectiveness of our proposal is demonstrated by various numerical experiments.
The standard approach to universal fault-tolerant quantum computing is to develop a general purpose quantum error correction mechanism that can implement a universal set of logical gates fault-tolerantly. Given such a scheme, any quantum algorithm can be realized fault-tolerantly by composing the relevant logical gates from this set. However, we know that quantum computers provide a significant quantum advantage only for specific quantum algorithms. Hence, a universal quantum computer can likely gain from compiling such specific algorithms using tailored quantum error correction schemes. In this work, we take the first steps towards such algorithm-tailored quantum fault-tolerance. We consider Trotter circuits in quantum simulation, which is an important application of quantum computing. We develop a solve-and-stitch algorithm to systematically synthesize physical realizations of Clifford Trotter circuits on the well-known $[\![ n,n-2,2 ]\!]$ error-detecting code family. Our analysis shows that this family implements Trotter circuits with optimal depth, thereby serving as an illuminating example of tailored quantum error correction. We achieve fault-tolerance for these circuits using flag gadgets, which add minimal overhead. The solve-and-stitch algorithm has the potential to scale beyond this specific example and hence provide a principled approach to tailored fault-tolerance in quantum computing.
We study the complexity of estimating the partition function $\mathsf{Z}(\beta)=\sum_{x\in\chi} e^{-\beta H(x)}$ for a Gibbs distribution characterized by the Hamiltonian $H(x)$. We provide a simple and natural lower bound for quantum algorithms that solve this task by relying on reflections through the coherent encoding of Gibbs states. Our primary contribution is a $\varOmega(1/\epsilon)$ lower bound for the number of reflections needed to estimate the partition function with a quantum algorithm. The proof is based on a reduction from the problem of estimating the Hamming weight of an unknown binary string.
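For a small system the partition function in question can be evaluated by brute force; the sketch below uses the Hamming-weight Hamiltonian that underlies the reduction mentioned in the abstract (system size and inverse temperature are illustrative choices, not from the paper):

```python
import math
import itertools

def partition_function(beta: float, h, n: int) -> float:
    """Z(beta) = sum over all x in {0,1}^n of exp(-beta * H(x))."""
    return sum(math.exp(-beta * h(x)) for x in itertools.product((0, 1), repeat=n))

# Hamming-weight Hamiltonian H(x) = |x| (number of ones), as in the reduction.
hamming = sum
z = partition_function(1.0, hamming, n=3)
# For this H the sum factorizes: Z = (1 + e^{-beta})^n.
assert abs(z - (1 + math.exp(-1.0)) ** 3) < 1e-12
```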
The emerging field of free-electron quantum optics enables electron-photon entanglement and holds the potential for generating nontrivial photon states for quantum information processing. Although recent experimental studies have entered the quantum regime, rapid theoretical developments predict that qualitatively unique phenomena only emerge beyond a certain interaction strength. It is thus pertinent to identify the maximal electron-photon interaction strength and the materials, geometries, and particle energies that enable one to approach it. We derive an upper limit to the quantum vacuum interaction strength between free electrons and single-mode photons, which illuminates the conditions for the strongest interaction. Crucially, we obtain an explicit energy selection recipe for electrons and photons to achieve maximal interaction at arbitrary separations and identify two optimal regimes favoring either fast or slow electrons over those with intermediate velocities. We validate the limit by analytical and numerical calculations on canonical geometries and provide near-optimal designs indicating the feasibility of strong quantum interactions. Our findings offer fundamental intuition for maximizing the quantum interaction between free electrons and photons and provide practical design rules for future experiments on electron-photon and electron-mediated photon-photon entanglement. They should also enable the evaluation of key metrics for applications such as the maximum power of free-electron radiation sources and the maximum acceleration gradient of dielectric laser accelerators.
The Hilbert-Pólya conjecture asserts that the imaginary parts of the nontrivial zeros of the Riemann zeta function (the Riemann zeros) are the eigenvalues of a self-adjoint operator (a quantum mechanical Hamiltonian, in the physical sense), as a promising approach to prove the Riemann hypothesis (cf.~\cite{SH2011}). Instead of the eigenvalues, in this paper we consider observable-geometric phases as the realization of the Riemann zeros in a periodically driven quantum system; these phases were introduced in \cite{Chen2020} for the study of geometric quantum computation. To this end, we further introduce the notion of non-Abelian observable-geometric phases, by means of which we give an approach to finding a physical system to study the Riemann zeros. Since the observable-geometric phases are connected with the geometry of the observable space according to the evolution of the Heisenberg equation, this sheds some light on the investigation of the Riemann hypothesis.
Long-range and anisotropic dipolar interactions induce complex order in quantum systems. This becomes particularly interesting in two dimensions (2D), where superfluidity with quasi-long-range order emerges via the Berezinskii-Kosterlitz-Thouless (BKT) mechanism, which remains elusive with dipolar interactions. Here, we observe the BKT transition from a normal gas to the superfluid phase in a quasi-2D dipolar Bose gas of erbium atoms. Controlling the orientation of the dipoles, we characterize the transition point by monitoring extended coherence and measuring the equation of state. This allows us to gain a systematic understanding of the BKT transition based on an effective short-range description of the dipolar interaction in 2D. Additionally, we observe anisotropic density fluctuations and non-local effects in the superfluid regime, which establishes the dipolar nature of the 2D superfluid. Our results lay the groundwork for understanding the behavior of dipolar bosons in 2D and open up opportunities for examining complex orders in a dipolar superfluid.
We observe non-perturbative high harmonic generation in solids driven by a macroscopic quantum state of light, bright squeezed vacuum (BSV), which we generate in a single spatiotemporal mode. The BSV-driven process is considerably more efficient in the generation of high harmonics than classical light of the same mean intensity. Due to its broad photon-number distribution, covering states from $0$ to $2 \times 10^{13}$ photons per pulse, and sub-cycle electric field fluctuations over $\pm 1$ V/Å, BSV provides access to free carrier dynamics within a much broader range of peak intensities than accessible with classical light. Our findings contribute to recent developments of quantum optics with extreme intensities, moving beyond its traditional focus on low photon numbers, and providing a new method for exploring extreme nonlinearities in solids.
Xu Jing, Cheng Qian, Chen-Xun Weng, Bing-Hong Li, Zhe Chen, Chen-Quan Wang, Jie Tang, Xiao-Wen Gu, Yue-Chan Kong, Tang-Sheng Chen, Hua-Lei Yin, Dong Jiang, Bin Niu, Liang-Liang Lu Quantum communication networks are crucial for both secure communication and cryptographic networked tasks. Building quantum communication networks in a scalable and cost-effective way is essential for their widespread adoption, and a stable, miniaturized, high-quality quantum light source is a key component. Here, we establish a complete polarization entanglement-based fully connected network, which features an ultrabright integrated Bragg reflection waveguide quantum source, managed by an untrusted service provider, and a streamlined polarization analysis module, which requires only one single-photon detector for each end user. We perform continuously working quantum entanglement distribution and create correlated bit strings between users. Within the framework of one-time universal hashing, we provide the first experimental implementation of source-independent quantum digital signatures using imperfect keys, circumventing the necessity for privacy amplification. More importantly, we further beat the 1/3 fault-tolerance bound in Byzantine agreement, achieving unconditional security without relying on sophisticated techniques. Our results offer an affordable and practical route for addressing consensus challenges within the emerging quantum network landscape.
Probabilistic machine learning utilizes controllable sources of randomness to encode uncertainty and enable statistical modeling. Harnessing the pure randomness of quantum vacuum noise, which stems from fluctuating electromagnetic fields, has shown promise for high speed and energy-efficient stochastic photonic elements. Nevertheless, photonic computing hardware which can control these stochastic elements to program probabilistic machine learning algorithms has been limited. Here, we implement a photonic probabilistic computer consisting of a controllable stochastic photonic element - a photonic probabilistic neuron (PPN). Our PPN is implemented in a bistable optical parametric oscillator (OPO) with vacuum-level injected bias fields. We then program a measurement-and-feedback loop for time-multiplexed PPNs with electronic processors (FPGA or GPU) to solve certain probabilistic machine learning tasks. We showcase probabilistic inference and image generation of MNIST-handwritten digits, which are representative examples of discriminative and generative models. In both implementations, quantum vacuum noise is used as a random seed to encode classification uncertainty or probabilistic generation of samples. In addition, we propose a path towards an all-optical probabilistic computing platform, with an estimated sampling rate of ~ 1 Gbps and energy consumption of ~ 5 fJ/MAC. Our work paves the way for scalable, ultrafast, and energy-efficient probabilistic machine learning hardware.
It is known that two-dimensional two-component fundamental solitons of the semi-vortex (SV) type, with vorticities $(s_{+},s_{-})=(0,1)$ in their components, are stable ground states (GSs) in the spin-orbit-coupled (SOC) binary Bose-Einstein condensate with the contact self-attraction acting in both components, in spite of the possibility of the critical collapse in the system. However, excited states (ESs) of the SV solitons, with the vorticity set $(s_{+},s_{-})=(S_{+},S_{+}+1)$ and $S_{+}=1,2,3,\ldots$, are unstable in the same system. We construct ESs of SV solitons in the SOC system with opposite signs of the self-interaction in the two components. The main finding is the stability of the ES-SV solitons, with the extra vorticity (at least) up to $S_{+}=6$. The threshold value of the norm for the onset of the critical collapse, $N_{\mathrm{thr}}$, in these excited states is higher than the commonly known critical value, $N_{c}\approx 5.85$, associated with the single-component Townes solitons, with $N_{\mathrm{thr}}$ increasing with the growth of $S_{+}$. A velocity interval for stable motion of the GS-SV solitons is found as well. The results suggest a solution to the challenging problem of the creation of stable vortex solitons with high topological charges.
The field of quantum deep learning presents significant opportunities for advancing computational capabilities, yet it faces a major obstacle in the form of the "information loss problem" due to the inherent limitations of the necessary quantum tomography in scaling quantum deep neural networks. This paper introduces an end-to-end Quantum Vision Transformer (QViT), which incorporates an innovative quantum residual connection technique, to overcome these challenges and therefore optimize quantum computing processes in deep learning. Our thorough complexity analysis of the QViT reveals a theoretically exponential and empirically polynomial speedup, showcasing the model's efficiency and potential in quantum computing applications. We conducted extensive numerical tests on modern, large-scale transformers and datasets, establishing the QViT as a pioneering advancement in applying quantum deep neural networks in practical scenarios. Our work provides a comprehensive quantum deep learning paradigm, which not only demonstrates the versatility of current quantum linear algebra algorithms but also promises to enhance future research and development in quantum deep learning.
Conical intersections (CIs) are pivotal in many photochemical processes. Traditional quantum chemistry methods, such as the state-average multi-configurational methods, face computational hurdles in solving the electronic Schrödinger equation within the active space on classical computers. While quantum computing offers a potential solution, its feasibility in studying CIs, particularly on real quantum hardware, remains largely unexplored. Here, we present the first successful realization of a hybrid quantum-classical state-average complete active space self-consistent field method based on the variational quantum eigensolver (VQE-SA-CASSCF) on a superconducting quantum processor. This approach is applied to investigate CIs in two prototypical systems - ethylene (C2H4) and triatomic hydrogen (H3). We illustrate that VQE-SA-CASSCF, coupled with ongoing hardware and algorithmic enhancements, can lead to a correct description of CIs on existing quantum devices. These results lay the groundwork for exploring the potential of quantum computing to study CIs in more complex systems in the future.
Arrays of neutral atoms have emerged as promising platforms for quantum computing. Realizing robust high-fidelity two-qubit gates is currently an important task for large-scale operations. In this paper, we present a convenient approach for implementing a two-qubit controlled-phase gate using the Rydberg blockade. We achieve noncyclic geometric control with a single modulated pulse. Compared with control schemes based on cyclic evolution determined by dynamical parameters, the robustness of our proposal against systematic errors is remarkably improved due to its geometric character. Importantly, the noncyclic geometric control reduces the gate time for small rotation angles and is less sensitive to decoherence effects. We accelerate the adiabatic control with the aid of shortcuts to adiabaticity to further shorten the operation time. We apply our protocol to the quantum Fourier transform algorithm to demonstrate the actual acceleration. The proposed scheme thus provides analytical waveforms for arbitrary two-qubit gates and may find important use in atomic-array experiments.
Designing quantum systems with the measurement speed and accuracy needed for quantum error correction using superconducting qubits requires iterative design and test informed by accurate models and characterization tools. We introduce a single protocol, with few prerequisite calibrations, which measures the dispersive shift, resonator linewidth, and drive power used in the dispersive readout of superconducting qubits. We find that the resonator linewidth is poorly controlled with a factor of 2 between the maximum and minimum measured values, and is likely to require focused attention in future quantum error correction experiments. We also introduce a protocol for measuring the readout system efficiency using the same power levels as are used in typical qubit readout, and without the need to measure the qubit coherence. We routinely run these protocols on chips with tens of qubits, driven by automation software with little human interaction. Using the extracted system parameters, we find that a model based on those parameters predicts the readout signal to noise ratio to within 10% over a device with 54 qubits.
In this work, we give a quantum algorithm for solving the ground states of a class of Hamiltonians. The mechanism of the exponential speedup in our algorithm comes from dissipation in open quantum systems. To utilize the dissipation, the central idea is to treat $n$-qubit density matrices $\rho$ as $2n$-qubit pure states $|\rho\rangle$ by vectorization and normalization. By doing so, the Lindblad master equation (LME) becomes a Schrödinger equation with a non-Hermitian Hamiltonian $L$. The steady state $\rho_{ss}$ of the LME therefore corresponds to the ground state $|\rho_{ss}\rangle$ of Hamiltonians of the form $L^\dag L$. The runtime of the LME has no dependence on the overlap $\zeta$ between the initial state and the ground state, in contrast to the Heisenberg scaling $\mathcal{O}(\zeta^{-1})$ of other algorithms. For the input part, given a Hamiltonian $H$, under plausible assumptions, we give a polynomial-time classical procedure to decide whether there exists an $L$ such that $H-E_0=L^\dag L$ and, if so, to find it. For the output part, we define the task as estimating expectation values of arbitrary operators with respect to the ground state $|\rho_{ss}\rangle$, which can be done, surprisingly, by an efficient measurement protocol on $\rho_{ss}$ with no need to prepare $|\rho_{ss}\rangle$. We give several pieces of evidence for the quantum hardness of actually preparing $|\rho_{ss}\rangle$, which indicates a potential complexity separation between our algorithm and projection-based quantum algorithms such as quantum phase estimation. Further, we show that the Hamiltonians that can be efficiently solved by our algorithm contain classically hard instances, assuming $\text{P}\neq \text{BQP}$. Finally, we discuss and analyze several important aspects of the algorithm, including its generalization to other types of Hamiltonians and the "non-linear" dynamics in the algorithm.
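The vectorization-and-normalization step described here is easy to illustrate classically; a minimal sketch (the example density matrix is hypothetical, chosen only to show the mapping):

```python
import numpy as np

def vectorize(rho: np.ndarray) -> np.ndarray:
    """Map an n-qubit density matrix rho to a normalized 2n-qubit pure state |rho>."""
    v = rho.reshape(-1)           # stack the matrix entries into a single vector
    return v / np.linalg.norm(v)  # normalize so |rho> is a valid pure state

# Example: a single-qubit mixed state (hypothetical).
rho = np.array([[0.75, 0.25],
                [0.25, 0.25]])
ket = vectorize(rho)
assert np.isclose(np.linalg.norm(ket), 1.0)  # |rho> has unit norm
```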
The system-bath entanglement theorem (SBET) was established in terms of linear response functions [J. Chem. Phys. 152, 034102 (2020)] and generalized to correlation functions [arXiv:2312.13618 (2023)] in our previous works. This theorem connects the entangled system-bath properties to those of the local system and bare bath. In this work, we first extend the SBET to field-dressed conditions with multiple bosonic Gaussian environments at different temperatures. Not only the system but also the environments are considered to be optically polarizable, as in reality. With the aid of the extended SBET developed here, for the evaluation of nonlinear spectroscopy such as pump-probe spectra, the entangled system-bath contributions can be obtained from reduced system evolutions via certain quantum dissipative methods. The extended SBET in the field-free condition and its counterpart in the classical limit are also presented. The SBET for fermionic environments is elaborated within transport scenarios for completeness.
A full-fledged quantum network relies on the formation of entangled links between remote locations with the help of quantum repeaters. The famous Duan-Lukin-Cirac-Zoller quantum repeater protocol is based on long-distance single-photon interference, which not only requires high phase stability but also cannot generate maximally entangled states. Here, we propose a quantum repeater protocol using the idea of post-matching, which retains the same efficiency as the single-photon interference protocol, reduces the phase-stability requirement, and can in principle generate maximally entangled states. We also outline an implementation of our scheme based on the Kerr nonlinear resonator. Numerical simulations show that our protocol is superior to existing protocols under a generic noise model and demonstrate the feasibility of building a large-scale quantum communication network with our scheme. We believe our work represents a crucial step towards the construction of a fully connected quantum network.
Zhi-Wei Han, Jia-Hao Liang, Zhao-Xin Fu, Hong-Zhi Liu, Zi-Yuan Chen, Meng Wang, Ze-Rui He, Jia-Yi Huang, Qing-Xian Lv, Kai-Yu Liao, Yan-Xiong Du The braiding operations of quantum states have attracted substantial attention due to their great potential for realizing topological quantum computations. In this paper, we show that a three-fold degenerate eigensubspace can be obtained in a four-level Hamiltonian, which constitutes the minimal physical system for such braiding. Braiding operations are proposed for dressed states in this subspace. The topology of the braiding diagram can be characterized through physical methods once sequential braiding pulses are adopted. We establish an equivalence function between the permutation group and the output states, where different output states correspond to different values of the function. A topological transition of the braiding occurs when two operations overlap, which is detectable through measurement of this function. Combined with the phase-variation method, we can analyze the winding pattern of the braiding. The experimentally feasible system therefore provides a platform to investigate braiding dynamics, SU(3) physics, and qutrit gates.
The quantum kicked rotor is a paradigmatic model system in quantum physics. As a driven quantum system, it is used to study the transition from the classical to the quantum world and to elucidate the emergence of chaos and diffusion. In contrast to its classical counterpart, it features dynamical localization, specifically Anderson localization in momentum space. The interacting many-body kicked rotor is believed to break localization, as recent experiments suggest. Here, we present evidence for many-body dynamical localization for the Lieb-Liniger version of the many-body quantum kicked rotor. After some initial evolution, the momentum distribution of interacting quantum-degenerate bosonic atoms in one-dimensional geometry, kicked hundreds of times by means of a pulsed sinusoidal potential, stops spreading. We quantify the arrested evolution by analysing the energy and the information entropy of the system as the interaction strength is tuned. In the limiting cases of vanishing and strong interactions, the first-order correlation function exhibits a very different decay behavior. Our results shed light on the boundary between the classical, chaotic world and the realm of quantum physics.
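The dynamical localization described above can be illustrated in the non-interacting limit with a short numerical sketch. The snippet below (parameter values are illustrative, not those of the experiment) iterates the standard Floquet map of the single-particle quantum kicked rotor and records the kinetic energy after each kick; instead of growing diffusively as in the classical chaotic rotor, the energy saturates.

```python
import numpy as np

def kicked_rotor_energies(K=5.0, hbar_eff=2.89, n_kicks=200, N=2048):
    """Floquet evolution of the single-particle quantum kicked rotor.

    One period applies a kick exp(-i K cos(theta)/hbar_eff) in angle space
    followed by free evolution exp(-i hbar_eff m^2 / 2) in momentum space.
    Returns the kinetic energy <p^2>/2 after each kick (dimensionless units;
    hbar_eff = 2.89 is a generic, non-resonant effective Planck constant).
    """
    theta = 2 * np.pi * np.arange(N) / N
    m = np.fft.fftfreq(N, d=1.0 / N)            # integer momenta, FFT ordering
    psi = np.ones(N, dtype=complex) / np.sqrt(N)  # uniform state = p = 0
    kick = np.exp(-1j * K * np.cos(theta) / hbar_eff)
    free = np.exp(-1j * hbar_eff * m**2 / 2)
    energies = []
    for _ in range(n_kicks):
        psi = kick * psi                         # kick acts in angle space
        phi = free * np.fft.fft(psi)             # free flight in momentum space
        prob = np.abs(phi) ** 2
        prob /= prob.sum()
        energies.append(0.5 * hbar_eff**2 * np.sum(m**2 * prob))
        psi = np.fft.ifft(phi)
    return np.array(energies)

# Dynamical localization: the energy stays far below the classical
# diffusive estimate ~ K^2 * n_kicks / 4.
E = kicked_rotor_energies()
```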
Quantum network correlations play significant roles in long-distance quantum communication, quantum cryptography, and distributed quantum computing. Generally, it is very difficult to characterize multipartite quantum network correlations such as nonlocality, entanglement, and steering. In this paper, we propose network and genuine network quantum steering models from the perspective of probabilities in star network configurations. Linear and nonlinear inequalities are derived to detect network and genuine network quantum steering when the central party performs one fixed measurement. We show that our criteria can detect more quantum network steering than the violation of the $n$-locality quantum network inequalities. Moreover, it is shown that biseparable assemblages can demonstrate genuine network steering in star network configurations.
Crosstalk represents a formidable obstacle in quantum computing. When quantum gates are executed in parallel, the resonance of qubit frequencies can lead to residual coupling, compromising gate fidelity. Existing crosstalk solutions encounter difficulties in mitigating crosstalk and decoherence when dealing with parallel two-qubit gates on frequency-tunable quantum chips. Inspired by the physical properties of frequency-tunable quantum chips, we introduce a Crosstalk-Aware Mapping and gatE Scheduling (CAMEL) approach to address these challenges. CAMEL aims to mitigate crosstalk between parallel two-qubit gates while suppressing decoherence. Utilizing the features of the tunable coupler, CAMEL integrates a pulse-compensation method for crosstalk mitigation. Furthermore, we present a two-step compilation framework. First, we devise a qubit mapping approach that accounts for both crosstalk and decoherence. Second, we introduce a gate timing scheduling approach that prioritizes the execution of the largest set of crosstalk-free parallel gates to shorten quantum circuit execution times. Evaluation results demonstrate the effectiveness of CAMEL in mitigating crosstalk compared with crosstalk-agnostic methods; in contrast to approaches that serialize crosstalk-prone gates, CAMEL also successfully suppresses decoherence. Finally, CAMEL exhibits better performance than dynamic-frequency-aware methods on low-complexity hardware.
We initiate the study of utilizing Quantum Langevin Dynamics (QLD) to solve optimization problems, particularly those non-convex objective functions that present substantial obstacles for traditional gradient descent algorithms. Specifically, we examine the dynamics of a system coupled with an infinite heat bath. This interaction induces both random quantum noise and a deterministic damping effect on the system, which together nudge the system towards a steady state that hovers near the global minimum of the objective function. We theoretically prove the convergence of QLD in convex landscapes, demonstrating that the average energy of the system approaches zero in the low-temperature limit with an exponential decay rate correlated with the evolution time. Numerically, we first show the energy dissipation capability of QLD by retracing its origins to spontaneous emission. Furthermore, we discuss the impact of each parameter in detail. Finally, based on observations from comparing QLD with the classical Fokker-Planck-Smoluchowski equation, we propose a time-dependent QLD by making the temperature and $\hbar$ time-dependent parameters, which can be theoretically proven to converge better than the time-independent case and also outperforms a series of state-of-the-art quantum and classical optimization algorithms on many non-convex landscapes.
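The damping-plus-noise mechanism described above has a familiar classical counterpart, which can be sketched in a few lines (this is plain classical overdamped Langevin dynamics with an annealed temperature, shown only as a point of reference, not the quantum algorithm of this work; the double-well objective, step size, and cooling schedule are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def langevin_minimize(grad, x0, eta=0.01, T0=1.0, steps=5000):
    """Overdamped Langevin dynamics with a simple annealing schedule.

    Update: x <- x - eta*grad(x) + sqrt(2*eta*T_k)*xi, xi ~ N(0, 1).
    The gradient (damping) term pulls toward minima, the thermal noise
    allows escapes from shallow basins, and cooling T_k -> 0 concentrates
    the walker near a deep minimum.
    """
    x = float(x0)
    for k in range(steps):
        T = T0 / (1 + 0.01 * k)   # illustrative cooling schedule
        x += -eta * grad(x) + np.sqrt(2 * eta * T) * rng.standard_normal()
    return x

# Double well f(x) = (x^2 - 1)^2 + 0.3*x, started in the shallower basin.
grad = lambda x: 4 * x * (x**2 - 1) + 0.3
x_star = langevin_minimize(grad, x0=1.0)
# By the end of the cooling schedule the walker sits near a well bottom.
```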
Independent component analysis (ICA) is a fundamental data processing technique that decomposes captured signals into components that are as independent as possible. Computing the contrast function, which serves as a measure of the independence of signals, is vital in the separation process of ICA. This paper presents a quantum ICA algorithm that focuses on computing a specified contrast function on a quantum computer. Using quantum acceleration of matrix operations, we efficiently handle Gram matrices and estimate the contrast function with complexity $O(\epsilon_1^{-2}\mbox{poly}\log(N/\epsilon_1))$. This estimation subprogram, combined with a classical optimization framework, yields our quantum ICA algorithm, which exponentially reduces the complexity dependence on the data scale compared with classical algorithms. The advantage is further supported by numerical experiments, and a source separation of a transcriptomic dataset is shown as an example application.
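The role of a contrast function can be made concrete with a small classical sketch (using absolute excess kurtosis, a standard classical contrast, not the specific contrast function of this paper; the sources and mixing matrix are illustrative): mix two independent non-Gaussian sources, whiten, and scan projection directions; the contrast peaks when a projection isolates a single source.

```python
import numpy as np

rng = np.random.default_rng(2)

def kurtosis_contrast(y):
    """|excess kurtosis| of a standardized signal: a classical contrast
    function that is extremal when y equals one independent source."""
    y = (y - y.mean()) / y.std()
    return abs(np.mean(y**4) - 3.0)

# Two independent non-Gaussian sources, linearly mixed.
n = 20000
S = np.vstack([rng.uniform(-1, 1, n),             # uniform: kurtosis -1.2
               np.sign(rng.standard_normal(n))])  # binary:  kurtosis -2.0
A = np.array([[0.9, 0.4], [0.3, 0.8]])
X = A @ S

# Whiten the mixtures: M @ M.T = inv(cov), so M.T @ Xc has unit covariance.
Xc = X - X.mean(axis=1, keepdims=True)
M = np.linalg.cholesky(np.linalg.inv(np.cov(Xc)))
Xw = M.T @ Xc

# Scan projection angles: the contrast is maximized (about 2.0 here)
# when the projection recovers the binary source.
angles = np.linspace(0, np.pi, 181)
scores = [kurtosis_contrast(np.cos(a) * Xw[0] + np.sin(a) * Xw[1])
          for a in angles]
best_angle = angles[int(np.argmax(scores))]
```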
Two-mode squeezed states, which are entangled states with bipartite quantum correlations in continuous-variable systems, are crucial in quantum information processing and metrology. Recently, continuous-variable quantum computing with the vibrational modes of trapped atoms has emerged with significant progress, featuring a high degree of control in hybridizing with spin qubits. Creating two-mode squeezed states in such a platform could enable applications that are otherwise only viable with photons. Here, we experimentally demonstrate two-mode squeezed states by employing atoms in a two-dimensional optical lattice as quantum registers. The states are generated by a controlled projection conditioned on the relative phase of two independent squeezed states. The individual squeezing is created by sudden jumps of the oscillators' frequencies, allowing generation of the two-mode squeezed states at a rate within a fraction of the oscillation frequency. We validate the states by entanglement steering criteria and Fock-state analysis. Our results can be applied to other mechanical oscillators for quantum sensing and continuous-variable quantum information.
The entropic way of formulating Heisenberg's uncertainty principle not only plays a fundamental role in applications of quantum information theory but also is essential for manifesting genuine nonclassical features of quantum systems. In this paper we investigate Rényi entropic uncertainty relations (EURs) in the scenario where measurements on individual copies of a quantum system are selected with nonuniform probabilities. In contrast with EURs that characterize an observer's overall lack of information about outcomes with respect to a collection of measurements, we establish state-dependent lower bounds on the weighted sum of entropies over multiple measurements. Conventional EURs thus correspond to the special cases when all weights are equal, and in such cases, we show our results are generally stronger than previous ones. Moreover, taking the entropic steering criterion as an example, we numerically verify that our EURs could be advantageous in practical quantum tasks by optimizing the weights assigned to different measurements. Importantly, this optimization does not require quantum resources and is efficiently computable on classical computers.
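The equal-weight special case mentioned above can be checked numerically in a few lines. This sketch (illustrative only; it verifies the standard unweighted Maassen-Uffink relation rather than the weighted, state-dependent bounds of this work) computes the Shannon entropies of two complementary qubit measurements and confirms their sum exceeds 1 bit:

```python
import numpy as np

def shannon(p):
    """Shannon entropy in bits of a probability vector."""
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log2(p)))

def meas_probs(rho, basis):
    # Outcome probabilities for a projective measurement whose
    # eigenvectors are the columns of `basis`.
    return np.real(np.array([v.conj() @ rho @ v for v in basis.T]))

# Pure qubit state |psi> = cos(t)|0> + sin(t)|1>.
t = 0.3
psi = np.array([np.cos(t), np.sin(t)])
rho = np.outer(psi, psi.conj())

Z = np.eye(2)                                   # computational basis
X = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # Hadamard basis

HZ = shannon(meas_probs(rho, Z))
HX = shannon(meas_probs(rho, X))
# Maassen-Uffink: H(X) + H(Z) >= -log2(c) = 1 bit, since the maximum
# overlap between these mutually unbiased bases is c = 1/2.
```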
Coherent-one-way (COW) quantum key distribution (QKD) is a significant communication protocol that has been implemented experimentally and deployed in practical products due to its simple equipment requirements. However, existing security analyses of COW-QKD either provide a short transmission distance or lack immunity against coherent attacks in the finite-key regime. In this paper, we present a tight finite-key security analysis within the universally composable framework for a variant of COW-QKD, which has been proven to extend the secure transmission distance in the asymptotic case. We combine the quantum leftover hash lemma and entropic uncertainty relation to derive the key rate formula. When estimating statistical parameters, we use the recently proposed Kato's inequality to ensure security against coherent attacks and achieve a higher key rate. Our paper confirms the security and feasibility of COW-QKD for practical application and lays the foundation for further theoretical study and experimental implementation.
Quantum entanglement lies at the heart of quantum information processing tasks. Although many criteria have been proposed, efficient and scalable methods to detect the entanglement of general quantum states are still unavailable, particularly for high-dimensional and multipartite quantum systems. Based on the FixMatch and Pseudo-Label methods, we propose a deep semi-supervised learning model with a small portion of labeled data and a large portion of unlabeled data. Data augmentation strategies are applied in this model by exploiting the convexity of separable states and performing local unitary operations on the training data. Through detailed examples, we verify that our model has good generalization ability and achieves better accuracy than traditional supervised learning models.
E-commerce, a type of trading that occurs at high frequency on the Internet, requires guaranteeing the integrity, authentication, and non-repudiation of messages over long distances. As current e-commerce schemes are vulnerable to computational attacks, quantum cryptography, which ensures information-theoretic security against an adversary's repudiation and forgery, provides a solution to this problem. However, quantum solutions generally have much lower performance than classical ones, and when imperfect devices are considered, the performance of quantum schemes declines significantly. Here, for the first time, we demonstrate the whole e-commerce process involving the signing of a contract and payment among three parties by proposing a quantum e-commerce scheme that is resistant to attacks from imperfect devices. Results show that with a maximum attenuation of 25 dB among participants, our scheme can achieve a signature rate of 0.82 times per second for an agreement size of approximately 0.428 megabit. The proposed scheme presents a promising solution for providing information-theoretic security for e-commerce.
Quantum systems have entered a competitive regime in which classical computers must make approximations to represent highly entangled quantum states. However, in this beyond-classically-exact regime, fidelity comparisons between quantum and classical systems have so far been limited to digital quantum devices, and it remains an open problem how to estimate the actual entanglement content of experiments. Here we perform fidelity benchmarking and mixed-state entanglement estimation with a 60-atom analog Rydberg quantum simulator, reaching a high-entanglement-entropy regime where exact classical simulation becomes impractical. Our benchmarking protocol involves extrapolation from comparisons against an approximate classical algorithm, introduced here, with varying entanglement limits. We then develop and demonstrate an estimator of the experimental mixed-state entanglement, finding that our experiment is competitive with state-of-the-art digital quantum devices performing random circuit evolution. Finally, we compare the experimental fidelity against that achieved by various approximate classical algorithms, and find that only the algorithm we introduce is able to keep pace with the experiment on the classical hardware we employ. Our results enable a new paradigm for evaluating the ability of both analog and digital quantum devices to generate entanglement in the beyond-classically-exact regime, and highlight the evolving divide between quantum and classical systems.
Paul V. Klimov, Andreas Bengtsson, Chris Quintana, Alexandre Bourassa, Sabrina Hong, Andrew Dunsworth, Kevin J. Satzinger, William P. Livingston, Volodymyr Sivak, Murphy Y. Niu, Trond I. Andersen, Yaxing Zhang, Desmond Chik, Zijun Chen, Charles Neill, Catherine Erickson, Alejandro Grajales Dau, Anthony Megrant, Pedram Roushan, Alexander N. Korotkov, et al. (4) A foundational assumption of quantum error correction theory is that quantum gates can be scaled to large processors without exceeding the error threshold for fault tolerance. Two major challenges that could become fundamental roadblocks are manufacturing high-performance quantum hardware and engineering a control system that can reach its performance limits. The control challenge of scaling quantum gates from small to large processors without degrading performance often maps to non-convex, high-constraint, and time-dependent control optimization over an exponentially expanding configuration space. Here we report on a control optimization strategy that can scalably overcome the complexity of such problems. We demonstrate it by choreographing the frequency trajectories of 68 frequency-tunable superconducting qubits to execute single- and two-qubit gates while mitigating computational errors. When combined with a comprehensive model of physical errors across our processor, the strategy suppresses physical error rates by $\sim3.7\times$ compared with the case of no optimization. Furthermore, it is projected to achieve a similar performance advantage on a distance-23 surface code logical qubit with 1057 physical qubits. Our control optimization strategy solves a generic scaling challenge in a way that can be adapted to a variety of quantum operations, algorithms, and computing architectures.
Andreas Bengtsson, Alex Opremcak, Mostafa Khezri, Daniel Sank, Alexandre Bourassa, Kevin J. Satzinger, Sabrina Hong, Catherine Erickson, Brian J. Lester, Kevin C. Miao, Alexander N. Korotkov, Julian Kelly, Zijun Chen, Paul V. Klimov Measurement is an essential component of quantum algorithms, and for superconducting qubits it is often the most error prone. Here, we demonstrate model-based readout optimization achieving low measurement errors while avoiding detrimental side-effects. For simultaneous and mid-circuit measurements across 17 qubits, we observe 1.5% error per qubit with a 500ns end-to-end duration and minimal excess reset error from residual resonator photons. We also suppress measurement-induced state transitions achieving a leakage rate limited by natural heating. This technique can scale to hundreds of qubits and be used to enhance the performance of error-correcting codes and near-term applications.
Simulating the dynamics of open quantum systems can be a significant challenge, despite the availability of various exact or approximate methods. Particularly when dealing with complex systems, the huge computational cost largely limits the applicability of these methods. We investigate the use of dynamic mode decomposition (DMD) to evaluate the rate kernels in quantum rate processes. DMD is a data-driven model reduction technique that characterizes the rate kernels using snapshots collected from a small time window, allowing us to predict long-term behavior with only a limited number of samples. Our investigations show that, whether or not an external field is involved, DMD gives accurate predictions compared with traditional propagation methods while simultaneously reducing the required computational cost.
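The snapshot-based extrapolation idea behind DMD can be sketched on synthetic data (a generic illustration, not the rate-kernel data of this work): fit a linear one-step operator from a short window of snapshot pairs and read off its eigenvalues, which govern the long-time decay.

```python
import numpy as np

def dmd(X, Y, r):
    """Exact DMD: fit a linear operator A with Y ~ A X from snapshot pairs.

    X, Y: (n, m) snapshot matrices with Y[:, k] the state one step after
    X[:, k]. Returns the eigenvalues of the rank-r reduced operator.
    """
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    A_tilde = U.conj().T @ Y @ Vh.conj().T @ np.diag(1.0 / s)
    return np.linalg.eigvals(A_tilde)

# Synthetic damped oscillation sampled at dt = 0.1; its exact one-step
# multipliers are exp((-0.5 +/- 2i) * dt).
dt, n_snap = 0.1, 40
ts = dt * np.arange(n_snap)
data = np.vstack([np.exp(-0.5 * ts) * np.cos(2 * ts),
                  np.exp(-0.5 * ts) * np.sin(2 * ts)])
X, Y = data[:, :-1], data[:, 1:]
eigs = dmd(X, Y, r=2)
# The DMD eigenvalues recover the true multipliers, so the short snapshot
# window suffices to extrapolate the long-time behavior.
```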
In this paper, we present an efficient quantum compression method for identically prepared states of arbitrary dimension.
Applying low-depth quantum neural networks (QNNs), variational quantum algorithms (VQAs) are both promising and challenging in the noisy intermediate-scale quantum (NISQ) era: despite remarkable progress, criticisms of their efficiency and feasibility have never ceased. Whether VQAs can demonstrate quantum advantages remains undetermined, and this is the question we investigate in this paper. First, we prove that there exists a dependency between the number of parameters and the gradient-evaluation cost when training QNNs. Since no such direct dependency exists when training classical neural networks with the backpropagation algorithm, we argue that this dependency limits the scalability of VQAs. Second, we estimate the time for running VQAs in ideal cases, i.e., without considering realistic limitations such as noise and reachability. We show that the ideal time cost easily reaches the order of one year of wall time. Third, by comparing with the time cost of classically simulating the quantum circuits, we show that VQAs can only outperform classical simulation when the time cost reaches the scale of $10^0$-$10^2$ years. Finally, based on the above results, we argue that it would be difficult for VQAs to outperform classical methods in terms of time scaling, and therefore to demonstrate quantum advantages, with the current workflow. Since VQAs, like quantum computing as a whole, are developing rapidly, this work does not aim to deny their potential. Rather, the analysis provides directions for optimizing VQAs and, in the long run, motivates the search for more natural hybrid quantum-classical algorithms.
In quantum logic operations, information is carried by the wavefunction rather than by the energy distribution; the relative phase is therefore essential. Abelian and non-Abelian phases can be emulated in classical waves using passive coupled waveguides with geometric modulation. However, the dynamic-phase interference induced by variations of the waveguide structure is inevitable. To overcome these challenges, we introduce an electroacoustic coupled system that enables precise control of the phase distribution through dynamic modulation of the hopping. The effective hopping is electronically controlled and is utilized to construct various paths in parameter space. These paths lead to state evolution with matrix-valued geometric phases, which correspond to logic operations. We report experimental realizations of several logic operations, including the $Y$ gate, $Z$ gate, Hadamard gate, and non-Abelian braiding. Our work introduces a temporal process to manipulate transient modes in a compact structure, providing a versatile experimental testbed for exploring other logic gates and exotic topological phenomena.
The dynamics of open quantum systems can be simulated by unraveling them into an ensemble of pure-state trajectories undergoing non-unitary monitored evolution, which has recently been shown to exhibit a measurement-induced entanglement phase transition. Here, we show that, for an arbitrary decoherence channel, one can optimize the unraveling scheme to lower the threshold for the entanglement phase transition, thereby enabling efficient classical simulation of the open dynamics for a broader range of decoherence rates. Taking noisy random unitary circuits as a paradigmatic example, we analytically derive the optimal unraveling basis that on average minimizes the threshold. Moreover, we present a heuristic algorithm that adaptively optimizes the unraveling basis for given noise channels, also significantly extending the simulatable regime. When applied to noisy Hamiltonian dynamics, the heuristic approach indeed extends the regime of efficient classical simulation based on matrix product states beyond conventional quantum-trajectory methods. Finally, we assess the possibility of using a quasi-local unraveling, which involves multiple qubits and time steps, to efficiently simulate open systems with an arbitrarily small but finite decoherence rate.
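The unraveling concept itself can be made concrete with the simplest fixed-basis quantum-jump scheme (this is the conventional, unoptimized unraveling, shown only to illustrate the idea; the rate and step sizes are arbitrary): single-qubit dephasing, where averaging over stochastic trajectories reproduces the Lindblad decay of the coherence.

```python
import numpy as np

rng = np.random.default_rng(1)

def dephasing_trajectories(gamma=0.5, dt=0.01, steps=200, n_traj=2000):
    """Quantum-jump unraveling of single-qubit dephasing, L = sqrt(gamma) Z.

    Each trajectory applies a Z 'jump' with probability gamma*dt per step
    and otherwise evolves trivially (the no-jump evolution is proportional
    to the identity here). Averaging trajectories reproduces the Lindblad
    coherence decay <0|rho(t)|1> = 0.5 * exp(-2*gamma*t).
    """
    psi0 = np.array([1.0, 1.0]) / np.sqrt(2)   # |+>, initial coherence 1/2
    coherence = np.zeros(steps + 1)
    for _ in range(n_traj):
        psi = psi0.copy()
        for k in range(steps):
            coherence[k] += np.real(psi[0] * np.conj(psi[1]))
            if rng.random() < gamma * dt:
                psi = psi * np.array([1.0, -1.0])   # jump: apply Z
        coherence[steps] += np.real(psi[0] * np.conj(psi[1]))
    return coherence / n_traj
```

An optimized unraveling, as proposed in the work above, would instead choose the measurement basis (and possibly quasi-local, multi-step schemes) to minimize trajectory entanglement; this fixed-basis version is the baseline such schemes improve on.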