The possibility of extracting more work from a physical system thanks to the information obtained from measurements has been a topic of fundamental interest in thermodynamics since the formulation of Maxwell's demon thought experiment. We here consider this problem from the perspective of an open quantum battery interacting with an environment that can be continuously measured. By modelling the dynamics via a continuously monitored collisional model, we show how to implement it as a quantum circuit, including the final conditional feedback unitary evolution that enhances the amount of work extracted. By exploiting the flexibility of IBM quantum computers and by properly modelling the corresponding quantum circuit, we experimentally simulate the work extraction protocol, showing that the obtained experimental values of the daemonic extracted work are close to their theoretical upper bound, quantified by the so-called daemonic ergotropy. We also demonstrate how, by properly modelling the noise affecting the quantum circuit, one can improve the work extraction protocol by optimizing the corresponding unitary feedback operation used for extraction.
Achieving high-precision measurements on near-term quantum devices is critical for advancing quantum computing applications. In this paper, we explore several practical techniques to enhance measurement accuracy using randomized measurements, focusing on minimizing shot overhead, circuit overhead, measurement noise, and time-dependent measurement noise. Our approach leverages locally biased random measurements to reduce shot overhead, in addition to repeated settings and parallel quantum detector tomography to reduce circuit overhead and mitigate measurement noise. Additionally, we employ a blended scheduling technique to mitigate time-dependent measurement noise. We demonstrate the effectiveness of these techniques through a case study on the molecular energy estimation of the BODIPY molecule using the Hartree-Fock state on an IBM Eagle r3 computer, showcasing significant improvements in measurement precision. These strategies pave the way for more reliable and accurate quantum computations, particularly in applications requiring precise molecular energy calculations.
We show how to entangle the motion of optically levitated nanoparticles in distant optical tweezers. The scheme consists in coupling the inelastically scattered light of each particle into transmission lines and directing it towards the other particle. The interference between this light and the background field introduces an effective coupling between the two particles while simultaneously reducing the effect of recoil heating. We analyze the system dynamics, showing that both transient and conditional entanglement between remote particles can be achieved under realistic experimental conditions.
Every massive particle behaves like a wave, according to quantum physics. Yet, this characteristic wave nature has only been observed in double-slit experiments with microscopic systems, such as atoms and molecules. The key aspect is that the wavefunction describing the motion of these systems extends coherently over a distance comparable to the slit separation, much larger than the size of the system itself. Preparing these states of more massive and complex objects remains an outstanding challenge. While the motion of solid-state oscillators can now be controlled at the level of single quanta, their coherence length remains comparable to the zero-point motion, limited to subatomic distances. Here, we prepare a delocalized state of a levitating solid-state nanosphere with coherence length exceeding the zero-point motion. We first cool its motion to the ground state. Then, by modulating the stiffness of the confinement potential, we achieve more than a threefold increment of the initial coherence length with minimal added noise. Optical levitation gives us the necessary control over the confinement that other mechanical platforms lack. Our work is a stepping stone towards the generation of delocalization scales comparable to the object size, a crucial regime for macroscopic quantum experiments, and towards quantum-enhanced force sensing with levitated particles.
By leveraging the Variational Quantum Eigensolver (VQE), the ``quantum equation of motion'' (qEOM) method established itself as a promising tool for quantum chemistry on near-term quantum computers, and has been used extensively to estimate molecular excited states. Here, we explore a novel application of this method, employing it to compute thermal averages of quantum systems, specifically molecules like ethylene and butadiene. A drawback of qEOM is that it requires measuring the expectation values of a large number of observables on the ground state of the system, and the number of necessary measurements can become a bottleneck of the method. In this work we focus on measurements through informationally complete positive operator-valued measures (IC-POVMs) to achieve a reduction of the measurement overhead. We show with numerical simulations that qEOM combined with IC-POVM measurements ensures a satisfactory accuracy in the reconstruction of the thermal state with a reasonable number of shots.
Characterization of noise in current near-term quantum devices is of paramount importance to fully use their computational power. However, direct quantum process tomography becomes unfeasible for systems composed of tens of qubits. A promising alternative method based on tensor networks was recently proposed [Nat. Commun. 14, 2858 (2023)]. In this paper, we adapt it for the characterization of noise channels on near-term quantum computers and investigate its performance thoroughly. In particular, we show that experimentally feasible numbers of tomographic samples are sufficient to accurately characterize realistic correlated noise models affecting individual layers of quantum circuits, and study its performance on systems composed of up to 20 qubits. Furthermore, we combine this noise characterization method with a recently proposed noise-aware tensor network error mitigation protocol for correcting outcomes in noisy circuits, resulting in accurate estimations even on deep circuit instances. This positions the tensor-network-based noise characterization protocol as a valuable tool for practical error characterization and mitigation in the near-term quantum computing era.
Generating macroscopic non-classical quantum states is a long-standing challenge in physics. Anharmonic dynamics is an essential ingredient to generate these states, but for large mechanical systems, the effect of the anharmonicity tends to become negligible compared to decoherence. As a possible solution to this challenge, we propose to use a motional squeezed state as a resource to effectively enhance the anharmonicity. We analyze the production of negativity in the Wigner distribution of a quantum anharmonic resonator initially in a squeezed state. We find that initial squeezing enhances the rate at which negativity is generated. We also analyze the effect of two common sources of decoherence, namely energy damping and dephasing, and find that the detrimental effects of energy damping are suppressed by strong squeezing. In the limit of large squeezing, which is needed for state-of-the-art systems, we find good approximations for the Wigner function. Our analysis is significant for current experiments attempting to prepare macroscopic mechanical systems in genuine quantum states. We provide an overview of several experimental platforms featuring nonlinear behaviors and low levels of decoherence. In particular, we discuss the feasibility of our proposal with carbon nanotubes and levitated nanoparticles.
Levitated nanoparticles in vacuum are prime candidates for generating macroscopic quantum superposition states of massive objects. Most protocols for preparing these states necessitate coherent expansion beyond the scale of the zero-point motion to produce sufficiently delocalized and pure phase-space distributions. Here, we spatially expand and subsequently recontract the thermal state of a levitated nanoparticle by modifying the stiffness of the trap holding the particle. We achieve state-expansion factors of 25 in standard deviation for a particle initially feedback-cooled to a center-of-mass thermal state of 155 mK. Our method relies on a hybrid scheme combining an optical trap, for cooling and measuring the particle's motion, with a Paul trap for expanding its state. Consequently, state expansion occurs devoid of measurement backaction from photon recoil, making this approach suitable for coherent wavefunction expansion in future experiments.
We present a hybrid trapping platform that allows us to levitate a charged nanoparticle in high vacuum using either optical fields, radio-frequency fields, or a combination thereof. Our hybrid approach combines an optical dipole trap with a linear Paul trap while maintaining a large numerical aperture (0.77 NA). We detail a controlled transfer procedure that allows us to use the Paul trap as a safety net to recover particles lost from the optical trap at high vacuum. The presented hybrid platform adds to the toolbox of levitodynamics and represents an important step towards fully controllable dark potentials, providing control in the absence of decoherence due to photon recoil.
Yuri Alexeev, Maximilian Amsler, Paul Baity, Marco Antonio Barroca, Sanzio Bassini, Torey Battelle, Daan Camps, David Casanova, Young Jai Choi, Frederic T. Chong, Charles Chung, Chris Codella, Antonio D. Corcoles, James Cruise, Alberto Di Meglio, Jonathan Dubois, Ivan Duran, Thomas Eckl, Sophia Economou, Stephan Eidenbenz, et al. (107 additional authors not shown)

Computational models are an essential tool for the design, characterization, and discovery of novel materials. Hard computational tasks in materials science stretch the limits of existing high-performance supercomputing centers, consuming much of their simulation, analysis, and data resources. Quantum computing, on the other hand, is an emerging technology with the potential to accelerate many of the computational tasks needed for materials science. In order to do that, the quantum technology must interact with conventional high-performance computing in several ways: approximate results validation, identification of hard problems, and synergies in quantum-centric supercomputing. In this paper, we provide a perspective on how quantum-centric supercomputing can help address critical computational problems in materials science, the challenges to face in order to solve representative use cases, and suggested new directions.
Gaussian Boson Sampling (GBS) is a recently developed paradigm of quantum computing that consists of sending a Gaussian state through a linear interferometer and then counting the number of photons in each output mode. When the system encodes a symmetric matrix, GBS can be viewed as a tool to sample subgraphs: the most frequently sampled are those with a large number of perfect matchings, and thus the densest ones. This property is the foundation of the novel clustering approach we propose in this work, called GBS-based clustering, which relies solely on GBS, without the need for classical algorithms. The GBS-based clustering has been tested on several datasets and benchmarked against two well-known classical clustering algorithms. Results obtained using a GBS simulator show that on average our approach outperforms the two classical algorithms in two out of the three chosen metrics, making it a viable fully quantum clustering option.
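The link between sampling frequency and subgraph density can be made concrete: in GBS, the probability of observing a given subgraph grows with the squared number of its perfect matchings (the squared hafnian of its adjacency submatrix). The following brute-force count is an illustrative sketch of that quantity, not code from the paper; the function name and toy graphs are our own.

```python
from itertools import combinations

def perfect_matchings(vertices, edges):
    """Count perfect matchings of the subgraph on `vertices` by brute force.
    In GBS, subgraphs with more perfect matchings are sampled more often."""
    E = {frozenset(e) for e in edges}
    def count(rem):
        if not rem:
            return 1                       # all vertices matched
        v = rem[0]
        # match v with any neighbour w still unmatched, then recurse
        return sum(count([u for u in rem[1:] if u != w])
                   for w in rem[1:] if frozenset((v, w)) in E)
    return count(sorted(vertices))

K4 = list(combinations(range(4), 2))        # complete graph: densest
C4 = [(0, 1), (1, 2), (2, 3), (3, 0)]       # 4-cycle: sparser
print(perfect_matchings(range(4), K4))      # 3
print(perfect_matchings(range(4), C4))      # 2
```

K4, being denser, has more perfect matchings than the 4-cycle and would therefore dominate the GBS samples.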
Until fault tolerance becomes implementable at scale, quantum computing will heavily rely on noise mitigation techniques. While methods such as zero noise extrapolation with probabilistic error amplification (ZNE-PEA) and probabilistic error cancellation (PEC) have recently been tested successfully on hardware, their scalability to larger circuits may be limited. Here, we introduce the tensor-network error mitigation (TEM) algorithm, which acts in post-processing to correct the noise-induced errors in estimations of physical observables. The method consists of the construction of a tensor network representing the inverse of the global noise channel affecting the state of the quantum processor, and the subsequent application of this map to informationally complete measurement outcomes obtained from the noisy state. TEM therefore requires no additional quantum operations other than the implementation of informationally complete POVMs, which can be achieved through randomised local measurements. The key advantage of TEM is that the measurement overhead is quadratically smaller than in PEC. We test TEM extensively in numerical simulations in different regimes. We find that TEM can be applied to circuits of twice the depth achievable with PEC under realistic conditions with sparse Pauli-Lindblad noise, such as those in [E. van den Berg et al., Nat. Phys. (2023)]. By using Clifford circuits, we explore the capabilities of the method in wider and deeper circuits with lower noise levels. We find that in the case of 100 qubits and depth 100, both PEC and ZNE fail to produce accurate results using $\sim 10^5$ shots, while TEM succeeds.
The interplay of electronic and nuclear degrees of freedom presents an outstanding problem in condensed matter physics and chemistry. Computational challenges arise especially for large systems, long time scales, in nonequilibrium, or in systems with strong correlations. In this work, we show how downfolding approaches facilitate complexity reduction on the electronic side and thereby boost the simulation of electronic properties and nuclear motion - in particular molecular dynamics (MD) simulations. Three different downfolding strategies based on constraining, unscreening, and combinations thereof are benchmarked against full density functional calculations for selected charge density wave (CDW) systems, namely 1H-TaS$_2$, 1T-TiSe$_2$, 1H-NbS$_2$, and a one-dimensional carbon chain. We find that the downfolded models can reproduce potential energy surfaces on supercells accurately and facilitate computational speedup in MD simulations by about five orders of magnitude in comparison to purely ab initio calculations. For monolayer 1H-TaS$_2$ we report classical replica exchange and quantum path integral MD simulations, revealing the impact of thermal and quantum fluctuations on the CDW transition.
The amount of work that can be extracted from a quantum system can be increased by exploiting the information obtained from a measurement performed on a correlated ancillary system. The concept of daemonic ergotropy has been introduced to properly describe and quantify this work extraction enhancement in the quantum regime. We here explore the application of this idea in the context of continuously monitored open quantum systems, where information is gained by measuring the environment interacting with the energy-storing quantum device. We first show that the corresponding daemonic ergotropy takes values between the ergotropy and the energy of the corresponding unconditional state. The upper bound is achieved by assuming an initial pure state and a perfectly efficient projective measurement on the environment, independently of the kind of measurement performed. On the other hand, if the measurement is inefficient or the initial state is mixed, the daemonic ergotropy generally depends on the measurement strategy. This scenario is investigated via a paradigmatic example of an open quantum battery: a two-level atom that is driven by a classical field and whose spontaneously emitted photons are continuously monitored via homodyne, heterodyne, or photo-detection.
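For concreteness, the (unconditional) ergotropy that the daemonic quantity upper-bounds has a closed form: $W = \mathrm{Tr}(\rho H) - \sum_k r_k \epsilon_k$, with $r_k$ the eigenvalues of $\rho$ in decreasing order and $\epsilon_k$ those of $H$ in increasing order (the passive-state energy). The sketch below is our own illustration, not code from the paper; the function name and the single-qubit example are assumptions.

```python
import numpy as np

def ergotropy(rho, H):
    """Maximum work unitarily extractable from state rho under Hamiltonian H:
    W = Tr(rho H) - sum_k r_k eps_k (energy of the associated passive state)."""
    r = np.sort(np.linalg.eigvalsh(rho))[::-1]   # populations, decreasing
    eps = np.sort(np.linalg.eigvalsh(H))         # energies, increasing
    return np.real(np.trace(rho @ H)) - np.dot(r, eps)

# Qubit battery with H = (1/2) sigma_z (units hbar*omega = 1)
H = 0.5 * np.diag([1.0, -1.0])
rho_excited = np.diag([1.0, 0.0])   # pure excited state: fully extractable
rho_mixed = np.diag([0.5, 0.5])     # maximally mixed: passive, no work
print(ergotropy(rho_excited, H))    # 1.0
print(ergotropy(rho_mixed, H))      # 0.0
```

The pure excited state yields its full energy gap as work, while the maximally mixed state is passive, matching the bounds discussed in the abstract.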
We study the effect of non-Markovianity in the charging process of an open-system quantum battery. We employ a collisional model framework, where the environment is described by a discrete set of ancillary systems and memory effects can be introduced into the dynamics by allowing these ancillas to interact. We study in detail the behaviour of the steady-state ergotropy and the impact of the information backflow to the system on the different features characterizing the charging process. Remarkably, we find that there is a maximum achievable value of the ergotropy: this value can be obtained either in the presence of a memoryless environment, but only in the large-loss limit, as derived in [D. Farina et al., Phys. Rev. B 99, 035421 (2019)], or in the presence of an environment with memory, also beyond the large-loss limit. In general, we show that the presence of an environment with memory allows us to generate steady-state ergotropy close to its maximum value for a much larger region in the parameter space, and thus potentially in a shorter time. Relying on the geometrical measure of non-Markovianity, we show that, both with and without environmental memory, the ergotropy maximum is obtained when the non-Markovianity of the battery dynamics is zero, possibly as the result of a non-trivial interplay between the memory effects induced by the environment and by the charger connected to the battery.
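A memoryless (Markovian) version of such a collisional model can be sketched in a few lines: a battery qubit repeatedly collides, via a partial-swap unitary, with fresh ancillas prepared in the charged state, and relaxes toward full charge. This is a hedged illustration of the general framework only, not the paper's memory-equipped model; the coupling angle and all names are our own choices.

```python
import numpy as np

def partial_swap(theta):
    """4x4 partial-swap collision unitary U = cos(theta) I + i sin(theta) SWAP."""
    SWAP = np.array([[1, 0, 0, 0], [0, 0, 1, 0],
                     [0, 1, 0, 0], [0, 0, 0, 1]], dtype=complex)
    return np.cos(theta) * np.eye(4) + 1j * np.sin(theta) * SWAP

def collide(rho_s, rho_a, U):
    """One collision: joint unitary on system+ancilla, then trace out the ancilla."""
    joint = U @ np.kron(rho_s, rho_a) @ U.conj().T
    return joint.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

U = partial_swap(0.3)
rho_s = np.diag([1.0, 0.0]).astype(complex)   # battery starts discharged
rho_a = np.diag([0.0, 1.0]).astype(complex)   # each fresh ancilla is charged
for _ in range(200):
    rho_s = collide(rho_s, rho_a, U)
print(np.real(rho_s[1, 1]))   # excited population, approaches 1 (fully charged)
```

Memory effects of the kind studied in the abstract would be introduced by letting successive ancillas interact with each other before being discarded, instead of using fresh, uncorrelated ones.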
We propose an estimation method for quantum measurement tomography (QMT) based on semidefinite programming (SDP), and discuss how it may be employed to detect experimental imperfections, such as shot noise and/or faulty preparation of the input states on near-term quantum computers. Moreover, if the positive operator-valued measure (POVM) we aim to characterize is informationally complete, we put forward a method for self-consistent tomography, i.e., for recovering a set of input states and POVM effects that is consistent with the experimental outcomes and does not assume any a priori knowledge about the input states of the tomography. Contrary to many methods that have been discussed in the literature, our approach does not rely on additional assumptions such as low noise or the existence of a reliable subset of input states.
Quantum processing units boost entanglement at the level of hardware and enable physical simulations of highly correlated electron states in molecules and intermolecular chemical bonds. The variational quantum eigensolver provides a hardware-efficient toolbox for ground state simulation, however with limitations in precision. Even in the absence of noise, the algorithm may result in a biased energy estimation, particularly with some shallower ansatz types. Noise additionally degrades entanglement and hinders the ground state energy estimation (especially if the noise is not fully characterized). Here we develop a method that exploits the quantum-classical interface provided by informationally complete measurements, using classical software on top of the hardware entanglement booster to reduce both ansatz- and noise-related errors. We use the tensor network representation of a quantum channel that drives the noisy state toward the ground state. The tensor network is a completely positive map by construction, and we show how to make the trace-preservation condition local so as to enable the sweeping variational optimization. This method brings into reach energies below the noiseless ansatz by creating additional correlations among the qubits and denoising them. Analyzing the example of the stretched water molecule, which exhibits tangible entanglement, we argue that a hybrid strategy using the quantum hardware together with the classical software outperforms a purely classical strategy, provided the classical parts have the same bond dimension. The proposed optimization algorithm extends the variety of noise mitigation methods and facilitates a more accurate study of the energy landscape of deformed molecules. The algorithm can be applied as the final post-processing step in the quantum hardware simulation of protein-ligand complexes in the context of drug design.
ADAPT-VQE stands out as a robust algorithm for constructing compact ansätze for molecular simulation. It enables a significant reduction of the circuit depth with respect to other methods, such as UCCSD, while achieving higher accuracy and not suffering from the so-called barren plateaus that hinder the variational optimisation of many hardware-efficient ansätze. In its standard implementation, however, it introduces a considerable measurement overhead in the form of gradient evaluations through the estimation of many commutator operators. In this work, we mitigate this measurement overhead by exploiting a recently introduced method for energy evaluation relying on Adaptive Informationally complete generalised Measurements (AIM). Besides offering an efficient way to measure the energy itself, Informationally Complete (IC) measurement data can be reused to estimate all the commutators of the operators in the operator pool of ADAPT-VQE, using only classically efficient post-processing. We present the AIM-ADAPT-VQE scheme in detail, and investigate its performance with several H4 Hamiltonians and operator pools. Our numerical simulations indicate that the measurement data obtained to evaluate the energy can be reused to implement ADAPT-VQE with no additional measurement overhead for the systems considered here. In addition, we show that, if the energy is measured within chemical precision, the CNOT count in the resulting circuits is close to the ideal one. With scarce measurement data, AIM-ADAPT-VQE still converges to the ground state with high probability, albeit with an increased circuit depth in some cases.
We theoretically show that laser recoil heating in free-space levitated optomechanics can be arbitrarily suppressed by shining squeezed light onto an optically trapped nanoparticle. The presence of squeezing modifies the quantum electrodynamical light-matter interaction in a way that enables us to control the amount of information that the scattered light carries about a given mechanical degree of freedom. Moreover, we analyze the trade-off between measurement imprecision and back-action noise and show that optical detection beyond the standard quantum limit can be achieved. We predict that, with state-of-the-art squeezed light sources, laser recoil heating can be reduced by at least 60% by squeezing a single Gaussian mode with an appropriate incidence direction, and by 98% by squeezing a properly mode-matched mode. Our results, which are valid both for motional and librational degrees of freedom, will lead to improved feedback cooling schemes as well as boost the coherence time of optically levitated nanoparticles in the quantum regime.
Protein-protein interaction (PPI) networks consist of the physical and/or functional interactions between the proteins of an organism. Since the biophysical and high-throughput methods used to form PPI networks are expensive, time-consuming, and often contain inaccuracies, the resulting networks are usually incomplete. In order to infer missing interactions in these networks, we propose a novel class of link prediction methods based on continuous-time classical and quantum random walks. In the case of quantum walks, we examine the usage of both the network adjacency and Laplacian matrices for controlling the walk dynamics. We define a score function based on the corresponding transition probabilities and perform tests on four real-world PPI datasets. Our results show that continuous-time classical random walks and quantum walks using the network adjacency matrix can successfully predict missing protein-protein interactions, with performance rivalling the state of the art.
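A minimal version of a continuous-time quantum-walk score can be built directly from the network's adjacency matrix: since $A$ is symmetric, $e^{-iAt}$ follows from its eigendecomposition, and the transition probabilities $|\langle j|e^{-iAt}|i\rangle|^2$ can rank the candidate missing links. This is an illustrative sketch under our own assumptions (toy graph, single evolution time, function name), not the paper's scoring pipeline.

```python
import numpy as np

def ctqw_scores(A, t):
    """Link-prediction scores S[i, j] = |<j| exp(-i A t) |i>|^2 for a
    continuous-time quantum walk generated by adjacency matrix A."""
    w, V = np.linalg.eigh(A)                         # A symmetric => eigh
    U = V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T
    return np.abs(U) ** 2

# Toy network: path 0-1-2-3; rank the absent pairs by transition probability
A = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
S = ctqw_scores(A, t=1.0)
absent = [(0, 2), (0, 3), (1, 3)]
print(sorted(absent, key=lambda e: -S[e]))   # highest-scoring missing links first
```

In practice one would average the scores over time (or sample several times $t$) and could swap $A$ for the graph Laplacian, as the abstract discusses.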
We present adaptive measurement techniques tailored for variational quantum algorithms on near-term small and noisy devices. In particular, we generalise earlier "learning to measure" strategies in two ways. First, by considering a class of adaptive positive operator valued measures (POVMs) that can be simulated with simple projective measurements without ancillary qubits, we decrease the amount of required qubits and two-qubit gates. Second, by introducing a method based on Quantum Detector Tomography to mitigate the effect of noise, we are able to optimise the POVMs as well as to infer expectation values reliably in the currently available noisy quantum devices. Our numerical simulations clearly indicate that the presented strategies can significantly reduce the number of needed shots to achieve chemical accuracy in variational quantum eigensolvers, thus helping to solve one of the bottlenecks of near-term quantum computing.
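To illustrate the kind of POVM-based estimation underlying such schemes: for a single qubit, the tetrahedral SIC-POVM is informationally complete, and expectation values can be recovered from outcome frequencies by linear inversion of the Bloch vector, $\vec r = 3\sum_k f_k \vec s_k$. The following self-contained sketch (state, observable, shot count, and names are our own illustrative choices, not the paper's adaptive POVMs) simulates shots and reconstructs $\langle Z\rangle$.

```python
import numpy as np
rng = np.random.default_rng(1)

# Pauli matrices and the four tetrahedral Bloch vectors of the qubit SIC-POVM
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)
s = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)
effects = [(np.eye(2) + v[0]*X + v[1]*Y + v[2]*Z) / 4 for v in s]  # sum to I

rho = np.array([[0.8, 0.3], [0.3, 0.2]], dtype=complex)   # true state, <Z> = 0.6

# Simulate shots: outcome k occurs with Born probability Tr(E_k rho)
p = np.real([np.trace(E @ rho) for E in effects])
p = p / p.sum()
freqs = rng.multinomial(100_000, p) / 100_000

# Linear inversion: Bloch vector from frequencies, then the estimate of <Z>
r = 3 * freqs @ s
rho_est = (np.eye(2) + r[0]*X + r[1]*Y + r[2]*Z) / 2
est = np.real(np.trace(Z @ rho_est))
print(est)   # close to Tr(Z rho) = 0.6
```

Because the POVM is informationally complete, the same outcome record can be post-processed to estimate any other qubit observable, which is what makes the shot-reduction strategies in the abstract possible.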
The rapid progress in quantum computing witnessed in recent years has sparked widespread interest in developing scalable quantum information theoretic methods for working with large quantum systems. For instance, several approaches have been proposed to bypass tomographic state reconstruction while retaining, to a certain extent, the capability to estimate multiple physical properties of a previously measured state. In this paper, we introduce the Virtual Linear Map Algorithm (VILMA), a new method that estimates multiple operator averages through classical post-processing of informationally complete measurement outcomes, not only for the measured reference state but also for its image under low-depth circuits of arbitrary, not necessarily physical, $k$-local maps. We also show that VILMA allows for the variational optimisation of the virtual circuit through sequences of efficient linear programs. Finally, we explore the purely classical version of the algorithm, in which the input state has a classically efficient representation, and show that the method can prepare ground states of many-body Hamiltonians.
Sabrina Maniscalco, Elsi-Mari Borrelli, Daniel Cavalcanti, Caterina Foti, Adam Glos, Mark Goldsmith, Stefan Knecht, Keijo Korhonen, Joonas Malmi, Anton Nykänen, Matteo A. C. Rossi, Harto Saarinen, Boris Sokolov, N. Walter Talarico, Jussi Westergren, Zoltán Zimborás, Guillermo García-Pérez

Scientific and technological advances in medicine and systems biology have unequivocally shown that health and disease must be viewed in the context of the interplay among multiple molecular and environmental factors. Understanding the effects of cellular interconnection on disease progression may lead to the identification of novel disease genes and pathways, and hence influence precision diagnostics and therapeutics. To accomplish this goal, the emerging field of network medicine applies network science approaches to investigate disease pathogenesis, integrating information from relevant Omics databases, including protein-protein interaction, correlation-based, gene regulatory, and Bayesian networks. However, this requires analysing and computing large amounts of data. Moreover, if we are to efficiently search for new drugs and new drug combinations, there is a pressing need for computational methods that could allow us to access the immense chemical compound space, until now largely unexplored. Finally, at the microscopic level, drug-target chemistry simulation is ultimately a quantum problem, and hence it requires a quantum solution. As we will discuss, quantum computing may be a key ingredient in enabling the full potential of network medicine. We propose to combine network medicine and quantum algorithms in a novel research field, quantum network medicine, to lay the foundations of a new era of disease prevention and drug design.
We report the results of an in-depth study of the role of graph topology on quantum transport efficiency in random removal and Watts-Strogatz networks. Using four different environmental models -- noiseless, driving by classical random telegraph noise (RTN), thermal quantum bath, and bath+RTN -- we compare the role of the environment and of changes in network topology in determining the quantum transport efficiency. We find that, for both network classes, small and specific changes in network topology are more effective in causing large changes in efficiency than environmental manipulations. Furthermore, we find that the noise dependence of transport efficiency in these networks can be categorized into six classes. In general, our results highlight the interplay between network topology and environment models in quantum transport, and pave the way for transport studies of networks of increasing size and complexity -- beyond the few-site systems most often studied so far.
We study spatial search with continuous-time quantum walks on real-world complex networks. We use smaller replicas of the Internet network obtained with a recent geometric renormalization method introduced by García-Pérez et al., Nat. Phys. 14, 583 (2018). This allows us to infer for the first time the behavior of a quantum spatial search algorithm on a real-world complex network. By simulating numerically the dynamics and optimizing the coupling parameter, we study the optimality of the algorithm and its scaling with the size of the network, showing that on average it is considerably better than the classical scaling $\mathcal{O}(N)$, but it does not reach the ideal quadratic speedup $\mathcal{O}(\sqrt{N})$ that can be achieved, e.g. in complete graphs. However, the performance of the search algorithm strongly depends on the degree of the nodes and, in fact, the scaling is found to be very close to optimal when we consider the nodes below the $99$th percentile ordered according to the degree.
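The spatial-search Hamiltonian used in studies of this kind is $H = -\gamma A - |w\rangle\langle w|$, evolved from the uniform superposition $|s\rangle$. The sketch below is our own illustration on the complete graph, where the optimum $\gamma = 1/N$ and $t \approx (\pi/2)\sqrt{N}$ are known exactly, rather than on an Internet replica; the function name and grid of times are assumptions.

```python
import numpy as np

def search_probability(A, w, gamma, times):
    """Success probability |<w| exp(-i H t) |s>|^2 of continuous-time quantum
    spatial search with H = -gamma*A - |w><w|, starting from the uniform state."""
    N = A.shape[0]
    H = -gamma * A
    H[w, w] -= 1.0                      # oracle term -|w><w|
    vals, vecs = np.linalg.eigh(H)
    s = np.full(N, 1 / np.sqrt(N))      # uniform superposition |s>
    amp = vecs[w] * (vecs.conj().T @ s) # <w|v_k><v_k|s> for each eigenpair
    return np.array([np.abs(np.sum(amp * np.exp(-1j * vals * t))) ** 2
                     for t in times])

N = 16
A = np.ones((N, N)) - np.eye(N)         # complete graph K_N
times = np.linspace(0, 20, 400)
p = search_probability(A, w=0, gamma=1.0 / N, times=times)
print(p.max())   # near 1, reached around t = (pi/2) * sqrt(N)
```

On a complex network one would replace `A` by the (renormalized) network adjacency matrix and optimize `gamma` numerically, which is precisely the procedure the abstract describes.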
Using the Lindblad equation approach, we study the nonequilibrium stationary state of a benzene ring connected to two reservoirs in the large bias regime, a prototype of a generic molecular electronic device. We show the emergence of an optimal working point (corresponding to a change in the monotonicity of the stationary current, as a function of the applied bias) and its robustness against chemical potential and bond disorder.
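For small systems, the Lindblad-equation stationary state invoked above can be obtained numerically as the null vector of the vectorized Liouvillian. The sketch below is a generic illustration on a driven-damped qubit (whose steady state is known analytically, $\rho_{ee} = \Omega^2/(\gamma^2 + 2\Omega^2)$), not the paper's benzene-ring model; all names and parameters are ours.

```python
import numpy as np

def liouvillian(H, Ls):
    """Vectorized Lindblad generator (column-stacking convention):
    d vec(rho)/dt = L vec(rho), with Hamiltonian H and jump operators Ls."""
    d = H.shape[0]
    I = np.eye(d)
    L = -1j * (np.kron(I, H) - np.kron(H.T, I))           # -i[H, rho]
    for c in Ls:
        cd_c = c.conj().T @ c
        L += (np.kron(c.conj(), c)                        # c rho c^dag
              - 0.5 * (np.kron(I, cd_c) + np.kron(cd_c.T, I)))
    return L

# Resonantly driven, damped qubit: H = (Omega/2) sigma_x, decay rate gamma
Omega, gamma = 1.0, 0.5
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sm = np.array([[0, 1], [0, 0]], dtype=complex)            # sigma_- = |g><e|
L = liouvillian(0.5 * Omega * sx, [np.sqrt(gamma) * sm])

vals, vecs = np.linalg.eig(L)
v = vecs[:, np.argmin(np.abs(vals))]                      # eigenvalue ~ 0
rho_ss = v.reshape(2, 2, order="F")                       # un-vectorize
rho_ss = rho_ss / np.trace(rho_ss)
print(np.real(rho_ss[1, 1]))   # excited population: Omega^2/(gamma^2+2*Omega^2)
```

The same construction scales to a few sites of a transport model (e.g. a small ring coupled to reservoirs), with the stationary current then read off from the steady state.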
A mechanically compliant element can be set into motion by the interaction with light. In turn, this light-driven motion can give rise to ponderomotive correlations in the electromagnetic field. In optomechanical systems, cavities are often employed to enhance these correlations up to the point where they generate quantum squeezing of light. In free-space scenarios, where no cavity is used, observation of squeezing remains possible but challenging due to the weakness of the interaction, and has not been reported so far. Here, we measure the ponderomotively squeezed state of light scattered by a nanoparticle levitated in a free-space optical tweezer. We observe a reduction of the optical fluctuations by up to $25$~\% below the vacuum level, in a bandwidth of about $15$~kHz. Our results are well explained by a linearized dipole interaction between the nanoparticle and the electromagnetic continuum. These ponderomotive correlations open the door to quantum-enhanced sensing and metrology with levitated systems, such as force measurements below the standard quantum limit.
Dissipative collective effects are ubiquitous in quantum physics, and their relevance ranges from the study of entanglement in biological systems to noise mitigation in quantum computers. Here, we put forward the first fully quantum simulation of dissipative collective phenomena on a real quantum computer, based on the recently introduced multipartite collision model. First, we theoretically study the accuracy of this algorithm on near-term quantum computers with noisy gates, and we derive rigorous error bounds that depend on the timestep of the collision model and on the gate errors. These bounds can be employed to estimate the resources necessary for the efficient quantum simulation of the collective dynamics. Then, we implement the algorithm on IBM quantum computers to simulate superradiance and subradiance between a pair of qubits. Our experimental results successfully display the emergence of collective effects in the quantum simulation. In addition, we analyze the noise properties of the gates employed in the algorithm by means of full process tomography, with the aim of improving our understanding of the errors in the near-term devices currently accessible to worldwide researchers. We obtain the values of the average gate fidelity, unitarity, incoherence and diamond error, and we establish a connection between them and the accuracy of the experimentally simulated state. Moreover, we build a noise model based on the results of the process tomography for two-qubit gates and show that its performance is comparable with that of the noise model provided by IBM. Finally, we observe that the scaling of the error as a function of the number of gates is favorable, but at the same time reaching the diamond-error threshold for fault-tolerant quantum computation may still be orders of magnitude away in the devices that we employ.
Given its relevance to cryptography, integer factorization is a prominent application where quantum computers are expected to have a substantial impact. Thanks to Shor's algorithm, this problem can be solved in polynomial time. However, both the number of qubits and the number of applied gates detrimentally affect the ability to run a particular quantum circuit on near-term quantum hardware. In this work, we help address both of these problems by introducing a reduced version of Shor's algorithm, a step forward in increasing the range of numbers that can be factorized on noisy quantum devices. The implementation presented in this work is general and makes no assumptions on the number to factor. In particular, we have found noteworthy results in most cases, often being able to factor the given number with only one iteration of the proposed algorithm. Finally, comparing the original quantum algorithm with our version on a simulator, the outcomes are identical for some of the numbers considered.
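For context (our own sketch, not the reduced circuit proposed above), the classical scaffolding of Shor's algorithm reduces factoring to order finding; here the quantum subroutine is stood in for by brute-force order computation.

```python
import math, random

def order(a, N):
    """Multiplicative order of a mod N -- the quantity that the quantum
    subroutine of Shor's algorithm estimates; brute-forced here."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_classical(N, tries=50):
    """Classical reduction: recover a factor of N from the order of a random base."""
    for _ in range(tries):
        a = random.randrange(2, N)
        g = math.gcd(a, N)
        if g > 1:
            return g               # lucky draw: a already shares a factor with N
        r = order(a, N)
        if r % 2:                  # need an even order
            continue
        y = pow(a, r // 2, N)
        if y == N - 1:             # a^(r/2) = -1 mod N gives only trivial factors
            continue
        g = math.gcd(y - 1, N)
        if 1 < g < N:
            return g
    return None

factor = shor_classical(15)
print(factor)  # a nontrivial factor of 15, i.e. 3 or 5
```

Replacing `order` with a quantum phase-estimation circuit is precisely the part that the number of qubits and gates constrains on noisy hardware.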
We consider the problem of frequency estimation for a single bosonic field evolving under a squeezing Hamiltonian and continuously monitored via homodyne detection. In particular, we exploit reinforcement learning techniques to devise feedback control strategies achieving increased estimation precision. We show that the feedback control determined by the neural network greatly surpasses, in the long-time limit, the performance of both the "no-control" strategy and the standard "open-loop control" strategy, which we consider as benchmarks. We indeed observe how the devised strategy is able to optimize this nontrivial estimation problem by preparing a large fraction of trajectories corresponding to more sensitive quantum conditional states.
Understanding the emergence of objectivity from the quantum realm has been a long-standing issue strongly related to the quantum-to-classical crossover. Quantum Darwinism provides an answer, interpreting objectivity as consensus between independent observers. Quantum computers provide an interesting platform for such experimental investigation of quantum Darwinism, fulfilling their originally intended purpose as quantum simulators. Here we assess to what degree current NISQ devices can be used as experimental platforms in the field of quantum Darwinism. We do this by simulating an exactly solvable stochastic collision model, taking advantage of the analytical solution to benchmark the experimental results.
The description of the complex separability structure of quantum states in terms of partially ordered sets has been recently put forward. In this work, we address the question of how to efficiently determine these structures for unknown states. We propose an experimentally accessible and scalable iterative methodology that identifies, on solid statistical grounds, sufficient conditions for non-separability with respect to certain partitions. In addition, we propose an algorithm to determine the minimal partitions (those that do not admit further splitting) consistent with the experimental observations. We test our methodology experimentally on a 20-qubit IBM quantum computer by inferring the structure of a 4-qubit Smolin state and an 8-qubit W state. In the first case, our results reveal that, while the fidelity of the state is low, it nevertheless exhibits the partitioning structure expected from the theory. In the case of the W state, we obtain very disparate results in different runs on the device, which range from non-separable states to very fragmented minimal partitions with little entanglement in the system. Furthermore, our work demonstrates the applicability of informationally complete POVMs for practical purposes on current NISQ devices.
Many prominent quantum computing algorithms with applications in fields such as chemistry and materials science require a large number of measurements, which represents an important roadblock for future real-world use cases. We introduce a novel approach to tackle this problem through an adaptive measurement scheme. We present an algorithm that optimizes informationally complete positive operator-valued measures (POVMs) on the fly in order to minimize the statistical fluctuations in the estimation of relevant cost functions. We show its advantage by improving the efficiency of the variational quantum eigensolver in calculating ground-state energies of molecular Hamiltonians with extensive numerical simulations. Our results indicate that the proposed method is competitive with state-of-the-art measurement-reduction approaches in terms of efficiency. In addition, the informational completeness of the approach offers a crucial advantage, as the measurement data can be reused to infer other quantities of interest. We demonstrate the feasibility of this prospect by reusing ground-state energy-estimation data to perform high-fidelity reduced state tomography.
Tests of quantum mechanics on a macroscopic scale require extreme control over mechanical motion and its decoherence. Quantum control of mechanical motion has been achieved by engineering the radiation-pressure coupling between a micromechanical oscillator and the electromagnetic field in a resonator. Furthermore, measurement-based feedback control relying on cavity-enhanced detection schemes has been used to cool micromechanical oscillators to their quantum ground states. In contrast to mechanically tethered systems, optically levitated nanoparticles are particularly promising candidates for matter-wave experiments with massive objects, since their trapping potential is fully controllable. In this work, we optically levitate a femto-gram dielectric particle in cryogenic free space, which suppresses thermal effects sufficiently to make the measurement backaction the dominant decoherence mechanism. With an efficient quantum measurement, we exert quantum control over the dynamics of the particle. We cool its center-of-mass motion by measurement-based feedback to an average occupancy of 0.65 motional quanta, corresponding to a state purity of 43%. The absence of an optical resonator and its bandwidth limitations holds promise to transfer the full quantum control available for electromagnetic fields to a mechanical system. Together with the fact that the optical trapping potential is highly controllable, our experimental platform offers a route to investigating quantum mechanics at macroscopic scales.
Noise-assisted transport phenomena highlight the nontrivial interplay between environmental effects and quantum coherence in achieving maximal efficiency. Due to the complexity of biochemical systems and their environments, effective open quantum system models capable of providing physical insights on the presence and role of quantum effects are highly needed. In this paper, we introduce a new approach that combines an effective quantum microscopic description with a classical stochastic one. Our stochastic collision model describes both Markovian and non-Markovian dynamics without relying on the weak coupling assumption. We investigate the consequences of spatial and temporal heterogeneity of noise on transport efficiency in a fully connected graph and in the Fenna-Matthews-Olson complex. Our approach shows how to meaningfully formulate questions, and provide answers, on important open issues such as the properties of optimal noise and the emergence of the network structure as a result of an evolutionary process.
Using the Lindblad equation approach, we derive the range of the parameters of an interacting one-dimensional electronic chain connected to two reservoirs in the large-bias limit in which an optimal working point (corresponding to a change in the monotonicity of the stationary current as a function of the applied bias) emerges in the nonequilibrium stationary state. In the specific case of the one-dimensional spinless fermionic Hubbard chain, we prove that an optimal working point emerges in the dependence of the stationary current on the coupling between the chain and the reservoirs, both in the interacting and in the noninteracting case. We show that the optimal working point is robust against localized defects of the chain, as well as against a limited amount of quenched disorder. Finally, we discuss the importance of our results for optimizing the performance of a quantum circuit by tuning its components as close as possible to their optimal working point.
We introduce an experimentally accessible network representation for many-body quantum states based on entanglement between all pairs of its constituents. We illustrate the power of this representation by applying it to a paradigmatic spin chain model, the XX model, and showing that it brings to light new phenomena. The analysis of these entanglement networks reveals that the gradual establishment of quasi-long range order is accompanied by a symmetry regarding single-spin concurrence distributions, as well as by instabilities in the network topology. Moreover, we identify the existence of emergent entanglement structures, spatially localised communities enforced by the global symmetry of the system that can be revealed by model-agnostic community detection algorithms. The network representation further unveils the existence of structural classes and a cyclic self-similarity in the state, which we conjecture to be intimately linked to the community structure. Our results demonstrate that the use of tools and concepts from complex network theory enables the discovery, understanding, and description of new physical phenomena even in models studied for decades.
We show that continuous quantum nondemolition (QND) measurement of an atomic ensemble is able to improve the precision of frequency estimation even in the presence of independent dephasing acting on each atom. We numerically simulate the dynamics of an ensemble with up to N = 150 atoms initially prepared in a (classical) spin coherent state, and we show that, thanks to the spin squeezing dynamically generated by the measurement, the information obtainable from the continuous photocurrent scales superclassically with respect to the number of atoms N. We provide evidence that such superclassical scaling holds for different values of dephasing and monitoring efficiency. We moreover calculate the extra information obtainable via a final strong measurement on the conditional states generated during the dynamics and show that the corresponding ultimate limit is nearly achieved via a projective measurement of the spin-squeezed collective spin operator. We also briefly discuss the difference between our protocol and standard estimation schemes, where the state preparation time is neglected.
The information on a quantum process acquired through measurements plays a crucial role in the determination of its non-equilibrium thermodynamic properties. We report on the experimental inference of the stochastic entropy production rate for a continuously monitored mesoscopic quantum system. We consider an optomechanical system subjected to continuous displacement Gaussian measurements and characterise the entropy production rate of the individual trajectories followed by the system in its stochastic dynamics, employing a phase-space description in terms of the Wigner entropy. Owing to the specific regime of our experiment, we are able to single out the informational contribution to the entropy production arising from conditioning the state on the measurement outcomes. Our experiment embodies a significant step towards the demonstration of full-scale control of fundamental thermodynamic processes at the mesoscopic quantum scale.
Many applications of quantum information processing (QIP) require distribution of quantum states in networks, both within and between distant nodes. Optical quantum states are uniquely suited for this purpose, as they propagate with ultralow attenuation and are resilient to ubiquitous thermal noise. Mechanical systems are then envisioned as versatile interfaces between photons and a variety of solid-state QIP platforms. Here, we demonstrate a key step towards this vision, and generate entanglement between two propagating optical modes, by coupling them to the same, cryogenic mechanical system. The entanglement persists at room temperature, where we verify the inseparability of the bipartite state and fully characterize its logarithmic negativity by homodyne tomography. We detect, without any corrections, correlations corresponding to a logarithmic negativity of $E_\mathrm{N}=0.35$. Combined with quantum interfaces between mechanical systems and solid-state qubit processors already available or under development, this paves the way for mechanical systems enabling long-distance quantum information networking over optical fiber networks.
We discuss the problem of estimating a frequency via N-qubit probes undergoing independent dephasing channels that can be continuously monitored via homodyne or photo-detection. We derive the corresponding analytical solutions for the conditional states, for generic initial states and for arbitrary efficiency of the continuous monitoring. For the detection strategies considered, we show that: i) in the case of perfect continuous detection, the quantum Fisher information (QFI) of the conditional states is equal to the one obtained in the noiseless dynamics; ii) for smaller detection efficiencies, the QFI of the conditional state is equal to the QFI of a state undergoing the (unconditional) dephasing dynamics, but with an effectively reduced noise parameter.
We introduce the concept of pairwise tomography networks to characterise quantum properties in many-body systems and demonstrate an efficient protocol to measure them experimentally. Pairwise tomography networks are generators of multiplex networks where each layer represents the graph of a relevant quantifier such as, e.g., concurrence, quantum discord, purity, quantum mutual information, or classical correlations. We propose a measurement scheme to perform two-qubit tomography of all pairs showing exponential improvement in the number of qubits $N$ with respect to previously existing methods. We illustrate the usefulness of our approach by means of several examples revealing its potential impact to quantum computation, communication and simulation. We perform a proof-of-principle experiment demonstrating pairwise tomography networks of $W$ states on IBM Q devices.
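As a numerical illustration (ours, not the paper's code), the concurrence layer of a pairwise tomography network can be computed directly from a known state vector; for the $N$-qubit $W$ state every pair carries concurrence $2/N$.

```python
import numpy as np
from itertools import combinations

def w_state(n):
    """N-qubit W state as a 2^n state vector."""
    psi = np.zeros(2 ** n)
    for k in range(n):
        psi[1 << k] = 1 / np.sqrt(n)
    return psi

def two_qubit_rdm(psi, i, j, n):
    """Reduced density matrix of qubits i, j of an n-qubit pure state."""
    t = psi.reshape([2] * n)
    m = np.moveaxis(t, [i, j], [0, 1]).reshape(4, -1)
    return m @ m.conj().T

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix."""
    sy = np.array([[0, -1j], [1j, 0]])
    yy = np.kron(sy, sy)
    R = rho @ yy @ rho.conj() @ yy
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(R).real)))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

n = 4
psi = w_state(n)
net = {(i, j): concurrence(two_qubit_rdm(psi, i, j, n))
       for i, j in combinations(range(n), 2)}
print(net)  # every edge of the network carries concurrence 2/n = 0.5
```

In an experiment, each reduced density matrix would instead come from the two-qubit tomography scheme described above; the post-processing into a network layer is identical.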
We study the population dynamics and quantum transport efficiency of a multi-site dissipative system driven by random telegraph noise (RTN), using a variational polaron master equation for both linear chain and ring configurations. By using two different environment descriptions -- RTN only and a thermal bath+RTN -- we show that the presence of the classical noise has a nontrivial role in the quantum transport. We observe that there exist large areas of parameter space where the combined bath+RTN influence is clearly beneficial for populating the target state of the transport, and for the average trapping time and transport efficiency when accounting for the presence of the reaction center via the use of a sink. This result holds for both of the considered intra-site coupling configurations, chain and ring. In general, our formalism and results provide a platform for engineering and characterizing efficient quantum transport in multi-site systems, both for realistic environments and for engineered systems.
It is commonly believed that decoherence arises as a result of the entangling interaction between a quantum system and its environment, as a consequence of which the environment effectively measures the system, thus washing away its quantum properties. Moreover, this interaction results in the emergence of a classical objective reality, as described by Quantum Darwinism. In this Letter, we show that the widely held idea that entanglement is needed for decoherence is imprecise. We propose a new mechanism, dynamical mixing, capable of inducing decoherence dynamics on a system without creating any entanglement with its quantum environment. We illustrate this mechanism with a simple and exactly solvable collision model. Interestingly, we find that Quantum Darwinism does not occur if the system undergoes entanglement-free decoherence; only when the effect of a super-environment introducing system-environment entanglement is taken into account does an objective reality emerge. Our results lead to the unexpected conclusion that system-environment entanglement is not necessary for decoherence or information back-flow, but plays a crucial role in the emergence of an objective reality.
The advent of Noisy Intermediate-Scale Quantum (NISQ) technology is rapidly changing the landscape and modality of research in quantum physics. NISQ devices, such as the IBM Q Experience, have very recently proven their capability as experimental platforms accessible to everyone around the globe. Until now, IBM Q Experience processors have mostly been used for quantum computation and the simulation of closed systems. Here we show that these devices are also able to implement a great variety of paradigmatic open quantum system models, hence providing a robust and flexible testbed for open quantum systems theory. During the last decade an increasing number of experiments have successfully tackled the task of simulating open quantum systems on different platforms, from linear optics to trapped ions, from Nuclear Magnetic Resonance (NMR) to Cavity Quantum Electrodynamics. Generally, each individual experiment demonstrates a specific open quantum system model, or at most a specific class. Our main result is to prove the great versatility of the IBM Q Experience processors. Indeed, we experimentally implement one- and two-qubit open quantum systems, both unital and non-unital dynamics, and both Markovian and non-Markovian evolutions. Moreover, we realise proof-of-principle reservoir engineering for entangled state generation, demonstrate collisional models, and verify revivals of quantum channel capacity and extractable work caused by memory effects. All these results are obtained using IBM Q Experience processors publicly available and remotely accessible online.
We address continuous-time quantum walks on graphs in the presence of time- and space-dependent noise. Noise is modeled as generalized dynamical percolation, i.e. classical time-dependent fluctuations affecting the tunneling amplitudes of the walker. In order to illustrate the general features of the model, we review recent results on two paradigmatic examples: the dynamics of quantum walks on the line and the effects of noise on the performance of quantum spatial search on the complete and the star graph. We also discuss future perspectives, including extensions to many-particle quantum walks, to noise models for on-site energies and to the analysis of different noise spectra. Finally, we address the use of quantum walks as a quantum probe to characterize defects and perturbations occurring in complex, classical and quantum, networks.
Continuous weak measurement allows localizing open quantum systems in state space, and tracing their quantum trajectories as they evolve in time. Efficient quantum measurement schemes have previously enabled recording quantum trajectories of microwave photon and qubit states. We apply these concepts to a macroscopic mechanical resonator, and follow the quantum trajectory of its motional state conditioned on a continuous optical measurement record. Starting with a thermal mixture, we eventually obtain coherent states of 78% purity--comparable to a displaced thermal state of occupation 0.14. We introduce a retrodictive measurement protocol to directly verify state purity along the trajectory, and furthermore observe state collapse and decoherence. This opens the door to measurement-based creation of advanced quantum states, and potential tests of gravitational decoherence models.
Continuous-time quantum walks may be exploited to enhance spatial search, i.e., for finding a marked element in a database structured as a complex network. However, in practical implementations, the environmental noise has detrimental effects, and a question arises on whether noise engineering may be helpful in mitigating those effects on the performance of the quantum algorithm. Here we study whether time-correlated noise inducing non-Markovianity may represent a resource for quantum search. In particular, we consider quantum search on a star graph, which has been proven to be optimal in the noiseless case, and analyze the effects of independent random telegraph noise (RTN) disturbing each link of the graph. Upon exploiting an exact code for the noisy dynamics, we evaluate the quantum non-Markovianity of the evolution, and show that it cannot be considered as a resource for this algorithm, since its presence is correlated with lower probabilities of success of the search.
Quantum mechanics dictates that the precision of physical measurements must be subject to certain constraints. In the case of interferometric displacement measurements, these restrictions impose a 'standard quantum limit' (SQL), which optimally balances the precision of a measurement with its unwanted backaction. To go beyond this limit, one must devise more sophisticated measurement techniques, which either 'evade' the backaction of the measurement, or achieve clever cancellation of the unwanted noise at the detector. In the half-century since the SQL was established, systems ranging from LIGO to ultracold atoms and nanomechanical devices have pushed displacement measurements towards this limit, and a variety of sub-SQL techniques have been tested in proof-of-principle experiments. However, to date, no experimental system has successfully demonstrated an interferometric displacement measurement with sensitivity (including all relevant noise sources: thermal, backaction, and imprecision) below the SQL. Here, we exploit strong quantum correlations in an ultracoherent optomechanical system to demonstrate off-resonant force and displacement sensitivity reaching 1.5 dB below the SQL. This achieves an outstanding goal in mechanical quantum sensing, and further enhances the prospects of using such devices for state-of-the-art force sensing applications.
We address quantum spatial search on graphs and its implementation by continuous-time quantum walks in the presence of dynamical noise. In particular, we focus on search on the complete graph and on the star graph of order $N$, proving that also the latter is optimal in the computational limit $N \gg 1$, being nearly optimal also for small $N$. The noise is modeled by independent sources of random telegraph noise (RTN), dynamically perturbing the links of the graph. We observe two different behaviours depending on the switching rate of RTN: fast noise only slightly degrades performance, whereas slow noise is more detrimental and, in general, lowers the success probability. In particular, we still find a quadratic speed-up for the average running time of the algorithm, while for the star graph with external target node we observe a transition to classical scaling. We also address how the effects of noise depend on the order of the graphs, and discuss the role of the graph topology. Overall, our results suggest that realizations of quantum spatial search are possible with current technology, and also indicate the star graph as the perfect candidate for the implementation by noisy quantum walks, owing to its simple topology and nearly optimal performance also for just few nodes.
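The noiseless baseline of this search is easy to reproduce numerically (our sketch; parameters are illustrative): on the complete graph of order $N$, the Hamiltonian $H = -\gamma A - |w\rangle\langle w|$ with $\gamma = 1/N$ rotates the uniform state onto the marked vertex $|w\rangle$ with success probability close to one at $t \approx (\pi/2)\sqrt{N}$.

```python
import numpy as np

N, gamma = 16, 1 / 16
A = np.ones((N, N)) - np.eye(N)        # adjacency matrix of the complete graph
w = np.zeros(N); w[0] = 1.0            # marked vertex
H = -gamma * A - np.outer(w, w)        # search Hamiltonian H = -gamma*A - |w><w|
s = np.full(N, 1 / np.sqrt(N))         # uniform superposition, the initial state

# evolve via the eigendecomposition of the (real symmetric) Hamiltonian
vals, vecs = np.linalg.eigh(H)
ts = np.linspace(0, 2 * np.pi * np.sqrt(N), 400)
P = [abs(w @ (vecs @ (np.exp(-1j * vals * t) * (vecs.T @ s)))) ** 2 for t in ts]
print(max(P), ts[int(np.argmax(P))])   # probability ~1, first peak near (pi/2)*sqrt(N)
```

Adding RTN as in the abstract amounts to making the entries of `A` fluctuate in time between two values along each link and averaging the success probability over realizations.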
It has recently been shown [Rossi et al., Phys. Rev. Lett. 119, 123603 (2017); ibid. 120, 073601 (2018)] that feedback-controlled in-loop light can be used to enhance the efficiency of optomechanical systems. We analyse the theoretical grounds of this approach and explore its potentialities and limitations. We discuss the validity of the model, analyse the properties of in-loop cavities, and show how they can be used to observe coherent optomechanical oscillations also with weakly coupled systems, improve the sideband cooling performance, and increase ponderomotive squeezing.
Controlling a quantum system based on the observation of its dynamics is inevitably complicated by the backaction of the measurement process. Efficient measurements, however, maximize the amount of information gained per disturbance incurred. Real-time feedback then enables both canceling the measurement's backaction and controlling the evolution of the quantum state. While such measurement-based quantum control has been demonstrated in the clean settings of cavity and circuit quantum electrodynamics, its application to motional degrees of freedom has remained elusive. Here we show measurement-based quantum control of the motion of a millimetre-sized membrane resonator. An optomechanical transducer resolves the zero-point motion of the soft-clamped resonator in a fraction of its millisecond coherence time, with an overall measurement efficiency close to unity. We use this position record to feedback-cool a resonator mode to its quantum ground state (residual thermal occupation $n = 0.29 \pm 0.03$), 9 dB below the quantum backaction limit of sideband cooling, and six orders of magnitude below the equilibrium occupation of its thermal environment. This realizes a long-standing goal in the field, and adds position and momentum to the degrees of freedom amenable to measurement-based quantum control, with potential applications in quantum information processing and gravitational wave detectors.
We study quantum frequency estimation for $N$ qubits subjected to independent Markovian noise, via strategies based on time-continuous monitoring of the environment. Both physical intuition and an extended convexity property of the quantum Fisher information (QFI) suggest that these strategies are more effective than the standard ones based on the measurement of the unconditional state after the noisy evolution. Here we focus on initial GHZ states and on parallel or transverse noise. For parallel noise, i.e. dephasing, we show that perfectly efficient time-continuous photo-detection allows one to recover the unitary (noiseless) QFI, and thus to obtain a Heisenberg scaling for every value of the monitoring time. For finite detection efficiency, one falls back to the noisy standard quantum limit scaling, but with a constant enhancement due to an effective reduced dephasing. Also in the transverse noise case we obtain that the Heisenberg scaling is recovered for perfectly efficient detectors, and we find that both homodyne- and photo-detection based strategies are optimal. For finite detector efficiency, our numerical simulations show that, as expected, an enhancement can be observed, but we cannot give any conclusive statement regarding the scaling. We finally describe in detail the stable and compact numerical algorithm that we have developed in order to evaluate the precision of such time-continuous estimation strategies, and that may find application in other quantum metrology schemes.
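A generic building block of such numerical evaluations (our minimal sketch, not the authors' stable algorithm) is the Euler–Maruyama integration of a stochastic master equation. Below, a single qubit undergoes homodyne-monitored dephasing, and averaging the conditional trajectories recovers the unconditional coherence decay; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
sz = np.diag([1.0, -1.0]).astype(complex)

def trajectory(kappa=1.0, eta=1.0, T=0.5, dt=1e-3):
    """Euler-Maruyama unravelling of homodyne-monitored dephasing:
    drho = kappa*(sz rho sz - rho) dt
         + sqrt(eta*kappa)*(sz rho + rho sz - 2<sz> rho) dW."""
    rho = 0.5 * np.ones((2, 2), dtype=complex)   # start in |+>
    for _ in range(int(T / dt)):
        dW = rng.normal(0.0, np.sqrt(dt))
        m = np.trace(sz @ rho).real              # conditional expectation <sz>
        drho = kappa * (sz @ rho @ sz - rho) * dt \
             + np.sqrt(eta * kappa) * (sz @ rho + rho @ sz - 2 * m * rho) * dW
        rho = rho + drho
        rho = rho / np.trace(rho).real           # renormalize against Euler drift
    return rho

coh = np.mean([trajectory()[0, 1].real for _ in range(300)])
print(coh)  # ensemble average approaches the unconditional exp(-2*kappa*T)/2
```

A production algorithm would use a positivity-preserving integrator, but the structure — propagate conditional states, then extract the estimation precision from the trajectory ensemble — is the same.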
Normal-mode splitting is the most evident signature of strong coupling between two interacting subsystems. It occurs when two subsystems exchange energy between themselves faster than they dissipate it to the environment. Here we experimentally show that a weakly coupled optomechanical system at room temperature can manifest normal-mode splitting when the pump field fluctuations are anti-squashed by a phase-sensitive feedback loop operating close to its instability threshold. Under these conditions the optical cavity exhibits an effectively reduced decay rate, so that the system is effectively promoted to the strong coupling regime.
We address memory effects and diffusive properties of a continuous-time quantum walk on a one-dimensional percolation lattice affected by spatially correlated random telegraph noise. In particular, by introducing spatially correlated time-dependent fluctuations in nearest-neighbor hopping amplitudes, we describe random domains characterized by global noise. The resulting open dynamics of the walker is then unraveled by an ensemble average over all the noise realizations. Our results show that time-dependent noise assisted by spatial correlations leads to strong memory effects in the walker dynamics and to robust diffusive behavior against the detrimental action of uncorrelated noise. We also show that spatially correlated classical noise enhances localization breaking, thus making a quantum particle spread on longer distances across the lattice.
We address the estimation of the magnetic field B acting on an ensemble of atoms with total spin J subjected to collective transverse noise. By preparing an initial spin coherent state, for any measurement performed after the evolution, the mean-square error of the estimate is known to scale as $1/J$, i.e. no quantum enhancement is obtained. Here, we consider the possibility of continuously monitoring the atomic environment, and conclusively show that strategies based on time-continuous non-demolition measurements followed by a final strong measurement may achieve Heisenberg-limited scaling $1/{J^2}$ and also a monitoring-enhanced scaling in terms of the interrogation time. We also find that time-continuous schemes are robust against detection losses, as we prove that the quantum enhancement can be recovered also for finite measurement efficiency. Finally, we analytically prove the optimality of our strategy.
We unveil a novel source of non-Markovianity for the dynamics of quantum systems, which appears when the system does not explore the full set of dynamical trajectories in the interaction with its environment. We term this effect non-Markovianity by undersampling and demonstrate its appearance in the operation of an all-optical quantum simulator involving a polarization qubit interacting with a dephasing fluctuating environment.
We realise a feedback-controlled optical Fabry-Perot cavity in which the transmitted cavity output is used to modulate the input amplitude fluctuations. The resulting phase-dependent fluctuations of the in-loop optical field, which may be either sub-shot- or super-shot-noise, can be engineered to favorably affect the optomechanical interaction with a nanomechanical membrane placed within the cavity. Here we show that in the super-shot-noise regime ("anti-squashed light") the in-loop field has a strongly reduced effective cavity linewidth, corresponding to an increased optomechanical cooperativity. In this regime feedback improves the simultaneous resolved sideband cooling of two nearly degenerate membrane mechanical modes by one order of magnitude.
We realise a phase-sensitive closed-loop control scheme to engineer the fluctuations of the pump field which drives an optomechanical system, and show that the corresponding cooling dynamics can be significantly improved. In particular, operating in the counter-intuitive "anti-squashing" regime of positive feedback and increased field fluctuations, sideband cooling of a nanomechanical membrane within an optical cavity can be improved by 7.5~dB with respect to the case without feedback. Close to the quantum regime of reduced thermal noise, such feedback-controlled light would allow going well below the quantum backaction cooling limit.
We address the dynamics of a bosonic system coupled to either a bosonic or a magnetic environment, and derive a set of sufficient conditions that allow one to describe the dynamics in terms of the effective interaction with a classical fluctuating field. We find that for short interaction times the dynamics of the open system is described by a Gaussian noise map for several different interaction models and independently on the temperature of the environment. In order to go beyond a qualitative understanding of the origin and physical meaning of the above short-time constraint, we take a general viewpoint and, based on an algebraic approach, suggest that any quantum environment can be described by classical fields whenever global symmetries lead to the definition of environmental operators that remain well defined when increasing the size, i.e. the number of dynamical variables, of the environment. In the case of the bosonic environment this statement is exactly demonstrated via a constructive procedure that explicitly shows why a large number of environmental dynamical variables and, necessarily, global symmetries, entail the set of conditions derived in the first part of the work.
We study the entanglement properties of quantum hypergraph states of $n$ qubits, focusing on multipartite entanglement. We compute multipartite entanglement for hypergraph states with a single hyperedge of maximum cardinality, for hypergraph states endowed with all possible hyperedges of cardinality equal to $n-1$, and for those hypergraph states with all possible hyperedges of cardinality greater than or equal to $n-1$. We then find a lower bound to the multipartite entanglement of a generic quantum hypergraph state. We finally apply the multipartite entanglement results to the construction of entanglement witness operators, able to detect genuine multipartite entanglement in the neighbourhood of a given hypergraph state. We first build entanglement witnesses of the projective type, then propose a class of witnesses based on the stabilizer formalism, hence called stabilizer witnesses, which reduce the experimental effort from exponential to linear growth of the number of local measurement settings with the number of qubits.
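As an illustration of the objects involved (not code from the paper itself), a hypergraph state can be built numerically by applying a multi-controlled-Z gate for each hyperedge to the state $|+\rangle^{\otimes n}$; the following minimal NumPy sketch uses a 3-qubit example with the single maximal hyperedge, with qubit labels chosen for illustration:

```python
import numpy as np

def hypergraph_state(n, hyperedges):
    """Build the hypergraph state: apply, for each hyperedge e, the gate
    C^{|e|}Z_e to |+>^n. The gate flips the sign of every computational
    basis state in which all qubits of e are 1."""
    dim = 2 ** n
    psi = np.full(dim, 1 / np.sqrt(dim))  # |+>^n: uniform amplitudes
    for e in hyperedges:
        for b in range(dim):
            # qubit i is 1 in basis state b iff bit (n-1-i) of b is set
            if all((b >> (n - 1 - i)) & 1 for i in e):
                psi[b] = -psi[b]
    return psi

# 3-qubit state with the single hyperedge {0,1,2} of maximum cardinality,
# i.e. a CCZ gate acting on |+++>: only the |111> amplitude changes sign.
psi = hypergraph_state(3, [{0, 1, 2}])
```

The sign pattern of the amplitudes encodes the hyperedge structure, which is what the stabilizer-based witnesses mentioned above exploit.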
We suggest and demonstrate an all-optical quantum simulator for single-qubit noisy channels originating from the interaction with a fluctuating field. The simulator employs the polarization degree of freedom of a single photon, and exploits its spectral components to average over the realizations of the stochastic dynamics. As a proof of principle, we run simulations of dephasing channels driven either by Gaussian (Ornstein-Uhlenbeck) or non-Gaussian (random telegraph) stochastic processes.
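To convey the kind of averaging such a simulator performs, here is a purely numerical sketch (not the optical implementation) of a dephasing channel driven by random telegraph noise: the qubit coherence is the average of the accumulated random phase over many realizations of the stochastic process. The switching rate, coupling strength, and trajectory count below are illustrative choices, not parameters from the experiment:

```python
import numpy as np

rng = np.random.default_rng(0)

def rtn_dephasing(t, nu=1.0, gamma=0.5, n_traj=2000, n_steps=200):
    """Coherence <exp(i * nu * integral_0^t c(s) ds)> for random telegraph
    noise c(s) = +/-1 switching at rate gamma, averaged over n_traj
    stochastic realizations (Euler discretization with n_steps steps)."""
    dt = t / n_steps
    c = rng.choice([-1.0, 1.0], size=n_traj)  # random initial RTN state
    phase = np.zeros(n_traj)
    for _ in range(n_steps):
        phase += nu * c * dt                      # accumulate random phase
        flips = rng.random(n_traj) < gamma * dt   # Poissonian switching
        c[flips] = -c[flips]
    return np.mean(np.exp(1j * phase))

coh = rtn_dephasing(t=2.0)  # decayed off-diagonal element, |coh| < 1
```

In the all-optical simulator this ensemble average is realized physically by the spectral components of the photon; here it is taken explicitly over sampled trajectories.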
Quantum gravity theories predict a minimal length at the order of magnitude of the Planck length, below which the concepts of space and time lose any physical meaning. In quantum mechanics, the emergence of such a minimal length can be described by introducing a modified position-momentum commutator, which in turn yields a generalized uncertainty principle, where the uncertainty on the position measurement has a lower bound. The value of the minimal length is not predicted by these theories and must be evaluated experimentally. In this paper, we address the quantum bound on the estimability of the minimal uncertainty length via measurements on a harmonic oscillator, a system that is analytically solvable in the deformed algebra of the Hilbert subspace.
A usual assumption in quantum estimation is that the unknown parameter labels the possible states of the system, while it influences neither the sample space of outcomes nor the measurement aimed at extracting information on the parameter itself. This assumption is crucial to prove the quantum Cramér-Rao theorem and to introduce the quantum Fisher information as an upper bound to the Fisher information of any possible measurement. However, there are relevant estimation problems where this assumption does not hold and an alternative approach should be developed to find the genuine ultimate bound on the precision of quantum measurements. We investigate physical situations where there is an intrinsic dependence of the measurement strategy on the parameter and find that quantum-enhanced measurements may be more precise than previously thought.
We address the quantum estimation of the diamagnetic, or $A^2$, term in an effective model of light-matter interaction featuring two coupled oscillators. First, we calculate the quantum Fisher information of the diamagnetic parameter in the interacting ground state. Then, we find that typical measurements on the transverse radiation field, such as homodyne detection or photon counting, allow one to estimate the diamagnetic coupling constant with near-optimal efficiency in a wide range of model parameters. Should the model admit a critical point, we also find that both measurements would become asymptotically optimal in its vicinity. Finally, we discuss binary discrimination strategies between the two most debated hypotheses involving the diamagnetic term in circuit QED. While we adopt a terminology appropriate to the Coulomb gauge, our results are also relevant for the electric dipole gauge. In that case, our calculations would describe the estimation of the so-called transverse $P^2$ term. The derived metrological benchmarks are general and relevant to any implementation of the model, cavity and circuit QED being two relevant examples.
We address the characterization of dissipative bosonic channels and show that estimation of the loss rate by Gaussian probes (coherent or squeezed) is improved in the presence of Kerr nonlinearity. In particular, enhancement of precision may be substantial for short interaction time, i.e. for media of moderate size, e.g. biological samples. We analyze in detail the behaviour of the quantum Fisher information (QFI), and determine the values of nonlinearity maximizing the QFI as a function of the interaction time and of the parameters of the input signal. We also discuss the precision achievable by photon counting and quadrature measurement and present additional results for truncated, few-photon, probe signals. Finally, we discuss the origin of the precision enhancement, showing that it cannot be linked quantitatively to the non-Gaussianity of the interacting probe signal.
E. Serra, M. Bawaj, A. Borrielli, G. Di Giuseppe, S. Forte, N. Kralj, N. Malossi, L. Marconi, F. Marin, F. Marino, B. Morana, R. Natali, G. Pandraud, A. Pontin, G.A. Prodi, M. Rossi, P.M. Sarro, D. Vitali, M. Bonaldi
In view of the integration of membrane resonators with more complex MEMS structures, we developed a general fabrication procedure for circular shape SiN$_x$ membranes using Deep Reactive Ion Etching (DRIE). Large area and high-stress SiN$_x$ membranes were fabricated and used as optomechanical resonators in a Michelson interferometer and in a Fabry-Pérot cavity. The measurements show that the fabrication process preserves both the optical quality and the mechanical quality factor of the membrane.
We address the interaction of single- and two-qubit systems with external fluctuating transverse fields and analyze in detail the dynamical decoherence induced by Gaussian and non-Gaussian noise, e.g. random telegraph noise (RTN). Upon exploiting the exact RTN solution of the time-dependent von Neumann equation, we analyze in detail the behavior of quantum correlations and prove the non-Markovianity of the dynamical map in the full parameter range, i.e. for either fast or slow noise. The dynamics induced by Gaussian noise is studied numerically and compared to the RTN solution, showing the existence of (state-dependent) regions of the parameter space where the two noises lead to very similar dynamics. Our results show that while the effects of non-Gaussian noise cannot be trivially mapped to those of Gaussian noise and vice versa, i.e. the spectrum alone is not enough to summarize the noise effects, the dynamics under the effect of one kind of noise may be simulated with high fidelity by the other.
We experimentally show how classical correlations can be turned into quantum entanglement, via the presence of non-unital local noise and the action of a CNOT gate. We first implement a simple two-qubit protocol in which entanglement production is not possible in the absence of local non-unital noise, while entanglement arises with the introduction of noise, and is proportional to the degree of noisiness. We then perform a more elaborate four-qubit experiment, by employing two hyperentangled photons initially carrying only classical correlations. We demonstrate a scheme where the entanglement is generated via local non-unital noise, with the advantage of being robust against local unitaries performed by an adversary.
We address the use of entangled qubits as quantum probes to characterize the noise induced by complex environments. In particular, we show that a joint measurement on entangled probes can improve estimation of the correlation time for a broad class of environmental noises compared to any sequential strategy involving single-qubit preparation. The enhancement appears when the noise is faster than a threshold value, a regime which may always be achieved by tuning the coupling between the quantum probe and the environment inducing the noise. Our scheme exploits the time-dependent sensitivity of quantum systems to decoherence and does not require dynamical control on the probes. We derive the optimal interaction time and the optimal probe preparation, showing that the latter corresponds to multiqubit GHZ states when entanglement is useful. We also show the robustness of the scheme against depolarization or dephasing of the probe, and discuss simple measurements approaching optimal precision.