We study quantum frequency estimation for N qubits subjected to independent Markovian noise, via strategies based on time-continuous monitoring of the environment. Both physical intuition and an extended convexity property of the quantum Fisher information (QFI) suggest that these strategies are more effective than the standard ones based on measuring the unconditional state after the noisy evolution. Here we focus on initial GHZ states and on parallel or transverse noise. For parallel noise, i.e. dephasing, we show that perfectly efficient time-continuous photo-detection allows one to recover the unitary (noiseless) QFI, and thus to obtain Heisenberg scaling for every value of the monitoring time. For finite detection efficiency, one falls back to the noisy standard-quantum-limit scaling, but with a constant enhancement due to an effectively reduced dephasing. In the transverse-noise case we likewise find that Heisenberg scaling is recovered for perfectly efficient detectors, and that both homodyne- and photo-detection-based strategies are optimal. For finite detector efficiency, our numerical simulations show that, as expected, an enhancement can be observed, but we cannot make any conclusive statement regarding the scaling. We finally describe in detail the stable and compact numerical algorithm that we have developed to evaluate the precision of such time-continuous estimation strategies, and that may find application in other quantum metrology schemes.
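The noiseless benchmark this abstract refers to can be made concrete with a minimal numerical sketch (not the authors' monitoring algorithm): for a GHZ state evolving under $H = \omega J_z$ for a time $t$, the QFI of a pure state is $4t^2\,\mathrm{Var}(J_z)$, which evaluates to the Heisenberg scaling $N^2 t^2$.

```python
import numpy as np

def ghz_qfi(n, t=1.0):
    """QFI of an n-qubit GHZ state for frequency estimation under H = omega * Jz.

    For a pure state the QFI equals 4 t^2 Var(Jz); on a GHZ state this gives
    the Heisenberg scaling N^2 t^2 (Jz eigenvalues +/- 1/2 per qubit).
    """
    dim = 2 ** n
    # GHZ state (|0...0> + |1...1>) / sqrt(2)
    psi = np.zeros(dim)
    psi[0] = psi[-1] = 1 / np.sqrt(2)
    # Collective Jz is diagonal in the computational basis
    diag = np.array([sum(0.5 if (k >> i) & 1 == 0 else -0.5 for i in range(n))
                     for k in range(dim)])
    mean = psi @ (diag * psi)
    mean_sq = psi @ (diag ** 2 * psi)
    return 4 * t ** 2 * (mean_sq - mean ** 2)

print(ghz_qfi(4))  # 16.0, i.e. N^2 for N = 4
```

Recovering this value through continuous photo-detection in the presence of dephasing is the non-trivial content of the abstract; the sketch only fixes the target.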
Quantum information theorems state that it is possible to exploit collective quantum resources to greatly enhance the charging power of quantum batteries (QBs) made of many identical elementary units. We here present and solve a model of a QB that can be engineered in solid-state architectures. It consists of N two-level systems coupled to a single photonic mode in a cavity. We contrast this collective model (“Dicke QB”), whereby entanglement is genuinely created by the common photonic mode, with the one in which each two-level system is coupled to its own separate cavity mode (“Rabi QB”). By employing exact diagonalization, we demonstrate the emergence of a quantum advantage in the charging power of Dicke QBs, which scales like $\sqrt{N}$ for $N \gg 1$.
Thermalization processes are ubiquitous in macroscopic thermodynamics, being at the core of optimal processes such as the Carnot engine. However, when dealing with small quantum systems, energy exchanges do not always lead to thermalization. A more realistic model for these situations is partial thermalization, in which the system moves closer to a thermal state without fully reaching it. In fact, partial thermalizations characterize a large set of interactions between small systems. In our work, we show that optimal thermodynamic processes can still be constructed by means of such partial thermalizations. At the fundamental level, this shows that optimal thermodynamic processes are much more common than previously expected in small quantum systems. Furthermore, this result also has implications at the experimental level, by simplifying the task of constructing optimal thermodynamic processes for small engines.
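One simple (assumed, illustrative) model of a partial thermalization is a convex mixture that moves the state a fraction $\lambda$ of the way toward the Gibbs state; the sketch below checks that the trace distance to the thermal state shrinks by exactly $1-\lambda$.

```python
import numpy as np

def partial_thermalization(rho, tau, lam):
    """Toy model of partial thermalization: the state moves a fraction
    lam of the way toward the thermal state tau (0 <= lam <= 1)."""
    return (1 - lam) * rho + lam * tau

def trace_distance(a, b):
    # Half the sum of absolute eigenvalues of the difference
    evals = np.linalg.eigvalsh(a - b)
    return 0.5 * np.sum(np.abs(evals))

# Qubit example: excited state relaxing toward a thermal state
rho = np.diag([0.0, 1.0])      # initial state |1><1|
tau = np.diag([0.8, 0.2])      # thermal (Gibbs) populations
d0 = trace_distance(rho, tau)
rho1 = partial_thermalization(rho, tau, 0.5)
d1 = trace_distance(rho1, tau)
print(d1 / d0)  # 0.5: the distance to the thermal state halves
```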
Phase estimation is one of the most fundamental applications of quantum metrology. By measuring the phase of light reflected from test masses in large-scale interferometers it is possible to detect gravitational waves. In order to better resolve this phase shift it is beneficial to use higher-intensity light, yet reflected light causes random fluctuations in the mirror's motion which reduce sensitivity to the gravitational wave signal in current detector schemes. We explore whether multi-mode states of light, generated by driving the interferometer with a set of lasers operating at distinct frequencies, can improve sensitivity and provide an advantage in overcoming radiation-pressure effects. Considering externally squeezed inputs and optical loss, we apply quantum metrology techniques to interferometers with multiple carrier modes to derive fundamental bounds and evaluate the sensitivity attainable with homodyne detection schemes.
Boson sampling devices are a prime candidate for exhibiting quantum supremacy, yet their application to solving problems of practical interest is less well understood. We show that Gaussian boson sampling (GBS) can be used for dense subgraph identification. Focusing on the NP-hard densest k-subgraph problem, we find that stochastic algorithms are enhanced through GBS, which selects dense subgraphs with high probability. These findings rely on a link between graph density and the number of perfect matchings -- enumerated by the Hafnian -- which is the relevant quantity determining sampling probabilities in GBS. We test our findings by constructing GBS-enhanced versions of the random search and simulated annealing algorithms and applying them, through numerical simulations of GBS, to identify the densest subgraph of a 30-vertex graph.
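The Hafnian mentioned above is, for a 0/1 adjacency matrix, just the number of perfect matchings. A minimal sketch (naive exponential-time recursion, fine for small graphs; dedicated libraries exist for anything larger):

```python
import numpy as np

def hafnian(A):
    """Hafnian of a symmetric matrix: the weighted count of perfect matchings.

    Naive recursion: match vertex 0 with each possible partner j, then recurse
    on the submatrix with both vertices removed. Exponential cost.
    """
    n = A.shape[0]
    if n == 0:
        return 1.0
    if n % 2 == 1:
        return 0.0  # odd graphs have no perfect matching
    total = 0.0
    rest = list(range(1, n))
    for j in rest:
        keep = [k for k in rest if k != j]
        total += A[0, j] * hafnian(A[np.ix_(keep, keep)])
    return total

# The complete graph K4 has 3 perfect matchings
K4 = np.ones((4, 4)) - np.eye(4)
print(hafnian(K4))  # 3.0
```

Denser subgraphs have more perfect matchings, hence larger Hafnians, which is why GBS preferentially samples them.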
Randomness expansion protocols exploit a fundamental connection between randomness and nonlocality to enable the production of large quantities of certifiable randomness. Through the unification of two powerful theoretical tools from the device-independent literature, the device-independent guessing probability (DIGP) and the entropy accumulation theorem (EAT), we prove security for a large class of randomness expansion protocols. Overall, this results in a malleable framework for constructing randomness expansion protocols which can be tailored to the requirements of a user. In this talk, we present our framework and analyse the performance of several example protocols.
The study of nonclassical properties of quantum states is an important topic in current research, and their exploitation requires effective measurements. Promising results in this respect have been obtained in the last two decades by means of photon-number-resolving (PNR) detectors.
Here we present recent results achieved with silicon photomultipliers (SiPMs), commercial PNR detectors extensively used in particle physics. Until now, they have been only marginally exploited in the field of quantum optics owing to cross-talk effects and low quantum efficiency. We demonstrate that the new generation of SiPMs can be used to observe sub-shot-noise correlations between the two parties of a well-populated squeezed vacuum state. These results are promising for the application of SiPMs to more complex schemes.
The study of quantum batteries may become one of the leading frontiers of quantum nanotechnology. In particular, understanding the role of quantum mechanics both in the charging and in the work-extraction steps is essential. To describe the charging step, I present a theoretical analysis of quantum systems composed of two interacting parties: a charger A and a battery B. After an introduction on the energy transfer from A to B in the closed case and on the benefit coming from global operations, I will describe energy-transfer protocols in open frameworks, including both a thermal bath and a coherent field to perform the charging. Within a local Lindblad master-equation approach, exact results on the dynamics of the systems under study and optimal configurations for the energy transfer will be discussed.
We present a general framework to tackle the problem of finding time-independent dynamics generating target unitary evolutions. We show that this problem can be equivalently stated as a set of conditions on the spectrum of the time-independent gate generator, thus transforming the task into an inverse eigenvalue problem. We illustrate our methodology by identifying suitable time-independent generators implementing Toffoli and Fredkin gates without the need for ancillae or effective evolutions. We show how the same conditions can be used to solve the problem numerically, via supervised learning techniques. In turn, this allows us to solve problems that are not amenable, in general, to direct analytical solution, providing at the same time a high degree of flexibility over the types of gate-design problems that can be approached. As a significant example, we find generators for the Toffoli gate using only diagonal pairwise interactions, which are easier to implement in some experimental architectures. To showcase the flexibility of the supervised learning approach, we give an example of a non-trivial four-qubit gate that is implementable using only diagonal, pairwise interactions.
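The inverse-eigenvalue machinery and learning techniques of the abstract are not reproduced here, but the unconstrained baseline is easy to state: any target unitary admits a (generally non-local, constraint-free) time-independent generator via the matrix logarithm, $H = i\log U$ with $U = e^{-iHT}$ at $T=1$. A sketch for the Toffoli gate, assuming SciPy:

```python
import numpy as np
from scipy.linalg import expm, logm

# Toffoli (CCX) gate: swaps the basis states |110> and |111>
U = np.eye(8)
U[[6, 7]] = U[[7, 6]]

# An (unconstrained) time-independent generator: H = i log(U), so U = exp(-i H)
H = 1j * logm(U)
print(np.allclose(H, H.conj().T))      # True: H is Hermitian
print(np.allclose(expm(-1j * H), U))   # True: H regenerates the Toffoli gate
```

The hard part addressed in the abstract is finding generators of this kind that additionally respect physical constraints (no ancillae, only diagonal pairwise interactions), which the bare matrix logarithm does not.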
Quantum channels can be activated by certain channels whose quantum capacity is zero. This activation effect might be used to overcome channel noise by attaching other channels that enhance the capacity of a given channel. In this work, we show that such an activation is possible with specific positive-partial-transpose channels for Gaussian lossy channels whose quantum capacities are known. We also examine the more general case of the Gaussian thermal attenuator, for which a narrow upper bound on the quantum capacity has recently been suggested.
Autonomous thermal machines are capable of extracting energy (from thermal sources) or cooling down systems without the need for external control or driving, and are therefore promising for local and non-invasive technological and experimental applications. We build a model of autonomous thermal machines in which the work reservoir can be an arbitrary system instead of the usual harmonic oscillator or qubit. This allows us to witness special effects not present in harmonic oscillators or qubits. For the same energetic content, some non-thermal states of the work reservoir yield an efficiency well above the one provided by thermal states. The thermodynamic behaviour of non-thermal states is characterised by introducing the concept of apparent temperature, which captures the special effects of coherence and correlations, but also of non-thermal population distributions. Some examples are provided to illustrate their dramatic impact. We also recover seminal results, providing a unifying point of view on diverse phenomena.
Traditional descriptions of the dynamics of open quantum systems are plagued by unphysical results as soon as memory effects play a non-negligible role. These effects commonly arise when the system of interest has a complex and structured environment. To make matters worse, the mere definition of a quantum stochastic process, as well as of memory and memory effects in the quantum regime, is still the subject of active debate and not generally agreed upon. To remedy these shortcomings, a new unified scheme for operationally describing general quantum dynamics, known as the process tensor formalism, has been developed. This new framework allows -- independent of the details of the system-environment interaction -- for a full characterization of the underlying dynamics based on a finite number of local manipulations, and enables one to quantify its complexity in an unambiguous manner.
In my talk, I will outline this general formalism, and illustrate how it naturally generalizes the theory of classical stochastic processes to the quantum regime, thus putting both theories on an equally sound mathematical footing. Furthermore, I will discuss how this clear-cut understanding of general quantum processes elucidates the role of entanglement and memory effects as a resource for the simulation of processes that display exotic causal structures.
Phase estimation protocols provide a fundamental benchmark for the field of quantum metrology. Most theoretical and experimental studies have focused on determining the fundamental bounds and how to achieve them in the asymptotic regime. However, in most applications it is necessary to achieve optimal precision while performing only a limited number of measurements. To this end, machine learning techniques can be applied as a powerful optimization tool.
Here we experimentally implement single-photon adaptive phase estimation protocols enhanced by machine learning, showing the capability of reaching optimal precision after a small number of trials. In particular, we introduce a new approach to Bayesian estimation that exhibits the best performance for low photon numbers. We study the resilience to noise of the tested methods, showing the robustness of the optimized Bayesian approach in the presence of imperfections. Application of this methodology can be envisaged in the more general multiparameter case.
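The Bayesian ingredient of such protocols can be sketched in a few lines. The following is a hypothetical, stripped-down grid-posterior simulation (random rather than machine-learned settings, and an assumed $\cos^2$ single-photon interference likelihood), showing how the posterior over the phase is updated after each detection event:

```python
import numpy as np

rng = np.random.default_rng(seed=7)
true_phase = 1.2
grid = np.linspace(0, np.pi, 500)            # discretized phase prior
posterior = np.ones_like(grid) / len(grid)   # flat prior on the grid

def click_probability(phi, theta):
    # Assumed single-photon interference model: P(detector "0" | phi, theta)
    return np.cos((phi - theta) / 2) ** 2

for _ in range(200):
    theta = rng.uniform(0, np.pi)            # random setting (an adaptive rule
                                             # would optimize this choice)
    outcome = rng.random() < click_probability(true_phase, theta)
    likelihood = click_probability(grid, theta) if outcome \
        else 1 - click_probability(grid, theta)
    posterior *= likelihood
    posterior /= posterior.sum()             # Bayes update on the grid

estimate = grid[np.argmax(posterior)]
```

The machine-learning layer in the abstract replaces the random choice of `theta` with an optimized adaptive rule; the Bayes update itself is unchanged.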
We investigate whether a generic optomechanical system can be used to perform measurements of the gravitational acceleration $g$. Through analytic and numerical work, we show that the quantum and classical Fisher information coincide for a homodyne detection scheme and we use the results to estimate a fundamental bound on the sensitivity $\Delta g$. We predict a $\Delta g = 10^{-16}$ ms$^{-2}$ for both levitated nanodiamonds and Fabry-Perot cavities with moving mirrors, which surpasses the sensitivity of every other quantum system to date.
We investigate quantum steering for multipartite systems by using entropic uncertainty relations. We introduce entropic steering inequalities whose violation certifies the presence of different classes of multipartite steering. These inequalities witness both steerable states and genuine multipartite steerable states. Furthermore, we study their detection power for several classes of states of a three-qubit system.
Continuous-time quantum walks (CTQWs) describe the free evolution of quantum particles on N-vertex graphs. They have been the subject of intense study, both theoretical and experimental, as they have proven useful for several applications, ranging from universal quantum computation to search algorithms (e.g., Grover's), quantum transport, and state or energy transfer. Given their relevance in applications, a realistic description of the dynamics of quantum walkers should take into account the sources of noise and imperfections that might perturb the discrete lattice on which the CTQW occurs.
Here we address the effects of classical random telegraph noise (a typical source of noise in solid state quantum devices) affecting the hopping amplitudes on CTQWs on various kinds of lattices and graphs, showing how the spatial and temporal correlations affect the propagation of the walker. We also discuss the effectiveness of Grover search in the presence of such noise.
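The basic object here can be illustrated with a minimal sketch (assuming SciPy): a CTQW on an $n$-vertex cycle evolves as $|\psi(t)\rangle = e^{-iAt}|{\rm start}\rangle$, with $A$ the adjacency matrix. Noisy hopping amplitudes are modeled below by one frozen telegraph-noise configuration (a quasi-static simplification of the time-dependent noise studied in the abstract):

```python
import numpy as np
from scipy.linalg import expm

def ctqw_cycle(n, t, start=0, couplings=None):
    """CTQW on an n-vertex cycle: returns |<x|exp(-i A t)|start>|^2,
    with A the (possibly noise-modulated) adjacency matrix."""
    A = np.zeros((n, n))
    J = np.ones(n) if couplings is None else couplings
    for i in range(n):
        A[i, (i + 1) % n] = A[(i + 1) % n, i] = J[i]
    psi0 = np.zeros(n, dtype=complex)
    psi0[start] = 1.0
    psi_t = expm(-1j * A * t) @ psi0
    return np.abs(psi_t) ** 2

# One frozen realization of telegraph noise on the hoppings: each
# amplitude randomly flipped between 1 - d and 1 + d
rng = np.random.default_rng(0)
d = 0.5
noisy = 1 + d * rng.choice([-1, 1], size=20)
p_clean = ctqw_cycle(20, 3.0)
p_noisy = ctqw_cycle(20, 3.0, couplings=noisy)
```

Averaging such realizations over the telegraph switching dynamics, and over spatially correlated noise patterns, is where the physics discussed in the abstract enters.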
Empirical data constitute our primary source of information to construct theories that explain the world around us, and to develop the necessary technologies that help us to accomplish that task. However, the quality of this information is restricted in practice by factors such as the number of experiments that we can perform. This is particularly relevant for the study of fragile systems, or when only a few observations are possible before the system under study is out of reach. In this work we discuss a methodology to develop quantum-enhanced metrology protocols that are suitable for these scenarios, and we provide tight lower bounds on the phase estimation uncertainty of a Mach-Zehnder interferometer that operates in the non-asymptotic regime of limited data and moderate prior information. These ideas are then extended to a multi-parameter scenario where we consider a network of interferometers to model quantum imaging protocols.
Microcanonical thermodynamics studies the operations that can be performed on systems with well-defined energy. So far, this approach has been applied to classical and quantum systems. Here we extend it to arbitrary physical theories, proposing two requirements for the development of a general microcanonical framework. We then formulate three resource theories, corresponding to three different choices of basic operations. We focus on a class of physical theories, called sharp theories with purification, where these three sets of operations exhibit remarkable properties. In these theories, a necessary condition for thermodynamic transitions is given by a suitable majorisation criterion. This becomes a sufficient condition in all three resource theories if and only if the dynamics allowed by the theory satisfy a condition that we call "unrestricted reversibility". Under this condition, we derive a duality between the resource theory of microcanonical thermodynamics and the resource theory of pure bipartite entanglement.
Quantum thermodynamics is an emerging topic in quantum physics, with prospects for advancing quantum computing, developing quantum technologies and providing insights into the fundamentals of quantum mechanics. Measuring and calculating properties of quantum systems, however, present difficulties, especially when many-body interactions are present. Using techniques from Density Functional Theory (DFT), we investigate quantum thermodynamic properties of out-of-equilibrium many-body systems. We extend the approach devised by M. Herrera et al. [M. Herrera, R.M. Serra, I. D’Amico, Scientific Reports 7, 4655 (2017)], improving the DFT-based approximations for the statistics of quantum work and analysing how these approximations behave as the system size is increased.
See arXiv:1603.07508 for more details.
Phase-insensitive Gaussian channels are the typical way to model the decoherence introduced by the environment in continuous-variable quantum states. It is known that those channels can be simulated by a teleportation protocol using as a resource state either a maximally entangled state passing through the same channel, i.e., the Choi state, or a state that is entangled at least as much as the Choi state. Since the construction of the Choi state requires infinite energy, we find the appropriate resource states with the minimum energy needed for the simulation that are equally entangled to the Choi state, as measured by entanglement of formation. We also show that the same amount of entanglement is enough to simulate an equally decohering channel, while even more entanglement can simulate less decohering channels. Finally, we use this fact to generalize a previously known error-correction protocol that was restricted to pure lossy channels.
arXiv:1803.03516
Quantum error correction is necessary for quantum computers to achieve their full potential. 2D surface codes are the leading family of quantum error correcting codes, but they have a large overhead. Millions of qubits would be required to factor a 2000-bit number on a 2D surface code quantum computer. One of the reasons for this overhead is the fact that 2D surface codes don’t have any transversal non-Clifford gates. Such a gate is required to achieve universal quantum computation. Here, we show that 3D surface codes possess a transversal (non-Clifford) CCZ gate. We also discuss 3D surface code decoding, i.e., using classical computation to predict a correction given an error syndrome. We show that 3D surface codes can be decoded using a combination of a minimum-weight perfect-matching decoder and a cellular automaton decoder. Finally, we present a universal quantum computing architecture which uses both 2D and 3D surface codes.
The field of Hamiltonian Complexity is concerned with the Local Hamiltonian problem, which asks for the ground state energy of a Hamiltonian to some precision. The Local Hamiltonian problem has been shown to be QMA$_{EXP}$-complete for a 1D, translationally invariant, nearest neighbour Hamiltonian (Gottesman & Irani), and a complete classification has been realised for 2-qubit Hamiltonians (Cubitt & Montanaro).
We extend the field of Hamiltonian Complexity to the thermodynamic limit by introducing the ‘Ground State Energy Density Problem’ (GSED problem). This asks whether the $n$th digit of the ground state energy density of an infinite lattice is 0 or 1 (thus in effect asking for the ground state energy density to some specified precision). We show that for a translationally invariant, nearest neighbour Hamiltonian on a 2D infinite lattice, the GSED problem is QMA$_{EXP}$-complete.
Thermodynamics can be formulated in either of two approaches, the phenomenological approach, which refers to the macroscopic properties of systems, and the statistical approach, which describes systems in terms of their microscopic constituents. We establish a connection between these two approaches by means of a new axiomatic framework that can take errors and imprecisions into account. This link extends to systems of arbitrary sizes including microscopic systems, for which the treatment of imprecisions is pertinent to any realistic situation.
Based on this, we can identify entropy measures known from information theory with the quantities that characterise whether certain thermodynamic processes are possible. In the error-tolerant case, these entropies are so-called smooth min and max entropies. Our considerations further show that in an appropriate macroscopic limit only one single entropy function is relevant, which turns out to be the von Neumann and the Boltzmann entropy for certain types of states.
Distribution of quantum correlations among remote users is a key procedure underlying many quantum information technologies. Einstein-Podolsky-Rosen steering, one kind of such correlations stronger than entanglement, has been identified as a resource for secure quantum networks. We show that this resource can be established between two or even more distant parties by transmission of a system that is separable from all the parties. First, we design a protocol allowing one to distribute one-way Gaussian steering between two parties via a separable carrier; the obtained steering can subsequently be used for one-sided device-independent (1sDI) quantum key distribution. Further, we extend the protocol to three parties, a scenario which exhibits richer steerability properties, including one-to-multimode steering and collective steering, and which can be used for 1sDI quantum secret sharing. All the proposed protocols can be implemented with squeezed states, beam splitters and displacements, and thus can be readily realized experimentally.