In the context of multiparameter estimation theory, we propose an optimal measurement scheme to infer two classical parameters encoded in the quadratures of two coherent states. Optimality is demonstrated as saturation of the quantum Cramér-Rao bound under a constraint on the global energy of the system. We also propose and analyze an N-mode generalization of the scheme, in which one can encode N classical parameters into N coherent states and estimate all the parameters optimally and simultaneously.
High-quality random samples of quantum states are needed for a number of tasks in quantum information and quantum computation. Such samples have traditionally been generated with Monte Carlo techniques, which are, however, typically very expensive in CPU time. To improve efficiency, we propose an algorithm to sample random states from the quantum state space. For the case of qubits, our algorithm compares very favourably with the existing Monte Carlo algorithm. For qubits, we also implement our algorithm for different settings, showing that the method also has the desired flexibility. We are currently extending it to higher dimensions. We also give new results on the probability distributions of random quantum states. Thus, in this work we present results that are useful in the field of random quantum states in quantum information theory.
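As a minimal illustration of the sampling task (a standard Ginibre-ensemble construction, not our algorithm; the function name is illustrative), mixed states distributed according to the Hilbert–Schmidt measure can be drawn as follows:

```python
import numpy as np

def random_density_matrix(dim, rng=None):
    """Draw a density matrix from the Hilbert-Schmidt measure by
    normalizing G G^dagger for a complex Ginibre matrix G."""
    rng = np.random.default_rng(rng)
    g = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    rho = g @ g.conj().T
    return rho / np.trace(rho)

rho = random_density_matrix(2, rng=0)   # a random qubit state
# rho is Hermitian, positive semidefinite, and has unit trace
```

Normalizing G G† guarantees positivity and unit trace by construction, which is why such direct constructions avoid the rejection steps that make naive Monte Carlo sampling expensive.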
In recent years, the development of super-resolution strategies that enhance sub-wavelength imaging has attracted great interest, owing to significant applications ranging from astronomy to biomedical science. The Rayleigh criterion has been the main resolution limit in traditional imaging techniques. Recent work has taken advantage of the techniques of quantum metrology to evade the diffraction limit entirely in the estimation of the separation of two point sources, beating "Rayleigh's curse". We will present some progress in generalising prior works to the imaging of multiple point sources, and the difficulties which arise in this more elaborate setting.
Entanglement swapping entangles two quantum objects that do not share a common past, each lying outside the other's light cone. In this process, two separated entangled pairs are prepared and a Bell measurement is performed on two particles taken from different pairs. Implementing quantum repeaters and relays for quantum information processing requires multiple swappings, which have been studied theoretically for distinguishable particles but remain experimentally limited to only two iterations. Here we propose a new entanglement-swapping scheme which utilizes four independently prepared identical particles. In this protocol the indistinguishability of the particles permits the use of their spatial overlap as an initial entangling gate. This has the advantage of reducing the number of necessary Bell measurements and of increasing the probability of multiple swappings. Finally, the scheme is universal, in that it works for either bosons or fermions.
It is believed that boson sampling problems cannot be solved efficiently on a classical computer. A boson sampler embodies an experimentally implementable model of non-universal quantum computation, able to test quantum supremacy and identify the boundaries of classical computation.
From this general perspective, it is interesting to investigate how quantum information locally encoded in a small subpart of a large system, such as a subset of photons propagating through a linear network, spreads out over the whole system during the evolution. This concept, central to chaos theory, is known as "scrambling" and has been widely used to investigate a number of phenomena, such as black holes. Here, we propose to use scrambling in linear optical systems to shed light on the connection between quantum computational complexity and the possibility of reconstructing the global information encoded in quantum states via local measurements.
In a previous work, R. Colbeck and R. Renner proved a theorem concluding that the quantum state is ontic within the framework of an underlying physical state (see "A system's wave function is uniquely determined by its underlying physical state"). A core assumption behind their proof was that the inputs used for a chained Bell inequality test were free choices. In this work we relax this assumption and test the robustness of the theorem's conclusion. We allow the inputs to be correlated with the underlying variables and show that there exist conditions under which the theorem's conclusion still holds. Apart from its foundational interest, our discussion connects with ideas related to randomness amplification and to quantum cryptography.
Belonging to the Quantum Hamiltonian Computing (QHC) branch of quantum control [1-2], atomic-scale Boolean logic gates (LGs) with two inputs and one output (OR, NOR, AND, NAND, XOR, NXOR), and with two outputs (half-adder circuit), were designed on a Si(100)-(2×1)–H surface, following the experimental realization of a QHC NOR gate [3] and the formal design of a half-adder with 6 quantum states in the calculating block [4]. Recently we have also focused on finding automated ways of designing and miniaturizing such quantum systems, for example to obtain a functioning Boolean half-adder with a minimum number of states.
[1] N. Renaud and C. Joachim, Phys. Rev. A 78, 062316 (2008).
[2] W. H. Soe et al., Phys. Rev. B 83, 155443 (2011).
[3] M. Kolmer et al., Nanoscale 7, 12325–12330 (2015).
[4] G. Dridi, R. Julien, M. Hliwa and C. Joachim, Nanotechnology 26, 344003 (2015).
A basic machine learning setting is supervised learning, which deals with the task of inferring a labeling rule on a data set, given a certain number of labeled points.
We consider a generalisation of this problem in the quantum setting (quantum supervised learning problem): given N copies of an unknown state A, N copies of an unknown state B and a copy of a state C that can be either A or B, determine the best guess for C with a measurement on the resources and the test copy.
Restricting to qubit states, we determine the optimal measurement in the minimum-error setting and the asymptotic average probability of error for various configurations: pure states with fixed overlap but random orientation, random states with fixed purity, and totally random states.
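For context, the minimum-error criterion referred to above is the Helstrom bound, P_err = ½(1 − ‖p ρ_A − (1−p) ρ_B‖₁). A minimal sketch for two known qubit states (illustrative only, and distinct from the paper's asymptotic average over unknown states):

```python
import numpy as np

def helstrom_error(rho_a, rho_b, p=0.5):
    """Minimum error probability for discriminating rho_a vs rho_b
    with prior p for rho_a: (1 - trace norm of p*rho_a - (1-p)*rho_b) / 2."""
    gamma = p * rho_a - (1 - p) * rho_b
    trace_norm = np.abs(np.linalg.eigvalsh(gamma)).sum()  # sum of |eigenvalues|
    return 0.5 * (1 - trace_norm)

# Orthogonal pure states are perfectly distinguishable:
z0 = np.array([[1, 0], [0, 0]], dtype=complex)
z1 = np.array([[0, 0], [0, 1]], dtype=complex)
print(helstrom_error(z0, z1))  # -> 0.0
```

When A and B are only available as copies rather than classical descriptions, the achievable error approaches this bound as the number of copies N grows, which is the regime the asymptotic analysis addresses.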
In the future, we will use quantum computers to solve a wide class of problems that are believed to be intractable for classical computers. At the same time, however, verifying the correctness of the solutions to many such problems will remain a hard task. For instance, this is the case for many problems of interest in physics, such as the simulation of large systems of highly entangled particles. It is therefore necessary to develop methods that let us decide with high confidence whether the outcome of a quantum computation is correct or not.
In our work, we present a verification technique that can be used when a restricted set of operations works in an ideal way. To verify the correctness of a "target" quantum computation, we propose to run several computations. Only one of these (selected at random) is the target, while the others are chosen so that they always output a fixed outcome and are used as tests. We show that if the test computations yield the correct outcome, then with high probability the outcome of the target is also correct.
The complexity of our protocol is similar to that of the least demanding existing verification protocols. However, compared to existing techniques, the set of ideal operations required by our protocol is minimal. Moreover, we argue that the ideas behind our protocol might be further adapted to other interesting scenarios, such as the verification of computations run by multiple users.
Neural networks (NNs) are artificial networks inspired by the interconnected structure of neurons in animal brains. They are now capable of computational tasks where most ordinary algorithms would fail, such as speech and pattern recognition, with a wide range of applicability both within and outside research. Hopfield NNs [1] constitute a simple but rich example of how an associative memory can work; they have the ability to retrieve, from a set of stored network states, the one which is closest to the input pattern. In recent decades, many models have been proposed to combine the properties of NNs with quantum mechanics, aiming to understand whether NN computing can take advantage of quantum effects. Here we discuss a quantum generalisation of a classical Hopfield model [2] whose dynamics is governed by purely dissipative, yet quantum, processes. We show that this dynamics may indeed yield an advantage over a purely classical one, leading to a shorter retrieval time.
[1] J. J. Hopfield, Proc. Natl. Acad. Sci. USA 79, 2554 (1982).
[2] P. Rotondo et al., arXiv:1701.01727 (2017).
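To make the classical retrieval mechanism concrete, here is a minimal Hopfield sketch (Hebbian weights, asynchronous sign updates); the dissipative quantum generalisation is beyond this toy example:

```python
import numpy as np

def hopfield_retrieve(patterns, probe, steps=10):
    """Classical Hopfield retrieval: Hebbian weight matrix, then
    asynchronous sign updates until the probe settles on a stored pattern."""
    patterns = np.asarray(patterns, dtype=float)
    w = patterns.T @ patterns          # Hebbian learning rule
    np.fill_diagonal(w, 0)             # no self-connections
    state = np.asarray(probe, dtype=float).copy()
    for _ in range(steps):
        for i in range(len(state)):    # asynchronous neuron update
            state[i] = 1.0 if w[i] @ state >= 0 else -1.0
    return state

stored = [[1, 1, 1, 1, -1, -1, -1, -1],
          [1, -1, 1, -1, 1, -1, 1, -1]]
noisy = [1, 1, 1, -1, -1, -1, -1, -1]    # first pattern with one flipped bit
print(hopfield_retrieve(stored, noisy))  # recovers the first stored pattern
```

The retrieval time of this update loop is the quantity for which the dissipative quantum dynamics of [2] is claimed to give a speed-up.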
Gleason’s theorem states that, under some reasonable assumptions, the Born rule is the unique way to calculate quantum probabilities. We have shown that this result also contains the mathematical structure of sequential measurements, as well as the state transformation. We give a small set of physically motivated axioms and show that the Kraus operator form follows. An axiomatic approach has practical relevance as well as fundamental interest, in making clear the assumptions which underlie the security of quantum communication protocols. Interestingly, the two-time formalism is seen to arise naturally in this approach.
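For reference, the Kraus operator form referred to above, written in standard textbook notation (not the paper's specific axiomatic derivation): a measurement outcome $k$ transforms the state as

```latex
\rho \;\longmapsto\; \rho_k
  = \frac{\sum_i K_{k,i}\,\rho\,K_{k,i}^{\dagger}}
         {\operatorname{Tr}\!\bigl[\sum_i K_{k,i}\,\rho\,K_{k,i}^{\dagger}\bigr]},
\qquad
\sum_{k,i} K_{k,i}^{\dagger}K_{k,i} = \mathbb{1},
```

with the outcome occurring with probability $p(k)=\operatorname{Tr}\bigl[\sum_i K_{k,i}\,\rho\,K_{k,i}^{\dagger}\bigr]$, so that the Born rule and the post-measurement state transformation appear together.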
Resource theories offer a way to incorporate symmetry principles into a quantum information setting. I will give a brief review of the resource theory of asymmetry, and then describe a way to classify the different ways in which quantum states and channels break a given symmetry. This classification scheme gives necessary conditions on the resources needed to induce a given channel, while also providing coarse-grained information about irreversibility and dynamics under symmetry constraints.
The seminal continuous-variable entropic uncertainty relation expresses the complementarity between two n-tuples of canonically conjugate variables (x1,x2, . . . ,xn) and (p1,p2, . . . ,pn) in terms of Shannon differential entropy. Here we consider the generalization to variables that are not canonically conjugate and derive an entropic uncertainty relation expressing the balance between any two n-variable Gaussian projective measurements. The bound on entropies is expressed in terms of the determinant of a matrix of commutators between the measured variables. This uncertainty relation also captures the complementarity between any two incompatible linear canonical transforms, the bound being written in terms of the corresponding symplectic matrices in phase space.
We also define a more general form which takes correlations into account and is thus saturated by all pure Gaussian states. Interestingly, we can deduce from it the most general form of the Robertson uncertainty relation based on the covariance matrix of n variables.
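For orientation, the seminal relation referred to above is the Białynicki-Birula–Mycielski inequality for canonically conjugate $n$-tuples (in units with $\hbar = 1$):

```latex
h(x_1,\dots,x_n) + h(p_1,\dots,p_n) \;\ge\; n \ln(\pi e),
```

where $h$ denotes the Shannon differential entropy. Its covariance-matrix counterpart for canonical variables is the Robertson–Schrödinger condition $\gamma + \tfrac{i}{2}\Omega \ge 0$, with $\gamma$ the covariance matrix and $\Omega$ the symplectic form. The bounds derived in this work generalize the constant on the right-hand side to determinants of commutator matrices for variables that are not canonically conjugate.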
In recent years, there has been growing interest in the use of single spin qubits as quantum sensors. Previous works have shown metrological capabilities beyond the standard measurement limit through the use of spin squeezing or entangled states, and later work has demonstrated similar capabilities without the need for entanglement. In this poster, we detail a proposal for a novel measurement scheme in which the magnetic field applied to a single site in a spin chain can be directly inferred by a distant spin. Such a scenario may be desirable in applications such as connecting quantum registers or mediating quantum information between distant sites. Following a semi-classical Bayesian approach, and using sequential, adaptive spin measurements, we show that the distant magnetic field can be estimated with a high degree of accuracy and with a substantially reduced time overhead. We further demonstrate that replacing the Bayesian approach with a neural network yields an even finer measurement accuracy, with the possibility of inferring a magnetic field applied along an arbitrary, a priori unknown axis.
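The Bayesian stage can be sketched with a toy single-spin Ramsey model (all parameter values and the fringe model are illustrative, not the spin-chain setting of the poster):

```python
import numpy as np

rng = np.random.default_rng(1)
b_true = 0.7                            # field to be estimated (arbitrary units)
grid = np.linspace(0.0, 2.0, 2001)      # candidate field values
post = np.ones_like(grid) / grid.size   # flat prior over the grid

t = 1.0                                 # interrogation time per measurement
for _ in range(500):
    p1 = np.cos(b_true * t / 2) ** 2    # outcome-1 probability (Ramsey fringe)
    outcome = rng.random() < p1         # simulated projective measurement
    like = np.cos(grid * t / 2) ** 2    # likelihood of outcome 1 on the grid
    post *= like if outcome else (1 - like)
    post /= post.sum()                  # Bayes update and renormalization

b_est = grid[np.argmax(post)]           # maximum a posteriori estimate
print(abs(b_est - b_true))              # small estimation error
```

In the adaptive version described above, the interrogation time t would be chosen anew after each update to maximise the expected information gain, rather than held fixed.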
Spin models are widely studied in the natural sciences, from investigating magnetic materials to studying neural networks. Previous work has demonstrated that there exist simple classical and quantum spin models that are universal, in the sense that they can replicate the physics of all other classical (respectively, quantum) spin models to arbitrary precision. However, all spin models previously known to be universal break translational invariance. In this paper we demonstrate that there exist translationally invariant universal classical spin models. We construct a translationally invariant Hamiltonian, described by a single parameter, that is universal: a single Hamiltonian can replicate all classical spin physics just by tuning that single parameter and varying the size of the system on which the Hamiltonian acts. This significantly strengthens previous results. This work may also provide a starting point for constructing translationally invariant quantum models.
Non-Linear Gates Enabling Universal Quantum Computation with Continuous Variables. The aim of the present research is to devise schemes to generate non-linear gates enabling universal quantum information processing (computation and simulation) over continuous variables. The motivation behind this work is the continuous-variable version of the Gottesman-Knill theorem, which states that arbitrary Gaussian quantum information processing can be simulated efficiently on a classical computer. This implies that, in order to obtain a quantum speed-up, one needs non-Gaussian gates, namely interactions of order three and above, since all Gaussian gates are at most quadratic. Focusing on a cavity opto-mechanical framework, the successful generation of squeezed and cluster states for systems with different numbers of resonators is a first necessary step, which can be shown to be achievable by driving the cavity with a multi-tone field. The next step is to make use of the intrinsic non-linearities available in this system, such as the radiation-pressure coupling.
Qudit addition channels have recently been introduced in the context of deriving entropy power inequalities for finite-dimensional quantum systems. We show that the relative entropic distance between the output of such a quantum addition channel and the corresponding classical mixture quantitatively captures the amount of coherence present in a quantum system. This new coherence measure admits an upper bound in terms of the relative entropy of coherence and is used to formulate a state-dependent uncertainty relation for two observables. We also prove a reverse entropy power equality, which may be used to analytically prove an inequality conjectured recently for arbitrary dimension and arbitrary addition weight. Our results provide deep insights into the origin of quantum coherence, which truly comes from the discrepancy between quantum addition and the classical mixture.
With interest in quantum technologies growing rapidly, it is becoming increasingly important to focus on how useful states may be generated in realistic experiments. We have devised a computational algorithm to search for ways to produce good quantum states by combining experimentally feasible elements. For this we use a genetic algorithm, which mimics natural evolution to improve on the experiments it finds. This utilises randomness and often produces experimental set-ups that are unexpected, ones that would not be found by following human logic alone. The algorithm is very flexible, in that the user can choose from whichever elements are available. For us, the "good" states the program finds are judged by a high quantum Fisher information, useful for quantum metrology, but the algorithm may easily be extended to other figures of merit.
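The evolutionary loop can be sketched generically as follows; the fitness here is a toy stand-in (in our setting it would be the quantum Fisher information of the state produced by the candidate set-up), and all names and parameters are illustrative:

```python
import random

def fitness(genome):
    """Toy figure of merit: count of 1s (stand-in for, e.g., Fisher information)."""
    return sum(genome)

def evolve(length=20, pop_size=30, generations=60, rng=None):
    """Elitist genetic algorithm: selection, one-point crossover, point mutation."""
    rng = random.Random(rng)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]            # keep the fitter half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, length)          # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(length)               # point mutation
            child[i] ^= 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve(rng=0)
print(fitness(best))   # approaches the optimum (20 here)
```

In the actual search, each genome would encode a sequence of experimental elements (sources, beamsplitters, measurements), and evaluating the fitness means simulating the resulting state.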
We present global unitary transformations ("optswaps") that optimally increase the bias of any mixed computation qubit in a quantum system, represented by a diagonal density matrix, towards a particular state of the computational basis, in effect increasing its purity. We describe quantum circuits that achieve this by implementing the above data-compression technique, a generalization of the 3B-Comp circuit [Fernandez, Lloyd, Mor, Roychowdhury (2004); arXiv:quant-ph/0401135] used before. These circuits increase the purity of the computation qubit by maximally transferring part of its von Neumann or Shannon entropy to n surrounding qubits, starting from any initial biases. Using the optswaps, we delineate a practicable new method that algorithmically achieves hierarchy-dependent cooling of qubits to their respective limits in an engineered quantum register open to the environment. In addition to its fundamental quantum-thermodynamic interest, this work may be important for satisfying the DiVincenzo criterion of qubit initialization in some quantum computing architectures.
We investigate how relativistic acceleration of the observers can affect the performance of various quantum key distribution protocols for continuous variable states of localized wavepackets.
We formulate steering by channels and Bell nonlocality of channels as generalizations of the well-known concepts of steering by measurements and Bell nonlocality of measurements. The generalization does not follow the standard line of thinking stemming from the EPR paradox, but introduces steering and Bell nonlocality as entanglement-assisted incompatibility tests. The proposed definitions reduce, in the special case of measurements, to the standard definitions, but not all known results for measurements generalize to channels. For example, we show that for quantum channels steering is not a necessary condition for Bell nonlocality.
We consider the convex classification of quantum channels, in particular the extreme points of the set of doubly-stochastic qutrit channels. Examples of known extreme points are presented, along with an analysis of the associated Choi matrix and its spectrum, thereby allowing a classification. We also discuss the choice of basis for these extreme points, which we frame in terms of a systematic approach to finding new extreme points of the set of doubly-stochastic channels. Finally, we explore the possibility of extending our results to higher dimensions.
Two primary facets of quantum technological advancement that hold great promise are quantum communication and quantum computation. For quantum communication, the canonical resource is entanglement. For quantum gate implementation, the resource is 'magic' in an auxiliary system. It has already been shown that quantum coherence is the fundamental resource for the creation of entanglement. We argue, in a similar spirit, that quantum coherence is the fundamental resource for the creation of magic. This unifies the two strands of modern development in quantum technology under the common underpinning of quantum superposition, quantified by coherence in quantum theory. We also attempt to obtain magic monotones inspired by coherence monotones and vice versa. We further study the interplay between quantum coherence and magic in a qutrit system, and that between quantum entanglement and magic in a qutrit-qubit setting.
The first protocols in quantum cryptography relied on sending and measuring single photons. While this seems a simple and intuitive way to encode information, the challenges in generating and manipulating single photons, and the dedicated equipment required for such protocols, impose certain restrictions on real-world implementations. Recently, attention has turned to exploring protocols that are readily compatible with existing telecommunication hardware, relying on continuous measurements of optical coherent states. We build on recent quantum digital signature protocols with homodyne detection and four signal coherent states, extend this setup to a new quantum secret sharing scheme, and adapt postselection methods from quantum key distribution to improve the security of our protocols and thus extend the achievable limits for secure transmission distance.
We show that the time-averaged two-mode entanglement in the spin space reaches a maximal value when a spin-orbit-coupled Bose-Einstein condensate undergoes a dynamical phase transition (DPT) induced by an external perturbation. We employ the von Neumann entropy and a correlation-based entanglement criterion as entanglement measures and find that both can infer the existence of the DPT. While the von Neumann entropy works only for a pure state at zero temperature and requires state tomography to reconstruct, the experimentally more feasible correlation-based criterion acts as an excellent proxy for entropic entanglement and can certify entanglement for a mixed state at finite temperature, making it an excellent indicator of the DPT. Our work provides a deeper understanding of the connection between DPTs and quantum entanglement, and may make the detection of DPTs via entanglement experimentally accessible, as the examined criterion is suitable for measuring entanglement.
Quantum states of light possess capabilities exceeding those of the classical electromagnetic field.
Highly non-classical states with well-defined photon number find important applications in quantum information processing, but methods for their deterministic and efficient generation remain elusive.
In this work we diffusively couple two “signal” bosonic modes via a long “tail” of further modes. All modes are subject to strong Kerr nonlinearity. Simulations demonstrate the evolution of an input classical state to a sub-Poissonian output in the signal modes. We calculate the negativity of this state and observe entanglement generation as the light propagates. Over short timescales the tail simulates a Markovian reservoir, and our system accurately reproduces the output of a known, but challenging to engineer, case of two nonlinear modes coupled via one highly lossy mode. Our system can be fabricated in photonic waveguides and opens up so-called “coherent diffusive photonics” as an exciting direction towards quantum state generation.
It is well known that the majorization condition is necessary and sufficient for the deterministic transformation both of bipartite pure entangled states by local operations and of coherent states under incoherent operations. In this talk, I present two explicit protocols for the latter. I first present a permutation-based protocol which provides a method for the single-step transformation of d-level coherent states. I also give a generalization of this protocol for some special cases of d-level systems. Then, I present an alternative protocol in which we use d'-level (d' < d) subspace solutions of the permutation-based protocol to achieve the complete transformation as a sequence of coherent-state transformations. I also adapt our step-by-step method to the problem of distilling the maximally coherent state.
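The majorization condition itself is straightforward to check numerically; a small sketch on probability vectors (e.g. squared amplitudes of a coherent superposition, or squared Schmidt coefficients in the entanglement case):

```python
import numpy as np

def majorizes(p, q):
    """True if p majorizes q: the partial sums of p, sorted in decreasing
    order, dominate those of q (both vectors assumed normalized)."""
    p = np.sort(p)[::-1]
    q = np.sort(q)[::-1]
    return bool(np.all(np.cumsum(p) >= np.cumsum(q) - 1e-12))

# The uniform distribution is majorized by every distribution:
print(majorizes([0.5, 0.3, 0.2], [1/3, 1/3, 1/3]))  # -> True
print(majorizes([1/3, 1/3, 1/3], [0.5, 0.3, 0.2]))  # -> False
```

The uniform vector corresponds to the maximally coherent (or maximally entangled) state, which is why it can be deterministically transformed into any other state of the same dimension.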
Quantum state tomography is the task of inferring the quantum state from measurement data. A reliable quantum state tomography scheme should report not only the reconstructed state but also well-justified error bars. Here we provide a simple and reliable method to generate confidence regions in the state space, based on a quantum generalisation of the Clopper–Pearson confidence interval from classical statistics. As long as the measurement POVM is informationally complete, our scheme yields a polytope-shaped confidence region in the state space. It is prior-independent, easy to compute, and flexible enough to adapt to any figure of merit of interest. This is the first work to generalize the classical confidence interval to quantum state tomography. We also demonstrate how our polytope confidence region, by providing information on the error distribution, can be practical and useful in current state tomography experiments.
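For reference, the classical Clopper–Pearson interval on which the construction builds inverts the binomial CDF at level α/2 on each side; a self-contained sketch, using bisection in place of the usual beta-quantile formula:

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def clopper_pearson(k, n, alpha=0.05):
    """Exact two-sided Clopper-Pearson interval for a binomial
    probability after observing k successes in n trials."""
    def solve(pred):
        lo, hi = 0.0, 1.0
        for _ in range(60):             # bisect to the crossing point
            mid = (lo + hi) / 2
            if pred(mid):
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2
    # lower endpoint: P(X >= k | p) = alpha/2; upper: P(X <= k | p) = alpha/2
    lower = 0.0 if k == 0 else solve(lambda p: 1 - binom_cdf(k - 1, n, p) < alpha / 2)
    upper = 1.0 if k == n else solve(lambda p: binom_cdf(k, n, p) >= alpha / 2)
    return lower, upper

lo, hi = clopper_pearson(8, 20)         # e.g. 8 "clicks" in 20 trials
print(lo, hi)
```

In the tomographic setting, one such interval per POVM outcome is combined into the polytope-shaped confidence region described above.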
Imaginary time evolution is a powerful tool for studying quantum systems. While it is conceptually simple to simulate with a classical computer, the time and memory requirements scale exponentially with the system size. Conversely, quantum computers can efficiently simulate quantum systems, but non-unitary imaginary time evolution is incompatible with unitary quantum circuits. Here, we propose a hybrid, variational algorithm for simulating imaginary time evolution on a quantum computer. We use this algorithm to find the ground state energy of many-particle Hamiltonians. Our algorithm finds the ground state with high probability, outperforming the variational quantum eigensolver. Our method can also be applied to general optimisation problems, Gibbs state preparation, and quantum machine learning.
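The principle underlying the algorithm can be illustrated classically (a brute-force sketch of imaginary time evolution itself, not the hybrid variational method): propagating d|ψ⟩/dτ = −H|ψ⟩ with repeated normalization suppresses all but the ground-state component:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8
a = rng.normal(size=(dim, dim))
h = (a + a.T) / 2                       # random real symmetric "Hamiltonian"

psi = np.ones(dim) / np.sqrt(dim)       # initial state with generic overlap
dtau = 0.05
for _ in range(4000):                   # Euler steps of d|psi>/dtau = -H|psi>
    psi = psi - dtau * (h @ psi)
    psi /= np.linalg.norm(psi)          # restore normalization (non-unitary!)

energy = psi @ h @ psi
ground = np.linalg.eigvalsh(h)[0]
print(energy - ground)                  # close to zero
```

The normalization step is exactly the non-unitary ingredient that a unitary quantum circuit cannot implement directly, which is what motivates the variational reformulation proposed above.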
Coherence, as a fundamental property emerging within any quantum system, underpins many quantum information tasks, including cryptography, metrology, and randomness generation. The manipulation and quantification of quantum resources are fundamental problems in quantum physics. Coherence distillation and dilution have previously been studied in the asymptotic setting, where infinitely many identical copies of a state are manipulated. In the non-asymptotic setting, finite-data-size effects emerge, and the practically relevant problem of coherence manipulation using finite resources has been left open. This work establishes the one-shot theory of coherence dilution, which involves converting maximally coherent states into an arbitrary quantum state using maximally incoherent operations, dephasing-covariant incoherent operations, incoherent operations, or strictly incoherent operations. We introduce several coherence monotones with concrete operational interpretations that estimate the one-shot coherence cost under these different classes of incoherent operations.