221. Quantum algorithms for many-body structure and dynamics – Turro, Francesco (10 June 2022)
Nuclei are objects made of nucleons: protons and neutrons. Several dynamical processes that occur in nuclei are of great interest to the scientific community and for possible applications. For example, nuclear fusion can help us produce a large amount of energy with a limited use of resources and environmental impact. Few-nucleon scattering is an essential ingredient for understanding and describing the physics of the core of a star. Classical computational algorithms that aim to simulate microscopic quantum systems suffer from an exponential growth of the computational time as the number of particles increases. Even using today's most powerful HPC devices, the simulation of many processes, such as nuclear scattering and fusion, is out of reach due to the excessive amount of computational time needed. In the 1980s, Feynman suggested that quantum computers might be more efficient than classical devices at simulating many-particle quantum systems. Following Feynman's idea of quantum computing, a complete change in computational devices and simulation protocols has been explored in recent years, moving towards quantum computation. Recently, the prospect of a realistic implementation of efficient quantum calculations was demonstrated both experimentally and theoretically. Nevertheless, we are not yet in an era of fully functional quantum devices, but rather in the so-called "Noisy Intermediate-Scale Quantum" (NISQ) era. As of today, quantum simulations still suffer from the limitations of imperfect gate implementations and from the quantum noise of the machine, which impair the performance of the device. In this NISQ era, studies of complex nuclear systems are out of reach. The evolution and improvement of quantum devices will hopefully help us solve hard quantum problems in the coming years.
At present, quantum machines can be used to produce demonstrations or, at best, preliminary studies of the dynamics of few-nucleon systems (or other equivalently simple quantum systems). These systems should be considered mostly toy models for developing prospective quantum algorithms. However, in the future, these algorithms may become efficient enough to allow simulating complex quantum systems on a quantum device, proving more efficient than classical devices, and eventually helping us study hard quantum systems. This is the main goal of this work: developing quantum algorithms potentially useful in studying the quantum many-body problem, and attempting to implement such quantum algorithms on different existing quantum devices. In particular, the simulations made use of IBM QPUs, of the Advanced Quantum Testbed (AQT) at Lawrence Berkeley National Laboratory (LBNL), and of the quantum testbed recently established at Lawrence Livermore National Laboratory (LLNL) (or a device-level simulator of this machine). Our research aim is to develop quantum algorithms for general quantum processors. Therefore, the same quantum algorithms were implemented on different quantum processors to test their efficiency. Moreover, the use of particular quantum processors was also conditioned by their availability during the time span of my PhD.
The most common way to implement a quantum algorithm is to combine a discrete set of so-called elementary gates. A quantum operation is then realized in terms of a sequence of such gates. This approach suffers from the large number of gates (the depth of a quantum circuit) generally needed to describe the dynamics of a complex system. An excessively large circuit depth is problematic, since the presence of quantum noise would effectively erase all the information during the simulation. It is still possible to use error-correction techniques, but they require a huge number of extra quantum registers (ancilla qubits). An alternative technique that can be used to address these problems is the so-called "optimal control technique". Specifically, rather than employing a set of pre-packaged quantum gates, it is possible to optimize the external physical drive (for example, a suitably modulated electromagnetic pulse) that encodes a multi-level complex quantum gate. In this thesis, we start from the work of Holland et al., "Optimal control for the quantum simulation of nuclear dynamics", Physical Review A 101.6 (2020): 062307, where a quantum simulation of real-time neutron-neutron dynamics is proposed, in which the propagation of the system is enacted by a single dense multi-level gate derived from the nuclear spin interaction at leading order (LO) of chiral effective field theory (EFT) through an optimal control technique.
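As a rough illustration of the optimal-control idea (a toy sketch, not the pulse engineering of the thesis: the single-axis drive model, segment count and stochastic hill-climb optimiser are all assumptions made here), one can optimise a piecewise-constant drive amplitude so that the resulting single-qubit propagator matches a target gate:

```python
import numpy as np

rng = np.random.default_rng(0)
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)

def segment_unitary(a, dt):
    # exp(-i * a * dt * sigma_x / 2), written in closed form
    th = a * dt
    return np.cos(th / 2) * I2 - 1j * np.sin(th / 2) * sx

def propagator(amps, dt):
    U = I2
    for a in amps:                      # piecewise-constant drive
        U = segment_unitary(a, dt) @ U
    return U

def gate_fidelity(U, V):
    # global-phase-insensitive fidelity |Tr(U^dag V)| / 2 for 2x2 gates
    return abs(np.trace(U.conj().T @ V)) / 2

target = sx                             # target gate: X (up to global phase)
n_seg, dt = 8, 0.1
amps = rng.normal(size=n_seg)           # random initial pulse shape

# crude stochastic hill climb over the pulse amplitudes
best = gate_fidelity(propagator(amps, dt), target)
for _ in range(5000):
    trial = amps + rng.normal(scale=0.1, size=n_seg)
    f = gate_fidelity(propagator(trial, dt), target)
    if f > best:
        amps, best = trial, f
```

In this commuting toy model the optimum is simply a total pulse area of π (mod 2π); real optimal control earns its keep when the controls at different times do not commute.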
Hence, we generalize the two-neutron spin simulations, re-including spatial degrees of freedom with a hybrid algorithm: the spin dynamics are implemented on the quantum processor, while the spatial dynamics are computed with classical algorithms. We call this method classical-quantum coprocessing. Quantum simulations using both the optimal control method and the discrete gate set approach will be presented. When applying the coprocessing scheme through optimal control, a possible bottleneck arises from the classical computational time required to compute the microwave pulses; a solution to this problem will be presented. Furthermore, an investigation of an improved way to efficiently compile quantum circuits, based on the Similarity Renormalization Group, will be discussed. This method simplifies the compilation in terms of digital gates. The most important result contained in this thesis is the development of an algorithm for performing an imaginary time propagation on a quantum chip. It belongs to the class of methods for evaluating the ground state of a quantum system based on operating a Wick rotation of the real-time evolution operator. The resulting propagator is not unitary, implementing in effect a dissipation mechanism that naturally drives the system towards its lowest energy state. Evolution in imaginary time is a well-known technique for finding the ground state of quantum many-body systems. It is at the heart of several numerical methods, including Quantum Monte Carlo techniques, which have been used with great success in quantum chemistry, condensed matter and nuclear physics. Classical implementations of imaginary time propagation suffer (with few exceptions) from an exponential increase in the computational cost with the dimension of the system. This fact calls for a generalization of the algorithm to quantum computers.
The proposed algorithm is implemented by expanding the Hilbert space of the system under investigation by means of ancillary qubits. The projection is obtained by applying a series of unitary transformations that dissipate the components of the initial state along excited states of the Hamiltonian into the ancillary space. A measurement of the ancillary qubit(s) then removes such components, effectively implementing a "cooling" of the system. The theory and testing of this method, along with some proposals for improvements, will be thoroughly discussed in the dedicated chapter.
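The classical imaginary-time technique that this algorithm generalises can be sketched in a few lines (a toy model, with a small random Hermitian matrix standing in for the Hamiltonian): repeatedly applying the Wick-rotated propagator and renormalising damps every excited-state component and leaves the ground state.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
A = rng.normal(size=(n, n))
H = (A + A.T) / 2                       # toy Hermitian "Hamiltonian"
evals, evecs = np.linalg.eigh(H)        # exact spectrum, for comparison

psi = rng.normal(size=n)
psi /= np.linalg.norm(psi)              # random normalised start state

dtau = 0.05
for _ in range(5000):
    # Euler step of d|psi>/dtau = -H|psi> (the Wick-rotated evolution),
    # followed by the renormalisation that makes the overall map non-unitary
    psi = psi - dtau * (H @ psi)
    psi /= np.linalg.norm(psi)

energy = psi @ H @ psi                  # converges to the lowest eigenvalue
overlap = abs(evecs[:, 0] @ psi)        # converges to 1
```

The non-unitarity is exactly why a quantum implementation needs the ancilla construction described above: a quantum circuit can only apply unitaries, so the dissipation has to be pushed into an enlarged space and removed by measurement.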
222. The Effect of Noise Levels on the Performance of Shor's Algorithm – Höstedt, Niklas; Ljunggren, Tobias (January 2023)
Advanced enough quantum computers promise to revolutionise fields such as cryptography, drug discovery and simulations of complex systems. Quantum computers are built on qubits, which are fragile and susceptible to error-inducing interference called noise. The aim of this study was to examine the effects of varying levels of noise interference on the success rate and runtime of a quantum circuit designed to implement Shor's quantum factorisation algorithm. This was conducted using the Qiskit framework for quantum computer simulation and custom noise model creation. Our results show a correlation between the level of noise interference on a circuit and the probability of obtaining the correct measurement. We also found that readout errors had a greater impact on success rates, that one-qubit depolarising errors had a greater impact on runtimes, and that two-qubit depolarising errors greatly affected both. Our findings are in line with previous research and highlight the importance of minimising errors on critical quantum logic gates in an algorithm.
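The qualitative effect of readout error on success rates can be reproduced with a minimal Monte Carlo sketch (this is not the study's Qiskit setup; the bitstring, error model and shot count are illustrative assumptions): each measured bit is flipped independently with probability p, so the probability of reading the correct n-bit result falls roughly as (1 - p)^n.

```python
import random

def success_rate(ideal_bits, p_readout, shots, rng):
    """Fraction of shots whose measured bitstring equals the ideal one,
    with each bit read out wrongly (flipped) with probability p_readout."""
    correct = 0
    for _ in range(shots):
        measured = [b ^ (rng.random() < p_readout) for b in ideal_bits]
        correct += measured == ideal_bits
    return correct / shots

rng = random.Random(0)
ideal = [1, 0, 1, 1]                     # hypothetical correct outcome
noise_levels = [0.0, 0.01, 0.05, 0.1]
rates = [success_rate(ideal, p, 20000, rng) for p in noise_levels]
# success decays roughly as (1 - p) ** len(ideal)
```

A real study layers gate-level depolarising errors on top of this, which also lengthens effective runtimes; this sketch isolates only the readout channel.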
223. Comparing Quantum Annealing and Simulated Annealing when Solving the Graph Coloring Problem – Odelius, Nora; Reinholdsson, Isak (January 2023)
Quantum annealing (QA) is an optimization process in quantum computing similar to the probabilistic metaheuristic simulated annealing (SA). The QA process involves encoding an optimization problem into an energy landscape, which it then traverses in search of the point of minimal energy representing the globally optimal state. In this thesis, two different implementations of QA are examined, one run on a binary quadratic model (BQM) and one on a discrete quadratic model (DQM). These are then compared to their traditional counterpart, SA, in terms of performance and accuracy when solving the graph coloring problem (GCP). Regarding performance, the results show that SA outperforms both QA implementations. However, these slower execution times are mostly due to various overhead costs that arise from limited hardware. Looking only at the quantum annealing part of the process, it is about a hundred times faster than the SA process. When it comes to accuracy, both the DQM implementation of QA and SA provided results of high quality, whereas the BQM implementation performed notably worse, both by often not finding the optimal values and by sometimes returning invalid results.
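The classical baseline, simulated annealing for graph colouring, can be sketched as follows (a minimal illustration with an assumed toy graph and cooling schedule, not the thesis implementation): the energy is the number of monochromatic edges, and the Metropolis rule occasionally accepts uphill moves to escape local minima.

```python
import math, random

def count_conflicts(edges, coloring):
    # number of edges whose endpoints share a colour
    return sum(coloring[u] == coloring[v] for u, v in edges)

def anneal_coloring(n, edges, k, steps=20000, t0=2.0, t1=0.01, seed=0):
    """Simulated annealing for k-colouring: minimise conflicting edges."""
    rng = random.Random(seed)
    coloring = [rng.randrange(k) for _ in range(n)]
    cost = count_conflicts(edges, coloring)
    for step in range(steps):
        t = t0 * (t1 / t0) ** (step / steps)      # geometric cooling
        v = rng.randrange(n)
        old = coloring[v]
        coloring[v] = rng.randrange(k)
        new_cost = count_conflicts(edges, coloring)
        # Metropolis rule: always accept downhill, sometimes accept uphill
        if new_cost > cost and rng.random() >= math.exp((cost - new_cost) / t):
            coloring[v] = old                     # reject the move
        else:
            cost = new_cost
    return coloring, cost

# small assumed instance: a 5-cycle with two chords, 3-colourable
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 2), (1, 3)]
coloring, cost = anneal_coloring(5, edges, k=3)
```

A QA formulation of the same problem instead encodes the conflict count into a BQM or DQM objective and lets the annealer hardware do the traversal.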
224. Quantum Simulation of Quantum Effects in Sub-10-nm Transistor Technologies – Winka, Anders (January 2022)
In this master's thesis, a 2D device simulator running on a hybrid classical-quantum computer was developed. The simulator treats statistical quantum effects such as quantum tunneling and quantum confinement in nanoscale transistors. The simulation scheme is based on a self-consistent solution of the coupled non-linear 2D Schrödinger-Poisson equations. The Open Boundary Condition (OBC) of the Schrödinger equation, which allows electrons to pass through the device between the leads (source and drain), is modeled with the Quantum Transmitting Boundary Method (QTBM). The differential equations are discretized with the finite-element method, using rectangular mesh elements. The self-consistent loop is a very time-consuming process, mainly due to the solution of the discretized OBC Schrödinger equation. To accelerate the solution of the Schrödinger equation, a quantum-assisted domain decomposition method is implemented. The domain decomposition method of choice is the Block Cyclic Reduction (BCR) method. The BCR method is at least 15 times faster (CPU time) than solving the whole linear system of equations with the Python solver numpy.linalg.solve, based on the LAPACK routine _gesv. In the project, we also propose an alternative approach to the BCR method, called the "extra layer BCR", that shows improved accuracy for certain types of solutions. In a quantum-assisted version, the matrix inverse solver used as a step in the BCR method was computed on the D-Wave quantum annealer chip ADVANTAGE_SYSTEM4.1 [4]. Two alternative methods to solve the matrix inverses on a quantum annealer were compared: one called the "unit vector" approach, based on work by Rogers and Singleton [5], and one called the "whole matrix" approach, which was developed in this thesis. Because of the limited number of qubits available on the quantum annealer, the "unit vector" approach was more suitable for adaptation in the BCR method. Comparing the quantum annealer to the Python inverse solver numpy.linalg.inv, also based on LAPACK, it was found that an accurate solution can be achieved, but the simulation time (CPU time) is at best 500 times slower than numpy.linalg.inv.
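The "unit vector" approach rests on the fact that column i of A⁻¹ is the vector x minimising ||Ax - e_i||², which is the least-squares objective an annealer can encode as a QUBO over binary-expanded variables. A purely classical sketch of the same column-by-column decomposition (the test matrix here is an illustrative assumption):

```python
import numpy as np

def inverse_by_unit_vectors(A):
    """Assemble A^{-1} one column at a time: column i is the minimiser of
    ||A x - e_i||^2, solved here with classical least squares rather than
    the annealer's QUBO encoding."""
    n = A.shape[0]
    cols = []
    for i in range(n):
        e = np.zeros(n)
        e[i] = 1.0
        x, *_ = np.linalg.lstsq(A, e, rcond=None)
        cols.append(x)
    return np.column_stack(cols)

rng = np.random.default_rng(2)
A = rng.normal(size=(4, 4)) + 4.0 * np.eye(4)   # well-conditioned toy matrix
Ainv = inverse_by_unit_vectors(A)
max_err = np.max(np.abs(Ainv - np.linalg.inv(A)))
```

Each column is an independent, small optimisation problem, which is exactly why this decomposition suits a qubit-limited annealer better than inverting the whole matrix at once.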
225. Single photon generation and quantum computing with integrated photonics – Spring, Justin Benjamin (January 2014)
Photonics has consistently played an important role in the investigation of quantum-enhanced technologies and the corresponding study of fundamental quantum phenomena. The majority of these experiments have relied on the free space propagation of light between bulk optical components. This relatively simple and flexible approach often provides the fastest route to small proof-of-principle demonstrations. Unfortunately, such experiments occupy significant space, are not inherently phase stable, and can exhibit significant scattering loss, which severely limits their use. Integrated photonics offers a scalable route to building larger quantum states of light by surmounting these barriers. In the first half of this thesis, we describe the operation of on-chip heralded sources of single photons. Loss plays a critical role in determining whether many quantum technologies have any hope of outperforming their classical analogues. Minimizing loss leads us to choose Spontaneous Four-Wave Mixing (SFWM) in a silica waveguide for our source design; silica exhibits extremely low scattering loss, and emission can be efficiently coupled to the silica chips and fibers that are widely used in quantum optics experiments. We show there is a straightforward route to maximizing heralded photon purity by minimizing the spectral correlations between emitted photon pairs. Fabrication of identical sources on a large scale is demonstrated by a series of high-visibility interference experiments. This architecture offers a promising route to the construction of nonclassical states of higher photon number by operating many on-chip SFWM sources in parallel. In the second half, we detail one of the first proof-of-principle demonstrations of a new intermediate model of quantum computation called boson sampling.
While likely less powerful than a universal quantum computer, boson sampling machines appear significantly easier to build and may allow the first convincing demonstration of a quantum-enhanced computation in the not-too-distant future. Boson sampling requires large interferometric networks, which are challenging to build with bulk optics; we therefore perform our experiment on-chip. We model the effect of loss on our postselected experiment and implement a circuit characterization technique that accounts for this loss. Experimental imperfections, including higher-order emission from our photon-pair sources and photon distinguishability, are modeled and found to explain the sampling error observed in our experiment.
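Boson sampling output probabilities are governed by matrix permanents, which is the source of its conjectured classical hardness. A hedged sketch using Ryser's formula reproduces the textbook Hong-Ou-Mandel case (the 50:50 beam-splitter example is illustrative, not an experiment from the thesis): the coincidence probability for two indistinguishable photons vanishes.

```python
import numpy as np
from itertools import combinations

def permanent(M):
    """Ryser's formula, O(2^n * n^2); fine for the small n used here."""
    n = M.shape[0]
    total = 0.0 + 0.0j
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            # product over rows of the row sums restricted to `cols`
            total += (-1) ** r * np.prod(M[:, list(cols)].sum(axis=1))
    return (-1) ** n * total

# Hong-Ou-Mandel: two photons entering a 50:50 beam splitter
U = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
# the coincidence amplitude is Perm(U); it vanishes, so the photons bunch
p_coincidence = abs(permanent(U)) ** 2
```

For an m-mode interferometer the probability of a given output pattern involves the permanent of the corresponding submatrix of the unitary, which is what makes even modest photon numbers classically expensive to sample.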
226. High fidelity readout and protection of a <sup>43</sup>Ca⁺ trapped ion qubit – Szwer, David James (January 2009)
This thesis describes theoretical and experimental work whose main aim is the development of techniques for using trapped <sup>43</sup>Ca⁺ ions for quantum information processing. I present a rate equations model of <sup>43</sup>Ca⁺, and compare it with experimental data. The model is then used to investigate and optimise an electron-shelving readout method from a ground-level hyperfine qubit. The process is robust against common experimental imperfections. A shelving fidelity of up to 99.97% is theoretically possible, taking 100 μs. The laser pulse sequence can be greatly simplified for only a small reduction in the fidelity. The simplified method is tested experimentally with fidelities up to 99.8%. The shelving procedure could be applied to other commonly-used species of ion qubit. An entangling two-qubit quantum controlled-phase gate was attempted between a <sup>40</sup>Ca⁺ and a <sup>43</sup>Ca⁺ ion. The experiment did not succeed due to frequent decrystallisation of the ion pair, and strong motional decoherence. The source of the problems was never identified despite significant experimental effort, and the decision was made to suspend the experiments and continue them in an improved ion trap which is under construction. A sequence of π-pulses, inspired by the Hahn spin-echo, was derived that is capable of greatly reducing dephasing of any qubit. If the qubit precession frequency varies with time as an nth-order polynomial, an (n+1)-pulse sequence is theoretically capable of perfectly cancelling the resulting phase error. The sequence is used on a <sup>43</sup>Ca⁺ magnetic-field-sensitive hyperfine qubit, with 20 pulses increasing the coherence time by a factor of 75 compared to an experiment without any spin-echo. In our ambient noise environment the well-known Carr-Purcell-Meiboom-Gill dynamic-decoupling method was found to be comparably effective.
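The refocusing idea behind such pulse sequences can be illustrated for the simplest case, n = 0: a single π-pulse at the midpoint cancels the phase accumulated from any static (0th-order) detuning. A toy ensemble simulation (the detuning distribution and times are assumptions made here, not experimental values):

```python
import numpy as np

rng = np.random.default_rng(3)
deltas = rng.normal(scale=1.0, size=2000)   # static shot-to-shot detunings
T = 5.0

# free evolution for time T: each run accumulates phase delta * T,
# and averaging over the ensemble washes the coherence out
free_coherence = np.abs(np.mean(np.exp(1j * deltas * T)))

# Hahn echo: the pi-pulse at T/2 negates the phase of the first half,
# so the total phase is -delta*T/2 + delta*T/2 = 0 for any static delta
echo_phase = -(deltas * T / 2) + deltas * T / 2
echo_coherence = np.abs(np.mean(np.exp(1j * echo_phase)))
```

A detuning that drifts linearly in time would survive the single echo, which is exactly why higher-order polynomial drifts call for the longer (n+1)-pulse sequences described above.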
227. High fidelity readout of trapped ion qubits – Burrell, Alice Heather (January 2010)
This thesis describes experimental demonstrations of high-fidelity readout of trapped ion quantum bits ("qubits") for quantum information processing. We present direct single-shot measurement of an "optical" qubit stored in a single calcium-40 ion by the process of resonance fluorescence with a fidelity of 99.991(1)% (surpassing the level necessary for fault-tolerant quantum computation). A time-resolved maximum likelihood method is used to discriminate efficiently between the two qubit states based on photon-counting information, even in the presence of qubit decay from one state to the other. It also screens out errors due to cosmic ray events in the detector, a phenomenon investigated in this work. An adaptive method allows the 99.99% level to be reached in a 145 μs average detection time. The readout fidelity is asymmetric: 99.9998% is possible for the "bright" qubit state, while retaining 99.98% for the "dark" state. This asymmetry could be exploited in quantum error correction (by encoding the "no-error" syndrome of the ancilla qubits in the "bright" state), as could the likelihood values computed (which quantify confidence in the measurement outcome). We then extend the work to parallel readout of a four-ion string using a CCD camera and achieve the same 99.99% net fidelity, limited by qubit decay in the 400 μs exposure time. The behaviour of the camera is characterised by fitting experimental data with a model. The additional readout error due to cross-talk between ion images on the CCD is measured in an experiment designed to remove the effect of qubit decay; a spatial maximum likelihood technique is used to reduce this error to only 0.2(1)×10⁻⁴ per qubit, despite the presence of ~4% optical cross-talk between neighbouring qubits. Studies of the cross-talk indicate that the readout method would scale with negligible loss of fidelity to parallel readout of ~10,000 qubits with a readout time of ~3 μs per qubit.
Monte-Carlo simulations of the readout process are presented for comparison with experimental data; these are also used to explore the parameter space associated with fluorescence detection and to optimise experimental and analysis parameters. Applications of the analysis methods to readout of other atomic and solid-state qubits are discussed.
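A stripped-down version of maximum-likelihood state discrimination from photon counts can clarify the core of the method (Poisson count statistics with illustrative bright/dark rates chosen here; the real analysis is time-resolved and models qubit decay, which this sketch omits):

```python
import math, random

def classify(count, mu_bright, mu_dark):
    """Maximum-likelihood label for a photon count under Poisson statistics."""
    def loglike(mu):
        return count * math.log(mu) - mu - math.lgamma(count + 1)
    return "bright" if loglike(mu_bright) > loglike(mu_dark) else "dark"

rng = random.Random(4)

def poisson(mu):
    # Knuth's method; adequate for the small means used here
    L, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

mu_bright, mu_dark = 20.0, 0.5          # illustrative mean photon counts
trials = 5000
errors = 0
for _ in range(trials):
    errors += classify(poisson(mu_bright), mu_bright, mu_dark) != "bright"
    errors += classify(poisson(mu_dark), mu_bright, mu_dark) != "dark"
error_rate = errors / (2 * trials)
```

The log-likelihood difference also quantifies confidence in each outcome, which is the quantity the thesis suggests exploiting in error correction.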
228. Higher-order semantics for quantum programming languages with classical control – Atzemoglou, George Philip (January 2012)
This thesis studies the categorical formalisation of quantum computing, through the prism of type theory, in a three-tier process. The first stage of our investigation involves the creation of the dagger lambda calculus, a lambda calculus for dagger compact categories. Our second contribution lifts the expressive power of the dagger lambda calculus, to that of a quantum programming language, by adding classical control in the form of complementary classical structures and dualisers. Finally, our third contribution demonstrates how our lambda calculus can be applied to various well known problems in quantum computation: Quantum Key Distribution, the quantum Fourier transform, and the teleportation protocol.
229. Pictures of processes: automated graph rewriting for monoidal categories and applications to quantum computing – Kissinger, Aleks (January 2011)
This work is about diagrammatic languages, how they can be represented, and what they in turn can be used to represent. More specifically, it focuses on representations and applications of string diagrams. String diagrams are used to represent a collection of processes, depicted as "boxes" with multiple (typed) inputs and outputs, depicted as "wires". If we allow plugging input and output wires together, we can intuitively represent complex compositions of processes, formalised as morphisms in a monoidal category. While string diagrams are very intuitive, existing methods for defining them rigorously rely on topological notions that do not extend naturally to automated computation. The first major contribution of this dissertation is the introduction of a discretised version of a string diagram called a string graph. String graphs form a partial adhesive category, so they can be manipulated using double-pushout graph rewriting. Furthermore, we show how string graphs modulo a rewrite system can be used to construct free symmetric traced and compact closed categories on a monoidal signature. The second contribution is in the application of graphical languages to quantum information theory. We use a mixture of diagrammatic and algebraic techniques to prove a new classification result for strongly complementary observables. Namely, maximal sets of strongly complementary observables of dimension D must be of size no larger than 2, and are in 1-to-1 correspondence with the Abelian groups of order D. We also introduce a graphical language for multipartite entanglement and illustrate a simple graphical axiom that distinguishes the two maximally-entangled tripartite qubit states: GHZ and W. Notably, we illustrate how the algebraic structures induced by these operations correspond to the (partial) arithmetic operations of addition and multiplication on the complex projective line. 
The third contribution is a description of two software tools developed in part by the author to implement much of the theoretical content described here. The first tool is Quantomatic, a desktop application for building string graphs and graphical theories, as well as performing automated graph rewriting visually. The second is QuantoCoSy, which performs fully automated, model-driven theory creation using a procedure called conjecture synthesis.
230. Optimisation et approximation adiabatique [Optimisation and adiabatic approximation] – Renaud-Desjardins, Louis (12 January 2024)
The adiabatic approximation in quantum mechanics states that if the Hamiltonian of a physical system evolves slowly enough, then the system will remain in the instantaneous eigenstate related to the initial eigenstate. Recently, two researchers found an inconsistency in the application of the approximation. The limits of the theorem will be discussed alongside its derivation.
Our goal is to optimize the probability of remaining in the instantaneous eigenstate related to the initial eigenstate, knowing the initial and final system, with the total time of the experiment fixed to T. This time constraint prevents the evolution from being slow enough for the adiabatic approximation to apply.
To solve this problem, we turn to the calculus of variations. We suppose the optimal evolution is known and add a small variation to it. We then insert the result into the probability of remaining adiabatic and expand in powers of the variation. Since the expansion is around an optimum, the first-order term must vanish. This yields a criterion constraining the most adiabatic possible evolution, which should allow the ideal Hamiltonian to be determined.
Time-dependent quantum systems are very complicated, so we start with systems whose eigenenergies are time-independent; afterwards, systems without constraints and with free initial and final wave functions are studied.
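The trade-off being optimised can be seen in a two-level toy model (the linear Hamiltonian sweep and the durations below are assumptions made for illustration): the probability of remaining in the instantaneous ground state approaches 1 only when the total time T is long compared with the inverse of the minimum energy gap.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def adiabatic_fidelity(T, steps=4000):
    """Probability of ending in the instantaneous ground state after a
    linear sweep H(s) = (1 - s) X + s Z of total duration T."""
    dt = T / steps
    psi = np.array([1, -1], dtype=complex) / np.sqrt(2)   # ground state of X
    for k in range(steps):
        s = (k + 0.5) / steps                             # midpoint of the step
        evals, evecs = np.linalg.eigh((1 - s) * X + s * Z)
        # exact short-time propagator exp(-i H dt) via the eigendecomposition
        psi = evecs @ (np.exp(-1j * evals * dt) * (evecs.conj().T @ psi))
    ground_final = np.linalg.eigh(Z)[1][:, 0]             # ground state of Z
    return abs(ground_final.conj() @ psi) ** 2

slow = adiabatic_fidelity(T=50.0)   # slow sweep: nearly adiabatic
fast = adiabatic_fidelity(T=0.5)    # fast sweep: large diabatic loss
```

When T cannot be made large, as in the problem treated here, the shape of H(s) between its fixed endpoints is the only remaining freedom, and choosing it is precisely the variational problem posed above.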