181 |
Entangling gates using Josephson circuits coupled through non-classical microwaves. Migliore, R., Konstadopoulou, Anastasia, Vourdas, Apostolos, Spiller, T.P., Messina, A. January 2003 (has links)
No / A system consisting of two Josephson qubits coupled through a quantum monochromatic electromagnetic field mode of a resonant tank circuit is studied. It is shown that for certain values of the parameters, it can be used as an entangling gate, which entangles the two qubits whilst the electromagnetic field remains disentangled. The gate operates with decent fidelity and could form the basis for initial experimental investigations of coupled superconducting qubits.
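The gate's key property above, that the two qubits end up entangled while the field mode factors out, can be checked numerically through the Schmidt decomposition of a bipartite pure state. A minimal sketch (not from the thesis; the states and helper below are illustrative):

```python
import numpy as np

def entanglement_entropy(psi, dims=(2, 2)):
    """Von Neumann entropy (in bits) of either subsystem of a
    pure bipartite state given as a flattened state vector."""
    m = np.asarray(psi, dtype=complex).reshape(dims)  # dA x dB amplitude matrix
    s = np.linalg.svd(m, compute_uv=False)            # Schmidt coefficients
    p = s**2
    p = p[p > 1e-12]                                  # drop zero terms in p*log(p)
    return float(-(p * np.log2(p)).sum() + 0.0)       # + 0.0 normalizes -0.0

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)   # maximally entangled two-qubit state
prod = np.kron([1, 0], [0, 1])               # product state |0>|1>
print(round(entanglement_entropy(bell), 3))  # → 1.0
print(round(entanglement_entropy(prod), 3))  # → 0.0
```

An entropy of 1 bit indicates maximal two-qubit entanglement; 0 means the state factorizes, which is exactly the condition the abstract requires of the electromagnetic field mode.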
|
182 |
CO-DESIGN OF QUANTUM SOFTWARE AND HARDWARE Jinglei Cheng (18975923) 05 July 2024 (has links)
<p dir="ltr">Quantum computing is advancing rapidly, with variational quantum algorithms (VQAs) showing great promise for demonstrating quantum advantage on near-term devices. A critical component of VQAs is the ansatz, a parameterized quantum circuit that is iteratively optimized. However, compiling ansatz circuits for specific quantum hardware is challenging due to topological constraints, gate errors, and decoherence. This thesis presents a series of techniques to efficiently generate and optimize quantum circuits, with a focus on VQAs. We first introduce AccQOC, a framework combining static pre-compilation with accelerated dynamic compilation to transform quantum gates to hardware pulses using quantum optimal control (QOC). AccQOC generates pulses for frequently used gate sequences in advance and stores them in a lookup table. For new gate sequences, it utilizes a Minimum Spanning Tree based approach to find the optimal compilation order that maximizes the similarity between consecutive sequences, thereby accelerating the compilation process. By leveraging pre-computed pulses and employing a similarity-based approach, AccQOC achieves a 9.88× speedup in compilation time compared to standard QOC methods while maintaining a 2.43× latency reduction over gate-based compilation. Building on AccQOC, we propose EPOC, an extended framework integrating circuit partitioning, ZX-calculus optimization, and synthesis methods. EPOC operates at a finer granularity compared to previous coarse-grained approaches, decomposing circuits into smaller sub-circuits based on the number of qubits and circuit depth. It then applies synthesis techniques to identify equivalent representations with reduced gate count. The optimized sub-circuits are then grouped into larger unitary matrices, which are used as inputs for QOC. This approach enables increased parallelism and reduced latency in the resulting quantum pulses.
Compared to the state-of-the-art pulse optimization framework, EPOC achieves a 31.74% reduction in circuit latency and a 76.80% reduction compared to gate-based methods. To construct hardware-efficient ansatz for VQAs, we introduce two novel approaches. TopGen is a topology-aware bottom-up approach that generates sub-circuits according to the connectivity constraints of the target quantum device. It starts by generating a library of sub-circuits that are compatible with the device topology and evaluates them based on metrics like expressibility and entangling capability. The sub-circuits with the best properties are then selected and progressively combined using different techniques. TopGen also employs dynamic circuit growth, where small sub-circuits are appended to the ansatz during training, and gate pruning, which removes gates with small parameters. Evaluated on a range of VQA tasks, TopGen achieves an average reduction of 50% in circuit depth after compilation compared to previous methods. NAPA takes a more direct approach by utilizing device-native parametric pulses as the fundamental building blocks for constructing the ansatz. It uses cross-resonance pulses for entangling qubits and DRAG pulses for single-qubit rotations. The ansatz is constructed in a hardware-efficient manner. By exploiting the greater flexibility and expressivity of parametric pulses, NAPA demonstrates up to 97.3% latency reduction while maintaining accuracy comparable to gate-based approaches when evaluated on real quantum devices. Finally, we explore error mitigation techniques for VQAs at the pulse level. We develop a fidelity estimator based on reversed pulses that enables randomized benchmarking of parametric pulses. This estimator compares the final state obtained after applying a sequence of pulses followed by their reversed counterparts to the initial state, using the probability of successful trials as a proxy for fidelity.
Furthermore, we adapt the zero-noise extrapolation (ZNE) technique to the pulse level, enabling error mitigation for quantum pulses. Applied to VQE tasks for H2 and HeH+ molecules, pulse-level ZNE reduces the deviation from ideal expectation values by an average of 54.1%. The techniques developed in this thesis advance the efficiency and practicality of VQAs on near-term quantum devices. The introduced frameworks, AccQOC and EPOC, provide efficient pulse optimization, while TopGen and NAPA construct hardware-efficient ansatz. In addition, the pulse-level error mitigation techniques presented in this thesis improve the resilience of VQAs against the inherent noise and imperfections of NISQ devices. Together, these contributions help unlock the full potential of quantum computing and realize practical quantum advantages in the near future.</p>
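The pulse-level ZNE mentioned above follows the standard extrapolation recipe: measure an expectation value at several amplified noise levels, fit a model, and evaluate the fit at zero noise. A minimal sketch with synthetic data (the scale factors, linear decay model, and numbers are illustrative assumptions, not results from the thesis):

```python
import numpy as np

def zne_extrapolate(scale_factors, expectations, degree=1):
    """Polynomial zero-noise extrapolation: fit E(lambda) over noise-scale
    factors lambda and evaluate the fit at lambda = 0."""
    coeffs = np.polyfit(scale_factors, expectations, deg=degree)
    return float(np.polyval(coeffs, 0.0))

# Toy noise model: the expectation drifts linearly with the noise scale.
# The ideal value -1.85 and slope 0.30 are made-up numbers for illustration.
scales = np.array([1.0, 1.5, 2.0])   # e.g. pulse-stretch factors
noisy = -1.85 + 0.30 * scales        # simulated noisy measurements
mitigated = zne_extrapolate(scales, noisy)
print(round(mitigated, 3))  # → -1.85
```

At the pulse level, the noise amplification is typically realized by stretching pulse durations rather than by gate folding; the extrapolation step itself is identical.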
|
183 |
Variational quantum algorithms for two-particle systems Moffat, Cameron 13 August 2024 (has links) (PDF)
This thesis explores variational quantum algorithms, a promising class of hybrid algorithms that leverage variational principles to simulate quantum systems in the Noisy Intermediate-Scale Quantum (NISQ) era. We present a comparative analysis of two types of Variational Quantum Eigensolver (VQE) methods, assessing their capability to obtain the ground state of a two-particle fermionic quantum system. Also presented is our work using VQE to find ground states of a two-particle system with short-ranged interactions. Additionally, we studied variational quantum time evolution using the McLachlan variational principle and compared the state evolution to the exact solution. Our findings highlight the strengths and weaknesses of different VQE implementations in obtaining ground states of two-particle systems. Moreover, our exploration of variational time evolution demonstrated the feasibility of simulating two-particle systems over extended time steps, thus paving the way for more efficient quantum simulations to be integrated into more complex physics calculations.
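The variational principle behind VQE, minimizing the energy expectation over a parameterized trial state, can be illustrated entirely classically on a toy 2x2 Hamiltonian. A minimal sketch (the matrix and one-parameter ansatz below are illustrative assumptions, not the two-particle systems studied in the thesis):

```python
import numpy as np

# Toy 2x2 Hamiltonian standing in for a two-particle problem (illustrative).
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])

def energy(theta):
    """Variational energy <psi(theta)|H|psi(theta)> for the real
    one-parameter ansatz |psi(theta)> = (cos theta, sin theta)."""
    psi = np.array([np.cos(theta), np.sin(theta)])
    return float(psi @ H @ psi)

# Classical stand-in for the quantum-classical optimizer loop: grid search.
thetas = np.linspace(0.0, np.pi, 20001)
best = min(energy(t) for t in thetas)
exact = float(np.linalg.eigvalsh(H)[0])   # exact ground-state energy
print(round(best, 4), round(exact, 4))    # both → -1.118
```

In a real VQE the `energy` evaluation is replaced by measurements on hardware and the grid search by a gradient-based or gradient-free optimizer; the minimization principle is the same.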
|
184 |
Quantum algorithm for persistent Betti numbers and topological data analysis / パーシステント・ベッチ数およびトポロジカルデータ解析に関する量子アルゴリズム Hayakawa, Ryu 25 March 2024 (has links)
京都大学 / 新制・課程博士 / 博士(理学) / 甲第25104号 / 理博第5011号 / 新制||理||1715(附属図書館) / 京都大学大学院理学研究科物理学・宇宙物理学専攻 / (主査)准教授 森前 智行, 教授 高橋 義朗, 准教授 戸塚 圭介 / 学位規則第4条第1項該当 / Doctor of Science / Kyoto University / DFAM
|
185 |
Formal Verification of Quantum Software Tao, Runzhou January 2024 (has links)
Real applications of near-term quantum computing are around the corner and quantum software is a key component. Unlike classical computing, quantum software is under the threat of both quantum hardware errors and human bugs due to the unintuitiveness of quantum physics theory. Therefore, trustworthiness and reliability are critical for the success of quantum computation. However, most traditional methods to ensure software reliability, like testing, do not transfer to quantum at scale because of the destructive and probabilistic nature of quantum measurement and the exponential-sized state space.
In this thesis, I introduce a series of frameworks to ensure the trustworthiness of quantum computing software by automated formal verification. First, I present Giallar, a fully-automated verification toolkit for quantum compilers to formally prove that the compiler is bug-free. Giallar requires no manual specifications, invariants, or proofs, and can automatically verify that a compiler pass preserves the semantics of quantum circuits. To deal with unbounded loops in quantum compilers, Giallar abstracts three loop templates, whose loop invariants can be automatically inferred. To efficiently check the equivalence of arbitrary input and output circuits that have complicated matrix semantics representation, Giallar introduces a symbolic representation for quantum circuits and a set of rewrite rules for showing the equivalence of symbolic quantum circuits. With Giallar, I implemented and verified 44 (out of 56) compiler passes in 13 versions of the Qiskit compiler, the open-source quantum compiler standard, during which three bugs in Qiskit were detected and confirmed. The evaluation shows that most of Qiskit compiler passes can be automatically verified in seconds and verification imposes only a modest overhead to compilation performance.
Second, I introduce Gleipnir, an error analysis framework for quantum programs that enables scalable and adaptive verification of quantum errors through the application of tensor networks. Gleipnir introduces the (𝜌̂, 𝛿)-diamond norm, an error metric constrained by a quantum predicate consisting of the approximate state 𝜌̂ and its distance 𝛿 to the ideal state 𝜌. This predicate (𝜌̂, 𝛿) can be computed adaptively using tensor networks based on Matrix Product States. Gleipnir features a lightweight logic for reasoning about error bounds in noisy quantum programs, based on the (𝜌̂, 𝛿)-diamond norm metric. The experimental results show that Gleipnir is able to efficiently generate tight error bounds for real-world quantum programs with 10 to 100 qubits, and can be used to evaluate the error mitigation performance of quantum compiler transformations.
Finally, I present QSynth, a quantum program synthesis framework that synthesizes verified recursive quantum programs, including a new inductive quantum programming language, its specification, a sound logic for reasoning, and an encoding of the reasoning procedure into SMT instances. By leveraging existing SMT solvers, QSynth successfully synthesizes 10 quantum unitary programs including quantum arithmetic programs, quantum eigenvalue inversion, quantum teleportation, and the Quantum Fourier Transform, which can be readily transpiled to executable programs on major quantum platforms, e.g., Q#, IBM Qiskit, and AWS Braket.
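Giallar's symbolic representation is beyond a short sketch, but the property its equivalence check certifies is easy to state concretely: two circuits are semantically equal when their unitaries agree up to a global phase. A minimal single-qubit sketch (the gates and helper functions are illustrative, not Giallar's actual machinery):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard
X = np.array([[0, 1], [1, 0]])                 # Pauli-X
Z = np.diag([1.0, -1.0])                       # Pauli-Z

def circuit_unitary(gates):
    """Multiply gate matrices in application order (first gate acts first)."""
    U = np.eye(2, dtype=complex)
    for g in gates:
        U = g @ U
    return U

def equivalent_up_to_phase(U, V, tol=1e-9):
    """True iff U = e^{i*phi} V for some global phase phi."""
    M = U @ V.conj().T                 # should be phase * identity
    i, j = np.unravel_index(np.argmax(np.abs(M)), M.shape)
    phase = M[i, j] / abs(M[i, j])     # read phase off the largest entry
    return bool(np.allclose(M, phase * np.eye(len(M)), atol=tol))

# Known identity H Z H = X: the circuits [H, Z, H] and [X] are equivalent,
# so a compiler pass may rewrite one into the other.
print(equivalent_up_to_phase(circuit_unitary([H, Z, H]),
                             circuit_unitary([X])))   # → True
```

This matrix-level check scales exponentially with qubit count, which is precisely why Giallar works with symbolic circuits and rewrite rules instead of explicit unitaries.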
|
186 |
Sur les limites empiriques du calcul : calculabilité, complexité et physique / On the empirical limitations on computation : computability, complexity and physics Pégny, Maël 05 December 2013 (has links)
Durant ces dernières décennies, la communauté informatique a montré un intérêt grandissant pour les modèles de calcul non-standard, inspirés par des phénomènes physiques, biologiques ou chimiques. Les propriétés exactes de ces modèles ont parfois été l'objet de controverses: que calculent-ils? Et à quelle vitesse? Les enjeux de ces questions sont renforcés par la possibilité que certains de ces modèles pourraient transgresser les limites acceptées du calcul, en violant soit la thèse de Church-Turing soit la thèse de Church-Turing étendue. La possibilité de réaliser physiquement ces modèles a notamment été au coeur des débats. Ainsi, des considérations empiriques semblent introduites dans les fondements même de la calculabilité et de la complexité computationnelle, deux théories qui auraient été précédemment considérées comme des parties purement a priori de la logique et de l'informatique. Par conséquent, ce travail est consacré à la question suivante : les limites du calcul reposent-elles sur des fondements empiriques? Et si oui, quels sont-ils? Pour ce faire, nous examinons tout d'abord la signification précise des limites du calcul, et articulons une conception épistémique du calcul, permettant la comparaison des modèles les plus variés. Nous répondrons à la première question par l'affirmative, grâce à un examen détaillé des débats entourant la faisabilité des modèles non-standard. Enfin, nous montrerons les incertitudes entourant la deuxième question dans l'état actuel de la recherche, en montrant les difficultés de la traduction des concepts computationnels en limites physiques. / Recent years have seen a surge in the interest for non-standard computational models, inspired by physical, biological or chemical phenomena. The exact properties of some of these models have been a topic of somewhat heated discussion: what do they compute? And how fast do they compute? 
The stakes of these questions were heightened by the claim that these models would violate the accepted limits of computation, by violating the Church-Turing Thesis or the Extended Church-Turing Thesis. To answer these questions, the physical realizability of some of those models - or lack thereof - has often been put at the center of the argument. It thus seems that empirical considerations have been introduced into the very foundations of computability and computational complexity theory, both subjects that would have been previously considered purely a priori parts of logic and computer science. Consequently, this dissertation is dedicated to the following question: do computability and computational complexity theory rest on empirical foundations? If yes, what are these foundations? We will first examine the precise meaning of those limits of computation, and articulate a philosophical conception of computation able to make sense of this variety of models. We then answer the first question by the affirmative, through a careful examination of current debates around non-standard models. We show the various difficulties surrounding the second question, and study how they stem from the complex translation of computational concepts into physical limitations.
|
187 |
Probabilistic Exact Inversion of 2-qubit Bipartite Unitary Operations using Local Operations and Classical Communication / Probabilistisk Exakt Inversion av 2-qubit Bipartita Unitära Operationer genom Lokala Operationer och Klassisk Kommunikation Lindström, Ludvig January 2024 (has links)
A distributed quantum computer holds the potential to emulate a larger quantum computer by being partitioned into smaller modules where local operations (LO) can be applied, and classical communication (CC) can be utilized between these modules. Finding algorithms under LOCC restrictions is crucial for leveraging the capabilities of distributed quantum computing. This thesis explores probabilistic exact LOCC supermaps that map 2-qubit bipartite unitary operations to their inversion and complex conjugation. Presented are LOCC unitary inversion and complex conjugation supermaps that use 3 calls of the operation, achieving success probabilities of 3/128 and 3/8, respectively. These supermaps are discovered through an examination of the Kraus-Cirac decomposition and its interaction with single-qubit unitary inversion supermaps. These results can be used for time reversal as well as noise reduction in closed distributed quantum systems. / En distribuerad kvantdator har potentialen att emulera en större kvantdator genom att delas upp i mindre moduler, där lokala operationer (LO) kan appliceras och klassisk kommunikation (CC) användas. För att effektivt kunna använda algoritmer på en distribuerad kvantdator måste de anpassas för LOCC-restriktioner. Denna avhandling studerar probabilistiskt exakta LOCC-superavbildningar, som avbildar 2-qubits bipartita unitära operationer till deras invers och komplexkonjugat. I avhandlingen presenteras en LOCC unitär inversion- samt en komplexkonjugatsuperavbildning vilka använder 3 anrop av operationen och lyckas med sannolikhet 3/128 respektive 3/8. Dessa superavbildningar hittades genom att studera Kraus Cirac-uppdelningen och dess interaktion med 1-qubits inversionssuperavbildningar. Förhoppningsvis kan dessa resultat användas till att invertera tiden samt brusreducering på distribuerade kvantsystem.
|
188 |
Issues of control and causation in quantum information theory Marletto, Chiara January 2013 (has links)
Issues of control and causation are central to the Quantum Theory of Computation. Yet there is no place for them in fundamental laws of Physics when expressed in the prevailing conception, i.e., in terms of initial conditions and laws of motion. This thesis aims at arguing that Constructor Theory, recently proposed by David Deutsch to generalise the quantum theory of computation, is a candidate to provide a theory of control and causation within Physics. To this end, I shall present a physical theory of information that is formulated solely in constructor-theoretic terms, i.e., in terms of which transformations of physical systems are possible and which are impossible. This theory solves the circularity at the foundations of existing information theory; it provides a unifying relation between classical and quantum information, revealing the single property underlying the most distinctive phenomena associated with the latter: the unpredictability of the outcomes of some deterministic processes, the lack of distinguishability of some states, the irreducible perturbation caused by measurement and the existence of locally inaccessible information in composite systems (entanglement). This thesis also aims to investigate the restrictions that quantum theory imposes on copying-like tasks. To this end, I will propose a unifying, picture-independent formulation of the no-cloning theorem. I will also discuss a protocol to accomplish the closely related task of transferring perfectly a quantum state along a spin chain, in the presence of systematic errors. Furthermore, I will address the problem of whether self-replication (as it occurs in living organisms) is compatible with Quantum Mechanics. Some physicists, notably Wigner, have argued that this logic is in fact forbidden by Quantum Mechanics, thus claiming that the latter is not a universal theory. 
I shall prove that those claims are invalid and that the logic of self-replication is, of course, compatible with Quantum Mechanics.
|
189 |
Electronic structure calculations of defects in diamond for quantum computing : A study of the addition of dopants in the diamond structure Murillo Navarro, Diana Elisa January 2019 (has links)
When doing computations on the negatively (positively) charged NV-center in diamond, the common procedure is to add (subtract) an electron from the system. However, when using periodic boundary conditions, this addition/subtraction of an electron from the supercell would result in a divergent electrostatic energy, so an artificial jellium background of opposite charge that compensates the electronic charge is needed to make the supercell neutral. This introduces further problems that need corrections, and the method is especially problematic for slab supercells, as the compensating background charge leads to a dipole, which diverges as the vacuum between the slab images increases. An alternative, recently proposed way of charging the NV-center is to introduce electron donors/acceptors in the form of nitrogen/boron atoms (at substitutional sites in the diamond lattice). In this way, we keep the supercell/slab neutral and avoid correction schemes. In this work we verify that the addition of a substitutional nitrogen atom indeed has the same effect on the NV-center as the more traditional method of adding an extra electron to the system. Further, we investigate the effects of 1. adding two substitutional nitrogen atoms to the system (3 nitrogen atoms in total, neutral supercell), 2. adding a substitutional nitrogen atom and an electron to the system (2 nitrogen atoms in total, negatively charged supercell), and 3. adding two electrons to the system (1 nitrogen atom, doubly negatively charged supercell). Additionally, we investigate the addition of acceptor dopants (boron) in order to analyze the effect on the electronic structure of the NV-center and diamond.
|
190 |
New bounds for information complexity and quantum query complexity via convex optimization tools Brandeho, Mathieu 28 September 2018 (has links) (PDF)
This thesis brings together three works on information complexity and quantum query complexity. What these topics have in common is the mathematical machinery used to study them, namely optimization problems. The first two works concern quantum query complexity and generalize the following important result: in [LMRSS11], the authors characterize quantum query complexity by means of the adversary method, a semidefinite program introduced by A. Ambainis in [Ambainis2000]. However, this characterization is restricted to discrete-time models with bounded error. The first work thus generalizes their result to continuous-time models, while the second is an unfinished attempt to characterize quantum query complexity in the exact and unbounded-error settings. In the first work, to characterize quantum query complexity in continuous-time models, we adapt the discrete-time proof by constructing a universal adiabatic query algorithm. The algorithm rests on the adiabatic theorem [Born1928] together with an optimal solution of the dual of the adversary method. Notably, the analysis of the running time of our adiabatic algorithm is based on a proof that does not require a gap in the spectrum of the Hamiltonian. In the second work, we aim to characterize quantum query complexity for exact and unbounded-error computation. To this end we revisit and improve the adversary method using an approach from Lagrangian mechanics, constructing a Lagrangian that indicates the number of queries needed to move through phase space, which lets us define a "query action". Since this Lagrangian is expressed as a semidefinite program, its classical study via the Euler-Lagrange equations requires the envelope theorem, a powerful tool from mathematical economics. The last, more distantly related work concerns the information complexity (and by extension the communication complexity) of simulating non-local correlations, or more precisely the amount of (Shannon) information that two parties must exchange to reproduce these correlations. To this end, we define a new complexity measure, the zero information complexity IC_0, via the no-communication model. This complexity has the advantage of being expressed as a convex optimization problem. For the CHSH correlations, we solve the optimization problem in the one-direction case, recovering a known result. For the two-direction scenario, we provide numerical evidence for the validity of this bound, and we solve a relaxed form of IC_0, which is a new result. / Doctorat en Sciences de l'ingénieur et technologie / info:eu-repo/semantics/nonPublished
|