1

Nonuniversal entanglement level statistics in projection-driven quantum circuits and glassy dynamics in classical computation circuits

Zhang, Lei 12 November 2021 (has links)
In this thesis, I describe research results on three topics: (i) a phase transition in the area-law regime of quantum circuits driven by projective measurements; (ii) ultra-slow dynamics in two-dimensional spin circuits; and (iii) tensor network methods applied to Boolean satisfiability problems.

(i) Nonuniversal entanglement level statistics in projection-driven quantum circuits. Non-thermalized closed quantum many-body systems have drawn considerable attention due to their relevance to experimentally controllable quantum systems. In the first part of the thesis, we study the level-spacing statistics in the entanglement spectrum of output states of random universal quantum circuits in which, at each time step, qubits are subject to a finite probability of projection onto states of the computational basis. We encounter two phase transitions with increasing projection rate: the first is the volume-to-area-law transition observed in quantum circuits with projective measurements; the second separates the pure Poisson level-statistics phase at large projective measurement rates from a regime of residual level repulsion in the entanglement spectrum within the area-law phase, characterized by non-universal level-spacing statistics that interpolate between the Wigner-Dyson and Poisson distributions. The same behavior is observed both in circuits of random two-qubit unitaries and in circuits of universal gates, including the set implemented by Google in its Sycamore circuits.

(ii) Ultra-slow dynamics in a translationally invariant spin model for multiplication and factorization. Slow relaxation of glassy systems in the absence of disorder remains one of the most intriguing problems in condensed matter physics. In the second part of the thesis, we investigate slow relaxation in a classical model of short-range interacting Ising spins on a translationally invariant two-dimensional lattice that mimics a reversible circuit which, depending on the choice of boundary conditions, either multiplies or factorizes integers. We prove that, for open boundary conditions, the model exhibits no finite-temperature phase transition. Yet we find that it displays glassy dynamics with astronomically slow relaxation times, numerically consistent with a double-exponential dependence on the inverse temperature. The slowness of the dynamics arises from errors that occur during thermal annealing, which cost little energy but flip an extensive number of spins. We argue that the energy barrier that must be overcome to heal such defects scales linearly with the correlation length, which diverges exponentially with inverse temperature, thus yielding the double-exponential behavior of the relaxation time.

(iii) Reversible circuit embedding on tensor networks for Boolean satisfiability. Finally, in the third part of the thesis we present an embedding of Boolean satisfiability (SAT) problems on a two-dimensional tensor network. The embedding uses reversible circuits encoded into the tensor network, whose trace counts the number of solutions of the satisfiability problem. We specifically present the formulation of #2SAT, #3SAT, and #3XORSAT formulas as planar tensor networks. We use a compression-decimation algorithm that we introduced to propagate constraints in the network before coarse-graining the boundary tensors. Iterations of these two steps gradually collapse the network while slowing down the growth of bond dimensions. For #3XORSAT, we show numerically that this procedure recognizes, at least partially, the simplicity of XOR constraints, for which it achieves subexponential time to solution. For a #P-complete subset of #2SAT, we find that our algorithm scales with size in the same way as state-of-the-art #SAT counters, albeit with a larger prefactor. We find that the compression step performs less efficiently for #3SAT than for #2SAT.
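To make the counting-by-contraction idea concrete (an illustration only, not the compression-decimation algorithm of the thesis), a toy #2SAT instance can be counted by building one 0/1 tensor per clause and contracting along shared variable indices; the formula below is made up for the example:

    # Toy illustration: counting solutions of a small #2SAT instance by
    # contracting clause tensors. This sketches the general idea only.
    import numpy as np

    def or_clause(neg_a, neg_b):
        """2x2 tensor that is 1 iff the clause (la OR lb) is satisfied,
        where neg_* marks a negated literal."""
        t = np.zeros((2, 2))
        for a in (0, 1):
            for b in (0, 1):
                la = 1 - a if neg_a else a
                lb = 1 - b if neg_b else b
                t[a, b] = 1.0 if (la or lb) else 0.0
        return t

    # Example formula over variables x, y, z (chosen for illustration):
    # (x OR y) AND (NOT x OR z) AND (NOT y OR NOT z)
    count = np.einsum(
        "xy,xz,yz->",
        or_clause(False, False),
        or_clause(True, False),
        or_clause(False, True),
    )
    print(int(count))  # number of satisfying assignments (here 2)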
2

Immanants, Tensor Network States and the Geometric Complexity Theory Program

Ye, Ke 2012 August 1900 (has links)
We study the geometry of immanants, which are polynomials in n^2 variables defined by irreducible representations of the symmetric group S_n. We compute stabilizers of immanants in most cases by computing the Lie algebras of these stabilizers. We also study tensor network states, which are special tensors defined by contractions. We answer a question about tensor network states asked by Grasedyck. Both immanants and tensor network states are related to the Geometric Complexity Theory (GCT) program, in which one attempts to use representation theory and algebraic geometry to solve an algebraic analogue of the P versus NP problem. We introduce the GCT program in Section one, together with the background for the study of immanants and tensor network states, and we explain how both topics relate to the GCT program. Mathematical preliminaries for this dissertation are in Section two, including multilinear algebra, representation theory, and complex algebraic geometry. In Section three, we first describe immanants as trivial (SL(E) × SL(F)) ⋊ δ(S_n)-modules contained in the space S^n(E ⊗ F) of polynomials of degree n on the vector space E ⊗ F, where E and F are n-dimensional complex vector spaces equipped with fixed bases and the action of S_n on E (resp. F) is induced by permuting the basis elements of E (resp. F). Then we prove that the stabilizer of an immanant for any non-symmetric partition is T(GL(E) × GL(F)) ⋊ δ(S_n) ⋊ Z_2, where T(GL(E) × GL(F)) is the group of pairs of n × n diagonal matrices with the product of determinants equal to 1 and δ(S_n) is the diagonal subgroup of S_n × S_n. We also prove that the identity component of the stabilizer of any immanant is T(GL(E) × GL(F)). In Section four, we prove that the set of tensor network states associated to a triangle is not Zariski closed, and we give two reductions of tensor network states from complicated cases to simple cases. In Section five, we calculate the dimension of the tangent space and of the weight-zero subspace of the second osculating space of GL_{n^2}·[perm_n] at the point [perm_n], and we determine the S_n × S_n-module structure of this space. We also determine some lines on the hypersurface determined by the permanent polynomial. In Section six, we give a summary of this dissertation.
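For reference, the immanant attached to a partition λ of n, with irreducible S_n-character χ_λ, has the standard definition (recalled here, not quoted from the dissertation):

    \mathrm{Imm}_\lambda(A) \;=\; \sum_{\sigma \in S_n} \chi_\lambda(\sigma) \prod_{i=1}^{n} a_{i\,\sigma(i)},
    \qquad A = (a_{ij}) \in \mathbb{C}^{n \times n},

so the sign character recovers the determinant and the trivial character the permanent.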
3

The Design of an Oncology Knowledge Base from an Online Health Forum

Ramadan, Omar 05 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Knowledge base completion is an important task that allows scientists to reason over knowledge bases and discover new facts. In this thesis, a patient-centric knowledge base is designed and constructed using medical entities and relations extracted from the health forum r/cancer. The knowledge base stores information as binary relation triplets. It is enhanced with an is-a relation that can represent the hierarchical relationship between different medical entities. An enhanced Neural Tensor Network that utilizes the frequency of occurrence of relation triplets in the dataset is then developed to infer new facts from the enhanced knowledge base. The results show that when the enhanced inference model uses the enhanced knowledge base, a higher accuracy (73.2%) and recall@10 (35.4%) are obtained. In addition, this thesis describes a methodology for knowledge base and associated inference model design that can be applied to other chronic diseases.
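As background for the model mentioned above, here is a minimal sketch of the standard Neural Tensor Network scoring function that such work builds on; the frequency-based enhancement described in the abstract is not reproduced, and all dimensions and names are illustrative choices:

    # Sketch of a standard Neural Tensor Network scoring function for one
    # relation: score(e1, R, e2) = u^T tanh(e1^T W^[1:k] e2 + V [e1; e2] + b).
    import numpy as np

    d, k = 100, 4                      # entity embedding size, tensor slices
    rng = np.random.default_rng(0)
    W = rng.normal(size=(k, d, d))     # bilinear tensor for this relation
    V = rng.normal(size=(k, 2 * d))    # standard linear layer
    b = rng.normal(size=k)
    u = rng.normal(size=k)

    def ntn_score(e1, e2):
        """Plausibility score of the triplet (e1, relation, e2)."""
        bilinear = np.einsum("i,kij,j->k", e1, W, e2)   # e1^T W^[1:k] e2
        linear = V @ np.concatenate([e1, e2]) + b
        return u @ np.tanh(bilinear + linear)

    e_head, e_tail = rng.normal(size=d), rng.normal(size=d)
    print(ntn_score(e_head, e_tail))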
4

Geometry of Feasible Spaces of Tensors

Qi, Yang 16 December 2013 (has links)
Due to the exponential growth of the dimension of the space of tensors V_1 ⊗ ··· ⊗ V_n, any naive method of representing these tensors is intractable on a computer. In practice, we consider feasible subspaces (subvarieties) that are defined to reduce the storage cost and the computational complexity. In this thesis, we study two such types of subvarieties: the third secant variety of the product of n projective spaces, and tensor network states. For the third secant variety of the product of n projective spaces, we determine set-theoretic defining equations and give an upper bound on the degrees of these equations. For tensor network states, we answer a question of L. Grasedyck that arose in quantum information theory, showing that the limit of tensors in a space of tensor network states need not be a tensor network state. We also give geometric descriptions of spaces of tensor network states corresponding to trees and loops.
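As an illustration of the objects involved (not taken from the thesis), the tensor network state attached to a triangle graph is obtained by contracting one order-3 tensor per vertex along the shared edge indices; bond and physical dimensions below are arbitrary choices:

    # Illustration: a tensor network state on the triangle graph, formed by
    # contracting three order-3 tensors along the edge (bond) indices a, b, c.
    import numpy as np

    bond, phys = 2, 3                        # bond dimension, physical dimension
    rng = np.random.default_rng(1)
    A = rng.normal(size=(phys, bond, bond))  # tensor at vertex 1, indices (i, a, b)
    B = rng.normal(size=(phys, bond, bond))  # tensor at vertex 2, indices (j, b, c)
    C = rng.normal(size=(phys, bond, bond))  # tensor at vertex 3, indices (k, c, a)

    # T[i, j, k] = sum_{a,b,c} A[i,a,b] * B[j,b,c] * C[k,c,a]
    T = np.einsum("iab,jbc,kca->ijk", A, B, C)
    print(T.shape)  # (3, 3, 3)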
5

Tensor network and neural network methods in physical systems

Teng, Peiyuan 07 November 2018 (has links)
No description available.
6

Tensor network states simulations of exciton-phonon quantum dynamics for applications in artificial light-harvesting

Schroeder, Florian Alexander Yinkan Nepomuk January 2018 (has links)
Light-harvesting in nature is known to work differently from conventional man-made solar cells. Recent studies found electronic excitations delocalised over several chromophores, together with a soft, vibrating structural environment, to be key mechanisms that might protect and direct energy transfer, yielding increased harvesting efficiencies even under adverse conditions. Unfortunately, testing realistic models of noise-assisted transport at the quantum level is challenging due to the intractable size of the environmental wave function. I developed a powerful tree tensor network states (TTNS) method that finds an optimally compressed explicit representation of the combined electronic and vibrational quantum state. With TTNS it is possible to simulate exciton-phonon quantum dynamics from small molecules to larger complexes, modelled as an open quantum system with multiple bosonic environments. After benchmarking the method on the minimal spin-boson model by reproducing ground-state properties and dynamics reported using other methods, the vibrational quantum state is harnessed to investigate environmental dynamics and its correlation with the spin system. To enable simulations of realistic non-Born-Oppenheimer molecular quantum dynamics, a clustering algorithm and novel entanglement renormalisation tensors are employed to interface TTNS with ab initio density functional theory (DFT). A model of a pentacene dimer generated in this way, containing 252 vibrational normal modes, was simulated with TTNS, reproducing exciton dynamics in agreement with experimental results. Based on the environmental state, the potential energy surfaces underlying the observed singlet-fission dynamics were calculated, yielding unprecedented insight into the super-exchange-mediated avoided-crossing mechanism that produces ultrafast and high-yield singlet fission. This combination of DFT and TTNS is a step towards large-scale materials exploration that can accurately predict excited-state properties and dynamics. Furthermore, application to biomolecular systems, such as photosynthetic complexes, may give valuable insights into novel environmental engineering principles for the design of artificial light-harvesting systems.
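For reference, the minimal spin-boson model used for benchmarking has, in one common convention (recalled here, not quoted from the thesis), the Hamiltonian

    H \;=\; \frac{\epsilon}{2}\,\sigma_z + \frac{\Delta}{2}\,\sigma_x
          + \sum_k \omega_k\, a_k^{\dagger} a_k
          + \sigma_z \sum_k g_k \bigl(a_k + a_k^{\dagger}\bigr),

describing a two-level system coupled linearly to a bath of bosonic modes.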
7

Numerical methods in Tensor Networks

Handschuh, Stefan 28 January 2015 (has links) (PDF)
In many applications that deal with high-dimensional data, it is important not to store the high-dimensional object itself, but a representation of it in a data-sparse way, with the aim of reducing the storage and computational complexity. There is a general scheme for representing tensors with the help of sums of elementary tensors, where the summation structure is defined by a graph/network. This scheme makes it possible to generalize commonly used approaches for representing large amounts of numerical data (which can be interpreted as a high-dimensional object) using sums of elementary tensors. The classification not only distinguishes between elementary and non-elementary tensors, but also describes the number of terms needed to represent an object of the tensor space. This classification is referred to as a tensor network (format). This work uses the tensor-network-based approach and describes non-linear block Gauss-Seidel methods (ALS and DMRG) in the context of the general tensor network framework. Another contribution of the thesis is the general conversion between different tensor formats. We are able to efficiently change the underlying graph topology of a given tensor representation while exploiting the similarities (if present) between the original and the desired structure. This is an important feature in cases where only minor structural changes are required. In all approximation cases involving iterative methods, it is crucial to find and use a proper initial guess. For linear iteration schemes, a good initial guess helps to decrease the number of iteration steps needed to reach a certain accuracy, but it does not change the approximation result. For non-linear iteration schemes, the approximation result may depend on the initial guess. This work introduces a method to successively create an initial guess that improves some approximation results. The algorithm is based on successive rank-1 increments for the r-term format. There are still open questions about how to find the optimal tensor format for a given general problem (e.g. storage, operations, etc.). For instance, when a physical background is given, it might be efficient to use this knowledge to create a good network structure. There is, however, no guarantee that a better (with respect to the problem) representation structure does not exist.
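As a concrete illustration of the ALS idea mentioned above (a sketch under simplifying assumptions, not the block Gauss-Seidel framework of the thesis), alternating least squares for a rank-1 approximation of a 3-way tensor fixes two factors and solves for the third in closed form:

    # Sketch: ALS for a rank-1 approximation T ~ x (x) y (x) z of a 3-way tensor.
    # Each step fixes two factors and updates the third by least squares.
    import numpy as np

    def als_rank1(T, n_iter=50):
        n1, n2, n3 = T.shape
        rng = np.random.default_rng(0)
        x, y, z = rng.normal(size=n1), rng.normal(size=n2), rng.normal(size=n3)
        for _ in range(n_iter):
            x = np.einsum("ijk,j,k->i", T, y, z) / (y @ y * (z @ z))
            y = np.einsum("ijk,i,k->j", T, x, z) / (x @ x * (z @ z))
            z = np.einsum("ijk,i,j->k", T, x, y) / (x @ x * (y @ y))
        return x, y, z

    # A rank-1 test tensor: the residual should be numerically zero.
    T = np.einsum("i,j,k->ijk", [1.0, 2.0], [0.5, -1.0, 2.0], [3.0, 1.0])
    x, y, z = als_rank1(T)
    print(np.linalg.norm(T - np.einsum("i,j,k->ijk", x, y, z)))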
8

Modelling of Ultracold Gases in Multidimensional Optical Lattices

Urbanek, Miroslav January 2017 (has links)
Title: Modelling of Ultracold Gases in Multidimensional Optical Lattices
Author: Miroslav Urbanek
Department: Department of Chemical Physics and Optics
Supervisor: doc. Ing. Pavel Soldán, Dr.
Abstract: Optical lattices are experimental devices that use laser light to confine ultracold neutral atoms to periodic spatial structures. A system of bosonic atoms in an optical lattice can be described by the Bose-Hubbard model. Although powerful analytic and numerical methods exist to study this model in one dimension, their extensions to multiple dimensions have not yet been as successful. I present an original numerical method based on tree tensor networks to simulate time evolution in multidimensional lattice systems, with a focus on the two-dimensional Bose-Hubbard model. The method is used to investigate phenomena accessible in current experiments. In particular, I have studied phase collapse and revivals, boson expansion, and many-body localization in two-dimensional optical lattices. The outcome of this work is TEBDOL, a program for modelling one-dimensional and two-dimensional lattice systems.
Keywords: Bose-Hubbard model, multidimensional system, optical lattice, tensor network
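For reference, the Bose-Hubbard Hamiltonian in one common convention (recalled here, not quoted from the thesis) reads

    H \;=\; -J \sum_{\langle i,j \rangle} \bigl(b_i^{\dagger} b_j + b_j^{\dagger} b_i\bigr)
            \;+\; \frac{U}{2} \sum_i n_i\,(n_i - 1)
            \;-\; \mu \sum_i n_i,

with hopping amplitude J between neighbouring lattice sites, on-site interaction U, and chemical potential \mu.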
9

Variational Quantum Simulations of Lattice Gauge Theories

Stornati, Paolo 17 May 2022 (has links)
Simulations of lattice gauge theories play a fundamental role in first-principles calculations in the context of high-energy physics. This thesis aims to improve state-of-the-art simulation methods for first-principles calculations and to apply those methods to relevant physical models. We address this problem using three different approaches: machine learning, quantum computing, and tensor networks. In the context of machine learning, we have developed a method to estimate thermodynamic observables in lattice field theories. More precisely, we use deep generative models to estimate the absolute value of the free energy. We have demonstrated the applicability of our method by studying a toy model. Our approach produces more precise measurements than the standard Markov chain Monte Carlo method when crossing a phase transition point. In the context of quantum computing, our goal is to improve the current algorithms for quantum simulations. In this thesis, we have addressed two issues on modern quantum computers: quantum noise mitigation and the design of good parametric quantum circuits. We have developed a mitigation routine for read-out bit-flip errors that can drastically improve quantum simulations. We have also developed a dimensional expressiveness analysis that can identify superfluous parameters in parametric quantum circuits. In addition, we show how to implement the expressiveness analysis efficiently on quantum hardware.
In the context of tensor networks, we have studied a U(1) quantum link model in 2+1 dimensions in a ladder geometry with DMRG. Our goal is to analyze the properties of the ground state of the model at finite chemical potential. We have observed different winding-number sectors when a chemical potential is introduced into the system.
10

Adaptive learning of tensor network structures

Hashemizadehaghda, Seyed Meraj 10 1900 (has links)
Tensor networks (TN) offer a powerful framework to efficiently represent very high-dimensional objects. TN have recently shown their potential for machine learning applications and offer a unifying view of common tensor decomposition models such as Tucker, tensor train (TT) and tensor ring (TR). However, identifying the best tensor network structure from data for a given task is challenging. In this thesis, we leverage the TN formalism to develop a generic and efficient adaptive algorithm to jointly learn the structure and the parameters of a TN from data. Our method is based on a simple greedy approach, starting from a rank-one tensor and successively identifying the most promising tensor network edges for small rank increments. Our algorithm can adaptively identify TN structures with a small number of parameters that effectively optimize any differentiable objective function. Experiments on tensor decomposition, tensor completion and model compression tasks demonstrate the effectiveness of the proposed algorithm. In particular, our method outperforms the state-of-the-art evolutionary topology search introduced in [26] for tensor decomposition of images (while being orders of magnitude faster) and finds efficient structures to compress neural networks, outperforming popular TT-based approaches [30].
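A minimal sketch of the greedy structure search described above, under simplifying assumptions: loss(ranks) stands for a hypothetical routine that optimizes the TN cores at a fixed edge-rank assignment and returns the objective value, and the thesis's actual increment and stopping rules may differ.

    # Sketch: greedy search over tensor network edge ranks.
    from itertools import combinations

    def greedy_tn_search(n_cores, loss, max_steps=20, increment=1):
        # Start from a rank-one tensor: every edge rank equals 1.
        edges = list(combinations(range(n_cores), 2))
        ranks = {e: 1 for e in edges}
        best = loss(ranks)
        for _ in range(max_steps):
            trials = []
            for e in edges:                  # try a small rank increment on each edge
                candidate = dict(ranks)
                candidate[e] += increment
                trials.append((loss(candidate), e))
            value, edge = min(trials)
            if value >= best:                # no edge improves the objective: stop
                break
            ranks[edge] += increment
            best = value
        return ranks, best

    # Toy usage with a made-up objective that favours growing one edge:
    toy_loss = lambda r: sum(r.values()) - 3 * r[(0, 1)]
    print(greedy_tn_search(4, toy_loss, max_steps=5))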
