1

Speeding up PARAFAC : Approximation of tensor rank using the Tucker core

Arnroth, Lukas January 2018 (has links)
In this paper, the approach of utilizing the core tensor from the Tucker decomposition, in place of the uncompressed tensor, for finding a valid tensor rank for the PARAFAC decomposition is considered. Validity of the proposed method is investigated in terms of error and time consumption. As the solutions of the PARAFAC decomposition are unique, stability of the solutions is investigated through split-half analysis. Simulated and real data are considered. Although no general validity of the method could be observed, the results for some datasets look promising with 10% compression in all modes. It is also shown that increased compression does not necessarily imply less time consumption.
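A minimal sketch of the idea described in this abstract, assuming a recent version of the tensorly library; the rank-selection loop and tolerance are illustrative choices, not the thesis procedure: compress the tensor with a Tucker decomposition, then fit PARAFAC candidates on the small core rather than on the full tensor.

```python
# Illustrative sketch only (not the thesis code): estimate a PARAFAC rank on
# the Tucker core instead of the full tensor, assuming the tensorly library.
import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker, parafac

rng = np.random.default_rng(0)
# Synthetic rank-3 tensor plus a little noise, as a stand-in for real data.
A, B, C = (rng.standard_normal((n, 3)) for n in (60, 50, 40))
X = tl.tensor(tl.cp_to_tensor((np.ones(3), [A, B, C]))
              + 0.01 * rng.standard_normal((60, 50, 40)))

# Compress each mode to roughly 10% of its size (the level highlighted above).
core_shape = [max(2, int(0.1 * s)) for s in X.shape]
core, factors = tucker(X, rank=core_shape)

def pick_rank(T, max_rank=8, tol=0.05):
    """Return the smallest CP rank whose relative fit error drops below tol."""
    norm_T = tl.norm(T)
    for r in range(1, max_rank + 1):
        cp = parafac(T, rank=r, n_iter_max=200, tol=1e-8)
        err = tl.norm(T - tl.cp_to_tensor(cp)) / norm_T
        if err < tol:
            return r, err
    return max_rank, err

# Fitting on the small core is far cheaper than fitting on X directly; the
# thesis investigates when a rank found this way remains valid for X.
rank, err = pick_rank(core)
print(f"suggested PARAFAC rank: {rank} (relative error on core: {err:.3f})")
```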
2

ESTIMATING THE RESPIRATORY LUNG MOTION MODEL USING TENSOR DECOMPOSITION ON DISPLACEMENT VECTOR FIELD

Kang, Kingston 01 January 2018 (has links)
Modern big data often emerge as tensors. Standard statistical methods are inadequate to deal with datasets of large volume, high dimensionality, and complex structure. Therefore, it is important to develop algorithms such as low-rank tensor decomposition for data compression, dimensionality reduction, and approximation. With the advancement in technology, high-dimensional images are becoming ubiquitous in the medical field. In lung radiation therapy, the respiratory motion of the lung introduces variabilities during treatment as the tumor inside the lung is moving, which brings challenges to the precise delivery of radiation to the tumor. Several approaches to quantifying this uncertainty propose using a model to formulate the motion through a mathematical function over time. [Li et al., 2011] uses principal component analysis (PCA) to propose one such model, treating each image as a long vector. However, the images come as multidimensional arrays, and vectorization breaks the spatial structure. Driven by the need to develop low-rank tensor decompositions, and provided with the 4DCT and Displacement Vector Field (DVF) data, we introduce two tensor decompositions, Population Value Decomposition (PVD) and Population Tucker Decomposition (PTD), to estimate the respiratory lung motion with high levels of accuracy and data compression. The first algorithm is a generalization of PVD [Crainiceanu et al., 2011] to higher-order tensors. The second algorithm generalizes the concept of PVD using the Tucker decomposition. Both algorithms are tested on clinical and phantom DVFs. New metrics for measuring the model performance are developed in our research. Results of the two new algorithms are compared to the results of the PCA algorithm.
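A rough sketch of the kind of population-level compression discussed here, again assuming the tensorly library; the tensor shape, ranks, and random stand-in data are hypothetical, and this is not the thesis's PVD or PTD algorithm:

```python
# Illustrative sketch only: compress a population of displacement vector fields
# (DVFs) with a shared low-rank Tucker basis. NOT the PVD/PTD algorithms of the
# thesis; it only shows the compression/accuracy trade-off being measured.
import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker

rng = np.random.default_rng(1)
# Toy population: 8 breathing phases, each a 32x32x16 grid with 3 displacement
# components -> a 5th-order tensor (phase, x, y, z, component). Real DVFs are
# smooth and therefore far more compressible than this random stand-in.
dvf = tl.tensor(rng.standard_normal((8, 32, 32, 16, 3)))

ranks = [8, 8, 8, 4, 3]                  # compress the spatial modes only
core, factors = tucker(dvf, rank=ranks)

approx = tl.tucker_to_tensor((core, factors))
rel_err = tl.norm(dvf - approx) / tl.norm(dvf)
stored = core.size + sum(f.size for f in factors)   # values kept after compression
print(f"relative error {rel_err:.3f}, compression {dvf.size / stored:.1f}x")
```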
3

Improvement of the cross-section model in the COCAGNE core code of EDF's calculation chain

Luu, Thi Hieu 17 February 2017 (has links)
In order to optimize the operation of its nuclear power plants, EDF's R&D department is currently developing a new calculation chain to simulate the core of its nuclear reactors with state-of-the-art tools. These calculations require a large amount of physical data, especially cross-sections. In a full-core simulation, the number of cross-section values is of the order of several billion. These cross-sections can be represented as multivariate functions depending on several physical parameters. Since determining a cross-section is a long and complex calculation, we can pre-compute the cross-sections at certain parameter values (offline calculations) and then evaluate them at all desired points by interpolation (online calculations). This process requires a cross-section reconstruction model between the two steps. In order to perform a more faithful core simulation in the new EDF chain, the cross-sections need to be better represented by taking new parameters into account. Moreover, the new chain must be able to model the reactor in a wider range of situations than the current one. Multilinear interpolation is the model currently used to reconstruct the cross-sections and to meet these goals. However, with this model, the number of discretization points increases exponentially with the number of parameters, or considerably when points are added to one of the axes. Consequently, the number and duration of the offline calculations, as well as the size of the data storage, become problematic. The goal of this thesis is therefore to find a new model that meets the following requirements: (i) (offline) reduce the number of pre-calculations, (ii) (offline) reduce the amount of data stored for the reconstruction, and (iii) (online) maintain (or improve) the accuracy obtained by multilinear interpolation. From a mathematical point of view, this problem consists of approximating multivariate functions from their pre-computed values. We based our work on the Tucker format, a low-rank tensor approximation, in order to propose a new model called the Tucker decomposition. With this model, a multivariate function is approximated by a linear combination of tensor products of univariate functions. These univariate functions are constructed by a technique called higher-order singular value decomposition (a "matricization" combined with an extension of the Karhunen-Loève decomposition). A greedy algorithm is used to select the points involved in solving for the coefficients of the Tucker combination. The results obtained show that our model satisfies the required criteria on data reduction as well as on accuracy. With this model, we can also eliminate coefficients of the Tucker decomposition both a priori and a posteriori, which allows us to further reduce the data storage in the offline steps without significantly reducing the accuracy.
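As a rough illustration of the offline/online split described above (a sketch under simplifying assumptions, not the model implemented in the COCAGNE code), the following snippet pre-computes a toy function on a grid, builds a Tucker model with tensorly, and evaluates it at an arbitrary point by interpolating the univariate factors and contracting them with the core:

```python
# Illustrative sketch only: a multivariate function stored in Tucker format.
# Offline: sample on a coarse grid and decompose. Online: interpolate the
# univariate factors at the query point and contract with the core.
import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker

def f(x, y, z):                      # stand-in for an expensive cross-section
    return np.exp(-x) * np.cos(2 * y) + 0.1 * x * z

# --- offline: pre-compute on a grid and build the Tucker model ---------------
grids = [np.linspace(0.0, 1.0, 20) for _ in range(3)]
X, Y, Z = np.meshgrid(*grids, indexing="ij")
T = tl.tensor(f(X, Y, Z))
core, factors = tucker(T, rank=[5, 5, 5])   # factor columns = discrete
                                            # univariate basis functions

# --- online: evaluate at an arbitrary point ----------------------------------
def evaluate(point):
    # Interpolate each univariate basis function at the query coordinate,
    # then contract the core with the three resulting coefficient vectors.
    vecs = [
        np.array([np.interp(p, g, fac[:, j]) for j in range(fac.shape[1])])
        for p, g, fac in zip(point, grids, factors)
    ]
    out = core
    for v in vecs:                    # contract one mode at a time
        out = np.tensordot(out, v, axes=([0], [0]))
    return float(out)

pt = (0.37, 0.61, 0.85)
print(evaluate(pt), f(*pt))           # Tucker-model value vs. exact value
```

Only the small core and the univariate factors need to be stored, which is the data-reduction argument made in the abstract.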
4

Modern Electronic Structure Theory using Tensor Product States

Abraham, Vibin 11 January 2022 (has links)
Strongly correlated systems have long been a major challenge in the field of theoretical chemistry. For such systems, the relevant portion of the Hilbert space scales exponentially, preventing efficient simulation of large systems. However, in many cases, the Hilbert space can be partitioned into clusters on the basis of strong and weak interactions. In this work, we mainly focus on an approach where we partition the system into smaller orbital clusters in which we can define many-particle cluster states, and use traditional many-body methods to capture the remaining inter-cluster correlations. This dissertation can be divided into two main parts. In the first part, the clustered ansatz, termed tensor product states (TPS), is used to study large strongly correlated systems. In the second part, we study a particular type of strongly correlated system: the correlated triplet pair states that arise in singlet fission. The many-body expansion (MBE) is an efficient tool that has a long history of use for calculating interaction energies, binding energies, lattice energies, and so on. We extend the incremental full configuration interaction originally proposed for Slater determinants to a tensor product state (TPS) based wavefunction. By partitioning the active space into smaller orbital clusters, our approach starts from a cluster mean-field reference TPS configuration and includes the correlation contribution of the excited TPSs using a many-body expansion. This method, named cluster many-body expansion (cMBE), improves the convergence of the MBE at lower orders compared to directly doing a block-based MBE from an RHF reference. The performance of the cMBE method is also tested on a graphene nano-sheet with a very large active space of 114 electrons in 114 orbitals, which would require on the order of 10⁶⁶ determinants for the exact FCI solution. Selected CI (SCI) using determinants becomes intractable for large systems with strong correlation. We introduce a method for SCI algorithms using tensor product states which exploits local molecular structure to significantly reduce the number of SCI variables. We demonstrate the potential of this method, called tensor product selected configuration interaction (TPSCI), using a few model Hamiltonians and molecular examples. These numerical results show that TPSCI can be used to significantly reduce the number of SCI variables in the variational space, thus paving a path for extending these deterministic and variational SCI approaches to a wider range of physical systems. The extension of the TPSCI algorithm to excited states is also investigated. TPSCI with perturbative corrections provides accurate excitation energies for low-lying triplet states with respect to extrapolated results. In the case of traditional SCI methods, accurate excitation energies are obtained only after extrapolating calculations with large variational dimensions compared to TPSCI. We provide an intuitive connection between the lower triplet energy manifolds and Hückel molecular orbital theory, yielding a many-body version of Hückel theory for excited triplet states. The n-body Tucker ansatz (a truncated TPS wavefunction) developed in our group provides a good approximation to the low-lying states of a clusterable spin system. In this approach, a Tucker decomposition is used to obtain local cluster states which can be truncated to prune the full Hilbert space of the system.
As a truncated variational approach, the self-consistently optimized n-body Tucker method has been observed not to be size-extensive, a property important for many-body methods. We explore the use of perturbation theory and linearized coupled-cluster methods to obtain a robust yet efficient approximation. Perturbative corrections to the n-body Tucker method have been implemented for the Heisenberg Hamiltonian, and numerical data for various lattices and molecular systems are presented to show the applicability of the method. In the second part of this dissertation, we focus on studying a particular type of strongly correlated state that occurs in singlet fission materials. The correlated triplet pair state ¹(TT) is a key intermediate in the singlet fission process, and understanding the mechanism by which it separates into two independent triplet states is critical for leveraging singlet fission to improve solar cell efficiency. This separation mechanism is dominated by two key interactions: (i) the exchange interaction (K) between the triplets, which leads to the spin splitting of the biexciton state into ¹(TT), ³(TT), and ⁵(TT) states, and (ii) the triplet-triplet energy transfer integral (t), which enables the formation of the spatially separated (but still spin-entangled) state ¹(T...T). We develop a simple ab initio technique to compute both the triplet-triplet exchange (K) and the triplet-triplet energy transfer coupling (t). Our key findings reveal new conditions for successful correlated triplet pair state dissociation. The biexciton exchange interaction needs to be ferromagnetic or negligible compared to the triplet energy transfer for favorable dissociation. We also explore the effect of chromophore packing to reveal geometries where these conditions are achieved for tetracene. We also provide a simple connectivity rule to predict whether the through-bond coupling will be stabilizing or destabilizing for the (TT) state in covalently linked singlet fission chromophores. By drawing an analogy between the chemical system and a simple spin lattice, one is able to determine the ordering of the multi-exciton spin states via a generalized usage of Ovchinnikov's rule. In the case of meta connectivity, we predict ⁵(TT) to be formed, and this is later confirmed by experimental techniques like time-resolved electron spin resonance (TR-ESR). / Doctor of Philosophy / The study of the correlated motion of electrons in molecules and materials allows scientists to gain useful insights into many physical processes like photosynthesis, enzyme catalysis, superconductivity, chemical reactions, and so on. Theoretical quantum chemistry tries to study the electronic properties of chemical species. The exact solution of the electron correlation problem is exponentially complex and can only be computed for small systems. Therefore, approximations are introduced for practical calculations that provide good results for ground-state properties like the energy, dipole moment, etc. Sometimes, more accurate calculations are required to study the properties of a system, because the system may not adhere to the assumptions that are made in the methods used. One such case arises in the study of strongly correlated molecules. In this dissertation, we present methods which can handle strongly correlated cases. We partition the system into smaller parts, then solve the problem in the basis of these smaller parts.
We refer to these block-based wavefunctions as tensor product states; they provide accurate results while avoiding the exponential scaling of the full solution. We present accurate energies for a wide variety of challenging cases, including bond breaking, excited states, and π-conjugated molecules. Additionally, we investigate molecular systems that can be used to increase the efficiency of solar cells. We predict improved solar efficiency for a chromophore dimer, a result which is later experimentally verified.
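The cluster many-body expansion (cMBE) mentioned in the abstract above builds the total energy from increments over progressively larger groups of clusters. A schematic second-order sketch is shown below; the `solve` function and coupling values are hypothetical stand-ins, not the dissertation's Hamiltonians or code.

```python
# Schematic many-body expansion (MBE) over clusters, in the spirit of cMBE:
#   E ~= sum_i E_i + sum_{i<j} (E_ij - E_i - E_j) + ...
# The toy "solve" below mimics a solvable cluster Hamiltonian; it is
# illustrative only.
from itertools import combinations
import numpy as np

rng = np.random.default_rng(2)
n_clusters = 5
pair_coupling = rng.normal(scale=0.1, size=(n_clusters, n_clusters))
pair_coupling = (pair_coupling + pair_coupling.T) / 2   # symmetric couplings

def solve(subset):
    """Toy 'energy' of a set of clusters: additive one-body terms plus weak
    pairwise couplings."""
    e = sum(-1.0 - 0.05 * i for i in subset)
    e += sum(pair_coupling[i, j] for i, j in combinations(sorted(subset), 2))
    return e

# One-body increments, then two-body corrections.
E1 = {i: solve({i}) for i in range(n_clusters)}
E_mbe = sum(E1.values())
for i, j in combinations(range(n_clusters), 2):
    E_mbe += solve({i, j}) - E1[i] - E1[j]

print("MBE(2) energy:", E_mbe)
print("exact (toy)  :", solve(set(range(n_clusters))))
```

Because the toy energy contains only one- and two-cluster terms, the second-order expansion reproduces it exactly; for real cluster states the thesis truncates the expansion and studies how quickly it converges.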
