Liquid crystal NMR: director dynamics and small solute molecules
Kantola, A. M. (Anu M.), 03 December 2009
Abstract
The subjects of this thesis are the dynamics of liquid crystals in external electric and magnetic fields, as well as the magnetic properties of small molecules, both studied by liquid crystal nuclear magnetic resonance (LC NMR) spectroscopy. The director dynamics of the liquid crystal 5CB in external magnetic and electric fields were studied by deuterium NMR and spectral simulations. A new theory was developed to explain the peculiar oscillations observed in the experimental spectra collected during fast director rotation. A spectral simulation program based on this new theory was developed, and the outcome of the simulations was compared with the experimental results to verify the tenability of the theory.
In the studies on the properties of small solute molecules, LC NMR was utilised to obtain information about anisotropic nuclear magnetic interaction tensors. The nuclear magnetic shielding tensor was studied in methyl halides, the spin-spin coupling tensor in methyl mercury halides and the quadrupolar coupling tensor in deuterated benzenes. The effects of small-amplitude molecular motions and solvent interactions on the obtained parameters were considered in each case. Finally, the experimental results were compared to the corresponding computational NMR parameters calculated in parallel with the experimental work.
Limited data problems in X-ray and polarized light tomography
Szotten, David, January 2011
We present new reconstruction results and methods for limited data problems in photoelastic tomography. We begin with a survey of the current state of X-ray tomography. After discussing the Radon transform and its inversion, we consider some stability results for reconstruction in Sobolev spaces, and describe certain limited data problems and ways to tackle them, in particular the Two Step Hilbert reconstruction method.

We then move on to photoelastic tomography, where we make use of techniques from scalar tomography to develop new methods for photoelastic tomographic reconstruction. We present the main mathematical model used in photoelasticity, the Truncated Transverse Ray Transform (TTRT). After some initial numerical studies, we extend a recently presented reconstruction algorithm for the TTRT from the Schwartz class to certain Sobolev spaces, and give some stability results for inversion in these spaces.

Moving on from general reconstruction to focus on the inversion of some special cases of tensors, we consider solenoidal and potential tensor fields. We discuss existing reconstruction methods, present several novel reconstructions, and discuss their advantages over using more general machinery. We also extend our new algorithms, as well as existing ones, to certain cases of data truncation. Finally, we present numerical studies of the general reconstruction method: we give the first published results of TTRT reconstruction and describe the implementation in some detail before presenting our results.
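A concrete handle on the machinery behind Radon-transform inversion (a generic illustration, not code from the thesis) is the projection-slice theorem: the 1-D Fourier transform of a parallel projection equals the corresponding central slice of the object's 2-D Fourier transform. A minimal NumPy check at the 0° projection:

```python
import numpy as np

# Toy 2-D "object" on a 64x64 grid (stands in for a phantom image).
rng = np.random.default_rng(0)
f = rng.random((64, 64))

# Parallel projection at angle 0: integrate (sum) along axis 1.
projection = f.sum(axis=1)

# Projection-slice theorem: the FFT of the projection equals the
# central slice of the 2-D FFT along the matching frequency axis.
slice_from_2d = np.fft.fft2(f)[:, 0]
assert np.allclose(np.fft.fft(projection), slice_from_2d)
print("projection-slice theorem verified at angle 0")
```

The same identity at all angles is what lets filtered back-projection reconstruct an object from its sinogram.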
Metrical aspects of the complexification of tensor products and tensor norms
Van Zyl, Augustinus Johannes, 14 July 2009
We study the relationship between real and complex tensor norms. The theory of tensor norms on tensor products of Banach spaces was developed by A. Grothendieck in his Résumé de la théorie métrique des produits tensoriels topologiques [3]. In this monograph he introduced a variety of ways to assign norms to tensor products of Banach spaces. As is usual in functional analysis, the real-scalar theory is very closely related to the complex-scalar theory. For example, there are, up to topological equivalence, fourteen "natural" tensor norms in each of the real-scalar and complex-scalar theories. This correspondence was remarked upon in the Résumé, but without proof of any formal relationships, although it hinted at a certain injective relationship between real and complex (topological) equivalence classes of tensor norms. We make explicit connections between real and complex tensor norms in two different ways, which divides the dissertation into two parts. In the first part, we consider the complexifications of real Banach spaces and find tensor norms and complexification procedures such that the complexification of the tensor product, which is itself a Banach space, is isometrically isomorphic to the tensor product of the complexifications. We have results for the injective tensor norm as well as the projective tensor norm. In the second part we look for isomorphic rather than isometric results. We show that one can define the complexification of a real tensor norm in a natural way. The main result is that the complexification of real topological equivalence classes induced by this definition leads to an injective correspondence between the real and the complex tensor norm equivalence classes.

Thesis (PhD), University of Pretoria, 2009. Mathematics and Applied Mathematics.
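For reference, the two extreme norms in this theory have standard textbook definitions (these are the usual formulations, not notation specific to this dissertation): for $u \in X \otimes Y$,

```latex
\pi(u) \;=\; \inf\Big\{\sum_{i=1}^{n}\|x_i\|\,\|y_i\| \;:\; u=\sum_{i=1}^{n} x_i\otimes y_i\Big\}
\qquad\text{(projective norm)},
\qquad
\varepsilon(u) \;=\; \sup\Big\{\Big|\sum_{i=1}^{n} x^{*}(x_i)\,y^{*}(y_i)\Big| \;:\; x^{*}\in B_{X^{*}},\; y^{*}\in B_{Y^{*}}\Big\}
\qquad\text{(injective norm)},
```

and every reasonable crossnorm $\alpha$ satisfies $\varepsilon \le \alpha \le \pi$; Grothendieck's fourteen natural norms all lie between these two.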
Nonuniversal entanglement level statistics in projection-driven quantum circuits and glassy dynamics in classical computation circuits
Zhang, Lei, 12 November 2021
In this thesis, I describe research results on three topics: (i) a phase transition in the area-law regime of quantum circuits driven by projective measurements; (ii) ultra-slow dynamics in two-dimensional spin circuits; and (iii) tensor network methods applied to Boolean satisfiability problems.
(i) Nonuniversal entanglement level statistics in projection-driven quantum circuits. Non-thermalized closed quantum many-body systems have drawn considerable attention due to their relevance to experimentally controllable quantum systems. In the first part of the thesis, we study the level-spacing statistics in the entanglement spectrum of output states of random universal quantum circuits in which, at each time step, qubits are subject to a finite probability of projection onto states of the computational basis. We encounter two phase transitions with increasing projection rate: the first is the volume-to-area-law transition observed in quantum circuits with projective measurements; the second separates the pure Poisson level statistics phase at large projective measurement rates from a regime of residual level repulsion in the entanglement spectrum within the area-law phase, characterized by non-universal level-spacing statistics that interpolate between the Wigner-Dyson and Poisson distributions. The same behavior is observed both in circuits of random two-qubit unitaries and in circuits of universal gates, including the set implemented by Google in its Sycamore circuits.
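The Poisson versus Wigner-Dyson distinction can be illustrated with the consecutive-gap ratio r = min(s_n, s_{n+1}) / max(s_n, s_{n+1}), whose mean is about 0.386 for Poisson levels and about 0.53 for the GOE (a generic diagnostic sketch, not the thesis's analysis pipeline):

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_gap_ratio(levels):
    """Mean consecutive-gap ratio <r> of a spectrum (needs no unfolding)."""
    s = np.diff(np.sort(levels))
    r = np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:])
    return r.mean()

# Poisson spectrum: independent levels -> exponential gaps, <r> = 2 ln 2 - 1.
p_r = mean_gap_ratio(np.cumsum(rng.exponential(size=200_000)))

# Wigner-Dyson (GOE): bulk eigenvalues of random real symmetric matrices.
ratios = []
for _ in range(200):
    a = rng.standard_normal((80, 80))
    ratios.append(mean_gap_ratio(np.linalg.eigvalsh(a + a.T)[20:-20]))
g_r = float(np.mean(ratios))

print(f"Poisson <r> ~ {p_r:.3f}, GOE <r> ~ {g_r:.3f}")  # ~0.386 vs ~0.53
```

Interpolating statistics of the kind described above would give a mean ratio between these two limits.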
(ii) Ultra-slow dynamics in a translationally invariant spin model for multiplication and factorization. Slow relaxation of glassy systems in the absence of disorder remains one of the most intriguing problems in condensed matter physics. In the second part of the thesis, we investigate slow relaxation in a classical model of short-range interacting Ising spins on a translationally invariant two-dimensional lattice that mimics a reversible circuit which, depending on the choice of boundary conditions, either multiplies or factorizes integers. We prove that, for open boundary conditions, the model exhibits no finite-temperature phase transition. Yet we find that it displays glassy dynamics with astronomically slow relaxation times, numerically consistent with a double exponential dependence on the inverse temperature. The slowness of the dynamics arises from errors that occur during thermal annealing, which cost little energy but flip an extensive number of spins. We argue that the energy barrier that needs to be overcome in order to heal such defects scales linearly with the correlation length, which diverges exponentially with inverse temperature, thus yielding the double exponential behavior of the relaxation time.
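The scaling argument in the last two sentences can be summarized in one line (with illustrative constants $a, b$, not notation from the thesis): an Arrhenius barrier $\Delta E \sim a\,\xi$ combined with a correlation length $\xi \sim e^{b/T}$ gives

```latex
\tau \;\sim\; e^{\Delta E/T} \;\sim\; \exp\!\Big(\frac{a}{T}\,e^{b/T}\Big),
```

i.e. a relaxation time that is doubly exponential in the inverse temperature $1/T$.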
(iii) Reversible circuit embedding on tensor networks for Boolean satisfiability. Finally, in the third part of the thesis, we present an embedding of Boolean satisfiability (SAT) problems on a two-dimensional tensor network. The embedding uses reversible circuits encoded into the tensor network, whose trace counts the number of solutions of the satisfiability problem. We specifically present the formulation of #2SAT, #3SAT, and #3XORSAT formulas as planar tensor networks. We use a compression-decimation algorithm that we introduced to propagate constraints in the network before coarse-graining the boundary tensors. Iterating these two steps gradually collapses the network while slowing down the growth of bond dimensions. For #3XORSAT, we show numerically that this procedure recognizes, at least partially, the simplicity of XOR constraints, for which it achieves subexponential time to solution. For a #P-complete subset of #2SAT, we find that our algorithm scales with size in the same way as state-of-the-art #SAT counters, albeit with a larger prefactor. We find that the compression step performs less efficiently for #3SAT than for #2SAT.
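The counting-by-contraction idea can be seen on a toy instance (an illustrative sketch, not the thesis's reversible-circuit embedding): encode each clause as a 0/1 tensor indexed by its variables; contracting the product over all indices sums over the 2^n assignments and yields the number of satisfying ones. For (x1 OR x2) AND (NOT x1 OR x3):

```python
import numpy as np

# Clause tensors: entry is 1 when the clause is satisfied, 0 otherwise.
C1 = np.array([[0, 1],
               [1, 1]])   # (x1 v x2), indices (x1, x2): fails only at 0,0
C2 = np.array([[1, 1],
               [0, 1]])   # (~x1 v x3), indices (x1, x3): fails only at 1,0

# Full contraction sums the product over all 2^3 assignments -> #SAT.
count = int(np.einsum('ab,ac->', C1, C2))
print(count)  # 4

# Brute-force check over all assignments.
brute = sum((x1 or x2) and ((not x1) or x3)
            for x1 in (0, 1) for x2 in (0, 1) for x3 in (0, 1))
assert count == brute
```

For planar networks of many such tensors, the cost of the contraction is governed by the bond dimensions, which is what the compression-decimation steps above keep under control.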
Towards Virtual Sensors Via Tensor Completion
Raeeji Yaneh Sari, Noorali, January 2021
Sensors are used in many industrial applications for equipment health monitoring and anomaly detection. However, operation and maintenance of these sensors can be costly, so companies are interested in reducing the number of required sensors as much as possible. The straightforward solution is to check the prediction power of the sensors and eliminate those with limited prediction capabilities. This is not an optimal solution, however, because if we discard the identified sensors, their historical data will no longer be utilized. Such historical data can typically help improve the signal power of the remaining sensors, so abolishing it does not seem to be the right solution. We therefore propose the first data-driven approach based on tensor completion for re-utilizing the data of removed sensors, in addition to the remaining sensors, to create virtual sensors. We applied the proposed method to the vibration sensors of high-speed separators operating with five sensors. The producer company was interested in reducing the number of sensors to two, but with the aid of tensor completion-based virtual sensors we show that we can safely keep only one physical sensor and use four virtual sensors that give almost equal detection power compared to keeping two physical sensors.
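A minimal sketch of the completion idea on a rank-1 sensors-by-time matrix (the thesis works with tensors and real vibration data; this simplified matrix version and all names in it are illustrative): a removed sensor's historical readings tie it to the shared low-rank structure, so its missing later readings can be imputed from the remaining sensors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 5 sensors x 200 time steps sharing one (rank-1) pattern.
u_true = rng.uniform(0.5, 1.5, size=5)
v_true = rng.uniform(0.5, 1.5, size=200)
X = np.outer(u_true, v_true)

# Sensor 4 is "removed" halfway: its later readings become missing,
# but its historical data (first 100 steps) remains available.
mask = np.ones(X.shape, dtype=bool)
mask[4, 100:] = False

# Rank-1 alternating least squares using only the observed entries.
u, v = np.ones(5), rng.uniform(0.5, 1.5, size=200)
for _ in range(200):
    for i in range(5):
        m = mask[i]
        u[i] = X[i, m] @ v[m] / (v[m] @ v[m])
    for j in range(200):
        m = mask[:, j]
        v[j] = X[m, j] @ u[m] / (u[m] @ u[m])

# The completed row acts as a "virtual sensor" for the removed one.
virtual = np.outer(u, v)[4, 100:]
err = float(np.max(np.abs(virtual - X[4, 100:])))
print(f"max virtual-sensor error: {err:.1e}")
```

Note that the historical block is essential: with an entirely unobserved row, a plain matrix factorization could not recover the removed sensor's scale at all.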
Numerical methods in Tensor Networks
Handschuh, Stefan, 14 January 2015
In many applications that deal with high dimensional data, it is important not to store the high dimensional object itself, but a data-sparse representation of it, in order to reduce the storage and computational complexity.
There is a general scheme for representing tensors as sums of elementary tensors, where the summation structure is defined by a graph or network. This scheme generalizes commonly used approaches for representing large amounts of numerical data (that can be interpreted as a high dimensional object) using sums of elementary tensors. The classification not only distinguishes between elementary and non-elementary tensors, but also describes the number of terms needed to represent an object of the tensor space. This classification is referred to as a tensor network (format).
This work uses the tensor network based approach and describes non-linear block Gauss-Seidel methods (ALS and DMRG) in the context of the general tensor network framework.
Another contribution of the thesis is the conversion between different tensor formats. We are able to efficiently change the underlying graph topology of a given tensor representation while exploiting the similarities (if present) between the original and the desired structure. This is an important feature when only minor structural changes are required.
In all approximation cases involving iterative methods, it is crucial to find and use a proper initial guess. For linear iteration schemes, a good initial guess helps to decrease the number of iteration steps needed to reach a certain accuracy, but it does not change the approximation result. For non-linear iteration schemes, the approximation result may depend on the initial guess. This work introduces a method to successively create an initial guess that improves some approximation results; the algorithm is based on successive rank-1 increments for the r-term format.
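A matrix analogue of such successive rank-1 increments (a generic sketch; the thesis applies the idea to the r-term tensor format) greedily fits one rank-1 term to the current residual by alternating least squares, adds it to the approximation, and repeats; each increment can only decrease the residual.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 30))

def best_rank1(R, sweeps=100):
    """ALS fit of a rank-1 term u v^T to R; for generic starting vectors
    this converges toward the dominant singular pair of R."""
    v = rng.standard_normal(R.shape[1])
    for _ in range(sweeps):
        v /= np.linalg.norm(v)
        u = R @ v / (v @ v)
        v = R.T @ u / (u @ u)
    u = R @ v / (v @ v)          # final u is optimal for the current v
    return np.outer(u, v)

# Build an r-term approximation by successive rank-1 increments.
approx = np.zeros_like(A)
errors = [np.linalg.norm(A)]
for _ in range(5):
    approx = approx + best_rank1(A - approx)
    errors.append(np.linalg.norm(A - approx))

# Residual norms decrease monotonically; for matrices this greedy
# deflation essentially reproduces the truncated-SVD error.
sv = np.linalg.svd(A, compute_uv=False)
print(errors[-1], float(np.sqrt(np.sum(sv[5:] ** 2))))
```

For genuine higher-order tensors the greedy procedure is no longer optimal (rank-1 deflation can fail to decrease tensor rank), which is why it serves as an initial guess rather than a final answer.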
There are still open questions about how to find the optimal tensor format for a given general problem (e.g. with respect to storage, operations, etc.). For instance, where a physical background is given, it might be efficient to use this knowledge to create a good network structure. There is, however, no guarantee that a better (with respect to the problem) representation structure does not exist.
CONNECTED MULTI-DOMAIN AUTONOMY AND ARTIFICIAL INTELLIGENCE: AUTONOMOUS LOCALIZATION, NETWORKING, AND DATA CONFORMITY EVALUATION
Unknown Date
The objective of this dissertation work is the development of a solid theoretical and algorithmic framework for three of the most important aspects of autonomous/artificial intelligence (AI) systems, namely data quality assurance, localization, and communications. In the era of AI and machine learning (ML), data reign supreme. During learning tasks, we need to ensure that the training data set is correct and complete. During operation, faulty data need to be discovered and dealt with to protect from (potentially catastrophic) system failures. With our research in data quality assurance, we develop new mathematical theory and algorithms for outlier-resistant decomposition of high-dimensional matrices (tensors) based on L1-norm principal-component analysis (PCA). L1-norm PCA has been proven to be resistant to irregular data points and will drive critical real-world AI learning and autonomous systems operations in the future. At the same time, one of the most important tasks of autonomous systems is self-localization. In GPS-deprived environments, localization becomes a fundamental technical problem. State-of-the-art solutions frequently utilize power-hungry or expensive architectures, making them difficult to deploy. In this dissertation work, we develop and implement a robust, variable-precision localization technique for autonomous systems based on direction-of-arrival (DoA) estimation theory, which is cost- and power-efficient. Finally, communication between autonomous systems is paramount for mission success in many applications. In the era of 5G and beyond, smart spectrum utilization is key. In this work, we develop physical (PHY) and medium-access-control (MAC) layer techniques that autonomously optimize spectrum usage and minimize intra- and inter-network interference.

Includes bibliography. Dissertation (Ph.D.), Florida Atlantic University, 2020. FAU Electronic Theses and Dissertations Collection.
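A minimal illustration of DoA estimation by a beamforming scan (the textbook delay-and-sum approach, not the dissertation's variable-precision technique; array size, angle, and noise level here are made up): on a half-wavelength uniform linear array, the source angle is estimated as the peak of the steered output power.

```python
import numpy as np

rng = np.random.default_rng(0)
M, snapshots, true_deg = 8, 200, 20.0   # 8-element ULA, source at 20 deg

def steering(deg, m=M):
    # Half-wavelength spacing: phase step of pi*sin(theta) per element.
    return np.exp(-1j * np.pi * np.arange(m) * np.sin(np.radians(deg)))

# Simulated snapshots: one narrowband source plus white noise.
s = rng.standard_normal(snapshots) + 1j * rng.standard_normal(snapshots)
X = np.outer(steering(true_deg), s)
X += 0.1 * (rng.standard_normal(X.shape) + 1j * rng.standard_normal(X.shape))

# Delay-and-sum scan: steered output power over a grid of angles.
grid = np.arange(-90.0, 90.0, 0.25)
power = [np.sum(np.abs(steering(d).conj() @ X) ** 2) for d in grid]
est = float(grid[int(np.argmax(power))])
print(f"estimated DoA: {est:.2f} deg")
```

Subspace methods such as MUSIC refine this scan for higher resolution; the scan itself shows the basic cost/precision trade-off of grid-based DoA estimation.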
ENVELOPE MODEL FOR MULTIVARIATE LINEAR REGRESSION WITH ELLIPTICAL ERROR
Alkan, Gunes, 0000-0001-9356-2173, January 2021
In recent years, the need for models which can accommodate higher-order covariates has increased greatly. We first consider linear regression with vector-valued response Y and tensor-valued predictors X. Envelope models (Cook et al., 2010) can significantly improve the estimation efficiency of the regression coefficients by linking the regression mean with the covariance of the regression error. Most existing tensor regression models assume that the conditional distribution of Y given X follows a normal distribution, which may be violated in practice. In Chapter 2, we propose an envelope multivariate linear regression model with tensor-valued predictors and elliptically contoured error distributions. The proposed estimator is more robust to violations of the error normality assumption, and it is more efficient than estimators that ignore the underlying envelope structure. We compare the new proposal with existing estimators in extensive simulation studies. In Chapter 3, we explore how the missing data problem can be addressed in the multivariate linear regression setting with envelopes and elliptical error. A popular and efficient approach, multiple imputation, is implemented with a bootstrapped expectation-maximization (EM) algorithm to fill in the missing data, followed by an adjustment in estimating the regression coefficients. Simulations with synthetic as well as real data are presented to establish the superiority of the proposed adjusted multiple imputation method.

Statistics.
Tensor Contraction Optimizations
Sringeri Vageeswara, Abhijit, January 2015
No description available.
The Unreasonable Usefulness of Approximation by Linear Combination
Lewis, Cannada Andrew, 05 July 2018
Through the exploitation of data-sparsity (a catch-all term for savings gained from a variety of approximations), it is possible to reduce the computational cost of accurate electronic structure calculations to linear, meaning that the total time to solution for the calculation grows at the same rate as the number of particles that are correlated. Multiple techniques for exploiting data-sparsity are discussed, with a focus on those that can be systematically improved by tightening numerical parameters, such that as the parameter approaches zero the approximation becomes exact. These techniques are first applied to Hartree-Fock theory, and we then attempt to design a linear-scaling, massively parallel electron correlation strategy based on second-order perturbation theory.

Ph.D.

The field of quantum chemistry is highly dependent on a vast hierarchy of approximations, all carefully balanced so as to allow for fast calculation of electronic energies and properties to an accuracy suitable for quantitative predictions. Formally, computing these energies should have a cost that increases exponentially with the number of particles in the system, but the use of approximations based on the local behavior, or nearness, of the particles reduces this scaling to low-order polynomials while maintaining an acceptable amount of accuracy. In this work, we introduce several new approximations that throw away information in a specific fashion, taking advantage of the fact that the interactions between particles decay in magnitude with the distance between them (although sometimes very slowly), and also exploiting the smoothness of those interactions by factorizing their numerical representation into a linear combination of simpler items. These factorizations, while technical in nature, have benefits that are hard to obtain by merely ignoring interactions between distant particles.
Through the development of new factorizations and a careful neglect of interactions between distant particles, we hope to be able to compute properties of molecules in such a way that accuracy is maintained, but the cost of the calculations grows only at the same rate as the number of particles. It seems, very recently (circa 2015), that this goal may actually soon become a reality, potentially revolutionizing the ability of quantum chemistry to make quantitative predictions for properties of large molecules.
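The "linear combination of simpler items" idea can be illustrated generically (a toy sketch, not the factorizations developed in this thesis): a matrix of smooth pairwise interactions has rapidly decaying singular values, so a handful of rank-1 terms reproduces it to high accuracy.

```python
import numpy as np

# Smooth "interaction" between points on a line: a Gaussian kernel.
x = np.linspace(0.0, 1.0, 200)
K = np.exp(-(x[:, None] - x[None, :]) ** 2)

# Truncated SVD = best approximation by a linear combination of r
# rank-1 terms; smoothness makes the singular values decay very fast.
U, s, Vt = np.linalg.svd(K)
r = 15
K_r = (U[:, :r] * s[:r]) @ Vt[:r]

rel_err = float(np.linalg.norm(K - K_r) / np.linalg.norm(K))
print(f"rank {r} of {len(x)}: relative error {rel_err:.1e}")
```

Storing 2 * 200 * 15 numbers instead of 200 * 200 is the kind of compression that, applied systematically to electron-repulsion-like quantities, underlies reduced-scaling electronic structure methods.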