  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
751

Random Matrix Theory: Selected Applications from Statistical Signal Processing and Machine Learning

Elkhalil, Khalil 06 1900 (has links)
Random matrix theory is an outstanding mathematical tool that has demonstrated its usefulness in many areas, ranging from wireless communication to finance and economics. The main motivation behind its use comes from the fundamental role that random matrices play in modeling unknown and unpredictable physical quantities. In many situations, meaningful metrics expressed as scalar functionals of these random matrices arise naturally. Along this line, the present work leverages tools from random matrix theory in an attempt to answer fundamental questions in statistical signal processing and machine learning. In a first part, this thesis addresses the development of analytical tools for the computation of the inverse moments of random Gram matrices with one-sided correlation. Such a question is mainly driven by applications in signal processing and wireless communications wherein such matrices naturally arise. In particular, we derive closed-form expressions for the inverse moments and show that the obtained results can help approximate several performance metrics of common estimation techniques. Then, we carry out a large dimensional study of discriminant analysis classifiers. Under mild assumptions, we show that the asymptotic classification error approaches a deterministic quantity that depends only on the means and covariances associated with each class as well as the problem dimensions. Such a result permits a better understanding of the underlying classifiers in practical, large but finite dimensions, and can be used to optimize their performance. Finally, we revisit kernel ridge regression and study a centered version of it that we call centered kernel ridge regression, or CKRR for short. Relying on recent advances on the asymptotic properties of random kernel matrices, we carry out a large dimensional analysis of CKRR under the assumption that both the data dimension and the training size grow simultaneously large at the same rate.
We particularly show that both the empirical and prediction risks converge to a limiting risk that relates the performance to the data statistics and the parameters involved. Such a result is important as it permits a better understanding of kernel ridge regression and allows the performance to be optimized efficiently.
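The centering step behind CKRR can be sketched concretely. Below is a minimal NumPy implementation of centered kernel ridge regression with an RBF kernel; the function names, the choice of kernel, and all parameter values are illustrative assumptions, not the thesis's actual code.

```python
import numpy as np

def _rbf(A, B, gamma):
    """Pairwise RBF kernel matrix between rows of A and rows of B."""
    sq = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
    return np.exp(-gamma * sq)

def ckrr_fit(X, y, gamma=1.0, lam=1.0):
    """Solve (K_c + n*lam*I) alpha = y - mean(y), with K_c = H K H."""
    n = len(X)
    K = _rbf(X, X, gamma)
    H = np.eye(n) - np.full((n, n), 1.0 / n)   # centering projector
    Kc = H @ K @ H
    return np.linalg.solve(Kc + n * lam * np.eye(n), y - y.mean())

def ckrr_predict(X, y, Xnew, alpha, gamma=1.0):
    """Predict with the consistently centered test kernel plus the training mean."""
    n = len(X)
    K = _rbf(X, X, gamma)
    Knew = _rbf(Xnew, X, gamma)
    H = np.eye(n) - np.full((n, n), 1.0 / n)
    Kc_new = (Knew - K.mean(axis=0)) @ H       # center test rows like training
    return Kc_new @ alpha + y.mean()
```

Centering the kernel matrix (K_c = HKH with H = I - 11^T/n) removes the implicit mean component from the kernel fit, so the intercept is handled separately by centering y.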
752

First-principles investigation of the electronic states at perovskite and pyrite hetero-interfaces

Nazir, Safdar 09 1900 (has links)
Oxide heterostructures have attracted huge interest in recent years due to the special functionalities of quasi two-dimensional quantum gases. In this thesis, the electronic states at the interface between perovskite oxides and pyrite compounds have been studied by first-principles calculations based on density functional theory. Optimization of the atomic positions is taken into account, which is considered very important at interfaces, as observed in the case of LaAlO3/SrTiO3. The creation of metallic states at the interfaces is thus explained in terms of charge transfer between the transition metal and oxygen atoms near the interface. It is observed that, with typical thicknesses of at least 10-12 Å, the gases still extend considerably in the third dimension, which essentially determines the magnitude of quantum mechanical effects. To overcome this problem, we propose the incorporation of highly electronegative cations (such as Ag) in the oxides. Of fundamental interest is also the thermodynamic stability of the interfaces, due to the possibility of atomic intermixing in the interface region. Therefore, different cation-intermixed configurations are taken into account for the interfaces, aiming at the energetically stable state. The effect of O vacancies is also discussed for both polar and non-polar heterostructures. The interface metallicity is enhanced for the polar system with the creation of O vacancies, while the clean interface of the non-polar heterostructure exhibits an insulating state and becomes metallic in the presence of O vacancies. The O vacancy formation energies are calculated and explained in terms of the increasing electronegativity and effective volume of the A-site cation. Along with these, the electronic and magnetic properties of an interface between the ferromagnetic metal CoS2 and the non-magnetic semiconductor FeS2 are investigated. We find that this contact shows a metallic character.
The CoS2 stays quasi half-metallic at the interface, while the FeS2 becomes metallic. At the interface, ferromagnetic ordering is found to be energetically favorable as compared to antiferromagnetic ordering. Furthermore, tensile strain is shown to strongly enhance the spin polarization, so that a virtually half-metallic interface can be achieved for comparatively moderate strain. Our detailed study is aimed at complementing experiments on various oxide interfaces and obtaining a general picture of how factors like cations, anions, their atomic weights and electronegativities, O vacancies, lattice mismatch, lattice relaxation, and magnetism play a combined role in device design.
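The vacancy formation energies mentioned above are typically computed from total energies as E_f = E(defective) - E(pristine) + mu_O, where mu_O is the chemical potential of the removed oxygen. A minimal sketch; the helper name and all numeric values are made up for illustration and are not results from the thesis:

```python
def vacancy_formation_energy(e_defective, e_pristine, mu_oxygen):
    """E_f = E(cell with O vacancy) - E(pristine cell) + mu_O.

    mu_O is the chemical potential of the removed oxygen atom,
    e.g. half the DFT total energy of an O2 molecule.
    """
    return e_defective - e_pristine + mu_oxygen

# Illustrative numbers in eV, not values from the thesis:
e_f = vacancy_formation_energy(e_defective=-1522.0, e_pristine=-1528.1,
                               mu_oxygen=-4.3)
```

A lower (more negative) E_f means vacancies form more easily, which is how the trend with cation electronegativity and volume would be quantified.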
753

Interface Effects Enabling New Applications of Two-Dimensional Materials

Sattar, Shahid 05 1900 (has links)
Interface effects in two-dimensional (2D) materials play a critical role in the electronic properties and device characteristics. Here we use first-principles calculations to investigate interface effects in 2D materials that enable new applications. We first show that graphene in contact with monolayer and bilayer PtSe2 experiences weak van der Waals interaction. Analysis of the work functions and band bending at the interface reveals that graphene forms an n-type Schottky contact with monolayer PtSe2 and a p-type Schottky contact with bilayer PtSe2, whereas a small biaxial tensile strain makes the contact Ohmic in the latter case, as required for transistor operation. For silicene, which is a 2D Dirac relative of graphene, structural buckling complicates the experimental synthesis, and strong interaction with the substrate perturbs the characteristic linear dispersion. To remove this obstacle, we propose solid argon as a possible substrate for realizing quasi-freestanding silicene and argue that a weak van der Waals interaction and small binding energy indicate the possibility of separating silicene from the substrate. For the silicene-PtSe2 interface, we demonstrate much stronger interlayer interaction than previously reported for silicene on other semiconducting substrates. Due to the inversion symmetry breaking and proximity to PtSe2, a band gap opening and spin splittings in the valence and conduction bands of silicene are observed. It is also shown that the strong interlayer interaction can be effectively reduced by intercalating NH3 molecules between silicene and PtSe2, and a small NH3 diffusion barrier makes intercalation a viable experimental approach. Silicene and germanene are categorized as key materials for the field of valleytronics due to their stronger spin-orbit coupling as compared to graphene. However, no viable route to experimental realization exists so far.
We propose F-doped WS2 as a substrate that avoids detrimental effects and at the same time induces the required valley polarization. The behavior is explained by proximity effects on silicene/germanene due to the underlying substrate. Broken inversion symmetry in the presence of WS2 opens a substantial band gap in silicene/germanene. F doping of WS2 results in spin polarization, which, in conjunction with proximity-enhanced spin-orbit coupling, creates sizable spin-valley polarization. For heterostructures of silicene and hexagonal boron nitride, we show that the stacking is fundamental to the details of the dispersion relation in the vicinity of the Fermi energy (gapped, non-gapped, linear, parabolic), despite small differences in the total energy. We also demonstrate that the tight-binding model of bilayer graphene is able to capture most of these features, and we identify the limitations of the model.
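The contact types inferred from work functions and band alignment follow, to first order, the Schottky-Mott rule. A small sketch of that estimate; the function and all numbers are illustrative assumptions, whereas the thesis's values come from first-principles calculations:

```python
def schottky_barriers(metal_work_function, semi_electron_affinity, band_gap):
    """Schottky-Mott estimate of barrier heights at a metal-semiconductor
    contact: electrons see W_M - chi, holes see E_g - (W_M - chi)."""
    phi_n = metal_work_function - semi_electron_affinity
    phi_p = band_gap - phi_n
    return phi_n, phi_p

# Illustrative numbers in eV (not from the thesis): a graphene-like metal
# on a PtSe2-like semiconductor.
phi_n, phi_p = schottky_barriers(4.6, 4.2, 1.2)
contact = "n-type" if phi_n < phi_p else "p-type"
```

Strain shifts the work function and band edges; when phi_n (or phi_p) drops to zero or below, the contact becomes Ohmic for that carrier type.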
754

Development of Kinetic Parameters for the Leaching of Phlogopite and Characterisation of the Solid Residue

Favel, Cheri M. January 2020 (has links)
The development of an appropriate solid-state kinetic model representing the leaching process of phlogopite was investigated. Phlogopite samples were leached with nitric acid solutions of different concentrations, at different temperatures and for different reaction times. Leach liquors were analysed by ICP-OES for concentration, while the raw phlogopite and the acid-leached solid residues were characterised by XRF, XRD, ATR-FTIR, BET, TGA-DTG and SEM-EDS to support the reaction rate model selection. It was found that the reaction was diffusion-controlled and that the model representing one-dimensional diffusion through a flat plate (model D1) most accurately predicts the leaching behaviour. The observed activation energies and pre-exponential constants varied with initial acid concentration. The observed activation energies decreased from 98.8 to 88.9 kJ mol^-1 as the initial acid concentration increased from 2 to 4 M, while the observed pre-exponential constants decreased from 3.30 x 10^12 to 2.30 x 10^11 min^-1. Additional experiments were conducted at different temperatures, using different initial acid concentrations and over different reaction times to test the model. The experimental data points obtained ("testing data") were in agreement with the predicted values. Analyses of the solid residues also revealed complementary results with respect to the leaching model selection. The raw phlogopite was found to be highly crystalline (XRD). The absence of defects in the lattice therefore means that the motion of H+ ions permeating into the lattice is restricted (Ropp, 2003; Schmalzried, 1995). This confirms that the leaching is internal diffusion-controlled, since the mobility of constituents into the system is the controlling factor; and since the phlogopite particles are plate-like in shape (SEM-EDS, BET), the D1 model for one-dimensional diffusion through a flat plate is recommended to represent the leaching process.
Furthermore, results obtained from the different analytical techniques were supportive of each other. It was also found that the amount of acid consumed is not equivalent to the amount theoretically required. Using the theoretically required acid concentration (2.45 M) results in incomplete conversion (< 80 % according to Kgokong (2017)). When initial acid concentrations between 2.4 and 2.6 M were used, only 88 to 91 % conversion was obtained after 6 hours of leaching at 65 °C, leaving behind excess H+ in solution. If fertiliser is the desired end product, it would be favourable to minimise the H+ concentration of the leach liquor. Therefore, the leaching process should be optimised so that the acidity of the leach liquor is minimised while complete leaching of all cations from the phlogopite particles into solution is obtained. Furthermore, since the SiO2 by-product is highly porous (surface area of 517 m^2 g^-1), its application in industrial adsorbents, catalysts, polymers, pigments, cement, etc. should be further explored. / Dissertation (MEng (Chemical Engineering))--University of Pretoria, 2020. / Chemical Engineering / MEng (Chemical Engineering) / Unrestricted
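The D1 model and Arrhenius behaviour described above can be sketched in a few lines: g(alpha) = alpha^2 = k*t for one-dimensional diffusion through a flat plate, with k = A*exp(-Ea/(R*T)). The parameters below are of the order reported for 2 M acid, but the 65 °C, 6 h usage is an illustration, not a reproduction of the thesis's testing data:

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def arrhenius(A, Ea, T):
    """Rate constant k = A * exp(-Ea/(R*T)); A in min^-1, Ea in J mol^-1."""
    return A * math.exp(-Ea / (R * T))

def d1_conversion(k, t):
    """D1 model: g(alpha) = alpha^2 = k*t, so alpha = sqrt(k*t), capped at 1."""
    return min(math.sqrt(k * t), 1.0)

# Order-of-magnitude parameters for 2 M acid (illustrative use only):
k_65C = arrhenius(A=3.30e12, Ea=98.8e3, T=338.15)  # 65 degC
alpha_6h = d1_conversion(k_65C, t=360.0)           # 6 h in minutes
```

With these numbers the predicted conversion after 6 hours lands in the 80-90 % range, consistent in order of magnitude with the incomplete conversion discussed above.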
755

Simultaneous Inference for High Dimensional and Correlated Data

Polin, Afroza 22 August 2019 (has links)
No description available.
756

A thermo-hydraulic model that represents the current configuration of the SAFARI-1 secondary cooling system

Huisamen, Ewan January 2015 (has links)
This document describes the procedure and results of creating a thermo-hydraulic model of the secondary cooling system of the SAFARI-1 research reactor at the Pelindaba facility of the South African Nuclear Energy Corporation (Necsa), to the west of Pretoria, South Africa. The secondary cooling system is an open recirculating cooling system that comprises an array of parallel-coupled heat exchangers between the primary systems and the main heat sink, which consists of multiple counterflow induced-draught cooling towers. The original construction of the reactor was a turnkey installation, with no theoretical/technical support or verifiability. The design baseline is therefore not available, and it was necessary to reverse-engineer a system that could be modelled and characterised. For the nuclear operator, it is essential to be able to make predictions and systematically implement modifications to improve system performance, for example to understand and modify the control system. Another objective is to identify the critical performance areas of the thermo-hydraulic system and to determine whether the cooling capacity of the secondary system meets the optimum original design characteristics. The approach was to perform comprehensive one-dimensional modelling of all the available physical components, followed by using existing performance data to verify the accuracy and validity of the developed model. Where performance data was not available, separate analysis through computational fluid dynamics (CFD) modelling was performed to generate the required inputs. The result is a model that is accurate to within 10%. This is acceptable when compared to the variation within the supplied data and the generated and assumed alternatives, and when considering the compounding effect of the large number of interdependent components, each with their own characteristics and associated performance uncertainties.
The model pointed to potential problems within the current system, caused either by an obstruction in a certain component or by faulty measuring equipment. Furthermore, it was found that the current spray nozzles in the cooling towers are underutilised. It should be possible to use the current cooling tower arrangement to support a similar second reactor, although slight modifications would be required to ensure that the current system is not operated beyond its limits. The interdependent nature of two parallel systems and the variability of the existing conditions would require an analysis similar to the current model to determine the viability of using the existing cooling towers for an additional reactor. / Dissertation (MEng)--University of Pretoria, 2015. / Mechanical and Aeronautical Engineering / MEng / Unrestricted
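One-dimensional component modelling of the kind described often reduces each heat exchanger to an effectiveness-NTU relation. A minimal sketch for a counterflow exchanger; this is a textbook relation offered for illustration, not the thesis's actual model:

```python
import math

def counterflow_effectiveness(ntu, c_ratio):
    """Effectiveness of a counterflow heat exchanger (epsilon-NTU method).

    c_ratio = C_min / C_max; the balanced case c_ratio == 1 has its own
    closed form.
    """
    if abs(c_ratio - 1.0) < 1e-12:
        return ntu / (1.0 + ntu)
    e = math.exp(-ntu * (1.0 - c_ratio))
    return (1.0 - e) / (1.0 - c_ratio * e)

# Heat duty then follows from q = eps * C_min * (T_hot_in - T_cold_in).
```

Chaining relations like this per component, then matching flows and temperatures at the junctions, is the essence of a one-dimensional network model of a cooling system.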
757

Multidimensional Data Processing for Optical Coherence Tomography Imaging

McLean, James Patrick January 2021 (has links)
Optical Coherence Tomography (OCT) is a medical imaging technique which distinguishes itself by acquiring microscopic-resolution images in vivo at millimeter-scale fields of view. The resulting images are not only high-resolution, but often multi-dimensional, capturing 3-D biological structures or temporal processes. The nature of multi-dimensional data presents a unique set of challenges to the OCT user: acquiring, storing, and handling very large datasets; visualizing and understanding the data; and processing and analyzing the data. In this dissertation, three of these challenges are explored in depth: sub-resolution temporal analysis, 3-D modeling of fiber structures, and compressed sensing of large, multi-dimensional datasets. Exploration of these problems is followed by proposed solutions and demonstrations which rely on tools from multiple research areas, including digital image filtering, image de-noising, and sparse representation theory. By combining approaches from these fields, advanced solutions were developed to produce new and groundbreaking results. High-resolution video data showing cilia motion in unprecedented detail and scale was produced. An image processing method was used to create the first 3-D fiber model of uterine tissue from OCT images. Finally, a compressed sensing approach was developed which we show guarantees high-accuracy image recovery of more complicated, clinically relevant samples than had previously been demonstrated. The culmination of these methods represents a step forward in OCT image analysis, showing that these cutting-edge tools can be applied to OCT data and, in the future, be employed in a clinical setting.
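Sparse-representation-based recovery of the kind used in compressed sensing can be illustrated with iterative soft thresholding (ISTA). This is a generic textbook routine, not the dissertation's method; the parameter values below are arbitrary:

```python
import numpy as np

def ista(A, y, lam=0.1, iters=500):
    """Iterative soft-thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1,
    a standard sparse-recovery routine in compressed sensing."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1/L for the gradient step
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - step * (A.T @ (A @ x - y))     # gradient step on the data fit
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # shrink
    return x
```

Given fewer measurements than unknowns (A wide), the L1 penalty drives the solution toward a sparse representation, which is what makes recovery of undersampled OCT volumes possible in principle.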
758

Initial development and validation of a dimensional classification system for the emotional disorders

Rosellini, Anthony Joseph 22 January 2016 (has links)
Problems with the current categorical approach to classification used by the Diagnostic and Statistical Manual of Mental Disorders (DSM) have led to proposals that classify the emotional disorders (EDs; anxiety and mood disorders) using a dimensional-categorical system based on shared ED vulnerabilities and phenotypes. Such profile-based approaches have yet to be empirically evaluated, in part because a single multidimensional assessment of shared ED vulnerabilities and phenotypes amenable to profile-based classification has not been developed. The present studies aimed to provide an initial examination of a categorical-dimensional approach to ED classification (Study 1) as well as develop and evaluate a multidimensional self-report assessment of shared ED vulnerabilities and phenotypes (the Multidimensional Emotional Disorder Inventory [MEDI], Study 2). The samples consisted of 1,218 (Study 1) and 227 (Study 2) participants who presented for assessment and treatment at an outpatient ED treatment center. All participants were assessed using a semi-structured ED interview and a set of ED self-report questionnaires. The MEDI was completed only by the participants in Study 2. Study 1 used mixture modeling to identify six unobserved groups (classes) of individuals sharing similar profiles across seven dimensional ED vulnerability and phenotype indicators. The external validity of the profiles was supported when related ED covariates were added to the solution. The incremental validity of the profiles was supported using hierarchical regression models; the profiles accounted for unique variance in ED outcomes beyond DSM diagnoses. In Study 2, exploratory structural equation modeling (ESEM) and confirmatory factor analysis were used to evaluate the factor structure of the MEDI. ESEM supported an eight-factor solution of a 47-item version of the MEDI. 
Differential magnitude of correlation analyses supported the convergent/discriminant validity of seven of the eight MEDI scales. A five-class (profile) solution, consistent with Study 1, was found when mixture modeling was applied to the MEDI scales. Collectively, the present studies provide compelling evidence in support of the development and utility of a hybrid dimensional-categorical profile approach to emotional disorder classification using multidimensional self-report assessment methods such as the MEDI.
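Mixture modeling of the kind used in Study 1 identifies latent profile classes by expectation-maximization. Below is a toy one-dimensional Gaussian-mixture EM, a deliberately simplified stand-in for the multivariate mixture models used in the study:

```python
import numpy as np

def gmm_em_1d(x, n_classes=2, iters=100):
    """Tiny EM for a one-dimensional Gaussian mixture: the core idea behind
    mixture modeling used to find latent profile classes."""
    mu = np.quantile(x, np.linspace(0.1, 0.9, n_classes))  # spread-out init
    sigma = np.full(n_classes, x.std())
    pi = np.full(n_classes, 1.0 / n_classes)
    for _ in range(iters):
        # E-step: posterior responsibility of each class for each point
        dens = (pi / (sigma * np.sqrt(2 * np.pi))
                * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, standard deviations
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-9
    return pi, mu, sigma
```

In the study, each observation is a vector of vulnerability and phenotype scores rather than a scalar, and model selection (e.g. the six-class solution) is done by comparing fit criteria across candidate class counts.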
759

Interactive, Computation Assisted Design Tools

Garg, Akash January 2020 (has links)
Realistic modeling, rendering, and animation of physical and virtual shapes have matured significantly over the last few decades. Yet the creation and subsequent modeling of three-dimensional shapes remains a tedious task which requires not only artistic and creative talent, but also significant technical skill. The perfection witnessed in computer-generated feature films requires extensive manual processing and touch-ups. Every researcher working in graphics and related fields has likely experienced the difficulty of creating even a moderate-quality 3D model, whether based on a mental concept, a hand sketch, or inspiration from one or more photographs or existing 3D designs. This situation, frequently referred to as the content creation bottleneck, is arguably the major obstacle to making computer graphics as ubiquitous as it could be. Classical modeling techniques have primarily dealt with local or low-level geometric entities (e.g., points or triangles) and criteria (e.g., smoothness or detail preservation), lacking the freedom necessary to produce novel and creative content. A major unresolved challenge towards a new, unhindered design paradigm is how to support the design of visually pleasing yet functional objects by users who lack specialized skills and training. Most existing geometric modeling tools are intended either for use by experts (e.g., computer-aided design [CAD] systems) or for modeling objects whose visual aspects are the only consideration (e.g., computer graphics modeling systems). Furthermore, rapid prototyping, brought on by technological advances in 3D printing, has drastically altered production and consumption practices. These technologies empower individuals to design and produce original objects, customized according to their own needs.
Thus, a new generation of design tools is needed that supports the creation of designs within a domain's constraints, not only capturing the novice user's design intent but also meeting fabrication constraints so that the designs can be realized with minimal tweaking by experts. To fill this void, the premise of this thesis relies on the following two tenets: 1. Users benefit from an interactive design environment that allows novice users to continuously explore a design space and immediately see the tradeoffs of their design choices. 2. The machine's processing power is used to assist and guide the user to maintain constraints imposed by the problem domain (e.g., fabrication/material constraints), as well as to help the user explore feasible solutions close to their design intent. Finding the appropriate balance between interactive design tools and the computation needed for productive workflows is the problem addressed by this thesis. This thesis makes the following contributions: 1. We take a close look at thin shells, materials whose thickness is significantly smaller than their other dimensions. Towards the goal of achieving interactive and controllable simulations, we exploit a particular geometric insight to develop an efficient bending model for the simulation of thin shells. Under isometric deformations (deformations with little to no stretching), we can reduce the nonlinear bending energy to a cubic polynomial that has a linear Hessian. This linear Hessian can be further approximated by a constant one, providing significant speedups during simulation. We also build upon this simple bending model and show how orthotropic materials can be modeled and simulated efficiently. 2. We study the theory of Chebyshev nets, a geometric model of woven materials using a two-dimensional net composed of inextensible yarns. The theory of Chebyshev nets sheds some light on their limitations in globally covering a target surface.
As it turns out, Chebyshev nets are a good geometric model for wire meshes: free-form surfaces composed of woven wires arranged in a regular grid. In the context of designing sculptures with wire mesh, we rely on the mathematical theory laid out by Hazzidakis (1879) to determine an artistically driven workflow for approximately covering a target surface with a wire mesh, while globally maintaining material and fabrication constraints. This relieves the user from worrying about feasibility and allows them to focus on design. 3. Finally, we present a practical tool for the design and exploration of reconfigurables, defined as an object or collection of objects whose transformation between various states defines its functionality or aesthetic appeal (e.g., a mechanical assembly composed of interlocking pieces, a transforming folding bicycle, or a space-saving arrangement of apartment furniture). A novel space-time collision detection and response technique is presented that can be used to create an interactive workflow for managing and designing objects with various states. This work also considers a graph-based timeline during the design process, instead of the traditional linear timeline, and shows its many benefits, as well as challenges, for the design of reconfigurables.
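The space-time collision idea can be illustrated with the classic swept-AABB test, which finds the first time two boxes touch along a motion rather than checking a single instant. This is a simplified stand-in for continuous collision detection, not the thesis's technique:

```python
def _axis_times(amin, amax, bmin, bmax, v):
    """Entry/exit times along one axis for interval A moving at velocity v."""
    if v > 0:
        return (bmin - amax) / v, (bmax - amin) / v
    if v < 0:
        return (bmax - amin) / v, (bmin - amax) / v
    if amax > bmin and bmax > amin:            # static and already overlapping
        return float("-inf"), float("inf")
    return float("inf"), float("-inf")         # static, never overlap

def swept_aabb(a_lo, a_hi, b_lo, b_hi, velocity):
    """First time in [0, 1] at which moving box A hits static box B, or None.

    A continuous ("space-time") test: collision happens only where the
    per-axis entry/exit intervals all intersect.
    """
    t_entry, t_exit = 0.0, 1.0
    for k in range(len(a_lo)):
        e, x = _axis_times(a_lo[k], a_hi[k], b_lo[k], b_hi[k], velocity[k])
        t_entry, t_exit = max(t_entry, e), min(t_exit, x)
    return t_entry if t_entry <= t_exit else None
```

Resolving the earliest contact time, adjusting the motion, and re-querying is the basic loop behind an interactive workflow over object states.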
760

GPU-accelerated Mahalanobis-average hierarchical clustering

Šmelko, Adam January 2020 (has links)
Hierarchical clustering algorithms are common tools for simplifying, exploring and analyzing datasets in many areas of research. For flow cytometry, a specific variant of agglomerative clustering has been proposed that uses cluster linkage based on the Mahalanobis distance to produce results better suited to the domain. Applicability of this clustering algorithm is currently limited by its relatively high computational complexity, which does not allow it to scale to common cytometry datasets. This thesis describes a specialized, GPU-accelerated version of Mahalanobis-average linked hierarchical clustering, which improves the algorithm's performance by several orders of magnitude, thus allowing it to scale to much larger datasets. The thesis provides an overview of current hierarchical clustering algorithms and details the construction of the GPU variant. The result is benchmarked on publicly available high-dimensional data from mass cytometry.
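A naive CPU version of Mahalanobis-average linkage can be sketched directly. The linkage below averages the two Mahalanobis distances between cluster centroids, each measured in one cluster's covariance, falling back to Euclidean distance for clusters too small to estimate a covariance; this is a simplified O(n^3) reading of the method, not the thesis's exact definition or its GPU algorithm:

```python
import numpy as np

def mahalanobis_avg_linkage(points, n_clusters):
    """Greedy agglomerative clustering with a Mahalanobis-average linkage."""
    clusters = [[i] for i in range(len(points))]

    def linkage(a, b):
        pa, pb = points[a], points[b]
        d = pb.mean(axis=0) - pa.mean(axis=0)
        dists = []
        for p in (pa, pb):
            if len(p) > p.shape[1]:            # enough points for a covariance
                cov = np.cov(p.T) + 1e-6 * np.eye(p.shape[1])
                dists.append(float(np.sqrt(d @ np.linalg.inv(cov) @ d)))
            else:                              # tiny cluster: Euclidean fallback
                dists.append(float(np.linalg.norm(d)))
        return 0.5 * (dists[0] + dists[1])

    while len(clusters) > n_clusters:
        # merge the closest pair under the linkage
        i, j = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: linkage(clusters[ij[0]], clusters[ij[1]]))
        clusters[i] += clusters[j]
        del clusters[j]
    return clusters
```

The repeated all-pairs linkage evaluation (with a covariance inversion per pair) is exactly the cost that makes the GPU parallelization worthwhile.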
