  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
41

Statistical algorithms for circuit synthesis under process variation and high defect density

Singh, Ashish Kumar, January 1900 (has links)
Thesis (Ph. D.)--University of Texas at Austin, 2007. / Vita. Includes bibliographical references.
42

Computational methods in optimization problems

Park, Chang-Man, January 1967 (has links)
Thesis (Ph. D.)--University of Wisconsin, 1967. / Typescript. Vita. eContent provider-neutral record in process. Description based on print version record. Includes bibliography.
43

Optimal control of switched autonomous systems: theory, algorithms, and robotic applications

Axelsson, Henrik. January 2006 (has links)
Thesis (Ph. D.)--Electrical and Computer Engineering, Georgia Institute of Technology, 2006. / Ashraf Saad, Committee Member ; Spyros Reveliotis, Committee Member ; Anthony Yezzi, Committee Member ; Erik Verriest, Committee Member ; Yorai Wardi, Committee Co-Chair ; Magnus Egerstedt, Committee Chair.
44

Distributed Optimization Algorithms for Networked Systems

Chatzipanagiotis, Nikolaos January 2015 (has links)
Distributed optimization methods allow us to decompose an optimization problem into smaller, more manageable subproblems that are solved in parallel. For this reason, they are widely used to solve large-scale problems arising in areas as diverse as wireless communications, optimal control, machine learning, artificial intelligence, computational biology, finance, and statistics, to name a few. Moreover, distributed algorithms avoid the cost and fragility associated with centralized coordination, and provide better privacy for the autonomous decision makers. These are desirable properties, especially in applications involving networked robotics, communication or sensor networks, and power distribution systems.

In this thesis we propose the Accelerated Distributed Augmented Lagrangians (ADAL) algorithm, a novel decomposition method for convex optimization problems with certain separability structure. The method is based on the augmented Lagrangian framework and addresses problems that involve multiple agents optimizing a separable convex objective function subject to convex local constraints and linear coupling constraints. We establish the convergence of ADAL and also show that it has a worst-case O(1/k) convergence rate, where k denotes the number of iterations.

Moreover, we show that ADAL converges to a local minimum of the problem for cases with non-convex objective functions. This is the first published work that formally establishes the convergence of a distributed augmented Lagrangian method for non-convex optimization problems. An alternative way to select the stepsizes used in the algorithm is also discussed. These two contributions are independent from each other, meaning that convergence of the non-convex ADAL method can still be shown using the stepsizes from the convex case, and, similarly, convergence of the convex ADAL method can be shown using the stepsizes proposed in the non-convex proof.

Furthermore, we consider cases where the distributed algorithm needs to operate in the presence of uncertainty and noise and show that the generated sequences of primal and dual variables converge to their respective optimal sets almost surely. In particular, we are concerned with scenarios where: i) the local computation steps are inexact or are performed in the presence of uncertainty, and ii) the message exchanges between agents are corrupted by noise. In this case, the proposed scheme can be classified as a distributed stochastic approximation method. Compared to existing literature in this area, our work is the first that utilizes the augmented Lagrangian framework. Moreover, the method allows us to solve a richer class of problems as compared to existing methods on distributed stochastic approximation that consider only consensus constraints.

Extensive numerical experiments have been carried out in an effort to validate the novelty and effectiveness of the proposed method in all the areas of the aforementioned theoretical contributions. We examine problems in convex, non-convex, and stochastic settings where uncertainties and noise affect the execution of the algorithm. For the convex cases, we present applications of ADAL to certain popular network optimization problems, as well as to a two-stage stochastic optimization problem. The simulation results suggest that the proposed method outperforms the state-of-the-art distributed augmented Lagrangian methods that are known in the literature. For the non-convex cases, we perform simulations on certain simple non-convex problems to establish that ADAL indeed converges to non-trivial local solutions of the problems; in comparison, the straightforward implementation of the other distributed augmented Lagrangian methods on the same problems does not lead to convergence. For the stochastic setting, we present simulation results of ADAL applied on network optimization problems and examine the effect that noise and uncertainties have on the convergence behavior of the method.

As an extended and more involved application, we also consider the problem of relay cooperative beamforming in wireless communications systems. Specifically, we study the scenario of a multi-cluster network, in which each cluster contains multiple single-antenna source-destination pairs that communicate simultaneously over the same channel. The communications are supported by cooperating amplify-and-forward relays, which perform beamforming. Since the emerging problem is non-convex, we propose an approximate convex reformulation. Based on ADAL, we also discuss two different ways to obtain a distributed solution that allows for autonomous computation of the optimal beamforming decisions by each cluster, while taking into account intra- and inter-cluster interference effects.

Our goal in this thesis is to advance the state-of-the-art in distributed optimization by proposing methods that combine fast convergence, wide applicability, ease of implementation, and low computational complexity, and that are robust with respect to delays, uncertainty in the problem parameters, noise corruption in the message exchanges, and inexact computations. / Dissertation
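The ADAL scheme summarized above (each agent minimizes a local augmented Lagrangian, followed by damped primal and dual updates) can be illustrated on a toy instance. The sketch below is not the thesis implementation: it assumes two agents with quadratic local objectives 0.5*(x_i - c_i)^2 and a single coupling constraint x_1 + x_2 = b, so each local subproblem has a closed form; the stepsize tau = 1/q (q = number of agents coupled per constraint) is the standard choice in ADAL-style analyses and is assumed here.

```python
import numpy as np

# Toy ADAL-style iteration: minimize 0.5*(x1-c1)^2 + 0.5*(x2-c2)^2
# subject to the linear coupling constraint x1 + x2 = b.
c, b = np.array([1.0, 3.0]), 2.0
rho, q = 1.0, 2          # penalty parameter; q agents share each coupling constraint
tau = 1.0 / q            # damped stepsize used in the convergence analysis
x, lam = np.zeros(2), 0.0

for _ in range(200):
    x_hat = np.empty(2)
    for i in range(2):
        # Each agent minimizes its local augmented Lagrangian, holding the
        # other agent's variable at its previous value (closed form here).
        r = x[1 - i] - b
        x_hat[i] = (c[i] - lam - rho * r) / (1.0 + rho)
    x = x + tau * (x_hat - x)                # damped primal update
    lam = lam + rho * tau * (x.sum() - b)    # dual update on the coupling residual

# KKT solution of the toy problem: lam* = (c1 + c2 - b)/2 = 1, x* = c - lam* = (0, 2)
```

On this toy problem the iteration contracts linearly to the KKT point lambda* = 1, x* = (0, 2).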
45

Theoretical Studies of Ru- and Re-based Catalysts for Artificial Photosynthesis

Stolper, Thorsten 08 December 2017 (has links)
No description available.
46

Contributions to Convergence Analysis of Noisy Optimization Algorithms

Astete morales, Sandra 05 October 2016 (has links)
This thesis presents contributions to the analysis of algorithms for optimizing noisy functions. It establishes convergence rates (in terms of simple regret and cumulative regret) for line-search algorithms as well as for random-search algorithms. We prove that a Hessian-based algorithm can reach the same results as some optimal algorithms in the literature when its parameters are tuned correctly. We also analyse the convergence order of evolution strategies when solving noisy functions, deducing log-log convergence, and we prove a lower bound on the convergence rate of evolution strategies. We extend prior work on reevaluation (resampling) mechanisms by applying them to a discrete setting. Finally, we analyse the performance measure itself and prove that the use of an erroneous performance measure can lead to misleading results when different optimization methods are evaluated.
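The evolution-strategy setting of this abstract can be illustrated with a toy (mu/mu, lambda)-ES on a noisy sphere function, using a simple reevaluation (resampling) mechanism that averages repeated noisy evaluations. All parameter values, the geometric step-size decay, and the noise model below are assumptions for illustration, not the algorithms analysed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_sphere(x, k):
    """Noisy objective: ||x||^2 plus Gaussian noise, averaged over k resamples."""
    return np.mean([x @ x + rng.normal(0.0, 0.1) for _ in range(k)])

mean, sigma = np.array([3.0, 2.0]), 3.0   # initial search point and step size
lam, mu, resamples = 20, 5, 10            # offspring, parents, reevaluations

for _ in range(100):
    offspring = mean + sigma * rng.standard_normal((lam, 2))
    fitness = np.array([noisy_sphere(z, resamples) for z in offspring])
    parents = offspring[np.argsort(fitness)[:mu]]  # truncation selection
    mean = parents.mean(axis=0)                    # intermediate recombination
    sigma *= 0.95                                  # simple geometric decay

# mean should now lie close to the optimum at the origin despite the noise
```

Averaging over `resamples` evaluations reduces the noise standard deviation by a factor of sqrt(resamples), which is the basic idea behind the resampling mechanisms the thesis extends to the discrete setting.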
47

Performance Analysis Between Combinations of Optimization Algorithms and Activation Functions used in Multi-Layer Perceptron Neural Networks

Valmiki, Geetha Charan, Tirupathi, Akhil Santosh January 2020 (has links)
Background: Artificial neural networks are inspired by the biological nervous system and can be used for classification and forecasting. Each node applies an activation function, which lets the network solve non-linear problems, and training uses an optimization algorithm to minimize the loss and give more accurate results. The prominence of neural networks in machine learning inspired this study to analyse how performance varies across different combinations of activation functions and optimization algorithms, in terms of accuracy and recall, and to assess the impact of data-set features on the performance of the networks. Objectives: This study runs an experiment to determine which combinations perform well and to observe the effect of discarding features from the data-set on the model's performance. Methods: The process involves gathering the data-sets, activation functions, and optimization algorithms; executing the network model with the 7x5 different combinations of activation functions and optimization algorithms; and analysing the performance of the neural networks. The same models are then tested on the data-set with some features discarded, to measure the effect on performance. Results: All evaluation metrics are presented in separate tables, and graphs show how each activation function's performance rises or falls when paired with different optimization algorithms. The impact of individual features on the performance of the neural network is also reported.
Conclusions: Out of the 35 combinations, those built from the optimization algorithms Adam, RMSprop, and Adagrad together with the activation functions ReLU, Softplus, Tanh, Sigmoid, and Hard_Sigmoid were selected based on the performance evaluation. The experiments also show that the data-set has an impact on the performance of each optimizer-activation combination, and that individual features have their own corresponding effect on the neural network.
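The 7x5 experimental grid can be enumerated directly. The abstract names three optimizers (Adam, RMSprop, Adagrad) and five activation functions; the four extra optimizers below are hypothetical placeholders added only so the sketch reaches a 7x5 grid.

```python
from itertools import product

# Five activation functions named in the abstract.
activations = ["relu", "softplus", "tanh", "sigmoid", "hard_sigmoid"]
# Three optimizers named in the abstract, plus four assumed ones to reach 7.
optimizers = ["adam", "rmsprop", "adagrad", "sgd", "adadelta", "adamax", "nadam"]

combinations = list(product(optimizers, activations))
assert len(combinations) == 35  # the 7x5 grid evaluated in the study

for opt, act in combinations:
    # Here one would build and train an MLP with this (optimizer, activation)
    # pair and record accuracy and recall on the held-out data-set.
    pass
```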
48

Prioritized Database Synchronization using Optimization Algorithms

Alladi, Sai Sumeeth January 2023 (has links)
No description available.
49

Advancing computational materials design and model development using data-driven approaches

Sose, Abhishek Tejrao 02 February 2024 (has links)
Molecular dynamics (MD) simulations are used to build a fundamental understanding of the molecular-level mechanisms of physical processes. This assists in tuning the key features affecting the development of novel hybrid materials: an application demanding a desired function can be served by hybrids that combine pure materials into a blend of new properties. To run MD simulations, however, an accurate representation of the interatomic potentials, i.e., the force-field (FF) model, remains crucial. This thesis explores the fusion of MD simulations, uncertainty quantification, and data-driven methodologies to accelerate the computational design of innovative materials and models across the following interconnected chapters. Beginning with the development of force fields for atomic-level systems and coarse-grained models for FCC metals, the study progresses into exploring the intricate interfacial interactions between 2D materials such as graphene and MoS2 and water. Current state-of-the-art model development faces the challenges of high-dimensional model input-parameter spaces and the unknown robustness of the developed models. The utilization of advanced optimization techniques such as particle swarm optimization (PSO) integrated with MD enhances the accuracy and precision of FF models, while Bayesian uncertainty quantification (BUQ) helps force-field developers estimate the robustness of a model. Furthermore, the complex structure and dynamics of water confined between and around sheets were unraveled using 3D convolutional neural networks (3D-CNNs). Specifically, through classification and regression models, water-molecule ordering/disordering and atomic density profiles were accurately predicted, thereby elucidating nuanced interplays between sheet compositions and confined water molecules.
To further the computational design of hybrid materials, this thesis delves into designing and investigating polymer composites with functionalized MOFs, shedding light on crucial factors governing their compatibility and performance; it therefore includes a study of the structure and dynamics of functionalized MOFs in a polymer matrix. Additionally, it investigates the biomedical potential of porous MOFs as drug delivery vehicles (DDVs). Often overlooked is the pivotal role of solvents (used in MOF synthesis or found in relevant body fluids) in the drug adsorption and release process. This work underscores the solvent's impact on drug adsorption within MOFs by comparing results in its presence and absence. Building on these findings, the study examines the effects of MOF functionalization on tuning the drug adsorption and release process, and explores how different physical and chemical properties influence drug adsorption within MOFs. Furthermore, the research explores the potential of functionalized MOFs for improved carbon capture, considering their application in energy-related contexts. By harnessing machine learning and deep learning, the thesis introduces innovative pathways for material property prediction and design, emphasizing the fusion of computational methodologies with data-driven approaches to advance molecular-level understanding and propel future material design endeavors. / Doctor of Philosophy / Envision a world where scientific exploration reaches the microscopic scale, powered by advanced computational tools. In this frontier of materials science, researchers employ sophisticated computer simulations to delve into the intricate properties of materials, particularly focusing on metal-organic frameworks (MOFs). These MOFs, akin to microscopic molecular sponges, exhibit remarkable abilities to capture gases or hold medicinal drug compounds.
This thesis studies MOFs alongside materials like graphene, boron nitride, and molybdenum disulfide, investigating their interactions with water with unprecedented precision. Through these detailed explorations and the fusion of cutting-edge technologies, we aim to unlock a future featuring enhanced drug delivery systems, improved energy storage solutions, and innovative energy applications.
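As a hedged illustration of the PSO-integrated-with-MD idea described in the abstract, the sketch below uses a plain global-best PSO to fit two force-field parameters: a Lennard-Jones (epsilon, sigma) pair is fitted to a synthetic reference curve, standing in for matching MD observables. The potential form, bounds, and all numerical choices are assumptions for the toy, not the thesis workflow.

```python
import numpy as np

rng = np.random.default_rng(0)

def lj(r, eps, sig):
    """Lennard-Jones pair potential, a stand-in for a force-field term."""
    sr6 = (sig / r) ** 6
    return 4.0 * eps * (sr6**2 - sr6)

r = np.linspace(1.0, 3.0, 50)
target = lj(r, 1.0, 1.2)          # synthetic "reference" data (eps=1.0, sig=1.2)

def loss(p):
    """Mean-squared error between the candidate potential and the reference."""
    return np.mean((lj(r, p[0], p[1]) - target) ** 2)

# Standard global-best PSO over the 2-D parameter box [0.5, 2.0]^2.
n, w, c1, c2 = 30, 0.7, 1.5, 1.5  # swarm size, inertia, cognitive/social weights
pos = rng.uniform(0.5, 2.0, (n, 2))
vel = np.zeros((n, 2))
pbest = pos.copy()
pbest_val = np.array([loss(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(100):
    r1, r2 = rng.random((n, 2)), rng.random((n, 2))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.5, 2.0)         # keep particles in bounds
    vals = np.array([loss(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

# gbest should recover eps close to 1.0 and sig close to 1.2
```

In a real FF-fitting loop the `loss` would compare MD-derived observables against reference data rather than a closed-form curve, which is where the MD integration comes in.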
50

Large-Scale Optimization With Machine Learning Applications

Van Mai, Vien January 2019 (has links)
This thesis aims at developing efficient algorithms for solving some fundamental engineering problems in data science and machine learning. We investigate a variety of acceleration techniques for improving the convergence times of optimization algorithms.  First, we investigate how problem structure can be exploited to accelerate the solution of highly structured problems such as generalized eigenvalue and elastic net regression. We then consider Anderson acceleration, a generic and parameter-free extrapolation scheme, and show how it can be adapted to accelerate practical convergence of proximal gradient methods for a broad class of non-smooth problems. For all the methods developed in this thesis, we design novel algorithms, perform mathematical analysis of convergence rates, and conduct practical experiments on real-world data sets. / QC 20191105
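Anderson acceleration, described above as a generic and parameter-free extrapolation scheme, can be sketched as a wrapper around any fixed-point iteration x = g(x). The sketch below uses a common type-II formulation; the memory size, the quadratic test problem, and the gradient stepsize are assumptions for illustration, not the thesis's adaptation to proximal gradient methods.

```python
import numpy as np

def anderson(g, x0, m=5, iters=50, tol=1e-10):
    """Type-II Anderson acceleration of the fixed-point iteration x = g(x)."""
    x = np.asarray(x0, dtype=float)
    G_hist, F_hist = [], []          # histories of g(x) and residuals g(x) - x
    for _ in range(iters):
        gx = g(x)
        f = gx - x
        if np.linalg.norm(f) < tol:
            return gx
        G_hist.append(gx); F_hist.append(f)
        if len(F_hist) > m + 1:      # keep a sliding window of m+1 iterates
            G_hist.pop(0); F_hist.pop(0)
        if len(F_hist) == 1:
            x = gx                   # plain fixed-point step to start
        else:
            # Least-squares combination of recent residual differences.
            dF = np.column_stack([F_hist[i+1] - F_hist[i] for i in range(len(F_hist) - 1)])
            dG = np.column_stack([G_hist[i+1] - G_hist[i] for i in range(len(G_hist) - 1)])
            theta, *_ = np.linalg.lstsq(dF, f, rcond=None)
            x = gx - dG @ theta      # extrapolated step
    return x

# Fixed-point map of gradient descent on 0.5*x^T A x - b^T x (ill-conditioned A).
A = np.diag([1.0, 10.0, 100.0])
b = np.array([1.0, 2.0, 3.0])
g = lambda x: x - (1.0 / 100.0) * (A @ x - b)

x = anderson(g, np.zeros(3))
# x should be close to the solution of A x = b, far faster than plain iteration
```

On this linear map the accelerated iterates reach machine-level accuracy in a handful of steps, whereas the plain gradient map contracts only at rate 0.99 in its slowest mode.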
