21

Gradient Temporal-Difference Learning Algorithms

Maei, Hamid Reza Unknown Date
No description available.
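No repository abstract is available, but the title matches the family of gradient temporal-difference methods (GTD, GTD2, TDC) developed in this line of work. As an illustrative sketch under that assumption — not code from the thesis — a linear TDC update with an auxiliary weight vector might look like:

```python
import numpy as np

def tdc_update(theta, w, phi, phi_next, reward, gamma=0.99,
               alpha=0.01, beta=0.05):
    """One TDC (gradient-TD with gradient correction) step for linear
    value estimation; phi and phi_next are feature vectors of s and s'."""
    delta = reward + gamma * phi_next @ theta - phi @ theta  # TD error
    theta = theta + alpha * (delta * phi - gamma * (phi @ w) * phi_next)
    w = w + beta * (delta - phi @ w) * phi  # auxiliary estimator of the
    # expected TD error projected onto the feature space
    return theta, w
```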
22

A Neural Network Classifier for Spectral Pattern Recognition. On-Line versus Off-Line Backpropagation Training

Staufer-Steinnocher, Petra, Fischer, Manfred M. 12 1900 (has links) (PDF)
In this contribution we evaluate on-line and off-line techniques to train a single hidden layer neural network classifier with logistic hidden and softmax output transfer functions on a multispectral pixel-by-pixel classification problem. In contrast to current practice, a multiple-class cross-entropy error function has been chosen as the function to be minimized. The non-linear differential equations cannot be solved in closed form. To solve for a set of locally minimizing parameters we use the gradient descent technique for parameter updating, based upon the backpropagation technique for evaluating the partial derivatives of the error function with respect to the parameter weights. Empirical evidence shows that on-line and epoch-based gradient descent backpropagation fail to converge within 100,000 iterations, due to the fixed step size. Batch gradient descent backpropagation training is superior in terms of learning speed and convergence behaviour. Stochastic epoch-based training tends to be slightly more effective than on-line and batch training in terms of generalization performance, especially when the number of training examples is larger. Moreover, it is less prone to falling into local minima than on-line and batch modes of operation. (authors' abstract) / Series: Discussion Papers of the Institute for Economic Geography and GIScience
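As a concrete illustration of the setup the abstract describes, here is a minimal sketch of such a classifier — logistic hidden units, softmax outputs, cross-entropy loss — with both on-line and batch gradient-descent updates. Layer sizes, the learning rate, and the omission of bias terms are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(X, W1, W2):
    H = 1.0 / (1.0 + np.exp(-X @ W1))            # logistic hidden layer
    Z = H @ W2
    Z -= Z.max(axis=1, keepdims=True)            # numerical stability
    P = np.exp(Z) / np.exp(Z).sum(axis=1, keepdims=True)  # softmax outputs
    return H, P

def gradients(X, Y, W1, W2):
    """Backprop for the multiple-class cross-entropy error; Y is one-hot."""
    H, P = forward(X, W1, W2)
    dZ = (P - Y) / len(X)                        # d(cross-entropy)/d(logits)
    dW2 = H.T @ dZ
    dH = dZ @ W2.T * H * (1 - H)                 # through the logistic units
    dW1 = X.T @ dH
    return dW1, dW2

def train(X, Y, n_hidden=8, lr=0.1, epochs=100, online=True):
    W1 = rng.normal(scale=0.1, size=(X.shape[1], n_hidden))
    W2 = rng.normal(scale=0.1, size=(n_hidden, Y.shape[1]))
    for _ in range(epochs):
        if online:                               # one pattern at a time
            for i in rng.permutation(len(X)):
                dW1, dW2 = gradients(X[i:i+1], Y[i:i+1], W1, W2)
                W1 -= lr * dW1
                W2 -= lr * dW2
        else:                                    # batch: full gradient
            dW1, dW2 = gradients(X, Y, W1, W2)
            W1 -= lr * dW1
            W2 -= lr * dW2
    return W1, W2
```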
23

Otimização de parâmetros de interação do modelo UNIFAC-VISCO de misturas de interesse para a indústria de óleos essenciais / Optimization of interaction parameters for UNIFAC-VISCO model of mixtures interesting to essential oil industries

Camila Nardi Pinto 27 February 2015 (has links)
The determination of the physical properties of essential oils is fundamental for their application in the food industry and for equipment design. The large number of variables involved in the deterpenation process, such as temperature, pressure and composition, makes predictive viscosity models necessary. This work aimed to obtain parameters for the UNIFAC-VISCO predictive viscosity model by applying the gradient descent optimization method to viscosity data of model systems representing the phases that can form in liquid-liquid extraction deterpenation of bergamot, lemon and mint essential oils, using ethanol-water mixtures of different compositions as solvent at 25 °C. The work was divided into two configurations: in the first, the interaction parameters previously reported in the literature were kept fixed; in the second, all interaction parameters were adjusted. The model and the optimization method were implemented in MATLAB®. The optimization algorithm was run 10 times for each configuration, starting from different initial interaction-parameter matrices generated by the Monte Carlo method. The results were compared with the study by Florido et al. (2014), which used a genetic algorithm as the optimization method. The first configuration yielded a mean relative deviation (DMR) of 1.366 and the second a DMR of 1.042. The gradient descent method outperformed the genetic algorithm for the first configuration (DMR 1.70), whereas the genetic algorithm gave the better result for the second (DMR 0.68). The predictive ability of the UNIFAC-VISCO model with the determined parameters was evaluated on a eucalyptus essential oil system, yielding DMRs of 17.191 and 3.711 for the first and second configurations, respectively; these values are higher than those found by Florido et al. (2014) (3.56 and 1.83 for the first and second configurations, respectively). The parameters contributing most to the DMR are CH-CH3 for the first configuration and OH-H2O for the second. Parameters involving the C group do not influence the DMR and may be excluded from future analyses.
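A minimal sketch of the fitting loop the abstract describes — gradient descent on the mean relative deviation (DMR) between predicted and measured viscosities. The thesis used MATLAB®; this Python version is an assumption-laden illustration, and `viscosity_model` is a hypothetical stand-in for the UNIFAC-VISCO evaluation, with finite-difference gradients since that objective has no simple closed-form derivative:

```python
import numpy as np

def dmr(params, viscosity_model, systems):
    """Mean relative deviation (%) between predicted and measured
    viscosities; `systems` holds (composition, temperature, eta) tuples."""
    devs = [abs(viscosity_model(x, T, params) - eta) / eta
            for x, T, eta in systems]
    return 100.0 * np.mean(devs)

def fit_parameters(viscosity_model, systems, params0,
                   lr=1e-3, h=1e-6, iters=5000):
    """Plain gradient descent with forward-difference gradients."""
    p = np.asarray(params0, dtype=float)
    for _ in range(iters):
        base = dmr(p, viscosity_model, systems)
        grad = np.empty_like(p)
        for i in range(p.size):            # one model call per parameter
            step = p.copy()
            step[i] += h
            grad[i] = (dmr(step, viscosity_model, systems) - base) / h
        p -= lr * grad
    return p
```

Restarting such a loop from several Monte Carlo-sampled initial parameter matrices, as the abstract describes, is the usual guard against the local minima of a non-convex fit.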
24

TOWARDS AN UNDERSTANDING OF RESIDUAL NETWORKS USING NEURAL TANGENT HIERARCHY

Yuqing Li (10223885) 06 May 2021 (has links)
Deep learning has become an important toolkit for data science and artificial intelligence. In contrast to its practical success across a wide range of fields, theoretical understanding of the principles behind that success has been an issue of controversy. Optimization, as an important component of theoretical machine learning, has attracted much attention. The optimization problems arising from deep learning are often non-convex and non-smooth, which makes locating the global optima challenging. In practice, however, global convergence of first-order methods like gradient descent can be guaranteed for deep neural networks: gradient descent yields zero training loss in polynomial time despite the non-convexity. Another mysterious phenomenon is the compelling performance of the Deep Residual Network (ResNet). Not only does training ResNet require weaker conditions; the residual connections it employs even enable first-order methods to train neural networks with an order of magnitude more layers. The advantages arising from the use of residual connections remain to be discovered.

In this thesis, we demystify these two phenomena accordingly. Firstly, we contribute to a further understanding of gradient descent. The core of our analysis is the neural tangent hierarchy (NTH), which captures the gradient descent dynamics of deep neural networks. A recent work introduced the Neural Tangent Kernel (NTK) and proved that the limiting NTK describes the asymptotic behavior of neural networks trained by gradient descent in the infinite-width limit. The NTH outperforms the NTK in two ways: (i) it can directly study the time variation of the NTK for neural networks, and (ii) it improves the result to non-asymptotic settings. Moreover, by applying the NTH to ResNet with a smooth and Lipschitz activation function, we reduce the requirement on the layer width m with respect to the number of training samples n from quartic to cubic, obtaining a state-of-the-art result. Secondly, we extend our analysis to structural properties of deep neural networks. By making fair and consistent comparisons between fully-connected networks and ResNet, we suggest strongly that the particular skip-connection architecture possessed by ResNet is the main reason for its triumph over fully-connected networks.
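For reference, the Neural Tangent Kernel the abstract refers to is the Gram matrix of network gradients; the standard definition (generic notation, not notation taken from the thesis itself) is:

```latex
% Empirical NTK of a network f(x; \theta) at training time t:
K_t(x, x') \;=\; \big\langle \nabla_\theta f(x; \theta_t),\,
                 \nabla_\theta f(x'; \theta_t) \big\rangle .
% In the infinite-width limit K_t stays close to its value at
% initialization; the NTH instead tracks the hierarchy of equations
% governing the time variation of K_t at finite width.
```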
25

Costly Black-Box Optimization with GTperform at Siemens Industrial Turbomachinery

Malm, André January 2022 (has links)
The simulation program GTperform is used to estimate the machine settings from performance measurements for the gas turbine model STG-800 at Siemens Industrial Turbomachinery in Finspång, Sweden. By evaluating different settings within the program, the engineers try to estimate the one that generates the performance measurement. This procedure is done manually at Siemens and is very time-consuming. This project aims to establish an algorithm that automatically determines the correct machine setting from the performance measurements. Two algorithms were implemented in Python: Simulated Annealing and Gradient Descent. Two possible objective functions were analyzed, and both were tested on three gas turbines at different sites. The first objective estimated the machine setting that generated the best fit to the performance measurements, while the second established the most likely solution for the machine setting from probability distributions. Multiple simulations were run for the two algorithms and objective functions to evaluate their performance. Both algorithms established satisfactory results for the second objective function; Simulated Annealing, in particular, produced solutions with a lower spread than Gradient Descent. The algorithms make it possible to establish the machine settings for the simulation program automatically, reducing the work for the engineers.
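A minimal sketch of simulated annealing for a costly black-box objective of this kind; `objective` and `neighbor` are hypothetical stand-ins for a simulator evaluation and a proposal rule, and the schedule constants are illustrative assumptions:

```python
import math
import random

def simulated_annealing(objective, x0, neighbor, t0=1.0, cooling=0.995,
                        iters=2000):
    """Minimize a costly black-box objective; `neighbor(x)` proposes a
    nearby machine setting."""
    x, fx = x0, objective(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(iters):
        y = neighbor(x)
        fy = objective(y)
        # Always accept improvements; accept worse moves with
        # Boltzmann probability, which shrinks as the temperature cools.
        if fy < fx or random.random() < math.exp((fx - fy) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling                      # geometric cooling schedule
    return best, fbest
```

The acceptance of occasional uphill moves is what gives annealing its lower spread across restarts on a rugged objective, compared with a pure descent method that commits to the nearest local minimum.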
26

First-order distributed optimization methods for machine learning with linear speed-up

Spiridonoff, Artin 27 September 2021 (has links)
This thesis considers the problem of average consensus, distributed centralized and decentralized Stochastic Gradient Descent (SGD), and their communication requirements. Namely, (i) an algorithm for achieving consensus among a collection of agents is studied and its convergence to the average is shown in the presence of link failures and delays. The new results improve upon prior works by relaxing some of the restrictive assumptions on communication, such as bounded link failures and intercommunication intervals, as well as by allowing for message delays. Next, (ii) a Robust Asynchronous Stochastic Gradient Push (RASGP) algorithm is proposed to minimize the separable objective F(z) = Σ_{i=1}^n f_i(z) in a harsh network setting characterized by asynchronous updates, message losses and delays, and directed communication. RASGP is shown to asymptotically perform as well as the best bounds on a centralized gradient descent that takes steps in the direction of the sum of the noisy gradients of all local functions f_i(z). Next, (iii) a new communication strategy is proposed for Local SGD, a centralized optimization algorithm in which workers make local updates and then calculate their average values only once in a while. It is shown that linear speed-up in the number of workers N is possible using only O(N) communication (averaging) rounds, independent of the total number of iterations T. Empirical evidence suggests this bound is close to tight, as it is further shown that √N or N^{3/4} communications fail to achieve linear speed-up. Finally, (iv) under mild assumptions, the main of which is twice differentiability in a neighborhood of the optimal solution, one-shot averaging, which uses only a single round of communication, is shown to have an asymptotically optimal convergence rate.
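A minimal sketch of the Local SGD scheme described in (iii): each worker takes several local stochastic steps, and communication happens only at the averaging rounds. The gradient oracle, noise model, and step counts are illustrative assumptions, not the thesis's experimental setup:

```python
import numpy as np

def local_sgd(grad, x0, n_workers=8, local_steps=50, rounds=20, lr=0.01,
              noise=0.1, rng=np.random.default_rng(0)):
    """Local SGD: workers run `local_steps` SGD updates on private copies,
    then all copies are averaged once per communication round."""
    workers = [np.array(x0, dtype=float) for _ in range(n_workers)]
    for _ in range(rounds):
        for x in workers:
            for _ in range(local_steps):
                g = grad(x) + noise * rng.standard_normal(x.shape)
                x -= lr * g                    # local stochastic step
        avg = np.mean(workers, axis=0)         # the only communication
        workers = [avg.copy() for _ in range(n_workers)]
    return workers[0]
```

The result cited in the abstract concerns how few such averaging rounds — O(N), independent of T — still preserve the linear speed-up in N.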
27

Control perspective on distributed optimization

Farkhooi, Sam January 2023 (has links)
At the intersection of machine learning, artificial intelligence and mathematical computation lies optimization, a powerful tool that enables us to solve a variety of large-scale problems. The purpose of this work is to explore optimization in the distributed setting. We then touch on factors that contribute to a faster and more stable algorithm when solving a distributed optimization problem. The main factor we look into is how control can be integrated.
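The abstract does not spell out an algorithm; as a hedged illustration of the distributed setting it studies, a textbook decentralized gradient descent (DGD) step combines a consensus average over neighbors with a local gradient step:

```python
import numpy as np

def decentralized_gd(grads, W, x0, lr=0.05, iters=500):
    """Textbook DGD: each agent mixes its neighbors' iterates through a
    doubly stochastic weight matrix W, then steps along its own gradient."""
    X = np.array([x0 for _ in grads], dtype=float)   # one row per agent
    for _ in range(iters):
        X = W @ X                                    # consensus/communication
        X = X - lr * np.array([g(x) for g, x in zip(grads, X)])  # local step
    return X.mean(axis=0)
```

Viewing this iteration as a closed-loop dynamical system — consensus as the plant, the step size as a gain — is the kind of control perspective the thesis title refers to.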
28

On the Modelling of Stochastic Gradient Descent with Stochastic Differential Equations

Leino, Martin January 2023 (has links)
Stochastic gradient descent (SGD) is arguably the most important algorithm used in optimization problems for large-scale machine learning. Its behaviour has been studied extensively from the viewpoint of mathematical analysis and probability theory; it is widely held that in the limit where the learning rate of the algorithm tends to zero, a specific stochastic differential equation becomes an adequate model of the dynamics of the algorithm. This study exhibits some of the research in this field by analyzing the application of a recently proven theorem to the problem of tensor principal component analysis. The results, originally discovered in a 2022 article by Gérard Ben Arous, Reza Gheissari and Aukosh Jagannath, illustrate how the phase diagram of functions of SGD differs in the high-dimensional regime from that of the classical fixed-dimensional setting.
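The SDE the abstract alludes to is, in its most common formulation (a standard result in this literature, not a quotation from the thesis):

```latex
% Small-learning-rate SDE model of SGD with step size \eta and
% gradient-noise covariance \Sigma(\theta):
d\theta_t \;=\; -\nabla L(\theta_t)\, dt
           \;+\; \sqrt{\eta}\; \Sigma(\theta_t)^{1/2}\, dW_t ,
% so that one SGD step \theta_{k+1} = \theta_k - \eta\,\widehat{\nabla L}
% corresponds to the SDE evolved over a time increment dt = \eta.
```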
29

Fuzzy Control for an Unmanned Helicopter

Kadmiry, Bourhane January 2002 (has links)
The overall objective of the Wallenberg Laboratory for Information Technology and Autonomous Systems (WITAS) at Linköping University is the development of an intelligent command and control system, containing vision sensors, which supports the operation of an unmanned aerial vehicle (UAV) in both semi- and full-autonomy modes. One of the UAV platforms of choice is the APID-MK3 unmanned helicopter by Scandicraft Systems AB. The intended operational environment is over widely varying geographical terrain with traffic networks and vehicle interaction of variable complexity, speed, and density. The present version of APID-MK3 is capable of autonomous take-off, landing, and hovering, as well as of autonomously executing pre-defined, point-to-point flight, where the latter is executed at low speed. This is enough for performing missions like site mapping, surveillance, and communications, but for the above-mentioned operational environment higher speeds are desired. In this context, the goal of this thesis is to explore the possibilities for achieving stable "aggressive" manoeuvrability at high speeds and to test a variety of control solutions in the APID-MK3 simulation environment. The objective of achieving "aggressive" manoeuvrability concerns the design of attitude/velocity/position controllers which act on much larger ranges of the body attitude angles by utilizing the full range of the rotor attitude angles. In this context, a flight controller should achieve tracking of curvilinear trajectories at relatively high speeds in a manner robust w.r.t. external disturbances. Take-off and landing are not considered here since APID-MK3 already has dedicated control modules that realize these flight modes. With this goal in mind, we present the design of two different types of flight controllers: a fuzzy controller and a gradient-descent-based controller. Common to both are model-based design, the use of nonlinear control approaches, and an inner- and outer-loop control scheme. The performance of these controllers is tested in simulation using the nonlinear model of the APID-MK3. / Report code: LiU-Tek-Lic-2002:11. The format of the electronic version of this thesis differs slightly from the printed one, due mainly to font compatibility; the figures and body of the thesis remain unchanged.
30

Back propagation control of model-based multi-layer adaptive filters for optical communication systems / 光通信のためのモデルベース適応多層フィルタの誤差逆伝播による制御

Arikawa, Manabu 25 September 2023 (has links)
Kyoto University / New-system doctoral program / Doctor of Informatics / Degree No. Kō 24937 / Informatics Doctorate No. 848 / Call number: 新制||情||142 (University Library) / Department of Advanced Mathematical Sciences, Graduate School of Informatics, Kyoto University / Examination committee: Professor Kazunori Hayashi (chair), Professor Toshio Aoyagi, Associate Professor Jun-nosuke Teramae / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Informatics / Kyoto University / DFAM
