Duomenų dimensijos mažinimas naudojant autoasociatyvinius neuroninius tinklus / Data dimensionality reduction using autoassociative neural networks

Bendinskienė, Janina, 31 July 2012
This master's thesis surveys methods for reducing the dimensionality of multidimensional data (for visualization), with a focus on artificial neural networks. The basic concepts of artificial neural networks are presented (the biological neuron and the artificial neuron model, training strategies, the multilayer perceptron, and so on), and autoassociative neural networks are examined in detail. The aim of the work is to investigate how autoassociative neural networks can be applied to dimensionality reduction and visualization of multidimensional data, and to study how the results depend on different parameters. To this end, experiments were carried out on several multidimensional data sets, identifying the parameters that influence the behaviour of an autoassociative neural network. The results were compared using two different errors produced by the network: the MDS error and the autoassociative error. The MDS error shows how well the distances between the analyzed points (vectors) are preserved in the transition from the high-dimensional space to a lower-dimensional one. The outputs of an autoassociative network should coincide with its inputs, so the autoassociative error measures how closely this is achieved (the difference between inputs and outputs). The study examined how these errors are influenced by the following parameters of the autoassociative network: the activation function, the minimized loss function, the training function, the number of epochs, the number of hidden neurons, and the choice of the reduced dimension.
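The two error measures compared in the abstract above can be sketched concretely. The following is a minimal numpy illustration; a linear (PCA-style) encoder/decoder stands in for the trained autoassociative network, since the thesis's actual trained networks are not reproduced here:

```python
import numpy as np

def pairwise_dists(X):
    # Euclidean distance matrix between the rows of X
    diff = X[:, None, :] - X[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

def mds_error(X, Y):
    # Raw MDS stress: how well inter-point distances are preserved
    # going from high-dimensional X to its low-dimensional image Y.
    Dx, Dy = pairwise_dists(X), pairwise_dists(Y)
    return ((Dx - Dy) ** 2).sum() / 2.0

def autoassociative_error(X, X_hat):
    # Mean squared difference between network inputs and outputs.
    return ((X - X_hat) ** 2).mean()

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))

# Linear bottleneck: project onto the top-2 principal directions,
# then map back. This mimics an autoassociative network's
# encode -> decode path with a 2-neuron hidden layer.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
W = Vt[:2].T                    # 5 -> 2 projection
Y = Xc @ W                      # low-dimensional representation
X_hat = Y @ W.T + X.mean(axis=0)  # reconstruction at the outputs
```

With a trained nonlinear autoencoder, `Y` would be the hidden-layer activations and `X_hat` the network outputs; the two error functions apply unchanged.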
Distinct Feature Learning and Nonlinear Variation Pattern Discovery Using Regularized Autoencoders

January 2016
abstract: Feature learning and the discovery of nonlinear variation patterns in high-dimensional data is an important task in many problem domains, such as imaging, streaming data from sensors, and manufacturing. This dissertation presents several methods for learning and visualizing nonlinear variation in high-dimensional data. First, an automated method for discovering nonlinear variation patterns using deep learning autoencoders is proposed. The approach provides a functional mapping from a low-dimensional representation to the original spatially-dense data that is both interpretable and efficient with respect to preserving information. Experimental results indicate that deep learning autoencoders outperform manifold learning and principal component analysis in reproducing the original data from the learned variation sources. A key issue in using autoencoders for nonlinear variation pattern discovery is to encourage the learning of solutions where each feature represents a unique variation source, which we define as distinct features. This problem of learning distinct features is also referred to as disentangling factors of variation in the representation learning literature. The remainder of this dissertation highlights and provides solutions for this important problem. An alternating autoencoder training method is presented and a new measure motivated by orthogonal loadings in linear models is proposed to quantify feature distinctness in the nonlinear models. Simulated point cloud data and handwritten digit images illustrate that standard training methods for autoencoders consistently mix the true variation sources in the learned low-dimensional representation, whereas the alternating method produces solutions with more distinct patterns. Finally, a new regularization method for learning distinct nonlinear features using autoencoders is proposed. 
Motivated in part by the properties of linear solutions, a series of learning constraints is implemented via regularization penalties during stochastic gradient descent training. These include the orthogonality of tangent vectors to the manifold, the correlation between learned features, and the distributions of the learned features. This regularized learning approach yields low-dimensional representations that can be better interpreted and used to identify the true sources of variation affecting a high-dimensional feature space. Experimental results demonstrate the effectiveness of this method for nonlinear variation pattern discovery on both simulated and real data sets. / Doctoral Dissertation, Industrial Engineering, 2016
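One of the penalties named above targets the correlation between learned features. As an illustration only (the dissertation's exact regularizer is not reproduced here), a simple decorrelation-style score sums the squared off-diagonal correlations of the latent features: it is zero when features are uncorrelated and grows as they mix the same variation source:

```python
import numpy as np

def distinctness_penalty(Z):
    # Sum of squared off-diagonal correlations between the columns
    # (learned features) of Z: 0 for uncorrelated features, larger
    # when two features carry the same variation source.
    C = np.corrcoef(Z, rowvar=False)
    off = C - np.diag(np.diag(C))
    return (off ** 2).sum()

rng = np.random.default_rng(1)
# Two independent latent sources versus two near-duplicate features
independent = rng.normal(size=(200, 2))
mixed = np.column_stack([
    independent[:, 0],
    independent[:, 0] + 0.01 * independent[:, 1],  # nearly a copy
])
```

Added to the reconstruction loss with a weight, such a term pushes gradient descent toward solutions whose features are distinct rather than entangled copies of one another.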
