About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Identification of impacts on a three-dimensional structure using autoencoders and a linear approximation of the maximum-entropy principle

Espinoza Quitral, Cony de los Ángeles January 2018 (has links)
Thesis for the professional degree of Mechanical Civil Engineer / In general, systems with coatings exposed to impacts are unable to detect problems accurately and in real time. The motivation of this work is to advance toward designing structures that recognize the disturbances they receive from their environment. One way to monitor the integrity of such systems is to develop impact-detection algorithms based on the system's vibration response to impact-type disturbances. In this context we propose the use of autoencoders (AE). AEs are a type of neural network that extracts the most significant information from multivariate data; with it, the structure can learn from the impacts it receives in order to recognize future impacts efficiently. The objective of this work is to develop a system that identifies the location and magnitude of impacts received by a structure. A metallic cylindrical structure is chosen and studied in two configurations: one with the cylinder standing vertically on a smooth surface, and another with the structure suspended by a rope running along the length of the cylinder. An AE is trained to extract the latent space of the measured data. A supervised learning algorithm based on a linear approximation of the maximum-entropy principle (LME) is then trained to recognize impact signals and associate them with those from the training phase, yielding an estimate of the location and magnitude of the impacts received by the structure. For both stages, variables must be selected that produce the most efficient algorithm and deliver results with the smallest possible associated error. The AE performs well if only its ability to reconstruct the signal is considered; the latent space represents the shape of the data well, but its precision must improve. The behavior of the LME stage as a function of the number of neighbors used is consistent with the interference of the boundary conditions of the problem and with how the zones of contact with external elements affect the final results. Besides the contact surface with the base, another element that increases the prediction error is the weld seam running along the height of the cylinder. Finally, the method developed generally fulfills its purpose of identifying the location and magnitude of impacts applied to the structure, although its precision still needs work.
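As a loose illustration of the two-stage pipeline described above (AE latent extraction followed by neighbor-based estimation), the sketch below estimates an impact's location and magnitude from its latent code. This is not the thesis implementation: the softmax-over-distance weights are a simplified stand-in for the actual LME weight computation, and every name, array shape, and constant is an assumption.

```python
import numpy as np

def estimate_impact(z_query, Z_train, Y_train, k=5, beta=10.0):
    """Estimate (x, y, magnitude) for a new impact from its AE latent code.

    Softmax-over-distance weights are a simplified stand-in for the
    maximum-entropy (LME) weight solve used in the thesis.
    """
    d = np.linalg.norm(Z_train - z_query, axis=1)  # distances to training impacts
    idx = np.argsort(d)[:k]                        # k nearest neighbors in latent space
    w = np.exp(-beta * d[idx] ** 2)                # Gaussian (entropy-style) weights
    w /= w.sum()
    return w @ Y_train[idx]                        # weighted location and magnitude
```

Here `Z_train` holds the latent codes of the training impacts and `Y_train` their known locations and magnitudes; the number of neighbors `k` plays the role of the neighbor count analyzed in the thesis.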
2

Constellation Design for Multi-user Communications with Deep Learning

Sun, Yi-Lin January 2019 (has links)
In its simplest form, a communication system consists of a transmitter and a receiver. The transmitter transforms a one-hot message vector to produce a transmitted signal, and in general imposes constraints on that signal, such as a power constraint. The channel is defined by a conditional probability distribution function. On receiving the noisy transmitted signal, the receiver applies a transformation to generate an estimate of the one-hot message vector. From a deep learning perspective, this simplest communication system can be regarded as a special case of an autoencoder. In our case, the autoencoder is used to learn representations of the one-hot vectors that are robust to the noisy channel and can be recovered at the receiver with the smallest probability of error. Our task is to improve on such autoencoder systems, and we propose different schemes for different cases: a method based on an optimized softmax function, and L1/2 regularization introduced into the MSE loss function, for the SISO and MIMO cases respectively. Simulations show that both the optimized softmax method and the L1/2-regularized loss function perform better than the original neural network framework. / Thesis / Master of Applied Science (MASc)
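As a rough sketch of this end-to-end view (assumed details throughout, not the thesis code), the toy link below maps a one-hot message through an encoder, enforces an average power constraint, passes the signal through an AWGN channel, and decodes it; the loss combines an MSE over the softmax outputs with an L1/2 penalty on the encoder weights, one plausible reading of the regularization described above.

```python
import torch
import torch.nn as nn

M, n = 16, 7  # message count and channel uses; illustrative sizes only

encoder = nn.Sequential(nn.Linear(M, 32), nn.ReLU(), nn.Linear(32, n))
decoder = nn.Sequential(nn.Linear(n, 32), nn.ReLU(), nn.Linear(32, M))

def forward(one_hot, snr_db=7.0):
    x = encoder(one_hot)
    x = x / x.norm(dim=1, keepdim=True) * n ** 0.5   # average power constraint
    noise_std = 10 ** (-snr_db / 20)                 # assumed SNR definition
    y = x + noise_std * torch.randn_like(x)          # AWGN channel
    return torch.softmax(decoder(y), dim=1)          # message probabilities

def loss_fn(probs, one_hot, lam=1e-4):
    mse = ((probs - one_hot) ** 2).mean()            # MSE over softmax outputs
    l12 = sum(p.abs().sqrt().sum() for p in encoder.parameters())  # L1/2 penalty
    return mse + lam * l12
```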
3

Using a denoising autoencoder for localization: Denoising cellular-based wireless localization data

Danielsson, Alexander, von Pfaler, Edvard January 2021 (has links)
A denoising autoencoder is a type of neural network which excels at removing noise from noisy input data. In this project, a denoising autoencoder is optimized for removing noise from mobile positioning data. The noisy mobile positioning data were generated specifically for this project; in order to generate realistic noise, a study of what real-world noise looks like was carried out. The project aims to answer the question: can a denoising autoencoder be used to remove noise from mobile positioning data? The results show that this method can effectively cut the noise in half. The report mainly analyzes how the number of hidden layers and their respective sizes affect performance. It was concluded that the optimal design for the autoencoder was a single-hidden-layer model with significantly more nodes in the hidden layer than in the input and output layers.
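A minimal sketch of the winning configuration described above (one over-complete hidden layer), with all sizes and names assumed rather than taken from the report:

```python
import torch
import torch.nn as nn

d_in, d_hidden = 2, 64  # e.g. an (x, y) position; hidden layer wider than input/output

model = nn.Sequential(
    nn.Linear(d_in, d_hidden), nn.Tanh(),  # single over-complete hidden layer
    nn.Linear(d_hidden, d_in),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(noisy_batch, clean_batch):
    """One denoising step: map a noisy position to its clean counterpart."""
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(noisy_batch), clean_batch)
    loss.backward()
    opt.step()
    return loss.item()
```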
4

Genomic Data Augmentation with Variational Autoencoder

Thyrum, Emily 12 1900 (has links)
In order to treat cancer effectively, medical practitioners must predict pathological stages accurately, and machine learning methods can be employed to make such predictions. However, biomedical datasets, including genomic datasets, often have disproportionately more samples from people of European ancestry than from people of other ethnic or racial groups, which can cause machine learning methods to perform better on the European samples than on people from the under-represented groups. Data augmentation can be employed as a potential solution, artificially increasing the number of samples from people of under-represented racial groups and in turn improving pathological stage predictions for future patients from those groups. Genomic data augmentation has been explored previously, for example using a Generative Adversarial Network, but to the best of our knowledge the use of the variational autoencoder for genomic data augmentation remains largely unexplored. Here we utilize a geometry-based variational autoencoder that models the latent space as a Riemannian manifold, so that samples can be generated without the use of a prior distribution, to show that the variational autoencoder can indeed be used to reliably augment genomic data. Using TCGA prostate cancer genotype data, we show that our VAE-generated data can improve pathological stage predictions on a test set of European samples. Because only our European samples were labeled with pathological stage, we could not validate the generated African samples in this way, but we still attempt to show how such samples may be realistic. / Computer and Information Science
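For orientation, the sketch below shows a conventional VAE-based augmentation loop of the kind this thesis builds on. It is a simplification: the geometry-based VAE described above samples along a learned Riemannian latent manifold rather than jittering encoded points, and every size, name, and constant here is an assumption.

```python
import torch
import torch.nn as nn

class VAE(nn.Module):
    def __init__(self, d_in, d_z=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d_in, 128), nn.ReLU())
        self.mu = nn.Linear(128, d_z)
        self.logvar = nn.Linear(128, d_z)
        self.dec = nn.Sequential(nn.Linear(d_z, 128), nn.ReLU(), nn.Linear(128, d_in))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        return self.dec(z), mu, logvar

def augment(vae, x_group, n_new, jitter=0.1):
    """Generate synthetic genotype vectors near encoded samples of one group."""
    with torch.no_grad():
        mu = vae.mu(vae.enc(x_group))
        idx = torch.randint(len(mu), (n_new,))
        z = mu[idx] + jitter * torch.randn(n_new, mu.shape[1])  # latent-space jitter
        return vae.dec(z)
```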
5

Predicting Global Internet Instability Caused by Worms using Neural Networks

Marais, Elbert 16 November 2006 (has links)
Student Number : 9607275H - MSc dissertation - School of Electrical and Information Engineering - Faculty of Engineering and the Built Environment / Internet worms are capable of quickly propagating by exploiting vulnerabilities of hosts that have access to the Internet. Once a computer has been infected, a worm has access to sensitive information on the computer, and is able to corrupt or retransmit this information. This dissertation describes a method of predicting Internet instability due to the presence of a worm on the Internet, using data currently available from global Internet routers. The work is based on previous research which has indicated a link between an increase in the number of Border Gateway Protocol (BGP) routing messages and global Internet instability. The type of system used to provide the prediction is known as an autoencoder: a specialised type of neural network which provides a degree of novelty for its inputs. The autoencoder is trained to recognise “normal” data, and therefore produces a high novelty output for inputs dissimilar to the normal data. The BGP Update routing messages sent between routers were used as the only inputs to the autoencoder. These inter-router messages provide route availability information and inform neighbouring routers of any route changes. The outputs from the network were shown to help provide an early-warning mechanism for the presence of a worm. An alternative method for detecting instability is a rule-based system, which generates alarms if the number of certain BGP routing messages exceeds a pre-specified threshold. This project compared the autoencoder to a simple rule-based system. The results showed that the autoencoder provided a better prediction and was less complex for a network administrator to configure. Although the correlation between the number of BGP Updates and global Internet instability has been shown previously, this work presents the first known application of a neural network to predict the instability using this correlation. A system based on this strategy has the potential to reduce the damage done by a worm’s propagation and payload, by providing an automated means of detection that is faster than a human.
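A minimal sketch of the detection step (assumed interface; the feature extraction from raw BGP Update streams is abstracted away): score each time window by the autoencoder's reconstruction error and raise an alarm when the novelty exceeds a threshold.

```python
import numpy as np

def novelty_alarms(reconstruct, windows, threshold):
    """Flag time windows whose BGP Update feature vectors the trained
    autoencoder reconstructs poorly, i.e. high-novelty windows.

    `reconstruct` is any callable mapping an (N, d) array of windowed
    Update counts to its (N, d) reconstruction -- an assumed interface.
    """
    recon = reconstruct(windows)
    err = np.mean((windows - recon) ** 2, axis=1)  # per-window reconstruction error
    return err, err > threshold                    # novelty scores and alarm flags
```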
6

Towards Personalized Learning using Counterfactual Inference for Randomized Controlled Trials

Zhao, Siyuan 26 April 2018 (has links)
Personalized learning considers that the causal effects of a studied learning intervention may differ for the individual student (e.g., maybe girls do better with video hints while boys do better with text hints). To evaluate a learning intervention inside ASSISTments, we run a randomized controlled trial (RCT) by randomly assigning students to either a control condition or a treatment condition. Making inferences about the causal effects of the studied interventions is a central problem. Counterfactual inference answers “What if” questions, such as “Would this particular student benefit more if the student were given the video hint instead of the text hint when the student cannot solve a problem?”. Counterfactual prediction provides a way to estimate individual treatment effects and helps us assign students to the learning intervention that leads to better learning. A variant of Michael Jordan's "Residual Transfer Networks" was proposed for the counterfactual inference. The model first uses feed-forward neural networks to learn a balancing representation of students by minimizing the distance between the distributions of the control and treated populations, and then adopts a residual block to estimate the individual treatment effect. Students in the RCT have usually done a number of problems prior to participating in it, so each student has a sequence of actions (a performance sequence). We propose a pipeline that uses the performance sequence to improve the performance of counterfactual inference. Since deep learning has achieved a huge amount of success in learning representations from raw logged data, student representations were learned by applying a sequence autoencoder to the performance sequences, and these representations were then incorporated into the model for counterfactual inference. Empirical results showed that the representations learned by the sequence autoencoder improved the performance of counterfactual inference.
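One plausible reading of the residual-block idea above, sketched with assumed names and sizes: a shared representation feeds a control-outcome head, and a residual branch models the shift a treatment adds on top of it, so the individual treatment effect (ITE) is the residual itself. The balancing term (minimizing the distance between treated and control representation distributions) is omitted here.

```python
import torch
import torch.nn as nn

class ResidualITE(nn.Module):
    def __init__(self, d_in, d_rep=32):
        super().__init__()
        self.rep = nn.Sequential(nn.Linear(d_in, d_rep), nn.ReLU())  # balancing rep.
        self.control = nn.Linear(d_rep, 1)
        self.residual = nn.Sequential(nn.Linear(d_rep, d_rep), nn.ReLU(),
                                      nn.Linear(d_rep, 1))

    def forward(self, x, treated):
        """`treated` is a bool tensor of shape (N, 1)."""
        phi = self.rep(x)
        y0 = self.control(phi)        # predicted outcome under control
        y1 = y0 + self.residual(phi)  # treated outcome = control + residual
        return torch.where(treated, y1, y0), y1 - y0  # factual prediction, ITE
```

A sequence autoencoder's hidden state over a student's prior problem log could be concatenated into `x`, matching the pipeline described above.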
7

Variational Open Set Recognition

Buquicchio, Luke J. 08 May 2020 (has links)
In traditional classification problems, all classes in the test set are assumed to also occur in the training set, also referred to as the closed-set assumption. However, in practice, new classes may occur in the test set, which reduces the performance of machine learning models trained under the closed-set assumption. Machine learning models should be able to accurately classify instances of classes known during training while concurrently recognizing instances of previously unseen classes (the open-set assumption). The open-set assumption is motivated by real-world applications of classifiers, wherein it is improbable that sufficient data can be collected a priori on all possible classes to reliably train for them. For example, motivated by the DARPA WASH project at WPI, a disease classifier trained on data collected prior to the outbreak of COVID-19 might erroneously diagnose patients with the flu rather than the novel coronavirus. State-of-the-art open-set methods based on Extreme Value Theory (EVT) fail to adequately model class distributions with unequal variances. We propose the Variational Open-Set Recognition (VOSR) model, which leverages all class-belongingness probabilities to reject unknown instances. To realize the VOSR model, we design a novel Multi-Modal Variational Autoencoder (MMVAE) that learns well-separated Gaussian mixture distributions with equal variances in its latent representation. During training, VOSR maps instances of known classes to high-probability regions of class-specific components. By enforcing a large distance between these latent components during training, VOSR can assume that unknown data lies in the low-probability space between components and use a multivariate form of Extreme Value Theory to reject unknown instances. Our VOSR framework outperforms state-of-the-art open-set classification methods with a 15% F1 score increase on a variety of benchmark datasets.
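The rejection rule can be pictured with the sketch below, which is a simplification: latent codes are scored against well-separated class components, and an instance is marked unknown when even its best class-belongingness score is low. A fixed likelihood threshold stands in for the multivariate EVT model, and all names and shapes are assumptions.

```python
import torch

def classify_open_set(z, class_means, threshold, var=1.0):
    """Assign each latent code to its nearest class component, or to -1
    ("unknown") when the best log-density falls below the threshold."""
    d2 = torch.cdist(z, class_means) ** 2  # squared distance to each class mean
    logp = -0.5 * d2 / var                 # Gaussian log-density up to a constant
    best, cls = logp.max(dim=1)
    cls[best < threshold] = -1             # reject low-probability instances
    return cls
```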
8

Machine Architecture / Maskinarkitektur

Spett, Max Viktor January 2018 (has links)
Recent developments in AI are changing our world; AI already governs our digital lives. In my thesis I take the position that AI involvement in the field of architecture is inevitable, and indeed already here. AI is neither something we can simply accept nor wholly ignore; rather, we should try to understand and work with it. These algorithms should not be seen as mere tools with predictable, repeatable outcomes: they are something more complex. I have explored the world of AI by teaching a machine to design diverse, typologically similar objects: residential doorways from Stockholm. By instructing the machine to read and recreate these objects, it has learned to design objects similar to them. While the machine does not know what it has designed, it has nevertheless reinterpreted the residential gate, thus offering an opportunity to glimpse into the “mind” of AI, a world as unknown as it is omnipresent.
9

Perceptual facial expression representation

Mikheeva, Olga January 2017 (has links)
Facial expressions play an important role in areas such as human communication and the evaluation of medical states. For machine learning tasks in those areas, it would be beneficial to have a representation of facial expressions which corresponds to human similarity perception. In this work, a data-driven approach to representation learning of facial expressions is taken. The methodology is built upon Variational Autoencoders and eliminates appearance-related features from the latent space by using neutral facial expressions as additional inputs. In order to improve the quality of the learned representation, we modify the prior distribution of the latent variable to impose a structure on the latent space that is consistent with human perception of facial expressions. We conduct experiments on two datasets together with additionally collected similarity data, show that the human-like topology in the latent representation helps to improve performance on a stereotypical emotion classification task, and demonstrate the benefits of using a probabilistic generative model in exploring the roles of latent dimensions through the generative process.
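A minimal sketch of the conditioning idea (assumed architecture, not the thesis model): the neutral face enters both the encoder and the decoder, so the latent code is pushed to carry expression rather than identity or appearance.

```python
import torch
import torch.nn as nn

class ExpressionVAE(nn.Module):
    def __init__(self, d_img, d_z=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(2 * d_img, 256), nn.ReLU())
        self.mu = nn.Linear(256, d_z)
        self.logvar = nn.Linear(256, d_z)
        self.dec = nn.Sequential(nn.Linear(d_img + d_z, 256), nn.ReLU(),
                                 nn.Linear(256, d_img))

    def forward(self, x_expr, x_neutral):
        h = self.enc(torch.cat([x_expr, x_neutral], dim=1))  # neutral as extra input
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        x_rec = self.dec(torch.cat([x_neutral, z], dim=1))   # decode given neutral
        return x_rec, mu, logvar
```

The perception-consistent prior described above would replace the standard Gaussian prior implicitly assumed by this sketch.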
10

Redundant and Irrelevant Attribute Elimination using Autoencoders

Granskog, Tim January 2017 (has links)
Real-world data can often be high-dimensional and contain redundant or irrelevant attributes. High-dimensional data are problematic for machine learning: the high dimensionality causes learning to take more time and, unless the dataset is large enough to provide an ample number of samples for each class, accuracy will suffer. Redundant and irrelevant attributes give the data a higher dimensionality than necessary and obscure the important attributes. Because of this, it is of interest to be able to reduce the dimensionality of the data whilst preserving the important attributes. Several techniques have been presented in the field of computer science to reduce the dimensionality of data. One of these is the autoencoder, an unsupervised neural network which uses its input as the target output; by limiting the number of neurons in the hidden layer, the autoencoder is forced to learn a lower-dimensional representation of the data. This study focuses on using the autoencoder to reduce the dimensionality, and eliminate irrelevant or redundant attributes, of four datasets from different domains. The results show that the autoencoder can eliminate redundant attributes that are a linear combination of the other attributes, and provide a better low-dimensional representation of the data than the unreduced data. However, on data gathered under a controlled and carefully managed situation, the autoencoder cannot always provide a better low-dimensional representation than the data with redundant attributes. Lastly, the results show that the autoencoder cannot eliminate irrelevant attributes which have no correlation with the class or the other attributes.
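A minimal bottleneck autoencoder of the kind studied here, with illustrative sizes: redundant attributes that are linear combinations of others can be reconstructed from the narrower hidden layer, which is what makes the reduced representation possible.

```python
import torch
import torch.nn as nn

d_in, d_bottleneck = 20, 8  # compress 20 attributes to 8; sizes are illustrative

encoder = nn.Sequential(nn.Linear(d_in, 16), nn.ReLU(), nn.Linear(16, d_bottleneck))
decoder = nn.Sequential(nn.Linear(d_bottleneck, 16), nn.ReLU(), nn.Linear(16, d_in))
params = list(encoder.parameters()) + list(decoder.parameters())
opt = torch.optim.Adam(params, lr=1e-3)

def train_step(x):
    """One reconstruction step; after training, encoder(x) is the
    lower-dimensional representation of the data."""
    opt.zero_grad()
    loss = nn.functional.mse_loss(decoder(encoder(x)), x)
    loss.backward()
    opt.step()
    return loss.item()
```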
