1

Impact identification in a three-dimensional structure using autoencoders and a linear approximation of the maximum-entropy principle

Espinoza Quitral, Cony de los Ángeles January 2018 (has links)
Thesis submitted for the degree of Mechanical Civil Engineer / In general, systems with coatings exposed to impacts are unable to detect problems accurately and in real time. The motivation of this work is to advance toward designing structures that recognize the disturbances they receive from their environment. One way to monitor the integrity of these systems is to develop impact-detection algorithms based on the system’s vibration response to impact-type disturbances. In this context, the use of autoencoders (AE) is proposed. AEs are a type of neural network that extracts the most significant information from multivariate data; with this, the structure can learn from the impacts it receives in order to recognize future impacts efficiently. The objective of this work is to develop a system that identifies the location and magnitude of impacts received by a structure. A metallic cylindrical structure is chosen, studied in two configurations: one with the cylinder standing vertically on a smooth surface, and another with the structure suspended by a cord running along the length of the cylinder. An AE is trained to extract the latent space of the measured data. A supervised learning algorithm based on the maximum-entropy principle (LME) is then trained to recognize impact signals and associate them with those from the training phase, yielding an estimate of the location and magnitude of the impacts received by the structure. For both stages, the variables must be selected so as to obtain the most efficient algorithm and deliver results with the smallest possible associated error. The AE performs well if only its ability to reconstruct the signal is considered, but an analysis of the latent-space structure shows that, while it represents the shape of the data well, its precision must improve. The behaviour of the LME stage as a function of the number of neighbours used is consistent with the interference of the boundary conditions of the problem and with how the zones of contact with external elements affect the final results. Besides the contact surface with the base, another element that increases the prediction error is the weld seam running along the height of the cylinder. Finally, the method developed generally fulfils its purpose of identifying the location and magnitude of impacts applied to the structure, although its precision still needs work.
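A minimal sketch of the two-stage pipeline this abstract describes, assuming vibration responses arrive as fixed-length vectors. The LME stage is approximated here by a softmax weighting over latent-space distances; the thesis's actual linear maximum-entropy formulation solves for shape functions under entropy constraints, and the layer sizes, latent dimension, and `beta` parameter are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Stage 1: autoencoder that compresses a vibration response into a latent code.
# Layer sizes are illustrative assumptions, not the thesis architecture.
class Autoencoder(nn.Module):
    def __init__(self, n_in=1024, n_latent=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, 256), nn.ReLU(),
                                     nn.Linear(256, n_latent))
        self.decoder = nn.Sequential(nn.Linear(n_latent, 256), nn.ReLU(),
                                     nn.Linear(256, n_in))

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

def lme_estimate(z_query, z_train, y_train, beta=4.0):
    """Stage 2 (simplified): weight known training impacts by latent-space
    proximity. A softmax over negative squared distances stands in for the
    thesis's linear maximum-entropy (LME) shape functions."""
    d2 = ((z_train - z_query) ** 2).sum(dim=1)   # squared latent distances
    w = torch.softmax(-beta * d2, dim=0)         # max-entropy-style weights
    return w @ y_train                           # weighted (x, y, magnitude)

# Usage sketch with random stand-in data.
ae = Autoencoder()
x_train = torch.randn(200, 1024)    # measured vibration responses
y_train = torch.randn(200, 3)       # known (x, y, magnitude) per training impact
recon, z_train = ae(x_train)
loss = nn.functional.mse_loss(recon, x_train)   # reconstruction objective
_, z_query = ae(torch.randn(1, 1024))
estimate = lme_estimate(z_query[0], z_train.detach(), y_train)
```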
2

Constellation Design for Multi-user Communications with Deep Learning

Sun, Yi-Lin January 2019 (has links)
In its simplest form, a communication system consists of a transmitter and a receiver. The transmitter transforms a one-hot message vector into a transmitted signal, and in general it imposes constraints on that signal. The channel is defined by a conditional probability distribution. Upon receiving the noisy transmitted signal, the receiver applies a transformation to produce an estimate of the one-hot message vector. From a deep learning perspective, this simplest communication system can be regarded as a special case of an autoencoder. In our case, the autoencoder is used to learn representations of the one-hot vector that are robust to the noisy channel and can be recovered at the receiver with the smallest probability of error. Our task is to improve on such autoencoder systems, and we propose different schemes for different cases: a method based on optimizing the softmax function, and an L1/2 regularization term introduced into the MSE loss function, for the SISO and MIMO cases respectively. Simulations show that both the optimized softmax method and the L1/2-regularized loss function perform better than the original neural network framework. / Thesis / Master of Applied Science (MASc)
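The end-to-end system this abstract describes can be sketched as a standard channel autoencoder with an AWGN channel. The thesis's specific contributions (the optimized softmax and the L1/2-regularized MSE loss) are not reproduced here; the message-set size `M`, block length `n`, layer widths, and noise level are assumptions.

```python
import torch
import torch.nn as nn

M, n = 16, 7   # M possible messages, n channel uses (illustrative choice)

class ChannelAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.tx = nn.Sequential(nn.Linear(M, 64), nn.ReLU(), nn.Linear(64, n))
        self.rx = nn.Sequential(nn.Linear(n, 64), nn.ReLU(), nn.Linear(64, M))

    def forward(self, one_hot, noise_std=0.1):
        x = self.tx(one_hot)
        # Constraint on the transmitted signal: fixed average power.
        x = x / x.norm(dim=1, keepdim=True) * n ** 0.5
        y = x + noise_std * torch.randn_like(x)   # AWGN channel
        return self.rx(y)                          # logits over the M messages

model = ChannelAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
msgs = torch.randint(0, M, (256,))
one_hot = nn.functional.one_hot(msgs, M).float()
logits = model(one_hot)
# Cross-entropy trains the receiver to recover the message with minimal error.
loss = nn.functional.cross_entropy(logits, msgs)
loss.backward()
opt.step()
```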
3

Using a denoising autoencoder for localization : Denoising cellular-based wireless localization data

Danielsson, Alexander, von Pfaler, Edvard January 2021 (has links)
A denoising autoencoder is a type of neural network which excels at removing noise from noisy input data. In this project, a denoising autoencoder is optimized for removing noise from mobile positioning data. The noisy mobile positioning data is generated specifically for this project; in order to generate realistic noise, a study of what real-world noise looks like was carried out. The project aims to answer the question: can a denoising autoencoder be used to remove noise from mobile positioning data? The results show that this method can effectively cut the noise in half. The report mainly analyzes how the number of hidden layers and their respective sizes affected performance. It was concluded that the optimal design for the autoencoder was a single-hidden-layer model with substantially more nodes in the hidden layer than in the input and output layers.
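A sketch of the single-hidden-layer design the report concludes is optimal, assuming a 2-D position fix as input; the input dimensionality, hidden width, noise magnitude, and training schedule are all assumptions rather than the project's actual configuration.

```python
import torch
import torch.nn as nn

# Single hidden layer wider than the input/output, per the report's conclusion.
# The 2-D input (a position fix) and the layer width are illustrative assumptions.
dae = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(dae.parameters(), lr=1e-3)

clean = torch.rand(1000, 2) * 100.0             # ground-truth positions (metres)
noisy = clean + torch.randn_like(clean) * 15.0  # synthetic positioning noise

for _ in range(200):                            # train to map noisy -> clean
    opt.zero_grad()
    loss = nn.functional.mse_loss(dae(noisy), clean)
    loss.backward()
    opt.step()

# Residual error after denoising; the report found roughly half the raw noise.
residual = (dae(noisy) - clean).norm(dim=1).mean()
```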
4

Genomic Data Augmentation with Variational Autoencoder

Thyrum, Emily 12 1900 (has links)
In order to treat cancer effectively, medical practitioners must predict pathological stages accurately, and machine learning methods can be employed to make such predictions. However, biomedical datasets, including genomic datasets, often have disproportionately more samples from people of European ancestry than from people of other ethnic or racial groups, which can cause machine learning methods to perform better on the European samples than on people of the under-represented groups. Data augmentation can be employed as a potential solution to artificially increase the number of samples from people of under-represented racial groups, and can in turn improve pathological stage predictions for future patients from such under-represented groups. Genomic data augmentation has been explored previously, for example using a Generative Adversarial Network, but to the best of our knowledge the use of the variational autoencoder for the purpose of genomic data augmentation remains largely unexplored. Here we utilize a geometry-based variational autoencoder that models the latent space as a Riemannian manifold, so that samples can be generated without the use of a prior distribution, and show that the variational autoencoder can indeed be used to reliably augment genomic data. Using TCGA prostate cancer genotype data, we show that our VAE-generated data can improve pathological stage predictions on a test set of European samples. Because we only had European samples that were labeled in terms of pathological stage, we were not able to validate the generated African samples in this way, but we nevertheless attempt to show that such samples may be realistic. / Computer and Information Science
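For orientation, a conventional Gaussian-prior VAE over genotype vectors might look like the sketch below. The thesis instead uses a geometry-aware (Riemannian) variant that generates samples without the prior, so treat this as a baseline illustration with assumed dimensions, not the authors' model.

```python
import torch
import torch.nn as nn

class VAE(nn.Module):
    # Conventional Gaussian-prior VAE; the thesis's Riemannian variant
    # replaces prior sampling with sampling on the learned latent manifold.
    def __init__(self, n_snps=500, n_latent=32):
        super().__init__()
        self.enc = nn.Linear(n_snps, 128)
        self.mu, self.logvar = nn.Linear(128, n_latent), nn.Linear(128, n_latent)
        self.dec = nn.Sequential(nn.Linear(n_latent, 128), nn.ReLU(),
                                 nn.Linear(128, n_snps))

    def forward(self, x):
        h = torch.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterize
        return self.dec(z), mu, logvar

def loss_fn(x, recon, mu, logvar):
    rec = nn.functional.mse_loss(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl

vae = VAE()
x = torch.randint(0, 3, (64, 500)).float()   # genotype dosages 0/1/2 (stand-in)
recon, mu, logvar = vae(x)
loss = loss_fn(x, recon, mu, logvar)
# Augmentation: decode draws from the prior to create synthetic samples.
synthetic = vae.dec(torch.randn(100, 32))
```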
5

Anomaly Detection for Insider Threats : Comparative Evaluation of LSTM Autoencoders, Isolation Forest, and Elasticsearch on Two Datasets

Fagerlund, Martin January 2024 (has links)
Insider threat detection is one of cybersecurity’s most challenging and costly problems. Anomalous behaviour can take multiple shapes, which places great demands on the anomaly detection system. Significant research has been conducted in the area, but the absence of real data in the existing experimental datasets leaves uncertainty about the proposed systems’ real-world performance. This thesis introduces a new insider threat dataset consisting exclusively of events from real users. The dataset is used to comparatively evaluate three anomaly detection techniques: the LSTM autoencoder, isolation forest, and Elasticsearch’s anomaly detection. The dataset’s properties inhibited any hyperparameter tuning of the LSTM autoencoders, since the data lacks sufficient positive instances; the architecture and hyperparameter settings are therefore taken from previously proposed research. The implemented anomaly detection models were also evaluated on the commonly used CERT v4.2 insider threat test dataset. The results show that the LSTM autoencoder provides better anomaly detection on the CERT v4.2 dataset in terms of accuracy, precision, recall, F1 score, and false positive rate compared to the other tested models. However, the investigated systems performed more similarly on the introduced dataset of real data: the LSTM autoencoder achieved the best recall, precision, and F1 score; the isolation forest showed an almost as good F1 score with a lower false positive rate; and Elasticsearch’s anomaly detection reported the best accuracy and false positive rate. Additionally, the LSTM autoencoder generated the best ROC curve and precision-recall curve. While Elasticsearch’s anomaly detection showed promising accuracy, it performed with low precision and had to be implemented explicitly to detect certain anomalies, which reduces its generalisability. In conclusion, the results show that the LSTM autoencoder is a feasible anomaly detection model for detecting abnormal behaviour in logs of real user behaviour, and that Elasticsearch’s anomaly detection can be used but is better suited to less complex data analysis tasks. Further, the thesis analyzes the introduced dataset and problematizes its application, and the closing chapter identifies domains where further research should be conducted.
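A sketch of the two machine-learning detectors compared in the thesis, with assumed feature dimensions, sequence length, and alarm threshold; Elasticsearch's anomaly detection is a managed service and is not sketched here.

```python
import torch
import torch.nn as nn
from sklearn.ensemble import IsolationForest

class LSTMAutoencoder(nn.Module):
    # Encode a user's event sequence; decode it back from the final hidden state.
    def __init__(self, n_feat=8, n_hidden=32):
        super().__init__()
        self.enc = nn.LSTM(n_feat, n_hidden, batch_first=True)
        self.dec = nn.LSTM(n_hidden, n_hidden, batch_first=True)
        self.out = nn.Linear(n_hidden, n_feat)

    def forward(self, x):
        _, (h, _) = self.enc(x)                        # summarize the sequence
        rep = h[-1].unsqueeze(1).repeat(1, x.size(1), 1)
        y, _ = self.dec(rep)
        return self.out(y)

seqs = torch.randn(500, 20, 8)    # 500 users x 20 events x 8 features (stand-in)
model = LSTMAutoencoder()
recon = model(seqs)
err = ((recon - seqs) ** 2).mean(dim=(1, 2))    # per-user reconstruction error
flags_lstm = err > err.mean() + 3 * err.std()   # alarm threshold is an assumption

# Isolation forest baseline on flattened sequences.
iso = IsolationForest(contamination=0.01, random_state=0)
flags_iso = iso.fit_predict(seqs.reshape(500, -1).numpy()) == -1
```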
6

Predicting Global Internet Instability Caused by Worms using Neural Networks

Marais, Elbert 16 November 2006 (has links)
Student Number : 9607275H - MSc dissertation - School of Electrical and Information Engineering - Faculty of Engineering and the Built Environment / Internet worms are capable of quickly propagating by exploiting vulnerabilities of hosts that have access to the Internet. Once a computer has been infected, the worms have access to sensitive information on the computer, and are able to corrupt or retransmit this information. This dissertation describes a method of predicting Internet instability due to the presence of a worm on the Internet, using data currently available from global Internet routers. The work is based on previous research which has indicated a link between an increase in the number of Border Gateway Protocol (BGP) routing messages and global Internet instability. The type of system used to provide the prediction is known as an autoencoder: a specialised type of neural network which is able to provide a degree of novelty for its inputs. The autoencoder is trained to recognise “normal” data, and therefore produces a high novelty output for inputs dissimilar to the normal data. The BGP Update routing messages sent between routers were used as the only inputs to the autoencoder. These inter-router messages provide route availability information and inform neighbouring routers of any route changes. The outputs from the network were shown to help provide an early-warning mechanism for the presence of a worm. An alternative method for detecting instability is a rule-based system, which generates alarms if the number of certain BGP routing messages exceeds a prespecified threshold. This project compared the autoencoder to a simple rule-based system; the results showed that the autoencoder provided a better prediction and was less complex for a network administrator to configure. Although the correlation between the number of BGP Updates and global Internet instability has been shown previously, this work presents the first known application of a neural network to predict instability using this correlation. A system based on this strategy has the potential to reduce the damage done by a worm’s propagation and payload, by providing an automated means of detection that is faster than a human.
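The novelty-detection idea lends itself to a short sketch: train an autoencoder only on windows of “normal” BGP Update counts, then score new windows by reconstruction error. The window length, network size, and alarm threshold below are assumptions, not the dissertation's configuration.

```python
import torch
import torch.nn as nn

# Train only on "normal" traffic so that worm-driven BGP Update surges
# reconstruct poorly and score as novel. Window size is an assumption.
window = 24                                   # hourly Update counts per sample
ae = nn.Sequential(nn.Linear(window, 8), nn.Tanh(), nn.Linear(8, window))
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)

normal = torch.rand(2000, window)             # normalized Update-count windows
for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(ae(normal), normal)
    loss.backward()
    opt.step()

def novelty(x):
    """High reconstruction error = input dissimilar to the trained 'normal' data."""
    with torch.no_grad():
        return ((ae(x) - x) ** 2).mean(dim=1)

# Alarm when a new window scores far above the baseline novelty level.
alarm = novelty(torch.rand(1, window) * 5.0) > novelty(normal).mean() * 3
```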
7

Towards Personalized Learning using Counterfactual Inference for Randomized Controlled Trials

Zhao, Siyuan 26 April 2018 (has links)
Personalized learning considers that the causal effects of a studied learning intervention may differ for individual students (e.g., maybe girls do better with video hints while boys do better with text hints). To evaluate a learning intervention inside ASSISTments, we run a randomized controlled trial (RCT) by randomly assigning students to either a control condition or a treatment condition. Making inferences about the causal effects of studied interventions is a central problem. Counterfactual inference answers “What if?” questions, such as “Would this particular student benefit more if the student were given the video hint instead of the text hint when the student cannot solve a problem?”. Counterfactual prediction provides a way to estimate individual treatment effects and helps us assign students to the learning intervention that leads to better learning. A variant of Michael Jordan’s “Residual Transfer Networks” was proposed for counterfactual inference. The model first uses feed-forward neural networks to learn a balancing representation of students by minimizing the distance between the distributions of the control and treated populations, and then adopts a residual block to estimate the individual treatment effect. Students in the RCT have usually completed a number of problems prior to participating in it, so each student has a sequence of actions (a performance sequence). We propose a pipeline that uses the performance sequence to improve the performance of counterfactual inference. Since deep learning has achieved great success in learning representations from raw logged data, student representations were learned by applying a sequence autoencoder to the performance sequences, and these representations were then incorporated into the model for counterfactual inference. Empirical results showed that the representations learned from the sequence autoencoder improved the performance of counterfactual inference.
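A minimal reading of the residual-transfer architecture described above: a shared representation, a control-outcome head, and a residual block whose output is the treatment effect. The balancing term is reduced here to a crude mean-difference penalty (the thesis minimizes a proper distribution distance), and all dimensions are assumptions.

```python
import torch
import torch.nn as nn

class ResidualCFNet(nn.Module):
    # Shared representation phi(x); a control head predicts y0; a residual
    # block predicts the treatment effect so that y1 = y0 + residual.
    def __init__(self, n_in=16, n_rep=32):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(n_in, n_rep), nn.ReLU())
        self.y0_head = nn.Linear(n_rep, 1)
        self.residual = nn.Sequential(nn.Linear(n_rep, n_rep), nn.ReLU(),
                                      nn.Linear(n_rep, 1))

    def forward(self, x, treated):
        rep = self.phi(x)
        y0 = self.y0_head(rep)
        y1 = y0 + self.residual(rep)          # residual = treatment effect
        return torch.where(treated, y1, y0), rep

x = torch.randn(128, 16)                      # student features
t = torch.rand(128, 1) > 0.5                  # RCT condition assignment
y = torch.randn(128, 1)                       # observed outcome
model = ResidualCFNet()
pred, rep = model(x, t)
factual = nn.functional.mse_loss(pred, y)
# Crude balancing penalty: align the mean representations of the two groups
# (the thesis minimizes a proper distance between the two distributions).
balance = (rep[t.squeeze()].mean(0) - rep[~t.squeeze()].mean(0)).pow(2).sum()
loss = factual + 0.1 * balance
```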
8

Variational Open Set Recognition

Buquicchio, Luke J. 08 May 2020 (has links)
In traditional classification problems, all classes in the test set are assumed to also occur in the training set, which is referred to as the closed-set assumption. In practice, however, new classes may occur in the test set, which reduces the performance of machine learning models trained under the closed-set assumption. Machine learning models should be able to accurately classify instances of classes known during training while concurrently recognizing instances of previously unseen classes (the open-set assumption). The open-set assumption is motivated by real-world applications of classifiers, wherein it is improbable that sufficient data can be collected a priori on all possible classes to reliably train for them. For example, motivated by the DARPA WASH project at WPI, a disease classifier trained on data collected prior to the outbreak of COVID-19 might erroneously diagnose patients with the flu rather than the novel coronavirus. State-of-the-art open-set methods based on Extreme Value Theory (EVT) fail to adequately model class distributions with unequal variances. We propose the Variational Open-Set Recognition (VOSR) model, which leverages all class-belongingness probabilities to reject unknown instances. To realize the VOSR model, we design a novel Multi-Modal Variational Autoencoder (MMVAE) that learns well-separated Gaussian mixture distributions with equal variances in its latent representation. During training, VOSR maps instances of known classes to high-probability regions of class-specific components. By enforcing a large distance between these latent components during training, VOSR can assume that unknown data lies in the low-probability space between components, and it uses a multivariate form of Extreme Value Theory to reject unknown instances. Our VOSR framework outperforms state-of-the-art open-set classification methods with a 15% F1 score increase on a variety of benchmark datasets.
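A much-simplified sketch of the latent-space rejection idea: fit one Gaussian component per known class and reject points that fall in the low-density region between components. The fixed density threshold stands in for VOSR's multivariate EVT rejection rule, and the 2-D latent codes are random stand-ins for a trained encoder's output.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Stand-ins for latent codes from a trained encoder and their known classes.
rng = np.random.default_rng(0)
z_train = np.concatenate([rng.normal(-4, 1, (100, 2)),
                          rng.normal(+4, 1, (100, 2))])
labels = np.array([0] * 100 + [1] * 100)

# Fit one Gaussian component per known class (an equal-variance mixture).
components = [multivariate_normal(z_train[labels == k].mean(axis=0),
                                  np.cov(z_train[labels == k].T))
              for k in (0, 1)]

def classify_open_set(z, threshold=1e-4):
    """Return the best-matching class, or -1 ('unknown') when the point lies
    in the low-probability space between components. The fixed density
    threshold is a stand-in for VOSR's EVT-based rejection rule."""
    dens = np.array([c.pdf(z) for c in components])
    return int(dens.argmax()) if dens.max() > threshold else -1

print(classify_open_set(np.array([-4.0, -4.0])))  # near class 0 -> 0
print(classify_open_set(np.array([0.0, 0.0])))    # between components -> -1
```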
9

Machine Architecture

Spett, Max Viktor January 2018 (has links)
Recent developments in AI are changing our world; AI already governs our digital life. In my thesis I take the position that AI involvement in the field of architecture is inevitable, and indeed already here. AI is neither something we can simply accept nor wholly ignore; rather, we should try to understand and work with it. These algorithms should not be seen as mere tools with predictable, repeatable outcomes: they are something more complex. I have explored the world of AI by teaching a machine to design diverse, typologically similar objects: residential doorways from Stockholm. By instructing the machine to read and recreate these objects, it has learned to design objects similar to them. While the machine does not know what it has designed, it has nevertheless reinterpreted the residential gate, offering a glimpse into the “mind” of AI, a world as unknown as it is omnipresent.
10

Perceptual facial expression representation

Mikheeva, Olga January 2017 (has links)
Facial expressions play an important role in areas such as human communication and medical state evaluation. For machine learning tasks in those areas, it would be beneficial to have a representation of facial expressions that corresponds to human similarity perception. In this work, a data-driven approach to representation learning of facial expressions is taken. The methodology is built upon Variational Autoencoders and eliminates appearance-related features from the latent space by using neutral facial expressions as additional inputs. In order to improve the quality of the learned representation, we modify the prior distribution of the latent variable to impose a structure on the latent space that is consistent with human perception of facial expressions. We conduct experiments on two datasets together with additionally collected similarity data, show that the human-like topology in the latent representation helps to improve performance on a stereotypical emotion classification task, and demonstrate the benefits of using a probabilistic generative model in exploring the roles of latent dimensions through the generative process.
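A sketch of the conditioning and structured-prior ideas from this abstract: the encoder sees the expressive face together with the same person's neutral face so that appearance can be cancelled out, and the KL term pulls each expression toward an expression-specific prior mean. Placing those means on a circle, the image resolution, and all layer sizes are assumptions for illustration, not the thesis's actual prior or architecture.

```python
import math
import torch
import torch.nn as nn

N_EXPR, D = 7, 2   # number of expression classes and latent size (assumptions)

# Structured prior: one mean per expression, laid out on a circle as a
# stand-in for a perception-consistent latent topology.
prior_mu = torch.stack([torch.tensor([math.cos(2 * math.pi * k / N_EXPR),
                                      math.sin(2 * math.pi * k / N_EXPR)])
                        for k in range(N_EXPR)]) * 3.0

class PerceptualVAE(nn.Module):
    def __init__(self, n_pix=64 * 64):
        super().__init__()
        # Expressive and neutral faces are concatenated so the encoder can
        # subtract out identity/appearance information.
        self.enc = nn.Sequential(nn.Linear(2 * n_pix, 256), nn.ReLU())
        self.mu, self.logvar = nn.Linear(256, D), nn.Linear(256, D)
        self.dec = nn.Sequential(nn.Linear(D + n_pix, 256), nn.ReLU(),
                                 nn.Linear(256, n_pix))

    def forward(self, face, neutral):
        h = self.enc(torch.cat([face, neutral], dim=1))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.dec(torch.cat([z, neutral], dim=1)), mu, logvar

def kl_to_structured_prior(mu, logvar, expr_label):
    """KL( q(z|x) || N(prior_mu[label], I) ): the class-dependent prior term."""
    target = prior_mu[expr_label]
    return 0.5 * torch.sum(logvar.exp() + (mu - target).pow(2) - 1 - logvar)

face, neutral = torch.rand(8, 4096), torch.rand(8, 4096)
labels = torch.randint(0, N_EXPR, (8,))
recon, mu, logvar = PerceptualVAE()(face, neutral)
loss = nn.functional.mse_loss(recon, face) \
       + 1e-3 * kl_to_structured_prior(mu, logvar, labels)
```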
