1

Machine Learning Driven Model Inversion Methodology To Detect Reniform Nematodes In Cotton

Palacharla, Pavan Kumar 09 December 2011 (has links)
Rotylenchulus reniformis is a nematode species that affects the cotton crop and is quickly spreading throughout the southeastern United States. Effective use of nematicides at a variable rate is the only economical countermeasure. It requires knowledge of the intra-field variation in nematode population, which in turn depends on collecting soil samples from the field and analyzing them in the laboratory. This process is economically prohibitive. Hence, estimating nematode infestation of the cotton crop using remote sensing and machine learning techniques, which are cost- and time-effective, is the motivation for this study. In the current research, the concept of multi-temporal remote sensing has been implemented in order to design a robust and generalized nematode detection regression model. Finally, a user-friendly web service is created that gives trustworthy results for the given input data, thereby helping to reduce nematode infestation in the crop and the expense of nematicides.
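The variable-rate approach above hinges on a regression model mapping multi-temporal spectral features to nematode counts. A minimal sketch of that idea, using synthetic NDVI-style features and closed-form ridge regression (all data, coefficients, and the choice of ridge are illustrative assumptions, not the thesis's actual model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each soil-sample site is described by a vegetation
# index extracted from several image acquisition dates (multi-temporal).
n_sites, n_dates = 120, 4
X = rng.uniform(0.2, 0.9, size=(n_sites, n_dates))       # e.g. NDVI per date
true_w = np.array([-40.0, -25.0, -10.0, -5.0])           # infestation lowers NDVI
y = 60.0 + X @ true_w + rng.normal(0.0, 2.0, n_sites)    # synthetic counts

# Ridge regression fitted with the closed-form normal equations.
Xb = np.hstack([np.ones((n_sites, 1)), X])               # add intercept column
lam = 1e-2
w = np.linalg.solve(Xb.T @ Xb + lam * np.eye(Xb.shape[1]), Xb.T @ y)

pred = Xb @ w
rmse = np.sqrt(np.mean((pred - y) ** 2))
```

A model of this shape, once trained on laboratory-verified samples, is what a web service can evaluate cheaply for new field imagery.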
2

Modified Kernel Principal Component Analysis and Autoencoder Approaches to Unsupervised Anomaly Detection

Merrill, Nicholas Swede 01 June 2020 (has links)
Unsupervised anomaly detection is the task of identifying examples that differ from the normal or expected pattern without the use of labeled training data. Our research addresses shortcomings in two existing anomaly detection algorithms, Kernel Principal Component Analysis (KPCA) and Autoencoders (AE), and proposes novel solutions to improve both of their performances in the unsupervised setting. Anomaly detection has several useful applications, such as intrusion detection, fault monitoring, and vision processing. More specifically, anomaly detection can be used in autonomous driving to identify obscured signage or to monitor intersections. Kernel techniques are desirable because of their ability to model highly non-linear patterns, but they are limited in the unsupervised setting due to their sensitivity to parameter choices and the absence of a validation step. Additionally, conventional KPCA suffers from quadratic time and memory complexity in the construction of the Gram matrix and cubic time complexity in its eigendecomposition. The problem of tuning the Gaussian kernel parameter, sigma, is solved using mini-batch stochastic gradient descent (SGD) optimization of a loss function that maximizes the dispersion of the kernel matrix entries. Secondly, the computational time is greatly reduced, while still maintaining high accuracy, by using an ensemble of small "skeleton" models and combining their scores. The performance of traditional machine learning approaches to anomaly detection plateaus as the volume and complexity of data increase. Deep anomaly detection (DAD) involves the application of multilayer artificial neural networks to identify anomalous examples. AEs are fundamental to most DAD approaches. Conventional AEs rely on the assumption that a trained network will learn to reconstruct normal examples better than anomalous ones.
In practice, however, given sufficient capacity and training time, an AE will generalize to reconstruct even very rare examples. Three methods are introduced to more reliably train AEs for unsupervised anomaly detection: Cumulative Error Scoring (CES) leverages the entire history of training errors to minimize the importance of early stopping; Percentile Loss (PL) training aims to prevent anomalous examples from contributing to parameter updates; and, lastly, early stopping via knee detection aims to limit the risk of overtraining. Ultimately, the two modified methods proposed in this research, Unsupervised Ensemble KPCA (UE-KPCA) and the modified training and scoring AE (MTS-AE), demonstrate improved detection performance and reliability compared to many baseline algorithms across a number of benchmark datasets. / Master of Science / Anomaly detection is the task of identifying examples that differ from the normal or expected pattern. The challenge of unsupervised anomaly detection is distinguishing normal and anomalous data without the use of labeled examples to demonstrate their differences. This thesis addresses shortcomings in two anomaly detection algorithms, Kernel Principal Component Analysis (KPCA) and Autoencoders (AE), and proposes new solutions to apply them in the unsupervised setting. Ultimately, the two modified methods, Unsupervised Ensemble KPCA (UE-KPCA) and the Modified Training and Scoring AE (MTS-AE), demonstrate improved detection performance and reliability compared to many baseline algorithms across a number of benchmark datasets.
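The kernel-entry dispersion criterion for choosing sigma can be illustrated compactly. The sketch below replaces the thesis's mini-batch SGD with a coarse log-spaced search over the same objective, the variance of the Gaussian Gram-matrix entries, on synthetic data; the data, grid, and full-batch evaluation are simplifying assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(150, 5))          # stand-in for unlabeled training data

# Pairwise squared distances (full batch here; the thesis uses mini-batches).
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)

def entry_dispersion(sigma):
    """Variance of the Gaussian Gram-matrix entries for a given sigma."""
    K = np.exp(-d2 / (2.0 * sigma ** 2))
    return K.var()

# Degenerate sigmas collapse the Gram matrix (to ~identity or ~all-ones),
# driving the entry variance toward zero, so maximizing dispersion picks an
# intermediate bandwidth without any labels or validation set.
sigmas = np.logspace(-2, 2, 50)
best_sigma = max(sigmas, key=entry_dispersion)
```

The same objective admits an analytic gradient in sigma, which is what makes the thesis's mini-batch SGD formulation possible at scale.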
3

Feature Extraction using Dimensionality Reduction Techniques: Capturing the Human Perspective

Coleman, Ashley B. January 2015 (has links)
No description available.
4

Facial Expression Recognition by Using Class Mean Gabor Responses with Kernel Principal Component Analysis

Chung, Koon Yin C. 16 April 2010 (has links)
No description available.
5

Critical Analysis of Dimensionality Reduction Techniques and Statistical Microstructural Descriptors for Mesoscale Variability Quantification

Galbincea, Nicholas D. January 2017 (has links)
No description available.
6

Daily pattern recognition of dynamic origin-destination matrices using clustering and kernel principal component analysis

Dong, Zhiwu January 2021 (has links)
The Origin-Destination (OD) matrix plays an important role in traffic management and urban planning. However, OD estimation demands large-scale data collection, which in the past has been done mostly by surveys with numerous limitations. With the development of communication technology and artificial intelligence, the transportation industry faces new opportunities and challenges. Sensors bring big data characterized by the 4 Vs (Volume, Variety, Velocity, Value) to the transportation domain. This allows traffic practitioners to receive data covering large-scale areas and long time periods, even several years of data. At the same time, the introduction of artificial intelligence provides new opportunities and challenges in processing massive data. Advances in computer science have also brought revolutionary advancements to the field of transportation. All these new advances and technologies enable large-scale data collection that can be used for extracting and estimating dynamic OD matrices over small time intervals and long time periods.
Using Stockholm as the focus of the case study, this thesis estimates dynamic OD matrices covering data collected from the tolls located around Stockholm municipality. These dynamic OD matrices are used to analyze the day-to-day characteristics of the traffic flow that goes through Stockholm. In other words, the typical day-types of traffic through the city center are identified and studied in this work. This study analyzes the data collected by 58 sensors around Stockholm, containing nearly 100 million vehicle observations (12 GB).
Furthermore, we consider and study the effects of dimensionality reduction on revealing the most common day-types by clustering. The considered dimensionality reduction techniques are Principal Component Analysis (PCA) and its variant Kernel PCA (KPCA). The results reveal that dimensionality reduction significantly reduces computational costs while still yielding reasonable day-types.
Day-type clusters reveal both expected and unexpected patterns and thus have potential in traffic management, urban planning, and the design of congestion-tax strategies.
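The day-type pipeline described above (flatten each day's OD matrix, reduce its dimensionality, then cluster) can be sketched end to end. The toy data below (two synthetic day-types as 8x8 OD matrices, plain PCA via SVD, and a minimal k-means) is an illustrative assumption; the thesis works on real toll data and also applies KPCA:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical stand-in for the toll data: one flattened 8x8 OD matrix per
# day, with weekday and weekend days drawn from different demand levels.
weekday = rng.poisson(40, size=(50, 64)).astype(float)
weekend = rng.poisson(15, size=(20, 64)).astype(float)
X = np.vstack([weekday, weekend])                  # 70 days x 64 OD cells

# Dimensionality reduction: plain PCA via SVD of the centered data.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:3].T                                  # 3-D embedding per day

# Minimal k-means (k = 2) on the embedding to recover the two day-types;
# centers start at the extremes of the first principal component.
k = 2
centers = Z[[np.argmin(Z[:, 0]), np.argmax(Z[:, 0])]]
for _ in range(20):
    labels = np.argmin(((Z[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    centers = np.array([Z[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
```

Clustering in the 3-dimensional embedding rather than the 64-dimensional raw space is what delivers the computational savings the abstract reports; swapping the SVD step for a kernelized projection gives the KPCA variant.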
7

Linear and Nonlinear Dimensionality-Reduction-Based Surrogate Models for Real-Time Design Space Exploration of Structural Responses

Bird, Gregory David 03 August 2020 (has links)
Design space exploration (DSE) is a tool used to evaluate and compare designs as part of the design selection process. While evaluating every possible design in a design space is infeasible, understanding design behavior and response throughout the design space may be accomplished by evaluating a subset of designs and interpolating between them using surrogate models. Surrogate modeling is a technique that uses low-cost calculations to approximate the outcome of more computationally expensive calculations or analyses, such as finite element analysis (FEA). While surrogates make quick predictions, accuracy is not guaranteed and must be considered. This research addressed the need to improve the accuracy of surrogate predictions in order to improve DSE of structural responses. This was accomplished by performing comparative analyses of linear and nonlinear dimensionality-reduction-based radial basis function (RBF) surrogate models for emulating various FEA nodal results. A total of four dimensionality reduction methods were investigated, namely principal component analysis (PCA), kernel principal component analysis (KPCA), isometric feature mapping (ISOMAP), and locally linear embedding (LLE). These methods were used in conjunction with surrogate modeling to predict nodal stresses and coordinates of a compressor blade. The research showed that using an ISOMAP-based dual-RBF surrogate model for predicting nodal stresses decreased the estimated mean error of the surrogate by 35.7% compared to PCA. Using nonlinear dimensionality-reduction-based surrogates did not reduce surrogate error for predicting nodal coordinates. A new metric, the manifold distance ratio (MDR), was introduced to measure the nonlinearity of the data manifolds. When applied to the stress and coordinate data, the stress space was found to be more nonlinear than the coordinate space for this application. 
The upfront training cost of the nonlinear dimensionality-reduction-based surrogates was larger than that of their linear counterparts but small enough to remain feasible. After training, all the dual-RBF surrogates were capable of making real-time predictions. This same process was repeated for a separate application involving the nodal displacements of mode shapes obtained from an FEA modal analysis. The modal assurance criterion (MAC) calculation was used to compare the predicted mode shapes, as well as their corresponding true mode shapes obtained from FEA, to a set of reference modes. The research showed that two nonlinear techniques, namely LLE and KPCA, resulted in lower surrogate error in the more complex design spaces. Using an RBF kernel, KPCA achieved the largest average reduction in error of 13.57%. The results also showed that surrogate error was greatly affected by mode shape reversal. Four different approaches to identifying reversed mode shapes were explored, all of which resulted in varying amounts of surrogate error. Together, the methods explored in this research were shown to decrease surrogate error when performing DSE of a turbomachine compressor blade. As surrogate accuracy increases, so does the ability to correctly make engineering decisions and judgements throughout the design process. Ultimately, this will help engineers design better turbomachines.
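The modal assurance criterion used above has a standard closed form, MAC(a, b) = |aᵀb|² / ((aᵀa)(bᵀb)). A small sketch (the vectors are synthetic stand-ins for FEA mode shapes, not the thesis's compressor-blade data):

```python
import numpy as np

def mac(phi_a, phi_b):
    """Modal Assurance Criterion: 1 for shapes equal up to scale (including
    sign reversal), near 0 for unrelated shapes."""
    return np.abs(phi_a @ phi_b) ** 2 / ((phi_a @ phi_a) * (phi_b @ phi_b))

rng = np.random.default_rng(3)
ref = rng.normal(size=200)               # hypothetical reference mode shape

same = mac(ref, 3.0 * ref)               # scaled copy of the reference
flipped = mac(ref, -ref)                 # reversed copy: MAC cannot tell
other = mac(ref, rng.normal(size=200))   # unrelated shape
```

Because of the absolute value, MAC is blind to sign reversal, which is consistent with the abstract's finding that reversed mode shapes must be identified by a separate mechanism before they corrupt surrogate training.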
