  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Bringing interpretability and visualization with artificial neural networks

Gritsenko, Andrey 01 August 2017 (has links)
Extreme Learning Machine (ELM) is a training algorithm for Single-Layer Feed-forward Neural Networks (SLFNs). ELM differs from other training algorithms in that it has an explicitly given solution, made possible by keeping the randomly initialized input weights fixed. In practice, ELMs achieve performance similar to that of other state-of-the-art training techniques while taking much less time to train a model; experiments show that ELM training can be up to five orders of magnitude faster than the standard error back-propagation algorithm. ELM is a relatively recent technique that has proved efficient in classic regression and classification tasks, including multi-class cases. This thesis presents extensions of ELMs to problems that are not typical for Artificial Neural Networks (ANNs). The first extension, described in the third chapter, allows ELMs to produce probabilistic outputs for multi-class classification problems. The standard way of solving such problems is based on a 'majority vote' over the classifier's raw outputs; this approach can raise issues when the penalty for misclassification differs across classes, in which case probabilistic outputs are more useful. Within the scope of this extension, two methods are proposed, along with an alternative way of interpreting probabilistic outputs. The ELM method also proves useful for non-linear dimensionality reduction and visualization based on repeated re-training and re-evaluation of the model. The fourth chapter introduces adaptations of ELM-based visualization for classification and regression tasks. A set of experiments shows that these adaptations provide better visualization results, which can then be used to perform classification or regression on previously unseen samples. Shape registration of 3D models with non-isometric distortion is an open problem in 3D computer graphics and computational geometry. The fifth chapter discusses a novel approach to this problem that introduces a similarity metric for spectral descriptors. In practice, the approach is implemented in two methods: the first uses a Siamese Neural Network to embed the original spectral descriptors into a lower-dimensional metric space in which the Euclidean distance provides a good measure of similarity, while the second uses Extreme Learning Machines to learn a similarity metric directly on the original spectral descriptors. A set of experiments demonstrates the consistency of the proposed approach for solving the deformable registration problem.
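Because ELM training reduces to a single regularized least-squares solve over a randomly projected hidden layer, the explicit solution mentioned above is easy to illustrate. The following is a minimal NumPy sketch of that idea, not the thesis's implementation; the function names, the sigmoid activation, and the ridge term are illustrative assumptions.

```python
# Minimal ELM sketch: random fixed input weights, closed-form output weights.
import numpy as np

def elm_fit(X, Y, n_hidden=100, reg=1e-3, seed=None):
    """Train a single-hidden-layer ELM; Y is a one-hot (or real-valued) target matrix."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random, fixed input weights
    b = rng.normal(size=n_hidden)                 # random, fixed biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))        # hidden-layer activations (sigmoid)
    # Ridge-regularized least squares gives the explicit solution for the output weights
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ Y)
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

# Toy usage: 3-class problem with one-hot targets
X = np.random.randn(200, 5)
y = np.random.randint(0, 3, size=200)
Y = np.eye(3)[y]
W, b, beta = elm_fit(X, Y, n_hidden=50, seed=0)
pred = elm_predict(X, W, b, beta).argmax(axis=1)
```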
2

Agrupamento de dados baseado em predições de modelos de regressão: desenvolvimentos e aplicações em sistemas de recomendação / Data clustering based on prediction regression models: developments and applications in recommender systems

Pereira, André Luiz Vizine 12 May 2016 (has links)
Recommender Systems (RS) are powerful and popular tools for e-commerce. To build their recommendations, RS make use of multiple data sources that capture the characteristics of items, users, and their transactions, and they take advantage of prediction models. Given the large amount of data involved in the predictions made by RS, it is unlikely that all predictions can be well represented by a single global model. Another important aspect is the problem known as cold start: despite recent advances in the RS area, it is still a relevant issue that deserves further attention. The problem arises from the lack of prior information about new users and new items. This thesis presents a hybrid recommendation approach that addresses the (pure) cold-start problem, in which no collaborative information (ratings) is available for new users. The approach is based on an existing algorithm, SCOAL (Simultaneous Co-Clustering and Learning). In its original version, based on multiple linear prediction models, the SCOAL algorithm has shown itself to be efficient and versatile, and it can be used in a wide range of classification and/or regression problems. Although SCOAL achieved impressive results with linear prediction models, there is still room for improvement with nonlinear models. From this perspective, this thesis presents a variant of SCOAL based on Extreme Learning Machines. Besides prediction accuracy, another important issue in the development of RS is system scalability. To this end, a parallel version of SCOAL based on OpenMP was developed, aimed at minimizing the computational cost involved in learning the prediction models. Experiments using real-world datasets show that all the proposed developments make the SCOAL algorithm even more attractive for a variety of practical applications.
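For readers unfamiliar with SCOAL, its core loop can be sketched compactly: alternate between fitting one prediction model per user-item co-cluster and reassigning users (and, symmetrically, items) to the co-cluster whose models predict their ratings best. The sketch below uses ridge regression on a small dense rating matrix and is an assumed simplification for illustration only, not the authors' code; the thesis variant replaces the per-block linear fits with ELMs, and the block fits, being independent, are what the OpenMP version parallelizes.

```python
import numpy as np

def ridge(F, y, reg=1e-3):
    d = F.shape[1]
    return np.linalg.solve(F.T @ F + reg * np.eye(d), F.T @ y)

def block_design(Xu, Xi, users, items):
    """Feature matrix for every (user, item) pair in a co-cluster block."""
    return np.hstack([np.repeat(Xu[users], len(items), axis=0),
                      np.tile(Xi[items], (len(users), 1))])

def scoal(R, Xu, Xi, k_rows=2, k_cols=2, iters=5, seed=0):
    rng = np.random.default_rng(seed)
    row_lab = rng.integers(0, k_rows, R.shape[0])
    col_lab = rng.integers(0, k_cols, R.shape[1])
    for _ in range(iters):
        # 1) Fit one model per co-cluster; these fits are independent of each other,
        #    which is what a parallel (e.g. OpenMP) implementation exploits.
        models = {}
        for a in range(k_rows):
            for b in range(k_cols):
                u, i = np.where(row_lab == a)[0], np.where(col_lab == b)[0]
                if len(u) and len(i):
                    models[a, b] = ridge(block_design(Xu, Xi, u, i),
                                         R[np.ix_(u, i)].ravel())
        # 2) Reassign each user to the row-cluster whose models fit its ratings best.
        for u in range(R.shape[0]):
            errs = np.zeros(k_rows)
            for a in range(k_rows):
                for b in range(k_cols):
                    if (a, b) not in models:
                        continue
                    i = np.where(col_lab == b)[0]
                    F = block_design(Xu, Xi, np.array([u]), i)
                    errs[a] += np.sum((F @ models[a, b] - R[u, i]) ** 2)
            row_lab[u] = int(np.argmin(errs))
        # (column reassignment is symmetric and omitted for brevity)
    return row_lab, col_lab

# Toy usage on a small dense rating matrix with random user/item features
R = np.random.rand(30, 20)
Xu, Xi = np.random.randn(30, 3), np.random.randn(20, 4)
row_lab, col_lab = scoal(R, Xu, Xi)
```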
3

Imbalanced Learning and Feature Extraction in Fraud Detection with Applications / Obalanserade Metoder och Attribut Aggregering för Upptäcka Bedrägeri, med Appliceringar

Jacobson, Martin January 2021 (has links)
This thesis deals with fraud detection in a real-world environment, with datasets coming from Svenska Handelsbanken. The goal was to investigate how well machine learning can classify fraudulent transactions and how new additional features affect classification. The models used were EFSVM, RUTSVM, CS-SVM, ELM, MLP, Decision Tree, Extra Trees, and Random Forests. The Matthews Correlation Coefficient (MCC) was used as the performance metric, as it has been shown to have only a small bias on imbalanced datasets. Each model can handle highly imbalanced datasets, which is common in fraud detection. The best results were achieved with Random Forest and Extra Trees (no significance tests were run, as the datasets were relatively small, so small differences in scores are not conclusive). The best scores were around 0.4 for the real-world datasets, though the score itself says little beyond indicating the degree of separability in the data. These scores were obtained when using aggregated features rather than the standard raw dataset. Recall was around 0.88-0.93, with an increase in precision of 34.4%-67%, resulting in a large decrease in false positives. Evaluation results differed greatly from the test runs, showing either a substantial increase or decrease. Two possible explanations are discussed: a large distribution change in the evaluation set, and the doubled sample size (a 100% increase) for evaluation, which may have made the tests unrepresentative of true performance. Feature aggregation was a central topic of this thesis, with the main focus on behaviour features that describe the patterns and habits of customers. These fell into five categories: the sender's fraud history, the sender's transaction history, the sender's transaction-time history, the sender's history with the receiver, and the receiver's history. Of these, the largest performance increase came from the first category, which gave the top score; the other feature sets did not show as much potential, with most not improving the results. Further studies are needed before discarding these features, to be certain they do not improve performance. Together with feature aggregation, a tool for visualizing high-dimensional data (t-SNE) was used to great success, giving an early indication of what newly added features would bring to classification. For the best dataset, a new sub-cluster of transactions could be seen, suggesting that classification scores could improve, which they did. Feature selection and PCA reduction techniques were also studied; PCA showed good results and increased performance, while feature selection showed no conclusive improvements. Over- and under-sampling were used and neither improved the scores, though under-sampling could maintain the results, which is interesting when the dataset grows.
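As a rough illustration of the evaluation setup described above, the sketch below scores a class-weighted Random Forest with the Matthews Correlation Coefficient, recall, and precision on a synthetic, heavily imbalanced dataset. The data, model parameters, and the use of scikit-learn are placeholders, not the thesis's Handelsbanken setup.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import matthews_corrcoef, precision_score, recall_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a fraud dataset: ~1% positive ("fraud") class
X, y = make_classification(n_samples=20000, n_features=20, weights=[0.99, 0.01],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=200, class_weight="balanced", random_state=0)
clf.fit(X_tr, y_tr)
pred = clf.predict(X_te)

print("MCC:      ", matthews_corrcoef(y_te, pred))
print("Recall:   ", recall_score(y_te, pred))
print("Precision:", precision_score(y_te, pred))
```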
4

Aprendizado semi-supervisionado para o tratamento de incerteza na rotulação de dados de química medicinal / Semi supervised learning for uncertainty on medicinal chemistry labelling

Souza, João Carlos Silva de 09 March 2017 (has links)
In the last 30 years, the field of machine learning has developed in a way comparable to physics in the early twentieth century. This breakthrough has made it possible to solve real-world problems that previously could not be solved by machines, because purely statistical models had difficulty fitting the training data satisfactorily. Among these advances is the use of machine learning techniques in Medicinal Chemistry, involving methods for analysing, representing, and predicting molecular information through computational resources. Data used in the biological context have particular characteristics that can influence the outcome of their analysis, including the complexity of molecular information, the imbalance of the classes involved, and the existence of incomplete or uncertainly labeled data. If not properly treated, such adversities may harm the process of identifying candidate compounds for new drugs. In this work, a semi-supervised machine learning technique is used to reduce the impact of uncertainty in data labeling by estimating more reliable labels for the chemical compounds in the training set. To mitigate the effects of class imbalance, a cost-sensitive approach is incorporated into the label estimation process to avoid bias in favor of the majority class. After addressing the labeling uncertainty problem, classifiers based on Extreme Learning Machines are constructed, aiming for good approximation capability with a reduced processing time compared to other commonly applied classification approaches. Finally, the performance of the constructed classifiers is evaluated by analyzing the results obtained, comparing the scenario with the original data against scenarios using the new labels obtained by the semi-supervised estimation process.
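One way to picture the cost-sensitive label re-estimation step is as an iterative self-training loop: fit a class-weighted classifier on the current labels and overwrite labels that the model contradicts with high confidence. The sketch below is an assumed simplification of that idea; the classifier choice, confidence threshold, and stopping rule are illustrative, not the procedure developed in the thesis.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def reestimate_labels(X, y_noisy, n_rounds=5, threshold=0.9):
    """X: feature matrix, y_noisy: NumPy array of possibly uncertain labels."""
    y = y_noisy.copy()
    for _ in range(n_rounds):
        # class_weight="balanced" plays the cost-sensitive role, protecting the minority class
        clf = LogisticRegression(max_iter=1000, class_weight="balanced").fit(X, y)
        proba = clf.predict_proba(X)
        pred = clf.classes_[proba.argmax(axis=1)]
        confident = proba.max(axis=1) >= threshold
        flip = confident & (pred != y)
        if not flip.any():
            break
        y[flip] = pred[flip]      # trust the model over the uncertain original label
    return y
```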
5

Interpretable machine learning for additive manufacturing

Raquel De Souza Borges Ferreira (6386963) 10 June 2019 (has links)
This dissertation addresses two significant issues in the effective application of machine learning algorithms and models for the physical and engineering sciences. The first is the broad challenge of automated modeling of data across different processes in a physical system. The second is the dilemma of obtaining insightful interpretations of the relationships between the inputs and outcome of a system as inferred from complex, black-box machine learning models.

Automated Geometric Shape Deviation Modeling for Additive Manufacturing Systems

Additive manufacturing (AM) systems possess an intrinsic capability for one-of-a-kind manufacturing of a vast variety of shapes across a wide spectrum of processes. One major issue in AM systems is geometric accuracy control for the inevitable shape deviations that arise in AM processes. Current effective approaches for shape deviation control in AM involve the specification of statistical or machine learning deviation models for additively manufactured products. However, this task is challenging due to constraints on the number of test shapes that can be manufactured in practice and limitations on the user effort that can be devoted to learning deviation models across different shape classes and processes in an AM system. We develop an automated Bayesian neural network methodology for comprehensive shape deviation modeling in an AM system. A fundamental innovation in this machine learning method is our new and connectable neural network structures that facilitate the transfer of prior knowledge and models on deviations across different shape classes and AM processes. Several case studies on in-plane and out-of-plane deviations, regular and free-form shapes, and different settings of lurking variables serve to validate the power and broad scope of our methodology, and its potential to advance high-quality manufacturing in an AM system.

Interpretable Machine Learning

Machine learning algorithms and models constitute the dominant set of predictive methods for a wide range of complex, real-world processes. However, interpreting what such methods effectively infer from data is difficult in general, because their typically black-box nature offers limited ability to directly yield insights on the underlying relationships between inputs and the outcome of a process. We develop methodologies based on new predictive comparison estimands that effectively enable one to "mine" machine learning models, in the sense of (a) interpreting their inferred associations between inputs and/or functional forms of inputs and the outcome, (b) identifying the inputs they effectively consider relevant, and (c) interpreting the inferred conditional and two-way associations of the inputs with the outcome. We establish Fisher-consistent estimators, and their corresponding standard errors, for our new estimands under a condition on the inputs' distributions. The significance of our predictive comparison methodology is demonstrated with a wide range of simulation and case studies involving Bayesian additive regression trees, neural networks, and support vector machines. Our extended study of interpretable machine learning for AM systems demonstrates how our method can contribute to smarter advanced manufacturing systems, especially as current machine learning methods for AM are lacking in their ability to yield meaningful engineering knowledge on AM processes.
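As a simple illustration of the kind of quantity a predictive comparison captures, the sketch below computes the average change in a fitted black-box model's prediction when one input is shifted by a fixed amount, averaged over the observed data. This is a generic, assumed formulation for intuition only; it is not the estimands, Fisher-consistent estimators, or standard errors developed in the dissertation.

```python
import numpy as np

def average_predictive_comparison(predict, X, j, delta=1.0):
    """Mean change in predict(X) when column j is shifted by delta, other inputs held fixed."""
    X_shift = X.copy()
    X_shift[:, j] = X_shift[:, j] + delta
    return np.mean(predict(X_shift) - predict(X))

# Usage with any fitted regressor exposing .predict (e.g. a neural network surrogate):
# apc = average_predictive_comparison(model.predict, X_test, j=3)
```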
6

Road-traffic accident prediction model : Predicting the Number of Casualties

Andeta, Jemal Ahmed January 2021 (has links)
Efficient and effective road-traffic prediction and management techniques are crucial in intelligent transportation systems. They can positively influence road development, safety enhancement, regulation formulation, and route planning, helping to save lives before road traffic accidents happen. This thesis considers road safety by predicting the number of casualties an accident would cause, using multiple traffic-accident attributes. Such predictions help individual drivers and traffic authorities adjust and control the factors they contribute to an accident before it occurs. Three candidate algorithms with different regression fitting patterns are proposed and evaluated: gradient boosting machines (GBoost) representing the bagging pattern, linear support vector regression (LinearSVR) the linear pattern, and extreme learning machines (ELM) the non-linear pattern. RMSE and MAE are used as performance evaluation metrics. GBoost achieved better performance than the other two, with a low error rate and the narrowest 95% prediction interval. A SHAP (SHapley Additive exPlanations) interpretation technique is applied to interpret each model at the global level using SHAP's beeswarm plots. Finally, suggestions for future improvements are presented regarding the dataset and hyperparameter tuning.
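A hedged sketch of such a comparison, using scikit-learn stand-ins on synthetic data, is shown below: a gradient boosting regressor and a linear SVR are scored with RMSE and MAE, and a SHAP summary (beeswarm-style) plot is produced for the tree model. The dataset, parameters, and the omission of an ELM baseline (not available in scikit-learn) are assumptions for illustration, not the thesis's setup.

```python
import numpy as np
import shap                                        # assumes the shap package is installed
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.svm import LinearSVR
from sklearn.metrics import mean_absolute_error, mean_squared_error
from sklearn.model_selection import train_test_split

# Synthetic placeholder for the accident dataset
X, y = make_regression(n_samples=2000, n_features=10, noise=5.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {"GBoost": GradientBoostingRegressor(random_state=0),
          "LinearSVR": LinearSVR(max_iter=10000)}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    rmse = np.sqrt(mean_squared_error(y_te, pred))
    mae = mean_absolute_error(y_te, pred)
    print(f"{name}: RMSE={rmse:.2f}  MAE={mae:.2f}")

# Global interpretation of the tree model with a SHAP beeswarm-style summary plot
explainer = shap.TreeExplainer(models["GBoost"])
shap.summary_plot(explainer.shap_values(X_te), X_te)
```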
7

Machine Learning for Spacecraft Time-Series Anomaly Detection and Plant Phenotyping

Sriram Baireddy (17428602) 01 December 2023 (has links)
Detecting anomalies in spacecraft time-series data is a high priority, especially considering the harshness of the spacecraft operating environment. These anomalies often function as precursors for system failure. Traditionally, the time-series data channels are monitored manually by domain experts, which is time-consuming, and there are thousands of channels to monitor. Machine learning methods have proven useful for automatic anomaly detection, but a unique model must be trained from scratch for each time-series. This thesis proposes three approaches for reducing training costs. The first is a transfer learning approach that fine-tunes a general pre-trained model to reduce training time and the number of unique models required for a given spacecraft. The second and third approaches both use online learning to reduce the amount of training data and time needed to identify anomalies: the second leverages an ensemble of extreme learning machines, while the third uses deep learning models. All three approaches are shown to achieve reasonable anomaly detection performance with reduced training costs.

Measuring the phenotypes, or observable traits, of a plant enables plant scientists to understand the interaction between the growing environment and the genetic characteristics of a plant. Plant phenotyping is typically done manually and often involves destructive sampling, making the entire process labor-intensive and difficult to replicate. In this thesis, we use image processing to characterize two different disease progressions. Tar spot disease can be identified visually, as it induces small black circular spots on the leaf surface. We propose using a Mask R-CNN to detect tar spots from RGB images of leaves, thus enabling rapid non-destructive phenotyping of afflicted plants. The second disease, bacteria-induced wilting, is measured using a visual assessment that is often subjective. We design several metrics that can be extracted from RGB images and used to generate consistent wilting measurements with a random forest. Both approaches ensure faster, replicable results, enabling accurate, high-throughput analysis to draw conclusions about effective disease treatments and plant breeds.
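To make the online-learning idea for time-series channels concrete, the sketch below maintains a single ELM forecaster whose output weights are updated recursively (OS-ELM style) as new samples arrive, flagging points whose one-step prediction error is far above the running error scale. The thesis uses an ensemble of such models on spacecraft telemetry; this single-model version, its window length, and its threshold rule are illustrative assumptions.

```python
import numpy as np

class OnlineELMForecaster:
    def __init__(self, window=10, n_hidden=50, reg=1e-2, seed=0):
        rng = np.random.default_rng(seed)
        self.window = window
        self.W = rng.normal(size=(window, n_hidden))   # fixed random input weights
        self.b = rng.normal(size=n_hidden)
        self.beta = np.zeros(n_hidden)
        self.P = np.eye(n_hidden) / reg                # inverse covariance for recursive updates

    def _hidden(self, x):
        return np.tanh(x @ self.W + self.b)            # shape (1, n_hidden)

    def predict(self, x_window):
        return float((self._hidden(x_window[None, :]) @ self.beta)[0])

    def update(self, x_window, y):
        # Recursive least-squares (OS-ELM style) update of the output weights only
        h = self._hidden(x_window[None, :])
        Ph = self.P @ h.T
        k = Ph / (1.0 + h @ Ph)
        self.beta = self.beta + (k * (y - h @ self.beta)).ravel()
        self.P = self.P - k @ (h @ self.P)

def detect_anomalies(series, window=10, z_thresh=4.0):
    """series: 1-D NumPy array of one telemetry channel; returns per-step anomaly flags."""
    model, errors, flags = OnlineELMForecaster(window), [], []
    for t in range(window, len(series)):
        x, y = series[t - window:t], series[t]
        err = abs(model.predict(x) - y)
        errors.append(err)
        # flag if the error is far above the running error scale seen so far
        scale = np.median(errors) + 1e-8
        flags.append(err > z_thresh * scale)
        model.update(x, y)
    return np.array(flags)
```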
