461 |
AUTOMATED MACHINE LEARNING BASED ANALYSIS OF INTRAVASCULAR OPTICAL COHERENCE TOMOGRAPHY IMAGES
Shalev, Ronny Y. 31 May 2016
No description available.
|
462 |
Prediction and Classification of Physical Properties by Near-Infrared Spectroscopy and Baseline Correction of Gas Chromatography Mass Spectrometry Data of Jet Fuels by Using Chemometric Algorithms
Xu, Zhanfeng. 26 July 2012
No description available.
|
463 |
A Semi-Supervised Support Vector Machine for a Recommender System: Applied to a real estate dataset
Méndez, José. January 2021
Recommender systems are widely used in e-commerce websites to improve the customer's buying experience. E-commerce has been expanding quickly in recent years, and its growth accelerated during the COVID-19 pandemic, when customers and retailers were asked to keep their distance and observe lockdowns. There is therefore an increasing demand for good recommendations that improve users' shopping experience. In this master's thesis, a recommender system for a real-estate website is built based on Support Vector Machines (SVM). The main characteristic of the model is that it is trained on a few labelled samples together with the remaining unlabelled samples, following a semi-supervised machine learning paradigm. The model is constructed step by step, from a simple SVM up to the semi-supervised Nested Cost-Sensitive Support Vector Machine (NCS-SVM). We then compare the model under four different kernel functions: Gaussian, second-degree polynomial, fourth-degree polynomial, and linear. We also compare a user with strict housing requirements against a user with vague requirements. We finish with a discussion focused principally on parameter tuning, and briefly on the model's downsides and ethical considerations.
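A minimal sketch of the kernel comparison described above, on synthetic stand-in data rather than the thesis's real-estate dataset:

```python
# Compare the four kernels named in the abstract with scikit-learn;
# the dataset and hyperparameters are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

candidates = [
    ("Gaussian (RBF)", SVC(kernel="rbf")),
    ("poly, degree 2", SVC(kernel="poly", degree=2)),
    ("poly, degree 4", SVC(kernel="poly", degree=4)),
    ("linear", SVC(kernel="linear")),
]
for name, clf in candidates:
    scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validation
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```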
|
464 |
The Influence of Artificial Intelligence on Education: Sentiment Analysis on YouTube Comments: What is people's sentiment on ChatGPT for educational purposes?
Rodríguez Roldán, Javier. January 2024
The use of artificial intelligence (AI), and especially ChatGPT, has increased exponentially in the past years, and AI-based tools are now used in several fields, including education. The literature on AI in education (AIEd), covering how it has been used and its potential uses, opportunities, and challenges, was reviewed, together with the literature on sentiment analysis on social media, to determine the best approach. Since education may face substantial changes due to this technology, assessing how people feel about this potential shift in paradigm is essential. Sentiment analysis was performed on YouTube comments on videos related to ChatGPT, the most popular AI tool for education among learners and educators. It was found that 62.1% of the sample had a positive sentiment regarding this technology for educational purposes, whereas 19.4% had a negative sentiment and 18.5% were neutral. To contribute to the literature on sentiment analysis of YouTube comments, the two most used and best-performing algorithms were applied to this task: Naive Bayes and Support Vector Machine. The results show that Naive Bayes reached 61.30% accuracy, whereas SVM reached 71.79%.
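A hedged sketch of the kind of comparison reported above, with invented placeholder comments rather than the study's YouTube data:

```python
# Naive Bayes vs. SVM for text sentiment on TF-IDF features; the tiny
# corpus and labels below are made-up stand-ins for the real comments.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

comments = [
    "ChatGPT explains concepts better than my textbook",
    "great study tool for exam preparation",
    "students will just cheat with this",
    "AI answers are often wrong and misleading",
]
labels = ["positive", "positive", "negative", "negative"]

for model in (MultinomialNB(), LinearSVC()):
    clf = make_pipeline(TfidfVectorizer(), model).fit(comments, labels)
    print(type(model).__name__, clf.predict(["a great tool for exam study"]))
```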
|
465 |
Deep Learning Empowered Unsupervised Contextual Information Extraction and its applications in Communication Systems
Gusain, Kunal. 16 January 2023
Master of Science / There has been an astronomical increase in data at the network edge due to the rapid development of 5G infrastructure and the proliferation of the Internet of Things (IoT). Properly analyzing this data is of paramount importance for improving both the network controller's decision-making capabilities and the user experience. However, transporting such a large amount of data from edge devices to the network controller requires large bandwidth and incurs increased latency, presenting a significant challenge for resource-constrained wireless networks. Information processing techniques can effectively address this problem by sending only pertinent, critical information to the network controller. Nevertheless, finding critical information in high-dimensional observations is not an easy task, especially when large amounts of background information are present. This thesis proposes to extract critical but low-dimensional information from high-dimensional observations using an information-theoretic deep learning framework, and focuses on two distinct problems where such extraction is imperative. In the first, we study feature extraction from video frames collected in a dynamic environment and showcase its effectiveness in a video game simulation experiment. In the second, we investigate the detection of anomaly signals in the spectrum by extracting and analyzing useful features from spectrograms. Using extensive simulation experiments based on a practical dataset, we conclude that the proposed approach is highly effective in detecting anomaly signals across a wide range of signal-to-noise ratios.
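The thesis's framework is information-theoretic and deep-learning based; as a deliberately simpler stand-in for the same idea (keep only a low-dimensional summary of each observation and flag spectrogram frames the summary cannot explain), a PCA reconstruction-error detector on synthetic data might look like this:

```python
# PCA as a toy substitute for the thesis's learned low-dimensional
# representation; "spectrogram" rows and anomalies are synthetic.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
normal = rng.normal(size=(500, 128))                    # background rows
anomaly = normal[:5] + 8 * rng.normal(size=(5, 128))    # injected anomalies

pca = PCA(n_components=8).fit(normal)  # 128-dim rows -> 8-dim summaries

def recon_error(X):
    # Frames poorly explained by the low-dimensional summary score high.
    return ((X - pca.inverse_transform(pca.transform(X))) ** 2).mean(axis=1)

threshold = np.quantile(recon_error(normal), 0.99)
print((recon_error(anomaly) > threshold).sum(), "of 5 anomalies flagged")
```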
|
466 |
Performance evaluation of two machine learning algorithms for classification in a production line: Comparing artificial neural network and support vector machine using a quasi-experiment
Jörlid, Olle; Sundbeck, Erik. January 2024
This thesis investigated the possibility of using machine learning algorithms to classify items in a queuing system in order to optimize a production line. The evaluated algorithms, Artificial Neural Network (ANN) and Support Vector Machine (SVM), were selected based on prior research. A quasi-experiment evaluated the two algorithms trained on the same data: a complex dataset of 47,212 sample rows with item features from a production setting. Both models performed better than the current system, with ANN reaching 97.5% and SVM 98% on all measurements. The models differed sharply in training time, almost 205 seconds for ANN versus 1.97 seconds for SVM, but ANN was about 20 times faster at classification. We conclude that ANN and SVM are feasible options for using Artificial Intelligence (AI) to classify items in industrial environments with similar scenarios.
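A rough sketch of that kind of timing comparison (synthetic stand-in data and default hyperparameters; the thesis's 47,212-row production dataset and tuned models are not reproduced here):

```python
# Same data, two models; measure wall-clock time to train and classify.
import time
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)

for name, clf in [("ANN", MLPClassifier(max_iter=300, random_state=0)),
                  ("SVM", SVC())]:
    t0 = time.perf_counter(); clf.fit(X, y)
    t_fit = time.perf_counter() - t0
    t0 = time.perf_counter(); clf.predict(X)
    t_pred = time.perf_counter() - t0
    print(f"{name}: train {t_fit:.2f}s, classify {t_pred:.2f}s")
```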
|
467 |
Detection of bullying with Machine Learning: Using Supervised Machine Learning and LLMs to classify bullying in text
Yousef, Seif-Alamir; Svensson, Ludvig. January 2024
In recent years, bullying has become a growing problem, particularly in academic settings. This degree project examines the use of supervised machine learning techniques to identify bullying in text data from school surveys provided by the Friends Foundation. It evaluates several traditional algorithms, Logistic Regression, Naive Bayes, SVM, and Convolutional Neural Networks (CNN), alongside a Retrieval-Augmented Generation (RAG) model using Llama 3. The primary goal is high recall on texts containing bullying while still considering precision, which is reflected in the use of the F3-score, an F-beta measure that weights recall nine times as heavily as precision. The SVM model emerged as the most effective of the traditional methods, achieving the highest F3-score of 0.83. Although the RAG model showed promising recall, it suffered from very low precision, resulting in a slightly lower F3-score of 0.79. The study also addresses challenges such as the small and imbalanced dataset, and emphasizes the importance of retaining stop words to preserve context in the text data. The findings highlight the potential of advanced machine learning models, given adequate resources and further refinement, to significantly assist in bullying detection.
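For reference, the F3-score is the F-beta measure with beta = 3; a minimal check with toy labels (not the survey data):

```python
# F_beta = (1 + beta^2) * P * R / (beta^2 * P + R); beta = 3 means
# recall counts nine times as much as precision.
from sklearn.metrics import fbeta_score

y_true = [1, 1, 1, 1, 0, 0, 0, 1]  # 1 = bullying text (toy labels)
y_pred = [1, 1, 1, 0, 1, 0, 0, 1]
print(fbeta_score(y_true, y_pred, beta=3))  # 0.8 here (P = R = 0.8)
```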
|
468 |
Extracting and Aggregating Temporal Events from Texts
Döhling, Lars. 11 October 2017
Finding reliable information about given events from large and dynamic text collections, such as the web, is a topic of great interest. For instance, rescue teams and insurance companies are interested in concise facts about damages after disasters, which can be found today in web blogs, online newspaper articles, social media, etc. Knowing these facts helps to determine the required scale of relief operations and supports their coordination. However, finding, extracting, and condensing specific facts is a highly complex undertaking: it requires identifying appropriate textual sources and their temporal alignment, recognizing relevant facts within these texts, and aggregating extracted facts into a condensed answer despite inconsistencies, uncertainty, and changes over time. In this thesis, we present and evaluate techniques and solutions for each of these problems, embedded in a four-step framework. The applied methods include pattern matching, natural language processing, and machine learning. We also report the results of two case studies applying the entire framework: gathering data on earthquakes and floods from web documents. Our results show that it is, under certain circumstances, possible to automatically obtain reliable and timely data from the web.
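As a toy illustration of the pattern-matching step in such a framework (the patterns and snippet below are invented for illustration, not taken from the thesis):

```python
# Extract a date and a casualty figure from a disaster report snippet.
import re

text = "The 12 May 2008 earthquake killed 69,000 people in Sichuan."
date = re.search(r"\b\d{1,2} [A-Z][a-z]+ \d{4}\b", text)
count = re.search(r"([\d,]+)\s+people", text)
print(date.group(), "/", count.group(1))  # -> 12 May 2008 / 69,000
```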
|
469 |
\"Processamento e análise de imagens para medição de vícios de refração ocular\" / Image Processing and Analysis for Measuring Ocular Refraction ErrorsValerio Netto, Antonio 18 August 2003 (has links)
This work presents a computational system that uses Machine Learning (ML) techniques to assist in ophthalmological diagnosis. The system produces objective and automatic measures of the main ocular refraction errors, astigmatism, hypermetropia, and myopia, from functional images of the human eye acquired with a technique known as Hartmann-Shack (HS), or Shack-Hartmann (SH). Image processing techniques are applied to these images in order to remove noise and extract the regions of interest. The Gabor wavelet transform is then applied to extract feature vectors from the images, which are input to ML techniques that output a diagnosis of the refractive errors in the imaged eye globe. Results indicate that the proposed approach creates interesting possibilities for the interpretation of HS images, so that in the future other types of ocular diseases may be detected and measured from the same images. In addition to implementing a novel approach for measuring ocular refraction errors and introducing ML techniques for analyzing ophthalmological images, this work investigates the use of Artificial Neural Networks and Support Vector Machines (SVMs) for tasks in Image Understanding. The description of the development process helps in critically assessing the suitability and limitations of such techniques for solving Image Understanding tasks in "real world" problems.
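A hedged sketch of the two pipeline stages described above, Gabor wavelet features followed by an SVM; the filter parameters and random stand-in images are assumptions, not the thesis's settings:

```python
# Gabor-filter feature extraction, then SVM classification.
import numpy as np
from skimage.filters import gabor
from sklearn.svm import SVC

def gabor_features(image, frequencies=(0.1, 0.2, 0.4), n_orient=4):
    """Mean and std of Gabor responses over frequencies and orientations."""
    feats = []
    for f in frequencies:
        for k in range(n_orient):
            real, _ = gabor(image, frequency=f, theta=k * np.pi / n_orient)
            feats += [real.mean(), real.std()]
    return np.array(feats)

rng = np.random.default_rng(0)
images = rng.random((10, 32, 32))        # stand-ins for HS eye images
X = np.array([gabor_features(im) for im in images])
y = rng.integers(0, 2, size=10)          # toy "refractive error" labels
clf = SVC(kernel="rbf").fit(X, y)        # diagnosis stage of the pipeline
```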
|
470 |
Contributions à l'apprentissage automatique pour l'analyse d'images cérébrales anatomiques / Contributions to statistical learning for structural neuroimaging data
Cuingnet, Rémi. 29 March 2011
Brain image analyses have widely relied on univariate voxel-wise methods: brain images are first spatially registered to a common stereotaxic space, and mass univariate statistical tests are then performed in each voxel to detect significant group differences. However, the sensitivity of these approaches is limited when the differences involve a combination of different brain structures. Recently, there has been a growing interest in support vector machine (SVM) methods to overcome these limits. This thesis focuses on machine learning methods for population analysis and patient classification in neuroimaging. We first evaluated the performance of different classification strategies for the identification of patients with Alzheimer's disease, based on T1-weighted MRI of 509 subjects from the ADNI database. However, these methods do not take full advantage of the spatial distribution of the features; as a consequence, the optimal margin hyperplane is often scattered and lacks spatial coherence, making its anatomical interpretation difficult. We therefore introduce a framework to spatially regularize support vector machines for brain image analysis, based on Laplacian regularization operators. The proposed framework was applied to the analysis of stroke and of Alzheimer's disease. The results demonstrate that the proposed classifier generates less noisy, and consequently more interpretable, feature maps with no loss of classification performance.
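A conceptual sketch of the kind of identity such Laplacian spatial regularization can rest on: penalizing ||exp(beta*L/2) w||^2 for a voxel-graph Laplacian L amounts to training a standard linear SVM on features smoothed by exp(-beta*L/2). The toy graph, data, and beta below are assumptions, not the thesis's setup:

```python
# Linear SVM with heat-kernel spatial smoothing of voxel features.
import numpy as np
from scipy.linalg import expm
from sklearn.svm import LinearSVC

# Toy 1-D chain of 5 "voxels": adjacency A and graph Laplacian L = D - A.
A = np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1)
L = np.diag(A.sum(axis=1)) - A

beta = 1.0
S = expm(-beta * L / 2)              # heat-kernel smoothing operator

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5))         # 40 subjects x 5 voxel features (toy)
y = rng.integers(0, 2, size=40)      # toy diagnostic labels

clf = LinearSVC().fit(X @ S, y)      # standard SVM on smoothed features
w = S @ clf.coef_.ravel()            # weight map back in voxel space
print(w)                             # spatially smoother, more interpretable
```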
|