1

Query Expansion Study for Clinical Decision Support

Zhuang, Wenjie 12 February 2018 (has links)
Information retrieval is widely used for retrieving relevant information from a variety of data, such as text documents, images, audio, and videos. Since the first medical batch retrieval system was developed in the mid-1960s, significant research effort has focused on applying information retrieval to medical data. However, despite the vast developments in medical information retrieval and accompanying technologies, the actual promise of this area remains unfulfilled due to properties of medical data and the huge volume of medical literature. Specifically, recall and precision on the selected dataset from the TREC clinical decision support track are low. The overriding objective of this thesis is to improve the performance of information retrieval techniques applied to biomedical text documents. We have focused on improving recall and precision among the top retrieved results. To that end, we have removed redundant words and then expanded queries by adding MeSH terms to TREC CDS topics. We have also used other external data sources and domain knowledge to implement the expansion. In addition, we have considered using the doc2vec model to optimize retrieval. Finally, we have applied learning to rank, which sorts documents by relevance so that relevant documents are returned ahead of irrelevant ones. We have discovered that queries expanded with external data sources and domain knowledge perform better than applying the TREC topic information directly. / Master of Science / Information retrieval is widely used for retrieving relevant information from a variety of data. Since the first medical batch retrieval system was developed in the mid-1960s, significant research effort has focused on applying information retrieval to medical data. However, the actual promise of this area remains unfulfilled due to certain properties of medical data and the sheer volume of medical literature. The overriding objective of this thesis is to improve the performance of information retrieval techniques applied to biomedical text documents. This thesis presents several ways to implement query expansion in order to make retrieval more effective. It then discusses approaches that place documents relevant to the queries at the top of the results.
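For readers unfamiliar with query expansion, the following is a minimal sketch of the general idea of expanding a clinical topic with MeSH entry terms before retrieval. The topic text, the MESH_SYNONYMS lookup, and the stopword list are invented placeholders, not the thesis's actual data or code.

```python
# A minimal sketch of MeSH-based query expansion.
# STOPWORDS and MESH_SYNONYMS are invented placeholders for illustration only.
STOPWORDS = {"a", "an", "the", "of", "with", "and", "year", "old"}

# Hypothetical mapping from lay phrases in a topic to MeSH entry terms.
MESH_SYNONYMS = {
    "heart attack": ["myocardial infarction"],
    "high blood pressure": ["hypertension"],
}

def expand_query(topic_text: str) -> str:
    """Drop redundant words, then append MeSH terms for phrases found in the topic."""
    text = topic_text.lower()
    terms = [t for t in text.split() if t not in STOPWORDS]
    for phrase, mesh_terms in MESH_SYNONYMS.items():
        if phrase in text:
            terms.extend(mesh_terms)
    return " ".join(terms)

print(expand_query("A 62 year old man with high blood pressure and chest pain"))
# -> "62 man high blood pressure chest pain hypertension"
```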
2

A Machine Learning Approach to Predicting Alcohol Consumption in Adolescents From Historical Text Messaging Data

Bergh, Adrienne 28 May 2019 (has links)
Techniques based on artificial neural networks represent the current state-of-the-art in machine learning due to the availability of improved hardware and large data sets. Here we employ doc2vec, an unsupervised neural network, to capture the semantic content of text messages sent by adolescents during high school, and encode this semantic content as numeric vectors. These vectors effectively condense the text message data into highly leverageable inputs to a logistic regression classifier in a matter of hours, as compared to the tedious and often quite lengthy task of manually coding data. Using our machine learning approach, we are able to train a logistic regression model to predict adolescents' engagement in substance abuse during distinct life phases with accuracy ranging from 76.5% to 88.1%. We show the effects of grade level and text message aggregation strategy on the efficacy of document embedding generation with doc2vec. Additional examination of the vectorizations for specific terms extracted from the text message data adds quantitative depth to this analysis. We demonstrate the ability of the method used herein to overcome traditional natural language processing concerns related to unconventional orthography. These results suggest that the approach described in this thesis is a competitive and efficient alternative to existing methodologies for predicting substance abuse behaviors. This work reveals the potential for the application of machine learning-based manipulation of text messaging data to the development of automatic intervention strategies against substance abuse and other adolescent challenges.
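As a rough illustration of the pipeline described above, document vectors from an unsupervised doc2vec model fed into a logistic regression classifier, here is a minimal sketch using gensim and scikit-learn. The messages, labels, and hyperparameters are placeholder assumptions, not the study's data or configuration.

```python
# Minimal sketch: doc2vec document vectors -> logistic regression classifier.
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from sklearn.linear_model import LogisticRegression

messages = ["omw to practice see u there",
            "cant make it tonight sorry",
            "party at jakes later u in",
            "studying for the chem test all weekend"]
labels = [0, 0, 1, 0]  # hypothetical per-document substance-use labels

corpus = [TaggedDocument(words=m.split(), tags=[i]) for i, m in enumerate(messages)]

model = Doc2Vec(vector_size=50, min_count=1, epochs=40)   # 50-dimensional document vectors
model.build_vocab(corpus)
model.train(corpus, total_examples=model.corpus_count, epochs=model.epochs)

X = [model.dv[i] for i in range(len(messages))]           # learned document vectors
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict([model.infer_vector("heading to the party".split())]))
```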
3

Topic Explorer Dashboard : A Visual Analytics Tool for an Innovation Management System enhanced by Machine Learning Techniques

Knoth, Stefanie January 2020 (has links)
Innovation Management Software contains complex data with many different variables. This data is usually presented in tabular form or with isolated graphs that each visualize a single, independent aspect of a dataset. Displaying the data with interconnected, interactive charts, however, provides much more flexibility and more opportunities for working with and understanding it. Charts that show multiple aspects of the data at once can help uncover hidden relationships and reveal insights that are difficult to see with the traditional way of displaying data. The size and complexity of the available data also invite analysis with machine learning techniques. This thesis first explores how machine learning techniques can be used to gain additional insight from the data; the results of this investigation are then combined with the original data to build a prototypical dashboard for exploratory visual data analysis. The dashboard is evaluated by means of the ICE-T heuristics, and the results and findings are discussed.
4

A Comparison between Different Recommender System Approaches for a Book and an Author Recommender System

Hedlund, Jesper, Nilsson Tengstrand, Emma January 2020 (has links)
A recommender system is a popular tool used by companies to increase customer satisfaction and revenue. Collaborative filtering and content-based filtering are the two most common approaches when implementing a recommender system, where the former provides recommendations based on user behaviour and the latter uses the characteristics of the items that are recommended. The aim of the study was to develop and compare different recommender system approaches, for both book and author recommendations, in terms of their ability to predict user ratings in an e-book application. The models were evaluated by measuring root mean square error (RMSE) and mean absolute error (MAE). Two pure models were developed, one based on collaborative filtering and one based on content-based filtering. In addition, three hybrid models combining the two pure approaches were developed and compared with the pure models. The study also explored how aggregating book data to the author level could be used to implement an author recommender system. The results showed that the aggregated author data was more difficult to predict. However, it was difficult to draw any conclusions about the performance on author data because of the aggregation, although it was clear that author recommendations could be derived from book data. The study also showed that the collaborative filtering model performed better than the content-based filtering model according to RMSE but not according to MAE. The lowest RMSE and MAE, however, were achieved by combining the two approaches in a hybrid model.
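As an illustration of the hybrid idea and of the RMSE/MAE evaluation mentioned above, here is a minimal sketch that blends collaborative-filtering and content-based rating predictions with a fixed weight. The ratings, predictions, and the 0.6/0.4 weighting are made-up numbers, not the thesis's models or results.

```python
# Minimal sketch: weighted hybrid of two rating predictors, scored with RMSE and MAE.
# All numbers below are illustrative placeholders.
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error

true_ratings = np.array([4.0, 2.0, 5.0, 3.0])
cf_pred      = np.array([3.6, 2.5, 4.4, 3.2])   # collaborative-filtering predictions
content_pred = np.array([4.2, 1.8, 4.8, 2.6])   # content-based predictions

hybrid_pred = 0.6 * cf_pred + 0.4 * content_pred  # assumed blending weights

for name, pred in [("CF", cf_pred), ("Content", content_pred), ("Hybrid", hybrid_pred)]:
    rmse = np.sqrt(mean_squared_error(true_ratings, pred))
    mae = mean_absolute_error(true_ratings, pred)
    print(f"{name}: RMSE={rmse:.3f}  MAE={mae:.3f}")
```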
5

Benchmarking authorship attribution techniques using over a thousand books by fifty Victorian era novelists

Gungor, Abdulmecit 03 April 2018 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Authorship attribution (AA) is the process of identifying the author of a given text, and from the machine learning perspective it can be seen as a classification problem. The literature offers a wide range of classification methods and accompanying feature extraction techniques. In this thesis, we explore text representation techniques such as Word2Vec and paragraph2vec, together with other useful feature selection and extraction techniques, in combination with different classifiers. We have performed experiments on novels extracted from the GDELT database using different features such as bag of words, n-grams, and newer techniques like Word2Vec. To improve our success rate, we have combined several useful features, among them a diversity measure of the text, bag of words, bigrams, and specific words that are spelled differently by English and American authors. A support vector machine classifier of the nu-SVC type is observed to give the best success rates on the stacked feature set. The main purpose of this work is to lay the foundations of feature extraction techniques in AA; these include lexical, character-level, syntactic, semantic, and application-specific features. We also aim to offer a new data resource for the authorship attribution research community and to demonstrate how it can be used to extract features for any kind of AA problem. The dataset we introduce consists of works by Victorian era authors, and the main feature extraction techniques are shown with exemplary code snippets for audiences in different knowledge domains. Feature extraction approaches and their implementation with different classifiers are presented in simple ways, so that this work can also serve as a starting point for newcomers to AA. Some feature extraction techniques introduced here are also meant to be employed in other NLP tasks, such as sentiment analysis with Word2Vec or text summarization, and they can be applied directly to our dataset. We have also introduced several ways to use the extracted features, such as feature stacking with different classifiers or using Word2Vec to create sentence-level vectors.
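To make the feature-stacking idea concrete, the sketch below stacks bag-of-words, word-bigram, and character n-gram features and trains a nu-SVC on them with scikit-learn. The toy texts, author labels, and the nu value are illustrative assumptions and are not taken from the thesis or its dataset.

```python
# Minimal sketch: stacked text features classified with a nu-SVC.
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.pipeline import FeatureUnion, make_pipeline
from sklearn.svm import NuSVC

texts = ["It was the best of times, it was the worst of times.",
         "It is a truth universally acknowledged that a single man must be in want of a wife.",
         "Marley was dead, to begin with. There is no doubt whatever about that.",
         "They attacked him in various ways, with barefaced questions and distant surmises."]
authors = ["Dickens", "Austen", "Dickens", "Austen"]

# Stack word-level and character-level views of the same text.
features = FeatureUnion([
    ("bow", CountVectorizer()),                                        # bag of words
    ("bigrams", CountVectorizer(ngram_range=(2, 2))),                  # word bigrams
    ("char", TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 3))), # character n-grams
])

clf = make_pipeline(features, NuSVC(nu=0.5, kernel="linear"))
clf.fit(texts, authors)
print(clf.predict(["It was a dark and stormy night; the rain fell in torrents."]))
```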
6

Classifying and Comparing Latent Space Representation of Unstructured Log Data. / Klassificering och jämförelse av latenta rymdrepresentationer av ostrukturerad loggdata.

Sharma, Bharat January 2021 (has links)
This thesis explores and compares various methods for producing vector representations of unstructured log data. Ericsson wanted to investigate machine learning methods for analyzing logs produced by their systems in order to reduce the cost and effort required for manual log analysis. Four NLP methods were used to produce vector embeddings for logs: Doc2Vec, DAN, XLNet, and RoBERTa, and a Random Forest classifier was used to classify those embeddings. The experiments were performed on three different datasets, and the results showed that the performance of the models varied based on the dataset being used. The results also show that, in the case of log data, fine-tuning makes the transformer models computationally heavy while the performance gain is very low. RoBERTa without fine-tuning produced optimal vector representations for the first and third datasets, whereas DAN performed better on the second dataset. The study also concluded that the NLP models were able to better understand and classify the third dataset, as it contained more plain-text information in contrast to the more technical and less human-readable datasets. / This thesis examines and compares different methods for creating vector representations of unstructured log data. Ericsson wants to investigate whether machine learning techniques can be used to analyze the log data produced by their current systems and thereby ease and reduce the cost of manual log analysis. Four different NLP technologies are examined for creating vector representations of log data: Doc2vec, DAN, XLNet, and RoBERTa. In addition, a Random Forest classifier is used to classify the vector representations. The experiments were performed on three different datasets, and the results showed that the models' performance varied depending on the dataset used. The results also show that fine-tuning transformer models makes them computationally demanding and that the performance gain is small. RoBERTa without fine-tuning produced optimal vector representations for the first and third datasets, while DAN performed better on the second dataset. The study also shows that the language models could classify the third dataset better, since it contained more plain-text information compared with the more technical and less readable datasets.
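The sketch below shows one way the highlighted configuration, RoBERTa embeddings without fine-tuning classified by a Random Forest, could be wired together with the transformers and scikit-learn libraries. The log messages, labels, and the mean-pooling step are assumptions made for illustration; they are not Ericsson's data or the thesis's exact pipeline.

```python
# Minimal sketch: frozen RoBERTa embeddings for log lines -> Random Forest classifier.
import torch
from sklearn.ensemble import RandomForestClassifier
from transformers import AutoModel, AutoTokenizer

logs = ["ERROR: connection to node-7 timed out",
        "INFO: health check passed",
        "ERROR: disk quota exceeded on /var/log",
        "INFO: scheduled backup completed"]
labels = [1, 0, 1, 0]  # hypothetical abnormal / normal labels

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModel.from_pretrained("roberta-base")
model.eval()  # no fine-tuning: the pretrained weights are used as-is

with torch.no_grad():
    batch = tokenizer(logs, padding=True, truncation=True, return_tensors="pt")
    hidden = model(**batch).last_hidden_state          # (batch, tokens, dim)
    mask = batch["attention_mask"].unsqueeze(-1)       # mask out padding tokens
    embeddings = (hidden * mask).sum(1) / mask.sum(1)  # mean pooling per log line

clf = RandomForestClassifier(n_estimators=100).fit(embeddings.numpy(), labels)
print(clf.predict(embeddings.numpy()[:1]))
```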
7

Maskininlärning för dokumentklassificering av finansiella dokument med fokus på fakturor / Machine Learning for Document Classification of Financial Documents with Focus on Invoices

Khalid Saeed, Nawar January 2022 (has links)
Automated document classification is a process or method aimed at processing and managing documents in digital form. Many companies strive for a text classification methodology that can solve various problems. One of these problems is classifying and organizing a large number of documents based on a set of predefined categories. This thesis aims to help Medius, a company that works with invoice workflows, to classify the documents processed in their invoice workflow into invoices and non-invoices. This has been accomplished by implementing and evaluating different machine learning classification methods with respect to their accuracy and efficiency in classifying financial documents, where only invoices are of interest. In this thesis, two document representation methods, Term Frequency-Inverse Document Frequency (TF-IDF) and Doc2Vec, have been used to represent the documents as vectors. The representation aims to reduce the complexity of the documents and make them easier to handle. In addition, three classification methods have been used to automate the document classification process for invoices: Logistic Regression, Multinomial Naïve Bayes, and Support Vector Machine. The results of this thesis showed that all classification methods that used TF-IDF to represent the documents as vectors gave good results in terms of performance and accuracy. The accuracy of all three classification methods was above 90%, which was the requirement for this study to be considered successful. Moreover, Logistic Regression appeared to classify the documents more easily than the other methods. A test on real documents flowing into Medius's invoice workflow showed that Logistic Regression correctly classified almost 96% of the documents. In conclusion, Logistic Regression together with TF-IDF was determined to be the overall most suitable method for the document classification problem. Unfortunately, Doc2Vec could not deliver a good result because the dataset was not adapted and sufficient for the method to work well. / Automated document classification is an essential technique that aims to process and manage documents in digital forms. Many companies strive for a text classification methodology that can solve a plethora of problems. One of these problems is classifying and organizing a massive amount of documents based on a set of predefined categories. This thesis aims to help Medius, a company that works with invoice workflow, to classify their documents into invoices and non-invoices. This has been accomplished by implementing and evaluating various machine learning classification methods in terms of their accuracy and efficiency for the task of financial document classification, where only invoices are of interest. Furthermore, the necessary pre-processing steps for achieving good performance are considered when evaluating the mentioned classification methods. In this study, two document representation methods, Term Frequency-Inverse Document Frequency (TF-IDF) and Doc2Vec, were used to represent the documents as fixed-length vectors. The representation aims to reduce the complexity of the documents and make them easier to handle. In addition, three classification methods have been used to automate the document classification process for invoices. These methods were Logistic Regression, Multinomial Naïve Bayes, and Support Vector Machine. The results from this thesis indicate that all classification methods that used TF-IDF to represent the documents as vectors give high performance and accuracy. The accuracy of all three classification methods is over 90%, which is the prerequisite for the success of this study. Moreover, Logistic Regression appears to cope with this task very easily, since it classifies the documents more efficiently compared to the other methods. A test of real data flowing into Medius' invoice workflow shows that Logistic Regression is able to correctly classify up to 96% of the data. In conclusion, Logistic Regression together with TF-IDF is determined to be the most appropriate of the tested methods. In addition, Doc2Vec struggles to provide a good result because the data set is not customized and sufficient for the method to work well.
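For illustration, a minimal scikit-learn version of the TF-IDF plus Logistic Regression setup described above might look like the sketch below. The example documents and labels are invented placeholders, not Medius data.

```python
# Minimal sketch: TF-IDF features + Logistic Regression for invoice / non-invoice classification.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

docs = ["Invoice no 4711, amount due 1200 SEK, payment terms 30 days",
        "Meeting notes from the quarterly planning session",
        "Faktura 2021-113, att betala 860 SEK senast 2021-06-30",
        "Reminder: submit your travel expense report"]
labels = ["invoice", "non-invoice", "invoice", "non-invoice"]

clf = make_pipeline(TfidfVectorizer(lowercase=True, ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(docs, labels)
print(clf.predict(["Invoice 2022-004: total 430 SEK, due within 14 days"]))
```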
8

Classifying Challenging Behaviors in Autism Spectrum Disorder with Neural Document Embeddings

Atchison, Abigail 28 May 2019 (has links)
The understanding and treatment of challenging behaviors in individuals with Autism Spectrum Disorder is paramount to enabling the success of behavioral therapy; an essential step in this process is the labeling of challenging behaviors demonstrated in therapy sessions. These manifestations differ across individuals and within individuals over time, and thus the appropriate classification of a challenging behavior can be unclear when only qualitative factors are considered. In this thesis we seek to add quantitative depth to this otherwise qualitative task of challenging behavior classification. We do so through the application of natural language processing techniques to behavioral descriptions extracted from the CARD Skills dataset. Specifically, we construct three sets of 50-dimensional document embeddings to represent the 1,917 recorded instances of challenging behaviors demonstrated in Applied Behavior Analysis therapy. These embeddings are learned through three processes: a TF-IDF-weighted sum of Word2Vec embeddings, Doc2Vec embeddings that use hierarchical softmax as an output layer, and Doc2Vec embeddings that optimize the original Doc2Vec architecture through negative sampling. Once created, these embeddings are initially used as input to a Support Vector Machine classifier to demonstrate the success of binary classification within this problem set. This preliminary exploration achieves promising classification accuracies ranging from 78.2% to 100% and establishes the separability of challenging behaviors given their neural embeddings. We next construct a multi-class classification model via a Gaussian Process classifier fitted with the Laplace approximation. This classification model, trained on an 80/20 stratified split of the seven most frequently occurring behaviors in the dataset, produces an accuracy of 82.7%. Through this exploration we demonstrate that the semantic cues derived from the language of challenging behavior descriptions, modeled using natural language processing techniques, can be successfully leveraged in classification architectures. This study represents the first of its kind, providing a proof of concept for the application of machine learning to observations of challenging behaviors demonstrated in ASD, with the ultimate goal of improving the efficacy of behavioral treatments that intrinsically rely on the accurate identification of these behaviors.
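Of the three embedding schemes listed above, the TF-IDF-weighted sum of Word2Vec vectors is the easiest to show compactly. The sketch below is an illustrative version using gensim and scikit-learn; the behavior descriptions, the 50-dimensional setting, and the tiny training corpus are assumptions, not the CARD Skills data or the thesis's code.

```python
# Minimal sketch: document embedding as a TF-IDF-weighted sum of Word2Vec word vectors.
import numpy as np
from gensim.models import Word2Vec
from sklearn.feature_extraction.text import TfidfVectorizer

descriptions = ["threw chair across the room during transition",
                "repeated loud vocalizations when task was presented",
                "bit own hand after being denied preferred item"]
tokens = [d.split() for d in descriptions]

w2v = Word2Vec(sentences=tokens, vector_size=50, min_count=1, epochs=50)

tfidf = TfidfVectorizer(analyzer=str.split)   # reuse the same whitespace tokenization
weights = tfidf.fit_transform(descriptions)   # sparse (n_docs, n_terms) TF-IDF matrix
vocab = tfidf.vocabulary_                     # term -> column index

def embed(doc_index, words):
    # TF-IDF-weighted sum of the Word2Vec vectors of the words in one description.
    vec = np.zeros(w2v.vector_size)
    for w in words:
        vec += weights[doc_index, vocab[w]] * w2v.wv[w]
    return vec

doc_embeddings = np.vstack([embed(i, t) for i, t in enumerate(tokens)])
print(doc_embeddings.shape)  # (3, 50)
```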
9

Automated Image Suggestions for News Articles : An Evaluation of Text and Image Representations in an Image Retrieval System / Automatiska bildförslag till nyhetsartiklar

Svensson, Pontus January 2020 (has links)
Multimodal machine learning is a subfield of machine learning that aims to relate data from different modalities, such as texts and images. One of the many applications that could be built upon this technique is an image retrieval system that, given a text query, retrieves suitable images from a database. In this thesis, a retrieval system based on canonical correlation analysis is used to suggest images for news articles. Different dense text representations produced by Word2vec and Doc2vec, and image representations produced by pre-trained convolutional neural networks, are explored to find out how they affect the suggestions. Which part of an article is best suited as a query to the system is also studied, and experiments are carried out to determine whether an article's date of publication can be used to improve the suggestions. The results show that Word2vec outperforms Doc2vec on the task, which indicates that the meaning of article texts is not as important as the individual words they consist of. Furthermore, the queries are improved by rewarding words that are particularly significant.
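As a sketch of how a canonical-correlation retrieval step can rank images for a text query, the code below fits scikit-learn's CCA on paired text and image feature matrices and ranks images by cosine similarity in the shared space. The random matrices stand in for Word2vec article vectors and pre-trained CNN image descriptors and are purely illustrative.

```python
# Minimal sketch: text-to-image retrieval via canonical correlation analysis (CCA).
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
text_feats = rng.normal(size=(200, 100))    # stand-ins for Word2vec article vectors
image_feats = rng.normal(size=(200, 128))   # stand-ins for CNN image descriptors

cca = CCA(n_components=20, max_iter=1000)
text_proj, image_proj = cca.fit_transform(text_feats, image_feats)

def suggest_images(query_text_vec, top_k=5):
    # Project the query into the shared space and rank images by cosine similarity.
    q = cca.transform(query_text_vec.reshape(1, -1))   # text-side projection, shape (1, 20)
    sims = (image_proj @ q.T).ravel() / (
        np.linalg.norm(image_proj, axis=1) * np.linalg.norm(q) + 1e-9)
    return np.argsort(-sims)[:top_k]

print(suggest_images(rng.normal(size=100)))   # indices of the suggested images
```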
10

Evaluation of text classification techniques for log file classification / Utvärdering av textklassificeringstekniker för klassificering avloggfiler

Olin, Per January 2020 (has links)
System log files are filled with logged events, status codes, and other messages. By analyzing the log files, the system's current state can be determined and it can be established whether something went wrong during execution. Log file analysis has been studied for some time, and recent studies have shown state-of-the-art performance using machine learning techniques. In this thesis, document classification solutions were tested on log files in order to distinguish regular system runs from abnormal system runs. To solve this task, supervised and unsupervised learning methods were combined: Doc2Vec was used to extract document features, and Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) based architectures were applied to the classification task. With these models and preprocessing techniques, the tested approaches yielded an F1-score and accuracy above 95% when classifying log files.
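The sketch below shows one plausible way to combine the pieces named above, per-message Doc2Vec-style vectors for a run fed as a sequence into an LSTM classifier, using Keras. The random vectors, sequence length, dimensions, and labels are placeholder assumptions rather than the thesis's actual architecture or data.

```python
# Minimal sketch: sequences of document vectors classified by an LSTM (regular vs. abnormal run).
import numpy as np
from tensorflow import keras

num_runs, seq_len, vec_dim = 64, 20, 50
X = np.random.normal(size=(num_runs, seq_len, vec_dim)).astype("float32")  # per-message vectors
y = np.random.randint(0, 2, size=num_runs)  # 0 = regular run, 1 = abnormal run

model = keras.Sequential([
    keras.layers.LSTM(32, input_shape=(seq_len, vec_dim)),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=3, batch_size=8, verbose=0)
print(model.predict(X[:1], verbose=0))
```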
