151

Klassificering av kvitton med hjälp av maskininlärning (Classification of Receipts Using Machine Learning)

Enerstrand, Simon January 2019 (has links)
Machine learning is being used in more and more areas. It has the potential to replace many repetitive tasks, or at least to simplify them. Document management within financial systems is one area where machine learning can help: reading invoices or receipts typically requires a great deal of manual input across different fields. The goal of the project is to create an application that uses machine learning for the company Centsoft AB. The application receives OCR-interpreted text from an image of a receipt and must then, with high certainty, determine which category the receipt belongs to. This report shows the development of the machine learning model in the application and answers the question: "How can receipts be classified using machine learning?". The case study research method and the MoSCoW project method are applied, and the project also takes the commitment triangle described by Eklund into account. Machine learning frameworks are used to evaluate the trained model, which is able to interpret, with high certainty, receipts it has not encountered before. For the interpretation to be meaningful, a receipt must be intended to belong to one of the eight trained categories. The chosen methods suited the project well for answering the research question. The application can be developed further and implemented in the invoice management system. Carrying out the project provides knowledge about working with machine learning solutions, a technique that can be applied to several areas in the future.
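The thesis does not disclose which framework the model was built with, so the following is only a minimal sketch of the kind of receipt classifier the abstract describes, assuming a scikit-learn pipeline; the receipt texts and category names are invented stand-ins.

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

# Toy OCR-extracted receipt texts with hypothetical category labels.
texts = ["taxi stockholm 320 kr", "lunch restaurang 145 kr",
         "sj tag goteborg 495 kr", "kaffe fika 38 kr"]
labels = ["travel", "meals", "travel", "meals"]

model = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),  # word and bigram features
    ("clf", LinearSVC()),                            # linear classifier
])
model.fit(texts, labels)

# Classify an unseen receipt; the decision margin is a rough proxy
# for the "high certainty" requirement mentioned in the abstract.
print(model.predict(["flygbiljett arlanda 1200 kr"]))
print(model.decision_function(["flygbiljett arlanda 1200 kr"]))
```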
152

Applying Machine Learning to Capital Markets Supervision: Classification and Information Extraction from Financial Documents

Shu, Frederico 06 January 2022 (has links)
The analysis of unstructured financial documents is key to the capital markets supervision performed by the Comissão de Valores Mobiliários (Brazilian SEC, or CVM). Systems capable of reducing the human effort involved in screening documents and outlining relevant information for further manual review are important tools for CVM to deal with the shortage of human resources and the expansion of the Brazilian securities market. In this regard, this dissertation presents and discusses the application of several machine learning algorithms and text processing techniques to two natural language processing tasks, document classification and information extraction, in a real market supervision environment. In the classification exercise, classic algorithms achieved better performance than deep neural networks, and this performance was further enhanced by applying undersampling techniques and ensembles. Using the tested algorithms can improve the current precision rate from 20-40 percent to more than 90 percent. The BERT network architecture was able to extract information on capital increases and mergers from financial documents. The successful results obtained in both tasks encourage future implementation of the studied models in production, in the form of a decision support system. Another contribution of this work is the CVMCorpus, a corpus built for this study from financial documents released between 2009 and 2019 by Brazilian public companies, which opens possibilities for future linguistic and finance research.
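As a minimal sketch of the undersampling-plus-ensemble recipe credited above with the precision gains, the following assumes scikit-learn and imbalanced-learn; the real CVM document features, classes, and models are not public, so synthetic data stands in.

```python
from imblearn.under_sampling import RandomUnderSampler
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification

# Synthetic stand-in for an imbalanced document-classification dataset.
X, y = make_classification(n_samples=2000, weights=[0.95], random_state=0)

# Rebalance the training set by undersampling the majority class.
X_res, y_res = RandomUnderSampler(random_state=0).fit_resample(X, y)

# A small ensemble (comitê de máquinas) of classic algorithms.
ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(random_state=0))],
    voting="soft",
)
ensemble.fit(X_res, y_res)
print(ensemble.predict_proba(X[:3]))
```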
153

Automatic Classification of Conditions for Grants in Appropriation Directions of Government Agencies

Wallerö, Emma January 2022 (has links)
This study explores the possibilities of classifying language as governing or not. The basic premise is to examine how the detection and quantification of governing conditions in thousands of financial grants in appropriation directions can be automated, and to create a data set for machine learning on this text classification task. In this study, automatic classification is performed along with an annotation process for extracting and labelling data. The classification task aims mainly to divide conditions into those that govern the conduct of the specific agency and those that do not. The data consist of text from the chapter of the appropriation directions concerning financial grants. The text is split into sentences, keeping only sentences longer than 15 words. An iterative annotation process is then performed to obtain labelled conditions, involving three expert annotators for the final data set and layman annotations for initial experiments. Given the data extracted from the annotation process, SVM, BiLSTM and KB-BERT classifiers are trained and evaluated. All models are evaluated without context information, with the exception of bullet points, for which a preceding, generally descriptive sentence is included. Besides this default input representation, two alternatives are evaluated: including the preceding sentence along with the target sentence, and adding the name of the specific agency to the target sentence. The final inter-annotator agreement was not optimal, with Cohen's kappa scores that can be interpreted as moderate agreement. Using a majority vote for the test set somewhat mitigated the non-optimal agreement for that specific set. Considering all input representation types, the best-performing model was KB-BERT without context information, reaching an F1-score of 0.81 and an accuracy of 0.89 on the test set. All models performed better on sentences classified as governing, which may be partly because the final annotated data sets are skewed. Possible future work includes further iterative annotation, working towards a clear and as objective as possible definition of a governing condition, and exploring data augmentation to counteract the uneven class distribution in the final data sets.
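The agreement and majority-vote steps can be made concrete with scikit-learn; a minimal sketch follows, with fabricated annotator labels (1 = governing, 0 = not governing) standing in for the real annotations.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Fabricated labels from three expert annotators over eight sentences.
ann_a = np.array([1, 0, 1, 1, 0, 1, 0, 0])
ann_b = np.array([1, 0, 0, 1, 0, 1, 1, 0])
ann_c = np.array([1, 1, 1, 1, 0, 0, 0, 0])

# Pairwise Cohen's kappa between annotators.
for x, y, pair in [(ann_a, ann_b, "A-B"), (ann_a, ann_c, "A-C"),
                   (ann_b, ann_c, "B-C")]:
    print(pair, round(cohen_kappa_score(x, y), 3))

# Majority vote across the three annotators yields the gold labels.
gold = (ann_a + ann_b + ann_c >= 2).astype(int)
print("gold:", gold)
```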
154

Multilabel text classification of public procurements using deep learning intent detection

Suta, Adin January 2019 (has links)
Textual data is one of the most widespread forms of data, and the amount available in the world increases at a rapid rate. Text can be understood as either a sequence of characters or a sequence of words, where the latter approach is the most common. With the breakthroughs in applied artificial intelligence in recent years, more and more tasks are aided by automatic text processing in various applications. The models introduced in the following sections rely on deep-learning sequence processing to produce a regression-based algorithm that classifies what the input text refers to. We investigate and compare the performance of several model architectures along with different hyperparameters. The data set was provided by e-Avrop, a Swedish company that hosts a web platform for posting and bidding on public procurements. It consists of the titles and descriptions of Swedish public procurements posted on the e-Avrop website, along with the category or categories of each text. For texts described by several categories (the multi-label case), we suggest a deep-learning sequence-processing regression algorithm in which a set of deep learning classifiers is used. Each model uses one of the several labels, along with the text input, to produce a set of text-label observation pairs. The goal is to investigate whether these classifiers can exhibit different levels of intent, an intent which should theoretically be imposed by the different training data sets used by each individual deep learning classifier.
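A minimal sketch of the one-classifier-per-label idea follows, assuming scikit-learn's one-vs-rest wrapper and swapping the thesis's deep sequence models for a simple linear baseline to keep the example short; the procurement texts and categories are invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.pipeline import make_pipeline

texts = ["road maintenance and snow removal",
         "school catering services",
         "construction of bicycle road",
         "catering for municipal office"]
labels = [["construction", "services"], ["food"],
          ["construction"], ["food", "services"]]

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(labels)  # one binary column per category

# One binary classifier is trained per label, each on its own view of
# the data, echoing the per-label intent idea in the abstract.
clf = make_pipeline(TfidfVectorizer(),
                    OneVsRestClassifier(LogisticRegression(max_iter=1000)))
clf.fit(texts, Y)
pred = clf.predict(["snow removal services"])
print(mlb.inverse_transform(pred))
```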
155

ML enhanced interpretation of failed test result

Pechetti, Hiranmayi January 2023 (has links)
This master thesis addresses the problem of classifying test failures in Ericsson AB's BAIT test framework, specifically distinguishing between environment faults and product faults. The project aims to automate the initial defect classification process, reducing manual work and facilitating faster debugging. The significance of this problem lies in the potential time and cost savings it offers to Ericsson and other companies using similar test frameworks. By automating the classification of test failures, developers can quickly identify the root cause of an issue and take appropriate action, leading to improved efficiency and productivity. To solve this problem, the thesis employs machine learning techniques. A dataset of test logs is used to evaluate the performance of six classification models: logistic regression, support vector machines, k-nearest neighbors, naive Bayes, decision trees, and XGBoost. Precision and macro F1 scores serve as the evaluation metrics. The results demonstrate that all models perform well in classifying test failures, achieving high precision values and macro F1 scores. The decision tree and XGBoost models exhibit perfect precision scores for product faults, while the naive Bayes model achieves the highest macro F1 score. These findings highlight the effectiveness of machine learning in accurately distinguishing between environment faults and product faults within the BAIT framework. Developers and organizations can benefit from the automated defect classification system, which reduces manual effort and expedites the debugging process. The successful application of machine learning in this context opens up opportunities for further research and development in automated defect classification algorithms.
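A minimal sketch of the model comparison the thesis describes, assuming scikit-learn models and synthetic features in place of the real test-log data; XGBoost is omitted here to keep the dependencies to scikit-learn alone.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import precision_score, f1_score

# Synthetic stand-in for featurized test logs (0 = environment fault,
# 1 = product fault).
X, y = make_classification(n_samples=500, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {"logreg": LogisticRegression(max_iter=1000),
          "svm": SVC(), "knn": KNeighborsClassifier(),
          "nb": GaussianNB(), "tree": DecisionTreeClassifier(random_state=0)}

# Evaluate each classifier with the metrics named in the abstract.
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(name,
          "precision:", round(precision_score(y_te, pred), 3),
          "macro F1:", round(f1_score(y_te, pred, average="macro"), 3))
```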
156

Arabic Language Processing for Text Classification. Contributions to Arabic Root Extraction Techniques, Building An Arabic Corpus, and to Arabic Text Classification Techniques.

Al-Nashashibi, May Y.A. January 2012 (has links)
The impact and dynamics of Internet-based resources for Arabic-speaking users are growing in significance, depth and breadth at a higher pace than ever, and thus require updated mechanisms for the computational processing of Arabic texts. Arabic is a complex language and as such requires in-depth investigation, both to analyse and improve available automatic processing techniques, such as root extraction methods and text classification techniques, and to develop labelled text collections, whether single- or multi-labelled. This thesis proposes new ideas and methods to improve available automatic processing techniques for Arabic texts. Any automatic processing technique requires data in order to be used, critically reviewed and assessed, so an attempt to develop a labelled Arabic corpus is also made. The thesis is composed of three parts: (1) developing an Arabic corpus, (2) proposing, improving and implementing root extraction techniques, and (3) proposing and investigating the effect of different pre-processing methods on single-label text classification for Arabic. The thesis first develops an Arabic corpus prepared for testing root extraction methods as well as single-label text classification techniques. It then enhances a rule-based root extraction method by handling irregular cases (which appear in about 34% of texts), proposes and implements two expanded algorithms as well as an adjustment to a weight-based method, adds the irregular-case handling algorithm to all of these, and compares the performance of the proposed methods with the originals. The thesis also develops a root extraction system that handles foreign Arabized words by constructing a list of about 7,000 foreign words. The technique with the best accuracy in extracting the correct stem and root, an enhanced rule-based method, is used in the third part of the thesis. Finally, the thesis proposes and implements a variant term frequency-inverse document frequency (TF-IDF) weighting method and investigates the effect of different feature choices in document representation (words, stems or roots, and each of these extended with their respective phrases) on single-label text classification performance, applying forty-seven classifiers to all proposed representations and comparing their performances. One challenge for researchers in Arabic text processing is that the root extraction techniques reported in the literature are either not accessible or take a long time to reproduce, while no labelled benchmark Arabic text corpus is fully available online. Moreover, few machine learning techniques have so far been investigated for Arabic with the usual pre-processing steps applied before classification. These challenges are addressed in this thesis by developing a new labelled Arabic text corpus for extended applications of computational techniques. The results show that the proposed algorithm for handling irregular Arabic words improved the performance of all implemented root extraction techniques. Its performance is evaluated in terms of accuracy improvement and execution time, and its efficiency, investigated across different document lengths, is empirically found to be linear in time for document lengths below about 8,000.
The rule-based technique shows the greatest improvement among the implemented root extraction methods when the irregular-case handling algorithm is included. The thesis validates that choosing roots or stems instead of words in document representations significantly improves single-label classification performance for most of the classifiers used, whereas extending these representations with their respective phrases brings no significant improvement. Many classifiers, such as the ripple-down rule classifier, had not previously been tested on Arabic. Comparing the classifiers' performances shows that the Bayesian network classifier is significantly the best in terms of accuracy, training time, and root mean square error for all proposed and implemented representations. / Petra University, Amman (Jordan)
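The root-versus-word representation choice can be illustrated with a custom analyzer feeding scikit-learn's TF-IDF weighting; the toy English root table below is a fabricated stand-in for a real Arabic root extraction method, and the weighting shown is standard TF-IDF rather than the thesis's variant.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Fabricated token-to-root lookup standing in for a root extractor.
root_table = {"writers": "write", "writing": "write",
              "books": "book", "booked": "book"}

def root_analyzer(doc):
    # Map each token to its root when known, else keep the token.
    return [root_table.get(tok, tok) for tok in doc.lower().split()]

vec = TfidfVectorizer(analyzer=root_analyzer)
X = vec.fit_transform(["Writers writing books", "Booked books"])
print(vec.get_feature_names_out())  # roots collapse the vocabulary
print(X.toarray().round(2))
```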
157

Evaluation of the performance of machine learning techniques for email classification

Tapper, Isabella January 2022 (has links)
Manual categorization of a mail inbox can be time-consuming, so many attempts have been made to use machine learning for this task. One essential Natural Language Processing (NLP) task is text classification, which is a considerable challenge since an NLP engine is not a native speaker of any human language and often fails to understand sarcasm and underlying intent. One of the challenges in NLP is representing text: embeddings can be learned, or they can be generated by a pre-trained model. The pre-trained Sentence Bidirectional Encoder Representations from Transformers (SBERT) model is state-of-the-art for generating vector representations of longer text. In this project, different methods of classifying and clustering emails were studied. The performance of three supervised classification models was compared: a Support Vector Machine (SVM) and a Neural Network (NN) were trained on SBERT embeddings, while the third model, a Recurrent Neural Network (RNN), was trained on raw data. The motivation for this experiment was to see whether SBERT embeddings are a good choice of text representation when combined with simpler classification models in an email classification task. The results show that the SVM and NN outperform the RNN. Since most real data is unlabeled, this thesis also evaluated how well unsupervised methods could cluster emails, using SBERT embeddings as text representations and taking advantage of the available labels for evaluation. Three unsupervised clustering models are reviewed: K-Means (KM), Spectral Clustering (SC), and Hierarchical Agglomerative Clustering (HAC). The three clustering models showed similar performance in terms of precision, recall and F1-score, evaluated against the available labeled dataset. In conclusion, this thesis gives evidence that, in an email classification task, supervised models do better training on pre-trained SBERT embeddings than on raw data, and that the output of the clustering methods is comparable with that of the selected supervised learning techniques.
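A minimal sketch of the encode-once, classify-or-cluster pipeline described above, assuming the sentence-transformers package; the 'all-MiniLM-L6-v2' checkpoint and the email texts are stand-ins, not the thesis's actual model or data.

```python
from sentence_transformers import SentenceTransformer
from sklearn.svm import SVC
from sklearn.cluster import KMeans

emails = ["Invoice attached, payment due Friday",
          "Team lunch this Thursday at noon",
          "Your invoice #123 is overdue",
          "Lunch menu for the office party"]
labels = ["finance", "social", "finance", "social"]

encoder = SentenceTransformer("all-MiniLM-L6-v2")
X = encoder.encode(emails)   # one dense SBERT-style vector per email

# Supervised route: a simple classifier on top of the embeddings.
svm = SVC().fit(X, labels)
print(svm.predict(encoder.encode(["Overdue payment reminder"])))

# Unsupervised route: cluster the same embeddings.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)
```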
158

A Platform for Aligning Academic Assessments to Industry and Federal Job Postings

Parks, Tyler J. 07 1900 (has links)
The proposed tool provides users with a platform for a side-by-side comparison of classroom assessments and job posting requirements. Using techniques and methodologies from NLP, machine learning, data analysis, and data mining, the employed algorithm analyzes job postings and classroom assessments, extracts and classifies the skill units within them, and then compares sets of skills from the different input volumes. This effectively provides a predicted alignment between academic sources and career sources, both federal and industrial. The compiled tool results indicate an overall accuracy score of 82% and an alignment score of only 75.5% between the input assessments and the overall job postings. In other words, the 50 UNT assessments and 5,000 industry and federal job postings examined demonstrate a compatibility (alignment) of 75.5%, a measure calculated using a tool operating at an 82% precision rate.
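The tool's alignment formula is not spelled out in the abstract; as a loose illustration of comparing extracted skill sets, a Jaccard-style overlap could look like the following sketch (an assumption, not the tool's published method).

```python
def alignment(assessment_skills, posting_skills):
    # Share of skills common to both sets, out of all skills seen.
    a, b = set(assessment_skills), set(posting_skills)
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical extracted skill units.
assessment = ["python", "sql", "data analysis"]
posting = ["python", "sql", "machine learning", "communication"]
print(f"alignment: {alignment(assessment, posting):.1%}")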
159

Low-resource suicide ideation and depression detection with multitask learning and large language models

Breau, Pierre-William 08 1900 (has links)
In this work we explore natural language processing (NLP) methods for detecting suicide ideation, depression, and anxiety in social media posts. Since annotated mental health data is scarce and difficult to come by, classical machine learning methods have traditionally been employed for this type of task owing to the small size of the datasets. We evaluate the effect of multi-task learning on suicide ideation detection, using publicly available datasets for depression, anxiety, emotion and stress classification as auxiliary tasks. We find that the classification performance for suicide ideation, depression, and anxiety improves when the tasks are trained together, owing to the proximity between the mental disorders, and we observe that publicly available datasets for closely related tasks can benefit the detection of certain mental health conditions. We then perform classification experiments with ChatGPT and GPT-4 using zero-shot and few-shot learning, and find that GPT-4 obtains the best performance of all methods tested for suicide ideation detection. We further observe that ChatGPT benefits the most from few-shot learning, as it struggles to give conclusive answers when no examples are provided. Finally, an analysis of the false negatives output by GPT-4 for suicide ideation detection concludes that they are due to labeling errors rather than mistakes by the model.
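A minimal PyTorch sketch of hard-parameter-sharing multitask learning, the standard setup the abstract's multi-task experiments suggest: a shared encoder with one classification head per task. The dimensions, data, and three-task split are dummies, not the thesis's architecture.

```python
import torch
import torch.nn as nn

class MultiTaskClassifier(nn.Module):
    def __init__(self, in_dim=768, hidden=128, n_tasks=3):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        # One binary head per task; every head backprops into the shared layers.
        self.heads = nn.ModuleList([nn.Linear(hidden, 2) for _ in range(n_tasks)])

    def forward(self, x, task_id):
        return self.heads[task_id](self.shared(x))

model = MultiTaskClassifier()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# One training step per task on dummy encoded posts (batches of 4).
for task_id in range(3):
    x = torch.randn(4, 768)          # stand-in sentence embeddings
    y = torch.randint(0, 2, (4,))    # stand-in binary labels
    loss = loss_fn(model(x, task_id), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    print(f"task {task_id} loss {loss.item():.3f}")
```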
160

Semi-supervised adverse drug reaction detection

Ohl, Louis January 2021 (has links)
Pharmacovigilance consists in carefully monitoring drugs in order to re-evaluate their risk to people's health. The sooner adverse drug reactions are detected, the sooner one can act on them. This thesis aims at discovering such reactions in electronic health records under the constraint of scarce annotated data, in order to replicate the situation of the Regional Center for Pharmacovigilance of Nice. We investigate how, in a semi-supervised learning design, unlabeled data can contribute to improved classification scores. Results suggest an excellent recall in discovering adverse reactions, and possible classification improvements under specific data distributions.
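The abstract does not name the semi-supervised method used, so the following sketches one common choice, scikit-learn's self-training wrapper, on synthetic data; unlabeled records are marked with -1 and pseudo-labeled when the base model is confident.

```python
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification

# Synthetic stand-in for health records with a binary ADR label.
X, y = make_classification(n_samples=300, random_state=0)
y_partial = y.copy()
y_partial[30:] = -1  # only the first 30 records keep their label

clf = SelfTrainingClassifier(LogisticRegression(max_iter=1000), threshold=0.9)
clf.fit(X, y_partial)

# Score on the records that were unlabeled during training.
print("accuracy on hidden labels:",
      (clf.predict(X[30:]) == y[30:]).mean().round(3))
```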
