381

Using a Character-Based Language Model for Caption Generation / Användning av teckenbaserad språkmodell för generering av bildtext

Keisala, Simon January 2019 (has links)
Using AI to automatically describe images is a challenging task. The aim of this study has been to compare the use of character-based language models with one of the current state-of-the-art token-based language models, im2txt, for generating image captions, with a focus on morphological correctness. Previous work has shown that character-based language models can outperform token-based language models in morphologically rich languages. Other studies show that simple multi-layered LSTM blocks can learn to replicate the syntax of their training data. To study the usability of character-based language models, an alternative model based on TensorFlow im2txt has been created. The model changes the token-generation architecture to handle character-sized tokens instead of word-sized tokens. The results suggest that a character-based language model could outperform the current token-based language models, although due to time and computing-power constraints this study cannot draw a clear conclusion. A problem with one of the methods, subsampling, is discussed: when the original method is applied to character-sized tokens, it removes individual characters (including special characters) instead of full words. To solve this issue, a two-phase approach is suggested, in which the training data is first separated into word-sized tokens, on which subsampling is performed, and the remaining tokens are then split into character-sized tokens (see the sketch below). Future work applying the modified subsampling and fine-tuning the hyperparameters is suggested to reach a clearer conclusion about the performance of character-based language models.
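The proposed two-phase subsampling can be made concrete with a minimal sketch. This is not the thesis's code; it assumes word2vec-style frequency subsampling (discard probability 1 − √(t/f(w)), Mikolov et al. 2013), and the `<w>` boundary token is a hypothetical choice:

```python
import random
from collections import Counter

def two_phase_subsample(text, t=0.1, seed=0):
    """Phase 1: word2vec-style frequency subsampling on word tokens.
    Phase 2: split the surviving words into character-sized tokens."""
    rng = random.Random(seed)
    words = text.split()
    counts = Counter(words)
    total = len(words)

    kept = []
    for w in words:
        f = counts[w] / total                        # relative frequency of w
        p_discard = max(0.0, 1.0 - (t / f) ** 0.5)   # subsampling formula
        if rng.random() >= p_discard:
            kept.append(w)

    # Phase 2: character tokens, marking word boundaries with a special token
    chars = []
    for w in kept:
        chars.extend(list(w))
        chars.append("<w>")                          # hypothetical boundary token
    return chars

print(two_phase_subsample("the cat sat on the mat the end"))
```

The threshold t is corpus-dependent; the canonical 1e-4 only makes sense on corpora far larger than this toy sentence.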
382

Improving the Performance of Clinical Prediction Tasks by Using Structured and Unstructured Data Combined with a Patient Network

Nouri Golmaei, Sara 08 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / With the increasing availability of Electronic Health Records (EHRs) and advances in deep learning techniques, developing deep predictive models that use EHR data to solve healthcare problems has gained momentum in recent years. The majority of clinical predictive models benefit from structured data in EHRs (e.g., lab measurements and medications). Still, learning clinical outcomes from all possible information sources is one of the main challenges when building predictive models. This work focuses mainly on two sources of information that have been underused by researchers: unstructured data (e.g., clinical notes) and a patient network. We propose a novel hybrid deep learning model, DeepNote-GNN, that integrates clinical notes and the topological structure of a patient network to improve 30-day hospital readmission prediction. DeepNote-GNN is a robust deep learning framework consisting of two modules: DeepNote and the patient network. DeepNote extracts deep representations of clinical notes using a feature aggregation unit on top of a state-of-the-art Natural Language Processing (NLP) technique, BERT. By exploiting these deep representations, a patient network is built, and a Graph Neural Network (GNN) is trained on it for hospital readmission prediction. Performance evaluation on the MIMIC-III dataset demonstrates that DeepNote-GNN achieves superior results compared to state-of-the-art baselines on the 30-day hospital readmission task. We extensively analyze the DeepNote-GNN model to illustrate the effectiveness and contribution of each of its components. The model analysis shows that the patient network contributes significantly to the overall performance, and that DeepNote-GNN is robust and performs consistently well on the 30-day readmission prediction task. To evaluate how the DeepNote and patient-network modules generalize to new prediction tasks, we create a multimodal model and train it on structured and unstructured data from the MIMIC-III dataset to predict patient mortality and Length of Stay (LOS). Our proposed multimodal model consists of four components: DeepNote, the patient network, DeepTemporal, and score aggregation. While DeepNote keeps its functionality and extracts representations of clinical notes, we build a DeepTemporal module using a fully connected layer stacked on top of a one-layer Gated Recurrent Unit (GRU) to extract deep representations of temporal signals. Independently of DeepTemporal, we extract feature vectors of the temporal signals and use them to build a patient network. Finally, the DeepNote, DeepTemporal, and patient-network scores are linearly aggregated to fit the multimodal model on downstream prediction tasks. Our results are very competitive with the baseline model. The multimodal model analysis reveals that unstructured text data contribute more to the predictions than temporal signals. Moreover, nothing prevents applying a patient network to structured data. In comparison to the other modules, the patient network makes a more significant contribution to the prediction tasks. We believe that our efforts in this work have opened up a new study area that can be used to enhance the performance of clinical predictive models.
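A minimal sketch of the note-embeddings-to-patient-network-to-GNN idea described above, not the thesis's implementation: it assumes cosine-similarity thresholding to build the graph and a two-layer GCN classifier, and the threshold and dimensions are illustrative.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv  # assumes PyTorch Geometric is installed

def build_patient_graph(note_embeddings, threshold=0.0):
    """Connect patients whose clinical-note embeddings are cosine-similar."""
    x = F.normalize(note_embeddings, dim=1)
    sim = x @ x.t()                          # pairwise cosine similarities
    src, dst = (sim > threshold).nonzero(as_tuple=True)
    mask = src != dst                        # drop self-loops
    return torch.stack([src[mask], dst[mask]])

class ReadmissionGNN(torch.nn.Module):
    def __init__(self, in_dim=768, hidden=64):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, 2)      # readmitted within 30 days vs. not

    def forward(self, x, edge_index):
        h = F.relu(self.conv1(x, edge_index))
        return self.conv2(h, edge_index)

# Toy usage: 8 patients with 768-dim BERT note embeddings (random stand-ins)
emb = torch.randn(8, 768)
edge_index = build_patient_graph(emb)
logits = ReadmissionGNN()(emb, edge_index)
```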
383

Prediction of Psychosis Using Big Web Data in the United States

Tadisetty, Srikanth 15 August 2018 (has links)
No description available.
384

Drömmen om Artificiell Intelligens (AI) : En studie angående utmaningar med att implementera Artificiell Intelligens inom myndigheter / The Dream of Artificial Intelligence (AI)

Nilsson, Adam, Hathalia, Abbas January 2020 (has links)
The purpose of the study was to find out what challenges government agencies have encountered when implementing Artificial Intelligence. The method used was qualitative, and the interviews were conducted remotely. Four government agencies were interviewed, and respondents were asked about what they had experienced as challenges in the implementation of AI. The results were compiled by extracting themes from the transcribed interviews and analyzed against previous studies. The survey identifies a number of challenges linked to three main themes: a lack of knowledge, challenges around data, and the timing of when challenges arise.
385

Performance Benchmarking and Cost Analysis of Machine Learning Techniques : An Investigation into Traditional and State-Of-The-Art Models in Business Operations / Prestandajämförelse och kostnadsanalys av maskininlärningstekniker : en undersökning av traditionella och toppmoderna modeller inom affärsverksamhet

Lundgren, Jacob, Taheri, Sam January 2023 (has links)
As society is becoming more data-driven, Artificial Intelligence (AI) and Machine Learning are revolutionizing how companies operate and evolve. This study explores the use of AI, Big Data, and Natural Language Processing (NLP) in improving business operations and intelligence in enterprises. The primary objective of this thesis is to examine whether the current classification process at the host company can be maintained with reduced operating costs, specifically lower cloud GPU costs. This can improve the classification method, enhance the product the company offers its customers through increased classification accuracy, and strengthen its value proposition. Furthermore, three approaches are evaluated against each other, and the implementations showcase the evolution within the field. The models compared in this study include traditional machine learning methods such as Support Vector Machine (SVM) and Logistic Regression, alongside state-of-the-art transformer models like BERT, both pre-trained and fine-tuned. The paper shows a trade-off between performance and cost, illustrating the problem that many companies like Valu8 face when evaluating which approach to implement. This trade-off is discussed and analyzed in further detail to explore possible compromises from each perspective and strike a balanced solution that combines performance efficiency and cost-effectiveness.
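As a hedged illustration of the traditional baselines named above, the following sketch pits TF-IDF + Logistic Regression against a linear SVM on invented toy data; the thesis's actual features, labels, and evaluation differ.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Invented toy company descriptions and class labels
texts = ["quarterly revenue grew strongly", "new board member appointed",
         "shares fell after weak earnings", "CEO announces governance reform"]
labels = ["finance", "governance", "finance", "governance"]

for clf in (LogisticRegression(max_iter=1000), LinearSVC()):
    model = make_pipeline(TfidfVectorizer(), clf)  # CPU-only, no GPU cost
    model.fit(texts, labels)
    print(type(clf).__name__, model.predict(["stock price rose on earnings"]))
```

Such linear models run on CPU, which is the cost side of the trade-off the thesis weighs against the accuracy of GPU-hungry transformers.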
386

Exploring Automatic Synonym Generation for Lexical Simplification of Swedish Electronic Health Records

Jänich, Anna January 2023 (has links)
Electronic health records (EHRs) are used in Sweden's healthcare systems to store patients' medical information, and patients in Sweden have the right to access and read their health records. Unfortunately, the language used in EHRs is very complex and presents a challenge for readers who lack medical knowledge. Simplifying the language used in EHRs could facilitate the transfer of information between medical staff and patients. This project investigates the possibility of generating Swedish medical synonyms automatically. These synonyms are intended for use in future lexical simplification systems that can enhance the readability of Swedish EHRs and simplify medical terminology. Currently, publicly available Swedish corpora that provide synonyms for medical terminology are insufficient in size to be utilized in a system for lexical simplification. To overcome this obstacle, machine learning models are trained to generate synonyms and terms that convey medical concepts in a more understandable way. To establish a foundation for analyzing complex medical terms, a simple mechanism for Complex Word Identification (CWI) is implemented; it relies on matching strings and substrings against a pre-existing corpus of hand-curated Swedish medical terms. To find a suitable strategy for generating medical synonyms automatically, seven different machine learning models are queried for synonym suggestions for 50 complex sample terms. To explore the effect of different input data, we trained our models on datasets of varying sizes. Three of the seven models are based on BERT and four on Word2Vec. For each model, results for the 50 complex sample terms are generated, and raters with medical knowledge are asked to assess whether the automatically generated suggestions can be considered synonyms. The results vary between the models and appear connected to the amount and quality of the data they have been trained on. Furthermore, the raters show considerable disagreement, revealing the complexity and subjectivity of finding suitable and widely accepted medical synonyms. The method and models applied in this project do not succeed in creating a stable source of suitable synonyms. The chosen BERT approach, based on Masked Language Modelling, cannot reliably generate suitable synonyms because it generates only one token per synonym suggestion, and the Word2Vec models show weaknesses because they do not take context into account. Although the current performance of our models in generating automatic synonym suggestions is not entirely satisfactory, we have observed a promising number of accurate suggestions. This gives us reason to believe that with further training and a larger amount of Swedish medical text as input, the models could be improved and eventually applied effectively.
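The single-token limitation of the masked-language-modelling approach can be seen in a small sketch. The checkpoint KB/bert-base-swedish-cased and the example sentence are assumptions; the abstract does not name the models or data used.

```python
from transformers import pipeline

# Fill-mask can only propose ONE token per [MASK], which is the limitation
# the thesis identifies for multi-word medical synonyms.
unmasker = pipeline("fill-mask", model="KB/bert-base-swedish-cased")

for s in unmasker("Patienten har en [MASK] i vänster lunga.", top_k=5):
    print(s["token_str"], round(s["score"], 3))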
387

Automating the Experimental Laboratory

Kulkarni, Chaitanya Krishnaji January 2021 (has links)
No description available.
388

Data Analysis of Discussions, Regarding Common Vulnerabilities and Exposures, and their Sentiment on Social Media / Dataanalys av diskussioner, gällande vanliga säkerhetssårbarheter och exponeringar, och deras sentiment på sociala medier

Rahmati, Mustafa, Grujicic, Danijel January 2022 (has links)
As common vulnerabilities and exposures are detected, they are also discussed on various social platforms. The problem is that only a few of the posts about them get enough attention, which leads to a lack of awareness of potential and critical threats against systems. It is therefore important to look for patterns that make certain vulnerabilities more or less discussed. To do so, a framework was built for collecting Reddit discussions about cybersecurity and, more specifically, Common Vulnerabilities and Exposures (CVEs). In addition, some of the desired data was collected from Twitter. Thereafter, the sentiment of the collected posts was calculated to find patterns between popular subreddits and the attitudes expressed in them. This was done with three methods: Flair, TextBlob, and VADER. The results showed, for instance, that general discussions about information security were more positive than discussions of common vulnerabilities and exposures. Another result showed that the spread of CVEs with a partial impact is higher on Reddit and is increasing almost exponentially. CVSS scores showed that a CVE with a score of around 7 is more likely to appear. Many CVEs were also discussed on Reddit both before and after they were disclosed. The implication of this work might be that more people use Reddit to discuss specific types of CVEs in a suitable subreddit and become aware of common vulnerabilities and exposures, in order to prevent future threats.
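A minimal sketch of scoring a post with two of the three sentiment methods named above (VADER and TextBlob); the example post is invented and Flair is omitted for brevity.

```python
from textblob import TextBlob
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

post = "CVE-2021-44228 is being exploited in the wild -- patch log4j now!"

vader = SentimentIntensityAnalyzer()
# VADER's 'compound' score ranges from -1 (negative) to +1 (positive)
print("VADER compound:", vader.polarity_scores(post)["compound"])
# TextBlob polarity is also in [-1, 1]
print("TextBlob polarity:", TextBlob(post).sentiment.polarity)
```

Running multiple lexicon-based scorers like this on the same post is one way to surface the disagreements between methods that motivate using three of them.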
389

Extending the explanatory power of factor pricing models using topic modeling / Högre förklaringsgrad hos faktorprismodeller genom topic modeling

Everling, Nils January 2017 (has links)
Factor models attribute stock returns to a linear combination of factors. A model with great explanatory power (R²) can be used to estimate the systematic risk of an investment. One of the most important factors is the industry in which the company behind the stock operates. In commercial risk models this factor is often determined with a manually constructed stock classification scheme such as GICS. We present the Natural Language Industry Scheme (NLIS), an automatic and multivalued classification scheme based on topic modeling. The topic modeling is performed on transcripts of company earnings calls and identifies a number of topics analogous to industries. We use non-negative matrix factorization (NMF) on a term-document matrix of the transcripts to perform the topic modeling. When set to explain returns of the MSCI USA index, we find that NLIS consistently outperforms GICS, often by several hundred basis points. We attribute this to NLIS's ability to assign a stock to multiple industries. We also suggest that the proportions of industry assignments for a given stock could correspond to expected future revenue sources rather than current revenue sources. This property could explain some of NLIS's success, since it closely relates to theoretical stock pricing.
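A compact sketch of the NMF topic-modeling step described above, assuming a TF-IDF term-document matrix and invented toy transcripts; the thesis's preprocessing and hyperparameters are not specified in the abstract.

```python
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy stand-ins for earnings-call transcripts
transcripts = [
    "oil production pipeline barrels drilling",
    "software cloud subscription revenue platform",
    "drilling rig offshore crude oil",
    "cloud platform users software growth",
]

tfidf = TfidfVectorizer()
X = tfidf.fit_transform(transcripts)       # document-term matrix
nmf = NMF(n_components=2, random_state=0)
W = nmf.fit_transform(X)                   # per-document topic weights

terms = tfidf.get_feature_names_out()
for k, comp in enumerate(nmf.components_):
    top = comp.argsort()[-4:][::-1]        # strongest terms per topic
    print(f"topic {k}:", [terms[i] for i in top])
print(W.round(2))                          # multivalued "industry" assignments
```

The rows of W are what make the scheme multivalued: each company gets a weight in every topic rather than a single industry label, which is the property credited for outperforming GICS.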
390

Distributionella representationer av ord för effektiv informationssökning : Algoritmer för sökning i kundsupportforum / Distributional Representations of Words for Effective Information Retrieval : Information Retrieval in Customer Support Forums

Lachmann, Tim, Sabel, Johan January 2017 (has links)
As the abundance of information in society increases, so does the need for more sophisticated methods of information retrieval. Extracting information from internal company systems becomes a more complex task when larger amounts of information must be handled and more communication is moved to digital platforms. In recent years, methods for embedding words in vector space have gained traction; in 2013, Google sent ripples across the field of Natural Language Processing with a new method called Word2vec, significantly outperforming former practices. Among different established methods for information retrieval, we implement a retrieval method utilizing Word2vec and related word-embedding methods for the search engine at the IT company Kundo and their product Kundo Forum. We demonstrate the potential to improve information retrieval recall by a significant margin without diminishing precision. Coupled with the primary subject of information retrieval, we also investigate the potential market and product-development implications of an improved search engine.
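A minimal sketch of Word2vec-based query expansion of the kind described, assuming gensim 4 and an invented toy forum corpus; in practice the model would be trained on Kundo Forum posts and the hyperparameters tuned.

```python
from gensim.models import Word2Vec

# Invented toy corpus of tokenized support-forum posts
posts = [["invoice", "payment", "failed"],
         ["billing", "invoice", "error"],
         ["password", "reset", "login"],
         ["login", "failed", "password"]]

model = Word2Vec(posts, vector_size=32, window=2, min_count=1,
                 epochs=200, seed=1)

# Expand a query term with its nearest neighbours in embedding space,
# so the engine can match posts that never contain the literal query word
query = "invoice"
expansion = [w for w, _ in model.wv.most_similar(query, topn=2)]
print([query] + expansion)
```

Retrieving on the expanded term set is what raises recall: semantically related posts are matched even without exact keyword overlap, while precision is preserved by ranking on similarity.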
