1

Natural language processing techniques for the purpose of sentinel event information extraction

Barrett, Neil (23 November 2012)
One approach to biomedical language processing is to apply existing natural language processing (NLP) solutions to biomedical texts. Existing NLP solutions are often less successful in the biomedical domain than in their original domain (e.g., newspaper text). Biomedical NLP is likely best served by methods, information and tools that account for its particular challenges. In this thesis, I describe an NLP system specifically engineered for sentinel event extraction from clinical documents. The NLP system's design accounts for several biomedical NLP challenges. The specific contributions are as follows:

- Biomedical tokenizers differ, lack consensus over output tokens and are difficult to extend. I developed an extensible tokenizer, providing a tokenizer design pattern and implementation guidelines. It evaluated as equivalent to a leading biomedical tokenizer (MedPost).
- Biomedical part-of-speech (POS) taggers are often trained on non-biomedical corpora and applied to biomedical corpora, which reduces tagging accuracy. I built a token-centric POS tagger, TcT, that is more accurate than three existing POS taggers (mxpost, TnT and Brill) when trained on a non-biomedical corpus and evaluated on biomedical corpora. TcT achieves this increase in tagging accuracy by ignoring previously assigned POS tags and restricting the tagger's scope to the current, previous and following tokens.
- Two parsers, MST and Malt, had previously been evaluated using perfect POS tag input. Given that perfect input is unlikely in biomedical NLP tasks, I evaluated these two parsers on imperfect POS tag input and compared their results. MST was more affected by imperfectly POS-tagged biomedical text. I attributed MST's drop in performance to verbs and adjectives, where MST had more potential for performance loss than Malt, and I attributed Malt's resilience to POS tagging errors to its use of a rich feature set and a local scope in decision making.
- Previous automated clinical coding (ACC) research focuses on mapping narrative phrases to terminological descriptions (e.g., concept descriptions). These methods make little or no use of the additional semantic information available through topology. I developed a token-based ACC approach that encodes tokens and manipulates token-level encodings by mapping linguistic structures to topological operations in SNOMED CT. My ACC method recalled most concepts given their descriptions and performed significantly better than MetaMap.

I extended these contributions for the purpose of sentinel event extraction from clinical letters. The extensions account for negation in text, use medication brand names during ACC and model (coarse) temporal information. My software system's performance is similar to state-of-the-art results. Given all of the above, my thesis is a blueprint for building a biomedical NLP system, and my contributions likely apply to NLP systems in general.
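The token-centric tagging idea described in the abstract can be illustrated with a toy sketch. This is not the thesis's TcT implementation; it merely shows the core constraint the abstract names: each token's tag is decided from the (previous, current, next) token window alone, with no use of previously assigned tags. All class and variable names here are invented.

```python
from collections import Counter, defaultdict

class WindowTagger:
    """Toy token-centric tagger: each tag depends only on the
    (previous, current, next) token window, never on earlier tag decisions."""

    def __init__(self):
        self.trigram = defaultdict(Counter)  # (prev, cur, nxt) -> tag counts
        self.unigram = defaultdict(Counter)  # cur -> tag counts
        self.overall = Counter()             # corpus-wide fallback tag counts

    def train(self, sentences):
        # sentences: list of [(token, tag), ...]
        for sent in sentences:
            tokens = [t for t, _ in sent]
            for i, (tok, tag) in enumerate(sent):
                prev = tokens[i - 1] if i > 0 else "<s>"
                nxt = tokens[i + 1] if i + 1 < len(tokens) else "</s>"
                self.trigram[(prev, tok, nxt)][tag] += 1
                self.unigram[tok][tag] += 1
                self.overall[tag] += 1

    def tag(self, tokens):
        out = []
        for i, tok in enumerate(tokens):
            prev = tokens[i - 1] if i > 0 else "<s>"
            nxt = tokens[i + 1] if i + 1 < len(tokens) else "</s>"
            # Back off: full window, then token alone, then corpus majority.
            if (prev, tok, nxt) in self.trigram:
                best = self.trigram[(prev, tok, nxt)].most_common(1)[0][0]
            elif tok in self.unigram:
                best = self.unigram[tok].most_common(1)[0][0]
            else:
                best = self.overall.most_common(1)[0][0]
            out.append((tok, best))
        return out

tagger = WindowTagger()
tagger.train([[("the", "DT"), ("patient", "NN"), ("coughs", "VBZ")]])
print(tagger.tag(["the", "patient", "coughs"]))
# → [('the', 'DT'), ('patient', 'NN'), ('coughs', 'VBZ')]
```

Because no earlier tag decision feeds into later ones, a tagging error cannot propagate along the sentence, which is one plausible reading of why such a restricted scope can help under domain shift.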
2

Deep Neural Networks for Inverse De-Identification of Medical Case Narratives in Reports of Suspected Adverse Drug Reactions

Meldau, Eva-Lisa (January 2018)
Medical research requires detailed and accurate information on individual patients. This is especially true in pharmacovigilance, which among other things seeks to identify previously unknown adverse drug reactions. Here, the clinical stories are often the starting point for assessing whether there is a causal relationship between a drug and a suspected adverse reaction. Reliable automatic de-identification of medical case narratives could make it possible to share such patient data without compromising the patient's privacy. Previous research on de-identification has focused on labelling the tokens in a narrative with the class of sensitive information they belong to. In this Master's thesis project, we explore an inverse approach: de-identification is instead understood as identifying the tokens which do not need to be removed from the text in order to ensure patient confidentiality. Our results show that this approach can lead to a more reliable method in terms of higher recall. We achieve a recall of sensitive information of 99.1% while precision is kept above 51% on the 2014-i2b2 benchmark data set. The model was also fine-tuned on case narratives from reports of suspected adverse drug reactions, where a recall of sensitive information of more than 99% was achieved. Although the precision was only 55%, which is lower than in comparable systems, an expert could still identify information useful for causality assessment in pharmacovigilance in most of the case narratives de-identified with our method. In more than 50% of the case narratives, no information useful for causality assessment was missing at all.
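The inverse framing can be sketched with a toy example. Instead of tagging sensitive tokens for removal, the system keeps only tokens it positively recognises as safe and redacts everything else, so unseen tokens (names, dates) are removed by default. This is an invented illustration, not the thesis's neural model; the safe-list and function names are hypothetical.

```python
import re

# Hypothetical safe-list: tokens judged non-identifying (clinical vocabulary,
# function words). Any word NOT on the list is redacted, so unknown tokens
# such as names or dates disappear by default, favouring recall of sensitive
# information at the cost of precision.
SAFE_TOKENS = {
    "the", "patient", "developed", "a", "rash", "after", "starting",
    "treatment", "with", "and", "was", "admitted",
}

def inverse_deidentify(text):
    tokens = re.findall(r"\w+|[^\w\s]", text)
    kept = [
        t if (t.lower() in SAFE_TOKENS or not t[0].isalnum()) else "[REDACTED]"
        for t in tokens  # punctuation passes through unchanged
    ]
    return " ".join(kept)

print(inverse_deidentify(
    "The patient John Smith developed a rash after starting treatment on 2017-03-02."
))
```

Note the failure mode the abstract reports: anything outside the safe-list is dropped, so precision suffers ("on" above is redacted needlessly), but a name missing from training data cannot leak through.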
3

Contrastive analysis of verbs in medical corpora and creation of a verbal text-simplification resource / Automatic analysis of verbs in texts of medical corpora: theoretical and applied issues

Wandji Tchami, Ornella (26 February 2018)
With the evolution of Web technology, healthcare documentation is becoming increasingly abundant and accessible to all, especially to patients, who have access to a large amount of health information. Unfortunately, the ease of access to medical information does not guarantee its correct understanding by the intended audience, in this case non-experts. Our PhD work aims at creating a resource for the simplification of medical texts, based on a syntactico-semantic analysis of verbs in four French medical corpora that are distinguished according to the level of expertise of their authors and that of the target audiences. The resource created in the present thesis contains 230 syntactico-semantic patterns of verbs (called pss), aligned with their non-specialized equivalents. The semi-automatic method applied for the analysis of verbs is based on four fundamental tasks: the syntactic annotation of the corpora, carried out with the Cordial parser (Laurent et al., 2009); the semantic annotation of verb arguments, based on semantic categories of the French version of the medical terminology Snomed International (Côté, 1996); the acquisition of syntactico-semantic patterns of verbs; and the contrastive analysis of the verbs' behaviour in the different corpora. The pss acquired at the end of this process undergo an evaluation (by three teams of medical experts) which leads to the selection of the candidates constituting the nomenclature of our text-simplification resource. These pss are then aligned with their non-specialized equivalents; this alignment leads to the creation of the simplification resource, which is the main result of our PhD study. The content of the resource was evaluated by two groups of people: linguists and non-linguists. The results show that the simplification of pss makes it easier for non-experts to understand the meaning of verbs used in a specialized way, especially when a certain set of parameters is met.
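One way to picture how such a resource might be queried: a pattern pairs a specialized verb with the semantic categories of its arguments, and the resource maps it to a lay paraphrase. The entries below are invented examples (the actual resource holds 230 expert-validated patterns not shown here), and the category labels are only loosely modelled on SNOMED-style classes.

```python
# Hypothetical verb-pattern simplification lookup. All entries are invented
# illustrations, not the thesis's actual resource.
SIMPLIFICATION_RESOURCE = {
    # (verb, (semantic categories of its arguments)) -> lay paraphrase
    ("administrer", ("PERSONNEL", "SUBSTANCE", "PATIENT")): "donner un médicament à",
    ("exciser", ("PERSONNEL", "STRUCTURE_CORPORELLE")): "enlever",
}

def simplify(verb, arg_categories):
    """Return a lay paraphrase for (verb, argument categories), if known."""
    return SIMPLIFICATION_RESOURCE.get((verb, tuple(arg_categories)))

print(simplify("exciser", ["PERSONNEL", "STRUCTURE_CORPORELLE"]))  # → enlever
```

Keying on the argument categories, not just the verb, is what makes the mapping pattern-level: the same verb can receive different paraphrases depending on what it combines with.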
4

A Pedagogy of Holistic Media Literacy: Reflections on Culture Jamming as Transformative Learning and Healing

Stasko, Carly (14 December 2009)
This qualitative study uses narrative inquiry (Connelly & Clandinin, 1988, 1990, 2001) and self-study to investigate ways to further understand and facilitate the integration of holistic philosophies of education with media literacy pedagogies. As founder and director of the Youth Media Literacy Project and a self-titled Imagitator (one who agitates imagination), I have spent over 10 years teaching media literacy in various high schools, universities, and community centres across North America. This study will focus on my own personal practical knowledge (Connelly & Clandinin, 1982) as a culture jammer, educator and cancer survivor to illustrate my original vision of a ‘holistic media literacy pedagogy’. This research reflects on the emergence and impact of holistic media literacy in my personal and professional life and also draws from relevant interdisciplinary literature to challenge and synthesize current insights and theories of media literacy, holistic education and culture jamming.
