  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Implementation and evaluation of a text extraction tool for adverse drug reaction information

Dahlberg, Gunnar January 2010 (has links)
Within the framework of the World Health Organization's (WHO) international drug monitoring programme, healthcare professionals and patients report suspected adverse drug reactions as spontaneous reports, which are forwarded via national authorities to the Uppsala Monitoring Centre (UMC). At UMC the reports are stored in VigiBase, WHO's adverse reaction database. The reports in VigiBase are analysed with statistical methods to find potential associations between drugs and adverse reactions. Identified associations are evaluated in several steps; an early step is to search the medical literature to see whether the association is already known (previously known associations are filtered out from further analysis). Manually searching for associations between a given drug and a given adverse reaction is time consuming. In this study we have developed a tool that automatically searches medical literature for adverse reaction terms and stores identified associations in a structured format. In the tool we have implemented and integrated functionality for matching adverse reaction terms in different ways (using synonyms, removing word endings, removing words that carry no meaning, allowing arbitrary word order and spelling errors). The tool's performance has been evaluated on manually extracted medical terms from SPC texts (texts from drug package leaflets) and on adverse effects texts from Martindale (a medical reference work on drugs and substances), with the WHO-ART and MedDRA terminologies as the sources of adverse reaction terms. The study shows that sophisticated text extraction can considerably improve the identification of adverse reaction terms in adverse effects texts compared with a verbatim extraction. / Background: Initial review of potential safety issues related to the use of medicines involves reading and searching existing medical literature sources for known associations between drugs and adverse drug reactions (ADRs), so that these can be excluded from further analysis. The task is labour demanding and time consuming. Objective: To develop a text extraction tool that automatically identifies ADR information in medical adverse effects texts, to evaluate the performance of the tool's underlying text extraction algorithm, and to identify which parts of the algorithm contributed to that performance. Method: A text extraction tool was implemented on the .NET platform with functionality for preprocessing text (removal of stop words, Porter stemming and use of synonyms) and for matching medical terms using permutations of words and spelling variations (Soundex, Levenshtein distance and longest common subsequence distance). Its performance was evaluated both on manually extracted medical terms (semi-structured texts) from summary of product characteristics (SPC) texts and on unstructured adverse effects texts from Martindale (a medical reference for information about drugs and medicines), using the WHO-ART and MedDRA medical term dictionaries. Results: For the SPC data set, a verbatim match identified 72% of the SPC terms. The text extraction tool correctly matched 87% of the SPC terms while producing one false positive match, using removal of stop words, Porter stemming, synonyms and permutations. The use of the full MedDRA hierarchy contributed the most to the performance; the sophisticated text algorithms contributed roughly equally among themselves. Phonetic codes (i.e. Soundex) are clearly inferior to string distance measures (i.e. Levenshtein distance and longest common subsequence distance) for fuzzy matching in our implementation. The string distance measures increased the number of matched SPC terms, but at the expense of generating false positive matches. Results from Martindale show that 90% of the identified medical terms were correct. The majority of false positive matches were caused by extracting medical terms that do not describe ADRs. Conclusion: Sophisticated text extraction can considerably improve the identification of ADR information from adverse effects texts compared to a verbatim extraction.
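A minimal sketch of the kind of fuzzy term matching described above — stop-word removal, stemming, order-insensitive comparison and an edit-distance fallback — assuming NLTK's Porter stemmer is available. The stop-word list, the distance threshold and the example terms are illustrative assumptions, not the .NET implementation evaluated in the thesis.

```python
# Sketch only: match a dictionary of ADR terms against free text using
# stop-word removal, stemming, arbitrary word order and edit distance.
from nltk.stem import PorterStemmer  # assumes NLTK is installed

STOP_WORDS = {"of", "the", "and", "in", "with"}  # toy list
stemmer = PorterStemmer()

def normalise(term: str) -> frozenset:
    """Lower-case, drop stop words, stem, ignore word order."""
    tokens = [w for w in term.lower().split() if w not in STOP_WORDS]
    return frozenset(stemmer.stem(w) for w in tokens)

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def match(dictionary_term: str, text_fragment: str, max_dist: int = 1) -> bool:
    """True if every word of the dictionary term has a close counterpart in the text."""
    want, have = normalise(dictionary_term), normalise(text_fragment)
    return all(any(levenshtein(w, h) <= max_dist for h in have) for w in want)

print(match("pain abdominal", "abdominal pains reported"))  # True: word order, stemming, distance
```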
12

Untersuchungen zur Verbesserung der Resultatqualität bei Suchverfahren über Web-Archive

Hofmann, Frank 10 February 2003 (has links) (PDF)
An overview of advanced-search methods (TF-IDF, stemming, indexing, phonetic similarity of words) as well as text correction, together with descriptor-based description of documents and abstracts. These methods are evaluated on selected XML metadata from MONARCH. The thesis closes with an analysis of the current state of MONARCH with respect to the quality of the metadata used and its usability for advanced search.
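As a rough illustration of the TF-IDF weighting the overview refers to (not code from the thesis; the toy documents and the exact weighting variant are assumptions), each term in a document can be scored as tf(t, d) · log(N / df(t)):

```python
# Toy TF-IDF weighting over a three-document collection.
import math
from collections import Counter

docs = ["stemming improves recall", "phonetic search handles misspelled words",
        "stemming and indexing for archive search"]
tokenised = [d.split() for d in docs]
df = Counter(t for doc in tokenised for t in set(doc))  # document frequency per term
N = len(docs)

def tfidf(term: str, doc_tokens: list[str]) -> float:
    tf = doc_tokens.count(term) / len(doc_tokens)
    idf = math.log(N / df[term]) if df[term] else 0.0
    return tf * idf

for i, doc in enumerate(tokenised):
    print(i, {t: round(tfidf(t, doc), 3) for t in set(doc)})
```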
13

Hledání sémantické informace v textových datech s využitím latentní analýzy

Řezníček, Pavel January 2015 (has links)
The first part of the thesis gives a theoretical introduction to text mining methods: information retrieval, classification and clustering. The LSA method is presented as an advanced model for representing textual data. The thesis then describes the source data and the preprocessing and preparation methods used to enhance the effectiveness of the text mining methods. For each chosen text mining method, evaluation metrics are defined, and the existing or newly implemented programs that were used are presented. The results of experiments comparing the effects of different preprocessing types and of different models of the source data are then presented and discussed in the conclusion.
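A rough sketch of how LSA represents documents in a latent space via truncated SVD; the toy term-document matrix, the raw-count weighting and the choice of k = 2 are assumptions made for illustration, not the setup used in the thesis.

```python
# LSA in miniature: factorise a term-document matrix and compare documents
# in the reduced latent space.
import numpy as np

terms = ["data", "mining", "cluster", "retrieval"]
# toy term-document count matrix: rows = terms, columns = documents
A = np.array([[2, 0, 1],
              [1, 0, 1],
              [0, 3, 0],
              [1, 1, 0]], dtype=float)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2                                        # number of latent dimensions to keep
doc_vectors = (np.diag(s[:k]) @ Vt[:k]).T    # documents in the latent space

def cos(a, b):
    """Cosine similarity between two latent document vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cos(doc_vectors[0], doc_vectors[2]))   # docs 0 and 2 share vocabulary -> high similarity
```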
14

Teiresias: Datenbank-basiertes Online-Wörterbuch Neugriechisch-Deutsch

Helmchen, Christian 26 October 2017 (has links)
Mehrsprachige Anwendungen finden heute eine immer größere Verbreitung. Und zusehends treffen dabei Sprachen mit vollkommen unterschiedlichen Zeichensätzen aufeinander. Wie können aktuelle Datenbanksysteme und Programmiersprachen damit umgehen? Ist eine vollständige Unterstützung solch verschiedener Sprachen in heutigen datenbank-basierten Anwendungen möglich? Das werde ich in dieser Arbeit klären und ein praktisches Beispiel einer solchen Anwendung vorstellen: das Online-Wörterbuch Teiresias. Es vereint nicht nur die Sprachen Deutsch und Neugriechisch in sich, sondern es nutzt auch linguistisch interessante Verfahren zur Suche und zur Bewertung der Treffer. Außerdem greift es auf Data-Warehouse-Techniken zurück um eine möglichst hohe Effizienz zu erzielen. Die schwierige Übernahme des Datenbestandes der Vorgängerversion zeigt dabei auch die Probleme bei der Weiterentwicklung von Software und der Umstellung auf neue Technologien auf.
15

O efeito do uso de diferentes formas de extração de termos na compreensibilidade e representatividade dos termos em coleções textuais na língua portuguesa / The effect of using different forms of terms extraction on its comprehensibility and representability in Portuguese textual domains

Conrado, Merley da Silva 10 September 2009 (has links)
The task of term extraction in text collections, a subtask of text pre-processing in Text Mining, can be used for many purposes in knowledge extraction processes. These terms must be carefully extracted, since the quality of the terms obtained largely determines the results of the whole process. In this work, the quality of the terms covers both their representativity in the domain in question and their comprehensibility. Given this importance, this work evaluates the effect of different term simplification techniques on the comprehensibility and representativity of terms in text collections in Portuguese. The terms were extracted following the methodology presented in this work, and the techniques used were stemming (radicalização), lemmatization (lematização) and nominalization (substantivação). To support this methodology, a term extraction tool, ExtraT (Ferramenta para Extração de Termos), was developed. To guarantee the quality of the extracted terms, they were evaluated both objectively and subjectively. The subjective evaluations, assisted by domain specialists, cover the representativity of the terms in their respective documents, the comprehensibility of the terms obtained with each technique, and the specialists' overall preference for each technique. The objective evaluations, assisted by TaxEM (Taxonomia em XML da Embrapa) and by Thesagro (National Agricultural Thesaurus), consider the number of terms extracted by each technique as well as the representativity of the extracted terms with respect to their documents; this objective evaluation of representativity uses the CTW (Context Term Weight) measure as support. Eight real text collections from the agribusiness domain were used in the experimental evaluation. As a result, some positive and negative characteristics of the term simplification techniques were identified, showing that the choice of technique for this domain depends on the main pre-established goal, which can range from the need for terms that are comprehensible to the user to the need to work with a smaller number of terms.
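To make the comprehensibility trade-off concrete, here is a toy contrast between suffix stripping (radicalização) and dictionary-based lemmatization; the suffix list and the small lemma dictionary are invented for illustration and are not resources from ExtraT or the thesis.

```python
# Toy comparison of two term simplification techniques on Portuguese word forms.
SUFFIXES = ["ções", "ção", "mente", "adas", "ados", "ada", "ado", "ar", "s"]
LEMMAS = {"extraídos": "extrair", "termos": "termo", "avaliadas": "avaliar"}

def stem(word: str) -> str:
    """Crude suffix stripping: remove the first matching suffix."""
    for suf in SUFFIXES:
        if word.endswith(suf) and len(word) > len(suf) + 2:
            return word[: -len(suf)]
    return word

def lemmatise(word: str) -> str:
    """Dictionary lookup; falls back to the word itself."""
    return LEMMAS.get(word, word)

for w in ["extraídos", "termos", "avaliadas"]:
    print(f"{w:>10}  stem={stem(w):<9} lemma={lemmatise(w)}")
# A stem such as "avali" is more compact but less readable than the lemma
# "avaliar" -- the comprehensibility trade-off the study evaluates.
```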
17

Určení základního tvaru slova / Determination of basic form of words

Šanda, Pavel January 2011 (has links)
Lemmatization is an important preprocessing step for many text mining applications. The lemmatization process is similar to stemming, with the difference that it determines not only the word stem but the basic (dictionary) form of the word, using the Brute Force and Suffix Stripping methods. The main aim of this paper is to present methods for the algorithmic improvement of Czech lemmatization. The training data set created for this work is part of the paper and can be freely used for student and academic work dealing with similar problems.
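A hedged sketch of the two approaches named in the abstract — "Brute Force" read as a full-form dictionary lookup and "Suffix Stripping" as rule-based removal of endings. The word list and the Czech ending rules below are toy assumptions, not the training data created for the thesis.

```python
# Dictionary lookup first, suffix-stripping rules as a fallback.
FULL_FORMS = {"městech": "město", "ženami": "žena", "velkého": "velký"}  # brute-force table

# (ending to strip, replacement) -- a few illustrative Czech endings
RULES = [("ech", "o"), ("ami", "a"), ("ého", "ý"), ("y", ""), ("u", "")]

def brute_force(word: str) -> str | None:
    return FULL_FORMS.get(word)

def suffix_strip(word: str) -> str:
    for ending, repl in RULES:
        if word.endswith(ending) and len(word) - len(ending) >= 3:
            return word[: -len(ending)] + repl
    return word

def lemma(word: str) -> str:
    return brute_force(word) or suffix_strip(word)

for w in ["městech", "ženami", "velkého", "stromy"]:
    print(w, "->", lemma(w))
```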
19

Prototyp för att öka exponeringen av skönlitteratur på internet

Viderberg, Arvid, Hammersberg, Hampus January 2018 (has links)
On the internet today, the information used to expose books — such as genre, author, places and summary — is generated manually. The full text of books is not publicly available on the internet due to copyright law, and for this reason it is not possible to generate this type of information automatically. One solution is to build a prototype that processes the original work and automatically generates information that can be exposed on the internet without exposing the entire book. This report compares three algorithms for processing books: stemming, filtering of stop words and scrambling of sentences within paragraphs. The algorithms are compared with respect to generating relevant information for four services: search engines, automatic metadata, smart ads and text summarization. Search engines let a user search for, for example, the title of the book or a sentence from it. Automatic metadata extracts descriptive information from the book. Smart ads use descriptive information to recommend and promote books. Text summarization automatically creates a short, descriptive summary of the book. The information stored from the books should only be information relevant to these services and should have no literary value for a human reader. The results show that the combinations scrambling of sentences → filtering of stop words and filtering of stop words → scrambling of sentences are optimal in terms of searchability. It is also recommended to add stemming as an extra step in the processing of the original work, since it generates more relevant automatic metadata for the book.
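A small sketch of two of the compared transformations — stop-word filtering and scrambling of sentences within a paragraph — showing how keywords stay searchable while the readable order of the text is lost. The stop-word list, the sentence splitting and the example paragraph are illustrative assumptions, not the prototype from the report.

```python
# Stop-word filtering + sentence scrambling within a paragraph.
import random
import re

STOP_WORDS = {"och", "en", "det", "att", "på", "är", "som", "the", "a", "of", "and"}

def filter_stop_words(sentence: str) -> str:
    return " ".join(w for w in sentence.split() if w.lower() not in STOP_WORDS)

def scramble_paragraph(paragraph: str, seed: int = 0) -> list[str]:
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", paragraph) if s.strip()]
    random.Random(seed).shuffle(sentences)   # reorder sentences, keep each one intact
    return sentences

paragraph = "Boken handlar om en resa. Resan går över havet. Den slutar i en ny stad."
processed = [filter_stop_words(s) for s in scramble_paragraph(paragraph)]
print(processed)  # sentence order destroyed, stop words removed, keywords preserved
```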
20

Outomatiese Setswana lemma-identifisering / Jeanetta Hendrina Brits

Brits, Jeanetta Hendrina January 2006 (has links)
Within the context of natural language processing, a lemmatiser is one of the most important core technology modules that has to be developed for a particular language. A lemmatiser reduces words in a corpus to the corresponding lemmas of the words in the lexicon. A lemma is defined as the meaningful base form from which other, more complex forms (i.e. variants) are derived. Before a lemmatiser can be developed for a specific language, the concept "lemma" as it applies to that language must first be defined clearly. This study concludes that, in Setswana, only stems (and not roots) can act independently as words; therefore, only stems should be accepted as lemmas in the context of automatic lemmatisation for Setswana. Five of the seven parts of speech in Setswana can be viewed as closed classes, which means that these classes are not extended by means of regular morphological processes. The two other parts of speech (nouns and verbs) require the implementation of alternation rules to determine the lemma. Such alternation rules were formalised in this study for the purpose of developing a Setswana lemmatiser. The existing Setswana grammars were used as the basis for these rules, which made it possible to determine how precisely these grammars, once formalised, lemmatise Setswana words. The software developed by Van Noord (2002), FSA 6, is one of the best-known applications available for the development of finite state automata and transducers. Regular expressions based on the formalised morphological rules were used in FSA 6 to create finite state transducers, and the code subsequently generated by FSA 6 was implemented in the lemmatiser. The metric used to evaluate the lemmatiser is precision. On a test corpus of 1 000 words, the lemmatiser obtained 70.92%. In a separate evaluation on 500 complex nouns and 500 complex verbs, it obtained 70.96% and 70.52% respectively, while the precision on 500 complex and simplex nouns was 78.45% and on complex and simplex verbs 79.59%. The quantitative results only give an indication of the relative precision of the grammars; nevertheless, they provided analysed data with which the grammars were evaluated qualitatively. The study concludes with an overview of how these results might be improved in the future. / Thesis (M.A. (African Languages))--North-West University, Potchefstroom Campus, 2006.
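As an illustration of the idea of alternation rules combined with a precision measure (kept deliberately separate from FSA 6, whose rule format is not reproduced here), a toy rule-based lemmatiser might look as follows; the two rules and the three test words are invented placeholders, not the formalised Setswana grammar from the thesis.

```python
# Toy alternation rules as (pattern, replacement) pairs, plus a precision metric.
import re

RULES = [
    (r"^di(.+)$", r"\1"),     # strip a plural class prefix (hypothetical rule)
    (r"^(.+)ile$", r"\1a"),   # map a perfect verb ending back to the stem (hypothetical rule)
]

def lemmatise(word: str) -> str:
    for pattern, repl in RULES:
        new = re.sub(pattern, repl, word)
        if new != word:
            return new
    return word

def precision(predicted: list[str], gold: list[str]) -> float:
    correct = sum(p == g for p, g in zip(predicted, gold))
    return correct / len(gold)

words = ["dikgomo", "rekile", "pula"]
gold  = ["kgomo", "reka", "pula"]   # hand-assigned lemmas for the toy evaluation
pred  = [lemmatise(w) for w in words]
print(pred, f"precision={precision(pred, gold):.2%}")
```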
