21 |
Rechts, Links, Geradeaus? Zum Sprachduktus deutscher Prähistoriker zwischen 1935 und 1965. Schäfer, Martina, 29 May 2019 (has links)
No description available.
|
22 |
Techniques for the Automatic Extraction of Character Networks in German Historic Novels / Techniken zur automatischen Extraktion von Figurennetzwerken aus deutschen Romanen. Krug, Markus, January 2020 (has links) (PDF)
Recent advances in Natural Language Processing (NLP) allow for the fully automatic extraction of character networks from an input text. These networks serve as a compact and easy-to-grasp representation of literary fiction. They offer an aggregated view of the text, which can be used in distant reading approaches for the analysis of literary hypotheses. At their core, the networks consist of nodes, which represent literary characters, and edges, which represent relations between characters. The first step in the automatic extraction of such a network is the detection of the references to all fictional entities that are of importance for a text. References to fictional entities appear in the form of names, noun phrases, and pronouns, and prior to this work no components capable of automatically detecting character references were available. Existing tools can only detect proper nouns, a subset of all character references, and even on the task of detecting proper nouns in the domain of literary fiction they underperform, with an F1-score of just about 50%. This thesis uses techniques from the field of semi-supervised learning, such as distant supervision and generalized expectations, and improves the results of an existing tool to about 82% when evaluated on all three categories in literary fiction, without the need for annotated data in the target domain. Since this quality is still not sufficient, DROC, a corpus comprising 90 fragments of German novels, was annotated. This effort resulted in a new general-purpose annotation environment titled ATHEN, as well as annotated data spanning about 500,000 tokens in total. Using this data, the combination of supervised algorithms and a tailored rule-based algorithm, which together exploit both local and global consistencies, yields an algorithm with an F1-score of about 93%.
This component is referred to as the Kallimachos tagger.
A character network cannot directly display references, however; instead, the references need to be clustered so that all references belonging to one real-world or fictional entity are grouped together. This process, widely known as coreference resolution, is a hard problem that has been in the focus of research for more than half a century. This work experimented with adaptations of classical feature-based machine learning, with a dedicated rule-based algorithm, and with modern Deep Learning techniques, but no approach surpassed 55% B-Cubed F1 when evaluated on DROC. Due to this barrier, many researchers do not use fully fledged coreference resolution when they extract character networks, but focus only on a more forgiving subset: the names. For novels such as Alice's Adventures in Wonderland by Lewis Carroll, this would result in a network in which many important characters are missing. In order to integrate important characters into the network that are not named by the author, this work makes use of the automatic detection of speakers and addressees of direct speech utterances (all entities involved in a dialog are considered to be of importance). This is not an easy task by itself; however, the most successful system analyzed in this thesis is able to correctly determine the speaker for about 85% of the utterances and the addressee for about 65%. This speaker information not only helps to identify the most dominant characters, but also serves as a way to model the relations between entities.
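The B-Cubed metric mentioned above scores each mention by how well its system cluster overlaps its gold cluster. A minimal sketch of the computation, with invented toy mention ids and cluster labels rather than data from DROC:

```python
def b_cubed(gold, system):
    """Compute B-Cubed precision, recall and F1 for two clusterings.

    `gold` and `system` map each mention id to a cluster label.
    """
    mentions = list(gold)
    precision = recall = 0.0
    for m in mentions:
        # All mentions sharing this mention's gold / system cluster.
        gold_cluster = {x for x in mentions if gold[x] == gold[m]}
        sys_cluster = {x for x in mentions if system[x] == system[m]}
        overlap = len(gold_cluster & sys_cluster)
        precision += overlap / len(sys_cluster)
        recall += overlap / len(gold_cluster)
    p = precision / len(mentions)
    r = recall / len(mentions)
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

# Three mentions of one character; the system wrongly splits off "m3".
gold = {"m1": "A", "m2": "A", "m3": "A"}
system = {"m1": 1, "m2": 1, "m3": 2}
p, r, f1 = b_cubed(gold, system)
```

Splitting a cluster hurts recall but not precision here, which is why over-splitting systems can still look deceptively precise under B-Cubed.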
During the span of this work, components were developed to model relations between characters using speaker attribution, co-occurrences, and true interactions, for which yet another dataset was annotated using ATHEN. Furthermore, since relations between characters are usually typed, a component for the extraction of typed relations was developed. As in the experiments on character reference detection, a combination of a rule-based and a Maximum Entropy classifier yielded the best overall results, with the extraction of family relations reaching a score of about 80% and that of love relations a score of about 50%. For family relations, a kernel for a Support Vector Machine was developed that even exceeds the scores of the combined approach but falls behind on the other labels.
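Co-occurrence, one of the relation signals named above, is commonly operationalized by counting how often two characters appear in the same sentence or window. A hypothetical minimal sketch of such edge weighting (the character names are toy data; the thesis's speaker-based and interaction models are more involved):

```python
from collections import Counter
from itertools import combinations

def cooccurrence_network(sentences):
    """Build a weighted character network from per-sentence character sets.

    Each pair of characters co-occurring in a sentence increments the
    weight of the (undirected) edge between them.
    """
    edges = Counter()
    for chars in sentences:
        # Sorting gives each unordered pair a canonical key.
        for a, b in combinations(sorted(chars), 2):
            edges[(a, b)] += 1
    return edges

sentences = [{"Alice", "Queen"}, {"Alice", "Hatter"}, {"Alice", "Queen"}]
net = cooccurrence_network(sentences)
```

The resulting Counter maps character pairs to edge weights and can be fed directly into any graph library for visualization.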
In addition, this work presents new ways to evaluate automatically extracted networks without the need for domain experts; instead, it relies on expert summaries. It also refrains from using social network analysis for the evaluation and instead presents ranked evaluations, using Precision@k and the Spearman rank correlation coefficient for the nodes and edges of the network. An analysis using these metrics showed that the central characters of a novel are contained with high probability, but the quality drops rather fast if more than five entities are analyzed. The quality of the edges is mainly dominated by the quality of the coreference resolution, and the correlation coefficient between gold edges and system edges therefore varies between 30% and 60%.
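The two ranked-evaluation measures named above can be sketched as follows. The rankings are invented toy data; a real evaluation would compare system entity rankings against rankings derived from expert summaries:

```python
def precision_at_k(gold_ranking, system_ranking, k):
    """Fraction of the top-k system items that also appear in the top-k gold items."""
    return len(set(gold_ranking[:k]) & set(system_ranking[:k])) / k

def spearman_rho(gold_ranking, system_ranking):
    """Spearman rank correlation between two rankings of the same items (no ties)."""
    n = len(gold_ranking)
    gold_rank = {item: i for i, item in enumerate(gold_ranking)}
    # Sum of squared rank differences between the two orderings.
    d_sq = sum((gold_rank[item] - i) ** 2
               for i, item in enumerate(system_ranking))
    return 1 - 6 * d_sq / (n * (n ** 2 - 1))

gold = ["Alice", "Queen", "Hatter", "Rabbit"]
system = ["Alice", "Hatter", "Queen", "Rabbit"]
p_at_2 = precision_at_k(gold, system, 2)
rho = spearman_rho(gold, system)
```

Precision@k only checks set membership in the top ranks, while Spearman's rho penalizes every swap, which is why the two metrics answer different questions about the extracted network.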
All developed components are aggregated alongside a large set of other preprocessing modules in the Kallimachos pipeline and can be reused without any restrictions. / Techniken zur automatischen Extraktion von Figurennetzwerken aus deutschen Romanen
|
23 |
Neue Wörter - alte Ideen : Die Reproduktion nationalsozialistischen Sprachgebrauchs in den Parteiprogrammen der Nationaldemokratischen Partei Deutschlands und der Sverigedemokraterna. Neubauer, Christine, January 2008 (has links)
No description available.
|
24 |
Didaktik theatralen Philosophierens : Untersuchungen zum Zusammenspiel argumentativ-diskursiver und theatral-präsentativer Verfahren bei der Texteröffnung in philosophischen Bildungsprozessen / Gefert, Christian, January 2002 (has links)
Thesis (doctoral)--Universität, Hamburg, 2001. / Includes bibliographical references (p. 307-324).
|
25 |
Language Engineering for Information Extraction. Schierle, Martin, 10 January 2012 (has links) (PDF)
Accompanied by the cultural development toward an information society and knowledge economy, and driven by the rapid growth of the World Wide Web and decreasing prices for technology and disk space, the world's knowledge is evolving fast, and humans are challenged with keeping up.
Despite all efforts at data structuring, a large part of this human knowledge is still hidden behind the ambiguities and fuzziness of natural language. Domain language in particular poses new challenges through its specific syntax, terminology, and morphology. Companies willing to exploit the information contained in such corpora are often required to build specialized systems instead of being able to rely on off-the-shelf software libraries and data resources. The engineering of language processing systems is, however, cumbersome, and the creation of language resources, the annotation of training data, and the composition of modules is often more an art than a science. The scientific field of Language Engineering aims at providing reliable information, approaches, and guidelines on how to design, implement, test, and evaluate language processing systems.
Language engineering architectures have been a subject of scientific work for the last two decades and aim at building universal systems of easily reusable components. Although current systems offer comprehensive features and rest on an architecturally sound basis, there is still little documentation about how to actually build an information extraction application. Selecting modules, methods, and resources for a particular use case requires a detailed understanding of state-of-the-art technology, application demands, and the characteristics of the input text. The main assumption underlying this work is the thesis that a new application can only occasionally be created by reusing standard components from different repositories. This work recapitulates existing literature on language resources, processing resources, and language engineering architectures to derive a theory of how to engineer a new system for information extraction from a (domain) corpus.
This thesis was initiated by the Daimler AG to prepare and analyze unstructured information as a basis for corporate quality analysis. It is therefore concerned with language engineering in the area of Information Extraction, which targets the detection and extraction of specific facts from textual data. While other work in the field of information extraction is mainly concerned with the extraction of location or person names, this work deals with automotive components, failure symptoms, corrective measures, and their relations of arbitrary arity.
The ideas presented in this work are applied, evaluated, and demonstrated on a real-world application dealing with quality analysis of automotive domain language. To achieve this goal, the underlying corpus is examined and scientifically characterized, and algorithms are chosen with respect to the derived requirements and evaluated where necessary. The system comprises language identification, tokenization, spelling correction, part-of-speech tagging, syntax parsing, and a final relation extraction step. The extracted information is used as input to data mining methods such as an early warning system and a graph-based visualization for interactive root cause analysis. It is finally investigated how the unstructured data facilitates these quality analysis methods in comparison to structured data. The acceptance of these text-based methods in the company's processes further proves the usefulness of the created information extraction system.
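The staged architecture described above can be sketched as a chain of processing functions over a shared document object. The stage names and the toy correction lexicon below are hypothetical stand-ins, not the actual Daimler system:

```python
def compose(*stages):
    """Chain processing stages into a single pipeline function."""
    def pipeline(doc):
        for stage in stages:
            doc = stage(doc)
        return doc
    return pipeline

# Stand-ins for the stages named in the abstract; real implementations
# would wrap a language identifier, spelling corrector, tagger, parser, ...
def tokenize(doc):
    doc["tokens"] = doc["text"].split()
    return doc

def spell_correct(doc):
    fixes = {"engin": "engine"}  # toy correction lexicon for domain typos
    doc["tokens"] = [fixes.get(t, t) for t in doc["tokens"]]
    return doc

nlp = compose(tokenize, spell_correct)
result = nlp({"text": "the engin failed"})
```

Keeping each stage as a plain function over one document dict makes it easy to insert, swap, or evaluate stages independently, which mirrors the modular selection of components the abstract argues for.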
|
26 |
Polyglot text to speech synthesis: text analysis & prosody control. Romsdorfer, Harald, January 2009 (has links)
Also published as: doctoral dissertation, Techn. Hochsch. Zürich, 2009
|
27 |
Juristische Textarbeit im Spiegel der Öffentlichkeit / Felder, Ekkehard, January 2003 (has links) (PDF)
University habilitation thesis, Münster (Westf.), 2002. / Bibliography p. [307]-333.
|
28 |
Globalisierung und Lokalisierung von Rapmusik am Beispiel amerikanischer und deutscher Raptexte. Lüdtke, Solveig, January 2007 (has links)
Also published as: doctoral dissertation, Universität Hannover, 2007.
|
29 |
Psychische Gesundheit arbeitsloser Menschen : Daseinsanalyse und Kohärenzgefühl - Analyseinstrumente zur Erkennung einer möglichen Anpassungsstörung und/oder Traumatisierung arbeitsloser Menschen? / Sommer, Astrid, January 2008 (has links) (PDF)
Bachelor's thesis, ZHAW, 2008.
|
30 |
Genre Analysis and Corpus Design: Nineteenth Century Spanish-American Novels (1830–1910) / Gattungsanalyse und Korpusaufbau: Hispanoamerikanische Romane im 19. Jahrhundert (1830–1910) / Análisis de género y diseño de corpus: Novelas hispanoamericanas del siglo XIX (1830–1910). Henny-Krahmer, Ulrike, January 2023 (has links) (PDF)
This work in the field of digital literary stylistics and computational literary studies addresses theoretical questions of literary genre, the design of a corpus of nineteenth-century Spanish-American novels, and its empirical analysis in terms of subgenres of the novel. The digital text corpus consists of 256 Argentine, Cuban, and Mexican novels from the period between 1830 and 1910. It was created with the goal of analyzing, by means of computational text categorization methods, thematic subgenres and literary currents that were represented in numerous novels in the nineteenth century. The texts have been gathered from different sources, encoded in the standard of the Text Encoding Initiative (TEI), and enriched with detailed bibliographic and subgenre-related metadata, as well as with structural information.
To categorize the texts, statistical classification and a family resemblance analysis relying on network analysis are used, with the aim of examining how the subgenres, which are understood as communicative, conventional phenomena, can be captured on the stylistic, textual level of the novels that participate in them. The result is that both thematic subgenres and literary currents are textually coherent to degrees of 70–90 %, depending on the individual subgenre constellation, meaning that the communicatively established subgenre classifications can be accurately captured to this extent in terms of textually defined classes.
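A family resemblance analysis via network analysis, as described above, can be approximated by linking novels whose stylistic feature vectors are sufficiently similar. The feature values and threshold below are invented toy data, not the study's actual features or method details:

```python
import math

def cosine(u, v):
    """Cosine similarity of two sparse feature vectors given as dicts."""
    dot = sum(u[k] * v[k] for k in u if k in v)
    norm = (math.sqrt(sum(x * x for x in u.values()))
            * math.sqrt(sum(x * x for x in v.values())))
    return dot / norm if norm else 0.0

def similarity_network(texts, threshold):
    """Connect texts whose feature vectors exceed a cosine-similarity threshold."""
    edges = []
    names = list(texts)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            sim = cosine(texts[a], texts[b])
            if sim >= threshold:
                edges.append((a, b, round(sim, 3)))
    return edges

# Toy word-frequency vectors for three hypothetical novels.
texts = {
    "novela1": {"amor": 3, "guerra": 1},
    "novela2": {"amor": 2, "guerra": 1},
    "novela3": {"indio": 4},
}
edges = similarity_network(texts, 0.9)
```

Clusters of densely connected nodes in such a network can then be read as "family resemblance" groups and compared against the communicatively established subgenre labels.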
Besides the empirical focus, the dissertation also aims to relate literary theoretical genre concepts to the ones used in digital genre stylistics and computational literary studies as subfields of digital humanities. It is argued that literary text types, conventional literary genres, and textual literary genres should be distinguished on a theoretical level to improve the conceptualization of genre for digital text analysis. / Diese Arbeit ist in den Forschungsfeldern der digitalen literaturwissenschaftlichen Stilistik und der Computational Literary Studies angesiedelt und setzt sich mit theoretischen Gattungsproblemen, mit der Erstellung eines Korpus von hispanoamerikanischen Romanen des 19. Jahrhunderts und mit ihrer empirischen Analyse nach Untergattungen auseinander. Das digitale Textkorpus umfasst 256 argentinische, kubanische und mexikanische Romane aus der Zeit von 1830 bis 1910 und ist mit dem Ziel erstellt worden, thematische Untergattungen und literarische Strömungen, die im 19. Jahrhundert durch zahlreiche Romane repräsentiert waren, mit Hilfe computergestützter Methoden der Textkategorisierung zu analysieren.
Um die Texte zu kategorisieren, werden Verfahren der statistischen Klassifikation und eine Familienähnlichkeitsanalyse verwendet, die auf einer Netzwerkanalyse basiert. Das Ziel der Analysen ist es zu untersuchen, inwieweit die Untergattungen, die primär als Phänomene der Kommunikation und Konvention verstanden werden, auf der stilistischen, textlichen Ebene der Romane, die an ihnen teilhaben, erfasst werden können. Das Ergebnis ist, dass sowohl die thematischen Untergattungen als auch die literarischen Strömungen zu 70–90 % textlich kohärent sind, in Abhängigkeit von der gewählten Untergattungskonstellation, womit gemeint ist, dass die kommunikativ etablierten Untergattungsklassifikationen in diesem Maß an Genauigkeit auch als textlich definierte Klassen erfasst werden können.
Über die empirische Ausrichtung hinaus ist ein weiteres Ziel, literaturtheoretische Gattungskonzepte zu denjenigen in Beziehung zu setzen, die in der digitalen Gattungsstilistik als einer Teildisziplin der Digital Humanities verwendet werden. Es wird argumentiert, dass literarische Texttypen, konventionelle literarische Gattungen und textliche literarische Gattungen auf einer theoretischen Ebene unterschieden werden sollten, um die Konzeption von Gattung für die digitale Textanalyse zu verbessern. / Este trabajo en el campo de la estilística literaria digital y los estudios literarios computacionales se ocupa de las preocupaciones teóricas del género literario, del diseño de un corpus de novelas hispanoamericanas del siglo XIX y de su análisis empírico en términos de subgéneros de la novela. El corpus de textos digitales consta de 256 novelas argentinas, cubanas y mexicanas del período comprendido entre 1830 y 1910. Ha sido creado con el objetivo de analizar los subgéneros temáticos y las corrientes literarias que estaban representadas en numerosas novelas del siglo XIX mediante métodos de categorización computacional de textos.
Para la categorización de los textos se utiliza una clasificación estadística y un análisis de semejanza familiar basado en el análisis de redes, con el fin de examinar cómo los subgéneros, entendidos como fenómenos comunicativos y convencionales, pueden ser captados en el plano estilístico y textual de las novelas que participan en ellos. El resultado es que tanto los subgéneros temáticos como las corrientes literarias son textualmente coherentes en grados del 70–90 %, dependiendo de la constelación individual de subgéneros, lo que significa que las clasificaciones de subgéneros establecidas comunicativamente pueden ser capturadas con precisión hasta este punto en términos de clases textualmente definidas.
Además del enfoque empírico, la disertación también pretende relacionar los conceptos teóricos de género literario con los utilizados en la estilística de género digital y los estudios literarios computacionales como subcampos de las humanidades digitales. Se argumenta que los tipos de texto literario, los géneros literarios convencionales y los géneros literarios textuales deberían distinguirse a nivel teórico para mejorar la conceptualización del género para el análisis de textos digitales.
|