21

DANGERS OF THE NEWS(FEED): AN EXPLORATION INTO FAKE NEWS, PHOTOGRAPHIC TRUTH AND THE POWER OF DIGITAL COMMUNICATION ON FACEBOOK

Galla, Taylor 01 January 2018 (has links)
In an age of ever-expanding digital communication platforms, where news is increasingly consumed online, the sheer volume of information being shared and the truth of that information are becoming more and more difficult to track. The power of these social platforms is one that all users should recognize and reflect upon, both in how they use them and in how far they rely on them for the information they need. This thesis explores this power and examines how photographic journalism can help remedy the falsehoods that now spread across these platforms faster than ever before.
22

Natural Language Processing of Stories

Rittichier, Kaley J. 05 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / In this thesis, I deal with the task of computationally processing stories, with a focus on multidisciplinary ends, specifically in Digital Humanities and Cultural Analytics. In the process, I collect, clean, investigate, and predict from two datasets. The first is a dataset of 2,302 open-source literary works categorized by the time period they are set in. These works were all collected from Project Gutenberg; the time period in which each work is set was determined by collecting and inspecting Library of Congress subject classifications, Wikipedia categories, and literary factsheets from SparkNotes. The second is a dataset of 6,991 open-source literary works categorized by the hierarchical location the work is set in; these labels were constructed from Library of Congress subject classifications and SparkNotes factsheets. These datasets are the first of their kind and can help advance our understanding of 1) how settings are presented in stories and 2) the effect settings have on our understanding of the stories.
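As a rough illustration of how period labels might be derived from subject metadata of this kind, consider the following sketch. The pattern rules and the voting heuristic are assumptions for the example, not the thesis's actual pipeline:

```python
import re
from collections import Counter

def period_from_subjects(subjects):
    """Guess a setting period from Library of Congress-style subject strings,
    e.g. 'England -- Social life and customs -- 19th century -- Fiction'.
    Each matching pattern casts a vote; the most common label wins."""
    votes = Counter()
    for subj in subjects:
        # Explicit century phrases such as '19th century'.
        for m in re.finditer(r"\b(1[5-9]th|20th) century\b", subj):
            votes[m.group(1) + " century"] += 1
        # Bare years such as '1815' mapped to their century.
        for m in re.finditer(r"\b(1[5-9]\d\d)\b", subj):
            votes[f"{int(m.group(1)) // 100 + 1}th century"] += 1
    return votes.most_common(1)[0][0] if votes else None

print(period_from_subjects([
    "England -- Social life and customs -- 19th century -- Fiction",
    "Country life -- Fiction",
]))  # -> '19th century'
```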
23

Women in Rock and Roll's First Wave

Branstetter, Leah Tallen 23 May 2019 (has links)
No description available.
24

Techniques for the Automatic Extraction of Character Networks in German Historic Novels / Techniken zur automatischen Extraktion von Figurennetzwerken aus deutschen Romanen

Krug, Markus January 2020 (has links) (PDF)
Recent advances in Natural Language Processing (NLP) allow for a fully automatic extraction of character networks from an input text. These networks serve as a compact and easy-to-grasp representation of literary fiction. They offer an aggregated view of the text, which can be used in distant reading approaches for the analysis of literary hypotheses. At their core, the networks consist of nodes, which represent literary characters, and edges, which represent relations between characters. For an automatic extraction of such a network, the first step is the detection of the references to all fictional entities that are of importance for a text. References to fictional entities appear in the form of names, noun phrases and pronouns, and prior to this work, no components capable of automatically detecting character references were available. Existing tools can only detect proper nouns, a subset of all character references, and when evaluated on the task of detecting proper nouns in literary fiction, they still underperform, with an F1-score of just about 50%. This thesis uses techniques from the field of semi-supervised learning, such as distant supervision and generalized expectations, and improves the results of an existing tool to about 82% when evaluated on all three categories of references in literary fiction, without the need for annotated data in the target domain. Since this quality was still not sufficient, the decision was made to annotate DROC, a corpus comprising 90 fragments of German novels. This effort resulted in a new general-purpose annotation environment called ATHEN, as well as annotated data spanning about 500,000 tokens in total. Using this data, the combination of supervised algorithms and a tailored rule-based algorithm, which together exploit both local and global consistencies, yields an algorithm with an F1-score of about 93%. This component is referred to as the Kallimachos tagger. A character network cannot directly display references, however; instead, the references need to be clustered so that all references belonging to the same real-world or fictional entity are grouped together. This process, widely known as coreference resolution, is a hard problem that has been a focus of research for more than half a century. This work experimented with adaptations of classical feature-based machine learning, with a dedicated rule-based algorithm, and with modern Deep Learning techniques, but no approach could surpass 55% B-Cubed F1 when evaluated on DROC. Due to this barrier, many researchers do not use fully-fledged coreference resolution when they extract character networks, but focus only on a more forgiving subset: the names. For novels such as Alice's Adventures in Wonderland by Lewis Carroll, however, this would result in a network in which many important characters are missing. In order to integrate important characters into the network that are not named by the author, this work makes use of the automatic detection of speakers and addressees of direct speech utterances (all entities involved in a dialog are considered to be of importance). This is by itself not an easy task, but the most successful system analysed in this thesis is able to correctly determine the speaker for about 85% of the utterances and the addressee for about 65%. This speaker information not only helps to identify the most dominant characters, but also serves as a way to model the relations between entities.
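A minimal sketch of the co-occurrence idea behind such networks follows. This is a simplified illustration under assumed inputs (one set of resolved character names per sentence), not the Kallimachos pipeline itself:

```python
from itertools import combinations
import networkx as nx

def cooccurrence_network(sentences_with_refs, window=3):
    """Build a character network from resolved references.

    sentences_with_refs: list of sets, one per sentence, each containing
    the canonical character names mentioned in that sentence.
    Two characters are linked whenever they appear within `window`
    consecutive sentences; edge weights count such windowed co-occurrences
    (overlapping windows count repeatedly in this simple sketch).
    """
    graph = nx.Graph()
    for i in range(len(sentences_with_refs)):
        span = set().union(*sentences_with_refs[i:i + window])
        for a, b in combinations(sorted(span), 2):
            weight = graph.get_edge_data(a, b, {"weight": 0})["weight"]
            graph.add_edge(a, b, weight=weight + 1)
    return graph

g = cooccurrence_network([
    {"Alice"}, {"Alice", "White Rabbit"}, {"Queen"}, {"Alice", "Queen"},
])
print(g.edges(data=True))
```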
During the span of this work, components were developed to model relations between characters using speaker attribution, co-occurrences, and true interactions, for which yet another dataset was annotated using ATHEN. Furthermore, since relations between characters usually carry a type, a component for the extraction of typed relations was developed. Similar to the experiments on character reference detection, a combination of a rule-based and a Maximum Entropy classifier yielded the best overall results, with the extraction of family relations reaching a score of about 80% and love relations a score of about 50%. For family relations, a kernel for a Support Vector Machine was developed that even exceeded the scores of the combined approach, but lags behind on the other labels. In addition, this work presents new ways to evaluate automatically extracted networks without the need for domain experts, relying instead on expert summaries. It also refrains from using social network analysis for the evaluation and instead presents ranked evaluations, using Precision@k and the Spearman rank correlation coefficient to evaluate the nodes and edges of the network. An analysis using these metrics showed that the central characters of a novel are contained in the network with high probability, but that the quality drops rather quickly once more than five entities are analyzed. The quality of the edges is mainly dominated by the quality of the coreference resolution, and the correlation coefficient between gold edges and system edges therefore varies between 30 and 60%. All developed components are aggregated, alongside a large set of other preprocessing modules, in the Kallimachos pipeline and can be reused without restriction.
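A minimal illustration of these two evaluation measures is sketched below; the character rankings and edge weights are invented for the example, not taken from the thesis's evaluation:

```python
from scipy.stats import spearmanr

def precision_at_k(gold_ranking, system_ranking, k):
    """Fraction of the system's top-k entries that appear in the gold top-k."""
    gold_top = set(gold_ranking[:k])
    return sum(1 for name in system_ranking[:k] if name in gold_top) / k

# Invented rankings of characters by importance (e.g. by mention counts).
gold = ["Alice", "Queen", "White Rabbit", "Hatter", "Cheshire Cat"]
system = ["Alice", "White Rabbit", "Hatter", "Duchess", "Queen"]
print(precision_at_k(gold, system, k=3))  # 2/3 of the system's top 3 match

# Spearman rank correlation over edge weights shared by both networks.
gold_weights = [10, 7, 5, 2]
system_weights = [9, 8, 3, 3]
rho, _ = spearmanr(gold_weights, system_weights)
print(rho)
```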
25

Heuristic Futures: Reading the Digital Humanities through Science Fiction

Dargue, Joseph W. 19 October 2015 (has links)
No description available.
26

Altertumswissenschaften in a Digital Age: Egyptology, Papyrology and beyond: proceedings of a conference and workshop in Leipzig, November 4-6, 2015

Berti, Monica, Naether, Franziska January 2016 (has links)
No description available.
27

E-learning Kurs "Verarbeitung digitaler Daten in der Ägyptologie"

Jushaninowa, Julia January 2016 (has links)
Since 2013 I have been part of the team providing an online course for Egyptology students at Leipzig University in the university's continuing-education Moodle. This Moodle course was conceived and designed by Prof. Dr. Kai-Christian Bruhn (FH Mainz), Dr. Franziska Naether and Dr. Dietrich Raue and is an obligatory part of the module "Einführung in die Ägyptologie" (Introduction to Egyptology) at Leipzig University. The course is therefore aimed primarily at Bachelor's students, but students in later semesters and other interested parties also take part in this additional offering. The online course took place for the fourth time this year and enters its fifth round in the winter semester (WS 2015/16). The course takes place entirely on the internet, and participants decide for themselves when and where they work through the material over the span of two semesters. Through this new form of learning, participants are introduced to the handling of digital data and its automated processing, which Egyptology students already use during their studies, for example in the analysis of archaeological material. In addition, they engage with reputable and indispensable internet resources. The course provides an introduction to a range of freely available programs for word processing, for viewing and editing graphics, and for geographic information systems, enabling independent further study. This novel course thus responds to the demand, grown considerably in recent years, for the long-term storage of research data (e.g. from databases and image archives, or cartographic data from satellites) and for its interdisciplinary use, and it has retained its pioneering character at Leipzig University since its inception. The finished course is to be made available to a wider audience through the IANUS research data centre. How this works in practice in everyday university teaching will be illustrated with a PowerPoint presentation and several practical examples. The goal now is to draw conclusions from the course runs completed so far and to discuss problems and suggestions. I would welcome an exchange with the other conference participants about innovative teaching and learning methods.
28

Digitized newspapers and digital research: What is the library‘s role?

Garcés, Juan, Meyer, Julia 26 October 2017 (has links)
Mass-digitised newspapers offer researchers, academic and non-academic alike, a readily accessible and invaluable resource for all sorts of historical enquiries. Research on print-medium newspapers, even when reproduced on microfiche or in similar formats, traditionally entails the relatively close reading of individual articles in order to extract the information pertinent to the research question pursued. The re-medialisation of historical print newspapers into digital format, however, opens up new analytical avenues that allow the methodologically savvy researcher to extract information across a large number of texts with the help of approaches developed for text mining and information retrieval. The question for which this paper will present possible answers is: how can libraries that hold digitised newspaper collections support these distant-reading approaches? In answering that question, the paper will focus on three interlinked areas with potential roles for libraries and present best-practice examples. These areas are (1) technical infrastructure, (2) methodological know-how and (3) analytical tools:
1. Most research libraries have accepted their key role in providing a digital research infrastructure and are increasingly engaged in actively developing its constituent parts. Yet researchers applying distant-reading approaches, who ideally need open access to the entire data set rather than curated interfaces, are still not part of the main vision.
2. Few historians are trained in text mining, information retrieval and related approaches. It will be argued that libraries have a responsibility not only to give access to research-relevant digital data but also to provide competent consultation and teaching in analytical methods suitable to, and made possible by, the digital medium.
3. The final area encompasses the provision of tools that implement standard methods on the newspaper corpora. This area might be one where libraries focus on the re-use of existing tools rather than on their own developments.
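As a minimal sketch of the kind of distant-reading analysis such infrastructure would enable, the following counts a term's yearly frequency across a corpus; the file layout and search term are invented for the example, and real collections would typically use METS/ALTO rather than plain text:

```python
import re
from collections import Counter
from pathlib import Path

def term_frequency_by_year(corpus_dir, term):
    """Count occurrences of `term` per year in a digitised newspaper corpus.

    Assumes one plain-text OCR file per issue, named like '1871-03-14.txt'.
    """
    counts = Counter()
    for path in Path(corpus_dir).glob("*.txt"):
        year = path.stem[:4]
        text = path.read_text(encoding="utf-8", errors="ignore").lower()
        counts[year] += len(re.findall(rf"\b{re.escape(term)}\b", text))
    return dict(sorted(counts.items()))

print(term_frequency_by_year("newspapers/", "railway"))
```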
29

Kulturen und Technologien: 4. Europäische Sommeruniversität der Digital Humanities an der Universität Leipzig

Reimer, Julia 11 December 2013 (has links)
International summer academies, intensive specialist courses for advanced students and early-career researchers, have become established in Saxony in recent years. With the participation of the SLUB, for example, the course "Digitization and its Impact on Society" took place at TU Dresden at the beginning of October 2013. At Leipzig University, the European Summer University "Kulturen und Technologien" (Cultures and Technologies) offered, for the fourth time, a space for interdisciplinary exchange of knowledge and experience among early-career researchers.
30

Detection, Extraction and Analysis of Vossian Antonomasia in Large Text Corpora Using Machine Learning

Schwab, Michel 02 July 2024 (has links)
Stylistic devices, also known as figures of speech or rhetorical devices, have always been used in text to create imagery, engage readers, and emphasize key points. Among these devices, Vossian Antonomasia, which is closely related to metaphor and metonymy, is particularly popular for employing named entities as rhetorical elements. Defined more precisely, Vossian Antonomasia attributes a particular set of properties to an entity by naming another named entity that is generally well known for those properties. Modifying phrases, which typically appear in combination with the latter entity, help contextualize these attributes. Despite its ubiquity in modern media, research on its identification, usage, and interpretation is scarce. This motivates the topic of this thesis: the automated detection, extraction and analysis of Vossian Antonomasia. We present several methods, mostly based on neural networks, for the automated detection of the phenomenon and create an annotated dataset. Additionally, we introduce several approaches for extracting each chunk of the device in a sentence by modeling the problem as a sequence-tagging task. Moreover, we introduce cross-lingual extraction models and refine the detection methods for improved performance on unseen syntactic variations of the phenomenon by focusing solely on the key entity of the device. Furthermore, we tackle a distinct but complementary task, namely the extraction of the entity being described in an entire text paragraph.
For a deeper understanding of Vossian Antonomasia, we present an exploratory analysis of the developed dataset. We introduce two interactive visualizations that highlight the chunks of the phenomenon and their interplay to gain more insights.
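A toy heuristic for spotting candidate instances of the pattern (e.g. "the Mozart of basketball") is sketched below. The thesis itself uses neural sequence taggers, so this regex is only a rough, over-generating illustration:

```python
import re

# Matches phrases like "the Mozart of basketball": a capitalized source
# entity followed by "of" and a lowercase modifying phrase.
VA_CANDIDATE = re.compile(
    r"\b(?:the|a|an)\s+"
    r"(?P<source>[A-Z][\w.-]+(?:\s+[A-Z][\w.-]+)*)"  # capitalized name
    r"\s+of\s+"
    r"(?P<modifier>[a-z][\w-]+(?:\s+[\w-]+){0,3})"   # modifying phrase
)

def find_candidates(text):
    """Return (source entity, modifier) pairs that look like the device.

    Over-generates heavily ('the University of Leipzig' also matches);
    a real system must verify the source is a well-known named entity.
    """
    return [(m.group("source"), m.group("modifier"))
            for m in VA_CANDIDATE.finditer(text)]

print(find_candidates("He was hailed as the Mozart of basketball."))
# -> [('Mozart', 'basketball')]
```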
