181 |
From "disentangling the subtle soul" to "ineluctable modality": James Joyce's transmodal techniques
Mulliken, Jasmine Tiffany, 02 June 2011
This study of James Joyce's transmodal techniques explores, first, Joyce's incorporation of non-language-based media into his works and, second, how digital technologies might assist in identifying and studying these incorporations. The first chapter introduces the technique of re-rendering, the artistic practice of drawing out certain characteristics of one medium and, by then depicting those characteristics in a new medium, calling attention to both media and their limitations and potentials. Re-rendering can be content-based or form-based. Joyce employs content-based re-rendering when he alludes to a piece of art in another medium and form-based re-rendering when he superimposes the form of another medium onto his text. The second chapter explores Dubliners as a panoramic catalog of the various aspects involved in re-rendering media. The collection of stories, or the fragmented novel, shows synaesthetic characters, characters engaged in repetition and revision, and characters translating art across media by superimposing the forms, materials, and conventions of one medium onto another. Dubliners culminates in the use of a coda, a musical structure that commonly finalizes a multi-movement work. The third chapter analyzes A Portrait of the Artist as a Young Man, focusing on its protagonist, who exhibits synaesthetic qualities and a penchant for repeating phrases. With each repetition he also revises, a practice that foreshadows the form-based re-rendering Joyce employs in Ulysses and Finnegans Wake. The fourth chapter explores the "Sirens" episode of Ulysses, in which Joyce isolates the structure of the musical medium and transfers it to a literary medium. This technique shows his advanced exploration of the effects of one artistic medium on another and exemplifies his innovative technique of re-rendering art forms. Finally, the fifth chapter explores how we might use digital technologies to visualize Joyce's techniques of re-rendering.
Based on these visualizations, we might identify further connections Joyce makes across his works.
|
182 |
The cyber-performative in Second Life
Van Orden, Meindert Nicholas, 29 April 2010
I argue that current descriptions of the ways that language and computer code effect change (are “performative”) oversimplify the effects that utterances made in and through virtual spaces have on the real world. Building on J.L. Austin’s speech-act theory and Jacques Derrida’s deconstruction of Austin’s notion of performative language, I develop the theory of cyber-performativity. Though Katherine Hayles argues that “code” is more strongly performative than the utterances Austin focused on, Hayles’ analysis is founded on her problematic distinction between the logical computational worldview and the slippery natural-languages worldview. Cyber-performative theory builds on Hayles’ argument by showing that computational processes are as uncertain as natural languages: like human languages, “code” might always signify more and other than is intended. I argue that the social, economic, and political status of language changes as utterances made in virtual worlds such as Second Life simultaneously effect change in both real and virtual spaces.
|
183 |
From Information Extraction to Knowledge Discovery: Semantic Enrichment of Multilingual Content with Linked Open Data
De Wilde, Max, 23 October 2015
Discovering relevant knowledge in unstructured text is not a trivial task. Search engines relying on full-text indexing of content reach their limits when confronted with poor quality, ambiguity, or multiple languages. Some of these shortcomings can be addressed by information extraction and related natural language processing techniques, but these still fall short of adequate knowledge representation. In this thesis, we defend a generic approach striving to be as language-independent, domain-independent, and content-independent as possible. To reach this goal, we propose to disambiguate terms with their corresponding identifiers in Linked Data knowledge bases, paving the way for full-scale semantic enrichment of textual content. The added value of our approach is illustrated with a comprehensive case study based on a trilingual historical archive, addressing constraints of data quality, multilingualism, and language evolution. A proof-of-concept implementation is also proposed in the form of a Multilingual Entity/Resource Combiner & Knowledge eXtractor (MERCKX), demonstrating to a certain extent the general applicability of our methodology to any language, domain, and type of content. / Doctorate in Information and Communication
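The disambiguation step at the core of this approach can be sketched in a few lines of Python. This is a minimal illustration of the idea only: the surface forms, URIs, and toy knowledge base below are invented assumptions, not MERCKX's actual data or interface.

```python
# Minimal sketch of language-independent entity disambiguation:
# surface forms from several languages map to one Linked Data URI,
# so enriched documents can be queried across languages.

# Toy multilingual knowledge base (illustrative, not MERCKX's data).
KNOWLEDGE_BASE = {
    "brussels": "http://dbpedia.org/resource/Brussels",
    "bruxelles": "http://dbpedia.org/resource/Brussels",
    "brussel": "http://dbpedia.org/resource/Brussels",
}

def enrich(tokens):
    """Pair each recognised term with its URI; unknown terms get None."""
    return [(t, KNOWLEDGE_BASE.get(t.lower())) for t in tokens]

# The same entity is recovered regardless of the document's language.
english = enrich(["Brussels", "archive"])
french = enrich(["Bruxelles", "archive"])
assert english[0][1] == french[0][1]
```

Once every term carries a stable identifier rather than a language-specific string, downstream queries can treat the trilingual archive as a single semantic space.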
|
184 |
MODELING DEPONENCY IN GERMANIC PRETERITE-PRESENT VERBS USING DATR
Bourgerie Hunter, Marie G., 01 January 2017
In certain Germanic languages, there is a group of verbs called preterite-present verbs that are often viewed as irregular but in fact behave very predictably. They exhibit a morphological phenomenon called deponency, often in conjunction with another morphological phenomenon called heteroclisis. I examine the preterite-present verbs of three different languages: Old Norse, Modern Icelandic, and Modern German. Initially, I approach them from a historical perspective and then seek to reconcile their morphology with the modern perspective. Criteria are established for a canonical preterite-present verb, and then, using a lexical programming language called DATR, I create code that generates the appropriate paradigms while also illustrating the morphological relationships between verb tenses and inflection classes, among other things. DATR is a programming language designed specifically for modeling the lexicon.
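As a rough analogy (in Python rather than DATR, and with invented endings rather than the thesis's actual paradigms), the default-inheritance idea behind such a model can be sketched like this: a deponent paradigm answers present-tense queries with forms inherited from the preterite slots of a parent node.

```python
# Sketch of DATR-style default inheritance in Python (an analogy only;
# the endings below are placeholders, not the thesis's DATR code).
# Deponency: preterite-style morphology carries present-tense meaning.

STRONG_VERB = {
    ("present", "3sg"): "-iþ",    # illustrative endings
    ("preterite", "3sg"): "-Ø",
    ("preterite", "3pl"): "-un",
}

class Paradigm:
    """A node that answers a query itself or defers to its parent,
    much as a DATR node defers along its inheritance path."""
    def __init__(self, parent=None, overrides=None):
        self.parent = parent
        self.overrides = overrides or {}

    def lookup(self, key):
        if key in self.overrides:
            return self.overrides[key]
        if self.parent is not None:
            return self.parent.lookup(key)
        raise KeyError(key)

strong = Paradigm(overrides=STRONG_VERB)

class DeponentParadigm(Paradigm):
    """Present-tense queries are redirected to the preterite forms."""
    def lookup(self, key):
        tense, agreement = key
        if tense == "present":
            return super().lookup(("preterite", agreement))
        return super().lookup(key)

preterite_present = DeponentParadigm(parent=strong)

# The present 3sg surfaces with preterite morphology: deponency.
assert preterite_present.lookup(("present", "3sg")) == "-Ø"
```

In actual DATR the same redirection is a one-line path equation on a node; the point of the sketch is only that the "irregularity" reduces to a single, predictable inheritance rule.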
|
185 |
Letting the Body Lead
Boggs, Amanda, 18 December 2020
Letting the Body Lead is an exhibition and workshop series that focuses on embodiment in social context and invites attendees to engage with the work, both as viewers and as active participants. Embodiment in social context refers to the understanding of lived experiences in the body. Within my creative practice, I explore the body's creativity, knowledge, and agency while bringing together the fields of fine arts, movement-based art making, and socially just art making. I believe in the transformative potential of movement and contemplative practices to support a more liberated way of being, both within an individual and, by extension, within broader communities and social movements.
|
186 |
Alpinism as a Form of Intangible Cultural Heritage from the Perspective of Interactive Methods in Digital Humanities
Deck, Klaus-Georg, January 2021
In 2019, alpinism was inscribed on the UNESCO Representative List of the Intangible Cultural Heritage of Humanity. This thesis shows that alpinism as intangible cultural heritage can be made experienceable for everyone by means of interactive digital humanities technologies. First, four relevant aspects of alpinism are identified, for which a total of nine different scenarios of interactive methods are then sketched and prototypically realised with basic digital tools. These scenarios are described from the user's perspective and include virtual reality, augmented reality, and location-based methods. Finally, the scenarios are analysed within a general framework for tangible interaction systems. The range of scenarios presented, whose actors are not only alpinists but also people who are not mountaineers, makes the different facets of alpinism as cultural heritage tangible for a wide range of people.
|
187 |
Automatic Transcription of Historical Documents: Transkribus as a Tool for Libraries, Archives and Scholars
Milioni, Nikolina, January 2020
Digital libraries and archives are major portals to rich sources of information. They undertake large-scale digitization to enhance their digital collections and offer users valuable text data. Handwritten documents, however, are usually provided only as digitized images, not accompanied by transcriptions. Text in a non-machine-readable format restricts contemporary scholars' ability to conduct research, especially research employing digital humanities approaches such as distant reading and data mining. The purpose of this thesis is to evaluate the Transkribus platform, a linguistic tool developed mainly for producing automatic transcriptions of handwritten documents. The results are correlated with the findings of a questionnaire distributed to libraries and archives across Europe to expand our knowledge of the policies they follow regarding manuscripts and transcription provision. A model for a specific writing style in Latin is trained, and its accuracy is tested on various handwritten Latin pages. Finally, the tool's validation is discussed, as well as the extent to which it meets the general needs of cultural heritage institutions and humanities scholars.
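The accuracy testing described above is conventionally reported as a character error rate (CER): the edit distance between the reference transcription and the automatic transcript, divided by the reference length. The sketch below shows this standard measure in Python; it is illustrative only and is not Transkribus's internal implementation.

```python
# Character error rate (CER), the usual score for handwritten text
# recognition: edits needed to turn the hypothesis into the reference,
# normalised by reference length. Lower is better.

def levenshtein(a: str, b: str) -> int:
    """Minimum number of character insertions, deletions, substitutions."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + cost))
        prev = curr
    return prev[len(b)]

def cer(reference: str, hypothesis: str) -> float:
    return levenshtein(reference, hypothesis) / max(len(reference), 1)

# One substitution in a ten-character reference gives a CER of 0.1.
assert cer("lorem ipsu", "lorem ipsa") == 0.1
```

A model trained on one Latin hand can then be scored on pages in other hands simply by comparing its output against manually produced ground-truth pages.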
|
188 |
Digital Humanities in der Musikwissenschaft – Computergestützte Erschließungsstrategien und Analyseansätze für handschriftliche Liedblätter
Burghardt, Manuel, 03 December 2019
This article presents an ongoing project for the computer-based transcription and analysis of handwritten music scores from a large collection of German folk tunes. Based on this project, I will discuss the challenges and opportunities that arise when using Digital Humanities methods in musicology.
|
189 |
The Impact of BookTube on Book Publishing: A Study of John Green's Looking for Alaska
Mitchell, Amanda, 01 May 2021
Around 2010, a group of online content creators, commonly referred to as "YouTubers" or "BookTubers," began to emerge on YouTube.com. This community's content revolves around topics in the realm of literature, including book discussions, reviews, genre discussions, and more. While the group started off small, it has grown significantly over the past decade; some of the most prominent creators have several hundred thousand subscribers. In the ten years since its emergence, the creators and their content have transformed: many who began making video discussions just for fun have since grown their channels into financially successful careers and formed partnerships with publishing companies.
Within the BookTube community specifically, young adult author John Green has revolutionized the platform and seen an unprecedented amount of success. His novels, along with their film and TV adaptations, have inspired thousands of BookTube reviews and discussions, and John and his brother Hank Green have gained a massive following on YouTube.
This essay examines BookTube as a collaborative community, a marketing platform, and a space for reception theory analysis by examining readers' discussions of John Green's Looking for Alaska. BookTube and other online communities are becoming increasingly important in people's lives, and analyzing these platforms is essential to understanding future generations.
|
190 |
Automatic Extraction of Narrative Structure from Long Form Text
Eisenberg, Joshua Daniel, 02 November 2018
Automatic understanding of stories is a long-time goal of the artificial intelligence and natural language processing research communities. Stories literally explain the human experience. Understanding our stories promotes the understanding of both individuals and groups of people: various cultures, societies, families, organizations, governments, and corporations, to name a few. People use stories to share information. Stories are told, by narrators, in linguistic bundles of words called narratives.
My work has given computers awareness of narrative structure: specifically, of where the boundaries of a narrative lie in a text. This is the task of determining where a narrative begins and ends, a non-trivial task because people rarely tell one story at a time. People don't explicitly announce when they are starting or stopping their stories: we interrupt each other; we tell stories within stories. Before my work, computers had no awareness of narrative boundaries, essentially where stories begin and end. My programs can extract narrative boundaries from novels and short stories with an F1 of 0.65.
Before this, I worked on teaching computers to identify which paragraphs of text have story content, reaching an F1 of 0.75 (the state of the art). Additionally, I have taught computers to identify the narrative point of view (POV; how the narrator identifies themselves) and diegesis (how involved the narrator is in the story's action), with an F1 of over 0.90 for both narrative characteristics. For the narrative POV, diegesis, and narrative-level extractors, I ran annotation studies, with high agreement, that allowed me to teach computational models to identify structural elements of narrative through supervised machine learning.
My work has given computers the ability to find where stories begin and end in raw text. This allows for further automatic analysis, like extraction of plot, intent, event causality, and event coreference; these tasks are impossible when the computer can't distinguish which stories are told in which spans of text. There are two key contributions in my work: 1) the identification of features that accurately extract elements of narrative structure, and 2) the gold-standard data and reports generated from the annotation studies on identifying narrative structure.
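The F1 figures cited above come from comparing predicted structure against gold annotations. A minimal sketch of that evaluation in Python, with invented boundary positions for illustration:

```python
# F1 over narrative boundaries: predicted boundary positions are
# compared against gold-standard annotations. The positions here are
# invented for illustration, not the dissertation's data.

def f1_score(gold: set, predicted: set) -> float:
    if not gold or not predicted:
        return 0.0
    true_positives = len(gold & predicted)
    precision = true_positives / len(predicted)
    recall = true_positives / len(gold)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Gold boundaries at sentences 0, 14, and 40; the model finds two of
# them plus one spurious boundary: precision = recall = 2/3, F1 = 2/3.
gold = {0, 14, 40}
predicted = {0, 14, 22}
assert abs(f1_score(gold, predicted) - 2 / 3) < 1e-9
```

An F1 of 0.65 on boundary extraction thus means the extractor balances finding most true story boundaries against proposing relatively few spurious ones.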
|