1

The Digital John D. Wagg Papers

Woods, Zachary John-Robert 18 May 2011 (has links)
John D. Wagg was a native of Ashe County, North Carolina and a Southern Methodist circuit minister active immediately before and during the Civil War. His surviving journal, sermons, and received letters allow us to employ him as a window into a particular time, place, and set of conditions. To facilitate this, selections from the Wagg documents have been transcribed, edited, and presented as a Web-based digital edition, the Digital John D. Wagg Papers. This edition is designed to work with many other editions of similarly narrow historical and geographical scope as one historical witness in a network of witnesses. We must draw from several varieties of documents in the John Wagg collection and from contextualizing historical scholarship to construct a history of Wagg as a product of and participant in his times. Born 8 July 1835, Wagg began keeping a journal in 1854 as he worked toward a degree in medicine at Jefferson, North Carolina, the Wagg family hometown. As a diarist he often explored the place of humanity in a God-made world, a theme that foreshadows his turn from medicine and entry into the itinerant ministry of the Methodist Episcopal Church, South in October 1858. Wagg spent the Civil War years preaching throughout western North Carolina and southwest Virginia, generally striving to keep his heavily Confederate-leaning politics from the pulpit. This lifestyle allows the Wagg Papers to bring an alternate point of view to any archive of Civil War documents consisting primarily of the letters of combatants. / Master of Arts
2

Deciphering Demotic Digitally

Korte, Jannik, Maderna-Sieben, Claudia, Wespi, Fabian 20 April 2016 (has links) (PDF)
With the Demotic Palaeographical Database Project, we intend to build an online database that pays special attention to the actual appearance of Demotic papyri and texts, down to the level of the individual sign. Our idea is to analyse a papyrus with respect to its visual nature, so that each Demotic sign can be compared to other representations of the same sign in other texts, and its occurrences studied in different words. Words will be analysed not only in their textual context but also by their orthography, and it will even be possible to study the papyrus itself through its material features. The Demotic Palaeographical Database Project therefore aims to create a modern, online-accessible Demotic palaeography, a glossary of word spellings, and a corpus of manuscripts, which will not only be a convenient tool for Egyptologists and researchers interested in the Demotic writing system or in artefacts inscribed with Demotic script, but will also serve the preservation of cultural heritage. In our paper, we present our conceptual ideas and a preliminary version of the database in order to demonstrate its functionality and possibilities.
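The abstract describes a database in which each attestation of a Demotic sign links a sign form to a word and a manuscript, so that the same sign can be compared across texts. The project's actual schema is not given; a minimal sketch in Python, with all names and identifiers hypothetical, might look like:

```python
from dataclasses import dataclass


@dataclass
class Manuscript:
    siglum: str              # hypothetical identifier, e.g. an inventory number
    material_notes: str = ""  # material features of the papyrus itself


@dataclass
class SignOccurrence:
    sign: str                # the Demotic sign being attested
    word: str                # the word in which the sign occurs
    manuscript: Manuscript   # where this attestation was found


class PalaeographyIndex:
    """Toy index: look up all attestations of one sign across texts."""

    def __init__(self):
        self._by_sign = {}

    def add(self, occ: SignOccurrence):
        # Group occurrences by sign so the same sign can be compared
        # across different words and different manuscripts.
        self._by_sign.setdefault(occ.sign, []).append(occ)

    def attestations(self, sign: str):
        return self._by_sign.get(sign, [])
```

Querying `attestations("p")` would then return every recorded occurrence of that sign, which is the kind of cross-text comparison the abstract envisages.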
3

Cataloguing and editing Coptic Biblical texts in an online database system

Feder, Frank 20 April 2016 (has links) (PDF)
The Göttingen Virtual Manuscript Room (VMR) offers both an online digital repository for Coptic Biblical manuscripts (ideally with high-resolution images of every manuscript page, all metadata, etc.) and a digital edition of their texts, and ultimately even a critical edition of every biblical book of the Coptic Old Testament based on all available manuscripts. All text data will also be transferred into XML and linguistically annotated. In this way the VMR offers a full physical description of each manuscript and, at the same time, a full edition of its text and language data. Of course, the VMR can also be used for manuscripts and texts other than Coptic.
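The abstract mentions transferring text data into XML with linguistic annotation but does not show the VMR's actual format. A minimal sketch, with an invented element layout, transliterated placeholder word forms, and a hypothetical siglum, could be built like this:

```python
import xml.etree.ElementTree as ET

# Hypothetical record: one manuscript line with per-word linguistic
# annotation (lemma and part of speech). All values are placeholders.
ms = ET.Element("manuscript", attrib={"id": "sa-1"})
line = ET.SubElement(ms, "line", attrib={"n": "1"})

for form, lemma, pos in [("hn", "hn", "PREP"), ("tarche", "arche", "NOUN")]:
    w = ET.SubElement(line, "w", attrib={"lemma": lemma, "pos": pos})
    w.text = form

xml_string = ET.tostring(ms, encoding="unicode")
```

The point of such a structure is that the same file carries both the edition text (the `<w>` contents) and the language data (the attributes), matching the abstract's claim that the VMR serves text and language data at once.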
5

eDictor: da plataforma para a nuvem / eDictor: from platform to the cloud

Veronesi, Luiz Henrique Lima 04 February 2015 (has links)
In this work, we present a new proposal for the edition of texts belonging to an electronic corpus. Starting from the development history of the Tycho Brahe corpus and the eDictor tool, we analyze the whole workflow of corpus creation in order to obtain a more concise and less redundant way of organizing information, using a single repository for textual and morphosyntactic data. This single repository was achieved by creating a data structure based on minimal units called tokens and grouping units called chunks. The relationship between tokens and chunks, as considered in this work, stores both how the text is organized visually (pages, paragraphs, sentences) and how it is organized syntactically, as represented by syntactic trees. All files in the Tycho Brahe corpus catalog were used as the basis for the analysis.
In this way, it was possible to arrive at generic elements that relate to one another, deconstructing the text through start and end positions relative to each token rather than following its usual linear form. The introduction of object-oriented concepts made it possible to relate units even smaller than the token, the split tokens, which are themselves tokens, since they inherit the characteristics of the most significant element, the token. The aim was to use as few attributes as possible, reducing the need for attributes that are too specific or too generic. In seeking this balance, it proved necessary to create one specific attribute for the syntactic chunk: a level attribute that indicates the distance of a tree node from the root node. Once the information is organized, access to it becomes simpler, and the focus turns to defining the user interface.
Available web technology allows elements to be positioned on screen so as to reproduce the way a text is viewed in a book, and it also allows each element to be independent of the others. This independence is what lets information travel between the user's computer and the processing backend in the cloud without the user noticing; processing happens in the background, using asynchronous technologies. The similarity between HTML and XML introduced a need to adapt the information for presentation to the user. The solution adopted in this work gives each token the information about the chunk to which it belongs: rather than words belonging to a sentence, each word carries a piece of information that makes it part of the sentence. This subtle change of perspective alters the way information is displayed.
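The abstract describes a concrete data structure: tokens with relative start/end positions, split tokens that inherit from tokens, chunks with a level attribute giving the distance from the tree root, and tokens that carry their own chunk membership. The thesis's actual implementation is not shown here; a minimal sketch of those relationships, with all class and field names assumed, might be:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Chunk:
    label: str                       # e.g. "sentence", "paragraph", or a syntactic label
    level: int                       # distance of this tree node from the root node
    parent: Optional["Chunk"] = None


@dataclass
class Token:
    text: str
    start: int                       # start position relative to the token, not linear order
    end: int
    chunk: Optional[Chunk] = None    # each token carries its own chunk membership


class SplitToken(Token):
    """A sub-unit of a token; it is itself a token, inheriting all of Token."""
    pass
```

Note how membership is inverted, as the abstract describes: a `Chunk` does not hold a list of words; instead each `Token` points at the chunk it belongs to, so the sentence is recoverable from the tokens rather than the other way around.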
7

Göttinger Statuten im 15. Jahrhundert / Entstehung - Entwicklung - Edition / Statutory Regulations in 15th Century Göttingen

Rehbein, Malte 17 June 2009 (has links)
No description available.
9

Digital Humanities Day Leipzig (DHDL) 2023

Piontkowitz, Vera, Kretschmer, Uwe, Burghardt, Manuel 24 January 2024 (has links)
The poster series of the Digital Humanities Day Leipzig 2023 (DHDL) presents a multifaceted collection of projects and research from the "big tent" of the digital humanities, vividly demonstrating the interdisciplinary connections and the breadth of the field. The contributions come from researchers in Leipzig, the central German region, and beyond. They offer insights into current research projects and demonstrate the application of digital technologies in the humanities. At DHDL 2023, more than 20 groups presented current research projects in a poster session.
10

Digital Humanities Day Leipzig

06 February 2024 (has links)
The Digital Humanities Day Leipzig is an event of the Forum für Digital Humanities Leipzig (FDHL; fdhl.info), held annually on the Dies academicus since 2017, which aims to foster regional and supra-regional networking among DH practitioners.
