1

Criação de um ambiente para o processamento de córpus de Português Histórico / Creation of an environment for the processing of Historical Portuguese Corpora

Candido Junior, Arnaldo (02 April 2008)
Corpora have been increasingly used in the areas of Linguistics and Natural Language Processing. As a result, new and larger corpora have been compiled, and processing systems and standards for the encoding and interchange of electronic texts have been developed. However, the methodology for compiling historical corpora differs from the ones used to compile corpora of contemporary language. Another drawback is that most corpus processing systems provide few resources for the treatment of historical corpora, although corpora of this type are numerous. Similarly, existing systems for dictionary creation do not satisfactorily meet the needs of historical dictionaries. The present study is part of a larger project, the Historical Dictionary of Brazilian Portuguese (HDBP), which aims to compile a dictionary on the basis of a corpus of Brazilian Portuguese texts from the sixteenth through the eighteenth centuries (including some texts from the early nineteenth century). Here we present the challenges of processing the HDBP corpus and the requirements for writing the entries of a historical dictionary. The study developed a computational environment for processing the corpus, building glossaries, and writing the entries of the HDBP; this environment can be adapted to the needs and scope of other historical dictionary projects.
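
As a purely illustrative sketch of one step such an environment might support (not the HDBP system itself, whose tools and formats are not described here), the snippet below builds a simple glossary from a directory of dated plain-text transcriptions: it counts word-form frequencies and records the earliest attestation year of each form, the kind of evidence a lexicographer drafting a historical entry would consult. The file-naming convention and tokenization are assumptions made for the example.

# Illustrative sketch only: a minimal glossary builder over a historical corpus.
# Assumes plain-text files named like "1608_carta.txt", where the leading digits
# are the document's year; this naming convention and the tokenizer are hypothetical.
import re
from collections import defaultdict
from pathlib import Path

TOKEN = re.compile(r"[^\W\d_]+")  # runs of letters, accented characters included

def build_glossary(corpus_dir):
    """Map each word form to its total frequency and earliest attestation year."""
    freq = defaultdict(int)
    first_year = {}
    for path in Path(corpus_dir).glob("*.txt"):
        year = int(path.name[:4])  # assumed "YYYY_title.txt" naming convention
        text = path.read_text(encoding="utf-8").lower()
        for form in TOKEN.findall(text):
            freq[form] += 1
            first_year[form] = min(first_year.get(form, year), year)
    return {form: (freq[form], first_year[form]) for form in freq}

if __name__ == "__main__":
    glossary = build_glossary("corpus")
    # Print the twenty most frequent forms with their earliest attestation.
    for form, (count, year) in sorted(glossary.items(), key=lambda kv: -kv[1][0])[:20]:
        print(f"{form}\t{count}\tfirst seen {year}")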
2

Text and Speech Alignment Methods for Speech Translation Corpora Creation: Augmenting English LibriVox Recordings with Italian Textual Translations

Della Corte, Giuseppe (January 2020)
The recent rise of end-to-end speech translation models requires a new generation of parallel corpora, composed of a large amount of source-language speech utterances aligned with their target-language textual translations. We show a pipeline and a set of methods to collect hundreds of hours of English audiobook recordings and align them with their Italian textual translations, using exclusively public-domain resources gathered semi-automatically from the web. The pipeline consists of three main areas: text collection, bilingual text alignment, and forced alignment. For the text collection task, we show how to automatically find e-book titles in a target language by using machine translation, web information retrieval, and named entity recognition and translation techniques. For the bilingual text alignment task, we investigated three methods: the Gale–Church algorithm in conjunction with a small hand-crafted bilingual dictionary, the Gale–Church algorithm in conjunction with a larger bilingual dictionary automatically inferred through statistical machine translation, and bilingual text alignment by computing the vector similarity of multilingual embeddings of concatenations of consecutive sentences. Our findings seem to indicate that the consecutive-sentence-embedding similarity approach improves the alignment of difficult sentences by indirectly performing sentence re-segmentation. For the forced alignment task, we give a theoretical overview of the preferred method depending on the properties of the text to be aligned with the audio, suggesting and using a TTS-DTW (text-to-speech plus dynamic time warping) approach in our pipeline. The result of our experiments is a publicly available multimodal corpus composed of about 130 hours of English speech aligned with its Italian textual translation and split into 60,561 triplets of English audio, English transcript, and Italian textual translation. We also post-processed the corpus to extract 40-MFCC features from the audio segments and released them as a dataset.
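
To make the third bilingual alignment method above more concrete, here is a minimal, hypothetical sketch of aligning English and Italian sentences by comparing multilingual embeddings of single sentences and of concatenations of two consecutive sentences, so that 1-to-2 and 2-to-1 alignments can also be recovered. This is not the thesis's actual implementation: the paraphrase-multilingual-MiniLM-L12-v2 model, the greedy selection strategy, and the 0.5 similarity threshold are all illustrative assumptions.

# Hypothetical sketch: embedding-similarity alignment of English and Italian
# sentences, allowing spans of up to two consecutive sentences on either side.
# Requires: pip install sentence-transformers numpy
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

def candidate_spans(sentences, max_span=2):
    """Return (start, end) spans covering 1..max_span consecutive sentences."""
    spans = []
    for i in range(len(sentences)):
        for k in range(1, max_span + 1):
            if i + k <= len(sentences):
                spans.append((i, i + k))
    return spans

def embed_spans(sentences, spans):
    """Embed the concatenated text of each span; vectors are L2-normalized."""
    texts = [" ".join(sentences[a:b]) for a, b in spans]
    return model.encode(texts, convert_to_numpy=True, normalize_embeddings=True)

def align(src_sentences, tgt_sentences, max_span=2, threshold=0.5):
    """Greedily pair non-overlapping spans by cosine similarity of their embeddings."""
    src_spans = candidate_spans(src_sentences, max_span)
    tgt_spans = candidate_spans(tgt_sentences, max_span)
    sim = embed_spans(src_sentences, src_spans) @ embed_spans(tgt_sentences, tgt_spans).T

    pairs, used_src, used_tgt = [], set(), set()
    # Visit candidate pairs from most to least similar, keeping non-overlapping ones.
    for idx in np.argsort(sim, axis=None)[::-1]:
        i, j = np.unravel_index(idx, sim.shape)
        if sim[i, j] < threshold:
            break
        (sa, sb), (ta, tb) = src_spans[i], tgt_spans[j]
        if any(s in used_src for s in range(sa, sb)) or any(t in used_tgt for t in range(ta, tb)):
            continue
        pairs.append(((sa, sb), (ta, tb), float(sim[i, j])))
        used_src.update(range(sa, sb))
        used_tgt.update(range(ta, tb))
    return sorted(pairs)

if __name__ == "__main__":
    en = ["The ship left the harbour at dawn.", "No one saw it again."]
    it = ["La nave lasciò il porto all'alba e nessuno la vide più."]
    for (sa, sb), (ta, tb), score in align(en, it):
        print(en[sa:sb], "<->", it[ta:tb], round(score, 3))

Scoring concatenations of consecutive sentences alongside single sentences is what lets an aligner of this kind effectively re-segment the text when sentence boundaries differ between the original and the translation, which is the advantage the abstract attributes to this approach.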
