About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Information fusion for monolingual and cross-language spoken document retrieval. / CUHK electronic theses & dissertations collection / Digital dissertation consortium

January 2002
Lo Wai-kit. / "October 2002." / Thesis (Ph.D.)--Chinese University of Hong Kong, 2002. / Includes bibliographical references (p. 170-184). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. Ann Arbor, MI : ProQuest Information and Learning Company, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Mode of access: World Wide Web. / Abstracts in English and Chinese.
22

Using web resources for effective English-to-Chinese cross language information retrieval. / CUHK electronic theses & dissertations collection

January 2005
This study presents a web-aided query translation expansion method for Cross-Language Information Retrieval (CLIR). The method is applied to the English/Chinese language pair, in which queries are expressed in English and the documents returned are in Chinese. CLIR methods fall into three main categories: machine translation (MT), dictionary translation using a machine-readable dictionary (MRD), and parallel-corpus methods; our method is based on the second. The MRD-based method is easy to implement, but it suffers from a resource limitation: the dictionary is often incomplete, leading to poor translations and hence undesirable results. By combining the MRD with a web-aided query translation expansion technique, good retrieval performance can be achieved. The performance gain is largely due to the successful extraction of translations of words relevant to a query term from online texts. A new Chinese word discovery algorithm, which extracts words from continuous Chinese characters, was designed and used for this purpose (a sketch of the idea appears below). The extracted relevant words include not only the precise translation of a query term, but also words that are relevant to that term in the source language. / Jin Honglan. / "October 2005." / Adviser: Kam Fai Wong. / Source: Dissertation Abstracts International, Volume: 67-07, Section: B, page: 3899. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2005. / Includes bibliographical references (p. 115-121). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstract in English and Chinese. / School code: 1307.
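The abstract leaves the word discovery algorithm unspecified. As a minimal sketch of the general idea (not the thesis's actual algorithm), the fragment below treats every character n-gram in unsegmented Chinese text as a candidate word and keeps those whose characters co-occur far more often than chance, scored by pointwise mutual information; the function name and thresholds are illustrative assumptions.

```python
import math
from collections import Counter

def discover_words(text, max_len=4, min_count=3, min_pmi=3.0):
    """Rank candidate words found in unsegmented Chinese text.

    A sketch, not the thesis algorithm: a candidate n-gram is kept
    when its weakest internal split still has high pointwise mutual
    information (PMI), i.e. its characters stick together far more
    often than chance. Thresholds are illustrative.
    """
    chars = [c for c in text if not c.isspace()]
    total = len(chars)
    counts = Counter()
    for n in range(1, max_len + 1):            # count all n-grams
        for i in range(total - n + 1):
            counts["".join(chars[i:i + n])] += 1

    def prob(gram):
        return counts[gram] / total

    scored = {}
    for gram, freq in counts.items():
        if len(gram) < 2 or freq < min_count:
            continue
        # Weakest-split PMI: a real word should survive every split.
        pmi = min(math.log(prob(gram) / (prob(gram[:k]) * prob(gram[k:])))
                  for k in range(1, len(gram)))
        if pmi >= min_pmi:
            scored[gram] = pmi
    return sorted(scored, key=scored.get, reverse=True)
```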
23

Adapting Automatic Summarization to New Sources of Information

Ouyang, Jessica Jin, January 2019
English-language news articles are no longer necessarily the best source of information. The Web allows information to spread more quickly and travel farther: first-person accounts of breaking news events pop up on social media, and foreign-language news articles are accessible to, if not immediately understandable by, English-speaking users. This thesis focuses on developing automatic summarization techniques for these new sources of information. We focus on summarizing two specific new sources of information: personal narratives, first-person accounts of exciting or unusual events that are readily found in blog entries and other social media posts, and non-English documents, which must first be translated into English, often introducing translation errors that complicate the summarization process. Personal narratives are a very new area of interest in natural language processing research, and they present two key challenges for summarization. First, unlike many news articles, whose lead sentences serve as summaries of the most important ideas in the articles, personal narratives provide no such shortcuts for determining where important information occurs within them; second, personal narratives are written informally and colloquially, and unlike news articles, they are rarely edited, so they require heavier editing and rewriting during the summarization process. Non-English documents, whether news or narrative, present yet another source of difficulty on top of any challenges inherent to their genre: they must be translated into English, potentially introducing translation errors and disfluencies that must be identified and corrected during summarization. The bulk of this thesis is dedicated to addressing the challenges of summarizing personal narratives found on the Web. We develop a two-stage summarization system for personal narrative that first extracts sentences containing important content and then rewrites those sentences into summary-appropriate forms. Our content extraction system is inspired by contextualist narrative theory, using changes in writing style throughout a narrative to detect sentences containing important information; it outperforms both graph-based and neural network approaches to sentence extraction for this genre. Our paraphrasing system rewrites the extracted sentences into shorter, standalone summary sentences, learning to mimic the paraphrasing choices of human summarizers more closely than traditional lexicon- or translation-based paraphrasing approaches can. We conclude with a chapter dedicated to summarizing non-English documents written in low-resource languages, documents that would otherwise be unreadable for English-speaking users. We develop a cross-lingual summarization system that performs even heavier editing and rewriting than our personal narrative paraphrasing system does; we create and train on large amounts of synthetic errorful translations of foreign-language documents. Our approach produces fluent English summaries from disfluent translations of non-English documents, and it generalizes across languages.
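As a schematic of the two-stage design described in the abstract (and not the thesis system itself), the sketch below scores sentences by how far their style drifts from the narrative's average, then routes the top-scoring ones through a stub where a trained paraphrasing model would rewrite them; the feature set and function names are toy assumptions.

```python
import statistics

def style_features(sentence):
    """Toy stand-ins for stylistic features: length in words and
    average word length."""
    words = sentence.split()
    return (len(words), sum(len(w) for w in words) / max(len(words), 1))

def extract_content(sentences, top_k=3):
    """Stage 1: keep the sentences whose style departs furthest from
    the narrative's average style (a crude proxy for the style-change
    signal described above)."""
    feats = [style_features(s) for s in sentences]
    means = [statistics.mean(f[i] for f in feats) for i in range(2)]

    def shift(f):
        return sum(abs(f[i] - means[i]) for i in range(2))

    ranked = sorted(sentences, key=lambda s: shift(style_features(s)),
                    reverse=True)
    return ranked[:top_k]

def paraphrase(sentence):
    """Stage 2 stub: the thesis trains a rewriting model; here we
    simply pass the sentence through."""
    return sentence.strip()

def summarize(sentences):
    return [paraphrase(s) for s in extract_content(sentences)]
```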
24

The Concurrent and Longitudinal Relationships between Orthographic Processing and Spelling in French Immersion Children

Chung, Sheila Cira, 24 June 2014
We examined the relationship between orthographic processing and spelling in French immersion children. Study 1 included 148 first graders, who were assessed on orthographic processing and spelling in English and French. In Study 2, we followed 69 second graders for two years; measures of orthographic processing and spelling in English and French were administered in second and third grade. In Study 3, we analyzed the spelling errors made by the third graders from Study 2. In Study 1, we found within-language relationships between orthographic processing and spelling in both English and French. Cross-language transfer from French orthographic processing to English spelling was also observed. In Study 2, Grade 2 English spelling predicted gains in Grade 3 English and French orthographic processing. Study 3 showed that children made transfer errors when spelling in both English and French. Overall, the current research highlights the importance of orthographic processing and spelling in French immersion children.
25

Experiences of coping in young unaccompanied refugees in the UK

Scott, Jacqui, January 2017
Research with refugees tends to be dominated by mainstream medical and trauma models. However, developments in resilience theory and research on coping increasingly find that such constructs can open up currently limited understandings of the refugee experience. This research took a culturally relativist approach to exploring experiences of coping in young unaccompanied refugees in the UK. Following extensive consultation, five young refugees were recruited, all living independently or semi-independently after arriving in the UK without their families at the age of 15 or 16. Interpretative Phenomenological Analysis was used to explore experiences and understandings of 'coping', whilst acknowledging the relative contributions of the participants' and my own cultural frameworks and the limitations of language; three participants chose to have an interpreter present. The accounts are presented idiographically, under three major themes that were apparent on multiple levels of the refugees' lives, from the individual to the cultural: 'Adaptation in the context of hardship and loss', 'Beliefs and worldview in shaping a new life', and 'Building strength and self-reliance'. These findings contribute to research identifying resilience in refugee lives, whilst not diminishing the incredible loss and pain involved. The research attests to the significance of cultural frameworks in refugee coping, with religion playing a key role. The themes are discussed in relation to existing literature and relevant texts, with implications for further research and clinical practice. A role is suggested for professionals as allies of refugees, promoting socially inclusive practices that involve work both in the clinic and at community and social levels.
26

Aplicando algoritmos de mineração de regras de associação para recuperação de informações multilíngues. / Cross-language information retrieval using algorithms for mining association rules

Geraldo, André Pinto, January 2009
This work proposes the use of algorithms for mining association rules as an approach for Cross-Language Information Retrieval. These algorithms have been widely used to analyze market basket data. The idea is to map the problem of finding associations between sales items to the problem of finding term translations over a parallel corpus (sketched below). The proposal was validated by means of experiments using different languages, queries, and corpora. The results show that the performance of the proposed approach is comparable to the state of the art, to the monolingual baseline, and to query translation via machine translation, even though the latter employs more complex Natural Language Processing techniques. A prototype for cross-language web querying was implemented to test the proposed method. The system accepts keywords in Portuguese, translates them into English, and submits the query to several websites that provide search functionality.
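To make the market-basket analogy concrete, here is a minimal sketch under assumed inputs: each aligned sentence pair acts as one transaction, and a translation rule s -> t is accepted by the usual support and confidence tests from association-rule mining. The function name, data layout, and thresholds are illustrative, not taken from the thesis.

```python
from collections import Counter
from itertools import product

def mine_translations(parallel_corpus, min_support=5, min_confidence=0.3):
    """parallel_corpus: list of (source_tokens, target_tokens) sentence
    pairs. Each pair is one 'transaction'; a rule s -> t is kept when
    (s, t) co-occur in enough pairs (support) and s predicts t strongly
    enough (confidence). Thresholds are illustrative.
    """
    term_count = Counter()   # sentence pairs containing source term s
    pair_count = Counter()   # sentence pairs containing both s and t
    for src_tokens, tgt_tokens in parallel_corpus:
        src, tgt = set(src_tokens), set(tgt_tokens)
        for s in src:
            term_count[s] += 1
        for s, t in product(src, tgt):
            pair_count[s, t] += 1

    rules = {}
    for (s, t), support in pair_count.items():
        confidence = support / term_count[s]
        if support >= min_support and confidence >= min_confidence:
            rules.setdefault(s, []).append((t, confidence))
    # Best translation candidates first.
    return {s: sorted(ts, key=lambda x: x[1], reverse=True)
            for s, ts in rules.items()}
```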
27

Cross-language plagiarism detection / Detecção de plágio multilíngue

Pereira, Rafael Corezola, January 2010
Plagiarism is one of the most serious forms of academic misconduct. It is defined as "the use of another person's written work without acknowledging the source". As a countermeasure to this problem, there are several methods that attempt to automatically detect plagiarism between documents. In this context, this work proposes a new method for Cross-Language Plagiarism Analysis. The method aims at detecting external plagiarism cases, i.e., it tries to detect the plagiarized passages in the suspicious documents (the documents to be investigated) and their corresponding text fragments in the source documents (the original documents). To accomplish this task, we propose a plagiarism detection method composed of five main phases: language normalization, retrieval of candidate documents, classifier training, plagiarism analysis, and postprocessing. Since the method is designed to detect cross-language plagiarism, we used a language guesser to identify the language of the documents and an automatic translation tool to translate all the documents in the collection into a common language (so they can be analyzed in a uniform way). After language normalization, we applied a classification algorithm in order to build a model that is able to differentiate a plagiarized text passage from a non-plagiarized one. Once the classifier is trained, the suspicious documents can be analyzed. An information retrieval system is used to retrieve, based on passages extracted from each suspicious document, the passages from the original documents that are most likely to be the source of plagiarism. Only after the candidate passages are retrieved is the plagiarism analysis performed. Finally, a postprocessing technique is applied to the reported results in order to join contiguous plagiarized passages (see the sketch below). We evaluated our method using three freely available test collections. Two of them were created for the PAN competitions (PAN'09 and PAN'10), which are international competitions on plagiarism detection. Since only a small percentage of these two collections contained cross-language plagiarism cases, we also created an artificial test collection especially designed to contain this kind of offense, named ECLaPA (Europarl Cross-Language Plagiarism Analysis). The results achieved while analyzing these collections showed that the proposed method is a viable approach to the task of cross-language plagiarism analysis.
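Of the five phases, the postprocessing step is the easiest to illustrate. The sketch below joins detected plagiarized character spans that lie close together, which is one plausible reading of "joining contiguous plagiarized passages"; the gap threshold is an assumption, not the thesis's actual value.

```python
def merge_detections(spans, max_gap=200):
    """Join detected plagiarized character spans (start, end) that lie
    within max_gap characters of each other. The threshold is an
    illustrative assumption."""
    merged = []
    for start, end in sorted(spans):
        if merged and start - merged[-1][1] <= max_gap:
            # Close enough to the previous detection: extend it.
            merged[-1] = (merged[-1][0], max(end, merged[-1][1]))
        else:
            merged.append((start, end))
    return merged

# e.g. merge_detections([(0, 500), (620, 900), (5000, 5200)])
# -> [(0, 900), (5000, 5200)]
```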
28

Mediace a její aplikace ve výuce anglického jazyka / Czech-English cross-language mediation in EFL teaching and learning

Bartáková, Stanislava, January 2018
The aim of this thesis is to introduce, from various points of view, the concept of cross-language mediation, which is well established in some countries abroad but not widely used in the Czech environment. The work aims to show that cross-language mediation is an engaging, effective, and interesting activity that prepares learners for real-life mediating situations employing both the mother tongue and English. Its contributions include a battery of mediation activities designed for the Czech context, mediation assessment criteria, and a survey among Czech and foreign teachers summarizing their attitudes towards mediation and its role in EFL lessons. KEYWORDS: cross-language mediation, translation, mother tongue in EFL teaching
29

Descoberta de cross-language links ausentes na wikipédia / Identifying missing cross-language links in wikipedia

Moreira, Carlos Eduardo Manzoni, January 2014
Wikipedia is a public encyclopedia composed of millions of articles written daily by volunteer authors from different regions of the world. The articles contain links called Cross-language Links, which relate corresponding articles across different languages. This feature is extremely useful for applications that work with automatic translation and multilingual information retrieval, as it allows the assembly of comparable corpora. Since these links are created manually, the authors fail to add them on many occasions. It is therefore important to have a mechanism that creates such links automatically, which has been motivating the development of techniques to identify missing cross-language links. In this work, we present CLLFinder, an approach for finding missing cross-language links. The approach makes use of the links between categories and an index of the content of the articles to select candidates. To identify corresponding articles, the method uses the transitivity between existing cross-language links in other languages as well as textual features extracted from the articles (the transitivity idea is sketched below). Candidate selection places the correct corresponding article in the candidate set 84.3% of the time, outperforming the work used as a baseline. Experiments on over two million pairs of articles from the English and Portuguese Wikipedias show that our approach has a recall of 78.9% and a precision of 99.2%, also outperforming the baseline system. A manual inspection of the results of CLLFinder applied to a real situation indicates that 73.6% of the new Cross-language Links suggested by our approach were indeed correct.
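The transitivity feature lends itself to a short illustration. Under an assumed layout for the existing link tables (described in the docstring), the sketch below reaches English candidates for a Portuguese article by hopping through pivot languages; every name here is hypothetical.

```python
def transitive_candidates(pt_title, links, pivots=("es", "fr", "de")):
    """Sketch of the transitivity feature under an assumed data layout:
    links[lang] maps an article title in `lang` to a dict of its
    existing cross-language links, e.g. links["pt"]["Brasil"] ==
    {"es": "Brasil", "en": "Brazil"}. Starting from a Portuguese
    article, follow links into pivot languages and back out to
    English; any English title reached this way is a candidate match.
    """
    candidates = set()
    for pivot in pivots:
        pivot_title = links["pt"].get(pt_title, {}).get(pivot)
        if pivot_title is None:
            continue
        en_title = links.get(pivot, {}).get(pivot_title, {}).get("en")
        if en_title is not None:
            candidates.add(en_title)
    return candidates
```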