  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
81

The role of adipokinetic hormone in the metabolism of basic nutrients in the fruit fly Drosophila melanogaster

MOCHANOVÁ, Michaela January 2018 (has links)
The aim of this thesis was to evaluate various metabolic characteristics in fruit flies (Drosophila melanogaster) deficient in adipokinetic hormone (AKH) production and in flies with adenosine receptor dysfunction. The experiments were designed to assess the involvement of AKH and adenosine in the control of metabolic pathways. To this end, the levels of basic nutrients and of Drome-AKH, mortality, and several other characteristics were measured in fruit flies during starvation. The results revealed an effect of AKH on the metabolism of storage nutrients; the role of adenosine, however, remained unclear.
82

Assessing text and web accessibility for people with autism spectrum disorder

Yaneva, Victoria January 2016 (has links)
People with Autism Spectrum Disorder experience difficulties with reading comprehension and information processing, which affect their school performance, employability and social inclusion. The main goal of this work is to investigate new ways to evaluate and improve text and web accessibility for adults with autism. The first stage of this research involved using eye-tracking technology and comprehension testing to collect data from a group of participants with autism and a control group of participants without autism. This series of studies resulted in the development of the ASD corpus, the first multimodal corpus of text and gaze data obtained from participants with and without autism. We modelled text complexity and sentence complexity using sets of features matched to the reading difficulties that people with autism experience. For document-level classification, we trained a readability classifier on a generic corpus with known readability levels (easy, medium and difficult) and then used the ASD corpus to evaluate it on unseen, user-assessed data. For sentence-level classification, we used gaze data and comprehension testing for the first time to define a gold standard of easy and difficult sentences, which we then used as training and evaluation sets. The results showed that both classifiers outperformed other measures of complexity and were more accurate predictors of comprehension for people with autism. We also conducted a series of experiments evaluating easy-to-read documents for people with cognitive disabilities. Easy-to-read documents are written in an accessible way, following specific writing guidelines, and contain both text and images.
We focused mainly on the image component of these documents, a topic which has been significantly under-studied compared to the text component; we were also motivated by the fact that many people with autism are strong visual thinkers, so inserting images could draw on this strength to compensate for difficulties in reading. We investigated the effects that images in text have on attention, comprehension, memorisation and user preferences in people with autism, examining each of these phenomena both objectively and subjectively. The results of these experiments were synthesised into a set of guidelines for improving text accessibility for people with autism. Finally, we evaluated the accessibility of web pages with different levels of visual complexity. We provide evidence of the barriers that people with autism face when trying to find relevant information on web pages, and we explore their subjective experiences of searching the web through survey questions.
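The document-level classification step described above can be sketched, in a highly simplified form, as a surface-feature nearest-centroid classifier. The features below (words per sentence, characters per word, type-token ratio) and the training snippets in the usage example are illustrative assumptions only, not the thesis's actual autism-matched feature set or classifier:

```python
import re
from statistics import mean

def text_features(text):
    """Surface features loosely matched to known reading difficulties:
    words per sentence, characters per word, and lexical variety."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text.lower())
    return (
        mean(len(re.findall(r"[A-Za-z']+", s)) for s in sentences),
        mean(len(w) for w in words),
        len(set(words)) / len(words),
    )

def nearest_centroid(train, text):
    """Label `text` with the readability class whose feature centroid
    (mean feature vector over its training texts) lies closest."""
    feats = text_features(text)
    centroids = {
        label: tuple(mean(col) for col in zip(*(text_features(t) for t in texts)))
        for label, texts in train.items()
    }
    return min(centroids,
               key=lambda lb: sum((a - b) ** 2
                                  for a, b in zip(feats, centroids[lb])))
```

A real system, as in the thesis, would train a proper classifier on a generic labelled corpus and evaluate on unseen user-assessed data; this sketch only illustrates the feature-then-classify shape of the approach.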
83

Multilingual extraction of multiword terms from comparable corpora

Prestes, Kassius Vargas January 2015 (has links)
This work investigates techniques for multiword term extraction from comparable corpora, i.e., sets of texts in two (or more) languages on the same domain. Term extraction, especially of multiword terms, is very important for supporting the creation of terminologies and ontologies and for improving machine translation. In this work we use a Portuguese/English comparable corpus and aim to find terms and their equivalents in both languages. We start with separate term extraction in each language, using morphosyntactic patterns to identify the n-grams (sequences of n words) most likely to be important terms of the domain. From the terms of each language, we then use their context, i.e., the words that occur around each term, to compare terms across the two languages and find the bilingual equivalents. Our main goals in this work were to identify monolingual terms, to apply the alignment techniques to Portuguese, and to evaluate different window sizes and types (the parts of speech used) for context extraction. This is the first work to apply this methodology to Portuguese and, despite the lack of some lexical and computational resources for this language (such as bilingual dictionaries and parsers), we achieved results comparable to the state of the art for French/English.
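The context-based alignment step the abstract describes can be sketched roughly as follows: build a context vector for each candidate term, map the source-language vector through a seed bilingual dictionary, and rank target candidates by cosine similarity. The window size, toy token lists, and seed dictionary in the example are illustrative assumptions, not the thesis's actual data or parameters:

```python
import math
from collections import Counter

def context_vector(tokens, term, window=3):
    """Count the words occurring within ±window of each occurrence of `term`."""
    vec = Counter()
    for i, tok in enumerate(tokens):
        if tok == term:
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            vec.update(t for t in tokens[lo:hi] if t != term)
    return vec

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[k] * v[k] for k in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def align(src_term, src_tokens, tgt_candidates, tgt_tokens, seed_dict, window=3):
    """Map the source term's context vector through a seed bilingual
    dictionary, then return the most similar target-language candidate."""
    src_vec = Counter()
    for word, n in context_vector(src_tokens, src_term, window).items():
        if word in seed_dict:
            src_vec[seed_dict[word]] += n
    scored = [(cosine(src_vec, context_vector(tgt_tokens, c, window)), c)
              for c in tgt_candidates]
    return max(scored)[1]
```

In practice, candidate terms would first be filtered by morphosyntactic (POS) patterns and the seed dictionary would be a real bilingual lexicon; this sketch only shows the context-comparison idea.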
84

On the application of focused crawling for statistical machine translation domain adaptation

Laranjeira, Bruno Rezende January 2015 (has links)
Statistical Machine Translation (SMT) is highly dependent on the availability of parallel corpora for training. However, this kind of resource can be hard to find, especially for under-resourced languages or very specific domains such as dermatology. One way to work around this situation is to use comparable corpora, which are much more abundant resources. Comparable corpora can be acquired by applying Focused Crawling (FC) algorithms. In this work we propose novel approaches to FC, some based on n-grams and others on the expressive power of multiword expressions. We also assess the viability of using FC to perform domain adaptation for generic SMT systems, and whether there is a correlation between the quality of the FC algorithms and that of the SMT systems built from the collected data. The results indicate that FC algorithms are indeed a good way of acquiring comparable corpora for SMT domain adaptation and that there is a correlation between the quality of the two processes.
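The core of a focused crawler like the ones studied above is a best-first frontier: pages are scored for domain relevance and the most promising link is always expanded next. The sketch below runs over an in-memory "web" and uses a bare vocabulary-overlap score; the page data, the scoring function, and the threshold are illustrative assumptions, not the thesis's n-gram or multiword-expression scorers:

```python
import heapq

def relevance(text, domain_terms):
    """Fraction of a page's tokens that belong to the target-domain vocabulary."""
    tokens = text.lower().split()
    return sum(t in domain_terms for t in tokens) / len(tokens) if tokens else 0.0

def focused_crawl(start, pages, links, domain_terms, max_pages=10, threshold=0.2):
    """Best-first crawl over an in-memory web: always expand the most
    relevant frontier page; keep pages whose relevance clears `threshold`.
    `pages` maps url -> text, `links` maps url -> outgoing urls."""
    frontier = [(-relevance(pages[start], domain_terms), start)]
    seen, kept = {start}, []
    while frontier and len(kept) < max_pages:
        neg_score, url = heapq.heappop(frontier)
        if -neg_score >= threshold:
            kept.append(url)
        for nxt in links.get(url, []):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (-relevance(pages[nxt], domain_terms), nxt))
    return kept
```

A real crawler would fetch pages over HTTP and use a stronger relevance model; the priority-queue structure, however, is the defining feature of focused crawling.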
85

Causative constructions faire + infinitive and their Czech equivalents

VENUŠOVÁ, Alena January 2013 (has links)
This thesis compares causative mechanisms in two languages, Czech and French. The aim of the research is to identify expressions that carry causative meaning in Czech and to analyse which of them are true equivalents of the French causative construction faire + infinitive. The work classifies general causative mechanisms, according to their nature, into synthetic (prefixes, lexical expressions) and analytic (the French complex predicate faire + infinitive, periphrastic constructions, separate clauses), and focuses on the French construction by describing its syntactic and semantic specificities. This causative construction forms the basis of parallel research in the InterCorp corpus, a tool that allows authentic texts to be excerpted. Additionally, the thesis attempts to clarify and classify the usage of the Czech equivalents and to search for factors that influence their choice, with an eye on the source language.
86

Corpus-based study of the use of English general extenders spoken by Japanese users of English across speaking proficiency levels and task types

Watanabe, Tomoko January 2015 (has links)
There is a pronounced shift in English language teaching policy in Japan with the recognition not only of the importance of spoken English and interactional competence in a globalised world, but also the need to emphasise it within English language pedagogy. Given this imperative to improve the oral communication skills of Japanese users of English (JUEs), it is vital for teachers of English to understand the cultural complexities surrounding the language, one of which is the use of vague language, which has been shown to serve both interpersonal and interactional functions in communications. One element of English vague language is the general extender (for example, or something). The use of general extenders by users of English as a second language (L2) has been studied extensively. However, there is a lack of research into the use of general extenders by JUEs, and their functional differences across speaking proficiency levels and contexts. This study sought to address the knowledge gap, critically exploring the use of general extenders spoken by JUEs across speaking proficiency levels and task types. The study drew on quantitative and qualitative corpus-based tools and methodologies using the National Institute of Information and Communications Technology Japanese Learner English Corpus (Izumi, Uchimoto, & Isahara, 2004), which contains transcriptions of a speaking test. An in-depth analysis of individual frequently-occurring general extenders was carried out across speaking proficiency levels and test tasks (description, narrative, interview and role-play) in order to reveal the frequency, and the textual and functional complexity of general extenders used by JUEs. In order to ensure the relevance of the application of the findings to the context of language education, the study also sought language teachers’ beliefs on the use of general extenders by JUEs. 
Three general extenders (or something (like that), and stuff, and and so on) were explored due to their high frequency within the corpus. The study showed that the use of these forms differed widely across the JUEs' speaking proficiency levels and the task types undertaken: or something (like that) is typically used in description tasks at the higher level and in interview and description tasks at the intermediate level; and stuff is typical of the interview at the higher level; and so on of the interview at the lower-intermediate level. The study also revealed that a greater proportion of higher-level JUEs use general extenders than do those at lower levels, while those at lower speaking proficiency levels who do use general extenders do so at a high density. A qualitative exploration of concordance lines and extracts revealed a number of interpersonal and discourse-oriented functions across speaking proficiency levels: or something (like that) functions to show uncertainty about information or linguistic choice and helps the JUEs to hold their turn; and stuff serves to make the JUEs' expression emphatic; and so on appears to show the JUEs' lack of confidence in their language use and signals a desire to give up their turn. The findings suggest that the use of general extenders by JUEs is multifunctional, and that this multi-functionality is linked to various factors, such as the level of language proficiency, the nature of the task, the real-time processing of speech, and the power asymmetry in which time and floor are mainly managed by the examiners. The study extends understanding of how JUEs use general extenders to convey interpersonal and discourse-oriented functions in the context of language education, in speaking tests and possibly also in classrooms, and provides new insights into the dynamics of L2 users' use of general extenders.
It calls into question the generally held view that the use of general extenders by L2 users as a group is homogeneous. The findings from this study could help teachers understand JUEs' intentions in their speech and aid their speech production. More importantly, the study may raise language educators' awareness of how the use of general extenders by JUEs varies across speaking proficiency levels and task types. These findings have pedagogical implications in the context of language education and can assist teachers in improving learners' interactional competence, in line with emerging English language teaching policy in Japan.
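The frequency side of a corpus study like this one reduces to counting the target forms and normalising by the amount of speech (occurrences per 1,000 words is a common convention). A minimal sketch, assuming plain transcribed utterances rather than the NICT JLE Corpus's actual markup:

```python
import re

# The three forms studied, as search patterns over lower-cased transcripts.
EXTENDERS = {
    "or something (like that)": r"\bor something( like that)?\b",
    "and stuff": r"\band stuff\b",
    "and so on": r"\band so on\b",
}

def extender_rates(utterances, per=1000):
    """Occurrences of each general extender per `per` words of speech."""
    text = " ".join(utterances).lower()
    n_words = len(text.split())
    return {name: len(re.findall(pat, text)) * per / n_words
            for name, pat in EXTENDERS.items()}
```

Grouping utterances by proficiency level or task type before calling this would give the cross-group comparison the study reports; the functional analysis, of course, still requires reading concordance lines by hand.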
87

Building a semantic resource for the Arabic language from multilingual aligned corpora

Abdulhay, Authoul 23 November 2012 (has links)
This thesis aims at the implementation and evaluation of techniques for extracting semantic relations from a multilingual aligned corpus. The relations are extracted by transitivity of translational equivalence: two lexemes that share the same equivalents in a target language are likely to share a meaning. First, our observations focus on the semantic comparison of translational equivalents in multilingual aligned corpora. From these equivalences, we try to extract "cliques": maximal complete connected subgraphs in which all units are interrelated because of a probable semantic intersection. Cliques have the advantage of providing information on both the synonymy and the polysemy of units, and of supplying a form of semantic disambiguation. They are built from the automatic extraction of lexical correspondences, based on the observation of occurrences and co-occurrences in the corpus; the use of lemmatisation techniques is also considered. Second, we attempt to link these cliques to a semantic lexicon (such as WordNet) in order to assess the possibility of recovering, for Arabic units, semantic relations already defined for English or French units. These relations would make it possible to automatically build a semantic network useful for several applications of Arabic language processing, such as question-answering systems, machine translation, alignment systems, and information retrieval.
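Treating translational equivalences as edges of an undirected graph, the "cliques" the abstract describes are exactly the maximal cliques of that graph. A minimal sketch using the basic Bron-Kerbosch enumeration; the word pairs in the example are invented for illustration:

```python
def bron_kerbosch(R, P, X, adj, out):
    """Enumerate the maximal cliques of an undirected graph
    (basic Bron-Kerbosch, no pivoting)."""
    if not P and not X:
        out.append(sorted(R))
        return
    for v in list(P):
        bron_kerbosch(R | {v}, P & adj[v], X & adj[v], adj, out)
        P.remove(v)
        X.add(v)

def translation_cliques(pairs):
    """Build a graph from translational-equivalence pairs and return its
    maximal cliques -- candidate groups of words sharing one sense."""
    adj = {}
    for a, b in pairs:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    out = []
    bron_kerbosch(set(), set(adj), set(), adj, out)
    return sorted(out)
```

A polysemous word then simply appears in several cliques, one per sense, which is the disambiguation effect the abstract mentions.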