191

Testes de proficiência lingüística em língua de sinais: as possibilidades para os intérpretes de libras / Language proficiency testing in sign language: the possibilities for Libras interpreters

Pereira, Maria Cristina Pires 20 March 2008 (has links)
This master's thesis investigates sign language proficiency testing as applied to hearing people, sign language interpreters, at the beginning of their professional lives. Given the diversity of instruments, procedures and conceptions of what should be assessed in sign language interpreters (SLIs), an investigation of language proficiency testing is needed, together with the distinction between translation proficiency and professional certification, and the question of the most appropriate moment to apply each kind of test across the different stages of SLIs' training and professional practice. The theoretical grounding covers the distinction between language proficiency and fluency; the evolution of the concept of language proficiency; language testing; and a general overview of sign language translation and interpreting. The sign language testing addressed in this study includes tests explicitly labelled 'proficiency tests' as well as certification or selection examinations that include sign language proficiency features, even if the…
192

L'incidence de la privation du langage sur l'acquisition du sentiment moral chez l'enfant sourd en France et en Syrie / The effect of language deprivation on the moral development of the deaf child in France and Syria

Hatem, Abir 06 May 2011 (has links)
In this work, we study the moral development of deaf children from an intercultural perspective. We are interested in how cognitive development operates under auditory deprivation: cognition as the hearing-impaired child's tool of representation, and development as the outcome of communication with the environment. What psychological and cognitive adjustments take place in the case of deafness? Our research question concerns the link between social interaction and moral development in the deaf child. How do deaf children build linguistic interactions without the auditory function? Why do deaf children not gain easy access to moral judgment? Is there a link between language deprivation and delayed moral judgment? Is moral judgment related to culture? We observe that deaf children have difficulty accessing moral judgment, and we argue that moral judgment in deaf children, as in hearing children, depends essentially on social interaction. Disability has become far more visible worldwide, following demographic changes and the prevalence of health factors affecting pregnant women before and during childbirth that are among the causes of disability; this has been accompanied by growing attention to people with disabilities at every level. Worldwide there are about 650 million people with disabilities, roughly 10% of the world's population; about two thirds live in developing countries, and in some developing countries nearly 20% of the population lives with a disability. Here we focus on hearing impairment and study the course of moral development in the deaf child from an intercultural perspective. A deaf child is a child who does not hear and therefore cannot appropriate the spoken language around him; this does not mean that the child is without thought, intelligence or language. On the contrary, the deaf child follows an original path: where the hearing child repeats what he hears, the deaf child must invent. To communicate, he is obliged to devise a language of gestures and facial expressions in order to make himself understood. The deaf child is thus a child with normal intellectual and linguistic potential, with his own coherent way of being and an adaptive intelligence. Epistemologically, our study lies at the intersection of differential psychology, developmental psychology, physiological psychology, cognitive psychology and cultural psychology. We address chiefly the processes of moral judgment under sensory deficit, around the question of communication in two different societies. The differential approach to deficit studies the disability situation as an organized, adapted and integrated system with its own dynamics and specific flexibilities; individual variation observed in deficit populations is a source of knowledge about the disability itself, but also about the laws of development in typical subjects, so that the study of deficit situations tells us a great deal about ordinary psychological processes. The physiological approach studies the function of hearing: how auditory information from the environment is carried from the ear to the brain, and how the brain processes it. The developmental approach generally posits a certain continuity between typical and disturbed development. The cognitive approach to deficits asks how deaf children develop moral judgment.
193

Sistema de reconhecimento automático de Língua Brasileira de Sinais / Automatic Recognition System of Brazilian Sign Language

Beatriz Tomazela Teodoro 23 October 2015 (has links)
Sign language recognition is an important research area that aims to mitigate the obstacles in the daily lives of people who are deaf and/or hard of hearing and to increase their integration into the majority-hearing society in which we live. On this basis, this dissertation proposes an information system for the automatic recognition of Brazilian Sign Language (Libras), intended to simplify communication between deaf people conversing in Libras and hearing people who do not know the language. Recognition is performed by processing digital image sequences (videos) of people communicating in Libras, without coloured gloves, data gloves or sensors, and without requiring high-quality recordings in controlled laboratory environments, focusing on signs that use only the hands. Given the difficulty of building such a system, development was divided into stages, each of which is a contribution in its own right to future work in sign recognition and to other work involving image processing, human skin segmentation, object tracking and related problems. To this end, we developed a tool for segmenting image sequences related to Libras and a tool for identifying dynamic signs in those sequences and translating them into Portuguese. We also built an image bank of 30 basic words chosen by a Libras specialist, recorded without coloured gloves, controlled laboratory environments or constraints on the clothing of the individuals performing the signs. The segmentation algorithm implemented and used in this study achieved an average accuracy of 99.02% and an overlap index of 0.61 on a set of 180 preprocessed frames extracted from 18 videos recorded for the image bank, and was able to segment just over 70% of the samples. As for word recognition, the proposed system correctly recognized 100% of the 422 successfully segmented word samples in the image bank, combining the edit-distance technique with a voting scheme over a binary classifier, thus successfully meeting the goal of this work.
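
A minimal sketch of the matching scheme the abstract describes, combining edit distance with a vote over the nearest stored templates. The pose-quantization step, the function names and the k parameter are assumptions for illustration, not the author's actual implementation:

    def edit_distance(a, b):
        # Classic Levenshtein dynamic program over two symbol sequences.
        m, n = len(a), len(b)
        d = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(m + 1):
            d[i][0] = i
        for j in range(n + 1):
            d[0][j] = j
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                cost = 0 if a[i - 1] == b[j - 1] else 1
                d[i][j] = min(d[i - 1][j] + 1,         # deletion
                              d[i][j - 1] + 1,         # insertion
                              d[i - 1][j - 1] + cost)  # substitution
        return d[m][n]

    def recognize(sample, templates, k=3):
        # templates: list of (symbol_sequence, word_label) pairs, where each
        # sequence is a sign rendered as quantized hand-pose symbols.
        # Vote among the k templates closest in edit distance.
        ranked = sorted(templates, key=lambda t: edit_distance(sample, t[0]))
        votes = [label for _, label in ranked[:k]]
        return max(set(votes), key=votes.count)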
194

Segmentação automática de Expressões Faciais Gramaticais com Multilayer Perceptrons e Misturas de Especialistas / Automatic Segmentation of Grammatical Facial Expressions with Multilayer Perceptrons and Mixtures of Experts

Maria Eduarda de Araújo Cardoso 02 October 2018 (has links)
Facial expression recognition is an area of interest in computer science and has attracted researchers from different fields, since it can support the development of many kinds of applications. Automatic recognition of these expressions has become a goal above all in human behaviour analysis. For the study of sign languages in particular, the analysis of facial expressions is important for interpreting discourse: it is the element that conveys prosodic information, supports the grammatical and semantic structure of the language, and eliminates ambiguities between similar signs. In this context the expressions are called grammatical facial expressions, and they contribute to the semantic composition of sentences. Among the lines of research exploring this theme is the automatic analysis of sign language. For applications that aim to interpret sign languages automatically, such expressions must be identified in the course of signing, a task defined as segmentation of grammatical facial expressions. It is therefore useful to develop an architecture capable of identifying these expressions in a sentence, segmenting it according to each type of expression used in its construction. Given this need, this research presents: a review of prior studies in the area; the implementation of pattern-recognition algorithms using Multilayer Perceptrons and mixtures of experts to solve the facial expression recognition problem; and a comparison of these algorithms as recognizers of the grammatical facial expressions used in constructing sentences in Brazilian Sign Language (Libras). Implementation and testing showed that automatic segmentation of grammatical facial expressions is feasible in user-dependent contexts. User-independent contexts remain a challenge that demands, above all, a learning environment structured on datasets larger and more diverse than those currently available.
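
For readers unfamiliar with the setup, the sketch below frames segmentation as frame-wise classification with a Multilayer Perceptron, followed by merging runs of identical labels into intervals. The feature vectors, label set and use of scikit-learn are assumptions; the dissertation's own models (including the mixtures of experts) are not reproduced here:

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    # X: (n_frames, n_features) facial-landmark vectors; y: per-frame labels
    # such as "wh-question", "negation", "neutral". Random stand-in data.
    X = np.random.rand(1000, 100)
    y = np.random.choice(["wh", "neg", "neutral"], size=1000)

    clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300)
    clf.fit(X, y)

    # Segmentation: predict per frame, then merge runs of equal labels
    # into (start_frame, end_frame, label) intervals.
    pred = clf.predict(X)
    segments, start = [], 0
    for i in range(1, len(pred) + 1):
        if i == len(pred) or pred[i] != pred[start]:
            segments.append((start, i - 1, pred[start]))
            start = i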
195

A formação do intérprete de libras para o ensino de ciências: lacunas refletidas na atuação do TILS em sala de aula / The professional qualification of the Brazilian Sign Language interpreter for science teaching: gaps reflected in their classroom work

Rieger, Camila Paula Effgen 11 August 2016 (has links)
This work addresses the professional education of sign language interpreters (SLIs) working as mediators of communication between teachers and deaf students in mainstream public institutions. It discusses the barriers SLIs face when technical and scientific terms tied to specific subjects are not part of their working vocabulary or have no equivalent in sign language, and how the interpreters' academic background contributes to erecting these obstacles to the interpreting process. Starting from the author's experience as a trained pedagogue working as a Portuguese/Libras interpreter (Libras being the Brazilian sign language, Língua Brasileira de Sinais) in an Exact Sciences degree program, the research broadens the view of issues related to SLIs' training paths and their work in educational institutions. It first analyses articles published in the proceedings of the National Congress of Research in Translation and Interpretation of Libras and Portuguese, with attention to those whose theme is SLI training. This analysis highlights the absence of discussion of training for work in science teaching as a way to overcome the language barriers encountered in the act of interpreting. To analyse the gap between the knowledge working SLIs acquire during their education (initial or continuing) and the content of the Exact and Natural Sciences, a questionnaire was administered to SLIs in the cities of Foz do Iguaçu and Cascavel. The responses show that SLI education is concentrated in Humanities programs and that, whether at the postgraduate level or in other training processes, no attention is given to preparation for working in the Exact Sciences.
196

Describing and remembering motion events in British Sign Language

Bermingham, Rowena January 2018 (has links)
Motion events are ubiquitous in conversation, from describing a tiresome commute to recounting a burglary. These situations, where an entity changes location, consist of four main semantic components: Motion (the movement), Figure (the entity moving), Ground (the object or objects with respect to which the Figure carries out the Motion) and Path (the route taken). Two additional semantic components can occur simultaneously: Manner (the way the Motion occurs) and Cause (the source of/reason for the Motion). Languages differ in preferences for provision and packaging of semantic components in descriptions. It has been suggested, in the thinking-for-speaking hypothesis, that these preferences influence the conceptualisation of events (such as their memorisation). This thesis addresses questions relating to the description and memory of Motion events in British Sign Language (BSL) and English. It compares early BSL (acquired before age seven) and late BSL (acquired after age 16) descriptions of Motion events and investigates whether linguistic preferences influence memory. Comparing descriptions by early signers and late signers indicates where their linguistic preferences differ, providing valuable knowledge for interpreters wishing to match early signers. Understanding how linguistic preferences might influence memory contributes to debates around the connection between language and thought. The experimental groups for this study were: deaf early BSL signers, hearing early BSL signers, deaf late BSL signers, hearing late BSL signers and hearing English monolinguals. Participants watched target Motion event video clips before completing a memory and attention task battery. Subsequently, they performed a forced-choice recognition task where they saw each target Motion event clip again alongside a distractor clip that differed in one semantic component. They selected which of the two clips they had seen in the first presentation. Finally, participants were filmed describing all of the target and distractor video clips (in English for English monolinguals and BSL for all other groups). The Motion event descriptions were coded for the inclusion and packaging of components. Linguistic descriptions were compared between languages (English and BSL) and BSL group. Statistical models were created to investigate variation on the memory and attention task battery and the recognition task. Results from linguistic analysis reveal that English and BSL are similar in the components included in descriptions. However, packaging differs between languages. English descriptions show preferences for Manner verbs and spatial particles to express Path ('run out'). BSL descriptions show preferences for serial verb constructions (using Manner and Path verbs in the same clause). The BSL groups are also similar in the components they include in descriptions. However, the packaging differs, with hearing late signers showing some English-like preferences and deaf early signers showing stronger serial verb preferences. Results from the behavioural experiments show no overall relationship between language group and memory. I suggest that the similarity of information provided in English and BSL descriptions undermines the ability of the task to reveal memory differences. However, results suggest a link between individual linguistic description and memory; marking a difference between components in linguistic description is correlated with correctly selecting that component clip in the recognition task. 
I argue that this indicates a relationship between linguistic encoding and memory within each individual, where their personal preference for including certain semantic components in their utterances is connected to their memory for those components. I also propose that if the languages were more distinct in their inclusion of information then there may have been differences in recognition task scores. I note that further research is needed across modalities to create a fuller picture of how information is included and packaged cross-modally and how this might affect individual Motion event memory.
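
As a small illustration of the coding scheme such a component analysis implies, the sketch below records which semantic components a description expresses and how they are packaged. The field names follow the components listed in the abstract; the packaging vocabulary is an assumption:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class MotionEventCoding:
        figure: str                   # entity that moves, e.g. "woman"
        ground: str                   # reference object(s), e.g. "house"
        path: str                     # route taken, e.g. "out of"
        manner: Optional[str] = None  # e.g. "running"; None if unexpressed
        cause: Optional[str] = None   # e.g. "pushed"; None if unexpressed
        packaging: str = "serial"     # e.g. "manner-verb+particle", "serial"

    # English "she ran out of the house" packages Manner in the verb
    # and Path in a spatial particle:
    english = MotionEventCoding(figure="she", ground="house",
                                path="out of", manner="running",
                                packaging="manner-verb+particle")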
197

Facilitating American Sign Language learning for hearing parents of deaf children via mobile devices

Xu, Kimberly A. 02 April 2013 (has links)
In the United States, between 90 and 95% of deaf children are born to hearing parents. In most circumstances, the birth of a deaf child is the first experience these parents have with American Sign Language (ASL) and the Deaf community. Parents learn ASL as a second language to provide their children with language models and to be able to communicate with their children more effectively, but they face significant challenges. To address these challenges, I have developed a mobile learning application, SMARTSign, to help parents of deaf children learn ASL vocabulary. I hypothesize that providing a method for parents to learn and practice ASL words associated with popular children's stories on their mobile phones would help improve their ASL vocabulary and abilities more than if words were grouped by theme. I posit that parents who learn vocabulary associated with children's stories will use the application more, which will lead to more exposure to ASL and more learned vocabulary. My dissertation consists of three studies. First I show that novices are able to reproduce signs presented on mobile devices with high accuracy regardless of source video resolution. Next, I interview hearing parents with deaf children to discover the difficulties they have with current methods for learning ASL. When asked which methods of presenting signs they preferred, participants were most interested in learning vocabulary associated with children's stories. Finally, I deploy SMARTSign to parents for four weeks. Participants learning story vocabulary used the application more often and had higher sign recognition scores than participants who learned vocabulary based on word types. The condition did not affect participants' ability to produce the signed vocabulary.
198

Segmental discriminative analysis for American Sign Language recognition and verification

Yin, Pei 06 April 2010 (has links)
This dissertation presents segmental discriminative analysis techniques for American Sign Language (ASL) recognition and verification. ASL recognition is a sequence classification problem. One of the most successful techniques for recognizing ASL is the hidden Markov model (HMM) and its variants. This dissertation addresses two problems in sign recognition by HMMs. The first is discriminative feature selection for temporally-correlated data. Temporal correlation in sequences often causes difficulties in feature selection. To mitigate this problem, this dissertation proposes segmentally-boosted HMMs (SBHMMs), which construct the state-optimized features in a segmental and discriminative manner. The second problem is the decomposition of ASL signs for efficient and accurate recognition. For this problem, this dissertation proposes discriminative state-space clustering (DISC), a data-driven method of automatically extracting sub-sign units by state-tying from the results of feature selection. DISC and SBHMMs can jointly search for discriminative feature sets and representation units of ASL recognition. ASL verification, which determines whether an input signing sequence matches a pre-defined phrase, shares similarities with ASL recognition, but it has more prior knowledge and a higher expectation of accuracy. Therefore, ASL verification requires additional discriminative analysis not only in utilizing prior knowledge but also in actively selecting a set of phrases that have a high expectation of verification accuracy in the service of improving the experience of users. This dissertation describes ASL verification using CopyCat, an ASL game that helps deaf children acquire language abilities at an early age. It then presents the "probe" technique which automatically searches for an optimal threshold for verification using prior knowledge and BIG, a bi-gram error-ranking predictor which efficiently selects/creates phrases that, based on the previous performance of existing verification systems, should have high verification accuracy. This work demonstrates the utility of the described technologies in a series of experiments. SBHMMs are validated in ASL phrase recognition as well as various other applications such as lip reading and speech recognition. DISC-SBHMMs consistently produce fewer errors than traditional HMMs and SBHMMs in recognizing ASL phrases using an instrumented glove. Probe achieves verification efficacy comparable to the optimum obtained from manually exhaustive search. Finally, when verifying phrases in CopyCat, BIG predicts which CopyCat phrases, even unseen in training, will have the best verification accuracy with results comparable to much more computationally intensive methods.
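
A toy rendering of the verification step described above: score an input sequence against the HMM of the expected phrase and accept if the per-frame log-likelihood clears a threshold. hmmlearn stands in for the dissertation's SBHMM/DISC models, and the fixed threshold stands in for the "probe" search, so this is illustrative only:

    import numpy as np
    from hmmlearn import hmm

    # Train one HMM per phrase on (n_frames, n_features) glove/vision
    # feature data; random stand-in frames here.
    train = np.random.rand(200, 12)
    model = hmm.GaussianHMM(n_components=5, covariance_type="diag")
    model.fit(train)

    def verify(sequence, model, threshold):
        # Accept the signing attempt if its average per-frame
        # log-likelihood under the expected phrase's model is high enough.
        score = model.score(sequence) / len(sequence)
        return score >= threshold

    attempt = np.random.rand(150, 12)
    print(verify(attempt, model, threshold=-20.0))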
199

The processing of German Sign Language sentences / Three event-related potential studies on phonological, morpho-syntactic, and semantic aspects

Hosemann, Jana Alexandra 10 April 2015 (has links)
No description available.
200

A preprocessor for an English-to-Sign Language Machine Translation system

Combrink, Andries J. 12 1900 (has links)
Thesis (MSc Computer Science), University of Stellenbosch, 2005. / Sign Languages, such as South African Sign Language, are proper natural languages: they have their own vocabularies and make use of their own grammar rules. However, machine translation from a spoken to a signed language creates interesting challenges, caused by the differences in character between spoken and signed languages. Sign Languages are classified as visual-spatial languages: a signer makes use of the space around him and gives visual clues through body language, facial expressions and sign movements to help him communicate. It is the absence of these elements in the written form of a spoken language that causes contextual ambiguities during machine translation. The work described in this thesis is aimed at resolving the ambiguities caused by a translation from written English to South African Sign Language. We designed and implemented a preprocessor that uses areas of linguistics such as anaphora resolution, together with a data structure called a scene graph, to help with the spatial aspect of the translation. The preprocessor also makes use of semantic and syntactic analysis, with the help of a semantic relational database, to find emotional context in text. This analysis is then used to suggest body language, facial expressions and sign-movement attributes, helping us to address the visual aspect of the translation. The results show that the system is flexible enough to be used with different types of text and will improve the overall quality of a machine translation from English into a Sign Language.
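
A toy version of the scene-graph idea: referents identified by anaphora resolution are pinned to loci in signing space, so that later mentions point back to the same location. Class and locus names are invented for illustration:

    class SceneGraph:
        # Available loci in the signing space in front of the signer.
        LOCI = ["left", "right", "center-left", "center-right"]

        def __init__(self):
            self.entities = {}   # referent -> assigned locus

        def place(self, referent):
            # Assign the next free locus to a newly introduced referent;
            # a repeated mention resolves to the locus already assigned.
            if referent not in self.entities:
                free = [l for l in self.LOCI
                        if l not in self.entities.values()]
                self.entities[referent] = free[0] if free else "center"
            return self.entities[referent]

    scene = SceneGraph()
    scene.place("MARY")   # -> "left"
    scene.place("JOHN")   # -> "right"
    scene.place("MARY")   # pronoun points back to "left"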
