  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Αναγνώριση γλώσσας κειμένου με βάση στατιστικά μοντέλα / Text language recognition based on statistical models

Τσέλιος, Βασίλειος 16 April 2013 (has links)
In this thesis, a multilingual, multidomain text corpus was constructed, with texts from four thematic domains in ten European languages. Language-identification experiments based on statistical models were then carried out on this corpus, and the results confirm the existing theory on the ability to identify the language of a text using the N-gram method.
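The N-gram method the abstract refers to can be illustrated with a minimal rank-order profile classifier in the style of Cavnar and Trenkle; the n-gram length, profile size and training texts below are illustrative choices, not the thesis's actual configuration.

```python
from collections import Counter

def ngram_profile(text, n=3, top=300):
    """Rank character n-grams by frequency, most frequent first."""
    grams = Counter(text[i:i + n] for i in range(len(text) - n + 1))
    return [g for g, _ in grams.most_common(top)]

def out_of_place(doc_profile, lang_profile):
    """Cavnar-Trenkle out-of-place distance between two rank profiles."""
    penalty = len(lang_profile)
    return sum(abs(lang_profile.index(g) - rank) if g in lang_profile else penalty
               for rank, g in enumerate(doc_profile))

def identify(text, models):
    """Pick the language whose profile is closest to the text's profile."""
    profile = ngram_profile(text)
    return min(models, key=lambda lang: out_of_place(profile, models[lang]))
```

In practice each language profile is built from a large reference corpus; a short document is then assigned the language with the smallest out-of-place distance.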
2

Independent hand-tracking from a single two-dimensional view and its application to South African sign language recognition

Achmed, Imran January 2014 (has links)
Philosophiae Doctor - PhD / Hand motion provides a natural way of interaction that allows humans to interact not only with the environment but also with each other. The effectiveness and accuracy of hand-tracking is fundamental to the recognition of sign language, and any inconsistency in hand-tracking results in a breakdown of sign language communication. Hands are articulated objects, which complicates their tracking. In sign language communication, hand-tracking is often challenged by occlusion from the other hand, from other body parts, and from the environment in which the hands are tracked. The thesis investigates whether a single framework can be developed to track an individual's hands independently from a single 2D camera, in constrained and unconstrained environments, without the need for any special device. The framework consists of a three-phase strategy: detection, tracking and learning. The detection phase validates that the object being tracked is a hand, using extended local binary patterns and random forests. The tracking phase tracks the hands independently by extending a novel data-association technique. The learning phase exploits contextual features, using the scale-invariant feature transform (SIFT) and the fast library for approximate nearest neighbours (FLANN), to assist tracking and to recover the hands from any form of tracking failure. The framework was evaluated on South African Sign Language phrases that use a single hand, both hands without occlusion, and both hands with occlusion. These phrases were performed by 20 individuals in constrained and unconstrained environments. The experiments revealed that integrating all three phases into a single framework is suitable for tracking hands in both constrained and unconstrained environments, where high average accuracies of 82.08% and 79.83% were achieved respectively.
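The detection phase's extended local binary patterns can be sketched with the basic 8-neighbour LBP below (pure Python, no image library); the "extended" variant in the thesis adds radius and uniformity refinements that are omitted here.

```python
def lbp_histogram(img):
    """Normalised histogram of basic 8-neighbour LBP codes.

    img is a 2D list of grey values; each interior pixel is encoded
    as an 8-bit code by comparing its 8 neighbours against it.
    """
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    hist = [0] * 256
    for y in range(1, len(img) - 1):
        for x in range(1, len(img[0]) - 1):
            code = 0
            for bit, (dy, dx) in enumerate(offsets):
                if img[y + dy][x + dx] >= img[y][x]:
                    code |= 1 << bit
            hist[code] += 1
    total = sum(hist)
    return [v / total for v in hist]
```

A random forest would then be trained on such histograms to decide whether a candidate region is a hand.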
3

Hand shape estimation for South African sign language

Li, Pei January 2012 (has links)
Magister Scientiae - MSc / Hand shape recognition is a pivotal part of any system that attempts to implement sign language recognition. This thesis presents a novel system which recognises hand shapes from a single 2D camera view. By mapping the recognised hand shape from 2D to 3D, it is possible to obtain 3D co-ordinates for each of the joints within the hand, using the kinematics embedded in a 3D hand avatar, and to smooth the transformation in 3D space between any given hand shapes. The novelty of this system is that it does not require a hand pose to be recognised at every frame; instead, hand shapes are detected at a given step size. This architecture allows for a more efficient system with better accuracy than other related systems. Moreover, a real-time hand-tracking strategy was developed that works efficiently for any skin tone and against complex backgrounds.
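The 2D-to-3D mapping relies on the kinematics embedded in the hand avatar; the planar chain below is a toy version of such kinematics, with made-up segment lengths and joint angles, showing how joint parameters determine a fingertip position.

```python
import math

def finger_tip(lengths, angles):
    """Planar forward kinematics for a finger modelled as a chain of
    segments; each joint angle is relative to the previous segment."""
    x = y = theta = 0.0
    for seg_len, angle in zip(lengths, angles):
        theta += angle
        x += seg_len * math.cos(theta)
        y += seg_len * math.sin(theta)
    return x, y
```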
4

Data collection of 3D spatial features of gestures from the static Peruvian Sign Language alphabet for sign language recognition

Nurena-Jara, Roberto, Ramos-Carrion, Cristopher, Shiguihara-Juarez, Pedro 21 October 2020 (has links)
The full text of this work is not available in the UPC Academic Repository due to restrictions imposed by the publisher. / Peruvian Sign Language Recognition (PSL) is approached as a classification problem. Previous work has employed 2D features from the position of the hands to tackle this problem. In this paper, we propose a method to construct a dataset consisting of 3D spatial positions of static gestures from the PSL alphabet, using the HTC Vive device and a well-known technique to extract 21 keypoints from the hand and obtain a feature vector. A dataset of 35,400 gesture instances for PSL was constructed, and a novel data-collection procedure was described. To validate the dataset, four baseline classifiers were compared on the Peruvian Sign Language Recognition (PSLR) task, achieving an average F1 score of 99.32% in the best case. / Peer-reviewed
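Turning 21 hand keypoints into a feature vector can be sketched as below; the wrist-centred, scale-normalised flattening is an assumed preprocessing step, since the abstract does not spell out the normalisation used.

```python
import math

def keypoints_to_feature(keypoints):
    """Flatten 21 (x, y, z) keypoints into a 63-dim feature vector,
    translated so keypoint 0 (the wrist) is the origin and scaled so
    the farthest keypoint lies at distance 1."""
    wx, wy, wz = keypoints[0]
    rel = [(x - wx, y - wy, z - wz) for x, y, z in keypoints]
    scale = max(math.sqrt(x * x + y * y + z * z) for x, y, z in rel) or 1.0
    return [coord / scale for point in rel for coord in point]
```

A classifier is then trained directly on these fixed-length vectors.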
5

Improving the efficacy of automated sign language practice tools

Brashear, Helene Margaret 07 July 2010 (has links)
The CopyCat project is an interdisciplinary effort to create a set of computer-aided language learning tools for deaf children. The CopyCat games allow children to interact with characters using American Sign Language (ASL). Through Wizard of Oz pilot studies we have developed a set of games, shown their efficacy in improving young deaf children's language and memory skills, and collected a large corpus of signing examples. Our previous implementation of the automatic CopyCat games uses automatic sign language recognition and verification in the infrastructure of a memory repetition and phrase verification task. The goal of my research is to expand the automatic sign language system to transition the CopyCat games to include the flexibility of a dialogue system. I have created a labeling ontology from analysis of the CopyCat signing corpus, and I have used the ontology to describe the contents of the CopyCat data set. This ontology was used to change and improve the automatic sign language recognition system and to add flexibility to language use in the automatic game.
6

Automatic recognition of American sign language classifiers

Zafrulla, Zahoor 08 June 2015 (has links)
Automatically recognizing classifier-based grammatical structures of American Sign Language (ASL) is a challenging problem. Classifiers in ASL utilize surrogate hand shapes for people or "classes" of objects and provide information about their location, movement and appearance. In the past researchers have focused on recognition of finger spelling, isolated signs, facial expressions and interrogative words like WH-questions (e.g. Who, What, Where, and When). Challenging problems such as recognition of ASL sentences and classifier-based grammatical structures remain relatively unexplored in the field of ASL recognition.  One application of recognition of classifiers is toward creating educational games to help young deaf children acquire language skills. Previous work developed CopyCat, an educational ASL game that requires children to engage in a progressively more difficult expressive signing task as they advance through the game.   We have shown that by leveraging context we can use verification, in place of recognition, to boost machine performance for determining if the signed responses in an expressive signing task, like in the CopyCat game, are correct or incorrect. We have demonstrated that the quality of a machine verifier's ability to identify the boundary of the signs can be improved by using a novel two-pass technique that combines signed input in both forward and reverse directions. Additionally, we have shown that we can reduce CopyCat's dependency on custom manufactured hardware by using an off-the-shelf Microsoft Kinect depth camera to achieve similar verification performance. Finally, we show how we can extend our ability to recognize sign language by leveraging depth maps to develop a method using improved hand detection and hand shape classification to recognize selected classifier-based grammatical structures of ASL.
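The two-pass idea of combining signed input in forward and reverse directions can be illustrated on a toy boundary-detection task: scan forward for the sign's start and scan the reversed signal for its end. This is a deliberately simplified stand-in for the verification technique in the thesis.

```python
def sign_boundaries(energies, threshold):
    """Locate a sign's start with a forward scan and its end with a
    scan over the reversed sequence, then map back to frame indices."""
    start = next(i for i, e in enumerate(energies) if e > threshold)
    rev_end = next(i for i, e in enumerate(reversed(energies)) if e > threshold)
    end = len(energies) - 1 - rev_end
    return start, end
```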
7

An integrated sign language recognition system

Nel, Warren January 2014 (has links)
Doctor Educationis / Research has shown that five parameters are required to recognize any sign language gesture: hand shape, location, orientation and motion, as well as facial expressions. The South African Sign Language (SASL) research group at the University of the Western Cape has created systems that recognize sign language gestures using a single parameter. Using a single parameter can cause ambiguities between similarly signed gestures, restricting the possible vocabulary size. This research pioneers the group's work on combining multiple parameters to achieve a larger recognition vocabulary. The proposed methodology combines hand location and hand shape recognition into one system, which is shown to recognize a large vocabulary of 50 signs at a high average accuracy of 74.1%. This vocabulary is much larger than those of existing SASL recognition systems, and the system achieves higher accuracy than those systems in spite of the larger vocabulary. It is also shown to be highly robust to variations in test subjects such as skin colour, gender and body dimensions. Furthermore, this research pioneers the group's work on continuously recognizing signs from a video stream, whereas existing systems recognized a single sign at a time. To this end, a highly accurate continuous gesture segmentation strategy is proposed and shown to accurately recognize sentences consisting of five isolated SASL gestures.
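Combining hand location and hand shape into one recogniser can be sketched as a product-rule fusion of the two classifiers' per-sign probabilities; this fusion rule is illustrative, and the thesis's actual combination strategy may differ.

```python
def fuse_parameters(p_location, p_shape):
    """Product-rule fusion of two per-sign probability tables,
    renormalised so the result sums to 1."""
    joint = {sign: p_location[sign] * p_shape[sign] for sign in p_location}
    total = sum(joint.values())
    return {sign: p / total for sign, p in joint.items()}
```

Signs that are ambiguous under one parameter alone are often separated once the second parameter's evidence is multiplied in.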
8

Detekce změny jazyka při hovoru / Code Switching Detection in Speech

Povolný, Filip January 2015 (has links)
This master's thesis deals with code-switching detection in speech. The first part of the thesis describes state-of-the-art methods of language diarization. The proposed method is based on an acoustic approach to language identification using a combination of GMM, i-vector and LDA. A new Mandarin-English code-switching database was created for the experiments. Using this system, an accuracy of 89.3% is achieved on the database.
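The acoustic approach scores frame sequences against per-language models; as a minimal stand-in for the GMM/i-vector/LDA pipeline, the sketch below sums per-frame log-likelihoods under a single diagonal Gaussian per language (the means, variances and feature dimensionality are invented for illustration).

```python
import math

def log_gauss(frame, mean, var):
    """Log-density of a feature frame under a diagonal Gaussian."""
    return sum(-0.5 * (math.log(2 * math.pi * v) + (x - m) ** 2 / v)
               for x, m, v in zip(frame, mean, var))

def identify_language(frames, models):
    """Return the language whose model gives the frames the highest
    total log-likelihood."""
    def score(lang):
        mean, var = models[lang]
        return sum(log_gauss(f, mean, var) for f in frames)
    return max(models, key=score)
```

Language diarization then amounts to running such scoring over a sliding window and marking the points where the winning language changes.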
9

Feature Extraction with Video Summarization of Dynamic Gestures for Peruvian Sign Language Recognition

Neyra-Gutierrez, Andre, Shiguihara-Juarez, Pedro 01 September 2020 (has links)
The full text of this work is not available in the UPC Academic Repository due to restrictions imposed by the publisher. / In Peruvian Sign Language (PSL), recognition of static gestures has been proposed previously. However, to hold a conversation in sign language it is also necessary to employ dynamic gestures. We propose a method to extract a feature vector for dynamic gestures of PSL. We collect a dataset with 288 video sequences of words related to dynamic gestures and define a workflow to process the keypoints of the hands, obtaining a feature vector for each video sequence with the support of a video summarization technique. We employ 9 neural networks to test the method, achieving an average accuracy of between 80% and 90% using 10-fold cross-validation.
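The video-summarization step can be illustrated with a simple change-based keyframe selector: keep the first frame plus the k-1 frames that differ most from their predecessor. The technique in the paper is more elaborate; this is only a sketch.

```python
def keyframes(frames, k):
    """Indices of the first frame plus the k-1 frames with the
    largest absolute change from the previous frame."""
    diffs = [(sum(abs(a - b) for a, b in zip(frames[i], frames[i - 1])), i)
             for i in range(1, len(frames))]
    chosen = {0} | {i for _, i in sorted(diffs, reverse=True)[:k - 1]}
    return sorted(chosen)
```

The keypoint features of the selected frames are then concatenated into one fixed-length vector per sequence.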
10

Interfaces naturais e o reconhecimento das línguas de sinais / Natural interfaces and the sign language recognition

Silva, Renato Kimura da 07 June 2013 (has links)
Interface is an intermediate layer between two faces. In the computational context, the interface exists in the interactive intermediation between two subjects, or between a subject and a program. Over the years, interfaces have evolved constantly: from monochromatic text lines, through the mouse and the exploratory concept of graphical interfaces, to the recent natural interfaces, which are ubiquitous and aim at interactive transparency. With the new interfaces the user interacts with the computer through the body; it is no longer necessary to learn the interface, and its use is more intuitive, with recognition of voice, face and gesture. This technological advance meets basic human needs such as communication, and with the evolution of devices and interfaces it becomes feasible to conceive new technologies that benefit people in different spheres. The contribution of this work lies in understanding the technical scenario that makes it possible to conceive natural interfaces for the recognition of the signs of Sign Languages and a considerable part of their grammar. To that end, this research was first grounded in the study of the development of computer interfaces and their close relationship with videogames, drawing on the contributions of authors such as Pierre Lévy, Sherry Turkle, Janet Murray and Louise Poissant. We then turn to authors such as William Stokoe, Scott Liddell, Ray Birdwhistell, Lucia Santaella and Winfried Nöth on general and specific themes spanning the multidisciplinarity of Sign Languages. Finally, a survey was made of the state of the art of natural interfaces for Sign Language recognition, together with a study of notable research related to the topic, presenting possible future paths to be followed by new multidisciplinary lines of research.
