11

Sistema de reconhecimento automático de Língua Brasileira de Sinais / Automatic Recognition System of Brazilian Sign Language

Beatriz Tomazela Teodoro 23 October 2015 (has links)
O reconhecimento de língua de sinais é uma importante área de pesquisa que tem como objetivo atenuar os obstáculos impostos no dia a dia das pessoas surdas e/ou com deficiência auditiva e aumentar a integração destas pessoas na sociedade majoritariamente ouvinte em que vivemos. Baseado nisso, esta dissertação de mestrado propõe o desenvolvimento de um sistema de informação para o reconhecimento automático de Língua Brasileira de Sinais (LIBRAS), que tem como objetivo simplificar a comunicação entre surdos conversando em LIBRAS e ouvintes que não conheçam esta língua de sinais. O reconhecimento é realizado por meio do processamento de sequências de imagens digitais (vídeos) de pessoas se comunicando em LIBRAS, sem o uso de luvas coloridas e/ou luvas de dados e sensores ou a exigência de gravações de alta qualidade em laboratórios com ambientes controlados, focando em sinais que utilizam apenas as mãos. Dada a grande dificuldade de criação de um sistema com este propósito, foi utilizada uma abordagem para o seu desenvolvimento por meio da divisão em etapas. Considera-se que todas as etapas do sistema proposto são contribuições para trabalhos futuros da área de reconhecimento de sinais, além de poderem contribuir para outros tipos de trabalhos que envolvam processamento de imagens, segmentação de pele humana, rastreamento de objetos, entre outros. Para atingir o objetivo proposto foram desenvolvidas uma ferramenta para segmentar sequências de imagens relacionadas à LIBRAS e uma ferramenta para identificar sinais dinâmicos nas sequências de imagens relacionadas à LIBRAS e traduzi-los para o português. Além disso, também foi construído um banco de imagens de 30 palavras básicas escolhidas por uma especialista em LIBRAS, sem a utilização de luvas coloridas, laboratórios com ambientes controlados e/ou imposição de exigências na vestimenta dos indivíduos que executaram os sinais. O segmentador implementado e utilizado neste trabalho atingiu uma taxa média de acurácia de 99,02% e um índice overlap de 0,61, a partir de um conjunto de 180 frames pré-processados extraídos de 18 vídeos gravados para a construção do banco de imagens. O algoritmo foi capaz de segmentar pouco mais de 70% das amostras. Quanto à acurácia para o reconhecimento das palavras, o sistema proposto atingiu 100% de acerto para reconhecer as 422 amostras de palavras do banco de imagens construído, as quais foram segmentadas a partir da combinação da técnica de distância de edição e um esquema de votação com um classificador binário para realizar o reconhecimento, atingindo assim, o objetivo proposto neste trabalho com êxito. / Sign language recognition is an important research area that aims to mitigate the obstacles in the daily lives of people who are deaf and/or hard of hearing and to increase their integration in the majority-hearing society in which we live. Based on this, this master's dissertation proposes the development of an information system for the automatic recognition of Brazilian Sign Language (BSL), which aims to simplify communication between deaf people conversing in BSL and hearing people who do not know this sign language. Recognition is performed by processing digital image sequences (videos) of people communicating in BSL, without colored gloves, data gloves and sensors, or high-quality recordings in laboratories with controlled environments, focusing on signs that use only the hands.
Given the great difficulty of building a system for this purpose, its development was divided into stages. All stages of the proposed system are considered contributions to future work in sign language recognition, and they may also benefit other work involving image processing, human skin segmentation, object tracking, and related problems. To achieve the proposed objective, we developed a tool to segment image sequences related to BSL and a tool to identify dynamic signs in those sequences and translate them into Portuguese. In addition, an image database of 30 basic words chosen by a BSL expert was built, without colored gloves, controlled laboratory environments, or requirements on the clothing of the individuals who performed the signs. The segmentation algorithm implemented and used in this study achieved an average accuracy of 99.02% and an overlap index of 0.61 on a set of 180 preprocessed frames extracted from 18 videos recorded for the construction of the database, and it was able to segment just over 70% of the samples. Regarding word recognition accuracy, the proposed system correctly recognized 100% of the 422 word samples from the constructed database (those that were successfully segmented), using a combination of the edit distance technique and a voting scheme with a binary classifier, thus successfully achieving the objective of this work.
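As a rough illustration of the word-recognition idea described above (edit distance over symbol sequences combined with voting), here is a minimal Python sketch. It is not the dissertation's implementation: the symbolic encoding of hand configurations, the template set, and the plain majority vote are hypothetical stand-ins for the binary-classifier voting scheme the abstract describes.

    # Minimal sketch: classify a sign, encoded as a sequence of discrete
    # hand-shape symbols, by edit distance to labelled templates plus a
    # simple majority vote. Illustrative only -- not the thesis code.

    def edit_distance(a, b):
        # Classic dynamic-programming Levenshtein distance.
        dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
        for i in range(len(a) + 1):
            dp[i][0] = i
        for j in range(len(b) + 1):
            dp[0][j] = j
        for i in range(1, len(a) + 1):
            for j in range(1, len(b) + 1):
                cost = 0 if a[i - 1] == b[j - 1] else 1
                dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                               dp[i][j - 1] + 1,        # insertion
                               dp[i - 1][j - 1] + cost)  # substitution
        return dp[len(a)][len(b)]

    def recognize(sample, templates, k=5):
        # templates: list of (symbol_sequence, word_label) pairs.
        # The k closest templates vote on the output word.
        ranked = sorted(templates, key=lambda t: edit_distance(sample, t[0]))
        votes = {}
        for _, word in ranked[:k]:
            votes[word] = votes.get(word, 0) + 1
        return max(votes, key=votes.get)

    # Hypothetical usage: 'A', 'B', ... stand for quantized hand configurations.
    templates = [("AAB", "casa"), ("AAC", "casa"), ("BBC", "obrigado"), ("BBD", "obrigado")]
    print(recognize("AAB", templates, k=3))

In the actual system a binary classifier takes part in the voting; the plain majority vote above merely stands in for that step.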
12

Segmental discriminative analysis for American Sign Language recognition and verification

Yin, Pei 06 April 2010 (has links)
This dissertation presents segmental discriminative analysis techniques for American Sign Language (ASL) recognition and verification. ASL recognition is a sequence classification problem. One of the most successful techniques for recognizing ASL is the hidden Markov model (HMM) and its variants. This dissertation addresses two problems in sign recognition by HMMs. The first is discriminative feature selection for temporally-correlated data. Temporal correlation in sequences often causes difficulties in feature selection. To mitigate this problem, this dissertation proposes segmentally-boosted HMMs (SBHMMs), which construct the state-optimized features in a segmental and discriminative manner. The second problem is the decomposition of ASL signs for efficient and accurate recognition. For this problem, this dissertation proposes discriminative state-space clustering (DISC), a data-driven method of automatically extracting sub-sign units by state-tying from the results of feature selection. DISC and SBHMMs can jointly search for discriminative feature sets and representation units of ASL recognition. ASL verification, which determines whether an input signing sequence matches a pre-defined phrase, shares similarities with ASL recognition, but it has more prior knowledge and a higher expectation of accuracy. Therefore, ASL verification requires additional discriminative analysis not only in utilizing prior knowledge but also in actively selecting a set of phrases that have a high expectation of verification accuracy in the service of improving the experience of users. This dissertation describes ASL verification using CopyCat, an ASL game that helps deaf children acquire language abilities at an early age. It then presents the "probe" technique which automatically searches for an optimal threshold for verification using prior knowledge and BIG, a bi-gram error-ranking predictor which efficiently selects/creates phrases that, based on the previous performance of existing verification systems, should have high verification accuracy. This work demonstrates the utility of the described technologies in a series of experiments. SBHMMs are validated in ASL phrase recognition as well as various other applications such as lip reading and speech recognition. DISC-SBHMMs consistently produce fewer errors than traditional HMMs and SBHMMs in recognizing ASL phrases using an instrumented glove. Probe achieves verification efficacy comparable to the optimum obtained from manually exhaustive search. Finally, when verifying phrases in CopyCat, BIG predicts which CopyCat phrases, even unseen in training, will have the best verification accuracy with results comparable to much more computationally intensive methods.
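To make the verification-threshold idea concrete, the following is a generic Python sketch of choosing an accept/reject threshold from scored validation attempts. It is only an illustration of threshold selection, not the dissertation's "probe" technique, and the scores and labels are hypothetical.

    import numpy as np

    # Generic sketch: pick an accept/reject threshold for phrase verification
    # from model scores on labelled validation attempts. Hypothetical data;
    # not the dissertation's probe method.

    def best_threshold(scores, labels):
        # scores: per-attempt log-likelihoods; labels: 1 = correct signing, 0 = incorrect.
        scores = np.asarray(scores, dtype=float)
        labels = np.asarray(labels, dtype=int)
        best_t, best_acc = None, -1.0
        for t in np.unique(scores):
            pred = (scores >= t).astype(int)   # accept if the score clears the threshold
            acc = (pred == labels).mean()
            if acc > best_acc:
                best_t, best_acc = t, acc
        return best_t, best_acc

    scores = [-12.1, -8.3, -7.9, -15.4, -6.2, -14.0]
    labels = [0, 1, 1, 0, 1, 0]
    t, acc = best_threshold(scores, labels)
    print(f"threshold={t:.1f} accuracy={acc:.2f}")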
13

The Efficacy of the Eigenvector Approach to South African Sign Language Identification

Segers, Vaughn Mackman January 2010 (has links)
Master of Science / The communication barriers between the deaf and hearing communities mean that interaction between them is kept to a minimum. The South African Sign Language research group, Integration of Signed and Verbal Communication: South African Sign Language Recognition and Animation (SASL), at the University of the Western Cape aims to create technologies to bridge this communication gap. In this thesis we address whole-hand gesture recognition. We demonstrate a method to identify South African Sign Language classifiers using an eigenvector approach. The classifiers researched in this thesis are based on those outlined by the Thibologa Sign Language Institute for SASL. Gesture recognition is achieved in real time. By utilising a pre-processing method for image registration, we are able to increase the recognition rates of the eigenvector approach.
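The eigenvector approach referred to here follows the general principal-component ("eigenface"-style) recipe: project images onto the leading eigenvectors of the training set and classify in that reduced space. The NumPy sketch below illustrates only that general recipe; the image size, component count, and nearest-neighbour rule are assumptions rather than the thesis's actual pipeline.

    import numpy as np

    # Eigenvector (PCA) classification sketch, illustrative only.
    # X_train: (n_samples, n_pixels) flattened gesture images; y_train: labels.

    def fit_eigenspace(X_train, n_components=20):
        mean = X_train.mean(axis=0)
        centered = X_train - mean
        # Rows of Vt are the principal directions (eigenvectors of the covariance).
        _, _, Vt = np.linalg.svd(centered, full_matrices=False)
        return mean, Vt[:n_components]

    def project(X, mean, components):
        return (X - mean) @ components.T

    def classify(x, mean, components, train_proj, y_train):
        # Nearest neighbour in the reduced eigenspace.
        p = project(x.reshape(1, -1), mean, components)
        dists = np.linalg.norm(train_proj - p, axis=1)
        return y_train[int(np.argmin(dists))]

    # Hypothetical usage with random stand-in data (64x64 images, 3 classes).
    rng = np.random.default_rng(0)
    X_train = rng.random((30, 64 * 64))
    y_train = np.repeat(np.arange(3), 10)
    mean, comps = fit_eigenspace(X_train, n_components=10)
    train_proj = project(X_train, mean, comps)
    print(classify(X_train[0], mean, comps, train_proj, y_train))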
14

Real-Time Finger Spelling American Sign Language Recognition Using Deep Convolutional Neural Networks

Viswavarapu, Lokesh Kumar 12 1900 (has links)
This thesis presents the design and development of a gesture recognition system for fingerspelled American Sign Language hand gestures. We developed the solution using deep convolutional neural networks. The system uses blink detection to initiate the recognition process, convex-hull-based hand segmentation with adaptive skin-color filtering to segment the hand region, and a convolutional neural network to perform gesture recognition. An ensemble of four convolutional neural networks is trained on a dataset of 25,254 images for gesture recognition, and a head-pose-estimation feedback unit is implemented to validate the correctness of predicted gestures. The system was developed in the Python programming language with supporting libraries such as OpenCV, TensorFlow, and Dlib for the image processing and machine learning tasks, and the application can be deployed as a web application using Flask to make it operating-system independent.
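For reference, a single network of the kind described could be sketched in Keras roughly as below. The input size, layer configuration, and class count are placeholder assumptions; the thesis system additionally uses an ensemble of four networks, blink detection, and head-pose feedback, none of which appear in this sketch.

    import tensorflow as tf

    # Minimal single-CNN sketch for static fingerspelling classification.
    # Input size (64x64 grayscale) and NUM_CLASSES are placeholder assumptions;
    # the thesis system's ensemble and feedback components are not shown.
    NUM_CLASSES = 24  # static ASL letters (J and Z involve motion) -- assumption

    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(64, 64, 1)),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dropout(0.5),
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # Training would be: model.fit(train_images, train_labels, epochs=..., validation_data=...)
    model.summary()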
15

Swedish Sign Language Skills Training and Assessment / Utbildning och bedömning av svensk teckenspråksförmåga

Potrus, Dani January 2017 (has links)
Sign language is widely used around the world as a first language by people who cannot use spoken language, for example because of a disability such as a hearing impairment. The importance of learning sign language effectively, and of its applications in modern computer science, has grown in today's society, and research on sign language recognition has branched in many directions; examples include using hidden Markov models (HMMs) to train models that recognize different sign language patterns (Swedish, American, Korean, German sign language, and so on). This thesis project studies how effectively children aged 10 to 11, with no learning or other health disorders, acquire Swedish sign language skills from a simple video game, and how those skills can be assessed. In the experiment, 38 children are divided into two equally sized groups of 19, and each group plays a sign language video game. The context of the game is the same for both groups: both listen to a 3D avatar that addresses them in both spoken and sign language. The first group answers the questions posed to them by signing, whereas the other group answers by clicking on an alternative on the game screen. A week after playing, the children's sign language skills are assessed with simple questions in which they are asked to reproduce some of the signs they saw in the game. The main hypothesis of the project is that the group that answered by signing outperforms the other group in both remembering the signs and executing them correctly. A statistical hypothesis test is performed, and the main hypothesis is confirmed. Lastly, directions for future research on sign language assessment using video games are discussed in the final chapter of the thesis. / Teckenspråk används i stor grad runt om i världen som ett modersmål för dom som inte kan använda vardagligt talsspråk och utav grupper av personer som har en funktionsnedsättning (t.ex. en hörselskada). Betydelsen av effektivt lärande av teckenspråk och dess tillämpningar i modern datavetenskap har ökat i stor utsträckning i det moderna samhället, och forskning kring teckenspråklig igenkänning har spirat i många olika riktningar, ett exempel är med hjälp av statistika modeller såsom dolda markovmodeller (eng. Hidden markov models) för att träna modeller för att känna igen olika teckenspråksmönster (bland dessa ingår Svenskt teckenspråk, Amerikanskt teckenspråk, Koreanskt teckenspråk, Tyskt teckenspråk med flera). Denna rapport undersöker bedömningen och skickligheten av att använda ett enkelt teckenspråksspel som har utvecklats för att lära ut enkla Svenska teckenspråksmönster för barn i åldrarna 10 till 11 års ålder som inte har några inlärningssjukdomar eller några problem med allmän hälsa. Under projektets experiment delas 38 barn upp i två lika stora grupper om 19 i vardera grupp, där varje grupp kommer att få spela ett teckenspråksspel. Sammanhanget för spelet är detsamma för båda grupperna, där de får höra och se en tredimensionell figur (eng. 3D Avatar) tala till dom med både talsspråk och teckenspråk.
Den första gruppen spelar spelet och svarar på frågor som ges till dem med hjälp av teckenspråk, medan den andra gruppen svarar på frågor som ges till dem genom att klicka på ett av fem alternativ som finns på spelets skärm. En vecka efter att barnen har utfört experimentet med teckenspråksspelet bedöms deras teckenspråkliga färdigheter som de har fått från spelet genom att de ombeds återuppge några av de tecknena som de såg under spelets varaktighet. Rapportens hypotes är att de barn som tillhör gruppen som fick ge teckenspråk som svar till frågorna som ställdes överträffar den andra gruppen, genom att både komma ihåg tecknena och återuppge dom på korrekt sätt. En statistisk hypotesprövning utförs på denna hypotes, där denna i sin tur bekräftas. Slutligen beskrivs det i rapportens sista kapitel om framtida forskning inom teckenspråksbedömning med tv spel och deras effektivitet.
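The abstract does not state which statistical test was used to compare the two groups, so the following Python fragment is only a generic illustration of such a comparison, using a Mann-Whitney U test on hypothetical recall scores for two groups of 19 children.

    from scipy import stats

    # Illustrative comparison of sign-recall scores between the two groups
    # (signing answers vs. clicking answers). The scores and the choice of
    # test (Mann-Whitney U) are assumptions; the abstract does not specify
    # the test actually used in the thesis.
    signing_group = [7, 8, 6, 9, 7, 8, 9, 6, 7, 8, 9, 7, 8, 6, 9, 8, 7, 9, 8]
    clicking_group = [5, 4, 6, 5, 3, 6, 5, 4, 5, 6, 4, 5, 3, 5, 6, 4, 5, 4, 5]

    stat, p_value = stats.mannwhitneyu(signing_group, clicking_group,
                                       alternative="greater")
    print(f"U={stat:.1f}, one-sided p={p_value:.4f}")
    # A small p-value would support the hypothesis that the signing group
    # remembered and reproduced more signs than the clicking group.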
