1

Real-time localization of balls and hands in videos of juggling using a convolutional neural network

Åkerlund, Rasmus January 2019 (has links)
Juggling can be both a recreational activity that provides a wide variety of challenges to participants and an art form that can be performed on stage. Non-learning-based computer vision techniques, depth sensors, and accelerometers have been used in the past to augment these activities, but these solutions either require specialized hardware or only work in a very limited set of environments. In this project, a 54,000-frame video dataset of annotated juggling was created and a convolutional neural network was successfully trained to locate the balls and hands with high accuracy in a variety of environments. The network was sufficiently lightweight to provide real-time inference on CPUs. In addition, the locations of the balls and hands were recorded for thirty-six common juggling patterns, and small neural networks were trained that could categorize them almost perfectly. By building on the publicly available code, models, and datasets that this project has produced, jugglers will be able to create interactive juggling games for beginners and novel audio-visual enhancements for live performances.
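
As an illustration of the pattern-categorization step described above, the following minimal sketch (not the thesis code) shows a small fully connected network that maps a fixed window of tracked ball and hand positions to one of the thirty-six recorded patterns. The window length, number of tracked objects, and layer sizes are assumptions chosen for the example.

```python
import torch
import torch.nn as nn

WINDOW = 60          # assumed number of frames per classification window
OBJECTS = 5          # assumed: 3 balls + 2 hands, each with an (x, y) position
NUM_PATTERNS = 36    # pattern classes mentioned in the abstract

class PatternClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(WINDOW * OBJECTS * 2, 128),  # flattened (x, y) trajectories
            nn.ReLU(),
            nn.Linear(128, NUM_PATTERNS),
        )

    def forward(self, positions):
        # positions: (batch, WINDOW, OBJECTS, 2) normalized image coordinates
        return self.net(positions.flatten(1))

# Example forward pass on random data
model = PatternClassifier()
dummy = torch.rand(8, WINDOW, OBJECTS, 2)
logits = model(dummy)        # (8, 36)
pred = logits.argmax(dim=1)  # predicted pattern index per clip
```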
2

Exploiting phonological constraints for handshape recognition in sign language video

Thangali, Ashwin 22 January 2016 (has links)
The ability to recognize handshapes in signing video is essential in algorithms for sign recognition and retrieval. Handshape recognition from isolated images is, however, an insufficiently constrained problem. Many handshapes share similar 3D configurations and are indistinguishable for some hand orientations in 2D image projections. Additionally, significant differences in handshape appearance are induced by the articulated structure of the hand and by variation among signers. Linguistic rules involved in the production of signs impose strong constraints on the articulations of the hands, yet little attention has been paid to exploiting these constraints in previous work on sign recognition. Among the different classes of signs in any signed language, lexical signs constitute the prevalent class. Morphemes (meaningful units) for signs in this class involve a combination of particular handshapes, palm orientations, locations of articulation, and movement types, and are thus analyzed by many sign linguists as analogues of phonemes in spoken languages. Phonological constraints govern the ways in which phonemes combine in American Sign Language (ASL), as in other signed and spoken languages; utilizing these constraints for handshape recognition in ASL is the focus of this thesis. Handshapes in monomorphemic lexical signs are specified at the start and end of the sign. The handshape transition within a sign is constrained to involve either closing or opening of the hand (i.e., to exclusively use either folding or unfolding of the palm and one or more fingers). Furthermore, akin to allophonic variation in spoken languages, both inter- and intra-signer variation in the production of specific handshapes is observed. We propose a Bayesian network formulation that exploits handshape co-occurrence constraints while also utilizing information about allophonic variation to aid handshape recognition. We also propose a fast non-rigid image alignment method to gain improved robustness to handshape appearance variation when computing observation likelihoods in the Bayesian network. We evaluate our handshape recognition approach on a large dataset of monomorphemic lexical signs and demonstrate that leveraging linguistic constraints on handshapes improves recognition accuracy. As part of the overall project, we are collecting and preparing for dissemination a large corpus (three thousand signs from three native signers) of ASL video annotated with linguistic information such as glosses, morphological properties and variations, and the start/end handshapes associated with each sign.
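
A toy sketch of the core idea, under the assumption that the phonological constraints can be summarized as a start/end handshape co-occurrence prior combined with per-image appearance likelihoods. The thesis uses a richer Bayesian network with allophonic-variation structure; the flat table and random scores below are placeholders for illustration only.

```python
import numpy as np

N = 40  # assumed number of handshape classes

rng = np.random.default_rng(0)
prior = rng.random((N, N))
prior /= prior.sum()             # P(start, end) co-occurrence prior

lik_start = rng.random(N)        # appearance likelihoods for the start frame
lik_end = rng.random(N)          # appearance likelihoods for the end frame

# Joint posterior over (start, end) pairs, up to a normalizing constant
joint = prior * np.outer(lik_start, lik_end)
s_hat, e_hat = np.unravel_index(joint.argmax(), joint.shape)
print(f"best start handshape: {s_hat}, best end handshape: {e_hat}")
```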
3

Enhancing Object Detection in Infrared Videos through Temporal and Spatial Information

Jinke, Shi January 2023 (has links)
Object detection is a prominent area of research within computer vision. While object detection in infrared videos holds great practical significance, the majority of mainstream methods are primarily designed for visible-light datasets. This thesis investigates how object detection accuracy on infrared datasets can be improved by leveraging temporal and spatial information. The Memory Enhanced Global-Local Aggregation (MEGA) framework is chosen as a baseline due to its capability to incorporate both forms of information. Based on initial visualization results from the infrared dataset CAMEL, the noisy characteristics of the dataset are explored further. Comprehensive experiments examining the impact of temporal and spatial information reveal that spatial information has a detrimental effect, while temporal information can be used to improve model performance. Moreover, an innovative Dual Frame Average Aggregation (DFAA) framework is introduced to address challenges related to object overlap and appearance change. This framework processes two global frames in parallel and in an organized manner, showing an improvement over the original configuration.
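
The abstract does not give implementation details of DFAA, but a plausible minimal sketch, assuming the module attends the current frame's proposal features to each of two global frames in parallel and averages the two aggregated results, could look like the following. The attention form and feature sizes are guesses, not the thesis implementation (which builds on the MEGA codebase).

```python
import torch
import torch.nn as nn

class DualFrameAverageAggregation(nn.Module):
    def __init__(self, dim=256, heads=4):
        super().__init__()
        # one attention module per global frame, run in parallel
        self.attn_a = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.attn_b = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, key_feats, global_a, global_b):
        # key_feats: (batch, n_proposals, dim) features of the current frame
        # global_a, global_b: (batch, m, dim) features of the two global frames
        out_a, _ = self.attn_a(key_feats, global_a, global_a)
        out_b, _ = self.attn_b(key_feats, global_b, global_b)
        return (out_a + out_b) / 2   # average the two aggregated views

# Example with random features
dfaa = DualFrameAverageAggregation()
cur = torch.randn(2, 100, 256)
ga, gb = torch.randn(2, 100, 256), torch.randn(2, 100, 256)
fused = dfaa(cur, ga, gb)            # (2, 100, 256)
```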
4

Automatic prediction of emotions induced by movies / Reconnaissance automatique des émotions induites par les films

Baveye, Yoann 12 November 2015 (has links)
Never before have movies been so easily accessible to viewers, who can enjoy anywhere their almost unlimited potential for inducing emotions. Knowing in advance the emotions that a movie is likely to elicit in its viewers could therefore help improve the accuracy of content delivery, video indexing, or even summarization. However, transferring this expertise to computers is a complex task, due in part to the subjective nature of emotions. This thesis is dedicated to the automatic prediction of emotions induced by movies, based on the intrinsic properties of the audiovisual signal. To deal with this problem computationally, a video dataset annotated with the emotions induced in viewers is needed. However, existing datasets are either not public, due to copyright issues, or of very limited size and content diversity. To answer this specific need, this thesis presents the development of the LIRIS-ACCEDE dataset. Its advantages are threefold: (1) it is based on movies under Creative Commons licenses and can thus be shared without infringing copyright, (2) it is composed of 9,800 good-quality video excerpts with large content diversity, extracted from 160 feature films and short films, and (3) the 9,800 excerpts have been ranked along the induced valence and arousal axes through a pair-wise video comparison protocol run on a crowdsourcing platform. The high inter-annotator agreement shows that the annotations are consistent despite the large diversity of the raters' cultural backgrounds. Three further experiments are also introduced in this thesis. First, affective ratings were collected for a subset of the LIRIS-ACCEDE dataset in order to cross-validate the crowdsourced annotations. These ratings also made it possible to learn a Gaussian Process regression model, accounting for measurement noise, that maps the whole ranked LIRIS-ACCEDE dataset into the 2D valence-arousal affective space. Second, continuous ratings for 30 movies were collected in order to develop temporally relevant computational models. Finally, a last experiment collected continuous physiological measurements from participants watching the same 30 movies; the correlation between the physiological measurements and the continuous ratings strengthens the validity of the experimental results. Armed with this dataset, the thesis presents a computational model to infer the emotions induced by movies. The framework builds on recent advances in deep learning and takes into account the relationship between consecutive scenes. It is composed of two fine-tuned convolutional neural networks: one dedicated to the visual modality, which takes as input crops of key frames extracted from video segments, and one dedicated to the audio modality, which uses audio spectrograms. The activations of the last fully connected layer of both networks are concatenated to feed a Long Short-Term Memory recurrent neural network that learns the dependencies between consecutive video segments.
The performance obtained by the model is compared to that of a baseline similar to previous work and shows very promising results, while also reflecting the complexity of the task. The automatic prediction of emotions induced by movies remains a very challenging problem that is far from solved.
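
A minimal sketch of the two-stream architecture described above: one CNN for key-frame crops, one for audio spectrograms, with their last fully connected activations concatenated and fed to an LSTM over consecutive video segments. The tiny backbones, feature sizes, and two-dimensional (valence, arousal) output head are stand-ins for the fine-tuned networks used in the thesis.

```python
import torch
import torch.nn as nn

def tiny_cnn(in_ch, out_dim):
    # stand-in for a fine-tuned CNN backbone
    return nn.Sequential(
        nn.Conv2d(in_ch, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(32, out_dim),
    )

class AffectPredictor(nn.Module):
    def __init__(self, feat=64, hidden=128):
        super().__init__()
        self.visual = tiny_cnn(3, feat)   # key-frame crops (RGB)
        self.audio = tiny_cnn(1, feat)    # audio spectrograms (1 channel)
        self.lstm = nn.LSTM(2 * feat, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)  # valence and arousal scores

    def forward(self, frames, spectrograms):
        # frames: (batch, time, 3, H, W); spectrograms: (batch, time, 1, H, W)
        b, t = frames.shape[:2]
        v = self.visual(frames.flatten(0, 1)).view(b, t, -1)
        a = self.audio(spectrograms.flatten(0, 1)).view(b, t, -1)
        seq, _ = self.lstm(torch.cat([v, a], dim=-1))
        return self.head(seq)             # per-segment (valence, arousal)

model = AffectPredictor()
out = model(torch.rand(2, 8, 3, 64, 64), torch.rand(2, 8, 1, 64, 64))
print(out.shape)  # torch.Size([2, 8, 2])
```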
5

[pt] DETECÇÃO DE CONTEÚDO SENSÍVEL EM VIDEO COM APRENDIZADO PROFUNDO / [en] SENSITIVE CONTENT DETECTION IN VIDEO WITH DEEP LEARNING

PEDRO VINICIUS ALMEIDA DE FREITAS 09 June 2022 (has links)
Massive amounts of video are uploaded to video-hosting platforms every minute. This volume of data presents a challenge in controlling the type of content uploaded to these services, since the platforms are responsible for any sensitive media uploaded by their users. There has been an abundance of research on methods for automatic detection of sensitive content. In this dissertation, we define sensitive content as sex, extreme physical violence, gore, or any scene potentially disturbing to the viewer. We present a sensitive-video dataset for binary video classification (whether or not a video contains sensitive content), containing 127 thousand annotated videos, each with its extracted audio and visual embeddings. We also trained and evaluated four baseline models for the task of sensitive content detection in video. The best-performing model achieved a weighted F2-score of 99 percent on our test subset and 88.83 percent on the Pornography-2k dataset.
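
For reference, the reported metric can be computed as follows; the F2-score weights recall more heavily than precision (beta = 2), which suits sensitive-content filtering, where missing a positive is costlier than a false alarm. The labels below are toy data, not results from the dissertation.

```python
from sklearn.metrics import fbeta_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]   # 1 = sensitive, 0 = safe
y_pred = [1, 0, 1, 0, 0, 0, 1, 1]

f2 = fbeta_score(y_true, y_pred, beta=2, average="weighted")
print(f"weighted F2-score: {f2:.3f}")
```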
