  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

AnnotEasy: A gesture and speech-to-text based video annotation tool for note taking in pre-recorded lectures in higher education

Uggerud, Nils January 2021 (has links)
This paper investigates students’ attitudes towards using gestures and speech-to-text (GaST) to take notes while watching recorded lectures. A literature review on video-based learning, an expert interview, and a background survey of students’ note-taking habits led to the creation of the prototype AnnotEasy, a tool that allows students to use GaST to take notes. AnnotEasy was tested in three iterations with 18 students and was updated after each iteration. The students watched a five-minute lecture and took notes using AnnotEasy. The participants’ perceived ease of use (PEU) and perceived usefulness (PU) were evaluated based on the Technology Acceptance Model (TAM), and their general attitudes were explored in semi-structured interviews. The results showed that students rated AnnotEasy highly for both PEU and PU, and that they were mainly positive towards taking notes using GaST. Further, the results suggest that AnnotEasy could facilitate the process of structuring a lecture’s content. Lastly, even though students had positive attitudes towards using speech to create notes, observations showed that this was problematic when users attempted to create longer notes, indicating that speech may be more beneficial for taking shorter notes.
32

A Meta-Analysis of Video Based Interventions in Adult Mental Health

Montes, Lauretta Kaye 01 January 2018 (has links)
Symptoms of mental illness such as anxiety and depression diminish functioning, cause distress, and create an economic burden for individuals and society. This meta-analysis was designed to evaluate the effectiveness of video-based interventions (VBIs) for the treatment of adults in mental health settings. VBIs comprise four different ways of using video in mental health therapy: video modeling, video exposure, video feedback, and videos used for psychoeducation. Bandura's social learning theory, Beck's cognitive theory, and Dowrick's theory of feedforward learning form the theoretical framework for understanding how VBIs work. The research questions were: (a) What is the range of effect sizes for VBI in mental health treatment of adults? (b) What is the mean standardized effect size for VBI in this context? and (c) What categorical variables, such as type of mental health issue or specific VBI application, moderate the effect of VBI? A comprehensive literature search strategy and coding plan for between-group studies was developed; the overall effect size for the 60 included studies was 0.34. A meta-regression was conducted; although the results were not significant, it is possible that type of VBI may be a moderator. Subgroup analyses by mental health outcome found the largest effect size, 0.48, for caregiving attitude and the smallest, 0.21, for depression. Although the results of this meta-analysis were mixed, the study provides preliminary support for VBI use with adults as an evidence-based treatment. VBIs can contribute to positive social change by improving mental health treatment for the benefit of individuals, families, and society.
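The standardized effect sizes pooled in a meta-analysis of between-group studies like this one are commonly Hedges' g values. As an illustrative sketch only (the thesis does not publish its code, and the study numbers below are hypothetical):

```python
import math

def hedges_g(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Standardized mean difference (Cohen's d) with the
    small-sample correction factor J applied (Hedges' g)."""
    # Pooled standard deviation of the two groups
    sp = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                   / (n_t + n_c - 2))
    d = (mean_t - mean_c) / sp
    j = 1 - 3 / (4 * (n_t + n_c) - 9)  # correction for small samples
    return j * d

# Hypothetical VBI study: treatment group improves 2 points more than
# control, SD 4 in both groups, 30 participants per group
print(round(hedges_g(12.0, 10.0, 4.0, 4.0, 30, 30), 3))  # → 0.494
```

Per-study g values would then be pooled (e.g. under a random-effects model) to obtain an overall estimate such as the 0.34 reported above.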
33

If a Picture is Worth a Thousand Words, What is a Video Worth? The Impact of Video on Interaction and Reflection in the Post-Observation Conference

Green, Jennifer J. 10 December 2018 (has links)
No description available.
34

Humor i musikklassrummet : En diskursanalytisk studie om hur musiklärare använder humor som retorisk resurs

Paulin, Tove January 2023 (has links)
The intention of this study is to examine how humor is used in the music classroom in the aesthetics program in high school. The ambition is to contribute to an increased awareness and deeper understanding of the role humor plays in the interaction between teachers and students, how it can be used as a rhetorical resource in music teaching, and what impact it has on students' conditions for learning. Data were collected through video-based observation: three music lessons with three different teachers were filmed and observed. The recordings were transcribed and analyzed thematically using a discourse-psychological approach.
The results show that the teachers’ use of humor shifts between providing positive effects, such as relaxation and de-dramatization, and functioning as discipline via ironic feedback or joking reprimands. There are also examples of how humor can function as a status symbol and as positioning in the classroom, where joking is used by students as a strategy to avoid losing face or risking failure. The discussion highlights how humor can affect students' opportunities for learning and what pedagogical implications the use of humor can entail.
35

Teaching Physical Education Skills to a Student with a Disability Through Video Modeling

Huddleston, Robin 01 June 2019 (has links)
Video modeling (VM) is a video-based intervention (VBI) that has been implemented with individuals with disabilities to teach various life and educational skills. It is a tool that allows learners to watch a target skill modeled on a pre-recorded video. The learner is able to re-watch a new skill as many times as needed, and the teacher is given the flexibility needed to work with multiple students while providing individualized instruction. The participant in this study was a 13-year-old male with a traumatic brain injury (TBI) and intellectual disability (ID). The participant was enrolled in a life skills class at his junior high school and received special education services under the classification of TBI. This study used a delayed multiple-baseline, across-skills design to examine increased consistency for completing different sports skills in physical education (PE), including a basketball chest pass, football forward pass, and soccer inside foot pass. VM was used successfully to increase task completion rates for all three sports skills. The participant was able to perform the basketball chest pass with 75% to 87.5% accuracy, and the football forward pass and soccer pass with 87.5% accuracy. Prior to the study he could only complete each skill with less than 25% accuracy. Future research is needed on larger samples to empirically demonstrate the efficacy of VM to improve PE skills for students with special needs.
36

Video-based analysis of Gait pathologies

Nguyen, Hoang Anh 12 1900 (has links)
Gait analysis has recently emerged as one of the most important medical fields due to its wide range of applications. Marker-based systems are the most favoured methods of human motion assessment and gait analysis; however, these systems require specific equipment and expertise, and are cumbersome, costly and difficult to use. Many recent computer-vision-based approaches have been developed to reduce the cost of expensive motion capture systems while ensuring high-accuracy results. In this thesis, we introduce our new low-cost gait analysis system, composed of two low-cost monocular cameras (camcorders) placed on the left and right sides of a treadmill. Each 2D left or right human skeleton model is reconstructed from each view based on dynamic color segmentation, and gait analysis is then performed on these two models. Validation against one state-of-the-art vision-based motion capture system (the Microsoft Kinect v.1) and one ground truth (with markers) was done to demonstrate the robustness and efficiency of our system. The average errors in human skeleton model estimation compared to ground truth for our method vs. the Kinect are very promising: the joint angles of the upper legs (6.29° vs. 9.68°), lower legs (7.68° vs. 11.47°), feet (6.14° vs. 13.63°), and stride lengths (6.14 cm vs. 13.63 cm) were better and more stable than those from the Kinect, while the system could maintain reasonably close accuracy to the Kinect for the upper arms (7.29° vs. 6.12°), lower arms (8.33° vs. 8.04°), and torso (8.69° vs. 6.47°). Based on the skeleton model obtained by each method, we performed a symmetry study on various joints (elbow, knee and ankle) using each method on two different subjects to see which method can distinguish the symmetry/asymmetry characteristics of gaits more efficiently. In our test, our system reported a maximum knee angle of 8.97° and 13.86° for normal and asymmetric walks respectively, while the Kinect gave 10.58° and 11.94°. Compared to the ground truth, 7.64° and 14.34°, our system showed more accuracy and discriminative power between the two cases.
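The joint angles compared above can be computed from any 2D skeleton model as the angle at a joint between its two adjacent segments. A minimal sketch of that geometric step, not the thesis's actual pipeline (coordinates below are illustrative):

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by 2D points a-b-c,
    e.g. hip-knee-ankle for the knee angle."""
    v1 = (a[0] - b[0], a[1] - b[1])  # segment b -> a
    v2 = (c[0] - b[0], c[1] - b[1])  # segment b -> c
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return math.degrees(math.acos(dot / (math.hypot(*v1) * math.hypot(*v2))))

# A fully straightened leg: hip, knee, ankle on one vertical line
print(joint_angle((0, 0), (0, 1), (0, 2)))  # → 180.0
```

Comparing such angles between the left and right views over a gait cycle is one way to quantify the symmetry/asymmetry characteristic the study discusses.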
37

Video-Based Interactive Storytelling

EDIRLEI EVERSON SOARES DE LIMA 06 March 2015 (has links)
The generation of engaging visual representations for interactive storytelling represents a key challenge for the evolution and popularization of interactive narratives. Usually, interactive storytelling systems adopt computer graphics to represent the virtual story worlds, which facilitates the dynamic generation of visual content. Although animation is a powerful storytelling medium, live-action films still attract more attention from the general public. In addition, despite the recent progress in graphics rendering and the wide-scale acceptance of 3D animation in films, the visual quality of video is still far superior to that of real-time generated computer graphics.
In the present thesis, we propose a new approach to creating more engaging interactive narratives, termed Video-Based Interactive Storytelling, in which characters and virtual environments are replaced by real actors and settings without losing the logical structure of the narrative. This work presents a general model for video-based interactive storytelling systems, including the authorial aspects of the production phases and the technical aspects of the algorithms responsible for the real-time generation of interactive narratives using video compositing techniques.
38

iPalace Video Channel: The Service and Its Implementation

楊子諒, Yang, Tzu Liang Unknown Date (has links)
The iPalace Video Channel is a video-based website that provides people around the world a new video-broadcasting service on the Internet, introducing the relics of the National Palace Museum in Taiwan. The iPalace Video Channel is designed to meet a high standard of service quality, so that users can watch videos without delay, and to cope with potential peak demands from around the world. In this paper, we first establish the iPalace Video Channel together with an elastic and reliable load-balancing mechanism capable of handling the potentially huge peak demand for video services. Second, we conduct an experiment by sampling the video-based service in practice and evaluating the presented approach online. Third, from the findings of this evaluation, we establish a two-phase service system for delivering video services with a high standard of service quality. Finally, we discuss the formal queuing model of video services, from which settings of the queuing system that satisfy acceptable service quality can be calculated under various computer and network environments.
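Queuing-model sizing of the kind described above typically rests on standard M/M/c results. A small sketch of how acceptable settings could be derived from arrival and service rates; the rates below are made up, and this is not the thesis's code:

```python
import math

def erlang_c(c, lam, mu):
    """Probability that an arriving request must wait in an
    M/M/c queue (Erlang C); lam = arrival rate, mu = per-server rate."""
    a = lam / mu                # offered load in Erlangs
    rho = a / c                 # server utilization; must be < 1 for stability
    served = sum(a**k / math.factorial(k) for k in range(c))
    waiting = a**c / math.factorial(c) / (1 - rho)
    return waiting / (served + waiting)

def mean_wait(c, lam, mu):
    """Mean queueing delay Wq (seconds) before service begins."""
    return erlang_c(c, lam, mu) / (c * mu - lam)

# Hypothetical sizing: 10 streaming workers, 8 requests/s arriving,
# each worker completing 1 request/s
print(round(mean_wait(10, 8.0, 1.0), 3))  # ≈ 0.205
```

Sweeping `c` upward until `mean_wait` drops below a target delay is one way such a model yields "settings that satisfy acceptable service quality" for a given environment.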
39

Example-based Rendering of Textural Phenomena

Kwatra, Vivek 19 July 2005 (has links)
This thesis explores synthesis by example as a paradigm for rendering real-world phenomena. In particular, phenomena that can be visually described as texture are considered. We exploit, for synthesis, the self-repeating nature of the visual elements constituting these texture exemplars. Techniques for unconstrained as well as constrained/controllable synthesis of both image and video textures are presented. For unconstrained synthesis, we present two robust techniques that can perform spatio-temporal extension, editing, and merging of image as well as video textures. In the first technique, large patches of input texture are automatically aligned and seamlessly stitched together to generate realistic-looking images and videos. The second technique is based on iterative optimization of a global energy function that measures the quality of the synthesized texture with respect to the given input exemplar. We also present a technique for controllable texture synthesis. In particular, it allows the generation of motion-controlled texture animations that follow a specified flow field. Animations synthesized in this fashion maintain structural properties such as local shape, size, and orientation of the input texture even as they move according to the specified flow. We cast this problem in an optimization framework that tries to simultaneously satisfy the two (potentially competing) objectives of similarity to the input texture and consistency with the flow field. This optimization is a simple extension of the approach used for unconstrained texture synthesis. A general framework for example-based synthesis and rendering is also presented. This framework provides a design space for constructing example-based rendering algorithms. The goal of such algorithms is to use texture exemplars to render animations in which certain behavioral characteristics need to be controlled. Our motion-controlled texture synthesis technique is an instantiation of this framework where the characteristic being controlled is motion, represented as a flow field.
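The global energy function described above can be illustrated, in a heavily simplified 1D form, as the summed squared distance from each synthesized neighborhood to its nearest neighborhood in the exemplar; optimization would iteratively lower this value. This sketch is not the thesis's implementation:

```python
import numpy as np

def texture_energy(synth, exemplar, w=3):
    """Simplified 1D texture energy: for every width-w window of the
    synthesized signal, the squared distance to its best-matching
    window in the exemplar, summed over all windows."""
    def windows(x):
        return np.stack([x[i:i + w] for i in range(len(x) - w + 1)])
    s, e = windows(synth), windows(exemplar)
    # Pairwise squared distances between synthesized and exemplar windows
    d = ((s[:, None, :] - e[None, :, :]) ** 2).sum(-1)
    return d.min(axis=1).sum()  # each window matched to its nearest exemplar

exemplar = np.tile([0.0, 1.0], 8)          # a simple repeating texture
print(texture_energy(np.tile([0.0, 1.0], 4), exemplar))  # exact repeat → 0.0
```

A synthesized signal that deviates from the exemplar's repeating structure (e.g. a constant signal) yields a positive energy, which the optimization would reduce by re-matching and blending neighborhoods.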
40

Automated video-based measurement of eye closure using a remote camera for detecting drowsiness and behavioural microsleeps

Malla, Amol Man January 2008 (has links)
A device capable of continuously monitoring an individual’s levels of alertness in real-time is highly desirable for preventing drowsiness and lapse related accidents. This thesis presents the development of a non-intrusive and light-insensitive video-based system that uses computer-vision methods to localize face, eyes, and eyelids positions to measure level of eye closure within an image, which, in turn, can be used to identify visible facial signs associated with drowsiness and behavioural microsleeps. The system was developed to be non-intrusive and light-insensitive to make it practical and end-user compliant. To non-intrusively monitor the subject without constraining their movement, the video was collected by placing a camera, a near-infrared (NIR) illumination source, and an NIR-pass optical filter at an eye-to-camera distance of 60 cm from the subject. The NIR-illumination source and filter make the system insensitive to lighting conditions, allowing it to operate in both ambient light and complete darkness without visually distracting the subject. To determine the image characteristics and to quantitatively evaluate the developed methods, reference videos of nine subjects were recorded under four different lighting conditions with the subjects exhibiting several levels of eye closure, head orientations, and eye gaze. For each subject, a set of 66 frontal face reference images was selected and manually annotated with multiple face and eye features. The eye-closure measurement system was developed using a top-down passive feature-detection approach, in which the face region of interest (fROI), eye regions of interests (eROIs), eyes, and eyelid positions were sequentially localized. The fROI was localized using an existing Haar-object detection algorithm. In addition, a Kalman filter was used to stabilize and track the fROI in the video. The left and the right eROIs were localized by scaling the fROI with corresponding proportional anthropometric constants. 
The position of an eye within each eROI was detected by applying a template-matching method in which a pre-formed eye-template image was cross-correlated with the sub-images derived from the eROI. Once the eye position was determined, the positions of the upper and lower eyelids were detected using a vertical integral-projection of the eROI. The detected positions of the eyelids were then used to measure eye closure. The detection of fROI and eROI was very reliable for frontal-face images, which was considered sufficient for an alertness monitoring system, as subjects are most likely facing straight ahead when they are drowsy or about to have a microsleep. Estimation of the y-coordinates of the eye, upper eyelid, and lower eyelid positions showed average median errors of 1.7, 1.4, and 2.1 pixels and average 90th percentile (worst-case) errors of 3.2, 2.7, and 6.9 pixels, respectively (1 pixel ≈ 1.3 mm in reference images). The average height of a fully open eye in the reference database was 14.2 pixels. The average median and 90th percentile errors of the eye and eyelid detection methods were reasonably low, except for the 90th percentile error of the lower eyelid detection method. Poor estimation of the lower eyelid was the primary limitation for accurate eye-closure measurement. The median error of fractional eye-closure (EC) estimation (i.e., the ratio of the closed portion of an eye to its average height when fully open) was 0.15, which was sufficient to distinguish between the eyes being fully open, half closed, or fully closed. However, compounding errors in the facial-feature detection methods resulted in a 90th percentile EC estimation error of 0.42, which was too high to reliably determine the extent of eye closure. The eye-closure measurement system was relatively robust to variation in facial features except for spectacles, whose reflections can saturate much of the eye image.
Therefore, in its current state, the eye-closure measurement system requires further development before it could be used with confidence for monitoring drowsiness and detecting microsleeps.
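The fractional eye-closure measure EC defined above can be sketched as follows; the eyelid positions are assumed to be y-coordinates in image space, and the numeric values are illustrative, not taken from the thesis:

```python
def eye_closure(upper_y, lower_y, open_height):
    """Fractional eye closure EC: the closed portion of the eye as a
    fraction of its fully-open height (0 = fully open, 1 = fully closed)."""
    gap = max(lower_y - upper_y, 0.0)     # current eyelid gap in pixels
    ec = 1.0 - gap / open_height
    return min(max(ec, 0.0), 1.0)         # clamp to [0, 1]

# With a fully-open reference height of 14 px, an eyelid gap of 7 px
# gives EC = 0.5 (half closed)
print(eye_closure(upper_y=20.0, lower_y=27.0, open_height=14.0))  # → 0.5
```

With a median EC error of 0.15, thresholds near 0.25 and 0.75 would still separate open, half-closed, and closed states; the 0.42 worst-case error reported above is what makes finer gradations unreliable.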
