  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
71

Students See, Students Do?: Inducing a Peer Norm Effect for Oral Source Citations

Buerkle, C. Wesley, Gearhart, Christopher C. 03 April 2017 (has links)
Video modeling was used to establish descriptive norms for proper oral citation performance in a general education public speaking class (N = 191). Three conditions—a control, a peer model video, and a nonpeer model video—were compared for influence on proper citation usage and completeness. Results indicated that students viewing any video performed more complete citations than students not viewing a video. Results were mixed when comparing the effects of the peer model video against the nonpeer model video. Findings suggest norms for proper oral citation behavior can be established through modeling videos.
72

iMath - Using Video Modeling Via iPads to Teach Mathematics Skills to Struggling Students

Steinberg, Melissa 16 June 2020 (has links)
There is a growing body of research suggesting that video-based interventions, such as video modeling and video prompting, are effective tools for teaching academic skills to students with disabilities. This study used a single-subject, multiple-baseline-across-subjects design to evaluate whether a video-prompting intervention could help second-grade students who had been identified by their teachers as "struggling" in mathematics to better solve multiplication story problems. Five second-grade students (one female and four males), ages 7 to 8, viewed intervention videos on an iPad that modeled how to solve multiplication word problems. To evaluate the effectiveness of the videos, a rubric was used as the primary measure to assess the domains of problem solving, communicating, and representing with numbers. Based on visual analysis between baseline and intervention, there was a functional relationship between the introduction of the intervention and performance on the math problems. In addition, performance between intervention and maintenance appeared stable for all participants. These results indicate that technology can be used to implement interventions for struggling learners and may be utilized in regular classrooms. Results also demonstrate that video modeling can be a useful instructional tool for helping many individuals, not just those with an identified disability, learn complex tasks. Implementing video models in a classroom setting could enable teachers to consistently provide interventions to students who can work more independently, freeing teachers to work one-on-one or in small groups with their students.
73

A Meta-Analysis of Video Based Interventions in Adult Mental Health

Montes, Lauretta Kaye 01 January 2018 (has links)
Symptoms of mental illness such as anxiety and depression diminish functioning, cause distress, and create an economic burden to individuals and society. This meta-analysis was designed to evaluate the effectiveness of video based interventions (VBIs) for the treatment of adults in mental health settings. VBIs comprise four different ways of using video in mental health therapy, including video modeling, video exposure, video feedback, and videos used for psychoeducation. Bandura's social learning theory, Beck's cognitive theory, and Dowrick's theory of feedforward learning form the theoretical framework for understanding how VBIs work. The research questions were: (a) what is the range of effect sizes for VBI in mental health treatment of adults? (b) what is the mean standardized effect size for VBI in this context? and (c) what categorical variables, such as type of mental health issue or specific VBI application, moderate the effect of VBI? A comprehensive literature search strategy and coding plan for between-group studies was developed; the overall effect size for the 60 included studies equaled 0.34. A meta-regression was conducted; although the results were not significant, it is possible that type of VBI may be a moderator. Subgroup analyses by mental health outcome found the largest effect size, 0.48, for caregiving attitude and the smallest effect size, 0.21, for depression. Although the results of this meta-analysis were mixed, this study provides preliminary support for VBI use with adults as an evidence-based treatment. VBIs can contribute to positive social change by improving mental health treatment for the benefit of individuals, families, and society.
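As an illustration of the pooling behind such a meta-analysis, the following is a minimal sketch of inverse-variance (fixed-effect) pooling of standardized mean differences. The per-study values are invented for illustration and are not taken from the thesis, which used its own coding plan over 60 between-group studies:

```python
import math

# Hypothetical per-study data: standardized mean differences (d) and their
# sampling variances. These numbers are illustrative only.
studies = [
    {"d": 0.48, "var": 0.04},  # e.g. a caregiving-attitude outcome
    {"d": 0.21, "var": 0.03},  # e.g. a depression outcome
    {"d": 0.34, "var": 0.05},
]

def fixed_effect(studies):
    """Inverse-variance weighted mean effect size (fixed-effect model)."""
    weights = [1.0 / s["var"] for s in studies]
    pooled = sum(w * s["d"] for w, s in zip(weights, studies)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))  # standard error of the pooled estimate
    return pooled, se

pooled, se = fixed_effect(studies)
print(f"pooled d = {pooled:.2f}, 95% CI = "
      f"[{pooled - 1.96 * se:.2f}, {pooled + 1.96 * se:.2f}]")
```

A real meta-analysis of this kind would more likely use a random-effects model (adding a between-study variance component) and moderator analyses, as the thesis describes; the fixed-effect version above only shows the basic weighting idea.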
74

The effect of correct and incorrect video models on the acquisition of skills taught in behavioral parent training

Herrera, Elizabeth A. 01 January 2016 (has links)
Modeling, a process by which a learned behavior is observed and imitated, has been demonstrated to be effective in the acquisition of skills. Several factors appear to enhance or detract from the effect a model has on subsequent observer behavior, and contradictory findings have been reported based on the type of model used. A less explored factor is the impact of correct and incorrect models, as often employed in parent training packages, when teaching skills that are to be acquired by the observer. To investigate further, the current study compared the effectiveness of correct and incorrect video models using an empirically supported treatment for child behavior problems: The Incredible Years. Using a fairly minimal and mostly remote intervention, 5 out of 6 participants improved from baseline sessions. Several areas of future research are presented for modeling and parent training to assess the effectiveness of model types and treatment programs.
75

Autonomic Responses During Animated Avatar Video Modeling Instruction of Social Emotional Learning to Students With ADHD: A Mixed Methods Study

Rhodes, Jesse D 12 December 2022 (has links)
For those with attention deficit hyperactivity disorder (ADHD), social interactions involving high levels of face-to-face contact can raise stress levels and emotional dysregulation. Using animated avatar video models may mitigate potential emotional dysregulation while these populations learn social skills. This study examined autonomic data of adolescents aged 7-13 diagnosed with ADHD (n = 5) during avatar animated video modeling (AAVM) of social and emotional skills. This was a replication study with the addition of biofeedback data collection and a change of population. Participants were given three Nearpod training modules with AAVM and multiple-choice quizzes on self-awareness, social awareness, and relationship skills. Using a multiple baseline design, we collected Social Emotional Learning (SEL) scores at baseline and during each phase of intervention. During all phases, we collected heart rate and analyzed heart rate variability (HRV) metrics: standard deviation of N-N intervals (SDNN), high frequency (HF), low frequency (LF), and the HF/LF ratio. We also collected real-time somatic data: muscle tension (EMG), skin conductance (SC), and skin temperature. The somatic autonomic data were not analyzed as part of this thesis. Results suggest that persons with ADHD may benefit from instruction delivered through avatar animated video modeling, based on patterns in autonomic data, increases in scores on the targeted skills taught during instruction, and participants' expressions about this method of learning. In future research and practice, the population for this content could be narrowed to ages 8-12. Reliable but smaller and less obtrusive biofeedback devices are currently available, and having several accessible options is recommended.
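Of the HRV metrics listed, SDNN is the simplest to illustrate: it is just the standard deviation of the beat-to-beat (N-N) intervals. A minimal sketch with invented interval values (not data from the study):

```python
import statistics

def sdnn(nn_intervals_ms):
    """Standard deviation of N-N (beat-to-beat) intervals in milliseconds,
    a common time-domain heart rate variability metric."""
    return statistics.stdev(nn_intervals_ms)

# Toy series of N-N intervals in milliseconds (illustrative values only)
nn = [812, 795, 830, 808, 790, 845, 802]
print(round(sdnn(nn), 1))
```

The frequency-domain metrics mentioned in the abstract (HF, LF, and their ratio) require spectral analysis of the interval series and are not sketched here.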
76

Teaching Physical Education Skills to a Student with a Disability Through Video Modeling

Huddleston, Robin 01 June 2019 (has links)
Video modeling (VM) is a video-based intervention (VBI) that has been implemented with individuals with disabilities to teach various life and educational skills. It is a tool that allows learners to watch a target skill modeled in a pre-recorded video. The learner is able to re-watch a new skill as many times as needed, and the teacher is given the flexibility to work with multiple students while providing individualized instruction. The participant in this study was a 13-year-old male with a traumatic brain injury (TBI) and intellectual disability (ID). The participant was enrolled in a life skills class at his junior high school and received special education services under the classification of TBI. This study used a delayed multiple-baseline, across-skills design to examine increased consistency in completing different sports skills in physical education (PE), including a basketball chest pass, football forward pass, and soccer inside-foot pass. VM was used successfully to increase task completion rates for all three sports skills. The participant was able to perform the basketball chest pass with 75% to 87.5% accuracy, and the football forward pass and soccer pass with 87.5% accuracy. Prior to the study he could complete each skill with less than 25% accuracy. Future research is needed with larger samples to empirically demonstrate the efficacy of VM for improving PE skills for students with special needs.
77

Increasing Engagement Utilizing Video Modeling and the Good Behavior Game with Students with Emotional and Behavioral Disorders

Flowers, Emily M. 05 December 2017 (has links)
No description available.
78

Using Behavioral Skills Training with Video Modeling to Improve Future Behavior Analysts’ Graphing Skills

Wallave, Geena Desiree January 2020 (has links)
Individuals who train to become behavior analysts should be able to organize, create, and display data accurately in order to make data-based decisions about the interventions being used for their clients. Behavior analysts most commonly use visual analysis of the data to continuously evaluate the relationship between the intervention and the target behavior being measured. A multiple probe design across behaviors (i.e., reversal design, alternating treatments design, and multiple baseline design) was used to evaluate the effects of behavioral skills training (BST) with video modeling on three prospective behavior analysts' single-subject design graphing skills in Microsoft Excel™. Behavioral skills training is a training package made up of multiple components; for the purpose of this study, BST included rehearsal, video modeling with instructions, and feedback. The three participants were taught remotely via Zoom how to accurately complete the steps in the graph-creation process for a reversal design, an alternating treatments design, and a multiple baseline design. Results indicate that BST with video modeling was an effective and efficient intervention for increasing the accuracy of three prospective behavior analysts' single-subject design graphing skills in Microsoft Excel™. / Applied Behavioral Analysis
79

Structural And Event Based Multimodal Video Data Modeling

Oztarak, Hakan 01 December 2005 (has links) (PDF)
Investments in multimedia technology enable us to store many reflections of the real world in the digital world as videos. By recording videos about real-world entities, we carry a lot of information into the digital world directly. In order to store and efficiently query this information, a video database system (VDBS) is necessary. In this thesis work, we propose a structural, event-based and multimodal (SEBM) video data model for VDBSs. The SEBM video data model supports three different modalities, namely visual, auditory, and textual, and we propose that all three can be handled within a single SEBM video data model. This proposal is supported by the way humans interpret video data. Hence we can answer the content-based, spatio-temporal, and fuzzy queries of the user more easily, since we store the video data in the way the user interprets real-world data. We follow a divide-and-conquer technique when answering very complicated queries. We have implemented the SEBM video data model in a Java-based system that uses XML for representing the SEBM data model and Berkeley XML DBMS for storing the data based on the SEBM prototype system.
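A minimal sketch of what an event-based, multimodal XML representation in the spirit of SEBM might look like, with a simple content-based query on top of it. The element names and structure below are assumptions for illustration, not the thesis's actual schema (and the thesis's prototype is Java with Berkeley XML DBMS, not Python):

```python
import xml.etree.ElementTree as ET

# Hypothetical event-based annotation of a video, covering the three
# modalities the SEBM model supports: visual, auditory, and textual.
xml_doc = """
<video id="clip01">
  <event start="00:01:05" end="00:01:12">
    <visual>person enters room</visual>
    <auditory>door creaks</auditory>
    <textual>subtitle: "Hello?"</textual>
  </event>
  <event start="00:02:30" end="00:02:41">
    <visual>car drives away</visual>
    <auditory>engine noise</auditory>
  </event>
</video>
"""

root = ET.fromstring(xml_doc)
# Content-based query: start times of events whose visual annotation
# mentions "car"
hits = [e.get("start") for e in root.iter("event")
        if "car" in e.findtext("visual", "")]
print(hits)  # prints ['00:02:30']
```

Spatio-temporal and fuzzy queries, as described in the abstract, would layer additional predicates (region coordinates, time-interval relations, similarity thresholds) over the same event structure.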
80

Bayesian Nonparametric Modeling of Temporal Coherence for Entity-Driven Video Analytics

Mitra, Adway January 2015 (has links) (PDF)
In recent times there has been an explosion of online user-generated video content, which has generated significant research interest in video analytics. Human users understand videos through high-level semantic concepts. However, most current research in video analytics is driven by low-level features and descriptors, which often lack semantic interpretation. Existing attempts at semantic video analytics are specialized and require additional resources like movie scripts, which are not available for most user-generated videos. There are no general-purpose approaches to understanding videos through semantic concepts. In this thesis we attempt to bridge this gap. We view videos as collections of entities, which are semantic visual concepts such as the persons in a movie or the cars in an F1 race video. We focus on two fundamental tasks in video understanding, namely summarization and scene discovery. Entity-driven video summarization and entity-driven scene discovery are important open problems. They are challenging due to the spatio-temporal nature of videos and the lack of a priori information about entities. We use Bayesian nonparametric methods to solve these problems. In the absence of external resources like scripts, we utilize fundamental structural properties like temporal coherence (TC) in videos, which means that adjacent frames should contain the same set of entities and have similar visual features. There have been no focused attempts to model this important property. This thesis makes several contributions in computer vision and Bayesian nonparametrics by addressing entity-driven video understanding through temporal coherence modeling. Temporal coherence in videos is observed across frames at the level of features/descriptors, as well as at the semantic level. We start with an attempt to model TC at the level of features/descriptors.
A tracklet is a spatio-temporal fragment of a video: a set of spatial regions in a short sequence (5-20) of consecutive frames, each of which encloses a particular entity. We attempt to find a representation of tracklets to aid tracking of entities. We explore region descriptors like covariance matrices of spatial features in individual frames. Due to temporal coherence, such matrices from corresponding spatial regions in successive frames have nearly identical eigenvectors. We utilize this property to model a tracklet using a covariance matrix and use it for region-based entity tracking. We propose a new method to estimate such a matrix. Our method is found to be much more efficient and effective than alternative covariance-based methods for entity tracking. Next, we move to modeling temporal coherence at a semantic level, with special emphasis on videos of movies and TV-series episodes. Each tracklet is associated with an entity (say, a particular person). Spatio-temporally close but non-overlapping tracklets are likely to belong to the same entity, while tracklets that overlap in time can never belong to the same entity. Our aim is to cluster the tracklets based on the entities associated with them, with the goal of discovering the entities in a video along with all their occurrences. We argue that Bayesian nonparametrics is the most convenient way to do this. We propose a temporally coherent version of the Chinese Restaurant Process (TC-CRP) that can encode such constraints easily, results in the discovery of pure clusters of tracklets, and also filters out tracklets resulting from false detections. TC-CRP shows excellent performance on person discovery from TV-series videos. We also discuss semantic video summarization based on entity discovery. Next, we consider entity-driven temporal segmentation of a video into scenes, where each scene is characterized by the entities present in it.
This is a novel application, as existing work on temporal segmentation has focused on low-level features of frames rather than entities. We propose EntScene, a generative model for videos based on entities and scenes, and propose an inference algorithm based on blocked Gibbs sampling for simultaneous entity discovery and scene discovery. We compare it to alternative inference algorithms and show significant improvements in terms of segmentation and scene discovery. Video representation by low-rank matrices has gained popularity recently and has been used for various tasks in computer vision. In such a representation, each column corresponds to a frame or a single detection. Such matrices are likely to have contiguous sets of identical columns due to temporal coherence, and hence they should be low-rank. However, we discover that none of the existing low-rank matrix recovery algorithms are able to preserve such structures. We study regularizers to encourage these structures for low-rank matrix recovery through convex optimization, but note that TC-CRP-like Bayesian modeling is better for enforcing them. We then focus our attention on modeling temporal coherence in hierarchically grouped sequential data, such as word tokens grouped into sentences, paragraphs, and documents in a text corpus. We attempt Bayesian modeling for such data, with application to multi-layer segmentation. We first make a detailed study of existing models for such data. We present a taxonomy for such models, called Degree-of-Sharing (DoS), based on how various mixture components are shared by the groups of data in these models. We propose the Layered Dirichlet Process, which generalizes the Hierarchical Dirichlet Process to multiple layers and can also handle sequential information easily through a Markovian approach. This is applied to hierarchical co-segmentation of a set of news transcripts into broad categories (like politics, sports, etc.) and individual stories.
We also propose an explicit-duration (semi-Markov) approach for this purpose and provide an efficient inference algorithm for it. We also discuss generative processes for distribution matrices, where each column is a probability distribution. For this we discuss an application: inferring the correct answers to questions on online answering forums from opinions provided by different users.
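The region-covariance descriptor and its temporal-coherence property discussed in this abstract can be sketched as follows, using the ordinary sample covariance on toy per-pixel features. This only illustrates the standard descriptor; the thesis's own, more efficient estimation method is not reproduced here:

```python
import numpy as np

def region_covariance(features):
    """Covariance descriptor of a spatial region.
    `features` is an (n_pixels, d) array of per-pixel feature vectors
    (e.g. x, y, intensity, gradient components); the descriptor is the
    d x d sample covariance of those vectors."""
    return np.cov(features, rowvar=False)

rng = np.random.default_rng(0)
# Two "frames" of the same tracklet: the second is the first plus small
# noise, mimicking temporal coherence between consecutive frames (toy data).
frame1 = rng.normal(size=(200, 4))
frame2 = frame1 + 0.01 * rng.normal(size=(200, 4))

c1 = region_covariance(frame1)
c2 = region_covariance(frame2)
# Temporally coherent frames yield nearly identical descriptors,
# so the Frobenius distance between them is small.
print(np.linalg.norm(c1 - c2))
```

In practice, distances between covariance matrices are usually measured with Riemannian or log-Euclidean metrics rather than the plain Frobenius norm used above, since covariance matrices live on the manifold of symmetric positive-definite matrices.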
