281

A Study for Determining the Efficacy of Tape-Recorded Presentations for the Enhancement of Self-Concept in First-Grade Children

Aston, Willard A. 12 1900 (has links)
The problem of the study was to discover whether the self-concepts of selected children in the primary grades could be enhanced. The purpose of the study was to determine the feasibility of using tape-recorded stories to enhance the self-concepts of selected primary-grade children. An analysis of Piers-Harris Children's Self Concept Scale results for sex differences showed no significant differences for either the experimental or the control group. Some enhancement of the self-concepts of primary-grade children may be possible by means of auditory, non-teacher-directed activities under properly controlled conditions. Several areas should be investigated further. A regular school-year study should be designed to produce results applicable to a more general population. Such a study might answer questions regarding peer influences, the relationship between self-concept and academic achievement, the tolerance of primary-grade children for prolonged treatment, and teacher attitudes toward conducting such activities. Studies should also be conducted to determine the relative value of simultaneous visual and auditory presentations for the enhancement of self-concept.
282

Student experiences with instructional videos in online learning environments

Hibbert, Melanie C. January 2016 (has links)
Drawing upon the qualitative methods of semi-structured interviews and observational talk-through interviews, this dissertation investigates the ways in which graduate students in an online course context experience online instructional videos. A conceptual framework of user experience and multimodality, as well as the framework of sense-making developed by McCarthy and Wright (2004), guided this study and its data analysis. The findings of this dissertation have implications for how students participate in, interact with, and make sense of online learning environments. The findings of this research include: (a) students do not necessarily experience course videos as discrete elements (or differentiate them from other aspects of the course); (b) the times and contexts in which students view instructional videos shift (e.g., between home and commuting); (c) student motivations and expectations shape how they approach and orient themselves towards watching online course videos; and (d) multimodal design elements influence students' meaning-making of online instructional videos. These findings all support the overarching conclusion of this dissertation: students have significant agency in these online environments, and their meaning-making of online videos may not align with designers' intentions. This conclusion argues against deterministic views of design. The findings have design implications for the creation of learning environments in online spaces, such as: (a) fully integrating videos within the broader instructional design of a course; (b) foregrounding the embedded context of instructional videos; and (c) accounting for the shifting times, places, and contexts in which viewers watch instructional videos. This dissertation is situated in the growing field of online education, in particular higher education, where significant money and resources are increasingly dedicated to the development of online spaces while much is still unknown about the design, experiences, and impact of these online learning environments.
283

Videogame as audiovisual language: understanding and application in a case study - Super Street Fighter

Corrêa, Francisco Tupy Gomes 27 September 2013 (has links)
The relationship between the cinematographic and ludic aspects of games goes well beyond the word videogame. The term is very recent in human history, having appeared only a few decades ago; however, it converges symbols, techniques, and practices, shaping these contemporary artifacts and significantly impacting society. For this reason, it presents itself as an object of study capable of prompting diverse reflections and discussions. In this research, we consider that in this expression of the act of playing there is much more game than video. Approaches that focus on narratives and mechanics often leave a gap concerning audiovisual processes. Based on this observation, we devised a hypothetico-deductive and metaphoric study built on an integrative view of three distinct approaches: Ludology, Narratology, and Audiovisual Language. The motivation behind the chosen case study, Super Street Fighter IV, is that it is a popular game, from a genre defined by its gameplay, that has reinvented itself for more than two decades to please its fans. In addition, it engages directly with cultural themes (Asian customs and martial arts) and cinematographic themes (fight films and the icon of Bruce Lee). The study focused its results on contributing to an understanding of the videogame, so that the research could provide parameters both for deepening elements present in its language and for developing questions related to its production.
284

The orchestration of modes and EFL audio-visual comprehension: A multimodal discourse analysis of vodcasts

Norte Fernández-Pacheco, Natalia 27 January 2016 (has links)
This thesis explores the role of multimodality in language learners' comprehension and, more specifically, the effects on students' audio-visual comprehension when different orchestrations of modes appear in vodcasts. Firstly, I describe the state of the art of its three main areas of concern, namely the evolution of meaning-making, Information and Communication Technology (ICT), and audio-visual comprehension. One of the most important contributions of the theoretical overview is the proposed integrative model of audio-visual comprehension, which attempts to explain how students process information received from different inputs. Secondly, I present a study based on the following research questions: 'Which modes are orchestrated throughout the vodcasts?', 'Are there any multimodal ensembles that are more beneficial for students' audio-visual comprehension?', and 'What are the students' attitudes towards audio-visual (e.g., vodcasts) compared to traditional audio (e.g., audio tracks) comprehension activities?'. Along with these research questions, I formulated two hypotheses: audio-visual comprehension improves when there is a greater number of orchestrated modes, and students have a more positive attitude towards vodcasts than towards traditional audios when carrying out comprehension activities. The study includes a multimodal discourse analysis, audio-visual comprehension tests, and student questionnaires. The multimodal discourse analysis of two British Council language-learning vodcasts, entitled English is GREAT and Camden Fashion, using ELAN as the multimodal annotation tool, shows that there is a variety of multimodal ensembles of two, three, and four modes. The audio-visual comprehension tests were given to 40 Spanish students, learning English as a foreign language, after they had watched the vodcasts. These tests contain questions related to specific orchestrations of modes appearing in the vodcasts. The statistical analysis of the test results, using repeated-measures ANOVA, reveals that students obtain better audio-visual comprehension results when the multimodal ensembles consist of a greater number of orchestrated modes. Finally, the data compiled from the questionnaires show that students have a more positive attitude towards vodcasts than towards traditional audio listening activities. Results from the audio-visual comprehension tests and questionnaires confirm the two hypotheses of this study.
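As a rough illustration of the kind of analysis this abstract reports, the sketch below runs a repeated-measures ANOVA on simulated per-student comprehension scores grouped by the number of orchestrated modes (two, three, or four). The data, column names, and effect sizes are invented for illustration; they are not the thesis's data.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
students = np.repeat(np.arange(40), 3)   # 40 students x 3 within-subject conditions
modes = np.tile([2, 3, 4], 40)           # number of orchestrated modes per condition
# Simulated scores that improve with more modes, plus random noise (assumption)
scores = 60 + 5 * (modes - 2) + rng.normal(0, 8, size=modes.size)

data = pd.DataFrame({"student": students, "modes": modes, "score": scores})
# Repeated-measures ANOVA: does comprehension differ across multimodal ensembles?
result = AnovaRM(data, depvar="score", subject="student", within=["modes"]).fit()
print(result)  # F-test for the within-subject factor "modes"
```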
285

Reflections on the use of a smartphone to facilitate qualitative research in South Africa

Matlala, Sogo France, Matlala, Makoko Neo 10 January 2018 (has links)
Journal article published in The Qualitative Report, 2018, Volume 23, Number 10, How To Article 2, 2264-2275 / This paper describes the conditions that led to the use of a smartphone, rather than a digital voice recorder (the standard device for recording interviews), to collect qualitative data. Through a review of technical documents, the paper defines a smartphone and describes its applications that are useful in the research process. It further points out possible uses of other smartphone applications in research. The paper concludes that a smartphone is a valuable device for researchers.
286

Diarization, Localization and Indexing of Meeting Archives

Vajaria, Himanshu 21 February 2008 (has links)
This dissertation documents research on the topics of localization, diarization, and indexing in meeting archives. It surveys existing work in these areas, identifies opportunities for improvement, and proposes novel solutions for each of these problems. The framework resulting from this dissertation enables various kinds of queries, such as identifying the participants of a meeting, finding all meetings for a particular participant, locating a particular individual in the video, and finding all instances of speech from a particular individual. Also, since the proposed solutions are computationally efficient, require no training, and use little domain knowledge, they can be easily ported to other domains of multimedia analysis. Speaker diarization involves determining the number of distinct speakers in an audio recording and identifying the durations when each spoke. We propose novel solutions for the segmentation and clustering sub-tasks, based on graph spectral clustering. The resulting system yields a diarization error rate of around 20%, a relative improvement of 16% over the currently popular diarization technique based on hierarchical clustering. The most significant contribution of this work lies in performing speaker localization using only a single camera and a single microphone, by exploiting long-term audio-visual co-occurrence. Our novel computational model allows identifying regions in the image belonging to the speaker even when the speaker's face is non-frontal and even when the speaker is only partially visible. This approach results in a hit ratio of 73.8%, compared to a hit ratio of 52.6% for an MI-based approach, which illustrates its suitability for the meeting domain. The third problem addresses indexing meeting archives to enable retrieving, in a query-by-example framework, all segments from the archive during which a particular individual speaks. By performing audio-visual association and clustering, a target cluster is generated per individual, containing multiple multimodal samples for that individual to which a query sample is matched. The use of multiple samples results in a retrieval precision of 92.6% at 90% recall, compared to a precision of 71% at the same recall achieved by a unimodal, single-sample system.
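To make the clustering step concrete, here is a minimal sketch of grouping audio segments into speakers with spectral clustering on a precomputed affinity matrix, assuming per-segment embeddings have already been extracted. The embedding source and the cosine-similarity affinity are assumptions for illustration, not the dissertation's exact pipeline.

```python
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.metrics.pairwise import cosine_similarity

def cluster_segments(embeddings, n_speakers):
    """Group audio-segment embeddings into speaker clusters.

    embeddings: (n_segments, dim) array, one vector per audio segment.
    Returns an array of speaker labels, one per segment.
    """
    # Affinity matrix: pairwise cosine similarity, shifted into [0, 1]
    affinity = (cosine_similarity(embeddings) + 1.0) / 2.0
    model = SpectralClustering(n_clusters=n_speakers, affinity="precomputed")
    return model.fit_predict(affinity)

# Toy example: 6 segments from 2 well-separated synthetic "speakers"
rng = np.random.default_rng(0)
emb = np.vstack([rng.normal(0, 0.1, (3, 8)) + 1, rng.normal(0, 0.1, (3, 8)) - 1])
print(cluster_segments(emb, n_speakers=2))  # e.g. [0 0 0 1 1 1]
```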
287

Producing and using video film : a tool for agricultural extension, a case study in Limpopo Province

Mphahlele, Chipientsho Koketso January 2007 (has links)
Thesis (M.Sc. (Agriculture)) -- University of Limpopo, 2007 / The study was designed to outline the production process of a video film with farmers and its use as a tool for agricultural extension with other farmers engaged in similar development processes. The production of the video film followed five stages: (1) the planning stage, where the production idea was discussed between the producer and the director; (2) pre-production, where brainstorming took place and a conceptual framework was drawn up; (3) the production stage, where shooting took place at different venues with farmers and extension officers; (4) the editing stage, using the conceptual framework and the Non-Linear Editing (NLE) method to organize the video film into a sequence; and (5) distribution, screening the video film with farmers in ten rural areas of the Limpopo province. Following this process, an eleven-minute film called Phanda na Vhulimi was produced with farmers, a farmers' leader as the main character, and extension officers. Phanda na Vhulimi captured the farmer in her field, during meetings at various venues as a leader, and during a public function in the village with provincial leaders. A voice-over by an extension officer supplements the visual information with a description of the support process. In the ten villages the video film Phanda na Vhulimi was then screened to farmers in the following steps: (1) preparation for projection, which involved arranging projection venues and setting the sound to an audible volume; (2) pre-projection, where the researcher made a short presentation about the study without disclosing the content of the video film; (3) projection, where the video was played without pausing or commentary by the person projecting it (the researcher), though viewers were free to talk; and (4) post-projection, where the video film was discussed with the farmers, the researcher acting as facilitator to bring in farmer-to-farmer experience in relation to what was portrayed. After the projections, an open-ended questionnaire was used to collect data. The raw data were analyzed under two themes, divided into the following subsections: preparation of the video film, reflection by the viewers/participants on the video film, and learning during the projection process. The results of the study indicated that people in rural areas of South Africa watch television. There is a culture of shooting still pictures and watching video films, though not of hiring them, which people find expensive; instead they borrow, watch with neighbours and friends in other villages, or watch family videos produced during special events. Because of this culture, people are used to seeing both moving and still pictures, and they will therefore criticize poor-quality pictures when they come across them. The study found that when a video film is produced with characters of the same background, the target audience associates itself with the product and feels that it represents them and their activities. Such video films can be used as a tool to complement, not replace, the available presentation methods. / Department of Labour SETASA NSF
288

Audio-visual multiple-speaker tracking for robot perception

Ban, Yutong 10 May 2019 (has links)
Robot perception plays a crucial role in human-robot interaction (HRI). The perception system provides the robot with information about its surroundings and enables it to react accordingly. In a conversational scenario, a group of people may chat in front of the robot and move freely. In such situations, robots are expected to understand where the people are, who is speaking, and what they are talking about. This thesis concentrates on answering the first two questions, namely speaker tracking and diarization. We use different modalities of the robot's perception system to achieve this goal. Like sight and hearing for a human being, visual and audio information are the critical cues for a robot in a conversational scenario. The advances in computer vision and audio processing over the last decade have revolutionized robot perception abilities. This thesis makes the following contributions. We first develop a variational Bayesian framework for tracking multiple objects. The variational Bayesian framework gives closed-form, tractable solutions, which makes the tracking process efficient. The framework is first applied to visual multiple-person tracking, with birth and death processes built jointly into the model to handle the varying number of people in the scene. Furthermore, we exploit the complementarity of vision and robot motor information: on the one hand, the robot's active motion can be integrated into the visual tracking system to stabilize the tracking; on the other hand, visual information can be used to perform motor servoing. Audio and visual information are then combined in the variational framework to estimate smooth trajectories of speaking people and to infer the acoustic status of a person: speaking or silent. In addition, we apply the model to acoustic-only speaker localization and tracking, where online dereverberation techniques are applied first, followed by the tracking system. Finally, a variant of the acoustic speaker-tracking model based on the von Mises distribution is proposed, which is specifically adapted to directional data. All the proposed methods are validated on application-specific datasets.
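As a loose sketch of the directional-data idea, the code below computes soft speaker assignments for direction-of-arrival (DOA) observations under a von Mises mixture and re-estimates each speaker's mean direction. The per-frame DOA angles, fixed concentrations and weights, and the plain EM updates are all assumptions for illustration, not the thesis's variational model.

```python
import numpy as np
from scipy.stats import vonmises

def vm_e_step(doas, mus, kappas, weights):
    """Responsibility of each speaker (column) for each DOA angle (row)."""
    lik = np.stack([w * vonmises.pdf(doas, kappa, loc=mu)
                    for mu, kappa, w in zip(mus, kappas, weights)], axis=1)
    return lik / lik.sum(axis=1, keepdims=True)

def vm_m_step_means(doas, resp):
    """Weighted circular means: the von Mises MLE of each mean direction."""
    return np.arctan2(resp.T @ np.sin(doas), resp.T @ np.cos(doas))

# Toy example: DOA angles (radians) from two speakers near 0.5 and 2.5 rad
doas = np.array([0.45, 0.55, 0.50, 2.40, 2.60, 2.55])
mus = np.array([0.0, 3.0])                       # initial mean directions
kappas = np.array([5.0, 5.0])                    # fixed concentrations (assumption)
weights = np.array([0.5, 0.5])                   # fixed mixture weights (assumption)
for _ in range(10):                              # a few EM iterations (means only)
    resp = vm_e_step(doas, mus, kappas, weights)
    mus = vm_m_step_means(doas, resp)
print(mus)  # converges near [0.50, 2.52]
```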
289

Re-framing : an investigation of performance at the intersection of space

Tuttle, Dean, University of Western Sydney, Faculty of Performance, Fine Arts and Design, School of Design January 1997 (has links)
Re-framing is the documentation and analysis of a process of theoretical and practical performance research. The terms of reference for this research were to experiment with the practical workshopping and development of three productions, each of which restructured and reconceived a 'canonical' written playscript in a format combining audio-visual media with live performance. The performances were developed specifically for high schools in New South Wales and developed models and ideas for using portable technology so that they could easily travel from location to location. The research methodology also included the practical investigation of a process of collaborative production of a multimedia theatre piece with a group of high school students (from Plumpton High School in Western Sydney). The documentation consists of an interactive multimedia component and a number of text 'modules' that correspond to sections of the interactive. The analysis formulates the process of construction, execution, and reception of the performances in terms of a number of intersecting and interacting spaces. The focus is on the practice and effects of creating combinations and interactions between these otherwise discrete spaces. The nature of these spaces helps to define and situate the performance, but the spaces can, conversely, be redefined by the performance. In the specific context of multimedia theatre performances for high schools, the spaces that may come into interplay and be modified include: those of the audio-visual media; the meeting space of live performer and audience; the school environment and the wider institution of public education of which it is a part; the written text of a playscript as a space for constructing a fictional reality; and the 'virtual space' where this fiction is reconstructed within the mind of the spectator in response to the symbolism of the performance. If such spaces are bounded by frames that are at least partly socially and discursively defined, the thesis proposes, then the performance can act as a catalyst to create new spaces, with languages and ways of structuring reality that differ from those of the old spaces. The implications of this hybridisation may reach beyond the immediate time, space, and subject of the performance to reframe ideas, images, narratives, and mythologies in domains that extend into many areas of social life and destabilize the systems upon which they are based. Reframing a space can reform the perception and structuring of realities within it. / Master of Arts (Hons) (Performance)
290

Audio and Visual Rendering with Perceptual Foundations

Bonneel, Nicolas 15 October 2009 (has links) (PDF)
Realistic visual and audio rendering remains a technical challenge. Typical computers cannot cope with the increasing complexity of today's virtual environments, for both audio and visuals, and the graphic design of such scenes requires talented artists. In the first part of this thesis, we focus on audiovisual rendering algorithms for complex virtual environments, which we improve using human perception of combined audio and visual cues. In particular, we developed a full perceptual audiovisual rendering engine integrating an efficient impact-sound rendering improved by our perception of audiovisual simultaneity, a way to cluster sound sources using humans' spatial tolerance between a sound and its visual representation, and a combined level-of-detail mechanism for both audio and visuals, varying the impact-sound quality and the visually rendered material quality of the objects. All our crossmodal effects were supported by prior work in neuroscience and demonstrated in our own experiments in virtual environments. In the second part, we use information present in photographs to guide visual rendering. We provide two tools to assist 'casual artists' such as gamers or engineers. The first extracts the visual appearance of hair from a photograph, allowing the rapid customization of avatars in virtual environments. The second allows fast previewing of 3D scenes that reproduce the appearance of an input photograph following a user's 3D sketch. We thus propose a first step toward crossmodal audiovisual rendering algorithms and develop practical tools for non-expert users to create virtual worlds using a photograph's appearance.
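As a loose illustration of the clustering idea, the sketch below greedily merges 3D sound sources whose angular separation, as seen from the listener, falls below a tolerance. The 15-degree tolerance and the greedy strategy are assumptions for illustration only; they stand in for, and do not reproduce, the thesis's perceptually derived algorithm.

```python
import numpy as np

def cluster_sources(positions, listener, tol_deg=15.0):
    """Greedily group sound sources whose directions from the listener
    differ by less than tol_deg degrees. Returns a cluster id per source."""
    dirs = positions - listener
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    labels = -np.ones(len(positions), dtype=int)
    reps = []  # representative unit direction per cluster
    for i, d in enumerate(dirs):
        for c, r in enumerate(reps):
            # angle between this source's direction and the cluster representative
            if np.degrees(np.arccos(np.clip(d @ r, -1.0, 1.0))) < tol_deg:
                labels[i] = c
                break
        if labels[i] < 0:          # no close enough cluster: start a new one
            reps.append(d)
            labels[i] = len(reps) - 1
    return labels

# Toy example: three sources, two nearly co-located from the listener's view
pos = np.array([[1.0, 0.0, 0.0], [1.0, 0.1, 0.0], [0.0, 1.0, 0.0]])
print(cluster_sources(pos, listener=np.zeros(3)))  # e.g. [0 0 1]
```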
