101

Signal Processing of Electroencephalogram for the Detection of Attentiveness towards Short Training Videos

Nussbaum, Paul 18 October 2013 (has links)
This research developed a novel method that uses an easy-to-deploy, single-dry-electrode wireless electroencephalogram (EEG) collection device as the input to an automated system that measures indicators of a participant's attentiveness while they watch a short training video. The results are promising, including 85% or better accuracy in identifying whether a participant is watching a video segment from a boring scene or lecture versus a segment from an attentiveness-inducing active lesson or memory quiz. In addition, the final system produces an ensemble average of attentiveness across many participants, pinpointing areas in the training videos that induce peak attentiveness. Qualitative analysis of the results is also very promising: the system produces attentiveness graphs for individual participants, and these triangulate well with the thoughts and feelings those participants reported, in their own words, during different parts of the videos. As distance learning and computer-based training become more popular, it is of great interest to measure whether students are attentive to recorded lessons and short training videos. This research was motivated by that interest, as well as by recent advances in electronic and computer engineering's use of biometric signal analysis for the detection of affective (emotional) response. Signal processing of EEG has proven useful in measuring alertness and emotional state, and even in very specific applications such as predicting whether participants will recall television commercials days after seeing them. This research extended these advances by creating an automated system that measures attentiveness towards short training videos. The bulk of the research focused on electrical and computer engineering, specifically the optimization of signal processing algorithms for this particular application. 
A review of existing EEG signal processing and feature extraction methods shows a common subdivision of steps across different EEG applications: hardware sensing, filtering, and digitizing; noise removal; chopping the continuous EEG data into windows for processing; normalization; transformation to extract frequency or scale information; treatment of phase or shift information; and additional post-transformation noise reduction techniques. A large degree of variation exists within most of these steps in the currently documented state of the art. This research connected these varied methods into a single holistic model that allows for comparison and selection of optimal algorithms for this application, providing a structured and orderly comparison of individual signal analysis and feature extraction methods. The study created a concise algorithmic approach to examining all the aforementioned steps, and in doing so provided the framework for a systematic approach that followed rigorous participant cross-validation, so that options could be tested, compared, and optimized. Novel signal analysis methods were also developed, using new techniques to choose parameters, which greatly improved performance. The research also utilized machine learning to automatically categorize the extracted features into measures of attentiveness, improving existing machine learning with novel methods, including the use of per-participant baselines with kNN machine learning. This provided an optimal solution that extends current EEG signal analysis methods used in other applications and refines them for measuring attentiveness towards short training videos. The optimal signal analysis and machine learning steps were identified through both n-fold and participant cross-validation. 
This new system, which applies EEG signal processing to detect attentiveness towards short training videos, represents a significant advance in the field.
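The processing chain the abstract describes (windowing the continuous EEG, extracting frequency-band features, normalizing against a per-participant baseline, and classifying with kNN) can be sketched roughly as follows. The sampling rate, window length, frequency bands, and k are illustrative assumptions, not the thesis's actual parameters:

```python
import numpy as np

FS = 128                                                       # assumed sampling rate (Hz)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}  # assumed bands

def band_powers(window, fs=FS):
    """Spectral power of one EEG window in each band, via FFT."""
    spectrum = np.abs(np.fft.rfft(window)) ** 2
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    return np.array([spectrum[(freqs >= lo) & (freqs < hi)].sum()
                     for lo, hi in BANDS.values()])

def features(eeg, fs=FS, win_sec=2.0):
    """Chop continuous EEG into fixed windows and extract band powers,
    normalized by this participant's own mean (per-participant baseline)."""
    n = int(fs * win_sec)
    wins = [eeg[i:i + n] for i in range(0, len(eeg) - n + 1, n)]
    feats = np.array([band_powers(w, fs) for w in wins])
    return feats / (feats.mean(axis=0) + 1e-12)

def knn_predict(train_x, train_y, x, k=3):
    """Plain kNN majority vote over Euclidean distance."""
    d = np.linalg.norm(train_x - x, axis=1)
    votes = train_y[np.argsort(d)[:k]]
    return int(np.round(votes.mean()))
```

The per-participant normalization step is the part the abstract singles out as novel; the rest is the conventional pipeline it surveys.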
102

Spatial Immersion Dimensions in 360º Videos

El Ghazouani, Anas January 2017 (has links)
360º videos have emerged as a technology that provides new possibilities for filmmakers and users alike. This research study looks at 360º videos and the level of spatial immersion that users can achieve while viewing them in different contexts. A number of studies have looked at immersion in virtual environments; however, the same does not apply to 360º videos. The paper introduces related work in the areas of 360º video, as well as immersion and spatial immersion in virtual reality environments, in order to provide a background for the research question. The research question is answered by showing test subjects five different videos set in different locations, interviewing them, and asking them to take part in a questionnaire. The study analyses the findings that emerge from the interviews and questionnaire in relation to the spatial immersion dimensions presented in the background literature. Among the study's findings is that the potential movements and actions that users feel they can perform in the virtual environment are a significant factor in achieving spatial immersion. The study also concludes that movement is another factor that helps users achieve spatial immersion.
103

Improving the Utility of Egocentric Videos

Biao Ma (6848807) 15 August 2019 (has links)
<div>For either entertainment or documentation purposes, people are starting to record their lives using egocentric cameras mounted on a person or a vehicle. Our target is to improve the utility of these egocentric videos. </div><div><br></div><div>For egocentric videos with an entertainment purpose, we aim to enhance the viewing experience to improve overall enjoyment. We focus on First-Person Videos (FPVs), which are recorded by wearable cameras. People record FPVs in order to share their First-Person Experience (FPE). However, raw FPVs are usually too shaky to watch, which ruins the experience. We explore the mechanism of human perception and propose a biometric-based measurement called the Viewing Experience (VE) score, which measures both the stability and the First-person Motion Information (FPMI) of an FPV. This enables us to further develop a system that stabilizes FPVs while preserving their FPMI. Experimental results show that our system is robust and efficient in measuring and improving the VE of FPVs.</div><div><br></div><div>For egocentric videos whose goal is documentation, we aim to build a system that can centrally collect, compress and manage the videos. We focus on Dash Camera Videos (DCVs), which people use to document the route they drive each day. We propose a system that classifies videos according to the route driven, using GPS and visual information. When new DCVs are recorded, their bit-rate can be reduced by jointly compressing them with videos recorded on a similar route. Experimental results show that our system outperforms other similar solutions and standard HEVC, particularly under varying illumination.</div><div><br></div><div>The First-Person Video viewing experience topic and the dashcam video compression topic are representative of two applications that rely on Visual Odometers (VOs): visual augmentation and robotic perception. Different applications have different requirements for VOs, and VO performance is influenced by many different factors. To help our system, and other users working on similar applications, we further propose a system that investigates the performance of different VOs under various factors, and show that it can provide suggestions on selecting a VO based on the application.</div>
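The route-classification step the abstract mentions, grouping dashcam clips by the route driven so they can be jointly compressed, could be approximated from GPS alone. The sketch below is an illustrative assumption, not the thesis's actual method: it compares two GPS traces by the symmetric mean nearest-point distance and groups clips whose traces stay within a threshold.

```python
import math

def haversine_m(p, q):
    """Great-circle distance in meters between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 6371000 * 2 * math.asin(math.sqrt(a))

def route_distance(a, b):
    """Symmetric mean nearest-point distance between two GPS traces."""
    def one_way(src, dst):
        return sum(min(haversine_m(p, q) for q in dst) for p in src) / len(src)
    return (one_way(a, b) + one_way(b, a)) / 2

def same_route(a, b, thresh_m=50.0):
    """Candidate pair for joint compression if the traces stay close."""
    return route_distance(a, b) < thresh_m
```

The threshold (50 m here) would in practice depend on GPS noise and road spacing; the thesis additionally uses visual information, which this GPS-only sketch omits.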
104

IMAGE AND VIDEO QUALITY ASSESSMENT WITH APPLICATIONS IN FIRST-PERSON VIDEOS

Chen Bai (6760616) 12 August 2019 (has links)
<div>First-person videos (FPVs) captured by wearable cameras provide a huge amount of visual data. FPVs have different characteristics from broadcast and mobile videos: their quality is influenced by motion blur, tilt, rolling shutter and exposure distortions. In this work, we design image and video quality assessment methods applicable to FPVs. </div><div><br></div><div>Our video quality assessment focuses on three quality problems. The first is video frame artifacts, including motion blur, tilt and rolling shutter, caused by the heavy and unstructured motion in FPVs. The second is exposure distortions: videos suffer from exposure distortions when the camera sensor is not exposed to the proper amount of light, which is often caused by bad environmental lighting or capture angles. The third is the increased blurriness after video stabilization: a stabilized video is perceptually blurrier than its original because the masking effect of motion is no longer present. </div><div><br></div><div>To evaluate video frame artifacts, we introduce a new strategy for image quality estimation, called mutual reference (MR), which uses the information provided by overlapping content to estimate image quality. The MR strategy is applied to FPVs by partitioning temporally nearby frames with similar content into sets and estimating their visual quality from their mutual information. We propose one MR quality estimator, Local Visual Information (LVI), which estimates the relative quality between two overlapping images.</div><div><br></div><div>To alleviate exposure distortions, we propose a controllable illumination enhancement method that adjusts the amount of enhancement with a single knob, which can be driven by our proposed over-enhancement measure, the Lightness Order Measure (LOM). Since visual quality is an inverted-U-shaped function of the amount of enhancement, our design controls the amount of enhancement so that the image is enhanced to peak visual quality. </div><div><br></div><div>To estimate the increased blurriness after stabilization, we propose a visibility-inspired temporal pooling (VTP) mechanism. VTP models the motion-masking effect on perceived video blurriness as the influence of a frame's visibility on the temporal pooling weight of that frame's quality score. Visibility is estimated as the proportion of spatial detail that is visible to human observers.</div>
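A temporal pooling scheme of the general shape described above, where each frame's quality score is weighted by an estimate of its visibility, could be sketched as follows. The specific weighting is an illustrative assumption, not the thesis's actual VTP formulation:

```python
def pooled_quality(frame_scores, visibilities):
    """Visibility-weighted temporal pool of per-frame quality scores.

    frame_scores: per-frame quality/blurriness scores
    visibilities: per-frame proportion of spatial detail visible, in [0, 1];
                  frames masked by motion get low weight
    """
    if len(frame_scores) != len(visibilities):
        raise ValueError("one visibility weight per frame score required")
    total_w = sum(visibilities)
    if total_w == 0:
        # degenerate case, nothing visible: fall back to a plain mean
        return sum(frame_scores) / len(frame_scores)
    return sum(s * v for s, v in zip(frame_scores, visibilities)) / total_w
```

The intuition this encodes is the one in the abstract: blur in a fast-moving (low-visibility) frame contributes less to the perceived video quality than blur in a still, fully visible frame.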
105

Let's Play videa z pohledu autorského práva / Let's Play videos from the point of view of copyright law

Hálek, Jakub January 2018 (has links)
Let's Play videos from the point of view of copyright law Abstract This Master's Thesis examines Let's Play videos (a new but significant and popular part of the entertainment industry) from the point of view of copyright law, especially Czech copyright law. The view of European Union law is, of course, not omitted, and with respect to the global nature of the issue, the Thesis includes selected foreign legislation, case law and expert opinions. Since the issue of Let's Play videos is new and almost unexplored, this Thesis examines and defines not only Let's Play videos but also their creators. It also identifies the sources of income from Let's Play videos and the persons involved and their interests, which can collide with each other. Given the existential dependence of Let's Play videos on videogames, the Thesis examines some relevant copyright aspects of videogames as well. Besides the question of the copyright classification of Let's Play videos, the Thesis also deals with the less obvious consequences of such classification. Possible legal titles for the use of videogames and their elements in the creation, publication and monetization of Let's Play videos are analyzed as well. Subsequently, the Thesis examines and analyses the current worldwide licensing practice in this field. Despite the fact that...
106

Comprehension of an audio versus an audiovisual lecture at 50% time-compression

Unknown Date (has links)
Since students can adjust the speed of online videos through time-compression, which is available in common software (Pastore & Ritzhaupt, 2015), it is important to learn at which point compression impacts comprehension. The focus of this study is whether seeing the speaker's face benefits comprehension during a 50% time-compressed lecture. Participants listened to either a normal lecture or a 50% compressed lecture. Each participant saw an audio and an audiovisual lecture and was eye-tracked during the audiovisual lecture. A comprehension test revealed that participants in the compressed-lecture group performed better with the face present. Eye fixations revealed that participants in the compressed-lecture group looked less at the eyes and more at the nose compared to those who viewed the normal lecture. This study demonstrates that 50% compression affects eye fixations and that the face benefits the listener, but that this much compression still lessens comprehension. / Includes bibliography. / Thesis (M.A.)--Florida Atlantic University, 2017. / FAU Electronic Theses and Dissertations Collection
107

Enhancing the effectiveness of online video advertising through interactivity

Unknown Date (has links)
This research examines how incorporating interactivity into online video advertisements affects the following key marketing dependent variables: a) involvement with the advertisement, b) ad recall, c) attitude towards the website, d) attitude towards the advertisement, e) attitude towards the brand, and f) purchase intention. Drawing on past interactivity research, three important facets of interactivity are identified: user control, two-way communication and synchronicity. To test an Internet-based 2 (user control: high or low) x 2 (two-way communication: high or low) x 2 (synchronicity: high or low) between-subjects experimental design, 8 different online video platforms were created. The online video experiment was administered to approximately 400 students at a large south-eastern school. Overall, the findings showed no significant effect of synchronicity on the dependent variables. There was, however, a significant interaction effect of user control and two-way communication, which was examined further with a cell-means multiple comparison analysis: user control and two-way communication had a significant interaction effect on ad recall, purchase intention and attitude towards the brand. User control had a significant effect on involvement, and two-way communication had a significant effect on attitude towards the website. There was no effect of user control or two-way communication on attitude towards the ad. / Includes bibliography. / Dissertation (Ph.D.)--Florida Atlantic University, 2014. / FAU Electronic Theses and Dissertations Collection
108

Deuses do rock: a construção do mito no audiovisual

Barbosa, João Victor 26 June 2018 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES / This research analyzes the construction of the myth of the "Gods of Rock" through the communication and cultural processes present in musical documentaries produced between 1968 and 1971 and termed rockumentaries, as they specifically approach musicians of the rock and roll genre in their networks. At a time when rock and roll was becoming the mainstream media product for young people, The Rolling Stones Rock and Roll Circus (1968), Woodstock (1969) and Pink Floyd: Live at Pompeii (1971), the productions that constitute the corpus of this research, captured images of those performances with the differential of bringing extra content in documentary language, such as testimonials and backstage footage. In the following decades, the three rockumentaries were re-edited to add previously unreleased images and new editing proposals. We investigate the creative processes of these audiovisual works and argue that, especially in their reissues, the rockumentaries, in different ways, present mythical narratives and epic characters, performing a translation within a certain time and space to compose the social imaginary of the "Gods of Rock". The research is based on the analysis of the documentaries, with emphasis on the socio-cultural reverberations that interfere in the creative processes of these audiovisual pieces and in the social memory of rock and roll. The theoretical foundation rests on the reflections on culture and creative processes of Edgar Morin and Cecília Salles and on discussions of the imaginary and myths by Michel Maffesoli and Joseph Campbell.
109

Automatic Eye-Gaze Following from 2-D Static Images: Application to Classroom Observation Video Analysis

Aung, Arkar Min 23 April 2018 (has links)
In this work, we develop an end-to-end neural-network-based computer vision system to automatically identify where each person within a 2-D image of a school classroom is looking ("gaze following"), as well as whom she/he is looking at. Automatic gaze following could help facilitate data-mining of the large datasets of classroom observation videos that are collected routinely in schools around the world, in order to understand social interactions between teachers and students. Our network is based on the architecture of Recasens et al. (2015) but is extended to (1) predict not only where, but whom, the person is looking at; and (2) predict whether each person is looking at a target inside or outside the image. Since our focus is on classroom observation videos, we collect a gaze dataset (48,907 gaze annotations over 2,263 classroom images) for students and teachers in classrooms. Results of our experiments indicate that the proposed neural network can estimate the gaze target - either the spatial location or the face of a person - with substantially higher accuracy than several baselines.
110

O uso do vídeo na Sala de Aula Invertida: uma experiência no Colégio Arbos de Santo André

Molina, Verónica Andrea Peralta Meléndez 12 September 2017 (has links)
This project reflects on the use of educational videos as a learning tool in the Flipped Classroom and discusses which applications and strategies can be used with this new methodology so that both the student and the teacher benefit. In our analysis, the use of this methodology allows the construction of meaningful learning with more dynamism and autonomy, improving the use of the activities and topics proposed in the classroom. We also point out how using video in the Flipped Classroom makes possible classes with less extensive, more direct and objective explanations, recorded on video and accessible to students through mobile devices, desktop computers and iPads. Class time is freed for other learning activities, such as individual or pair exercises, discussions, projects or group work, with the teacher available in the classroom to act as a mediator. The research observed the use of videos in Flipped Classrooms at Colégio Arbos, in Santo André, and sought to understand whether and how this tool helped in building meaningful learning. The research methodology used was the case study: the teachers answered a questionnaire, and the data were analyzed using Bardin's content analysis.
