  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Reaching a creative common ground : Enhancing the creative collaboration between a film editor and its respective client

Kedfors, Fredrik January 2016 (has links)
The aim of this thesis is to identify current problems in the process of finding common ground between a creative producer and their respective client; furthermore, it aims to propose a solution to this problem in the context of collaborative video editing. The paper starts by exploring research related to the topic. It then establishes, through interviews with experts in the fields of video editing and graphic design, the problems that currently exist concerning communication within their line of work. As a solution to these problems, a collaborative software tool is proposed with the idea of bridging the understanding between the video editor and their client. The paper ends with conclusions about the current state of the topic and proposes a way forward for both practitioners and researchers.
2

Efficient image/video restyling and collage on GPU. / CUHK electronic theses & dissertations collection

January 2013 (has links)
創意媒體研究中，圖像/視頻再藝術作為有表現力的用戶定制外觀的創作手段受到了很大關注。交互設計中，特別是在圖像空間只有單張圖像或視頻輸入的情況下，運用計算機輔助設計虛擬地再渲染關注物體的風格化外觀來實現紋理替換是很強大的。現行的紋理替換往往通過操作圖像空間中像素的間距來處理紋理扭曲，原始圖像中潛在的紋理扭曲總是被破壞，因為現行的方法要麼存在由於手動網格拉伸導致的不恰當扭曲，要麼就由於紋理合成而導致不可避免的紋理開裂。圖像/視頻拼貼畫是被發明用以支持在顯示畫布上並行展示多個物體和活動。隨著數字視頻俘獲裝置的快速發展，相關的議題就是快速檢閱和摘要大量的視覺媒體數據集來找出關注的資料。這會是一項繁瑣的任務來審查長且乏味的監控視頻並快速把握重要信息。以關鍵信息和縮短視頻形式為交流媒介，視頻摘要是增強視覺數據集瀏覽效率和簡易理解的手段。 / 本文首先將圖像/視頻再藝術聚焦在高效紋理替換和風格化上。我們展示了一種交互紋理替換方法，能夠在不知潛在幾何結構和光照環境的情況下保持相似的紋理扭曲。我們運用SIFT 棱角特徵來自然地發現潛在紋理扭曲，並應用梯度深度圖復原和皺褶重要性優化來完成扭曲過程。我們運用GPU-CUDA 的並行性，通過實時雙邊網格和特徵導向的扭曲優化來促成交互紋理替換。我們運用基於塊的實時高精度TV-L¹光流，通過基於關鍵幀的紋理傳遞來完成視頻紋理替換。我們進一步研究了基於GPU 的風格化方法，並運用梯度優化保持原始圖像的精細結構。我們提出了一種能夠自然建模原始圖像精細結構的圖像結構圖，並運用基於梯度的切線生成和切線導向的形態學來構建這個結構圖。我們在GPU-CUDA 上通過並行雙邊網格和結構保持促成最終風格化。實驗中，我們的方法實時連續地展現了高質量的圖像/視頻的抽象再藝術。 / 當前，視頻拼貼畫大多創作靜態的基於關鍵幀的拼貼圖片，該結果只包含動態視頻有限的信息，會很大程度影響視覺數據集的理解。爲了便於瀏覽，我們展示了一種在顯示畫布上有效並行摘要動態活動的動態視頻拼貼畫。我們提出應用活動長方體來重組織及提取事件，執行視頻防抖來生成穩定的活動長方體，實行時空域優化來優化活動長方體在三維拼貼空間的位置。我們通過在GPU 上的事件相似性和移動關係優化來完成高效的動態拼貼畫，允許多視頻輸入。擁有再序核函數CUDA 處理，我們的視頻拼貼畫爲便捷瀏覽長視頻激活了動態摘要，節省大量存儲傳輸空間。實驗和調查表明我們的動態拼貼畫快捷有效，能被廣泛應用于視頻摘要。將來，我們會擴展交互紋理替換來支持更複雜的具大運動和遮蔽場景的一般視頻，避免紋理跳動。我們會採用最新視頻技術靈感使視頻紋理替換更加穩定。我們未來關於視頻拼貼畫的工作包括審查監控業中動態拼貼畫應用，並研究含有大量相機運動和不同種視頻過度的移動相機和一般視頻。 / Image/video restyling as an expressive way of producing user-customized appearances has received much attention in creative media research. In interactive design, it is powerful to virtually re-render the stylized presentation of objects of interest using computer-aided design tools for retexturing, especially in the image space with a single image or video as input. Current retexturing methods mostly process texture distortion by manipulating inter-pixel distances in image space; the underlying texture distortion is often destroyed, either through improper distortion caused by manual mesh stretching or through unavoidable texture splitting caused by texture synthesis. Image/video collage techniques were invented to allow the parallel presentation of multiple objects and events on the display canvas.
With the rapid development of digital video capture devices, a related challenge is to quickly review and summarize such large visual media datasets to find the video material of interest. Investigating long, tedious surveillance videos to grasp the essential information quickly is a laborious task. By applying key information and shortened video forms as vehicles for communication, video abstraction and summarization are the means to enhance the browsing efficiency and ease of understanding of visual media datasets. / In this thesis, we first focus our image/video restyling work on efficient retexturing and stylization. We present an interactive retexturing method that preserves similar texture distortion without knowing the underlying geometry and lighting environment. We utilize SIFT corner features to naturally discover the underlying texture distortion. Gradient depth recovery and wrinkle stress optimization are applied to accomplish the distortion process. We facilitate interactive retexturing via real-time bilateral grids and feature-guided distortion optimization using GPU-CUDA parallelism. Video retexturing is achieved through a keyframe-based texture transferring strategy using accurate TV-L¹ optical flow with patch-based motion tracking in real time. Further, we work on GPU-based abstract stylization that preserves the fine structure of the original images using gradient optimization. We propose an image structure map to naturally distill the fine structure of the original images. Gradient-based tangent generation and tangent-guided morphology are applied to build the structure map. We facilitate the final stylization via parallel bilateral grids and structure-aware stylizing in real time on GPU-CUDA. In the experiments, our proposed methods consistently demonstrate high-quality image/video abstract restyling in real time.
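The keyframe-based texture transfer above ultimately reduces to warping a texture by a dense flow field. A minimal pure-Python sketch of that backward-warping step follows; in the thesis the flow would come from a TV-L¹ solver and run on the GPU, whereas here the flow is simply an input and the function name is our own:

```python
def warp_by_flow(img, flow):
    """Backward-warp a grayscale image (H x W list of floats) by a dense
    flow field: out[y][x] samples img at (x + u, y + v), where (u, v) is
    flow[y][x], using bilinear interpolation with border clamping."""
    H, W = len(img), len(img[0])
    out = [[0.0] * W for _ in range(H)]
    for y in range(H):
        for x in range(W):
            u, v = flow[y][x]
            sx, sy = x + u, y + v
            # Clamp the top-left sample so the 2x2 neighborhood stays in bounds.
            x0 = max(0, min(W - 2, int(sx)))
            y0 = max(0, min(H - 2, int(sy)))
            fx = max(0.0, min(1.0, sx - x0))
            fy = max(0.0, min(1.0, sy - y0))
            out[y][x] = ((1 - fx) * (1 - fy) * img[y0][x0]
                         + fx * (1 - fy) * img[y0][x0 + 1]
                         + (1 - fx) * fy * img[y0 + 1][x0]
                         + fx * fy * img[y0 + 1][x0 + 1])
    return out
```

Because every output pixel is computed independently, this loop maps directly onto a per-pixel CUDA kernel, which is what makes the real-time GPU variant natural.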
/ Currently, in video abstraction, video collages are mostly produced as static keyframe-based collage pictures, which contain limited information about the dynamic videos and greatly influence the understanding of visual media datasets. We present a dynamic video collage that effectively summarizes condensed dynamic activities in parallel on the canvas for easy browsing. We propose to utilize activity cuboids to reorganize and extract dynamic objects for further collaging, and video stabilization is performed to generate stabilized activity cuboids. Spatial-temporal optimization is carried out to optimize the positions of activity cuboids in the 3D collage space. We facilitate the efficient dynamic collage via event similarity and moving-relationship optimization on GPU, allowing multi-video inputs. Our video collage approach with kernel-reordering CUDA processing enables dynamic summaries for easy browsing of long videos, while saving huge memory space for storing and transmitting them. The experiments and user study have shown the efficiency and usefulness of our dynamic video collage, which can be widely applied for video briefing and summary applications. In the future, we will further extend the interactive retexturing to more complicated general video applications with large motion and occluded scenes while avoiding texture flickering. We will also work on new approaches to make video retexturing more stable, drawing inspiration from the latest video processing techniques. Our future work on video collage includes investigating applications of dynamic collage in the surveillance industry, and working on moving-camera and general videos, which may contain large amounts of camera motion and different types of video shot transitions. / Detailed summary in vernacular field only. / Li, Ping. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2013. / Includes bibliographical references (leaves 109-121).
3

Multi-frame information fusion for image and video enhancement

Gunturk, Bahadir K. 01 December 2003 (has links)
No description available.
4

Multi-dimensional digital signal integration with applications in image, video and light field processing

Sevcenco, Ioana Speranta 16 August 2018 (has links)
Multi-dimensional digital signals have become an intertwined part of day to day life, from digital images and videos used to capture and share life experiences, to more powerful scene representations such as light field images, which open the gate to previously challenging tasks, such as post capture refocusing or eliminating visible occlusions from a scene. This dissertation delves into the world of multi-dimensional signal processing and introduces a tool of particular use for gradient based solutions of well-known signal processing problems. Specifically, a technique to reconstruct a signal from a given gradient data set is developed in the case of two dimensional (2-D), three dimensional (3-D) and four dimensional (4-D) digital signals. The reconstruction technique is multiresolution in nature, and begins by using the given gradient to generate a multi-dimensional Haar wavelet decomposition of the signals of interest, and then reconstructs the signal by Haar wavelet synthesis, performed on successive resolution levels. The challenges in developing this technique are non-trivial and are brought about by the applications at hand. For example, in video content replacement, the gradient data from which a video sequence needs to be reconstructed is a combination of gradient values that belong to different video sequences. In most cases, such operations disrupt the conservative nature of the gradient data set. The effects of the non-conservative nature of the newly generated gradient data set are attenuated by using an iterative Poisson solver at each resolution level during the reconstruction. A second and more important challenge is brought about by the increase in signal dimensionality. In a previous approach, an intermediate extended signal with symmetric region of support is obtained, and the signal of interest is extracted from it. This approach is reasonable in 2-D, but becomes less appealing as the signal dimensionality increases. 
To avoid generating data that is then discarded, a new approach is proposed in which signal extension is no longer performed. Instead, different procedures are suggested to generate a non-symmetric Haar wavelet decomposition of the signals of interest. In the case of 2-D and 3-D signals, ways to obtain this decomposition exactly from the given gradient data and the average value of the signal are proposed. In addition, ways to approximate a subset of decomposition coefficients are introduced, and the visual consequences of such approximations are studied in the special case of 2-D digital images. Several ways to approximate the same subset of decomposition coefficients are developed in the special case of 4-D light field images. Experiments run on various 2-D, 3-D and 4-D test signals are included to provide insight into the performance of the reconstruction technique. The value of the multi-dimensional reconstruction technique is then demonstrated by including it in a number of signal processing applications. First, an efficient algorithm is developed that combines gradient information from a set of 2-D images with different regions in focus or different exposure times, in order to generate an all-in-focus image or reveal details that were lost due to improper exposure settings. Moving on to 3-D signal processing applications, two video editing problems are studied and gradient-based solutions are presented. In the first, the objective is to seamlessly place content from one video sequence in another; in the second, to combine elements from two video sequences and generate a transparency effect. Lastly, a gradient-based technique for editing 4-D scene representations (light fields) is presented, as well as a technique to combine information from two light fields with the purpose of generating a light field with more details of the imaged scene.
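For concreteness, the Haar analysis/synthesis machinery this dissertation builds on can be sketched at one resolution level in the 2-D case. The following is a generic textbook sketch (an unnormalized 2-D Haar step for even-sized signals), not the author's code; the multiresolution reconstruction applies the synthesis step repeatedly across resolution levels:

```python
def haar2d_analyze(img):
    """One level of an (unnormalized) 2-D Haar decomposition.

    Each 2x2 block (p q / r s) maps to an average and three details:
    a = (p+q+r+s)/2, h = (p-q+r-s)/2, v = (p+q-r-s)/2, d = (p-q-r+s)/2.
    """
    H, W = len(img), len(img[0])
    a = [[0.0] * (W // 2) for _ in range(H // 2)]
    h = [[0.0] * (W // 2) for _ in range(H // 2)]
    v = [[0.0] * (W // 2) for _ in range(H // 2)]
    d = [[0.0] * (W // 2) for _ in range(H // 2)]
    for i in range(H // 2):
        for j in range(W // 2):
            p, q = img[2 * i][2 * j], img[2 * i][2 * j + 1]
            r, s = img[2 * i + 1][2 * j], img[2 * i + 1][2 * j + 1]
            a[i][j] = (p + q + r + s) / 2
            h[i][j] = (p - q + r - s) / 2
            v[i][j] = (p + q - r - s) / 2
            d[i][j] = (p - q - r + s) / 2
    return a, h, v, d

def haar2d_synthesize(a, h, v, d):
    """Invert one level; with this scaling the transform is its own inverse."""
    H, W = 2 * len(a), 2 * len(a[0])
    img = [[0.0] * W for _ in range(H)]
    for i in range(H // 2):
        for j in range(W // 2):
            aa, hh, vv, dd = a[i][j], h[i][j], v[i][j], d[i][j]
            img[2 * i][2 * j] = (aa + hh + vv + dd) / 2
            img[2 * i][2 * j + 1] = (aa - hh + vv - dd) / 2
            img[2 * i + 1][2 * j] = (aa + hh - vv - dd) / 2
            img[2 * i + 1][2 * j + 1] = (aa - hh - vv + dd) / 2
    return img
```

The dissertation's contribution is to obtain the detail coefficients directly from gradient data rather than from the signal itself; the synthesis loop is unchanged either way.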
All these applications show that the developed technique is a reliable tool for gradient domain based solutions of signal processing problems. / Graduate
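The core premise of the work above, that a signal is recoverable from its gradient plus its average value, can be illustrated with a direct path-integration sketch for the 2-D case. This assumes a conservative (curl-free) gradient field; as the dissertation notes, edited gradients generally are not conservative, which is why the full method needs an iterative Poisson solver at each resolution level. The function name and layout here are our own:

```python
def integrate_gradient_2d(gx, gy, mean):
    """Reconstruct a 2-D signal from forward-difference gradients
    gx[i][j] = I[i][j+1] - I[i][j]  (H rows, W-1 columns) and
    gy[i][j] = I[i+1][j] - I[i][j]  (H-1 rows, W columns),
    plus the signal's average value. Exact for conservative gradients."""
    H, W = len(gy) + 1, len(gx[0]) + 1
    I = [[0.0] * W for _ in range(H)]
    for j in range(1, W):            # integrate along the first row
        I[0][j] = I[0][j - 1] + gx[0][j - 1]
    for i in range(1, H):            # then integrate down each column
        for j in range(W):
            I[i][j] = I[i - 1][j] + gy[i - 1][j]
    # Integration fixes the signal only up to a constant; the average pins it.
    shift = mean - sum(sum(row) for row in I) / (H * W)
    return [[v + shift for v in row] for row in I]
```

For a non-conservative field the two integration paths to a pixel disagree, and this direct sweep silently propagates the inconsistency, which is exactly the failure mode the Poisson-corrected multiresolution scheme is designed to attenuate.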
5

Designing an Interactive Video Editing Tool for Teachers

Bonnevier, Jesper January 2018 (has links)
This study aims to find out how an online interactive video editing tool for teachers should be designed. To answer this, students studying to become teachers and experienced teachers were interviewed and took part in observations and usability testing of a prototype. In total there were 27 unique data-gathering situations with 11 unique participants. The five participating teacher students were all studying at Linnaeus University in Växjö. The six experienced teachers have been teaching for many years and currently lecture teachers on new technology that can be used in the classroom. The results from the interviews, observations, and literature search contributed to a list of requirements, which in turn became a prototype. What has been discovered is that teachers need a tool that is easy to use, with interactions and functions such as adding clickable annotations to clips and creating playlists, which help teachers plan ahead and save time during lectures.
6

Flexible Storylines

Romashka, Ivan Dmitrievich 27 May 2011 (has links) (PDF)
A long-standing goal of computer-based entertainment is the creation of a story where a user is in control of portions of the storyline. These non-linear stories give a user an opportunity to adapt the story to his or her interests, schedule, and needs. The Internet has made non-linear video a reality. Different approaches have been taken to create and play non-linear video stories, but they suffer from a lack of simplicity, smoothness, and a television-like experience in story creation and presentation. Flexible Storylines provide a way to easily create and present non-linear video stories. These stories are created using a timeline-based editor that mimics the way video stories are composed by filmmakers. Viewing a flexible story is very similar to viewing a normal video, with the added choice to see more or less of the current topic. This provides a highly variable experience with a simple, smooth, and non-intrusive form of user interaction. We also provide a mechanism that lets a story flow smoothly despite the introduction of user interaction.
7

The Effect Of Cognitive Styles Upon The Completion Of A Visually-oriented Component Of Online Instruction

Lee, Jia-Ling 01 January 2006 (has links)
This study was designed to examine whether a person's pre-existing cognitive style influenced learning achievement in a visually-oriented task in an online learning environment in higher education. Field dependence-independence was used to identify individuals' cognitive styles. A true experimental study was conducted in the fall 2005 term at the University of Central Florida. The researcher followed Dwyer and Moore's research (1991, 2002) and divided learners into three groups (field dependent [FD], field neutral [FN], and field independent [FI] students). Eighty-three preservice teachers participated in this study; the data from 52 of the FD and FI participants were analyzed to answer the research questions. The findings in this study supported those in the literature review: students of both FD and FI cognitive styles performed equally well in online learning environments. In addition, for introductory-level instruction on visually-oriented tasks in an online learning environment, instruction that emphasized an FD approach benefited both FI and FD students in their knowledge-based learning achievement. In this approach, extra cues and the sequencing of content might have been the reasons that students had higher scores on their knowledge-based learning achievement and satisfaction levels. The findings of this study also indicated that students could demonstrate higher performance-based learning achievement if they had more experience with the subject matter, and higher knowledge-based learning achievement if they felt the instructions were easy to follow and the workload of the module was manageable. Based on the findings and conclusions, the recommendations are: (1) a larger sample size is needed to generalize the findings of the study; (2) student-to-student and teacher-to-student interactions might have affected students' learning achievement in this study, so future studies should consider those interactions as factors and examine their effect on students' learning achievement.
8

Controllable Visual Synthesis

AlBahar, Badour A. Sh A. 08 June 2023 (has links)
Computer graphics has become an integral part of various industries such as entertainment (i.e., films and content creation), fashion (i.e., virtual try-on), and video games. Computer graphics has evolved tremendously over the past years. It has shown remarkable image generation improvement from low-quality, pixelated images with limited details to highly realistic images with fine details that can often be mistaken for real images. However, the traditional pipeline of rendering an image in computer graphics is complex and time-consuming. The whole process of creating the geometry, material, and textures requires not only time but also significant expertise. In this work, we aim to replace this complex traditional computer graphics pipeline with a simple machine learning model. This machine learning model can synthesize realistic images without requiring expertise or significant time and effort. Specifically, we address the problem of controllable image synthesis. We propose several approaches that allow the user to synthesize realistic content and manipulate images to achieve their desired goals with ease and flexibility. / Doctor of Philosophy / Computer graphics has become an integral part of various industries such as entertainment (i.e., films and content creation), fashion (i.e., virtual try-on), and video games. Computer graphics has evolved tremendously over the past years. It has shown remarkable image generation improvement from low-quality, pixelated images with limited details to highly realistic images with fine details that can often be mistaken for real images. However, the traditional process of generating an image in computer graphics is complex and time-consuming. You need to set up a camera and light, and create objects with all sorts of details. This requires not only time but also significant expertise. In this work, we aim to replace this complex traditional computer graphics pipeline with a simple machine learning model.
This machine learning model can generate realistic images without requiring expertise or significant time and effort. Specifically, we address the problem of controllable image synthesis. We propose several approaches that allow the user to synthesize realistic content and manipulate images to achieve their desired goals with ease and flexibility.
9

Visual Media, Dance, and Academia: Comparing Video Production with the Choreographic Process and Dance Improvisation

Schrock, Madeline Rose 16 June 2011 (has links)
No description available.
10

[en] VERSION CONTROL SYSTEM FOR COOPERATIVE MPEG-2 VIDEO EDITING / [pt] SISTEMA DE CONTROLE DE VERSÕES PARA EDIÇÃO COOPERATIVA DE VÍDEO MPEG-2

RODRIGO BORGES DA SILVA SANTOS 02 October 2007 (has links)
[pt] Os avanços das tecnologias de captura, armazenamento e compressão de vídeo digital estão motivando o desenvolvimento e a disponibilização de novos serviços e sistemas para manipulação e gerenciamento de acervos de vídeo. Um exemplo disso são os sistemas de gerenciamento, edição e compartilhamento de versões utilizados pelos produtores de conteúdo audiovisual. Entretanto, tais funcionalidades são requisitos não encontrados em um único sistema. Este trabalho descreve um sistema que possibilita a edição cooperativa de dados audiovisuais no formato MPEG-2 permitindo o controle de versão, a visualização e manipulação do seu conteúdo por partes (segmentos). Esse sistema colaborativo tem ainda como vantagens a divisão de tarefas, a fusão das contribuições e a extração de informações da autoria de cada versão. / [en] Technological advances in areas such as the capture, storage, and compression of digital video are stimulating the development of new services and systems for the manipulation and management of huge amounts of video data. Examples of this are the version management, editing, and sharing systems used by producers of audiovisual content. However, such functionalities are not found together in a single system. This work describes a system that enables the cooperative editing of audiovisual data in MPEG-2 format, allowing version control and the visualization and manipulation of its content by segments. This collaborative system also offers advantages such as the division of tasks between editors, the merging of different versions, and the extraction of authorship information from each version.
