1. Bloom and Doom. Li, Jiale, 24 May 2024.
This thesis harnesses 3D modeling to present a contrasting world through Maya 3D models, Substance Painter textures, and Adobe's audio-visual editing tools. Influenced by my childhood experiences and cultural history, the project contrasts the natural environment with a polluted urban landscape. The paper also discusses future expansions that aim to enhance interactivity and deepen the narrative on human environmental impact. / Master of Fine Arts / This thesis uses 3D modeling to present a contrasting world. Influences include my childhood experiences and my hometown's cultural history. The work prompts reflection on our environmental footprint, with plans to incorporate interactive elements in the future.
2. Models of Visual Appearance for Analyzing and Editing Images and Videos. Sunkavalli, Kalyan, 15 August 2012.
The visual appearance of an image is a complex function of factors such as scene geometry, material reflectances and textures, illumination, and the properties of the camera used to capture the image. Understanding how these factors interact to produce an image is a fundamental problem in computer vision and graphics. This dissertation examines two aspects of this problem: models of visual appearance that allow us to recover scene properties from images and videos, and tools that allow users to manipulate visual appearance in images and videos in intuitive ways. In particular, we look at these problems in three different applications. First, we propose techniques for compositing images that differ significantly in their appearance. Our framework transfers appearance between images by manipulating the different levels of a multi-scale decomposition of the image. This allows users to create realistic composites with minimal interaction in a number of different scenarios. We also discuss techniques for compositing and replacing facial performances in videos. Second, we look at the problem of creating high-quality still images from low-quality video clips. Traditional multi-image enhancement techniques accomplish this by inverting the camera's imaging process. Our system incorporates feature weights into these image models to create results that have better resolution, noise, and blur characteristics, and summarize the activity in the video. Finally, we analyze variations in scene appearance caused by changes in lighting. We develop a model for outdoor scene appearance that allows us to recover radiometric and geometric information about the scene from images. We apply this model to a variety of visual tasks, including color constancy, background subtraction, shadow detection, scene reconstruction, and camera geo-location. We also show that the appearance of a Lambertian scene can be modeled as a combination of distinct three-dimensional illumination subspaces, a result that leads to novel bounds on scene appearance and a robust uncalibrated photometric stereo method. / Engineering and Applied Sciences
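The multi-scale appearance-transfer idea in this abstract can be illustrated with a short, self-contained sketch. The Python code below is my own illustration, not Sunkavalli's implementation: it assumes a Laplacian pyramid as the multi-scale decomposition and rescales each band of a source image so its statistics match those of a target image. The function names, the per-band gain rule, and the choice of OpenCV are assumptions made for this example.

import cv2
import numpy as np

def laplacian_pyramid(img, levels=4):
    # Decompose an image into `levels` band-pass levels plus a low-frequency residual.
    pyramid, current = [], img.astype(np.float32)
    for _ in range(levels):
        down = cv2.pyrDown(current)
        up = cv2.pyrUp(down, dstsize=(current.shape[1], current.shape[0]))
        pyramid.append(current - up)   # detail (band-pass) at this scale
        current = down
    pyramid.append(current)            # low-frequency residual
    return pyramid

def collapse(pyramid):
    # Recombine a Laplacian pyramid back into an image.
    img = pyramid[-1]
    for band in reversed(pyramid[:-1]):
        img = cv2.pyrUp(img, dstsize=(band.shape[1], band.shape[0])) + band
    return img

def transfer_appearance(source_gray, target_gray, levels=4):
    # Match each band of the source to the corresponding band of the target:
    # band-pass levels are rescaled to the target's energy (std); the residual
    # is shifted and scaled to the target's mean and contrast.
    src = laplacian_pyramid(source_gray, levels)
    tgt = laplacian_pyramid(target_gray, levels)
    out = []
    for i, (s, t) in enumerate(zip(src, tgt)):
        if i < levels:
            out.append(s * (np.std(t) + 1e-6) / (np.std(s) + 1e-6))
        else:
            out.append((s - s.mean()) / (s.std() + 1e-6) * t.std() + t.mean())
    return np.clip(collapse(out), 0, 255).astype(np.uint8)

# Example usage on 8-bit grayscale inputs (hypothetical file names):
# result = transfer_appearance(cv2.imread("source.png", cv2.IMREAD_GRAYSCALE),
#                              cv2.imread("target.png", cv2.IMREAD_GRAYSCALE))

Working per band rather than on raw pixels lets low-frequency tone and high-frequency texture be adjusted independently, which is one way composites of very differently lit images can be made to look plausible.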
3. Splat! Fragmented Space in Experimental Cinema. Szabados, Luke, 13 May 2016.
No description available.
4. Video Based Interactive Storytelling. Lima, Edirlei Everson Soares de, 06 March 2015.
The generation of engaging visual representations for interactive storytelling is a key challenge for the evolution and popularization of interactive narratives. Interactive storytelling systems usually adopt computer graphics to represent the virtual story worlds, which facilitates the dynamic generation of visual content. Although animation is a powerful storytelling medium, live-action films still attract more attention from the general public. In addition, despite recent progress in graphics rendering and the wide acceptance of 3D animation in films, the visual quality of video remains far superior to that of real-time computer graphics. In this thesis, we propose a new approach to creating more engaging interactive narratives, called Video-Based Interactive Storytelling, in which virtual characters and environments are replaced by real actors and settings without losing the logical structure of the narrative. This work presents a general model for video-based interactive storytelling systems, covering the authorial aspects of the production phases and the technical aspects of the algorithms responsible for the real-time generation of interactive narratives using video compositing techniques.
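The abstract does not detail its compositing pipeline, but the kind of per-frame video compositing it refers to can be sketched with a generic chroma-key example in Python/OpenCV: a live-action foreground shot against a green screen is matted over a background plate frame by frame. The file names, HSV key range, soft-matte blur, and codec settings are illustrative assumptions, not the system described in the thesis.

import cv2
import numpy as np

fg_cap = cv2.VideoCapture("actor_greenscreen.mp4")   # hypothetical foreground clip
bg_cap = cv2.VideoCapture("background_plate.mp4")    # hypothetical background clip
fps = fg_cap.get(cv2.CAP_PROP_FPS) or 30.0
writer = None

while True:
    ok_fg, fg = fg_cap.read()
    ok_bg, bg = bg_cap.read()
    if not (ok_fg and ok_bg):
        break
    bg = cv2.resize(bg, (fg.shape[1], fg.shape[0]))

    # Key out the green screen in HSV space (the hue/saturation range is an assumption).
    hsv = cv2.cvtColor(fg, cv2.COLOR_BGR2HSV)
    green = cv2.inRange(hsv, (35, 60, 60), (85, 255, 255))    # 255 where the screen is green
    alpha = cv2.GaussianBlur(255 - green, (5, 5), 0) / 255.0  # soft matte in [0, 1]
    alpha = alpha[..., None]

    # Standard "over" compositing: out = alpha * foreground + (1 - alpha) * background.
    frame = (alpha * fg + (1.0 - alpha) * bg).astype(np.uint8)

    if writer is None:
        writer = cv2.VideoWriter("composite.mp4", cv2.VideoWriter_fourcc(*"mp4v"),
                                 fps, (fg.shape[1], fg.shape[0]))
    writer.write(frame)

fg_cap.release()
bg_cap.release()
if writer is not None:
    writer.release()

In an interactive setting, the same matte-and-blend step would be driven by the story engine's real-time choice of which pre-shot clips to combine.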