1. Capture, analysis and synthesis of photorealistic crowds

Flagg, Matthew, 17 November 2010
This thesis explores techniques for synthesizing crowds from imagery. Synthetic photorealistic crowds are desirable for cinematic gaming, special effects, and architectural visualization. While motion capture-based techniques for the animation and control of crowds have been well studied in computer graphics, the resulting control rig sequences require a laborious model-based graphics pipeline to render photorealistic videos of crowds. Over the past ten years, data-driven techniques for rendering imagery of complex phenomena have become a popular alternative to model-based graphics, due in large part to the difficulty of constructing models detailed enough to achieve photorealism. A dynamic crowd of humans is an extremely challenging example of such phenomena. Example-based synthesis methods such as video textures are an appealing alternative, but current techniques cannot handle the new challenges posed by crowds.

This thesis describes how to synthesize video-based crowds by explicitly segmenting pedestrians from input videos of natural crowds and optimally placing them into an output video while satisfying environmental constraints imposed by the scene. There are three key challenges. First, the crowd layout of segmented videos must satisfy constraints imposed by environmental and crowd obstacles. The thesis addresses four types of environmental constraints: (a) ground planes in the scene that are valid for crowd traversal, such as sidewalks; (b) spatial regions of these planes where crowds may enter and exit the scene; (c) static obstacles, such as mailboxes and building walls; and (d) dynamic obstacles, such as individuals and groups of individuals. Second, pedestrians and groups of pedestrians should be segmented from the input video with no artifacts and minimal interaction time, which is difficult in real-world scenes because pedestrians change appearance significantly as they travel through the scene. Third, segmented pedestrian videos may not have enough frames, or the right shape, to compose a path from an artist-defined entrance to an exit; plausible temporal transitions between segmented pedestrians are therefore needed, but they are difficult to identify and synthesize due to complex self-occlusions.

We present a novel algorithm for composing video billboards, represented by crowd tubes, into a crowd while avoiding collisions with static and dynamic obstacles. Each crowd tube is represented in the scene as a temporal sequence of circles placed on the calibrated ground plane. The approach represents crowd tube samples and constraint violations with a conflict graph, and a maximal independent set of this graph yields a dense crowd composition. We present a prototype system for the capture, analysis, synthesis, and control of video-based crowds; several results demonstrate the system's ability to generate crowd videos that exhibit a variety of natural behaviors.
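The conflict-graph formulation above lends itself to a compact illustration. The following is a minimal, hypothetical Python sketch, not the thesis implementation: each crowd tube is reduced to a map from frame index to a circle center on the ground plane, tubes whose circles overlap on any shared frame are joined by a conflict edge, and a greedy pass extracts a maximal independent set. The tube encoding, the fixed radius, and the fewest-conflicts-first ordering are all illustrative assumptions.

```python
# Hypothetical sketch of composing crowd tubes via a conflict graph.
# Not the thesis code; tube representation and overlap test are simplified.
import itertools

def tubes_conflict(tube_a, tube_b, radius=1.0):
    """Two tubes conflict if their ground-plane circles overlap on any shared frame."""
    shared = tube_a.keys() & tube_b.keys()   # frames where both tubes are present
    for frame in shared:
        (ax, ay), (bx, by) = tube_a[frame], tube_b[frame]
        if (ax - bx) ** 2 + (ay - by) ** 2 < (2 * radius) ** 2:
            return True
    return False

def compose_crowd(tubes, radius=1.0):
    """Greedy maximal independent set over the conflict graph of candidate tubes."""
    conflicts = {i: set() for i in range(len(tubes))}
    for i, j in itertools.combinations(range(len(tubes)), 2):
        if tubes_conflict(tubes[i], tubes[j], radius):
            conflicts[i].add(j)
            conflicts[j].add(i)
    chosen = []
    # Prefer tubes with fewer conflicts so the composition stays dense.
    for i in sorted(conflicts, key=lambda k: len(conflicts[k])):
        if conflicts[i].isdisjoint(chosen):
            chosen.append(i)
    return chosen

# Each tube maps frame index -> (x, y) circle center on the calibrated ground plane.
tube_0 = {0: (0.0, 0.0), 1: (0.5, 0.0)}
tube_1 = {0: (0.2, 0.1), 1: (0.7, 0.1)}   # overlaps tube_0
tube_2 = {0: (5.0, 5.0), 1: (5.5, 5.0)}   # far from both
print(compose_crowd([tube_0, tube_1, tube_2]))  # -> [2, 0]; tube 1 conflicts with tube 0
```

Maximum independent set is NP-hard in general, so a greedy maximal independent set is a common stand-in; ordering candidates by fewest conflicts first tends to keep the resulting composition dense.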
2. Example-based Rendering of Textural Phenomena

Kwatra, Vivek, 19 July 2005
This thesis explores synthesis by example as a paradigm for rendering real-world phenomena, in particular phenomena that can be visually described as texture. For synthesis, we exploit the self-repeating nature of the visual elements constituting these texture exemplars, and present techniques for unconstrained as well as constrained/controllable synthesis of both image and video textures.

For unconstrained synthesis, we present two robust techniques that can perform spatio-temporal extension, editing, and merging of image as well as video textures. In the first, large patches of input texture are automatically aligned and seamlessly stitched together to generate realistic-looking images and videos. The second is based on iterative optimization of a global energy function that measures the quality of the synthesized texture with respect to the given input exemplar.

We also present a technique for controllable texture synthesis. In particular, it allows the generation of motion-controlled texture animations that follow a specified flow field. Animations synthesized in this fashion maintain structural properties of the input texture, such as local shape, size, and orientation, even as they move according to the specified flow. We cast this problem in an optimization framework that tries to simultaneously satisfy two (potentially competing) objectives: similarity to the input texture and consistency with the flow field. This optimization is a simple extension of the approach used for unconstrained texture synthesis.

A general framework for example-based synthesis and rendering is also presented. This framework provides a design space for constructing example-based rendering algorithms whose goal is to use texture exemplars to render animations in which certain behavioral characteristics are controlled. Our motion-controlled texture synthesis technique is an instantiation of this framework in which the controlled characteristic is motion, represented as a flow field.
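To make the iterative energy-minimization idea concrete, here is a minimal, hypothetical sketch in one dimension, under simplifying assumptions and not the thesis implementation: the energy is the sum of squared distances between each overlapping output neighborhood and its nearest neighborhood in the exemplar, minimized by alternating nearest-neighbor matching with a least-squares (averaging) update of the output. Window size, step, and iteration count are arbitrary choices here; the actual technique operates on 2-D image and 3-D video neighborhoods.

```python
# Hypothetical 1-D toy of iterative texture optimization; not the thesis code.
# Alternates (1) matching each output window to its nearest exemplar window
# and (2) re-solving the output as the average of the matched windows.
import numpy as np

def texture_optimize(exemplar, out_len, win=8, iters=10, seed=0):
    rng = np.random.default_rng(seed)
    # All candidate neighborhoods from the input exemplar.
    cands = np.stack([exemplar[i:i + win]
                      for i in range(len(exemplar) - win + 1)])
    out = rng.choice(exemplar, size=out_len)          # random initialization
    starts = range(0, out_len - win + 1, win // 2)    # overlapping windows
    for _ in range(iters):
        acc = np.zeros(out_len)
        cnt = np.zeros(out_len)
        for s in starts:
            patch = out[s:s + win]
            # Nearest exemplar neighborhood under squared L2 distance.
            best = cands[np.argmin(((cands - patch) ** 2).sum(axis=1))]
            acc[s:s + win] += best
            cnt[s:s + win] += 1
        out = acc / np.maximum(cnt, 1)                # least-squares update
    return out

exemplar = np.sin(np.linspace(0, 6 * np.pi, 60))      # toy repeating "texture"
print(texture_optimize(exemplar, out_len=100)[:10])
```

In this alternating scheme each step minimizes the energy with one set of variables held fixed (the matches, then the output values), so the energy cannot increase from iteration to iteration.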
