1

Image composition in computer rendering

Ji, Li 28 September 2016
In this research, we study image composition in the context of computer rendering, investigate why composition is difficult with conventional rendering methods, and propose our solutions. Image composition is a process in which an artist improves a visual image to achieve certain aesthetic goals, and it is a central topic in studies of visual arts. Approaching the compositional quality of hand-made artwork with computer rendering is a challenging task, yet there is scarcely any in-depth research on it from an interdisciplinary viewpoint spanning computer graphics and visual arts. Although recent developments in computer rendering have enabled the synthesis of high-quality photographic images, most rendering methods only simulate a photographic process and do not permit straightforward compositional editing in the image space. To improve the visual quality of digitally synthesized images, knowledge of visual composition needs to be incorporated. This objective not only calls for novel algorithmic inventions but also involves research in visual perception, painting, photography and other disciplines of visual arts.
With examples from historical painting and contemporary photography, we inquire why and how a well-composed image elicits an aesthetic visual response from its viewer. Our analysis, based on visual perception, shows that the composition of an image serves as a guideline for the viewing process of that image: it conveys the artist's intention of how the depicted scene should be viewed and directs the viewer's eyes. A key observation is that for a composition to take effect, a viewer must be allowed to look attentively at the image for a period of time. From this analysis, we outline a few rules for composing light and shade in computer rendering, which serve as guidelines for designing rendering methods that create imagery beyond photorealistic depiction. Our analysis elucidates the mechanism and function of image composition in the context of rendering and offers clearly defined directions for algorithmic design. Theories about composition remain mostly in the literature of art criticism and art history, and there are hardly any investigations of this topic in a technical context. Our analysis is therefore an instructive contribution toward enhancing the aesthetic quality of digitally synthesized images.
We present two research projects that develop this analysis into rendering programs. The first is an interpolative material model, in which surface shading is interpolated from input textures according to a brightness value; the resulting rendering depicts surface brightness rather than light energy in the depicted scene. We also present a painting interface built on this material model, with which an artist can directly compose surface brightness with a digital pen. In the second project, an artist provides a sketch of a lighting design with coarse paint strokes on top of a rendering, while details of the light and shade in the depicted scene are automatically filled in by our program. This project is staged in the context of creating the visual effects of foliage shadows under sunshine, and the software tool also includes a novel method for generating coherent animations that resemble the movement of tree foliage in a gentle breeze. These programming projects validate the rendering methodology proposed by our theoretical analysis and demonstrate the feasibility of incorporating compositional techniques in computer rendering.
In addition to the programming projects, this interdisciplinary research also includes practice in visual arts. We present two art projects, in digital photography and projection installation, built on our theoretical analysis of composition and the software tools from the programming projects. Through these art projects, we evaluate our methodology both by making art ourselves and by critiquing the resulting pieces with peer artists. In our view, it is important for rendering researchers, especially those who deal with aesthetic issues, to be involved in art practice; such first-hand experience and communication with artists in a visual arts context are rarely reported in the rendering literature, and they serve as effective guides for the future development of our research on computer rendering. The long-term goal of our research is to find a balance between artistic expression and realistic believability, grounded in interdisciplinary knowledge of composition and perception and implemented as either automated or user-assisted rendering tools. This goal may be termed staged realism: synthesizing images that are recognizable as depictions of realistic scenes while retaining the freedom to compose the rendering results in an artistic manner. / Graduate / 0357 / 0984
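The interpolative material model described in the first project of this abstract can be illustrated with a short sketch. The Python code below is a minimal, hypothetical illustration rather than the thesis implementation: surface colour is interpolated between a "dark" and a "bright" input texture according to a user-painted brightness value, so the rendering depicts composed surface brightness instead of simulated light energy. The texture shapes, the point-sampled lookup, and the function names are assumptions made for the example.

```python
# Minimal sketch (assumed, not the thesis code) of an interpolative material model:
# shading is looked up by blending a "dark" and a "bright" texture with a painted
# brightness value in [0, 1] instead of a physically simulated light intensity.
import numpy as np

def sample(texture, u, v):
    """Point-sampled texture lookup; texture is an (H, W, 3) array, u and v in [0, 1]."""
    h, w, _ = texture.shape
    x = min(int(u * (w - 1)), w - 1)
    y = min(int(v * (h - 1)), h - 1)
    return texture[y, x]

def shade(dark_tex, bright_tex, u, v, brightness):
    """Interpolate surface shading between the two input textures.

    `brightness` is composed directly by the artist (e.g. with a digital pen),
    so the result depicts surface brightness rather than simulated illumination.
    """
    b = np.clip(brightness, 0.0, 1.0)
    return (1.0 - b) * sample(dark_tex, u, v) + b * sample(bright_tex, u, v)

# Hypothetical usage: a mid-grey brightness stroke blends the two textures evenly.
dark = np.zeros((4, 4, 3))    # stand-in "shadow" texture
bright = np.ones((4, 4, 3))   # stand-in "lit" texture
print(shade(dark, bright, 0.5, 0.5, 0.5))   # -> [0.5 0.5 0.5]
```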
2

Nouvelles méthodes pour la recherche sémantique et esthétique d'informations multimédia / Novel methods for semantic and aesthetic multimedia retrieval

Redi, Miriam 29 May 2013
In the internet era, computerized classification and discovery of image properties (objects, scenes, emotions evoked, aesthetic traits) is of crucial importance for the automatic retrieval of the huge amount of visual data surrounding us. But how can computers see the meaning of an image? Multimedia Information Retrieval (MMIR) is a research field that helps build intelligent systems which automatically recognize image content and its characteristics. In general, this is achieved by following a chain process: first, low-level features are extracted and pooled into compact image signatures; then, machine learning techniques are used to build models able to distinguish between different image categories based on such signatures; finally, those models are used to recognize the properties of a new image. Despite the advances in the field, human vision systems still substantially outperform their computer-based counterparts. In this thesis we therefore design a set of novel contributions for each step of the MMIR chain, aiming at improving the overall recognition performance. We explore techniques from a variety of fields that are not traditionally related to multimedia retrieval and embed them into effective MMIR frameworks. For example, we borrow the concept of image saliency from visual perception and use it to build low-level features; we employ the Copula theory of economic statistics for feature aggregation; and we re-use the notion of graded relevance, popular in web page ranking, for visual retrieval frameworks. We explain our solutions in detail and demonstrate their effectiveness for image categorization, video retrieval and aesthetics assessment.
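The chain process described in this abstract (low-level feature extraction, pooling into a compact signature, then a learned model that separates categories) can be sketched briefly. The Python code below is a minimal, hypothetical illustration, not the methods of the thesis: the patch-mean features, mean/std pooling, and nearest-centroid classifier are stand-ins for the saliency-based features, Copula-based aggregation, and learning techniques the work actually proposes.

```python
# Minimal sketch (assumed, not the thesis pipeline) of the generic MMIR chain:
# extract local low-level features, pool them into a compact signature, learn a model.
import numpy as np

def local_features(image, patch=8):
    """Toy low-level features: mean colour of non-overlapping patches of an (H, W, 3) image."""
    h, w, _ = image.shape
    feats = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            feats.append(image[y:y + patch, x:x + patch].mean(axis=(0, 1)))
    return np.array(feats)

def pool_signature(feats):
    """Pool the set of local features into one compact image signature (mean + std)."""
    return np.concatenate([feats.mean(axis=0), feats.std(axis=0)])

class NearestCentroid:
    """Stand-in for the learning stage: one centroid per image category."""
    def fit(self, signatures, labels):
        self.labels = sorted(set(labels))
        self.centroids = {c: np.mean([s for s, l in zip(signatures, labels) if l == c], axis=0)
                          for c in self.labels}
        return self
    def predict(self, signature):
        return min(self.labels, key=lambda c: np.linalg.norm(signature - self.centroids[c]))

# Hypothetical usage with two tiny synthetic "categories" of images.
rng = np.random.default_rng(0)
sunny = [rng.uniform(0.6, 1.0, (32, 32, 3)) for _ in range(5)]   # bright images
night = [rng.uniform(0.0, 0.4, (32, 32, 3)) for _ in range(5)]   # dark images
sigs = [pool_signature(local_features(im)) for im in sunny + night]
labels = ["sunny"] * 5 + ["night"] * 5
model = NearestCentroid().fit(sigs, labels)
print(model.predict(pool_signature(local_features(rng.uniform(0.6, 1.0, (32, 32, 3))))))
```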
3

Morphable guidelines for the human head

Gao, Shelley Y. 25 April 2013
Morphable guidelines are a 3D structure that helps users achieve better face warping on 2D portrait images. Faces can be difficult to warp accurately because the rotation of the head affects the apparent shape of the facial features. I bypass this problem by using the popular Loomis ‘ball and plane’ head-drawing guideline as a proxy structure. The resulting ‘morphable guidelines’ consist of a simple 3D head model that the user can reshape and align to their input image. The vertices of the model then act as deformation points for a 2D image deformation algorithm, so the user can seamlessly transform the facial proportions in the 2D image by transforming the proportions of the morphable guidelines. The system is suitable for both retouching and caricature warping, as it handles both subtle and extreme modifications. It improves on previous face-warping work because the morphable guidelines can be used on a wide range of head orientations and do not require the generation of a full 3D model. / Graduate / 0984 / syugao@gmail.com
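The workflow this abstract describes (a simple 3D guideline head whose vertices, projected into the image plane, act as handles for a 2D warp) can be sketched roughly as follows. This Python snippet is a hypothetical illustration, not the thesis code: the pinhole-projection parameters and the inverse-distance-weighted warp are assumptions standing in for the actual alignment and image deformation algorithm used in the work.

```python
# Minimal sketch (assumed, not the thesis implementation): projected guideline vertices
# serve as source control points; the user's reshaped guidelines give target points,
# and image points are displaced by an inverse-distance-weighted blend of the moves.
import numpy as np

def project(vertices_3d, focal=500.0, center=(256.0, 256.0)):
    """Pinhole projection of guideline vertices (N, 3) into image coordinates (N, 2)."""
    z = vertices_3d[:, 2]
    x = focal * vertices_3d[:, 0] / z + center[0]
    y = focal * vertices_3d[:, 1] / z + center[1]
    return np.stack([x, y], axis=1)

def warp_points(points, src_ctrl, dst_ctrl, eps=1e-6):
    """Displace image points according to the control-point moves (src_ctrl -> dst_ctrl)."""
    warped = []
    for p in points:
        d = np.linalg.norm(src_ctrl - p, axis=1) + eps   # distance to each control point
        w = 1.0 / d**2                                   # closer handles weigh more
        w /= w.sum()
        warped.append(p + (w[:, None] * (dst_ctrl - src_ctrl)).sum(axis=0))
    return np.array(warped)

# Hypothetical usage: nudge one projected guideline vertex and warp two image points.
verts = np.array([[0.0, 0.0, 2.0], [0.1, 0.0, 2.0], [0.0, 0.1, 2.0]])
src = project(verts)
dst = src.copy()
dst[1] += np.array([15.0, 0.0])          # user widens one side of the guideline head
pixels = np.array([[256.0, 256.0], [280.0, 260.0]])
print(warp_points(pixels, src, dst))
```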
