
Estimation of Global Illumination using Cycle-Consistent Adversarial Networks

The field of computer graphics has made significant progress over the years, transforming from simple, pixelated images to highly realistic visuals used across various industries including entertainment, fashion, and video gaming. However, the traditional process of rendering images remains complex and time-consuming, requiring a deep understanding of geometry, materials, and textures. This thesis introduces a simpler approach through a machine learning model, specifically using Cycle-Consistent Adversarial Networks (CycleGAN), to generate realistic images and estimate global illumination in real time, significantly reducing the need for extensive expertise and time investment. Our experiments on the Blender and Portal datasets demonstrate the model's ability to efficiently generate high-quality, globally illuminated scenes, while a comparative study with the Pix2Pix model highlights our approach's strengths in preserving fine visual details. Despite these advancements, we acknowledge the limitations posed by hardware constraints and dataset diversity, pointing toward areas for future improvement and exploration. This work aims to simplify the complex world of computer graphics, making it more accessible and user-friendly, while maintaining high standards of visual realism.

Master of Science

Creating realistic images on a computer is a crucial part of making video games and movies more immersive and lifelike. Traditionally, this has been a complex and time-consuming task, requiring a deep understanding of how light interacts with objects to create shadows and highlights. This study introduces a simpler and quicker method using a type of smart computer program that learns from examples. This program, known as Cycle-Consistent Adversarial Networks (CycleGAN), is designed to understand the complex play of light in virtual scenes and recreate it in a way that makes the image look real.
In testing this new method on different types of images, from simpler scenes to more complex ones, the results were impressive. The program was not only able to significantly cut down the time needed to render an image, but it also maintained the fine details that bring an image to life. While there were challenges, such as working with limited computer power and needing a wider variety of images for the program to learn from, the study shows great promise. It represents a big step forward in making the creation of high-quality, realistic computer graphics more accessible and achievable for a wider range of applications.
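The thesis abstract does not include code, but the core idea of a CycleGAN — two generators trained so that translating an image to the other domain and back reproduces the original — can be illustrated with a minimal sketch. Below, the deep convolutional generators are replaced by hypothetical toy linear maps: `G` stands in for the rasterized-to-globally-illuminated generator and `F` for its inverse direction; both names and the L1 formulation are assumptions for illustration, not the thesis's actual architecture.

```python
import numpy as np

# Toy stand-ins for the two CycleGAN generators. In the real model these
# are deep networks; here G and F are exact linear inverses so the
# cycle-consistency loss comes out near zero.
def G(x):
    # hypothetical generator: rasterized image -> illuminated image
    return 2.0 * x + 1.0

def F(y):
    # hypothetical generator: illuminated image -> rasterized image
    return (y - 1.0) / 2.0

def cycle_consistency_loss(x, y):
    """L1 cycle-consistency: F(G(x)) should recover x, G(F(y)) should recover y."""
    forward = np.mean(np.abs(F(G(x)) - x))   # x -> illuminated -> back
    backward = np.mean(np.abs(G(F(y)) - y))  # y -> rasterized -> back
    return forward + backward

x = np.linspace(0.0, 1.0, 8)  # stand-in for a rasterized input image
y = G(x)                      # stand-in for its illuminated counterpart
print(cycle_consistency_loss(x, y))  # near zero, since F inverts G
```

In training, this loss is added to the usual adversarial losses; it is what lets the model learn the mapping between rasterized and globally illuminated scenes without needing pixel-aligned image pairs.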

Identifer: oai:union.ndltd.org:VTETD/oai:vtechworks.lib.vt.edu:10919/117251
Date: 20 December 2023
Creators: Oh, Junho
Contributors: Electrical and Computer Engineering, Abbott, Amos L., Plassmann, Paul E., Wang, Yue J.
Publisher: Virginia Tech
Source Sets: Virginia Tech Theses and Dissertations
Language: English
Detected Language: English
Type: Thesis
Format: ETD, application/pdf
Rights: In Copyright, http://rightsstatements.org/vocab/InC/1.0/
