1

Accelerating ray tracing with directional subdivision and parallel processing

Simiakakis, George January 1995 (has links)
No description available.
2

Radiosity and its application in dynamic and non-diffuse environments

Sun, Jizhou January 1994 (has links)
No description available.
3

Scene decompositions for accelerated ray tracing

Spackman, John Neil January 1989 (has links)
No description available.
4

Stylistic and Spatial Disentanglement in GANs

Alharbi, Yazeed 17 August 2021 (has links)
This dissertation tackles the problem of entanglement in Generative Adversarial Networks (GANs). The key insight is that disentanglement in GANs can be improved by differentiating between the content and the operations performed on that content. For example, the identity of a generated face can be thought of as the content, while the lighting conditions can be thought of as the operations. We examine disentanglement in several kinds of deep networks: image-to-image translation GANs, unconditional GANs, and sketch extraction networks. The task in image-to-image translation GANs is to translate images from one domain to another, so disentanglement is clearly necessary: the network must maintain the core content of the image while changing the stylistic appearance to match the target domain. We propose latent filter scaling to achieve multimodality and disentanglement. Whereas previous methods require complicated network architectures to enforce disentanglement, our approach maintains the traditional GAN loss with only a minor change in architecture. Unlike image-to-image GANs, unconditional GANs are generally entangled: the only way to change the generated output is to change the input noise code, so it is very difficult to resample only some parts of a generated image. We propose structured noise injection to achieve disentanglement in unconditional GANs, using two input codes: one to specify spatially-variable details and one to specify spatially-invariable details. In addition to allowing content and style to be changed independently, this lets users change the content only at certain locations. Combining our previous findings, we improve the performance of sketch-to-image translation networks. A crucial problem is how to correct input sketches before feeding them to the generator. By extracting sketches in an unsupervised way from only the spatially-variable branch of the image, we are able to produce sketches that show the content in many different styles. These sketches can serve as a dataset to train a sketch-to-image translation GAN.
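As a rough illustration of the two-code input described above (not the thesis's actual architecture; all module names and dimensions below are assumptions), a structured noise input might be sketched in PyTorch as:

```python
import torch
import torch.nn as nn

class StructuredNoiseInput(nn.Module):
    """Hypothetical two-code input: a grid of local codes plus one global code."""

    def __init__(self, local_dim=8, global_dim=64, grid=4):
        super().__init__()
        self.local_dim, self.global_dim, self.grid = local_dim, global_dim, grid

    def forward(self, batch):
        # Local codes: one vector per spatial cell, controlling spatially-variable
        # details; resampling a single cell changes content only at that location.
        local = torch.randn(batch, self.local_dim, self.grid, self.grid)
        # Global code: one vector per image, broadcast to every cell, controlling
        # spatially-invariable details such as overall style.
        glob = torch.randn(batch, self.global_dim, 1, 1).expand(-1, -1, self.grid, self.grid)
        # The concatenation would feed the first convolutional block of the generator.
        return torch.cat([local, glob], dim=1)

codes = StructuredNoiseInput()(2)   # shape: (2, 72, 4, 4)
```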
5

Frozen-State Hierarchical Annealing

Campaigne, Wesley January 2012 (has links)
There is significant interest in the synthesis of discrete-state random fields, particularly those possessing structure over a wide range of scales. However, given a model on some finest, pixellated scale, it is computationally very difficult to synthesize both large- and small-scale structures, motivating research into hierarchical methods. This thesis proposes a frozen-state approach to hierarchical modelling, in which simulated annealing is performed at each scale, constrained by the state estimates at the parent scale. The approach leads to significant advantages in both modelling flexibility and computational complexity. In particular, a complex structure can be realized with very simple, local, scale-dependent models, and by constraining the domain to be annealed at finer scales to only the uncertain portions of coarser scales, the approach leads to huge reductions in computation. Results are shown for synthesis problems in porous media.
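A toy sketch of the frozen-state idea, using a simple Ising-style model as a stand-in for the thesis's scale-dependent models (every modelling choice below is an assumption): the coarse scale is annealed freely, and at the finer scale only sites whose coarse parents remain uncertain are annealed, while the rest stay frozen.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_energy(f, i, j):
    """Ising-like energy at site (i, j): number of disagreeing 4-neighbours (toroidal)."""
    h, w = f.shape
    nbrs = [f[(i - 1) % h, j], f[(i + 1) % h, j], f[i, (j - 1) % w], f[i, (j + 1) % w]]
    return sum(int(n != f[i, j]) for n in nbrs)

def anneal(field, frozen, sweeps=20, t0=2.0):
    """Metropolis simulated annealing restricted to the non-frozen sites."""
    free = np.argwhere(~frozen)
    for s in range(sweeps):
        t = max(t0 * (1.0 - s / sweeps), 1e-3)          # simple linear cooling schedule
        for i, j in free[rng.permutation(len(free))]:
            e_old = local_energy(field, i, j)
            field[i, j] ^= 1                            # propose flipping the site
            e_new = local_energy(field, i, j)
            if e_new > e_old and rng.random() >= np.exp((e_old - e_new) / t):
                field[i, j] ^= 1                        # reject: undo the flip
    return field

# Coarse scale: anneal freely. Fine scale: freeze children of "certain" coarse sites
# (all neighbours agree) and anneal only the remaining, uncertain regions.
coarse = anneal(rng.integers(0, 2, (16, 16)), np.zeros((16, 16), dtype=bool))
certain = np.array([[local_energy(coarse, i, j) == 0 for j in range(16)] for i in range(16)])
fine = np.kron(coarse, np.ones((2, 2), dtype=int))      # upsample 16x16 -> 32x32
frozen = np.kron(certain, np.ones((2, 2))).astype(bool)
fine = anneal(fine, frozen)
```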
6

MICROSCOPY IMAGE REGISTRATION, SYNTHESIS AND SEGMENTATION

Chichen Fu (5929679) 10 June 2019 (has links)
Fluorescence microscopy has emerged as a powerful tool for studying cell biology because it enables the acquisition of 3D image volumes deeper into tissue and the imaging of complex subcellular structures. Fluorescence microscopy images are frequently distorted by motion resulting from animal respiration and heartbeat, which complicates the quantitative analysis of biological structures needed to characterize the structure and constituency of tissue volumes. This thesis describes a two-pronged approach to quantitative analysis consisting of non-rigid registration and deep convolutional neural network segmentation. The proposed image registration method is capable of correcting motion artifacts in three-dimensional fluorescence microscopy images collected over time. In particular, our method uses 3D B-spline based non-rigid registration with a coarse-to-fine strategy to register stacks of images collected at different time intervals, and 4D rigid registration to register 3D volumes over time. The results show that the proposed method can correct global motion artifacts of sample tissues in four-dimensional space, thereby revealing the motility of individual cells in the tissue.

We also describe nuclei segmentation methods using deep convolutional neural networks, data augmentation to generate training images of different shapes and contrasts, a refinement process combining segmentation results of horizontal, frontal, and sagittal planes in a volume, and a watershed technique to enumerate the nuclei. Our results indicate that, compared to 3D ground truth data, our method can successfully segment and count 3D nuclei. Furthermore, a microscopy image synthesis method based on spatially constrained cycle-consistent adversarial networks is used to efficiently generate training data. A 3D modified U-Net is trained with a combination of Dice loss and binary cross-entropy to achieve accurate nuclei segmentation, and a multi-task U-Net is utilized to resolve overlapping nuclei. This method was found to achieve high accuracy in both object-based and voxel-based evaluations.
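The combination of Dice loss and binary cross-entropy mentioned above can be illustrated with a short PyTorch sketch; the equal weighting, smoothing constant, and tensor shapes are assumptions, not the thesis's reported settings.

```python
import torch
import torch.nn.functional as F

def dice_bce_loss(logits, target, smooth=1.0, bce_weight=0.5):
    """Combined Dice + binary cross-entropy loss for voxel-wise segmentation.

    `logits` and `target` are (N, 1, D, H, W) tensors; the 50/50 weighting and
    the smoothing constant are assumptions.
    """
    probs = torch.sigmoid(logits)
    dims = tuple(range(1, target.dim()))                       # all but the batch dimension
    inter = (probs * target).sum(dim=dims)
    union = probs.sum(dim=dims) + target.sum(dim=dims)
    dice = 1.0 - (2.0 * inter + smooth) / (union + smooth)     # per-sample soft Dice loss
    bce = F.binary_cross_entropy_with_logits(logits, target)
    return bce_weight * bce + (1.0 - bce_weight) * dice.mean()

# Example: a random 3D volume and binary mask
loss = dice_bce_loss(torch.randn(2, 1, 16, 64, 64),
                     torch.randint(0, 2, (2, 1, 16, 64, 64)).float())
```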
7

Synthesis of Thoracic Computer Tomography Images using Generative Adversarial Networks

Hagvall Hörnstedt, Julia January 2019 (has links)
The use of machine learning algorithms to enhance and facilitate medical diagnosis and analysis is a promising and important area that could substantially reduce the workload of clinicians. For machine learning algorithms to learn a certain task, a large amount of data needs to be available. Data sets for medical image analysis are rarely public due to restrictions concerning the sharing of patient data. The production of synthetic images could act as an anonymization tool, enabling the distribution of medical images and facilitating the training of machine learning algorithms that could be used in practice. This thesis investigates the use of Generative Adversarial Networks (GANs) for the synthesis of new thoracic computed tomography (CT) images with no connection to real patients. It also examines the usefulness of the images by comparing the quantitative performance of a segmentation network trained with the synthetic images to that of the same segmentation network trained with real thoracic CT images. The synthetic thoracic CT images were generated using CycleGAN for image-to-image translation between label-map ground truth images and thoracic CT images. The synthetic images were evaluated using different set-ups of synthetic and real images for training the segmentation network. All set-ups were evaluated according to sensitivity, accuracy, Dice and F2-score and compared to the same metrics from a segmentation network trained with 344 real images. The thesis shows that it was possible to generate synthetic thoracic CT images using a GAN. However, within the scope of this thesis, a segmentation network trained with synthetic data could not match the quantitative performance of one trained with the same amount of real images. It was, however, possible to match the performance of a network trained on real images by training with a combination of real and synthetic images in which the majority of the images were synthetic. Using a combination of 59 real images and 590 synthetic images, performance equal to that of a segmentation network trained with 344 real images was achieved regarding sensitivity, Dice and F2-score. Equal quantitative performance of a segmentation network could thus be achieved by using fewer real images together with an abundance of synthetic images, created at close to no cost, indicating the usefulness of synthetically generated images.
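For reference, the voxel-wise metrics used in this evaluation (sensitivity, accuracy, Dice and F2-score) can be computed from binary masks as in the following minimal sketch; the epsilon terms are assumptions added only to avoid division by zero.

```python
import numpy as np

def segmentation_scores(pred, truth, beta=2.0, eps=1e-8):
    """Voxel-wise sensitivity, accuracy, Dice and F2-score for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)
    tn = np.sum(~pred & ~truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    sensitivity = tp / (tp + fn + eps)                 # recall on the foreground class
    accuracy = (tp + tn) / pred.size
    precision = tp / (tp + fp + eps)
    dice = 2 * tp / (2 * tp + fp + fn + eps)
    f2 = (1 + beta**2) * precision * sensitivity / (beta**2 * precision + sensitivity + eps)
    return sensitivity, accuracy, dice, f2

# Example with random masks
rng = np.random.default_rng(0)
print(segmentation_scores(rng.integers(0, 2, (64, 64)), rng.integers(0, 2, (64, 64))))
```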
8

Extensions to the parameterized ray tracing algorithm

Santos, Eduardo Toledo 01 July 1998 (has links)
Ray tracing is a computer algorithm for image synthesis. Its main features are the high quality of the generated images (which incorporate shadows, reflections and transparency, among other effects) and, on the other hand, a high processing demand. Parameterized ray tracing is an algorithm based on ray tracing that allows the synthesis of images of the same quality tens of times faster, although with some restrictions: a preliminary data file must be generated (which takes slightly longer than a standard ray tracing pass to create), and no geometric modifications to the scene are allowed afterwards. On the other hand, generating new versions of the image with changes only to optical parameters (colors, light intensities, textures, reflections, transparencies, etc.) is extremely fast. This Ph.D. dissertation proposes extensions to the parameterized ray tracing algorithm that relax some of its restrictions. These extensions allow changing certain geometric parameters, such as light source positions, spotlight parameters and bump mapping, among others, while keeping the performance of the original algorithm. The parallelization of the algorithm and other ways of accelerating the processing are also studied. The proposed extensions enlarge the field of application of the original algorithm, encouraging its more widespread adoption.
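A toy illustration of the caching idea behind parameterized ray tracing (hypothetical data structures, far simpler than the actual algorithm): the expensive geometric pass is run once and its per-pixel results are cached, so only a cheap shading pass needs to be repeated when optical parameters such as colours or light intensity change.

```python
from dataclasses import dataclass

@dataclass
class HitRecord:
    """Per-pixel geometric data cached by the single, expensive ray-tracing pass."""
    obj_id: int        # which object the primary ray hit
    n_dot_l: float     # cosine between the surface normal and the light direction
    lit: bool          # whether the hit point sees the light source

def reshade(hits, diffuse, light_rgb):
    """Cheap re-shading pass: only optical parameters change; geometry is reused."""
    image = []
    for h in hits:
        kd = diffuse[h.obj_id]                      # diffuse colour, editable between passes
        shade = max(h.n_dot_l, 0.0) if h.lit else 0.0
        image.append(tuple(kd[c] * light_rgb[c] * shade for c in range(3)))
    return image

# Example: two cached hits re-shaded under new materials and a new light colour.
hits = [HitRecord(0, 0.8, True), HitRecord(1, 0.3, False)]
print(reshade(hits, diffuse={0: (1, 0, 0), 1: (0, 1, 0)}, light_rgb=(1.0, 1.0, 0.9)))
```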
9

Proposal of a methodology for the evaluation of global illumination methods in image synthesis

Meneghel, Giovani Balen 01 July 2015 (has links)
The task of generating high-quality computer images in the shortest time possible, convincing to the target audience and making optimal use of all available computational resources, involves a chain of specific processes and remains a challenge today. This work presents a study of that chain, focusing on the evaluation of global illumination methods used in the synthesis of photorealistic images for the areas of animation and visual effects. To help users produce high-quality photorealistic images, experiments were carried out involving several test scenes and six state-of-the-art global illumination methods: Path Tracing, Light Tracing, Bidirectional Path Tracing, Metropolis Light Transport, Progressive Photon Mapping, and Vertex Connection and Merging. The renderer chosen for the experiments was the open-source Mitsuba Renderer. The quality of the produced images was assessed using two perceptual metrics: the Structural Similarity Index (SSIM) and the Visual Difference Predictor HDR-VDP-2. Based on the results, a Recommendation Guide was built for the user, indicating, from the characteristics of an arbitrary scene, the most suitable global illumination method for synthesizing its images. Finally, directions for future research are pointed out, suggesting the use of classifiers, parameter-reduction methods and artificial intelligence to automate the production of high-quality photorealistic images.
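As an example of the kind of perceptual comparison used in this evaluation, SSIM between a candidate render and a reference can be computed with scikit-image; HDR-VDP-2 has no comparably standard Python implementation, so it is not shown, and the images below are synthetic placeholders.

```python
import numpy as np
from skimage.metrics import structural_similarity

# Placeholder images standing in for a converged reference render and a candidate render.
rng = np.random.default_rng(0)
reference = rng.random((256, 256)).astype(np.float32)
candidate = np.clip(reference + 0.05 * rng.standard_normal((256, 256)), 0, 1).astype(np.float32)

# Mean SSIM plus the per-pixel similarity map used to localize differences.
score, ssim_map = structural_similarity(reference, candidate, data_range=1.0, full=True)
print(f"SSIM = {score:.4f}")
```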
