
Generating Synthetic X-rays Using Generative Adversarial Networks

We propose a novel method for generating synthetic X-rays from atypical inputs. The method produces approximate X-rays for non-diagnostic visualization problems where only generic cameras and sensors are available, whereas traditional methods are restricted to 3-D inputs such as meshes or Computed Tomography (CT) scans. We create synthetic X-ray datasets with a custom data generator capable of producing RGB images, point cloud images, and 2-D pose images. Using a dataset of natural hand poses, we train general-purpose Conditional Generative Adversarial Networks (CGANs) as well as our own novel network, pix2xray. Our results show that plausible X-rays can be generated from point cloud and RGB images. We also demonstrate the superiority of our pix2xray approach, particularly in difficult cases of occlusion caused by overlapping or rotated anatomy. Overall, our work establishes a baseline showing that synthetic X-rays can be simulated from inputs such as RGB images and point clouds.
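To make the image-to-image setup concrete, the sketch below shows one training step of a minimal pix2pix-style conditional GAN in PyTorch, mapping a conditioning image (e.g. an RGB or point-cloud rendering of a hand) to a synthetic X-ray. This is an illustrative assumption for this record, not the thesis's pix2xray implementation: the Generator, Discriminator, and train_step names, the layer sizes, and the L1 weight of 100 are placeholders chosen to mirror the standard pix2pix objective.

# Illustrative pix2pix-style conditional GAN sketch (not the thesis's pix2xray code).
# The generator maps a conditioning image to a synthetic X-ray; the discriminator
# judges concatenated (condition, X-ray) pairs.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Small encoder-decoder stand-in for a U-Net-style pix2pix generator."""
    def __init__(self, in_ch=3, out_ch=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, out_ch, 4, stride=2, padding=1), nn.Tanh(),
        )
    def forward(self, cond):
        return self.net(cond)

class Discriminator(nn.Module):
    """PatchGAN-style discriminator over (condition, X-ray) pairs."""
    def __init__(self, cond_ch=3, xray_ch=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(cond_ch + xray_ch, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(128, 1, 4, stride=1, padding=1),  # per-patch real/fake logits
        )
    def forward(self, cond, xray):
        return self.net(torch.cat([cond, xray], dim=1))

def train_step(G, D, opt_g, opt_d, cond, real_xray, l1_weight=100.0):
    """One adversarial + L1 update, following the standard pix2pix objective."""
    bce = nn.BCEWithLogitsLoss()
    l1 = nn.L1Loss()

    # Discriminator update: real pairs labeled 1, generated pairs labeled 0.
    fake_xray = G(cond).detach()
    d_real = D(cond, real_xray)
    d_fake = D(cond, fake_xray)
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator update: fool the discriminator and stay close to the target X-ray.
    fake_xray = G(cond)
    d_fake = D(cond, fake_xray)
    loss_g = bce(d_fake, torch.ones_like(d_fake)) + l1_weight * l1(fake_xray, real_xray)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

if __name__ == "__main__":
    G, D = Generator(), Discriminator()
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
    cond = torch.randn(2, 3, 64, 64)        # placeholder RGB / point-cloud renderings
    real_xray = torch.randn(2, 1, 64, 64)   # placeholder X-ray targets
    print(train_step(G, D, opt_g, opt_d, cond, real_xray))

The L1 term is what makes such a conditional setup suitable for structure-preserving tasks like X-ray synthesis: the adversarial loss encourages realistic texture while the L1 loss keeps the generated image aligned with the conditioning anatomy.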

Identifier: oai:union.ndltd.org:uottawa.ca/oai:ruor.uottawa.ca:10393/41092
Date: 24 September 2020
Creators: Haiderbhai, Mustafa
Contributors: Fallavollita, Pascal
Publisher: Université d'Ottawa / University of Ottawa
Source Sets: Université d'Ottawa
Language: English
Detected Language: English
Type: Thesis
Format: application/pdf