Deep learning has profoundly impacted post-acquisition image-processing tasks; however, there is increasing interest in more tightly coupled computational imaging approaches, in which models, computation, and physical sensing are intertwined. This dissertation focuses on leveraging the expressive power of deep learning in image reconstruction. We use deep learning in both the sensor data domain and the image domain to develop new, fast, and efficient algorithms that achieve superior-quality imagery.
Metal artifacts are ubiquitous in both security and medical applications. They can greatly limit subsequent object delineation and information extraction from the images, restricting their diagnostic value. The problem is particularly acute in the security domain, where the objects that can appear in a scene are highly heterogeneous, highly accurate decisions must be made quickly, and processing time is tightly constrained. Motivated primarily by security applications, we present a new deep-learning-based metal artifact reduction (MAR) approach that tackles the problem in the sensor data domain. We treat the observed data corresponding to dense metal objects as missing and train an adversarial deep network to complete them directly in the projection domain. The completed projection data are then used with an efficient conventional image reconstruction algorithm to produce an image intended to be free of artifacts.
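The projection-domain completion step can be sketched numerically. In the toy example below, 1-D linear interpolation along each detector row stands in for the trained adversarial completion network; the sinogram, metal-trace mask, and sizes are illustrative assumptions rather than the dissertation's actual data.

```python
import numpy as np

def complete_sinogram(sinogram, metal_mask):
    """Treat metal-affected samples as missing and fill them in the
    projection domain; 1-D linear interpolation along each detector
    row stands in for the trained adversarial completion network."""
    completed = sinogram.copy()
    for i in range(sinogram.shape[0]):           # one projection angle per row
        row, miss = sinogram[i], metal_mask[i]
        if miss.any():
            known = np.flatnonzero(~miss)
            completed[i, miss] = np.interp(np.flatnonzero(miss), known, row[known])
    return completed

# Toy sinogram: a smooth profile with a simulated metal trace zeroed out.
angles, dets = 8, 64
sino = np.tile(np.sin(np.linspace(0, np.pi, dets)), (angles, 1))
mask = np.zeros_like(sino, dtype=bool)
mask[:, 28:36] = True                            # detector bins shadowed by metal
sino_corrupt = np.where(mask, 0.0, sino)         # metal trace treated as missing
sino_complete = complete_sinogram(sino_corrupt, mask)
# sino_complete would then feed a conventional reconstruction (e.g. FBP).
```

The measured samples pass through unchanged; only the masked metal trace is synthesized, which is what lets a fast conventional reconstruction run downstream without modification.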
Conventional image reconstruction algorithms assume that high-quality data is present on a dense and regular grid. Using conventional methods when these requirements are not met produces images filled with artifacts that are difficult to interpret. In this context, we develop data-domain deep learning methods that attempt to enhance the observed data to better meet the assumptions underlying the fast conventional analytical reconstruction methods. By focusing learning in the data domain in this way and coupling the result with existing conventional reconstruction methods, high-quality imaging can be achieved in a fast and efficient manner. We demonstrate results on four different problems: i) low-dose CT, ii) sparse-view CT, iii) limited-angle CT, and iv) accelerated MRI.
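As a concrete illustration of enhancing observed data to meet the dense-and-regular-grid assumption, the sketch below upsamples a sparse-view sinogram onto a dense angular grid. Linear interpolation along the angle axis is a stand-in for the learned data-domain network, and the toy sinogram is an illustrative assumption.

```python
import numpy as np

def upsample_views(sparse_sino, factor):
    """Interpolate a sparse-view sinogram onto a dense, regular angular
    grid; linear interpolation along the angle axis stands in for the
    learned data-domain enhancement network."""
    n_sparse, n_dets = sparse_sino.shape
    n_dense = (n_sparse - 1) * factor + 1
    sparse_idx = np.arange(n_sparse) * factor    # measured views on the dense grid
    dense_idx = np.arange(n_dense)
    dense = np.empty((n_dense, n_dets))
    for j in range(n_dets):
        dense[:, j] = np.interp(dense_idx, sparse_idx, sparse_sino[:, j])
    return dense

# Toy sinogram that varies smoothly with projection angle.
angles = np.linspace(0, np.pi, 7)                # only 7 views measured
dets = np.linspace(-1, 1, 16)
sino_sparse = np.cos(angles)[:, None] * dets[None, :]
sino_dense = upsample_views(sino_sparse, factor=4)   # 25 views for analytic recon
```

The enhanced data keep the measured views exactly and only synthesize the missing ones, so a fast analytical method such as filtered back-projection can be applied to the dense grid as if it had been fully sampled.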
Image-domain prior models have been shown to improve the quality of reconstructed images, especially when data are limited. We present a novel, principled approach that allows the unified integration of both data-domain and image-domain priors for improved image reconstruction. The consensus equilibrium framework is extended to integrate physical sensor models, data models, and image models. To achieve this integration, the conventional image variables used in consensus equilibrium are augmented with variables representing data-domain quantities. The overall result is a combined estimate of both the data and the reconstructed image that is consistent with the physical models and prior models being used. The prior models used in both the image and data domains in this work are created using deep neural networks. The superior quality enabled by incorporating both data-domain and image-domain prior models is demonstrated for two applications: limited-angle CT and accelerated MRI.
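The consensus equilibrium idea can be illustrated with a toy affine example. Below, two firmly nonexpansive agents (a least-squares data-fidelity proximal map and a simple smoother standing in for a trained denoising prior) are driven to agreement by a Mann iteration; the operators, weights, and problem sizes are illustrative assumptions, not the dissertation's models.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 32
A = rng.normal(size=(24, n)) / np.sqrt(24)        # toy forward (sensing) model
x_true = np.convolve(rng.normal(size=n), np.ones(5) / 5, mode="same")
y = A @ x_true + 0.01 * rng.normal(size=24)

# Agent 1: proximal map of the data-fidelity term (1/2)||Ax - y||^2.
sigma2 = 0.5
M = np.linalg.inv(A.T @ A + np.eye(n) / sigma2)
def F1(v):
    return M @ (A.T @ y + v / sigma2)

# Agent 2: a firmly nonexpansive smoother standing in for a learned prior.
def F2(v):
    return 0.5 * (v + (np.roll(v, 1) + v + np.roll(v, -1)) / 3)

# Consensus equilibrium via Mann iteration on the stacked variable w = (v1, v2):
# T = (2*Gbar - I)(2*F - I), where F applies each agent to its own block and
# Gbar replaces every block with the average across blocks.
w = np.zeros((2, n))
for _ in range(3000):
    Fw = np.stack([F1(w[0]), F2(w[1])])
    z = 2 * Fw - w
    Tw = 2 * z.mean(axis=0, keepdims=True) - z
    w = 0.5 * (w + Tw)

x1, x2 = F1(w[0]), F2(w[1])
x_hat = 0.5 * (x1 + x2)                           # agents agree at equilibrium
```

The augmentation described in the paragraph corresponds to letting some blocks of `w` live in the data domain rather than the image domain; the same averaged fixed-point machinery then balances sensor, data, and image models jointly.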
A major question that arises in the use of neural networks, and deep networks in particular, is their stability: if the examples seen in the application environment differ from those seen during training, will performance remain robust? We perform an empirical stability analysis of the data-domain and image-domain deep learning methods developed for limited-angle CT reconstruction. We consider three types of perturbations to test stability: adversarially optimized, random, and structural perturbations. Our empirical analysis reveals that the data-domain learning approach proposed in this dissertation is less susceptible to perturbations than the image-domain post-processing approach. This is a very encouraging result that strongly supports the central argument of this dissertation: data-domain learning has real value and should be part of our computational imaging toolkit.
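The gap between adversarially optimized and random perturbations can be made concrete for a linear reconstruction map, where the worst-case input perturbation of a given norm lies along the top right singular vector. The map `R` below is a random stand-in, an illustrative assumption rather than a trained network.

```python
import numpy as np

rng = np.random.default_rng(1)
m = n = 40
R = rng.normal(size=(m, n)) / np.sqrt(n)   # stand-in for a linear reconstruction map
y = rng.normal(size=n)                     # nominal measurement

eps = 0.1
# Adversarial perturbation: for a linear map, the output change
# ||R(y + d) - R y|| under ||d|| = eps is maximized by eps times the
# top right singular vector of R.
_, s, Vt = np.linalg.svd(R)
d_adv = eps * Vt[0]
# Random perturbation of the same norm, for comparison.
d_rand = rng.normal(size=n)
d_rand *= eps / np.linalg.norm(d_rand)

change_adv = np.linalg.norm(R @ (y + d_adv) - R @ y)    # equals eps * s[0]
change_rand = np.linalg.norm(R @ (y + d_rand) - R @ y)
```

For nonlinear deep networks no such closed form exists, which is why the dissertation's adversarial perturbations are found by optimization; the principle is the same, namely comparing worst-case against typical input changes of equal size.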
Identifier | oai:union.ndltd.org:bu.edu/oai:open.bu.edu:2144/41921 |
Date | 22 January 2021 |
Creators | Ghani, Muhammad Usman |
Contributors | Karl, W. Clem |
Source Sets | Boston University |
Language | en_US |
Detected Language | English |
Type | Thesis/Dissertation |
Rights | Attribution 4.0 International, http://creativecommons.org/licenses/by/4.0/ |