41

MACE CT Reconstruction for Modular Material Decomposition from Photon-Counting CT Data

Natalie Marie Jadue (19199005) 24 July 2024

X-ray computed tomography (CT) based on photon counting detectors (PCD) extends standard CT by counting detected photons in multiple energy bins. PCD data can be used to increase the contrast-to-noise ratio (CNR), increase spatial resolution, reduce radiation dose, reduce injected contrast dose, and compute a material decomposition using a specified set of basis materials [1]. Current commercial and prototype clinical photon counting CT systems use PCD-CT reconstruction methods that either reconstruct from each spectral bin separately, or first estimate a material sinogram using a specified set of basis materials and then reconstruct from these material sinograms. However, existing methods cannot simultaneously and modularly exploit both the measured spectral information and advanced prior models to produce a material decomposition.

We describe an efficient, modular framework for PCD-based CT reconstruction and material decomposition using Multi-Agent Consensus Equilibrium (MACE). Portions of this dissertation appear in [2]. Our method employs a detector proximal map, or agent, that uses PCD measurements to update an estimate of the path-length sinogram. We also create a prior agent in the form of a sinogram denoiser that enforces both physical and empirical knowledge about the material-decomposed sinogram. The sinogram reconstruction is computed using the MACE algorithm, which finds an equilibrium solution between the two agents, and the final image is reconstructed from the estimated sinogram. Importantly, the modularity of our method allows the two agents to be designed, implemented, and optimized independently. Our results on simulated data show a substantial (2-3 times) noise reduction versus conventional maximum likelihood reconstruction when applied to a phantom used to evaluate low-contrast detectability. Our results with measured data show an even larger (2-12 times) reduction in noise standard deviation. Lastly, we demonstrate our method on a Lungman phantom that more realistically represents the human body.
42

HIGH SPEED IMAGING VIA ADVANCED MODELING

Soumendu Majee (10942896) 04 August 2021

There is an increasing need to accurately image objects at high temporal resolution in order to analyze the underlying physical, chemical, or biological processes. In this thesis, we use advanced models that exploit the image structure and the measurement process to achieve improved temporal resolution. The thesis is divided into three chapters, each corresponding to a different imaging application.

In the first chapter, we propose a novel method to localize neurons in fluorescence microscopy images. Accurate localization of neurons enables us to scan only the neuron locations instead of the full brain volume, and thus improve the temporal resolution of neuron activity monitoring. We formulate neuron localization as an inverse problem in which we reconstruct an image that encodes the locations of the neuron centers. The sparsity of the neuron centers serves as a prior model, while the forward model comprises shape models estimated from training data.

In the second chapter, we introduce multi-slice fusion, a novel framework for incorporating advanced prior models into inverse problems spanning many dimensions, such as 4D computed tomography (CT) reconstruction. State-of-the-art 4D reconstruction methods use model-based iterative reconstruction (MBIR), but MBIR depends critically on the quality of the prior model. Incorporating deep convolutional neural networks (CNNs) into the 4D reconstruction problem is difficult due to computational cost and the lack of high-dimensional training data. Multi-slice fusion integrates the tomographic forward model with multiple low-dimensional CNN denoisers along different planes to produce a 4D regularized reconstruction. The improved regularization allows each time-frame to be reconstructed from fewer measurements, resulting in improved temporal resolution. Experimental results on sparse-view and limited-angle CT data demonstrate that multi-slice fusion can substantially improve reconstruction quality relative to traditional methods, while also being practical to implement and train.

In the final chapter, we introduce CodEx, a synergistic combination of coded acquisition and non-convex Bayesian reconstruction for improving acquisition speed in computed tomography (CT). In an ideal "step-and-shoot" tomographic acquisition, the object is rotated to each desired angle and the view is taken. However, step-and-shoot acquisition is slow and can waste photons, so in practice the object typically rotates continuously in time, leading to blurry views. This blur can then result in reconstructions with severe motion artifacts. CodEx works by encoding the acquisition with a known binary code that the reconstruction algorithm then inverts. The CodEx reconstruction method uses the alternating direction method of multipliers (ADMM) to split the inverse problem into iterative deblurring and reconstruction sub-problems, making reconstruction practical. CodEx allows fast data acquisition, leading to good temporal resolution in the reconstruction.
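As a rough illustration of the multi-slice idea, the sketch below applies a 2D denoiser slice-by-slice along each orientation of a volume and averages the results. In the actual framework the denoisers are combined with the tomographic forward model as consensus agents rather than by simple averaging, and denoise2d stands in for a trained CNN denoiser:

    import numpy as np

    def fuse_multislice(vol, denoise2d):
        # Apply the 2D denoiser along each of the three slice orientations,
        # then average the denoised volumes as the fused prior step.
        outs = []
        for axis in range(3):                      # xy, xz, and yz planes
            moved = np.moveaxis(vol, axis, 0)      # iterate slices along `axis`
            den = np.stack([denoise2d(s) for s in moved])
            outs.append(np.moveaxis(den, 0, axis))
        return np.mean(outs, axis=0)
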
43

TIME-OF-FLIGHT NEUTRON CT FOR ISOTOPE DENSITY RECONSTRUCTION AND CONE-BEAM CT SEPARABLE MODELS

Thilo Balke (15348532) 26 April 2023

There is a great need for accurate image reconstruction in the context of non-destructive evaluation. Major challenges include the ever-increasing need for high-resolution reconstruction with limited scan and reconstruction time, and thus fewer and noisier measurements. In this thesis, we leverage advanced Bayesian modeling of the physical measurement process and probabilistic prior information about the image distribution to yield higher image quality despite limited measurement time. We demonstrate efficient computational performance in several ways: through more efficient memory access, optimized parametrization of the system model, and multi-pixel parallelization. We demonstrate that by building high-fidelity forward models we can generate quantitatively reliable reconstructions despite very limited measurement data.

In the first chapter, we introduce an algorithm for estimating isotopic densities from neutron time-of-flight imaging data. Energy-resolved neutron imaging (ERNI) is an advanced neutron radiography technique capable of non-destructively extracting spatial isotopic information within a given material. Energy-dependent radiography image sequences can be created using neutron time-of-flight techniques. In combination with uniquely characteristic isotopic neutron cross-section spectra, isotopic areal densities can be determined on a per-pixel basis, resulting in a set of areal density images for each isotope present in the sample. By performing ERNI measurements over several rotational views, an isotope-decomposed 3D computed tomography is possible. We demonstrate a method involving robust and automated background estimation based on a linear programming formulation. The extremely high noise due to low-count measurements is overcome using a sparse coding approach. This yields a significant improvement in computation time, from weeks to a few hours compared to existing neutron evaluation tools, enabling at the present stage a semi-quantitative, user-friendly routine application.

In the second chapter, we introduce the TRINIDI algorithm, a more refined algorithm for the same problem. Accurate reconstruction of 2D and 3D isotope densities is a desired capability with great potential impact in applications such as the evaluation and development of next-generation nuclear fuels. Neutron time-of-flight (TOF) resonance imaging offers a potential approach by exploiting the characteristic neutron absorption spectra of each isotope. However, it is a major challenge to compute quantitatively accurate images due to a variety of confounding effects such as severe Poisson noise, background scatter, beam non-uniformity, absorption non-linearity, and extended source pulse duration. We present the TRINIDI algorithm, which is based on a two-step process: we first estimate the neutron flux and background counts, and then reconstruct the areal densities of each isotope and pixel. Both components are based on the inversion of a forward model that accounts for the highly non-linear absorption, energy-dependent emission profile, and Poisson noise, while also modeling the substantial spatio-temporal variation of the background and flux. To do this, we formulate the non-linear inverse problem as two optimization problems that are solved in sequence. We demonstrate on both synthetic and measured data that TRINIDI can reconstruct quantitatively accurate 2D views of isotopic areal density, which can then be reconstructed into quantitatively accurate 3D volumes of isotopic volumetric density.

In the third chapter, we introduce a separable forward model for cone-beam computed tomography (CT) that enables efficient computation of a Bayesian model-based reconstruction. Cone-beam CT is an attractive tool for many kinds of non-destructive evaluation (NDE). Model-based iterative reconstruction (MBIR) has been shown to improve reconstruction quality and reduce scan time. However, the computational burden and storage of the system matrix are challenging. In this chapter we present a separable representation of the system matrix that can be completely stored in memory and accessed cache-efficiently. This is done by quantizing the voxel position for one of the separable subproblems. A parallelized algorithm, which we refer to as the zipline update, is presented that speeds up the computation of the solution by about 50 to 100 times on 20 cores by updating groups of voxels together. The quality of the reconstruction and the algorithm's scalability are demonstrated on real cone-beam CT data from an NDE application. We show that the reconstruction can be done from a sparse set of projection views while reducing artifacts visible in the conventional filtered back projection (FBP) reconstruction. We present qualitative results using a Markov random field (MRF) prior and a Plug-and-Play denoiser.
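For a single pixel, the measurement model behind this kind of TOF reconstruction can be sketched as Beer-Lambert attenuation of the flux through the isotope cross sections, plus background, with Poisson counting noise. A minimal sketch under those assumptions (array shapes and names are illustrative, not TRINIDI's actual interface):

    import numpy as np

    def expected_counts(flux, background, cross_sections, areal_density):
        # flux, background:  (T,)   per time-of-flight (energy) bin
        # cross_sections:    (T, I) per-bin cross section for each isotope
        # areal_density:     (I,)   areal density of each isotope at one pixel
        attenuation = np.exp(-cross_sections @ areal_density)  # Beer-Lambert
        return flux * attenuation + background

    # Measured counts are then modeled as Poisson(expected_counts(...)),
    # and the areal densities are recovered by inverting this model.
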
44

STUDY AND IMPLEMENTATION OF A SINGLE PIXEL CAMERA BY COMPRESSIVE SAMPLING

MATHEUS ESTEVES FERREIRA 15 June 2021

Single-pixel imaging computationally reconstructs two-dimensional images from a set of intensity measurements taken by a single-point detector. To derive the spatial information of a scene, a set of modulation patterns is applied to the transmitted/backscattered light from the object and combined with the integral signal on the detector. First, we present an overview of such optical systems and implement a proof of concept that can perform image acquisition using three modes of operation: raster scanning, Hadamard basis scanning, and Hadamard compressive sampling. Second, we explore how the different experimental parameters affect image acquisition. Finally, we compare how the three scanning modes perform for the acquisition of images of sizes ranging from (8px, 8px) to (128px, 128px).
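A toy version of the Hadamard-basis scanning mode is easy to write down: each single-pixel measurement is the inner product of the scene with one +/-1 Hadamard pattern, and the image is recovered with the inverse transform. A minimal sketch of that mode (for compressive sampling one would keep only a subset of patterns and solve a sparsity-regularized inversion instead):

    import numpy as np
    from scipy.linalg import hadamard

    def hadamard_scan(scene):
        n = scene.size                 # pixel count; must be a power of 2
        H = hadamard(n)                # rows are the +/-1 modulation patterns
        y = H @ scene.ravel()          # one detector measurement per pattern
        return (H.T @ y / n).reshape(scene.shape)  # exact since H.T @ H = n*I

    img = np.random.rand(8, 8)         # toy (8px, 8px) scene
    assert np.allclose(hadamard_scan(img), img)
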
45

WEARABLE BIG DATA HARNESSING WITH DEEP LEARNING, EDGE COMPUTING AND EFFICIENCY OPTIMIZATION

Jiadao Zou (16920153) 03 January 2024

In this dissertation, efforts and innovations are made to advance subtle pattern mining, edge computing, and system efficiency optimization for biomedical applications, thereby advancing big data for precision medicine.

Brain visual dynamics encode rich functional and biological patterns of the neural system, promising for applications like intention decoding, cognitive load quantification, and neural disorder measurement. Here we focus on understanding brain visual dynamics in the amyotrophic lateral sclerosis (ALS) population. We leverage a deep learning framework for automatic feature learning and classification that can translate the eye's electrooculography (EOG) signal into meaningful words. We then build an edge computing platform on the smartphone for learning, visualization, and decoded-word demonstration, all in real time. In a further study, we leverage deep transfer learning to boost EOG decoding effectiveness: a model trained on basic eye movements is treated as an additional feature extractor when classifying the signal into the meaningful word, resulting in higher accuracy.

Efforts are further made to decode the functional near-infrared spectroscopy (fNIRS) signal, which encodes rich brain dynamics such as cognitive load. We propose a novel multi-view multi-channel graph neural network (mmGNN). Specifically, we mine the multi-channel fNIRS dynamics with a multi-stage GNN that can effectively extract channel-specific patterns, propagate patterns among channels, and fuse patterns for high-level abstraction. Further, we boost the learning capability with multi-view learning to mine pertinent patterns in the temporal, spectral, time-frequency, and statistical domains.

Massive-device systems, like wearable massive-sensor computers and the Internet of Things (IoT), are promising in the era of big data. The crucial challenge is how to maximize efficiency under coupled constraints such as energy budget, computing, and communication. We propose a deep reinforcement learning framework with a pattern booster and a learning adaptor. This framework has been shown to optimally maximize energy utilization and computing efficiency on the local devices in a one-center, fifteen-device setting.

Our research and findings are expected to greatly advance intelligent, real-time, and efficient big data harnessing, leveraging deep learning, edge computing, and efficiency optimization.
46

Controllable 3D Effects Synthesis in Image Editing

Yichen Sheng (18184378) 15 April 2024

3D effect synthesis is crucial in image editing to enhance realism or visual appeal. Unlike classical graphics rendering, which relies on complete 3D geometries, 3D effect synthesis in image editing operates solely with 2D images as inputs. This shift presents significant challenges, primarily addressed by data-driven methods that learn to synthesize 3D effects in an end-to-end manner. However, these methods face limitations in the diversity of 3D effects they can produce and lack user control. For instance, existing shadow generation networks are restricted to producing hard shadows without offering any user input for customization.

In this dissertation, we tackle the research question: how can we synthesize controllable and realistic 3D effects in image editing when only 2D information is available? Our investigation leads to four contributions. First, we introduce a neural network designed to create realistic soft shadows from an image cutout and a user-specified environment light map. This approach is the first attempt to use a neural network for realistic real-time soft shadow rendering. Second, we develop a novel 2.5D representation, Pixel Height, tailored to the nuances of image editing. This representation not only forms the foundation of a new soft shadow rendering pipeline that provides intuitive user control, but also generalizes soft shadow receivers to general shadow receivers. Third, we present the mathematical relationship between the Pixel Height representation and 3D space. This connection facilitates the reconstruction of normals or depth from 2D scenes, broadening the scope for synthesizing comprehensive 3D lighting effects such as reflections and refractions. 3D-aware buffer channels are also proposed to improve the synthesized soft shadow quality. Lastly, we introduce Dr.Bokeh, a differentiable bokeh renderer that extends traditional bokeh effect algorithms with better occlusion modeling to correct flaws in existing methods. With more precise lens modeling, we show that Dr.Bokeh not only achieves state-of-the-art bokeh rendering quality, but also pushes the boundary of the depth-from-defocus problem.

Our work in controllable 3D effect synthesis represents a pioneering effort in image editing, laying the groundwork for future lighting effect synthesis in various image editing applications. Moreover, the improvements to filtering-based bokeh rendering could significantly enhance commercial products, such as the portrait mode feature on smartphones.
47

MAJORIZED MULTI-AGENT CONSENSUS EQUILIBRIUM FOR 3D COHERENT LIDAR IMAGING

Tony Allen (18502518) 06 May 2024

Coherent lidar uses a chirped laser pulse for 3D imaging of distant targets. However, existing coherent lidar image reconstruction methods do not account for the system's aperture, resulting in sub-optimal resolution. Moreover, these methods use majorization-minimization for computational efficiency, but do so without a theoretical treatment of convergence.

In this work, we present Coherent Lidar Aperture Modeled Plug-and-Play (CLAMP) for multi-look coherent lidar image reconstruction. CLAMP uses multi-agent consensus equilibrium (a form of PnP) to combine a neural network denoiser with an accurate physics-based forward model. CLAMP introduces an FFT-based method to account for the effects of the aperture and uses majorization of the forward model for computational efficiency. We also formalize the use of majorization-minimization in consensus optimization problems and prove convergence to the exact consensus equilibrium solution. Finally, we apply CLAMP to synthetic and measured data to demonstrate its effectiveness in producing high-resolution, speckle-free 3D imagery.
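FFT-based aperture modeling of the kind mentioned above amounts to applying the aperture's transfer function in the spatial-frequency domain, which is what makes it cheap to include in an iterative forward model. A minimal sketch, assuming the optical transfer function is already known on the FFT grid (names are illustrative, not CLAMP's implementation):

    import numpy as np

    def apply_aperture(field, aperture_otf):
        # Multiply by the aperture transfer function in frequency space,
        # then return to the spatial domain.
        return np.fft.ifft2(np.fft.fft2(field) * aperture_otf)

    def apply_aperture_adjoint(field, aperture_otf):
        # Adjoint operator, needed inside iterative reconstruction.
        return np.fft.ifft2(np.fft.fft2(field) * np.conj(aperture_otf))
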
48

OVERCOMING THE RAYLEIGH LIMIT FOR HIGH-RESOLUTION OPTICAL IMAGING: QUANTUM AND CLASSICAL METHODS

Hyunsoo Choi (18989168) 12 July 2024

Achieving high optical resolution is one of the most important goals in the history of optics. However, finite aperture sizes impose a diffraction limit on optical imaging. The Rayleigh limit, which describes the minimum separation at which two point sources are resolvable, has therefore served as a critical bound on optical resolution. Many methods have been studied to break this limit; they succeed in resolving nearby sources below the Rayleigh criterion, but only beyond a certain separation. Furthermore, it has been demonstrated that quantum-inspired optics techniques maintain consistent variance in estimating the separation of point sources even at low separations, but only with prior information such as a known number of sources and equal brightness. Achieving the ultimate optical resolution therefore remains an open question. This thesis addresses this challenge under real-world conditions, i.e., without prior information or a controlled lab environment, and in the presence of low signal-to-noise ratio (SNR), turbulence, and other practical challenges.

In information theory, the estimation variance of a parameter can be bounded using the inverse of the Fisher information; by maximizing the Fisher information, one can minimize the estimation variance. In this thesis, we show that the measurement can be accelerated without sacrificing optical resolution by using adaptive modes chosen so that the quantum Fisher information per detected photon is maximized. The notable attribute that sets this apart from other quantum-inspired methods is that it requires no prior information, making it more feasible for practical application. We further show that the space domain awareness (SDA) challenge can be effectively handled with this approach, with a very limited photon budget and even in the presence of turbulence. Toward solving these challenges, we designed a photon-statistics-based direct imaging method that can also serve as a baseline method for quantum optics. Atmospheric turbulence is also explored in depth, and its effect is mitigated using reinforcement learning.
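The Fisher-information argument can be made concrete with a small numerical example: for two equally bright point sources with Gaussian point spread functions under ideal direct imaging, the per-photon Fisher information for the separation d is I(d) = integral of (dp/dd)^2 / p dx, and the Cramer-Rao bound gives variance >= 1/(N*I(d)) for N detected photons. The sketch below computes I(d) numerically and shows it collapsing as d -> 0, the "Rayleigh curse" that quantum-inspired measurements avoid; all parameters are illustrative:

    import numpy as np

    def fisher_info_separation(d, sigma=1.0):
        x = np.linspace(-10 * sigma, 10 * sigma, 20001)
        psf = lambda u: np.exp(-u**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)

        def p(dd):                      # two-source image intensity profile
            return 0.5 * psf(x - dd / 2) + 0.5 * psf(x + dd / 2)

        eps = 1e-6 * sigma              # finite-difference derivative in d
        dp = (p(d + eps) - p(d)) / eps
        return np.sum(dp**2 / p(d)) * (x[1] - x[0])

    for d in (2.0, 1.0, 0.5, 0.1):      # information vanishes as d -> 0
        print(d, fisher_info_separation(d))
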
49

RECONSTRUCTION OF HIGH-SPEED EVENT-BASED VIDEO USING PLUG AND PLAY

Trevor D. Moore (5930756) 16 January 2019

Event-based cameras, also known as neuromorphic cameras or dynamic vision sensors, are an imaging modality that attempts to mimic the human eye by asynchronously measuring contrast over time. If the contrast changes sufficiently, a 1-bit event is output, indicating whether the contrast has gone up or down. This stream of events is sparse, and its asynchronous nature allows the pixels to have a high dynamic range and high temporal resolution. However, these events do not encode the intensity of the scene, resulting in an inverse problem: estimating intensity images from the event stream. Hybrid event-based cameras, such as the DAVIS camera, provide a reference intensity image that can be leveraged when estimating the intensity at each pixel during an event. Normally, inverse problems are solved by formulating a forward and prior model and minimizing the associated cost; here, however, the Plug-and-Play (P&P) algorithm is used to solve the inverse problem. In this case, P&P replaces the prior-model subproblem with a denoiser, making the algorithm modular and easier to implement. We propose an idealized forward model that assumes the contrast steps measured by the DAVIS camera are uniform in size, to simplify the problem. We show that the algorithm can swiftly reconstruct the scene intensity at a user-specified frame rate, depending on the chosen denoiser's computational complexity and the selected frame rate.
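Under the idealized forward model above, reconstruction without a prior reduces to integrating events: with a uniform contrast step c, the log intensity at a pixel is the reference value plus c times the running sum of event polarities. A minimal sketch of that intuition for one pixel (the thesis adds the denoiser prior via P&P on top of this; names and the step size are illustrative):

    import numpy as np

    def integrate_events(ref_log_intensity, events, c=0.1):
        # events: iterable of (timestamp, polarity) with polarity in {+1, -1}
        log_I = ref_log_intensity
        out = []
        for t, pol in events:
            log_I += c * pol            # each event is one uniform contrast step
            out.append((t, np.exp(log_I)))
        return out                      # intensity estimate after each event
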
50

Imaging and Object Detection under Extreme Lighting Conditions and Real World Adversarial Attacks

Xiangyu Qu (16385259) 22 June 2023

Imaging and computer vision systems deployed in real-world environments face the challenge of accommodating a wide range of lighting conditions. However, cost, the demand for high resolution, and the miniaturization of imaging devices impose physical constraints on sensor design, limiting both the dynamic range and the effective aperture size of each pixel. Consequently, conventional CMOS sensors fail to deliver satisfactory capture in high dynamic range scenes or under photon-limited conditions, thereby impacting the performance of downstream vision tasks. In this thesis, we address two key problems: 1) exploring the use of spatial multiplexing, specifically spatially varying exposure tiling, to extend sensor dynamic range and optimize scene capture, and 2) developing techniques to enhance the robustness of object detection systems under photon-limited conditions.

In addition to the challenges imposed by natural environments, real-world vision systems are susceptible to adversarial attacks in the form of artificially added digital content. Therefore, this thesis also presents a comprehensive pipeline for constructing a robust and scalable system to counter such attacks.
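The spatially varying exposure idea can be sketched with a checkerboard of two exposures: long-exposure pixels capture dark regions well but saturate in highlights, where the interleaved short-exposure pixels take over. A toy sketch of capture and fusion, with all parameters illustrative and a deliberately naive fill step in place of a real demosaicing/fusion pipeline:

    import numpy as np

    def sve_capture_and_fuse(radiance, exposures=(4.0, 1.0), full_well=1.0):
        H, W = radiance.shape
        checker = np.indices((H, W)).sum(axis=0) % 2       # exposure tiling
        exp_map = np.where(checker == 0, exposures[0], exposures[1])
        raw = np.clip(radiance * exp_map, 0.0, full_well)  # saturating sensor
        est = raw / exp_map                                # normalize exposure
        saturated = raw >= full_well
        for y, x in zip(*np.nonzero(saturated)):           # naive neighbor fill
            nbrs = [est[j, i] for j, i in ((y-1, x), (y+1, x), (y, x-1), (y, x+1))
                    if 0 <= j < H and 0 <= i < W and not saturated[j, i]]
            if nbrs:
                est[y, x] = np.mean(nbrs)
        return est
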
