
Development of Time-Resolved Diffuse Optical Systems Using SPAD Detectors and an Efficient Image Reconstruction Algorithm

Alayed, Mrwan January 2019
Time-resolved diffuse optics is a powerful and safe technique for quantifying the optical properties (OP) of highly scattering media such as biological tissues. The OP values are correlated with the composition of the measured object, especially with tissue chromophores such as hemoglobin. The OP are mainly the absorption and reduced scattering coefficients, which can be quantified for highly scattering media using Time-Resolved Diffuse Optical Spectroscopy (TR-DOS) systems, and whose spatial distribution in a measured medium can be reconstructed using Time-Resolved Diffuse Optical Imaging (TR-DOI) systems. Therefore, TR-DOS and TR-DOI can be used for functional monitoring of the brain and muscles, and for diagnostic tasks such as the detection and localization of breast cancer and blood clots. In general, TR-DOI systems are non-invasive, reliable, and have a high temporal resolution. However, TR-DOI systems are known for their complexity, bulkiness, and costly equipment, such as picosecond pulsed laser sources and single-photon counting detectors. TR-DOI systems also acquire a large amount of data and suffer from the computational cost of the image reconstruction process. These limitations hinder the use of TR-DOI in widespread potential applications such as clinical measurements.

The goal of this research project is to investigate approaches that eliminate two main limitations of TR-DOI systems. First, TR-DOS systems were built using custom-designed free-running (FR) and time-gated (TG) SPAD detectors fabricated in low-cost standard CMOS technology instead of costly photon counting and timing detectors. The FR-TR-DOS prototype demonstrated performance comparable (for measurements of homogeneous objects) to reported TR-DOS prototypes that use commercial, expensive detectors. The TG-TR-DOS prototype acquired raw data with a low noise level and a high dynamic range, enabling it to measure multilayered objects such as human heads. Second, a TR-DOI prototype was built and evaluated that uses a computationally efficient algorithm to reconstruct high-quality 3D tomographic images by analyzing only a small part of the acquired data. This work indicates the possibility of exploiting recent advances in silicon detector technology and computation to build low-cost, compact, portable TR-DOI systems. Such systems could expand the applications of TR-DOI and TR-DOS into several fields, such as oncology and neurology. / Thesis / Doctor of Philosophy (PhD)
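To make the spectroscopy side concrete, the following is a minimal sketch (not code from the thesis) of the inverse step TR-DOS performs: fitting the time-domain diffusion-theory Green's function for an infinite homogeneous medium to a temporal point spread function (TPSF) to recover the absorption and reduced scattering coefficients. The source-detector separation, refractive index, noise model, and initial guesses are illustrative assumptions.

import numpy as np
from scipy.optimize import curve_fit

C = 0.214  # speed of light in tissue (mm/ps), assuming refractive index ~1.4

def tpsf_model(t, mu_a, mu_s_prime, rho=20.0):
    # Diffusion-theory Green's function for an infinite homogeneous medium,
    # normalized to its peak; rho is an assumed source-detector separation (mm).
    D = 1.0 / (3.0 * mu_s_prime)  # diffusion coefficient (mm)
    phi = (C / (4.0 * np.pi * D * C * t) ** 1.5
           * np.exp(-rho ** 2 / (4.0 * D * C * t))
           * np.exp(-mu_a * C * t))
    return phi / phi.max()

# Simulate a noisy "measured" TPSF for mu_a = 0.01 /mm, mu_s' = 1.0 /mm
t = np.linspace(50.0, 4000.0, 400)  # ps; start past t = 0 to avoid the singularity
rng = np.random.default_rng(0)
measured = tpsf_model(t, 0.01, 1.0) + 0.01 * rng.normal(size=t.size)

# Recover the optical properties by nonlinear least squares
(mu_a_fit, mu_s_fit), _ = curve_fit(tpsf_model, t, measured, p0=[0.02, 0.5])
print(f"mu_a = {mu_a_fit:.4f} /mm, mu_s' = {mu_s_fit:.3f} /mm")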

Stochastic Dynamical Systems: New Schemes for Corrections of Linearization Errors and Dynamic Systems Identification

Raveendran, Tara January 2013
This thesis deals with the development and numerical exploration of a few improved Monte Carlo filters for nonlinear dynamical systems, with a view to estimating the associated states and parameters (i.e. the hidden states appearing in the system or process model) based on available noisy partial observations. The hidden states are characterized, subject to modelling errors, by the weak solutions of the process model, which is typically in the form of a system of stochastic ordinary differential equations (SDEs). The unknown system parameters, when included as pseudo-states within the process model, are made to evolve as Wiener processes. The observations may also be modelled by a set of measurement SDEs or, when collected at discrete time instants, by their temporally discretized maps. The proposed Monte Carlo filters aim at achieving robustness (i.e. insensitivity to variations in the noise parameters) and higher accuracy in the estimates whilst retaining applicability to large dimensional nonlinear filtering problems. The thesis begins with a brief review of the literature in Chapter 1.

The first development, reported in Chapter 2, is a nearly exact, semi-analytical, weak and explicit linearization scheme called the Girsanov Corrected Linearization Method (GCLM) for nonlinear mechanical oscillators under additive stochastic excitations. At the heart of the linearization is a temporally localized rejection sampling strategy that, combined with a resampling scheme, enables selecting from and appropriately modifying an ensemble of locally linearized trajectories whilst weakly applying the Girsanov correction (the Radon-Nikodym derivative) for the linearization errors. Through numerical implementations for a few workhorse nonlinear oscillators, the proposed variants of the scheme are shown to exhibit significantly higher numerical accuracy over a much larger range of time step sizes than is possible with the local drift-linearization schemes on their own.

This scheme for linearization correction is exploited and extended in Chapter 3, wherein novel variations within a particle filtering algorithm are proposed to weakly correct for the linearization or integration errors that occur while numerically propagating the process dynamics. Specifically, the correction for linearization, provided by the likelihood or the Radon-Nikodym derivative, is incorporated in two steps. The likelihood, an exponential martingale, is split into a product of two factors: correction owing to the first factor is implemented via rejection sampling, while the second factor, being directly computable, is accounted for via two schemes, one employing resampling and the other a gain-weighted innovation term added to the drift field of the process SDE, thereby overcoming the excessive sample dispersion caused by resampling. The proposed strategies, employed as add-ons to existing particle filters (the bootstrap and auxiliary SIR filters in this work), are found to non-trivially improve the convergence and accuracy of the estimates and to yield reduced mean square errors vis-a-vis those obtained through the parent filtering schemes.

In Chapter 4, we explore the unscented transformation of Gaussian random variables, as employed within a scaled Gaussian sum stochastic filter, as a means of applying nonlinear stochastic filtering theory to higher dimensional system identification problems. As an additional strategy to reconcile the evolving process dynamics with the observation history, the proposed filtering scheme also modifies the process model via the incorporation of gain-weighted innovation terms. The reported numerical work on the identification of dynamic models of dimension up to 100 is indicative of the potential of the proposed filter in realizing the stated aim of successfully treating relatively large dimensional filtering problems.

Chapter 5 proposes an iterated gain-based particle filter that is consistent with the form of the nonlinear filtering (Kushner-Stratonovich) equation, in an attempt to treat larger dimensional filtering problems with enhanced estimation accuracy. A crucial aspect of the proposed filtering set-up is that it retains the simplicity of implementation of the ensemble Kalman filter (EnKF). The numerical results obtained via EnKF-like simulations, with or without a reduced-rank unscented transformation, also indicate substantively improved filter convergence.

The final contribution, reported in Chapter 6, is an iterative, gain-based filter bank incorporating an artificial diffusion parameter; it may be viewed as an extension of the iterative filter of Chapter 5. While the filter bank helps explore the phase space of the state variables better, the iterative strategy based on the artificial diffusion parameter, which is lowered to zero over successive iterations, helps improve the mixing property of the associated iterative update kernels. These aspects gain importance for highly nonlinear filtering problems, including those involving significant initial mismatch between the process states and the measured ones. Numerical evidence of remarkably enhanced filter performance is provided by target tracking and structural health assessment applications.

The thesis concludes in Chapter 7 by summarizing these developments and briefly outlining future research directions.
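As a point of reference for the filters this thesis improves upon, below is a minimal bootstrap (SIR) particle filter sketch for a scalar nonlinear SDE, using Euler-Maruyama propagation, Gaussian likelihood weighting, and multinomial resampling. It contains none of the thesis's contributions (no Girsanov correction, gain-weighted innovation, or iterated updates); the drift, noise intensities, and observation model are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)
dt, T, N = 0.01, 500, 1000          # time step, number of steps, particles
sigma, obs_std = 0.5, 0.2           # process and observation noise (assumed)

def drift(x):
    return x - x ** 3               # illustrative nonlinear (Duffing-like) drift

# Ground-truth path of the process SDE and noisy observations of it
x_true = np.zeros(T)
for k in range(1, T):
    x_true[k] = (x_true[k - 1] + drift(x_true[k - 1]) * dt
                 + sigma * np.sqrt(dt) * rng.normal())
y = x_true + obs_std * rng.normal(size=T)

# Bootstrap filter: Euler-Maruyama proposal, likelihood weights, resampling
particles = rng.normal(0.0, 1.0, N)
estimate = np.zeros(T)
for k in range(T):
    particles += drift(particles) * dt + sigma * np.sqrt(dt) * rng.normal(size=N)
    logw = -0.5 * ((y[k] - particles) / obs_std) ** 2
    w = np.exp(logw - logw.max())
    w /= w.sum()
    estimate[k] = w @ particles                       # weighted-mean state estimate
    particles = rng.choice(particles, size=N, p=w)    # multinomial resampling

print("RMSE:", np.sqrt(np.mean((estimate - x_true) ** 2)))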

Development of Sparse Recovery Based Optimized Diffuse Optical and Photoacoustic Image Reconstruction Methods

Shaw, Calvin B January 2014
Diffuse optical tomography uses near infrared (NIR) light as the probing medium to recover the distribution of tissue optical properties, with the ability to provide functional information about the tissue under investigation. As NIR light propagation in tissue is dominated by scattering, the image reconstruction (inverse) problem is non-linear and ill-posed, requiring advanced computational methods to compensate for this. The diffuse optical image reconstruction problem is also rank-deficient, making it challenging to identify the independent measurements among those available. Knowing these independent measurements helps in designing better data acquisition set-ups and in lowering the associated costs. An optimal measurement selection strategy is proposed based on the incoherence among rows (corresponding to measurements) of the sensitivity (or weight) matrix for near infrared diffuse optical tomography. As incoherence among measurements can be seen as providing maximally independent information for the estimation of optical properties, it offers the degree of optimization needed to assess the independence of a particular measurement from its counterparts. The utility of the proposed scheme is demonstrated on simulated and experimental gelatin phantom data sets, in comparison with state-of-the-art methods.

Traditional image reconstruction methods employ the l2-norm in the regularization functional, resulting in smooth solutions in which sharp image features are absent. Sparse recovery methods, which utilize the lp-norm with p between 0 and 1 (0 ≤ p ≤ 1) along with an approximation to the l0-norm, are deployed for the reconstruction of diffuse optical images. These methods are shown to be more quantitative in reconstructing realistic diffuse optical images than traditional methods. Utilizing lp-norm based regularization makes the objective (cost) function non-convex, and algorithms that implement lp-norm minimization rely on approximations to the original lp-norm function. Three implementations of lp-norm minimization were considered, namely Iteratively Reweighted l1-minimization (IRL1), Iteratively Reweighted Least-Squares (IRLS), and the Iterative Thresholding Method (ITM). The results indicate that the IRL1 implementation provides optimal performance in terms of shape recovery and quantitative accuracy of the reconstructed diffuse optical tomographic images.

Photoacoustic tomography (PAT) is an emerging hybrid imaging modality combining optics with ultrasound imaging. PAT provides structural and functional imaging in diverse application areas, such as breast cancer and brain imaging. Model-based iterative reconstruction schemes are the most popular for recovering the initial pressure in the limited-data case, wherein a large linear system of equations needs to be solved. These iterative methods often require regularization parameter estimation, which tends to be computationally expensive, forcing the image reconstruction to be performed off-line. To overcome this limitation, a computationally efficient approach that computes the optimal regularization parameter is developed for PAT. The approach is based on the least squares QR (LSQR) decomposition, a well-known dimensionality reduction technique for large systems of equations. The proposed framework is shown to be effective in terms of quantitative and qualitative reconstruction of the initial pressure distribution.
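Of the three lp implementations compared above, IRLS is the simplest to sketch: the lp penalty is approximated at each iteration by a weighted l2 penalty and the resulting linear system is solved directly. The sketch below is a generic small-scale illustration under assumed values of p, the regularization weight lam, and the smoothing eps, not the thesis's reconstruction code or phantom data.

import numpy as np

def irls_lp(A, b, p=0.8, lam=1e-2, eps=1e-6, iters=50):
    # Minimize ||A x - b||^2 + lam * sum_i |x_i|^p by replacing the lp penalty
    # with a weighted l2 penalty, w_i = (x_i^2 + eps)^(p/2 - 1), each iteration.
    AtA, Atb = A.T @ A, A.T @ b
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        w = (x ** 2 + eps) ** (p / 2.0 - 1.0)
        x = np.linalg.solve(AtA + lam * np.diag(w), Atb)
    return x

# Underdetermined test problem with a sparse ground truth
rng = np.random.default_rng(2)
A = rng.normal(size=(60, 200))
x_true = np.zeros(200)
x_true[[10, 50, 120]] = [1.0, -0.7, 0.5]
b = A @ x_true + 0.01 * rng.normal(size=60)

x_hat = irls_lp(A, b)
print("largest-magnitude entries at indices:",
      np.sort(np.argsort(np.abs(x_hat))[-3:]))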

Automated Selection of Hyper-Parameters in Diffuse Optical Tomographic Image Reconstruction

Jayaprakash, * January 2013
Diffuse optical tomography is a promising imaging modality that provides functional information about soft biological tissues, with prime imaging applications including breast and brain tissue in-vivo. This modality uses near infrared light (600 nm-900 nm) as the probing medium, giving it the advantage of being a non-ionizing imaging modality. The image reconstruction problem in diffuse optical tomography is typically posed as a least-squares problem that minimizes the difference between experimental and modeled data with respect to the optical properties. This problem is non-linear and ill-posed, owing to the multiple scattering of near infrared light in biological tissues, leading to infinitely many possible solutions. Traditional methods employ a regularization term to constrain the solution space and stabilize the solution, with Tikhonov-type regularization being the most popular. The choice of this regularization parameter, also known as the hyper-parameter, dictates the reconstructed optical image quality and is typically made empirically or based on prior experience.

In this thesis, a simple back-projection type image reconstruction algorithm is first taken up, as such algorithms are known to provide computationally efficient solutions compared to regularized ones. In these algorithms, the hyper-parameter becomes equivalent to a filter factor, the choice of which typically depends on the sampling interval used for acquiring data in each projection and on the angle of projection. Determining these parameters for diffuse optical tomography is not straightforward and requires advanced computational models. A computationally efficient Simplex method based optimization scheme for automatically finding this filter factor is proposed, and its performance is evaluated through numerical and experimental phantom data. As back-projection type algorithms are approximations to traditional methods, the absolute quantitative accuracy of the reconstructed optical properties is poor. In scenarios like dynamic imaging, however, where the emphasis is on recovering relative differences in the optical properties, these algorithms are effective in comparison to traditional methods, with the added advantage of being highly computationally efficient.

In the second part of the thesis, the hyper-parameter choice for traditional Tikhonov-type regularization is addressed with the help of the Least-Squares QR decomposition (LSQR) method. Established techniques that enable the automated choice of hyper-parameters include Generalized Cross-Validation (GCV) and the regularized Minimal Residual Method (MRM), both of which carry a high computational overhead, making them prohibitive for real-time use. The proposed LSQR algorithm uses bidiagonalization of the system matrix to reduce the computational cost. The proposed LSQR-based algorithm for automated hyper-parameter choice is compared with the MRM method and is shown to be computationally more efficient through numerical and experimental phantom cases.
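The computational appeal of LSQR-based parameter selection is that the damped (Tikhonov) problem can be solved cheaply for many candidate parameters. The sketch below conveys that idea with a plain sweep over the damp parameter of scipy's lsqr, selecting lambda by the discrepancy principle; the thesis's actual method exploits the bidiagonalization more directly, and the test matrix, noise level, and selection rule here are illustrative assumptions.

import numpy as np
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(3)
A = rng.normal(size=(300, 120)) / np.sqrt(300)  # stand-in for a sensitivity matrix
x_true = rng.normal(size=120)
noise = 0.05 * rng.normal(size=300)
b = A @ x_true + noise
delta = np.linalg.norm(noise)                   # noise level, assumed known here

# Sweep the damping (Tikhonov) parameter; each solve is a cheap LSQR run.
best = None
for lam in np.logspace(-4, 1, 30):
    x = lsqr(A, b, damp=lam)[0]                 # solves min ||Ax-b||^2 + lam^2 ||x||^2
    score = abs(np.linalg.norm(A @ x - b) - delta)
    if best is None or score < best[0]:
        best = (score, lam, x)

score, lam_opt, x_opt = best
rel_err = np.linalg.norm(x_opt - x_true) / np.linalg.norm(x_true)
print(f"selected lambda = {lam_opt:.4g}, relative error = {rel_err:.3f}")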
