81 |
Ultrasound-Assisted Diffuse Correlation Spectroscopy: Recovery of Local Dynamics and Mechanical Properties in Soft Condensed Matter Materials. Chandran, Sriram R, January 2016 (PDF)
This thesis describes the development and applications of an extension of DWS which enables the recovery of ‘localized’ mechanical properties in a specified region of a complex, inhomogeneous jelly-like object, marked out by the focal volume of an ultrasound transducer and referred to as the region-of-interest (ROI). Introduction of the sinusoidal forcing creates a sinusoidal phase variation in the detected light in a DWS experiment, which modulates the measured intensity autocorrelation g2(τ). The decay of the modulation depth with τ is used to recover the visco-elastic spectrum of the material in the ROI. En route to this, the growth of the mean-squared displacement (MSD) with time is extracted from the modulation depth decay; this was verified first against the usual DWS experimental data from a homogeneous object with properties matching those in the ROI of the inhomogeneous object, and then against results obtained by solving the generalized Langevin equation (GLE) modelling the dynamics of a typical scattering centre in the ROI. A region-specific visco-elastic spectral map was obtained by scanning the inhomogeneous object with the ultrasound focal volume. Further, the resonant modes of the vibrating ROI were measured by locating the peaks of the modulation depth variation in g2(τ) with respect to the ultrasound frequency. These resonant modes were used to recover the elasticity of the material in the ROI. Using a similar strategy, it was also shown that flow in a pipe can be detected and the flow rate computed by ‘tagging’ the photons passing through the pipe with a focussed ultrasound beam. It is demonstrated, both through experiments and simulations, that the ultrasound-assisted technique developed is better suited than the usual DWS to both detect and quantitatively assess flow in a background of Brownian dynamics. In particular, the MSD of particles in the flow, which exhibits super-diffusive dynamics growing as τ^α with α < 2, is captured over larger intervals of τ than was possible using existing methods. On the theoretical front, the main contribution is the derivation of the GLE, with multiplicative noise modulating the interaction ‘spring constant’. The noise is derived as an average effect of the micropolar rotations suffered by the ‘bath’ particles on the ‘system’ particle modelled. It has been shown that the ‘local’ dynamics of the system particle is nontrivially influenced by the dynamics, both translational and rotational, of ‘nonlocal’ bath particles.
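The last step of this pipeline, turning a measured MSD into a visco-elastic spectrum, is commonly done through a generalized Stokes-Einstein relation. Below is a minimal Python sketch using Mason's local power-law approximation; the particle radius, temperature and the synthetic MSD are illustrative assumptions, not values or the exact inversion taken from the thesis.

```python
import numpy as np
from scipy.special import gamma as gamma_fn

def msd_to_moduli(tau, msd, radius, temperature=298.0):
    """Convert an MSD curve <Δr²(τ)> [m²] into G'(ω), G''(ω) [Pa] using
    Mason's local power-law (generalized Stokes-Einstein) approximation."""
    kB = 1.380649e-23                                  # Boltzmann constant [J/K]
    alpha = np.gradient(np.log(msd), np.log(tau))      # local log-log slope of the MSD
    omega = 1.0 / tau
    g_mag = kB * temperature / (np.pi * radius * msd * gamma_fn(1.0 + alpha))
    return omega, g_mag * np.cos(np.pi * alpha / 2), g_mag * np.sin(np.pi * alpha / 2)

# Illustrative use with a synthetic sub-diffusive MSD (all numbers are placeholders)
tau = np.logspace(-5, 0, 60)                           # lag times [s]
msd = 1e-15 * tau**0.6                                 # assumed power-law MSD [m²]
omega, G_storage, G_loss = msd_to_moduli(tau, msd, radius=0.25e-6)
```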
82.
3. Workshop "Meßtechnik für stationäre und transiente Mehrphasenströmungen" (Measurement Techniques for Stationary and Transient Multiphase Flows), 14 October 1999 in Rossendorf. Prasser, Horst-Michael, January 1999
On 14 October 1999 the third event in a series of workshops on measurement techniques for stationary and transient multiphase flows was held in Rossendorf. This year's workshop comprised 11 interesting presentations. Particularly noteworthy were the two keynote lectures, given by Professor Hetsroni from Haifa and Dr. Sengpiel from Karlsruhe. Once again an important focus was on measurement techniques that give access to spatial distributions of phase fractions and velocities as well as to the sizes of particles or bubbles of the disperse phase. Presentations covered a three-dimensional X-ray tomograph, a method for measuring velocity profiles with wire-mesh sensors, and a method for the simultaneous measurement of bubble sizes and of gas and liquid velocity fields using an optical particle-tracking technique. In addition, interesting developments in the field of local probes were presented, for example an electrodiffusion probe. New measurement approaches were also represented; worth highlighting is the attempt to make optical tomography usable for the investigation of two-phase flows. The proceedings contain the following contributions:
- S. John, R. Wilfer, N. Räbiger, Universität Bremen: Messung hydrodynamischer Parameter in Mehrphasenströmungen bei hohen Dispersphasengehalten mit Hilfe der Elektrodiffusionsmeßtechnik
- E. Krepper, A. Aszodi, Forschungszentrum Rossendorf: Temperatur- und Dampfgehaltsverteilungen bei Sieden in seitlich beheizten Tanks
- D. Hoppe, Forschungszentrum Rossendorf: Ein akustisches Resonanzverfahren zur Klassifizierung von Füllständen
- W. Sengpiel, V. Heinzel, M. Simon, Forschungszentrum Karlsruhe: Messungen der Eigenschaften von kontinuierlicher und disperser Phase in Luft-Wasser-Blasenströmungen
- R. Eschrich, VDI: Die Probestromentnahme zur Bestimmung der dispersen Phase einer Zweiphasenströmung
- U. Hampel, TU Dresden: Optische Tomographie
- O. Borchers, C. Busch, G. Eigenberger, Universität Stuttgart: Analyse der Hydrodynamik in Blasenströmungen mit einer Bildverarbeitungsmethode
- C. Zippe, Forschungszentrum Rossendorf: Beobachtung der Wechselwirkung von Blasen mit Gittersensoren mit einer Hochgeschwindigkeits-Videokamera
- H.-M. Prasser, Forschungszentrum Rossendorf: Geschwindigkeits- und Durchflußmessung mit Gittersensoren
83.
Development of Time-Resolved Diffuse Optical Systems Using SPAD Detectors and an Efficient Image Reconstruction Algorithm. Alayed, Mrwan, January 2019
Time-resolved diffuse optics is a powerful and safe technique for quantifying the optical properties (OP) of highly scattering media such as biological tissues. The OP values are correlated with the composition of the measured objects, in particular with tissue chromophores such as hemoglobin. The OP, chiefly the absorption and reduced scattering coefficients, can be quantified for highly scattering media using Time-Resolved Diffuse Optical Spectroscopy (TR-DOS) systems, and their spatial distribution in the measured media can be reconstructed using Time-Resolved Diffuse Optical Imaging (TR-DOI) systems. Therefore, TR-DOS and TR-DOI can be used for functional monitoring of the brain and muscles, and for diagnostic tasks such as the detection and localization of breast cancer and blood clots. In general, TR-DOI systems are non-invasive, reliable, and have a high temporal resolution.
TR-DOI systems have been known for their complexity, bulkiness, and costly equipment such as light sources (picosecond pulsed laser) and detectors (single photon counters). Also, TR-DOI systems acquire a large amount of data and suffer from the computational cost of the image reconstruction process. These limitations hinder the usage of TR-DOI for widespread potential applications such as clinical measurements.
The goals of this research project are to investigate approaches to eliminate two main limitations of TR-DOI systems. First, building TR-DOS systems using custom-designed free-running (FR) and time-gated (TG) SPAD detectors fabricated in low-cost standard CMOS technology instead of costly photon counting and timing detectors. The FR-TR-DOS prototype has demonstrated performance comparable (for measurements of homogeneous objects) with reported TR-DOS prototypes that use commercial, expensive detectors. The TG-TR-DOS prototype has acquired raw data with a low level of noise and a high dynamic range, enabling this prototype to measure multilayered objects such as the human head. Second, building and evaluating a TR-DOI prototype that uses a computationally efficient algorithm to reconstruct high quality 3D tomographic images by analyzing a small part of the acquired data.
This work indicates the possibility of exploiting recent advances in silicon detector technology and in computation to build low-cost, compact, portable TR-DOI systems. These systems can expand the applications of TR-DOI and TR-DOS into several fields such as oncology and neurology. / Thesis / Doctor of Philosophy (PhD)
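For the homogeneous-object measurements mentioned above, recovering the absorption and reduced scattering coefficients amounts to fitting a diffusion-theory model to the measured temporal curve. The sketch below fits the infinite-medium diffusion-theory Green's function with scipy; the geometry, source-detector separation and all numerical values are illustrative assumptions, not the calibration or model of the FR/TG prototypes.

```python
import numpy as np
from scipy.optimize import curve_fit

def tpsf_infinite(t, mua, musp, amp, rho=0.02, n_ref=1.4):
    """Diffusion-theory temporal point spread function for an infinite homogeneous
    medium; mua and musp in 1/m, rho (source-detector distance) in m, t in s."""
    c = 2.998e8 / n_ref                   # photon speed in the medium [m/s]
    D = 1.0 / (3.0 * musp)                # diffusion coefficient [m]
    return amp * (4 * np.pi * D * c * t) ** -1.5 * np.exp(-rho**2 / (4 * D * c * t) - mua * c * t)

# Synthetic "measurement" (mua = 0.1 /cm, musp = 10 /cm in SI units; illustrative only)
t = np.linspace(0.2e-9, 6e-9, 300)
y = tpsf_infinite(t, mua=10.0, musp=1000.0, amp=1.0)
y_noisy = y * (1 + 0.02 * np.random.default_rng(0).standard_normal(y.size))

# Recover the optical properties by nonlinear least squares
popt, _ = curve_fit(lambda t, mua, musp, amp: tpsf_infinite(t, mua, musp, amp),
                    t, y_noisy, p0=[5.0, 500.0, 0.5])
mua_fit, musp_fit, _ = popt
```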
84.
Développement d’un système de Topographie Optique Diffuse résolu en temps et hyperspectral pour la détection de l’activité cérébrale humaine / Development of a hyperspectral time-resolved DOT system for the monitoring of human brain activity. Lange, Frédéric, 28 January 2016
La Tomographie Optique Diffuse (TOD) est désormais une modalité d’imagerie médicale fonctionnelle reconnue. L’une des applications les plus répandues de cette technique est celle de l’imagerie fonctionnelle cérébrale chez l’Homme. En effet, cette technique présente de nombreux avantages, notamment grâce à la richesse des contrastes optiques accessibles. Néanmoins, certains verrous subsistent et freinent le développement de son utilisation, spécialement pour des applications chez l’Homme adulte en clinique ou dans des conditions particulières comme lors du suivi de l’activité sportive. En effet, le signal optique mesuré contient des informations venant de différentes profondeurs de la tête, et donc de différents types de tissus comme la peau ou le cerveau. Or, la réponse d’intérêt étant celle du cerveau, la réponse de la peau peut dégrader l’information recherchée. Dans ce contexte, ces travaux portent sur le développement d’un nouvel instrument de TOD permettant d’acquérir les dimensions spatiale, spectrale et de temps de vol du photon de façon simultanée, et ce à haute fréquence d’acquisition. Au cours de cette thèse, l’instrument a été développé et caractérisé sur fantôme optique. Ensuite, il a été validé in-vivo chez l’Homme adulte, notamment en détectant l’activité du cortex préfrontal en réponse à une tâche de calcul simple. Les informations multidimensionnelles acquises par notre système ont permis d’améliorer la séparation des contributions des différents tissus (Peau/Cerveau). Elles ont également permis de différencier la signature de la réponse physiologique de ces tissus, notamment en permettant de détecter les variations de concentration en Cytochrome-c-oxydase. Parallèlement à ce développement instrumental, des simulations Monte-Carlo de la propagation de la lumière dans un modèle anatomique de tête ont été effectuées. Ces simulations ont permis de mieux comprendre la propagation de la lumière dans les tissus en fonction de la longueur d’onde et de valider la pertinence de cette approche multidimensionnelle. Les perspectives de ces travaux de thèse se dirigent vers l’utilisation de cet instrument pour le suivi de la réponse du cerveau chez l’Homme adulte lors de différentes sollicitations comme des stimulations de TDCS, ou en réponse à une activité sportive. / Diffuse Optical Tomography (DOT) is now a recognized functional medical imaging modality. One of the most widespread applications of this technique is functional imaging of the human brain. Indeed, the technique has numerous advantages, in particular the richness of the accessible optical contrasts. Nevertheless, some obstacles remain and limit its wider use, especially for applications on adult subjects in the clinic or in particular settings such as the monitoring of sports activity. Indeed, the measured optical signal contains information coming from different depths of the head, and hence from different tissue types such as skin and brain. Since the response of interest is that of the brain, the skin response can degrade the sought information. In this context, this work concerns the development of a new DOT instrument capable of acquiring the spatial, spectral and photon time-of-flight dimensions simultaneously and at a high acquisition rate. During the thesis the instrument was developed and characterised on optical phantoms. It was then validated in vivo on adult subjects, in particular by detecting activation of the prefrontal cortex in response to a simple calculation task. The multidimensional information acquired by our system improved the separation of the contributions of the different tissues (skin/brain). It also made it possible to differentiate the physiological signatures of these tissues, in particular by detecting variations in cytochrome-c-oxidase concentration. In parallel with this instrumental development, Monte Carlo simulations of light propagation in an anatomical model of the head were performed. These simulations provided a better understanding of light propagation in tissue as a function of wavelength and validated the relevance of this multidimensional approach. A perspective of this work is to use the instrument to monitor the adult human brain's response to various solicitations, such as tDCS stimulation or physical exercise.
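The separation of haemoglobin and cytochrome-c-oxidase contributions described above reduces, in its simplest form, to a linear least-squares inversion of the modified Beer-Lambert law across wavelengths. The sketch below shows that step; the extinction-coefficient matrix and pathlength are placeholder assumptions that must be replaced by tabulated spectra, and it illustrates the principle rather than the instrument's actual processing chain.

```python
import numpy as np

# Rows: wavelengths; columns: chromophores (HbO2, HHb, oxCCO).
# PLACEHOLDER extinction coefficients -- substitute tabulated spectra before use.
E = np.array([[1.0, 2.5, 0.4],
              [1.6, 1.8, 0.9],
              [2.2, 1.1, 1.3],
              [2.9, 0.8, 1.0]])           # assumed units: 1/(mM·cm)
pathlength = 25.0                         # assumed effective optical pathlength [cm]

def delta_concentrations(delta_attenuation):
    """Solve ΔA(λ) = E · Δc · L for Δc (modified Beer-Lambert law, least squares)."""
    dc, *_ = np.linalg.lstsq(E * pathlength, delta_attenuation, rcond=None)
    return dc                             # changes in [HbO2], [HHb], [oxCCO] in mM

# Example: attenuation changes measured at the four wavelengths (arbitrary numbers)
print(delta_concentrations(np.array([0.012, 0.010, 0.008, 0.007])))
```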
85.
Stochastic Dynamical Systems: New Schemes for Corrections of Linearization Errors and Dynamic Systems Identification. Raveendran, Tara, January 2013 (PDF)
This thesis essentially deals with the development and numerical explorations of a few improved Monte Carlo filters for nonlinear dynamical systems, with a view to estimating the associated states and parameters (i.e. the hidden states appearing in the system or process model) based on the available noisy partial observations. The hidden states are characterized, subject to modelling errors, by the weak solutions of the process model, which is typically in the form of a system of stochastic ordinary differential equations (SDEs). The unknown system parameters, when included as pseudo-states within the process model, are made to evolve as Wiener processes. The observations may also be modelled by a set of measurement SDEs or, when collected at discrete time instants, their temporally discretized maps. The proposed Monte Carlo filters aim at achieving robustness (i.e. insensitivity to variations in the noise parameters) and higher accuracy in the estimates whilst retaining the important feature of applicability to large dimensional nonlinear filtering problems.
The thesis begins with a brief review of the literature in Chapter 1. The first development, reported in Chapter 2, is that of a nearly exact, semi-analytical, weak and explicit linearization scheme called the Girsanov Corrected Linearization Method (GCLM) for nonlinear mechanical oscillators under additive stochastic excitations. At the heart of the linearization is a temporally localized rejection sampling strategy that, combined with a resampling scheme, enables selecting from and appropriately modifying an ensemble of locally linearized trajectories whilst weakly applying the Girsanov correction (the Radon-Nikodym derivative) for the linearization errors. Through their numerical implementations for a few workhorse nonlinear oscillators, the proposed variants of the scheme are shown to exhibit significantly higher numerical accuracy over a much larger range of the time step size than is possible with the local drift-linearization schemes on their own.
The above scheme for linearization correction is exploited and extended in Chapter 3, wherein novel variations within a particle filtering algorithm are proposed to weakly correct for the linearization or integration errors that occur while numerically propagating the process dynamics. Specifically, the correction for linearization, provided by the likelihood or the Radon-Nikodym derivative, is incorporated in two steps. Once the likelihood, an exponential martingale, is split into a product of two factors, correction owing to the first factor is implemented via rejection sampling in the first step. The second factor, being directly computable, is accounted for via two schemes: one employing resampling and the other a gain-weighted innovation term added to the drift field of the process SDE, thereby overcoming the excessive sample dispersion caused by resampling. The proposed strategies, employed as add-ons to existing particle filters (the bootstrap and auxiliary SIR filters in this work), are found to non-trivially improve the convergence and accuracy of the estimates and also yield reduced mean square errors of such estimates vis-à-vis those obtained through the parent filtering schemes.
In Chapter 4, we explore the possibility of using the unscented transformation on Gaussian random variables, as employed within a scaled Gaussian sum stochastic filter, as a means of applying nonlinear stochastic filtering theory to higher dimensional system identification problems. As an additional strategy to reconcile the evolving process dynamics with the observation history, the proposed filtering scheme also modifies the process model via the incorporation of gain-weighted innovation terms. The reported numerical work on the identification of dynamic models of dimension up to 100 is indicative of the potential of the proposed filter in realizing the stated aim of successfully treating relatively larger dimensional filtering problems.
We propose in Chapter 5 an iterated gain-based particle filter that is consistent with the form of the nonlinear filtering (Kushner-Stratonovich) equation in our attempt to treat larger dimensional filtering problems with enhanced estimation accuracy. A crucial aspect of the proposed filtering set-up is that it retains the simplicity of implementation of the ensemble Kalman filter (EnKF). The numerical results obtained via EnKF-like simulations with or without a reduced-rank unscented transformation also indicate substantively improved filter convergence.
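For reference, the point of departure for Chapter 5, the standard stochastic ensemble Kalman filter analysis step with perturbed observations, can be written in a few lines; the sketch below is that generic update, not the iterated gain-based filter proposed in the thesis, and the sizes in the usage example are arbitrary.

```python
import numpy as np

def enkf_update(ensemble, obs, H, R, rng):
    """One stochastic EnKF analysis step with perturbed observations.
    ensemble: (n_state, n_members), obs: (n_obs,), H: (n_obs, n_state), R: (n_obs, n_obs)."""
    n_members = ensemble.shape[1]
    Hx = H @ ensemble                                          # predicted observations
    X = ensemble - ensemble.mean(axis=1, keepdims=True)        # state anomalies
    Y = Hx - Hx.mean(axis=1, keepdims=True)                    # observation anomalies
    Pxy = X @ Y.T / (n_members - 1)
    Pyy = Y @ Y.T / (n_members - 1) + R
    K = Pxy @ np.linalg.solve(Pyy, np.eye(len(obs)))           # Kalman gain
    obs_pert = obs[:, None] + rng.multivariate_normal(np.zeros(len(obs)), R, n_members).T
    return ensemble + K @ (obs_pert - Hx)                      # analysis ensemble

# Illustrative sizes: 100-dimensional state, 40 members, 10 observations
rng = np.random.default_rng(0)
ens = rng.standard_normal((100, 40))
H = rng.standard_normal((10, 100))
updated = enkf_update(ens, H @ rng.standard_normal(100), H, 0.1 * np.eye(10), rng)
```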
The final contribution, reported in Chapter 6, is an iterative, gain-based filter bank incorporating an artificial diffusion parameter; it may be viewed as an extension of the iterative filter in Chapter 5. While the filter bank helps in exploring the phase space of the state variables better, the iterative strategy based on the artificial diffusion parameter, which is lowered to zero over successive iterations, helps improve the mixing property of the associated iterative update kernels. These aspects gain importance for highly nonlinear filtering problems, including those involving a significant initial mismatch between the process states and the measured ones. Numerical evidence of remarkably enhanced filter performance is provided through target tracking and structural health assessment applications.
The thesis is wound up in Chapter 7 by summarizing these developments and briefly outlining future research directions.
86.
Development of Sparse Recovery Based Optimized Diffuse Optical and Photoacoustic Image Reconstruction Methods. Shaw, Calvin B, January 2014 (PDF)
Diffuse optical tomography uses near infrared (NIR) light as the probing medium to recover the distributions of tissue optical properties, with an ability to provide functional information about the tissue under investigation. As NIR light propagation in tissue is dominated by scattering, the image reconstruction problem (inverse problem) is non-linear and ill-posed, requiring the use of advanced computational methods to compensate for this.
The diffuse optical image reconstruction problem is always rank-deficient, and finding the independent measurements among those available becomes a challenging problem. Knowing these independent measurements helps in designing better data acquisition set-ups and in lowering the associated costs. An optimal measurement selection strategy based on incoherence among rows (corresponding to measurements) of the sensitivity (or weight) matrix is proposed for near infrared diffuse optical tomography. As incoherence among the measurements can be seen as providing maximally independent information for the estimation of optical properties, this yields the level of optimization required for assessing the independence of a particular measurement from its counterparts. The utility of the proposed scheme is demonstrated using simulated and experimental gelatin phantom data sets and compared with state-of-the-art methods.
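One plausible greedy realization of such incoherence-driven selection, adding at each step the measurement (row of the sensitivity matrix) least correlated with those already chosen, is sketched below; this illustrates the idea rather than the exact criterion used in the thesis, and the random matrix stands in for a real Jacobian.

```python
import numpy as np

def select_incoherent_rows(J, n_select):
    """Greedily pick rows of the sensitivity matrix J that are mutually most
    incoherent (smallest worst-case normalized inner product with the chosen set)."""
    Jn = J / np.linalg.norm(J, axis=1, keepdims=True)           # unit-norm rows
    selected = [int(np.argmax(np.linalg.norm(J, axis=1)))]      # seed with the strongest row
    remaining = set(range(J.shape[0])) - set(selected)
    while len(selected) < n_select and remaining:
        coherence = {i: np.max(np.abs(Jn[selected] @ Jn[i])) for i in remaining}
        best = min(coherence, key=coherence.get)                # least coherent candidate
        selected.append(best)
        remaining.remove(best)
    return selected

# Example: keep 32 of 256 candidate measurements (random matrix stands in for J)
J = np.random.default_rng(1).standard_normal((256, 1000))
kept_rows = select_incoherent_rows(J, 32)
```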
The traditional image reconstruction methods employ the ℓ2-norm in the regularization functional, resulting in smooth solutions in which sharp image features are absent. Sparse recovery methods, which utilize the ℓp-norm with p between 0 and 1 (0 ≤ p ≤ 1) along with an approximation to the ℓ0-norm, have been deployed for the reconstruction of diffuse optical images. These methods are shown to have better utility in terms of being more quantitative in reconstructing realistic diffuse optical images compared to traditional methods.
Utilization of ℓp-norm based regularization makes the objective (cost) function non-convex, and the algorithms that implement ℓp-norm minimization utilize approximations to the original ℓp-norm function. Three methods for implementing the ℓp-norm were considered, namely Iteratively Reweighted ℓ1-minimization (IRL1), Iteratively Reweighted Least-Squares (IRLS), and the Iterative Thresholding Method (ITM). The results indicated that the IRL1 implementation of ℓp-minimization provides optimal performance in terms of shape recovery and quantitative accuracy of the reconstructed diffuse optical tomographic images.
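As an illustration of the IRL1 idea, the sketch below wraps a plain proximal-gradient (ISTA) solver for the weighted ℓ1 subproblem inside the reweighting loop w_i = (|x_i| + ε)^(p-1); the problem sizes, regularization weight and inner solver are assumptions for a small dense example, not the configuration reported in the thesis.

```python
import numpy as np

def weighted_l1_ista(A, b, weights, lam, n_iter=300):
    """Minimize 0.5||Ax - b||² + lam·Σ w_i|x_i| by proximal gradient (ISTA)."""
    L = np.linalg.norm(A, 2) ** 2                       # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - b) / L                   # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam * weights / L, 0.0)   # soft threshold
    return x

def irl1(A, b, p=0.5, lam=1e-2, eps=1e-3, n_outer=10):
    """Iteratively Reweighted l1 (IRL1) surrogate for lp-regularized recovery."""
    x = np.zeros(A.shape[1])
    for _ in range(n_outer):
        w = (np.abs(x) + eps) ** (p - 1.0)              # reweighting from the current iterate
        x = weighted_l1_ista(A, b, w, lam)
    return x

# Toy usage: sparse recovery from an underdetermined system (illustrative sizes)
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 200))
x_true = np.zeros(200)
x_true[rng.choice(200, 8, replace=False)] = rng.standard_normal(8)
x_rec = irl1(A, A @ x_true)
```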
Photoacoustic tomography (PAT) is an emerging hybrid imaging modality combining optics with ultrasound imaging. PAT provides structural and functional imaging in diverse application areas, such as breast cancer and brain imaging. Model-based iterative reconstruction schemes are the most popular for recovering the initial pressure in the limited-data case, wherein a large linear system of equations needs to be solved. Often, these iterative methods require regularization parameter estimation, which tends to be a computationally expensive procedure, forcing the image reconstruction to be performed off-line. To overcome this limitation, a computationally efficient approach that computes the optimal regularization parameter is developed for PAT. This approach is based on the least squares QR (LSQR) decomposition, a well-known dimensionality reduction technique for a large system of equations. It is shown that the proposed framework is effective in terms of quantitative and qualitative reconstructions of the initial pressure distribution.
87.
A Stochastic Search Approach to Inverse Problems. Venugopal, Mamatha, January 2016 (PDF)
The focus of the thesis is on the development of a few stochastic search schemes for inverse problems and their applications in medical imaging. After the introduction in Chapter 1 that motivates and puts in perspective the work done in later chapters, the main body of the thesis may be viewed as composed of two parts: while the first part concerns the development of stochastic search algorithms for inverse problems (Chapters 2 and 3), the second part elucidates on the applicability of search schemes to inverse problems of interest in tomographic imaging (Chapters 4 and 5). The chapter-wise contributions of the thesis are summarized below.
Chapter 2 proposes a Monte Carlo stochastic filtering algorithm for the recursive estimation of diffusive processes in linear/nonlinear dynamical systems that modulate the instantaneous rates of Poisson measurements. The same scheme is applicable when the set of partial and noisy measurements is of a diffusive nature. A key aspect of our development here is the filter-update scheme, derived from an ensemble approximation of the time-discretized nonlinear Kushner-Stratonovich equation, that is modified to account for Poisson-type measurements. Specifically, the additive update through a gain-like correction term, empirically approximated from the innovation integral in the filtering equation, eliminates the problem of particle collapse encountered in many conventional particle filters that adopt weight-based updates. Through a few numerical demonstrations, the versatility of the proposed filter is brought forth, first with application to filtering problems with diffusive or Poisson-type measurements and then to an automatic control problem wherein the extremization of the associated cost functional is achieved simply by an appropriate redefinition of the innovation process.
The aim of one of the numerical examples in Chapter 2 is to minimize the structural response of a Duffing oscillator under external forcing. We pose this problem of active control within a filtering framework wherein the goal is to estimate the control force that minimizes an appropriately chosen performance index. We employ the proposed filtering algorithm to estimate the control force and the oscillator displacements and velocities that are minimized as a result of the application of the control force. While Fig. 1 shows the time histories of the uncontrolled and controlled displacements and velocities of the oscillator, a plot of the estimated control force against the external force applied is given in Fig. 2.
Fig. 1. A plot of the time histories of the uncontrolled and controlled (a) displacements and (b) velocities.
Fig. 2. A plot of the time histories of the external force and the estimated control force.
Stochastic filtering, despite its numerous applications, amounts only to a directed search and is best suited for inverse problems and optimization problems with unimodal solutions. In view of general optimization problems involving multimodal objective functions with a priori unknown optima, filtering, similar to a regularized Gauss-Newton (GN) method, may only serve as a local (or quasi-local) search. In Chapter 3, therefore, we propose a stochastic search (SS) scheme that, whilst maintaining the basic structure of a filtered martingale problem, also incorporates randomization techniques such as scrambling and blending, which are meant to aid in avoiding the so-called local traps. The key contribution of this chapter is the introduction of yet another technique, termed state space splitting (3S), a paradigm based on the principle of divide-and-conquer. The 3S technique, incorporated within the optimization scheme, offers a better assimilation of measurements and is found to outperform filtering in the context of quantitative photoacoustic tomography (PAT), recovering the optical absorption field from sparsely available PAT data using a bare minimum ensemble. Moreover, the proposed scheme is numerically shown to be better than, or at least as good as, CMA-ES (covariance matrix adaptation evolution strategy), one of the best performing optimization schemes, in minimizing a set of benchmark functions.
Table 1 gives the comparative performance of the proposed scheme and CMA-ES in minimizing a set of 40-dimensional functions (F1-F20), all of which have their global minimum at 0, using an ensemble size of 20. Here, 10⁻⁵ is the tolerance limit to be attained for the objective function value and MAX is the maximum number of iterations permitted to the optimization scheme to arrive at the global minimum.
Table 1. Performance of the SS scheme and CMA-ES.
Chapter 4 gathers numerical and experimental evidence to support our conjecture in the previous chapters that even a quasi-local search (afforded, for instance, by the filtered martingale problem) is generally superior to a regularized GN method in solving inverse problems. Specifically, in this chapter, we solve the inverse problems of ultrasound modulated optical tomography (UMOT) and diffraction tomography (DT). In UMOT, we perform a spatially resolved recovery of the mean-squared displacements p(r) of the scattering centres in a diffusive object by measuring the modulation depth in the decaying autocorrelation of the incident coherent light. This modulation is induced by the input ultrasound focussed to a specific region, referred to as the region of interest (ROI), in the object. Since the ultrasound-induced displacements are a measure of the material stiffness, in principle, UMOT can be applied for the early diagnosis of cancer in soft tissues. In DT, on the other hand, we recover the real refractive index distribution n(r) of an optical fiber from experimentally acquired transmitted intensity of light traversing through it. In both cases, the filtering step encoded within the optimization scheme yields superior reconstructions vis-à-vis the GN method in terms of quantitative accuracy.
Fig. 3 gives a comparative cross-sectional plot through the centre of the reference and reconstructed p(r) images in UMOT when the ROI is at the centre of the object. Here, the anomaly is presented as an increase in the displacements and is at the centre of the ROI.
Fig. 4 shows the comparative cross-sectional plot of the reference and reconstructed refractive index distributions n(r) of the optical fiber in DT.
Fig. 3. Cross-sectional plot through the center of the reference and reconstructed p(r) images.
Fig. 4. Cross-sectional plot through the center of the reference and reconstructed n(r) distributions.
In Chapter 5, the SS scheme is applied to our main application, viz. photoacoustic tomography (PAT), for the recovery of the absorbed energy map, the optical absorption coefficient and the chromophore concentrations in soft tissues. Nevertheless, the main contribution of this chapter is to provide a single-step method for the recovery of the optical absorption field from both simulated and experimental time-domain PAT data. A single-step direct recovery is shown to yield better reconstruction than the generally adopted two-step method for quantitative PAT. Such a quantitative reconstruction may be converted to a functional image through a linear map. Alternatively, one could also perform a one-step recovery of the chromophore concentrations from the boundary pressure, as shown using simulated data in this chapter. Being a Monte Carlo scheme, the SS scheme is highly parallelizable and the availability of such a machine-ready inversion scheme should finally enable PAT to emerge as a clinical tool in medical diagnostics.
Given below in Fig. 5 is a comparison of the optical absorption map of the Shepp-Logan phantom with the reconstruction obtained as a result of a direct (1-step) recovery.
Fig. 5. The (a) exact and (b) reconstructed optical absorption maps of the Shepp-Logan phantom. The x- and y-axes are in m and the colormap is in mm⁻¹.
Chapter 6 concludes the work with a brief summary of the results obtained and suggestions for future exploration of some of the schemes and applications described in this thesis.
88.
Automated Selection of Hyper-Parameters in Diffuse Optical Tomographic Image Reconstruction. Jayaprakash, January 2013 (PDF)
Diffuse optical tomography is a promising imaging modality that provides functional information about soft biological tissues, with prime imaging applications including breast and brain tissue in vivo. This modality uses near infrared light (600 nm-900 nm) as the probing medium, giving it the advantage of being a non-ionizing imaging modality.
The image reconstruction problem in diffuse optical tomography is typically posed as a least-squares problem that minimizes the difference between experimental and modeled data with respect to the optical properties. This problem is non-linear and ill-posed, due to multiple scattering of the near infrared light in biological tissues, leading to infinitely many possible solutions. Traditional methods employ a regularization term to constrain the solution space as well as stabilize the solution, with Tikhonov type regularization being the most popular. The choice of this regularization parameter, also known as the hyper parameter, dictates the reconstructed optical image quality and is typically made empirically or based on prior experience.
In this thesis, a simple back-projection type image reconstruction algorithm is taken up, as such algorithms are known to provide computationally efficient solutions compared to regularized ones. In these algorithms, the hyper parameter becomes equivalent to a filter factor, whose choice typically depends on the sampling interval used for acquiring data in each projection and on the projection angle. Determining these parameters for diffuse optical tomography is not so straightforward and requires the use of advanced computational models. In this thesis, a computationally efficient simplex-method-based optimization scheme for automatically finding this filter factor is proposed, and its performance is evaluated through numerical and experimental phantom data. As back-projection type algorithms are approximations to traditional methods, the absolute quantitative accuracy of the reconstructed optical properties is poor. In scenarios like dynamic imaging, where the emphasis is on recovering relative differences in the optical properties, these algorithms are effective in comparison to traditional methods, with the added advantage of being highly computationally efficient.
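A toy sketch of this idea, tuning the single filter factor with a derivative-free simplex (Nelder-Mead) search, is given below; the blur-type forward matrix, the back-projection-style estimator and the discrepancy-style objective are illustrative assumptions and not the forward model or criterion used in the thesis.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n = 100
idx = np.arange(n)
A = np.exp(-((idx[:, None] - idx[None, :]) / 5.0) ** 2)   # ill-conditioned blur-type toy matrix
x_true = np.sin(2 * np.pi * idx / n)
noise = 0.01 * rng.standard_normal(n)
b = A @ x_true + noise
delta = np.linalg.norm(noise)                             # assumed-known noise level

def backproject(filter_factor):
    """Back-projection-type estimate controlled by a single filter factor."""
    return A.T @ np.linalg.solve(A @ A.T + filter_factor * np.eye(n), b)

def objective(log_ff):
    ff = np.exp(log_ff[0])                                # search in log space to keep ff > 0
    residual = np.linalg.norm(A @ backproject(ff) - b)
    return (residual - delta) ** 2                        # discrepancy-principle style criterion

result = minimize(objective, x0=[np.log(1e-3)], method="Nelder-Mead")   # simplex search
best_filter_factor = float(np.exp(result.x[0]))
```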
In the second part of this thesis, this hyper parameter choice for traditional Tikhonov type regularization is attempted with the help of the Least-Squares QR decomposition (LSQR) method. Established techniques that enable the automated choice of hyper parameters include Generalized Cross-Validation (GCV) and the regularized Minimal Residual Method (MRM), both of which come with a high overhead of computation time, making them prohibitive for real-time use. The proposed LSQR algorithm uses bidiagonalization of the system matrix to reduce the computational cost. The proposed LSQR-based algorithm for automated choice of the hyper parameter is compared with MRM methods and is shown to be the computationally optimal technique through numerical and experimental phantom cases.
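The computational advantage of an LSQR-style approach comes from projecting the problem once, via Golub-Kahan bidiagonalization, onto a small bidiagonal system, after which many candidate regularization parameters can be evaluated almost for free. A generic sketch of that mechanism is given below; the dense random matrix stands in for a real sensitivity matrix, and the selection criterion applied to the sweep (compared in the thesis against GCV and MRM) is not reproduced here.

```python
import numpy as np

def golub_kahan(A, b, k):
    """k steps of Golub-Kahan bidiagonalization: A·V_k ≈ U_{k+1}·B_k."""
    m, n = A.shape
    U = np.zeros((m, k + 1))
    V = np.zeros((n, k))
    alpha = np.zeros(k)
    beta = np.zeros(k + 1)
    beta[0] = np.linalg.norm(b)
    U[:, 0] = b / beta[0]
    for i in range(k):
        v = A.T @ U[:, i] - (beta[i] * V[:, i - 1] if i > 0 else 0.0)
        alpha[i] = np.linalg.norm(v)
        V[:, i] = v / alpha[i]
        u = A @ V[:, i] - alpha[i] * U[:, i]
        beta[i + 1] = np.linalg.norm(u)
        U[:, i + 1] = u / beta[i + 1]
    B = np.zeros((k + 1, k))                       # (k+1) x k lower-bidiagonal projection
    B[np.arange(k), np.arange(k)] = alpha
    B[np.arange(1, k + 1), np.arange(k)] = beta[1:]
    return U, B, V, beta[0]

def tikhonov_sweep(A, b, k=30, lambdas=np.logspace(-6, 1, 40)):
    """Project once, then evaluate Tikhonov solutions for many λ on the small system."""
    U, B, V, beta1 = golub_kahan(A, b, k)
    rhs = np.zeros(k + 1)
    rhs[0] = beta1
    solutions = {}
    for lam in lambdas:
        stacked = np.vstack([B, lam * np.eye(k)])  # damped least squares in the Krylov basis
        y = np.linalg.lstsq(stacked, np.concatenate([rhs, np.zeros(k)]), rcond=None)[0]
        solutions[lam] = V @ y
    return solutions

# Example: dense toy system standing in for the large DOT sensitivity matrix
rng = np.random.default_rng(3)
A = rng.standard_normal((300, 500))
b = A @ rng.standard_normal(500) + 0.05 * rng.standard_normal(300)
candidate_solutions = tikhonov_sweep(A, b, k=30)
```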
89.
Active geometric model: multi-compartment model-based segmentation & registration. Mukherjee, Prateep, 26 August 2014
Indiana University-Purdue University Indianapolis (IUPUI) / We present a novel, variational and statistical approach for model-based segmentation. Our model generalizes the Chan-Vese model, proposed for concurrent segmentation of multiple objects embedded in the same image domain. We also propose a novel shape descriptor, namely the Multi-Compartment Distance Functions or mcdf. Our proposed framework for segmentation is two-fold: first, several training samples distributed across various classes are registered onto a common frame of reference; then, we use a variational method similar to Active Shape Models (or ASMs) to generate an average shape model and hence use the latter to partition new images. The key advantages of such a framework are: (i) landmark-free automated shape training; (ii) a strictly shape-constrained model for fitting test data. Our model can naturally deal with shapes of arbitrary dimension and topology (closed/open curves). We term our model the Active Geometric Model, since it focuses on segmentation of geometric shapes. We demonstrate the power of the proposed framework in two important medical applications: one for morphology estimation of 3D Motor Neuron compartments, another for thickness estimation of Henle's Fiber Layer in the retina. We also compare the qualitative and quantitative performance of our method with that of several other state-of-the-art segmentation methods.
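For context, the classical two-phase Chan-Vese model that the Active Geometric Model generalizes can be run in a few lines with scikit-image; the sketch below segments a synthetic two-region image and is only this single-object baseline, not the multi-compartment formulation or the mcdf descriptor introduced in the thesis.

```python
import numpy as np
from skimage.segmentation import chan_vese

# Synthetic image: a bright disc on a noisy background (illustrative data only)
yy, xx = np.mgrid[:128, :128]
image = ((xx - 64) ** 2 + (yy - 64) ** 2 < 30 ** 2).astype(float)
image += 0.2 * np.random.default_rng(0).standard_normal(image.shape)

# Classical two-phase Chan-Vese: evolve a level set minimizing the piecewise-constant energy
segmentation = chan_vese(image, mu=0.25, lambda1=1.0, lambda2=1.0)
```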
90.
Real-time adaptive-optics optical coherence tomography (AOOCT) image reconstruction on a GPU. Shafer, Brandon Andrew, January 2014
Indiana University-Purdue University Indianapolis (IUPUI) / Adaptive-optics optical coherence tomography (AOOCT) is a technology that has been rapidly advancing in recent years and offers amazing capabilities in scanning the human eye in vivo. In order to bring the ultra-high resolution capabilities to clinical use, however, newer technology needs to be used in the image reconstruction process. General purpose computation on graphics processing units is one such way that this computationally intensive reconstruction can be performed in a desktop computer in real-time. This work shows the process of AOOCT image reconstruction, the basics of how to use NVIDIA's CUDA to write parallel code, and a new AOOCT image reconstruction technology implemented using NVIDIA's CUDA. The results of this work demonstrate that image reconstruction can be done in real-time with high accuracy using a GPU.
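A stripped-down version of such a GPU pipeline (background subtraction, spectral windowing and a Fourier transform over wavenumber for a batch of A-scans) can be expressed in Python with CuPy, as sketched below; real AOOCT reconstruction also needs k-linearization, dispersion compensation and wavefront-related corrections that are omitted here, and the array sizes are arbitrary assumptions.

```python
import numpy as np
import cupy as cp

def reconstruct_bscan(spectra):
    """Basic FD-OCT B-scan reconstruction on the GPU.
    spectra: (n_ascans, n_samples) raw spectrometer frames on the host."""
    frames = cp.asarray(spectra, dtype=cp.float32)          # copy to GPU memory
    frames -= frames.mean(axis=0, keepdims=True)            # remove the DC/background spectrum
    window = cp.asarray(np.hanning(frames.shape[1]), dtype=cp.float32)
    depth = cp.fft.ifft(frames * window, axis=1)            # wavenumber -> depth transform
    bscan = 20 * cp.log10(cp.abs(depth[:, : frames.shape[1] // 2]) + 1e-12)
    return cp.asnumpy(bscan)                                # back to the host for display

# Illustrative call with random data standing in for camera frames
bscan = reconstruct_bscan(np.random.rand(512, 2048))
```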