81 
3. Workshop "Meßtechnik für stationäre und transiente Mehrphasenströmungen" (Measurement Techniques for Stationary and Transient Multiphase Flows), 14 October 1999 in Rossendorf — Prasser, Horst-Michael, January 1999 (has links)
On 14 October 1999 the third event in a series of workshops on measurement techniques for stationary and transient multiphase flows was held in Rossendorf. This year's meeting comprised 11 interesting talks. Particularly noteworthy were the two keynote lectures, given by Professor Hetsroni from Haifa and Dr. Sengpiel from Karlsruhe. Once again, an important focus was on measurement techniques that give access to spatial distributions of phase fractions and velocities, as well as to the size of particles or bubbles of the disperse phase. Presentations covered a three-dimensional X-ray tomograph, a method for measuring velocity profiles with wire-mesh sensors, and a technique for the simultaneous measurement of bubble sizes and of gas- and liquid-velocity fields using optical particle tracking. In addition, interesting developments in the field of local probes were presented, such as an electrodiffusion probe. New measurement approaches were also represented; of particular note is the attempt to make optical tomography usable for the investigation of two-phase flows. The proceedings contain the following contributions: S. John, R. Wilfer, N. Räbiger, Universität Bremen: Measurement of hydrodynamic parameters in multiphase flows at high disperse-phase fractions using electrodiffusion measurement techniques. E. Krepper, A. Aszodi, Forschungszentrum Rossendorf: Temperature and void-fraction distributions during boiling in laterally heated tanks. D. Hoppe, Forschungszentrum Rossendorf: An acoustic resonance method for the classification of fill levels. W. Sengpiel, V. Heinzel, M. Simon, Forschungszentrum Karlsruhe: Measurements of the properties of the continuous and disperse phases in air-water bubble flows. R. Eschrich, VDI: Sample-stream extraction for the determination of the disperse phase of a two-phase flow. U. Hampel, TU Dresden: Optical tomography. O. Borchers, C. Busch, G. Eigenberger, Universität Stuttgart: Analysis of the hydrodynamics in bubble flows with an image-processing method. C. Zippe, Forschungszentrum Rossendorf: Observation of the interaction of bubbles with wire-mesh sensors using a high-speed video camera. H.-M. Prasser, Forschungszentrum Rossendorf: Velocity and flow-rate measurement with wire-mesh sensors.

82 
Development of Time-Resolved Diffuse Optical Systems Using SPAD Detectors and an Efficient Image Reconstruction Algorithm — Alayed, Mrwan, January 2019 (has links)
Time-resolved diffuse optics is a powerful and safe technique for quantifying the optical properties (OP) of highly scattering media such as biological tissues. The OP values are correlated with the composition of the measured objects, in particular with tissue chromophores such as hemoglobin. The OP are mainly the absorption and reduced scattering coefficients, which can be quantified for highly scattering media using Time-Resolved Diffuse Optical Spectroscopy (TRDOS) systems. Time-Resolved Diffuse Optical Imaging (TRDOI) systems use the OP to reconstruct their spatial distribution in the measured media. TRDOS and TRDOI can therefore be used for functional monitoring of the brain and muscles, and for diagnostic tasks such as the detection and localization of breast cancer and blood clots. In general, TRDOI systems are noninvasive, reliable, and have a high temporal resolution.
TRDOI systems are known for their complexity, bulkiness, and costly equipment, such as light sources (picosecond pulsed lasers) and detectors (single-photon counters). TRDOI systems also acquire a large amount of data and suffer from the computational cost of the image reconstruction process. These limitations hinder the use of TRDOI in widespread potential applications such as clinical measurements.
The goals of this research project are to investigate approaches that eliminate two main limitations of TRDOI systems. The first is to build TRDOS systems using custom-designed free-running (FR) and time-gated (TG) SPAD detectors fabricated in low-cost standard CMOS technology instead of costly photon-counting and timing detectors. The FR-TRDOS prototype has demonstrated performance comparable (for measurements of homogeneous objects) with reported TRDOS prototypes that use commercial, expensive detectors. The TG-TRDOS prototype has acquired raw data with a low noise level and a high dynamic range, enabling it to measure multi-layered objects such as human heads. The second is to build and evaluate a TRDOI prototype that uses a computationally efficient algorithm to reconstruct high-quality 3D tomographic images by analyzing only a small part of the acquired data.
This work indicates the possibility of exploiting recent advances in silicon detector technology and in computation to build low-cost, compact, portable TRDOI systems. Such systems can expand the applications of TRDOI and TRDOS into several fields, such as oncology and neurology. / Thesis / Doctor of Philosophy (PhD)
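The quantification step at the heart of TRDOS can be illustrated with a small sketch (not the thesis's actual pipeline): assuming the diffusion-approximation temporal point spread function of an infinite homogeneous medium, the absorption and reduced scattering coefficients are recovered by nonlinear least squares from a synthetic noiseless curve. All parameter values below are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

C = 0.214  # speed of light in tissue (n ~ 1.4), mm/ps

def log_tpsf(t, mua, musp, r=20.0):
    """Log photon fluence rate at source-detector distance r (mm) in an
    infinite homogeneous medium (diffusion approximation); t in ps,
    mua and musp in mm^-1."""
    D = 1.0 / (3.0 * musp)  # diffusion coefficient, mm
    return (-1.5 * np.log(4 * np.pi * D * C * t)
            - r ** 2 / (4 * D * C * t) - mua * C * t)

t = np.linspace(100, 4000, 400)      # time axis, ps
data = log_tpsf(t, 0.01, 1.0)        # synthetic curve with known OP

# recover the optical properties by fitting the model to the curve
(mua_fit, musp_fit), _ = curve_fit(log_tpsf, t, data, p0=[0.02, 0.5])
```

Fitting in the log domain keeps the problem well scaled over the many decades a measured TPSF spans.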

83 
Développement d'un système de Topographie Optique Diffuse résolu en temps et hyperspectral pour la détection de l'activité cérébrale humaine / Development of a hyperspectral time-resolved DOT system for monitoring human brain activity — Lange, Frédéric, 28 January 2016
Diffuse Optical Tomography (DOT) is now an established functional medical imaging modality. One of its most widespread applications is functional imaging of the human brain. The technique has numerous advantages, notably the richness of the optical contrasts it can access. Nevertheless, certain obstacles still limit its use, especially for applications on adult humans in clinical settings or in particular conditions such as the monitoring of sports activity. The measured optical signal contains information coming from different depths of the head, and therefore from different tissue types, such as the skin and the brain. Since the response of interest is that of the brain, the skin response can degrade the sought-after information. In this context, this work concerns the development of a new DOT instrument capable of acquiring the spatial and spectral dimensions, as well as the photon time of flight, simultaneously and at a high acquisition rate. During this thesis, the instrument was developed and characterized on optical phantoms. It was then validated in vivo on adult humans, notably by detecting the activation of the prefrontal cortex in response to a simple calculation task. The multidimensional information acquired by our system improved the separation of the contributions of the different tissues (skin/brain). It also made it possible to distinguish the physiological signatures of these tissues, in particular by detecting variations in the concentration of cytochrome-c-oxidase. In parallel with this instrumental development, Monte Carlo simulations of light propagation in an anatomical model of the head were carried out. These simulations provided a better understanding of light propagation in tissues as a function of wavelength and validated the relevance of the multidimensional approach. Perspectives of this work include using the instrument to monitor the brain response of adult humans under various solicitations, such as tDCS stimulation or sports activity.

84 
Stochastic Dynamical Systems: New Schemes for Corrections of Linearization Errors and Dynamic Systems Identification — Raveendran, Tara, January 2013 (has links) (PDF)
This thesis deals with the development and numerical exploration of a few improved Monte Carlo filters for nonlinear dynamical systems, with a view to estimating the associated states and parameters (i.e. the hidden states appearing in the system or process model) based on available noisy partial observations. The hidden states are characterized, subject to modelling errors, by the weak solutions of the process model, which is typically a system of stochastic ordinary differential equations (SDEs). The unknown system parameters, when included as pseudo-states within the process model, are made to evolve as Wiener processes. The observations may also be modelled by a set of measurement SDEs or, when collected at discrete time instants, by their temporally discretized maps. The proposed Monte Carlo filters aim at achieving robustness (i.e. insensitivity to variations in the noise parameters) and higher accuracy in the estimates whilst retaining applicability to large-dimensional nonlinear filtering problems.
The thesis begins with a brief review of the literature in Chapter 1. The first development, reported in Chapter 2, is a nearly exact, semi-analytical, weak and explicit linearization scheme called the Girsanov Corrected Linearization Method (GCLM) for nonlinear mechanical oscillators under additive stochastic excitations. At the heart of the linearization is a temporally localized rejection sampling strategy that, combined with a resampling scheme, enables selecting from and appropriately modifying an ensemble of locally linearized trajectories whilst weakly applying the Girsanov correction (the Radon-Nikodym derivative) for the linearization errors. Through numerical implementations for a few workhorse nonlinear oscillators, the proposed variants of the scheme are shown to exhibit significantly higher numerical accuracy over a much larger range of time step sizes than is possible with local drift-linearization schemes on their own.
The above scheme for linearization correction is exploited and extended in Chapter 3, wherein novel variations within a particle filtering algorithm are proposed to weakly correct for the linearization or integration errors that occur while numerically propagating the process dynamics. Specifically, the correction for linearization, provided by the likelihood or the Radon-Nikodym derivative, is incorporated in two steps. The likelihood, an exponential martingale, is split into a product of two factors; the correction owing to the first factor is implemented via rejection sampling in the first step. The second factor, being directly computable, is accounted for via two schemes: one employing resampling, and the other a gain-weighted innovation term added to the drift field of the process SDE, thereby avoiding the excessive sample dispersion caused by resampling.
The proposed strategies, employed as add-ons to existing particle filters (the bootstrap and auxiliary SIR filters in this work), are found to nontrivially improve the convergence and accuracy of the estimates and to yield reduced mean-square errors vis-à-vis those obtained through the parent filtering schemes.
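As a point of reference for the parent schemes being improved upon, a bare-bones bootstrap (SIR) particle filter on a standard scalar benchmark model is sketched below; the Girsanov-corrected variants of the chapter are not reproduced here, and all model parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_filter(ys, n=500, q=0.5, r=0.5):
    """Bootstrap SIR filter for the benchmark model
    x_k = 0.5 x_{k-1} + 25 x_{k-1}/(1+x_{k-1}^2) + v_k,  y_k = x_k^2/20 + w_k."""
    x = rng.normal(0.0, 1.0, n)
    means = []
    for y in ys:
        # propagate particles through the process model
        x = 0.5 * x + 25 * x / (1 + x ** 2) + rng.normal(0, np.sqrt(q), n)
        # weight by the Gaussian measurement likelihood
        w = np.exp(-0.5 * (y - x ** 2 / 20) ** 2 / r) + 1e-300
        w /= w.sum()
        means.append(float(np.sum(w * x)))
        # multinomial resampling: the weight-based update the text contrasts with
        x = x[rng.choice(n, n, p=w)]
    return np.array(means)

# simulate a short trajectory and filter it
xt, ys = 0.1, []
for k in range(50):
    xt = 0.5 * xt + 25 * xt / (1 + xt ** 2) + rng.normal(0, np.sqrt(0.5))
    ys.append(xt ** 2 / 20 + rng.normal(0, np.sqrt(0.5)))
est = bootstrap_filter(np.array(ys))
```

The tiny additive constant on the weights guards against total weight underflow far from the truth.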
In Chapter 4, we explore the use of the unscented transformation on Gaussian random variables, as employed within a scaled Gaussian sum stochastic filter, as a means of applying nonlinear stochastic filtering theory to higher-dimensional system identification problems. As an additional strategy to reconcile the evolving process dynamics with the observation history, the proposed filtering scheme also modifies the process model through the incorporation of gain-weighted innovation terms. The reported numerical work on the identification of dynamic models of dimension up to 100 is indicative of the potential of the proposed filter in realizing the stated aim of successfully treating relatively large-dimensional filtering problems.
We propose in Chapter 5 an iterated gain-based particle filter that is consistent with the form of the nonlinear filtering (Kushner-Stratonovich) equation, in an attempt to treat larger-dimensional filtering problems with enhanced estimation accuracy. A crucial aspect of the proposed filtering setup is that it retains the implementational simplicity of the ensemble Kalman filter (EnKF). The numerical results obtained via EnKF-like simulations, with or without a reduced-rank unscented transformation, also indicate substantively improved filter convergence.
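The EnKF analysis step whose simplicity the chapter seeks to retain can be sketched as a generic stochastic (perturbed-observation) update; the iterated gain computation of the thesis is not shown, and the toy prior below is illustrative.

```python
import numpy as np

def enkf_update(X, y, H, R, rng):
    """One stochastic EnKF analysis step. X: (n, N) forecast ensemble,
    y: (m,) observation, H: (m, n) observation operator, R: (m, m) obs cov."""
    n, N = X.shape
    A = X - X.mean(axis=1, keepdims=True)             # ensemble anomalies
    P = A @ A.T / (N - 1)                             # sample covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)      # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(
        np.zeros(len(y)), R, N).T                     # perturbed observations
    return X + K @ (Y - H @ X)                        # gain-weighted innovation

rng = np.random.default_rng(3)
X = rng.normal(0.0, 1.0, (2, 200))                    # prior ensemble, mean ~ 0
H = np.array([[1.0, 0.0]])                            # observe the first state only
Xa = enkf_update(X, np.array([1.0]), H, 0.1 * np.eye(1), rng)
```

The posterior ensemble mean of the observed component moves toward the observation, which is the behaviour the iterated variant refines.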
The final contribution, reported in Chapter 6, is an iterative, gain-based filter bank incorporating an artificial diffusion parameter; it may be viewed as an extension of the iterative filter in Chapter 5. While the filter bank helps explore the phase space of the state variables better, the iterative strategy based on the artificial diffusion parameter, which is lowered to zero over successive iterations, helps improve the mixing property of the associated iterative update kernels. These aspects gain importance in highly nonlinear filtering problems, including those involving a significant initial mismatch between the process states and the measured ones. Numerical evidence of remarkably enhanced filter performance is provided by target tracking and structural health assessment applications.
The thesis concludes in Chapter 7 with a summary of these developments and a brief outline of future research directions.

85 
Development of Sparse Recovery Based Optimized Diffuse Optical and Photoacoustic Image Reconstruction Methods — Shaw, Calvin B, January 2014 (links) (PDF)
Diffuse optical tomography uses near-infrared (NIR) light as the probing medium to recover the distributions of tissue optical properties, and is thereby able to provide functional information about the tissue under investigation. As NIR light propagation in tissue is dominated by scattering, the image reconstruction problem (inverse problem) is nonlinear and ill-posed, requiring advanced computational methods to compensate.
The diffuse optical image reconstruction problem is always rank-deficient, and finding the independent measurements among those available is a challenging problem. Knowing these independent measurements helps in designing better data-acquisition setups and in lowering the associated costs. An optimal measurement selection strategy is proposed based on the incoherence among rows (corresponding to measurements) of the sensitivity (or weight) matrix for near-infrared diffuse optical tomography. Since incoherence among the measurements can be seen as providing maximally independent information for the estimation of the optical properties, this offers a principled way of assessing how independent a particular measurement is of its counterparts. The utility of the proposed scheme is demonstrated using simulated and experimental gelatin phantom data sets, comparing it with state-of-the-art methods.
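One simple way to operationalize "incoherence among rows of the sensitivity matrix" is a greedy selection that repeatedly picks the row least coherent with those already chosen. This is an illustrative sketch, not necessarily the exact criterion of the thesis, and the random matrix stands in for a real sensitivity matrix.

```python
import numpy as np

def select_incoherent_rows(J, k):
    """Greedily select k row indices of J, each minimizing the maximum
    normalized inner product (coherence) with the rows picked so far."""
    Jn = J / np.linalg.norm(J, axis=1, keepdims=True)
    chosen = [int(np.argmax(np.linalg.norm(J, axis=1)))]  # seed: strongest row
    while len(chosen) < k:
        coh = np.abs(Jn @ Jn[chosen].T).max(axis=1)       # worst-case coherence
        coh[chosen] = np.inf                              # never re-pick a row
        chosen.append(int(np.argmin(coh)))
    return chosen

rng = np.random.default_rng(4)
J = rng.normal(size=(50, 30))          # stand-in sensitivity (weight) matrix
rows = select_incoherent_rows(J, 10)   # 10 maximally independent measurements
```

The selected subset can then drive which source-detector pairs are actually instrumented.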
Traditional image reconstruction methods employ the ℓ2-norm in the regularization functional, resulting in smooth solutions in which sharp image features are absent. Sparse recovery methods that use the ℓp-norm with 0 ≤ p < 1, along with an approximation to the ℓ0-norm, have been deployed for the reconstruction of diffuse optical images. These methods are shown to be more quantitative in reconstructing realistic diffuse optical images than traditional methods.
Using ℓp-norm-based regularization makes the objective (cost) function non-convex, and algorithms that implement ℓp-norm minimization work with approximations to the original ℓp-norm function. Three such implementations were considered: Iteratively Reweighted ℓ1-minimization (IRL1), Iteratively Reweighted Least Squares (IRLS), and the Iterative Thresholding Method (ITM). The results indicate that the IRL1 implementation of ℓp-minimization provides optimal performance in terms of shape recovery and quantitative accuracy of the reconstructed diffuse optical tomographic images.
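Of the three implementations, IRLS is the most compact to sketch: the non-convex ℓp penalty is replaced in each pass by a weighted ℓ2 (ridge) problem with weights derived from the current iterate. The smoothing constant and all problem sizes below are illustrative, not taken from the thesis.

```python
import numpy as np

def irls_lp(A, b, p=0.5, lam=1e-3, iters=50, eps=1e-8):
    """IRLS sketch for min ||Ax - b||^2 + lam * ||x||_p^p, 0 < p < 1.
    Each pass solves the ridge system (A^T A + diag(w)) x = A^T b with
    weights w_i = lam*(p/2)*(x_i^2 + eps)^(p/2 - 1) from the last iterate."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]   # minimum-norm starting point
    for _ in range(iters):
        w = lam * (p / 2) * (x ** 2 + eps) ** (p / 2 - 1)
        x = np.linalg.solve(A.T @ A + np.diag(w), A.T @ b)
    return x

# underdetermined sparse-recovery toy problem
rng = np.random.default_rng(5)
A = rng.normal(size=(40, 80)) / np.sqrt(40)
x_true = np.zeros(80)
x_true[[5, 17, 42]] = [1.0, -1.0, 0.7]
b = A @ x_true
x_hat = irls_lp(A, b)
```

The reweighting drives small entries toward zero while leaving large entries nearly unpenalized, which is what yields the sharp features the ℓ2 penalty smooths away.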
Photoacoustic tomography (PAT) is an emerging hybrid imaging modality combining optics with ultrasound imaging. PAT provides structural and functional imaging in diverse application areas, such as breast cancer and brain imaging. Model-based iterative reconstruction schemes are the most popular for recovering the initial pressure in the limited-data case, wherein a large linear system of equations needs to be solved. These iterative methods often require regularization parameter estimation, which tends to be computationally expensive, forcing the image reconstruction to be performed offline. To overcome this limitation, a computationally efficient approach that computes the optimal regularization parameter is developed for PAT. The approach is based on the least squares QR (LSQR) decomposition, a well-known dimensionality-reduction technique for large systems of equations. The proposed framework is shown to be effective in terms of quantitative and qualitative reconstructions of the initial pressure distribution.
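The role of LSQR can be conveyed in a few lines: scipy's `lsqr` solves the damped (Tikhonov-type) least-squares problem on a bidiagonalized system, so the regularization parameter enters cheaply through the `damp` argument rather than through refactorizing full regularized normal equations. The random matrix below is only a stand-in for a PAT system matrix.

```python
import numpy as np
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(6)
A = rng.normal(size=(200, 100))                # stand-in PAT system matrix
x_true = np.zeros(100)
x_true[40:50] = 1.0                            # toy "initial pressure" profile
b = A @ x_true + 0.01 * rng.normal(size=200)   # noisy projection data

# lsqr minimizes ||Ax - b||^2 + damp^2 ||x||^2 via Golub-Kahan
# bidiagonalization, so sweeping damp during parameter selection is cheap
x_reg = lsqr(A, b, damp=0.1)[0]
rel_err = np.linalg.norm(x_reg - x_true) / np.linalg.norm(x_true)
```

In a dimensionality-reduced setting such as this, candidate regularization values can be evaluated without rebuilding the forward model.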

86 
A Stochastic Search Approach to Inverse Problems — Venugopal, Mamatha, January 2016 (links) (PDF)
The focus of the thesis is the development of a few stochastic search schemes for inverse problems and their applications in medical imaging. After the introduction in Chapter 1, which motivates and puts in perspective the work done in later chapters, the main body of the thesis may be viewed as composed of two parts: the first concerns the development of stochastic search algorithms for inverse problems (Chapters 2 and 3), while the second elucidates the applicability of these search schemes to inverse problems of interest in tomographic imaging (Chapters 4 and 5). The chapter-wise contributions of the thesis are summarized below.
Chapter 2 proposes a Monte Carlo stochastic filtering algorithm for the recursive estimation of diffusive processes in linear/nonlinear dynamical systems that modulate the instantaneous rates of Poisson measurements. The same scheme is applicable when the partial and noisy measurements are themselves of a diffusive nature. A key aspect of the development is the filter-update scheme, derived from an ensemble approximation of the time-discretized nonlinear Kushner-Stratonovich equation, modified to account for Poisson-type measurements. Specifically, the additive update through a gain-like correction term, empirically approximated from the innovation integral in the filtering equation, eliminates the problem of particle collapse encountered in many conventional particle filters that adopt weight-based updates. A few numerical demonstrations bring forth the versatility of the proposed filter, first in filtering problems with diffusive or Poisson-type measurements and then in an automatic control problem wherein the extremization of the associated cost functional is achieved simply by an appropriate redefinition of the innovation process.
The aim of one of the numerical examples in Chapter 2 is to minimize the structural response of a Duffing oscillator under external forcing. We pose this active control problem within a filtering framework wherein the goal is to estimate the control force that minimizes an appropriately chosen performance index. The proposed filtering algorithm is employed to estimate the control force and the oscillator displacements and velocities, which are minimized as a result of applying the control force. While Fig. 1 shows the time histories of the uncontrolled and controlled displacements and velocities of the oscillator, a plot of the estimated control force against the applied external force is given in Fig. 2.
Fig. 1. A plot of the time histories of the uncontrolled and controlled (a) displacements and (b) velocities.
Fig. 2. A plot of the time histories of the external force and the estimated control force.
Stochastic filtering, despite its numerous applications, amounts only to a directed search and is best suited to inverse and optimization problems with unimodal solutions. For general optimization problems involving multimodal objective functions with a priori unknown optima, filtering, like a regularized Gauss-Newton (GN) method, may only serve as a local (or quasi-local) search. In Chapter 3, therefore, we propose a stochastic search (SS) scheme that, whilst maintaining the basic structure of a filtered martingale problem, also incorporates randomization techniques such as scrambling and blending, which help avoid the so-called local traps. The key contribution of this chapter is the introduction of a further technique, termed state space splitting (3S), a paradigm based on the principle of divide-and-conquer. The 3S technique, incorporated within the optimization scheme, offers better assimilation of measurements and is found to outperform filtering in the context of quantitative photoacoustic tomography (PAT), recovering the optical absorption field from sparsely available PAT data using a bare minimum ensemble. The proposed scheme is also numerically shown to be better than, or at least as good as, CMA-ES (covariance matrix adaptation evolution strategy), one of the best-performing optimization schemes, in minimizing a set of benchmark functions.
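The flavour of the "blending" and "scrambling" randomizations can be conveyed by a toy ensemble search on a convex benchmark. This is a much-simplified stand-in for illustration only, not the filtered-martingale SS scheme itself; ensemble size, annealing rate, and iteration count are all arbitrary choices.

```python
import numpy as np

def ensemble_search(f, dim=10, n=20, iters=300, seed=7):
    """Toy ensemble stochastic search: keep the better half of the
    ensemble, 'blend' random convex combinations of its members, then
    'scramble' them with annealed Gaussian perturbations."""
    rng = np.random.default_rng(seed)
    X = rng.normal(0.0, 3.0, (n, dim))
    for k in range(iters):
        fx = np.array([f(x) for x in X])
        elite = X[np.argsort(fx)[: n // 2]]                 # selection
        i, j = rng.integers(0, n // 2, (2, n))
        a = rng.random((n, 1))
        X = a * elite[i] + (1 - a) * elite[j]               # blending
        X = X + rng.normal(0.0, 3.0 * 0.98 ** k, X.shape)   # scrambling, annealed
    fx = np.array([f(x) for x in X])
    return X[np.argmin(fx)]

best = ensemble_search(lambda x: float(np.sum(x ** 2)))  # sphere benchmark
```

Blending exploits the current elite set while the shrinking scramble keeps enough exploration to escape shallow local traps early on.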
Table 1 compares the performance of the proposed scheme and CMA-ES in minimizing a set of 40-dimensional functions (F1-F20), all of which have their global minimum at 0, using an ensemble size of 20. Here, 10⁻⁵ is the tolerance limit to be attained for the objective function value and MAX is the maximum number of iterations permitted to the optimization scheme to arrive at the global minimum.
Table 1. Performance of the SS scheme and CMA-ES.
Chapter 4 gathers numerical and experimental evidence to support the conjecture of the previous chapters that even a quasi-local search (afforded, for instance, by the filtered martingale problem) is generally superior to a regularized GN method in solving inverse problems. Specifically, in this chapter we solve the inverse problems of ultrasound modulated optical tomography (UMOT) and diffraction tomography (DT). In UMOT, we perform a spatially resolved recovery of the mean-squared displacements, p(r), of the scattering centres in a diffusive object by measuring the modulation depth in the decaying autocorrelation of the incident coherent light. This modulation is induced by the input ultrasound focussed on a specific region of the object referred to as the region of interest (ROI). Since the ultrasound-induced displacements are a measure of material stiffness, UMOT can in principle be applied to the early diagnosis of cancer in soft tissues. In DT, on the other hand, we recover the real refractive index distribution, n(r), of an optical fiber from experimentally acquired transmitted intensity of light traversing through it. In both cases, the filtering step encoded within the optimization scheme recovers reconstructed images superior to those of the GN method in terms of quantitative accuracy.
Fig. 3 gives a comparative cross-sectional plot through the centre of the reference and reconstructed p(r) images in UMOT when the ROI is at the centre of the object. Here, the anomaly appears as an increase in the displacements and is at the centre of the ROI.
Fig. 4 shows the comparative cross-sectional plot of the reference and reconstructed refractive index distributions, n(r), of the optical fiber in DT.
Fig. 3. Cross-sectional plot through the centre of the reference and reconstructed p(r) images.
Fig. 4. Cross-sectional plot through the centre of the reference and reconstructed n(r) distributions.
In Chapter 5, the SS scheme is applied to our main application, viz. photoacoustic tomography (PAT), for the recovery of the absorbed energy map, the optical absorption coefficient, and the chromophore concentrations in soft tissues. The main contribution of this chapter is a single-step method for the recovery of the optical absorption field from both simulated and experimental time-domain PAT data. The single-step direct recovery is shown to yield better reconstructions than the generally adopted two-step method for quantitative PAT. Such a quantitative reconstruction may be converted to a functional image through a linear map. Alternatively, one could perform a one-step recovery of the chromophore concentrations from the boundary pressure, as shown using simulated data in this chapter. Being a Monte Carlo scheme, the SS scheme is highly parallelizable, and the availability of such a machine-ready inversion scheme should finally enable PAT to emerge as a clinical tool in medical diagnostics.
Fig. 5 compares the optical absorption map of the Shepp-Logan phantom with the reconstruction obtained by direct (single-step) recovery.
Fig. 5. The (a) exact and (b) reconstructed optical absorption maps of the Shepp-Logan phantom. The x- and y-axes are in m and the colormap is in mm⁻¹.
Chapter 6 concludes the work with a brief summary of the results obtained and suggestions for future exploration of some of the schemes and applications described in this thesis.

87 
Automated Selection of Hyper-Parameters in Diffuse Optical Tomographic Image Reconstruction — Jayaprakash, *, January 2013 (links) (PDF)
Diffuse optical tomography is a promising imaging modality that provides functional information about soft biological tissues, with prime imaging applications including breast and brain tissue in vivo. The modality uses near-infrared light (600 nm-900 nm) as the probing medium, with the advantage of being non-ionizing.
The image reconstruction problem in diffuse optical tomography is typically posed as a least-squares problem that minimizes the difference between experimental and modeled data with respect to the optical properties. The problem is nonlinear and ill-posed, owing to multiple scattering of the near-infrared light in biological tissue, and admits infinitely many possible solutions. Traditional methods employ a regularization term to constrain the solution space and stabilize the solution, Tikhonov-type regularization being the most popular. The choice of the regularization parameter, also known as the hyper-parameter, dictates the reconstructed image quality and is typically made empirically or from prior experience.
In this thesis, a simple backprojection-type image reconstruction algorithm is first taken up, as such algorithms are known to be computationally efficient compared to regularized solutions. In these algorithms the hyper-parameter becomes equivalent to a filter factor, whose choice typically depends on the sampling interval used for acquiring data in each projection and on the projection angle. Determining these parameters for diffuse optical tomography is not straightforward and requires advanced computational models. A computationally efficient simplex-method-based optimization scheme for automatically finding this filter factor is proposed, and its performance is evaluated on numerical and experimental phantom data. As backprojection-type algorithms are approximations to the traditional methods, the absolute quantitative accuracy of the reconstructed optical properties is poor. In scenarios like dynamic imaging, where the emphasis is on recovering relative differences in the optical properties, these algorithms are effective in comparison with traditional methods, with the added advantage of being highly computationally efficient.
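An automated filter-factor search of this kind can be sketched with scipy's Nelder-Mead simplex routine tuning a single Tikhonov-like filter factor for an SVD-filtered, backprojection-type reconstruction against a known calibration phantom. The matrix, phantom, and misfit below are all illustrative stand-ins, not the thesis's actual setup.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
J = rng.normal(size=(60, 40))                 # illustrative sensitivity matrix
x_cal = np.zeros(40)
x_cal[15:20] = 1.0                            # known calibration phantom
y = J @ x_cal + 0.05 * rng.normal(size=60)    # noisy calibration data

U, s, Vt = np.linalg.svd(J, full_matrices=False)

def recon(lam):
    # backprojection-type reconstruction with a single filter factor lam
    f = s / (s ** 2 + lam)
    return Vt.T @ (f * (U.T @ y))

def misfit(log_lam):
    # search over log(lam) so the simplex stays in lam > 0
    return float(np.linalg.norm(recon(np.exp(np.atleast_1d(log_lam)[0])) - x_cal))

res = minimize(misfit, x0=[0.0], method="Nelder-Mead")
lam_opt = float(np.exp(res.x[0]))
```

The simplex method needs no gradients of the reconstruction pipeline, which is what makes it convenient for this kind of tuning.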
In the second part of the thesis, the hyper-parameter choice for traditional Tikhonov-type regularization is addressed with the help of the least-squares QR decomposition (LSQR) method. Established techniques that enable the automated choice of hyper-parameters include Generalized Cross-Validation (GCV) and the regularized Minimal Residual Method (MRM), both of which carry a high computational overhead that prohibits real-time use. The proposed LSQR-based algorithm uses bidiagonalization of the system matrix to reduce the computational cost. It is compared with MRM methods and shown to be the computationally optimal technique on numerical and experimental phantom cases.
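For comparison, the GCV criterion benchmarked here can be evaluated cheaply once an SVD (or LSQR bidiagonalization) of the system matrix is available. The sketch below uses a small ill-conditioned Gaussian blur matrix as a stand-in forward operator; all sizes and noise levels are illustrative.

```python
import numpy as np

def gcv_choose(A, b, lams):
    """Return the Tikhonov parameter minimizing the GCV function
    G(lam) = ||A x_lam - b||^2 / (m - sum of filter factors)^2,
    evaluated through the SVD of A."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    extra = np.linalg.norm(b) ** 2 - np.linalg.norm(beta) ** 2  # out-of-range part
    m = A.shape[0]
    def g(lam):
        f = s ** 2 / (s ** 2 + lam ** 2)      # Tikhonov filter factors
        return (np.sum(((1 - f) * beta) ** 2) + extra) / (m - f.sum()) ** 2
    return min(lams, key=g)

def tik_solve(A, b, lam):
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt.T @ (s / (s ** 2 + lam ** 2) * (U.T @ b))

# ill-conditioned Gaussian blur as a stand-in forward operator
i = np.arange(50)
A = np.exp(-0.5 * (i[:, None] - i[None, :]) ** 2 / 4.0)
x_true = np.sin(2 * np.pi * i / 50)
rng = np.random.default_rng(8)
b = A @ x_true + 1e-3 * rng.normal(size=50)

lam = gcv_choose(A, b, np.logspace(-8, 1, 60))
```

Once the SVD is factored, each candidate parameter costs only a few vector operations, which is why per-parameter cost, not the sweep itself, drives the overhead the thesis targets.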

88 
Active geometric model: multi-compartment model-based segmentation & registration — Mukherjee, Prateep, 26 August 2014
Indiana University-Purdue University Indianapolis (IUPUI) / We present a novel variational and statistical approach to model-based segmentation. Our model generalizes the Chan-Vese model proposed for concurrent segmentation of multiple objects embedded in the same image domain. We also propose a novel shape descriptor, the Multi-Compartment Distance Function (mcdf). Our segmentation framework is twofold: first, several training samples distributed across various classes are registered onto a common frame of reference; then a variational method similar to Active Shape Models (ASMs) is used to generate an average shape model, which in turn partitions new images. The key advantages of such a framework are: (i) landmark-free automated shape training; (ii) a strictly shape-constrained model for fitting test data. Our model can naturally deal with shapes of arbitrary dimension and topology (closed/open curves). We term our model the Active Geometric Model, since it focuses on segmentation of geometric shapes. We demonstrate the power of the proposed framework in two important medical applications: morphology estimation of 3D motor neuron compartments, and thickness estimation of Henle's fiber layer in the retina. We also compare the qualitative and quantitative performance of our method with that of several other state-of-the-art segmentation methods.
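The data term of the two-phase Chan-Vese energy, which the proposed model generalizes, reduces (with zero curvature penalty) to alternating region-mean updates and pointwise reassignment. A minimal sketch on a synthetic image, with illustrative intensities and noise:

```python
import numpy as np

def chan_vese_means(img, iters=20):
    """Data term of the two-phase Chan-Vese model with no length penalty:
    alternately update the region means c1, c2 and reassign each pixel to
    the closer mean (a pointwise minimizer of the CV fidelity energy)."""
    mask = img > img.mean()                      # initial partition
    for _ in range(iters):
        c1, c2 = img[mask].mean(), img[~mask].mean()
        new = (img - c1) ** 2 < (img - c2) ** 2  # pointwise reassignment
        if np.array_equal(new, mask):
            break                                # converged
        mask = new
    return mask

# synthetic test image: bright square on a darker noisy background
rng = np.random.default_rng(9)
img = 0.2 + 0.05 * rng.normal(size=(64, 64))
img[20:44, 20:44] += 0.6
mask = chan_vese_means(img)
```

The full model adds a curvature (length) regularizer and, in the multi-compartment generalization, one region competition per compartment.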

89 
Real-time adaptive-optics optical coherence tomography (AO-OCT) image reconstruction on a GPU — Shafer, Brandon Andrew, January 2014 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Adaptive-optics optical coherence tomography (AO-OCT) is a technology that has been advancing rapidly in recent years and offers remarkable capabilities for scanning the human eye in vivo. To bring its ultra-high-resolution capabilities to clinical use, however, newer technology is needed in the image reconstruction process. General-purpose computation on graphics processing units is one way this computationally intensive reconstruction can be performed on a desktop computer in real time. This work presents the AO-OCT image reconstruction process, the basics of using NVIDIA's CUDA to write parallel code, and a new AO-OCT image reconstruction implemented using CUDA. The results demonstrate that image reconstruction can be done in real time with high accuracy on a GPU.
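The computational core being ported to the GPU can be illustrated in a few lines of NumPy (the real pipeline adds spectral resampling and dispersion compensation, omitted here): for spectral-domain OCT, an A-scan is essentially the magnitude of the Fourier transform of the DC-subtracted spectral interferogram. All quantities below are synthetic.

```python
import numpy as np

n = 2048
k = 2 * np.pi * np.arange(n) / n          # wavenumber samples (arbitrary units)
depth = 300                                # reflector depth, in pixels
fringe = 1.0 + 0.5 * np.cos(k * depth)     # interferogram of a single reflector

ascan = np.abs(np.fft.fft(fringe - fringe.mean()))  # DC removal + transform
peak = int(np.argmax(ascan[: n // 2]))              # -> 300, the reflector depth
```

Because each A-scan is an independent FFT of a few thousand samples, the workload maps naturally onto thousands of GPU threads, which is what makes real-time reconstruction feasible.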
