
Analysis of different algorithms application for the tomographic image reconstruction of industrial objects

Velo, Alexandre França 17 December 2018 (has links)
Industry has an interest in using computed tomography to examine the interior (i) of manufactured industrial objects or (ii) of machines and their means of production. In these cases, tomography serves (a) to control the quality of the final product and (b) to optimize production, contributing to the pilot phase of projects and to quality analysis of the production means without interrupting the production line. Continuous quality control of the means of production is the key to ensuring product quality and competitiveness. The Radiation Technology Center (CTR) of the Nuclear and Energy Research Institute (IPEN/CNEN-SP) has been developing this technology for industrial process analysis for some time. To date, the laboratory has developed three generations of tomography systems: (i) first generation; (ii) third generation; and (iii) an Instant Non-Scanning tomograph. The image reconstruction algorithms are of central importance to the optimal functioning of this technology. In this thesis, tomographic image reconstruction algorithms were developed and analyzed for implementation in the experimental protocols of these tomographs. Analytical and iterative image reconstruction methods were developed using Matlab® R2013b. The iterative algorithms produced images with better spatial resolution than those obtained with the analytical method; however, the images from the analytical method exhibited less noise. The time to obtain an image with the iterative methods is relatively high and grows with the size of the image pixel matrix, whereas the analytical method provides images almost instantly. For reconstructions using the Instant Non-Scanning tomograph, the analytical method did not deliver satisfactory image quality compared to the iterative methods.
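The analytical-versus-iterative tradeoff described in the abstract can be sketched on a toy linear model. Everything below (system matrix, phantom size, iteration count) is invented for illustration and is not the thesis code: the "analytical" route is a single direct least-squares solve, while the iterative route uses a multiplicative MLEM-style update that enforces nonnegativity but needs many passes.

```python
import numpy as np

# Toy contrast of the two method families on a linear model y = A @ x.
# A and x_true are hypothetical stand-ins for a projection matrix and phantom.
rng = np.random.default_rng(0)
A = rng.uniform(0.1, 1.0, size=(40, 16))   # hypothetical projection matrix
x_true = rng.uniform(0.0, 2.0, size=16)    # hypothetical 16-pixel phantom
y = A @ x_true                              # noise-free projection data

# Analytical route: one direct linear solve, essentially instantaneous.
x_direct, *_ = np.linalg.lstsq(A, y, rcond=None)

# Iterative route: MLEM-style multiplicative updates, many forward passes.
x_iter = np.ones(16)                        # MLEM needs a positive start
sens = A.T @ np.ones(40)                    # sensitivity image A^T 1
for _ in range(2000):
    ratio = y / (A @ x_iter + 1e-12)        # measured / estimated projections
    x_iter *= (A.T @ ratio) / sens          # multiplicative MLEM update

err_direct = np.linalg.norm(x_direct - x_true)
err_iter = np.linalg.norm(x_iter - x_true)
```

On noise-free data both routes approach the true phantom; the point of the sketch is only the structural difference (one solve versus thousands of forward/backward passes, with the iterative route staying nonnegative throughout).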

Lower bounds and reconstruction algorithms for sums of affine powers

Pecatte, Timothée 11 July 2018 (has links)
The general framework of this thesis is the study of polynomials as objects in models of computation. This approach makes it possible to define precisely the evaluation complexity of a polynomial, and then to classify families of polynomials according to their complexity.
In this thesis, we focus on the model of sums of affine powers, that is, polynomials that can be written as $f = \sum_{i = 1}^s \alpha_i \ell_i^{e_i}$ with $\deg \ell_i = 1$. This model is quite natural, as it extends both the Waring model $f = \sum \alpha_i \ell_i^d$ and the sparsest-shift model $f = \sum \alpha_i \ell^{e_i}$, but little is known about this generalization. We obtained structural results for the univariate variant of this model, which allowed us to derive lower bounds and reconstruction algorithms that solve the following problem: given $f = \sum \alpha_i (x-a_i)^{e_i}$ as a list of its coefficients, recover the $\alpha_i$'s, $a_i$'s and $e_i$'s that appear in an optimal decomposition of $f$. We also studied the multivariate case in more detail and obtained several reconstruction algorithms that work whenever the number of terms in an optimal expression is small relative to the number of variables or to the degree of the polynomial.
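A minimal sketch of the univariate model: the triples $(\alpha_i, a_i, e_i)$ below are an invented decomposition, and the polynomial is built in its affine-power form and then expanded into the flat coefficient list that a reconstruction algorithm would receive as input (recovering the triples from that list is the inverse problem the abstract describes).

```python
import sympy as sp

# Build f = sum_i alpha_i * (x - a_i)**e_i from hypothetical triples,
# then expand to the dense coefficient representation.
x = sp.symbols('x')
terms = [(3, 1, 2), (-2, 4, 3), (5, 0, 1)]       # invented (alpha, a, e) triples
f = sum(alpha * (x - a)**e for alpha, a, e in terms)

# The coefficient list (highest degree first) is the algorithm's input.
coeffs = sp.Poly(sp.expand(f), x).all_coeffs()    # [-2, 27, -97, 131]
```

Here $f = 3(x-1)^2 - 2(x-4)^3 + 5x = -2x^3 + 27x^2 - 97x + 131$, so the affine-power form with 3 terms is much more compact than it looks from the dense coefficients alone.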

Investigation of mm-wave imaging and radar systems

Zeitler, Armin 11 January 2013 (has links)
Over the last decade, W-band (75-110 GHz) millimeter-wave radars have gained importance in civil applications, whether in driver assistance or in security. The maturity of these systems and growing application requirements are now steering research toward functions enabling identification. Radar imaging systems have thus been developed, notably using qualitative (SAR) imaging. The first results are very promising; however, reconstructing the electromagnetic properties of objects requires a quantitative approach. Much work has already been done at centimeter wavelengths, but to our knowledge no quantitative imaging system exists in the millimeter-wave range. The objective of the work presented in this manuscript is to lay the foundations of a quantitative millimeter-wave imaging system and to compare it with the radar imaging of systems developed in collaboration with the University of Ulm (Germany). The results obtained validate the quantitative imaging process developed. Further research is needed: the measurement system should evolve into a true multi-incidence/multi-view system; the 2D-TE case should be implemented so that an arbitrary 2-D object can be treated in any polarization; and measurements with real radar systems should continue, in particular to make the transmission-coefficient measurements usable, which is essential if inversion algorithms are one day to be applied to data from radar systems.
In the last decade, microwave and millimeter-wave systems have gained importance in civil and security applications. Due to the increasing maturity and availability of circuits and components, these systems are getting more compact while becoming less expensive. Furthermore, quantitative imaging has been conducted at lower frequencies using computationally intensive inverse-problem algorithms. Due to the ill-posed character of the inverse problem, these algorithms are in general very sensitive to noise: the key to their successful application to experimental data is the precision of the measurement system. Only a few research teams investigate systems for imaging in the W-band. In this manuscript such a system is presented, designed to provide scattered-field data to quantitative reconstruction algorithms. The manuscript is divided into six chapters. Chapter 2 describes the theory for computing numerically the scattered fields of known objects. In Chapter 3, the W-band measurement setup in the anechoic chamber is shown and preliminary measurement results are analyzed. Relying on these results, the error sources are studied and corrected by post-processing; the final results are used for the qualitative reconstruction of all three targets of interest and for quantitative imaging of the small cylinder. The reconstructed images are compared in detail in Chapter 4. Close-range imaging has been investigated using a vector network analyzer and a radar system; this is described in Chapter 5, motivated by a future application, the detection of FOD on airport runways. The conclusion is addressed in Chapter 6, where some future investigations are also discussed.
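The noise sensitivity of ill-posed inversion mentioned above can be illustrated on a toy linear system. The operator, its singular spectrum, the noise level and the Tikhonov weight below are all invented for demonstration; the point is only that a naive solve amplifies even tiny noise through the small singular values, while regularization trades a little bias for stability.

```python
import numpy as np

# Ill-conditioned toy operator built from a prescribed singular spectrum.
rng = np.random.default_rng(1)
n = 20
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = np.logspace(0, -8, n)                  # rapidly decaying singular values
A = U @ np.diag(s) @ V.T                   # ill-conditioned operator
x_true = rng.standard_normal(n)
y = A @ x_true + 1e-6 * rng.standard_normal(n)   # small measurement noise

x_naive = np.linalg.solve(A, y)            # noise amplified by up to 1/s_min
lam = 1e-4                                  # hypothetical Tikhonov parameter
x_tik = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

err_naive = np.linalg.norm(x_naive - x_true)
err_tik = np.linalg.norm(x_tik - x_true)
```

With a condition number around 10^8, noise of size 10^-6 is enough to destroy the naive solution, while the regularized one stays usable.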

Development of stopping rule methods for the MLEM and OSEM algorithms used in PET image reconstruction

Γαϊτάνης, Αναστάσιος 11 January 2011 (has links)
The aim of this thesis is the development of stopping-rule methods for the MLEM and OSEM algorithms used in image reconstruction for positron emission tomography (PET). The development of the stopping rules is based on a study of the properties of both algorithms. Analyzing their mathematical expressions, one observes that the pixel updating coefficients (PUC) play a key role in updating the reconstructed image from iteration k to k+1. To analyze the properties of the PUC, a PET scanner geometry was simulated using Monte Carlo methods. Image reconstruction with iterative techniques requires calculation of the transition matrix, which depends entirely on the geometrical characteristics of the PET scanner. The Hoffman brain phantom and the 4D MOBY phantom were used as digital phantoms, and projection data were generated at different activity levels. The MLEM and OSEM algorithms were used to reconstruct the projection data. To compare the reconstructed and true images, two figures of merit (FOM) were used: (a) the Normalized Root Mean Square Deviation (NRMSD) and (b) the chi-square (χ²). The behaviour of the PUC values C was analyzed for a zero and a non-zero pixel of the phantom image, and it was found to differ between the two. Based on this observation, the vector of C values over all non-zero pixels of the reconstructed image was analyzed, and the histograms of the PUC values were found to have two components: one around C(i) = 1.0 and a tail for values C(i) < 1.0. In this way a variable Cmin was defined over the I pixels of the image at iteration k, as the minimum value of the vector of pixel updating coefficients among the non-zero pixels of the reconstructed image at iteration k. Further work was performed to determine the dependence of Cmin on the image characteristics, image topology and activity level. The analysis shows that the parameterization of Cmin is reliable and allows the establishment of a robust stopping rule for the MLEM algorithm.
Furthermore, following a different approach, a new stopping rule using the log-likelihood properties of the MLEM algorithm was developed. The two rules were evaluated using the independent Digimouse phantom, and both were found to produce reconstructed images with similar properties. The same study was performed for the OSEM algorithm, and a stopping rule dedicated to each number of subsets was developed.
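The Cmin idea above can be sketched in a few lines: run an MLEM-style iteration, compute the pixel updating coefficients C at each step, and stop once C is within a tolerance of 1.0 on the current non-zero support. The system matrix, phantom and threshold below are invented for illustration, not the thesis setup.

```python
import numpy as np

# Toy MLEM loop with a Cmin-style stopping rule on the non-zero pixels.
rng = np.random.default_rng(2)
A = rng.uniform(0.1, 1.0, size=(60, 25))          # hypothetical system matrix
x_true = rng.uniform(0.5, 3.0, size=25)           # hypothetical phantom
x_true[rng.choice(25, 5, replace=False)] = 0.0    # some truly zero pixels
y = A @ x_true                                     # noise-free projections

x = np.ones(25)
sens = A.T @ np.ones(60)
for k in range(5000):
    C = (A.T @ (y / (A @ x + 1e-12))) / sens       # pixel updating coefficients
    x *= C
    nz = x > 1e-6                                  # current non-zero support
    if np.abs(C[nz] - 1.0).max() < 1e-3:           # hypothetical stopping threshold
        break

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

As the abstract describes, the coefficients of the non-zero pixels drift toward 1.0 as the iteration converges, so their minimum (equivalently, the largest deviation from 1.0) is a natural convergence monitor.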

Development of Novel Reconstruction Methods Based on l1--Minimization for Near Infrared Diffuse Optical Tomography

Shaw, Calvin B January 2012 (has links) (PDF)
Diffuse optical tomography uses near-infrared (NIR) light as the probing medium to recover the distributions of tissue optical properties. It has the potential to become an adjunct imaging modality for breast and brain imaging that is capable of providing functional information about the tissue under investigation. As NIR light propagation in tissue is dominated by scattering, the image reconstruction problem (inverse problem) tends to be non-linear and ill-posed, requiring advanced computational methods to compensate. Traditional image reconstruction methods in diffuse optical tomography employ l2-norm-based regularization, which is known to remove high-frequency noise in the reconstructed images and make them appear smooth. The contrast recovered in such reconstructions typically depends on the iterative nature of the method employed, with non-linear iterative techniques known to perform better than linear ones. The use of non-linear iterative techniques in real time, especially in dynamic imaging, becomes prohibitive due to their computational complexity. In rapid dynamic diffuse optical imaging, assuming a linear dependency between the solutions of successive frames results in a linear inverse problem. This new framework, together with l1-norm-based regularization, can provide better robustness to noise and better contrast recovery than conventional l2-based techniques. Moreover, the proposed l1-based technique is shown to be computationally more efficient than its l2-based counterpart. The proposed framework requires a reasonably close estimate of the actual solution for the initial frame; any suboptimal estimate leads to erroneous reconstruction results for the subsequent frames.
Modern diffuse optical imaging systems are multi-modal in nature, combining diffuse optical imaging with traditional imaging modalities such as MRI, CT and ultrasound. A novel approach that can more effectively use the structural information provided by the traditional imaging modalities in these scenarios is introduced, based on a prior-image-constrained l1-minimization scheme. This method is motivated by recent progress in sparse image reconstruction techniques. The l1-based framework is shown to be more effective, in both numerical and gelatin-phantom cases, at localizing the tumor region and recovering the optical property values than traditional methods that use structural information.
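A minimal sketch of the l1-regularized linear step described above, using ISTA (iterative soft thresholding) on an invented underdetermined model with a sparse "frame-to-frame change". The operator, sparsity pattern and regularization weight are assumptions; the thesis' actual DOT operator is not reproduced here. The minimum-norm l2 solution is computed alongside to show why l1 helps when the update is sparse.

```python
import numpy as np

# Invented underdetermined linear model: 30 measurements, 60 unknowns.
rng = np.random.default_rng(3)
A = rng.standard_normal((30, 60))
x_true = np.zeros(60)
x_true[[4, 17, 42]] = [1.5, -2.0, 1.0]     # sparse change between frames
y = A @ x_true

# ISTA: gradient step on the data term, then soft thresholding (l1 prox).
lam = 0.05                                  # hypothetical regularization weight
t = 1.0 / np.linalg.norm(A, 2) ** 2         # step size 1/L (L = spectral norm^2)
x = np.zeros(60)
for _ in range(3000):
    g = x - t * (A.T @ (A @ x - y))         # gradient step
    x = np.sign(g) * np.maximum(np.abs(g) - t * lam, 0.0)  # soft threshold

# Minimum-norm l2 solution for comparison (fits the data but is not sparse).
x_l2 = A.T @ np.linalg.solve(A @ A.T, y)

err_l1 = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
err_l2 = np.linalg.norm(x_l2 - x_true) / np.linalg.norm(x_true)
```

The l1 solution recovers the sparse update almost exactly, while the l2 minimum-norm solution smears the change across all 60 unknowns.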

Development of Efficient Computational Methods for Better Estimation of Optical Properties in Diffuse Optical Tomography

Ravi Prasad, K J January 2013 (has links) (PDF)
Diffuse optical tomography (DOT) is a promising imaging modality that provides functional information about soft biological tissues in vivo, such as breast and brain tissues. Near-infrared (NIR) light (600-1000 nm) is the interrogating radiation, typically delivered and collected using fiber bundles placed on the boundary of the tissue. The internal optical-property distribution is estimated via a model-based image reconstruction algorithm using these limited boundary measurements. The image reconstruction problem in DOT is known to be non-linear, ill-posed and sometimes under-determined due to the multiple scattering of NIR light in tissue. Solving this inverse problem requires regularization to obtain meaningful results, with Tikhonov-type regularization being the most popular. The choice of regularization parameter dictates the reconstructed optical image quality and is typically made empirically or from prior experience. An automated method for optimal selection of the regularization parameter, based on a regularized minimal residual method (MRM), is proposed and compared with the traditional generalized cross-validation (GCV) method. The results obtained using numerical and gelatin phantom data indicate that the MRM-based method is capable of providing the optimal regularization parameter. A new approach that can easily incorporate any generic penalty function into diffuse optical tomographic image reconstruction is then introduced to show the utility of non-quadratic penalty functions. The penalty functions considered include quadratic (ℓ2), absolute (ℓ1), Cauchy and Geman-McClure; the regularization parameter in each case was obtained automatically using the GCV method. The reconstruction results were systematically compared using quantitative metrics such as relative error and Pearson correlation.
The reconstruction results indicate that while the quadratic penalty may provide better separation between two closely spaced targets, its contrast-recovery capability is limited, and sparseness-promoting penalties such as ℓ1, Cauchy and Geman-McClure have better utility for reconstructing high-contrast and complex-shaped targets, with the Geman-McClure penalty being the most effective. Effective use of image guidance by incorporating refractive-index (RI) variation in the computational modeling of light propagation in tissue is then investigated to assess its impact on optical-property estimation. With the aid of realistic three-dimensional patient breast models, numerical simulations show that the RI variation across the regions of the tissue under investigation influences the estimation of optical properties in image-guided diffuse optical tomography (IG-DOT), and that assuming identical RI for all regions leads to erroneous estimates. A priori knowledge of the RI of the segmented tissue regions in IG-DOT, which is difficult to obtain in vivo, leads to more accurate estimates of optical properties. Even inclusion of approximate RI values taken from the literature results in better estimates, with values comparable to those obtained with correct knowledge of the RI of the different regions. Image reconstruction in IG-DOT reduces the number of optical parameters to be reconstructed to the number of distinct regions identified in the structural information provided by the traditional imaging modality, making the problem well-determined compared to the traditional under-determined case.
Still, the methods deployed in this case are the same as those used for traditional diffuse optical image reconstruction, involving a regularization term as well as computation of the Jacobian. A gradient-free Nelder-Mead simplex method is proposed here to perform the image reconstruction and is shown to provide solutions closely matching those obtained with established methods. The proposed method also has the distinct advantage of being more efficient, as it is regularization-free and involves only repeated forward calculations.
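The gradient-free, regularization-free idea in the last paragraph can be sketched as a Nelder-Mead simplex search over a handful of region-wise unknowns, driven only by repeated forward calculations (no Jacobian, no regularization term). The linear forward model and "optical property" values below are invented stand-ins for the thesis' diffusion forward model.

```python
import numpy as np
from scipy.optimize import minimize

# Invented well-determined setting: 12 boundary measurements, 4 region-wise
# unknowns, as in IG-DOT after segmentation reduces the parameter count.
rng = np.random.default_rng(4)
A = rng.uniform(0.2, 1.0, size=(12, 4))          # hypothetical forward operator
mu_true = np.array([0.01, 0.02, 0.015, 0.03])    # hypothetical region properties
y = A @ mu_true                                   # noise-free boundary data

def misfit(mu):
    resid = A @ mu - y                            # one forward calculation
    return float(resid @ resid)                   # data-model misfit only

# Nelder-Mead needs no gradients and hence no Jacobian of the forward model.
res = minimize(misfit, x0=np.full(4, 0.02), method='Nelder-Mead',
               options={'xatol': 1e-12, 'fatol': 1e-18, 'maxiter': 5000})
```

Each simplex move costs one forward calculation, which is the efficiency argument the abstract makes; in the real problem the forward call would be a diffusion-model solve rather than a matrix product.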

Experimental And Theoretical Studies Towards The Development Of A Direct 3-D Diffuse Optical Tomographic Imaging System

Biswas, Samir Kumar 01 1900 (has links) (PDF)
Diffuse optical tomography (DOT) is a diagnostic imaging modality in which optical parameters such as the absorption coefficient, scattering coefficient and refractive-index distributions are recovered to form an image of the internal tissue metabolism. Near-infrared (NIR) light has the potential to be used as a noninvasive means of diagnostic imaging within the human breast. Due to the diffusive nature of light in tissue, computational model-based methods are required for functional imaging. The main goal is to recover the spatial variation of optical properties, which sheds light on the different metabolic states of tissue and tissue-like media. This thesis addresses the quantitative recovery of the optical properties of tissue-mimicking phantoms and pork tissue using diffuse optical tomography (DOT). The main contribution of the present work is the development of robust, efficient and fast optical-property reconstruction algorithms for a direct 3-D DOT imaging system. There are both theoretical and experimental contributions toward the development of the imaging system: procedures to minimize accurate data-collection time, overall system automation, and the development of computational algorithms. In nurturing the idea of NIR imaging into a fully developed direct 3-D imaging system, challenges on the theoretical and computational sides have to be met. Recovering the optical-property distribution in the interior of the object from the often noisy boundary measurements of light is an ill-posed (and nonlinear) problem. This is particularly true when one is interested in direct 3-D image reconstruction instead of the often-employed stacking of 2-D cross-sections obtained by solving a set of 2-D DOT problems.
To render DOT a useful diagnostic imaging tool, a robust reconstruction procedure giving accurate and reliable parameter recovery is essential in the scenario where the number of unknowns far outnumbers the number of independent data sets that can be gathered (for example, the direct 3-D recovery mentioned earlier). Here, the inverse problem is often solved through iterative methods based on nonlinear optimization, minimizing a data-model misfit function. An interesting development in this direction has been Broyden's and adjoint-Broyden's methods, which avoid direct Jacobian computation in each iteration, thereby making full 3-D reconstruction practical. The conventional model-based iterative image reconstruction (MoBIIR) algorithm uses Newton's method and its variants, which require repeated evaluation of the whole Jacobian, consuming the bulk of the reconstruction time. Fast 2-D/3-D image reconstruction algorithms based on explicit secant and adjoint information, without repeated evaluation of the Jacobian, are proposed for diffuse optical tomography; the computation time is decreased many-fold by successively updating the Jacobian through low-rank updates. An alternative route to the iterative solution is attempted by introducing an artificial dynamics into the system and treating the steady-state response of the artificially evolving dynamical system as the solution. The objective is to consider a novel family of pseudo-dynamical 2-D and 3-D systems whose numerical integration in time provides an asymptotic solution to the inverse problem at hand. The Gauss-Newton update equation is converted into a pseudo-dynamical (PD) form by explicitly adding a time-derivative term. Since pseudo-time integration schemes need no explicit matrix inversion and, depending on the pseudo-time step size, provide a layer of regularization, they in turn help achieve superior 2-D and 3-D image reconstruction quality.
A cost-effective, frequency-domain, Matlab-based 2-D/3-D automated imaging system was designed and built. The complete instrumentation (including PC-based control software) was developed using a single modulated laser source (wavelength 830 nm) and a photomultiplier tube (PMT). The source and detector fibers change their positions dynamically, allowing data to be gathered at multiple source and detector locations; the fiber positions are adjusted on the phantom surface automatically to scan phantoms of variable size. A heterodyning scheme was used to read out the measurement with a lock-in amplifier. The Matlab program carries out the sequence of actions: instrument control, data acquisition, data organization, data calibration and image reconstruction. The Gauss-Newton, Broyden, adjoint-Broyden and pseudo-time-integration algorithms were evaluated using simulated data as well as data from the experimental DOT system. Validation of the system and the reconstruction algorithms was carried out on real tissue, a pork tissue with an embedded fat inhomogeneity, and the results were found to match the known parameters closely.
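The Jacobian-saving idea above can be sketched on a tiny nonlinear system: evaluate the Jacobian once, then replace each subsequent evaluation by Broyden's rank-one secant update. The test system (a circle intersected with a line) is invented; the thesis applies the same idea to the DOT forward model, where each Jacobian evaluation is the expensive step.

```python
import numpy as np

# Solve F(x) = 0 with a quasi-Newton iteration that never recomputes J.
def F(x):
    return np.array([x[0]**2 + x[1]**2 - 4.0,   # circle of radius 2
                     x[0] - x[1]])               # line x = y

x = np.array([1.0, 2.0])
B = np.array([[2 * x[0], 2 * x[1]],              # Jacobian evaluated ONCE,
              [1.0, -1.0]])                      # at the starting point only
for _ in range(50):
    dx = np.linalg.solve(B, -F(x))               # quasi-Newton step
    x_new = x + dx
    dF = F(x_new) - F(x)
    B += np.outer(dF - B @ dx, dx) / (dx @ dx)   # Broyden rank-one update
    x = x_new
    if np.linalg.norm(F(x)) < 1e-12:
        break
```

The iteration converges to the intersection point $(\sqrt{2}, \sqrt{2})$ at a cost of one function evaluation and one rank-one update per step, which is the low-rank-update economy the abstract refers to.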
29

Automated Selection of Hyper-Parameters in Diffuse Optical Tomographic Image Reconstruction

Jayaprakash, * January 2013 (has links) (PDF)
Diffuse optical tomography is a promising imaging modality that provides functional information on soft biological tissues, with prime imaging applications including breast and brain tissue in vivo. This modality uses near-infrared light (600 nm-900 nm) as the probing medium, giving it the advantage of being a non-ionizing imaging modality. The image reconstruction problem in diffuse optical tomography is typically posed as a least-squares problem that minimizes the difference between experimental and modeled data with respect to the optical properties. This problem is non-linear and ill-posed, due to multiple scattering of the near-infrared light in the biological tissues, leading to infinitely many possible solutions. The traditional methods employ a regularization term to constrain the solution space as well as stabilize the solution, with Tikhonov-type regularization being the most popular. The choice of this regularization parameter, also known as the hyper-parameter, dictates the reconstructed optical image quality and is typically chosen empirically or based on prior experience. In this thesis, a simple backprojection-type image reconstruction algorithm is taken up, as such algorithms are known to provide a computationally efficient solution compared to regularized ones. In these algorithms, the hyper-parameter becomes equivalent to a filter factor, the choice of which typically depends on the sampling interval used for acquiring data in each projection and on the angle of projection. Determining these parameters for diffuse optical tomography is not straightforward and requires the use of advanced computational models. In this thesis, a computationally efficient simplex-method-based optimization scheme for automatically finding this filter factor is proposed and its performance is evaluated through numerical and experimental phantom data.
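The simplex-based selection of a filter factor can be sketched on a toy ill-posed problem using SciPy's Nelder-Mead implementation of the simplex method. The blur operator, cost function and penalty weight below are illustrative assumptions standing in for the DOT forward model and the image-quality criterion of the thesis, which are not specified here.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

# Toy ill-posed problem: Gaussian blur operator A, boxcar "image" x, noisy data b.
n = 40
i, j = np.meshgrid(np.arange(n), np.arange(n))
A = np.exp(-0.1 * (i - j) ** 2)
x_true = np.zeros(n)
x_true[15:25] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(n)

# Reconstruction with a single filter factor f damping the normal
# equations (a crude stand-in for the filter factor of the abstract).
def reconstruct(f):
    return np.linalg.solve(A.T @ A + f * np.eye(n), A.T @ b)

# Objective for the simplex search: data-model misfit plus a small
# penalty on the solution norm (an assumed proxy for image quality).
def cost(log_f):
    x = reconstruct(10.0 ** log_f[0])
    return np.linalg.norm(A @ x - b) + 0.01 * np.linalg.norm(x)

# Nelder-Mead simplex search over log10(f): derivative-free and cheap.
res = minimize(cost, x0=[-2.0], method="Nelder-Mead")
f_opt = 10.0 ** res.x[0]
print("selected filter factor:", f_opt)
```

Because the search is derivative-free and one-dimensional, each trial costs only one reconstruction, which is the computational-efficiency argument behind a simplex scheme.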
As backprojection-type algorithms are approximations to the traditional methods, the absolute quantitative accuracy of the reconstructed optical properties is poor. In scenarios like dynamic imaging, where the emphasis is on recovering the relative difference in the optical properties, these algorithms are effective in comparison to traditional methods, with the added advantage of being highly computationally efficient. In the second part of this thesis, the hyper-parameter choice for traditional Tikhonov-type regularization is attempted with the help of the Least-Squares QR decomposition (LSQR) method. The established techniques that enable the automated choice of hyper-parameters include Generalized Cross-Validation (GCV) and the regularized Minimal Residual Method (MRM), both of which carry a high computational overhead, making them prohibitive for real-time use. The proposed LSQR algorithm uses bidiagonalization of the system matrix to reduce the computational cost. The proposed LSQR-based algorithm for the automated choice of the hyper-parameter is compared with MRM methods and is shown to be the computationally optimal technique through numerical and experimental phantom cases.
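The efficiency argument can be sketched with SciPy's `lsqr`, which solves the damped least-squares problem min ||Ax - b||² + damp²||x||² via Lanczos bidiagonalization, so sweeping the damping (hyper-) parameter reuses only cheap matrix-vector products. The toy operator and the discrepancy-style selection rule below are illustrative assumptions, not the thesis criterion.

```python
import numpy as np
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(3)

# Toy ill-posed system: Gaussian blur operator, noisy data.
n = 40
i, j = np.meshgrid(np.arange(n), np.arange(n))
A = np.exp(-0.1 * (i - j) ** 2)
x_true = np.zeros(n)
x_true[10:30] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(n)
noise_level = 0.01 * np.sqrt(n)

# Sweep the damping parameter; each LSQR solve costs only
# matrix-vector products thanks to the bidiagonalization.
best = None
for damp in [1e-4, 1e-3, 1e-2, 1e-1, 1.0]:
    x = lsqr(A, b, damp=damp)[0]
    resid = np.linalg.norm(A @ x - b)
    # Illustrative discrepancy-style rule: pick the smallest damping
    # whose residual reaches the noise level.
    if resid >= noise_level:
        best = (damp, x)
        break

damp_sel, x_sel = best
print("selected damping:", damp_sel)
```

A GCV- or MRM-style criterion would instead re-solve the full regularized system for every candidate hyper-parameter, which is what makes those routes expensive.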
30

Image reconstruction for Compton camera with application to hadrontherapy / Reconstruction d'images pour la caméra Compton avec application en hadronthérapie

Lojacono, Xavier 26 November 2013 (has links)
The Compton camera is a device for imaging gamma-ray sources. Its advantages are its sensitivity (absence of a mechanical collimator) and the possibility of reconstructing 3-D images with a stationary device. It is also suited to sources with a wide energy spectrum. This device is a promising candidate in nuclear medicine and in hadrontherapy. This work, funded by the European project ENVISION (European NoVel Imaging Systems for ION therapy), FP7-Cooperation, deals with the development of image reconstruction methods for the Compton camera for the monitoring of ion therapy. Ideally this requires real-time reconstruction with millimetric precision, even though the number of acquired events is relatively small. We developed analytical and iterative methods. Their performances are analyzed in the context of realistic acquisitions (camera geometry, number of events). We developed an analytical filtered-backprojection method. This method is fast but requires a large amount of data. We also developed iterative methods using a maximum-likelihood expectation-maximization algorithm. We proposed a probabilistic model for estimating the elements of the system matrix needed for the reconstruction, and we developed different approaches for computing its elements: one neglects the measurement uncertainties on the energy, the other takes them into account using a Gaussian distribution. We studied a simplified method using our probabilistic model. Several reconstructions are carried out from simulated data, obtained with Geant4, but also coming from several simulated Compton camera prototypes proposed by the Institut de Physique Nucléaire de Lyon (IPNL) and by the Dresden-Rossendorf research center in Germany.
The results are promising, and further studies, based on even more realistic data, will aim to confirm them. / The Compton camera is a device for imaging gamma radiation sources. The advantages of the system lie in its sensitivity, due to the absence of a mechanical collimator, and the possibility of imaging wide-energy-spectrum sources. These advantages make it a promising candidate for application in hadrontherapy. Funded by the European project ENVISION, FP7-Cooperation Work Programme, this work deals with the development of image reconstruction methods for the Compton camera. We developed both analytical and iterative methods in order to reconstruct the source from cone-surface projections. Their performances are analyzed with regard to the acquisition context (geometry of the camera, number of events). We developed an analytical method using a Filtered BackProjection (FBP) formulation. This method is fast but very sensitive to noise. We also developed iterative methods using a List-Mode Maximum Likelihood Expectation Maximization (LM-MLEM) algorithm. We proposed a new probabilistic model for the computation of the elements of the system matrix, and different approaches for the calculation of these elements, either neglecting or accounting for the measurement uncertainties. We also implemented a simplified method using the probabilistic model we proposed. The novelty of the method also lies in the specific discretization of the cone-surface projections. Several studies are carried out on the reconstruction of simulated data produced with Geant4, but also of simulated data obtained from several prototypes of Compton cameras under study at the Institut de Physique Nucléaire de Lyon (IPNL) and at the research center of Dresden-Rossendorf. Results are promising, and further investigations on more realistic data are to be done.
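The LM-MLEM iteration named above has a compact generic form: each recorded event i contributes a row t_ij of the system matrix, and the source intensity is updated as lambda_j <- (lambda_j / s_j) * sum_i t_ij / (t_i . lambda). The sketch below runs that update on a 1-D toy source with a made-up Gaussian response in place of the Compton cone-surface model; the voxel count, event count and sensitivities are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy setup: J source voxels, one system-matrix row per recorded event.
# t[i, j] ~ probability that event i originates from voxel j (in a real
# Compton camera these would come from the cone-surface projection model).
J = 30
x_true = np.zeros(J)
x_true[12:18] = 1.0
centers = rng.choice(J, size=400, p=x_true / x_true.sum())
jgrid = np.arange(J)
t = np.exp(-0.5 * (jgrid[None, :] - centers[:, None]) ** 2 / 2.0 ** 2)
t /= t.sum(axis=1, keepdims=True)

# Sensitivity s_j: probability that an emission from voxel j is detected
# at all (taken uniform here for simplicity).
s = np.ones(J)

# List-mode MLEM update: lambda_j <- (lambda_j / s_j) * sum_i t_ij / (t_i . lambda)
lam = np.ones(J)
for _ in range(50):
    proj = t @ lam                       # expected contribution per event
    lam = lam / s * (t.T @ (1.0 / proj))

peak = int(np.argmax(lam))
print("reconstructed peak voxel:", peak)
```

Because the sum runs over recorded events rather than over all possible detector bins, the list-mode form stays tractable even when, as in ion-therapy monitoring, the number of acquired events is small.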
