11

Isogeometric Approach to Optical Tomography

Bateni, Vahid 14 June 2021 (has links)
Optical Tomography is an imaging modality that enhances early diagnosis of disease through the use of harmless near-infrared light instead of conventional x-rays. The acquired images are used to reconstruct the object. However, Optical Tomography has not been effectively utilized, owing to the complicated photon scattering phenomenon and the ill-posed nature of the corresponding image reconstruction scheme. The dominant reconstruction method is an iterative loop that minimizes the difference between a predicted model of photon scattering and the acquired images. Currently, the most effective way of predicting the photon scattering pattern is to solve the Radiative Transfer Equation (RTE) with the Finite Element Method (FEM). However, conventional FEM uses classical C0 interpolation functions, which fall short in terms of continuity of the solution over the domain and proper representation of the geometry. Finer discretization is therefore necessary to maintain the accuracy of gradient-based results, which can significantly increase the computational cost of each iteration. This research implements the recently developed Isogeometric Analysis (IGA), and particularly IGA-based FEM, to address these issues. IGA-based FEM has the potential to enhance adaptivity and reduce the cost of the discretization scheme. This study applies the IGA method to solve the RTE under the diffusion approximation and examines its behavior in comparison with conventional FEM. The IGA-based solution is compared with analytical and conventional FEM solutions in terms of accuracy and efficiency. While both methods reach high accuracy with respect to the analytical solution, the IGA results clearly excel. FE solutions tend to have shorter runtimes at low accuracy; at higher accuracies, where it matters most, IGA proves considerably faster. / Doctor of Philosophy / CT scans can save lives by allowing medical practitioners to observe the inside of a patient's body without invasive surgery. However, they use high-energy, potentially harmful x-rays to penetrate the organs. Because of limits of the mathematical algorithm used to reconstruct the 3D shape of the organs from 2D x-ray images, many such images are required; the resulting level of x-ray exposure, under periodic use, can be harmful. Optical Tomography is a promising alternative that replaces x-rays with harmless near-infrared (NIR) light. NIR photons, however, have lower energy and tend to scatter before leaving the organs, so an additional algorithm is required to predict the distribution of light photons inside the body and the resulting 2D images. This is called the forward problem of Optical Tomography. Only then, as in conventional CT, can another algorithm, the inverse solution, reconstruct the 3D image by reducing the difference between the predicted and registered images. Currently, Optical Tomography cannot replace x-ray CT in most cases, owing to shortcomings of the forward and inverse algorithms in real-life use. One obstacle is that the forward problem must be solved numerous times for the inverse solution to reach the correct visualization, and the standard numerical method, the Finite Element Method (FEM), has trouble generating accurate solutions fast enough on economically viable computers. This limitation is mostly caused by the FEM's use of a simpler mathematical construct that requires more computation and is limited in accurately modelling geometry and shape. This research implements the recently developed Isogeometric Analysis (IGA), and particularly IGA-based FEM, to address this issue. IGA-based FEM uses the same mathematical construct used to model geometry in demanding applications such as animation and computer games, and it is less complicated to apply because it requires far less partitioning of the domain. This study applies the IGA method to solve the forward problem of diffuse Optical Tomography and compares the accuracy and speed of the IGA solution with the conventional FEM solution. The comparison reveals that while both methods can reach high accuracy, the IGA solutions are relatively more accurate; and while low-accuracy FEM solutions have shorter runtimes, at the higher accuracy levels that matter, IGA proves considerably faster.
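The continuity argument above is easy to see concretely. Below is a minimal sketch, not taken from the thesis, of the Cox-de Boor recursion that evaluates the B-spline basis functions IGA builds on; the degree, knot vector, and sample points are arbitrary choices for illustration.

```python
# Minimal Cox-de Boor evaluation of B-spline basis functions, the smooth
# C^{p-1} shape functions IGA uses in place of C0 Lagrange elements.
import numpy as np

def bspline_basis(i, p, knots, x):
    """Value of the i-th B-spline basis function of degree p at x."""
    if p == 0:
        return 1.0 if knots[i] <= x < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] > knots[i]:
        left = (x - knots[i]) / (knots[i + p] - knots[i]) \
               * bspline_basis(i, p - 1, knots, x)
    if knots[i + p + 1] > knots[i + 1]:
        right = (knots[i + p + 1] - x) / (knots[i + p + 1] - knots[i + 1]) \
                * bspline_basis(i + 1, p - 1, knots, x)
    return left + right

# Quadratic (p = 2) basis on an open knot vector: each function is C^1 across
# the interior knots 0.25, 0.5, 0.75, unlike C0 element shape functions.
knots = [0, 0, 0, 0.25, 0.5, 0.75, 1, 1, 1]
xs = np.linspace(0.0, 0.999, 9)
for i in range(len(knots) - 2 - 1):   # len(knots) - p - 1 basis functions
    print([round(bspline_basis(i, 2, knots, x), 3) for x in xs])
```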
12

Laboratory Soft X-Ray Cryo Microscopy: Source, System and Bio Applications

Fogelqvist, Emelie January 2017 (has links)
Soft x-ray microscopes routinely perform high-resolution 3D imaging of biological cells in their near-native environment with short exposure times at synchrotron radiation facilities. Several laboratory-scale microscopes aim to make this imaging technique accessible to a wider scientific community. However, these systems have been hampered by source instabilities, hindering routine imaging of biological samples with short exposure times. This thesis presents work performed on the Stockholm laboratory x-ray microscope. A novel heat-control system has been implemented, improving the stability of the laser-produced plasma source. In combination with recent upgrades to the imaging system and an improved cryofixation method, the microscope can now routinely produce images of cryofixed biological samples with 10-second exposure times. This has allowed tomographic imaging of cell autophagy and cell-cell interactions. Furthermore, a numerical 3D image formation model is presented, as well as a novel reconstruction approach dealing with the limited depth of focus in x-ray microscopes.
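For a sense of why depth of focus is limiting in such microscopes, here is a back-of-envelope estimate assuming a zone-plate objective; the wavelength and outermost zone width are assumed example values, not the Stockholm instrument's specifications.

```python
# Back-of-envelope depth-of-focus estimate for a zone-plate x-ray objective.
# The wavelength and outermost zone width are assumed example values.
wavelength_nm = 2.48      # a water-window wavelength (nitrogen plasma line)
outer_zone_nm = 25.0      # outermost zone width, sets the resolution limit
na = wavelength_nm / (2 * outer_zone_nm)       # zone-plate numerical aperture
dof_nm = 2 * outer_zone_nm**2 / wavelength_nm  # DOF ~ +/- 2*dr^2 / lambda
print(f"NA = {na:.3f}, depth of focus ~ +/- {dof_nm / 1000:.2f} um")
```

At roughly +/- 0.5 um, the in-focus slab is much thinner than a typical cell, which is what makes an explicit 3D image formation model worthwhile.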
13

Um modelo de reconstrução tomográfica 3D para amostras agrícolas com filtragem de Wiener em processamento paralelo / A 3D Tomographic Reconstruction Model for Agricultural Samples with Wiener Filtering and Parallel Processing

Pereira, Mauricio Fernando Lima 19 June 2007 (has links)
This work presents a new three-dimensional (3D) tomographic reconstruction model for agricultural samples with Wiener filtering and parallel processing, built from two-dimensional (2D) tomographic reconstructions. Parallel algorithms for filtered back projection and 3D reconstruction were developed, based on the insertion of a set of virtual planes between pairs of real planes obtained in X-ray tomography scans at energies from 56 keV to 662 keV. The virtual planes are generated in a parallel algorithm by B-spline-wavelet interpolation. The model was validated on a parallel platform of four DSP processors, which supported data exchange among the DSPs and communication with the host, a desktop computer with an 800 MHz Pentium III processor. Efficiency, speedup, and accuracy of the parallel algorithms were measured on a set of agricultural samples (soil, glass, and wood) and calibration phantoms. In this evaluation, the 2D reconstruction algorithm, which underlies the 3D algorithm, achieved high efficiency for higher-resolution images, peaking at 92% efficiency at a resolution of 181x181 pixels. The parallel 3D algorithm was analyzed on a set of samples under different configurations of real and virtual planes, arranged to assess the impact of increasing communication granularity and workload. The best performance, with an average speedup of 3.4, was obtained when reconstructing objects that required computing a larger number of planes. The model's adaptability to conventional architectures was also examined, with MPI providing communication between the tasks of each parallel algorithm. Additionally, 2D and 3D visualization tools based on the Visualization Toolkit were included so that users can analyze the images and the characteristics of agricultural samples in a 3D environment. The results indicate that the parallel 3D reconstruction model makes original contributions to agricultural tomography applied to soil physics, as well as to tools that exploit the computational resources available in parallel architectures for processing-intensive applications.
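As an illustration of the two ingredients named in the title, the sketch below Wiener-filters a stack of reconstructed 2D slices and then interpolates virtual planes between the real ones along z. A cubic spline stands in for the B-spline-wavelet interpolation actually used in the thesis, and the data and plane spacing are invented for the example.

```python
# Sketch: Wiener-filter the real 2D slices, then interpolate "virtual" planes
# between them along z to build the 3D volume.
import numpy as np
from scipy.signal import wiener
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(0)
real_slices = rng.random((5, 64, 64))             # 5 reconstructed 2D slices
filtered = np.stack([wiener(s, mysize=5) for s in real_slices])

z_real = np.arange(5, dtype=float)                # positions of real planes
spline = CubicSpline(z_real, filtered, axis=0)    # smooth model along z

z_virtual = np.arange(0.0, 4.01, 0.25)            # 3 virtual planes per gap
volume = spline(z_virtual)                        # (17, 64, 64) volume
print(volume.shape)
```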
14

Modélisation, simulation et quantification de lésions athéromateuses en tomographie par émission de positons / Numerical modeling, simulation and quantification of atheromatous lesions in positron emission tomography

Huet, Pauline 06 July 2015 (has links)
Cardiovascular disease is the leading cause of death in Western countries. New strategies and tools for diagnosis and therapeutic monitoring need to be developed to manage patients with atherosclerosis, a major cause of cardiovascular disease. Fluorodeoxyglucose Positron Emission Tomography (FDG-PET) is a powerful imaging technique that can detect rupture-prone plaques at an early stage. Yet the Partial Volume Effect (PVE), due to the small lesion dimensions (around 1 mm) relative to the scanner's spatial resolution (around 6 mm full width at half maximum), together with statistical fluctuations of the measured signal, considerably challenges reliable quantification of plaques from PET images. An original model of an atheromatous lesion, parameterized by its dimensions and activity concentration, was developed, and 36 Monte Carlo simulations of FDG-PET acquisitions were produced. Based on these simulations, we showed that the number of iterations in iterative reconstruction, the post-filtering applied to reconstructed images, and the quantification method within the Volume Of Interest (VOI), parameters found to be highly variable in a review of the dedicated literature, can induce variations in the measured uptake values by a factor of 1.5 to 4. We demonstrated that modeling the detector response could reduce the measurement bias by about 10% compared with a standard iterative (OSEM) reconstruction at a comparable noise level. In the reconstructed images, the measured uptake remains strongly biased (underestimation of more than 50% of the true SUV) and depends heavily on lesion dimensions because of the PVE. A minimum contrast of 4 with respect to blood activity is required for a lesion to be detectable. Without PVE correction, the measured values correlate weakly with activity concentration but strongly with the total uptake in the lesion. Applying a PVE correction yields a measurement that is less sensitive to lesion geometry and more correlated with activity concentration, but less correlated with total uptake. In conclusion, the total FDG uptake of inflammatory atheromatous lesions can be characterized on PET images, and this estimate does not require PVE correction. Estimates of activity concentration, by contrast, are heavily biased by the PVE; the bias can be reduced by measuring the maximum-intensity voxel in images reconstructed without post-filtering, with at least 80 iterations and a model of the detector response. Explicit PVE correction makes it easier to detect changes in metabolic activity independently of changes in the dimensions of the inflamed region. Accurate absolute quantification of activity concentration in plaques will only be possible through a substantial improvement in the spatial resolution of PET detectors.
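The partial volume effect described above can be reproduced in a few lines: blur a small high-contrast "lesion" with a PSF of scanner-like width and read off the recovery coefficient. The numbers are illustrative, not the simulation setup of the thesis.

```python
# Toy illustration of the partial volume effect: a ~1 mm lesion imaged with a
# 6 mm FWHM point spread function. Numbers are illustrative only.
import numpy as np
from scipy.ndimage import gaussian_filter1d

mm_per_px = 0.25
x = np.arange(0, 80, mm_per_px)                     # 1D profile over 80 mm
lesion = np.where(np.abs(x - 40) <= 0.5, 4.0, 1.0)  # contrast 4 vs background

fwhm_mm = 6.0
sigma_px = fwhm_mm / (2.355 * mm_per_px)            # FWHM = 2.355 * sigma
measured = gaussian_filter1d(lesion, sigma_px)

rc = (measured.max() - 1.0) / (lesion.max() - 1.0)  # recovery coefficient
print(f"recovery coefficient ~ {rc:.2f}")           # ~0.2: most contrast lost
```

With a 1 mm lesion and a 6 mm FWHM PSF, the measured peak recovers only about a fifth of the true contrast, consistent with the underestimations of more than 50% reported above.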
16

Reconstrução tomográfica dinâmica industrial / Industrial dynamic tomographic reconstruction

OLIVEIRA, Eric Ferreira de 29 February 2016 (has links)
The state-of-the-art methods applied to industrial processes are currently based on the principles of classical tomographic reconstruction developed for static distributions, and are therefore limited to processes with low variability of the density distribution of the imaged object. Noise and motion artifacts are the main problems caused by the mismatch between views acquired at different instants. In addition, industrial tomography commonly operates with a limited amount of data, which can itself produce noise, artifacts, and inconsistencies with the distribution under study. One objective of the present work is to discuss the difficulties that arise when reconstruction algorithms originally developed for static distributions are applied to dynamic tomography. Another is to propose solutions that reduce the loss of temporal information caused by employing static techniques in dynamic processes. With respect to dynamic image reconstruction, different static reconstruction methods, such as MART and FBP, were compared when used in dynamic scenarios. The comparison was based on MCNPX simulations, and on an analytical setup, of an aluminum cylinder moving across the section of a riser during acquisition, as well as on cross-sectional images from CFD techniques. As for adapting current acquisition systems to dynamic processes, this work established a just-in-time sequence of tomographic views for visualization purposes, a way of displaying density information as soon as it becomes amenable to image reconstruction. A third contribution takes advantage of the triple color channel used to display color images on most monitors: by appropriately scaling the values acquired from each view in the linear system of the reconstruction, a temporal trace can be imprinted into the traditionally reconstructed image, with the temporal trace on one channel and the regular reconstruction on another. Finally, a motion-correction technique from the medical field is proposed for industrial applications, considering that the density distribution in these scenarios may vary in ways compatible with rigid motions or changes in the scale of certain objects. The idea is to identify, in the temporally distributed data, clues about the type of motion or deformation the object underwent during acquisition, and to use this information to improve reconstruction quality. This is done by appropriately manipulating the weight matrix of the algebraic method, that is, by adjusting its values to reflect the predicted object motion or deformation. The results of all these techniques, applied in several experiments and simulations, are discussed in this work.
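For reference, the algebraic setting the abstract manipulates looks as follows: a weight matrix A maps voxels to ray measurements, and MART updates the image multiplicatively, ray by ray. The toy 2x2-pixel geometry below is invented for the example; motion correction would enter by adjusting the entries of A.

```python
# Minimal MART (multiplicative ART) sketch on a toy 4-ray, 4-pixel system.
import numpy as np

A = np.array([[1.0, 1.0, 0.0, 0.0],    # each row: one ray's pixel weights
              [0.0, 0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]])
x_true = np.array([1.0, 2.0, 3.0, 4.0])
p = A @ x_true                          # measured projections

x = np.ones(4)                          # positive initial guess
lam = 0.5                               # relaxation factor
for _ in range(200):
    for i in range(len(p)):
        x *= (p[i] / (A[i] @ x)) ** (lam * A[i])   # multiplicative update
print(np.round(x, 3))                   # converges to a solution with A @ x = p
```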
17

Spatiotemporal PET reconstruction with Learned Registration / Spatiotemporal PET-rekonstruktion med inlärd registrering

Meyrat, Pierre January 2022 (has links)
Because of the long acquisition times of Positron Emission Tomography scanners, the reconstructed images are blurred by motion. We propose a novel motion-corrected maximum-likelihood expectation-maximization (MLEM) algorithm that integrates 3D motion between the different gates, estimated by a neural network trained on synthetic data with contrast invariance. We show that, compared with the classic reconstruction method, this algorithm can increase image quality on realistic synthetic 3D data of a human body, in particular the contrast of small cancerous lung lesions. For the detection of 1 cm lesions over four gates at medium and high noise levels, the studied algorithm increased the Pearson correlation coefficient by 45 to 130% compared with classic reconstruction methods without deformation.
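The gated, motion-compensated MLEM update described above has a compact structure: warp the current image into each gate, project, compare with that gate's data, and warp the correction back. The sketch below uses small random and identity matrices as stand-ins for the projector and warps, so it only illustrates the update's shape, not a real PET system model.

```python
# Structure of a gated, motion-compensated MLEM update (toy operators).
import numpy as np

rng = np.random.default_rng(1)
n_vox, n_bins, n_gates = 16, 24, 3
A = rng.random((n_bins, n_vox))                    # toy projector
W = [np.eye(n_vox) for _ in range(n_gates)]        # toy per-gate warps
x_true = rng.random(n_vox) + 0.1
y = [A @ (W[g] @ x_true) for g in range(n_gates)]  # noiseless gated data

x = np.ones(n_vox)
sens = sum(W[g].T @ (A.T @ np.ones(n_bins)) for g in range(n_gates))
for _ in range(100):
    back = sum(W[g].T @ (A.T @ (y[g] / (A @ (W[g] @ x))))
               for g in range(n_gates))
    x *= back / sens                               # multiplicative MLEM step
print(np.round(np.abs(x - x_true).max(), 3))       # error shrinks with iterations
```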
18

Calibrations et stratégies de commandes tomographique pour les optiques adaptatives grand champ : validations expérimentales sur le banc HOMER / Calibrations and tomographic control strategies for wide-field adaptive optics: experimental validations on the HOMER bench

Parisot, Amelie 24 October 2012 (has links)
Adaptive optics (AO) provides real-time correction of the wavefront distortions induced by atmospheric turbulence. This technique is now mature, but it suffers from a fundamental limitation: anisoplanatism. Wide-field AO concepts have been developed to overcome it: turbulence is probed in several directions so as to perform a tomographic reconstruction of the turbulent volume. These complex systems raise specific challenges, such as their calibration procedures and their real-time control with tomographic control laws. My PhD work consisted in upgrading and optimizing Onera's wide-field AO bench, and then implementing and comparing different tomographic control laws envisaged for future instruments. Characterization and integration of new components were carried out, and I developed a procedure for identifying system parameters with a twofold goal: bench alignment and control-law optimization. Four control laws, spanning the diversity of proposed solutions, were then studied, from the simple least-squares reconstructor to optimal linear quadratic Gaussian (LQG) control, including pseudo-open-loop and virtual deformable mirror approaches. For each law, the tuning factors were optimized and performance as a function of field position was established, for several signal-to-noise ratios. The experimental results are compared with simulation results, and the control laws are then compared in terms of performance, robustness, and ease of implementation.
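The simplest of the four control laws, the least-squares reconstructor, can be sketched in a few lines. H, the noise level, and the sizes below are random stand-ins rather than a model of the HOMER bench, and a Tikhonov term hedges against the ill-posedness of the tomographic inversion.

```python
# Regularized least-squares tomographic reconstructor (toy dimensions).
import numpy as np

rng = np.random.default_rng(2)
n_modes, n_meas = 30, 3 * 40           # 3 guide-star directions, 40 slopes each
H = rng.standard_normal((n_meas, n_modes))
noise_var = 0.01

mu = noise_var                          # Tikhonov weight against ill-posedness
R = np.linalg.solve(H.T @ H + mu * np.eye(n_modes), H.T)  # (H'H + mu I)^-1 H'

phi = rng.standard_normal(n_modes)      # "true" turbulent modes
s = H @ phi + np.sqrt(noise_var) * rng.standard_normal(n_meas)
phi_hat = R @ s                         # tomographic estimate from all sensors
print(f"residual rms: {np.std(phi - phi_hat):.3f}")
```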
19

Nouvelle approche de la correction de l'atténuation mammaire en tomoscintigraphie de perfusion myocardique / New approach to breast attenuation correction in SPECT myocardial perfusion imaging

Chamouine, Saïd Omar 12 December 2011 (has links)
In this thesis we propose a new approach to correct for breast attenuation in single photon emission computed tomography (SPECT) myocardial perfusion imaging. It consists of two parts: the first makes the acquired projections consistent with one another; the second weights these corrected projections during reconstruction. We validated our methods on simulated myocardial perfusion SPECT studies mimicking breast attenuation and on several real patient studies, including cases of breast attenuation, inferior infarction, apical infarction, anterior infarction, and anterior and inferior ischemia. The results are encouraging. The next step, in the near future, is a validation study in patients against a gold standard (coronary angiography, CT coronary angiography). Keywords: SPECT, tomographic reconstruction, breast attenuation, iterative reconstruction, attenuation correction, myocardial perfusion imaging, nuclear medicine.
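A very reduced illustration of the two-part idea, not the method developed in the thesis: enforce the simplest consistency condition of parallel-beam projections (equal total counts in every view) and derive per-view confidence weights for use in a subsequent weighted reconstruction.

```python
# Toy sketch: rescale views to satisfy the zeroth-order consistency condition
# and compute confidence weights for the attenuated views. Data are synthetic.
import numpy as np

rng = np.random.default_rng(3)
n_views, n_bins = 32, 48
proj = rng.random((n_views, n_bins)) + 1.0
proj[10:16] *= 0.6                        # views attenuated by the breast

totals = proj.sum(axis=1)
consistent = proj * (totals.mean() / totals)[:, None]  # step 1: rescale views

w = np.clip(totals / totals.mean(), None, 1.0)  # step 2: confidence weights
print(np.round(w[8:18], 2))               # weights dip over the attenuated views
```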
20

Development of tomographic PIV for the study of turbulent flows / Développement de la PIV tomographique pour l'étude d'écoulements turbulents

Cheminet, Adam 19 May 2016 (has links)
This dissertation focuses on the development of tomographic PIV (tomo-PIV) for the measurement of turbulent flows (Elsinga et al. 2006). The technique is based on the tomographic reconstruction of a volumetric intensity distribution of tracer particles from projections recorded by cameras; the resulting volumetric distributions are correlated to obtain 3D displacement fields. The present work surveys the state of research on this technique and the main issues it has faced so far. The main research focus was tomographic reconstruction, whose chief limitation is the appearance of ghost particles, i.e. reconstruction noise, which grows rapidly when the high tracer concentrations required for fine spatial resolution are used, particularly in turbulent flows. For a thorough understanding of this noise, we carried out a numerical study of the experimental factors degrading reconstruction quality. Geometric considerations quantified the impact of "added particles", which lie in the union volume but not in the intersection volume between the camera fields of view and the illuminated region; this phenomenon was shown to create ghost particles. The decrease in image signal-to-noise ratio was investigated, considering Mie scattering and defocusing effects: particle-image defocusing mainly results in the loss of real particles in the reconstruction, and Mie scattering's main impact is likewise the loss of real particles, owing to the polydisperse nature of the seeding. This study of imaging conditions led us to propose an alternative to classical tomographic reconstruction, which seeks to recover nearly single-voxel particles rather than blobs of extended size, using a particle-based representation of the image data. We term this approach Particle Volume Reconstruction (PVR). PVR underlies a sparser, more physical volumetric representation of point particles, halfway between infinitely small particles and the voxel blobs commonly used in tomo-PIV. From that representation, the particles can be smoothed to blobs of 2 to 3 voxel diameter, as required by 3D-PIV correlation algorithms, with PVR incorporated into a SMART reconstruction. Numerical simulations showed that PVR-SMART outperforms tomo-SMART (Atkinson et al. 2009) over a variety of generating conditions and a variety of metrics for volume reconstruction and displacement estimation, especially at seeding densities above 0.06 particles per pixel. We also introduce a cross-correlation technique for 3D-PIV (FOLKI-3D), extending the FOLKI-PIV algorithm (Champagnat et al. 2011) to 3D. The displacement is sought as the minimizer of a sum of squared differences, solved iteratively using volume deformation. Synthetic tests confirmed that its spatial frequency response is similar to that of standard iterative deformation algorithms. Numerical simulations of tomographic reconstruction characterized the robustness of the algorithm to tomography-specific noise: FOLKI-3D proved more robust to coherent ghost particles than standard deformation algorithms, while the high-order deformation scheme gained accuracy under various signal noises. PVR-SMART was then applied to experimental data on a turbulent air jet. Several seeding densities were used to compare the performance of tomo-SMART and PVR-SMART in the near field of the jet. With the given image pre-processing, PVR-SMART yielded velocity fields about 50% less noisy than tomo-SMART. The velocity-field comparison covered statistical properties, peak-locking, flow divergence, the velocity gradient tensor, and the exploration of coherent structures. Finally, conclusions are drawn from the main results of this dissertation, leading to research perspectives on the future of tomographic PIV.
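The SSD minimization at the heart of a FOLKI-type matcher reduces, for a single global displacement, to one small linear system built from intensity gradients. The sketch below is a toy version on synthetic volumes; the real FOLKI-3D estimates a dense displacement field over local windows with iterative volume deformation.

```python
# One Gauss-Newton step of SSD minimization for a global 3D shift.
import numpy as np
from scipy.ndimage import gaussian_filter, shift as nd_shift

rng = np.random.default_rng(4)
vol0 = gaussian_filter(rng.random((32, 32, 32)), sigma=2.0)  # smooth field
true_d = np.array([0.6, -0.3, 0.4])
vol1 = nd_shift(vol0, true_d, order=3)      # second exposure, displaced

c = (slice(4, -4),) * 3                     # crop borders touched by the shift
G = np.stack([g[c].ravel() for g in np.gradient(vol0)])
diff = (vol0[c] - vol1[c]).ravel()
H = G @ G.T                                 # 3x3 structure tensor
d = np.linalg.solve(H, G @ diff)            # one Gauss-Newton step
print(np.round(d, 2))                       # close to true_d
```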
