51 |
Quantificação de artefatos metálicos produzidos por implantes dentários em imagens de tomografia computadorizada de feixe cônico obtidas com diferentes protocolos de aquisição / Quantification of metallic artifacts produced by dental implants in CBCT images obtained using different acquisition protocols. Fardim, Karolina Aparecida Castilho, 08 August 2018 (has links)
The aim of this study was to quantify, in cone beam computed tomography (CBCT) images obtained with different protocols, the metallic artifacts produced by titanium implants placed in different regions of the mandible. The implants were placed in four different regions (incisor, canine, premolar, and molar) of a phantom and submitted to CBCT scans varying the position of the object inside the FOV (central, anterior, posterior, right, and left), the FOV size (6 x 13 and 12 x 13 cm), and the voxel size (0.25 and 0.30 mm). An axial slice of the cervical region of each implant was selected for quantification. The Kruskal-Wallis and Student-Newman-Keuls tests were used to compare the tooth regions and the different positions of the phantom inside the FOV; the Wilcoxon test was used to compare the variation in FOV and voxel size; and a factorial ANOVA assessed the interaction among the study variables. The incisor region showed the largest amount of artifacts compared with the other regions (p=0.0315). There was no significant difference across positions of the phantom inside the FOV (p=0.7418). The smaller FOV produced more artifacts (p<0.0001), and, comparing images produced at different resolutions, the smaller voxel produced more artifacts (p<0.0001). Metallic artifacts are influenced by FOV and voxel size, as well as by the anatomical region; varying the location of the phantom inside the FOV did not change the amount of artifacts.
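A minimal sketch of how this test battery could be run in Python with scipy, using hypothetical placeholder measurements rather than the study's data:

    import numpy as np
    from scipy import stats

    # Hypothetical artifact quantities per region (placeholder values, not the study's data).
    incisor  = np.array([412, 398, 405, 421, 409])
    canine   = np.array([371, 365, 380, 368, 377])
    premolar = np.array([355, 349, 360, 352, 358])
    molar    = np.array([340, 338, 347, 343, 339])

    # Kruskal-Wallis across the four implant regions
    # (Student-Newman-Keuls would follow as the post hoc test).
    h, p = stats.kruskal(incisor, canine, premolar, molar)
    print(f"Kruskal-Wallis: H={h:.2f}, p={p:.4f}")

    # Wilcoxon signed-rank for a paired comparison such as small vs. large FOV.
    small_fov = np.array([512, 498, 505, 521, 509])
    large_fov = np.array([471, 465, 480, 468, 477])
    w, p = stats.wilcoxon(small_fov, large_fov)
    print(f"Wilcoxon: W={w:.1f}, p={p:.4f}")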
|
52 |
Étude des effets de volume partiel en IRM cérébrale pour l'estimation d'épaisseur corticale / Partial volume effects in brain MRI for cortical thickness estimation. Duché, Quentin, 18 June 2015 (has links)
The work developed in this thesis is within the scope of magnetic resonance imaging (MRI) acquisition and image processing for the automated analysis of brain structures. Measuring structural changes over time, such as cortical atrophy, requires image processing algorithms that compensate for MRI artifacts such as intensity inhomogeneities and partial volume (PV) effects, to allow brain tissue segmentation and then cortical thickness estimation. We propose a new PV model, named the bi-exponential model, that stays close to the physics of acquisition and competes with the commonly used linear model by explicitly modelling brain tissues and image acquisition. It requires two differently contrasted and perfectly coregistered images. The model was first validated on simulations and on physical and digital phantoms. In parallel, the recent MP2RAGE sequence provides two coregistered images per acquisition, whose combination yields a bias-field-corrected image as well as a T1 map of the scanned tissues. We tested our model on in vivo MP2RAGE data and showed that the linear PV model leads to a systematic underestimation of the gray matter proportion in PV voxels. These errors propagate to cortical thickness estimation, a biomarker that is very sensitive to PV effects. Our results support the following hypothesis: PV modelling for MP2RAGE images must differ from the usual linear PV model applied to images from more classic sequences. The bi-exponential model is a solution adapted to this particular sequence.
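To make the contrast concrete, here is a hedged sketch in LaTeX of how the two PV models could be written, assuming a standard inversion-recovery signal equation; the thesis' exact parametrization may differ:

    % alpha: tissue-A volume fraction of voxel v; I-bar: mean observed tissue
    % intensities; rho: proton densities; T1: longitudinal relaxation times.
    \[
    \text{linear:}\quad I_v = \alpha\,\bar I_A + (1-\alpha)\,\bar I_B
    \]
    \[
    \text{bi-exponential:}\quad
    S_v(TI) = \alpha\,\rho_A\bigl(1 - 2e^{-TI/T_{1,A}}\bigr)
            + (1-\alpha)\,\rho_B\bigl(1 - 2e^{-TI/T_{1,B}}\bigr)
    \]

The nonlinearity of the exponentials is what makes inverting the linear model biased: a voxel's observed intensity is not, in general, a linear mixture of the pure-tissue intensities.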
|
53 |
Polyedrisierung dreidimensionaler digitaler Objekte mit Mitteln der konvexen Hülle / Polyhedrization of three-dimensional digital objects by means of the convex hull. Schulz, Henrik, 21 July 2008 (has links)
For visualizing three-dimensional digital objects, generally only their surface is of interest. Since imaging techniques digitize the entire spatial object as a volume structure, the surface must be computed from the data. This work presents an algorithm that approximates the surface of three-dimensional digital objects given as sets of voxels, generating polyhedra with the property of separating the object voxels from the background voxels. Furthermore, non-convex objects are classified, and it is investigated for which classes of objects the generated polyhedra have the minimal number of faces and the minimal surface area.
|
54 |
Blood Pressure Control in Aging Predicts Cerebral Atrophy Related to Small-Vessel White Matter Lesions. Kern, Kyle C, Wright, Clinton B, Bergfield, Kaitlin L, Fitzhugh, Megan C, Chen, Kewei, Moeller, James R, Nabizadeh, Nooshin, Elkind, Mitchell S V, Sacco, Ralph L, Stern, Yaakov, DeCarli, Charles S, Alexander, Gene E, January 2017 (has links)
Cerebral small-vessel damage manifests as white matter hyperintensities and cerebral atrophy on brain MRI and is associated with aging, cognitive decline, and dementia. We sought to examine the interrelationship of these imaging biomarkers and the influence of hypertension in older individuals. We used a multivariate spatial covariance neuroimaging technique to localize the effects of white matter lesion load on regional gray matter volume and assessed the role of blood pressure control, age, and education on this relationship. Using a case-control design matching for age, gender, and educational attainment, we selected 64 participants with normal blood pressure, controlled hypertension, or uncontrolled hypertension from the Northern Manhattan Study cohort. We applied gray matter voxel-based morphometry with the scaled subprofile model to (1) identify regional covariance patterns of gray matter volume differences associated with white matter lesion load, (2) compare this relationship across blood pressure groups, and (3) relate it to cognitive performance. In this group of participants aged 60-86 years, we identified a pattern of reduced gray matter volume associated with white matter lesion load in bilateral temporal-parietal regions with relative preservation of volume in the basal forebrain, thalami, and cingulate cortex. This pattern was expressed most in the uncontrolled hypertension group and least in the normotensives, but was also more evident in older and more educated individuals. Expression of this pattern was associated with worse performance in executive function and memory. In summary, white matter lesions from small-vessel disease are associated with a regional pattern of gray matter atrophy that is mitigated by blood pressure control, exacerbated by aging, and associated with cognitive performance.
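A rough numpy sketch of the scaled-subprofile-model style of covariance analysis described above (log-transform, double-centering, PCA, then pattern expression versus lesion load); the variable names and placeholder data are illustrative assumptions, not the study's pipeline:

    import numpy as np

    rng = np.random.default_rng(0)
    gm  = rng.lognormal(0.0, 0.2, size=(64, 5000))  # subjects x voxels GM volumes (placeholder)
    wml = rng.normal(size=64)                       # white matter lesion loads (placeholder)

    logd = np.log(gm)
    srp  = logd - logd.mean(axis=1, keepdims=True)  # remove each subject's mean
    srp -= srp.mean(axis=0, keepdims=True)          # remove the group mean profile

    # PCA of the residual profiles via SVD.
    u, s, vt = np.linalg.svd(srp, full_matrices=False)
    scores   = u * s        # subject expression of each covariance pattern
    pattern1 = vt[0]        # voxel weights of the first covariance pattern

    # Relate expression of the first pattern to lesion load.
    r = np.corrcoef(scores[:, 0], wml)[0, 1]
    print(f"pattern-1 expression vs. WML load: r={r:.2f}")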
|
55 |
Cascaded Voxel Cone-Tracing Shadows: A Computational Performance Study. Sjödahl, Dan, January 2019 (has links)
Background. Real-time shadows in 3D applications have for decades been implemented with a solution called Shadow Mapping or some variant of it. It is easy to implement and computationally efficient, but it suffers from several problems and limitations. Newer alternatives exist, and one of them is based on a technique called Voxel Cone-Tracing, which can be combined with Cascading to create Cascaded Voxel Cone-Tracing Shadows (CVCTS). Objectives. To measure the computational performance of CVCTS, to provide data and findings that help developers make an informed decision about whether this technique is worth exploring, and to identify where the performance problems of the solution lie. Methods. A simple implementation of CVCTS was written in OpenGL, aimed at simulating a solution usable for outdoor scenes in 3D applications, with several adjustable parameters; computational performance was then measured across different settings of these parameters. Results. The data were collected and analyzed before drawing conclusions, revealing several parts of the implementation that could potentially be very slow, and why. Conclusions. The slowest parts of the CVCTS implementation were the Voxelization and Cone-Tracing steps. It might be possible to use the thesis' solution in, for example, a game if the settings are not too high, but that is a stretch. Little time could be spent optimizing the solution during the thesis, so its performance could likely be increased.
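As a rough illustration of the cone-tracing step discussed above, here is a CPU-side Python sketch of accumulating occlusion through a mip-mapped occupancy volume; the thesis implementation is in OpenGL, and every name and parameter here (aperture, step growth, mip selection) is an assumption:

    import numpy as np

    def build_mips(occ):
        # Average-downsample a binary occupancy volume into a mip chain (2x per level).
        mips = [occ.astype(np.float32)]
        while min(mips[-1].shape) > 1:
            m = mips[-1]
            d, h, w = (s // 2 * 2 for s in m.shape)
            mips.append(m[:d, :h, :w]
                        .reshape(d // 2, 2, h // 2, 2, w // 2, 2)
                        .mean(axis=(1, 3, 5)))
        return mips

    def cone_trace_shadow(mips, origin, to_light, aperture=0.35, max_dist=64.0):
        # March a cone from `origin` toward the light, sampling coarser mips as
        # the cone widens; occlusion is composited front to back.
        occlusion, dist = 0.0, 1.0
        while dist < max_dist and occlusion < 1.0:
            radius = max(1.0, aperture * dist)
            level  = min(int(np.log2(radius)), len(mips) - 1)
            p      = origin + dist * to_light
            m      = mips[level]
            idx    = tuple(np.clip((p / 2 ** level).astype(int), 0, np.array(m.shape) - 1))
            occlusion += (1.0 - occlusion) * m[idx]
            dist += radius            # step size grows with the cone radius
        return 1.0 - min(occlusion, 1.0)  # 1 = fully lit, 0 = fully shadowed

For example, cone_trace_shadow(build_mips(occ), np.array([5., 5., 5.]), np.array([0.577, 0.577, 0.577])) would shade one point against a directional light. One reason the Cone-Tracing step can dominate is visible here: every shaded point pays for a full marching loop across several mip levels.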
|
56 |
Voxel-Based Morphometry (VBM) in Individuals with Blast/TBI-Related Balance Dysfunction. Cacace, A. T., Ye, Y., Akin, Faith W., Murnane, Owen D., Pearson, A., Gattu, R., Haacke, E. M., 01 August 2014 (has links)
No description available.
|
57 |
Segmentation of the Brain from MR Images. Caesar, Jenny, January 2005 (has links)
KTH, Division of Neuronic Engineering, has a finite element model of the head. However, this model does not contain detailed modeling of the brain. This thesis project consists of finding a method to extract brain tissues from T1-weighted MR images of the head. The method should be automatic to be suitable for patient-specific modeling. A summary of the most common segmentation methods is presented, and one of the methods is implemented. The implemented method is based on the assumption that the probability density function (pdf) of an MR image can be described by parametric models. The intensity distribution of each tissue class is modeled as a Gaussian distribution; thus, the total pdf is a sum of Gaussians. However, the voxel values are also influenced by intensity inhomogeneities, which affect the pdf. The implemented method is based on the expectation-maximization algorithm and corrects for intensity inhomogeneities. The result of the algorithm is a classification of the voxels, and the brain is extracted from the classified voxels using morphological operations.
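A minimal sketch of the Gaussian-mixture EM step the abstract describes; the thesis method additionally models intensity inhomogeneities, which this sketch omits, and the function and variable names are my own:

    import numpy as np

    def em_gmm_1d(x, k=3, iters=50):
        # Fit a k-class 1-D Gaussian mixture to voxel intensities with plain EM.
        mu  = np.quantile(x, np.linspace(0.2, 0.8, k))  # spread initial means
        var = np.full(k, x.var())
        w   = np.full(k, 1.0 / k)
        for _ in range(iters):
            # E-step: posterior probability of each tissue class per voxel.
            d = x[:, None] - mu[None, :]
            p = w * np.exp(-0.5 * d ** 2 / var) / np.sqrt(2 * np.pi * var)
            r = p / (p.sum(axis=1, keepdims=True) + 1e-300)
            # M-step: re-estimate weights, means, and variances.
            n   = r.sum(axis=0)
            w   = n / len(x)
            mu  = (r * x[:, None]).sum(axis=0) / n
            d   = x[:, None] - mu[None, :]
            var = (r * d ** 2).sum(axis=0) / n
        return r.argmax(axis=1), mu, var  # hard voxel labels and class parameters

Morphological opening and closing on the resulting label volume would then isolate the brain, as the abstract outlines.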
|
59 |
Voxel-based Cortical Thickness Measurement of Human Brain Using Magnetic Resonance Imaging. Chen, Wen-Fu, 14 February 2012 (has links)
The cerebral cortex, classified as gray matter, is the superficial layer of the cerebrum. In recent years, many studies have shown that abnormal cortical thickness may be correlated with diseases or disorders of the central nervous system, such as Alzheimer's disease and lissencephaly. The purpose of this work is therefore to implement a measurement of cortical thickness.
In general, two approaches, surface-based and voxel-based methods, have been proposed to measure cortical thickness. In this thesis, a voxel-based procedure using Laplace's equation was developed, following the 2008 publication by Chloe Hutton et al., to obtain a voxel-based cortical thickness (VBCT) map. The results of our home-made program were compared with those calculated by Hutton's program, which was generously provided by the author. The differences between the two implementations consist of four main parts. First, different strategies of tissue classification were used to define the boundary conditions of Laplace's equation: when gray matter, white matter, and cerebrospinal fluid were classified by maximizing the tissue probability, Hutton's program tends to find more cerebrospinal fluid voxels in sulci by skeletonizing the non-parenchyma area. Second, the layer-growing algorithms differ: a single layer obtained by the 26-neighborhood algorithm in our program is noticeably thicker than that produced by Hutton's program using the 6-neighborhood. Third, instead of the fixed step size (usually 0.5 mm) proposed in the main reference for tracking cortical streamlines, we designed a variable step size, reducing the underestimation of cortical thickness. Last but not least, the connecting points of a cortical streamline usually are not grid points and thus require interpolation to estimate the stepping gradient; we adopted linear interpolation for better accuracy, whereas Hutton et al. searched for the closest grid point to achieve faster computation.
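A simplified Python sketch of the Laplace-based machinery described above (Jacobi relaxation of the potential over the gray-matter ribbon, then a unit gradient field for streamline tracking); the boundary handling and all names are assumptions:

    import numpy as np

    def laplace_potential(gm, csf, iters=500, h=1.0):
        # gm, csf: boolean volumes. WM/background voxels stay at potential 0,
        # CSF voxels are fixed at 1, and only gray-matter voxels are relaxed.
        # Assumes the volume is padded so np.roll wrap-around never touches tissue.
        u = np.where(csf, 1.0, 0.0)
        for _ in range(iters):
            avg = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                   np.roll(u, 1, 1) + np.roll(u, -1, 1) +
                   np.roll(u, 1, 2) + np.roll(u, -1, 2)) / 6.0
            u = np.where(gm, avg, u)   # relax only inside gray matter
        gx, gy, gz = np.gradient(u, h)
        mag = np.sqrt(gx ** 2 + gy ** 2 + gz ** 2) + 1e-12
        return u, np.stack([gx, gy, gz]) / mag  # potential and unit streamline field

Streamlines integrated through this field from the WM boundary to the CSF boundary give the thickness at each voxel; the variable step size discussed above would adapt the integration step along the way, and the interpolation choice (linear versus nearest grid point) enters when sampling the field between grid points.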
|
60 |
Polyhedral Surface Approximation of Non-Convex Voxel Sets and Improvements to the Convex Hull Computing Method. Schulz, Henrik, 31 March 2010 (has links) (PDF)
In this paper we introduce an algorithm for the creation of polyhedral approximations for objects represented as strongly connected sets of voxels in three-dimensional binary images. The algorithm generates the convex hull of a given object and modifies the hull afterwards by recursive repetitions of generating convex hulls of subsets of the given voxel set or subsets of the background voxels. The result of this method is a polyhedron which separates object voxels from background voxels. The objects processed by this algorithm and also the background voxel components inside the convex hull of the objects are restricted to have genus 0. The second aim of this paper is to present some improvements to our convex hull algorithm to reduce computation time.
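A hedged sketch of the starting point of this method, computing the convex hull of a voxel object with scipy; the recursive carving of background components is only indicated in a comment, and the data are placeholders:

    import numpy as np
    from scipy.spatial import ConvexHull

    # Placeholder voxel object: a cube with a notch, so it is non-convex.
    vol = np.zeros((32, 32, 32), dtype=bool)
    vol[8:24, 8:24, 8:24] = True
    vol[12:20, 12:20, 20:24] = False

    pts  = np.argwhere(vol).astype(float)  # coordinates of object voxels
    hull = ConvexHull(pts)                 # first approximation of the surface
    print(hull.simplices.shape)            # triangles of the hull

    # The algorithm described above would now find the background voxel
    # component inside the hull (the notch) and cut it out by recursively
    # computing convex hulls of background subsets.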
|