About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
51

New insights into the natural history of thrombo-embolic disease provided by imaging and disease quantification

Murchison, John Tallach January 2013 (has links)
Venous thromboembolism (VTE) is a common disease with myriad presentations. It is often difficult to diagnose because its symptoms are shared with many other disorders. Because of this overlap in symptomatology, it is commonly overlooked when present and commonly suspected when absent. The threshold for investigating suspected VTE has dropped over time, partly because of greater awareness of the disease among clinicians, but also because of the greater availability of diagnostic tests that are both accurate in positively diagnosing VTE and patient friendly. This has resulted in a mushrooming of the number of diagnostic tests performed for suspected VTE in radiology departments. Radiology therefore provides a window into the disease in a way that no other speciality can. All branches of medicine have their share of VTE patients, but radiology offers a unique opportunity to study them because, whichever speciality they come from, patients with suspected disease will almost inevitably undergo a definitive radiological test. There is still much to learn about VTE, but developments in modern imaging and computerised databases have advanced our understanding of this common disease. The window that radiology provides into VTE has contributed to those advances.
52

Evaluation of Volumetric Change of Periapical Lesions After Apicoectomy as a Measure of Postsurgical Healing Utilizing Cone Beam Computed Tomography

Arasu, Eshwar 01 January 2017 (has links)
The aim of this study was to evaluate whether volumetric changes in persistent periapical lesions can be detected in follow-ups six months to five years after apicoectomy using cone-beam computed tomography (CBCT). Patients with a history of apicoectomy for whom a pre-surgical CBCT scan had been taken between November 2010 and December 2015 were invited to participate in the study. A post-surgical CBCT image of the treated tooth was obtained at the recall visit. Volumetric and linear measurements of periapical lesions on initial and postoperative CBCT images were performed using DiThreshGUI software by two calibrated examiners, a board-certified endodontist and a board-certified oral radiologist. Repeated-measures ANOVA was used to estimate the magnitude of reduction and to test for differences (at alpha = 0.05). A total of 20 patients with 27 surgically treated teeth were recalled at an average interval of 37 months. Reduction in lesion size was observed in 24 teeth (88%); overall, the volumes decreased significantly as detected both by software-assisted volume measurement (P = .0002) and by calculation from linear measurements (P < .0001). Volumetric analysis detected an average reduction of 86% in lesion size, while the linear-derived volume measurements yielded an average reduction of 96%. Where apical lesions were measurable, the two methods of lesion assessment were strongly correlated with one another in pre-surgical scans (r > 0.88).
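The abstract does not specify how volume was derived from the linear measurements; a common convention in radiology, assumed here along with all numeric values, is the ellipsoid approximation V = (π/6)·a·b·c. A minimal sketch:

```python
import math

def ellipsoid_volume(a, b, c):
    """Approximate lesion volume (mm^3) from three orthogonal linear
    diameters (mm) using the ellipsoid formula V = (pi/6) * a * b * c."""
    return math.pi / 6.0 * a * b * c

def percent_reduction(v_initial, v_followup):
    """Percentage reduction in lesion volume between two scans."""
    return 100.0 * (v_initial - v_followup) / v_initial

# Hypothetical pre-surgical and follow-up diameters (mm)
v0 = ellipsoid_volume(6.0, 5.0, 4.0)
v1 = ellipsoid_volume(2.0, 1.5, 1.2)
print(round(percent_reduction(v0, v1), 1))  # → 97.0
```

Because the π/6 factor cancels in the ratio, the percentage reduction depends only on the product of the diameters, which is one reason linear-derived and directly measured volumes can diverge for irregular lesions.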
53

Advanced Techniques for Image Quality Assessment of Modern X-ray Computed Tomography Systems

Solomon, Justin Bennion January 2016 (has links)
<p>X-ray computed tomography (CT) imaging constitutes one of the most widely used diagnostic tools in radiology today, with nearly 85 million CT examinations performed in the U.S. in 2011. CT imparts a relatively high amount of radiation dose to the patient compared to other x-ray imaging modalities and, as a result of this fact coupled with its popularity, CT is currently the single largest source of medical radiation exposure to the U.S. population. For this reason, there is a critical need to optimize CT examinations such that the dose is minimized while the quality of the CT images is not degraded. This optimization can be difficult to achieve due to the relationship between dose and image quality. All else being equal, reducing the dose degrades image quality and can impact the diagnostic value of the CT examination. </p><p>A recent push from the medical and scientific community towards using lower doses has spawned new dose reduction technologies such as automatic exposure control (i.e., tube current modulation) and iterative reconstruction algorithms. In theory, these technologies could allow for scanning at reduced doses while maintaining the image quality of the exam at an acceptable level. Therefore, there is a scientific need to establish the dose reduction potential of these new technologies in an objective and rigorous manner. Establishing these dose reduction potentials requires precise and clinically relevant metrics of CT image quality, as well as practical and efficient methodologies to measure such metrics on real CT systems.
The currently established methodologies for assessing CT image quality are not appropriate for modern CT scanners that implement the aforementioned dose reduction technologies.</p><p>Thus, the purpose of this doctoral project was to develop, assess, and implement new phantoms, image quality metrics, analysis techniques, and modeling tools that are appropriate for image quality assessment of modern clinical CT systems. The project developed image quality assessment methods in the context of three distinct paradigms: (a) uniform phantoms, (b) textured phantoms, and (c) clinical images.</p><p>The work in this dissertation used the “task-based” definition of image quality. That is, image quality was broadly defined as the effectiveness with which an image can be used for its intended task. Under this definition, any assessment of image quality requires three components: (1) a well-defined imaging task (e.g., detection of subtle lesions), (2) an “observer” to perform the task (e.g., a radiologist or a detection algorithm), and (3) a way to measure the observer’s performance in completing the task at hand (e.g., detection sensitivity/specificity).</p><p>First, this task-based image quality paradigm was implemented using a novel multi-sized phantom platform (with uniform background) developed specifically to assess modern CT systems (Mercury Phantom, v3.0, Duke University). A comprehensive evaluation was performed on a state-of-the-art CT system (SOMATOM Definition Force, Siemens Healthcare) in terms of noise, resolution, and detectability as a function of patient size, dose, tube energy (i.e., kVp), automatic exposure control, and reconstruction algorithm (i.e., Filtered Back-Projection (FBP) vs. Advanced Modeled Iterative Reconstruction (ADMIRE)). A mathematical observer model (i.e., computer detection algorithm) was implemented and used as the basis of image quality comparisons.
It was found that image quality increased with increasing dose and decreasing phantom size. The CT system exhibited nonlinear noise and resolution properties, especially at very low doses, large phantom sizes, and for low-contrast objects. Objective image quality metrics generally increased with increasing dose and ADMIRE strength, and with decreasing phantom size. The ADMIRE algorithm could offer comparable image quality at reduced doses, or improved image quality at the same dose (increase in detectability index by up to 163% depending on iterative strength). The use of automatic exposure control resulted in more consistent image quality with changing phantom size.</p><p>Based on those results, the dose reduction potential of ADMIRE was further assessed specifically for the task of detecting small (<=6 mm) low-contrast (<=20 HU) lesions. A new low-contrast detectability phantom (with uniform background) was designed and fabricated using a multi-material 3D printer. The phantom was imaged at multiple dose levels and the images were reconstructed with FBP and ADMIRE. Human perception experiments were performed to measure the detection accuracy from FBP and ADMIRE images. It was found that ADMIRE had performance equivalent to FBP at 56% less dose.</p><p>Using the same image data as the previous study, a number of different mathematical observer models were implemented to assess which models would yield image quality metrics that best correlated with human detection performance. The models included naïve simple metrics of image quality, such as contrast-to-noise ratio (CNR), and more sophisticated observer models, such as the non-prewhitening matched filter observer model family and the channelized Hotelling observer model family. It was found that non-prewhitening matched filter observers and channelized Hotelling observers both correlated strongly with human performance.
Conversely, CNR was found not to correlate strongly with human performance, especially when comparing different reconstruction algorithms.</p><p>The uniform background phantoms used in the previous studies provided a good first-order approximation of image quality. However, due to their simplicity and to the complexity of iterative reconstruction algorithms, it is possible that such phantoms are not fully adequate for assessing the clinical impact of iterative algorithms, because patient images obviously do not have smooth uniform backgrounds. To test this hypothesis, two textured phantoms (classified as gross texture and fine texture) and a uniform phantom of similar size were built and imaged on a SOMATOM Flash scanner (Siemens Healthcare). Images were reconstructed using FBP and Sinogram Affirmed Iterative Reconstruction (SAFIRE). Using an image subtraction technique, quantum noise was measured in all images of each phantom. It was found that with FBP, the noise was independent of the background (textured vs. uniform). However, with SAFIRE, noise increased by up to 44% in the textured phantoms compared to the uniform phantom. As a result, the noise reduction from SAFIRE was found to be up to 66% in the uniform phantom but as low as 29% in the textured phantoms. Based on this result, it is clear that further investigation was needed to understand the impact that background texture has on image quality when iterative reconstruction algorithms are used.</p><p>To further investigate this phenomenon with more realistic textures, two anthropomorphic textured phantoms were designed to mimic lung vasculature and fatty soft tissue texture. The phantoms (along with a corresponding uniform phantom) were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Scans were repeated a total of 50 times in order to obtain ensemble statistics of the noise.
A novel method of estimating the noise power spectrum (NPS) from irregularly shaped ROIs was developed. It was found that SAFIRE images had highly locally non-stationary noise patterns, with pixels near edges having higher noise than pixels in more uniform regions. Compared to FBP, SAFIRE images had 60% less noise on average in uniform regions; for edge pixels, noise was between 20% higher and 40% lower. The noise texture (i.e., NPS) was also highly dependent on the background texture for SAFIRE. It was therefore concluded that quantum noise properties in uniform phantoms are not representative of those in patients for iterative reconstruction algorithms, and that texture should be considered when assessing the image quality of iterative algorithms.</p><p>To move beyond assessing noise properties in textured phantoms towards assessing detectability, a series of new phantoms was designed specifically to measure low-contrast detectability in the presence of background texture. The textures used were optimized to match the texture in the liver regions of actual patient CT images using a genetic algorithm. The so-called “Clustered Lumpy Background” texture synthesis framework was used to generate the modeled texture. Three textured phantoms and a corresponding uniform phantom were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Images were reconstructed with FBP and SAFIRE and analyzed using a multi-slice channelized Hotelling observer to measure detectability and the dose reduction potential of SAFIRE based on the uniform and textured phantoms. It was found that at the same dose, the improvement in detectability from SAFIRE (compared to FBP) was greater when measured in a uniform phantom than in textured phantoms.</p><p>The final trajectory of this project aimed at developing methods to mathematically model lesions as a means to help assess image quality directly from patient images.
The mathematical modeling framework is first presented. The models describe a lesion’s morphology in terms of size, shape, contrast, and edge profile as an analytical equation. The models can be voxelized and inserted into patient images to create so-called “hybrid” images. These hybrid images can then be used to assess detectability or estimability with the advantage that the ground truth of the lesion morphology and location is known exactly. Based on this framework, a series of liver lesions, lung nodules, and kidney stones were modeled based on images of real lesions. The lesion models were virtually inserted into patient images to create a database of hybrid images to go along with the original database of real lesion images. ROI images from each database were assessed by radiologists in a blinded fashion to determine the realism of the hybrid images. It was found that the radiologists could not readily distinguish between real and virtual lesion images (area under the ROC curve was 0.55). This study provided evidence that the proposed mathematical lesion modeling framework could produce reasonably realistic lesion images.</p><p>Based on that result, two studies were conducted which demonstrated the utility of the lesion models. The first study used the modeling framework as a measurement tool to determine how dose and reconstruction algorithm affected the quantitative analysis of liver lesions, lung nodules, and renal stones in terms of their size, shape, attenuation, edge profile, and texture features. The same database of real lesion images used in the previous study was used for this study. That database contained images of the same patient at 2 dose levels (50% and 100%) along with 3 reconstruction algorithms from a GE 750HD CT system (GE Healthcare). The algorithms in question were FBP, Adaptive Statistical Iterative Reconstruction (ASiR), and Model-Based Iterative Reconstruction (MBIR). 
A total of 23 quantitative features were extracted from the lesions under each condition. It was found that both dose and reconstruction algorithm had a statistically significant effect on the feature measurements. In particular, radiation dose affected five, three, and four of the 23 features (related to lesion size, conspicuity, and pixel-value distribution) for liver lesions, lung nodules, and renal stones, respectively. MBIR significantly affected 9, 11, and 15 of the 23 features (including size, attenuation, and texture features) for liver lesions, lung nodules, and renal stones, respectively. Lesion texture was not significantly affected by radiation dose.</p><p>The second study demonstrating the utility of the lesion modeling framework focused on assessing the detectability of very low-contrast liver lesions in abdominal imaging. Specifically, detectability was assessed as a function of dose and reconstruction algorithm. As part of a parallel clinical trial, images from 21 patients were collected at 6 dose levels per patient on a SOMATOM Flash scanner. Subtle liver lesion models (contrast = -15 HU) were inserted into the raw projection data from the patient scans. The projections were then reconstructed with FBP and SAFIRE (strength 5). Lesion-less images were also reconstructed. Noise, contrast, CNR, and the detectability index of an observer model (non-prewhitening matched filter) were assessed. It was found that, compared to FBP, SAFIRE reduced noise by 52%, reduced contrast by 12%, increased CNR by 87%, and increased the detectability index by 65%. Further, a 2AFC human perception experiment was performed to assess the dose reduction potential of SAFIRE, which was found to be 22% relative to the standard-of-care dose. </p><p>In conclusion, this dissertation provides the scientific community with a series of new methodologies, phantoms, analysis techniques, and modeling tools that can be used to rigorously assess image quality from modern CT systems.
Specifically, methods to properly evaluate iterative reconstruction have been developed and are expected to aid in the safe clinical implementation of dose reduction technologies.</p> / Dissertation
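The dissertation's observers are multi-slice and channelized; as a simplified illustration only, a frequency-domain non-prewhitening matched-filter detectability index can be sketched as follows (the lesion spectrum and noise power spectra are hypothetical, not values from the study):

```python
import math

def npw_detectability(signal, nps):
    """Non-prewhitening matched-filter detectability index in a simplified
    discrete frequency-domain form: d' = sum(|S|^2) / sqrt(sum(|S|^2 * NPS)).
    `signal` holds the expected lesion signal spectrum |S(f)| and `nps` the
    noise power spectrum sampled at the same frequencies."""
    num = sum(s * s for s in signal)
    den = math.sqrt(sum(s * s * n for s, n in zip(signal, nps)))
    return num / den

# Hypothetical lesion spectrum and two noise levels (e.g., FBP vs. an
# iterative reconstruction); values are illustrative only
S = [1.0, 0.8, 0.5, 0.2]
nps_high = [0.04, 0.04, 0.04, 0.04]  # higher-noise reconstruction
nps_low = [0.01, 0.01, 0.01, 0.01]   # lower-noise reconstruction
print(npw_detectability(S, nps_low) > npw_detectability(S, nps_high))  # True
```

Quartering the NPS doubles d' in this white-noise sketch, mirroring the qualitative finding that lower-noise reconstructions raise the detectability index.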
54

Interactive 3D Image Analysis for Cranio-Maxillofacial Surgery Planning and Orthopedic Applications

Nysjö, Johan January 2016 (has links)
Modern medical imaging devices are able to generate highly detailed three-dimensional (3D) images of the skeleton. Computerized image processing and analysis methods, combined with real-time volume visualization techniques, can greatly facilitate the interpretation of such images and are increasingly used in surgical planning to aid reconstruction of the skeleton after trauma or disease. Two key challenges are to accurately separate (segment) bone structures or cavities of interest from the rest of the image and to interact with the 3D data in an efficient way. This thesis presents efficient and precise interactive methods for segmenting, visualizing, and analysing 3D computed tomography (CT) images of the skeleton. The methods are validated on real CT datasets and are primarily intended to support planning and evaluation of cranio-maxillofacial (CMF) and orthopedic surgery. Two interactive methods for segmenting the orbit (eye-socket) are introduced. The first method implements a deformable model that is guided and fitted to the orbit via haptic 3D interaction, whereas the second method implements a user-steered volumetric brush that uses distance and gradient information to find exact object boundaries. The thesis also presents a semi-automatic method for measuring 3D angulation changes in wrist fractures. The fractured bone is extracted with interactive mesh segmentation, and the angulation is determined with a technique based on surface registration and RANSAC. Lastly, the thesis presents an interactive and intuitive tool for segmenting individual bones and bone fragments. This type of segmentation is essential for virtual surgery planning, but takes several hours to perform with conventional manual methods. The presented tool combines GPU-accelerated random walks segmentation with direct volume rendering and interactive 3D texture painting to enable quick marking and separation of bone structures. 
It enables the user to produce an accurate segmentation within a few minutes, thereby removing a major bottleneck in the planning procedure.
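The angulation measurement above relies on RANSAC for robust model fitting on registered 3D surfaces; the principle can be illustrated with a deliberately simplified 2D line fit (all points and parameters below are hypothetical):

```python
import random

def ransac_line(points, iters=200, tol=0.5, seed=0):
    """Minimal RANSAC: robustly fit a 2D line y = m*x + b in the presence of
    outliers by repeatedly sampling two points and keeping the candidate line
    with the most inliers (points within `tol` of the line)."""
    rng = random.Random(seed)
    best_model, best_inliers = None, -1
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # vertical sample; skip for this simple parametrization
        m = (y2 - y1) / (x2 - x1)
        b = y1 - m * x1
        inliers = sum(1 for x, y in points if abs(y - (m * x + b)) < tol)
        if inliers > best_inliers:
            best_model, best_inliers = (m, b), inliers
    return best_model

# Points on y = 2x + 1 contaminated with two gross outliers
pts = [(float(x), 2.0 * x + 1.0) for x in range(10)] + [(3.0, 40.0), (7.0, -30.0)]
m, b = ransac_line(pts)
print(round(m, 3), round(b, 3))  # recovers slope ~2 and intercept ~1
```

A least-squares fit over all points would be dragged off by the outliers; RANSAC's consensus criterion is what makes it attractive for fracture-fragment surfaces, where non-fractured regions act as inliers.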
55

Evaluation of the dentoalveolar and skeletal effects of surgically assisted rapid palatal expansion using cone-beam computed tomography

Quintin, Olivier January 2009 (has links)
Thesis digitized by the Division de la gestion de documents et des archives of the Université de Montréal.
56

Alveolar Ridge Dimension Analysis Following Socket Preservation Using Clinical Assessment and Cone Beam Computed Tomography (CBCT).

Duggan, Sayward 12 May 2001 (has links)
AIM: Extraction of a tooth can lead to alveolar ridge resorption, which can be minimized by socket preservation. The aim of this study was to analyze vertical and horizontal alveolar ridge dimensions clinically and by CBCT immediately following extraction and 3-4 months following socket preservation. METHODS: The preserved group (P) consisted of 20 patients with 1-2 non-molar teeth requiring extraction with socket preservation, while the control group (C) consisted of 5 patients requiring extraction alone. An acrylic stent was fabricated presurgically in order to measure vertical and horizontal ridge dimensions clinically and radiographically immediately following extraction and 3-4 months following socket preservation. RESULTS: Overall, P sites gained ridge height and lost minimal ridge width over 3-4 months, while C sites lost both ridge height and width. Preserved sites in which the teeth were extracted due to caries had the most significant gain in the radiographic vertical occlusal dimension (RVO). Overall, high correlations were found between the clinical and radiographic measurements at the initial surgery and at the 3-4 month follow-up. CONCLUSIONS: The preserved group had minimal ridge resorption and more bony socket fill than the non-preserved group 3-4 months following tooth extraction, especially when the tooth was extracted due to caries. Additionally, CBCT can be a useful diagnostic tool to evaluate socket preservation healing, as it compares well with clinical assessments of socket healing.
57

Segmentation of Bones in 3D CT Images

Krčah, Marcel January 2011 (has links)
Accurate and automatic segmentation techniques that do not require an explicit prior model are of high interest in the medical community. We propose a fully automatic method for segmenting the femur from 3D computed tomography scans, based on the graph-cut segmentation framework and a bone boundary enhancement filter that analyzes second-order local structures. The presented algorithm is evaluated in large-scale experiments conducted on 197 CT volumes and compared to three other automatic bone segmentation methods. Of the four tested approaches, the proposed algorithm achieved the most accurate results and segmented the femur correctly in 81% of the cases.
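The abstract reports a correct-segmentation rate but does not name the underlying overlap metric; a standard choice for scoring an automatic mask against a reference, shown here purely as an assumed illustration, is the Dice coefficient:

```python
def dice_coefficient(pred, ref):
    """Dice similarity between two binary masks represented as sets of
    voxel coordinates: 2*|A ∩ B| / (|A| + |B|), ranging over [0, 1]."""
    inter = len(pred & ref)
    return 2.0 * inter / (len(pred) + len(ref))

# Hypothetical voxel sets for an automatic and a reference femur mask
ref = {(x, y, 0) for x in range(10) for y in range(10)}      # 100 voxels
pred = {(x, y, 0) for x in range(1, 10) for y in range(10)}  # 90 voxels, all inside ref
print(round(dice_coefficient(pred, ref), 3))  # → 0.947
```

A threshold on such an overlap score (e.g., Dice above some cutoff) is one plausible way a "correctly segmented" case could be defined when tallying results over 197 volumes.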
58

Prospective Estimation of Radiation Dose and Image Quality for Optimized CT Performance

Tian, Xiaoyu January 2016 (has links)
<p>X-ray computed tomography (CT) is a non-invasive medical imaging technique that generates cross-sectional images by acquiring attenuation-based projection measurements at multiple angles. Since its first introduction in the 1970s, substantial technical improvements have led to the expanding use of CT in clinical examinations. CT has become an indispensable imaging modality for the diagnosis of a wide array of diseases in both pediatric and adult populations [1, 2]. Currently, approximately 272 million CT examinations are performed annually worldwide, with nearly 85 million of these in the United States alone [3]. Although this trend has decelerated in recent years, CT usage is still expected to increase, mainly due to advanced technologies such as multi-energy [4], photon counting [5], and cone-beam CT [6].</p><p>Despite the significant clinical benefits, concerns have been raised regarding the population-based radiation dose associated with CT examinations [7]. From 1980 to 2006, the effective dose from medical diagnostic procedures rose six-fold, with CT contributing almost half of the total dose from medical exposure [8]. For each patient, the risk associated with a single CT examination is likely to be minimal. However, the relatively large population-based radiation level has led to enormous efforts within the community to manage and optimize CT dose.</p><p>As promoted by the international campaigns Image Gently and Image Wisely, exposure to CT radiation should be appropriate and safe [9, 10]. It is thus a shared responsibility to optimize the radiation dose of CT examinations. The key to dose optimization is determining the minimum amount of radiation dose that achieves the targeted image quality [11]. Based on this principle, dose optimization would benefit significantly from effective metrics for characterizing the radiation dose and image quality of a CT exam.
Moreover, if accurate predictions of radiation dose and image quality were possible before the initiation of the exam, it would be feasible to personalize the exam by adjusting the scanning parameters to achieve a desired level of image quality. The purpose of this thesis is to design and validate models that prospectively quantify patient-specific radiation dose and task-based image quality. The dual aim of the study is to implement the theoretical models in clinical practice by developing an organ-based dose monitoring system and an image-based noise addition software for protocol optimization. </p><p>More specifically, Chapter 3 aims to develop an organ dose-prediction method for CT examinations of the body under constant tube current conditions. The study effectively modeled anatomical diversity and complexity using a large number of patient models with representative age, size, and gender distributions. The dependence of organ dose coefficients on patient size and scanner model was further evaluated. Distinct from prior work, these studies use the largest number of patient models to date, with representative age, weight percentile, and body mass index (BMI) ranges.</p><p>With effective quantification of organ dose under constant tube current conditions, Chapter 4 aims to extend the organ dose prediction system to tube current modulated (TCM) CT examinations. The prediction, applied to chest and abdominopelvic exams, was achieved by combining a convolution-based estimation technique that quantifies the radiation field, a TCM scheme that emulates modulation profiles from major CT vendors, and a library of computational phantoms with representative sizes, ages, and genders. The prospective quantification model is validated by comparing the predicted organ dose with the dose estimated from Monte Carlo simulations in which the TCM function is explicitly modeled.
</p><p>Chapter 5 aims to implement the organ dose-estimation framework in clinical practice by developing an organ dose-monitoring program based on commercial software (Dose Watch, GE Healthcare, Waukesha, WI). In the first phase of the study we focused on body CT examinations, so the patient’s major body landmark information was extracted from the patient scout image in order to match clinical patients against a computational phantom in the library. The organ dose coefficients were estimated based on CT protocol and patient size, as reported in Chapter 3. The exam CTDIvol, DLP, and TCM profiles were extracted and used to quantify the radiation field using the convolution technique proposed in Chapter 4. </p><p>With effective methods to predict and monitor organ dose, Chapter 6 aims to develop and validate improved measurement techniques for image quality assessment. Chapter 6 outlines the method developed to assess and predict quantum noise in clinical body CT images. Compared with previous phantom-based studies, this study accurately assessed the quantum noise in clinical images and further validated the correspondence between phantom-based measurements and the expected clinical image quality as a function of patient size and scanner attributes. </p><p>Chapter 7 aims to develop a practical strategy to generate hybrid CT images and assess the impact of dose reduction on diagnostic confidence for the diagnosis of acute pancreatitis.
The general strategy is (1) to simulate synthetic CT images at multiple reduced-dose levels from clinical datasets using an image-based noise addition technique; (2) to develop quantitative and observer-based methods to validate the realism of simulated low-dose images; (3) to perform multi-reader observer studies on the low-dose image series to assess the impact of dose reduction on the diagnostic confidence for multiple diagnostic tasks; and (4) to determine the dose operating point for clinical CT examinations based on the minimum diagnostic performance to achieve protocol optimization. </p><p>Chapter 8 concludes the thesis with a summary of accomplished work and a discussion about future research.</p> / Dissertation
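Organ dose frameworks of the kind described in Chapters 3-4 typically express organ dose as the scanner-reported CTDIvol times a size-dependent conversion coefficient; the exponential size dependence and every number below are illustrative assumptions, not the thesis's fitted values:

```python
import math

def organ_dose_estimate(ctdi_vol, patient_diameter_cm, alpha, beta):
    """Estimate organ dose (mGy) from the scanner-reported CTDIvol and a
    size-dependent conversion coefficient h(d) = alpha * exp(-beta * d),
    a functional form commonly used for CT dose coefficients. The values
    of alpha and beta used below are purely illustrative."""
    h = alpha * math.exp(-beta * patient_diameter_cm)
    return ctdi_vol * h

# Hypothetical coefficients: for the same CTDIvol, larger patients
# receive a lower organ dose because of greater attenuation
small = organ_dose_estimate(10.0, 20.0, alpha=3.0, beta=0.04)
large = organ_dose_estimate(10.0, 36.0, alpha=3.0, beta=0.04)
print(small > large)  # True
```

In practice the coefficients would be looked up per organ, protocol, and matched computational phantom, which is the role of the phantom library and monitoring program described above.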
59

Development of a Software for Image Quality Assessment in Computed Tomography using the Catphan500® Phantom

Vieira, Daniel Vicente 07 October 2016 (has links)
Since the invention of computed tomography (CT) in the 1970s, every decade has brought new technologies to the modality. These advances also brought the need for new and better techniques to evaluate the safety and performance of CT scanners. Today, CT quality control is largely performed manually; it is therefore slow and partly subjective. In this work, software was written in MatLab to process images of the Catphan500 CT phantom, improving the workflow and accuracy of the CT quality control program. With little user intervention, the software measures slice thickness, slice increment, and pixel size, evaluates CT-number linearity, and estimates the Modulation Transfer Function (MTF), the noise, and the Noise Power Spectrum (NPS). To validate the software, image sets of the phantom were acquired on 10 different CT scanners using 27 different protocols. Each set was analysed by the software, and the results were compared with those previously obtained by the normal quality control routine. 
For this comparison, two hypothesis tests were employed: the Student t-test (for slice thickness, slice increment, pixel size, and the coefficients of the CT-number linearity evaluation, with a chosen p-value of 0.01) and the Fisher F-test (for the noise, with a chosen p-value of 0.05). The MTF and NPS are not currently measured in the quality control routine, so there were no previous results for comparison. Instead, the NPS was fitted as a function of the MTF (using the theoretical relationship between the two functions) and the quality of the fit was evaluated with the reduced chi-square. Of the 101 t values and 25 F values calculated, 2 and 1 respectively fell outside the acceptance interval. This result is consistent with the chosen p-values, and the software results therefore agree with those of the conventional quality control routine. The NPS and MTF fits showed large uncertainties in the fitting parameters (of the same order of magnitude as the parameters themselves); however, the reduced chi-square evaluation indicates that the fits were acceptable (except for one, which showed an anomaly in the measured NPS and was disregarded). The obtained NPS and MTF therefore agree with the theoretical expectation.
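The abstract's validation statistics (Student t-test at p = 0.01, Fisher F-test at p = 0.05, and a reduced chi-square for the NPS fit) can be sketched as follows. This is an illustrative Python sketch with placeholder data, not the thesis's MatLab code or measurements.

```python
# Illustrative sketch of the validation statistics described above.
# All data values are hypothetical placeholders, not thesis measurements.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Paired slice-thickness measurements (mm): software vs. manual QC routine.
manual = np.array([4.9, 5.1, 5.0, 5.2, 4.8, 5.0])
software = manual + rng.normal(0.0, 0.05, size=manual.size)

# Student t-test on the paired values (acceptance threshold p > 0.01).
t_stat, t_p = stats.ttest_rel(software, manual)

# Fisher F-test comparing noise variances (acceptance threshold p > 0.05).
noise_sw = rng.normal(0.0, 5.0, 100)   # ROI noise, software pipeline
noise_qc = rng.normal(0.0, 5.0, 100)   # ROI noise, manual routine
f_stat = np.var(noise_sw, ddof=1) / np.var(noise_qc, ddof=1)
dof = noise_sw.size - 1
f_p = 2 * min(stats.f.cdf(f_stat, dof, dof), stats.f.sf(f_stat, dof, dof))

# Reduced chi-square of a hypothetical NPS-vs-MTF fit:
# sum of squared normalized residuals over (N - number of fit parameters).
residuals = rng.normal(0.0, 1.0, 30)          # (data - model) / sigma
chi2_red = np.sum(residuals**2) / (30 - 2)    # assuming 2 fitted parameters
```

A reduced chi-square near 1 would indicate an acceptable fit, matching the acceptance logic reported in the abstract.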

Influence of the cranial base on the transverse dimensions of the apical bases and dental arches

Almeida, Carolina Pedrinha de 08 April 2011 (has links)
This research was conducted to verify whether the morphogenetic pattern of the cranial base determines the transverse dimensions of the maxilla and mandible, and whether this influence also extends to the alveolar portion of the apical bases. In addition, the buccolingual inclinations of the posterior teeth were evaluated to verify whether these inclinations are standardized, regardless of the widths of the alveolar bases, or whether significant variations exist. The sample comprised 30 young Caucasian Brazilian adults with balanced facial profiles and neutral occlusion. It was divided into two groups according to the transverse dimension of the anterior cranial base, defined as the distance between the right and left sphenoid points: group G1 comprised individuals with values below the median, and group G2 individuals with values greater than or equal to the median. Intraclass correlation was used to evaluate the method error; mean, median, and standard deviation to describe the sample; the Student t-test to compare groups G1 and G2; the Fisher exact test to assess the association between cranial base width and gender; and the Pearson linear correlation test. The measurements showed high reproducibility. Individuals in group G2 presented greater mandibular width and greater alveolar thickness at the upper first molars and first premolars. There was no association between cranial base width and gender. 
The basal width of the mandible showed a statistically significant correlation with the width of the cranial base, as did the maxillary alveolar width in the premolar and molar region with the basal width of the maxilla. The conclusions were: the width of the cranial base correlates with the width of the mandible; the width of the maxilla varies in line with the width of the mandible; the maxillary alveolar width in the region of the first premolars and molars correlates with the basal width of the maxilla; the mandibular basal width correlates with the mandibular alveolar width in the region of the first premolars; and the buccolingual inclinations of the first molars and first premolars are constant, independent of the basal and alveolar widths of their respective bony bases.
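The correlation and association tests named in this abstract (Pearson linear correlation, Fisher exact test) can be sketched as follows. This is a hypothetical Python illustration; the widths and contingency counts are invented for the example and are not the thesis data.

```python
# Illustrative sketch of the statistics named above, with invented data.
import numpy as np
from scipy import stats

# Hypothetical cranial-base and mandibular basal widths (mm) for 10 subjects.
skull_base = np.array([68, 70, 71, 72, 73, 74, 75, 76, 77, 79], dtype=float)
mandible = 1.2 * skull_base + np.array([1, -2, 0, 1, -1, 2, 0, -1, 1, 0])

# Pearson linear correlation between the two widths.
r, r_p = stats.pearsonr(skull_base, mandible)

# Fisher exact test for association between group (G1/G2) and gender:
# 2x2 table, rows = G1/G2, columns = male/female counts (hypothetical).
table = [[8, 7], [6, 9]]
odds_ratio, fisher_p = stats.fisher_exact(table)
```

With a near-balanced table like this one, the Fisher exact p-value is large, i.e. no association is detected, which mirrors the "no association between cranial base width and gender" finding.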
