  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
41

Assessment of the healing of vascularized fibula bone graft in the reconstruction of the mandible using computed tomography

Nadershahh, Mohammed 08 April 2016 (has links)
PURPOSE: Vascularized bone grafts have become the standard for the reconstruction of large mandibular defects, defects with an accompanying soft tissue defect, and defects in previously irradiated areas. The fibula free flap is the workhorse for simultaneous bone and soft tissue reconstruction of the mandible. The aims of this study were to quantify bone formation, if any, in the graft-mandible and graft-graft gaps on computed tomography (CT) scans by developing a reliable threshold-based post-imaging processing tool; to compare the healing of the fibula to the mandible with the healing of the fibula to itself using this tool; and to investigate potential factors affecting bone formation, specifically the linear distance between the bony edges during surgery. PATIENTS AND METHODS: This is a multicenter study centered at Boston Medical Center. DICOM images were analyzed using OsiriX software (v.3.7.1, 32-bit) after blinding of identifying data. The inclusion criteria were: 1) patients who received a vascularized fibula free flap for mandible reconstruction; 2) patients with at least two postoperative CT scans taken at least one month apart; 3) a first CT within the first 3 months after surgery; 4) no signs of clinical failure of the graft or hardware failure. The reliability of the technique was tested by two independent blinded examiners, each of whom measured each scan three times. Pearson's correlation coefficient was used to assess inter-rater reliability, while the mean, standard deviation, and standard error of the mean assessed intra-rater reliability. A paired t-test was used to compare the amount of volume change over time in participants who had both graft-graft and graft-mandible gaps. Multiple linear regression was used to investigate the relation of the initial linear distance between the bony edges of the gap, age, and time interval to the percentage change in gap volume.
All statistics were computed using Microsoft Excel and SPSS. RESULTS: Twenty bony gaps from nine subjects were included: five graft-graft gaps and fifteen graft-mandible gaps. The first postoperative CT scan was done within the first three months after surgery (range = 2-77 days, mean = 22.2 days). Each subject had two CT scans, with time intervals ranging from 33 to 390 days (mean = 191.1 days). Subjects' ages ranged from 30 to 72 years (mean = 56.1 years). Twelve bony gaps were used to assess inter-rater and intra-rater reliability. Pearson's correlation coefficient for inter-rater reliability was 0.94. The average standard deviation across repeated measurements was 0.03 and the average standard error of the mean was 0.003. A two-tailed paired t-test comparing the interval change in volume of graft-graft gaps to that of graft-mandible gaps gave p = 0.304. We found a significant negative correlation between absolute volume change and gap distance in mm (Pearson r = -0.476, p = 0.017); 22.7% of the variability in volume change was explained by the initial linear distance between the bony edges of the gap in millimeters. CONCLUSION: Small bony gaps between the fibula graft and the mandible after mandibular reconstruction can be reliably assessed. The healing of the fibula to itself was not significantly different from the healing of the fibula to the mandible in the same subject. The initial linear distance between the bony edges of a gap is inversely related to subsequent bone formation; adapting the bony segments as closely as possible is therefore recommended to promote bone formation.
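The threshold-based volume measurement this study describes can be sketched in Python; the 250 HU bone threshold and voxel size below are illustrative assumptions, not the study's actual parameters:

```python
import numpy as np

def gap_bone_volume(ct_hu, roi_mask, threshold_hu=250.0, voxel_mm3=0.5 ** 3):
    """Volume (mm^3) of voxels inside the gap ROI whose attenuation exceeds
    a bone threshold -- a simple stand-in for a threshold-based tool."""
    bone = (ct_hu >= threshold_hu) & roi_mask
    return bone.sum() * voxel_mm3

def percent_volume_change(vol_t0, vol_t1):
    """Interval change in gap bone volume, as a percentage of baseline."""
    return 100.0 * (vol_t1 - vol_t0) / vol_t0
```

Comparing two time points then reduces to running `gap_bone_volume` on each scan's registered ROI and taking the percentage change.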
42

Improved detection and characterization of obscured central gland tumors of the prostate: texture analysis of non-contrast and contrast-enhanced MR images for differentiation of benign prostate hyperplasia (BPH) nodules and cancer

Banaja, Duaa 03 November 2016 (has links)
OBJECTIVE: The purpose of this study was to assess the value of texture analysis (TA) for prostate cancer (PCa) detection on T2-weighted images (T2WI) and dynamic contrast-enhanced (DCE) images by differentiating PCa from benign prostate hyperplasia (BPH). MATERIALS & METHODS: This study used 10 retrospective MRI data sets acquired from men with confirmed PCa. The prostate region of interest (ROI) was delineated by an expert using an automated prostate capsule segmentation scheme. A statistical significance test served as the feature selection scheme for optimal differentiation of PCa from BPH on MR images. In pre-processing, T2WI underwent bias correction and all image intensities were standardized to a representative template; DCE images underwent bias correction and were registered to time point 1 for each patient. Following pre-processing, texture features were extracted from the ROI and analyzed: intensity mean and standard deviation, Sobel (edge detection), Haralick features, and Gabor features. RESULTS: On T2WI, statistically significant differences were observed in Haralick features. On DCE images, statistically significant differences were observed in mean intensity, Sobel, Gabor, and Haralick features. CONCLUSION: BPH is better differentiated on DCE images than on T2WI. The statistically significant features may be combined to build a BPH-versus-cancer detection system in the future.
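As an illustration of the Haralick-style features used in such studies, a grey-level co-occurrence matrix (GLCM) and two derived statistics can be computed as below. This is a hand-rolled sketch, not the study's actual pipeline; input images are assumed to be pre-quantized to integer grey levels:

```python
import numpy as np

def glcm(q, levels, dx=1, dy=0):
    """Normalized grey-level co-occurrence matrix for one pixel offset.
    q is an integer image with values in [0, levels)."""
    g = np.zeros((levels, levels))
    rows, cols = q.shape
    for i in range(rows - dy):
        for j in range(cols - dx):
            g[q[i, j], q[i + dy, j + dx]] += 1
    return g / g.sum()

def haralick_contrast(g):
    """Haralick contrast: expected squared grey-level difference."""
    i, j = np.indices(g.shape)
    return ((i - j) ** 2 * g).sum()

def haralick_homogeneity(g):
    """Haralick homogeneity: weight mass near the GLCM diagonal."""
    i, j = np.indices(g.shape)
    return (g / (1.0 + (i - j) ** 2)).sum()
```

A flat region yields zero contrast and homogeneity of one; alternating stripes maximize contrast for a horizontal offset.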
43

MRI overview for fat quantification in non-alcoholic fatty liver disease in the clinical and research settings

Kavanaugh, Ryan 13 July 2017 (has links)
The general purpose of this master’s thesis is to describe the MRI techniques used in scanning and post-processing to quantify liver fat percentages for diagnosis and research. At the outset we look at epidemiological data regarding nonalcoholic fatty liver disease, often referred to as hepatic steatosis. Given the prevalence of this disease, it is worthwhile to fully understand non-invasive (MRI) analysis and its use in the clinical and research settings. Following an introductory section on the basics of magnetic resonance imaging, we take a more in-depth look at current methods for liver fat quantification. Because of the massive worldwide population suffering from this disease, it is prudent to analyze current methods, as well as the implications such research has, and will have, on pharmaceutical approaches to treating it. The thesis aims to elucidate the MRI techniques used for liver fat quantification and to provide a comprehensive view of how they are applied for diagnosis in the clinical setting and in longitudinal research studies that measure liver fat levels and their response to various treatment approaches.
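As one concrete example of the fat-quantification techniques such a thesis surveys, a two-point Dixon fat fraction can be computed from in-phase and opposed-phase magnitude images. This simple sketch ignores T2* decay and field inhomogeneity, which proton density fat fraction (PDFF) methods correct for:

```python
import numpy as np

def dixon_fat_fraction(in_phase, out_phase):
    """Two-point Dixon: water and fat signals from in-phase and
    opposed-phase magnitude images, then fat fraction in percent."""
    water = (in_phase + out_phase) / 2.0   # water and fat add in phase
    fat = (in_phase - out_phase) / 2.0     # fat opposes water out of phase
    return 100.0 * fat / (water + fat)     # equals 100 * fat / in_phase
```

For example, an in-phase voxel of 100 with an opposed-phase value of 60 corresponds to a 20% fat fraction.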
44

Improved Spatial Coverage of High-Temporal Resolution Dynamic Susceptibility Contrast-MRI Through 3D Spiral-Based Acquisition and Parallel Imaging

January 2017 (has links)
abstract: Dynamic susceptibility contrast MRI (DSC-MRI) is a powerful tool for quantitatively measuring parameters related to blood flow and volume in the brain. The technique is a “bolus-tracking” method and relies on very fast scanning to accurately measure the flow of contrast agent into and out of a region of interest. The need for high temporal resolution to capture contrast agent dynamics limits the spatial coverage of perfusion parameter maps, which in turn limits the utility of DSC perfusion studies in pathologies involving the entire brain. Typical clinical DSC perfusion studies acquire 10-15 slices, generally centered on a known lesion or pathology. The methods developed in this work improve the spatial coverage of whole-brain DSC-MRI by combining a highly efficient 3D spiral k-space trajectory with Generalized Autocalibrating Partially Parallel Acquisitions (GRAPPA) parallel imaging without sacrificing temporal resolution. The proposed method acquires 30 slices with a temporal resolution of under 1 second, covering the entire cerebrum at an isotropic spatial resolution of 3 mm. Additionally, because the acquisition collects two echoes, it allows correction of the T1-enhancing leakage effects that confound DSC perfusion measurements. The proposed method yields high-quality perfusion parameter maps across a larger volume than current clinical standards provide, improving the diagnostic utility of perfusion MRI and ultimately patient care. / Dissertation/Thesis / Doctoral Dissertation Bioengineering 2017
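The bolus-tracking principle can be sketched as follows: the signal drop during contrast passage is converted to a concentration-time curve, whose area gives relative cerebral blood volume. This is a generic illustration without the two-echo leakage correction the dissertation describes:

```python
import numpy as np

def concentration(signal, s0, te):
    """Concentration-time curve (arbitrary units) from the DSC signal drop,
    assuming concentration is proportional to the change in R2*."""
    return -np.log(np.asarray(signal, float) / s0) / te

def relative_cbv(signal, s0, te, dt):
    """Relative CBV: trapezoidal area under the concentration curve
    (no leakage correction in this sketch)."""
    c = concentration(signal, s0, te)
    return dt * (c[0] / 2 + c[1:-1].sum() + c[-1] / 2)
```

In practice the curve is integrated only over the first pass of the bolus, and an arterial input function is used for absolute quantification.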
45

fMRI Design under Autoregressive Model with One Type of Stimulus

January 2017 (has links)
abstract: Functional magnetic resonance imaging (fMRI) is used to study brain activity elicited by stimuli presented to subjects in a scanner. It is important to conduct statistical inference on the time series fMRI data obtained, and to select optimal designs for practical experiments. Design selection under autoregressive models has not been thoroughly discussed before. This paper derives general information matrices for orthogonal designs under an autoregressive model with an arbitrary number of correlation coefficients. We further provide the minimum trace of orthogonal circulant designs under the AR(1) model, which is used as a criterion to compare practical designs such as M-sequence designs and circulant (almost) orthogonal array designs. We also explore optimal designs under the AR(2) model. In practice there can be more than one type of stimulus, but in this paper we consider only the simplest situation, with a single stimulus type. / Dissertation/Thesis / Masters Thesis Statistics 2017
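The trace criterion under AR(1) errors can be illustrated with a small numeric sketch; this omits the nuisance-parameter adjustment and design-specific structure a full treatment would include:

```python
import numpy as np

def ar1_covariance(n, rho):
    """Covariance matrix of an AR(1) error process: Cov[i, j] = rho^|i-j|."""
    idx = np.arange(n)
    return rho ** np.abs(idx[:, None] - idx[None, :])

def information_trace(design, rho):
    """Trace of the information matrix X' V^{-1} X for a single-stimulus
    design vector under AR(1) errors -- the kind of criterion used to
    compare candidate designs."""
    x = np.asarray(design, float).reshape(-1, 1)
    v = ar1_covariance(len(design), rho)
    return float(np.trace(x.T @ np.linalg.solve(v, x)))
```

With rho = 0 the errors are white and the trace reduces to the number of stimulus presentations; as |rho| grows, designs with different stimulus spacings score differently under this criterion.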
46

Estimation of 3D human motion kinematics model from multiple cameras

Zhang, Chunxiao January 2009 (has links)
Estimation of articulated human motion from video sequences acquired by multiple synchronised cameras is an active and challenging research area, mainly because of the high-dimensional non-linear models needed to describe human motion, cluttered data, and occlusions in the captured images. Although many diverse techniques have been proposed to solve this problem, none of the existing solutions is fully satisfactory. In this thesis, upper body and full body motion tracking based on the annealed particle filter (APF) approach are presented. To implement a body motion tracking algorithm successfully, the first requirement is to prepare and pre-process the data. The work performed in this area includes calibration of the multiple cameras, colour image segmentation to extract body silhouettes from the cluttered background, and visual hull reconstruction to provide voxels representing a human volume in 3D space. The second requirement is to build the models. Two sets of models are proposed in this thesis: the first, for upper body tracking, contains point models and two-segment articulated arm models; the second, for full body tracking, contains five articulated chains forming a full human model. The final requirement is to design a measurement method for aligning the models to the data. Two novel measurement methods are proposed for motion tracking: one is based on a combination of penalties tailored to each body part, derived from the percentage of 3D-to-2D projected body points falling inside and outside the body silhouette; the other is based on the symmetry of the intensity profile obtained where the body silhouette is bisected by the 3D-to-2D projection of the estimated skeletal model. Various evaluations were carried out to demonstrate the effectiveness of the implemented algorithms and the performance of the proposed methods for upper body and full body motion tracking. These include accuracy analysis of the camera calibration and image segmentation; the accuracy and speed of the APF applied to the articulated arm model when tracking infra-red marker-based human motion data; and visual and quantitative assessments of the final results obtained from the proposed upper body and full body motion tracking.
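A one-dimensional sketch of the annealing idea behind the APF (weighting functions sharpened layer by layer, resampling, and shrinking diffusion noise) might look like this; the weighting function, layer schedule, and noise decay are illustrative assumptions, not the thesis's actual settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def annealed_particle_filter_step(particles, weight_fn, betas, noise=0.1):
    """One APF time step over a 1D state. Each annealing layer raises the
    weighting function to an increasing power beta (sharpening it),
    resamples the particles, and diffuses them with shrinking noise."""
    for layer, beta in enumerate(betas):
        w = weight_fn(particles) ** beta
        w = w / w.sum()
        idx = rng.choice(len(particles), size=len(particles), p=w)
        spread = noise * (0.5 ** layer)  # reduce diffusion each layer
        particles = particles[idx] + rng.normal(0.0, spread, len(particles))
    return particles
```

In a real tracker the state is the high-dimensional pose vector and the weighting function is the silhouette-based measurement; the annealing layers let the particle set escape local maxima before concentrating on the best pose.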
47

Machine Learning in Neuroimaging

Punugu, Venkatapavani Pallavi 08 August 2017 (has links)
The application of machine learning algorithms to analyze and determine disease-related patterns in neuroimaging has emerged as an area of extreme interest in Computer-Aided Diagnosis (CAD). This study is a small step towards categorizing Alzheimer's disease, neurodegenerative diseases, psychiatric diseases, and cerebrovascular small vessel diseases using CAD. In this study, SPECT neuroimages are pre-processed using powerful data reduction techniques: Singular Value Decomposition (SVD), Independent Component Analysis (ICA), and Automated Anatomical Labeling (AAL). Each of the pre-processing methods is paired with three machine learning algorithms, namely Artificial Neural Networks (ANNs), Support Vector Machines (SVMs), and k-Nearest Neighbors (k-NN), to recognize disease patterns and classify the diseases. While neurodegenerative and psychiatric diseases overlap with a mix of conditions and yielded fairly moderate classification, classification between Alzheimer's disease and cerebrovascular small vessel diseases gave good results, with an accuracy of up to 73.7%.
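The SVD-based data reduction followed by classification can be sketched as follows. This is a generic illustration using a nearest-centroid rule in the reduced space, not the study's actual ANN/SVM/k-NN setup:

```python
import numpy as np

def svd_reduce(data, k):
    """Project samples (rows) onto the top-k right singular vectors --
    the kind of data reduction applied to images before classification."""
    mean = data.mean(axis=0)
    _, _, vt = np.linalg.svd(data - mean, full_matrices=False)
    return (data - mean) @ vt[:k].T, mean, vt[:k]

def nearest_centroid_predict(train_x, train_y, test_x):
    """Classify each test sample by its nearest class centroid."""
    labels = np.unique(train_y)
    centroids = np.stack([train_x[train_y == c].mean(axis=0) for c in labels])
    d = np.linalg.norm(test_x[:, None, :] - centroids[None, :, :], axis=2)
    return labels[d.argmin(axis=1)]
```

Reducing thousands of voxels to a handful of components before classification is what makes small-sample neuroimaging studies like this one tractable.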
48

Predicting sleep stages with machine learning and wearable byteflies sensor dots: a pilot study

Carroll, James Peter 20 February 2021 (has links)
The conventional method for quantifying sleep uses polysomnography (PSG) and a trained human sleep scorer who observes and evaluates the output in 30-second epochs. A PSG device can be rather invasive to one’s regular sleep and can therefore produce irregular sleep patterns; furthermore, sleep scoring by a trained expert is time consuming and subject to inter- and intra-rater variability. Nevertheless, human sleep scoring with PSG remains the gold standard for measuring and classifying sleep in the diagnosis of sleep disorders. The present pilot study explores the possibility of using a wearable device, the ByteFlies Sensor Dot, to measure signal activity from an individual during a night’s sleep. This validation study focuses on capturing the alpha frequency band through a phenomenon known as “the Berger effect.” Participants will be asked to open and close their eyes while connected to both the gold-standard PSG device and the exploratory ByteFlies Sensor Dot. The resulting alpha signals will be identified with a machine learning algorithm for cross-comparison and analysis. In conclusion, the validation study will discuss methods to improve EEG measurement and sleep stage scoring with the ByteFlies Sensor Dot for sleep monitoring and sleep disorder diagnosis.
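Detecting the Berger effect amounts to comparing alpha-band (8-12 Hz) power between eyes-open and eyes-closed conditions. A minimal periodogram-based sketch follows; the threshold rule and baseline are illustrative assumptions, not the study's protocol:

```python
import numpy as np

def band_power(eeg, fs, lo=8.0, hi=12.0):
    """Mean power spectral density in a frequency band (alpha by default),
    estimated with a plain FFT periodogram."""
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(eeg)) ** 2 / len(eeg)
    band = (freqs >= lo) & (freqs <= hi)
    return psd[band].mean()

def eyes_closed(eeg, fs, threshold_ratio=3.0, reference_power=1.0):
    """Crude Berger-effect detector: flag an epoch as eyes-closed when
    its alpha power exceeds a baseline by an assumed ratio."""
    return band_power(eeg, fs) > threshold_ratio * reference_power
```

A real pipeline would use Welch averaging, artifact rejection, and a learned classifier rather than a fixed ratio, but the alpha-power contrast is the underlying signal in both cases.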
49

The image processing for the target centre detection in digital image

Xue, R G January 1992 (has links)
This thesis comprises five chapters. Chapter one describes basic principles of the digital image, digital image construction, and the present status of the digital photogrammetry system PHOENICS (PHOtogrammetric ENgineering and Industrial digital Camera System), as developed by H. Rüther (1989). The analysis of target shapes in the digital image is presented in chapter two. Chapter three presents the algorithms used to detect and locate targets in the digital image: the least squares adjustment technique, the moment method, and moment-preserving edge detection, as well as test methods for evaluating the various algorithms. The novel RG method is presented in chapter four. Chapter five introduces the theory of some image processing methods.
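The moment method mentioned for chapter three locates a target centre from the zeroth and first image moments; a minimal intensity-weighted centroid sketch:

```python
import numpy as np

def moment_centre(img):
    """Target centre from image moments: the intensity-weighted centroid
    (m10/m00, m01/m00) of the grey values in the target window."""
    img = np.asarray(img, float)
    m00 = img.sum()                # zeroth moment: total intensity
    ys, xs = np.indices(img.shape)
    return (ys * img).sum() / m00, (xs * img).sum() / m00
```

For well-exposed circular targets this gives sub-pixel centre estimates directly from the grey values, without an explicit edge-detection step.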
50

Studying the extremely preterm brain with multiparametric quantitative MRI: algorithms for automated analyses of large databases

McNaughton, Ryan Christopher 23 May 2022 (has links)
With the advent of large-scale, multi-site imaging studies, there is a growing need for magnetic resonance imaging (MRI) pulse sequences and matching computer algorithms that generate accurate and harmonious quantitative information descriptive of the examined population. Algorithms for quantitative MRI (qMRI) are of particular importance, as rich information related to tissue structure and composition can be derived without ionizing radiation. In this thesis work, a multiparametric (MP) qMRI image processing pipeline is applied to the brains of adolescents born extremely preterm (EP), who experience a high incidence of neurologic disability and of white and gray matter (WM, GM) injury. Harmonized MP-qMRI parameters served as biological markers of neurodevelopment and were implemented in computational frameworks for tissue segmentation and for characterization of the macromolecular and metal components of the neuroarchitecture. This work first describes the triple turbo spin echo (Triple-TSE) MRI pulse sequence and multiple aspects of a highly automated MP-qMRI image processing pipeline. The primary MP-qMRI parameters of proton density and longitudinal and transverse relaxation times were calculated according to the Bloch equation model of the Triple-TSE and harmonized across multiple MRI scanners. Next, the organization of the WM microstructure was studied with synthetic MRI and spatial entropy (SE) mapping. The distribution of these parameters and their associations with SE density distinguished atypically from neurotypically developing adolescents. In the second part, this work describes a deep GM segmentation method. A two-channel dual-clustering algorithm was applied in parallel with connected component theory to separate cortical and deep gray matter. For every voxel, the similarity of the three MP-qMRI parameters to those of a predefined imaging cluster was interrogated.
In this way, the deep GM can be isolated from the in toto brain without additional pulse sequences for structural MRI. In the final part of this work, an MR relaxation theoretical framework was constructed to derive the distribution of macromolecules and metal deposits in the brain. These microstructural components follow interrelated pathways and play roles in neural signal transmission and normal brain function. Using a fast exchange relaxation model and synthetic MRI, linear associations between the concentrations of these components were identified in deep GM and WM structures. / 2023-05-23T00:00:00Z
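The relaxometry underlying multi-echo sequences such as the Triple-TSE can be illustrated with a voxel-wise mono-exponential fit; this generic log-linear sketch is not the pipeline's actual implementation:

```python
import numpy as np

def fit_t2(echo_times, signals):
    """Voxel-wise T2 and S0 from a multi-echo spin-echo series, via a
    log-linear least-squares fit of S(TE) = S0 * exp(-TE / T2)."""
    te = np.asarray(echo_times, float)
    logs = np.log(np.asarray(signals, float))
    slope, intercept = np.polyfit(te, logs, 1)  # log S = log S0 - TE / T2
    return -1.0 / slope, np.exp(intercept)
```

Repeating such fits across echoes and scanners, then normalizing the resulting parameter maps, is the kind of harmonization step multi-site qMRI studies depend on.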
