21

Characterization of Computed Tomography Radiomic Features using Texture Phantoms

Shafiq ul Hassan, Muhammad 05 April 2018 (has links)
Radiomics treats images as quantitative data and promises to improve cancer prediction in radiology and therapy response assessment in radiation oncology. However, a number of fundamental problems must be solved before radiomic features can be applied in the clinic. The first basic step in computed tomography (CT) radiomic analysis is the acquisition of images using selectable image acquisition and reconstruction parameters. Radiomic features have shown large variability due to variation of these parameters, so it is important to develop methods that address the variability attributable to each CT parameter. To this end, texture phantoms provide a stable geometry and stable Hounsfield Units (HU) for characterizing radiomic features with respect to image acquisition and reconstruction parameters. In this project, normalization methods were developed to address these variability issues in CT radiomics using texture phantoms. In the first part of the project, variability in radiomic features due to voxel size variation was addressed. A voxel size resampling method is presented as a preprocessing step for imaging data acquired with variable voxel sizes. After resampling, variability due to variable voxel size in 42 radiomic features was reduced significantly. Voxel size normalization is presented to address the intrinsic dependence of some key radiomic features on voxel size. After normalization, 10 features became robust as a function of voxel size. Some of these features had been identified as predictive biomarkers in diagnostic imaging or as useful for response assessment in radiation therapy, yet they were found to be intrinsically dependent on voxel size (which also implies dependence on lesion volume). Normalization factors were also developed to address the intrinsic dependence of texture features on the number of gray levels. After normalization, the variability due to gray levels in 17 texture features was reduced significantly. In the second part of the project, the voxel size and gray level (GL) normalizations developed from phantom studies were tested on actual lung cancer tumors. Eighteen patients with non-small cell lung cancer of varying tumor volumes were studied and compared with phantom scans acquired on 8 different CT scanners. Eight out of 10 features showed high (Rs > 0.9) and low (Rs < 0.5) Spearman rank correlations with voxel size before and after normalization, respectively. Likewise, texture features were unstable (ICC < 0.6) before and highly stable (ICC > 0.9) after gray level normalization. This work showed that voxel size and GL normalizations derived from a texture phantom also apply to lung cancer tumors, and it highlights the importance and utility of investigating the robustness of CT radiomic features using CT texture phantoms. Another contribution of this work is the development of correction factors to address variability in radiomic features due to reconstruction kernels. Reconstruction kernels and tube current contribute to noise texture in CT, and most texture features were sensitive to the correlated noise texture introduced by reconstruction kernels. In this work, noise power spectra (NPS) were measured on 5 CT scanners using a standard ACR phantom to quantify the correlated noise texture. The variability in texture features due to different kernels was reduced by applying the NPS peak frequency and the region of interest (ROI) maximum intensity as correction factors. Most texture features were radiation dose independent but strongly kernel dependent, as demonstrated by a significant shift in NPS peak frequency among kernels. Percent improvements in the robustness of 19 features ranged from 30% to 78% after corrections. In conclusion, most texture features are sensitive to imaging parameters such as reconstruction kernels, reconstruction field of view (FOV), and slice thickness. All reconstruction parameters contribute to the inherent noise in CT images. The problem can be partly addressed by quantifying noise texture in CT radiomics using a texture phantom and an ACR phantom. Texture phantoms should be a prerequisite to patient studies, as they provide a stable geometry and HU distribution for characterizing radiomic features and provide ground truths for multi-institutional validation studies.
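As a rough sketch of the kind of preprocessing described above, the Python snippet below resamples a CT volume to a common voxel size and re-quantizes Hounsfield Units to a fixed number of gray levels before texture-feature extraction. The 1 mm isotropic target, the 64-level quantization, and the helper names are illustrative assumptions rather than the dissertation's actual normalization factors.

# Illustrative sketch (not from the dissertation): resample a CT volume to a
# common voxel size and re-quantize HU values to a fixed number of gray levels.
import numpy as np
from scipy.ndimage import zoom

def resample_to_isotropic(volume, spacing, target=(1.0, 1.0, 1.0)):
    """Resample a 3D volume (z, y, x) from `spacing` (mm) to `target` (mm)."""
    factors = [s / t for s, t in zip(spacing, target)]
    return zoom(volume, factors, order=1)  # linear interpolation

def quantize_gray_levels(volume, n_levels=64):
    """Map HU values to integer bins 0..n_levels-1 for texture matrices."""
    lo, hi = volume.min(), volume.max()
    bins = np.floor((volume - lo) / (hi - lo + 1e-9) * n_levels).astype(int)
    return np.clip(bins, 0, n_levels - 1)

# Synthetic data standing in for a CT scan with 3 mm slices.
ct = np.random.normal(0.0, 50.0, size=(40, 128, 128))
iso = resample_to_isotropic(ct, spacing=(3.0, 0.98, 0.98))
levels = quantize_gray_levels(iso, n_levels=64)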
22

RECOVERING LOCAL NEURAL TRACT DIRECTIONS AND RECONSTRUCTING NEURAL PATHWAYS IN HIGH ANGULAR RESOLUTION DIFFUSION MRI

Cao, Ning 01 January 2013 (has links)
Magnetic resonance imaging (MRI) is an imaging technique for visualizing internal structures of the body. Diffusion MRI is an MRI modality that measures the overall diffusion effect of molecules in vivo and non-invasively. Diffusion tensor imaging (DTI) is an extension of diffusion MRI. The major application of DTI is to measure the location, orientation, and anisotropy of fiber tracts in white matter. It enables non-invasive investigation of the major neural pathways of the human brain, namely tractography. Because the spatial resolution of MRI is limited, multiple fiber bundles may lie within the same voxel, yet the diffusion tensor model is only capable of resolving a single direction. The goal of this dissertation is to investigate complex anatomical structures using high angular resolution diffusion imaging (HARDI) data without prior assumptions on the parameters. The dissertation starts with a study of the noise distribution of truncated MRI data. Noise is often not an issue in the diffusion tensor model; however, in HARDI studies, with many more gradient directions being scanned, the number of repetitions of each gradient direction is often kept small to restrict total acquisition time, which lowers the signal-to-noise ratio (SNR). Fitting complex diffusion models to data with reduced SNR is a major interest of this study. We focus on fitting diffusion models to data using maximum likelihood estimation (MLE), in which the noise distribution is used to construct the likelihood that is maximized. In addition to the estimated parameters, we use likelihood values for model selection when multiple models are fit to the same data. The advantage of carrying out model selection after fitting the models is that both the quality of the data and the quality of the fitting results are taken into account. For tractography, we extend the streamline method by using the covariance of the estimated parameters to generate probabilistic tracts according to the uncertainty of local tract orientations.
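As a simplified illustration of the maximum-likelihood fitting described above, the sketch below fits a mono-exponential diffusion signal model to magnitude data under a Rician noise model and returns the log-likelihood that could feed a model-selection step. The mono-exponential model, the fixed noise sigma, and the b-values are illustrative assumptions; the dissertation fits richer HARDI models.

# Illustrative sketch (not the dissertation's code): maximum-likelihood fit of a
# mono-exponential diffusion signal under a Rician noise model.
import numpy as np
from scipy.optimize import minimize
from scipy.special import i0e

def rician_loglike(measured, predicted, sigma):
    """Log-likelihood of Rician-distributed magnitude data."""
    s2 = sigma ** 2
    z = measured * predicted / s2
    log_i0 = np.log(i0e(z)) + z  # stable log of the modified Bessel function I0
    return np.sum(np.log(measured / s2)
                  - (measured ** 2 + predicted ** 2) / (2.0 * s2) + log_i0)

def fit_monoexponential(signal, bvals, sigma):
    """Fit S(b) = S0 * exp(-b * D) by maximizing the Rician likelihood."""
    def neg_loglike(params):
        s0, d = params
        return -rician_loglike(signal, s0 * np.exp(-bvals * d), sigma)
    result = minimize(neg_loglike, x0=[signal.max(), 1e-3], method="Nelder-Mead")
    return result.x, -result.fun  # parameters and log-likelihood for model selection

# Synthetic magnitude signal at a few b-values (assumed, for illustration only).
rng = np.random.default_rng(0)
bvals = np.array([0.0, 500.0, 1000.0, 2000.0, 3000.0])
clean = 1000.0 * np.exp(-bvals * 0.8e-3)
noisy = np.abs(clean + rng.normal(0, 30, 5) + 1j * rng.normal(0, 30, 5))
params, loglike = fit_monoexponential(noisy, bvals, sigma=30.0)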
23

VALIDATION, OPTIMIZATION, AND IMAGE PROCESSING OF SPIRAL CINE DENSE MAGNETIC RESONANCE IMAGING FOR THE QUANTIFICATION OF LEFT AND RIGHT VENTRICULAR MECHANICS

Wehner, Gregory J. 01 January 2017 (has links)
Recent evidence suggests that cardiac mechanics (e.g., cardiac strains) are better measures of heart function than common clinical metrics like ejection fraction. However, commonly used parameters of cardiac mechanics remain limited to just a few measurements averaged over the whole left ventricle. We hypothesized that recent advances in cardiac magnetic resonance imaging (MRI) could be extended to provide measures of cardiac mechanics throughout the left and right ventricles (LV and RV, respectively). Displacement Encoding with Stimulated Echoes (DENSE) is a cardiac MRI technique that has been validated for measuring LV mechanics at a magnetic field strength of 1.5 T but not at higher field strengths such as 3.0 T. However, it is desirable to perform DENSE at 3.0 T, which would yield a better signal-to-noise ratio for imaging the thin RV wall. Results in Chapter 2 support the hypothesis that DENSE has similar accuracy at 1.5 and 3.0 T. Compared to standard, clinical cardiac MRI, DENSE requires more expertise to perform and is not as widely used. If accurate mechanics could be measured from standard MRI, the need for DENSE would be reduced. However, results from Chapter 3 support the hypothesis that cardiac mechanics measured from standard MRI do not agree with, and thus cannot be used in place of, measurements from DENSE. Imaging the thin RV wall with its complex contraction pattern requires both three-dimensional (3D) measures of myocardial motion and higher-resolution imaging. Results from Chapter 4 support the hypothesis that a lower displacement-encoding frequency can be used to allow easier processing of 3D DENSE images. Results from Chapter 5 support the hypothesis that images with higher resolution (decreased blurring) can be achieved by using more spiral interleaves during the DENSE image acquisition. Finally, processing DENSE images to yield measures of cardiac mechanics in the LV is relatively simple due to the LV’s mostly cylindrical geometry. Results from Chapter 6 support the hypothesis that a local coordinate system can be adapted to the geometry of the RV to quantify mechanics in a manner equivalent to the LV. In summary, cardiac mechanics can now be quantified throughout the left and right ventricles using DENSE cardiac MRI.
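As a rough illustration of how displacement-encoded images translate into mechanics, the sketch below computes 2D Green-Lagrange strain from a displacement field by estimating the deformation gradient with finite differences. The synthetic displacement field, the pixel spacing, and the function name are illustrative assumptions; the dissertation's actual DENSE processing pipeline is more involved.

# Illustrative sketch (not the dissertation's pipeline): Green-Lagrange strain
# from a 2D displacement field via a finite-difference deformation gradient.
import numpy as np

def green_lagrange_strain(ux, uy, spacing=1.0):
    """Return (Exx, Eyy, Exy) from displacement components on a regular grid."""
    dux_dy, dux_dx = np.gradient(ux, spacing)
    duy_dy, duy_dx = np.gradient(uy, spacing)
    fxx, fxy = 1.0 + dux_dx, dux_dy   # deformation gradient F = I + grad(u)
    fyx, fyy = duy_dx, 1.0 + duy_dy
    exx = 0.5 * (fxx ** 2 + fyx ** 2 - 1.0)   # E = 0.5 * (F^T F - I)
    eyy = 0.5 * (fxy ** 2 + fyy ** 2 - 1.0)
    exy = 0.5 * (fxx * fxy + fyx * fyy)
    return exx, eyy, exy

# Synthetic check: a uniform 10% stretch in x gives Exx ~ 0.105 and Eyy ~ 0.
y, x = np.mgrid[0:32, 0:32].astype(float)
exx, eyy, exy = green_lagrange_strain(0.1 * x, np.zeros_like(y), spacing=1.0)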
24

Optical Coherence Photoacoustic Microscopy (OC-PAM) for Multimodal Imaging

Liu, Xiaojing 23 November 2016 (has links)
Optical coherence tomography (OCT) and photoacoustic microscopy (PAM) are two noninvasive, high-resolution, three-dimensional biomedical imaging modalities based on different contrast mechanisms. OCT detects the light backscattered from a biological sample, in either the time or the spectral domain, using an interferometer to form an image. PAM is sensitive to optical absorption, detecting light-induced acoustic waves to form an image. Because of their complementary contrast mechanisms, OCT and PAM are well suited to being combined for multimodal imaging. In this dissertation, an optical coherence photoacoustic microscopy (OC-PAM) system was developed for in vivo multimodal retinal imaging with a pulsed broadband near-infrared (NIR) light source. To test the capabilities of the system for multimodal ophthalmic imaging, the retinas of pigmented rats were imaged. The OCT images showed the retinal structures with quality similar to conventional OCT, while the PAM images revealed the distribution of melanin in the retina, since the NIR PAM signals are generated mainly by melanin in the posterior segment of the eye. With a pulsed broadband light source, OCT image quality depends strongly on the pulse-to-pulse stability of the source when no averaging is applied. In addition, laser safety is always a concern for in vivo applications, especially for eye imaging with a pulsed light source. Therefore, a continuous wave (CW) light source is desirable for OC-PAM applications. An OC-PAM system using an intensity-modulated CW superluminescent diode was then developed. The system was tested by multimodal imaging of the vasculature of a mouse ear in vivo, using gold nanorods (GNRs) as a contrast agent for PAM, as well as of excised porcine eyes ex vivo. Since the quantitative optical-property information extracted from the proposed NIR OC-PAM system could provide a unique means of specifically assessing the presence of melanin and lipofuscin, a phantom study was conducted in which the relationship between OCT and PAM image intensities was interpreted as representing the relationship between optical scattering and optical absorption. These results provide strong evidence for the practical application of the proposed NIR OC-PAM system.
25

Intraoperative Guidance for Pediatric Brain Surgery based on Optical Techniques

Song, Yinchen 30 June 2015 (has links)
For most patients with brain tumors and/or epilepsy, surgical resection of brain lesions, when applicable, remains one of the optimal treatment options. The success of the surgery hinges on accurate demarcation of neoplastic and epileptogenic brain tissue. The primary goal of this PhD dissertation is to demonstrate the feasibility of using various optical techniques, in conjunction with sophisticated signal processing algorithms, to differentiate brain tumor and epileptogenic cortex from normal brain tissue intraoperatively. In this dissertation, a new tissue differentiation algorithm was developed to detect brain tumors in vivo using a probe-based diffuse reflectance spectroscopy system. The system and the algorithm were validated experimentally on 20 pediatric patients undergoing brain tumor surgery at Nicklaus Children’s Hospital. Based on three indicative parameters reflecting hemodynamic and structural characteristics, the new algorithm was able to differentiate brain tumors from normal brain with very high accuracy. The main drawbacks of the probe-based system were its high susceptibility to artifacts induced by hand motion and its interference with the surgical procedure. Therefore, a new optical measurement scheme and a companion spectral interpretation algorithm were devised. The new measurement scheme was evaluated both theoretically, with Monte Carlo simulation, and experimentally, using optical phantoms, confirming that the system is capable of consistently acquiring total diffuse reflectance spectra and accurately converting them to the ratio of the reduced scattering coefficient to the absorption coefficient (µs’(λ)/µa(λ)). The spectral interpretation algorithm for µs’(λ)/µa(λ) was also validated using Monte Carlo simulation. In addition, it was demonstrated that the new measurement scheme and the spectral interpretation algorithm together are capable of detecting significant hemodynamic and scattering variations in the somatosensory cortex of Wistar rats under forepaw stimulation. Finally, the feasibility of using dynamic intrinsic optical imaging to distinguish epileptogenic from normal cortex was validated in an in vivo study involving 11 pediatric patients with intractable epilepsy. Novel data analysis methods were devised and applied to the data from the study, and identification of the epileptogenic cortex was achieved with high accuracy.
26

Multifunctional Nanoparticles in Cancer: in vitro Characterization, in vivo Distribution

Lei, Tingjun 28 March 2013 (has links)
A novel biocompatible and biodegradable polymer, termed poly(Glycerol malate co-dodecanedioate) (PGMD), was prepared by a thermal condensation method and used for the fabrication of nanoparticles (NPs). PGMD NPs were prepared using the single oil emulsion technique and loaded with an imaging/hyperthermia agent (IR820) and a chemotherapeutic agent (doxorubicin, DOX). The sizes of the void PGMD NPs, IR820-PGMD NPs, and DOX-IR820-PGMD NPs were approximately 90 nm, 110 nm, and 125 nm, respectively. An acidic environment (pH=5.0) induced higher DOX and IR820 release compared to pH=7.4. DOX release was also enhanced by laser exposure, which increased the temperature to 42°C. Cytotoxicity of DOX-IR820-PGMD NPs was comparable in MES-SA cells but higher in Dx5 cells compared to free DOX plus IR820 (p …). In vivo mouse studies showed that the NP formulation significantly improved the plasma half-life of IR820 after tail vein injection. Significantly lower IR820 content was observed in the kidney with the DOX-IR820-PGMD NP treatment compared to the free IR820 treatment in our biodistribution studies (p …).
27

3D On-Sensor Lensless Fluorescence Imaging

Shanmugam, Akshaya 01 January 2012 (has links) (PDF)
Fluorescence microscopy has revolutionized medicine and biological science with its ability to study the behavior and chemical expressions of living cells. Fluorescent probes can label cell components or cells of a particular type. Clinically, the impact of fluorescence imaging can be seen in the diagnosis of cancers, AIDS, and other blood-related disorders. Although fluorescence imaging devices have been established as a vital tool in medicine, the size, cost, and complexity of fluorescence microscopes limit their use to central laboratories. The work described in this thesis overcomes these limitations by developing a low-cost integrated fluorescence microscope that enables single-use fluorescence microscopy assays. These assays will enable at-home testing, diagnostics in resource-limited settings, and improved emergency medicine.
28

Study of Immobilizing Cadmium Selenide Quantum Dots in Selected Polymers for Application in Peroxyoxalate Chemiluminescence Flow Injection Analysis

Moore, Christopher S 01 May 2013 (has links) (PDF)
Two batches of CdSe QDs of different sizes were synthesized for immobilization in polyisoprene (PI), polymethylmethacrylate (PMMA), and low-density polyethylene (LDPE). The combinations of QDs and polymer substrates were evaluated for their analytical fitness for use in applicable immunoassays. Hydrogen peroxide standards were injected into the flow injection analyzer (FIA) constructed to simulate enzyme-generated hydrogen peroxide reacting with bis-(2,4,6-trichlorophenyl) oxalate. Linear correlations between hydrogen peroxide concentration and chemiluminescent intensity yielded regression values greater than 0.9750 for hydrogen peroxide concentrations between 1.0 × 10⁻⁴ M and 1.0 × 10⁻¹ M. The developed technique’s LOD was approximately 10 ppm. Variability among the prepared QD-polymer products was as low as 3.2% across all preparations. Stability of the preparations was tested over a 30-day period, which showed up to a four-fold increase in the first 10 days. The preparations were reasonably robust in the FIA system, showing up to a 15.20% intensity loss after twenty repeated injections.
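As a hedged illustration of the calibration analysis implied above, the snippet below fits a linear regression of chemiluminescent intensity against hydrogen peroxide concentration and estimates a detection limit from blank variability using the common 3-sigma convention. The intensities, the blank standard deviation, and the LOD convention are assumptions for illustration, not data or methods from the thesis.

# Illustrative sketch (hypothetical numbers): calibration line and 3-sigma LOD
# for chemiluminescent intensity versus hydrogen peroxide concentration.
import numpy as np

conc = np.array([1.0e-4, 1.0e-3, 1.0e-2, 1.0e-1])       # H2O2 standards (M)
intensity = np.array([12.0, 118.0, 1190.0, 11800.0])    # assumed peak intensities

slope, intercept = np.polyfit(conc, intensity, 1)
r = np.corrcoef(conc, intensity)[0, 1]

blank_sd = 5.0                   # assumed standard deviation of blank injections
lod = 3.0 * blank_sd / slope     # common 3-sigma detection-limit estimate
print(f"slope = {slope:.3g}, R^2 = {r ** 2:.4f}, LOD = {lod:.2e} M")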
29

Studying Milk Coagulation Kinetics with Laser Scanning Confocal Microscopy, Image Processing, and Computational Modeling

Hennessy, Richard Joseph 01 June 2011 (has links) (PDF)
The kinetics of milk coagulation are complex and still not well understood. A deeper understanding of coagulation and of the impact of the relevant factors would aid both in cheese manufacturing and in determining the nutritional benefits of dairy products. A method using confocal microscopy was developed to follow the movement of milk fat globules and the formation of a milk protein network during the enzyme-induced coagulation of milk. Image processing methods were then used to quantify the rate of coagulation. It was found that the texture of the protein network is an indicator of the current status of milk gelation, and hence can be used to monitor the coagulation process. The imaging experiment was performed on milk gels with different concentrations of the coagulation enzyme, chymosin. Rheological measurements were taken using free oscillation rheometry to validate the imaging results. Both methods showed an inverse relationship between rennet concentration and coagulation time. The results from the imaging study were used to create a computational model that generated simulated images of coagulating milk. The simulated images were then analyzed using the same image analysis algorithm. The temporal behavior of the protein network texture in the simulated images followed the same pattern as the protein texture in the confocal imaging data. The model takes temperature and rennet concentration as user inputs so that it can be implemented as a predictive tool for milk coagulation.
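As one possible realization of the texture-based monitoring described above, the sketch below tracks a gray-level co-occurrence matrix (GLCM) contrast value across a series of image frames. The choice of GLCM contrast and the synthetic frames are illustrative assumptions; the thesis's actual texture measure may differ.

# Illustrative sketch (not the thesis's algorithm): track GLCM contrast of a
# confocal image series as a simple texture measure of network formation.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_contrast(frame_8bit):
    """GLCM contrast for one 8-bit grayscale frame."""
    glcm = graycomatrix(frame_8bit, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    return graycoprops(glcm, "contrast")[0, 0]

# Synthetic frames standing in for confocal images acquired during coagulation.
rng = np.random.default_rng(1)
frames = [rng.integers(0, 256, size=(128, 128), dtype=np.uint8) for _ in range(5)]
contrast_over_time = [glcm_contrast(f) for f in frames]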
30

A Structural and Functional Analysis of Human Brain MRI with Attention Deficit Hyperactivity Disorder

Watane, Arjun A 01 January 2017 (has links)
Attention Deficit Hyperactivity Disorder (ADHD) affects 5-10% of children worldwide. Its effects are mainly behavioral, manifesting in symptoms such as inattention, hyperactivity, and impulsivity. If not monitored and treated, ADHD may adversely affect a child's health, education, and social life. Furthermore, the neurological disorder is currently diagnosed through interviews and the opinions of teachers, parents, and physicians. Because this is a subjective method of identifying ADHD, it is prone to error and misdiagnosis. Therefore, there is a clear need to develop an objective diagnostic method for ADHD. The focus of this study is to explore the use of machine learning classifiers on information from brain MRI and fMRI of both ADHD and non-ADHD subjects. The imaging data are preprocessed to remove intra-subject and inter-subject variation. For both MRI and fMRI, similar preprocessing stages are performed, including normalization, skull stripping, realignment, smoothing, and co-registration. The next step is to extract features from the data. For MRI, anatomical features such as cortical thickness, surface area, volume, and intensity are obtained. For fMRI, region of interest (ROI) correlation coefficients between 116 cortical structures are determined. A large number of image features are collected, yet many of them may carry redundant or useless information. Therefore, the features used for training and testing the classifiers are selected in two separate ways, feature ranking and stability selection, and their results are compared. Once the best features from MRI and fMRI are determined, the following classifiers are trained and tested through leave-one-out cross validation, experimenting with varying feature counts, for each imaging modality and feature selection method: support vector machine, support vector regression, random forest, and elastic net. Thus, there are four experiments (MRI-rank, MRI-stability, fMRI-rank, fMRI-stability) with four classifiers each, for a total of 16 classifiers trained per feature count attempted. The output of each classifier is a decision for each subject, ADHD or non-ADHD. Finally, a classifier decision ensemble is created by combining the outputs of the best classifiers in a majority voting scheme that includes results from both the MRI and fMRI classifiers and keeps the two feature selection results independent. The results suggest that ADHD is more easily identified through fMRI, because classification accuracies are substantially higher using fMRI data rather than MRI data. Furthermore, significant differences in activity correlations exist between the brain's frontal lobe and cerebellum, and between the left and right hemispheres, among ADHD and non-ADHD subjects. When MRI decisions are included with fMRI in the classifier ensemble, performance is boosted to a high ADHD detection accuracy of 96.2%, suggesting that MRI information assists in validating fMRI classification decisions. This study is an important step towards the development of an automatic and objective method for ADHD diagnosis. While more work is needed to externally validate and improve the classification accuracy, new applications of current methods with promising results are introduced here.
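As a minimal sketch of the classification workflow outlined above, the snippet below runs leave-one-out cross-validation for two classifiers on a synthetic feature matrix and combines their per-subject decisions by majority vote. The synthetic data, the classifier settings, and the two-classifier ensemble are illustrative assumptions, not the study's actual features or hyperparameters.

# Illustrative sketch (synthetic data, assumed settings): leave-one-out
# cross-validation of two classifiers plus a majority-vote decision ensemble.
import numpy as np
from sklearn.model_selection import LeaveOneOut
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 20))      # 40 subjects, 20 selected features (synthetic)
y = rng.integers(0, 2, size=40)    # 1 = ADHD, 0 = non-ADHD (synthetic labels)

classifiers = [SVC(kernel="linear"),
               RandomForestClassifier(n_estimators=100, random_state=0)]

votes = np.zeros((len(classifiers), len(y)), dtype=int)
for row, clf in enumerate(classifiers):
    for train, test in LeaveOneOut().split(X):
        clf.fit(X[train], y[train])
        votes[row, test] = clf.predict(X[test])

# Majority vote across classifiers; ties here default to the non-ADHD label.
ensemble = (votes.sum(axis=0) > len(classifiers) / 2).astype(int)
accuracy = (ensemble == y).mean()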
