181

Low-dose imaging of liver diseases through neutron stimulated emission computed tomography: Simulations in GEANT4

Agasthya, Greeshma Ananth, January 2013
Neutron stimulated emission computed tomography (NSECT) is a non-invasive, tomographic imaging technique with the ability to locate and quantify elemental concentrations in a tissue sample. Previous studies have shown that NSECT can differentiate between benign and malignant tissue and diagnose liver iron overload, using a neutron-beam tomographic acquisition protocol followed by iterative image reconstruction. These studies have shown that moderate concentrations of iron can be detected in the liver at moderate dose levels and long scan times. However, a low-dose, reduced-scan-time technique to differentiate various liver diseases has not been tested. As with other imaging modalities, the performance of NSECT in detecting different diseases while reducing dose and scan time depends on the acquisition techniques and parameters used to scan the patients. To optimize a clinical liver imaging system based on NSECT, it is important to implement low-dose techniques and evaluate their feasibility, sensitivity, specificity and accuracy by analyzing the generated liver images from a patient population. This research work uses Monte Carlo simulations to optimize a clinical NSECT system for detection, localization, quantification and classification of liver diseases. The project is divided into three parts: (a) implement two novel acquisition techniques for dose reduction, (b) modify the MLEM iterative image reconstruction algorithm to incorporate the new acquisition techniques, and (c) evaluate the performance of the combined technique on a simulated patient population.

The two dose-reduction acquisition techniques are (i) a single-angle scanning, multi-detector acquisition system and (ii) the neutron time-resolved imaging (n-TRI) technique. In n-TRI, the NSECT signal is resolved in time as a function of the speed of the incident neutron beam, and this information is used to locate the lesions in the liver tissue. These changes in the acquisition system are incorporated into a modified MLEM iterative image reconstruction algorithm to generate liver images. The liver images are generated from sinograms acquired by the simulated n-TRI-based NSECT scanner from a simulated patient population.

The simulated patient population includes patients of different sizes, with different liver diseases and multiple lesions of different sizes and locations in the liver. The NSECT images generated from this population are used to validate the liver imaging system developed in this project. Statistical tests such as ROC analysis and Student's t-tests are used to evaluate the system. The overall improvement in dose and scan time compared to the tomographic NSECT system is calculated to verify the improvement in the imaging system. The patient dose was calculated by measuring the energy deposited by the neutron beam in the liver and surrounding body tissue. The scan time was calculated by measuring the time required by a neutron source to produce the neutron fluence needed to generate a clinically viable NSECT image.

Simulation studies indicate that this NSECT system can detect, locate, quantify and classify liver lesions in patients of different sizes. The n-TRI imaging technique can detect lesions with wet iron concentrations of 0.5 mg/g or higher in liver tissue in patients with a 30 cm torso, and can quantify lesions at 0.3 ns timing resolution with errors ≤ 17.8%. The NSECT system can localize and classify liver lesions of hemochromatosis, hepatocellular carcinoma, fatty liver tissue and cirrhotic liver tissue based on bulk and trace element concentrations. In a small patient with a torso major axis of 30 cm, the n-TRI-based liver imaging technique can localize 91.67% of all lesions and classify lesions with an accuracy of 88.23%. The dose to the small patient is 0.37 mSv, a reduction of 39.9% compared to the tomographic NSECT system, and scan times are comparable to those of an abdominal MRI scan. In a larger patient with a torso major axis of 50 cm, the n-TRI-based technique can detect 75% of the lesions while localizing 66.67% of them, with a classification accuracy of 76.47%. The effective dose equivalent delivered to the larger patient is 1.57 mSv, a 68.8% decrease in dose compared to a tomographic NSECT system.

The research performed for this dissertation has two important outcomes. First, it demonstrates that NSECT has the clinical potential for detection, localization and classification of liver diseases in patients. Second, it provides a validation of the simulation of a novel low-dose liver imaging technique which can be used to guide future development and experimental implementation of the technique.
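
The reconstruction step above builds on the standard MLEM update. As a reference point, the following is a minimal sketch of generic MLEM in Python — not the author's modified, n-TRI-aware version; the system matrix, sizes, and variable names are illustrative assumptions:

```python
import numpy as np

def mlem(system_matrix, sinogram, n_iters=50):
    """Generic MLEM: multiplicatively update the image estimate by the
    back-projected ratio of measured to predicted counts."""
    image = np.ones(system_matrix.shape[1])        # flat initial estimate
    sensitivity = system_matrix.sum(axis=0)        # per-pixel normalization
    for _ in range(n_iters):
        predicted = system_matrix @ image          # forward projection
        ratio = sinogram / np.maximum(predicted, 1e-12)
        image *= (system_matrix.T @ ratio) / np.maximum(sensitivity, 1e-12)
    return image

# Toy usage with a random system matrix (hypothetical sizes).
rng = np.random.default_rng(0)
A = rng.random((64, 16))
true_image = rng.random(16)
recon = mlem(A, A @ true_image)
```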
182

Valid motion estimation for super-resolution image reconstruction

Santoro, Michael, 14 August 2012
In this thesis, a block-based motion estimation algorithm suitable for Super-Resolution (SR) image reconstruction is introduced. The motion estimation problem is formulated as an energy minimization problem consisting of both a data term and a regularization term. To handle cases where motion estimation fails, a block-based validity method is introduced, and is shown to outperform all other validity methods in the literature in terms of hybrid de-interlacing. By incorporating the validity metric into the energy minimization framework, it is shown that 1) the motion vector error is made less sensitive to block size, 2) a more uniform distribution of motion-compensated blocks results, and 3) the overall motion vector error is reduced. The final motion estimation algorithm is shown to outperform several state-of-the-art motion estimation algorithms in terms of both endpoint error and interpolation error, and is one of the fastest algorithms in the Middlebury benchmark. With the new motion estimation algorithm and validity metric, it is shown that artifacts are virtually eliminated from the POCS-based reconstruction of the high-resolution image.
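
The energy formulation described above — a data term plus a regularization term over block motion vectors — can be illustrated with a minimal exhaustive block-matching search. This is a sketch under assumed details (SAD data term, quadratic smoothness penalty, parameter names), not the thesis algorithm or its validity metric:

```python
import numpy as np

def block_energy(ref, tgt, x, y, dx, dy, bs, neighbor_mvs, lam):
    """Data term: SAD of the displaced block; regularization term:
    squared deviation from neighboring motion vectors."""
    block = ref[y:y+bs, x:x+bs].astype(float)
    cand = tgt[y+dy:y+dy+bs, x+dx:x+dx+bs].astype(float)
    data = np.abs(block - cand).sum()
    reg = sum((dx - nx)**2 + (dy - ny)**2 for nx, ny in neighbor_mvs)
    return data + lam * reg

def best_mv(ref, tgt, x, y, bs=16, search=8, neighbor_mvs=(), lam=1.0):
    """Exhaustive search minimizing the combined energy for one block."""
    h, w = tgt.shape
    best, best_e = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            if 0 <= x+dx and x+dx+bs <= w and 0 <= y+dy and y+dy+bs <= h:
                e = block_energy(ref, tgt, x, y, dx, dy, bs, neighbor_mvs, lam)
                if e < best_e:
                    best, best_e = (dx, dy), e
    return best
```

Raising the assumed weight `lam` pulls each block's vector toward its neighbors, which is one way to obtain the more uniform motion-field behaviour the abstract reports.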
183

Statistical Fusion of Scientific Images

Mohebi, Azadeh, 30 July 2009
A practical and important class of scientific images are the 2D/3D images obtained from porous materials such as concrete, bone, active carbon, and glass. These materials constitute an important class of heterogeneous media possessing complicated microstructure that is difficult to describe qualitatively. However, they are not totally random; there is a mixture of organization and randomness that makes them difficult to characterize and study. In order to study different properties of porous materials, 2D/3D high resolution samples are required. But obtaining high resolution samples usually requires cutting, polishing and exposure to air, all of which affect the properties of the sample. Moreover, 3D samples obtained by Magnetic Resonance Imaging (MRI) are very low resolution and noisy. Therefore, artificial samples of porous media must be generated through a porous media reconstruction process. Recent contributions to the reconstruction task are based either only on a prior model, learned from statistical features of real high resolution training data, from which samples are generated, or on a prior model together with the measurements. The main objective of this thesis is to come up with a statistical data fusion framework by which different images of porous materials at different resolutions and modalities are combined in order to generate artificial samples of porous media with enhanced resolution. Current super-resolution, multi-resolution and registration methods in image processing fail to provide a general framework for porous media reconstruction, since they are usually based on finding an estimate rather than a typical sample, and on having images of the same scene — which is not the case for porous media images. The statistical fusion approach proposed here is based on a Bayesian framework in which a prior model, learned from high resolution samples, is combined with a measurement model, defined from the low resolution, coarse-scale information, to arrive at a posterior model. We define a measurement model, in both the non-hierarchical and hierarchical image modeling frameworks, which describes how the low resolution information is asserted in the posterior model. Then, we propose a posterior sampling approach by which 2D posterior samples of porous media are generated from the posterior model. A more general framework proposed here asserts constraints other than the measurement in the model, with a constrained sampling strategy based on simulated annealing to generate artificial samples.
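
To illustrate the constrained-sampling idea, the sketch below anneals a binary high-resolution field whose coarse block means must match a low-resolution measurement, under an Ising smoothness prior. The prior, the measurement weight `lam`, and the cooling schedule are all illustrative assumptions rather than the models developed in the thesis; `low_res` is assumed to hold block porosity fractions in [0, 1]:

```python
import numpy as np

def anneal_sample(low_res, scale=4, n_sweeps=100, lam=8.0, seed=0):
    """Toy posterior sampler: binary high-res field with an Ising
    smoothness prior, constrained so block means match the low-res
    measurement; inverse temperature grows each sweep (annealing)."""
    rng = np.random.default_rng(seed)
    H, W = low_res.shape[0] * scale, low_res.shape[1] * scale
    # initialize by thresholding upsampled measurements against noise
    x = (rng.random((H, W)) < low_res.repeat(scale, 0).repeat(scale, 1)).astype(float)
    for sweep in range(n_sweeps):
        beta = 0.5 + 0.05 * sweep                  # cooling schedule
        for _ in range(H * W):
            i, j = rng.integers(H), rng.integers(W)
            nb = x[(i-1) % H, j] + x[(i+1) % H, j] + x[i, (j-1) % W] + x[i, (j+1) % W]
            bi, bj = i // scale, j // scale
            block = x[bi*scale:(bi+1)*scale, bj*scale:(bj+1)*scale]
            others = block.sum() - x[i, j]         # block mass excluding (i, j)
            energies = []
            for v in (0.0, 1.0):                   # candidate pixel values
                mean_v = (others + v) / scale**2
                e_prior = -(2*v - 1) * (2*nb - 4)  # Ising agreement, spins +/-1
                e_meas = lam * (mean_v - low_res[bi, bj])**2
                energies.append(e_prior + e_meas)
            p1 = 1.0 / (1.0 + np.exp(beta * (energies[1] - energies[0])))
            x[i, j] = 1.0 if rng.random() < p1 else 0.0
    return x
```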
184

Converting Network Media Data into Human Readable Form: A study on deep packet inspection with real-time visualization.

Förderer, Steffen-Marc, January 2012
A proof-of-concept study into the workings of network media capture and visualization through the use of packet capture in real time. An application was developed that is able to capture TCP network packets and identify and display images in raw HTTP network traffic, using search, sort, error detection, and timeout failsafe algorithms in real time. The application was designed for network administrators to visualize raw network media content together with its relevant network source & address identifiers. Different approaches were tried and tested, such as using Perl with GTK+ and Visual Studio C# .NET. Furthermore, two different types of image identification methods were used: raw magic string identification in pure TCP network traffic and HTTP MIME type identification, the latter being more accurate and faster. C# was seen as vastly superior in both speed of prototyping and final performance. The study presents a novel way of monitoring networks on the basis of their media content through deep packet inspection.
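
The two identification methods mentioned — magic-string matching in raw TCP payloads and HTTP MIME-type inspection — can be sketched as follows. The thesis implementations were in Perl and C#; this Python version is only an illustration of the idea, with an assumed, abbreviated signature table:

```python
# Common image file signatures ("magic bytes"); an abbreviated table.
IMAGE_MAGIC = {
    b"\xff\xd8\xff": "jpeg",
    b"\x89PNG\r\n\x1a\n": "png",
    b"GIF87a": "gif",
    b"GIF89a": "gif",
}

def find_image(payload: bytes):
    """Raw method: scan a reassembled TCP payload for an image header."""
    for magic, kind in IMAGE_MAGIC.items():
        pos = payload.find(magic)
        if pos != -1:
            return kind, pos
    return None

def mime_type(payload: bytes):
    """HTTP method: read the Content-Type header of a response --
    the faster, more accurate route the thesis favoured."""
    head, _, _ = payload.partition(b"\r\n\r\n")
    for line in head.split(b"\r\n"):
        if line.lower().startswith(b"content-type:"):
            return line.split(b":", 1)[1].strip().decode(errors="replace")
    return None
```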
185

Implementation of a fast method for reconstruction of ISAR images / Implementation av en snabb metod för rekonstruktion av ISAR-bilder

Dahlbäck, Niklas, January 2003
By analyzing ISAR images, the characteristics of military platforms with respect to radar visibility can be evaluated. The method currently used to calculate the ISAR images, based on the Discrete-Time Fourier Transform (DTFT), requires large computational effort. This thesis investigates the possibility of replacing the DTFT with the Fast Fourier Transform (FFT). Such a replacement is not trivial, since the DTFT can compute a contribution anywhere along the spatial axis while the FFT delivers output data at fixed sampling points, which requires subsequent interpolation. The interpolation leads to a difference in the ISAR image compared to the image obtained by DTFT. On the other hand, the FFT is much faster. In this quality-and-time trade-off, the objective is to minimize the error while keeping high computational efficiency. The FFT approach is evaluated by studying execution time and image error when generating ISAR images for an aircraft model in a controlled environment. The FFT method shows good results: the execution speed is increased significantly without any visible differences in the ISAR images. The speed-up factor depends on several parameters: image size, degree of zero-padding when calculating the FFT, and the number of frequencies in the input data.
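
The trade-off described — exact DTFT evaluation at arbitrary frequencies versus a zero-padded FFT followed by interpolation — can be sketched in a few lines. This illustrates the general technique, not the thesis implementation; the linear interpolation and pad factor are assumptions:

```python
import numpy as np

def dtft_point(x, omega):
    """Exact DTFT of signal x at an arbitrary frequency omega (rad/sample)."""
    n = np.arange(len(x))
    return np.sum(x * np.exp(-1j * omega * n))

def fft_interp(x, omega, pad_factor=8):
    """Approximation via zero-padded FFT: padding yields a denser fixed
    frequency grid; linear interpolation fills in between bins.
    (Ignores wrap-around at the last bin -- fine for a sketch.)"""
    N = len(x) * pad_factor
    X = np.fft.fft(x, n=N)
    grid = 2 * np.pi * np.arange(N) / N
    w = omega % (2 * np.pi)
    return np.interp(w, grid, X.real) + 1j * np.interp(w, grid, X.imag)

# A larger pad_factor shrinks the interpolation error at the cost of FFT size.
x = np.random.default_rng(0).standard_normal(128)
print(abs(dtft_point(x, 0.7) - fft_interp(x, 0.7, pad_factor=16)))
```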
187

Hidden hierarchical Markov fields for image modeling

Liu, Ying, 17 January 2011
Random heterogeneous, scale-dependent structures can be observed in many image sources, especially in remote sensing and scientific imaging. Examples include slices of porous media data showing pores of various sizes, and a remote sensing image containing small and large sea-ice blocks. Meanwhile, rather than the images of the phenomena themselves, many image processing and analysis problems require dealing with discrete-state fields according to a labeled underlying property, such as mineral porosity extracted from microscope images, or an ice type map estimated from a sea-ice image. In many cases, when discrete-state problems are associated with heterogeneous, scale-dependent spatial structures, we have to deal with complex discrete-state fields. Although scale-dependent image modeling methods are common for continuous-state problems, models for discrete-state cases have not been well studied in the literature. A fundamental difficulty therefore arises: how to represent such complex discrete-state fields. Considering the success of hidden field methods in representing heterogeneous behaviours and the capability of hierarchical field methods in modeling scale-dependent spatial features, we propose a Hidden Hierarchical Markov Field (HHMF) approach, which combines the idea of hierarchical fields with hidden fields, to address the discrete field modeling challenge. However, defining a general HHMF modeling structure to cover all possible situations is difficult. In this research, we use two image application problems to describe the proposed modeling methods: one for scientific image (porous media image) reconstruction and the other for remote-sensing image synthesis. For modeling discrete-state fields with a spatially separable complex behaviour, such as porous media images with non-overlapping heterogeneous pores, we propose a Parallel HHMF model, which decomposes a complex behaviour into a set of separate, simple behaviours over scale, and then represents each of these with a hierarchical field. Alternatively, discrete fields with a highly heterogeneous behaviour, such as a sea-ice image with multiple types of ice at various scales, which are not spatially separable but arranged more as a partition tree, lead to the proposed Tree-Structured HHMF model. Under this approach, a complex, multi-label field can be repeatedly partitioned into a set of binary/ternary fields, each of which can be further handled by a hierarchical field.
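
The tree-structured decomposition can be illustrated with simple label bookkeeping: each tree node induces a binary field over the labels on its "one" side. This sketch shows only the partitioning step, with a hypothetical two-node tree for a three-label sea-ice example; the hierarchical field modeling itself is not shown:

```python
import numpy as np

def partition_labels(field, groups):
    """Split a multi-label field into one binary field per tree node;
    `groups` maps a node name to the label set on its 'one' side."""
    return {node: np.isin(field, list(labels)).astype(np.uint8)
            for node, labels in groups.items()}

# Hypothetical three-label sea-ice map: 0 = water, 1 = thin ice, 2 = thick ice.
# The root node separates ice from water; a child separates thick from thin.
field = np.random.default_rng(0).integers(0, 3, (8, 8))
binary_fields = partition_labels(field, {"root": {1, 2}, "ice-type": {2}})
```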
188

Blur Estimation And Superresolution From Multiple Registered Images

Senses, Engin Utku, 01 September 2008
Resolution is the most important criterion for the clarity of detail in an image. Therefore, high resolution images are required in numerous areas. However, obtaining high resolution images carries an evident technological cost, which varies with the quality of the optical systems used. Image processing methods are used to obtain high resolution images at low cost. This kind of image improvement is called superresolution (SR) image reconstruction. This thesis focuses on two main topics: identification methods for blur parameters, one of the degradation operators, and stochastic SR image reconstruction methods. The performances of different stochastic SR image reconstruction methods and blur identification methods are shown and compared. The identified blur parameters are then used in superresolution algorithms and the results are shown.
189

Analysis Of Magnetic Resonance Imaging In Inhomogeneous Main Magnetic Field

Arpinar, Volkan Emre, 01 August 2009
In this study, an analysis of Magnetic Resonance Imaging (MRI) in an inhomogeneous main magnetic field is conducted. A numerical model based on the Bloch equation is implemented for MRI to understand the effect of an inhomogeneous magnetic field on the Magnetic Resonance (MR) signal. Using the model, the relations between inhomogeneity levels in the main magnetic field and the energy, decay time, and bandwidth of the FID signal are investigated. The relation between the magnetic field inhomogeneity and the field of view is also determined. To simulate measurement noise in the FID signal under an inhomogeneous main magnetic field, the noise model for MRI with a homogeneous main field is altered. Following the numerical model development, an image reconstruction algorithm for an inhomogeneous main magnetic field is developed to remove the undesirable effects of field inhomogeneity in image reconstruction. To evaluate the capability of the reconstruction algorithm, it is tested for several input parameters which result in different noise levels in the FID signal. The reconstruction errors are then analysed to gain information about the feasibility of MRI in an inhomogeneous main magnetic field.
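
The physics behind the abstract's numerical model can be sketched using the closed-form free-precession solution of the Bloch equation: each isochromat precesses at an offset frequency set by its local field deviation, and their vector sum gives the FID. This toy version, with assumed T2 and field-offset values, only illustrates why a wider spread of field offsets broadens and shortens the FID; it is not the thesis model:

```python
import numpy as np

def fid_signal(delta_b, t_max=0.2, dt=1e-5, t2=0.05, gamma=2.675e8):
    """Sum of isochromats freely precessing at offsets gamma * delta_b
    (delta_b in tesla), with T2 decay; the vector sum is the FID."""
    t = np.arange(0.0, t_max, dt)
    m = np.exp(1j * gamma * np.outer(t, delta_b)) * np.exp(-t / t2)[:, None]
    return t, m.sum(axis=1)

# A wider spread of field offsets -> broader spectrum, faster-decaying FID.
t, fid_homogeneous = fid_signal(np.linspace(-1e-8, 1e-8, 100))
t, fid_inhomogeneous = fid_signal(np.linspace(-1e-6, 1e-6, 100))
```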
190

The Effects of Preferred Orientation on Three-Dimensional Reconstruction of T = 3 Virus Particles

Chen, Chun-Hong, 20 June 2008
Cryo-EM and three-dimensional reconstruction have become important research tools for virus structure. These techniques have the benefits of speed and of keeping samples in their native fold. Simulated preferred-orientation images of DGNNV, PaV and TBSV were reconstructed by SPIDER or PurdueEM using projection matching. SPIDER prefers 3-fold and 5-fold view fields, while PurdueEM prefers 2-fold view fields. SPIDER could successfully reconstruct images with more noise than PurdueEM could handle. Reconstruction of the RNA cages bears some relationship to the symmetry of the capsid protein. Preferred orientation, noise and RNA cages are the factors that can affect reconstruction.
