171

Image reconstruction with multisensors.

January 1998 (has links)
by Wun-Cheung Tang. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1998. / Includes bibliographical references. / Abstract also in Chinese. / Abstracts --- p.1 / Introduction --- p.3 / Toeplitz and Circulant Matrices --- p.3 / Conjugate Gradient Method --- p.6 / Cosine Transform Preconditioner --- p.7 / Regularization --- p.10 / Summary --- p.13 / Paper A --- p.19 / Paper B --- p.36
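A minimal 1-D sketch of the ingredients named in this record (a Toeplitz system, the conjugate gradient method, a transform-diagonalized circulant preconditioner, and Tikhonov regularization). The blur kernel, problem size, and regularization weight are illustrative assumptions, and an FFT-diagonalized circulant preconditioner stands in for the cosine-transform preconditioner studied in the thesis.

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(0)
n = 256
t = np.exp(-0.05 * np.arange(n) ** 2)             # first column of a symmetric Toeplitz blur A
A = toeplitz(t)                                   # dense only because n is small here
lam = 1e-2                                        # Tikhonov regularization weight (assumed)

x_true = np.zeros(n)
x_true[60:120] = 1.0                              # simple piecewise-constant scene
b = A @ x_true + 1e-3 * rng.standard_normal(n)    # blurred, noisy observation

# Regularized normal equations (A^T A + lam I) x = A^T b, applied matrix-free.
normal = LinearOperator((n, n), matvec=lambda v: A.T @ (A @ v) + lam * v)
rhs = A.T @ b

# Strang-type circulant approximation of A: keep the first half of the Toeplitz
# column and mirror it, so the preconditioner is diagonalized by the FFT and a
# preconditioner solve costs one FFT pair per CG iteration.
c = np.concatenate([t[:n // 2 + 1], t[n // 2 - 1:0:-1]])
eig = np.fft.fft(c).real                          # eigenvalues of the circulant approximation
M = LinearOperator((n, n),
                   matvec=lambda v: np.fft.ifft(np.fft.fft(v) / (eig ** 2 + lam)).real)

x_rec, info = cg(normal, rhs, M=M, maxiter=50)
print("converged" if info == 0 else f"stopped early, info={info}",
      "| residual:", np.linalg.norm(normal.matvec(x_rec) - rhs))
```

The preconditioner clusters the spectrum of the Toeplitz normal equations, so CG needs far fewer iterations than the unpreconditioned solve would.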
172

Practical Euclidean reconstruction of buildings.

January 2001 (has links)
Chou Yun-Sum, Bailey. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2001. / Includes bibliographical references (leaves 89-92). / Abstracts in English and Chinese. / List of Symbol / Chapter Chapter 1 --- Introduction / Chapter 1.1 --- The Goal: Euclidean Reconstruction --- p.1 / Chapter 1.2 --- Historical background --- p.2 / Chapter 1.3 --- Scope of the thesis --- p.2 / Chapter 1.4 --- Thesis Outline --- p.3 / Chapter Chapter 2 --- An introduction to stereo vision and 3D shape reconstruction / Chapter 2.1 --- Homogeneous Coordinates --- p.4 / Chapter 2.2 --- Camera Model / Chapter 2.2.1 --- Pinhole Camera Model --- p.5 / Chapter 2.3 --- Camera Calibration --- p.11 / Chapter 2.4 --- Geometry of Binocular System --- p.14 / Chapter 2.5 --- Stereo Matching --- p.15 / Chapter 2.5.1 --- Accuracy of Corresponding Point --- p.17 / Chapter 2.5.2 --- The Stereo Matching Approach --- p.18 / Chapter 2.5.2.1 --- Intensity-based stereo matching --- p.19 / Chapter 2.5.2.2 --- Feature-based stereo matching --- p.20 / Chapter 2.5.3 --- Matching Constraints --- p.20 / Chapter 2.6 --- 3D Reconstruction --- p.22 / Chapter 2.7 --- Recent development on self calibration --- p.24 / Chapter 2.8 --- Summary of the Chapter --- p.25 / Chapter Chapter 3 --- Camera Calibration / Chapter 3.1 --- Introduction --- p.26 / Chapter 3.2 --- Camera Self-calibration --- p.27 / Chapter 3.3 --- Self-calibration under general camera motion --- p.27 / Chapter 3.3.1 --- The absolute Conic Based Techniques --- p.28 / Chapter 3.3.2 --- A Stratified approach for self-calibration by Pollefeys --- p.33 / Chapter 3.3.3 --- Pollefeys self-calibration with Absolute Quadric --- p.34 / Chapter 3.3.4 --- Newsam's self-calibration with linear algorithm --- p.34 / Chapter 3.4 --- Camera Self-calibration under specially designed motion sequence / Chapter 3.4.1 --- Hartley's self-calibration by pure rotations --- p.35 / Chapter 3.4.1.1 --- Summary of the Algorithm / Chapter 3.4.2 --- Pollefeys self-calibration with variant focal length --- p.36 / Chapter 3.4.2.1 --- Summary of the Algorithm / Chapter 3.4.3 --- Faugeras self-calibration of a 1D Projective Camera --- p.38 / Chapter 3.5 --- Summary of the Chapter --- p.39 / Chapter Chapter 4 --- Self-calibration under Planar motions / Chapter 4.1 --- Introduction --- p.40 / Chapter 4.2 --- 1D Projective Camera Self-calibration --- p.41 / Chapter 4.2.1 --- 1-D camera model --- p.42 / Chapter 4.2.2 --- 1-D Projective Camera Self-calibration Algorithms --- p.44 / Chapter 4.2.3 --- Planar motion detection --- p.45 / Chapter 4.2.4 --- Self-calibration under horizontal planar motions --- p.46 / Chapter 4.2.5 --- Self-calibration under three different planar motions --- p.47 / Chapter 4.2.6 --- Result analysis on self-calibration Experiments --- p.49 / Chapter 4.3 --- Essential Matrix and Triangulation --- p.51 / Chapter 4.4 --- Merge of Partial 3D models --- p.51 / Chapter 4.5 --- Summary of the Reconstruction Algorithms --- p.53 / Chapter 4.6 --- Experimental Results / Chapter 4.6.1 --- Experiment 1 : A Simulated Box --- p.54 / Chapter 4.6.2 --- Experiment 2 : A Real Building --- p.57 / Chapter 4.6.3 --- Experiment 3 : A Sun Flower --- p.58 / Chapter 4.7 --- Conclusion --- p.59 / Chapter Chapter 5 --- Building Reconstruction using a linear camera self-calibration technique / Chapter 5.1 --- Introduction --- p.60 / Chapter 5.2 --- Metric Reconstruction from Partially Calibrated image / Chapter 5.2.1 --- Partially Calibrated Camera --- p.62 / Chapter 5.2.2 --- Optimal Computation of Fundamental Matrix (F) --- p.63 / Chapter 5.2.3 --- Linearly Recovering Two Focal Lengths from F --- p.64 / Chapter 5.2.4 --- Essential Matrix and Triangulation --- p.66 / Chapter 5.3 --- Experiments and Discussions --- p.67 / Chapter 5.4 --- Conclusion --- p.71 / Chapter Chapter 6 --- Refine the basic model with detail depth information by a Model-Based Stereo technique / Chapter 6.1 --- Introduction --- p.72 / Chapter 6.2 --- Model Based Epipolar Geometry / Chapter 6.2.1 --- Overview --- p.74 / Chapter 6.2.2 --- Warped offset image preparation --- p.76 / Chapter 6.2.3 --- Epipolar line calculation --- p.78 / Chapter 6.2.4 --- Actual corresponding point finding by stereo matching --- p.80 / Chapter 6.2.5 --- Actual 3D point generated by Triangulation --- p.80 / Chapter 6.3 --- Summary of the Algorithms --- p.81 / Chapter 6.4 --- Experiments and discussions --- p.83 / Chapter 6.5 --- Conclusion --- p.85 / Chapter Chapter 7 --- Conclusions / Chapter 7.1 --- Summary --- p.86 / Chapter 7.2 --- Future Work --- p.88 / BIBLIOGRAPHY --- p.89
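A small sketch of the linear triangulation step that recurs throughout this table of contents ("Essential Matrix and Triangulation"): given two projection matrices and one pair of corresponding image points, the 3-D point is recovered as the null vector of a 4x4 system built from the two projections. The intrinsics, camera poses, and test point below are synthetic assumptions, not data from the thesis.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one correspondence x1 <-> x2."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)        # null vector = homogeneous 3-D point
    X = Vt[-1]
    return X[:3] / X[3]                # dehomogenize

K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])    # assumed intrinsics
R = np.eye(3)
t = np.array([[-1.0], [0.0], [0.0]])                          # second camera shifted along x
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])

X_true = np.array([0.2, -0.1, 4.0])                           # a point in front of both cameras
x1 = P1 @ np.append(X_true, 1); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1); x2 = x2[:2] / x2[2]
print(triangulate(P1, P2, x1, x2))                            # ~ [0.2, -0.1, 4.0]
```

In a self-calibration pipeline like the one outlined above, K would come from the calibration step rather than being assumed.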
173

Characterization of Computed Tomography Radiomic Features using Texture Phantoms

Shafiq ul Hassan, Muhammad 05 April 2018 (has links)
Radiomics treats images as quantitative data and promises to improve cancer prediction in radiology and therapy response assessment in radiation oncology. However, there are a number of fundamental problems that need to be solved before radiomic features can be applied in the clinic. The first basic step in computed tomography (CT) radiomic analysis is the acquisition of images using selectable acquisition and reconstruction parameters. Radiomic features have shown large variability due to variation of these parameters. Therefore, it is important to develop methods that address the variability in radiomic features due to each CT parameter. To this end, texture phantoms provide a stable geometry and stable Hounsfield Units (HU) with which to characterize the radiomic features with respect to image acquisition and reconstruction parameters. In this project, normalization methods were developed to address the variability issues in CT radiomics using texture phantoms. In the first part of this project, variability in radiomic features due to voxel size variation was addressed. A voxel size resampling method is presented as a preprocessing step for imaging data acquired with variable voxel sizes. After resampling, the variability due to variable voxel size in 42 radiomic features was reduced significantly. Voxel size normalization is presented to address the intrinsic voxel-size dependence of some key radiomic features. After normalization, 10 features became robust as a function of voxel size. Some of these features were identified as predictive biomarkers in diagnostic imaging or as useful in response assessment in radiation therapy; however, these key features were found to be intrinsically dependent on voxel size (which also implies a dependence on lesion volume). Normalization factors were also developed to address the intrinsic dependence of texture features on the number of gray levels. After normalization, the variability due to gray levels in 17 texture features was reduced significantly. In the second part of the project, the voxel size and gray level (GL) normalizations developed from the phantom studies were tested on actual lung cancer tumors. Eighteen patients with non-small cell lung cancer of varying tumor volumes were studied and compared with phantom scans acquired on 8 different CT scanners. Eight out of 10 features showed high (Rs > 0.9) and low (Rs < 0.5) Spearman rank correlations with voxel size before and after normalization, respectively. Likewise, texture features were unstable (ICC < 0.6) and highly stable (ICC > 0.9) before and after gray level normalization, respectively. This work showed that the voxel size and GL normalizations derived from a texture phantom also apply to lung cancer tumors, and it highlights the importance and utility of investigating the robustness of CT radiomic features using CT texture phantoms. Another contribution of this work is the development of correction factors to address the variability in radiomic features due to reconstruction kernels. Reconstruction kernels and tube current contribute to noise texture in CT, and most texture features were sensitive to the correlated noise texture due to reconstruction kernels. In this work, the noise power spectrum (NPS) was measured on 5 CT scanners using a standard ACR phantom to quantify the correlated noise texture. The variability in texture features due to different kernels was reduced by applying the NPS peak frequency and the region of interest (ROI) maximum intensity as correction factors. Most texture features were radiation dose independent but strongly kernel dependent, as demonstrated by a significant shift in NPS peak frequency among kernels. Percent improvements in the robustness of 19 features were in the range of 30% to 78% after corrections. In conclusion, most texture features are sensitive to imaging parameters such as reconstruction kernel, reconstruction field of view (FOV), and slice thickness. All reconstruction parameters contribute to the inherent noise in CT images. The problem can be partly addressed by quantifying noise texture in CT radiomics using a texture phantom and an ACR phantom. Texture phantoms should be a prerequisite to patient studies, as they provide a stable geometry and HU distribution with which to characterize radiomic features, and they provide ground truths for multi-institutional validation studies.
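A rough sketch of the two preprocessing ideas described in this record, resampling to a common voxel size and discretizing to a fixed number of gray levels before texture features are computed. The target spacing, gray-level count, function names, and the random stand-in volume are illustrative assumptions, not the thesis protocol.

```python
import numpy as np
from scipy.ndimage import zoom

def resample_to_isotropic(volume, spacing_mm, target_mm=1.0):
    """Resample a CT volume (z, y, x) to an isotropic target voxel size."""
    factors = [s / target_mm for s in spacing_mm]   # >1 upsamples, <1 downsamples
    return zoom(volume, factors, order=1)           # trilinear interpolation

def quantize_gray_levels(roi_hu, n_levels=64):
    """Map ROI Hounsfield units onto 1..n_levels equally spaced bins."""
    lo, hi = roi_hu.min(), roi_hu.max()
    q = np.floor((roi_hu - lo) / (hi - lo + 1e-9) * n_levels).astype(int) + 1
    return np.clip(q, 1, n_levels)

ct = np.random.normal(0, 30, size=(40, 128, 128))            # stand-in volume in HU
iso = resample_to_isotropic(ct, spacing_mm=(3.0, 0.98, 0.98))
roi = quantize_gray_levels(iso[10:20, 40:80, 40:80], n_levels=64)
print(iso.shape, roi.min(), roi.max())
```

Texture matrices (GLCM, GLRLM, and so on) would then be computed from the quantized ROI; the normalization factors discussed above correct the residual dependence of those features on voxel size and gray-level count.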
174

The Influence of the Reference Measurement in MRI Image Reconstruction Using Sensitivity Encoding (SENSE)

Öhman, Tuva January 2006 (has links)
The use of MRI for patient examinations has constantly increased as technical development has led to faster image acquisition and higher image quality. Nevertheless, an MR examination still takes a relatively long time, and yet another way of speeding up the process is to employ parallel imaging. In this thesis, one of these parallel imaging techniques, called SENSE, is described and examined more closely. When SENSE is employed, the number of spatial encoding steps can be reduced thanks to the use of several receiving coils. A reduction of the number of phase encoding steps not only leads to faster image acquisition, but also to superimposed pixel values in image space. In order to separate the aliased pixels, knowledge about the spatial sensitivity of the coils is required. There are several alternatives for how and when information about the coil sensitivities should be collected, but the focus of this thesis is on performing a reference measurement before the actual scan. The reference measurement consists of a fast, low-resolution sequence which is collected either with both the body coil and the parallel imaging coil or with the parallel imaging coil only. A comparison of these two methods by simulations in a program written in MATLAB leads to the conclusion that, even though the scan time of the reference measurement is doubled, there are several advantages to also collecting data with the body coil:
• the images are more homogeneous, which facilitates diagnosis
• the noise levels in the reconstructed images are somewhat lower
• images collected with a reduced sampling density show better agreement with those collected without reduction.
Furthermore, it is shown that the reference measurement should preferably be a 3D sequence covering the whole volume of interest. If a 2D sequence is used, it must be possible to perform it in any plane, and it has to be repeated for every plane that is imaged.
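A toy sketch of the SENSE unfolding step described above for a reduction factor R = 2: each pixel of the reduced-FOV coil images is a sensitivity-weighted sum of two object pixels half an FOV apart, and the pair is recovered by a small least-squares solve. The coil sensitivities and phantom are synthetic assumptions, and the noise weighting that a reference measurement would provide is omitted.

```python
import numpy as np

ny, nx, ncoils, R = 128, 128, 4, 2
yy = np.linspace(-1, 1, ny)[:, None] * np.ones((1, nx))
xx = np.ones((ny, 1)) * np.linspace(-1, 1, nx)[None, :]

obj = ((xx ** 2 + yy ** 2) < 0.6).astype(float)               # simple disc phantom
sens = np.stack([np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2))    # smooth coil sensitivities
                 for cy, cx in [(-1, -1), (-1, 1), (1, -1), (1, 1)]])

# Skipping every other phase-encode line folds the object: pixel y of the
# reduced-FOV image is the sensitivity-weighted sum of pixels y and y + ny/R.
folded = np.zeros((ncoils, ny // R, nx))
for k in range(R):
    folded += (sens * obj)[:, k * ny // R:(k + 1) * ny // R, :]

recon = np.zeros((ny, nx))
for y in range(ny // R):
    for x in range(nx):
        S = sens[:, [y, y + ny // R], x]          # ncoils x R sensitivity matrix
        a = folded[:, y, x]                       # ncoils folded measurements
        pix, *_ = np.linalg.lstsq(S, a, rcond=None)
        recon[y, x], recon[y + ny // R, x] = pix
print(np.abs(recon - obj).max())                  # ~0 where the sensitivities are independent
```

In practice the sensitivities come from the reference measurement, and the quality of that measurement (body coil versus array coil only) is exactly what the comparison above is about.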
175

Implementation of a fast method for reconstruction of ISAR images / Implementation av en snabb metod för rekonstruktion av ISAR-bilder

Dahlbäck, Niklas January 2003 (has links)
By analyzing ISAR images, the characteristics of military platforms with respect to radar visibility can be evaluated. The method currently used to calculate the ISAR images, which is based on the Discrete-Time Fourier Transform (DTFT), requires a large computational effort. This thesis investigates the possibility of replacing the DTFT with the Fast Fourier Transform (FFT). Such a replacement is not trivial, since the DTFT can compute a contribution anywhere along the spatial axis while the FFT delivers output data at fixed sample points, which requires subsequent interpolation. The interpolation leads to a difference in the ISAR image compared to the image obtained with the DTFT. On the other hand, the FFT is much faster. In this quality-and-time trade-off, the objective is to minimize the error while keeping high computational efficiency. The FFT approach is evaluated by studying execution time and image error when generating ISAR images for an aircraft model in a controlled environment. The FFT method shows good results: the execution speed is increased significantly without any visible differences in the ISAR images. The speed-up factor depends on several parameters: the image size, the degree of zero-padding when calculating the FFT, and the number of frequencies in the input data.
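A small sketch of the trade-off discussed above: a direct DTFT can be evaluated at arbitrary spatial positions, while a zero-padded FFT gives a fixed grid that must be interpolated, with an error that shrinks as the zero-padding factor grows. The signal, the grid sizes, and the padding factor are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 128
data = rng.standard_normal(N) + 1j * rng.standard_normal(N)   # frequency-domain samples

x = np.sort(rng.uniform(0.0, 1.0, 200))                        # arbitrary positions along the spatial axis

# Direct DTFT: exact contribution at every requested position, O(N) work per point.
dtft = np.exp(2j * np.pi * np.outer(x, np.arange(N))) @ data

# FFT route: zero-pad to a dense fixed grid, then interpolate to the requested positions.
pad = 8
M = pad * N
grid = np.fft.ifft(data, n=M) * M                               # DTFT sampled at k / M
gx = np.arange(M + 1) / M
gv = np.append(grid, grid[0])                                   # wrap around for positions near 1
fft_interp = np.interp(x, gx, gv.real) + 1j * np.interp(x, gx, gv.imag)

rel_err = np.max(np.abs(fft_interp - dtft)) / np.max(np.abs(dtft))
print(f"pad factor {pad}: relative interpolation error {rel_err:.3e}")
```

Increasing `pad` reduces the interpolation error at the cost of a larger FFT, which is the quality-versus-time knob the abstract refers to.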
176

Combining analytical and iterative reconstruction in helical cone-beam CT

Sunnegårdh, Johan January 2007 (has links)
Contemporary algorithms employed for reconstruction of 3D volumes from helical cone-beam projections are so-called non-exact algorithms. This means that the reconstructed volumes contain artifacts irrespective of the detector resolution and the number of projection angles employed in the process. In this thesis, three iterative schemes for suppression of these so-called cone artifacts are investigated. The first scheme, iterative weighted filtered backprojection (IWFBP), is based on iterative application of a non-exact algorithm. For this method, artifact reduction, as well as spatial resolution and noise properties, is measured. During the first five iterations, cone artifacts are clearly reduced. As a side effect, spatial resolution and noise are increased. To avoid this side effect and improve the convergence properties, a regularization procedure is proposed and evaluated. In order to reduce the cost of the IWFBP scheme, a second scheme is created by combining IWFBP with the so-called ordered subsets technique, which we call OSIWFBP. This method divides the projection data set into subsets and operates sequentially on each of these in a certain order, hence the name “ordered subsets”. We investigate two different ordering schemes and numbers of subsets, as well as the possibility to accelerate cone artifact suppression. The main conclusion is that the ordered subsets technique indeed reduces the number of iterations needed, but that it suffers from the drawback of noise amplification. The third scheme starts by dividing input data into high- and low-frequency parts, followed by non-iterative reconstruction of the high-frequency part and IWFBP reconstruction of the low-frequency part. This could allow acceleration by reducing the amount of data in the iterative part. The results show that a suppression of artifacts similar to that of the IWFBP method can be obtained, even if a significant part of the high-frequency data is reconstructed non-iteratively.
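A schematic sketch of the first update rule described above, with the helical cone-beam projector and the non-exact WFBP reconstructor replaced by small stand-in matrices: P plays the projector, B a deliberately non-exact inverse, and each pass reconstructs the projection residual and adds it as a correction. All operators and sizes are illustrative assumptions; the ordered-subsets and frequency-split variants, and the regularization that damps the resolution/noise increase, are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
n_vox, n_rays = 60, 150
P = rng.standard_normal((n_rays, n_vox))                      # stand-in for the cone-beam projector
B = np.linalg.pinv(P + 0.05 * rng.standard_normal(P.shape))   # deliberately non-exact reconstructor
x_true = rng.standard_normal(n_vox)
b = P @ x_true                                                # measured projections

x = B @ b                                                     # one-shot non-exact reconstruction
print("initial error:", np.linalg.norm(x - x_true))

# IWFBP-style refinement: reconstruct the projection residual with the same
# non-exact operator and add it as a correction; the systematic error (the
# "cone artifact" in the thesis) shrinks with every pass.
for _ in range(10):
    x = x + B @ (b - P @ x)
print("refined error:", np.linalg.norm(x - x_true))
```

The ordered-subsets flavour would apply the same correction cyclically to angular subsets of the rows of P, trading fewer passes for the noise amplification noted in the abstract.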
179

Performance Analysis between Two Sparsity Constrained MRI Methods: Highly Constrained Backprojection(HYPR) and Compressed Sensing(CS) for Dynamic Imaging

Arzouni, Nibal August 2010 (has links)
One of the most important challenges in dynamic magnetic resonance imaging (MRI) is to achieve high spatial and temporal resolution when this is limited by system performance. It is desirable to acquire data fast enough to capture the dynamics in the image time series without losing high spatial resolution and signal-to-noise ratio. Many techniques have been introduced in recent decades to achieve this goal. Newly developed algorithms like Highly Constrained Backprojection (HYPR) and Compressed Sensing (CS) reconstruct images from highly undersampled data using constraints. Using these algorithms, it is possible to achieve high temporal resolution in the dynamic image time series together with high spatial resolution and signal-to-noise ratio (SNR). In this thesis, we have compared the performance of the HYPR and CS algorithms. In assessing the reconstructed image quality, we considered computation time, spatial resolution, noise amplification factors, and artifact power (AP), using the same number of views in both algorithms, a number below the Nyquist requirement. In the simulations performed, CS always provides higher spatial resolution than HYPR, but it is limited by computation time in image reconstruction and by SNR when compared to HYPR. HYPR performs better than CS in terms of SNR and computation time when the images are sparse enough. However, HYPR suffers from streaking artifacts when it comes to less sparse image data.
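A compact sketch of the compressed-sensing side of the comparison above: recover a sparse signal from undersampled linear measurements by iterative soft thresholding (ISTA) on the objective ||Ax - b||^2 + lam*||x||_1. The sensing matrix, sparsity level, and lam are illustrative assumptions, and the HYPR composite-image reconstruction is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, k = 400, 120, 10                              # signal length, measurements, nonzeros
x_true = np.zeros(n)
idx = rng.choice(n, k, replace=False)
x_true[idx] = rng.uniform(1.0, 2.0, k) * rng.choice([-1.0, 1.0], k)

A = rng.standard_normal((m, n)) / np.sqrt(m)        # undersampled sensing operator (m < n)
b = A @ x_true                                      # noiseless "k-space"-like measurements

lam = 0.01
L = np.linalg.norm(A, 2) ** 2                       # Lipschitz constant of the data-term gradient
x = np.zeros(n)
for _ in range(1000):                               # ISTA: gradient step + soft threshold
    z = x - A.T @ (A @ x - b) / L
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)

print("max |x - x_true|:", np.max(np.abs(x - x_true)))   # small bias left by the l1 shrinkage
```

The many matrix-vector products per iteration are one reason the abstract finds CS slower than HYPR, which needs only a composite image and a few backprojections per time frame.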
180

Ultrafast Coherent X-ray Diffractive Nanoimaging

R. N. C. Maia, Filipe January 2010 (has links)
X-ray lasers are creating unprecedented research opportunities in physics, chemistry and biology. The peak brightness of these lasers exceeds present synchrotrons by 10^10, the coherence degeneracy parameters exceed synchrotrons by 10^9, and the time resolution is 10^5 times better. In the duration of a single flash, the beam focused to a micron-sized spot has the same power density as all the sunlight hitting the Earth, focused to a millimetre square. Ultrafast coherent X-ray diffractive imaging (CXDI) with X-ray lasers exploits these unique properties of X-ray lasers to obtain high-resolution structures for non-crystalline biological (and other) objects. In such an experiment, the sample is quickly vaporised, but not before sufficient scattered light can be recorded. The continuous diffraction pattern can then be phased and the structure of a more or less undamaged sample recovered (speed of light vs. speed of a shock wave). This thesis presents results from the first ultrafast X-ray diffractive imaging experiments with linear accelerator-driven free-electron lasers and from optically-driven table-top X-ray lasers. It also explores the possibility of investigating phase transitions in crystals by X-ray lasers. An important problem with ultrafast CXDI of small samples such as single protein molecules is that the signal from a single measurement will be small, requiring signal enhancement by averaging over multiple equivalent samples. We present a numerical investigation of the problems, including the case where sample molecules are not exactly identical, and propose tentative solutions. A new software package (Hawk) has been developed for data processing and image reconstruction. Hawk is the first publicly available software package in this area, and it is released as open source software with the aspiration of fostering the development of this field.
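A bare-bones sketch of the phasing step behind coherent diffractive imaging: given only the modulus of the far-field diffraction pattern and a support constraint, alternate between imposing the measured magnitudes in Fourier space and the support and positivity constraints in real space (hybrid input-output, finished with a few error-reduction sweeps). The object, support, and parameters below are synthetic assumptions, and the Hawk package mentioned above implements far more than this.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 64
obj = np.zeros((n, n))
obj[24:40, 20:44] = rng.random((16, 24))          # non-negative object inside a known support
support = np.zeros((n, n), dtype=bool)
support[24:40, 20:44] = True

mag = np.abs(np.fft.fft2(obj))                    # "measured" diffraction amplitudes, noise-free

x = rng.random((n, n)) * support                  # random start inside the support
beta = 0.9
for it in range(600):
    F = np.fft.fft2(x)
    F = mag * np.exp(1j * np.angle(F))            # keep current phases, impose measured magnitudes
    y = np.fft.ifft2(F).real
    bad = ~support | (y < 0)                      # pixels violating support or positivity
    if it < 500:                                  # HIO: push back where constraints are violated
        x = np.where(bad, x - beta * y, y)
    else:                                         # finish with error-reduction sweeps
        x = np.where(bad, 0.0, y)

# Typically small for a tight support and noise-free data; HIO can stagnate on unlucky starts.
print("relative error:", np.linalg.norm(x - obj) / np.linalg.norm(obj))
```

With experimental data the loop additionally has to cope with noise, missing central speckles, and averaging over many single-shot patterns, which is where the signal-enhancement problem discussed above enters.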
