41

Segmentation and reconstruction of medical images

Su, Qi, 蘇琦 January 2008 (has links)
published_or_final_version / Computer Science / Doctoral / Doctor of Philosophy
42

Development & Implementation of Algorithms for Fast Image Reconstruction

Tappenden, Rachael Elizabeth Helen January 2011 (has links)
Signal and image processing is important in a wide range of areas, including medical and astronomical imaging, and speech and acoustic signal processing. There is often a need for the reconstruction of these objects to be very fast, as they have some cost (perhaps a monetary cost, although often it is a time cost) attached to them. This work considers the development of algorithms that allow these signals and images to be reconstructed quickly and without perceptual quality loss. The main problem considered here is that of reducing the amount of time needed for images to be reconstructed, by decreasing the amount of data necessary for a high quality image to be produced. In addressing this problem two basic ideas are considered. The first is a subset selection problem where the aim is to extract a subset of data, of a predetermined size, from a much larger data set. To do this we first need some metric with which to measure how `good' (or how close to `best') a data subset is. Then, using this metric, we seek an algorithm that selects an appropriate data subset from which an accurate image can be reconstructed. Current algorithms use a criterion based upon the trace of a matrix. In this work we derive a simpler criterion based upon the determinant of a matrix. We construct two new algorithms based upon this new criterion and provide numerical results to demonstrate their accuracy and efficiency. A row exchange strategy is also described, which takes a given subset and performs interchanges to improve the quality of the selected subset. The second idea is, given a reduced set of data, how can we quickly reconstruct an accurate signal or image? Compressed sensing provides a mathematical framework that explains that if a signal or image is known to be sparse relative to some basis, then it may be accurately reconstructed from a reduced set of data measurements. The reconstruction process can be posed as a convex optimization problem. We introduce an algorithm that aims to solve the corresponding problem and accurately reconstruct the desired signal or image. The algorithm is based upon the Barzilai-Borwein algorithm and tailored specifically to the compressed sensing framework. Numerical experiments show that the algorithm is competitive with currently used algorithms. Following the success of compressed sensing for sparse signal reconstruction, we consider whether it is possible to reconstruct other signals with certain structures from reduced data sets. Specifically, signals that are a combination of a piecewise constant part and a sparse component are considered. A reconstruction process for signals of this type is detailed and numerical results are presented.
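
The Barzilai-Borwein scheme tailored to compressed sensing that this abstract describes is not reproduced in the record. As a rough illustration of the general idea only, the sketch below uses a Barzilai-Borwein step size inside an iterative soft-thresholding loop for the ℓ1-regularized least-squares problem; the function names, regularization weight, and test data are illustrative and not taken from the thesis.

```python
import numpy as np

def soft_threshold(v, t):
    """Elementwise soft-thresholding, the proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def bb_ista(A, y, lam, iters=200):
    """Iterative shrinkage with a Barzilai-Borwein step size for
    minimising 0.5 * ||A x - y||^2 + lam * ||x||_1 (a sketch, not the thesis code)."""
    x = np.zeros(A.shape[1])
    grad = A.T @ (A @ x - y)
    alpha = 1.0 / np.linalg.norm(A, 2) ** 2          # safe initial step size
    for _ in range(iters):
        x_new = soft_threshold(x - alpha * grad, alpha * lam)
        grad_new = A.T @ (A @ x_new - y)
        s, g = x_new - x, grad_new - grad
        alpha = (s @ s) / (s @ g) if s @ g > 1e-12 else alpha   # BB1 step
        x, grad = x_new, grad_new
    return x

# illustrative use: recover a synthetic sparse vector from random measurements
rng = np.random.default_rng(0)
n, m, k = 256, 80, 8
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true
x_hat = bb_ista(A, y, lam=0.01)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

The Barzilai-Borwein step approximates local curvature from successive iterates and gradients, which is what makes such shrinkage schemes competitive with fixed-step first-order methods.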
43

Image reconstruction for emission optical projection tomography

Darrell, Alexander Louis January 2010 (has links)
Emission Optical Projection Tomography (eOPT) is a relatively new imaging modality that bridges a gap between micro Magnetic Resonance Imaging and Confocal Laser Scanning Microscopy. eOPT can be used to image the anatomy and gene expression of intact biological specimens at high resolution and thus provides an alternative to time-consuming methods such as serial sectioning. Tomographic image reconstruction for eOPT is currently performed using the Filtered Back Projection (FBP) algorithm which, while being fast, does not account for the physics of image formation and thus can result in reconstructions of reduced resolution and questionable quantitative consistency. This thesis describes work on eOPT in three areas: image formation, tomographic reconstruction, and memory savings, the latter of which were required to bring implementation of 3D iterative reconstruction algorithms within reach for the relatively high-resolution eOPT imaging modality. In the area of image formation, measurements were taken to reveal the effects of optical blurring, diffraction and charge-coupled device (CCD) camera noise. Accurate models of each of these phenomena were developed and compared against the measurements. The subject of image reconstruction was first addressed with a modification to the FBP algorithm designed to correct for the quantitative inaccuracies suspected of being introduced by the FBP algorithm when reconstructing specimens consisting of very fine detail. This was done by incorporating the quantitative aspects of the model of image formation into the FBP algorithm. The full model of image formation was then incorporated into the iterative Maximum Likelihood Expectation Maximisation (MLEM) algorithm. The third strand of this thesis focuses on various memory-saving methods developed to enable the implementation and testing of a variation of MLEM known as Ordered Subsets Expectation Maximisation (OSEM). Without such memory-saving methods, the implementation of an iterative 3D reconstruction algorithm such as MLEM or OSEM using a full model of image formation would have remained beyond the capacity of modern computers for the foreseeable future, requiring several terabytes of RAM. Comparisons were made between the quality of, and the time required to produce, FBP and OSEM reconstructions of the same data sets given limited computing resources. The feasibility of adopting OSEM reconstructions as an alternative to FBP reconstructions was discussed, based on the use of currently available cutting-edge computing hardware.
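
The record above contrasts FBP with iterative MLEM/OSEM reconstruction; the thesis's eOPT image-formation model is not reproduced here. Below is a minimal sketch of the generic MLEM multiplicative update, assuming a known non-negative system matrix and Poisson-distributed counts; the toy matrix and sizes are illustrative.

```python
import numpy as np

def mlem(A, y, iters=100, eps=1e-12):
    """Maximum Likelihood Expectation Maximisation for emission tomography.
    A: non-negative system matrix (projections x voxels), y: measured counts."""
    sens = A.sum(axis=0) + eps          # sensitivity image, A^T 1
    x = np.ones(A.shape[1])             # strictly positive starting estimate
    for _ in range(iters):
        proj = A @ x + eps              # forward projection of current estimate
        x *= (A.T @ (y / proj)) / sens  # multiplicative EM update (keeps x >= 0)
    return x

# illustrative use: a toy random system matrix with noiseless counts
rng = np.random.default_rng(1)
A = rng.uniform(size=(120, 64))
x_true = rng.uniform(size=64)
y = A @ x_true
x_hat = mlem(A, y, iters=300)
print("max abs error:", np.max(np.abs(x_hat - x_true)))
```

OSEM accelerates this by cycling the same update over ordered subsets of the projection data, which is where the memory-saving work described in the record becomes relevant.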
44

Three-dimensional reconstruction outside of the laboratory

Bennett, Stuart Charles January 2014 (has links)
No description available.
45

Two approaches to sparsity for image restoration.

January 2013 (has links)
稀疏性在最近的圖像恢復技術發展中起到了重要作用。在這個碩士研究中,我們專注於兩種通過信號稀疏性假設相聯繫的圖像恢復問題。具體來講,在第一個圖像恢復問題中,信號本身在某些變換域是稀疏的,例如小波變換。在本研究的第二部分,信號並非傳統意義上的稀疏,但它可以用很少的幾個參數來表示--亦即信號具有稀疏的表示。我們希望通過講述一個「雙城記」,聯繫起這兩個稀疏圖像重建問題。 / 在第二章中,我們提出了一種創新的算法框架,用於解決信號稀疏假設下的圖像恢復問題。重建圖像的目標函數,由一個數據保真項和ℓ₁正則項組成。然而,我們不是直接估計重建的圖像,而是專注於如何獲得重建的這個過程。我們的策略是將這個重建過程表示成基本閾值函數的線性組合(LET):這些線性係數可以通過最小化目標函數解得。然後,可以更新閾值函數并迭代這個過程(i-LET)。這種線性參數化的主要優點是可以大幅降低問題的規模-每次我們只需解決一個線性係數維度大小的優化問題(通常小於十),而不是整個圖像大小的問題。如果閾值函數滿足一定的條件,迭代LET算法可以保證全局的收斂性。多個測試圖像在不同噪音水平和不同卷積核類型的測試清楚地表明,我們提出的框架在所需運算時間和迭代循環次數方面,通常超越當今最好水平。 / 在第三章中,我們擴展了有限創新率採樣框架至某一種特定二維曲線。我們用掩模函數的解來間接定義這個二維曲線。這裡,掩模函數可以表示為有限數目的正弦信號加權求和。因此,從這個角度講,我們定義的二維曲線具有「有限創新率」(FRI)。由於與定義曲線相關聯的指示器圖像沒有帶寬限制,因而根據經典香農採樣定理,不能在有限數量的採樣基礎上獲得完全重建。然而,我們證明,仍然可以設計一個針對指示器圖像採樣的框架,實現完美重構。此外,對於這一方法的空間域解釋,使我們能夠拓展嚴格的FRI曲線模型用於描述自然圖像的邊緣,可以在各種圖像處理的問題中保持圖像的邊緣。我們用一個潛在的在圖像上採樣中的應用作為示例。 / Sparsity has played an important role in recent developments of various image restoration techniques. In this MPhil study, we focus on two different types of image restoration problems, which are related by the sparsity assumptions. Specifically, in the first image restoration problem, the signal (i.e. the restored image) itself is sparse in some transformation domain, e.g. wavelet. In the second part of this study, the signal is not sparse in the traditional sense, but it can be parametrized with a few parameters and hence has a sparse representation. Our goal is to tell a "tale of two cities" and to show the connections between the two sparse image restoration problems in this thesis. / In Chapter 2, we proposed a novel algorithmic framework to solve image restoration problems under sparsity assumptions. As usual, the reconstructed image is the minimum of an objective functional that consists of a data fidelity term and an ℓ₁ regularization. However, instead of estimating the reconstructed image that minimizes the objective functional directly, we focus on the restoration process that maps the degraded measurements to the reconstruction. Our idea amounts to parameterizing the process as a linear combination of a few elementary thresholding functions (LET) and solving for the linear weighting coefficients by minimizing the objective functional. It is then possible to update the thresholding functions and to iterate this process (i-LET). The key advantage of such a linear parametrization is that the problem size reduces dramatically--each time we only need to solve an optimization problem over the dimension of the linear coefficients (typically less than 10) instead of the whole image dimension. With the elementary thresholding functions satisfying certain constraints, global convergence of the iterated LET algorithm is guaranteed. Experiments on several test images over a wide range of noise levels and different types of convolution kernels clearly indicate that the proposed framework usually outperforms state-of-the-art algorithms in terms of both CPU time and number of iterations. / In Chapter 3, we extended the sampling framework for signals with finite rate of innovation to a specific class of two-dimensional curves, which are defined implicitly as the roots of a mask function. Here the mask function has a parametric representation as a weighted summation of a finite number of sinusoids, and therefore has finite rate of innovation [1]. The associated indicator image of the defined curve is not bandlimited and cannot be perfectly reconstructed based on the classical Shannon sampling theorem.
Yet, we show that it is possible to devise a sampling scheme and have a perfect reconstruction from a finite number of (noiseless) samples of the indicator image with the annihilating filter method (also known as Prony's method). Robust reconstruction algorithms with noisy samples are also developed. Furthermore, the new spatial domain interpretation of the annihilating filter enables us to generalize the exact FRI curve model to characterize edges of a natural image. We can impose the annihilation constraint to preserve edges in various image processing problems. We exemplified the effectiveness of the annihilation constraint with a potential application in image up-sampling. / Detailed summary in vernacular field only. / Detailed summary in vernacular field only. / Detailed summary in vernacular field only. / Pan, Hanjie. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2013. / Includes bibliographical references (leaves 69-74). / Abstracts also in Chinese. / Acknowledgments --- p.iii / Abstract --- p.vii / Contents --- p.xii / List of Figures --- p.xv / List of Tables --- p.xvii / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Sampling Sparse Signals --- p.1 / Chapter 1.2 --- Thesis Organizations and Contributions --- p.3 / Chapter 2 --- An Iterated Linear Expansion of Thresholds for ℓ₁-based Image Restoration --- p.5 / Chapter 2.1 --- Introduction --- p.5 / Chapter 2.1.1 --- Problem Description --- p.5 / Chapter 2.1.2 --- Approaches to Solve the Problem --- p.6 / Chapter 2.1.3 --- Proposed Approach --- p.8 / Chapter 2.1.4 --- Organization of the Chapter --- p.9 / Chapter 2.2 --- Basic Ingredients --- p.9 / Chapter 2.2.1 --- Iterative Reweighted Least Square Methods --- p.9 / Chapter 2.2.2 --- Linear Expansion of Thresholds (LET) --- p.11 / Chapter 2.3 --- Iterative LET Restoration --- p.15 / Chapter 2.3.1 --- Selection of i-LET Bases --- p.15 / Chapter 2.3.2 --- Convergence of the i-LET Scheme --- p.16 / Chapter 2.3.3 --- Examples of i-LET Bases --- p.18 / Chapter 2.4 --- Experimental Results --- p.23 / Chapter 2.4.1 --- Deconvolution with Decimated Wavelet Transform --- p.24 / Chapter 2.4.2 --- Deconvolution with Redundant Wavelet Transform --- p.28 / Chapter 2.4.3 --- Algorithm Complexity Analysis --- p.29 / Chapter 2.4.4 --- Choice of Regularization Weight λ --- p.30 / Chapter 2.4.5 --- Deconvolution with Cycle Spinnings --- p.30 / Chapter 2.5 --- Summary --- p.31 / Chapter 3 --- Sampling Curves with Finite Rate of Innovation --- p.33 / Chapter 3.1 --- Introduction --- p.33 / Chapter 3.2 --- Two-dimensional Curves with Finite Rate of Innovation --- p.34 / Chapter 3.2.1 --- FRI Curves --- p.34 / Chapter 3.2.2 --- Interior Indicator Image --- p.35 / Chapter 3.2.3 --- Acquisition of Indicator Image Samples --- p.36 / Chapter 3.3 --- Reconstruction of the Annihilable Curves --- p.37 / Chapter 3.3.1 --- Annihilating Filter Method --- p.37 / Chapter 3.3.2 --- Relate Fourier Transform with Spatial Domain Samples --- p.39 / Chapter 3.3.3 --- Reconstruction of Annihilation Coefficients --- p.39 / Chapter 3.3.4 --- Reconstruction with Model Mismatch --- p.42 / Chapter 3.3.5 --- Retrieval of the Annihilable Curve Amplitudes --- p.46 / Chapter 3.4 --- Dealing with Non-ideal Low-pass Filtered Samples --- p.48 / Chapter 3.5 --- Generalization of the FRI Framework for Natural Images --- p.49 / Chapter 3.5.1 --- Spatial Domain Interpretation of the Annihilation Equation --- p.50 / Chapter 3.5.2 --- Annihilable Curve Approximation of Image Edges --- p.51 / Chapter 3.5.3 --- Up-sampling with
Annihilation Constraint --- p.53 / Chapter 3.6 --- Conclusion --- p.57 / Chapter 4 --- Conclusions --- p.59 / Chapter 4.1 --- Thesis Summary --- p.59 / Chapter 4.2 --- Perspectives --- p.60 / Chapter A --- Proofs and Derivations --- p.61 / Chapter A.1 --- Proof of Lemma 3 --- p.61 / Chapter A.2 --- Proof of Theorem 2 --- p.62 / Chapter A.3 --- Efficient Implementation of IRLS Inner Loop with Matlab --- p.63 / Chapter A.4 --- Derivations of the Sampling Formula (3.7) --- p.64 / Chapter A.5 --- Correspondence between the Spatial and Fourier Domain Samples --- p.65 / Chapter A.6 --- Optimal Post-filter Applied to Non-ideal Samples --- p.66 / Bibliography --- p.69
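
Chapter 3 of the record above relies on the annihilating filter (Prony's) method. The thesis's two-dimensional curve formulation and its i-LET solver are not reproduced here; the sketch below only illustrates the one-dimensional annihilating-filter step on a toy FRI signal, with all sizes and values chosen purely for illustration.

```python
import numpy as np

def annihilating_filter(samples, K):
    """Recover the length-(K+1) annihilating filter h of a K-term FRI sequence
    samples[n] = sum_k a_k u_k^n, using at least 2K+1 consecutive samples."""
    # Convolution (Toeplitz) system: sum_k h[k] * samples[m - k] = 0 for m >= K
    T = np.array([[samples[m - k] for k in range(K + 1)]
                  for m in range(K, len(samples))])
    _, _, Vh = np.linalg.svd(T)
    return Vh[-1].conj()            # null-space vector gives the filter coefficients

# illustrative use: two "innovations" at locations 0.13 and 0.37
K = 2
locs = np.array([0.13, 0.37])
u = np.exp(-2j * np.pi * locs)
a = np.array([1.0, 0.6])
n = np.arange(2 * K + 1)
samples = (a[None, :] * u[None, :] ** n[:, None]).sum(axis=1)
h = annihilating_filter(samples, K)
print(np.sort(np.angle(np.roots(h)) / (-2 * np.pi)))   # recovers the locations
```

The roots of the recovered filter encode the innovation locations; amplitudes then follow from a small linear least-squares solve, mirroring the two-step structure described in the abstract.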
46

Reconstruction of high-resolution image from movie frames.

January 2003 (has links)
by Ling Kai Tung. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2003. / Includes bibliographical references (leaves 44-45). / Abstracts in English and Chinese. / Chapter 1 --- Introduction --- p.7 / Chapter 2 --- Fundamentals --- p.9 / Chapter 2.1 --- Digital image representation --- p.9 / Chapter 2.2 --- Motion Blur --- p.13 / Chapter 3 --- Methods for Solving Nonlinear Least-Squares Problem --- p.15 / Chapter 3.1 --- Introduction --- p.15 / Chapter 3.2 --- Nonlinear Least-Squares Problem --- p.15 / Chapter 3.3 --- Gauss-Newton-Type Methods --- p.16 / Chapter 3.3.1 --- Gauss-Newton Method --- p.16 / Chapter 3.3.2 --- Damped Gauss-Newton Method --- p.17 / Chapter 3.4 --- Full Newton-Type Methods --- p.17 / Chapter 3.4.1 --- Quasi-Newton methods --- p.18 / Chapter 3.5 --- Constrained problems --- p.19 / Chapter 4 --- Reconstruction of High-Resolution Images from Movie Frames --- p.20 / Chapter 4.1 --- Introduction --- p.20 / Chapter 4.2 --- The Mathematical Model --- p.22 / Chapter 4.2.1 --- The Discrete Model --- p.23 / Chapter 4.2.2 --- Regularization --- p.24 / Chapter 4.3 --- Acquisition of Low-Resolution Movie Frames --- p.25 / Chapter 4.4 --- Experimental Results --- p.25 / Chapter 4.5 --- Concluding Remarks --- p.26 / Chapter 5 --- Constrained Total Least-Squares Computations for High-Resolution Image Reconstruction --- p.31 / Chapter 5.1 --- Introduction --- p.31 / Chapter 5.2 --- The Mathematical Model --- p.32 / Chapter 5.3 --- Numerical Algorithm --- p.37 / Chapter 5.4 --- Numerical Results --- p.39 / Chapter 5.5 --- Concluding Remarks --- p.39 / Bibliography --- p.44
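
The table of contents above centres on Gauss-Newton-type methods for the nonlinear least-squares problems arising in high-resolution reconstruction from movie frames. The thesis's imaging model is not reproduced here; the following is a small sketch of a damped Gauss-Newton iteration on a toy curve-fitting problem, with all names and data being illustrative assumptions.

```python
import numpy as np

def damped_gauss_newton(r, J, x0, iters=50, tol=1e-10):
    """Damped Gauss-Newton for min 0.5 * ||r(x)||^2.
    r: residual function, J: its Jacobian, x0: starting point."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        rx, Jx = r(x), J(x)
        # Gauss-Newton direction: solve (J^T J) p = -J^T r
        p = np.linalg.solve(Jx.T @ Jx, -Jx.T @ rx)
        # backtracking line search supplies the "damping"
        t, f0 = 1.0, 0.5 * rx @ rx
        while 0.5 * np.sum(r(x + t * p) ** 2) > f0 and t > 1e-8:
            t *= 0.5
        x = x + t * p
        if np.linalg.norm(t * p) < tol:
            break
    return x

# illustrative use: fit y = exp(-a*t) + b with unknown (a, b)
t = np.linspace(0, 4, 40)
y = np.exp(-1.3 * t) + 0.2
r = lambda x: np.exp(-x[0] * t) + x[1] - y
J = lambda x: np.column_stack([-t * np.exp(-x[0] * t), np.ones_like(t)])
print(damped_gauss_newton(r, J, x0=[0.5, 0.0]))
```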
47

Computational methods for bioinformatics and image restoration

Liao, Haiyong 01 January 2010 (has links)
No description available.
48

Numerical methods for classification and image restoration

Fong, Wai Lam 01 January 2013 (has links)
No description available.
49

Efficient photometric stereo on glossy surfaces with wide specular lobes.

January 2008 (has links)
Chung, Hin Shun. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2008. / Includes bibliographical references (leaves 40-43). / Abstracts in English and Chinese. / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Lambertian photometric stereo --- p.1 / Chapter 1.2 --- Non-Lambertian photometric stereo --- p.3 / Chapter 1.3 --- Large specular lobe problems --- p.4 / Chapter 2 --- Related Work --- p.9 / Chapter 2.1 --- Lambertian photometric stereo --- p.9 / Chapter 2.2 --- Non-Lambertian photometric stereo --- p.9 / Chapter 2.2.1 --- Analytic models to reconstruct non-Lambertian surface --- p.9 / Chapter 2.2.2 --- Reference object based --- p.10 / Chapter 2.2.3 --- Highlight removal before shape reconstruction --- p.11 / Chapter 2.2.4 --- Polarization based method --- p.12 / Chapter 2.2.5 --- Specularity fitting method --- p.12 / Chapter 2.2.6 --- Photometric stereo with shadow --- p.12 / Chapter 3 --- Our System --- p.13 / Chapter 3.1 --- Estimation of global parameters --- p.14 / Chapter 3.1.1 --- Shadow separation --- p.16 / Chapter 3.1.2 --- Separation edges of shadow and edges of foreground object --- p.16 / Chapter 3.1.3 --- Normal estimation using shadow boundary --- p.20 / Chapter 3.1.4 --- Global parameter estimation and refinement --- p.22 / Chapter 3.2 --- Surface shape and texture reconstruction --- p.24 / Chapter 3.3 --- Single material results --- p.25 / Chapter 4 --- Comparison between Our Method and Direct Specularity Fitting Method --- p.29 / Chapter 4.1 --- Summary of direct specularity fitting method [9] --- p.29 / Chapter 4.2 --- Comparison results --- p.31 / Chapter 5 --- Reconstructing Multiple-Material Surfaces --- p.33 / Chapter 5.1 --- Multiple material results --- p.34 / Chapter 6 --- Conclusion --- p.38 / Bibliography --- p.39 / Chapter A --- Proof of Surface Normal Projecting to Gradient of Cast Shadow Boundary --- p.43
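
The thesis above extends photometric stereo to glossy surfaces with wide specular lobes; that extension is not reproduced here. As background for the Lambertian case it starts from (Chapter 1.1), below is a minimal sketch of classical per-pixel Lambertian photometric stereo under known distant lights; the toy light directions and albedo are illustrative.

```python
import numpy as np

def lambertian_photometric_stereo(I, L):
    """Per-pixel Lambertian photometric stereo.
    I: (num_lights, num_pixels) intensities, L: (num_lights, 3) unit light directions.
    Returns unit normals (num_pixels, 3) and albedo (num_pixels,)."""
    # Least-squares solve of I = L @ (albedo * n) for every pixel at once
    G, *_ = np.linalg.lstsq(L, I, rcond=None)        # (3, num_pixels)
    albedo = np.linalg.norm(G, axis=0)
    normals = (G / np.maximum(albedo, 1e-12)).T
    return normals, albedo

# illustrative use: one pixel with a known normal, rendered under three lights
L = np.array([[0.0, 0.0, 1.0], [0.6, 0.0, 0.8], [0.0, 0.6, 0.8]])
n_true = np.array([0.2, -0.1, 0.97])
n_true /= np.linalg.norm(n_true)
I = 0.7 * (L @ n_true)[:, None]          # albedo 0.7, no shadows or specularities
normals, albedo = lambertian_photometric_stereo(I, L)
print(normals[0], albedo[0])
```

Glossy surfaces violate the Lambertian assumption baked into this least-squares model, which is precisely why the thesis estimates specular parameters and uses shadow boundaries before recovering shape.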
50

Novel MR image recovery using patch-smoothness iterative shrinkage algorithm

Mohsin, Yasir Qasim 01 December 2018 (has links)
Obtaining high spatial or spatiotemporal resolution along with good slice coverage is challenging in dynamic magnetic resonance imaging (MRI) due to the slow nature of the acquisition process. In recent years, there has been rapid growth in MRI techniques that allow faster scanning by exploiting the spatial or spatiotemporal redundancy of the images. These techniques can significantly improve imaging performance across multiple clinical applications, including cardiac functional examinations, perfusion imaging, blood flow assessment, contrast-enhanced angiography, functional MRI, and interventional imaging, among others. The ultimate goal of this thesis is to develop novel algorithms for reconstructing heavily undersampled sparse imaging data. The designed schemes aim to achieve shorter scan durations, higher spatial and temporal resolution, and increased signal-to-noise ratio and coverage in multidimensional, multichannel MRI. In addition to improving patient comfort and compliance during imaging, the newly developed schemes allow patients with arrhythmias, as well as pediatric and obese subjects, to breathe freely without the need for breath-hold scans. Shortening examination periods also reduces patient stress, shortens the overall clinic visit, and decreases the associated economic costs. Rapid acquisitions also allow efficient extraction of the quantitative information needed for diagnosis, e.g. tumor characterization and detection of vessel blockages through myocardial perfusion MRI. Current applications of interest include real-time CINE MRI and contrast-changing perfusion MRI.
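
The patch-smoothness regularizer named in the title is not reproduced here. As a hedged illustration of the general recipe the abstract describes (retrospectively undersampled k-space, alternating a data-consistency gradient step with a shrinkage step), the sketch below uses a plain image-domain sparsity prior as a stand-in; the sampling rate, threshold, and phantom are illustrative assumptions.

```python
import numpy as np

def ista_mri(y, mask, lam=0.02, iters=100):
    """Iterative shrinkage for undersampled Fourier (k-space) data.
    y: masked k-space measurements, mask: boolean sampling pattern.
    A plain image-domain sparsity prior stands in for the patch regularizer."""
    x = np.zeros(mask.shape, dtype=complex)
    for _ in range(iters):
        # gradient step on 0.5 * ||mask * F(x) - y||^2 (F unitary, so step size 1)
        resid = mask * np.fft.fft2(x, norm="ortho") - y
        x = x - np.fft.ifft2(resid, norm="ortho")
        # shrinkage step: soft-threshold magnitudes, keep the phase
        mag = np.abs(x)
        x = np.exp(1j * np.angle(x)) * np.maximum(mag - lam, 0.0)
    return x

# illustrative use: a sparse phantom sampled at roughly 30% of k-space
rng = np.random.default_rng(2)
img = np.zeros((64, 64))
img[rng.integers(0, 64, 40), rng.integers(0, 64, 40)] = 1.0
mask = rng.uniform(size=img.shape) < 0.3
y = mask * np.fft.fft2(img, norm="ortho")
rec = ista_mri(y, mask)
print("relative error:", np.linalg.norm(np.abs(rec) - img) / np.linalg.norm(img))
```

In practice the shrinkage step would act on patches or a sparsifying transform rather than directly on pixel magnitudes, and the sampling pattern would respect hardware and physiology constraints.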
