
Image denoising and deblurring under impulse noise, and framelet-based methods for image reconstruction. / CUHK electronic theses & dissertations collection

January 2007
In this thesis, we study two aspects of image processing. Part I concerns image denoising and deblurring under impulse noise; Part II concerns framelet-based methods for image reconstruction.

In Part I, we study the problems of image denoising and deblurring under impulse noise. We consider two-phase methods for solving these problems. In the first phase, efficient detectors are applied to detect the outliers. In the second phase, variational methods utilizing the outputs of the first phase are performed. For denoising, we prove that the functionals to be minimized in the second phase have many good properties, such as the maximum principle and Lipschitz continuity. Based on these results, we propose conjugate gradient and quasi-Newton methods to minimize the functionals efficiently. For deblurring, we propose a two-phase method combining median-type filters with a variational method that uses a Mumford-Shah regularization term. Experiments show that the two-phase methods give much better results than either median-type filters or full variational methods alone.

Part II of the thesis focuses on framelet-based methods for image reconstruction, in particular for chopped-and-nodded image reconstruction and image inpainting. By interpreting both problems as the recovery of missing data, framelets, a generalization of wavelets, are applied to solve them. We incorporate sophisticated thresholding schemes into the algorithm, so that the regularity of the restored images can be guaranteed. Using the theory of convex analysis, we prove the convergence of the framelet-based methods. We find that the limits of the framelet-based methods satisfy certain minimization properties, which establishes connections with variational methods.

Cai, Jianfeng. / "March 2007." / Adviser: Raymond H. Chan. / Source: Dissertation Abstracts International, Volume: 69-01, Section: B, page: 0350. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2007. / Includes bibliographical references (p. 119-129). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstracts in English and Chinese. / School code: 1307.
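The two-phase idea in the abstract above can be sketched in a few lines. The crude detector below flags pixels that deviate strongly from their local median, then restores only the flagged pixels; the median replacement in the second phase stands in for the variational minimization of the thesis, and the function name, window size, and threshold are illustrative choices, not the author's code:

```python
import numpy as np

def two_phase_impulse_denoise(img, win=3, tol=30):
    """Sketch of a two-phase impulse-noise remover.
    Phase 1: flag pixels that deviate strongly from the local median
    (a crude stand-in for an adaptive median detector).
    Phase 2: restore only the flagged pixels; here each is replaced by
    its local median, standing in for the variational second phase."""
    H, W = img.shape
    pad = win // 2
    padded = np.pad(img, pad, mode='edge')
    med = np.empty_like(img)
    for r in range(H):
        for c in range(W):
            med[r, c] = np.median(padded[r:r + win, c:c + win])
    noisy_mask = np.abs(img.astype(int) - med.astype(int)) > tol
    out = img.copy()
    out[noisy_mask] = med[noisy_mask]
    return out, noisy_mask
```

Because phase 2 touches only the detected outliers, uncorrupted pixels pass through unchanged — the key advantage the abstract claims over plain median filtering.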

Image-based motion estimation and deblurring. / CUHK electronic theses & dissertations collection

January 2010
This thesis gives a complete treatment of motion estimation and deblurring and presents new methods for both. In the context of motion estimation, we study the problem of estimating 2D apparent motion from two or more input images, referred to as optical flow estimation. We discuss several fundamental problems in existing optical flow estimation frameworks, including 1) estimating flow vectors for textureless and occluded regions, which has been regarded as infeasible or highly ambiguous, and 2) the inability of the commonly employed coarse-to-fine multi-scale scheme to preserve motion structures in several challenging circumstances.

We propose novel methods to solve these problems. First, we introduce a segmentation-based variational model to regularize flow estimates for textureless and occluded regions. Parametric and non-parametric optical flow models are combined, using a confidence map to measure the rigidity of the moving regions. The resulting flow field is of high quality even at motion discontinuities and in textureless regions, and is very useful for applications such as video editing.

To address the problem of multi-scale estimation, we extend the coarse-to-fine scheme by complementing the initialization at each scale with sparse feature matching, based on the observation that fine motion structures, especially those with significant and abrupt displacement transitions, cannot always be correctly reconstructed from an incorrect initialization. We also adapt the objective function and develop a new optimization procedure, which together constitute a unified system for both large- and small-displacement optical flow estimation. The effectiveness of our method is borne out by extensive experiments on small-displacement benchmark datasets as well as challenging large-displacement optical flow data.

To further increase sub-pixel accuracy, we study how resolution changes affect the flow estimates. We show that simple upsampling can effectively reduce errors in sub-pixel correspondence. In addition, we identify the regularization bias problem and explore its relationship to image resolution. We propose a general fine-to-coarse framework to compute sub-pixel color matching for different computer vision problems. Various experiments were performed on motion estimation and stereo matching data; we are able to reduce errors by up to 30%, which would be very difficult to achieve with conventional optimization methods alone.

Lastly, in the context of motion deblurring, we discuss a few new problems that are significant for blur kernel estimation and non-blind deconvolution. We found that strong edges do not always benefit kernel estimation, but can under certain circumstances degrade it. This finding leads to a new metric measuring the usefulness of image edges in motion deblurring, and a gradient selection process to mitigate their possible adverse effect. It makes it possible to solve for very large blur PSFs on which existing blind deblurring methods easily fail. We also propose an efficient, high-quality kernel estimation method based on a spatial prior and iterative support detection (ISD) kernel refinement, which avoids hard thresholding of the kernel elements to enforce sparsity. We employ the TV-ℓ1 deconvolution model, solved with a new variable substitution scheme, to robustly suppress noise.

Xu, Li. / Adviser: Jiaya Jia. / Source: Dissertation Abstracts International, Volume: 73-03, Section: B, page: . / Thesis (Ph.D.)--Chinese University of Hong Kong, 2010. / Includes bibliographical references (leaves 126-137). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [201-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstract also in Chinese.
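The brightness-constancy reasoning underlying optical flow estimation can be illustrated with a minimal single-patch Lucas-Kanade solver — a classical baseline, not the coarse-to-fine system of the thesis; the function name and central-difference gradients are our own choices:

```python
import numpy as np

def lucas_kanade_patch(I1, I2):
    """Estimate one translational flow vector (u, v) for a whole patch by
    solving the least-squares system from the linearised brightness
    constancy: I_x*u + I_y*v + I_t = 0."""
    Ix = (np.roll(I1, -1, axis=1) - np.roll(I1, 1, axis=1)) / 2.0
    Iy = (np.roll(I1, -1, axis=0) - np.roll(I1, 1, axis=0)) / 2.0
    It = I2 - I1
    # Trim a one-pixel border where the central differences wrap around.
    Ix, Iy, It = (a[1:-1, 1:-1].ravel() for a in (Ix, Iy, It))
    A = np.stack([Ix, Iy], axis=1)          # one row per pixel
    uv, *_ = np.linalg.lstsq(A, -It, rcond=None)
    return uv
```

The linearisation only holds for small displacements, which is exactly why coarse-to-fine pyramids — and the sparse-feature initialization the thesis adds to them — are needed for large motion.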

Image segmentation by integrating multiple channels of features. / CUHK electronic theses & dissertations collection

January 2007
Image segmentation concerns how an image can be divided into segments so that each segment corresponds to a surface or an object exhibiting a high degree of uniformity in its visual appearance. An automatic image segmentation method must somehow exploit this uniformity property, which manifests in a number of ways, forming different channels of features. Since a segment's appearance is uniform within itself, the intensities or textures of the segment in the image must look rather similar; together they form what the literature calls region-level features. Since a segment's appearance must differ from those of its immediate neighbors (or they would not be separate segments), the boundary of a segment should exhibit a high degree of intensity contrast in the image data; such contrast gives rise to edgel features, often referred to as boundary features. Other channels of features include discrete labels assigned to image pixels according to their intensity levels, and prior knowledge of the shape of the object in the image.

Much previous work on image segmentation is based on features of a single channel, such as edgel features or region features. The objective of this thesis is to explore how features of multiple channels can be integrated, specifically in an active contour deformation process under the level set formulation, for more accurate image segmentation. Three combinations of features are investigated: boundary features with region features; boundary features with pixel labels from coarse intensity clustering; and boundary features with prior knowledge of object shape.

In the first piece of work, an approach to gray level image segmentation is investigated that exploits the complementarity of boundary and region features. Line segments, derived by grouping edge elements, are used to construct a saliency map indicating the likely locations of true boundaries. The closed boundaries extracted from the saliency map are then refined by a region-based active contour method. The scheme addresses both of the challenging issues of boundary closure and segmentation accuracy.

In the second piece of work, an approach to foreground-background segmentation is explored that integrates boundary features with labels assigned to image pixels according to their intensity levels. The labels follow a coarse clustering of the image's intensity histogram. An inhomogeneity measure is encoded in a variational formulation, so the measure can be applied globally over the entire image domain. The approach treats gray level, color and texture images uniformly, and allows explicit encouragement of boundary smoothness through a level-set-based active contour method.

In the third piece of work, an approach to foreground-background segmentation is investigated that makes use of both boundary features and prior knowledge of object shape; it can also be categorized as an object detection method. We adopt a new multiplicative formulation to combine the edgel information with the prior shape knowledge, which reduces the number of system parameters and increases the algorithm's robustness.

The proposed algorithms have been tested on many real and synthetic images. The experimental results illustrate their efficacy and limitations.

Wang, Wei. / "September 2007." / Adviser: Ronald Chi-kit Chung. / Source: Dissertation Abstracts International, Volume: 69-08, Section: B, page: 4865. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2007. / Includes bibliographical references (p. 39-106). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstracts in English and Chinese. / School code: 1307.
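The region-uniformity channel described in this abstract can be sketched as a two-region clustering of intensities — essentially the data term of a Chan-Vese-style model, without the boundary-smoothness term the thesis adds via level sets; names and iteration count are illustrative:

```python
import numpy as np

def two_region_segment(img, iters=20):
    """Minimal region-based two-phase segmentation sketch: alternately
    assign each pixel to the closer of two region means and re-estimate
    the means. Assumes both regions stay nonempty (true when the image
    has two intensity modes and c1, c2 start at the extremes)."""
    c1, c2 = img.min(), img.max()
    for _ in range(iters):
        mask = np.abs(img - c1) <= np.abs(img - c2)
        c1, c2 = img[mask].mean(), img[~mask].mean()
    return mask  # True = region whose mean is c1
```

A full active contour adds a length penalty on the region boundary, which is what lets the thesis's formulation favor smooth segment outlines.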

Image coding based on wavelet transform. / CUHK electronic theses & dissertations collection

January 1998
Jianhua Chen. / Thesis (Ph.D.)--Chinese University of Hong Kong, 1998. / Includes bibliographical references (p. 127-[134]). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Mode of access: World Wide Web. / Abstracts in English and Chinese.
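As a minimal illustration of transform coding of the kind this thesis's title describes, the following implements one level of the 2-D Haar transform, the simplest wavelet; thresholding or quantizing the resulting coefficients is what yields compression. This is a generic sketch, not the coding scheme of the thesis:

```python
import numpy as np

def haar2d(x):
    """One level of the 2-D Haar wavelet transform: pairwise averages
    (low-pass) and differences (high-pass) along columns, then rows."""
    def step(a):
        return np.concatenate([(a[..., ::2] + a[..., 1::2]) / 2,
                               (a[..., ::2] - a[..., 1::2]) / 2], axis=-1)
    return step(step(x).swapaxes(0, 1)).swapaxes(0, 1)

def ihaar2d(c):
    """Exact inverse: recover each pair from its average and difference."""
    def istep(a):
        n = a.shape[-1] // 2
        out = np.empty_like(a)
        out[..., ::2] = a[..., :n] + a[..., n:]
        out[..., 1::2] = a[..., :n] - a[..., n:]
        return out
    return istep(istep(c).swapaxes(0, 1)).swapaxes(0, 1)
```

Smooth image regions concentrate energy in the low-pass quadrant, so most high-pass coefficients can be zeroed with little visible loss — the sparsity that wavelet coders exploit.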

A computer stereo vision system: using horizontal intensity line segments bounded by edges.

January 1996
by Chor-Tung Yau. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1996. / Includes bibliographical references (leaves 106-110). / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Objectives --- p.1 / Chapter 1.2 --- Factors of Depth Perception in Human Visual System --- p.2 / Chapter 1.2.1 --- Oculomotor Cues --- p.2 / Chapter 1.2.2 --- Pictorial Cues --- p.3 / Chapter 1.2.3 --- Movement-Produced Cues --- p.4 / Chapter 1.2.4 --- Binocular Disparity --- p.5 / Chapter 1.3 --- What Cues to Use in Computer Vision? --- p.6 / Chapter 1.4 --- The Process of Stereo Vision --- p.8 / Chapter 1.4.1 --- Depth and Disparity --- p.8 / Chapter 1.4.2 --- The Stereo Correspondence Problem --- p.10 / Chapter 1.4.3 --- Parallel and Nonparallel Axis Stereo Geometry --- p.11 / Chapter 1.4.4 --- Feature-based and Area-based Stereo Matching --- p.12 / Chapter 1.4.5 --- Constraints --- p.13 / Chapter 1.5 --- Organization of this thesis --- p.16 / Chapter 2 --- Related Work --- p.18 / Chapter 2.1 --- Marr and Poggio's Computational Theory --- p.18 / Chapter 2.2 --- Cooperative Methods --- p.19 / Chapter 2.3 --- Dynamic Programming --- p.21 / Chapter 2.4 --- Feature-based Methods --- p.24 / Chapter 2.5 --- Area-based Methods --- p.26 / Chapter 3 --- Overview of the Method --- p.30 / Chapter 3.1 --- Considerations --- p.31 / Chapter 3.2 --- Brief Description of the Method --- p.33 / Chapter 4 --- Preprocessing of Images --- p.35 / Chapter 4.1 --- Edge Detection --- p.35 / Chapter 4.1.1 --- The Laplacian of Gaussian (∇²G) operator --- p.37 / Chapter 4.1.2 --- The Canny edge detector --- p.40 / Chapter 4.2 --- Extraction of Horizontal Line Segments for Matching --- p.42 / Chapter 5 --- The Matching Process --- p.45 / Chapter 5.1 --- Reducing the Search Space --- p.45 / Chapter 5.2 --- Similarity Measure --- p.47 / Chapter 5.3 --- Treating Inclined Surfaces --- p.49 / Chapter 5.4 --- Ambiguity Caused By Occlusion --- p.51 / Chapter 5.5 --- Matching Segments of Different Length --- p.53 / Chapter 5.5.1 --- Cases Without Partial Occlusion --- p.53 / Chapter 5.5.2 --- Cases With Partial Occlusion --- p.55 / Chapter 5.5.3 --- Matching Scheme To Handle All the Cases --- p.56 / Chapter 5.5.4 --- Matching Scheme for Segments of same length --- p.57 / Chapter 5.6 --- Assigning Disparity Values --- p.58 / Chapter 5.7 --- Another Case of Partial Occlusion Not Handled --- p.60 / Chapter 5.8 --- Matching in Two passes --- p.61 / Chapter 5.8.1 --- Problems encountered in the First pass --- p.61 / Chapter 5.8.2 --- Second pass of matching --- p.63 / Chapter 5.9 --- Refinement of Disparity Map --- p.64 / Chapter 6 --- Coarse-to-fine Matching --- p.67 / Chapter 6.1 --- The Wavelet Representation --- p.67 / Chapter 6.2 --- Coarse-to-fine Matching --- p.71 / Chapter 7 --- Experimental Results and Analysis --- p.74 / Chapter 7.1 --- Experimental Results --- p.74 / Chapter 7.1.1 --- Image Pair 1 - The Pentagon Images --- p.74 / Chapter 7.1.2 --- Image Pair 2 - Random dot stereograms --- p.79 / Chapter 7.1.3 --- Image Pair 3 - The Rubik Block Images --- p.81 / Chapter 7.1.4 --- Image Pair 4 - The Stack of Books Images --- p.85 / Chapter 7.1.5 --- Image Pair 5 - The Staple Box Images --- p.87 / Chapter 7.1.6 --- Image Pair 6 - Circuit Board Image --- p.91 / Chapter 8 --- Conclusion --- p.94 / Chapter A --- The Wavelet Transform --- p.96 / Chapter A.1 --- Fourier Transform and Wavelet Transform --- p.96 / Chapter A.2 --- Continuous wavelet Transform --- p.97 / Chapter A.3 --- Discrete Time Wavelet Transform --- p.99 / Chapter B --- Acknowledgements to Testing Images --- p.100 / Chapter B.1 --- The Circuit Board Image --- p.100 / Chapter B.2 --- The Stack of Books Image --- p.101 / Chapter B.3 --- The Rubik Block Images --- p.104 / Bibliography --- p.106
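The area-based matching surveyed in Chapter 2 of this thesis can be sketched as brute-force SSD matching along a scanline; this stands in for, but is much cruder than, the edge-bounded line-segment matching the thesis develops. Window size and search range below are arbitrary illustrative values:

```python
import numpy as np

def scanline_disparity(left, right, patch=3, max_d=8):
    """Crude area-based stereo matcher: for each pixel, slide a 1-D
    horizontal window along the same scanline of the right image and
    pick the disparity with the lowest sum of squared differences."""
    H, W = left.shape
    half = patch // 2
    disp = np.zeros((H, W), dtype=int)
    for r in range(H):
        for c in range(half + max_d, W - half):
            win = left[r, c - half:c + half + 1]
            costs = [np.sum((win - right[r, c - d - half:c - d + half + 1]) ** 2)
                     for d in range(max_d + 1)]
            disp[r, c] = int(np.argmin(costs))
    return disp
```

Restricting the search to the same row is the epipolar (parallel-axis) constraint from Chapter 1.4.3; feature-based methods like the thesis's replace the dense window scan with matches between extracted segments.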

Use of a single reference image in visual processing of polyhedral objects.

January 2003
He Yong. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2003. / Includes bibliographical references (leaves 69-72). / Abstracts in English and Chinese. / ABSTRACT --- p.i / ACKNOWLEDGEMENTS --- p.v / TABLE OF CONTENTS --- p.vi / LIST OF FIGURES --- p.viii / LIST OF TABLES --- p.x / Chapter 1 --- INTRODUCTION --- p.1 / Chapter 2 --- PRELIMINARY --- p.6 / Chapter 3 --- IMAGE MOSAICING FOR SINGLY VISIBLE SURFACES --- p.9 / Chapter 3.1 --- Background --- p.9 / Chapter 3.2 --- Correspondence Inference Mechanism --- p.13 / Chapter 3.3 --- Seamless Lining up of Surface Boundary --- p.17 / Chapter 3.4 --- Experimental Result --- p.21 / Chapter 3.5 --- Summary of Image Mosaicing Work --- p.32 / Chapter 4 --- MOBILE ROBOT SELF-LOCALIZATION FROM MONOCULAR VISION --- p.33 / Chapter 4.1 --- Background --- p.33 / Chapter 4.2 --- Problem Definition --- p.37 / Chapter 4.3 --- Our Strategy of Localizing the Mobile Robot --- p.38 / Chapter 4.3.1 --- Establishing Correspondences --- p.40 / Chapter 4.3.2 --- Determining Position from Factorizing E-matrix --- p.49 / Chapter 4.3.3 --- Improvement on the Factorization Result --- p.55 / Chapter 4.4 --- Experimental Result --- p.56 / Chapter 4.5 --- Summary of Mobile Robot Self-localization Work --- p.62 / Chapter 5 --- CONCLUSION AND FUTURE WORK --- p.63 / APPENDIX --- p.67 / BIBLIOGRAPHY --- p.69

Video motion estimation and noise reduction.

January 2012
With the popularity of digital cameras, mobile phone cameras and surveillance systems, numerous video clips are created every day. Motion estimation is one of the fundamental tasks in video processing. Current optical flow estimation algorithms cannot deal with frames exhibiting large scale variation. Because scale variation commonly arises in images and videos, a scale-invariant optical flow algorithm is important and fundamental for other video operations such as video denoising. In light of this, we propose a new method aiming to establish dense correspondence between two frames containing pixels at different scales. We contribute a new framework that takes pixel-wise scale into consideration in optical flow estimation, and propose an effective numerical scheme that iteratively optimizes the discrete scale variables and the continuous flow variables. This scheme notably expands the practicality of optical flow in natural scenes containing different types of object movement.

Further, videos captured by all kinds of sensors are generally contaminated by noise. Although many denoising algorithms have been published, problems remain when applying them to real cases. We design a low-complexity but effective real-time video denoising framework that integrates robust optical flow estimation into the denoising process to locally register frame sequences, and uses a weighted averaging algorithm to restore a latent clean frame from a sequence of well-registered frames. Experiments show that our algorithm recovers more details than other state-of-the-art video denoising algorithms. More importantly, our method preserves temporal coherence, which is vital for video.

Lastly, we study the chrominance noise commonly observed in both videos and images taken under insufficient light. This kind of noise cannot be effectively reduced by state-of-the-art denoising methods that assume a Gaussian or Poisson distribution. Based on the different characteristics of luminance and chrominance noise, we propose a new denoising strategy that applies multi-resolution dual bilateral filtering to the chrominance layers under the guidance of a well-estimated luminance layer. Both visual and quantitative evaluation demonstrates the effectiveness of our algorithm.

Dai, Zhenlong. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2012. / Includes bibliographical references (leaves 81-90). / Abstracts also in Chinese.
/ Abstract --- p.i / Acknowledgement --- p.v / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Motivation and Objectives --- p.1 / Chapter 1.2 --- Our Contributions --- p.6 / Chapter 1.3 --- Thesis Outline --- p.8 / Chapter 2 --- Background --- p.10 / Chapter 2.1 --- Optical Flow Estimation --- p.10 / Chapter 2.2 --- Single Image Denoising --- p.15 / Chapter 2.3 --- Multi-image and Video Denoising --- p.17 / Chapter 3 --- Scale Invariant Optical Flow --- p.20 / Chapter 3.1 --- Related Work --- p.23 / Chapter 3.2 --- Optical Flow Model with Scale Variables --- p.25 / Chapter 3.3 --- Optimization --- p.31 / Chapter 3.3.1 --- Computing E[zi] --- p.32 / Chapter 3.3.2 --- Minimizing Optical Flow Energy --- p.32 / Chapter 3.3.3 --- Overall Computation Framework --- p.34 / Chapter 3.4 --- Experiments --- p.37 / Chapter 3.4.1 --- Evaluation of Our Model to Handle Scales --- p.37 / Chapter 3.4.2 --- Comparison with Other Optical Flow Methods --- p.38 / Chapter 3.4.3 --- Comparison with Sparse Feature Matching --- p.43 / Chapter 3.4.4 --- Evaluation on the Middlebury Dataset --- p.44 / Chapter 3.5 --- Summary --- p.46 / Chapter 4 --- Optical Flow Based Video Denoising --- p.47 / Chapter 4.1 --- Related Work --- p.48 / Chapter 4.2 --- Optical Flow based Video Denoising Framework --- p.48 / Chapter 4.2.1 --- Registration --- p.48 / Chapter 4.2.2 --- Accumulation --- p.52 / Chapter 4.2.3 --- Algorithm Implementation --- p.53 / Chapter 4.3 --- Experimental Results --- p.54 / Chapter 4.3.1 --- Comparisons with other algorithms --- p.54 / Chapter 4.3.2 --- Applications --- p.55 / Chapter 4.4 --- Limitation and Future Work --- p.55 / Chapter 4.5 --- Summary --- p.59 / Chapter 5 --- Chrominance Noise Reduction --- p.62 / Chapter 5.1 --- Related work --- p.65 / Chapter 5.2 --- Luminance and Chrominance Noise Characteristics --- p.68 / Chapter 5.3 --- Luminance and Chrominance Relationship --- p.69 / Chapter 5.4 --- Algorithm --- p.71 / Chapter 5.4.1 --- Dual Bilateral Filter --- p.71 / Chapter 5.4.2 --- Multi-resolution Framework --- p.72 / Chapter 5.5 --- Experiments --- p.72 / Chapter 5.5.1 --- Quantitative Evaluation --- p.73 / Chapter 5.5.2 --- Visual Comparison for Natural Noisy Images --- p.74 / Chapter 5.5.3 --- Applications --- p.75 / Chapter 5.6 --- Summary --- p.75 / Chapter 6 --- Conclusion --- p.79 / Bibliography --- p.82
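The accumulation step of the denoising framework (Chapter 4.2.2) can be sketched as follows, assuming the frames have already been registered (the thesis uses optical flow for that step). The Gaussian weighting and the `sigma` parameter are our illustrative choices, not the thesis's exact weights:

```python
import numpy as np

def temporal_denoise(frames, ref_idx, sigma=0.2):
    """Weighted-averaging sketch: each (already registered) frame is
    weighted per pixel by its agreement with the reference frame, so
    misaligned or outlier pixels contribute little to the average."""
    ref = frames[ref_idx]
    w = np.stack([np.exp(-(f - ref) ** 2 / (2 * sigma ** 2)) for f in frames])
    return (w * np.stack(frames)).sum(0) / w.sum(0)  # per-pixel weighted mean
```

Averaging T well-registered frames reduces noise variance roughly by a factor of T, while the agreement weights protect moving or badly registered pixels — which is how temporal coherence is preserved.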

Mathematical Models of Image Processing

Seacrest, Tyler, 01 May 2006
The purpose of this thesis is to develop various advanced linear algebra techniques that apply to image processing. With the increasing use of computers and digital photography, being able to manipulate digital images efficiently and with greater freedom is extremely important. By applying the tools of linear algebra, we hope to improve the ability to process such images. We are especially interested in developing techniques that allow computers to manipulate images with the least amount of human guidance. In Chapter 2 and Chapter 3, we develop the basic definitions and linear algebra concepts that lay the foundation for later chapters. Then, in Chapter 4, we demonstrate techniques that allow a computer to rotate an image to the correct orientation automatically, and similarly, to correct a certain class of color distortion automatically. In both cases, we use properties of the eigenvalues and eigenvectors of covariance matrices. We then model color clashing and color variation in Chapter 5 using a powerful tool from linear algebra known as the Perron-Frobenius theorem. Finally, we explore ways to determine whether one image is a blurred version of another using invariant functions. These functions are inspired by recent applications of Lie groups and Lie algebras to image processing.
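The covariance-eigenvector idea behind the automatic rotation correction of Chapter 4 can be sketched as follows: the eigenvector of the coordinate covariance matrix with the largest eigenvalue gives a shape's dominant axis, which can then be rotated to a canonical orientation. The function name and binary-mask input are our simplifications:

```python
import numpy as np

def principal_angle(mask):
    """Estimate a shape's dominant orientation (degrees) from the
    eigenvectors of the covariance matrix of its pixel coordinates."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys]).astype(float)     # 2 x N coordinate matrix
    cov = np.cov(pts)
    vals, vecs = np.linalg.eigh(cov)
    v = vecs[:, np.argmax(vals)]               # axis of largest variance
    return np.degrees(np.arctan2(v[1], v[0]))
```

Note the 180° ambiguity: the principal axis is a line, not a direction, so an orientation of 0° and 180° describe the same shape.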

Feature extraction, browsing and retrieval of images

Lim, Suryani, January 2005
Abstract not available

Three-dimensional measurement using a single camera and target tracking

Iovenitti, Pio Gioacchino, piovenitti@swin.edu.au, January 1997
This thesis involves the development of a three-dimensional measurement system for digitising the surface of an object. The measurement system consists of a single camera and a four point planar target of known size. The target is hand held, and is used to probe the surface of the object being measured. The position of the target is tracked by the camera, and the contact point on the object is determined. The vision based digitising technique can be used in the industrial and engineering design fields during the product development phase. The accuracy of measurement is an important criterion for establishing the success of the 3-D measurement system, and the factors influencing the accuracy are investigated. These factors include the image processing algorithm, the intrinsic parameters of the camera, the algorithm to determine the position, and various procedural variables. A new iterative algorithm is developed to calculate position. This algorithm is evaluated, and its performance is compared to that of an analytic algorithm. Simple calibration procedures are developed to determine the intrinsic parameters, and mathematical models are constructed to justify these procedures. The performance of the 3-D measurement system is established and compared to that of existing digitising systems.
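The similar-triangles relation at the heart of measuring with a known-size target can be written in one line; this fronto-parallel pinhole simplification conveys the geometry only, and is not the iterative position algorithm developed in the thesis:

```python
def target_distance(focal_px, target_size_m, image_size_px):
    """Pinhole similar-triangles relation: depth = f * S / s, where f is
    the focal length in pixels, S the target's physical size, and s its
    apparent size in the image. Valid for a fronto-parallel target."""
    return focal_px * target_size_m / image_size_px
```

For example, a 0.2 m target imaged at 80 px by a camera with an 800 px focal length lies about 2 m away; recovering full 3-D position and tilt from four target points requires the iterative algorithm the thesis evaluates.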
