151 |
Image inpainting by global structure and texture propagation. January 2008
Huang, Ting. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2008. / Includes bibliographical references (p. 37-41). / Abstracts in English and Chinese. / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Related Area --- p.2 / Chapter 1.2 --- Previous Work --- p.4 / Chapter 1.3 --- Proposed Framework --- p.7 / Chapter 1.4 --- Overview --- p.8 / Chapter 2 --- Markov Random Fields and Optimization Schemes --- p.9 / Chapter 2.1 --- MRF Model --- p.10 / Chapter 2.1.1 --- MAP Understanding --- p.11 / Chapter 2.2 --- Belief Propagation Optimization Scheme --- p.14 / Chapter 2.2.1 --- Max-Product BP on MRFs --- p.14 / Chapter 2.2.2 --- Sum-Product BP on MRFs --- p.15 / Chapter 3 --- Our Formulation --- p.17 / Chapter 3.1 --- An MRF Model --- p.18 / Chapter 3.2 --- Coarse-to-Fine Optimization by BP --- p.21 / Chapter 3.2.1 --- Coarse-Level Belief Propagation --- p.23 / Chapter 3.2.2 --- Fine-Level Belief Propagation --- p.24 / Chapter 3.2.3 --- Performance Enhancement --- p.25 / Chapter 4 --- Experiments --- p.27 / Chapter 4.1 --- Comparison --- p.27 / Chapter 4.2 --- Failure Case --- p.32 / Chapter 5 --- Conclusion --- p.35 / Bibliography --- p.37
|
152 |
Robust and parallel mesh reconstruction from unoriented noisy points. January 2009
Sheung, Hoi. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2009. / Includes bibliographical references (p. 65-70). / Abstract also in Chinese. / Abstract --- p.v / Acknowledgements --- p.ix / List of Figures --- p.xiii / List of Tables --- p.xv / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Main Contributions --- p.3 / Chapter 1.2 --- Outline --- p.3 / Chapter 2 --- Related Work --- p.5 / Chapter 2.1 --- Volumetric reconstruction --- p.5 / Chapter 2.2 --- Combinatorial approaches --- p.6 / Chapter 2.3 --- Robust statistics in surface reconstruction --- p.6 / Chapter 2.4 --- Down-sampling of massive points --- p.7 / Chapter 2.5 --- Streaming and parallel computing --- p.7 / Chapter 3 --- Robust Normal Estimation and Point Projection --- p.9 / Chapter 3.1 --- Robust Estimator --- p.9 / Chapter 3.2 --- Mean Shift Method --- p.11 / Chapter 3.3 --- Normal Estimation and Projection --- p.11 / Chapter 3.4 --- Moving Least Squares Surfaces --- p.14 / Chapter 3.4.1 --- Step 1: local reference domain --- p.14 / Chapter 3.4.2 --- Step 2: local bivariate polynomial --- p.14 / Chapter 3.4.3 --- Simpler Implementation --- p.15 / Chapter 3.5 --- Robust Moving Least Squares by Forward Search --- p.16 / Chapter 3.6 --- Comparison with RMLS --- p.17 / Chapter 3.7 --- K-Nearest Neighborhoods --- p.18 / Chapter 3.7.1 --- Octree --- p.18 / Chapter 3.7.2 --- Kd-Tree --- p.19 / Chapter 3.7.3 --- Other Techniques --- p.19 / Chapter 3.8 --- Principal Component Analysis --- p.19 / Chapter 3.9 --- Polynomial Fitting --- p.21 / Chapter 3.10 --- Highly Parallel Implementation --- p.22 / Chapter 4 --- Error Controlled Subsampling --- p.23 / Chapter 4.1 --- Centroidal Voronoi Diagram --- p.23 / Chapter 4.2 --- Energy Function --- p.24 / Chapter 4.2.1 --- Distance Energy --- p.24 / Chapter 4.2.2 --- Shape Prior Energy --- p.24 / Chapter 4.2.3 --- Global Energy --- p.25 / Chapter 4.3 --- Lloyd's Algorithm --- p.26 / Chapter 4.4 --- Clustering Optimization and Subsampling --- p.27 / Chapter 5 --- Mesh Generation --- p.29 / Chapter 5.1 --- Tight Cocone Triangulation --- p.29 / Chapter 5.2 --- Clustering Based Local Triangulation --- p.30 / Chapter 5.2.1 --- Initial Surface Reconstruction --- p.30 / Chapter 5.2.2 --- Cleaning Process --- p.32 / Chapter 5.2.3 --- Comparisons --- p.33 / Chapter 5.3 --- Computing Dual Graph --- p.34 / Chapter 6 --- Results and Discussion --- p.37 / Chapter 6.1 --- Results of Mesh Reconstruction from Noisy Point Cloud --- p.37 / Chapter 6.2 --- Results of Clustering Based Local Triangulation --- p.47 / Chapter 7 --- Conclusions --- p.55 / Chapter 7.1 --- Key Contributions --- p.55 / Chapter 7.2 --- Factors Affecting Our Algorithm --- p.55 / Chapter 7.3 --- Future Work --- p.56 / Chapter A --- Building Neighborhood Table --- p.59 / Chapter A.1 --- Building Neighborhood Table in Streaming --- p.59 / Chapter B --- Publications --- p.63 / Bibliography --- p.65
|
153 |
Learning-based descriptor for 2-D face recognition. January 2010
Cao, Zhimin. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2010. / Includes bibliographical references (leaves 30-34). / Abstracts in English and Chinese. / Chapter 1 --- Introduction and related work --- p.1 / Chapter 2 --- Learning-based descriptor for face recognition --- p.7 / Chapter 2.1 --- Overview of framework --- p.7 / Chapter 2.2 --- Learning-based descriptor extraction --- p.9 / Chapter 2.2.1 --- Sampling and normalization --- p.9 / Chapter 2.2.2 --- Learning-based encoding and histogram representation --- p.11 / Chapter 2.2.3 --- PCA dimension reduction --- p.12 / Chapter 2.2.4 --- Multiple LE descriptors --- p.14 / Chapter 2.3 --- Pose-adaptive matching --- p.16 / Chapter 2.3.1 --- Component-level face alignment --- p.17 / Chapter 2.3.2 --- Pose-adaptive matching --- p.17 / Chapter 2.3.3 --- Evaluations of pose-adaptive matching --- p.19 / Chapter 3 --- Experiment --- p.21 / Chapter 3.1 --- Results on the LFW benchmark --- p.21 / Chapter 3.2 --- Results on Multi-PIE --- p.24 / Chapter 4 --- Conclusion and future work --- p.27 / Chapter 4.1 --- Conclusion --- p.27 / Chapter 4.2 --- Future work --- p.28 / Bibliography --- p.30
|
154 |
Image partial blur detection and classification. January 2008
Liu, Renting. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2008. / Includes bibliographical references (leaves 40-46). / Abstracts in English and Chinese. / Chapter 1 --- Introduction --- p.1 / Chapter 2 --- Related Work and System Overview --- p.6 / Chapter 2.1 --- Previous Work in Blur Analysis --- p.6 / Chapter 2.1.1 --- Blur detection and estimation --- p.6 / Chapter 2.1.2 --- Image deblurring --- p.8 / Chapter 2.1.3 --- Low DoF image auto-segmentation --- p.14 / Chapter 2.2 --- System Overview --- p.15 / Chapter 3 --- Blur Features and Classification --- p.18 / Chapter 3.1 --- Blur Features --- p.18 / Chapter 3.1.1 --- Local Power Spectrum Slope --- p.19 / Chapter 3.1.2 --- Gradient Histogram Span --- p.21 / Chapter 3.1.3 --- Maximum Saturation --- p.24 / Chapter 3.1.4 --- Local Autocorrelation Congruency --- p.25 / Chapter 3.2 --- Classification --- p.28 / Chapter 4 --- Experiments and Results --- p.29 / Chapter 4.1 --- Blur Patch Detection --- p.29 / Chapter 4.2 --- Blur degree --- p.33 / Chapter 4.3 --- Blur Region Segmentation --- p.34 / Chapter 5 --- Conclusion and Future Work --- p.38 / Bibliography --- p.40 / Chapter A --- Blurred Edge Analysis --- p.47
|
155 |
Dynamic texture synthesis in image and video processing. January 2008
Xu, Leilei. / Thesis submitted in: October 2007. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2008. / Includes bibliographical references (leaves 78-84). / Abstracts in English and Chinese. / Abstract --- p.i / Acknowledgement --- p.iii / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Texture and Dynamic Textures --- p.1 / Chapter 1.2 --- Related work --- p.4 / Chapter 1.3 --- Thesis Outline --- p.7 / Chapter 2 --- Image/Video Processing --- p.8 / Chapter 2.1 --- Bayesian Analysis --- p.8 / Chapter 2.2 --- Markov Property --- p.10 / Chapter 2.3 --- Graph Cut --- p.12 / Chapter 2.4 --- Belief Propagation --- p.13 / Chapter 2.5 --- Expectation-Maximization --- p.15 / Chapter 2.6 --- Principal Component Analysis --- p.15 / Chapter 3 --- Linear Dynamic System --- p.17 / Chapter 3.1 --- System Model --- p.18 / Chapter 3.2 --- Degeneracy and Canonical Model Realization --- p.19 / Chapter 3.3 --- Learning of Dynamic Textures --- p.19 / Chapter 3.4 --- Synthesizing Dynamic Textures --- p.21 / Chapter 3.5 --- Summary --- p.21 / Chapter 4 --- Dynamic Color Texture Synthesis --- p.25 / Chapter 4.1 --- Related Work --- p.25 / Chapter 4.2 --- System Model --- p.26 / Chapter 4.2.1 --- Laplacian Pyramid-based DCTS Model --- p.28 / Chapter 4.2.2 --- RBF-based DCTS Model --- p.28 / Chapter 4.3 --- Experimental Results --- p.32 / Chapter 4.4 --- Summary --- p.42 / Chapter 5 --- Dynamic Textures using Multi-resolution Analysis --- p.43 / Chapter 5.1 --- System Model --- p.44 / Chapter 5.2 --- Multi-resolution Descriptors --- p.46 / Chapter 5.2.1 --- Laplacian Pyramids --- p.47 / Chapter 5.2.2 --- Haar Wavelets --- p.48 / Chapter 5.2.3 --- Steerable Pyramid --- p.49 / Chapter 5.3 --- Experimental Results --- p.51 / Chapter 5.4 --- Summary --- p.55 / Chapter 6 --- Motion Transfer --- p.59 / Chapter 6.1 --- Problem formulation --- p.60 / Chapter 6.1.1 --- Similarity on Appearance --- p.61 / Chapter 6.1.2 --- Similarity on Dynamic Behavior --- p.62 / Chapter 6.1.3 --- The Objective Function --- p.65 / Chapter 6.2 --- Further Work --- p.66 / Chapter 7 --- Conclusions --- p.67 / Chapter A --- List of Publications --- p.68 / Chapter B --- Degeneracy in LDS Model --- p.70 / Chapter B.1 --- Equivalence Class --- p.70 / Chapter B.2 --- The Choice of the Matrix Q --- p.70 / Chapter B.3 --- Swapping the Column of C and A --- p.71 / Chapter C --- Probability Density Functions --- p.74 / Chapter C.1 --- Probability Distribution --- p.74 / Chapter C.2 --- Joint Probability Distributions --- p.75 / Bibliography --- p.78
|
156 |
Spatial, temporal and spectral satellite image fusion via sparse representation. January 2014
Remote sensing provides good measurements for monitoring and further analyzing climate change, ecosystem dynamics, and human activities at global or regional scales. Over the past two decades, the number of launched satellite sensors has been increasing with the development of aerospace technologies and the growing demand for remote sensing data in a vast range of application fields. However, a key technological challenge confronting these sensors is that they trade off between spatial resolution and other properties, including temporal resolution, spectral resolution, swath width, etc., due to the limitations of hardware technology and budget constraints. To increase the spatial resolution of data while retaining these other properties, one cost-effective solution is to explore data integration methods that can fuse multi-resolution data from multiple sensors, thereby enhancing the application capabilities of available remote sensing data. In this thesis, we propose to fuse spatial resolution with temporal resolution and with spectral resolution, respectively, based on sparse representation theory. / Taking the study case of Landsat ETM+ (with spatial resolution of 30m and temporal resolution of 16 days) and MODIS (with spatial resolution of 250m ~ 1km and daily temporal resolution) reflectance, we propose two spatial-temporal fusion methods to combine the fine spatial information of Landsat images and the daily temporal coverage of MODIS images. Motivated by the fact that the images from these two sensors are comparable on corresponding bands, we propose to link their spatial information on the available Landsat-MODIS image pair (captured on a prior date) and then predict the Landsat image from the MODIS counterpart on the prediction date. To learn the spatial details from the prior images well, we use a redundant dictionary to extract the basic representation atoms for both Landsat and MODIS images based on sparse representation.
Under the scenario of two prior Landsat-MODIS image pairs, we build the corresponding relationship between the difference images of MODIS and ETM+ by training a low- and high-resolution dictionary pair from the given prior image pairs. In the second scenario, i.e., only one Landsat-MODIS image pair being available, we directly correlate MODIS and ETM+ data through an image degradation model. The fusion stage is then achieved by super-resolving the MODIS image combined with high-pass modulation in a two-layer fusion framework. Remarkably, the proposed spatial-temporal fusion methods form a unified framework for blending remote sensing images with phenology change or land-cover-type change. / Based on the proposed spatial-temporal fusion models, we propose to monitor land use/land cover changes in Shenzhen, China. As a fast-growing city, Shenzhen faces the problem of detecting rapid changes for both rational city planning and sustainable development. However, the cloudy and rainy weather in the region where Shenzhen is located makes the acquisition cycle of high-quality satellite images longer than the sensors' normal revisit periods. Spatial-temporal fusion methods can tackle this problem by improving the spatial resolution of images with coarse spatial resolution but frequent temporal coverage, thereby making the detection of rapid changes possible. On two Landsat-MODIS datasets with annual and monthly changes, respectively, we apply the proposed spatial-temporal fusion methods to the task of multiple change detection. / Afterward, we propose a novel spatial and spectral fusion method for satellite multispectral and hyperspectral (or high-spectral) images based on dictionary-pair learning and sparse non-negative matrix factorization. By combining the spectral information from the hyperspectral image, which is characterized by low spatial resolution but high spectral resolution (abbreviated LSHS), and the spatial information from the multispectral image, which is characterized by high spatial resolution but low spectral resolution (abbreviated HSLS), this method aims to generate fused data with both high spatial and high spectral resolution. Motivated by the observation that each hyperspectral pixel can be represented by a linear combination of a few endmembers, the method first extracts the spectral bases of the LSHS and HSLS images by making full use of the rich spectral information in the LSHS data. The spectral bases of these two categories of data then form a dictionary pair due to their correspondence in representing the pixel spectra of the LSHS and HSLS data, respectively. Subsequently, the LSHS image is spatially unmixed by representing the HSLS image with respect to the corresponding learned dictionary to derive its representation coefficients. Combining the spectral bases of the LSHS data and the representation coefficients of the HSLS data, we finally derive fused data characterized by the spectral resolution of the LSHS data and the spatial resolution of the HSLS data. / Song, Huihui. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2014. / Includes bibliographical references (leaves 103-110). / Abstracts also in Chinese.
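A minimal sketch of the spectral-spatial fusion idea described in this abstract, not the thesis implementation: it learns a spectral basis from the LSHS data with multiplicative NMF updates (an L1 penalty standing in for the sparse NMF used in the thesis), projects that basis through an assumed spectral response matrix `srf` to form the dictionary pair, unmixes the HSLS pixels against the projected dictionary, and recombines. The function name, the `srf` matrix, and all parameter values are illustrative assumptions.

```python
import numpy as np

def fuse_hs_ms(lshs, hsls, srf, n_endmembers=30, n_iter=200, lam=1e-3):
    """Illustrative dictionary-pair fusion of a low-spatial/high-spectral (LSHS)
    image with a high-spatial/low-spectral (HSLS) image.

    lshs : (L, n_low)  hyperspectral pixels as columns, L spectral bands
    hsls : (l, n_high) multispectral pixels as columns, l << L bands
    srf  : (l, L)      assumed spectral response matrix relating the two sensors
    """
    rng = np.random.default_rng(0)

    # 1) Learn a spectral basis D (endmembers) and abundances A from the LSHS
    #    data via multiplicative-update NMF with an L1 penalty on A.
    L, n_low = lshs.shape
    D = rng.random((L, n_endmembers))
    A = rng.random((n_endmembers, n_low))
    for _ in range(n_iter):
        A *= (D.T @ lshs) / (D.T @ D @ A + lam + 1e-12)   # sparse abundance update
        D *= (lshs @ A.T) / (D @ A @ A.T + 1e-12)          # spectral basis update

    # 2) Project the spectral basis into the multispectral domain to obtain the
    #    low-spectral half of the dictionary pair.
    D_ms = srf @ D                                          # (l, n_endmembers)

    # 3) Unmix the HSLS pixels against D_ms to get high-resolution abundances.
    A_high = rng.random((n_endmembers, hsls.shape[1]))
    for _ in range(n_iter):
        A_high *= (D_ms.T @ hsls) / (D_ms.T @ D_ms @ A_high + lam + 1e-12)

    # 4) Fused image: full spectral basis applied to high-resolution abundances.
    return D @ A_high                                       # (L, n_high)
```

The design choice mirrors the abstract: the spectral detail comes entirely from the LSHS basis, while the spatial detail comes from the abundances estimated on the HSLS pixels, so the fused product inherits the best resolution of each input.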
|
157 |
Removing Textured Artifacts from Digital Photos Using Spatial Frequency Filtering. Huang, Ben. 01 January 2010
An abstract of the thesis of Ben Huang for the Master of Science in Electrical and Computer Science, presented August 12, 2010. Title: Removing textured artifacts from digital photos by using spatial frequency filtering. Virtually all image processing is now done with digital images. These images, captured with digital cameras, can be readily processed with various types of editing software to serve a multitude of personal and commercial purposes. But not all images are directly captured, and even among those that are, many are not of sufficiently high quality. Digital images are also acquired by scanning old paper images, and the result is often a digital image of poor quality. The textured finish of some old paper photographs was designed to help protect the pictures from discoloration. After scanning, however, these textured artifacts appear as annoying textured noise in the digital image, severely degrading its visual definition on electronic screens. This kind of image noise is academically called global periodic noise: a spurious, repetitive pattern that exists consistently throughout the image. There does not appear to be any commercial graphics software with a toolbox that directly removes this global periodic noise. Even Photoshop, considered to be the most powerful and authoritative graphics software, does not have an effective function to reduce textured noise. This thesis addresses the problem by proposing an alternative graphic filter to what is currently available. To achieve the best image quality in photographic editing, spatial frequency domain filtering is used instead of spatial domain filtering. In the frequency domain, the consistent periodicity of the textured noise leads to well-defined spikes in the frequency transform of the noisy image. When the noise spikes are at a sufficient distance from the image spectrum, they can be removed by reducing their frequency amplitudes. The filtered spectrum may then yield a noise-reduced image through the inverse frequency transform. This thesis proposes a method to reduce periodic noise in the spatial frequency domain; summarizes the differences between the DFT and DCT, and between the FFT and the fast DCT, in image processing applications; uses the fast DCT as the frequency transform in order to improve both computational load and filtered image quality; and develops software that can be implemented as a plug-in for larger graphics packages to remove textured artifacts from digital images.
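A rough illustration of the frequency-domain approach described in this abstract, not the thesis implementation: the sketch transforms an image with a 2-D DCT, flags isolated coefficients that rise well above the local spectral background, attenuates them to the background level, and inverts the transform. The guard size, neighbourhood size, and threshold are illustrative assumptions.

```python
import numpy as np
from scipy.fft import dctn, idctn
from scipy.ndimage import uniform_filter

def remove_periodic_texture(img, guard=16, z_thresh=6.0):
    """Reduce global periodic noise by attenuating isolated spikes in the
    2-D DCT spectrum.  Coefficients within `guard` of the DC corner are
    protected; coefficients whose log-magnitude exceeds a robust z-score
    relative to the local spectral background are treated as noise spikes."""
    coeffs = dctn(img.astype(float), norm="ortho")
    mag = np.log1p(np.abs(coeffs))

    # Estimate the smooth background of the spectrum with a box blur.
    background = uniform_filter(mag, size=15)
    residual = mag - background
    sigma = 1.4826 * np.median(np.abs(residual - np.median(residual)))

    # Spikes: coefficients far above their local background.
    spikes = residual > z_thresh * max(sigma, 1e-6)
    spikes[:guard, :guard] = False          # protect low-frequency image content

    # Attenuate spikes down to the background level and invert the transform.
    filtered = coeffs.copy()
    filtered[spikes] = np.sign(coeffs[spikes]) * np.expm1(background[spikes])
    return idctn(filtered, norm="ortho")
```

Because the periodic texture concentrates its energy in a handful of mid- and high-frequency coefficients while the photograph's energy sits near the DC corner, suppressing only those spikes removes the texture while leaving the image content largely untouched.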
|
158 |
Low to medium level image processing for a mobile robot. Espinosa, Cecilia H. 01 January 1991
The use of visual perception in autonomous mobile systems was approached with caution by mobile robot developers because of the high computational cost and large memory requirements of most image processing operations. When used, image processing was typically implemented on multiprocessors or on complex and expensive systems, requiring the robot to be wired or radio-controlled from a base computer system.
|
159 |
Scalable video coding by stream morphing. Macnicol, James Roy. January 2002 (PDF)
"October 2002 (Revised May 2003)"--T.p. Includes bibliographical references (leaves 256-264).
|
160 |
Natural feature extraction as a front end for simultaneous localization and mapping. Kiang, Kai-Ming, Mechanical & Manufacturing Engineering, Faculty of Engineering, UNSW, January 2006
This thesis is concerned with algorithms for finding natural features that are then used for simultaneous localisation and mapping, commonly known as SLAM in navigation theory. The task involves capturing raw sensory inputs, extracting features from these inputs, and using the features for mapping and localising during navigation. The ability to extract natural features allows automatons such as robots to be sent to environments that no human beings have previously explored, working in a way that is similar to how human beings understand and remember where they have been. In extracting natural features from images, the way that features are represented and matched is a critical issue, because the computation involved is wasted if the wrong method is chosen. While there are many techniques capable of matching pre-defined objects correctly, few of them can be used for real-time navigation in an unexplored environment, intelligently deciding on what is a relevant feature in the images. Normally, feature analysis that extracts relevant features from an image is a two-step process: first selecting interest points, and then representing these points based on the properties of their local regions. A novel technique is presented in this thesis for extracting a set of natural features that is small enough yet robust enough for navigation purposes. The technique involves a three-step approach. The first step selects interest points as extrema of the difference of Gaussians (DoG). The second step applies Textural Feature Analysis (TFA) to the local regions of the interest points. The third step selects distinctive features using Distinctness Analysis (DA), based mainly on the probability of occurrence of the extracted features. The additional DA step yields a significant improvement in processing speed over previous methods. Moreover, TFA/DA has been applied in a SLAM configuration operating in an underwater environment, where the texture can be rich in natural features. The results demonstrate an improvement in loop-closure ability compared to traditional SLAM methods, suggesting that real-time navigation in unexplored environments using natural features is now a more plausible option.
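A hedged sketch of the three-step pipeline outlined in this abstract, using NumPy and SciPy: DoG extrema for interest points, a crude gradient-orientation code word standing in for the thesis's Textural Feature Analysis, and a rarity filter standing in for Distinctness Analysis. The descriptor, thresholds, and parameter values are illustrative assumptions rather than the method's actual formulation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def distinctive_features(img, sigmas=(1.0, 1.6, 2.6, 4.2), patch=8,
                         n_bins=16, keep_ratio=0.2):
    """Return a small set of distinctive interest points from a grayscale image."""
    img = img.astype(float)

    # Step 1: interest points as local extrema of difference-of-Gaussians (DoG).
    blurred = [gaussian_filter(img, s) for s in sigmas]
    dogs = [b1 - b0 for b0, b1 in zip(blurred, blurred[1:])]
    points = []
    for dog in dogs:
        peaks = (dog == maximum_filter(dog, size=5)) & \
                (np.abs(dog) > 0.02 * np.abs(dog).max())
        points += list(zip(*np.nonzero(peaks)))
    if not points:
        return []

    # Step 2: textural descriptor per point (gradient-orientation histogram here,
    # as a stand-in for the thesis's TFA descriptor).
    gy, gx = np.gradient(img)
    angle = np.arctan2(gy, gx)
    descriptors, kept_pts = [], []
    for r, c in points:
        r0, r1 = max(r - patch, 0), min(r + patch, img.shape[0])
        c0, c1 = max(c - patch, 0), min(c + patch, img.shape[1])
        hist, _ = np.histogram(angle[r0:r1, c0:c1], bins=n_bins, range=(-np.pi, np.pi))
        descriptors.append(int(np.argmax(hist)))   # crude code word for the patch
        kept_pts.append((r, c))

    # Step 3: distinctness analysis -- keep features whose code word is rare,
    # i.e. has a low probability of occurrence across the whole image.
    descriptors = np.asarray(descriptors)
    counts = np.bincount(descriptors, minlength=n_bins).astype(float)
    prob = counts[descriptors] / len(descriptors)
    order = np.argsort(prob)                        # rarest first
    n_keep = max(1, int(keep_ratio * len(order)))
    return [kept_pts[i] for i in order[:n_keep]]
```

The rarity filter in step 3 reflects the key idea attributed to DA above: features that occur frequently carry little discriminative value for loop closure, so discarding them early keeps the feature set small and cheap to match during SLAM.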
|