
Spatial, temporal and spectral satellite image fusion via sparse representation.

January 2014
Remote sensing provides essential measurements for monitoring and analyzing climate change, ecosystem dynamics, and human activities at global and regional scales. Over the past two decades, the number of satellite sensors launched has kept increasing, driven by advances in aerospace technology and by the growing demand for remote sensing data across a vast range of application fields.
However, a key technological challenge confronting these sensors is that, owing to hardware limitations and budget constraints, they must trade spatial resolution off against other properties, including temporal resolution, spectral resolution, and swath width. To obtain data that combine high spatial resolution with these other desirable properties, one cost-effective solution is to explore data integration methods that fuse multi-resolution data from multiple sensors, thereby enhancing the application potential of available remote sensing data. In this thesis, we propose to fuse spatial resolution with temporal resolution and with spectral resolution, respectively, based on sparse representation theory.

Taking as a study case the reflectance products of Landsat ETM+ (30 m spatial resolution, 16-day revisit) and MODIS (250 m to 1 km spatial resolution, daily revisit), we propose two spatial-temporal fusion methods that combine the fine spatial detail of Landsat images with the daily temporal coverage of MODIS images. Motivated by the fact that the images from these two sensors are comparable on corresponding bands, we propose to link their spatial information on the available Landsat-MODIS image pairs (captured on prior dates) and then predict the Landsat image from its MODIS counterpart on the prediction date. To learn the spatial details effectively from the prior images, we use a redundant dictionary to extract the basic representation atoms for both Landsat and MODIS images based on sparse representation. In the scenario with two prior Landsat-MODIS image pairs, we build the correspondence between the MODIS and ETM+ difference images by training a low- and high-resolution dictionary pair from the given prior image pairs. In the second scenario, i.e., when only one Landsat-MODIS image pair is available, we directly relate MODIS and ETM+ data through an image degradation model.
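The coupled-dictionary idea underlying both scenarios can be illustrated with a toy NumPy sketch. Everything below is illustrative rather than the thesis implementation: the dictionary pair is random (and orthonormal on the low-resolution side, so orthogonal matching pursuit recovers the code exactly, whereas the thesis trains redundant dictionaries from prior image pairs), and small vectors stand in for MODIS/Landsat patches. The point is only that a sparse code found on the low-resolution side transfers spatial detail to the high-resolution side.

```python
import numpy as np

rng = np.random.default_rng(0)

def omp(D, y, k):
    """Orthogonal matching pursuit: greedily sparse-code y over dictionary D with k atoms."""
    residual, support = y.copy(), []
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        # Refit the coefficients on all chosen atoms by least squares.
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    code = np.zeros(D.shape[1])
    code[support] = coef
    return code

# Toy coupled dictionary pair: low-res (MODIS-like) and high-res (Landsat-like)
# atoms share one sparse code. Orthonormal low-res atoms (via QR) make OMP
# recovery exact; a trained dictionary would be redundant instead.
n_atoms, dim_lo, dim_hi, k = 16, 16, 64, 3
D_lo, _ = np.linalg.qr(rng.standard_normal((dim_lo, n_atoms)))
D_hi = rng.standard_normal((dim_hi, n_atoms))

# Synthesize a patch pair from a shared k-sparse code.
code_true = np.zeros(n_atoms)
code_true[rng.choice(n_atoms, k, replace=False)] = rng.standard_normal(k)
patch_lo, patch_hi = D_lo @ code_true, D_hi @ code_true

# Prediction step: sparse-code the low-res patch, reuse the code on the high-res side.
code = omp(D_lo, patch_lo, k)
patch_hi_pred = D_hi @ code
```

In the thesis the dictionary pair is learned jointly from the prior Landsat-MODIS images so that corresponding patches share codes; the sketch shows only the prediction step on a single patch.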
The fusion stage then super-resolves the MODIS image, combined with high-pass modulation, in a two-layer fusion framework. Remarkably, the proposed spatial-temporal fusion methods form a unified framework for blending remote sensing images under both phenology change and land-cover-type change.

Building on the proposed spatial-temporal fusion models, we monitor land use/land cover changes in Shenzhen, China. As a fast-growing city, Shenzhen faces the problem of detecting rapid changes, both for rational city planning and for sustainable development. However, the cloudy and rainy weather in the region where Shenzhen is located makes the acquisition cycle of high-quality satellite images longer than the sensors' normal revisit periods. Spatial-temporal fusion methods can tackle this problem by improving the spatial resolution of images that have coarse spatial resolution but frequent temporal coverage, thereby making the detection of rapid changes possible. On two Landsat-MODIS datasets with annual and monthly changes, respectively, we apply the proposed spatial-temporal fusion methods to the task of detecting multiple types of change.

Afterward, we propose a novel spatial-spectral fusion method for satellite multispectral and hyperspectral images based on dictionary-pair learning and sparse non-negative matrix factorization. By combining the spectral information of the hyperspectral image, which is characterized by low spatial but high spectral resolution (LSHS), with the spatial information of the multispectral image, which features high spatial but low spectral resolution (HSLS), this method aims to generate fused data with both high spatial and high spectral resolution.
Motivated by the observation that each hyperspectral pixel can be represented as a linear combination of a few endmembers, the method first extracts the spectral bases of the LSHS and HSLS images by making full use of the rich spectral information in the LSHS data. The spectral bases of these two categories of data then form a dictionary pair, since they correspondingly represent each pixel spectrum of the LSHS and HSLS data, respectively. Subsequently, the LSHS image is spatially unmixed by representing the HSLS image with respect to its learned dictionary to derive the representation coefficients. Combining the spectral bases of the LSHS data with the representation coefficients of the HSLS data, we finally derive fused data characterized by the spectral resolution of the LSHS data and the spatial resolution of the HSLS data.

Song, Huihui. Thesis (Ph.D.), Chinese University of Hong Kong, 2014. Includes bibliographical references (leaves 103-110). Abstracts also in Chinese.
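The factor-and-recombine structure of the spatial-spectral fusion procedure described in the abstract can be sketched in NumPy. All names, dimensions, and the band-averaging spectral response `R` below are toy assumptions, and plain multiplicative-update NMF stands in for the thesis's sparse NMF and dictionary-pair learning:

```python
import numpy as np

rng = np.random.default_rng(1)
EPS = 1e-9

def nmf(Y, p, iters=600):
    """Lee-Seung multiplicative-update NMF: Y ~= W @ H with W, H >= 0."""
    W = rng.random((Y.shape[0], p)) + EPS
    H = rng.random((p, Y.shape[1])) + EPS
    for _ in range(iters):
        H *= (W.T @ Y) / (W.T @ W @ H + EPS)
        W *= (Y @ H.T) / (W @ H @ H.T + EPS)
    return W, H

def nn_code(Y, D, iters=600):
    """Nonnegative coding of Y over a fixed dictionary D (coefficients updated only)."""
    H = rng.random((D.shape[1], Y.shape[1])) + EPS
    for _ in range(iters):
        H *= (D.T @ Y) / (D.T @ D @ H + EPS)
    return H

# Toy scene: p endmembers, B_h hyperspectral bands, B_m multispectral bands.
p, B_h, B_m, N_low, N_high = 4, 32, 4, 25, 400
E_true = rng.random((B_h, p))                    # endmember spectra
A_high = rng.dirichlet(np.ones(p), N_high).T     # high-res abundances
# Low-res abundances; pure pixels included so the learned basis spans the scene.
A_low = np.hstack([np.eye(p), rng.dirichlet(np.ones(p), N_low).T])
# Assumed spectral response: each multispectral band averages 8 hyperspectral bands.
R = np.repeat(np.eye(B_m), B_h // B_m, axis=1) / (B_h // B_m)

Y_lshs = E_true @ A_low        # low spatial / high spectral input
Y_hsls = R @ E_true @ A_high   # high spatial / low spectral input

# Step 1: learn spectral bases from the spectrally rich LSHS image.
E, C = nmf(Y_lshs, p)
# Step 2: code the HSLS image over the spectrally degraded basis R @ E.
A = nn_code(Y_hsls, R @ E)
# Step 3: fuse LSHS spectra with HSLS spatial coefficients.
Z = E @ A                      # high spatial AND high spectral resolution
```

In the thesis, the factorization carries a sparsity constraint and the response relating hyperspectral to multispectral bands comes from the actual sensors; the sketch keeps only the three-step structure of basis extraction, spatial coding, and recombination.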
