
Urban classification by pixel and object-based approaches for very high resolution imagery

A tremendous amount of high resolution imagery has become available in recent years, mainly owing to advances in image-capture technology. Most very high resolution (VHR) imagery comes in only three bands, red, green and blue (RGB), and its value for remote sensing studies has only recently been recognised; as a result, there are not enough studies examining the usefulness of such imagery for urban applications. This research proposes a method for analysing an urban area using VHR UAV imagery for land use and land cover classification. Remote sensing imagery comes in various characteristics and formats from different sources, most commonly satellite and airborne platforms. Recently, unmanned aerial vehicles (UAVs) have emerged as a very promising source of geographic data with unique properties, the most important being the very high spatial and temporal resolution of the data they provide. UAV systems are a promising technology that will advance not only remote sensing but GIScience as well. UAV imagery has gained popularity over the last decade for remote sensing and GIS applications in general, and for image analysis and classification in particular. One concern with UAV imagery is that an optimal classification approach is hard to define, because many variables are involved in the process, such as the properties of the image source and the purpose of the classification.

The main objective of this research is to evaluate land use / land cover (LULC) classification for urban areas, where the data for the study area consist of VHR RGB imagery collected by a basic, off-the-shelf UAV. LULC classification was conducted with both pixel-based and object-based approaches, using supervised algorithms in each case. In the pixel-based image analysis, three different algorithms were used to create the final classified maps, whereas one algorithm was used in the object-based image analysis. The study also tested whether the object-based approach, compared with the pixel-based approach, reduces the difficulty of classifying mixed pixels in VHR imagery while still identifying all possible classes in the scene and maintaining high accuracy. Both approaches were applied to a UAV image with three spectral bands (red, green and blue), plus a DEM layer that was later added to the image as ancillary data. Previous studies comparing pixel-based and object-based classification claim that the object-based approach produces better class results for VHR imagery. Meanwhile, several trade-offs must be made when selecting a classification approach, involving factors such as time cost, trial and error, and subjectivity.

Pixel-based classification was approached in this study through supervised learning algorithms, and the process included all the necessary steps, such as selecting representative training samples and creating a spectral signature file. The object-based classification process included segmenting the UAV imagery and creating class rules using feature extraction. In addition, the incorporation of intensity, hue and saturation (IHS) colour-domain layers and Principal Component Analysis (PCA) layers was tested to evaluate whether such a method can produce better class results for simple UAV imagery.
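To make the pixel-based workflow concrete, the sketch below (Python; not part of the thesis itself) stacks RGB, derived HSV bands (standing in for IHS), PCA layers and a DEM into one multi-band image, then applies two supervised classifiers: scikit-learn's QuadraticDiscriminantAnalysis as a Gaussian maximum-likelihood analogue and NearestCentroid as a minimum-distance classifier. All array names, sample sizes and class labels are illustrative placeholders rather than the study's actual data.

```python
import numpy as np
from skimage.color import rgb2hsv
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.neighbors import NearestCentroid

rng = np.random.default_rng(0)
h, w = 200, 200
rgb = rng.random((h, w, 3))   # placeholder for the UAV orthophoto (values in [0, 1])
dem = rng.random((h, w, 1))   # placeholder DEM layer used as ancillary data

# Derive IHS-like bands (HSV used here as a stand-in) and two PCA layers from
# the RGB bands, then stack everything into one multi-band feature image.
hsv = rgb2hsv(rgb)
pca_layers = PCA(n_components=2).fit_transform(rgb.reshape(-1, 3)).reshape(h, w, 2)
stack = np.concatenate([rgb, hsv, pca_layers, dem], axis=-1)
features = stack.reshape(-1, stack.shape[-1])

# Hypothetical training samples: pixel indices with manually assigned LULC labels
# (e.g. 0 = buildings, 1 = vegetation, 2 = roads); in practice these would come
# from digitised training polygons rather than random draws.
train_idx = rng.choice(features.shape[0], size=300, replace=False)
train_lab = rng.integers(0, 3, size=300)

# Gaussian maximum-likelihood style classifier (per-class mean and covariance);
# reg_param keeps the covariance estimates non-singular for this small sample.
ml = QuadraticDiscriminantAnalysis(reg_param=0.1)
ml_map = ml.fit(features[train_idx], train_lab).predict(features).reshape(h, w)

# Minimum-distance-to-mean classifier for comparison.
md = NearestCentroid()
md_map = md.fit(features[train_idx], train_lab).predict(features).reshape(h, w)

print(ml_map.shape, md_map.shape)
```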
These UAVs are usually equipped with only RGB colour sensors, and combining additional derived colour bands such as IHS has proven useful in prior studies of object-based image analysis (OBIA) of UAV imagery; however, incorporating the IHS domain and PCA layers in this research did not noticeably improve the classes. For the pixel-based classification approach, the Maximum Likelihood algorithm was found to perform better on VHR UAV imagery than the other two algorithms, Minimum Distance and Mahalanobis Distance. The differences in overall accuracy among the pixel-based algorithms were clear: 86% for Maximum Likelihood, 80% for Minimum Distance and 76% for Mahalanobis Distance. The Average Precision (AP) measure was calculated to compare the pixel-based and object-based approaches; for the buildings class the result was higher for the object-based approach, with an AP of 0.9621 for object-based classification versus 0.9152 for pixel-based classification. The results revealed that pixel-based classification is still effective and applicable to UAV imagery; however, the object-based classification carried out with the Nearest Neighbour algorithm produced more appealing classes with higher accuracy. It was also concluded that OBIA offers more power for extracting geographic information and easier integration within GIS. The results of this research are expected to be applicable to classifying UAV imagery for LULC applications.
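For reference, the two accuracy measures quoted above can be computed as in the short sketch below: overall accuracy as the confusion-matrix diagonal divided by the total sample count, and average precision for a single class such as buildings. The reference labels, error rate and confidence scores here are synthetic placeholders, not the thesis data.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, average_precision_score

rng = np.random.default_rng(1)
reference = rng.integers(0, 3, size=5000)        # ground-truth class per checked sample
predicted = reference.copy()
flip = rng.random(5000) < 0.15                   # simulate roughly 15% misclassification
predicted[flip] = rng.integers(0, 3, size=flip.sum())

# Overall accuracy = sum of the confusion-matrix diagonal / total samples.
cm = confusion_matrix(reference, predicted)
overall_accuracy = np.trace(cm) / cm.sum()

# AP for one class treats that class as "positive" and needs a per-sample score;
# a noisy indicator stands in here for a classifier's confidence output.
is_building = (reference == 0).astype(int)
building_score = is_building + rng.normal(0, 0.3, size=5000)
ap_buildings = average_precision_score(is_building, building_score)

print(f"overall accuracy: {overall_accuracy:.2%}, AP (buildings): {ap_buildings:.4f}")
```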

Identifier: oai:union.ndltd.org:UPSALLA1/oai:DiVA.org:hig-23993
Date: January 2015
Creators: Ali, Fadi
Publisher: Högskolan i Gävle, Samhällsbyggnad, GIS
Source Sets: DiVA Archive at Upsalla University
Language: English
Detected Language: English
Type: Student thesis, info:eu-repo/semantics/bachelorThesis, text
Format: application/pdf
Rights: info:eu-repo/semantics/openAccess
