About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
91

Fusion of images from dissimilar sensor systems

Chow, Khin Choong 12 1900 (has links)
Approved for public release; distribution is unlimited. / Different sensors exploit different regions of the electromagnetic spectrum; therefore, a multi-sensor image fusion system can take full advantage of the complementary capabilities of the individual sensors in the suite to produce information that cannot be obtained by viewing the images separately. In this thesis, a framework for the multiresolution fusion of night vision device and thermal infrared imagery is presented. It encompasses a wavelet-based approach that supports both pixel-level and region-based fusion and aims to maximize scene content by incorporating spectral information from both source images. In pixel-level fusion, the source images are decomposed into different scales, and salient directional features are extracted and selectively fused by comparing the corresponding wavelet coefficients. To increase the degree of subject relevance in the fusion process, a region-based approach is proposed that uses a multiresolution segmentation algorithm to partition the image domain at different scales. The characteristics of each region are then determined and used to guide the fusion process. The experimental results obtained demonstrate the feasibility of the approach. Potential applications of this development include improvements in night piloting (navigation and target discrimination), law enforcement, etc. / Civilian, Republic of Singapore
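Below is a minimal, hedged sketch of the pixel-level wavelet fusion rule described in the abstract above: both source images are decomposed with a 2-D wavelet transform and, at each detail-coefficient position, the coefficient with the larger magnitude is kept. The use of PyWavelets, the `db2` wavelet, three decomposition levels, and averaging of the approximation band are illustrative assumptions, not details taken from the thesis.

```python
import numpy as np
import pywt  # PyWavelets; an assumed choice of library

def fuse_wavelet(img_a, img_b, wavelet="db2", levels=3):
    """Fuse two co-registered grayscale images (e.g. image-intensified and
    thermal IR) by keeping, at each detail-band position, the coefficient
    with the larger magnitude; the coarse approximation bands are averaged."""
    ca = pywt.wavedec2(img_a, wavelet, level=levels)
    cb = pywt.wavedec2(img_b, wavelet, level=levels)

    fused = [0.5 * (ca[0] + cb[0])]           # approximation band: average
    for da, db in zip(ca[1:], cb[1:]):        # detail bands: max-abs selection
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(da, db)))
    return pywt.waverec2(fused, wavelet)
```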
92

Ground Target Tracking with Multi-Lane Constraint

Chen, Yangsheng 15 May 2009 (has links)
Knowledge of the lane in which a target is located is of particular interest in on-road surveillance and target tracking systems. We formulate the problem and propose two approaches for on-road target estimation with lane tracking. The first approach is lane identification based on a Hidden Markov Model (HMM) framework. Two identifiers are developed according to different optimality criteria: optimality of the whole lane sequence, and optimality of the target's current lane given the whole observation sequence. The second approach is on-road target tracking with lane estimation. We propose a 2D road representation that additionally allows the lateral motion of the target to be modeled. For fusion of the radar- and image-sensor-based measurements, we develop three IMM-based estimators that use different fusion schemes: centralized, distributed, and sequential. Simulation results show that the two proposed methods provide new capabilities and achieve improved estimation accuracy for on-road target tracking.
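As an illustration of the first identification criterion mentioned above (the most likely whole lane sequence), the sketch below runs Viterbi decoding over a discrete HMM whose states are lanes. The discrete observation alphabet and the matrices `pi`, `A`, and `B` are illustrative placeholders; the thesis works with continuous radar/image measurements, and its specific identifiers are not reproduced here.

```python
import numpy as np

def viterbi_lane_sequence(obs, pi, A, B):
    """Most likely lane sequence for a discrete-observation HMM.
    obs: observation indices over time; pi: prior over lanes;
    A[i, j]: P(lane j at t+1 | lane i at t); B[i, o]: P(obs o | lane i)."""
    T, n = len(obs), len(pi)
    log_delta = np.log(pi) + np.log(B[:, obs[0]])
    backptr = np.zeros((T, n), dtype=int)
    for t in range(1, T):
        cand = log_delta[:, None] + np.log(A)      # previous lane -> current lane
        backptr[t] = cand.argmax(axis=0)
        log_delta = cand.max(axis=0) + np.log(B[:, obs[t]])
    path = [int(log_delta.argmax())]
    for t in range(T - 1, 0, -1):                  # trace the best path back
        path.append(int(backptr[t, path[-1]]))
    return path[::-1]
```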
93

Démélange d'images radar polarimétrique par séparation thématique de sources / Unmixing polarimetric radar images based on land cover type

Giordano, Sébastien 30 November 2015 (has links)
Cette thèse s'inscrit dans le contexte de l'amélioration de la caractérisation de l'occupation du sol à partir d'observations de télédétection de natures très différentes : le radar polarimétrique et les images optiques multispectrales. Le radar polarimétrique permet la détermination de mécanismes de rétrodiffusion provenant de théorèmes de décomposition de l'information polarimétrique utiles à la classification des types d'occupation du sol. Cependant ces décompositions sont peu compréhensibles lorsque plusieurs classes thématiques co-existent dans des proportions très variables au sein des cellules de résolution radar. Le problème est d'autant plus important que le speckle inhérent à l'imagerie radar nécessite l'estimation de ces paramètres sur des voisinages locaux. Nous nous interrogeons alors sur la capacité des données optiques multispectrales sensiblement plus résolues spatialement que le radar polarimétrique à améliorer la compréhension des mécanismes radar. Pour répondre à cette question, nous mettons en place une méthode de démélange des images radar polarimétrique par séparation thématique de sources. L'image optique peut être considérée comme un paramètre de réglage du radar fournissant une vue du mélange. L'idée générale est donc de commencer par un démélange thématique (décomposer l'information radar sur les types d'occupation du sol) avant de réaliser les décompositions polarimétriques (identifier des mécanismes de rétrodiffusion). Dans ce travail nous proposons d'utiliser un modèle linéaire et présentons un algorithme pour réaliser le démélange thématique. Nous déterminons ensuite la capacité de l'algorithme de démélange à reconstruire le signal radar observé. Enfin nous évaluons si l'information radar démélangée contient de l'information thématique pertinente. Cette évaluation est réalisée sur des données simulées que nous avons générées et sur des données Radarsat-2 complètement polarimétriques pour un cas d'application de mélange sol nu/forêt. Les résultats montrent que, malgré le speckle, la reconstruction est valable. Il est toujours possible d'estimer localement des bases thématiques permettant de décomposer l'information radar polarimétrique puis de reconstruire le signal observé. Cet algorithme de démélange permet aussi d'assimiler de l'information portée par les images optiques. L'évaluation de la pertinence thématique des bases de la décomposition est plus problématique. Les expériences sur des données simulées montrent que celles-ci représentent bien l'information thématique souhaitée, mais que cette bonne estimation est dépendante de la nature des types thématiques et de leurs proportions de mélange. Cette méthode nécessite donc des études complémentaires sur l'utilisation de méthodes d'estimation plus robustes aux statistiques des images radar. Son application à des images radar de longueur d'onde plus longue pourrait permettre, par exemple, une meilleure estimation du volume de végétation dans le contexte de forêts ouvertes / Land cover is a layer of information of significant interest for land management issues. In this context, combining remote sensing observations of different types is expected to produce more reliable results on land cover classification. The objective of this work is to explore the use of polarimetric radar images in association with co-registered higher resolution optical images. Extracting information from a polarimetric representation consists in decomposing it with target decomposition algorithms.
Understanding these mechanisms is challenging, as they are mixed inside the radar resolution cell, but it is the key to producing a reliable land cover classification. The problem when using these target decomposition algorithms is that average physical parameters are obtained. As a result, each land cover type within a mixed pixel might not be well described by the average polarimetric parameters. The effect is all the more important as the speckle affecting radar observations requires a local estimation of the polarimetric matrices. In this context, we chose to assess whether optical images can improve the understanding of radar images at the observation scale so as to retrieve more information. Spatial and spectral unmixing methods, traditionally designed for optical image fusion, were found to be an interesting framework. As a consequence, the idea of unmixing physical radar scattering mechanisms with the help of the optical images is proposed. The original method developed is the decomposition of the polarimetric information based on land cover type. This thematic decomposition is performed before applying the usual target decomposition algorithms. A linear mixing model for radar images and an unmixing algorithm are proposed in this document. Having shown that the linear unmixing model is able to separate polarimetric information on a land cover type basis, the information contained in the unmixed matrices is evaluated. The assessment is carried out with simulated data that we generated and with fully polarimetric radar images from the Radarsat-2 satellite. For this experiment, bare soil and forested area were considered as the land cover types. It was found that, despite speckle, the radar information reconstructed after unmixing is statistically consistent with the observations. Moreover, the unmixing algorithm is capable of assimilating information from the optical images. The question of whether the unmixed radar images contain relevant thematic information is more challenging. Results on real and simulated data show that this capacity depends on the types of land cover considered and their respective proportions. Future work will be carried out to make the estimation step more robust to speckle and to test this unmixing algorithm on longer-wavelength radar images. In that case, this method could be used to obtain a better estimation of vegetation biomass in the context of open forested areas.
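The linear mixing model mentioned in the abstract above can be illustrated with a small sketch: if a prior optical classification provides per-pixel class proportions, per-class polarimetric matrices can be recovered by linear least squares before any target decomposition is applied. The function name, the use of 3x3 coherency/covariance matrices, and plain least squares as the solver are assumptions for illustration, not the algorithm actually developed in the thesis.

```python
import numpy as np

def unmix_coherency(T_obs, abundances):
    """Thematic unmixing of polarimetric matrices under a linear mixing model.
    T_obs: (N, 3, 3) complex coherency/covariance matrices observed over a
    local neighbourhood; abundances: (N, K) class proportions per pixel,
    derived from the higher-resolution optical image.
    Returns (K, 3, 3) per-class matrices solving T_i ~ sum_k a_ik * T_k."""
    n = T_obs.shape[0]
    k = abundances.shape[1]
    flat = T_obs.reshape(n, 9)                                # 9 linear systems share the same design matrix
    sol, *_ = np.linalg.lstsq(abundances, flat, rcond=None)   # (K, 9) least-squares solution
    return sol.reshape(k, 3, 3)
```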
94

Application of artificial neural networks to deduce robust forecast performance in technoeconomic contexts

Unknown Date (has links)
The focus of this research is forecasting in technoeconomic contexts using a set of novel artificial neural networks (ANNs). Such efforts entail quantitatively estimating the likelihood of future events (or unknown outcomes/effects) based on past and current information about observed events (or known causes). Commensurate with the scope and objectives of the research, the specific topics addressed are as follows: a review of various methods adopted in technoeconomic forecasting, identifying econometric projections that can be used for forecasting via artificial neural network (ANN)-based simulations; developing and testing a compatible version of the ANN designed to support a dynamic sigmoidal (squashing) function that morphs to the stochastic trends of the ANN input, such that the network architecture is pruned for reduced complexity across the iterative training schedule, leading to the realization of a constructive artificial neural network (CANN); formulating a training schedule for an ANN with sparsely sampled data via sparsity removal with a cardinality-enhancement procedure (through Nyquist sampling) and invoking a statistical bootstrapping (resampling) technique on the cardinality-improved subset so as to obtain the enlarged ensemble of pseudoreplicates required for robust training of the test ANN, where the training and prediction exercises correspond to optimally elucidating output predictions within the technoeconomic framework of the power generation considered; prescribing a cone-of-error to alleviate over- or under-prediction and support prudent interpretation of the results, and squeezing the cone-of-error into a final cone-of-forecast to render the forecast estimation/inference more precise; designing an ANN-based fuzzy inference engine (FIE) to ascertain ex ante forecast details based on sparse sets of ex post data gathered in technoeconomic contexts, involving a novel method of fusing fuzzy considerations and data sparsity; and, lastly, summarizing the results with essential conclusions and identifying open questions for future research. / Includes bibliography. / Dissertation (Ph.D.)--Florida Atlantic University, 2014. / FAU Electronic Theses and Dissertations Collection
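The bootstrapping step described above (enlarging a sparse, cardinality-enhanced sample into an ensemble of pseudoreplicates for robust ANN training) can be sketched as follows. The function name, replicate count, and the simple resample-with-replacement scheme are illustrative assumptions; the dissertation's actual sparsity-removal and training schedule are not reproduced here.

```python
import numpy as np

def bootstrap_pseudoreplicates(x, y, n_replicates=200, seed=None):
    """Draw bootstrap pseudoreplicates of an (x, y) training set so that a
    sparse sample yields an enlarged ensemble for network training."""
    rng = np.random.default_rng(seed)
    x, y = np.asarray(x), np.asarray(y)
    xs, ys = [], []
    for _ in range(n_replicates):
        idx = rng.integers(0, len(x), size=len(x))   # sample with replacement
        xs.append(x[idx])
        ys.append(y[idx])
    return np.concatenate(xs), np.concatenate(ys)
```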
95

Patterns for wireless sensor networks

Unknown Date (has links)
Sensors are shaping many activities in our society, with an endless array of potential applications in military, civilian, and medical domains. They support real-world applications ranging from common household appliances to complex systems. Technological advancement has enabled sensors to be used in medical applications, wherein they are deployed to monitor patients and assist disabled patients. Sensors have been invaluable in saving lives, be it a soldier's life on a remote battlefield or a civilian's life in a disaster area or natural calamity. In every application the sensors are deployed in a pre-defined manner to perform a specific function. Understanding the basic structure of a sensor node is essential, as this is helpful in using sensors in devices and environments that have not yet been explored. In this research, patterns are used to present a more abstract view of the structure and architecture of sensor nodes and wireless sensor networks. This helps an application designer choose from different types of sensor nodes and sensor network architectures for applications such as robotic landmine detection or remote patient monitoring systems. Moreover, it also helps the network designer reuse, combine, or modify the architectures to suit more complex needs. More importantly, they can be integrated with complete IT applications. One of the important applications of wireless sensor networks in the medical field is the remote patient monitoring system, and in this work patterns were developed to describe the architecture of such a system. / This pattern describes how to connect sensor nodes and other wireless devices with each other to form a network that monitors a person's vital signs and reports them to a central system, which can be accessed by the patient's healthcare provider for treatment purposes. This system represents one of the most important applications of sensors, one that needs to be integrated with medical records, and the use of patterns makes this integration much simpler. / by Anupama Sahu. / Thesis (M.S.C.S.)--Florida Atlantic University, 2010. / Includes bibliography. / Electronic reproduction. Boca Raton, Fla., 2010. Mode of access: World Wide Web.
96

Mining and fusing data for ocean turbine condition monitoring

Unknown Date (has links)
An ocean turbine extracts the kinetic energy from ocean currents to generate electricity. Machine Condition Monitoring (MCM) / Prognostic Health Monitoring (PHM) systems allow for self-checking and automated fault detection, and are integral in the construction of a highly reliable ocean turbine. MCM/PHM systems enable real-time health assessment, prognostics, and advisory generation by interpreting data from sensors installed on the machine being monitored. To effectively utilize sensor readings for determining the health of individual components, macro-components, and the overall system, these measurements must somehow be combined or integrated to form a holistic picture. The process used to perform this combination is called data fusion. Data mining and machine learning techniques allow for the analysis of these sensor signals, maintenance history, and other available information (such as expert knowledge) to automate decision making and other such processes within MCM/PHM systems. ... This dissertation proposes an MCM/PHM software architecture employing the techniques that the experiments determined to be best suited to this application. Our work also offers a data fusion framework applicable to ocean machinery MCM/PHM. Finally, it presents a software tool for monitoring ocean turbines and other submerged vessels, implemented according to industry standards. / by Janell A. Duhaney. / Thesis (Ph.D.)--Florida Atlantic University, 2012. / Includes bibliography. / Mode of access: World Wide Web. / System requirements: Adobe Reader.
97

3D human gesture tracking and recognition by MEMS inertial sensor and vision sensor fusion. / 基於MEMS慣性傳感器和視覺傳感器的三維姿勢追蹤和識別系統 / CUHK electronic theses & dissertations collection / Ji yu MEMS guan xing chuan gan qi he shi jue chuan gan qi de san wei zi shi zhui zong he shi bie xi tong

January 2013 (has links)
Zhou, Shengli. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2013. / Includes bibliographical references (leaves 133-140). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstracts also in Chinese.
98

Fusion of remote sensing imagery: modeling and application. / CUHK electronic theses & dissertations collection

January 2013 (has links)
Zhang, Hankui. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2013. / Includes bibliographical references (leaves 99-118). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstract also in Chinese.
99

Spatial and temporal data fusion for generating high-resolution land cover imagery. / CUHK electronic theses & dissertations collection

January 2012 (has links)
土地利用/覆盖变化是地球上最重要的景观之一,同全球环境变化高度相关。通过对全球变化的整体模拟以及综合评价研究,可以了解全球气候变化运行机制以及人地关系。同时,全球尺度的土地利用/覆盖变化及其驱动机制研究,将揭示人类在全球气候变化机制中所起的作用,使人类更好地适应全球环境的变化。目前全球尺度的土地利用/覆盖研究大多是基于现有的五种欧洲或美国开发的全球地表覆盖产品,这些产品在一定程度上满足了全球变化研究的基本要求。但是,仍然存在一些不足之处,如统一的分类系统,精度低,产品之间的不一致以及低时效性等,使得这些产品并不适合全球环境变化的对比研究,也不能满足建立更高的精度和更可靠的全球气候变化模型的要求。因此,开发高分辨率,实时的地表覆盖产品,已成为当前全球变化研究的紧迫需要。 / 目前,遥感影像已广泛被用于制作全球地表覆盖产品,但由于传感器的技术要求和资金预算的限制,影像的空间和时间分辨率不能满足更高精度和可靠的全球变化研究需要。鉴于此,迫切需要我们研究和开发更加先进的卫星影像处理方法和地表覆盖产品的生产技术,为全球变化研究提供高精度和高可靠性的地表覆盖产品。 / 因此,为了提供更多的时间和更高空间分辨率的卫星影像以及地表覆盖产品,以更好地开展全球变化研究。本文主要从技术层面上,研究利用多源遥感影像的优点,生成高分辨率和多时相的卫星合成影像,并在此基础上发展了卫星数据融合理论和方法。本文研究中,传统的光谱空间数据融合理论将被回顾和充分讨论,考虑到卫星影像的多时相特征,传统的数据融合理论在时间维度得到扩展,本文将提出新的时空数据融合方法,并应用于植被监测和土地利用制图。 / 通过对融合理论及相关方法的系统学习,本文对各种融合方法进行了系统的回顾与总结,比如基于HIS变换图像融合方法 ,基于小波变换的图像融合方法,时空自适应反射融合模型(STARFM)等,并从遥感应用的角度,提出各种方法的优缺点。结合本文的研究目标,以下为本论文的主要研究内容。 / (1)数据融合相关理论将得到系统的研究和总结,包括各种融合模型及其应用,如基于IHS变换,PCA变换,或者小波分析的数据融合方法,等等。同时,结合具体应用归纳并总结了这些方法的优缺点。 / (2)由于传统数据融合方法依赖于空间及光谱信息,很难处理多源影像数据所蕴含的时空变化信息。因此,本文中,传统数据融合理论和方法在考虑到时间信息后得到改善和扩展。本文通过结合高空间分辨率Landsat数据和高时间分辨率MODIS数据为例,提出两种不同的时空数据融合方法。实验结果也表明,他们适合于处理多时空数据集成, 并能够满足全球变化研究对高质量数据的需要。 / (3)时空数据融合建模中的主要问题有两个,第一个问题是不同数据源之间具有不一致性,如不同卫星数据具有不同的地表反射率以及不同的数据可靠性。第二个是地表覆盖的季节性或者土地利用变化规则在空间和时间的维度具有不确定性,尤其是在复杂地区。考虑这些问题,本文在基于时间和空间自适应反射融合模型(STARFM)的基础上,提出一种新的改进模型,结果表明,它将比原有模型更为有效和更为准确的生成高分辨率合成影像数据。 / (4)混合像元问题是处理卫星数据中的一个常见问题。对于多源卫星数据来说,一个低分辨率图像像素区域将包含多个高分辨率图像像素。因此,不同数据源所获得的遥感数据将会因为混合像元问题从而影响到地表反射率数据在空间尺度上的差异,并影响到最终的融合精度。为了解决时空多源数据融合中的混合像元问题,本文将提出一种改进的基于附加条件的混合像元解缠的时空数据融合方法,实验结果表明它是适合植被监测应用,特别是具有先验土地覆盖图的地区。 / (5)在时空数据融合方法产生的一系列高分辨率合成影像的基础上,时空马尔可夫随机场分类方法被提出并用于研制生产高分辨率土地覆盖产品,该方法利用影像的时空上下文信息。这种方法提供了新的策略去制作土地覆盖产品 ,在缺乏高分辨率影像的地区。实验结果表明,它的精度是可以接受的,可以为缺乏高分辨率数据地区提供高品质的土地覆盖产品。 / Land use/cover change is one of the most important landscapes on the earth and it is highly related to global environmental change, based on which an overall simulation and comprehensive evaluation of global change research can be achieved for understanding the global change mechanism and the linkages between the human and natural environments. Moreover, study of global-scale land use/cover change and its driving mechanism will reveal the human role in global change mechanisms and processes for human adaptation to global environmental change. Most of the current global-scale land use/cover research is based on the existing five land cover products that have been developed by Europe and the US, and these indeed meet the basic requirements for the global change research to some extent. However, certain shortcomings still exist, such as their unified classification system, low accuracy, poor inconsistency, weak timeliness, etc., so, it is impossible to take the comparative global environmental change research as a basis for building more highly accurate and more reliable global change models, and it is urgent and necessary to develop a high-resolution, and up-to-date land cover product for global change research. / Currently, remote sensing imagery has been widely used for generating global land cover products, but due to certain physical and budget limitations related to the sensors, their spatial and temporal resolution are too low to attain more accurate and more reliable global change research. In this situation, there is an urgent need to study and develop a more advanced satellite image processing method and land cover producing techniques to generate higher resolution images and land cover products for global change research. 
/ Accordingly, in order to provide more multi-temporal, high-resolution images and land cover products for global change research, this research focuses mainly on the technical level, using the complementary advantages of satellite images from different sources to generate high-resolution, multi-temporal images and to develop satellite data fusion theory and methods. In this research, the traditional data fusion theory will be fully discussed and an improved scheme will be produced, taking into consideration the temporal information from satellite images at different times. Consequently, the spatial and temporal data fusion method will be proposed and applied to the monitoring of vegetation growth and land cover mapping. / Through a comprehensive study of the theories and methods related to data fusion, various methods are systematically reviewed and summarized, such as IHS transformation image fusion, wavelet transform image fusion, the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM), etc. The advantages and disadvantages of these methods are highlighted according to their specific applications in the field of remote sensing. Based on my research target, the following are the main contents of this thesis: / (1) Data fusion theory will be systematically studied and summarized, including various fusion models and specific applications, such as IHS transformation, PCA transformation, wavelet analysis based data fusion, etc. Furthermore, their advantages and disadvantages are pointed out in relation to specific applications. / (2) Because traditional data fusion methods rely on spatial information, it is hard for them to deal with multi-source data fusion involving temporal variation; the traditional data fusion theory and methods will therefore be improved by a consideration of temporal information. Accordingly, some spatial and temporal data fusion methods will be proposed, in which both high-spatial-resolution, low-temporal-frequency imagery and low-spatial-resolution, high-temporal-frequency imagery are incorporated. Our experiments also show that they are suitable for dealing with multi-temporal data integration and generating high-resolution, multi-temporal images for global change research. / (3) There are two main issues related to spatial and temporal data fusion theory. The first is that there are inconsistencies in different images, such as the different levels of land surface reflectance and different degrees of reliability of multi-source satellite data. The second is the rule of phenological variation/land cover variation in both the spatial and temporal dimensions, particularly in areas with heterogeneous landscapes. When considering these issues, an improved STARFM (spatial and temporal adaptive reflectance fusion model) is proposed, based on the original model, and the preliminary results show that it is more efficient and accurate in generating high-resolution land surface imagery than its predecessor. / (4) Mixed pixels are a common issue in relation to satellite data processing, as one pixel in a coarse resolution image will constitute several pixels in a high-resolution image of the same size, so different levels of land surface reflectance will be acquired from multi-source satellite data because of the mixed pixel effect on the coarse resolution data, and the final accuracy of the fused result will be affected if these data are subjected to data fusion.
In order to solve the mixed pixel issue in multi-source data fusion, an improved spatial and temporal data fusion approach, based on a constrained unmixing technique, was developed in this thesis. The experimental results show that it is well-suited to the phenological monitoring task when a prior land cover map is available (see the sketch following this record). / (5) Based on the high-resolution reflectance images generated from spatial and temporal fusion, a spatial and temporal classification method based on the spatial and temporal Markov random field was developed to produce a high-resolution land cover product, in which both spatial and temporal contextual information are included within the classification scheme. This method provides a new strategy for generating high-resolution land cover products in areas without high-resolution images at a given time, and the experimental results show that it is acceptable and suitable for generating high quality land cover products in areas for which there is a lack of high-resolution data. / Detailed summary in vernacular field only. / Xu, Yong. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2012. / Includes bibliographical references (leaves 151-158). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstract also in Chinese. / ABSTRACT --- p.II / Acknowledgement --- p.VII / Contents --- p.VIII / List of Figures --- p.X / List of Tables --- p.XII / Abbreviations --- p.XIV / Chapter CHAPTER 1 --- Introduction --- p.1 / Chapter 1.1 --- Background --- p.1 / Chapter 1.2 --- Research objectives and significance --- p.5 / Chapter 1.3 --- Research issues --- p.11 / Chapter 1.4 --- Research framework and methodology --- p.13 / Chapter 1.5 --- Organization of thesis --- p.16 / Chapter CHAPTER 2 --- Review of the Existing Image Fusion Methods --- p.19 / Chapter 2.1 --- Overview --- p.19 / Chapter 2.2 --- The multi-source image fusion method --- p.24 / Chapter 2.3 --- The multi-temporal, multi-source image fusion method --- p.29 / Chapter 2.4 --- Details of STARFM --- p.35 / Chapter 2.5 --- Accuracy of the assessment of the image fusion method --- p.41 / Chapter 2.6 --- Summary and discussion --- p.44 / Chapter CHAPTER 3 --- An Improved Spatial and Temporal Adaptive Reflectance Data Fusion Model --- p.47 / Chapter 3.1 --- Introduction --- p.48 / Chapter 3.2 --- Theoretical basis of the spatial and temporal reflectance data fusion model --- p.49 / Chapter 3.3 --- An improved spatial and temporal reflectance data fusion model --- p.57 / Chapter 3.4 --- Experiments with simulated data --- p.60 / Chapter 3.5 --- Experiments with actual data from the BOREAS and PANYU study areas --- p.67 / Chapter 3.6 --- Summary and discussion --- p.76 / Chapter CHAPTER 4 --- Spatial and Temporal Data Fusion Method Using the Constrained Unmixing Approach --- p.78 / Chapter 4.1 --- Introduction --- p.78 / Chapter 4.2 --- Methodology --- p.80 / Chapter 4.3 --- Experiments with simulated data --- p.86 / Chapter 4.4 --- Experiments with actual data --- p.90 / Chapter 4.5 --- Applications for NDVI and Land Surface Reflectance Monitoring --- p.96 /
Chapter 4.6 --- Summary and conclusions --- p.105 / Chapter CHAPTER 5 --- Spatial and Temporal Classification of Synthetic Satellite Imagery: Land Cover Mapping and Accuracy Validation --- p.107 / Chapter 5.1 --- Introduction --- p.107 / Chapter 5.2 --- Study sites and data sources --- p.109 / Chapter 5.3 --- Methodology --- p.113 / Chapter 5.4 --- Synthetic Data Generation at the HARV and PANYU Study Areas --- p.119 / Chapter 5.5 --- Land Cover Mapping with Synthetic Data --- p.133 / Chapter 5.6 --- Summary and discussion --- p.142 / Chapter CHAPTER 6 --- Summary and Conclusions --- p.144 / Chapter 6.1 --- Summary --- p.144 / Chapter 6.2 --- Contributions --- p.147 / Chapter 6.3 --- Recommendations for further research --- p.149 / REFERENCES --- p.151
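The constrained unmixing idea in the record above (item 4 and Chapter 4) can be illustrated with a minimal sketch: each coarse-resolution pixel is modelled as the area-weighted mixture of per-class reflectances, with class fractions taken from a prior high-resolution land cover map, and the class reflectances are recovered under a simple non-negativity constraint. The use of SciPy's `nnls`, the single-band formulation, and non-negativity as the only constraint are assumptions for illustration; the thesis's actual constraints and windowing are not reproduced here.

```python
import numpy as np
from scipy.optimize import nnls

def unmix_coarse_band(coarse_refl, class_fractions):
    """Estimate per-class reflectances for one spectral band.
    coarse_refl: (N,) reflectances of N coarse (e.g. MODIS-like) pixels;
    class_fractions: (N, K) class area fractions inside each coarse pixel,
    derived from a prior high-resolution land cover map.
    Returns (K,) non-negative per-class reflectances."""
    refl, _residual = nnls(class_fractions, np.asarray(coarse_refl, dtype=float))
    return refl
```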
100

Pedestrian Detection Based on Data and Decision Fusion Using Stereo Vision and Thermal Imaging

Sun, Roy 25 April 2016 (has links)
Pedestrian detection is a canonical instance of object detection that remains a popular topic of research and a key problem in computer vision due to its diverse applications, which have the potential to substantially improve quality of life. In recent years, the number of approaches to detecting pedestrians in monocular and binocular images has grown steadily. However, the use of multispectral imaging is still uncommon. This thesis presents a novel approach to data and feature fusion in a multispectral imaging system for pedestrian detection. It also includes the design and construction of a test rig that allows rapid collection of real-world driving data. The mathematical theory of the trifocal tensor is applied to post-process these data, enabling pixel-level data fusion across a multispectral data set. Performance results based on commonly used SVM classification architectures are evaluated against the collected data set. Lastly, a novel cascaded SVM architecture, used for both classification and detection, is discussed. Performance improvements through the use of feature fusion are demonstrated.
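A two-stage SVM cascade in the spirit of the architecture mentioned above might look like the sketch below: a fast linear SVM on cheap features rejects easy negatives, and only the windows it accepts are passed to a slower kernel SVM on richer (fused) features. The class name, feature split, margin threshold, and scikit-learn as the library are all assumptions for illustration, not the configuration used in the thesis.

```python
import numpy as np
from sklearn.svm import LinearSVC, SVC

class TwoStageSVMCascade:
    """Stage 1: cheap, high-recall linear SVM; stage 2: RBF SVM applied only
    to the candidate windows that survive stage 1."""

    def __init__(self, pass_margin=-1.0):
        self.pass_margin = pass_margin            # generous threshold keeps recall high
        self.stage1 = LinearSVC(C=1.0)
        self.stage2 = SVC(kernel="rbf", C=1.0)

    def fit(self, X_fast, X_rich, y):
        """X_fast / X_rich: cheap vs. richer feature arrays for the same windows."""
        X_fast, X_rich, y = map(np.asarray, (X_fast, X_rich, y))
        self.stage1.fit(X_fast, y)
        keep = self.stage1.decision_function(X_fast) > self.pass_margin
        self.stage2.fit(X_rich[keep], y[keep])    # train stage 2 on the harder subset
        return self

    def predict(self, X_fast, X_rich):
        X_fast, X_rich = np.asarray(X_fast), np.asarray(X_rich)
        keep = self.stage1.decision_function(X_fast) > self.pass_margin
        labels = np.zeros(len(X_fast), dtype=int)  # rejected windows -> non-pedestrian
        if keep.any():
            labels[keep] = self.stage2.predict(X_rich[keep])
        return labels
```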
