  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Dimensionality Reduction and Fusion Strategies for the Design of Parametric Signal Classifiers

Kota, Srinivas 01 December 2010 (has links)
This dissertation focuses on two specific problems related to the design of parametric signal classifiers: dimensionality reduction to overcome the curse of dimensionality and information fusion to improve classification by exploiting complementary information from multiple sensors or multiple classifiers. Dimensionality reduction is achieved by introducing a strategy to rank and select a subset of principal component transform (PCT) coefficients that carry the most useful discriminatory information. The criteria considered for ranking transform coefficients include magnitude, variance, inter-class separation, and classification accuracies of individual transform coefficients. The ranking strategy not only facilitates overcoming the dimensionality curse for multivariate classifier implementation but also provides a means to further select, out of a rank-ordered set, a smaller set of features that give the best classification accuracies. Because the class-conditional densities of transform feature vectors are often assumed to be multivariate Gaussian, the dimensionality reduction strategy focuses on overcoming the specific problems encountered in the design of practical multivariate Gaussian classifiers using transform feature vectors. Through experiments with event related potentials (ERPs) and ear pressure signals, it is shown that the dimension of the feature space can be decreased quite significantly by means of the feature ranking and selection strategy. Furthermore, the resulting Gaussian classifiers yield higher classification accuracies than those reported in previous classification studies on the same signal sets. Amongst the four feature selection criteria, Gaussian classifiers using the maximum magnitude and maximum variance selection criteria gave the best classification accuracies across the two sets of classification experiments. 
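The maximum-variance selection criterion described above can be sketched in a few lines: rank the transform coefficients by their sample variance over the training set and keep only the top-ranked indices. This is an illustrative reconstruction under stated assumptions, not the dissertation's implementation; the function names are hypothetical.

```python
def rank_by_variance(vectors, k):
    # Rank transform-coefficient indices by sample variance (largest first)
    # and keep the k highest-variance coefficients as the feature subset.
    n = len(vectors)
    dim = len(vectors[0])
    means = [sum(v[i] for v in vectors) / n for i in range(dim)]
    variances = [sum((v[i] - means[i]) ** 2 for v in vectors) / n
                 for i in range(dim)]
    order = sorted(range(dim), key=lambda i: variances[i], reverse=True)
    return order[:k]

def reduce_dim(vector, selected):
    # Project a transform feature vector onto the selected coefficient indices.
    return [vector[i] for i in selected]
```

The other three criteria (magnitude, inter-class separation, per-coefficient accuracy) would follow the same rank-then-truncate pattern with a different scoring function.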
For the multisensor case, dimensionality reduction is achieved by introducing a spatio-temporal array model to observe the signals across channels and time simultaneously. A two-step process which uses the Kolmogorov-Smirnov test and the Lilliefors test is formulated to select the array elements which have different Gaussian densities across all signal categories. Selecting spatio-temporal elements that fit the assumed model and also differ statistically across the signal categories not only decreases the dimensionality significantly but also ensures high classification accuracies. The selection is dynamic in the sense that selecting spatio-temporal array elements corresponds to selecting samples of different sensors at different time instants. Each selected array element is classified using a univariate Gaussian classifier and the resulting decisions are fused into a decision fusion vector which is classified using a discrete Bayes classifier. The application of the resulting dynamic channel-selection-based classification strategy is demonstrated by designing and testing classifiers for multi-channel ERPs, and it is shown that the strategy yields high classification accuracies. Most noteworthy about the two dimensionality reduction strategies is the fact that the multivariate Gaussian signal classifiers developed can be implemented without having to collect a prohibitively large number of training signals simply to satisfy the dimensionality conditions. Consequently, the classification strategies can be beneficial for designing personalized human-machine interface (HMI) signal classifiers for individuals from whom only a limited number of training signals can reliably be collected due to severe disabilities. The information fusion strategy introduced is aimed at improving the performance of signal classifiers by combining signals from multiple sensors or by combining decisions of multiple classifiers.
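The core of the element-selection step is a distributional test per array element. A minimal sketch, assuming two signal categories and using a hand-rolled two-sample Kolmogorov-Smirnov statistic with a fixed threshold (the thesis additionally uses the Lilliefors test to check the Gaussian fit, which is omitted here):

```python
import bisect

def ks_statistic(sample_a, sample_b):
    # Two-sample Kolmogorov-Smirnov statistic: the largest vertical gap
    # between the two empirical CDFs.
    a, b = sorted(sample_a), sorted(sample_b)
    gap = 0.0
    for v in a + b:
        fa = bisect.bisect_right(a, v) / len(a)
        fb = bisect.bisect_right(b, v) / len(b)
        gap = max(gap, abs(fa - fb))
    return gap

def select_elements(class1_elems, class2_elems, threshold):
    # Keep the spatio-temporal array elements whose samples differ across
    # the two signal categories by more than the threshold.
    return [i for i, (s1, s2) in enumerate(zip(class1_elems, class2_elems))
            if ks_statistic(s1, s2) > threshold]
```

Each retained element index corresponds to one (sensor, time-instant) pair, which is what makes the channel selection dynamic.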
Fusion classifiers with diverse components (classifiers or data sets) outperform those with less diverse components. Determining component diversity is therefore of the utmost importance in the design of fusion classifiers, which are often employed in clinical diagnostics and numerous other pattern recognition problems. A new pairwise diversity-based ranking strategy is introduced to select a subset of ensemble components which, when combined, will be more diverse than any other component subset of the same size. The strategy is unified in the sense that the components can be either polychotomous classifiers or polychotomous data sets. Classifier fusion and data fusion systems are formulated based on the diversity selection strategy, and the application of the two fusion strategies is demonstrated through the classification of multi-channel ERPs. From the results it is concluded that data fusion outperforms classifier fusion. It is also shown that the diversity-based data fusion system outperforms a system using randomly selected data components. Furthermore, it is demonstrated that the combination of data components that yields the best performance, in a relative sense, can be determined through the diversity selection strategy.
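One common way to operationalize pairwise diversity is the disagreement measure. The sketch below greedily grows a subset that maximizes mean pairwise disagreement; it is an illustrative stand-in, since the abstract does not specify the exact diversity measure or search procedure used.

```python
def disagreement(preds_a, preds_b):
    # Pairwise diversity: fraction of samples on which two components'
    # decisions differ.
    return sum(x != y for x, y in zip(preds_a, preds_b)) / len(preds_a)

def select_diverse_subset(decisions, k):
    # Greedy sketch: seed with the most mutually diverse pair, then add
    # the component with the largest total disagreement with the subset.
    ids = list(decisions)
    pair = max(((a, b) for a in ids for b in ids if a < b),
               key=lambda p: disagreement(decisions[p[0]], decisions[p[1]]))
    chosen = list(pair)
    while len(chosen) < k:
        rest = [c for c in ids if c not in chosen]
        best = max(rest, key=lambda c: sum(
            disagreement(decisions[c], decisions[s]) for s in chosen))
        chosen.append(best)
    return chosen
```

Because the inputs are just decision sequences, the same routine applies whether the components are classifiers or data sets, matching the "unified" framing above.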
32

A digital identity management system

Phiri, Jackson January 2007 (has links)
Magister Scientiae - MSc / Recent years have seen an increase in the number of users accessing online services using communication devices such as computers and mobile phones, and card-based credentials such as credit cards. This has prompted most governments and business organizations to change the way they do business and manage their identity information. The arrival of online services has, however, made most Internet users vulnerable to identity fraud and theft, resulting in a subsequent increase in the number of reported cases, which costs the global industry substantial amounts. Today, with more powerful and effective technologies such as artificial intelligence, wireless communication, mobile storage devices and biometrics, it should be possible to devise a more effective multi-modal authentication system to help reduce cases of identity fraud and theft. A multi-modal digital identity management system is proposed as a solution for managing digital identity information, in an effort to reduce the cases of identity fraud and theft seen in most online services today. The proposed system uses technologies such as artificial intelligence and biometrics over current unsecured networks to maintain the security and privacy of users and service providers in a transparent, reliable and efficient way. In order to be authenticated in the proposed multi-modal authentication system, a user is required to submit more than one credential attribute. An artificial intelligence technique is used to implement information fusion, combining the user's credential attributes for optimum recognition. The information fusion engine is then used to implement the required multi-modal authentication system.
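A minimal sketch of fusing several credential attributes into one authentication decision, using simple weighted-sum score fusion. The modality names, weights and threshold are hypothetical; the thesis's fusion engine is AI-based and more elaborate.

```python
def fuse_scores(scores, weights):
    # Weighted-sum fusion of per-modality match scores in [0, 1].
    total = sum(weights[m] for m in scores)
    return sum(scores[m] * weights[m] for m in scores) / total

def authenticate(scores, weights, threshold):
    # Accept the claimed identity only if the fused score clears the threshold.
    return fuse_scores(scores, weights) >= threshold
```

Requiring more than one credential attribute means a single stolen credential (a high score in one modality) is no longer sufficient to clear the fused threshold.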
33

Contribution to evidential models for perception grids: application to intelligent vehicle navigation

Yu, Chunlei 15 September 2016 (has links)
For intelligent vehicle applications, a perception system is a key component for characterizing, in real time, a model of the driving environment surrounding the vehicle. When modeling the environment, obstacle information is the first feature that has to be managed, since collisions can be fatal for other road users or for the passengers on board the considered vehicle. Characterizing the occupied space is therefore crucial, but not sufficient for autonomous vehicles, since the control system needs to find the navigable space for safe trajectory planning. Indeed, in order to run on public roads with other users, the vehicle must follow the traffic rules, which are, for instance, described by markings painted on the carriageway. In this work, we focus on an ego-centered, grid-based approach to model the environment. The objective is to include, in a unified world model, obstacle information together with semantic road rules.
To model obstacle information, occupancy is handled by interpreting the information from different sensors into cell values. To model the semantics of the navigable space, we propose the notion of lane grids, which consists of integrating semantic lane information into the cells of the grid. The combination of these two levels of information gives a refined environment model. When interpreting sensor data into obstacle information, uncertainty inevitably arises from ignorance and errors. Ignorance is due to the perception of new areas, and errors come from noisy measurements and imprecise pose estimation. In this research, belief function theory is adopted to deal with these uncertainties, and we propose evidential models for different kinds of sensors such as lidars and cameras. Lane grids contain semantic lane information coming, for instance, from lane markings. To this end, we propose to use a prior map that contains detailed road information, including road orientation and lane markings. This information is extracted from the map using a pose estimate provided by a localization system. In the proposed model, we integrate lane information into the grids while taking into account the uncertainty of the estimated pose. The proposed algorithms have been implemented and tested on real data acquired on public roads. We developed algorithms in Matlab and C++ using the PACPUS software framework developed at the laboratory.
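In belief function theory, each sensor assigns a mass function to every grid cell over the frame {Free, Occupied}, and the cell's fused state comes from combining these masses. A minimal sketch of Dempster's rule of combination for one cell, with 'F' (free), 'O' (occupied) and 'FO' (unknown) as focal sets; the mass values in the test are made up for illustration, not taken from the thesis's sensor models:

```python
def dempster_combine(m1, m2):
    # Combine two mass functions over the frame {Free, Occupied}.
    # Focal sets: 'F' = free, 'O' = occupied, 'FO' = whole frame (unknown).
    keys = ['F', 'O', 'FO']

    def intersect(a, b):
        if a == 'FO':
            return b
        if b == 'FO':
            return a
        return a if a == b else None  # None = empty intersection (conflict)

    out = {k: 0.0 for k in keys}
    conflict = 0.0
    for a in keys:
        for b in keys:
            product = m1[a] * m2[b]
            s = intersect(a, b)
            if s is None:
                conflict += product
            else:
                out[s] += product
    if conflict >= 1.0:
        raise ValueError("total conflict: sources fully contradict")
    # Normalize by the non-conflicting mass.
    return {k: v / (1.0 - conflict) for k, v in out.items()}
```

Note how mass assigned to 'FO' models ignorance explicitly: a sensor that has not yet observed an area commits its mass to the whole frame rather than guessing.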
34

Fusion of displacement measurements from SAR imagery: application to seismo-volcanic modeling

Yan, Yajing 08 December 2011 (has links)
Following the successive launches of Earth-observation satellites carrying SAR (Synthetic Aperture Radar) sensors, the volume of available radar data has grown considerably. In this context, the fusion of displacement measurements derived from SAR imagery is promising both for the remote sensing community and for geophysics. With this in mind, this thesis proposes to extend conventional approaches by combining SAR image processing techniques, information fusion methods and geophysical knowledge.
First, this thesis explores several fusion strategies, namely joint inversion, pre-fusion and post-fusion, to reduce the uncertainty associated, on the one hand, with the estimation of the 3-dimensional (3D) displacement at the Earth's surface and, on the other hand, with the physical modeling that describes the source at depth of the displacement observed at the surface. We evaluate the advantages and disadvantages of each fusion strategy in terms of uncertainty reduction and robustness against noise. Second, we take into account the epistemic uncertainty, in addition to the random uncertainty, present in the measurements, and propose conventional and fuzzy approaches, based on probability theory and possibility theory respectively, to model these uncertainties. We analyze and highlight the efficiency of each approach in the context of each fusion strategy. The first application consists of estimating the 3D displacement fields at the Earth's surface due to the Kashmir earthquake of October 2005 and the eruption of Piton de la Fournaise in January 2004 on Reunion Island. The second application involves modeling the fault rupture at depth related to the Kashmir earthquake. The main achievements are evaluated from a methodological point of view in information processing and from a geophysical point of view. Methodologically, in order to address the major difficulties encountered in applying differential interferometry to measure the displacement induced by the Kashmir earthquake, a multi-scale strategy based on prior information from a deformation model, using local frequencies of the interferometric phase, is adopted successfully. Regarding uncertainty management, both random and epistemic uncertainties are analyzed and identified in the displacement measurements.
The conventional approach and a fuzzy approach, based respectively on probability theory and possibility theory, are proposed to model uncertainties and manage their propagation through the fusion system. In addition, comparisons between possibility distributions enrich the comparisons made simply between displacement values and indicate the relevance of possibility distributions in the considered context. Furthermore, pre-fusion and post-fusion, two fusion strategies differing from the commonly used joint inversion strategy, are proposed to reduce the heterogeneous uncertainties present in practice in the measurements and to get around the main limitations of joint inversion. Appropriate conditions for applying each uncertainty management approach are highlighted in the context of these fusion strategies. Geophysically, differential interferometry is applied to the Kashmir earthquake successfully for the first time, complementing previous studies based on measurements from the correlation of SAR and optical images, teleseismic measurements and in situ field measurements. Differential interferometry provides accurate displacement information in the far field relative to the fault position. This allows, on the one hand, reducing uncertainties associated with surface displacement measurements and with model parameters and, on the other hand, detecting post-seismic movements potentially present in the coseismic measurements that cover the post-seismic period. Moreover, taking epistemic uncertainty into consideration, and proposing a fuzzy approach for its management, provide a view of measurement uncertainty different from the one familiar to most geophysicists, and complement the knowledge of random uncertainty and the application of probability theory in this domain.
In particular, managing uncertainty with possibility theory makes it possible to overcome probability theory's tendency to under-estimate uncertainty. Finally, comparisons of the displacement measured from SAR images with the displacement measured from optical images and with in situ field measurements reveal the difficulty of interpreting different data sources that are only partially compatible with one another. The tools developed during this thesis are included in the MDIFF (Methods of Displacement Information Fuzzy Fusion) package within the "EFIDIR Tools", distributed under the GPL license.
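The possibilistic representation of an imprecise measurement is often a triangular possibility distribution: possibility 1 at the best estimate, falling to 0 at the bounds of the plausible interval. A generic sketch (not tied to this thesis's specific distributions) showing how such a distribution is evaluated and how the possibility of an event is the supremum over that event:

```python
def triangular_possibility(x, a, b, c):
    # Possibility degree of value x under a triangular possibility
    # distribution with support [a, c] and core (most plausible value) b.
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def possibility_of_interval(lo, hi, a, b, c, steps=1000):
    # The possibility of the event [lo, hi] is the supremum of the
    # distribution over the interval (approximated here on a grid).
    return max(triangular_possibility(lo + (hi - lo) * i / steps, a, b, c)
               for i in range(steps + 1))
```

Unlike a probability density, the distribution need not integrate to one, which is what lets it encode partial ignorance rather than forcing it into an overly precise density.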
35

Signature-based activity detection based on Bayesian networks acquired from expert knowledge

Fooladvandi, Farzad January 2008 (has links)
The maritime industry is experiencing one of its longest and fastest periods of growth. Hence, global maritime surveillance capacity is in great need of growth as well. The detection of vessel activity is an important objective of the civil security domain, but it may become problematic if audit data is uncertain. This thesis investigates whether Bayesian networks acquired from expert knowledge can detect activities with a signature-based detection approach. For this, a maritime pilot-boat scenario has been identified with a domain expert. Each of the scenario's activities has been divided into signatures, where each signature relates to a specific Bayesian network information node. The signatures were implemented to find evidence for the Bayesian network information nodes. AIS data containing real-world observations were used for testing, which showed that it is possible to detect the maritime pilot-boat scenario with this approach.
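The signature-to-information-node idea can be illustrated with a naive-Bayes-style update: each observed signature contributes evidence for or against the activity. The node name and all probabilities below are made up for illustration; they are not the thesis's expert-elicited network.

```python
def activity_posterior(prior, likelihoods, observations):
    # P(activity | observed signatures) for conditionally independent
    # signature nodes. likelihoods[sig] = (P(sig | activity),
    #                                      P(sig | no activity)).
    p_act, p_not = prior, 1.0 - prior
    for sig, seen in observations.items():
        p_true, p_false = likelihoods[sig]
        p_act *= p_true if seen else 1.0 - p_true
        p_not *= p_false if seen else 1.0 - p_false
    return p_act / (p_act + p_not)
```

A full Bayesian network relaxes the independence assumption between signatures, but the evidence-accumulation mechanics are the same.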
36

Evaluation and Implementation of Traceable Uncertainty for Threat Evaluation

Haglind, Carl January 2014 (has links)
Threat evaluation is used in various applications to find threatening objects or situations and neutralize them before they cause any damage. To make threat evaluation as user-friendly as possible, it is important to know where the uncertainties are. The Traceable Uncertainty method can make the threat evaluation process more transparent and, hopefully, easier to rely on. Traceable Uncertainty is used when different sources of information are combined to find support for the decision-making process. The uncertainty of the current information is measured before and after the combination. If the magnitude of uncertainty has changed by more than a threshold, a new branch is created which excludes the new information from the combination of evidence. Traceable Uncertainty had never been tested on a realistic scenario to investigate whether the method can be implemented in a large-scale system. The hypothesis of this thesis is that Traceable Uncertainty can be used in large-scale systems if its threshold parameter is tuned in the right way. Different threshold values were tested when recorded radar data were analyzed for threatening targets. Experiments combining randomly generated evidence were also analyzed for different threshold values. The results showed that a threshold value in the range [0.15, 0.25] generated a satisfying number of interpretations that were not too similar to each other. The results could also be filtered to remove unnecessary interpretations. This shows that, in this respect and for this data set, Traceable Uncertainty can be used in large-scale systems.
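The measure-combine-branch loop described above can be sketched with Shannon entropy as the uncertainty measure and a normalized product as the combination rule. Both choices are illustrative assumptions; the abstract does not specify the thesis's actual measure or combination operator.

```python
import math

def entropy(dist):
    # Shannon entropy (bits) of a discrete belief distribution.
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def combine(d1, d2):
    # Normalized product of two discrete belief distributions over
    # the same hypotheses.
    raw = {k: d1[k] * d2[k] for k in d1}
    z = sum(raw.values())
    return {k: v / z for k, v in raw.items()}

def traceable_combine(current, new_evidence, threshold):
    # Fuse the new evidence; if uncertainty changed by more than the
    # threshold, also keep a branch that excludes the new evidence.
    fused = combine(current, new_evidence)
    if abs(entropy(fused) - entropy(current)) > threshold:
        return [fused, current]  # two interpretations to trace
    return [fused]
```

A low threshold produces many branches (many traceable interpretations); a high one collapses everything into a single fused view, which matches the [0.15, 0.25] tuning result reported above.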
37

Transparency for Future Semi-Automated Systems : Effects of transparency on operator performance, workload and trust

Helldin, Tove January 2014 (has links)
More and more complex semi-automated systems are being developed, aiding human operators to collect and analyze data and information, and even to recommend decisions and act upon them. The goal of such development is often to help operators make better decisions faster while decreasing their workload. However, these promises are not always fulfilled, and several incidents have highlighted the fact that the introduction of automated technologies might instead increase the need for human involvement and expertise in the tasks carried out. The importance of communicating information about an automated system's performance, and of explaining its strengths and limitations to its operators, is strongly emphasized in the system transparency and operator-centered automation literature. However, feedback containing system qualifiers is rarely incorporated into the primary displays of automated systems, obscuring their transparency. In this thesis, we investigate the effects that explaining and visualizing system reasoning and performance parameters, in different domains, have on operators' trust, workload and performance. Different proof-of-concept prototypes have been designed with transparency characteristics in mind, and quantitative and qualitative evaluations have been carried out together with operators of these systems. Our results show that automation transparency can positively influence the performance and trust calibration of operators of complex systems, yet possibly at the cost of higher workload and longer decision-making times. Furthermore, this thesis provides recommendations for designers and developers of automated systems, in terms of general design concepts and guidelines for developing transparent automated systems for the future.
38

Information Acquisition in Data Fusion Systems

Johansson, Ronnie January 2003 (has links)
By purposefully utilising sensors, for instance by a data fusion system, the state of some system-relevant environment might be adequately assessed to support decision-making. The ever increasing access to sensors offers great opportunities, but also incurs grave challenges. As a result of managing multiple sensors one can, e.g., expect to achieve a more comprehensive, resolved, certain and more frequently updated assessment of the environment than would be possible otherwise. Challenges include data association, treatment of conflicting information and strategies for sensor coordination. We use the term information acquisition to denote the skill of a data fusion system to actively acquire information. The aim of this thesis is to instructively situate that skill in a general context, explore and classify related research, and highlight key issues and possible future work. It is our hope that this thesis will facilitate communication, understanding and future efforts for information acquisition. The previously mentioned trend towards utilisation of large sets of sensors makes us especially interested in large-scale information acquisition, i.e., acquisition using many and possibly spatially distributed and heterogeneous sensors. Information acquisition is a general concept that emerges in many different fields of research. In this thesis, we survey literature from, e.g., agent theory, robotics and sensor management. We, furthermore, suggest a taxonomy of the literature that highlights relevant aspects of information acquisition. We describe a function, perception management (akin to sensor management), which realizes information acquisition in the data fusion process, and pertinent properties of its external stimuli, sensing resources, and system environment. An example of perception management is also presented. The task is that of managing a set of mobile sensors that jointly track some mobile targets. The game theoretic algorithm suggested for distributing the targets among the sensors proves to be more robust to sensor failure than a measurement-accuracy-optimal reference algorithm.
Keywords: information acquisition, sensor management, resource management, information fusion, data fusion, perception management, game theory, target tracking
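For context, the assignment task in the example above can be stated very simply. The sketch below is only a naive nearest-sensor baseline of the same sensors-to-targets assignment problem, not the thesis's game-theoretic algorithm (which is precisely what outperforms such simple baselines under sensor failure).

```python
def assign_targets(sensors, targets):
    # Baseline assignment: each target is tracked by its nearest sensor
    # (squared Euclidean distance). sensors/targets map ids to (x, y).
    assignment = {}
    for t_id, (tx, ty) in targets.items():
        nearest = min(sensors, key=lambda s: (sensors[s][0] - tx) ** 2
                                             + (sensors[s][1] - ty) ** 2)
        assignment[t_id] = nearest
    return assignment
```

The weakness the thesis targets is visible here: if one sensor fails, every target assigned to it is simply dropped, whereas a game-theoretic redistribution lets the remaining sensors renegotiate coverage.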
39

Knowledge representation and stochastic multi-agent plan recognition

Suzic, Robert January 2005 (has links)
To incorporate new technical advances into the military domain and make those processes more efficient in accuracy, time and cost, the concept of Network Centric Warfare has been introduced in the US military forces. In Sweden, a similar concept has been studied under the name Network Based Defence (NBD). Here we present one of the methodologies, called tactical plan recognition, that is intended to support NBD in the future. Advances in sensor technology and modelling produce large sets of data for decision makers. To achieve decision superiority, decision makers have to act with agility on proper, adequate and relevant information (data aggregates). Information fusion is a process aimed at supporting decision makers' situation awareness. It involves combining data and information from disparate sources with prior information or knowledge to obtain an improved state estimate about an agent or phenomenon. Plan recognition is the term given to the process of inferring an agent's intentions from a set of actions, and is intended to support decision making. The aim of this work has been to introduce a methodology in which prior (empirical) knowledge (e.g. behaviour, environment and organization) is represented and combined with sensor data to recognize the plans/behaviours of an agent or group of agents. We call this methodology multi-agent plan recognition. It includes knowledge representation as well as imprecise and statistical inference issues. Successful plan recognition in large-scale systems is heavily dependent on the data that is supplied. Therefore, we introduce a bridge between plan recognition and sensor management, where the results of our plan recognition are reused to control, and give focus of attention to, the sensors that are supposed to acquire the most important/relevant information.
Here we combine different theoretical methods (Bayesian Networks, Unified Modeling Language and Plan Recognition) and apply them to tactical military situations for ground forces. The results achieved from several proof-of-concept models show that it is possible to model and recognize the behaviour of tank units.
40

Terrain Object Recognition and Context Fusion for Decision Support

Lantz, Fredrik January 2008 (has links)
A laser radar can be used to generate 3D data about the terrain at very high resolution. Because of the large amounts of data generated, the development of new support technologies to analyze these data is critical to their effective and efficient use in decision support systems. Adequate technology in this regard is currently not available, and the development of new methods and algorithms to this end is an important goal of this work. A semi-qualitative data structure for terrain surface modelling has been developed, together with a categorization and triangulation process that replaces the high-resolution 3D model with this data structure. The qualitative part of the structure can be used for detection and recognition of terrain features. The quantitative part of the structure is, together with the qualitative part, used for visualization of the terrain surface. Replacing the 3D model with the semi-qualitative structure thus performs a data reduction. A number of algorithms for detection and recognition of different terrain objects have been developed. The algorithms use the qualitative part of the previously developed semi-qualitative data structure as input. The approach taken is based on symbol matching and syntactic pattern recognition. Results regarding the accuracy of the implemented algorithms for detection and recognition of terrain objects are visualized. A further important goal has been to develop a methodology for determining driveability using 3D data and other geographic data. These data must be fused with vehicle data to determine the properties of the terrain context of our operations with respect to driveability; this fusion process is therefore called context fusion. The recognized terrain objects are used together with map data in this method. The uncertainty associated with the imprecision of the data has been taken into account as well. Report code: LiU-Tek-Lic-2008:29.
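The context fusion idea — fusing terrain layers with vehicle data to decide driveability — can be sketched as a per-cell rule over aligned grids. This is an illustrative simplification with hypothetical layer names; the thesis's method also handles data imprecision, which is omitted here.

```python
def driveability_grid(slope_grid, obstacle_grid, max_slope_deg):
    # Context fusion sketch: fuse a terrain slope layer (degrees) with a
    # recognized-terrain-object layer and one vehicle capability (maximum
    # climbable slope). A cell is driveable only if the slope is within
    # the vehicle's capability and no terrain object blocks it.
    return [[slope <= max_slope_deg and not blocked
             for slope, blocked in zip(s_row, o_row)]
            for s_row, o_row in zip(slope_grid, obstacle_grid)]
```

In practice each vehicle type supplies its own capability parameters, so the same terrain layers yield a different driveability mask per vehicle.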
