71

Evaluating credal set theory as a belief framework in high-level information fusion for automated decision-making

Karlsson, Alexander January 2010
High-level information fusion is a research field in which methods for achieving an overall understanding of the current situation in an environment of interest are studied. The ultimate goal of these methods is to provide effective decision support for human or automated decision-making. One of the main proposed ways of achieving this is to reduce the uncertainty coupled with the decision by utilizing multiple sources of information. Handling uncertainty in high-level information fusion is performed through a belief framework, and one of the most commonly used such frameworks is based on Bayesian theory. However, Bayesian theory has often been criticized for utilizing a representation of belief and evidence that does not sufficiently express some types of uncertainty. For this reason, a generalization of Bayesian theory has been proposed, denoted credal set theory, which allows one to represent belief and evidence imprecisely. In this thesis, we explore whether credal set theory yields measurable advantages, compared to Bayesian theory, when used as a belief framework in high-level information fusion for automated decision-making, i.e., when decisions are made by some pre-determined algorithm. We characterize the Bayesian and credal operators for belief updating and evidence combination, and perform three experiments in which the Bayesian and credal frameworks are evaluated with respect to automated decision-making. The decision performance of the frameworks is measured both by enforcing a single decision and by allowing a set of decisions, based on the frameworks’ belief and evidence structures. We also construct anomaly detectors based on the frameworks and evaluate these detectors with respect to maritime surveillance. The main conclusion of the thesis is that although the credal framework uses considerably more expressive structures to represent belief and evidence, compared to the Bayesian framework, the performance of the credal framework can be significantly worse, on average, than that of the Bayesian framework, irrespective of the amount of imprecision. / Examining Committee: Arnborg, Stefan, Professor (KTH Royal Institute of Technology); Kjellström, Hedvig, Associate Professor (Docent) (KTH Royal Institute of Technology); Saffiotti, Alessandro, Professor (Örebro University)
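To make the contrast between the two belief frameworks concrete, here is a minimal Python sketch (not code from the thesis; the state space, likelihood and priors are illustrative assumptions). It performs a standard Bayesian update and then repeats the same update for each extreme point of a credal set, yielding the interval-valued posterior that credal set theory works with.

```python
def bayes_update(prior, likelihood):
    """Pointwise Bayesian updating: posterior is proportional to prior * likelihood."""
    joint = [p * l for p, l in zip(prior, likelihood)]
    norm = sum(joint)
    return [j / norm for j in joint]

# Two-element state space; likelihood of the observed evidence under each state.
likelihood = [0.8, 0.3]

# Bayesian framework: a single (precise) prior.
print("Bayesian posterior:", bayes_update([0.5, 0.5], likelihood))

# Credal framework: a set of priors, here given by its extreme points.
credal_extremes = [[0.4, 0.6], [0.7, 0.3]]
posteriors = [bayes_update(p, likelihood) for p in credal_extremes]
intervals = [(min(p[i] for p in posteriors), max(p[i] for p in posteriors)) for i in range(2)]
print("Credal posterior intervals:", intervals)
```

From such intervals one can either force a single decision or retain a set of decisions, mirroring the two evaluation modes described in the abstract.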
72

Accurate and efficient localisation in wireless sensor networks using a best-reference selection

Abu-Mahfouz, Adnan Mohammed 12 October 2011
Many wireless sensor network (WSN) applications depend on knowing the position of nodes within the network if they are to function efficiently. Location information is used, for example, in item tracking, routing protocols and controlling node density. Configuring each node with its position manually is cumbersome, and not feasible in networks with mobile nodes or dynamic topologies. WSNs, therefore, rely on localisation algorithms for the sensor nodes to determine their own physical location. The basis of several localisation algorithms is the theory that the higher the number of reference nodes (called “references”) used, the greater the accuracy of the estimated position. However, this approach makes computation more complex and increases the likelihood that the location estimation may be inaccurate. Such inaccuracy in estimation could be due to including data from nodes with a large measurement error, or from nodes that intentionally aim to undermine the localisation process. This approach also has limited success in networks with sparse references, or where data cannot always be collected from many references (due for example to communication obstructions or bandwidth limitations). These situations require a method for achieving reliable and accurate localisation using a limited number of references. Designing a localisation algorithm that could estimate node position with high accuracy using a low number of references is not a trivial problem. As the number of references decreases, more statistical weight is attached to each reference’s location estimate. The overall localisation accuracy therefore greatly depends on the robustness of the selection method that is used to eliminate inaccurate references. Various localisation algorithms and their performance in WSNs were studied. Information-fusion theory was also investigated and a new technique, rooted in information-fusion theory, was proposed for defining the best criteria for the selection of references. The researcher chose selection criteria to identify only those references that would increase the overall localisation accuracy. Using these criteria also minimises the number of iterations needed to refine the accuracy of the estimated position. This reduces bandwidth requirements and the time required for a position estimation after any topology change (or even after initial network deployment). The resultant algorithm achieved two main goals simultaneously: accurate location discovery and information fusion. Moreover, the algorithm fulfils several secondary design objectives: self-organising nature, simplicity, robustness, localised processing and security. The proposed method was implemented and evaluated using a commercial network simulator. This evaluation of the proposed algorithm’s performance demonstrated that it is superior to other localisation algorithms evaluated; using fewer references, the algorithm performed better in terms of accuracy, robustness, security and energy efficiency. These results confirm that the proposed selection method and associated localisation algorithm allow for reliable and accurate location information to be gathered using a minimum number of references. This decreases the computational burden of gathering and analysing location data from the high number of references previously believed to be necessary. / Thesis (PhD(Eng))--University of Pretoria, 2011. / Electrical, Electronic and Computer Engineering / unrestricted
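The abstract does not give the selection algorithm itself, so the following sketch only illustrates the general idea of best-reference selection: fit a position from small subsets of references and keep the largest mutually consistent set, discarding faulty or malicious ones. Coordinates, ranges and the inlier tolerance are made-up values, and the RANSAC-style search shown here is a stand-in, not the author's method.

```python
from itertools import combinations
import numpy as np

def estimate_position(refs, ranges):
    """Linearised least-squares trilateration in 2-D, using the first reference as anchor."""
    (x0, y0), r0 = refs[0], ranges[0]
    A, b = [], []
    for (xi, yi), ri in zip(refs[1:], ranges[1:]):
        A.append([2 * (xi - x0), 2 * (yi - y0)])
        b.append(r0**2 - ri**2 + xi**2 - x0**2 + yi**2 - y0**2)
    sol, *_ = np.linalg.lstsq(np.array(A, float), np.array(b, float), rcond=None)
    return sol

def select_references(refs, ranges, subset_size=3, tol=1.0):
    """Fit a position from every small subset and keep the largest set of references
    whose measured ranges agree with one of those fits."""
    best_inliers = []
    for idx in combinations(range(len(refs)), subset_size):
        est = estimate_position([refs[i] for i in idx], [ranges[i] for i in idx])
        residuals = [abs(np.linalg.norm(est - np.array(p)) - r) for p, r in zip(refs, ranges)]
        inliers = [i for i, e in enumerate(residuals) if e < tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers

refs = [(0, 0), (10, 0), (0, 10), (10, 10), (5, 12)]
true_pos = np.array([3.0, 4.0])
ranges = [float(np.linalg.norm(true_pos - np.array(p))) for p in refs]
ranges[3] += 6.0                      # one faulty (or malicious) reference

good = select_references(refs, ranges)
refined = estimate_position([refs[i] for i in good], [ranges[i] for i in good])
print("kept references:", good, "refined estimate:", refined.round(2))
```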
73

Soft Data-Augmented Risk Assessment and Automated Course of Action Generation for Maritime Situational Awareness

Plachkov, Alex January 2016
This thesis presents a framework capable of integrating hard (physics-based) and soft (people-generated) data for the purpose of achieving increased situational assessment (SA) and effective course of action (CoA) generation upon risk identification. The proposed methodology is realized through the extension of an existing Risk Management Framework (RMF). In this work, the RMF’s SA capabilities are augmented via the injection of soft data features into its risk modeling; the performance of these capabilities is evaluated via a newly-proposed risk-centric information fusion effectiveness metric. The framework’s CoA generation capabilities are also extended through the inclusion of people-generated data, capturing important subject matter expertise and providing mission-specific requirements. Furthermore, this work introduces a variety of CoA-related performance measures, used to assess the fitness of each individual potential CoA, as well as to quantify the overall chance of mission success improvement brought about by the inclusion of soft data. This conceptualization is validated via experimental analysis performed on a combination of real-world and synthetically-generated maritime scenarios. It is envisioned that the capabilities put forth herein will take part in a greater system, capable of ingesting and seamlessly integrating vast amounts of heterogeneous data, with the intent of providing accurate and timely situational updates, as well as assisting in operational decision making.
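The framework itself is not reproduced here, but the following toy sketch conveys the flavour of soft-data-augmented risk assessment and CoA ranking: a sensor-derived (hard) risk cue and a human-reported (soft) cue are fused with reliability weights, and candidate courses of action are ranked by the residual risk they would leave. All weights, scores and CoAs are invented for illustration and do not come from the thesis.

```python
def fuse_risk(hard_score, soft_score, w_hard=0.6, w_soft=0.4):
    """Reliability-weighted fusion of a hard (sensor) and a soft (human report) risk cue in [0, 1]."""
    return w_hard * hard_score + w_soft * soft_score

def rank_courses_of_action(risk, candidates, cost_weight=0.1):
    """Rank CoAs by residual risk after mitigation plus a small cost penalty (lower is better)."""
    scored = [(risk * (1 - c["effectiveness"]) + cost_weight * c["cost"], c["name"])
              for c in candidates]
    return sorted(scored)

vessel_risk = fuse_risk(hard_score=0.7, soft_score=0.9)   # e.g. anomalous track + field report
coas = [
    {"name": "continue monitoring", "effectiveness": 0.1, "cost": 0.0},
    {"name": "dispatch patrol boat", "effectiveness": 0.7, "cost": 0.5},
    {"name": "board and inspect",    "effectiveness": 0.9, "cost": 0.8},
]
for score, name in rank_courses_of_action(vessel_risk, coas):
    print(f"{name}: residual-risk score {score:.2f}")
```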
74

Ensemble Methods for Pedestrian Detection in Dense Crowds / Méthodes d'ensembles pour la détection de piétons en foules denses

Vandoni, Jennifer 17 May 2019
This study deals with pedestrian detection in high-density crowds from a mono-camera system. The detections can then be used both to obtain a robust density estimation and to initialize a tracking algorithm. One of the most difficult challenges is that usual pedestrian detection methodologies do not scale well to high-density crowds, for reasons such as the absence of background, high visual homogeneity, the small size of the objects, and heavy occlusions. We cast the detection problem as a Multiple Classifier System (MCS), composed of two different ensembles of classifiers, the first one based on SVMs (SVM-ensemble) and the second one based on CNNs (CNN-ensemble), combined within the Belief Function Theory (BFT) framework to exploit their strengths for pixel-wise classification. The SVM-ensemble is composed of several SVM detectors based on different gradient, texture and orientation descriptors, able to tackle the problem from different perspectives. BFT allows us to take into account imprecision in addition to the uncertainty value provided by each classifier, which we consider to come from possible errors in the calibration procedure and from the heterogeneity of pixel neighborhoods in the image space. However, the scarcity of labeled data for specific dense-crowd contexts makes it impossible to obtain robust training and validation sets. By exploiting belief functions directly derived from the classifiers' combination, we propose an evidential Query-by-Committee (QBC) active learning algorithm to automatically select the most informative training samples. In parallel, we explore deep learning techniques by casting the problem as a segmentation task with soft labels, with a fully convolutional network designed to recover small objects thanks to a tailored use of dilated convolutions. In order to obtain a pixel-wise measure of reliability of the network's predictions, we create a CNN-ensemble by means of dropout at inference time, and we combine the different obtained realizations in the context of BFT. Finally, we show that the output map given by the MCS can be employed to perform people counting. We propose an evaluation method that can be applied at every scale, also providing uncertainty bounds on the estimated density.
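As an illustration of how the two ensembles' outputs can be pooled per pixel (a generic Dempster-style combination, not necessarily the exact rule used in the thesis), the sketch below combines two mass functions on the frame {pedestrian, background}, where mass assigned to the whole frame carries the imprecision mentioned above. The numeric values are hypothetical.

```python
P, B = frozenset("P"), frozenset("B")   # pedestrian, background
PB = P | B                              # whole frame: imprecision / ignorance

def dempster_combine(m1, m2):
    """Dempster's rule of combination for mass functions given as {focal set: mass}."""
    combined, conflict = {}, 0.0
    for a, va in m1.items():
        for b, vb in m2.items():
            c = a & b
            if c:
                combined[c] = combined.get(c, 0.0) + va * vb
            else:
                conflict += va * vb
    return {k: v / (1.0 - conflict) for k, v in combined.items()}, conflict

# Hypothetical pixel-wise outputs of the two ensembles.
m_svm = {P: 0.6, B: 0.1, PB: 0.3}
m_cnn = {P: 0.7, B: 0.2, PB: 0.1}

masses, conflict = dempster_combine(m_svm, m_cnn)
for focal, value in masses.items():
    print(sorted(focal), round(value, 3))
print("conflict:", round(conflict, 3))
```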
75

Utilizing Diversity and Performance Measures for Ensemble Creation

Löfström, Tuve January 2009
An ensemble is a composite model, aggregating multiple base models into one predictive model. An ensemble prediction, consequently, is a function of all included base models. Both theory and a wealth of empirical studies have established that ensembles are generally more accurate than single predictive models. The main motivation for using ensembles is the fact that combining several models will eliminate uncorrelated base classifier errors. This reasoning, however, requires the base classifiers to commit their errors on different instances – clearly there is no point in combining identical models. Informally, the key term diversity means that the base classifiers commit their errors independently of each other. The problem addressed in this thesis is how to maximize ensemble performance by analyzing how diversity can be utilized when creating ensembles. A series of studies, addressing different facets of the question, is presented. The results show that ensemble accuracy and the diversity measure called difficulty are the two individually best measures to use as optimization criteria when selecting ensemble members. However, the results further suggest that combinations of several measures are most often better optimization criteria than single measures. A novel method to find a useful combination of measures is finally proposed. Furthermore, the results show that it is very difficult to estimate predictive performance on unseen data based on results achieved with available data. Finally, it is also shown that implicit diversity, achieved by varying the ANN architecture or by resampling features, is beneficial for ensemble performance. / Sponsorship: This work was supported by the Information Fusion Research Program (www.infofusion.se) at the University of Skövde, Sweden, in partnership with the Swedish Knowledge Foundation under grant 2003/0104.
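For readers unfamiliar with the measures mentioned above, the sketch below computes majority-vote ensemble accuracy and one common formulation of the difficulty diversity measure (the variance, over instances, of the proportion of base classifiers that get each instance right) from a 0/1 correctness matrix. The matrix here is randomly generated and purely illustrative; the thesis's experimental setup is not reproduced.

```python
import numpy as np

def difficulty(correct):
    """'Difficulty' diversity measure: variance over instances of the proportion of
    base classifiers that classify each instance correctly (lower = errors more spread out)."""
    return correct.mean(axis=0).var()

def majority_vote_accuracy(correct):
    """Accuracy of a simple majority vote, given a rows=classifiers, cols=instances 0/1 matrix."""
    return float(np.mean(correct.sum(axis=0) > correct.shape[0] / 2))

rng = np.random.default_rng(0)
correct = (rng.random((5, 200)) < 0.75).astype(int)   # 5 hypothetical base classifiers

print("majority-vote accuracy:", majority_vote_accuracy(correct))
print("difficulty measure:", round(float(difficulty(correct)), 4))
# An ensemble-selection loop could then rank candidate member subsets by a
# combination of such criteria rather than by any single measure.
```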
76

Automated Image Localization and Damage Level Evaluation for Rapid Post-Event Building Assessment

Xiaoyu Liu 25 October 2022
Image data remains an important tool for post-event building assessment and documentation. After each natural hazard event, significant efforts are made by teams of engineers to visit the affected regions and collect useful image data. In general, a global positioning system (GPS) can provide useful spatial information for localizing image data. However, it is challenging to collect such information when images are captured in places where GPS signals are weak or interrupted, such as the indoor spaces of buildings. An inability to document the images’ locations would hinder the analysis, organization, and documentation of these images, as they lack sufficient spatial context. This problem becomes more pressing for inspection missions covering a large area, such as a community. To address this issue, the objective of this research is to develop a tool that automatically processes the image data collected during such a mission and provides the location of each image. Towards this goal, the following tasks are performed. First, I develop a methodology to localize images and link them to locations on a structural drawing (Task 1). Second, this methodology is extended to process data collected over a large-scale area and to perform indoor localization for images collected on each indoor floor of each individual building (Task 2). Third, I develop an automated technique to determine the damage condition of buildings by fusing the image data collected within them (Task 3). The methods developed for each task have been evaluated with data collected from real-world buildings. This research may also lead to automated assessment of buildings over large-scale areas.
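The fusion step in Task 3 is described only at a high level; the sketch below shows one simple way such per-image evidence could be aggregated into a building-level damage decision (average the localized image scores per floor, then let the worst floor drive the category). Scores, floors and thresholds are illustrative assumptions, not the author's procedure.

```python
from collections import defaultdict
from statistics import mean

def building_damage_decision(image_records, thresholds=(0.25, 0.5, 0.75)):
    """Aggregate per-image damage scores (0..1), already localized to floors, into one
    building-level category. Averaging per floor and taking the worst floor is only
    one plausible fusion rule."""
    per_floor = defaultdict(list)
    for rec in image_records:
        per_floor[rec["floor"]].append(rec["damage_score"])
    worst = max(mean(scores) for scores in per_floor.values())
    labels = ["minor", "moderate", "severe", "critical"]
    level = sum(worst >= t for t in thresholds)
    return labels[level], worst

images = [
    {"floor": 1, "damage_score": 0.2},
    {"floor": 1, "damage_score": 0.4},
    {"floor": 2, "damage_score": 0.8},
]
print(building_damage_decision(images))   # floor 2 dominates the decision
```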
77

Social Network Analysis : Link prediction under the Belief Function Framework / Analyse des réseaux sociaux : Prédiction de liens dans le cadre des fonctions de croyance

Mallek, Sabrine 03 July 2018
Social networks are large structures that depict social linkage between millions of actors. Social network analysis emerged as a tool to study and monitor the patterning of such structures. One of the most important challenges in social network analysis is the link prediction problem. Link prediction investigates the potential existence of new associations among unlinked social entities. Most link prediction approaches focus on a single source of information, i.e., network topology (e.g., node neighborhood), assuming social data to be fully trustworthy. Yet, such data are usually noisy, missing, and prone to observation errors, causing distortions and likely inaccurate results. Thus, this thesis proposes to handle the link prediction problem under uncertainty. First, two new graph-based models for uniplex and multiplex social networks are introduced to address uncertainty in social data. The handled uncertainty appears at the link level and is represented and managed through the belief function theory framework. Next, we present eight link prediction methods using belief functions based on different sources of information in uniplex and multiplex social networks. Our proposals build upon the available information about the social network. We combine structural information with social circle information and node attributes, along with supervised learning, to predict new links. Tests are performed to validate the feasibility and the interest of our link prediction approaches compared to those from the literature. Results obtained on real-world social data demonstrate that our proposals are relevant and valid in the link prediction context.
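To illustrate the kind of evidential link prediction described above (a simplified sketch, not one of the eight methods from the thesis), the code below turns a structural score and a node-attribute similarity into mass functions on {link, no-link}, reserves some mass for the whole frame to model unreliable observations, and combines them with Dempster's rule. The toy network and similarity values are invented.

```python
def jaccard(neigh, u, v):
    """Structural evidence: Jaccard similarity of the two nodes' neighborhoods."""
    a, b = neigh[u], neigh[v]
    return len(a & b) / len(a | b) if a | b else 0.0

def to_mass(score, discount=0.3):
    """Turn a similarity score in [0, 1] into a mass function on {link, no-link};
    the discounted mass on the whole frame 'LN' models unreliable observations."""
    return {"L": (1 - discount) * score, "N": (1 - discount) * (1 - score), "LN": discount}

def combine(m1, m2):
    """Dempster's rule on the two-element frame {L, N}."""
    conflict = m1["L"] * m2["N"] + m1["N"] * m2["L"]
    k = 1 - conflict
    return {
        "L": (m1["L"] * m2["L"] + m1["L"] * m2["LN"] + m1["LN"] * m2["L"]) / k,
        "N": (m1["N"] * m2["N"] + m1["N"] * m2["LN"] + m1["LN"] * m2["N"]) / k,
        "LN": (m1["LN"] * m2["LN"]) / k,
    }

# Toy uniplex network given as adjacency sets, plus a node-attribute similarity.
neigh = {1: {2, 3}, 2: {1, 3, 4}, 3: {1, 2}, 4: {2}}
attr_sim = {(1, 4): 0.8, (3, 4): 0.2}

for u, v in [(1, 4), (3, 4)]:
    m = combine(to_mass(jaccard(neigh, u, v)), to_mass(attr_sim[(u, v)]))
    print(f"candidate ({u},{v}): bel(link) = {m['L']:.2f}")   # pignistic prob. would add m['LN'] / 2
```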
78

Large-Scale Information Acquisition for Data and Information Fusion

Johansson, Ronnie January 2006
The purpose of information acquisition for data and information fusion is to provide relevant and timely information. The acquired information is integrated (or fused) to estimate the state of some environment. The success of information acquisition can be measured by the quality of the environment state estimates generated by the data and information fusion process. In this thesis, we introduce and set out to characterise the concept of large-scale information acquisition. Our interest in this subject is justified both by the identified lack of research on a holistic view of data and information fusion, and by the proliferation of networked sensors, which promises to enable handy access to a multitude of information sources. We identify a number of properties that could be considered in the context of large-scale information acquisition. The sensors used could be large in number, heterogeneous, complex, and distributed. Also, algorithms for large-scale information acquisition may have to deal with decentralised control and multiple and varying objectives. In the literature, a process that realises information acquisition is frequently denoted sensor management. We, however, introduce the term perception management instead, which encourages an agent perspective on information acquisition. Apart from explicitly inviting the wealth of agent theory research into data and information fusion research, it also highlights that the resource usage of perception management is constrained by the overall control of a system that uses data and information fusion. To address the challenges posed by the concept of large-scale information acquisition, we present a framework which highlights some of its pertinent aspects, and we have implemented some important parts of this framework. What becomes evident in our study is the innate complexity of information acquisition for data and information fusion, which suggests approximative solutions. We furthermore study one of the possibly most important properties of large-scale information acquisition, decentralised control, in more detail. We propose a recurrent negotiation protocol for (decentralised) multi-agent coordination. Our approach to the negotiations is from an axiomatic bargaining theory perspective, a discipline of economics. We identify shortcomings of the most commonly applied bargaining solution and demonstrate in simulations a problem instance where it is inferior to an alternative solution. However, we cannot conclude that one of the solutions dominates the other in general; they are both preferable in different situations. We have also implemented the recurrent negotiation protocol on a group of mobile robots. We note some subtle difficulties with transferring bargaining solutions from economics to our computational problem. For instance, the characterising axioms of solutions in bargaining theory are useful for qualitatively comparing different solutions, but care has to be taken when translating a solution into algorithms in computer science, as some properties might be undesirable, unimportant, or at risk of being lost in the translation.
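The comparison between bargaining solutions can be made concrete on a finite outcome set. The sketch below computes the Nash bargaining solution (maximise the product of utility gains) and one alternative, an egalitarian max-min solution, for a hypothetical two-agent sensing-allocation problem; the utilities are invented and the example merely shows that the two solutions can select different outcomes, as the thesis observes.

```python
def nash_solution(outcomes, disagreement):
    """Nash bargaining solution over a finite outcome set:
    maximise the product of utility gains above the disagreement point."""
    d1, d2 = disagreement
    feasible = [(u1, u2) for u1, u2 in outcomes if u1 >= d1 and u2 >= d2]
    return max(feasible, key=lambda p: (p[0] - d1) * (p[1] - d2))

def egalitarian_solution(outcomes, disagreement):
    """Alternative solution: maximise the smallest utility gain (max-min fairness)."""
    d1, d2 = disagreement
    feasible = [(u1, u2) for u1, u2 in outcomes if u1 >= d1 and u2 >= d2]
    return max(feasible, key=lambda p: min(p[0] - d1, p[1] - d2))

# Hypothetical joint sensing allocations as (agent-1 utility, agent-2 utility).
outcomes = [(9, 1), (7, 4), (5, 5), (3, 6), (1, 8)]
disagreement = (0, 0)

print("Nash solution:       ", nash_solution(outcomes, disagreement))        # (7, 4)
print("Egalitarian solution:", egalitarian_solution(outcomes, disagreement)) # (5, 5)
```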
79

Impact des multitrajets sur les performances des systèmes de navigation par satellite : contribution à l'amélioration de la précision de localisation par modélisation bayésienne / Multipath impact on the performance of satellite navigation systems: contribution to the enhancement of location accuracy through Bayesian modeling

Nahimana, Donnay Fleury 19 February 2009
Most GNSS-based transport applications are employed in dense urban areas. One of the reasons for poor position accuracy in urban areas is the presence of obstacles (buildings and trees). Many solutions have been developed to decrease the impact of multipath on the accuracy and availability of GNSS systems. Integrating supplementary sensors into the localisation system is one of the solutions used to compensate for the lack of GNSS data. Such systems offer good accuracy but increase complexity and cost, making them inappropriate for equipping a large fleet of vehicles. This thesis proposes an algorithmic approach to enhance position accuracy in urban environments. The study is based on GNSS signals only and on knowledge of the close reception environment through a 3D model of the navigation area. The method acts on the signal filtering step of the process. The filtering process is based on Sequential Monte Carlo methods, called particle filters. As the position error in urban areas is related to the satellite reception state (blocked, direct or reflected), information about the receiver's environment is taken into account. A pseudorange error model is also proposed to fit the satellite reception conditions. In a first step, the reception state of each satellite is assumed to be known. A Markov chain, defined for a known trajectory of the vehicle, is used to determine the successive reception states of each signal. The reception states are then estimated using a Dirichlet distribution.
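A single measurement-update step of such a particle filter can be sketched as follows (a 1-D toy, not the thesis's implementation): each particle is weighted under a pseudorange likelihood that mixes a direct and a reflected (positively biased) reception hypothesis, with assumed per-satellite reflection probabilities standing in for the environment knowledge or the Dirichlet estimate. All scenario values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D street: two beacons stand in for satellites; ranges are either direct
# or reflected, in which case they carry a positive multipath bias.
beacons = np.array([0.0, 100.0])
bias, sigma = 15.0, 2.0                 # multipath bias and measurement noise (metres)
p_reflected = np.array([0.1, 0.6])      # assumed reflection probability per beacon

def weight(particles, z):
    """Particle weights under a mixture likelihood over reception states."""
    w = np.ones(len(particles))
    for j, b in enumerate(beacons):
        d = np.abs(particles - b)
        direct = np.exp(-0.5 * ((z[j] - d) / sigma) ** 2)
        reflected = np.exp(-0.5 * ((z[j] - d - bias) / sigma) ** 2)
        w *= (1 - p_reflected[j]) * direct + p_reflected[j] * reflected
    return w

# Simulated measurements: beacon 0 received directly, beacon 1 via a reflection.
true_x = 30.0
z = np.array([abs(true_x - beacons[0]) + rng.normal(0, sigma),
              abs(true_x - beacons[1]) + bias + rng.normal(0, sigma)])

particles = rng.uniform(0.0, 100.0, 2000)        # prediction step omitted for brevity
w = weight(particles, z)
w /= w.sum()
print(f"true position {true_x:.1f} m, weighted-mean estimate {np.sum(w * particles):.1f} m")
```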
80

Threat Analysis Using Goal-Oriented Action Planning : Planning in the Light of Information Fusion

Bjarnolf, Philip January 2008
An entity capable of assessing its own and others' action capabilities possesses the power to predict how the involved entities may change their world. Through this knowledge and a higher level of situation awareness, the assessing entity may choose the actions that have the most suitable effect, resulting in that entity's desired world state. This thesis covers aspects and concepts of an arbitrary planning system and presents a threat analyzer architecture built on the novel planning system Goal-Oriented Action Planning (GOAP). This planning system has been suggested for improved missile route planning and targeting, and has been applied in contemporary computer games such as F.E.A.R. – First Encounter Assault Recon and S.T.A.L.K.E.R.: Shadow of Chernobyl. The GOAP architecture realized in this project is utilized by two agents that perform action planning to reach their desired world states. One of the agents employs a modified GOAP planner used as a threat analyzer in order to determine what level of threat the adversary agent constitutes. This project also introduces a conceptual schema of a general planning system that considers orders, doctrine and style, as well as a schema depicting an agent system using a blackboard in conjunction with the OODA loop.
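A minimal GOAP-style planner can be sketched in a few lines (a simplification: real GOAP implementations typically search backwards from the goal with A* and per-action costs, whereas this toy does a forward breadth-first search over symbolic world states). The action names, preconditions and effects are invented for illustration.

```python
from collections import deque

# Each action maps to (preconditions, effects) over a symbolic world state.
ACTIONS = {
    "load_weapon":   ({"has_ammo": True},                    {"weapon_loaded": True}),
    "draw_weapon":   ({},                                    {"weapon_drawn": True}),
    "attack_target": ({"weapon_loaded": True, "weapon_drawn": True, "target_visible": True},
                      {"target_neutralized": True}),
}

def applicable(state, conditions):
    return all(state.get(k) == v for k, v in conditions.items())

def plan(start, goal):
    """Return the shortest action sequence whose cumulative effects satisfy the goal."""
    frontier = deque([(start, [])])
    visited = {frozenset(start.items())}
    while frontier:
        state, actions = frontier.popleft()
        if applicable(state, goal):
            return actions
        for name, (pre, eff) in ACTIONS.items():
            if applicable(state, pre):
                nxt = {**state, **eff}
                key = frozenset(nxt.items())
                if key not in visited:
                    visited.add(key)
                    frontier.append((nxt, actions + [name]))
    return None

start = {"has_ammo": True, "target_visible": True}
print(plan(start, goal={"target_neutralized": True}))
# -> ['load_weapon', 'draw_weapon', 'attack_target']
```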
