41

Grid-Based Multi-Sensor Fusion for On-Road Obstacle Detection: Application to Autonomous Driving / Rutnätsbaserad multisensorfusion för detektering av hinder på vägen: tillämpning på självkörande bilar

Gálvez del Postigo Fernández, Carlos January 2015
Self-driving cars have recently become a challenging research topic, with the aim of making transportation safer and more efficient. Current advanced driver-assistance systems (ADAS) allow cars to drive autonomously by following lane markings, identifying road signs, and detecting pedestrians and other vehicles. In this thesis work we improve the robustness of autonomous cars by designing an on-road obstacle detection system. The proposed solution consists of the low-level fusion of radar and lidar within the occupancy grid framework. Two inference theories are implemented and evaluated: Bayesian probability theory and the Dempster-Shafer theory of evidence. Obstacle detection is performed through image processing of the occupancy grid. Finally, the additional features of Dempster-Shafer theory are leveraged by proposing a sensor performance estimation module and performing advanced conflict management. The work has been carried out at Volvo Car Corporation, where real experiments on a test vehicle have been performed under different environmental conditions and with different types of objects. The system has been evaluated in terms of the quality of the resulting occupancy grids, the detection rate, and the information content measured by entropy. The results show a significant improvement in detection rate over single-sensor approaches. Furthermore, the Dempster-Shafer implementation may slightly outperform the Bayesian one when there is conflicting information, although its high computational cost limits its practical application. Finally, we demonstrate that the proposed solution is easily scalable to include additional sensors.
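The abstract compares two inference schemes but gives no code; as a rough, hedged illustration (not the thesis's implementation), the sketch below updates a single occupancy-grid cell with a Bayesian log-odds update and with Dempster's rule on the frame {occupied, free}. The sensor mass assignments and the Bayesian sensor model are made-up values.

```python
# Illustrative sketch (not from the thesis): fusing radar and lidar evidence
# for one occupancy-grid cell under the two inference theories compared above.

import math

def bayes_update(prior_occ, p_occ_given_hit):
    """Bayesian update of P(occupied) for one cell, in log-odds form;
    p_occ_given_hit is an assumed inverse-sensor-model value."""
    l = math.log(prior_occ / (1 - prior_occ)) + math.log(p_occ_given_hit / (1 - p_occ_given_hit))
    return 1 / (1 + math.exp(-l))

def dempster_combine(m1, m2):
    """Dempster's rule on the frame {O (occupied), F (free)}; masses are dicts
    over 'O', 'F' and 'OF' (ignorance, i.e. the whole frame)."""
    k = m1['O'] * m2['F'] + m1['F'] * m2['O']          # conflicting mass
    norm = 1 - k
    combined = {
        'O':  (m1['O'] * m2['O'] + m1['O'] * m2['OF'] + m1['OF'] * m2['O']) / norm,
        'F':  (m1['F'] * m2['F'] + m1['F'] * m2['OF'] + m1['OF'] * m2['F']) / norm,
        'OF': (m1['OF'] * m2['OF']) / norm,
    }
    return combined, k

# Hypothetical sensor models: radar weakly, lidar strongly suggesting "occupied"
radar = {'O': 0.4, 'F': 0.1, 'OF': 0.5}
lidar = {'O': 0.7, 'F': 0.1, 'OF': 0.2}

print(bayes_update(0.5, 0.8))          # Bayesian posterior after one lidar hit
fused, conflict = dempster_combine(radar, lidar)
print(fused, conflict)                 # D-S masses plus the conflict measure
```

The explicit ignorance mass 'OF' is what lets the Dempster-Shafer grid distinguish "unknown" from "equally likely occupied or free", which the Bayesian cell cannot.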
42

An Inconsistency-based Approach for Sensing Assessment in Unknown Environments

Gage, Jennifer Diane 18 June 2009
While exploring an unknown environment, an intelligent agent has only its sensors to guide its actions. Each sensor's ability to provide accurate information depends on the environment's characteristics. If the agent does not know these characteristics, how can it determine which sensors to rely on? This problem is exacerbated by sensing anomalies: cases where the sensors are working but the readings lead to an incorrect interpretation of the environment, e.g., laser sensors cannot detect glass. This work addresses the following research question: Can an inconsistency-based sensing accuracy indicator, which relies solely on fused sensor readings, be used to detect and characterize sensing anomalies in unknown environments? A novel inconsistency-based approach was investigated for sensing anomaly detection and characterization by a mobile robot using range sensing for mapping. Based on the hypothesis that sensing anomalies manifest as inconsistent sensor readings, the approach employed Dempster-Shafer theory and six metrics from the evidential literature to measure the magnitude of inconsistency. These metrics were applied directly to fused sensor data and combined with a threshold to form an indicator used to distinguish minor noise from anomalous readings. Experiments with real sensor data from four indoor and two outdoor environments, using Polaroid sonar sensors, SICK laser range finders, and a Canesta range camera, showed that three of the six evidential inconsistency metrics can partially address the problem of noticing sensing anomalies in unknown environments. Despite extensive training in known environments, the indicators could not reliably detect sensing anomalies, i.e., distinguish them from ordinary noise. However, sensing accuracy could be estimated (correlations with sensor error exceeded 0.8) and regions with suspect readings could be isolated. Trained indicators failed to rank sensors, but improved map quality by resetting suspect regions (up to 57.65%) or guiding sensor selection (up to 75.86%). This work contributes to the robotics and uncertainty-in-artificial-intelligence communities by establishing the use of evidential metrics for adapting a single sensor or identifying the most accurate sensor to optimize sensing accuracy in unknown environments. Future applications could enable intelligent systems to switch information sources to optimize mission performance and to identify the reliability of sources for different environments.
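As a hedged sketch of the core idea only (conflict arising during evidence fusion used as an anomaly flag), not the thesis's six metrics or trained thresholds, the following uses Dempster's conflict mass for one cell; the sensor mass values and the threshold are invented for illustration.

```python
# Hedged sketch, not the thesis's implementation: the conflict produced when
# fusing two range readings for the same cell is thresholded to flag a
# possible sensing anomaly.

def conflict_mass(m1, m2):
    """Conflict K between two mass functions over the frame {'O', 'E', 'OE'}
    (occupied, empty, ignorance)."""
    return m1['O'] * m2['E'] + m1['E'] * m2['O']

ANOMALY_THRESHOLD = 0.5   # would be learned from training data in practice

# Hypothetical readings for one cell: sonar says "occupied", laser says "empty"
# (e.g. a glass pane returning a sonar echo but letting the laser pass through).
sonar = {'O': 0.80, 'E': 0.05, 'OE': 0.15}
laser = {'O': 0.05, 'E': 0.85, 'OE': 0.10}

k = conflict_mass(sonar, laser)
if k > ANOMALY_THRESHOLD:
    print(f"possible sensing anomaly, conflict = {k:.2f}")
else:
    print(f"readings broadly consistent, conflict = {k:.2f}")
```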
43

Quantum probabilities as Dempster-Shafer probabilities in the lattice of subspaces.

Vourdas, Apostolos 21 July 2014
The orthocomplemented modular lattice of subspaces L[H(d)] of a quantum system with d-dimensional Hilbert space H(d) is considered. A generalized additivity relation which holds for Kolmogorov probabilities is violated by quantum probabilities in the full lattice L[H(d)] (it is only valid within the Boolean subalgebras of L[H(d)]). This suggests the use of more general (than Kolmogorov) probability theories, and here the Dempster-Shafer probability theory is adopted. An operator D(H1, H2), which quantifies deviations from Kolmogorov probability theory, is introduced, and it is shown to be intimately related to the commutator of the projectors P(H1), P(H2) onto the subspaces H1, H2. As an application, it is shown that the proof of the inequalities of Clauser, Horne, Shimony, and Holt for a system of two spin-1/2 particles is valid for Kolmogorov probabilities but not for Dempster-Shafer probabilities. The violation of these inequalities in experiments supports the interpretation of quantum probabilities as Dempster-Shafer probabilities.
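For readers unfamiliar with the relation in question, the following is a reconstruction from the abstract's wording in standard notation (not quoted from the paper): the Kolmogorov additivity relation, and a deviation operator built from the subspace projectors, which vanishes when the projectors commute.

```latex
% Reconstruction from the abstract's wording (hedged, not quoted from the paper):
% Kolmogorov additivity, violated by quantum probabilities on the full lattice L[H(d)]:
\[
  p(H_1 \vee H_2) + p(H_1 \wedge H_2) = p(H_1) + p(H_2)
\]
% A deviation operator built from the subspace projectors, which is zero
% whenever P(H_1) and P(H_2) commute:
\[
  \mathcal{D}(H_1, H_2) = P(H_1 \vee H_2) + P(H_1 \wedge H_2) - P(H_1) - P(H_2)
\]
```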
44

Predicting threat capability in control systems to enhance cybersecurity risk determination

Price, Peyton 01 May 2020
Risk assessment is a critical aspect of all businesses, and leaders are tasked with limiting risk to the lowest reasonable level within their systems. Industrial control systems (ICS) operate in a different cybersecurity risk environment than business systems because of the possibility of second- and third-order effects when an attack occurs. We present a process for predicting when an adversary gains the ability to attack an industrial control system. We assist leaders in understanding how attackers target ICS by providing visualizations and percentages that can be applied to updating infrastructure or shifting personnel responsibilities to counter the threat. This new process seeks to integrate defenders and threat intelligence providers, allowing defenders to proactively defend their networks before devastating attacks occur. We apply the process by observing it under randomness with constraints and through a case study of the 2015 attack on the Ukrainian power grid. We find that this process answers the question of what an attacker can do, gives the defender an up-to-date understanding of the threat's capability, and can either increase or decrease the estimated probability that an attacker has a capability against a control system. This process will allow leaders to provide strategic vision for the businesses and systems that they manage.
45

Development of Novel Computational Algorithms for Localization in Wireless Sensor Networks through Incorporation of Dempster-Shafer Evidence Theory

Elkin, Colin P. January 2015
No description available.
46

Application du calcul d'incidence à la fusion de données / Application of incidence calculus to data fusion

Dumas, Marc-André 11 April 2018
Generalized Incidence Calculus is a hybrid symbolic-numeric technique with interesting potential for data fusion, notably through its possible correspondence with the theory of evidence. This master's thesis presents a series of modifications to Generalized Incidence Calculus so that it can be used to eliminate the data-looping problem, an important problem in data fusion in which correlated data are given disproportionate weight when combined. These modifications also make it possible to represent various types of combinations from the theory of evidence using the possible-worlds approach. In particular, associative Yager combinations can be performed, and parallels can be drawn with the Dezert-Smarandache theory.
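Yager's combination, mentioned above, differs from Dempster's rule in that the conflicting mass is assigned to the whole frame rather than normalised away. The sketch below shows the standard pairwise form on a two-element frame; it is not the thesis's incidence-calculus formulation, and the mass values are invented.

```python
# Illustrative sketch (not the thesis's possible-worlds formulation):
# Yager's rule on the frame {'a', 'b'} with 'ab' denoting ignorance.

def yager_combine(m1, m2):
    k = m1['a'] * m2['b'] + m1['b'] * m2['a']          # conflicting mass
    return {
        'a':  m1['a'] * m2['a'] + m1['a'] * m2['ab'] + m1['ab'] * m2['a'],
        'b':  m1['b'] * m2['b'] + m1['b'] * m2['ab'] + m1['ab'] * m2['b'],
        'ab': m1['ab'] * m2['ab'] + k,                  # conflict goes to ignorance
    }

m1 = {'a': 0.6, 'b': 0.2, 'ab': 0.2}
m2 = {'a': 0.1, 'b': 0.7, 'ab': 0.2}
print(yager_combine(m1, m2))
# Note: applied pairwise like this, Yager's rule is not associative in general;
# the associativity discussed in the abstract comes from the incidence-calculus
# (possible-worlds) representation, which is not reproduced here.
```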
47

Représentation et évaluation de structures argumentatives / Representation and evaluation of argumentation structures

Megzari, Idriss El 24 April 2018
In fields such as aeronautics, energy, medicine, or information technology in general, the need to formulate arguments about safety, security, confidentiality, and so on is increasingly present. However, representing and evaluating these arguments is not straightforward and has been the subject of much debate, particularly over the last ten years. An argumentation structure is a model of an argument whose purpose is to make explicit the links (the argument) connecting a claim modelling a property of a system to the evidence that supports it. The two main challenges related to argumentation structures are the representation languages and the methods for assessing the level of confidence that can be attributed to the modelled argument. Addressing these two issues, this thesis investigates, on the one hand, languages for representing argumentation structures and, on the other hand, Dempster-Shafer evidence theory. In particular, it presents the GSN and TCL languages, the correspondence between these two languages, and possible extensions to increase their expressiveness. Dempster-Shafer evidence theory is also presented and extended so that limit cases need not be treated as special cases. Dempster-Shafer evidence theory makes it possible to build a global confidence model from local evaluations, the latter being obtained by evaluating each component of an argumentation structure independently. Approaches for constructing argumentation structures and evaluating their elements are developed and applied to two examples from different contexts: compliance, with the example of ISO-27001, and safety, with the example of an infusion pump.
48

An Evidence Theoretic Approach to Design of Reliable Low-Cost UAVs

Murtha, Justin Fortna 30 July 2009
Small unmanned aerial vehicles (SUAVs) are plagued by alarmingly high failure rates. Because these systems are small and built at lower cost than full-scale aircraft, high-quality components and redundant systems are often eschewed to keep production costs low. This thesis proposes a process to "design in" reliability in a cost-effective way. Fault Tree Analysis is used to evaluate a system's (un)reliability, and Dempster-Shafer theory (evidence theory) is used to deal with imprecise failure data. Three unique sensitivity analyses highlight the most cost-effective improvement for the system: spending money to research a component and reduce uncertainty, swapping a component for a higher-quality alternative, or adding redundancy to an existing component. A MATLAB® toolbox has been developed to assist in practical design applications. Finally, a case study illustrates the proposed methods by improving the reliability of a new SUAV design: Virginia Tech's SPAARO UAV.
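As a hedged illustration of the general idea (not the thesis's MATLAB toolbox), the sketch below propagates imprecise component failure probabilities, given as [belief, plausibility] intervals, through basic fault-tree gates; the subsystem names and numbers are invented.

```python
# Hedged sketch, not the thesis's toolbox: imprecise per-flight failure
# probabilities, expressed as [belief, plausibility] intervals, are propagated
# through simple fault-tree gates to bound the top-event probability.

from math import prod

def or_gate(intervals):
    """Top event occurs if ANY input fails: P = 1 - prod(1 - p_i), monotone in each p_i."""
    lower = 1 - prod(1 - bel for bel, _ in intervals)
    upper = 1 - prod(1 - pl for _, pl in intervals)
    return lower, upper

def and_gate(intervals):
    """Top event occurs only if ALL inputs fail: P = prod(p_i)."""
    lower = prod(bel for bel, _ in intervals)
    upper = prod(pl for _, pl in intervals)
    return lower, upper

# Hypothetical SUAV subsystems with imprecise failure probabilities
autopilot = (0.001, 0.010)
gps       = (0.002, 0.005)
airframe  = (0.0005, 0.002)

print(or_gate([autopilot, gps, airframe]))   # [belief, plausibility] of system failure
```

A wide gap between the two bounds signals that reducing uncertainty (e.g. by researching a component) may be more cost-effective than adding redundancy, which is the kind of trade-off the thesis's sensitivity analyses address.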
49

Combinaison d'informations hétérogènes dans le cadre unificateur des ensembles aléatoires : approximations et robustesse / Combination of heterogeneous information in the unifying framework of random sets: approximations and robustness

Florea, Mihai Cristian. 13 April 2018
In this work we address problems related to the combination of information from multiple sources. We propose to represent information from fuzzy set theory (FST) and evidence theory (DST) within the unifying framework of random sets (RST). The combination process faces two major difficulties: (1) an explosion of computation time due to the large number of focal elements, and (2) the combination of information in total conflict. We first propose to reduce the computation time of the combination process by applying a direct approximation to FST information, which proves very effective when the cardinality of the frame of discernment is large. We then propose a general formulation for RST combination rules, as well as a new class of adaptive rules that (a) automatically takes into account the reliability of the sources and (b) combines information defined on different but homogeneous frames of discernment. This class behaves like the conjunctive rule when the sources agree and like the disjunctive rule when they disagree.
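As a rough illustration of an adaptive rule of this kind (not necessarily the rule proposed in the thesis), the sketch below mixes the disjunctive and unnormalised conjunctive combinations with conflict-dependent weights; the particular weight functions and mass values are chosen for illustration only.

```python
# Hedged sketch of a conflict-adaptive combination rule on the frame {'a', 'b'},
# with 'ab' denoting ignorance. The alpha/beta weight functions are one
# illustrative choice, not necessarily the thesis's rule.

def conjunctive(m1, m2):
    """Unnormalised conjunctive rule; returns masses and the conflict K."""
    k = m1['a'] * m2['b'] + m1['b'] * m2['a']
    return {
        'a':  m1['a'] * m2['a'] + m1['a'] * m2['ab'] + m1['ab'] * m2['a'],
        'b':  m1['b'] * m2['b'] + m1['b'] * m2['ab'] + m1['ab'] * m2['b'],
        'ab': m1['ab'] * m2['ab'],
    }, k

def disjunctive(m1, m2):
    """Disjunctive rule: mass goes to unions of focal elements."""
    m_a = m1['a'] * m2['a']
    m_b = m1['b'] * m2['b']
    return {'a': m_a, 'b': m_b, 'ab': 1 - m_a - m_b}

def adaptive(m1, m2):
    """Mixes disjunctive and conjunctive behaviour according to the conflict K."""
    m_conj, k = conjunctive(m1, m2)
    m_disj = disjunctive(m1, m2)
    alpha = k / (1 - k + k * k)          # weight of the disjunctive part
    beta = (1 - k) / (1 - k + k * k)     # weight of the conjunctive part
    return {key: alpha * m_disj[key] + beta * m_conj[key] for key in ('a', 'b', 'ab')}

agreeing    = adaptive({'a': 0.8, 'b': 0.1, 'ab': 0.1}, {'a': 0.7, 'b': 0.1, 'ab': 0.2})
conflicting = adaptive({'a': 0.9, 'b': 0.0, 'ab': 0.1}, {'a': 0.0, 'b': 0.9, 'ab': 0.1})
print(agreeing)     # dominated by the conjunctive (Dempster-like) part
print(conflicting)  # ignorance 'ab' dominates, as in the disjunctive rule
```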
50

Multi-focus image fusion using local variability / Fusion d'image en utilisant la variabilité locale

Wahyuni, Ias Sri 28 February 2018
In this thesis, we are interested in multi-focus image fusion methods. This technique consists of fusing several images of the same scene captured with different focal settings, in order to obtain an image of better quality than the source images. We propose an image fusion method based on the Laplacian pyramid technique, using the discrete wavelet transform (DWT) as the selection rule. We then develop two multi-focus image fusion methods based on the local variability of each pixel, which takes into account the information in the pixel's neighbourhood. The first method uses local variability as a source of information within Dempster-Shafer theory. The second uses a metric based on local variability: the proposed fusion weights each pixel by an exponential of its local variability. A comparative study between the proposed methods and existing methods was carried out. The experimental results show that our proposed methods produce better fusion results, both in visual perception and in quantitative analysis.
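As a hedged sketch of the second method described above (not the thesis's exact implementation), the following weights each pixel by an exponential of its local variability; the window size and the weighting constant alpha are assumptions.

```python
# Hedged sketch of exponential weighting by local variability for multi-focus
# fusion; window radius and alpha are assumed values, not the thesis's settings.

import numpy as np

def local_variability(img, radius=2):
    """Per-pixel variance of grey levels in a (2*radius+1)^2 neighbourhood."""
    h, w = img.shape
    out = np.zeros_like(img, dtype=float)
    for i in range(h):
        for j in range(w):
            patch = img[max(0, i - radius):i + radius + 1,
                        max(0, j - radius):j + radius + 1]
            out[i, j] = patch.var()
    return out

def fuse(img1, img2, alpha=1.0):
    """Weight each pixel by exp(alpha * local variability): sharper (more textured)
    regions of each source image contribute more to the fused result."""
    w1 = np.exp(alpha * local_variability(img1))
    w2 = np.exp(alpha * local_variability(img2))
    return (w1 * img1 + w2 * img2) / (w1 + w2)

# Toy usage with random arrays; real inputs would be the two multi-focus shots.
a = np.random.rand(32, 32)
b = np.random.rand(32, 32)
fused = fuse(a, b)
```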
