
Human-centred automation : with application to the fighter aircraft domain

Helldin, Tove January 2012
The working situation of fighter pilots is often very challenging. The pilots must perform their tasks and make decisions in situations characterised by time pressure, huge amounts of data and high workload, knowing that wrong decisions might have fatal consequences. To aid the pilots, several automatic support systems have been implemented in modern fighter aircraft, and more will follow in step with technological advances and the new demands placed on the pilots. For example, innovations within the information fusion (IF) domain have made it possible to fuse large amounts of data, stemming from different sensors, databases etc., creating a better foundation for deciding and acting than would have been possible if the information sources had been used separately. However, automation has both positive and negative effects: decreased workload and improved situation awareness on the one hand, but skill degradation and complacent behaviour on the other. To avoid the possible negative consequences of automation while reinforcing the positive ones, a human-centred automation (HCA) approach to system design has been proposed as a way of optimising the collaboration between the human and the machine. As a design approach, HCA stresses the importance of a cooperative human-machine relationship in which the operator is kept in the automation loop. However, how to introduce HCA within the fighter aircraft domain, and its implications for the interface and automation design of support systems in the field, have not been investigated. This thesis investigates the implications of introducing HCA into the fighter aircraft domain. Through literature surveys and empirical investigations, general and domain-specific HCA guidelines have been identified.
These advocate, for example, that an indication of the reliability of the information and recommendations provided by the different aircraft support systems must be given, and that support for appropriate updates of the pilots' individual and team awareness of the situation must be provided. A demonstrator mirroring some of the identified guidelines has been implemented and used to evaluate the guidelines together with system developers within the domain. The evaluation indicated that system developers of modern fighter aircraft already incorporate many of the identified HCA guidelines implicitly when designing. However, it further revealed that explicitly incorporating these guidelines into the development approach, preferably through a domain-specific style guide, would help the system developers design automated support systems that provide appropriate support for the pilots. The results presented in this thesis are expected to aid developers of modern fighter aircraft support systems by incorporating HCA into the traditional simulator-based design (SBD) approach. This approach is frequently used within the field and stresses early and frequent user involvement during design, in which complementary HCA evaluations could be performed to further improve the implemented support systems from an automation perspective. Furthermore, the results presented in this thesis are expected to contribute to research on how to incorporate the human operator in information fusion processes, which has been recognised as a research gap within the IF field. Thus, a further contribution of this thesis is a suggestion of how the HCA development approach could help improve the interaction between the operator and the automated fusion system.

Classification of uncertain data in the framework of belief functions : nearest-neighbor-based and rule-based approaches

Jiao, Lianmeng 26 October 2015
In many classification problems, data are inherently uncertain. The available training data might be imprecise, incomplete, or even unreliable. Besides, partial expert knowledge characterizing the classification problem may also be available. These different types of uncertainty pose great challenges for classifier design. The theory of belief functions provides a well-founded and elegant framework to represent and combine a large variety of uncertain information. In this thesis, we use this theory to address uncertain data classification problems based on two popular approaches, i.e., the k-nearest neighbor (kNN) rule and rule-based classification systems. For the kNN rule, one concern is that imprecise training data in class-overlapping regions may greatly affect its performance. An evidential editing version of the kNN rule was developed based on the theory of belief functions in order to model the imprecise information carried by the samples in overlapping regions. Another consideration is that sometimes only an incomplete training data set is available, in which case the performance of the kNN rule degrades dramatically. Motivated by this problem, we designed an evidential fusion scheme for combining a group of pairwise kNN classifiers built on locally learned pairwise distance metrics. For rule-based classification systems, in order to improve their performance in complex applications, we extended the traditional fuzzy rule-based classification system in the framework of belief functions and developed a belief rule-based classification system to address uncertain information in complex classification problems. Further, considering that in some applications, apart from training data collected by sensors, partial expert knowledge can also be available, a hybrid belief rule-based classification system was developed to make joint use of these two types of information for classification.
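The abstract names the belief-function machinery but gives no formulas. As a hedged illustration only, the sketch below shows the general evidential-kNN idea: each neighbor contributes a simple mass function focused on its class, and the masses are pooled with Dempster's rule. The parameters alpha and gamma and the toy neighbors are invented for the example, not taken from the thesis.

```python
import math

def neighbor_mass(label, dist, classes, alpha=0.95, gamma=1.0):
    """Mass function induced by one neighbor: some belief committed to its
    class, the remainder assigned to the whole frame (ignorance)."""
    support = alpha * math.exp(-gamma * dist ** 2)
    return {frozenset([label]): support, frozenset(classes): 1.0 - support}

def dempster(m1, m2):
    """Dempster's rule: conjunctive combination, renormalized by 1 - conflict."""
    out, conflict = {}, 0.0
    for a, va in m1.items():
        for b, vb in m2.items():
            inter = a & b
            if inter:
                out[inter] = out.get(inter, 0.0) + va * vb
            else:
                conflict += va * vb
    return {s: v / (1.0 - conflict) for s, v in out.items()}

classes = ("A", "B")
# three nearest neighbors of a query point: (class label, distance)
neighbors = [("A", 0.2), ("A", 0.5), ("B", 0.9)]
m = neighbor_mass(*neighbors[0], classes)
for lab, d in neighbors[1:]:
    m = dempster(m, neighbor_mass(lab, d, classes))
best = max((s for s in m if len(s) == 1), key=lambda s: m[s])
print(sorted(best))  # → ['A']
```

Note how the combined mass retains some weight on the whole frame, quantifying residual ignorance rather than forcing a crisp vote as plain kNN would.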

Developing Artificial Intelligence-Based Decision Support for Resilient Socio-Technical Systems

Ali Lenjani (8921381) 15 June 2020
During 2017 and 2018, two of the costliest years on record regarding natural disasters, the U.S. experienced 30 events with total losses of $400 billion. These exorbitant costs arise primarily from the lack of adequate planning, spanning the breadth from pre-event preparedness to post-event response. It is imperative to start thinking about ways to make our built environment more resilient. However, empirically calibrated and structure-specific vulnerability models, a critical input required to formulate decision-making problems, are not currently available. Here, the research objective is to improve the resilience of the built environment through an automated vision-based system that generates actionable information in the form of probabilistic pre-event prediction and post-event assessment of damage. The central hypothesis is that pre-event data, e.g., street-view images, along with the post-event image database, contain sufficient information to construct pre-event probabilistic vulnerability models for assets in the built environment. The rationale for this research stems from the fact that probabilistic damage prediction is the most critical input for formulating decision-making problems under uncertainty targeting mitigation, preparedness, response, and recovery efforts. The following tasks are completed towards this goal.

First, planning for one of the bottleneck processes of post-event recovery is formulated as a decision-making problem considering the consequences imposed on the community (module 1). Second, a technique is developed to automate the process of extracting multiple street-view images of a given built asset, thereby creating a dataset that illustrates its pre-event state (module 2). Third, a system is developed that automatically characterizes the pre-event state of the built asset and quantifies the probability that it is damaged by fusing information from deep neural network (DNN) classifiers acting on pre-event and post-event images (module 3). To complete the work, a methodology is developed to associate each asset of the built environment with a structural probabilistic vulnerability model by correlating the pre-event structure characterization with the post-event damage state (module 4). The method is demonstrated and validated using field data collected from recent hurricanes within the US.

The vision of this research is to enable the automatic extraction of information about exposure and risk, enabling smarter and more resilient communities around the world.
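Module 3's fusion of pre-event and post-event classifier outputs is described only at a high level. As a hedged sketch of how such probabilistic pooling can work in principle (a naive-Bayes combination under a conditional-independence assumption, with invented scores and prior, not the thesis's actual DNN fusion):

```python
def fuse_damage_probability(p_pre, p_post, prior=0.5):
    """Pool two per-source posteriors P(damaged | image) into one posterior,
    assuming the sources are conditionally independent given the damage state.
    'prior' is the base damage rate both classifiers were calibrated against."""
    odds = prior / (1.0 - prior)
    # Convert each posterior back to a likelihood ratio, multiply, renormalize.
    lr_pre = (p_pre / (1.0 - p_pre)) / odds
    lr_post = (p_post / (1.0 - p_post)) / odds
    posterior_odds = odds * lr_pre * lr_post
    return posterior_odds / (1.0 + posterior_odds)

# A pre-event street-view classifier is weakly suspicious of the asset,
# while the post-event classifier is fairly confident it is damaged:
p = fuse_damage_probability(p_pre=0.6, p_post=0.9, prior=0.5)
print(round(p, 3))  # → 0.931
```

Two agreeing sources yield a fused probability higher than either alone, which is the qualitative behavior the module-3 description calls for.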

Efficient Data Driven Multi Source Fusion

Islam, Muhammad Aminul 10 August 2018
Data/information fusion is an integral component of many existing and emerging applications; e.g., remote sensing, smart cars, Internet of Things (IoT), and Big Data, to name a few. While fusion aims to achieve better results than any one individual input can provide, the challenge is often to determine the underlying mathematics for aggregation suitable for an application. In this dissertation, I focus on the following three aspects of aggregation: (i) efficient data-driven learning and optimization, (ii) extensions and new aggregation methods, and (iii) feature- and decision-level fusion for machine learning, with applications to signal and image processing. The Choquet integral (ChI), a powerful nonlinear aggregation operator, is a parametric way (with respect to the fuzzy measure (FM)) to generate a wealth of aggregation operators. The FM has 2^N variables and N(2^(N-1)) monotonicity constraints for N inputs. As a result, learning the ChI parameters from data quickly becomes impractical for most applications. Herein, I propose a scalable learning procedure (linear with respect to training sample size) for the ChI that identifies and optimizes only data-supported variables. As such, the computational complexity of the learning algorithm is proportional to the complexity of the solver used. This method also includes an imputation framework to obtain scalar values for data-unsupported (aka missing) variables and a compression algorithm (lossy or lossless) for the learned variables. I also propose a genetic algorithm (GA) to optimize the ChI for non-convex, multi-modal, and/or analytical objective functions. This algorithm introduces two operators that automatically preserve the constraints; therefore, there is no need to explicitly enforce the constraints as is required by traditional GA algorithms. In addition, this algorithm provides an efficient representation of the search space with the minimal set of vertices. Furthermore, I study different strategies for extending the fuzzy integral to missing data, and I propose a goal programming framework to aggregate inputs from heterogeneous sources for ChI learning. Last, my work in remote sensing involves visual-clustering-based band group selection and Lp-norm multiple kernel learning based feature-level fusion in hyperspectral image processing to enhance pixel-level classification.
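The discrete Choquet integral at the heart of this dissertation can be computed by sorting the inputs and telescoping through the measure. The sketch below is a generic textbook formulation with a hand-picked toy measure, not code from the dissertation; it also makes concrete why a full FM carries one weight per subset of the N inputs (2^N variables).

```python
def choquet(x, g):
    """Discrete Choquet integral of inputs x (dict name -> value) with respect
    to a fuzzy measure g (dict frozenset-of-names -> weight in [0, 1]).
    Sort inputs in descending order and accumulate telescoping differences."""
    names = sorted(x, key=x.get, reverse=True)
    total, cur_set, prev_g = 0.0, frozenset(), 0.0
    for n in names:
        cur_set = cur_set | {n}
        total += x[n] * (g[cur_set] - prev_g)
        prev_g = g[cur_set]
    return total

x = {"s1": 0.9, "s2": 0.6, "s3": 0.3}  # three source confidences
g = {  # toy fuzzy measure: monotone, g(full set) = 1
    frozenset(["s1"]): 0.4, frozenset(["s2"]): 0.4, frozenset(["s3"]): 0.2,
    frozenset(["s1", "s2"]): 0.8, frozenset(["s1", "s3"]): 0.5,
    frozenset(["s2", "s3"]): 0.5, frozenset(["s1", "s2", "s3"]): 1.0,
}
print(round(choquet(x, g), 4))  # 0.9*0.4 + 0.6*(0.8-0.4) + 0.3*(1.0-0.8) = 0.66
```

Special choices of g recover familiar operators (e.g., g(A) = |A|/N gives the arithmetic mean), which is the "wealth of aggregation operators" the abstract refers to.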

Control room agents : an information-theoretic approach

Van der Westhuizen, Petra Laura 28 February 2007
In this thesis, a particular class of agent is singled out for examination. In order to provide a guiding metaphor, we speak of control room agents. Our focus is on rational decision-making by such agents, where the circumstances obtaining are such that rationality is bounded. Control room agents, whether human or non-human, need to reason and act in a changing environment with only limited information available to them. Determining the current state of the environment is a central concern for control room agents if they are to reason and act sensibly. A control room agent cannot plan its actions without having an internal representation (epistemic state) of its environment, and cannot make rational decisions unless this representation, to some level of accuracy, reflects the state of its environment. The focus of this thesis is on three aspects regarding the epistemic functioning of a control room agent: 1. How should the epistemic state of a control room agent be represented in order to facilitate logical analysis? 2. How should a control room agent change its epistemic state upon receiving new information? 3. How should a control room agent combine available information from different sources? In describing the class of control room agents as first-order intentional systems having both informational and motivational attitudes, an agent-oriented view is adopted. The central construct used in the information-theoretic approach, which is qualitative in nature, is the concept of a templated ordering. Representing the epistemic state of a control room agent by a (special form of) templated ordering signals a departure from the many approaches in which only the beliefs of an agent are represented. Templated orderings allow for the representation of both knowledge and belief.
A control room agent changes its epistemic state according to a proposed epistemic change algorithm, which allows the agent to select between two well-established forms of belief change operations, namely, belief revision and belief update. The combination of (possibly conflicting) information from different sources has received a lot of attention in recent years. Using templated orderings for the semantic representation of information, a new family of purely qualitative merging operations is developed. / School of Computing / Ph. D. (Computer Science)

Extracting and Aggregating Temporal Events from Texts

Döhling, Lars 11 October 2017
Finding reliable information about given events from large and dynamic text collections, such as the web, is a topic of great interest. For instance, rescue teams and insurance companies are interested in concise facts about damages after disasters, which can be found today in web blogs, online newspaper articles, social media, etc. Knowing these facts helps to determine the required scale of relief operations and supports their coordination. However, finding, extracting, and condensing specific facts is a highly complex undertaking: it requires identifying appropriate textual sources and their temporal alignment, recognizing relevant facts within these texts, and aggregating extracted facts into a condensed answer despite inconsistencies, uncertainty, and changes over time. In this thesis, we present and evaluate techniques and solutions for each of these problems, embedded in a four-step framework. Applied methods include pattern matching, natural language processing, and machine learning. We also report the results of two case studies applying our entire framework: gathering data on earthquakes and floods from web documents. Our results show that it is, under certain circumstances, possible to automatically obtain reliable and timely data from the web.
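As a purely illustrative toy (the thesis's actual patterns, framework steps, and aggregation strategies are not given in the abstract), the pattern-matching and aggregation steps might look like this: extract a magnitude and a death toll from conflicting earthquake reports, then condense them.

```python
import re
from statistics import median

# Invented example reports; real inputs would be crawled web documents.
reports = [
    "A magnitude 7.1 earthquake struck on Tuesday; 52 people were killed.",
    "Officials revised the toll: 61 people were killed by the magnitude 7.1 quake.",
]

mag_pat = re.compile(r"magnitude\s+(\d+(?:\.\d+)?)", re.I)
dead_pat = re.compile(r"(\d+)\s+people\s+were\s+killed", re.I)

magnitudes = [float(m.group(1)) for r in reports for m in [mag_pat.search(r)] if m]
deaths = [int(m.group(1)) for r in reports for m in [dead_pat.search(r)] if m]

# Aggregate inconsistent extractions into one condensed answer; the median is
# a simple robust choice, standing in for the thesis's aggregation step.
print(median(magnitudes), median(deaths))  # → 7.1 56.5
```

Handling temporal alignment ("revised the toll") is exactly where such a naive median falls short, which motivates the framework's explicit treatment of changes over time.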

Towards trusted and secure communications in a vehicular environment

Tan, Heng Chuan 13 September 2017
Routing and key management are the biggest challenges in vehicular networks. Inappropriate routing behaviour may affect the effectiveness of communications and the delivery of safety-related applications. On the other hand, key management, especially due to the use of PKI certificate management, can lead to high latency, which may not be suitable for many time-critical applications. For this reason, we propose two trust models to assist the routing protocol in selecting a secure end-to-end path for forwarding. The first model focusses on detecting selfish nodes, including reputation-based attacks designed to compromise the "true" reputation of a node. The second model is intended to detect forwarders that modify the contents of a packet before retransmission. For key management, we have developed a Secure and Authentication Key Management Protocol (SA-KMP) scheme that uses symmetric cryptography to protect communication, including eliminating certificates during communication to reduce PKI-related delays.
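The abstract gives no equations for the trust models. As a hedged baseline illustration only (invented observation counts and threshold, not the thesis's model), a forwarding-based reputation for flagging selfish nodes could follow the standard beta-reputation scheme:

```python
def beta_trust(forwarded, dropped):
    """Expected trust under a Beta(forwarded + 1, dropped + 1) posterior:
    starts at 0.5 with no evidence, moves with observed forwarding behaviour."""
    return (forwarded + 1) / (forwarded + dropped + 2)

# Hypothetical per-node observations: (packets forwarded, packets dropped).
observations = {"nodeA": (48, 2), "nodeB": (5, 15)}
THRESHOLD = 0.5  # illustrative cut-off below which a node is suspect

selfish = [n for n, (f, d) in observations.items() if beta_trust(f, d) < THRESHOLD]
print(selfish)  # → ['nodeB']
```

A routing protocol could then exclude flagged nodes when selecting an end-to-end path, which is the role the abstract assigns to the first trust model.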
