41

Information Fusion of Data-Driven Engine Fault Classification from Multiple Algorithms

Baravdish, Ninos January 2021
As the automotive industry makes constant technological progress, higher demands are placed on safety, environmental friendliness and durability. Modern vehicles are becoming increasingly complex systems, in terms of both hardware and software, making it important to detect faults in any of the components. Monitoring the engine's health has traditionally been done using expert knowledge and model-based techniques, where derived models of the system's nominal state are used to detect deviations. However, due to the increased complexity of the system, this approach faces limitations regarding the time and knowledge required to describe the engine's states. An alternative approach is therefore data-driven methods, which instead rely on historical data measured at different operating points to draw conclusions about the engine's present state. This thesis presents a proposed diagnostic framework consisting of a systematic approach to classifying known and unknown faults, along with fault size estimation. The basis for this lies in using principal component analysis to find the fault vector for each fault class and decouple one fault at a time, thus creating different subspaces. Importantly, this work investigates, from a performance perspective, the efficiency of taking multiple classifiers into account in the decision making. Aggregating multiple classifiers is done by solving a quadratic optimization problem. To evaluate the performance, a comparison with a random forest classifier has been made. Evaluation on challenging test data shows promising results, where the algorithm compares well with the performance of the random forest classifier.
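
As a rough illustration of the aggregation step, one way to weight several classifiers' probability outputs by solving a small quadratic program is sketched below; the function name, data shapes, validation setup and solver choice are assumptions for illustration, not the thesis's actual formulation.

```python
# Hypothetical sketch: aggregate the class-probability outputs of several
# classifiers by solving a small quadratic program for their weights. The
# weights minimise the squared error between the weighted probability vector
# and the one-hot validation labels, subject to w >= 0 and sum(w) = 1.
import numpy as np
from scipy.optimize import minimize

def aggregate_weights(probs, labels):
    """probs: (n_classifiers, n_samples, n_classes) predicted probabilities.
    labels: (n_samples,) integer class labels from validation data."""
    k, n, c = probs.shape
    onehot = np.eye(c)[labels]                      # (n_samples, n_classes)

    def objective(w):
        fused = np.tensordot(w, probs, axes=1)      # weighted mixture of outputs
        return np.sum((fused - onehot) ** 2)        # quadratic in w

    cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)
    res = minimize(objective, np.full(k, 1.0 / k),
                   bounds=[(0.0, 1.0)] * k, constraints=cons, method="SLSQP")
    return res.x

# A fused prediction for a new sample is then the argmax of the
# weighted sum of the individual classifiers' probability vectors.
```
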
42

Geo-Information Fusion for Time-Critical Geo-Applications

Hillen, Florian 18 March 2016
This thesis addresses the fusion of geo-information from different data sources for time-critical geo-applications. Such geo-information is extracted from sensors that record earth observation (EO) data. In recent years the number of sensors providing geo-information has grown strongly, not least because of the rising market for small sensors that are nowadays integrated in smartphones or even in fitness wristbands worn on the body. The resulting flood of geo-information builds the basis for new, time-critical geo-applications that would have been inconceivable a decade ago. The real-time characteristics of geo-information, which are also becoming more important for traditional sensors (e.g. remote sensors), require new methodologies and scientific investigations regarding aggregation and analysis, which can be summarised under the term geo-information fusion. Thus, the main goal of this thesis is to investigate the fusion of geo-information for time-critical geo-applications, focusing on the benefits as well as the challenges and obstacles that arise. Three different use cases dealing with the capture, modelling and analysis of spatial information are studied. In the process, the main emphasis is on the added value and the benefits of geo-information fusion. One can speak of "added value" if the informational content can only be derived by combining information from different sources, meaning that it cannot be derived from any one source individually. Another goal of this thesis is the prototypical integration of the fusion approach into spatial data infrastructures (SDIs) to increase the interoperability of the developed methods. In this way, the fusion can be provided (e.g. over the internet) and used by a multitude of users and developers. Beyond that, the integration is highly important with regard to systems and concepts such as the Global Earth Observation System of Systems (GEOSS), the INSPIRE directive for Europe and the European monitoring system Copernicus. The results and findings of this thesis can be seen as first advances and can be used for further research and studies in the field of geo-information fusion, which will gain further importance and relevance for all spatial questions in the future.
43

A Framework For The Assessment And Analysis Of Multi-hazards-induced Risk Resulting From Space Vehicles Operations

Sala-Diakanda, Serge 01 January 2007
With the foreseeable increase in traffic frequency to and from orbit, the safe operation of current and future space vehicles at designated spaceports has become a serious concern. Due to their high explosive energy potential, operating those launch vehicles presents a real risk to: (1) the spaceport infrastructure and personnel, (2) the communities surrounding the spaceport and (3) aircraft whose flight routes could be relatively close to spaceport launch and reentry routes. Several computer models aimed at modeling the effects of the different hazards generated by the breakup of such vehicles (e.g., fragmentation of debris, release of toxic gases, propagation of blast waves, etc.) have been developed and are used to assist in Go/No-Go launch decisions. They can simulate a total failure scenario of the vehicle and estimate the number of casualties to be expected as a result of such a failure. However, as all of these models, which can be very elaborate and complex, consider only one specific explosion hazard in their simulations, the decision of whether or not a launch should occur is currently based on the evaluation of several separate estimates of the expected number of casualties. As such, current practices ignore the complex, nonlinear interactions between the different hazards as well as the interdependencies between the estimates. In this study, we developed a new framework which makes use of information fusion theory, hazard dispersion modeling, and the geographical statistical analysis and visualization capabilities of geographic information systems to assess the risk generated by the operation of space launch vehicles. A new risk metric, which effectively addresses the lack of a common risk metric in current methods, is also proposed. A case study based on a proposed spaceport in the state of Oklahoma showed that the estimates generated by our framework consistently outperform estimates provided by any individual hazard model, or by the independent combination of those hazards. Furthermore, the study revealed that using anything other than fusion could produce seriously misleading results, with potentially catastrophic consequences.
44

A Unified Alert Fusion Model For Intelligent Analysis Of Sensor Data In An Intrusion Detection Environment

Siraj, Ambareen 05 August 2006
The need for higher-level reasoning capabilities beyond low-level sensor abilities has prompted researchers to use different types of sensor fusion techniques for better situational awareness in the intrusion detection environment. These techniques primarily vary in terms of their mission objectives. Some prioritize alerts for alert reduction, some cluster alerts to identify common attack patterns, and some correlate alerts to identify multi-staged attacks. Each of these tasks has its own merits. Unlike previous efforts in this area, this dissertation combines the primary tasks of sensor alert fusion, i.e., alert prioritization, alert clustering and alert correlation, into a single framework such that individual results are used to quantify a confidence score as an overall assessment for global diagnosis of a system's security health. Such a framework is especially useful in a multi-sensor environment where the sensors can collaborate with or complement each other to provide increased reliability, making it essential that the outputs of the sensors are fused in an effective manner in order to provide an improved understanding of the security status of the protected resources in the distributed environment. This dissertation uses a possibilistic approach in intelligent fusion of sensor alerts with Fuzzy Cognitive Modeling in order to accommodate the impreciseness and vagueness in knowledge-based reasoning. We show that our unified architecture for sensor fusion provides better insight into the security health of systems. A new multi-level alert clustering method is developed to accommodate inexact matching in alert features and is shown to provide relevance to more alerts than traditional exact clustering. Alert correlation with a new abstract incident modeling technique is shown to deal with scalability and uncertainty issues present in traditional alert correlation. New concepts of dynamic fusion are presented for overall situation assessment, which a) in case of misuse sensors, combines results of alert clustering and alert correlation, and b) in case of anomaly sensors, corroborates evidence from primary and secondary sensors for deriving the final conclusion on the systems' security health.
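
A minimal sketch of the kind of Fuzzy Cognitive Map update that such possibilistic fusion builds on is given below; the concepts, weights and update variant are invented for illustration and are not taken from the dissertation.

```python
# Illustrative sketch of a tiny Fuzzy Cognitive Map (FCM). Concept
# activations are repeatedly combined with their weighted causal inputs
# (a common self-memory update variant) and squashed into [0, 1] by a
# sigmoid, iterating until the activations settle near a fixed point.
import numpy as np

def fcm_infer(weights, activation, steps=20):
    """weights[i, j]: causal influence of concept i on concept j."""
    for _ in range(steps):
        activation = 1.0 / (1.0 + np.exp(-(activation + activation @ weights)))
    return activation

# Invented concepts: [alert priority, cluster evidence,
#                     correlation evidence, security health]
W = np.array([[0.0, 0.0, 0.0, 0.6],
              [0.0, 0.0, 0.0, 0.5],
              [0.0, 0.0, 0.0, 0.7],
              [0.0, 0.0, 0.0, 0.0]])
state = np.array([0.8, 0.4, 0.6, 0.0])   # initial evidence from the sensors
print(fcm_infer(W, state))                # settled "security health" score
```
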
45

Increasing DBM Reliability using Distribution Independent Tests and Information Fusion Techniques

Rajagopalan, Vidya 21 January 2010
In deformation-based morphometry (DBM), group-wise differences in brain structure are measured using deformable registration and some form of statistical test. However, it is known that DBM results are sensitive to both the registration method and the statistical test used. Given the lack of an objective model of group variation, it has been difficult to determine the extent of the influence of registration implementation or constraints on DBM analysis. In this thesis, we use registration methods with varying levels of theoretical similarity to study the influence of registration mechanics on DBM results. We show that, because of the extent of this influence, analysis of changes should always be made with a thorough understanding of the registration method used. We also show that minor variations in registration methods can lead to large changes in DBM results. When using DBM, it would be imprudent to use only one registration method to draw conclusions about the variations being studied. In order to provide a more complete representation of inter-group changes, we propose a method for combining multiple registration methods using Dempster-Shafer evidence theory to produce belief maps of categorical changes between groups. We show that the Dempster-Shafer combination produces a unique and easy-to-interpret belief map of regional changes between and within groups without the complications associated with hypothesis testing. Another, often confounding, element of DBM is the parametric hypothesis test used to identify voxels undergoing significant change between the two groups. The accuracy and reliability of these tests are contingent on a number of fundamental assumptions about the distribution of the data used in the tests. Many DBM studies overlook these assumptions and fail to verify their validity for the data being tested, which raises doubts about the credibility of the results from such tests. In this thesis, we propose to perform statistical analysis on DBM data using nonparametric, distribution-independent hypothesis tests. With no distributional assumptions on the data, these tests provide both increased flexibility and reliability for DBM statistical analysis.
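
Dempster's rule of combination, the operation underlying the belief maps described above, can be sketched for a single voxel as follows; the frame of discernment and the mass values are illustrative assumptions, not the thesis's actual data.

```python
# Sketch of Dempster's rule of combination for one voxel, assuming a frame
# of discernment {expansion, contraction, none}. Mass on the full frame
# represents ignorance; mass falling on the empty set is the conflict,
# which the rule normalises away.
from itertools import product

def dempster_combine(m1, m2):
    """m1, m2: dicts mapping frozenset hypotheses to mass. Returns fused masses."""
    fused, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            fused[inter] = fused.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb                       # mass on the empty set
    return {h: v / (1.0 - conflict) for h, v in fused.items()}

E, C = frozenset({"exp"}), frozenset({"con"})
THETA = frozenset({"exp", "con", "none"})             # full frame (ignorance)
m_reg1 = {E: 0.6, THETA: 0.4}                         # registration method 1
m_reg2 = {E: 0.5, C: 0.2, THETA: 0.3}                 # registration method 2
print(dempster_combine(m_reg1, m_reg2))
```
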
46

Explanation Methods for Bayesian Networks

Helldin, Tove January 2009
The international maritime industry is growing fast due to an increasing number of transportations over sea. In pace with this development, the maritime surveillance capacity must be expanded as well, in order to handle the increasing number of hazardous cargo transports, attacks, piracy etc. To detect such events, anomaly detection methods and techniques can be used. Moreover, since surveillance systems process huge amounts of sensor data, anomaly detection techniques can be used to filter out or highlight interesting objects or situations for an operator. Making decisions upon large amounts of sensor data can be a challenging and demanding activity for the operator, not only due to the quantity of the data; factors such as time pressure, high stress and uncertain information further aggravate the task. Bayesian networks can be used to detect anomalies in data and have, in contrast to many other opaque machine learning techniques, some important advantages. One of these is that it is possible for a user to understand and interpret the model, due to its graphical nature. This thesis investigates how the output from a Bayesian network can be explained to a user, first by reviewing and presenting the methods that exist and second by conducting experiments. The experiments investigate whether two explanation methods can be used to explain the inferences made by a Bayesian network, in order to support the operator's situation awareness and decision-making process when deployed on an anomaly detection problem in the maritime domain.
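
One simple family of explanation methods reports how much each piece of evidence shifts the posterior of the target node; the sketch below illustrates that idea on an invented two-variable network and is not one of the two methods evaluated in the thesis.

```python
# Minimal illustrative sketch: explain a Bayesian network's output by
# reporting how much each finding shifts the posterior of the target node.
# The tiny network and all probabilities are invented for illustration.
def posterior_anomalous(evidence):
    """Naive-Bayes-style network: 'speed_drop' and 'off_route' are noisy
    observations of the anomalous state. Exact inference by enumeration."""
    p_a = 0.05                                         # prior P(anomalous)
    likelihood = {"speed_drop": (0.7, 0.1),            # P(obs | A), P(obs | not A)
                  "off_route":  (0.8, 0.05)}
    num, den = p_a, 1.0 - p_a
    for obs in evidence:
        la, lna = likelihood[obs]
        num, den = num * la, den * lna
    return num / (num + den)

evidence = ["speed_drop", "off_route"]
full = posterior_anomalous(evidence)
for obs in evidence:                                   # impact of each finding
    reduced = posterior_anomalous([o for o in evidence if o != obs])
    print(f"{obs}: removing it changes P(anomalous) by {full - reduced:+.3f}")
```
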
47

Evaluating the performance of TEWA systems

Johansson, Fredrik January 2010
In military engagements, it is the task of the air defense to protect valuable assets such as air bases from being destroyed by hostile aircraft and missiles. In order to fulfill this mission, the defenders are equipped with sensors and firing units. To infer whether a target is hostile and threatening is far from a trivial task. This is dealt with in a threat evaluation process, in which the targets are ranked based upon their estimated level of threat posed to the defended assets. Once the degree of threat has been estimated, the problem of weapon allocation comes into the picture. Given that a number of threatening targets have been identified, the defenders need to decide whether any firing units shall be allocated to the targets, and if so, which firing unit should engage which target. To complicate matters, the outcomes of such engagements are usually stochastic. Moreover, there are often tight time constraints on how fast the threat evaluation and weapon allocation processes need to be executed. There are already today a large number of threat evaluation and weapon allocation (TEWA) systems in use, i.e. decision support systems aiding military decision makers with the threat evaluation and weapon allocation processes. However, despite the critical role of such systems, it is not clear how to evaluate the performance of the systems and their algorithms. Hence, the work in this thesis is focused on the development and evaluation of TEWA systems, and of the algorithms for threat evaluation and weapon allocation that are part of such systems. A number of algorithms for threat evaluation and static weapon allocation are suggested and implemented, and testbeds facilitating their evaluation are developed. Experimental results show that particle swarm optimization is suitable for real-time target-based weapon allocation in situations involving up to approximately ten targets and ten firing units, while for larger problem sizes it gives better results to use an enhanced greedy maximum marginal return algorithm, or a genetic algorithm seeded with the solution returned by the greedy algorithm. / Fredrik Johansson also does research at the University of Skövde, Informatics Research Centre
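
The enhanced greedy maximum marginal return algorithm is not specified in the abstract, but the basic greedy MMR idea can be sketched as follows; the threat values and kill probabilities are invented for illustration.

```python
# Sketch of the greedy maximum-marginal-return idea for static target-based
# weapon allocation: repeatedly assign the (firing unit, target) pair that
# most reduces the expected surviving threat. Each unit is assigned exactly
# once; all numeric values below are invented.
import numpy as np

def greedy_mmr(threat, p_kill):
    """threat: (n_targets,) threat values. p_kill[u, t]: kill probability
    of unit u against target t."""
    survival = np.ones_like(threat)                 # P(target survives) so far
    assignment = {}
    for _ in range(p_kill.shape[0]):
        free = [u for u in range(p_kill.shape[0]) if u not in assignment]
        # marginal return of assigning free unit u to target t
        gain = np.array([[threat[t] * survival[t] * p_kill[u, t]
                          for t in range(len(threat))] for u in free])
        i, t = np.unravel_index(gain.argmax(), gain.shape)
        u = free[i]
        assignment[u] = int(t)
        survival[t] *= 1.0 - p_kill[u, t]           # update survival probability
    return assignment

threat = np.array([0.9, 0.5, 0.7])
p_kill = np.array([[0.6, 0.3, 0.5], [0.4, 0.7, 0.2]])
print(greedy_mmr(threat, p_kill))                   # unit -> target mapping
```
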
48

Petri nets for situation recognition

Dahlbom, Anders January 2011
Situation recognition is a process with the goal of identifying a priori defined situations in a flow of data and information. The purpose is to aid decision makers in focusing on relevant information by filtering out situations of interest. This is an increasingly important and non-trivial problem to solve, since the amount of information in various decision-making situations constantly grows. Situation recognition thus addresses the information gap, i.e. the problem of finding the correct information at the correct time. Interesting situations may also evolve over time, and they may consist of multiple participating objects and their actions, which makes the problem even more complex to solve. This thesis explores situation recognition and provides a conceptualization and a definition of the problem which allow situations of partial temporal definition to be described. The thesis then focuses on investigating how Petri nets can be used for recognising situations. Existing Petri net based approaches for recognition have some limitations when it comes to fulfilling the requirements that can be put on solutions to the situation recognition problem. An extended Petri net based technique that addresses these limitations is therefore introduced. It is shown that this technique can be as efficient as a rule-based technique using the Rete algorithm with extensions for explicitly representing temporal constraints. Such techniques are known to be efficient; hence, the Petri net based technique is efficient too. The thesis also looks at the problem of learning Petri net situation templates using genetic algorithms. Results point towards complex dynamic genome representations being better suited for learning complex concepts, since they allow promising solutions to be found more quickly compared with classical bit-string based representations. In conclusion, the extended Petri net based technique is argued to offer a viable approach for situation recognition since it: (1) can achieve good recognition performance, (2) is efficient with respect to time, (3) allows manually constructed situation templates to be improved and (4) can be used with real-world data to find real-world situations. / Anders Dahlbom is also affiliated to Skövde Artificial Intelligence Lab (SAIL), Information Fusion Research Program, Högskolan i Skövde
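
A minimal sketch of how a Petri net recognises a situation from an event stream is given below; the places, transitions and the two-step situation are invented examples, not templates from the thesis.

```python
# Minimal sketch of a Petri net executor of the kind used for situation
# recognition: a transition fires when all of its input places hold a token,
# consuming those tokens and marking its output places. The two-step
# "approach then stop" situation below is an invented example.
class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)                  # place -> token count
        self.transitions = {}                         # name -> (inputs, outputs)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def fire(self, name):
        inputs, outputs = self.transitions[name]
        if all(self.marking.get(p, 0) > 0 for p in inputs):
            for p in inputs:
                self.marking[p] -= 1                  # consume input tokens
            for p in outputs:
                self.marking[p] = self.marking.get(p, 0) + 1
            return True
        return False                                  # transition not enabled

net = PetriNet({"start": 1})
net.add_transition("approach", ["start", "ev_approach"], ["approached"])
net.add_transition("stop", ["approached", "ev_stop"], ["recognised"])

for event in ["ev_approach", "ev_stop"]:              # simulated event stream
    net.marking[event] = net.marking.get(event, 0) + 1
    for t in net.transitions:
        net.fire(t)
print(net.marking.get("recognised", 0))               # 1 => situation recognised
```
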
49

Visual analytics for maritime anomaly detection

Riveiro, María José January 2011
The surveillance of large sea areas typically involves the analysis of huge quantities of heterogeneous data. In order to support the operator while monitoring maritime traffic, the identification of anomalous behavior or situations that might need further investigation may reduce the operator's cognitive load. While it is worth acknowledging that existing mining applications support the identification of anomalies, autonomous anomaly detection systems are rarely used for maritime surveillance. Anomaly detection is normally a complex task that can hardly be solved by purely visual or purely computational methods. This thesis suggests and investigates the adoption of visual analytics principles to support the detection of anomalous vessel behavior in maritime traffic data. This adoption involves studying the analytical reasoning process that needs to be supported, using combined automatic and visualization approaches to support that process, and evaluating the integration. The analysis of data gathered during interviews and participant observations at various maritime control centers, together with the inspection of video recordings of real anomalous incidents, led to a characterization of the analytical reasoning process that operators go through when monitoring traffic. These results are complemented with a literature review of anomaly detection techniques applied to sea traffic. A particular statistical-based technique is implemented, tested, and embedded in a proof-of-concept prototype that allows user involvement in the detection process. The quantitative evaluation carried out with the prototype reveals that participants who used the visualization of normal behavioral models outperformed the group without this aid. The qualitative assessment shows that domain experts are positive towards automatic support and the visualization of normal behavioral models, since these aids may reduce reaction time as well as increase trust in and comprehensibility of the system. Based on the lessons learned, this thesis provides recommendations for designers and developers of maritime control and anomaly detection systems, as well as guidelines for carrying out evaluations of visual analytics environments. / Maria Riveiro is also affiliated to the Informatics Research Centre and the Information Fusion Research Program, Högskolan i Skövde
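
As an illustration of a statistical model of normal vessel behaviour, the sketch below fits a Gaussian model to invented speed/course data and flags low-likelihood observations; the thesis's actual technique may differ in detail.

```python
# Illustrative sketch of statistical anomaly detection on vessel data: fit a
# Gaussian model of "normal" speed/course behaviour and flag observations
# whose Mahalanobis distance exceeds a threshold. All data and the threshold
# are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
normal = rng.normal([12.0, 90.0], [2.0, 10.0], size=(500, 2))  # speed, course

mean = normal.mean(axis=0)
cov = np.cov(normal, rowvar=False)
inv_cov = np.linalg.inv(cov)

def mahalanobis_sq(x):
    d = x - mean
    return float(d @ inv_cov @ d)           # squared Mahalanobis distance

observation = np.array([3.0, 250.0])        # slow vessel on an odd course
threshold = 13.8                            # ~ chi-square(2 dof) at the 0.1% level
print("anomalous" if mahalanobis_sq(observation) > threshold else "normal")
```
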
50

Multimodal fusion for leaf species recognition

Ben Ameur, Rihab 04 June 2018
Information fusion systems combine data from different sources of information while taking their quality into account. Combining data from heterogeneous sources exploits their complementarity and can therefore achieve higher performance than any single source alone. Such systems are attractive for tree species recognition through the fusion of information from two modalities: leaves and bark. A single modality may itself comprise several sources of information, each describing one of its most relevant characteristics. This reproduces the strategy adopted by botanists, who rely on the same criteria, and serves the educational aim of the application. A fusion system can therefore combine data within a single modality as well as across the available modalities. Tree species recognition here is a real-world problem, since the photos of leaves and bark are taken in the natural environment. Processing this kind of data is difficult owing to the nature of the objects to be recognized (age, inter-species similarity and intra-species variability) on the one hand and the environment on the other. Errors can accumulate throughout the processing that precedes fusion. The merit of fusion is to take into account all the imperfections that can taint the available data and to model them well; fusion is all the more effective when the data are well modeled. The theory of belief functions is one of the theoretical frameworks best suited to managing and representing uncertainty, imprecision, conflict, etc. Its strength lies in its wealth of tools for handling the various sources of imperfection as well as the specificities of the available data. Within this theory, data are modeled by constructing mass functions, computational complexity can be managed through approximations that reduce the number of focal elements, and conflict, one of the most prevalent imperfections, can be handled by selecting the most suitable combination rule. When fusing sources of differing reliability, the least reliable source may degrade the data from the most reliable one. One remedy is to improve the performance of the least reliable source, so that when fused with the others it contributes useful information and in turn improves the performance of the fusion system. The performance of an information source can be improved by correcting its mass functions, based on measures of the relevance or the truthfulness of the source; confusion matrices are one data source from which such meta-knowledge characterizing the state of a source can be extracted. The fusion system proposed in this manuscript is a hierarchical fusion system built within the framework of belief function theory. It fuses data from leaves and bark and offers the user a list of the most likely species, while respecting the educational purpose of the application. Its computational complexity is low enough that, in the long term, the application could be implemented on a smartphone.
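
The correction of mass functions mentioned above can be illustrated with classical Shafer discounting, where a fraction of every mass is transferred to the full frame according to the source's reliability; the species frame, masses and reliability value below are invented for illustration.

```python
# Sketch of classical Shafer discounting, one standard way to correct a mass
# function for an unreliable source before fusion: a fraction (1 - alpha) of
# every mass is transferred to the full frame (total ignorance).
def discount(masses, frame, alpha):
    """masses: dict frozenset -> mass; alpha in [0, 1] is source reliability."""
    out = {h: alpha * m for h, m in masses.items()}
    out[frame] = out.get(frame, 0.0) + (1.0 - alpha)  # move mass to ignorance
    return out

FRAME = frozenset({"oak", "beech", "maple"})
bark_masses = {frozenset({"oak"}): 0.7, FRAME: 0.3}   # bark: less reliable modality
print(discount(bark_masses, FRAME, alpha=0.6))
# {'oak'}: 0.42, full frame: 0.58; ready to be combined with the leaf modality
```
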
