
Data Fusion for Materials Location Estimation in Construction

Navabzadeh Razavi, Saiedeh, 29 April 2010
Effective automated tracking and locating of the thousands of materials on construction sites improves material distribution and project performance and thus has a significant positive impact on construction productivity. Many locating technologies and data sources have therefore been developed, and the deployment of a cost-effective, scalable, and easy-to-implement materials location sensing system at actual construction sites has very recently become both technically and economically feasible. However, considerable opportunity still exists to improve the accuracy, precision, and robustness of such systems. The quest for fundamental methods that can take advantage of the relative strengths of each individual technology and data source motivated this research, which has led to the development of new data fusion methods for improving materials location estimation. In this study, a data fusion model is used to generate an integrated solution for the automated identification, location estimation, and relocation detection of construction materials. The developed model is a modified functional data fusion model. Particular attention is paid to noisy environments where low-cost RFID tags are attached to all materials, which are sometimes moved repeatedly around the site. Part of the work focuses on relocation detection because it is closely coupled with location estimation and because it can be used to detect the multi-handling of materials, which is a key indicator of inefficiency. This research has successfully addressed the challenges of fusing data from multiple sources of information in a very noisy and dynamic environment. The results indicate potential for the proposed model to improve location estimation and movement detection as well as to automate the calculation of the incidence of multi-handling.

Source independence in the theory of belief functions

Chebbah, Mouna, 25 June 2014
The theory of belief functions manages uncertainty and proposes a set of combination rules to aggregate the beliefs of several sources. Some combination rules mix evidential information where sources are independent; other rules are suited to combining evidential information held by dependent sources. Information on sources' independence is required to justify the choice of the adequate type of combination rule. In this thesis, we suggest a method to quantify sources' degrees of independence that may guide the choice of the appropriate type of combination rule. In fact, we propose a statistical approach to learn sources' degrees of independence from all provided evidential information. There are three main uses of estimating sources' degrees of independence. First, we use the degrees of independence to guide the choice of combination rules when aggregating the beliefs of several sources. Second, we propose to integrate sources' degrees of independence into sources' beliefs, leading to an operator similar to discounting. Finally, we define a new combination rule weighted with the sources' degree of independence.
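
The combination machinery at stake here is easy to sketch. Below is a minimal Python illustration (not the thesis's code) of Dempster's conjunctive rule, which presumes independent sources, together with one plausible reading of a rule that is "a weighted average with the independence degree". Mass functions are dicts from focal sets to masses; the frame, the masses and the `independence` value are all illustrative assumptions.

```python
def dempster_combine(m1, m2):
    """Dempster's rule: conjunctive combination with conflict renormalization.
    Mass functions are dicts mapping frozenset focal elements to masses."""
    combined, conflict = {}, 0.0
    for a, w1 in m1.items():
        for b, w2 in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2  # product mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: the sources fully contradict each other")
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

def independence_weighted_combine(m1, m2, independence):
    """One plausible 'weighted average' rule (an assumption, not necessarily the
    thesis's exact definition): blend Dempster's rule, justified for independent
    sources, with a cautious plain average of the masses, weighted by the
    learned independence degree in [0, 1]."""
    dem = dempster_combine(m1, m2)
    keys = set(m1) | set(m2) | set(dem)
    return {k: independence * dem.get(k, 0.0)
               + (1.0 - independence) * 0.5 * (m1.get(k, 0.0) + m2.get(k, 0.0))
            for k in keys}

A, B = frozenset({"a"}), frozenset({"b"})
AB = A | B                       # total ignorance on the frame {a, b}
m1 = {A: 0.6, AB: 0.4}           # source 1 leans toward 'a'
m2 = {B: 0.3, AB: 0.7}           # source 2 weakly supports 'b'
print(dempster_combine(m1, m2))
print(independence_weighted_combine(m1, m2, independence=0.8))
```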

Fraud detection in financial transactions via Internet in real time

Kovach, Stephan, 15 June 2011
One of the most important goals of any fraud detection system, regardless of its domain of operation, is to detect the largest number of frauds with the fewest false alarms, also known as false positives. The existence of false positives is inherent to any fraud detection system. The first step in achieving this goal is to identify the attributes that can be used to differentiate between legitimate and fraudulent activities. The next step is to identify a method for each chosen attribute to make this distinction. The proper choice of attributes and corresponding methods largely determines the performance of a fraud detector, not only in terms of the ratio between the number of detected frauds and the number of false positives, but also in terms of processing time. The challenge of this choice is greater when dealing with fraud detection in real time, that is, making the detection before the fraud is carried out.
This work proposes an architecture for a real-time fraud detection system for Internet banking transactions, based on local and global observations of user behavior. A statistical method based on differential analysis is used to obtain the local evidence of fraud: here, the evidence is based on the difference between the user's current and historical behavior profiles. The local evidence of fraud is then strengthened or weakened by the user's global behavior: in this case, the evidence is based on the number of different accounts accessed from the device used by the user, and on a probabilistic value that varies over time. The Dempster-Shafer mathematical theory of evidence is applied to combine these pieces of evidence into a final suspicion score, which is then compared with a threshold to trigger an alarm indicating fraud. The main innovation and contribution of this work lie in the definition and exploration of detection methods based on global attributes that are specific to the domain of financial transactions. Evaluation results using a database of transaction records corresponding to actual usage profiles showed that integrating a detector based on global attributes increased the system's fraud detection capacity by 20%.
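
As a concrete reading of that pipeline, here is a minimal sketch (not the thesis's implementation): local evidence from the behavior-profile distance and global evidence from the device's multi-account access count are expressed as mass functions over the frame {fraud, legit}, fused with Dempster's rule, and the resulting belief in fraud is compared with an alarm threshold. Every mapping function, constant, and the threshold itself are illustrative assumptions.

```python
FRAUD, LEGIT = frozenset({"fraud"}), frozenset({"legit"})
EITHER = FRAUD | LEGIT  # total ignorance

def local_evidence(profile_distance):
    """Map the distance between current and historical behavior profiles
    (normalized to [0, 1]) to a mass function; residual mass stays on EITHER."""
    return {FRAUD: 0.9 * profile_distance,
            EITHER: 1.0 - 0.9 * profile_distance}

def global_evidence(accounts_from_device):
    """More distinct accounts accessed from one device -> stronger fraud mass."""
    belief = min(0.95, 0.2 * max(0, accounts_from_device - 1))
    return {FRAUD: belief, EITHER: 1.0 - belief}

def dempster(m1, m2):
    """Dempster's rule of combination with conflict renormalization."""
    out, conflict = {}, 0.0
    for a, w1 in m1.items():
        for b, w2 in m2.items():
            inter = a & b
            if inter:
                out[inter] = out.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2
    return {s: w / (1.0 - conflict) for s, w in out.items()}

def suspicion_score(profile_distance, accounts_from_device):
    fused = dempster(local_evidence(profile_distance),
                     global_evidence(accounts_from_device))
    return fused.get(FRAUD, 0.0)  # belief committed exactly to fraud

if suspicion_score(0.7, 4) > 0.8:   # threshold value is an assumption
    print("ALARM: transaction flagged as suspicious")
```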

A study of corporate culture compatibility on supply chain performance

Al-Mutawah, Khalid, January 2009
Supply chain systems have become a vital component of successful networked business firms and organisations. Over the last three decades, there has been dramatic growth globally in the formation of supply chain networks. Research, however, indicates an increase in reported supply chain failures and in incompatibility issues between participating organisations. These incompatibility issues are not just technical, but encompass wider cultural, organisational, and economic factors. Whilst research has shown the effect of such factors on supply chain performance, the influence of corporate culture compatibility on the success of supply chains remains poorly understood, because it is widely accepted that organisations operating in the same region possess a similar culture. In contrast, this research examines the existence of corporate culture diversity between organisations in the same region, rather than diversity of national culture across different regions. Specifically, the study describes the development of corporate culture compatibility between supply chain organisations and its influence on supply chain performance. The focus of the thesis is therefore the complex interrelationships between the corporate culture compatibility of member organisations and supply chain performance. This research identifies the cultural norms and beliefs of supply chain members within key organisational factors, rather than national or multi-national organisational factors as in Hofstede (1983). A multi-method research design (combining case study, simulation, and neuro-fuzzy methods) was used to provide a rounded perspective on the phenomena studied. The multiple case studies helped to explore how corporate culture compatibility influences supply chain performance and to develop a conceptual model for this association. Simulation experiments were conducted to verify the conceptual framework obtained from the multiple case studies and to investigate the effects of changing the corporate culture compatibility level on supply chain performance. The simulation is designed on a Multi-Agent System (MAS) approach, in which each organisation in a supply chain is represented as an intelligent agent. Finally, a neuro-fuzzy approach is presented to assess corporate culture in the supply chain context using real data. The analysis of the quantitative neuro-fuzzy study confirmed and validated the theoretical findings and adds depth to our understanding of the influence of corporate culture compatibility on supply chain performance. The study confirmed that organisations within the same supply chain in the same region possess different corporate cultures and consequently need to achieve corporate culture compatibility, as indicated by the literature. Moreover, the study revealed two types of corporate culture in the supply chain context: individual culture and common culture. Individual culture refers to the internal beliefs within an organisation's boundary, while common culture refers to beliefs held when trading with partners across the organisation's boundary. The study shows that common culture has more influence on supply chain performance than individual culture. In addition, the study highlighted a bi-directional association between individual culture and common culture that helps supply chain organisations develop their corporate culture compatibility.
The results from the current study also showed that supply chain performance rises dramatically in response to increases in the corporate culture compatibility level. Yet this performance gain diminishes at higher levels of corporate culture compatibility, because further compatibility increases are not cost effective for the organisations; in addition, organisations at a higher level of compatibility have a stronger preference to preserve their individual culture because it represents their identity. Furthermore, the study fills a gap in the literature related to assessing the corporate culture of individual organisations in supply chains for sustaining higher supply chain performance. While current culture assessment models observe individual organisations' cultures, the proposed approach describes a single consolidated model that integrates both individual and common culture in measuring the influence of culture compatibility on supply chain performance. The findings from this study provide scholars, consultants, managers, and supply chain systems vendors with valuable information. This thesis contributes to supply chain configuration and partnership formation theory, along with corporate culture theory, and is the first of its kind to establish the use of intelligent methods to model corporate culture compatibility. It is also one of the first empirical studies to compare the corporate culture compatibility of supply chain organisations from an organisational perspective, rather than a national one.

An empirical study for the application of the evidential reasoning rule to decision making in financial investment

Gao, Quanjian, January 2016
The aim of this thesis is to explore the adaptability of the Evidential Reasoning (ER) rule as a useful supporting tool for helping investors make decisions on financial investments. Decision making in financial investment often involves conflicting information and the subjective judgment of investors. Accordingly, the ER rule, which extends the original popular Evidential Reasoning algorithm developed for Multiple Criteria Decision Making (MCDM), is particularly suited to handling conflicts in information and to allowing judgmental weighting of the sources of evidence. The fundamental concern is to define and assess "efficient information". For this purpose, a process denoted Efficient Information Assessment (EIA) is defined, modeled with the mass function of Dempster-Shafer theory so that the underlying architecture satisfies the requirements of the ER rule: any relevant information selected from an expert's knowledge database is "efficient" if the data fully complies with those requirements. The EIA process derives a set of portfolio strategies from the information recommended by top financial analysts; the model then enables the ER rule to evaluate all strategies to help investors make decisions. Experiments were carried out to back-test the investment strategy using data from the China Stock Market & Accounting Research (CSMAR) Database for the four-year period between 2009 and 2012. The data contained more than 270,000 reports from more than 4,600 financial analysts. The risk-adjusted average annual return of the strategy outperformed that of the CSI300 index by as much as 10.69% for an investment horizon of six months, with the p-value from Student's t-test as low as 0.02%. The EIA model serves as the first successful application adapting the ER rule to a new and effective decision-making process in financial investment and, to the best of my knowledge, this is the only empirical study applying the ER rule to the opinions of financial analysts.
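
A much-simplified sketch of the kind of weighted evidence combination involved: each analyst's opinion, expressed as a mass function over investment grades, is discounted by a judgmental weight before conjunctive combination. Note that this is classical Shafer discounting followed by Dempster's rule, a stand-in rather than the full ER rule, which treats weights and reliabilities more carefully; the analysts, grades and weights below are invented for illustration.

```python
GRADES = frozenset({"buy", "hold", "sell"})

def discount(m, weight):
    """Shafer discounting: scale masses by `weight`, move the rest to ignorance."""
    out = {s: weight * v for s, v in m.items()}
    out[GRADES] = out.get(GRADES, 0.0) + (1.0 - weight)
    return out

def dempster(m1, m2):
    """Dempster's rule of combination with conflict renormalization."""
    out, conflict = {}, 0.0
    for a, w1 in m1.items():
        for b, w2 in m2.items():
            inter = a & b
            if inter:
                out[inter] = out.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2
    return {s: v / (1.0 - conflict) for s, v in out.items()}

# Two analysts with different (assumed) track-record weights:
analyst1 = {frozenset({"buy"}): 0.8, GRADES: 0.2}
analyst2 = {frozenset({"sell"}): 0.5, frozenset({"hold"}): 0.3, GRADES: 0.2}
fused = dempster(discount(analyst1, 0.9), discount(analyst2, 0.4))
best = max(fused, key=fused.get)
print(set(best), fused[best])   # the best-supported grade and its mass
```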

Multiple sensor fusion for detection, classification and tracking of moving objects in driving environments

Chavez Garcia, Ricardo Omar, 25 September 2014
Advanced driver assistance systems (ADAS) help drivers to perform complex driving tasks and to avoid or mitigate dangerous situations. The vehicle senses the external world using sensors and then builds and updates an internal model of the environment configuration. Vehicle perception consists of establishing the spatial and temporal relationships between the vehicle and the static and moving obstacles in the environment, and is composed of two main tasks: simultaneous localization and mapping (SLAM) deals with modelling the static parts, while detection and tracking of moving objects (DATMO) is responsible for modelling the moving parts of the environment. In order to perform good reasoning and control, the system has to model the surrounding environment correctly. The accurate detection and classification of moving objects is a critical aspect of a moving object tracking system; therefore, many sensors are part of a common intelligent vehicle system. Classification of moving objects is needed to determine the possible behaviour of the objects surrounding the vehicle, and is usually performed at the tracking level; knowledge about the class of moving objects at the detection level can help improve their tracking. Most current perception solutions consider classification information only as aggregate information for the final perception output. Management of incomplete information is also an important requirement for perception systems. Incomplete information can originate from sensor-related causes, such as calibration issues and hardware malfunctions, or from scene perturbations, such as occlusions, weather issues and object shifting. It is important to manage these situations by taking them into account in the perception process.
The main contributions of this dissertation focus on the DATMO stage of the perception problem. Precisely, we believe that by including the object's class as a key element of the object's representation and by managing the uncertainty from multiple sensor detections, we can improve the results of the perception task, i.e., produce a more reliable list of moving objects of interest represented by their dynamic state and appearance information. Therefore, we address the problems of sensor data association and of sensor fusion for object detection, classification, and tracking at different levels within the DATMO stage. Although we focus on a set of three main sensors (radar, lidar, and camera), we propose a modifiable architecture that can include other types or numbers of sensors. First, we define a composite object representation that includes class information as part of the object state, from the early stages to the final output of the perception task. Second, we propose, implement, and compare two different perception architectures to solve the DATMO problem, according to the level at which object association, fusion, and classification information is included and performed. Our data fusion approaches are based on the evidential framework, which is used to manage and include the uncertainty from sensor detections and object classifications. Third, we propose an evidential data association approach to establish a relationship between two sources of evidence from object detections, and we observe how class information improves the final result of the DATMO component. Fourth, we integrate the proposed fusion approaches into a real-time vehicle application; this integration was performed in a real vehicle demonstrator from the interactIVe European project. Finally, we analysed and experimentally evaluated the performance of the proposed methods, comparing our evidential fusion approaches against each other and against a state-of-the-art method using real data from different driving scenarios. These comparisons focused on the detection, classification and tracking of different moving objects: pedestrians, bikes, cars and trucks.

A basic probability assignment methodology for unsupervised wireless intrusion detection

Ghafir, Ibrahim, Kyriakopoulos, K.G., Aparicio-Navarro, F.J., Lambotharan, S., Assadhan, B., Binsalleeh, A.H. 24 January 2020 (has links)
The broadcast nature of wireless local area networks has made them prone to several types of wireless injection attacks, such as Man-in-the-Middle (MitM) at the physical layer, deauthentication, and rogue access point attacks. The implementation of novel intrusion detection systems (IDSs) is fundamental to providing stronger protection against these wireless injection attacks. Since most attacks manifest themselves through different metrics, current IDSs should leverage a cross-layer approach to help improve detection accuracy. The data fusion technique based on Dempster-Shafer (D-S) theory has been proven to be an efficient technique for implementing the cross-layer metric approach. However, the dynamic generation of the basic probability assignment (BPA) values used by D-S is still an open research problem. In this paper, we propose a novel unsupervised methodology to dynamically generate the BPA values, based on the Gaussian and exponential probability density functions, the categorical probability mass function, and the local reachability density. D-S is then used to fuse the BPA values to classify whether a Wi-Fi frame is normal (i.e., non-malicious) or malicious. The proposed methodology provides a 100% true positive rate (TPR) and a 4.23% false positive rate (FPR) for the MitM attack, and a 100% TPR and a 2.44% FPR for the deauthentication attack, which confirms the efficiency of the dynamic BPA generation methodology. This work was supported by the Gulf Science, Innovation and Knowledge Economy Programme of the U.K. Government under UK-Gulf Institutional Link Grant IL 279339985, and in part by the Engineering and Physical Sciences Research Council (EPSRC), U.K., under Grant EP/R006385/1.
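
As one concrete reading of the Gaussian part of this methodology (an assumption-laden sketch, not the paper's code): fit a Gaussian to a metric's observed values, then map how typical a new frame's value is into mass on {normal}, {attack}, and the ignorance set. The per-metric BPAs would then be fused with D-S, as in the sketches in the earlier entries; the 0.9 cap and the metric statistics below are invented, and the actual methodology additionally uses exponential PDFs, a categorical PMF, and the local reachability density.

```python
import math

NORMAL, ATTACK = frozenset({"normal"}), frozenset({"attack"})
ANY = NORMAL | ATTACK  # ignorance: could be either class

def gaussian_bpa(value, mean, std):
    """Map a metric value's typicality under a fitted Gaussian to a BPA."""
    z = (value - mean) / std
    typicality = math.exp(-0.5 * z * z)   # 1 at the mean, -> 0 in the tails
    return {NORMAL: 0.9 * typicality,     # cap at 0.9: never fully certain
            ATTACK: 0.9 * (1.0 - typicality),
            ANY:    0.1}

# A frame whose RSSI sits far from the fitted mean draws mass toward ATTACK
# (stats are assumed); per-metric BPAs are then fused with Dempster's rule.
print(gaussian_bpa(value=-37.0, mean=-58.0, std=6.0))
```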

Reconstruction and analysis of 2D trajectories of moving objects from monocular image sequences using hidden Markov models and Dempster-Shafer theory: application to evaluating potentially dangerous situations at level crossings

Salmane, Houssam, 09 July 2013
The main objective of this thesis is to develop a system for monitoring the close environment of a level crossing: a perception system allowing the detection and evaluation of dangerous situations around the crossing. To achieve this goal, the overall problem is broken down into three main stages. In the first stage, we propose a method for automatically optimizing the placement of video sensors so as to cover the level crossing environment optimally; this stage addresses the problem of camera positioning and orientation for optimal viewing of the monitored scenes. The second stage implements a method for tracking objects within the surveillance zone: it consists first of robust algorithms for detecting and separating moving objects around the level crossing, and then of object tracking using a Gaussian-propagation optical flow model and Kalman filtering. On the basis of the previous steps, the last stage presents a new model to evaluate and recognize potentially dangerous situations in the level crossing environment. This danger evaluation method is built using a hidden Markov model and a credibility model. Finally, synthetic and real data are used to test the effectiveness and robustness of the proposed algorithms and of the whole approach, considering various scenarios in several situations. This work was developed within the framework of the PANsafer project (Towards a safer level crossing), supported by the ANR-VTT program (2008) of the French National Agency of Research and labelled by the "i-Trans" and "Véhicule du Futur" pôles de compétitivité. All the work presented in this thesis was conducted jointly within the IRTES-SET laboratory at UTBM and the LEOST laboratory at IFSTTAR, using data from experimental campaigns carried out at a real level crossing site made available by RFF.

Modeling and Diagnosis of Excimer Laser Ablation

Setia, Ronald, 23 November 2005
Recent advances in the miniaturization, functionality, and integration of integrated circuits and packages, such as the system-on-package (SOP) methodology, require increasing use of the microvias that provide vertical signal paths in a high-density multilayer substrate. A scanning projection excimer laser system has been utilized to fabricate the microvias. In this thesis, a novel technique implementing statistical experimental design and neural networks (NNs) is used to characterize and model the excimer laser ablation process for microvia formation. Vias with diameters from 10 to 50 micrometers have been ablated in DuPont Kapton® E polyimide using an Anvik HexScan™ 2150 SXE pulsed excimer laser operating at 308 nm. Accurate NN models, developed from experimental data, are obtained for microvia responses, including ablated thickness, via diameter, wall angle, and resistance. Subsequent to modeling, NNs and genetic algorithms (GAs) are utilized to generate optimal process recipes for the laser tool. Such recipes can be used to produce desired microvia responses, including open vias, specific diameters, steep wall angles, and low resistance. With continuing advancement in the use of excimer laser systems in microsystems packaging has come an increasing need to offset capital equipment investment and lower equipment downtime. In this thesis, an automated in-line failure diagnosis system using NNs and Dempster-Shafer (D-S) theory is implemented; for the sake of comparison, an adaptive neuro-fuzzy approach is applied to achieve the same objective. Both D-S theory and neuro-fuzzy logic are used to develop an automated inference system to specifically identify failures, and successful results in failure detection and diagnosis are obtained from both approaches. The results of this investigation will benefit both engineering and management: engineers will benefit from high yield, reliable production, and low equipment downtime, while business people will benefit from cost savings resulting from more production-worthy (i.e., lower-maintenance) laser ablation equipment.
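
A sketch of the recipe-optimization loop this describes, under loud assumptions: `surrogate` is a made-up analytic stand-in for the trained neural-network process model, and a small genetic algorithm searches a hypothetical (fluence, shots) recipe space for a target via diameter. The parameter ranges, target, and GA settings are illustrative, not values from the thesis.

```python
import random

def surrogate(fluence, shots):
    """Hypothetical stand-in for the NN process model: predicts via diameter (um)."""
    return 12.0 + 0.08 * fluence + 0.35 * shots - 0.0004 * fluence * shots

def fitness(recipe, target_diameter=50.0):
    # Closer to the target diameter = fitter (negated absolute error).
    return -abs(surrogate(*recipe) - target_diameter)

def mutate(recipe):
    # Gaussian perturbation, clamped to the assumed process window.
    f, s = recipe
    return (min(500.0, max(50.0, f + random.gauss(0, 20))),
            min(200.0, max(10.0, s + random.gauss(0, 8))))

random.seed(0)
pop = [(random.uniform(50, 500), random.uniform(10, 200)) for _ in range(40)]
for _ in range(60):                           # generations
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                        # truncation selection
    pop = parents + [mutate(random.choice(parents)) for _ in range(30)]
best = max(pop, key=fitness)
print("recipe (fluence, shots):", best, "-> predicted diameter:", surrogate(*best))
```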
