21

Detecção de fraudes em transações financeiras via Internet em tempo real. / Fraud detection in financial transactions via Internet in real time.

Stephan Kovach 15 June 2011
One of the most important goals of any fraud detection system, regardless of its domain of operation, is to detect the largest number of frauds with the fewest false alarms, also called false positives. The existence of false positives is inherent to any fraud detection system. The first step in achieving this goal is to identify the attributes that can be used to differentiate legitimate from fraudulent activities. The next step is to identify, for each chosen attribute, a method to make this distinction. The proper choice of attributes and corresponding methods largely determines the performance of a fraud detector, both in terms of the ratio between the number of detected frauds and the number of false positives, and in terms of processing time. This choice is more challenging for a real-time fraud detector, that is, one that must detect the fraud before it is carried out. The aim of this work is to propose an architecture for a real-time fraud detection system for Internet banking transactions, based on observations of local and global user behavior. A statistical method based on differential analysis is used to obtain the local evidence of fraud: here, the evidence is based on the difference between the user's current and historical behavior profiles. This local evidence is then strengthened or weakened by the user's global behavior, where the evidence of fraud is based on the number of accesses to different accounts made from the device used by the user, and on a probability value that varies over time. The Dempster-Shafer mathematical theory of evidence is applied to combine these pieces of evidence into a final suspicion score, which is compared with a threshold to trigger an alarm indicating fraud. The main innovation and contribution of this work are the definition and exploration of detection methods based on global attributes that are specific to the domain of financial transactions. Evaluation results using a database of transaction records corresponding to actual usage profiles showed that integrating a detector based on global attributes increased the system's fraud detection capacity by 20%.
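
As a rough sketch of the combination step described in this abstract — assuming a simple two-hypothesis frame {fraud, legitimate} and invented mass values, since the thesis's actual mass assignments are not given here — the fusion of local and global evidence with Dempster's rule might look like this:

```python
# A minimal sketch of Dempster's rule over the frame {fraud, legit},
# combining a local (differential-analysis) and a global (device-based)
# body of evidence into a suspicion score. All mass values are illustrative.

FRAUD, LEGIT, EITHER = "fraud", "legit", "either"  # "either" = the whole frame

def combine(m1, m2):
    """Dempster's rule for two mass functions on {FRAUD, LEGIT, EITHER}."""
    def meet(a, b):
        if a == EITHER: return b
        if b == EITHER: return a
        return a if a == b else None  # None = empty intersection (conflict)
    combined = {FRAUD: 0.0, LEGIT: 0.0, EITHER: 0.0}
    conflict = 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            c = meet(a, b)
            if c is None:
                conflict += wa * wb
            else:
                combined[c] += wa * wb
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

local_evidence  = {FRAUD: 0.6, LEGIT: 0.1, EITHER: 0.3}  # current vs. historical profile
global_evidence = {FRAUD: 0.4, LEGIT: 0.2, EITHER: 0.4}  # device seen on many accounts

m = combine(local_evidence, global_evidence)
score = m[FRAUD]                # mass committed exactly to fraud
THRESHOLD = 0.5                 # illustrative alarm threshold
if score > THRESHOLD:
    print(f"ALARM: suspicion score {score:.2f}")
```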
22

Monitoramento da saúde humana através de sensores: análise de incertezas contextuais através da teoria da evidência de Dempster-Shafer. / Human health monitoring by sensors: analysis of contextual uncertainties through Dempster-Shafer evidence theory.

Kátia Cilene Neles da Silva 26 November 2012
Remote monitoring of human health basically involves the use of sensor network technology to capture data about the patient under observation and about the environment in which the patient is located. This technology supports the remote monitoring of patients with heart disease, respiratory problems, or postoperative complications, as well as people in residential treatment, among others. An important element of remote health monitoring systems is their ability to interact with the environment, allowing them, for example, to act as providers of information and services relevant to the user. This interaction gives such systems the characteristics of a context-aware application, since they react and adapt to changes in their environments, providing intelligent and proactive assistance. Another significant aspect of remote human health monitoring systems concerns the uncertainties associated with the technology used to obtain and process the data, and with the data presented to the expert users (physicians). Uncertainties are inevitable in any ubiquitous, context-aware application and can be generated by incomplete or imperfect data. In human health monitoring, factors such as the mutual influence between physiological, behavioral, and environmental data can also be pointed out as potential sources of uncertain contextual information, in addition to those inherent to ubiquitous, context-aware applications. In this research, each sensor captures one type of data and sends it to a station located in the patient's home. The objective of this work is to present a process for analyzing the contextual uncertainties present in sensor-based human health monitoring. The process, named PRANINC, is based on the Dempster-Shafer theory of evidence and on the certainty factor model: each piece of data captured by the different sensors is considered evidence, and the set of these pieces of evidence is considered in forming hypotheses. Three classes of contextual uncertainty were specified: uncertainties arising from the technology employed to transmit the data captured by the sensors; uncertainties related to the sensors themselves, which are subject to errors and defects; and uncertainties associated with the mutual influence between the observed variables. The method was applied in experiments on files containing physiological data from real patients, to which behavioral and environmental elements were added. As a result, it was possible to confirm that context influences the data delivered by the monitoring system, and that contextual uncertainties can affect the quality of the information provided and should therefore be taken into account by the specialist.
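
Since the abstract names the certainty factor model alongside Dempster-Shafer theory, a minimal sketch of the classic MYCIN-style certainty-factor combination may help; the sensors, hypothesis, and values below are hypothetical, not taken from the thesis:

```python
# A minimal sketch of the certainty-factor (CF) combination rule from
# MYCIN-style models, applied to invented per-sensor certainty values for
# a hypothesis such as "patient condition is abnormal".

def combine_cf(cf1: float, cf2: float) -> float:
    """Combine two certainty factors in [-1, 1]."""
    if cf1 >= 0 and cf2 >= 0:
        return cf1 + cf2 * (1 - cf1)
    if cf1 < 0 and cf2 < 0:
        return cf1 + cf2 * (1 + cf1)
    return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))

# Evidence from three hypothetical sensors: positive values support the
# hypothesis, negative values argue against it.
readings = {"heart_rate": 0.7, "temperature": 0.4, "ambient_noise": -0.2}

cf = 0.0
for sensor, value in readings.items():
    cf = combine_cf(cf, value)
print(f"combined certainty: {cf:.2f}")
```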
23

Multiple sensor fusion for detection, classification and tracking of moving objects in driving environments / Fusion multi-capteur pour la détection, classification et suivi d'objets mobiles en environnement routier

Chavez Garcia, Ricardo Omar 25 September 2014
Advanced driver assistance systems (ADAS) help drivers to perform complex driving tasks and to avoid or mitigate dangerous situations. The vehicle senses the external world using sensors and then builds and updates an internal model of the environment configuration. Vehicle perception consists of establishing the spatial and temporal relationships between the vehicle and the static and moving obstacles in the environment, and is composed of two main tasks: simultaneous localization and mapping (SLAM), which deals with modelling the static parts of the environment, and detection and tracking of moving objects (DATMO), which is responsible for modelling the moving parts. In order to reason and act correctly, the system has to model the surrounding environment correctly. The accurate detection and classification of moving objects is a critical aspect of a moving object tracking system, which is why several sensors are typically part of an intelligent vehicle system. Classification of moving objects is needed to determine the possible behaviour of the objects surrounding the vehicle, and it is usually performed at the tracking level; knowledge of the class of moving objects at the detection level can help improve their tracking. Most current perception solutions consider classification information only as aggregate information for the final perception output. Managing incomplete information is also an important requirement for perception systems. Incomplete information can originate from sensor-related causes, such as calibration issues and hardware malfunctions, or from scene perturbations, such as occlusions, weather conditions, and object shifting; it is important to take these situations into account in the perception process. The main contributions of this dissertation focus on the DATMO stage of the perception problem. Specifically, we believe that by including the object's class as a key element of the object's representation and by managing the uncertainty from multiple sensor detections, we can improve the results of the perception task, i.e., obtain a more reliable list of moving objects of interest represented by their dynamic state and appearance information. We therefore address the problems of sensor data association and of sensor fusion for object detection, classification, and tracking at different levels within the DATMO stage. Although we focus on a set of three main sensors (radar, lidar, and camera), we propose a modifiable architecture that can include other types or numbers of sensors. First, we define a composite object representation that includes class information as part of the object state, from the early stages through to the final output of the perception task. Second, we propose, implement, and compare two different perception architectures for solving the DATMO problem, according to the level at which object association, fusion, and classification information is included and performed. Our data fusion approaches are based on the evidential framework, which is used to manage and include the uncertainty from sensor detections and object classifications. Third, we propose an evidential data association approach to establish a relationship between two sources of evidence from object detections, and we observe how class information improves the final result of the DATMO component. Fourth, we integrate the proposed fusion approaches into a real-time vehicle application; this integration was performed on a real vehicle demonstrator from the interactIVe European project. Finally, we analyse and experimentally evaluate the performance of the proposed methods, comparing our evidential fusion approaches against each other and against a state-of-the-art method using real data from different driving scenarios. These comparisons focus on the detection, classification, and tracking of different moving objects: pedestrians, bikes, cars, and trucks.
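
To make the evidential class fusion concrete, here is a minimal sketch for a single detected object, under the invented assumption that lidar can only separate small from large objects while the camera provides appearance-based class evidence; the frame, masses, and sensor roles are illustrative, not the dissertation's actual models:

```python
# A minimal sketch of evidential class fusion for one detected object.
# Focal elements are sets of classes from {pedestrian, bike, car, truck}.

FRAME = frozenset({"pedestrian", "bike", "car", "truck"})

def dempster(m1, m2):
    """Dempster's rule for mass functions whose focal elements are frozensets."""
    out, conflict = {}, 0.0
    for A, wa in m1.items():
        for B, wb in m2.items():
            C = A & B
            if not C:
                conflict += wa * wb
            else:
                out[C] = out.get(C, 0.0) + wa * wb
    return {C: w / (1.0 - conflict) for C, w in out.items()}

# Hypothetical: lidar only tells "small" from "large" objects; the camera
# sees appearance but keeps some mass on the whole frame (ignorance).
m_lidar  = {frozenset({"pedestrian", "bike"}): 0.7, FRAME: 0.3}
m_camera = {frozenset({"pedestrian"}): 0.5, frozenset({"car"}): 0.2, FRAME: 0.3}

fused = dempster(m_lidar, m_camera)
for focal, mass in sorted(fused.items(), key=lambda kv: -kv[1]):
    print(set(focal), round(mass, 3))
```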
24

Influência da incerteza no processo de decisão: priorização de projetos de melhoria. / Influence of uncertainties in the decision process: prioritization of improvement projects.

Marcos Coitinho 18 December 2006
This work describes an experiment on the decision process for prioritizing improvement projects in a capital goods company. Only two criteria had been applied to the task of prioritizing projects — legal requirements and technical complexity — so it was proposed to evaluate the projects against a broader set of criteria, including brand image, market share, strategic alignment, and time to launch a new product. To handle a larger number of qualitative and quantitative criteria, two multicriteria methods were introduced: the analytic hierarchy process (AHP) and the Dempster-Shafer AHP (DS-AHP). The theoretical foundations of both methods are presented. The first method is used to determine the relative importance of the alternatives through weightings at each level of the hierarchical structure; the quality of the judgments is assessed by a consistency index. The second method also uses the AHP analysis platform, adding a measurement of ignorance in the judgment process by means of subjective probabilities. Applications of the methods in a specific company are discussed. The decision makers raised objections to the AHP process because of the need for numerous re-evaluations of the judgments whenever the consistency index exceeded the recommended values; on the positive side, they highlighted the method's simplicity for application in a business environment. As for the DS-AHP method, the use of the concept of belief in the decision makers' judgments allowed a better approximation to real decision situations: in this case, the chosen alternative could be clearly understood as the most probable — not classified as probably or certainly the best. Compared with AHP, DS-AHP proved a more direct route to the results, mainly because it requires fewer comparisons, and it also helps the decision maker identify and correct possible sources of ignorance that may affect the quality of the decision.
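
As an illustration of the AHP step the study relies on — with an invented 4×4 judgment matrix, since the company's actual judgments are not reported here — the priority vector and consistency ratio can be sketched as follows:

```python
# A minimal sketch of an AHP priority computation with a consistency check,
# using the common geometric-mean approximation of the principal eigenvector.
import numpy as np

# Pairwise comparison of 4 criteria (Saaty's 1-9 scale); A[i][j] says how
# much more important criterion i is than criterion j. Values are invented.
A = np.array([
    [1,   3,   5,   1],
    [1/3, 1,   3,   1/3],
    [1/5, 1/3, 1,   1/5],
    [1,   3,   5,   1],
], dtype=float)

n = A.shape[0]
weights = np.prod(A, axis=1) ** (1 / n)
weights /= weights.sum()                    # priority vector

lam_max = ((A @ weights) / weights).mean()  # estimate of the principal eigenvalue
CI = (lam_max - n) / (n - 1)                # consistency index
RI = 0.90                                   # Saaty's random index for n = 4
CR = CI / RI                                # consistency ratio
print("weights:", weights.round(3), "CR:", round(CR, 3))
# Judgments are typically re-elicited when CR exceeds ~0.10.
```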
25

Shadow/Vegetation and building detection from single optical remote sensing image / Détection de l'ombre, de la végétation et des bâtiments sur des images optiques en haute résolution

Ngo, Tran Thanh 22 September 2015
This PhD thesis is devoted to the detection of shadows, vegetation, and buildings from single very high resolution optical remote sensing images. The first part introduces a new method for simultaneously detecting shadows and vegetation: several shadow and vegetation indices are investigated and merged using the Dempster-Shafer theory of evidence so as to obtain a segmentation map with three classes, “shadow”, “vegetation”, and “other”. Because the fusion operates at the pixel level, its performance is sensitive to noise, so a Markov random field (MRF) is integrated to model spatial information within the image and regularize the segmentation. In the second part, a novel region-growing segmentation technique is proposed. The image is first over-segmented into small homogeneous regions, which replace the rigid structure of the pixel grid. An iterative region classification-merging step is then applied to these regions: at each iteration, regions are classified using an MRF-based segmentation and then, according to the position of the shadows, regions of the same class are merged to produce shapes close to rectangles. The final buildings are estimated from the final classification using the recursive minimum bounding rectangle method. Both algorithms have been validated on a variety of image datasets, demonstrating their efficiency.
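
A minimal sketch of the pixel-level fusion described above, assuming invented index-to-mass mappings (the thesis's actual mappings are not given here) and a decision by maximum plausibility:

```python
# One pixel's shadow/vegetation indices turned into mass functions over
# {shadow, vegetation, other} and fused; all values are illustrative.

S, V, O = "shadow", "vegetation", "other"
FRAME = frozenset({S, V, O})

def dempster(m1, m2):
    out, conflict = {}, 0.0
    for A, wa in m1.items():
        for B, wb in m2.items():
            C = A & B
            if C:
                out[C] = out.get(C, 0.0) + wa * wb
            else:
                conflict += wa * wb
    return {C: w / (1.0 - conflict) for C, w in out.items()}

def plausibility(m, cls):
    return sum(w for A, w in m.items() if cls in A)

# A high NDVI-like index supports vegetation; a darkness index supports
# "shadow or vegetation" (both are dark in the visible bands).
m_veg    = {frozenset({V}): 0.6, FRAME: 0.4}
m_shadow = {frozenset({S, V}): 0.5, frozenset({O}): 0.2, FRAME: 0.3}

m = dempster(m_veg, m_shadow)
label = max((S, V, O), key=lambda c: plausibility(m, c))
print("pixel label:", label)  # decision by maximum plausibility
```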
26

Classification multi-échelle d'images à très haute résolution spatiale basée sur une nouvelle approche texturale / Multiscale classification of very high spatial resolution images based on a new textural approach

Delahaye, Alexandre January 2016
Classifying remote sensing images is an increasingly difficult task due to the availability of very high spatial resolution (VHSR) data. The amount of detail in such images is a major obstacle to the use of spectral classification methods as well as most textural classification algorithms, including statistical methods. Structural methods, however, offer an interesting alternative: these object-oriented approaches focus on analyzing the structure of an image in order to interpret its meaning. In the first part of this thesis, we propose a new algorithm of this kind, KPC (KeyPoint-based Classification). KPC is based on keypoint detection and analysis and offers an efficient answer to the problem of classifying VHSR images; tests conducted on artificial and real remote sensing images demonstrate its discriminating power, in particular its ability to distinguish visually similar textures. Furthermore, many studies have shown that evidential fusion, based on Dempster-Shafer theory, is well suited to remote sensing images because of its ability to handle concepts such as ambiguity and uncertainty. Few studies, however, have focused on applying this theory to complex textural data such as the output of structural classifications. The second part of this thesis addresses this gap by fusing multiscale KPC classifications with the help of Dempster-Shafer theory. Tests show that this multiscale approach improves the final classification when the original image is of low quality. The study also points out a substantial potential for improvement from estimating the reliability of the intermediate classifications, and suggests ways to obtain these estimates.
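
One standard way to bring the reliability of intermediate classifications into a Dempster-Shafer fusion is Shafer's discounting operation; the sketch below uses invented classes, masses, and reliability estimates purely for illustration:

```python
# Reliability discounting: scale each classification's masses by its
# estimated reliability and move the remainder to total ignorance, so an
# unreliable scale weakens rather than conflicts in the later fusion.

def discount(m, alpha, frame):
    """Shafer's discounting: multiply masses by alpha, send 1-alpha to the frame."""
    out = {A: alpha * w for A, w in m.items()}
    out[frame] = out.get(frame, 0.0) + (1.0 - alpha)
    return out

FRAME = frozenset({"water", "forest", "urban"})

# A coarse-scale classification judged 90% reliable...
m_coarse = discount({frozenset({"forest"}): 0.8, FRAME: 0.2}, 0.9, FRAME)
# ...and a fine-scale classification on a noisy image, judged 50% reliable.
m_fine   = discount({frozenset({"urban"}): 0.7, FRAME: 0.3}, 0.5, FRAME)

print(m_coarse)
print(m_fine)
# The discounted masses would then be combined with Dempster's rule.
```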
27

Model Based Learning and Reasoning from Partially Observed Data

Hewawasam, Kottigoda. K. Rohitha G. 09 June 2008
Management of data imprecision has become increasingly important, especially as advances in technology enable applications to collect and store huge amounts of data from multiple sources. Data collected in such applications involve large numbers of variables and various types of imperfections. Using these data in knowledge discovery applications requires: 1) computationally efficient algorithms that work fast with limited resources; 2) an effective methodology for modeling data imperfections; and 3) procedures for enabling knowledge discovery and for quantifying and propagating partial or incomplete knowledge throughout the decision-making process. Bayesian networks (BNs) provide a convenient framework for modeling such applications probabilistically, enabling a compact representation of the joint probability distribution over large numbers of variables; BNs also form the foundation for a number of computationally efficient inference algorithms. The underlying probabilistic approach, however, is not sufficiently capable of handling the wider range of data imperfections that may appear in many new applications (e.g., medical data). Dempster-Shafer (DS) theory, on the other hand, provides a strong framework for modeling a broader range of data imperfections, but it must overcome the challenge of a potentially enormous computational burden. In this dissertation, we introduce the joint Dirichlet BoE, a particular mass assignment in the DS-theoretic framework that reduces the computational complexity while still allowing one to model many common types of data imperfections. We first use this Dirichlet BoE model to enhance the performance of the EM algorithm used to learn BN parameters from data with missing values. To build a framework for reasoning with the Dirichlet BoE, the DS-theoretic notions of conditionals, independence, and conditional independence are revisited. These notions are then used to develop the DS-BN, a BN-like graphical model in the DS-theoretic framework that enables a compact representation of the joint Dirichlet BoE, and we show how the DS-BN can be used in different types of reasoning tasks. A local message-passing scheme is developed for efficient propagation of evidence in the DS-BN. We also extend the use of the joint Dirichlet BoE to Markov models and hidden Markov models to address the uncertainty arising from inadequate training data. Finally, we present the results of various experiments carried out on synthetically generated data sets as well as data sets from medical applications.
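
For readers unfamiliar with the body-of-evidence (BoE) vocabulary used above, here is a minimal sketch of belief and plausibility computed from a toy BoE; the frame and mass values are invented and unrelated to the joint Dirichlet BoE construction itself:

```python
# Belief and plausibility from a body of evidence: a set of focal elements
# (frozensets) with masses summing to 1.

def belief(boe, A):
    """Bel(A) = sum of masses of focal elements contained in A."""
    return sum(w for B, w in boe.items() if B <= A)

def plausibility(boe, A):
    """Pl(A) = sum of masses of focal elements intersecting A."""
    return sum(w for B, w in boe.items() if B & A)

# A BoE over a three-hypothesis frame, with set-valued focal elements that
# a single probability distribution could not express.
boe = {
    frozenset({"flu"}): 0.4,
    frozenset({"flu", "cold"}): 0.3,             # evidence that cannot decide
    frozenset({"flu", "cold", "allergy"}): 0.3,  # total ignorance
}

A = frozenset({"flu"})
print(belief(boe, A), plausibility(boe, A))  # 0.4 and 1.0: the [Bel, Pl] interval
```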
28

DS-ARM: An Association Rule Based Predictor that Can Learn from Imperfect Data

Sooriyaarachchi Wickramaratna, Kasun Jayamal 13 January 2010
Over the past decades, many industries have invested heavily in computerizing their work environments to simplify and expedite access to information and its processing. Real-world data, however, typically contain various imperfections, uncertainties, and ambiguities that have complicated attempts at automated knowledge discovery, and it soon became obvious that adequate methods to deal with these problems were critically needed. Since simple methods, such as "interpolating" or just ignoring data imperfections, were often found to lead to inferences of dubious practical value, the search for appropriate modifications of knowledge-induction techniques began, and rather non-standard approaches sometimes turned out to be necessary. For instance, the probabilistic approaches of earlier works are not sufficiently capable of handling the wider range of data imperfections that appear in many new applications (e.g., medical data). Dempster-Shafer theory provides a much stronger framework, which is why it has been chosen as the fundamental paradigm exploited in this dissertation. The task of association rule mining is to detect frequently co-occurring groups of items in transactional databases. The majority of papers in this field concentrate on how to expedite the search; less attention has been devoted to how to employ the identified frequent itemsets for prediction purposes, and methods to tailor association-mining techniques so that they can handle data imperfections are virtually nonexistent. This dissertation proposes a technique referred to by the acronym DS-ARM (Dempster-Shafer based Association Rule Mining), in which the DS-theoretic framework is used to enhance a more traditional association-mining mechanism. Of particular interest here is a method that employs knowledge of the partial contents of a "shopping cart" to predict what else the customer is likely to add to it; this formalized problem has many applications in the analysis of medical databases. A recently proposed data structure, the itemset tree (IT-tree), is used to extract association rules in a computationally efficient manner, thus addressing the scalability problem that has disqualified more traditional techniques from real-world applications. The proposed algorithm is based on the Dempster-Shafer theory of evidence combination. Extensive experiments explore the algorithm's behavior: some use synthetically generated data, others data from a machine-learning repository, and yet others a movie ratings dataset or an HIV/AIDS patient dataset.
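
To illustrate the cart-completion problem itself (though not DS-ARM's evidential machinery or its IT-tree, which this sketch does not reproduce), a bare-bones confidence-based predictor over a toy transaction database might look like this:

```python
# Rank candidate items by the confidence of the rule cart -> {item} over a
# small, invented transaction database.

transactions = [
    {"bread", "milk", "butter"},
    {"bread", "milk"},
    {"milk", "butter"},
    {"bread", "butter"},
    {"bread", "milk", "butter"},
]

def support(itemset):
    """Fraction of transactions containing every item in the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def predict(cart, candidates, min_conf=0.5):
    """Score each candidate item by confidence(cart -> item)."""
    scored = []
    for item in candidates - cart:
        conf = support(cart | {item}) / support(cart)
        if conf >= min_conf:
            scored.append((item, conf))
    return sorted(scored, key=lambda x: -x[1])

print(predict({"bread", "milk"}, {"bread", "milk", "butter"}))
```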
29

Evidence Based Uncertainty Models and Particle Swarm Optimization for Multiobjective Optimization of Engineering Systems

Annamdas, Kiran Kumar Kishore 28 July 2009
This work develops several methodologies for solving engineering analysis and design problems involving uncertainties and evidence from multiple sources. The influence of uncertainties on the safety or failure of the system and on the warranty costs (to the manufacturer) is also investigated, and both single- and multiple-objective optimization problems are considered. A methodology is developed to combine the evidence available from single or multiple sources, in the presence (or absence) of credibility information about the sources, using a modified Dempster-Shafer theory (DST) and fuzzy theory in the design of uncertain engineering systems. To optimally design a system, multiple objectives are considered — such as maximizing the belief in the overall safety of the system, minimizing the deflection, maximizing the natural frequency, and minimizing the weight of an engineering structure — under both deterministic and uncertain parameters and subject to multiple constraints. We also study various combination rules — Dempster's rule, Yager's rule, Inagaki's extreme rule, Zhang's center combination rule, and Murphy's average combination rule — for combining evidence from multiple sources. These rules are compared, and a selection procedure is developed to assist the analyst in choosing the most suitable rule based on the nature of the evidence sets. A weighted Dempster-Shafer theory for interval-valued data (WDSTI) and a weighted fuzzy theory for intervals (WFTI) are proposed for combining evidence when different credibilities are associated with the various sources. For optimization problems that cannot be solved using traditional gradient-based methods (such as those involving nonconvex functions and discontinuities), a modified particle swarm optimization (PSO) algorithm is developed, including a dynamic maximum velocity function and a bounce method, to solve both deterministic and uncertain multi-objective problems (for uncertain parameters, the vertex method is used in addition to the modified PSO algorithm). A modified game theory (MGT) approach is coupled with the modified PSO algorithm to solve multi-objective optimization problems. For problems with multiple pieces of evidence, the belief in a safe design (one satisfying all constraints) is calculated using the vertex method, and the modified PSO algorithm is used to solve the multi-objective optimization problem. A multiobjective problem concerning the design of a simply supported composite laminate beam with a center load is also considered, minimizing the weight and maximizing the buckling load using modified game theory. Different warranty policies for both repairable and non-repairable products are compared, and an automobile warranty optimization problem is considered that minimizes the total warranty cost of the automobile subject to a constraint on the total failure probability of the system. Several numerical design examples are solved to illustrate the methodologies, and conclusions are presented along with a brief discussion of the future scope of the research.
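
Because the choice among combination rules matters most under high conflict, here is a minimal sketch contrasting three of the rules named above on Zadeh's classic conflicting-evidence example; the masses are the standard textbook values, not data from this work, and the remaining rules are omitted:

```python
# Dempster's rule, Yager's rule, and Murphy's average rule on two highly
# conflicting mass functions over the frame {a, b, c}.

def pairwise(m1, m2):
    """Raw conjunctive combination: unnormalized masses plus total conflict."""
    out, conflict = {}, 0.0
    for A, wa in m1.items():
        for B, wb in m2.items():
            C = A & B
            if C:
                out[C] = out.get(C, 0.0) + wa * wb
            else:
                conflict += wa * wb
    return out, conflict

FRAME = frozenset({"a", "b", "c"})
m1 = {frozenset({"a"}): 0.9, frozenset({"c"}): 0.1}
m2 = {frozenset({"b"}): 0.9, frozenset({"c"}): 0.1}

raw, k = pairwise(m1, m2)

dempster = {A: w / (1 - k) for A, w in raw.items()}  # renormalize away conflict
yager = dict(raw)                                    # give conflict to ignorance
yager[FRAME] = yager.get(FRAME, 0.0) + k
avg = {A: (m1.get(A, 0) + m2.get(A, 0)) / 2 for A in set(m1) | set(m2)}
murphy, k2 = pairwise(avg, avg)                      # Murphy: average, then combine
murphy = {A: w / (1 - k2) for A, w in murphy.items()}

print("Dempster:", dempster)  # {c}: 1.0 despite weak support for c
print("Yager:   ", yager)     # most mass moves to total ignorance
print("Murphy:  ", murphy)    # a and b keep substantial mass
```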
30

An ontology-driven evidence theory method for activity recognition / Uma abordagem baseada em ontologias e teoria da evidência para o reconhecimento de atividades

Rey, Vítor Fortes January 2016
Activity recognition is a vital need in the field of ambient intelligence and is essential for many Internet of Things applications, including energy management, healthcare systems, and home automation. Yet even with the many cheap mobile sensors envisioned by the Internet of Things, activity recognition remains a hard problem, due to uncertainty in sensor readings and to the complexity of the activities themselves. Evidence theory models provide activity recognition even in the presence of uncertain sensor readings, but they cannot yet model complex activities or dynamic changes in sensor and environment configurations. This work proposes combining knowledge-based approaches with evidence theory, improving the construction of evidence theory models for activity recognition by bringing in the reusability, flexibility, and rich semantics of the former.
