31

Mathematical Models for Predicting and Mitigating the Spread of Chlamydia Sexually Transmitted Infection

January 2018
Chlamydia trachomatis (Ct) is the most common bacterial sexually transmitted infection (STI) in the United States and a major cause of infertility, pelvic inflammatory disease, and ectopic pregnancy among women. Despite decades of screening women for Ct, rates continue to increase in high-prevalence areas such as New Orleans, where a pilot study found that approximately 11% of 14-24-year-old African Americans (AAs) were infected with Ct. Our goal is to mathematically model the impact of different interventions for AA men residing in New Orleans on the overall rate of Ct among women residing in the same region. We create and analyze mathematical models, including multi-risk and continuous-risk compartmental models and an agent-based network model, first to help understand the spread of Ct and second to evaluate and estimate behavioral and biomedical interventions including condom use, screening, partner notification, social-friend notification, and rescreening. Our compartmental models predict that Ct prevalence is a function of a person's number of partners, and quantify how this distribution changes as a function of condom use. We also observe that although increased Ct screening and rescreening, and treating the partners of infected people, will reduce prevalence, these mitigations alone are not sufficient to control the epidemic; a combination of both sexual-partner and social-friend notification is needed to mitigate Ct. / Asma Aziz Boroojeni
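The multi-risk compartmental idea described in the abstract can be sketched as a minimal two-group SIS model in which condom use scales the transmission rate. All parameter values and group definitions below are illustrative assumptions, not the thesis's fitted values; the actual models are far richer (continuous risk, agent-based networks).

```python
# Minimal two-risk-group SIS sketch of Ct transmission under condom use.
# Parameters are illustrative assumptions, not values from the thesis.

def simulate(beta, contacts, condom_use, condom_eff=0.9,
             recovery=1 / 52.0, frac=(0.8, 0.2), steps=5000, dt=0.1):
    """Return near-equilibrium infected fractions for (low, high) risk groups."""
    eff_beta = beta * (1 - condom_eff * condom_use)  # condoms cut per-act risk
    infected = [0.01, 0.05]                          # initial infected fractions
    for _ in range(steps):
        # contact-weighted mean prevalence seen by a random partnership
        pool = sum(f * c * i for f, c, i in zip(frac, contacts, infected))
        norm = sum(f * c for f, c in zip(frac, contacts))
        prev = pool / norm
        for g in range(2):
            force = eff_beta * contacts[g] * prev    # force of infection
            infected[g] += dt * (force * (1 - infected[g])
                                 - recovery * infected[g])
    return infected

low, high = simulate(beta=0.02, contacts=(1.0, 6.0), condom_use=0.3)
```

Even this toy version reproduces the abstract's qualitative claims: prevalence rises with partner count, and raising condom use lowers the endemic level without necessarily eliminating infection.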
32

一個以代理人為基礎具有分散式認證授權服務的安全性電子交易環境 / An Agent-Based Secure E-Commerce Environment with Distributed Authentication and Authorization Services

李英宗, Lee, Ing-Chung. Unknown date
The focus of this research project is the trust management of agents; the primary goal is to build an agent-based secure e-commerce environment. In the current state of the art, only agent concepts and techniques can fill the role of the e-commerce intermediary: software agents' autonomy and timely responsiveness provide efficiency and flexibility in service delivery, and only with appropriate security management and a deeper treatment of trust can e-commerce be concretely applied to everyday life. In our approach, we adopt the FIPA specifications as the implementation standard for the agent platform, extend XML/RDF to facilitate agent construction and communication, combine the advantages of X.509 and SPKI/SDSI certificates, introduce distributed authentication and authorization, and apply RBAC-based access control to form a security architecture for multi-agent systems. Together with the associated trust policies and business models, this achieves the goal of an agent-based, trustworthy, secure e-commerce environment. / This thesis describes an agent-based secure e-commerce environment with distributed authentication and authorization services. Previous research on security issues in agent-mediated e-commerce does not solve the problem of dealing with strangers. We incorporate the role-based access control (RBAC) concept to adapt certificates to different business models and new content-based networks. Several types of agent delegation mechanism based on our role certificates are presented, along with considerations on achieving agent trust management with policies both in logic and in practice. Finally, we demonstrate a scenario on the FIPA OS system using an agent communication language (ACL) and content language (CL) encoded in XML and XML/RDF.
33

An Historical Based Adaptation Mechanism For BDI Agents

Phung, Toan. January 2008
One limitation of the BDI (Belief-Desire-Intention) model is the lack of any explicit mechanism within the architecture for learning. In particular, BDI agents cannot adapt based on past experience. This matters in dynamic environments, which can change and cause previously successful methods for achieving goals to become inefficient or ineffective. We present a model in which learning, analogical reasoning, data pruning and learner-accuracy evaluation can be utilised by a BDI agent, and verify this model experimentally using inductive and statistical learning. Intelligent agents are a new way of developing software applications. They are an amalgam of artificial intelligence (AI) and software-engineering concepts, highly suited to domains that are inherently complex and dynamic. Agents are software entities that are autonomous, reactive, proactive, situated and social. They are autonomous in that they make decisions of their own volition. They are situated in some environment and reactive to it, yet are also capable of proactive behaviour in which they actively pursue goals. They are capable of social behaviour, communicating with other agents. BDI agents are one popular type of agent that supports complex behaviour in dynamic environments. Agent adaptation can be viewed as the process of changing the way in which an agent achieves its goals. We distinguish between 'reactive' or short-term adaptation, 'long-term' or historical adaptation, and 'very long term' or evolutionary adaptation. Short-term adaptation, an ability current BDI agents already possess, involves reacting to changes in the environment and choosing alternative plans of action, which may mean choosing new plans if the current plan fails.
'Long-term' or historical adaptation entails the use of past cases during reasoning, enabling agents to avoid repeating past mistakes. 'Evolutionary adaptation' could involve genetic programming or similar techniques to mutate plans and so alter behaviour. Our work aims to improve BDI agents by introducing a framework that allows them to alter their behaviour based on past experience, i.e. to learn.
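The 'long-term' adaptation idea can be sketched as plan selection biased by success rates recorded from past attempts. The class and method names below are invented for illustration and are not the thesis's actual framework:

```python
# Hypothetical sketch of historical adaptation for a BDI agent:
# plan selection weighted by recorded success on the same goal.
from collections import defaultdict

class HistoricalPlanSelector:
    def __init__(self):
        # (goal, plan) -> [successes, attempts]
        self.history = defaultdict(lambda: [0, 0])

    def record(self, goal, plan, succeeded):
        stats = self.history[(goal, plan)]
        stats[0] += int(succeeded)
        stats[1] += 1

    def select(self, goal, applicable_plans):
        # Laplace-smoothed success rate avoids starving untried plans.
        def score(plan):
            s, n = self.history[(goal, plan)]
            return (s + 1) / (n + 2)
        return max(applicable_plans, key=score)

sel = HistoricalPlanSelector()
sel.record("reach_target", "plan_a", succeeded=False)
sel.record("reach_target", "plan_b", succeeded=True)
chosen = sel.select("reach_target", ["plan_a", "plan_b"])
```

The smoothing term is the design choice that keeps this compatible with BDI reactivity: a plan that failed once is de-prioritised, not permanently excluded.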
34

SodaBot: A Software Agent Environment and Construction System

Coen, Michael H. 02 November 1994
This thesis presents SodaBot, a general-purpose software agent user-environment and construction system. Its primary component is the basic software agent: a computational framework for building agents that is essentially an agent operating system. We also present a new language for programming the basic software agent, whose primitives are designed around human-level descriptions of agent activity. Via this programming language, users can easily implement a wide range of typical software agent applications, e.g. personal on-line assistants and meeting-scheduling agents. The SodaBot system has been implemented and tested, and its description comprises the bulk of this thesis.
35

Hybrid Layered Intrusion Detection System

Sainani, Varsha 01 January 2009 (has links)
The increasing number of network-security incidents has made it necessary for organizations to actively protect their sensitive data with network intrusion detection systems (IDSs). Detecting intrusions in a distributed network, from outside the network segment as well as from inside, is a difficult problem. IDSs are expected to analyze a large volume of data without placing a significant added load on the monitored systems and networks. This requires good data-mining strategies that take less time and give accurate results. In this study, a novel hybrid layered multiagent-based intrusion detection system is created, built around a multi-class supervised classification technique. In an agent-based IDS there is no central control and therefore no central point of failure. Agents can detect malicious activities with the help of data-mining techniques and take predefined actions against them. The proposed IDS shows superior performance compared to centralized sniffing IDS techniques, and saves network resources compared to other distributed IDSs with mobile agents that activate too many sniffers, causing bottlenecks in the network. This is a major motivation for using a distributed model based on a multiagent platform along with a supervised classification technique. Applying multiagent technology to the management of network security is challenging, since management is required at many points in time and involves many interactions. To facilitate information exchange between agents in the proposed hybrid layered multiagent architecture, a low-cost, low-response-time agent communication protocol is developed to tackle the issues typically associated with distributed multiagent systems, such as poor system performance, excessive processing-power requirements, and long delays.
The bandwidth and response-time performance of the proposed end-to-end system is investigated by simulating the proposed agent communication protocol on our private LAN testbed, the Hierarchical Agent Network for Intrusion Detection Systems (HAN-IDS). The simulation results show that the system is efficient and extensible, consuming negligible bandwidth with low cost and low response time on the network.
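One way such a protocol keeps bandwidth negligible is a compact fixed-width binary message. The field layout below is purely an assumption for illustration, not the protocol developed in the thesis:

```python
# Toy fixed-width alert message for inter-agent IDS communication.
# The field layout (severity, timestamp, port, IPv4 source) is invented.
import struct

ALERT_FMT = "!BIH4s"  # network byte order: 1 + 4 + 2 + 4 = 11 bytes

def encode_alert(severity, timestamp, port, src_ip):
    return struct.pack(ALERT_FMT, severity, timestamp, port,
                       bytes(int(octet) for octet in src_ip.split(".")))

def decode_alert(payload):
    severity, timestamp, port, ip = struct.unpack(ALERT_FMT, payload)
    return severity, timestamp, port, ".".join(str(b) for b in ip)

msg = encode_alert(3, 1700000000, 22, "10.0.0.5")
```

An 11-byte alert is small enough that even frequent reporting from many sniffing agents adds little load, which is the property the simulated protocol is measured on.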
36

Semantic Web Based Multi-agent Framework for Real-time Freeway Traffic Incident Management System

Abou-Beih, Mahmoud Osman. 20 August 2012
Recurring traffic congestion is attributable to steadily increasing travel demand coupled with constrained space and financial resources for infrastructure expansion. Another major source of congestion is non-recurrent incidents that disrupt the normal operation of the infrastructure. To optimize the utilization of the transportation infrastructure, innovative management techniques that incorporate cutting-edge technological equipment and information systems need to be adopted to manage recurrent and non-recurrent congestion and reduce their adverse externalities. The framework presented in this thesis lays the foundation for multi-disciplinary, semantic-web-based incident management. During incident response, the stakeholders involved share their knowledge and resources, forming an ad-hoc framework within which each party focuses on its core competencies and cooperates to achieve a coherent incident-management process. Negotiation between the operators of the various response agencies is performed by intelligent software agents, alleviating the coordination and synchronization burden of the massive information flow during the response. The software agents provide decision support to human operators based on reasoning over the underlying system knowledge models. Ontological engineering is used to lay the foundation of the knowledge models, which are coded in a web-based ontology language, allowing decentralized access to the various elements of the system. The system's communication infrastructure is based on Semantic Web technologies, which enhance the use of existing web technologies as the communication infrastructure of the proposed system; their semantic capabilities help resolve information and data interoperability issues among the parties.
Web-services concepts combined with the Semantic Web allow direct exploration and access of knowledge models, resources, and data repositories held by the various parties. The developed ontology and software system were tested and evaluated by domain experts and targeted system users. Based on this evaluation, both were found to be promising tools for developing pervasive, collaborative, multi-disciplinary traffic incident management systems.
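The ontology-backed reasoning described above can be illustrated with a toy triple store: an incident's type is looked up, then class-level edges determine which agencies must respond. The vocabulary terms and incident data are invented, not taken from the thesis's ontology:

```python
# Toy triple store: incident facts plus class-level response knowledge.
# All terms ("requiresResponse", "Incident42", ...) are invented.
triples = {
    ("Incident42", "type", "VehicleCollision"),
    ("VehicleCollision", "requiresResponse", "TowService"),
    ("VehicleCollision", "requiresResponse", "Police"),
    ("Incident42", "locatedOn", "Freeway401"),
}

def objects(subject, predicate):
    return {o for s, p, o in triples if s == subject and p == predicate}

def required_responders(incident):
    # Follow the type edge, then the class-level requiresResponse edges.
    needed = set()
    for cls in objects(incident, "type"):
        needed |= objects(cls, "requiresResponse")
    return needed

responders = required_responders("Incident42")
```

In a real system the same query would run over an OWL/RDF model, so every agency's agent draws its conclusions from the shared, decentralized knowledge base rather than hard-coded rules.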
37

Trust and reputation management in decentralized systems

Wang, Yao 17 September 2010
In large, open, distributed systems, agents are often used to represent users and act on their behalf. Agents can provide good or bad services and act honestly or dishonestly. Trust and reputation mechanisms are used to distinguish good services from bad ones, and honest agents from dishonest ones. My research focuses on trust and reputation management in decentralized systems. Compared with centralized systems, decentralized systems make it more difficult and less efficient for agents to find and collect the information needed to build trust and reputation. In this thesis, I propose a Bayesian-network-based trust model. It provides a flexible way to represent differentiated trust and to combine different aspects of trust to meet agents' different needs. As a complementary element, I propose a super-agent-based approach that facilitates reputation management in decentralized networks. Allowing super-agents to form interest-based communities further enables flexible reputation management among groups of agents, and a reward mechanism creates incentives for super-agents to contribute their resources and to be honest. Together, this work promotes effective, efficient and flexible trust and reputation management in decentralized systems.
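A common building block in Bayesian trust models of this kind is the beta-distribution update over interaction outcomes. The sketch below illustrates that standard idea only; it is not the thesis's exact model, which combines multiple aspects of trust in a Bayesian network:

```python
# Beta-distribution reputation sketch: each satisfactory interaction bumps
# alpha, each unsatisfactory one bumps beta. Illustrative only.
def update(alpha, beta, outcome):
    """Return updated (alpha, beta) after one interaction."""
    return (alpha + 1, beta) if outcome else (alpha, beta + 1)

def expected_trust(alpha, beta):
    """Mean of Beta(alpha, beta): probability the next interaction is good."""
    return alpha / (alpha + beta)

a, b = 1, 1  # uniform prior: no history yet, trust = 0.5
for ok in (True, True, False, True):
    a, b = update(a, b, ok)
trust = expected_trust(a, b)
```

In a decentralized setting, the appeal of this representation is that the pair (alpha, beta) is all a super-agent needs to store and forward to summarize an arbitrarily long interaction history.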
39

Réalisation d'un agent tuteur intelligent conscient / Building a Conscious Intelligent Tutoring Agent

Gaha, Mohamed. January 2008
To improve the performance of intelligent tutoring systems (ITSs), many material and immaterial investments have been made (Starkman, 2007). Yet ITSs remain complex and costly to implement (Aleven, 2006): the more personalized the instruction an ITS delivers, the more complex the computational processing. Building an ITS able to evolve autonomously in a complex, information-rich environment would therefore be of great interest, and that is the question this thesis tackles. Throughout the thesis I present a cognitive tutoring agent named CTS. It rests on an architecture resembling a psychological model of human consciousness: CTS seeks to simulate the workings of consciousness and thereby exploit the phenomena associated with it. The basic hypothesis is that the mechanisms of consciousness can give the ITS behaviour that better manages environmental complexity, allowing it to make good tutorial decisions during a robotic-arm training session. AUTHOR'S KEYWORDS: Cognitive agent, Consciousness, Behaviour network, Micro-processes.
40

Développement d'une architecture d'agent conscient pour un système tutoriel intelligent / Developing a Conscious-Agent Architecture for an Intelligent Tutoring System

Hohmeyer, Patrick. January 2006
Computers have been used in education for at least thirty years. Early systems were refined by integrating artificial-intelligence techniques, giving rise to intelligent tutoring systems (ITSs). ITSs are autonomous, intelligent agents that must weigh a large amount of information in order to follow a learner's reasoning and support the learning process. In humans, consciousness plays a central role in information processing; among other things, it filters access to the information supplied by the environment. Recently, researchers in psychology and computer science have founded a new line of research on artificial consciousness, which attempts to reproduce the mechanisms of consciousness in software agents in order to increase their reasoning capacity. This thesis deals with the architecture of a 'conscious' intelligent tutoring agent. The architecture extends the IDA system developed by Prof. Stan Franklin's team at the University of Memphis. IDA offers a set of tools and models for integrating consciousness into a software agent: it gives an agent the ability to filter environmental events so as to focus reasoning on the most important information, a capacity realized through Baars' theory of human consciousness. The architecture resulting from this adaptation of IDA is based on the interaction of simpler agents (called micro-processes) that collaborate under the direction of a behaviour network (inspired by the work of Maes). It has been successfully integrated into an intelligent tutoring system for astronaut training (CanadarmTutor).
Besides offering several advantages over existing architectures, the proposed architecture is generic and can be reused in other projects.
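The filtering mechanism described in this entry and the previous one follows Baars' global workspace theory: competing coalitions of micro-processes bid for attention, and only the most salient content is broadcast to everyone. This minimal sketch conveys the control flow only; names and structure are illustrative, not IDA's actual implementation:

```python
# Minimal global-workspace sketch in the spirit of Baars' theory:
# coalitions compete on salience, the winner is broadcast to all
# registered micro-processes. Structure is illustrative only.
class GlobalWorkspace:
    def __init__(self):
        self.listeners = []

    def register(self, listener):
        """listener: callable invoked with each broadcast content."""
        self.listeners.append(listener)

    def compete_and_broadcast(self, coalitions):
        """coalitions: list of (salience, content); broadcast the winner."""
        salience, content = max(coalitions, key=lambda c: c[0])
        for listener in self.listeners:
            listener(content)
        return content

seen = []
gw = GlobalWorkspace()
gw.register(seen.append)
winner = gw.compete_and_broadcast([(0.2, "routine_motion"),
                                   (0.9, "learner_error")])
```

The point of the broadcast step is exactly the filtering both abstracts describe: low-salience environmental events never reach the reasoning processes, so the tutor's attention stays on what matters for the learner.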
