281 |
Tisser le Web Social des Objets : Permettre une Interaction Autonome et Flexible dans l’Internet des Objets / Weaving a Social Web of Things: Enabling Autonomous and Flexible Interaction in the Internet of Things
Ciortea, Andrei-Nicolae, 14 January 2016 (has links)
The Internet of Things (IoT) aims to create a global ubiquitous ecosystem composed of large numbers of heterogeneous devices. To achieve this vision, the World Wide Web is emerging as a suitable candidate to interconnect IoT devices and services at the application layer into a Web of Things (WoT). However, the WoT is evolving towards large silos of things, and thus the vision of a global ubiquitous ecosystem is not fully achieved. Furthermore, even if the WoT facilitates mashing up heterogeneous IoT devices and services, existing approaches result in static IoT mashups that cannot adapt to dynamic environments and evolving user requirements. The latter emphasizes another well-recognized challenge in the IoT, namely enabling people to interact with a vast, evolving, and heterogeneous IoT. To address these limitations, we propose an architecture for an open and self-governed IoT ecosystem composed of people and things situated in, and interacting with, a global environment sustained by heterogeneous platforms. Our approach is to endow things with autonomy and apply the social network metaphor to create flexible networks of people and autonomous things. We base our approach on results from multi-agent and WoT research, and we call the envisioned IoT ecosystem the Social Web of Things. Our proposal emphasizes heterogeneity, discoverability and flexible interaction in the IoT. At the same time, it provides a low entry barrier for developers and users via multiple layers of abstraction that enable them to cope effectively with the complexity of the overall ecosystem. We implement several application scenarios to demonstrate these features.
282 |
The engineering of emergence in complex adaptive systems
Potgieter, Anna Elizabeth Gezina, 22 September 2004 (has links)
Agent-oriented software engineering is a new software engineering paradigm that is ideally suited to the analysis and design of complex systems. Open distributed environments place a growing demand on complex systems to be adaptive as well. Complex systems that can learn from and adapt to dynamically changing environments are called complex adaptive systems. These systems are characterized by emergent behaviour caused by interactions between system components and the environment. Agent-oriented software engineering methodologies attempt to control emergence during analysis and design by engineering the complex system in such a way that the correct emergent behaviour results at run-time. In a complex adaptive system, however, emergent behaviour cannot be predicted during analysis and design, as it evolves only after implementation. If emergent behaviour is restricted, as is done in most agent-oriented software engineering approaches, a complex system cannot also be fully adaptive. We propose the BaBe methodology, which enables a complex system to be adaptive by learning from its environment and modifying its behaviour at run-time. This methodology adds a run-time emergence model consisting of distributed Bayesian behaviour networks to the agent-oriented software engineering lifecycle. These networks are initialised by the human software engineer during analysis and design and deployed by Bayesian agencies (themselves complex adaptive systems). The Bayesian agents are simple, and collectively they implement distributed Bayesian behaviour networks. These networks, being specialized Bayesian networks, enable the Bayesian agents to collectively mine relationships between emergent behaviours and the interactions that caused them to emerge, in order to adapt the behaviour of the system. The agents are organized into heterarchies of agencies, where each agency activates one or more component behaviours depending on the inference in the underlying Bayesian behaviour network. These agencies assist the human software engineer in bridging the gap between the implementation and the understanding of emergent behaviour in complex adaptive systems. Due to the simplicity of the agents and the minimal communication amongst them, they can be implemented using a commercially available component architecture. We describe a prototype implementation of the Bayesian agencies using Sun’s Enterprise JavaBeans™ component architecture. / Thesis (PhD (Computer Science))--University of Pretoria, 2005. / Computer Science / unrestricted
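To make the mechanism concrete, the toy sketch below shows how a single "Bayesian agency" could activate a component behaviour by exact inference in a tiny two-node behaviour network; the variable names, probabilities and network structure are illustrative assumptions and do not reproduce the BaBe networks or the Enterprise JavaBeans prototype described above.

```python
# Minimal sketch (not the BaBe implementation): a toy "Bayesian agency" that
# activates a component behaviour by exact inference in a two-node network
# P(Context) and P(Behaviour | Context). Names and probabilities are invented.

# Prior over a context variable that may or may not be observed.
P_context = {"calm": 0.7, "turbulent": 0.3}

# Conditional distribution: which behaviour tends to be appropriate per context.
P_behaviour_given_context = {
    "calm":      {"explore": 0.8, "retreat": 0.2},
    "turbulent": {"explore": 0.3, "retreat": 0.7},
}

def behaviour_posterior(context_evidence=None):
    """Return P(Behaviour), optionally conditioned on an observed context."""
    if context_evidence is not None:
        return dict(P_behaviour_given_context[context_evidence])
    # Marginalise over the context when it is not observed.
    posterior = {}
    for ctx, p_ctx in P_context.items():
        for beh, p_beh in P_behaviour_given_context[ctx].items():
            posterior[beh] = posterior.get(beh, 0.0) + p_ctx * p_beh
    return posterior

def activate(context_evidence=None):
    """The agency activates the behaviour with the highest posterior mass."""
    posterior = behaviour_posterior(context_evidence)
    return max(posterior, key=posterior.get)

if __name__ == "__main__":
    print(activate())             # marginal decision -> "explore"
    print(activate("turbulent"))  # decision after observing the context -> "retreat"
```

In the thesis's terms, many such simple agents would jointly implement a larger distributed network and learn its parameters from observed interactions; the sketch only shows the decision step of one agency.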
283 |
An Integrated Multi-Agent Framework for Optimizing Time, Cost and Environmental Impact of Construction Processes
Ozcan-Deniz, Gulbin, 15 July 2011
Environmentally conscious construction has received a significant amount of research attention during recent decades. Even though the construction literature is rich in studies that emphasize the importance of environmental impact during the construction phase, most previous studies have failed to combine environmental analysis with other project performance criteria in construction. This is mainly because most studies have overlooked the multi-objective nature of construction projects. In order to achieve environmentally conscious construction, multiple objectives and their relationships need to be successfully analyzed in the complex construction environment. The complex construction system is composed of changing project conditions that have an impact on the relationship between the time, cost and environmental impact (TCEI) of construction operations. Yet this impact is still unknown to construction professionals. Studying this impact is vital to fulfilling multiple project objectives and achieving environmentally conscious construction. This research proposes an analytical framework to analyze the impact of changing project conditions on the relationship of TCEI. The study includes greenhouse gas (GHG) emissions as an environmental impact category. The methodology utilizes multi-agent systems, multi-objective optimization, the analytic network process, and system dynamics tools to study the relationships of TCEI and to support decision-making under the influence of project conditions. Life cycle assessment (LCA) is applied to the evaluation of environmental impact in terms of GHG. The mixed-method approach allowed for the collection and analysis of qualitative and quantitative data. Structured interviews with professionals in the highway construction field were conducted to gain their perspectives on decision-making under the influence of certain project conditions, while the quantitative data were collected from the Florida Department of Transportation (FDOT) for highway resurfacing projects. The data collected were used to test the framework. The framework yielded statistically significant results in simulating project conditions and optimizing TCEI. The results showed that the change in project conditions had a significant impact on the TCEI optimal solutions. The correlation between TCEI suggested that they affected each other positively, but with different strengths. The findings of the study will assist contractors in visualizing the impact of their decisions on the relationship of TCEI.
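As a concrete illustration of the multi-objective core of such a framework, the sketch below filters candidate construction plans to a Pareto front over time, cost and GHG emissions (all minimised); the candidate values are invented, and the coupling with multi-agent simulation, ANP and system dynamics used in the thesis is not reproduced here.

```python
# Minimal sketch of the multi-objective idea: keep only the non-dominated
# (time, cost, emissions) candidate plans. All numbers are illustrative.

def dominates(a, b):
    """True if plan `a` is at least as good as `b` on every objective
    and strictly better on at least one (all objectives minimised)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(plans):
    """Return the non-dominated (time, cost, emissions) vectors."""
    return [p for p in plans
            if not any(dominates(q, p) for q in plans if q is not p)]

if __name__ == "__main__":
    # (duration in days, cost in k$, emissions in t CO2e) -- illustrative only.
    candidates = [(120, 900, 340), (100, 1100, 300), (130, 850, 400),
                  (100, 950, 310), (140, 1200, 450)]
    print(pareto_front(candidates))  # the last plan is dominated and dropped
```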
284 |
Význam poznávacích procesů pro tvorbu umělé inteligence / Meaning of cognitive processes for creating artificial intelligence
Pangrác, Vojtěch, January 2011 (has links)
This work aims to present a unified view of the field of cognitive processes, specifically an analysis of the importance of cognitive processes for the field of artificial intelligence as a whole. The area of cognitive processes is covered through an analysis of biological cognitive processes and their subsequent comparison with the processes of artificial intelligence, together with an overall analysis of their limitations and their uses. The work also contains a brief overview of artificial intelligence architectures and a philosophical essay focused on the relationship between mind and body. Finally, we present a project from an IBM workshop that is notable for its ability to work with natural language and to understand the content of the questions asked.
285 |
[pt] DESENVOLVIMENTO INTENCIONAL DE SOFTWARE TRANSPARENTE BASEADO EM ARGUMENTAÇÃO / [en] INTENTIONAL DEVELOPMENT OF TRANSPARENT SOFTWARE BASED ON ARGUMENTATION
MAURICIO SERRANO, 06 March 2012 (has links)
[en] Transparency is a critical quality criterion for modern democratic societies. As software permeates society, transparency has become a concern for public-domain software, such as eGovernment, eCommerce or social software. Therefore, software transparency is becoming a quality criterion that demands more attention from software developers. In particular, the transparency requirements of a software system are related to non-functional requirements, e.g. availability, usability, informativeness, understandability and auditability. However, transparency requirements are particularly difficult to validate due to the subjective nature of the concepts involved. This thesis proposes a transparency-requirements-driven intentional development of transparent software. Transparency requirements are elicited with the support of a catalog of requirements patterns, relatively validated by the stakeholders through argumentation, and represented in intentional models. Intentional models are fundamental to software transparency, as they associate the goals and quality criteria expected by the stakeholders with the software requirements; these goals and quality criteria also justify the decisions made during software development. An example system was implemented as an intentional multi-agent system, i.e., a system with collaborative agents that implement the Belief-Desire-Intention model and are capable of reasoning about goals and quality criteria. This thesis discusses questions that are important to the success of our approach to the development of transparent software, such as: (i) forward and backward traceability between requirements and code; (ii) a fuzzy-logic-based reasoning engine for intentional agents; (iii) the application of an argumentation framework to relatively validate transparency requirements through multi-party agreement among the stakeholders; and (iv) collaborative pre-traceability for intentional models based on social interactions. Our ideas were validated through case studies in different domains, such as ubiquitous computing and Web applications.
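As a rough illustration of item (ii), the sketch below shows the kind of fuzzy scoring an intentional agent might use to decide whether a quality criterion is sufficiently satisfied; the membership functions, the min t-norm and the threshold are illustrative assumptions, not the thesis's reasoning engine.

```python
# Minimal sketch of fuzzy reasoning for an intentional (BDI-style) agent:
# score how well a candidate action satisfies quality criteria and adopt the
# intention only above a threshold. All functions and values are invented.

def high_availability(uptime_ratio):
    """Fuzzy membership of 'high availability' for an uptime in [0, 1]."""
    if uptime_ratio <= 0.90:
        return 0.0
    if uptime_ratio >= 0.99:
        return 1.0
    return (uptime_ratio - 0.90) / 0.09  # linear ramp between 90% and 99%

def good_response_time(seconds):
    """Fuzzy membership of 'good response time' (1.0 below 0.5 s, 0.0 above 3 s)."""
    if seconds <= 0.5:
        return 1.0
    if seconds >= 3.0:
        return 0.0
    return (3.0 - seconds) / 2.5

def satisfies_quality_goal(uptime_ratio, seconds, threshold=0.6):
    """Combine the two criteria with the min t-norm (fuzzy AND); the agent
    adopts the intention only if the combined degree clears the threshold."""
    degree = min(high_availability(uptime_ratio), good_response_time(seconds))
    return degree, degree >= threshold

if __name__ == "__main__":
    print(satisfies_quality_goal(0.97, 1.2))  # -> (~0.72, True)
```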
286 |
Důvěra a reputace v distribuovaných systémech / Trust and Reputation in Distributed Systems
Samek, Jan, Unknown Date (has links)
This Ph.D. thesis deals with trust modelling for distributed systems, in particular multi-context trust modelling for multi-agent distributed systems. Many trust and reputation models exist, but most of them do not deal with the multi-context nature of trust or reputation. The main focus of this thesis is therefore an analysis of multi-context trust-based models, which provides the main assumptions for a new, fully multi-contextual trust model built on top of them. The main part of the thesis provides a new formal multi-context trust model that is able to build, update and maintain trust values for different aspects (contexts) of a single entity in a multi-agent system. In our proposal, a trust value can be built on the basis of direct interactions or on the basis of recommendations and reputation. Moreover, we assume that some contexts of an agent are not fully independent, so that from trust in one of them we are able to infer trust in others. The main contribution of this model is an increase in the efficiency of agent decision making, in terms of selecting optimal partners for interactions. The proposed model was verified by implementing a prototype multi-agent system in which trust was used for the agents' decision making and acting.
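The sketch below illustrates the general shape of such a multi-context trust model: one trust value per (partner, context), updated from direct interactions, blended with recommendations, and bootstrapped across related contexts. The update rules, weights and context-similarity table are illustrative assumptions rather than the formal model proposed in the thesis.

```python
# Minimal sketch of a multi-context trust store with three update paths:
# direct experience, recommendations, and cross-context inference.

from collections import defaultdict

trust = defaultdict(lambda: 0.5)               # (partner, context) -> trust in [0, 1]
similarity = {("delivery", "packaging"): 0.8}  # assumed relatedness of contexts

def update_direct(partner, context, outcome, alpha=0.3):
    """Exponential smoothing toward the observed outcome (1 success, 0 failure)."""
    key = (partner, context)
    trust[key] = (1 - alpha) * trust[key] + alpha * outcome

def update_from_recommendation(partner, context, reported, recommender_trust, beta=0.2):
    """Blend a recommender's reported trust, discounted by trust in the recommender."""
    key = (partner, context)
    w = beta * recommender_trust
    trust[key] = (1 - w) * trust[key] + w * reported

def infer_across_contexts(partner, known_ctx, unknown_ctx):
    """Bootstrap trust in a related context, shrunk toward the neutral value 0.5."""
    s = similarity.get((known_ctx, unknown_ctx), 0.0)
    trust[(partner, unknown_ctx)] = s * trust[(partner, known_ctx)] + (1 - s) * 0.5

if __name__ == "__main__":
    update_direct("agentB", "delivery", outcome=1.0)
    update_from_recommendation("agentB", "delivery", reported=0.9, recommender_trust=0.7)
    infer_across_contexts("agentB", "delivery", "packaging")
    print(dict(trust))
```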
287 |
Privacité dans les problèmes distribués contraints pour agents basés utilité / Privacy in distributed constrained problems for utility-based agents
Savaux, Julien, 25 October 2017 (has links)
Although the field of multi-agent systems has been studied extensively, interactions between agents imply privacy loss. Indeed, solving distributed problems, which are frequently combinatorial, requires an extensive exchange of information between agents until an agreement is found. The problem is that existing approaches do not generally consider privacy and focus only on the satisfaction of agents' constraints to evaluate solutions. The work presented in this thesis therefore aims to take the issue of privacy into account systematically in distributed reasoning. We show that existing work in the field nevertheless lets agents implicitly preserve some degree of privacy. We propose an approach based on utility theory, a formal setting well defined in Artificial Intelligence, allowing an objective and quantitative treatment of the interests and reasonable behaviours of agents. More precisely, the model we have developed includes not only the usual parameters but also information on agents' privacy, quantified in terms of utility. We also show that these problems should be viewed as planning problems in which agents choose actions that maximize their utility. Common algorithms can be described as plans usable as generic models by intelligent planners. The experiments conducted allowed us to validate the approach and to evaluate the quality of the solutions obtained, while showing that their efficiency can be improved thanks to privacy considerations.
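The sketch below illustrates the utility-based view described above: an agent trades the reward of reaching an agreement against the privacy lost by revealing assignments, and picks the action with the highest net utility. The payoff values and the linear trade-off are illustrative assumptions, not the thesis's model.

```python
# Minimal sketch of a privacy-aware utility for a DisCSP agent: net utility is
# the reward for reaching an agreement minus a cost per value revealed.

def agent_utility(agreement_reached, revealed_values, reward=10.0, privacy_cost=1.5):
    """Net utility = solution reward - cost of each value revealed to others."""
    gain = reward if agreement_reached else 0.0
    return gain - privacy_cost * len(revealed_values)

def choose_action(candidate_actions):
    """Pick the action (a possible disclosure during search) maximising utility."""
    return max(candidate_actions,
               key=lambda a: agent_utility(a["solves"], a["reveals"]))

if __name__ == "__main__":
    actions = [
        {"name": "reveal_all", "solves": True,  "reveals": ["x=1", "y=2", "z=3"]},
        {"name": "reveal_one", "solves": True,  "reveals": ["x=1"]},
        {"name": "stay_quiet", "solves": False, "reveals": []},
    ]
    print(choose_action(actions)["name"])  # -> "reveal_one"
```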
288 |
Contribution à la robustesse dans les CSPs distribués par réplication locale / Contribution to robustness in distributed CSPs by local replication
Chakchouk, Fadoua, 19 November 2018 (has links)
We aim to ensure the resolution of a DisCSP in the presence of one or more failed agents. Methods handling fault tolerance in multi-agent systems aim to ensure the continuity of system operation, but none of these methods is applied to solving a DisCSP. The failure of an agent during resolution causes the loss of part of the global DisCSP, which leads to incorrect results. Therefore, to obtain the expected results, it is necessary to ensure the resolution of the failed agent's local CSP. We propose to replicate the local CSPs of failed agents within non-failed agents. This replication allows the local CSP of a failed agent to be solved by another agent. The resolution is done by merging the replicas of the failed agents' CSPs with the CSPs of the other agents; this merging preserves the initial modelling of the DisCSP. The proposed replica distribution algorithm guarantees that the CSPs of failed agents are not replicated within the same agent, so the problem keeps its distributed aspect.
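The sketch below illustrates the replication idea in its simplest form: each agent's local CSP is replicated on some other agent (here via a cyclic shift chosen purely for illustration), so a single failure never loses a sub-problem, and recovery merges the replica with the holder's own CSP. The placement rule is an assumption and does not reproduce the distribution algorithm proposed in the thesis.

```python
# Minimal sketch of local-CSP replication and recovery after an agent failure.

def place_replicas(agents):
    """Map each agent to the agent that will store a replica of its local CSP
    (a simple cyclic shift, so no agent holds a copy of its own sub-problem)."""
    n = len(agents)
    if n < 2:
        raise ValueError("replication needs at least two agents")
    return {agents[i]: agents[(i + 1) % n] for i in range(n)}

def recover(failed_agent, placement, local_csps):
    """On failure, the replica holder merges the lost local CSP with its own."""
    holder = placement[failed_agent]
    merged = local_csps[holder] + local_csps[failed_agent]  # union of constraints
    return holder, merged

if __name__ == "__main__":
    agents = ["A1", "A2", "A3"]
    csps = {"A1": ["x < y"], "A2": ["y < z"], "A3": ["z != x"]}
    placement = place_replicas(agents)
    print(placement)                       # {'A1': 'A2', 'A2': 'A3', 'A3': 'A1'}
    print(recover("A2", placement, csps))  # A3 now also solves A2's constraints
```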
289 |
Optimal Information-Weighted Kalman Consensus Filter
Shiraz Khan (8782250), 30 April 2020
Distributed estimation algorithms have received considerable attention lately, owing to advancements in computing, communication and battery technologies. They offer increased scalability, robustness and efficiency. In applications such as formation flight, where any discrepancy between sensor estimates has severe consequences, it becomes crucial to require consensus of estimates amongst all sensors. The Kalman Consensus Filter (KCF) is a seminal work in the field of distributed consensus-based estimation that accomplishes this.

However, the KCF algorithm is mathematically sub-optimal and does not account for the cross-correlation between the estimates of sensors. Other popular algorithms, such as the Information-weighted Consensus Filter (ICF), rely on ad hoc definitions and approximations, rendering them sub-optimal as well. Another major drawback of KCF is that it uses unweighted consensus, i.e., each sensor assigns equal weight to the estimates of its neighbors. This has been shown to cause severely degraded performance of KCF when some sensors cannot observe the target, and can even cause the algorithm to become unstable.

In this work, we develop a novel algorithm, which we call the Optimal Kalman Consensus Filter for Weighted Directed Graphs (OKCF-WDG), which addresses both of these limitations of existing algorithms. OKCF-WDG integrates the KCF formulation with that of matrix-weighted consensus. The algorithm achieves consensus on a weighted digraph, enabling a directed flow of information within the network. This aspect of the algorithm is shown to offer significant performance improvements over KCF, as information may be directed from well-performing sensors to other sensors that have high estimation error due to environmental factors or sensor limitations. We validate the algorithm through simulations and compare it to existing algorithms. It is shown that the proposed algorithm outperforms existing algorithms by a considerable margin, especially in the case where some sensors are naive (i.e., cannot observe the target).
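For context, the estimate-update step of the baseline (unweighted) KCF is commonly written as below, where $\bar{x}_i$ is node $i$'s prior estimate, $z_i$ its measurement, $H_i$ its observation matrix, $K_i$ the Kalman gain, $C_i$ a scalar consensus gain and $\mathcal{N}_i$ its neighbours; this is a sketch of the baseline only, not of the OKCF-WDG update proposed in this work.

```latex
% Baseline (unweighted) KCF estimate update at sensor i -- shown for context.
\[
  \hat{x}_i \;=\; \bar{x}_i \;+\; K_i\,\bigl(z_i - H_i\,\bar{x}_i\bigr)
  \;+\; C_i \sum_{j \in \mathcal{N}_i} \bigl(\bar{x}_j - \bar{x}_i\bigr)
\]
```

The consensus sum weights every neighbour equally through the single gain $C_i$, which is exactly the unweighted-consensus limitation criticised above; OKCF-WDG instead places matrix weights on the edges of a weighted digraph so that information flows preferentially from well-informed sensors.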
290 |
Optimalizace hyperparametrů v systémech automatického strojového učení / Hyperparameter optimization in AutoML systems
Pešková, Klára, January 2019
In the last few years, as processing data has become part of everyday life in different areas of human activity, automated machine learning systems designed to help with the process of data mining have been on the rise. Various metalearning techniques, including recommending the right method to use or the sequence of steps to take, and finding its optimal hyperparameter configuration, are integrated into these systems to help researchers with machine learning tasks. In this thesis, we propose metalearning algorithms and techniques for hyperparameter optimization, for narrowing the intervals of hyperparameters, and for recommending a machine learning method for a previously unseen dataset. We designed two AutoML machine learning systems in which these metalearning techniques are implemented. An extensive set of experiments was designed to evaluate these algorithms, and the results are presented.
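One of the techniques mentioned above, narrowing hyperparameter intervals, can be sketched as a simple random search that repeatedly shrinks each interval around the best configurations found so far; the objective function, interval bounds and round sizes below are illustrative assumptions, not the algorithms evaluated in the thesis.

```python
# Minimal sketch of random search with interval narrowing for hyperparameters.

import random

def random_search_with_narrowing(objective, intervals, rounds=3, samples=20, keep=5):
    """intervals: dict name -> (low, high). Returns (best_score, best_config)."""
    best = None
    for _ in range(rounds):
        trials = []
        for _ in range(samples):
            config = {k: random.uniform(lo, hi) for k, (lo, hi) in intervals.items()}
            trials.append((objective(config), config))
        trials.sort(key=lambda t: t[0])            # lower score is better
        elite = [cfg for _, cfg in trials[:keep]]
        if best is None or trials[0][0] < best[0]:
            best = trials[0]
        # Narrow each interval to the span covered by the elite configurations.
        intervals = {k: (min(c[k] for c in elite), max(c[k] for c in elite))
                     for k in intervals}
    return best

if __name__ == "__main__":
    # Toy objective standing in for cross-validated model error.
    obj = lambda c: (c["learning_rate"] - 0.1) ** 2 + (c["regularization"] - 1.0) ** 2
    print(random_search_with_narrowing(obj, {"learning_rate": (0.0, 1.0),
                                             "regularization": (0.0, 10.0)}))
```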