1.

Evaluation of agents' behavior quality in simulation: application to a driving simulator in a virtual environment

Darty, Kevin, 07 July 2015
This thesis is situated in the context of Multi-Agent Simulation and addresses the evaluation of agents' ability to reproduce human behaviors. This problem arises in many domains, such as Virtual Reality and Embodied Conversational Agents. The dominant evaluation approach relies on Social Science questionnaires; few approaches exploit the automatic data analysis used in Artificial Intelligence at the microscopic scale. We show in this thesis that evaluation benefits from using both approaches jointly. First, we present a method for evaluating the quality of agent behaviors that combines the Artificial Intelligence approach with the Social Science approach: the former is based on clustering of simulation traces, the latter evaluates users through an annotation of behaviors. We then present an algorithm that compares agents to humans in order to assess the capacities, lacks, and errors of the agent model, and we provide metrics for this comparison. We then make these behaviors explicit based on user categories. Finally, we present a cycle for automatic calibration of the agents and an exploration of the parameter space. Our evaluation method can be used to analyze a single agent model or to compare several agent models. We applied this methodology to several driver-behavior studies in order to analyze the ARCHISIM road traffic simulation, and we present the obtained results.
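The agent-to-human comparison described above can be illustrated with a minimal sketch. The thesis does not specify its data structures, so the representation below (flat lists of cluster labels and trace origins) and the "coverage" metric are illustrative assumptions: traces from agents and humans are clustered together, and each cluster is classified as shared (a capacity of the model), human-only (a lack), or agent-only (an error).

```python
# Minimal sketch: compare agent and human behaviors via shared clusters.
# The cluster labels are assumed to come from any prior clustering of
# simulation traces; the labels and sources below are made-up examples.

def compare_populations(labels, sources):
    """labels[i]: cluster id of trace i; sources[i]: 'agent' or 'human'."""
    clusters = {}
    for label, source in zip(labels, sources):
        clusters.setdefault(label, set()).add(source)
    shared = [c for c, s in clusters.items() if s == {"agent", "human"}]
    human_only = [c for c, s in clusters.items() if s == {"human"}]
    agent_only = [c for c, s in clusters.items() if s == {"agent"}]
    return {
        "capacities": shared,      # behaviors the agent model reproduces
        "lacks": human_only,       # human behaviors the model misses
        "errors": agent_only,      # agent behaviors no human exhibits
        "coverage": len(shared) / max(1, len(shared) + len(human_only)),
    }

result = compare_populations(
    labels=[0, 0, 1, 1, 2, 2],
    sources=["agent", "human", "human", "human", "agent", "agent"],
)
```

Here cluster 0 contains both populations (a capacity), cluster 1 only humans (a lack), and cluster 2 only agents (an error), so half of the human behavior clusters are covered by the agent model.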
2.

Feedback-Driven Data Clustering

Hahmann, Martin, 28 October 2013
The acquisition and analysis of data have become common yet critical tasks in many areas of modern economy and research. Unfortunately, the ever-increasing scale of datasets has long outgrown the capacities humans can muster to extract information from them and gain new knowledge. For this reason, research areas like data mining and knowledge discovery steadily gain importance. The algorithms they provide for the extraction of knowledge are mandatory prerequisites that enable people to analyze large amounts of information. Among the approaches offered by these areas, clustering is one of the most fundamental. By finding groups of similar objects inside the data, it aims to identify meaningful structures that constitute new knowledge. Clustering results are also often used as input for other analysis techniques like classification or forecasting. As clustering extracts new and unknown knowledge, it obviously has no access to any form of ground truth. For this reason, clustering results have a hypothetical character and must be interpreted with respect to the application domain. This makes clustering very challenging and leads to an extensive and diverse landscape of available algorithms. Most of these are expert tools tailored to a single, narrowly defined application scenario. Over the years, this specialization has become a major trend, aimed at countering the inherent uncertainty of clustering by building as many domain specifics as possible into the algorithms. While customized methods often improve result quality, they become more and more complicated to handle and lose versatility. This creates a dilemma, especially for amateur users, whose numbers are increasing as clustering is applied in more and more domains. While an abundance of tools is offered, guidance is severely lacking, and users are left alone with critical tasks like algorithm selection, parameter configuration, and the interpretation and adjustment of results.
This thesis aims to solve this dilemma by structuring and integrating the necessary steps of clustering into a guided and feedback-driven process. In doing so, users are provided with a default modus operandi for the application of clustering. Two main components constitute the core of said process: the algorithm management and the visual-interactive interface. Algorithm management handles all aspects of actual clustering creation and the involved methods. It employs a modular approach for algorithm description that allows users to understand, design, and compare clustering techniques with the help of building blocks. In addition, algorithm management offers facilities for the integration of multiple clusterings of the same dataset into an improved solution. New approaches based on ensemble clustering not only allow the utilization of different clustering techniques, but also ease their application by acting as an abstraction layer that unifies individual parameters. Finally, this component provides a multi-level interface that structures all available control options and provides the docking points for user interaction. The visual-interactive interface supports users during result interpretation and adjustment. For this, the defining characteristics of a clustering are communicated via a hybrid visualization. In contrast to traditional data-driven visualizations that tend to become overloaded and unusable with increasing volume/dimensionality of data, this novel approach communicates the abstract aspects of cluster composition and relations between clusters. This aspect orientation allows the use of easy-to-understand visual components and makes the visualization immune to scale related effects of the underlying data. This visual communication is attuned to a compact and universally valid set of high-level feedback that allows the modification of clustering results. 
Instead of technical parameters that indirectly cause changes in the whole clustering by influencing its creation process, users can employ simple commands like merge or split to adjust clusters directly. The orchestrated cooperation of these two main components creates a modus operandi in which clusterings are no longer created and discarded as a whole until a satisfying result is obtained. Instead, users apply the feedback-driven process to iteratively refine an initial solution. Performance and usability of the proposed approach were evaluated in a user study. Its results show that the feedback-driven process enabled amateur users to easily create satisfying clustering results even from varying, non-optimal starting situations.
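The high-level merge and split feedback commands described above can be sketched as direct operations on a clustering. The representation (a dict mapping cluster ids to item sets) and the predicate-based split criterion are illustrative assumptions, not the thesis's actual interface; the point is that users adjust the result itself rather than re-tuning algorithm parameters.

```python
# Sketch of feedback-driven cluster adjustment: clusters are modified
# directly via merge/split instead of re-running the clustering with
# new technical parameters. All names here are hypothetical.

def merge(clusters, a, b):
    """Fuse cluster b into cluster a (user feedback: they belong together)."""
    clusters[a] = clusters[a] | clusters.pop(b)
    return clusters

def split(clusters, c, predicate, new_id):
    """Move items of cluster c matching predicate into a new cluster."""
    moved = {x for x in clusters[c] if predicate(x)}
    clusters[c] -= moved
    clusters[new_id] = moved
    return clusters

clusters = {"A": {1, 2, 3}, "B": {10, 11}}
merge(clusters, "A", "B")                      # feedback: A and B are one group
split(clusters, "A", lambda x: x >= 10, "C")   # feedback: large items differ
```

After both commands, cluster "A" holds the small items and a new cluster "C" holds the large ones; each command refines the existing solution instead of discarding it.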
