31

Ad hoc distributed simulation: a method for embedded online simulations

Huang, Ya-Lin 20 September 2013
The continual growth of computing power in small devices has motivated the development of novel approaches to optimizing operational systems efficiently and effectively. These optimization problems are often so complex that solving them analytically may be difficult, if not impossible. One method for solving such problems is to use online simulation. However, challenges in using online simulation include the issues of responsiveness (e.g., because of communication delays), scalability, and failure resistance. To tackle these issues, this study proposes embedding online simulations into a network of sensors that monitors the system under investigation. This thesis explores an approach termed “ad hoc distributed simulation,” which is based on embedding online simulations into a sensor network and adding communication and synchronization among simulators to model operational systems. This approach offers several potential advantages over existing approaches: (1) it can provide rapid response to system dynamics as well as efficiency, since data exchange is local to the sensor network; (2) it can achieve better scalability as more sensors are incorporated; and (3) it can provide better robustness to failures because portions of the system remain under local control. This research addresses several statistical issues in this ad hoc approach: (1) rapid and effective estimation of the input processes at model boundaries, (2) estimation of system-wide performance measures from individual simulator outputs, and (3) correction mechanisms responding to unexpected events or inaccuracies within the model. This thesis examines ad hoc distributed simulation analytically and experimentally, focusing mainly on the accuracy of predicting the performance of open queueing networks. First, the analytical part formalizes the ad hoc approach and evaluates its accuracy at modeling a certain class of open queueing networks with regard to steady-state system performance measures. This work on steady-state metrics is extended to a broader class of networks by an empirical study, which presents evidence that the ad hoc approach can generate predictions comparable to those from sequential simulations. Furthermore, a “buffered-area” mechanism is proposed that substantially reduces prediction bias with a moderate increase in execution time. In addition to these steady-state studies, another empirical study targets the prediction accuracy of the ad hoc approach for open queueing networks with short-term system-state transients. This study demonstrates that, with slight modification to the design of the ad hoc queueing simulation method used in the steady-state studies, system dynamics can be modeled well. The results again support the conclusion that the ad hoc approach is competitive with sequential simulation in terms of prediction accuracy.
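To make the boundary-estimation idea concrete, here is a minimal sketch (not taken from the thesis): each local simulator estimates its input arrival rate from recent sensor observations and predicts a local steady-state measure with an M/M/1 surrogate, and a coordinator aggregates the local outputs into a system-wide figure. All names and numbers are illustrative assumptions.

```python
import random
from statistics import mean

def estimate_arrival_rate(interarrival_times):
    """Estimate the arrival rate at a model boundary from observed inter-arrival times."""
    return 1.0 / mean(interarrival_times)

def local_mm1_prediction(arrival_rate, service_rate):
    """Steady-state mean number in system for an M/M/1 queue (local simulator surrogate)."""
    rho = arrival_rate / service_rate
    if rho >= 1.0:
        return float("inf")  # unstable: queue grows without bound
    return rho / (1.0 - rho)

# Each sensor node observes traffic crossing its boundary and predicts locally.
random.seed(1)
nodes = {
    "intersection_A": {"observations": [random.expovariate(0.8) for _ in range(200)], "mu": 1.2},
    "intersection_B": {"observations": [random.expovariate(0.5) for _ in range(200)], "mu": 1.0},
}

local_predictions = {}
for name, node in nodes.items():
    lam = estimate_arrival_rate(node["observations"])
    local_predictions[name] = local_mm1_prediction(lam, node["mu"])

# A coordinator combines the individual simulator outputs into a system-wide measure.
print(local_predictions)
print("total mean number in system:", sum(local_predictions.values()))
```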
32

Architecture de simulation distribuée temps-réel / Real-time distributed simulation architecture

Chaudron, Jean-Baptiste 25 January 2012
This thesis is part of the broader PRISE project (Plate-forme de Recherche pour l'Ingénierie des Systèmes Embarqués), whose main objective is the development of an execution platform for embedded software. Such software is considered critical and is therefore subject to specific design rules: in particular, it must meet real-time constraints and guarantee temporally predictable behaviour, so that it always produces correct results while respecting its timing deadlines. The objective of this thesis is to evaluate the use of distributed simulation techniques (and in particular the HLA standard) to meet the platform's needs for hybrid, real-time simulation. To satisfy these constraints and guarantee the temporal predictability of a distributed simulation, the whole problem must be considered at its different levels of action: application, middleware, software, hardware, and also a formal level for validating the timing behaviour. This work is based on ONERA's RTI (Run Time Infrastructure, the HLA middleware), CERTI, and proposes a methodological approach adapted to these different levels of action. Case studies, including an aircraft flight simulator, were specified, implemented, and tested on the PRISE platform.
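As a rough illustration of the real-time constraint discussed above — not CERTI's actual API — the following sketch paces logical time against the wall clock and counts missed deadlines, which is the kind of temporal predictability a real-time federate must guarantee. Function names and parameters are hypothetical.

```python
import time

def paced_simulation(step_count, logical_step, scale=1.0):
    """Advance logical time in fixed steps, pacing against the wall clock and
    recording any missed deadlines (a toy stand-in for real-time federate pacing)."""
    start = time.monotonic()
    missed = 0
    logical_time = 0.0
    for _ in range(step_count):
        # ... federate work for this step would go here (state update, attribute exchange) ...
        logical_time += logical_step
        deadline = start + logical_time / scale
        now = time.monotonic()
        if now > deadline:
            missed += 1          # the step overran its real-time budget
        else:
            time.sleep(deadline - now)
    return logical_time, missed

logical_time, missed = paced_simulation(step_count=20, logical_step=0.01)
print(f"advanced to t={logical_time:.2f}s with {missed} missed deadlines")
```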
33

Protegendo a economia virtual de MMOGS através da detecção de cheating. / Protecting the virtual economy in MMOGs by cheat detection

Severino, Felipe Lange January 2012
In recent years, Massively Multiplayer Online Games (MMOGs) have grown in both popularity and investment, driven especially by the evolution of residential connections (faster connections at lower prices). With this growing demand, the limitations of the client-server architecture commonly used in commercial games become more significant. Peer-to-peer architectures are among the alternatives for supporting MMOGs, distributing the game among several computers; however, they raise security problems whose known solutions often perform too poorly to be practical in real games. Among the security problems most significant for MMOGs is cheating: an action taken by one or more players to break the rules in their own favour. The concern with cheating is aggravated when its effects can cause irreversible damage to the virtual economy and potentially affect every player. This work uses a cellular division of the virtual world to restrict the impact of a given cheat to a single cell, preventing it from propagating. To that end, a classification of player state is proposed, and a cheat-detection technique is applied to each class. Experiments were carried out through simulation to test the applicability of the model and to analyse its performance and accuracy. The results indicate that the proposed model can effectively protect the virtual economy, restraining the effects of a cheating occurrence to a small portion of the virtual world rather than letting it reach all players.
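A minimal sketch of the cell-containment idea (illustrative only, not the thesis's implementation): player actions are recorded per world cell, a naive detector audits a cell, and quarantining discards only that cell's suspect economy state, leaving other cells untouched. All names and thresholds are assumptions.

```python
from collections import defaultdict

CELL_SIZE = 100.0

def cell_of(position):
    """Map a world position to a cell coordinate."""
    x, y = position
    return (int(x // CELL_SIZE), int(y // CELL_SIZE))

class CellWorld:
    """Toy cell-partitioned world: trades are validated per cell, so a detected
    cheat only invalidates the state of the cell where it happened."""
    def __init__(self):
        self.cell_ledger = defaultdict(list)   # cell -> list of (player, gold_delta)

    def record_trade(self, player, position, gold_delta):
        self.cell_ledger[cell_of(position)].append((player, gold_delta))

    def audit_cell(self, cell, max_gold_per_trade=1000):
        """Very naive detector: flag trades that create implausible amounts of currency."""
        return [t for t in self.cell_ledger[cell] if abs(t[1]) > max_gold_per_trade]

    def quarantine(self, cell):
        """Discard the suspect cell's recent economy state; other cells are untouched."""
        suspicious = self.audit_cell(cell)
        if suspicious:
            self.cell_ledger[cell].clear()
        return suspicious

world = CellWorld()
world.record_trade("alice", (10, 20), 50)
world.record_trade("mallory", (15, 25), 999999)   # duplication exploit in the same cell
world.record_trade("bob", (500, 500), 80)         # different cell, unaffected
print("flagged:", world.quarantine(cell_of((15, 25))))
print("bob's cell intact:", world.cell_ledger[cell_of((500, 500))])
```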
35

Influências de políticas de escalonamento no desempenho de simulações distribuídas / Influences of scheduling policies on the performance of distributed simulations

Bárbara Lopes Voorsluys 07 April 2006
This work analyses the impact on the performance of a distributed simulation when conventional partitioning techniques are employed. These techniques do not take into account information inherent to the state of the simulation. Since the execution of a simulation is also subject to interference from the platform, information about the computational power of each resource and about the type of simulation can be applied to its partitioning. Static information, generated by evaluating the platform with benchmarks, was used alongside dynamic information provided by load indices. The results obtained with these techniques are attractive, especially when the goal is to execute simulations in environments that offer only conventional scheduling policies rather than simulation-specific ones. The case studies evaluated show satisfactory gains: up to 24% reduction in execution time, up to 22% improvement in efficiency, and up to 79% fewer rollbacks. Depending on the available time and the intended objectives, conventional techniques can therefore be worth employing in distributed simulations. This work also contributes improvements to the two tools used, Warped and AMIGO: a communication interface between them was developed, broadening their fields of use.
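The following sketch (an assumption-laden illustration, not the dissertation's actual algorithm) shows how a static benchmark score and a dynamic load index might be combined into an effective capacity used to place logical processes greedily on hosts.

```python
import heapq

def partition_lps(lp_costs, hosts):
    """Greedy partitioning: assign each logical process (heaviest first) to the host
    with the most spare effective capacity, where effective capacity combines a
    static benchmark score with a dynamic load index in [0, 1]."""
    # Max-heap on spare capacity (negated for heapq's min-heap behaviour).
    heap = [(-h["benchmark"] * (1.0 - h["load_index"]), name, 0.0) for name, h in hosts.items()]
    heapq.heapify(heap)
    assignment = {}
    for lp, cost in sorted(lp_costs.items(), key=lambda kv: -kv[1]):
        spare, name, used = heapq.heappop(heap)
        assignment[lp] = name
        heapq.heappush(heap, (spare + cost, name, used + cost))  # less spare after assignment
    return assignment

hosts = {
    "host1": {"benchmark": 100.0, "load_index": 0.10},   # fast, lightly loaded
    "host2": {"benchmark": 60.0,  "load_index": 0.50},   # slower and busier
}
lp_costs = {"lp_a": 30, "lp_b": 25, "lp_c": 20, "lp_d": 10}
print(partition_lps(lp_costs, hosts))
```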
36

Design, Implementation, and Performance Evaluation of HLA in Unity

Söderbäck, Karl January 2017
This report investigates whether an HLA plugin can be built for the Unity game engine and whether it would introduce drawbacks with regard to data exchange and performance. A plugin implementation and performance tests on it show that running HLA as a plugin in Unity is a promising option for 3D applications designed in Unity that communicate over HLA.
37

Parallélisation massive de dynamiques spatiales : contribution à la gestion durable du mildiou de la pomme de terre / Massive scale parallelization of spatial dynamics : input for potato blight sustainable management

Herbez, Christopher 21 November 2016
La simulation à évènements discrets, dans le contexte du formalisme DEVS, est en plein essor depuis quelques années. Face à une demande grandissante en terme de taille de modèles et par conséquent en temps de calcul, il est indispensable de construire des outils tel qu'ils garantissent une optimalité ou au mieux une excellente réponse en terme de temps de simulations. Certes, des outils de parallélisation et de distribution tel que PDEVS existent, mais la répartition des modèles au sein des noeuds de calculs reste entièrement à la charge du modélisateur. L'objectif de cette thèse est de proposer une démarche d'optimisation des temps de simulation parallèle et distribuée, en restructurant la hiérarchie de modèles. La nouvelle hiérarchie ainsi créée doit garantir une exécution simultanée d'un maximum de modèles atomiques, tout en minimisant le nombre d'échanges entre modèles n'appartenant pas au même noeud de calculs (i.e. au même sous-modèle). En effet, l'optimisation des temps de simulation passe par une exécution simultanée d'un maximum de modèles atomiques, mais dans un contexte distribué, il est important de minimiser le transfert d'évènements via le réseau pour éviter les surcoûts liés à son utilisation. Il existe différentes façons de structurer un modèle DEVS : certains utilisent une structure hiérarchique à plusieurs niveaux, d'autres optent pour une structure dite "à plat". Notre approche s'appuie sur cette dernière. En effet, il est possible d'obtenir un unique graphe de modèles, correspondant au réseau de connexions qui lient l'ensemble des modèles atomiques. À partir de ce graphe, la création d'une hiérarchie de modèles optimisée pour la simulation distribuée repose sur le partitionnement de ce graphde de modèles. En effet, la théorie des graphes offre un certain nombre d'outils permettant de partitionner un graphe de façon à satisfaire certaines contraintes. Dans notre cas, la partition de modèles obtenue doit être équilibrée en charge de calcul et doit minimiser le transfert de messages entre les sous-modèles. L'objectif de cette thèse est de présenter la démarche d'optimization, ainsi que les outils de partitionnement et d'apprentissage utilisés pour y parvenir. En effet, le graphe de modèles fournit par la structure à plat ne contient pas toutes les informations nécessaires au partitionnement. C'est pourquoi, il est nécessaire de mettre en place une pondération de celui qui reflète au mieux la dynamique individuelle des modèles qui le compose. Cette pondération est obtenue par apprentissage, à l'aide de chaînes de Markov cachées (HMM). L'utilisation de l'apprentissage dans un contexte DEVS a nécessité quelques modifications pour prendre en compte toutes ces spécificités. Cette thèse présente également toute une phase de validation : à la fois, dans un contexte parallèle dans le but de valider le comportement du noyau de simulation et d'observer les limites liées au comportement des modèles atomiques, et d'autre part, dans un contexte distribué. Pour terminer, cette thèse présente un aspect applicatif lié à la gestion durable du mildiou de la pomme de terre. Le modèle mildiou actuel est conçu pour être utilisé à l'échelle de la parcelle. En collaboration avec des agronomes, nous proposons d'apporter quelques modifications à ce dernier pour étendre son champ d'action et proposer une nouvelle échelle spatiale. / Discrete-event simulation, in a context of DEVS formalism, has been experiencing a boom over the recent years. 
In a situation of increasing demand in terms of model size and consequently in calculation time, it is necessary to build up tools to ensure optimality, or even better, an excellent response to simulation times.Admittedly, there exist parallelization and distribution tools like PDEVS, but the distribution of models within compute nodes is under the modeler s sole responsability.The Ph.D. main scope is to propose an optimization approach of parallel and distributed simulation times, restructuring the hierarchy of models. The new founded hierarchy can thus guarantee a simultaneous execution of a maximum quantity of atomic models while minimizing the number of exchanges between models, which are not associated with the same calculation node (i.e. with the same sub-model). Accordingly, optimizing simulation time goes through a simultaneous implementation of a maximum quantity of atomic models, but in a distributed context it is highly important to minimize the adaptation transfer via the network to avoid overcharges related to its use. Ther exist deifferent ways of structuring a DEVS model : some scientist use a multi-leveled hierarchical structure, and others opt for a "flat" structure. Our objective focuses on the latter. Indeed, it is possible to obtain a single graph of models, corresponding to the connection network linking all the atomic models. From this graph the creation of a model hierarchy optimized by the distributed simulation focuses on the partitioning of this model graph. In such cases, the graph theory reveals a certain numbers of tools to partition the graph to meet some constraints. In our study, the resulting model partition must not only balance calculation needs but also minimize the message transfer between sub-models. The Ph.D. main scope it to propose not only an optimization approach but also partitioning and learning tools to achieve full compliance in our processing methods. In such cases, the model graph using the flat structure does not provide us with all the necessary information related to partitioning. That is the reason whi it is highly necessary to assign a weighting in the graph that best reflects the individual dynamics of models composing it. This weighting comes from learning, using the Hidden Markov Models (HMM). The use of learning in DEVS context results in some adjustments ti consider all the specificities. The thesis also ensures the complete validation phase, either in a parallel context to validate the simulation node behavior and observe the limits of atomic model behavior, or in a distributed context. This dissertation in its final state also includes a pratice-oriented approach to sustainably manage potato blight. The current fungus Phytophthora infestans simulation model is conceived for a plot scale. In collacoration with agronomists, we provide a few changes to update the Phytophthora infestans model to extend the scope of action and propose a new scale of values.
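As an illustration of partitioning a flat DEVS model graph — a simplified stand-in for the thesis's method, with made-up weights in place of the HMM-learned ones — the sketch below assigns each atomic model to the part that keeps the most event traffic local while penalizing load imbalance.

```python
def partition_model_graph(node_weight, edges, k):
    """Greedy k-way partition of a flat model graph: nodes carry an activity weight
    (standing in for one learned from past transitions) and edges carry an
    event-traffic weight.  Each node goes to the part that maximises the traffic
    kept local minus a penalty for exceeding the balanced load target."""
    parts = [set() for _ in range(k)]
    load = [0.0] * k
    target = sum(node_weight.values()) / k
    neighbours = {n: {} for n in node_weight}
    for (a, b), w in edges.items():
        neighbours[a][b] = neighbours[a].get(b, 0.0) + w
        neighbours[b][a] = neighbours[b].get(a, 0.0) + w
    # Place heavy nodes first so the balance penalty has room to act.
    for n in sorted(node_weight, key=node_weight.get, reverse=True):
        def score(i):
            local_traffic = sum(w for m, w in neighbours[n].items() if m in parts[i])
            imbalance = max(0.0, load[i] + node_weight[n] - target)
            return local_traffic - imbalance
        best = max(range(k), key=score)
        parts[best].add(n)
        load[best] += node_weight[n]
    return parts

node_weight = {"m1": 4.0, "m2": 4.0, "m3": 1.0, "m4": 1.0}
edges = {("m1", "m3"): 10.0, ("m2", "m4"): 10.0, ("m1", "m2"): 0.5}
print(partition_model_graph(node_weight, edges, k=2))
# -> heavy couplings m1-m3 and m2-m4 stay inside their respective parts
```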
38

Performance comparison of data distribution management strategies in large-scale distributed simulation.

Dzermajko, Caron 05 1900
Data distribution management (DDM) is a High Level Architecture/Run-time Infrastructure (HLA/RTI) service that manages the distribution of state updates and interaction information in large-scale distributed simulations. The key to efficient DDM is to limit and control the volume of data exchanged during the simulation, relaying data only to those hosts that require it. This thesis focuses on different DDM implementations and strategies, analysing three DDM methods: fixed grid-based, dynamic grid-based, and region-based. It also examines the use of multi-resolution modeling with various DDM strategies and the performance effects of aggregation/disaggregation under those strategies. Running numerous federation executions, I simulate four different scenarios on a cluster of workstations with a mini-RTI Kit framework and propose a set of benchmarks for comparing the DDM schemes. The goals of this work are to determine the most efficient model for applying each DDM scheme, discover the limits of scalability of the various DDM methods, evaluate the effects of aggregation/disaggregation on performance and resource usage, and present accepted benchmarks for use in future research.
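A minimal sketch of fixed grid-based DDM matching (illustrative, not the thesis's benchmark code): update and subscription regions are mapped to grid cells, and an update is relayed only to subscribers sharing at least one cell with it.

```python
from collections import defaultdict

GRID = 10.0   # width of one DDM grid cell

def cells_for_region(region):
    """All grid cells overlapped by an axis-aligned region (x_min, x_max, y_min, y_max)."""
    x_min, x_max, y_min, y_max = region
    return {(i, j)
            for i in range(int(x_min // GRID), int(x_max // GRID) + 1)
            for j in range(int(y_min // GRID), int(y_max // GRID) + 1)}

def match_updates_to_subscribers(update_regions, subscription_regions):
    """Fixed grid-based DDM matching: an update is routed to a subscriber iff their
    regions share at least one grid cell (a conservative over-approximation of overlap)."""
    cell_subscribers = defaultdict(set)
    for sub, region in subscription_regions.items():
        for cell in cells_for_region(region):
            cell_subscribers[cell].add(sub)
    routing = {}
    for upd, region in update_regions.items():
        receivers = set()
        for cell in cells_for_region(region):
            receivers |= cell_subscribers[cell]
        routing[upd] = receivers
    return routing

updates = {"tank_42": (12, 14, 3, 5)}                    # small update region
subscriptions = {"fed_A": (0, 30, 0, 30), "fed_B": (80, 95, 80, 95)}
print(match_updates_to_subscribers(updates, subscriptions))
# -> tank_42 is relayed only to fed_A; fed_B never sees the update.
```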
39

Avaliação de políticas de escalonamento para execução de simulações distribuídas / Evaluation of scheduling policies for the execution of distributed simulations

Osvaldo Adilson de Carvalho Junior 26 May 2008
Better scheduling in distributed simulation is fundamental for faster and more efficient execution. The goal of this project is to evaluate the performance of conventional scheduling policies and of policies specific to Distributed Simulation (DS), comparing the two approaches. A survey of research in the area shows that no similar evaluation exists, so an important contribution of this work is to demonstrate the advantages and disadvantages of using traditional policies relative to DS-specific ones. The Warped tool, described in this dissertation, was used to execute the simulations. New scheduling techniques were developed and implemented that use the results of the running simulation to perform better load balancing. The development of this project required a bibliographic review covering the concepts of distributed simulation and its synchronization protocols, process scheduling specific to DS programs, and traditional policies. As a further contribution, this study proposes a new classification of DS-specific policies that use the optimistic protocol.
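One possible heuristic in this spirit — an illustrative sketch, not necessarily the technique implemented in the dissertation — uses runtime output (host load and per-process rollback counts) to decide which logical process to migrate and where.

```python
def plan_migration(host_load, lp_host, lp_rollbacks, imbalance_threshold=0.25):
    """Toy load-balancing step driven by runtime simulation output: if host load is
    too unbalanced, move the logical process with the most rollbacks from the most
    loaded host to the least loaded one."""
    busiest = max(host_load, key=host_load.get)
    idlest = min(host_load, key=host_load.get)
    if host_load[busiest] - host_load[idlest] < imbalance_threshold:
        return None   # balanced enough; keep the current placement
    candidates = [lp for lp, h in lp_host.items() if h == busiest]
    victim = max(candidates, key=lambda lp: lp_rollbacks.get(lp, 0))
    return victim, busiest, idlest

host_load = {"node0": 0.92, "node1": 0.40}
lp_host = {"lp0": "node0", "lp1": "node0", "lp2": "node1"}
lp_rollbacks = {"lp0": 310, "lp1": 45, "lp2": 12}   # lp0 is rolling back frequently
print(plan_migration(host_load, lp_host, lp_rollbacks))
# -> ('lp0', 'node0', 'node1'): migrate the rollback-heavy process to the idler host
```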
40

"Simulação distribuída utilizando protocolos independentes e troca dinâmica nos processos lógicos" / Distributed Simulation using independent protocols and dynamical change in logical processes

Celia Leiko Ogawa Kawabata 26 September 2005
This thesis presents a performance evaluation of distributed simulations at execution time. Based on the results of this evaluation, a mechanism is proposed in which different synchronization protocols coexist in the same simulation. The mechanism aims to match the running simulation to the most suitable synchronization protocol in order to achieve better performance and, consequently, faster results. All the modifications required in the protocols and the definition of the messages exchanged between logical processes are detailed in this work. The thesis also presents the results of tests carried out to identify the cases in which it is better to keep the conservative protocol and those in which a protocol switch should be considered; the results show at which point the switch becomes worthwhile. Different approaches can be used to evaluate simulation performance, considering each logical process individually or all processes globally, and, analogously, the protocol switch can be performed locally or globally. These considerations lead to a taxonomy for protocol switching, which is also presented in this thesis.
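A toy decision rule in the spirit of this mechanism (thresholds and field names are assumptions, not taken from the thesis): each logical process's runtime metrics determine whether it should keep its current synchronization protocol or switch.

```python
def choose_protocol(stats, rollback_limit=0.30, blocked_limit=0.40):
    """Pick a synchronization protocol for one logical process from runtime metrics:
    frequent rollbacks argue for the conservative protocol, long blocking waits for
    the optimistic one.  Thresholds are illustrative only."""
    rollback_rate = stats["rollbacks"] / max(stats["events_processed"], 1)
    blocked_fraction = stats["time_blocked"] / max(stats["wall_time"], 1e-9)
    if stats["protocol"] == "optimistic" and rollback_rate > rollback_limit:
        return "conservative"
    if stats["protocol"] == "conservative" and blocked_fraction > blocked_limit:
        return "optimistic"
    return stats["protocol"]   # no change

lp_stats = {"protocol": "optimistic", "events_processed": 10_000,
            "rollbacks": 4_200, "time_blocked": 0.0, "wall_time": 12.5}
print(choose_protocol(lp_stats))   # -> 'conservative': too much rolled-back work
```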
