11

Algorithms for efficiently and effectively matching agents in microsimulations of sexually transmitted infections

Geffen, Nathan 01 January 2018 (has links)
Mathematical models of the HIV epidemic have been used to estimate incidence, prevalence and life-expectancy, as well as the benefits and costs of public health interventions, such as the provision of antiretroviral treatment. Models of sexually transmitted infection epidemics attempt to account for varying levels of risk across a population based on diverse, or heterogeneous, sexual behaviour. Microsimulations are a type of model that can account for fine-grained heterogeneous sexual behaviour. This requires pairing individuals, or agents, into sexual partnerships whose distribution matches that of the population being studied, to the extent this is known. But pair-matching is computationally expensive, so there is a need for computer algorithms that pair-match quickly. In this work we describe the role of modelling in responses to the South African HIV epidemic. We also chronicle a three-decade debate, greatly influenced since 2008 by a mathematical model, on the optimal time for people with HIV to start antiretroviral treatment. We then present and analyse several pair-matching algorithms, and compare them in a microsimulation of a fictitious STI. We find that there are algorithms, such as Cluster Shuffle Pair-Matching, that offer a good compromise between speed and fidelity to the distribution of sexual relationships in the study population. An interesting further finding is that infection incidence decreases as population increases, all other things being equal. Whether this is an artefact of our methodology or a natural-world phenomenon is unclear and is a topic for further research.
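As a rough illustration of the pair-matching idea, the sketch below implements a cluster-shuffle-style matcher in Python. It is a minimal reconstruction from the description above, not the thesis's actual algorithm: the cluster count, the risk attribute, and the adjacent-pairing step are all assumptions made for the example.

```python
import random

def cluster_shuffle_pair_match(agents, risk_key, clusters=50):
    """Pair agents into partnerships while roughly preserving assortativity.

    Sort agents by a risk attribute, split the sorted list into clusters,
    shuffle within each cluster, then pair adjacent agents. The sort keeps
    partners similar in risk; the shuffle keeps the matching fast and varied.
    """
    pool = sorted(agents, key=risk_key)
    size = max(1, len(pool) // clusters)
    for start in range(0, len(pool), size):
        chunk = pool[start:start + size]
        random.shuffle(chunk)
        pool[start:start + size] = chunk
    # Pair adjacent agents; an odd agent out stays unpaired this round.
    return [(pool[i], pool[i + 1]) for i in range(0, len(pool) - 1, 2)]

# Example: agents as (id, risk) tuples.
agents = [(i, random.random()) for i in range(1000)]
pairs = cluster_shuffle_pair_match(agents, risk_key=lambda a: a[1])
```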
12

Specification and Verification of Systems Using Model Checking and Markov Reward Models

Lifson, Farrel 01 May 2004 (has links)
The importance of service level management has come to the fore in recent years as computing power becomes more and more of a commodity. In order to present a consistently high quality of service, systems must be rigorously analysed, even before implementation, and monitored to ensure these goals can be achieved. The tools and algorithms of performability analysis offer a potentially ideal method to formally specify and analyse performance and reliability models. This thesis examines Markov reward models, a formalism based on continuous time Markov chains, and their use in the generation and analysis of service levels. The particular solution technique we employ in this thesis is model checking, using Continuous Reward Logic as a means to specify requirements and constraints on the model. We survey the current tools available for model checking Markov reward models. Specifically, we extended the Erlangen-Twente Markov Chain Checker to solve Markov reward models by taking advantage of the duality theorem of Continuous Stochastic Reward Logic, of which Continuous Reward Logic is a sub-logic. We are also concerned with the specification techniques available for Markov reward models, which have in the past merely been extensions of the specification techniques for continuous time Markov chains. We implement a production rule system in Ruby, a high-level language, and show the advantages gained by using its native interpreter and language features to cut down on implementation time and code size. The limitations inherent in Markov reward models are discussed, and we focus on the issue of zero-reward states. Previous algorithms used to remove zero-reward states, while preserving the numerical properties of the model, could potentially alter its logical properties. We propose algorithms based on analysing the Continuous Reward Logic requirement beforehand to determine whether a zero-reward state can be removed safely, as well as an approach based on substitution of zero-reward states. We also investigate limitations on multiple reward structures and the ability to solve for both time and reward. Finally, we perform a case study on a Beowulf parallel computing cluster using Markov reward models and the ETMCC tool, demonstrating their usefulness in performability analysis and in determining the service levels the cluster can offer its users.
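To make the formalism concrete, here is a minimal Markov reward model in Python: a CTMC generator matrix paired with a state reward vector, with the expected accumulated reward obtained by quadrature over the transient distribution. This is a toy illustration of the modelling objects involved, not the ETMCC/CSRL machinery the thesis builds on; the three-state availability example and its rates are invented.

```python
import numpy as np
from scipy.linalg import expm

# A Markov reward model: CTMC generator Q plus a per-state reward rate r.
# Hypothetical 3-state example (up / degraded / down), rates per hour.
Q = np.array([[-0.2, 0.15, 0.05],
              [ 0.3, -0.4, 0.10],
              [ 1.0,  0.0, -1.0]])
r = np.array([1.0, 0.5, 0.0])     # reward earned per unit time in each state
pi0 = np.array([1.0, 0.0, 0.0])   # start in the fully-up state

def expected_reward_rate(t):
    """Instantaneous expected reward rate at time t: pi(t) . r,
    with pi(t) = pi(0) exp(Qt)."""
    return pi0 @ expm(Q * t) @ r

def expected_accumulated_reward(t, steps=1000):
    """E[Y(t)], reward accumulated up to time t, by numerical quadrature."""
    ts = np.linspace(0.0, t, steps)
    return np.trapz([expected_reward_rate(s) for s in ts], ts)

print(expected_accumulated_reward(10.0))
```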
13

Model Driven Communication Protocol Engineering and Simulation based Performance Analysis using UML 2.0

de Wet, Nico 01 January 2005 (has links)
The automated functional and performance analysis of communication systems specified with some Formal Description Technique (FDT) has long been a goal of telecommunication engineers. In the past, SDL and Petri nets have been the most popular FDTs for the purpose. With the growth in popularity of UML, the obvious question is whether one can translate one or more UML diagrams describing a system into a performance model. Until the advent of UML 2.0 that was an impossible task, since the semantics were not clear. Even though the UML semantics are still not entirely clear for the purpose, with UML 2.0 now released, and using ITU recommendation Z.109, we describe in this dissertation a methodology and tool called proSPEX (protocol Software Performance Engineering using XMI) for the design and performance analysis of communication protocols specified with UML. Our first consideration in developing the methodology was to identify the roles of UML 2.0 diagrams in the performance modelling process. In addition, we considered questions regarding the specification of non-functional duration constraints, or temporal aspects. We developed a semantic time model that addresses the language's lack of means for specifying communication delay and processing times. Environmental characteristics such as channel bandwidth and buffer space can be specified, and realistic assumptions are made regarding time and signal transfer. With proSPEX we aimed to integrate a commercial UML 2.0 model editing tool with a discrete-event simulation library. Such an approach has been advocated as necessary for a closer integration of performance engineering with formal design and implementation methodologies. To realize the integration we first identified a suitable simulation library and then extended it with features required to represent high-level SDL abstractions, such as extended finite state machines (EFSM) and signal addressing. In implementing proSPEX we filtered the XML output of our editor and used text templates for code generation. The filtering of the XML output and the need to extend our simulation library with EFSM abstractions proved to be significant implementation challenges. Lastly, to illustrate the utility of proSPEX, we conducted a performance analysis case study in which the efficient short remote operations (ESRO) protocol is used in a wireless e-commerce scenario.
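The kind of EFSM-plus-signal-addressing abstraction described above can be sketched in a few lines of Python. The event kernel and transition-table layout below are hypothetical stand-ins for the extended simulation library, intended only to show how signals are timestamped, addressed to a named machine, and dispatched against that machine's current control state.

```python
import heapq
from collections import defaultdict
from itertools import count

class Simulator:
    """A tiny discrete-event kernel with signal addressing between machines."""
    def __init__(self):
        self.clock, self.queue, self.machines = 0.0, [], {}
        self._seq = count()  # tie-breaker so heap entries never compare payloads

    def send(self, delay, dest, signal, data=None):
        heapq.heappush(self.queue,
                       (self.clock + delay, next(self._seq), dest, signal, data))

    def run(self, until=float("inf")):
        while self.queue and self.queue[0][0] <= until:
            self.clock, _, dest, signal, data = heapq.heappop(self.queue)
            self.machines[dest].dispatch(self, signal, data)

class EFSM:
    """Extended finite state machine: control state, variables, transitions."""
    def __init__(self, initial):
        self.state, self.vars = initial, {}
        self.table = defaultdict(dict)   # table[state][signal] -> handler

    def on(self, state, signal, handler):
        self.table[state][signal] = handler

    def dispatch(self, sim, signal, data):
        handler = self.table[self.state].get(signal)
        if handler:                      # handlers return the next state
            self.state = handler(self, sim, data)

# Usage: a requester fires a signal at a responder after 1.5 time units.
sim = Simulator()
responder, requester = EFSM("idle"), EFSM("waiting")

def on_request(machine, s, data):
    s.send(0.3, "requester", "RESPONSE", data.upper())
    return "idle"

responder.on("idle", "REQUEST", on_request)
requester.on("waiting", "RESPONSE",
             lambda m, s, d: print(f"t={s.clock:.1f}: {d}") or "done")
sim.machines.update(requester=requester, responder=responder)
sim.send(1.5, "responder", "REQUEST", "ping")
sim.run()
```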
14

A Parallel Multidimensional Weighted Histogram Analysis Method

Potgieter, Andrew 01 January 2014 (has links)
The Weighted Histogram Analysis Method (WHAM) is a technique used to calculate free energy from molecular simulation data. WHAM recombines biased distributions of samples from multiple umbrella sampling simulations to yield an estimate of the global unbiased distribution. The WHAM algorithm iterates two coupled, non-linear equations until convergence at an acceptable level of accuracy. The equations have quadratic time complexity for a single reaction coordinate, but this increases exponentially with the number of reaction coordinates under investigation, which makes multidimensional WHAM a computationally expensive procedure. There is potential to use general purpose graphics processing units (GPGPUs) to accelerate the execution of the algorithm. Here we develop and evaluate a multidimensional GPGPU WHAM implementation to investigate the potential speed-up over its CPU counterpart. In addition, to avoid the cost of multiple molecular dynamics simulations, and to validate the implementations, we develop a test system that generates samples analogous to umbrella sampling simulations. We observe a maximum, problem-size-dependent speed-up of approximately 19× for the GPGPU-optimized WHAM implementation over our single-threaded, CPU-optimized version. We find that the WHAM algorithm is amenable to GPU acceleration, which provides the means to study ever more complex molecular systems in reduced time.
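For reference, the coupled WHAM equations being iterated look like the following single-reaction-coordinate sketch in Python/NumPy (a minimal serial version with assumed array shapes; the thesis's GPGPU implementation and test system are not reproduced here). The per-iteration cost is windows × bins, which is what grows so quickly as additional reaction coordinates multiply the bin count.

```python
import numpy as np

def wham_1d(hist, bias, beta=1.0, tol=1e-7, max_iter=10000):
    """Iterate the two coupled WHAM equations for one reaction coordinate.

    hist[i, j] : sample counts from umbrella window i in histogram bin j
    bias[i, j] : biasing potential of window i evaluated at bin j
    Returns the unbiased bin probabilities and the window free energies.
    """
    n_win, n_bins = hist.shape
    n_samples = hist.sum(axis=1)       # N_i, total samples per window
    numer = hist.sum(axis=0)           # total counts per bin (fixed)
    f = np.zeros(n_win)                # free-energy shift per window
    for _ in range(max_iter):
        # Equation 1: global unbiased distribution combining all windows.
        denom = (n_samples[:, None]
                 * np.exp(beta * (f[:, None] - bias))).sum(axis=0)
        p = numer / denom
        p /= p.sum()
        # Equation 2: self-consistent update of the window free energies.
        f_new = -np.log((p[None, :] * np.exp(-beta * bias)).sum(axis=1)) / beta
        if np.max(np.abs(f_new - f)) < tol:
            break
        f = f_new
    return p, f_new
```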
15

Real-time Generation of Procedural Forests

Kenwood, Julian 01 January 2014 (has links)
The creation of 3D models for games and simulations is generally a time-consuming and labour-intensive task. Forested landscapes are an important component of many large virtual environments in games and film, but creating the many individual tree models required for forests demands a large number of artists and a great deal of time. To reduce modelling time, procedural methods are often used. Such methods allow tree models to be created automatically and relatively quickly, albeit at potentially reduced quality; and although the process is faster than manual creation, it can still be slow and resource-intensive for large forests. The main contribution of this work is the development of an efficient procedural generation system for creating large forests. Our system uses L-systems, a grammar-based procedural technique, to generate each tree. We explore two approaches to accelerating the creation of large forests. First, we demonstrate performance improvements in the creation of individual trees by reducing the computation required by the underlying L-systems. Second, we reduce the memory overhead by sharing geometry between trees using a novel branch-instancing approach. Test results show that our scheme significantly improves the speed of forest generation over naive methods: our system is able to generate over 100,000 trees in approximately 2 seconds while using a modest amount of memory. With respect to improving L-system processing, one of our methods achieves a 25% speed-up over traditional methods at the cost of a small amount of additional memory, while the second achieves a 99% reduction in memory at the expense of a small amount of extra processing.
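As a flavour of the underlying technique, a bracketed L-system is just iterated string rewriting. The Python sketch below uses an invented two-rule grammar, plus a memoised variant that computes each (symbol, depth) expansion only once — loosely analogous in spirit to the branch-sharing idea, though the thesis instances geometry rather than strings.

```python
from functools import lru_cache

# Hypothetical bracketed L-system: F draws a segment, [ and ] push and pop
# the turtle state, + and - turn. Illustrative only, not the thesis's grammar.
RULES = {"F": "FF", "X": "F[+X][-X]FX"}

def expand(axiom, depth):
    """Naive expansion: rewrite every symbol of the whole string each pass."""
    s = axiom
    for _ in range(depth):
        s = "".join(RULES.get(c, c) for c in s)
    return s

@lru_cache(maxsize=None)
def expand_memo(symbol, depth):
    """Memoised per-symbol expansion: identical sub-branches are expanded
    once and reused rather than recomputed for every occurrence."""
    if depth == 0:
        return symbol
    return "".join(expand_memo(c, depth - 1) for c in RULES.get(symbol, symbol))

assert expand("X", 5) == expand_memo("X", 5)
```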
16

Reinforcement Learning with Recurrent Neural Networks

Schäfer, Anton Maximilian 20 November 2008 (has links)
Controlling a high-dimensional dynamical system with continuous state and action spaces in a partially unknown environment, such as a gas turbine, is a challenging problem. So far, hard-coded rules based on experts' knowledge and experience are often used, and machine learning techniques, which comprise the field of reinforcement learning, are generally applied only to sub-problems. One reason is that most standard RL approaches still fail to produce satisfactory results in such complex environments. Besides, they are rarely data-efficient, which is crucial for most real-world applications, where the available amount of data is limited. In this thesis, recurrent neural reinforcement learning approaches to identify and control dynamical systems in discrete time are presented. They form a novel connection between recurrent neural networks (RNN) and reinforcement learning (RL) techniques. RNN are used because they allow the identification of dynamical systems in the form of high-dimensional, non-linear state space models, and they have been shown to be very data-efficient. In addition, a proof is given of their universal approximation capability for open dynamical systems. Moreover, it is pointed out that, in contrast to an often-cited statement, they are well able to capture long-term dependencies. As a first step towards reinforcement learning, it is shown that RNN can map and reconstruct (partially observable) MDPs well. In the so-called hybrid RNN approach, the resulting inner state of the network is then used as a basis for standard RL algorithms. The further-developed recurrent control neural network combines system identification and determination of an optimal policy in one network. In contrast to most RL methods, it determines the optimal policy directly, without making use of a value function. The methods are tested on several standard benchmark problems. In addition, they are applied to different kinds of gas turbine simulations of industrial scale.
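To show what identifying a dynamical system as a high-dimensional, non-linear state space model looks like in code, here is a hedged PyTorch sketch. PyTorch stands in for whatever framework the thesis used; the dimensions and the stand-in trajectories are placeholders, and the fitting loop is plain regression on observed input/output pairs, not the thesis's full RL machinery.

```python
import torch
import torch.nn as nn

class StateSpaceRNN(nn.Module):
    """s_{t+1} = tanh(A s_t + B u_t),  y_t = C s_t  -- the kind of
    non-linear state space model an RNN realises."""
    def __init__(self, n_in, n_state, n_out):
        super().__init__()
        self.cell = nn.RNNCell(n_in, n_state)                # tanh(A s + B u)
        self.readout = nn.Linear(n_state, n_out, bias=False)  # C

    def forward(self, u):                       # u: (time, batch, n_in)
        s = torch.zeros(u.shape[1], self.cell.hidden_size)
        ys = []
        for u_t in u:                           # unroll through time
            s = self.cell(u_t, s)
            ys.append(self.readout(s))
        return torch.stack(ys)

# System identification: fit the model to observed (u, y) trajectories.
model = StateSpaceRNN(n_in=3, n_state=32, n_out=2)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
u = torch.randn(100, 16, 3)                     # stand-in input trajectories
y = torch.randn(100, 16, 2)                     # stand-in observed outputs
for _ in range(200):
    opt.zero_grad()
    loss = ((model(u) - y) ** 2).mean()
    loss.backward()
    opt.step()
```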
17

Modelling human behaviour in social dilemmas using attributes and heuristics

Ebenhöh, Eva 16 October 2007 (has links)
A question concerning not only modellers but also practitioners is: under what circumstances can mutual cooperation be established and maintained by a group of people facing a common-pool dilemma? Before this question of institutional influences can be addressed, a different way of modelling human behaviour is needed, one that does not draw on the rational actor paradigm, because such modelling must be able to integrate the various deviations from that theory shown in economic experiments. We have chosen a new approach based on laboratory and field observations of actual human behaviour. We model human decision making as using an adaptive toolbox, following the notion of Gigerenzer: humans draw on a number of simple heuristics that are meaningful in a certain situation but may be useless in another. This is incorporated into our agent-based model by having agents perceive their environment, draw on a pool of heuristics to choose an appropriate one, and use that heuristic. Behavioural differences can be incorporated in two ways. First, each agent has a number of attributes that differ in value; for example, there are more and less cooperative agents. The second behavioural difference lies in the way in which heuristics are chosen. With this modelling approach we contribute to a new way of modelling human behaviour that is simple enough to be included in more complex models while at the same time realistic enough to cover actual human decision-making processes. Modellers should be able to use this approach without needing to go deep into psychological, sociological or economic theory. Stakeholders in social dilemmas who may be confronted with such a model should understand why an agent decides the way it does.
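As an illustration of the attribute-plus-heuristic structure, the Python sketch below gives agents a cooperativeness attribute and a two-heuristic toolbox for a common-pool extraction round. The specific heuristics, the attribute scaling, and the regrowth rate are invented for the example, not taken from the thesis model.

```python
import random

class Agent:
    """An agent with a behavioural attribute and a small toolbox of
    heuristics, loosely following the adaptive-toolbox notion."""
    def __init__(self, cooperativeness):
        self.cooperativeness = cooperativeness  # attribute: varies per agent

    def decide_extraction(self, pool, last_round_mean, group_size):
        fair_share = pool / group_size
        # Perception: if others restrained themselves, reciprocate by
        # imitation; otherwise fall back on a fair-share rule of thumb.
        if last_round_mean is not None and last_round_mean <= fair_share:
            take = last_round_mean              # imitation heuristic
        else:
            take = fair_share                   # fair-share heuristic
        # Attribute: less cooperative agents skim a little more.
        return take * (2.0 - self.cooperativeness)

agents = [Agent(random.uniform(0.5, 1.0)) for _ in range(8)]
pool, mean = 100.0, None
for round_no in range(5):
    takes = [a.decide_extraction(pool, mean, len(agents)) for a in agents]
    mean = sum(takes) / len(takes)
    pool = max(0.0, pool - sum(takes)) * 1.2    # resource regrows by 20%
    print(f"round {round_no}: extracted {sum(takes):.1f}, pool now {pool:.1f}")
```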
