51 |
Statistical Inference Utilizing Agent Based Models. Heard, Daniel Philip, January 2014 (has links)
Agent-based models (ABMs) are computational models used to simulate the behaviors, actions and interactions of agents within a system. The individual agents each have their own set of assigned attributes and rules, which determine their behavior within the ABM system. These rules can be deterministic or probabilistic, allowing for a great deal of flexibility. ABMs allow us to observe how the behaviors of the individual agents affect the system as a whole and whether any emergent structure develops within the system. Examining rule sets in conjunction with the corresponding emergent structure shows how small-scale changes can affect large-scale outcomes within the system. Thus, we can better understand and predict the development and evolution of systems of interest. ABMs have become ubiquitous---they are used in business (virtual auctions to select electronic ads for display), atmospheric science (weather forecasting), and public health (to model epidemics). But there is limited understanding of the statistical properties of ABMs. Specifically, there are no formal procedures for calculating confidence intervals on predictions, nor for assessing goodness-of-fit, nor for testing whether a specific parameter (rule) is needed in an ABM. Motivated by important challenges of this sort, this dissertation focuses on developing methodology for uncertainty quantification and statistical inference in a likelihood-free context for ABMs. Chapter 2 of the thesis develops theory related to ABMs, including procedures for model validation, assessing model equivalence and measuring model complexity. Chapters 3 and 4 of the thesis focus on two approaches for performing likelihood-free inference involving ABMs, which is necessary because of the intractability of the likelihood function due to the variety of input rules and the complexity of outputs. Chapter 3 explores the use of Gaussian process emulators in conjunction with ABMs to perform statistical inference. This draws upon a wealth of research on emulators, which find smooth functions on lower-dimensional Euclidean spaces that approximate the ABM. Emulator methods combine observed data with output from ABM simulations, using these to fit and calibrate Gaussian-process approximations. Chapter 4 discusses Approximate Bayesian Computation (ABC) for ABM inference, the goal of which is to obtain an approximation of the posterior distribution of a set of parameters given observed data. The final chapters of the thesis demonstrate the approaches for inference in two applications. Chapter 5 presents an application that models the spread of HIV based on detailed data on a social network of men who have sex with men (MSM) in southern India. Use of an ABM allows us to determine which social, economic and policy factors contribute to the transmission of the disease. We aim to estimate the effect that proposed medical interventions will have on the spread of HIV in this community. Chapter 6 examines the function of a heroin market in the Denver, Colorado metropolitan area. Extending an ABM developed from ethnographic research, we explore a procedure for reducing the model, as well as estimating posterior distributions of important quantities based on simulations. / Dissertation
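The likelihood-free approach of Chapter 4 can be illustrated with a minimal ABC rejection loop; the sketch below is a generic Python illustration, not the dissertation's actual procedure, and the toy simulator, summary statistic, prior, and tolerance are all placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_abm(theta, n_agents=200, n_steps=50):
    """Placeholder ABM: each agent adopts a behavior with probability
    theta * (current adoption fraction) + 0.01 per step; the final
    adoption count serves as a crude summary statistic."""
    adopted = np.zeros(n_agents, dtype=bool)
    for _ in range(n_steps):
        adopted |= rng.random(n_agents) < theta * adopted.mean() + 0.01
    return adopted.sum()

def abc_rejection(observed_summary, n_draws=5000, tol=10):
    """ABC rejection: draw theta from the prior, simulate, and keep draws
    whose simulated summary is within `tol` of the observed summary."""
    kept = []
    for _ in range(n_draws):
        theta = rng.uniform(0.0, 1.0)          # assumed Uniform(0,1) prior
        if abs(simulate_abm(theta) - observed_summary) <= tol:
            kept.append(theta)
    return np.array(kept)                      # approximate posterior sample

posterior = abc_rejection(observed_summary=120)
print(posterior.mean(), posterior.std())
```

In practice the summary statistics, distance, and tolerance schedule would be tailored to the ABM output, which is where most of the methodological work lies.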
|
52 |
MULTI-SCALE DYNAMICS OF MECHANICAL SYSTEMS WITH FRICTION. Sepehri, Ali, 01 December 2010 (has links)
Contact between rough surfaces occurs in numerous engineering systems, and in many instances the interaction between the rough surfaces affects the macro behavior of the system. Effective treatment of systems containing rough-surface contact requires a multiscale modeling and analysis approach. The goal of this research is to develop simple methods for treating contact of rough surfaces so as to facilitate multiscale analysis of systems containing rough-surface contact and friction. This dissertation considers a multi-scale approach that includes interactions at the nano-scale and micron-scale and accounts for their cumulative effect as what we normally perceive to be the influence of contact surfaces and friction. In linking each scale to a higher scale, this study employs statistical means to obtain the cumulative effect of smaller-scale features. A mixed interactive/optimization technique is used to derive, in approximate closed form, equations for the dependence of the contact load and real area of contact on approach and the parameters of the rough surfaces. The equations so derived relate the normal and tangential components of contact load to displacement and surface parameters for three types of contact. Contact interactions exhibiting elastic, elastic-plastic, visco-elastic, and visco-elasto-adhesive behavior are considered, and equations relating the normal and tangential contact load to approach and relative sliding are obtained in approximate closed form. The approximate equations provide a tool for efficient calculation of contact force components, especially in surface optimization efforts where repetitive calculation of contact force components may be needed. The approximate equations also facilitate a multi-scale dynamic analysis wherein the effect of contact interaction can be readily included in a mechanical system model. Several dynamical problems involving mechanical systems with friction contact are presented, and nonlinear dynamic analyses are employed to link the micron-scale properties of the surface to the macro-scale properties of the mechanical system. These lead to, perhaps, the first derivation of contact frequency and damping in rough-surface contact.
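As a point of reference for how asperity-scale behavior is aggregated statistically, a textbook Greenwood-Williamson-style elastic contact integral is shown below; this is only a generic illustration of the statistical-summation idea, not the approximate closed-form equations derived in the dissertation.

```latex
% Greenwood--Williamson-style elastic rough-surface contact (illustrative only):
% N asperities of radius R with summit heights z distributed as phi(z);
% d is the surface approach, E^* the composite elastic modulus.
\begin{align}
  A_r(d) &= N \pi R \int_{d}^{\infty} (z - d)\,\phi(z)\,\mathrm{d}z \\
  P(d)   &= \tfrac{4}{3} N E^{*} R^{1/2} \int_{d}^{\infty} (z - d)^{3/2}\,\phi(z)\,\mathrm{d}z
\end{align}
```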
|
53 |
Tail distribution of the sums of regularly varying random variables, computations and simulations. Nguyen, Quang Huy, 03 November 2014 (has links)
This thesis is concerned with the use of numerical techniques based on series approximations, and of simulation techniques, to approximate the tail distribution of sums of regularly varying random variables. Computing the probability that the sum exceeds a given threshold is important in risk management; in particular, this computation is used to define the capital requirements of insurance companies and other financial institutions. The first chapter is the introduction of the thesis. It explains the main results and presents the mathematical tools developed in the thesis. The second chapter is based on the work "Series expansions for the sum of the independent Pareto random variables", written with Professor Christian ROBERT, the thesis supervisor, and submitted for publication. It proposes a computational algorithm for determining the tail distribution of a sum of Pareto-type random variables that are not necessarily identically distributed, in the form of a series expansion of the survival function of the sum. The algorithm used to compute the approximation is simple, easy to implement, and gives very good numerical results. The third chapter of the thesis is based on the article "New efficient estimators in rare event simulation with heavy tails", published in the Journal of Computational and Applied Mathematics and co-written with Professor Christian ROBERT. It is concerned with approximating by simulation the probability that the sum of independent regularly varying random variables exceeds a high threshold. Efficient estimators have already been introduced in the rare-event simulation literature; we propose new simulation techniques that are more efficient than the previously proposed methods. The fourth chapter continues the analysis of rare-event simulation for events of the type "the sum exceeds a threshold", but this time for situations in which the random variables are dependent. It focuses on the case in which the dependence is given by an Archimedean copula. This chapter is based on the article under review "Efficient simulation of tail probabilities of sums with heavy tailed random variables and Archimedean copulas". Asymptotic equivalents of the exceedance probability are known only in particular cases and generally provide very poor approximations of the true value, so simulation techniques are valuable for quickly obtaining accurate approximations. We propose four estimators and four associated simulation techniques, and show that the relative errors are asymptotically bounded for almost all of the estimators. Simulations show that some estimators are more accurate than others. / This thesis aims to study computation and simulation methods to approximate the tail distribution of sums of regularly varying random variables. It proceeds as follows. The first chapter provides the general introduction of the thesis. The second chapter is essentially constituted by the article "Series expansions for the sum of the independent Pareto random variables", co-written with Professor Christian ROBERT and currently submitted for publication. It deals with the problem of estimating the tail distribution of the sum of independent Pareto variables. This problem has been studied for a long time, but a complete solution has not yet been found. In this chapter, we obtain an exact formula, a series expansion, for the distribution of the sum of independent Pareto variables with non-integer tail indices. Not only is this formula simple and easy to apply, but it also gives better numerical results than most existing methods. The third chapter rests on the article "New efficient estimators in rare event simulation with heavy tails", co-written with Professor Christian ROBERT and published in the Journal of Computational and Applied Mathematics 261, 39-47, in 2013. Efficient estimation of the tail distribution of the sum of i.i.d. regularly varying random variables is one of the most widely researched problems in rare event simulation. In this context, Asmussen and Kroese's estimator has performed better than other proposals. This chapter introduces a new way to approach the sum; the resulting estimator is more efficient than Asmussen and Kroese's estimator in the regularly varying case, and, combined with conditional Monte Carlo and importance sampling techniques, it remains better in other cases. In the fourth chapter, we continue to study the tail behavior of the sum of regularly varying variables, with the additional assumption that the dependence follows an Archimedean copula or an Archimedean survival copula. This chapter hinges on the article "Efficient simulation of tail probabilities of sums with heavy tailed random variables and Archimedean copulas", which is under consideration for publication. Almost all previous studies of this problem used asymptotic approaches in which the errors are hard to control; we therefore present simulation techniques to calculate the tail probability of the sum. Although only some of our estimators have bounded relative errors, all of them give favorable numerical performance on such a challenging problem.
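A short sketch of the classical Asmussen-Kroese conditional Monte Carlo estimator mentioned in the third chapter is given below, for i.i.d. Pareto summands; the tail index, threshold, and sample sizes are illustrative assumptions, and this is the baseline estimator rather than the improved estimators proposed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

def pareto_tail(x, alpha=1.5):
    """Survival function of a Pareto(alpha) variable supported on [1, inf)."""
    return np.where(x <= 1.0, 1.0, x ** (-alpha))

def asmussen_kroese(n=10, u=100.0, alpha=1.5, n_rep=100_000):
    """Classical Asmussen-Kroese estimator of P(X_1 + ... + X_n > u)
    for i.i.d. Pareto(alpha) summands (illustrative parameters):
    Z = n * F_bar( max(M_{n-1}, u - S_{n-1}) )."""
    # Draw (n-1) summands per replication; Pareto(alpha) via inverse transform.
    x = rng.random((n_rep, n - 1)) ** (-1.0 / alpha)
    s = x.sum(axis=1)                      # S_{n-1}
    m = x.max(axis=1)                      # M_{n-1}
    z = n * pareto_tail(np.maximum(m, u - s), alpha)
    return z.mean(), z.std(ddof=1) / np.sqrt(n_rep)

est, se = asmussen_kroese()
print(f"P(S_n > u) ~ {est:.3e} (s.e. {se:.1e})")
```

The estimator conditions on the largest summand being the last one, which is what keeps its relative error under control for heavy-tailed summands.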
|
54 |
Memristive Probabilistic Computing. Alahmadi, Hamzah, 10 1900 (has links)
In the era of the Internet of Things and Big Data, unconventional techniques are rising to accommodate the large size of data and the resource constraints. New computing structures are advancing based on non-volatile memory technologies and different processing paradigms. Additionally, the intrinsic resiliency of current applications leads to the development of creative techniques in computation. In those applications, approximate computing provides a perfect fit to optimize energy efficiency while compromising on accuracy. In this work, we build probabilistic adders based on stochastic memristors. The probabilistic adders are analyzed with respect to the stochastic behavior of the underlying memristors. Multiple adder implementations are investigated and compared. The memristive probabilistic adder provides a different approach from the typical approximate CMOS adders. Furthermore, it allows for high area savings and design flexibility between performance and power saving. To reach a performance level similar to approximate CMOS adders, the memristive adder achieves 60% power savings. An image-compression application is investigated using the memristive probabilistic adders to illustrate the performance and energy trade-off.
|
55 |
A Simulation Based Approximate Dynamic Programming Approach to Multi-class, Multi-resource Surgical Scheduling. Astaraky, Davood, January 2013 (has links)
The thesis focuses on a model that seeks to address the patient scheduling step of the surgical scheduling process, determining the number of surgeries to perform on a given day. Specifically, given a master schedule that provides a cyclic breakdown of total OR availability into specific daily allocations to each surgical specialty, we look to provide a scheduling policy for all surgeries that minimizes a combination of the lead time between patient request and surgery date, overtime in the ORs, and congestion in the wards. We cast the problem of generating optimal control strategies into the framework of a Markov Decision Process (MDP). An Approximate Dynamic Programming (ADP) approach is employed to solve the model, which would otherwise be intractable due to the size of the state space. We assess the performance and quality of the resulting policy through simulation, and we provide policy insights and conclusions.
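As a rough illustration of simulation-based ADP of the kind described, the sketch below runs approximate value iteration with a linear value-function approximation over a toy scheduling state (one waiting-list length per specialty); the state, features, costs, and transition model are all invented assumptions, not the thesis's model.

```python
import numpy as np

rng = np.random.default_rng(2)
N_SPEC, CAPACITY, GAMMA = 3, 6, 0.95       # assumed toy problem sizes

def features(state):
    """Linear value-function basis: waiting-list lengths and their squares."""
    s = np.asarray(state, dtype=float)
    return np.concatenate(([1.0], s, s ** 2))

def step(state, action):
    """Serve `action[i]` patients of specialty i, then random arrivals."""
    served = np.minimum(state, action)
    cost = state.sum() + 2.0 * max(action.sum() - CAPACITY, 0)   # waiting + overtime
    next_state = state - served + rng.poisson(1.5, size=N_SPEC)
    return cost, next_state

def greedy_action(state, w):
    """Pick the allocation (from a small random candidate set) minimizing cost-to-go."""
    candidates = [rng.integers(0, 4, size=N_SPEC) for _ in range(20)]
    def q(a):
        c, s_next = step(state, a)
        return c + GAMMA * w @ features(s_next)
    return min(candidates, key=q)

# Approximate value iteration via simulated transitions and least squares.
w = np.zeros(1 + 2 * N_SPEC)
for sweep in range(30):
    X, y = [], []
    for _ in range(200):
        s = rng.poisson(3.0, size=N_SPEC)
        a = greedy_action(s, w)
        c, s_next = step(s, a)
        X.append(features(s))
        y.append(c + GAMMA * w @ features(s_next))
    w, *_ = np.linalg.lstsq(np.array(X), np.array(y), rcond=None)
print(w)
```

The fitted weights define an approximate cost-to-go, and the greedy policy with respect to it is the kind of scheduling policy whose performance would then be assessed by simulation.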
|
56 |
Application of analogue techniques to the solution of problems in optimal control. Wiklund, Eric Charles, January 1965 (has links)
The thesis is concerned with techniques for realizing optimum control that are suited for analogue computers. The first half of the thesis develops an iterative scheme for the solution of the two point boundary value problem. The theory of the iterative scheme is covered in detail and the scheme is implemented on an analogue computer. Studies of the scheme have also been made using a digital computer. The iterative scheme can be modified to cope with constraints on the control law. These modifications have been tested on a digital computer.
The latter half of the thesis is concerned with approximation techniques which produce very simple controllers. These techniques require a large digital computer, such as the IBM 7040, to do the design calculations. The first approximation technique, developed from the calculus of variations, is covered in detail, including a complete controller that is designed and simulated. The second approximation technique, based on dynamic programming, is discussed and a few points are made about the features of the controller. / Applied Science, Faculty of / Electrical and Computer Engineering, Department of / Graduate
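The iterative scheme for the two-point boundary value problem is not spelled out in the abstract; as a generic illustration of this class of methods, the sketch below applies a steepest-descent iteration to a scalar linear-quadratic optimal control problem, integrating the state forward and the costate backward and then updating the control. The dynamics, cost weights, horizon, and step size are assumptions for illustration only.

```python
import numpy as np

# Scalar LQ problem: minimize J = integral of (q x^2 + r u^2) dt, xdot = a x + b u.
a, b, q, r = -1.0, 1.0, 1.0, 0.1
T, n = 2.0, 200
dt = T / n
x0, alpha = 1.0, 0.5                      # initial state, gradient step size

u = np.zeros(n)                           # initial control guess
for it in range(200):
    # Forward pass: integrate the state with the current control (Euler).
    x = np.empty(n + 1); x[0] = x0
    for k in range(n):
        x[k + 1] = x[k] + dt * (a * x[k] + b * u[k])
    # Backward pass: integrate the costate, lam(T) = 0.
    lam = np.empty(n + 1); lam[-1] = 0.0
    for k in range(n - 1, -1, -1):
        lam[k] = lam[k + 1] + dt * (2.0 * q * x[k + 1] + a * lam[k + 1])
    # Gradient of the Hamiltonian with respect to u, and control update.
    grad = 2.0 * r * u + b * lam[1:]
    u -= alpha * grad

J = dt * np.sum(q * x[:-1] ** 2 + r * u ** 2)
print(f"cost after descent: {J:.4f}")
```

Each iteration is exactly the kind of forward/backward sweep that maps naturally onto an analogue computer, with the gradient update closing the loop.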
|
57 |
Flexible finite automata-based algorithms for detecting microsatellites in DNA. De Ridder, Corne, 17 August 2010 (has links)
Apart from contributing to Computer Science, this research also contributes to Bioinformatics, a subset of the subject discipline Computational Biology. The main focus of this dissertation is the development of a data-analytical and theoretical algorithm to contribute to the analysis of DNA and, in particular, to detect microsatellites. Microsatellites, considered in the context of this dissertation, refer to consecutive patterns contained in genomic sequences. A perfect tandem repeat is defined as a string of nucleotides which is repeated at least twice in a sequence. An approximate tandem repeat is a string of nucleotides repeated consecutively at least twice, with small differences between the instances. The research presented in this dissertation was inspired by molecular biologists who were found to be visually scanning genetic sequences in search of short approximate tandem repeats, or so-called microsatellites. The aim of this dissertation is to present three algorithms that search for short approximate tandem repeats. The algorithms comprise the implementation of finite automata. Thus the hypothesis posed is as follows: finite automata can detect microsatellites effectively in DNA. "Effectively" includes the ability to fine-tune the detection process so that redundant data is avoided and relevant data is not missed during the search. In order to verify whether the hypothesis holds, three theoretically related algorithms have been proposed, based on theorems from finite automaton theory. They are generically referred to as the FireμSat algorithms. These algorithms have been implemented, and the performance of FireμSat2 has been investigated and compared to other software packages. From the results obtained, it is clear that the performance of these algorithms differs in terms of attributes such as speed, memory consumption and extensibility. In respect of speed performance, FireμSat outperformed rival software packages. It will be seen that the FireμSat algorithms have several parameters that can be used to tune their search. It should be emphasized that these parameters have been devised in consultation with the intended user community, in order to enhance the usability of the software. It was found that the parameters of FireμSat can be set to detect more tandem repeats than rival software packages, but also tuned to limit the number of detected tandem repeats. Copyright / Dissertation (MSc)--University of Pretoria, 2010. / Computer Science / unrestricted
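For readers unfamiliar with the terminology, the sketch below is a naive scan for perfect tandem repeats of short motifs in a DNA string; it only illustrates what is being searched for and is unrelated to the finite-automaton machinery of the FireμSat algorithms, and the motif lengths and minimum copy count are arbitrary assumptions.

```python
def perfect_tandem_repeats(seq, motif_lengths=(1, 2, 3), min_copies=2):
    """Naive detector: report (start, motif, copies) for every run in which a
    motif of one of the given lengths repeats at least `min_copies` times."""
    seq = seq.upper()
    hits = []
    for m in motif_lengths:
        i = 0
        while i + m <= len(seq):
            motif = seq[i:i + m]
            copies = 1
            while seq[i + copies * m:i + (copies + 1) * m] == motif:
                copies += 1
            if copies >= min_copies:
                hits.append((i, motif, copies))
                i += copies * m          # skip past the run we just reported
            else:
                i += 1
    return hits

print(perfect_tandem_repeats("ACACACGTTTTAGAGAG"))
# e.g. includes (0, 'AC', 3): the motif 'AC' repeated three times from index 0
```

Approximate tandem repeats relax the equality test between consecutive motif instances, which is precisely the fine-tuning that the FireμSat parameters expose.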
|
58 |
COMPRESSIVE PARAMETER ESTIMATION VIA APPROXIMATE MESSAGE PASSING. Hamzehei, Shermin, 08 April 2020 (has links)
The literature on compressive parameter estimation has been mostly focused on the use of sparsity dictionaries that encode a discretized sampling of the parameter space; these dictionaries, however, suffer from coherence issues that must be controlled for successful estimation. To bypass such issues with discretization, we propose the use of statistical parameter estimation methods within the Approximate Message Passing (AMP) algorithm for signal recovery. Our method leverages the recently proposed use of custom denoisers in place of the usual thresholding steps (which act as denoisers for sparse signals) in AMP. We introduce the design of analog denoisers that are based on statistical parameter estimation algorithms, and we focus on two commonly used examples: frequency estimation and bearing estimation, coupled with the Root MUSIC estimation algorithm. We first analyze the performance of the proposed analog denoiser for signal recovery, and then link the performance in signal estimation to that of parameter estimation. Numerical experiments show significant improvements in estimation performance versus previously proposed approaches for compressive parameter estimation.
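A minimal sketch of the AMP recursion with a pluggable denoiser is shown below; the soft-thresholding denoiser used here is the standard sparse-signal choice that the abstract says is replaced by parameter-estimation denoisers (e.g. one built around Root MUSIC), which are not reproduced. The measurement model and parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def soft_threshold(v, tau):
    """Standard sparse-signal denoiser; a parameter-estimation denoiser
    would be substituted here in the approach described above."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def amp(y, A, n_iter=30):
    """Approximate Message Passing with a pluggable elementwise denoiser."""
    m, n = A.shape
    x, z = np.zeros(n), y.copy()
    for _ in range(n_iter):
        tau = np.sqrt(np.mean(z ** 2))               # simple noise-level estimate
        v = x + A.T @ z                               # pseudo-data fed to the denoiser
        x_new = soft_threshold(v, tau)
        onsager = (z / m) * np.count_nonzero(x_new)   # Onsager correction term
        z = y - A @ x_new + onsager
        x = x_new
    return x

# Toy compressed-sensing problem: sparse x0, Gaussian A (assumed setup).
m, n, k = 120, 400, 10
A = rng.normal(size=(m, n)) / np.sqrt(m)
x0 = np.zeros(n); x0[rng.choice(n, k, replace=False)] = rng.normal(size=k)
y = A @ x0 + 0.01 * rng.normal(size=m)
x_hat = amp(y, A)
print(np.linalg.norm(x_hat - x0) / np.linalg.norm(x0))
```

The key design point is that the denoiser is the only problem-specific component, so replacing it with a statistical parameter estimator turns the same recursion into a compressive parameter-estimation algorithm.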
|
59 |
Fingerprint Processing. Pšenák, Patrik, January 2010 (has links)
My master's thesis deals with the different techniques used in fingerprint processing for identifying fingerprints. Using the Visual C++ software tool and functions of the OpenCV library, I programmed a standalone application that is able to select, from a database of fingerprints, the images most consistent with a query fingerprint, even when the two are mutually shifted along the X and Y axes. The next step in my program is to extract the edges of the fingerprint image; these are obtained using the Canny edge detector. The contours of the image edges are then extracted. To determine whether two contours are the same, it suffices to compare some characteristic points of the contours. Next, I use a histogram function to count the points of the contour approximation and to evaluate the agreement between fingerprints. Since, after processing, the input fingerprint image (or rather the image of its approximated contour points) contains only black points (the background) and red points (the approximated contour points), the zero and the last element of the histogram give the numbers of black and red points, respectively. The comparison is expressed as a percentage and is obtained by subtracting the image of approximated contour points of the matched fingerprint from that of the original fingerprint, which determines what percentage of the red points disappears when the two fingerprint images are matched. If no red point remains in the resulting image, the fingerprints match 100%.
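A rough Python/OpenCV sketch of the pipeline described above (Canny edges, contour extraction, contour approximation, point counting) is given below; the thesis implementation is in Visual C++, so this is only an approximate reconstruction, and the file names, thresholds, and epsilon value are assumptions.

```python
import cv2
import numpy as np

def contour_point_image(path, canny_lo=100, canny_hi=200, eps=2.0):
    """Return a black image with the approximated contour points marked,
    mirroring the black-background / marked-points representation described."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)           # assumed input file
    edges = cv2.Canny(gray, canny_lo, canny_hi)
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    marks = np.zeros_like(gray)
    for cnt in contours:
        approx = cv2.approxPolyDP(cnt, eps, True)            # characteristic points
        for (x, y) in approx.reshape(-1, 2):
            marks[y, x] = 255
    return marks

def match_percentage(img_a, img_b):
    """Share of marked points of A that coincide with marked points of B."""
    a_points = int(np.count_nonzero(img_a))
    if a_points == 0:
        return 0.0
    remaining = np.count_nonzero(cv2.subtract(img_a, img_b))
    return 100.0 * (a_points - remaining) / a_points

a = contour_point_image("query_fingerprint.png")
b = contour_point_image("database_fingerprint.png")
print(f"match: {match_percentage(a, b):.1f}%")
```

In practice the two point images would first be aligned for the X/Y shift mentioned above before the subtraction step is applied.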
|
60 |
Finite state automaton construction through regular expression hashing. Coetser, Rayner Johannes Lodewikus, 25 August 2010 (has links)
In this study, the regular expressions forming abstract states in Brzozowski’s algorithm are not remapped to sequential state transition table addresses as would be the case in the classical approach, but are hashed to integers. Two regular expressions that are hashed to the same hash code are assigned the same integer address in the state transition table, reducing the number of states in the automaton. This reduction does not necessarily lead to the construction of a minimal automaton: no restrictions are placed on the hash function hashing two regular expressions to the same code. Depending on the quality of the hash function, a super-automaton, previously referred to as an approximate automaton, or an exact automaton can be constructed. When two regular expressions are hashed to the same state and they do not represent the same regular language, a super-automaton is constructed. A super-automaton accepts the regular language of the input regular expression, in addition to some extra strings. If the hash function is bad, many regular expressions that do not represent the same regular language will be hashed together, resulting in a smaller automaton that accepts extra strings. In the ideal case, two regular expressions will only be hashed together when they represent the same regular language. In this case, an exact minimal automaton will be constructed. It is shown that, using the hashing approach, an exact or super-automaton is always constructed. Another outcome of the hashing approach is that a non-deterministic automaton may be constructed. A new version of the hashing variant of Brzozowski’s algorithm is put forward which constructs a deterministic automaton. A method is also put forward for measuring the difference between an exact and a super-automaton: this takes the form of the k-equivalence measure, which measures the number of characters up to which the strings of two regular expressions are equal. The better the hash function, the higher the value of k, up to the point where the hash function results in regular expressions being hashed together if and only if they have the same regular language. Using the k-equivalence measure, eight generated hash functions and one hand-coded hash function are evaluated for a large number of short regular expressions, which are generated using Gödel numbers. The k-equivalence concept is extended to the average k-equivalence value in order to evaluate the hash functions for longer regular expressions. The hand-coded hash function is found to produce good results. Copyright / Dissertation (MEng)--University of Pretoria, 2009. / Computer Science / unrestricted
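A compact sketch of the hashed variant of Brzozowski's construction described above is given below: derivatives of regular expressions are computed symbolically and mapped to transition-table addresses through a hash function, so a hash function that conflates inequivalent derivatives merges states. The regular-expression representation and the deliberately lossy hash function are illustrative assumptions, not the dissertation's implementation.

```python
from collections import deque

# Regular expressions as nested tuples:
#   ('empty',) ('eps',) ('sym', c) ('cat', r, s) ('alt', r, s) ('star', r)

def nullable(r):
    t = r[0]
    if t in ('eps', 'star'): return True
    if t in ('empty', 'sym'): return False
    if t == 'cat': return nullable(r[1]) and nullable(r[2])
    return nullable(r[1]) or nullable(r[2])          # 'alt'

def deriv(r, a):
    """Brzozowski derivative of r with respect to symbol a."""
    t = r[0]
    if t in ('empty', 'eps'): return ('empty',)
    if t == 'sym': return ('eps',) if r[1] == a else ('empty',)
    if t == 'alt': return ('alt', deriv(r[1], a), deriv(r[2], a))
    if t == 'star': return ('cat', deriv(r[1], a), r)
    head = ('cat', deriv(r[1], a), r[2])             # 'cat'
    return ('alt', head, deriv(r[2], a)) if nullable(r[1]) else head

def lossy_hash(r, buckets=64):
    """Deliberately coarse hash: collisions merge states, which can yield a
    super-automaton; it also guarantees at most `buckets` states."""
    return hash(r) % buckets

def build(r0, alphabet):
    """Map each derivative to a table address via the hash; explore by BFS."""
    start = lossy_hash(r0)
    table, accept, rep = {}, set(), {start: r0}
    queue = deque([start])
    while queue:
        q = queue.popleft()
        r = rep[q]
        if nullable(r): accept.add(q)
        for a in alphabet:
            code = lossy_hash(deriv(r, a))
            if code not in rep:
                rep[code] = deriv(r, a)
                queue.append(code)
            table[(q, a)] = code
    return start, table, accept

# (a|b)*ab over the alphabet {a, b}
r = ('cat', ('cat', ('star', ('alt', ('sym', 'a'), ('sym', 'b'))), ('sym', 'a')), ('sym', 'b'))
start, table, accept = build(r, 'ab')
print(len({s for s, _ in table}), "states;", len(accept), "accepting")
```

Note that a perfect hash on language equivalence would reproduce the minimal automaton, while the coarse hash used here trades exactness for a bounded state count, which is the trade-off the k-equivalence measure quantifies.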
|