About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Rigorous defect control and the numerical solution of ordinary differential equations

Ernsthausen, John 10 1900 (has links)
Modern numerical ordinary differential equation initial-value problem (ODE-IVP) solvers compute a piecewise polynomial approximate solution to the mathematical problem. Evaluating the mathematical problem at this approximate solution defines the defect. Corless and Corliss proposed rigorous defect control for the numerical solution of ODE-IVPs. This thesis automates rigorous defect control for explicit, first-order, nonlinear ODE-IVPs. Defect control is residual-based backward error analysis for ODEs, a special case of Wilkinson's backward error analysis. This thesis describes a complete software implementation of the Corless and Corliss algorithm and extensive numerical studies. Basic time-stepping software is adapted to defect control and implemented. Advances in software developed for validated computing applications, and in programming languages supporting operator overloading, enable the computation with Taylor models of a tight rigorous enclosure of the defect evaluated at the approximate solution. By rigorously bounding a norm of the defect, the Corless and Corliss algorithm controls, to mathematical certainty, the norm of the defect to be less than a user-specified tolerance over the integration interval. The validated computing software used in this thesis happens to compute a rigorous supremum norm. The defect of an approximate solution to the mathematical problem is associated with a new problem, the perturbed reference problem. The approximate solution is often the product of a numerical procedure; nonetheless, it solves the new problem exactly, including all errors. Defect control accepts the approximate solution whenever the sup-norm of the defect is less than a user-specified tolerance. A user must be satisfied that the new problem is an acceptable model. / Thesis / Master of Science (MSc) / Many processes in our daily lives evolve in time, even the weather. Scientists want to predict the future makeup of the process. To do so they build models of physical reality. Scientists design algorithms to solve these models, and the algorithm implemented in this project was designed over 25 years ago. Recent advances in mathematics and software enabled this algorithm to be implemented. Scientific software implements mathematical algorithms, and sometimes there is more than one software solution to apply to the model. The software tools developed in this project enable scientists to objectively compare solution techniques. There are two forces at play: models and software solutions. This project builds software to automate the construction of the exact solution of a nearby model. That's cool.
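As a minimal sketch of the acceptance test behind defect control, assuming a toy problem y' = y and a single polynomial step; the sampled sup-norm below is not rigorous, whereas the thesis encloses the defect rigorously with Taylor models:

# Illustrative (non-rigorous) sketch of defect-based acceptance for an ODE-IVP.
# The thesis bounds the defect rigorously with Taylor models; here we merely
# sample it, which is not a rigorous bound; it only shows the idea.
import numpy as np

def f(t, y):                                  # ODE right-hand side: y' = f(t, y), here y' = y
    return y

# Piecewise-polynomial step: degree-3 Taylor polynomial of exp(t) on [0, h]
h = 0.1
u = np.polynomial.Polynomial([1.0, 1.0, 0.5, 1.0 / 6.0])   # u(t) = 1 + t + t^2/2 + t^3/6
du = u.deriv()

ts = np.linspace(0.0, h, 1001)
defect = du(ts) - f(ts, u(ts))                # delta(t) = u'(t) - f(t, u(t))
sup_norm = np.abs(defect).max()
print("sampled sup-norm of defect:", sup_norm)

tol = 1e-6
print("accept step?", sup_norm <= tol)        # defect-control acceptance criterion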
2

Global Optimization of Dynamic Process Systems using Complete Search Methods

Sahlodin, Ali Mohammad 04 1900 (has links)
Efficient global dynamic optimization (GDO) using spatial branch-and-bound (SBB) requires the ability to construct tight bounds for the dynamic model. This thesis works toward efficient GDO by developing effective convex relaxation techniques for models with ordinary differential equations (ODEs). In particular, a novel algorithm, based upon a verified interval ODE method and the McCormick relaxation technique, is developed for constructing convex and concave relaxations of solutions of nonlinear parametric ODEs. In addition to better convergence properties, the relaxations so obtained are guaranteed to be no looser than their underlying interval bounds, and are typically tighter in practice. Moreover, they are rigorous in the sense of accounting for truncation errors. Nonetheless, the tightness of the relaxations is affected by overestimation from the dependency problem of interval arithmetic, which is not addressed systematically in the underlying interval ODE method. To handle this issue, the relaxation algorithm is extended to a Taylor model ODE method, which can provide generally tighter enclosures with better convergence properties than the interval ODE method. This yields an improved version of the algorithm whose relaxations are generally tighter than those computed with the interval ODE method and offer better convergence. Moreover, they are guaranteed to be no looser than the interval bounds obtained from Taylor models, and are usually tighter in practice. However, the nonlinearity and (potential) nonsmoothness of the relaxations impedes their fast and reliable solution. Therefore, the algorithm is finally modified by incorporating polyhedral relaxations in order to generate relatively tight and computationally cheap linear relaxations for the dynamic model. The resulting relaxation algorithm, along with an SBB procedure, is implemented in the MC++ software package. GDO utilizing the proposed relaxation algorithm is demonstrated to have significantly reduced computational expense, up to orders of magnitude, compared to existing GDO methods. / Doctor of Philosophy (PhD)
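As a minimal sketch of the McCormick relaxation idea underlying the thesis, shown only for the textbook convex/concave envelopes of a bilinear term on a box, not for the thesis's algorithm that propagates relaxations through ODE solutions:

# Minimal sketch of the McCormick envelopes for the bilinear term w = x*y
# on the box [xL, xU] x [yL, yU]. This is the basic building block of
# McCormick relaxations; relaxing ODE solutions is far more involved.

def mccormick_bilinear(x, y, xL, xU, yL, yU):
    under = max(xL * y + yL * x - xL * yL,    # convex underestimator of x*y
                xU * y + yU * x - xU * yU)
    over  = min(xU * y + yL * x - xU * yL,    # concave overestimator of x*y
                xL * y + yU * x - xL * yU)
    return under, over

# Check the sandwich property at a sample point of the box [0, 2] x [1, 3].
x, y = 1.3, 2.1
under, over = mccormick_bilinear(x, y, 0.0, 2.0, 1.0, 3.0)
assert under <= x * y <= over
print(under, x * y, over)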
3

Taylor-regelns aktualitet och tillämpbarhet : En jämförelse av Taylor-skattningar i Brasilien, Kanada, Polen, Sverige och Sydafrika för åren 2000-2013 / The Taylor rule’s relevance and applicability : A comparision of Taylor interest rates in Brazil, Canada, Poland, Sweden and South Africa for the years 2000-2013

Björklund, Pontus, Hegart, Ellinor January 2014 (has links)
John B. Taylor, professor of economics at Stanford University, presented in 1993 a monetary policy rule intended to serve as an aid for central banks in their interest rate decisions. The Taylor rule is very simple in its design and is based on setting the policy rate according to two variables: the GDP gap and the deviation of inflation from target. The rule had a great impact in academic research, but it also spread to macroeconomic practice and brought major changes to monetary policy. Several empirical studies have been published since the Taylor rule was introduced, and opinions are divided on how well it performs for different types of economies and how useful it is today. New theories have also emerged regarding the lag in the effects of policy rate changes and the point in time at which they affect the inflation rate. The purpose of this thesis is to compare how well the original Taylor rule and a time-lagged version explain the central banks' historical policy rate setting in five inflation-targeting countries during the period 2000-2013. The results are analysed with respect to the countries' different economic characteristics and the time period covered by the study. The study is limited to comparing the two Taylor models' applicability to policy rate setting in Brazil, Canada, Poland, Sweden and South Africa. Both models are also modified with an interest rate smoothing function.
Our results indicate that the original Taylor rule performs better than the time-lagged model at explaining the actual policy rate setting today for all countries in the study except Poland. The time-lagged model, however, performs better than the original for the developed economies Sweden and Canada during the 1990s. Both models produce sizeable over- and underestimates, which are largely remedied by the smoothing function we apply. The coefficients are held constant over the whole period, which is not reasonable: some dynamics should be included so that the rule is adjusted each period, since too much weight is placed on the GDP variable, which is thus a contributing factor to the rule's over- and underestimates. The rule performs better for economies with a stable relationship between the growth rate and the inflation rate than for countries with a more volatile relationship between these two variables, like the emerging economies in our study. In addition, the Taylor rule's estimates lie closer to the actual policy rate during the earlier parts of the period and then increasingly deviate from the actually set rate.
The conclusion that can be drawn from our results is that the original Taylor rule performs best at describing a country's policy rate setting in quantitative terms, while a time-lagged model takes actual conditions more into account. Overall, the models perform better for the developed economies than for the emerging economies, and whether the size of the economy has any effect is difficult to determine. The results also indicate that the time-lagged Taylor rule lies closer to the actual policy rate setting for the developed economies during the 1990s than during 2000-2013, whereas the original rule performs better today.
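For reference, a minimal sketch of the original Taylor (1993) rule and a partial-adjustment smoothing of the kind mentioned above; the values of r*, the inflation target, and the smoothing weight rho are illustrative assumptions, not the estimates used in the thesis:

# Minimal sketch of the original Taylor (1993) rule and an interest-rate-smoothed
# variant. r_star, pi_star and rho below are illustrative assumptions.

def taylor_rate(inflation, output_gap, r_star=2.0, pi_star=2.0):
    """Taylor (1993): i = pi + r* + 0.5*(pi - pi*) + 0.5*output_gap (in percent)."""
    return inflation + r_star + 0.5 * (inflation - pi_star) + 0.5 * output_gap

def smoothed_rate(prev_rate, inflation, output_gap, rho=0.8):
    """Partial-adjustment smoothing: i_t = rho*i_{t-1} + (1 - rho)*i_t^Taylor."""
    return rho * prev_rate + (1.0 - rho) * taylor_rate(inflation, output_gap)

print(taylor_rate(inflation=3.0, output_gap=-1.0))                    # 5.0
print(smoothed_rate(prev_rate=4.0, inflation=3.0, output_gap=-1.0))   # 4.2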
4

Approximations polynomiales rigoureuses et applications / Rigorous Polynomial Approximations and Applications

Joldes, Mioara Maria 26 September 2011 (has links)
For purposes of evaluation and manipulation, mathematical functions f are commonly replaced by approximation polynomials p. Examples include floating-point implementations of elementary functions, quadrature, and the solution of ordinary differential equations (ODEs). A wide range of numerical methods exists for these tasks. We consider the application of such methods in the context of rigorous computing, where guarantees are required on the accuracy of the result, with respect to both truncation and rounding errors. A rigorous polynomial approximation (RPA) for a function f defined over an interval [a,b] is a pair (P, Delta) where P is a polynomial and Delta is an interval such that f(x)-P(x) belongs to Delta for all x in [a,b]. In this work we analyse and introduce several ways of obtaining RPAs for univariate functions. First, we analyse and refine an existing approach based on Taylor expansions. Then we replace Taylor polynomials with finer approximations such as minimax polynomials, truncated Chebyshev series, or Chebyshev interpolants. Several applications are presented: one concerning the implementation of standard functions in a mathematical library (libm), another regarding the computation of truncated Chebyshev series expansions of solutions of linear ODEs with polynomial coefficients, and finally an automatic process for function evaluation with guaranteed accuracy on reconfigurable hardware.
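As a minimal sketch of what a pair (P, Delta) looks like, here for exp on [0, b] with a degree-n Taylor polynomial and a Lagrange remainder bound; floating-point rounding in the bound itself is ignored, whereas a genuine RPA tool such as the one in this thesis would also enclose it:

# Minimal sketch of a rigorous polynomial approximation (P, Delta) for exp on
# [0, b]: a degree-n Taylor polynomial at 0 and the Lagrange remainder bound
# exp(xi)*x^(n+1)/(n+1)! <= exp(b)*b^(n+1)/(n+1)!. Rounding errors in the
# computation of the bound are ignored in this sketch.
import math

def rpa_exp_taylor(b, n):
    coeffs = [1.0 / math.factorial(k) for k in range(n + 1)]     # P(x) = sum x^k/k!
    width = math.exp(b) * b ** (n + 1) / math.factorial(n + 1)   # remainder bound
    delta = (0.0, width)                  # exp(x) - P(x) lies in Delta for x in [0, b]
    return coeffs, delta

coeffs, delta = rpa_exp_taylor(b=0.5, n=6)
P = lambda x: sum(c * x ** k for k, c in enumerate(coeffs))
x = 0.37
print(math.exp(x) - P(x), "should lie in", delta)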
5

Contributions à la vérification formelle d'algorithmes arithmétiques / Contributions to the Formal Verification of Arithmetic Algorithms

Martin-Dorel, Erik 26 September 2012 (has links)
The Floating-Point (FP) implementation of a real-valued function is performed with correct rounding if the output is always equal to the rounding of the exact value, which has many advantages. But to implement a function with correct rounding in a reliable and efficient manner, one has to solve the "Table Maker's Dilemma" (TMD). Two sophisticated algorithms (L and SLZ) have been designed to solve this problem, relying on long and complex calculations performed by heavily optimized implementations. Hence the motivation to provide strong guarantees on these costly pre-computations. To this end, we use the Coq proof assistant. First, we develop a library of "rigorous polynomial approximation", allowing one to compute an approximation polynomial and an interval bounding the approximation error inside Coq. This formalization is a key building block for verifying the first step of SLZ, as well as the implementation of a mathematical function in general (with or without correct rounding). Then we have implemented, formally verified, and made effective three interrelated certificate checkers in Coq, whose correctness proofs derive from Hensel's lemma, which we have formalized in both the univariate and bivariate cases. In particular, our "ISValP verifier" is a key component for the formal certification of results generated by SLZ. Next, we focused on the mathematical proof of "augmented-precision" FP algorithms for the square root and the 2D Euclidean norm. We give tight lower bounds on the minimum non-zero distance between sqrt(x²+y²) and a midpoint, allowing one to solve the TMD for this bivariate function. Finally, the "double-rounding" phenomenon can occur when several FP precisions are available, and may change the behavior of some common small FP algorithms. We have formally verified in Coq a set of results describing the behavior of the Fast2Sum algorithm in the presence of double rounding.
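As a minimal sketch of the Fast2Sum algorithm whose behavior under double rounding is verified in the thesis, shown here in plain IEEE binary64 arithmetic, where no double rounding occurs on typical platforms:

# Minimal sketch of Fast2Sum (Dekker). With Python floats (IEEE binary64) and
# |a| >= |b|, the pair (s, e) represents a + b exactly: s is the rounded sum
# and e is the rounding error that s discarded.

def fast2sum(a, b):
    assert abs(a) >= abs(b)
    s = a + b          # rounded sum
    z = s - a          # exact in binary FP under the precondition |a| >= |b|
    e = b - z          # rounding error of s, also exact
    return s, e

s, e = fast2sum(1.0, 2.0 ** -60)
print(s, e)            # s == 1.0, e == 2**-60: the error term recovers what rounding lost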
