1

Interval methods for global optimization

Moa, Belaid, 22 August 2007
We propose interval arithmetic and interval constraint algorithms for global optimization. Both compute lower and upper bounds of a function over a box, and return a lower and an upper bound for the global minimum. In interval arithmetic methods, the bounds are computed using interval arithmetic evaluations; interval constraint methods instead use domain reduction operators and consistency algorithms.

The usual interval arithmetic algorithms for global optimization suffer from at least one of the following drawbacks:
- Mixing the fathoming problem, in which we ask for the global minimum only, with the localization problem, in which we ask for the set of points at which the global minimum occurs.
- Not handling inner and outer approximations of the epsilon-minimizer, the set of points at which the objective function is within epsilon of the global minimum.
- Saying nothing about the quality of their results in actual computation: the properties of the algorithms are stated only in the limit of infinite running time, infinite memory, and infinite precision of the floating-point number system.

To address these drawbacks, we propose interval arithmetic algorithms for the fathoming problem and for the localization problem, and we state properties of these algorithms that can be verified in actual executions. Moreover, the proposed algorithms return the best results that can be computed with the given expressions for the objective function and the conditions, and the given hardware.

Interval constraint methods combine interval arithmetic with constraint processing techniques, namely consistency algorithms, to obtain tighter bounds for the objective function over a box. The basic building block of interval constraint methods is the generic propagation algorithm, which is why we put so much effort into improving it. All of our algorithms, namely the dual, clustered, deterministic, and selective propagation algorithms, were developed to improve the efficiency of the generic propagation algorithm.

The relational box-consistency algorithm is another key algorithm in interval constraints. It keeps squashing the left and right bounds of the intervals of the variables until no further narrowing is possible. One drawback of this way of squashing is that the process slows down as it proceeds; another is that, in some cases, the actual narrowing occurs late. To address these problems, we propose the following algorithms:
- Dynamic box-consistency algorithm: instead of pruning the left and then the right bound of each domain, we alternate the pruning between all the domains.
- Adaptive box-consistency algorithm: the idea is to discard boxes as soon as possible, starting with small boxes and extending or shrinking them depending on the pruning outcome. This adaptive behavior makes the algorithm well suited to quick squashing.

Since the efficiency of interval constraint optimization methods depends heavily on the sharpness of the upper bound for the global minimum, some effort must be made to find an appropriate point or box for computing the upper bound, rather than picking one at random as is commonly done. We therefore introduce interval constraints with exploration: these methods use non-interval methods as an exploratory step in solving a global optimization problem.
The results of the exploration are then used to guide interval constraint algorithms, and thus improve their efficiency.
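
As a rough illustration of the fathoming idea in the abstract above (lower and upper bounds for the global minimum via interval evaluation over boxes), here is a minimal Python sketch. It is not the thesis's algorithm: it omits the constraint-propagation and box-consistency machinery, ignores the outward (directed) rounding that real rigor requires, and the names Interval, f_interval, and fathom, as well as the toy objective, are hypothetical.

```python
# Minimal sketch: interval branch-and-bound for the fathoming problem,
# i.e. computing a lower and an upper bound for the global minimum of f
# over a box.  Illustrative only; no directed rounding, no constraint
# propagation, hypothetical names throughout.

from dataclasses import dataclass
import heapq

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def __add__(self, o): return Interval(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o): return Interval(self.lo - o.hi, self.hi - o.lo)
    def sqr(self):
        if self.lo >= 0.0: return Interval(self.lo**2, self.hi**2)
        if self.hi <= 0.0: return Interval(self.hi**2, self.lo**2)
        return Interval(0.0, max(self.lo**2, self.hi**2))
    def width(self): return self.hi - self.lo
    def mid(self): return 0.5 * (self.lo + self.hi)

def f_interval(x, y):
    # Interval extension of the toy objective f(x, y) = (x - 1)^2 + y^2.
    one = Interval(1.0, 1.0)
    return (x - one).sqr() + y.sqr()

def f_point(x, y):
    return (x - 1.0)**2 + y**2

def fathom(box, tol=1e-6, max_iter=100_000):
    """Return (lower, upper) bounds for the global minimum of f over `box`."""
    ubound = f_point(*[iv.mid() for iv in box])   # any sampled value is an upper bound
    count = 0                                     # tie-breaker so the heap never compares boxes
    heap = [(f_interval(*box).lo, count, box)]
    lbound = heap[0][0]
    for _ in range(max_iter):
        if not heap or ubound - heap[0][0] <= tol:
            break
        lbound, _, cur = heapq.heappop(heap)
        k = max(range(len(cur)), key=lambda i: cur[i].width())   # bisect widest coordinate
        m = cur[k].mid()
        for half in (Interval(cur[k].lo, m), Interval(m, cur[k].hi)):
            child = cur[:k] + (half,) + cur[k + 1:]
            lo = f_interval(*child).lo
            ubound = min(ubound, f_point(*[iv.mid() for iv in child]))
            if lo <= ubound:                      # a box with lo > ubound cannot contain the minimum
                count += 1
                heapq.heappush(heap, (lo, count, child))
    return (heap[0][0] if heap else lbound), ubound

# Example: the global minimum of (x - 1)^2 + y^2 over [-2, 2] x [-2, 2] is 0 at (1, 0).
print(fathom((Interval(-2.0, 2.0), Interval(-2.0, 2.0))))
```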
2

Rigorous defect control and the numerical solution of ordinary differential equations

Ernsthausen, John, 10 1900
Modern numerical ordinary differential equation initial-value problem (ODE-IVP) solvers compute a piecewise polynomial approximate solution to the mathematical problem. Evaluating the mathematical problem at this approximate solution defines the defect. Corless and Corliss proposed rigorous defect control of numerical ODE-IVP. This thesis automates rigorous defect control for explicit, first-order, nonlinear ODE-IVP. Defect control is residual-based backward error analysis for ODE, a special case of Wilkinson's backward error analysis.

This thesis describes a complete software implementation of the Corless and Corliss algorithm and extensive numerical studies. Basic time-stepping software is adapted to defect control and implemented. Advances in software developed for validated computing applications, and advances in programming languages supporting operator overloading, enable the computation of a tight rigorous enclosure of the defect, evaluated at the approximate solution with Taylor models. By rigorously bounding a norm of the defect, the Corless and Corliss algorithm guarantees, with mathematical certainty, that the norm of the defect is less than a user-specified tolerance over the integration interval. The validated computing software used in this thesis happens to compute a rigorous supremum norm.

The defect of an approximate solution to the mathematical problem is associated with a new problem, the perturbed reference problem. The approximate solution is often the product of a numerical procedure; nonetheless, it solves the new problem exactly, including all errors. Defect control accepts the approximate solution whenever the sup-norm of the defect is less than a user-specified tolerance. A user must be satisfied that the new problem is an acceptable model. / Thesis / Master of Science (MSc) /

Many processes in our daily lives evolve in time, even the weather. Scientists want to predict the future makeup of such a process, so they build models of physical reality. Scientists design algorithms to solve these models; the algorithm implemented in this project was designed over 25 years ago, and recent advances in mathematics and software have made its implementation possible. Scientific software implements mathematical algorithms, and sometimes there is more than one software solution to apply to a model. The software tools developed in this project enable scientists to objectively compare solution techniques. There are two forces at play: models and software solutions. This project builds software to automate the construction of the exact solution of a nearby model. That's cool.
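
As a rough illustration of the defect idea in the abstract above, here is a minimal Python sketch that evaluates the defect of a piecewise-polynomial (here, cubic Hermite) approximate solution on one step and compares a sampled estimate of its sup-norm with a tolerance. This is not the thesis's method: the thesis encloses the defect rigorously with Taylor models, whereas sampling on a grid as below is non-rigorous, and the ODE and the names f, hermite_step, and defect_sup_estimate are hypothetical.

```python
# Minimal, non-rigorous sketch of defect control on one step of
# u'(t) = f(t, u), u(t0) = u0: build a cubic Hermite interpolant u(t),
# sample the defect u'(t) - f(t, u(t)), and accept the step if the
# sampled sup-norm estimate is below a tolerance.

import numpy as np

def f(t, u):
    # Toy scalar ODE: u' = -u, exact solution u(t) = u0 * exp(-t).
    return -u

def hermite_step(t0, t1, u0, u1, f0, f1):
    """Cubic Hermite interpolant u(t) and its derivative u'(t) on [t0, t1]."""
    h = t1 - t0
    def u(t):
        s = (t - t0) / h
        h00 = 2*s**3 - 3*s**2 + 1
        h10 = s**3 - 2*s**2 + s
        h01 = -2*s**3 + 3*s**2
        h11 = s**3 - s**2
        return h00*u0 + h*h10*f0 + h01*u1 + h*h11*f1
    def du(t):
        s = (t - t0) / h
        dh00 = 6*s**2 - 6*s
        dh10 = 3*s**2 - 4*s + 1
        dh01 = -6*s**2 + 6*s
        dh11 = 3*s**2 - 2*s
        return (dh00*u0 + h*dh10*f0 + dh01*u1 + h*dh11*f1) / h
    return u, du

def defect_sup_estimate(t0, t1, u, du, samples=101):
    """Estimate sup over [t0, t1] of |u'(t) - f(t, u(t))| by sampling (not rigorous)."""
    ts = np.linspace(t0, t1, samples)
    return max(abs(du(t) - f(t, u(t))) for t in ts)

# One step in the spirit of defect control:
t0, t1, u0 = 0.0, 0.5, 1.0
u1 = u0 * np.exp(-(t1 - t0))          # pretend a solver produced this end-of-step value
u, du = hermite_step(t0, t1, u0, u1, f(t0, u0), f(t1, u1))
tol = 1e-2
delta = defect_sup_estimate(t0, t1, u, du)
print(f"defect estimate {delta:.2e}:", "accept step" if delta <= tol else "reject step")
```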
