1

On the Effect of Numerical Noise in Simulation-Based Optimization

Vugrin, Kay E. 10 April 2003 (has links)
Numerical noise is a prevalent concern in many practical optimization problems. Convergence of gradient-based optimization algorithms in the presence of numerical noise is not always assured. One way to improve optimizer performance in the presence of numerical noise is to adjust the method of gradient computation. This study investigates the use of Continuous Sensitivity Equation (CSE) gradient approximations in the context of numerical noise and optimization. Three problems are considered: a problem with a system of ODE constraints, a single-parameter flow problem constrained by the Navier-Stokes equations, and a multiple-parameter flow problem constrained by the Navier-Stokes equations. All three problems use adaptive methods in the simulation of the constraints and are numerically noisy. Gradients for each problem are computed with both CSE and finite difference methods, then analyzed and compared. The two flow problems are optimized with a trust-region optimization algorithm using both sets of gradient calculations. Optimization results are also compared, and the CSE gradient approximation yields impressive results for these examples. / Master of Science
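The noise sensitivity of finite-difference gradients that motivates this work can be illustrated with a generic sketch (not taken from the thesis; the quadratic objective and the noise amplitude `sigma` are invented for illustration). The finite-difference error behaves roughly like O(h²) truncation plus O(sigma/h) noise amplification, so an overly small step makes things worse:

```python
import numpy as np

rng = np.random.default_rng(0)

def f_noisy(x, sigma=1e-6):
    # Smooth objective plus simulated numerical noise of amplitude sigma,
    # standing in for adaptive-solver noise in a simulation-based objective.
    return x**2 + sigma * rng.standard_normal()

def fd_gradient(f, x, h):
    # Central finite-difference approximation of f'(x).
    return (f(x + h) - f(x - h)) / (2.0 * h)

# True gradient of x**2 at x = 1 is 2.  Shrinking h past a point
# amplifies the noise term O(sigma / h) instead of reducing the error.
for h in (1e-2, 1e-4, 1e-8):
    g = fd_gradient(f_noisy, 1.0, h)
    print(h, abs(g - 2.0))
```

This is the effect the CSE approach sidesteps: sensitivity equations differentiate the continuous problem rather than the noisy discrete output.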
2

A Nonlinear Response Model for Single Nucleotide Polymorphism Detection Assays

Kouri, Drew P. 05 June 2008 (has links)
No description available.
3

Filter-Trust-Region Methods for Nonlinear Optimization

Sainvitu, Caroline 17 April 2007 (has links)
This work is concerned with the theoretical study and the implementation of algorithms for solving two particular types of nonlinear optimization problems, namely unconstrained and simple-bound constrained optimization problems. For unconstrained optimization, we develop a new algorithm which uses a filter technique and a trust-region method in order to enforce global convergence and to improve the efficiency of traditional approaches. We also analyze the effect of approximate first and second derivatives on the performance of the filter-trust-region algorithm. We next extend our algorithm to simple-bound constrained optimization problems by combining these ideas with a gradient-projection method. Numerical results for the proposed methods indicate that they are competitive with more classical trust-region algorithms.
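The trust-region machinery underlying this and several of the following abstracts hinges on a radius-update rule driven by the ratio of actual to predicted reduction. A textbook sketch (the thresholds and factors are conventional defaults, not the thesis's specific parameters):

```python
def tr_radius_update(rho, delta, eta1=0.25, eta2=0.75, gamma1=0.5, gamma2=2.0):
    """Classical trust-region radius update based on rho, the ratio of
    actual to predicted reduction at the trial step."""
    if rho < eta1:
        return gamma1 * delta   # poor model agreement: shrink the region
    if rho > eta2:
        return gamma2 * delta   # very good agreement: expand the region
    return delta                # acceptable agreement: keep the radius

print(tr_radius_update(0.1, 1.0))  # 0.5
```

A filter method replaces part of this acceptance logic with a dominance test on (infeasibility, objective) pairs, but the radius update itself is of this form.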
4

A survey of the trust region subproblem within a semidefinite framework

Fortin, Charles January 2000 (has links)
Trust region subproblems arise within a class of unconstrained methods called trust region methods. The subproblems consist of minimizing a quadratic function subject to a norm constraint. This thesis is a survey of different methods developed to find an approximate solution to the subproblem. We study the well-known method of Moré and Sorensen and two recent methods for large sparse subproblems: the so-called Lanczos method of Gould et al. and the Rendl and Wolkowicz algorithm. The common framework used to examine these methods is semidefinite programming. This approach has been used by Rendl and Wolkowicz to explain their method and the Moré and Sorensen algorithm; we extend this work to the Lanczos method. The last chapter of this thesis is dedicated to some improvements of the Rendl and Wolkowicz algorithm and to comparisons between the Lanczos method and the Rendl and Wolkowicz algorithm. In particular, we expose some weaknesses of the Lanczos method and show that the Rendl and Wolkowicz algorithm is more robust.
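The subproblem surveyed here — minimize a quadratic subject to a norm constraint — can be sketched with a dense, Moré-Sorensen-style solver (an illustrative toy, not the Rendl-Wolkowicz or Lanczos implementations; the so-called hard case is deliberately ignored):

```python
import numpy as np

def trs_solve(H, g, delta):
    """Solve min 0.5 x'Hx + g'x subject to ||x|| <= delta.

    Dense sketch of the More-Sorensen idea: find lam >= max(0, -lambda_min(H))
    such that ||(H + lam*I)^{-1} g|| = delta, unless the unconstrained Newton
    step is already interior.  The hard case is not handled.
    """
    n = len(g)
    lam_min = np.linalg.eigh(H)[0][0]
    if lam_min > 0:
        x = np.linalg.solve(H, -g)
        if np.linalg.norm(x) <= delta:
            return x            # interior solution
    step = lambda lam: np.linalg.solve(H + lam * np.eye(n), -g)
    # Bracket then bisect the secular equation ||x(lam)|| = delta.
    lo = max(0.0, -lam_min) + 1e-12
    hi = lo + 1.0
    while np.linalg.norm(step(hi)) > delta:
        hi *= 2.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if np.linalg.norm(step(mid)) > delta:
            lo = mid
        else:
            hi = mid
    return step(hi)

H = np.diag([2.0, 3.0])
g = np.array([-4.0, -6.0])
x = trs_solve(H, g, delta=1.0)
print(x, np.linalg.norm(x))  # solution lies on the boundary, ||x|| ~ 1
```

The semidefinite-programming view in the thesis reformulates exactly this secular equation; large sparse variants avoid the dense factorizations used above.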
6

An Empirical Study of the Distributed Ellipsoidal Trust Region Method for Large Batch Training

Alnasser, Ali 10 February 2021 (has links)
Neural network optimizers are dominated by first-order methods, due to their inexpensive computational cost per iteration. However, it has been shown that first-order optimization is prone to reaching sharp minima when trained with large batch sizes. As the batch size increases, the statistical stability of the problem increases, a regime that is well suited for second-order optimization methods. In this thesis, we study a distributed ellipsoidal trust-region model for neural networks. We use a block-diagonal approximation of the Hessian, assigning consecutive layers of the network to each process, and solve in parallel for the update direction of each subset of the parameters. We show that our optimizer is well suited to large-batch training and scales with an increasing number of processes.
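The block-diagonal decoupling described here can be sketched in miniature: each Hessian block (one group of layers) yields an independent Newton-type system, which is what allows one block per process. The blocks, gradients, and helper name below are invented for illustration, not the thesis's implementation:

```python
import numpy as np

def block_diagonal_solve(blocks, g_parts):
    # With a block-diagonal Hessian approximation, the update direction
    # decouples: each block's system H_i d_i = -g_i is solved independently,
    # which is the part that can be distributed one-block-per-process.
    return [np.linalg.solve(B, -g) for B, g in zip(blocks, g_parts)]

# Two 2x2 blocks standing in for two groups of consecutive layers.
H1 = np.array([[2.0, 0.5], [0.5, 1.0]])
H2 = np.array([[3.0, 0.0], [0.0, 4.0]])
g1 = np.array([1.0, -1.0])
g2 = np.array([2.0, 2.0])
steps = block_diagonal_solve([H1, H2], [g1, g2])
print(steps)
```

In the distributed setting each solve would additionally be constrained by a per-block trust region, giving the ellipsoidal overall constraint.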
7

Impact of Discretization Techniques on Nonlinear Model Reduction and Analysis of the Structure of the POD Basis

Unger, Benjamin 19 November 2013 (has links)
In this thesis a numerical study of the one-dimensional viscous Burgers equation is conducted. The discretization techniques Finite Differences, Finite Element Method and Group Finite Elements are applied, and their impact on model reduction techniques, namely Proper Orthogonal Decomposition (POD), Group POD and the Discrete Empirical Interpolation Method (DEIM), is studied. This study is facilitated by examination of several common ODE solvers. Embedded in this process, some results on the structure of the POD basis and an alternative algorithm to compute the POD subspace are presented. Various numerical studies are conducted to compare the different methods and to study the interaction of the spatial discretization with the reduced-order model (ROM) through the basis functions. Moreover, the results are used to investigate the impact of ROMs on optimal control problems. To this end, the ROM is embedded in a trust-region framework, and the convergence results of Arian et al. (2000) are extended to POD-DEIM. Based on the convergence theorem and the results of the numerical studies, the emphasis is on implementation strategies for numerical speedup. / Master of Science
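A POD basis of the kind discussed in this abstract is commonly computed from the SVD of a snapshot matrix; a minimal sketch (the synthetic snapshot field and the energy threshold are illustrative assumptions, not the thesis's data):

```python
import numpy as np

def pod_basis(snapshots, energy=0.999):
    """Compute a POD basis from a snapshot matrix (n_dof x n_snapshots).

    The basis vectors are the left singular vectors of the snapshot matrix;
    the rank r is the smallest number of modes whose singular values capture
    the requested fraction of the total 'energy' (sum of squared s.v.s).
    """
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cumulative = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(cumulative, energy)) + 1
    return U[:, :r], s

# Snapshots of a rank-2 field: two spatial modes with time-varying weights.
x = np.linspace(0.0, 1.0, 100)
t = np.linspace(0.0, 1.0, 30)
S = np.outer(np.sin(np.pi * x), np.cos(t)) \
    + 0.1 * np.outer(np.sin(2 * np.pi * x), np.sin(t))
Phi, s = pod_basis(S)
print(Phi.shape)  # the basis recovers the two dominant spatial modes
```

DEIM would then select interpolation rows from a second basis for the nonlinear term; the SVD step above is the common starting point for both.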
8

Trust-Region Algorithms for Nonlinear Stochastic Programming and Mixed Logit Models

Bastin, Fabian 12 March 2004 (has links)
This work is concerned with the study of nonlinear nonconvex stochastic programming, in particular in the context of trust-region approaches. We first explore how to exploit the structure of multistage stochastic nonlinear programs with linear constraints, in the framework of primal-dual interior point methods. We next study consistency of sample average approximations (SAA) for general nonlinear stochastic programs. We also develop a new algorithm to solve the SAA problem, using statistical inference information to reduce numerical costs by means of an internal variable sample size strategy. We finally assess the numerical efficiency of the proposed method for the estimation of discrete choice models, more precisely mixed logit models, using our software AMLET, written for this purpose.
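The SAA consistency studied here can be illustrated on a one-dimensional toy where the sample-average problem has a closed-form minimizer (the distribution and objective are invented for illustration; the thesis treats general nonlinear programs):

```python
import numpy as np

rng = np.random.default_rng(1)

def saa_minimizer(n):
    # SAA of min_x E[(x - xi)^2] with xi ~ N(1, 1): the sample-average
    # problem min_x (1/n) * sum_k (x - xi_k)^2 is minimized by the
    # sample mean, so the SAA solution is computable in closed form.
    xi = rng.normal(loc=1.0, scale=1.0, size=n)
    return xi.mean()

# Consistency: the SAA minimizer approaches the true minimizer x* = 1
# as the sample size grows (error shrinks like O(1/sqrt(n))).
for n in (10, 1000, 100000):
    print(n, abs(saa_minimizer(n) - 1.0))
```

A variable sample size strategy of the kind the abstract mentions would grow `n` across iterations, spending large samples only once the iterate is near a solution.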
9

Effective and Efficient Optimization Methods for Kernel Based Classification Problems

Tayal, Aditya January 2014 (has links)
Kernel methods are a popular choice for solving a number of problems in statistical machine learning. In this thesis, we propose new methods for two important kernel-based classification problems: 1) learning from highly unbalanced large-scale datasets and 2) selecting a relevant subset of input features for a given kernel specification. The first problem is known as the rare class problem, which is characterized by a highly skewed or unbalanced class distribution. Unbalanced datasets can introduce significant bias in standard classification methods. In addition, due to the increase of data in recent years, large datasets with millions of observations have become commonplace. We propose an approach to address both the bias and the computational complexity of rare class problems, by optimizing the area under the receiver operating characteristic curve and by using a rare-class-only kernel representation, respectively. We justify the proposed approach theoretically and computationally. Theoretically, we establish an upper bound on the difference between selecting a hypothesis from a reproducing kernel Hilbert space and from a hypothesis space which can be represented using a subset of kernel functions. This bound shows that, for a fixed number of kernel functions, it is optimal to first include functions corresponding to rare class samples. We also discuss the connection of a subset kernel representation with the Nyström method for a general class of regularized loss minimization methods. Computationally, we illustrate that the rare class representation produces statistically equivalent test error results on highly unbalanced datasets compared to using the full kernel representation, but with significantly better time and space complexity. Finally, we extend the method to rare class ordinal ranking, and apply it to a recent public competition problem in health informatics. The second problem studied in the thesis is known in the literature as the feature selection problem.
Embedding feature selection in kernel classification leads to a non-convex optimization problem. We specify a primal formulation and solve the problem using a second-order trust-region algorithm. To improve efficiency, we use the two-block Gauss-Seidel method, breaking the problem into a convex support vector machine subproblem and a non-convex feature selection subproblem. We reduce the possibility of saddle-point convergence and improve solution quality by sharing an explicit functional margin variable between block iterates. We illustrate how our algorithm improves upon state-of-the-art methods.
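The two-block Gauss-Seidel scheme mentioned above alternates exact minimization over one block of variables with the other held fixed. A minimal convex toy (not the SVM/feature-selection subproblems of the thesis, where one block is non-convex):

```python
def block_gauss_seidel(n_iter=50):
    # Two-block Gauss-Seidel on f(x, y) = x^2 + y^2 + x*y - 3*x - 3*y.
    # Each block subproblem is solved exactly in closed form with the
    # other block fixed: df/dx = 0 gives x = (3 - y) / 2, and
    # symmetrically for y.  The joint minimizer is (1, 1).
    x, y = 0.0, 0.0
    for _ in range(n_iter):
        x = (3.0 - y) / 2.0   # minimize over block x, y fixed
        y = (3.0 - x) / 2.0   # minimize over block y, x fixed
    return x, y

x, y = block_gauss_seidel()
print(x, y)  # converges to (1.0, 1.0)
```

With a non-convex block, the same alternation can stall at saddle points, which is the failure mode the shared functional-margin variable in the thesis is designed to mitigate.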
10

Multilevel optimization in infinity norm and associated stopping criteria

Mouffe, Mélodie 10 February 2009 (has links)
This thesis concerns the study of a multilevel trust-region algorithm in infinity norm, designed for the solution of large-scale nonlinear optimization problems, possibly subject to bound constraints. Both theoretical and numerical aspects are studied. The multilevel algorithm RMTR∞ that we study was developed from the algorithm of Gratton, Sartenaer and Toint (2008b), modified first by replacing the Euclidean norm with the infinity norm, and second by adapting it to solve bound-constrained problems. In a first part, the main features of the new algorithm are presented and discussed. The algorithm is then proved globally convergent in the sense of Conn, Gould and Toint (2000), meaning that it converges to a local minimum from any feasible starting point. 
Moreover, it is shown that the active-constraints identification property of trust-region methods based on the use of a Cauchy step can be extended to any internal solver that satisfies a sufficient decrease property. As a consequence, this identification property also holds for a specific variant of our new algorithm. We then study several stopping criteria for nonlinear bound-constrained algorithms, in order to determine their meaning and their advantages, so that the criterion best suited to a given situation can be chosen easily. In particular, the stopping criteria are examined in terms of backward error analysis, understood both in the usual sense (using a product norm) and in a multicriteria optimization framework. Finally, a practical algorithm is implemented that uses a Gauss-Seidel-like smoothing technique as an internal solver. Numerical tests are run on a FORTRAN 95 version of the algorithm, first to define a set of efficient default parameters for the method, and second to compare the new algorithm with other classical algorithms, such as mesh refinement techniques and the conjugate gradient method, on both unconstrained and bound-constrained problems. These comparisons seem to give the advantage to the multilevel algorithm, particularly on nearly quadratic problems, which is the behavior expected from an algorithm inspired by multigrid techniques. In conclusion, the multilevel trust-region algorithm presented in this thesis improves on the previous algorithm of this class through its use of the infinity norm and its handling of bound constraints. Its convergence, its behavior with respect to the bounds, and the definition of its stopping criteria are analyzed, and it shows promising numerical behavior.
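A common infinity-norm stopping criterion for bound-constrained problems of the kind discussed here is the projected-gradient criticality measure. A minimal sketch (the box and the quadratic objective are illustrative choices, not from the thesis):

```python
import numpy as np

def criticality_inf(x, g, lo, hi):
    # Infinity-norm criticality measure for bound-constrained problems:
    # || P_[lo,hi](x - g) - x ||_inf, where P is the projection onto the
    # box.  It vanishes exactly at first-order critical points and is a
    # standard stopping test in bound-constrained trust-region codes.
    return np.max(np.abs(np.clip(x - g, lo, hi) - x))

# min (x1 - 2)^2 + (x2 + 1)^2 over the box [0, 1]^2.
grad = lambda x: 2.0 * (x - np.array([2.0, -1.0]))
lo, hi = np.zeros(2), np.ones(2)
x_star = np.array([1.0, 0.0])   # minimizer: both bounds active
print(criticality_inf(x_star, grad(x_star), lo, hi))  # 0.0
```

Backward-error-based criteria, as analyzed in the thesis, instead ask how much the problem data would have to be perturbed for the current iterate to be exactly critical.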
