11

On the Relationship between Conjugate Gradient and Optimal First-Order Methods for Convex Optimization

Karimi, Sahar January 2014 (has links)
In a series of works initiated by Nemirovsky and Yudin, and later extended by Nesterov, first-order algorithms for unconstrained minimization with an optimal theoretical complexity bound have been proposed. On the other hand, conjugate gradient algorithms, among the most widely used first-order techniques, lack a finite complexity bound; in fact, their performance can be quite poor. This dissertation is partly about narrowing the gap between these two classes of algorithms, namely the traditional conjugate gradient methods and optimal first-order techniques. We derive conditions under which conjugate gradient methods attain the same complexity bound as Nemirovsky-Yudin's and Nesterov's methods. Moreover, we propose a conjugate gradient-type algorithm named CGSO, for Conjugate Gradient with Subspace Optimization, which achieves the optimal complexity bound at the cost of a little extra computation. We extend the theory of CGSO to convex problems with linear constraints. In particular, we focus on solving the $l_1$-regularized least-squares problem, often referred to as the Basis Pursuit Denoising (BPDN) problem in the optimization community. BPDN arises in many practical fields, including sparse signal recovery, machine learning, and statistics. Solving BPDN is fairly challenging because the signals involved can be quite large; therefore, first-order methods are of particular interest for these problems. We propose a quasi-Newton proximal method for solving BPDN. Our numerical results suggest that our technique is computationally effective and compares favourably with other state-of-the-art solvers.
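As a concrete illustration of the BPDN problem this abstract refers to, here is a minimal proximal-gradient (ISTA) sketch for $\min_x \tfrac12\|Ax-b\|_2^2 + \lambda\|x\|_1$. It is not the CGSO or quasi-Newton proximal method of the thesis, and the matrix, data, regularization weight, and step size below are placeholder assumptions.

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t*||.||_1 (element-wise soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista_bpdn(A, b, lam, n_iter=500):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 by proximal gradient (ISTA).

    A plain first-order baseline for BPDN; step size 1/L with L = ||A||_2^2.
    """
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)           # gradient of the smooth part
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Tiny synthetic example (placeholder data)
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100); x_true[:5] = 1.0   # sparse ground truth
b = A @ x_true + 0.01 * rng.standard_normal(40)
x_hat = ista_bpdn(A, b, lam=0.1)
```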
12

Inexact Newton Methods Applied to Under-Determined Systems

Simonis, Joseph P 04 May 2006 (has links)
Consider an under-determined system of nonlinear equations F(x)=0, F:R^m→R^n, where F is continuously differentiable and m > n. This system appears in a variety of applications, including parameter-dependent systems, dynamical systems with periodic solutions, and nonlinear eigenvalue problems. Robust, efficient numerical methods are often required for the solution of this system. Newton's method is an iterative scheme for solving the nonlinear system of equations F(x)=0, F:R^n→R^n. Simple to implement and theoretically sound, it is not, however, often practical in its pure form. Inexact Newton methods and globalized inexact Newton methods are computationally efficient variations of Newton's method commonly used on large-scale problems. Frequently, these variations are more robust than Newton's method. Trust region methods, thought of here as globalized exact Newton methods, are not as computationally efficient in the large-scale case, yet notably more robust than Newton's method in practice. The normal flow method is a generalization of Newton's method for solving the system F:R^m→R^n, m > n. Easy to implement, this method has a simple and useful local convergence theory; however, in its pure form, it is not well suited for solving large-scale problems. This dissertation presents new methods that improve the efficiency and robustness of the normal flow method in the large-scale case. These are developed in direct analogy with inexact-Newton, globalized inexact-Newton, and trust-region methods, with particular consideration of the associated convergence theory. Included are selected problems of interest simulated in MATLAB.
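For reference, a minimal sketch of the pure normal flow iteration mentioned above, which takes the minimum-norm Newton-like step $s = -J(x)^{+}F(x)$ at each iteration. The test function, its Jacobian, and the starting point are illustrative assumptions, and none of the inexact or globalized variants developed in the thesis are included.

```python
import numpy as np

def normal_flow(F, J, x0, tol=1e-10, max_iter=50):
    """Normal flow iteration for an under-determined system F(x) = 0.

    Each step solves J(x) s = -F(x) in the minimum-norm least-squares
    sense (Moore-Penrose pseudoinverse), then updates x <- x + s.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        s, *_ = np.linalg.lstsq(J(x), -Fx, rcond=None)  # minimum-norm step
        x = x + s
    return x

# Example: one equation in two unknowns, F(x, y) = x^2 + y^2 - 1
F = lambda v: np.array([v[0] ** 2 + v[1] ** 2 - 1.0])
J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]]])
x_star = normal_flow(F, J, x0=[2.0, 1.0])   # converges to a point on the unit circle
```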
13

Stochastic Newton Methods With Enhanced Hessian Estimation

Reddy, Danda Sai Koti January 2017 (has links) (PDF)
Optimization problems involving uncertainties are common in a variety of engineering disciplines, such as transportation systems, manufacturing, communication networks, healthcare, and finance. The large number of input variables and the lack of a system model prohibit a precise analytical solution, and a viable alternative is to employ simulation-based optimization. The idea is to simulate the stochastic system under consideration a number of times, updating the system parameters until a good enough solution is obtained. Formally, given only noise-corrupted measurements of an objective function, we wish to find a parameter that minimises the objective function. Iterative algorithms using statistical methods search the feasible region to improve upon the candidate parameter. Stochastic approximation algorithms are the best suited, most studied, and most widely applied algorithms for finding solutions when the feasible region is a continuously valued set. One can use information on the gradient/Hessian of the objective to aid the search process. However, due to the lack of knowledge of the noise distribution, one needs to estimate the gradient/Hessian from noisy samples of the cost function obtained by simulation. Simple gradient search schemes take many iterations to converge to a local minimum and are heavily dependent on the choice of step-sizes. Stochastic Newton methods, on the other hand, can counter the ill-conditioning of the objective function, as they incorporate second-order information into the stochastic updates, and they are often more accurate than simple gradient search schemes. We propose enhancements to the Hessian estimation scheme used in two recently proposed stochastic Newton methods, based on the ideas of random directions stochastic approximation (2RDSA) [21] and simultaneous perturbation stochastic approximation (2SPSA-3) [6], respectively. The proposed scheme, inspired by [29], reduces the error in the Hessian estimate by (i) incorporating a zero-mean feedback term, and (ii) optimizing the step-sizes used in the Hessian recursion. We prove that both 2RDSA and 2SPSA-3 with our Hessian improvement scheme converge asymptotically to the true Hessian. The key advantage of 2RDSA and 2SPSA-3 is that they require only 75% of the per-iteration simulation cost of 2SPSA with improved Hessian estimation (2SPSA-IH) [29]. Numerical experiments show that 2RDSA-IH outperforms both 2SPSA-IH and 2RDSA without the improved Hessian estimation scheme.
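To make the simulation-based setting concrete, here is a minimal sketch of the simultaneous-perturbation idea these methods build on: a two-measurement SPSA gradient estimate driving a plain first-order update. It does not reproduce the second-order 2RDSA/2SPSA-3 recursions or the improved Hessian estimation scheme of the thesis; the noisy objective and gain constants are placeholder assumptions.

```python
import numpy as np

def spsa_gradient(f_noisy, x, c, rng):
    """Two-measurement SPSA gradient estimate at x with perturbation size c."""
    delta = rng.choice([-1.0, 1.0], size=x.shape)        # Rademacher perturbation
    y_plus = f_noisy(x + c * delta)
    y_minus = f_noisy(x - c * delta)
    return (y_plus - y_minus) / (2.0 * c * delta)        # element-wise estimate

def spsa_minimize(f_noisy, x0, n_iter=2000, a=0.1, c=0.1, seed=0):
    """Plain first-order SPSA: x <- x - a_k * ghat(x), with decaying gains."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for k in range(1, n_iter + 1):
        a_k = a / k ** 0.602                             # commonly used gain decay
        c_k = c / k ** 0.101
        x = x - a_k * spsa_gradient(f_noisy, x, c_k, rng)
    return x

# Noisy quadratic standing in for a simulation-based objective
rng = np.random.default_rng(1)
f_noisy = lambda x: np.sum((x - 3.0) ** 2) + 0.01 * rng.standard_normal()
x_hat = spsa_minimize(f_noisy, x0=np.zeros(5))           # drifts toward 3 in each coordinate
```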
14

Approximation of the Neutron Diffusion Equation on Hexagonal Geometries

González Pintor, Sebastián 16 November 2012 (has links)
The neutron diffusion equation describes the neutron population of a nuclear reactor. This work deals with this model for nuclear reactors with hexagonal geometry. First, the neutron diffusion equation is studied. This is a differential eigenvalue problem, known as the Lambda modes problem. To solve the Lambda modes problem, different methods have been compared on one-dimensional geometries, with the spectral element method proving to be the best. Using this method, the operators are discretized on two- and three-dimensional geometries, and the resulting algebraic eigenvalue problem is solved with the Arnoldi method. The steady-state neutron distribution is used as the initial condition for the integration of the time-dependent neutron diffusion equation. An implicit Euler method is used to integrate in time. When a control rod is partially inserted into a node, a non-physical behaviour of the solution appears (the "rod cusping" effect), which is corrected by weighting the cross sections with the flux from the previous time step. For the solution of the algebraic systems arising in the backward method, a Krylov method is used, and different preconditioning strategies are evaluated. The first consists of using the block structure induced by the energy groups to solve the system by blocks; different acceleration techniques for the block iterative scheme and a preconditioner exploiting this block structure are proposed. A spectral preconditioner is also studied, which uses the information in a Krylov subspace to precondition the next system. Second- and fourth-order exponential methods are also proposed to integrate the time-dependent neutron diffusion equation, where the exponential of the system matrix has to… / González Pintor, S. (2012). Approximation of the Neutron Diffusion Equation on Hexagonal Geometries [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/17829 / Palancia
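As an illustration of the algebraic eigenvalue problem described above (the Lambda modes problem $L\varphi = \tfrac{1}{k} M\varphi$ solved with the Arnoldi method), the sketch below converts it to a standard eigenproblem and calls ARPACK's Arnoldi implementation through SciPy. The matrices here are simple placeholders, not the spectral-element operators of the thesis.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def lambda_modes(L, M, n_modes=1):
    """Largest eigenvalues k of the generalized problem L*phi = (1/k)*M*phi.

    Rewritten as (L^{-1} M) phi = k phi and solved with ARPACK's Arnoldi
    iteration via a LinearOperator, so L is only applied through a sparse
    LU factorization.
    """
    solve_L = spla.splu(sp.csc_matrix(L)).solve
    op = spla.LinearOperator(L.shape, matvec=lambda v: solve_L(M @ v))
    vals, vecs = spla.eigs(op, k=n_modes, which='LM')
    return vals.real, vecs.real

# Placeholder matrices standing in for a spectral-element discretization
n = 200
L = sp.diags([-1.0, 2.2, -1.0], [-1, 0, 1], shape=(n, n), format='csc')
M = sp.identity(n, format='csc') * 1.5
k_eff, phi = lambda_modes(L, M, n_modes=1)
```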
15

Modifikacije Njutnovog postupka za rešavanje nelinearnih singularnih problema / Modification of the Newton method for nonlinear singular problems

Buhmiler Sandra 18 December 2013 (has links)
This doctoral thesis considers nonlinear singular problems. The first chapter presents the notation, basic definitions, and theorems used in the thesis. The second chapter presents several commonly used methods and their behaviour when the solution is regular or singular, together with known modifications of these methods that improve convergence. In addition, four quasi-Newton methods are presented, along with proposed modifications for the case of a singular solution. The third chapter gives the theoretical foundation for defining bordered systems and some known algorithms for solving them, and defines a new algorithm that accelerates convergence to a singular solution. The new algorithm is equally efficient but cheaper to use, since it involves no derivative evaluations. A combination of the new algorithm with the negative gradient method is also proposed, as is the application of a well-known method to the bordered system. The fourth chapter presents numerical results obtained by applying the defined algorithms to relevant examples, confirming the theoretical results.
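As a point of reference for the quasi-Newton class discussed above, here is a minimal sketch of the standard "good" Broyden iteration, which likewise avoids derivative evaluations. It is not the bordered-system algorithm proposed in the thesis, and the test system and starting point are illustrative assumptions.

```python
import numpy as np

def broyden(F, x0, B0=None, tol=1e-10, max_iter=100):
    """'Good' Broyden quasi-Newton iteration for F(x) = 0.

    No derivative evaluations: the Jacobian approximation B is updated
    from the observed step s and residual change y by a rank-one update.
    """
    x = np.asarray(x0, dtype=float)
    B = np.eye(x.size) if B0 is None else np.asarray(B0, dtype=float)
    Fx = F(x)
    for _ in range(max_iter):
        if np.linalg.norm(Fx) < tol:
            break
        s = np.linalg.solve(B, -Fx)                  # quasi-Newton step
        x_new = x + s
        F_new = F(x_new)
        y = F_new - Fx
        B = B + np.outer(y - B @ s, s) / (s @ s)     # Broyden rank-one update
        x, Fx = x_new, F_new
    return x

# Example: F(x, y) = (x^2 - 1, x*y - 1), with solutions at (1, 1) and (-1, -1)
F = lambda v: np.array([v[0] ** 2 - 1.0, v[0] * v[1] - 1.0])
x_star = broyden(F, x0=np.array([2.0, 2.0]))
```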
