In this research effort, we consider nondifferentiable optimization (NDO) problems that arise in several applications in science, engineering, and management, as well as in the context of other mathematical programming approaches such as Dantzig-Wolfe decomposition, Benders decomposition, Lagrangian duality, penalty function methods, and minimax problems. The importance and necessity of having effective solution methods for NDO problems have long been recognized by many scientists and engineers. However, the practical use of NDO techniques has been somewhat limited, principally due to the lack of computationally viable procedures that are supported by theoretical convergence properties and are suitable for solving large-scale problems. In this research, we present new algorithms that are based on popular, computationally effective strategies while not compromising on theoretical convergence.
First, a new variable target value method (VTVM) is introduced that possesses an ε-convergence property and that differs from other known methods in that it requires no prior assumptions regarding bounds on an optimum or on the solution space. In practice, the step length is often calculated using an estimate of the optimal objective function value; for general nondifferentiable optimization problems, however, such an estimate may not be readily available. Hence, we design an algorithm that does not assume the availability of an a priori estimate of the optimal objective function value. Furthermore, along with this new step-length rule, we present a new subgradient deflection strategy in which a selected subgradient is rotated optimally toward a point whose objective function value is less than the incumbent target value. We also develop another deflection strategy, based on Shor's space dilation algorithm, in which the resulting direction of motion turns out to be a particular convex combination of two successive subgradients, and we establish suitable convergence results.
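To illustrate the idea behind a variable target value, the sketch below runs a Polyak-type subgradient method in which the target is simply kept a fixed gap below the best objective value found so far and is tightened whenever the incumbent improves, so no a priori estimate of the optimum is needed. This is only an illustrative sketch of the general principle; the function names, the `gap` parameter, and the update rule are assumptions for this example and are not the dissertation's VTVM or its deflection strategies.

```python
import numpy as np

def vtvm_subgradient(f, subgrad, x0, gap=0.1, max_iter=5000):
    """Polyak-type subgradient method with a variable target value.

    The target is kept ``gap`` below the best objective value seen so
    far, so no a priori estimate of the optimal value is required.
    (Illustrative sketch only, not the dissertation's algorithm.)
    """
    x = np.asarray(x0, dtype=float)
    best_x, best_f = x.copy(), f(x)
    target = best_f - gap
    for _ in range(max_iter):
        g = subgrad(x)
        norm2 = float(g @ g)
        if norm2 == 0.0:                  # 0 is a subgradient: x is optimal
            break
        step = (f(x) - target) / norm2    # Polyak step toward the target
        x = x - step * g
        fx = f(x)
        if fx < best_f:                   # incumbent improved: tighten target
            best_f, best_x = fx, x.copy()
            target = best_f - gap
    return best_x, best_f

# Example: minimize the nondifferentiable function |x1| + 2|x2|
f = lambda x: abs(x[0]) + 2 * abs(x[1])
sg = lambda x: np.array([np.sign(x[0]) or 1.0, 2 * (np.sign(x[1]) or 1.0)])
x_best, f_best = vtvm_subgradient(f, sg, [3.0, 4.0])
```

Because the target always sits strictly below the incumbent, the step length never collapses to zero prematurely, while the tightening rule drives the incumbent toward the (unknown) optimal value.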
In the second part of this dissertation, we consider Lagrangian dual problems. Our motivation here is the inadequacy of the simplex method, or even interior point methods, for obtaining quick, near-optimal solutions to large linear programming relaxations of certain discrete or combinatorial optimization problems. Lagrangian dual methods, on the other hand, are well equipped to deal with complicating constraints and ill-conditioning. However, available optimization methods for such problems are not very satisfactory and can stall far from the optimal objective function value, and there is no practical implementation strategy for recovering a primal optimal solution. This is a major shortcoming, even if the method is used only for bounding purposes within a branch-and-bound scheme. With this motivation, we present new primal convergence theorems that generalize existing results on recovering primal optimal solutions, and we describe a scheme for embedding these results within a practical primal-dual Lagrangian optimization procedure. / Ph. D.
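The primal-recovery issue discussed above can be illustrated on a toy linear program with a single complicating constraint: the dual subproblem solutions are all extreme points of the box, yet a running average of them approaches the (possibly fractional) primal optimum. The sketch below uses plain subgradient ascent on the dual and simple iterate averaging; the function name, step rule, and averaging scheme are assumptions for this example, not the dissertation's convergence theorems or recovery procedure.

```python
import numpy as np

def lagrangian_subgradient(c, a, rhs, step=0.05, iters=2000):
    """Subgradient ascent on the Lagrangian dual of
        min c.x  s.t.  a.x >= rhs,  0 <= x <= 1,
    dualizing the single complicating constraint with multiplier u >= 0.
    A running average of the subproblem solutions serves as an
    approximate primal solution (illustrative sketch only).
    """
    c, a = np.asarray(c, float), np.asarray(a, float)
    u = 0.0
    x_avg = np.zeros_like(c)
    best_dual = -np.inf
    for k in range(iters):
        # The Lagrangian subproblem separates over the box [0, 1]^n
        x = (c - u * a < 0).astype(float)
        best_dual = max(best_dual, c @ x + u * (rhs - a @ x))
        g = rhs - a @ x                   # subgradient of the dual at u
        u = max(0.0, u + step * g)        # projected ascent step
        x_avg += (x - x_avg) / (k + 1)    # running average of primal iterates
    return x_avg, best_dual, u

# min x1 + 2 x2 + 3 x3  s.t.  x1 + x2 + x3 >= 1.5,  x in [0,1]^3
# primal optimum (1, 0.5, 0) with value 2; subproblem solutions oscillate
# between (1,0,0) and (1,1,0), but their average recovers the fraction.
xbar, dual, u = lagrangian_subgradient([1.0, 2.0, 3.0], [1.0, 1.0, 1.0], 1.5)
```

The dual iterates hover around the optimal multiplier while the subproblem answers jump between vertices; it is exactly this oscillation that makes a direct primal readout unreliable and motivates averaging-based recovery results.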
Identifier | oai:union.ndltd.org:VTETD/oai:vtechworks.lib.vt.edu:10919/38657 |
Date | 19 June 2006 |
Creators | Choi, Gyunghyun |
Contributors | Industrial and Systems Engineering, Sherali, Hanif D., Beattie, Christopher A., Loganathan, G. V., Koelling, C. Patrick, Sarin, Subhash C. |
Publisher | Virginia Tech |
Source Sets | Virginia Tech Theses and Dissertations |
Language | English |
Detected Language | English |
Type | Dissertation, Text |
Format | vii, 161 leaves, BTD, application/pdf |
Rights | In Copyright, http://rightsstatements.org/vocab/InC/1.0/ |
Relation | OCLC# 30902055, LD5655.V856_1993.C565.pdf |