
Lagrangian Relaxation / Dual Approaches For Solving Large-Scale Linear Programming Problems

This research effort focuses on large-scale linear programming problems that arise in the context of solving various problems such as discrete linear or polynomial programming problems and continuous nonlinear, nonconvex programming problems, using linearization and branch-and-cut algorithms for the discrete case and polyhedral outer-approximation methods for the continuous case. These problems arise in various applications in production planning, location-allocation, game theory, economics, and many engineering and systems design problems. During the solution process of discrete or continuous nonconvex problems using polyhedral approaches, one has to contend with repeatedly solving large-scale linear programming (LP) relaxations. Thus, it becomes imperative to employ an efficient method for solving these problems. It has been amply demonstrated that solving LP relaxations using a simplex-based algorithm, or even an interior-point procedure, can be unacceptably slow (especially in the presence of complicating constraints, dense coefficient matrices, and ill-conditioning) in comparison with a Lagrangian Relaxation approach. With this motivation, we present a practical primal-dual subgradient algorithm that incorporates a dual ascent strategy, a primal solution recovery scheme, and a penalty function approach to recover a near-optimal and feasible pair of primal and dual solutions.
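For context, and in generic notation not drawn verbatim from the thesis, a Lagrangian relaxation of an LP relaxation min{c'x : Ax >= b, x in X} dualizes the complicating constraints Ax >= b with multipliers u >= 0 and solves the resulting dual problem:

\[
\max_{u \ge 0}\ \theta(u), \qquad
\theta(u) \;=\; \min_{x \in X}\ \Big\{\, c^{\top}x + u^{\top}\big(b - Ax\big) \,\Big\},
\]

where X is a set over which the Lagrangian subproblem is easy to solve. Any subproblem minimizer x(u) supplies the subgradient b - Ax(u) of the concave dual function theta at u, which is the information exploited by the subgradient scheme described next.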

The proposed primal-dual approach comprises three stages. Stage I solves the Lagrangian dual problem using various subgradient deflection strategies such as the Modified Gradient Technique (MGT), the Average Direction Strategy (ADS), and a new direction strategy called the Modified Average Direction Strategy (M-ADS). In the latter, the deflection parameter is determined by projecting the unknown optimal direction onto the space spanned by the current subgradient direction and the previous direction; this projected direction approximates the desired optimal direction as closely as possible using the conjugate subgradient concept. The step-length rules implemented in this regard are the Quadratic Fit Line Search Method and a new line search method called the Directional Derivative Line Search Method, in which we start with a prescribed step-length and then decide whether to increase or decrease it based on the right-hand and left-hand derivative information available at each iteration. In the second stage (Stage II), a sequence of updated primal solutions is generated using convex combinations of the Lagrangian subproblem solutions; alternatively, a starting primal optimal solution can be obtained using the complementary slackness conditions. Depending on the extent of feasibility and optimality attained, Stage III applies a penalty function method to drive the obtained primal solution toward a near-feasible and near-optimal solution.
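The following is a minimal sketch, in Python with NumPy, of the flavor of Stages I and II: a dual ascent on theta(u) that deflects the subgradient with the Average Direction Strategy, combined with an ergodic (running-average) recovery of a primal point from the Lagrangian subproblem solutions. The LP data, the box-constrained set X, and the diminishing step-size rule (used here in place of the thesis's quadratic-fit and directional-derivative line searches) are illustrative assumptions, not details taken from the thesis.

# A minimal, self-contained sketch of a deflected-subgradient dual ascent with
# ergodic primal averaging, in the spirit of Stages I and II described above.
# It is NOT the thesis's exact method: the deflection is the simple Average
# Direction Strategy (ADS), a diminishing step size replaces the line searches,
# and the LP data are toy numbers chosen only for illustration.

import numpy as np

# LP relaxation:  min c'x  s.t.  A x >= b  (complicating),  0 <= x <= ub  (kept in X)
c  = np.array([3.0, 2.0, 4.0])
A  = np.array([[1.0, 1.0, 2.0],
               [2.0, 0.0, 1.0]])
b  = np.array([4.0, 3.0])
ub = np.array([3.0, 3.0, 3.0])

def lagrangian_subproblem(mult):
    """Minimize c'x + mult'(b - Ax) over the box 0 <= x <= ub (separable)."""
    reduced_cost = c - A.T @ mult
    x = np.where(reduced_cost < 0.0, ub, 0.0)   # pick the cheaper bound per coordinate
    value = c @ x + mult @ (b - A @ x)          # dual function value theta(mult)
    subgrad = b - A @ x                         # subgradient of theta at mult
    return x, value, subgrad

mult = np.zeros(len(b))        # dual multipliers (>= 0)
direction = np.zeros(len(b))   # previous deflected direction
x_bar = np.zeros(len(c))       # running average of subproblem solutions (Stage II)
best_dual = -np.inf

for k in range(1, 201):
    x_k, theta_k, g = lagrangian_subproblem(mult)
    best_dual = max(best_dual, theta_k)

    # Average Direction Strategy: deflect the subgradient toward the previous direction.
    if np.linalg.norm(direction) > 1e-12:
        direction = g + (np.linalg.norm(g) / np.linalg.norm(direction)) * direction
    else:
        direction = g

    # Diminishing step size (stand-in for the thesis's step-length rules).
    step = 1.0 / k
    mult = np.maximum(mult + step * direction, 0.0)   # dual ascent, projected onto u >= 0

    # Stage II flavor: ergodic average of the Lagrangian subproblem solutions.
    x_bar = ((k - 1) * x_bar + x_k) / k

violation = np.maximum(b - A @ x_bar, 0.0)
print("best dual value:", best_dual)
print("averaged primal point:", x_bar, "constraint violation:", violation)

In a full implementation, a Stage III step would then apply a penalty function method to push the averaged point toward feasibility in the dualized constraints.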

We present computational experience using a set of randomly generated, structured linear programming problems of the type that might typically arise in the context of discrete optimization.

Master of Science

Identifier oai:union.ndltd.org:VTETD/oai:vtechworks.lib.vt.edu:10919/36833
Date 17 February 1997
Creators Madabushi, Ananth R.
Contributors Industrial and Systems Engineering, Sherali, Hanif D., Kobza, John E., Jacobson, Sheldon H.
Publisher Virginia Tech
Source Sets Virginia Tech Theses and Dissertations
Detected Language English
Type Thesis
Format application/pdf
Rights In Copyright, http://rightsstatements.org/vocab/InC/1.0/
Relation etd.pdf, Amadhabu.pdf
