Type of Document: Master's Thesis
Author: Madabushi, Ananth R.
URN: etd-612112439741131
Title: Lagrangian Relaxation / Dual Approaches For Solving Large-Scale Linear Programming Problems
Degree: Master of Science
Department: Industrial and Systems Engineering

Advisory Committee:
- Jacobson, Sheldon H.
- Kobza, John E.
- Sherali, Hanif D. (Committee Chair)

Keywords:

- Lagrangian relaxation
- subgradient
- line search
- primal recovery
- penalty function
Date of Defense: 1997-02-17
Availability: unrestricted

Abstract

This research effort focuses on large-scale linear programming problems that arise in the context of solving various problems, such as discrete linear or polynomial programming problems and continuous nonlinear, nonconvex programming problems, using linearization and branch-and-cut algorithms for the discrete case and polyhedral outer-approximation methods for the continuous case. These problems arise in various applications in production planning, location-allocation, game theory, economics, and many engineering and systems design problems. During the solution process of discrete or continuous nonconvex problems using polyhedral approaches, one has to contend with repeatedly solving large-scale linear programming (LP) relaxations. Thus, it becomes imperative to employ an efficient method for solving these problems. It has been amply demonstrated that solving LP relaxations using a simplex-based algorithm, or even an interior-point procedure, can be inadequately slow (especially in the presence of complicating constraints, dense coefficient matrices, and ill-conditioning) in comparison with a Lagrangian relaxation approach.

With this motivation, we present a practical primal-dual subgradient algorithm that incorporates a dual ascent, a primal recovery, and a penalty function approach to recover a near-optimal and feasible pair of primal and dual solutions. The proposed primal-dual approach comprises three stages. Stage I solves the Lagrangian dual problem using various subgradient deflection strategies such as the Modified Gradient Technique (MGT), the Average Direction Strategy (ADS), and a new direction strategy called the Modified Average Direction Strategy (M-ADS). In the latter, the deflection parameter is determined by projecting the unknown optimal direction onto the space spanned by the current subgradient direction and the previous direction; this projected direction approximates the desired optimal direction as closely as possible using the conjugate subgradient concept. The step-length rules implemented in this regard are the Quadratic Fit Line Search Method and a new line search method called the Directional Derivative Line Search Method, in which we start with a prescribed step-length and then ascertain whether to increase or decrease the step-length value based on the right-hand and left-hand derivative information available at each iteration. In the second stage of the algorithm (Stage II), a sequence of updated primal solutions is generated using convex combinations of the Lagrangian subproblem solutions. Alternatively, a starting primal optimal solution can be obtained using the complementary slackness conditions.
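The Stage I and Stage II ideas can be illustrated with a minimal sketch. The following is not the thesis's algorithm: it uses an ADS-style deflection (the M-ADS projection rule and the two line searches are replaced by a simple diminishing step) on the Lagrangian dual of a toy LP whose subproblem is a box-constrained minimization with a closed-form solution, and it recovers a primal point as a convex combination of subproblem solutions. All function names here are our own illustrative choices.

```python
import numpy as np

def solve_subproblem(u, c, A):
    # Lagrangian subproblem: minimize (c - A^T u)^T x over the box [0,1]^n.
    # Closed form: set x_j = 1 exactly where the reduced cost is negative.
    reduced = c - A.T @ u
    return np.where(reduced < 0, 1.0, 0.0)

def deflected_subgradient(c, A, b, iters=200):
    # Maximize the Lagrangian dual L(u) = min_x c^T x + u^T (b - A x), u >= 0,
    # for the LP: min c^T x  s.t.  A x >= b,  x in [0,1]^n.
    u = np.zeros(len(b))
    d_prev = np.zeros(len(b))
    x_avg = np.zeros(len(c))      # ergodic primal iterate (Stage II idea)
    weight = 0.0
    best = -np.inf
    for k in range(1, iters + 1):
        x = solve_subproblem(u, c, A)
        g = b - A @ x                       # a subgradient of L at u
        best = max(best, c @ x + u @ g)     # dual value L(u)
        # ADS-style deflection: average the subgradient with the previous
        # direction, scaled to equal norms (not the thesis's M-ADS rule).
        if np.linalg.norm(d_prev) > 1e-12:
            d = g + (np.linalg.norm(g) / np.linalg.norm(d_prev)) * d_prev
        else:
            d = g
        t = 1.0 / k                         # diminishing step length
        u = np.maximum(u + t * d, 0.0)      # dual ascent step, projected onto u >= 0
        d_prev = d
        # Primal recovery: step-length-weighted convex combination of the
        # Lagrangian subproblem solutions.
        x_avg = (weight * x_avg + t * x) / (weight + t)
        weight += t
    return best, x_avg
```

On the toy instance min x1 + 2 x2 subject to x1 + x2 >= 1, x in [0,1]^2 (optimal value 1), the dual values converge to the LP optimum while the averaged primal iterate drifts toward feasibility; the convex-combination average smooths the oscillation of the individual subproblem solutions.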

Depending on the extent of feasibility and optimality attained, Stage III applies a penalty function method to improve the obtained primal solution toward a near-feasible and optimal solution. We present computational experience using a set of randomly generated, structured linear programming problems of the type that might typically arise in the context of discrete optimization.
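The Stage III penalty idea can likewise be sketched. The following is an illustrative quadratic-penalty refinement, not the thesis's method: starting from a possibly infeasible primal point, it takes projected-gradient steps on the penalized objective c^T x + mu * ||max(0, b - Ax)||^2 over the box [0,1]^n. The fixed penalty weight and step size are arbitrary choices made for the sketch.

```python
import numpy as np

def penalty_refine(x0, c, A, b, mu=10.0, steps=500, lr=0.01):
    # Drive a primal point toward feasibility for A x >= b by minimizing
    # c^T x + mu * sum(max(0, b - A x)^2) over the box [0,1]^n
    # with projected-gradient steps.
    x = x0.copy()
    for _ in range(steps):
        viol = np.maximum(b - A @ x, 0.0)      # constraint violations
        grad = c - 2.0 * mu * (A.T @ viol)     # gradient of the penalized objective
        x = np.clip(x - lr * grad, 0.0, 1.0)   # gradient step, projected onto the box
    return x
```

As is typical of a quadratic penalty with a fixed weight, the refined point retains a small residual infeasibility of order c / (2 mu); in practice one would increase mu over a sequence of such minimizations.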

Files:
- Amadhabu.pdf (173.75 Kb)
- etd.pdf (173.75 Kb)
