
A New Interpolation Approach for Linearly Constrained Convex Optimization

In this thesis we propose a new class of Linearly Constrained Convex Optimization
methods based on a generalization of Shepard's interpolation formula. We prove
properties of the resulting surface, such as the interpolation property at the boundary
of the feasible region and the convergence of the gradient to the null space of the
constraints at the boundary. We explore several descent techniques, including steepest
descent, two quasi-Newton methods, and Newton's method. Moreover, we implement
several versions of the method in Matlab, in particular for the case of Quadratic
Programming with bounded variables. Finally, we carry out performance tests against
Matlab Optimization Toolbox methods for convex optimization and against implementations
of the standard log-barrier and active-set methods. We conclude that steepest descent
currently appears to be the best choice for our method and that it is competitive with
standard methods in both performance and empirical growth order.
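For reference, a minimal sketch of the classical Shepard (inverse-distance-weighted) interpolation formula that the thesis generalizes; the data points x_i, values f_i, and weighting exponent p here are generic placeholders, not notation taken from the thesis itself:

\[
S(x) \;=\; \frac{\sum_{i=1}^{n} w_i(x)\, f_i}{\sum_{i=1}^{n} w_i(x)},
\qquad
w_i(x) \;=\; \frac{1}{\lVert x - x_i \rVert^{p}}, \quad p > 0,
\]

so that, by construction, S(x_i) = f_i at each data point; the interpolation property at the boundary of the feasible region mentioned in the abstract is analogous in spirit.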

Identifier: oai:union.ndltd.org:kaust.edu.sa/oai:repository.kaust.edu.sa:10754/244891
Date: 08 1900
Creators: Espinoza, Francisco
Contributors: Rockwood, Alyn; Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division; Turkiyyah, George; Zhang, Xiangliang
Source Sets: King Abdullah University of Science and Technology
Language: English
Type: Thesis
