21 
Approximation algorithms for Lp ball and quadratically constrained polynomial optimization problems. January 2013 (has links)
This thesis studies the computational complexity of polynomial optimization problems with Lp ball and quadratic constraints, together with approximation algorithms for such problems. Using a tensor symmetrization technique, we prove for the first time that for p ∈ [2, ∞], polynomial optimization problems of any fixed degree over the Lp ball are NP-hard. Exploiting norm duality, we transform these optimization problems into problems of determining the radius of a convex body, which gives access to algorithmic tools unavailable to previous studies. Specifically, using tools from computational convex geometry, we obtain a deterministic polynomial-time approximation algorithm with ratio [with formula] for Lp ball constrained polynomial optimization, where d is the degree of the objective polynomial and n is the dimension of the problem. Using randomization, we further improve the ratio to [with formula], the best currently known for this class of problems. In addition, we develop the methods of computational convex geometry for computing radii of convex bodies, and design an approximation algorithm with ratio [with formula] for quadratically constrained polynomial optimization problems, where m is the number of constraints. Our results unify and improve upon previous work on these problems. We believe that the new algorithmic tools used in this thesis will find wider application in future research on polynomial optimization. / In this thesis, we present polynomial time approximation algorithms for solving various homogeneous polynomial optimization problems and their multilinear relaxations. Specifically, for the problems with Lp ball constraint, where p ∈ [2, ∞], by reducing them to that of determining the Lq diameter of a certain convex body, we show that they can be approximated to within a factor of [with formula] in deterministic polynomial time, where q = p/(p - 1) is the conjugate of p, n is the number of variables, and d is the degree of the polynomial. We further show that with the help of randomization, the approximation guarantee can be improved to [with formula], which is independent of p and is currently the best for the aforementioned problems. Moreover, we extend the argument of the deterministic algorithm mentioned above to solve the quadratically constrained polynomial optimization problems. In particular, for any intersection of ellipsoids K, we can, in polynomial time, construct a random polytope P, which satisfies [with formula]. Then, by reducing the problem to that of evaluating the maximum polytopal norm [with formula] induced by P, over a certain convex body, we can approximate the quadratically constrained problem within a factor of [with formula] in polynomial time. Our results unify and generalize those in the literature, which focus either on the quadratic case or the case where [with formula]. We believe that the wide array of tools used in this thesis will have further applications in the study of polynomial optimization problems.
/ Detailed summary in vernacular field only. / Hou, Ke. / On title page "p" is subscript. / Thesis (Ph.D.) -- Chinese University of Hong Kong, 2013. / Includes bibliographical references (leaves 106-111). / Abstracts also in Chinese.
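The multilinear relaxations mentioned in the abstract work with the symmetric tensor that represents the homogeneous polynomial. As a purely illustrative aside (our sketch, not the thesis's algorithms; `poly_value`, `naive_lower_bound`, and all data are assumptions), evaluating such a form and taking a naive sampling lower bound over the Lp sphere looks like this:

```python
import numpy as np

def poly_value(A, x):
    """Evaluate the homogeneous form f(x) = A(x, ..., x) for a symmetric
    tensor A of order d, stored as a dense ndarray."""
    v = A
    for _ in range(A.ndim):
        v = v @ x                       # contract one index at a time
    return float(v)

def sample_lp_sphere(n, p, rng):
    """Draw a (non-uniform) point on the unit Lp sphere by normalizing."""
    x = rng.standard_normal(n)
    return x / np.linalg.norm(x, ord=p)

def naive_lower_bound(A, p, trials=500, seed=0):
    """Random-sampling lower bound on max f(x) over the unit Lp sphere.
    This is only a baseline; the thesis's guarantees come from convex
    geometry, not from sampling."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    return max(poly_value(A, sample_lp_sphere(n, p, rng))
               for _ in range(trials))
```

For a rank-one cubic form A = e ⊗ e ⊗ e the true maximum over the L2 sphere is 1, so the sampled bound must land in (0, 1].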

22 
Portfolio Selection Under Nonsmooth Convex Transaction Costs -- Potaptchik, Marina, January 2006 (has links)
We consider a portfolio selection problem in the presence of transaction costs. Transaction costs on each asset are assumed to be a convex function of the amount sold or bought; this function may be nondifferentiable at a finite number of points. The objective function of this problem is the sum of a convex twice-differentiable function and a separable convex nondifferentiable function. We first consider the problem in the presence of linear constraints and later generalize the results to the case when the constraints are given by convex piecewise-linear functions. <br /><br /> Due to its special structure, this problem can be replaced by an equivalent differentiable problem in a higher dimension. The main drawback of that reformulation is efficiency: the higher-dimensional problem is computationally expensive to solve. <br /><br /> We propose several alternative ways to solve this problem which do not require introducing new variables or constraints. We derive the optimality conditions for this problem using subdifferentials. First, we generalize an active-set method to this class of problems. We solve the problem by considering a sequence of equality-constrained subproblems, each subproblem having a twice-differentiable objective function. Information gathered at each step is used to construct the subproblem for the next step. We also show how the nonsmoothness can be handled efficiently by using spline approximations. The problem is then solved using a primal-dual interior-point method. <br /><br /> If higher accuracy is needed, we do a crossover to an active-set method. Our numerical tests show that we can solve large-scale problems efficiently and accurately.
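The "equivalent differentiable problem in a higher dimension" mentioned in the abstract can be illustrated by the standard buy/sell split: writing each trade as the difference of two nonnegative variables turns a kinked proportional cost |t| into a linear one. A minimal sketch on toy two-asset data (all numbers and names are ours, not from the thesis):

```python
import numpy as np
from scipy.optimize import minimize

# Toy data (illustrative only): two assets.
Sigma = np.array([[0.10, 0.02],
                  [0.02, 0.08]])        # covariance matrix
mu    = np.array([0.06, 0.05])          # expected returns
cost  = np.array([0.01, 0.01])          # proportional transaction costs
x0    = np.array([0.5, 0.5])            # current holdings

def objective(z):
    # Lifted variables: buys b >= 0 and sells s >= 0 replace the trade t = b - s,
    # so the nondifferentiable cost c * |t| becomes the smooth term c @ (b + s).
    b, s = z[:2], z[2:]
    x = x0 + b - s                      # post-trade holdings
    return x @ Sigma @ x - mu @ x + cost @ (b + s)

res = minimize(objective, np.zeros(4), bounds=[(0.0, 1.0)] * 4)
```

The price of the lift is exactly the drawback the abstract notes: the problem doubles in dimension, which motivates the subdifferential-based methods developed instead.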

24 
Algoritmes vir die maksimering van konvekse en verwante knapsakprobleme / Visagie, S. E. January 2007 (has links)
Thesis (PhD) -- University of Stellenbosch, 2007. / Bibliography. Also available via the Internet.

25 
Solving convex programming with simple convex constraints -- Hou, Liangshao, 09 July 2020 (has links)
The problems studied in this thesis are linearly constrained convex programming (LCCP) and nonnegative matrix factorization (NMF). The resolution of both problems is closely related to convex programming with simple convex constraints. The work consists of three main parts. Firstly, an interior-point algorithm following a parameterized central path for linearly constrained convex programming is proposed. Convergence and polynomial-time complexity are proved under the assumption that the Hessian of the objective function is locally Lipschitz continuous. An initialization strategy is also proposed, and numerical results are provided to show the efficiency of the proposed algorithm. Secondly, the path-following algorithm is extended to general barrier functions. A class of barrier functions is proposed, and their corresponding paths are proved to be continuous and to converge to optimal solutions. Applying the path-following algorithm to these paths gives interior-point methods more flexibility. With some adjustments, the initialization method carries over, preserving both valid implementation and convergence. Thirdly, we study the convergence of the hierarchical alternating least squares algorithm (HALS) and its fast form (Fast HALS) for nonnegative matrix factorization. The coordinate-descent interpretation of these algorithms is restated. With a precise estimate of the objective reduction, some limiting properties are established. The accumulation points are proved to be stationary points, and some adjustments are proposed to improve the implementation and efficiency.
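The HALS updates analyzed in the third part cycle through the rank-one factors, solving each nonnegativity-constrained least-squares block in closed form. A minimal sketch of the standard HALS iteration (our illustrative code, not the thesis's implementation):

```python
import numpy as np

def hals_nmf(X, r, iters=300, seed=0):
    """Hierarchical alternating least squares for X ~ W @ H with W, H >= 0.
    Each column of W (row of H) is updated exactly with the other factors
    fixed, then clipped at zero -- the coordinate-descent view restated above."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, r))
    H = rng.random((r, n))
    eps = 1e-12                          # guards against a vanishing factor
    for _ in range(iters):
        XHt, HHt = X @ H.T, H @ H.T      # update columns of W, H fixed
        for k in range(r):
            W[:, k] = np.maximum(
                0.0, W[:, k] + (XHt[:, k] - W @ HHt[:, k]) / (HHt[k, k] + eps))
        WtX, WtW = W.T @ X, W.T @ W      # update rows of H, W fixed
        for k in range(r):
            H[k, :] = np.maximum(
                0.0, H[k, :] + (WtX[k, :] - WtW[k, :] @ H) / (WtW[k, k] + eps))
    return W, H
```

On an exactly low-rank nonnegative matrix the residual should shrink substantially, consistent with the monotone objective reduction the thesis quantifies.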

26 
Interval convex programming, orthogonal linear programming, and program generation procedures -- Ristroph, John Heard, 05 January 2010 (has links)
Three topics are developed: interval convex programming, orthogonal linear programming, and program generation techniques. The interval convex programming problem is similar to the convex programming problem over the real numbers except that all parameters are specified as intervals of real numbers rather than as real scalars. The interval programming solution procedure involves the solution of a series of 2n real-valued convex programs, where n is the dimension of the space. The solution of an interval programming problem is an interval vector which contains all possible solutions to any real-valued convex program that may be realized.
Attempts to improve the efficiency of the interval convex programming problem led to the eventual development of a new solution procedure for the real-valued linear programming problem: orthogonal linear programming. This new algorithm evolved from some heuristic procedures initially examined in the attempt to improve solution efficiency. In the course of testing these heuristics, which were unsuccessful, procedures were developed whereby it is possible to generate discrete and continuous mathematical programs with randomly chosen parameters but known solutions. / Ph. D.
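The interval idea above can be illustrated on a toy case where endpoint programs suffice: for a strictly convex 1-D quadratic the minimizer x* = -b / (2a) is monotone in each coefficient, so solving the real-valued programs at the interval endpoints brackets every realizable solution. This sketch is ours, not Ristroph's procedure, and general interval programs need the 2n-program machinery described in the abstract:

```python
from itertools import product

def interval_quadratic_argmin(a_iv, b_iv):
    """Bracket the minimizer of f(x) = a*x**2 + b*x over all realizations
    a in a_iv, b in b_iv (assuming a > 0 throughout the interval).
    Each real-valued program has the closed form x* = -b / (2a); we
    enumerate the endpoint programs and return the containing interval."""
    sols = [-b / (2.0 * a) for a, b in product(a_iv, b_iv)]
    return min(sols), max(sols)
```

Any interior realization, e.g. a = 1.5, b = 3, then yields a minimizer inside the returned interval vector.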

27 
Algoritmes vir die maksimering van konvekse en verwante knapsakprobleme -- Visagie, Stephan E., 03 1900 (has links)
Thesis (PhD (Logistics)) -- University of Stellenbosch, 2007. / In this dissertation original algorithms are introduced to solve separable resource allocation problems
(RAPs) with increasing nonlinear functions in the objective function, and lower and upper bounds on
each variable. Algorithms are introduced in three special cases. The first case arises when the objective
function of the RAP consists of the sum of convex functions and all the variables for these functions
range over the same interval. In the second case RAPs with the sum of convex functions in the objective
function are considered, but the variables of these functions can range over different intervals. In the
last special case RAPs with an objective function comprising the sum of convex and concave functions
are considered. In this case the intervals of the variables can range over different values.
In the first case two new algorithms, namely the fraction and the slope algorithm are presented to solve
the RAPs adhering to the conditions of the case. Both these algorithms yield far better solution times
than the existing branch and bound algorithm.
A new heuristic and three new algorithms are presented to solve RAPs falling into the second case. The
isobound heuristic yields, on average, good solutions relative to the optimal objective function value in
faster times than exact algorithms. The three algorithms, namely the isobound algorithm, the branch
and cut algorithm and the isobound branch and cut algorithm also yield considerably better solution
times than the existing branch and bound algorithm. It is shown that, on average, the isobound branch
and cut algorithm yields the fastest solution times, followed by the isobound algorithm and then by the
branch and cut algorithm.
In the third case the necessary and sufficient conditions for optimality are considered. From this, the
conclusion is drawn that search techniques for points complying with the necessary conditions will take
too long relative to branch and bound techniques. Thus three new algorithms, namely the KL, SKL and
IKL algorithms are introduced to solve RAPs falling into this case. These algorithms are generalisations
of the branch and bound, branch and cut, and isobound algorithms respectively. The KL algorithm
was then used as a benchmark. Only the IKL algorithm yields a considerable improvement on the KL
algorithm.
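One structural fact these convex-objective RAPs exploit is that, when maximizing a sum of convex functions over a box with a single resource constraint, some optimum is a vertex of the feasible polytope and so has at most one variable strictly between its bounds. A tiny brute force over which variable is fractional (our illustrative sketch, not the fraction, slope, or isobound algorithms themselves):

```python
from itertools import product

def maximize_convex_rap(fs, lo, hi, R):
    """Maximize sum_i fs[i](x_i) s.t. sum(x) = R, lo[i] <= x_i <= hi[i],
    with each fs[i] convex: enumerate which variable j is fractional and
    pin every other variable to one of its bounds."""
    n = len(fs)
    best_val, best_x = float("-inf"), None
    for j in range(n):
        others = [i for i in range(n) if i != j]
        for bounds in product(*[(lo[i], hi[i]) for i in others]):
            xj = R - sum(bounds)                 # resource left for variable j
            if lo[j] - 1e-12 <= xj <= hi[j] + 1e-12:
                x = [0.0] * n
                for i, v in zip(others, bounds):
                    x[i] = v
                x[j] = min(max(xj, lo[j]), hi[j])
                val = sum(f(xi) for f, xi in zip(fs, x))
                if val > best_val:
                    best_val, best_x = val, x
    return best_val, best_x
```

The enumeration is exponential in n, which is why the dissertation's branch and bound, branch and cut, and isobound refinements matter; this sketch only exhibits the vertex property they rest on.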

28 
The design of transmitter/receiver and high speed analog to digital converters in wireless communication systems: a convex programming approach -- Zhao, Shaohua (趙少華), January 2008 (has links)
published_or_final_version / Electrical and Electronic Engineering / Doctoral / Doctor of Philosophy

29 
A value estimation approach to Iri-Imai's method for constrained convex optimization. January 2002 (has links)
Lam Sze Wan. / Thesis (M.Phil.) -- Chinese University of Hong Kong, 2002. / Includes bibliographical references (leaves 93-95). / Abstracts in English and Chinese. / Chapter 1 -- Introduction -- p.1 / Chapter 2 -- Background -- p.4 / Chapter 3 -- Review of Iri-Imai Algorithm for Convex Programming Problems -- p.10 / Chapter 3.1 -- Iri-Imai Algorithm for Convex Programming -- p.11 / Chapter 3.2 -- Numerical Results -- p.14 / Chapter 3.2.1 -- Linear Programming Problems -- p.15 / Chapter 3.2.2 -- Convex Quadratic Programming Problems with Linear Inequality Constraints -- p.17 / Chapter 3.2.3 -- Convex Quadratic Programming Problems with Convex Quadratic Inequality Constraints -- p.18 / Chapter 3.2.4 -- Summary of Numerical Results -- p.21 / Chapter 3.3 -- Chapter Summary -- p.22 / Chapter 4 -- Value Estimation Approach to Iri-Imai Method for Constrained Optimization -- p.23 / Chapter 4.1 -- Value Estimation Function Method -- p.24 / Chapter 4.1.1 -- Formulation and Properties -- p.24 / Chapter 4.1.2 -- Value Estimation Approach to Iri-Imai Method -- p.33 / Chapter 4.2 -- "A New Smooth Multiplicative Barrier Function Φθ+,u" -- p.35 / Chapter 4.2.1 -- Formulation and Properties -- p.35 / Chapter 4.2.2 -- "Value Estimation Approach to Iri-Imai Method by Using Φθ+,u" -- p.41 / Chapter 4.3 -- Convergence Analysis -- p.43 / Chapter 4.4 -- Numerical Results -- p.46 / Chapter 4.4.1 -- Numerical Results Based on Algorithm 4.1 -- p.46 / Chapter 4.4.2 -- Numerical Results Based on Algorithm 4.2 -- p.50 / Chapter 4.4.3 -- Summary of Numerical Results -- p.59 / Chapter 4.5 -- Chapter Summary -- p.60 / Chapter 5 -- Extension of Value Estimation Approach to Iri-Imai Method for More General Constrained Optimization -- p.61 / Chapter 5.1 -- Extension of Iri-Imai Algorithm 3.1 for More General Constrained Optimization -- p.62 / Chapter 5.1.1 -- Formulation and Properties -- p.62 / Chapter 5.1.2 -- Extension of Iri-Imai Algorithm 3.1 -- p.63 / Chapter 5.2 -- Extension of Value Estimation Approach to Iri-Imai Algorithm 4.1 for More General Constrained Optimization -- p.64 / Chapter 5.2.1 -- Formulation and Properties -- p.64 / Chapter 5.2.2 -- Value Estimation Approach to Iri-Imai Method -- p.67 / Chapter 5.3 -- Extension of Value Estimation Approach to Iri-Imai Algorithm 4.2 for More General Constrained Optimization -- p.69 / Chapter 5.3.1 -- Formulation and Properties -- p.69 / Chapter 5.3.2 -- Value Estimation Approach to Iri-Imai Method -- p.71 / Chapter 5.4 -- Numerical Results -- p.72 / Chapter 5.4.1 -- Numerical Results Based on Algorithm 5.1 -- p.73 / Chapter 5.4.2 -- Numerical Results Based on Algorithm 5.2 -- p.76 / Chapter 5.4.3 -- Numerical Results Based on Algorithm 5.3 -- p.78 / Chapter 5.4.4 -- Summary of Numerical Results -- p.86 / Chapter 5.5 -- Chapter Summary -- p.87 / Chapter 6 -- Conclusion -- p.88 / Bibliography -- p.93 / Chapter A -- Search Directions -- p.96 / Chapter A.1 -- Newton's Method -- p.97 / Chapter A.1.1 -- Golden Section Method -- p.99 / Chapter A.2 -- Gradients and Hessian Matrices -- p.100 / Chapter A.2.1 -- Gradient of Φθ(x) -- p.100 / Chapter A.2.2 -- Hessian Matrix of Φθ(x) -- p.101 / Chapter A.2.3 -- Gradient of φθ(x) -- p.101 / Chapter A.2.4 -- Hessian Matrix of φθ(x) -- p.102 / Chapter A.2.5 -- Gradient and Hessian Matrix of Φθ(x) in Terms of ∇xφθ(x) and ∇2xxφθ(x) -- p.102 / Chapter A.2.6 -- "Gradient of φθ+,u(x)" -- p.102 / Chapter A.2.7 -- "Hessian Matrix of φθ+,u(x)" -- p.103 / Chapter A.2.8 -- "Gradient and Hessian Matrix of Φθ+,u(x) in Terms of ∇xφθ+,u(x) and ∇2xxφθ+,u(x)" -- p.103 / Chapter A.3 -- Newton's Directions -- p.103 / Chapter A.3.1 -- Newton Direction of Φθ(x) in Terms of ∇xφθ(x) and ∇2xxφθ(x) -- p.104 / Chapter A.3.2 -- "Newton Direction of Φθ+,u(x) in Terms of ∇xφθ+,u(x) and ∇2xxφθ+,u(x)" -- p.104 / Chapter A.4 -- Feasible Descent Directions for the Minimization Problems (Pθ) and (Pθ+) -- p.105 / Chapter A.4.1 -- Feasible Descent Direction for the Minimization Problems (Pθ) -- p.105 / Chapter A.4.2 -- Feasible Descent Direction for the Minimization Problems (Pθ+) -- p.107 / Chapter B -- Randomly Generated Test Problems for Positive Definite Quadratic Programming -- p.109 / Chapter B.1 -- Convex Quadratic Programming Problems with Linear Constraints -- p.110 / Chapter B.1.1 -- General Description of Test Problems -- p.110 / Chapter B.1.2 -- The Objective Function -- p.112 / Chapter B.1.3 -- The Linear Constraints -- p.113 / Chapter B.2 -- Convex Quadratic Programming Problems with Quadratic Inequality Constraints -- p.116 / Chapter B.2.1 -- The Quadratic Constraints -- p.117
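Appendix A of this thesis pairs Newton's method with a Golden Section Method for the line search. As a minimal sketch (our code, not the thesis's), golden-section search minimizes a unimodal function by shrinking a bracket at the inverse golden ratio:

```python
import math

def golden_section_min(f, a, b, tol=1e-8):
    """Minimize a unimodal f on [a, b] by golden-section search.
    Written for clarity: f is re-evaluated at both interior points each
    iteration, though one value could be reused."""
    invphi = (math.sqrt(5) - 1) / 2          # 1/phi, about 0.618
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while b - a > tol:
        if f(c) < f(d):                      # minimum lies in [a, d]
            b, d = d, c
            c = b - invphi * (b - a)
        else:                                # minimum lies in [c, b]
            a, c = c, d
            d = a + invphi * (b - a)
    return (a + b) / 2
```

Each iteration shrinks the bracket by the constant factor 1/phi, which is what makes it a convenient safeguard inside a damped Newton step.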

30 
ROI: An extensible R Optimization Infrastructure -- Theußl, Stefan; Schwendinger, Florian; Hornik, Kurt, 01 1900 (has links) (PDF)
Optimization plays an important role in many methods routinely used in statistics, machine learning and data science. Often, implementations of these methods rely on highly specialized optimization algorithms, designed to be applicable only within a specific application. However, in many instances recent advances, in particular in the field of convex optimization, make it possible to use modern general-purpose solvers instead, which enables broader usage scenarios and thus promotes reusability.
This paper introduces the R Optimization Infrastructure which provides an extensible infrastructure to model linear, quadratic, conic and general nonlinear optimization problems in a consistent way.
Furthermore, the infrastructure administers many different solvers, reformulations, problem collections and functions to read and write optimization problems in various formats. / Series: Research Report Series / Department of Statistics and Mathematics
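ROI itself is an R package; as a language-neutral illustration of the design idea (a problem stated once, solvers registered and dispatched behind one interface), here is a deliberately tiny Python analogue. The `OP` class, the `SOLVERS` registry, and `solve` are entirely our own sketch and do not mirror ROI's actual API:

```python
from dataclasses import dataclass

import numpy as np
from scipy.optimize import linprog

@dataclass
class OP:
    """A linear problem stated once, independently of any solver:
    minimize objective @ x subject to A_ub @ x <= b_ub, x >= 0."""
    objective: np.ndarray
    A_ub: np.ndarray
    b_ub: np.ndarray

# Registry of available solvers; a real infrastructure would also hold
# reformulations and solver-capability metadata here.
SOLVERS = {
    "scipy_lp": lambda op: linprog(op.objective, A_ub=op.A_ub, b_ub=op.b_ub),
}

def solve(op, solver="scipy_lp"):
    """Dispatch the stored problem to the requested registered solver."""
    return SOLVERS[solver](op)
```

Swapping solvers then means changing one string, not restating the problem, which is the reusability argument the abstract makes.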
