21

Portfolio Selection Under Nonsmooth Convex Transaction Costs

Potaptchik, Marina January 2006 (has links)
We consider a portfolio selection problem in the presence of transaction costs. Transaction costs on each asset are assumed to be a convex function of the amount sold or bought; this function can be nondifferentiable at a finite number of points. The objective function of this problem is the sum of a convex twice-differentiable function and a separable convex nondifferentiable function. We first consider the problem in the presence of linear constraints and later generalize the results to the case when the constraints are given by convex piecewise-linear functions.

Due to its special structure, this problem can be replaced by an equivalent differentiable problem in a higher dimension. Its main drawback is efficiency, since the higher-dimensional problem is computationally expensive to solve.

We propose several alternative ways to solve this problem that do not require introducing new variables or constraints. We derive the optimality conditions for this problem using subdifferentials. First, we generalize an active set method to this class of problems. We solve the problem by considering a sequence of equality-constrained subproblems, each with a twice-differentiable objective function. Information gathered at each step is used to construct the subproblem for the next step. We also show how the nonsmoothness can be handled efficiently by using spline approximations. The problem is then solved using a primal-dual interior-point method.

If higher accuracy is needed, we do a crossover to an active set method. Our numerical tests show that we can solve large-scale problems efficiently and accurately.
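For readers who want to experiment with this problem class, the following is a minimal CVXPY sketch (a modern stand-in, not the thesis's own algorithms). All data is synthetic, and the proportional transaction-cost model and its rate are illustrative assumptions:

```python
import cvxpy as cp
import numpy as np

# Minimal sketch of the problem class: a smooth convex quadratic risk term plus
# a separable convex transaction cost that is nondifferentiable at zero trade.
# Synthetic data throughout; the proportional cost model is an assumption.
np.random.seed(0)
n = 10
mu = np.random.rand(n)              # expected returns (synthetic)
A = np.random.randn(n, n)
Sigma = A @ A.T / n + np.eye(n)     # positive definite covariance (synthetic)
x0 = np.ones(n) / n                 # current holdings
rate = 0.01                         # proportional transaction-cost rate (assumed)

x = cp.Variable(n)
trade_cost = rate * cp.sum(cp.abs(x - x0))   # nonsmooth separable convex part
risk = cp.quad_form(x, Sigma)                # smooth convex part
prob = cp.Problem(cp.Minimize(risk - mu @ x + trade_cost),
                  [cp.sum(x) == 1, x >= 0])  # linear constraints
prob.solve()
print(np.round(x.value, 4))
```

A generic solver like this handles the nondifferentiability internally; the thesis's contribution is doing so without lifting to a higher-dimensional problem, via subdifferential-based active set and spline-smoothed interior-point methods.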
22

Approximation algorithms for Lp-ball and quadratically constrained polynomial optimization problems.

January 2013 (has links)
In this thesis, we present polynomial-time approximation algorithms for solving various homogeneous polynomial optimization problems and their multilinear relaxations. Specifically, for the problems with an Lp-ball constraint, where p ∈ [2, ∞], by reducing them to that of determining the Lq-diameter of a certain convex body, we show that they can be approximated to within a factor of [with formula] in deterministic polynomial time, where q = p/(p - 1) is the conjugate of p, n is the number of variables, and d is the degree of the polynomial. We further show that, with the help of randomization, the approximation guarantee can be improved to [with formula], which is independent of p and is currently the best for the aforementioned problems. Moreover, we extend the argument of the deterministic algorithm mentioned above to solve quadratically constrained polynomial optimization problems. In particular, for any intersection of ellipsoids K, we can, in polynomial time, construct a random polytope P which satisfies [with formula]. Then, by reducing the problem to that of evaluating the maximum polytopal norm [with formula] induced by P over a certain convex body, we can approximate the quadratically constrained problem within a factor of [with formula] in polynomial time. Our results unify and generalize those in the literature, which focus either on the quadratic case or on the case where [with formula]. We believe that the wide array of tools used in this thesis will have further applications in the study of polynomial optimization problems. / Hou, Ke. / On title page "p" is subscript. / Thesis (Ph.D.) Chinese University of Hong Kong, 2013. / Includes bibliographical references (leaves 106-111). / Abstracts also in Chinese.
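Restated in display form (assuming the standard correspondence f(x) = F(x, ..., x) between a degree-d homogeneous polynomial and its symmetric multilinear form; the approximation factors stay elided as in the record):

```latex
% Problem setup as described in the abstract; approximation factors elided.
% f is a degree-d homogeneous polynomial in n variables, F its symmetric
% multilinear form, q the conjugate exponent of p.
\max_{x \in \mathbb{R}^n} f(x) \quad \text{s.t.} \quad \|x\|_p \le 1,
\qquad p \in [2, \infty], \qquad q = \tfrac{p}{p-1};
\qquad \text{multilinear relaxation:}\;
\max\bigl\{ F(x^1, \ldots, x^d) : \|x^i\|_p \le 1,\ i = 1, \ldots, d \bigr\}.
```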
23

Improved recurrent neural networks for convex optimization. / CUHK electronic theses & dissertations collection

January 2008 (has links)
Constrained optimization problems arise widely in scientific research and engineering applications. In the past two decades, solving optimization problems using recurrent neural networks has been extensively investigated due to the advantages of massively parallel operation and rapid convergence. In real applications, neural networks with simple architectures and good performance are desired. However, most existing neural networks have limitations and disadvantages in their convergence conditions or architectural complexity. This thesis concentrates on the analysis and design of recurrent neural networks with simplified architectures for solving more general convex optimization problems. In this thesis, some improved recurrent neural networks are proposed for solving smooth and nonsmooth convex optimization problems and applied to some selected applications. / In Part I, we first propose a one-layer recurrent neural network for solving linear programming problems. Compared with other neural networks for linear programming, the proposed neural network has a simpler architecture and better convergence properties. Second, a one-layer recurrent neural network is proposed for solving quadratic programming problems. The global convergence of the neural network can be guaranteed provided only that the objective function is convex on the equality constraint set, not necessarily convex everywhere. Compared with other neural networks for quadratic programming, such as the Lagrangian network and the projection neural network, the proposed neural network has a simpler architecture, in which the number of neurons equals the number of variables of the optimization problem. Third, combining the projection and penalty parameter methods, a one-layer recurrent neural network is proposed for solving general convex optimization problems with linear constraints. / In Part II, some improved recurrent neural networks are proposed for solving nonsmooth convex optimization problems. We first propose a one-layer recurrent neural network for solving nonsmooth convex programming problems with only equality constraints. This neural network simplifies the Lagrangian network and extends it to nonsmooth convex optimization problems. Then, a two-layer recurrent neural network is proposed for nonsmooth convex optimization subject to linear equality and bound constraints. / In Part III, some selected applications of the proposed neural networks are discussed. The k-winners-take-all (kWTA) operation is first converted to equivalent linear and quadratic optimization problems, and two kWTA network models are tailored to perform the kWTA operation. Then, the proposed neural networks are applied to some other problems, such as linear assignment, support vector machine learning and curve fitting. / Liu, Qingshan. / Source: Dissertation Abstracts International, Volume: 70-06, Section: B, page: 3606. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2008. / Includes bibliographical references (leaves 133-145). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstracts in English and Chinese. / School code: 1307.
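As a generic illustration of the projection-network idea underlying such models (a sketch only, not the specific one-layer architectures proposed in the thesis), the dynamics dx/dt = -x + P(x - a grad f(x)) can be simulated for a box-constrained quadratic program:

```python
import numpy as np

# Minimal sketch of a projection-type recurrent network for a box-constrained
# convex QP: minimize 0.5*x'Qx + c'x subject to l <= x <= u. This illustrates
# the general dynamics dx/dt = -x + P(x - alpha*(Qx + c)); it is not the
# specific one-layer models from the thesis. Data and gains are illustrative.
Q = np.array([[4.0, 1.0], [1.0, 3.0]])   # symmetric positive definite (synthetic)
c = np.array([-1.0, -2.0])
l, u = np.zeros(2), np.ones(2)

def project(x):
    return np.clip(x, l, u)              # projection onto the box constraints

x = np.zeros(2)
alpha, dt = 0.2, 0.05                    # gain and Euler step size (assumed)
for _ in range(2000):                    # integrate the network dynamics
    x = x + dt * (-x + project(x - alpha * (Q @ x + c)))
print(x)                                 # converges to the constrained minimizer
```

The equilibrium of these dynamics satisfies the projection fixed-point condition that characterizes the KKT point, which is why such networks converge to the optimizer under convexity assumptions.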
24

Solving convex programming with simple convex constraints

Hou, Liangshao 09 July 2020 (has links)
The problems studied in this thesis are linearly constrained convex programming (LCCP) and nonnegative matrix factorization (NMF). The resolution of both problems is closely related to convex programming with simple convex constraints. The work can be described in three parts. Firstly, an interior-point algorithm following a parameterized central path for linearly constrained convex programming is proposed. Convergence and polynomial-time complexity are proved under the assumption that the Hessian of the objective function is locally Lipschitz continuous. An initialization strategy is also proposed, and numerical results are provided to show the efficiency of the proposed algorithm. Secondly, the path-following algorithm is extended to general barrier functions. A class of barrier functions is proposed, and their corresponding paths are proved to be continuous and to converge to optimal solutions. Applying the path-following algorithm to these paths gives interior-point methods more flexibility. With some adjustments, the initialization method carries over to validate the implementation and convergence. Thirdly, we study the convergence of the hierarchical alternating least squares algorithm (HALS) and its fast form (Fast HALS) for nonnegative matrix factorization. The coordinate-descent idea behind these algorithms is restated. With a precise estimation of the objective reduction, some limiting properties are illustrated. The accumulation points are proved to be stationary points, and some adjustments are proposed to improve the implementation and efficiency.
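The HALS updates analyzed in the third part follow a standard column-by-column pattern; below is a minimal numpy sketch (dimensions, iteration count, and the small positivity floor are illustrative assumptions):

```python
import numpy as np

# Minimal sketch of the HALS update for NMF (X ~ W @ H, all nonnegative),
# the algorithm whose convergence the thesis analyzes. Sizes are illustrative.
rng = np.random.default_rng(0)
m, n, r = 30, 20, 5
X = rng.random((m, n))
W, H = rng.random((m, r)), rng.random((r, n))

for _ in range(200):
    # Update W one column at a time (coordinate descent over columns).
    XHt, HHt = X @ H.T, H @ H.T
    for k in range(r):
        W[:, k] = np.maximum(1e-16, W[:, k] + (XHt[:, k] - W @ HHt[:, k]) / HHt[k, k])
    # Update H one row at a time, symmetrically.
    WtX, WtW = W.T @ X, W.T @ W
    for k in range(r):
        H[k, :] = np.maximum(1e-16, H[k, :] + (WtX[k, :] - WtW[k, :] @ H) / WtW[k, k])

print(np.linalg.norm(X - W @ H))   # reconstruction error after the sweeps
```

Each inner update is the exact minimizer of the objective in that single column (or row) with the others fixed, clipped to nonnegativity, which is the "precise estimation of objective reduction" the convergence analysis builds on.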
25

Interval convex programming, orthogonal linear programming, and program generation procedures

Ristroph, John Heard 05 January 2010 (has links)
Three topics are developed: interval convex programming, orthogonal linear programming, and program generation techniques. The interval convex programming problem is similar to the convex programming problem over the real number system except that all parameters are specified as intervals of real numbers rather than as real scalars. The interval programming solution procedure involves the solution of a series of 2n real-valued convex programs, where n is the dimension of the space. The solution of an interval programming problem is an interval vector which contains all possible solutions to any real-valued convex program which may be realized. Attempts to improve the efficiency of the interval convex programming procedure led to the eventual development of a new solution procedure for the real-valued linear programming problem: orthogonal linear programming. This new algorithm evolved from some heuristic procedures which were initially examined in the attempt to improve solution efficiency. In the course of testing these heuristics, which were unsuccessful, procedures were developed whereby it is possible to generate discrete and continuous mathematical programs with randomly chosen parameters but known solutions. / Ph. D.
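One standard way to generate a random LP with a known solution, in the spirit of the program-generation procedures described above (a sketch under assumed construction choices, not the thesis's own procedure): fix the intended optimum, make a chosen set of constraints active there, and build the objective from a nonnegative combination of the active constraint normals so duality certifies optimality.

```python
import numpy as np

# Sketch: generate max c'x s.t. Ax <= b with a known optimal point x_star.
# Chosen constraints are made active at x_star, and c lies in the cone of
# their normals, so weak duality certifies optimality:
#   c'x = y'Ax <= y'b = y'A x_star = c'x_star  for every feasible x.
rng = np.random.default_rng(1)
m, n = 8, 4
A = rng.normal(size=(m, n))
x_star = rng.random(n)                  # the intended optimal point
active = [0, 1, 2, 3]                   # constraints chosen to be tight at x_star
slack = rng.random(m)
slack[active] = 0.0
b = A @ x_star + slack                  # A x_star <= b, equality on `active`
y = np.zeros(m)
y[active] = rng.random(len(active)) + 0.1   # strictly positive multipliers
c = A.T @ y                             # dual certificate for x_star

print(c @ x_star)                       # known optimal objective value
```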
26

Algoritmes vir die maksimering van konvekse en verwante knapsakprobleme

Visagie, S. E. January 2007 (has links)
Thesis (PhD)--University of Stellenbosch, 2007. / Bibliography. Also available via the Internet.
27

Computing optimal designs for regression models via convex programming

Zhou, Wenjie 25 August 2015 (has links)
Optimal design problems aim at selecting design points optimally with respect to certain statistical criteria. The research in this thesis focuses on optimal design problems with respect to the A-, D- and E-optimality criteria, which minimize the trace, determinant and largest eigenvalue, respectively, of the inverse of the information matrix. Semidefinite programming (SDP) is concerned with optimizing a linear objective function subject to a linear matrix function being positive semidefinite. Two powerful MATLAB add-ons, SeDuMi and CVX, have been developed to solve SDP problems efficiently. In this thesis, we show in detail how to formulate A- and E-optimal design problems as SDP problems and solve them with SeDuMi and CVX. This technique can be used to construct approximate A-optimal and E-optimal designs for all linear and nonlinear models with discrete design spaces. The results can also provide guidance for finding optimal designs on continuous design spaces. For one-variable polynomial regression models, we solve the A- and E-optimal design problems on the continuous design space by a two-stage procedure: in the first stage we find the optimal moments by casting the problem as an SDP, and in the second stage we extract the optimal designs from the optimal moments obtained in the first stage. Unlike the E- and A-optimal design problems, the objective function of the D-optimal design problem is nonlinear, so D-optimal design problems cannot be reformulated as SDPs. However, they can be cast as convex problems and solved by an interior-point method. In this thesis we give details on how to use the interior-point method to solve D-optimal design problems. Finally, several numerical examples for A-, D- and E-optimal designs, along with the MATLAB code, are presented. / Graduate
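A sketch of the discrete-design-space formulations in CVXPY, which here plays the role of the MATLAB CVX add-on mentioned in the abstract; the quadratic regression model and the 21-point grid are illustrative assumptions:

```python
import cvxpy as cp
import numpy as np

# Approximate A- and E-optimal design on a discrete design space for a
# one-variable quadratic regression model (an illustrative choice).
xs = np.linspace(-1, 1, 21)                      # candidate design points
F = np.vstack([np.ones_like(xs), xs, xs**2]).T   # regression vectors f(x)
p = F.shape[1]

w = cp.Variable(len(xs), nonneg=True)            # design weights
M = sum(w[i] * np.outer(F[i], F[i]) for i in range(len(xs)))  # information matrix

# E-optimal: maximize the smallest eigenvalue of M (an SDP).
prob_E = cp.Problem(cp.Maximize(cp.lambda_min(M)), [cp.sum(w) == 1])
prob_E.solve()

# A-optimal: minimize trace(M^{-1}) = sum_j e_j' M^{-1} e_j via matrix_frac.
obj_A = sum(cp.matrix_frac(np.eye(p)[:, j], M) for j in range(p))
prob_A = cp.Problem(cp.Minimize(obj_A), [cp.sum(w) == 1])
prob_A.solve()
print(np.round(w.value, 3))                      # A-optimal design weights
```

The D-optimal analogue maximizes cp.log_det(M); as the abstract notes, that objective is not an SDP but is still convex and is handled by an interior-point method.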
28

New design methods for perfect reconstruction filter banks

Tsui, Kai-man, 徐啟民 January 2004 (has links)
Electrical and Electronic Engineering / Master of Philosophy
29

The design of transmitter/receiver and high speed analog to digital converters in wireless communication systems: a convex programming approach

Zhao, Shaohua, 趙少華 January 2008 (has links)
Electrical and Electronic Engineering / Doctor of Philosophy
30

Algoritmes vir die maksimering van konvekse en verwante knapsakprobleme

Visagie, Stephan E. March 2007 (has links)
Thesis (PhD (Logistics))--University of Stellenbosch, 2007. / In this dissertation original algorithms are introduced to solve separable resource allocation problems (RAPs) with increasing nonlinear functions in the objective function, and lower and upper bounds on each variable. Algorithms are introduced for three special cases. The first case arises when the objective function of the RAP consists of a sum of convex functions and all the variables of these functions range over the same interval. In the second case, RAPs with a sum of convex functions in the objective function are considered, but the variables of these functions may range over different intervals. In the last special case, RAPs with an objective function comprising a sum of convex and concave functions are considered; in this case the intervals of the variables may also range over different values. For the first case two new algorithms, namely the fraction and slope algorithms, are presented to solve RAPs adhering to the conditions of the case. Both of these algorithms yield far better solution times than the existing branch and bound algorithm. A new heuristic and three new algorithms are presented to solve RAPs falling into the second case. The iso-bound heuristic yields, on average, good solutions relative to the optimal objective function value in faster times than exact algorithms. The three algorithms, namely the iso-bound algorithm, the branch and cut algorithm and the iso-bound branch and cut algorithm, also yield considerably better solution times than the existing branch and bound algorithm. It is shown that, on average, the iso-bound branch and cut algorithm yields the fastest solution times, followed by the iso-bound algorithm and then by the branch and cut algorithm. In the third case the necessary and sufficient conditions for optimality are considered. From this, the conclusion is drawn that searching for points complying with the necessary conditions would take too long relative to branch and bound techniques. Thus three new algorithms, namely the KL, SKL and IKL algorithms, are introduced to solve RAPs falling into this case. These algorithms are generalisations of the branch and bound, branch and cut, and iso-bound algorithms respectively. The KL algorithm was then used as a benchmark; only the IKL algorithm yields a considerable improvement on it.
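For reference, a generic form of the separable RAP treated above can be written as follows. The single capacity constraint is an illustrative assumption, since the abstract specifies only increasing nonlinear objectives and per-variable bounds:

```latex
% Generic separable resource allocation problem (RAP). The single capacity
% constraint (sum of x_i at most c) is an illustrative assumption; the abstract
% specifies only increasing nonlinear f_i and lower/upper bounds per variable.
\max_{x} \ \sum_{i=1}^{n} f_i(x_i)
\quad \text{s.t.} \quad \sum_{i=1}^{n} x_i \le c,
\qquad l_i \le x_i \le u_i, \quad i = 1, \dots, n,
```

with each f_i increasing: all convex in the first two cases, and a mix of convex and concave functions in the third.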
