11

The generalized inverse dual for quadratically constrained quadratic programs

Randolph, William David, January 1974 (has links)
Thesis--University of Florida. / Description based on print version record. Typescript. Vita. Bibliography: leaves 107-111.
12

Algorithm-tailored error bound conditions and the linear convergence rate of ADMM

Zeng, Shangzhi 30 October 2017 (has links)
In the literature, error bound conditions have been widely used for studying the linear convergence rates of various first-order algorithms, and the majority of the literature focuses on how to sufficiently ensure these error bound conditions, usually by posing additional assumptions on the model under discussion. In this thesis, we focus on the alternating direction method of multipliers (ADMM) and show that the known error bound conditions for studying ADMM's linear convergence can indeed be further weakened if the error bound is studied over the specific iterative sequence generated by ADMM. A so-called partial error bound condition, which is tailored to ADMM's specific iterative scheme and weaker than the error bound conditions known in the literature, is thus proposed to derive the linear convergence of ADMM. We further show that this partial error bound condition theoretically justifies the difference between updating the two primal variables in different orders when implementing ADMM, a difference that had been empirically observed in the literature but for which no theory was previously known.
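As a concrete companion to the abstract above, here is a minimal sketch of the two-block ADMM iteration on a toy consensus problem, min 0.5||x - p||^2 + 0.5||y - q||^2 subject to x - y = 0; the toy objective, the parameter values, and the x_first flag for swapping the update order are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

def admm_consensus(p, q, rho=1.0, iters=200, x_first=True):
    """Two-block ADMM sketch for  min 0.5||x-p||^2 + 0.5||y-q||^2  s.t.  x = y.

    Both subproblems have closed-form solutions here. The x_first flag swaps
    the order in which the two primal blocks are updated -- the empirically
    observed difference that a partial error bound condition can explain.
    """
    x = np.zeros_like(p)
    y = np.zeros_like(q)
    lam = np.zeros_like(p)          # multiplier for the constraint x - y = 0
    for _ in range(iters):
        if x_first:
            x = (p + rho * y - lam) / (1.0 + rho)  # x-subproblem, closed form
            y = (q + rho * x + lam) / (1.0 + rho)  # uses the *new* x (Gauss-Seidel)
        else:
            y = (q + rho * x + lam) / (1.0 + rho)
            x = (p + rho * y - lam) / (1.0 + rho)
        lam += rho * (x - y)                       # dual ascent step
    return x, y

# Both update orders converge to x = y = (p + q) / 2, but along different
# iterate sequences -- the sequences over which the error bound is studied.
x, y = admm_consensus(np.array([1.0, 2.0]), np.array([3.0, 0.0]))
```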
13

Pre-Conditioners and Relations between Different Measures of Conditioning for Conic Linear Systems

Epelman, Marina A., 1973-, Freund, Robert M. 06 1900 (has links)
In recent years, new and powerful research into "condition numbers" for convex optimization has been developed, aimed at capturing the intuitive notion of problem behavior. This research has been shown to be important in studying the efficiency of algorithms, including interior-point algorithms, for convex optimization as well as other behavioral characteristics of these problems such as problem geometry, deformation under data perturbation, etc. This paper studies measures of conditioning for a conic linear system of the form (FPd): Ax = b, x ∈ CX, whose data is d = (A, b). We present a new measure of conditioning, denoted µd, and we show implications of µd for problem geometry and algorithm complexity, and demonstrate that the value of µ = µd is independent of the specific data representation of (FPd). We then prove certain relations among a variety of condition measures for (FPd), including µd, σd, χd, and C(d). We discuss some drawbacks of using the condition number C(d) as the sole measure of conditioning of a conic linear system, and we then introduce the notion of a "pre-conditioner" for (FPd) which results in an equivalent formulation (FPd') of (FPd) with a better condition number C(d'). We characterize the best such pre-conditioner and provide an algorithm for constructing an equivalent data instance d' whose condition number C(d') is within a known factor of the best possible.
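Renegar's condition number C(d) = ||d|| / ρ(d), with ρ(d) the distance to ill-posedness, is itself nontrivial to compute, so the sketch below uses the classical matrix condition number κ(A) as a stand-in to illustrate the pre-conditioning idea only; the matrix and the row-normalizing diagonal pre-conditioner are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def kappa(M):
    s = np.linalg.svd(M, compute_uv=False)  # singular values, descending
    return s[0] / s[-1]

# A hypothetical badly scaled instance of Ax = b: the two rows live on
# wildly different scales even though their directions are well separated.
A = np.array([[1e6, 2e6],
              [1.0, -1.0]])

# Pre-conditioner: a diagonal matrix D normalizing each row. D A x = D b has
# the same solution set as A x = b, i.e. it is an equivalent data instance d'.
D = np.diag(1.0 / np.linalg.norm(A, axis=1))

print(f"kappa(A)  = {kappa(A):.2e}")      # ~1.7e+06
print(f"kappa(DA) = {kappa(D @ A):.2e}")  # ~1.4e+00, vastly better conditioned
```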
14

Internal convex programming, orthogonal linear programming, and program generation procedures.

Ristroph, John Heard, January 1975 (has links)
Thesis (Ph. D.)--Virginia Polytechnic Institute, 1975. / Also available via the Internet.
15

The design of transmitter/receiver and high speed analog to digital converters in wireless communication systems : a convex programming approach /

Zhao, Shaohua, January 2008 (has links)
Thesis (Ph. D.)--University of Hong Kong, 2009. / Includes bibliographical references (leaves 155-165). Also available in print.
16

The design of transmitter/receiver and high speed analog to digital converters in wireless communication systems : a convex programming approach /

Zhao, Shaohua, January 2008 (has links)
Thesis (Ph. D.)--University of Hong Kong, 2009. / Includes bibliographical references (leaves 155-165). Also available online.
17

Classical mechanics with dissipative constraints

Harker, Shaun Russell. January 2009 (has links) (PDF)
Thesis (PhD)--Montana State University--Bozeman, 2009. / Typescript. Chairperson, Graduate Committee: Tomas Gedeon. Includes bibliographical references (leaves 234-237).
18

The solution of non-convex optimization problems by iterative convex programming

Meyer, Robert R. January 1968 (has links)
Thesis (Ph. D.)--University of Wisconsin--Madison, 1968. / Typescript. Vita. eContent provider-neutral record in process. Description based on print version record. Includes bibliographical references.
19

Some operator splitting methods for convex optimization

Li, Xinxin 14 August 2014 (has links)
Many applications arising in various areas can be well modeled as convex optimization models with separable objective functions and linear coupling constraints. Such areas include signal processing, image processing, statistical learning, wireless networks, etc. If these well-structured convex models are treated as generic models and their separable structures are ignored in algorithmic design, then it is hard to effectively exploit the favorable properties that the objective functions possibly have. Therefore, operator splitting methods have regained much attention from different areas for solving convex optimization models with separable structures in different contexts. In this thesis, some new operator splitting methods are proposed for convex optimization models with separable structures. We first propose combining the alternating direction method of multipliers with the logarithmic-quadratic proximal regularization for a separable monotone variational inequality with positive orthant constraints, and thereby obtain a new operator splitting method. Then, we propose a proximal version of the strictly contractive Peaceman-Rachford splitting method, which was recently proposed for the convex minimization model with linear constraints and an objective function in the form of the sum of two functions without coupled variables. After that, an operator splitting method suitable for parallel computation is proposed for a convex model whose objective function is the sum of three functions. For the new algorithms, we establish their convergence and estimate their convergence rates measured by the iteration complexity. We also apply the new algorithms to some applications arising in the image processing area and report some preliminary numerical results. Last, we discuss a particular video processing application and propose a series of new models for background extraction in different scenarios, to which some of the new methods are applicable. Keywords: Convex optimization, Operator splitting method, Alternating direction method of multipliers, Peaceman-Rachford splitting method, Image processing
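To make the strictly contractive Peaceman-Rachford splitting mentioned above concrete, here is a minimal sketch on a toy consensus problem; the objective, the penalty beta, and the contraction factor alpha are illustrative assumptions, and this is the basic strictly contractive scheme rather than the thesis's proximal variant.

```python
import numpy as np

def sc_prsm(p, q, beta=1.0, alpha=0.9, iters=200):
    """Strictly contractive Peaceman-Rachford splitting, sketched on
       min 0.5||x-p||^2 + 0.5||y-q||^2  s.t.  x - y = 0.

    Unlike ADMM, the multiplier is updated twice per sweep; the factor
    alpha in (0, 1) damps both updates. alpha = 1 recovers the classical
    Peaceman-Rachford scheme, whose convergence is not guaranteed in general.
    """
    x = np.zeros_like(p)
    y = np.zeros_like(q)
    lam = np.zeros_like(p)
    for _ in range(iters):
        x = (p + lam + beta * y) / (1.0 + beta)  # x-subproblem (closed form)
        lam -= alpha * beta * (x - y)            # first, damped dual update
        y = (q - lam + beta * x) / (1.0 + beta)  # y-subproblem (closed form)
        lam -= alpha * beta * (x - y)            # second, damped dual update
    return x, y

x, y = sc_prsm(np.array([1.0, 2.0]), np.array([3.0, 0.0]))  # x ~ y ~ (p+q)/2
```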
20

Improved recurrent neural networks for convex optimization. / CUHK electronic theses & dissertations collection

January 2008 (has links)
Constrained optimization problems arise widely in scientific research and engineering applications. In the past two decades, solving optimization problems using recurrent neural network methods has been extensively investigated due to the advantages of massively parallel operations and rapid convergence. In real applications, neural networks with simple architecture and good performance are desired. However, most existing neural networks have some limitations and disadvantages in their convergence conditions or architecture complexity. This thesis concentrates on the analysis and design of recurrent neural networks with simplified architecture for solving more general convex optimization problems. In this thesis, some improved recurrent neural networks are proposed for solving smooth and non-smooth convex optimization problems and applied to some selected applications. / In Part I, we first propose a one-layer recurrent neural network for solving linear programming problems. Compared with other neural networks for linear programming, the proposed neural network has a simpler architecture and better convergence properties. Second, a one-layer recurrent neural network is proposed for solving quadratic programming problems. The global convergence of the neural network can be guaranteed if the objective function of the programming problem is convex on the equality constraints, not necessarily convex everywhere. Compared with other neural networks for quadratic programming, such as the Lagrangian network and the projection neural network, the proposed neural network has a simpler architecture whose number of neurons equals the dimension of the optimization problem. Third, combining the projection and penalty parameter methods, a one-layer recurrent neural network is proposed for solving general convex optimization problems with linear constraints. / In Part II, some improved recurrent neural networks are proposed for solving non-smooth convex optimization problems. We first propose a one-layer recurrent neural network for solving non-smooth convex programming problems with only equality constraints. This neural network simplifies the Lagrangian network and extends it to solve non-smooth convex optimization problems. Then, a two-layer recurrent neural network is proposed for non-smooth convex optimization subject to linear equality and bound constraints. / In Part III, some selected applications of the proposed neural networks are discussed. The k-winners-take-all (kWTA) operation is first converted to equivalent linear and quadratic optimization problems, and two kWTA network models are tailored to perform the kWTA operation. Then, the proposed neural networks are applied to some other problems, such as the linear assignment, support vector machine learning, and curve fitting problems. / Liu, Qingshan. / Source: Dissertation Abstracts International, Volume: 70-06, Section: B, page: 3606. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2008. / Includes bibliographical references (leaves 133-145). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstracts in English and Chinese. / School code: 1307.
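The kWTA application above casts k-winners-take-all as a small convex program; the sketch below illustrates that idea with a single scalar recurrent state. It is not the network model of the thesis, only an assumed minimal dynamic showing how an optimization-based recurrent system can settle at the kWTA output.

```python
import numpy as np

def kwta(u, k, gain=100.0, step=0.01, iters=5000):
    """k-winners-take-all via a scalar recurrent dynamic -- an illustrative
    sketch, not the thesis's network model.

    kWTA is cast as the convex program
        max  u.x   s.t.  sum(x) = k,  0 <= x <= 1,
    whose solution activates the k largest entries of u. With a sharp gain,
    x_i = clip(gain * (u_i - t), 0, 1) for a threshold state t, and t evolves
    by Euler steps of  dt/ds = sum_i x_i - k,  which vanishes exactly when
    the equality constraint is met.
    """
    t = 0.0
    for _ in range(iters):
        x = np.clip(gain * (u - t), 0.0, 1.0)
        t += step * (np.sum(x) - k)   # drive the constraint residual to zero
    return np.clip(gain * (u - t), 0.0, 1.0)

print(kwta(np.array([0.1, 0.9, 0.4, 0.8, 0.3]), k=2))  # ~[0, 1, 0, 1, 0]
```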
