11

Convergence and approximation for primal-dual methods in large-scale optimization /

Wright, Stephen E., January 1990 (has links)
Thesis (Ph. D.)--University of Washington, 1990. / Vita. Includes bibliographical references (leaves [97]-100).
12

The generalized inverse dual for quadratically constrained quadratic programs

Randolph, William David, January 1974 (has links)
Thesis--University of Florida. / Description based on print version record. Typescript. Vita. Bibliography: leaves 107-111.
13

Some operator splitting methods for convex optimization

Li, Xinxin 14 August 2014 (has links)
Many applications arising in various areas can be well modeled as convex optimization models with separable objective functions and linear coupling constraints. Such areas include signal processing, image processing, statistical learning, wireless networks, etc. If these well-structured convex models are treated as generic models and their separable structures are ignored in algorithmic design, then it is hard to effectively exploit the favorable properties that the objective functions may have. Therefore, operator splitting methods have regained much attention in different areas for solving convex optimization models with separable structures in different contexts. In this thesis, some new operator splitting methods are proposed for convex optimization models with separable structures. We first propose combining the alternating direction method of multipliers with the logarithmic-quadratic proximal regularization for a separable monotone variational inequality with positive orthant constraints, yielding a new operator splitting method. Then, we propose a proximal version of the strictly contractive Peaceman-Rachford splitting method, which was recently proposed for the convex minimization model with linear constraints and an objective function in the form of the sum of two functions without coupled variables. After that, an operator splitting method suitable for parallel computation is proposed for a convex model whose objective function is the sum of three functions. For the new algorithms, we establish their convergence and estimate their convergence rates measured by the iteration complexity. We also apply the new algorithms to some applications arising in image processing and report preliminary numerical results. Last, we discuss a particular video processing application and propose a series of new models for background extraction in different scenarios, to which some of the new methods are applicable.
Keywords: Convex optimization, Operator splitting method, Alternating direction method of multipliers, Peaceman-Rachford splitting method, Image processing
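For orientation only, the sketch below shows the classical two-block alternating direction method of multipliers (ADMM) applied to a small lasso instance. It is not the thesis's LQP-regularized ADMM or strictly contractive Peaceman-Rachford variant; the problem data and all names (soft_threshold, rho, lam) are illustrative assumptions.

```python
# Minimal two-block ADMM sketch (not the thesis's algorithms):
# minimize 0.5*||Ax - b||^2 + lam*||z||_1  subject to  x - z = 0.
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau*||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def admm_lasso(A, b, lam=0.1, rho=1.0, iters=200):
    """ADMM on the split min f(x) + g(z) s.t. x - z = 0, with f smooth
    (least squares) and g separable (l1 norm), using the scaled dual u."""
    m, n = A.shape
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    # The x-update solves a linear system with a fixed matrix; factor it once.
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
    Atb = A.T @ b
    for _ in range(iters):
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))   # x-update
        z = soft_threshold(x + u, lam / rho)                 # z-update
        u = u + x - z                                        # dual update
    return z

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((80, 30))
    x_true = np.zeros(30); x_true[:5] = rng.standard_normal(5)
    b = A @ x_true + 0.01 * rng.standard_normal(80)
    x_hat = admm_lasso(A, b, lam=0.5)
    print("nonzeros recovered:", np.count_nonzero(np.abs(x_hat) > 1e-3))
```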
14

Internal convex programming, orthogonal linear programming, and program generation procedures.

Ristroph, John Heard, January 1975 (has links)
Thesis (Ph. D.)--Virginia Polytechnic Institute, 1975. / Also available via the Internet.
15

The design of transmitter/receiver and high speed analog to digital converters in wireless communication systems : a convex programming approach /

Zhao, Shaohua, January 2008 (has links)
Thesis (Ph. D.)--University of Hong Kong, 2009. / Includes bibliographical references (leaves 155-165). Also available in print.
16

The design of transmitter/receiver and high speed analog to digital converters in wireless communication systems : a convex programming approach /

Zhao, Shaohua, January 2008 (has links)
Thesis (Ph. D.)--University of Hong Kong, 2009. / Includes bibliographical references (leaves 155-165). Also available online.
17

Classical mechanics with dissipative constraints

Harker, Shaun Russell. January 2009 (has links) (PDF)
Thesis (PhD)--Montana State University--Bozeman, 2009. / Typescript. Chairperson, Graduate Committee: Tomas Gedeon. Includes bibliographical references (leaves 234-237).
18

The solution of non-convex optimization problems by iterative convex programming

Meyer, Robert R. January 1968 (has links)
Thesis (Ph. D.)--University of Wisconsin--Madison, 1968. / Typescript. Vita. eContent provider-neutral record in process. Description based on print version record. Includes bibliographical references.
19

Pre-Conditioners and Relations between Different Measures of Conditioning for Conic Linear Systems

Epelman, Marina A., 1973-, Freund, Robert M. 06 1900 (has links)
In recent years, new and powerful research into "condition numbers" for convex optimization has been developed, aimed at capturing the intuitive notion of problem behavior. This research has been shown to be important in studying the efficiency of algorithms, including interior-point algorithms, for convex optimization as well as other behavioral characteristics of these problems such as problem geometry, deformation under data perturbation, etc. This paper studies measures of conditioning for a conic linear system of the form (FPd): Ax = b, x ∈ CX, whose data is d = (A, b). We present a new measure of conditioning, denoted µd, and we show implications of µd for problem geometry and algorithm complexity, and demonstrate that the value of µd is independent of the specific data representation of (FPd). We then prove certain relations among a variety of condition measures for (FPd), including µd, ρd, χ̄d, and C(d). We discuss some drawbacks of using the condition number C(d) as the sole measure of conditioning of a conic linear system, and we then introduce the notion of a "pre-conditioner" for (FPd) which results in an equivalent formulation (FPd̃) of (FPd) with a better condition number C(d̃). We characterize the best such pre-conditioner and provide an algorithm for constructing an equivalent data instance d̃ whose condition number C(d̃) is within a known factor of the best possible.
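For orientation only: the paper's condition measures for conic systems (µd, ρd, C(d)) are specific to that setting and are not computed here. The sketch below illustrates the general pre-conditioning idea in the familiar matrix setting, where simple row/column equilibration yields an equivalent linear system with a much smaller classical condition number; the equilibration scheme, data, and names are illustrative assumptions, not anything from the paper.

```python
# Toy illustration: a pre-conditioner (row/column equilibration) produces an
# equivalent system with a better classical condition number kappa(A).
import numpy as np

def equilibrate(A, b):
    """Scale rows and columns: D1 @ A @ D2. The system D1 A D2 y = D1 b is
    equivalent to A x = b with x = D2 y."""
    d1 = 1.0 / np.linalg.norm(A, axis=1)      # row scaling
    A1 = A * d1[:, None]
    d2 = 1.0 / np.linalg.norm(A1, axis=0)     # column scaling
    return A1 * d2[None, :], d1 * b, d2

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # A badly scaled system: rows/columns differ by many orders of magnitude.
    scales = 10.0 ** rng.uniform(-4, 4, size=6)
    A = rng.standard_normal((6, 6)) * scales[:, None] * scales[None, :]
    b = rng.standard_normal(6)
    A_pre, b_pre, d2 = equilibrate(A, b)
    print("cond before:", np.linalg.cond(A))
    print("cond after :", np.linalg.cond(A_pre))
    # The two (equivalent) systems give the same solution after unscaling.
    x = np.linalg.solve(A, b)
    x_from_pre = d2 * np.linalg.solve(A_pre, b_pre)
    print("max abs difference:", np.max(np.abs(x - x_from_pre)))
```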
20

Portfolio Selection Under Nonsmooth Convex Transaction Costs

Potaptchik, Marina January 2006 (has links)
We consider a portfolio selection problem in the presence of transaction costs. Transaction costs on each asset are assumed to be a convex function of the amount sold or bought. This function can be nondifferentiable at a finite number of points. The objective function of this problem is a sum of a convex twice differentiable function and a separable convex nondifferentiable function. We first consider the problem in the presence of linear constraints and later generalize the results to the case when the constraints are given by convex piecewise-linear functions.

Due to the special structure, this problem can be replaced by an equivalent differentiable problem in a higher dimension. Its main drawback is efficiency, since the higher-dimensional problem is computationally expensive to solve.

We propose several alternative ways to solve this problem which do not require introducing new variables or constraints. We derive the optimality conditions for this problem using subdifferentials. First, we generalize an active-set method to this class of problems. We solve the problem by considering a sequence of equality-constrained subproblems, each subproblem having a twice differentiable objective function. Information gathered at each step is used to construct the subproblem for the next step. We also show how the nonsmoothness can be handled efficiently by using spline approximations. The problem is then solved using a primal-dual interior-point method.

If higher accuracy is needed, we do a crossover to an active-set method. Our numerical tests show that we can solve large-scale problems efficiently and accurately.
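For orientation only, the sketch below sets up the higher-dimensional differentiable reformulation mentioned in the abstract: a piecewise-linear transaction cost on the trade w - w0, nondifferentiable at zero, is made smooth by splitting each trade into nonnegative buy and sell parts. The thesis's active-set, spline-approximation, and interior-point/crossover methods are not reproduced; a generic modeling tool (cvxpy) is used instead, and the data and cost rates are illustrative assumptions.

```python
# Portfolio selection with piecewise-linear transaction costs, modeled via the
# buy/sell split (the "equivalent differentiable problem in a higher dimension").
# Requires cvxpy (pip install cvxpy). All data below are made up for illustration.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(2)
n = 8
mu = rng.uniform(0.02, 0.12, n)                 # assumed expected returns
F = rng.standard_normal((n, n))
Sigma = F @ F.T / n + 0.05 * np.eye(n)          # positive definite covariance
w0 = np.full(n, 1.0 / n)                        # current holdings
c_buy, c_sell = 0.002, 0.003                    # per-unit transaction cost rates

w = cp.Variable(n)                  # new portfolio weights
buy = cp.Variable(n, nonneg=True)   # positive part of the trade
sell = cp.Variable(n, nonneg=True)  # negative part of the trade

risk_aversion = 5.0
cost = c_buy * cp.sum(buy) + c_sell * cp.sum(sell)
objective = cp.Minimize(risk_aversion * cp.quad_form(w, Sigma) - mu @ w + cost)
constraints = [
    w - w0 == buy - sell,   # trade split into buy and sell parts
    cp.sum(w) == 1,         # fully invested
    w >= 0,                 # no short sales
]
prob = cp.Problem(objective, constraints)
prob.solve()
print("status:", prob.status)
print("weights:", np.round(w.value, 3))
```

Note that the split doubles the number of trade variables, which is precisely the efficiency drawback of the higher-dimensional reformulation that motivates the alternative methods described in the abstract.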
