1. Greedy Strategies for Convex Minimization
Nguyen, Hao Thanh, 16 December 2013
We have investigated two greedy strategies for finding an approximation to the minimum of a convex function E, defined on a Hilbert space H. We have proved convergence rates for a modification of the orthogonal matching pursuit and its weak version under suitable conditions on the objective function E. These conditions involve the behavior of the moduli of smoothness and the modulus of uniform convexity of E.
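A minimal numerical sketch of this kind of greedy scheme, using a quadratic convex objective and the coordinate dictionary (all problem data below are illustrative assumptions, not from the thesis):

```python
import numpy as np

rng = np.random.default_rng(0)

# Convex objective E(x) = 0.5*||Ax - b||^2 on R^n, a stand-in for a
# general smooth convex E on a Hilbert space.
n, m = 50, 30
A = rng.standard_normal((m, n))
x_true = np.zeros(n); x_true[[3, 17, 41]] = [2.0, -1.5, 1.0]
b = A @ x_true

def E(x):
    return 0.5 * np.sum((A @ x - b) ** 2)

def grad_E(x):
    return A.T @ (A @ x - b)

# Dictionary: the standard basis e_1..e_n. Weakness parameter t in (0,1]
# (t = 1 recovers the plain orthogonal greedy step).
t = 0.9
x = np.zeros(n)
support = []
for _ in range(10):
    g = grad_E(x)
    best = np.max(np.abs(g))
    if best < 1e-10:
        break
    # Weak greedy selection: accept any atom whose correlation with the
    # gradient is within a factor t of the best; take the first such atom.
    idx = int(np.argmax(np.abs(g) >= t * best))
    if idx not in support:
        support.append(idx)
    # "Orthogonal" step: re-minimize E over the span of the selected atoms
    # (for this quadratic E, a least-squares solve).
    sol, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
    x = np.zeros(n); x[support] = sol

print(f"E after greedy steps: {E(x):.2e}, support found: {sorted(support)}")
```

After each selection the objective is re-minimized over all chosen atoms, which is what distinguishes the orthogonal variant from pure greedy descent.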

2. Accelerating convex optimization in machine learning by leveraging functional growth conditions
Xu, Yi, 1 August 2019
In recent years, unprecedented growth in the scale and dimensionality of data has raised significant computational challenges for traditional optimization algorithms; it has therefore become very important to develop efficient and effective optimization algorithms for solving numerous machine learning problems. Many traditional algorithms (e.g., the gradient descent method) are black-box algorithms, which are simple to implement but ignore the underlying geometrical properties of the objective function. A recent trend in accelerating these traditional black-box algorithms is to leverage geometrical properties of the objective function such as strong convexity. However, most existing methods rely heavily on knowledge of strong convexity, which makes them inapplicable to problems that lack strong convexity or where the strong convexity parameter is unknown. To bridge the gap between traditional black-box algorithms, which ignore the problem's geometrical properties, and accelerated algorithms that require strong convexity, how can we develop algorithms that adapt to the objective function's underlying geometry? To answer this question, in this dissertation we focus on convex optimization problems and propose to explore an error bound condition that characterizes the growth of the objective function around a global minimum. Under this error bound condition, we develop algorithms that (1) can adapt to the problem's geometrical property to enjoy faster convergence in stochastic optimization; (2) can leverage the problem's structured regularizer to further improve the convergence speed; (3) can address both deterministic and stochastic optimization problems with an explicit max-structural loss; (4) can leverage the objective function's smoothness property to improve the convergence rate for stochastic optimization. We first considered stochastic optimization problems with a general stochastic loss.
We proposed two accelerated stochastic subgradient (ASSG) methods with theoretical guarantees that iteratively solve the original problem approximately in a local region around a historical solution, with the size of the local region gradually decreasing as the solution approaches the optimal set. Second, we developed a new theory of the alternating direction method of multipliers (ADMM) with a new adaptive scheme for the penalty parameter, for both deterministic and stochastic optimization problems with structured regularizers. Under a local error bound (LEB) condition, the proposed deterministic and stochastic ADMM enjoy improved iteration complexities of $\widetilde O(1/\epsilon^{1-\theta})$ and $\widetilde O(1/\epsilon^{2(1-\theta)})$, respectively. Then, we considered a family of optimization problems with an explicit max-structural loss. We developed a novel homotopy smoothing (HOPS) algorithm that employs Nesterov's smoothing technique together with the accelerated gradient method and runs in stages. Under the LEB condition, it improves the iteration complexity from $O(1/\epsilon)$ to $\widetilde O(1/\epsilon^{1-\theta})$ (omitting a logarithmic factor) with $\theta\in(0,1]$. Next, we proposed new restarted stochastic primal-dual (RSPD) algorithms for solving the problem with a stochastic explicit max-structural loss. We obtained an iteration complexity better than $O(1/\epsilon^2)$ without a bilinear structure assumption, which had been a major obstacle to faster convergence for the considered problem. Finally, we consider finite-sum optimization problems with a smooth loss and a simple regularizer. We proposed novel techniques to automatically search for the unknown parameter on the fly during optimization, while maintaining almost the same convergence rate as an oracle setting in which the involved parameter is given.
Under the Hölderian error bound (HEB) condition with $\theta\in(0,1/2)$, the proposed algorithm also enjoys intermediate convergence rates that are faster than those of its standard counterparts under only the smoothness assumption.
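The restart idea behind methods like ASSG can be illustrated with a minimal sketch (the one-dimensional toy objective and all constants below are assumptions for illustration; the actual methods and guarantees are in the dissertation):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy objective with sharp (linear) growth around its minimum x* = 1:
# f(x) = |x - 1|. We only observe noisy subgradients, mimicking a
# stochastic loss.
def noisy_subgrad(x):
    return np.sign(x - 1.0) + 0.1 * rng.standard_normal()

# Restarted stochastic subgradient: run plain subgradient descent for T
# steps inside a ball around the last restart point, then halve both the
# step size and the ball radius and restart from the averaged iterate.
x_center, radius, eta = 10.0, 10.0, 1.0
for stage in range(8):
    x, avg = x_center, 0.0
    T = 200
    for k in range(T):
        x = x - eta * noisy_subgrad(x)
        # project back into the shrinking local region
        x = np.clip(x, x_center - radius, x_center + radius)
        avg += x / T
    x_center, radius, eta = avg, radius / 2, eta / 2

print(f"estimate after restarts: {x_center:.3f}")
```

The geometric shrinking of the step size and region is what converts the slow generic subgradient rate into a faster one when the objective grows sharply around the minimum.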

3. Linear phase filter bank design by convex programming
Ha, Hoang Kha, Electrical Engineering & Telecommunications, Faculty of Engineering, UNSW, January 2008
Digital filter banks have found a wide variety of applications in data compression, digital communications, and adaptive signal processing. The common objectives of filter bank design are frequency selectivity of the individual filters and perfect reconstruction of the filter banks. Filter bank design problems are intrinsically challenging because their natural formulations are nonconvex constrained optimization problems. There is therefore a strong motivation to cast the design problems as convex optimization problems, whose globally optimal solutions can be obtained efficiently. The main contribution of this dissertation is to exploit convex optimization algorithms to design several classes of filter banks. First, two-channel orthogonal symmetric complex-valued filter banks are investigated. A key contribution is the derivation of a necessary and sufficient condition for the existence of complex-valued symmetric spectral factors. Moreover, this condition can be expressed as linear matrix inequalities (LMIs), and hence semidefinite programming (SDP) is applicable. Secondly, for two-channel symmetric real-valued filter banks, a more general and efficient method is developed for designing optimal triplet half-band filter banks with regularity. By exploiting the LMI characterization of nonnegative cosine polynomials, the semi-infinite constraints can be handled efficiently. Consequently, the filter bank design is cast as an SDP problem. Furthermore, the resulting filter banks are shown to improve performance when applied to image coding. It is not straightforward to extend the proposed design methods from two-channel filter banks to M-channel filter banks. However, the design problem of M-channel cosine-modulated filter banks is shown to be a nonconvex optimization problem with a low degree of nonconvexity. Therefore, an efficient semidefinite relaxation technique is proposed to design optimal prototype filters.
Additionally, an inexpensive iterative algorithm is developed to further improve the performance of the filter banks. Finally, the application of filter banks to multicarrier systems is considered. A condition on the transmit filter bank and the channel for the existence of zero-forcing filter bank equalizers is obtained. A closed-form expression for the optimal equalizer is then derived. The proposed filter bank transceivers are shown to outperform orthogonal frequency-division multiplexing (OFDM) systems.
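The LMI characterization of nonnegative cosine polynomials mentioned above rests on a trace parameterization; a small numerical check of the "PSD implies nonnegative" direction (illustrative only, not the design procedure of the thesis):

```python
import numpy as np

rng = np.random.default_rng(2)

# Trace parameterization behind the LMI characterization: if Q is PSD,
# the cosine polynomial with coefficients
#   r_0 = trace(Q),  r_k = 2 * (sum of Q's k-th superdiagonal),  k >= 1,
# equals v(w)* Q v(w) with v(w) = (1, e^{jw}, ..., e^{j(N-1)w}), a sum of
# squared magnitudes, and is therefore nonnegative on [0, pi].
N = 6
B = rng.standard_normal((N, N))
Q = B @ B.T                      # a generic PSD matrix

r = np.array([np.trace(Q, offset=k) for k in range(N)])
r[1:] *= 2.0

w = np.linspace(0, np.pi, 1000)
R = sum(r[k] * np.cos(k * w) for k in range(N))

print(f"min of R over grid: {R.min():.3e}")
```

The converse direction (every nonnegative cosine polynomial arises this way) is what lets the semi-infinite nonnegativity constraint be replaced by a single PSD constraint inside an SDP.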

4. Spectral functions and smoothing techniques on Jordan algebras
Baes, Michel, 22 September 2006
Successful methods for a large class of nonlinear convex optimization problems have recently been developed. This class, known as self-scaled optimization problems, was defined by Nesterov and Todd in 1994. As noticed by Güler in 1996, this class is best described using an algebraic structure known as a Euclidean Jordan algebra, which provides an elegant and powerful unifying framework for its study. Euclidean Jordan algebras are now a popular setting for the analysis of algorithms designed for self-scaled optimization problems: dozens of research papers in optimization deal explicitly with them.
This thesis proposes an extensive and self-contained description of Euclidean Jordan algebras, and shows how they can be used to design and analyze new algorithms for self-scaled optimization.
Our work focuses on the so-called spectral functions on Euclidean Jordan algebras, a natural generalization of spectral functions of symmetric matrices. Based on an original variational analysis technique for Euclidean Jordan algebras, we discuss their most important properties, such as differentiability and convexity. We show how these results can be applied to extend several algorithms for linear or second-order programming to the general class of self-scaled problems. In particular, our methods allowed us to extend the powerful smoothing techniques of Nesterov to some nonlinear convex problems, and to demonstrate their excellent theoretical and practical efficiency.
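Nesterov's smoothing, mentioned above, can be illustrated on the absolute value function (a one-dimensional toy case, not the Jordan-algebraic setting of the thesis):

```python
import numpy as np

# Nesterov smoothing of f(x) = |x|: f_mu(x) = max_{|u|<=1} (u*x - mu*u^2/2),
# which works out to the Huber function. It is (1/mu)-smooth and satisfies
# 0 <= f(x) - f_mu(x) <= mu/2 uniformly; this uniform bound is what drives
# the improved convergence rates of smoothing methods.
def huber(x, mu):
    return np.where(np.abs(x) <= mu, x**2 / (2 * mu), np.abs(x) - mu / 2)

mu = 0.05
x = np.linspace(-2, 2, 4001)
gap = np.abs(x) - huber(x, mu)
print(f"max smoothing gap: {gap.max():.4f}  (theory: mu/2 = {mu / 2})")
```

Choosing mu proportional to the target accuracy trades the smoothing error against the faster rate available for smooth convex minimization.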

5. Optimization algorithms in compressive sensing (CS) sparse magnetic resonance imaging (MRI)
Takeva-Velkova, Viliyana, 1 June 2010
Magnetic Resonance Imaging (MRI) is an essential instrument in clinical diagnosis; however, it is burdened by a slow data acquisition process due to physical limitations. Compressive Sensing (CS) is a recently developed mathematical framework that offers significant benefits in MRI imaging speed by reducing the amount of acquired data without degrading the image quality. The process of image reconstruction involves solving a nonlinear constrained optimization problem. The reduction of reconstruction time in MRI is of significant benefit. We reformulate sparse MRI reconstruction as a Second Order Cone Program (SOCP). We also explore two alternative techniques to solving the SOCP problem directly: NESTA and a specifically designed SOCPLB.
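For illustration, a much simpler proximal-gradient (ISTA) sketch of the same l1-based CS reconstruction principle; this is not the SOCP solver or NESTA variant studied in the thesis, and the problem sizes below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(3)

# Minimal CS reconstruction sketch: recover a sparse signal from few
# linear measurements by solving min 0.5*||Ax - b||^2 + lam*||x||_1
# with ISTA (proximal gradient + soft thresholding).
n, m, s = 200, 80, 5                 # ambient dim, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = 3 * rng.standard_normal(s)
b = A @ x_true

lam = 0.01
L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the smooth part
x = np.zeros(n)
for _ in range(2000):
    g = A.T @ (A @ x - b)            # gradient step
    z = x - g / L
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(f"relative recovery error: {err:.3f}")
```

With far fewer measurements than unknowns, the l1 penalty still recovers the sparse signal, which is the phenomenon the SOCP reformulation exploits.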

6. Novel Convex Optimization Approaches for VLSI Floorplanning
Luo, Chaomin, January 2008
The floorplanning problem aims to arrange a set of rectangular modules on a rectangular chip area so as to optimize an appropriate measure of performance. This problem is known to be NP-hard, and is particularly challenging if the chip dimensions are fixed. Fixed-outline floorplanning is becoming increasingly important as a tool in design flows for the hierarchical design of Application-Specific Integrated Circuits and Systems-on-Chip. Therefore, it has recently received much attention.
A two-stage convex optimization methodology is proposed to solve the fixed-outline floorplanning problem as a global optimization problem for wirelength minimization. In the first stage, an attractor-repeller convex optimization model provides the relative positions of the modules on the floorplan. The second stage places and sizes the modules using convex optimization. Given the relative positions of the modules from the first stage, a Voronoi diagram and Delaunay triangulation method is used to obtain a planar graph and hence a relative position matrix connecting the two stages. An efficient method for generating sparse relative position matrices and an interchange-free algorithm for local improvement of the floorplan are also presented.
Experimental results on the standard MCNC and GSRC benchmarks demonstrate that we obtain significant improvements over the best results in the literature. Overlap-free and dead-space-free floorplans are achieved in a fixed outline, and floorplans with any specified percentage of whitespace can be produced. Most importantly, our method provides a greater improvement as the number of modules increases. A very important feature of our methodology is that not only do the dimensions of the floorplans in our experiments comply with the original ones provided in the GSRC benchmark, but zero-dead-space floorplans can also be obtained. Thus, our approach is able to guarantee complete area utilization in a fixed-outline situation. Our method is also applicable to area minimization in classical floorplanning.
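A toy force-directed sketch of the attractor-repeller idea (the objective, constants, and netlist below are hypothetical; the thesis uses a convex model rather than this crude gradient descent):

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical attractor-repeller dynamics: modules connected by a net
# attract via squared wirelength, and every pair of modules repels when
# closer than a target separation, discouraging overlap.
n = 8
pos = rng.standard_normal((n, 2))
nets = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 7), (7, 0)]
target = 1.0                                  # desired pairwise separation

for _ in range(500):
    grad = np.zeros_like(pos)
    for i, j in nets:                         # attraction along nets
        d = pos[i] - pos[j]
        grad[i] += 2 * d
        grad[j] -= 2 * d
    for i in range(n):                        # pairwise repulsion
        for j in range(i + 1, n):
            d = pos[i] - pos[j]
            dist2 = d @ d + 1e-9
            if dist2 < target**2:
                grad[i] -= 2 * d / dist2**2
                grad[j] += 2 * d / dist2**2
    grad = np.clip(grad, -1.0, 1.0)           # keep steps stable near collisions
    pos -= 0.05 * grad

min_gap = min(np.linalg.norm(pos[i] - pos[j])
              for i in range(n) for j in range(i + 1, n))
print(f"min pairwise module distance after relaxation: {min_gap:.2f}")
```

The balance between the two force terms spreads the modules while keeping connected ones close, which is the intuition behind using the resulting relative positions to seed the second, convex placement stage.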


8. Lossless convexification of optimal control problems
Harris, Matthew Wade, 30 June 2014
This dissertation begins with an introduction to finite-dimensional optimization and optimal control theory. It then proves lossless convexification for three problems: 1) a minimum-time rendezvous using differential drag, 2) a maximum-divert and landing problem, and 3) a general optimal control problem with linear state constraints and mixed convex and nonconvex control constraints. Each is a unique contribution to the theory of lossless convexification. The first proves lossless convexification in the presence of singular controls and specifies a procedure for converting singular controls to the bang-bang type. The second is the first example of lossless convexification with state constraints. The third is the most general result to date: lossless convexification holds when the state space is a strongly controllable subspace. This extends the controllability concepts used previously, and it recovers earlier results as a special case. Lastly, a few of the remaining research challenges are discussed.
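The prototypical relaxation underlying lossless convexification can be sketched as follows (this is the standard form from the literature; the exact constraints treated in the dissertation differ):

```latex
% The annular control-magnitude constraint
%   rho_1 <= ||u(t)|| <= rho_2   (nonconvex whenever rho_1 > 0)
% is replaced by a convex constraint set in (u, Gamma) via a slack variable:
\begin{aligned}
  &\text{nonconvex:} && \rho_1 \le \|u(t)\| \le \rho_2,\\
  &\text{relaxed:}   && \|u(t)\| \le \Gamma(t), \qquad \rho_1 \le \Gamma(t) \le \rho_2.
\end{aligned}
% Lossless convexification theorems assert that, under suitable
% controllability conditions, an optimal solution of the relaxed problem
% satisfies ||u(t)|| = Gamma(t) almost everywhere, so it is also optimal
% for the original nonconvex problem.
```

The relaxation enlarges the feasible set, but the losslessness results guarantee that nothing is lost at the optimum, which is what makes the convexified problem a faithful substitute.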


10. Identification using Convexification and Recursion
Dai, Liang, January 2016
System identification studies how to construct mathematical models of dynamical systems from input and output data, which finds applications in many scenarios, such as predicting the future output of a system or building model-based controllers to regulate it. Among many other methods, convex optimization is becoming an increasingly useful tool for solving system identification problems, because many identification problems can be formulated as, or transformed into, convex optimization problems. This transformation is commonly referred to as the convexification technique. The first theme of the thesis is to understand the efficacy of the convexification idea by examining two specific examples. We first establish that an l1-norm based approach can indeed help in exploiting the sparsity of the underlying parameter vector under certain persistent excitation assumptions. After that, we analyze how the nuclear norm minimization heuristic performs on a low-rank Hankel matrix completion problem; the key is to construct the dual certificate based on the structural information available in the problem setting. Recursive algorithms are ubiquitous in system identification. The second theme of the thesis is the study of some existing recursive algorithms, establishing new connections and giving new insights and interpretations. We first establish a connection between a basic property of the convolution operator and score function estimation. Based on this relationship, we show how certain recursive Bayesian algorithms can be exploited to estimate the score function for systems with intractable transition densities. We also provide a new derivation and interpretation of the recursive direct weight optimization method by exploiting certain structural information present in the algorithm.
Finally, we study how an improved randomization strategy can be found for the randomized Kaczmarz algorithm, and how the convergence rate of the classical Kaczmarz algorithm can be studied via the stability analysis of a related time-varying linear dynamical system.
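The classical randomized Kaczmarz baseline referred to here can be sketched as follows (this is the standard Strohmer-Vershynin sampling scheme, not the improved randomization strategy of the thesis; sizes and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(5)

# Randomized Kaczmarz for a consistent system Ax = b: pick row i with
# probability ||a_i||^2 / ||A||_F^2, then project the current iterate
# orthogonally onto the hyperplane a_i^T x = b_i.
m, n = 100, 20
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
b = A @ x_true

p = np.sum(A**2, axis=1)
p = p / p.sum()                      # row-norm-squared sampling probabilities

x = np.zeros(n)
for _ in range(3000):
    i = rng.choice(m, p=p)
    a = A[i]
    x += (b[i] - a @ x) / (a @ a) * a   # projection onto row i's hyperplane

err = np.linalg.norm(x - x_true)
print(f"error after randomized Kaczmarz: {err:.2e}")
```

Each iterate is the state of a randomly switched linear system, which is the viewpoint behind analyzing the convergence rate through the stability of a related time-varying linear dynamical system.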
