1 |
Greedy Strategies for Convex MinimizationNguyen, Hao Thanh 16 December 2013 (has links)
We have investigated two greedy strategies for finding an approximation to the minimum of a convex function E, defined on a Hilbert space H. We have proved convergence rates for a modification of the orthogonal matching pursuit and its weak version under suitable conditions on the objective function E. These conditions involve the behavior of the moduli of smoothness and the modulus of uniform convexity of E.
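As an illustrative sketch (not taken from the thesis), a greedy matching-pursuit-style scheme for minimizing a smooth convex E can be demonstrated on E(x) = ½‖Ax − b‖² with the standard basis as dictionary: each step greedily selects the coordinate with the largest gradient magnitude and takes an exact line-search step along it. The matrix sizes and iteration count here are arbitrary choices for the demo.

```python
import numpy as np

# Greedy (Gauss-Southwell / matching-pursuit-style) minimization of the
# convex quadratic E(x) = 0.5*||A x - b||^2 over the standard basis.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
b = rng.standard_normal(30)

def grad(x):
    return A.T @ (A @ x - b)

x = np.zeros(10)
for _ in range(5000):
    g = grad(x)
    i = np.argmax(np.abs(g))      # greedy selection: steepest coordinate
    d = A[:, i]
    x[i] -= g[i] / (d @ d)        # exact line search along basis vector e_i

# x approaches the least-squares minimizer of E
x_star = np.linalg.lstsq(A, b, rcond=None)[0]
print(np.allclose(x, x_star, atol=1e-6))
```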
|
2 |
Linear phase filter bank design by convex programmingHa, Hoang Kha, Electrical Engineering & Telecommunications, Faculty of Engineering, UNSW January 2008 (has links)
Digital filter banks have found a wide variety of applications in data compression, digital communications, and adaptive signal processing. The common objectives of filter bank design are frequency selectivity of the individual filters and perfect reconstruction of the filter banks. Filter bank design problems are intrinsically challenging because their natural formulations are nonconvex constrained optimization problems. There is therefore strong motivation to cast them as convex optimization problems, whose globally optimal solutions can be obtained efficiently. The main contribution of this dissertation is to exploit convex optimization algorithms to design several classes of filter banks. First, two-channel orthogonal symmetric complex-valued filter banks are investigated. A key contribution is the derivation of a necessary and sufficient condition for the existence of complex-valued symmetric spectral factors. Moreover, this condition can be expressed as linear matrix inequalities (LMIs), so semidefinite programming (SDP) is applicable. Second, for two-channel symmetric real-valued filter banks, a more general and efficient method is developed for designing optimal triplet halfband filter banks with regularity. By exploiting the LMI characterization of nonnegative cosine polynomials, the semi-infinite constraints can be handled efficiently, and the filter bank design is again cast as an SDP problem. The resulting filter banks are shown to improve performance when applied to image coding. Extending the proposed two-channel design methods to M-channel filter banks is not straightforward; however, the design problem of M-channel cosine-modulated filter banks is shown to be a nonconvex optimization problem with a low degree of nonconvexity.
Therefore, an efficient semidefinite relaxation technique is proposed to design optimal prototype filters, and a computationally cheap iterative algorithm is developed to further improve the performance of the filter banks. Finally, the application of filter banks to multicarrier systems is considered. A condition on the transmit filter bank and the channel for the existence of zero-forcing filter bank equalizers is obtained, and a closed-form expression for the optimal equalizer is derived. The proposed filter bank transceivers are shown to outperform orthogonal frequency-division multiplexing (OFDM) systems.
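The Gram-matrix characterization of nonnegative cosine polynomials mentioned above can be illustrated numerically (a sketch, not the thesis's design procedure): p(ω) = a(ω)ᴴQa(ω) with a(ω) = [1, e^{jω}, …, e^{j(n−1)ω}] and Q positive semidefinite guarantees p(ω) ≥ 0 for all ω, and the cosine coefficients are read off the diagonals of Q. The size n = 5 and the random Q are arbitrary for the demo.

```python
import numpy as np

# LMI/Gram-matrix parameterization of a nonnegative cosine polynomial:
# build a random PSD Q, extract coefficients from its diagonals, and
# verify p(w) = c0 + 2*sum_m c_m cos(m w) >= 0 on a dense grid.
rng = np.random.default_rng(1)
n = 5
B = rng.standard_normal((n, n))
Q = B @ B.T                                   # random PSD Gram matrix

# c[m] = sum of the m-th superdiagonal of Q
c = np.array([np.trace(Q, offset=m) for m in range(n)])

w = np.linspace(0, np.pi, 1000)
p = c[0] + 2 * sum(c[m] * np.cos(m * w) for m in range(1, n))

print(p.min() >= -1e-9)   # nonnegative up to floating-point rounding
```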
|
3 |
Spectral functions and smoothing techniques on Jordan algebrasBaes, Michel 22 September 2006 (has links)
Successful methods for a large class of nonlinear convex optimization problems have recently been developed. This class, known as self-scaled optimization problems, was defined by Nesterov and Todd in 1994. As Guler noticed in 1996, it is best described using an algebraic structure known as a Euclidean Jordan algebra, which provides an elegant and powerful unifying framework for its study. Euclidean Jordan algebras are now a popular setting for the analysis of algorithms designed for self-scaled optimization problems: dozens of research papers in optimization deal explicitly with them.
This thesis proposes an extensive and self-contained description of Euclidean Jordan algebras, and shows how they can be used to design and analyze new algorithms for self-scaled optimization.
Our work focuses on so-called spectral functions on Euclidean Jordan algebras, a natural generalization of spectral functions of symmetric matrices. Based on an original variational analysis technique for Euclidean Jordan algebras, we discuss their most important properties, such as differentiability and convexity. We show how these results can be applied to extend several algorithms that exist for linear or second-order programming to the general class of self-scaled problems. In particular, our methods allowed us to extend Nesterov's powerful smoothing techniques to some nonlinear convex problems and to demonstrate their excellent theoretical and practical efficiency.
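Nesterov's smoothing idea mentioned above can be illustrated on a scalar example (an illustration of the general technique, not the thesis's Jordan-algebraic construction): the nonsmooth f(x) = |x| is replaced by f_μ(x) = max_{|u|≤1} (ux − μu²/2), which is the Huber function, smooth with (1/μ)-Lipschitz gradient and satisfying f_μ(x) ≤ |x| ≤ f_μ(x) + μ/2.

```python
import numpy as np

# Smoothed absolute value (Huber function) arising from Nesterov smoothing:
# quadratic near zero, linear (shifted by mu/2) away from zero.
def huber(x, mu):
    return np.where(np.abs(x) <= mu, x**2 / (2 * mu), np.abs(x) - mu / 2)

x = np.linspace(-3, 3, 1001)
mu = 0.1
gap = np.abs(x) - huber(x, mu)

# uniform approximation: 0 <= |x| - f_mu(x) <= mu/2
print(gap.min() >= 0, gap.max() <= mu / 2 + 1e-12)
```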
|
4 |
Optimization algorithms in compressive sensing (CS) sparse magnetic resonance imaging (MRI)Takeva-Velkova, Viliyana 01 June 2010 (has links)
Magnetic Resonance Imaging (MRI) is an essential instrument in clinical diagnosis; however, it is burdened by a slow data acquisition process due to physical limitations. Compressive Sensing (CS) is a recently developed mathematical framework that offers significant gains in MRI imaging speed by reducing the amount of acquired data without degrading image quality. The image reconstruction process involves solving a nonlinear constrained optimization problem, and reducing the reconstruction time in MRI is of significant benefit. We reformulate sparse MRI reconstruction as a Second Order Cone Program (SOCP). We also explore two alternative techniques to solving the SOCP problem directly: NESTA and a specifically designed SOCP-LB. / UOIT
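A minimal sketch of a first-order sparse-recovery solver in the spirit of the methods above (the operator, sizes, and regularization weight are illustrative assumptions, not the thesis's setup): ISTA (proximal gradient) applied to min_x ½‖Ax − y‖² + λ‖x‖₁, where A plays the role of an undersampled measurement operator.

```python
import numpy as np

# ISTA for l1-regularized least squares: gradient step on the smooth term
# followed by soft-thresholding (the prox of the l1 norm).
rng = np.random.default_rng(2)
m, n = 40, 100
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[[5, 37, 80]] = [1.0, -2.0, 1.5]        # sparse ground truth
y = A @ x_true

lam = 0.01
L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(10000):
    z = x - A.T @ (A @ x - y) / L
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0)  # soft threshold

print(np.linalg.norm(x - x_true) < 0.1)       # recovered up to the lasso bias
```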
|
5 |
Lossless convexification of optimal control problemsHarris, Matthew Wade 30 June 2014 (has links)
This dissertation begins with an introduction to finite-dimensional optimization and optimal control theory. It then proves lossless convexification for three problems: 1) a minimum time rendezvous using differential drag, 2) a maximum divert and landing, and 3) a general optimal control problem with linear state constraints and mixed convex and non-convex control constraints. Each is a unique contribution to the theory of lossless convexification. The first proves lossless convexification in the presence of singular controls and specifies a procedure for converting singular controls to the bang-bang type. The second is the first example of lossless convexification with state constraints. The third is the most general result to date. It says that lossless convexification holds when the state space is a strongly controllable subspace. This extends the controllability concepts used previously, and it recovers earlier results as a special case. Lastly, a few of the remaining research challenges are discussed. / text
|
7 |
Identification using Convexification and RecursionDai, Liang January 2016 (has links)
System identification studies how to construct mathematical models of dynamical systems from input and output data; such models find application in many scenarios, such as predicting the future output of a system or building model-based controllers to regulate it. Among many other methods, convex optimization is becoming an increasingly useful tool for solving system identification problems, because many identification problems can be formulated as, or transformed into, convex optimization problems. This transformation is commonly referred to as the convexification technique. The first theme of the thesis is to understand the efficacy of the convexification idea by examining two specific examples. We first establish that an l1-norm-based approach can indeed help in exploiting the sparsity of the underlying parameter vector under certain persistent excitation assumptions. After that, we analyze how the nuclear norm minimization heuristic performs on a low-rank Hankel matrix completion problem; the key is to construct the dual certificate based on the structural information available in the problem setting. Recursive algorithms are ubiquitous in system identification. The second theme of the thesis is the study of some existing recursive algorithms, establishing new connections and giving new insights or interpretations. We first establish a connection between a basic property of the convolution operator and score function estimation. Based on this relationship, we show how certain recursive Bayesian algorithms can be exploited to estimate the score function for systems with intractable transition densities. We also provide a new derivation and interpretation of the recursive direct weight optimization method by exploiting certain structural information present in the algorithm.
Finally, we study how an improved randomization strategy can be found for the randomized Kaczmarz algorithm, and how the convergence rate of the classical Kaczmarz algorithm can be studied via the stability analysis of a related time-varying linear dynamical system.
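The randomized Kaczmarz algorithm mentioned above is easy to sketch (this is the standard Strohmer-Vershynin variant, not the thesis's improved randomization strategy): for a consistent system Ax = b, each step samples a row i with probability proportional to ‖a_i‖² and projects the iterate onto the hyperplane a_iᵀx = b_i.

```python
import numpy as np

# Randomized Kaczmarz with row sampling proportional to squared row norms.
rng = np.random.default_rng(3)
A = rng.standard_normal((200, 20))
x_true = rng.standard_normal(20)
b = A @ x_true                                # consistent system

row_norms = np.sum(A**2, axis=1)
probs = row_norms / row_norms.sum()           # sampling distribution over rows

x = np.zeros(20)
for _ in range(5000):
    i = rng.choice(len(b), p=probs)
    a = A[i]
    x += (b[i] - a @ x) / row_norms[i] * a    # project onto a_i^T x = b_i

print(np.linalg.norm(x - x_true) < 1e-6)
```

The expected convergence is linear with rate governed by the scaled condition number ‖A‖²_F/σ²_min(A), which is why the choice of sampling distribution matters.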
|
8 |
Aspects of Design and Analysis of Cognitive Radios and NetworksHanif, Muhammad Fainan January 2010 (has links)
Recent survey campaigns have shown tremendous underutilization of the bandwidth allocated to various wireless services. Motivated by this and the ever increasing demand for wireless applications, the concept of cognitive radio (CR) systems offers hope of ending the so-called spectrum scarcity. This thesis presents various facets of the design and analysis of CR systems in a unified way. We begin the thesis with an information theoretic study of cognitive systems working in the so-called low interference regime of the overlay mode. We show that as long as the coverage area of a CR is smaller than that of a primary user (PU) device, the probability of the cognitive terminal inflicting only small interference on the PU is overwhelmingly high. We also analyze the effect of a key parameter governing the amount of power allocated to relaying the PU message in the overlay mode of operation in realistic environments by presenting a simple and accurate approximation. Then, we explore the possibilities of statistically modeling the cumulative interference due to multiple interfering CRs. We show that although it is possible to obtain a closed-form expression for the interference due to a single CR, the problem is particularly difficult for the total CR interference in lognormally faded environments. In particular, we demonstrate that fitting a two- or three-parameter lognormal is not a feasible option for all scenarios. We also explore the second-order characteristics of the cumulative interference by evaluating its level crossing rate (LCR) and average exceedance duration (AED) in Rayleigh and Rician channel conditions. We show that the LCRs in both cases can be evaluated by modeling the interference process with gamma and noncentral χ2 processes, respectively. By exploiting radio environment map (REM) information, we present two CR scheduling schemes and compare their performance with the naive primary exclusion zone (PEZ) technique. The results demonstrate the significance of using an intelligent allocation method to reap the benefits of the tremendous information available in REM-based methods. At this juncture, we divert our attention to multiple-input multiple-output (MIMO) CR systems operating in the underlay mode. Using an antenna selection philosophy, we solve a convex optimization problem accomplishing the task and show via analysis and simulations that antenna selection can be a viable option for CRs operating in relatively sparse PU environments. Finally, we study the impact of imperfect channel state information (CSI) on the downlink of an underlay multiple antenna CR network designed to achieve signal-to-interference-plus-noise ratio (SINR) fairness among the CR terminals. By employing a newly developed convex iteration technique, we solve the relevant optimization problem exactly, without performing any relaxation on the variables involved.
|
9 |
Learning algorithms and statistical software, with applications to bioinformatics / Algorithmes d'apprentissage et logiciels pour la statistique, avec applications à la bioinformatiqueHocking, Toby Dylan 20 November 2012 (has links)
Statistical machine learning is a branch of mathematics concerned with developing algorithms for data analysis. This thesis presents new mathematical models and statistical software, and is organized into two parts. In the first part, I present several new algorithms for clustering and segmentation, a class of techniques that attempt to find structure in data, with a focus on applications to cancer data from bioinformatics. In the second part, I focus on statistical software contributions that are practical for everyday data analysis.
|
10 |
Image restoration in the presence of Poisson-Gaussian noise / Restauration d'images dégradées par un bruit Poisson-GaussJezierska, Anna Maria 13 May 2013 (has links)
This thesis deals with the restoration of images corrupted by blur and noise, with emphasis on confocal microscopy and macroscopy applications. Due to low photon counts and high detector noise, the Poisson-Gaussian model is well suited to this context; however, until now it has not been widely used because of theoretical and practical difficulties. In view of this, we formulate the image restoration problem in the presence of Poisson-Gaussian noise in a variational framework, where we express and study the exact data fidelity term. The solution to the problem can also be interpreted as a Maximum A Posteriori (MAP) estimate. Using recent primal-dual convex optimization algorithms, we obtain results that outperform methods relying on a variety of approximations. Turning our attention to the regularization term in the MAP framework, we study both discrete and continuous approximations of the ℓ0 pseudo-norm. This useful measure, well known for promoting sparsity, is difficult to optimize due to its non-convexity and non-smoothness. We propose an efficient graph-cut procedure for optimizing energies with truncated quadratic priors, and we develop a majorize-minimize memory gradient algorithm to optimize various smooth versions of the ℓ2-ℓ0 norm, with guaranteed convergence properties. In particular, good results are achieved on deconvolution problems.
One difficulty with variational formulations is the need to automatically tune the model hyperparameters. In this context, we propose to estimate the Poisson-Gaussian noise parameters based on two realistic scenarios: one from time-series images, taking bleaching effects into account, and another from a single image. These estimations are grounded in an Expectation-Maximization (EM) approach. Overall, this thesis proposes and evaluates various methodologies for tackling difficult image noise and blur cases, which should be useful in various applicative contexts within and beyond microscopy.
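The thesis estimates the Poisson-Gaussian parameters with an EM approach; as a much simpler moment-based sketch (an assumption of this example, not the thesis's method), note that for y = α·Poisson(u) + N(0, σ²) one has Var[y] = α·E[y] + σ², so (mean, variance) pairs computed over constant-intensity regions lie on a line with slope α and intercept σ². The simulated intensities and sample sizes below are arbitrary.

```python
import numpy as np

# Moment-based Poisson-Gaussian parameter estimation from simulated
# constant-intensity regions: fit variance as a linear function of the mean.
rng = np.random.default_rng(4)
alpha_true, s2_true = 2.0, 9.0
means, variances = [], []
for u in np.linspace(5, 100, 20):             # 20 constant-intensity regions
    y = alpha_true * rng.poisson(u, 50000) + rng.normal(0, np.sqrt(s2_true), 50000)
    means.append(y.mean())
    variances.append(y.var())

# slope estimates alpha (gain), intercept estimates the Gaussian variance
slope, intercept = np.polyfit(means, variances, 1)
print(abs(slope - alpha_true) < 0.1, abs(intercept - s2_true) < 5.0)
```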
|