11 
Convex relaxation for the planted clique, biclique, and clustering problems. Ames, Brendan, January 2011.
A clique of a graph G is a set of pairwise adjacent nodes of G. Similarly, a biclique (U, V) of a bipartite graph G is a pair of disjoint, independent vertex sets such that each node in U is adjacent to every node in V in G. We consider the problems of identifying the maximum clique of a graph, known as the maximum clique problem, and identifying the biclique (U, V) of a bipartite graph that maximizes the product |U| · |V|, known as the maximum edge biclique problem. We show that finding a clique or biclique of a given size in a graph is equivalent to finding a rank-one matrix satisfying a particular set of linear constraints. These problems can be formulated as rank minimization problems and relaxed to convex programming by replacing rank with its convex envelope, the nuclear norm. Both problems are NP-hard, yet we show that our relaxation is exact in the case that the input graph contains a large clique or biclique plus additional nodes and edges. For each problem, we provide two analyses of when our relaxation is exact. In the first, the diversionary edges are added deterministically by an adversary. In the second, each potential edge is added to the graph independently at random with fixed probability p. In the random case, our bounds match the earlier bounds of Alon, Krivelevich, and Sudakov, as well as Feige and Krauthgamer, for the maximum clique problem.
We extend these results and techniques to the k-disjoint-clique problem. The maximum node k-disjoint-clique problem is to find a set of k disjoint cliques of a given input graph containing the maximum number of nodes. Given an input graph G and nonnegative edge weights w, the maximum mean weight k-disjoint-clique problem seeks to identify the set of k disjoint cliques of G that maximizes the sum of the average weights of the edges, with respect to w, of the complete subgraphs of G induced by the cliques. These problems may be considered as a way to pose the clustering problem. In clustering, one wants to partition a given data set so that the data items in each partition, or cluster, are similar and the items in different clusters are dissimilar. For the graph G such that the set of nodes represents a given data set and any two nodes are adjacent if and only if the corresponding items are similar, clustering the data into k disjoint clusters is equivalent to partitioning G into k disjoint cliques. Similarly, given a complete graph with nodes corresponding to a given data set and edge weights indicating similarity between each pair of items, the data may be clustered by solving the maximum mean weight k-disjoint-clique problem.
We show that both instances of the k-disjoint-clique problem can be formulated as rank-constrained optimization problems and relaxed to semidefinite programs using the nuclear norm relaxation of rank. We also show that when the input instance corresponds to a collection of k disjoint planted cliques plus additional edges and nodes, this semidefinite relaxation is exact for both problems. We provide theoretical bounds that guarantee exactness of our relaxation and provide empirical examples of successful applications of our algorithm to synthetic data sets, as well as data sets from clustering applications.
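The equivalence claimed in the abstract — a clique of a given size corresponds to a rank-one matrix satisfying linear constraints — can be checked directly on a toy instance. The sketch below is illustrative only (it verifies the characterization rather than running the thesis's nuclear-norm solver), and the particular graph is an invented example:

```python
import numpy as np

# Illustrative check (not the thesis's solver): a clique of size k with
# indicator vector v gives a rank-one 0/1 matrix X = v v^T that vanishes
# on every non-adjacent off-diagonal pair. The relaxation replaces
# rank(X) with the nuclear norm ||X||_*.

n = 6
A = np.zeros((n, n), dtype=int)
clique = [0, 1, 2, 3]                      # planted 4-clique
for i in clique:
    for j in clique:
        if i != j:
            A[i, j] = 1
A[4, 5] = A[5, 4] = 1                      # one diversionary edge

v = np.zeros(n)
v[clique] = 1
X = np.outer(v, v)                         # candidate rank-one matrix

# Linear constraints: X_ij = 0 whenever (i, j) is a non-edge, i != j.
ok = all(X[i, j] == 0
         for i in range(n) for j in range(n)
         if i != j and A[i, j] == 0)
print(np.linalg.matrix_rank(X), ok)        # rank 1, constraints hold
```

The nuclear-norm program searches for such an X among all matrices satisfying the linear constraints; the thesis's exactness results say the relaxation recovers exactly this rank-one matrix when the planted clique is large enough.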

12 
Information, complexity and structure in convex optimization. Guzman Paredes, Cristobal, 08 June 2015.
This thesis is focused on the limits of performance of large-scale convex optimization algorithms. The classical theory of oracle complexity, first proposed by Nemirovski and Yudin in 1983, successfully established the worst-case behavior of methods based on local oracles (a generalization of the first-order oracle for smooth functions) for nonsmooth convex minimization, in both the large-scale and low-scale regimes, as well as the complexity of approximately solving linear systems of equations (equivalent to convex quadratic minimization) over Euclidean balls, under a matrix-vector multiplication oracle.
Our work extends the applicability of lower bounds in two directions:
Worst-Case Complexity of Large-Scale Smooth Convex Optimization: We generalize lower bounds on the complexity of first-order methods for convex optimization, considering classes of convex functions with Hölder continuous gradients. Our technique relies on the existence of a smoothing kernel, which defines a smooth approximation of any convex function via infimal convolution. As a consequence, we derive lower bounds for \ell_p/\ell_q setups, where 1 ≤ p, q ≤ ∞, and extend them to the matrix analogue: smooth convex minimization (with respect to the Schatten q-norm) over matrices with bounded Schatten p-norm.
The major consequences of this result are the near-optimality of the conditional gradient method over box-type domains (p = q = ∞) and the near-optimality of Nesterov's accelerated method over the cross-polytope (p = q = 1).
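The smoothing-kernel idea can be made concrete with one particular kernel choice, the quadratic one, whose infimal convolution with a convex function is the Moreau envelope (the thesis treats more general kernels; this numerical sketch only illustrates the mechanism for f(x) = |x|, where the envelope has a closed form, the Huber function):

```python
import numpy as np

# Sketch of smoothing by infimal convolution with a quadratic kernel:
#   f_mu(x) = min_y  f(y) + (x - y)^2 / (2*mu)       (Moreau envelope)
# For f(x) = |x| this minimization yields the Huber function, which is
# smooth and uniformly close to f: 0 <= |x| - f_mu(x) <= mu/2.

def moreau_env_abs(x, mu):
    """Moreau envelope of |.|, evaluated by brute-force minimization."""
    y = np.linspace(-5, 5, 200001)
    return np.min(np.abs(y) + (x - y) ** 2 / (2 * mu))

def huber(x, mu):
    """Closed-form envelope: quadratic near 0, linear in the tails."""
    return x ** 2 / (2 * mu) if abs(x) <= mu else abs(x) - mu / 2

mu = 0.5
for x in (-2.0, -0.3, 0.0, 0.4, 1.7):
    assert abs(moreau_env_abs(x, mu) - huber(x, mu)) < 1e-4
print("envelope matches Huber; gap bounded by mu/2 =", mu / 2)
```

The lower-bound construction exploits exactly this trade-off: a smaller mu gives a better approximation but a larger gradient Hölder constant.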
Distributional Complexity of Nonsmooth Convex Optimization: In this work, we prove average-case lower bounds on the complexity of nonsmooth convex optimization. We introduce an information-theoretic method, based on the reconstruction principle, to analyze the complexity of oracle-based algorithms solving a random instance.
Our technique shows that all known lower bounds for nonsmooth convex optimization can be derived by an emulation procedure from a common String-Guessing Problem, which is combinatorial in nature. The derived average-case lower bounds extend, via Fano's inequality, to hold with high probability and for algorithms with bounded error probability.
Finally, from the proposed technique we establish the equivalence (up to constant factors) of distributional, randomized, and worst-case complexity for black-box convex optimization. In particular, there is no gain from randomization in this setting.

13 
Deblurring with Framelets in the Sparse Analysis Setting. Danniels, Travis, 23 December 2013.
In this thesis, algorithms for blind and non-blind motion deblurring of digital images are proposed. The non-blind algorithm is based on a convex program consisting of a data fitting term and a sparsity-promoting regularization term. The data fitting term is the squared l_2 norm of the residual between the blurred image and the latent image convolved with a known blur kernel. The regularization term is the l_1 norm of the latent image under a wavelet frame (framelet) decomposition. This convex program is solved with the first-order primal-dual algorithm proposed by Chambolle and Pock. The proposed blind deblurring algorithm is based on the work of Cai, Ji, Liu, and Shen. It works by embedding the proposed non-blind algorithm in an alternating minimization scheme and imposing additional constraints in order to deal with the challenging nonconvex nature of the blind deblurring problem. Numerical experiments are performed on artificially and naturally blurred images, and both proposed algorithms are found to be competitive with recent deblurring methods.
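The structure of the non-blind objective — squared residual under a known kernel plus an l_1 regularizer — can be sketched in one dimension. The toy below uses plain proximal gradient (ISTA) with the identity in place of the framelet transform, so it is only a simplified stand-in for the thesis's Chambolle-Pock framelet formulation; the signal, kernel, and parameter values are invented:

```python
import numpy as np

# Minimal 1-D sketch of the non-blind objective
#   (1/2) * || k * x - b ||_2^2 + lam * || W x ||_1
# solved with proximal gradient (ISTA) and W = identity for brevity;
# the thesis uses a framelet W and the Chambolle-Pock algorithm.

rng = np.random.default_rng(0)
n = 64
k = np.array([0.25, 0.5, 0.25])           # known blur kernel
x_true = np.zeros(n); x_true[[10, 30, 50]] = [1.0, -2.0, 1.5]
blur = lambda z: np.convolve(z, k, mode="same")
b = blur(x_true) + 0.01 * rng.standard_normal(n)

lam, step = 0.005, 1.0                    # step <= 1/L; here L <= 1
soft = lambda z, t: np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

x = np.zeros(n)
for _ in range(500):
    grad = blur(blur(x) - b)              # K^T (K x - b); k is symmetric
    x = soft(x - step * grad, step * lam)

print("recovery error:", np.linalg.norm(x - x_true))
```

The soft-thresholding step is the proximal operator of the l_1 term; in the analysis (framelet) setting that operator is no longer separable, which is one reason a primal-dual method such as Chambolle-Pock is preferred.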

14 
Aspects of Design and Analysis of Cognitive Radios and Networks. Hanif, Muhammad Fainan, January 2010.
Recent survey campaigns have shown a tremendous underutilization of the bandwidth allocated to various wireless services. Motivated by this and the ever increasing demand for wireless applications, the concept of cognitive radio (CR) systems has offered hope of ending the so-called spectrum scarcity. This thesis presents various facets of the design and analysis of CR systems in a unified way. We begin the thesis by presenting an information-theoretic study of cognitive systems working in the so-called low interference regime of the overlay mode. We show that as long as the coverage area of a CR is less than that of a primary user (PU) device, the probability of the cognitive terminal inflicting small interference at the PU is overwhelmingly high. We also analyze, via a simple and accurate approximation, the effect of a key parameter governing the amount of power allocated to relaying the PU message in the overlay mode of operation in realistic environments. Then, we explore the possibilities of statistically modeling the cumulative interference due to multiple interfering CRs. We show that although it is possible to obtain a closed-form expression for the interference due to a single CR, the problem is particularly difficult when it comes to the total CR interference in lognormally faded environments. In particular, we demonstrate that fitting a two- or three-parameter lognormal is not a feasible option for all scenarios. We also explore the second-order characteristics of the cumulative interference by evaluating its level crossing rate (LCR) and average exceedance duration (AED) in Rayleigh and Rician channel conditions. We show that the LCRs in both these cases can be evaluated by modeling the interference process with gamma and noncentral χ2 processes, respectively. By exploiting radio environment map (REM) information, we present two CR scheduling schemes and compare their performance with the naive primary exclusion zone (PEZ) technique. The results demonstrate the significance of using an intelligent allocation method to reap the benefits of the tremendous information available in REM-based methods. At this juncture, we divert our attention to multiple-input multiple-output (MIMO) CR systems operating in the underlay mode. Using an antenna selection philosophy, we solve a convex optimization problem accomplishing the task and show via analysis and simulations that antenna selection can be a viable option for CRs operating in relatively sparse PU environments. Finally, we study the impact of imperfect channel state information (CSI) on the downlink of an underlay multiple-antenna CR network designed to achieve signal-to-interference-plus-noise ratio (SINR) fairness among the CR terminals. By employing a newly developed convex iteration technique, we solve the relevant optimization problem exactly, without performing any relaxation on the variables involved.
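The difficulty of lognormally faded aggregate interference can be probed numerically. The Monte-Carlo sketch below (an illustration, not the thesis's analysis) matches a single lognormal to the first two moments of a sum of lognormal terms, in the style of the classical Fenton-Wilkinson approximation; the shadowing parameters and number of interferers are invented:

```python
import numpy as np

# Monte-Carlo probe: how well does a moment-matched single lognormal
# represent the sum of K lognormal interference terms?  (Illustrative
# only; parameters are assumed, not taken from the thesis.)

rng = np.random.default_rng(1)
sigma_db = 8.0                         # shadowing std dev in dB
sigma = sigma_db * np.log(10) / 10     # convert to natural-log scale
K = 6                                  # number of interfering CRs
samples = rng.lognormal(mean=0.0, sigma=sigma,
                        size=(200_000, K)).sum(axis=1)

# Fenton-Wilkinson style fit: match mean and variance of the sum.
m, v = samples.mean(), samples.var()
s2 = np.log(1 + v / m**2)
mu = np.log(m) - s2 / 2
fit = rng.lognormal(mean=mu, sigma=np.sqrt(s2), size=200_000)

# Compare quantiles: the two moments agree by construction, but the
# fitted distribution can deviate noticeably in the tails.
for q in (0.5, 0.9, 0.99):
    print(q, np.quantile(samples, q), np.quantile(fit, q))
```

With heavy shadowing (large sigma), the effective sigma of the fitted lognormal is smaller than that of the individual terms, which is the kind of mismatch behind the abstract's conclusion that a two- or three-parameter lognormal fit is not feasible for all scenarios.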

15 
Audit Games. Sinha, Arunesh, 01 July 2014.
Modern organizations (e.g., hospitals, banks, social networks, search engines) hold large volumes of personal information and rely heavily on auditing for enforcement of privacy policies. These audit mechanisms combine automated methods with human input to detect and punish violators. Since human audit resources are limited, and often not sufficient to investigate all potential violations, current state-of-the-art audit tools provide heuristics to guide human effort. However, numerous reports of privacy breaches caused by malicious insiders call into question the effectiveness of these audit mechanisms. Our thesis is that effective audit resource allocation and punishment levels can be efficiently computed by modeling the audit process as a game between a rational auditor and a rational or worst-case auditee. We present several results in support of this thesis. In the worst-case adversary setting, we design a game model taking into account the organizational cost of auditing and the loss from violations. We propose the notion of low regret as a desired audit property and provide a regret-minimizing audit algorithm that outputs an optimal audit resource allocation strategy. The algorithm improves upon prior regret bounds in the partial information setting. In the rational adversary setting, we enable punishments by the auditor and model the adversary's utility as a trade-off between the benefit from violations and the loss due to punishment when detected. Our Stackelberg game model generalizes an existing deployed security game model with punishment parameters. It applies to natural auditing settings with multiple auditors, where each auditor is restricted to audit a subset of the potential violations. We provide novel polynomial-time algorithms to approximate the nonconvex optimization problem used to compute the Stackelberg equilibrium. The algorithms output an optimal audit resource allocation strategy and punishment levels.
We also provide a method to reduce the size of the optimization problem, achieving up to a 5x speedup for realistic instances of the audit problem and for the related security game instances.
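The regret-minimization viewpoint can be illustrated with the textbook multiplicative-weights (Hedge) update in the full-information setting; this is a generic sketch with invented loss data, not the thesis's algorithm, which addresses the harder partial-information case:

```python
import numpy as np

# Hedge sketch: allocate audit effort over n candidate targets so that
# cumulative loss is close to that of the best fixed target in
# hindsight (low regret).  Losses here are synthetic.

rng = np.random.default_rng(2)
T, n = 2_000, 4                        # rounds, candidate audit targets
eta = np.sqrt(np.log(n) / T)
w = np.ones(n)
losses = rng.uniform(size=(T, n))
losses[:, 1] *= 0.3                    # target 1 is best in hindsight

total, loss_hist = 0.0, np.zeros(n)
for t in range(T):
    p = w / w.sum()                    # allocation distribution
    total += p @ losses[t]             # expected loss this round
    loss_hist += losses[t]
    w *= np.exp(-eta * losses[t])      # multiplicative update

regret = total - loss_hist.min()
print("avg regret:", regret / T)       # shrinks like sqrt(log(n)/T)
```

The average regret vanishes as T grows, so the allocation ends up concentrated on the target that a clairvoyant auditor would have chosen.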

16 
An Improved Convex Optimization Model for Two-Dimensional Facility Layout. Jankovits, Ibolya, 22 January 2007.
The facility layout design problem, originally formulated in 1963 by Armour & Buffa, is a fundamental optimization problem encountered in many manufacturing and service organizations. This thesis derives a convex programming model, IBIMODEL, designed to improve upon the ModCoAR model of Anjos & Vannelli for the facility layout problem with unequal areas. The purpose of IBIMODEL is to find 'good' initial locations for the departments, which a second model then uses to produce a detailed solution to the facility layout problem. The proposed model has four ideas behind it: unlike ModCoAR, its objective function does not improve as the departments start overlapping; it takes into account the aspect ratio requirements; it introduces a systematic approach to making parameter choices; and it uses a new second stage, recently proposed by Luo, Anjos & Vannelli, to obtain the actual facility layouts. In this way, IBIMODEL efficiently generates a reasonably diverse set of superior solutions that allow the second stage to provide a wide variety of layouts with relatively low aspect ratios and total cost.
The proposed methodology was implemented and numerical results are presented on well-known large layout problems from the literature. To demonstrate the potential of the combination of IBIMODEL with Luo, Anjos & Vannelli's model, our results are compared with the best layouts found to date for these well-known large facility layout problems. The results support the conclusion that the proposed methodology consistently produces competitive, and often improved, layouts for large instances when compared with other approaches in the literature.

18 
Iterative Methods for Common Fixed Points of Nonexpansive Mappings in Hilbert Spaces. Lai, Peilin, 16 May 2011.
The aim of this work is to propose viscosity-like methods for finding a specific common fixed point of a finite family T = {T_i}_{i=1}^{N} of nonexpansive self-mappings of a closed convex subset C of a Hilbert space H. We propose two schemes: one implicit and the other explicit. The implicit scheme determines a set {x_t : 0 < t < 1} through the fixed point equation x_t = t f(x_t) + (1 − t) T x_t, where f : C → C is a contraction. The explicit scheme is the discretization of the implicit scheme and defines a sequence {x_n} by the recursion x_{n+1} = α_n f(x_n) + (1 − α_n) T x_n for n ≥ 0, where {α_n} ⊂ (0, 1). It has been shown in the literature that both implicit and explicit schemes converge in norm to a fixed point of T (with additional conditions imposed on the sequence {α_n} in the explicit scheme). We will extend both schemes to the case of a finite family of nonexpansive mappings. Our proposed schemes converge in norm to a common fixed point of the family, which in addition solves a variational inequality.
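The explicit scheme can be run numerically in one dimension. The example below is an invented toy instance (a single nonexpansive map, not a family): T is the projection onto [1, 2], f(x) = x/2 is a contraction, and α_n = 1/(n+1); the viscosity limit should be the fixed point q of T solving the variational inequality ⟨(I − f)q, x − q⟩ ≥ 0 for all x in Fix(T) = [1, 2], which here is q = 1:

```python
# 1-D sketch of the explicit viscosity iteration
#   x_{n+1} = alpha_n * f(x_n) + (1 - alpha_n) * T(x_n)
# with T nonexpansive (projection onto [1, 2]) and f a contraction.

T_map = lambda x: min(max(x, 1.0), 2.0)   # projection onto [1, 2]
f = lambda x: 0.5 * x                     # contraction, ratio 1/2

x = 5.0
for n in range(200_000):
    a = 1.0 / (n + 1)                     # alpha_n -> 0, sum diverges
    x = a * f(x) + (1 - a) * T_map(x)

print("limit:", x)                        # approaches q = 1
```

Note that the iterates do not converge to an arbitrary fixed point of T: the viscosity term f steers them to the particular fixed point selected by the variational inequality, which is the point of the scheme.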

19 
Learning algorithms and statistical software, with applications to bioinformatics. Hocking, Toby Dylan, 20 November 2012.
Statistical machine learning is a branch of mathematics concerned with developing algorithms for data analysis. This thesis presents new mathematical models and statistical software, and is organized into two parts. In the first part, I present several new algorithms for clustering and segmentation. Clustering and segmentation are a class of techniques that attempt to find structures in data. I discuss the following contributions, with a focus on applications to cancer data from bioinformatics. In the second part, I focus on statistical software contributions which are practical for use in everyday data analysis.

20 
Image restoration in the presence of Poisson-Gaussian noise. Jezierska, Anna Maria, 13 May 2013.
This thesis deals with the restoration of images corrupted by blur and noise, with emphasis on confocal microscopy and macroscopy applications. Due to low photon counts and high detector noise, the Poisson-Gaussian model is well suited to this context. However, up to now it has not been widely used because of theoretical and practical difficulties. In view of this, we formulate the image restoration problem in the presence of Poisson-Gaussian noise in a variational framework, where we express and study the exact data fidelity term. The solution to the problem can also be interpreted as a Maximum A Posteriori (MAP) estimate. Using recent primal-dual convex optimization algorithms, we obtain results that outperform methods relying on a variety of approximations. Turning our attention to the regularization term in the MAP framework, we study both discrete and continuous approximations of the $\ell_0$ pseudo-norm. This useful measure, well known for promoting sparsity, is difficult to optimize due to its nonconvexity and nonsmoothness. We propose an efficient graph-cut procedure for optimizing energies with truncated quadratic priors. Moreover, we develop a majorize-minimize memory gradient algorithm, with guaranteed convergence properties, to optimize various smooth versions of the $\ell_2$-$\ell_0$ norm. In particular, good results are achieved on deconvolution problems.
One difficulty with variational formulations is the need to tune the model hyperparameters. In this context, we propose to estimate the Poisson-Gaussian noise parameters based on two realistic scenarios: one from time-series images, taking bleaching effects into account, and another from a single image. These estimates are grounded in an Expectation-Maximization (EM) approach. Overall, this thesis proposes and evaluates various methodologies for tackling difficult image noise and blur cases, which should be useful in various applicative contexts within and beyond microscopy.
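Why the Poisson-Gaussian parameters are identifiable at all can be seen from the model's moment structure. The sketch below uses simple moment matching on synthetic frames rather than the EM approach of the thesis, and all numerical values (gain, noise floor, intensities) are invented for illustration:

```python
import numpy as np

# Moment-matching sketch (the thesis uses EM instead).  Per-pixel model:
#   y = g * Poisson(x / g) + N(0, s2)
# gives  E[y] = x  and  Var[y] = g * x + s2 :  the variance is an affine
# function of the mean, so a line fit over pixels recovers the gain g
# (slope) and the Gaussian noise floor s2 (intercept).

rng = np.random.default_rng(3)
g_true, s2_true = 2.0, 4.0
x = rng.uniform(10, 200, size=300)                 # true intensities
frames = g_true * rng.poisson(x / g_true, size=(500, 300)) \
         + rng.normal(0.0, np.sqrt(s2_true), size=(500, 300))

means = frames.mean(axis=0)
variances = frames.var(axis=0, ddof=1)
g_hat, s2_hat = np.polyfit(means, variances, 1)    # slope, intercept

print("gain:", g_hat, "noise variance:", s2_hat)
```

This requires repeated frames of the same scene; the single-image scenario treated in the thesis is harder precisely because per-pixel means and variances are not directly observable there.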
