21 
Information, complexity and structure in convex optimization / Guzman Paredes, Cristobal, 08 June 2015 (has links)
This thesis is focused on the limits of performance of large-scale convex optimization algorithms. Classical theory of oracle complexity, first proposed by Nemirovski and Yudin in 1983, successfully established the worst-case behavior of methods based on local oracles (a generalization of the first-order oracle for smooth functions) for nonsmooth convex minimization, both in the large-scale and low-scale regimes; and the complexity of approximately solving linear systems of equations (equivalent to convex quadratic minimization) over Euclidean balls, under a matrix-vector multiplication oracle.
Our work extends the applicability of lower bounds in two directions:
Worst-Case Complexity of Large-Scale Smooth Convex Optimization: We generalize lower bounds on the complexity of first-order methods for convex optimization, considering classes of convex functions with Hölder continuous gradients. Our technique relies on the existence of a smoothing kernel, which defines a smooth approximation of any convex function via infimal convolution. As a consequence, we derive lower bounds for \ell_p/\ell_q setups, where 1 \leq p, q \leq \infty, and extend them to the matrix analogue: smooth convex minimization (with respect to the Schatten q-norm) over matrices with bounded Schatten p-norm.
The major consequences of this result are the near-optimality of the Conditional Gradient method over box-type domains (p = q = \infty), and the near-optimality of Nesterov's accelerated method over the cross-polytope (p = q = 1).
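The infimal-convolution smoothing underlying this construction can be illustrated numerically. Below is a minimal Python sketch (not from the thesis) using the Moreau envelope, the special case of quadratic smoothing; for f(y) = |y| it recovers the well-known Huber function. The smoothing parameter, grid resolution, and test points are arbitrary choices for the illustration.

```python
import numpy as np

def moreau_envelope(f, x, lam, grid):
    # M_lam f(x) = min_y f(y) + (1/(2*lam)) * (x - y)^2,
    # approximated by minimizing over a fine grid of y values.
    return np.min(f(grid) + (x - grid) ** 2 / (2 * lam))

def huber(x, lam):
    # Closed form of the Moreau envelope of f(y) = |y|.
    return np.where(np.abs(x) <= lam,
                    x ** 2 / (2 * lam),
                    np.abs(x) - lam / 2)

lam = 0.5
grid = np.linspace(-5, 5, 200001)           # fine grid for the inner minimization
for x in [-2.0, -0.3, 0.0, 0.7, 3.0]:
    env = moreau_envelope(np.abs, x, lam, grid)
    assert abs(env - huber(x, lam)) < 1e-4  # smoothed |x| matches Huber
```

The envelope is everywhere smooth and uniformly close to the original nonsmooth function, which is exactly the property the lower-bound construction exploits.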
Distributional Complexity of Nonsmooth Convex Optimization: In this work, we prove average-case lower bounds on the complexity of nonsmooth convex optimization. We introduce an information-theoretic method, based on the reconstruction principle, for analyzing the complexity of oracle-based algorithms solving a random instance.
Our technique shows that all known lower bounds for nonsmooth convex optimization can be derived by an emulation procedure from a common String-Guessing Problem, which is combinatorial in nature. The derived average-case lower bounds extend, via Fano's inequality, to bounds that hold with high probability and for algorithms with bounded error probability.
Finally, from the proposed technique we establish the equivalence (up to constant factors) of distributional, randomized, and worst-case complexity for black-box convex optimization. In particular, there is no gain from randomization in this setup.

22 
Audit Games / Sinha, Arunesh, 01 July 2014 (has links)
Modern organizations (e.g., hospitals, banks, social networks, search engines) hold large volumes of personal information and rely heavily on auditing for enforcement of privacy policies. These audit mechanisms combine automated methods with human input to detect and punish violators. Since human audit resources are limited, and often not sufficient to investigate all potential violations, current state-of-the-art audit tools provide heuristics to guide human effort. However, numerous reports of privacy breaches caused by malicious insiders call into question the effectiveness of these audit mechanisms. Our thesis is that effective audit resource allocation and punishment levels can be efficiently computed by modeling the audit process as a game between a rational auditor and a rational or worst-case auditee. We present several results in support of this thesis. In the worst-case adversary setting, we design a game model that takes into account the organizational cost of auditing and the loss from violations. We propose the notion of low regret as a desired audit property and provide a regret-minimizing audit algorithm that outputs an optimal audit resource allocation strategy. The algorithm improves upon prior regret bounds in the partial-information setting. In the rational adversary setting, we enable punishments by the auditor and model the adversary's utility as a tradeoff between the benefit from violations and the loss due to punishment when detected. Our Stackelberg game model generalizes an existing deployed security game model with punishment parameters. It applies to natural auditing settings with multiple auditors where each auditor is restricted to audit a subset of the potential violations. We provide novel polynomial-time algorithms to approximate the non-convex optimization problem used to compute the Stackelberg equilibrium. The algorithms output the optimal audit resource allocation strategy and punishment levels.
We also provide a method to reduce the optimization problem size, achieving up to 5x speedup for realistic instances of the audit problem, and for the related security game instances.
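The low-regret property central to the worst-case setting can be illustrated with a generic exponential-weights (Hedge) sketch. This is not the thesis's audit algorithm (which works in the harder partial-information setting); it is the standard full-information baseline, with the loss sequence, horizon, and number of audit targets chosen arbitrarily for the demonstration.

```python
import math

def hedge(losses, eta):
    # Exponential weights: maintain one weight per action (audit target),
    # play the normalized weights each round, suffer the expected loss.
    k = len(losses[0])
    w = [1.0] * k
    total = 0.0
    for round_losses in losses:
        s = sum(w)
        total += sum(wi / s * li for wi, li in zip(w, round_losses))
        w = [wi * math.exp(-eta * li) for wi, li in zip(w, round_losses)]
    return total

T, K = 400, 5
# Deterministic, adversarial-looking losses in [0, 1].
losses = [[((t * (i + 1)) % 7) / 6.0 for i in range(K)] for t in range(T)]
eta = math.sqrt(8 * math.log(K) / T)
alg = hedge(losses, eta)
best = min(sum(l[i] for l in losses) for i in range(K))
regret = alg - best
assert regret <= math.sqrt(T * math.log(K) / 2)  # standard Hedge regret bound
```

Regret grows only like the square root of the horizon, so the per-round gap to the best fixed allocation vanishes, which is the sense in which a "low regret" audit strategy is desirable.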

23 
Deblurring with Framelets in the Sparse Analysis Setting / Danniels, Travis, 23 December 2013 (has links)
In this thesis, algorithms for blind and non-blind motion deblurring of digital images are proposed. The non-blind algorithm is based on a convex program consisting of a data-fitting term and a sparsity-promoting regularization term. The data-fitting term is the squared l_2 norm of the residual between the blurred image and the latent image convolved with a known blur kernel. The regularization term is the l_1 norm of the latent image under a wavelet frame (framelet) decomposition. This convex program is solved with the first-order primal-dual algorithm proposed by Chambolle and Pock. The proposed blind deblurring algorithm is based on the work of Cai, Ji, Liu, and Shen. It works by embedding the proposed non-blind algorithm in an alternating minimization scheme and imposing additional constraints in order to deal with the challenging non-convex nature of the blind deblurring problem. Numerical experiments are performed on artificially and naturally blurred images, and both proposed algorithms are found to be competitive with recent deblurring methods. / Graduate / 0544 / tdanniels@gmail.com
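The Chambolle-Pock first-order primal-dual iteration can be sketched on a toy analysis-style problem, min_x (1/2)||Ax - b||^2 + lam*||x||_1. This is not the thesis's deblurring code: the identity stands in for the framelet transform, and A, lam, the step sizes, and the iteration count are arbitrary choices for the sketch; the final assertions check the l_1 subgradient optimality conditions.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 5))
b = rng.standard_normal(8)
lam = 0.5

# Chambolle-Pock on min_x G(x) + F(Kx) with K = I,
# G(x) = 0.5*||Ax - b||^2 and F(y) = lam*||y||_1.
tau = sigma = 0.5                               # step sizes: sigma*tau*||K||^2 < 1
x = np.zeros(5); y = np.zeros(5); xbar = x.copy()
M = np.linalg.inv(np.eye(5) + tau * A.T @ A)    # prox of tau*G is a linear solve
for _ in range(5000):
    y = np.clip(y + sigma * xbar, -lam, lam)    # prox of sigma*F*: projection
    x_new = M @ (x - tau * y + tau * A.T @ b)   # prox of tau*G
    xbar = 2 * x_new - x
    x = x_new

# Optimality: A^T(Ax - b) must lie in -lam * subdifferential of ||x||_1.
r = A.T @ (A @ x - b)
for ri, xi in zip(r, x):
    if abs(xi) > 1e-4:
        assert abs(ri + lam * np.sign(xi)) < 1e-2
    else:
        assert abs(ri) < lam + 1e-2
```

The same two prox steps drive the deblurring solver; only the linear operators (blur kernel and framelet transform) change.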

24 
Convex relaxation for the planted clique, biclique, and clustering problems / Ames, Brendan, January 2011 (has links)
A clique of a graph G is a set of pairwise adjacent nodes of G. Similarly, a biclique (U, V) of a bipartite graph G is a pair of disjoint, independent vertex sets such that each node in U is adjacent to every node in V in G. We consider the problems of identifying the maximum clique of a graph, known as the maximum clique problem, and identifying the biclique (U, V) of a bipartite graph that maximizes the product |U| · |V|, known as the maximum edge biclique problem. We show that finding a clique or biclique of a given size in a graph is equivalent to finding a rank-one matrix satisfying a particular set of linear constraints. These problems can be formulated as rank minimization problems and relaxed to convex programs by replacing rank with its convex envelope, the nuclear norm. Both problems are NP-hard, yet we show that our relaxation is exact in the case that the input graph contains a large clique or biclique plus additional nodes and edges. For each problem, we provide two analyses of when our relaxation is exact. In the first, the diversionary edges are added deterministically by an adversary. In the second, each potential edge is added to the graph independently at random with fixed probability p. In the random case, our bounds match the earlier bounds of Alon, Krivelevich, and Sudakov, as well as Feige and Krauthgamer, for the maximum clique problem.
We extend these results and techniques to the k-disjoint-clique problem. The maximum node k-disjoint-clique problem is to find a set of k disjoint cliques of a given input graph containing the maximum number of nodes. Given an input graph G and nonnegative edge weights w, the maximum mean weight k-disjoint-clique problem seeks to identify the set of k disjoint cliques of G that maximizes the sum of the average weights of the edges, with respect to w, of the complete subgraphs of G induced by the cliques. These problems may be considered as a way to pose the clustering problem. In clustering, one wants to partition a given data set so that the data items in each partition, or cluster, are similar and the items in different clusters are dissimilar. For the graph G such that the set of nodes represents a given data set and any two nodes are adjacent if and only if the corresponding items are similar, clustering the data into k disjoint clusters is equivalent to partitioning G into k disjoint cliques. Similarly, given a complete graph with nodes corresponding to a given data set and edge weights indicating similarity between each pair of items, the data may be clustered by solving the maximum mean weight k-disjoint-clique problem.
We show that both instances of the k-disjoint-clique problem can be formulated as rank-constrained optimization problems and relaxed to semidefinite programs using the nuclear norm relaxation of rank. We also show that when the input instance corresponds to a collection of k disjoint planted cliques plus additional edges and nodes, this semidefinite relaxation is exact for both problems. We provide theoretical bounds that guarantee exactness of our relaxation and provide empirical examples of successful applications of our algorithm to synthetic data sets, as well as data sets from clustering applications.
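As a rough illustration of why planted structure is recoverable at all, a spectral proxy already locates a planted clique in a toy graph. This is not the nuclear-norm semidefinite relaxation the thesis analyzes (that needs an SDP solver); the deterministic graph below, a K_5 plus a short diversionary path, is chosen only so the outcome is easy to verify.

```python
import numpy as np

n = 10
A = np.zeros((n, n))
clique = [0, 1, 2, 3, 4]                  # planted clique K_5
for i in clique:
    for j in clique:
        if i != j:
            A[i, j] = 1.0
for i, j in [(5, 6), (6, 7), (7, 8)]:     # diversionary edges (a short path)
    A[i, j] = A[j, i] = 1.0

# The leading eigenvector of the adjacency matrix concentrates on the
# planted clique: eigenvalue 4 for K_5 vs. about 1.62 for the path.
vals, vecs = np.linalg.eigh(A)
v = np.abs(vecs[:, -1])                   # eigenvector of the largest eigenvalue
recovered = sorted(np.argsort(v)[-5:])
assert recovered == clique
```

The convex relaxation provides the analogous guarantee rigorously, and under adversarial or random noise regimes where a bare spectral heuristic can fail.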

25 
An open source object oriented platform for rapid design of high performance path following interior-point methods / Chiş, Voicu, January 2008 (has links)
Interior point methods (IPMs) are a powerful tool in convex optimization. From the theoretical point of view, the convex set of feasible solutions is represented by a so-called barrier functional, and the only information required by the algorithms is the evaluation of the barrier, its gradient, and its Hessian. As a result, IPM algorithms can be used for many types of convex problems, and their theoretical performance depends on the properties of the barrier. In practice, performance depends on how the data structure is exploited at the linear algebra level. In this thesis, we make use of the object-oriented paradigm supported by C++ to create a platform where the aforementioned generality of IPM algorithms is preserved and the possibility to exploit the data structure is available. We illustrate the power of such an approach on optimization problems arising in the field of radiation therapy, in particular Intensity Modulated Radiation Therapy. / Thesis / Master of Science (MSc)
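The barrier-oracle viewpoint can be sketched in a few lines. Below is a minimal path-following example, in Python rather than the thesis's C++ platform, minimizing x over the interval [1, 3] with the logarithmic barrier; the barrier-parameter schedule, starting point, and iteration caps are arbitrary choices. The final check uses the central-path duality gap bound f(x) - f* <= m/t for m = 2 inequality constraints.

```python
def barrier_grad_hess(x, t, lo=1.0, hi=3.0):
    # phi_t(x) = t*x - log(x - lo) - log(hi - x): gradient and Hessian.
    g = t - 1.0 / (x - lo) + 1.0 / (hi - x)
    h = 1.0 / (x - lo) ** 2 + 1.0 / (hi - x) ** 2
    return g, h

x, t = 2.0, 1.0
while t < 1e6:
    for _ in range(50):                       # damped Newton on the barrier
        g, h = barrier_grad_hess(x, t)
        step = -g / h
        while not (1.0 < x + step < 3.0):     # stay strictly inside the box
            step *= 0.5
        x += step
        if abs(g) < 1e-10:
            break
    t *= 10.0                                 # tighten the barrier parameter

assert 1.0 < x < 3.0
assert x - 1.0 < 2.0 / 1e5 + 1e-6             # gap bound m/t at the last t = 1e5
```

The algorithm touches the feasible set only through the barrier's value, gradient, and Hessian, which is precisely the generality the platform exposes; the linear-algebra layer (here a scalar division) is where problem structure is exploited.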

26 
Mathematical optimization techniques for cognitive radar networks / Rossetti, Gaia, January 2018 (has links)
This thesis discusses mathematical optimization techniques for waveform design in cognitive radars. These techniques have been designed with an increasing level of sophistication, starting from a bistatic model (i.e. two transmitters and a single receiver) and ending with a cognitive network (i.e. multiple transmitting and multiple receiving radars). The environment under investigation always features strong signal-dependent clutter and noise. All algorithms are based on an iterative waveform-filter optimization. The waveform optimization is based on convex optimization techniques and the exploitation of initial radar waveforms characterized by desired auto- and cross-correlation properties. Finally, robust optimization techniques are introduced to account for the assumptions made by cognitive radars on certain second-order statistics, such as the covariance matrix of the clutter. More specifically, initial optimization techniques were proposed for the case of bistatic radars. By maximizing the signal-to-interference-and-noise ratio (SINR) under certain constraints on the transmitted signals, it was possible to iteratively optimize both the orthogonal transmission waveforms and the receiver filter. Subsequently, the above work was extended to a convex optimization framework for a waveform design technique for bistatic radars where both radars transmit and receive to detect targets. The method exploited prior knowledge of the environment to maximize the accumulated target return signal power while keeping the disturbance power to unity at both radar receivers. The thesis further proposes convex optimization based waveform designs for multiple-input multiple-output (MIMO) cognitive radars. All radars within the system are able to both transmit and receive signals for detecting targets. The proposed model investigated two complementary optimization techniques.
The first one aims at optimizing the SINR of a specific radar while keeping the SINR of the remaining radars at desired levels. The second approach optimizes the SINR of all radars using a max-min optimization criterion. To account for possible mismatches between actual parameters and estimated ones, this thesis includes robust optimization techniques. Initially, the multistatic, signal-dependent model was tested against existing worst-case and probabilistic methods. These methods appeared to be overly conservative and generic for the considered signal-dependent clutter scenario. Therefore a new approach was derived, where uncertainty was assumed directly on the radar cross-section and Doppler parameters of the clutter. Approximations based on Taylor series were invoked to make the optimization problem convex and subsequently determine robust waveforms with specific SINR outage constraints. Finally, this thesis introduces robust optimization techniques for through-the-wall radars. These are also cognitive but rely on different optimization techniques than the ones previously discussed. By noticing the similarities between the minimum variance distortionless response (MVDR) problem and the matched-illumination one, this thesis introduces robust optimization techniques that consider uncertainty on environment-related parameters. Various performance analyses demonstrate the effectiveness of all the above algorithms in providing a significant increase in SINR in an environment affected by very strong clutter and noise.
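The MVDR problem mentioned above has a well-known closed form that makes the SINR-versus-constraint tradeoff concrete. The sketch below is the generic textbook MVDR beamformer, not the thesis's robust formulation; the covariance matrix and steering vector are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
B = rng.standard_normal((n, n))
R = B @ B.T + np.eye(n)                 # synthetic interference-plus-noise covariance
s = rng.standard_normal(n)              # synthetic steering vector

# MVDR: minimize w^T R w subject to the distortionless constraint w^T s = 1;
# the closed-form solution is w = R^{-1} s / (s^T R^{-1} s).
Ri_s = np.linalg.solve(R, s)
w = Ri_s / (s @ Ri_s)
assert abs(w @ s - 1.0) < 1e-10         # distortionless response preserved

# Any other feasible beamformer has at least as much output power.
for _ in range(20):
    v = rng.standard_normal(n)
    v = v / (v @ s)                     # rescale so v^T s = 1
    assert w @ R @ w <= v @ R @ v + 1e-10
```

Robust variants of this problem replace the single covariance R and steering vector s with uncertainty sets, which is the direction the thesis pursues for clutter parameters.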

27 
Enhancing physical layer security in wireless networks with cooperative approaches / Liu, Weigang, January 2016 (has links)
Motivated by recent developments in wireless communication, this thesis aims to characterize the secrecy performance in several types of typical wireless networks. Advanced techniques are designed and evaluated to enhance physical layer security in these networks under realistic assumptions, such as signal propagation loss, random node distribution, and non-instantaneous channel state information (CSI). The first part of the thesis investigates secret communication through a relay-assisted cognitive interference channel. The primary and secondary base stations (PBS and SBS) communicate with the primary and secondary receivers (PR and SR), respectively, in the presence of multiple eavesdroppers. The SBS is allowed to transmit simultaneously with the PBS over the same spectrum instead of waiting for an idle channel. To improve security, cognitive relays transmit cooperative jamming (CJ) signals to create additional interference in the direction of the eavesdroppers. Two CJ schemes are proposed to improve the secrecy rate of cognitive interference channels, depending on the structure of the cooperative relays. In the scheme where a multiple-antenna relay transmits weighted jamming signals, the combined approach of CJ and beamforming is investigated. In the scheme with multiple relays transmitting weighted jamming signals, the combined approach of CJ and relay selection is analyzed. Numerical results show that both schemes are effective in improving physical layer security of the cognitive interference channel. In the second part, the focus is shifted to physical layer security in a random wireless network where both legitimate and eavesdropping nodes are randomly distributed. Three scenarios are analyzed to investigate the impact of various factors on security. In scenario one, the basic scheme is studied without a protected zone and interference.
The probability density function (PDF) of the channel gain with both fading and path loss is derived and further applied to derive secrecy connectivity and ergodic secrecy capacity. In the second scenario, we study the use of a protected zone surrounding the source node to enhance security when interference is absent. Both the case where eavesdroppers are aware of the protected zone boundary and the case where they are unaware are investigated. Building on the above scenarios, the further deployment of protected zones at the legitimate receivers is designed to convert detrimental interference into a beneficial factor. Numerical results are investigated to check the reliability of the PDF of the reciprocal of the channel gain and to analyze the impact of protected zones on secrecy performance. In the third part, physical layer security in the downlink transmission of a cellular network is studied. To model the repulsive property of cellular network planning, we assume that the base stations (BSs) follow a Matérn hard-core point process (HCPP), while the eavesdroppers are deployed as an independent Poisson point process (PPP). The distribution function of the distances from a typical point to the nodes of the HCPP is derived. The noise-limited and interference-limited cellular networks are investigated by applying fractional frequency reuse (FFR) in the system. For the noise-limited network, we derive the secrecy outage probability under two different strategies, i.e. best-BS serving and nearest-BS serving, by analyzing the statistics of the channel gains. For the interference-limited network with nearest-BS serving, two transmission schemes are analyzed, i.e., transmission with and without FFR. Numerical results reveal that both the best-BS serving scheme and the application of FFR are beneficial for physical layer security in downlink cellular networks, while the improvement due to the application of FFR is limited by the capacity of the legitimate channel.
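The secrecy-capacity quantity underlying all of these analyses is simple to state. Below is a minimal sketch of the standard Gaussian wiretap formula (generic, not the thesis's derivations), showing how jamming that degrades only the eavesdropper enlarges it; the linear SNR values are arbitrary.

```python
import math

def secrecy_capacity(snr_legit, snr_eve):
    # Gaussian wiretap channel: C_s = [log2(1 + SNR_B) - log2(1 + SNR_E)]^+.
    return max(0.0, math.log2(1.0 + snr_legit) - math.log2(1.0 + snr_eve))

# Equal-quality channels: no positive secrecy rate.
assert secrecy_capacity(15.0, 15.0) == 0.0

# Cooperative jamming modeled as extra interference at the eavesdropper only:
# its effective SNR drops from 15 to 3 while the legitimate link keeps 15.
assert secrecy_capacity(15.0, 3.0) == math.log2(16.0) - math.log2(4.0)  # 2 bits
```

Every scheme in the thesis (CJ, protected zones, BS selection, FFR) can be read as a way to widen this gap between the legitimate and eavesdropping channels.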

28 
Statistical Guarantee for Non-Convex Optimization / Botao Hao (7887845), 26 November 2019 (has links)
The aim of this thesis is to systematically study the statistical guarantee for two representative non-convex optimization problems arising in the statistics community. The first one is the high-dimensional Gaussian mixture model, which is motivated by the estimation of multiple graphical models arising from heterogeneous observations. The second one is the low-rank tensor estimation model, which is motivated by the high-dimensional interaction model. Both optimal statistical rates and numerical comparisons are studied in depth.
In the first part of my thesis, we consider joint estimation of multiple graphical models arising from heterogeneous and high-dimensional observations. Unlike most previous approaches, which assume that the cluster structure is given in advance, an appealing feature of our method is to learn the cluster structure while estimating the heterogeneous graphical models. This is achieved via a high-dimensional version of the Expectation Conditional Maximization (ECM) algorithm. A joint graphical lasso penalty is imposed on the conditional maximization step to extract both homogeneity and heterogeneity components across all clusters. Our algorithm is computationally efficient due to fast sparse learning routines and can be implemented without unsupervised learning knowledge. The superior performance of our method is demonstrated by extensive experiments, and its application to a glioblastoma cancer dataset reveals some new insights in understanding glioblastoma cancer. In theory, a non-asymptotic error bound is established for the output directly from our high-dimensional ECM algorithm, and it consists of two quantities: statistical error (statistical accuracy) and optimization error (computational complexity). Such a result gives a theoretical guideline for terminating our ECM iterations.
In the second part of my thesis, we propose a general framework for sparse and low-rank tensor estimation from cubic sketchings. A two-stage non-convex implementation is developed based on sparse tensor decomposition and thresholded gradient descent, which ensures exact recovery in the noiseless case and stable recovery in the noisy case with high probability. The non-asymptotic analysis sheds light on the interplay between optimization error and statistical error. The proposed procedure is shown to be rate-optimal under certain conditions. As a technical by-product, novel high-order concentration inequalities are derived for studying high-moment sub-Gaussian tensors. An interesting tensor formulation illustrates the potential application to high-order interaction pursuit in high-dimensional linear regression.
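The thresholded-gradient-descent idea can be sketched with iterative hard thresholding for sparse linear regression. To keep the toy example provably exact we assume an orthonormal design, under which a single gradient step already recovers the signal; the thesis handles the far harder case of general cubic sketchings with tensor structure. All problem sizes and signal values below are arbitrary.

```python
import numpy as np

def hard_threshold(x, k):
    # Keep the k largest-magnitude entries, zero out the rest.
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    out[idx] = x[idx]
    return out

rng = np.random.default_rng(2)
n, p, k = 30, 10, 3
A, _ = np.linalg.qr(rng.standard_normal((n, p)))   # orthonormal columns: A^T A = I
x_true = np.zeros(p)
x_true[[1, 4, 7]] = [2.0, -1.5, 3.0]
b = A @ x_true                                     # noiseless measurements

x = np.zeros(p)
for _ in range(10):                                # IHT: gradient step, then threshold
    x = hard_threshold(x - A.T @ (A @ x - b), k)

assert np.allclose(x, x_true, atol=1e-10)          # exact recovery, noiseless case
```

Under noise, the same iteration is stable rather than exact, which mirrors the exact/stable recovery dichotomy stated in the abstract.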

29 
Distributed model predictive control based consensus of general linear multi-agent systems with input constraints / Li, Zhuo, 16 April 2020 (has links)
In the study of multi-agent systems (MASs), cooperative control is one of the most fundamental issues. As it covers a broad spectrum of applications in many industrial areas, there is a desire to design cooperative control protocols for different system and network setups.
Motivated by this fact, in this thesis we focus on consensus protocol design, via model predictive control (MPC), under two different scenarios: (1) general constrained linear MASs with bounded additive disturbance; and (2) linear MASs with input constraints over distributed communication networks.
In Chapter 2, a tube-based robust MPC consensus protocol for constrained linear MASs is proposed. For undisturbed linear MASs without constraints, results on designing a centralized linear consensus protocol are first developed by a suboptimal linear quadratic approach. In order to evaluate the control performance of the suboptimal consensus protocol, we use an infinite-horizon linear quadratic objective function to penalize the disagreement among agents and the size of the control inputs. Due to the non-convexity of the performance function, an optimal controller gain is difficult or even impossible to find, so a suboptimal consensus protocol is derived. In the presence of disturbance, the original MASs may not maintain certain properties, such as stability and cooperative performance. To this end, a tube-based robust MPC framework is introduced. When disturbance is involved, the original constraints in the nominal prediction should be tightened so as to achieve robust constraint satisfaction, as the predicted states and the actual states are not necessarily the same. Moreover, the corresponding robust constraint sets can be determined offline, requiring no extra iterative online computation in implementation.
In Chapter 3, a novel distributed MPC-based consensus protocol is proposed for general linear MASs with input constraints. For the linear MAS without constraints, a pre-stabilizing distributed linear consensus protocol is developed by an inverse optimal approach, such that the corresponding closed-loop system is asymptotically stable with respect to a consensus set. Implementing this pre-stabilizing controller in a distributed digital setting is however not possible, as it requires every local decision maker to continuously access the state of its neighbors simultaneously when updating the control input. To relax these requirements, the assumed neighboring state, instead of the actual state of the neighbors, is used. In our distributed MPC scheme, each local controller minimizes over a group of control variables to generate the control input. Moreover, an additional state constraint is proposed to bound the deviation between the actual and the assumed state. In this way, consistency is enforced between the intended behavior of an agent and what its neighbors believe it will do. We later show that the closed-loop system converges to a neighborhood of the consensus set thanks to the bounded state deviation in prediction.
In Chapter 4, conclusions are made and some research topics for future exploration are presented. / Graduate / 2021-03-31
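The consensus objective these protocols target can be illustrated without any MPC machinery. Below is a minimal discrete-time Laplacian averaging sketch for single integrators on a path graph, with the step size an arbitrary stable choice; the thesis treats general linear dynamics, constraints, and disturbances.

```python
import numpy as np

# Path graph on 4 agents: 0-1-2-3; L = D - A is the graph Laplacian.
Adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
L = np.diag(Adj.sum(axis=1)) - Adj

x = np.array([4.0, -2.0, 7.0, 1.0])     # initial agent states
eps = 0.3                               # step size; needs eps < 2 / lambda_max(L)
for _ in range(500):
    x = x - eps * L @ x                 # each agent moves toward its neighbors

# Connected graph + symmetric L: states converge to the initial average.
assert np.allclose(x, np.full(4, 2.5), atol=1e-6)
```

Each agent's update uses only neighboring states, which is the same information pattern the distributed MPC scheme must respect, there with input constraints and predicted rather than instantaneous neighbor states.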

30 
ITEM RESPONSE MODELS AND CONVEX OPTIMIZATION / Lewis, Naama, 01 May 2020 (has links)
Item Response Theory (IRT) models, like the one-parameter, two-parameter, or normal ogive models, have been discussed for many years. These models represent a rich area of investigation due to their complexity as well as the large amount of data collected in relation to model parameter estimation. Here we propose a new way of looking at IRT models using I-projections and duality. We use convex optimization methods to derive these models. The Kullback-Leibler divergence is used as a metric, and specific constraints are proposed for the various models. With this approach, the dual problem is shown to be much easier to solve than the primal problem. In particular, when there are many constraints, we propose the application of a projection algorithm for solving these types of problems. We also consider reframing the problem and utilizing a decomposition algorithm to solve for the parameters. Both of these methods are compared to the Rasch and 2-Parameter Logistic models using established computer software, where estimation of model parameters is done under the maximum likelihood estimation framework. We also compare the appropriateness of these techniques on multidimensional item response data sets and propose new models with the use of I-projections.
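The Rasch model used as the baseline comparison is easy to state. Below is a minimal sketch fitting a single examinee's ability by maximum likelihood via bisection; the item difficulties and response pattern are made-up values, and this is separate from the thesis's I-projection treatment.

```python
import math

def rasch_p(theta, b):
    # Rasch (1PL) model: probability that an examinee of ability theta
    # answers an item of difficulty b correctly.
    return 1.0 / (1.0 + math.exp(-(theta - b)))

difficulties = [-1.0, 0.0, 1.0]
responses = [1, 1, 0]          # correct on the two easier items

def score_gap(theta):
    # MLE condition for theta: observed score equals expected score.
    return sum(responses) - sum(rasch_p(theta, b) for b in difficulties)

lo, hi = -10.0, 10.0           # score_gap is decreasing in theta; bisect its root
for _ in range(100):
    mid = (lo + hi) / 2.0
    if score_gap(mid) > 0:
        lo = mid
    else:
        hi = mid
theta_hat = (lo + hi) / 2.0

assert abs(score_gap(theta_hat)) < 1e-8
```

The log-likelihood here is concave in theta, which is what makes the classical estimation convex in this simple case; the thesis's contribution is to expose convex (I-projection) structure for the broader model family.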
