61

Discovery of low-dimensional structure in high-dimensional inference problems

Aksoylar, Cem 10 March 2017
Many learning and inference problems involve high-dimensional data such as images, video or genomic data, which cannot be processed efficiently using conventional methods due to their dimensionality. However, high-dimensional data often exhibit an inherent low-dimensional structure; for instance, they can often be represented sparsely in some basis or domain. The discovery of an underlying low-dimensional structure is important for developing more robust and efficient analysis and processing algorithms. The first part of the dissertation investigates the statistical complexity of sparse recovery problems, including sparse linear and nonlinear regression models, feature selection and graph estimation. We present a framework that unifies sparse recovery problems and construct an analogy to channel coding in classical information theory. We perform an information-theoretic analysis to derive bounds on the number of samples required to reliably recover sparsity patterns, independent of any specific recovery algorithm. In particular, we show that sample complexity can be tightly characterized using a mutual information formula similar to channel coding results. Next, we derive major extensions to this framework, including dependent input variables and a lower bound for sequential adaptive recovery schemes, which helps determine whether adaptivity provides performance gains. We compute statistical complexity bounds for various sparse recovery problems, showing that our analysis improves upon existing bounds and leads to intuitive results for new applications. In the second part, we investigate methods for improving the computational complexity of subgraph detection in graph-structured data, where we aim to discover anomalous patterns present in a connected subgraph of a given graph. This problem arises in many applications, such as detection of network intrusions, community detection, and detection of anomalous events in surveillance videos or disease outbreaks. Since optimization over connected subgraphs is a combinatorial and computationally difficult problem, we propose a convex relaxation that offers a principled approach to incorporating connectivity and conductance constraints on candidate subgraphs. We develop a novel nearly-linear time algorithm to solve the relaxed problem, establish convergence and consistency guarantees, and demonstrate its feasibility and performance with experiments on real networks.
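The channel-coding analogy can be made schematic in one display (notation illustrative; the dissertation's exact statements involve conditional mutual information terms over subsets of the support). Treating each of the $\binom{p}{k}$ candidate supports as a codeword, reliable recovery of a $k$-sparse pattern among $p$ variables from $n$ samples requires, roughly,

\[
n \;\gtrsim\; \frac{\log \binom{p}{k}}{I(X_S;\, Y)},
\]

mirroring the channel-coding requirement $\log M \le nC$, with the number of messages $M$ replaced by the number of candidate supports and the capacity $C$ by the mutual information between the relevant inputs $X_S$ and the observation $Y$.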
62

Operator splitting methods for convex optimization : analysis and implementation

Banjac, Goran January 2018
Convex optimization problems are a class of mathematical problems which arise in numerous applications. Although interior-point methods can in principle solve these problems efficiently, they may become intractable for large-scale problems or be unsuitable for real-time embedded applications. Iterations of operator splitting methods are relatively simple and computationally inexpensive, which makes them suitable for these applications. However, some of their known limitations are slow asymptotic convergence, sensitivity to ill-conditioning, and inability to detect infeasible problems. The aim of this thesis is to better understand operator splitting methods and to develop reliable software tools for convex optimization. The main analytical tool in our investigation of these methods is their characterization as the fixed-point iteration of a nonexpansive operator. The fixed-point theory of nonexpansive operators has been studied for several decades. By exploiting the properties of such an operator, it is possible to show that the alternating direction method of multipliers (ADMM) can detect infeasible problems. Although ADMM iterates diverge when the problem at hand is unsolvable, the differences between subsequent iterates converge to a constant vector which is also a certificate of primal and/or dual infeasibility. Reliable termination criteria for detecting infeasibility are proposed based on this result. Similar ideas are used to derive necessary and sufficient conditions for linear (geometric) convergence of an operator splitting method, and a bound on the achievable convergence rate. The new bound turns out to be tight for the class of averaged operators. Next, the OSQP solver is presented. OSQP is a novel general-purpose solver for quadratic programs (QPs) based on ADMM. The solver is very robust, is able to detect infeasible problems, and has been extensively tested on many problem instances from a wide variety of application areas. Finally, operator splitting methods can also be effective in nonconvex optimization; the algorithm developed here significantly outperforms a common approach based on convex relaxation of the original nonconvex problem.
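OSQP's infeasibility detection can be exercised directly from its Python interface; below is a minimal sketch (problem data invented for illustration) that sets up a one-variable QP with contradictory constraints and reads off the returned status:

    import numpy as np
    import scipy.sparse as sp
    import osqp

    # minimize (1/2) x^2  subject to  x <= 0  and  x >= 1  (primal infeasible)
    P = sp.csc_matrix([[1.0]])
    q = np.array([0.0])
    A = sp.csc_matrix([[1.0], [1.0]])
    l = np.array([-np.inf, 1.0])   # row 1 encodes x <= 0, row 2 encodes x >= 1
    u = np.array([0.0, np.inf])

    prob = osqp.OSQP()
    prob.setup(P, q, A, l, u, verbose=False)
    res = prob.solve()
    print(res.info.status)  # 'primal infeasible' rather than silent divergence

Consistent with the result described above, the certificate is extracted from the differences of successive ADMM iterates rather than from the (diverging) iterates themselves.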
63

Robust Large Margin Approaches for Machine Learning in Adversarial Settings

Torkamani, MohamadAli 21 November 2016
Machine learning algorithms are invented to learn from data and to use data to perform predictions and analyses. Many agencies now use machine learning algorithms to provide services and to perform tasks that used to be done by humans, including making high-stakes decisions. Determining the right decision relies strongly on the correctness of the input data. This fact provides a tempting incentive for criminals to try to deceive machine learning algorithms by manipulating the data that is fed to them. And yet, traditional machine learning algorithms are not designed to be safe when confronting unexpected inputs. In this dissertation, we address the problem of adversarial machine learning; i.e., our goal is to build safe machine learning algorithms that are robust in the presence of noisy or adversarially manipulated data. Many complex questions to which a machine learning system must respond have complex answers. Such outputs of the machine learning algorithm can have internal structure, with exponentially many possible values. Adversarial machine learning becomes more challenging when the output to be predicted has a complex structure itself, so a significant focus of this dissertation is adversarial machine learning for predicting structured outputs. First, we develop a new algorithm that reliably performs collective classification: it jointly assigns labels to the nodes of graph-structured data, and it is robust to malicious changes that an adversary can make in the properties of the different nodes of the graph. The learning method is highly efficient and is formulated as a convex quadratic program. Empirical evaluations confirm that this technique not only secures the prediction algorithm in the presence of an adversary, but also generalizes better to future inputs, even when there is no adversary. While our robust collective classification method is efficient, it is not applicable to generic structured prediction problems. Next, we investigate the problem of parameter learning for robust structured prediction models. This method constructs regularization functions based on the limitations of the adversary in altering the feature space of the structured prediction algorithm. The proposed regularization techniques secure the algorithm against adversarial data changes with little additional computational cost. We prove that robustness to adversarial manipulation of data is equivalent to some regularization for large-margin structured prediction, and vice versa, confirming earlier results for simpler problems. In practice, an adversary typically either lacks the computational power to design the optimal attack or lacks sufficient information about the learner's model to do so, and therefore often applies many random changes to the input in the hope of making a breakthrough. This implies that minimizing the expected loss function under adversarial noise yields robustness against such mediocre adversaries. Dropout training resembles this noise-injection scenario. Dropout was initially proposed as a regularization technique for neural networks; the procedure is simple: at each iteration of training, randomly selected features are set to zero. We derive a regularization method for large-margin parameter learning based on dropout, calculating the expected loss function under all possible dropout values. This results in a simple objective function that is efficient to optimize. We extend dropout regularization to non-linear kernels in several different directions: we define the concept of dropout for input space, feature space, and input dimensions, and we introduce methods for approximate marginalization over the feature space, even when it is infinite-dimensional. Empirical evaluations show that our techniques consistently outperform the baselines on different datasets.
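The marginalization step has a well-known closed form for a linear score with squared loss, where the expectation over dropout masks splits into the nominal loss plus a data-dependent quadratic regularizer; the sketch below checks that identity numerically (a generic illustration of the principle, not the thesis's large-margin derivation):

    import numpy as np

    rng = np.random.default_rng(0)
    d = 20
    x, w = rng.normal(size=d), rng.normal(size=d)
    y, p = 1.0, 0.5  # target value and dropout probability

    # With independent masks m_i ~ Bernoulli(1 - p):
    # E[(y - w.(m*x))^2] = (y - (1-p) w.x)^2 + p(1-p) sum_i w_i^2 x_i^2
    closed = (y - (1 - p) * w @ x) ** 2 + p * (1 - p) * np.sum(w**2 * x**2)

    # Monte Carlo estimate over sampled dropout masks, for comparison
    masks = rng.binomial(1, 1 - p, size=(100000, d))
    mc = np.mean((y - masks @ (w * x)) ** 2)

    print(closed, mc)  # the two values agree closely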
64

Optimal regression design under second-order least squares estimator: theory, algorithm and applications

Yeh, Chi-Kuang 23 July 2018
In this thesis, we first review the current development of optimal regression designs under the second-order least squares estimator in the literature; the criteria include A- and D-optimality. We then introduce a new formulation of the A-optimality criterion so that the results can be extended to c-optimality, which has not been studied before. Following Kiefer's equivalence results, we derive the optimality conditions for A-, c- and D-optimal designs under the second-order least squares estimator. In addition, we study the number of support points for various regression models, including Peleg models, trigonometric models, and regular and fractional polynomial models. A generalized scale-invariance property of D-optimal designs is also explored. Furthermore, we discuss a computing algorithm for finding optimal designs numerically. Several interesting applications are presented, and related MATLAB code is provided in the thesis.
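For reference, the classical (ordinary least squares) D-optimal design problem is itself convex and solvable in a few lines; the sketch below computes an approximate D-optimal design for a cubic model on [-1, 1] (illustrative only: the thesis's criteria are defined under the second-order least squares estimator, and its code is MATLAB rather than Python):

    import numpy as np
    import cvxpy as cp

    # Candidate design points and cubic-model regression vectors f(x)
    xs = np.linspace(-1, 1, 41)
    F = np.vstack([np.ones_like(xs), xs, xs**2, xs**3]).T

    w = cp.Variable(len(xs), nonneg=True)  # design weights on the candidates
    M = sum(w[i] * np.outer(F[i], F[i]) for i in range(len(xs)))  # information matrix
    prob = cp.Problem(cp.Maximize(cp.log_det(M)), [cp.sum(w) == 1])
    prob.solve(solver=cp.SCS)

    print(xs[w.value > 1e-3])  # the optimal design concentrates on a few points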
65

A Convex Approach for Stability Analysis of Partial Differential Equations

January 2016
A computational framework based on convex optimization is presented for stability analysis of systems described by Partial Differential Equations (PDEs). Specifically, two forms of linear PDEs with spatially distributed polynomial coefficients are considered. The first class includes linear coupled PDEs with one spatial variable. Parabolic, elliptic or hyperbolic PDEs with Dirichlet, Neumann, Robin or mixed boundary conditions can be reformulated in order to be used by the framework. As examples, the reformulation is presented for systems governed by the Schrödinger equation (parabolic type) and by the relativistic heat conduction PDE and the acoustic wave equation (hyperbolic types). The second form of PDEs of interest is scalar-valued with two spatial variables. The extra spatial variable allows consideration of problems such as local stability of fluid flows in channels and dynamics of populations over two-dimensional domains. The approach does not involve discretization and is based on using Sum-of-Squares (SOS) polynomials and positive semi-definite matrices to parameterize operators which are positive on function spaces. Applying the parameterization to construct Lyapunov functionals with negative derivatives allows stability conditions to be expressed as a set of Linear Matrix Inequalities (LMIs). The MATLAB package SOSTOOLS is used to construct the LMIs, and the resulting LMIs can then be solved using existing Semi-Definite Programming (SDP) solvers such as SeDuMi or MOSEK. Moreover, the proposed approach allows bounds on the rate of decay of the solution norm to be calculated. The methodology is tested on several numerical examples and compared with results obtained from simulation using standard numerical discretization methods and from analytic solutions. (Masters Thesis, Mechanical Engineering, 2016)
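The positivity argument has a familiar finite-dimensional analogue: for an ODE dx/dt = Ax, stability is certified by a matrix P with P ≻ 0 and AᵀP + PA ≺ 0, a pair of LMIs that any SDP solver can check. A minimal sketch follows (the thesis instead parameterizes positive operators on function spaces via SOSTOOLS):

    import numpy as np
    import cvxpy as cp

    # Certify stability of dx/dt = A x via a Lyapunov LMI feasibility problem
    A = np.array([[0.0, 1.0],
                  [-2.0, -3.0]])   # eigenvalues -1 and -2, so stable
    n, eps = A.shape[0], 1e-4

    P = cp.Variable((n, n), symmetric=True)
    constraints = [P >> eps * np.eye(n),
                   A.T @ P + P @ A << -eps * np.eye(n)]
    prob = cp.Problem(cp.Minimize(0), constraints)
    prob.solve(solver=cp.SCS)
    print(prob.status)  # 'optimal' means a Lyapunov certificate was found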
66

Improved Convex Optimal Decision-making Processes in Distribution Systems: Enable Grid Integration of Photovoltaic Resources and Distributed Energy Storage

January 2016
This research focuses mainly on improving the utilization of photovoltaic (PV) resources in distribution systems by reducing their variability and uncertainty through the integration of distributed energy storage (DES) devices, like batteries, and smart PV inverters. The adopted theoretical tools include statistical analysis and convex optimization. Operational issues have been widely reported in distribution systems as the penetration of PV resources has increased. Decision-making processes for determining the optimal allocation and scheduling of DES, and the optimal placement of smart PV inverters, are considered. The alternating current (AC) power flow constraints are used in these optimization models. The first two optimization problems are formulated as quadratically-constrained quadratic programming (QCQP) problems, while the third is formulated as a mixed-integer QCQP (MIQCQP) problem. In order to obtain globally optimal solutions to these non-convex optimization problems, convex relaxation techniques are introduced. Considering that the costs of DES are still very high, a procedure for DES sizing based on OpenDSS is proposed in this research to avoid over-sizing. Some existing convex relaxations, e.g. the second-order cone programming (SOCP) relaxation and the semidefinite programming (SDP) relaxation, which have been well studied for the optimal power flow (OPF) problem, work unsatisfactorily for the DES and smart inverter optimization problems. Several convex constraints that can approximate the rank-1 constraint X = xxᵀ are introduced to construct a tighter SDP relaxation, referred to as the enhanced SDP (ESDP) relaxation, using a non-iterative computing framework. Obtaining the convex hull of the AC power flow equations is beneficial for mitigating the non-convexity of decision-making processes in power systems, since the AC power flow constraints appear in many of these problems. The quasi-convex hull of the quadratic equalities in the AC power bus injection model (BIM) and the exact convex hull of the quadratic equality in the AC power branch flow model (BFM) are proposed respectively in this thesis. Based on the convex hull of the BFM, a novel convex relaxation of the DES optimizations is proposed. The proposed approaches are tested on a real-world feeder in Arizona and several benchmark IEEE radial feeders. (Doctoral Dissertation, Electrical Engineering, 2016)
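The rank-1 lifting behind such relaxations fits in a few lines of cvxpy. The sketch below relaxes a generic nonconvex box-constrained QP by replacing X = xxᵀ with the Schur-complement condition (illustrative only; the thesis's ESDP adds further constraints to tighten exactly this kind of relaxation, and its power flow models are more involved):

    import numpy as np
    import cvxpy as cp

    # Nonconvex problem: minimize x^T Q x + q^T x over x in [0, 1]^n
    n = 3
    rng = np.random.default_rng(1)
    Q = rng.normal(size=(n, n)); Q = (Q + Q.T) / 2  # indefinite in general
    q = rng.normal(size=n)

    x = cp.Variable(n)
    X = cp.Variable((n, n), symmetric=True)  # stands in for x x^T
    constraints = [
        cp.bmat([[X, cp.reshape(x, (n, 1))],
                 [cp.reshape(x, (1, n)), np.ones((1, 1))]]) >> 0,  # X >= x x^T
        cp.diag(X) <= x,  # lifted form of x_i^2 <= x_i, i.e. x_i in [0, 1]
        x >= 0, x <= 1,
    ]
    prob = cp.Problem(cp.Minimize(cp.trace(Q @ X) + q @ x), constraints)
    prob.solve(solver=cp.SCS)
    print(prob.value)  # a lower bound on the nonconvex optimum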
67

H-Infinity Control Design Via Convex Optimization: Toward A Comprehensive Design Environment

January 2013
The problem of systematically designing a control system continues to be a subject of intense research. In this thesis, a very powerful control system design environment for Linear Time-Invariant (LTI) Multiple-Input Multiple-Output (MIMO) plants is presented. The environment has been designed to address a broad set of closed-loop metrics and constraints, e.g. weighted H-infinity closed-loop performance subject to closed-loop frequency- and/or time-domain constraints (peak frequency response, peak overshoot, peak controls, etc.). The general problem considered, a generalized weighted mixed-sensitivity problem subject to constraints, permits designers to directly address and trade off multivariable properties at distinct loop-breaking points, e.g. at the plant outputs and at the plant inputs. As such, the environment is particularly powerful for (poorly conditioned) multivariable plants. The Youla parameterization is used to parameterize the set of all stabilizing LTI proper controllers; this convexifies the general problem being addressed. Several bases are used to turn the resulting infinite-dimensional problem into a finite-dimensional one for which many efficient convex optimization algorithms exist. A simple cutting-plane algorithm is used within the environment. Academic and physical examples are presented to illustrate the utility of the environment. (M.S. Thesis, Electrical Engineering, 2013)
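The convexification rests on a standard property of the Youla parameterization (notation schematic, not the thesis's): every closed-loop transfer matrix is an affine function of the Youla parameter Q,

\[
T_{zw}(Q) \;=\; T_1 + T_2\, Q\, T_3,
\]

so any norm bound or pointwise constraint on $T_{zw}$ is convex in $Q$; restricting $Q$ to a finite basis expansion $Q = \sum_{i=1}^{N} \theta_i Q_i$ then yields a finite-dimensional convex program in the coefficients $\theta_i$, which is exactly the form a cutting-plane algorithm can solve.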
68

Novos métodos incrementais para otimização convexa não-diferenciável em dois níveis com aplicações em reconstrução de imagens em tomografia por emissão / New incremental methods for bilevel nondifferentiable convex optimization with applications to image reconstruction in emission tomography

Lucas Eduardo Azevedo Simões 28 March 2013
We present two new methods for solving bilevel convex optimization problems in which neither objective is necessarily differentiable; that is, we show that the sequences generated by both methods converge to the optimal set of a nonsmooth function over a constraint set that itself involves the minimization of another nondifferentiable function. Neither algorithm requires solving subproblems or performing line searches during its iterations. Finally, to demonstrate that the methods are viable, we solve a tomographic image reconstruction problem.
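Schemes of this kind typically take a single explicit step that blends a subgradient of the lower-level objective $h$ with a vanishing multiple of a subgradient of the upper-level objective $f$; one generic template (a sketch of the idea, not necessarily the update rule analyzed in the thesis) is

\[
x_{k+1} = P_X\bigl(x_k - \alpha_k\,(h'(x_k) + \eta_k f'(x_k))\bigr), \qquad \alpha_k \to 0, \quad \eta_k \to 0,
\]

where $h'$ and $f'$ denote arbitrary subgradients and $P_X$ is the projection onto a simple set $X$; no inner minimization and no line search appear anywhere in the iteration.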
69

Tópicos em métodos ótimos para otimização convexa / Topics in optimal methods for convex optimization

Diane Rizzotto Rossetto 29 March 2012
In this work we introduce a new optimal method for constrained differentiable convex optimization, based on previous ideas of Nesterov and of Auslender and Teboulle. The method proposed by the latter authors uses a coercive Bregman distance to ensure that the iterates remain in the interior of the feasible set. Our results extend this method to allow the use of the squared Euclidean distance. We also show how to estimate the Lipschitz constant of the gradient of the objective function, improving the numerical behavior of the method. Finally, we present numerical experiments to validate our approach and compare it with Nesterov's algorithm.
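The two ingredients named here, an accelerated (optimal) gradient scheme and on-the-fly estimation of the Lipschitz constant, combine into the backtracking template below (a generic sketch of this family of methods using the squared Euclidean distance, not the thesis's exact algorithm):

    import numpy as np

    def accel_proj_grad(f, grad, project, x0, L=1.0, iters=200):
        # Accelerated projected gradient; L is grown until the quadratic
        # upper bound holds, which estimates the gradient's Lipschitz constant.
        x = y = x0.copy()
        t = 1.0
        for _ in range(iters):
            g = grad(y)
            while True:  # backtracking search on L
                x_new = project(y - g / L)
                d = x_new - y
                if f(x_new) <= f(y) + g @ d + 0.5 * L * (d @ d) + 1e-12:
                    break
                L *= 2.0
            t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
            y = x_new + ((t - 1.0) / t_new) * (x_new - x)  # momentum step
            x, t = x_new, t_new
        return x

    # Example: least squares over the box [0, 1]^n
    rng = np.random.default_rng(0)
    A, b = rng.normal(size=(30, 10)), rng.normal(size=30)
    f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
    grad = lambda x: A.T @ (A @ x - b)
    x_box = accel_proj_grad(f, grad, lambda z: np.clip(z, 0.0, 1.0), np.zeros(10))
    print(f(x_box))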
70

Arquitetura de controle de movimento para um robô móvel sobre rodas visando otimização energética. / Motion control architecture for a wheeled mobile robot aiming at energy optimization.

Werther Alexandre de Oliveira Serralheiro 05 March 2018
This work presents a motion control architecture for driving a differential-drive wheeled mobile robot between two distinct postures in a structured, obstacle-free environment. The classical concept of efficiency guided the control strategies: a robot moves efficiently when it accomplishes the given task in the least time and with the least energy. The proposed architecture is an instance of the Nested Hierarchical Controller (NHC) model, composed of three levels of abstraction: (i) Path Planning, (ii) Trajectory Planning and (iii) Trajectory Tracking. The proposed Path Planning smooths a Dubins geodesic (the most efficient path) with a clamped spline, so that the path is defined by a twice-differentiable curve; a transformation of the robot's configuration space is also performed. The Trajectory Planning stage is a convex optimization problem in the form of a Second-Order Cone Program whose objective is a weighted combination of time and energy. Since travel time and total energy consumed by the robot are hyperbolically related, a tuning algorithm for the weighting coefficient is proposed. Finally, a dual-loop Trajectory Tracker based on input-output feedback linearization and PID control is proposed, which obtained satisfactory results in tracking the path.
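The time-optimal core of such a Trajectory Planning stage can be sketched as a small second-order cone program over a fixed, arc-length-parameterized path; all limits below are invented for illustration, and the thesis's weighted time-energy objective would add an energy term to this cost:

    import numpy as np
    import cvxpy as cp

    N = 50
    ds = 1.0 / N                 # arc-length step along the fixed path
    a_max, v_max = 1.0, 0.8      # illustrative acceleration and speed limits

    b = cp.Variable(N + 1)       # b_i ~ squared path speed at grid point i
    a = cp.Variable(N)           # a_i ~ path acceleration on segment i
    c = cp.Variable(N + 1)       # c_i <= sqrt(b_i), used to bound segment times
    tau = cp.Variable(N, nonneg=True)  # tau_i >= time spent on segment i

    cons = [b >= 0, c >= 0,
            cp.square(c) <= b,                # enforces c_i <= sqrt(b_i)
            b[1:] == b[:-1] + 2 * ds * a,     # kinematics: d(b)/ds = 2a
            cp.abs(a) <= a_max, b <= v_max**2,
            b[0] == 0, b[N] == 0]             # rest-to-rest motion
    # Segment time: tau_i * (c_i + c_{i+1}) >= 2 ds, a hyperbolic (SOC) constraint
    cons += [cp.geo_mean(cp.hstack([tau[i], c[i] + c[i + 1]])) >= np.sqrt(2 * ds)
             for i in range(N)]

    prob = cp.Problem(cp.Minimize(cp.sum(tau)), cons)
    prob.solve(solver=cp.SCS)
    print(prob.value)  # minimum travel time for this speed profile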
