31 |
An open source object oriented platform for rapid design of high performance path following interior-point methods / Chiş, Voicu, January 2008
Interior point methods (IPMs) are a powerful tool in convex optimization. From the theoretical point of view, the convex set of feasible solutions is represented by a so-called barrier functional, and the only information required by the algorithms is the evaluation of the barrier, its gradient and its Hessian. As a result, IPM algorithms can be used for many types of convex problems, and their theoretical performance depends on the properties of the barrier. In practice, performance depends on how the data structure is exploited at the linear algebra level. In this thesis, we make use of the object-oriented paradigm supported by C++ to create a platform that preserves the aforementioned generality of IPM algorithms while keeping open the possibility to exploit the data structure. We illustrate the power of such an approach on optimization problems arising in the field of Radiation Therapy, in particular Intensity Modulated Radiation Therapy. / Thesis / Master of Science (MSc)
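To make the barrier-oracle model described above concrete, here is a minimal sketch (with a made-up box-constrained toy problem) of a damped Newton step that touches the barrier only through its gradient and Hessian, plus a crude path-following loop. This is an illustration of the standard scheme, not the thesis platform's API.

```python
import numpy as np

def barrier_newton_step(c, x, F_grad, F_hess, t):
    """One damped Newton step on the barrier-augmented objective
    t * c^T x + F(x), where F is a self-concordant barrier for the
    feasible set.  Only grad and Hessian evaluations of F are needed."""
    g = t * c + F_grad(x)        # gradient of the augmented objective
    H = F_hess(x)                # Hessian (the linear term contributes nothing)
    dx = np.linalg.solve(H, -g)  # Newton direction
    lam = np.sqrt(dx @ H @ dx)   # Newton decrement
    return x + dx / (1.0 + lam), lam   # damping keeps iterates feasible

# Toy instance: minimize c^T x over the box 0 < x < 1 with the log-barrier
# F(x) = -sum(log(x)) - sum(log(1 - x)).
c = np.array([1.0, -2.0])
F_grad = lambda x: -1.0 / x + 1.0 / (1.0 - x)
F_hess = lambda x: np.diag(1.0 / x**2 + 1.0 / (1.0 - x)**2)

x = np.array([0.5, 0.5])
for t in [1.0, 10.0, 100.0]:     # crude path following: increase t
    for _ in range(20):
        x, lam = barrier_newton_step(c, x, F_grad, F_hess, t)
        if lam < 1e-8:
            break
print(x)                          # approaches the vertex (0, 1) as t grows
```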
|
32 |
Distributed, Stable Topology Control of Multi-Robot Systems with Asymmetric Interactions / Mukherjee, Pratik, 17 June 2021
Multi-robot systems have witnessed a swell of interest in the past few years because of their various applications, such as agricultural autonomy, medical robotics, industrial and commercial automation, and search and rescue. In this thesis, we particularly investigate the behavior of multi-robot systems with respect to stable topology control in asymmetric interaction settings.
From a theoretical perspective, we first classify stable topologies and identify the conditions under which we can determine whether a topology is stable or not. Then, we design a limited fields-of-view (FOV) controller for robots that use sensors such as cameras for coordination, which induce asymmetric robot-to-robot interactions. Finally, we conduct a rigorous theoretical analysis to qualitatively determine which interactions are suitable for stable directed topology control of multi-robot systems with asymmetric interactions. In this regard, we solve an optimal topology selection problem to determine the topology with the best interactions based on a suitable metric that represents the quality of interaction. Further, we solve this optimization problem in a distributed manner and validate the distributed formulation with extensive simulations. For experimental purposes, we developed a portable multi-robot testbed that enables us to conduct multi-robot topology control experiments in both indoor and outdoor settings and validate our theoretical findings.
Therefore, the contribution of this thesis is twofold: i) we provide a rigorous theoretical analysis of stable coordination of multi-robot systems with directed graphs, demonstrating the graph structures that induce stability for a broad class of coordination objectives;
ii) we develop a testbed that enables validating multi-robot topology control in both indoor and outdoor settings. / Doctor of Philosophy / In this thesis, we address the problem of collaborative tasks in a multi-robot system, investigating how interactions among members of the multi-robot system can induce instability. We conduct rigorous theoretical analysis, identify when the system will be unstable, and hence classify interactions that will lead to stable multi-robot coordination. Our theoretical analysis tries to emulate realistic interactions in a multi-robot system, such as the limited interactions (blind spots) that exist when on-board cameras are used to detect and track other robots in the vicinity, so we study how these limited interactions induce instability in the multi-robot system. To verify our theoretical analysis experimentally, we developed a portable multi-robot testbed that enables us to test our theory on stable coordination of a multi-robot system with a team of Unmanned Aerial Vehicles (UAVs) in both indoor and outdoor settings. With this feature of the testbed we are able to investigate the difference in the multi-robot system behavior when tested in controlled indoor environments versus an uncontrolled outdoor environment. Ultimately, the motivation behind this thesis is to emulate realistic conditions for multi-robot cooperation and investigate suitable conditions for them to work in a stable and safe manner. Therefore, our contribution is twofold: i) we provide rigorous theoretical analysis that enables stable coordination of multi-robot systems with limited interactions induced by sensor capabilities such as cameras; ii) we develop a testbed that enables testing of our theoretical contribution with a team of real robots in realistic environmental conditions.
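As a toy illustration of how asymmetric, limited-FOV sensing yields a directed interaction graph, and of the kind of graph-structural condition under which coordination remains stable, the following sketch simulates consensus-style dynamics on a small digraph. The dynamics, graph, and stability condition (a rooted spanning tree) are standard textbook assumptions for illustration, not the thesis's specific controller or metric.

```python
import numpy as np

# Directed consensus toy: x_dot = -L x, where A[i, j] = 1 means robot i
# senses robot j (but not necessarily vice versa, e.g. due to a camera's
# limited field of view).  A classical sufficient condition for convergence
# is that the digraph contains a spanning tree rooted at some node.
A = np.array([[0, 1, 0],
              [0, 0, 1],
              [0, 0, 0]])     # robot 2 senses nobody: it acts as the root
L = np.diag(A.sum(axis=1)) - A          # directed (out-degree) Laplacian

x = np.array([0.0, 4.0, 9.0])           # initial scalar positions
dt = 0.01
for _ in range(5000):
    x = x - dt * (L @ x)                # Euler integration of x_dot = -L x
print(x)                                # all states converge to x[2] = 9
```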
|
33 |
Online convex optimization: algorithms, learning, and duality / Portella, Victor Sanches, 03 May 2019
Online Convex Optimization (OCO) is a field in the intersection of game theory, optimization, and machine learning which has been receiving increasing attention due to its recent applications to a wide range of topics such as complexity theory and graph sparsification. Besides the usually simple description and implementation of OCO algorithms, a lot of this recent success is due to a deepening of our understanding of the OCO setting and its algorithms through cornerstone ideas from convex analysis and optimization, such as the powerful results from convex duality theory. In this text we present a mostly self-contained introduction to the field of online convex optimization. We first describe the online learning and online convex optimization settings, proposing an alternative way to formalize both of them so that we can make formal claims in a clear and unambiguous fashion without cluttering the reader's understanding. We then present an overview of the main concepts of convex analysis we use, with a focus on building intuition. With respect to algorithms for OCO, we first present and analyze the Adaptive Follow the Regularized Leader (AdaFTRL) algorithm, with an analysis that relies mainly on the duality between strongly convex and strongly smooth functions. We then describe the Adaptive Online Mirror Descent (AdaOMD) and the Adaptive Dual Averaging (AdaDA) algorithms and analyze both by writing them as special cases of the AdaFTRL algorithm. Additionally, we show simple sufficient conditions for Eager and Lazy Online Mirror Descent (the non-adaptive counterparts of AdaOMD and AdaDA) to be equivalent. We also present the well-known AdaGrad and Online Newton Step algorithms as special cases of the AdaReg algorithm, proposed by Gupta, Koren, and Singer, which is itself a special case of the AdaOMD algorithm. We conclude by taking a bird's-eye view of the connections shown throughout the text, forming a "genealogy" of OCO algorithms, and discuss some possible paths for future research.
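For a flavor of the family of algorithms discussed, here is a minimal sketch of projected online gradient descent, the Euclidean special case of the OMD/FTRL family analyzed in the text, run on a made-up sequence of quadratic losses over the unit ball; the step-size rule and loss model are illustrative assumptions.

```python
import numpy as np

# Online (projected) gradient descent: at each round, observe a convex loss
# f_t, pay f_t(x_t), take a gradient step, and project back onto the ball.
rng = np.random.default_rng(0)
d, T = 5, 1000
x = np.zeros(d)
total_loss = 0.0
for t in range(1, T + 1):
    a, b = rng.normal(size=d), rng.normal()
    total_loss += 0.5 * (a @ x - b) ** 2
    grad = (a @ x - b) * a              # gradient of f_t(x) = 0.5*(a.x - b)^2
    eta = 1.0 / np.sqrt(t)              # standard O(1/sqrt(t)) step size
    x = x - eta * grad
    norm = np.linalg.norm(x)
    if norm > 1.0:                      # Euclidean projection onto unit ball
        x = x / norm
print(total_loss / T)                   # average loss; regret is O(sqrt(T))
```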
|
34 |
New insights into conjugate duality / Grad, Sorin-Mihai, 19 July 2006
With this thesis we bring some new results and improve some existing ones in conjugate duality and some of the areas in which it is applied.
First we recall the way Lagrange, Fenchel and Fenchel-Lagrange dual problems to a given primal optimization problem can be obtained via perturbations, and we present some connections between them. For the Fenchel-Lagrange dual problem we prove strong duality under more general conditions than known so far, while for Fenchel duality we show that the convexity assumptions on the functions involved can be weakened without altering the conclusion. In order to prove the latter, we also prove that some formulae concerning conjugate functions given so far only for convex functions hold also for almost convex, respectively nearly convex, functions.
After proving that the generalized geometric dual problem can be obtained via perturbations, we show that geometric duality is a special case of Fenchel-Lagrange duality and that strong duality can be obtained under weaker conditions than stated in the existing literature. For various problems treated in the literature via geometric duality we show that Fenchel-Lagrange duality is easier to apply, moreover yielding strong duality and optimality conditions under weaker assumptions.
The results presented so far are also applied in convex composite optimization and entropy optimization. For the composed convex cone-constrained optimization problem we give strong duality and the related optimality conditions, then we apply these when showing that the formula of the conjugate of the precomposition with a proper convex K-increasing function of a K-convex function on some n-dimensional non-empty convex set X, where K is a k-dimensional non-empty closed convex cone, holds under weaker conditions than known so far. Another field where we apply these results is vector optimization, where we provide a general duality framework based on a more general scalarization that includes as special cases, and improves, some previous results in the literature. Concerning entropy optimization, we first treat via duality a problem having an entropy-like objective function, from which arise as special cases some problems found in the literature on entropy optimization. Finally, an application of entropy optimization to text classification is presented.
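For reference, the classical notions underlying the discussion can be stated as follows; these are the standard Fenchel conjugate and Fenchel dual with weak duality, and the thesis's contribution is precisely to weaken the convexity and regularity assumptions under which the inequality becomes an equality.

```latex
% Fenchel conjugate of f:
\[
  f^*(y) \;=\; \sup_{x \in \mathbb{R}^n} \big\{ \langle x, y \rangle - f(x) \big\}.
\]
% Fenchel primal-dual pair with a linear map A; weak duality always holds:
\[
  \inf_{x}\, \big\{ f(x) + g(Ax) \big\}
  \;\geq\;
  \sup_{y}\, \big\{ -f^*(A^{\top} y) - g^*(-y) \big\},
\]
% with equality (strong duality) under regularity conditions, classically
% requiring convexity of f and g, which the thesis relaxes to almost
% convex, respectively nearly convex, functions.
```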
|
35 |
Solving support vector machine classification problems and their applications to supplier selection / Kim, Gitae, January 1900
Doctor of Philosophy / Department of Industrial & Manufacturing Systems Engineering / Chih-Hang Wu / Recently, interdisciplinary collaboration research (management, engineering, science, and economics) has been growing to achieve synergy and to compensate for the weaknesses of the individual disciplines. Along this trend, this research combines three topics: mathematical programming, data mining, and supply chain management. A new pegging algorithm is developed for solving the continuous nonlinear knapsack problem. An efficient solution approach is proposed for the ν-support vector machine classification problem in the field of data mining, with the new pegging algorithm used to solve the subproblem of the support vector machine problem. For supply chain management, this research proposes an efficient integrated solution approach for the supplier selection problem, in which the support vector machine is applied to the problem of selecting potential suppliers.
In the first part of this research, a new pegging algorithm solves the continuous nonlinear knapsack problem with box constraints. The problem is to minimize a convex and differentiable nonlinear function subject to one equality constraint and box constraints. A pegging algorithm needs to calculate primal variables to check the bounds on variables at each iteration, which is frequently a time-consuming task. The newly proposed dual bound algorithm checks the bounds of the Lagrange multipliers without calculating primal variables explicitly at each iteration. In addition, the calculation of the dual solution at each iteration can be reduced by a proposed new method for updating the solution.
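The flavor of the dual approach can be sketched on a simple quadratic instance: KKT stationarity makes the primal solution a clipped function of the single equality multiplier, so one can search the multiplier directly and only form primal variables at the end. The bisection below, on a made-up instance, is in the spirit of (though much simpler than) the thesis's dual-bound pegging algorithm.

```python
import numpy as np

# Continuous knapsack with box constraints:
#   min  sum_i 0.5 * w_i * (x_i - y_i)^2
#   s.t. sum_i x_i = b,   l <= x <= u.
# Stationarity gives x_i(mu) = clip(y_i - mu / w_i, l_i, u_i), and
# sum_i x_i(mu) is nonincreasing in mu, so bisect on the multiplier.
def knapsack_dual_bisection(w, y, b, l, u, tol=1e-10):
    def x_of(mu):
        return np.clip(y - mu / w, l, u)
    lo, hi = -1e6, 1e6                   # assumed bracket for the multiplier
    mu = 0.0
    for _ in range(200):
        mu = 0.5 * (lo + hi)
        s = x_of(mu).sum()
        if abs(s - b) < tol:
            break
        if s > b:
            lo = mu                      # larger mu shrinks the primal sum
        else:
            hi = mu
    return x_of(mu)

w = np.array([1.0, 2.0, 4.0])
y = np.array([3.0, -1.0, 2.0])
x = knapsack_dual_bisection(w, y, b=2.0, l=np.zeros(3), u=2.0 * np.ones(3))
print(x, x.sum())                        # feasible: entries in [0, 2], sum 2
```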
In the second part, this research proposes several streamlined solution procedures for the ν-support vector machine for classification. The main solution procedure is a matrix splitting method: a specialized matrix splitting method combined with the gradient projection method, a line search technique, and the incomplete Cholesky decomposition method. The proposed method can use a variety of methods for line search and parameter updating. Moreover, large scale problems are solved with the incomplete Cholesky decomposition and some efficient implementation techniques.
To apply the research findings to real-world problems, this research develops an efficient integrated approach for supplier selection problems using the support vector machine and mixed integer programming. Supplier selection is an essential step in procurement processes. For companies seeking to maximize profits and reduce costs, supplier selection requires finding satisfactory suppliers and allocating proper orders to the selected suppliers. In the early stage of supplier selection, a company can use support vector machine classification to choose potential qualified suppliers using specific criteria. However, the company may not need to purchase from all qualified suppliers. Once the company determines the amount of raw materials and components to purchase, it selects final suppliers from which to order optimal quantities at the final stage of the process. A mixed integer programming model is then used to determine the final suppliers and allocate optimal orders at this stage.
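As an illustration of the early screening stage, the sketch below classifies hypothetical suppliers with an off-the-shelf ν-SVM (scikit-learn's NuSVC). The features, labels, and parameter values are invented for the example; the thesis solves the underlying ν-SVM subproblems with its own matrix splitting method rather than a library call.

```python
import numpy as np
from sklearn.svm import NuSVC

# Made-up supplier data: each row is a supplier described by four scores
# (e.g. price, quality, lead time, capacity); label 1 means "qualified".
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))
y = (X @ np.array([0.5, 1.0, -0.8, 0.6]) > 0).astype(int)

# Train a nu-SVM on the first 150 suppliers, evaluate on the rest.
clf = NuSVC(nu=0.3, kernel="rbf").fit(X[:150], y[:150])
print("held-out accuracy:", clf.score(X[150:], y[150:]))

# Screen new candidate suppliers before the MIP order-allocation stage.
new_suppliers = rng.normal(size=(5, 4))
print("screened as qualified:", clf.predict(new_suppliers))
```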
|
36 |
Supervised Descent Method / Xiong, Xuehan, 01 September 2015
In this dissertation, we focus on solving Nonlinear Least Squares problems using a supervised approach. In particular, we developed a Supervised Descent Method (SDM), performed thorough theoretical analysis, and demonstrated its effectiveness on optimizing analytic functions and on four real-world applications: Inverse Kinematics, Rigid Tracking, Face Alignment (frontal and multi-view), and 3D Object Pose Estimation. In Rigid Tracking, SDM was able to take advantage of more robust features, such as HOG and SIFT; such non-differentiable image features were not considered in previous work because it relied on gradient-based methods for optimization. In Inverse Kinematics, where we minimize a non-convex function, SDM achieved significantly better convergence than gradient-based approaches. In Face Alignment, SDM achieved state-of-the-art results; moreover, it was extremely computationally efficient, which makes it applicable to many mobile applications. In addition, we provided a unified view of several popular methods, including SDM, on sequential prediction, and reformulated them as a sequence of function compositions. Finally, we suggested some future research directions on SDM and sequential prediction.
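A minimal sketch of the core SDM idea, on an invented scalar toy problem: learn a cascade of linear maps from (possibly non-differentiable) features to parameter updates by least squares, so that no analytic gradient of the feature map is ever required. The feature map and problem sizes below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def h(x):                  # nonlinear, non-smooth "feature" (note the abs)
    return np.stack([np.sin(x), np.abs(x), x**3], axis=-1)

# Training data: true minimizers x_star paired with perturbed starts x.
x_star = rng.uniform(-1, 1, size=2000)
x = x_star + rng.normal(scale=0.5, size=2000)

cascade = []
for k in range(4):                       # learn 4 generic descent maps
    F = np.column_stack([h(x) - h(x_star), np.ones_like(x)])  # features + bias
    delta = x_star - x                   # ideal update at this stage
    R, *_ = np.linalg.lstsq(F, delta, rcond=None)  # least-squares regressor
    cascade.append(R)
    x = x + F @ R                        # apply learned update, then relearn

# Test: run the learned cascade from a fresh start toward a new target.
xt_star, xt = 0.3, 0.9
for R in cascade:
    Ft = np.concatenate([h(xt) - h(xt_star), [1.0]])
    xt = xt + Ft @ R
print(xt)                                # moves close to 0.3 across the cascade
```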
|
37 |
Decentralized probabilistic density control of swarm of autonomous agents with conflict avoidance constraints / Demir, Nazlı, 01 October 2014
This report describes a method to control the density distribution of a large number of autonomous agents. The approach is based on the fact that there are a large number of agents in the system, and hence the time evolution of the probabilistic density distribution of agents can be described as a Markov chain. The main contribution of this work is the synthesis of a Markov matrix which guides the multi-agent system density to a desired steady-state density distribution, in a probabilistic sense, while satisfying motion and safety constraints. Also, an adaptive density control method based on real-time density feedback is introduced to synthesize a time-varying Markov matrix, which leads to better convergence to the desired density distribution. Finally, a decentralized density computation method is described. This method guarantees that all agents reach a common best density estimate in a finite number of communication updates, with an explicit bound.
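A minimal sketch of the underlying mechanism: if a column-stochastic Markov matrix M has the desired density v as its stationary distribution, the swarm density x_{k+1} = M x_k converges to v from any initial distribution. Here M is built with a Metropolis-Hastings construction on a small line graph, an illustrative stand-in for the thesis's optimization-based synthesis with motion and safety constraints.

```python
import numpy as np

def metropolis_chain(v, neighbors):
    """Column-stochastic M, reversible w.r.t. v: M[j,i]*v[i] = M[i,j]*v[j]."""
    n = len(v)
    M = np.zeros((n, n))
    for i in range(n):
        for j in neighbors[i]:
            if j != i:
                # propose uniformly over neighbors, accept with MH ratio
                M[j, i] = min(1.0 / len(neighbors[i]),
                              v[j] / (v[i] * len(neighbors[j])))
        M[i, i] = 1.0 - M[:, i].sum()    # remaining probability: stay put
    return M

v = np.array([0.1, 0.2, 0.4, 0.2, 0.1])         # desired density over 5 bins
nbrs = {0: [0, 1], 1: [0, 1, 2], 2: [1, 2, 3], 3: [2, 3, 4], 4: [3, 4]}
M = metropolis_chain(v, nbrs)

x = np.array([1.0, 0.0, 0.0, 0.0, 0.0])         # all agents start in bin 0
for _ in range(500):
    x = M @ x                                    # density evolution
print(np.round(x, 3))                            # approaches v
```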
|
38 |
Parallel magnetic resonance imaging reconstruction problems using wavelet representations / Chaari, Lotfi, 05 November 2010
To reduce scanning time or improve spatio-temporal resolution in some MRI applications, parallel MRI acquisition techniques with multiple coils have emerged since the early 90's as powerful methods. In these techniques, MRI images have to be reconstructed from acquired undersampled "k-space" data. To this end, several reconstruction techniques have been proposed, such as the widely used SENSitivity Encoding (SENSE) method. However, the reconstructed images generally present artifacts due to the noise corrupting the observed data and to coil sensitivity profile estimation errors. In this work, we present novel SENSE-based reconstruction methods which apply regularization in the complex wavelet domain so as to promote the sparsity of the solution. These methods achieve accurate image reconstruction under degraded experimental conditions, in which neither the SENSE method nor standard regularized methods (e.g. Tikhonov) give convincing results. The proposed approaches rely on fast parallel optimization algorithms dealing with convex but non-differentiable criteria involving suitable sparsity-promoting priors. Moreover, in contrast with most of the available reconstruction methods, which proceed slice by slice, one of the proposed methods allows 4D (3D + time) reconstruction exploiting spatial and temporal correlations. The hyperparameter estimation problem inherent to the regularization process has also been addressed from a Bayesian viewpoint by using MCMC techniques.
Experiments on real anatomical and functional data show that the proposed methods reduce reconstruction artifacts and improve the statistical sensitivity/specificity in functional MRI.
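The sparsity-promoting regularization described above leads to problems of the form min_w 0.5*||A w - y||^2 + lam*||w||_1, solvable by proximal-gradient (ISTA-type) iterations. The sketch below uses a random operator and canonical-basis sparsity as stand-ins for the coil/Fourier operator and wavelet coefficients; it illustrates the generic iteration, not the thesis's parallel algorithms for more general non-differentiable criteria.

```python
import numpy as np

def soft(z, t):
    """Soft-thresholding: the proximal operator of t * ||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

rng = np.random.default_rng(0)
m, n = 60, 200
A = rng.normal(size=(m, n)) / np.sqrt(m)        # toy measurement operator
w_true = np.zeros(n)
w_true[rng.choice(n, 8, replace=False)] = rng.normal(size=8)   # sparse truth
y = A @ w_true + 0.01 * rng.normal(size=m)      # noisy undersampled data

lam = 0.02
step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1 / Lipschitz const of grad
w = np.zeros(n)
for _ in range(500):
    w = soft(w - step * A.T @ (A @ w - y), step * lam)   # ISTA iteration
print(np.linalg.norm(w - w_true))               # small reconstruction error
```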
|
39 |
Dynamical system decomposition and analysis using convex optimization / Anderson, James David, January 2012
This thesis is concerned with investigating new methods for the analysis of large-scale dynamical systems using convex optimization. The proposed methodology is based on composite Lyapunov theory and is computationally implemented using polynomial programming techniques. The main result of this work is the development of a system decomposition framework that makes it possible to analyze systems whose scale puts them beyond the reach of traditional methods. We begin by addressing the problem of model invalidation. A barrier certificate method for invalidating models in the presence of uncertain data is presented for both continuous and discrete time models. It is shown how a re-parameterization of the time-dependent variables can improve the numerical conditioning of the underlying optimization problem. The main contribution of this thesis is the development of an automated dynamical system decomposition framework that permits us to verify the stability of systems that typically have a state dimension large enough to render traditional computational methods intractable. The underlying idea is to decompose a system into a set of lower-order subsystems connected in feedback, in such a manner that composite methods for stability verification may be employed. What is unique about the algorithm presented is that it takes into account both the dynamics and the topology of the interconnection graph. In the first instance we illustrate the methodology with an ecological network and a primal Internet congestion control scheme. The versatility of the decomposition framework is also highlighted when it is shown that, when applied to a model of the EGF-MAPK signaling pathway, it is capable of identifying biologically relevant subsystems in addition to verifying stability. Finally, we introduce stability metrics for interconnected dynamical systems based on the theory of dissipativity. We conclude by outlining a clustering-based decomposition algorithm that explicitly takes into account the input and output dynamics when determining the system decomposition.
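For the linear special case, Lyapunov analysis via convex optimization reduces to a small semidefinite feasibility problem: find a symmetric P, positive definite, with A'P + PA negative definite, certifying stability of x_dot = A x. The sketch below sets this up with CVXPY as an assumed off-the-shelf modeling layer; the thesis works with polynomial programming and composite certificates rather than this toy.

```python
import cvxpy as cp
import numpy as np

# Stable test matrix (eigenvalues -1 and -3); a certificate should exist.
A = np.array([[-1.0,  2.0],
              [ 0.0, -3.0]])
n = A.shape[0]

P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n),                 # P positive definite
               A.T @ P + P @ A << -eps * np.eye(n)]  # Lyapunov inequality
prob = cp.Problem(cp.Minimize(0), constraints)       # pure feasibility
prob.solve()
print(prob.status)      # 'optimal' means a Lyapunov certificate was found
print(P.value)
```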
|
40 |
Computational and Statistical Advances in Testing and Learning / Ramdas, Aaditya Kumar, 01 July 2015
This thesis makes fundamental computational and statistical advances in testing and estimation, making critical progress in the theory and application of classical statistical methods like classification, regression and hypothesis testing, and in understanding the relationships between them. Our work connects multiple fields in often counter-intuitive and surprising ways, leading to new theory, new algorithms, and new insights, and ultimately to a cross-fertilization of varied fields like optimization, statistics and machine learning. The first of three thrusts has to do with active learning, a form of sequential learning from feedback-driven queries that often has a provable statistical advantage over passive learning. We unify concepts from two seemingly different areas: active learning and stochastic first-order optimization. We use this unified view to develop new lower bounds for stochastic optimization using tools from active learning, and new algorithms for active learning using ideas from optimization. We also study the effect of feature noise, or errors-in-variables, on the ability to actively learn. The second thrust deals with the development and analysis of new convex optimization algorithms for classification and regression problems. We provide geometric and convex-analytic insights into the role of the margin in margin-based classification, and develop new greedy primal-dual algorithms for non-linear classification. We also develop a unified proof for convergence rates of randomized algorithms for the ordinary least squares and ridge regression problems in a variety of settings, with the purpose of investigating which algorithm should be utilized in different settings. Lastly, we develop fast, state-of-the-art, numerically stable algorithms for an important univariate regression problem called trend filtering, with a wide variety of practical extensions. The last thrust involves a series of practical and theoretical advances in nonparametric hypothesis testing. We show that a smoothed Wasserstein distance allows us to connect many vast families of univariate and multivariate two-sample tests. We clearly demonstrate the decreasing power of the families of kernel-based and distance-based two-sample tests and independence tests with increasing dimensionality, challenging existing folklore that they work well in high dimensions. Surprisingly, we show that these tests are automatically adaptive to simple alternatives and achieve the same power as other direct tests for detecting mean differences. We discover a computation-statistics tradeoff, where computationally more expensive two-sample tests have a provable statistical advantage over cheaper tests. We also demonstrate the practical advantage of using Stein shrinkage for kernel independence testing at small sample sizes. Lastly, we develop a novel algorithmic scheme for performing sequential multivariate nonparametric hypothesis testing using the martingale law of the iterated logarithm to near-optimally control both type-1 and type-2 errors. One perspective connecting everything in this thesis involves the closely related and fundamental problems of linear regression and classification. Every contribution in this thesis, from active learning to optimization algorithms, to the role of the margin, to nonparametric testing, fits in this picture. An underlying theme that repeats itself in this thesis is the computational and/or statistical advantage of sequential schemes with feedback. This arises in our work through comparing active with passive learning, through iterative algorithms for solving linear systems instead of direct matrix inversions, and through comparing the power of sequential and batch hypothesis tests.
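To make the kernel two-sample testing setup concrete, here is a minimal sketch of the (biased) MMD statistic with a Gaussian kernel and a permutation null on made-up data. The bandwidth choice and sample sizes are illustrative assumptions; this is the standard construction whose high-dimensional power the thesis scrutinizes.

```python
import numpy as np

def mmd2(X, Y, bw):
    """Biased squared Maximum Mean Discrepancy with a Gaussian kernel."""
    Z = np.vstack([X, Y])
    sq = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)   # pairwise distances
    K = np.exp(-sq / (2 * bw ** 2))
    m = len(X)
    return K[:m, :m].mean() + K[m:, m:].mean() - 2 * K[:m, m:].mean()

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
Y = rng.normal(loc=0.5, size=(100, 3))    # mean-shifted alternative
bw = 1.0                                  # in practice, e.g. median heuristic

obs = mmd2(X, Y, bw)
Z = np.vstack([X, Y])
null = []
for _ in range(200):                      # permutation null distribution
    idx = rng.permutation(len(Z))
    null.append(mmd2(Z[idx[:100]], Z[idx[100:]], bw))
p = (np.sum(np.array(null) >= obs) + 1) / (len(null) + 1)
print("p-value:", p)                      # small: the shift is detected
```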
|