  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Copositive programming: separation and relaxations

Dong, Hongbo 01 December 2011 (has links)
A large portion of research in science and engineering, as well as in business, concerns a single recurring problem: how to make things "better"? Once properly modeled (although this is usually a highly nontrivial task), this kind of question can be approached via a mathematical optimization problem. An optimal solution to a mathematical optimization problem, when interpreted properly, may correspond to new knowledge, effective methodology, or good decisions in the corresponding application area. As demonstrated by many success stories, research in mathematical optimization has a significant impact on numerous aspects of human life. Recently, it was discovered that a large number of difficult optimization problems can be formulated as copositive programming problems. Famous examples include a large class of quadratic optimization problems as well as many classical combinatorial optimization problems. For some more general optimization problems, copositive programming provides a way to construct tight convex relaxations. Because of this generality, new knowledge about copositive programs has the potential to be uniformly applied to all these cases. While it is provably difficult to design efficient algorithms for general copositive programs, we study copositive programming from two standard aspects: its relaxations and its separation problem. With regard to constructing computationally tractable convex relaxations for copositive programs, we develop direct constructions of two tensor relaxation hierarchies for the completely positive cone, which is a fundamental geometric object in copositive programming. We show the connection of our relaxation hierarchies with known hierarchies, and then consider the application of these tensor relaxations to the maximum stable set problem. With regard to the separation problem for copositive programming, we first prove some new results in the low-dimensional case of 5 x 5 matrices.
Then we show how a separation procedure for this low-dimensional case can be extended to symmetric matrices of any size with a certain block structure. Last but not least, we provide another approach to separation and relaxations for the (generalized) completely positive cone. We prove some generic results, and discuss applications to the completely positive case and to another case related to box-constrained quadratic programming. Finally, we conclude the thesis with remarks on some interesting open questions in the field of copositive programming.
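As a concrete companion to the abstract above: a symmetric matrix M is copositive when x'Mx >= 0 for all nonnegative x, which by scaling reduces to checking the minimum of the quadratic over the standard simplex. The sketch below (my own illustration, not code from the thesis) probes copositivity heuristically by random sampling on the simplex, using the classical 5 x 5 Horn matrix, a standard example in exactly the dimension the abstract studies. Sampling only yields an upper bound on the true minimum, so this is a falsification test, not a proof of copositivity.

```python
import numpy as np

def min_quadratic_on_simplex(M, n_samples=200000, seed=0):
    """Estimate min of x^T M x over the standard simplex by random sampling.
    M is copositive iff this minimum is >= 0 (sampling can only refute, not
    certify, copositivity: it upper-bounds the true minimum)."""
    rng = np.random.default_rng(seed)
    n = M.shape[0]
    # Dirichlet(1,...,1) samples are uniform on the simplex.
    X = rng.dirichlet(np.ones(n), size=n_samples)
    vals = np.einsum("ij,jk,ik->i", X, M, X)
    return vals.min()

# The Horn matrix: a classical 5 x 5 copositive matrix that is NOT the sum
# of a positive semidefinite and an entrywise nonnegative matrix.
H = np.array([[ 1, -1,  1,  1, -1],
              [-1,  1, -1,  1,  1],
              [ 1, -1,  1, -1,  1],
              [ 1,  1, -1,  1, -1],
              [-1,  1,  1, -1,  1]], dtype=float)

print(min_quadratic_on_simplex(H))   # nonnegative: no violation found
print(min_quadratic_on_simplex(-np.eye(5)))  # negative: -I is not copositive
```

The negative-definite test case shows the sampler does detect non-copositivity when a violating nonnegative direction exists.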
2

Sensitivity Analysis of Convex Relaxations for Nonsmooth Global Optimization

Yuan, Yingwei January 2020 (has links)
Nonsmoothness appears in various applications in chemical engineering, including multi-stream heat exchangers, nonsmooth flash calculations, and process integration. In terms of numerical approaches, convex/concave relaxations of static and dynamic systems may also exhibit nonsmoothness. These relaxations are used in deterministic methods for global optimization. This thesis presents several new theoretical results for nonsmooth sensitivity analysis, with an emphasis on convex relaxations. Firstly, the "compass difference" and established ODE results by Pang and Stewart are used to describe a correct subgradient for a nonsmooth dynamic system with two parameters. This sensitivity information can be computed using standard ODE solvers. Next, this thesis also uses the compass difference to obtain a subgradient for the Tsoukalas-Mitsos convex relaxations of composite functions of two variables. Lastly, this thesis develops a new general subgradient result for Tsoukalas-Mitsos convex relaxations of composite functions. This result places no limit on the dimensions of the input variables, and it gives the whole subdifferential of the Tsoukalas-Mitsos convex relaxations. Compared to Tsoukalas and Mitsos' previous subdifferential results, it also does not require solving an additional dual optimization problem. The new subgradient results are extended to obtain directional derivatives for Tsoukalas-Mitsos convex relaxations. The new subgradient and directional derivative results are computationally practical: the subgradients in this thesis can be calculated by both the vector forward AD mode and the reverse AD mode. A proof-of-concept implementation in Matlab is discussed. / Thesis / Master of Applied Science (MASc)
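The central object in the abstract above is a subgradient of a nonsmooth convex function. As a generic illustration of that object (not the compass-difference or Tsoukalas-Mitsos machinery itself), a subgradient of a pointwise maximum of affine functions can be taken from any active piece, and the defining subgradient inequality can be checked numerically:

```python
import numpy as np

def max_affine(A, b, x):
    """f(x) = max_i (A[i] @ x + b[i]): a canonical convex nonsmooth function."""
    return np.max(A @ x + b)

def subgradient(A, b, x):
    """The gradient of any active affine piece is a valid subgradient at x."""
    i = np.argmax(A @ x + b)
    return A[i]

A = np.array([[1.0, 0.0], [0.0, 1.0]])   # f(x, y) = max(x, y)
b = np.zeros(2)
x0 = np.array([0.0, 0.0])                # kink point: f is nonsmooth here
g = subgradient(A, b, x0)

# Subgradient inequality f(y) >= f(x0) + g @ (y - x0) must hold for all y.
for y in np.random.default_rng(1).normal(size=(1000, 2)):
    assert max_affine(A, b, y) >= max_affine(A, b, x0) + g @ (y - x0) - 1e-12
print("subgradient inequality verified")
```

At the kink, the whole subdifferential is the convex hull of the active gradients; the thesis's contribution is characterizing that full set for far more complicated composite relaxations.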
3

Trend-Filtered Projection for Principal Component Analysis

Li, Liubo January 2017 (has links)
No description available.
4

Large Scale Matrix Completion and Recommender Systems

Amadeo, Lily 04 September 2015 (has links)
The goal of this thesis is to extend the theory and practice of matrix completion algorithms, and how they can be utilized, improved, and scaled up to handle large data sets. Matrix completion involves predicting missing entries in real-world data matrices using the modeling assumption that the fully observed matrix is low-rank. Low-rank matrices appear across a broad selection of domains, and such a modeling assumption is similar in spirit to Principal Component Analysis. Our focus is on large scale problems, where the matrices have millions of rows and columns. In this thesis we provide new analysis for the convergence rates of matrix completion techniques using convex nuclear norm relaxation. In addition, we validate these results on both synthetic data and data from two real-world domains (recommender systems and Internet tomography). The results we obtain show that with an empirical, data-inspired understanding of various parameters in the algorithm, this matrix completion problem can be solved more efficiently than some previous theory suggests, and therefore can be extended to much larger problems with greater ease.
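The nuclear-norm relaxation mentioned above replaces the nonconvex rank constraint with the sum of singular values. One classic first-order scheme for this relaxation is singular value thresholding (SVT); the sketch below is a simplified illustration of that general idea (my own choice of algorithm and parameters, not the thesis's method or experiments):

```python
import numpy as np

def svt_complete(M_obs, mask, tau=5.0, step=1.0, iters=500):
    """Singular value thresholding for nuclear-norm matrix completion
    (a sketch under simplified assumptions). mask[i, j] is True where
    M_obs is observed; unobserved entries of M_obs are zero."""
    Y = np.zeros_like(M_obs)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt  # shrink singular values
        Y = Y + step * mask * (M_obs - X)               # step on observed residual
    return X

rng = np.random.default_rng(0)
# Rank-2 ground truth, roughly 60% of entries observed.
M = rng.normal(size=(30, 2)) @ rng.normal(size=(2, 30))
mask = rng.random((30, 30)) < 0.6
X = svt_complete(M * mask, mask)
rel_err = np.linalg.norm(X - M) / np.linalg.norm(M)
print(rel_err)  # small for this easy low-rank instance
```

On large instances the per-iteration SVD is the bottleneck, which is exactly where the scaling questions studied in the thesis arise.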
5

Contributions à la résolution globale de problèmes bilinéaires appliqués à l'industrie porcine / Contribution to the global resolution of bilinear problems applied to the swine industry

Joannopoulos, Emilie 27 April 2018 (has links)
Today, feed represents more than 70% of the production cost in the growing-finishing pig industry, and in the current economic context it is important to reduce it. The feeding system currently in use is based on phases and is expressed as a linear model. The feeding system using feeds, introduced more recently, is represented by a bilinear model. We introduce here a new feeding system that combines the phase-based and feed-based systems: the hybrid method. We show that it can reduce the feed cost by more than 5%. The main part of this manuscript concerns the global optimization of the bilinear, nonconvex problem modeling the feed-based system. The bilinearity appears in the objective function and in the constraints. This problem can have several local minima, but we seek a global one. It is equivalent to a pooling problem, and we prove that it is strongly NP-hard. After studying first results, we state the conjecture that any local minimum is a global minimum for this bilinear problem applied to the pig industry, and we prove it for a small example. Since our problem cannot be solved with global solvers due to its size, we apply approaches such as penalization of the bilinear terms, discretization, and Lagrangian and convex relaxation techniques. All these approaches support our conjecture. We also study the robustness of the models to variations in ingredient prices, and carry out a multicriteria study yielding numerical results that considerably reduce phosphorus and nitrogen excretion, another important issue.
6

Bounding Reachable Sets for Global Dynamic Optimization

Cao, Huiyi January 2021 (has links)
Many chemical engineering applications, such as safety verification and parameter estimation, require global optimization of dynamic models. Global optimization algorithms typically require obtaining global bounding information of the dynamic system, to aid in locating and verifying the global optimum. The typical approach for providing these bounds is to generate convex relaxations of the dynamic system and minimize them using a local optimization solver. Tighter convex relaxations typically lead to tighter lower bounds, so that the number of iterations in global optimization algorithms can be reduced. To carry out this local optimization efficiently, subgradient-based solvers require gradients or subgradients to be furnished. Smooth convex relaxations would aid local optimization even more. To address these issues and improve the computational performance of global dynamic optimization, this thesis proposes several novel formulations for constructing tight convex relaxations of dynamic systems. In some cases, these relaxations are smooth. Firstly, a new strategy is developed to generate convex relaxations of implicit functions, under minimal assumptions. These convex relaxations are described by parametric programs whose constraints are convex relaxations of the residual function. Compared with established methods for relaxing implicit functions, this new approach does not assume uniqueness of the implicit function and does not require the original residual function to be factorable. This new strategy was demonstrated to construct tighter convex relaxations in multiple numerical examples. Moreover, this new convex relaxation strategy extends to inverse functions, feasible-set mappings in constraint satisfaction problems, as well as parametric ordinary differential equations (ODEs). 
Using a proof-of-concept implementation in Julia, numerical examples are presented to illustrate the convex relaxations produced for various implicit functions and optimal-value functions. In certain cases, these convex relaxations are tighter than those generated with existing methods. Secondly, a novel optimization-based framework is introduced for computing time-varying interval bounds for ODEs. Such interval bounds are useful for constructing convex relaxations of ODEs, and tighter interval bounds typically translate into tighter convex relaxations. This framework subsumes several established bounding approaches, but also includes many new ones. Some of these new methods can generate tighter interval bounds than established methods, which is potentially helpful for constructing tighter convex relaxations of ODEs. Several of these approaches have been implemented in Julia. Thirdly, a new approach is developed to improve a state-of-the-art ODE relaxation method and generate tighter, smooth convex relaxations. Unlike state-of-the-art methods, the auxiliary ODEs used in these new methods for computing convex relaxations have continuous right-hand side functions. Such continuity not only makes the new methods easier to implement, but also permits the evaluation of subgradients of the convex relaxations. Under some additional assumptions, differentiable convex relaxations can be constructed. Moreover, it is demonstrated that the new convex relaxations are at least as tight as those of state-of-the-art methods, which benefits global dynamic optimization. This approach has been implemented in Julia, and numerical examples are presented. Lastly, a new approach is proposed for generating a guaranteed lower bound for the optimal solution value of a nonconvex optimal control problem (OCP). This lower bound is obtained by constructing a relaxed convex OCP that satisfies the sufficient optimality conditions of Pontryagin's Minimum Principle.
Such lower bounding information is useful for optimizing the original nonconvex OCP to a global minimum using deterministic global optimization algorithms. Compared with established methods for underestimating nonconvex OCPs, this new approach constructs tighter lower bounds. Moreover, since it does not involve any numerical approximation of the control and state trajectories, it provides lower bounds that are reliable and consistent. This approach has been implemented for control-affine systems, and numerical examples are presented. / Thesis / Doctor of Philosophy (PhD)
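To give a feel for the time-varying interval bounds discussed above: the crudest approach propagates interval enclosures of the state through explicit Euler steps using natural interval arithmetic. The sketch below is an illustration only (it ignores time-discretization error, so unlike the thesis's methods it is not a rigorous enclosure of the continuous ODE), for a logistic ODE with an uncertain parameter:

```python
# Minimal interval arithmetic on (lo, hi) pairs -- just enough for this demo.
def imul(a, b):
    p = [a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1]]
    return (min(p), max(p))

def iadd(a, b):
    return (a[0] + b[0], a[1] + b[1])

def iscale(c, a):
    return (c*a[0], c*a[1]) if c >= 0 else (c*a[1], c*a[0])

def bound_logistic(x0, p, t_end=1.0, n=1000):
    """Approximate reachable-set bounds for x' = p*x*(1 - x), with x(0) in
    the interval x0 and parameter p in the interval p, via interval Euler
    steps (discretization error is NOT accounted for)."""
    h = t_end / n
    X = x0
    for _ in range(n):
        one_minus_X = (1.0 - X[1], 1.0 - X[0])   # interval 1 - X
        dX = imul(p, imul(X, one_minus_X))
        X = iadd(X, iscale(h, dX))
    return X

lo, hi = bound_logistic(x0=(0.1, 0.12), p=(0.9, 1.1))
print(lo, hi)  # brackets x(1) over all sampled initial conditions/parameters
```

Even on this tame example the interval width grows along the trajectory, which illustrates why the tighter bounding frameworks developed in the thesis matter for global dynamic optimization.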
7

Optimization, Learning, and Control for Energy Networks

Singh, Manish K. 30 June 2021 (has links)
Massive infrastructure networks such as electric power, natural gas, or water systems play a pivotal role in everyday human lives. Development and operation of these networks is extremely capital-intensive. Moreover, security and reliability of these networks are critical. This work identifies and addresses a diverse class of computationally challenging and time-critical problems pertaining to these networks. This dissertation extends the state of the art on three fronts. First, general proofs of uniqueness for network flow problems are presented, thus addressing open problems. Efficient network flow solvers based on energy function minimizations, convex relaxations, and mixed-integer programming are proposed with performance guarantees. Second, a novel approach is developed for sample-efficient training of deep neural networks (DNN) aimed at solving optimal network dispatch problems. The novel feature here is that the DNNs are trained to match not only the minimizers, but also their sensitivities with respect to the optimization problem parameters. Third, control mechanisms are designed that ensure resilient and stable network operation. These novel solutions are bolstered by mathematical guarantees and extensive simulations on benchmark power, water, and natural gas networks. / Doctor of Philosophy / Massive infrastructure networks play a pivotal role in everyday human lives. A minor service disruption occurring locally in electric power, natural gas, or water networks is considered a significant loss. Uncertain demands, equipment failures, regulatory stipulations, and most importantly complicated physical laws render managing these networks an arduous task. Oftentimes, the first-principles mathematical models for these networks are well known. Nevertheless, the computations needed in real-time to make spontaneous decisions frequently surpass the available resources.
Explicitly identifying such problems, this dissertation extends the state of the art on three fronts: First, efficient models enabling the operators to tractably solve some routinely encountered problems are developed using fundamental and diverse mathematical tools; Second, quickly trainable machine learning based solutions are developed that enable spontaneous decision making while learning offline from sophisticated mathematical programs; and Third, control mechanisms are designed that ensure a safe and autonomous network operation without human intervention. These novel solutions are bolstered by mathematical guarantees and extensive simulations on benchmark power, water, and natural gas networks.
8

Optimal Operation of Water and Power Distribution Networks

Singh, Manish K. 12 1900 (has links)
Under the envisioned smart city paradigm, there is an increasing demand for the coordinated operation of our infrastructure networks. In this context, this thesis puts forth a comprehensive toolbox for the optimization of electric power and water distribution networks. On the analytical front, the toolbox consists of novel mixed-integer (non)linear program (MINLP) formulations; convex relaxations with optimality guarantees; and the powerful technique of McCormick linearization. On the application side, the developed tools support the operation of each of the infrastructure networks independently, but also their joint operation. Starting with water distribution networks, the main difficulty in solving any (optimal) water flow problem stems from a piecewise quadratic pressure drop law. To efficiently handle these constraints, we have first formulated a novel MINLP, and then proposed a relaxation of the pressure drop constraints to yield a mixed-integer second-order cone program. Further, a novel penalty term is appended to the cost that guarantees optimality and exactness under pre-defined network conditions. This contribution can be used to solve the water flow (WF) problem; the optimal water flow (OWF) task of minimizing the pumping cost while satisfying operational constraints; and the task of scheduling the operation of tanks to maximize the water service time in an area experiencing an electric power outage. Regarding electric power systems, a novel mixed-integer linear program (MILP) formulation for distribution restoration, using binary indicator vectors on graph properties alongside exact McCormick linearization, is proposed. This can be used to minimize the restoration time of an electric system under critical operational constraints, and to enable a coordinated response with the water utilities during outages. / Master of Science / The advent of smart cities has promoted research towards interdependent operation of utilities such as water and power systems.
While power system analysis is significantly developed due to decades of focused research, water networks have been relying on relatively less sophisticated tools. In this context, this thesis develops advanced, efficient computational tools for the analysis and optimization of water distribution networks. Given the consumer demands, an optimal water flow (OWF) problem for minimizing the pump operation cost is formulated. Developing a rigorous analytical framework, the proposed formulation provides significant computational improvements without compromising accuracy. Explicit network conditions are provided that guarantee the optimality and feasibility of the obtained OWF solution. The developed formulation is next used to solve two practical problems: the water flow problem, which solves the complex physical equations yielding nodal pressures and pipeline flows given the demands/injections; and an OWF problem that finds the best operational strategy for water utilities during power outages. The latter helps the water utility maximize its service time during power outages, and helps power utilities better plan their restoration strategy. While increased instrumentation and automation have enabled power utilities to better manage restoration during outages, finding an optimal strategy remains a difficult problem. The operational and coordination requirements for the upcoming distributed resources and microgrids further complicate the problem. This thesis develops a computationally fast and reasonably accurate power distribution restoration scheme enabling optimal coordination of different generators with optimal islanding. Numerical tests are conducted on benchmark water and power networks to corroborate the claims of the developed formulations.
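The McCormick linearization used in the abstract above rests on the McCormick envelope of a bilinear term w = x*y over a box: the tightest convex underestimator and concave overestimator, each given by two affine inequalities. A small self-contained check (my own illustration, not code from the thesis) verifies the sandwich property numerically:

```python
import numpy as np

def mccormick_bounds(x, y, xL, xU, yL, yU):
    """McCormick envelope of the bilinear term w = x*y on the box
    [xL, xU] x [yL, yU]: convex under- and concave over-estimators,
    each the max/min of two affine functions of (x, y)."""
    under = max(xL*y + yL*x - xL*yL, xU*y + yU*x - xU*yU)
    over  = min(xU*y + yL*x - xU*yL, xL*y + yU*x - xL*yU)
    return under, over

# The envelope sandwiches x*y everywhere on the box, and is tight at corners.
rng = np.random.default_rng(0)
xL, xU, yL, yU = -1.0, 2.0, 0.5, 3.0
for _ in range(1000):
    x = rng.uniform(xL, xU)
    y = rng.uniform(yL, yU)
    lo, hi = mccormick_bounds(x, y, xL, xU, yL, yU)
    assert lo - 1e-12 <= x * y <= hi + 1e-12
print("McCormick envelope verified on the box")
```

The validity follows from expanding, e.g., (x - xL)(y - yL) >= 0 on the box; in an "exact" linearization, extra structure (such as binary variables) forces the relaxed w to coincide with x*y.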
9

Localization algorithms for passive sensor networks

Ismailova, Darya 23 January 2017 (has links)
Locating a radiating source based on range or range-difference measurements obtained from a network of passive sensors has been a subject of research over the past two decades due to the problem's importance in applications in wireless communications, surveillance, navigation, geosciences, and several other fields. In this thesis, we develop new solution methods for the problem of localizing a single radiating source based on range and range-difference measurements. Iterative re-weighting algorithms are developed for both range-based and range-difference-based least squares localization. Then we propose a penalty convex-concave procedure for finding an approximate solution to the nonlinear least squares problems associated with the range measurements. Finally, sequential convex relaxation procedures are proposed to obtain the nonlinear least squares estimate of the source coordinates. Localization in wireless sensor networks, where RF signals are used to derive the ranging measurements, is the primary application area of this work. However, the solution methods proposed are general and could be applied to range and range-difference measurements derived from other types of signals. / Graduate / 0544 / ismailds@uvic.ca
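As background for the abstract above, the standard linearized least-squares baseline for range-based localization (a textbook method, not the re-weighting or convex-concave procedures of the thesis) differences the squared-range equations to eliminate the quadratic term in the unknown source position, leaving an ordinary linear least-squares problem:

```python
import numpy as np

def localize(anchors, ranges):
    """Linearized least-squares source localization from range measurements.
    Subtracting the first squared-range equation ||x - a_0||^2 = r_0^2 from
    the others removes the ||x||^2 term, giving linear equations in x."""
    a0, r0 = anchors[0], ranges[0]
    A = 2.0 * (anchors[1:] - a0)
    b = (r0**2 - ranges[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2))
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

anchors = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0], [4.0, 4.0]])
source = np.array([1.0, 2.5])
ranges = np.linalg.norm(anchors - source, axis=1)  # noiseless for the demo
est = localize(anchors, ranges)
print(est)  # recovers the source exactly in the noiseless case
```

With noisy ranges this linearization distorts the error statistics, which is precisely what the iterative re-weighting and convex relaxation methods of the thesis are designed to handle better.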
10

Convex relaxations in nonconvex and applied optimization

Chen, Jieqiu 01 July 2010 (has links)
Traditionally, linear programming (LP) has been used to construct convex relaxations in the context of branch and bound for determining globally optimal solutions to nonconvex optimization problems. As second-order cone programming (SOCP) and semidefinite programming (SDP) become better understood by optimization researchers, they become alternative choices for obtaining convex relaxations and producing bounds on the optimal values. In this thesis, we study the use of these convex optimization tools in constructing strong relaxations for several nonconvex problems, including 0-1 integer programming, nonconvex box-constrained quadratic programming (BoxQP), and general quadratic programming (QP). We first study an SOCP relaxation for 0-1 integer programs and a sequential relaxation technique based on this SOCP relaxation. We present desirable properties of this SOCP relaxation; for example, it cuts off all fractional extreme points of the regular LP relaxation. We further prove that the sequential relaxation technique generates the convex hull of 0-1 solutions asymptotically. We next explore nonconvex quadratic programming. We propose an SDP relaxation for BoxQP based on relaxing the first- and second-order KKT conditions, where the difficulty and contribution lie in relaxing the second-order KKT condition. We show that, although the relaxation we obtain this way is equivalent to an existing SDP relaxation at the root node, it is significantly stronger on the child nodes in a branch-and-bound setting. A recent advance in optimization theory allows one to express QP as optimizing a linear function over the convex cone of completely positive matrices subject to linear constraints, referred to as completely positive programming (CPP). CPP naturally admits strong semidefinite relaxations. We incorporate the first-order KKT conditions of QP into the constraints of QP, and then pose it in the form of CPP to obtain a strong relaxation.
We employ the resulting SDP relaxation inside a finite branch-and-bound algorithm to solve the QP. Comparison of our algorithm with commercial global solvers shows potential as well as room for improvement. The remainder of the thesis is devoted to new techniques for solving a class of large-scale linear programming problems. First-order methods, although not as fast as second-order methods, are extremely memory-efficient. We develop a first-order method based on Nesterov's smoothing technique and demonstrate its effectiveness on two machine learning problems.
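The bounding role of convex relaxations described across these abstracts can be seen on the smallest possible example: the LP relaxation of a 0-1 knapsack, whose optimum upper-bounds the integer optimum and is what branch and bound prunes against. For knapsack the LP optimum happens to have a closed-form greedy solution, so the sketch below (my own illustration, not from any of the theses) needs no LP solver:

```python
import numpy as np

# A tiny 0-1 knapsack: maximize v @ x  s.t.  w @ x <= C,  x in {0,1}^n.
v = np.array([10.0, 6.0, 4.0])
w = np.array([5.0, 4.0, 3.0])
C = 7.0

def knapsack_lp_bound(v, w, C):
    """Optimal value of the LP relaxation (0 <= x <= 1). For knapsack, the
    LP optimum is the classic greedy fractional fill by value/weight ratio."""
    order = np.argsort(-v / w)
    cap, val = C, 0.0
    for i in order:
        take = min(1.0, cap / w[i])
        val += take * v[i]
        cap -= take * w[i]
        if cap <= 0:
            break
    return val

def knapsack_exact(v, w, C):
    """Brute-force integer optimum (n is tiny here)."""
    best = 0.0
    for bits in np.ndindex(*([2] * len(v))):
        x = np.array(bits, dtype=float)
        if w @ x <= C:
            best = max(best, v @ x)
    return best

lp_bound = knapsack_lp_bound(v, w, C)   # 13.0, at fractional x = (1, 0.5, 0)
ip_opt = knapsack_exact(v, w, C)        # 10.0, e.g. picking items 2 and 3
print(lp_bound, ip_opt)
```

The gap between 13 and 10 is the relaxation gap; the SOCP and SDP relaxations studied above exist precisely to shrink such gaps beyond what LP achieves.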
