1

Condition-Measure Bounds on the Behavior of the Central Trajectory of a Semi-Definite Program

Nunez, Manuel A., Freund, Robert M. 08 1900 (has links)
We present bounds on various quantities of interest regarding the central trajectory of a semi-definite program (SDP), where the bounds are functions of Renegar's condition number C(d) and other naturally occurring quantities such as the dimensions n and m. The condition number C(d) is defined in terms of the data instance d = (A, b, C) for SDP; it is the inverse of a relative measure of the distance of the data instance to the set of ill-posed data instances, that is, data instances for which arbitrarily small perturbations can make the corresponding SDP either feasible or infeasible. We provide upper and lower bounds on the solutions along the central trajectory, and upper bounds on changes in solutions and objective function values along the central trajectory when the data instance is perturbed and/or when the path parameter defining the central trajectory is changed. Based on these bounds, we prove that the solutions along the central trajectory grow at most linearly, and at a rate proportional to the inverse of the distance to ill-posedness, and grow at least linearly, and at a rate proportional to the inverse of C(d)^2, as the trajectory approaches an optimal solution to the SDP. Furthermore, the change in solutions and in objective function values along the central trajectory is at most linear in the size of the changes in the data. All such bounds involve polynomial functions of C(d), the size of the data, the distance to ill-posedness of the data, and the dimensions n and m of the SDP.
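The central-trajectory behavior bounded above can be seen in miniature on a one-variable problem. The sketch below is a hedged illustration only (a scalar log-barrier problem, not the paper's SDP setting): the barrier minimizer x(mu) moves linearly in the path parameter mu, echoing the linear growth rates in the abstract.

```python
def central_path_point(c: float, mu: float) -> float:
    """Minimizer of the barrier problem min_{x>0} c*x - mu*log(x).

    Setting the derivative c - mu/x to zero gives x(mu) = mu/c, so the
    central-path solution shrinks linearly in mu -- a 1-D analogue of the
    linear growth/decay bounds discussed in the abstract.
    """
    assert c > 0 and mu > 0
    return mu / c

# As mu -> 0 the trajectory approaches the optimum x* = 0 linearly.
trajectory = [central_path_point(2.0, mu) for mu in (1.0, 0.5, 0.25)]
print(trajectory)  # [0.5, 0.25, 0.125]
```

In the SDP case the trajectory lives in the cone of positive semi-definite matrices and the rates involve C(d), but the mechanism of a path parametrized by the barrier weight is the same.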
2

Optimization methods for side-chain positioning and macromolecular docking

Moghadasi, Mohammad 08 April 2016 (has links)
This dissertation proposes new optimization algorithms targeting protein-protein docking which is an important class of problems in computational structural biology. The ultimate goal of docking methods is to predict the 3-dimensional structure of a stable protein-protein complex. We study two specific problems encountered in predictive docking of proteins. The first problem is Side-Chain Positioning (SCP), a central component of homology modeling and computational protein docking methods. We formulate SCP as a Maximum Weighted Independent Set (MWIS) problem on an appropriately constructed graph. Our formulation also considers the significant special structure of proteins that SCP exhibits for docking. We develop an approximate algorithm that solves a relaxation of MWIS and employ randomized estimation heuristics to obtain high-quality feasible solutions to the problem. The algorithm is fully distributed and can be implemented on multi-processor architectures. Our computational results on a benchmark set of protein complexes show that the accuracy of our approximate MWIS-based algorithm predictions is comparable with the results achieved by a state-of-the-art method that finds an exact solution to SCP. The second problem we target in this work is protein docking refinement. We propose two different methods to solve the refinement problem. The first approach is based on a Monte Carlo-Minimization (MCM) search to optimize rigid-body and side-chain conformations for binding. In particular, we study the impact of optimally positioning the side-chains in the interface region between two proteins in the process of binding. We report computational results showing that incorporating side-chain flexibility in docking provides substantial improvement in the quality of docked predictions compared to the rigid-body approaches. 
Further, we demonstrate that the inclusion of unbound side-chain conformers in the side-chain search introduces significant improvement in the performance of the docking refinement protocols. In the second approach, we propose a novel stochastic optimization algorithm based on Subspace Semi-Definite programming-based Underestimation (SSDU), which aims to solve the protein docking and protein structure prediction problems. SSDU is based on underestimating the binding energy function in a permissive subspace of the space of rigid-body motions. We apply Principal Component Analysis (PCA) to determine the permissive subspace and reduce the dimensionality of the conformational search space. We consider the general class of convex polynomial underestimators, and formulate the problem of finding such underestimators as a Semi-Definite Programming (SDP) problem. Using these underestimators, we perform a biased sampling in the vicinity of the conformational regions where the energy function is at its global minimum. Moreover, we develop an exploration procedure based on density-based clustering to detect the near-native regions even when there are many local minima residing far from each other. We also incorporate a Model Selection procedure into SSDU to pick a predictive conformation. Testing our algorithm over a benchmark of protein complexes indicates that SSDU substantially improves the quality of docking refinement compared with existing methods.
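As a rough illustration of the MWIS formulation mentioned above, here is a plain greedy heuristic on a toy clash graph. This is a hypothetical sketch for intuition only: the thesis solves a relaxation of MWIS with randomized rounding in a distributed setting, not this greedy rule, and the vertex weights below are invented.

```python
def greedy_mwis(weights, edges):
    """Greedy heuristic for Maximum Weighted Independent Set.

    Repeatedly picks the highest-weight remaining vertex and discards its
    neighbors, so the chosen set is always independent.
    """
    adj = {v: set() for v in weights}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    chosen, remaining = set(), set(weights)
    while remaining:
        v = max(remaining, key=lambda x: weights[x])
        chosen.add(v)
        remaining -= adj[v] | {v}
    return chosen

# Tiny "rotamer" graph: vertices are candidate side-chain conformations,
# edges mark clashing pairs that cannot be selected together.
w = {"a": 3.0, "b": 2.0, "c": 2.5, "d": 1.0}
print(sorted(greedy_mwis(w, [("a", "b"), ("b", "c"), ("c", "d")])))  # ['a', 'c']
```

In SCP, one vertex is kept per residue and edge weights encode pairwise interaction energies; the independent-set constraint enforces one conformation per residue.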
3

Phasor Measurement Unit Data-based States and Parameters Estimation in Power System

Ghassempour Aghamolki, Hossein 08 November 2016 (has links)
The dissertation research investigates the estimation of power system static and dynamic states (e.g. rotor angle, rotor speed, mechanical power, voltage magnitude, voltage phase angle, mechanical reference point) as well as the identification of synchronous generator parameters. The research has two focuses: (i) estimating synchronous generator dynamic model states and parameters using real-time PMU data; (ii) integrating PMU data and conventional measurements to carry out static state estimation. The first part of the work focuses on Phasor Measurement Unit (PMU) data-based synchronous generator state and parameter estimation. In the completed work, PMU data-based synchronous generator model identification is carried out using an Unscented Kalman Filter (UKF). The identification yields not only the states and parameters related to the synchronous generator swing dynamics but also those related to the turbine-governor and to primary and secondary frequency control. PMU measurements of active power and voltage magnitude are treated as the inputs to the system, while voltage phase angle, reactive power, and frequency measurements are treated as the outputs. UKF-based estimation can be carried out in real time. Validation is achieved through event playback, comparing the outputs of the simplified simulation model against the PMU measurements given the same input data. Case studies are conducted not only for measurements collected from a simulation model, but also for a set of real-world PMU data. The research results have been disseminated in one published article. In the second part of the research, a new state estimation algorithm is designed for static state estimation, combining a new solution strategy with simultaneous bad data detection.
The primary challenge in state estimation solvers is the inherent non-linearity and non-convexity of the measurement functions, which requires the use of an interior point algorithm, with no guarantee of a globally optimal solution and with higher computational time. This non-linearity and non-convexity come from the nature of the power flow equations in power systems. The second major challenge in static state estimation is bad data detection. Traditionally, the Largest Normalized Residue Test (LNRT) has been used to identify bad data in static state estimation. The traditional bad data detection algorithm can only be applied after state estimation; whenever a bad datum is found, the SE algorithm has to be rerun with the detected bad data eliminated. Therefore, a new simultaneous and robust algorithm is designed for static state estimation and bad data identification. Second Order Cone Programming (SOCP) is used to improve the solution technique for the power system state estimator. However, the non-convex feasibility constraints in an SOCP-based estimator force the use of a local solver such as the interior point method (IPM), with no guarantee of high-quality answers. Therefore, a cycle-based SOCP relaxation is applied to the state estimator, and a least squares estimation (LSE) based method is implemented to generate positive semi-definite programming (SDP) cuts. With this approach, we are able to strengthen the state estimator (SE) with the SOCP relaxation. Since SDP relaxation leads the power flow problem to solutions of higher quality, adding SDP cuts to the SOCP relaxation brings the problem's feasible region close to the SDP feasible region while avoiding the computational difficulty associated with SDP solvers. The improved solver is effective in reducing the feasible region and eliminating unwanted solutions that violate the cycle constraints.
Different case studies are carried out to demonstrate the effectiveness and robustness of the method. After introducing the new solution technique, a novel co-optimization algorithm for simultaneous nonlinear state estimation and bad data detection is introduced in this dissertation. ${\ell}_1$-norm optimization of the sparse residuals is used as a constraint on the state estimation problem to make the co-optimization possible. Numerical case studies demonstrate more accurate results from the SOCP-relaxed state estimation, successful implementation of the algorithm for simultaneous state estimation and bad data detection, and better state estimation recovery against single and multiple Gaussian bad data compared to the traditional LNRT algorithm.
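The residual-test idea behind LNRT, which the dissertation improves upon, can be sketched on a scalar toy estimator. Everything below (the one-state model, measurement values, unit weights) is an illustrative assumption, not the dissertation's SOCP/SDP formulation:

```python
def wls_scalar(h, z, w):
    """Weighted least squares estimate of a single state x from z_i = h_i*x + e_i."""
    num = sum(wi * hi * zi for hi, zi, wi in zip(h, z, w))
    den = sum(wi * hi * hi for hi, wi in zip(h, w))
    return num / den

def largest_residual_index(h, z, w):
    """Index of the largest weighted residual -- the LNRT-style bad-data suspect."""
    x = wls_scalar(h, z, w)
    residuals = [abs(zi - hi * x) * wi ** 0.5 for hi, zi, wi in zip(h, z, w)]
    return max(range(len(residuals)), key=residuals.__getitem__)

# Three redundant measurements of the same state (true x = 1.0); the third is bad.
h, z, w = [1.0, 1.0, 1.0], [1.01, 0.99, 5.0], [1.0, 1.0, 1.0]
print(largest_residual_index(h, z, w))  # 2
```

In the traditional workflow, the flagged measurement is removed and the estimator rerun; the dissertation's co-optimization instead detects bad data simultaneously via an l1-norm constraint on sparse residuals.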
4

Applications of Lattice Codes in Communication Systems

Mobasher, Amin 03 December 2007 (has links)
In the last decade, there has been an explosive growth in different applications of wireless technology, due to users' increasing expectations for multi-media services. With the current trend, present systems will not be able to handle the required data traffic. Lattice codes have attracted considerable attention in recent years because they provide high data rate constellations. In this thesis, applications of lattice codes in different communication systems are investigated. The thesis is divided into two major parts: the first part focuses on constellation shaping and the problem of lattice labeling, while the second part is devoted to the lattice decoding problem. In the constellation shaping technique, conventional constellations are replaced by lattice codes that satisfy some geometrical properties. However, a simple algorithm, called lattice labeling, is required to map the input data to the lattice code points. In the first part of this thesis, the application of lattice codes for constellation shaping in Orthogonal Frequency Division Multiplexing (OFDM) and Multi-Input Multi-Output (MIMO) broadcast systems is considered. In an OFDM system, a lattice code with low Peak to Average Power Ratio (PAPR) is desired. Here, a new lattice code with considerable PAPR reduction for OFDM systems is proposed. Due to the recursive structure of this lattice code, a simple lattice labeling method based on the Smith normal decomposition of an integer matrix is obtained. A selective mapping method in conjunction with the proposed lattice code is also presented to further reduce the PAPR. MIMO broadcast systems are also considered in the thesis. In a multiple antenna broadcast system, the lattice labeling algorithm should be such that different users can decode their data independently. Moreover, the implemented lattice code should result in a low average transmit energy. Here, a selective mapping technique provides such a lattice code.
Lattice decoding is the focus of the second part of the thesis; it concerns the operation of finding the closest point of the lattice code to any point in N-dimensional real space. In digital communication applications, this problem is known as the integer least-squares problem, which arises in many areas, e.g. the detection of symbols transmitted over a multiple antenna wireless channel, the multiuser detection problem in Code Division Multiple Access (CDMA) systems, and the simultaneous detection of multiple users in a Digital Subscriber Line (DSL) system affected by crosstalk. Here, an efficient lattice decoding algorithm based on Semi-Definite Programming (SDP) is introduced. The proposed algorithm is capable of handling any form of lattice constellation for an arbitrary labeling of points. In the proposed methods, the distance minimization problem is expressed as a binary quadratic minimization problem, which is solved by introducing several matrix and vector lifting SDP relaxation models. The new SDP models provide a wealth of trade-offs between the complexity and the performance of the decoding problem.
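The integer least-squares problem at the heart of lattice decoding is compact to state. Below is a brute-force reference solver on a tiny 2-D lattice — a hedged stand-in, useful only for checking answers; the SDP-relaxation decoders proposed in the thesis are what make realistic dimensions tractable. The generator matrix and target point are invented:

```python
from itertools import product

def closest_lattice_point(G, y, radius=3):
    """Brute-force integer least squares: min_x ||y - G x||^2 over integer x
    with each |x_i| <= radius. G is a list of rows of the lattice generator."""
    def apply(M, x):
        return [sum(M[i][j] * x[j] for j in range(len(x))) for i in range(len(M))]
    def dist2(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))
    candidates = product(range(-radius, radius + 1), repeat=len(G[0]))
    return min(candidates, key=lambda x: dist2(apply(G, list(x)), y))

G = [[1.0, 0.5], [0.0, 1.0]]  # toy lattice generator
print(closest_lattice_point(G, [1.2, 1.9]))  # (0, 2)
```

The search cost grows as (2*radius+1)^N, which is exactly why relaxation-based decoders are needed for MIMO-sized problems.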
6

Graph Partitioning and Semi-definite Programming Hierarchies

Sinop, Ali Kemal 15 May 2012 (has links)
Graph partitioning is a fundamental optimization problem that has been intensively studied. Many graph partitioning formulations are important as building blocks for divide-and-conquer algorithms on graphs, as well as in many applications such as VLSI layout, packet routing in distributed networks, clustering, and image segmentation. Unfortunately, such problems are notorious for the huge gap between the best known approximation algorithms and the hardness-of-approximation results. In this thesis, we study approximation algorithms for graph partitioning problems using a strong hierarchy of relaxations based on semi-definite programming, called the Lasserre Hierarchy. Our main contribution in this thesis is a propagation-based rounding framework for solutions arising from such relaxations. We present a novel connection between the quality of the solutions it outputs and the column-based matrix reconstruction problem. As part of our work, we derive optimal bounds on the number of columns necessary, together with efficient randomized and deterministic algorithms to find such columns. Using this framework, we derive approximation schemes for many graph partitioning problems with running times dependent on how fast the graph spectrum grows. Our final contribution is a fast SDP solver for this rounding framework: even though the SDP relaxation has n^O(r) many variables, we achieve running times of the form 2^O(r) poly(n) by only partially solving the relevant part of the relaxation. In order to achieve this, we present a new ellipsoid algorithm that returns a certificate of infeasibility.
7

A homogenization approach for assessing the behavior of soil structures reinforced by columns or trenches

Gueguin, Maxime 09 July 2014 (has links)
This work takes place in the context of soil reinforcement techniques, aimed at improving the mechanical performances of poor-quality grounds.
Among these techniques, the use of soft inclusions taking the form of columns or cross trenches has seen important development. Even if the aspects relating to their construction process are now well mastered, the design methods for such reinforced soil structures still remain to be greatly improved. The present work advocates the use of the homogenization method for assessing the global behavior of reinforced soil structures, both in the context of linear elasticity (stiffness properties) and in the framework of yield design (strength properties). Taking into account the geometrical periodicity of the various reinforcement configurations, we determine the behavior of the reinforced soils first locally and then at the global scale. To assess the strength capacities of reinforced soil structures, the static and kinematic approaches of the yield design theory are performed analytically or numerically, depending on the kind of reinforcing material used. Adopting innovative numerical formulations dedicated to this theory, we can notably evaluate the macroscopic strength domains of column- as well as cross-trench-reinforced soils, which can then be introduced into the yield design of reinforced soil structures. Two illustrative applications of this procedure are performed, relating to the bearing capacity problem of a shallow foundation resting on reinforced soil on the one hand, and to the stability analysis of an embankment on the other.
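As a minimal illustration of the local-to-global step, the homogenized stiffness of a soil of modulus E_s reinforced at volume fraction eta by inclusions of modulus E_r is bracketed by the classical rule-of-mixtures bounds. These textbook Voigt/Reuss bounds are a standard first-order estimate offered here for intuition, not the thesis's elasticity or yield design computation:

```latex
\underbrace{\left(\frac{\eta}{E_r} + \frac{1-\eta}{E_s}\right)^{-1}}_{\text{Reuss (series) bound}}
\;\le\; E^{\mathrm{hom}} \;\le\;
\underbrace{\eta\, E_r + (1-\eta)\, E_s}_{\text{Voigt (parallel) bound}}
```

The homogenization method sharpens such estimates by solving the periodic cell problem exactly, and the yield design analogue replaces moduli by macroscopic strength domains.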
8

Static Analysis of Control Command Systems: Synthesizing Non-Linear Invariants

Roux, Pierre 18 December 2013 (has links)
Critical systems such as flight commands may have disastrous results in case of failure.
Hence the interest of both the industrial and the academic communities in formal methods able to more or less automatically deliver a mathematical proof of correctness. Among them, this thesis particularly focuses on abstract interpretation, an efficient method to automatically generate proofs of the numerical properties which are essential in our context. It is well known by control theorists that linear controllers are stable if and only if they admit a quadratic invariant (geometrically speaking, an ellipsoid). They call these invariants quadratic Lyapunov functions, and a first part offers to automatically compute such invariants for controllers given as a pair of matrices. This is done using semi-definite programming optimization tools. Floating-point aspects are taken care of, whether they affect computations performed by the analyzed program or by the tools used for the analysis. However, the actual goal is to analyze programs implementing controllers (and not pairs of matrices), potentially including resets or saturations, hence not purely linear. Policy iteration is a recently developed static analysis technique well suited to that purpose; however, it does not combine easily with the classic abstract interpretation paradigm. The next part offers an interface between the two worlds. Finally, the last part is a more prospective work on the use of polynomial global optimization based on Bernstein polynomials to compute polynomial invariants on polynomial systems.
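The quadratic-invariant criterion mentioned above is easy to check numerically for a discrete-time system x+ = Ax: V(x) = x'Px is a Lyapunov function iff A'PA - P is negative definite. A minimal sketch with hand-rolled 2x2 matrices follows; the values of A and P are illustrative, and real analyzers (as the thesis stresses) must also account for floating-point error, which this sketch does not:

```python
def lyapunov_decrease_matrix(A, P):
    """Compute M = A^T P A - P for 2x2 matrices: x^T P x is a Lyapunov
    function for x+ = A x iff M is negative definite."""
    def mat_mul(X, Y):
        return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
                for i in range(2)]
    At = [[A[j][i] for j in range(2)] for i in range(2)]
    M = mat_mul(mat_mul(At, P), A)
    return [[M[i][j] - P[i][j] for j in range(2)] for i in range(2)]

def is_negative_definite(M):
    # Sylvester's criterion applied to -M: both leading minors of -M positive.
    return -M[0][0] > 0 and (M[0][0] * M[1][1] - M[0][1] * M[1][0]) > 0

A = [[0.5, 0.1], [0.0, 0.4]]  # toy stable controller dynamics
P = [[1.0, 0.0], [0.0, 1.0]]  # candidate ellipsoidal invariant x^T P x <= 1
print(is_negative_definite(lyapunov_decrease_matrix(A, P)))  # True
```

Searching for a P making this matrix inequality hold is precisely the step the thesis delegates to semi-definite programming.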
9

Analysis and control of parabolic partial differential equations with application to tokamaks using sum-of-squares polynomials

Gahlawat, Aditya 28 October 2015 (has links)
In this work we address the problems of stability analysis and controller synthesis for one-dimensional linear parabolic Partial Differential Equations (PDEs). To achieve the tasks of stability analysis and controller synthesis we develop methodologies akin to the Linear Matrix Inequality (LMI) framework for Ordinary Differential Equations (ODEs).
We develop a method for parabolic PDEs wherein we test the feasibility of certain LMIs using SDP to construct quadratic Lyapunov functions and controllers. The core of our approach is the construction of quadratic Lyapunov functions parametrized by positive definite operators on infinite dimensional Hilbert spaces. Unlike positive matrices, there is no single method of parametrizing the set of all positive operators on a Hilbert space. Of course, we can always parametrize a subset of positive operators, using, for example, positive scalars. However, we must ensure that the parametrization of positive operators is not conservative. Our contribution is constructing a parametrization which has only a small amount of conservatism, as indicated by our numerical results. We use Sum-of-Squares (SOS) polynomials to parametrize the set of positive, linear and bounded operators on Hilbert spaces. As the name indicates, an SOS polynomial is one which can be represented as a sum of squared polynomials. The most important property of an SOS polynomial is that it can be represented using a positive (semi-)definite matrix. This implies that even though the problem of polynomial (semi-)positivity is NP-hard, the problem of checking whether a polynomial is SOS (and hence (semi-)positive) can be solved using SDP. Therefore, we aim to construct quadratic Lyapunov functions parametrized by positive operators. These positive operators are in turn parametrized by SOS polynomials. This parametrization using SOS allows us to cast the feasibility problem for the existence of a quadratic Lyapunov function as an LMI feasibility problem, which can then be addressed using SDP. In the first part of the thesis we consider stability analysis and boundary controller synthesis for a large class of parabolic PDEs. The PDEs have spatially distributed coefficients.
Such PDEs are used to model processes of diffusion, convection and reaction of physical quantities in anisotropic media. We consider boundary controller synthesis for both the state feedback case and the output feedback case (using an observer design). In the second part of the thesis we design distributed controllers for the regulation of the poloidal magnetic flux in a tokamak (a thermonuclear fusion device). First, we design controllers to regulate the magnetic field line pitch (the safety factor). The regulation of the safety factor profile is important to suppress magnetohydrodynamic instabilities in a tokamak. Then, we design controllers to maximize the internally generated bootstrap current density. An increased proportion of bootstrap current would lead to a reduction in the external energy requirements for the operation of a tokamak.
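The SOS-to-SDP link described above comes down to exhibiting a positive semi-definite Gram matrix. A minimal sketch for a single univariate polynomial follows; the example polynomial and monomial basis are our own illustrative choices, not the thesis's operator parametrization:

```python
def gram_poly_value(Q, x):
    """Evaluate m(x)^T Q m(x) with monomial basis m = (1, x, x^2)."""
    m = [1.0, x, x * x]
    return sum(Q[i][j] * m[i] * m[j] for i in range(3) for j in range(3))

# p(x) = x^4 + 2x^2 + 1 is SOS: it equals (x^2 + 1)^2, so one PSD Gram
# matrix is the rank-one Q = v v^T with v = (1, 0, 1).
v = [1.0, 0.0, 1.0]
Q = [[v[i] * v[j] for j in range(3)] for i in range(3)]

p = lambda x: x ** 4 + 2 * x ** 2 + 1
print(all(abs(gram_poly_value(Q, x) - p(x)) < 1e-9
          for x in [-2.0, -1.0, 0.0, 0.5, 3.0]))  # True
```

An SDP solver runs this logic in reverse: given p, it searches for a PSD Q matching p's coefficients, which certifies p >= 0 everywhere.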
10

Spectral methods and computational trade-offs in high-dimensional statistical inference

Wang, Tengyao January 2016 (has links)
Spectral methods have become increasingly popular in designing fast algorithms for modern high-dimensional datasets. This thesis looks at several problems in which spectral methods play a central role. In some cases, we also show that such procedures have essentially the best performance among all randomised polynomial time algorithms, by exhibiting statistical and computational trade-offs in those problems. In the first chapter, we prove a useful variant of the well-known Davis–Kahan theorem, a spectral perturbation result that allows us to bound the distance between population eigenspaces and their sample versions. We then propose a semi-definite programming algorithm for the sparse principal component analysis (PCA) problem, and analyse its theoretical performance using the perturbation bounds we derived earlier. It turns out that the parameter regime in which our estimator is consistent is strictly smaller than the consistency regime of a minimax optimal (yet computationally intractable) estimator. We show, through reduction from a well-known hard problem in computational complexity theory, that the difference in consistency regimes is unavoidable for any randomised polynomial time estimator, hence revealing subtle statistical and computational trade-offs in this problem. Such computational trade-offs also exist in the problem of restricted isometry certification. Certifiers for restricted isometry properties can be used to construct design matrices for sparse linear regression problems. Similar to the sparse PCA problem, we show that there is also an intrinsic gap between the class of matrices certifiable using unrestricted algorithms and using polynomial time algorithms. Finally, we consider the problem of high-dimensional changepoint estimation, where we estimate the time of change in the mean of a high-dimensional time series with piecewise constant mean structure.
Motivated by real-world applications, we assume that changes only occur in a sparse subset of all coordinates. We apply a variant of the semi-definite programming algorithm in sparse PCA to aggregate the signals across different coordinates in a near-optimal way, so as to estimate the changepoint location as accurately as possible. Our statistical procedure shows superior performance compared to existing methods for this problem.
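The Davis–Kahan-style perturbation bound used in the first chapter can be illustrated on 2x2 symmetric matrices: a small perturbation of the matrix tilts the leading eigenvector by an angle controlled by the perturbation size over the eigengap. The matrices and the constant in the bound below are illustrative assumptions, not the thesis's exact statement:

```python
import math

def leading_eigvec(a, b, c):
    """Unit leading eigenvector of the symmetric matrix [[a, b], [b, c]]."""
    lam = (a + c) / 2 + math.hypot((a - c) / 2, b)
    v = (b, lam - a) if abs(b) > 1e-15 else ((1.0, 0.0) if a >= c else (0.0, 1.0))
    n = math.hypot(*v)
    return (v[0] / n, v[1] / n)

def sin_theta(u, v):
    """sin of the angle between two unit vectors (eigenspace distance)."""
    cos = abs(u[0] * v[0] + u[1] * v[1])
    return math.sqrt(max(0.0, 1 - cos * cos))

u = leading_eigvec(3.0, 0.0, 1.0)  # "population" eigenvector (1, 0), eigengap 2
v = leading_eigvec(3.0, 0.1, 1.0)  # after a small symmetric perturbation E
# A commonly used Davis-Kahan variant gives sin(theta) <= 2*||E|| / gap = 0.1
print(sin_theta(u, v) <= 0.1)  # True
```

The thesis applies such bounds with sample covariance matrices in place of the perturbed matrix, turning eigenvector perturbation into statistical error control.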
