  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
161

Graph-based variational optimization and applications in computer vision

Couprie, Camille 10 October 2011 (has links) (PDF)
Many computer vision applications such as image filtering, segmentation and stereovision can be formulated as optimization problems. Recently, discrete, convex, globally optimal methods have received a lot of attention. Many graph-based methods suffer from metrication artefacts: segmented contours are blocky in areas where contour information is lacking. In the first part of this work, we develop a discrete yet isotropic energy minimization formulation for the continuous maximum flow problem that prevents metrication errors. This new convex formulation leads us to a provably globally optimal solution. The employed interior-point method can optimize the problem faster than existing continuous methods. The energy formulation is then adapted and extended to multi-label problems, and shows improvements over existing methods. Fast parallel proximal optimization tools have been tested and adapted for the optimization of this problem. In the second part of this work, we introduce a framework that generalizes several state-of-the-art graph-based segmentation algorithms, namely graph cuts, random walker, shortest paths, and watershed. This generalization allowed us to exhibit a new case, for which we developed a globally optimal optimization method, named "power watershed". Our proposed power watershed algorithm computes a unique global solution to multi-label problems and is very fast. We further generalize and extend the framework to applications beyond image segmentation, for example image filtering optimizing an L0-norm energy, stereovision, and fast, smooth surface reconstruction from a noisy cloud of 3D points.
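As a hedged illustration of the kind of graph energy unified by this framework, the sketch below solves only the random-walker special case (quadratic pairwise energy) on a toy five-node graph; the weights, seed placement and graph are invented for the example, and the power-watershed algorithm itself is not reproduced.

# Minimal illustration (not the thesis's algorithm): the random-walker case of a
# unified graph energy sum_ij w_ij |x_i - x_j|^2, solved exactly by a linear
# system on a toy 5-node path graph. Weights and seeds are invented.
import numpy as np

n = 5
edges = [(i, i + 1) for i in range(n - 1)]
w = np.array([1.0, 1.0, 0.1, 1.0])   # weak edge ~ object boundary (assumption)

# Graph Laplacian L = D - W
L = np.zeros((n, n))
for (i, j), wij in zip(edges, w):
    L[i, i] += wij; L[j, j] += wij
    L[i, j] -= wij; L[j, i] -= wij

seeds = {0: 1.0, 4: 0.0}                        # node 0 = foreground seed, node 4 = background seed
U = [i for i in range(n) if i not in seeds]     # unseeded nodes
xB = np.array([seeds[i] for i in seeds])

# Minimizing sum w_ij (x_i - x_j)^2 with seeds fixed gives L_UU x_U = -L_UB x_B
L_UU = L[np.ix_(U, U)]
L_UB = L[np.ix_(U, list(seeds))]
xU = np.linalg.solve(L_UU, -L_UB @ xB)
print(dict(zip(U, np.round(xU, 3))))            # probability of reaching the foreground seed

The weak edge in the middle of the path plays the role of an object boundary: the computed potentials drop sharply across it, which is where a thresholded segmentation would place the contour.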
162

Supervised metric learning with generalization guarantees

Bellet, Aurélien 11 December 2012 (has links) (PDF)
In recent years, the crucial importance of metrics in machine learning algorithms has led to an increasing interest in optimizing distance and similarity functions using knowledge from training data to make them suitable for the problem at hand. This area of research is known as metric learning. Existing methods typically aim at optimizing the parameters of a given metric with respect to some local constraints over the training sample. The learned metrics are generally used in nearest-neighbor and clustering algorithms. When data consist of feature vectors, a large body of work has focused on learning a Mahalanobis distance, which is parameterized by a positive semi-definite matrix. Recent methods offer good scalability to large datasets. Less work has been devoted to metric learning from structured objects (such as strings or trees), because it often involves complex procedures. Most of the work has focused on optimizing a notion of edit distance, which measures (in terms of number of operations) the cost of turning one object into another. We identify two important limitations of current supervised metric learning approaches. First, they make it possible to improve the performance of local algorithms such as k-nearest neighbors, but metric learning for global algorithms (such as linear classifiers) has not really been studied so far. Second, and perhaps more importantly, the question of the generalization ability of metric learning methods has been largely ignored. In this thesis, we propose theoretical and algorithmic contributions that address these limitations. Our first contribution is the derivation of a new kernel function built from learned edit probabilities. Unlike other string kernels, it is guaranteed to be valid and parameter-free. Our second contribution is a novel framework for learning string and tree edit similarities inspired by the recent theory of (epsilon,gamma,tau)-good similarity functions and formulated as a convex optimization problem. Using uniform stability arguments, we establish theoretical guarantees for the learned similarity that give a bound on the generalization error of a linear classifier built from that similarity. In our third contribution, we extend the same ideas to metric learning from feature vectors by proposing a bilinear similarity learning method that efficiently optimizes the (epsilon,gamma,tau)-goodness. The similarity is learned based on global constraints that are more appropriate to linear classification. Generalization guarantees are derived for our approach, highlighting that our method minimizes a tighter bound on the generalization error of the classifier. Our last contribution is a framework for establishing generalization bounds for a large class of existing metric learning algorithms. It is based on a simple adaptation of the notion of algorithmic robustness and allows the derivation of bounds for various loss functions and regularizers.
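As a small illustration of the feature-vector setting mentioned above, the sketch below evaluates a Mahalanobis distance d_M(x, x') = sqrt((x - x')^T M (x - x')) for a positive semi-definite M; the matrix and the data points are invented for the example, whereas a metric-learning algorithm would instead fit M to similarity constraints derived from training labels.

# Illustrative sketch only: a Mahalanobis distance parameterized by a PSD matrix M.
# M and the data points are invented; a metric-learning method would fit M to
# pairwise (dis)similarity constraints over the training sample.
import numpy as np

def mahalanobis(x, x2, M):
    d = x - x2
    return float(np.sqrt(d @ M @ d))

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))
M = A @ A.T                          # PSD by construction (M = A A^T)

x, y = rng.normal(size=3), rng.normal(size=3)
print(mahalanobis(x, y, M))          # learned-metric distance
print(mahalanobis(x, y, np.eye(3)))  # Euclidean distance as the special case M = I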
163

Primal dual pursuit: a homotopy based algorithm for the Dantzig selector

Asif, Muhammad Salman 10 July 2008 (has links)
Consider the following system model y = Ax + e, where x is an n-dimensional sparse signal, y is the measurement vector in a much lower dimension m, A is the measurement matrix, and e is the measurement error. The Dantzig selector estimates x by solving the following optimization problem: minimize || x ||₁ subject to || A'(Ax - y) ||∞ ≤ ε (DS). This is a convex program that can be recast as a linear program and solved using any modern optimization method, e.g., interior-point methods. We propose a fast and efficient scheme for solving the Dantzig selector (DS), which we call "primal-dual pursuit". This algorithm can be thought of as a "primal-dual homotopy" approach to solving (DS). It computes the solution to (DS) for a range of successively relaxed problems, starting with a large artificial ε and moving towards the desired value. Our algorithm iteratively updates the primal and dual supports as ε is reduced to the desired value, which gives the final solution. The homotopy path that the solution of (DS) traces as ε varies is piecewise linear. At certain critical values of ε along this path, either new elements enter the support of the signal or existing elements leave it. We derive the optimality and feasibility conditions used to update the solution at these critical points. We also present a detailed analysis of primal-dual pursuit for sparse signals in the noiseless case. We show that if the signal is S-sparse, then we can find all its S elements in exactly S steps using about S² log n random measurements, with very high probability.
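For reference, here is a minimal sketch of the (DS) program itself, solved with a generic convex solver rather than the primal-dual homotopy proposed in the thesis; the dimensions, sparsity level and random data are assumptions made for the example.

# Sketch only: solve the Dantzig selector directly with a generic convex solver
# (cvxpy), not with the primal-dual homotopy proposed in the thesis.
# Dimensions, sparsity level and data are invented for illustration.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
n, m, S = 200, 80, 5
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, S, replace=False)] = rng.normal(size=S)
y = A @ x_true + 0.01 * rng.normal(size=m)

eps = 0.05                                    # constraint level (assumed)
x = cp.Variable(n)
prob = cp.Problem(cp.Minimize(cp.norm(x, 1)),
                  [cp.norm(A.T @ (A @ x - y), "inf") <= eps])
prob.solve()
print("recovered support:", np.flatnonzero(np.abs(x.value) > 1e-3))
print("true support:     ", np.flatnonzero(x_true))

Because (DS) is representable as a linear program, any LP or conic solver applies; the homotopy scheme described above instead exploits the piecewise-linear solution path in ε to recover the solution far more cheaply, one support change at a time.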
164

Ship routing and speed optimization with heterogeneous fuel consumption profiles

GABRIEL ANDRE HOMSI 14 June 2018 (has links)
The shipping industry is essential for international trade. However, in the wake of the 2008 financial crisis, this industry was severely hit. In such times, transportation companies can only remain profitable if their fleets are routed effectively. In this work, we study a class of ship routing problems related to the Pickup and Delivery Problem with Time Windows. To solve these problems, we introduce a heuristic and an exact method. The heuristic method is a hybrid metaheuristic with a set-partitioning-based large neighborhood, while the exact method is a branch-and-price algorithm. We conduct experiments on a benchmark suite based on real-life shipping segments. The results obtained show that our algorithms largely outperform the state-of-the-art methodologies. Next, we adapt the benchmark suite to model a ship routing problem where the speed on each sailing leg is a decision variable, and fuel consumption per time unit is a convex function of the ship's speed and payload. To solve this new ship routing problem with speed optimization, we extend our metaheuristic to find optimal speed decisions during every local-search move evaluation. Our computational experiments demonstrate that such an approach can be highly profitable, with only a moderate increase in computational effort.
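As a hedged aside on the speed-optimization subproblem described above: on a single sailing leg, if fuel per time unit grows cubically with speed (a common modelling assumption, not a detail stated in the abstract), total fuel for the leg is convex in speed and the cheapest feasible speed can be found by bounded scalar minimization, as in the toy sketch below. The thesis embeds such speed decisions inside every local-search move of its metaheuristic, which is not reproduced here.

# Toy sketch of per-leg speed optimization. The cubic fuel-rate model and all
# numbers are assumptions for illustration only.
from scipy.optimize import minimize_scalar

distance = 500.0           # nautical miles (assumed)
v_min, v_max = 10.0, 20.0  # knots, assumed operational range
latest_arrival = 40.0      # hours until the delivery time window closes (assumed)

def fuel_for_leg(v, k=0.002):
    # fuel rate ~ k * v^3 per hour  =>  leg fuel = (distance / v) * k * v^3
    return (distance / v) * k * v ** 3

# Sailing slower always saves fuel here, but the time window imposes a floor on speed.
v_floor = max(v_min, distance / latest_arrival)
res = minimize_scalar(fuel_for_leg, bounds=(v_floor, v_max), method="bounded")
print(f"optimal speed {res.x:.2f} kn, leg fuel {res.fun:.1f} (arbitrary units)")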
165

Endmember variability in hyperspectral image unmixing

Drumetz, Lucas 25 October 2016 (has links)
The fine spectral resolution of hyperspectral remote sensing images allows an accurate analysis of the imaged scene, but due to their limited spatial resolution, a pixel acquired by the sensor is often a mixture of the contributions of several materials. Spectral unmixing aims at estimating the spectra of the pure materials (called endmembers) in the scene, and their abundances in each pixel. The endmembers are usually assumed to be perfectly represented by a single spectrum, which is wrong in practice since each material exhibits a significant intra-class variability. This thesis aims at designing unmixing algorithms to better handle this phenomenon. First, we perform the unmixing locally in well-chosen regions of the image where variability effects are less important, and automatically discard wrongly estimated local endmembers using collaborative sparsity. In another approach, we refine the abundance estimation of the materials by taking into account the group structure of an image-derived endmember dictionary. Second, we introduce an extended linear mixing model, based on physical considerations, which models spectral variability in the form of scaling factors, and we develop optimization algorithms to estimate its parameters. This model provides easily interpretable results and outperforms other state-of-the-art approaches. We finally investigate two applications of this model to confirm its relevance.
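A hedged sketch of the scaled mixing the abstract alludes to: each pixel is a nonnegative, sum-to-one combination of endmembers, with each endmember additionally rescaled by a per-pixel factor. The simulation below uses that generic form and then fits abundances with a fully constrained least squares that ignores the scaling, illustrating the bias that spectral variability introduces; all spectra, abundances and scalings are invented, and the thesis's estimation algorithms are not reproduced.

# Illustration only: simulate a pixel under a scaled linear mixing model
# x = sum_k psi_k * a_k * s_k + noise, then estimate abundances with a
# fully constrained least-squares fit (sum-to-one, nonnegative) via cvxpy.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
bands, K = 50, 3
S = np.abs(rng.normal(1.0, 0.3, size=(bands, K)))   # endmember spectra (columns), invented
a_true = np.array([0.6, 0.3, 0.1])                   # abundances, sum to one
psi = np.array([1.2, 0.8, 1.0])                      # per-endmember scaling (variability)
x = (S * psi) @ a_true + 0.01 * rng.normal(size=bands)

a = cp.Variable(K)
prob = cp.Problem(cp.Minimize(cp.sum_squares(S @ a - x)),
                  [a >= 0, cp.sum(a) == 1])
prob.solve()
print("true abundances:", a_true)
print("FCLS estimate (ignoring scaling):", np.round(a.value, 3))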
166

Robust and stochastic MPC of uncertain-parameter systems

Fleming, James January 2016 (has links)
Constraint handling is difficult in model predictive control (MPC) of linear differential inclusions (LDIs) and linear parameter-varying (LPV) systems. The designer is faced with a choice between conservative bounds that may give poor performance and accurate ones that require heavy online computation. This thesis presents a framework to achieve a more flexible trade-off between these two extremes by using a state tube, a sequence of parametrised polyhedra that is guaranteed to contain the future state. To define controllers using a tube, one must ensure that the polyhedra are a subset of the region defined by constraints. Necessary and sufficient conditions for these subset relations follow from duality theory, and it is possible to apply these conditions to constrain predicted system states and inputs with only a little conservatism. This leads to a general method of MPC design for uncertain-parameter systems. The resulting controllers have strong theoretical properties, can be implemented using standard algorithms and outperform existing techniques. Crucially, the online optimisation used in the controller is a convex problem with a number of constraints and variables that increases only linearly with the length of the prediction horizon. This holds true for both LDI and LPV systems. For the latter it is possible to optimise over a class of gain-scheduled control policies to improve performance, with a similar linear increase in problem size. The framework extends to stochastic LDIs with chance constraints, for which there are efficient suboptimal methods using online sampling. Sample approximations of chance constraint-admissible sets are generally not positively invariant, which motivates the novel concept of 'sample-admissible' sets with this property to ensure recursive feasibility when using sampling methods. The thesis concludes by introducing a simple, convex alternative to chance-constrained MPC that applies a robust bound to the time average of constraint violations in closed loop.
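As a hedged aside on the subset conditions mentioned above, the sketch below checks whether one polyhedron {x : Ax <= b} is contained in another {x : Cx <= d} by maximizing each row of C over the first set with a linear program, the support-function counterpart of the duality-based test; the two example sets are invented.

# Sketch (not the thesis's tube-MPC machinery): test {x : Ax <= b} subset of {x : Cx <= d}
# by solving one LP per row of C: max c_i^T x s.t. Ax <= b, then comparing to d_i.
import numpy as np
from scipy.optimize import linprog

def contained(A, b, C, d):
    for ci, di in zip(C, d):
        # linprog minimizes, so minimize -c_i^T x to maximize c_i^T x over Ax <= b
        res = linprog(-ci, A_ub=A, b_ub=b, bounds=[(None, None)] * A.shape[1])
        if not res.success or -res.fun > di + 1e-9:
            return False
    return True

# Box [-1, 1]^2 ...
A = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]], dtype=float)
b = np.ones(4)
# ... inside the larger diamond |x1| + |x2| <= 3
C = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]], dtype=float)
d = 3 * np.ones(4)
print(contained(A, b, C, d))   # True: the box lies inside the diamond

A tube-based controller would typically impose the same containment through the dual multipliers themselves rather than by solving LPs online, so the requirement becomes a set of linear constraints in the optimisation.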
167

Finding inductive invariants using satisfiability modulo theories and convex optimization

Karpenkov, George Egor 29 March 2017 (has links)
Static analysis concerns itself with deriving program properties which hold universally for all program executions. Such properties are used for proving program correctness (e.g., that no overflow or other runtime error ever occurs, regardless of the particular execution) and are almost invariably established using inductive invariants: properties which hold for the initial state and imply themselves under the program transition, and thus hold universally by induction. A traditional approach for finding numerical invariants is abstract interpretation, which can be seen as interpreting the program in an abstract domain of choice, tracking only the properties of interest. Yet even in the intervals abstract domain (upper and lower bounds for each variable) such a computation does not necessarily converge, and the analysis has to resort to widening operators to enforce convergence at the cost of precision. An alternative game-theoretic approach called policy iteration guarantees to find the least inductive invariant in the chosen abstract domain after a finite number of iterations. Yet the original description of the algorithm has a number of drawbacks: it requires converting the entire program to an equation system, does not integrate with other approaches, and is unable to benefit from other analyses. Our new algorithm for local policy iteration (LPI) instead formulates policy iteration as traditional Kleene iteration, with a widening operator that is guaranteed to return the least inductive invariant in the domain after finitely many applications. Local policy iteration runs in template linear constraint domains, which require fixing in advance the "shape" of the derived invariant (e.g., x + 2y for deriving x + 2y <= 10). Our second theoretical contribution is the development and comparison of a number of template synthesis strategies used in conjunction with LPI. Additionally, we present an approach for generating abstract reachability trees using abstract interpretation, enabling the construction of counterexample traces, which in turn lets us generate new templates using Craig interpolants. In our third contribution we turn to interprocedural and potentially recursive programs. We develop a summary-generation algorithm parameterizable with any abstract interpretation, and we study its parameterization with LPI. The resulting approach is able to generate least inductive invariants in the domain for a fixed number of summaries for recursive programs. Our final theoretical contribution is a novel "formula slicing" method for finding potentially disjunctive inductive invariants from program fragments obtained by symbolic execution. We implement all of these techniques in the open-source, state-of-the-art CPAchecker program analysis framework, enabling communication and collaboration between different analyses. The techniques mentioned above rely on satisfiability modulo theories solvers, which are capable of giving solutions to first-order formulas over certain theories or showing that none exists. In order to simplify communication with such tools we present the JavaSMT library, which provides a generic interface for such communication. The library has shown itself to be a valuable tool and is already used by many researchers.
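A small illustration of the widening issue mentioned above: Kleene iteration over the intervals domain for the toy loop x := 0; while x < 100: x := x + 1. Plain iteration would need a hundred steps to stabilize, and the standard interval widening forces convergence immediately but loses the upper bound. The program and the widening rule are textbook examples, not taken from the thesis, and the sketch does not implement local policy iteration.

# Toy sketch of interval abstract interpretation with widening (not LPI itself):
# analyze  x := 0; while x < 100: x := x + 1  in the intervals domain.
INF = float("inf")

def join(a, b):                 # least upper bound of two intervals
    return (min(a[0], b[0]), max(a[1], b[1]))

def widen(old, new):            # standard interval widening: unstable bounds go to +/-inf
    lo = old[0] if new[0] >= old[0] else -INF
    hi = old[1] if new[1] <= old[1] else INF
    return (lo, hi)

def loop_body(inv):             # abstract transformer: filter by x < 100, then x := x + 1
    lo, hi = inv
    lo, hi = lo, min(hi, 99)    # guard x < 100 (integer semantics assumed)
    if lo > hi:
        return None             # guard infeasible
    return (lo + 1, hi + 1)

inv = (0, 0)                    # invariant at loop head after initialization
for _ in range(10):             # Kleene iteration with widening
    post = loop_body(inv)
    new = inv if post is None else join(inv, post)
    if new == inv:
        break
    inv = widen(inv, new)
print("loop-head invariant:", inv)   # (0, inf): sound but imprecise due to widening

Policy iteration (and LPI in particular) avoids exactly this loss: for this loop it would return the least inductive invariant 0 <= x <= 100 in the interval/template domain instead of the widened result.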
168

A quadratic optimization approach for reservoir geomechanical mesh generation

JEFERSON ROMULO PEREIRA COELHO 31 July 2018 (has links)
Geomechanical mesh generation of complex reservoirs remains a tedious task prone to errors. Recently proposed solutions based on analytical reconstruction of the sub-surfaces are not capable of representing all the geometric details of natural objects. This work proposes a discrete model where the mesh vertices are positioned by a convex quadratic optimization process. The optimization problem seeks to guarantee smooth meshes that conform to prescribed constraints. The resulting mesh therefore respects, as far as possible, the finite volume mesh of the reservoir pay zone and the existing horizons. Finally, the proposed methodology for geomechanical meshes can easily be extended to model sub-surfaces present in structural interpretation and geological restoration.
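A hedged toy example of positioning mesh vertices through a convex quadratic program: the free vertices of a small one-dimensional grid are placed so as to minimize a quadratic smoothness energy while pinned vertices (standing in for horizon constraints) are held fixed. This is a generic Laplacian-smoothing QP whose optimality conditions reduce to a sparse linear system; it is not the thesis's reservoir-specific formulation.

# Illustration only: place mesh vertices by minimizing a convex quadratic
# smoothness energy sum_i (v_{i+1} - v_i)^2 with some vertices pinned.
import numpy as np

n = 11
pinned = {0: 0.0, 5: 2.0, 10: 0.5}        # vertex index -> prescribed height (assumed)
free = [i for i in range(n) if i not in pinned]

# Setting the energy gradient to zero gives, for each free vertex i:
#   2 v_i - v_{i-1} - v_{i+1} = 0
K = np.zeros((len(free), len(free)))
rhs = np.zeros(len(free))
idx = {v: k for k, v in enumerate(free)}
for i in free:
    K[idx[i], idx[i]] = 2.0
    for j in (i - 1, i + 1):
        if j in idx:
            K[idx[i], idx[j]] -= 1.0
        else:
            rhs[idx[i]] += pinned[j]

v = np.zeros(n)
for i, h in pinned.items():
    v[i] = h
v[free] = np.linalg.solve(K, rhs)
print(np.round(v, 3))   # smooth heights interpolating the pinned vertices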
169

Impact of Engine Dynamics on Optimal Energy Management Strategies for Hybrid Electric Vehicles

Hägglund, Andreas, Källgren, Moa January 2018 (has links)
In recent years, rules and regulations regarding the fuel consumption of vehicles and the emissions they produce have become stricter. This has led the automotive industry to develop more advanced solutions for propelling vehicles in order to meet the legal requirements. The hybrid electric vehicle is one such solution that is becoming more popular in the automotive industry. It consists of an electrical driveline combined with a conventional powertrain, propelled by either a diesel or petrol engine. Having two power sources makes it possible to choose when and how each is used to propel the vehicle. The strategy that decides how this is done is referred to as an energy management strategy. Today most energy management strategies only try to reduce fuel consumption, using models that describe the steady-state behaviour of the engine. In other words, no reduction of emissions is achieved and all transient behaviour is considered negligible. In this thesis, an energy management strategy incorporating engine dynamics to reduce fuel consumption and nitrogen oxide emissions has been designed. First, models that describe how fuel consumption and nitrogen oxide emissions behave during transient engine operation are developed. Then, an energy management strategy is developed, consisting of a model predictive controller that combines the equivalent consumption minimization strategy and convex optimization. Results indicate that by considering engine dynamics in the energy management strategy, both fuel consumption and nitrogen oxide emissions can be reduced. Furthermore, it is also shown that the major reduction in fuel consumption and nitrogen oxide emissions is achieved for short prediction horizons.
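A hedged sketch of the equivalent-consumption idea mentioned above: at each instant the power split between engine and battery is chosen to minimize fuel use plus an equivalence-factor-weighted battery contribution. The fuel-rate map, equivalence factor and power limits below are invented, and neither the transient engine-dynamics terms nor the model predictive layer of the thesis is reproduced.

# Toy sketch of an equivalent consumption minimization strategy (ECMS) split.
# Engine fuel-rate model, equivalence factor and limits are assumptions.
import numpy as np

def fuel_rate(p_eng):                       # g/s, invented quadratic engine map
    return 0.05 + 0.08 * p_eng + 0.002 * p_eng ** 2

def ecms_split(p_demand, lam=0.12, p_batt_max=20.0):
    """Choose engine/battery powers (kW) minimizing fuel + lam * battery power."""
    p_batt_grid = np.linspace(-p_batt_max, p_batt_max, 401)
    p_eng_grid = np.clip(p_demand - p_batt_grid, 0.0, None)
    cost = fuel_rate(p_eng_grid) + lam * p_batt_grid
    k = int(np.argmin(cost))
    return p_eng_grid[k], p_batt_grid[k]

for p_dem in (10.0, 30.0, 60.0):
    p_eng, p_batt = ecms_split(p_dem)
    print(f"demand {p_dem:5.1f} kW -> engine {p_eng:5.1f} kW, battery {p_batt:6.1f} kW")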
170

Analysis and control of parabolic partial differential equations with application to tokamaks using sum-of-squares polynomials

Gahlawat, Aditya 28 October 2015 (has links)
In this work we address the problems of stability analysis and controller synthesis for one-dimensional linear parabolic Partial Differential Equations (PDEs). To achieve these tasks we develop methodologies akin to the Linear Matrix Inequality (LMI) framework for Ordinary Differential Equations (ODEs). We develop a method for parabolic PDEs wherein we test the feasibility of certain LMIs using semidefinite programming (SDP) to construct quadratic Lyapunov functions and controllers. The core of our approach is the construction of quadratic Lyapunov functions parametrized by positive definite operators on infinite-dimensional Hilbert spaces. Unlike positive matrices, there is no single method of parametrizing the set of all positive operators on a Hilbert space. Of course, we can always parametrize a subset of positive operators, using, for example, positive scalars. However, we must ensure that the parametrization of positive operators is not overly conservative. Our contribution is the construction of a parametrization which introduces only a small amount of conservatism, as indicated by our numerical results. We use Sum-of-Squares (SOS) polynomials to parametrize the set of positive, linear and bounded operators on Hilbert spaces. As the name indicates, an SOS polynomial is one which can be represented as a sum of squared polynomials. The most important property of an SOS polynomial is that it can be represented using a positive (semi-)definite matrix. This implies that even though the problem of polynomial (semi-)positivity is NP-hard, the problem of checking whether a polynomial is SOS (and hence (semi-)positive) can be solved using SDP. Therefore, we aim to construct quadratic Lyapunov functions parametrized by positive operators, which are in turn parametrized by SOS polynomials. This parametrization using SOS allows us to cast the feasibility problem for the existence of a quadratic Lyapunov function as an LMI feasibility problem, which can then be addressed using SDP. In the first part of the thesis we consider stability analysis and boundary controller synthesis for a large class of parabolic PDEs with spatially distributed coefficients. Such PDEs are used to model processes of diffusion, convection and reaction of physical quantities in anisotropic media. We consider boundary controller synthesis for both the state feedback case and the output feedback case (using an observer design). In the second part of the thesis we design distributed controllers for the regulation of the poloidal magnetic flux in a tokamak (a thermonuclear fusion device). First, we design controllers to regulate the magnetic field line pitch (the safety factor). The regulation of the safety factor profile is important to suppress magnetohydrodynamic instabilities in a tokamak. Then, we design controllers to maximize the internally generated bootstrap current density. An increased proportion of bootstrap current would lead to a reduction in the external energy requirements for the operation of a tokamak.
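A small illustration of the SOS-to-SDP reduction described above: certifying that p(x) = x^4 + 2x^2 + 1 is a sum of squares amounts to finding a positive semidefinite Gram matrix Q with p(x) = z(x)^T Q z(x) for the monomial vector z(x) = (1, x, x^2), a semidefinite feasibility problem. The polynomial and the cvxpy formulation are an example only, not the thesis's operator parametrization.

# Toy SOS check via SDP (cvxpy): find a PSD Gram matrix Q with
#   p(x) = x^4 + 2 x^2 + 1 = z(x)^T Q z(x),  z(x) = (1, x, x^2).
# Expanding z^T Q z and matching coefficients of 1, x, x^2, x^3, x^4 gives
# linear constraints on Q; feasibility with Q PSD certifies that p is SOS.
import cvxpy as cp

Q = cp.Variable((3, 3), PSD=True)
constraints = [
    Q[0, 0] == 1,                      # constant term
    2 * Q[0, 1] == 0,                  # coefficient of x
    2 * Q[0, 2] + Q[1, 1] == 2,        # coefficient of x^2
    2 * Q[1, 2] == 0,                  # coefficient of x^3
    Q[2, 2] == 1,                      # coefficient of x^4
]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print("SOS certificate found:", prob.status == cp.OPTIMAL)
print(Q.value.round(3))                # one valid certificate is p(x) = (x^2 + 1)^2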
