1

On the influence of indenter tip geometry on the identification of material parameters in indentation testing

Guo, Weichao 08 December 2010 (has links)
ABSTRACT The rapid development of structural materials and their successful application across many sectors of industry have created a growing demand for assessing mechanical properties in small material volumes. When specimen dimensions fall below the micron scale, traditional tensile and compression tests become difficult to perform. Indentation testing has therefore been widely adopted, since it has emerged as a cost-effective, convenient and non-destructive method for characterizing mechanical properties at the micro- and nanoscales. Despite these advances, the theory and practice of indentation testing are still not fully mature, and many factors affect the accuracy and reliability of the identified material parameters. For instance, when material properties are determined by inverse analysis relying on numerical modelling, the procedures often suffer from strong material parameter correlation, which leads to non-uniqueness of the solution or large errors in parameter identification. To overcome this problem, an approach is proposed that reduces the material parameter correlation by designing indenter tip shapes able to sense the piling-up or sinking-in that occurs in non-linear materials. In the present thesis, the effect of indenter tip geometry on parameter correlation in material parameter identification is investigated. The results may help in designing indenter tip shapes that produce minimal material parameter correlation, and thereby improve the reliability of identification procedures based on indentation testing combined with inverse methods. First, a method to assess the effect of indenter tip geometry on the identification of material parameters is proposed, combining a gradient-based numerical optimization method with sensitivity analysis.
The sensitivities of the objective function computed by the finite difference method and by the direct differentiation method are compared. The direct differentiation method is then selected because it is more reliable, accurate and versatile for computing these sensitivities. Second, the residual imprint mappings produced by different indenters are investigated. In common indentation experiments the imprint data are not available, because the indenter tip itself shields that region from measurement devices during loading and unloading. Yet the imprint contains information about sinking-in and piling-up, which may be valuable for reducing the material parameter correlation. The effect of the imprint data on the identification of material parameters is therefore investigated. Finally, strategies for improving the identifiability of the material parameters are proposed: indenters with special tip shapes and different loading histories are investigated, and the sensitivities of the material parameters with respect to the indenter tip geometries are evaluated for materials with elasto-plastic and elasto-viscoplastic constitutive laws. The results of this thesis show, first, that the correlations of material parameters are related to the indenter tip geometry, and that the ability of different indenters to determine material parameters varies significantly. Second, residual imprint mapping data prove important for the identification of material parameters, because they contain additional information about plastic material behaviour. Third, different loading histories help to evaluate the parameters of time-dependent materials; in particular, a holding cycle is necessary to determine their properties. These results may enable a more reliable material parameter identification.
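The finite-difference versus direct-differentiation comparison can be illustrated on a toy problem. The sketch below is not from the thesis; the objective function, data and parameter values are hypothetical, chosen only to show that the analytic (direct) sensitivity matches the central finite difference up to truncation and rounding error:

```python
def objective(p, data):
    # toy least-squares objective: squared residuals between a hypothetical
    # linear "simulated response" p[0]*x + p[1] and measurements y
    return sum((p[0] * x + p[1] - y) ** 2 for x, y in data)

def sensitivity_fd(f, p, data, i, h=1e-6):
    # finite-difference sensitivity dF/dp_i (central difference)
    pp, pm = list(p), list(p)
    pp[i] += h
    pm[i] -= h
    return (f(pp, data) - f(pm, data)) / (2 * h)

def sensitivity_direct(p, data, i):
    # direct differentiation: the analytic dF/dp_i, exact to machine precision
    return sum(2 * (p[0] * x + p[1] - y) * (x if i == 0 else 1.0) for x, y in data)

data = [(0.0, 1.1), (1.0, 2.9), (2.0, 5.2)]  # made-up "experimental" points
p = [1.5, 0.8]                               # made-up parameter estimate
for i in range(2):
    print(i, sensitivity_fd(objective, p, data, i), sensitivity_direct(p, data, i))
```

The two columns agree to roughly the finite-difference step size, while the direct value is exact; this is the kind of check that motivates preferring direct differentiation.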
2

An Atomistic Study of the Mechanical Behavior of Carbon Nanotubes and Nanocomposite Interfaces

Awasthi, Amnaya P. 2009 December 1900 (has links)
The research presented in this dissertation pertains to the evaluation of the stiffness of carbon nanotubes (CNTs) in a multiscale framework and to the modeling of interfacial mechanical behavior in CNT-polymer nanocomposites. The goal is to study the mechanical behavior of CNTs and CNT-polymer interfaces at the atomic level, and to utilize this information to develop predictive capabilities for material behavior at the macroscale. The stiffness of CNTs is analyzed through quantum mechanical (QM) calculations, while the CNT-polymer interface is examined using molecular dynamics (MD) simulations. CNT-polymer-matrix composites exhibit promising properties as structural materials, and constitutive models are sought to predict their macroscale behavior. The reliability of determining the homogenized response of such materials depends upon the ability to accurately capture the interfacial behavior between the nanotubes and the polymer matrix. In the proposed work, atomistic methods are used to investigate the behavior of the interface with appropriately chosen atomistic representative volume elements (RVEs). Atomistic simulations are conducted on the RVEs to study mechanical separation, with and without covalent functionalization, between the polymeric matrix and two filler materials, namely graphite and a (12,0) single-wall zigzag CNT. The information obtained from the atomistic separation studies is applicable to higher-level length scale models as cohesive zone properties. The results of the present research have been correlated with available experimental data from characterization efforts.
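Cohesive zone properties of the kind extracted from such atomistic separation studies are commonly packaged as a traction-separation law for the higher-level model. The bilinear law below is a generic sketch, not the form or the parameters used in the dissertation; all values are hypothetical:

```python
def bilinear_traction(delta, delta0=0.5, delta_f=2.0, t_max=100.0):
    # bilinear cohesive traction-separation law (hypothetical units:
    # traction in MPa, separation in nm); peak traction t_max at delta0,
    # full debonding at delta_f
    if delta <= delta0:
        return t_max * delta / delta0                      # linear loading
    if delta <= delta_f:
        return t_max * (delta_f - delta) / (delta_f - delta0)  # softening
    return 0.0                                             # fully debonded

# cohesive (fracture) energy is the area under the curve: 0.5 * t_max * delta_f
for d in [0.0, 0.5, 1.25, 2.0, 3.0]:
    print(d, bilinear_traction(d))
```

In a multiscale workflow, delta0, delta_f and t_max would be fitted to the atomistic separation curves rather than chosen by hand as here.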
3

Diferenciação automática de matrizes Hessianas / Automatic differentiation of hessian matrices

Gower, Robert Mansel 18 August 2018 (has links)
Advisor: Margarida Pinheiro Mello / Master's dissertation (mestrado) - Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica / Previous issue date: 2011 / Abstract: In the context of nonlinear programming, many algorithms boil down to the application of Newton's method to the system constituted by the first-order Lagrangian conditions. The calculation of Hessian matrices is necessary in this class of solvers. Our focus is on the exact calculation, within machine precision, of Hessian matrices through automatic differentiation. To this end, we detail the calculation of the Hessian matrix from two points of view. The first is an intuitive graph model that focuses on the symmetries that occur throughout the Hessian calculation. This provides insight into how one should calculate the Hessian matrix, and we use this enlightened perspective to deduce a new reverse Hessian algorithm called edge pushing. The second viewpoint is a purely algebraic representation of the Hessian calculation via a closed formula. This formula can be used to demonstrate existing algorithms and design new ones.
In order to illustrate, we deduce two new algorithms, edge pushing and a new forward algorithm, and a series of other known Hessian methods [1], [20, p.157] and [9]. We present theoretical and empirical studies of the edge pushing algorithm, establishing memory and temporal bounds, and comparing the performance of its computer implementation against that of two algorithms available as drivers of the software ADOL-C [19] on sixteen functions from the CUTE collection [5]. Test results indicate that the new algorithm is very promising. As a by-product of the edge pushing algorithm, we obtain an efficient algorithm, edge pushing sp, for automatically obtaining the sparsity pattern of Hessian matrices, a necessary step in a class of methods used for computing Hessian matrices via graph coloring [14, 19, 30]. Complexity bounds are developed and numerical tests are carried out comparing the new sparsity detection algorithm against a recently developed method [30], and the results favor the new edge pushing sp algorithm. In the final chapter, motivated by the increasing commercial availability of multiprocessors, we investigate parallel versions of the edge pushing algorithm. We address the concurrent calculation of Hessian matrices of partially separable functions. This includes a general approach to be used in conjunction with any Hessian software, and a strategy specific to reverse Hessian methods. Tests are carried out on a shared-memory computer using the OpenMP paradigm. / Master's degree in Applied Mathematics (Numerical Analysis)
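For intuition about computing Hessians exactly via automatic differentiation, here is a minimal forward-over-forward sketch. It is not the edge pushing algorithm (which is a reverse-mode method exploiting symmetry); it only shows that propagating truncated second-order Taylor coefficients through a function yields Hessian entries exact to machine precision, with no finite-difference truncation error:

```python
class D2:
    # truncated second-order number: value, two first-order parts (directions
    # i and j), and the mixed second-order coefficient d^2/didj
    def __init__(self, v, di=0.0, dj=0.0, dij=0.0):
        self.v, self.di, self.dj, self.dij = v, di, dj, dij

    def __add__(self, o):
        o = o if isinstance(o, D2) else D2(o)
        return D2(self.v + o.v, self.di + o.di, self.dj + o.dj, self.dij + o.dij)
    __radd__ = __add__

    def __mul__(self, o):
        # product rule for value, gradients, and the cross second derivative
        o = o if isinstance(o, D2) else D2(o)
        return D2(self.v * o.v,
                  self.di * o.v + self.v * o.di,
                  self.dj * o.v + self.v * o.dj,
                  self.dij * o.v + self.di * o.dj + self.dj * o.di + self.v * o.dij)
    __rmul__ = __mul__

def hessian(f, x):
    # evaluate f once per (i, j) pair with unit seeds in directions i and j;
    # the dij coefficient of the result is exactly H[i][j]
    n = len(x)
    return [[f([D2(x[k], float(k == i), float(k == j)) for k in range(n)]).dij
             for j in range(n)] for i in range(n)]

# f(x, y) = x^2 y + y  has Hessian [[2y, 2x], [2x, 0]]
H = hessian(lambda a: a[0] * a[0] * a[1] + a[1], [3.0, 2.0])
print(H)  # [[4.0, 6.0], [6.0, 0.0]]
```

This naive sketch costs one function evaluation per Hessian entry; the point of algorithms like edge pushing is precisely to avoid that overhead by a single symmetric reverse sweep.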
4

Towards a Characterization of the Symmetries of the Nisan-Wigderson Polynomial Family

Gupta, Nikhil January 2017 (has links) (PDF)
Understanding the structure and complexity of a polynomial family is a fundamental problem of arithmetic circuit complexity. There are various approaches to this, like studying lower bounds, which deal with finding the smallest circuit required to compute a polynomial, or studying the orbit and stabilizer of a polynomial with respect to invertible transformations. We have a rich understanding of some of the well-known polynomial families like the determinant, the permanent, IMM etc. In this thesis we study some structural properties of the polynomial family called the Nisan-Wigderson polynomial family. This family is inspired by a well-known combinatorial design called the Nisan-Wigderson design and has recently been used to prove strong lower bounds on some restricted classes of arithmetic circuits ([KSS14], [KLSS14], [KST16]). But unlike the determinant, permanent, IMM etc., our understanding of the Nisan-Wigderson polynomial family is inadequate. For example, we do not know whether this polynomial family is in VP, is VNP-complete, or is VNP-intermediate assuming VP ≠ VNP, nor do we have an understanding of the complexity of its equivalence test. We hope that knowledge of some inherent properties of the Nisan-Wigderson polynomial, like its group of symmetries and Lie algebra, will provide insights in this regard. A matrix A ∈ GLn(F) is called a symmetry of an n-variate polynomial f if f(Ax) = f(x). The set of symmetries of f forms a subgroup of GLn(F), known as the group of symmetries of f, denoted Gf. A vector space is attached to Gf to obtain a complete understanding of the symmetries of f. This vector space is known as the Lie algebra of the group of symmetries of f (or the Lie algebra of f), represented as gf. The Lie algebra of f contributes some elements of Gf, known as the continuous symmetries of f.
The Lie algebra has also been instrumental in designing efficient randomized equivalence tests for some polynomial families like the determinant, permanent, IMM etc. ([Kay12], [KNST17]). In this work we completely characterize the Lie algebra of the Nisan-Wigderson polynomial family. We show that gNW contains diagonal matrices of a specific type. The knowledge of gNW not only helps us to completely figure out the continuous symmetries of the Nisan-Wigderson polynomial family, but also gives some crucial insights into its other, discrete symmetries. Thereafter, using the Hessian matrix of the Nisan-Wigderson polynomial and the concept of evaluation dimension, we are able to almost completely identify the structure of GNW. In particular, we prove that any A ∈ GNW is a product of diagonal and permutation matrices of a certain kind that we call block-permuted permutation matrices. Finally, we give explicit examples of nontrivial block-permuted permutation matrices using the automorphisms of a finite field, which establishes the richness of the discrete symmetries of the Nisan-Wigderson polynomial family.
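The defining condition f(Ax) = f(x) can be checked numerically on random points. The sketch below uses a simple permutation-invariant polynomial as a stand-in (not the Nisan-Wigderson polynomial, whose definition is more involved) to test whether a given matrix is a symmetry:

```python
import random

def f(x):
    # stand-in polynomial, invariant under coordinate permutations
    return x[0] ** 3 + x[1] ** 3 + x[2] ** 3

def apply(A, x):
    # matrix-vector product A x
    return [sum(A[r][c] * x[c] for c in range(len(x))) for r in range(len(A))]

def is_symmetry(A, f, trials=20):
    # randomized check of f(A x) == f(x); a polynomial identity that holds on
    # enough random points holds identically (Schwartz-Zippel style reasoning)
    for _ in range(trials):
        x = [random.uniform(-2.0, 2.0) for _ in range(3)]
        if abs(f(apply(A, x)) - f(x)) > 1e-9:
            return False
    return True

random.seed(0)  # reproducible check
P = [[0, 1, 0], [0, 0, 1], [1, 0, 0]]   # cyclic permutation: a symmetry of f
D = [[2, 0, 0], [0, 1, 0], [0, 0, 1]]   # scaling one variable: not a symmetry
print(is_symmetry(P, f), is_symmetry(D, f))  # True False
```

For the actual Nisan-Wigderson family the thesis shows the symmetries are exactly products of certain diagonal and block-permuted permutation matrices; a randomized test like this can only refute candidate symmetries, not characterize the group.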
5

Static And Transient Voltage Stability Assessment Of Hybrid AC/DC Power Systems

Lin, Minglan 10 December 2010 (has links)
Voltage stability is a challenging problem in the design and operation of terrestrial and shipboard power systems. DC links can be integrated into AC systems to increase transmission capacity or to enhance distribution performance. However, DC links introduce voltage stability issues related to the reactive power shortage caused by power converters, and multi-infeed DC systems make this phenomenon more complicated. In addition, shipboard power systems have unique characteristics, and some concepts and methodologies developed for terrestrial power systems need to be investigated and modified before they are extended to shipboard power systems. One goal of this work was to develop a systematic method for voltage stability assessment of hybrid AC/DC systems, independent of system configuration. Static and dynamic approaches have been used as complementary methods to address different aspects of voltage stability. The other goal was to develop or apply voltage stability indicators. Two classical indicators (the minimum eigenvalue and the loading margin) and an improvement (the 2nd-order performance indicator) have been jointly used for the prediction of voltage stability, providing information on the system state and on the proximity to and mechanism of instability. The eliminated variable method has been introduced to calculate the partial derivatives of AC/DC systems for modal analysis. These methodologies and the associated indicators have been applied to an integrated shipboard power system including a DC zonal arrangement. The voltage stability assessment procedure has been performed for three test systems: the WSCC 3-machine 9-bus system, the benchmark integrated shipboard power system, and the modified IEEE RTS-96.
The static simulation results identify the critical locations and the factors contributing to voltage instability, and screen the critical contingencies for dynamic simulation. The results obtained from the various static methods have been compared. The dynamic simulation results demonstrate the dynamic response of system components and benchmark the static simulation results.
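The minimum-eigenvalue indicator can be sketched on a toy example. The matrices and loading levels below are hypothetical, not from the thesis; they only illustrate the idea that as loading grows, the smallest eigenvalue of the reduced V-Q Jacobian shrinks toward zero, signalling proximity to voltage instability:

```python
import math

def min_eigenvalue_2x2(J):
    # smallest eigenvalue of a symmetric 2x2 reduced Jacobian, via the
    # closed-form solution of the characteristic polynomial
    a, b, c = J[0][0], J[0][1], J[1][1]
    tr, det = a + c, a * c - b * b
    disc = math.sqrt(tr * tr - 4 * det)
    return (tr - disc) / 2

# hypothetical reduced V-Q Jacobians at increasing loading levels
cases = [(0.80, [[4.0, 1.0], [1.0, 3.0]]),
         (0.95, [[1.2, 1.0], [1.0, 1.1]]),
         (1.00, [[0.6, 0.7], [0.7, 0.9]])]
lams = [min_eigenvalue_2x2(J) for _, J in cases]
for (load, _), lam in zip(cases, lams):
    print(load, round(lam, 4))  # eigenvalue shrinking toward zero with load
```

In a real assessment the Jacobian comes from the power-flow equations (here it is invented), and the loading margin complements this indicator by measuring the distance to the point where the eigenvalue reaches zero.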
6

Detecção e rastreamento de leucócitos em imagens de microscopia intravital via processamento espaçotemporal / Detection and tracking of leukocytes in intravital microscopy images via spatio-temporal processing

Silva, Bruno César Gregório da 19 February 2016 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) / Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP) / Over the last few years, a large number of researchers have directed their efforts and interests to the in vivo study of the cellular and molecular mechanisms of leukocyte-endothelial interactions in the microcirculation of many tissues under different inflammatory conditions. The main goal of these studies is to develop more effective therapeutic strategies for the treatment of inflammatory and autoimmune diseases. Nowadays, analysis of leukocyte-endothelial interactions in small animals is performed by visual assessment of intravital microscopy image sequences. Besides being time consuming, this procedure may cause visual fatigue in the observer and therefore generate unreliable statistics. In this context, this work aims to study and develop computational techniques for the automatic detection and tracking of leukocytes in intravital video microscopy. For that, results from frame-by-frame processing (2D - spatial analysis) are combined with those from the three-dimensional analysis (3D = 2D+t - spatio-temporal analysis) of the volume formed by stacking the video frames.
The main technique adopted for both processing steps is based on the analysis of the eigenvalues of the local Hessian matrix. While the 2D image processing aims at leukocyte detection without addressing their tracking, the 2D+t processing is intended to assist the dynamic analysis of cell movement (tracking), being able to predict cell movements in cases of occlusion, for example. In this work we used intravital video microscopy obtained from a study of multiple sclerosis in mice. Noise reduction and registration techniques comprise the preprocessing step, and techniques for the analysis and definition of cellular pathways comprise the post-processing step. Results of the 2D and 2D+t processing steps, compared with conventional visual analysis, have shown the effectiveness of the proposed approach.
FAPESP: 2013/26171-6
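The local-Hessian-eigenvalue cue for detection can be sketched on a tiny synthetic frame. This is not the authors' pipeline; the Gaussian blob and central-difference Hessian below are illustrative only. At the centre of a bright blob both eigenvalues are negative, which is the detection cue:

```python
import math

def hessian_eigs(img, y, x):
    # local Hessian of a 2D intensity image by central differences,
    # and the eigenvalues of the resulting symmetric 2x2 matrix
    hyy = img[y + 1][x] - 2 * img[y][x] + img[y - 1][x]
    hxx = img[y][x + 1] - 2 * img[y][x] + img[y][x - 1]
    hxy = (img[y + 1][x + 1] - img[y + 1][x - 1]
           - img[y - 1][x + 1] + img[y - 1][x - 1]) / 4.0
    tr, det = hxx + hyy, hxx * hyy - hxy * hxy
    d = math.sqrt(max(tr * tr - 4 * det, 0.0))
    return (tr - d) / 2, (tr + d) / 2

# synthetic 7x7 frame: one bright Gaussian-like blob (a "cell") centred at (3, 3)
img = [[math.exp(-((x - 3) ** 2 + (y - 3) ** 2) / 2.0) for x in range(7)]
       for y in range(7)]

lam1, lam2 = hessian_eigs(img, 3, 3)
print(lam1 < 0 and lam2 < 0)  # both eigenvalues negative at the blob centre
```

In the 2D+t setting described above, the same analysis on the stacked volume uses a 3x3 Hessian, whose eigenvalue signs and ratios distinguish moving blobs (tube-like structures in the volume) from noise.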
7

Fusion d'informations par la théorie de l'évidence pour la segmentation d'images / Information fusion using theory of evidence for image segmentation

Chahine, Chaza 31 October 2016 (has links)
Information fusion has been widely studied in the field of artificial intelligence. Information is generally considered imperfect; therefore, the combination of several (possibly heterogeneous) sources of information can lead to more comprehensive and complete information.
In the field of fusion, a distinction is generally made between probabilistic approaches and non-probabilistic ones, which include the theory of evidence developed in the 1970s. This method represents both the uncertainty and the imprecision of information by assigning masses not to a single hypothesis (the most common case for probabilistic methods) but to a set of hypotheses. The work presented in this thesis concerns the fusion of information for image segmentation. To develop this method we start from the watershed algorithm, one of the most widely used methods for edge detection. Intuitively, the principle of the watershed is to consider the image as a landscape relief where the heights of the different points are associated with grey levels. Assuming that the local minima are pierced with holes and the landscape is immersed in a lake, the water filling up from these minima generates the catchment basins, whereas watershed lines are the dams built to prevent the mixing of waters coming from different basins. The watershed is typically applied to the gradient magnitude, and a region is associated with each minimum. Therefore, the fluctuations in the gradient image and the great number of local minima generate a large set of small regions, yielding an over-segmented result which can hardly be useful. Meyer and Beucher proposed the seeded or marker-controlled watershed to surmount this over-segmentation problem. The essential idea of the method is to specify a set of markers (or seeds) to be considered as the only minima to be flooded by water. The number of detected objects is then equal to the number of seeds, and the result is marker-dependent. The automatic extraction of markers from the images does not always lead to a satisfying result, especially in the case of complex images.
Several methods have been proposed for automatically determining these markers. We are particularly interested in the stochastic approach of Angulo and Jeulin, who calculate a probability density function (pdf) of contours after M simulations of segmentation using the conventional watershed with N markers randomly selected for each simulation. A high pdf value is thus assigned to strong contour points that are detected repeatedly through the process. But the decision that a point belongs to the "contour class" remains dependent on a threshold value, so a single result cannot be obtained. To increase the robustness of this method and the uniqueness of its response, we propose to combine information using the theory of evidence. The watershed is generally calculated on the gradient image, a first-order derivative, which gives comprehensive information on the contours in the image, while the Hessian matrix, the matrix of second-order derivatives, gives more local information on the contours. Our goal is to combine these two complementary sources of information using the theory of evidence. The method is tested on real images from the Berkeley database. The results are compared with five manual segmentations provided as ground truth with this database. The quality of the segmentation obtained by our methods is evaluated with different measures: uniformity, precision, recall, specificity, sensitivity and the Hausdorff metric distance.
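The fusion step in the theory of evidence rests on Dempster's rule of combination. The sketch below combines two hypothetical mass functions, stand-ins for the gradient and Hessian cues at one pixel, over the frame {contour, background}; the mass values are made up for illustration:

```python
def dempster(m1, m2):
    # Dempster's rule on the frame {'c' (contour), 'b' (background)};
    # 'cb' carries the mass assigned to ignorance (the whole frame)
    inter = {('c', 'c'): 'c', ('c', 'cb'): 'c', ('cb', 'c'): 'c',
             ('b', 'b'): 'b', ('b', 'cb'): 'b', ('cb', 'b'): 'b',
             ('cb', 'cb'): 'cb', ('c', 'b'): None, ('b', 'c'): None}
    combined, conflict = {'c': 0.0, 'b': 0.0, 'cb': 0.0}, 0.0
    for fa in m1:
        for fb in m2:
            target = inter[(fa, fb)]
            if target is None:
                conflict += m1[fa] * m2[fb]       # contradictory evidence
            else:
                combined[target] += m1[fa] * m2[fb]
    # normalise by the non-conflicting mass
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# hypothetical masses from the two cues (values invented for illustration)
m_gradient = {'c': 0.6, 'b': 0.1, 'cb': 0.3}
m_hessian = {'c': 0.5, 'b': 0.2, 'cb': 0.3}
fused = dempster(m_gradient, m_hessian)
print(fused)  # mass on 'c' exceeds either source's individual belief
```

When both sources lean toward "contour", the fused belief in "contour" is reinforced beyond either source alone, while the explicit ignorance mass keeps weak evidence from being over-committed.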
8

Synthèse de contrôleurs prédictifs auto-adaptatifs pour l'optimisation des performances des systèmes / Synthesis of self-adaptive predictive controllers for optimizing system performance

Turki, Marwa 12 October 2018 (has links)
Even though predictive control uses parameters with a concrete physical meaning, the values of these parameters strongly impact the performance obtained from the controlled system. Their tuning is not trivial, which is why the literature reports a considerable number of adjustment methods. However, these do not always guarantee optimal values. The goal of this thesis is to propose an analytical and original tuning approach for these parameters.
Initially applicable to linear MIMO systems, the proposed approach has been extended to nonlinear systems, with or without constraints, for which a Takagi-Sugeno (T-S) model exists. The class of nonlinear systems considered here is written in quasi-linear parametric form (quasi-LPV). Assuming that the system is controllable and observable, the proposed method guarantees the optimal stability of the closed-loop system. To do this, it relies, on the one hand, on a technique for improving the conditioning of the Hessian matrix and, on the other hand, on the concept of effective rank. It also has the advantage of requiring a lower computational load than the approaches identified in the literature. The interest of the proposed approach is shown through simulations on different systems of increasing complexity. The work carried out has led to a self-adaptive predictive control strategy called "ATSMPC" (Adaptive Takagi-Sugeno Model-based Predictive Control).
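The effect of improving the Hessian's conditioning can be illustrated numerically. The sketch below is not the thesis's technique; it simply shows, on a hypothetical 2x2 MPC Hessian, how a Tikhonov-style diagonal shift reduces the condition number of the quadratic program solved at each control step:

```python
import math

def cond_2x2(H):
    # 2-norm condition number of a symmetric positive-definite 2x2 matrix
    a, b, c = H[0][0], H[0][1], H[1][1]
    tr, det = a + c, a * c - b * b
    d = math.sqrt(tr * tr - 4 * det)
    lmin, lmax = (tr - d) / 2, (tr + d) / 2
    return lmax / lmin

H = [[100.0, 9.9], [9.9, 1.0]]       # hypothetical ill-conditioned MPC Hessian
mu = 1.0                             # Tikhonov-style diagonal shift (made-up weight)
H_reg = [[H[0][0] + mu, H[0][1]],
         [H[1][0], H[1][1] + mu]]
print(cond_2x2(H), cond_2x2(H_reg))  # condition number drops sharply after the shift
```

A better-conditioned Hessian makes the underlying optimization less sensitive to the tuning parameters, which is the property the thesis's conditioning-improvement step exploits (by its own, different means).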
