371

Analyse d’atteignabilité de systèmes max-plus incertains / Reachability Analysis of Uncertain Max Plus Linear Systems

Ferreira Cândido, Renato Markele 23 June 2017 (has links)
Discrete Event Dynamic Systems (DEDS) are discrete-state systems whose dynamics are entirely driven by the occurrence of asynchronous events over time. Linear equations in the max-plus algebra can be used to describe DEDS subjected to synchronization and time delay phenomena. Reachability analysis concerns the computation of all states that can be reached by a dynamical system from an initial set of states. The reachability analysis problem for Max Plus Linear (MPL) systems has been properly solved by characterizing MPL systems as a combination of Piece-Wise Affine (PWA) systems and then representing each component of the PWA system as a Difference-Bound Matrix (DBM). The main contribution of this thesis is a similar procedure to solve the reachability analysis problem for MPL systems subjected to bounded noise, disturbances and/or modeling errors, called uncertain MPL (uMPL) systems. First, we present a procedure to partition the state space of an uMPL system into components that can be completely represented by DBM. Then we extend the reachability analysis of MPL systems to uMPL systems. Moreover, the results on reachability analysis of uMPL systems are used to solve the conditional reachability problem, which is closely related to the computation of the support of the probability density function involved in the stochastic filtering problem.
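As an orientation to the dynamics involved (an illustrative sketch, not the thesis's code; the function name `maxplus_matvec` and the numbers are made up), here is the max-plus matrix-vector product x[k+1] = A (x) x[k] underlying MPL models; in a uMPL system the finite entries of A would range over bounded intervals:

```python
import numpy as np

NEG_INF = -np.inf   # the max-plus "zero" (no connection between events)

def maxplus_matvec(A, x):
    """Max-plus product: (A (x) x)_i = max_j (A[i, j] + x[j])."""
    return np.max(A + x[np.newaxis, :], axis=1)

# Two-event synchronization example: x holds the k-th firing time of each event.
A = np.array([[3.0,     7.0],
              [NEG_INF, 4.0]])
x0 = np.zeros(2)
x1 = maxplus_matvec(A, x0)    # array([7., 4.])
```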
372

Algebraic analysis of V-cycle multigrid and aggregation-based two-grid methods

Napov, Artem 12 February 2010 (has links)
This thesis treats two essentially different subjects: V-cycle schemes are considered in Chapters 2-4, whereas aggregation-based coarsening is analysed in Chapters 5-6. Paradoxically, these two multigrid ingredients, when combined, can hardly lead to an optimal algorithm. Indeed, a V-cycle needs more accurate prolongations than the simple piecewise-constant one associated with aggregation-based coarsening. On the other hand, aggregation-based approaches use almost exclusively piecewise constant prolongations, and therefore need more involved cycling strategies, the K-cycle [Num.Lin.Alg.Appl. vol.15(2008), pp.473-487] being an attractive alternative in this respect.

Chapter 2 considers more precisely the well-known V-cycle convergence theories: the approximation property based analyses by Hackbusch (see [Multi-Grid Methods and Applications, 1985, pp.164-167]) and by McCormick [SIAM J.Numer.Anal. vol.22(1985), pp.634-643], and the successive subspace correction theory, as presented in [SIAM Review, vol.34(1992), pp.581-613] by Xu and in [Acta Numerica, vol.2(1993), pp.285-326] by Yserentant. Under the constraint that the resulting upper bound on the convergence rate must be expressed with respect to parameters involving two successive levels at a time, these theories are compared. Unlike [Acta Numerica, vol.2(1993), pp.285-326], where the comparison is performed on the basis of underlying assumptions in a particular PDE context, we compare the upper bounds directly. We show that these analyses are equivalent from the qualitative point of view. From the quantitative point of view, we show that the bound due to McCormick is always the best one.

When the upper bound on the V-cycle convergence factor involves only two successive levels at a time, it can further be compared with the two-level convergence factor. Such a comparison is performed in Chapter 3, showing that a nice two-grid convergence (at every level) leads to an optimal McCormick bound (the best bound from the previous chapter) if and only if a norm of a given projector is bounded on every level.

In Chapter 4 we consider the Fourier analysis setting for scalar PDEs and extend the comparison between two-grid and V-cycle multigrid methods to the smoothing factor. In particular, a two-sided bound involving the smoothing factor is obtained that defines an interval containing both the two-grid and V-cycle convergence rates. This interval is narrow when an additional parameter α is small enough, the latter being a simple function of Fourier components.

Chapter 5 provides a theoretical framework for coarsening by aggregation. An upper bound is presented that relates the two-grid convergence factor to local quantities, each associated with a particular aggregate. The bound is shown to be asymptotically sharp for a large class of elliptic boundary value problems, including problems with anisotropic and discontinuous coefficients.

In Chapter 6 we consider problems resulting from the discretization with edge finite elements of the 3D curl-curl equation. The variables in such a discretization are associated with edges. We investigate the performance of the Reitzinger and Schöberl algorithm [Num.Lin.Alg.Appl. vol.9(2002), pp.223-238], which uses aggregation techniques to construct the edge prolongation matrix. More precisely, we perform a Fourier analysis of the method in the two-grid setting, showing its optimality. The analysis is supplemented with some numerical investigations. / Doctorate in Engineering Sciences
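For reference, this is the scheme under study, written as a textbook recursion (a generic sketch, not the thesis's analysis; the helpers `jacobi` and `v_cycle` are illustrative). With aggregation-based coarsening, P[l] would be the piecewise-constant 0/1 aggregate-membership matrix discussed above:

```python
import numpy as np

def jacobi(Al, b, x, nu, omega=0.8):
    """nu sweeps of weighted Jacobi smoothing for Al x = b."""
    d = np.diag(Al)
    for _ in range(nu):
        x = x + omega * (b - Al @ x) / d
    return x

def v_cycle(A, R, P, b, x, l, nu1=1, nu2=1):
    """One V-cycle for A[l] x = b.

    A[l] is the level-l matrix (l = 0 coarsest); R[l] restricts level-l
    residuals to level l-1 and P[l] prolongates corrections back.
    """
    if l == 0:
        return np.linalg.solve(A[0], b)                 # coarsest: direct solve
    x = jacobi(A[l], b, x, nu1)                         # pre-smoothing
    rc = R[l] @ (b - A[l] @ x)                          # restricted residual
    ec = v_cycle(A, R, P, rc, np.zeros_like(rc), l - 1, nu1, nu2)
    x = x + P[l] @ ec                                   # coarse-grid correction
    return jacobi(A[l], b, x, nu2)                      # post-smoothing
```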
373

On numerical resilience in linear algebra / Conception d'algorithmes numériques pour la résilience en algèbre linéaire

Zounon, Mawussi 01 April 2015 (has links)
As the computational power of high performance computing (HPC) systems continues to increase through huge numbers of cores or specialized processing units, HPC applications are increasingly prone to faults. This study covers a new class of numerical fault tolerance algorithms at the application level that does not require extra resources, i.e., computational units or computing time, when no fault occurs. Assuming that a separate mechanism ensures fault detection, we propose numerical algorithms to extract relevant information from available data after a fault. After data extraction, a well-chosen part of the missing data is regenerated through interpolation strategies to constitute meaningful inputs to numerically restart the algorithm. We have designed these methods, called Interpolation-restart techniques, for numerical linear algebra problems such as the solution of linear systems or eigenproblems, which are the innermost numerical kernels in many scientific and engineering applications and often among the most time consuming parts. In the framework of Krylov subspace linear solvers, the lost entries of the iterate are interpolated using the available entries on the still alive nodes to define a new initial guess before restarting the Krylov method. In particular, we consider two interpolation policies that preserve key numerical properties of well-known linear solvers, namely the monotonic decrease of the A-norm of the error of the conjugate gradient or the residual norm decrease of GMRES. We assess the impact of the fault rate and the amount of lost data on the robustness of the resulting linear solvers. For eigensolvers, we revisited state-of-the-art methods for solving large sparse eigenvalue problems, namely the Arnoldi methods, subspace iteration methods and the Jacobi-Davidson method, in the light of Interpolation-restart strategies. For each considered eigensolver, we adapted the Interpolation-restart strategies to regenerate as much spectral information as possible. Through intensive experiments, we illustrate the qualitative numerical behavior of the resulting schemes when the number of faults and the amount of lost data are varied, and we demonstrate that they exhibit a numerical robustness close to that of fault-free calculations. In order to assess the efficiency of our numerical strategies, we have considered an actual fully-featured parallel sparse hybrid (direct/iterative) linear solver, MaPHyS, and we propose numerical remedies to design a resilient version of the solver. The solver being hybrid, we focus in this study on the iterative solution step, which is often the dominant step in practice. The numerical remedies we propose are twofold. Whenever possible, we exploit the natural data redundancy between processes of the solver to perform an exact recovery through clever copies over processes. Otherwise, data that has been lost and is not available anymore on any process is recovered through Interpolation-restart strategies. These numerical remedies have been implemented in the MaPHyS parallel solver so that we can assess their efficiency on a large number of processing units (up to 12,288 CPU cores) for solving large-scale real-life problems.
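A minimal dense sketch of the interpolation idea described above, assuming centralized access to the full matrix and the surviving entries (the actual solver works on distributed sparse data; `li_interpolate` is an illustrative name): the lost entries x_I are regenerated by a local solve before the Krylov method is restarted.

```python
import numpy as np

def li_interpolate(A, b, x, lost):
    """Regenerate lost entries of the iterate x after a fault (linear interpolation).

    Solves A[I, I] x_I = b_I - A[I, J] x_J, where I holds the lost indices and
    J the surviving ones; the repaired x serves as the new initial guess for
    the restarted Krylov method (CG, GMRES, ...).
    """
    I = np.asarray(lost)
    J = np.setdiff1d(np.arange(A.shape[0]), I)
    xr = x.copy()
    xr[I] = np.linalg.solve(A[np.ix_(I, I)], b[I] - A[np.ix_(I, J)] @ x[J])
    return xr
```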
374

Effect Of Cross-sectional Nonlinearities On Anisotropic Strip-based Mechanisms

Pollayi, Hemaraju 09 1900 (has links) (PDF)
The goal of this work is to develop and demonstrate a comprehensive analysis of single and multi-body composite strip-beam systems using an asymptotically-correct geometrically nonlinear theory. The comprehensiveness refers to the two distinguishing features of this work, namely the unified framework for the analysis and the inclusion of the usually ignored cross-sectional nonlinearities in thin-beam and multi-beam analyses. The first part of this work stitches together an approach to analyse generally anisotropic composite beams. Based on geometrically exact nonlinear elasticity theory, the nonlinear 3-D beam problem splits into either a linear (conventionally considered) or nonlinear (considered in this work) 2-D analysis of the beam cross-section and a nonlinear 1-D analysis along the beam reference curve. The two sub-tasks of this work (viz. nonlinear analysis of the beam cross-section and nonlinear beam analysis) are accomplished on a single platform using an object-oriented framework. First, two established nonlinear cross-sectional analyses (numerical and analytical), both based on the Variational-Asymptotic Method (VAM), are invoked. The numerical analysis is capable of treating cross-sections of arbitrary geometry and material distribution and can capture certain nonlinear effects such as the trapeze effect. The closed-form analytical analysis is restricted to thin rectangular cross-sections for generally anisotropic composites, but captures all cross-sectional nonlinearities, not just the well-known Brazier and trapeze effects. Second, the well-established geometrically exact nonlinear 1-D governing equations along the beam reference curve, after being generalized to utilize the expressions for the nonlinear stiffness matrix, are solved using the mixed variational finite element method. Finally, local 3-D stress, strain and displacement fields for representative sections in the beam are recovered, based on the stress resultants from the 1-D global beam analysis. This part of the work is then validated by applying it to an initially twisted cantilevered laminated composite strip under axial force. The second part is concerned with the dynamic analysis of nonlinear multi-body systems involving elastic strip-like beams made of laminated, anisotropic composite materials, using an object-oriented framework. Unconditionally stable time-integration schemes providing high-frequency numerical dissipation are used to solve the ensuing governing equations. The codes developed on the basis of such time-integration schemes are first validated against the literature for two standard test cases: a nonlinear spring-mass oscillator and a pendulum. In order to apply the comprehensive analysis code thus developed to a multi-body system, the four-bar mechanism is chosen as an example. All component bars of the mechanism have thin rectangular cross-sections and are made of fiber-reinforced laminates of various types of layups. They could, in general, be pre-twisted and/or possess initial curvature, either by design or by defect. They are linked to each other by means of revolute joints. Each component of the mechanism is modeled as a beam based on the first part of this work. Results from this analysis are compared with those available in the literature, both theoretical and experimental. The margins between the linear and nonlinear results, arising specifically from the cross-sectional nonlinearities, are evaluated and shown to vary with stacking sequence. This work thus demonstrates the importance of geometrically nonlinear cross-sectional analysis of certain composite beam-based four-bar mechanisms in predicting system dynamic characteristics. To enable graphical visualization, the behavior of the four-bar mechanism is also observed by using commercial software (I-DEAS + NASTRAN + ADAMS). Finally, the component-laminate load-carrying capacity is estimated using the Tsai-Wu-Hahn failure criterion for various layups, and the same criterion is used to predict first-ply failure for the laminates and for the mechanism as a whole.
375

Binary Arithmetic for Finite-Word-Length Linear Controllers: MEMS Applications / Intégration sur électronique dédiée et embarquée du traitement du signal et de la commande pour les microsystemes appliqués à la microrobotique

Oudjida, Abdelkrim Kamel 20 January 2014 (has links)
This thesis addresses the problem of optimal hardware realization of finite-word-length (FWL) linear controllers dedicated to MEMS applications. The biggest challenge is to ensure satisfactory control performance with minimal hardware. To this end, two distinct but complementary optimizations can be undertaken: in control theory and in binary arithmetic. Only the latter is involved in this work. Because MEMS applications are targeted, the binary arithmetic must be fast enough to cope with the rapid dynamics of MEMS; power-efficient for embedded control; highly scalable for easy adjustment of the control performance; and easily predictable to provide a precise idea of the required logic resources before implementation. The exploration of a number of binary arithmetics showed that radix-2^r is the best candidate fitting the aforementioned requirements. It has been fully exploited to design efficient multiplier cores, which are the real engine of linear systems. The radix-2^r arithmetic was applied to the hardware integration of two FWL structures: a linear time-variant PID controller and a linear time-invariant LQG controller with a Kalman filter. Both controllers showed a clear superiority over their existing counterparts, or in comparison with their initial forms.
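To make the finite-word-length setting concrete (an illustrative sketch only, not the thesis's design; the Q4.12 format, gains, and helper names `fxp_mul`/`pid_step` are assumptions), one FWL PID step with quantized gains looks like this; the constant multiplications below are the multiplier cores that such work seeks to optimize:

```python
FRAC = 12                            # Q4.12: 4 integer bits, 12 fractional bits

def fxp_mul(a, b):
    """Fixed-point multiply with truncation, as a shift-and-add multiplier would."""
    return (a * b) >> FRAC

def pid_step(state, e, kp, ki, kd):
    """One FWL PID update: u = Kp*e + Ki*sum(e) + Kd*(e - e_prev)."""
    integ, e_prev = state
    integ += e                       # integral accumulator (wider word in hardware)
    u = fxp_mul(kp, e) + fxp_mul(ki, integ) + fxp_mul(kd, e - e_prev)
    return u, (integ, e)

# Gains quantized to Q4.12, e.g. Kp = 1.5 -> round(1.5 * 2**12) = 6144
kp, ki, kd = round(1.5 * 2**FRAC), round(0.02 * 2**FRAC), round(0.1 * 2**FRAC)
state = (0, 0)
u, state = pid_step(state, e=100, kp=kp, ki=ki, kd=kd)
```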
376

Solving dense linear systems on accelerated multicore architectures / Résoudre des systèmes linéaires denses sur des architectures composées de processeurs multicœurs et d’accélerateurs

Rémy, Adrien 08 July 2015 (has links)
In this PhD thesis, we study algorithms and implementations to accelerate the solution of dense linear systems by using hybrid architectures with multicore processors and accelerators. We focus on methods based on the LU factorization, and our code development takes place in the context of the MAGMA library. We study different hybrid CPU/GPU solvers based on the LU factorization which aim at reducing the communication overhead due to pivoting. The first one is based on a communication avoiding strategy of pivoting (CALU), while the second uses a random preconditioning of the original system to avoid pivoting (RBT). We show that both of these methods outperform the solver using LU factorization with partial pivoting when implemented on hybrid multicore/GPU architectures. We also present new solvers based on randomization for hybrid architectures with Nvidia GPUs or Intel Xeon Phi coprocessors. With this method, we can avoid the high cost of pivoting while remaining numerically stable in most cases. The highly parallel architecture of these accelerators allows us to perform the randomization of our linear system at a very low computational cost compared to the time of the factorization. Finally, we investigate the impact of non-uniform memory accesses (NUMA) on the solution of dense general linear systems using an LU factorization algorithm. In particular, we illustrate how an appropriate placement of the threads and data on a NUMA architecture can improve the performance of the panel factorization and consequently accelerate the global LU factorization. We show how these placements can improve the performance when applied to hybrid multicore/GPU solvers.
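A small sketch of the RBT idea under simplifying assumptions (depth-1 butterflies on a tiny dense system; the MAGMA implementation uses recursive butterflies and tuned kernels, and the helpers `butterfly`/`lu_nopiv` are illustrative): the system is transformed on both sides by random butterfly matrices so that Gaussian elimination without pivoting can be applied.

```python
import numpy as np
from scipy.linalg import solve_triangular

rng = np.random.default_rng(0)

def butterfly(n):
    """Random butterfly: B = [[R, S], [R, -S]] / sqrt(2), R and S random diagonal."""
    R = np.diag(rng.uniform(0.5, 1.5, n // 2))
    S = np.diag(rng.uniform(0.5, 1.5, n // 2))
    return np.block([[R, S], [R, -S]]) / np.sqrt(2.0)

def lu_nopiv(M):
    """Gaussian elimination without pivoting (Doolittle), for the sketch only."""
    M = M.astype(float).copy()
    n = M.shape[0]
    for k in range(n - 1):
        M[k + 1:, k] /= M[k, k]
        M[k + 1:, k + 1:] -= np.outer(M[k + 1:, k], M[k, k + 1:])
    return np.tril(M, -1) + np.eye(n), np.triu(M)

n = 8                                    # n even for a clean butterfly
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)
U, V = butterfly(n), butterfly(n)
L, R = lu_nopiv(U.T @ A @ V)             # randomization makes pivoting unnecessary
y = solve_triangular(R, solve_triangular(L, U.T @ b, lower=True))
x = V @ y                                # recovers the solution of A x = b
```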
377

Improvements in Genetic Approach to Pole Placement in Linear State Space Systems Through Island Approach PGA with Orthogonal Mutation Vectors

Cassell, Arnold 01 January 2012 (has links)
This thesis describes a genetic approach for shaping the dynamic responses of linear state space systems through pole placement, and compares it with an island-approach parallel genetic algorithm (PGA) that incorporates orthogonal mutation vectors to increase sub-population specialization and decrease convergence time. Both approaches generate a gain vector K, used in state feedback to alter the poles of the system so as to meet step response requirements such as settling time and percent overshoot. To obtain the gain vector K by the proposed genetic approaches, a pair of ideal, desired poles is calculated first. Those poles serve as the basis from which an initial population is created; in the island approach, they serve as the basis for n populations, where n is the dimension of the required K vector. Each member of the population is tested for its fitness (the degree to which it matches the criteria). A new population is created each generation from the results of the previous iteration, until the criteria are met or a certain number of generations have passed. Several case studies are provided to illustrate that the new approach works and to compare the performance of the two approaches.
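A minimal sketch of the genetic loop just described (illustrative only: the population size, mutation law, and fitness measure are assumptions, and the island/orthogonal-mutation variant would split the population into n sub-populations, each mutating along one coordinate axis of K):

```python
import numpy as np

rng = np.random.default_rng(1)

def fitness(K, A, B, desired):
    """Negative total distance between closed-loop poles of A - B K and the targets."""
    poles = np.sort_complex(np.linalg.eigvals(A - B @ K[np.newaxis, :]))
    return -np.sum(np.abs(poles - np.sort_complex(desired)))

def ga_place(A, B, desired, pop=60, gens=200, sigma=0.5):
    n = A.shape[0]
    P = rng.normal(0.0, 2.0, (pop, n))          # initial population of gain vectors K
    for _ in range(gens):
        f = np.array([fitness(k, A, B, desired) for k in P])
        elite = P[np.argsort(f)[-pop // 2:]]    # keep the fitter half
        P = np.vstack([elite, elite + sigma * rng.normal(size=elite.shape)])  # mutate
    f = np.array([fitness(k, A, B, desired) for k in P])
    return P[np.argmax(f)]

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
# For this system the exact pole-placement answer is K = [18, 6].
K = ga_place(A, B, desired=np.array([-4.0 + 0j, -5.0 + 0j]))
```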
378

Representation and Reconstruction of Linear, Time-Invariant Networks

Woodbury, Nathan Scott 01 April 2019 (has links)
Network reconstruction is the process of recovering a unique structured representation of some dynamic system using input-output data and some additional knowledge about the structure of the system. Many network reconstruction algorithms have been proposed in recent years, most dealing with the reconstruction of strictly proper networks (i.e., networks that require delays in all dynamics between measured variables). However, no reconstruction technique presently exists capable of recovering both the structure and dynamics of networks where links are proper (delays in dynamics are not required) and not necessarily strictly proper.

The ultimate objective of this dissertation is to develop algorithms capable of reconstructing proper networks, and this objective will be addressed in three parts. The first part lays the foundation for the theory of mathematical representations of proper networks, including an exposition on when such networks are well-posed (i.e., physically realizable). The second part studies the notion of abstractions of a network, which are other networks that preserve certain properties of the original network but contain less structural information. As such, abstractions require less a priori information to reconstruct from data than the original network, which allows previously-unsolvable problems to become solvable. The third part addresses our original objective and presents reconstruction algorithms to recover proper networks in both the time domain and the frequency domain.
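To make the proper/strictly-proper distinction concrete (an illustrative example, not taken from the dissertation; `is_strictly_proper` is an assumed helper name): a rational link C(sI - A)^{-1}B + D is strictly proper exactly when its direct feedthrough D vanishes.

```python
import numpy as np

# Proper (but not strictly proper) link: direct feedthrough, no delay.
#   G(s) = (s + 1)/(s + 2) = 1 - 1/(s + 2)   =>   realization with D = 1
A = np.array([[-2.0]]); B = np.array([[1.0]]); C = np.array([[-1.0]]); D = np.array([[1.0]])

# Strictly proper link (delay in the dynamics): H(s) = 1/(s + 2)  =>  D = 0.

def is_strictly_proper(D):
    """A rational link C (sI - A)^{-1} B + D is strictly proper iff D == 0."""
    return not np.any(D)

assert not is_strictly_proper(D)              # G is proper only
assert is_strictly_proper(np.zeros((1, 1)))   # H is strictly proper
```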
379

Numerical approximations with tensor-based techniques for high-dimensional problems

Mora Jiménez, María 29 January 2024 (has links)
Thesis by compendium / The idea of following a sequence of steps to achieve a desired result is inherent in human nature: from the moment we start walking, following a cooking recipe, or learning a new card game. Since ancient times, this scheme has been followed to organize laws, correct writings, and even assign diagnoses. In mathematics, this way of thinking is called an algorithm. Formally, an algorithm is a finite, ordered set of defined and unambiguous instructions that allows a problem to be solved. From childhood we face them when we learn to multiply or divide, and as we grow, these structures enable us to solve increasingly complex problems: linear systems, differential equations, optimization problems, etc. A multitude of algorithms allow us to deal with this type of problem, such as iterative methods, where we find the famous Newton method for finding roots; search algorithms to locate an element with specific properties in a larger set; or matrix decompositions, such as the LU decomposition for solving linear systems. However, these classical approaches have limitations when faced with large-dimensional problems, a difficulty known as the 'curse of dimensionality'. The advancement of technology, the use of social networks and, in general, the new problems that have appeared with the development of Artificial Intelligence have revealed the need to handle large amounts of data, which requires the design of new mechanisms that allow their manipulation. This fact has aroused interest in the scientific community in tensor structures, since they allow us to work efficiently with large-dimensional problems. However, most classic methods are not designed to be used together with these operations, so specific tools are required for their treatment, which motivates work like this. This work is divided as follows: after reviewing some definitions necessary for its understanding, in Chapter 3 the theory of a new tensor decomposition for square matrices is developed. Next, Chapter 4 shows an application of this decomposition to regular graphs and small-world networks. In Chapter 5, an efficient implementation of the algorithm provided by the new matrix decomposition is proposed, and some second-order PDEs are studied as an application. Finally, Chapters 6 and 7 present some brief conclusions and list some of the references consulted. / María Mora Jiménez acknowledges funding from grant (ACIF/2020/269) funded by the Generalitat Valenciana and the European Social Fund / Mora Jiménez, M. (2023). Numerical approximations with tensor-based techniques for high-dimensional problems [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/202604
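A small worked illustration of why tensor-based formats side-step the curse of dimensionality (generic, not the decomposition proposed in the thesis): a rank-one Kronecker representation stores d*n numbers instead of n^d, and inner products can be computed factor by factor.

```python
import numpy as np

d, n = 20, 2                          # 20 dimensions, 2 points each -> 2**20 entries
vs = [np.array([1.0, 0.5]) for _ in range(d)]   # factors of a rank-one tensor

# The full vector x = v1 (x) v2 (x) ... (x) vd has 2**20 = 1,048,576 entries...
x_full = vs[0]
for v in vs[1:]:
    x_full = np.kron(x_full, v)

# ...while the factored form stores only d * n = 40 numbers, and operations
# such as inner products act factor by factor:
ip_factored = np.prod([v @ v for v in vs])      # <x, x> computed from the factors
assert np.isclose(ip_factored, x_full @ x_full)
```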
380

Fault diagnosis of lithium ion battery using multiple model adaptive estimation

Sidhu, Amardeep Singh 12 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Lithium-ion (Li-ion) batteries have become integral parts of our lives; they are widely used in applications such as handheld consumer products, automotive systems, and power tools, among others. To extract maximum output from a Li-ion battery under optimal conditions, it is imperative to have access to the state of the battery under every operating condition. Faults occurring in the battery, when left unchecked, can lead to irreversible and, under extreme conditions, catastrophic damage. In this thesis, an adaptive fault diagnosis technique is developed for Li-ion batteries. For the purpose of fault diagnosis, the battery is modeled with lumped electrical elements under the equivalent circuit paradigm. The model takes into account much of the electro-chemical phenomena while keeping the computational effort at a minimum. The diagnosis process consists of multiple models representing the various conditions of the battery. A bank of observers is used to estimate the output of each model; the estimated output is compared with the measurement to generate residual signals. These residuals are then used in the multiple model adaptive estimation (MMAE) technique to generate probabilities and to detect the signature faults. The effectiveness of the fault detection and identification process also depends on the model uncertainties introduced by the battery modeling process. The diagnosis performance is compared for both linear and nonlinear battery models. The nonlinear battery model better captures the actual system dynamics and results in considerable improvement, and hence robust battery fault diagnosis in real time. Furthermore, it is shown that the nonlinear battery model enables precise battery condition monitoring in different degrees of over-discharge.
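A hedged sketch of the standard MMAE probability update named above (the general technique, not the thesis's battery models or observer details; `mmae_update` is an illustrative name): each model's residual is scored by a Gaussian likelihood and the model probabilities are updated by Bayes' rule.

```python
import numpy as np

def mmae_update(probs, residuals, covs):
    """Bayesian probability update for multiple model adaptive estimation.

    probs     : prior model probabilities, shape (M,)
    residuals : innovations r_i = y - y_hat_i from each model's observer, (M, m)
    covs      : innovation covariances S_i, shape (M, m, m)
    """
    M, m = residuals.shape
    like = np.empty(M)
    for i in range(M):
        Si = covs[i]
        quad = residuals[i] @ np.linalg.solve(Si, residuals[i])
        like[i] = np.exp(-0.5 * quad) / np.sqrt((2 * np.pi) ** m * np.linalg.det(Si))
    post = probs * like
    return post / post.sum()          # the highest-probability model flags the fault
```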
