  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Solving the Hamilton-Jacobi-Bellman Equation for Route Planning Problems Using Tensor Decomposition

Mosskull, Albin, Munhoz Arfvidsson, Kaj January 2020 (has links)
Optimizing routes for multiple autonomous vehicles in complex traffic situations can lead to improved efficiency in traffic. Attempting to solve these optimization problems centrally, i.e., for all vehicles involved, often leads to algorithms that exhibit the curse of dimensionality: the computation time and memory needed scale exponentially with the number of vehicles, making the calculations infeasible for even a moderate number of vehicles. However, using a numerical framework called tensor decomposition, one can compute and store solutions to these problems in a more manageable way. In this project, we investigate different tensor decomposition methods and corresponding algorithms for solving optimal control problems by evaluating their accuracy against a known solution. We also formulate complex traffic situations as optimal control problems and solve them, using the best tensor decomposition and carefully adjusting different cost parameters. From these results it can be concluded that the Sequential Alternating Least Squares algorithm used with the canonical tensor decomposition performed best. By asserting a smooth cost function, one can solve certain scenarios and obtain satisfactory solutions, but extensive testing is required to achieve such results, since numerical errors can easily occur as a result of an ill-formed problem. / Bachelor's thesis in electrical engineering 2020, KTH, Stockholm
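To make the method concrete, a canonical (CP/PARAFAC) decomposition fitted by alternating least squares can be sketched in a few lines of NumPy. This is our own minimal illustration, not the thesis code, and it implements plain ALS rather than the Sequential ALS variant the abstract refers to; all function names are ours.

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Kronecker (Khatri-Rao) product."""
    r = A.shape[1]
    return np.einsum("ir,jr->ijr", A, B).reshape(-1, r)

def cp_als(T, rank, n_iter=500, seed=0):
    """Fit a rank-`rank` canonical (CP) decomposition of a 3-way tensor
    by alternating least squares: fix two factors, solve for the third."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    T0 = T.reshape(I, -1)                     # mode-1 unfolding
    T1 = np.moveaxis(T, 1, 0).reshape(J, -1)  # mode-2 unfolding
    T2 = np.moveaxis(T, 2, 0).reshape(K, -1)  # mode-3 unfolding
    for _ in range(n_iter):
        A = T0 @ np.linalg.pinv(khatri_rao(B, C)).T
        B = T1 @ np.linalg.pinv(khatri_rao(A, C)).T
        C = T2 @ np.linalg.pinv(khatri_rao(A, B)).T
    return A, B, C

def cp_reconstruct(A, B, C):
    """Assemble the full tensor: sum over r of A[:,r] x B[:,r] x C[:,r]."""
    return np.einsum("ir,jr,kr->ijk", A, B, C)
```

Each factor update is a linear least-squares solve against the matching unfolding of the tensor, which is why the scheme is called alternating least squares.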
2

Tensor product methods in numerical simulation of high-dimensional dynamical problems

Dolgov, Sergey 08 September 2014 (has links) (PDF)
Quantification of stochastic or quantum systems by a joint probability density or wave function is a notoriously difficult computational problem, since the solution depends on all possible states (or realizations) of the system. Due to this combinatorial flavor, even a system containing as few as ten particles may yield as many as $10^{10}$ discretized states. Not even modern supercomputers can cope with this curse of dimensionality head-on when the number of quantum particles, for example, grows to the more interesting order of hundreds. For a long time, the traditional approach was to avoid models formulated in terms of probabilistic functions and to simulate particular system realizations in a randomized process. At different times and in different communities, data-sparse methods came into play. Generally, they aim to define all data points indirectly, by a map from a small number of representers, and to recast all operations (e.g. linear system solution) from the initial data to the effective parameters. The most advanced techniques can be applied (or at least tried) on any given array and do not rely explicitly on its origin. The present work contributes further progress to this area in one particular direction: tensor product methods for the separation of variables. The separation of variables has a long history and is based on the following elementary concept: a function of many variables may be expanded as a product of univariate functions. On the discrete level, a function is encoded by an array of its values, or a tensor. Therefore, instead of a huge initial array, the separation of variables allows one to work with univariate factors at much lower effort. The dissertation contains a short overview of existing tensor representations: the canonical PARAFAC, Hierarchical Tucker and Tensor Train (TT) formats, as well as artificial tensorization, resulting in the Quantized Tensor Train (QTT) approximation method.
The contribution of the dissertation consists of both theoretical constructions and practical numerical algorithms for high-dimensional models, illustrated on the examples of the Fokker-Planck and chemical master equations. Both arise from stochastic dynamical processes in multiconfigurational systems and govern the evolution of the probability function in time. A special focus is put on time propagation schemes and their properties in relation to tensor product methods. We show that these applications yield large-scale systems of linear equations, and we prove analytical separable representations of the functions and operators involved. We propose a new combined tensor format (QTT-Tucker), which descends from the TT format (hence TT algorithms generalize smoothly to it) but provides a complexity reduction by an order of magnitude. We develop a robust iterative solution algorithm that combines the most advantageous properties of classical iterative methods from numerical analysis and of the alternating density matrix renormalization group (DMRG) techniques from quantum physics. Numerical experiments confirm that the new method is preferable to DMRG algorithms: it is as fast as the simplest alternating schemes, but as reliable and accurate as the Krylov methods of linear algebra.
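The TT construction mentioned above can be made concrete: the standard TT-SVD algorithm peels off one core per dimension via truncated SVDs of successive unfoldings. A minimal NumPy sketch (our own illustration, not the dissertation's code; names are ours):

```python
import numpy as np

def tt_svd(T, eps=1e-12):
    """Decompose a d-way tensor into Tensor Train (TT) cores by
    successive truncated SVDs of the current unfolding."""
    shape = T.shape
    d = len(shape)
    cores, r_prev = [], 1
    M = T.reshape(shape[0], -1)
    for k in range(d - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        r = max(1, int(np.sum(s > eps * s[0])))  # rank after truncation
        cores.append(U[:, :r].reshape(r_prev, shape[k], r))
        # carry the remainder to the next unfolding
        M = (s[:r, None] * Vt[:r]).reshape(r * shape[k + 1], -1)
        r_prev = r
    cores.append(M.reshape(r_prev, shape[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the TT cores back into the full tensor."""
    G = cores[0]
    out = G.reshape(G.shape[1], G.shape[2])
    for core in cores[1:]:
        r, n, r2 = core.shape
        out = (out @ core.reshape(r, n * r2)).reshape(-1, r2)
    return out.reshape([c.shape[1] for c in cores])
```

Instead of the full array, only the small three-way cores are stored; `tt_reconstruct` is used here only to verify the decomposition on toy sizes.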
3

Low-rank Tensor Methods for PDE-constrained Optimization

Bünger, Alexandra 14 December 2021 (has links)
Optimization problems governed by Partial Differential Equations (PDEs) arise in various applications of science and engineering. If we formulate a discretization of a PDE problem, it may become infeasible to treat with conventional methods due to its size, and solving an optimization problem on top of the forward problem poses additional difficulties. Often, we can formulate the PDE problem in a more compact format by paying attention to the underlying Kronecker product structure between the space and time dimensions of the discretization. When the PDE is discretized with Isogeometric Analysis, we can additionally formulate a low-rank representation with Kronecker products between its individual spatial dimensions. This low-rank formulation gives rise to a fast and memory-efficient assembly of the system matrices. The PDE problem, represented as a sum of Kronecker products, can then be interpreted as a low-rank tensor train formulation, which can be solved efficiently in a low-rank format. We illustrate this for several challenging PDE-constrained problems. / Contents: Introduction; Tensor Train Format; Isogeometric Analysis; PDE-constrained Optimization; Bayesian Inverse Problems; A low-rank tensor method for PDE-constrained optimization with Isogeometric Analysis; A low-rank matrix equation method for solving PDE-constrained optimization problems; A low-rank tensor method to reconstruct sparse initial states for PDEs with Isogeometric Analysis; Theses and Summary; Bibliography
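The benefit of the Kronecker product structure can be demonstrated with a standard identity: a matrix-vector product with A ⊗ B never requires forming the Kronecker matrix, because (A ⊗ B) vec(X) = vec(A X Bᵀ) in row-major convention. A small NumPy check (illustrative only; the random matrices are stand-ins for the space/time factors of an actual discretization):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, p, q = 30, 30, 40, 40
A = rng.standard_normal((m, n))   # stand-in for a spatial factor
B = rng.standard_normal((p, q))   # stand-in for a temporal factor
X = rng.standard_normal((n, q))

# naive: materialize the (m*p) x (n*q) Kronecker matrix
y_naive = np.kron(A, B) @ X.reshape(-1)

# structured: (A kron B) vec(X) = vec(A X B^T) for row-major vec
y_fast = (A @ X @ B.T).reshape(-1)

assert np.allclose(y_naive, y_fast)
```

The structured product touches only the factors, so memory and work scale with the factor sizes rather than with their product, which is what makes the assembly "fast and memory-efficient".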
4

Tensor decompositions and factorizations of intensive computing applied to the calibration of nonlinear constitutive material laws

Olivier, Clément 14 December 2017 (has links)
This thesis presents a novel, non-intrusive methodology for constructing surrogate models of parametric physical models. The proposed methodology makes it possible to approximate in real time, over the entire parameter space, multiple heterogeneous quantities of interest derived from physical models. The surrogate models are based on tensor train representations built during an intensive offline computational stage. The fundamental idea of the learning stage is to construct all tensor approximations simultaneously, based on a reduced number of solutions of the physical model obtained on the fly. The parsimonious exploration of the parameter space, coupled with the compact tensor train representation, alleviates the curse of dimensionality. The approach is particularly well suited to models involving many parameters defined over large domains. Numerical results on nonlinear elasto-viscoplastic laws show that surrogate models that are compact in memory and accurately predict multiple time-dependent mechanical variables can be obtained at low computational cost. The real-time response provided by the surrogate model for any parameter value allows the implementation of decision-making tools that are particularly interesting for domain experts in the context of parametric studies, and aims at improving the procedure of calibrating material laws.
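The "real-time" online phase described above is cheap precisely because reading one entry of a tensor stored in TT format is a chain of small matrix products, with cost linear in the order of the tensor. A hypothetical sketch (names are ours; rank-1 cores are used only to make the check trivial):

```python
import numpy as np

def tt_eval(cores, idx):
    """Read one entry T[i1, ..., id] of a tensor in TT format.
    Each core has shape (r_prev, n_k, r_next); the entry is a chain of
    small matrix products, so the cost is linear in the order d."""
    v = np.ones((1, 1))
    for core, i in zip(cores, idx):
        v = v @ core[:, i, :]     # (1, r_prev) @ (r_prev, r_next)
    return float(v[0, 0])

# toy rank-1 train encoding T[i, j, k] = a[i] * b[j] * c[k]
a, b, c = np.array([1., 2.]), np.array([3., 4., 5.]), np.array([6., 7.])
cores = [a.reshape(1, -1, 1), b.reshape(1, -1, 1), c.reshape(1, -1, 1)]
print(tt_eval(cores, (1, 2, 0)))   # 2 * 5 * 6 = 60.0
```

In a surrogate built offline, the cores would come from a TT approximation of the quantity of interest over the parameter grid; the online query is just this contraction.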
5

Low rank approximation techniques and reduced order modeling applied to some fluid dynamics problems

Lestandi, Lucas 16 October 2018 (has links)
Numerical simulation has experienced tremendous improvements in the last decades, driven by the massive growth of computing power. Exascale computing has been achieved this year and will allow solving ever more complex problems. But such large systems produce colossal amounts of data, which leads to its own difficulties. Moreover, many engineering problems, such as multiphysics or optimisation and control, require far more power than any computer architecture could achieve within the current scientific computing paradigm. In this thesis, we propose to shift the paradigm in order to break the curse of dimensionality by introducing decompositions and building reduced order models (ROMs) for complex fluid flows. This manuscript is organized into two parts. The first one proposes an extended review of data reduction techniques and intends to bridge the applied mathematics community and the computational mechanics one. Thus, the foundational bivariate separation is studied, including discussions on the equivalence of proper orthogonal decomposition (POD, continuous framework) and singular value decomposition (SVD, discrete matrices). Then a wide review of tensor formats and their approximation is proposed. Such work has already been provided in the literature, but either in separate papers or in a purely applied mathematics framework. Here, we offer the data-enthusiast scientist a comparison of the Canonical, Tucker, Hierarchical and Tensor Train formats, including their approximation algorithms. Their relative benefits are studied both theoretically and numerically thanks to the Python library pydecomp that was developed during this thesis. A careful analysis of the link between continuous and discrete methods is performed. Finally, we conclude that for most applications ST-HOSVD is best when the number of dimensions d is lower than four, and TT-SVD (or its POD equivalent) when d grows larger. The second part is centered on a complex fluid dynamics flow, in particular the singular lid-driven cavity at high Reynolds number. This flow exhibits a series of Hopf bifurcations, which are known to be hard to capture accurately, which is why a detailed analysis was performed both with classical tools and POD. Once this flow has been characterized, time-scaling, a new "physics-based" interpolation ROM, is presented on internal and external flows. This method gives encouraging results while excluding recent advanced developments in the area, such as EIM or Grassmann manifold interpolation.
6

Tensor product methods in numerical simulation of high-dimensional dynamical problems

Dolgov, Sergey 20 August 2014 (has links)
Quantification of stochastic or quantum systems by a joint probability density or wave function is a notoriously difficult computational problem, since the solution depends on all possible states (or realizations) of the system. Due to this combinatorial flavor, even a system containing as few as ten particles may yield as many as $10^{10}$ discretized states. Not even modern supercomputers can cope with this curse of dimensionality head-on when the number of quantum particles, for example, grows to the more interesting order of hundreds. For a long time, the traditional approach was to avoid models formulated in terms of probabilistic functions and to simulate particular system realizations in a randomized process. At different times and in different communities, data-sparse methods came into play. Generally, they aim to define all data points indirectly, by a map from a small number of representers, and to recast all operations (e.g. linear system solution) from the initial data to the effective parameters. The most advanced techniques can be applied (or at least tried) on any given array and do not rely explicitly on its origin. The present work contributes further progress to this area in one particular direction: tensor product methods for the separation of variables. The separation of variables has a long history and is based on the following elementary concept: a function of many variables may be expanded as a product of univariate functions. On the discrete level, a function is encoded by an array of its values, or a tensor. Therefore, instead of a huge initial array, the separation of variables allows one to work with univariate factors at much lower effort. The dissertation contains a short overview of existing tensor representations: the canonical PARAFAC, Hierarchical Tucker and Tensor Train (TT) formats, as well as artificial tensorization, resulting in the Quantized Tensor Train (QTT) approximation method.
The contribution of the dissertation consists of both theoretical constructions and practical numerical algorithms for high-dimensional models, illustrated on the examples of the Fokker-Planck and chemical master equations. Both arise from stochastic dynamical processes in multiconfigurational systems and govern the evolution of the probability function in time. A special focus is put on time propagation schemes and their properties in relation to tensor product methods. We show that these applications yield large-scale systems of linear equations, and we prove analytical separable representations of the functions and operators involved. We propose a new combined tensor format (QTT-Tucker), which descends from the TT format (hence TT algorithms generalize smoothly to it) but provides a complexity reduction by an order of magnitude. We develop a robust iterative solution algorithm that combines the most advantageous properties of classical iterative methods from numerical analysis and of the alternating density matrix renormalization group (DMRG) techniques from quantum physics. Numerical experiments confirm that the new method is preferable to DMRG algorithms: it is as fast as the simplest alternating schemes, but as reliable and accurate as the Krylov methods of linear algebra.
7

Breaking the curse of dimensionality based on tensor train: models and algorithms

Zniyed, Yassine 15 October 2019 (has links)
Massive and heterogeneous data processing and analysis have been clearly identified by the scientific community as key problems in several application areas, popularized under the generic terms of "data science" or "big data". Processing large volumes of data, extracting their hidden patterns, and performing prediction and inference tasks have become crucial in economy, industry and science. Treating each set of measured data independently is clearly a reductive approach: by doing so, "hidden relationships" or inter-correlations between the datasets may be totally missed. Tensor decompositions have received particular attention recently due to their capability to handle a variety of mining tasks applied to massive datasets, being a pertinent framework that takes into account the heterogeneity and multi-modality of the data. In this case, data can be arranged as a D-dimensional array, also referred to as a D-order tensor. In this context, the purpose of this work is that the following properties are present: (i) stable factorization algorithms (not suffering from convergence problems), (ii) a low storage cost (i.e., the number of free parameters must be linear in D), and (iii) a graph-based formalism allowing a simple but rigorous mental visualization of tensor decompositions of high-order tensors, i.e., for D > 3. Therefore, we rely on the tensor train (TT) decomposition to develop new TT factorization algorithms, and new equivalences in terms of tensor modeling, enabling a new strategy of dimensionality reduction and of optimization of coupled least-squares criteria for the estimation of the parameters of interest, named JIRAFE. This methodological work has found applications in the context of multidimensional spectral analysis and relay telecommunication systems.
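Property (ii), storage linear in D, can be checked by simply counting parameters: a full tensor with mode size n has n^D entries, while a TT representation with internal ranks bounded by r stores D cores of size at most r·n·r (an illustrative count under the stated uniform-rank assumption; function names are ours):

```python
def full_params(n, d):
    """Entries of a full n x ... x n tensor of order d."""
    return n ** d

def tt_params(n, d, r):
    """Free parameters of a TT representation with all internal ranks r:
    d cores of shape (r_prev, n, r_next), boundary ranks fixed to 1."""
    if d == 1:
        return n
    return 2 * n * r + (d - 2) * n * r * r

# ten dimensions, mode size 10, modest TT rank 5:
print(full_params(10, 10))  # 10**10 entries
print(tt_params(10, 10, 5)) # 2*10*5 + 8*10*25 = 2100 parameters
```

Adding one more dimension adds one more core (at most n·r·r extra parameters), so the cost grows linearly in D instead of being multiplied by n.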