171

The Spatial 2:1 Resonant Orbits in Multibody Models: Analysis and Applications

Andrew Joseph Binder (18848701) 24 June 2024 (has links)
Within the aerospace community, recent years have seen a marked increase in interest in cislunar space. The study of the dynamics of this regime has accordingly flourished in both quantity and quality, spearheaded by the use of simplified dynamical models to gain insight into the dynamics and to generate viable mission concepts. The most popular and simple of these models, the Circular Restricted Three-Body Problem, has been thoroughly explored to meet these goals (even well prior to the recent spike in interest). Much work has been done investigating periodic orbits within these models, and similar work has been performed on non-periodic transfers into periodic orbits. Studied less is the superposition of these two concepts, that is, using periodic orbits as a way to transit, for example, cislunar space. In this thesis, the development of periodic orbits amenable to transiting is accomplished. Beginning from periodic orbit families already present in the literature, this research finds a novel and useful family of periodic orbits, here dubbed the spatial 2:1-resonant orbit family. Within this newly-discovered family, a multitude of qualitative behaviors interesting to the astrodynamics community are found. Many family members seem accommodating to a diverse set of mission profiles, from purely-unstable family members best suited to use as transfers, to marginally stable ones best suited to longer-term use. This family as a whole is analyzed and catalogued with thorough descriptions of behavior, both quantitative and qualitative. While the Circular Restricted Three-Body Problem serves as an excellent starting point for analysis, trajectories found there must be generalized to higher-fidelity modeling. In this spirit, this thesis also focuses on demonstrating such generalization and putting it into practice using the more sophisticated Elliptic Restricted Three-Body Problem. Documentation of the numerical tools necessary and helpful in accomplishing this generalization is included in this work. Prototypically, the truly 2:1 sidereally-resonant unstable member of the 2:1 family is transitioned into the elliptic problem, as is a nearly-stable L2 halo orbit family member. This new trajectory is paired with a more classical example already present in the literature to show the validity of the methodology. To aid this analysis, symmetries present within the elliptic model are also explored and explained. With this analysis completed, this orbit family is demonstrated to be both interesting and useful when considered under even more realistic modelling. Further work to mature this novel family of orbits is merited, both for use as a fundamental building block for transfers and for more permanent habitation. More broadly, this work aims to further the merger between transfer and orbit, concepts which seem distinct at first but deserve consideration as different flavors of the same idea.
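
For context, the Circular Restricted Three-Body Problem referenced throughout this abstract is governed by the standard nondimensional rotating-frame equations of motion (textbook material, not taken from the thesis itself):

```latex
\ddot{x} - 2\dot{y} = \frac{\partial U^{*}}{\partial x}, \qquad
\ddot{y} + 2\dot{x} = \frac{\partial U^{*}}{\partial y}, \qquad
\ddot{z} = \frac{\partial U^{*}}{\partial z}, \qquad
U^{*} = \frac{x^{2} + y^{2}}{2} + \frac{1-\mu}{r_{1}} + \frac{\mu}{r_{2}},
```

where μ is the mass ratio of the primaries and r₁, r₂ are the distances from the spacecraft to the larger and smaller primary, respectively.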
172

Design of a hybrid direct-iterative parallel sparse linear solver

Gaidamour, Jérémie 08 December 2009 (has links)
This thesis presents a parallel method for solving sparse linear systems which effectively combines direct and iterative techniques using a Schur complement approach. A domain decomposition is built; the interiors of the subdomains are eliminated by a direct method so that an iterative method is needed only for the interface unknowns. The system on the interface (the Schur complement) is solved with an iterative method preconditioned by a global incomplete factorization. A special ordering of the Schur complement allows a scalable preconditioner to be built. Algorithms that minimize the memory peak arising during the construction of the preconditioner are presented. The memory load is balanced through a parallelization scheme that maps multiple subdomains to each processor. The methods are implemented in the HIPS solver, and parallel experimental results are presented on large industrial test cases.
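
For readers unfamiliar with the approach, the underlying Schur complement reduction (standard material, independent of the HIPS implementation) can be summarized as follows: with the unknowns split into interior (I) and interface (Γ) blocks,

```latex
A = \begin{pmatrix} A_{II} & A_{I\Gamma} \\ A_{\Gamma I} & A_{\Gamma\Gamma} \end{pmatrix},
\qquad
S = A_{\Gamma\Gamma} - A_{\Gamma I} A_{II}^{-1} A_{I\Gamma},
\qquad
S\, x_{\Gamma} = b_{\Gamma} - A_{\Gamma I} A_{II}^{-1} b_{I},
```

so the interiors are eliminated through a direct factorization of A_II, and only the smaller (but denser) interface system S is treated iteratively.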
173

Ghosts and machines : regularized variational methods for interactive simulations of multibodies with dry frictional contacts

Lacoursière, Claude January 2007 (has links)
A time-discrete formulation of the variational principle of mechanics is used to provide a consistent theoretical framework for the construction and analysis of low order integration methods. These are applied to mechanical systems subject to mixed constraints and dry frictional contacts and impacts---machines. The framework includes physics-motivated constraint regularization and stabilization schemes. This is done by adding potential energy and Rayleigh dissipation terms in the Lagrangian formulation used throughout. These terms explicitly depend on the value of the Lagrange multipliers enforcing constraints. Having finite energy, the multipliers are thus massless ghost particles. The main numerical stepping method produced with the framework is called SPOOK.

Variational integrators preserve physical invariants globally, exactly in some cases, and approximately but within fixed global bounds in others. This makes it possible to produce realistic physical trajectories even with the low order methods. These are needed in the solution of nonsmooth problems such as dry frictional contacts and, in addition, they are computationally inexpensive. The combination of strong stability, low order, and the global preservation of invariants allows for large integration time steps without losing accuracy on the important and visible physical quantities. SPOOK is thus well-suited for interactive simulations, such as those commonly used in virtual environment applications, because it is fast, stable, and faithful to the physics.

New results include a stable discretization of highly oscillatory terms of constraint regularization; a linearly stable constraint stabilization scheme based on ghost potential and Rayleigh dissipation terms; a single-step, strictly dissipative, approximate impact model; a quasi-linear complementarity formulation of dry friction that is isotropic and solvable for any nonnegative value of the friction coefficients; an analysis of a splitting scheme to solve frictional contact complementarity problems; and a stable, quaternion-based rigid body stepping scheme along with a stable linear approximation thereof. SPOOK includes all these elements. It is linearly implicit and linearly stable; it requires the solution of either one linear system of equations or one mixed linear complementarity problem per regular time step, and two of the same when an impact condition is detected. The changes in energy caused by constraints, impacts, and dry friction are all shown to be strictly dissipative in comparison with the free system. Since all regularization and stabilization parameters are introduced in the physics, they map directly onto physical properties and thus allow modeling of a variety of phenomena, such as constraint compliance.

Tutorial material is included for continuous and discrete-time analytic mechanics, quaternion algebra, complementarity problems, rigid body dynamics, constraint kinematics, and special topics in numerical linear algebra needed in the solution of the stepping equations of SPOOK.

The qualitative and quantitative aspects of SPOOK are demonstrated by comparison with a variety of standard techniques on well known test cases, which are analyzed in detail. SPOOK compares favorably in all these examples. In particular, it handles ill-posed and degenerate problems seamlessly and systematically. An implementation suitable for large scale performance and accuracy testing is left for future work.
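
A generic sketch of the multiplier-dependent regularization alluded to above (an assumed textbook-style form, not necessarily SPOOK's exact discretization): for a holonomic constraint g(q) = 0,

```latex
\mathcal{L}_{\varepsilon}(q,\dot q,\lambda)
  = T(q,\dot q) - V(q) + \lambda^{\top} g(q) + \tfrac{\varepsilon}{2}\,\lambda^{\top}\lambda,
\qquad
\frac{\partial \mathcal{L}_{\varepsilon}}{\partial \lambda} = 0
\;\Rightarrow\;
g(q) + \varepsilon\,\lambda = 0.
```

Eliminating λ yields the stiff penalty term -(1/2ε) g(q)ᵀg(q), so ε plays the role of a constraint compliance, the multipliers carry finite energy (the "ghosts"), and the hard constraint is recovered as ε → 0.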
175

Finite element modeling of electromagnetic radiation and induced heat transfer in the human body

Kim, Kyungjoo 24 September 2013 (has links)
This dissertation develops adaptive hp-Finite Element (FE) technology and a parallel sparse direct solver enabling the accurate modeling of the absorption of Electro-Magnetic (EM) energy in the human head. With a large and growing number of cell phone users, the adverse health effects of EM fields have raised public concerns. Most research that attempts to explain the relationship between exposure to EM fields and its harmful effects on the human body identifies temperature changes due to the EM energy as the dominant source of possible harm. The research presented here focuses on determining the temperature distribution within the human body exposed to EM fields, with an emphasis on the human head. Major challenges in accurately determining the temperature changes lie in the dependence of EM material properties on the temperature. This leads to a formulation that couples the BioHeat Transfer (BHT) and Maxwell equations. The mathematical model is formed by the time-harmonic Maxwell equations weakly coupled with the transient BHT equation. This choice of equations reflects the relevant time scales. With a mobile device operating at a single frequency, EM fields arrive at a steady state in the micro-second range. The heat sources induced by EM fields produce a transient temperature field converging to a steady-state distribution on a time scale ranging from seconds to minutes; this necessitates the transient formulation. Since the EM material properties depend upon the temperature, the equations are fully coupled; however, the coupling is realized weakly due to the different time scales for the Maxwell and BHT equations. The BHT equation is discretized in time with a time step reflecting the thermal scales. After multiple time steps, the temperature field is used to determine the EM material properties and the time-harmonic Maxwell equations are solved. The resulting heat sources are recalculated and the process is continued. Due to the weak coupling of the problems, the corresponding numerical models are established separately. The BHT equation is discretized with H¹ conforming elements, and the Maxwell equations are discretized with H(curl) conforming elements. The complexity of the human head geometry naturally leads to the use of tetrahedral elements, which are commonly employed by unstructured mesh generators. The EM domain, including the head and a radiating source, is terminated by a Perfectly Matched Layer (PML), which is discretized with prismatic elements. The use of high order elements of different shapes and discretization types has motivated the development of a general 3D hp-FE code. In this work, we present new generic data structures and algorithms to perform adaptive local refinements on a hybrid mesh composed of differently shaped elements. A variety of isotropic and anisotropic refinements that preserve conformity of the discretization are designed. The refinement algorithms support one-irregular meshes with the constrained approximation technique. The algorithms are experimentally proven to be deadlock-free. A second contribution of this dissertation lies with a new parallel sparse direct solver that targets linear systems arising from hp-FE methods. The new solver interfaces to the hierarchy of a locally refined mesh to build an elimination ordering for the factorization that reflects the h-refinements. By following mesh refinements, not only the computation of element matrices but also their factorization is restricted to new elements and their ancestors. The solver is parallelized by exploiting two-level task parallelism: tasks are first generated from a parallel post-order tree traversal on the assembly tree; next, those tasks are further refined by using algorithms-by-blocks to gain fine-grained parallelism. The resulting fine-grained tasks are asynchronously executed after their dependencies are analyzed. This approach effectively reduces scheduling overhead and increases flexibility to handle irregular tasks. The solver outperforms the conventional general sparse direct solver for a class of problems formulated by high order FEs. Finally, numerical results for a 3D coupled BHT with Maxwell equations are presented. The solutions of this Maxwell code have been verified using the analytic Mie series solutions. Starting with simple spherical geometry, parametric studies are conducted on realistic head models for a typical frequency band (900 MHz) of mobile phones.
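
A toy sketch of the staggered (weakly coupled) solution process described above, reduced to a lumped zero-dimensional model so that it runs as-is; all function names and physical values are illustrative placeholders, not the dissertation's FE formulation:

```python
# Schematic staggered coupling: many transient bio-heat (BHT) steps per
# occasional time-harmonic EM re-solve with temperature-dependent properties.

def em_conductivity(T):
    # hypothetical temperature-dependent electrical conductivity
    return 0.5 * (1.0 + 0.02 * (T - 37.0))

def em_heat_source(sigma, E_amplitude=1.0):
    # time-averaged Joule heating ~ 0.5 * sigma * |E|^2 (stands in for the Maxwell solve)
    return 0.5 * sigma * E_amplitude**2

def step_bht(T, Q, dt, k=0.1, T_blood=37.0):
    # explicit Euler step of a lumped bio-heat balance: perfusion cooling + EM source
    return T + dt * (-k * (T - T_blood) + Q)

def coupled_em_bht(T0=37.0, dt=0.5, n_steps=200, em_update_every=20):
    T = T0
    Q = em_heat_source(em_conductivity(T))       # initial "EM solve"
    for n in range(n_steps):
        T = step_bht(T, Q, dt)                   # thermal stepping on the slow scale
        if (n + 1) % em_update_every == 0:       # refresh EM fields only occasionally
            Q = em_heat_source(em_conductivity(T))
    return T

print(f"final lumped temperature: {coupled_em_bht():.2f} degC")
```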
176

Preconditioned Newton methods for ill-posed problems

Langer, Stefan 21 June 2007 (has links)
No description available.
177

Optimizations of hybrid sparse linear solvers relying on Schur complement and domain decomposition approaches

Casadei, Astrid 19 October 2015 (has links)
In this thesis, we focus on the parallel solution of large sparse linear systems. Our main interest is in direct-iterative hybrid solvers such as HIPS, MaPHyS, PDSLIN or ShyLU, which rely on domain decomposition and Schur complement approaches. Although these solvers are not as time and space consuming as direct methods, they still suffer from serious overheads. In a first part, we thus present the existing techniques for reducing the memory consumption, and we propose a new method which does not impact the numerical robustness of the preconditioner. This technique reduces the memory peak through a special scheduling of the computation, allocation, and freeing tasks, in particular in the Schur coupling blocks of the matrix. In a second part, we focus on the load balancing of the domain decomposition in a parallel context. This problem consists in partitioning the adjacency graph of the matrix into as many domains as desired. We point out that a good load balancing for the most expensive steps of a hybrid solver such as MaPHyS relies on balancing both the interior nodes and the interface nodes of the domains. Until now, however, graph partitioners such as MeTiS or Scotch optimized only the first criterion (the balancing of interior nodes) in the context of sparse matrix ordering. We propose different variations of the existing algorithms to improve the balancing of interface nodes and interior nodes simultaneously. All our changes are implemented in the Scotch partitioner. We present our results on a large collection of matrices coming from real industrial cases.
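
As a small illustration of the two load-balancing criteria discussed above (interior size versus local interface size per domain), the following sketch evaluates both for a given partition of a plain adjacency-list graph; it is explanatory only and unrelated to the Scotch or MaPHyS code bases:

```python
from collections import defaultdict

def partition_balance(adjacency, part):
    """adjacency: dict node -> iterable of neighbours; part: dict node -> domain id.
    Returns per-domain interior sizes and local interface sizes."""
    interior = defaultdict(int)
    interface = defaultdict(int)
    for node, neighbours in adjacency.items():
        d = part[node]
        # a node belongs to its domain's interface if it touches another domain
        if any(part[nb] != d for nb in neighbours):
            interface[d] += 1
        else:
            interior[d] += 1
    return dict(interior), dict(interface)

# tiny example: a 6-node path graph split into two domains
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
part = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}
print(partition_balance(adj, part))
# -> interiors {0: 2, 1: 2}, interfaces {0: 1, 1: 1}: balanced on both criteria
```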
178

Solving dense linear systems on accelerated multicore architectures

Rémy, Adrien 08 July 2015 (has links)
In this PhD thesis, we study algorithms and implementations to accelerate the solution of dense linear systems by using hybrid architectures with multicore processors and accelerators. We focus on methods based on the LU factorization, and our code development takes place in the context of the MAGMA library. We study different hybrid CPU/GPU solvers based on the LU factorization which aim at reducing the communication overhead due to pivoting. The first is based on a communication-avoiding pivoting strategy (CALU), while the second uses a random preconditioning of the original system to avoid pivoting (RBT). We show that both of these methods outperform the solver using LU factorization with partial pivoting when implemented on hybrid multicore/GPU architectures. We also present new solvers based on randomization for hybrid architectures with Nvidia GPUs or Intel Xeon Phi coprocessors. With this method, we can avoid the high cost of pivoting while remaining numerically stable in most cases. The highly parallel architecture of these accelerators allows us to perform the randomization of our linear system at a very low computational cost compared to the time of the factorization. Finally, we investigate the impact of non-uniform memory accesses (NUMA) on the solution of dense general linear systems using an LU factorization algorithm. In particular, we illustrate how an appropriate placement of the threads and data on a NUMA architecture can improve the performance of the panel factorization and consequently accelerate the global LU factorization. We show how these placements can improve the performance when applied to hybrid multicore/GPU solvers.
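
A self-contained toy illustration of the random butterfly transformation (RBT) idea mentioned above, using one-level butterflies and dense numpy on a small matrix; the MAGMA implementation uses recursive butterflies and tiled GPU kernels, so the names and parameter choices here are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def butterfly(n):
    """One-level random butterfly matrix (n must be even)."""
    r0 = np.exp(rng.uniform(-0.05, 0.05, n // 2))   # random diagonal entries near 1
    r1 = np.exp(rng.uniform(-0.05, 0.05, n // 2))
    R0, R1 = np.diag(r0), np.diag(r1)
    return np.block([[R0, R1], [R0, -R1]]) / np.sqrt(2.0)

def lu_nopivot_solve(A, b):
    """Gaussian elimination without pivoting (only reasonable after randomization)."""
    A = A.astype(float)
    b = b.astype(float)
    n = len(b)
    for k in range(n - 1):
        m = A[k + 1:, k] / A[k, k]
        A[k + 1:, k + 1:] -= np.outer(m, A[k, k + 1:])
        b[k + 1:] -= m * b[k]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

n = 8
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)
U, V = butterfly(n), butterfly(n)
Ar = U.T @ A @ V                   # randomized matrix: pivoting now unnecessary w.h.p.
y = lu_nopivot_solve(Ar, U.T @ b)  # factor and solve without pivoting
x = V @ y                          # recover the solution of the original system
print(np.allclose(A @ x, b))       # -> True (up to rounding)
```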
179

Algorithms for computing the optimal Geršgorin-type localizations

Milićević, Srđan 27 July 2020 (has links)
There are numerous ways to localize eigenvalues. One of the best-known results is that the spectrum of a given matrix A ∈ ℂ^{n,n} is a subset of the union of discs centered at the diagonal elements, whose radii equal the sum of the absolute values of the off-diagonal elements of the corresponding row of the matrix. This result (Geršgorin's theorem, 1931) is one of the most important and elegant ways of localizing eigenvalues ([63]). Among all Geršgorin-type sets, the minimal Geršgorin set gives the sharpest and most precise localization of the spectrum ([39]). In this thesis, new algorithms for computing an efficient and accurate approximation of the minimal Geršgorin set are presented.
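
As a concrete illustration of the classical Geršgorin discs referred to in the abstract (the minimal Geršgorin set itself requires substantially more machinery), a few lines of numpy suffice:

```python
import numpy as np

def gershgorin_discs(A):
    """Return (center, radius) pairs of the Gershgorin discs of a square matrix."""
    A = np.asarray(A, dtype=complex)
    centers = np.diag(A)
    radii = np.sum(np.abs(A), axis=1) - np.abs(centers)  # off-diagonal row sums
    return list(zip(centers, radii))

A = np.array([[ 4.0,  1.0, 0.5],
              [ 0.2, -3.0, 0.1],
              [ 0.0,  0.3, 1.0]])
for c, r in gershgorin_discs(A):
    print(f"disc centered at {complex(c)} with radius {float(r):.2f}")
# every eigenvalue of A lies in the union of these three discs
```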
180

On the Efficient Utilization of Dense Nonlocal Adjacency Information In Graph Neural Networks

Bünger, Dominik 14 December 2021 (has links)
Over the past few years, graph learning - the subdomain of machine learning on graph data - has taken big leaps forward through the development of specialized Graph Neural Networks (GNNs) that have mathematical foundations in spectral graph theory. In addition to natural graph data, these methods can be applied to non-graph data sets by constructing a graph artificially, using a predefined notion of adjacency between samples. The state of the art is to connect each sample to only a small number of neighbors in order to simultaneously mimic the sparse behavior of natural graphs, play into the strengths of existing GNN methods, and avoid the quadratic scaling in the number of nodes that would make the approach infeasible for large problem sizes. In this thesis, we shine a light on the alternative construction of kernel-based fully-connected graphs. Here the connections of each sample explicitly quantify the similarities to all other samples. Hence the graph contains a quadratic number of edges which encode local and non-local neighborhood information. Though this approach is well studied in other settings, including the solution of partial differential equations, it is typically dismissed in machine learning nowadays because of its dense adjacency matrices. We thus dedicate a large portion of this work to showcasing numerical techniques for fast evaluations, especially eigenvalue computations, in important special cases where samples are described by low-dimensional feature vectors (e.g., three-dimensional point clouds) or by a small set of categorical attributes. We then continue to investigate how this dense adjacency information can be utilized in graph learning settings. In particular, we present our own proposed transductive learning method, a version of a Graph Convolutional Network (GCN) designed towards the spectral and spatial properties of dense graphs. We furthermore outline the application of kernel-based adjacency matrices in the speedup of the successful PointNet++ architecture. Throughout this work, we evaluate our methods in extensive numerical experiments. In addition to the empirical accuracy of our neural network tasks, we focus on competitive runtimes in order to decrease the computational and energy cost of our methods.
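
A minimal numpy/scipy sketch of the kernel-based fully-connected adjacency described above, evaluated by brute force; the thesis is concerned precisely with avoiding this quadratic cost through structured fast evaluations, so this only illustrates the construction itself:

```python
import numpy as np
from scipy.linalg import eigh

def gaussian_adjacency(X, sigma=1.0):
    """Dense kernel adjacency: W[i, j] = exp(-||x_i - x_j||^2 / (2 sigma^2))."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T       # pairwise squared distances
    return np.exp(-np.maximum(d2, 0.0) / (2.0 * sigma**2))

# small random 3-D point cloud; every pair of samples is connected
X = np.random.default_rng(1).standard_normal((200, 3))
W = gaussian_adjacency(X, sigma=0.8)
d = W.sum(axis=1)
L = np.diag(d) - W                                       # (unnormalized) graph Laplacian
vals = eigh(L, eigvals_only=True, subset_by_index=[0, 5])  # six smallest eigenvalues
print(vals)
```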
