61 |
A GPU Accelerated Tensor Spectral Method for Subspace Clustering
Pai, Nithish, January 2016 (has links) (PDF)
In this thesis we consider the problem of clustering data lying in a union of subspaces using spectral methods. Though the data generated may have high dimensionality, in many applications, such as motion segmentation and illumination invariant face clustering, the data resides in a union of subspaces of small dimension. Furthermore, for a number of classification and inference problems, it is often useful to identify these subspaces and work with the data in this smaller dimensional manifold. If the observations in each cluster were distributed around a centroid, applying spectral clustering to an affinity matrix built using distance based similarity measures between the data points has been used successfully to solve the problem. But it has been observed that using such pair-wise distance based measures between the data points to construct a similarity matrix is not sufficient to solve the subspace clustering problem. Hence, a major challenge is to find a similarity measure that can capture the information of the subspace the data lies in.
This is the motivation to develop methods that use an affinity tensor built by calculating similarity between multiple data points. One can then use spectral methods on these tensors to solve the subspace clustering problem. In order to keep the algorithm computationally feasible, one can employ column sampling strategies. However, the computational cost of performing the tensor factorization increases very quickly with the sampling rate. Fortunately, the advances in GPU computing have made it possible to perform many linear algebra operations several orders of magnitude faster than with traditional CPU and multicore computing.
In this work, we develop parallel algorithms for subspace clustering in a GPU computing environment. We show that this gives us a significant speedup over the implementations on the CPU, which allows us to sample a larger fraction of the tensor and thereby achieve better accuracies. We empirically analyze the performance of these algorithms on a number of synthetically generated subspace configurations. We finally demonstrate the effectiveness of these algorithms on motion segmentation, handwritten digit clustering and illumination invariant face clustering, and show that their performance is comparable with state of the art approaches.
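As background, the pairwise (matrix) spectral clustering baseline that the thesis generalises to tensors can be sketched in a few lines. This is illustrative code, not the thesis's pipeline: the Gaussian affinity, the normalised Laplacian and the small k-means below are common textbook choices.

```python
import numpy as np

def spectral_cluster(points, k):
    """Plain pairwise spectral clustering: Gaussian affinity matrix,
    normalised Laplacian embedding, then a small k-means.  This is the
    distance-based baseline that fails for subspace-structured data."""
    n = len(points)
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / 2.0)
    np.fill_diagonal(W, 0.0)
    dinv = 1.0 / np.sqrt(np.maximum(W.sum(1), 1e-12))
    L = np.eye(n) - dinv[:, None] * W * dinv[None, :]
    emb = np.linalg.eigh(L)[1][:, :k]  # eigenvectors of the k smallest eigenvalues
    # Farthest-point initialisation followed by Lloyd iterations.
    centers = emb[[0]]
    for _ in range(k - 1):
        dist = ((emb[:, None] - centers[None]) ** 2).sum(-1).min(1)
        centers = np.vstack([centers, emb[int(dist.argmax())]])
    for _ in range(30):
        labels = ((emb[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
        centers = np.vstack([emb[labels == j].mean(0) if (labels == j).any()
                             else centers[j] for j in range(k)])
    return labels
```

The tensor method replaces the pairwise affinity `W` with similarities computed over groups of points, which is what makes it sensitive to subspace structure.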
|
62 |
ARITHMETIC STRUCTURES IN RANDOM SETS
MATHEUS SECCO TORRES DA SILVA, 08 September 2020 (has links)
In this Ph.D. thesis, we study bounds for the deviation probabilities of a random variable X that counts the number of edges of a hypergraph induced by a random m-element subset of its vertex set. We consider two contexts: the first corresponds to hypergraphs with some kind of regularity, whereas the second addresses hypergraphs that are in some sense far from being regular. It is possible to apply these results to discrete structures such as the set of k-term arithmetic progressions in the additive group of integers modulo a prime and in the set of the first N positive integers. Furthermore, we also deduce results for the case when the random subset is generated by including each vertex of the hypergraph independently with probability p.
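The random variable studied here can be experimented with directly. The sketch below (a toy Monte Carlo illustration, not the thesis's proof technique) takes the hypergraph whose edges are the 3-term arithmetic progressions in Z_p and estimates a deviation probability for the edge count induced by a random m-element subset; all function names are invented for this example.

```python
import random

def count_3aps(subset, p):
    """Number of 3-term arithmetic progressions (a, a+d, a+2d) with d != 0
    inside a subset of Z_p, counted over ordered (a, d) pairs."""
    s = set(subset)
    return sum(1 for a in s for d in range(1, p)
               if (a + d) % p in s and (a + 2 * d) % p in s)

def deviation_probability(p, m, t, trials=1000, seed=0):
    """Monte Carlo estimate of P(|X - mean(X)| >= t), where X counts the
    3-APs induced by a uniformly random m-element subset of Z_p."""
    rng = random.Random(seed)
    counts = [count_3aps(rng.sample(range(p), m), p) for _ in range(trials)]
    mean = sum(counts) / trials
    return sum(abs(c - mean) >= t for c in counts) / trials
```

The thesis proves rigorous tail bounds for such counts; the simulation only shows what the distribution of X looks like for small p and m.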
|
63 |
THE DIFFERENTIAL EQUATIONS METHOD AND INDEPENDENT SETS IN HYPERGRAPHS
IGOR ALBUQUERQUE ARAUJO, 18 September 2019 (has links)
In this dissertation, we will discuss Wormald's differential equations method, which has recently had many intriguing applications in Combinatorics. This method explores the interplay between discrete and continuous mathematics and can be used to prove concentration in a number of discrete random processes. In particular, we will discuss the H-free process and the random greedy algorithm to obtain independent sets in hypergraphs. These processes have been extensively studied over the past few years, culminating in the recent breakthrough of Tom Bohman and Patrick Bennett in 2016, who obtained a lower bound for hypergraphs with certain density conditions. We not only reproduce their proof but also obtain a stronger result (extending their result to sparser hypergraphs), and we analyze the case of linear hypergraphs, in order to make progress towards a conjecture by Johnson and Pinto concerning the Q2-free process in the hypercube Qd.
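The random greedy independent set process analysed in the abstract above is easy to simulate. This is an illustrative sketch of the process itself (not of the differential-equations analysis): vertices are visited in uniformly random order and a vertex is kept whenever adding it completes no edge.

```python
import random

def random_greedy_independent_set(n, edges, seed=0):
    """Random greedy process on an n-vertex hypergraph: visit the vertices
    in uniformly random order, adding a vertex whenever the current set
    plus that vertex contains no edge entirely.  `edges` is a list of
    frozensets of vertices."""
    rng = random.Random(seed)
    order = list(range(n))
    rng.shuffle(order)
    chosen = set()
    for v in order:
        candidate = chosen | {v}
        if all(not e <= candidate for e in edges):
            chosen.add(v)
    return chosen
```

The differential equations method tracks the expected trajectory of quantities such as `len(chosen)` over the run of exactly this kind of process.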
|
64 |
Memory and performance issues in parallel multifrontal factorizations and triangular solutions with sparse right-hand sides
Rouet, François-Henry, 17 October 2012 (has links)
We consider the solution of very large sparse systems of linear equations on parallel architectures. In this context, memory is often a bottleneck that prevents or limits the use of direct solvers, especially those based on the multifrontal method. This work focuses on memory and performance issues in the two most memory- and computation-intensive phases of direct methods, namely the numerical factorization and the solution phase. In the first part we consider the solution phase with sparse right-hand sides, and in the second part we consider the memory scalability of the multifrontal factorization.
In the first part, we focus on the triangular solution phase with multiple sparse right-hand sides, which appear in numerous applications. We especially emphasize the computation of entries of the inverse, where both the right-hand sides and the solution are sparse. We first present several storage schemes that enable a significant compression of the solution space, both in a sequential and a parallel context. We then show that the way the right-hand sides are partitioned into blocks strongly influences the performance, and we consider two different settings: the out-of-core case, where the aim is to reduce the number of accesses to the factors, which are stored on disk, and the in-core case, where the aim is to reduce the computational cost. Finally, we show how to enhance the parallel efficiency. In the second part, we consider the parallel multifrontal factorization. We show that controlling the active memory specific to the multifrontal method is critical, and that commonly used mapping techniques usually fail to do so: they cannot achieve a high memory scalability, i.e., they dramatically increase the amount of memory needed by the factorization when the number of processors increases. We propose a class of "memory-aware" mapping and scheduling algorithms that aim at maximizing performance while enforcing a user-given memory constraint, and that provide robust memory estimates before the factorization. These techniques have revealed performance issues in the parallel dense kernels used at each step of the factorization, and we have proposed several algorithmic improvements. The ideas presented throughout this study have been implemented within the MUMPS (MUltifrontal Massively Parallel Solver) solver and experimented on large matrices (up to a few tens of millions of unknowns) and massively parallel architectures (up to a few thousand cores). They have been shown to improve the performance and the robustness of the code, and will be available in a future release.
Some of the ideas presented in the first part have also been implemented within the PDSLin (Parallel Domain decomposition Schur complement based Linear solver) solver.
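The key structural fact behind triangular solves with sparse right-hand sides is that the solution of L y = e_k is itself sparse, and only the part of the factor reachable from the nonzeros of the right-hand side needs to be touched. A deliberately minimal sketch of this idea (not the MUMPS implementation; the column-dict data layout is invented for illustration):

```python
def sparse_unit_lower_solve(lcols, n, k):
    """Forward substitution solving L y = e_k for a unit lower-triangular
    sparse matrix L given by columns lcols[j] = {i: L[i, j]} with i > j.
    The solution is kept as a dict of nonzeros: columns whose entry of y
    is still zero are skipped entirely, which is how a sparse right-hand
    side reduces the work compared to a dense one."""
    y = {k: 1.0}
    for j in range(k, n):  # columns before k cannot contribute
        yj = y.get(j)
        if yj is None:
            continue  # y_j = 0: this column of L is never touched
        for i, lij in lcols.get(j, {}).items():
            y[i] = y.get(i, 0.0) - lij * yj
    return y
```

With A = LU, an individual inverse entry A^{-1}[i, k] is then obtained by a second, backward solve U x = y, keeping only component i; the thesis studies how to schedule and block many such sparse solves.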
|
65 |
Extremal hypergraph theory and algorithmic regularity lemma for sparse graphs
Hàn, Hiêp, 18 October 2011 (has links)
Once invented as an auxiliary lemma for Szemerédi's Theorem, the regularity lemma has become one of the most powerful tools in graph theory over the last three decades, and has been widely applied in several fields of mathematics and theoretical computer science. Roughly speaking, the lemma asserts that dense graphs can be approximated by a constant number of bipartite quasi-random graphs; thus, it narrows the gap between deterministic and random graphs. Since the latter are much easier to handle, this information is often very useful. With the regularity lemma as the starting point, two roads diverge in this thesis, aiming at applications of the concept of regularity on the one hand and clarification of several aspects of this concept on the other. In the first part we deal with questions from extremal hypergraph theory: foremost, we use a generalised version of Szemerédi's regularity lemma for uniform hypergraphs to prove asymptotically sharp bounds on the minimum degree which ensure the existence of Hamilton cycles in uniform hypergraphs. Moreover, we derive (asymptotically sharp) bounds on minimum degrees of uniform hypergraphs which guarantee the appearance of perfect and nearly perfect matchings. In the second part a novel notion of regularity is introduced which generalises Szemerédi's original concept. Concerning this new concept we provide a polynomial time algorithm which computes a regular partition for given graphs without too dense induced subgraphs. As an application we show that for the above mentioned class of graphs the problem MAX-CUT can be approximated within a multiplicative factor of (1+o(1)) in polynomial time. Furthermore, pursuing the line of research of Chung, Graham and Wilson on quasi-random graphs, we study the notion of quasi-randomness resulting from the new notion of regularity and provide a characterisation in terms of eigenvalue separation of the normalised Laplacian matrix.
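The spectral characterisation mentioned at the end of the abstract can be computed directly for small graphs. This sketch (illustrative, not the thesis's construction) returns the spectrum of the normalised Laplacian; for dense quasi-random graphs the non-trivial eigenvalues cluster near 1, well separated from the trivial eigenvalue 0.

```python
import numpy as np

def normalised_laplacian_spectrum(adj):
    """Sorted eigenvalues of I - D^{-1/2} A D^{-1/2} for a symmetric 0/1
    adjacency matrix.  A large separation between the trivial eigenvalue 0
    and the remaining eigenvalues is a spectral witness of
    quasi-randomness."""
    deg = adj.sum(axis=1)
    dinv = 1.0 / np.sqrt(np.maximum(deg, 1e-12))
    lap = np.eye(len(adj)) - dinv[:, None] * adj * dinv[None, :]
    return np.sort(np.linalg.eigvalsh(lap))
```

For the complete graph K_n, an extreme example of a quasi-random graph, the non-trivial eigenvalues are all exactly n/(n-1).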
|
66 |
Regular partitions of hypergraphs and property testing
Schacht, Mathias, 28 October 2010 (has links)
About 30 years ago Szemerédi developed the regularity method for graphs, which was a key ingredient in the proof of his famous density result concerning the upper density of subsets of the integers which contain no arithmetic progression of fixed length. Roughly speaking, the regularity lemma asserts that the vertex set of every graph can be partitioned into a constant number of classes such that almost all of the induced bipartite graphs are quasi-random, i.e., they mimic the behavior of random bipartite graphs of the same density. The regularity lemma has had many applications, mainly in extremal graph theory, but also in theoretical computer science and additive number theory, and it is considered one of the central tools in modern graph theory. A few years ago the regularity method was extended to other discrete structures; in particular, extensions for uniform hypergraphs and sparse graphs were obtained. The main goal of this thesis is the further development of the regularity method and its application to problems in theoretical computer science. In particular, we show that hereditary, decidable properties of hypergraphs, that is, properties closed under isomorphism and vertex removal, are testable: there exists a randomised algorithm with constant running time which, with high probability, distinguishes between hypergraphs displaying the property and those which are "far" from it.
|
67 |
Rainbow Colouring and Some Dimensional Problems in Graph Theory
Rajendraprasad, Deepak, January 2013 (has links) (PDF)
This thesis touches three different topics in graph theory, namely, rainbow colouring, product dimension and boxicity.
Rainbow colouring. An edge colouring of a graph is called a rainbow colouring if every pair of vertices is connected by at least one path in which no two edges are coloured the same. The rainbow connection number of a graph is the minimum number of colours required to rainbow colour it. In this thesis we give upper bounds on the rainbow connection number based on graph invariants like minimum degree, vertex connectivity, and radius. We also give some computational complexity results for special graph classes.
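As an illustration of the definition above, whether a given edge colouring is a rainbow colouring can be decided by a brute-force search over (vertex, used-colour-set) states. This is illustrative code, unrelated to the thesis's bounds, and is exponential in the number of colours, so it is only usable on tiny examples.

```python
from collections import deque

def is_rainbow_connected(n, coloured_edges):
    """Decide whether an edge colouring is a rainbow colouring: every pair
    of vertices must be joined by a path repeating no colour.
    `coloured_edges` maps frozenset({u, v}) -> colour."""
    adj = {v: [] for v in range(n)}
    for e, c in coloured_edges.items():
        u, v = tuple(e)
        adj[u].append((v, c))
        adj[v].append((u, c))
    def reach(start):
        # BFS over states (vertex, set of colours used so far).
        seen = {(start, frozenset())}
        queue = deque(seen)
        hit = {start}
        while queue:
            v, used = queue.popleft()
            for w, c in adj[v]:
                if c not in used and (w, used | {c}) not in seen:
                    seen.add((w, used | {c}))
                    hit.add(w)
                    queue.append((w, used | {c}))
        return hit
    return all(reach(s) == set(range(n)) for s in range(n))
```

For example, a path on three vertices needs two distinct colours to be rainbow connected, so its rainbow connection number is 2.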
Product dimension. The product dimension or Prague dimension of a graph G is the smallest natural number k such that G is an induced subgraph of a direct product of k complete graphs. In this thesis, we give upper bounds on the product dimension for forests, graphs of bounded treewidth, and graphs of bounded degeneracy.
Boxicity and cubicity. The boxicity (cubicity) of a graph G is the smallest natural number k such that G can be represented as an intersection graph of axis-parallel rectangular boxes (axis-parallel unit cubes) in R^k. In this thesis, we study the boxicity and the cubicity of Cartesian, strong and direct products of graphs and give estimates on the boxicity and the cubicity of a product graph based on invariants of the component graphs.
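The box representation underlying the definition above is easy to make concrete: two axis-parallel boxes intersect exactly when their intervals overlap in every coordinate. A small illustrative sketch (not from the thesis):

```python
def box_intersection_graph(boxes):
    """Intersection graph of axis-parallel boxes in R^k.  Each box is a
    list of k closed intervals (lo, hi); two boxes are adjacent iff their
    intervals overlap in every coordinate.  A graph has boxicity <= k iff
    it arises this way from some k-dimensional boxes."""
    def meets(a, b):
        return all(lo1 <= hi2 and lo2 <= hi1
                   for (lo1, hi1), (lo2, hi2) in zip(a, b))
    n = len(boxes)
    return {(i, j) for i in range(n) for j in range(i + 1, n)
            if meets(boxes[i], boxes[j])}
```

In one dimension this reduces to interval graphs, which is why boxicity can be seen as a measure of how far a graph is from being an interval graph.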
Separation dimension. The separation dimension of a hypergraph H is the smallest natural number k for which the vertices of H can be embedded in R^k such that any two disjoint edges of H can be separated by a hyperplane normal to one of the axes. While studying the boxicity of line graphs, we noticed that a box representation of the line graph of a hypergraph has a nice geometric interpretation. Hence we introduced this new parameter and did an extensive study of the same.
|
68 |
Models and algorithms applied to metabolism: from revealing the responses to perturbations towards the design of microbial consortia
Julien-Laferriere, Alice, 08 December 2016 (has links)
In this PhD work, we proposed to model metabolism.
Our focus was to develop generic models that are not specific to one organism or condition, but are instead based on general assumptions that we tried to validate using data from the literature. We first present TOTORO, which uses a qualitative measurement of concentrations in two steady-states to infer the reaction changes that lead to differences in metabolite pools between the two conditions. TOTORO enumerates all sub-(hyper)graphs that represent a sufficient explanation for the observed differences in concentrations. We exploit a dataset of yeast (Saccharomyces cerevisiae) exposed to cadmium and show that we manage to retrieve the pathways known to be used by the organism. We then address the same issue using a constraint-based programming framework, called KOTOURA, which allows one to infer more quantitatively the reaction changes during the perturbed state. We use in this case exact concentration measurements and the stoichiometric matrix, and show on simulated datasets that the overall variations of reaction fluxes can be captured by our formulation. Finally, we propose MULTIPUS, a method to infer microbial communities and metabolic roads to produce specific target compounds from a set of defined substrates. We use in this case a weighted directed hypergraph. We apply MULTIPUS to the production of antibiotics using a consortium composed of an archaeon and an actinobacterium, and show that their metabolic capacities are complementary. We then infer for another community the excretion of an inhibitory product (acetate) by a 1,3-propanediol (PDO) producer and its consumption by a methanogenic archaeon.
|
69 |
Nonlinear Perron-Frobenius theory and mean-payoff zero-sum stochastic games
Hochart, Antoine, 14 November 2016 (has links)
Zero-sum stochastic games have a recursive structure encompassed in their dynamic programming operator, the so-called Shapley operator. The latter is a useful tool to study the asymptotic behavior of the average payoff per time unit. In particular, the mean payoff exists and is independent of the initial state as soon as the ergodic equation (a nonlinear eigenvalue equation involving the Shapley operator) has a solution. The solvability of the latter equation in finite dimension is a central question in nonlinear Perron-Frobenius theory, and the main focus of the present thesis. Several known classes of Shapley operators can be characterized by properties based entirely on the order structure or the metric structure of the space. We first extend this characterization to "payment-free" Shapley operators, that is, operators arising from games without stage payments. This is derived from a general minimax formula for functions homogeneous of degree one and nonexpansive with respect to a given weak Minkowski norm. Next, we address the problem of the solvability of the ergodic equation for all additive perturbations of the payment function. This problem extends the notion of ergodicity for finite Markov chains. With a bounded payment function, this "ergodicity" property is characterized by the uniqueness, up to an additive constant, of the fixed point of a payment-free Shapley operator.
We give a combinatorial solution in terms of hypergraphs to this problem, as well as to other related fixed-point existence problems, and we infer complexity results. Then, we use the theory of accretive operators to generalize the hypergraph condition to all Shapley operators, including those for which the payment function is not bounded. Finally, we consider the problem of the uniqueness, up to an additive constant, of the nonlinear eigenvector. We first show that uniqueness holds for a generic additive perturbation of the payments. Then, in the framework of perfect information and finite action spaces, we provide an additional geometric description of the perturbations for which uniqueness occurs. As an application, we obtain a perturbation scheme allowing one to solve degenerate instances of stochastic games by policy iteration.
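The Shapley operator and the mean payoff it governs can be illustrated on a toy game. The sketch below (invented example data, not from the thesis) applies the operator T repeatedly and uses the standard fact that T^N(0)/N approaches the mean payoff vector; when the ergodic equation T(u) = lambda*1 + u has a solution, every entry tends to the same value lambda.

```python
import numpy as np

def shapley_step(v, payoffs, transitions):
    """One dynamic programming step of a zero-sum stochastic game:
    (T v)_i = max_a min_b [ r(i, a, b) + P(i, a, b) . v ], where
    payoffs[i][a][b] is a stage payoff and transitions[i][a][b] is a
    probability vector over states."""
    return np.array([
        max(min(payoffs[i][a][b] + transitions[i][a][b] @ v
                for b in range(len(payoffs[i][a])))
            for a in range(len(payoffs[i])))
        for i in range(len(v))])

def mean_payoff(payoffs, transitions, n_states, iters=2000):
    """Estimate the mean payoff vector as T^N(0) / N."""
    v = np.zeros(n_states)
    for _ in range(iters):
        v = shapley_step(v, payoffs, transitions)
    return v / iters
```

For a deterministic two-state cycle with stage payoffs 1 and 3, both entries of the estimate converge to the common mean payoff 2, in line with the ergodic equation having a solution.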
|
70 |
Reproducible geoscientific modelling with hypergraphs
Semmler, Georg, 04 September 2023 (links)
Reproducing the construction of a geoscientific model is a hard task. It requires the availability of all input data and an exact description of how the construction was performed. In practice, data availability and the exactness of the description are often lacking. As part of this thesis I introduce a conceptual framework describing how geoscientific model constructions can be described as directed acyclic hypergraphs, how such recorded construction graphs can be used to reconstruct the model, and how repeated constructions can be used to verify the reproducibility of a geoscientific model construction process. In addition I present a software prototype implementing these concepts. The prototype is tested with three different case studies, including a geophysical measurement analysis, a subsurface model construction and the calculation of a hydrological balance model.

1. Introduction
1.1. Survey on Reproducibility and Automation for Geoscientific Model Construction
1.2. Motivating Example
1.3. Previous Work
1.4. Problem Description
1.5. Structure of this Thesis
1.6. Results Accomplished by this Thesis
2. Terms, Definitions and Requirements
2.1. Terms and Definitions
2.1.1. Geoscientific model
2.1.2. Reproducibility
2.1.3. Realisation
2.2. Requirements
3. Related Work
3.1. Overview
3.2. Geoscientific Data Storage Systems
3.2.1. PostGIS and Similar Systems
3.2.2. Geoscience in Space and Time (GST)
3.3. Geoscientific Modelling Software
3.3.1. gOcad
3.3.2. GemPy
3.4. Experimentation Management Software
3.4.1. DataLad
3.4.2. Data Version Control (DVC)
3.5. Reproducible Software Builds
3.6. Summarised Releated Work
4. Concept
4.1. Construction Hypergraphs
4.1.1. Reproducibility Based on Construction Hypergraphs
4.1.2. Equality definitions
4.1.3. Design Constraints
4.2. Data Handling
5. Design
5.1. Application Structure
5.1.1. Choice of Application Architecture for GeoHub
5.2. Extension Mechanisms
5.2.1. Overview
5.2.2. A Shared Library Based Extension System
5.2.3. Inter-Process Communication Based Extension System
5.2.4. An Extension System Based on a Scripting Language
5.2.5. An Extension System Based on a WebAssembly Interface
5.2.6. Comparison
5.3. Data Storage
5.3.1. Overview
5.3.2. Stored Data
5.3.3. Potential Solutions
5.3.4. Model Versioning
5.3.5. Transactional Security
6. Implementation
6.1. General Application Structure
6.2. Data Storage
6.2.1. Database
6.2.2. User-provided Data-processing Extensions
6.3. Operation Executor
6.3.1. Construction Step Descriptions
6.3.2. Construction Step Scheduling
6.3.3. Construction Step Execution
7. Case Studies
7.1. Overview
7.2. Geophysical Model of the BHMZ block
7.2.1. Provided Data and Initial Situation
7.2.2. Construction Process Description
7.2.3. Reproducibility
7.2.4. Identified Problems and Construction Process Improvements
7.2.5. Recommendations
7.3. Three-Dimensional Subsurface Model of the Kolhberg Region
7.3.1. Provided Data and Initial Situation
7.3.2. Construction Process Description
7.3.3. Reproducibility
7.3.4. Identified Problems and Construction Process Improvements
7.3.5. Recommendations
7.4. Hydrologic Balance Model of a Saxonian Stream
7.4.1. Provided Data and Initial Situation
7.4.2. Construction Process Description
7.4.3. Reproducibility
7.4.4. Identified Problems and Construction Process Improvements
7.4.5. Recommendations
7.5. Lessons Learned
8. Conclusions
8.1. Summary
8.2. Outlook
8.2.1. Parametric Model Construction Process
8.2.2. Pull and Push Nodes
8.2.3. Parallelize Single Construction Steps
8.2.4. Provable Model Construction Process Attestation
References
Appendix
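The central idea of the thesis, recording each construction step as a hyperedge from a set of input artefacts to a set of output artefacts and replaying the recorded graph to check reproducibility, can be illustrated with a deliberately minimal sketch. The class and method names below are invented for this illustration and are not the GeoHub prototype's API.

```python
import hashlib
import json

class ConstructionGraph:
    """Minimal sketch of a recorded model construction: each step is a
    hyperedge mapping named input artefacts to named output artefacts.
    Replaying the recorded steps in order reconstructs the model; the
    acyclicity of the hypergraph is implicit in the recording order."""

    def __init__(self):
        self.artefacts = {}   # artefact name -> value
        self.steps = []       # (operation name, fn, input names, output names)

    def add_source(self, name, value):
        self.artefacts[name] = value

    def record(self, op_name, fn, inputs, outputs):
        """Run a construction step now and record it as a hyperedge."""
        values = fn(*(self.artefacts[i] for i in inputs))
        for name, v in zip(outputs, values):
            self.artefacts[name] = v
        self.steps.append((op_name, fn, inputs, outputs))

    def replay(self, sources):
        """Re-run every recorded step from fresh source artefacts."""
        artefacts = dict(sources)
        for _, fn, inputs, outputs in self.steps:
            for name, v in zip(outputs, fn(*(artefacts[i] for i in inputs))):
                artefacts[name] = v
        return artefacts

    def fingerprint(self):
        """Hash of all artefact values; equal fingerprints across repeated
        constructions are evidence of reproducibility."""
        blob = json.dumps({k: repr(v) for k, v in sorted(self.artefacts.items())})
        return hashlib.sha256(blob.encode()).hexdigest()
```

A reproducibility check in this model amounts to replaying the graph from the same sources and comparing fingerprints of the resulting artefact sets.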
|