31

Statistical Inference for Models with Intractable Normalizing Constants

Jin, Ick Hoon 16 December 2013 (has links)
In this dissertation, we propose two new algorithms for statistical inference in models with intractable normalizing constants: the Monte Carlo Metropolis-Hastings (MCMH) algorithm and the Bayesian Stochastic Approximation Monte Carlo (BSAMC) algorithm. The MCMH algorithm is a Monte Carlo version of the Metropolis-Hastings algorithm: at each iteration, it replaces the unknown normalizing constant ratio by a Monte Carlo estimate. Although the algorithm violates the detailed balance condition, we show that it still converges to the desired target distribution under mild conditions. The BSAMC algorithm works by simulating from a sequence of approximated distributions using the SAMC algorithm. A strong law of large numbers has been established for BSAMC estimators under mild conditions. One significant advantage of our algorithms over auxiliary-variable MCMC methods is that they avoid the requirement for perfect samples, and thus can be applied to many models for which perfect sampling is unavailable or very expensive. In addition, although a normalizing constant approximation is also involved in BSAMC, BSAMC is very robust to initial guesses of the parameters, owing to the powerful sample-space exploration of SAMC. BSAMC also provides a general framework for approximate Bayesian inference for models whose likelihood function is intractable: sampling from a sequence of approximated distributions whose average converges to the target distribution. With these two algorithms in hand, we demonstrate how the SAMCMC method can be applied to estimate the parameters of exponential random graph models (ERGMs), a typical example of statistical models with intractable normalizing constants. We show that the resulting estimator is consistent, asymptotically normal and asymptotically efficient. Compared to the MCMLE and SSA methods, a significant advantage of SAMCMC is that it overcomes the model degeneracy problem.
The strength of SAMCMC comes from its varying truncation mechanism, which enables it to escape model degeneracy through re-initialization. MCMLE and SSA lack such a re-initialization mechanism and tend to converge to a solution near the starting point, so they often fail for models that suffer from degeneracy.
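The MCMH idea described above can be sketched on a toy one-parameter exponential family. Everything below is illustrative (the toy model, the flat prior and all names are assumptions, not taken from the dissertation); the normalizing constant here is actually computable, which lets the sketch sample from the model exactly, whereas in real applications that step would itself require MCMC.

```python
import math
import random

random.seed(1)
K = 10  # toy support {0, ..., K}

def g(x, theta):
    # Unnormalized likelihood g(x, theta); the model is g(x, theta) / Z(theta).
    return math.exp(theta * x)

def sample_from_model(theta, m):
    # Exact sampling is possible on this small toy support; in real
    # problems with intractable constants this step needs MCMC itself.
    weights = [g(x, theta) for x in range(K + 1)]
    return random.choices(range(K + 1), weights=weights, k=m)

def mcmh(x_obs, n_iter=5000, m=20, step=0.3):
    theta = 0.0
    chain = []
    for _ in range(n_iter):
        prop = theta + random.uniform(-step, step)
        # Monte Carlo (importance sampling) estimate of Z(prop) / Z(theta):
        ys = sample_from_model(theta, m)
        r_hat = sum(g(y, prop) / g(y, theta) for y in ys) / m
        # MH acceptance ratio with the estimated constant ratio (flat prior):
        alpha = (g(x_obs, prop) / g(x_obs, theta)) / r_hat
        if random.random() < min(1.0, alpha):
            theta = prop
        chain.append(theta)
    return chain
```

With an observation `x_obs = 7` (above the midpoint of the support), the chain should settle on positive values of theta.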
32

Random trees, graphs and recursive partitions

Broutin, Nicolas 05 July 2013 (has links) (PDF)
In this memoir I present my work on scaling limits of large random structures. The aim is to describe combinatorial structures in the large-size limit from an objective point of view, in the sense that we seek limits of the objects themselves, not merely of characteristic parameters (even if this is not always the case in the results I present). The general setting is that of critical structures, for which the characteristic distances are typically polynomial in the size and not concentrated. With few exceptions, these structures are generally not suited to computer science applications. They are nevertheless essential because of the universality, proved or expected, of their asymptotic properties. I discuss in particular uniformly random trees, random graphs, minimum spanning trees and recursive partitions of planar domains. Uniform random trees: the aim here is to better understand an essential limit object, the Brownian continuum random tree (CRT). I present some convergence results for "non-branching" combinatorial models such as trees subject to symmetries and trees with a fixed degree distribution. I also describe a new decomposition of the CRT based on a partial destruction. Random graphs: I describe the algorithmic construction of the scaling limit of random graphs from the Erdős-Rényi model in the critical window, make the connection with the CRT and give constructions of the limiting metric space. Minimum spanning trees: I show that a connection with random graphs makes it possible to quantify distances in a random spanning tree. One obtains not only the order of magnitude of the expected diameter, but also the scaling limit as a measured metric space. Recursive partitions:
through two examples, quadtrees and laminations of the disk, I show that ideas based on fixed-point theorems lead to convergence of processes whose limits are unusual and characterized by recursive decompositions.
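The uniformly random labelled trees discussed above can be simulated by a generic textbook construction (not a method from the memoir): decode a uniformly random Prüfer sequence, which is in bijection with labelled trees on n vertices.

```python
import heapq
import random

def uniform_random_tree(n, rng=random):
    # A uniformly random Prüfer sequence of length n - 2 corresponds
    # bijectively to a uniformly random labelled tree on {0, ..., n-1}.
    if n == 1:
        return []
    if n == 2:
        return [(0, 1)]
    seq = [rng.randrange(n) for _ in range(n - 2)]
    # degree[v] = 1 + number of occurrences of v in the sequence
    degree = [1] * n
    for v in seq:
        degree[v] += 1
    leaves = [v for v in range(n) if degree[v] == 1]
    heapq.heapify(leaves)
    edges = []
    for v in seq:
        # Attach the smallest current leaf to the next sequence entry.
        leaf = heapq.heappop(leaves)
        edges.append((leaf, v))
        degree[v] -= 1
        if degree[v] == 1:
            heapq.heappush(leaves, v)
    # Exactly two degree-1 vertices remain; they form the last edge.
    u, w = heapq.heappop(leaves), heapq.heappop(leaves)
    edges.append((u, w))
    return edges
```

Any output is a tree: n - 1 edges, connected, and every labelled tree is equally likely.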
33

Random graph processes and optimisation

Cain, Julie A Unknown Date (has links) (PDF)
Random graph processes are most often used to investigate theoretical questions about random graphs. A randomised algorithm can be defined specifically for the purpose of finding some structure in a graph, such as a matching, a colouring or a particular kind of subgraph. Properties of the related random graph process then suggest properties, or bounds on properties, of the structure. In this thesis, we use a random graph process to analyse a particular load balancing algorithm from theoretical computer science. By doing so, we demonstrate that random graph processes may also be used to analyse other algorithms and systems of a random nature, from areas such as computer science, telecommunications and other areas of engineering and mathematics. Moreover, this approach can lead to theoretical results on the performance of algorithms that are difficult to obtain by other methods. In the course of our analysis we are also led to some results of the first kind, relating to the structure of the random graph. / The particular algorithm that we analyse is a randomised algorithm for an off-line load balancing problem with two choices. The load balancing algorithm, in an initial stage, mirrors an algorithm which finds the k-core of a graph. This latter algorithm and the related random graph process have been previously analysed by Pittel, Spencer and Wormald, using a differential equation method, to determine the thresholds for the existence of a k-core in a random graph. We modify their approach by using a random pseudograph model due to Bollobás and Frieze, and Chvátal, in place of the uniform random graph. This makes the analysis somewhat simpler, and leads to a shortened derivation of the thresholds and other properties of k-cores. (For the complete abstract, open the document.)
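The k-core algorithm mirrored by the load balancing procedure is the classical peeling process: repeatedly delete vertices of degree less than k until none remain. A minimal sketch (the adjacency-dict representation is an assumption for illustration):

```python
from collections import deque

def k_core(adj, k):
    """Return the vertex set of the k-core of an undirected graph.

    adj maps each vertex to the set of its neighbours. Peeling vertices
    of degree < k in any order always leaves the same (possibly empty)
    maximal subgraph of minimum degree >= k: the k-core.
    """
    deg = {v: len(ns) for v, ns in adj.items()}
    queue = deque(v for v, d in deg.items() if d < k)
    removed = set()
    while queue:
        v = queue.popleft()
        if v in removed:
            continue
        removed.add(v)
        for u in adj[v]:
            if u not in removed:
                deg[u] -= 1
                if deg[u] < k:
                    queue.append(u)
    return set(adj) - removed
```

For example, a triangle with one pendant vertex attached has the triangle as its 2-core; it is the step-by-step behaviour of this peeling on a random graph that the differential equation method tracks.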
34

Convergence Rates in Dynamic Network Models

Kück, Fabian 04 September 2017 (has links)
No description available.
35

Toward a Theory of Social Stability: Investigating Relationships Among the Valencian Bronze Age Peoples of Mediterranean Iberia

January 2020 (has links)
abstract: What causes social systems to resist change? Studies of the emergence of social complexity in archaeology have focused primarily on drivers of change with much less emphasis on drivers of stability. Social stability, or the persistence of social systems, is an essential feature without which human society is not possible. By combining quantitative modeling (Exponential Random Graph Modeling) and the comparative archaeological record where the social system is represented by networks of relations between settlements, this research tests several hypotheses about social and geographic drivers of social stability with an explicit focus on a better understanding of contexts and processes that resist change. The Valencian Bronze Age in eastern Spain along the Mediterranean, where prior research appears to indicate little, regional social change for 700 years, serves as a case study. The results suggest that social stability depends on a society’s ability to integrate change and promote interdependency. In part, this ability is constrained or promoted by social structure and the different, relationship dependencies among individuals that lead to a particular social structure. Four elements are important to constraining or promoting social stability—structural cohesion, transitivity and social dependency, geographic isolation, and types of exchange. Through the framework provided in this research, an archaeologist can recognize patterns in the archaeological data that reflect and promote social stability, or lead to collapse. Results based on comparisons between the social networks of the Northern and Southern regions of the Valencian Bronze Age show that the Southern Region’s social structure was less stable through time. The Southern Region’s social structure consisted of competing cores of exchange. This type of competition often leads to power imbalances, conflict, and instability. 
Strong dependencies on the neighboring Argaric during the Early and Middle Bronze Ages contributed to the Southern Region's inability to maintain social stability after the Argaric collapsed. Furthermore, the Southern Region participated in the exchange of a more complex technology, bronze. Complex technologies produce networks with hub-and-spoke structures that are highly vulnerable to collapse after the destruction of a hub. The Northern Region's social structure remained structurally cohesive through time, promoting social stability. / Dissertation/Thesis / Webpage with data tables and R code / Doctoral Dissertation Anthropology 2020
36

Sur certains problèmes de diffusion et de connexité dans le modèle de configuration / On some diffusion and spanning problems in configuration model

Gaurav, Kumar 18 November 2016 (has links)
A number of real-world systems consisting of interacting agents can be usefully modelled by graphs, where the agents are represented by the vertices of the graph and the interactions by the edges. Such systems can be as diverse and complex as social networks (traditional or online), protein-protein interaction networks, the internet, transport networks and inter-bank loan networks. One important question that arises in the study of these networks is: to what extent do the local statistics of a network determine its global topology? This problem can be approached by constructing a random graph constrained to have some of the same local statistics as those observed in the graph of interest. One such random graph model is the configuration model, which is constructed in such a way that a uniformly chosen vertex has a given degree distribution. This is the random graph model that provides the underlying framework for this thesis. As our first problem, we consider the propagation of influence on the configuration model, where each vertex can be influenced by any of its neighbours but, in its turn, can only influence a random subset of its neighbours. Our (enhanced) model is described by the total degree of the typical vertex and the number of neighbours it is able to influence.
We give a tight condition, involving the joint distribution of these two degrees, which allows the influence, with high probability, to reach an essentially unique non-negligible set of the vertices, called a big influenced component, provided that the source vertex is chosen from a set of good pioneers. We explicitly evaluate the asymptotic relative size of the influenced component as well as of the set of good pioneers, provided it is non-negligible. Our proof uses the joint exploration of the configuration model and the propagation of the influence up to the time when a big influenced component is completed, a technique introduced in Janson and Luczak (2008). Our model can be seen as a generalization of classical bond and node percolation on the configuration model, with the difference stemming from the oriented conductivity of edges in our model. We illustrate these results using a few examples which are interesting from either a theoretical or a real-world perspective. The examples are, in particular, motivated by the viral marketing phenomenon in the context of social networks...
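The influence model can be simulated directly: build a configuration model by stub matching, then let each influenced vertex pass influence on to a random subset of its neighbours. The sketch below is a simplification (the fixed transmission rule and all names are assumptions; the thesis works with the joint degree distribution analytically, and stub matching may create self-loops and multi-edges):

```python
import random

def configuration_model(degrees, rng):
    # Stub matching: vertex v contributes degrees[v] half-edges, and the
    # half-edges are paired uniformly at random (a leftover odd stub is
    # dropped). Self-loops and multi-edges are allowed, as usual.
    stubs = [v for v, d in enumerate(degrees) for _ in range(d)]
    rng.shuffle(stubs)
    adj = {v: [] for v in range(len(degrees))}
    for i in range(0, len(stubs) - 1, 2):
        a, b = stubs[i], stubs[i + 1]
        adj[a].append(b)
        adj[b].append(a)
    return adj

def influenced_set(adj, source, transmit, rng):
    # Each newly influenced vertex v passes influence to a uniformly
    # random subset of transmit(v) of its neighbours (its "transmission
    # degree"), mimicking the oriented conductivity of the edges.
    influenced = {source}
    frontier = [source]
    while frontier:
        v = frontier.pop()
        nbrs = adj[v]
        k = min(transmit(v), len(nbrs))
        for u in rng.sample(nbrs, k):
            if u not in influenced:
                influenced.add(u)
                frontier.append(u)
    return influenced
```

Running this from many sources and recording which ones reach a large set gives an empirical picture of the good pioneers.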
37

A Social Interaction Model with Endogenous Network Formation

Weng, Huibin 22 October 2020 (has links)
No description available.
38

Random graph processes with dependencies

Warnke, Lutz January 2012 (has links)
Random graph processes are basic mathematical models for large-scale networks evolving over time. Their systematic study was pioneered by Erdős and Rényi around 1960, and one key feature of many 'classical' models is that the edges appear independently. While this makes them amenable to a rigorous analysis, it is desirable, both mathematically and in terms of applications, to understand more complicated situations. In this thesis the main goal is to improve our rigorous understanding of evolving random graphs with significant dependencies. The first model we consider is known as an Achlioptas process: in each step two random edges are chosen, and using a given rule only one of them is selected and added to the evolving graph. Since 2000 a large class of 'complex' rules has eluded a rigorous analysis, and it was widely believed that these could give rise to a striking and unusual phenomenon. Making this explicit, Achlioptas, D'Souza and Spencer conjectured in Science that one such rule yields a very abrupt (discontinuous) percolation phase transition. We disprove this, showing that the transition is in fact continuous for all Achlioptas processes. In addition, we give the first rigorous analysis of the more 'complex' rules, proving that certain key statistics are tightly concentrated (i) in the subcritical evolution, and (ii) also later on if an associated system of differential equations has a unique solution. The second model we study is the H-free process, where random edges are added subject to the constraint that they do not complete a copy of some fixed graph H. The most important open question for such 'constrained' processes is due to Erdős, Suen and Winkler: in 1995 they asked what the typical final number of edges is. While Osthus and Taraz answered this in 2000 up to logarithmic factors for a large class of graphs H, more precise bounds are only known for a few special graphs.
We close this gap for the cases where a cycle of fixed length is forbidden, determining the final number of edges up to constants. Our result not only establishes several conjectures, it is also the first to answer the more than 15-year-old question of Erdős et al. for a class of forbidden graphs H.
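An Achlioptas process is easy to simulate with a union-find structure over components. The sketch below uses the product rule (keep the candidate edge whose endpoint components have the smaller size product), the rule behind the Achlioptas, D'Souza and Spencer conjecture; the tie-breaking and the treatment of within-component candidates are simplifying assumptions:

```python
import random

class DSU:
    # Union-find with union by size, tracking component sizes.
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]

def achlioptas_product_rule(n, steps, rng):
    # Each step: draw two random candidate edges and add the one whose
    # endpoint components have the smaller size product. This suppresses
    # the growth of large components, delaying percolation.
    dsu = DSU(n)
    for _ in range(steps):
        e1 = (rng.randrange(n), rng.randrange(n))
        e2 = (rng.randrange(n), rng.randrange(n))
        p1 = dsu.size[dsu.find(e1[0])] * dsu.size[dsu.find(e1[1])]
        p2 = dsu.size[dsu.find(e2[0])] * dsu.size[dsu.find(e2[1])]
        a, b = e1 if p1 <= p2 else e2
        dsu.union(a, b)
    return max(dsu.size[dsu.find(v)] for v in range(n))
```

Plotting the largest component size against the number of steps reproduces the steep (but, per the thesis, continuous) transition.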
39

Security Analysis on Network Systems Based on Some Stochastic Models

Li, Xiaohu 01 December 2014 (has links)
Thanks to great effort from mathematicians, physicists and computer scientists, network science has developed rapidly during the past decades. However, because of its complexity, most research in this area is based only on experiments and simulations; it is therefore critical to develop theoretical results in order to gain more insight into how the structure of a network affects its security. This dissertation introduces some stochastic and statistical models for certain networks and uses a k-out-of-n tolerant structure to characterize, both logically and physically, the behavior of nodes. Based upon these models, we derive several illuminating results in the following two aspects, which are consistent with what computer scientists have observed in either practical situations or experimental studies. Suppose that a node in a P2P network loses its designed function or service when some of its neighbors are disconnected. By studying the isolation probability and the durable time of a single user, we prove that a network in which the user's lifetime has more NWUE-ness is more resilient, in the sense of having a smaller probability of being isolated by neighbors and a longer time online without interruption. Meanwhile, some preservation properties are also studied for the durable time of a network. Additionally, in order to apply the model in practice, both graphical and nonparametric statistical methods are developed and applied to a real data set. On the other hand, a stochastic model is introduced to investigate the security of network systems based on their vulnerability graph abstractions. A node loses its designed function when a certain number of its neighbors are compromised, in the sense of being taken over by malicious code or a hacker. An attack compromises some nodes, and the victimized nodes become accomplices. We derive an equation for the probability that a node in a network is compromised.
Since this equation has no explicit solution, we also establish new lower and upper bounds for the probability. The two models proposed herewith generalize existing models in the literature; the corresponding theoretical results effectively improve known results and hence offer insight into designing a more secure system and enhancing the security of an existing system.
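The k-out-of-n building block can be made concrete: if a node fails once at least k of its n neighbours are compromised, and the neighbours are compromised independently with probability p, the failure probability is a binomial tail. The fixed-point iteration below is a hypothetical self-consistent version on a d-regular vulnerability graph, a sketch of the flavour of such equations rather than the dissertation's actual model:

```python
from math import comb

def compromise_prob(n, k, p):
    # P(at least k of n neighbours compromised), neighbours independent
    # with compromise probability p: a binomial upper tail.
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

def fixed_point_compromise(d, k, alpha, iters=200):
    # Hypothetical self-consistent equation on a d-regular graph:
    # a node is compromised either directly (probability alpha) or once
    # at least k of its d neighbours are compromised, each neighbour
    # being compromised with the same (unknown) probability q. Iterating
    # q <- alpha + (1 - alpha) * P(Bin(d, q) >= k) approximates a fixed point.
    q = alpha
    for _ in range(iters):
        q = alpha + (1 - alpha) * compromise_prob(d, k, q)
    return q
```

The iteration is monotone here (the map is increasing in q), which is the kind of structure that makes the lower and upper bounds mentioned above tractable.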
40

Stochastical models for networks in the life sciences

Behrisch, Michael 21 January 2008 (has links)
Motivated by structural properties of molecular similarity networks, we study the component evolution in two different stochastic network models: random hypergraphs and random intersection graphs. We prove a Gaussian limit for the number of vertices in the largest component of a random d-uniform hypergraph, using only probabilistic arguments and avoiding enumerative methods completely. This fundamental result is followed by further limit theorems concerning the joint distribution of the numbers of vertices and edges, as well as results on the connectivity probability of random hypergraphs and the asymptotic number of connected hypergraphs. Since the hypergraph model reflects some properties of the real-world data only insufficiently, we then study the evolution of the order of the largest component in the random intersection graph model, which captures clustering properties of real-world networks. We show that, for an appropriate choice of the parameters, random intersection graphs differ from random (hyper)graphs in that, once the average number of neighbours of a vertex exceeds one, neither is the giant component of linear order nor is the second-largest component of logarithmic order in the number of vertices. Furthermore, we describe a polynomial-time algorithm for covering the edges of a graph with as few cliques (complete graphs) as possible and prove its asymptotic optimality in the random intersection graph model. We then study the evolution of the chromatic number in this model, showing that, in a certain range of parameters, these random graphs can be coloured optimally with high probability using various greedy strategies. Finally, experiments on real network data confirm the theoretical predictions and suggest that heuristics for the clique cover number and the chromatic number can certify each other's optimality.
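A random intersection graph of the kind studied above can be generated directly from its feature-based definition (the G(n, m, p) parametrization below is the standard one; treating it as this thesis's exact model is an assumption). Each feature class induces a clique, which is exactly what makes clique covers natural in this model:

```python
import random

def random_intersection_graph(n, m, p, rng):
    # Each of n vertices independently adopts each of m features with
    # probability p; two vertices are adjacent iff they share at least
    # one feature. The vertices holding a given feature form a clique.
    features = {v: {f for f in range(m) if rng.random() < p}
                for v in range(n)}
    members = {}
    for v, fs in features.items():
        for f in fs:
            members.setdefault(f, []).append(v)
    edges = set()
    for vs in members.values():
        # Every feature class contributes a clique of edges.
        for i in range(len(vs)):
            for j in range(i + 1, len(vs)):
                edges.add((vs[i], vs[j]))
    return features, edges
```

Collecting the feature classes themselves already yields an edge clique cover, which is the starting point for the covering algorithm described in the abstract.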
