51

An Overview of the Chromatic Number of the Erdos-Renyi Random Graph: Results and Techniques

Berglund, Kenneth January 2021 (has links)
No description available.
52

Analyzing The Community Structure Of Web-like Networks: Models And Algorithms

Cami, Aurel 01 January 2005 (has links)
This dissertation investigates the community structure of web-like networks (i.e., large, random, real-life networks such as the World Wide Web and the Internet). Recently, it has been shown that many such networks have a locally dense and globally sparse structure with certain small, dense subgraphs occurring much more frequently than they do in the classical Erdös-Rényi random graphs. This peculiarity--which is commonly referred to as community structure--has been observed in seemingly unrelated networks such as the Web, email networks, citation networks, biological networks, etc. The pervasiveness of this phenomenon has led many researchers to believe that such cohesive groups of nodes might represent meaningful entities. For example, in the Web such tightly-knit groups of nodes might represent pages with a common topic, geographical location, etc., while in neural networks they might represent evolved computational units. The notion of community has emerged in an effort to formalize the empirical observation of the locally dense, globally sparse structure of web-like networks. In the broadest sense, a community in a web-like network is defined as a group of nodes that induces a dense subgraph which is sparsely linked with the rest of the network. Due to a wide array of envisioned applications, ranging from crawlers and search engines to network security and network compression, there has recently been widespread interest in finding efficient community-mining algorithms. In this dissertation, the community structure of web-like networks is investigated by a combination of analytical and computational techniques: First, we consider the problem of modeling web-like networks. In recent years, many new random graph models have been proposed to account for some recently discovered properties of web-like networks that distinguish them from the classical random graphs. The vast majority of these random graph models take into account only the addition of new nodes and edges. Yet, several empirical observations indicate that deletion of nodes and edges occurs frequently in web-like networks. Inspired by such observations, we propose and analyze two dynamic random graph models that combine node and edge addition with a uniform and a preferential deletion of nodes, respectively. In both cases, we find that the random graphs generated by such models follow power-law degree distributions (in agreement with the degree distribution of many web-like networks). Second, we analyze the expected density of certain small subgraphs--such as defensive alliances on three and four nodes--in various random graph models. Our findings show that while in the binomial random graph the expected density of such subgraphs is very close to zero, in some dynamic random graph models it is much larger. These findings converge with our results obtained by computing the number of communities in some Web crawls. Next, we investigate the computational complexity of the community-mining problem under various definitions of community. Assuming the definition of community as a global defensive alliance or a global offensive alliance, we prove--using transformations from the dominating set problem--that finding optimal communities is an NP-complete problem. These and other similar complexity results, coupled with the fact that many web-like networks are huge, indicate that it is unlikely that fast, exact sequential algorithms for mining communities can be found.
To handle this difficulty, we adopt an algorithmic definition of community and a simpler version of the community-mining problem, namely: find the largest community to which a given set of seed nodes belongs. We propose several greedy algorithms for this problem: The first proposed algorithm starts out with a set of seed nodes--the initial community--and then repeatedly selects some nodes from the community's neighborhood and pulls them into the community. In each step, the algorithm uses the clustering coefficient--a parameter that measures the fraction of the neighbors of a node that are neighbors themselves--to decide which nodes from the neighborhood should be pulled into the community. The running time of this algorithm is governed by the number of nodes visited by the algorithm and the maximum degree encountered; thus, assuming a power-law degree distribution, the algorithm is expected to run in near-linear time. The proposed algorithm achieved good accuracy when tested on some real and computer-generated networks: the fraction of community nodes classified correctly is generally above 80% and often above 90%. A second algorithm based on a generalized clustering coefficient, where not only the first neighborhood is taken into account but also the second, the third, etc., is also proposed. This algorithm achieves better accuracy than the first one but also runs slower. Finally, a randomized version of the second algorithm, which improves the time complexity without significantly affecting the accuracy, is proposed. The main target application of the proposed algorithms is focused crawling--the selective search for web pages that are relevant to a pre-defined topic.
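The greedy expansion strategy described in this abstract can be pictured with a short sketch (a simplified reconstruction under assumed scoring and stopping rules, not the dissertation's exact algorithm): starting from the seed nodes, repeatedly pull in the frontier node with the highest clustering coefficient.

```python
# Hedged sketch of a clustering-coefficient-driven greedy community expansion,
# in the spirit of the approach described above (not the dissertation's exact
# algorithm). The stopping rule and scoring are illustrative assumptions.
import networkx as nx

def grow_community(G, seeds, max_size=30):
    """Greedily grow a community from seed nodes, preferring neighbours
    whose local clustering coefficient is highest."""
    community = set(seeds)
    while len(community) < max_size:
        # Candidate nodes: neighbours of the current community.
        frontier = {v for u in community for v in G.neighbors(u)} - community
        if not frontier:
            break
        # Score candidates by clustering coefficient (fraction of a node's
        # neighbours that are themselves adjacent).
        scores = nx.clustering(G, frontier)
        best = max(frontier, key=lambda v: scores[v])
        # Stop if the best candidate is not locally dense at all.
        if scores[best] == 0.0:
            break
        community.add(best)
    return community

if __name__ == "__main__":
    # Toy example: two planted communities joined by a few random edges.
    G = nx.planted_partition_graph(2, 20, p_in=0.6, p_out=0.02, seed=1)
    found = grow_community(G, seeds=[0, 1])
    truth = set(range(20))  # nodes 0..19 form the first planted block
    print("recovered", len(found & truth), "of", len(truth), "planted nodes")
```

In this toy run the planted block is recovered almost entirely; the dissertation's algorithms refine the same idea with generalized clustering coefficients and randomization.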
53

Problems in Generic Combinatorial Rigidity: Sparsity, Sliders, and Emergence of Components

Theran, Louis Simon 01 September 2010 (has links)
Rigidity theory deals with problems of the following form: given a structure defined by geometric constraints on a set of objects, what information about its geometric behavior is implied by the underlying combinatorial structure? The most well-studied class of structures is the bar-joint framework, which is made of fixed-length bars connected by universal joints with full rotational degrees of freedom; the allowed motions preserve the lengths and connectivity of the bars, and a framework is rigid if the only allowed motions are trivial motions of Euclidean space. A remarkable theorem of Maxwell-Laman says that rigidity of generic bar-joint frameworks depends only on the graph that has as its edges the bars and as its vertices the joints. We generalize the "degree of freedom" counts that appear in the Maxwell-Laman theorem to the very general setting of (k,l)-sparse and (k,l)-graded-sparse hypergraphs. We characterize these in terms of their graph-theoretic and matroidal properties. For the fundamental algorithmic problems Decision, Extraction, Components, and Decomposition, we give efficient, implementable pebble game algorithms for all the (k,l)-sparse and (k,l)-graded-sparse families of hypergraphs we study. We then prove that all the matroids arising from (k,l)-sparse hypergraphs are linearly representable by matrices with a certain "natural" structure that captures the incidence structure of the hypergraph and the sparsity parameters k and l. Building on the combinatorial and linear theory discussed above, we introduce a new rigidity model: slider-pinning rigidity. This is an elaboration of the planar bar-joint model to include sliders, which constrain a vertex to move on a specific line. We prove the analogue of the Maxwell-Laman Theorem for slider pinning, using, as a lemma, a new proof of Whiteley's Parallel Redrawing Theorem. We conclude by studying the emergence of non-trivial rigid substructures in generic planar frameworks given by Erdos-Renyi random graphs. We prove that there is a sharp threshold for such substructures to emerge, and that, when they do, they are all of linear size. This is consistent with experimental and simulation-based work done in the physics community on the formation of certain glasses.
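For intuition, the (k,l)-sparsity counts behind the Maxwell-Laman theorem can be verified directly from the definition on small graphs; the sketch below is an illustrative brute-force check (not the pebble game algorithms developed in the thesis), with the Laman case recovered by setting k = 2, l = 3.

```python
# Brute-force (k, l)-sparsity check from the definition: every subset of
# vertices V' with at least 2 vertices spans at most k*|V'| - l induced edges.
# Exponential in |V| and only meant to illustrate the counting condition;
# the thesis's pebble game algorithms do this efficiently.
from itertools import combinations

def is_kl_sparse(vertices, edges, k, l):
    for size in range(2, len(vertices) + 1):
        for subset in combinations(vertices, size):
            s = set(subset)
            induced = sum(1 for (u, v) in edges if u in s and v in s)
            if induced > k * size - l:
                return False
    return True

def is_laman(vertices, edges):
    # Laman graphs: (2,3)-sparse with exactly 2|V| - 3 edges; generically
    # minimally rigid bar-joint frameworks in the plane.
    return len(edges) == 2 * len(vertices) - 3 and is_kl_sparse(vertices, edges, 2, 3)

if __name__ == "__main__":
    V = [0, 1, 2, 3]
    K4_minus_edge = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3)]  # 5 = 2*4 - 3 edges
    K4 = K4_minus_edge + [(2, 3)]                             # 6 edges: overbraced
    print(is_laman(V, K4_minus_edge))  # True
    print(is_laman(V, K4))             # False
```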
54

Inference in ERGMs and Ising Models.

Xu, Yuanzhe January 2023 (has links)
Discrete exponential families have drawn a lot of attention in probability, statistics, and machine learning, both classically and in the recent literature. This thesis studies in depth two discrete exponential families of concrete interest, (i) Exponential Random Graph Models (ERGMs) and (ii) Ising Models. In the ERGM setting, this thesis considers a “degree corrected” version of standard ERGMs, and in the Ising model setting, this thesis focuses on Ising models on dense regular graphs, both from the point of view of statistical inference. The first part of the thesis studies the problem of testing for sparse signals present on the vertices of ERGMs. It proposes computationally efficient tests for a wide class of ERGMs. Focusing on the two-star ERGM, it shows that the tests studied are “asymptotically efficient” in all parameter regimes except one, which is referred to as the “critical point”. In the critical regime, it is shown that improved detection is possible. This shows that, contrary to the standard belief, dependence in this setting is actually beneficial to the inference problem. The main proof idea for analyzing the two-star ERGM is a correlation estimate between degrees under local alternatives, which is possibly of independent interest. In the second part of the thesis, we derive the limit of experiments for a class of one-parameter Ising models on dense regular graphs. In particular, we show that the limiting experiment is Gaussian in the “low temperature” regime, non-Gaussian in the “critical” regime, and an infinite collection of Gaussians in the “high temperature” regime. We also derive the limiting distributions of commonly studied estimators, and study limiting power for tests of hypotheses against contiguous alternatives (whose scaling changes across the regimes). To the best of our knowledge, this is the first attempt at establishing the classical limits of experiments for Ising models (and more generally, Markov random fields).
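For readers unfamiliar with the second model class, the sketch below simulates a one-parameter Ising model on a dense regular graph with Glauber dynamics and prints the magnetization in a "high temperature" and a "low temperature" run; the graph, parameter values, and sweep counts are illustrative assumptions, and this is not the thesis's inferential machinery.

```python
# Illustrative Glauber-dynamics sampler for a one-parameter Ising model
#     p_beta(sigma) proportional to exp( (beta / d) * sum_{(i,j) in E} sigma_i sigma_j )
# on a dense d-regular graph (here a circulant graph, d even). A standard
# simulation sketch, not the inference methodology of the thesis.
import math
import random

def glauber_ising(adj, beta, d, steps, rng):
    n = len(adj)
    sigma = [rng.choice([-1, 1]) for _ in range(n)]
    for _ in range(steps):
        i = rng.randrange(n)
        field = sum(sigma[j] for j in adj[i])          # local field at site i
        p_plus = 1.0 / (1.0 + math.exp(-2.0 * beta * field / d))
        sigma[i] = 1 if rng.random() < p_plus else -1
    return sigma

def circulant(n, d):
    # d-regular dense graph: connect each i to its d/2 nearest neighbours on each side.
    half = d // 2
    return [[(i + k) % n for k in range(-half, half + 1) if k != 0] for i in range(n)]

if __name__ == "__main__":
    rng = random.Random(0)
    n, d = 200, 100                      # dense regular graph: d grows with n
    adj = circulant(n, d)
    for beta in (0.5, 1.5):              # "high" vs "low temperature" regimes
        sigma = glauber_ising(adj, beta, d, steps=200 * n, rng=rng)
        m = sum(sigma) / n
        print(f"beta={beta}: |magnetization| is roughly {abs(m):.2f}")
```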
55

Quantum Walks and Structured Searches on Free Groups and Networks

Ratner, Michael January 2017 (has links)
Quantum walks have been utilized by many quantum algorithms which provide improved performance over their classical counterparts. Quantum search algorithms, the quantum analogues of spatial search algorithms, have been studied on a wide variety of structures. We study quantum walks and searches on the Cayley graphs of finitely-generated free groups. Return properties are analyzed via Green’s functions, and quantum searches are examined. Additionally, the stopping times and success rates of quantum searches on random networks are experimentally estimated. / Mathematics
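As a small numerical companion to this abstract (an illustrative sketch under an assumed truncation radius and assumed times, not the thesis's analytical treatment), one can build a finite ball of the Cayley graph of the free group on two generators, which is the infinite 4-regular tree, and evolve a continuous-time quantum walk psi(t) = exp(-iAt) psi(0) started at the identity to estimate return probabilities.

```python
# Sketch: continuous-time quantum walk on a truncated Cayley graph of the
# free group F_2 = <a, b>. Vertices are reduced words of length <= radius; the
# full Cayley graph is the infinite 4-regular tree. Illustrative only.
import numpy as np
from scipy.linalg import expm

def cayley_ball(radius):
    gens = ["a", "A", "b", "B"]                  # A = a^{-1}, B = b^{-1}
    inverse = {"a": "A", "A": "a", "b": "B", "B": "b"}
    words = [""]
    frontier = [""]
    for _ in range(radius):
        nxt = []
        for w in frontier:
            for g in gens:
                if w and inverse[g] == w[-1]:
                    continue                     # keep words reduced
                nxt.append(w + g)
        words += nxt
        frontier = nxt
    index = {w: i for i, w in enumerate(words)}
    n = len(words)
    A = np.zeros((n, n))
    for w in words:
        for g in gens:
            u = w[:-1] if (w and inverse[g] == w[-1]) else w + g
            if u in index:                       # edges leaving the ball are dropped
                A[index[w], index[u]] = 1.0
    return A, index

if __name__ == "__main__":
    A, index = cayley_ball(radius=5)
    psi0 = np.zeros(A.shape[0], dtype=complex)
    psi0[index[""]] = 1.0                        # start at the identity element
    for t in (0.5, 1.0, 2.0):
        psi = expm(-1j * t * A) @ psi0
        print(f"t={t}: return probability = {abs(psi[index['']])**2:.4f}")
```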
56

Essays on Bayesian Inference for Social Networks

Koskinen, Johan January 2004 (has links)
This thesis presents Bayesian solutions to inference problems for three types of social network data structures: a single observation of a social network, repeated observations on the same social network, and repeated observations on a social network developing through time. A social network is conceived as being a structure consisting of actors and their social interaction with each other. A common conceptualisation of social networks is to let the actors be represented by nodes in a graph with edges between pairs of nodes that are relationally tied to each other according to some definition. Statistical analysis of social networks is to a large extent concerned with modelling of these relational ties, which lends itself to empirical evaluation. The first paper deals with a family of statistical models for social networks called exponential random graphs that takes various structural features of the network into account. In general, the likelihood functions of exponential random graphs are only known up to a constant of proportionality. A procedure for performing Bayesian inference using Markov chain Monte Carlo (MCMC) methods is presented. The algorithm consists of two basic steps, one in which an ordinary Metropolis-Hastings updating step is used, and another in which an importance sampling scheme is used to calculate the acceptance probability of the Metropolis-Hastings step. In paper number two, a method for modelling reports given by actors (or other informants) on their social interaction with others is investigated in a Bayesian framework. The model contains two basic ingredients: the unknown network structure and functions that link this unknown network structure to the reports given by the actors. These functions take the form of probit link functions. An intrinsic problem is that the model is not identified, meaning that there are combinations of values on the unknown structure and the parameters in the probit link functions that are observationally equivalent. Instead of using restrictions for achieving identification, it is proposed that the different observationally equivalent combinations of parameters and unknown structure be investigated a posteriori. Estimation of parameters is carried out using Gibbs sampling with a switching device that enables transitions between posterior modal regions. The main goal of the procedures is to provide tools for comparisons of different model specifications. Papers 3 and 4 propose Bayesian methods for longitudinal social networks. The premise of the models investigated is that overall change in social networks occurs as a consequence of sequences of incremental changes. Models for the evolution of social networks using continuous-time Markov chains are meant to capture these dynamics. Paper 3 presents an MCMC algorithm for exploring the posteriors of parameters for such Markov chains. More specifically, the unobserved evolution of the network in-between observations is explicitly modelled, thereby avoiding the need to deal with explicit formulas for the transition probabilities. This enables likelihood-based parameter inference in a wider class of network evolution models than has been available before. Paper 4 builds on the proposed inference procedure of Paper 3 and demonstrates how to perform model selection for a class of network evolution models.
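The two-step MCMC scheme of the first paper can be sketched as follows. The code is a hedged reconstruction with assumed details (edge and two-star statistics, a standard normal prior, an edge-flip Gibbs simulator, and illustrative tuning constants), not Koskinen's exact algorithm: a Metropolis-Hastings move on the parameters whose acceptance probability involves the intractable ratio of normalizing constants, estimated by importance sampling over graphs simulated from the model at the proposed parameter value.

```python
# Hedged sketch of Bayesian inference for an ERGM with intractable normalizing
# constant z(theta): a Metropolis-Hastings step on theta in which the ratio
# z(theta)/z(theta') is estimated by importance sampling over graphs simulated
# from the model at theta'. Statistics, prior, and tuning constants are
# illustrative assumptions, not the paper's exact specification.
import numpy as np

rng = np.random.default_rng(0)

def stats(adj):
    deg = adj.sum(axis=1)
    edges = deg.sum() / 2.0
    two_stars = (deg * (deg - 1) / 2.0).sum()
    return np.array([edges, two_stars])

def simulate(theta, n, sweeps=15):
    """Edge-flip Gibbs sampler from p_theta(x) proportional to exp(theta . s(x))."""
    adj = np.zeros((n, n))
    for _ in range(sweeps):
        for i in range(n):
            for j in range(i + 1, n):
                adj[i, j] = adj[j, i] = 1.0
                s_on = stats(adj)
                adj[i, j] = adj[j, i] = 0.0
                s_off = stats(adj)
                p_on = 1.0 / (1.0 + np.exp(-theta @ (s_on - s_off)))
                adj[i, j] = adj[j, i] = float(rng.random() < p_on)
    return adj

def log_z_ratio(theta, theta_new, n, m=10):
    """Importance-sampling estimate of log[z(theta)/z(theta_new)] using graphs
    X ~ p_{theta_new}: z(theta)/z(theta_new) = E[exp((theta - theta_new) . s(X))]."""
    w = [np.exp((theta - theta_new) @ stats(simulate(theta_new, n))) for _ in range(m)]
    return np.log(np.mean(w))

def mh_ergm(adj_obs, iters=60, step=0.1):
    n = adj_obs.shape[0]
    s_obs = stats(adj_obs)
    theta = np.zeros(2)
    samples = []
    for _ in range(iters):
        theta_new = theta + step * rng.standard_normal(2)
        log_prior_ratio = -0.5 * (theta_new @ theta_new - theta @ theta)  # N(0, I) prior
        log_alpha = ((theta_new - theta) @ s_obs + log_prior_ratio
                     + log_z_ratio(theta, theta_new, n))
        if np.log(rng.random()) < log_alpha:
            theta = theta_new
        samples.append(theta.copy())
    return np.array(samples)

if __name__ == "__main__":
    n = 10
    obs = simulate(np.array([-1.0, 0.05]), n)   # synthetic "observed" network
    post = mh_ergm(obs)
    print("posterior mean after burn-in:", post[30:].mean(axis=0))
```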
57

Erdos-Ko-Rado em famílias aleatórias / Erdos-Ko-Rado in random families

Gauy, Marcelo Matheus 11 July 2014 (has links)
We studied the problem of maximal intersecting families in a random subset of the family of subsets with exactly k elements from a given set. We obtained a nearly complete description of the evolution of the size of such families. Similar versions of this problem have been studied by Balogh, Bohman and Mubayi in 2009, and by Hamm and Kahn, and by Balogh, Das, Delcourt, Liu and Sharifzadeh concurrently with this work.
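A toy experiment makes the object of study concrete (a brute-force illustration with assumed parameters n, k and retention probability p, not the methods of the thesis): retain each k-subset of an n-set independently with probability p and compare the largest intersecting family among the retained sets with the largest retained "star".

```python
# Tiny experiment illustrating intersecting families in a random subfamily of
# the k-subsets of {1,...,n}: keep each k-set independently with probability p,
# then compare the largest intersecting family inside the retained sets with
# the largest retained star (all retained sets through one fixed element).
# Brute force; toy sizes only.
import random
from itertools import combinations

def max_intersecting(sets):
    for r in range(len(sets), 0, -1):
        for fam in combinations(sets, r):
            if all(a & b for a, b in combinations(fam, 2)):
                return r
    return 0

if __name__ == "__main__":
    random.seed(3)
    n, k, p = 5, 2, 0.7
    all_k_sets = [frozenset(s) for s in combinations(range(1, n + 1), k)]
    retained = [s for s in all_k_sets if random.random() < p]
    star = max(sum(1 for s in retained if x in s) for x in range(1, n + 1))
    print("retained sets:", len(retained))
    print("largest intersecting family:", max_intersecting(retained))
    print("largest star:", star)
```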
58

Colourings of random graphs

Heckel, Annika January 2016 (has links)
We study graph parameters arising from different types of colourings of random graphs, defined broadly as an assignment of colours to either the vertices or the edges of a graph. The chromatic number χ(G) of a graph is the minimum number of colours required for a vertex colouring where no two adjacent vertices are coloured the same. Determining the chromatic number is one of the classic challenges in random graph theory. In Chapter 3, we give new upper and lower bounds for the chromatic number of the dense random graph G(n,p) where p ∈ (0,1) is constant. These bounds are the first to match up to an additive term of order o(1) in the denominator, and in particular, they determine the average colour class size in an optimal colouring up to an additive term of order o(1). In Chapter 4, we study a related graph parameter called the equitable chromatic number. This is defined as the minimum number of colours needed for a vertex colouring where no two adjacent vertices are coloured the same and, additionally, all colour classes are as equal in size as possible. We prove one point concentration of the equitable chromatic number of the dense random graph G(n,m) with m = pn(n-1)/2, p < 1 - 1/e² constant, on a subsequence of the integers. We also show that whp, the dense random graph G(n,p) allows an almost equitable colouring with a near optimal number of colours. We call an edge colouring of a graph G a rainbow colouring if every pair of vertices is joined by a rainbow path, which is a path where no colour is repeated. The least number of colours where this is possible is called the rainbow connection number rc(G). Since its introduction in 2008 as a new way to quantify how well connected a given graph is, the rainbow connection number has attracted the attention of a great number of researchers. For any graph G, rc(G) ≥ diam(G), where diam(G) denotes the diameter. In Chapter 5, we will see that in the random graph G(n,p), rainbow connection number 2 is essentially equivalent to diameter 2. More specifically, we consider G ~ G(n,p) close to the diameter 2 threshold and show that whp rc(G) = diam(G) ∈ {2,3}. Furthermore, we show that in the random graph process, whp the hitting times of diameter 2 and of rainbow connection number 2 coincide. In Chapter 6, we investigate sharp thresholds for the property rc(G) ≤ r where r is a fixed integer. The results of Chapter 6 imply that for r = 2, the properties rc(G) ≤ 2 and diam(G) ≤ 2 share the same sharp threshold. For r ≥ 3, the situation seems quite different. We propose an alternative threshold and prove that this is an upper bound for the sharp threshold for rc(G) ≤ r where r ≥ 3.
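The chromatic number bounds above can be put next to a quick experiment (an illustrative sketch built on standard facts about G(n, 1/2), not the thesis's techniques): a greedy colouring of the dense random graph typically uses about twice the asymptotic value n/(2 log₂ n), and the diameter supplies the lower bound for the rainbow connection number.

```python
# Quick experiment around the results above for the dense random graph
# G(n, 1/2): greedy colouring vs the asymptotic chromatic number n/(2 log2 n),
# and a check that the diameter is 2, which lower-bounds rc(G).
# Illustrative only; none of this reproduces the thesis's arguments.
import math
import networkx as nx
import numpy as np

n, p = 1000, 0.5
G = nx.gnp_random_graph(n, p, seed=42)

colouring = nx.greedy_color(G, strategy="largest_first")
greedy_colours = max(colouring.values()) + 1
asymptotic = n / (2 * math.log2(n))            # chi(G(n,1/2)) ~ n / (2 log2 n)

A = nx.to_numpy_array(G)
diameter_at_most_2 = bool(((np.eye(n) + A + A @ A) > 0).all())

print(f"greedy colouring uses {greedy_colours} colours")
print(f"asymptotic chromatic number is roughly {asymptotic:.0f} (greedy is about twice this)")
print(f"diameter <= 2: {diameter_at_most_2}; rc(G) >= diam(G)")
```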
60

Intersection Graphs Of Boxes And Cubes

Francis, Mathew C 07 1900 (has links)
A graph G is said to be an intersection graph of sets from a family of sets F if there exists a function f : V(G) → F such that for u, v ∈ V(G), (u,v) ∈ E(G) if and only if f(u) ∩ f(v) ≠ ∅. Interval graphs are thus the intersection graphs of closed intervals on the real line and unit interval graphs are the intersection graphs of unit length intervals on the real line. An interval on the real line can be generalized to a "k-box" in R^k. A k-box B = (R1, R2, ..., Rk), where each Ri is a closed interval on the real line, is defined to be the Cartesian product R1 × R2 × ... × Rk. If each Ri is a unit length interval, we call B a k-cube. Thus, 1-boxes are just closed intervals on the real line whereas 2-boxes are axis-parallel rectangles in the plane. We study the intersection graphs of k-boxes and k-cubes. The parameter boxicity of a graph G, denoted as box(G), is the minimum integer k such that G is an intersection graph of k-boxes. Similarly, the cubicity of G, denoted as cub(G), is the minimum integer k such that G is an intersection graph of k-cubes. Thus, interval graphs are the graphs with boxicity at most 1 and unit interval graphs are the graphs with cubicity at most 1. These parameters were introduced by F. S. Roberts in 1969. In some sense, the boxicity of a graph is a measure of how different a graph is from an interval graph and in a similar way, the cubicity is a measure of how different the graph is from a unit interval graph. We prove several upper bounds on the boxicity and cubicity of general as well as special classes of graphs in terms of various graph parameters such as the maximum degree, the number of vertices and the bandwidth. The following are some of the main results presented.
1. We show that for any graph G with maximum degree Δ, box(G) ≤ 2Δ². This result implies that bounded degree graphs have bounded boxicity no matter how large the graph might be.
2. It was shown in [18] that the boxicity of a graph on n vertices with maximum degree Δ is O(Δ ln n). But a similar bound does not hold for the average degree d_av of a graph. [18] gives graphs in which the boxicity is exponentially larger than d_av ln n. We show that even though an O(d_av ln n) upper bound for boxicity does not hold for all graphs, for almost all graphs, boxicity is O(d_av ln n).
3. The ratio of the cubicity to the boxicity of any graph shown in [15], when combined with the results on boxicity, shows that cub(G) is O(Δ ln² n) and O(Δ² ln n) for any graph G on n vertices and with maximum degree Δ. By using a randomized construction, we prove the better upper bound cub(G) ≤ ⌈4(Δ + 1) ln n⌉.
4. Two results relating the cubicity of a graph to its bandwidth b are presented. First, it is shown that cub(G) ≤ 12(Δ + 1)⌈ln(2b)⌉ + 1. Next, we derive the upper bound cub(G) ≤ b + 1. This bound is used to derive new upper bounds on the cubicity of special graph classes like circular arc graphs, cocomparability graphs and AT-free graphs in relation to the maximum degree.
5. The upper bound for cubicity in terms of the bandwidth gives an upper bound of Δ + 1 for the cubicity of interval graphs. This bound is improved to show that for any interval graph G with maximum degree Δ, cub(G) ≤ ⌈log₂ Δ⌉ + 4.
6. Scheinerman [54] proved that the boxicity of any outerplanar graph is at most 2. We present an independent proof of the same theorem.
7. Halin graphs are planar graphs formed by adding a cycle connecting the leaves of a tree none of whose vertices have degree 2. We prove that the boxicity of any Halin graph is equal to 2 unless it is a complete graph on 4 vertices, in which case its boxicity is 1.
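The intersection-graph definition at the start of this abstract is easy to make concrete: given an assignment of k-boxes to vertices, the sketch below (with a hand-chosen, hypothetical 2-box assignment) builds the resulting intersection graph, so box(G) ≤ k exactly when some assignment of k-boxes reproduces the edge set of G. The example realizes the 4-cycle with axis-parallel rectangles, showing its boxicity is at most 2.

```python
# Sketch illustrating the k-box intersection definition of boxicity: each
# vertex gets a k-box (a product of closed intervals), and two vertices are
# adjacent iff their boxes intersect in every dimension. The 2-box assignment
# below is an illustrative example showing that C4 has boxicity at most 2.
def boxes_intersect(box_u, box_v):
    # Boxes intersect iff their intervals overlap in every coordinate.
    return all(lo1 <= hi2 and lo2 <= hi1 for (lo1, hi1), (lo2, hi2) in zip(box_u, box_v))

def intersection_graph(boxes):
    verts = list(boxes)
    return {(u, v) for i, u in enumerate(verts) for v in verts[i + 1:]
            if boxes_intersect(boxes[u], boxes[v])}

if __name__ == "__main__":
    # A 2-box (axis-parallel rectangle) representation of the 4-cycle a-b-c-d-a.
    # Dimension 1 separates the non-edge a-c; dimension 2 separates b-d.
    boxes = {
        "a": [(0, 1), (0, 3)],
        "b": [(0, 3), (0, 1)],
        "c": [(2, 3), (0, 3)],
        "d": [(0, 3), (2, 3)],   # hypothetical coordinates chosen by hand
    }
    print(sorted(intersection_graph(boxes)))
    # -> [('a', 'b'), ('a', 'd'), ('b', 'c'), ('c', 'd')]  (edges of C4)
```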
