41

Stochastical models for networks in the life sciences

Behrisch, Michael 21 January 2008 (has links)
Motivated by structural properties of molecular similarity networks, we study the evolution of the largest component in two different stochastic network models: random hypergraphs and random intersection graphs. We first prove that the number of vertices in the giant component of a random d-uniform hypergraph follows a Gaussian distribution; the proof uses only probabilistic arguments and avoids enumerative methods entirely. This fundamental result is followed by further limit theorems for the joint distribution of the numbers of vertices and edges, as well as results on the connectivity probability of random hypergraphs and on the asymptotic number of connected hypergraphs. Because the hypergraph model reflects some properties of the real-world data only poorly, we then study the evolution of the order of the largest component in the random intersection graph model, which captures clustering properties of real-world networks. We show that, for an appropriate choice of parameters, random intersection graphs differ from random (hyper-)graphs: once the average number of neighbours of a vertex exceeds one, neither is the so-called giant component of linear order, nor is the second-largest component of logarithmic order in the number of vertices.
Furthermore, we describe a polynomial-time algorithm for covering the edges of a graph with as few cliques (complete graphs) as possible, prove its asymptotic optimality in the random intersection graph model, and study the evolution of the chromatic number in this model, showing that, in a certain range of parameters, these random graphs can be coloured optimally with high probability using various greedy strategies. Finally, experiments on real networks agree with the theoretical predictions and suggest that heuristics for the clique cover number and the chromatic number can work hand in hand, certifying each other's optimality.
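For readers unfamiliar with the model, the following is a minimal sketch assuming the standard attribute-based definition G(n, m, p) of a random intersection graph. The parameter values and function names are illustrative, and the greedy colouring and attribute-based clique cover shown here are generic strategies, not the thesis's specific algorithms.

```python
import random
from collections import defaultdict

def random_intersection_graph(n, m, p, seed=0):
    """G(n, m, p): each of n vertices picks each of m attributes with probability p;
    two vertices are adjacent iff their attribute sets intersect."""
    rng = random.Random(seed)
    attrs = [{a for a in range(m) if rng.random() < p} for _ in range(n)]
    adj = defaultdict(set)
    for u in range(n):
        for v in range(u + 1, n):
            if attrs[u] & attrs[v]:
                adj[u].add(v)
                adj[v].add(u)
    return attrs, adj

def greedy_colouring(adj, order):
    """Assign each vertex the smallest colour not used by its already-coloured neighbours."""
    colour = {}
    for v in order:
        used = {colour[u] for u in adj[v] if u in colour}
        c = 0
        while c in used:
            c += 1
        colour[v] = c
    return colour

def attribute_clique_cover(attrs):
    """Every attribute induces a clique, so the attribute classes cover all edges."""
    cover = defaultdict(list)
    for v, s in enumerate(attrs):
        for a in s:
            cover[a].append(v)
    return [vs for vs in cover.values() if len(vs) >= 2]

attrs, adj = random_intersection_graph(n=200, m=50, p=0.05)
colours = greedy_colouring(adj, order=range(200))
print(len(set(colours.values())), len(attribute_clique_cover(attrs)))
```

Because every edge arises from a shared attribute, each attribute class induces a clique, which is what makes clique covers and greedy colourings natural objects of study in this model.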
42

Environnements pour l'analyse expérimentale d'applications de calcul haute performance / Environments for the experimental analysis of HPC applications.

Perarnau, Swann 01 December 2011 (has links)
High performance computing (HPC) systems are increasingly complex. Nowadays, each compute node may contain several sockets or several cores sharing multiple memory caches in a hierarchical way. Whether to understand an application's performance on such architectures or to develop new algorithms and validate their behavior, an experimental study is often required. In this thesis, we consider two types of experimental analysis: execution on real systems and simulation of algorithms on randomly generated inputs. In both cases, controlling the environment (hardware or input data) allows a better analysis of the application under study. We therefore propose two methods for controlling the hardware resources used by an application: one for the processing time it is allotted and one for the amount of cache memory available to it. These two methods let us study, in particular, how an application's behavior changes with the amount of resources allocated. Based on modifications of the operating system's behavior, we implemented both methods for Linux and demonstrated their usefulness in the analysis of several parallel applications.
Regarding simulation, we studied the random generation of directed acyclic graphs (DAGs) for the simulation of scheduling algorithms. Although many generation algorithms exist in this area, most publications rely on ad hoc, poorly validated implementations of them. To address this problem, we propose a generation environment comprising most of the methods encountered in the literature. To validate this environment, we ran large analysis campaigns on Grid'5000, in particular checking the known statistical properties of certain methods. We also show that the measured performance of an algorithm is strongly influenced by the chosen input-generation method, to the point of producing inversions: changing the generation algorithm reverses the outcome of a comparison between two schedulers.
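To illustrate the kind of generator such an environment would contain, here is a minimal sketch of one classical layer-by-layer DAG generation method. The function name, parameters, and values are hypothetical; this is not the thesis's implementation.

```python
import random

def layered_random_dag(n_tasks, n_layers, p_edge, seed=0):
    """Classical layer-by-layer DAG generation: tasks are spread over layers and
    an edge u -> v is added (with probability p_edge) only when the layer of u
    precedes the layer of v, which guarantees acyclicity."""
    rng = random.Random(seed)
    layer = {t: rng.randrange(n_layers) for t in range(n_tasks)}
    edges = [(u, v)
             for u in range(n_tasks)
             for v in range(n_tasks)
             if layer[u] < layer[v] and rng.random() < p_edge]
    return layer, edges

layer, edges = layered_random_dag(n_tasks=100, n_layers=10, p_edge=0.05)
print(len(edges))
```

Other classical methods (random orderings, Erdős–Rényi edges over a fixed topological order, and so on) produce DAGs with quite different widths and depths, which is precisely why the choice of generator can flip a comparison between two schedulers.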
43

A Dynamic Longitudinal Examination of Social Networks and Political Behavior: The Moderating Effect of Local Network Properties and Its Implication for Social Influence Processes

Song, Hyunjin 21 May 2015 (has links)
No description available.
44

Endomorphisms of Fraïssé limits and automorphism groups of algebraically closed relational structures

McPhee, Jillian Dawn January 2012 (has links)
Let Ω be the Fraïssé limit of a class of relational structures. We seek to answer the following semigroup theoretic question about Ω. What are the group H-classes, i.e. the maximal subgroups, of End(Ω)? Fraïssé limits for which we answer this question include the random graph R, the random directed graph D, the random tournament T, the random bipartite graph B, Henson's graphs Gₙ (for n ≥ 3) and the total order Q. The maximal subgroups of End(Ω) are closely connected to the automorphism groups of the relational structures induced by the images of idempotents from End(Ω). It has been shown that the relational structure induced by the image of an idempotent from End(Ω) is algebraically closed. Accordingly, we investigate which groups can be realised as the automorphism group of an algebraically closed relational structure in order to determine the maximal subgroups of End(Ω) in each case. In particular, we show that if Γ is a countable graph and Ω = R, D, B, then there exist 2^ℵ₀ maximal subgroups of End(Ω) which are isomorphic to Aut(Γ). Additionally, we provide a complete description of the subsets of Q which are the image of an idempotent from End(Q). We call these subsets retracts of Q and show that if Ω is a total order and f is an embedding of Ω into Q such that im f is a retract of Q, then there exist 2^ℵ₀ maximal subgroups of End(Q) isomorphic to Aut(Ω). We also show that any countable maximal subgroup of End(Q) must be isomorphic to Zⁿ for some natural number n. As a consequence of the methods developed, we are also able to show that when Ω = R, D, B, Q there exist 2^ℵ₀ regular D-classes of End(Ω) and when Ω = R, D, B there exist 2^ℵ₀ J-classes of End(Ω). Additionally, we show that if Ω = R, D then all regular D-classes contain 2^ℵ₀ group H-classes. On the other hand, we show that when Ω = B, Q there exist regular D-classes which contain countably many group H-classes.
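For readers outside semigroup theory, a brief standard reminder (not specific to this thesis) of why group H-classes coincide with maximal subgroups:

```latex
% Green's H-relation on a semigroup S, where S^1 denotes S with an identity adjoined:
\[
  a \,\mathcal{H}\, b \iff aS^{1} = bS^{1} \ \text{and} \ S^{1}a = S^{1}b .
\]
% An H-class containing an idempotent e is a group with identity e, and every
% maximal subgroup of S arises this way; so determining the group H-classes of
% End(Omega) is the same as determining its maximal subgroups.
```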
45

Some problems related to the Karp-Sipser algorithm on random graphs

Kreacic, Eleonora January 2017 (has links)
We study certain questions related to the performance of the Karp-Sipser algorithm on the sparse Erdős–Rényi random graph. The Karp-Sipser algorithm, introduced by Karp and Sipser [34], is a greedy algorithm which aims to obtain a near-maximum matching on a given graph. The algorithm evolves through a sequence of steps. In each step, it picks an edge according to a certain rule, adds it to the matching and removes it from the remaining graph. The algorithm stops when the remaining graph is empty. In [34], the performance of the Karp-Sipser algorithm on the Erdős–Rényi random graphs G(n, M = ⌊cn/2⌋) and G(n, p = c/n), c > 0, is studied. It is proved there that the algorithm behaves near-optimally, in the sense that the difference between the size of a matching obtained by the algorithm and a maximum matching is at most o(n), with high probability as n → ∞. The main result of [34] is a law of large numbers for the size of a maximum matching in G(n, M = cn/2) and G(n, p = c/n), c > 0. Aronson, Frieze and Pittel [2] further refine these results. In particular, they prove that for c < e, the Karp-Sipser algorithm obtains a maximum matching, with high probability as n → ∞; for c > e, the difference between the size of a matching obtained by the algorithm and the size of a maximum matching of G(n, M = cn/2) is of order Θ_{log n}(n^{1/5}), with high probability as n → ∞. They further conjecture a central limit theorem for the size of a maximum matching of G(n, M = cn/2) and G(n, p = c/n) for all c > 0. As noted in [2], the central limit theorem for c < 1 is a consequence of the result of Pittel [45]. In this thesis, we prove a central limit theorem for the size of a maximum matching of both G(n, M = cn/2) and G(n, p = c/n) for c > e. (We do not analyse the case 1 ≤ c ≤ e.) Our approach is based on further analysis of the Karp-Sipser algorithm. We use the results from [2] and refine them. For c > e, the difference between the size of a matching obtained by the algorithm and the size of a maximum matching is of order Θ_{log n}(n^{1/5}), with high probability as n → ∞, and the study [2] suggests that this difference is accumulated at the very end of the process. The question of how the Karp-Sipser algorithm evolves in its final stages for c > e motivated us to consider the following problem in this thesis. We study a model for the destruction of a random network by fire. Assume we have a multigraph with minimum degree at least 2 and real-valued edge-lengths. We first choose a uniform random point along the total length and set it alight. The edges burn at speed 1. If the fire reaches a node of degree 2, it is passed on to the neighbouring edge. A node of degree at least 3, on the other hand, passes the fire either to all of its neighbours or to none, each with probability 1/2. If the fire is extinguished before the whole graph has burnt, we again pick a uniform point and set it alight. We study this model in the setting of a random multigraph with N nodes of degree 3 and α(N) nodes of degree 4, where α(N)/N → 0 as N → ∞. We assume the edges have i.i.d. standard exponential lengths.
We are interested in the asymptotic behaviour of the number of fires we must set alight in order to burn the whole graph, and in the number of points which are burnt from two different directions. Depending on whether α(N) ≫ √N or not, we prove that, after suitable rescaling, these quantities converge jointly in distribution either to a pair of constants or to (complicated) functionals of Brownian motion. Our analysis supports the conjecture that the difference between the size of a matching obtained by the Karp-Sipser algorithm and the size of a maximum matching of the Erdős–Rényi random graph G(n, M = cn/2) for c > e, rescaled by n^{1/5}, converges in distribution.
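For concreteness, a minimal Python sketch of the Karp-Sipser rule as it is usually stated: match a pendant vertex (degree 1) with its unique neighbour whenever one exists, otherwise match the endpoints of a uniformly random edge. The graph representation and example are illustrative, not taken from the thesis.

```python
import random

def karp_sipser_matching(adj, seed=0):
    """Greedy near-maximum matching (Karp-Sipser rule): while edges remain,
    match a pendant vertex with its unique neighbour if one exists, otherwise
    the endpoints of a uniformly random edge; remove both matched vertices."""
    rng = random.Random(seed)
    adj = {v: set(nbrs) for v, nbrs in adj.items()}   # work on a copy
    matching = []

    def remove(v):
        for u in adj[v]:
            adj[u].discard(v)
        adj[v] = set()

    while True:
        pendants = [v for v, nbrs in adj.items() if len(nbrs) == 1]
        if pendants:
            u = rng.choice(pendants)
            v = next(iter(adj[u]))
        else:
            edges = [(u, v) for u in adj for v in adj[u] if u < v]
            if not edges:
                break
            u, v = rng.choice(edges)
        matching.append((u, v))
        remove(u)
        remove(v)
    return matching

# Example: a 5-cycle has a maximum matching of size 2.
cycle = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
print(len(karp_sipser_matching(cycle)))   # -> 2
```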
46

Statistical inference on random graphs and networks / Inferência estatística para grafos aleatórios e redes

Cerqueira, Andressa 28 February 2018 (has links)
In this thesis we study two probabilistic models defined on graphs: the Stochastic Block Model and the Exponential Random Graph model. The thesis is therefore divided in two parts. In the first part, we introduce a penalized estimator based on the Krichevsky-Trofimov mixture for the number of communities in the Stochastic Block Model and prove its eventual almost sure convergence to the underlying number of communities, without assuming a known upper bound on that quantity. In the second part we address the perfect simulation problem for the Exponential Random Graph model. We propose an algorithm based on the Coupling From The Past algorithm using Glauber dynamics. This algorithm is efficient only in the case of monotone models, and we prove that this is the case for a subset of the parameter space. We also propose an algorithm based on the Backward and Forward algorithm, which can be applied to both monotone and non-monotone models. We prove the existence of an upper bound on the expected running time of both algorithms.
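As background, a hedged sketch of a single Glauber-dynamics update for a toy edge-and-triangle exponential random graph. The sufficient statistics, parameter names, and values are illustrative; perfect simulation as studied in the thesis wraps such updates inside Coupling From The Past (for monotone models, running coupled chains started from the empty and the complete graph), which this forward simulation does not do.

```python
import math
import random

def glauber_step(adj, n, theta_edge, theta_tri, rng):
    """One Glauber update for an ERGM with edge and triangle statistics:
    pick a pair {i, j} uniformly at random and resample that edge from its
    conditional distribution given the rest of the graph."""
    i, j = rng.sample(range(n), 2)
    # Change in the sufficient statistics between {i, j} present and absent.
    common = sum(1 for k in range(n)
                 if k not in (i, j) and i in adj[k] and j in adj[k])
    delta = theta_edge + theta_tri * common
    p_on = 1.0 / (1.0 + math.exp(-delta))        # P(edge present | rest of graph)
    if rng.random() < p_on:
        adj[i].add(j); adj[j].add(i)
    else:
        adj[i].discard(j); adj[j].discard(i)

# Plain forward simulation (not perfect simulation): start anywhere and iterate.
rng = random.Random(0)
n = 30
adj = {v: set() for v in range(n)}
for _ in range(50_000):
    glauber_step(adj, n, theta_edge=-1.0, theta_tri=0.2, rng=rng)
print(sum(len(s) for s in adj.values()) // 2)    # number of edges in the sample
```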
48

Модельное управление развитием локализованных экономических сообществ на территории муниципалитета : магистерская диссертация / Model management of the localized economic communities development in the territory of the municipality

Адияк, Е. В., Adiyak, E. V. January 2021 (has links)
Given the prevailing external and internal political conditions, reducing the dependence of the Russian economy on external factors is becoming an increasingly high-priority direction of development. Moreover, there are a number of problems associated with the insufficiently effective development of non-central regions, among them migration and a low birth rate. The role of state and local government institutions in relation to civil society is also changing. There is therefore a need to develop and implement modern mechanisms for the functioning of local communities in order to achieve the long-term goals of the socio-economic development of municipalities. These circumstances motivate the study of the digital financial ecosystem of local communities. Accordingly, the aim of this work is to develop a methodology for assessing whether an individual economic agent potentially belongs to a local economic community. To this end, data on banking transactions were analysed and closed chains of exchange were identified. Four types of coefficients were calculated to determine the usefulness of potentially including an economic agent in the network, and four possible strategies for the development of the local community were considered. The economic effect was calculated.
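One simple way to look for the closed chains of exchange mentioned above is cycle detection in a directed graph of money flows. The sketch below is a hedged illustration only; the transaction records, agent names, and the use of simple cycles are assumptions, not the thesis's methodology.

```python
import networkx as nx

# Hypothetical transaction records: (payer, payee, amount).
transactions = [
    ("bakery", "farm", 120.0),
    ("farm", "repair_shop", 80.0),
    ("repair_shop", "bakery", 60.0),
    ("bakery", "supermarket", 300.0),
]

# Build a directed graph of aggregated money flows between economic agents.
G = nx.DiGraph()
for payer, payee, amount in transactions:
    if G.has_edge(payer, payee):
        G[payer][payee]["amount"] += amount
    else:
        G.add_edge(payer, payee, amount=amount)

# Closed chains of exchange correspond to directed cycles; agents appearing in
# cycles are candidates for membership in a localized economic community.
cycles = list(nx.simple_cycles(G))
members = {agent for cycle in cycles for agent in cycle}
print(cycles)     # e.g. [['bakery', 'farm', 'repair_shop']]
print(members)
```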
49

Neighbor Effects: The Influence of Colony-level Social Structure on Within-group Dynamics in a Social Fish

Hellmann, Jennifer K. 26 October 2016 (has links)
No description available.
50

An enforced cooperation: understanding scientific assessments in adversarial polities through Quebec shale gas policymaking, 2010-2014

Harvey, Alexandre 07 1900 (has links)
Biotechnology, climate change, natural resources, and ecosystem management are all representative of the “new politics of nature” (Hajer 2003), a term encompassing policy issues marked by high scientific uncertainty, regulatory regimes ill-suited to new realities, and unusually acute political conflict. In the hope of easing these tensions and generating a consensual understanding, several governments have mandated ad hoc scientific institutions to inform policymaking and answer stakeholders' concerns. But do such scientific assessments really create a shared understanding among otherwise polarized policy actors? While one might expect them to foster inclusive collective learning, a conflictual policy environment makes learning between opponents extremely unlikely. Accordingly, this research documents the conciliatory potential of scientific assessments using the case of Quebec shale gas policymaking (2010-2014). In doing so, it mobilizes the literature on the political dimensions of knowledge and science to conceptualize scientific assessments within a theory of scientific brokerage. Empirically, the research uses social network analysis (SNA) to trace the collective learning dynamics in two information networks built from the 5751 references contained in the advocacy and technical documents submitted by 268 organizations during the public consultations of 2010 and 2014. Specifically, the findings show how a scientific broker can redirect information flows to counteract the incompatibility between collective learning and political conflict.
The argument mobilizes the cognitive mechanisms traditionally found in policy brokerage theory, but also introduces the often-forgotten power interplays fundamental to the circulation of knowledge among policy actors.
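A hedged illustration of the kind of two-mode analysis described above: an organization-by-reference network projected onto organizations that cite common references, with betweenness centrality as a rough proxy for brokerage. The records and names below are invented for the example and do not come from the thesis data.

```python
import networkx as nx
from networkx.algorithms import bipartite

# Hypothetical records: which organization cites which reference.
citations = [
    ("industry_assoc", "ref_epa_2011"),
    ("citizens_group", "ref_epa_2011"),
    ("citizens_group", "ref_health_study"),
    ("bape_panel", "ref_epa_2011"),
    ("bape_panel", "ref_health_study"),
    ("bape_panel", "ref_geology_report"),
    ("gas_company", "ref_geology_report"),
]

# Two-mode (organization x reference) network.
B = nx.Graph()
orgs = {org for org, _ in citations}
B.add_nodes_from(orgs, bipartite=0)
B.add_nodes_from({ref for _, ref in citations}, bipartite=1)
B.add_edges_from(citations)

# One-mode projection: organizations linked when they cite common references,
# weighted by the number of shared references.
P = bipartite.weighted_projected_graph(B, orgs)

# High betweenness in the projection is a rough proxy for brokerage: an actor
# bridging otherwise disconnected citation communities.
print(sorted(nx.betweenness_centrality(P).items(), key=lambda kv: -kv[1]))
```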
