  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Independent Sets and Eigenspaces

Newman, Michael William January 2004 (has links)
The problems we study in this thesis arise in computer science, extremal set theory and quantum computing. The first common feature of these problems is that each can be reduced to characterizing the independent sets of maximum size in a suitable graph. A second common feature is that the size of these independent sets meets an eigenvalue bound due to Delsarte and Hoffman. Thirdly, the graphs that arise belong to association schemes that have already been studied in other contexts. Our first problem involves covering arrays on graphs, and arises in computer science. The goal is to find a smallest covering array on a given graph <i>G</i>. It is known that this is equivalent to determining whether <i>G</i> has a homomorphism into a <i>covering array graph</i>, <i>CAG(n,g)</i>. Thus our question: are covering array graphs cores? A covering array graph has as vertex set the partitions of <i>{1, &hellip;, n}</i> into <i>g</i> cells each of size at least <i>g</i>, with two vertices being adjacent if their meet has size <i>g<sup>2</sup></i>. We determine that <i>CAG(9,3)</i> is a core. We also obtain some partial results on the family of graphs <i>CAG(g<sup>2</sup>,g)</i>. The key to our method is characterizing the independent sets that meet the Delsarte-Hoffman bound; we call these sets <i>ratio-tight</i>. It turns out that <i>CAG(9,3)</i> sits inside an association scheme, which will be useful but apparently not essential. We then turn our attention to our next problem: the Erdős-Ko-Rado theorem and its <i>q</i>-analogue. We are motivated by a desire to find a unifying proof that will cover both versions. The EKR theorem gives the maximum number of pairwise intersecting <i>k</i>-subsets of a fixed <i>v</i>-set, and characterizes the extremal cases. Its <i>q</i>-analogue does the same for <i>k</i>-dimensional subspaces of a fixed <i>v</i>-dimensional space over <i>GF(q)</i>. We find that the methods we developed for covering array graphs apply to the EKR theorem.
Moreover, unlike most other proofs of EKR, our argument applies equally well to the <i>q</i>-analogue. We provide a proof of the characterization of the extremal cases for the <i>q</i>-analogue when <i>v=2k</i>; no such proof has appeared before. Again, the graphs we consider sit inside well-known association schemes; this time the schemes play a more central role. Finally, we deal with a problem in quantum computing. There are tasks that can be performed using quantum entanglement yet apparently are beyond the reach of methods using classical physics only. One particular task can be solved classically if and only if the graph &Omega;(<i>n</i>) has chromatic number <i>n</i>. The graph &Omega;(<i>n</i>) has as vertex set the set of all <i>&plusmn;1</i> vectors of length <i>n</i>, with two vertices adjacent if they are orthogonal. We find that <i>n</i> is a trivial upper bound on the chromatic number, and that this bound holds with equality if and only if the Delsarte-Hoffman bound on independent sets does too. We are thus led to characterize the ratio-tight independent sets. We are then able to leverage our result using a recursive argument to show that <i>&chi;</i>(&Omega;(<i>n</i>)) > <i>n</i> for all <i>n</i> > 8. It is notable that the reduction to independent sets, the characterization of ratio-tight sets, and the recursive argument all follow from different proofs of the Delsarte-Hoffman bound. Furthermore, &Omega;(<i>n</i>) also sits inside a well-known association scheme, which again plays a central role in our approach.
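As a concrete illustration of the eigenvalue bound the abstract relies on: for a d-regular graph on n vertices, the Delsarte-Hoffman (ratio) bound states that any independent set has size at most n(-&lambda;_min)/(d - &lambda;_min). The sketch below is our own example (not code from the thesis); it checks the bound on the Petersen graph, where it is tight, in the sense of the "ratio-tight" sets described above:

```python
import numpy as np

def hoffman_bound(adj):
    # Ratio bound for a d-regular graph on n vertices:
    #   alpha(G) <= n * (-lam_min) / (d - lam_min)
    # Independent sets meeting it are "ratio-tight" in the abstract's sense.
    n = adj.shape[0]
    d = int(adj[0].sum())                    # assumes adj is regular
    lam_min = np.linalg.eigvalsh(adj).min()  # smallest adjacency eigenvalue
    return n * (-lam_min) / (d - lam_min)

# Petersen graph: outer 5-cycle, inner pentagram, matching spokes.
A = np.zeros((10, 10), dtype=int)
for i in range(5):
    A[i][(i + 1) % 5] = A[(i + 1) % 5][i] = 1                  # outer cycle
    A[5 + i][5 + (i + 2) % 5] = A[5 + (i + 2) % 5][5 + i] = 1  # pentagram
    A[i][5 + i] = A[5 + i][i] = 1                              # spokes

print(round(hoffman_bound(A), 6))  # 4.0, attained: alpha(Petersen) = 4
```

Here the adjacency spectrum is {3, 1, -2}, so the bound is 10 &middot; 2/5 = 4, matched by the four-element independent sets of the Petersen graph.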
13

Analysis and Evaluation of Social Network Anomaly Detection

Zhao, Meng John 27 October 2017 (has links)
As social networks become more prevalent, there is significant interest in studying these network data, the focus often being on detecting anomalous events. This area of research is referred to as social network surveillance or social network change detection. While there are a variety of proposed methods suitable for different monitoring situations, two important issues have yet to be fully addressed in the network surveillance literature: first, performance assessment using simulated data to evaluate the statistical properties of a particular method; second, the study of aggregated data in social network surveillance. The research presented tackles these issues in two parts: an evaluation of a popular anomaly detection method, and an investigation of the effects of different aggregation levels on network anomaly detection. / Ph. D. / Social networks are increasingly becoming a part of our normal lives. These networks contain a wealth of information that can be immensely useful in a variety of areas, from targeting a specific audience for advertisement, to apprehending criminals, to detecting terrorist activities. The research presented focuses on evaluating popular methods for monitoring these social networks and on the potential information loss one might encounter when only limited information can be collected over a specific time period. We also present recommendations on social network monitoring that are applicable to a wide range of scenarios.
14

On Generalized Measures Of Information With Maximum And Minimum Entropy Prescriptions

Dukkipati, Ambedkar 03 1900 (has links)
Kullback-Leibler relative-entropy, or KL-entropy, of P with respect to R, defined as &int;_X ln(dP/dR) dP, where P and R are probability measures on a measurable space (X, &#x1D510;), plays a basic role in the definitions of classical information measures. It overcomes a shortcoming of Shannon entropy, whose discrete-case definition cannot be extended naturally to the nondiscrete case. Further, entropy and other classical information measures can be expressed in terms of KL-entropy, and hence properties of their measure-theoretic analogs follow from those of measure-theoretic KL-entropy. An important theorem in this respect is the Gelfand-Yaglom-Perez (GYP) theorem, which equips KL-entropy with a fundamental definition and can be stated as: measure-theoretic KL-entropy equals the supremum of KL-entropies over all measurable partitions of X. In this thesis we provide the measure-theoretic formulations for 'generalized' information measures, and state and prove the corresponding GYP-theorem, the 'generalizations' being in the sense of Rényi and nonextensive entropies, both of which are explained below. The Kolmogorov-Nagumo average, or quasilinear mean, of a vector x = (x_1, &hellip;, x_n) with respect to a pmf p = (p_1, &hellip;, p_n) is defined as &lang;x&rang;_&psi; = &psi;<sup>-1</sup>(&sum;_{k=1}^{n} p_k &psi;(x_k)), where &psi; is an arbitrary continuous and strictly monotone function. Replacing the linear averaging in Shannon entropy with Kolmogorov-Nagumo averages (KN-averages), and further imposing the additivity constraint (a characteristic property of the information associated with a single event, which is logarithmic) leads to the definition of &alpha;-entropy or Rényi entropy. This is the first formal well-known generalization of Shannon entropy. Using this recipe of Rényi's generalization, one can prepare only two information measures: Shannon and Rényi entropy. Indeed, using this formalism Rényi characterized these additive entropies in terms of axioms of KN-averages.
On the other hand, if one generalizes the information of a single event in the definition of Shannon entropy by replacing the logarithm with the so-called q-logarithm, defined as ln_q x = (x<sup>1-q</sup> - 1)/(1 - q), one gets what is known as Tsallis entropy. Tsallis entropy is also a generalization of Shannon entropy, but it does not satisfy the additivity property. Instead, it satisfies pseudo-additivity of the form x &oplus;_q y = x + y + (1 - q)xy, and hence it is also known as nonextensive entropy. One can apply Rényi's recipe in the nonextensive case by replacing the linear averaging in Tsallis entropy with KN-averages and thereby imposing the constraint of pseudo-additivity. A natural question that arises is: what are the various pseudo-additive information measures that can be prepared with this recipe? We prove that Tsallis entropy is the only one. Here, we mention that one of the important characteristics of this generalized entropy is that while the canonical distributions resulting from maximization of Shannon entropy are exponential in nature, in the Tsallis case they are power-law distributions. The concept of maximum entropy (ME), originally from physics, has been promoted to a general principle of inference primarily by the works of Jaynes and, later on, Kullback. This connects information theory and statistical mechanics via the principle that the states of thermodynamic equilibrium are states of maximum entropy, and further connects to statistical inference via the prescription to select the probability distribution that maximizes the entropy. The two fundamental principles related to the concept of maximum entropy are the Jaynes maximum entropy principle, which involves maximizing Shannon entropy, and the Kullback minimum entropy principle, which involves minimizing relative-entropy, with respect to appropriate moment constraints.
Though relative-entropy is not a metric, in cases involving distributions resulting from relative-entropy minimization one can bring forth certain geometrical formulations. These are reminiscent of squared Euclidean distance and satisfy an analogue of Pythagoras' theorem. This property is referred to as the Pythagoras theorem of relative-entropy minimization, or triangle equality, and plays a fundamental role in geometrical approaches to statistical estimation theory such as information geometry. In this thesis we state and prove the equivalent of Pythagoras' theorem in the nonextensive formalism. For this purpose we study relative-entropy minimization in detail and present some results. Finally, we demonstrate the use of power-law distributions, resulting from ME prescriptions of Tsallis entropy, in evolutionary algorithms. This work is motivated by the recently proposed generalized simulated annealing algorithm based on Tsallis statistics. To sum up, in light of their well-known axiomatic and operational justifications, this thesis establishes some results pertaining to the mathematical significance of generalized measures of information. We believe that these results represent an important contribution towards the ongoing research on understanding the phenomenon of information.
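The q-logarithm and the resulting pseudo-additivity described in the abstract are easy to verify numerically. The sketch below is our own illustration of the standard definitions (not code from the thesis); it checks the q &rarr; 1 Shannon limit and the pseudo-additivity identity S_q(A,B) = S_q(A) + S_q(B) + (1-q)S_q(A)S_q(B) for independent systems:

```python
import math

def q_log(x, q):
    # ln_q(x) = (x**(1 - q) - 1) / (1 - q); recovers ln(x) as q -> 1
    if abs(q - 1.0) < 1e-12:
        return math.log(x)
    return (x ** (1.0 - q) - 1.0) / (1.0 - q)

def tsallis_entropy(p, q):
    # S_q(p) = sum_k p_k * ln_q(1 / p_k), equivalently (sum p_k^q - 1)/(1 - q)
    return sum(pk * q_log(1.0 / pk, q) for pk in p if pk > 0)

p = [0.5, 0.25, 0.25]
shannon = -sum(pk * math.log(pk) for pk in p)    # ~1.0397
print(round(tsallis_entropy(p, 0.5), 4))         # 1.4142, exceeds Shannon here
```

A quick check against the limit: tsallis_entropy(p, 1 + 1e-6) agrees with the Shannon value above to several decimal places, and the pseudo-additivity identity holds exactly for any product distribution.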
15

Multifractal zeta functions

Mijović, Vuksan January 2017 (has links)
Multifractals have, during the past 20-25 years, been the focus of enormous attention in the mathematical literature. Loosely speaking, there are two main ingredients in multifractal analysis: the multifractal spectra and the Rényi dimensions. One of the main goals in multifractal analysis is to understand these two ingredients and their relationship with each other. Motivated by the powerful techniques provided by the use of the Artin-Mazur zeta-functions in number theory and the use of the Ruelle zeta-functions in dynamical systems, Lapidus and collaborators (see the books by Lapidus & van Frankenhuysen [32, 33] and the references therein) have introduced and pioneered the use of zeta-functions in fractal geometry. Inspired by this development, within the past 7-8 years several authors have paralleled it by introducing zeta-functions into multifractal geometry. Our results, inspired by this work, are given in section 2.2.2. There we introduce geometric multifractal zeta-functions providing precise information about very general classes of multifractal spectra, including, for example, the multifractal spectra of self-conformal measures and the multifractal spectra of ergodic Birkhoff averages of continuous functions. The results in that section are based on the paper [37]. Dynamical zeta-functions have been introduced and developed by Ruelle [63, 64] and others (see, for example, the surveys and books [3, 54, 55] and the references therein). It has been a major challenge to introduce and develop a natural and meaningful theory of dynamical multifractal zeta-functions paralleling the existing theory of dynamical zeta-functions.
In particular, in the setting of self-conformal constructions, Olsen [49] introduced a family of dynamical multifractal zeta-functions designed to provide precise information about very general classes of multifractal spectra, including, for example, the multifractal spectra of self-conformal measures and the multifractal spectra of ergodic Birkhoff averages of continuous functions. However, it has recently been recognised that while self-conformal constructions provide a useful and important framework for studying fractal and multifractal geometry, the more general notion of graph-directed self-conformal constructions provides a substantially more flexible and useful framework; see, for example, [36] for an elaboration of this. In recognition of this viewpoint, in section 2.3.11 we provide the main definitions of the multifractal pressure and the multifractal dynamical zeta-functions, and we state our main results. This section is based on the paper [38]. The setting in which we work unifies various multifractal spectra, including the fine multifractal spectra of self-conformal measures and of Birkhoff averages of continuous functions; it was introduced by Olsen in [43]. In section 2.1 we propose an answer to the problem of defining Rényi spectra in more general settings and provide a slight improvement of a result regarding multifractal spectra in the case of subshifts of finite type.
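For readers unfamiliar with the Artin-Mazur construction invoked above: the dynamical zeta function is &zeta;(z) = exp(&sum;_n N_n z^n / n), where N_n counts period-n points. The tiny numerical sketch below is our own illustration under the standard definitions (not code from the thesis); for the full shift on two symbols, N_n = 2^n and the series sums to 1/(1 - 2z):

```python
import math

def zeta_truncated(z, terms=60):
    # Artin-Mazur zeta function for the full 2-shift, truncated:
    #   zeta(z) = exp(sum_{n>=1} N_n z^n / n) with N_n = 2**n periodic points.
    # For |z| < 1/2 the series converges to 1 / (1 - 2z).
    return math.exp(sum((2 ** n) * z ** n / n for n in range(1, terms + 1)))

z = 0.1
print(round(zeta_truncated(z), 6))  # 1.25 = 1 / (1 - 2 * 0.1)
```

The poles of such zeta functions encode the growth rate of periodic orbits, which is the kind of information the multifractal versions discussed in the thesis refine.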
16

On Modern Measures and Tests of Multivariate Independence

Paler, Mary Elvi Aspiras 19 November 2015 (has links)
No description available.
17

Inhomogeneous self-similar sets and measures

Snigireva, Nina January 2008 (has links)
The thesis consists of four main chapters. The first chapter includes an introduction to inhomogeneous self-similar sets and measures. In particular, we show that these sets and measures are natural generalizations of the well known self-similar sets and measures. We then investigate the structure of these sets and measures. In the second chapter we study various fractal dimensions (Hausdorff, packing and box dimensions) of inhomogeneous self-similar sets and compare our results with the well-known results for (ordinary) self-similar sets. In the third chapter we investigate the <i>L<sup>q</sup></i> spectra and the Rényi dimensions of inhomogeneous self-similar measures and prove that new multifractal phenomena, not exhibited by (ordinary) self-similar measures, appear in the inhomogeneous case. Namely, we show that inhomogeneous self-similar measures may have phase transitions which is in sharp contrast to the behaviour of the <i>L<sup>q</sup></i> spectra of (ordinary) self-similar measures satisfying the Open Set Condition. Then we study the significantly more difficult problem of computing the multifractal spectra of inhomogeneous self-similar measures. We show that the multifractal spectra of inhomogeneous self-similar measures may be non-concave which is again in sharp contrast to the behaviour of the multifractal spectra of (ordinary) self-similar measures satisfying the Open Set Condition. Then we present a number of applications of our results. Many of them are related to the notoriously difficult problem of computing (or simply obtaining non-trivial bounds) for the multifractal spectra of self-similar measures not satisfying the Open Set Condition. More precisely, we will show that our results provide a systematic approach to obtain non-trivial bounds (and in some cases even exact values) for the multifractal spectra of several large and interesting classes of self-similar measures not satisfying the Open Set Condition.
In the fourth chapter we investigate the asymptotic behaviour of the Fourier transforms of inhomogeneous self-similar measures and again we present a number of applications of our results, in particular to non-linear self-similar measures.
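To make the L^q spectrum concrete for the ordinary (homogeneous) case the abstract contrasts against: for a self-similar measure with weights p_i and contraction ratios r_i satisfying the Open Set Condition, one standard convention defines &tau;(q) implicitly by &sum;_i p_i^q r_i^&tau;(q) = 1. The sketch below is our own illustration under that convention (not the thesis' inhomogeneous setting, where phase transitions can occur); it solves the equation by bisection:

```python
def lq_exponent(ps, rs, q, lo=-50.0, hi=50.0):
    # Solve sum_i p_i**q * r_i**beta = 1 for beta = tau(q).
    # With 0 < r_i < 1 the left side is strictly decreasing in beta,
    # so plain bisection converges.
    f = lambda b: sum(p ** q * r ** b for p, r in zip(ps, rs)) - 1.0
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# Cantor-like measure: two maps with ratio 1/3, equal weights 1/2.
tau0 = lq_exponent([0.5, 0.5], [1 / 3, 1 / 3], q=0)
print(round(tau0, 4))  # 0.6309 = log 2 / log 3, the Cantor set dimension
```

At q = 0 the exponent reduces to the box dimension of the support, which is why the Cantor value log 2 / log 3 appears; the Rényi dimensions are then D_q = &tau;(q)/(1 - q) for q &ne; 1 under this convention.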
18

Rede complexa e criticalidade auto-organizada: modelos e aplicações / Complex network and self-organized criticality: models and applications

Castro, Paulo Alexandre de 05 February 2007 (has links)
Models and scientific theories arise from our need to better understand how the world we live in works. New models and techniques are constantly created with this goal, and one such recently developed theory is Self-Organized Criticality, to which Chapter 2 of this thesis gives a brief introduction. With self-organized criticality as a backdrop, in Chapter 3 we study the Bak-Sneppen dynamics (and several variants) and compare them with some optimization algorithms. Chapter 4 presents a historical and conceptual review of complex networks, revisiting some important models: Erdös-Rényi, Watts-Strogatz, the configuration model and Barabási-Albert. In Chapter 5 we study the nonlinear Barabási-Albert model. For this model we obtained an analytical expression for the connectivity distribution P(k), valid over a wide range of the parameter space. We also proposed an analytical form for the clustering coefficient, which was corroborated by our numerical simulations. We verified that the nonlinear Barabási-Albert network can be assortative or disassortative, and that only in the case of the linear Barabási-Albert model is the network non-assortative. In Chapter 6, using data collected from a CD-ROM released by the magazine Placar, we constructed a rather peculiar network: the Brazilian soccer network. We first analysed the bipartite network formed by players and clubs, finding that the probability that a player has taken part in M matches decays exponentially with M, whereas the probability that a player has scored G goals follows a power law. From the bipartite network we built the unipartite network of players, which we call the Brazilian soccer player network. For this network we determined several quantities: the average shortest path length and the clustering and assortativity coefficients. The Brazilian soccer player network also allowed us to analyse the time evolution of these quantities, a rare opportunity in the study of real networks.
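For context on the (linear) Barabási-Albert mechanism that the chapter generalizes: each new node attaches preferentially to high-degree nodes, which is what produces the power-law P(k). Below is a minimal sketch with our own simplifications (the repeated-node-list sampling trick; this is the linear model, not the nonlinear variant studied in the thesis):

```python
import random

def ba_network(n, m=2, seed=1):
    # Linear preferential attachment: each new node links to m existing
    # nodes chosen with probability roughly proportional to their degree,
    # implemented by sampling from a list in which each node appears
    # once per incident edge.
    random.seed(seed)
    targets = list(range(m))   # start from m seed nodes
    repeated = []              # node list weighted by degree
    edges = []
    for v in range(m, n):
        for t in set(targets):             # dedupe to avoid multi-edges
            edges.append((v, t))
        repeated.extend(targets)
        repeated.extend([v] * m)
        targets = random.sample(repeated, m)
    return edges

edges = ba_network(200)
deg = {}
for a, b in edges:
    deg[a] = deg.get(a, 0) + 1
    deg[b] = deg.get(b, 0) + 1
print(len(deg), sum(deg.values()) == 2 * len(edges))  # all nodes reached
```

The degree sequence from such runs is heavy-tailed: a few early hubs accumulate far more links than the average node, the qualitative signature of the P(k) power law.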
19

Analysis of Internal Boundaries and Transition Regions in Geophysical Systems with Advanced Processing Techniques

Krützmann, Nikolai Christian January 2013 (has links)
This thesis examines the utility of the Rényi entropy (RE), a measure of the complexity of probability density functions, as a tool for finding physically meaningful patterns in geophysical data. Initially, the RE is applied to observational data of long-lived atmospheric tracers in order to analyse the dynamics of stratospheric transition regions associated with barriers to horizontal mixing. Its wider applicability is investigated by testing the RE as a method for highlighting internal boundaries in snow and ice from ground penetrating radar (GPR) recordings. High-resolution 500 MHz GPR soundings of dry snow were acquired at several sites near Scott Base, Antarctica, in 2008 and 2009, with the aim of using the RE to facilitate the identification and tracking of subsurface layers to extrapolate point measurements of accumulation from snow pits and firn cores to larger areas. The atmospheric analysis focuses on applying the RE to observational tracer data from the EOS-MLS satellite instrument. Nitrous oxide (N2O) is shown to exhibit subtropical RE maxima in both hemispheres. These peaks are a measure of the tracer gradients that mark the transition between the tropics and the mid-latitudes in the stratosphere, also referred to as the edges of the tropical pipe. The RE maxima are shown to be located closer to the equator in winter than in summer. This agrees well with the expected behaviour of the tropical pipe edges and is similar to results reported by other studies. Compared to other stratospheric mixing metrics, the RE has the advantage that it is easy to calculate as it does not, for example, require conversion to equivalent latitude and does not rely on dynamical information such as wind fields. The RE analysis also reveals occasional sudden poleward shifts of the southern hemisphere tropical pipe edge during austral winter which are accompanied by increased mid-latitude N2O levels.
These events are investigated in more detail by creating daily high-resolution N2O maps using a two-dimensional trajectory model and MERRA reanalysis winds to advect N2O observations forwards and backwards in time on isentropic surfaces. With the aid of this ‘domain filling’ technique it is illustrated that the increase in southern hemisphere mid-latitude N2O during austral winter is probably the result of the cumulative effect of several large-scale, episodic leaks of N2O-rich air from the tropical pipe. A comparison with the global distribution of potential vorticity strongly suggests that irreversible mixing related to planetary wave breaking is the cause of the leak events. Between 2004 and 2011 the large-scale leaks are shown to occur approximately every second year and a connection to the equatorial quasi-biennial oscillation is found to be likely, though this cannot be established conclusively due to the relatively short data set. Identification and tracking of subsurface boundaries, such as ice layers in snow or the bedrock of a glacier, is the focus of the cryospheric part of this project. The utility of the RE for detecting amplitude gradients associated with reflections in GPR recordings is initially tested on a 25 MHz sounding of an Antarctic glacier. The results show distinct regions of increased RE values that allow identification of the glacial bedrock along large parts of the profile. Due to the low computational requirements, the RE is found to be an effective pseudo gain function for initial analysis of GPR data in the field. While other gain functions often have to be tuned to give a good contrast between reflections and background noise over the whole vertical range of a profile, the RE tends to assign all detectable amplitude gradients a similar (high) value, resulting in a clear contrast between reflections and background scattering. 
Additionally, theoretical considerations allow the definition of a ‘standard’ data window size with which the RE can be applied to recordings made by most pulsed GPR systems and centre frequencies. This is confirmed by tests with higher frequency recordings (50 and 500 MHz) acquired on the McMurdo Ice Shelf. However, these also reveal that the RE processing is less reliable for identifying more closely spaced reflections from internal layers in dry snow. In order to complete the intended high-resolution analysis of accumulation patterns by tracking internal snow layers in the 500 MHz data from two test sites, a different processing approach is developed. Using an estimate of the emitted waveform from direct measurement, deterministic deconvolution via the Fourier domain is applied to the high-resolution GPR data. This reveals unambiguous reflection horizons which can be observed in repeat measurements made one year apart. Point measurements of average accumulation from snow pits and firn cores are extrapolated to larger areas by identifying and tracking a dateable dust layer horizon in the radargrams. Furthermore, it is shown that annual compaction rates of snow can be estimated by tracking several internal reflection horizons along the deconvolved radar profiles and calculating the average change in separation of horizon pairs from one year to the next. The technique is complementary to point measurements from other studies and the derived compaction rates agree well with published values and theoretical estimates.
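To illustrate how the RE can act as a pseudo gain function in the sense described above: the Rényi entropy of order &alpha; of a distribution p is H_&alpha;(p) = log(&sum;_i p_i^&alpha;)/(1 - &alpha;). The sliding-window sketch below is our own construction, not the thesis' calibrated recipe; the window width, bin count and &alpha; are assumptions chosen for illustration. Near-constant windows score near zero, while windows containing amplitude gradients score high:

```python
import math

def renyi_entropy(p, alpha=2.0):
    # H_alpha(p) = log(sum_i p_i**alpha) / (1 - alpha); Shannon as alpha -> 1
    return math.log(sum(pi ** alpha for pi in p if pi > 0)) / (1.0 - alpha)

def windowed_re(signal, width=4, bins=4, alpha=2.0):
    # Histogram the amplitudes in each sliding window and score the window
    # by the RE of the bin probabilities.
    lo, hi = min(signal), max(signal)
    scores = []
    for i in range(len(signal) - width + 1):
        counts = [0] * bins
        for x in signal[i:i + width]:
            j = min(int((x - lo) / (hi - lo + 1e-12) * bins), bins - 1)
            counts[j] += 1
        scores.append(renyi_entropy([c / width for c in counts], alpha))
    return scores

trace = [0, 0, 0, 0, 1, 2, 3, 4, 0, 0, 0, 0]   # synthetic "reflection" ramp
s = windowed_re(trace)
print(round(max(s), 4))  # 1.3863 = log 4: windows on the ramp score highest
```

Because every detectable gradient is pushed toward a similar high score, the contrast between reflections and background comes out roughly uniform over the trace, which is the pseudo-gain behaviour noted in the abstract.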