1 |
Managing space in forward pick areas of warehouses for small parts. Subramanian, Sriram, 13 January 2014
Many high-volume warehouses for small parts, such as pharmaceuticals, cosmetics and office supplies, seek to improve efficiency by creating forward pick areas in which many popular products are stored in a small area that is replenished from reserve storage. This thesis addresses the question of how to stock forward pick areas to maximum benefit by answering two key, interrelated questions, together known as Assignment-Allocation: which SKUs should be stored in the forward pick area (assignment), and how much space should be allocated to each SKU (allocation)? We present fast, simple, near-optimal algorithms to answer these questions in a variety of circumstances. To allocate space to SKUs, we introduce a Powers-of-Two allocation scheme designed to simplify shelf management. In addition, we present a ranking-based algorithm to assign SKUs and allocate space among multiple forward pick areas, and we show how a similar algorithm can account for constraints on congestion and workload within the forward pick area. We also show how to determine the optimal assignment for warehouses with one or more forward pick areas that allocate space in ways that are common in practice.
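The abstract names the Powers-of-Two scheme but gives no algorithm. The following Python sketch shows one plausible reading, assuming the classic fluid-model result that forward-pick space is best allocated in proportion to the square root of each SKU's flow, with each share then rounded to a power of two so shelf slots come in a few standard sizes; the function name and input format are hypothetical, not the thesis's own.

```python
import math

def powers_of_two_allocation(skus, total_space):
    """Allocate forward-pick space in power-of-two shelf units.

    skus: dict mapping SKU name -> annual flow (volume moved through the
    forward area); this input format is an assumption. The fluid-model
    heuristic allocates space proportional to sqrt(flow); each share is
    then rounded down to the nearest power of two.
    """
    root_flow = {s: math.sqrt(f) for s, f in skus.items()}
    scale = total_space / sum(root_flow.values())
    allocation = {}
    for s, rf in root_flow.items():
        ideal = rf * scale  # fluid-model (continuous) allocation
        allocation[s] = 2 ** math.floor(math.log2(ideal)) if ideal >= 1 else 0
    return allocation

# Three SKUs competing for 70 units of shelf space.
print(powers_of_two_allocation({"A": 400.0, "B": 100.0, "C": 25.0}, 70.0))
# -> {'A': 32, 'B': 16, 'C': 8}
```

Rounding down keeps the total within the space budget at the cost of some unused capacity; the thesis's actual rounding rule may differ.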
Warehouses frequently use the 80-20 rule to manage SKUs based on their popularity. We examine empirical data from thirty warehouses and analyze whether a power law distribution is a suitable fit to SKU popularity. We test the hypothesis that the power law fits of warehouses in similar industries are themselves similar, and we review explanations for why power laws arise in other settings, identifying those that are plausible in the warehouse setting.
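As a hedged illustration of such a fit, the sketch below estimates a power-law exponent from synthetic SKU-popularity data using the standard continuous maximum-likelihood (Hill) estimator of Clauset, Shalizi and Newman (2009); the thesis's own fitting procedure is not specified in the abstract.

```python
import math, random

def mle_power_law_alpha(samples, xmin):
    """Continuous maximum-likelihood (Hill) estimate of alpha in
    p(x) ~ x^(-alpha) for x >= xmin (Clauset, Shalizi & Newman 2009)."""
    tail = [x for x in samples if x >= xmin]
    return 1.0 + len(tail) / sum(math.log(x / xmin) for x in tail)

# Synthetic SKU picks drawn from a Pareto law by inverse-transform sampling.
random.seed(1)
alpha_true, xmin = 2.0, 1.0
picks = [xmin * (1.0 - random.random()) ** (-1.0 / (alpha_true - 1.0))
         for _ in range(5000)]
print(f"estimated alpha = {mle_power_law_alpha(picks, xmin):.2f}")  # ~2.0
```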
|
2 |
Aplicação de leis de potência para tratamento e classificação de tamanho de empresas: uma proposta metodológica para pesquisas contábeis / Application of power laws to the treatment and classification of company size: a methodological proposal for accounting research. Silva, Marli Auxiliadôra da, 28 March 2008
In accounting, firm size is commonly used as a proxy for numerous theoretical concepts. The validity and reliability of this proxy have been widely debated because of the high variance of the operational measures used to gauge firm size, and statistical procedures are often applied to reduce this variance by adjusting the values. International research shows that the probability distribution of firm size follows a power law. Against this background, this research investigates whether the operational measures used as proxies for the size of Brazilian companies can be treated and classified by means of power laws. Data cover 1997 to 2006 for the operational measures Revenue (REC), Total Assets (AT) and Equity (PL), and 1996 to 2004 for Number of Employees (NE), drawn from three different databases: FIPECAFI, Economática® and IBGE. A power law, P(V > v) = c·v^(−m), was observed over the entire range of the variable v = REC, with exponent m close to 1, as found in all the international research, while for the other variables the power law was confirmed only in the tail of the distribution. The proposed formulation, the proxy v·P(V > v), was then applied in a specific study in the accounting area. Weighting v by the probability of observing values greater than v, used as a proxy, attains the statistical significance required by models in the accounting literature. The results are satisfactory: they confirm the power-law nature of P(V > v) in the Brazilian setting, and the proposed proxy is expected to be validated in accounting research, since it incorporates information on the statistical nature of operational measures of firm size.
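A minimal sketch of the two quantities named in the abstract, assuming the straightforward empirical plug-in reading of P(V > v) and of the proxy v·P(V > v); the thesis's exact estimation procedure is not given here, and the input data are synthetic.

```python
import random

def empirical_ccdf(values):
    """Return (v, P(V > v)) pairs, ascending in v."""
    xs = sorted(values)
    n = len(xs)
    return [(v, (n - i - 1) / n) for i, v in enumerate(xs)]

def size_proxy(values):
    """Weight each size measure v by the empirical probability of seeing a
    larger value, i.e. the proxy v * P(V > v) named in the abstract; the
    empirical plug-in estimate used here is an assumption."""
    return [(v, v * p) for v, p in empirical_ccdf(values)]

# Synthetic 'revenues' from a Pareto law with tail exponent m ~ 1.
random.seed(7)
revenues = [(1.0 - random.random()) ** -1.0 for _ in range(8)]
for v, proxy in size_proxy(revenues):
    print(f"v = {v:10.2f}   v * P(V > v) = {proxy:.2f}")
```

The weighting damps the huge spread of raw size measures: very large firms get multiplied by a very small exceedance probability.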
|
3 |
A Hubterranean View of Syntax: An Analysis of Linguistic Form through Network Theory. Steele, Julie Louise, date unknown
Language is part of nature, and as such, certain general principles that generate the form of natural systems will also create the patterns found within linguistic form. Since network theory is one of the best theoretical frameworks for extracting general principles from diverse systems, this thesis examines how a network perspective can shed light on the characteristics and the learning of syntax. It is demonstrated that two word co-occurrence networks constructed from adult and child speech (BNC World Edition 2001; Sachs 1983; MacWhinney 2000a) exhibit three non-atomic syntactic primitives, namely truncated power law distributions of frequency, of degree, and of the link length between two nodes (a link representing a precedence relation). Since a power law distribution of link lengths characterises a hubterranean structure (Kasturirangan 1999), i.e. a structure with a few highly connected nodes and many poorly connected nodes, both the adult and the child word co-occurrence networks exhibit hubterranean structure. This structure is formed by an optimisation process that minimises link length whilst maximising connectivity (Mathias & Gopal 2001a, b). The link length in a word co-occurrence network is the storage cost of representing two adjacently co-occurring words and is inversely proportional to the transitional probability (TP) of the word pair. Adjacent words that often co-occur, i.e. have a high TP, exhibit high cohesion and tend to form chunks. These chunks are a cost-effective method of storing representations. Thus, on this view, the (multi-)power law of link lengths represents the distribution of storage costs, or cohesions, between adjacent words. Such cohesions form groupings of linguistic form known as syntactic constituents; syntactic constituency is therefore not specific to language but is a property derived from the optimisation of the network. In keeping with other systems generated by a cost constraint on link length, it is demonstrated that both the child and adult word co-occurrence networks are not hierarchically organised in terms of degree distribution (Ravasz and Barabási 2003:1). Furthermore, both networks are disassortative, and in line with other disassortative networks, there is a correlation between degree and betweenness centrality (BC) values (Goh, Kahng and Kim 2003). In agreement with scale-free networks (Goh, Oh, Jeong, Kahng and Kim 2002), the BC values in both networks follow a power law distribution. In this thesis, a motif analysis of the two word co-occurrence networks provides a richly detailed (non-functional) distributional analysis and reveals that the adult and child significance profiles for triad subgraphs correlate closely; moreover, the most significant 4-node motifs in the adult network are also the most significant in the child network. On the basis of this non-functional distributional analysis, it is argued that the notion of a general syntactic category is not evidenced and as such is inadmissible; non-general, construction-specific categories are preferred instead (in line with Croft 2001). Function words tend to be the hub words of the network (see Ferrer i Cancho and Solé 2001a), being defined, and therefore identifiable, by their high type and token frequency. These properties are useful for identifying syntactic categories, since function words are traditionally associated with particular syntactic categories (see Cann 2000).
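As an illustration of the constructions this paragraph relies on, the sketch below builds a small directed word co-occurrence network from adjacent word pairs and computes token frequency, degree, and a link-length proxy 1/TP, following the abstract's claim that link length is inversely proportional to the transitional probability; the corpus and the exact network conventions of the thesis (directedness, windowing) are assumptions.

```python
from collections import Counter, defaultdict

def cooccurrence_network(tokens):
    """Directed word co-occurrence network over adjacent pairs.
    Returns per-word token frequency, degree (distinct right-neighbours),
    and a link-length proxy 1/TP for each bigram, where
    TP(w1, w2) = count(w1 w2) / count(w1)."""
    freq = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    neighbours = defaultdict(set)
    for (w1, w2) in bigrams:
        neighbours[w1].add(w2)
    degree = {w: len(ns) for w, ns in neighbours.items()}
    link_length = {(w1, w2): freq[w1] / c for (w1, w2), c in bigrams.items()}
    return freq, degree, link_length

toks = "the cat sat on the mat and the dog sat on the cat".split()
freq, degree, link_length = cooccurrence_network(toks)
print(degree["the"])               # hubs: function words get high degree
print(link_length[("sat", "on")])  # high-TP pair -> short link (cheap chunk)
```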
Consequently, a function word, and thus a syntactic category, may be identified by the intersection of the frequency and degree power laws with their truncated tails. As a given syntactic category captures the type of words that may co-occur with the function word, the category encourages consistency within the functional patterns of the network and reinforces the network's (near-)optimised state. Syntax, on this view, is both a navigator, manoeuvring through the ever-varying sea of linguistic form, and a guide, forging an uncharted course through novel expression. There is also evidence suggesting that hubterranean structure is found not only in the word co-occurrence network but also at other theoretical syntactic levels. Factors affecting the choice of a verb that is generalised early relate to the formation and characteristics of hubs: high (token) frequency in combination with either high degree (type frequency) or low storage cost points to certain verbs within the network, and these highly 'visible' verbs tend to be generalised early (in line with Boyd and Goldberg forthcoming). Furthermore, the optimisation process that creates hubterranean structure is implicated in the verb-construction subpart network of the adult's linguistic knowledge, the mapping of constructions' form-to-meaning pairings, the construction inventory size, and certain strategies that aid first language learning and adult artificial language learning.
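The truncated tails mentioned here correspond to a power law with an exponential cutoff. A small sketch of that functional form, with purely illustrative parameters rather than the thesis's fitted values, shows how the cutoff suppresses very high-degree hubs.

```python
import math

def truncated_power_law(k, gamma, kappa, c=1.0):
    """Power law with an exponential cutoff,
    p(k) = c * k^(-gamma) * exp(-k/kappa), the 'truncated' form reported
    for the frequency, degree and link-length distributions. gamma, kappa
    and c here are illustrative, not fitted values."""
    return c * k ** (-gamma) * math.exp(-k / kappa)

# Beyond k ~ kappa the cutoff bends the tail down, so a pure power law
# would overestimate the probability of extremely high-degree hubs.
for k in (1, 10, 100, 1000):
    print(k, truncated_power_law(k, gamma=1.5, kappa=200.0))
```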
|
4 |
Análise computacional da disseminação de epidemias considerando a diluição e a mobilidade dos agentes / Computational analysis of epidemic dissemination considering dilution and mobility of agents. Cruz, Vicente Silva, January 2013
Research on the spreading of epidemics is frequent because of its relevance for the containment of diseases. However, given the variety of existing illnesses, observing an approximate generic behavior is impractical. In this context, mathematical epidemic models help to provide information that public bodies can use to combat real epidemic outbreaks. In parallel, because of the large volume of data processed when simulating these models, the steady growth of computational resources assists this task. The objective of this thesis is to study the spreading behavior of an epidemic, simulated computationally with an SIR epidemic model on square lattices, considering two properties: the existence of empty vertices and the random movement of agents. These properties are known as the dilution rate and the mobility rate, respectively. To this end, techniques from statistical physics, such as the analysis of phase transitions and critical phenomena, were applied. With these techniques, it is possible to observe the system's passage from the phase in which an outbreak occurs to the phase in which the epidemic is contained, and to study the dynamics of the model at criticality, that is, at the point of phase transition, known as the critical point.
It was found that a higher dilution rate reduces the spreading of epidemics because it shifts the phase transition negatively, lowering the critical immunization value. In turn, a higher rate of agent movement favors the spreading of the disease, because the phase transition is shifted positively and its critical point increased. It was also observed that, despite this increase, the critical point is not fully restored, owing to the agents' restricted mobility and the high degree of network disconnection caused by high dilution rates. In this work we show the reasons for this behavior.
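A toy version of such a simulation, written as a hedged Python sketch: an SIR process on a periodic square lattice where a fraction of sites is empty (dilution) and agents hop into empty neighbouring sites (mobility). The update rules and rates are illustrative choices, not the dissertation's exact protocol.

```python
import random

def sir_diluted_lattice(L=40, dilution=0.2, mobility=0.5,
                        beta=0.6, gamma=0.2, steps=200, seed=0):
    """Toy SIR dynamics on an L x L periodic square lattice with a fraction
    `dilution` of empty sites and a per-step probability `mobility` that an
    agent hops into a random empty neighbouring site. Returns the final
    number of recovered agents (the epidemic size)."""
    rng = random.Random(seed)
    grid = {(x, y): 0 if rng.random() < dilution else 'S'
            for x in range(L) for y in range(L)}          # 0 = empty site
    occupied = [p for p, s in grid.items() if s != 0]
    grid[rng.choice(occupied)] = 'I'                      # seed one infection

    def neighbours(x, y):
        return [((x+1) % L, y), ((x-1) % L, y), (x, (y+1) % L), (x, (y-1) % L)]

    for _ in range(steps):
        for (x, y) in rng.sample(occupied, len(occupied)):  # random order
            state = grid[(x, y)]
            if state == 0:          # agent already moved away this step
                continue
            if state == 'I':
                for nb in neighbours(x, y):                 # contact infection
                    if grid[nb] == 'S' and rng.random() < beta:
                        grid[nb] = 'I'
                if rng.random() < gamma:                    # recovery
                    grid[(x, y)] = 'R'
            if rng.random() < mobility:                     # random walk step
                nb = rng.choice(neighbours(x, y))
                if grid[nb] == 0:
                    grid[nb], grid[(x, y)] = grid[(x, y)], 0
        occupied = [p for p, s in grid.items() if s != 0]
    return sum(1 for s in grid.values() if s == 'R')

print(sir_diluted_lattice())  # epidemic size for one (dilution, mobility) pair
```

Sweeping `dilution` and `mobility` while measuring the epidemic size is the kind of experiment from which the shifted phase transitions described above can be read off.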
|
5 |
Criticality in Cooperative Systems. Vanni, Fabio, 05 1900
Cooperative behavior arises from the interactions of single units that globally produce a complex dynamics in which the system acts as a whole; a flock of birds is an archetype. As a result of cooperation, the whole flock acquires abilities that the single individuals would not have alone. This research work led to the discovery that the function of a flock, and more generally of cooperative systems, surprisingly rests on the occurrence of organizational collapses. In this study, I used cooperative systems based on self-propelled particle models (the flock models), which have been proved virtually equivalent to sociological network models mimicking decision-making processes (the decision-making model). The critical region is an intermediate condition between a highly disordered state and a strongly ordered one. At criticality, the probability density of waiting times between two consecutive collapses has an inverse power law form with anomalous statistical behavior. The scientific evidence is based on measures from information theory, correlation in time and space, and statistical analysis of fluctuations. To demonstrate the benefit for a system of living at criticality, I made a flock system interact with another similar system and observed the information transmission for different disturbance values, proving that at criticality the transfer of information attains maximal efficiency. As a last step, the flock model is shown, despite its simplicity, to be a sufficiently realistic model, as demonstrated through 3D simulations and computer animations.
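A minimal sketch of the kind of self-propelled particle (Vicsek-style) model described here, with a crude read-off of waiting times between drops of the global order parameter; all parameters and the collapse threshold are illustrative, not those of the dissertation, and depending on the noise level the flock may sit in the ordered, critical, or disordered regime.

```python
import cmath, math, random

def vicsek_waiting_times(n=60, box=8.0, radius=1.0, eta=0.45,
                         speed=0.1, steps=1500, drop=0.5, seed=2):
    """Minimal 2D Vicsek-style flock: each particle adopts the mean heading
    of its neighbours within `radius` (periodic box) plus angular noise.
    A 'collapse' is crudely read off as the global order parameter falling
    below `drop`; returned are the time intervals between such events."""
    rng = random.Random(seed)
    pos = [(rng.uniform(0, box), rng.uniform(0, box)) for _ in range(n)]
    ang = [rng.uniform(-math.pi, math.pi) for _ in range(n)]
    waits, last = [], None
    for t in range(steps):
        new_ang = []
        for xi, yi in pos:
            s = sum(cmath.exp(1j * a) for (xj, yj), a in zip(pos, ang)
                    if min(abs(xi - xj), box - abs(xi - xj)) ** 2
                     + min(abs(yi - yj), box - abs(yi - yj)) ** 2 <= radius ** 2)
            new_ang.append(cmath.phase(s) + rng.uniform(-eta, eta) * math.pi)
        ang = new_ang
        pos = [((x + speed * math.cos(a)) % box, (y + speed * math.sin(a)) % box)
               for (x, y), a in zip(pos, ang)]
        order = abs(sum(cmath.exp(1j * a) for a in ang)) / n  # global alignment
        if order < drop:                                      # collapse event
            if last is not None:
                waits.append(t - last)
            last = t
    return waits

w = vicsek_waiting_times()
print(len(w), w[:10])  # near criticality, an inverse power law is expected
```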
|
6 |
Making Heads and Tails of Distributional Patterns: A Value-Creation-Type and Sector-Based Analysis Among Private-Equity-Owned Companies. Turetsky, Abraham I., 04 June 2018
No description available.
|