91

[pt] O VALOR DA FLEXIBILIDADE DE PRODUÇÃO: UMA APLICAÇÃO REGIONAL NO SETOR SUCRO-ALCOOLEIRO BRASILEIRO / [en] THE PRODUCTION FLEXIBILITY VALUE: A REGIONAL APPLICATION IN THE BRAZILIAN ETHANOL-SUGAR SECTOR

11 November 2021 (has links)
[en] New technologies in the sugar-ethanol sector allow flexible production: a mill can at any time concentrate production on whichever commodity, ethanol or sugar, generates the higher return. During the research it was identified that the Brazilian tax on the circulation of goods and on interstate and intermunicipal transportation and communication services (ICMS) generates significant variation in the prices of these two commodities across the regions of Brazil. For example, in 2013 São Paulo applied an ICMS rate of 12 percent, while the state of Pará applied 30 percent at the same date. This study considers two regions of Brazil: the Southeast, represented by the state of São Paulo, and the Northeast, represented by the states of Alagoas and Pernambuco. To value this managerial flexibility, the dissertation applies Real Options (RO) theory, which emphasizes the value of the decision maker's ability to change the course of a project, especially under conditions of uncertainty. Statistical tests indicated that the commodity prices follow an arithmetic mean-reverting process, and the parameters describing this behavior were estimated. Using the recombining-tree approach of Nelson and Ramaswamy (1990) and the bivariate trees of Hahn and Dyer (2011), an algorithm was implemented in Matlab. The research shows that, in both regions studied, flexible production yields higher profitability than inflexible production; moreover, a larger production capacity and a lower ICMS rate give flexible mills in the Southeast region greater value than the mills in the Northeast region.
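The recombining-tree discretization of a mean-reverting price can be sketched as follows. This is a generic illustration of the Nelson and Ramaswamy (1990) idea for an arithmetic mean-reverting (Ornstein-Uhlenbeck) process, not the thesis's Matlab code, and all parameter values are invented for the example.

```python
import numpy as np

def mean_reverting_lattice(x0, eta, xbar, sigma, T, n):
    """Recombining binomial lattice for dx = eta*(xbar - x)dt + sigma*dW,
    in the spirit of Nelson and Ramaswamy (1990). Illustrative only."""
    dt = T / n
    dx = sigma * np.sqrt(dt)                 # constant step -> lattice recombines
    nodes = x0 + dx * np.arange(-n, n + 1)   # possible states after n steps
    def p_up(x):
        # drift-matched probability of an up move, censored to [0, 1]
        p = 0.5 + eta * (xbar - x) * np.sqrt(dt) / (2.0 * sigma)
        return min(1.0, max(0.0, p))
    return nodes, p_up

nodes, p_up = mean_reverting_lattice(x0=1.0, eta=2.0, xbar=1.0,
                                     sigma=0.3, T=1.0, n=100)
# Below the long-run mean the up-probability exceeds 1/2 (pull upward),
# above it the up-probability drops below 1/2 (pull downward).
assert p_up(0.5) > 0.5 > p_up(1.5)
```

Option values would then be obtained by backward induction on this lattice, choosing the more profitable commodity at each node.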
92

Stochastic Performance and Maintenance Optimization Models for Pavement Infrastructure Management

Mohamed S. Yamany (8803016) 07 May 2020 (has links)
Highway infrastructure, including roads and pavements, contributes significantly to a country's economic growth and quality of life, but also to negative environmental impacts. Hence, highway agencies strive to make efficient and effective use of their limited funding to maintain their pavement infrastructure in good structural and functional condition. This necessitates predicting pavement performance and scheduling maintenance interventions accurately and reliably, using appropriate performance-modeling and maintenance-optimization methodologies that account for influential variables and for the uncertainty inherent in pavement condition data.

Despite the enormous research effort devoted to stochastic pavement performance modeling and maintenance optimization, several research gaps remain. Prior research has not provided a synthesis of Markovian models and their associated methodologies that could assist researchers and highway agencies in selecting the Markov methodology appropriate for the data available to an agency. In addition, past Markovian pavement performance models did not adequately account for the marginal effects of preventive maintenance (PM) treatments, owing to the lack of historical PM data, resulting in potentially unreliable models. The primary components of a Markov model are the transition probability matrix, the number of condition states (NCS), and the length of the duty cycle (LDC). Previous Markovian pavement performance models selected the NCS and LDC based on data availability, the pavement condition indicator, and the data collection frequency; however, the selection of NCS and LDC should also aim at producing models with high prediction accuracy. Prior stochastic pavement maintenance optimization models account for the uncertainty of the budget allocated to pavement preservation at the network level. Nevertheless, variables such as pavement condition deterioration and improvement, which are also uncertain, were not included in stochastic optimization models because of the expected size of the optimization problem.

The overarching goal of this dissertation is to contribute to filling these research gaps with a view to improving pavement management systems, helping to predict probabilistic pavement performance and schedule pavement preventive maintenance accurately and reliably. This study reviews Markovian pavement performance models using various Markov methodologies and transition probability estimation methods, presents a critical analysis of the different aspects of Markovian models as applied in the literature, reveals gaps in knowledge, and offers suggestions for bridging those gaps. The dissertation develops a decision tree that researchers and highway agencies can use to select appropriate Markov methodologies for modeling pavement performance under different conditions of data availability. The failure to incorporate PM impacts into probabilistic pavement performance models, due to the absence of historical PM data, may result in erroneous and often biased pavement condition predictions, leading to non-optimal maintenance decisions. Hence, this research introduces and validates a hybrid approach to incorporate the impact of PM into probabilistic pavement performance models when historical PM data are limited or absent. The types of PM treatments and their application times are estimated using two approaches: (1) analysis of the state of practice of pavement maintenance through literature and expert surveys, and (2) detection of PM times from probabilistic pavement performance curves. Using a newly developed optimization algorithm, the estimated times and types of PM treatments are integrated into the pavement condition data. A non-homogeneous Markovian pavement performance model is then developed by estimating the transition probabilities of pavement condition using the ordered-probit method. The hybrid approach and performance models are validated through cross-validation with out-of-sample data and through surveys of subject-matter experts in pavement engineering and management. The results show that the hybrid approach and the models developed can predict probabilistic pavement condition, incorporating PM effects, with an accuracy of 87%.

The key Markov chain methodologies, namely homogeneous, staged-homogeneous, non-homogeneous, semi-Markov, and hidden Markov, have been used to develop stochastic pavement performance models. This dissertation hypothesizes that the NCS and LDC significantly influence the prediction accuracy of Markov models and that the nature of this influence varies across methodologies. The study therefore develops and compares Markovian pavement performance models using empirical data and investigates the sensitivity of their prediction accuracy to the NCS and LDC. The results indicate that the semi-Markov model is generally statistically superior to the homogeneous and staged-homogeneous models (except in a few cases of NCS and LDC combinations) and that prediction accuracy is significantly sensitive to both parameters: increasing the NCS improves accuracy up to a threshold beyond which it decreases, plausibly due to data overfitting, and increasing the LDC improves accuracy when the NCS is small.

Scheduling pavement maintenance at the road-network level without considering the uncertainty of pavement condition deterioration and improvement over the long term (typically the pavement design life) is likely to mistime maintenance applications and yield less optimal decisions. Hence, this dissertation develops stochastic pavement maintenance optimization models that account for the uncertainty of pavement condition deterioration and improvement as well as the budget constraint. The objectives of the stochastic optimization models are to minimize the overall deterioration of the road network condition while minimizing the total maintenance cost of the network over a 20-year planning horizon (a typical pavement design life). A multi-objective genetic algorithm (MOGA) is used because of its robust search capabilities, which lead toward globally optimal solutions. To reduce the number of candidate solutions of the stochastic MOGA models, three approaches are proposed and applied: (1) using the PM treatments most commonly applied by highway agencies, (2) clustering pavement sections by age, and (3) creating a filtering constraint that imposes a rest period after treatment applications. The results of the stochastic MOGA models show that the Pareto-optimal solutions change significantly when the uncertainty of pavement condition deterioration and improvement is included.
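As a minimal illustration of the homogeneous Markov-chain machinery underlying such performance models, the sketch below propagates a pavement condition-state distribution through duty cycles. The five-state transition matrix is hypothetical, chosen only to show the mechanics, not estimated from the dissertation's data.

```python
import numpy as np

# Five condition states (index 0 = best, 4 = worst); each row gives the
# probabilities of staying or deteriorating over one duty cycle.
P = np.array([
    [0.80, 0.20, 0.00, 0.00, 0.00],
    [0.00, 0.75, 0.25, 0.00, 0.00],
    [0.00, 0.00, 0.70, 0.30, 0.00],
    [0.00, 0.00, 0.00, 0.65, 0.35],
    [0.00, 0.00, 0.00, 0.00, 1.00],  # worst state is absorbing
])

def state_distribution(p0, P, n_cycles):
    """Condition-state distribution after n duty cycles."""
    return p0 @ np.linalg.matrix_power(P, n_cycles)

p0 = np.array([1.0, 0.0, 0.0, 0.0, 0.0])   # a newly built section, state 0
p10 = state_distribution(p0, P, 10)
assert abs(p10.sum() - 1.0) < 1e-9          # still a probability vector
```

Non-homogeneous and semi-Markov variants replace the single matrix `P` with age-dependent matrices or sojourn-time distributions, respectively.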
93

Budget Management in Auctions: Bidding Algorithms and Equilibrium Analysis

Kumar, Rachitesh January 2024 (has links)
Advertising is the economic engine of the internet. It allows online platforms to fund services that are free at the point of use, while providing businesses the opportunity to target their ads at relevant users. The mechanism of choice for selling these advertising opportunities is real-time auctions: whenever a user visits the platform, an auction is run among interested advertisers, and the winner gets to display their ad to the user. The entire process runs in milliseconds and is implemented via automated algorithms which bid on behalf of the advertisers in every auction. These automated bidders take as input the high-level objectives of the advertiser like value-per-click and budget, and then participate in the auctions with the goal of maximizing the utility of the advertiser subject to budget constraints. Thus motivated, this thesis develops a theory of bidding in auctions under budget constraints, with the goal of informing the design of automated bidding algorithms and analyzing the market-level outcomes that emerge from their simultaneous use. First, we take the perspective of an individual advertiser and tackle algorithm-design questions. How should one bid in repeated second-price auctions subject to a global budget constraint? What is the optimal way to incorporate data into bidding decisions? Can data be incorporated in a way that is robust to common forms of variability in the market? As we analyze these questions, we go beyond the problem of bidding under budget constraints and develop algorithms for more general online resource allocation problems. In Chapter 2, we study a non-stationary stochastic model of sequential auctions, which despite immense practical importance has received little attention, and propose a natural algorithm for it. 
With access to just one historical sample per auction/distribution, we show that our algorithm attains (nearly) the same performance as that possible under full knowledge of the distributions, while also being robust to distribution shifts which typically occur between the sampling and true distributions. Chapter 3 investigates the impact of uncertainty about the total number of auctions on the performance of bidding algorithms. We prove upper bounds on the best-possible performance that can be achieved in the face of such uncertainty, and propose an algorithm that (nearly) achieves this optimal performance guarantee. We also provide a fast method for incorporating predictions about the total number of auctions into our algorithm. All of our proposed algorithms implement some version of FTRL/Mirror-Descent in the dual space, making them ideal for large-scale low-latency markets like online advertising. Next, we look at the market as a whole and analyze the equilibria which emerge from the simultaneous use of automated bidding algorithms. For example, we address questions like: Does an equilibrium always exist? How does the auction format (first-price vs second-price) impact the structure of the equilibria? Do automated bidding algorithms always efficiently converge to some equilibrium? What are the social welfare properties of these equilibrium outcomes? We systematically examine such questions using a variety of tools, ranging from infinite-dimensional fixed-point arguments for proving existence of structured equilibria, to computational complexity results about finding them. In Chapter 4, we start by establishing the existence of equilibria based on pacing—a practically-popular and theoretically-optimal budget management strategy—for all standard auctions, including first-price and second-price auctions. We then leverage its structure to establish a revenue equivalence result and bound the price of anarchy of liquid welfare. 
Chapter 5 looks at the market through a computational lens and investigates the complexity of finding pacing-based equilibria. We show that the problem is PPAD-complete, which in turn implies the impossibility of polynomial-time convergence of any pacing-based automated bidding algorithms (under standard complexity-theoretic assumptions). Finally, in Chapter 6, we move beyond pacing-based strategies and investigate throttling, which is another popular method for managing budgets in practice. Here, we describe a simple tâtonnement-style algorithm which efficiently converges to an equilibrium in first-price auctions, and show that no such algorithm exists for second-price auctions (under standard complexity-theoretic assumptions). Furthermore, we prove tight bounds on the price of anarchy for liquid welfare, and compare platform revenue under throttling and pacing.
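A toy version of pacing can illustrate the dual-descent idea behind such automated bidders: bids are shaded by a multiplier that rises when spend runs ahead of the per-round budget. The shading rule, step size, and input distributions below are illustrative assumptions, not the algorithms analyzed in the thesis.

```python
import random

def paced_bidding(values, prices, budget, T, step=0.01):
    """Toy pacing sketch: bid value/(1+mu) in repeated second-price
    auctions, updating the dual multiplier mu by gradient descent
    toward the per-round budget rho = budget / T. Illustrative only."""
    mu, spend, utility = 0.0, 0.0, 0.0
    rho = budget / T
    for v, p in zip(values, prices):
        bid = v / (1.0 + mu)
        won = bid >= p and spend + p <= budget
        if won:
            utility += v - p       # second-price: pay the competing price
            spend += p
        # spending above the target rate pushes mu up (bids shade down)
        mu = max(0.0, mu + step * ((p if won else 0.0) - rho))
    return spend, utility

random.seed(0)
T = 1000
values = [random.random() for _ in range(T)]
prices = [random.random() for _ in range(T)]
spend, utility = paced_bidding(values, prices, budget=50.0, T=T)
assert spend <= 50.0           # the budget constraint is never violated
```

With the multiplier update interpreted as online gradient descent in the dual space, this is the kind of FTRL/Mirror-Descent structure the abstract refers to.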
94

Understanding the impact of an HIV intervention package for adolescents

Bruce, Faikah 12 1900 (has links)
Thesis (MSc)--Stellenbosch University, 2013. / ENGLISH ABSTRACT: Adolescents are regarded as a high-risk group in South Africa, with the highest human immunodeficiency virus (HIV) incidence occurring in this group. Prevention among adolescents is therefore key to decreasing the HIV burden. This thesis aims to assist in the design of trials by simulating the potential outcomes of a combination-prevention trial in adolescents. We develop a stochastic individual-based model stratified by sex and age, and use it to determine the impact of various prevention packages on HIV incidence among adolescents participating in a hypothetical trial over a three-year period. The simulated trial involves an intervention arm, in which adolescents are offered a choice of prevention methods (including medical male circumcision (MMC), oral pre-exposure prophylaxis (PrEP), and antiretroviral-based vaginal microbicides (ARV-VM)), and a control arm. We predict that a full prevention package would yield a 46% per person-year (PPY) (95% CI 45–47%) reduction in HIV risk. The combination of MMC and PrEP has a substantial impact on HIV incidence in males, with a relative risk of HIV infection of 51% PPY (95% CI 49–53%). Offering women the choice of PrEP, a microbicide gel, or a microbicide in the form of a vaginal ring would be less effective, with a relative risk of HIV acquisition of 57% PPY (95% CI 56–58%). This is not substantially different from the relative risk estimated when the vaginal ring alone is offered, as the ring is assumed to be the most acceptable of the three prevention methods. We determine that a sample size of approximately 1013 in each arm of a trial would achieve 80% power to detect a statistically significant reduction in HIV risk. We find that the relative risk is sensitive to the assumed degree of correlation between condom use and the acceptability of the prevention method. We also find that the most efficient trial design may be to offer both MMC and PrEP to males but only a microbicide ring to females. Further work is required to better understand the processes by which adolescents choose prevention methods.
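A crude Monte Carlo power check conveys how a sample-size figure like the one above can be assessed by simulation. The control-arm infection probability and relative risk below are illustrative stand-ins, not outputs of the thesis's individual-based model, and the test is a simple pooled two-proportion z-test rather than the thesis's analysis.

```python
import math
import random

def simulate_power(n_per_arm, p_control, rr, n_trials=2000, z=1.96):
    """Monte Carlo power estimate for a two-arm trial comparing
    HIV-infection proportions. p_control and rr are made-up values
    for illustration only."""
    p_int = p_control * rr
    hits = 0
    for _ in range(n_trials):
        a = sum(random.random() < p_control for _ in range(n_per_arm))
        b = sum(random.random() < p_int for _ in range(n_per_arm))
        pbar = (a + b) / (2 * n_per_arm)
        se = math.sqrt(2 * pbar * (1 - pbar) / n_per_arm)
        if se > 0 and (a - b) / n_per_arm / se > z:
            hits += 1
    return hits / n_trials

random.seed(1)
power = simulate_power(n_per_arm=1013, p_control=0.09, rr=0.54)
```

In a full design exercise, `p_control` and `rr` would come from the individual-based model rather than being fixed by hand.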
95

Stochastic methods for unsteady aerodynamic analysis of wings and wind turbine blades

Fluck, Manuel 25 April 2017 (has links)
Advancing towards `better' wind turbine designs, engineers face two central challenges. First, current aerodynamic models (based on Blade Element Momentum theory) are inherently limited to comparatively simple designs of flat rotors with straight blades. Such designs, however, represent only a subset of the possibilities; better concepts could be coning rotors, swept or kinked blades, or blade tip modifications. To extend future turbine optimization to these new concepts, a different kind of aerodynamic model is needed. Second, it is difficult to include long-term loads (lifetime extreme and fatigue loads) directly in wind turbine design optimization, because with current methods the assessment of long-term loads is computationally very expensive -- often too expensive for optimization. This denies the optimizer the possibility of fully exploring the effects of design changes on important lifetime loads, and one might settle for a sub-optimal design. In this dissertation we present work addressing these two challenges, looking at wing aerodynamics in general and focusing on wind turbine loads in particular. We adopt a Lagrangian vortex model to analyze bird wings. Equipped with distinct tip feathers, these wings present very complex lifting surfaces with winglets, stacked in sweep and dihedral. Very good agreement between experimental and numerical results is found, confirming that a vortex model is capable of analyzing complex new wing and rotor blade geometries. Next, stochastic methods are derived to deal with the time- and space-coupled unsteady aerodynamic equations. In contrast to deterministic models, which repeatedly analyze the loads for different input samples to eventually estimate lifetime load statistics, the new stochastic models provide a continuous process to assess lifetime loads in a stochastic context -- starting from a stochastic wind field input through to a stochastic solution for the load output. Hence, these new models allow lifetime loads to be obtained much faster than with the deterministic approach, which will eventually make lifetime loads accessible to a future stochastic wind turbine optimization algorithm. While common stochastic techniques are concerned with random parameters or boundary conditions (constant in time), a stochastic treatment of turbulent wind inflow requires a technique capable of handling a random field. The step from a random parameter to a random field is not trivial, and hence the new stochastic methods are introduced in three stages. First, the bird wing model from above is simplified to a one-element wing/blade model, and the previously deterministic solution is substituted with a stochastic solution for a one-point wind speed time series (a random process). Second, the wind inflow is extended to an $n$-point correlated random wind field and the aerodynamic model is extended accordingly. To complete this step, a new kind of wind model is introduced, requiring significantly fewer random variables than previous models. Finally, the stochastic method is applied to wind turbine aerodynamics (for now based on Blade Element Momentum theory) to analyze rotor thrust, torque, and power. Throughout all these steps the stochastic results are compared to result statistics obtained via Monte Carlo analysis from unsteady reference models solved in the conventional deterministic framework, verifying that the stochastic results reproduce the deterministic benchmark. Moreover, a considerable speed-up of the calculations is found (for example, a factor of 20 for calculating blade thrust load probability distributions). Results from this research provide a means to analyze lifetime loads much more quickly, and an aerodynamic model to be used in a new wind turbine optimization framework, capable of analyzing new geometries and optimizing wind turbine blades with lifetime loads in mind. However, to limit the scope of this work, we present only the aerodynamic models here and leave turbine optimization itself for future work.
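The deterministic benchmark the abstract describes, Monte Carlo sampling of a random wind input propagated through a load model, can be sketched in miniature. The quadratic thrust proxy and all constants below are illustrative assumptions, not the thesis's vortex or BEM models; the point is only that sampled statistics can be checked against a closed form.

```python
import random
import statistics

def thrust_load(u):
    """Toy quasi-steady thrust proxy: load proportional to u^2.
    The functional form and constants are illustrative only."""
    rho, area, ct = 1.225, 100.0, 0.8
    return 0.5 * rho * area * ct * u ** 2

# Monte Carlo reference statistics from a random wind-speed input,
# standing in for the deterministic benchmark runs.
random.seed(2)
samples = [thrust_load(random.gauss(10.0, 1.5)) for _ in range(20000)]
mean_mc = statistics.fmean(samples)

# For u ~ N(m, s), E[u^2] = m^2 + s^2, so the mean load has a closed form.
mean_exact = 0.5 * 1.225 * 100.0 * 0.8 * (10.0 ** 2 + 1.5 ** 2)
assert abs(mean_mc - mean_exact) / mean_exact < 0.02
```

A stochastic solution method, by contrast, would propagate the wind's distribution through the model directly, avoiding the repeated sampling entirely.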
96

STATISTICAL MODELS AND ANALYSIS OF GROWTH PROCESSES IN BIOLOGICAL TISSUE

Xia, Jun 15 December 2016 (has links)
The mechanisms that control growth processes in biological tissues have attracted continuous research interest despite their complexity. With the emergence of big-data experimental approaches there is an urgent need to develop statistical and computational models that fit the experimental data and can be used to make predictions to guide future research. In this work we apply statistical methods to growth processes in different biological tissues, focusing on the development of neuron dendrites and tumor cells. We first examine the neuron growth process, which has implications for neural tissue regeneration, using a computational model with a uniform branching probability and a maximum overall length constraint. One crucial outcome is that we can relate the parameter fits from our model to real data from our experimental collaborators, in order to examine the usefulness of our model under different biological conditions. Our methods can now directly compare branching probabilities across experimental conditions and provide confidence intervals for these population-level measures. In addition, we have obtained analytical results showing that the underlying probability distribution for this process increases as a geometric progression at nearby distances and decreases approximately as a geometric series in far-away regions, which can be used to estimate the spatial location of the maximum of the probability distribution. This result is important, since we would expect the maximum number of dendrites in this region; the estimate is related to the probability of success in finding a neural target at that distance during a blind search. We then examined tumor growth processes, which evolve similarly in the sense that an initial rapid growth eventually becomes limited by resource constraints. For the evolution of tumor cells, we found that an exponential growth model best describes the experimental data, based on the accuracy and robustness of the candidate models. Furthermore, we incorporated this growth-rate model into logistic regression models that predict the growth rate of each patient from biomarkers; this formulation can be very useful for clinical trials. Overall, this study aimed to assess the molecular and clinicopathological determinants of breast cancer (BC) growth rate in vivo.
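Fitting an exponential growth model of the kind mentioned above reduces to a linear least-squares fit on log-transformed measurements. The sketch below uses synthetic, noiseless data to show the mechanics; it is not the study's fitting pipeline.

```python
import math

def fit_exponential(times, volumes):
    """Least-squares fit of V(t) = V0 * exp(r * t) via a linear
    regression of log-volume against time. Synthetic data only."""
    n = len(times)
    logs = [math.log(v) for v in volumes]
    tbar = sum(times) / n
    lbar = sum(logs) / n
    r = sum((t - tbar) * (l - lbar) for t, l in zip(times, logs)) \
        / sum((t - tbar) ** 2 for t in times)
    v0 = math.exp(lbar - r * tbar)
    return v0, r

times = [0, 10, 20, 30, 40]
volumes = [1.0 * math.exp(0.05 * t) for t in times]   # noiseless check
v0, r = fit_exponential(times, volumes)
assert abs(r - 0.05) < 1e-6 and abs(v0 - 1.0) < 1e-6
```

The fitted rate `r` per patient is the quantity that could then enter a downstream regression against biomarkers.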
97

Estudos no modelo de Axelrod de disseminação cultural: transição de fase e campo externo / Studies in the Axelrod model of cultural dissemination: Phase transition and external field

Peres, Lucas Vieira Guerreiro Rodrigues 08 August 2014 (has links)
Studies on the maintenance of cultural diversity suggest that the mechanism of social interaction, generally regarded as responsible for cultural homogenization, may also generate diversity. In order to study this phenomenon, the political scientist Robert Axelrod proposed an agent-based model that exhibits multicultural absorbing states despite the homophilic and homogenizing character of the interaction between agents. In this model the cultural diversity (or disorder) is produced by the choice of the initial cultural traits of the agents, and the homophilic interaction acts only to reduce the initial disorder. Due to its simplicity, several re-examinations and variants of Axelrod's model are found in the literature: the introduction of external media, changes in the connectivity of the agents, the introduction of random perturbations, etc. However, these proposals lack a systematic analysis of the behavior of the model in the thermodynamic limit, i.e., in the limit in which the number of agents tends to infinity. This thesis focuses mainly on that type of analysis in the cases in which the agents are fixed at the sites of a square lattice or of a one-dimensional chain. In particular, when the initial cultural traits of the agents are generated by a Poisson distribution, we characterize, through Monte Carlo simulations, the transition between the ordered phase (at least one macroscopic cultural domain) and the disordered phase (only microscopic domains) in the square lattice. However, we found no evidence of an ordered phase in the one-dimensional lattice (chain). For initial cultural traits generated by a uniform distribution, we find a phase transition in both the one- and two-dimensional lattices. Finally, we show that the introduction of a spatially uniform external field, which can be interpreted as a global media influencing the opinion of the agents, eliminates the monocultural regime of Axelrod's model in the thermodynamic limit.
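One interaction step of the standard Axelrod model on a square lattice can be sketched as follows; the lattice size, number of features `F`, and traits per feature `q` are arbitrary illustration values, and the sketch omits the external field studied in the thesis.

```python
import random

def axelrod_step(culture, L, F):
    """One interaction of the Axelrod model on an L x L square lattice:
    pick an agent and a random neighbor; with probability equal to
    their cultural overlap, the agent copies one differing feature."""
    i, j = random.randrange(L), random.randrange(L)
    di, dj = random.choice([(0, 1), (0, -1), (1, 0), (-1, 0)])
    ni, nj = (i + di) % L, (j + dj) % L       # periodic boundaries
    a, b = culture[i][j], culture[ni][nj]
    shared = sum(x == y for x, y in zip(a, b))
    if 0 < shared < F and random.random() < shared / F:
        k = random.choice([f for f in range(F) if a[f] != b[f]])
        a[k] = b[k]

random.seed(3)
L, F, q = 10, 3, 5        # lattice size, features, traits per feature
culture = [[[random.randrange(q) for _ in range(F)] for _ in range(L)]
           for _ in range(L)]
for _ in range(200000):
    axelrod_step(culture, L, F)
# The homophilic dynamics can only reduce the initial disorder.
n_distinct = len({tuple(c) for row in culture for c in row})
assert 1 <= n_distinct <= L * L
```

Measuring the size of the largest cultural domain as a function of `q` over many runs is how the order-disorder transition described above is located.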

Modelo computacional de um rebanho bovino de corte virtual utilizando simulação Monte Carlo e redes neurais artificiais / Computational model of a virtual beef cattle herd applying Monte Carlo simulation and artificial neural networks

Meirelles, Flávia Devechio Providelo 04 February 2005 (has links)
In this work, two computational tools were used to support decision-making in beef cattle production under the extensive management conditions found in Brazil. The first part of the work was devoted to building software based on the Monte Carlo simulation technique to analyze production traits (weight gain) and management traits (fertility, postpartum anestrus, birth rate and puberty). In the second part, the Artificial Neural Network technique was applied to classify animals according to weight gain during the growth phases (birth to weaning, weaning to 550 days) in relation to the genetic value of weight gain from weaning to 550 days (GP345) obtained by BLUP. Both models showed potential to support beef cattle production. / Herein we applied two different computational techniques with the specific objective of supporting decision-making in Brazilian extensive beef cattle production systems. The first part of the work was dedicated to the construction of software based on Monte Carlo simulation. Two different models were designed for later fusion, aimed at the analysis of production traits (weight gain) and reproduction traits (fertility, postpartum anestrus, birth rate and puberty). The second part of the work applied Artificial Neural Network techniques to classify animals by weight gain during the growing period (weight at calving, weaning weight, weight at 550 days), comparing the data with the genetic value of the daily gain from weaning to 550 days adjusted to 345 days (BLUP output). The results obtained with both models showed potential to support beef cattle production.
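The Monte Carlo part of such a model repeatedly samples random outcomes for the herd's reproduction and growth traits and aggregates them over many runs. The sketch below is hypothetical and is not the thesis's software: the function `simulate_herd`, the Bernoulli calving model, and the normal daily-gain distribution are illustrative assumptions only.

```python
import random

def simulate_herd(n_cows, birth_rate, mean_gain, sd_gain, n_runs, seed=1):
    """Monte Carlo sketch of one production season for a beef herd.

    Each run draws the number of calves born (one Bernoulli trial per cow
    with probability `birth_rate`) and a daily weight gain per calf from a
    normal distribution, truncated at zero. Returns per-run pairs of
    (calves born, total daily gain).
    """
    rng = random.Random(seed)
    results = []
    for _ in range(n_runs):
        calves = sum(rng.random() < birth_rate for _ in range(n_cows))
        gains = [max(0.0, rng.gauss(mean_gain, sd_gain)) for _ in range(calves)]
        results.append((calves, sum(gains)))
    return results

# 1000 simulated seasons for a 100-cow herd with an 80% calving rate
runs = simulate_herd(n_cows=100, birth_rate=0.8,
                     mean_gain=0.9, sd_gain=0.2, n_runs=1000)
avg_calves = sum(c for c, _ in runs) / len(runs)
```

The empirical distribution of the per-run totals is what a decision-maker would inspect, e.g. the spread of calving outcomes around the expected 80 calves per season.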

Comparison of Two Parameter Estimation Techniques for Stochastic Models

Robacker, Thomas C 01 August 2015 (has links)
Parameter estimation techniques have been successfully and extensively applied to deterministic models based on ordinary differential equations but are in early development for stochastic models. In this thesis, we first investigate using parameter estimation techniques for a deterministic model to approximate parameters in a corresponding stochastic model. The basis for this approach lies in Kurtz's limit theorem, which implies that for large populations the realizations of the stochastic model converge to those of the deterministic model. We show for two example models that this approach often fails to estimate parameters well when the population size is small. We then develop a new method, the MCR method, which is unique to stochastic models and provides significantly better estimates and smaller confidence intervals for parameter values. Initial analysis of the new MCR method indicates that it might be a viable method for parameter estimation for continuous time Markov chain models.
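The continuous-time Markov chain realizations underlying this comparison can be generated with the Gillespie algorithm. The sketch below is not the thesis's MCR method or its example models; it is a generic illustration under assumed transition rates, using a hypothetical logistic birth-death chain whose large-population limit (by Kurtz's theorem) is the logistic ODE.

```python
import random

def gillespie_logistic(b, d, K, x0, t_end, seed=0):
    """One Gillespie (SSA) realization of a logistic birth-death chain.

    Assumed transition rates: birth b*x, death d*x + b*x*x/K, so the
    deterministic equilibrium is x* = K*(1 - d/b). For large K the scaled
    process x/K converges to the solution of the corresponding logistic ODE,
    which is what justifies fitting the ODE to stochastic realizations when
    populations are large.
    """
    rng = random.Random(seed)
    t, x = 0.0, x0
    while t < t_end and x > 0:
        birth = b * x
        death = d * x + b * x * x / K
        t += rng.expovariate(birth + death)       # time to next event
        if rng.random() < birth / (birth + death):
            x += 1                                # birth event
        else:
            x -= 1                                # death event
    return x

# with b=2, d=1, K=1000 the deterministic equilibrium is x* = 500
x_final = gillespie_logistic(b=2.0, d=1.0, K=1000, x0=50, t_end=25.0)
```

For small populations the relative fluctuations of such realizations around the ODE trajectory are large, which is exactly the regime in which fitting the deterministic model gives poor parameter estimates.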
