841 |
Três ensaios sobre intermediação financeira em modelos DSGE aplicados ao Brasil
Nunes, André Francisco Nunes de, January 2015 (has links)
This thesis is a collection of three essays on the Bayesian estimation of DSGE models with financial frictions for the Brazilian economy. The first essay investigates how the incorporation of financial intermediaries into a DSGE model affects the analysis of the business cycle, and how a credit policy can be used to mitigate the effects of credit-market shocks on economic activity. The Brazilian government expanded credit in the economy through public financial institutions, at the cost of an increase in public debt. A model inspired by Gertler and Karadi (2011) was estimated to evaluate the behaviour of the Brazilian economy under such a credit policy. Credit policy proved effective in mitigating the recessionary effects of a financial crisis that hits the valuation of private assets or the net worth of financial institutions; however, traditional monetary policy was more efficient for stabilizing inflation in normal times. The second essay estimates a DSGE-VAR model for the Brazilian economy. The DSGE part is a small open economy with financial frictions in the spirit of Gertler, Gilchrist and Natalucci (2007). The estimation indicates that the relaxation of the parameter space allowed by the DSGE-VAR improves the fit to the data relative to alternative models. The exercise also indicates that external shocks have significant impacts on the net worth and debt of domestic firms, strengthening the evidence that an important channel transmitting movements in the world economy to Brazil runs through the productive sector. The third essay focuses on the transmission of shocks to the bank credit spread to the other variables of the economy and their implications for the conduct of monetary policy in Brazil. For this purpose, a DSGE model with financial frictions was estimated for the Brazilian economy. The model is based on Cúrdia and Woodford (2010), who extended Woodford (2003) to incorporate a differential between the interest rates available to savers and to borrowers, which can vary for both endogenous and exogenous reasons. In this economy, monetary policy can respond not only to variations in inflation and the output gap through a simple rule, but also through a rule adjusted by the economy's credit spread. The results show that including the credit spread in the New Keynesian model does not significantly change the conclusions of DSGE models in response to traditional exogenous disturbances, such as shocks to the interest rate, to productivity and to public spending. However, in events that deteriorate financial intermediation through exogenous shocks to the credit spread, the impact on the business cycle is significant, and a monetary policy rule adjusted by the spread can stabilize the economy faster than a traditional rule.
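For concreteness, the spread-adjusted policy rule discussed in the third essay can be sketched in a stylized form (the notation and coefficients below are generic assumptions, not the thesis's exact specification):

\[ i_t = \rho\, i_{t-1} + (1-\rho)\left(\phi_\pi \pi_t + \phi_x x_t\right) - \phi_\omega\, \omega_t , \]

where \(i_t\) is the policy rate, \(\pi_t\) inflation, \(x_t\) the output gap and \(\omega_t\) the credit spread; setting \(\phi_\omega = 0\) recovers the traditional rule against which the spread-adjusted rule is compared.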
|
842 |
Lung cancer assistant: a hybrid clinical decision support application in lung cancer treatment selection
Şeşen, Mustafa Berkan, January 2013 (has links)
We describe an online clinical decision support (CDS) system, Lung Cancer Assistant (LCA), which we have developed to aid clinicians in arriving at informed treatment decisions for lung cancer patients at multidisciplinary team (MDT) meetings. LCA integrates rule-based and probabilistic decision support within a single platform. To our knowledge, this is the first time this has been achieved in the context of CDS in cancer care. Rule-based decision support is achieved by an original ontological guideline rule inference framework that operates on a domain-specific module of the Systematized Nomenclature of Medicine-Clinical Terms (SNOMED-CT), containing clinical concepts and guideline rule knowledge elicited from the major national and international guideline publishers. It adopts a conventional argumentation-based decision model, whereby the decision options are listed along with arguments derived by matching the patient records to the guideline rule base. As an additional feature of this framework, when a new patient is entered, LCA displays the most similar patients to the one being viewed. Probabilistic inference is provided by a Bayesian network (BN) whose structure and parameters have been learned from the English Lung Cancer Database (LUCADA). This allows LCA to predict the probability of patient survival and to lay out how the selection of different treatment plans would affect it. Based on a retrospective patient subset from LUCADA, we present empirical results on the treatment recommendations provided by both functionalities of LCA and discuss their strengths and weaknesses. Finally, we present preliminary work which may allow utilising the BN to calculate survival odds ratios that could be translated into quantitative degrees of support for the guideline rule-based arguments. An online version of LCA is accessible at http://lca.eng.ox.ac.uk.
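As a rough sketch of the probabilistic component, the example below builds a toy Bayesian network and queries one-year survival under two hypothetical treatment options. The variables, states, probabilities and the choice of the pgmpy library are all assumptions made for illustration; LCA's network learned from LUCADA is far richer.

```python
# Toy Bayesian network: effect of treatment choice on one-year survival.
# Structure, states and probabilities are invented; pgmpy is assumed here.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

model = BayesianNetwork([("Stage", "Survival"), ("Treatment", "Survival")])
cpd_stage = TabularCPD("Stage", 2, [[0.6], [0.4]])       # 0 = early, 1 = late
cpd_treat = TabularCPD("Treatment", 2, [[0.5], [0.5]])   # 0 = surgery, 1 = chemotherapy
cpd_surv = TabularCPD(
    "Survival", 2,
    # P(Survival | Stage, Treatment), columns over (Stage, Treatment) combinations
    [[0.15, 0.35, 0.55, 0.70],   # 0 = does not survive one year
     [0.85, 0.65, 0.45, 0.30]],  # 1 = survives one year
    evidence=["Stage", "Treatment"], evidence_card=[2, 2],
)
model.add_cpds(cpd_stage, cpd_treat, cpd_surv)
assert model.check_model()

infer = VariableElimination(model)
for treatment in (0, 1):
    result = infer.query(["Survival"], evidence={"Stage": 1, "Treatment": treatment})
    print("treatment", treatment, "->", result.values)  # [P(not survive), P(survive)]
```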
|
843 |
Automatic extraction of behavioral patterns for elderly mobility and daily routine analysis
Li, Chen, 08 June 2018 (has links)
The elderly living in smart homes can have their daily movement recorded and analyzed. Since different elders have their own living habits, a methodology that can automatically identify their daily activities and discover their daily routines is useful for better elderly care and support. In this thesis research, we focus on developing data mining algorithms for the automatic detection of behavioral patterns from the trajectory data of an individual, for activity identification, daily routine discovery, and activity prediction. The key challenges for human activity analysis include the need to consider the longer-range dependency of the sensor triggering events for activity modeling and to capture the spatio-temporal variations of the behavioral patterns exhibited by humans. We propose to represent the trajectory data using a behavior-aware flow graph, which is a probabilistic finite state automaton whose nodes and edges are attributed with local behavior-aware features. Subflows can then be extracted from the flow graph using kernel k-means as the underlying behavioral patterns for activity identification. Given the identified activities, we propose a novel nominal matrix factorization method under a Bayesian framework with the Lasso to extract highly interpretable daily routines. To better accommodate the variations of activity durations within each daily routine, we further extend the Bayesian framework with a Markov jump process as the prior to incorporate the shift-invariant property into the model. For empirical evaluation, the proposed methodologies have been compared with a number of existing activity identification and daily routine discovery methods on both synthetic and publicly available real smart home data sets, with promising results. In the thesis, we also illustrate how the proposed unsupervised methodology can be used to support exploratory behavior analysis for elderly care.
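A minimal sketch of the flow-graph idea: transition probabilities and a simple local feature (mean dwell time) are computed from toy sensor-event trajectories. The event names, features and numbers are invented; the behavior-aware features and the kernel k-means subflow extraction of the thesis are not reproduced here.

```python
# Toy "flow graph": a probabilistic finite state automaton built from sensor-event
# trajectories, with transition probabilities and mean dwell times as local features.
from collections import defaultdict

sequences = [  # one trajectory per day: (sensor_event, dwell_seconds)
    [("bed", 28800), ("bathroom", 600), ("kitchen", 1800), ("sofa", 7200)],
    [("bed", 25200), ("kitchen", 1200), ("sofa", 5400), ("bathroom", 300)],
]

edge_counts = defaultdict(lambda: defaultdict(int))
dwell_times = defaultdict(list)

for seq in sequences:
    for (a, dwell_a), (b, _) in zip(seq, seq[1:]):
        edge_counts[a][b] += 1
        dwell_times[a].append(dwell_a)
    last_event, last_dwell = seq[-1]
    dwell_times[last_event].append(last_dwell)

flow_graph = {
    node: {
        "mean_dwell": sum(dwell_times[node]) / len(dwell_times[node]),
        "transitions": {b: c / sum(nbrs.values()) for b, c in nbrs.items()},
    }
    for node, nbrs in edge_counts.items()
}

for node, attrs in flow_graph.items():
    print(node, round(attrs["mean_dwell"]), attrs["transitions"])
```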
|
844 |
A Joint Modeling Approach to Studying English Language Proficiency Development and Time-to-Reclassification
Matta, Tyler, 01 May 2017 (has links)
The development of academic English proficiency and the time it takes to reclassify to fluent English proficient status are key issues in monitoring the achievement of English learners. Yet little is known about academic English language development at the domain level (listening, speaking, reading, and writing), or about how English language development is associated with time-to-reclassification as an English proficient student. Although the substantive findings surrounding English proficiency and reclassification are of great import, the main focus of this dissertation was methodological: the exploration and testing of joint modeling methods for studying both issues. The first joint model studied was a multilevel, multivariate random effects model that estimated the student-specific and school-specific associations between different domains of English language proficiency. The second model was a multilevel shared random effects model that estimated English proficiency development and time-to-reclassification simultaneously and treated the student-specific random effects as latent covariates in the time-to-reclassification model. These joint modeling approaches were illustrated using annual English language proficiency test scores and time-to-reclassification data from a large Arizona school district.
Results from the multivariate random effects model revealed correlations greater than .5 among the reading, writing and oral English proficiency random intercepts. The analysis of English proficiency development illustrated that some students had attained proficiency in particular domains at different times, and that some students had not attained proficiency in a particular domain even when their total English proficiency score met the state benchmark for proficiency. These more specific domain score analyses highlight important differences in language development that may have implications for instruction and policy. The shared random effects model resulted in predictions of time-to-reclassification that were 97% accurate, compared to 80% accuracy from a conventional discrete-time hazard model. The time-to-reclassification analysis suggested that use of information about English language development is critical for making accurate predictions of the time a student will reclassify in this Arizona school district.
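In generic notation, a shared random effects specification of the kind described can be sketched as follows (an illustrative parameterization, not necessarily the dissertation's exact model):

\[
y_{tij} = (\beta_0 + b_{0i}) + (\beta_1 + b_{1i})\,t + \varepsilon_{tij},
\qquad
\operatorname{logit} h_{ij}(t) = \alpha_t + \gamma_0\, b_{0i} + \gamma_1\, b_{1i},
\]

where \(y_{tij}\) is the proficiency score of student \(i\) in school \(j\) at time \(t\), the student-specific random effects \((b_{0i}, b_{1i})\) are shared as latent covariates with the discrete-time hazard \(h_{ij}(t)\) of reclassification, and \(\gamma_0, \gamma_1\) capture the association between proficiency growth and time-to-reclassification.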
|
845 |
Modeling and projection of respondent driven network samples
Zhuang, Zhihe, January 1900 (has links)
Master of Science / Department of Statistics / Perla E. Reyes Cuellar / The term network has become part of our everyday vocabulary. The most popular are perhaps social networks, but the concept also includes business partnerships, literature citations, and biological networks, among others. Formally, networks are defined as sets of items and their connections. Often modeled as the mathematical object known as a graph, networks have been studied extensively for several years, and research is widely available. In statistics, a variety of modeling techniques and statistical terms have been developed to analyze them and predict individual behaviors. Specifically, statistics such as the degree distribution and the clustering coefficient are considered important indicators in traditional social network studies. However, while conventional network models assume that the whole network population is known, complete information is not always available. Thus, different sampling methods are often required when the population data is inaccessible. Less attention has been dedicated to studying the accuracy with which these sampling methods produce a representative sample. As such, the aim of this report is to assess the capacity of sampling techniques to reflect the features of the original network. In particular, we study Anti-cluster Respondent Driven Sampling (AC-RDS). We also explore whether standard modeling techniques paired with sample data can estimate statistics often used in the study of social networks.
Respondent Driven Sampling (RDS) is a chain-referral approach to study rare and/or hidden populations. Originating from the link-tracing design, RDS has been further developed into a series of methods used in social network studies, such as locating target populations or estimating the number and proportion of needle-sharing among drug addicts. However, RDS does not always perform as well as expected. When the social network contains tight communities (or clusters) with few connections between them, traditional RDS tends to oversample one community, introducing bias. AC-RDS is a special Markov chain process that collects samples across communities, capturing the whole network. With special referral requests, the initial seeds are more likely to refer individuals outside their own communities. In this report, we fitted an Exponential Random Graph Model (ERGM) and a Stochastic Block Model (SBM) to the Facebook friendship network of 1034 participants from an empirical study. Then, given our goal of identifying techniques that produce a representative sample, we compared two versions of AC-RDS, in addition to traditional RDS, with Simple Random Sampling (SRS). We compared the methods by drawing 100 network samples using each sampling technique, fitting an SBM to each sample network, and using the results to project the network to the population size. We calculated essential network statistics, such as the degree distribution, for each sampling method and compared the results to the statistics observed in the original network.
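The comparison pipeline can be illustrated with a simplified sketch: simulate a two-community network of roughly the same size as the Facebook data, draw a naive chain-referral (RDS-style) sample, and compare degree statistics with the full network. The AC-RDS referral rules, the SBM fitting and the projection step of the report are not reproduced; networkx is assumed.

```python
# Simplified sanity check: a chain-referral sample vs. the full two-community network.
# Sizes loosely echo the 1034-node Facebook network; everything else is illustrative.
import random
import networkx as nx

random.seed(1)
G = nx.stochastic_block_model([500, 534], [[0.03, 0.001], [0.001, 0.03]], seed=1)

def chain_referral_sample(G, n_seeds=3, coupons=3, target=200):
    """Naive RDS-style sampling: each recruit refers up to `coupons` unsampled neighbors."""
    sampled = set(random.sample(list(G.nodes), n_seeds))
    frontier = list(sampled)
    while frontier and len(sampled) < target:
        next_wave = []
        for v in frontier:
            candidates = [u for u in G.neighbors(v) if u not in sampled]
            for u in random.sample(candidates, min(coupons, len(candidates))):
                sampled.add(u)
                next_wave.append(u)
        frontier = next_wave
    return G.subgraph(sampled)

S = chain_referral_sample(G)
pop_deg = [d for _, d in G.degree()]
smp_deg = [d for _, d in S.degree()]
print("population mean degree:", sum(pop_deg) / len(pop_deg))
print("sample mean degree (within sample):", sum(smp_deg) / len(smp_deg))
# Nodes 0-499 belong to the first block, the rest to the second.
print("share of sample from first community:", sum(1 for v in S if v < 500) / S.number_of_nodes())
```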
|
846 |
Pipeline Integrity Management System (PIMS) using Bayesian networks for lifetime extension
Sulaiman, Nurul Sa'aadah, January 2017 (has links)
The majority of the world's offshore infrastructures are now showing signs of aging and are approaching the end of their original design life. Their ability to withstand various operational and environmental changes has been a main concern over the years, because pipelines will still need to operate for a few more decades to meet the increasing demand for oil and gas. To address these issues, an effective pipeline integrity management system is required to manage pipeline systems and to ensure the reliability and availability of the pipeline. The main goal is to identify, apply, and assess the applicability of the Bayesian network approach in evaluating the integrity of subsea pipelines as it evolves with time. The study specifically aims to handle knowledge uncertainties and to assist decision making in subsea pipeline integrity assessment. A static Bayesian network model was developed to compute the probability of the pipeline condition and to investigate the underlying factors that lead to pipeline damage. From the model, the most influential factors were identified, and the sensitivity analysis demonstrated that the developed model was robust and accurate. The proposed model was then extended into a decision tool using an influence diagram. The results from the influence diagram were used to prioritize the maintenance scheme of the pipeline segments, and the benefit-to-cost ratio was applied to determine the pipeline maintenance intervals. A dynamic Bayesian network was used to model the time-dependent deterioration of pipeline structural reliability, and good agreement with a conventional structural reliability method was achieved. The present thesis has demonstrated the applicability and effectiveness of the Bayesian network approach in the field of oil and gas. It is hoped that the proposed models can be applied by oil and gas pipeline practitioners to enhance the integrity and lifetime of oil and gas pipelines.
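The maintenance-prioritization step can be sketched as ranking segments by a benefit-to-cost ratio, where the benefit is the expected failure cost avoided. In the thesis the failure probabilities would come from the Bayesian network; the segment names and all numbers below are hypothetical.

```python
# Rank pipeline segments for maintenance by benefit-to-cost ratio. The failure
# probabilities would come from the Bayesian network; all values are hypothetical.
segments = {
    # name: (annual failure probability, consequence cost, maintenance cost)
    "Segment A": (0.020, 5.0e6, 2.0e5),
    "Segment B": (0.005, 8.0e6, 3.0e5),
    "Segment C": (0.012, 2.0e6, 1.0e5),
}

def benefit_cost_ratio(p_fail, consequence, cost, risk_reduction=0.8):
    """Expected failure cost avoided by the intervention, divided by its cost."""
    return (risk_reduction * p_fail * consequence) / cost

ranking = sorted(
    ((name, benefit_cost_ratio(*values)) for name, values in segments.items()),
    key=lambda item: item[1],
    reverse=True,
)
for name, bcr in ranking:
    print(f"{name}: benefit/cost = {bcr:.2f}")
```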
|
847 |
Bayesian M/EEG source localization with possible joint skull conductivity estimation / Méthodes bayésiennes pour la localisation des sources M/EEG et estimation de la conductivité du crâne
Costa, Facundo Hernan, 02 March 2017 (has links)
M/EEG techniques allow determining changes in brain activity, which is useful for diagnosing brain disorders such as epilepsy. They consist of measuring the electric potential at the scalp and the magnetic field around the head. The measurements are related to the underlying brain activity by a linear model that depends on the lead-field matrix. Localizing the sources, or dipoles, of M/EEG measurements consists of inverting this linear model. However, the non-uniqueness of the solution (due to the fundamental law of physics) and the low number of dipoles make the inverse problem ill-posed. Solving such a problem requires some sort of regularization to reduce the search space. The literature abounds with methods and techniques to solve this problem, especially with variational approaches. This thesis develops Bayesian methods to solve ill-posed inverse problems, with application to M/EEG. The main idea underlying this work is to constrain the sources to be sparse. This hypothesis is valid in many applications, such as certain types of epilepsy. We develop different hierarchical models to account for the sparsity of the sources. Theoretically, enforcing sparsity is equivalent to minimizing a cost function penalized by the l0 pseudo-norm of the solution. However, since l0 regularization leads to NP-hard problems, the l1 approximation is usually preferred. Our first contribution consists of combining the two norms in a Bayesian framework, using a Bernoulli-Laplace prior. A Markov chain Monte Carlo (MCMC) algorithm is used to estimate the parameters of the model jointly with the source locations and intensities. Comparing the results, in several scenarios, with those obtained with sLoreta and weighted l1-norm regularization shows interesting performance, at the price of a higher computational complexity. Our Bernoulli-Laplace model solves the source localization problem at one instant of time. However, it is biophysically well known that brain activity follows spatiotemporal patterns, so exploiting the temporal dimension is of interest to further constrain the problem. Our second contribution consists of formulating a structured sparsity model to exploit this biophysical phenomenon. Precisely, a multivariate Bernoulli-Laplacian distribution is proposed as an a priori distribution for the dipole locations. A latent variable is introduced to handle the resulting complex posterior, and an original Metropolis-Hastings sampling algorithm is developed. The results show that the proposed sampling technique significantly improves convergence. A comparative analysis is performed between the proposed model, an l21 mixed-norm regularization and the Multiple Sparse Priors (MSP) algorithm. Various experiments are conducted with synthetic and real data, and the results show that our model has several advantages, including a better recovery of the dipole locations. The previous two algorithms consider a fully known lead-field matrix. However, this is seldom the case in practical applications; instead, this matrix is the result of approximation methods that lead to significant uncertainties. Our third contribution consists of handling the uncertainty of the lead-field matrix. The proposed method expresses this matrix as a function of the skull conductivity using a polynomial matrix interpolation technique, the conductivity being considered the main source of uncertainty of the lead-field matrix. Our multivariate Bernoulli-Laplacian model is then extended to estimate the skull conductivity jointly with the brain activity. The resulting model is compared to other methods, including the techniques of Vallaghé et al. and Guttierez et al. Our method provides results of better quality without requiring knowledge of the active dipole positions and is not limited to a single dipole activation.
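The sparsity prior underlying the first contribution takes the following general Bernoulli-Laplace form (a sketch in standard notation; the thesis's full hierarchical parameterization may differ):

\[
p(x_i \mid \omega, \lambda) = (1-\omega)\,\delta(x_i) + \omega\,\frac{\lambda}{2}\,e^{-\lambda |x_i|},
\]

where \(x_i\) is the intensity of dipole \(i\), \(\delta(\cdot)\) is the Dirac mass at zero, \(\omega \in (0,1)\) controls the prior probability that a dipole is active (the l0-like part), and the Laplace component plays the role of the l1 penalty on active dipoles.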
|
848 |
Strategic choices in realistic settings
Wang, Rongyu, January 2016 (has links)
In this thesis, we study Bayesian games with two players and two actions (2 by 2 games) in realistic settings where private information is correlated or players have scarcity of attention. The contribution of this thesis is to shed further light on strategic interactions in realistic settings. Chapter 1 gives an introduction to the research and contributions of this thesis. In Chapter 2, we study how the correlation of private information affects rational agents' choices in a symmetric game of strategic substitutes. The game we study is a static 2 by 2 entry game. Private information is assumed to be jointly normally distributed. The game can, for some parameter values, be solved by a cutoff strategy: that is, enter if the private payoff shock is above some cutoff value and do not enter otherwise. Chapter 2 shows that there is a restriction on the value of the correlation coefficient such that the game can be solved by the use of cutoff strategies. In this strategic-substitutes game, there are two possibilities when the game can be solved by cutoff strategies: either the game exhibits a unique (symmetric) equilibrium for any value of the correlation coefficient, or there is a threshold value for the correlation coefficient such that there is a unique (symmetric) equilibrium if the correlation coefficient is below the threshold, while if the correlation coefficient is above the threshold there are three equilibria: a symmetric equilibrium and two asymmetric equilibria. To understand how parameter changes affect players' equilibrium behaviour, a comparative statics analysis of the symmetric equilibrium is conducted. It is found that increasing the monopoly profit or the duopoly profit encourages players to enter the market, while increasing the information correlation or jointly increasing the variances of the players' prior distributions makes players more likely to choose entry if the equilibrium cutoff strategies are below the unconditional mean, and less likely to choose entry if the current equilibrium cutoff strategies are above the unconditional mean. In Chapter 3, we study a 2 by 2 entry game of strategic complements in which players' private information is correlated. As in Chapter 2, the game is symmetric and private information is modelled by a joint normal distribution. We use a cutoff strategy as defined in Chapter 2 to solve the game. Given the other parameters, there exists a critical value of the correlation coefficient: for correlation coefficients below this critical value, cutoff strategies cannot be used to solve the game. We explore the number of equilibria and the comparative static properties of the solution with respect to the correlation coefficient and the variance of the prior distribution. As the correlation coefficient changes from the lowest feasible value (such that cutoff strategies are applicable) to one, the number of equilibria moves from 3 to 2 to 1, or from 3 to 1. Alternatively, under some parameter specifications, the game exhibits a unique equilibrium for all feasible values of the correlation coefficient. The comparative statics of the equilibrium strategies depend on the sign of the equilibrium cutoff strategies and on the equilibrium's stability. We provide a necessary and sufficient condition for the existence of a unique equilibrium. This necessary and sufficient condition nests the sufficient condition for uniqueness given by Morris and Shin (2005).
Finally, if the correlation coefficient is negative for the strategic-complements games or positive for the strategic-substitutes games, there exists a critical value of the variance such that for a variance below this threshold the game cannot be solved in cutoff strategies. This implies that Harsanyi's (1973) purification rationale, which supposes that the perturbed games are solved by cutoff strategies and that the uncertainty of the perturbed games vanishes as the variances of the perturbation-error distribution converge to zero, cannot be applied to a strategic-substitutes (strategic-complements) game with dependent perturbation errors that follow a joint normal distribution if the correlation coefficient is positive (negative). However, if the correlation coefficient is positive for the strategic-complements games or negative for the strategic-substitutes games, the purification rationale is still applicable even with dependent perturbation errors. There are Bayesian games that converge to the underlying complete information game as the perturbation errors degenerate to zero, and every pure strategy Bayesian Nash equilibrium of the perturbed games will converge to the corresponding Nash equilibrium of the complete information game in the limit. In Chapter 4, we study how scarcity of attention affects strategic choice behaviour in a 2 by 2 incomplete-information strategic-substitutes entry game. Scarcity of attention is a common psychological characteristic (Kahneman 1973) and is modelled by the rational inattention approach introduced by Sims (1998). In our game, players acquire information about their own private payoff shocks (which here follow a high-low binary distribution) at a cost. We find that, given the opponent's strategy, as the unit cost of information acquisition increases a player's best response will switch from acquiring information to simply comparing the ex-ante expected payoff of each action (using the player's prior). By studying symmetric Bayesian games, we find that scarcity of attention can generate multiple equilibria in games that ordinarily have a unique equilibrium. These multiple equilibria are generated by the information cost. In any Bayesian game where there are multiple equilibria, there always exists one pair of asymmetric equilibria in which at least one player plays the game without acquiring information. The number of equilibria differs with the value of the unit information cost: there can be 1, 5 or 3 equilibria. Increasing the unit information cost can either encourage or discourage entry, depending on whether the prior probability of a high payoff shock is greater or less than some threshold value. We compare the rational inattention Bayesian game with a Bayesian quantal response equilibrium game where the observation errors are additive and follow a Type I extreme value distribution. A necessary and sufficient condition is established under which both the rational inattention Bayesian game and the quantal response game have a common equilibrium.
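As a rough numerical illustration of the cutoff-strategy logic of Chapter 2, the sketch below solves a symmetric indifference condition for an assumed entry-game payoff structure; the payoffs, parameter values and the exact form of the condition are placeholders rather than the thesis's specification.

```python
# Numerical sketch: symmetric cutoff equilibrium in a 2x2 entry game with
# jointly normal, correlated private payoff shocks. The payoff structure and
# all parameter values are assumptions made for illustration only.
from math import sqrt
from scipy.stats import norm
from scipy.optimize import brentq

m, d = 1.5, -0.5       # entrant's profit when alone / when both enter (m > d: substitutes)
sigma, rho = 1.0, 0.3  # standard deviation and correlation of the private shocks

def entry_payoff_at_cutoff(c):
    """Expected payoff of entering for a player whose shock equals the cutoff c,
    when the opponent also uses cutoff c."""
    # theta_j | theta_i = c  is  N(rho * c, sigma^2 * (1 - rho^2))
    p_opponent_enters = 1.0 - norm.cdf(c, loc=rho * c, scale=sigma * sqrt(1.0 - rho**2))
    return c + m * (1.0 - p_opponent_enters) + d * p_opponent_enters

c_star = brentq(entry_payoff_at_cutoff, -50.0, 50.0)  # symmetric equilibrium cutoff
print("equilibrium cutoff:", round(c_star, 4))
print("ex-ante entry probability:", round(1.0 - norm.cdf(c_star, scale=sigma), 4))
```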
|
849 |
Genes de efeito principal e locos de características quantitativas (QTL) em suínos
Gonçalves, Tarcísio de Moraes [UNESP], January 2003 (has links) (PDF)
A Bayesian marker-free segregation analysis was applied to search for evidence of major genes (MG) affecting two carcass traits, intramuscular fat in % (IMF) and backfat thickness in mm (BF), and one growth trait, liveweight gain from approximately 25 to 90 kg liveweight, in g/day (LG). The study used data from 1,257 animals from an experimental cross between Meishan (Chinese breed) boars and sows from Dutch Large White and Landrace lines. In animal breeding, Finite Polygenic Models (FPM) may be an alternative to the Infinitesimal Polygenic Model (IPM) for the genetic evaluation of multiple-generation pedigree populations for multiple quantitative traits. The FPM, the IPM and the FPM combined with the IPM were empirically tested for the estimation of variance components and of the number of genes in the FPM. Marginal posterior means of variance components and parameters were estimated using Markov chain Monte Carlo techniques, namely the Gibbs sampler and the reversible jump (Metropolis-Hastings) sampler. The results showed evidence for four major genes, two for IMF and two for BF. For BF, the MG explained almost all of the genetic variance, while for IMF the MG significantly reduced the polygenic variance. For LG, no influence of a MG could be established. The polygenic heritability estimates for IMF, BF and LG were 0.37, 0.24 and 0.37, respectively. The Bayesian methodology was satisfactorily implemented in the software package FlexQTL. Further molecular genetic research based on the same experimental data, aimed at mapping the major genes affecting IMF and BF in particular, has a high probability of success.
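In generic mixed-model notation, the kind of model being compared can be sketched as

\[
\mathbf{y} = \mathbf{X}\boldsymbol{\beta} + \sum_{k=1}^{K} \mathbf{W}_k a_k + \mathbf{Z}\mathbf{u} + \mathbf{e},
\]

where \(a_k\) is the effect of the \(k\)-th major gene with genotype incidence matrix \(\mathbf{W}_k\), \(\mathbf{u} \sim N(\mathbf{0}, \mathbf{A}\sigma_u^2)\) collects the residual polygenic effects (modelled infinitesimally or by a finite number of loci), and the number of major genes \(K\) and their effects are sampled by reversible jump MCMC. This notation is an assumed illustration, not the exact parameterization implemented in FlexQTL.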
|
850 |
Uncertainty analysis in product service system: Bayesian network modelling for availability contract
Narayana, Swetha, January 2016 (has links)
There is an emerging trend of manufacturing companies offering combined products and services to customers as integrated solutions. Availability contracts are an apt instance of such offerings, where product use is guaranteed to the customer and enforced by incentive-penalty schemes. Uncertainty is increased in such an industry setting, where all stakeholders strive to achieve their respective performance goals while collaborating intensively. Understanding through-life uncertainties and their impact on cost is critical to ensure the sustainability and profitability of the industries offering such solutions. In an effort to address this challenge, the aim of this research is to provide an approach for the analysis of uncertainties in a Product Service System (PSS) delivered in business-to-business applications, by specifying a procedure to identify, characterise and model uncertainties, with an emphasis on providing decision support and prioritising the key uncertainties affecting performance outcomes. The thesis presents a literature review of research areas at the interface of topics such as uncertainty, PSS and availability contracts. From this, seven requirements that are vital to enhancing the understanding and quantification of uncertainties in Product Service Systems are drawn. These requirements are synthesised into a conceptual uncertainty framework. The framework prescribes four elements: identifying a set of uncertainties, discerning the relationships between uncertainties, tools and techniques to treat uncertainties, and finally results that could ease uncertainty management and analysis efforts. The conceptual uncertainty framework was applied to an industry case study in availability contracts, where each of the four elements was realised. This application phase of the research included the identification of uncertainties in the PSS, the development of a multi-layer uncertainty classification, the derivation of the structure of the Bayesian Network and, finally, the evaluation and validation of the Bayesian Network. The findings suggest that understanding uncertainties from a system perspective is essential to capture the network aspect of a PSS. This network comprises several stakeholders with increased flows of information and material, which can be effectively represented using Bayesian Networks.
|