411
Abordagem clássica e bayesiana para os modelos de séries temporais da família GARMA com aplicações para dados de contagem / Classical and Bayesian approach for time series models of the GARMA family with applications to count data. Philippsen, Adriana Strieder, 31 March 2011.
In this work, the GARMA model was studied for modeling time series of count data with Poisson, binomial and negative binomial conditional distributions. The main goal was to analyze, in both the classical and the Bayesian context, the performance and quality of fit of the corresponding models, as well as the coverage rates of the confidence intervals for the model parameters. To this end, Bayesian point estimators were analyzed and credible intervals were examined. For the Bayesian study, a conjugate prior distribution is proposed for the model parameters, and the posterior distribution, combined with suitable loss functions, yields Bayesian estimates of the parameters. In the classical approach, maximum likelihood estimators were computed using Fisher scoring, and their consistency was verified by simulation. The studies show that both classical and Bayesian inference for the parameters of these models exhibit good properties, as assessed through the properties of the point estimators. The last stage of the work is the analysis of a real data set: a series of hospital admissions due to dengue in Campina Grande. The results show that both the classical and the Bayesian approaches describe the behavior of the series well.
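To make the model concrete, here is a minimal sketch, not from the thesis, of a Poisson GARMA(1,1) with a log link in the spirit of Benjamin, Rigby and Stasinopoulos (2003), simulated and then fit by maximum likelihood. The thesis uses Fisher scoring; a generic simplex optimizer stands in here, and all parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)

def poisson_garma11_nll(params, y, c=0.1):
    """Negative log-likelihood of a Poisson GARMA(1,1) with log link.
    y* = max(y, c) is the usual device for zero counts under the log link."""
    beta0, phi, theta = params
    n = len(y)
    ystar = np.maximum(y, c)
    eta = np.empty(n)
    eta[0] = beta0
    for t in range(1, n):
        eta[t] = (beta0
                  + phi * (np.log(ystar[t - 1]) - beta0)
                  + theta * (np.log(ystar[t - 1]) - eta[t - 1]))
    mu = np.exp(eta)
    return -(y * eta - mu).sum()  # Poisson log-lik up to sum(log y!)

# Simulate a series from assumed true parameters, then estimate them by MLE
true_beta0, true_phi, true_theta = 1.5, 0.5, 0.2
n = 500
y = np.zeros(n)
eta = np.zeros(n)
eta[0] = true_beta0
y[0] = rng.poisson(np.exp(eta[0]))
for t in range(1, n):
    lag = np.log(max(y[t - 1], 0.1))
    eta[t] = (true_beta0 + true_phi * (lag - true_beta0)
              + true_theta * (lag - eta[t - 1]))
    y[t] = rng.poisson(np.exp(eta[t]))

fit = minimize(poisson_garma11_nll, x0=[1.0, 0.1, 0.1], args=(y,),
               method="Nelder-Mead")
print("MLE (beta0, phi, theta):", fit.x)
```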
412
O ensino de estatística na universidade e a controvérsia sobre os fundamentos da inferência / Teaching Statistics at the University and the Inference Controversy. Cordani, Lisbeth Kaiserlian, 18 June 2001.
Most undergraduate courses in Brazil offer a compulsory basic course in probability and statistics. Beyond descriptive procedures associated with data analysis, these courses present inferential techniques, usually within the classical (frequentist) Neyman-Pearson framework. It is not common to discuss the epistemological aspects of statistical inference, nor to present the Bayesian school as a possible alternative. It is well known that both student and teacher struggle with this course: the student, because it is often taught mechanically, with little applied motivation and no apparent link to the rest of the curriculum; the teacher, because students usually arrive unprepared for basic concepts such as uncertainty and variability, and often with a negative predisposition rooted in the taboo surrounding the subject. In order to discuss which first inferential notions should be offered, and to answer the question "which inference should we teach in a basic undergraduate statistics course?", this work characterizes the relationship between statistics and: scientific creation in general, and rationalism and empiricism in particular; the existence or not of a scientific method; objectivism and subjectivism; the paradigms of the classical and Bayesian schools; and learning and cognition. The inferential approaches of each school were analyzed and compared, and some examples are presented. This work suggests that the syllabus of a first course should include the epistemological aspects of inference as well as an introduction to statistical inference under both approaches, classical and Bayesian. This would avoid, at least in the student's first contact with the area, both the rupture with the classical school advocated by many Bayesians and the resistance (maintenance of the status quo) defended by many members of the classical school. The proposal is, in fact, one of coexistence of the two schools in a basic course, since it is the teacher's duty to show students the state of the art, leaving the choice (if it makes sense) to a later stage, academic or professional.
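As a concrete version of the classical-versus-Bayesian contrast discussed above, here is a minimal sketch (an illustration, not from the thesis) comparing a frequentist confidence interval with a Bayesian credible interval for a binomial proportion; the data and the uniform Beta(1, 1) prior are assumptions.

```python
import numpy as np
from scipy import stats

successes, n = 12, 40        # assumed data: 12 successes in 40 trials
p_hat = successes / n

# Classical (Wald) 95% confidence interval, as in a first Neyman-Pearson course
se = np.sqrt(p_hat * (1 - p_hat) / n)
ci = (p_hat - 1.96 * se, p_hat + 1.96 * se)

# Bayesian 95% credible interval with a uniform Beta(1, 1) prior;
# by conjugacy the posterior is Beta(1 + successes, 1 + n - successes)
posterior = stats.beta(1 + successes, 1 + n - successes)
cred = posterior.ppf([0.025, 0.975])

print(f"Wald 95% CI:       ({ci[0]:.3f}, {ci[1]:.3f})")
print(f"95% credible int.: ({cred[0]:.3f}, {cred[1]:.3f})")
```

Numerically the two intervals are close here; the pedagogical point is that they answer differently framed questions, which is exactly the epistemological distinction the thesis argues a first course should make explicit.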
413
Vuxna med förvärvad traumatisk hjärnskada - omställningsprocesser och konsekvenser i vardagslivet : En studie av femton personers upplevelser och erfarenheter av att leva med förvärvad traumatisk hjärnskada / Adults with acquired traumatic brain injury – the changeover process and consequences in everyday life : A study of fifteen persons' experience of living with acquired traumatic brain injury. Strandberg, Thomas, January 2006.
The overall purpose of this study is to illuminate the changeover process experienced by individuals who acquired a traumatic brain injury (TBI) as adults, to increase knowledge and understanding of this process, and to describe the meaning of support in everyday life.

Persons who acquired a TBI as adults took part in qualitative, in-depth, semi-structured interviews covering six areas: consequences of TBI, family and social networks, working life and occupation, life changes, support from society, and everyday life. A total of 15 informants participated, aged between 19 and 53 years at the time of injury. Data were structured and underwent two phases of analysis. In the first phase, the data underwent latent content analysis, underpinned by a hermeneutic approach; in the subsequent phase, they were reanalysed within a framework derived from the theory of social recognition.

The first, inductive phase of analysis elicited six key themes: (i) the meaning of care, a question of formal and/or informal support; (ii) the meaning of action, a question of activity versus inactivity; (iii) autonomy, a question of dependence versus independence; (iv) social interaction, a question of encounter and/or treatment; (v) changes, a question of process versus stagnation; and (vi) emotions, an oscillation between hope and hopelessness. After the construction of the six themes, each was connected, through a discursive analysis, with theories, earlier studies in the field of brain injury, and key interview quotations from the empirical material. During this phase, an interest developed in studying the material from a new theoretical point of view. The second phase of analysis therefore involved the development of a framework derived from Honneth's (1995) theory of social recognition. The central construct of 'recognition' was analysed along the three dimensions proposed by Honneth: the individual dimension, the legal dimension and the value dimension. Using this framework, the data were reanalysed; the scientific term for this process of re-contextualisation and re-description of data is abductive inference.

The reported consequences were negative as well as positive. Significant others (e.g. next of kin) had an important function as a driving force for training and for preparing for the life situation after injury. A majority of the informants were satisfied with support from society, such as hospital care, rehabilitation and community support. Such support initially proceeded without problems, but as time passed, the responsibility shifted to the person with TBI to take the initiative in arranging longer-term services. Long-term support that addresses the physical, cognitive and psychosocial consequences of TBI is important for outcomes. The majority of the informants had difficulties returning to working life after the injury. Recovery seemed to be a prolonged, probably never-ending process, which gradually becomes integrated as a part of life. The informants gave varying accounts of the extent to which they experienced social recognition.
414
The Role of High-Level Reasoning and Rule-Based Representations in the Inverse Base-Rate Effect. Wennerholm, Pia, January 2001.
The inverse base-rate effect is the observation that on certain occasions people classify new objects as belonging to rare base-rate categories rather than common ones (e.g., D. L. Medin & S. M. Edelson, 1988). This finding is inconsistent with normative prescriptions of rationality and provides an anomaly for current theories of human knowledge representation, such as exemplar-based models of categorization, which predict a consistent use of base rates (e.g., D. L. Medin & M. M. Schaffer, 1978). This thesis presents a novel explanation of the inverse base-rate effect: participants sometimes eliminate category options that are inconsistent with well-supported inference rules. These assumptions contrast with those of attentional theory (J. K. Kruschke, in press), according to which the inverse base-rate effect is the outcome of rapid attention shifts operating on cue-category associations. Studies I, II and III verified seven qualitative predictions derived from the eliminative-inference idea, none of which can be explained by attentional theory. The most important of these findings were that elimination of well-known, common categories mediates the inverse base-rate effect rather than the strongest cue-category associations (Study I), that only participants with a rule-based mode of generalization exhibit the inverse base-rate effect (Study II), and that rapid attentional shifts per se do not accelerate learning but rather decelerate it (Study III). In addition, Study I provided a quantitative implementation of the eliminative-inference idea, ELMO, which demonstrated that this high-level reasoning process can produce the basic pattern of base-rate effects in the inverse base-rate design. Taken together, the empirical evidence of this thesis suggests that rule-based elimination is a powerful component of the inverse base-rate effect. But previous studies have indicated that attentional shifts affect the inverse base-rate effect too. A complete account therefore needs to integrate inductive and eliminative inferences operating on rule-based representations with attentional shifts; the Discussion proposes a number of suggestions for such integrative work.
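A toy sketch, not the thesis's ELMO model, of the design and the eliminative-inference idea: in training, cues A and B jointly predict a common category and A and C a rare one; on the ambiguous B-C test item, a frequency-counting classifier picks the common category, while an eliminative reasoner rejects it because the presence of C contradicts its well-learned cue pattern. The set encoding and the 3:1 frequencies are assumptions.

```python
# Toy version of the inverse base-rate design (Medin & Edelson, 1988):
# {A, B} -> common disease trained 3 times, {A, C} -> rare disease once.
from collections import Counter

training = [({"A", "B"}, "common")] * 3 + [({"A", "C"}, "rare")] * 1

def base_rate_classifier(cues):
    """Match-and-count: each training instance votes by cue overlap, so
    training frequency (the base rate) dominates on ambiguous items."""
    votes = Counter()
    for pattern, label in training:
        votes[label] += len(cues & pattern)
    return votes.most_common(1)[0][0]

def eliminative_classifier(cues):
    """Crude rule-based elimination: if the test cues are not contained in
    the common category's well-learned pattern, eliminate 'common'.
    On {B, C} this yields 'rare' -- the inverse base-rate effect."""
    for pattern, label in training:
        if label == "common" and not cues <= pattern:
            return "rare"
    return "common"

test_item = {"B", "C"}
print("Base-rate classifier :", base_rate_classifier(test_item))   # common
print("Eliminative reasoner :", eliminative_classifier(test_item))  # rare
```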
416
Rigorous System-level Modeling and Performance Evaluation for Embedded System Design / Modélisation et Évaluation de Performance pour la Conception des Systèmes Embarqués : Approche Rigoureuse au Niveau Système. Nouri, Ayoub, 08 April 2015.
In the present work, we tackle the problem of modeling and evaluating performance in the context of embedded systems design. Embedded systems have become essential to modern societies and have evolved considerably. Due to the growing demand for functionality and programmability, software solutions have gained in importance, although they are known to be less efficient than dedicated hardware. Consequently, considering performance has become a must, especially with the generalization of resource-constrained devices. We present a rigorous and integrated approach to system-level performance modeling and analysis. The proposed method enables faithful high-level modeling, encompassing both functional and performance aspects, and allows for rapid and accurate quantitative performance evaluation. The approach is model-based and relies on the $\mathcal{S}$BIP formalism for stochastic component-based modeling and formal verification. We use statistical model checking for analyzing performance requirements and introduce a stochastic abstraction technique to enhance its scalability. Faithful high-level models are built by calibrating functional models with low-level performance information, using automatic code generation and statistical inference. We provide a tool flow that automates most of the steps of the proposed approach and illustrate its use on a real-life case study for image processing: the design and mapping of a parallel version of the HMAX object-recognition algorithm on the STHORM many-core platform. We explored timing aspects, and the obtained results show not only the usability of the approach but also its pertinence for taking well-founded decisions in system-level design.
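Statistical model checking, on which the approach relies, estimates the probability that a stochastic model satisfies a property by sampling execution traces, with the Chernoff-Hoeffding bound fixing how many traces are needed for a given precision and confidence. Below is a minimal sketch under an assumed toy latency model and deadline property; it is not the thesis's tool flow.

```python
import math
import random

def required_samples(eps, delta):
    """Chernoff-Hoeffding bound: with n >= ln(2/delta) / (2*eps^2) samples,
    the empirical estimate is within eps of the true probability with
    confidence at least 1 - delta."""
    return math.ceil(math.log(2 / delta) / (2 * eps ** 2))

def simulate_latency(rng):
    """Assumed toy stochastic model: end-to-end latency (ms) of a pipeline
    whose two stages have exponentially distributed processing times."""
    return rng.expovariate(1 / 3.0) + rng.expovariate(1 / 5.0)

def smc_estimate(property_holds, n, rng):
    """Monte Carlo estimate of P(property) over n sampled traces."""
    return sum(property_holds(rng) for _ in range(n)) / n

rng = random.Random(7)
eps, delta = 0.01, 0.05
n = required_samples(eps, delta)   # 18445 traces for (eps, delta) = (0.01, 0.05)
p = smc_estimate(lambda r: simulate_latency(r) <= 20.0, n, rng)
print(f"n = {n}, P(latency <= 20 ms) ~= {p:.3f} (+/- {eps} w.p. {1 - delta})")
```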
417
Neuroninių-neraiškiųjų tinklų naudojimas verslo taisyklių sistemose / Use of neuro-fuzzy networks with business rules engines. Dmitrijev, Gintaras, 09 July 2009.
This work investigates the use of fuzzy business rules in information systems, issues of "soft computing" in intelligent information systems, and the principles of neuro-fuzzy systems. The main laws of fuzzy logic, which underpin the use of fuzzy business rules in intelligent information systems, are examined. An approach is suggested, based on the RuleML rule-markup standard, for using neuro-fuzzy systems together with business rules engines. The thesis describes an experiment carried out using the Matlab environment, the XMLBeans application, and an application created by the author for migrating a fuzzy inference system to the RuleML format. After examining the theoretical and practical aspects of using neuro-fuzzy systems, conclusions and suggestions are presented. Structure: introduction, analytical-methodological part, experimental-research part, conclusions and suggestions, references. The thesis consists of 58 pages of text without appendixes, 30 figures and 30 bibliographical entries; appendixes are included separately.
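For flavor, here is a minimal sketch of one Mamdani-style fuzzy rule evaluation of the kind a fuzzy inference system encodes; the rule, membership functions and defuzzification grid are assumptions, not taken from the thesis, and the RuleML serialization step is omitted.

```python
# Minimal Mamdani-style inference for one business rule:
# "IF service is good AND food is good THEN tip is generous".
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function peaking at b on support [a, c]."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def infer_tip(service, food):
    # Fuzzify the crisp inputs (scores on a 0-10 scale)
    good_service = tri(service, 5, 10, 15)   # peaks at 10
    good_food = tri(food, 5, 10, 15)
    firing = min(good_service, good_food)    # AND = min (Mamdani)

    # Clip the consequent fuzzy set "generous tip" (percent) and defuzzify
    tips = np.linspace(0, 30, 301)
    generous = tri(tips, 15, 25, 35)
    clipped = np.minimum(generous, firing)
    if clipped.sum() == 0:
        return 0.0
    return float((tips * clipped).sum() / clipped.sum())  # centroid

print(f"tip = {infer_tip(service=8, food=7):.1f}%")
```

In a neuro-fuzzy system the membership-function parameters above would be learned from data rather than hand-set, and a rule like this is what the RuleML migration step would serialize for a business rules engine.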
419
Meta-análise de parâmetros genéticos de características de crescimento em bovinos de corte sob enfoques clássico e Bayesiano / Meta-analysis of genetic parameters of growth traits in beef cattle under classical and Bayesian approaches. Giannotti, Juliana Di Giorgio, 03 September 2004.
The increasing volume of scientific publications, and the occasionally divergent conclusions obtained in different studies on the same subject, are the two main motivations for compiling published results. Statistical procedures, most notably meta-analysis, have been developed to obtain a single, reliable answer from a set of published results. In animal breeding there is a large body of work reporting heritability estimates for growth traits in beef cattle. A literature search of 186 published articles yielded 869 direct heritability estimates, 186 maternal heritability estimates and 123 estimates of the direct-maternal genetic correlation for birth weight, weaning weight, weight at 365 days and weight at 550 days of age in zebu beef cattle. With this data set, meta-analyses were performed for each trait under both classical and Bayesian approaches, with the main goal of obtaining a pooled estimate of each genetic parameter. In the classical approach, the meta-analyses used fixed- and random-effects models, with two estimators of the between-study variance: restricted maximum likelihood and the estimator proposed by DerSimonian & Laird. A meta-analysis grouping the estimates by Ward's clustering method was also carried out. Under the Bayesian approach, the meta-analyses used a hierarchical model, and the between-study variance was obtained via simulation from the proposed model. The pooled estimates of direct heritability ranged from 0.18 to 0.33 across the groups formed by the cluster analysis, with the lowest values for weaning weight and the highest for weight at 550 days. The pooled maternal heritability estimates were 0.09 for birth weight, 0.13 for weaning weight, 0.12 for weight at 365 days and 0.05 for weight at 550 days. The pooled direct-maternal genetic correlations were -0.16 for birth weight, weaning weight and weight at 550 days, and -0.20 for weight at 365 days. The three methods used to estimate the between-study variance (restricted maximum likelihood, DerSimonian & Laird, and Bayesian) led to different values, the largest obtained by the Bayesian method and the smallest by DerSimonian & Laird; nevertheless, the pooled direct heritability estimates from the three estimators were very close for all four traits. Because it compares and combines results of distinct studies, allowing inference over a set of published results, meta-analysis is recommended as the statistical procedure for obtaining pooled values of direct and maternal heritability estimates and their correlations for growth traits in beef cattle.
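A minimal sketch of the DerSimonian & Laird moment estimator named in the abstract, which pools study-level estimates after estimating the between-study variance tau^2; the heritability estimates and their variances below are made-up inputs, not values from the thesis.

```python
import numpy as np

def dersimonian_laird(estimates, variances):
    """Pooled effect and between-study variance (tau^2) by the
    DerSimonian-Laird moment estimator."""
    y = np.asarray(estimates, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v                                   # fixed-effect weights
    y_fixed = (w * y).sum() / w.sum()
    q = (w * (y - y_fixed) ** 2).sum()            # Cochran's Q statistic
    k = len(y)
    tau2 = max(0.0, (q - (k - 1)) / (w.sum() - (w ** 2).sum() / w.sum()))
    w_star = 1.0 / (v + tau2)                     # random-effects weights
    pooled = (w_star * y).sum() / w_star.sum()
    se = np.sqrt(1.0 / w_star.sum())
    return pooled, se, tau2

# Made-up direct heritability estimates for one trait, with their variances
h2 = [0.22, 0.31, 0.18, 0.27, 0.35]
var = [0.004, 0.006, 0.003, 0.005, 0.008]
pooled, se, tau2 = dersimonian_laird(h2, var)
print(f"pooled h2 = {pooled:.3f} (SE {se:.3f}), tau^2 = {tau2:.4f}")
```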
420
Regressão binária nas abordagens clássica e bayesiana / Binary regression under the classical and Bayesian approaches. Fernandes, Amélia Milene Correia, 16 December 2016.
The objective of this work is to study the binary regression model under the frequentist and Bayesian approaches, using the probit, logit, complementary log-log, Box-Cox transformation and skew-probit link functions. In the classical approach we present the assumptions and the procedure used in regression modeling, and we assess the accuracy of the estimated parameters by building confidence intervals and conducting hypothesis tests. In the Bayesian approach we carry out a comparative study of two methodologies. In the first, we consider non-informative prior distributions and use the Metropolis-Hastings algorithm to fit the model. In the second, we use auxiliary variables to obtain a known posterior distribution, allowing the use of the Gibbs sampler. The introduction of these auxiliary variables, however, can generate correlated draws, requiring the grouping of the unknown quantities into blocks to reduce autocorrelation. In the simulation study we use the AIC and BIC information criteria to select the most appropriate model, and we evaluate whether the coverage probabilities of the confidence intervals agree with what asymptotic theory predicts. In the Bayesian approach we find that the inclusion of auxiliary variables results in a more efficient algorithm according to the MSE, MAPE and SMAPE criteria. We also present applications to two real data sets: the daily variations of the Ibovespa index and of the closing exchange rate of the US dollar from 2013 to 2016, and an educational data set (INEP 2013), where the interest lies in studying the factors that influence student approval.
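The auxiliary-variable scheme described above is, for the probit link, the classic Albert and Chib (1993) construction: a latent normal variable per observation makes both full conditionals standard, so the Gibbs sampler applies directly. Below is a minimal sketch on simulated data with an assumed N(0, 100 I) prior on the coefficients; it is not the author's implementation.

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(0)

# Simulated probit data (assumed, for illustration)
n, beta_true = 400, np.array([-0.5, 1.2])
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = (X @ beta_true + rng.normal(size=n) > 0).astype(int)

# Albert-Chib Gibbs sampler: z_i | beta is truncated normal, beta | z is normal
B0_inv = np.eye(2) / 100.0            # N(0, 100 I) prior on beta
V = np.linalg.inv(B0_inv + X.T @ X)   # posterior covariance (unit-variance probit)
L = np.linalg.cholesky(V)

beta = np.zeros(2)
draws = []
for it in range(3000):
    mu = X @ beta
    # Latent draws: z_i ~ N(mu_i, 1) truncated to z_i > 0 iff y_i = 1
    lo = np.where(y == 1, -mu, -np.inf)
    hi = np.where(y == 1, np.inf, -mu)
    z = mu + truncnorm.rvs(lo, hi, size=n, random_state=rng)
    # Conjugate normal draw for beta given z
    m = V @ (X.T @ z)
    beta = m + L @ rng.normal(size=2)
    if it >= 1000:                     # discard burn-in
        draws.append(beta.copy())

print("posterior mean:", np.mean(draws, axis=0))  # close to beta_true
```

Because consecutive draws of beta and z are correlated, blocking the unknowns, as the abstract notes, is the standard remedy for slow mixing.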