21 |
Bayesian variable selection for linear mixed models when p is much larger than n with applications in genome-wide association studies
Williams, Jacob Robert Michael, 05 June 2023
Genome-wide association studies (GWAS) seek to identify single nucleotide polymorphisms (SNPs) causing phenotypic responses in individuals. Commonly, GWAS analyses are done by single marker association testing (SMA), which investigates the effect of one SNP at a time and selects a candidate set of SNPs using a strict multiple-testing correction. Because SNPs are not independent but strongly correlated, SMA methods lead to such high false discovery rates (FDR) that the results are difficult for wet-lab scientists to use. To address this, this dissertation proposes three novel Bayesian methods: BICOSS, BGWAS, and IEB. From a Bayesian modeling point of view, SNP search can be seen as a variable selection problem in linear mixed models (LMMs) where $p$ is much larger than $n$. To deal with the $p \gg n$ issue, our three proposed methods use novel Bayesian approaches based on two steps: a screening step and a model selection step. To control false discoveries, we link the screening and model selection steps through a common probability of a null SNP. For model selection, we propose novel priors that extend nonlocal priors, Zellner's g-prior, the unit information prior, and the Zellner-Siow prior to LMMs. For each method, extensive simulation studies and case studies show that these methods improve the recall of true causal SNPs and, more importantly, drastically decrease the FDR. Because our Bayesian methods provide more focused and precise results, they may speed up the discovery of important SNPs and significantly contribute to scientific progress in the areas of biology, agricultural productivity, and human health. / Doctor of Philosophy / Genome-wide association studies (GWAS) seek to identify locations in DNA known as single nucleotide polymorphisms (SNPs) that are the underlying cause of observable traits such as height or breast cancer. Commonly, GWAS analyses are performed by investigating each SNP individually and seeing which SNPs are highly correlated with the response. However, as the SNPs themselves are highly correlated, investigating each one individually leads to a high number of false positives. To address this, this dissertation proposes three advanced statistical methods: BICOSS, BGWAS, and IEB. Through extensive simulations, our methods are shown to not only drastically reduce the number of falsely detected SNPs but also increase the detection rate of true causal SNPs. Because our novel methods provide more focused and precise results, they may speed up the discovery of important SNPs and significantly contribute to scientific progress in the areas of biology, agricultural productivity, and human health.
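The two-step recipe is easiest to see in the ordinary linear model. Below is a minimal sketch, assuming Python with NumPy, of a screening step that scores each SNP by its Bayes factor against the null model under Zellner's g-prior, using the closed form for linear regression from Liang et al. (2008); the dissertation's priors are LMM extensions of this and related priors, and the function names and simulated data here are illustrative only.

```python
import numpy as np

def g_prior_log_bayes_factor(y, X, g=None):
    """Log Bayes factor of a linear model (with intercept) against the
    intercept-only null, under Zellner's g-prior (Liang et al., 2008):
    BF = (1 + g)^((n - 1 - p) / 2) / (1 + g * (1 - R^2))^((n - 1) / 2).
    """
    n, p = X.shape
    if g is None:
        g = n  # g = n corresponds to the unit information prior
    yc = y - y.mean()
    Xc = X - X.mean(axis=0)
    beta, *_ = np.linalg.lstsq(Xc, yc, rcond=None)
    rss = np.sum((yc - Xc @ beta) ** 2)
    r2 = 1.0 - rss / np.sum(yc ** 2)
    return 0.5 * (n - 1 - p) * np.log1p(g) - 0.5 * (n - 1) * np.log1p(g * (1 - r2))

# Toy screening: rank SNPs by their single-marker Bayes factor.
rng = np.random.default_rng(1)
n, p = 200, 1000                      # p >> n, as in GWAS
snps = rng.binomial(2, 0.3, size=(n, p)).astype(float)
y = 0.8 * snps[:, 7] - 0.6 * snps[:, 42] + rng.normal(size=n)
bf = np.array([g_prior_log_bayes_factor(y, snps[:, [j]]) for j in range(p)])
print("top candidate SNPs:", np.argsort(bf)[::-1][:5])
```

The surviving candidates would then enter a model selection step over subsets, linked to screening through a shared null-SNP probability as the abstract describes.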
|
22 |
Estimation of Variance Components in Finite Polygenic Models and Complex Pedigrees
Lahti, Katharine Gage, 22 June 1998
Various models of the genetic architecture of quantitative traits have been considered to provide the basis for increased genetic progress. The finite polygenic model (FPM), which contains a finite number of unlinked polygenic loci, is proposed as an improvement over the infinitesimal model (IM) for estimating both additive and dominance variance for a wide range of genetic models. Analysis under an additive five-locus FPM by either a deterministic maximum likelihood (DML) or a Markov chain Monte Carlo (MCMC) Bayesian method (BGS) produced accurate estimates of narrow-sense heritability (0.48 to 0.50 with a true value of $h^2 = 0.50$) for phenotypic data from a five-generation, 6300-member pedigree simulated without selection under an IM, FPMs containing five or forty loci with equal homozygote difference, or an FPM with eighteen loci of diminishing homozygote difference. However, reducing the analysis to a three- or four-locus FPM resulted in some biased estimates of heritability (0.53 to 0.55 across all genetic models for the three-locus BGS analysis, and 0.47 to 0.48 for the 40-locus FPM and the infinitesimal model for both the three- and four-locus DML analyses). The practice of cutting marriage and inbreeding loops utilized by the DML method, as expected, produced overestimates of additive genetic variance (55.4 to 66.6 with a true value of $\sigma^2_a = 50.0$ across all four genetic models) for the same pedigree structure under selection, while the BGS method was mostly unaffected by selection, except for slight overestimates of additive variance (55.0 and 58.8) when analyzing the 40-locus FPM and the infinitesimal model, the two models with the largest numbers of loci. Changes to the BGS method to accommodate estimation of dominance variance by sampling genotypes at individual loci are explored. Analyzing the additive data sets with the BGS method, assuming a five-locus FPM including both additive and dominance effects, resulted in accurate estimates of additive genetic variance (50.8 to 52.2 for a true $\sigma^2_a = 50.0$) and no significant dominance variance (3.7 to 3.9) being detected where none existed. The FPM has the potential to produce accurate estimates of dominance variance for large, complex pedigrees containing inbreeding, whereas the IM suffers severe limitations under inbreeding. Inclusion of dominance effects in the genetic evaluations of livestock, with the potential increase in accuracy of additive breeding values and the added ability to exploit specific combining abilities, is the ultimate goal. / Master of Science
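The MCMC side of such an analysis can be illustrated on a far simpler design than the simulated pedigrees above. The sketch below, a minimal Gibbs sampler assuming Python with NumPy, estimates sire and residual variance components in a balanced half-sib design with conjugate updates and reports the implied heritability; the actual BGS method samples genotypes through a full pedigree and is much more involved.

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulate a balanced half-sib design: q sires, m progeny each.
q, m = 100, 20
h2_true, sigma_p2 = 0.5, 100.0
sigma_s2 = h2_true / 4 * sigma_p2          # sire variance = sigma_a^2 / 4
sigma_e2 = sigma_p2 - sigma_s2
sire = rng.normal(0.0, np.sqrt(sigma_s2), q)
y = sire[:, None] + rng.normal(0.0, np.sqrt(sigma_e2), (q, m))

# Gibbs sampler with vague inverse-gamma priors on both variances.
a0, b0 = 0.001, 0.001
mu, s, vs, ve = y.mean(), np.zeros(q), 1.0, 1.0
draws = []
for it in range(5000):
    # sire effects: conjugate normal update
    prec = m / ve + 1.0 / vs
    mean = ((y - mu).sum(axis=1) / ve) / prec
    s = rng.normal(mean, np.sqrt(1.0 / prec))
    # overall mean, given the sire effects
    mu = rng.normal((y - s[:, None]).mean(), np.sqrt(ve / y.size))
    # variances: conjugate inverse-gamma updates
    vs = 1.0 / rng.gamma(a0 + q / 2, 1.0 / (b0 + 0.5 * np.sum(s ** 2)))
    resid = y - mu - s[:, None]
    ve = 1.0 / rng.gamma(a0 + y.size / 2, 1.0 / (b0 + 0.5 * np.sum(resid ** 2)))
    if it >= 1000:
        draws.append(4 * vs / (vs + ve))   # half-sib heritability: 4 * vs / vp
print("posterior mean h2:", np.mean(draws))  # should be close to 0.5
```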
|
23 |
On the Use of Grouped Covariate Regression in Oversaturated Models
Loftus, Stephen Christopher, 11 December 2015
As data collection techniques improve, the number of covariates often exceeds the number of observations. When this happens, regression models become oversaturated and thus inestimable. Many classical and Bayesian techniques have been designed to address this difficulty, each combating the oversaturation in a different way. However, these techniques can be tricky to implement well, difficult to interpret, and unstable.
What is proposed is a technique that takes advantage of the natural clustering of variables often found in biological and ecological datasets known as omics datasets. Generally speaking, omics datasets attempt to characterize host species structure or function through a group of biological molecules, such as genes (genomics), proteins (proteomics), and metabolites (metabolomics). By clustering the covariates and regressing on a single value for each cluster, the model becomes both estimable and stable. In addition, the technique can account for the variability within each cluster, allow for the inclusion of expert judgment, and provide a probability of inclusion for each cluster. / Ph. D.
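The core idea, making an oversaturated problem estimable by regressing on one representative value per covariate cluster, can be sketched without any of the dissertation's Bayesian machinery. A minimal version assuming Python with NumPy, SciPy, and scikit-learn follows; the correlation-based clustering rule, the cluster count, and the use of within-cluster means are illustrative choices, not the author's method.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Oversaturated design: n = 40 observations, p = 500 correlated covariates.
n, p, n_groups = 40, 500, 10
centers = rng.normal(size=(n, n_groups))
group_of = rng.integers(0, n_groups, size=p)
X = centers[:, group_of] + 0.3 * rng.normal(size=(n, p))
y = 2.0 * centers[:, 0] - 1.5 * centers[:, 3] + rng.normal(size=n)

# Cluster covariates by correlation distance, then average within clusters.
dist = np.clip(1.0 - np.abs(np.corrcoef(X.T)), 0.0, None)
Z = linkage(dist[np.triu_indices(p, k=1)], method="average")
labels = fcluster(Z, t=n_groups, criterion="maxclust")
X_grouped = np.column_stack(
    [X[:, labels == k].mean(axis=1) for k in np.unique(labels)]
)

# 10 grouped covariates for 40 observations: the model is now estimable.
fit = LinearRegression().fit(X_grouped, y)
print("grouped R^2:", fit.score(X_grouped, y))
```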
|
24 |
Efficient Bayesian methods for mixture models with genetic applications
Zuanetti, Daiane Aparecida, 14 December 2016
We propose Bayesian methods for selecting and estimating different types of mixture models that are widely used in genetics and molecular biology. Specifically, we propose data-driven selection and estimation methods for a generalized mixture model, which accommodates the usual (independent) and the first-order (dependent) models in one framework, and for QTL (quantitative trait locus) mapping models for independent and pedigree data. For clustering genes through a mixture model, we propose three nonparametric Bayesian methods: a marginal nested Dirichlet process (NDP), which is able to cluster distributions, and a predictive recursion clustering scheme (PRC) and a subset nonparametric Bayesian (SNOB) clustering algorithm for clustering big data. We analyze and compare the performance of the proposed methods and traditional procedures of selection, estimation, and clustering in simulated and real data sets. The proposed methods are more flexible, improve the convergence of the algorithms, and provide more accurate estimates in many situations. In addition, we propose methods for predicting nonobservable QTL genotypes and missing parents, and we improve the Mendelian probability of inheritance of nonfounder genotypes using conditional independence structures. We also suggest applying diagnostic measures to check the goodness of fit of QTL mapping models.
Molecular. Especificamente, propomos métodos direcionados pelos dados para
selecionar e estimar um modelo de mistura generalizado, que descreve o modelo
de mistura usual (independente) e o de primeira ordem numa mesma estrutura,
e modelos de mapeamento de QTL com dados independentes e familiares. Para agrupar genes através de modelos de mistura, nós propomos três métodos Bayesianos
não-paramétricos: o processo de Dirichlet aninhado que possibilita agrupamento
de distribuições e, um algoritmo preditivo recursivo e outro Bayesiano nãoparamétrico exato para agrupar dados de alta dimensão. Analisamos e comparamos o desempenho dos métodos propostos e dos procedimentos tradicionais de seleção e estimação de modelos e agrupamento de dados em conjuntos de dados simulados
e reais. Os métodos propostos são mais
extáveis, aprimoram a convergência dos
algoritmos e apresentam estimativas mais precisas em muitas situações. Além disso,
nós propomos procedimentos para predizer o genótipo não observável dos QTLs e
de pais faltantes e melhorar a probabilidade Mendeliana de herança genética do
genótipo dos descendentes através da estrutura de independência condicional entre
os indivíduos. Também sugerimos aplicar medidas de diagnóstico para verificar a
qualidade do ajuste dos modelos de mapeamento de QTLs.
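Of the three clustering tools named above, predictive recursion is the simplest to illustrate: it estimates a mixing distribution in a single pass over the data. Here is a sketch, assuming Python with NumPy and SciPy, of Newton's predictive recursion for a normal location mixture on a fixed grid; the weight sequence, kernel, and grid are illustrative, and the thesis's PRC scheme builds clustering on top of this recursion.

```python
import numpy as np
from scipy.stats import norm

def predictive_recursion(x, grid, w0=1.0):
    """One-pass estimate of a mixing density f(theta) on a grid, for data
    assumed drawn from the mixture  integral of N(x | theta, 1) f(theta) dtheta.
    Update: f_i = (1 - w_i) f_{i-1} + w_i * k(x_i | theta) f_{i-1} / m_i.
    """
    f = np.ones_like(grid) / (grid[-1] - grid[0])   # uniform initial guess
    for i, xi in enumerate(x, start=1):
        w = w0 / (i + 1.0)                          # decaying weight sequence
        k = norm.pdf(xi, loc=grid, scale=1.0)       # kernel at each grid point
        m = np.trapz(k * f, grid)                   # predictive density of x_i
        f = (1.0 - w) * f + w * k * f / m
    return f

rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 200)])
rng.shuffle(x)                                      # the recursion is order-dependent
grid = np.linspace(-8, 8, 400)
f = predictive_recursion(x, grid)
print("mass below 0:", np.trapz(f[grid < 0], grid[grid < 0]))  # roughly 0.6
```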
|
25 |
Time-varying frequency analysis of bat echolocation signals using Monte Carlo methods
Nagappa, Sharad, January 2010
Echolocation in bats is a subject that has received much attention over the last few decades. Bat echolocation calls have evolved over millions of years and can be regarded as well suited to the task of active target detection. In analysing the time-frequency structure of bat calls, it is hoped that some insight can be gained into their capabilities and limitations. Most analysis of calls is performed using non-parametric techniques such as the short-time Fourier transform. The resulting time-frequency distributions are often ambiguous, leading to further uncertainty in any subsequent analysis that depends on the time-frequency distribution. There is thus a need for a method that allows improved time-frequency characterisation of bat echolocation calls. The aim of this work is to develop a parametric approach to signal analysis, specifically taking into account the varied nature of bat echolocation calls in the signal model. A time-varying harmonic signal model with a polynomial chirp basis is used to track the instantaneous frequency components of the signal. The model is placed within a Bayesian context, and a particle filter is used for inference. Marginalisation of parameters is considered, leading to the development of a new marginalised particle filter (MPF), which is used to implement the algorithm. Efficient reversible jump moves are formulated for estimation of the unknown (and varying) number of frequency components and higher harmonics. The algorithm is applied to the analysis of synthetic signals, and its performance is compared with an existing algorithm in the literature which relies on the Rao-Blackwellised particle filter (RBPF) for online state estimation and a jump Markov system for estimation of the unknown number of harmonic components. A comparison of the relative complexity of the RBPF and the MPF is presented. Additionally, it is shown that the MPF-based algorithm performs no worse than the RBPF, and in some cases better, for the test signals considered. Comparisons are also presented for various reversible jump sampling schemes for estimation of the time-varying number of tones and harmonics. The algorithm is subsequently applied to the analysis of bat echolocation calls to establish the improvements obtained from the new algorithm. The calls considered are both amplitude and frequency modulated and are of varying durations. The calls are analysed using polynomial basis functions of different orders, and the performance of these basis functions is compared. Inharmonicity, the deviation of overtones from integer multiples of the fundamental frequency, is examined in echolocation calls from several bat species. The results conclude with an application of the algorithm to the analysis of calls from the feeding buzz, a sequence of extremely short calls emitted at high pulse repetition frequency, where it is shown that reasonable time-frequency characterisation can be achieved even for these calls.
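The basic mechanism, a particle filter tracking a slowly varying instantaneous frequency, can be shown in miniature. The sketch below, assuming Python with NumPy, runs a bootstrap particle filter on a single noisy linear chirp; the thesis's model adds a polynomial chirp basis, multiple harmonics, reversible jump moves, and marginalisation, none of which is attempted here.

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulate a noisy linear chirp sampled at fs.
fs, T = 1000.0, 1.0
t = np.arange(0, T, 1 / fs)
f_true = 50.0 + 40.0 * t                        # sweeps from 50 to 90 Hz
phase = 2 * np.pi * np.cumsum(f_true) / fs
y = np.cos(phase) + 0.5 * rng.normal(size=t.size)

# Bootstrap particle filter over the state (frequency, phase).
N = 2000
f = rng.uniform(20.0, 120.0, N)                 # frequency particles
ph = rng.uniform(0.0, 2 * np.pi, N)             # phase particles
sigma_f, sigma_y = 1.0, 0.5
f_est = np.empty(t.size)
for k, yk in enumerate(y):
    f += sigma_f * rng.normal(size=N)           # random-walk frequency model
    ph = (ph + 2 * np.pi * f / fs) % (2 * np.pi)
    w = np.exp(-0.5 * ((yk - np.cos(ph)) / sigma_y) ** 2)
    w /= w.sum()
    f_est[k] = np.dot(w, f)                     # posterior-mean frequency
    idx = rng.choice(N, size=N, p=w)            # multinomial resampling
    f, ph = f[idx], ph[idx]

print("frequency estimate at t = 0.5 s:", f_est[t.size // 2])  # near 70 Hz
```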
|
26 |
Vývoj trénovatelných strategií řízení pro dialogové systémy / Development of trainable policies for spoken dialogue systems
Le, Thanh Cong, January 2016
In human-human interaction, speech is the most natural and effective manner of communication. Spoken Dialogue Systems (SDS) have been trying to bring that high level of interaction to computer systems, so that with an SDS you could talk to machines rather than learn to use a mouse and keyboard to perform a task. However, because of inaccuracy in speech recognition and the inherent ambiguity of spoken language, the dialogue state (the user's desire) can never be known with certainty, and therefore building such an SDS is not trivial. Statistical approaches have been proposed to deal with these uncertainties by maintaining a probability distribution over every possible dialogue state. Based on these distributions, the system learns how to interact with users so as to achieve the final goal in the most effective manner. In Reinforcement Learning (RL), the learning process is understood as optimizing a policy for choosing an action conditioned on the current belief state. Since the space of dialogue...
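The RL formulation sketched in the abstract can be made concrete with a toy slot-filling task: discretize the belief in the user's goal, and let tabular Q-learning trade off the cost of asking again against the risk of submitting the wrong value. The code below is a hypothetical illustration in Python with NumPy, not the thesis's system; the domain, rewards, and observation noise are invented.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy slot-filling task: the user's goal is value A or B. The system's
# belief P(goal = A) is discretized into bins; it can ask again or submit.
N_BINS = 11
ACTIONS = ["ask", "submit_A", "submit_B"]
Q = np.zeros((N_BINS, len(ACTIONS)))
alpha, gamma, eps = 0.1, 0.95, 0.1

def observe(goal_is_A, belief):
    """Simulated noisy ASR evidence, followed by a Bayes belief update."""
    heard_A = rng.uniform() < (0.8 if goal_is_A else 0.2)
    like_A, like_B = (0.8, 0.2) if heard_A else (0.2, 0.8)
    return like_A * belief / (like_A * belief + like_B * (1 - belief))

for episode in range(20000):
    goal_is_A = rng.uniform() < 0.5
    belief, done = 0.5, False
    while not done:
        s = int(belief * (N_BINS - 1))
        a = rng.integers(len(ACTIONS)) if rng.uniform() < eps else int(np.argmax(Q[s]))
        if ACTIONS[a] == "ask":
            belief, r = observe(goal_is_A, belief), -1   # each extra turn has a cost
        else:
            correct = (ACTIONS[a] == "submit_A") == goal_is_A
            r, done = (20 if correct else -20), True
        s2 = int(belief * (N_BINS - 1))
        Q[s, a] += alpha * (r + (0 if done else gamma * Q[s2].max()) - Q[s, a])

# Learned policy: submit at confident beliefs, ask in the uncertain middle.
print([ACTIONS[int(a)] for a in Q.argmax(axis=1)])
```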
|
27 |
Implementace aproximativních Bayesovských metod pro odhad stavu v dialogových systémech / Approximative Bayes methods for belief monitoring in spoken dialogue systems
Marek, David, January 2013
The most important component of virtually any dialogue system is the dialogue manager. The aim of the dialogue manager is to propose an action (a continuation of the dialogue) given the last dialogue state. The dialogue state summarises all past user and system input, and ideally it includes all the information necessary for the dialogue to progress naturally. For the dialogue manager to work efficiently, it is important to model the probability distribution over all dialogue states as precisely as possible. The set of dialogue states can be very large, so approximate methods must usually be used. In this thesis we will discuss an implementation of approximate Bayes methods for belief state monitoring. The result is a library for dialogue state monitoring in real dialogue systems.
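For a small, enumerable state set, exact belief monitoring is just Bayes' rule applied after each user turn; approximate methods become necessary once the state space is too large for this. A minimal exact version, assuming Python with NumPy, with an invented three-state domain and observation likelihoods:

```python
import numpy as np

# Toy dialogue domain: the user wants one of three destinations.
states = ["paris", "london", "berlin"]
belief = np.full(3, 1.0 / 3.0)            # uniform prior over dialogue states

def update_belief(belief, likelihood):
    """One turn of exact belief monitoring: Bayes' rule with a
    P(ASR observation | state) likelihood vector."""
    posterior = belief * likelihood
    return posterior / posterior.sum()

# Turn 1: the recogniser hears something like "paris", but noisily.
belief = update_belief(belief, np.array([0.7, 0.2, 0.1]))
# Turn 2: a confirmation ("yes" after the system asks about Paris).
belief = update_belief(belief, np.array([0.9, 0.05, 0.05]))

for s, b in zip(states, belief):
    print(f"P(goal = {s}) = {b:.3f}")
# The dialogue manager would now choose an action based on this belief.
```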
|
28 |
Avaliação de métodos estatísticos aplicados ao estudo de testes diagnósticos na presença do viés de verificação / Evaluation of statistical methods applied to diagnostic tests in the presence of verification bias
Aragon, Davi Casale, 31 August 2007
The study of statistical methods for the evaluation of diagnostic tests has increased in the last decades. Since the beginning, when Yerushalmy (1947) published his work on the trustworthiness of the roentgenogram in the identification of tuberculosis, new methodologies have appeared to make it possible to obtain values of sensitivity and specificity of diagnostic tests. Sensitivity is defined as the probability that the test under investigation gives a positive result, given that the individual really carries the disease. Specificity, on the other hand, is defined as the probability that the test gives a negative result, given that the individual is free of the disease. In practice, situations commonly occur in which a proportion of the selected individuals cannot have their true disease status verified, because the verification procedure is invasive, as in the diagnosis of lung cancer, or in any other case where risks are involved, making verification impracticable or unethical, or because the procedure has a high cost. Thus, instead of solving the problem, many studies evaluating the performance of diagnostic tests are carried out using only the information from verified individuals. This procedure can lead to biased results. This is known as verification bias: sensitivity and specificity of diagnostic tests are estimated with only the individuals verified by the gold standard included in the analysis, while the unverified ones are discarded or considered free of the disease. This work presents a review of the methodologies already proposed to calculate sensitivity and specificity in the presence of verification bias, as well as a detailed analysis of the influence of the proportion of unverified individuals, the effect of the sample size, and the influence of the choice of prior distributions, when using Bayesian methodology, on the calculation of these estimates. A Bayesian methodology is also introduced for estimating the performance measures of two diagnostic tests in the presence of verification bias.
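One classical correction for verification bias can be sketched directly: if verification depends only on the test result, sensitivity and specificity can be recovered from the verified subset by reweighting with Bayes' theorem (Begg and Greenes, 1983). The sketch below assumes Python; the study counts are invented, and this is the classical correction rather than the Bayesian methodology the thesis introduces.

```python
def begg_greenes(n_pos, n_neg, v_pos, v_neg, d_pos, d_neg):
    """Verification-bias-corrected sensitivity/specificity, assuming
    verification depends only on the test result (Begg & Greenes, 1983).
    n_*: all tested; v_*: verified by the gold standard; d_*: diseased among verified.
    """
    p_t_pos = n_pos / (n_pos + n_neg)          # P(T+)
    p_t_neg = 1.0 - p_t_pos
    p_d_tpos = d_pos / v_pos                   # P(D+ | T+), from the verified subset
    p_d_tneg = d_neg / v_neg                   # P(D+ | T-)
    se = p_d_tpos * p_t_pos / (p_d_tpos * p_t_pos + p_d_tneg * p_t_neg)
    sp = ((1 - p_d_tneg) * p_t_neg
          / ((1 - p_d_tpos) * p_t_pos + (1 - p_d_tneg) * p_t_neg))
    return se, sp

# Hypothetical screening study: positives are verified far more often.
se, sp = begg_greenes(n_pos=300, n_neg=700, v_pos=270, v_neg=70,
                      d_pos=180, d_neg=7)
naive_se = 180 / (180 + 7)                     # using verified subjects only
print(f"corrected Se = {se:.3f}, Sp = {sp:.3f}; naive Se = {naive_se:.3f}")
```

Here the naive estimate (about 0.96) badly overstates the corrected sensitivity (about 0.74), showing why discarding unverified subjects is dangerous.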
|
29 |
Padrões espaço-temporais da incidência da tuberculose em Ribeirão Preto, SP: uso de um modelo bayesiano auto-regressivo condicional / Spatio-temporal patterns of tuberculosis incidence in Ribeirão Preto: using a conditional autoregressive model
Daiane Leite da Roza, 24 August 2011
In this study, we used Bayesian spatio-temporal regression models to estimate the incidence of TB in Ribeirão Preto, SP (2006 to 2009) by health unit coverage area, associating it with covariates of interest (the IPVS social vulnerability index, income, and education predominant in those areas). The method is based on MCMC simulations to estimate the posterior distributions of TB incidence in Ribeirão Preto. As a result, we have maps that show a spatial pattern more clearly, with smoother estimates and fewer random fluctuations. We observed that the areas with the highest incidence rates also have medium and high social vulnerability indices. Concerning income, the prevailing salary range of household heads in these regions is between 0 and 3 minimum wages, and their prevailing level of education is elementary school. The results of the Bayesian models show that the incidence of TB in Ribeirão Preto increases significantly with increasing social vulnerability: in areas where vulnerability is high, the incidence of TB reaches nearly 15 times that of areas without vulnerability. There was a significant increase in the incidence of tuberculosis in Ribeirão Preto during the years studied, with the highest incidence recorded in 2009. The use of maps improved the visualization of areas that deserve special attention for TB control. In addition, the association of the disease with income, education, and social vulnerability gives the managers responsible for municipal planning grounds to plan interventions with special attention to these areas, uniting efforts to reduce poverty and social inequality, seeking alternatives for a better income distribution, and improving access to basic sanitation, among other priorities.
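The conditional autoregressive prior is defined by exactly what its name says: each area's effect, given its neighbours, is normal around the neighbourhood mean. Below is a compact Metropolis-within-Gibbs sketch, assuming Python with NumPy, for a Poisson disease-mapping model with an intrinsic CAR prior on a toy grid map; the adjacency, priors, and tuning are invented, and the study's models additionally include covariates and a temporal dimension.

```python
import numpy as np

rng = np.random.default_rng(11)

# Toy map: a 6 x 6 grid of areas with rook adjacency.
side = 6
I = side * side
adj = [[] for _ in range(I)]
for r in range(side):
    for c in range(side):
        i = r * side + c
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if 0 <= r + dr < side and 0 <= c + dc < side:
                adj[i].append((r + dr) * side + (c + dc))

# Simulated counts: expected cases E_i and a smooth spatial log-risk.
E = rng.uniform(5, 50, I)
true_phi = 0.03 * (np.arange(I) % side) - 0.1
y = rng.poisson(E * np.exp(true_phi))

# Metropolis-within-Gibbs for an intrinsic CAR spatial random effect.
phi, tau = np.zeros(I), 1.0
a0, b0 = 1.0, 0.01
keep = np.zeros(I)
for it in range(4000):
    for i in range(I):
        m = len(adj[i])
        cond_mean = phi[adj[i]].mean()          # ICAR full-conditional mean
        prop = phi[i] + 0.2 * rng.normal()      # random-walk proposal
        def logpost(v):
            return (y[i] * v - E[i] * np.exp(v)
                    - 0.5 * tau * m * (v - cond_mean) ** 2)
        if np.log(rng.uniform()) < logpost(prop) - logpost(phi[i]):
            phi[i] = prop
    phi -= phi.mean()                           # identifiability constraint
    ss = sum((phi[i] - phi[j]) ** 2 for i in range(I) for j in adj[i] if j > i)
    tau = rng.gamma(a0 + (I - 1) / 2, 1.0 / (b0 + 0.5 * ss))
    if it >= 1000:
        keep += np.exp(phi)                     # accumulate relative risks

print("posterior mean relative risks:", np.round(keep / 3000, 2)[:6])
```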
|
30 |
Modelos de séries temporais de dados de contagem baseados na distribuição Poisson Dupla / Count data time series models based on the Double Poisson distribution
Aragon, Davi Casale, 30 November 2016
Time series data arise from studies that report, for example, mortality rates, numbers of hospitalizations, or infections by some disease or other event of interest per day, week, month, or year, in order to observe trends, seasonality, or associated factors. Count data are represented by discrete quantitative variables, i.e., observations that take integer values in the range {0, 1, 2, 3, ...}, for example, the number of children of couples living in a neighborhood. In view of this particular characteristic, such data must be analyzed with adequate statistical tools, and models based on the Poisson distribution are more suitable options than models based on the methods proposed by Box and Jenkins (2008), which are intended for continuous data but often employed for discrete data after logarithmic transformation. A limitation of the Poisson distribution is that it assumes the mean and variance are equal, which is an obstacle in cases of overdispersion (variance greater than the mean) or underdispersion (variance smaller than the mean). The Double Poisson distribution, proposed by Efron (1986), is therefore an alternative, because it allows the mean and variance parameters to be estimated in cases where the variance of the data is smaller than, equal to, or greater than the mean, giving the models great flexibility. The main objective of this work was the development of Bayesian time series models for count data, using probability distributions for discrete variables such as the Poisson and Double Poisson. Furthermore, a model based on the Double Poisson distribution was introduced for count data with an excess of zeros. The results obtained by fitting the time series models based on the Double Poisson distribution were compared with those obtained using the Poisson distribution. As the main applications, results were presented from models fitted to records of snake bites in the State of São Paulo and scorpion stings in the city of Ribeirão Preto, SP, between 2007 and 2014. For the latter application, covariates referring to climate data were considered, such as monthly average maximum and minimum temperatures and rainfall. In situations where the variance differed from the mean, models based on the Double Poisson distribution showed a better fit to the data than the Poisson models.
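The key property is visible in Efron's (1986) density itself: a second parameter theta rescales the variance to roughly mu/theta while the mean stays near mu. The sketch below, assuming Python with NumPy and SciPy, evaluates the (unnormalized) log-density and fits (mu, theta) by maximum likelihood; it shows the plain distribution only, not the Bayesian time series models developed in the work.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def dpois_logpmf(y, mu, theta):
    """Efron's (1986) Double Poisson log-density (unnormalized; the
    normalizing constant is close to 1 and often dropped in practice).
    Mean is approximately mu, variance approximately mu / theta."""
    y = np.asarray(y, dtype=float)
    safe_y = np.where(y > 0, y, 1.0)                 # guard log(0); terms vanish at y = 0
    return (0.5 * np.log(theta) - theta * mu
            - y + y * np.log(safe_y) - gammaln(y + 1)
            + theta * y * (1 + np.log(mu) - np.log(safe_y)))

def fit_double_poisson(y):
    """Maximum likelihood for (mu, theta); theta < 1 indicates overdispersion."""
    nll = lambda p: -np.sum(dpois_logpmf(y, np.exp(p[0]), np.exp(p[1])))
    res = minimize(nll, x0=[np.log(np.mean(y) + 0.5), 0.0], method="Nelder-Mead")
    return np.exp(res.x)

rng = np.random.default_rng(2)
y = rng.negative_binomial(5, 0.5, size=500)          # overdispersed counts
mu_hat, theta_hat = fit_double_poisson(y)
print(f"mu ~ {mu_hat:.2f} (sample mean {y.mean():.2f}), theta ~ {theta_hat:.2f}; "
      f"mu/theta ~ {mu_hat / theta_hat:.2f} vs sample variance {y.var():.2f}")
```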
|