About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.

Synthesizing Phylogeography and Community Ecology to Understand Patterns of Community Diversity

Williams, Trevor J. 29 July 2021 (has links)
Community ecology is the study of the patterns and processes governing species abundance, distribution, and diversity within and between communities. Likewise, phylogeography is the study of the historical processes controlling genetic diversity across space. Both fields investigate diversity, albeit at different temporal, spatial, and taxonomic scales, and therefore make different assumptions. Community ecology typically focuses on contemporary mechanisms, whereas phylogeography studies historical ones. However, recent research has shown that both genetic and community diversity can be influenced by contemporary and historical processes in tandem. As such, a growing number of researchers have called for greater integration of phylogeography and ecology to better understand the mechanisms structuring diversity. In this dissertation I add to this integration by investigating ways that phylogeography and population genetics can enhance studies of community ecology. First, I review traditional studies of freshwater fish community assembly using null model analyses of species co-occurrence, which show that fish communities are largely structured by deterministic processes, though the importance of different mechanisms varies across climates, habitats, and spatial scales. Next, I show how phylogeographic data can enhance inferences of community assembly in freshwater fish communities in Costa Rica and in Utah. My Costa Rican analyses indicate that historical eustatic sea-level change can be a better predictor of community structure within a biogeographic province than contemporary processes. My Utah analyses show that historical dispersal between isolated basins, in conjunction with contemporary habitat filtering, dispersal limitation, and extinction dynamics, influences community assembly through time.
Finally, I adapt a forward-time stochastic population-genetic simulation model to a metacommunity context and integrate it with approximate Bayesian computation to infer the processes governing observed patterns of community composition. Overall, I show that community ecology can be greatly enhanced by incorporating information and methods from related fields, and I encourage future ecologists to extend this research to gain a greater understanding of biological diversity.
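Null model analyses of species co-occurrence of the kind reviewed in this abstract can be sketched minimally: compute a co-occurrence index (here the Stone–Roberts C-score) for an observed presence–absence matrix, then compare it with the index under randomizations that preserve row and column totals. This is an illustrative sketch only, not code from the dissertation; the function names, toy matrix, and swap count are assumptions.

```python
import random
from itertools import combinations

def c_score(matrix):
    """Stone-Roberts C-score: mean number of 'checkerboard units'
    over all pairs of species (rows) in a presence-absence matrix."""
    total, pairs = 0.0, 0
    for a, b in combinations(matrix, 2):
        shared = sum(x and y for x, y in zip(a, b))
        total += (sum(a) - shared) * (sum(b) - shared)
        pairs += 1
    return total / pairs

def swap_randomize(matrix, n_swaps, rng):
    """'Fixed-fixed' null model: repeatedly find a 2x2 checkerboard
    submatrix and swap its diagonals, preserving all row and column sums."""
    m = [row[:] for row in matrix]
    done = 0
    while done < n_swaps:
        i, j = rng.sample(range(len(m)), 2)
        a, b = rng.sample(range(len(m[0])), 2)
        if m[i][a] == m[j][b] != m[i][b] == m[j][a]:  # checkerboard test
            m[i][a], m[i][b] = m[i][b], m[i][a]
            m[j][a], m[j][b] = m[j][b], m[j][a]
            done += 1
    return m

# Toy observed matrix: 3 species (rows) x 4 sites (columns).
observed = [[1, 0, 1, 0],
            [0, 1, 0, 1],
            [1, 1, 0, 0]]
rng = random.Random(42)
null_scores = [c_score(swap_randomize(observed, 100, rng)) for _ in range(99)]
# One-tailed Monte Carlo p-value: an observed C-score exceeding most null
# scores would suggest segregated (deterministically structured) co-occurrence.
p_upper = (1 + sum(s >= c_score(observed) for s in null_scores)) / 100
```

The fixed-fixed randomization is one of several null models in common use; fixed-equiprobable and equiprobable-fixed variants relax one of the two margin constraints.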

Improving hydrological post-processing for assessing the conditional predictive uncertainty of monthly streamflows

Romero Cuellar, Jonathan 07 January 2020 (has links)
The quantification of predictive uncertainty in monthly streamflows is crucial to making reliable hydrological predictions that support decision-making in water resources management. Hydrological post-processing methods are suitable tools for estimating the predictive uncertainty of deterministic streamflow predictions (hydrological model outputs). This thesis focuses on improving hydrological post-processing methods for assessing the conditional predictive uncertainty of monthly streamflows, and it addresses two issues of the hydrological post-processing scheme: i) the heteroscedasticity problem and ii) the intractable-likelihood problem. The thesis has three specific aims. First, relating to the heteroscedasticity problem, we develop and evaluate a new post-processing approach, called the GMM post-processor, which combines the Bayesian joint probability modelling approach with Gaussian mixture models. We compare the performance of the proposed post-processor with well-known existing post-processors for monthly streamflows across the twelve MOPEX catchments. From this aim (chapter 2), we find that the GMM post-processor is the best suited for estimating the conditional predictive uncertainty of monthly streamflows, especially for dry catchments. Second, we introduce a method to quantify the conditional predictive uncertainty in hydrological post-processing contexts where the likelihood is cumbersome to calculate (intractable likelihood). Estimating the likelihood itself can be challenging in hydrological modelling, especially when working with complex models or in data-scarce settings such as ungauged catchments. We therefore propose the ABC post-processor, which replaces the calculation of the likelihood function with the use of summary statistics and synthetic datasets. With this aim in mind (chapter 3), we show that the conditional predictive distributions produced by the exact method (MCMC post-processor) and the approximate method (ABC post-processor) are qualitatively similar. This finding is significant because dealing with scarce information is a common condition in hydrological studies. Finally, we apply the ABC post-processing method to estimate the uncertainty of streamflow statistics obtained from climate change projections, as a particular case of the intractable-likelihood problem. From this specific objective (chapter 4), we find that the ABC post-processor: 1) offers more reliable projections than the 14 climate models (without post-processing); and 2) with respect to the best climate models during the baseline period, produces more realistic uncertainty bands for streamflow statistics than the classical multi-model ensemble approach. / I would like to thank the Gobernación del Huila Scholarship Program No. 677 (Colombia) for providing the financial support for my PhD research. / Romero Cuellar, J. (2019). Improving hydrological post-processing for assessing the conditional predictive uncertainty of monthly streamflows [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/133999
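The conditioning step behind a Bayesian joint probability post-processor of the kind described in this abstract can be sketched as follows. This is an illustrative sketch, not the author's implementation: the mixture parameters below are hypothetical stand-ins for values that would be fitted (e.g. by EM) to historical pairs of simulated and observed streamflow, and it only shows how a conditional predictive distribution falls out of a bivariate Gaussian mixture.

```python
import math

# Hypothetical fitted bivariate mixture over (s, q) = (simulated, observed)
# monthly streamflow. The numbers are illustrative only.
# Each component: (weight, mu_s, mu_q, S_ss, S_sq, S_qs, S_qq).
components = [
    (0.6, 10.0, 11.0, 4.0, 3.0, 3.0, 5.0),
    (0.4, 30.0, 28.0, 9.0, 6.0, 6.0, 8.0),
]

def normal_pdf(x, mu, var):
    return math.exp(-0.5 * (x - mu) ** 2 / var) / math.sqrt(2 * math.pi * var)

def conditional_mixture(s):
    """Conditional predictive distribution of the observation q given a
    deterministic model output s, as a list of (weight, mean, variance):
    standard Gaussian conditioning applied component-wise, with component
    weights reweighted by how well each component explains s."""
    raw = [w * normal_pdf(s, mu_s, S_ss)
           for (w, mu_s, _, S_ss, *_rest) in components]
    total = sum(raw)
    out = []
    for r, (w, mu_s, mu_q, S_ss, S_sq, S_qs, S_qq) in zip(raw, components):
        mean = mu_q + S_qs / S_ss * (s - mu_s)   # conditional mean
        var = S_qq - S_qs * S_sq / S_ss          # conditional variance
        out.append((r / total, mean, var))
    return out

predictive = conditional_mixture(12.0)  # mixture weights sum to 1
```

Because the conditional of a Gaussian mixture is again a Gaussian mixture, quantiles of the predictive distribution (and hence uncertainty bands) can be read off the returned components directly.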

Understanding the Diversification of Central American Freshwater Fishes Using Comparative Phylogeography and Species Delimitation

Bagley, Justin C 01 December 2014 (has links) (PDF)
Phylogeography and molecular phylogenetics have proven remarkably useful for understanding the patterns and processes influencing the historical diversification of biotic lineages at and below the species level, as well as for delimiting morphologically cryptic species. In this dissertation, I used an integrative approach coupling comparative phylogeography and coalescent-based species delimitation to improve our understanding of the biogeography and species limits of Central American freshwater fishes. In Chapter 1, I reviewed the contributions of phylogeography to understanding the origins and maintenance of lower Central American biodiversity, in light of the geological and ecological setting. I highlighted emerging phylogeographic patterns, along with the need to improve regional historical biogeographical inference and conservation efforts through statistical and comparative phylogeographic studies. In Chapter 2, I compared mitochondrial phylogeographic patterns among three species of livebearing fishes (Poeciliidae) codistributed in the lower Nicaraguan depression and proximate uplands. I found evidence for mixed spatial and temporal divergences, indicating phylogeographic “pseudocongruence”: multiple evolutionary responses to historical processes have shaped the population structuring of the regional freshwater biota, possibly linked to recent community assembly and/or to the effects of ecological differences among species on their responses to late Cenozoic environmental events. In Chapter 3, I used coalescent-based species tree and species delimitation analyses of a multilocus dataset to delimit species and infer their evolutionary relationships in the Poecilia sphenops species complex (Poeciliidae), a widespread but morphologically conserved group of fishes. Results indicated that diversity is underestimated in some clades and overestimated in others by c. ±15% (including candidate species); that lineages have diversified since the Miocene; and that hybridization, rather than incomplete lineage sorting, more probably shaped the observed gene tree discordances. Last, in Chapter 4, I used a comparative phylogeographic analysis of eight codistributed species/genera of freshwater fishes to test for shared evolutionary responses predicted by four drainage-based hypotheses of Neotropical fish diversification. Integrating phylogeographic analyses with paleodistribution modeling revealed incongruent genetic structuring among lineages despite overlapping ancestral Pleistocene distributions, suggesting multiple routes to community assembly. Hypothesis tests using recent approximate Bayesian computation model-averaging methods supported a single pulse of diversification for two lineages that diverged across the San Carlos River, but multiple divergences of three lineages across the Sixaola River basin, Costa Rica, correlated with Neogene sea-level events and continental shelf width. These results support complex biogeographical patterns illustrating how species' responses to historical drainage-controlling processes have influenced Neotropical fish diversification.

Computer Model Emulation and Calibration using Deep Learning

Bhatnagar, Saumya January 2022 (has links)
No description available.

Applying mathematical and statistical methods to the investigation of complex biological questions

Scarpino, Samuel Vincent 18 September 2014 (has links)
The research presented in this dissertation integrates data and theory to examine three important topics in biology. In the first chapter, I investigate genetic variation at two loci involved in a genetic incompatibility in the genus Xiphophorus. In this genus, hybrids develop a fatal melanoma due to the interaction of an oncogene and its repressor. Using the genetic variation data from each locus, I fit evolutionary models to test for coevolution between the oncogene and the repressor. The results of this study suggest that the evolutionary trajectory of a microsatellite element in the proximal promoter of the repressor locus is affected by the presence of the oncogene. This study significantly advances our understanding of how loci involved in both a genetic incompatibility and a genetically determined cancer evolve. Chapter two addresses the role polyploidy, or whole genome duplication, has played in generating flowering plant diversity. The question of whether polyploidy events facilitate diversification has received considerable attention among plant and evolutionary biologists. To address this question, I estimated the speciation and genome duplication rates for 60 genera of flowering plants. The results suggest that diploids, as opposed to polyploids, generate more species diversity. This study represents the broadest comparative analysis to date of the effect of polyploidy on flowering plant diversity. In the final chapter, I develop a computational method for designing disease surveillance networks. The method is a data-driven, geographic optimization of surveillance sites. Networks constructed using this method are predicted to significantly outperform existing networks, in terms of information quality, efficiency, and robustness. This work involved the coordinated efforts of researchers in biology, epidemiology, and operations research with public health decision makers. 
Together, the results of this dissertation demonstrate the utility of applying quantitative theory and statistical methods to data in order to address complex biological processes.

Data-Adaptive Multivariate Density Estimation Using Regular Pavings, With Applications to Simulation-Intensive Inference

Harlow, Jennifer January 2013 (has links)
A regular paving (RP) is a finite succession of bisections that partitions a multidimensional box into sub-boxes using a binary tree-based data structure, with the restriction that an existing sub-box in the partition may only be bisected on its first widest side. Mapping a real value to each element of the partition gives a real-mapped regular paving (RMRP) that can be used to represent a piecewise-constant density estimate on a multidimensional domain. The RP structure allows real arithmetic to be extended to density estimates represented as RMRPs. Other operations, such as computing marginal and conditional functions, can also be carried out very efficiently by exploiting these arithmetical properties and the binary tree structure. The purpose of this thesis is to explore the potential for density estimation using RPs. The thesis is structured in three parts. The first part formalises the operational properties of RP-structured density estimates. The next part considers methods for creating a suitable RP partition for an RMRP-structured density estimate. The advantages and disadvantages of a previously developed Markov chain Monte Carlo algorithm are investigated, and the algorithm is extended with a semi-automatic method for heuristic diagnosis of chain convergence. An alternative method is also proposed that uses an RMRP to approximate a kernel density estimate. RMRP density estimates are not differentiable and have slower convergence rates than good multivariate kernel density estimators; their advantages relate to their operational properties. The final part of this thesis describes a new approach to Bayesian inference for complex models with intractable likelihood functions that exploits these operational properties.
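The core RP idea described above — a binary tree whose leaves tile a box, each leaf splittable only at the midpoint of its first widest side — can be sketched in a few dozen lines. This is a minimal illustration under stated assumptions, not the thesis's data structure: the class and function names and the simple "split when a leaf is too full" rule are invented for the sketch.

```python
class RegularPaving:
    """Minimal sketch of a regular paving: a binary tree over a box in R^d in
    which a leaf may only be bisected at the midpoint of its first widest
    side. Holding the sample points that fall in each leaf yields a
    histogram-style piecewise-constant density estimate."""

    def __init__(self, box):
        self.box = box                 # list of (low, high) pairs, one per dimension
        self.points = []               # data points held by this leaf
        self.left = self.right = None

    def contains(self, p):
        return all(lo <= x <= hi for x, (lo, hi) in zip(p, self.box))

    def bisect(self):
        """Split on the first widest side and hand the points to the children."""
        widths = [hi - lo for lo, hi in self.box]
        d = widths.index(max(widths))
        lo, hi = self.box[d]
        mid = (lo + hi) / 2.0
        lbox, rbox = list(self.box), list(self.box)
        lbox[d], rbox[d] = (lo, mid), (mid, hi)
        self.left, self.right = RegularPaving(lbox), RegularPaving(rbox)
        for p in self.points:
            (self.left if self.left.contains(p) else self.right).points.append(p)
        self.points = []

    def insert(self, p, max_points=5):
        """Drop a point into its leaf; split leaves that grow too full
        (a crude data-adaptive partitioning rule, for illustration only)."""
        node = self
        while node.left is not None:
            node = node.left if node.left.contains(p) else node.right
        node.points.append(p)
        if len(node.points) > max_points:
            node.bisect()

def density(root, p, n):
    """Piecewise-constant estimate at p: leaf count / (n * leaf volume)."""
    if not root.contains(p):
        return 0.0
    node = root
    while node.left is not None:
        node = node.left if node.left.contains(p) else node.right
    vol = 1.0
    for lo, hi in node.box:
        vol *= hi - lo
    return len(node.points) / (n * vol)
```

By construction the estimate integrates to one over the root box (each leaf contributes count/n), which hints at why arithmetic and marginalisation on the tree are cheap.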

Modélisations de la dispersion du pollen et estimation à partir de marqueurs génétiques. / Modellings of pollen dispersal and estimation from genetic markers

Carpentier, Florence 29 June 2010 (has links)
Pollen dispersal is a major component of gene flow in plants, contributing to genetic diversity and its spatial structure. Studying it at the scale of a single reproduction event makes it possible to understand the impact of current changes (fragmentation, anthropization, ...) and to propose conservation practices. Two types of methods based on microsatellite markers estimate pollen dispersal functions: (i) direct methods (e.g. the mating model), based on paternity assignment, require exhaustive sampling (position and genotype of the individuals in the study plot, genotypes of seeds harvested on mothers); (ii) indirect methods (e.g. TwoGener) require sparser sampling (seed genotypes, genotypes and positions of their mothers) and summarize the data through genetic indices. We propose a statistical formalization of both types of methods and show that they rely on different dispersal functions: direct methods estimate a potential forward function (pollen movement from the father), whereas indirect methods estimate an integrative backward one (from fertilization back to the existence of the father). We make explicit the link between forward and backward functions, the assumptions leading to their equivalence, and the constraints affecting backward functions. Finally, we develop an approximate Bayesian computation method that enables (i) forward estimation, (ii) with credibility intervals, (iii) from a non-exhaustive dataset and partial information (e.g. positions without genotypes), and (iv) the use of different dispersal models.

Estimação do índice de memória em processos estocásticos com memória longa: uma abordagem via ABC / Estimation of the memory index of stochastic processes with long memory: an ABC approach

Andrade, Plinio Lucas Dias 28 March 2016 (has links)
In this work we propose the use of a Bayesian method for estimating the memory parameter of a stochastic process with long memory when its likelihood function is intractable or unavailable. The approach provides an approximation to the posterior distribution of the memory and other parameters, and it is based on a simple application of approximate Bayesian computation (ABC). Some popular existing estimators for the memory parameter are reviewed and compared to this method. Our proposal makes it feasible to solve complex problems from a Bayesian point of view and, although approximate, performs very satisfactorily when compared with classical methods.
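The ABC scheme invoked in this abstract can be illustrated with a self-contained toy. This is a sketch under stated assumptions, not the thesis's implementation: an AR(1) coefficient stands in for the memory parameter (a real application would simulate, e.g., an ARFIMA process), the lag-1 autocorrelation serves as the summary statistic, and plain rejection sampling approximates the posterior; the prior, tolerance, and sample sizes are arbitrary choices.

```python
import random
import statistics

def simulate_ar1(phi, n, rng):
    """Toy stand-in for a long-memory process: x_t = phi * x_{t-1} + e_t
    with standard normal innovations."""
    x, out = 0.0, []
    for _ in range(n):
        x = phi * x + rng.gauss(0.0, 1.0)
        out.append(x)
    return out

def lag1_autocorr(xs):
    """Summary statistic: sample lag-1 autocorrelation."""
    m = statistics.fmean(xs)
    num = sum((a - m) * (b - m) for a, b in zip(xs, xs[1:]))
    den = sum((a - m) ** 2 for a in xs)
    return num / den

def abc_rejection(observed, n_draws=2000, tol=0.05, n=500, seed=1):
    """Plain ABC rejection: draw phi from a Uniform(0, 1) prior, simulate a
    series, and keep draws whose summary is within tol of the observed one.
    The kept draws approximate the posterior of phi."""
    rng = random.Random(seed)
    s_obs = lag1_autocorr(observed)
    kept = []
    for _ in range(n_draws):
        phi = rng.random()
        if abs(lag1_autocorr(simulate_ar1(phi, n, rng)) - s_obs) < tol:
            kept.append(phi)
    return kept

obs = simulate_ar1(0.7, 500, random.Random(0))
posterior = abc_rejection(obs)  # posterior mean should sit near 0.7
```

Shrinking `tol` tightens the approximation at the cost of a lower acceptance rate, which is the basic trade-off the more refined ABC variants compared in such work try to manage.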

Méthodes d'inférence statistique pour champs de Gibbs / Statistical inference methods for Gibbs random fields

Stoehr, Julien 29 October 2015 (has links)
Due to the Markovian dependence structure, the normalizing constant of Markov random fields cannot be computed with standard analytical or numerical methods. This is a central issue for parameter inference and model selection, since computing the likelihood is an integral part of both procedures. When the Markov random field is directly observed, we propose to estimate the posterior distribution of the model parameters by replacing the likelihood with a composite likelihood, that is, a product of marginal or conditional distributions of the model that are easy to compute. Our first contribution is to correct the posterior distribution resulting from the use of a misspecified likelihood function by modifying the curvature at the mode, in order to avoid overly precise posterior parameters. In a second part, we perform model selection between hidden Markov random fields with approximate Bayesian computation (ABC) algorithms, which compare the observed data with many Monte Carlo simulations through summary statistics. To make up for the absence of sufficient statistics for this model choice, we introduce summary statistics based on the connected components of the dependency graph of each model in competition. We assess their efficiency using a novel conditional misclassification rate that evaluates their local power to discriminate between models. We set up an efficient procedure that reduces the computational cost while improving the quality of decisions, and we use this local error rate to build an ABC procedure that adapts the vector of summary statistics to the observed data. In a last part, to circumvent the computation of the intractable likelihood in the Bayesian information criterion (BIC), we extend mean-field approaches by replacing the likelihood with a product of distributions of random vectors, namely blocks of the lattice. On that basis, we derive BLIC (Block Likelihood Information Criterion), which answers model-choice questions of wider scope than ABC, such as the joint selection of the dependency structure and the number of latent states. We study the performance of BLIC for image segmentation.
