71 |
Análise Bayesiana de dois problemas em Astrofísica Relativística: neutrinos do colapso gravitacional e massas das estrelas de nêutrons / Bayesian analysis of two problems in Relativistic Astrophysics: neutrinos from gravitational collapse and mass distribution of neutron stars. Lima, Rodolfo Valentim da Costa, 19 April 2012 (has links)
The extraordinary event SN1987A has been investigated for more than twenty-five years. The fascination surrounding this astronomical event stems from the real-time observation of the explosion in the light of neutrino physics. Detectors around the world observed a burst of neutrinos that was confirmed days later to come from SN1987A. Kamiokande, IMB, and Baksan reported the detected events, which made it possible to study models for the explosion and for the cooling of the hypothetical remnant neutron star. To this day there is no consensus on the origin of the progenitor or the nature of the remaining compact object. The work is divided into two parts. The first is a Bayesian statistical analysis of the SN1987A neutrinos using a proposed two-temperature model that points to two neutrino bursts; the motivation is the hypothesis that the second burst results from the formation of strange matter in the compact object. The methodology follows Loredo & Lamb (2002), which allows models to be built and hypotheses about them to be tested via the Bayesian Information Criterion (BIC). In the second part, the same statistical methodology is applied to the mass distribution of neutron stars using the publicly available database (http://stellarcollapse.org), taking only each object's mass value and its standard deviation. A likelihood function is constructed and prior distributions are used to test the hypothesis of a bimodal mass distribution against a unimodal distribution over all object masses. The BIC test strongly favors bimodality, with values centered at 1.37 M☉ for low-mass objects and 1.73 M☉ for high-mass objects, and confirms the weak evidence for a third peak expected around 1.25 M☉.
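As a rough illustration of the model comparison described above, the following sketch fits one- and two-component Gaussian models to a set of neutron-star mass estimates and compares them with the Bayesian Information Criterion. It is only a sketch: the mass values are made-up placeholders (not the stellarcollapse.org catalogue) and the fit is maximum likelihood via scikit-learn rather than the full Bayesian treatment used in the thesis.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical neutron-star mass estimates in solar masses (placeholder data,
# not the stellarcollapse.org catalogue analysed in the thesis).
masses = np.array([1.25, 1.33, 1.35, 1.38, 1.40, 1.41, 1.44, 1.58,
                   1.66, 1.71, 1.74, 1.77, 1.80, 1.97]).reshape(-1, 1)

for k in (1, 2):
    gmm = GaussianMixture(n_components=k, random_state=0).fit(masses)
    # Lower BIC indicates the preferred model; a large gap is strong evidence.
    print(f"{k} component(s): BIC = {gmm.bic(masses):.1f}, "
          f"means = {gmm.means_.ravel().round(2)}")
```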
72 |
Construção de redes usando estatística clássica e Bayesiana - uma comparação / Building complex networks through classical and Bayesian statistics - a comparison. Thomas, Lina Dornelas, 13 March 2012 (has links)
In this research we study and compare two ways of building networks. The main goal is to find an effective way to build networks, particularly when there are fewer observations than variables. The networks are constructed by estimating the partial correlation coefficient using classical statistics (the inverse method) and Bayesian statistics (a Normal-Inverse-Wishart conjugate prior). To address the problem of having fewer observations than variables, we propose a new methodology, which we call local partial correlation, that consists of selecting, for each pair of variables, the other variables most correlated with the pair. We applied these methods to simulated data and compared them using ROC curves. The most striking result is that, despite its high computational cost, Bayesian inference performs better when there are fewer observations than variables. In other cases, both approaches give satisfactory results.
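The classical "inverse method" mentioned above can be sketched in a few lines: partial correlations are read off the inverse of the sample covariance (precision) matrix. The toy below assumes more observations than variables so that the covariance is invertible; it is not the thesis's Bayesian or local-partial-correlation procedure, and the threshold is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))          # 200 observations, 5 variables (n > p here)

cov = np.cov(X, rowvar=False)
prec = np.linalg.inv(cov)              # precision matrix

# Partial correlation between variables i and j, given all the others:
# rho_ij = -prec_ij / sqrt(prec_ii * prec_jj)
d = np.sqrt(np.diag(prec))
partial_corr = -prec / np.outer(d, d)
np.fill_diagonal(partial_corr, 1.0)

# A simple network: connect pairs whose |partial correlation| exceeds a threshold.
edges = [(i, j) for i in range(5) for j in range(i + 1, 5)
         if abs(partial_corr[i, j]) > 0.1]
print(np.round(partial_corr, 2))
print("edges:", edges)
```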
73 |
Statistical Methods for Characterizing Genomic Heterogeneity in Mixed Samples. Zhang, Fan, 12 December 2016 (has links)
Recently, sequencing technologies have generated massive and heterogeneous data sets, but interpreting these data sets remains a major barrier to understanding genomic heterogeneity in complex diseases. In this dissertation, we develop a Bayesian statistical method for single-nucleotide-level analysis and a global optimization method for gene-expression-level analysis to characterize genomic heterogeneity in mixed samples. The detection of rare single nucleotide variants (SNVs) is important for understanding genetic heterogeneity using next-generation sequencing (NGS) data. Various computational algorithms have been proposed to detect variants at the single nucleotide level in mixed samples, yet the noise inherent in the biological processes involved in NGS technology necessitates statistically accurate methods for identifying true rare variants. At the single nucleotide level, we propose a Bayesian probabilistic model and a variational expectation-maximization (EM) algorithm to estimate the non-reference allele frequency (NRAF) and identify SNVs in heterogeneous cell populations. We demonstrate that our variational EM algorithm has sensitivity and specificity comparable to a Markov chain Monte Carlo (MCMC) sampling inference algorithm, and is more computationally efficient on tests of relatively low-coverage (27x and 298x) data. Furthermore, we show that our model with a variational EM inference algorithm has higher specificity than many state-of-the-art algorithms. In an analysis of a directed-evolution longitudinal yeast data set, we are able to identify a time-series trend in non-reference allele frequency and detect novel variants that have not yet been reported. Our model also detects the emergence of a beneficial variant earlier than previously shown, as well as a pair of concomitant variants. Characterizing heterogeneity in gene expression data is a critical challenge for personalized treatment and for understanding drug resistance arising from intra-tumor heterogeneity. Mixed membership factorization has become popular for analyzing data sets with within-sample heterogeneity. In recent years, several algorithms have been developed for mixed membership matrix factorization, but they only guarantee estimates from a local optimum. At the gene expression level, we derive a global optimization (GOP) algorithm that provides a guaranteed epsilon-global optimum for a sparse mixed membership matrix factorization problem for molecular subtype classification. We test the algorithm on simulated data and find that it always bounds the global optimum across random initializations and explores multiple modes efficiently. The GOP algorithm is well suited to parallel computation in the key optimization steps.
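To give a flavour of single-nucleotide-level inference, the sketch below computes a posterior for the non-reference allele frequency (NRAF) at one site from read counts using a simple Beta-Binomial model. It is only a toy with invented counts: the dissertation's model accounts for sequencing noise across many sites and uses variational EM or MCMC, none of which is captured here.

```python
import numpy as np
from scipy import stats

depth = 300                 # total reads covering the site (hypothetical)
alt_reads = 12              # reads carrying the non-reference allele (hypothetical)

# Beta(1, 1) prior on the NRAF; the posterior is Beta(1 + alt, 1 + ref).
posterior = stats.beta(1 + alt_reads, 1 + depth - alt_reads)

estimate = posterior.mean()
lo, hi = posterior.interval(0.95)
print(f"posterior mean NRAF = {estimate:.3f}, "
      f"95% credible interval = ({lo:.3f}, {hi:.3f})")

# A crude variant call: is the NRAF credibly above a 1% sequencing-error floor?
print("call variant:", posterior.cdf(0.01) < 0.05)
```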
74 |
Identification and photometric redshifts for type-I quasars with medium- and narrow-band filter surveys / Identificação e redshifts fotométricos para quasares do tipo-I com sistemas de filtros de bandas médias e estreitas. Silva, Carolina Queiroz de Abreu, 16 November 2015 (has links)
Quasars are valuable sources for several cosmological applications. In particular, they can be used to trace some of the most massive halos, and their high intrinsic luminosities allow them to be detected at high redshift. This means that quasars (or active galactic nuclei, more generally) have a huge potential to map the large-scale structure. However, this potential has not yet been fully realized, because instruments that rely on broad-band imaging to pre-select spectroscopic targets usually miss most quasars and, consequently, cannot properly separate broad-line-emitting quasars from other point-like sources (such as stars and low-resolution galaxies). This work is an initial attempt to investigate the realistic gains in the identification and separation of quasars and stars when medium- and narrow-band optical filters are employed. The main novelty of our approach is the use of Bayesian priors both for the angular distribution of stars of different types on the sky and for the distribution of quasars as a function of redshift. Since the evidence from these priors convolves the angular dependence of stars with the redshift dependence of quasars, it allows us to control for the near-degeneracy between these objects. However, our results are inconclusive as to the efficiency of star-quasar separation with this approach, and some critical refinements and improvements are still necessary.
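A schematic version of the prior-weighted classification described above: combine a likelihood for an object's observed colour under each class with class priors (star counts at the given sky position versus quasars integrated over an assumed redshift distribution) and compare posteriors. All numbers below are invented placeholders; the actual filters, templates, and priors used in the thesis are far richer.

```python
import numpy as np
from scipy import stats

observed_colour = 0.35                      # one medium-band colour (hypothetical)

# Likelihoods of this colour under each class (placeholder Gaussian templates).
like_star = stats.norm(0.30, 0.10).pdf(observed_colour)
like_quasar = stats.norm(0.55, 0.20).pdf(observed_colour)

# Priors: star counts at this sky position vs quasars integrated over an
# assumed redshift distribution (both numbers invented for illustration).
prior_star, prior_quasar = 0.9, 0.1

post = np.array([prior_star * like_star, prior_quasar * like_quasar])
post /= post.sum()
print(f"P(star | colour) = {post[0]:.2f}, P(quasar | colour) = {post[1]:.2f}")
```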
75 |
Construção de redes usando estatística clássica e Bayesiana - uma comparação / Building complex networks through classical and Bayesian statistics - a comparison. Lina Dornelas Thomas, 13 March 2012 (has links)
In this research we study and compare two ways of building networks. The main goal is to find an effective way to build networks, particularly when there are fewer observations than variables. The networks are constructed by estimating the partial correlation coefficient using classical statistics (the inverse method) and Bayesian statistics (a Normal-Inverse-Wishart conjugate prior). To address the problem of having fewer observations than variables, we propose a new methodology, which we call local partial correlation, that consists of selecting, for each pair of variables, the other variables most correlated with the pair. We applied these methods to simulated data and compared them using ROC curves. The most striking result is that, despite its high computational cost, Bayesian inference performs better when there are fewer observations than variables. In other cases, both approaches give satisfactory results.
76 |
Um ambiente computacional para um teste de significância bayesiano / A computational environment for a Bayesian significance test. Silvio Rodrigues de Faria Junior, 09 October 2006 (has links)
In 1999, Pereira and Stern [Pereira and Stern, 1999] introduced the Full Bayesian Significance Test (FBST), designed to provide a value of evidence supporting a precise hypothesis H. Despite its good conceptual properties and its ability to handle virtually any class of precise hypotheses under parametric models, the FBST has not achieved wide diffusion in the scientific community, largely because of the absence of an integrated environment in which researchers can formulate and implement the test of interest. The goal of this work is to propose an implementation of a flexible, integrated computational environment for the FBST that can handle a large class of problems. As a case study, we present the formulation of the FBST for a classical problem in population genetics, the Hardy-Weinberg equilibrium law.
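The core FBST computation can be sketched by Monte Carlo: find the supremum of the posterior density over the precise null set, then measure how much posterior mass lies at or below that density level. The sketch below does this for the Hardy-Weinberg hypothesis with a trinomial likelihood and a flat Dirichlet prior; the genotype counts are invented and the grid and sample sizes are arbitrary, so this illustrates the idea rather than the environment developed in the thesis.

```python
import numpy as np
from scipy.special import gammaln

# Hypothetical genotype counts (AA, Aa, aa), placeholder data.
counts = np.array([55, 83, 22])
alpha = counts + 1.0                       # Dirichlet posterior under a flat prior

def log_dirichlet_pdf(p, alpha):
    # log of the Dirichlet density, written out explicitly
    return ((alpha - 1) * np.log(p)).sum(axis=-1) \
        + gammaln(alpha.sum()) - gammaln(alpha).sum()

# Supremum of the posterior density over the Hardy-Weinberg null set
# {(t^2, 2t(1-t), (1-t)^2) : 0 < t < 1}, found by a grid search.
t = np.linspace(1e-4, 1 - 1e-4, 10_000)
null_points = np.stack([t**2, 2 * t * (1 - t), (1 - t)**2], axis=-1)
log_sup_null = log_dirichlet_pdf(null_points, alpha).max()

# e-value: posterior mass of the region where the density does not exceed
# that supremum (the complement of the tangential set).
rng = np.random.default_rng(0)
samples = rng.dirichlet(alpha, size=200_000)
ev = np.mean(log_dirichlet_pdf(samples, alpha) <= log_sup_null)
print(f"FBST evidence in favour of Hardy-Weinberg equilibrium: ev(H) = {ev:.3f}")
```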
77 |
[en] EVOLUTIONARY INFERENCE APPROACHES FOR ADAPTIVE MODELS / [pt] ABORDAGENS DE INFERÊNCIA EVOLUCIONÁRIA EM MODELOS ADAPTATIVOS. EDISON AMERICO HUARSAYA TITO, 17 July 2003 (has links)
In many real-world signal-processing applications, the observations of the phenomenon under study arrive sequentially in time; consequently, the data-analysis task involves estimating unknown quantities at each new observation of the phenomenon. In most of these applications, however, some prior knowledge about the phenomenon being modeled is available. This prior knowledge allows us to formulate a Bayesian model, that is, a prior distribution for the unknown quantities and a likelihood function relating these quantities to the observations. Within this setting, Bayesian inference on the unknown quantities is based on the posterior distribution obtained from Bayes' theorem. Unfortunately, it is not always possible to obtain an exact closed-form solution for this posterior distribution. Thanks to the advent of formidable computational power at low cost, together with recent developments in stochastic simulation, this problem has been overcome, since the posterior distribution can be approximated numerically by a discrete distribution formed by a set of samples. In this context, this work approaches the field of stochastic simulation from the viewpoint of Mendelian genetics and the evolutionary principle of the survival of the fittest. In this view, the set of samples that approximates the posterior distribution can be seen as a population of individuals trying to survive in a Darwinian environment, where the strongest individual is the one with the highest probability. Based on this analogy, we introduce into the stochastic simulation field (a) new definitions of transition kernels inspired by the genetic operators of crossover and mutation and (b) new definitions of the acceptance probability inspired by the selection scheme used in Genetic Algorithms. The contribution of this work is the establishment of an equivalence between Bayes' theorem and the evolutionary principle, which allows the development of a new search mechanism for the optimal solution of the unknown quantities, called evolutionary inference. Further contributions are (a) the Genetic Particle Filter, an online learning algorithm, and (b) the Evolution Filter, a batch learning algorithm. Moreover, we show that the Evolution Filter is in essence a Genetic Algorithm since, besides its capacity to converge to probability distributions, it also converges to their global mode. As a consequence, the theoretical foundation of the Evolution Filter demonstrates analytically the convergence of Genetic Algorithms in continuous search spaces. The theoretical convergence analysis of the learning algorithms based on evolutionary inference, together with the results of numerical experiments, shows that this approach can be applied to real signal-processing problems, since it makes it possible to analyze complex signals characterized by non-linear, non-Gaussian, and non-stationary behavior.
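The central idea above, approximating a posterior by a discrete population of samples in which a selection step favours the highest-probability individuals, can be illustrated with plain sampling-importance-resampling on a toy conjugate Gaussian problem. This is not the thesis's Genetic Particle Filter or Evolution Filter (there is no crossover or mutation kernel here); it only shows the sample-population view of Bayesian inference that those algorithms build on.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy problem: unknown quantity theta with prior N(0, 2^2); we observe
# y ~ N(theta, 1). Sampling-importance-resampling approximates the posterior
# by a discrete set of samples (the "population").
y_obs = 1.5                                    # hypothetical observation
particles = rng.normal(0.0, 2.0, size=50_000)  # draws from the prior

log_w = -0.5 * (y_obs - particles) ** 2        # log-likelihood up to a constant
w = np.exp(log_w - log_w.max())
w /= w.sum()

# "Selection": resample particles in proportion to their weights, so the
# fittest (highest-posterior) individuals survive.
posterior_samples = rng.choice(particles, size=50_000, p=w)

print("posterior mean from samples ≈", posterior_samples.mean().round(3))
print("exact conjugate posterior mean =", y_obs * 4 / (4 + 1))   # 1.2
```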
78 |
Um ambiente computacional para um teste de significância bayesiano / A computational environment for a Bayesian significance test. Faria Junior, Silvio Rodrigues de, 09 October 2006 (has links)
In 1999, Pereira and Stern [Pereira and Stern, 1999] introduced the Full Bayesian Significance Test (FBST), designed to provide a value of evidence supporting a precise hypothesis H. Despite its good conceptual properties and its ability to handle virtually any class of precise hypotheses under parametric models, the FBST has not achieved wide diffusion in the scientific community, largely because of the absence of an integrated environment in which researchers can formulate and implement the test of interest. The goal of this work is to propose an implementation of a flexible, integrated computational environment for the FBST that can handle a large class of problems. As a case study, we present the formulation of the FBST for a classical problem in population genetics, the Hardy-Weinberg equilibrium law.
79 |
Trend Analysis and Modeling of Health and Environmental Data: Joinpoint and Functional Approach. Kafle, Ram C., 04 June 2014 (has links)
The present study is divided into two parts: the first is on developing the statistical analysis and modeling of mortality (or incidence) trends using Bayesian joinpoint regression and the second is on fitting differential equations from time series data to derive the rate of change of carbon dioxide in the atmosphere.
The joinpoint regression model identifies significant changes in the trends of the incidence, mortality, and survival of a specific disease in a given population. The Bayesian approach to joinpoint regression is widely used in statistical modeling to identify the points in a trend where significant changes occur. The purpose of the present study is to develop an age-stratified Bayesian joinpoint regression model to describe mortality trends, assuming that the observed counts are probabilistically characterized by the Poisson distribution. The proposed model is based on Bayesian model selection criteria, with the smallest number of joinpoints that is sufficient to explain the Annual Percentage Change (APC). The prior probability distributions are chosen in such a way that they are automatically derived from the model index contained in the model space. The proposed model and methodology estimate age-adjusted mortality rates in different epidemiological studies so that trends can be compared while accounting for the confounding effects of age. Future mortality rates are predicted using the Bayesian Model Averaging (BMA) approach.
As an application of Bayesian joinpoint regression, we first study childhood brain cancer mortality rates (non-age-adjusted) and their Annual Percentage Change (APC) using existing Bayesian joinpoint regression models in the literature. We use annual observed mortality counts of children ages 0-19 from 1969-2009 obtained from the Surveillance, Epidemiology, and End Results (SEER) database of the National Cancer Institute (NCI). The predictive distributions are used to predict future mortality rates, and we compare this result with the mortality trend obtained using the NCI's Joinpoint software. To fit the age-stratified model, we use the cancer mortality counts of adult lung and bronchus cancer (25-85+ years) and brain and other Central Nervous System (CNS) cancer (25-85+ years) patients, also obtained from the SEER database of the NCI.
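A stripped-down flavour of joinpoint trend analysis: fit log-linear segments, convert slopes to an Annual Percentage Change via APC = 100*(exp(beta) - 1), and pick the change-point by least squares. The rates below are invented and the fit is ordinary least squares, not the age-stratified Bayesian Poisson model with BMA prediction developed in the dissertation.

```python
import numpy as np

# Hypothetical age-adjusted mortality rates per 100,000 (made-up numbers).
years = np.arange(1990, 2010)
rates = np.array([4.1, 4.0, 3.9, 3.9, 3.8, 3.8, 3.7, 3.6, 3.6, 3.5,
                  3.5, 3.3, 3.2, 3.0, 2.9, 2.8, 2.6, 2.5, 2.4, 2.3])
y = np.log(rates)

def fit_rss(x, y):
    # ordinary least squares of log-rate on year; returns slope and residual SS
    x = np.asarray(x, dtype=float)
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1], ((y - X @ beta) ** 2).sum()

# Model with no joinpoint: a single log-linear trend.
slope, rss0 = fit_rss(years, y)
print(f"single segment: APC = {100 * (np.exp(slope) - 1):.2f}% per year")

# One joinpoint: try every candidate year and keep the best (smallest total RSS).
rss1, k = min((fit_rss(years[:k], y[:k])[1] + fit_rss(years[k:], y[k:])[1], k)
              for k in range(3, len(years) - 3))
apc_left = 100 * (np.exp(fit_rss(years[:k], y[:k])[0]) - 1)
apc_right = 100 * (np.exp(fit_rss(years[k:], y[k:])[0]) - 1)
print(f"best joinpoint at {years[k]}: "
      f"APC = {apc_left:.2f}% then {apc_right:.2f}% per year")
```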
The second part of this study is the statistical analysis and modeling of noisy data using a functional data analysis approach. Carbon dioxide is one of the major contributors to global warming. In this study, we develop a system of differential equations using time-series data for the major sources of the variables that contribute significantly to carbon dioxide in the atmosphere. We define the differential operator as a data smoother and use a penalized least-squares fitting criterion to smooth the data. Finally, we optimize the profiled error sum of squares to estimate the required differential operator. The proposed models give an estimate of the rate of change of carbon dioxide in the atmosphere at a particular time. We apply the model to carbon dioxide emission data for the continental United States. The data set is obtained from the Carbon Dioxide Information Analysis Center (CDIAC), the primary climate-change data and information analysis center of the United States Department of Energy.
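A much simpler stand-in for the functional-data step described above: smooth a noisy emissions series with a penalized spline and read the rate of change off its derivative. The series is synthetic and the smoothing parameter is a rough heuristic; the dissertation instead estimates a differential operator by optimizing the profiled error sum of squares against CDIAC data.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Hypothetical annual CO2 emissions for a region (arbitrary units, noisy).
years = np.arange(1960, 2011)
rng = np.random.default_rng(0)
emissions = (800 + 12 * (years - 1960) + 0.15 * (years - 1960) ** 2
             + rng.normal(0, 25, years.size))

# Penalized (smoothing) spline as the data smoother; its derivative estimates
# the rate of change of emissions at any time.
spline = UnivariateSpline(years, emissions, k=4, s=years.size * 25 ** 2)
rate = spline.derivative()

for t in (1970, 1990, 2010):
    print(f"{t}: estimated rate of change ≈ {float(rate(t)):.1f} units/year")
```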
The first four chapters of this dissertation contribute to the development and application of joinpoint regression, and the last chapter discusses statistical modeling and the application of differential equations to data using the functional data analysis approach.
80 |
Terrain Aided Underwater Navigation using Bayesian Statistics / Terrängstöttad undervattensnavigering baserad på Bayesiansk statistik. Karlsson, Tobias, January 2002 (has links)
For many years, terrain navigation has been successfully used in military airborne applications. Terrain navigation can substantially improve the performance of traditional inertial navigation, which is typically built around gyros and accelerometers measuring kinetic state changes. Although inertial systems benefit from their high degree of independence, they suffer from growing errors due to the accumulation of continuous measurement errors. Undersea, the number of options for navigation support is fairly limited, yet the navigation accuracy demands on autonomous underwater vehicles are increasing. For many military applications, surfacing to receive a GPS position update is not an option. Lately, some attention has instead shifted towards terrain-aided navigation. One fundamental aim of this work has been to show what can be done within the field of terrain-aided underwater navigation using relatively simple means. A concept has been built around a narrow-beam altimeter measuring the depth directly beneath the vehicle as it moves ahead. To estimate the vehicle location based on the depth measurements, a particle filter algorithm has been implemented. A number of MATLAB simulations have given a qualitative evaluation of the chosen algorithm. In order to acquire data from actual underwater terrain, a small area of the Swedish lake Vättern has been charted. Results from simulations on these data strongly indicate that the particle filter performs surprisingly well, also within areas of relatively modest terrain variation.
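The approach described above can be caricatured in a few lines: a particle filter carries a cloud of candidate positions, predicts them forward with the vehicle's motion, and re-weights them by how well the charted depth at each candidate matches the altimeter reading. Everything below (depth profile, noise levels, speed) is invented for illustration; the thesis's implementation and the Lake Vättern data are of course more involved.

```python
import numpy as np

rng = np.random.default_rng(0)

# Known bathymetric profile: depth (m) as a function of along-track position (m).
def depth_map(x):
    return 40 + 8 * np.sin(0.01 * x) + 3 * np.sin(0.047 * x)

# Simulated truth: vehicle advances ~2 m per step; altimeter noise 0.5 m.
steps, speed, meas_std, proc_std = 100, 2.0, 0.5, 0.3
true_x = 500 + speed * np.arange(steps) + np.cumsum(rng.normal(0, proc_std, steps))
z = depth_map(true_x) + rng.normal(0, meas_std, steps)

# Particle filter: particles are candidate along-track positions.
n = 5000
particles = rng.uniform(0, 2000, n)            # initially very uncertain
for k in range(steps):
    particles += speed + rng.normal(0, proc_std, n)           # predict
    w = np.exp(-0.5 * ((z[k] - depth_map(particles)) / meas_std) ** 2)
    w = (w + 1e-300) / (w + 1e-300).sum()                     # weight by depth match
    particles = rng.choice(particles, size=n, p=w)            # resample

print(f"true position: {true_x[-1]:.1f} m, "
      f"estimate: {particles.mean():.1f} m ± {particles.std():.1f} m")
```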