241

Bayesian learning with catastrophe risk : information externalities in a large economy

Zantedeschi, Daniel 30 September 2011 (has links)
Based on a previous study by Amador and Weill (2009), I study the diffusion of dispersed private information in a large economy subject to a "catastrophe risk" state. I assume that agents learn from the actions of others through two channels: a public channel, which represents learning from prices, and a bi-dimensional private channel, which represents learning from local interactions via information concerning the good state and the catastrophe probability. I show an equilibrium solution based on the conditional Bayes rule, which weakens the usual condition of "slow learning" as presented in Amador and Weill and first introduced by Vives (1993). I study asymptotic convergence "to the truth," deriving that "catastrophe risk" can lead to "non-linear" adjustments that could in principle explain fluctuations of price aggregates. Finally, I discuss robustness issues and potential applications of this work to models of "reaching consensus," "investments under uncertainty," "market efficiency," and "prediction markets."
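The abstract describes agents updating a belief about a catastrophe state via the conditional Bayes rule. As an illustrative sketch only (the binary signal structure, the prior, and all probabilities below are assumptions for demonstration, not taken from the thesis), a minimal toy version of that updating in Python:

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical two-state world: "normal" vs "catastrophe".
    true_state = "normal"
    belief = 0.05                      # assumed prior on the catastrophe state

    # Assumed signal likelihoods: each private signal is a noisy binary
    # indicator that fires more often in the catastrophe state.
    p_signal_given_cat = 0.7
    p_signal_given_normal = 0.2

    for t in range(50):
        p_fire = p_signal_given_cat if true_state == "catastrophe" else p_signal_given_normal
        s = rng.random() < p_fire
        # Conditional Bayes rule: update the catastrophe probability
        # given the observed binary signal.
        like_cat = p_signal_given_cat if s else 1 - p_signal_given_cat
        like_norm = p_signal_given_normal if s else 1 - p_signal_given_normal
        belief = belief * like_cat / (belief * like_cat + (1 - belief) * like_norm)

    print(f"posterior catastrophe probability after 50 signals: {belief:.4f}")

With repeated informative signals the posterior concentrates on the true state; the thesis's point is that the catastrophe channel can make this adjustment non-linear rather than the usual slow, smooth learning.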
242

Bayes and empirical Bayes estimation for the panel threshold autoregressive model and non-Gaussian time series

Liu, Ka-yee (廖家怡). January 2005 (has links)
Master of Philosophy thesis, Statistics and Actuarial Science (published or final version; abstract and table of contents available).
243

Bayesian methods for astrophysical data analysis

Thaithara Balan, Sreekumar January 2013 (has links)
No description available.
244

A systematic approach to Bayesian inference for long memory processes

Graves, Timothy January 2013 (has links)
No description available.
245

Scoring rules, divergences and information in Bayesian machine learning

Huszár, Ferenc January 2013 (has links)
No description available.
246

Bayesian decision analysis of a statistical rainfall/runoff relation

Gray, Howard Axtell January 1972 (has links)
No description available.
247

Quantifying Urban and Agricultural Nonpoint Source Total Phosphorus Fluxes Using Distributed Watershed Models and Bayesian Inference

Wellen, Christopher Charles 14 January 2014 (has links)
Despite decades of research, the water quality of many lakes is impaired by excess total phosphorus loading. Four studies were undertaken using watershed models to understand the temporal and spatial variability of diffuse urban and agricultural total phosphorus pollution to Hamilton Harbour, Ontario, Canada. In the first study, a novel Bayesian framework was introduced to apply Spatially Referenced Regressions on Watershed Attributes (SPARROW) to catchments with few long-term load monitoring sites but many sporadic monitoring sites. The results included reasonable estimates of whole-basin total phosphorus load and recommendations to optimize future monitoring. In the second study, the static SPARROW model was extended to allow annual time-series estimates of watershed loads and the attendant source-sink processes. Results suggest that total phosphorus loads and source areas vary significantly at annual timescales. Further, the total phosphorus export rate of agricultural areas was estimated to be nearly twice that of urban areas. The third study presents a novel Bayesian framework that postulates that the watershed response to precipitation occurs in distinct states, which in turn are characterized by different model parameterizations. This framework is applied to Soil and Water Assessment Tool (SWAT) models of an urban creek (Redhill Creek) and an agricultural creek (Grindstone Creek) near Hamilton. The results suggest that during the limnological growing season (May to September), urban areas are responsible for the bulk of overland flow in both creeks: between 90% and 98% of all surface runoff in Redhill Creek, and between 95% and 99% in Grindstone Creek. In the fourth study, suspended sediment is used as a surrogate for total phosphorus. Despite disagreements regarding sediment source apportionment among the three model applications, Bayesian model averaging allows an unambiguous identification of urban land uses as the main source of suspended sediments during the growing season. Taken together, these results suggest that multiple models must be used to arrive at a comprehensive understanding of total phosphorus loading. Further, while urban land uses may not be the primary source of sediment (and total phosphorus) loading annually, their source strength is increased relative to agricultural land uses during the growing season.
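The fourth study's key step is Bayesian model averaging: weighting each model's estimate by its posterior model probability. As a hedged sketch only (the three urban-share estimates and the log marginal likelihoods below are invented for illustration, not results from the thesis), the mechanics in Python:

    import numpy as np

    # Hypothetical per-model estimates of the urban share of suspended
    # sediment during the growing season, plus each model's assumed
    # log marginal likelihood (all numbers illustrative).
    urban_share = np.array([0.55, 0.70, 0.62])
    log_evidence = np.array([-120.3, -118.9, -119.5])

    # Bayesian model averaging: posterior model weights are proportional
    # to the marginal likelihoods (uniform model prior assumed).
    w = np.exp(log_evidence - log_evidence.max())
    w /= w.sum()

    bma_estimate = np.dot(w, urban_share)
    print(f"model weights: {np.round(w, 3)}")
    print(f"BMA urban sediment share: {bma_estimate:.3f}")

Even when individual models disagree, the weighted average (and its spread) can still point unambiguously at one source, which is the result the abstract reports.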
248

Optimization of the Lesson Timetable for Profiled Schools (Pamokų tvarkaraščio optimizavimas profiliuotoms mokykloms)

Norkus, Aurimas 25 May 2005 (has links)
This work implements three algorithms: lesson permutation, lesson permutation with a simulated annealing (SA) adjustment, and lesson permutation using a Bayesian approach to optimize the SA parameters. The algorithms and the graphical user interface are programmed in JSP, which is based on the Java object-oriented programming language. To evaluate the goodness of a schedule, the algorithms compute penalty points, which are assigned for various inconveniences. The user can define how many penalty points are given when a particular inconvenience occurs, and can also assign the stochastic algorithm's parameters. A survey of the literature examines the use of simulated annealing and Bayesian methods in other stochastic algorithms and in their various combinations. The profiled school schedule optimization algorithm is based on the SA search methodology: searching for the optimum through lower-quality solutions, using a convergent temperature function and differences in solution quality. The algorithm using the Bayesian approach was created to improve this SA search: the user can strongly influence SA behaviour by changing the system temperature or the annealing speed through its parameters, whereas the Bayesian method reduces this sensitivity by predicting better parameters with which SA should work effectively and adjusting them accordingly. Experiments with the three stochastic algorithms were carried out... [to full text]
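The core loop the abstract describes is standard simulated annealing over lesson permutations with a penalty-point objective. A minimal self-contained sketch (the penalty function, slot counts, temperature schedule, and all constants below are assumptions for illustration; the thesis's system is in JSP/Java and uses user-defined inconvenience weights):

    import math
    import random

    random.seed(1)
    N_LESSONS, N_SLOTS = 20, 30

    def penalty(schedule):
        # Hypothetical penalty: count pairs of lessons sharing a slot,
        # a stand-in for the thesis's user-defined inconvenience points.
        used = {}
        for slot in schedule:
            used[slot] = used.get(slot, 0) + 1
        return sum(c * (c - 1) // 2 for c in used.values())

    schedule = [random.randrange(N_SLOTS) for _ in range(N_LESSONS)]
    cost, temp = penalty(schedule), 5.0

    for step in range(5000):
        i = random.randrange(N_LESSONS)
        old = schedule[i]
        schedule[i] = random.randrange(N_SLOTS)    # permute one lesson
        new_cost = penalty(schedule)
        # SA acceptance: always take improvements; sometimes accept
        # lower-quality solutions, less often as the temperature falls.
        if new_cost <= cost or random.random() < math.exp((cost - new_cost) / temp):
            cost = new_cost
        else:
            schedule[i] = old                      # reject the move
        temp *= 0.999                              # convergent temperature schedule

    print("final penalty:", cost)

The Bayesian layer the thesis adds would sit outside this loop, proposing values for the initial temperature and cooling rate rather than leaving them to the user.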
249

Combining measurements with deterministic model outputs: predicting ground-level ozone

Liu, Zhong 05 1900 (has links)
The main topic of this thesis is how to combine outputs from deterministic models with measurements from monitoring stations for air pollutants or other meteorological variables. We consider two different approaches to this problem. The first approach uses the Bayesian Melding (BM) model proposed by Fuentes and Raftery (2005). We implement this model and conduct several simulation studies to examine its performance in different scenarios. We also apply the melding model to ozone data to show the importance of using the Bayesian melding model to calibrate the model outputs, that is, to adjust the model outputs for the prediction of measurements. Owing to the Bayesian framework of the melding model, we can extend it to incorporate other components, such as ensemble models and reversible-jump MCMC for variable selection. However, the BM model is a purely spatial model, while in practice we generally have to deal with space-time datasets. This deficiency of the BM approach leads us to a second approach, an alternative to the BM model: a linear mixed model (unlike most linear mixed models, one whose random effects are spatially correlated) with temporally and spatially correlated residuals. We assume the spatial and temporal correlations are separable and use an AR process to model the temporal correlation. We also develop a multivariate version of this model. Both the melding model and the linear mixed model are Bayesian hierarchical models, which can better estimate the uncertainties of the estimates and predictions.
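The calibration idea, adjusting deterministic model output toward measurements whose residuals are temporally correlated, can be shown in miniature. A hedged sketch only (the simulated ozone series, the AR(1) coefficient, and the plain least-squares calibration below are assumptions for illustration; the thesis fits full Bayesian hierarchical models, not OLS):

    import numpy as np

    rng = np.random.default_rng(2)
    T = 200

    # Hypothetical deterministic-model ozone output; measurements follow
    # a linear calibration of it plus AR(1)-correlated residuals.
    model_out = 30 + 10 * np.sin(np.linspace(0, 8, T))
    phi, resid = 0.8, np.zeros(T)
    for t in range(1, T):
        resid[t] = phi * resid[t - 1] + rng.normal(0, 2)
    measured = 5.0 + 0.9 * model_out + resid

    # Calibration step (cf. adjusting model outputs for the prediction
    # of measurements): estimate the additive and multiplicative bias
    # of the deterministic model by least squares.
    X = np.column_stack([np.ones(T), model_out])
    a, b = np.linalg.lstsq(X, measured, rcond=None)[0]
    print(f"estimated additive bias {a:.2f}, multiplicative bias {b:.2f}")

In the thesis's mixed-model version, the AR(1) residual structure is estimated jointly with the calibration rather than ignored, which is what delivers honest prediction uncertainties.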
250

Monte Carlo integration in discrete undirected probabilistic models

Hamze, Firas 05 1900 (has links)
This thesis contains the author's work in and contributions to the field of Monte Carlo sampling for undirected graphical models, a class of statistical models commonly used in machine learning, computer vision, and spatial statistics; the aim is to be able to use the methodology and the resultant samples to estimate integrals of functions of the variables in the model. Over the course of the study, three different but related methods were proposed and have appeared as research papers. The thesis consists of an introductory chapter discussing the models considered, the problems involved, and a general outline of Monte Carlo methods. The three subsequent chapters contain versions of the published work. The second chapter, which has appeared in (Hamze and de Freitas 2004), presents new MCMC algorithms for computing the posterior distributions and expectations of the unknown variables in undirected graphical models with regular structure. For demonstration purposes, we focus on Markov Random Fields (MRFs). By partitioning an MRF into non-overlapping trees, it is possible to compute the posterior distribution of a particular tree exactly by conditioning on the remaining trees. These exact solutions allow us to construct efficient blocked and Rao-Blackwellised MCMC algorithms. We show empirically that tree sampling is considerably more efficient than other partitioned sampling schemes and the naive Gibbs sampler, even in cases where loopy belief propagation fails to converge. We prove that tree sampling exhibits lower variance than the naive Gibbs sampler and other naive partitioning schemes using the theoretical measure of maximal correlation. We also construct new information-theoretic tools for comparing different MCMC schemes and show that, under these, tree sampling is more efficient. Although the work discussed in Chapter 2 showed promise on the class of graphs to which it was suited, there are many cases where limiting the topology is quite a handicap. The work in Chapter 3 explores an alternative methodology for approximating functions of variables representable as undirected graphical models of arbitrary connectivity with pairwise potentials, as well as for estimating the notoriously difficult partition function of the graph. The algorithm, published in (Hamze and de Freitas 2005), fits into the framework of sequential Monte Carlo methods rather than the more widely used MCMC, and relies on constructing a sequence of intermediate distributions that get closer to the desired one. While the idea of using "tempered" proposals is known, we construct a novel sequence of target distributions where, rather than dropping a global temperature parameter, we sequentially couple individual pairs of variables that are, initially, sampled exactly from a spanning tree of the variables. We present experimental results on inference and on estimation of the partition function for sparse and densely connected graphs. The final contribution of this thesis, presented in Chapter 4 and also in (Hamze and de Freitas 2007), emerged from empirical observations made while trying to optimize the sequence of edges to add to a graph so as to guide the population of samples to the high-probability regions of the model. Most important among these observations was that while several heuristic approaches, discussed in Chapter 1, certainly yielded improvements over edge sequences consisting of random choices, strategies based on forcing the particles to take large, biased random walks in the state space resulted in more efficient exploration, particularly at low temperatures. This motivated a new Monte Carlo approach to treating complex discrete distributions. The algorithm is motivated by the N-Fold Way, an ingenious event-driven MCMC sampler that avoids rejection moves at any specific state. The N-Fold Way can, however, get "trapped" in cycles. We surmount this problem by modifying the sampling process to produce biased state-space paths of randomly chosen length. This alteration does introduce bias, but the bias is subsequently corrected with a carefully engineered importance sampler.
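The baseline that tree sampling improves on is the naive single-site Gibbs sampler. A minimal sketch of that baseline on a 1-D Ising chain (the chain length, coupling strength, and sweep counts below are arbitrary illustration values; the thesis's contribution is to replace these one-variable updates with exact block updates of whole trees):

    import numpy as np

    rng = np.random.default_rng(3)
    n, coupling, sweeps = 32, 0.5, 2000

    # Naive single-site Gibbs sampler for a 1-D Ising chain: resample
    # each spin from its conditional given its neighbours, one at a time.
    x = rng.choice([-1, 1], size=n)
    mags = []
    for sweep in range(sweeps):
        for i in range(n):
            field = coupling * (x[i - 1] if i > 0 else 0) + \
                    coupling * (x[i + 1] if i < n - 1 else 0)
            p_plus = 1 / (1 + np.exp(-2 * field))   # P(x_i = +1 | rest)
            x[i] = 1 if rng.random() < p_plus else -1
        mags.append(x.mean())

    # Discard burn-in sweeps before estimating the expectation.
    print(f"estimated mean magnetization: {np.mean(mags[500:]):.4f}")

Because a chain is itself a tree, the tree-sampling scheme above could instead draw the entire configuration exactly in one forward-backward pass, which is precisely why it mixes faster than these local moves.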
