1

Sparse Bayesian Time-Varying Covariance Estimation in Many Dimensions

Kastner, Gregor 18 September 2016
Dynamic covariance estimation for multivariate time series suffers from the curse of dimensionality. This renders parsimonious estimation methods essential for conducting reliable statistical inference. In this paper, the issue is addressed by modeling the underlying co-volatility dynamics of a time series vector through a lower dimensional collection of latent time-varying stochastic factors. Furthermore, we apply a Normal-Gamma prior to the elements of the factor loadings matrix. This hierarchical shrinkage prior effectively pulls the factor loadings of unimportant factors towards zero, thereby increasing parsimony even more. We apply the model to simulated data as well as daily log-returns of 300 S&P 500 stocks and demonstrate the effectiveness of the shrinkage prior in obtaining sparse loadings matrices and more precise correlation estimates. Moreover, we investigate predictive performance and discuss different choices for the number of latent factors. In addition to being a stand-alone tool, the algorithm is designed to act as a "plug and play" extension for other MCMC samplers; it is implemented in the R package factorstochvol. (author's abstract) / Series: Research Report Series / Department of Statistics and Mathematics
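The shrinkage behaviour of the Normal-Gamma prior can be illustrated in a few lines. The sketch below is a hedged Python illustration, not the thesis's implementation (which is the R package factorstochvol); the hyperparameters and the Gamma parameterization are assumptions chosen for demonstration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hierarchical Normal-Gamma prior for a single loading:
#   lambda | tau ~ N(0, tau),  tau ~ Gamma(a, rate=b).
# A small shape a piles prior mass near zero (strong shrinkage)
# while keeping tails heavy enough to let large loadings survive.
def normal_gamma_draws(a, b, size, rng):
    tau = rng.gamma(shape=a, scale=1.0 / b, size=size)  # local prior variances
    return rng.normal(0.0, np.sqrt(tau))

a, b = 0.1, 0.1  # hypothetical hyperparameters
ng = normal_gamma_draws(a, b, 100_000, rng)
gauss = rng.normal(0.0, 1.0, 100_000)

# Compared with a Gaussian prior, far more mass sits in a tight
# neighbourhood of zero; this is what sparsifies the loadings matrix.
print("P(|lambda| < 0.01):", (np.abs(ng) < 0.01).mean(),
      "Normal-Gamma vs", (np.abs(gauss) < 0.01).mean(), "Gaussian")
```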
2

Probabilistic Flood Forecast Using Bayesian Methods

Han, Shasha January 2019
The number of flood events and the estimated costs of floods have increased dramatically over the past few decades. To reduce the negative impacts of flooding, reliable flood forecasting is essential for early warning and decision making. Although various flood forecasting models and techniques have been developed, the assessment and reduction of uncertainties associated with the forecast remain a challenging task. Therefore, this thesis focuses on the investigation of Bayesian methods for producing probabilistic flood forecasts to accurately quantify predictive uncertainty and enhance the forecast performance and reliability. In the thesis, hydrologic uncertainty was quantified by a Bayesian post-processor, the Hydrologic Uncertainty Processor (HUP), and the predictability of HUP with different hydrologic models under different flow conditions was investigated. This was followed by an extension of HUP into an ensemble prediction framework, which constitutes the Bayesian Ensemble Uncertainty Processor (BEUP). BEUP was then tested with bias-corrected ensemble weather inputs to improve predictive performance. In addition, the effects of input and model type on BEUP were investigated through different combinations of BEUP with deterministic/ensemble weather predictions and lumped/semi-distributed hydrologic models. Results indicate that the Bayesian method is robust for probabilistic flood forecasting with uncertainty assessment. HUP is able to improve the deterministic forecast from the hydrologic model and produces a more accurate probabilistic forecast. Under high flow conditions, a better performing hydrologic model yields a better probabilistic forecast after applying HUP. BEUP can significantly improve the accuracy and reliability of short-range flood forecasts, but the improvement becomes less obvious as lead time increases. The best results for short-range forecasts are obtained by applying both bias correction and BEUP. Results also show that bias correcting each ensemble member of the weather inputs generates better flood forecasts than bias correcting only the ensemble mean. The improvement in BEUP brought by the hydrologic model type is more significant than that brought by the input data type. BEUP with a semi-distributed model is recommended for short-range flood forecasts. / Dissertation / Doctor of Philosophy (PhD) / Flooding is one of the top weather-related hazards and causes serious property damage and loss of life every year worldwide. If the timing and magnitude of a flood event can be accurately predicted in advance, there is time to prepare, which reduces the negative impacts. This research focuses on improving flood forecasts through advanced Bayesian techniques. The main objectives are: (1) enhancing the reliability and accuracy of the flood forecasting system; and (2) improving the assessment of predictive uncertainty associated with the flood forecasts. The key contributions include: (1) application of Bayesian forecasting methods in a semi-urban watershed to advance predictive uncertainty quantification; and (2) investigation of the Bayesian forecasting methods with different inputs and models, combined with a bias correction technique, to further improve forecast performance. It is expected that the findings from this research will benefit flood impact mitigation, watershed management and water resources planning.
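At its core, a hydrologic uncertainty processor is a Bayesian update of a prior flow distribution with a calibrated likelihood for the model forecast. The following is a minimal Gaussian sketch of that idea, not the thesis's actual HUP/BEUP implementation; the prior, calibration coefficients and forecast value are all hypothetical:

```python
import numpy as np

# Prior (climatology) for the actual flow h, in m^3/s (hypothetical):
mu_h, sd_h = 120.0, 40.0
# Calibrated likelihood: model forecast s | h ~ N(a + b*h, sd_e^2) (hypothetical):
a, b, sd_e = 5.0, 0.95, 15.0
s_obs = 150.0  # the deterministic forecast issued by the hydrologic model

# Conjugate normal update gives the posterior of the actual flow given s_obs
var_post = 1.0 / (1.0 / sd_h**2 + b**2 / sd_e**2)
mu_post = var_post * (mu_h / sd_h**2 + b * (s_obs - a) / sd_e**2)

print(f"posterior flow: N({mu_post:.1f}, {var_post**0.5:.1f}^2)")
lo, hi = mu_post + np.array([-1.645, 1.645]) * var_post**0.5
print(f"90% credible interval: [{lo:.1f}, {hi:.1f}] m^3/s")
```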
3

A novel Bayesian hierarchical model for road safety hotspot prediction

Fawcett, Lee, Thorpe, Neil, Matthews, Joseph, Kremer, Karsten 30 September 2020
In this paper, we propose a Bayesian hierarchical model for predicting accident counts in future years at sites within a pool of potential road safety hotspots. The aim is to inform road safety practitioners of the location of likely future hotspots to enable a proactive, rather than reactive, approach to road safety scheme implementation. A feature of our model is the ability to rank sites according to their potential to exceed, in some future time period, a threshold accident count which may be used as a criterion for scheme implementation. Our model specification enables the classical empirical Bayes formulation – commonly used in before-and-after studies, wherein accident counts from a single before period are used to estimate counterfactual counts in the after period – to be extended to incorporate counts from multiple time periods. This allows site-specific variations in historical accident counts (e.g. locally-observed trends) to offset estimates of safety generated by a global accident prediction model (APM), which itself is used to help account for the effects of global trend and regression-to-mean (RTM). The Bayesian posterior predictive distribution is exploited to formulate predictions and to properly quantify our uncertainty in these predictions. The main contributions of our model include (i) the ability to allow accident counts from multiple time-points to inform predictions, with counts in more recent years lending more weight to predictions than counts from time-points further in the past; (ii) where appropriate, the ability to offset global estimates of trend by variations in accident counts observed locally, at a site-specific level; and (iii) the ability to account for unknown/unobserved site-specific factors which may affect accident counts. We illustrate our model with an application to accident counts at 734 potential hotspots in the German city of Halle; we also propose some simple diagnostics to validate the predictive capability of our model. We conclude that our model accurately predicts future accident counts, with point estimates from the predictive distribution matching observed counts extremely well.
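A minimal sketch of the kind of posterior predictive ranking the model performs, assuming a simple Poisson-gamma site model in which older counts are geometrically downweighted; the paper's actual hierarchical model is richer, and all counts and hyperparameters below are hypothetical:

```python
import numpy as np
from scipy import stats

# Hypothetical yearly accident counts at one site, oldest first
counts = np.array([3, 5, 4, 7, 6])
delta = 0.8                                   # discount: recent years weigh more
w = delta ** np.arange(len(counts) - 1, -1, -1)

a0, b0 = 1.0, 0.5                             # Gamma(a0, rate=b0) prior on the rate
a_post = a0 + np.sum(w * counts)              # weighted conjugate update
b_post = b0 + np.sum(w)

# Posterior predictive for next year's count is negative binomial
p = b_post / (b_post + 1.0)
threshold = 8                                 # hypothetical scheme-implementation criterion
prob_exceed = 1.0 - stats.nbinom.cdf(threshold, a_post, p)
print(f"P(next-year count > {threshold}) = {prob_exceed:.3f}")
# Ranking candidate sites by this exceedance probability yields the
# proactive hotspot ordering described in the paper.
```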
4

On Fuzzy Bayesian Inference

Frühwirth-Schnatter, Sylvia January 1990
In the paper at hand we apply fuzzy set theory to Bayesian statistics to obtain "Fuzzy Bayesian Inference". In the subsequent sections we will discuss a fuzzy-valued likelihood function, Bayes' theorem for both fuzzy data and fuzzy priors, a fuzzy Bayes' estimator, fuzzy predictive densities and distributions, and fuzzy H.P.D. regions. (author's abstract) / Series: Forschungsberichte / Institut für Statistik
5

Fully Bayesian structure learning of Bayesian networks and their hypergraph extensions

Datta, Sagnik 07 July 2016
In this thesis, I address the important problem of the determination of the structure of complex networks, with the widely used class of Bayesian network models as a concrete vehicle of my ideas. The structure of a Bayesian network represents a set of conditional independence relations that hold in the domain. Learning the structure of the Bayesian network model that represents a domain can reveal insights into its underlying causal structure. Moreover, it can also be used for prediction of quantities that are difficult, expensive, or unethical to measure, such as the probability of cancer based on other quantities that are easier to obtain. The contributions of this thesis include (A) software developed in the C language for structure learning of Bayesian networks; (B) the introduction of a new jumping kernel in the Metropolis-Hastings algorithm for faster sampling of networks; (C) the extension of the notion of Bayesian networks to structures involving loops; and (D) software developed specifically to learn cyclic structures. Our primary objective is structure learning, and thus the graph structure is our parameter of interest; all other parameters appearing in the mathematical model are treated as nuisance parameters, and we do not aim to estimate them.
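To make the structure-sampling idea concrete, here is a hedged Python sketch of a Metropolis-Hastings sampler over DAGs with a single-edge-toggle jumping kernel. The score function is a sparsity-favouring placeholder, not the thesis's kernel or scoring rule; a real sampler would score structures by their log marginal likelihood given data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4  # a tiny network for illustration

def is_dag(adj):
    """Kahn-style check: a graph is acyclic iff nodes can be peeled off
    by repeatedly removing those with no incoming edges."""
    remaining = list(range(len(adj)))
    while remaining:
        roots = [i for i in remaining if not adj[remaining, i].any()]
        if not roots:
            return False
        remaining = [i for i in remaining if i not in roots]
    return True

def log_score(adj):
    """Placeholder log-score that merely favours sparse graphs; a real
    sampler would use the log marginal likelihood of the data."""
    return -2.0 * adj.sum()

adj = np.zeros((n, n), dtype=bool)
edge_counts = []
for it in range(5_000):
    i, j = rng.choice(n, size=2, replace=False)   # pick an ordered pair
    prop = adj.copy()
    prop[i, j] = ~prop[i, j]                      # toggle edge i -> j
    # Symmetric single-edge kernel: the acceptance ratio reduces to a
    # score difference; non-DAG proposals are rejected outright.
    if is_dag(prop) and np.log(rng.random()) < log_score(prop) - log_score(adj):
        adj = prop
    edge_counts.append(adj.sum())

print("posterior mean number of edges:", np.mean(edge_counts))
```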
6

Long-term dynamics of Ledum palustre - testing the distribution model with paleoecological data

Radoměřský, Tomáš January 2016
On the territory of the Czech Switzerland National Park, the vegetation cover changed substantially during the Holocene; the changes are most clearly recorded in the mid-Holocene climatic optimum, when broadleaf deciduous forests expanded into Central Europe. These transformations were driven by climatic change, which also set off a process of soil acidification that continues to this day and has caused further shifts in vegetation composition, even the extinction of various species, especially in sandstone areas. In addition, human impact has intensified over the last few centuries: for agricultural and economic reasons, much of the original forest has been converted to single-species, even-aged plantations, further reducing the already declining species diversity and the relative abundance of undergrowth species. This work focuses on a single species, the evergreen undergrowth shrub Ledum palustre, which has strong habitat requirements and indicates a specific habitat type. It grows on the upper, north-facing edges of rocks with plenty of light and humidity. Organic material accumulates at these locations thanks to the favourable hydrology, which makes it possible to study the paleoecology of the species using pollen and macroremains. On the basis of recent occurrences and the relationships...
7

Optimization and Bayesian Modeling of Road Distance for Inventory of Potholes in Gävle Municipality

Lindblom, Timothy Rafael, Tollin, Oskar January 2022
Time management and distance evaluation have long been difficult tasks for workers and companies. This thesis studies 6712 pothole coordinates in Gävle municipality and evaluates the minimal total road distance needed to visit each pothole once and return to the initial pothole. Road distance is approximated using the flight distance and a simple random sample of 113 road distances from Google Maps. Thereafter, the data from the sample are used, together with a Bayesian approach, to find a distribution of the ratio between road distance and flight distance. Lastly, a solution to the shortest-route problem is devised using the Nearest Neighbor algorithm (NNA) and Simulated Annealing (SA). Computational work is performed with Markov Chain Monte Carlo (MCMC). The results give a minimal road distance of 717 km.
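A hedged sketch of the two-stage routing heuristic described above: a Nearest Neighbour tour refined by Simulated Annealing with 2-opt segment reversals. The coordinates are synthetic stand-ins for the pothole data, and the result is a flight-distance tour; in the thesis, road distance is then obtained by scaling with the Bayesian posterior of the road-to-flight ratio:

```python
import numpy as np

rng = np.random.default_rng(7)
pts = rng.uniform(0, 50, size=(200, 2))       # synthetic "pothole" coordinates, km

def tour_length(order):
    p = pts[order]
    return np.linalg.norm(p - np.roll(p, -1, axis=0), axis=1).sum()

# 1) Nearest Neighbour construction: always drive to the closest unvisited hole
tour, unvisited = [0], set(range(1, len(pts)))
while unvisited:
    nxt = min(unvisited, key=lambda i: np.linalg.norm(pts[i] - pts[tour[-1]]))
    tour.append(nxt)
    unvisited.remove(nxt)

# 2) Simulated Annealing: 2-opt segment reversals with geometric cooling
order = np.array(tour)
cur = best = tour_length(order)
T = 10.0
for step in range(20_000):
    i, j = sorted(rng.integers(1, len(pts), size=2))
    cand = order.copy()
    cand[i:j] = cand[i:j][::-1]               # reverse a segment of the tour
    d = tour_length(cand) - cur
    if d < 0 or rng.random() < np.exp(-d / T):
        order, cur = cand, cur + d
        best = min(best, cur)
    T *= 0.9995                               # cooling schedule
print(f"flight-distance tour after NN + SA: {best:.1f} km")
# Multiplying by draws from the posterior road/flight ratio would convert
# this into the road-distance estimate used in the thesis.
```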
8

Variational Bayesian Learning and its Applications

Zhao, Hui January 2013
This dissertation is devoted to studying a fast and analytic approximation method, called the variational Bayesian (VB) method, and aims to give insight into its general applicability and usefulness, and to explore its applications to various real-world problems. This work has three main foci: 1) general applicability and properties; 2) diagnostics for VB approximations; 3) variational applications. Generally, variational inference has been developed in the context of the exponential family, which is open to further development. First, it usually considers cases within the conjugate exponential family. Second, variational inferences are developed only with respect to natural parameters, which are often not the parameters of immediate interest. Moreover, full factorization, which assumes all terms to be independent of one another, is the most commonly used scheme in most variational applications. We show that VB inferences can be extended to a more general situation. We propose a special parameterization for a parametric family, and also propose a factorization scheme with a more general dependency structure than is traditional in VB. Based on these new frameworks, we develop a variational formalism in which VB has a fast implementation and is not limited to the conjugate exponential setting. We also investigate its local convergence property, the effects of choosing different priors, and the effects of choosing different factorization schemes. The essence of the VB method relies on making simplifying assumptions about the posterior dependence of a problem. By definition, the general posterior dependence structure is distorted. In addition, in various applications, we observe that the posterior variances are often underestimated. We aim to develop diagnostic tests to assess VB approximations; these methods are expected to be quick and easy to use, and to require no sophisticated tuning expertise. We propose three methods to compute the actual posterior covariance matrix using only the knowledge obtained from VB approximations: 1) to look at the joint posterior distribution and attempt to find an optimal affine transformation that links the VB and true posteriors; 2) based on a marginal posterior density approximation, to work in specific low-dimensional directions to estimate true posterior variances and correlations; 3) based on a stepwise conditional approach, to construct and solve a set of systems of equations that lead to estimates of the true posterior variances and correlations. A key computation in the above methods is to calculate a univariate marginal or conditional variance. We propose a novel way, called the VB Adjusted Independent Metropolis-Hastings (VBAIMH) method, to compute these quantities. It uses an independent Metropolis-Hastings (IMH) algorithm with proposal distributions configured by VB approximations. The variance of the target distribution is obtained by monitoring the acceptance rate of the generated chain. One major question associated with the VB method is how well the approximations can work. We particularly study the mean structure approximations, and show how it is possible to use VB approximations to approach model selection tasks such as determining the dimensionality of a model or performing variable selection. We also consider variational applications in Bayesian nonparametric modeling, especially for the Dirichlet process (DP). The posterior inference for DP has been extensively studied in the context of MCMC methods. This work presents a full variational solution for DP with non-conjugate settings. Our solution uses a truncated stick-breaking representation. We propose an empirical method to determine the number of distinct components in a finite-dimensional DP. The posterior predictive distribution for DP is often not available in closed form. We show how to use variational techniques to approximate this quantity. As a concrete application study, we work through the VB method on regime-switching lognormal models and present solutions to quantify both the uncertainty in the parameters and the model specification. Through a series of numerical comparison studies with likelihood-based methods and MCMC methods on simulated and real data sets, we show that the VB method can exactly recover the model structure, gives reasonable point estimates, and is very computationally efficient.
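The idea behind VBAIMH, an independence Metropolis-Hastings sampler whose proposal is the VB approximation, can be sketched on a toy problem. Below, a deliberately underdispersed Gaussian stands in for the VB fit and the chain recovers the much larger true posterior variance; the thesis infers this variance from the acceptance rate, whereas the sketch simply reads it off the chain. The target, proposal and all numbers are assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy "true posterior": N(0, 2^2); "VB" proposal: N(0, 1), too narrow,
# mimicking VB's tendency to underestimate posterior variance.
log_target = lambda x: -0.5 * (x / 2.0) ** 2
mu_q, sd_q = 0.0, 1.0
log_prop = lambda x: -0.5 * ((x - mu_q) / sd_q) ** 2

x, accepts, chain = 0.0, 0, []
for it in range(50_000):
    y = rng.normal(mu_q, sd_q)
    # Independence-sampler acceptance ratio: pi(y) q(x) / (pi(x) q(y))
    if np.log(rng.random()) < (log_target(y) - log_target(x)
                               + log_prop(x) - log_prop(y)):
        x, accepts = y, accepts + 1
    chain.append(x)

print(f"acceptance rate: {accepts / len(chain):.2f}")
print(f"VB variance: 1.00, chain variance: {np.var(chain[5_000:]):.2f}, true: 4.00")
```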
9

Benthic habitats of the extended Faial Island shelf and their relationship to geologic, oceanographic and infralittoral biologic features

Tempera, Fernando January 2009
This thesis presents a new template for multidisciplinary habitat mapping that combines the analyses of seafloor geomorphology, oceanographic proxies and modelling of associated biologic features. High resolution swath bathymetry of the Faial and western Pico shelves is used to present the first state-of-the-art geomorphologic assessment of submerged island shelves in the Azores. Solid seafloor structures are described in previously unreported detail together with associated volcanic, tectonic and erosion processes. The large sedimentary expanses identified in the area are also investigated and the large bedforms identified are discussed in view of new data on the local hydrodynamic conditions. Coarse-sediment zones of types hitherto unreported for volcanic island shelves are described using swath data and in situ imagery together with sub-bottom profiles and grainsize information. The hydrodynamic and geological processes producing these features are discussed. New oceanographic information extracted from satellite imagery is presented including yearly and seasonal sea surface temperature and chlorophyll-a concentration fields. These are used as proxies to understand the spatio-temporal variability of water temperature and primary productivity in the immediate island vicinity. The patterns observed are discussed, including onshore-offshore gradients and the prevalence of colder/more productive waters in the Faial-Pico passage and shelf areas in general. Furthermore, oceanographic proxies for swell exposure and tidal currents are derived from GIS analyses and shallow-water hydrographic modelling. Finally, environmental variables that potentially regulate the distribution of benthic organisms (seafloor nature, depth, slope, sea surface temperature, chlorophyll-a concentration, swell exposure and maximum tidal currents) are brought together and used to develop innovative statistical models of the distribution of six macroalgae taxa dominant in the infralittoral (articulated Corallinaceae, Codium elisabethae, Dictyota spp., Halopteris filicina, Padina pavonica and Zonaria tournefortii). Predictive distributions of these macroalgae are spatialized around Faial island using ordered logistic regression equations and raster fields of the explanatory variables found to be statistically significant. This new approach represents a potentially highly significant step forward in modelling benthic communities not only in the Azores but also in other oceanic island shelves where the management of benthic species and biotopes is critical to preserve ecosystem health.
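As an illustration of the final modelling step, here is a hedged sketch of prediction from an ordered logistic regression of the kind used for the macroalgae taxa. The coefficients, cutpoints, covariate scaling and the example raster cell are all hypothetical, not the thesis's estimates:

```python
import numpy as np

def ordered_logit_probs(x, beta, cutpoints):
    """Class probabilities from cumulative logits P(Y <= k | x) = logistic(c_k - x.beta)."""
    eta = x @ beta
    cum = 1.0 / (1.0 + np.exp(-(np.asarray(cutpoints) - eta)))
    cum = np.append(cum, 1.0)                 # the top class closes the scale
    return np.diff(np.concatenate(([0.0], cum)))

# Hypothetical coefficients for standardized depth, SST and swell exposure
beta = np.array([-0.8, 0.5, -0.3])
cutpoints = [-1.0, 1.5]                       # 3 ordered abundance classes
x_raw = np.array([12.0, 18.0, 0.4])           # one raster cell: depth (m), SST (C), swell
x = (x_raw - np.array([20.0, 17.0, 0.5])) / np.array([10.0, 2.0, 0.3])  # crude scaling

print(ordered_logit_probs(x, beta, cutpoints))  # class probabilities, sum to 1
```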
10

Selecting and Monitoring Insurance Risk on Reinsurance Treaties Using Predictive Analysis

吳家安, Wu, Chiao-An Unknown Date
Insurers traditionally transfer their insurance risk through the international reinsurance market. Due to the uncertainty of the insured events, the primary insurer needs to carefully evaluate its insured risks and transfer them to its reinsurers. There are two major types of reinsurance, the pro rata treaty and the excess of loss treaty, which insurers use to spread claim losses and strengthen their own solvency. In this article, the predictive distribution of the claim size is constructed to monitor future underwriting losses under a given reinsurance agreement. Simple Importance Resampling (SIR) is employed to sample the posterior distribution of the risk parameters, and Monte Carlo simulation is used to approximate the predictive distribution. Plausible prior distributions of the risk parameters are chosen for simulating their posterior distributions. A Markov chain Monte Carlo (MCMC) method using a Gibbs sampling scheme is also applied, based on possible parametric structures, to determine the optimal retention level and support the reinsurance decision-making process. Both pro rata and excess of loss treaties are investigated to quantify the retained risks of the reinsurers. Our model focuses on the insurance risks; through the implemented model and simulation techniques, the primary insurer can better project its underwriting risks. The results show the significant advantage and flexibility of this approach in risk management. This article outlines the procedure for building the model, and finally a practical case study is performed for numerical illustration.
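A hedged Python sketch of the SIR step described above, on a toy severity model (exponential claims with unknown rate, gamma prior); the data, prior, treaty retention and the closed-form ceded-loss formula for the exponential case are illustrative assumptions, not the thesis's model:

```python
import numpy as np

rng = np.random.default_rng(11)

# Toy severity model: claim sizes X ~ Exponential(rate=theta),
# prior theta ~ Gamma(2, rate=1). Synthetic "observed" claims:
claims = rng.exponential(scale=1 / 0.4, size=30)

# Simple Importance Resampling (SIR):
draws = rng.gamma(2.0, 1.0, size=20_000)                     # 1) draw from the prior
loglik = len(claims) * np.log(draws) - draws * claims.sum()  # exponential log-likelihood
w = np.exp(loglik - loglik.max())                            # 2) importance weights
w /= w.sum()
post = rng.choice(draws, size=5_000, replace=True, p=w)      # 3) weighted resample

print(f"posterior mean rate: {post.mean():.3f} (data generated with 0.4)")

# Posterior-averaged ceded loss per claim under an excess-of-loss treaty with
# retention R: for Exp(theta), E[(X - R)+] = exp(-R*theta) / theta.
R = 5.0
print(f"expected ceded loss per claim: {np.mean(np.exp(-R * post) / post):.3f}")
```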
