11

Joint Calibration of a Cladding Oxidation and a Hydrogen Pick-up Model for Westinghouse Electric Sweden AB

Nyman, Joakim January 2020
Knowledge of a nuclear power plant's potential and limitations is of utmost importance when working in the nuclear field. One way to extend this knowledge is to use fuel performance codes that mimic the real-world phenomena as closely as possible. Fuel performance codes involve a system of interlinked and complex models that predict the thermo-mechanical behaviour of the fuel rods. These models use several model parameters that can be imprecise, so the parameters need to be fitted/calibrated against measurement data. This thesis presents two methods to calibrate model parameters in the presence of unknown sources of uncertainty. The case where these methods have been tested is the oxidation and hydrogen pick-up of the zirconium cladding around the fuel rods. Initially, training and testing data were sampled using the Dakota software in combination with the nuclear simulation program TRANSURANUS so that a Gaussian process surrogate model could be built. The model parameters were then calibrated in a Bayesian way by an MCMC algorithm. Additionally, two models are presented to handle unknown sources of uncertainty that may arise from model inadequacies, nuisance parameters or hidden measurement errors: the Marginal likelihood optimization method and the Margin method. To calibrate the model parameters, data from two sources were used: one source that only had data on oxide thickness but was extensive, and another that had both oxide and hydrogen concentration data, but fewer measurements. The model parameters were calibrated using the presented methods, but an unforeseen non-linearity in the joint oxidation and hydrogen pick-up case when predicting the correlation of the model parameters made this result unreliable.
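The calibration workflow described in this abstract (a surrogate built from sampled simulator runs, then Bayesian calibration by MCMC) can be illustrated with a minimal sketch. Everything below is hypothetical: a toy oxide-growth law stands in for TRANSURANUS, a scikit-learn Gaussian process replaces the thesis's surrogate, and a fixed `sigma_margin` variance term only loosely mimics the Margin treatment of unknown uncertainty.

```python
# Minimal, hypothetical sketch: GP surrogate of a toy simulator + random-walk
# Metropolis calibration of a single model parameter against noisy measurements.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def simulator(theta, time):
    # Hypothetical oxide-growth law, NOT the TRANSURANUS model.
    return theta * np.sqrt(time)

# 1. Sample training runs (Dakota's role in the thesis) and fit a surrogate.
thetas = rng.uniform(0.5, 2.0, size=40)
times = rng.uniform(1.0, 100.0, size=40)
X_train = np.column_stack([thetas, times])
y_train = simulator(thetas, times)
gp = GaussianProcessRegressor(kernel=RBF(length_scale=[0.5, 20.0])).fit(X_train, y_train)

# 2. Synthetic measurements at known times, with noise.
t_obs = np.array([10.0, 30.0, 60.0, 90.0])
y_obs = simulator(1.3, t_obs) + rng.normal(0.0, 0.2, size=t_obs.size)

def log_posterior(theta, sigma_margin=0.3):
    if not 0.5 < theta < 2.0:               # uniform prior bounds
        return -np.inf
    X = np.column_stack([np.full_like(t_obs, theta), t_obs])
    mu, gp_std = gp.predict(X, return_std=True)
    var = gp_std**2 + sigma_margin**2       # surrogate variance + "margin" term
    return -0.5 * np.sum((y_obs - mu)**2 / var + np.log(2 * np.pi * var))

# 3. Random-walk Metropolis over the single model parameter.
theta, logp = 1.0, log_posterior(1.0)
chain = []
for _ in range(5000):
    prop = theta + rng.normal(0.0, 0.05)
    logp_prop = log_posterior(prop)
    if np.log(rng.uniform()) < logp_prop - logp:
        theta, logp = prop, logp_prop
    chain.append(theta)

print("posterior mean of theta:", np.mean(chain[1000:]))
```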
12

A new estimation approach for modeling activity-travel behavior : applications of the composite marginal likelihood approach in modeling multidimensional choices

Ferdous, Nazneen 04 November 2011
The research in the field of travel demand modeling is driven by the need to understand individuals’ behavior in the context of travel-related decisions as accurately as possible. In this regard, the activity-based approach to modeling travel demand has received substantial attention in the past decade, both in the research arena and in practice. At the same time, recent efforts have focused on more fully realizing the potential of activity-based models by explicitly recognizing the multi-dimensional nature of activity-travel decisions. However, as more behavioral elements/dimensions are added, the dimensionality of the model systems tends to explode, making the estimation of such models all but infeasible using traditional inference methods. As a result, analysts and practitioners often trade off between recognizing attributes that will make a model behaviorally more representative (from a theoretical viewpoint) and being able to estimate/implement a model (from a practical viewpoint). An alternative approach to dealing with the estimation complications arising from multi-dimensional choice situations is the technique of composite marginal likelihood (CML). This is an estimation technique that is gaining substantial attention in the statistics field, though there has been relatively little coverage of this method in transportation and other fields. The CML approach is a conceptually and pedagogically simpler, simulation-free procedure (relative to traditional approaches that employ simulation techniques), and has the advantage of reproducibility of results. Under the usual regularity assumptions, the CML estimator is consistent, unbiased, and asymptotically normally distributed. The discussion above indicates that the CML approach has the potential to contribute to the area of travel demand modeling in a significant way. For example, the approach can be used to develop conceptually and behaviorally more appealing models to examine individuals’ travel decisions in a joint framework. The overarching goal of the current research work is to demonstrate the applicability of the CML approach in the area of activity-travel demand modeling and to highlight the enhanced features of the choice models estimated using the CML approach. The goal of the dissertation is achieved in three steps: (1) by evaluating the performance of the CML approach in multivariate situations, (2) by developing multidimensional choice models using the CML approach, and (3) by demonstrating applications of the multidimensional choice models developed in this dissertation.
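As an illustration of the composite marginal likelihood idea, the sketch below replaces a full multivariate-normal likelihood with the product of all bivariate (pairwise) marginals and maximizes it for a single equicorrelation parameter. The data, the equicorrelation structure, and the parameter values are invented for illustration and are not from the dissertation.

```python
# Minimal sketch of a pairwise composite marginal likelihood (CML) estimator.
import itertools
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import multivariate_normal

rng = np.random.default_rng(1)
D, N, true_rho = 6, 500, 0.4
cov = np.full((D, D), true_rho) + (1 - true_rho) * np.eye(D)   # equicorrelation
Y = rng.multivariate_normal(np.zeros(D), cov, size=N)

def neg_pairwise_loglik(rho):
    # Sum of bivariate-marginal log-densities over all variable pairs.
    total = 0.0
    for i, j in itertools.combinations(range(D), 2):
        pair_cov = np.array([[1.0, rho], [rho, 1.0]])
        total += multivariate_normal.logpdf(Y[:, [i, j]],
                                            mean=np.zeros(2),
                                            cov=pair_cov).sum()
    return -total

res = minimize_scalar(neg_pairwise_loglik, bounds=(-0.9, 0.9), method="bounded")
print("pairwise-CML estimate of rho:", res.x)
```

No simulation of high-dimensional integrals is needed: only bivariate densities are evaluated, which is the practical appeal of the CML approach highlighted in the abstract.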
13

Addressing Challenges in Graphical Models: MAP estimation, Evidence, Non-Normality, and Subject-Specific Inference

Sagar K N Ksheera (15295831) 17 April 2023
Graphs are a natural choice for understanding the associations between variables, and assuming a probabilistic embedding for the graph structure leads to a variety of graphical models that enable us to understand these associations even further. In the realm of high-dimensional data, where the number of associations between interacting variables is far greater than the available number of data points, the goal is to infer a sparse graph. In this thesis, we make contributions in the domain of Bayesian graphical models, where our prior belief on the graph structure, encoded via uncertainty on the model parameters, enables the estimation of sparse graphs.

We begin with the Gaussian Graphical Model (GGM) in Chapter 2, one of the simplest and most famous graphical models, where the joint distribution of interacting variables is assumed to be Gaussian. In GGMs, the conditional independence among variables is encoded in the inverse of the covariance matrix, also known as the precision matrix. Under a Bayesian framework, we propose a novel prior-penalty dual called the 'graphical horseshoe-like' prior and penalty to estimate the precision matrix. We also establish the posterior convergence of the precision matrix estimate and the frequentist consistency of the maximum a posteriori (MAP) estimator.

In Chapter 3, we develop a general framework based on local linear approximation for MAP estimation of the precision matrix in GGMs. This framework holds for any graphical prior whose element-wise priors can be written as a Laplace scale mixture. As an application of the framework, we perform MAP estimation of the precision matrix under the graphical horseshoe penalty.

In Chapter 4, we focus on graphical models where the joint distribution of interacting variables cannot be assumed Gaussian. Motivated by quantile graphical models, where the Gaussian likelihood assumption is relaxed, we draw inspiration from the domain of precision medicine, where personalized inference is crucial to tailor individual-specific treatment plans. With an aim to infer directed acyclic graphs (DAGs), we propose a novel quantile DAG learning framework in which the DAGs depend on individual-specific covariates, making personalized inference possible. We demonstrate the potential of this framework in the regime of precision medicine by applying it to infer protein-protein interaction networks in lung adenocarcinoma and lung squamous cell carcinoma.

Finally, we conclude this thesis in Chapter 5 by developing a novel framework to compute the marginal likelihood in a GGM, addressing a longstanding open problem. Under this framework, we can compute the marginal likelihood for a broad class of priors on the precision matrix, where the element-wise priors on the diagonal entries can be written as gamma or scale mixtures of gamma random variables and those on the off-diagonal terms can be represented as normal or scale mixtures of normal. This result paves new roads for model selection using Bayes factors and tuning of prior hyper-parameters.
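For a concrete, if simplified, picture of sparse precision-matrix estimation in a GGM, the sketch below uses the off-the-shelf graphical lasso (an L1 penalty) from scikit-learn purely as a stand-in for the thesis's graphical horseshoe-like prior/penalty; the chain-graph precision matrix and all parameter values are invented for illustration.

```python
# Minimal sketch: recover a sparse precision matrix (graph) from Gaussian data.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(2)

# A sparse, tridiagonal "true" precision matrix (chain graph on 8 nodes).
p = 8
Omega = np.eye(p) + 0.4 * (np.eye(p, k=1) + np.eye(p, k=-1))
Sigma = np.linalg.inv(Omega)
X = rng.multivariate_normal(np.zeros(p), Sigma, size=400)

# Penalized estimation of the precision matrix (L1 penalty as a stand-in).
model = GraphicalLasso(alpha=0.05).fit(X)
est = model.precision_

# Recovered graph: nonzero off-diagonal entries of the estimated precision.
edges = np.argwhere(np.abs(np.triu(est, k=1)) > 1e-3)
print("estimated edges:", edges.tolist())
```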
14

MARGINAL LIKELIHOOD INFERENCE FOR FRAILTY AND MIXTURE CURE FRAILTY MODELS UNDER BIRNBAUM-SAUNDERS AND GENERALIZED BIRNBAUM-SAUNDERS DISTRIBUTIONS

Liu, Kai January 2018
Survival analytic methods help to analyze lifetime data arising from medical and reliability experiments. The popular proportional hazards model, proposed by Cox (1972), is widely used in survival analysis to study the effect of risk factors on lifetimes. An important assumption in regression-type analysis is that all relevant risk factors should be included in the model. However, not all relevant risk factors are observed, due to measurement difficulty, inaccessibility, cost considerations, and so on. These unobservable risk factors can be modelled by the so-called frailty model, originally introduced by Vaupel et al. (1979). Furthermore, the frailty model is also applicable to clustered data. Clustered data possess the feature that observations within the same cluster share similar conditions and environments, which are sometimes difficult to observe. For example, patients from the same family share similar genetics, and patients treated in the same hospital share the same group of professionals and the same environmental conditions. These factors are indeed hard to quantify or measure. In addition, this type of similarity introduces correlation among subjects within clusters. In this thesis, a semi-parametric frailty model is proposed to address the aforementioned issues. The baseline hazard function is approximated by a piecewise constant function, and the frailty distribution is assumed to be a Birnbaum-Saunders distribution. Due to the advancement of modern medical sciences, many diseases are curable, which in turn leads to the need to incorporate a cure proportion in the survival model. The frailty model discussed here is further extended to a mixture cure rate frailty model by integrating the frailty model into the mixture cure rate model proposed originally by Boag (1949) and Berkson and Gage (1952). By linking the covariates to the cure proportion through a logistic/logit link function, and linking observable and unobservable covariates to the lifetime of the uncured population through the frailty model, we obtain a flexible model to study the effect of risk factors on lifetimes. The mixture cure frailty model reduces to a mixture cure model if the effect of the frailty term is negligible (i.e., the variance of the frailty distribution is close to 0). On the other hand, it reduces to the usual frailty model if the cure proportion is 0. Therefore, we can use a likelihood ratio test to test whether the reduced model is adequate for the given data. We assume the baseline hazard to be that of a Weibull distribution, since the Weibull distribution possesses an increasing, constant or decreasing hazard rate, and the frailty distribution to be a Birnbaum-Saunders distribution. Díaz-García and Leiva-Sánchez (2005) proposed a new family of life distributions, called the generalized Birnbaum-Saunders distribution, which includes the Birnbaum-Saunders distribution as a special case. It allows for various degrees of kurtosis and skewness, and also permits unimodality as well as bimodality. Therefore, integrating a generalized Birnbaum-Saunders distribution as the frailty distribution in the mixture cure frailty model results in a very flexible model. For this general model, parameter estimation is carried out using a marginal likelihood approach. One of the difficulties in the parameter estimation is that the likelihood function is intractable.
Current computational technology enables us to develop a numerical method based on Monte Carlo simulation: the likelihood function is approximated by the Monte Carlo method, and the maximum likelihood estimates and standard errors of the model parameters are then obtained numerically by maximizing this approximate likelihood function. An EM algorithm is also developed for the Birnbaum-Saunders mixture cure frailty model. The performance of this estimation method is then assessed by simulation studies for each proposed model. Model discrimination is also performed between the Birnbaum-Saunders frailty model and the generalized Birnbaum-Saunders mixture cure frailty model. Some real-life examples are presented to illustrate the models and inferential methods developed here. / Thesis / Doctor of Science (PhD)
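The Monte Carlo approximation of the marginal likelihood can be sketched as follows: frailties are drawn repeatedly from a Birnbaum-Saunders distribution (SciPy's `fatiguelife`), the conditional Weibull likelihood is evaluated for each draw, and the draws are averaged to integrate the frailty out. Covariates and the cure fraction are omitted here, and all parameter values and data are invented for illustration.

```python
# Minimal sketch: Monte Carlo marginal log-likelihood for a Weibull frailty model
# with a Birnbaum-Saunders (fatigue-life) frailty distribution.
import numpy as np
from scipy.stats import fatiguelife

rng = np.random.default_rng(3)

def mc_marginal_loglik(t, delta, shape_w, scale_w, bs_shape, n_draws=5000):
    """t: observed times, delta: 1 = event, 0 = right-censored."""
    z = fatiguelife.rvs(bs_shape, size=n_draws, random_state=rng)    # frailty draws
    H0 = (t / scale_w) ** shape_w                                    # Weibull cumulative hazard
    h0 = (shape_w / scale_w) * (t / scale_w) ** (shape_w - 1)        # Weibull hazard
    # Conditional likelihood given frailty z:
    #   event:    z*h0(t) * exp(-z*H0(t));   censored: exp(-z*H0(t))
    L = (np.outer(z, h0) ** delta) * np.exp(-np.outer(z, H0))
    return np.sum(np.log(L.mean(axis=0)))   # average over frailty draws, then log

t = np.array([2.1, 3.5, 0.8, 5.0, 4.2])
delta = np.array([1, 1, 1, 0, 0])
print(mc_marginal_loglik(t, delta, shape_w=1.5, scale_w=4.0, bs_shape=0.5))
```

In the thesis this approximate likelihood is then maximized numerically over the model parameters; here the function is only evaluated at a fixed parameter point.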
15

競爭風險下長期存活資料之貝氏分析 / Bayesian analysis for long-term survival data

蔡佳蓉 Unknown Date
When there is more than one possible cause of failure, and each subject experiences at most one failure type at a time, these failure types are called competing risks. Moreover, some subjects never fail because they are immune or have been cured; these subjects are referred to as the cured group. This dissertation considers mixture models that handle both competing risks and a cure rate, i.e., cure rate models under competing risks. Covariates are associated with the cure rate, with the conditional failure probability of each risk, or with the conditional survival function of each risk among the uncured, and the proposed Bayesian procedure is based on the augmented likelihood function of the complete data. For right-censored subjects, imputation is used to determine whether they are cured or fail from a particular risk, and the full conditional posterior distributions of the parameters are derived. Since the marginal posterior distributions do not have closed forms and the statuses of right-censored subjects must be identified, Gibbs sampling and adaptive rejection sampling, within a Markov chain Monte Carlo framework, are used to simulate parameter values and impute the cure or failure status of right-censored subjects. For illustration, we analyze the bone marrow transplant data for leukaemia patients from Klein and Moeschberger (1997). For model selection, we compute the conditional predictive ordinate (CPO) for every subject under each model using the simulated parameter values, convert these into the log pseudo marginal likelihood (LPML) of each model, and compare the models on that basis.
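The CPO/LPML model-comparison step mentioned above can be sketched in a few lines: for each subject, the conditional predictive ordinate is the harmonic mean of its likelihood over posterior draws, and LPML is the sum of the log-CPOs. The exponential likelihood and mock posterior draws below are placeholders for illustration, not the dissertation's competing-risks cure model.

```python
# Minimal sketch of CPO and LPML computation from posterior draws.
import numpy as np
from scipy.stats import expon

rng = np.random.default_rng(4)

y = rng.exponential(scale=2.0, size=50)                     # toy survival times
rate_draws = rng.gamma(50, 1.0 / (50 * 2.0), size=2000)     # mock posterior draws of the rate

# Likelihood of each observation under each posterior draw: shape (draws, subjects).
lik = expon.pdf(y[None, :], scale=1.0 / rate_draws[:, None])

cpo = 1.0 / np.mean(1.0 / lik, axis=0)                      # harmonic-mean identity for CPO
lpml = np.sum(np.log(cpo))
print("LPML:", lpml)
```

A larger LPML indicates better predictive performance, which is how the competing models are ranked in the dissertation.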
16

Computational Bayesian techniques applied to cosmology

Hee, Sonke January 2018
This thesis presents work around three themes: dark energy, gravitational waves and Bayesian inference. Both dark energy and gravitational wave physics are not yet well constrained. They present interesting challenges for Bayesian inference, which attempts to quantify our knowledge of the universe given our astrophysical data. A dark energy equation of state reconstruction analysis finds that the data favours the vacuum dark energy equation of state $w = -1$ model. Deviations from vacuum dark energy are shown to favour the super-negative ‘phantom’ dark energy regime of $w < -1$, but at low statistical significance. The constraining power of various datasets is quantified, finding that data constraints peak around redshift $z = 0.2$ due to baryonic acoustic oscillation and supernovae data, whilst cosmic microwave background radiation and Lyman-$\alpha$ forest constraints are less significant. Specific models with a conformal time symmetry in the Friedmann equation and with an additional dark energy component are tested and shown to be competitive with the vacuum dark energy model by Bayesian model selection analysis: that they are not ruled out is believed to be largely due to poor data quality for deciding between existing models. Recent detections of gravitational waves by the LIGO collaboration enable the first gravitational wave tests of general relativity. An existing test in the literature is used and sped up significantly by a novel method developed in this thesis. The test computes posterior odds ratios, and the new method is shown to compute these accurately and efficiently. Compared to computing evidences directly, the presented method reduces the number of likelihood calculations required to reach a given accuracy by roughly a factor of 100. Further testing may identify a significant advance in Bayesian model selection using nested sampling, as the method is completely general and straightforward to implement. We note that efficiency gains are not guaranteed and may be problem-specific: further research is needed.
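As a self-contained illustration of Bayesian model selection via evidences, in the spirit of the dark-energy comparison above, the sketch below compares a model with $w$ fixed at $-1$ against one with $w$ free under a uniform prior, using a toy Gaussian likelihood; the measurement value and uncertainty are invented and are not from the thesis.

```python
# Minimal sketch: evidences and a Bayes factor for two toy dark-energy models.
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

w_hat, sigma_w = -1.05, 0.08        # mock measurement of w and its uncertainty

def likelihood(w):
    return norm.pdf(w_hat, loc=w, scale=sigma_w)

# Evidence of M0: no free parameter, likelihood evaluated at w = -1.
Z0 = likelihood(-1.0)

# Evidence of M1: likelihood averaged over the uniform prior pi(w) = 1/2 on [-2, 0].
Z1, _ = quad(lambda w: likelihood(w) * 0.5, -2.0, 0.0)

bayes_factor = Z0 / Z1
print("Bayes factor B01 =", bayes_factor)   # > 1 favours the fixed w = -1 model
```

With equal prior odds on the two models, this Bayes factor equals the posterior odds ratio; nested sampling plays the role of the integral when the parameter space is high-dimensional.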
