  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
61

Inference for Populations:  Uncertainty Propagation via Bayesian Population Synthesis

Grubb, Christopher Thomas 16 August 2023 (has links)
In this dissertation, we develop a new type of prior distribution, specifically for populations themselves, which we denote the Dirichlet Spacing prior. This prior solves a specific problem that arises when attempting to create synthetic populations from a known subset: the unfortunate reality that assuming independence between population members means that every synthetic population will be essentially the same. This is a problem because any model which yields only one result (or several very similar results) when we have very incomplete information is fundamentally flawed. We motivate our need for this new class of priors using Agent-based Models, though this prior could be used in any situation requiring synthetic populations. / Doctor of Philosophy / Typically, statisticians work with parametric distributions governing independent observations. However, sometimes operating under the assumption of independence severely limits us. We motivate the move away from independent sampling via the scope of Agent-based Modeling (ABM), where full populations are needed. The assumption of independence, when applied to synthesizing populations, leads to unwanted results; specifically, all synthetic populations generated from the same sample data are essentially the same. As statisticians, this is clearly problematic because, given only a small subset of the population, we clearly do not know what the population looks like, and thus any model which always gives the same answer is fundamentally flawed. We fix this problem by utilizing a new class of distributions, which we call spacing priors, that allow us to create synthetic populations of individuals which are not independent of each other.
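The motivating problem above can be seen numerically. The following sketch (not the dissertation's model; all data are invented) shows that when synthetic populations are generated by iid sampling from a distribution fitted to a small sample, their summary statistics barely vary, even though a sample of 100 leaves substantial real uncertainty about the population:

```python
# Illustrative sketch of the iid-resampling problem described above.
import random
import statistics

random.seed(7)

# A small observed subset of a population (hypothetical measurements).
sample = [random.gauss(50.0, 10.0) for _ in range(100)]
mu, sigma = statistics.mean(sample), statistics.stdev(sample)

# Generate many synthetic populations of size 10,000 by iid sampling
# from the fitted normal; their means are essentially identical.
pop_means = []
for _ in range(50):
    pop = [random.gauss(mu, sigma) for _ in range(10_000)]
    pop_means.append(statistics.mean(pop))

spread = max(pop_means) - min(pop_means)
# The spread across 50 synthetic populations is tiny relative to sigma.
print(f"spread of synthetic-population means: {spread:.3f} (sigma = {sigma:.1f})")
```

Every synthetic population tells the same story, which is exactly the behavior a spacing-type prior on the population itself is meant to avoid.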
62

Incorporating Historical Data via Bayesian Analysis Based on The Logit Model

Chenxi, Yu January 2018 (has links)
This thesis presents a Bayesian approach to incorporating historical data. Usually, in statistical inference, a large data size is required to establish strong evidence; in most bioassay experiments, however, the dataset is of limited size. Here, we propose a method that is able to incorporate control-group data from historical studies. The approach is framed in the context of testing whether an increased dosage of a chemical is associated with an increased probability of an adverse event. To test whether such a relationship exists, the proposed approach compares two logit models via a Bayes factor. In particular, we eliminate the effect of survival time by using the poly-k test. We test the performance of the proposed approach by applying it to six simulated scenarios. / Thesis / Master of Science (MSc)
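The thesis compares two logit models via a Bayes factor. As a much simpler conjugate stand-in for that comparison (not the thesis's method; counts are invented), the sketch below compares a no-dose-effect model against a per-dose-group model using exact Beta-Binomial marginal likelihoods:

```python
# Hedged toy version of a Bayes-factor model comparison for dose-response
# data: M0 = one shared adverse-event probability across dose groups,
# M1 = a separate probability per dose group, both with uniform Beta priors.
from math import lgamma, exp

def log_beta(a, b):
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def log_marginal(successes, trials, a=1.0, b=1.0):
    """Log marginal likelihood of a Binomial count under a Beta(a, b) prior."""
    return log_beta(a + successes, b + trials - successes) - log_beta(a, b)

# Hypothetical counts: adverse events out of 50 animals at increasing doses.
events = [1, 4, 10, 20]
n = [50, 50, 50, 50]

# M0: one shared probability governs all groups.
log_m0 = log_marginal(sum(events), sum(n))
# M1: each dose group has its own probability.
log_m1 = sum(log_marginal(y, m) for y, m in zip(events, n))

bf10 = exp(log_m1 - log_m0)
print(f"Bayes factor in favor of a dose effect: {bf10:.1f}")
```

With a strong upward trend in the invented counts, the Bayes factor strongly favors the dose-effect model; the thesis's logit-model comparison plays the same role with dose entering as a covariate.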
63

Categorizing Abortions By Safety Category: A Bayesian Hierarchical Modeling Approach

Kang, Zhenning 09 July 2018 (has links) (PDF)
Since the 1990s, the World Health Organization has defined abortion as safe if it was done with a recommended method appropriate to the pregnancy duration and if the person providing the abortion was trained. In this study, we used a three-tiered categorization of abortion safety: an abortion is less safe if only one of these two criteria was met, and least safe if neither was met. We included all available empirical data on abortion methods, providers, and settings, together with factors affecting safety as covariates, to estimate the global, regional, and subregional distributions of abortion by safety category for the period 2010-2014. We applied a Bayesian hierarchical model with two regression submodels to estimate abortion safety: one submodel estimated the safe proportion, and the other divided the unsafe proportion into less safe and least safe. Country intercepts were included in both submodels and estimated using hierarchical models. Data sources were assigned varying levels of uncertainty or treated as minima or maxima to reflect the quality of reporting. We constructed 90% highest density intervals as credible intervals to reflect uncertainty in the final outcomes. We carried out model selection using information criteria, examined model validation, and carried out various checks of the sensitivity of results to the prior distributions used and to outlying countries. We found that the model was reasonably well calibrated and that subregional estimates were not sensitive to outlying observations or prior choice. Of the 55.7 million abortions that occurred worldwide each year between 2010 and 2014, we estimated that 30.6 million (54.9%, 90% uncertainty interval 49.9-59.4) were safe, 17.1 million (30.7%, 25.5-35.6) were less safe, and 8.0 million (14.4%, 11.5-18.1) were least safe.
The proportion of unsafe abortions was significantly higher in developing countries than developed countries, and significantly higher in countries with highly restrictive abortion laws than in those with less restrictive laws. In-depth assessments of data quality and factors affecting abortion safety in outlying countries may result in further model improvements.
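The two-submodel structure described above can be sketched minimally (hypothetical parameter values, not the study's estimates): one logit-scale quantity gives the safe proportion, a second splits the unsafe remainder, so the three categories always sum to one by construction:

```python
# Minimal sketch of a two-submodel split into three safety categories.
from math import exp

def inv_logit(x):
    return 1.0 / (1.0 + exp(-x))

def safety_proportions(alpha_safe, alpha_split):
    """Submodel 1: safe vs. unsafe; submodel 2: split unsafe into two tiers."""
    p_safe = inv_logit(alpha_safe)
    p_less = (1 - p_safe) * inv_logit(alpha_split)
    p_least = (1 - p_safe) * (1 - inv_logit(alpha_split))
    return p_safe, p_less, p_least

p = safety_proportions(0.2, 0.75)
print([round(v, 3) for v in p], "sum =", round(sum(p), 6))
```

In the study, country intercepts and covariates enter both logit-scale quantities hierarchically; the sketch only shows why the composition constraint is automatic.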
64

Applying an Intrinsic Conditional Autoregressive Reference Prior for Areal Data

Porter, Erica May 09 July 2019 (has links)
Bayesian hierarchical models are useful for modeling spatial data because they have flexibility to accommodate complicated dependencies that are common to spatial data. In particular, intrinsic conditional autoregressive (ICAR) models are commonly assigned as priors for spatial random effects in hierarchical models for areal data corresponding to spatial partitions of a region. However, selection of prior distributions for these spatial parameters presents a challenge to researchers. We present and describe ref.ICAR, an R package that implements an objective Bayes intrinsic conditional autoregressive prior on a vector of spatial random effects. This model provides an objective Bayesian approach for modeling spatially correlated areal data. ref.ICAR enables analysis of spatial areal data for a specified region, given user-provided data and information about the structure of the study region. The ref.ICAR package performs Markov Chain Monte Carlo (MCMC) sampling and outputs posterior medians, intervals, and trace plots for fixed effect and spatial parameters. Finally, the functions provide regional summaries, including medians and credible intervals for fitted values by subregion. / Master of Science / Spatial data is increasingly relevant in a wide variety of research areas. Economists, medical researchers, ecologists, and policymakers all make critical decisions about populations using data that naturally display spatial dependence. One such data type is areal data; data collected at county, habitat, or tract levels are often spatially related. Most convenient software platforms provide analyses for independent data, as the introduction of spatial dependence increases the complexity of corresponding models and computation. Use of analyses with an independent data assumption can lead researchers and policymakers to make incorrect, simplistic decisions. 
Bayesian hierarchical models can be used to effectively model areal data because they have flexibility to accommodate complicated dependencies that are common to spatial data. However, use of hierarchical models increases the number of model parameters and requires specification of prior distributions. We present and describe ref.ICAR, an R package available to researchers that automatically implements an objective Bayesian analysis that is appropriate for areal data.
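The intrinsic CAR structure underlying priors like the one in ref.ICAR can be sketched with the standard construction (this is the textbook form, not the package's code): for a neighborhood graph with adjacency matrix W and diagonal degree matrix D, the ICAR prior on spatial effects has precision Q = D - W, which is singular because every row sums to zero, making the prior improper:

```python
# Build the ICAR precision matrix Q = D - W for a 2x2 grid of areal units
# with rook adjacency, and verify the intrinsic (rank-deficient) structure.
edges = [(0, 1), (0, 2), (1, 3), (2, 3)]
n = 4
W = [[0] * n for _ in range(n)]
for i, j in edges:
    W[i][j] = W[j][i] = 1

# Q = D - W: degree on the diagonal, -1 for each pair of neighbors.
Q = [[(sum(W[i]) if i == j else 0) - W[i][j] for j in range(n)]
     for i in range(n)]

for row in Q:
    print(row, "row sum =", sum(row))
```

The zero row sums are why objective-Bayes treatments of the ICAR prior, like the reference prior the package implements, need special care with the improper level of the spatial effects.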
65

Autonomous Navigation, Perception and Probabilistic Fire Location for an Intelligent Firefighting Robot

Kim, Jong Hwan 09 October 2014 (has links)
Firefighting robots are actively being researched to reduce firefighter injuries and deaths as well as to increase effectiveness in performing tasks. It has proven difficult to develop firefighting robots that autonomously locate a fire inside a structure when the fire is not in the robot's direct field of view: the sensors commonly used on robots cannot function properly in smoke-filled environments with high temperatures and zero visibility. Also, existing obstacle avoidance methods have limitations in calculating safe trajectories and solving the local-minimum problem while avoiding obstacles in real time in cluttered and dynamic environments. In addition, research on characterizing fire environments to provide firefighting robots with headings that ultimately lead them to the fire is incomplete. For use on intelligent firefighting robots, this research developed a real-time local obstacle avoidance method, local dynamic goal-based fire location, appropriate feature selection for fire environment assessment, and probabilistic classification of fire, smoke, and their thermal reflections. The real-time local obstacle avoidance method, called the weighted vector method, perceives the local environment through vectors, identifies suitable obstacle avoidance modes by applying a decision tree, uses weighting functions to select the necessary vectors, and geometrically computes a safe heading. The method also solves local obstacle avoidance problems by integrating global and local goals to reach the final goal. To locate a fire outside the robot's field of view, a local dynamic goal-based 'Seek-and-Find' fire algorithm was developed by fusing long-wave infrared camera images, an ultraviolet radiation sensor, and lidar. The weighted vector method was applied to avoid complex static and unexpected dynamic obstacles while moving toward the fire.
This algorithm was successfully validated for a firefighting robot to autonomously navigate to find a fire outside the field of view. An improved 'Seek-and-Find' fire algorithm was developed using Bayesian classifiers to identify fire features using thermal images. This algorithm was able to discriminate fire and smoke from thermal reflections and other hot objects, allowing the prediction of a more robust heading for the robot. To develop this algorithm, appropriate motion and texture features that can accurately identify fire and smoke from their reflections were analyzed and selected by using multi-objective genetic algorithm optimization. As a result, mean and variance of intensity, entropy and inverse difference moment in the first and second order statistical texture features were determined to probabilistically classify fire, smoke, their thermal reflections and other hot objects simultaneously. This classification performance was measured to be 93.2% accuracy based on validation using the test dataset not included in the original training dataset. In addition, the precision, recall, F-measure, and G-measure were 93.5 - 99.9% for classifying fire and smoke using the test dataset. / Ph. D.
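The classification step described above can be sketched as a tiny Gaussian naive Bayes over two texture-style features (mean intensity and entropy), discriminating "fire" from "thermal reflection". All feature values and class statistics below are invented for illustration; the dissertation's classifier uses first- and second-order texture statistics learned from real thermal images:

```python
# Hedged sketch: Gaussian naive Bayes over two hypothetical image features.
from math import log, pi

def log_gauss(x, mu, var):
    """Log density of a univariate normal."""
    return -0.5 * log(2 * pi * var) - (x - mu) ** 2 / (2 * var)

# Per-class (mean, variance) for each feature, plus a class prior.
classes = {
    "fire":       {"prior": 0.5, "stats": [(220.0, 400.0), (7.5, 0.25)]},
    "reflection": {"prior": 0.5, "stats": [(160.0, 900.0), (5.0, 1.00)]},
}

def classify(features):
    scores = {}
    for name, c in classes.items():
        s = log(c["prior"])
        for x, (mu, var) in zip(features, c["stats"]):
            s += log_gauss(x, mu, var)  # naive Bayes: features independent
        scores[name] = s
    return max(scores, key=scores.get)

print(classify([215.0, 7.2]))   # hot, high-entropy region
print(classify([150.0, 4.8]))   # cooler, smoother region
```

The real pipeline adds more features (variance, inverse difference moment) selected by the multi-objective genetic algorithm, but the decision rule has this same posterior-comparison shape.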
66

Bayesian Analysis of Cancer Mortality Rates from Different Types and their Relative Occurrences

Delcroix, Sophie M. 14 December 1999 (has links)
We analyze mortality data from prostate, colon, lung, and all other types of cancer (called other cancer) to obtain age-specific and age-adjusted mortality rates for white males in the U.S. A related problem is to estimate the relative occurrences of these four types of cancer. We use a Bayesian method because it permits a degree of smoothing, which is needed to analyze data at a small-area level and to assess the patterns. In the recent Atlas of United States Mortality (1996), each type of cancer was analyzed individually. The difficulty in doing so is that there are many small areas with zero deaths. We conjecture that simultaneous analyses might help to overcome this problem and, at the same time, estimate the relative occurrences. We start with a Poisson model for the deaths, which produces a likelihood function that separates into two parts: a Poisson likelihood for the rates and a multinomial likelihood for the relative occurrences. These permit the use of a standard Poisson regression model on age, as in Nandram, Sedransk and Pickle (1999); the novelty is a multivariate logit model on the relative occurrences in which per capita income, the percent of people below the poverty level, education (percent of people with four years of college), and two criteria pollutants, EPAPM25 and EPASO2, are used as covariates. We fitted the models using Markov chain Monte Carlo methods. We used one of the models to present maps of occurrences and rates for the four types; an alternative model did not work well because it provides the same pattern by age and disease. We found that while EPAPM25 has a negative effect on the occurrences, EPASO2 has a positive effect. We also found some interesting patterns associated with the geographical variation of mortality rates and the relative occurrences of the four cancer types.
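The likelihood separation described above is an exact identity: independent Poisson counts, conditioned on their total, factor into a Poisson likelihood for the total rate times a multinomial likelihood for the relative occurrences. The check below uses invented counts and rates, not the study's data:

```python
# Numerical check: product of independent Poisson likelihoods equals
# Poisson(total; total rate) times Multinomial(counts; rate shares).
from math import lgamma, log

def log_poisson(y, lam):
    return y * log(lam) - lam - lgamma(y + 1)

def log_multinomial(ys, ps):
    out = lgamma(sum(ys) + 1)
    for y, p in zip(ys, ps):
        out += y * log(p) - lgamma(y + 1)
    return out

# Hypothetical death counts and rates for four cancer types in one area.
deaths = [12, 30, 45, 88]
rates = [10.0, 28.0, 50.0, 90.0]

lhs = sum(log_poisson(y, lam) for y, lam in zip(deaths, rates))
total_rate = sum(rates)
probs = [lam / total_rate for lam in rates]
rhs = log_poisson(sum(deaths), total_rate) + log_multinomial(deaths, probs)

print(f"difference between the two factorizations: {abs(lhs - rhs):.2e}")
```

This factorization is what lets the thesis model the rates (Poisson regression on age) and the relative occurrences (multivariate logit on covariates) separately.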
67

Structured Bayesian learning through mixture models

PETRALIA, FRANCESCA January 2013 (has links)
In this thesis, we develop Bayesian mixture density estimation methods for univariate and multivariate data. We start by proposing a repulsive process that favors mixture components further apart. While conducting inference on cluster-specific parameters, current frequentist and Bayesian methods often encounter problems when clusters are placed too close together to be scientifically meaningful. Current Bayesian practice generates component-specific parameters independently from a common prior, which tends to favor similar components and often leads to substantial probability assigned to redundant components that are not needed to fit the data. As an alternative, we propose to generate components from a repulsive process, which leads to fewer, better separated, and more interpretable clusters.

In the second part of the thesis, we face the problem of modeling the conditional distribution of a response variable given a high-dimensional vector of predictors potentially concentrated near a lower-dimensional subspace or manifold. In many settings it is important to allow not only the mean but also the variance and shape of the response density to change flexibly with the features, which are massive-dimensional. We propose a multiresolution model that scales efficiently to massive numbers of features and can be implemented efficiently with slice sampling.

In the third part of the thesis, we deal with the problem of characterizing the conditional density of a multivariate vector of responses given a potentially high-dimensional vector of predictors. The proposed model flexibly characterizes the density of the response variable by hierarchically coupling a collection of factor models, each defined on a different scale of resolution. As illustrated in Chapter 4, the proposed method achieves good predictive performance compared to competitive models while efficiently scaling to high-dimensional predictors. / Dissertation
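The repulsion idea can be sketched with a generic pairwise penalty (a common form in this literature, not necessarily the thesis's exact process): multiply an independent base prior on component locations by a term that vanishes as any two components coincide, so near-duplicate components receive low prior density:

```python
# Hedged sketch of a repulsive term on mixture component locations.
from math import exp

def repulsion(locations, tau=1.0):
    """Product over pairs of 1 - exp(-tau * d^2): near 1 for well-separated
    components, near 0 when any two components nearly coincide."""
    h = 1.0
    k = len(locations)
    for i in range(k):
        for j in range(i + 1, k):
            d2 = (locations[i] - locations[j]) ** 2
            h *= 1.0 - exp(-tau * d2)
    return h

well_separated = repulsion([-3.0, 0.0, 3.0])
nearly_overlapping = repulsion([-3.0, 0.0, 0.1])
print(well_separated, nearly_overlapping)
```

Under an independent common prior both configurations would be roughly equally plausible a priori; the repulsive factor is what penalizes the redundant, nearly overlapping pair.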
68

Dynamic Operational Risk Assessment with Bayesian Network

Barua, Shubharthi August 2012 (has links)
Oil/gas and petrochemical plants are complicated and dynamic in nature. Dynamic characteristics include ageing of equipment/components, seasonal changes, stochastic processes, operator response times, inspection and testing time intervals, sequential dependencies of equipment/components, and the timing of safety system operations, all of which are time-dependent criteria that can influence dynamic processes. Conventional risk assessment methodologies have only limited capacity to quantify dynamic changes in processes; it is therefore important to develop methods that can address time-dependent effects. The primary objective of this study is to propose a risk assessment methodology for dynamic systems. In this study, a new technique for dynamic operational risk assessment is developed based on Bayesian networks, a structure well suited to organizing cause-effect relations. A Bayesian network graphically describes the dependencies of variables, and a dynamic Bayesian network captures changes in variables over time. This study proposes to develop a dynamic fault tree for a chemical process system/sub-system and then to map it into a Bayesian network, so that the developed method can capture dynamic operational changes in the process due to the sequential dependency of one piece of equipment/component on others. The developed Bayesian network is then extended to a dynamic Bayesian network to demonstrate dynamic operational risk assessment. A case study on a holdup tank problem is provided to illustrate the application of the method, and a dryout scenario in the tank is quantified. It has been observed that the developed method is able to provide updated probabilities of different equipment/component failures over time, incorporating the sequential dependencies of event occurrence. Another objective of this study is to show the parallelism of Bayesian networks with other available risk assessment methods such as event trees, HAZOP, and FMEA.
In this research, a procedure for mapping an event tree into a Bayesian network is described. A case study on a chemical reactor system is provided to illustrate the mapping procedure and to identify factors that have a significant influence on an event occurrence. This study therefore provides a method for dynamic operational risk assessment capable of providing updated probabilities of event occurrences, considering sequential dependencies over time, together with a model for mapping event trees into Bayesian networks.
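The fault-tree-to-Bayesian-network mapping can be sketched minimally (invented probabilities, a deliberately tiny system): a two-component AND gate becomes a deterministic conditional probability table over the parent nodes, and the marginal failure probability is computed by enumeration:

```python
# Minimal sketch: two-component fault tree with an AND gate mapped to a
# Bayesian network, solved by brute-force enumeration over parent states.
from itertools import product

p_fail = {"pump": 0.05, "valve": 0.02}

def p_system_fails():
    total = 0.0
    for pump, valve in product([True, False], repeat=2):
        prob = (p_fail["pump"] if pump else 1 - p_fail["pump"]) * \
               (p_fail["valve"] if valve else 1 - p_fail["valve"])
        system = pump and valve        # AND gate: both components must fail
        if system:
            total += prob
    return total

print(f"P(system failure) = {p_system_fails():.4f}")
```

The advantage of the network form, as the abstract notes, is that the deterministic gate can then be replaced by soft conditional probabilities, or unrolled over time slices into a dynamic Bayesian network to capture sequential dependencies.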
69

A Bayesian Network Approach to Early Reliability Assessment of Complex Systems

January 2016 (has links)
Bayesian networks are powerful tools in system reliability assessment due to their flexibility in modeling the reliability structure of complex systems. This dissertation develops Bayesian network models for system reliability analysis through the use of Bayesian inference techniques. Bayesian networks generalize fault trees by allowing components and subsystems to be related by conditional probabilities instead of deterministic relationships; thus, they provide analytical advantages in situations where the failure structure is not well understood, especially during the product design stage. In order to tackle this problem, one needs to utilize auxiliary information such as reliability information from similar products and domain expertise. For this purpose, a Bayesian network approach is proposed to incorporate data from functional analysis and parent products. The functions with low reliability and their impact on other functions in the network are identified, so that design changes can be suggested for system reliability improvement. A complex system does not necessarily have all components being monitored at the same time, causing another challenge in the reliability assessment problem. Sometimes there are a limited number of sensors deployed in the system to monitor the states of some components or subsystems, but not all of them. Data simultaneously collected from multiple sensors on the same system are analyzed using a Bayesian network approach, and the conditional probabilities of the network are estimated by combining failure information and expert opinions at both system and component levels. Several data scenarios with discrete, continuous and hybrid data (both discrete and continuous data) are analyzed. Posterior distributions of the reliability parameters of the system and components are assessed using simultaneous data.
Finally, a Bayesian framework is proposed to incorporate different sources of prior information and reconcile these different sources, including expert opinions and component information, in order to form a prior distribution for the system. Incorporating expert opinion in the form of pseudo-observations substantially simplifies statistical modeling, as opposed to the pooling techniques and supra Bayesian methods used for combining prior distributions in the literature. The methods proposed are demonstrated with several case studies. / Dissertation/Thesis / Doctoral Dissertation Industrial Engineering 2016
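The pseudo-observation idea above can be sketched with a conjugate Beta-Binomial update (invented numbers): an expert's opinion "about 9 successes in 10 demands" is encoded directly as Beta pseudo-counts, and conjugacy makes the update with test data a simple addition:

```python
# Sketch of expert opinion as pseudo-observations in a Beta-Binomial model.
# Expert opinion encoded as 9 pseudo-successes and 1 pseudo-failure.
a_prior, b_prior = 9.0, 1.0

# Observed component test data: 45 successes in 50 trials.
successes, failures = 45, 5

# Conjugate update: pseudo-counts and real counts simply add.
a_post = a_prior + successes
b_post = b_prior + failures

posterior_mean = a_post / (a_post + b_post)
print(f"posterior mean reliability: {posterior_mean:.3f}")
```

This is why, as the abstract argues, pseudo-observations are so much simpler than pooling or supra-Bayesian schemes for combining priors: the expert's input enters the model through exactly the same arithmetic as data.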
70

[en] LINEAR GROWTH BAYESIAN MODEL USING DISCOUNT FACTORS / [pt] MODELO BAYESIANO DE CRESCIMENTO LINEAR COM DESCONTOS

CRISTIANO AUGUSTO COELHO FERNANDES 17 November 2006 (has links)
[pt] O objetivo principal desta dissertação é descrever e discutir o Modelo Bayesiano de Crescimento Linear Sazonal, formulação Estados múltiplos, utilizando descontos. As idéias originais deste modelo foram desenvolvidas por Ameen e Harrison. Na primeira parte do trabalho (capítulos 2 e 3) apresentamos idéias bem gerais sobre Séries Temporais e os principais modelos da literatura. A segunda parte (capítulos 4, 5 e 6) é dedicada à Estatística Bayesiana (conceitos gerais), ao MDL na sua formulação original, e ao nosso modelo de interesse. São apresentadas algumas sugestões operacionais e um fluxograma de operação do modelo, com vistas a uma futura implementação computacional. / [en] The aim of this thesis is to discuss in detail the multiprocess Linear Growth Bayesian Model for seasonal and/or nonseasonal series, using discount factors. The original formulation of this model was put forward recently by Ameen and Harrison. In the first part of the thesis (chapters 2 and 3) we present some general concepts related to time series and time series modelling, whereas in the second (chapters 4, 5 and 6) we formally present the Bayesian formulation of the proposed model. A flow chart and some optional parameter settings aimed at a computational implementation are also presented.
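The discount idea in the model above can be sketched with the standard West and Harrison-style recursion (shown here for the simpler local-level case to keep the sketch short; the linear growth model adds a trend component with its own discount, and the thesis's multiprocess formulation runs several such models in parallel). Instead of specifying an evolution variance directly, the prior variance at each step is inflated by 1/delta for a discount factor 0 < delta <= 1:

```python
# Hedged sketch of one filtering step in a discount-factor DLM.

def discount_dlm_step(m, C, y, V=1.0, delta=0.9):
    """One step for a local-level DLM with a discount factor.

    m, C : posterior mean and variance of the level at the previous time
    y    : new observation;  V : observation variance (assumed known)
    """
    R = C / delta                 # discount: inflate prior variance
    Q = R + V                     # one-step forecast variance
    A = R / Q                     # adaptive coefficient (Kalman gain)
    e = y - m                     # one-step forecast error
    m_new = m + A * e
    C_new = A * V                 # algebraically equal to R - A**2 * Q
    return m_new, C_new

m, C = 0.0, 100.0                 # vague starting prior
for y in [1.2, 0.8, 1.1, 0.9]:    # invented observations
    m, C = discount_dlm_step(m, C, y)
print(f"filtered level: {m:.3f}, variance: {C:.3f}")
```

Smaller delta discounts old information faster, so the model adapts more quickly to change; delta = 1 recovers a static level.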
