331
Bayesian methods for gravitational waves and neural networks. Graff, Philip B. January 2012.
Einstein’s general theory of relativity has withstood 100 years of testing and will soon face one of its toughest challenges. In a few years we expect to enter the era of the first direct observations of gravitational waves. These are tiny perturbations of space-time, generated by accelerating matter, that affect the measured distance between two points. Observing them with laser interferometers, the most sensitive length-measuring devices in the world, will allow us to test models of interactions in the strong-field regime of gravity and eventually general relativity itself. I apply the tools of Bayesian inference to the examination of gravitational wave data from the LIGO and Virgo detectors, for signal detection and estimation of source parameters. I quantify the ability of a network of ground-based detectors to localise a source position on the sky for electromagnetic follow-up. Bayesian criteria are also applied to separating real signals from glitches in the detectors. These same tools and lessons can be applied to the type of data expected from planned space-based detectors. Using simulations from the Mock LISA Data Challenges, I analyse our ability to detect and characterise both burst and continuous signals. The two seemingly different signal types will overlap and be confused with one another for a space-based detector; my analysis shows that we will be able to separate and identify many of the signals present. Data sets and astrophysical models are continuously increasing in complexity, which will create an additional computational burden for Bayesian inference and other types of data analysis. I investigate the application of the MOPED algorithm for faster parameter estimation and data compression, and find that its shortcomings make it a less favourable candidate for further implementation.
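For readers unfamiliar with MOPED, the algorithm compresses a data vector to one number per model parameter using weighting vectors built from the noise covariance and the derivatives of the model mean. A minimal sketch of that standard construction follows; the function name and the two-parameter test values are illustrative, not taken from the thesis.

```python
import numpy as np

def moped_vectors(cov, dmu):
    """Build MOPED weighting vectors, one per model parameter.

    cov : (n, n) noise covariance of the data.
    dmu : (p, n) derivatives of the model mean w.r.t. each parameter,
          evaluated at a fiducial model.
    Returns b : (p, n) weighting vectors, orthonormal in the sense b_i C b_j = delta_ij.
    """
    cinv = np.linalg.inv(cov)
    b = []
    for m in range(dmu.shape[0]):
        v = cinv @ dmu[m]
        # Gram-Schmidt step: remove components along earlier vectors
        for bj in b:
            v = v - (dmu[m] @ bj) * bj
        # normalize so that b^T C b = 1
        v = v / np.sqrt(v @ cov @ v)
        b.append(v)
    return np.array(b)
```

The compressed statistics are then y_m = b_m · x; at the fiducial model they retain the Fisher information about each parameter while reducing the data to p numbers.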
The framework of an artificial neural network is a simple model of the structure of a brain that can “learn” functional relationships between sets of inputs and outputs. I describe an algorithm developed for training feed-forward networks on pre-calculated data sets. The trained networks can then be used for fast prediction of outputs for new sets of inputs. After demonstrating its capabilities on toy data sets, I apply the network to classifying handwritten digits from the MNIST database and to measuring ellipticities of galaxies in the Mapping Dark Matter challenge. The power of neural networks for learning and rapid prediction is also useful in Bayesian inference when the likelihood function is computationally expensive. I detail the new BAMBI algorithm, in which our network training algorithm is combined with the nested sampling algorithm MULTINEST to provide rapid Bayesian inference. Using samples from the normal inference, a network is trained on the likelihood function and eventually used in its place, providing a significant increase in the speed of Bayesian inference while returning identical results. The trained networks can then be used for extremely rapid follow-up analyses with different priors, obtaining speed increases of orders of magnitude. Learning how to apply the tools of Bayesian inference for the optimal recovery of gravitational wave signals will provide the most scientific information when the first detections are made. Complementary to this, improving our analysis algorithms to deliver the best results in less time will make the analysis of larger and more complicated models and data sets practical.
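The surrogate-likelihood idea behind BAMBI (train a network on accumulated likelihood evaluations, then substitute it for the expensive likelihood) can be sketched with a toy one-hidden-layer regressor trained by gradient descent. This is an illustrative stand-in, not the network architecture or training algorithm used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_surrogate(theta, logl, hidden=16, steps=5000, lr=0.05):
    """Fit a one-hidden-layer network logl ~ f(theta) by full-batch gradient descent.

    theta : (n, d) parameter samples; logl : (n,) log-likelihood values."""
    n, d = theta.shape
    W1 = rng.normal(0, 1 / np.sqrt(d), (d, hidden))
    b1 = rng.normal(0, 1, hidden)
    W2 = rng.normal(0, 1 / np.sqrt(hidden), (hidden,))
    b2 = 0.0
    for _ in range(steps):
        h = np.tanh(theta @ W1 + b1)        # hidden activations, (n, hidden)
        pred = h @ W2 + b2
        err = pred - logl                   # residuals, (n,)
        # backpropagated gradients of the mean squared error
        gW2 = h.T @ err / n
        gb2 = err.mean()
        dh = np.outer(err, W2) * (1 - h ** 2)
        gW1 = theta.T @ dh / n
        gb1 = dh.mean(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
    return lambda t: np.tanh(t @ W1 + b1) @ W2 + b2
```

In a BAMBI-style run such a surrogate would only replace the true likelihood once its predictions on recent samples were accurate enough, and the trained network could be reused for follow-up analyses with different priors.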
332
The development of object oriented Bayesian networks to evaluate the social, economic and environmental impacts of solar PV. Leicester, Philip A. January 2016.
Domestic and community low carbon technologies are widely heralded as valuable means of delivering sustainability outcomes in the form of social, economic and environmental (SEE) policy objectives. To accelerate their diffusion they have benefited from a significant number and variety of subsidies worldwide. Considerable aleatory and epistemic uncertainties exist, however, with regard both to their net energy contribution and to their SEE impacts. Furthermore, the socio-economic contexts themselves exhibit enormous variability, with commensurate uncertainties in their parameterisation. This represents a significant risk for policy makers and technology adopters. This work describes an approach to these problems using Bayesian network models, which are used to integrate extant knowledge from a variety of disciplines to quantify SEE impacts and endogenise uncertainties. A large-scale object oriented Bayesian network has been developed to model the specific case of solar photovoltaics (PV) installed on UK domestic roofs. Three model components have been developed: a PV component characterising the yield of UK systems, a building energy component characterising the energy consumption of the dwellings and their occupants, and a third component characterising the building stock in four English urban communities. Three representative SEE indicators, fuel affordability, carbon emission reduction and discounted cash flow, are integrated and used to test the model's ability to yield meaningful outputs in response to varying inputs. The variability in the percentage of the three indicators is highly responsive to the dwellings' built form, age and orientation, owing not only to building and solar physics but also to socio-economic factors. The model can accept observations, or evidence, in order to create scenarios which facilitate deliberative decision making.
The BN methodology contributes to the synthesis of new knowledge from extant knowledge located between disciplines. As well as insights into the impacts of high PV penetration, an epistemic contribution has been made to transdisciplinary building energy modelling which can be replicated with a variety of low carbon interventions.
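The mechanics of a discrete Bayesian network at much smaller scale can be illustrated with a three-node chain. The nodes, states and probability tables below are invented purely for illustration and bear no relation to the thesis's calibrated model.

```python
# Hypothetical three-node chain: Orientation -> Yield -> Affordable.
# Each conditional probability table (CPT) maps a parent configuration
# to a distribution over the child's states.
p_orient = {"south": 0.4, "other": 0.6}
p_yield = {("south",): {"high": 0.8, "low": 0.2},
           ("other",): {"high": 0.3, "low": 0.7}}
p_afford = {("high",): {True: 0.7, False: 0.3},
            ("low",): {True: 0.4, False: 0.6}}

def p_affordable(orientation=None):
    """P(affordable = True), optionally conditioned on the dwelling's
    orientation, computed by summing over the unobserved yield node."""
    num = den = 0.0
    for o, po in p_orient.items():
        if orientation is not None and o != orientation:
            continue
        for y, py in p_yield[(o,)].items():
            w = po * py
            num += w * p_afford[(y,)][True]
            den += w
    return num / den
```

Entering evidence, as in `p_affordable("south")`, is the same kind of scenario query the abstract describes, just at toy scale; object oriented BNs package such fragments into reusable components.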
333
Spatial Analysis and Determinants of Asthma Health and Health Services Use Outcomes in Ontario. Ouedraogo, Alexandra. January 2016.
This thesis explores the spatial patterns and determinants of asthma prevalence and health services use (ICD-10 codes J45, J46) for the total population (all ages and both sexes combined) of the province of Ontario, Canada, between 2003 and 2013. Asthma is characterized by high health services use and reduced quality of life for sufferers, representing a considerable burden on individuals, society and the health care system. While recent evidence suggests increasing asthma prevalence in Ontario, little research has been done to understand the identified spatial variability of this disease. Using population-based, ecological-level data and refined spatial analysis techniques, this research aims to explore the spatial patterns of asthma prevalence and health services use in Ontario, and to examine the contribution of potential risk factors including air pollution, pollen, deprivation, physician supply and rurality. Results indicated considerable spatial variability in asthma outcomes across Ontario. Similar patterns were found between asthma prevalence and physician visits: clusters of high rates were generally found in southern urban/suburban areas, and clusters of low rates mainly in northern and southern rural areas. Conversely, clusters of high rates of ED visits and hospitalizations were found in most northern and southern rural areas, whereas clusters of low rates were found in southern urban/suburban areas near Toronto. Findings from the spatial regression analysis indicated that while rurality was negatively associated with asthma prevalence and physician visits, it was positively associated with ED visits. Moreover, positive associations were found between material deprivation and both asthma prevalence and ED visits, and between NO2 and asthma physician visits. This research contributes to a better understanding of the area characteristics that influence asthma disparities, which can help develop better, locally relevant public health strategies aimed at reducing the burden of asthma in Ontario. Further, it demonstrates the importance of using a population-based framework and spatial analysis approaches, which take into account the spatial nature of asthma morbidity and its determinants.
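Cluster detection of the kind reported here is commonly grounded in spatial autocorrelation statistics such as global Moran's I (the abstract does not name its specific technique, so this is a generic illustration rather than the thesis's method): I = (n / S0) * Σij wij zi zj / Σi zi², where z are mean-centred values and w is a spatial weight matrix.

```python
import numpy as np

def morans_i(x, w):
    """Global Moran's I for values x over areal units with spatial
    weight matrix w (n x n). Positive values indicate clustering of
    similar rates; negative values indicate checkerboard-like dispersion."""
    x = np.asarray(x, float)
    z = x - x.mean()          # deviations from the mean rate
    s0 = w.sum()              # total weight
    return len(x) / s0 * (z @ w @ z) / (z @ z)
```

On a perfectly alternating pattern over a ring of four neighbouring units the statistic reaches its dispersed extreme of -1, while clustered high/low rates of the kind described above push it toward +1.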
334
Complete Bayesian analysis of some mixture time series models. Hossain, Shahadat. January 2012.
In this thesis we consider some finite mixture time series models in which each component follows a well-known process, e.g. an AR, ARMA or ARMA-GARCH process, with either normal-type or Student-t-type errors. We develop MCMC methods and use them in the Bayesian analysis of these mixture models. We introduce some new models, such as mixtures of Student-t ARMA components and mixtures of Student-t ARMA-GARCH components, with complete Bayesian treatments. Moreover, we use the component precision (instead of the variance) with an additional hierarchical level, which makes our model more consistent with the MCMC moves. We have implemented the proposed methods in R and give examples with real and simulated data.
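As a concrete picture of what such a mixture time series is, a minimal simulator for a mixture of AR(1) regimes with Student-t innovations is sketched below; at each time step the generating component is drawn from the mixture weights. This is a deliberately simplified stand-in for the ARMA and ARMA-GARCH mixtures analysed in the thesis, and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_mixture_ar1(n, weights, phis, scales, df=4.0):
    """Simulate a mixture-of-AR(1) series: at each step a component k is
    drawn with probability weights[k], and the next value follows that
    component's AR coefficient phis[k] and innovation scale scales[k],
    with Student-t(df) errors."""
    x = np.zeros(n)
    for t in range(1, n):
        k = rng.choice(len(weights), p=weights)
        x[t] = phis[k] * x[t - 1] + scales[k] * rng.standard_t(df)
    return x
```

A Bayesian treatment would then place priors on the weights, AR coefficients and component precisions, and use MCMC to sample both the parameters and the latent component labels.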
335
A comparative assessment of Dempster-Shafer and Bayesian belief in civil engineering applications. Luo, Wuben. January 1988.
Bayesian theory has long been the predominant method for dealing with uncertainties in civil engineering practice, including water resources engineering. However, it imposes unnecessarily restrictive requirements on inferential problems, and concerns thus arise about its effectiveness for more general inferential problems. The more recently developed Dempster-Shafer theory appears able to surmount these limitations. The new theory was originally proposed as a pure mathematical theory; a reasonable amount of work has been done in trying to adopt it in practice, most of it related to inexact inference in expert systems and all of it still at a fundamental stage. The purpose of this research is first to compare the two theories and second to try to apply Dempster-Shafer theory to real problems in water resources engineering.
In comparing Bayesian and Dempster-Shafer theory, the equivalence of the two theories in a special situation is discussed first. The divergence of the results from the Dempster-Shafer and Bayesian approaches in more general situations, where Bayesian theory is unsatisfactory, is then examined. Following this, the conceptual difference between the two theories is argued. Also discussed in the first part of this research is the treatment of evidence, including classifying sources of evidence and expressing them through belief functions.
In attempting to adopt Dempster-Shafer theory in engineering practice, Dempster-Shafer decision theory, i.e. the application of Dempster-Shafer theory within the framework of conventional decision theory, is introduced. The application of this new decision theory is demonstrated through a water resources engineering design example. / Applied Science, Faculty of / Civil Engineering, Department of / Graduate
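The core operation that distinguishes Dempster-Shafer theory from Bayesian updating is Dempster's rule for combining two independent bodies of evidence over a frame of discernment. A minimal sketch of the rule, with mass functions represented as dictionaries from subsets (frozensets) of the frame to mass:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two basic mass assignments by Dempster's rule.

    m1, m2 : dict mapping frozenset (a subset of the frame) -> mass.
    Mass falling on the empty intersection is conflict and is
    renormalized away."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict: the sources fully contradict")
    return {s: m / (1.0 - conflict) for s, m in combined.items()}
```

Note that mass can sit on non-singleton sets such as {A, B}, expressing ignorance between A and B; this is precisely the flexibility a Bayesian prior, which must commit to a distribution over singletons, does not offer.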
336
Bayesian decision analysis for pavement management. Bein, Piotr. January 1981.
Ideally, pavement management is a process of sequential decisions on a network of pavement sections. The network is subject to uncertainties arising from material variability, random traffic, and fluctuating environmental inputs. The pavement manager optimizes the whole system subject to resource constraints, avoids suboptimization of individual sections, and accounts for the dynamics of the pavement system. In addition to objective data, the manager seeks information from a number of experts, and considers selected social-political factors as well as potential implementation difficulties.
Nine advanced schemes that have been developed for various pavement administrations are compared to this ideal. Although the schemes employ methods capable of handling the pavement system's complexities in isolation, not one can account for all complexities simultaneously.
Bayesian decision analysis with recent extensions is useful for attacking the problem at hand. The method prescribes that when a decision maker is faced with a choice in an uncertain situation, he should pick the alternative with the maximum expected utility.
To illustrate the potential of Bayesian decision analysis for pavement management, the author develops a Markov decision model for the operation of one pavement section. Consequences in each stage are evaluated by multi-attribute utility. The states are built from multiple pavement variables, such as strength, texture, roughness, etc. Group opinion and network optimization are recommended for future research, and decision analysis is suggested as a promising way to attack these more complex problems.
This thesis emphasizes the utility part of decision analysis, while modifying an existing approach to handle the probability part. A procedure is developed for Bayesian updating of Markov transition matrices in which the prior distributions are of the beta class, and are based on surveys of pavement condition and on engineering judgement.
Preferences of six engineers are elicited and tested in a simulated decision situation. Multi-attribute utility theory is a reasonable approximation of the elicited value judgements and provides an expedient analytical tool. The model is programmed in PL/I and an example problem is analysed by computer.
The conclusions discuss the pavement maintenance problem from the decision-analytic perspective. A revision of the widespread additive evaluation models is recommended from the standpoint of principles for rational choice. Those areas of decision theory which may be of interest to the pavement engineer, and to the civil engineer in general, are suggested for further study and monitoring. / Applied Science, Faculty of / Civil Engineering, Department of / Graduate
337
Bayesian Nonparametrics for Biophysics. Meysam Tavakoli (8767965). 28 April 2020.
<p>The main goal of data analysis is to summarize a huge amount of data (our observations) with a few numbers that provide some intuition into the process that generated the data. Regardless of the method used to analyze the data, the process of analysis includes (1) creating a mathematical formulation of the problem, (2) collecting the data, (3) creating a probability model for the data, (4) estimating the parameters of the model, and (5) summarizing the results in a proper way, a process called "statistical inference".<br></p><p>Recently it has been suggested that Bayesian approaches, and more specifically Bayesian nonparametrics (BNPs), have a deep influence on data analysis [1], and in this field their tools have only begun to be exploited [2–4]. To our best knowledge, however, no single resource is yet available that explains both the concepts and the implementation of BNPs as would be needed to bring their capabilities to bear on data analysis and accelerate their inevitable widespread acceptance.<br></p><p>Therefore, in this dissertation, we provide a description of the concepts and implementation of an important computational tool that exploits BNPs, specifically in its application to the field of biophysics. Here, the goal is to use BNPs to understand the rules of life (in vivo) at the scale at which life occurs (the single molecule) from the fastest possible acquirable data (single photons).<br></p><p>In chapter 1, we give a brief introduction to data analysis in biophysics. Our overview is aimed at anyone, from student to established researcher, who wants to understand what statistical methods can accomplish in modeling and where the field of data analysis in biophysics is headed. For someone just getting started, we present a perspective on the logic, strengths and shortcomings of data analysis frameworks, with a focus on very recent approaches.<br></p><p>In chapter 2, we provide an overview of data analysis in single molecule biophysics. We discuss data analysis tools, the model selection problem, and mainly the Bayesian approach. We also discuss BNPs and the distinctive characteristics that make them ideal mathematical tools for modeling complex biomolecules, as they offer meaningful and clear physical interpretation and let full posterior probabilities over molecular-level models be deduced with a minimum of subjective choices.<br></p><p>In chapter 3, we work on spectroscopic approaches and fluorescence time traces. These traces are employed to report on dynamical features of biomolecules. The fundamental unit of information in these time traces is the single photon. Individual photons carry information from the biomolecule from which they are emitted to the detector on timescales as fast as microseconds, so from the confocal microscope viewpoint it is theoretically feasible to monitor biomolecular dynamics at such timescales. In practice, however, signals are stochastic, and in order to derive dynamical information through traditional means such as fluorescence correlation spectroscopy (FCS) and related methods, fluorescence time trace signals are gathered and temporally auto-correlated over many minutes. So far, it has been unfeasible to analyze dynamical attributes of biomolecules on timescales near data acquisition, as this requires estimating the number of biomolecules emitting photons and their locations within the confocal volume. The mathematical structure of this problem forces us to leave the normal ("parametric") Bayesian paradigm. Here, we utilize novel mathematical tools, BNPs, that allow us to extract in a principled fashion the same information normally concluded from FCS, but from the direct analysis of significantly smaller datasets starting from individual single photon arrivals. We specifically seek the diffusion coefficient of the molecules. Diffusion allows molecules to find each other in a cell, and at the cellular level determination of the diffusion coefficient can provide valuable insights into how molecules interact with their environment. We discuss how this method helps significantly reduce phototoxic damage to the sample and enables monitoring the dynamics of biomolecules, even down to the single molecule level, at such timescales.<br></p><p>In chapter 4, we present a new approach to inferring lifetimes. In general, fluorescence lifetime imaging (FLIM) is an approach which provides information on the number of species and their associated lifetimes. Current lifetime data analysis methods rely on either time correlated single photon counting (TCSPC) or phasor analysis. These methods require large numbers of photons to converge to the appropriate lifetimes and do not determine how many species are responsible for those lifetimes. Here, we propose a new method to analyze lifetime data based on BNPs that precisely takes into account several experimental complexities. Using BNPs, we can identify not only the most probable number of species but also their lifetimes, with at least an order of magnitude less data than competing methods (TCSPC or phasors). To evaluate our method, we test it with both simulated and experimental data for one, two, three and four species, with both stationary and moving molecules. We also compare our species estimates and lifetime determinations with both TCSPC and phasor analysis for different numbers of photons used in the analysis.<br></p><p>In conclusion, the basis of every spectroscopic method is the detection of photons. Photon arrivals encode complex dynamical and chemical information, and methods to analyze such arrivals have the capability to reveal dynamical and chemical processes on fast timescales. Here, we turn our attention to fluorescence lifetime imaging and single spot fluorescence confocal microscopy, where individual photon arrivals report on dynamics and chemistry down to the single molecule level. The reason this could not previously be achieved is the uncertainty in the number of chemical species and the number of molecules contributing to the signal (i.e., responsible for contributing photons). That is, to learn dynamical or kinetic parameters (like diffusion coefficients or lifetimes) we need to be able to interpret which photon is reporting on what process. For this reason, we abandon the parametric Bayesian paradigm and use the nonparametric paradigm, which allows us to flexibly explore and learn the number of molecules and the chemical reaction space. We demonstrate the power of BNPs over traditional methods in single spot confocal and FLIM analysis.<br></p>
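A standard building block behind BNP models with an unknown number of species is the Dirichlet process, whose Chinese restaurant process (CRP) representation lets the number of components grow with the data. A minimal sampler of CRP assignments (illustrative only, not the dissertation's inference code; "photons" and "species" are just the seating metaphor applied to this setting):

```python
import numpy as np

rng = np.random.default_rng(2)

def crp_assignments(n, alpha):
    """Sample species assignments for n observations from a Chinese
    restaurant process: observation i joins an existing species with
    probability proportional to its current count, or founds a new
    species with probability proportional to the concentration alpha."""
    counts = []   # number of observations per species so far
    labels = []   # species label for each observation
    for _ in range(n):
        probs = np.array(counts + [alpha], float)
        k = rng.choice(len(probs), p=probs / probs.sum())
        if k == len(counts):
            counts.append(1)   # a new species is created
        else:
            counts[k] += 1
        labels.append(k)
    return labels, counts
```

In a full model each species would also carry a parameter (a lifetime or a diffusion coefficient), and MCMC would resample both the assignments and the parameters given the photon data.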
338
SCAT Model Based on Bayesian Networks for Lost-Time Accident Prevention and Rate Reduction in Peruvian Mining Operations. Ziegler-Barranco, Ana; Mera-Barco, Luis; Aramburu-Rojas, Vidal; Raymundo, Carlos; Mamani-Macedo, Nestor; Dominguez, Francisco. 01 January 2020.
The full text of this work is not available in the UPC Academic Repository due to restrictions imposed by the publisher. / Several factors affect the activities of the mining industry. For example, accident rates are critical because they affect company ratings in the stock market (Standard & Poor's). Considering that the corporate image is directly related to its stakeholders, this study conducts an accident analysis using quantitative and qualitative methods. In this way, the contingency rate is controlled, mitigated, and prevented while serving the needs of the stakeholders. The Bayesian network method contributes to decision-making through a set of variables and the dependency relationships between them, establishing a prior probability for unknown variables. Bayesian models have different applications, such as diagnosis, classification, and decision, and establish relationships among variables and cause–effect links. This study uses Bayesian inference to identify the various patterns that influence operator accident rates at a contractor mining company, and thereby to study and assess possible differences in its future operations.
339
Analysis of a Lateral Spreading Case History from the 2007 Pisco, Peru Earthquake. Gangrade, Rajat Mukesh. 21 June 2013.
On August 15, 2007, Pisco, Peru was hit by an earthquake of magnitude (Mw) 8.0 which triggered multiple liquefaction-induced lateral spreads. The subduction earthquake lasted approximately 100 seconds and showed a complex rupture. From the geotechnical perspective, the Pisco earthquake was significant for the amount of soil liquefaction observed. A massive liquefaction-induced seaward displacement of a marine terrace was observed in the Canchamana complex. Later analysis using pre- and post-earthquake images showed that the lateral displacements were concentrated in only some regions: despite the lateral homogeneity of the marine terrace, some cross-sections showed large displacements while others had minimal displacements. The detailed documentation of this case history makes it an ideal case study for the determination of the undrained strength of liquefied soils; hence, the main objective of this research is to use the extensive data from the Canchamana Slide to estimate the shear strength of the liquefied soils. In engineering practice, the undrained strength of liquefied soil is typically estimated by correlating SPT-N values to 1) the absolute value of the residual strength, or 2) the residual strength ratio. Our research aims to contribute an important data point that will add to the current understanding of the residual strength of liquefied soils. / Master of Science
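The second correlation form mentioned above has the generic shape Su = (Su/σ'v)(N1-60) · σ'v: a residual strength ratio is read from a curve indexed by corrected blow count and scaled by effective vertical stress. The sketch below shows only the mechanics; the anchor values are invented placeholders, not any published correlation.

```python
import numpy as np

# Hypothetical anchor points for an N1-60 -> residual strength ratio curve.
# These numbers are illustrative placeholders only, NOT published values.
N160_PTS = np.array([4.0, 8.0, 12.0, 16.0])
SU_RATIO_PTS = np.array([0.05, 0.08, 0.12, 0.20])

def residual_strength(n160, sigma_v_eff):
    """Residual undrained strength Su (same units as sigma_v_eff) from a
    strength-ratio correlation, linearly interpolating between anchors."""
    ratio = np.interp(n160, N160_PTS, SU_RATIO_PTS)
    return ratio * sigma_v_eff
```

A back-analysed case history like Canchamana contributes one (N value, mobilized strength) pair that helps pin down where the real curve should lie.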
340
Subjective Bayesian analysis of elliptical models. Van Niekerk, Janet. 21 June 2013.
The problem of estimation has been widely investigated under many different kinds of assumptions. This study focuses on the subjective Bayesian estimation of a location vector and characteristic matrix for the univariate and multivariate elliptical model, as opposed to the objective Bayesian estimation that has been thoroughly discussed (see Fang and Li (1999), amongst others). The prior distributions assumed are the conjugate normal-inverse Wishart prior and also the normal-Wishart prior, which has not yet been considered in the literature. The posterior distributions, joint and marginal, as well as the Bayes estimators, are derived. The newly developed results are applied to the multivariate normal and multivariate t-distribution. For subjective Bayesian analysis the vector-spherical matrix elliptical model is also studied. / Dissertation (MSc)--University of Pretoria, 2012. / Statistics / MSc / Unrestricted
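For the multivariate normal member of the elliptical family, the conjugate normal-inverse Wishart prior mentioned here yields closed-form posterior hyperparameters. A sketch of the standard conjugate update (the notation mu0, kappa0, nu0, psi0 is generic textbook notation, not necessarily the thesis's):

```python
import numpy as np

def niw_update(mu0, kappa0, nu0, psi0, X):
    """Posterior hyperparameters of a normal-inverse-Wishart prior for a
    multivariate normal, after observing the rows of X.

    mu0 (d,), kappa0, nu0 : prior mean, prior strength, prior degrees of freedom.
    psi0 (d, d)           : prior scale matrix.
    Returns (mu_n, kappa_n, nu_n, psi_n)."""
    X = np.asarray(X, float)
    n, d = X.shape
    xbar = X.mean(axis=0)
    S = (X - xbar).T @ (X - xbar)               # scatter about the sample mean
    kappa_n = kappa0 + n
    nu_n = nu0 + n
    mu_n = (kappa0 * mu0 + n * xbar) / kappa_n  # precision-weighted mean
    diff = (xbar - mu0).reshape(-1, 1)
    psi_n = psi0 + S + (kappa0 * n / kappa_n) * (diff @ diff.T)
    return mu_n, kappa_n, nu_n, psi_n
```

The Bayes estimator of the location under squared-error loss is then mu_n, and the posterior mean of the covariance follows from psi_n and nu_n; the normal-Wishart case studied in the thesis parameterizes the precision rather than the covariance.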