341 |
Uncertainty in inverse elasticity problems. Gendin, Daniel I. 27 September 2021 (has links)
The non-invasive differential diagnosis of breast masses through ultrasound imaging motivates the following class of elastic inverse problems: Given one or more measurements of the displacement field within an elastic material, determine the material property distribution within the material. This thesis is focused on uncertainty quantification in inverse problem solutions, with application to inverse problems in linear and nonlinear elasticity.
We consider the inverse nonlinear elasticity problem in the context of Bayesian statistics. We show the well-known result that computing the Maximum A Posteriori (MAP) estimate is consistent with previous optimization formulations of the inverse elasticity problem. We show further that certainty in this estimate may be quantified using concepts from information theory, specifically, information gain as measured by the Kullback-Leibler (K-L) divergence and mutual information. A particular challenge in this context is the computational expense associated with computing these quantities. A key contribution of this work is a novel approach that exploits the mathematical structure of the inverse problem and properties of the conjugate gradient method to make these calculations feasible.
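Under a linearized (Gaussian) approximation of the posterior, the information gain described above reduces to the K-L divergence between two multivariate Gaussians, which has a closed form. The short numpy sketch below illustrates that calculation; the two-parameter means and covariances are invented for illustration and are unrelated to the thesis's elasticity problem, where feasibility for high-dimensional parameter fields relies on the structure of the inverse problem and the conjugate gradient method, which the sketch does not attempt.

```python
import numpy as np

def kl_gaussian(mu_post, cov_post, mu_prior, cov_prior):
    """K-L divergence D(posterior || prior) between two multivariate Gaussians.

    Under a linearized-Gaussian approximation, this equals the information
    gained about the parameters by observing the displacement data.
    """
    k = mu_prior.size
    cov_prior_inv = np.linalg.inv(cov_prior)
    diff = mu_prior - mu_post
    trace_term = np.trace(cov_prior_inv @ cov_post)
    quad_term = diff @ cov_prior_inv @ diff
    logdet_term = np.log(np.linalg.det(cov_prior) / np.linalg.det(cov_post))
    return 0.5 * (trace_term + quad_term - k + logdet_term)

# Hypothetical 2-parameter example: the data shrink the prior uncertainty.
mu_prior = np.zeros(2)
cov_prior = np.eye(2)
mu_post = np.array([0.3, -0.1])    # MAP estimate (illustrative)
cov_post = np.diag([0.2, 0.5])     # posterior covariance (illustrative)
print(kl_gaussian(mu_post, cov_post, mu_prior, cov_prior))  # information gain in nats
```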
A focus of this work is estimating the spatial distribution of the elastic nonlinearity of a material. Measurement sensitivity to the nonlinearity is much higher for large (finite) strains than for smaller strains, and so large strains tend to be used for such measurements. Measurements of larger deformations, however, tend to show greater levels of noise. A key finding of this work is that, when identifying nonlinear elastic properties, information gain can be used to characterize a trade-off between larger strains with higher noise levels and smaller strains with lower noise levels. These results can be used to inform experimental design.
An approach often used to estimate both linear and nonlinear elastic property distributions is to do so sequentially: Use a small strain deformation to estimate the linear properties, and a large strain deformation to estimate the nonlinearity. A key finding of this work is that accurate characterization of the joint posterior probability distribution over both linear and nonlinear elastic parameters requires that the estimates be performed jointly rather than sequentially.
All the methods described above are demonstrated in applications to problems in elasticity for both simulated data and clinically measured data (obtained in vivo). In the context of the clinical data, we evaluate the repeatability of measurements and parameter reconstructions in a clinical setting.
|
342 |
On Bayesian Multiplicity Adjustment in Multiple Testing. Gecili, Emrah January 2018 (has links)
No description available.
|
343 |
Automatic detection of pulmonary embolism using computational intelligence. Scurrell, Simon John 14 February 2007 (has links)
Student Number : 0418382M - MSc(Eng) dissertation - School of Electrical Engineering and Information Technology - Faculty of Engineering and the Built Environment / Pulmonary embolism (PE) is a potentially fatal, yet potentially treatable condition. The problem of diagnosing PE with any degree of confidence arises from the nonspecific nature of the symptoms. In difficult cases, multiple tests will need to be performed on a patient before an accurate diagnosis can be made. These tests include Ventilation-Perfusion (V/Q) scanning, Spiral CT, leg ultrasound and D-dimer testing. The aim of this research is to test the performance of neural networks, namely Bayesian neural networks, in making a diagnosis based on the available information. The data consist of a set of 12 V/Q scans which have been processed and from which features have been extracted to provide inputs to the neural network. This system will act as a second opinion, and is not intended to replace an experienced clinician.
The V/Q scans are analysed using image processing techniques in order to segment the lung from the background image and to determine whether any abnormalities are present in the lung. The system must be able to discriminate between a genuine case of PE and other diseases showing similar symptoms, such as tuberculosis and parenchymal lung disease. Relevant features to be used in classification were then extracted from the images.
The goal of this system is to make use of Bayesian neural networks. Using Bayesian networks, confidence levels can be calculated for each prediction the network makes, which makes them more informative than traditional multilayer perceptron (MLP) networks. The V/Q scans themselves are 256x256 greyscale images. In order to reduce redundancy and increase computational speed, Principal Component Analysis (PCA) is used to obtain the most significant information in each of the scans.
Usually the gold standard for such a system would be pulmonary angiography, but in this case the Bayesian MLP (BMLP) is trained on diagnoses made by an experienced nuclear medicine physician. The system will be used to examine new cases, for which the accuracy of the system can be established. Results showed good training performance, while validation performance was reasonable. Intermediate cases proved to be the most difficult to diagnose correctly.
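A minimal sketch of the PCA feature-extraction step described above, using scikit-learn; the random array standing in for the processed V/Q scans and the number of retained components are illustrative assumptions, not values from the dissertation.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical stand-in for the processed V/Q scans: each 256x256 greyscale
# image is flattened into a 65,536-element feature vector.
rng = np.random.default_rng(0)
scans = rng.random((12, 256 * 256))        # 12 scans, as in the study

# Keep only the leading principal components; the number retained here is
# an illustrative choice, not the value used in the dissertation.
pca = PCA(n_components=8)
features = pca.fit_transform(scans)        # shape (12, 8), inputs to the classifier

print(features.shape)
print(pca.explained_variance_ratio_.sum())  # fraction of variance retained
```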
|
344 |
Vortex Detection in CFD Datasets Using a Multi-Model Ensemble Approach. Bassou, Randa 09 December 2016 (has links)
Over the past few decades, visualization and application researchers have been investigating vortices and have developed several algorithms for detecting vortex-like structures in the flow. These techniques can adequately identify vortices in most computational datasets, each with its own degree of accuracy. However, despite these efforts, there still does not exist an entirely reliable vortex detection method that does not require significant user intervention. The objective of this research is to solve this problem by introducing a novel vortex analysis technique that provides more accurate results by optimizing the threshold for several computationally efficient, local vortex detectors, before merging them using the Bayesian method into a more robust detector that assimilates global domain knowledge based on labeling performed by an expert. Results show that when the threshold is chosen well, combining the methods does not improve accuracy, whereas if the threshold is chosen poorly, combining the methods produces a significant improvement.
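A minimal sketch of the kind of Bayesian merging described above: several thresholded local detectors report at a grid point, and a naive-Bayes combination with detection rates that would in practice be estimated from expert-labelled points yields a posterior vortex probability. The detector outputs, rates, and prior below are illustrative assumptions, not the thesis's detectors or data.

```python
import numpy as np

def fuse_detectors(detections, p_detect_given_vortex, p_detect_given_background, prior_vortex):
    """Naive-Bayes fusion of binary vortex detectors at one grid point.

    detections: 0/1 output of each thresholded local detector.
    The conditional detection rates would come from expert-labelled training
    points; the numbers used below are illustrative only.
    """
    log_odds = np.log(prior_vortex / (1.0 - prior_vortex))
    for d, p1, p0 in zip(detections, p_detect_given_vortex, p_detect_given_background):
        if d:
            log_odds += np.log(p1 / p0)
        else:
            log_odds += np.log((1.0 - p1) / (1.0 - p0))
    return 1.0 / (1.0 + np.exp(-log_odds))  # posterior P(vortex | detector outputs)

# Three hypothetical detectors agree 2-to-1 at a grid point.
post = fuse_detectors([1, 1, 0],
                      p_detect_given_vortex=[0.9, 0.85, 0.7],
                      p_detect_given_background=[0.1, 0.2, 0.15],
                      prior_vortex=0.05)
print(post)
```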
|
345 |
Learning Optimal Bayesian Networks with Heuristic Search. Malone, Brandon M 11 August 2012 (has links)
Bayesian networks are a widely used graphical model which formalizes reasoning under uncertainty. Unfortunately, construction of a Bayesian network by an expert is time-consuming, and, in some cases, all experts may not agree on the best structure for a problem domain. Additionally, for some complex systems such as those present in molecular biology, experts with an understanding of the entire domain and how individual components interact may not exist. In these cases, we must learn the network structure from available data. This dissertation focuses on score-based structure learning. In this context, a scoring function is used to measure the goodness of fit of a structure to data. The goal is to find the structure which optimizes the scoring function. The first contribution of this dissertation is a shortest-path finding perspective for the problem of learning optimal Bayesian network structures. This perspective builds on earlier dynamic programming strategies but, as we show, offers much more flexibility. Second, we develop a set of data structures to improve the efficiency of many of the integral calculations for structure learning. Most of these data structures benefit our algorithms as well as dynamic programming and other formulations of the structure learning problem. Next, we introduce a suite of algorithms that leverage the new data structures and the shortest-path finding perspective for structure learning. These algorithms take advantage of a number of new heuristic functions to ignore provably sub-optimal parts of the search space. They also exploit regularities in the search that previous approaches could not. All of the algorithms we present have their own advantages. Some minimize work in a provable sense; others use external memory such as hard disk to scale to datasets with more variables. Several of the algorithms quickly find solutions and improve them as long as they are given more resources. Our algorithms improve the state of the art in structure learning by running faster, using less memory and incorporating other desirable characteristics, such as anytime behavior. We also pose unanswered questions to drive research into the future.
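The shortest-path algorithms themselves are beyond a short excerpt, but the earlier dynamic-programming strategy they build on can be sketched compactly: score every (variable, parent set) pair, then combine the scores over the lattice of variable subsets. The BIC-style score and the three-variable synthetic data below are illustrative stand-ins; the dissertation's algorithms, heuristics, and data structures are considerably more sophisticated.

```python
import itertools
import numpy as np

def bic_local_score(data, v, parents):
    """Toy BIC-style local score of binary variable v given a parent set."""
    n = data.shape[0]
    if parents:
        groups = {}
        for key, xv in zip(map(tuple, data[:, list(parents)]), data[:, v]):
            groups.setdefault(key, []).append(xv)
        groups = list(groups.values())
    else:
        groups = [data[:, v].tolist()]
    loglik = 0.0
    for xs in groups:
        xs = np.asarray(xs)
        for val in (0, 1):
            c = int(np.sum(xs == val))
            if c:
                loglik += c * np.log(c / len(xs))
    return loglik - 0.5 * np.log(n) * len(groups)   # one parameter per parent configuration

def learn_optimal_network(data):
    """Exact score-based structure learning by DP over subsets of variables."""
    d = data.shape[1]
    variables = tuple(range(d))
    # Best (score, parent set) for each variable from each candidate predecessor set.
    best = {}
    for v in variables:
        others = [u for u in variables if u != v]
        for r in range(len(others) + 1):
            for cand in itertools.combinations(others, r):
                best[(v, cand)] = max(
                    (bic_local_score(data, v, pa), pa)
                    for k in range(len(cand) + 1)
                    for pa in itertools.combinations(cand, k))
    # DP over the lattice of variable subsets (the "order graph").
    M = {(): (0.0, None, None)}          # subset -> (score, last variable, its parents)
    for r in range(1, d + 1):
        for S in itertools.combinations(variables, r):
            options = []
            for v in S:
                rest = tuple(u for u in S if u != v)
                score, pa = best[(v, rest)]
                options.append((M[rest][0] + score, v, pa))
            M[S] = max(options)
    # Walk back through the subsets to recover each variable's parent set.
    network, S = {}, variables
    while S:
        _, v, pa = M[S]
        network[v] = pa
        S = tuple(u for u in S if u != v)
    return network

# Tiny synthetic example: variable 1 is a noisy copy of variable 0, variable 2 is independent.
rng = np.random.default_rng(1)
x0 = rng.integers(0, 2, 200)
x1 = np.where(rng.random(200) < 0.9, x0, 1 - x0)
x2 = rng.integers(0, 2, 200)
print(learn_optimal_network(np.column_stack([x0, x1, x2])))
```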
|
346 |
Bayesian optimal design for changepoint problems. Atherton, Juli. January 2007 (has links)
No description available.
|
347 |
A Monte Carlo Investigation of Smoothing Methods for Error Density Estimation in Functional Data Analysis with an Illustrative Application to a Chemometric Data Set. Thompson, John R.J. 06 1900 (has links)
Functional data analysis is a field in statistics that analyzes data which are dependent on time or space and from which inference can be conducted. Functional data analysis methods can estimate residuals from functional regression models that in turn require robust univariate density estimators for error density estimation. The accurate estimation of the error density from the residuals allows evaluation of the performance of functional regression estimation. Kernel density estimation using maximum likelihood cross-validation and Bayesian bandwidth selection techniques with a Gaussian kernel is reproduced and compared to least-squares cross-validation and plug-in bandwidth selection methods with an Epanechnikov kernel. For simulated data, Bayesian bandwidth selection methods for kernel density estimation are shown to give the minimum mean expected square error for estimating the error density, but are computationally inefficient and may not be adequately robust for real data. The (bounded) Epanechnikov kernel function is shown to give results similar to the Gaussian kernel function for error density estimation after functional regression. When the functional regression model is applied to a chemometric data set, the local least-squares cross-validation method, used to select the bandwidth for the functional regression estimator, is shown to give a significantly smaller mean square predicted error than that obtained with Bayesian methods. / Thesis / Master of Science (MSc)
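A minimal sketch of the kernel comparison described above, using scikit-learn's KernelDensity with Gaussian and Epanechnikov kernels and a cross-validated bandwidth. Note that scikit-learn's grid search maximizes the cross-validated log-likelihood rather than the least-squares, plug-in, or Bayesian criteria studied in the thesis, and the simulated residuals below are illustrative.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KernelDensity

# Stand-in for regression residuals; the real errors would come from a
# functional regression fit to the chemometric data.
rng = np.random.default_rng(0)
residuals = rng.normal(0.0, 0.5, size=300).reshape(-1, 1)

bandwidths = np.logspace(-2, 0, 30)
for kernel in ("gaussian", "epanechnikov"):
    search = GridSearchCV(KernelDensity(kernel=kernel),
                          {"bandwidth": bandwidths}, cv=5)
    search.fit(residuals)
    kde = search.best_estimator_
    # Density estimate at a few points; score_samples returns the log-density.
    grid = np.array([[-1.0], [0.0], [1.0]])
    print(kernel, search.best_params_["bandwidth"], np.exp(kde.score_samples(grid)))
```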
|
348 |
Dynamic Risk Assessment in Desalination Plants: A Multilevel Bayesian Network Approach. Alfageh, Alyah 09 July 2023 (has links)
The criticality of desalination plants, which rely heavily on Industrial Control Systems (ICS), has heightened due to the scarcity of clean water. This reliance greatly emphasizes the necessity of securing these systems, alongside implementing a robust risk assessment protocol. To address these challenges and the existing limitations in prevalent risk assessment methodologies, this thesis proposes a risk assessment approach for ICS within desalination facilities. The proposed strategy integrates Bayesian Networks (BNs) and Dynamic Programming (DP). The thesis develops BNs into multilevel Bayesian Networks (MBNs), a form that effectively handles system complexity, aids inference, and dynamically modifies risk profiles.
These networks account for the interactions and dynamic behaviors of system components, providing a level of responsiveness often missing in traditional methods. A standout feature of this approach is its consideration of the potential attackers' perspective, which is often neglected but critical for a comprehensive risk assessment and the development of solid defense strategies. DP supplements this approach by simplifying complex problems and identifying the optimal paths for potential attacks. In this way, the thesis contributes greatly to enhancing the safety of critical infrastructures like water desalination plants, addressing key deficiencies in existing safety precautions.
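A toy sketch of the dynamic-programming step described above: given assumed per-step compromise probabilities on a small attack graph, a shortest-path recursion on negative log-probabilities returns the most probable path from an entry point to a target asset. The graph, node names, and probabilities are invented for illustration and are not taken from the thesis.

```python
import math

# Hypothetical attack graph for a desalination ICS: edge weights are assumed
# probabilities that an attacker can take that step.
attack_graph = {
    "internet":       {"hmi": 0.30, "vpn_gateway": 0.15},
    "vpn_gateway":    {"engineering_ws": 0.40},
    "hmi":            {"plc_pumps": 0.25},
    "engineering_ws": {"plc_pumps": 0.60},
    "plc_pumps":      {},
}

def most_probable_attack_path(graph, source, target):
    """Maximize the product of step probabilities via DP on -log probabilities."""
    best = {source: 0.0}           # node -> -log(best path probability so far)
    pred = {source: None}
    frontier = [source]
    while frontier:                # the graph is a DAG, so simple relaxation suffices
        nxt = []
        for u in frontier:
            for v, p in graph[u].items():
                cost = best[u] - math.log(p)
                if cost < best.get(v, math.inf):
                    best[v], pred[v] = cost, u
                    nxt.append(v)
        frontier = nxt
    if target not in best:
        return None, 0.0
    path, node = [], target
    while node is not None:
        path.append(node)
        node = pred[node]
    return path[::-1], math.exp(-best[target])

path, prob = most_probable_attack_path(attack_graph, "internet", "plc_pumps")
print(path, prob)   # ['internet', 'hmi', 'plc_pumps'] with probability 0.075
```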
|
349 |
Operation Oriented Digital Twin of Hydro Test Rig. Khademi, Ali January 2022 (has links)
It has become increasingly important to introduce the Digital Twin in additive manufacturing, as it is perceived as a promising step forward in its development and a vital component of Industry 4.0. A Digital Twin is an up-to-date representation of a real asset in operation. The aim of this thesis is to develop a Digital Twin of a hydro test rig. Digital Twins are created by developing and simulating mathematical models, which should be integrated and validated. The test rig is a downscaled turbine rig whose runner and draft tube are replicas of the Porjus U9 turbine; it is located in the John-Field laboratory of the Division of Fluid and Experimental Mechanics at Luleå University of Technology (LTU). A mathematical model of the test rig has been built in the MATLAB environment Simulink. The test rig itself has components such as a Kaplan turbine, a hydraulic pump, a magnetic braking system, a rotor, and a flow meter in a closed-loop system. Some test rig parameters are unknown, so two methods have been used to optimize these parameters during the validation of the mathematical model. Optimization means finding either the maximum or the minimum of the target function with a particular set of parameters. Seven parameters in total were optimized for the mathematical model in Simulink, using two different methods: fmincon in MATLAB and Bayesian Optimization, a machine learning tool. Because fmincon could only find local minima and became stuck in their neighbourhood, it could not reach the global minimum. In contrast, Bayesian Optimization produced better results in minimizing the cost function and finding the global minimum. / AFC4Hydro
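A minimal sketch of Bayesian optimization as used for this kind of parameter calibration: a Gaussian-process surrogate is fitted to the evaluated cost values and the next evaluation point is chosen by expected improvement. The one-dimensional cost function with several local minima stands in for the rig's seven-parameter model-versus-measurement error; it is not the thesis's Simulink model.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def cost(x):
    """Toy cost with several local minima, standing in for the model-vs-rig error."""
    return np.sin(3.0 * x) + 0.3 * (x - 2.0) ** 2

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 5.0, size=4).reshape(-1, 1)    # initial evaluations
y = cost(X).ravel()
grid = np.linspace(0.0, 5.0, 400).reshape(-1, 1)

for _ in range(15):                                  # Bayesian optimization loop
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X, y)
    mu, sigma = gp.predict(grid, return_std=True)
    best = y.min()
    # Expected improvement (minimization form) over the candidate grid.
    sigma = np.maximum(sigma, 1e-9)
    z = (best - mu) / sigma
    ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
    x_next = grid[np.argmax(ei)].reshape(1, -1)
    X = np.vstack([X, x_next])
    y = np.append(y, cost(x_next).ravel())

print("best parameter:", X[np.argmin(y)].item(), "cost:", y.min())
```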
|
350 |
The "Fair" Triathlon: Equating Standard Deviations Using Non-Linear Bayesian ModelsCurtis, Steven McKay 14 May 2004 (has links) (PDF)
The Ironman triathlon was created in 1978 by combining events with the longest distances for races then contested in Hawaii in swimming, cycling, and running. The Half Ironman triathlon was formed using half the distances of each of the events in the Ironman. The Olympic distance triathlon was created by combining events with the longest distances for races sanctioned by the major federations for swimming, cycling, and running. The relative importance of each event in overall race outcome was not given consideration when determining the distances of each of the races in modern triathlons. Thus, there is a general belief among triathletes that the swimming portion of the standard-distance triathlons is underweighted. We present a nonlinear Bayesian model for triathlon finishing times that models time and standard deviation of time as a function of distance. We use this model to create "fair" triathlons by equating the standard deviations of the times taken to complete the swimming, cycling, and running events. Thus, in these "fair" triathlons, a one standard deviation improvement in any event has an equivalent impact on overall race time.
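A small sketch of the "fairness" idea described above: if each event's standard deviation of finishing time is modeled as a power function of distance, the distances can be rescaled so that the three standard deviations are equal while the expected total race time matches the Olympic-distance race. The power-law coefficients below are invented placeholders, not the fitted values from the thesis's Bayesian model.

```python
import numpy as np
from scipy.optimize import brentq

# Hypothetical power-law models, time in minutes and distance in km:
#   mean time = a_mu * d**b_mu,  sd of time = a_sd * d**b_sd
events = {
    "swim": dict(a_mu=18.3, b_mu=1.05, a_sd=2.56, b_sd=1.10),
    "bike": dict(a_mu=1.62, b_mu=1.02, a_sd=0.145, b_sd=1.05),
    "run":  dict(a_mu=3.92, b_mu=1.06, a_sd=0.50, b_sd=1.08),
}
olympic = {"swim": 1.5, "bike": 40.0, "run": 10.0}

mean_time = lambda p, d: p["a_mu"] * d ** p["b_mu"]
dist_for_sd = lambda p, s: (s / p["a_sd"]) ** (1.0 / p["b_sd"])   # invert sd = a * d**b

total_mean = sum(mean_time(p, olympic[e]) for e, p in events.items())

def total_mean_gap(s):
    """Total expected time of the 'fair' race with common SD s, minus the Olympic total."""
    return sum(mean_time(p, dist_for_sd(p, s)) for e, p in events.items()) - total_mean

s_fair = brentq(total_mean_gap, 1.0, 30.0)      # common SD preserving total expected time
fair = {e: round(dist_for_sd(p, s_fair), 2) for e, p in events.items()}
print(fair, "common SD (min):", round(s_fair, 2))
```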
|