351

Development of Informative Priors in Microarray Studies

Fronczyk, Kassandra M. 19 July 2007 (has links) (PDF)
Microarrays simultaneously measure the abundance of DNA transcripts for thousands of gene sequences, facilitating genomic comparisons across tissue types or disease statuses. These experiments are used to understand fundamental aspects of growth and development and to explore the underlying genetic causes of many diseases. The data from most microarray studies are found in open-access online databases. Bayesian models are ideal for the analysis of microarray data because of their ability to integrate prior information; however, most current Bayesian analyses use empirical or flat priors. We present a Perl script that builds an informative prior by mining online databases for similar microarray experiments. Four prior distributions are investigated: a power prior incorporating information from multiple previous experiments, an informative prior using information from one previous experiment, an empirically estimated prior, and a flat prior. The method is illustrated with a two-sample experiment to determine the preferential regulation of genes by tamoxifen in breast cancer cells.
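For reference, a power prior of the kind described above is commonly written as follows; this is a sketch following the standard Ibrahim–Chen-style formulation, not necessarily the exact prior constructed in the thesis. Here $L(\theta \mid D_{0k})$ is the likelihood of the data from the $k$-th previous experiment, $\pi_0(\theta)$ is a base prior, and each weight $a_{0k} \in [0,1]$ discounts how strongly that experiment informs the current analysis (a single previous experiment corresponds to $m = 1$; setting all $a_{0k} = 0$ recovers the flat-prior case):

```latex
\[
  \pi(\theta \mid D_{01}, \ldots, D_{0m})
    \;\propto\;
    \Bigl[ \prod_{k=1}^{m} L(\theta \mid D_{0k})^{a_{0k}} \Bigr]\, \pi_0(\theta),
  \qquad 0 \le a_{0k} \le 1 .
\]
```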
352

Bayesian Epistemology and Having Evidence

Dunn, Jeffrey 01 September 2010 (has links)
Bayesian Epistemology is a general framework for thinking about agents who have beliefs that come in degrees. Theories in this framework give accounts of rational belief and rational belief change, which share two key features: (i) rational belief states are represented with probability functions, and (ii) rational belief change results from the acquisition of evidence. This dissertation focuses specifically on the second feature. I pose the Evidence Question: What is it to have evidence? Before addressing this question we must have an understanding of Bayesian Epistemology. The first chapter argues that we should understand Bayesian Epistemology as giving us theories that are evaluative and not action-guiding. I reach this verdict after considering the popular ‘ought’-implies-‘can’ objection to Bayesian Epistemology. The second chapter argues that it is important for theories in Bayesian Epistemology to answer the Evidence Question, and distinguishes between internalist and externalist answers. The third and fourth chapters present and defend a specific answer to the Evidence Question. The account is inspired by reliabilist accounts of justification, and attempts to understand what it is to have evidence by appealing solely to considerations of reliability. Chapter 3 explains how to understand reliability, and how the account fits with Bayesian Epistemology, in particular, the requirement that an agent’s evidence receive probability 1. Chapter 4 responds to objections, which maintain that the account gives the wrong verdict in a variety of situations including skeptical scenarios, lottery cases, scientific cases, and cases involving inference. After slight modifications, I argue that my account has the resources to answer the objections. The fifth chapter considers the possibility of losing evidence. I show how my account can model these cases. To do so, however, we require a modification to Conditionalization, the orthodox principle governing belief change. I present such a modification. The sixth and seventh chapters propose a new understanding of Dutch Book Arguments, historically important arguments for Bayesian principles. The proposal shows that the Dutch Book Arguments for implausible principles are defective, while the ones for plausible principles are not. The final chapter is a conclusion.
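For context on the belief-change principle modified in chapter five, orthodox Conditionalization is standardly stated as follows (a textbook sketch; the dissertation's modified rule accommodating evidence loss is not reproduced here):

```latex
\[
  P_{\mathrm{new}}(A) \;=\; P_{\mathrm{old}}(A \mid E)
  \;=\; \frac{P_{\mathrm{old}}(A \wedge E)}{P_{\mathrm{old}}(E)},
  \qquad P_{\mathrm{old}}(E) > 0 .
\]
```

On learning evidence E (and nothing stronger), the agent's new credence in any proposition A is the old credence conditional on E; note that E itself receives probability 1, matching the requirement discussed in chapter three.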
353

Cubature Kalman Filtering Theory & Applications

Arasaratnam, Ienkaran 04 1900 (has links)
Bayesian filtering refers to the process of sequentially estimating the current state of a complex dynamic system from noisy partial measurements using Bayes' rule. This thesis considers Bayesian filtering as applied to an important class of state estimation problems, which is describable by a discrete-time nonlinear state-space model with additive Gaussian noise. It is known that the conditional probability density of the state given the measurement history, or simply the posterior density, contains all information about the state. For nonlinear systems, the posterior density cannot be described by a finite number of sufficient statistics, and an approximation must be made instead.

The approximation of the posterior density is a challenging problem that has engaged many researchers for over four decades. Their work has resulted in a variety of approximate Bayesian filters. Unfortunately, the existing filters suffer from possible divergence, or the curse of dimensionality, or both, and it is doubtful that a single filter exists that would be considered effective for applications ranging from low to high dimensions. The challenge ahead of us therefore is to derive an approximate nonlinear Bayesian filter, which is theoretically motivated, reasonably accurate, and easily extendable to a wide range of applications at a minimal computational cost.

In this thesis, a new approximate Bayesian filter is derived for discrete-time nonlinear filtering problems, which is named the cubature Kalman filter. To develop this filter, it is assumed that the predictive density of the joint state-measurement random variable is Gaussian. In this way, the optimal Bayesian filter reduces to the problem of how to compute various multi-dimensional Gaussian-weighted moment integrals. To numerically compute these integrals, a third-degree spherical-radial cubature rule is proposed. This cubature rule entails a set of cubature points scaling linearly with the state-vector dimension. The cubature Kalman filter therefore provides an efficient solution even for high-dimensional nonlinear filtering problems. More remarkably, the cubature Kalman filter is the closest known approximate filter in the sense of completely preserving second-order information due to the maximum entropy principle. For the purpose of mitigating divergence, and improving numerical accuracy in systems where there are apparent computer roundoff difficulties, the cubature Kalman filter is reformulated to propagate the square roots of the error-covariance matrices. The formulation of the (square-root) cubature Kalman filter is validated through three different numerical experiments, namely, tracking a maneuvering ship, supervised training of recurrent neural networks, and model-based signal detection and enhancement. All three experiments clearly indicate that this powerful new filter is superior to other existing nonlinear filters. / Thesis / Doctor of Philosophy (PhD)
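To make the spherical-radial rule concrete, the following is a minimal numerical sketch of the third-degree cubature rule described above: 2n equally weighted points placed at ±√n along each column of a square root of the covariance. The function names and the test integrand are illustrative, not drawn from the thesis.

```python
# Minimal sketch of the third-degree spherical-radial cubature rule for
# approximating Gaussian-weighted moment integrals E[f(x)], x ~ N(mu, Sigma).
import numpy as np

def cubature_points(mu, Sigma):
    """Generate the 2n cubature points mu + sqrt(Sigma) * xi_i,
    where xi_i = +/- sqrt(n) e_i (signed unit vectors scaled by sqrt(n))."""
    n = len(mu)
    S = np.linalg.cholesky(Sigma)                          # matrix square root
    xi = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])   # shape (n, 2n)
    return mu[:, None] + S @ xi                            # shape (n, 2n)

def gaussian_expectation(f, mu, Sigma):
    """Approximate E[f(x)] with equal weights 1/(2n) on the cubature points."""
    pts = cubature_points(np.asarray(mu, float), np.asarray(Sigma, float))
    return np.mean([f(pts[:, i]) for i in range(pts.shape[1])], axis=0)

# Example: E[x^T x] for x ~ N(0, I_3) equals trace(I_3) = 3; the rule is
# exact here because the integrand is a polynomial of degree <= 3.
print(gaussian_expectation(lambda x: x @ x, np.zeros(3), np.eye(3)))
```

Note that the point count grows linearly with the state dimension n, which is the source of the efficiency claim in the abstract.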
354

A Study of Bayesian Inference in Medical Diagnosis

Herzig, Michael 05 1900 (has links)
Bayes' formula may be written as follows:

$$P(y_i \mid X) = \frac{P(X \mid y_i)\, P(y_i)}{\sum_{j=1}^{K} P(X \mid y_j)\, P(y_j)} \qquad (1)$$

where $Y = \{y_1, y_2, \ldots, y_K\}$ and $X = \{x_1, x_2, \ldots, x_k\}$. Assuming independence of attributes $x_1, x_2, \ldots, x_k$, Bayes' formula may be rewritten as follows:

$$P(y_i \mid X) = \frac{P(x_1 \mid y_i)\, P(x_2 \mid y_i) \cdots P(x_k \mid y_i)\, P(y_i)}{\sum_{j=1}^{K} P(x_1 \mid y_j)\, P(x_2 \mid y_j) \cdots P(x_k \mid y_j)\, P(y_j)} \qquad (2)$$

In medical diagnosis the $y$'s denote disease states and the $x$'s denote the presence or absence of symptoms. Bayesian inference is applied to medical diagnosis as follows: for an individual with data set $X$, the predicted diagnosis is the disease $y_j$ such that

$$P(y_j \mid X) = \max_i P(y_i \mid X), \qquad i = 1, 2, \ldots, K \qquad (3)$$

as calculated from (2). Inferences based on (2) and (3) correctly allocate a high proportion of patients (>70%) in studies to date, despite violations of the independence assumption. The aim of this thesis is modest: (i) to demonstrate the applicability of Bayesian inference to the problem of medical diagnosis, (ii) to review pertinent literature, (iii) to present a Monte Carlo method which simulates the application of Bayes' formula to distinguish among diseases, and (iv) to present and discuss the results of Monte Carlo experiments which allow statistical statements to be made concerning the accuracy of Bayesian inference when the assumption of independence is violated.

The Monte Carlo study considers paired dependence among attributes when Bayes' formula is used to predict diagnoses from among 6 disease categories. A parameter which measures deviations from attribute independence is defined by

$$D_H = \frac{1}{6} \sum_{j=1}^{6} \bigl| P(x_B \mid x_A, y_j) - P(x_B \mid y_j) \bigr|$$

where $x_A$ and $x_B$ denote a dependent attribute pair. It was found that the correct number of Bayesian predictions, $M$, decreases markedly as attributes increasingly diverge from independence, i.e., as $D_H$ increases. However, a simple first-order linear model of the form $M = \beta_0 + \beta_1 D_H$ does not consistently explain the variation in $M$. / Thesis / Master of Science (MSc)
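As an illustration of how (2) and (3) operate in practice, here is a minimal sketch for binary symptoms; the disease priors and symptom probabilities are invented for the example, not taken from the thesis.

```python
# Minimal sketch of the Bayesian diagnosis rule in (2)-(3), assuming
# binary symptoms and precomputed conditional probabilities.
import numpy as np

def diagnose(x, priors, cond_prob):
    """Return the disease index maximizing P(y_i | X) under the
    attribute-independence assumption of formula (2).
    x:         binary symptom vector, shape (k,)
    priors:    P(y_i), shape (K,)
    cond_prob: P(x_m = 1 | y_i), shape (K, k)
    """
    # Per-symptom Bernoulli likelihoods given each disease, multiplied
    # across symptoms -- the independence assumption in formula (2).
    lik = np.where(x == 1, cond_prob, 1.0 - cond_prob).prod(axis=1)
    post = lik * priors
    return np.argmax(post / post.sum())   # formula (3)

# Example: 2 diseases, 3 symptoms (illustrative numbers only).
priors = np.array([0.7, 0.3])
cond_prob = np.array([[0.9, 0.2, 0.5],
                      [0.1, 0.8, 0.5]])
print(diagnose(np.array([1, 0, 1]), priors, cond_prob))  # -> 0
```

The thesis's Monte Carlo experiments probe what happens to this rule's accuracy when the product in the likelihood step is wrong, i.e., when attribute pairs are in fact dependent.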
355

Worlds Collide through Gaussian Processes: Statistics, Geoscience and Mathematical Programming

Christianson, Ryan Beck 04 May 2023 (has links)
Gaussian process (GP) regression is the canonical method for nonlinear spatial modeling among the statistics and machine learning communities. Geostatisticians use a subtly different technique known as kriging. I shall highlight key similarities and differences between GPs and kriging through the use of large-scale gold mining data. Most importantly, GPs are largely hands-off, automatically learning from the data, whereas kriging requires an expert human in the loop to guide analysis. To emphasize this, I show an imputation method for left-censored values frequently seen in mining data. Oftentimes geologists ignore censored values due to the difficulty of imputing with kriging, but GPs execute imputation with relative ease, leading to better estimates of the gold surface. My hope is that this research can serve as a springboard to encourage the mining community to consider using GPs over kriging for their diverse utility after model fitting. Another common use of GPs that would be inefficient for kriging is Bayesian optimization (BO). Traditionally, BO is designed to find a global optimum by sequentially sampling from a function of interest using an acquisition function. When two or more local or global optima of the function of interest have similar objective values, it often makes sense to target the more "robust" solution with a wider domain of attraction. However, traditional BO weighs these solutions the same, favoring whichever has a slightly better objective value. By combining the idea of expected improvement (EI) from the BO community with mathematical programming's concept of an adversary, I introduce a novel algorithm to target robust solutions called robust expected improvement (REI), sketched below. The adversary penalizes "peaked" areas of the objective function, making those values appear less desirable. REI performs acquisitions using EI on the adversarial space, yielding data sets focused on the robust solution that exhibit EI's already proven excellent balance of exploration and exploitation. / Doctor of Philosophy / Since its origins in the 1940s, spatial statistics modeling has adapted to fit different communities. The geostatistics community developed with an emphasis on modeling mining operations and has further evolved to cover a slew of different applications, largely focused on two or three physical dimensions. The computer experiments community developed later, when physical experiments started moving into the virtual realm with advances in computer technology. While birthed from the same foundation, computer experimenters often look at ten-dimensional or even higher-dimensional problems. Due to these and other differences, each community tailored its methods to best fit its common problems. My research compares the modern instantiations of the differing methodologies on two sets of real gold mining data. Ultimately, I prefer the computer experiments methods for their ease of adaptation to downstream tasks at no cost to model performance. A statistical model is almost never a standalone development; it is created with a specific goal in mind. The first case I show of this is "imputation" of mining data. Mining data often have a detection threshold such that any observation with a very small mineral concentration is recorded at the threshold. Frequently, geostatisticians simply throw out these observations because they cause problems in modeling. Statisticians instead try to use the information that there is a low concentration, combined with the rest of the fully observed data, to derive a best guess at the concentration of thresholded locations. Under the geostatistics framework this is cumbersome, but the computer experiments community considers imputation an easy extension. Another common modeling task is designing an experiment to best learn a surface. The surface may be a gold deposit on Earth, an unknown virtual function, or really anything measurable. To do this, computer experimenters often use "active learning": sampling one point at a time, using that point to build a better-informed model which suggests a new point to sample, and repeating until a satisfactory number of points are sampled. Geostatisticians often prefer "one-shot" experiments, deciding all samples prior to collecting any, so the geostatistics framework is not well suited to active learning. Active learning traditionally tries to find the "best" location of the surface, with either the maximum or minimum response. I adapt this problem, redefining "best" as a "robust" location where the response does not change much even if the location is not perfectly specified. As an example, consider setting operating conditions for a factory. If two settings produce a similar amount of product, but one requires an exact pressure or else the factory blows up, the other is certainly preferred. To design experiments that find robust locations, I borrow ideas from the mathematical programming community to develop a novel method for robust active learning.
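For concreteness, the sketch below shows the standard expected improvement criterion that REI builds on, assuming minimization; mu and sigma would be a fitted GP's predictive mean and standard deviation at a candidate location. The adversarial penalization that converts EI into REI is not reproduced here.

```python
# Minimal sketch of expected improvement (EI) for minimization:
# EI(x) = E[max(0, f_min - Y(x))] with Y(x) ~ N(mu, sigma^2).
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_min):
    """Closed form: (f_min - mu) * Phi(z) + sigma * phi(z), z = (f_min - mu)/sigma."""
    sigma = np.maximum(sigma, 1e-12)      # guard against zero predictive sd
    z = (f_min - mu) / sigma
    return (f_min - mu) * norm.cdf(z) + sigma * norm.pdf(z)

# Example: two candidates with equal predictive mean but different sd;
# EI favors the more uncertain one, illustrating its exploration bias.
print(expected_improvement(np.array([1.0, 1.0]), np.array([0.1, 0.5]), 1.0))
```

In REI, acquisitions like this are computed on the adversarially penalized surface rather than the objective itself, so points near "peaked" optima score lower and sampling concentrates around the robust solution.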
356

Bayesian Principles and Causal Judgment

Kelley, Amanda M. 20 June 2007 (has links)
No description available.
357

Modeling the mail survey response pattern and determining the optimal number of questionnaires: A Bayesian approach

Singer, Ethan Lloyd "Mendel". January 1991 (has links)
No description available.
358

Estimation and detection of nonlinear/chaotic signals: A Bayesian-based approach

Bozek-Kuzmicki, Maribeth January 1995 (has links)
No description available.
359

Blind Image Deconvolution with Conditionally Gaussian Hypermodels

Munch, James Joseph 16 June 2011 (has links)
No description available.
360

Bayesian Models for Computer Model Calibration and Prediction

Vaidyanathan, Sivaranjani 08 October 2015 (has links)
No description available.
