101

Bayesian Unsupervised Labeling of Web Document Clusters

Liu, Ting 22 August 2011 (has links)
Information technologies have recently led to a surge of electronic documents in the form of emails, webpages, blogs, news articles, etc. To help users decide which documents may be interesting to read, it is common practice to organize documents by categories/topics. A wide range of supervised and unsupervised learning techniques already exist for automated text classification and text clustering. However, supervised learning requires a training set of documents already labeled with topics/categories, which is not always readily available. In contrast, unsupervised learning techniques do not require labeled documents, but assigning a suitable category to each resulting cluster remains a difficult problem. The state of the art consists of extracting keywords based on word frequency (or related heuristics). In this thesis, we improve the extraction of keywords for unsupervised labeling of document clusters by designing a Bayesian approach based on topic modeling. More precisely, we describe an approach that uses a large side corpus to infer a language model that implicitly encodes the semantic relatedness of different words. This language model is then used to build a generative model of the cluster in such a way that the probability of generating each word depends on its frequency in the cluster as well as the frequency of its semantically related words. The words with the highest probability of generation are then extracted to label the cluster. In this approach, the side corpus can be thought of as a source of domain knowledge or context. However, there are two potential problems: processing a large side corpus can be time-consuming, and if the content of this corpus is not similar enough to the cluster, the resulting language model may be biased. We deal with these issues by designing a Bayesian transfer learning framework that allows us to process the side corpus just once offline and to weigh its importance based on the degree of similarity with the cluster.
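A minimal sketch of the frequency-plus-relatedness scoring idea this abstract describes; the toy documents, similarity table, and mixing weight below are invented for illustration and are not the thesis's inferred language model:

```python
from collections import Counter

# Toy cluster of tokenized documents.
cluster = [
    "stock market shares trading prices".split(),
    "market investors shares dividend".split(),
    "trading volume stock prices fell".split(),
]
# relatedness[w] maps semantically related words to a similarity weight;
# in the thesis this knowledge would be inferred from a large side corpus.
relatedness = {
    "stock":  {"shares": 0.8, "market": 0.6},
    "shares": {"stock": 0.8, "dividend": 0.5},
    "market": {"trading": 0.7, "stock": 0.6},
}

tf = Counter(w for doc in cluster for w in doc)
total = sum(tf.values())

def label_score(word, alpha=0.5):
    """Score mixing the word's own frequency in the cluster with the
    frequencies of its semantically related words."""
    own = tf[word] / total
    related = sum(sim * tf[v] / total
                  for v, sim in relatedness.get(word, {}).items())
    return (1 - alpha) * own + alpha * related

print("cluster label candidates:",
      sorted(tf, key=label_score, reverse=True)[:3])
```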
102

Some Theory and Applications of Probability in Quantum Mechanics

Ferrie, Christopher January 2012 (has links)
This thesis investigates three distinct facets of the theory of quantum information. The first two, quantum state estimation and quantum process estimation, are closely related and deal with the question of how to estimate the classical parameters in a quantum mechanical model. The third attempts to bring quantum theory as close as possible to classical theory through the formalism of quasi-probability. Building a large scale quantum information processor is a significant challenge. First, we require an accurate characterization of the dynamics experienced by the device to allow for the application of error correcting codes and other tools for implementing useful quantum algorithms. The necessary scaling of computational resources needed to characterize a quantum system as a function of the number of subsystems is by now a well studied problem (the scaling is generally exponential). However, irrespective of the computational resources necessary to just write down a classical description of a quantum state, we can ask about the experimental resources necessary to obtain data (measurement complexity) and the computational resources necessary to generate such a characterization (estimation complexity). These problems are studied here and approached from two directions. The first problem we address is that of quantum state estimation. We apply high-level decision theoretic principles (applied in classical problems such as universal data compression) to the estimation of a qubit state. We prove that quantum states are more difficult to estimate than their classical counterparts by finding optimal estimation strategies. These strategies, requiring the solution to a difficult optimization problem, are difficult to implement in practice. Fortunately, we find estimation algorithms which come close to optimal but require far fewer resources to compute. Finally, we provide a classical analog of this quantum mechanical problem which reproduces, and gives intuitive explanations for, many of its features, such as why adaptive tomography can quadratically reduce its difficulty. The second method for practical characterization of quantum devices is applied to the problem of quantum process estimation. This differs from the above analysis in two ways: (1) we apply strong restrictions on knowledge of various estimation and control parameters (the former making the problem easier, the latter making it harder); and (2) we consider the problem of designing future experiments based on the outcomes of past experiments. We show in test cases that adaptive protocols can exponentially outperform their off-line counterparts. Moreover, we adapt machine learning algorithms to the problem, which brings these experimental design methodologies to the realm of experimental feasibility. In the final chapter we move away from estimation problems to show formally that a classical representation of quantum theory is not tenable. This intuitive conclusion is formally borne out through the connection to quasi-probability -- where it is equivalent to the necessity of negative probability in all such representations of quantum theory. In particular, we generalize previous no-go theorems to arbitrary classical representations of quantum systems of arbitrary dimension. We also discuss recent progress in the program to identify quantum resources for subtheories of quantum theory and operational restrictions motivated by quantum computation.
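As a rough illustration of the classical analog the abstract mentions, the sketch below estimates a single measurement-outcome probability with a conjugate Beta prior updated shot by shot; the thesis's optimal and adaptive estimators are far more involved, and the true parameter here is invented:

```python
import random

random.seed(0)
# Classical analog of single-axis qubit tomography: estimate the
# probability p of outcome "0" from repeated measurements (shots),
# using a conjugate Beta(a, b) prior updated after every shot.
p_true = 0.7          # hypothetical unknown state parameter
a, b = 1.0, 1.0       # uniform Beta(1, 1) prior

for shot in range(200):
    if random.random() < p_true:   # simulated measurement outcome "0"
        a += 1
    else:
        b += 1

posterior_mean = a / (a + b)
posterior_var = a * b / ((a + b) ** 2 * (a + b + 1))
print(f"estimate {posterior_mean:.3f}, true {p_true}, var {posterior_var:.5f}")
```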
103

Using Bayesian Network to Develop Drilling Expert Systems

Alyami, Abdullah August 2012 (has links)
Many years of experience in the field, and sometimes in the lab, are required to develop expert consultants. Texas A&M University has recently established a new method to develop a drilling expert system that can be used as a training tool for young engineers or as a consultation system for various drilling engineering concepts such as drilling fluids, cementing, completion, well control, and underbalanced drilling practices. The method proposes a set of guidelines for optimal drilling operations in different focus areas, integrating current best practices through a decision-making system based on Artificial Bayesian Intelligence. Optimum practices, collected from the literature and experts' opinions, are integrated into a Bayesian Network (BN) to simulate likely scenarios of its use that honor efficient practices as certain parameters vary. The advantage of the Artificial Bayesian Intelligence method is that it can be updated easily when dealing with different opinions. To the best of our knowledge, this study is the first to show a flexible, systematic method to design drilling expert systems. We used these best practices to build decision trees that allow the user to start from an elementary data set and end up with a decision that honors the best practices.
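A toy illustration of the Bayesian-network machinery this abstract describes; the two nodes and all probabilities below are invented placeholders, not the elicited drilling best practices from the thesis:

```python
# Two-node Bayesian network: FormationPressure -> MudWeightAdequate.
# Inference is exact enumeration over the tiny joint distribution.
p_pressure = {"high": 0.3, "normal": 0.7}
p_adequate_given = {            # P(mud weight adequate | formation pressure)
    "high":   {"yes": 0.4, "no": 0.6},
    "normal": {"yes": 0.9, "no": 0.1},
}

def posterior_pressure(evidence_adequate):
    """P(pressure | observed mud weight adequacy) by Bayes' rule."""
    joint = {s: p_pressure[s] * p_adequate_given[s][evidence_adequate]
             for s in p_pressure}
    z = sum(joint.values())
    return {s: round(v / z, 3) for s, v in joint.items()}

# Updated belief about formation pressure after observing inadequacy.
print(posterior_pressure("no"))
```

Because each conditional table is stored separately, revising one expert's opinion means editing a single table, which is the easy-update property the abstract highlights.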
104

Bayesian mixture modelling for characterising environmental exposures and outcomes

Wraith, Darren January 2008 (has links)
Environmental exposure and outcomes assessment is a great challenge to scientists. Increasingly detailed data are becoming available to understand the nature and complexity of the relationships involved. The methodology of mixture models provides a means to understand, quantify and describe features and relationships within complex data sets. In this thesis, we focussed on a number of applied problems to characterise complex environmental exposures and outcomes, including: assessing the interaction between environmental exposures as risk factors for health outcomes; identifying differing environmental outcomes across a region; and establishing patterns in the size and concentration of aerosol particles over time. Mixture model approaches to address these problems are developed and examined for their suitability in these contexts.
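For context, the core mixture-model machinery can be sketched with plain EM for a two-component Gaussian mixture on synthetic data; the thesis works in a Bayesian framework, and the data and initialization here are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic data from two latent groups, standing in for measurements
# such as aerosol concentrations.
x = np.concatenate([rng.normal(0, 1, 300), rng.normal(4, 0.5, 200)])

w, mu, sd = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])
for _ in range(100):
    # E-step: responsibility of each component for each point.
    dens = (w * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2)
            / (sd * np.sqrt(2 * np.pi)))
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means, and standard deviations.
    n = r.sum(axis=0)
    w, mu = n / len(x), (r * x[:, None]).sum(axis=0) / n
    sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n)

print("weights", w.round(2), "means", mu.round(2), "sds", sd.round(2))
```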
105

Bayesian analysis of rainfall-runoff models: insights to parameter estimation, model comparison and hierarchical model development

Marshall, Lucy Amanda, Civil & Environmental Engineering, Faculty of Engineering, UNSW January 2006 (has links)
One challenge that faces hydrologists in water resources planning is to predict the catchment's response to a given rainfall. Estimation of parameter uncertainty (and model uncertainty) allows assessment of the risk in likely applications of hydrological models. Bayesian statistical inference, with computations carried out via Markov Chain Monte Carlo (MCMC) methods, offers an attractive approach to model specification, allowing for the combination of any pre-existing knowledge about individual models and their respective parameters with the available catchment data to assess both parameter and model uncertainty. This thesis develops and applies Bayesian statistical tools for parameter estimation, comparison of model performance and hierarchical model aggregation. The work presented has three main sections. The first area of research compares four MCMC algorithms for simplicity, ease of use, efficiency and speed of implementation in the context of conceptual rainfall-runoff modelling. Included is an adaptive Metropolis algorithm that has characteristics that are well suited to hydrological applications. The utility of the proposed adaptive algorithm is further expanded by the second area of research in which a probabilistic regime for comparing selected models is developed and applied. The final area of research introduces a methodology for hydrologic model aggregation that is flexible and dynamic. Rigidity in the model structure limits representation of the variability in the flow generation mechanism, which becomes a limitation when the flow processes are not clearly understood. The proposed Hierarchical Mixtures of Experts (HME) model architecture is designed to do away with this limitation by selecting individual models probabilistically based on predefined catchment indicators. In addition, the approach allows a more flexible specification of the model error to better assess the risk of likely outcomes based on the model simulations. Application of the approach to lumped and distributed rainfall-runoff models for a variety of catchments shows that by assessing different catchment predictors the method can be a useful tool for prediction of catchment response.
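A bare-bones illustration of Metropolis sampling for one parameter of a toy linear runoff model on synthetic data; the thesis's adaptive Metropolis algorithm and conceptual rainfall-runoff models are considerably more elaborate, and the model, prior, and tuning here are invented:

```python
import numpy as np

rng = np.random.default_rng(7)
# Toy "catchment": runoff = k * rainfall + noise.
rain = rng.uniform(0, 10, 50)
k_true, sigma = 0.6, 0.5
flow = k_true * rain + rng.normal(0, sigma, 50)

def log_post(k):
    """Log posterior: Gaussian likelihood, uniform prior on (0, 1)."""
    if not 0 < k < 1:
        return -np.inf
    resid = flow - k * rain
    return -0.5 * np.sum(resid ** 2) / sigma ** 2

k, samples = 0.5, []
for _ in range(5000):
    prop = k + rng.normal(0, 0.05)          # symmetric random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(k):
        k = prop                            # accept; otherwise keep current k
    samples.append(k)

post = np.array(samples[1000:])             # discard burn-in
print(f"posterior mean k = {post.mean():.3f} (true {k_true})")
```

An adaptive variant, like the one the thesis studies, would tune the proposal scale from the accumulating sample history instead of fixing it at 0.05.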
106

Using box-scores to determine a position's contribution to winning basketball games

Page, Garritt L., January 2005 (has links) (PDF)
Project (M.S.)--Brigham Young University. Dept. of Statistics, 2005. / Includes bibliographical references (p. 108-109).
107

A simulation of industry and occupation codes in the 1970 and 1980 U.S. Census

Avcioglu-Ayturk, Mubeccel Didem. January 2005 (has links)
Thesis (M.S.)--Worcester Polytechnic Institute. / Keywords: industry and occupation codes; Census classification; Bayesian. Includes bibliographical references (p.).
108

Bayesian adaptive designs for non-inferiority and dose selection trials

Spann, Melissa Elizabeth. Seaman, John Weldon, January 2006 (has links)
Thesis (Ph.D.)--Baylor University, 2006. / Includes bibliographical references (p. 123-128).
109

Essays in Fiscal Policy

Falconer, Jean 06 September 2018 (has links)
The subject of this dissertation is fiscal policy in the United States. In recent years the limitations of monetary policy have become more evident, generating greater interest in the use of fiscal policy as a stabilization tool. Despite considerable advances in the fiscal policy literature, many important questions about the effects and implementation of such policy remain unresolved. This motivates the present work, which explores both topics in the chapters that follow. I begin in the second chapter by estimating Federal Reserve responses to changes in taxes and spending. Monetary responses are a critical determinant of fiscal policy effectiveness since central banks have the ability to offset many of the economic changes resulting from fiscal shocks. Using techniques commonly employed in the fiscal multiplier literature, my results indicate a willingness by monetary policymakers to alter policy directly in response to fiscal shocks in a way that either reinforces or counteracts the resulting effects. In the third and fourth chapters I shift my focus to the conduct of fiscal policy. Specifically, I use Bayesian methods to estimate the response of federal discretionary policy to different macroeconomic variables. I allow for uncertainty about various characteristics of the underlying model which enables me to determine, for example, which variables matter to policymakers; whether policy conduct has changed over time; and whether policy responses are state dependent. My results indicate, among other things, that policy responds countercyclically to changes in the labor market, but only during periods of weak economic activity.
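As an illustrative sketch of estimating a policy response coefficient in a Bayesian way, the following conjugate linear regression recovers a countercyclical response from synthetic data; the single-regressor specification, prior, and data are invented and far simpler than the dissertation's models with model uncertainty, time variation, and state dependence:

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic data: a discretionary policy measure responding to a
# labor-market gap indicator.
n = 120
gap = rng.normal(0, 1, n)
policy = 0.8 * gap + rng.normal(0, 0.5, n)     # true response = 0.8

X = np.column_stack([np.ones(n), gap])
tau2, sigma2 = 10.0, 0.25                      # prior variance, noise variance
# Conjugate normal posterior: V = (X'X/sigma2 + I/tau2)^-1, m = V X'y/sigma2
V = np.linalg.inv(X.T @ X / sigma2 + np.eye(2) / tau2)
m = V @ X.T @ policy / sigma2
print(f"posterior response to gap: {m[1]:.3f} +/- {np.sqrt(V[1, 1]):.3f}")
```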
110

Understanding the genetic basis of complex polygenic traits through Bayesian model selection of multiple genetic models and network modeling of family-based genetic data

Bae, Harold Taehyun 12 March 2016 (has links)
The global aim of this dissertation is to develop advanced statistical modeling to understand the genetic basis of complex polygenic traits. In order to achieve this goal, this dissertation focuses on the development of (i) a novel methodology to detect genetic variants with different inheritance patterns, formulated as a Bayesian model selection problem, (ii) integration of genetic and non-genetic data to dissect genotype-phenotype associations using Bayesian networks with family-based data, and (iii) an efficient technique to model family-based data in the Bayesian framework. In the first part of my dissertation, I present a coherent Bayesian framework for selection of the most likely model from the five genetic models (genotypic, additive, dominant, co-dominant, and recessive) used in genetic association studies. The approach uses a polynomial parameterization of genetic data to simultaneously fit the five models and save computations. I provide a closed-form expression of the marginal likelihood for normally distributed data, and evaluate the performance of the proposed method and existing methods through simulated and real genome-wide data sets. The second part of this dissertation presents an integrative analytic approach that utilizes Bayesian networks to represent the complex probabilistic dependency structure among many variables from family-based data. I propose a parameterization that extends mixed effects regression models to Bayesian networks by using random effects as additional nodes of the networks to model the between-subjects correlations. I also present results of simulation studies comparing different model selection metrics for mixed models that can be used for learning BNs from correlated data, and an application of this methodology to real data from a large family-based study. In the third part of this dissertation, I describe an efficient way to account for family structure in Bayesian inference Using Gibbs Sampling (BUGS). In linear mixed models, a random effects vector has a variance-covariance matrix whose dimension is as large as the sample size. However, a direct handling of this multivariate normal distribution is not computationally feasible in BUGS. Therefore, I propose a decomposition of this multivariate normal distribution into univariate normal distributions using singular value decomposition, and present its implementation in BUGS.
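The decomposition in part (iii) can be illustrated directly: if u ~ N(0, s2*K) with K = U D U', then u = U D^{1/2} z for independent univariate z_i ~ N(0, s2), which is the form a Gibbs sampler like BUGS can handle. A small numpy sketch with an invented kinship-style matrix:

```python
import numpy as np

rng = np.random.default_rng(5)
# Invented kinship-style covariance for a family of three subjects.
K = np.array([[1.0, 0.5, 0.5],
              [0.5, 1.0, 0.25],
              [0.5, 0.25, 1.0]])
s2 = 2.0                                   # random-effect variance

# Eigendecomposition of the symmetric PSD matrix K = U D U'.
eigval, U = np.linalg.eigh(K)
L = U @ np.diag(np.sqrt(np.clip(eigval, 0, None)))

# Draw the correlated effect via independent univariate normals.
z = rng.normal(0, np.sqrt(s2), size=(3, 100_000))
u = L @ z
print("empirical cov:\n", np.cov(u).round(2))   # approximately s2 * K
```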
