401

A Simulation of Industry and Occupation Codes in 1970 and 1980 U.S. Census

Avcioglu-Ayturk, Mubeccel Didem 01 June 2005 (has links)
"Classification systems change from census to census for a variety of reasons. The change from 1970 U.S Census to 1980 U.S Census classification was so dramatic that studying the changes and making comparisons are too complicated and expensive. Treating the actual census results as unknown, we simulated a new Census data base reflecting the real situation in 1970 & 1980 classification systems. One of our objective is to explain the process by which codes change so that the researchers can better understand how the new data bases were created. The second objective is to show how this newly created data base is then used to study the comparability of the two classification systems. In this project we do not attempt any estimative or predictive inference. We simply simulate the industry and occupation codes in the U.S. Census public-use samples via a model similar to the one used for multiple imputation."
402

Improved paired comparison models for NFL point spreads by data transformation

Matthews, Gregory J 05 May 2005 (has links)
Each year millions of dollars are wagered on the NFL during the season. A few people make some money, but most often the only real winner is the sports book. In this project, the effect of data transformation on the paired comparison model of Glickman and Stern (1998) is explored. Usual transformations such as the logarithm and square root are used, as well as a transformation involving a threshold. The motivation for each of the transformations is to reduce the influence of blowouts on future predictions. Data from the 2003 and 2004 NFL seasons are examined to see if these transformations aid in improving model fit and the prediction rate against a point spread. Strategies for model-based wagering are also explored.
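
To make the idea of transforming margins concrete, here is a minimal Python sketch of log, square-root, and threshold transformations of a game's point differential; the function name, signature, and the 14-point cap are illustrative assumptions, not the specification used by Glickman and Stern (1998) or in the thesis.

```python
import numpy as np

def transform_margin(margin, method="threshold", cap=14.0):
    """Transform a signed point margin to damp the influence of blowouts.

    margin : point differential (e.g., home score minus away score)
    method : "log", "sqrt", or "threshold" (names are illustrative)
    cap    : assumed threshold beyond which extra points are ignored
    """
    sign = np.sign(margin)
    m = np.abs(margin)
    if method == "log":
        return sign * np.log1p(m)         # strong compression of large margins
    if method == "sqrt":
        return sign * np.sqrt(m)          # milder compression
    if method == "threshold":
        return sign * np.minimum(m, cap)  # truncate blowouts at the cap
    return margin                         # identity: no transformation

# Example: a 35-point blowout contributes no more than a 14-point win.
print(transform_margin(35, "threshold"))  # 14.0
```
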
403

Aplicações do approximate Bayesian computation a controle de qualidade / Applications of approximate Bayesian computation in quality control

Thiago Feitosa Campos 11 June 2015 (has links)
In this work we present two problems from statistical quality control, on-line quality monitoring and environmental stress screening, analyzed from a Bayesian perspective. We discuss the difficulties the Bayesian models face in these applications and then reanalyze the problems with the aid of approximate Bayesian computation (ABC), which provides results much faster and thereby enables more detailed analyses and the prediction of new observations.
404

Análise bayesiana de densidades aleatórias simples / Bayesian analysis of simple random densities

Paulo Cilas Marques Filho 19 December 2011 (has links)
We define, from a known partition into subintervals of a bounded interval of the real line, a prior distribution over a class of densities with respect to Lebesgue measure by constructing a random density whose realizations are nonnegative simple functions that integrate to one and take a constant value on each subinterval of the partition. These simple random densities are used in the Bayesian analysis of a set of absolutely continuous observables, and the prior distribution is proved to be closed under sampling. We explore the prior and posterior distributions through stochastic simulations and find Bayesian solutions to the problem of density estimation. Simulation results show the asymptotic behavior of the posterior distribution as we increase the size of the analyzed data samples. When the partition is unknown, we propose a choice criterion based on the information contained in the sample. Although the expectation of a simple random density is always a discontinuous density, we obtain smooth estimates by solving a decision problem in which the states of nature are realizations of the simple random density and the actions are smooth densities from a suitable class.
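
As a worked illustration of the construction described above, a simple random density on a partition can be written as follows; the Dirichlet prior on the weights is an assumption made for this sketch rather than a statement of the thesis's exact prior.

```latex
% Partition a bounded interval [a,b] into subintervals I_1,...,I_k with
% lengths |I_1|,...,|I_k|. A simple random density spreads a random weight
% theta_j uniformly over each subinterval:
\[
  f(x \mid \theta) \;=\; \sum_{j=1}^{k} \frac{\theta_j}{|I_j|}\,\mathbf{1}_{I_j}(x),
  \qquad \theta_j \ge 0, \quad \sum_{j=1}^{k} \theta_j = 1,
\]
% so every realization is a nonnegative simple function with unit integral.
% Under an assumed Dirichlet prior on the weights,
%   (theta_1, ..., theta_k) ~ Dirichlet(alpha_1, ..., alpha_k),
% the likelihood of n i.i.d. observations depends only on the counts n_j of
% observations falling in I_j, and the posterior is again
% Dirichlet(alpha_1 + n_1, ..., alpha_k + n_k) -- one concrete sense in which
% such a prior is closed under sampling.
```
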
406

A novel sequential ABC algorithm with applications to the opioid crisis using compartmental models

Langenfeld, Natalie Rose 01 May 2018 (has links)
The abuse of and dependence on opioids are major public health problems and have been the focus of intense media coverage and scholarly inquiry. This research explores the problem in Iowa through the lens of infectious disease modeling. We sought to identify the current state of the crisis and the factors affecting the progression of the addiction process, and to evaluate interventions as data become available. After surveying the literature on available Bayesian computation techniques, we introduced a novel sequential approximate Bayesian computation (ABC) technique to address shortcomings of existing methods in this complex problem space. A spatial compartmental model was used that allows forward and backward progression through susceptible, exposed, addicted, and removed disease states. Data for this model were compiled for Iowa counties over the years 2006-2016 from a variety of sources: prescription overdose deaths and treatment data were obtained from the Iowa Department of Public Health, possession and distribution arrest data were acquired from the Iowa Department of Public Safety, a measure of total available pain reliever prescriptions was derived from private health insurance claims data, and population totals were obtained from the US Census Bureau. Inference was conducted in a Bayesian framework. A measure called the empirically adjusted reproductive number, which estimates the expected number of new users generated by a single user, was used to examine the growth of the crisis. Results reveal the trend in recruitment of new users and the peak recruitment times. While we identify an overall decrease in the rate of spread during the study period, the scope of the problem remains severe, and interesting outlying trends require further investigation. In addition, an examination of the reproductive numbers estimated for contact within and between counties indicates that medical exposure, rather than spread through social networks, may be the key driver of this crisis.
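
For context, a minimal rejection-ABC sketch in Python is given below; it is not the sequential algorithm developed in the thesis, and the toy simulator, flat priors, tolerance, and data file name are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_counts(beta, gamma, n_steps=11, pop=10000, seed_addicted=50):
    """Toy discrete-time susceptible-addicted-removed simulator (a stand-in
    for the spatial compartmental model described in the abstract)."""
    S, A, R = pop - seed_addicted, seed_addicted, 0
    addicted = []
    for _ in range(n_steps):
        new_add = rng.binomial(S, min(1.0, beta * A / pop))  # new users
        new_rem = rng.binomial(A, gamma)                     # removals
        S, A, R = S - new_add, A + new_add - new_rem, R + new_rem
        addicted.append(A)
    return np.array(addicted)

def abc_rejection(observed, n_draws=20000, tol=500.0):
    """Keep parameter draws whose simulated series lies close to the data."""
    kept = []
    for _ in range(n_draws):
        beta = rng.uniform(0.0, 1.0)    # assumed flat priors
        gamma = rng.uniform(0.0, 0.5)
        sim = simulate_counts(beta, gamma, n_steps=len(observed))
        if np.linalg.norm(sim - observed) < tol:  # distance on summaries
            kept.append((beta, gamma))
    return np.array(kept)

# observed = np.loadtxt("county_counts.txt")   # hypothetical data file
# posterior_draws = abc_rejection(observed)
```
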
407

What You Know Counts: Why We Should Elicit Prior Probabilities from Experts to Improve Quantitative Analysis with Qualitative Knowledge in Special Education Science

Hicks, Tyler Aaron 03 March 2015 (has links)
Qualitative knowledge is about types of things, and their excellences. There are many ways we humans produce qualitative knowledge about the world, and much of it is derived from non-quantitative sources (e.g., narratives, clinical experiences, intuitions). The purpose of my dissertation was to investigate the possibility of using Bayesian inference to improve quantitative analysis in special education research with qualitative knowledge. It is impossible, however, to fully disentangle philosophy of inquiry, methodology, and methods. My evaluation of Bayesian estimators thus addresses each of these areas. Chapter Two offers a philosophical argument to substantiate the thesis that Bayesian inference is usually more applicable in education science than classical inference. I then move on, in Chapter Three, to consider methodology. I use simulation procedures to show that even a minimum amount of qualitative information can suffice to improve Bayesian t-tests' frequency properties. Finally, in Chapter Four, I offer a practical demonstration of how Bayesian methods could be utilized in special education research to solve technical problems. In Chapter Five, I show how these three chapters, taken together, provide evidence that Bayesian analysis can promote a romantic science of special education - i.e., a non-positivistic science that invites teleological explanation. These explanations are often produced by researchers in the qualitative tradition, and Bayesian priors are a formal mechanism for strengthening quantitative analysis with such qualitative bits of information. Researchers are also free to use their favorite qualitative methods to elicit such priors from experts.
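
As a minimal illustration of how an expert-elicited prior enters a quantitative analysis (a conjugate normal-mean sketch, not the dissertation's simulation design; the scores and prior settings are hypothetical):

```python
import numpy as np

def posterior_normal_mean(data, prior_mean, prior_sd, sigma):
    """Conjugate update for a normal mean with known sigma.
    prior_mean and prior_sd encode the expert-elicited prior."""
    n = len(data)
    prior_prec = 1.0 / prior_sd**2
    data_prec = n / sigma**2
    post_var = 1.0 / (prior_prec + data_prec)
    post_mean = post_var * (prior_prec * prior_mean + data_prec * np.mean(data))
    return post_mean, np.sqrt(post_var)

# Small sample (n = 8) of intervention gain scores -- hypothetical numbers.
scores = np.array([3.1, 4.0, 2.2, 5.5, 1.8, 4.7, 3.9, 2.5])

# Vague prior vs. an expert prior centered on a modest positive effect:
print(posterior_normal_mean(scores, prior_mean=0.0, prior_sd=100.0, sigma=2.0))
print(posterior_normal_mean(scores, prior_mean=3.0, prior_sd=1.0, sigma=2.0))
```

With only eight observations, the informative prior noticeably tightens the posterior, which is the mechanism the dissertation argues can strengthen small-sample special education studies.
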
408

Bayesian optimization with empirical constraints

Azimi, Javad 05 September 2012 (has links)
Bayesian Optimization (BO) methods are often used to optimize an unknown function f(·) that is costly to evaluate. They typically work in an iterative manner. In each iteration, given a set of observation points, BO algorithms select k ≥ 1 points to be evaluated. The results of those points are then added to the set of observations and the procedure is repeated until a stopping criterion is met. The goal is to optimize the function f(·) with a small number of experiment evaluations. While this problem has been extensively studied, most existing approaches ignore some real-world constraints frequently encountered in practical applications. In this thesis, we extend the BO framework in a number of important directions to incorporate some of these constraints. First, we introduce a constrained BO framework where, instead of selecting a precise point at each iteration, we request a constrained experiment that is characterized by a hyper-rectangle in the input space. We introduce efficient sequential and non-sequential algorithms to select a set of constrained experiments that best optimize f(·) within a given budget. Second, we introduce one of the first attempts at batch BO, where instead of selecting one experiment at each iteration, a set of k > 1 experiments is selected. This can significantly speed up the overall running time of BO. Third, we introduce scheduling algorithms for the BO framework when: 1) it is possible to run concurrent experiments; 2) the durations of experiments are stochastic, but with a known distribution; and 3) there is a limited number of experiments to run in a fixed amount of time. We propose both online and offline scheduling algorithms that effectively handle these constraints. Finally, we introduce a hybrid BO approach which switches between the sequential and batch modes. The proposed hybrid approach provides a substantial speedup over sequential policies without significant performance loss. / Graduation date: 2013
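
For orientation, a generic sequential BO iteration is sketched below with a Gaussian-process surrogate and an upper-confidence-bound acquisition; this is not the constrained, batch, or scheduling algorithms contributed by the thesis, and the toy objective and kernel settings are assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)
f = lambda x: -(x - 0.3) ** 2 + 0.05 * np.sin(20 * x)  # toy "expensive" objective

# A few initial observations.
X = rng.uniform(0, 1, size=(4, 1))
y = f(X).ravel()

candidates = np.linspace(0, 1, 200).reshape(-1, 1)
for _ in range(10):                        # each loop = one experiment
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.1)).fit(X, y)
    mu, sd = gp.predict(candidates, return_std=True)
    ucb = mu + 2.0 * sd                    # upper-confidence-bound acquisition
    x_next = candidates[np.argmax(ucb)].reshape(1, 1)
    X = np.vstack([X, x_next])             # add the new observation
    y = np.append(y, f(x_next).ravel())

print("best x found:", X[np.argmax(y)], "value:", y.max())
```
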
409

Bayesian methods for knowledge transfer and policy search in reinforcement learning

Wilson, Aaron (Aaron Creighton) 28 July 2012 (has links)
How can an agent generalize its knowledge to new circumstances? To learn effectively an agent acting in a sequential decision problem must make intelligent action selection choices based on its available knowledge. This dissertation focuses on Bayesian methods of representing learned knowledge and develops novel algorithms that exploit the represented knowledge when selecting actions. Our first contribution introduces the multi-task Reinforcement Learning setting in which an agent solves a sequence of tasks. An agent equipped with knowledge of the relationship between tasks can transfer knowledge between them. We propose the transfer of two distinct types of knowledge: knowledge of domain models and knowledge of policies. To represent the transferable knowledge, we propose hierarchical Bayesian priors on domain models and policies respectively. To transfer domain model knowledge, we introduce a new algorithm for model-based Bayesian Reinforcement Learning in the multi-task setting which exploits the learned hierarchical Bayesian model to improve exploration in related tasks. To transfer policy knowledge, we introduce a new policy search algorithm that accepts a policy prior as input and uses the prior to bias policy search. A specific implementation of this algorithm is developed that accepts a hierarchical policy prior. The algorithm learns the hierarchical structure and reuses components of the structure in related tasks. Our second contribution addresses the basic problem of generalizing knowledge gained from previously-executed policies. Bayesian Optimization is a method of exploiting a prior model of an objective function to quickly identify the point maximizing the modeled objective. Successful use of Bayesian Optimization in Reinforcement Learning requires a model relating policies and their performance. Given such a model, Bayesian Optimization can be applied to search for an optimal policy. Early work using Bayesian Optimization in the Reinforcement Learning setting ignored the sequential nature of the underlying decision problem. The work presented in this thesis explicitly addresses this problem. We construct new Bayesian models that take advantage of sequence information to better generalize knowledge across policies. We empirically evaluate the value of this approach in a variety of Reinforcement Learning benchmark problems. Experiments show that our method significantly reduces the amount of exploration required to identify the optimal policy. Our final contribution is a new framework for learning parametric policies from queries presented to an expert. In many domains it is difficult to provide expert demonstrations of desired policies. However, it may still be a simple matter for an expert to identify good and bad performance. To take advantage of this limited expert knowledge, our agent presents experts with pairs of demonstrations and asks which of the demonstrations best represents a latent target behavior. The goal is to use a small number of queries to elicit the latent behavior from the expert. We formulate a Bayesian model of the querying process, an inference procedure that estimates the posterior distribution over the latent policy space, and an active procedure for selecting new queries for presentation to the expert. We show, in multiple domains, that the algorithm successfully learns the target policy and that the active learning strategy generally improves the speed of learning. / Graduation date: 2013
410

Bayesian Hierarchical, Semiparametric, and Nonparametric Methods for International New Product Diffusion

Hartman, Brian Matthew 2010 August 1900 (has links)
Global marketing managers are keenly interested in being able to predict the sales of their new products. Understanding how a product is adopted over time allows the managers to optimally allocate their resources. With the world becoming ever more global, there are strong and complex interactions between the countries in the world. My work explores how to describe the relationship between those countries and determines the best way to leverage that information to improve the sales predictions. In Chapter II, I describe how diffusion speed has changed over time. The most recent major study on this topic, by Christophe Van den Bulte, investigated new product diffusions in the United States. Van den Bulte notes that a similar study is needed in the international context, especially in developing countries. Additionally, his model contains the implicit assumption that the diffusion speed parameter is constant throughout the life of a product. I model the time component as a nonparametric function, allowing the speed parameter the flexibility to change over time. I find that early in the product's life, the speed parameter is higher than expected. Additionally, as the Internet has grown in popularity, the speed parameter has increased. In Chapter III, I examine whether the interactions can be described through a reference hierarchy in addition to the cross-country word-of-mouth effects already in the literature. I also expand the word-of-mouth effect by relating the magnitude of the effect to the distance between the two countries. The current literature only applies that effect equally to the n closest countries (forming a neighbor set). This also leads to an analysis of how best to measure the distance between two countries. I compare four possible distance measures: distance between the population centroids, trade flow, tourism flow, and cultural similarity. Including the reference hierarchy improves the predictions by 30 percent over the current best model. Finally, in Chapter IV, I look more closely at the Bass Diffusion Model. It is prominently used in the marketing literature and is the basis of my analysis in Chapter III. All of the current formulations include the implicit assumption that all the regression parameters are equal for each country. A one-dollar increase in GDP should have more of an effect in a poor country than in a rich country. A Dirichlet process prior enables me to cluster the countries by their regression coefficients. Incorporating the distance measures can improve the predictions by 35 percent in some cases.
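
For reference, the standard Bass diffusion model that Chapter IV builds on can be written as follows (conventional notation; the hierarchical and Dirichlet-process extensions described above are not shown):

```latex
% Bass (1969) diffusion model: N(t) cumulative adopters, m market potential,
% p coefficient of innovation (external influence), q coefficient of imitation
% (word of mouth).
\[
  \frac{dN(t)}{dt} \;=\; \Bigl(p + \frac{q}{m}\,N(t)\Bigr)\bigl(m - N(t)\bigr),
\]
% equivalently, with F(t) = N(t)/m the adoption fraction and f(t) = F'(t),
\[
  \frac{f(t)}{1 - F(t)} \;=\; p + q\,F(t),
  \qquad
  F(t) \;=\; \frac{1 - e^{-(p+q)t}}{1 + \frac{q}{p}\,e^{-(p+q)t}}.
\]
```
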
