141

Diagramas de influência e teoria estatística / Influence Diagrams and Statistical Theory

Stern, Rafael Bassi 09 January 2009 (has links)
The main objective of this work is to analyze the controversial concept of information in statistics. To do so, the concept of information according to Basu is first presented. The analysis is then divided into three parts: information in a data set, information in an experiment, and influence diagrams. In the first two parts, we define properties an information function should satisfy in order to accord with Basu's concept. In the first part, it is shown how the likelihood principle is an equivalence class that follows from believing that trivial experiments bring no information. Metrics that satisfy the likelihood principle are also presented and used to analyze an intuitive example. In the second part, the problem becomes that of determining the information in a particular experiment. The relation between Blackwell's sufficiency, trivial experiments, and classical sufficiency is presented. Blackwell's equivalence is also analyzed and its relationship with the likelihood principle is discussed. The metrics presented to evaluate the information in a data set are adapted to do the same for experiments. Finally, a number of symmetries from the first parts emerge as essential elements of the concept of information. To gain intuition about these elements, we rewrite them using the graphical tool of influence diagrams. Definitions such as sufficiency, Blackwell's sufficiency, minimal sufficiency, and completeness are thus restated using only influence diagrams.
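As an illustrative aside (not part of the thesis), the equivalence-class view of the likelihood principle can be checked numerically: a binomial design (number of trials fixed) and a negative binomial design (number of successes fixed) that produce proportional likelihoods must yield identical posteriors under any common prior, so an information metric satisfying the principle must treat the two data sets alike. All numbers below are hypothetical.

```python
# A minimal sketch of the likelihood principle: 3 successes and 7
# failures give proportional likelihoods under both stopping rules,
# hence identical posteriors for the same prior.
import numpy as np
from scipy.stats import binom, nbinom

theta = np.linspace(0.001, 0.999, 999)  # grid over the success probability

lik_b = binom.pmf(3, 10, theta)    # binomial: n = 10 trials fixed
lik_nb = nbinom.pmf(7, 3, theta)   # negative binomial: stop at the 3rd success

post_b = lik_b / lik_b.sum()       # flat prior, normalized on the grid
post_nb = lik_nb / lik_nb.sum()
print(np.allclose(post_b, post_nb))  # True: the experiments carry the same information
```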
142

Paired Comparison Models for Ranking National Soccer Teams

Hallinan, Shawn E. 05 May 2005 (has links)
National soccer teams are currently ranked by soccer's governing body, the Federation Internationale de Football Association (FIFA). Although the system used by FIFA is thorough, taking many different factors into account, many of the weights used in its calculations are somewhat arbitrary. It is investigated here how a statistical model might better compare the teams for ranking purposes. By treating each game played as a pairwise comparison experiment and using the Bradley-Terry model as a starting point, some suitable models are presented. A key feature of the final model introduced here is its ability to differentiate between friendly matches and competitive matches when determining the impact of a match on a team's ranking. Posterior distributions of the rating parameters are obtained, and the rankings and results obtained from each model are compared to FIFA's rankings and to each other.
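The Bradley-Terry starting point has a compact classical form. The sketch below fits it by Hunter's MM algorithm rather than the posterior-based approach of the thesis, and the win matrix is hypothetical; it is meant only to show the pairwise-comparison mechanics.

```python
# A minimal sketch of the classic Bradley-Terry model: team strengths
# pi_i are fit so that P(i beats j) = pi_i / (pi_i + pi_j).
import numpy as np

def bradley_terry(wins, n_iter=200):
    """wins[i, j] = number of games team i won against team j."""
    n = wins.shape[0]
    games = wins + wins.T          # total games between each pair
    total_wins = wins.sum(axis=1)  # W_i
    pi = np.ones(n)
    for _ in range(n_iter):
        # Hunter's MM update: pi_i = W_i / sum_j n_ij / (pi_i + pi_j)
        denom = games / (pi[:, None] + pi[None, :])
        np.fill_diagonal(denom, 0.0)
        pi = total_wins / denom.sum(axis=1)
        pi /= pi.sum()             # fix the scale (strengths sum to 1)
    return pi

# Hypothetical results among three teams: row beat column this many times.
wins = np.array([[0, 3, 2],
                 [1, 0, 2],
                 [0, 1, 0]])
print(bradley_terry(wins))  # higher value = stronger team
```

Extending this toward the thesis's models would mean placing priors on the strengths and letting the match type (friendly versus competitive) scale each game's contribution.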
143

Análise estatística na interpretação de imagens: microarranjos de DNA e ressonância magnética funcional / Statistical analysis of image interpretation: DNA microarrays and functional magnetic resonance

Vencio, Ricardo Zorzetto Nicoliello 01 September 2006 (has links)
The goal of this work is to present novel bioinformatics methods developed for the statistical analysis of two image-based techniques: DNA microarrays and functional magnetic resonance imaging. The main interest is in approaching these experimental techniques in small-sample situations, i.e., when there are relatively few experimental observations of the phenomenon of interest, of which single-subject analysis is the most extreme case. To approach these problems we use Bayesian inference in the context of decision theory under uncertainty, computationally implemented within the framework of decision support systems. Both technologies produce complex data based on interpreting the differences between images of the system's response to a stimulus and its response in a control condition. The result of this work is the development of two decision support systems, called HTself and Dotslashen, for the analysis of microarray and functional magnetic resonance imaging data, respectively, together with their underlying mathematical and computational methods. These systems extract rational knowledge from normative databases through specific mathematical models, thereby circumventing the small-sample problem. Finally, applications to real problems are described in order to highlight the utility of the developed decision support systems in molecular biology and functional neuroimaging.
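As a hedged illustration of the small-sample strategy described above (not HTself's actual model), the sketch below shows how a normative database can supply an informative prior for a single observed microarray log-ratio via a conjugate normal-normal update; all numbers are hypothetical.

```python
# A minimal sketch of borrowing strength from a normative database when
# only one observation is available: a conjugate normal-normal update.
import numpy as np

# Hypothetical normative database of log-ratios for a gene under
# "no change" conditions; its spread defines the prior.
normative = np.random.default_rng(0).normal(0.0, 0.4, size=500)
prior_mean, prior_var = normative.mean(), normative.var()

obs = 1.2        # the single observed log-ratio for this subject
obs_var = 0.25   # assumed measurement variance

# Posterior of the true log-ratio (normal prior x normal likelihood).
post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
post_mean = post_var * (prior_mean / prior_var + obs / obs_var)
print(post_mean, np.sqrt(post_var))  # shrunk toward the normative baseline
```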
144

Using LiDAR Data to Analyze Access Management Criteria in Utah

Seat, Marlee Lyn 01 April 2017 (has links)
The Utah Department of Transportation (UDOT) has completed a Light Detection and Ranging (LiDAR) data inventory that includes access locations across the UDOT network. The new data are anticipated to be extremely useful in better defining safety and in completing a systemwide analysis of locations where safety could be, or already has been, improved across the state. The Department of Civil and Environmental Engineering at Brigham Young University (BYU) has worked with the new data to perform a statewide safety analysis related to access management, particularly driveway spacing and raised medians. The primary objective of this research was to increase understanding of the safety impacts of access management across the state. This objective was accomplished by using the LiDAR database to evaluate driveway spacing and locations, to aid in hot spot identification, and to develop relationships between access design and location as a function of safety and access category (AC). Utah Administrative Rule R930-6 contains access management guidelines intended to balance the access found on a roadway with traffic and safety operations; these guidelines were used to find the maximum number of driveways recommended for a roadway. ArcMap 10.3 and Microsoft Excel were used to visualize the data and identify hot spot locations. An analysis conducted in this study compared current roadway characteristics to the R930-6 guidelines to find locations where differences occurred. Such a difference does not indicate the current AC is incorrect; it simply means that the assigned AC does not match current roadway characteristics based on the LiDAR data analysis. UDOT can decide what each roadway will become in the future and help shape each segment using the AC outlined in R930-6. A hierarchical Bayesian before-after model, created in previous BYU safety research, was used to analyze locations where raised medians have been installed. Twenty locations where raised medians were installed in Utah between 2002 and 2014 were used in this model, which analyzed the raised medians by AC; only three AC were represented in the data. The model produced regression plots depicting the decrease in crashes before and after installation, posterior distribution plots showing the probability of a decrease in crashes after installation, and crash modification factor (CMF) plots presenting the CMF values estimated for different vehicle miles traveled (VMT) values. Overall, installing a raised median yields an approximate 53 percent reduction in all crashes. Individual AC analyses yielded reductions ranging from 32 to 44 percent for all severity groups except severities 4 and 5; when the model was run for crash severities 4 and 5 alone, a larger reduction of 57 to 58 percent was found.
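As a hedged illustration of the before-after logic (the BYU model is fully hierarchical across sites; this is a single-site conjugate Poisson-Gamma simplification with hypothetical crash counts), the sketch below produces the two quantities the abstract describes: a posterior probability of a crash decrease and a distribution of CMF values.

```python
# A minimal sketch of a Bayesian before-after crash comparison for one
# site, using conjugate Gamma posteriors for the Poisson crash rates.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical counts: crashes per year before/after median installation.
before = np.array([12, 15, 11, 14])
after = np.array([6, 7, 5])

# Gamma(0.5, 0.5) weak prior on each Poisson rate; the posterior is Gamma.
lam_before = rng.gamma(0.5 + before.sum(), 1.0 / (0.5 + len(before)), 10000)
lam_after = rng.gamma(0.5 + after.sum(), 1.0 / (0.5 + len(after)), 10000)

cmf = lam_after / lam_before  # crash modification factor draws
print("posterior mean CMF:", cmf.mean())
print("P(crash rate decreased):", (cmf < 1).mean())
```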
145

Unstable Consumer Learning Models: Structural Estimation and Experimental Examination

Lovett, Mitchell James 21 October 2008 (has links)
This dissertation explores how consumers learn from repeated experiences with a product offering. It develops a new Bayesian consumer learning model, the unstable learning model, which expands on existing models of learning under stable quality by considering the case in which quality is changing. Further, the dissertation examines situations in which consumers may act as if quality is changing when it is stable, or vice versa. This examination proceeds in two essays.

The first essay uses two experiments to examine how consumers learn when product quality is stable or changing. By collecting repeated measures of expectations and experiences, the added information enables estimation to discriminate between stable and unstable learning. The key conclusions are that (1) most consumers act as if quality is unstable, even when it is stable, and (2) consumers respond to the environment they face, adjusting their learning in the correct direction. These conclusions have important implications for the formation and value of brand equity.

Based on the conclusions of the first essay, the second essay develops a choice model of consumer learning when consumers believe quality is changing even though it is not. A Monte Carlo experiment tests the efficacy of this model against the standard model. The key conclusion is that both models perform similarly well when the model assumptions match the way consumers actually learn, but under a mismatch the existing model is biased while the new model continues to perform well. These biases could lead to suboptimal branding decisions.
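A plausible reading of the stable/unstable contrast (illustrative, not the dissertation's estimated specification) is the scalar Kalman filter: a consumer who believes quality is stable effectively averages all past experiences, while one who believes quality drifts as a random walk keeps discounting old ones. All signal values below are hypothetical.

```python
# A minimal sketch contrasting stable and unstable Bayesian learning
# about product quality via a scalar Kalman filter.
import numpy as np

def kalman_beliefs(signals, obs_var, drift_var):
    """Posterior mean of quality after each experience signal.
    drift_var = 0 recovers stable-quality Bayesian learning."""
    mean, var = 0.0, 10.0          # diffuse initial belief
    means = []
    for x in signals:
        var += drift_var           # quality may have moved since last time
        gain = var / (var + obs_var)
        mean += gain * (x - mean)  # update toward the new experience
        var *= (1 - gain)
        means.append(mean)
    return np.array(means)

signals = np.array([3.0, 3.2, 2.8, 5.0, 5.1, 4.9])  # quality jump mid-way
print(kalman_beliefs(signals, obs_var=1.0, drift_var=0.0))  # stable learner
print(kalman_beliefs(signals, obs_var=1.0, drift_var=0.5))  # unstable learner
```

The unstable learner tracks the mid-sequence quality jump quickly; the stable learner lags behind it, which is the bias the second essay's Monte Carlo experiment probes.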
146

Bayesian and Information-Theoretic Learning of High Dimensional Data

Chen, Minhua January 2012 (has links)
The concept of sparseness is harnessed to learn a low-dimensional representation of high-dimensional data. This sparseness assumption is exploited in multiple ways. In the Bayesian Elastic Net, a small number of correlated features are identified for the response variable. In the sparse factor analysis for biomarker trajectories, high-dimensional gene expression data are reduced to a small number of latent factors, each with a prototypical dynamic trajectory. In the Bayesian Graphical LASSO, the inverse covariance matrix of the data distribution is assumed to be sparse, inducing a sparsely connected Gaussian graph. In the nonparametric Mixture of Factor Analyzers, the covariance matrices in the Gaussian mixture model are forced to be low-rank, which is closely related to the concept of block sparsity.

Finally, in the information-theoretic projection design, a linear projection matrix is explicitly sought for information-preserving dimensionality reduction. All the methods mentioned above prove effective on both simulated and real high-dimensional datasets.
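To make the sparse inverse covariance idea concrete, the sketch below uses scikit-learn's (non-Bayesian) graphical lasso as a stand-in for the dissertation's Bayesian Graphical LASSO; the data are synthetic.

```python
# A minimal sketch of sparse inverse covariance estimation: near-zero
# entries in the precision matrix correspond to missing edges in the
# Gaussian graph, and the l1 penalty drives most entries to exactly zero.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)

# Hypothetical data: 3 features driven by a shared factor + 2 independent.
n = 200
z = rng.normal(size=(n, 1))
X = np.hstack([z + 0.3 * rng.normal(size=(n, 3)), rng.normal(size=(n, 2))])

model = GraphicalLasso(alpha=0.1).fit(X)
print(np.round(model.precision_, 2))  # sparse off-diagonal structure
```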
147

Bayesian Semiparametric Models for Nonignorable Missing Data Mechanisms in Logistic Regression

Ozturk, Olcay 01 May 2011 (has links)
In this thesis, Bayesian semiparametric models are developed for the missing data mechanisms of nonignorably missing covariates in logistic regression. In the missing data literature, a fully parametric approach is used to model nonignorable missing data mechanisms: a probit or logit link of the conditional probability of the covariate being missing is modeled as a linear combination of all variables, including the missing covariate itself. However, nonignorably missing covariates may not be linearly related to the probit (or logit) of this conditional probability. In this study, the relationship between the probit of the probability of the covariate being missing and the missing covariate itself is modeled with a semiparametric approach based on penalized spline regression. An efficient Markov chain Monte Carlo (MCMC) sampling algorithm is established to estimate the parameters, and a WinBUGS program is constructed to sample from the full conditional posterior distributions of the parameters via Gibbs sampling. Monte Carlo simulation experiments under different true missing data mechanisms are used to compare the bias and efficiency of the resulting estimators with those from the fully parametric approach. These simulations show that logistic regression estimators based on semiparametric missing data models have better bias and efficiency properties than those based on fully parametric missing data models when the true relationship between the missingness and the missing covariate is nonlinear; the two are comparable when the relationship is linear.
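As a hedged illustration of the penalized-spline ingredient (the thesis embeds it in a Bayesian probit missingness model fit by MCMC; this is a frequentist truncated-power-basis version on synthetic data), the sketch below shows how a ridge penalty on knot coefficients lets the fit capture a nonlinear relationship without overfitting.

```python
# A minimal sketch of penalized spline regression with a truncated
# linear basis and a ridge penalty on the hinge coefficients.
import numpy as np

def pspline_fit(x, y, n_knots=10, lam=1.0):
    knots = np.quantile(x, np.linspace(0.05, 0.95, n_knots))
    # Design: intercept, linear term, and truncated linear hinges.
    B = np.column_stack([np.ones_like(x), x,
                         np.maximum(x[:, None] - knots[None, :], 0.0)])
    # Penalize only the hinge coefficients, leaving the line unpenalized.
    P = np.diag([0.0, 0.0] + [lam] * n_knots)
    coef = np.linalg.solve(B.T @ B + P, B.T @ y)
    return knots, coef

rng = np.random.default_rng(2)
x = rng.uniform(-2, 2, 300)
y = np.sin(2 * x) + rng.normal(0, 0.2, 300)  # a nonlinear relationship
knots, coef = pspline_fit(x, y)
```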
148

Hessian-based response surface approximations for uncertainty quantification in large-scale statistical inverse problems, with applications to groundwater flow

Flath, Hannah Pearl 11 September 2013 (has links)
Subsurface flow phenomena characterize many important societal issues in energy and the environment. A key feature of these problems is that subsurface properties are uncertain, due to the sparsity of direct observations of the subsurface. The Bayesian formulation of this inverse problem provides a systematic framework for inferring uncertainty in the properties given uncertainties in the data, the forward model, and prior knowledge of the properties. We address the following problem: given noisy measurements of the head, the probability density function (pdf) describing the noise, prior information in the form of a pdf of the hydraulic conductivity, and a groundwater flow model relating the head to the hydraulic conductivity, find the posterior pdf of the parameters describing the hydraulic conductivity field. Unfortunately, conventional sampling of this pdf to compute statistical moments is intractable for problems governed by large-scale forward models and high-dimensional parameter spaces. We construct a Gaussian process surrogate of the posterior pdf based on Bayesian interpolation between a set of "training" points. We employ a greedy algorithm to find the training points by solving a sequence of optimization problems, each placing a new training point at the maximizer of the error in the approximation; scalable Newton optimization methods solve this "optimal" training point problem. We tailor the Gaussian process surrogate to the curvature of the underlying posterior pdf according to the Hessian of the log posterior at a subset of training points, made computationally tractable by a low-rank approximation of the data misfit Hessian. A Gaussian mixture approximation of the posterior is extracted from the Gaussian process surrogate and used as a proposal in a Markov chain Monte Carlo method for sampling both the surrogate and the true posterior; the Gaussian process surrogate is also used as a first-stage approximation in a two-stage delayed acceptance MCMC method. We provide evidence for the viability of the low-rank approximation of the Hessian through numerical experiments on a large-scale atmospheric contaminant transport problem and analysis of an infinite-dimensional model problem, and we provide similar results for our groundwater problem. We then present results from the proposed MCMC algorithms.
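The low-rank Hessian idea can be illustrated in miniature: when only Hessian-vector products are affordable (as in PDE-constrained settings), a few dominant eigenpairs of the data-misfit Hessian can be extracted matrix-free. The sketch below uses a synthetic stand-in Hessian; it is not the thesis's scalable implementation.

```python
# A minimal sketch of matrix-free low-rank Hessian approximation using
# a Lanczos eigensolver driven only by Hessian-vector products.
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

rng = np.random.default_rng(3)
n, r = 500, 5

# Stand-in for a data-misfit Hessian with a rapidly decaying spectrum.
U, _ = np.linalg.qr(rng.normal(size=(n, r)))
H = U @ np.diag([100.0, 40.0, 10.0, 3.0, 1.0]) @ U.T

hvp = LinearOperator((n, n), matvec=lambda v: H @ v, dtype=np.float64)
vals, vecs = eigsh(hvp, k=r, which="LM")   # only products with H are used
print(np.sort(vals)[::-1])                 # dominant curvature directions

# Low-rank reconstruction is accurate because the spectrum decays.
H_r = (vecs * vals) @ vecs.T
print(np.linalg.norm(H - H_r) / np.linalg.norm(H))
```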
149

Facilitating Clinical Trials of Parenteral Lipid Strategies for the Prevention of Intestinal Failure Associated Liver Disease (IFALD) in Infants

Diamond, Ivan R. 15 November 2013 (has links)
Objective: The objective of this thesis was to facilitate clinical trials of the optimal lipid-based approach (e.g., omega-3-containing lipid emulsions or minimization of conventional lipid) for the prevention of Intestinal Failure Associated Liver Disease (IFALD). This was achieved through three related projects. Project 1: The first project examined, in a logistic regression model, the risk of advanced IFALD associated with exposure to conventional intravenous lipid. The study demonstrated that each day of conventional lipid (> 2.5 g/kg/day) was associated with a significant increase in the risk of advanced IFALD (odds ratio 1.04, 95% CI 1.003–1.06). Project 2: The second project surveyed experts in intestinal failure about their beliefs regarding the efficacy of lipid minimization and of lipid emulsions containing omega-3 fatty acids relative to conventional emulsions. The goal was to develop prior distributions of the treatment response for these therapies that can be used in Bayesian analyses of clinical trials. The results demonstrated consistent expert opinion that the novel lipid-based approaches are superior to conventional therapy, with similar estimated treatment effects for the two approaches (median elicited treatment response, relative to conventional lipid: a relative risk of 0.53 for omega-3 lipid and 0.45 for lipid minimization). Project 3: The final project was a pilot randomized controlled trial of an omega-3 emulsion. The study demonstrated that the randomized design is a feasible strategy for evaluating lipid-based approaches for the prevention of IFALD. A preliminary Bayesian assessment of the trial results suggests a high likelihood that the trial will demonstrate a difference between the conventional and omega-3 emulsions; however, since the analysis was blinded, the direction of the difference is not known. Conclusion: This thesis will contribute to the design and analysis of high-quality, feasible randomized trials that will allow investigators to address the optimal lipid-based approach to the management of IFALD.
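As a hedged illustration of how an elicited prior feeds a Bayesian trial analysis (Project 2's purpose), the sketch below combines a log-normal prior centered on the elicited median relative risk with hypothetical trial counts via sampling-importance-resampling; only the 0.53 median comes from the text, and the prior spread and counts are assumptions.

```python
# A minimal sketch of a Bayesian trial analysis with an expert-elicited
# prior on the relative risk (RR), using sampling-importance-resampling.
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(4)
n_draws = 100_000

# Log-normal prior on RR, centered on the elicited median of 0.53.
rr = np.exp(rng.normal(np.log(0.53), 0.5, n_draws))

# Hypothetical trial: IFALD events per arm.
e_ctrl, n_ctrl = 12, 40
e_omega, n_omega = 6, 40

# Posterior of the control-arm risk (flat Beta prior), then weight each
# (rr, p_ctrl) draw by the omega-3 arm's binomial likelihood.
p_ctrl = rng.beta(1 + e_ctrl, 1 + n_ctrl - e_ctrl, n_draws)
w = binom.pmf(e_omega, n_omega, np.clip(rr * p_ctrl, 0, 1))
rr_post = rng.choice(rr, n_draws, p=w / w.sum())
print("P(omega-3 reduces IFALD risk):", (rr_post < 1).mean())
```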