51

Statistical Models and Analysis of Growth Processes in Biological Tissue

Xia, Jun 15 December 2016 (has links)
The mechanisms that control growth processes in biological tissues have attracted continuous research interest despite their complexity. With the emergence of big-data experimental approaches, there is an urgent need for statistical and computational models that fit the experimental data and can be used to make predictions to guide future research. In this work we apply statistical methods to the growth processes of different biological tissues, focusing on the development of neuron dendrites and tumor cells. We first examine the neuron growth process, which has implications for neural tissue regeneration, using a computational model with a uniform branching probability and a maximum overall length constraint. One crucial outcome is that we can relate the parameter fits from our model to real data from our experimental collaborators, in order to examine the usefulness of our model under different biological conditions. Our methods can now directly compare branching probabilities across experimental conditions and provide confidence intervals for these population-level measures. In addition, we have obtained analytical results showing that the underlying probability distribution for this process increases as a geometric progression at nearby distances and decreases approximately geometrically in far-away regions, which can be used to estimate the spatial location of the maximum of the probability distribution. This result is important, since we would expect the maximum number of dendrites in this region; the estimate is related to the probability of success in finding a neural target at that distance during a blind search. We then examine tumor growth processes, which evolve similarly in the sense that an initial rapid growth eventually becomes limited by resource constraints. For the evolution of tumor cells, we found that an exponential growth model best describes the experimental data, based on the accuracy and robustness of the candidate models. Furthermore, we incorporated this growth-rate model into logistic regression models that predict the growth rate of each patient from biomarkers; this formulation can be very useful for clinical trials. Overall, this study aimed to assess the molecular and clinicopathological determinants of breast cancer (BC) growth rate in vivo.
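A minimal simulation sketch of the branching model described above, assuming unit growth steps, a hypothetical branching probability, and a hard cap on total summed branch length (all values illustrative, not taken from the thesis). The mean branch count grows roughly geometrically until the length budget binds, then drops:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_tree(p_branch=0.3, max_total_len=200, step=1.0):
    """Grow a toy dendritic tree with a uniform per-step branching
    probability and a cap on total summed branch length."""
    active = 1          # branches currently growing
    total = 0.0         # summed length grown so far
    counts = []         # number of live branches at each distance step
    while active > 0 and total + active * step <= max_total_len:
        counts.append(active)
        total += active * step
        # each branch independently splits into two with prob p_branch
        active += rng.binomial(active, p_branch)
    return counts

# Average branch counts over many simulated trees
runs = [simulate_tree() for _ in range(2000)]
depth = max(len(r) for r in runs)
mean_counts = np.zeros(depth)
for r in runs:
    mean_counts[:len(r)] += np.array(r) / len(runs)

peak = int(np.argmax(mean_counts))
print(f"mean branch count peaks at distance step {peak}")
```

In this toy version the decrease past the peak is abrupt rather than geometric; the thesis derives the full analytical shape.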
52

Considerations for Screening Designs and Follow-Up Experimentation

Leonard, Robert D 01 January 2015 (has links)
The success of screening experiments hinges on the effect sparsity assumption, which states that only a few of the factorial effects of interest actually have an impact on the system being investigated. The development of a screening methodology to harness this assumption requires careful consideration of the strengths and weaknesses of a proposed experimental design, in addition to the ability of an analysis procedure to properly detect the major influences on the response. For the most part, however, screening designs and their complementary analysis procedures have been proposed separately in the literature, without clear consideration of their ability to perform as a single screening methodology. As a contribution to this growing area of research, this dissertation investigates the pairing of non-replicated and partially-replicated two-level screening designs with model selection procedures that allow for the incorporation of a model-independent error estimate. Using simulation, we focus attention on the ability to screen out active effects from a first-order model with two-factor interactions, and on the possible benefits of using partial replication as part of an overall screening methodology. We begin with a focus on single-criterion optimum designs and propose a new criterion to create partially replicated screening designs. We then extend the newly proposed criterion into a multi-criterion framework where both estimation of the assumed model and protection against model misspecification are considered. This is an important extension of the work, since initial knowledge of the system under investigation is considered to be poor in the cases presented. A methodology to reduce a set of competing design choices is also investigated, using visual inspection of plots meant to represent uncertainty in design criterion preferences. Because screening methods typically involve sequential experimentation, we present a final investigation into the screening process with simulation results that incorporate a single follow-up phase of experimentation. In this concluding work we extend the newly proposed criterion to create optimal partially replicated follow-up designs, and we compare methodologies that use different ways of incorporating knowledge gathered from the initial screening phase into the follow-up phase.
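A small illustration of the kind of design discussed above: a regular two-level fractional factorial augmented with a few duplicated runs to supply a model-independent (pure) error estimate. The defining relation and the choice of replicated rows are hypothetical:

```python
import itertools
import numpy as np

# Full 2^3 design in coded units (-1, +1) for factors A, B, C
base = np.array(list(itertools.product([-1, 1], repeat=3)))

# Regular 2^(4-1) fraction: generate D from the defining relation D = ABC
D = base.prod(axis=1, keepdims=True)
design = np.hstack([base, D])                 # 8 runs, 4 factors

# Partial replication: duplicate a subset of runs so replicate pairs
# yield a pure-error estimate independent of the fitted model
replicated_rows = [0, 3, 5]                   # hypothetical choice
design_pr = np.vstack([design, design[replicated_rows]])

print(design_pr)  # an 11-run partially replicated screening design
```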
53

Automated Support for Model Selection Using Analytic Hierarchy Process

Missakian, Mario Sarkis 01 January 2011 (has links)
Providing automated support for model selection is a significant research challenge in model management. Organizations maintain vast, growing repositories of analytical models, typically in the form of spreadsheets. Effective reuse of these models could yield significant cost savings and productivity improvements. In practice, however, model reuse is severely limited by two main challenges: (1) lack of relevant information about the models maintained in the repository, and (2) lack of end-user knowledge, which prevents users from selecting appropriate models for a given problem-solving task. This study built on the existing model management literature to address these challenges. First, it captured the relevant meta-information about the models. Next, it identified the features on which model selection is based. Finally, it used the Analytic Hierarchy Process (AHP) to select the most appropriate model for a specified problem. AHP is an established multi-criteria decision-making method well suited to the model selection task. To evaluate the proposed method for automated model selection, this study developed a simulated prototype system that implemented the method and tested it in two realistic end-user model selection scenarios based on previously benchmarked test problems.
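A compact sketch of the AHP step described above, assuming a hypothetical 3x3 reciprocal comparison matrix on Saaty's 1-9 scale; it returns priority weights from the principal eigenvector together with Saaty's consistency ratio:

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority weights from a pairwise comparison matrix via the
    principal eigenvector, plus Saaty's consistency ratio."""
    A = np.asarray(pairwise, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()
    # Consistency index checked against Saaty's random indices
    ci = (eigvals[k].real - n) / (n - 1)
    ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}[n]
    return w, ci / ri

# Hypothetical comparison of three candidate spreadsheet models on
# one criterion (reciprocal matrix on Saaty's 1-9 scale)
A = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
w, cr = ahp_weights(A)
print("weights:", np.round(w, 3), "consistency ratio:", round(cr, 3))
```

A consistency ratio below about 0.1 is conventionally taken to mean the judgments are acceptably coherent.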
54

Modelos de regressão sobre dados composicionais / Regression models for compositional data

Camargo, André Pierro de 09 December 2011 (has links)
Compositional data consist of vectors whose components represent the proportions of some whole, i.e., vectors with positive entries summing to 1. In several domains of knowledge, the problem of estimating the portions $y_1, y_2, \dots, y_D$ corresponding to the parts $SE_1, SE_2, \dots, SE_D$ of some whole $Q$ arises frequently. The percentages $y_1, y_2, \dots, y_D$ of voting intentions for the candidates $Ca_1, Ca_2, \dots, Ca_D$ in governmental elections, or the market shares of competing industries, are typical examples. Naturally, it is of great interest to study how such proportions vary as a function of contextual changes, for example geographic location or time. In any competitive environment, information about this behavior is of great help to strategists making decisions. In this work we present and discuss some approaches proposed in the literature for regression on compositional data, as well as some model selection methods based on Bayesian inference.
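A minimal sketch of one standard approach to compositional regression of the kind surveyed above: ordinary least squares in additive log-ratio (alr) coordinates, mapped back to the simplex. The data and coefficients are simulated for illustration:

```python
import numpy as np

def alr(Y):
    """Additive log-ratio transform: simplex -> R^(D-1),
    using the last component as reference."""
    return np.log(Y[:, :-1] / Y[:, [-1]])

def alr_inv(Z):
    """Inverse alr: map fitted values back onto the simplex."""
    E = np.hstack([np.exp(Z), np.ones((Z.shape[0], 1))])
    return E / E.sum(axis=1, keepdims=True)

# Hypothetical data: proportions for D=3 candidates vs a covariate x
rng = np.random.default_rng(1)
x = rng.uniform(0, 1, size=(100, 1))
X = np.hstack([np.ones((100, 1)), x])                 # design matrix
true_B = np.array([[0.2, -0.5], [1.0, 0.3]])
Z = X @ true_B + rng.normal(0, 0.1, size=(100, 2))
Y = alr_inv(Z)                                        # compositions

# OLS in alr coordinates, one regression per log-ratio column
B_hat, *_ = np.linalg.lstsq(X, alr(Y), rcond=None)
fitted = alr_inv(X @ B_hat)
print("rows sum to 1:", np.allclose(fitted.sum(axis=1), 1.0))
```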
55

Seleção de modelos cópula-GARCH: uma abordagem bayesiana / Copula-GARCH model selection: a Bayesian approach

Rossi, João Luiz 04 June 2012 (has links)
This dissertation studies models for bivariate time series whose dependence structure is specified through copula functions. The advantage of this approach is that copulas provide a complete description of the dependence structure. For inference, a Bayesian approach was adopted using Markov chain Monte Carlo (MCMC) methods. First, a simulation study was performed to verify how the following factors influence the model selection rates given by the EAIC, EBIC and DIC criteria: the length of the series, and variations in the copula functions, the marginal distributions, the copula parameter value, and the estimation methods. The models with static and time-varying dependence structure were then applied to real data.
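A stripped-down sketch of the copula step described above, assuming the GARCH marginals have already been fitted and reduced to standardized residuals (simulated here): a probability-integral transform to pseudo-uniforms, then a Gaussian copula parameter estimated from the normal scores. The Bayesian MCMC machinery of the dissertation is not reproduced:

```python
import numpy as np
from scipy import stats

# Hypothetical standardized residuals from two fitted GARCH marginals
rng = np.random.default_rng(2)
z = rng.multivariate_normal([0, 0], [[1, .6], [.6, 1]], size=500)

# Probability-integral transform via empirical CDFs -> pseudo-uniforms
u = stats.rankdata(z, axis=0) / (len(z) + 1)

# Gaussian copula: map uniforms to normal scores and estimate the
# correlation parameter as the sample correlation of the scores
scores = stats.norm.ppf(u)
rho_hat = np.corrcoef(scores.T)[0, 1]
print(f"estimated Gaussian copula parameter: {rho_hat:.3f}")
```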
56

A comparison of Bayesian model selection based on MCMC with an application to GARCH-type models

Miazhynskaia, Tatiana, Frühwirth-Schnatter, Sylvia, Dorffner, Georg January 2003 (has links) (PDF)
This paper presents a comprehensive review and comparison of five computational methods for Bayesian model selection, based on MCMC simulations from posterior model parameter distributions. We apply these methods to a well-known and important class of models in financial time series analysis, namely GARCH and GARCH-t models for conditional return distributions (assuming normal and t-distributions). We compare their performance vis-à-vis the more common maximum likelihood-based model selection on both simulated and real market data. All five MCMC methods proved feasible in both cases, although differing in their computational demands. Results on simulated data show that for large degrees of freedom (where the t-distribution becomes more similar to a normal one), Bayesian model selection results in better decisions in favour of the true model than maximum likelihood. Results on market data show the feasibility of all model selection methods, mainly because the distributions appear to be decisively non-Gaussian. / Series: Report Series SFB "Adaptive Information Systems and Modelling in Economics and Management Science"
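The five MCMC-based marginal-likelihood estimators compared in the paper are too involved to reproduce here; as a hedged stand-in, the sketch below uses the familiar BIC approximation to a Bayes factor to decide between normal and t innovations on simulated iid returns:

```python
import numpy as np
from scipy import stats

# Simulated returns from a t-distribution with moderate tails
rng = np.random.default_rng(5)
r = stats.t.rvs(df=5, scale=0.01, size=1500, random_state=rng)

def bic(loglik, k, n):
    return -2 * loglik + k * np.log(n)

# MLE fits of the two candidate innovation distributions
mu_n, sd_n = stats.norm.fit(r)
df_t, loc_t, sc_t = stats.t.fit(r)

bic_norm = bic(stats.norm.logpdf(r, mu_n, sd_n).sum(), 2, len(r))
bic_t = bic(stats.t.logpdf(r, df_t, loc_t, sc_t).sum(), 3, len(r))

# The BIC difference approximates twice the log Bayes factor
print(f"BIC normal: {bic_norm:.1f}  BIC t: {bic_t:.1f}")
print("preferred:", "t" if bic_t < bic_norm else "normal")
```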
57

Seleção de modelos lineares mistos utilizando critérios de informação / Mixed linear model selection using information criteria

Yamanouchi, Tatiana Kazue 18 August 2017 (has links)
The mixed model is commonly used for repeated-measures data because of its flexibility in incorporating both the correlation among observations measured on the same individual and the heterogeneity of variances of observations made over time. The model comprises fixed effects, random effects, and a random error term, so selecting a mixed model often requires choosing the components that best represent the data. Information criteria are widely used tools for model selection, but few studies indicate how they perform in selecting the fixed effects, the random effects, and the covariance structure of the random error. This work therefore presents a simulation study evaluating the performance of the AIC, BIC and KIC information criteria in selecting the components of the mixed model, measured by the true-positive (TP) rate. In general, the information criteria performed well, i.e., they achieved high TP rates in situations where the sample size is larger. In selecting the fixed effects and the covariance structure, BIC outperformed AIC and KIC in almost all situations. In selecting the random effects, no criterion performed well, except under the compound-symmetry covariance structure, where BIC performed best.
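A toy version of the simulation design described above, assuming a simple fixed-effects choice (intercept-only versus intercept plus slope) rather than a full mixed model; the KIC penalty is written here as -2 log L + 3k, one common form, since published definitions vary slightly:

```python
import numpy as np

def gaussian_loglik(resid):
    """Profile log-likelihood of a linear model with Gaussian errors."""
    n = len(resid)
    s2 = resid @ resid / n
    return -0.5 * n * (np.log(2 * np.pi * s2) + 1)

def criteria(loglik, k, n):
    # KIC written as -2logL + 3k; published forms vary slightly
    return {"AIC": -2 * loglik + 2 * k,
            "BIC": -2 * loglik + np.log(n) * k,
            "KIC": -2 * loglik + 3 * k}

rng = np.random.default_rng(3)
n, reps = 50, 1000
hits = {"AIC": 0, "BIC": 0, "KIC": 0}
for _ in range(reps):
    x = rng.normal(size=n)
    y = 1.0 + 0.5 * x + rng.normal(size=n)       # true model uses x
    designs = {1: np.ones((n, 1)),               # intercept only
               2: np.column_stack([np.ones(n), x])}
    scores = {}
    for k, X in designs.items():
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        scores[k] = criteria(gaussian_loglik(y - X @ beta), k + 1, n)
    for c in hits:
        if scores[2][c] < scores[1][c]:          # true model selected
            hits[c] += 1

print({c: hits[c] / reps for c in hits})         # true-positive rates
```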
58

Flexible shrinkage in high-dimensional Bayesian spatial autoregressive models

Pfarrhofer, Michael, Piribauer, Philipp January 2019 (has links) (PDF)
Several recent empirical studies, particularly in the regional economic growth literature, emphasize the importance of explicitly accounting for uncertainty surrounding model specification. Standard approaches to deal with the problem of model uncertainty involve the use of Bayesian model-averaging techniques. However, Bayesian model-averaging for spatial autoregressive models suffers from severe drawbacks both in terms of computational time and possible extensions to more flexible econometric frameworks. To alleviate these problems, this paper presents two global-local shrinkage priors in the context of high-dimensional matrix exponential spatial specifications. A simulation study is conducted to evaluate the performance of the shrinkage priors. Results suggest that they perform particularly well in high-dimensional environments, especially when the number of parameters to estimate exceeds the number of observations. Moreover, we use pan-European regional economic growth data to illustrate the performance of the proposed shrinkage priors.
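A sketch of the global-local shrinkage idea in a plain high-dimensional linear regression, using the horseshoe prior with the inverse-gamma auxiliary Gibbs scheme of Makalic & Schmidt (2015); the paper's matrix exponential spatial specification is not reproduced, and all data are simulated:

```python
import numpy as np

def horseshoe_gibbs(X, y, n_iter=2000, seed=0):
    """Gibbs sampler for linear regression with a horseshoe
    (global-local) prior: local scales lam2, global scale tau2."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    beta, sigma2 = np.zeros(p), 1.0
    lam2, tau2 = np.ones(p), 1.0      # local and global variances
    nu, xi = np.ones(p), 1.0          # inverse-gamma auxiliaries
    XtX, Xty = X.T @ X, X.T @ y
    draws = np.empty((n_iter, p))

    def inv_gamma(shape, rate):       # IG draw via reciprocal gamma
        return 1.0 / rng.gamma(shape, 1.0 / rate)

    for it in range(n_iter):
        # beta | rest ~ N(A^-1 X'y, sigma2 A^-1)
        A_inv = np.linalg.inv(XtX + np.diag(1.0 / (lam2 * tau2)))
        beta = rng.multivariate_normal(A_inv @ Xty, sigma2 * A_inv)
        # local scales and their auxiliaries
        lam2 = inv_gamma(1.0, 1.0 / nu + beta**2 / (2 * tau2 * sigma2))
        nu = inv_gamma(1.0, 1.0 + 1.0 / lam2)
        # global scale and its auxiliary
        rate_tau = 1.0 / xi + np.sum(beta**2 / lam2) / (2 * sigma2)
        tau2 = inv_gamma((p + 1) / 2, rate_tau)
        xi = inv_gamma(1.0, 1.0 + 1.0 / tau2)
        # error variance
        resid = y - X @ beta
        rate_s = (resid @ resid + np.sum(beta**2 / (lam2 * tau2))) / 2
        sigma2 = inv_gamma((n + p) / 2, rate_s)
        draws[it] = beta
    return draws

# Sparse toy problem with p > n, echoing the paper's setting where
# the number of parameters exceeds the number of observations
rng = np.random.default_rng(1)
n, p = 50, 80
X = rng.normal(size=(n, p))
y = X[:, 0] * 3 - X[:, 1] * 2 + rng.normal(size=n)
post = horseshoe_gibbs(X, y, n_iter=1000)
print(np.round(post[500:].mean(axis=0)[:5], 2))  # first 5 coefficients
```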
59

Chromosome 3D Structure Modeling and New Approaches For General Statistical Inference

Rongrong Zhang (5930474) 03 January 2019 (has links)
This thesis consists of two separate topics: the use of piecewise helical models for the inference of 3D spatial organizations of chromosomes, and new approaches for general statistical inference. The recently developed Hi-C technology enables a genome-wide view of chromosome spatial organizations, and has shed deep insights into genome structure and genome function. However, multiple sources of uncertainty make downstream data analysis and interpretation challenging. Specifically, statistical models for inferring three-dimensional (3D) chromosomal structure from Hi-C data are far from mature. Most existing methods are highly over-parameterized, lack clear interpretations, and are sensitive to outliers. We propose a parsimonious, easy-to-interpret, and robust piecewise helical curve model for the inference of 3D chromosomal structures from Hi-C data, for both individual topologically associated domains and whole chromosomes. When applied to a real Hi-C dataset, the piecewise helical model not only achieves much better model fitting than existing models, but also reveals that geometric properties of chromatin spatial organization are closely related to genome function.

For potential applications in big-data analytics and machine learning, we propose to use deep neural networks to automate the Bayesian model selection and parameter estimation procedures. Two such frameworks are developed under different scenarios. First, we construct a deep neural network-based Bayes estimator for the parameters of a given model. The neural Bayes estimator mitigates the computational challenges faced by traditional approaches for computing Bayes estimators. When applied to generalized linear mixed models, the neural Bayes estimator outperforms existing methods implemented in R packages and SAS procedures. Second, we construct a deep convolutional neural network-based framework to perform simultaneous Bayesian model selection and parameter estimation. We refer to the neural networks for model selection and parameter estimation in this framework as the neural model selector and parameter estimator, respectively; both can be properly trained using labeled data systematically generated from candidate models. A simulation study shows that both the neural selector and the estimator demonstrate excellent performance.

The theory of Conditional Inferential Models (CIMs) has been introduced to combine information for efficient inference in the Inferential Models framework for prior-free and yet valid probabilistic inference. While the general theory is subject to further development, the so-called regular CIMs are simple. We establish and prove a necessary and sufficient condition for the existence and identification of regular CIMs. More specifically, it is shown that for inference based on a sample from continuous distributions with unknown parameters, the corresponding CIM is regular if and only if the unknown parameters are generalized location and scale parameters indexing the transformations of an affine group.
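A toy parametrization of the "piecewise helical" idea described above: two helical pieces with different radius, pitch, and phase, joined continuously end to end (all parameter values hypothetical; the thesis fits such curves to Hi-C contact data):

```python
import numpy as np

def helix_piece(t, radius, pitch, phase, axis_origin):
    """Points on one helical piece with the given radius, pitch and
    phase, winding around the z axis starting from axis_origin."""
    x = radius * np.cos(t + phase)
    y = radius * np.sin(t + phase)
    z = pitch * t
    return np.column_stack([x, y, z]) + axis_origin

t = np.linspace(0, 4 * np.pi, 200)
piece1 = helix_piece(t, 1.0, 0.10, 0.0, np.zeros(3))

# Shift the second piece so its first point coincides with the last
# point of the first piece, making the composite curve continuous
start2 = helix_piece(t[:1], 0.5, 0.25, 1.0, np.zeros(3))[0]
piece2 = helix_piece(t, 0.5, 0.25, 1.0, piece1[-1] - start2)

curve = np.vstack([piece1, piece2])   # piecewise helical backbone
print(curve.shape)
```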
60

Ranked sparsity: a regularization framework for selecting features in the presence of prior informational asymmetry

Peterson, Ryan Andrew 01 May 2019 (has links)
In this dissertation, we explore and illustrate the concept of ranked sparsity, a phenomenon that often occurs naturally in the presence of derived variables. Ranked sparsity arises in modeling applications when an expected disparity exists in the quality of information between different feature sets. Its presence can cause traditional model selection methods to fail because statisticians commonly presume that each potential parameter is equally worthy of entering into the final model - we call this principle "covariate equipoise". However, this presumption does not always hold, especially in the presence of derived variables. For instance, when all possible interactions are considered as candidate predictors, the presumption of covariate equipoise will often produce misclassified and opaque models. The sheer number of additional candidate variables grossly inflates the number of false discoveries in the interactions, resulting in unnecessarily complex and difficult-to-interpret models with many (truly spurious) interactions. We suggest a modeling strategy that requires a stronger level of evidence in order to allow certain variables (e.g. interactions) to be selected in the final model. This ranked sparsity paradigm can be implemented either with a modified Bayesian information criterion (RBIC) or with the sparsity-ranked lasso (SRL). In chapter 1, we provide a philosophical motivation for ranked sparsity by describing situations where traditional model selection methods fail. Chapter 1 also presents some of the relevant literature, and motivates why ranked sparsity methods are necessary in the context of interactions. Finally, we introduce RBIC and SRL as possible recourses. In chapter 2, we explore the performance of SRL relative to competing methods for selecting polynomials and interactions in a series of simulations. We show that the SRL is a very attractive method because it is fast, accurate, and does not tend to inflate the number of Type I errors in the interactions. We illustrate its utility in an application to predict the survival of lung cancer patients using a set of gene expression measurements and clinical covariates, searching in particular for gene-environment interactions, which are very difficult to find in practice. In chapter 3, we present three extensions of the SRL in very different contexts. First, we show how the method can be used to optimize for cost and prediction accuracy simultaneously when covariates have differing collection costs. In this setting, the SRL produces what we call "minimally invasive" models, i.e. models that can easily (and cheaply) be applied to new data. Second, we investigate the use of the SRL in the context of time series regression, where we evaluate our method against several other state-of-the-art techniques in predicting the hourly number of arrivals at the Emergency Department of the University of Iowa Hospitals and Clinics. Finally, we show how the SRL can be utilized to balance model stability and model adaptivity in an application which uses a rich new source of smartphone thermometer data to predict flu incidence in real time.
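A hedged sketch of differential penalization in the spirit of the SRL described above, using the standard column-rescaling trick to give interaction terms a larger lasso penalty than main effects; the specific weights and tuning value are illustrative, not the published SRL scheme:

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(4)
n, p = 200, 6
X = rng.normal(size=(n, p))
y = 2 * X[:, 0] - X[:, 1] + 1.5 * X[:, 2] * X[:, 3] + rng.normal(size=n)

# Main effects plus all two-way interactions
poly = PolynomialFeatures(degree=2, interaction_only=True,
                          include_bias=False)
Z = poly.fit_transform(X)
names = poly.get_feature_names_out()
is_interaction = np.array([" " in name for name in names])

# Ranked penalties: interactions get a larger weight. Dividing column
# j by w_j before an ordinary lasso is equivalent to a penalty of
# w_j * |beta_j| (the weight choice below is hypothetical).
w = np.where(is_interaction, np.sqrt(is_interaction.sum()), 1.0)
fit = Lasso(alpha=0.05, max_iter=5000).fit(Z / w, y)
beta = fit.coef_ / w                     # back on the original scale

print("selected terms:", list(names[beta != 0]))
```

Because the interaction columns carry a heavier penalty, a spurious interaction must show substantially more evidence than a main effect before it enters the model, which is exactly the asymmetry the abstract motivates.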
