1

Model comparison and assessment by cross validation

Shen, Hui 11 1900
Cross validation (CV) is widely used for model assessment and comparison. In this thesis, we first review and compare three v-fold CV strategies: best single CV, repeated and averaged CV, and double CV. The mean squared errors of the CV strategies in estimating the best predictive performance are illustrated using simulated and real data examples. The results show that repeated and averaged CV is a good strategy and outperforms the other two CV strategies for finite samples, in terms of both the mean squared error in estimating prediction accuracy and the probability of choosing an optimal model. In practice, when we need to compare many models, conducting the repeated and averaged CV strategy is not computationally feasible. We develop an efficient sequential methodology for model comparison based on CV that also takes into account the randomness in CV. The number of models is reduced via an adaptive, multiplicity-adjusted sequential algorithm, where poor performers are quickly eliminated. By exploiting matching of individual observations, it is sometimes even possible to establish the statistically significant inferiority of some models with just one execution of CV. This adaptive and computationally efficient methodology is demonstrated on a large cheminformatics data set from PubChem. Cross-validated mean squared error (CVMSE) is widely used to estimate the prediction mean squared error (MSE) of statistical methods. For linear models, we show how CVMSE depends on the number of folds v used in cross validation, the number of observations, and the number of model parameters. We establish that the bias of CVMSE in estimating the true MSE decreases with v and increases with model complexity. In particular, the bias may be very substantial for models with many parameters relative to the number of observations, even if v is large. These results are used to correct CVMSE for its bias. We compare our proposed bias correction with that of Burman (1989) through simulated and real examples. We also illustrate that our method of correcting for the bias of CVMSE may change the results of model selection. / Science, Faculty of / Statistics, Department of / Graduate
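The repeated-and-averaged strategy can be sketched as follows. This is a minimal illustration with a placeholder model and simulated data, not the thesis's implementation: run v-fold CV several times with different random fold splits and average the resulting MSE estimates to reduce the randomness of a single split.

```python
# Minimal sketch of "repeated and averaged" v-fold CV: run v-fold CV
# `repeats` times with different random splits and average the MSE estimates.
# The model, data, and fold counts are placeholders.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score

X, y = make_regression(n_samples=200, n_features=10, noise=5.0, random_state=0)
model = LinearRegression()

def repeated_cv_mse(model, X, y, v=10, repeats=20, seed=0):
    mses = []
    for r in range(repeats):
        kf = KFold(n_splits=v, shuffle=True, random_state=seed + r)
        scores = cross_val_score(model, X, y, cv=kf,
                                 scoring="neg_mean_squared_error")
        mses.append(-scores.mean())   # MSE estimate for this random split
    return float(np.mean(mses))      # average over repeated splits

print("repeated-and-averaged CV MSE:", repeated_cv_mse(model, X, y))
```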
2

Leave-Group-Out Cross-Validation for Latent Gaussian Models

Liu, Zhedong 04 1900
Cross-validation is a widely used technique in statistics and machine learning for predictive performance assessment and model selection. It involves dividing the available data into multiple sets, training the model on some of the data and testing it on the rest, and repeating this process multiple times. The goal of cross-validation is to assess the model's predictive performance on unseen data. Two standard methods are leave-one-out cross-validation and K-fold cross-validation. However, these methods may not be suitable for structured models with many potential prediction tasks, as they do not take the structure of the data into account. Leave-group-out cross-validation is an extension in which the left-out groups used to form training sets and testing points are constructed to match different prediction tasks. In this dissertation, we propose an automatic group-construction procedure for leave-group-out cross-validation to estimate the predictive performance of the model when the prediction task is not specified. We also propose an efficient approximation of leave-group-out cross-validation for latent Gaussian models. Both procedures are implemented in the R-INLA software. We demonstrate the usefulness of our proposed leave-group-out cross-validation method through its application to the joint modeling of survival data and longitudinal data; the example shows the effectiveness of this method in real-world scenarios.
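A minimal sketch of the generic leave-group-out idea follows, with a placeholder grouping and a simple regression model; the dissertation's automatic group construction and latent-Gaussian approximation are implemented in R-INLA and are not reproduced here.

```python
# Generic leave-group-out cross-validation: each fold holds out one whole
# group (e.g., all observations from one subject or region) rather than
# random points, so the evaluation matches a group-level prediction task.
# Groups, model, and data below are placeholders.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import LeaveOneGroupOut

X, y = make_regression(n_samples=120, n_features=5, noise=3.0, random_state=1)
groups = np.repeat(np.arange(12), 10)   # 12 groups of 10 observations (placeholder)

logo = LeaveOneGroupOut()
errors = []
for train_idx, test_idx in logo.split(X, y, groups=groups):
    model = Ridge().fit(X[train_idx], y[train_idx])
    errors.append(mean_squared_error(y[test_idx], model.predict(X[test_idx])))

print("leave-group-out CV MSE:", np.mean(errors))
```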
3

Cross-Validation for Model Selection in Model-Based Clustering

O'Reilly, Rachel 04 September 2012
Clustering is a technique used to partition unlabelled data into meaningful groups. This thesis focuses on the area of clustering called model-based clustering, where it is assumed that data arise from a finite number of subpopulations, each of which follows a known statistical distribution. The number of groups and the shape of each group are unknown in advance, and thus one of the most challenging aspects of clustering is selecting these features. Cross-validation is a model selection technique often used in regression and classification, because it tends to choose models that predict well and are not over-fit to the data. However, it has rarely been applied in a clustering framework. Herein, cross-validation is applied to select the number of groups and the covariance structure within a family of Gaussian mixture models. Results are presented for both real and simulated data. / Ontario Graduate Scholarship Program
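A minimal sketch of using cross-validated held-out log-likelihood to choose the number of mixture components; the data, fold count, and component range are illustrative assumptions, and the thesis additionally selects among covariance structures, which is handled analogously.

```python
# Sketch: fit a Gaussian mixture on the training folds and score held-out
# log-likelihood on the test fold; pick the number of components that
# predicts held-out data best. Data and settings are placeholders.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import KFold

X, _ = make_blobs(n_samples=300, centers=3, random_state=2)
kf = KFold(n_splits=5, shuffle=True, random_state=2)

def cv_loglik(n_components):
    scores = []
    for train_idx, test_idx in kf.split(X):
        gm = GaussianMixture(n_components=n_components, random_state=2)
        gm.fit(X[train_idx])
        scores.append(gm.score(X[test_idx]))  # mean held-out log-likelihood
    return np.mean(scores)

best_g = max(range(1, 7), key=cv_loglik)
print("selected number of groups:", best_g)
```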
4

Cross-validatory Model Comparison and Divergent Regions Detection using iIS and iWAIC for Disease Mapping

2015 March 1900
The well-documented problems associated with mapping raw rates of disease have resulted in an increased use of Bayesian hierarchical models to produce maps of "smoothed" estimates of disease rates. Two statistical problems arise in using Bayesian hierarchical models for disease mapping. The first problem is in comparing goodness of fit of various models, which can be used to test different hypotheses. The second problem is in identifying outliers/divergent regions with unusually high or low residual risk of disease, or those whose disease rates are not well fitted. The results of outlier detection may generate further hypotheses as to what additional covariates might be necessary for explaining the disease. Leave-one-out cross-validatory (LOOCV) model assessment has been used for these two problems. However, actual LOOCV is time-consuming. This thesis introduces two methods, namely iIS and iWAIC, for approximating LOOCV using only Markov chain samples simulated from a posterior distribution based on the full data set. In iIS and iWAIC, we first integrate out the latent variables without reference to the holdout observation, then apply IS and WAIC approximations to the integrated predictive density and evaluation function. We apply iIS and iWAIC to two real data sets. Our empirical results show that iIS and iWAIC can provide significantly better estimation of LOOCV model assessment than existing methods, including DIC, importance sampling, WAIC, posterior checking, and ghosting methods.
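For context, plain importance sampling approximates the leave-one-out predictive density by reweighting full-posterior samples by 1/p(y_i | theta); iIS refines this baseline by first integrating out the latent variables. The sketch below illustrates only the baseline on a toy Gaussian model, not a disease-mapping model.

```python
# Plain importance-sampling approximation to the leave-one-out predictive
# density p(y_i | y_-i) ~= 1 / mean_s[ 1 / p(y_i | theta_s) ], computed on
# the log scale for stability. Toy conjugate Gaussian model as a stand-in.
import numpy as np

rng = np.random.default_rng(3)
y = rng.normal(loc=1.0, scale=1.0, size=50)

# Posterior of the mean under a flat prior with known unit variance:
# mu | y ~ Normal(ybar, 1/n); draw S posterior samples.
S = 4000
mu = rng.normal(y.mean(), 1.0 / np.sqrt(y.size), size=S)

def log_lik(yi, mu):
    return -0.5 * np.log(2 * np.pi) - 0.5 * (yi - mu) ** 2

def is_loo_logpred(i):
    ll = log_lik(y[i], mu)           # log p(y_i | mu_s) for each sample
    m = np.max(-ll)                  # log-sum-exp stabilization
    return -(m + np.log(np.mean(np.exp(-ll - m))))

elpd_loo = sum(is_loo_logpred(i) for i in range(y.size))
print("IS estimate of the LOO log predictive density:", elpd_loo)
```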
5

Optimal weight settings in locally weighted regression: A guidance through cross-validation approach

Puri, Roshan January 2023
Locally weighted regression (LWR) is a powerful tool that allows the estimation of different sets of coefficients for each location in the underlying data, challenging the assumption of stationary regression coefficients across a study region. The accuracy of LWR largely depends on how a researcher establishes the relationship across locations, which is often constructed using a weight matrix or function. This paper explores different kernel functions used to assign weights to observations, including Gaussian, bi-square, and tri-cubic, and how the choice of weight variables and window size affects the accuracy of the estimates. We guide this choice through a cross-validation approach and show that the bi-square function outperforms the other kernel functions. Our findings demonstrate that the optimal window size for LWR models depends on the cross-validation (CV) approach employed: in our empirical application, full-sample CV guides the choice of a larger window size, while CV by proxy guides the choice of a smaller one. Since the CV-by-proxy approach focuses on the predictive ability of the model in the vicinity of one specific point (usually a policy point/site), guiding model choice through this approach makes more intuitive sense when the researcher's aim is to predict the outcome at one specific site (policy or target point). To identify the optimal weight variables, we suggest exploring various combinations of weight variables, but argue that an efficient alternative is to merge all continuous variables in the dataset into a single weight variable. / M.A. / Locally weighted regression (LWR) is a statistical technique that establishes a relationship between dependent and explanatory variables, focusing primarily on data points in proximity to a specific point of interest, the target point. This technique assigns varying degrees of importance to the observations in proximity to the target point, thereby allowing the modeling of relationships that may exhibit spatial variability within the dataset. The accuracy of LWR largely depends on how researchers define relationships across different locations, which is often done using a "weight setting". We define a weight setting as a combination of weight functions (which determine how the observations around a point of interest are weighted before they enter the model), weight variables (which determine proximity between the point of interest and all other observations), and window sizes (which determine the number of observations allowed in the local regression). To find which weight setting is optimal, that is, which combination of weight functions, weight variables, and window sizes generates the lowest predictive error, researchers often employ a cross-validation (CV) approach. Cross-validation is a statistical method used to assess and validate the performance of a predictive model. It entails removing a host observation (a point of interest), predicting that point, and evaluating the accuracy of the prediction by comparing it with the actual value. In our study, we employ two CV approaches. The first is a full-sample CV approach, in which we remove a host observation and predict it using the full set of observations used in the given local regression. The second is the CV-by-proxy approach, which uses a similar mechanism but checks predictive accuracy using only nearby points that share similar characteristics with the target point. We find that the bi-square function consistently outperforms the Gaussian and tri-cubic weight functions, regardless of the CV approach. However, the choice of an optimal window size in LWR models depends on the CV approach employed: the full-sample CV method guides us toward a larger window size, while CV by proxy directs us toward a smaller one. For identifying the optimal weight variables, we recommend exploring various combinations of weight variables, but also propose an efficient alternative: merging all continuous variables in the dataset into a single weight variable instead of striving to identify the best of thousands of different weight-variable settings.
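A minimal sketch of LWR with bi-square weights and a leave-one-out search over window sizes; the one-dimensional setup, nearest-neighbour bandwidth, and window grid are illustrative assumptions, not the paper's exact specification.

```python
# Locally weighted (local linear) regression at a target point x0 with
# bi-square weights over a k-nearest-neighbour window, plus leave-one-out
# CV over candidate window sizes. All settings are placeholders.
import numpy as np

rng = np.random.default_rng(4)
x = np.sort(rng.uniform(0, 10, 200))
y = np.sin(x) + rng.normal(scale=0.3, size=x.size)

def lwr_predict(x0, xs, ys, window=40):
    d = np.abs(xs - x0)
    h = np.sort(d)[window - 1]                  # bandwidth = distance to window-th neighbour
    u = np.clip(d / h, 0.0, 1.0)
    w = (1 - u**2) ** 2                         # bi-square weights, zero outside the window
    X = np.column_stack([np.ones_like(xs), xs]) # local linear design
    sw = np.sqrt(w)                             # weighted least squares via sqrt-weights
    beta, *_ = np.linalg.lstsq(X * sw[:, None], ys * sw, rcond=None)
    return beta[0] + beta[1] * x0

def loocv_mse(window):
    idx = np.arange(x.size)
    errs = [(y[i] - lwr_predict(x[i], x[idx != i], y[idx != i], window)) ** 2
            for i in idx]
    return float(np.mean(errs))

best = min([20, 40, 80], key=loocv_mse)
print("window size chosen by leave-one-out CV:", best)
```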
6

Otimização dos processos de calibração e validação do modelo CROPGRO-soybean / Optimization of the CROPGRO-soybean model calibration and validation processes

Fensterseifer, Cesar Augusto Jarutais 06 December 2016
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES / Crop models are important tools for improving the management and yield of agricultural systems. These improvements help meet the growing food and fuel demand without increasing crop areas. The conventional approach to calibrating and validating a crop model considers anywhere from few to many experiments; few experiments can lead to high uncertainty, while a large number of experiments is expensive. Traditionally, the classical procedure splits an experimental dataset into two parts, one to calibrate and the other to validate the model. However, if only a few experiments are available, splitting them can increase the uncertainty of the simulated performance, while calibrating and validating the model using many experiments is expensive and time-consuming. Methods that can optimize these procedures, decreasing processing time and cost while maintaining reliable performance, are always welcome. The first chapter of this study evaluates and compares a statistically robust method with the classical calibration/validation procedure. The two procedures were applied to estimate the genetic coefficients of the CROPGRO-soybean model using multiple experiments. The leave-one-out cross-validation method was applied to 21 experiments with the NA 5909 RG variety across a southern state of Brazil. Cross-validation reduced the average RMSE of the classical calibration/validation procedure from 2.6, 4.6, 4.8, 7.3, 10.2, 677, and 551 to 1.1, 4.1, 4.1, 6.2, 6.3, 347, and 447 for emergence, R1, R3, R5, R7 (days), grains.m-2, and kg.ha-1, respectively. The estimated ecotype and genetic coefficients were stable across the 21 experiments. Considering the wide range of environmental conditions, the CROPGRO-soybean model provided robust predictions of phenology, biomass, and grain yield. Finally, to improve calibration/validation performance, the cross-validation method should be used whenever possible. The main objectives of the second chapter were to evaluate the calibration/validation uncertainties arising from different numbers of experiments and to find the minimum number of experiments required for a reliable CROPGRO-soybean simulation. This study also used 21 field experiments (BMX Potencia RR variety) sown at eight different locations in southern Brazil between 2010 and 2014. The experiments were grouped into four classes (individual sowings, season/year per location, experimental sites, and all data together). As the grouping level increased, the developmental-stage RRMSE decreased from 22.2% (individual sowings) to 7.8% (all data together). Using only one individual-sowing experiment could lead to RRMSEs of 28.4%, 48%, and 36% for R1, LAI, and yield, respectively. However, the largest decrease occurred from individual sowings to season/year per location. It is therefore recommended to use at least the season/year-per-location class (early, recommended, and late sowing dates). This allows the behavior of the variety to be understood while avoiding the high cost of many experiments and keeping the model's performance reliable. 
/ Crop models are important tools for improving management techniques and, consequently, the efficiency of agricultural systems. These efficiency gains help meet the growing demand for food and fuel without expanding the agricultural frontier. Historically, the calibration and validation of a crop model have used datasets ranging from few to many experiments; few experiments can increase uncertainty, while many experiments carry high financial cost and time demands. In the two-group partition method, the set of experiments is divided into two parts, one to calibrate and the other to validate the model; if only a small set of experiments is available, splitting them can harm the model's performance. Methods that optimize these processes, reducing the time and cost of the experiments needed for calibration and validation, are therefore always welcome. The objective of the first chapter of this thesis was to compare the method traditionally used for model calibration and validation with a more robust method (cross-validation). Both methods were applied to estimate the genetic coefficients in the calibration and validation of the CROPGRO-soybean model using multiple experiments. A set of the 3 most detailed experiments was used for calibration under the two-group partition method, while cross-validation was applied to all 21 experiments. The NA5909 RG cultivar was selected as one of the most widely grown in southern Brazil over the previous 5 years, with experiments distributed across eight locations in the state of Rio Grande do Sul during the 2010/2011 through 2013/2014 growing seasons. Cross-validation reduced the RMSEs of the traditional method from 2.6, 4.6, 4.8, 7.3, 10.2, 677, and 551 to 1.1, 4.1, 4.1, 6.2, 6.3, 347, and 447 for emergence, R1, R3, R5, R7 (in days), grains.m-2, and kg.ha-1, respectively. Most genetic-coefficient estimates were stable, suggesting that fewer experiments could be used in the process. Considering the wide range of environmental conditions, the model performed satisfactorily in predicting phenology, biomass, and yield. To optimize the calibration and validation processes, cross-validation should be used whenever possible. In the second chapter, the main objective was to evaluate the performance obtained with different numbers of experiments and to estimate the minimum number needed to guarantee satisfactory performance of the CROPGRO-soybean model. This study also used 21 experiments with the BMX Potência RR cultivar. The experiments were organized into four groups: group 1 (individual sowings), group 2 (growing season per location), group 3 (experimental site), and group 4 (all experiments together). As the number of experiments increased, the variability of the coefficients and the relative errors (RRMSE) decreased. The first group showed the largest relative errors, up to 28.4%, 48%, and 36% in the simulations of R1, LAI, and yield, respectively. The largest decrease in relative errors occurred when moving from group 1 to group 2; in some cases the errors were reduced by more than a factor of two. Thus, considering the high financial cost and time demands of groups 3 and 4, choosing at least group 2, with 3 experiments in the same growing season, is recommended. This strategy allows a better understanding of the cultivar's performance when calibrating and validating the CROPGRO-soybean model, avoiding the high costs of many experiments while guaranteeing satisfactory model performance.
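The leave-one-experiment-out scheme can be sketched generically as follows; `calibrate` and `simulate` are hypothetical stand-ins for the CROPGRO-soybean calibration and simulation steps, which are not reproduced here.

```python
# Generic leave-one-experiment-out loop: calibrate on all experiments but
# one, evaluate on the held-out experiment, and average the RMSEs.
# `calibrate` and `simulate` are hypothetical placeholders.
import numpy as np

def leave_one_experiment_out(experiments, calibrate, simulate):
    rmses = []
    for i, held_out in enumerate(experiments):
        train = experiments[:i] + experiments[i + 1:]
        coeffs = calibrate(train)                    # estimate genetic coefficients
        pred = simulate(coeffs, held_out["inputs"])  # run the crop model
        obs = held_out["observed"]
        rmses.append(np.sqrt(np.mean((pred - obs) ** 2)))
    return float(np.mean(rmses))

# Toy usage with dummy stand-ins for the calibration and simulation steps:
experiments = [{"inputs": np.arange(5.0), "observed": np.arange(5.0) + 1.0}
               for _ in range(21)]
print(leave_one_experiment_out(experiments,
                               calibrate=lambda train: 1.0,
                               simulate=lambda c, inp: inp + c))  # -> 0.0
```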
7

The design and analysis of benchmark experiments

Hothorn, Torsten, Leisch, Friedrich, Zeileis, Achim, Hornik, Kurt January 2003
The assessment of the performance of learners by means of benchmark experiments is an established exercise. In practice, benchmark studies are a tool to compare the performance of several competing algorithms on a certain learning problem. Cross-validation or resampling techniques are commonly used to derive point estimates of the performances, which are compared to identify algorithms with good properties. For several benchmarking problems, test procedures that take the variability of those point estimates into account have been suggested. Most of the recently proposed inference procedures are based on special variance estimators for the cross-validated performance. We introduce a theoretical framework for inference problems in benchmark experiments and show that standard statistical test procedures can be used to test for differences in the performances. The theory is based on well-defined distributions of performance measures which can be compared with established tests. To demonstrate the usefulness in practice, the theoretical results are applied to benchmark studies in a supervised learning situation based on artificial and real-world data. / Series: Report Series SFB "Adaptive Information Systems and Modelling in Economics and Management Science"
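A common concrete instance of this idea is a standard paired test on fold-wise performance estimates computed on identical splits. The sketch below is illustrative and much simpler than the paper's framework; the learners, data, and fold count are placeholders.

```python
# Benchmark two learners on the same CV splits, then apply a standard
# paired test (Wilcoxon signed-rank) to the fold-wise accuracy differences.
import numpy as np
from scipy.stats import wilcoxon
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, random_state=5)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=5)

# Same folds for both learners, so per-fold accuracies are paired
acc_a = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=cv)
acc_b = cross_val_score(DecisionTreeClassifier(random_state=5), X, y, cv=cv)

stat, p = wilcoxon(acc_a, acc_b)   # paired test on the fold-wise differences
print(f"mean accuracy difference: {np.mean(acc_a - acc_b):+.3f}, p = {p:.3f}")
```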
8

Removing Noise from Signals via Neural Networks

Zheng, Xiang-Ren 01 August 2003
The main objective of this paper is to develop a method of removing noise from signals. The method is based on radial-basis function networks and the statistical principle of cross-validation: noise is detected by estimating the magnitude of the validation error after training the network. In addition, the paper applies the concept of predictive coding to select data sets from an image when the proposed method is used for the noise-removal problem of two-dimensional image signals. Finally, the proposed method is applied to noise-removal problems for one-dimensional and two-dimensional signals. Simulation results show that the proposed method removes noise from signals effectively.
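A minimal sketch of the general recipe, validation-guided RBF smoothing of a noisy 1-D signal; kernel ridge regression stands in for the paper's RBF network, and the signal and parameter grid are placeholder assumptions.

```python
# Fit an RBF model to a noisy 1-D signal and use held-out validation error
# to choose the smoothing level; the fitted values serve as the denoised
# signal. Generic sketch, not the paper's exact network or training scheme.
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
t = np.linspace(0, 1, 400)[:, None]
clean = np.sin(2 * np.pi * 3 * t).ravel()
noisy = clean + rng.normal(scale=0.25, size=clean.size)

t_tr, t_val, y_tr, y_val = train_test_split(t, noisy, test_size=0.25,
                                            random_state=6)

def val_error(gamma):
    model = KernelRidge(kernel="rbf", gamma=gamma, alpha=0.1).fit(t_tr, y_tr)
    return float(np.mean((model.predict(t_val) - y_val) ** 2))

gamma = min([1, 10, 100, 1000], key=val_error)   # validation picks the RBF width
denoised = KernelRidge(kernel="rbf", gamma=gamma,
                       alpha=0.1).fit(t_tr, y_tr).predict(t)
print("selected gamma:", gamma,
      "| RMSE vs clean signal:", np.sqrt(np.mean((denoised - clean) ** 2)))
```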
