241

Automatic Development of Pharmacokinetic Structural Models

Hamdan, Alzahra January 2022
Introduction: The current development strategy for population pharmacokinetic models is a complex and iterative process that is performed manually by modellers. Such a strategy is time-demanding, subjective, and dependent on the modellers' experience. This thesis presents a novel model building tool that automates the development of pharmacokinetic (PK) structural models. Methods: Modelsearch is a tool in the Pharmpy library, an open-source package for pharmacometrics modelling, that searches for the best structural model using an exhaustive stepwise search algorithm. Given a dataset, a starting model and a pre-specified search space of structural model features, the tool creates and fits a series of candidate models that are then ranked based on a selection criterion, leading to the selection of the best model. The Modelsearch tool was used to develop structural models for 10 clinical PK datasets (5 orally and 5 i.v. administered drugs). A starting model for each dataset was generated using the assemblerr package in R, which included first-order (FO) absorption without any absorption delay for oral drugs, one-compartment disposition, FO elimination, a proportional residual error model, and inter-individual variability (IIV) on the starting model parameters with a correlation between clearance (CL) and central volume of distribution (VC). The model search space included aspects of absorption and absorption delay (for oral drugs), distribution and elimination. To understand the effects of different IIV structures on structural model selection, five model search approaches, differing in the IIV structure of the candidate models, were investigated: 1. naïve pooling, 2. IIV on starting model parameters only, 3. additional IIV on the mean delay time parameter, 4. additional diagonal IIVs on newly added parameters, and 5. full block IIVs. Additionally, the implementation of structural model selection in a fully automatic model development workflow was investigated. Three strategies were evaluated, SIR, SRI and RSI, named after the development order of the structural model (S), IIV model (I) and residual error model (R). Moreover, the NONMEM errors encountered when using the tool were investigated and categorized so that they can be handled in the automatic model building workflow. Results: Differences between the five model search approaches were observed in the final selected structural models for each drug. The same distribution components were selected by Approaches 1 and 2 for 6/10 drugs. Approach 2 also identified an absorption delay component in 4/5 oral drugs, whereas the naïve pooling approach identified an absorption delay model in only 2 drugs. Compared to Approaches 1 and 2, Approaches 3, 4 and 5 tended to select more complex models and more often resulted in minimization errors during the search. For the SIR, SRI and RSI investigations, the same structural model was selected for 9/10 drugs, with a significantly higher run time for the RSI strategy compared to the other strategies. The NONMEM errors were grouped into four categories based on the suggested handling, which is valuable for further improving the tool's automatic error handling. Conclusions: The Modelsearch tool was able to automatically select a structural model under different strategies for setting the IIV model structure. This novel tool enables the evaluation of numerous combinations of model components, which would not be possible with a traditional manual model building strategy.
Furthermore, the tool is flexible and can support multiple research investigations into how best to implement structural model selection in a fully automatic model development workflow.
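To illustrate the kind of search that Modelsearch automates, the sketch below enumerates a candidate structural-model space, "fits" each candidate, and ranks the fits by a selection criterion. It is a simplified Python illustration, not the Pharmpy API: the feature names, the fit_and_score placeholder and the flat enumeration (rather than the stepwise build-up described above) are assumptions for exposition only.

```python
from itertools import product

# Hypothetical search space: each dimension lists mutually exclusive options.
SEARCH_SPACE = {
    "absorption": ["FO", "ZO", "transit"],   # oral drugs only
    "delay": ["none", "lagtime"],
    "distribution": [1, 2, 3],               # number of compartments
    "elimination": ["FO", "Michaelis-Menten"],
}

def fit_and_score(base_model, features):
    """Placeholder: build a candidate from the starting model, estimate it in
    the modelling software, and return a selection criterion value (e.g. BIC).
    In Pharmpy this is handled by the tool itself; here it is only sketched."""
    raise NotImplementedError

def exhaustive_search(base_model):
    candidates = []
    for combo in product(*SEARCH_SPACE.values()):
        features = dict(zip(SEARCH_SPACE.keys(), combo))
        try:
            score = fit_and_score(base_model, features)
        except RuntimeError:      # e.g. minimization error: categorize and skip
            continue
        candidates.append((score, features))
    candidates.sort(key=lambda c: c[0])   # lower criterion value is better
    return candidates[0] if candidates else None
```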
242

Portfolio management using computational intelligence approaches. Forecasting and Optimising the Stock Returns and Stock Volatilities with Fuzzy Logic, Neural Network and Evolutionary Algorithms.

Skolpadungket, Prisadarng January 2013
Portfolio optimisation has a number of constraints resulting from practical matters and regulations. The closed-form mathematical solution of portfolio optimisation problems usually cannot include these constraints, and an exhaustive search for the exact solution can take a prohibitive amount of computational time. Portfolio optimisation models are also usually impaired by estimation error caused by the inability to predict the future accurately. A number of Multi-Objective Genetic Algorithms are proposed to solve the problem with two objectives, subject to cardinality constraints, floor constraints and round-lot constraints. Fuzzy logic is incorporated into the Vector Evaluated Genetic Algorithm (VEGA), but its solutions tend to cluster around a few points. The Strength Pareto Evolutionary Algorithm 2 (SPEA2) gives portfolio solutions that are evenly distributed along the efficient front, while MOGA is more time efficient. An Evolutionary Artificial Neural Network (EANN) is proposed. It automatically evolves the ANN's initial values and structures (hidden nodes and layers). The EANN gives better performance in stock return forecasts than Ordinary Least Squares estimation, Back Propagation ANNs and Elman Recurrent ANNs. Adaptation algorithms based on fuzzy logic-like rules are proposed to select the best pair of forecasting models for a given economic scenario; their predictive performance is better than that of the individual forecasting models being compared. MOGA and SPEA2 are modified to include a third objective that handles model risk, and their performance is evaluated and tested. The results show that they perform better than the versions without the third objective.
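As a rough illustration of how the constrained objectives could be evaluated inside a multi-objective genetic algorithm, the sketch below scores one candidate portfolio on expected return and variance and checks cardinality and floor constraints. It is a minimal NumPy sketch under stated assumptions: the function name and limits are hypothetical, round-lot handling is omitted, and it is not the thesis's actual formulation.

```python
import numpy as np

def portfolio_objectives(weights, mean_returns, cov, max_assets=10, floor=0.01):
    """Evaluate one candidate portfolio for a multi-objective GA.
    Returns (expected return, variance, feasibility flag). A MOGA/SPEA2-style
    search would rank candidates by Pareto dominance over these objectives,
    penalizing or discarding infeasible ones."""
    held = weights > 0
    feasible = (
        held.sum() <= max_assets                 # cardinality constraint
        and np.all(weights[held] >= floor)       # floor constraint
        and np.isclose(weights.sum(), 1.0)       # fully invested
    )
    exp_ret = float(weights @ mean_returns)      # objective 1: maximize
    variance = float(weights @ cov @ weights)    # objective 2: minimize
    return exp_ret, variance, feasible
```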
243

Adaptive Foraging in a Generalist Predator: Implications of Habitat Structure, Density, Prey Availability and Nutrients

Schmidt, Jason M. 09 August 2011
No description available.
244

Model Selection and Adaptive Lasso Estimation of Spatial Models

Liu, Tuo 07 December 2017
No description available.
245

[en] GETTING THE MOST OUT OF THE WISDOM OF THE CROWDS: IMPROVING FORECASTING PERFORMANCE THROUGH ENSEMBLE METHODS AND VARIABLE SELECTION TECHNIQUES / [pt] TIRANDO O MÁXIMO PROVEITO DA SABEDORIA DAS MASSAS: APRIMORANDO PREVISÕES POR MEIO DE MÉTODOS DE ENSEMBLE E TÉCNICAS DE SELEÇÃO DE VARIÁVEIS

ERICK MEIRA DE OLIVEIRA 03 June 2020
[en] This research focuses on the development of hybrid approaches that combine ensemble-based supervised machine learning techniques and time series methods to obtain accurate forecasts for a wide range of variables and processes. It also includes the development of smart selection heuristics, i.e., procedures that can select, among the pool of forecasts originated via ensemble methods, those with the greatest potential of delivering accurate forecasts after aggregation. Such combinatorial approaches allow the forecasting practitioner to deal with different stylized facts that may be present in time series, such as nonlinearities, stochastic components, heteroscedasticity and structural breaks, among others, and deliver satisfactory forecasting results, outperforming benchmarks on many occasions. The thesis is divided into a series of essays. The first endeavor proposed an alternative method to generate ensemble forecasts, which delivered satisfactory forecasting results for certain types of electricity consumption time series. In a second effort, a novel forecasting approach combining Bootstrap aggregating (Bagging) algorithms, time series methods and regularization techniques was introduced to obtain accurate forecasts of natural gas consumption and energy supplied series across different countries. A new variant of Bagging, in which the set of classifiers is built by means of a Maximum Entropy Bootstrap routine, was also put forth. The third contribution brought a series of innovations to model selection and model combination in forecasting routines. Gains in accuracy for both point forecasts and prediction intervals were demonstrated by means of an extensive empirical experiment conducted on a wide range of series from the M-Competitions.
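A minimal sketch of the general idea of combining Bagging with a regularized linear forecaster is given below, using scikit-learn's BaggingRegressor with a Lasso base learner on lagged features. The lag construction, penalty strength and ensemble size are illustrative assumptions; the thesis's exact configuration and its Maximum Entropy Bootstrap variant are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.linear_model import Lasso

def make_lagged(series, n_lags=12):
    """Build a lagged-value design matrix for a univariate series (illustrative)."""
    X = np.column_stack([series[i:len(series) - n_lags + i] for i in range(n_lags)])
    y = series[n_lags:]
    return X, y

rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(size=300))      # placeholder consumption series
X, y = make_lagged(series)

# Bagging of regularized linear forecasters: each member is fit on a bootstrap
# resample, and the ensemble forecast is the average of the members.
model = BaggingRegressor(Lasso(alpha=0.1), n_estimators=50, random_state=0)
model.fit(X[:-24], y[:-24])
forecast = model.predict(X[-24:])             # out-of-sample forecasts for the last 24 points
```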
246

Contributions to Structured Variable Selection Towards Enhancing Model Interpretation and Computation Efficiency

Shen, Sumin 07 February 2020
The advances in data-collecting technologies provide great opportunities to access large sample-size data sets with high dimensionality. Variable selection is an important procedure to extract useful knowledge from such complex data. In many real-data applications, an appropriate selection of variables should facilitate model interpretation and computational efficiency. It is thus important to incorporate domain knowledge of the underlying data generation mechanism to select key variables for improving model performance. However, general variable selection techniques, such as best subset selection and the Lasso, often do not take the underlying data generation mechanism into consideration. This thesis aims to develop statistical modeling methodologies with a focus on structured variable selection towards better model interpretation and computational efficiency. Specifically, this thesis consists of three parts: an additive heredity model with coefficients incorporating the multi-level data, a regularized dynamic generalized linear model with piecewise constant functional coefficients, and a structured variable selection method within the best subset selection framework. In Chapter 2, an additive heredity model is proposed for analyzing mixture-of-mixtures (MoM) experiments. The MoM experiment is different from the classical mixture experiment in that the mixture component in MoM experiments, known as the major component, is made up of sub-components, known as the minor components. The proposed model considers an additive structure to inherently connect the major components with the minor components. To enable a meaningful interpretation of the estimated model, we apply the hierarchical and heredity principles by using the nonnegative garrote technique for model selection. The performance of the additive heredity model was compared to several conventional methods in both unconstrained and constrained MoM experiments. The additive heredity model was then successfully applied to a real problem of optimizing the Pringles® potato crisp studied previously in the literature. In Chapter 3, we consider the dynamic effects of variables in generalized linear models such as logistic regression. This work is motivated by an engineering problem in which the effects of process variables on product quality vary due to equipment degradation. To address this challenge, we propose a penalized dynamic regression model that flexibly estimates the dynamic coefficient structure. The proposed method models the functional coefficients as piecewise constant functions. Specifically, under the penalized regression framework, the fused lasso penalty is adopted for detecting the changes in the dynamic coefficients, and the group lasso penalty is applied to enable a sparse selection of variables. Moreover, an efficient parameter estimation algorithm is developed based on the alternating direction method of multipliers. The performance of the dynamic coefficient model is evaluated in numerical studies and three real-data examples. In Chapter 4, we develop a structured variable selection method within the best subset selection framework. In the literature, many techniques within the LASSO framework have been developed to address structured variable selection issues. However, less attention has been paid to structured best subset selection problems.
In this work, we propose a sparse Ridge regression method to address structured variable selection issues. The key idea of the proposed method is to reconstruct the regression matrix from the perspective of experimental design. We employ the estimation-maximization algorithm to formulate the best subset selection problem as an iterative linear integer optimization (LIO) problem, with a mixed integer optimization algorithm as the selection step. We demonstrate the power of the proposed method in various structured variable selection problems. Moreover, the proposed method can be extended to ridge-penalized best subset selection problems. The performance of the proposed method is evaluated in numerical studies. / Doctor of Philosophy / The advances in data-collecting technologies provide great opportunities to access large sample-size data sets with high dimensionality. Variable selection is an important procedure to extract useful knowledge from such complex data. In many real-data applications, an appropriate selection of variables should facilitate model interpretation and computational efficiency. It is thus important to incorporate domain knowledge of the underlying data generation mechanism to select key variables for improving model performance. However, general variable selection techniques often do not take the underlying data generation mechanism into consideration. This thesis aims to develop statistical modeling methodologies with a focus on structured variable selection towards better model interpretation and computational efficiency. The proposed approaches have been applied to real-world problems to demonstrate their model performance.
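The piecewise-constant dynamic coefficient idea in Chapter 3 can be sketched by penalizing the increments of a time-varying coefficient, a fused-lasso-style reparameterization. The toy example below uses a single covariate, Gaussian errors and an ordinary Lasso on the increments (which also shrinks the baseline level), so it is only an assumption-laden simplification of the fused lasso plus group lasso GLM described above.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Illustrative sketch: recover a piecewise-constant time-varying coefficient
# beta_t by penalizing its increments delta_s, where beta_t = sum_{s<=t} delta_s.
rng = np.random.default_rng(1)
T = 200
x = rng.normal(size=T)
beta_true = np.where(np.arange(T) < 120, 1.0, -0.5)     # one change point
y = beta_true * x + rng.normal(scale=0.3, size=T)

# Column s of Z contributes the increment delta_s to all observations t >= s,
# so Z @ delta reproduces x_t * beta_t.
Z = np.tril(np.ones((T, T))) * x[:, None]
fit = Lasso(alpha=0.01, fit_intercept=False, max_iter=50_000).fit(Z, y)
beta_hat = np.cumsum(fit.coef_)                          # estimated coefficient path
```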
247

Comparing generalized additive neural networks with multilayer perceptrons / Johannes Christiaan Goosen

Goosen, Johannes Christiaan January 2011
In this dissertation, generalized additive neural networks (GANNs) and multilayer perceptrons (MLPs) are studied and compared as prediction techniques. MLPs are the most widely used type of artificial neural network (ANN), but are considered black boxes with regard to interpretability. There is currently no simple a priori method to determine the number of hidden neurons in each of the hidden layers of ANNs. Guidelines exist that are either heuristic or based on simulations that are derived from limited experiments. A modified version of the neural network construction with cross-validation samples (N2C2S) algorithm is therefore implemented and utilized to construct good MLP models. This algorithm enables the comparison with GANN models. GANNs are a relatively new type of ANN, based on the generalized additive model. The architecture of a GANN is less complex compared to MLPs and results can be interpreted with a graphical method, called the partial residual plot. A GANN consists of an input layer where each of the input nodes has its own MLP with one hidden layer. Originally, GANNs were constructed by interpreting partial residual plots. This method is time-consuming and subjective, which may lead to the creation of suboptimal models. Consequently, an automated construction algorithm for GANNs was created and implemented in the SAS® statistical language. This system was called AutoGANN and is used to create good GANN models. A number of experiments are conducted on five publicly available data sets to gain insight into the similarities and differences between GANN and MLP models. The data sets include regression and classification tasks. In-sample model selection with the SBC model selection criterion and out-of-sample model selection with the average validation error as model selection criterion are performed. The models created are compared in terms of predictive accuracy, model complexity, comprehensibility, ease of construction and utility. The results show that the choice of model is highly dependent on the problem, as no single model always outperforms the other in terms of predictive accuracy. GANNs may be suggested for problems where interpretability of the results is important. The time taken to construct good MLP models by the modified N2C2S algorithm may be shorter than the time to build good GANN models by the automated construction algorithm. / Thesis (M.Sc. (Computer Science))--North-West University, Potchefstroom Campus, 2011.
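As a sketch of the GANN architecture described above (each input variable feeding its own one-hidden-layer subnetwork whose univariate outputs are summed), the snippet below defines such a model in PyTorch. The framework, layer sizes and activation are assumptions chosen for illustration; the AutoGANN system in the dissertation was implemented in SAS, not Python.

```python
import torch
import torch.nn as nn

class GANN(nn.Module):
    """Generalized additive neural network sketch: each input variable is fed
    to its own small one-hidden-layer MLP, and the univariate outputs are
    summed (plus a bias), mirroring a generalized additive model."""
    def __init__(self, n_inputs, hidden=3):
        super().__init__()
        self.partials = nn.ModuleList(
            nn.Sequential(nn.Linear(1, hidden), nn.Tanh(), nn.Linear(hidden, 1))
            for _ in range(n_inputs)
        )
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x):                       # x: (batch, n_inputs)
        terms = [net(x[:, j:j + 1]) for j, net in enumerate(self.partials)]
        return self.bias + torch.stack(terms, dim=0).sum(dim=0).squeeze(-1)

# The fitted univariate function of each sub-network can be plotted against its
# input variable, which is the neural analogue of a partial residual plot.
```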
249

Model selection

Hildebrand, Annelize 11 1900
In developing an understanding of real-world problems, researchers develop mathematical and statistical models. Various model selection methods exist that can be used to obtain a mathematical model that best describes the real-world situation in one sense or another. These methods aim to assess the merits of competing models by concentrating on a particular criterion. Each selection method is associated with its own criterion and is named accordingly. The better-known ones include Akaike's Information Criterion, Mallows' Cp and cross-validation. The value of the criterion is calculated for each model and the model corresponding to the minimum value of the criterion is then selected as the "best" model. / Mathematical Sciences / M. Sc. (Statistics)
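A minimal sketch of criterion-based selection, choosing among competing linear models by minimizing AIC (here the Gaussian form n·ln(RSS/n) + 2k, with additive constants dropped), is shown below. The candidate models and data are hypothetical and used only to illustrate the "compute the criterion for each model, pick the minimum" procedure.

```python
import numpy as np

def aic_linear(y, X):
    """AIC for an OLS fit under Gaussian errors: n*ln(RSS/n) + 2k.
    Only differences between models matter, so constants are dropped."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ beta) ** 2))
    n, k = X.shape
    return n * np.log(rss / n) + 2 * k

rng = np.random.default_rng(2)
n = 100
x1, x2, x3 = rng.normal(size=(3, n))
y = 1.0 + 2.0 * x1 - 1.5 * x2 + rng.normal(size=n)      # x3 is irrelevant

candidates = {
    "x1":       np.column_stack([np.ones(n), x1]),
    "x1+x2":    np.column_stack([np.ones(n), x1, x2]),
    "x1+x2+x3": np.column_stack([np.ones(n), x1, x2, x3]),
}
scores = {name: aic_linear(y, X) for name, X in candidates.items()}
best = min(scores, key=scores.get)      # model with the minimum criterion value
```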
250

Essays on economic and econometric applications of Bayesian estimation and model comparison

Li, Guangjie January 2009
This thesis consists of three chapters on economic and econometric applications of Bayesian parameter estimation and model comparison. The first two chapters study the incidental parameter problem, mainly under a linear autoregressive (AR) panel data model with fixed effects. The first chapter investigates the problem from a model comparison perspective. Its major finding is that consistency in parameter estimation and model selection are interrelated: the reparameterization of the fixed effect parameter proposed by Lancaster (2002) may not provide a valid solution to the incidental parameter problem if the wrong set of exogenous regressors is included. To estimate the model consistently and to measure its goodness of fit, the Bayes factor is found to be more preferable for model comparison than the Bayesian information criterion based on the biased maximum likelihood estimates. When the model uncertainty is substantial, Bayesian model averaging is recommended. The method is applied to study the relationship between financial development and economic growth. The second chapter proposes a correction function approach to solve the incidental parameter problem. It is discovered that the correction function exists for the linear AR panel model of order p when the model is stationary with strictly exogenous regressors. MCMC algorithms are developed for parameter estimation and to calculate the Bayes factor for model comparison. The last chapter studies how stock return predictability and model uncertainty affect a rational buy-and-hold investor's decision to allocate her wealth over different investment horizons in the UK market. The FTSE All-Share Index is treated as the risky asset, and the UK Treasury bill as the riskless asset in forming the investor's portfolio. Bayesian methods are employed to identify the most powerful predictors while accounting for model uncertainty. It is found that though stock return predictability is weak, it can still affect the investor's optimal portfolio decisions over different investment horizons.
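As a rough illustration of Bayes-factor-based comparison and Bayesian model averaging, the sketch below uses the common BIC approximation to the Bayes factor, exp((BIC_1 - BIC_0)/2) for model 0 versus model 1, and turns BICs into approximate posterior model weights. This is a textbook approximation under equal prior odds, not the MCMC-based marginal likelihood calculations developed in the thesis; the linear-Gaussian setting is an assumption for illustration.

```python
import numpy as np

def bic_linear(y, X):
    """BIC for an OLS fit under Gaussian errors: n*ln(RSS/n) + k*ln(n)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ beta) ** 2))
    n, k = X.shape
    return n * np.log(rss / n) + k * np.log(n)

def bma_weights(bics):
    """Approximate posterior model probabilities from BICs under equal prior
    odds: p(M_j | y) is proportional to exp(-BIC_j / 2)."""
    b = np.asarray(bics, dtype=float)
    w = np.exp(-(b - b.min()) / 2.0)      # subtract the minimum for stability
    return w / w.sum()

# Bayes factor of model 0 versus model 1 is then approximately
# np.exp((bic_1 - bic_0) / 2), and model-averaged forecasts weight each
# candidate model's prediction by its bma_weights entry.
```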
