1

Statistical Methods for High Throughput Screening Drug Discovery Data

Wang, Yuanyuan (Marcia), January 2005
High Throughput Screening (HTS) is used in drug discovery to screen large numbers of compounds against a biological target. Data on activity against the target are collected for a representative sample of compounds selected from a large library. The goal of drug discovery is to relate the activity of a compound to its chemical structure, which is quantified by various explanatory variables, and hence to identify further active compounds. Often, this application has a very unbalanced class distribution, with a rare active class.

Classification methods are commonly proposed as solutions to this problem. However, in drug discovery, researchers are more interested in ranking compounds by predicted activity than in the classification itself. This feature makes my approach distinct from common classification techniques.

In this thesis, two AIDS data sets from the National Cancer Institute (NCI) are mainly used. Local methods, namely K-nearest neighbours (KNN) and classification and regression trees (CART), perform very well on these data in comparison with linear/logistic regression, neural networks, and Multivariate Adaptive Regression Splines (MARS) models, which assume more smoothness. One reason for the superiority of local methods is the local behaviour of the data. Indeed, I argue that conventional classification criteria such as misclassification rate or deviance tend to select too small a tree or too large a value of k (the number of nearest neighbours). A more local model (a bigger tree or a smaller k) gives better performance in terms of drug discovery.

Because off-the-shelf KNN works relatively well, this thesis takes this promising method and makes several novel modifications, which further improve its performance. The choice of k is optimized for each test point to be predicted. The empirically observed superiority of allowing k to vary is investigated. The nature of the problem, ranking of objects rather than estimating the probability of activity, enables the k-varying algorithm to stand out. Similarly, KNN combined with a kernel weight function (weighted KNN) is proposed and demonstrated to be superior to the regular KNN method.

High dimensionality of the explanatory variables is known to cause problems for KNN and many other classifiers. I propose a novel method (subset KNN) of averaging across multiple classifiers built on subspaces (subsets of variables). It improves the performance of KNN for HTS data. When applied to CART, it also performs as well as or even better than the popular methods of bagging and boosting. Part of this improvement is due to the discovery that classifiers based on irrelevant subspaces (unimportant explanatory variables) do little damage when averaged with good classifiers based on relevant subspaces (important variables). This result is particular to the ranking of objects rather than the estimation of the probability of activity. A theoretical justification is proposed. The thesis also suggests diagnostics for identifying important subsets of variables and hence further reducing the impact of the curse of dimensionality.

To give a broader evaluation of these methods, subset KNN and weighted KNN are applied to three other data sets: the NCI AIDS data with Constitutional descriptors, Mutagenicity data with BCUT descriptors, and Mutagenicity data with Constitutional descriptors. The k-varying algorithm, as a method for unbalanced data, is also applied to the NCI AIDS data with Constitutional descriptors. As a baseline, the performance of KNN on these data sets is reported. Although different methods are best for different data sets, some of the proposed methods are always amongst the best.

Finally, methods are described for estimating activity rates and error rates in HTS data. By combining auxiliary information about repeat tests of the same compound, likelihood methods can extract interesting information about the magnitudes of the measurement errors made in the assay process. These estimates can be used to assess model performance, which sheds new light on how various models handle the large random or systematic assay errors often present in HTS data.
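For concreteness, the sketch below shows one way the weighted-KNN scoring and subspace-averaging ideas described in the abstract could be coded. It is an illustrative approximation, not the thesis's implementation: the descriptor matrices X_train/X_test, the binary activity vector y_train, the Gaussian kernel, and the values of k, the bandwidth, and the subset sizes are all assumptions made for the example.

```python
import numpy as np

def weighted_knn_scores(X_train, y_train, X_test, k=25, bandwidth=1.0):
    """Score each test compound by a kernel-weighted average of the activities
    of its k nearest training compounds; compounds are then ranked by score
    rather than hard-classified."""
    scores = np.empty(len(X_test))
    for i, x in enumerate(X_test):
        dists = np.linalg.norm(X_train - x, axis=1)      # distances in descriptor space
        nn = np.argsort(dists)[:k]                       # k nearest training compounds
        w = np.exp(-(dists[nn] / bandwidth) ** 2)        # Gaussian kernel: closer neighbours weigh more
        scores[i] = np.sum(w * y_train[nn]) / np.sum(w)  # weighted proportion of active neighbours
    return scores

def subset_knn_scores(X_train, y_train, X_test, n_subsets=10, subset_size=6,
                      k=25, bandwidth=1.0, seed=0):
    """Average weighted-KNN scores over random subsets of the descriptors,
    a simple stand-in for averaging classifiers built on subspaces."""
    rng = np.random.default_rng(seed)
    p = X_train.shape[1]
    total = np.zeros(len(X_test))
    for _ in range(n_subsets):
        cols = rng.choice(p, size=min(subset_size, p), replace=False)
        total += weighted_knn_scores(X_train[:, cols], y_train, X_test[:, cols],
                                     k=k, bandwidth=bandwidth)
    return total / n_subsets

# Rank test compounds by predicted activity (highest score first):
# ranking = np.argsort(-subset_knn_scores(X_train, y_train, X_test))
```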
2

GDP forecasting and nowcasting : Utilizing a system for averaging models to improve GDP predictions for six countries around the world

Lundberg, Otto, January 2017
This study was commissioned by Swedbank because they wanted to improve their GDP growth forecasting capabilities. A program was developed and tested on six countries: the USA, Sweden, Germany, the UK, Brazil, and Norway. In this paper I investigate whether I can reduce the forecasting error for GDP growth by taking a smart average of a variety of models, compared with both the best individual models and a random walk. I combine the forecasts from four model groups: vector autoregression, principal component analysis, machine learning, and random walk. The smart average is produced by a system that gives more weight to the predictions of models with lower historical error. Different weighting schemes are explored: how far into the past should we look? How much should bad performance be punished? I show that for the six countries studied the smart average outperforms the single best model, and that for five out of six countries it beats a random walk by at least 25%.
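As a rough illustration of the weighting idea in this abstract (more weight to models with lower recent error, with a tunable look-back window and a tunable penalty on poor performance), a minimal Python sketch follows. The function name, the inverse-error weighting with an exponent, and the numbers in the usage example are assumptions for illustration; the thesis's actual weighting schemes may differ.

```python
import numpy as np

def combine_forecasts(forecasts, past_errors, window=8, power=2.0):
    """Weighted average of model forecasts.

    forecasts   : shape (n_models,), each model's current GDP-growth prediction
    past_errors : shape (n_models, n_periods), historical forecast errors
    window      : how many recent periods to look back over
    power       : how strongly poor historical performance is punished
    """
    recent = np.abs(past_errors[:, -window:])   # recent absolute errors per model
    mae = recent.mean(axis=1)                   # mean absolute error over the window
    weights = 1.0 / (mae ** power + 1e-12)      # lower historical error -> larger weight
    weights /= weights.sum()                    # normalise weights to sum to one
    return float(np.dot(weights, forecasts))    # combined point forecast

# Illustrative numbers only: four model groups (VAR, PCA, machine learning, random walk).
print(combine_forecasts(np.array([2.1, 1.8, 2.4, 1.5]),
                        np.array([[0.5, 0.4, 0.6],
                                  [0.3, 0.2, 0.4],
                                  [0.9, 1.1, 0.8],
                                  [0.7, 0.6, 0.7]])))
```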
