About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

High Precision Dynamic Power System Frequency Estimation Algorithm Based on Phasor Approach

Zhang, Xuan 11 February 2004 (has links)
An Internet-based, real-time, Global Positioning System (GPS)-synchronized wide-area frequency monitoring network (FNET) has been developed at Virginia Tech. In this FNET system, an algorithm that exploits the relationship between phasor angles and frequency deviation [13] is used to calculate both frequency and its rate of change. Tests of the algorithm show that, compared with a pure sinusoidal input, a non-pure sinusoidal input produces significant errors in the output frequency. Three approaches to increasing the accuracy of the output frequency were compared. The first, increasing the number of samples per cycle N, proved ineffective. The second, using the average of the first estimated frequencies rather than the instantaneous first estimate as the resampling frequency, produces a moderate improvement in the accuracy of the frequency estimation. The third, multiple resampling, significantly increased accuracy. Both the second and the third, however, lose effectiveness as the input departs from a pure sinusoid. From a practical standpoint, attention needs to be paid to eliminating noise in the input data from the power grid so as to make it more nearly sinusoidal. It will therefore be worthwhile to test more sophisticated digital filters for processing the input data before feeding it to the algorithm. / Master of Science
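As a rough illustration of the phasor-based idea behind such estimators (a generic sketch, not the FNET algorithm of [13]; the single-bin DFT phasor, 24 samples per nominal cycle, and the 60.05 Hz test signal are assumptions made here for the example), the frequency deviation can be read off the rotation of successive one-cycle phasors:

```python
import numpy as np

def phasor(window):
    # Fundamental phasor of one nominal cycle via a single DFT bin (assumed approach).
    n = np.arange(len(window))
    return 2.0 / len(window) * np.sum(window * np.exp(-2j * np.pi * n / len(window)))

def estimate_frequency(x, f0=60.0, n_per_cycle=24):
    # Frequency from the average angle advance between successive one-cycle phasors.
    cycles = len(x) // n_per_cycle
    ph = np.array([phasor(x[k * n_per_cycle:(k + 1) * n_per_cycle]) for k in range(cycles)])
    dtheta = np.angle(ph[1:] / ph[:-1])          # radians of rotation per nominal cycle
    return f0 + np.mean(dtheta) * f0 / (2 * np.pi)

# Quick check with a 60.05 Hz cosine sampled 24 times per nominal 60 Hz cycle.
fs = 60.0 * 24
t = np.arange(0, 1.0, 1.0 / fs)
x = np.cos(2 * np.pi * 60.05 * t + 0.3)
print(estimate_frequency(x))                     # close to 60.05
```

Harmonics or noise in the input perturb each phasor angle, which is why the filtering and resampling refinements discussed in the abstract matter.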
2

Application of Order-Reduction Techniques in the Multiscale Analysis of Composites

Ricks, Trenton Mitchell 08 December 2017 (has links)
Multiscale analysis procedures for composites often involve coupling the macroscale (e.g., structural) and meso/microscale (e.g., ply, constituent) levels. These procedures are often computationally inefficient and thus are limited to coarse subscale discretizations. In this work, various computational strategies were employed to enhance the efficiency of multiscale analysis procedures. An ensemble averaging technique was applied to stochastic microscale simulation results based on the generalized method of cells (GMC) to assess the discretization required in multiscale models. The procedure was shown to be applicable for micromechanics analyses involving both elastic materials with damage and viscoplastic materials. A trade-off in macro/microscale discretizations was assessed. By appropriately discretizing the macro/microscale domains, similar predicted strengths were obtained at significantly lower computational cost. Further improvements in computational efficiency were obtained by appropriately initiating multiscale analyses in a macroscale domain. A stress-based criterion was used to initiate lower length scale GMC calculations at macroscale finite element integration points without any a priori knowledge of the critical regions. Adaptive multiscale analyses were 30% more efficient than full-domain multiscale analyses. The GMC sacrifices some accuracy in calculated local fields by assuming a low-order displacement field. More accurate microscale behavior can be obtained by using the high-fidelity GMC (HFGMC), at a significant computational cost. Proper orthogonal decomposition (POD) order-reduction methods were applied to the ensuing HFGMC sets of simultaneous equations as a means of improving the efficiency of their solution. A Galerkin-based POD method was used to both accurately and efficiently represent the HFGMC micromechanics relations for a linearly elastic E-glass/epoxy composite in both standalone and multiscale composite analyses. The computational efficiency improved significantly as the repeating unit cell discretization increased (10-85% reduction in computational runtime). A Petrov-Galerkin-based POD method was then applied to the nonlinear HFGMC micromechanics relations for a linearly elastic E-glass/elastic-perfectly plastic Nylon-12 composite. The use of accurate order-reduced models resulted in a 4.8-6.3x speedup in the equation assembly/solution runtimes (21-38% reduction in total runtimes). By appropriately discretizing model domains and enhancing the efficiency of lower length scale calculations, the goal of performing high-fidelity multiscale analyses of composites can be more readily realized.
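The Galerkin-based POD idea can be sketched generically: gather solution snapshots, keep the dominant left singular vectors, and solve the projected system. The sketch below assumes a plain linear SPD system with made-up matrices; it is not the HFGMC implementation, and the 0.9999 energy threshold and toy dimensions are arbitrary illustrative choices.

```python
import numpy as np

def pod_basis(snapshots, energy=0.9999):
    """POD basis from solution snapshots (columns), truncated by energy content."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    keep = int(np.searchsorted(np.cumsum(s**2) / np.sum(s**2), energy)) + 1
    return U[:, :keep]

def galerkin_solve(K, f, Phi):
    """Galerkin projection: solve (Phi^T K Phi) q = Phi^T f, then lift back to full space."""
    q = np.linalg.solve(Phi.T @ K @ Phi, Phi.T @ f)
    return Phi @ q

# Toy demonstration on a random SPD system, with snapshots from related load cases.
rng = np.random.default_rng(0)
n = 200
A = rng.standard_normal((n, n))
K = A @ A.T + n * np.eye(n)                 # stands in for a stiffness-like SPD matrix
loads = rng.standard_normal((n, 20))
snapshots = np.linalg.solve(K, loads)       # "training" solutions
Phi = pod_basis(snapshots)
f_new = loads @ rng.standard_normal(20)     # new load in the span of the training cases
u_full = np.linalg.solve(K, f_new)
u_rom = galerkin_solve(K, f_new, Phi)
print(np.linalg.norm(u_full - u_rom) / np.linalg.norm(u_full))  # small relative error
```

The payoff is that the reduced system has only as many unknowns as retained POD modes, which is where the reported runtime reductions come from.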
3

Empirical Bayes Model Averaging in the Presence of Model Misfit

Wang, Junyan January 2016 (has links)
No description available.
4

Essays on Least Squares Model Averaging

Xie, Tian 17 July 2013 (has links)
This dissertation adds to the literature on least squares model averaging by studying and extending current least squares model averaging techniques. The first chapter reviews existing literature and discusses the contributions of this dissertation. The second chapter proposes a new estimator for least squares model averaging. A model average estimator is a weighted average of common estimates obtained from a set of models. I propose computing weights by minimizing a model average prediction criterion (MAPC). I prove that the MAPC estimator is asymptotically optimal in the sense of achieving the lowest possible mean squared error. For statistical inference, I derive asymptotic tests on the average coefficients for the "core" regressors. These regressors are of primary interest to researchers and are included in every approximation model. In Chapter Three, two empirical applications for the MAPC method are conducted. I revisit the economic growth models in Barro (1991) in the first application. My results provide significant evidence to support Barro's (1991) findings. In the second application, I revisit the work by Durlauf, Kourtellos and Tan (2008) (hereafter DKT). Many of my results are consistent with DKT's findings and some of my results provide an alternative explanation to those outlined by DKT. In the fourth chapter, I propose using the model averaging method to construct optimal instruments for IV estimation when there are many potential instrument sets. The empirical weights are computed by minimizing the model averaging IV (MAIV) criterion through convex optimization. I propose a new loss function to evaluate the performance of the estimator. I prove that the instrument set obtained by the MAIV estimator is asymptotically optimal in the sense of achieving the lowest possible value of the loss function. The fifth chapter develops a new forecast combination method based on MAPC. The empirical weights are obtained through a convex optimization of MAPC. I prove that with stationary observations, the MAPC estimator is asymptotically optimal for forecast combination in that it achieves the lowest possible one-step-ahead second-order mean squared forecast error (MSFE). I also show that MAPC is asymptotically equivalent to the in-sample mean squared error (MSE) and MSFE. / Thesis (Ph.D, Economics) -- Queen's University, 2013-07-17 15:46:54.442
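To make the weighting idea concrete, here is a generic least squares model averaging sketch: candidate models are fit, their leave-one-out predictions are stacked, and simplex-constrained weights minimize the squared prediction error. This uses a jackknife-style criterion as a stand-in, not the MAPC criterion defined in the dissertation, and the data and candidate models are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def loo_predictions(X, y):
    """Leave-one-out predictions of an OLS model via the hat-matrix shortcut."""
    H = X @ np.linalg.solve(X.T @ X, X.T)
    fitted = H @ y
    return y - (y - fitted) / (1 - np.diag(H))

def averaging_weights(pred_matrix, y):
    """Simplex-constrained weights minimizing squared leave-one-out prediction error."""
    M = pred_matrix.shape[1]
    obj = lambda w: np.sum((y - pred_matrix @ w) ** 2)
    cons = [{"type": "eq", "fun": lambda w: np.sum(w) - 1.0}]
    res = minimize(obj, np.full(M, 1.0 / M), bounds=[(0, 1)] * M,
                   constraints=cons, method="SLSQP")
    return res.x

# Toy example: three nested candidate models for a partially sparse regression.
rng = np.random.default_rng(1)
n = 200
X = rng.standard_normal((n, 4))
y = X[:, 0] + 0.5 * X[:, 1] + 0.1 * X[:, 2] + rng.standard_normal(n)
candidates = [X[:, :1], X[:, :2], X[:, :4]]
P = np.column_stack([loo_predictions(np.column_stack([np.ones(n), Xc]), y)
                     for Xc in candidates])
print(averaging_weights(P, y))   # weights attached to each candidate model
```

The model average prediction is then the weighted combination of the candidate models' predictions, with the weights chosen by the data rather than by selecting a single model.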
5

Statistical Methods for High Throughput Screening Drug Discovery Data

Wang, Yuanyuan (Marcia) January 2005 (has links)
High Throughput Screening (HTS) is used in drug discovery to screen large numbers of compounds against a biological target. Data on activity against the target are collected for a representative sample of compounds selected from a large library. The goal of drug discovery is to relate the activity of a compound to its chemical structure, which is quantified by various explanatory variables, and hence to identify further active compounds. Often, this application has a very unbalanced class distribution, with a rare active class.

Classification methods are commonly proposed as solutions to this problem. However, regarding drug discovery, researchers are more interested in ranking compounds by predicted activity than in the classification itself. This feature makes my approach distinct from common classification techniques.

In this thesis, two AIDS data sets from the National Cancer Institute (NCI) are mainly used. Local methods, namely K-nearest neighbours (KNN) and classification and regression trees (CART), perform very well on these data in comparison with linear/logistic regression, neural networks, and Multivariate Adaptive Regression Splines (MARS) models, which assume more smoothness. One reason for the superiority of local methods is the local behaviour of the data. Indeed, I argue that conventional classification criteria such as misclassification rate or deviance tend to select too small a tree or too large a value of k (the number of nearest neighbours). A more local model (bigger tree or smaller k) gives a better performance in terms of drug discovery.

Because off-the-shelf KNN works relatively well, this thesis takes this promising method and makes several novel modifications, which further improve its performance. The choice of k is optimized for each test point to be predicted. The empirically observed superiority of allowing k to vary is investigated. The nature of the problem, ranking of objects rather than estimating the probability of activity, enables the k-varying algorithm to stand out. Similarly, KNN combined with a kernel weight function (weighted KNN) is proposed and demonstrated to be superior to the regular KNN method.

High dimensionality of the explanatory variables is known to cause problems for KNN and many other classifiers. I propose a novel method (subset KNN) of averaging across multiple classifiers based on building classifiers on subspaces (subsets of variables). It improves the performance of KNN for HTS data. When applied to CART, it also performs as well as or even better than the popular methods of bagging and boosting. Part of this improvement is due to the discovery that classifiers based on irrelevant subspaces (unimportant explanatory variables) do little damage when averaged with good classifiers based on relevant subspaces (important variables). This result is particular to the ranking of objects rather than estimating the probability of activity. A theoretical justification is proposed. The thesis also suggests diagnostics for identifying important subsets of variables and hence further reducing the impact of the curse of dimensionality.

In order to have a broader evaluation of these methods, subset KNN and weighted KNN are applied to three other data sets: the NCI AIDS data with Constitutional descriptors, Mutagenicity data with BCUT descriptors and Mutagenicity data with Constitutional descriptors. The k-varying algorithm as a method for unbalanced data is also applied to NCI AIDS data with Constitutional descriptors. As a baseline, the performance of KNN on such data sets is reported. Although different methods are best for the different data sets, some of the proposed methods are always amongst the best.

Finally, methods are described for estimating activity rates and error rates in HTS data. By combining auxiliary information about repeat tests of the same compound, likelihood methods can extract interesting information about the magnitudes of the measurement errors made in the assay process. These estimates can be used to assess model performance, which sheds new light on how various models handle the large random or systematic assay errors often present in HTS data.
6

Averaging et commande optimale déterministe (Averaging and Deterministic Optimal Control)

Chaplais, François 20 November 1984 (has links) (PDF)
We consider a deterministic optimal control problem whose dynamics depend on "fast" phenomena, modelled through a fast time t/epsilon, with epsilon small. We study the particular case in which the fast time enters the dynamics periodically. The averaged problem is then studied in both the periodic and non-periodic cases.
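For orientation, the periodic fast-time setting described above is usually written as follows; the notation is generic and assumed here for illustration, not taken from the thesis.

```latex
% Generic periodic averaging setup for dynamics with a fast time t/epsilon
% (illustrative notation; not the thesis's own formulation).
\[
\begin{aligned}
  \dot{x}^{\varepsilon}(t) &= f\!\left(x^{\varepsilon}(t),\,u(t),\,\tfrac{t}{\varepsilon}\right),
      \qquad f(x,u,\cdot)\ \text{is } T\text{-periodic},\\[4pt]
  \dot{\bar{x}}(t) &= \bar{f}\!\left(\bar{x}(t),\,u(t)\right)
      := \frac{1}{T}\int_{0}^{T} f\!\left(\bar{x}(t),\,u(t),\,\tau\right)\mathrm{d}\tau .
\end{aligned}
\]
```

Under suitable regularity assumptions, the optimal cost of the fast problem approaches that of the averaged problem as epsilon tends to zero, which is the kind of question addressed in both the periodic and non-periodic settings.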
7

A Study on the Comparison of Dollar-Cost Averaging and Lump Sum Investing Performances in Mutual Fund.

Ho, Hsaio-fang 20 June 2008 (has links)
none
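For readers unfamiliar with the two strategies being compared, a minimal sketch is given below; the monthly price path and budget are hypothetical numbers for illustration only, not data from the study.

```python
import numpy as np

def lump_sum(prices, budget):
    """Invest the whole budget at the first price; value the position at the last price."""
    return budget / prices[0] * prices[-1]

def dollar_cost_average(prices, budget):
    """Invest equal cash amounts at each price; value the position at the last price."""
    per_period = budget / len(prices)
    shares = np.sum(per_period / prices)
    return shares * prices[-1]

# Hypothetical monthly prices over one year.
prices = np.array([100, 95, 90, 97, 103, 108, 104, 110, 115, 112, 118, 125], float)
print(lump_sum(prices, 12000.0), dollar_cost_average(prices, 12000.0))
```

Dollar-cost averaging buys more shares when prices are low and fewer when they are high, so the comparison in the thesis turns on how the two strategies trade off return against risk over real fund histories.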
8

Kinetische Gleichungen und velocity averaging (Kinetic Equations and Velocity Averaging)

Westdickenberg, Michael. January 1900 (has links)
Thesis (doctoral)--Rheinische Friedrich-Wilhelms-Universität Bonn, 2000. / Includes bibliographical references (p. 69-70).
9

Využití metody value averaging na akciových trzích (Use of the Value Averaging Method on Stock Markets)

Škatuĺárová, Ivana January 2015 (has links)
This diploma thesis focuses on testing the value averaging investment method on real data from three global stock markets over the years 1990-2013. The first part is devoted to analysing and comparing the return and risk of investments that apply the value averaging method on different markets over different adjusted investment horizons. The conclusion offers recommendations for investors using the value averaging method, together with a presentation of the results and a discussion of them in relation to works on similar themes.
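A minimal sketch of the value averaging rule itself is shown below; the target increment and price path are hypothetical, and the thesis's actual markets, horizons, and adjustments are not reproduced here.

```python
import numpy as np

def value_averaging(prices, target_step):
    """Value averaging: each period, trade so the holding matches a value target
    that grows by target_step per period; returns the cash flows and final value."""
    shares, flows = 0.0, []
    for k, p in enumerate(prices, start=1):
        target = k * target_step
        trade_value = target - shares * p   # buy if below target, sell if above
        shares += trade_value / p
        flows.append(-trade_value)          # negative = cash invested this period
    return np.array(flows), shares * prices[-1]

# Hypothetical monthly prices and a target that grows by 1000 per month.
prices = np.array([100, 95, 90, 97, 103, 108, 104, 110, 115, 112, 118, 125], float)
flows, final_value = value_averaging(prices, 1000.0)
print(flows.round(2), final_value)
```

Unlike dollar-cost averaging, the cash invested each period varies: more is bought after prices fall and shares may be sold after prices rise, which is what drives the return and risk comparison discussed above.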
