1

Dual Model Robust Regression

Robinson, Timothy J. 15 April 1997 (has links)
In typical normal-theory regression, the assumption of homogeneity of variances is often not appropriate. Instead of treating the variances as a nuisance and transforming away the heterogeneity, the structure of the variances may itself be of interest, and it is desirable to model the variances. Aitkin (1987) proposes a parametric dual model in which a log-linear dependence of the variances on a set of explanatory variables is assumed. Aitkin's parametric approach is an iterative one, providing estimates for the parameters in the mean and variance models through joint maximum likelihood. Estimation of the mean and variance parameters is interrelated, as the responses in the variance model are the squared residuals from the fit to the mean model. When one or both of the models (the mean or variance model) are misspecified, parametric dual modeling can lead to faulty inferences. An alternative to parametric dual modeling is to let the data completely determine the form of the true underlying mean and variance functions (nonparametric dual modeling). However, nonparametric techniques often produce estimates characterized by high variability, and they ignore important knowledge that the user may have regarding the process. Mays and Birch (1996) have demonstrated an effective semiparametric method in the one-regressor, single-model regression setting which is a "hybrid" of parametric and nonparametric fits. Using their techniques, we develop a dual-modeling approach which is robust to misspecification in either or both of the two models. Examples are presented to illustrate the new technique, termed here Dual Model Robust Regression. / Ph. D.
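To make the parametric dual-modeling idea concrete, here is a minimal numpy sketch, not Robinson's or Aitkin's actual procedure: the mean model is fit by weighted least squares, the squared residuals serve as responses for a log-linear variance model, and the two fits alternate until the mean coefficients stabilize. Fitting the variance model by OLS on log squared residuals is a simplification of the joint maximum-likelihood step; the toy data and all names are illustrative.

```python
import numpy as np

def dual_model_fit(X, Z, y, n_iter=50, tol=1e-8):
    """Alternate a WLS fit of the mean model y ~ X @ beta with a
    log-linear variance model log(sigma^2) ~ Z @ gamma whose responses
    are the squared residuals from the current mean fit."""
    w = np.ones(len(y))                  # start from homogeneous variances
    beta = None
    for _ in range(n_iter):
        # weighted least squares for the mean model: (X'WX) beta = X'Wy
        beta_new = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
        r2 = (y - X @ beta_new) ** 2     # squared residuals -> variance responses
        # OLS on log squared residuals stands in for the ML variance step
        gamma = np.linalg.solve(Z.T @ Z, Z.T @ np.log(r2 + 1e-12))
        w = np.exp(-Z @ gamma)           # new weights = 1 / fitted variances
        if beta is not None and np.max(np.abs(beta_new - beta)) < tol:
            return beta_new, gamma
        beta = beta_new
    return beta, gamma

# toy heteroscedastic data: variance grows log-linearly in x
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 200)
X = np.column_stack([np.ones_like(x), x])            # mean model design
Z = X                                                # variance model design
y = 1.0 + 2.0 * x + rng.normal(0.0, np.exp(-0.5 + x), 200)
beta, gamma = dual_model_fit(X, Z, y)
print("mean coefficients:", beta)
print("log-variance coefficients:", gamma)
```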
2

A Model-Based Approach to Demodulation of Co-Channel MSK Signals

Ahmed, Yasir 03 January 2003 (has links)
Co-channel interference limits the capacity of cellular systems, reduces the throughput of wireless local area networks, and is the major hurdle in the deployment of high-altitude communication platforms. It is also a problem for systems operating in unlicensed bands such as the 2.4 GHz ISM band and for narrowband systems that have been overlaid with spread-spectrum systems. In this work we have developed model-based techniques for the demodulation of co-channel MSK signals. It is shown that MSK signals can be written in the linear model form; hence a minimum variance unbiased (MVU) estimator exists that satisfies the Cramér-Rao lower bound (CRLB) with equality. This framework allows us to derive the best estimators for the single-user and two-user cases. These concepts can also be extended to wideband signals, and it is shown that the MVU estimator for Direct Sequence Spread Spectrum signals is in fact a decorrelator-based multiuser detector. However, this simple linear representation does not always exist for continuous phase modulations. Furthermore, these linear estimators require perfect channel state information and phase synchronization at the receiver, which are not always available in wireless communication systems. To overcome these shortcomings of the linear estimation techniques, we employed an autoregressive modeling approach. It is well known that the AR model can accurately represent peaks in the spectrum and therefore can be used as a general FM demodulator. It does not require knowledge of the exact signal model or phase synchronization at the receiver. Since it is a non-coherent reception technique, its performance is compared to that of the limiter discriminator. Simulation results have shown that model-based demodulators can give significant gains for certain phase and frequency offsets between the desired signal and an interferer. / Master of Science
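As a rough illustration of the AR-based demodulation idea (a simplified stand-in, not the thesis's demodulator), the sketch below tracks instantaneous frequency by fitting an AR(2) model over sliding windows via the Yule-Walker equations and reading the frequency from the dominant pole angle. The sample rate, bit rate, and tone frequencies of the toy signal are assumptions.

```python
import numpy as np

def ar2_inst_freq(x, fs, win=64):
    """Track instantaneous frequency with a sliding-window AR(2) fit:
    solve the Yule-Walker equations, then convert the dominant pole
    angle (the spectral peak location) to Hz."""
    freqs = []
    for k in range(0, len(x) - win, win // 2):
        seg = x[k:k + win] - x[k:k + win].mean()
        acf = np.correlate(seg, seg, mode="full")[win - 1:win + 2]  # lags 0,1,2
        R = np.array([[acf[0], acf[1]], [acf[1], acf[0]]])
        a = np.linalg.solve(R, -acf[1:3])            # Yule-Walker for AR(2)
        poles = np.roots(np.concatenate(([1.0], a)))
        freqs.append(np.max(np.abs(np.angle(poles))) * fs / (2 * np.pi))
    return np.array(freqs)

# toy binary FM signal, loosely MSK-like (all parameters assumed)
fs, fb = 8000.0, 250.0                               # sample rate, bit rate
bits = np.repeat(np.array([0, 1, 1, 0, 1, 0]), int(fs / fb))
f_inst = 1000.0 + 250.0 * (2 * bits - 1)             # tones at 750 / 1250 Hz
x = np.cos(2 * np.pi * np.cumsum(f_inst) / fs)
x += 0.05 * np.random.default_rng(1).normal(size=len(x))
print(np.round(ar2_inst_freq(x, fs), 1))             # estimates near the tones
```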
3

On Applications of Semiparametric Methods

Li, Zhijian 01 October 2018 (has links)
No description available.
4

The method of re-weighting (calibration) in survey sampling

Michálková, Anna January 2019 (has links)
In this thesis, we study re-weighting when estimating totals in survey sampling. The purpose of re-weighting is to adjust the structure of the sample so that it complies with the structure of the population (with respect to given auxiliary variables). We summarize known results for methods of the traditional design-based approach, while more attention is given to the model-based approach. We generalize known asymptotic results in the model-based theory to a wider class of weighted estimators. Further, we propose a consistent estimator of the asymptotic variance which takes into consideration the weights used in the estimator of the total; this is in contrast to the usually recommended variance estimators derived from the design-based approach. Moreover, the estimator is robust against particular model misspecifications. In a simulation study, we investigate how the proposed estimator behaves in comparison with variance estimators usually recommended in the literature or used in practice.
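For readers unfamiliar with calibration, the sketch below shows the classical linear (chi-square distance) calibration estimator in numpy. It illustrates the general re-weighting idea, not the specific estimators studied in the thesis; the population totals in the example are assumed known.

```python
import numpy as np

def calibrate_weights(d, X, tx):
    """Linear (chi-square distance) calibration: adjust design weights d
    so the weighted sample totals of the auxiliary variables X match the
    known population totals tx, while staying as close to d as possible."""
    lam = np.linalg.solve(X.T @ (d[:, None] * X), tx - X.T @ d)
    return d * (1.0 + X @ lam)

# toy example: calibrate on a constant and one auxiliary variable
rng = np.random.default_rng(2)
n, N = 100, 10000
x = rng.lognormal(size=n)
X = np.column_stack([np.ones(n), x])
d = np.full(n, N / n)                    # equal design weights
tx = np.array([N, 1.7 * N])              # "known" population totals (assumed)
w = calibrate_weights(d, X, tx)
print(X.T @ w)                           # matches tx by construction
y = 2 * x + rng.normal(size=n)
print("calibrated estimate of the total of y:", w @ y)
```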
5

New control charts for monitoring univariate autocorrelated processes and high-dimensional profiles

Lee, Joongsup 18 August 2011 (has links)
In this thesis, we first investigate the use of automated variance estimators in distribution-free statistical process control (SPC) charts for univariate autocorrelated processes. We introduce two variance estimators, the standardized time series overlapping area estimator and the so-called quick-and-dirty autoregressive estimator, that can be obtained from a training data set and used effectively with distribution-free SPC charts when those charts are applied to processes exhibiting nonnormal responses or correlation between successive responses. In particular, we incorporate the two estimators into DFTC-VE, a new distribution-free tabular CUSUM chart developed for autocorrelated processes, and we compare its performance with that of other state-of-the-art distribution-free SPC charts. Using either of the two variance estimators, the DFTC-VE outperforms its competitors in terms of both in-control and out-of-control average run lengths when all competing procedures are tested on the same set of independently sampled realizations of selected autocorrelated processes with normal or nonnormal noise components. Next, we develop WDFTC, a wavelet-based distribution-free CUSUM chart for detecting shifts in the mean of a high-dimensional profile whose noisy components may exhibit nonnormality, variance heterogeneity, or correlation between profile components. A profile describes the relationship between a selected quality characteristic and an input (design) variable over the experimental region. Exploiting a discrete wavelet transform (DWT) of the mean in-control profile, WDFTC selects a reduced-dimension vector of the associated DWT components from which the mean in-control profile can be approximated with minimal weighted relative reconstruction error. Based on randomly sampled Phase I (in-control) profiles, the covariance matrix of the corresponding reduced-dimension DWT vectors is estimated using a matrix-regularization method; then the DWT vectors are aggregated (batched) so that the nonoverlapping batch means of the reduced-dimension DWT vectors have manageable covariances. To monitor shifts in the mean profile during Phase II operation, WDFTC computes a Hotelling's T-square-type statistic from successive nonoverlapping batch means and applies a CUSUM procedure to those statistics, where the associated control limits are evaluated analytically from the Phase I data. We compare WDFTC with other state-of-the-art profile-monitoring charts using both normal and nonnormal noise components having homogeneous or heterogeneous variances as well as independent or correlated components, and we show that WDFTC performs well, especially for local shifts of small to medium size, in terms of both in-control and out-of-control average run lengths.
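The sketch below illustrates the generic tabular CUSUM mechanism with a plug-in variance estimate obtained from training data. It is a simplified stand-in, not the DFTC-VE chart or its variance estimators; the batch size and the chart parameters k and h are arbitrary choices for the toy example.

```python
import numpy as np

def tabular_cusum(x, mu0, sigma, k=0.5, h=5.0):
    """Two-sided tabular CUSUM on standardized observations; returns the
    index of the first out-of-control signal, or None if no signal."""
    s_hi = s_lo = 0.0
    for i, xi in enumerate(x):
        z = (xi - mu0) / sigma
        s_hi = max(0.0, s_hi + z - k)
        s_lo = max(0.0, s_lo - z - k)
        if s_hi > h or s_lo > h:
            return i
    return None

# training data: an AR(1) process standing in for an autocorrelated line
rng = np.random.default_rng(3)
phi, n, b = 0.5, 2000, 20
train = np.empty(n)
train[0] = rng.normal()
for t in range(1, n):
    train[t] = phi * train[t - 1] + rng.normal()
# plug-in variance from nonoverlapping batch means of the training data
bm = train.reshape(-1, b).mean(axis=1)
mu0, sigma_bm = bm.mean(), bm.std(ddof=1)
# Phase II: monitor batch means of a mean-shifted stretch of the process
shifted = train[:400] + 0.75
print("signal at batch index:",
      tabular_cusum(shifted.reshape(-1, b).mean(axis=1), mu0, sigma_bm))
```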
6

Speed of adjustment, volatility and noise in the Indonesia Stock Exchange

Husodo, Zaäfri Ananto, Banking & Finance, Australian School of Business, UNSW January 2008 (has links)
This research contains three essays that explore the speed of adjustment, volatility and noise in the Indonesia Stock Exchange. The first essay explores the speed of adjustment in the Indonesia Stock Exchange at the daily interval from 2000 to 2004. The model employed is the speed of adjustment with noise. First, I estimate the speed of adjustment; the estimated coefficients indicate that the large-size group leads the smaller-size groups in adjusting to new information. Second, I analyse which component of the noise contributes significantly to the level of the speed of adjustment. Bid-ask fluctuations are confirmed as the factor determining the noise, so it is reasonable to infer the noise components from bid-ask components. The decomposition of the bid-ask spread into transaction cost and asymmetric information reveals that the latter is a significant component determining the level of the speed of adjustment. The second essay analyses the fine-grained dynamics of the speed of price adjustment to new information from 2000 to 2007. The exact time of adjustment is estimated at intraday frequency instead of daily frequency. As an alternative to first-moment estimation, I propose a model-free second-moment estimator of the speed of adjustment based on the volatility signature plot. Both the first- and second-moment estimates of the speed of adjustment consistently indicate an adjustment period of about 30 minutes. A negative relation between the speed-of-adjustment estimate and realized variance is found beyond the 5-minute return interval, implying that lower noise leads to a smaller deviation between observed and equilibrium prices. In the third essay, I concentrate on the second moment of continuously compounded returns from 2000 to 2007 in the Indonesia Stock Exchange. The main purpose of the last essay is to estimate the noise and efficient variance in the Indonesia Stock Exchange, using a realized-variance-based estimator. During the period of the study, noise variance decreases, indicating a smaller deviation between the observed and equilibrium prices and hence improving market quality in the Indonesia Stock Exchange. The optimal frequency to estimate the efficient variance is, on average, nine minutes. The variance ratio of daily efficient variance to daily open-to-close variance reveals significant private information underlying the price process in the Indonesia Stock Exchange.
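A volatility signature plot of the kind used in the second essay can be sketched in a few lines: compute realized variance at successively coarser sampling intervals and watch it level off as microstructure noise stops dominating. This toy simulation, with assumed noise and tick parameters, illustrates the idea rather than the thesis's estimators.

```python
import numpy as np

def realized_variance(prices, step):
    """Realized variance of log returns sampled every `step` observations."""
    logp = np.log(prices[::step])
    return np.sum(np.diff(logp) ** 2)

# simulate an efficient log price plus iid microstructure noise
rng = np.random.default_rng(4)
n = 23400                                # one trading day of 1-second ticks (assumed)
efficient = np.cumsum(rng.normal(0, 0.0001, n))
observed = np.exp(efficient + rng.normal(0, 0.0005, n))  # noise inflates RV
for step in (1, 5, 30, 60, 300, 600):
    print(f"{step:4d}-tick RV: {realized_variance(observed, step):.6f}")
# RV falls toward the efficient variance as the sampling interval grows;
# the signature plot levels off near the interval where noise stops dominating
```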
7

Generalizing Results from Randomized Trials to Target Population via Weighting Methods Using Propensity Score

Chen, Ziyue January 2017 (has links)
No description available.
8

A Dual Metamodeling Perspective for Design and Analysis of Stochastic Simulation Experiments

Wang, Wenjing 17 July 2019 (has links)
Fueled by a growing number of applications in science and engineering, the development of stochastic simulation metamodeling methodologies has gained momentum in recent years. A majority of the existing methods, such as stochastic kriging (SK), only focus on efficiently metamodeling the mean response surface implied by a stochastic simulation experiment. As the simulation outputs are stochastic, with the simulation variance varying significantly across the design space, suitable methods for variance modeling are required. This thesis takes a dual metamodeling perspective and aims at exploiting the benefits of fitting the mean and variance functions simultaneously for achieving an improved predictive performance. We first explore the effects of replacing the sample variances with various smoothed variance estimates on the performance of SK and propose a dual metamodeling approach to obtain an efficient simulation budget allocation rule. Second, we articulate the links between SK and least-square support vector regression and propose to use a "dense and shallow" initial design to facilitate selection of important design points and efficient allocation of the computational budget. Third, we propose a variational Bayesian inference-based Gaussian process (VBGP) metamodeling approach to accommodate the situation where either one or multiple simulation replications are available at every design point. VBGP can fit the mean and variance response surfaces simultaneously, while taking into full account the uncertainty in the heteroscedastic variance. Lastly, we generalize VBGP for handling large-scale heteroscedastic datasets based on the idea of "transductive combination of GP experts." / Doctor of Philosophy / In solving real-world complex engineering problems, it is often helpful to learn the relationship between the decision variables and the response variables to better understand the real system of interest. Directly conducting experiments on the real system can be impossible or impractical, due to the high cost or time involved. Instead, simulation models are often used as surrogates to model complex stochastic systems for simulation-based design and analysis. However, even simulation models can be very expensive to run. To alleviate the computational burden, a metamodel is often built from the outputs of simulation runs at selected design points to map the performance response surface as a function of the controllable decision variables, or uncontrollable environmental variables, approximating the behavior of the original simulation model. There has been a plethora of work in the simulation research community dedicated to stochastic simulation metamodeling methodologies suitable for analyzing stochastic simulation experiments in science and engineering. A majority of the existing methods, such as stochastic kriging (SK), are known as effective metamodeling tools for approximating the mean response surface implied by a stochastic simulation. Although SK has been extensively used as an effective metamodeling methodology for stochastic simulations, SK and similar metamodeling techniques still face four methodological barriers: 1) lack of study of variance estimation methods; 2) absence of an efficient experimental design for simultaneous mean and variance metamodeling; 3) lack of flexibility to accommodate situations where simulation replications are not available; and 4) lack of scalability.
To overcome these barriers, the thesis develops the dual metamodeling methods summarized in the first paragraph above: smoothed variance estimates and an efficient budget allocation rule for SK, a "dense and shallow" initial design linking SK with least-square support vector regression, the VBGP approach for settings with one or many replications per design point, and its large-scale generalization via a transductive combination of GP experts.
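As a concrete, simplified illustration of SK-style dual use of mean and variance information (not the thesis's methods), the sketch below smooths the sample variances from replicated simulation runs and plugs them into a Gaussian process fit of the mean surface via scikit-learn's per-point noise term. The moving-average smoother, kernel, and replication count are arbitrary choices.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(5)

def simulate(x, reps):
    """Stand-in stochastic simulation: heteroscedastic noise around sin(3x)."""
    return rng.normal(np.sin(3 * x), 0.1 + 0.5 * x, size=(reps, len(x)))

X = np.linspace(0, 1, 15)                # design points
reps = 20
Y = simulate(X, reps)                    # 20 replications per design point
ybar = Y.mean(axis=0)
s2 = Y.var(axis=0, ddof=1)
# smooth the raw sample variances before plugging them in (a simple moving
# average here; the thesis studies more refined smoothers)
s2_smooth = np.convolve(s2, np.ones(3) / 3, mode="same")
# SK-style mean fit: GP regression with point-specific noise variances,
# each divided by the replication count
gp = GaussianProcessRegressor(
    kernel=ConstantKernel(1.0) * RBF(length_scale=0.2),
    alpha=s2_smooth / reps,
)
gp.fit(X.reshape(-1, 1), ybar)
xg = np.linspace(0, 1, 5).reshape(-1, 1)
mean, sd = gp.predict(xg, return_std=True)
print(np.c_[xg.ravel(), mean, sd])
```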
9

On estimating variances for Gini coefficients with complex surveys: theory and application

Hoque, Ahmed 29 September 2016 (has links)
Obtaining variances for the plug-in estimator of the Gini coefficient for inequality has preoccupied researchers for decades, with the proposed analytic formulae often being regarded as too cumbersome to apply, as well as usually being based on the assumption of an iid structure. We examine several variance estimation techniques for a Gini coefficient estimator obtained from a complex survey, a sampling design often used to obtain sample data in inequality studies. In the first part of the dissertation, we prove that Bhattacharya's (2007) asymptotic variance estimator when data arise from a complex survey is equivalent to an asymptotic variance estimator derived by Binder and Kovačević (1995) more than a decade earlier. In addition, to aid applied researchers, we also show how auxiliary regressions can be used to generate the plug-in Gini estimator and its asymptotic variance, irrespective of the sampling design. In the second part of the dissertation, using Monte Carlo (MC) simulations with 36 data generating processes under the beta, lognormal, chi-square, and Pareto distributional assumptions, with sample data obtained under various complex survey designs, we explore two finite-sample properties of the Gini coefficient estimator: bias of the estimator and empirical coverage probabilities of interval estimators for the Gini coefficient. We find high sensitivity to the number of strata and the underlying distribution of the population data. We compare the performance of two standard normal (SN) approximation interval estimators using the asymptotic variance estimators of Binder and Kovačević (1995) and Bhattacharya (2007), another SN approximation interval estimator using a traditional bootstrap variance estimator, and a standard MC bootstrap percentile interval estimator under a complex survey design. With few exceptions, namely with small samples and/or highly skewed distributions of the underlying population data where the bootstrap methods work relatively better, the SN approximation interval estimators using asymptotic variances perform quite well. Finally, health data on the body mass index and hemoglobin levels for Bangladeshi women and children, respectively, are used as illustrations. Inequality analysis of these two important indicators provides a better understanding of the health status of women and children. Our empirical results show that statistical inferences regarding inequality in these well-being variables, measured by the Gini coefficients, based on Binder and Kovačević's and Bhattacharya's asymptotic variance estimators, give equivalent outcomes. Although the bootstrap approach often generates slightly smaller variance estimates in small samples, the hypothesis test results or widths of interval estimates using this method are practically similar to those using the asymptotic variance estimators. Our results are useful, both theoretically and practically, as the asymptotic variance estimators are simpler and require less time to calculate than those generated by the bootstrap methods often previously advocated by researchers. These findings suggest that applied researchers can often be comfortable in undertaking inferences about the inequality of a well-being variable using the Gini coefficient employing asymptotic variance estimators that are not difficult to calculate, irrespective of whether the sample data are obtained under a complex survey or a simple random sample design. / Graduate
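A minimal sketch of the plug-in Gini estimator with a naive bootstrap variance follows. It assumes iid resampling, whereas the complex survey designs studied in the dissertation would require resampling whole clusters within strata; the weighted covariance form of the Gini is standard, and the data are simulated.

```python
import numpy as np

def gini(y, w=None):
    """Plug-in Gini coefficient, optionally with survey weights."""
    y = np.asarray(y, dtype=float)
    w = np.ones_like(y) if w is None else np.asarray(w, dtype=float)
    order = np.argsort(y)
    y, w = y[order], w[order]
    cw = np.cumsum(w)
    F = (cw - 0.5 * w) / cw[-1]          # weighted midpoint CDF positions
    mu = np.average(y, weights=w)
    # covariance form: G = 2 * cov_w(y, F) / mean_w(y)
    return 2 * np.average(y * (F - np.average(F, weights=w)), weights=w) / mu

rng = np.random.default_rng(6)
y = rng.lognormal(mean=0.0, sigma=0.8, size=500)
g = gini(y)
# naive iid bootstrap variance; a complex design would instead resample
# whole PSUs within strata
boot = np.array([gini(rng.choice(y, size=len(y), replace=True))
                 for _ in range(500)])
print(f"Gini = {g:.4f}, bootstrap SE = {boot.std(ddof=1):.4f}")
```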
10

Automatic speaker verification on site and by telephone: methods, applications and assessment

Melin, Håkan January 2006 (has links)
Speaker verification is the biometric task of authenticating a claimed identity by means of analyzing a spoken sample of the claimant's voice. The present thesis deals with various topics related to automatic speaker verification (ASV) in the context of its commercial applications, characterized by co-operative users, user-friendly interfaces, and requirements for small amounts of enrollment and test data. A text-dependent system based on hidden Markov models (HMM) was developed and used to conduct experiments, including a comparison between visual and aural strategies for prompting claimants for randomized digit strings. It was found that aural prompts lead to more errors in spoken responses and that visually prompted utterances performed marginally better in ASV, given that enrollment data were visually prompted. High-resolution flooring techniques were proposed for variance estimation in the HMMs, but results showed no improvement over the standard method of using target-independent variances copied from a background model. These experiments were performed on Gandalf, a Swedish speaker verification telephone corpus with 86 client speakers. A complete on-site application (PER), a physical access control system securing a gate in a reverberant stairway, was implemented based on a combination of the HMM system and a Gaussian mixture model (GMM) based system. Users were authenticated by saying their proper name and a visually prompted, random sequence of digits after having enrolled by speaking ten utterances of the same type. An evaluation was conducted with 54 out of 56 clients who succeeded in enrolling. Semi-dedicated impostor attempts were also collected. An equal error rate (EER) of 2.4% was found for this system based on a single attempt per session and after retraining the system on PER-specific development data. On parallel telephone data collected using a telephone version of PER, 3.5% EER was found with landline and around 5% with mobile telephones. Impostor attempts in this case were same-handset attempts. Results also indicate that the distributions of false-reject and false-accept rates over target speakers are well described by beta distributions. A state-of-the-art commercial system was also tested on PER data, with performance similar to that of the baseline research system.
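To illustrate the GMM side of such a verification system, here is a sketch that scores trials by an average per-frame log-likelihood ratio against a background model, using scikit-learn. It is a simplification: practical systems MAP-adapt the target model from the background model rather than training it independently, and the synthetic Gaussian "features" stand in for cepstral frames.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(7)

# synthetic stand-ins for 12-dimensional cepstral feature frames (rows)
background = rng.normal(0.0, 1, size=(2000, 12))   # pooled background speech
client = rng.normal(0.3, 1, size=(300, 12))        # enrollment speech
trial_true = rng.normal(0.3, 1, size=(100, 12))    # genuine claimant trial
trial_imp = rng.normal(-0.2, 1, size=(100, 12))    # impostor trial

# background model and target model (independently trained here; real
# systems would MAP-adapt the target from the background model)
ubm = GaussianMixture(n_components=8, covariance_type="diag",
                      random_state=0).fit(background)
target = GaussianMixture(n_components=8, covariance_type="diag",
                         random_state=0).fit(client)

def llr(frames):
    """Average per-frame log-likelihood ratio: target vs. background."""
    return target.score(frames) - ubm.score(frames)

print("genuine claimant LLR:", llr(trial_true))    # should exceed impostor LLR
print("impostor LLR:        ", llr(trial_imp))
# accept when LLR exceeds a threshold; sweeping the threshold over many
# trials traces out the false-accept/false-reject trade-off and the EER
```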
