41

Robust Distributed Compression of Symmetrically Correlated Gaussian Sources

Zhang, Xuan January 2018
Consider a lossy compression system with l distributed encoders and a centralized decoder. Each encoder compresses its observed source and forwards the compressed data to the decoder for joint reconstruction of the target signals under a mean squared error distortion constraint. It is assumed that the observed sources can be expressed as the sum of the target signals and corruptive noises, which are generated independently from two (possibly different) symmetric multivariate Gaussian distributions. Depending on the parameters of these Gaussian distributions, the rate-distortion limit of this lossy compression system is characterized either completely or for a subset of distortions (including, but not necessarily limited to, those sufficiently close to the minimum distortion achievable when the observed sources are directly available at the decoder). The results are further extended to the robust distributed compression setting, where the outputs of a subset of encoders may also be used to produce a non-trivial reconstruction of the corresponding target signals. In particular, in the high-resolution regime we obtain a precise characterization of the minimum achievable reconstruction distortion based on the outputs of k + 1 or more encoders, when every k out of the l encoders are operated collectively in the same mode that is greedy in the sense of minimizing the distortion incurred by the reconstruction of the corresponding k target signals with respect to the average rate of these k encoders. / Thesis / Master of Applied Science (MASc)
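As an illustrative aside (not taken from the thesis), the sketch below computes, for a single encoder, the distortion floor reached when the noisy observation Y = X + N is available losslessly at the decoder, together with the classic Gaussian rate-distortion function, which lower-bounds the rate needed in the noisy-observation problem; the variance values are invented for the example.

```python
# Hypothetical single-encoder illustration of the quantities the abstract refers to.
# X ~ N(0, var_x) is the target signal, N ~ N(0, var_n) is independent noise,
# and the encoder observes Y = X + N.
import math

def min_remote_distortion(var_x: float, var_n: float) -> float:
    """MMSE of estimating X from Y = X + N, i.e. the distortion floor reached
    when the observation is available losslessly at the decoder."""
    return var_x * var_n / (var_x + var_n)

def gaussian_rate_distortion(var: float, d: float) -> float:
    """Classic R(D) = max(0, 0.5*log2(var/D)) bits/sample for a memoryless
    Gaussian source under squared error."""
    return max(0.0, 0.5 * math.log2(var / d))

var_x, var_n = 1.0, 0.25
d_min = min_remote_distortion(var_x, var_n)          # 0.2
print(f"distortion floor d_min = {d_min:.3f}")
# Direct-source rates at distortions slightly above the floor; these lower-bound
# the rates needed when only the noisy observation is available.
for slack in (1.1, 1.5, 2.0):
    d = slack * d_min
    print(f"D = {d:.3f}: R(D) for the clean source = {gaussian_rate_distortion(var_x, d):.3f} bits")
```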
42

Feasible Generalized Least Squares: theory and applications

González Coya Sandoval, Emilio 04 June 2024
We study Feasible Generalized Least-Squares (FGLS) estimation of the parameters of a linear regression model in which the errors are allowed to exhibit heteroskedasticity of unknown form and to be serially correlated. The main contribution is twofold: first, we aim to demystify the reasons often advanced for using OLS instead of FGLS by showing that the latter estimator is robust, as well as more efficient and precise. Second, we devise consistent FGLS procedures, robust to misspecification, which achieve a lower mean squared error (MSE), often close to that of the correctly specified infeasible GLS. In the first chapter we restrict our attention to the case with independent heteroskedastic errors. We suggest a Lasso-based procedure to estimate the skedastic function of the residuals. This estimate is then used to construct an FGLS estimator. Using extensive Monte Carlo simulations, we show that this Lasso-based FGLS procedure has better finite-sample properties than OLS and other linear regression-based FGLS estimates. Moreover, the FGLS-Lasso estimate is robust to misspecification of both the functional form and the variables characterizing the skedastic function. The second chapter generalizes our investigation to the case with serially correlated errors. There are three main contributions: first, we show that GLS is consistent requiring only pre-determined regressors, whereas OLS requires exogenous regressors to be consistent. The second contribution is to show that GLS is much more robust than OLS; even a misspecified GLS correction can achieve a lower MSE than OLS. The third contribution is to devise an FGLS procedure that is valid whether or not the regressors are exogenous and achieves an MSE close to that of the correctly specified infeasible GLS. Extensive Monte Carlo experiments are conducted to assess the performance of our FGLS procedure against OLS in finite samples. FGLS achieves important reductions in MSE and variance relative to OLS. In the third chapter we consider an empirical application; we re-examine the Uncovered Interest Parity (UIP) hypothesis, which states that the expected rate of return to speculation in the forward foreign exchange market is zero. We extend the FGLS procedure to a setting in which lagged dependent variables are included as regressors. We thus provide a consistent and efficient framework to estimate the parameters of a general k-step-ahead linear forecasting equation. Finally, we apply our FGLS procedures to the analysis of the two main specifications used to test the UIP.
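As a rough illustration of the first chapter's idea (the exact skedastic specification, basis functions, and tuning used in the thesis may differ; everything below is illustrative), the following sketch runs one Lasso-based FGLS pass on simulated heteroskedastic data.

```python
# Minimal Lasso-based FGLS sketch under heteroskedasticity of unknown form.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, k = 500, 3
X = rng.normal(size=(n, k))
beta = np.array([1.0, -2.0, 0.5])
sigma = np.exp(0.8 * X[:, 0])                 # heteroskedasticity driven by the first regressor
y = X @ beta + sigma * rng.normal(size=n)

Xc = np.column_stack([np.ones(n), X])

# Step 1: OLS residuals.
b_ols, *_ = np.linalg.lstsq(Xc, y, rcond=None)
resid = y - Xc @ b_ols

# Step 2: estimate the skedastic function with a Lasso regression of log squared
# residuals on candidate variables (here simply the regressors and their squares).
Z = np.column_stack([X, X**2])
skedastic = Lasso(alpha=0.01).fit(Z, np.log(resid**2 + 1e-12))
sigma2_hat = np.exp(skedastic.predict(Z))

# Step 3: weighted least squares with the estimated weights (the FGLS step).
w = 1.0 / np.sqrt(sigma2_hat)
b_fgls, *_ = np.linalg.lstsq(Xc * w[:, None], y * w, rcond=None)
print("OLS: ", np.round(b_ols, 3))
print("FGLS:", np.round(b_fgls, 3))
```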
43

Using Kullback-Leibler Divergence to Analyze the Performance of Collaborative Positioning

Nounagnon, Jeannette Donan 12 July 2016
Geolocation accuracy is a crucial, life-or-death factor for rescue teams. Natural and man-made disasters are just a few convincing reasons why fast and accurate position location is necessary. One way to unleash the potential of positioning systems is through collaborative positioning, which consists of simultaneously solving for the positions of two nodes that need to locate themselves. Although the literature has addressed the benefits of collaborative positioning in terms of accuracy, a theoretical foundation for the performance of collaborative positioning has been largely lacking. This dissertation uses information theory to perform a theoretical analysis of the value of collaborative positioning. The main research problem addressed states: 'Is collaboration always beneficial? If not, can we determine theoretically when it is and when it is not?' We show that the immediate advantage of collaborative estimation is the acquisition of another set of information between the collaborating nodes. This acquisition of new information reduces the uncertainty on the localization of both nodes. Under certain conditions, this reduction in uncertainty occurs for both nodes by the same amount; hence collaboration is beneficial in terms of uncertainty. However, reduced uncertainty does not necessarily imply improved accuracy. So, we define a novel theoretical model to analyze the improvement in accuracy due to collaboration. Using this model, we introduce a variational analysis of collaborative positioning to determine factors that affect the improvement in accuracy due to collaboration. We derive range conditions under which collaborative positioning starts to degrade the performance of standalone positioning. We derive and test criteria to determine on-the-fly (ahead of time) whether or not it is worth collaborating in order to improve accuracy. The potential applications of this research include, but are not limited to: intelligent positioning systems, collaborating manned and unmanned vehicles, and improvement of GPS applications. / Ph. D.
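As an illustrative aside (the dissertation's positioning model is more detailed), the sketch below shows how a Kullback-Leibler divergence between Gaussian position distributions can quantify the uncertainty reduction a node gains from collaboration; the means and covariances are invented for the example.

```python
# KL divergence as an information-gain measure for a position estimate.
import numpy as np

def kl_gaussian(mu0, S0, mu1, S1):
    """KL(N(mu0, S0) || N(mu1, S1)) for multivariate Gaussians."""
    k = mu0.shape[0]
    S1_inv = np.linalg.inv(S1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(S1_inv @ S0) + diff @ S1_inv @ diff - k
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

mu_prior = np.array([0.0, 0.0])
S_prior = np.diag([25.0, 25.0])      # standalone position uncertainty (m^2)
mu_post = np.array([0.4, -0.2])
S_post = np.diag([9.0, 16.0])        # uncertainty after fusing a collaborative range

gain_nats = kl_gaussian(mu_post, S_post, mu_prior, S_prior)
print(f"information gained from collaboration: {gain_nats:.2f} nats")
```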
44

Performance evaluation of ZF and MMSE equalizers for wavelets V-Blast

Asif, Rameez, Bin-Melha, Mohammed S., Hussaini, Abubakar S., Abd-Alhameed, Raed, Jones, Steven M.R., Noras, James M., Rodriguez, Jonathan January 2013
In this work we present equalization algorithms to be used in future orthogonally multiplexed, wavelet-based multi-signaling communication systems. The performance of the ZF and MMSE algorithms has been analyzed using SISO and MIMO communication models. The transmitted signals were passed through a Rayleigh multipath fading channel with AWGN. The results show that both algorithms perform the same in the SISO channel, but in the MIMO environment MMSE performs better.
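For reference, a minimal sketch of the two linear detectors compared above, applied to a generic flat-fading MIMO channel y = Hx + n (the wavelet-based multiplexing itself is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)
nt, nr, snr_db = 4, 4, 10
sigma2 = 10 ** (-snr_db / 10)

# Rayleigh MIMO channel, unit-energy QPSK symbols, AWGN.
H = (rng.normal(size=(nr, nt)) + 1j * rng.normal(size=(nr, nt))) / np.sqrt(2)
x = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=nt) / np.sqrt(2)
n = np.sqrt(sigma2 / 2) * (rng.normal(size=nr) + 1j * rng.normal(size=nr))
y = H @ x + n

# Zero-forcing: invert the channel, which can amplify noise in ill-conditioned channels.
x_zf = np.linalg.pinv(H) @ y

# MMSE: regularize the inversion by the noise variance, trading residual
# interference against noise enhancement.
x_mmse = np.linalg.solve(H.conj().T @ H + sigma2 * np.eye(nt), H.conj().T @ y)

print("transmitted:  ", np.round(x, 2))
print("ZF estimate:  ", np.round(x_zf, 2))
print("MMSE estimate:", np.round(x_mmse, 2))
```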
45

Nonparametric Inference for Bioassay

Lin, Lizhen January 2012
This thesis proposes new model-independent, or nonparametric, methods for estimating the dose-response curve and the effective dosage curve in the context of bioassay. The research problem is also of importance in environmental risk assessment and other areas of the health sciences. It is shown in the thesis that our new nonparametric methods, while bearing optimal asymptotic properties, also exhibit strong finite-sample performance. Although our specific emphasis is on bioassay and environmental risk assessment, the methodology developed in this dissertation applies broadly to general order-restricted inference.
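As an illustrative baseline only (not the estimators developed in the thesis), the sketch below fits a monotone dose-response curve by isotonic regression, assuming the response probability is nondecreasing in dose; the dose-response data are invented.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

dose = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
n_per_dose = np.array([20, 20, 20, 20, 20, 20])
responders = np.array([1, 3, 4, 9, 14, 18])
p_raw = responders / n_per_dose

# Isotonic regression pools adjacent violators so the fitted curve is monotone in dose.
iso = IsotonicRegression(increasing=True, y_min=0.0, y_max=1.0)
p_fit = iso.fit_transform(dose, p_raw)

# A crude effective-dose estimate: smallest dose whose fitted response reaches 50%.
ed50 = dose[np.argmax(p_fit >= 0.5)]
print("monotone dose-response fit:", np.round(p_fit, 2))
print("approximate ED50:", ed50)
```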
46

Carrier Frequency Offset Estimation for Orthogonal Frequency Division Multiplexing

Challakere, Nagaravind 01 May 2012
This thesis presents a novel method to solve the problem of estimating the carrier frequency offset in an Orthogonal Frequency Division Multiplexing (OFDM) system. The approach is based on the minimization of the probability of symbol error; hence, it is called the Minimum Symbol Error Rate (MSER) approach. An existing approach based on Maximum Likelihood (ML) is chosen to benchmark the performance of the MSER-based algorithm. The MSER approach is computationally intensive, so the thesis evaluates the approximations that can be made to the MSER-based objective function to make the computation tractable. A modified gradient function based on the MSER objective is developed which provides better performance characteristics than the ML-based estimator. The estimates produced by the MSER approach exhibit lower mean squared error than the ML benchmark. The performance of the MSER-based estimator is simulated with Quaternary Phase Shift Keying (QPSK) symbols, but the algorithm presented is applicable to all complex symbol constellations.
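For context, a hedged sketch of the estimation problem itself, using the classical cyclic-prefix correlation estimate of a fractional carrier frequency offset (a conventional baseline, not the MSER estimator developed in the thesis):

```python
import numpy as np

rng = np.random.default_rng(2)
N, Ncp = 64, 16
eps_true = 0.13                                    # CFO as a fraction of the subcarrier spacing

# One OFDM symbol: random QPSK on all subcarriers, IFFT, prepend cyclic prefix.
X = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=N) / np.sqrt(2)
x = np.fft.ifft(X) * np.sqrt(N)
tx = np.concatenate([x[-Ncp:], x])

# Channel: CFO rotation plus AWGN.
n_idx = np.arange(N + Ncp)
rx = tx * np.exp(1j * 2 * np.pi * eps_true * n_idx / N)
rx += 0.05 * (rng.normal(size=rx.shape) + 1j * rng.normal(size=rx.shape))

# The cyclic prefix and the symbol tail are copies separated by N samples, so their
# correlation carries the CFO in its phase.
corr = np.sum(rx[:Ncp] * np.conj(rx[N:N + Ncp]))
eps_hat = -np.angle(corr) / (2 * np.pi)
print(f"true CFO {eps_true:.3f}, estimated {eps_hat:.3f}")
```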
47

A Comparative Simulation of Type I Error and Power of Four Tests of Homogeneity of Effects For Random- and Fixed-Effects Models of Meta-Analysis

Aaron, Lisa Therese 01 December 2003
In a Monte Carlo analysis of meta-analytic data, Type I and Type II error rates were compared for four homogeneity tests. The study controlled for violations of normality and homogeneity of variance. This study was modeled after Harwell's (1997) and Kromrey and Hogarty's (1998) experimental designs. Specifically, it entailed a 2x3x3x3x3x3x2 factorial design. The study also controlled for between-studies variance, as suggested by Hedges and Vevea's (1998) study. As with similar studies, this randomized factorial design comprised 5000 iterations for each of the following 7 independent variables: (1) number of studies within the meta-analysis (10 and 30); (2) primary study sample size (10, 40, 200); (3) score distribution skewness and kurtosis (0/0; 1/3; 2/6); (4) equal or random (around typical sample sizes, 1:1; 4:6; and 6:4) within-group sample sizes; (5) equal or unequal group variances (1:1; 2:1; and 4:1); (6) between-studies variance, tau-squared (0, .33, and 1); and (7) between-class effect size differences, delta (0 and .8). The study incorporated 1,458 experimental conditions. Simulated data from each sample were analyzed using each of four significance test statistics: a) the fixed-effects Q test of homogeneity; b) the random-effects modification of the Q test; c) the conditionally-random procedure; and d) the permuted Q-between test. The results of this dissertation will inform researchers regarding the relative effectiveness of these statistical approaches, based on Type I and Type II error rates. This dissertation extends previous investigations of the Q test of homogeneity. Specifically, permuted Q provided the greatest frequency of effectiveness across extreme conditions of increasing heterogeneity of effects, unequal group variances, and nonnormality. Small numbers of studies and increasing heterogeneity of effects presented the greatest challenges to power for all of the tests under investigation.
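For reference, a minimal sketch of the fixed-effects Q test that the simulation evaluates (the random-effects, conditionally-random, and permutation variants are not shown); the per-study effect sizes and variances below are invented:

```python
import numpy as np
from scipy import stats

# Hypothetical per-study effect sizes and their sampling variances.
effects = np.array([0.42, 0.10, 0.55, 0.31, -0.05, 0.48])
variances = np.array([0.04, 0.09, 0.05, 0.06, 0.11, 0.03])

w = 1.0 / variances                          # inverse-variance weights
theta_bar = np.sum(w * effects) / np.sum(w)  # fixed-effects pooled estimate
Q = np.sum(w * (effects - theta_bar) ** 2)   # homogeneity statistic
df = len(effects) - 1
p_value = stats.chi2.sf(Q, df)               # Q ~ chi-square(k-1) under homogeneity

print(f"Q = {Q:.2f}, df = {df}, p = {p_value:.3f}")
```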
48

近單根模型之最小平方估計量的預測誤差 / Mean-squared prediction errors of the least squares predictors in near-integrated models

張凱君, Chang, Kai-Jiun Unknown Date
The asymptotic expression for the mean-squared prediction error is discussed for near-unit-root models. We find that the mean-squared prediction error based on the ordinary least squares estimator is smaller than that based on pretest estimation under certain conditions.
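As an illustrative aside (a small Monte Carlo, not the paper's asymptotic expansion), the sketch below compares the one-step-ahead prediction error of the OLS-based predictor with a naive unit-root predictor in a near-integrated AR(1) with rho = 1 - c/T; the constants are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)
T, c, reps = 100, 5.0, 5000
rho = 1.0 - c / T                                   # local-to-unity autoregressive root

se_ols, se_rw = 0.0, 0.0
for _ in range(reps):
    e = rng.normal(size=T + 1)
    y = np.zeros(T + 1)
    for t in range(1, T + 1):
        y[t] = rho * y[t - 1] + e[t]
    # OLS estimate of rho from the first T observations, then predict y[T].
    rho_hat = np.sum(y[1:T] * y[:T - 1]) / np.sum(y[:T - 1] ** 2)
    se_ols += (y[T] - rho_hat * y[T - 1]) ** 2
    # Naive predictor that imposes a unit root (rho = 1).
    se_rw += (y[T] - y[T - 1]) ** 2

print(f"empirical MSPE  OLS: {se_ols / reps:.4f}   unit-root: {se_rw / reps:.4f}")
```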
49

Testing the Hazard Rate, Part I

Liero, Hannelore January 2003
We consider a nonparametric survival model with random censoring. To test whether the hazard rate has a parametric form, the unknown hazard rate is estimated by a kernel estimator. Based on a limit theorem stating the asymptotic normality of the quadratic distance of this estimator from the smoothed hypothesis, an asymptotic α-test is proposed. Since the test statistic depends on the maximum likelihood estimator for the unknown parameter in the hypothetical model, properties of this parameter estimator are investigated. Power considerations complete the approach.
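For context, a hedged sketch of a kernel-smoothed hazard estimate under random censoring, built from Nelson-Aalen increments (the test statistic and its limit distribution are not reproduced; the data and bandwidth are invented):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 300
lifetimes = rng.exponential(scale=2.0, size=n)        # true hazard is constant at 0.5
censor = rng.exponential(scale=4.0, size=n)
time_obs = np.minimum(lifetimes, censor)
event = (lifetimes <= censor).astype(float)

order = np.argsort(time_obs)
t_sorted, d_sorted = time_obs[order], event[order]
at_risk = np.arange(n, 0, -1)                          # number still at risk at each ordered time
na_increments = d_sorted / at_risk                     # Nelson-Aalen jumps dLambda_hat

def hazard_kernel(t, h=0.4):
    """Smooth the cumulative-hazard increments with a Gaussian kernel of bandwidth h."""
    return np.sum(np.exp(-0.5 * ((t - t_sorted) / h) ** 2) / (h * np.sqrt(2 * np.pi))
                  * na_increments)

grid = np.linspace(0.5, 3.0, 6)
# Away from the boundary the estimate should be roughly 0.5, the true constant hazard.
print(np.round([hazard_kernel(t) for t in grid], 3))
```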
50

Online transaction simulation system of the Taiwan Stock Exchange

Liu, Hui-Wen 23 July 2008
The Taiwan securities market is a typical order-driven market, and transactions have been matched through an electronic trading system since 1988. In this work, we study the joint distributions of the tick-size changes of the bid and ask prices, the bid volume, and the ask volume for each matching order on the Taiwan Stock Exchange (TSEC). The exponentially weighted moving average (EWMA) method is adopted to update the joint distribution of the aforementioned incoming-order variables. Here we propose five methods to determine the update timing and consider three different initial matrices for the joint distributions. In the empirical study, the daily matching data of two enterprises, Uni-President Enterprises Corporation and Formosa Plastics Corporation, in April 2005 are considered. The goodness of fit of the joint distributions is assessed by the chi-square goodness-of-fit test. The results show that the EWMA method provides a good fit for most of the daily transaction data.
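As an illustrative aside (the binning scheme, the five update-timing rules, and the initial matrices used in the thesis are not reproduced), the sketch below shows the basic EWMA update of a discretized joint-distribution table; the decay factor, bin counts, and observation stream are assumed values.

```python
import numpy as np

n_bins = 5                                        # e.g. tick-change / volume categories per variable
P = np.full((n_bins, n_bins), 1.0 / n_bins**2)    # initial joint distribution (uniform)
lam = 0.94                                        # EWMA decay factor (assumed value)

def ewma_update(P, observed_cell, lam=0.94):
    """Shrink the current table toward the indicator of the newly observed cell."""
    E = np.zeros_like(P)
    E[observed_cell] = 1.0
    return lam * P + (1.0 - lam) * E

# Feed in a stream of (bid-change bin, ask-change bin) observations from matched orders.
for cell in [(2, 2), (2, 3), (1, 2), (2, 2), (3, 3)]:
    P = ewma_update(P, cell, lam)

print(np.round(P, 3))
print("table still sums to 1:", np.isclose(P.sum(), 1.0))
```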
