11 |
Statistical properties of forward selection regression estimators
Thiebaut, Nicolene Magrietha 04 August 2011 (has links)
In practice, when one has many candidate variables as explanatory variables in multiple regression, there is always the possibility that variables that are important determinants of the response variable might be omitted from the model, while unimportant variables might be included. Both types of errors are important, and this dissertation attempts to quantify the probabilities of these errors. A simulation study is reported. Different numbers of variables, i.e. p = 4 to 20, are assumed, as well as different sample sizes, i.e. n = 0.5p, p, 2p, 4p. For each p the underlying model assumes that roughly half of the independent variables are actually correlated with the dependent variable and the other half not. The noise is ε ~ N(0, σ²), where σ² is set fixed. The data were simulated 10000 times for each combination of n and p using known underlying models, with ε drawn randomly from a normal distribution. For this investigation the full model and forward selection regression are compared. The mean squared error of the estimated coefficient β(p) relative to the true β is determined for each combination of n and p. A full discussion, as well as graphs, is presented. / Dissertation (MSc)--University of Pretoria, 2011. / Statistics / unrestricted
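The simulation design described in this abstract can be sketched as follows. This is a minimal illustration, not the dissertation's code: the F-to-enter threshold, seed, and the specific n, p, σ shown here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_select(X, y, f_enter=4.0):
    """Greedy forward selection with an F-to-enter stopping rule.
    The threshold f_enter=4.0 is a common rule of thumb, not a choice
    documented in the dissertation."""
    n, p = X.shape
    selected, remaining = [], list(range(p))
    rss_cur = np.sum((y - y.mean()) ** 2)
    while remaining:
        cand = []
        for j in remaining:
            cols = selected + [j]
            beta = np.linalg.lstsq(X[:, cols], y, rcond=None)[0]
            r = y - X[:, cols] @ beta
            cand.append((np.sum(r ** 2), j))
        rss_new, j_best = min(cand)
        df = max(n - len(selected) - 2, 1)
        f_stat = (rss_cur - rss_new) / (rss_new / df)
        if f_stat < f_enter:          # no significant improvement: stop
            break
        selected.append(j_best)
        remaining.remove(j_best)
        rss_cur = rss_new
    return selected

# One replication of the design: half the coefficients active, half zero.
p, sigma = 8, 1.0
n = 4 * p
beta_true = np.r_[np.ones(p // 2), np.zeros(p - p // 2)]
X = rng.standard_normal((n, p))
y = X @ beta_true + sigma * rng.standard_normal(n)

sel = forward_select(X, y)
missed = [j for j in range(p // 2) if j not in sel]   # important vars omitted
spurious = [j for j in sel if j >= p // 2]            # unimportant vars included
```

Repeating this over many replications and tallying `missed` and `spurious` gives empirical estimates of the two error probabilities the abstract refers to.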
|
12 |
A Report on the Statistical Properties of the Coefficient of Variation and Some Applications
Irvin, Howard P. 01 May 1970 (has links)
Examples from four disciplines were used to introduce the coefficient of variation, which was considered to have considerable usage and application in solving quality control and reliability problems.
The statistical properties were found in the statistical literature and are presented, namely, the mean and the variance of the coefficient of variation. The cumulative probability function was determined by two approximate methods and by using the noncentral t distribution. A graphical method to determine approximate confidence intervals and a method to determine if the coefficients of variation from two samples were significantly different from each other are also provided (with examples).
Applications of the coefficient of variation to solving some of the main problems encountered in industry that are included in this report are: (a) using the coefficient of variation to measure relative efficiency, (b) acceptance sampling, (c) the stress-versus-strength reliability problem, and (d) estimating the shape parameter of the two-parameter Weibull distribution. (84 pages)
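A minimal sketch of the basic quantities discussed in this report: the sample coefficient of variation and an asymptotic two-sample comparison. The normal-theory variance approximation used here is one standard choice and is an assumption; the report's own graphical method is not reproduced.

```python
import numpy as np

def cv(x):
    """Sample coefficient of variation: s / x-bar (assumes a positive mean)."""
    x = np.asarray(x, dtype=float)
    return x.std(ddof=1) / x.mean()

def cv_z_test(x, y):
    """Asymptotic z-statistic for H0: CV_x == CV_y.
    Uses the normal-theory approximation Var(cv-hat) ~ cv^2 (0.5 + cv^2) / n,
    an assumption valid for large samples from a normal distribution."""
    cx, cy = cv(x), cv(y)
    vx = cx**2 * (0.5 + cx**2) / len(x)
    vy = cy**2 * (0.5 + cy**2) / len(y)
    return (cx - cy) / np.sqrt(vx + vy)
```

Comparing `abs(cv_z_test(x, y))` with a standard normal critical value (e.g. 1.96) then gives a rough test of whether two coefficients of variation differ significantly.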
|
13 |
Symmetric Generalized Gaussian Multiterminal Source Coding
Chang, Yameng Jr January 2018 (has links)
Consider a generalized multiterminal source coding system, where (l choose m) encoders, each observing a distinct size-m subset of l (l ≥ 2) zero-mean unit-variance symmetrically correlated Gaussian sources with correlation coefficient ρ, compress their observations in such a way that a joint decoder can reconstruct the sources within a prescribed mean squared error distortion based on the compressed data. The optimal rate-distortion performance of this system was previously known only for the two extreme cases m = l (the centralized case) and m = 1 (the distributed case), and except when ρ = 0, the centralized system can achieve strictly lower compression rates than the distributed system under all non-trivial distortion constraints. Somewhat surprisingly, it is established in the present thesis that the optimal rate-distortion performance of the afore-described generalized multiterminal source coding system with m ≥ 2 coincides with that of the centralized system for all distortions when ρ ≤ 0 and for distortions below an explicit positive threshold (depending on m) when ρ > 0. Moreover, when ρ > 0, the minimum achievable rate of generalized multiterminal source coding subject to an arbitrary positive distortion constraint d is shown to be within a finite gap (depending on m and d) from its centralized counterpart in the large l limit, except possibly at the critical distortion d = 1 − ρ. / Thesis / Master of Applied Science (MASc)
|
14 |
Robust Distributed Compression of Symmetrically Correlated Gaussian Sources
Zhang, Xuan January 2018 (has links)
Consider a lossy compression system with l distributed encoders and a centralized decoder. Each encoder compresses its observed source and forwards the compressed data to the decoder for joint reconstruction of the target signals under the mean squared error distortion constraint. It is assumed that the observed sources can be expressed as the sum of the target signals and the corruptive noises, which are generated independently from two (possibly different) symmetric multivariate Gaussian distributions. Depending on the parameters of such Gaussian distributions, the rate-distortion limit of this lossy compression system is characterized either completely or for a subset of distortions (including, but not necessarily limited to, those sufficiently close to the minimum distortion achievable when the observed sources are directly available at the decoder). The results are further extended to the robust distributed compression setting, where the outputs of a subset of encoders may also be used to produce a non-trivial reconstruction of the corresponding target signals. In particular, we obtain in the high-resolution regime a precise characterization of the minimum achievable reconstruction distortion based on the outputs of k + 1 or more encoders when every k out of all l encoders are operated collectively in the same mode that is greedy in the sense of minimizing the distortion incurred by the reconstruction of the corresponding k target signals with respect to the average rate of these k encoders. / Thesis / Master of Applied Science (MASc)
|
15 |
Feasible Generalized Least Squares: theory and applications
González Coya Sandoval, Emilio 04 June 2024 (has links)
We study the Feasible Generalized Least-Squares (FGLS) estimation of the parameters of a linear regression model in which the errors are allowed to exhibit heteroskedasticity of unknown form and to be serially correlated. The main contribution is twofold: first, we aim to demystify the reasons often advanced for using OLS instead of FGLS by showing that the latter estimator is robust, and more efficient and precise. Second, we devise consistent FGLS procedures, robust to misspecification, which achieve a lower mean squared error (MSE), often close to that of the correctly specified infeasible GLS.
In the first chapter we restrict our attention to the case with independent heteroskedastic errors. We suggest a Lasso-based procedure to estimate the skedastic function of the residuals. This estimate is then used to construct a FGLS estimator. Using extensive Monte Carlo simulations, we show that this Lasso-based FGLS procedure has better finite-sample properties than OLS and other linear regression-based FGLS estimates. Moreover, the FGLS-Lasso estimate is robust to misspecification of both the functional form and the variables characterizing the skedastic function.
The second chapter generalizes our investigation to the case with serially correlated errors. There are three main contributions: first, we show that GLS is consistent requiring only pre-determined regressors, whereas OLS requires exogenous regressors to be consistent. The second contribution is to show that GLS is much more robust than OLS; even a misspecified GLS correction can achieve a lower MSE than OLS. The third contribution is to devise a FGLS procedure valid whether or not the regressors are exogenous, which achieves a MSE close to that of the correctly specified infeasible GLS. Extensive Monte Carlo experiments are conducted to assess the performance of our FGLS procedure against OLS in finite samples. FGLS achieves important reductions in MSE and variance relative to OLS.
In the third chapter we consider an empirical application; we re-examine the Uncovered Interest Parity (UIP) hypothesis, which states that the expected rate of return to speculation in the forward foreign exchange market is zero. We extend the FGLS procedure to a setting in which lagged dependent variables are included as regressors. We thus provide a consistent and efficient framework to estimate the parameters of a general k-step-ahead linear forecasting equation. Finally, we apply our FGLS procedures to the analysis of the two main specifications to test the UIP.
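The two-step logic behind FGLS under heteroskedasticity can be sketched as follows. This is a minimal illustration with a plain log-variance regression standing in for the Lasso step the abstract describes; the design, seed, and coefficients are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def ols(X, y):
    """Ordinary least squares via the least-squares solver."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

def fgls_hetero(X, y):
    """Two-step FGLS for heteroskedasticity of unknown form.
    Step 1: OLS residuals. Step 2: estimate the skedastic function by
    regressing log e^2 on X (a simple stand-in for the Lasso-based step
    in the dissertation). Step 3: weighted least squares with 1/sigma_i."""
    e = y - X @ ols(X, y)
    g = ols(X, np.log(e**2 + 1e-12))      # log-variance regression
    w = 1.0 / np.sqrt(np.exp(X @ g))      # inverse standard-deviation weights
    return ols(X * w[:, None], y * w)

# Simulated heteroskedastic design (hypothetical parameters):
n = 500
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
sigma = np.exp(0.5 * X[:, 1])             # error scale depends on the regressor
y = X @ np.array([1.0, 2.0]) + sigma * rng.standard_normal(n)

b_fgls = fgls_hetero(X, y)                # close to the true (1.0, 2.0)
```

When the skedastic model is roughly right, the reweighted regression downweights noisy observations and attains a lower MSE than OLS, which is the comparison the chapters above quantify.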
|
16 |
Using Kullback-Leibler Divergence to Analyze the Performance of Collaborative Positioning
Nounagnon, Jeannette Donan 12 July 2016 (has links)
Geolocation accuracy is a crucial, life-or-death factor for rescue teams. Natural and man-made disasters are just a few convincing reasons why fast and accurate position location is necessary. One way to unleash the potential of positioning systems is through the use of collaborative positioning, which consists of simultaneously solving for the positions of two nodes that need to locate themselves. Although the literature has addressed the benefits of collaborative positioning in terms of accuracy, a theoretical foundation on the performance of collaborative positioning has been disproportionately lacking.
This dissertation uses information theory to perform a theoretical analysis of the value of collaborative positioning. The main research problem addressed states: 'Is collaboration always beneficial? If not, can we determine theoretically when it is and when it is not?' We show that the immediate advantage of collaborative estimation is in the acquisition of another set of information between the collaborating nodes. This acquisition of new information reduces the uncertainty on the localization of both nodes. Under certain conditions, this reduction in uncertainty occurs for both nodes by the same amount. Hence collaboration is beneficial in terms of uncertainty.
However, reduced uncertainty does not necessarily imply improved accuracy. So, we define a novel theoretical model to analyze the improvement in accuracy due to collaboration. Using this model, we introduce a variational analysis of collaborative positioning to determine factors that affect the improvement in accuracy due to collaboration. We derive range conditions when collaborative positioning starts to degrade the performance of standalone positioning. We derive and test criteria to determine on-the-fly (ahead of time) whether it is worth collaborating or not in order to improve accuracy.
The potential applications of this research include, but are not limited to: intelligent positioning systems, collaborating manned and unmanned vehicles, and improvement of GPS applications. / Ph. D.
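The title quantity, Kullback-Leibler divergence, has a closed form between Gaussians, which is one way to measure the uncertainty reduction the abstract describes. A minimal sketch under assumed (hypothetical) position posteriors, not the dissertation's model:

```python
import numpy as np

def kl_gauss(mu0, S0, mu1, S1):
    """KL(N(mu0, S0) || N(mu1, S1)) for multivariate Gaussians:
    0.5 * (tr(S1^-1 S0) + (mu1-mu0)^T S1^-1 (mu1-mu0) - k + ln(det S1 / det S0))."""
    k = len(mu0)
    S1_inv = np.linalg.inv(S1)
    d = mu1 - mu0
    return 0.5 * (np.trace(S1_inv @ S0) + d @ S1_inv @ d - k
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

# Hypothetical 2-D position posteriors, before and after collaboration:
prior = np.diag([4.0, 4.0])   # standalone covariance (m^2)
post = np.diag([1.0, 2.0])    # tighter covariance after a peer measurement
gain = kl_gauss(np.zeros(2), post, np.zeros(2), prior)  # bits-like information gain (nats)
```

A positive `gain` reflects reduced uncertainty; the dissertation's point is that this reduction alone does not guarantee improved accuracy.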
|
17 |
Performance evaluation of ZF and MMSE equalizers for wavelets V-Blast
Asif, Rameez, Bin-Melha, Mohammed S., Hussaini, Abubakar S., Abd-Alhameed, Raed, Jones, Steven M.R., Noras, James M., Rodriguez, Jonathan January 2013 (has links)
No / In this work we present work on equalization algorithms to be used in future orthogonally multiplexed wavelet-based multi-signaling communication systems. The performance of the ZF and MMSE algorithms has been analyzed using SISO and MIMO communication models. The transmitted electromagnetic waves were subjected to a Rayleigh multipath fading channel with AWGN. The results showed that the performance of the two algorithms is the same in a SISO channel, but in a MIMO environment MMSE has better performance.
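The two linear equalizers compared above have standard closed forms. A minimal sketch for a flat 2x2 MIMO channel with unit-power symbols (the channel draw, SNR, and symbols are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)

def zf_equalizer(H):
    """Zero-forcing: the channel pseudo-inverse (can amplify noise in deep fades)."""
    return np.linalg.pinv(H)

def mmse_equalizer(H, snr):
    """Linear MMSE: W = (H^H H + (1/snr) I)^-1 H^H, assuming unit-power symbols."""
    nt = H.shape[1]
    return np.linalg.inv(H.conj().T @ H + np.eye(nt) / snr) @ H.conj().T

# 2x2 Rayleigh flat-fading channel with AWGN (hypothetical setup):
H = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)
x = np.array([1 + 1j, -1 - 1j]) / np.sqrt(2)   # QPSK-like unit-power symbols
snr = 100.0                                    # 20 dB
noise = (rng.standard_normal(2) + 1j * rng.standard_normal(2)) * np.sqrt(0.5 / snr)
y = H @ x + noise

x_zf = zf_equalizer(H) @ y
x_mmse = mmse_equalizer(H, snr) @ y
```

At high SNR the MMSE matrix converges to the ZF matrix; at low SNR the regularizing term keeps MMSE from amplifying noise, which is consistent with the MIMO result reported above.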
|
18 |
Carrier Frequency Offset Estimation for Orthogonal Frequency Division Multiplexing
Challakere, Nagaravind 01 May 2012 (links)
This thesis presents a novel method to solve the problem of estimating the carrier frequency offset in an Orthogonal Frequency Division Multiplexing (OFDM) system. The approach is based on the minimization of the probability of symbol error. Hence, this approach is called the Minimum Symbol Error Rate (MSER) approach. An existing approach based on Maximum Likelihood (ML) is chosen to benchmark the performance of the MSER-based algorithm. The MSER approach is computationally intensive. The thesis evaluates the approximations that can be made to the MSER-based objective function to make the computation tractable. A modified gradient function based on the MSER objective is developed which provides better performance characteristics than the ML-based estimator. The estimates produced by the MSER approach exhibit lower Mean Squared Error compared to the ML benchmark. The performance of the MSER-based estimator is simulated with Quaternary Phase Shift Keying (QPSK) symbols, but the algorithm presented is applicable to all complex symbol constellations.
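The ML-style benchmark mentioned above is commonly realized with a repeated training symbol, where the CFO appears as a phase ramp between the two halves. A minimal sketch of that classical estimator (the MSER algorithm itself is not specified in this abstract; N, the offset, and the noise level are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(3)

N = 64                                   # subcarriers (illustrative)
eps = 0.13                               # true CFO, in units of subcarrier spacing
sym = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], N) / np.sqrt(2)  # QPSK
s = np.fft.ifft(sym) * np.sqrt(N)        # one unit-power OFDM symbol
tx = np.tile(s, 2)                       # repeated training symbol

idx = np.arange(2 * N)
rx = tx * np.exp(2j * np.pi * eps * idx / N)          # CFO rotates the samples
rx += 0.01 * (rng.standard_normal(2 * N) + 1j * rng.standard_normal(2 * N))

# ML-type estimate: phase of the correlation between the two identical halves.
# Each sample pair differs by exactly exp(j 2 pi eps), so for |eps| < 0.5:
corr = np.sum(np.conj(rx[:N]) * rx[N:])
eps_hat = np.angle(corr) / (2 * np.pi)
```

Running the same experiment over many noise draws and comparing the MSE of this estimator against an alternative is the kind of benchmarking the thesis performs against its MSER-based estimator.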
|
19 |
COMPRESSIVE IMAGING FOR DIFFERENCE IMAGE FORMATION AND WIDE-FIELD-OF-VIEW TARGET TRACKING
Shikhar January 2010 (has links)
Use of imaging systems for performing various situational awareness tasks in military and commercial settings has a long history. There is increasing recognition, however, that a much better job can be done by developing non-traditional optical systems that exploit the task-specific system aspects within the imager itself. In some cases, a direct consequence of this approach can be real-time data compression along with increased measurement fidelity of the task-specific features. In others, compression can potentially allow us to perform high-level tasks such as direct tracking using the compressed measurements without reconstructing the scene of interest. In this dissertation we present novel advancements in feature-specific (FS) imagers for large field-of-view surveillance, and estimation of temporal object-scene changes utilizing the compressive imaging paradigm. We develop these two ideas in parallel. In the first case we show a feature-specific (FS) imager that optically multiplexes multiple, encoded sub-fields of view onto a common focal plane. Sub-field encoding enables target tracking by creating a unique connection between target characteristics in superposition space and the target's true position in real space. This is accomplished without reconstructing a conventional image of the large field of view. System performance is evaluated in terms of two criteria: average decoding time and probability of decoding error. We study these performance criteria as a function of resolution in the encoding scheme and signal-to-noise ratio. We also include simulation and experimental results demonstrating our novel tracking method. In the second case we present a FS imager for estimating temporal changes in the object scene over time by quantifying these changes through a sequence of difference images. The difference images are estimated by taking compressive measurements of the scene. Our goals are twofold. First, to design the optimal sensing matrix for taking compressive measurements. In scenarios where such sensing matrices are not tractable, we consider plausible candidate sensing matrices that either use the available a priori information or are non-adaptive. Second, we develop closed-form and iterative techniques for estimating the difference images. We present results to show the efficacy of these techniques and discuss the advantages of each.
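The second idea, estimating a difference image directly from compressive measurements, can be sketched with a non-adaptive random sensing matrix and a closed-form least-norm estimate. The scene size, measurement count, and sparsity pattern here are illustrative assumptions, and the least-norm solve stands in for the dissertation's closed-form and iterative techniques:

```python
import numpy as np

rng = np.random.default_rng(4)

n, m = 256, 64                                   # scene size, compressive measurements
A = rng.standard_normal((m, n)) / np.sqrt(m)     # non-adaptive random sensing matrix

x1 = np.zeros(n)
x1[rng.choice(n, 5, replace=False)] = 1.0        # frame 1: a few bright features
x2 = x1.copy()
x2[10] += 0.8                                    # frame 2: small temporal change

y1, y2 = A @ x1, A @ x2                          # compressive measurements per frame

# Closed-form (minimum-norm) difference-image estimate from y2 - y1:
d_hat = A.T @ np.linalg.solve(A @ A.T, y2 - y1)
```

Because the true difference is sparse, an iterative sparsity-exploiting solver would sharpen this estimate further; the least-norm solution shown is the simplest consistent reconstruction of the measurement difference.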
|
20 |
Spatial Pattern of Yield Distributions: Implications for Crop Insurance
Annan, Francis 11 August 2012 (has links)
Despite the potential benefits of larger datasets for crop insurance ratings, pooling yields with similar distributions is not a common practice. The current USDA-RMA county insurance ratings do not consider information across state lines, a politically driven assumption that ignores a wealth of climate and agronomic evidence suggesting that growing regions are not constrained by state boundaries. We test the appropriateness of this assumption, and provide empirical grounds for the benefits of pooling datasets. We find evidence in favor of pooling across state lines, with poolable counties sometimes being as far as 2,500 miles apart. An out-of-sample performance exercise suggests our proposed pooling framework out-performs a no-pooling alternative, and supports the hypothesis that economic losses should be expected as a result of not adopting our pooling framework. Our findings have strong empirical and policy implications for accurate modeling of yield distributions and the rating of crop insurance products.
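A simple screen for whether two counties' yields can be pooled is a two-sample test of distributional equality. This sketch uses a Kolmogorov-Smirnov test on hypothetical detrended yields; it is an illustrative stand-in, not the pooling framework developed in this paper:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Hypothetical detrended county yields (bu/acre), 40 years each:
county_a = rng.normal(150, 20, 40)
county_b = rng.normal(152, 21, 40)   # similar distribution
county_c = rng.normal(110, 35, 40)   # clearly different growing conditions

def poolable(x, y, alpha=0.05):
    """Two-sample KS test as a pooling screen: failing to reject equality
    of distributions makes pooling the samples admissible."""
    return stats.ks_2samp(x, y).pvalue > alpha

pool_ab = poolable(county_a, county_b)
pool_ac = poolable(county_a, county_c)
```

Pooling admissible pairs enlarges the sample used to fit each yield distribution, which is the source of the rating accuracy gains the abstract reports.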
|