1 |
On adaptive MMSE receiver strategies for TD-CDMA. Garcia-Alis, Daniel. January 2001 (has links)
No description available.
|
2 |
Performance of different wavelet families using DWT and DWPT: channel equalization using ZF and MMSE. Asif, Rameez; Hussaini, Abubakar S.; Abd-Alhameed, Raed; Jones, Steven M.R.; Noras, James M.; Elkhazmi, Elmahdi A.; Rodriguez, Jonathan. January 2013 (has links)
We have studied the performance of multidimensional signalling techniques using wavelet-based modulation within an orthogonally multiplexed communication system. The discrete wavelet transform (DWT) and wavelet packet modulation (WPM) techniques were studied using the Daubechies 2 and 8, Biorthogonal 1.5 and 3.1, and reverse Biorthogonal 1.5 and 3.1 wavelets in the presence of Rayleigh multipath fading channels with AWGN. Results showed that DWT-based systems outperform WPM systems in terms of both BER vs. SNR performance and processing. The performance of two equalization techniques, zero forcing (ZF) and minimum mean square error (MMSE), was also compared using DWT. When the channel is modeled with Rayleigh multipath fading, AWGN, and ISI, both techniques yield similar performance.
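The ZF and MMSE equalizers compared in this entry can be illustrated with a minimal one-tap frequency-domain sketch (my own illustration, not the authors' wavelet-based system): ZF inverts the channel outright, while MMSE trades channel inversion against noise enhancement.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical flat-faded coefficient per subcarrier (Rayleigh-like).
H = (rng.normal(size=8) + 1j * rng.normal(size=8)) / np.sqrt(2)
x = rng.choice([-1.0, 1.0], size=8)              # BPSK symbols
sigma2 = 0.1                                     # noise variance
n = np.sqrt(sigma2 / 2) * (rng.normal(size=8) + 1j * rng.normal(size=8))
y = H * x + n

# Zero forcing: invert the channel; amplifies noise in deep fades.
x_zf = y / H

# MMSE: regularize the inversion by the noise variance.
x_mmse = np.conj(H) / (np.abs(H) ** 2 + sigma2) * y
```

Note that the MMSE output never exceeds the ZF output in magnitude; the two coincide as `sigma2` goes to zero, which matches the abstract's observation that the techniques perform similarly in benign conditions.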
|
3 |
Modeling Channel Estimation Error in Continuously Varying MIMO Channels. Potter, Chris. 10 1900 (has links)
ITC/USA 2007 Conference Proceedings / The Forty-Third Annual International Telemetering Conference and Technical Exhibition / October 22-25, 2007 / Riviera Hotel & Convention Center, Las Vegas, Nevada / The accuracy of channel estimation plays a crucial role in the demodulation of data symbols sent across an unknown wireless medium. In this work a new analytical expression for the channel estimation error of a multiple input multiple output (MIMO) system is obtained when the wireless medium is continuously changing in the temporal domain. Numerical examples are provided to illustrate our findings.
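The role of channel estimation error can be sketched numerically with a standard pilot-based least-squares MIMO estimator (an assumed textbook setup, not the paper's analytical expression for time-varying channels):

```python
import numpy as np

rng = np.random.default_rng(1)
nt, nr, npil = 2, 2, 16            # transmit/receive antennas, pilot length

# Random MIMO channel and known BPSK pilot block.
H = (rng.normal(size=(nr, nt)) + 1j * rng.normal(size=(nr, nt))) / np.sqrt(2)
X = rng.choice([-1.0, 1.0], size=(nt, npil)).astype(complex)
sigma2 = 0.01
N = np.sqrt(sigma2 / 2) * (rng.normal(size=(nr, npil))
                           + 1j * rng.normal(size=(nr, npil)))
Y = H @ X + N

# Least-squares channel estimate from the pilot block.
H_hat = Y @ X.conj().T @ np.linalg.inv(X @ X.conj().T)

# Empirical per-entry channel estimation MSE.
mse = np.mean(np.abs(H - H_hat) ** 2)
```

In the static case the MSE scales with the noise variance over the pilot length; a continuously varying channel, as studied in the paper, adds a further error term from the channel changing between estimation and use.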
|
4 |
Kredibilitní přístupy k výpočtu rezerv na pojistná plnění / Credibility approach to claims reserves calculation. Dzugas, Erik. January 2012 (has links)
In this work we summarize various techniques for evaluating claims reserves, which consist in estimating future uncertain and hard-to-anticipate loss development. It appears that methods based on a credibility formula give the most accurate results in the mean squared error sense. We consider this conclusion, derived in the text, very relevant and useful, and we therefore illustrate it with a numerical example. The calculations are presented in the attached charts, which form an important supplement to the text. The topic of this work follows the content of the Non-life Insurance and Risk Theory lectures, so this text can also be useful for students of the Faculty of Mathematics and Physics wishing to extend their knowledge.
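The credibility formula at the heart of such methods can be sketched as the classical Bühlmann blend (a generic illustration, not this thesis's specific reserving model): the individual experience and the collective mean are mixed with a weight that grows with the volume of individual data.

```python
# Buhlmann credibility: blend individual claims experience with the
# collective mean; Z grows toward 1 as individual data accumulates.
def credibility_estimate(individual_mean, collective_mean, n, k):
    """n: volume of individual experience (e.g. years observed);
    k: ratio of expected process variance to the variance of
    hypothetical means (both assumed known here)."""
    z = n / (n + k)
    return z * individual_mean + (1 - z) * collective_mean

# With n == k the weight Z is exactly 1/2: a halfway blend.
est = credibility_estimate(individual_mean=120.0, collective_mean=100.0,
                           n=5, k=5)
print(est)  # 110.0
```

The MSE-optimality claimed for such estimators comes precisely from this data-dependent weight: neither the raw individual mean nor the collective mean alone minimizes the mean squared error.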
|
5 |
None. Yen, Chia-Hsin. 09 July 2006 (has links)
The purpose of this research is to employ the STAR model in discussing and analyzing the relationship between stock indices and macroeconomic variables in Taiwan, Japan, and Korea.
Monthly stock market index data are analyzed over the period January 1990 to December 2000, with the sample period from January 2001 to April 2005 used in an out-of-sample forecasting exercise. The macroeconomic variables considered in this paper include money supply, consumer price index, industrial production index, interest rate, and exchange rate.
The empirical results for Taiwan, Japan, and Korea show that the LSTAR and ESTAR models improve both the in-sample fit and the out-of-sample forecasts of the data over the linear model alternative.
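The smooth regime switch that distinguishes an LSTAR model from a linear one is driven by a logistic transition function; a minimal sketch (generic, with assumed parameter values rather than the thesis's estimates):

```python
import numpy as np

# Logistic transition function of an LSTAR model: G -> 0 selects one
# linear regime, G -> 1 the other, switching smoothly at threshold c
# with smoothness gamma.
def logistic_transition(s, gamma, c):
    return 1.0 / (1.0 + np.exp(-gamma * (s - c)))

# A steep transition (gamma = 10) around c = 0.
s = np.linspace(-2.0, 2.0, 5)
print(logistic_transition(s, gamma=10.0, c=0.0))
```

An ESTAR model replaces this monotone logistic with an exponential, U-shaped transition, so the outer regimes behave alike and only deviations from the threshold trigger the switch.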
|
6 |
Optimum bit-by-bit power allocation for minimum distortion transmission. Karaer, Arzu. 25 April 2007 (has links)
In this thesis, bit-by-bit power allocation to minimize the mean-squared error (MSE) distortion of a basic communication system is studied. The system consists of a quantizer, optionally a channel encoder, and a Binary Phase Shift Keying (BPSK) modulator. In the quantizer, natural binary mapping is used. First, the case without channel coding is considered, with hard-decision decoding at the receiver. Errors in the more significant information bits contribute more to the distortion than errors in the less significant bits. For the uncoded case, the optimum power profile for each bit is determined analytically and through computer-based optimization methods such as differential evolution. For low signal-to-noise ratio (SNR), the less significant bits are allocated negligible power compared to the more significant bits. For high SNRs, the optimum bit-by-bit power allocation gives a constant MSE gain in dB over uniform power allocation. Second, the coded case is considered. Linear block codes such as the (3,2), (4,3), and (5,4) single parity check codes and the (7,4) Hamming code are used, with soft-decision decoding at the receiver. Approximate expressions for the MSE are used to find a near-optimum power profile for the coded case, again optimized with differential evolution. For a simple code like the (7,4) Hamming code, simulations show that up to 3 dB of MSE gain can be obtained by changing the power allocation on the information and parity bits. A systematic method to find the power profile for linear block codes is also introduced, given knowledge of the input-output weight enumerating function of the code. The information bits share one power level and the parity bits another, and the two levels can differ.
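The key observation, that a bit error in position i of a natural binary mapping costs (2^i)^2 in squared error, can be sketched with a toy MSE calculation for BPSK over AWGN (the power profile below is a hypothetical skew toward the MSBs, not the thesis's optimized profile):

```python
import math

def q_func(x):
    # Gaussian tail probability, via the complementary error function.
    return 0.5 * math.erfc(x / math.sqrt(2))

def expected_mse(bit_energies, n0=1.0, delta=1.0):
    """Approximate MSE for natural binary mapping with BPSK over AWGN:
    a bit error in position i (i = 0 is the LSB) contributes
    (2**i * delta)**2, weighted by its error probability."""
    return sum(((2 ** i) * delta) ** 2 * q_func(math.sqrt(2 * e / n0))
               for i, e in enumerate(bit_energies))

total = 8.0                          # total energy budget over 4 bits
uniform = [total / 4] * 4
msb_weighted = [0.4, 0.8, 2.0, 4.8]  # hypothetical MSB-heavy split
print(expected_mse(uniform), expected_mse(msb_weighted))
```

Even this crude hand-picked skew beats the uniform allocation, which is the qualitative effect the optimized profiles in the thesis exploit.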
|
7 |
Obtaining the Best Model Predictions and Parameter Estimates Using Limited Data. McLean, Kevin. 27 September 2011 (has links)
Engineers who develop fundamental models for chemical processes are often unable to estimate all of the model parameters due to problems with parameter identifiability and estimability. The literature concerning these two concepts is reviewed and techniques for assessing parameter identifiability and estimability in nonlinear dynamic models are summarized. Modellers often face estimability problems when the available data are limited or noisy. In this situation, modellers must decide whether to conduct new experiments, change the model structure, or to estimate only a subset of the parameters and leave others at fixed values. Estimating only a subset of important model parameters is a technique often used by modellers who face estimability problems and it may lead to better model predictions with lower mean squared error (MSE) than the full model with all parameters estimated. Different methods in the literature for parameter subset selection are discussed and compared.
An orthogonalization algorithm combined with a recent MSE-based criterion has been used successfully to rank parameters from most to least estimable and to determine the parameter subset that should be estimated to obtain the best predictions. In this work, this strategy is applied to a batch reactor model using additional data and results are compared with computationally expensive leave-one-out cross-validation. A new simultaneous ranking and selection technique based on this MSE criterion is also described. Unfortunately, results from these parameter selection techniques are sensitive to the initial parameter values and the uncertainty factors used to calculate sensitivity coefficients. A robustness test is proposed and applied to assess the sensitivity of the selected parameter subset to the initial parameter guesses. The selected parameter subsets are compared with those selected using another MSE-based method proposed by Chu et al. (2009). The computational efforts of these methods are compared and recommendations are provided to modellers. / Thesis (Master, Chemical Engineering) -- Queen's University, 2011.
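The orthogonalization-based ranking mentioned above can be sketched as a Gram-Schmidt-style sweep over the columns of a parameter sensitivity matrix (a generic sketch of the common ranking idea, not the thesis's exact algorithm or MSE criterion):

```python
import numpy as np

def rank_parameters(Z):
    """Rank columns of sensitivity matrix Z (one column per parameter)
    from most to least estimable: repeatedly pick the column with the
    largest norm, then remove its direction from the rest."""
    Z = Z.astype(float).copy()
    order, remaining = [], list(range(Z.shape[1]))
    while remaining:
        norms = {j: np.linalg.norm(Z[:, j]) for j in remaining}
        best = max(norms, key=norms.get)
        order.append(best)
        u = Z[:, best] / (norms[best] + 1e-15)
        for j in remaining:
            Z[:, j] -= u * (u @ Z[:, j])   # orthogonalize remaining columns
        remaining.remove(best)
    return order

# Column 1 duplicates column 0, so one of the pair must rank low:
# a correlated (poorly estimable) parameter is detected automatically.
Z = np.array([[1.0, 1.0, 0.0],
              [1.0, 1.0, 1.0],
              [0.0, 0.0, 2.0]])
print(rank_parameters(Z))
```

Cutting this ranked list at the point suggested by an MSE criterion yields the subset to estimate, with the remaining parameters held at their initial values.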
|
8 |
Frequentist Model Averaging For Functional Logistic Regression Model. Jun, Shi. January 2018 (has links)
Frequentist model averaging, as a newly emerging approach, provides a way to overcome the uncertainty caused by traditional model selection in estimation. It acknowledges the contribution of multiple models, instead of basing inference and prediction purely on one single model. Functional logistic regression is also a burgeoning method for studying the relationship between functional covariates and a binary response. In this paper, the frequentist model averaging approach is applied to the functional logistic regression model. A simulation study is implemented to compare its performance with model selection. The analysis shows that when the conditional probability is taken as the focus parameter, model averaging is superior to model selection based on BIC; when the focus parameter is the intercept and slopes, model selection performs better.
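One common way to form frequentist model-averaging weights is from information criteria such as BIC; a minimal sketch (the smoothed-BIC weighting is a standard construction, assumed here rather than taken from this paper):

```python
import math

def bic_weights(bics):
    """Smoothed BIC weights for model averaging:
    w_m proportional to exp(-BIC_m / 2)."""
    m = min(bics)                                # shift for numerical stability
    w = [math.exp(-(b - m) / 2) for b in bics]
    s = sum(w)
    return [x / s for x in w]

# Three hypothetical candidate models; lower BIC gets more weight,
# but no model is discarded outright as in plain model selection.
print(bic_weights([100.0, 102.0, 110.0]))
```

The averaged estimate of a focus parameter is then the weight-sum of the per-model estimates, which is what softens the all-or-nothing behavior of BIC-based selection.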
|
9 |
Iterative decoding of space-time-frequency block coded MIMO concatenated with LDPC codes. Botha, P.R. (Philippus Rudolph). January 2013 (has links)
In this dissertation the aim was to investigate algorithms found in computer science and apply suitable ones to the problem of decoding multiple-input multiple-output (MIMO) space-time-frequency block coded signals. It was found that the sphere decoder is a specific implementation of the A* tree search algorithm well known in computer science. Based on this knowledge, the sphere decoder was extended to include a priori information in the maximum a posteriori probability (MAP) joint decoding of the STFC block coded MIMO signals. The added complexity that the inclusion of a priori information imposes on the sphere decoder was investigated and compared to the sphere decoder without a priori information. To mitigate the potential additional complexity, several algorithms that determine the order in which the symbols are decoded were investigated. Three new algorithms incorporating a priori information were developed and compared with two existing algorithms: sorting based on the norms of the channel matrix columns, and the sorted QR decomposition.
Additionally, the zero forcing (ZF) and minimum mean squared error (MMSE) decoders, with and without decision feedback (DF), were also extended to include a priori information. The developed method of incorporating a priori information was compared to an existing algorithm based on receive vector translation (RVT). The limitation of RVT to quadrature phase shift keying (QPSK) and binary phase shift keying (BPSK) constellations was also shown in its derivation. The impact on these decoders of the various symbol sorting algorithms initially developed for the sphere decoder was also investigated. The developed a priori decoders operate in the log domain and as such accept a priori information as log-likelihood ratios (LLRs). To output LLRs to the forward error correcting (FEC) code, the max-log approximation, occasionally referred to as hard-to-soft decoding, was used.
To test the developed decoders, an iterative turbo decoder structure was used together with an LDPC decoder to decode threaded algebraic space-time (TAST) codes in a Rayleigh faded MIMO channel. Two variables with the greatest impact on the performance of the turbo decoder were identified: the hard limit value of the LLRs passed to the LDPC decoder, and the number of independently faded bits in the LDPC code. / Dissertation (MEng)--University of Pretoria, 2013. / Electrical, Electronic and Computer Engineering
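The tree-search view of the sphere decoder described in this abstract can be sketched for a small real-valued BPSK system (a minimal depth-first sketch with radius pruning; the dissertation's decoder additionally folds in a priori LLRs, which are omitted here):

```python
import numpy as np

def sphere_decode_bpsk(H, y):
    """Depth-first sphere decoder for a real BPSK system y = Hx + n.
    After a QR decomposition, each tree level fixes one symbol (last
    to first); branches whose partial cost already exceeds the best
    full-path cost found so far are pruned."""
    Q, R = np.linalg.qr(H)
    z = Q.T @ y
    n = H.shape[1]
    best = {"cost": np.inf, "x": None}

    def search(level, x, partial_cost):
        if partial_cost >= best["cost"]:
            return                              # prune this branch
        if level < 0:
            best["cost"], best["x"] = partial_cost, x.copy()
            return
        for s in (-1.0, 1.0):                   # BPSK alphabet
            x[level] = s
            resid = z[level] - R[level, level:] @ x[level:]
            search(level - 1, x, partial_cost + resid ** 2)

    search(n - 1, np.zeros(n), 0.0)
    return best["x"]

# Noiseless toy channel for illustration: ML decoding recovers x_true.
H = np.array([[1.0, 0.5], [0.2, 1.0]])
x_true = np.array([1.0, -1.0])
y = H @ x_true
print(sphere_decode_bpsk(H, y))
```

The pruning condition is exactly the A* flavor the dissertation identifies: a bound on the best completion cost steers the search away from hopeless subtrees.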
|
10 |
Regularization Techniques for Linear Least-Squares Problems. Suliman, Mohamed Abdalla Elhag. 04 1900 (has links)
Linear estimation is a fundamental branch of signal processing that deals with estimating the values of parameters from corrupted measured data. Throughout the years, several optimization criteria have been used to achieve this task. The most prominent among these is linear least-squares. Although this criterion has enjoyed wide popularity in many areas due to its attractive properties, it suffers from some shortcomings, and alternative optimization criteria have therefore been proposed. These new criteria allow, in one way or another, the incorporation of further prior information into the problem at hand. Among these alternative criteria is regularized least-squares (RLS).
In this thesis, we propose two new algorithms to find the regularization parameter for linear least-squares problems: the constrained perturbation regularization algorithm (COPRA) for random matrices, and COPRA for linear discrete ill-posed problems. In both, an artificial perturbation matrix with a bounded norm is forced into the model matrix. This perturbation is introduced to enhance the singular-value structure of the matrix, so that the modified model is expected to provide a more stable solution when used to estimate the original signal by minimizing the worst-case residual error function.
Unlike many other regularization algorithms that seek to minimize the estimated data error, the two proposed algorithms are developed mainly to select the artificial perturbation bound and the regularization parameter in a way that approximately minimizes the mean-squared error (MSE) between the original signal and its estimate under various conditions. The first proposed COPRA method is developed mainly to estimate the regularization parameter when the measurement matrix is complex Gaussian with centered unit-variance (standard), independent and identically distributed (i.i.d.) entries. The second proposed COPRA method deals with discrete ill-posed problems in which the singular values of the linear transformation matrix decay very fast to a significantly small value. For both proposed algorithms, the regularization parameter is obtained as the solution of a non-linear characteristic equation. We provide a detailed study of the general properties of these functions and address the existence and uniqueness of the root. To demonstrate the performance of the derivations, the first proposed COPRA method is applied to estimate different signals with various characteristics, while the second is applied to a large set of different real-world discrete ill-posed problems. Simulation results demonstrate that the two proposed methods outperform a set of benchmark regularization algorithms in most cases. In addition, the algorithms are shown to have the lowest run time.
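The regularized least-squares estimator whose parameter COPRA tunes can be sketched in its standard Tikhonov form (the estimator itself is textbook material; the choice of lambda below is arbitrary, not COPRA's root-finding rule):

```python
import numpy as np

def regularized_ls(A, y, lam):
    """Tikhonov-regularized least squares:
    x = argmin ||A x - y||^2 + lam * ||x||^2
      = (A^T A + lam * I)^(-1) A^T y."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

rng = np.random.default_rng(2)
A = rng.normal(size=(20, 5))
x_true = rng.normal(size=5)
y = A @ x_true + 0.1 * rng.normal(size=20)

# Sweeping lambda: 0 recovers ordinary least squares; larger values
# shrink the solution, trading bias for stability.
for lam in (0.0, 0.1, 10.0):
    x_hat = regularized_ls(A, y, lam)
    print(lam, np.linalg.norm(x_hat - x_true))
```

Methods such as COPRA aim to pick lambda so that the resulting estimate approximately minimizes the MSE to the unknown signal, rather than merely fitting the measured data.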
|