391 |
An investigation of the market model when prices are observed with error
Gendron, Michel, January 1984 (has links)
The market model, which relates securities' returns to their systematic risk (β), plays a major role in finance. The estimation of β, in particular, is fundamental to many empirical studies and investment decisions.
This dissertation develops a model which explains the observed serial correlations in returns and the intervaling effects which are inconsistent with the market model assumptions. The model accounts for thin trading and different frictions in the trading process and has as special cases other models of thin trading and frictions presented in the finance literature. The main assumption of the model is that the prices observed in the market and used to compute returns differ by an error from the true prices generated by a Geometric Brownian Motion model, hence its name, the error in prices (EIP) model.
Three estimation methods for β are examined for the EIP model: the Maximum Likelihood (ML) method, the Least Squares (LS) method, and a method of moments. It is suggested to view the EIP model as a missing-information model and to use the EM algorithm to find the ML estimates of the parameters of the model. The approximate small-sample and asymptotic properties of the LS estimate of β are derived. It is shown that replacing the true covariances by their sample moment estimates leads to a convenient and familiar form for a consistent estimate of β. Finally, some illustrations of six different estimation methods for β are presented using simulated and real securities returns. / Business, Sauder School of / Graduate
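To make the effect described above concrete, the following Python sketch (an illustration only, not code or parameter values from the dissertation; the additive log-price error and all numbers are assumptions) simulates market-model returns from GBM-style log-price increments, adds observation error to the prices, and compares the lag-1 serial correlation and the LS estimate of β computed from true versus observed returns.

```python
import numpy as np

rng = np.random.default_rng(0)
n, dt = 2500, 1 / 250                      # roughly ten years of daily data
sig_m, beta_true, sig_eps, sig_err = 0.20, 1.2, 0.15, 0.10

# "True" returns generated by a market model with GBM-style log-price increments
r_m = sig_m * np.sqrt(dt) * rng.standard_normal(n)
r_i = beta_true * r_m + sig_eps * np.sqrt(dt) * rng.standard_normal(n)

def observe(returns):
    """Add an error to the log prices, then difference back to observed returns."""
    log_p = np.cumsum(returns)
    log_p_obs = log_p + sig_err * np.sqrt(dt) * rng.standard_normal(n)
    return np.diff(log_p_obs, prepend=0.0)

r_m_obs, r_i_obs = observe(r_m), observe(r_i)

lag1 = lambda x: np.corrcoef(x[:-1], x[1:])[0, 1]
beta = lambda y, x: np.polyfit(x, y, 1)[0]          # LS slope of y on x

print("lag-1 autocorrelation, true vs observed:", lag1(r_i), lag1(r_i_obs))
print("LS beta, true vs observed returns:      ", beta(r_i, r_m), beta(r_i_obs, r_m_obs))
```

In this toy setup the price errors induce spurious negative serial correlation in the observed returns and, because the market returns are also measured with error, attenuate the LS slope, which is the kind of behaviour the EIP model is built to explain.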
|
392 |
Constrained sequences and coding for spectral and error control
Botha, Louis, 11 February 2014 (has links)
D.Ing. / When digital information is to be transmitted over a communications channel or stored in a data recording system, it is first mapped onto a code sequence by an encoder. The code sequence has certain properties which make it suitable for use on the channel, i.e., the sequence complies with the channel input restrictions. These input restrictions are often described in terms of a required power spectral density of the code sequence. In addition, the code sequence can also be chosen in such a way as to enable the receiver to correct errors which occur in the channel. The set of rules which governs the encoding process is referred to as a line code or a modulation code for the transmission or storage of data, respectively. Before a new line code or modulation code can be developed, the properties that the code sequence should have for compliance with the channel input restrictions and possession of the desired error correction capabilities have to be established. A code construction algorithm, which is often time-consuming and difficult to apply, is then used to obtain the new code. In this dissertation, new classes of sequences which comply with the input restrictions and error correction requirements of practical channels are defined, and new line codes and recording codes are developed for mapping data onto these sequences. Several theorems which show relations between information-theoretic aspects of different classes of code sequences are presented. Algorithms which can be used to transform an existing line code or modulation code into a new code for use on another channel are introduced. These algorithms are systematic and easy to apply, and preclude the necessity of applying a code construction algorithm.
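As a small illustration of a spectral input restriction of the kind referred to above (a generic example, not a construction from this dissertation), the Python sketch below checks a DC-free constraint by bounding the running digital sum (RDS) of a bipolar code sequence; keeping the RDS bounded suppresses the low-frequency content of the sequence's power spectral density.

```python
def running_digital_sum(bits):
    """Map bits {0,1} to bipolar symbols {-1,+1} and accumulate the running digital sum."""
    rds, trace = 0, []
    for b in bits:
        rds += 1 if b else -1
        trace.append(rds)
    return trace

def is_dc_free(bits, bound):
    """A candidate code sequence is admissible only if its RDS stays within +/- bound."""
    return all(abs(s) <= bound for s in running_digital_sum(bits))

# An alternating word satisfies a tight RDS bound; a long run of identical bits does not.
print(is_dc_free([0, 1, 0, 1, 0, 1, 0, 1], bound=1))   # True
print(is_dc_free([1, 1, 1, 1, 0, 0, 0, 0], bound=1))   # False
```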
|
393 |
Coding and bounds for correcting insertion/deletion errors
Swart, Theo G., 10 September 2012 (has links)
M.Ing. / Certain properties of codewords after deletions or insertions of bits are investigated. These properties are used in enumerating the number of subwords or superwords after deletions or insertions. Also, new upper bounds for insertion/deletion correcting codes are derived from these properties. A decoding algorithm to correct up to two deletions per word for Helberg's s = 2 codes is proposed. By using subword and superword tables, new s = 2 codebooks with greater cardinalities than before are presented. An insertion/deletion channel model is presented which can be used in evaluating insertion/deletion correcting codes; by changing its parameters, various channel configurations can be attained. Furthermore, a new convolutional coding scheme for correcting insertion/deletion errors is introduced and its performance is investigated using the presented channel model.
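The enumeration of subwords after deletions mentioned above can be illustrated with a small brute-force sketch (an illustration only; the thesis derives enumeration properties and bounds rather than exhaustive listing): it collects the distinct subwords obtained by deleting exactly s bits from a word.

```python
from itertools import combinations

def subwords_after_deletions(word, s):
    """Return the set of distinct subwords obtained by deleting exactly s symbols from word."""
    n = len(word)
    return {
        "".join(word[i] for i in range(n) if i not in dropped)
        for dropped in combinations(range(n), s)
    }

# The number of distinct subwords depends on the word itself, not only on its length.
print(sorted(subwords_after_deletions("10110", 2)))
print(len(subwords_after_deletions("10110", 2)), len(subwords_after_deletions("11111", 2)))
```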
|
394 |
The accuracy of parameter estimates and coverage probability of population values in regression models upon different treatments of systematically missing data
Othuon, Lucas Onyango A., 11 1900 (has links)
Several methods are available for the treatment of missing data. Most of the methods are based on the assumption that data are missing completely at random (MCAR). However, data sets that are MCAR are rare in psycho-educational research. This gives rise to the need for investigating the performance of missing data treatments (MDTs) with non-randomly or systematically missing data, an area that has not received much attention from researchers in the past.
In the current simulation study, the performance of four MDTs, namely mean substitution (MS), pairwise deletion (PW), the expectation-maximization method (EM), and regression imputation (RS), was investigated in a linear multiple regression context. Four investigations were conducted, involving four predictors under low and high multiple R², and nine predictors under low and high multiple R². In addition, each investigation was conducted under three different sample size conditions (94, 153, and 265). The design factors were missing pattern (2 levels), percent missing (3 levels), and non-normality (4 levels). This design gave rise to 72 treatment conditions. The sampling was replicated one thousand times in each condition. MDTs were evaluated based on accuracy of parameter estimates. In addition, the bias in parameter estimates and the coverage probability of regression coefficients were computed.
The effect of missing pattern, percent missing, and non-normality on the absolute error of the R² estimate was of practical significance. In the estimation of R², EM was the most accurate under the low R² condition, and PW was the most accurate under the high R² condition. No MDT was consistently least biased under the low R² condition. However, with nine predictors under the high R² condition, PW was generally the least biased, with a tendency to overestimate the population R². The mean absolute error (MAE) tended to increase with increasing non-normality and increasing percent missing. Also, the MAE in the R² estimate tended to be smaller under the monotonic pattern than under the non-monotonic pattern. MDTs were most differentiated at the highest level of percent missing (20%) and under the non-monotonic missing pattern.
In the estimation of regression coefficients, RS generally outperformed the other MDTs with respect to accuracy of regression coefficients as measured by MAE. However, EM was competitive under the four-predictor, low R² condition. MDTs were most differentiated only in the estimation of β₁, the coefficient of the variable with no missing values. MDTs were undifferentiated in their performance in the estimation of β₂, ..., βp, p = 4 or 9, although the MAE remained fairly constant across all the regression coefficients. The MAE increased with increasing non-normality and percent missing, but decreased with increasing sample size. The MAE was generally greater under the non-monotonic pattern than under the monotonic pattern. With four predictors, the least bias was under RS regardless of the magnitude of the population R². With nine predictors, the least bias was under PW regardless of the population R².
The results for coverage probabilities were generally similar to those for the estimation of regression coefficients, with coverage probabilities closest to the nominal level under RS. As expected, coverage probabilities decreased with increasing non-normality for each MDT, with values closest to the nominal value for normal data. MDTs were more differentiated with respect to coverage probabilities under the non-monotonic pattern than under the monotonic pattern.
Important implications of the results for researchers are numerous. First, the choice of MDT was found to depend on the magnitude of the population R², the number of predictors, and the parameter estimate of interest. With the estimation of R² as the goal of analysis, use of EM is recommended if the anticipated R² is low (about .2); however, if the anticipated R² is high (about .6), use of PW is recommended. With the estimation of regression coefficients as the goal of analysis, the choice of MDT was found to be most crucial for the variable with no missing data. The RS method is most recommended with respect to estimation accuracy of regression coefficients, although greater bias was recorded under RS than under PW or MS when the number of predictors was large (i.e., nine predictors). Second, the choice of MDT seems to be of little concern if the proportion of missing data is 10 percent, and also if the missing pattern is monotonic rather than non-monotonic. Third, the proportion of missing data seems to have less impact on the accuracy of parameter estimates under the monotonic missing pattern than under the non-monotonic missing pattern. Fourth, for the control of Type I error rates under the low R² condition, the EM method is recommended, as it produced coverage probabilities of regression coefficients closest to the nominal value at the .05 level; for the control of Type I error rates under the high R² condition, the RS method is recommended. Considering that simulated data were used in the present study, it is suggested that future research attempt to validate the findings of the present study using real field data. Also, a future investigator could modify the number of predictors as well as the confidence interval used in the calculation of coverage probabilities to extend the generalization of the results. / Education, Faculty of / Educational and Counselling Psychology, and Special Education (ECPS), Department of / Graduate
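As a toy, single-condition illustration of how such MDTs can be compared (a deliberately tiny version of the 72-condition design above; the missingness rule, sample size, and weights are arbitrary assumptions), the Python sketch below imposes systematically missing values on one predictor and compares mean substitution with regression imputation by the absolute error of the resulting R² estimate.

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 150, 200
w = np.array([0.4, 0.3, 0.3, 0.2])                     # population regression weights

def r_squared(X, y):
    Z = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    resid = y - Z @ beta
    return 1 - resid.var() / y.var()

err_ms, err_rs = [], []
for _ in range(reps):
    X = rng.standard_normal((n, 4))
    y = X @ w + rng.standard_normal(n)
    r2_full = r_squared(X, y)                          # complete-data benchmark

    # Systematic (non-random) missingness: X[:, 0] is missing whenever X[:, 1] is large.
    miss = X[:, 1] > np.quantile(X[:, 1], 0.8)
    Xm = X.copy(); Xm[miss, 0] = np.nan

    # Mean substitution (MS)
    X_ms = Xm.copy(); X_ms[miss, 0] = np.nanmean(Xm[:, 0])

    # Regression imputation (RS): predict X[:, 0] from the fully observed predictors
    A = np.column_stack([np.ones(n), Xm[:, 1:]])
    coef, *_ = np.linalg.lstsq(A[~miss], Xm[~miss, 0], rcond=None)
    X_rs = Xm.copy(); X_rs[miss, 0] = A[miss] @ coef

    err_ms.append(abs(r_squared(X_ms, y) - r2_full))
    err_rs.append(abs(r_squared(X_rs, y) - r2_full))

print("MAE of R^2 estimate, mean substitution:    ", round(float(np.mean(err_ms)), 4))
print("MAE of R^2 estimate, regression imputation:", round(float(np.mean(err_rs)), 4))
```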
|
395 |
Joint Schemes for Physical Layer Security and Error Correction
Adamo, Oluwayomi Bamidele, 08 1900 (has links)
The major challenges facing resource-constrained wireless devices are error resilience, security, and speed. Three joint schemes are presented in this research, which can be broadly divided into error-correction-based and cipher-based schemes. The error-correction-based ciphers take advantage of the properties of LDPC codes and the Nordstrom-Robinson code. A cipher-based cryptosystem is also presented in this research; its complexity is reduced compared to conventional schemes. The security of the ciphers is analyzed against known-plaintext and chosen-plaintext attacks, and they are found to be secure. Randomization tests were also conducted on these schemes and the results are presented. As a proof of concept, the schemes were implemented in software and hardware, showing a reduction in hardware usage compared to conventional schemes. As a result, joint schemes for error correction and security provide security to the physical layer of wireless communication systems, a layer in the protocol stack where currently little or no security is implemented. In this physical-layer security approach, the properties of powerful error correcting codes are exploited to deliver reliability to the intended parties, high security against eavesdroppers, and efficiency in the communication system. The notion of a highly secure and reliable physical layer has the potential to significantly change how communication system designers and users think of the physical layer, since the error control codes employed in this work have the dual roles of both reliability and security.
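The joint idea can be conveyed with a toy Python sketch (not one of the schemes proposed in this work, and its keyed scrambler is far too weak for real security): a key-driven scrambling step is fused with a Hamming(7,4) encoder, so the transmitted word is unintelligible without the key yet still carries parity that lets the legitimate receiver correct a single bit error.

```python
import numpy as np

# Hamming(7,4) in systematic form: G = [I | P], H = [P^T | I]; all arithmetic is mod 2.
P = np.array([[1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]])
G = np.hstack([np.eye(4, dtype=int), P])
H = np.hstack([P.T, np.eye(3, dtype=int)])

def keystream(key, n):
    """Toy key-driven scrambler (NOT cryptographically secure): a seeded PRNG bit stream."""
    return np.random.default_rng(key).integers(0, 2, n)

def encode(data4, key):
    code = data4 @ G % 2                       # error-correction encoding
    return (code + keystream(key, 7)) % 2      # keyed scrambling of the codeword

def decode(recv7, key):
    code = (recv7 + keystream(key, 7)) % 2     # descramble first; channel errors pass through
    syndrome = H @ code % 2
    for pos in range(7):                       # single-error syndrome lookup
        if np.array_equal(syndrome, H[:, pos]):
            code = code.copy(); code[pos] ^= 1
            break
    return code[:4]                            # systematic code: data bits come first

data, key = np.array([1, 0, 1, 1]), 42
tx = encode(data, key)
tx[3] ^= 1                                     # one bit error on the channel
print(decode(tx, key))                         # the correct key recovers [1 0 1 1]
print(decode(tx, 7))                           # a wrong key almost surely does not
```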
|
396 |
Matlab Implementation of a Tornado Forward Error Correction Code
Noriega, Alexandra, 05 1900 (has links)
This research discusses the design of a Tornado forward error correction (FEC) channel code for sending digital data streams to a receiver. The complete design is based on the Tornado channel code with binary phase shift keying (BPSK) modulation over an additive white Gaussian noise (AWGN) channel. The communication link was simulated in Matlab, which shows the theoretical system efficiency, and the data stream was then fed as input to the simulated communication system in Matlab. The purpose of this paper is to introduce the audience to a simulation technique that has been successfully used to determine how well an FEC code can be expected to work when transferring digital data streams. The goal is to use these data to show how FEC optimizes a digital data stream to obtain a better digital communication system. The results conclude with comparisons of different possible configurations of the Tornado FEC code.
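The baseline such a simulation is measured against can be sketched briefly (a generic illustration in Python rather than the Matlab code of this work): uncoded BPSK over an AWGN channel, with the simulated bit error rate compared to the theoretical value, which is the reference any FEC such as the Tornado code has to improve upon.

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(0)
n_bits = 200_000

for ebn0_db in (0, 2, 4, 6):
    ebn0 = 10 ** (ebn0_db / 10)
    bits = rng.integers(0, 2, n_bits)
    symbols = 1 - 2 * bits                       # BPSK mapping: 0 -> +1, 1 -> -1
    noise = rng.standard_normal(n_bits) / sqrt(2 * ebn0)
    detected = (symbols + noise) < 0             # hard decision at the receiver
    ber_sim = np.mean(detected != bits)
    ber_theory = 0.5 * erfc(sqrt(ebn0))          # uncoded BPSK over AWGN
    print(f"Eb/N0 = {ebn0_db} dB  simulated BER = {ber_sim:.4f}  theoretical = {ber_theory:.4f}")
```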
|
397 |
Evaluating Query Estimation Errors Using Bootstrap Sampling
Cal, Semih, 29 July 2021 (has links)
No description available.
|
398 |
Error as a mechanism of interruption of representation that problematizes the distance between work and spectator proposed by traditional theater practices
Novoa Gómez, Tania Loreto, January 2015 (has links)
Master of Arts with a specialization in theater direction / The present research constitutes a process of dual relevance (theoretical and practical) that seeks to problematize the distance between work and spectator proposed by traditional theater practices, through the development and application of the mechanism of error to a staging, with the aim of strengthening the relational character of the participants during the theatrical phenomenon. We understand error as the representation of unforeseen events intended to interrupt the continuum of the representation, a procedure we work through along two methodological lines: manipulating a specific plot so that the logical progression of the actions is hindered until it stops, and preventing the actor's attempt at concealment behind the structure of the character through the progressive revelation of the actor's own person. This thesis is put to the test in a scenic object that uses parts of the dramaturgy of Romeo and Juliet by William Shakespeare (trans. Pablo Neruda) to be performed and interrupted with the mechanism of error, a staging process that draws methodological tools from Devising Theatre. As the expected result, we propose the constitution of a theatrical event that provides the conditions for the spectator to offer responses from his or her individual particularity in relation to the events posed by the scenic phenomenon.
|
399 |
Estimation of the algebraic error and stopping criteria in numerical solution of partial differential equations (Odhady algebraické chyby a zastavovací kritéria v numerickém řešení parciálních diferenciálních rovnic)
Papež, Jan, January 2011 (has links)
Title: Estimation of the algebraic error and stopping criteria in numerical solution of partial differential equations Author: Jan Papež Department: Department of Numerical Mathematics Supervisor of the master thesis: Zdeněk Strakoš Abstract: After introduction of the model problem and its properties we describe the Conjugate Gradient Method (CG). We present the estimates of the energy norm of the error and a heuristic for the adaptive refinement of the estimate. The difference in the local behaviour of the discretization and the algebraic error is illustrated by numerical experiments using the given model problem. A posteriori estimates for the discretization and the total error that take into account the inexact solution of the algebraic system are then discussed. In order to get a useful perspective, we briefly recall the multigrid method. Then the Cascadic Conjugate Gradient Method of Deuflhard (CCG) is presented. Using the estimates for the error presented in the preceding parts of the thesis, the new stopping criteria for CCG are proposed. The CCG method with the new stopping criteria is then tested. Keywords: numerical PDE, discretization error, algebraic error, error estimates, locality of the error, adaptivity
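A minimal Python sketch of the kind of algebraic error estimate discussed here (the standard delay-based lower bound ‖x − x_j‖_A² ≥ Σ_{i=j}^{j+d−1} γ_i ‖r_i‖² accumulated during CG; the SPD model matrix below is an arbitrary stand-in for the discretized problem, not the thesis code):

```python
import numpy as np

def cg_with_error_estimate(A, b, iters, delay=4):
    """Plain CG that stores gamma_i * ||r_i||^2 for a delay-based A-norm error lower bound."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    terms, iterates = [], [x.copy()]
    for _ in range(iters):
        Ap = A @ p
        gamma = (r @ r) / (p @ Ap)
        terms.append(gamma * (r @ r))            # gamma_i * ||r_i||^2
        x = x + gamma * p
        r_new = r - gamma * Ap
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
        iterates.append(x.copy())
    # Lower bound on ||x* - x_j||_A^2 using a delay of `delay` further iterations
    estimates = [sum(terms[j:j + delay]) for j in range(iters - delay)]
    return iterates, estimates

rng = np.random.default_rng(0)
n = 100
Q = rng.standard_normal((n, n))
A = Q @ Q.T + n * np.eye(n)                      # a small SPD stand-in for the model problem
b = rng.standard_normal(n)
x_star = np.linalg.solve(A, b)

iterates, est = cg_with_error_estimate(A, b, iters=30)
for j in (0, 10, 20):
    e = iterates[j] - x_star
    print(f"j={j:2d}  estimate={est[j]:.3e}  true ||x - x_j||_A^2 = {e @ A @ e:.3e}")
```

Stopping the algebraic iteration once such an estimate drops below the estimated discretization error is the idea behind the stopping criteria discussed above.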
|
400 |
Analysis of radiation induced errors in transistors in memory elements
Masani, Deekshitha, 01 December 2020 (has links)
From the first integrated circuit, a 16-transistor chip built by Heiman and Steven Hofstein in 1962, to the 39.54 billion MOSFETs of the latest 7 nm FinFET technology as of 2019, the scaling of transistors remains challenging. Scaling always has to satisfy the constraints of minimal power, minimal area, and the highest possible speed. As of 2020, the world's smallest transistor is 1 nm long, built by a team at Lawrence Berkeley National Laboratory. Looking at the latest 14 nm and 7 nm technologies, where a single die holds more than a billion transistors, fabricating a die at a 1 nm technology node is even more challenging. Scaling keeps going, and if silicon does not satisfy the requirements, designers switch to carbon nanotubes, molybdenum disulfide, or other newer materials. Transistor sizes keep shrinking, but the pressure of radiation effects on transistors drives the search for ever more efficient circuits that can tolerate errors. Radiation-induced errors, caused by high-energy particles, can strike a node and flip its value. It is not possible to have a perfect material that keeps a circuit error-free, but it is possible to preserve the value held before the error and recover it even after the error occurs. In advanced technologies, because of transistor scaling, multiple simultaneous radiation-induced errors are the issue, and different latch designs have been proposed to fix this problem. Using CMOS 90 nm technology, different latch designs are proposed that recover their value even after an error strikes the latch.
Initially, the errors of concern were single event upsets (SEUs), in which a high-energy radiation particle strikes only one transistor. In the era of aggressive scaling, multiple simultaneous radiation errors have become common; typical among these are double node upsets (DNUs), which occur when a high-energy radiation particle strikes two transistors, a consequence of one transistor being replaced by more than one after scaling. Existing SEU and DNU designs accurately determine the error rates in a circuit. With reference to the dissertation of Dr. Adam Watkins, which proposed the HRDNUT latch in the paper "Analysis and mitigation of multiple radiation induced errors in modern circuits", such circuits can recover their value within 2.13 ps. Two circuits are introduced here to increase the speed of recovering the value after a high-energy particle strikes a node. In the evaluation of past designs, how the error is introduced inside the circuit is not clear; some designs used a pass gate to introduce the error as a logic value, but not in terms of voltage. The current thesis introduces a method to inject errors with reduced power and delay overhead compared to previous circuits. The errors are injected into the circuits from the literature survey, and the delay and power with and without the injected error are compared; error injection in the two new circuits is also shown and compared against the case with no injected errors.
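As a purely behavioral (logic-level) illustration of injecting an upset and checking whether a hardened storage cell recovers (a toy model, not the HRDNUT latch or the transistor-level 90 nm designs discussed above), the Python sketch below stores a bit in three redundant nodes with majority-vote feedback: a single event upset is restored, while a double node upset defeats this simple cell, which is exactly why dedicated DNU-tolerant latch designs are needed.

```python
def majority(a, b, c):
    """Majority vote of three logic values."""
    return (a & b) | (a & c) | (b & c)

class TripleRedundantCell:
    """Toy hardened storage cell: three copies of the stored bit with majority-vote feedback."""
    def __init__(self, value):
        self.nodes = [value, value, value]

    def inject(self, *positions):
        """Model a radiation strike by flipping the listed internal nodes."""
        for p in positions:
            self.nodes[p] ^= 1

    def settle(self):
        """Feedback loop: every node is rewritten with the majority of all three nodes."""
        voted = majority(*self.nodes)
        self.nodes = [voted, voted, voted]
        return voted

cell = TripleRedundantCell(1)
cell.inject(0)                          # single event upset (SEU) on one node
print("after SEU:", cell.settle())      # recovers the stored 1

cell = TripleRedundantCell(1)
cell.inject(0, 2)                       # double node upset (DNU) on two nodes
print("after DNU:", cell.settle())      # this simple cell loses the stored value
```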
|