
An investigation of the market model when prices are observed with error

Gendron, Michel January 1984 (has links)
The market model, which relates securities returns to their systematic risk (β), plays a major role in finance. The estimation of β, in particular, is fundamental to many empirical studies and investment decisions. This dissertation develops a model which explains the observed serial correlations in returns and the intervaling effects that are inconsistent with the market model assumptions. The model accounts for thin trading and different frictions in the trading process, and has as special cases other models of thin trading and frictions presented in the finance literature. The main assumption of the model is that the prices observed in the market and used to compute returns differ by an error from the true prices generated by a Geometric Brownian Motion model; hence its name, the error in prices (EIP) model. Three estimation methods for β are examined for the EIP model: the Maximum Likelihood (ML) method, the Least Squares (LS) method, and a method of moments. It is suggested that the EIP model be viewed as a missing-information model and that the EM algorithm be used to find the ML estimates of its parameters. The approximate small-sample and asymptotic properties of the LS estimate of β are derived. It is shown that replacing the true covariances by their sample-moment estimates leads to a convenient and familiar form for a consistent estimate of β. Finally, some illustrations of six different estimation methods for β are presented using simulated and real securities returns. / Business, Sauder School of / Graduate
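The EIP mechanism can be sketched in a few lines. The simulation below is purely illustrative (all parameter values are invented, and the estimator shown is plain LS, not the dissertation's EM-based ML procedure): adding i.i.d. noise to true log prices induces the negative first-order serial correlation in observed returns that the model is built to explain, while in this simple setting the LS slope against the market stays close to the true β.

```python
import numpy as np

rng = np.random.default_rng(0)
T, beta_true = 20_000, 1.2

# True returns: the asset loads on the market with slope beta_true.
rm = rng.normal(0.0005, 0.01, T)                    # market log returns
ra_true = beta_true * rm + rng.normal(0, 0.005, T)  # true asset log returns
log_p_true = np.cumsum(ra_true)

# Errors in prices: observed log price = true log price + i.i.d. noise.
log_p_obs = log_p_true + rng.normal(0, 0.01, T)
ra_obs = np.diff(log_p_obs)                         # observed returns
rm_obs = rm[1:]

# LS slope of observed returns on market returns.
beta_ls = np.cov(ra_obs, rm_obs)[0, 1] / np.var(rm_obs, ddof=1)

# The differenced price error gives observed returns an MA(1) structure,
# i.e. negative first-order serial correlation absent from true returns.
auto1 = np.corrcoef(ra_obs[:-1], ra_obs[1:])[0, 1]
```

Because the price error is differenced into two consecutive observed returns, each noise draw enters with opposite signs one period apart, which is exactly the intervaling-sensitive serial correlation the abstract refers to.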

The accuracy of parameter estimates and coverage probability of population values in regression models upon different treatments of systematically missing data

Othuon, Lucas Onyango A. 11 1900 (has links)
Several methods are available for the treatment of missing data. Most of the methods are based on the assumption that data are missing completely at random (MCAR). However, data sets that are MCAR are rare in psycho-educational research. This gives rise to the need for investigating the performance of missing data treatments (MDTs) with non-randomly or systematically missing data, an area that has not received much attention from researchers in the past. In the current simulation study, the performance of four MDTs, namely, mean substitution (MS), pairwise deletion (PW), expectation-maximization (EM), and regression imputation (RS), was investigated in a linear multiple regression context. Four investigations were conducted, involving four predictors under low and high multiple R², and nine predictors under low and high multiple R². In addition, each investigation was conducted under three different sample size conditions (94, 153, and 265). The design factors were missing pattern (2 levels), percent missing (3 levels), and non-normality (4 levels). This design gave rise to 72 treatment conditions. The sampling was replicated one thousand times in each condition. MDTs were evaluated based on accuracy of parameter estimates. In addition, the bias in parameter estimates, and the coverage probability of regression coefficients, were computed. The effect of missing pattern, percent missing, and non-normality on absolute error for the R² estimate was of practical significance. In the estimation of R², EM was the most accurate under the low R² condition, and PW was the most accurate under the high R² condition. No MDT was consistently least biased under the low R² condition. However, with nine predictors under the high R² condition, PW was generally the least biased, with a tendency to overestimate the population R². The mean absolute error (MAE) tended to increase with increasing non-normality and increasing percent missing.
Also, the MAE of the R² estimate tended to be smaller under the monotonic pattern than under the non-monotonic pattern. MDTs were most differentiated at the highest level of percent missing (20%), and under the non-monotonic missing pattern. In the estimation of regression coefficients, RS generally outperformed the other MDTs with respect to accuracy of regression coefficients as measured by MAE. However, EM was competitive under the four-predictor, low R² condition. MDTs were most differentiated only in the estimation of β₁, the coefficient of the variable with no missing values. MDTs were undifferentiated in their performance in the estimation of β₂, ..., βp, p = 4 or 9, although the MAE remained fairly constant across all the regression coefficients. The MAE increased with increasing non-normality and percent missing, but decreased with increasing sample size. The MAE was generally greater under the non-monotonic pattern than under the monotonic pattern. With four predictors, the least bias was under RS regardless of the magnitude of the population R². With nine predictors, the least bias was under PW regardless of the population R². The results for coverage probabilities were generally similar to those for the estimation of regression coefficients, with coverage probabilities closest to nominal alpha under RS. As expected, coverage probabilities decreased with increasing non-normality for each MDT, with values closest to the nominal value for normal data. MDTs were more differentiated with respect to coverage probabilities under the non-monotonic pattern than under the monotonic pattern. The results have numerous important implications for researchers. First, the choice of MDT was found to depend on the magnitude of the population R², the number of predictors, and the parameter estimate of interest. With the estimation of R² as the goal of analysis, use of EM is recommended if the anticipated R² is low (about .2). However, if the anticipated R² is high (about .6), use of PW is recommended.
With the estimation of regression coefficients as the goal of analysis, the choice of MDT was found to be most crucial for the variable with no missing data. The RS method is recommended with respect to estimation accuracy of regression coefficients, although greater bias was recorded under RS than under PW or MS when the number of predictors was large (i.e., nine predictors). Second, the choice of MDT seems to be of little concern if the proportion of missing data is 10 percent, and also if the missing pattern is monotonic rather than non-monotonic. Third, the proportion of missing data seems to have less impact on the accuracy of parameter estimates under the monotonic missing pattern than under the non-monotonic missing pattern. Fourth, in the control of Type I error rates under the low R² condition, the EM method should be used, as it produced coverage probabilities of regression coefficients closest to the nominal value at the .05 level. However, in the control of Type I error rates under the high R² condition, the RS method is recommended. Considering that simulated data were used in the present study, future research should attempt to validate these findings using real field data. A future investigator could also modify the number of predictors, as well as the confidence interval used in the calculation of coverage probabilities, to extend the generalization of the results. / Education, Faculty of / Educational and Counselling Psychology, and Special Education (ECPS), Department of / Graduate
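To make two of the four MDTs concrete, the sketch below simulates a two-predictor regression with systematically (non-randomly) missing data and applies mean substitution and regression imputation. The design here (sample size, missingness rule, coefficients) is invented and far simpler than the study's 72-condition design; it only shows the mechanics of each treatment.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5_000

# Two correlated predictors and a known outcome model.
x1 = rng.normal(size=n)
x2 = 0.5 * x1 + rng.normal(scale=np.sqrt(0.75), size=n)
y = 1.0 * x1 + 1.0 * x2 + rng.normal(size=n)

# Systematic (non-random) missingness: x2 is missing whenever x1 is high.
missing = x1 > 0.5
x2_obs = np.where(missing, np.nan, x2)

def ols(X, y):
    """OLS coefficients with an intercept prepended."""
    X = np.column_stack([np.ones(len(X)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0]

b_full = ols(np.column_stack([x1, x2]), y)       # benchmark: no missing data

# Mean substitution (MS): fill missing values with the observed mean.
x2_ms = np.where(missing, np.nanmean(x2_obs), x2_obs)
b_ms = ols(np.column_stack([x1, x2_ms]), y)

# Regression imputation (RS): predict x2 from x1 using complete cases.
cc = ~missing
slope, intercept = np.polyfit(x1[cc], x2[cc], 1)
x2_rs = np.where(missing, intercept + slope * x1, x2_obs)
b_rs = ols(np.column_stack([x1, x2_rs]), y)
```

Comparing `b_ms` and `b_rs` against `b_full` over repeated replications (as the study does one thousand times per condition) is how bias and MAE of the coefficient estimates are obtained.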

Spatial Error Metrics and Registration for the Validation of Numerical Oceanographic Models

Ziegeler, Sean B 15 December 2012 (has links)
Numerical oceanographic models are constantly improving and must be validated when improvements are made. One means of determining how to improve these models and of performing validations is to compare model predictions to the future observed outcome, which is measured in many ways, including satellite imagery. Comparisons of model forecasts to future satellite images yield error measurements. One common problem with modern oceanographic models is spatial error, i.e., the incorrect placement and shape of ocean features, which renders traditional error metrics such as mean-square error and cross-correlation ineffective. Such problems are common in meteorological forecast verification as well, so the application of spatial error metrics has recently been a popular topic in that field. Spatial error metrics separate model error into a displacement component and an amplitude component, providing a more reliable assessment of numerical model inaccuracies and a more descriptive portrayal of model prediction skill. The application of spatial error metrics to oceanographic models has been sparse, while significant further advances exist in the field of medical image registration. These advances are presented, along with the modifications necessary to apply them to oceanographic model output and satellite imagery. Standard approaches and options for those methods in the literature are explored, and where the best arrangements of options are unclear, comparison studies are conducted. The first of these trials requires the reproduction of synthetic displacements in conjunction with synthetic amplitude perturbations across 480 Navy Coastal Ocean Model (NCOM) temperature fields from various regions of the globe throughout 2009. Results revealed that certain approaches novel to both meteorology and oceanography, including B-spline transforms and mutual information, combined with other common methods such as quasi-Newton optimization and land masking, could best recover the synthetic displacements under various synthetic intensity changes. The second set of trials compares temperature fields from NCOM and the Navy Layered Ocean Model (NLOM), at both 1/16-degree and 1/32-degree resolution, to Moderate Resolution Imaging Spectroradiometer (MODIS) satellite imagery. Lessons learned from the first trials were applied and extended. The resulting methods algorithmically reproduced portions of a previous hand-analyzed study and were successful in separating spatial from amplitude (temperature) errors.
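The displacement/amplitude separation can be illustrated with a toy registration. The sketch below recovers a known shift between two synthetic "temperature" fields using FFT phase correlation, a much simpler stand-in for the B-spline and mutual-information registration described above, and then reads the amplitude error off the aligned fields. The field, shift, and 20% amplitude error are invented.

```python
import numpy as np

# Synthetic "true" field: a smooth Gaussian blob standing in for an ocean feature.
y, x = np.mgrid[0:128, 0:128]
true_field = np.exp(-((x - 64.0) ** 2 + (y - 64.0) ** 2) / (2 * 15.0 ** 2))

# "Model" field: the same feature displaced by (dy, dx) cells, with a
# 20% amplitude (temperature) error on top.
dy, dx = 7, -5
model_field = 0.8 * np.roll(np.roll(true_field, dy, axis=0), dx, axis=1)

# Displacement recovery by FFT phase correlation: the normalized cross-power
# spectrum peaks at the displacement.
R = np.fft.fft2(model_field) * np.conj(np.fft.fft2(true_field))
corr = np.fft.ifft2(R / (np.abs(R) + 1e-12)).real
peak = np.unravel_index(np.argmax(corr), corr.shape)
est_dy = peak[0] - 128 if peak[0] > 64 else peak[0]
est_dx = peak[1] - 128 if peak[1] > 64 else peak[1]

# Undo the estimated displacement; the residual is pure amplitude error.
aligned = np.roll(np.roll(model_field, -est_dy, axis=0), -est_dx, axis=1)
amp_ratio = aligned.max() / true_field.max()
```

Phase correlation only handles rigid circular shifts; the dissertation's B-spline transforms allow locally varying displacements, which is why they suit real ocean features that deform as well as translate.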

The performance of three fitting criteria for multidimensional scaling /

McGlynn, Marion January 1990 (has links)
No description available.

Numerical Smoothness and Error Analysis for Parabolic Equations

Romutis, Todd 25 April 2018 (has links)
No description available.

On the relative properties of ordinary least squares estimation for the prediction problem with errors in variables /

Yum, Bong Jin, January 1981 (has links)
No description available.

An empirical study of relative orientation errors in aerial triangulation /

Forrest, Robert Brewster January 1964 (has links)
No description available.

Error propagation in strip triangulation and the standard errors of the adjusted coordinates /

Soliman, Afifi Hassan January 1968 (has links)
No description available.

A grammatical analysis of the spontaneous language use of schizophrenic versus normal L2 speakers of English

Smit, Mathilda 12 1900 (has links)
Thesis (MA (General Linguistics))—University of Stellenbosch, 2009. / ENGLISH ABSTRACT: It is well-known that there is an important relationship between language and schizophrenia, given that many of the primary symptoms of schizophrenia are language related (Cutting 1985; Wróbel 1990; Sadock & Sadock 2003; Paradis 2008). Furthermore, research has shown that certain schizophrenic bilinguals exhibit different symptoms in their first language (L1) than in their second language (L2) (De Zulueta 1984; De Zulueta, Gene-Cos & Grachev 2001; Paradis 2008; Southwood, Schoeman & Emsley 2009). This thesis investigates the L2 use of schizophrenic bilinguals to determine whether there are significant differences between the types and frequency of errors made in spontaneous L2 use by schizophrenic versus normal (i.e. non-schizophrenic) bilinguals. Four schizophrenic bilinguals and four normal bilinguals (the control group) participated in this study. The controls were matched to the schizophrenics in terms of age, gender, level of education, L1 (Afrikaans) and L2 (English). Informal, thirty minute interviews were conducted with each of the eight participants, recorded on video (for the schizophrenics) or audio tape (for the controls) and carefully transcribed. Each participant's speech sample was then analyzed grammatically by means of Morice & Ingram's (1982) assessment tool. This analysis involved determining the complexity of utterances (with reference to mean length of utterance, lexical density, and number of sentence-initial and sentence-medial conjunctions) and identifying phonological, morphological, lexical, syntactic and semantic errors. In this way a language profile was created for each participant and the differences between the two groups (schizophrenics and controls) were tested for statistical significance. 
On the basis of the results of these statistical tests, it is argued that the locus of differences between schizophrenic and normal L2 use is semantics, rather than any of the other aspects of grammar. The thesis concludes with a discussion of the main findings of the study, some criticisms of the assessment tool and suggestions for future research in this field.
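Two of the complexity measures mentioned above, mean length of utterance and lexical density, are straightforward to compute from a transcript. The toy example below uses invented utterances and an invented content-word list; it does not reproduce Morice & Ingram's (1982) actual coding scheme.

```python
# Toy utterances; the content-word set is a stand-in for a proper
# open-class (content) word tagging step.
utterances = [
    "the dog chased the cat",
    "she quickly ran home",
    "it was raining",
]
content_words = {"dog", "chased", "cat", "quickly", "ran", "home", "raining"}

tokens = [u.split() for u in utterances]
n_words = sum(len(t) for t in tokens)

# Mean length of utterance: words per utterance.
mlu = n_words / len(utterances)

# Lexical density: proportion of content (open-class) words.
lexical_density = sum(w in content_words for t in tokens for w in t) / n_words
```

In the study these measures, together with conjunction counts and error tallies, form each participant's language profile, and group differences are then tested for statistical significance.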


Jefferis, Robert P. 10 1900 (has links)
International Telemetering Conference Proceedings / October 25-28, 1999 / Riviera Hotel and Convention Center, Las Vegas, Nevada / Radio frequency power margins in well-planned line-of-sight (LOS) air-to-ground digital data transmission systems usually produce signal-to-noise ratios (SNR) that can deliver error-free service. Sometimes field performance falls short of design and customer expectations. Recent flight tests conducted by the tri-service Advanced Range Telemetry (ARTM) project confirm that the dominant source of bit errors and short-term link failures is "clusters" of severe error burst activity produced by flat fading, dispersive fading, and poor antenna patterns on airborne vehicles. This paper introduces the techniques used by ARTM to measure the bit error performance of aeronautical telemetry links.
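A minimal sketch of burst-oriented bit error measurement: compare transmitted and received bit streams, compute the overall bit error rate, and group error positions into bursts using an inter-error gap threshold. The fade model, error probabilities, and 100-bit gap below are invented for illustration and are not ARTM's actual measurement technique.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
tx = rng.integers(0, 2, n)                     # transmitted bits
rx = tx.copy()

# Inject two dense fade-induced error bursts plus sparse random errors.
for start, length in [(20_000, 40), (70_000, 25)]:
    flips = rng.random(length) < 0.4           # dense errors inside a fade
    rx[start:start + length] ^= flips.astype(rx.dtype)
rx ^= (rng.random(n) < 1e-5).astype(rx.dtype)  # background random errors

errors = np.flatnonzero(tx != rx)
ber = errors.size / n                          # overall bit error rate

# Group error positions into bursts: errors closer than `gap` bits apart
# belong to the same burst.
gap = 100
bursts = np.split(errors, np.flatnonzero(np.diff(errors) > gap) + 1)
n_bursts = len(bursts)
```

The point of the grouping step is the paper's central observation: a raw BER number hides whether errors arrive uniformly or in clusters, and clustered errors imply fading or antenna-pattern problems rather than inadequate SNR margin.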
