251
Comparison of fitted and default error models in benchmarking with quarterly-annual data. January 2009.
Chan, Kin Kwok. Thesis (M.Phil.)--Chinese University of Hong Kong, 2009. Includes bibliographical references (leaves 68-69). Abstract also in Chinese.
Contents:
Chapter 1 - Introduction (p.1)
Chapter 2 - The effect of using a default error model (p.8)
  2.1 - Formulae to measure the prediction error (p.9)
  2.2 - The effect of autoregressive parameter on SD of prediction error (p.10)
  2.3 - Misspecification error of SD of prediction error when using a default error model (p.12)
  2.4 - Reporting error of SD of prediction error when using a default error model (p.23)
Chapter 3 - Error modelling by using benchmarks (p.30)
  3.1 - Review of an existing method (p.30)
  3.2 - Introduction of Benchmark Forecasting Method (p.32)
  3.3 - Comparison of estimation methods (p.36)
Chapter 4 - Performance of using fitted error model (p.41)
  4.1 - Fitted value and reporting value of SD of prediction error when using a fitted error model (p.41)
  4.2 - Misspecification error and reporting error when using a fitted error model (p.45)
  4.3 - Suggestions on the selection of default and fitted error model (p.51)
Chapter 5 - Benchmarking performance of using fitted AR(1) model for usual ARMA survey error (p.55)
  5.1 - Model settings for two usual ARMA survey error (p.56)
  5.2 - Simulation studies (p.57)
Chapter 6 - An illustrative example: Traveller Accommodation series (p.62)
Chapter 7 - Conclusion (p.66)
Bibliography (p.68)
252
A reliability model of an inventory system. Chatham, Michael Duane. January 2011.
Typescript. Digitized by Kansas Correctional Industries.
253
Managing Dynamic Written Corrective Feedback: Perceptions of Experienced Teachers. Messenger, Rachel A. 01 March 2017.
Error correction for English language learners' (ELL) writing has long been debated in the field of teaching English to speakers of other languages (TESOL). Some researchers argue that written corrective feedback (WCF) is beneficial, while others disagree. This study examines the manageability of the innovative strategy Dynamic Written Corrective Feedback (DWCF), asking what factors influence its manageability (including how long marking sessions take on average) and what suggestions experienced teachers who have used DWCF offer. The strategy has been shown to be highly effective in previous studies, but its manageability has recently been questioned. A qualitative analysis of the manageability of DWCF was conducted through interviews with experienced teachers who have used DWCF, together with the author's own experience and reflections on using the strategy. The results indicate that the strategy can be manageable with some adaptations and by avoiding several common pitfalls.
254
Accelerating convex optimization in machine learning by leveraging functional growth conditions. Xu, Yi. 01 August 2019.
In recent years, unprecedented growth in the scale and dimensionality of data has posed major computational challenges for traditional optimization algorithms, so it has become important to develop efficient and effective optimization algorithms for the many problems arising in machine learning. Many traditional algorithms (e.g., the gradient descent method) are black-box algorithms: they are simple to implement but ignore the underlying geometrical properties of the objective function. A recent trend in accelerating these black-box algorithms is to leverage geometrical properties of the objective function such as strong convexity. However, most existing methods rely heavily on knowledge of strong convexity, which makes them inapplicable to problems that are not strongly convex or whose strong convexity parameter is unknown. To bridge the gap between black-box algorithms that ignore the problem's geometry and accelerated algorithms that require strong convexity, can we develop algorithms that adapt to the objective function's underlying geometrical property? To answer this question, this dissertation focuses on convex optimization problems and explores an error bound condition that characterizes how the objective function grows around a global minimum. Under this error bound condition, we develop algorithms that (1) adapt to the problem's geometrical property to enjoy faster convergence in stochastic optimization; (2) leverage the problem's structured regularizer to further improve the convergence speed; (3) address both deterministic and stochastic optimization problems with an explicit max-structured loss; and (4) leverage the objective function's smoothness to improve the convergence rate in stochastic optimization. We first considered stochastic optimization problems with a general stochastic loss and proposed two accelerated stochastic subgradient (ASSG) methods with theoretical guarantees; they iteratively solve the original problem approximately in a local region around a historical solution, with the size of the local region gradually decreasing as the solution approaches the optimal set. Second, we developed a new theory of the alternating direction method of multipliers (ADMM) with a new adaptive scheme for the penalty parameter, covering both deterministic and stochastic optimization problems with structured regularizers. Under a local error bound (LEB) condition, the proposed deterministic and stochastic ADMM enjoy improved iteration complexities of $\widetilde O(1/\epsilon^{1-\theta})$ and $\widetilde O(1/\epsilon^{2(1-\theta)})$, respectively. Then, we considered a family of optimization problems with an explicit max-structured loss and developed a novel homotopy smoothing (HOPS) algorithm that employs Nesterov's smoothing technique together with an accelerated gradient method and runs in stages. Under the same LEB condition, HOPS improves the iteration complexity from $O(1/\epsilon)$ to $\widetilde O(1/\epsilon^{1-\theta})$ (omitting a logarithmic factor) with $\theta\in(0,1]$. Next, we proposed new restarted stochastic primal-dual (RSPD) algorithms for the problem with a stochastic explicit max-structured loss, and obtained an iteration complexity better than $O(1/\epsilon^2)$ without assuming bilinear structure, the absence of which is a major obstacle to faster convergence for this problem.
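For readers unfamiliar with this family of conditions, one common way to write such a growth (error bound) condition, consistent with the exponents quoted above but stated here as background rather than quoted from the dissertation, is
$$\operatorname{dist}(x,\mathcal{X}_*) \;\le\; c\,\big(F(x)-F_*\big)^{\theta}\qquad \text{for all } x \text{ with } F(x)\le F_*+\epsilon_0,$$
where $F$ is the objective, $\mathcal{X}_*$ its set of global minima, $F_*$ the optimal value, and $c>0$, $\theta\in(0,1]$, $\epsilon_0>0$ are constants; strong convexity corresponds to $\theta=1/2$, and larger $\theta$ (sharper growth near the optimum) yields the smaller iteration complexities quoted above.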
Finally, we considered finite-sum optimization problems with a smooth loss and a simple regularizer. We proposed novel techniques that automatically search for the unknown parameter on the fly during optimization while maintaining almost the same convergence rate as in an oracle setting where the parameter is given. Under the Hölderian error bound (HEB) condition with $\theta\in(0,1/2)$, the proposed algorithm also enjoys intermediate convergence rates faster than those of its standard counterparts that assume only smoothness.
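As a rough illustration of the restart idea behind methods of the ASSG type, the following sketch runs a projected subgradient method inside a ball around the previous stage's averaged solution and shrinks both the ball and the step size geometrically between stages. It is a minimal sketch under assumed constants: the radii, step sizes, stage counts, and the toy $\ell_1$-regression objective are illustrative choices, not the dissertation's algorithm or parameters.

```python
import numpy as np

def restarted_subgradient(subgrad, x0, R0=10.0, eta0=0.05, stages=8,
                          iters_per_stage=300, shrink=0.5):
    """Projected subgradient descent restarted around a shrinking local region."""
    center = np.asarray(x0, dtype=float)
    R, eta = R0, eta0
    for _ in range(stages):
        x = center.copy()
        avg = np.zeros_like(x)
        for _ in range(iters_per_stage):
            x = x - eta * subgrad(x)
            d = x - center                      # project back onto the ball of
            nrm = np.linalg.norm(d)             # radius R around the stage center
            if nrm > R:
                x = center + R * d / nrm
            avg += x
        center = avg / iters_per_stage          # restart from the averaged iterate
        R *= shrink                             # shrink the local region ...
        eta *= shrink                           # ... and the step size together
    return center

# Toy usage: a nonsmooth convex objective f(x) = ||Ax - b||_1 with a subgradient oracle.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((50, 10)), rng.standard_normal(50)
subgrad = lambda x: A.T @ np.sign(A @ x - b)
x_hat = restarted_subgradient(subgrad, np.zeros(10))
print("final objective:", np.sum(np.abs(A @ x_hat - b)))
```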
255
Estimating standard errors of estimated variance components in generalizability theory using bootstrap procedures. Moore, Joann Lynn. 01 December 2010.
This study investigated the extent to which the rules proposed by Tong and Brennan (2007) for estimating standard errors of estimated variance components hold up across a variety of G theory designs, variance component structures, sample size patterns, and data types. Simulated data were generated for all combinations of conditions, and for each combination, point estimates, standard error estimates, and coverage for three types of confidence intervals were calculated for each estimated variance component and for the relative and absolute error variances across a variety of bootstrap procedures. It was found that, with some exceptions, Tong and Brennan's (2007) rules produced adequate standard error estimates for normal and polytomous data, while some of the results differed for dichotomous data. Additionally, some refinements to the rules were suggested with respect to nested designs. This study provides support for the use of bootstrap procedures for estimating standard errors of estimated variance components when data are not normally distributed.
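As a simplified illustration of the kind of computation involved (not Tong and Brennan's (2007) adjusted procedures, and for a single-facet crossed p x i design only), the sketch below estimates variance components from mean squares and then uses a naive "boot-p" resampling of persons to attach a bootstrap standard error to the person variance component; the simulated data, sample sizes, and number of replications are illustrative assumptions.

```python
import numpy as np

def variance_components(X):
    """ANOVA (expected mean squares) estimates for a crossed p x i design."""
    n_p, n_i = X.shape
    grand = X.mean()
    ms_p = n_i * np.sum((X.mean(axis=1) - grand) ** 2) / (n_p - 1)
    ms_i = n_p * np.sum((X.mean(axis=0) - grand) ** 2) / (n_i - 1)
    resid = X - X.mean(axis=1, keepdims=True) - X.mean(axis=0, keepdims=True) + grand
    ms_pi = np.sum(resid ** 2) / ((n_p - 1) * (n_i - 1))
    return {"p": (ms_p - ms_pi) / n_i, "i": (ms_i - ms_pi) / n_p, "pi": ms_pi}

# Simulate a 100-person by 20-item design with known variance components.
rng = np.random.default_rng(1)
n_p, n_i = 100, 20
X = (rng.normal(0, 1.0, n_p)[:, None]        # person effects, sigma^2_p = 1.00
     + rng.normal(0, 0.5, n_i)[None, :]      # item effects,   sigma^2_i = 0.25
     + rng.normal(0, 1.0, (n_p, n_i)))       # residual,       sigma^2_pi = 1.00

boot = []
for _ in range(1000):
    idx = rng.integers(0, n_p, n_p)           # naive boot-p: resample persons
    boot.append(variance_components(X[idx])["p"])
print("estimated sigma^2_p :", round(variance_components(X)["p"], 3))
print("naive boot-p SE     :", round(np.std(boot, ddof=1), 3))
```

The naive boot-p standard error shown here is known to be biased for some components, which is precisely why adjusted estimators of the kind studied in this thesis are of interest.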
256
Misclassification of the dependent variable in binary choice models. Gu, Yuanyuan, Economics, Australian School of Business, UNSW. January 2006.
Survey data are often subject to a number of measurement errors. The measurement error associated with a multinomial variable is called a misclassification error. In this dissertation we study such errors when the outcome is binary. It is known that ignoring such misclassification errors may affect the parameter estimates; see, for example, Hausman, Abrevaya and Scott-Morton (1998). However, previous studies have shown that robust estimation of the parameters is achievable if misclassification is taken into account. There have been many attempts to do so in the literature, and the major problem in implementing them is avoiding poor or fragile identifiability of the misclassification probabilities. Generally these parameters are restricted by imposing prior information on them, and such prior constraints are simple to impose within a Bayesian framework. Hence we consider a Bayesian logistic regression model that takes into account the misclassification of the dependent variable. A very convenient way to implement such a Bayesian analysis is to estimate the hierarchical model using the WinBUGS software package developed by the MRC biostatistics group, Institute of Public Health, at Cambridge University. WinBUGS allows us to estimate the posterior distributions of all the parameters with relatively little programming, and once the program is written it is trivial to change the link function, for example from logit to probit. If we wish to have more control over the sampling scheme or to deal with more complex models, we propose a data augmentation approach using the Metropolis-Hastings algorithm within a Gibbs sampling framework. The sampling scheme can be made more efficient by using a one-step Newton-Raphson algorithm to form the Metropolis-Hastings proposal. Results from empirically analyzing real data and from simulation studies suggest that if suitable priors are specified for the misclassification parameters and the regression parameters, then logistic regression allowing for misclassification yields better estimators than those that ignore misclassification.
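For context, a minimal sketch of the misclassification-adjusted likelihood in the spirit of Hausman, Abrevaya and Scott-Morton (1998) is shown below, fit by maximum likelihood rather than by the Bayesian/WinBUGS or data-augmentation approaches developed in the thesis; the simulated data, starting values, and logistic reparameterisation of the misclassification rates are illustrative assumptions. The fragile identifiability of the two misclassification rates in such a fit is exactly the issue that motivates imposing prior information in a Bayesian treatment.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

# Simulate data: a true logit model, with outcomes flipped at rates a0 (false 1) and a1 (false 0).
rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
beta_true, a0_true, a1_true = np.array([-0.5, 1.5]), 0.05, 0.10
y_true = rng.random(n) < expit(X @ beta_true)
y_obs = np.where(y_true, rng.random(n) > a1_true, rng.random(n) < a0_true).astype(float)

def negloglik(params):
    # P(y_obs = 1 | x) = a0 + (1 - a0 - a1) * logistic(x'beta)
    beta, a0, a1 = params[:-2], expit(params[-2]), expit(params[-1])
    p = np.clip(a0 + (1 - a0 - a1) * expit(X @ beta), 1e-12, 1 - 1e-12)
    return -np.sum(y_obs * np.log(p) + (1 - y_obs) * np.log(1 - p))

# Start the misclassification rates near zero to avoid a degenerate starting point.
start = np.r_[np.zeros(X.shape[1]), -2.5, -2.5]
res = minimize(negloglik, start, method="BFGS")
print("beta:", res.x[:-2], " a0:", expit(res.x[-2]), " a1:", expit(res.x[-1]))
```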
257
Operational Aspects of Decision Feedback Equalizers. Kennedy, Rodney Andrew (rodney.kennedy@anu.edu.au). January 1989.
The central theme is the study of error propagation effects in decision feedback equalizers (DFEs). The thesis contains: a stochastic analysis of error propagation in a tuned DFE; an analysis of the effects of error propagation in a blindly adapted DFE; a deterministic analysis of error propagation through input-output stability ideas; and testing procedures for establishing correct tap convergence in blind adaptation. To a lesser extent, the decision directed equalizer (DDE) is also treated.

Characterizing error propagation using finite state Markov process (FSMP) techniques is considered first. We classify how the channel and DFE parameters affect the FSMP model and establish tight bounds on the error probability and mean error recovery time of a tuned DFE. These bounds are shown to be too conservative for practical use and highlight the need for imposing stronger hypotheses on the class of channels for which a DFE may be effectively used.

In blind DFE adaptation we show that the effect of decision errors is to distort the adaptation relative to the use of a training sequence. The mean square error surface in an LMS-type setting is shown to be a concatenation of quadratic functions, exposing the possibility of false tap convergence to undesirable DFE parameter settings. Averaging analysis and simulation are used to verify this behaviour on some examples.

Error propagation in a tuned DFE is also examined in a deterministic setting. A finite error recovery time problem is set up as an input-output stability problem. Passivity theory is invoked to prove that a DFE can be effectively used on a channel satisfying a simple frequency-domain condition. These results give performance bounds which relate well with practice.

Testing for false tap convergence in blind adaptation concludes our study. Simple tests on output statistics are shown to be capable of discerning correct operation of a DDE. Similar tests are conjectured for the DFE, supported by proofs for the low-dimensional cases.
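A crude simulation sketch (not the thesis's FSMP or passivity analysis) of the error propagation mechanism: the feedback filter cancels post-cursor intersymbol interference using past decisions, so a single wrong decision corrupts the statistics of the next few symbols. The channel taps, noise level, and BPSK signalling below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
h = np.array([1.0, 0.7, 0.4])       # channel: main tap plus two post-cursor ISI taps
n, sigma = 100_000, 0.55
sym = rng.choice([-1.0, 1.0], size=n)
rx = np.convolve(sym, h)[:n] + sigma * rng.standard_normal(n)

def bit_error_rate(use_true_feedback):
    dec, errs = np.zeros(n), 0
    for k in range(n):
        past = sym[max(0, k - 2):k] if use_true_feedback else dec[max(0, k - 2):k]
        # estimate of the post-cursor ISI contributed by the two previous symbols
        fb = h[1:1 + len(past)][::-1] @ past if len(past) else 0.0
        dec[k] = 1.0 if rx[k] - fb >= 0 else -1.0
        errs += dec[k] != sym[k]
    return errs / n

print("BER with correct (genie) feedback :", bit_error_rate(True))
print("BER with decision feedback        :", bit_error_rate(False))
```

The gap between the two error rates is the error propagation penalty whose characterization is the subject of the thesis.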
258
Statistical aspects of two measurement problems: defining taxonomic richness and testing with unanchored responses. Ritter, Kerry. 03 April 2001.
Statisticians often focus on sampling or experimental design and data analysis while paying less attention to how the response is measured. However, the ideas of statistics may be applied to measurement problems with fruitful results. By examining the errors of measured responses, we may gain insight into the limitations of current measures and develop a better understanding of how to interpret and qualify the results. The first chapter considers the problem of measuring taxonomic richness as an index of habitat quality and stream health. In particular, we investigate numerical taxa richness (NTR), or the number of observed taxa in a fixed-count, as a means to assess differences in taxonomic composition and reduce cost. Because the number of observed taxa increases with the number of individuals counted, rare taxa are often excluded from NTR with smaller counts. NTR measures based on different counts effectively assess different levels of rarity, and hence target different parameters. Determining the target parameter that NTR is "really" estimating is an important step toward facilitating fair comparisons based on different sized samples. Our first study approximates the parameter unbiasedly estimated by NTR and explores alternatives for estimation based on smaller and larger counts.
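A small simulation sketch of the fixed-count idea (the community abundances and count sizes are illustrative assumptions, not data from the study): drawing a fixed number of individuals from the same community and counting the taxa observed shows that NTR computed at different counts targets different quantities.

```python
import numpy as np

rng = np.random.default_rng(0)
n_taxa = 60
rel_abund = rng.dirichlet(np.full(n_taxa, 0.3))    # a few common taxa, many rare ones

def expected_ntr(fixed_count, reps=2000):
    """Mean number of distinct taxa observed in a fixed-count subsample."""
    counts = rng.multinomial(fixed_count, rel_abund, size=reps)
    return (counts > 0).sum(axis=1).mean()

for c in (100, 300, 500):
    print(f"expected NTR at a fixed count of {c}: {expected_ntr(c):.1f}")
```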
The second investigation considers response error resulting from panel evaluations. Because people function as the measurement instrument, responses are particularly susceptible to variation not directly related to the experimental unit. As a result, observed differences may not accurately reflect real differences in the products being measured. Chapter Two offers several linear models to describe measurement error resulting from unanchored responses across successive evaluations over time, which we call u-errors. We examine changes to Type I and Type II error probabilities for standard F-tests in balanced factorial models where u-errors are confounded with an effect under investigation. We offer a relatively simple method for determining whether or not distributions of mean square ratios for testing fixed effects change in the presence of u-error. In addition, the validity of the test is shown to depend both on the level of confounding and on whether or not u-errors vary about a nonzero mean. Graduation date: 2002.
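A toy Monte Carlo sketch of the confounding issue (the session-shift model of a u-error below is an illustrative assumption, not Chapter Two's linear models): when each product is evaluated in its own sessions and each session carries an unanchored shift, the ordinary F-test, which treats observations as independent, rejects a true null far more often than its nominal level.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def rejection_rate(u_sd, reps=2000, n_sessions=4, panelists=10):
    """Monte Carlo Type I error of a one-way F-test for two identical products."""
    rejections = 0
    for _ in range(reps):
        groups = []
        for _product in range(2):                       # no true product difference
            shifts = rng.normal(0, u_sd, n_sessions)    # session-level u-errors
            y = shifts[:, None] + rng.normal(0, 1, (n_sessions, panelists))
            groups.append(y.ravel())
        rejections += stats.f_oneway(*groups).pvalue < 0.05
    return rejections / reps

print("Type I error without u-error:", rejection_rate(0.0))
print("Type I error with u-error   :", rejection_rate(0.7))
```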
259
A simple RLS-POCS solution for reduced complexity ADSL impulse shortening. Helms, Sheldon J. 03 September 1999.
Recently, with the rise of the World Wide Web, the need for high-speed data communications has grown tremendously. Several access techniques have been proposed which utilize the existing copper twisted-pair cabling. Of these, the xDSL family, particularly ADSL and VDSL, has shown great promise in providing broadband or near-broadband access over common telephone lines. A critical component of the ADSL and VDSL systems is the guard band needed to eliminate the interference caused by previously transmitted blocks. This guard band must come in the form of redundant samples at the start of every transmit block, and it must be at least as long as the channel impulse response. Since the required guard band would be much longer than the actual transmitted samples, techniques to shorten the channel impulse response must be considered. In this thesis, a new algorithm based on RLS error minimization and POCS optimization techniques is applied to the channel impulse-shortening problem in an ADSL environment. As will be shown, the proposed algorithm provides a much better solution with a minimal increase in complexity compared to existing LMS techniques. Graduation date: 2000.
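To make the impulse-shortening problem concrete, the sketch below designs a time-domain shortening filter using the classic maximum shortening-SNR (MSSNR) eigen-formulation, concentrating the energy of the effective response h*w inside a cyclic-prefix-length window. It is a hedged illustration only and is not the RLS-POCS algorithm proposed in the thesis; the channel model, filter length, window length, and delay are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import eigh, toeplitz

rng = np.random.default_rng(0)
h = 0.95 ** np.arange(64) * rng.standard_normal(64)   # a long channel impulse response
Lw, Lcp, delay = 20, 8, 2                              # TEQ length, target window, window start

# Convolution matrix H such that the effective response h*w equals H @ w.
H = toeplitz(np.r_[h, np.zeros(Lw - 1)], np.r_[h[0], np.zeros(Lw - 1)])
inside = np.zeros(H.shape[0], dtype=bool)
inside[delay:delay + Lcp] = True
A = H[inside].T @ H[inside]            # energy of h*w inside the target window
B = H[~inside].T @ H[~inside]          # energy of h*w outside the window

# Maximise w'Aw / w'Bw: take the generalised eigenvector for the largest eigenvalue.
vals, vecs = eigh(A, B + 1e-9 * np.eye(Lw))
w = vecs[:, -1]
eff = np.convolve(h, w)

def frac_outside(resp, start):
    return 1 - np.sum(resp[start:start + Lcp] ** 2) / np.sum(resp ** 2)

print("energy outside an 8-sample window, raw channel:",
      min(frac_outside(h, d) for d in range(len(h) - Lcp)))
print("energy outside the window, shortened (h*w)    :", frac_outside(eff, delay))
```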
260
Classification context in a machine learning approach to predicting protein secondary structure. Langford, Bill T. 13 May 1993.
An important problem in molecular biology is to predict the secondary structure of proteins from their primary structure. The primary structure of a protein is the sequence of amino acid residues. The secondary structure is an abstract description of the shape of the folded protein, with regions identified as alpha helix, beta strands, and random coil. Existing methods of secondary structure prediction examine a short segment of the primary structure and predict the secondary structure class (alpha, beta, coil) of an individual residue centered in that segment. The last few years of research have failed to improve these methods beyond the level of 65% correct predictions.
This thesis investigates whether these methods can be improved by permitting them to examine externally-supplied predictions for the secondary structure of other residues in the segment. The externally-supplied predictions are called the "classification context," because they provide contextual information about the secondary structure classifications of neighboring residues. The classification context could be provided by an existing algorithm that made initial secondary structure predictions, and then these could be taken as input by a second algorithm that would attempt to improve the predictions.
A series of experiments on both real and simulated classification context was performed to measure the possible improvement that could be obtained from classification context. The results showed that the classification context provided by current algorithms does not yield improved performance when used as input by those same algorithms. However, if the classification context is generated by randomly damaging the correct classifications, substantial performance improvements are possible. Even small amounts of randomly damaged correct context improve performance. Graduation date: 1994.
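A compact sketch of the two experimental ingredients described above, using toy data and scikit-learn logistic regressions as stand-ins for the thesis's learners (the sequence model, window width, and damage rate are illustrative assumptions): a second-stage classifier receives, as extra features, the predicted classes of neighbouring residues from a first-stage classifier, and, separately, context built by randomly damaging the correct classes. The toy accuracies need not reproduce the thesis's findings, but the mechanism is the same.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, w = 20_000, 7                                       # residues and window width
aa = rng.integers(0, 20, n)                            # toy "amino acid" sequence
ss = (np.convolve(aa, np.ones(5), "same") > 48).astype(int)   # toy 2-class "structure"

def windows(seq, width):
    half = width // 2
    pad = np.pad(seq, half, mode="edge")
    return np.stack([pad[i:i + n] for i in range(width)], axis=1)

def context_features(classes):
    cols = windows(classes, w)
    return np.delete(cols, w // 2, axis=1)             # neighbours' classes only

train, test = slice(0, n // 2), slice(n // 2, n)
X1 = windows(aa, w)
stage1 = LogisticRegression(max_iter=1000).fit(X1[train], ss[train])
context = stage1.predict(X1)                           # first-stage predictions everywhere

X2 = np.hstack([X1, context_features(context)])        # context = own predictions
stage2 = LogisticRegression(max_iter=1000).fit(X2[train], ss[train])

damaged = np.where(rng.random(n) < 0.2, 1 - ss, ss)    # correct classes, 20% flipped
X3 = np.hstack([X1, context_features(damaged)])        # context = damaged true classes
stage3 = LogisticRegression(max_iter=1000).fit(X3[train], ss[train])

for name, model, X in [("no context", stage1, X1),
                       ("predicted context", stage2, X2),
                       ("damaged true context", stage3, X3)]:
    print(f"{name:22s} accuracy:", (model.predict(X[test]) == ss[test]).mean())
```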