61.
Bayesian analysis of errors-in-variables in generalized linear models. 鄧沛權 (Tang, Pui-kuen). January 1992
Published or final version. Statistics. Doctoral (Doctor of Philosophy).

62.
On density theorems, connectedness results and error bounds in vector optimization. January 2001
Yung Hon-wai. Thesis (M.Phil.)--Chinese University of Hong Kong, 2001. Includes bibliographical references (leaves 133-139). Abstracts in English and Chinese.
Contents:
Chapter 0: Introduction, p.1
Chapter 1: Density Theorems in Vector Optimization, p.7
  1.1 Preliminary, p.7
  1.2 The Arrow-Barankin-Blackwell Theorem in Normed Spaces, p.14
  1.3 The Arrow-Barankin-Blackwell Theorem in Topological Vector Spaces, p.27
  1.4 Density Results in Dual Space Setting, p.32
Chapter 2: Density Theorem for Super Efficiency, p.45
  2.1 Definition and Criteria for Super Efficiency, p.45
  2.2 Henig Proper Efficiency, p.53
  2.3 Density Theorem for Super Efficiency, p.58
Chapter 3: Connectedness Results in Vector Optimization, p.63
  3.1 Set-valued Maps, p.64
  3.2 The Contractibility of the Efficient Point Sets, p.67
  3.3 Connectedness Results in Vector Optimization Problems, p.83
Chapter 4: Error Bounds in Normed Spaces, p.90
  4.1 Error Bounds of Lower Semicontinuous Functions in Normed Spaces, p.91
  4.2 Error Bounds of Lower Semicontinuous Convex Functions in Reflexive Banach Spaces, p.100
  4.3 Error Bounds with Fractional Exponents, p.105
  4.4 An Application to Quadratic Functions, p.114
Bibliography, p.133

63.
On merit functions, error bounds, minimizing and stationary sequences for nonsmooth variational inequality problems. Tan Lulin. December 2005. (CUHK electronic theses & dissertations collection)
In this thesis, we investigate a nonsmooth variational inequality problem (VIP) defined by a locally Lipschitz function F that is not necessarily differentiable or monotone on its domain, a closed convex set in a Euclidean space. First, we study the associated regularized gap functions and D-gap functions and compute their Clarke-Rockafellar directional derivatives and Clarke generalized gradients. Second, using these tools and extending the work of Fukushima and Pang (who studied the case where F is smooth), we present results on the relationship between minimizing sequences and stationary sequences of the D-gap functions, regardless of whether solutions of (VIP) exist. Finally, as another application, we show that, under the strong monotonicity assumption, the regularized gap functions admit error bounds with fractional exponents, and thereby we provide an algorithm of Armijo type to solve (VIP).
Adviser: Kung Fu Ng. Thesis (Ph.D.)--Chinese University of Hong Kong, 2005. Includes bibliographical references (p. 79-84) and index. Abstracts in English and Chinese. Source: Dissertation Abstracts International, Volume 67-11, Section B, page 6444. Electronic reproduction: Hong Kong, Chinese University of Hong Kong, [2012]; [Ann Arbor, MI], ProQuest Information and Learning, [200-]. Available via the World Wide Web; requires Adobe Acrobat Reader. School code: 1307.
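For orientation, the regularized gap function mentioned above is, in Fukushima's formulation, θ_α(x) = max_{y∈K} {⟨F(x), x−y⟩ − (α/2)‖x−y‖²}; when the constraint set K is simple enough to project onto, the maximizer has a closed form. The sketch below (a minimal illustration with a hypothetical affine F and a box K, not data or code from the thesis) evaluates it via that projection:

```python
import numpy as np

def project_box(z, lo, hi):
    """Euclidean projection onto the box [lo, hi]^n (componentwise clipping)."""
    return np.clip(z, lo, hi)

def regularized_gap(x, F, alpha, lo, hi):
    """Fukushima's regularized gap function
        theta_alpha(x) = max_{y in K} <F(x), x - y> - (alpha/2)||x - y||^2,
    which for a box K has the closed-form maximizer y = Proj_K(x - F(x)/alpha)."""
    Fx = F(x)
    y = project_box(x - Fx / alpha, lo, hi)
    d = x - y
    return Fx @ d - 0.5 * alpha * (d @ d), y

# Toy affine map F(x) = Mx + q on K = [0, 10]^2 (hypothetical data).
M = np.array([[2.0, 0.5], [0.5, 1.5]])
q = np.array([-1.0, -2.0])
F = lambda x: M @ x + q

theta, y = regularized_gap(np.array([3.0, 3.0]), F, alpha=1.0, lo=0.0, hi=10.0)
print(theta, y)  # theta_alpha >= 0 on K, and theta_alpha(x) = 0 iff x solves the VIP
```

θ_α is nonnegative on K and vanishes exactly at solutions of the VIP, which is what makes it a merit function; the thesis's nonsmooth setting changes the differentiability analysis of θ_α, not this evaluation.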

64.
Computational Algorithms for Improved Representation of the Model Error Covariance in Weak-Constraint 4D-Var. Shaw, Jeremy A. 07 March 2017
Four-dimensional variational data assimilation (4D-Var) provides an estimate of the state of a dynamical system through the minimization of a cost functional that measures the distance to a prior (background) state estimate and to observations over a time window. The fit of the analysis to each information input is determined by the specification of the error covariance matrices in the data assimilation system (DAS). Weak-constraint 4D-Var (w4D-Var) provides a theoretical framework for accounting for modeling errors in the analysis scheme. In addition to the specification of the background error covariance matrix, the w4D-Var formulation requires information on the model error statistics and specification of the model error covariance. Up to now, the increased computational cost associated with w4D-Var has prevented its practical implementation. Various simplifications to reduce the computational burden have been considered, including writing the model error covariance as a scalar multiple of the background error covariance and simplified modeling of the model error term.
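For orientation, a standard statement of the w4D-Var cost functional (notation assumed here rather than quoted from the thesis) penalizes the background misfit, the observation misfits, and the model errors η_k:

```latex
J(x_0, \eta_1, \dots, \eta_N)
  = \tfrac{1}{2}\,(x_0 - x_b)^{\mathsf{T}} B^{-1} (x_0 - x_b)
  + \tfrac{1}{2}\sum_{k=0}^{N} (y_k - H_k x_k)^{\mathsf{T}} R_k^{-1} (y_k - H_k x_k)
  + \tfrac{1}{2}\sum_{k=1}^{N} \eta_k^{\mathsf{T}} Q_k^{-1} \eta_k,
\qquad x_k = M_k(x_{k-1}) + \eta_k .
```

Here B, R_k, and Q_k are the background, observation, and model error covariances, H_k the observation operators, and M_k the forecast model; the scalar-multiple simplification mentioned above amounts to taking Q_k = γB, and strong-constraint 4D-Var is the special case η_k ≡ 0.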
In this thesis, the main objective is the development of computationally feasible techniques for the improved representation of the model error statistics in a data assimilation system. Three new approaches are considered: (i) a Monte Carlo method that uses an ensemble of w4D-Var systems to obtain flow-dependent estimates of the model error statistics; (ii) the evaluation of statistical diagnostic equations involving observation residuals to estimate the model error covariance matrix; and (iii) an adaptive tuning procedure based on the sensitivity of a short-range forecast error measure to the model error parametrization in the DAS.
The validity and benefits of these approaches are shown in two stages of numerical experiments. A proof-of-concept is shown using the Lorenz multi-scale model and the shallow water equations for a one-dimensional domain. The results show the potential of these methodologies to produce improved state estimates, as compared to other approaches in data assimilation. It is expected that the techniques presented will find an extended range of applications to assess and improve the performance of a w4D-Var system.

65.
Toward the estimation of errors in cloud cover derived by threshold methods. Chang, Fu-Lung. 01 July 1991
The accurate determination of cloud cover amount is important for characterizing
the role of cloud feedbacks in the climate system. Clouds have a large influence on
the climate system through their effect on the earth's radiation budget. As indicated
by the NASA Earth Radiation Budget Experiment (ERBE), the change in the earth's
radiation budget brought about by clouds is ~-15 Wm⁻² on a global scale, which
is several times the ~4 Wm⁻² gain in energy to the troposphere-surface system that
would arise from a doubling of CO₂ in the atmosphere. Consequently, even a small
change in global cloud amount may lead to a major change in the climate system.
Threshold methods are commonly used to derive cloud properties from satellite
imagery data. Here, in order to quantify the errors due to thresholds, cloud cover is
obtained using three different threshold values. The three thresholds are applied to
the 11 μm, (4 km)² NOAA-9 AVHRR GAC satellite imagery data over four oceanic
regions. Regional cloud-cover fractions are obtained for two different scales, (60 km)²
and (250 km)². The spatial coherence method for obtaining cloud cover from imagery
data is applied to coincident data. The differences between cloud cover derived by the
spatial coherence method and by the threshold methods depend on the setting of the
threshold. Because the spatial coherence method is believed to provide good estimates
of cloud cover for opaque, single-layered cloud systems, this study is limited to such
systems, and the differences in derived cloud cover are interpreted as errors due to the
application of thresholds. The threshold errors are caused by pixels that are partially
covered by clouds, and they depend on the regional-scale cloud cover.
The errors can be derived from the distribution of pixel-scale cloud cover.
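As a minimal sketch of such a threshold method (the brightness-temperature field, threshold values, and block size below are illustrative assumptions, not the thesis's data), pixels are classified against an 11 μm threshold and the binary mask is averaged over regional blocks:

```python
import numpy as np

def cloud_fraction(bt, threshold, block=15):
    """Classify each (4 km)^2 pixel as cloudy if its 11-micron brightness
    temperature falls below `threshold` (clouds are colder than the ocean
    surface), then average the binary mask over block x block pixel regions;
    block=15 gives roughly (60 km)^2 at 4 km resolution."""
    cloudy = (bt < threshold).astype(float)
    h, w = cloudy.shape
    h, w = h - h % block, w - w % block          # trim to whole blocks
    blocks = cloudy[:h, :w].reshape(h // block, block, w // block, block)
    return blocks.mean(axis=(1, 3))              # regional cloud-cover fractions

# Hypothetical scene; the sensitivity of the derived cover to the threshold
# choice is exactly the error this thesis quantifies.
rng = np.random.default_rng(0)
bt = rng.normal(285.0, 8.0, size=(300, 300))     # fake 11-micron BT field (K)
for thr in (275.0, 280.0, 285.0):
    print(thr, cloud_fraction(bt, thr).mean())
```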
Two simple models which assume idealized distributions for pixel-scale cloud
cover are constructed and used to estimate the threshold errors. The results show
that these models, though simple, perform rather well in estimating the differences
between cloud cover derived by the spatial coherence method and those obtained by
threshold methods. / Graduation date: 1992

66.
Semiparametric maximum likelihood for regression with measurement error. Suh, Eun-Young. 03 May 2001
Semiparametric maximum likelihood analysis allows inference in errors-in-variables
models with small loss of efficiency relative to full likelihood analysis but
with significantly weakened assumptions. In addition, since no distributional
assumptions are made for the nuisance parameters, the analysis more nearly
parallels that for usual regression. These highly desirable features and the high
degree of modelling flexibility permitted warrant the development of the approach
for routine use. This thesis does so for the special cases of linear and nonlinear
regression with measurement errors in one explanatory variable. A transparent and
flexible computational approach is developed, the analysis is exhibited on some
examples, and finite sample properties of estimates, approximate standard errors,
and likelihood ratio inference are clarified with simulation. / Graduation date: 2001
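To illustrate the problem being addressed (a toy simulation with hypothetical parameters, not an implementation of the thesis's semiparametric method), measurement error in one explanatory variable attenuates the naive least-squares slope toward zero:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
x = rng.normal(0.0, 1.0, n)            # true covariate (never observed)
w = x + rng.normal(0.0, 0.5, n)        # observed covariate with measurement error
y = 2.0 * x + rng.normal(0.0, 1.0, n)  # response depends on the true covariate

beta_true  = np.polyfit(x, y, 1)[0]    # ~2.0, the target of inference
beta_naive = np.polyfit(w, y, 1)[0]    # ~2.0 * 1/(1 + 0.25) = 1.6, attenuated
print(beta_true, beta_naive)
```

With error variance 0.25 against unit covariate variance, the reliability ratio is 1/1.25 = 0.8, so the naive slope concentrates near 1.6 instead of 2.0.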

67.
Using p-adic valuations to decrease computational error. Limmer, Douglas J. 08 June 1993
The standard way of representing numbers on computers gives rise to errors
which increase as computations progress. Using p-adic valuations can reduce
error accumulation. Valuation theory tells us that p-adic and standard valuations
cannot be directly compared. The p-adic valuation can, however, be used in
an indirect way. This gives a method of doing arithmetic on a subset of the
rational numbers without any error. This exactness is highly desirable, and can
be used to solve certain kinds of problems which the standard valuation cannot
conveniently handle. Programming a computer to use these p-adic numbers is
not difficult, and in fact uses computer resources similar to the standard floating-point
representation for real numbers. This thesis develops the theory of p-adic
valuations, discusses their implementation, and gives some examples where p-adic
numbers achieve better results than normal computer computation. / Graduation date: 1994
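As a small illustration of the valuation itself and of the exactness claim (a sketch only; the thesis's actual p-adic number representation and arithmetic are not reproduced here):

```python
from fractions import Fraction

def vp(q, p):
    """p-adic valuation v_p(q) of a nonzero rational q = a/b:
    the exponent of p in the numerator minus its exponent in the denominator."""
    q = Fraction(q)
    a, b, v = q.numerator, q.denominator, 0
    while a % p == 0:
        a //= p; v += 1
    while b % p == 0:
        b //= p; v -= 1
    return v

# The p-adic absolute value is |q|_p = p**(-vp(q, p)).
print(vp(Fraction(50, 27), 5))   #  2  (5^2 divides the numerator)
print(vp(Fraction(50, 27), 3))   # -3  (3^3 divides the denominator)

# Exact rational arithmetic avoids floating-point error accumulation:
print(sum([Fraction(1, 10)] * 10) == 1)   # True
print(sum([0.1] * 10) == 1.0)             # False: binary rounding error
```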

68.
Estimation of the standard error and confidence interval of the indirect effect in multiple mediator models. Briggs, Nancy Elizabeth. January 2006
Thesis (Ph. D.)--Ohio State University, 2006. / Title from first page of PDF file. Includes bibliographical references (p. 135-139).

69.
Analysis of epidemiological data with covariate errors. Delongchamp, Robert. 18 February 1993
In regression analysis, random errors in an explanatory variable cause the
usual estimates of its regression coefficient to be biased. Although this problem has
been studied for many years, routine methods have not emerged. This thesis
investigates some aspects of this problem in the setting of analysis of epidemiological
data.
A major premise is that methods to cope with this problem must account for
the shape of the frequency distribution of the true covariable, e.g., exposure. This is
not widely recognized, and many existing methods focus only on the variability of the
true covariable, rather than on the shape of its distribution. Confusion about this
issue is exacerbated by the existence of two classical models, one in which the
covariable is a sample from a distribution and the other in which it is a collection of
fixed values. A unified approach is taken here in which, for the latter model, more
attention than usual is given to the frequency distribution of the fixed values.
In epidemiology the distribution of exposures is often very skewed, making
these issues particularly important. In addition, the data sets can be very large, and
another premise is that differences in the performance of methods are much greater
when the samples are very large.
Traditionally, methods have largely been evaluated by their ability to remove
bias from the regression estimates. A third premise is that in large samples there may
be various methods that will adequately remove the bias, but they may differ widely in
how nearly they approximate the estimates that would be obtained using the
unobserved true values.
A collection of old and new methods is considered, representing a variety of
basic rationales and approaches. Some comparisons among them are made on
theoretical grounds provided by the unified model. Simulation results are given which
tend to confirm the major premises of this thesis. In particular, it is shown that the
performance of one of the most standard approaches, the "correction for attenuation"
method, is poor relative to other methods when the sample size is large and the
distribution of covariables is skewed. / Graduation date: 1993
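For reference, the classical correction divides the naive slope by the reliability ratio λ = σ_x² / (σ_x² + σ_u²). The sketch below (hypothetical parameters; the measurement error variance is assumed known) applies it with a skewed, lognormal exposure of the kind the thesis emphasizes:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
sigma_u = 0.6                          # measurement error SD, assumed known
x = rng.lognormal(0.0, 0.8, n)         # skewed "exposure", common in epidemiology
w = x + rng.normal(0.0, sigma_u, n)    # observed, error-prone exposure
y = 1.0 * x + rng.normal(0.0, 1.0, n)  # response with unit slope on true exposure

C = np.cov(w, y)
beta_naive = C[0, 1] / C[0, 0]                 # attenuated least-squares slope
lam = (C[0, 0] - sigma_u**2) / C[0, 0]         # estimated reliability ratio
beta_corrected = beta_naive / lam              # correction for attenuation
print(beta_naive, beta_corrected)              # attenuated slope, then ~1.0
```

The correction removes the bias in the slope, but, as the simulations in this thesis indicate, in large samples with skewed exposures it can track the estimates based on the unobserved true values poorly compared with methods that model the exposure distribution.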

70.
Comparison of estimates of autoregressive models with superimposed errors. Chong, Siu-yung. January 2001
Thesis (M. Phil.)--University of Hong Kong, 2001. / Includes bibliographical references (leaves 89-94).