81 |
Estimating measurement error in blood pressure, using structural equations modelling / Kepe, Lulama Patrick. January 2004
Thesis (MSc)--Stellenbosch University, 2004. / ENGLISH ABSTRACT: Every branch of science experiences measurement error to some extent. This may be due to
conditions under which measurements are taken, which may include the subject, the
observer, the measurement instrument, and data collection method. The inexactness
(error) can be reduced to some extent through the study design, but at some level further
reduction becomes difficult or impractical. It then becomes important to determine or
evaluate the magnitude of measurement error and perhaps evaluate its effect on the
investigated relationships. All this is particularly true for blood pressure measurement.
The gold standard for measuring blood pressure (BP) is a 24-hour ambulatory
measurement. However, this technology is not available in Primary Care Clinics in South
Africa and a set of three mercury-based BP measurements is the norm for a clinic visit.
The quality of the standard combination of the repeated measurements can be improved
by modelling the measurement error of each of the diastolic and systolic measurements
and determining optimal weights for the combination of measurements, which will give a
better estimate of the patient's true BP. The optimal weights can be determined through
the method of structural equations modelling (SEM), which allows a richer model than the
standard repeated measures ANOVA. SEM models are less restrictive and give more detail than
the traditional approaches.
Structural equations modelling, which is a special case of covariance structure modelling,
has proven useful in the social sciences over the years. Its appeal stems from the fact
that it includes multiple regression and factor analysis as special cases. Multi-type
multi-time (MTMT) models are a specific type of structural equations models that suit
the modelling of BP measurements. These designs (MTMT models) constitute a variant
of repeated measurement designs and are based on Campbell and Fiske's (1959)
suggestion that the quality of methods (time in our case) can be determined by comparing
them with other methods in order to reveal both systematic and random errors. MTMT models also proved superior to other data analysis methods because they
accommodate the physiology underlying BP. In particular, they are a strong alternative
for the analysis of BP measurements whenever repeated measures are
available, even when such measures do not constitute equivalent replicates. This thesis
focuses on SEM and its application to BP studies conducted in a community survey of
Mamre and the Mitchells Plain hypertensive clinic population. / AFRIKAANSE OPSOMMING: Every branch of science is subject to measurement error to a greater or lesser extent. This is a consequence of the conditions under which measurements are made, such as the unit being measured, the observer, the measurement instrument and the data collection method. The measurement error can be reduced through the study design, but at a certain point further improvement in precision becomes difficult and impractical. It is then important to determine the extent of the measurement error and to investigate its effect on the relationships of interest. These aspects are especially true for the measurement of blood pressure in humans.

The gold standard for measuring blood pressure is a 24-hour ambulatory measurement. However, this technology is not available in primary health clinics in South Africa, and a set of three mercury-based blood pressure measurements is the norm at a clinic visit. The quality of the standard combination of the repeated measurements can be improved by modelling the measurement error of the diastolic and systolic blood pressure measurements. Determining optimal weights for the linear combination of the measurements leads to a better estimate of the patient's true blood pressure. The weights can be computed with the method of structural equations modelling (SEM), which offers a richer class of models than the standard repeated-measures analysis of variance. These models have fewer restrictions and therefore give more information than the traditional approaches.

Structural equations modelling, a special case of covariance structure modelling, has been usefully applied in the social sciences over the years. Its appeal follows from the fact that multiple linear regression and factor analysis are also special cases of the method. Multi-type multi-time (MTMT) models are a specific structural equations model suited to the modelling of blood pressure. This type of model is a variant of the repeated-measures design and is based on Campbell and Fiske's (1959) suggestion that the quality of different methods can be determined by comparing them with other methods so as to distinguish systematic and random errors. The MTMT model also fits well with the underlying physiological aspects of blood pressure and its measurement. It is therefore a good alternative for studies in which the repeated measurements are not equivalent replicates.

This thesis focuses on the structural equations model and its application in hypertension studies conducted in the Mamre community and a hypertensive clinic population in Mitchells Plain.
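The optimal-weights idea in this abstract can be illustrated with a minimal sketch (not the thesis's SEM fit): if each of the three clinic readings is modelled as the true BP plus an independent error with its own variance, the minimum-variance unbiased combination weights each reading by its inverse error variance. The variances and readings below are invented for illustration.

```python
# Hypothetical per-reading error variances (mmHg^2) -- illustrative only,
# not estimates from the thesis. In SEM these would be fitted parameters.
error_var = [64.0, 36.0, 25.0]
readings = [142.0, 138.0, 136.0]   # three systolic readings at one visit

# Inverse-variance weights, normalised to sum to 1.
inv = [1.0 / v for v in error_var]
weights = [w / sum(inv) for w in inv]

# Minimum-variance combination under this simple error model; the less
# noisy later readings receive more weight than the first.
estimate = sum(w * r for w, r in zip(weights, readings))
print([round(w, 3) for w in weights], round(estimate, 1))  # estimate 137.8
```

Note how the weighted estimate (about 137.8) sits below the unweighted mean (about 138.7), because the readings assumed less noisy here happen to be the lower ones.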
|
82 |
Bayesian analysis of errors-in-variables in generalized linear models / 鄧沛權 (Tang, Pui-kuen). January 1992
published_or_final_version / Statistics / Doctoral / Doctor of Philosophy
|
83 |
Numerical errors in subfilter scalar variance models for large eddy simulation of turbulent combustion / Kaul, Colleen Marie, 1983-. 03 September 2009
Subfilter scalar variance is a key quantity for scalar mixing at the small scales of a turbulent flow and thus plays a crucial role in large eddy simulation (LES) of combustion. While prior studies have mainly focused on the physical aspects of modeling subfilter variance, the current work discusses variance models in conjunction with numerical errors due to their implementation using finite difference methods. Because of the prevalence of grid-based filtering in practical LES, the smallest filtered scales are generally under-resolved. These scales, however, are often important in determining the values of subfilter models. A priori tests on data from direct numerical simulation (DNS) of homogeneous isotropic turbulence are performed to evaluate the numerical implications of specific model forms in the context of practical LES evaluated with finite differences.

As with other subfilter quantities, such as kinetic energy, subfilter variance can be modeled according to one of two general methodologies. In the first of these, an algebraic equation relating the variance to gradients of the filtered scalar field is coupled with a dynamic procedure for coefficient estimation. Although finite difference methods substantially underpredict the gradient of the filtered scalar field, the dynamic method is shown to mitigate this error through overestimation of the model coefficient. The second group of models utilizes a transport equation for the subfilter variance itself or for the second moment of the scalar. Here, it is shown that the model formulation based on the variance transport equation is consistently biased toward underprediction of the subfilter variance. The numerical issues stem from making discrete approximations to the chain rule manipulations used to derive convective and diffusive terms in the variance transport equation associated with the square of the filtered scalar.
This set of approximations can be avoided by solving the equation for the second moment of the scalar, suggesting the numerical superiority of the second-moment formulation.
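The gradient-underprediction effect described above can be shown in a one-dimensional sketch (not the dissertation's a priori DNS tests): an algebraic variance model of the form var_sf ~ C * Delta^2 * |d(phi_bar)/dx|^2 is evaluated with second-order central differences, which attenuate the gradient of a resolved sine mode. The coefficient C and the test field are invented for illustration.

```python
import numpy as np

# One-dimensional sketch of the algebraic subfilter variance model
#   var_sf ~ C * Delta^2 * |d(phi_bar)/dx|^2
# evaluated with second-order central differences on a periodic domain.
# C and the test field are illustrative choices, not dissertation values.
N = 64
L = 2.0 * np.pi
x = np.linspace(0.0, L, N, endpoint=False)
dx = L / N

phi_bar = np.sin(3.0 * x)      # stand-in for a filtered scalar field
C = 0.1                        # illustrative model coefficient
Delta = 2.0 * dx               # filter width tied to the grid

# Central-difference gradient (periodic wrap via np.roll).
dphi_dx = (np.roll(phi_bar, -1) - np.roll(phi_bar, 1)) / (2.0 * dx)

var_sf = C * Delta**2 * dphi_dx**2

# Central differences attenuate the exact gradient 3*cos(3x) by the
# factor sin(3*dx)/(3*dx) < 1, so the modelled variance is underpredicted.
exact = C * Delta**2 * (3.0 * np.cos(3.0 * x))**2
print(var_sf.mean() / exact.mean())   # ratio below 1
```

The printed ratio is slightly below one even for this well-resolved mode; for marginally resolved scales near the grid cutoff the attenuation, and hence the underprediction, is much stronger.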
|
84 |
Entendendo alguns erros do Ensino Fundamental II que os alunos mantêm ao final do Ensino Médio / Understanding some mistakes from Secondary School that students hold until the end of High School / Ozores, Ana Luiza Festa. 15 April 2016
It is natural to regard error as something that must be avoided, an indicator of poor performance. From an early age, children are accustomed to seeking the right answers, so that, when their reasoning is wrong, they must redo it. This outcome is demanded at home by the family and at school by the educators. However, error is the oldest element in the learning process and, in addition to being a performance indicator, it also shows what the student knows or thinks he or she has understood. It is possible to notice that some students at the end of High School (Ensino Médio) retain errors and doubts that should have been resolved during Secondary School (Ensino Fundamental II). This work analyses why these doubts persist, since the analysis of such errors can help both the student and the teacher: the student, through feedback on what was done, in trying to improve his or her knowledge; and the teacher, by leading him or her to design new teaching strategies and lesson plans that best suit the target audience.
|
85 |
On density theorems, connectedness results and error bounds in vector optimization. January 2001
Yung Hon-wai. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2001. / Includes bibliographical references (leaves 133-139). / Abstracts in English and Chinese.
Chapter 0 --- Introduction --- p.1
Chapter 1 --- Density Theorems in Vector Optimization --- p.7
Chapter 1.1 --- Preliminary --- p.7
Chapter 1.2 --- The Arrow-Barankin-Blackwell Theorem in Normed Spaces --- p.14
Chapter 1.3 --- The Arrow-Barankin-Blackwell Theorem in Topological Vector Spaces --- p.27
Chapter 1.4 --- Density Results in Dual Space Setting --- p.32
Chapter 2 --- Density Theorem for Super Efficiency --- p.45
Chapter 2.1 --- Definition and Criteria for Super Efficiency --- p.45
Chapter 2.2 --- Henig Proper Efficiency --- p.53
Chapter 2.3 --- Density Theorem for Super Efficiency --- p.58
Chapter 3 --- Connectedness Results in Vector Optimization --- p.63
Chapter 3.1 --- Set-valued Maps --- p.64
Chapter 3.2 --- The Contractibility of the Efficient Point Sets --- p.67
Chapter 3.3 --- Connectedness Results in Vector Optimization Problems --- p.83
Chapter 4 --- Error Bounds in Normed Spaces --- p.90
Chapter 4.1 --- Error Bounds of Lower Semicontinuous Functions in Normed Spaces --- p.91
Chapter 4.2 --- Error Bounds of Lower Semicontinuous Convex Functions in Reflexive Banach Spaces --- p.100
Chapter 4.3 --- Error Bounds with Fractional Exponents --- p.105
Chapter 4.4 --- An Application to Quadratic Functions --- p.114
Bibliography --- p.133
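The "Error Bounds with Fractional Exponents" of Chapter 4.3 admit a tiny numeric illustration (an invented example, not taken from the thesis): for f(x) = x^2 with solution set S = {0}, the distance to S is controlled by a fractional power of the residual, d(x, S) = f(x)^(1/2), so a fractional-exponent error bound holds with modulus 1.

```python
# Toy fractional-exponent error bound (invented example, not from the thesis):
# for f(x) = x**2 with solution set S = {0}, the distance to S satisfies
#   d(x, S) <= tau * f(x)**(1/2)   with tau = 1,
# while no bound with exponent 1 holds uniformly near S.
def f(x):
    return x * x

def dist_to_S(x):      # S = {0}, so distance is just |x|
    return abs(x)

for x in [0.001, 0.5, 3.0, -7.0]:
    assert dist_to_S(x) <= 1.0 * f(x) ** 0.5 + 1e-12
print("fractional bound d(x, S) <= f(x)^(1/2) holds on the samples")
```

Near S the residual f(x) shrinks quadratically in the distance, which is exactly why the exponent 1/2 (rather than 1) is needed; this is the prototype behaviour behind the quadratic-function application of Chapter 4.4.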
|
86 |
On merit functions, error bounds, minimizing and stationary sequences for nonsmooth variational inequality problems. / CUHK electronic theses & dissertations collection. January 2005
In this thesis, we investigate a nonsmooth variational inequality problem (VIP) defined by a locally Lipschitz function F which is not necessarily differentiable or monotone on its domain, a closed convex set in a Euclidean space. / First, we study the associated regularized gap functions and the D-gap functions and compute their Clarke-Rockafellar directional derivatives and Clarke generalized gradients. Second, using these tools and extending the work of Fukushima and Pang (who studied the case when F is smooth), we present results on the relationship between minimizing sequences and stationary sequences of the D-gap functions, regardless of the existence of solutions of the (VIP). Finally, as another application, we show that, under the strong monotonicity assumption, the regularized gap functions have error bounds with fractional exponents, and thereby we provide an algorithm of Armijo type to solve the (VIP). / Tan Lulin. / "December 2005." / Adviser: Kung Fu Ng. / Source: Dissertation Abstracts International, Volume: 67-11, Section: B, page: 6444. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2005. / Includes bibliographical references (p. 79-84) and index. / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstracts in English and Chinese. / School code: 1307.
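The regularized gap function named in this abstract has a concrete closed form that is easy to sketch. Following Fukushima's construction (the smooth case the thesis extends), for VIP(F, C) one sets theta_a(x) = max over y in C of F(x)^T (x - y) - (a/2)||x - y||^2, whose maximizer is the projection of x - F(x)/a onto C; theta_a is nonnegative on C and vanishes exactly at solutions. The map F and the box C below are invented for illustration.

```python
import numpy as np

# Sketch of Fukushima's regularized gap function for VIP(F, C), C a box:
#   theta_a(x) = max_{y in C} ( F(x)^T (x - y) - (a/2)*||x - y||^2 ),
# with maximizer y* = Proj_C(x - F(x)/a). F and C are illustrative;
# the thesis treats general locally Lipschitz F.
def F(x):
    return np.array([2.0 * x[0], 4.0 * x[1]])   # gradient of x1^2 + 2*x2^2

lo, hi = np.array([0.5, 0.5]), np.array([2.0, 2.0])  # box C = [0.5, 2]^2
alpha = 1.0

def proj_box(z):
    return np.minimum(np.maximum(z, lo), hi)

def theta(x):
    y = proj_box(x - F(x) / alpha)   # closed-form maximizer
    d = x - y
    return float(F(x) @ d - 0.5 * alpha * (d @ d))

x_sol = np.array([0.5, 0.5])    # solves this VIP: F points into the corner
x_other = np.array([1.5, 1.0])

print(theta(x_sol), theta(x_other))   # 0.0 at the solution, 4.375 elsewhere
```

Minimizing theta over C is thus a merit-function reformulation of the VIP, and the fractional-exponent error bounds of the abstract relate theta(x) back to the distance from x to the solution set.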
|
87 |
Lexical and sublexical analysis of single-word reading and writing errors / Ross, Katrina. 07 July 2016
Within a dual-route neuropsychological model, two distinct but interrelated pathways are used to read and write, known as the lexical and sublexical routes. Individuals with reading and writing deficits often exhibit impairments in one or both of these routes, and therefore must rely on the combined power of the integrated system in print processing tasks. The resultant errors reflect varying degrees of lexical and sublexical accuracy in a single production. However, no system presently exists to analyze bimodal errors robustly in both routes. The goal of this project was to develop a system that simultaneously, quantitatively, and qualitatively captures lexical and sublexical errors for single-word reading and writing tasks. This system evaluates responses hierarchically in both routes according to proximity to a target. Each response earns a bivariate score [sublexical, lexical], which is plotted along x and y axes. This scoring system was developed using data from a novel treatment study for patients with acquired alexia/agraphia. Repeated-measures multivariate analyses of variance and post hoc analyses revealed a significant treatment effect in both the lexical and sublexical systems. Qualitative analyses were also conducted to evaluate patterns of change in both the trained and untrained modalities, in the sublexical and lexical systems. Overall, the results of this study indicate that treatment-induced evolution of reading/writing responses can be comprehensively represented by this novel scoring system. / 2018-07-07T00:00:00Z
|
88 |
Computational Algorithms for Improved Representation of the Model Error Covariance in Weak-Constraint 4D-Var / Shaw, Jeremy A. 07 March 2017
Four-dimensional variational data assimilation (4D-Var) provides an estimate to the state of a dynamical system through the minimization of a cost functional that measures the distance to a prior state (background) estimate and observations over a time window. The analysis fit to each information input component is determined by the specification of the error covariance matrices in the data assimilation system (DAS). Weak-constraint 4D-Var (w4D-Var) provides a theoretical framework to account for modeling errors in the analysis scheme. In addition to the specification of the background error covariance matrix, the w4D-Var formulation requires information on the model error statistics and specification of the model error covariance. Up to now, the increased computational cost associated with w4D-Var has prevented its practical implementation. Various simplifications to reduce the computational burden have been considered, including writing the model error covariance as a scalar multiple of the background error covariance and modeling the model error.
In this thesis, the main objective is the development of computationally feasible techniques for the improved representation of the model error statistics in a data assimilation system. Three new approaches are considered: a Monte Carlo method that uses an ensemble of w4D-Var systems to obtain flow-dependent estimates of the model error statistics; the evaluation of statistical diagnostic equations involving observation residuals to estimate the model error covariance matrix; and an adaptive tuning procedure based on the sensitivity of a short-range forecast error measure to the model error DAS parametrization.
The validity and benefits of these approaches are shown in two stages of numerical experiments. A proof-of-concept is shown using the Lorenz multi-scale model and the shallow water equations for a one-dimensional domain. The results show the potential of these methodologies to produce improved state estimates, as compared to other approaches in data assimilation. It is expected that the techniques presented will find an extended range of applications to assess and improve the performance of a w4D-Var system.
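The cost functional described in this abstract can be illustrated for a toy scalar model, where the background, observation, and model error covariances reduce to scalars. Every number below (the model coefficient a, the variances B, R, Q, and the observations) is an invented illustration, not a configuration from the thesis.

```python
import numpy as np

# Toy scalar weak-constraint 4D-Var cost over a window of K = 3 steps:
#   J(x_0..x_K) = (x0 - xb)^2 / B
#               + sum_k (x_k - y_k)^2 / R          (H = identity)
#               + sum_k eta_k^2 / Q, with eta_k = x_k - M(x_{k-1}),
# for the linear model M(x) = a*x. All numbers are illustrative only.
a, B, R, Q = 0.9, 1.0, 0.5, 0.1
xb = 1.0
y = np.array([1.1, 0.8, 0.7])       # observations at k = 1..3

def cost(x):                         # x = states at k = 0..3
    eta = x[1:] - a * x[:-1]         # model error increments
    return ((x[0] - xb)**2 / B
            + np.sum((x[1:] - y)**2 / R)
            + np.sum(eta**2 / Q))

# A strong-constraint trajectory (eta identically zero) and a candidate
# that spends some model error; w4D-Var minimizes J over all such x.
x_strong = xb * a**np.arange(4)      # [1, 0.9, 0.81, 0.729]
x_weak = np.array([1.0, 1.0, 0.85, 0.72])

print(cost(x_strong), cost(x_weak))
```

The point of the weak-constraint term is visible in the third sum: trajectories may deviate from the model dynamics, but each deviation is charged at a rate set by Q, which is exactly why the specification of the model error covariance matters so much.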
|
89 |
High-Dimensional Analysis of Convex Optimization-Based Massive MIMO Decoders / Ben Atitallah, Ismail. 04 1900
A wide range of modern large-scale systems relies on recovering a signal from noisy linear measurements. In many applications, the useful signal has inherent properties, such as sparsity, low-rankness, or boundedness, and making use of these properties and structures allows a more efficient recovery. Hence, a significant amount of work has been dedicated to developing and analyzing algorithms that can take advantage of the signal structure. Especially since the advent of Compressed Sensing (CS), there has been significant progress in this direction. Generally speaking, the signal structure can be harnessed by solving an appropriate regularized or constrained M-estimator.
In modern Multi-input Multi-output (MIMO) communication systems, all transmitted signals are drawn from finite constellations and are thus bounded. Besides, most recent modulation schemes such as Generalized Space Shift Keying (GSSK) or Generalized Spatial Modulation (GSM) yield signals that are inherently sparse. In the recovery procedure, sparsity and boundedness can be promoted by using ℓ1 norm regularization and by imposing an ℓ∞ norm constraint, respectively.
In this thesis, we propose novel optimization algorithms to recover certain classes of structured signals, with emphasis on MIMO communication systems. The exact analysis permits a clear characterization of how well these systems perform and allows an automatic tuning of the parameters. In each context, we define the appropriate performance metrics and analyze them exactly in the High Dimensional Regime (HDR).
The framework we use for the analysis is based on Gaussian process inequalities; in particular, on a new strong and tight version of a classical comparison inequality (due to Gordon, 1988) in the presence of additional convexity assumptions. The new framework that emerged from this inequality is coined the Convex Gaussian Min-max Theorem (CGMT).
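The ℓ1-plus-box formulation described above can be sketched with a generic proximal gradient loop: minimize 0.5*||Ax - y||^2 + lam*||x||_1 subject to ||x||_inf <= 1, alternating a soft-threshold (for the ℓ1 term) with a clip onto the box. This is a standard solver sketch, not the thesis's algorithm or its exact high-dimensional analysis; the dimensions, regularization weight, and noise level are invented.

```python
import numpy as np

# Box-constrained LASSO decoder sketch:
#   minimize 0.5*||A x - y||^2 + lam*||x||_1  s.t.  ||x||_inf <= 1,
# solved by proximal gradient: soft-threshold, then clip to the box.
# All sizes and parameters are illustrative choices.
rng = np.random.default_rng(0)
n, m, k = 64, 32, 4                      # signal dim, measurements, nonzeros

x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.choice([-1.0, 1.0], size=k)   # bounded sparse signal

A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true + 0.01 * rng.standard_normal(m)

lam = 0.02
step = 1.0 / np.linalg.norm(A, 2)**2     # 1/L for the quadratic term

x = np.zeros(n)
for _ in range(500):
    g = A.T @ (A @ x - y)                # gradient of 0.5*||Ax - y||^2
    z = x - step * g
    z = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # prox of l1
    x = np.clip(z, -1.0, 1.0)            # project onto the l_inf ball

print(np.linalg.norm(x - x_true))        # recovery error; small here
```

With m well above the usual compressed sensing threshold for k nonzeros, the decoder recovers the ±1 pattern closely; the CGMT-style analysis in the thesis characterizes exactly this kind of error as the dimensions grow proportionally.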
|
90 |
An analysis of suprasegmental errors in the interlanguage of North Vietnamese students of English / Dung, Le Thanh. January 1991
Stress and intonation play important roles in the production and perception of the English language. They are always very difficult for second language learners to acquire. Yet, a review of the literature reveals that these important suprasegmental features have not received due attention from second language researchers or teachers. In Vietnam in particular, there is no research to date which studies the stress and intonation errors in the performance of Vietnamese learners of English.

This study uses the procedures of Error Analysis to investigate the problem. Chapters one and two give a review of relevant literature and a description of the methodology of the study. In chapter three, the students' stress and intonation errors are described and classified, and the possible sources of those errors are discussed. Finally, chapter four shows implications and makes suggestions for the improvement of teaching and learning English stress and intonation.
|