81

Estimating measurement error in blood pressure, using structural equations modelling

Kepe, Lulama Patrick January 2004 (has links)
Thesis (MSc)--Stellenbosch University, 2004. / ENGLISH ABSTRACT: Any branch of science experiences measurement error to some extent. This may be due to the conditions under which measurements are taken, which may include the subject, the observer, the measurement instrument, and the data collection method. The inexactness (error) can be reduced to some extent through the study design, but at some level further reduction becomes difficult or impractical. It then becomes important to determine or evaluate the magnitude of measurement error and perhaps evaluate its effect on the investigated relationships. All this is particularly true for blood pressure measurement. The gold standard for measuring blood pressure (BP) is a 24-hour ambulatory measurement. However, this technology is not available in Primary Care Clinics in South Africa and a set of three mercury-based BP measurements is the norm for a clinic visit. The quality of the standard combination of the repeated measurements can be improved by modelling the measurement error of each of the diastolic and systolic measurements and determining optimal weights for the combination of measurements, which will give a better estimate of the patient's true BP. The optimal weights can be determined through the method of structural equations modelling (SEM), which allows a richer model than the standard repeated-measures ANOVA; it is less restrictive and gives more detail than the traditional approaches. Structural equations modelling, which is a special case of covariance structure modelling, has proven to be useful in the social sciences over the years. Its appeal stems from the fact that it includes multiple regression and factor analysis as special cases. Multi-type multi-time (MTMT) models are a specific type of structural equations model suited to the modelling of BP measurements. These designs (MTMT models) constitute a variant of repeated-measurement designs and are based on Campbell and Fiske's (1959) suggestion that the quality of methods (time, in our case) can be determined by comparing them with other methods in order to reveal both the systematic and the random errors. MTMT models also showed superiority over other data analysis methods because of their accommodation of the theory of BP. In particular, they proved to be a strong alternative for the analysis of BP measurements whenever repeated measures are available, even when such measures do not constitute equivalent replicates. This thesis focuses on SEM and its application to BP studies conducted in a community survey of Mamre and in the Mitchells Plain hypertensive clinic population.
/ AFRIKAANSE OPSOMMING (English translation): Every branch of science is subject to measurement error to a greater or lesser extent. This is a consequence of the circumstances under which measurements are made, such as the unit being measured, the observer, the measurement instrument and the data collection method. Measurement error can be reduced through the study design, but at a certain point further improvement in precision becomes difficult and impractical. It is then important to determine the extent of the measurement error and to investigate its effect on the relationships under study. These aspects are especially true for the measurement of blood pressure in humans. The gold standard for measuring blood pressure is a 24-hour ambulatory measurement. This technology is, however, not available in primary health care clinics in South Africa, and a set of three mercury-based blood pressure measurements is the norm at a clinic visit. The quality of the standard combination of the repeated measurements can be improved by modelling the measurement error of the diastolic and systolic blood pressure measurements. Determining optimal weights for the linear combination of the measurements leads to a better estimate of the patient's true blood pressure. The weights can be calculated with the method of structural equations modelling (SEM), which offers a richer class of models than the standard repeated-measures analysis of variance models. This model has fewer restrictions and therefore gives more information than the traditional approaches. Structural equations modelling, which is a special case of covariance structure modelling, has been usefully applied in the social sciences over the years. Its appeal follows from the fact that multiple linear regression and factor analysis are also special cases of the method. Multi-type multi-time (MTMT) models are a specific structural equations model suited to the modelling of blood pressure. This type of model is a variant of the repeated-measurements design and is based on Campbell and Fiske's (1959) suggestion that the quality of different methods can be determined by comparing them with other methods so as to distinguish systematic and random errors. The MTMT model also fits well with the underlying physiological aspects of blood pressure and its measurement. It is therefore a good alternative for studies where the repeated measurements are not equivalent replicates. This thesis focuses on the structural equations model and its application in hypertension studies conducted in the Mamre community and in a hypertensive clinic population in Mitchells Plain.
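As an illustration of the weighting idea described in this abstract, the sketch below combines three repeated readings with inverse-variance weights. It is not taken from the thesis: the error variances and readings are hypothetical values standing in for quantities that a fitted structural equations model would estimate, and uncorrelated errors are assumed.

```python
import numpy as np

def optimal_weights(error_variances):
    """Inverse-variance weights for combining repeated measurements of a
    single true value when the measurement errors are uncorrelated."""
    precision = 1.0 / np.asarray(error_variances, dtype=float)
    return precision / precision.sum()

# Hypothetical error variances (mmHg^2) for the three clinic readings,
# standing in for values a fitted structural equations model would supply.
sigma2 = [64.0, 36.0, 25.0]
w = optimal_weights(sigma2)

readings = np.array([148.0, 142.0, 140.0])  # three systolic readings (mmHg)
print("weights:", np.round(w, 3))
print("weighted estimate:", round(float(w @ readings), 1), "mmHg")
print("unweighted mean:  ", round(float(readings.mean()), 1), "mmHg")
```

Under these assumptions the later, less noisy readings receive more weight than the first, which is the sense in which a weighted combination can outperform the plain average of the three clinic measurements.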
82

Bayesian analysis of errors-in-variables in generalized linear models

鄧沛權, Tang, Pui-kuen. January 1992 (has links)
published_or_final_version / Statistics / Doctoral / Doctor of Philosophy
83

Classes of C(K) spaces with few operators

Schlackow, Iryna January 2008 (has links)
We investigate properties of Koszmider spaces. We show that if K and L are compact Hausdorff spaces with no isolated points, K is Koszmider and C(K) is isomorphic to C(L), then K and L are homeomorphic and, in particular, L is also Koszmider. We also analyse topological properties of Koszmider spaces and show that a connected Koszmider space is strongly rigid. In addition to Koszmider spaces, we introduce the notion of weakly Koszmider spaces. Having established an alternative characterisation thereof, we show that, while it is evident that every Koszmider space is weakly Koszmider, the reverse implication does not hold. We also prove that if C(K) and C(L) are isomorphic and K is weakly Koszmider, then so is L. However, if K is Koszmider, there always exists a non-Koszmider space L such that C(K) and C(L) are isomorphic. In the second part of the thesis we present two separable Koszmider spaces the construction of which does not use any set-theoretical assumptions except for the usual (ZFC) axioms. The first space is zero-dimensional, being the Stone space of a Boolean algebra. The second construction results in a separable connected Koszmider space.
84

Extensions of the Katznelson-Tzafriri theorem for operator semigroups

Seifert, David H. January 2014 (has links)
This thesis is concerned with extensions and refinements of the Katznelson-Tzafriri theorem, a cornerstone of the asymptotic theory of operator semigroups which recently has received renewed interest in the context of damped wave equations. The thesis comprises three main parts. The key results in the first part are a version of the Katznelson-Tzafriri theorem for bounded C_0-semigroups in which a certain function appearing in the original statement of the result is allowed more generally to be a bounded Borel measure, and bounds on the rate of decay in an important special case. The second part deals with the discrete version of the Katznelson-Tzafriri theorem and establishes upper and lower bounds on the rate of decay in this setting too. In an important special case these general bounds are then shown to be optimal for general Banach spaces but not on Hilbert space. The third main part, finally, turns to general operator semigroups. It contains a version of the Katznelson-Tzafriri theorem in the Hilbert space setting which relaxes the main assumption of the original result. Various applications and extensions of this general result are also presented.
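For orientation, the following is the standard single-operator special case of the Katznelson-Tzafriri theorem as usually stated in the literature; the notation (T, X, the spectrum sigma(T), the unit circle) is the conventional one and is not quoted from the thesis.

```latex
% Discrete Katznelson--Tzafriri theorem (standard special case):
% T is a power-bounded operator on a Banach space X, i.e. \sup_{n\ge 0}\|T^n\| < \infty.
\[
  \sigma(T) \cap \mathbb{T} \subseteq \{1\}
  \quad\Longrightarrow\quad
  \lim_{n \to \infty} \|T^n(I - T)\| = 0 .
\]
% The thesis studies how fast this norm decays and the analogous statements
% for bounded C_0-semigroups and for more general operator semigroups.
```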
85

Cladistic analysis of juvenile and adult hominoid cranial shape variables / The role of ontogeny for reconstructing hominid phylogeny

Unknown Date (has links)
Phylogenies constructed from skeletal data often contradict those built from genetic data. This study evaluates the phylogenetic utility of adult male, female, and juvenile hominoid cranial bones. First, I used geometric morphometric methods to compare the cranial bone shapes of seven primate genera (Gorilla, Homo, Hylobates, Macaca, Nomascus, Pan, and Pongo). I then coded these shapes as continuous characters and constructed cladograms via parsimony analysis for the adult male, female, and juvenile character matrices. Finally, I evaluated the similarity of these cladograms to one another and to the genetic phylogeny using topological distance software. The cladograms were no more similar to one another, or to the genetic phylogeny, than randomly generated trees were. These results suggest that cranial shapes are unlikely to provide accurate phylogenetic information, and agree with other analyses of skeletal data that fail to recover the molecular phylogeny (Collard & Wood, 2000, 2001; Springer et al., 2007). / by Thomas A. DiVito, II. / Title of the abstract: The role of ontogeny for reconstructing hominid phylogeny. / Thesis (M.A.)--Florida Atlantic University, 2011. / Includes bibliography. / Electronic reproduction. Boca Raton, Fla., 2011. Mode of access: World Wide Web.
86

On density theorems, connectedness results and error bounds in vector optimization.

January 2001 (has links)
Yung Hon-wai. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2001. / Includes bibliographical references (leaves 133-139). / Abstracts in English and Chinese.
Contents:
Chapter 0 --- Introduction --- p.1
Chapter 1 --- Density Theorems in Vector Optimization --- p.7
Chapter 1.1 --- Preliminary --- p.7
Chapter 1.2 --- The Arrow-Barankin-Blackwell Theorem in Normed Spaces --- p.14
Chapter 1.3 --- The Arrow-Barankin-Blackwell Theorem in Topological Vector Spaces --- p.27
Chapter 1.4 --- Density Results in Dual Space Setting --- p.32
Chapter 2 --- Density Theorem for Super Efficiency --- p.45
Chapter 2.1 --- Definition and Criteria for Super Efficiency --- p.45
Chapter 2.2 --- Henig Proper Efficiency --- p.53
Chapter 2.3 --- Density Theorem for Super Efficiency --- p.58
Chapter 3 --- Connectedness Results in Vector Optimization --- p.63
Chapter 3.1 --- Set-valued Maps --- p.64
Chapter 3.2 --- The Contractibility of the Efficient Point Sets --- p.67
Chapter 3.3 --- Connectedness Results in Vector Optimization Problems --- p.83
Chapter 4 --- Error Bounds in Normed Spaces --- p.90
Chapter 4.1 --- Error Bounds of Lower Semicontinuous Functions in Normed Spaces --- p.91
Chapter 4.2 --- Error Bounds of Lower Semicontinuous Convex Functions in Reflexive Banach Spaces --- p.100
Chapter 4.3 --- Error Bounds with Fractional Exponents --- p.105
Chapter 4.4 --- An Application to Quadratic Functions --- p.114
Bibliography --- p.133
87

On merit functions, error bounds, minimizing and stationary sequences for nonsmooth variational inequality problems. / CUHK electronic theses & dissertations collection

January 2005 (has links)
First, we study the associated regularized gap functions and the D-gap functions and compute their Clarke-Rockafellar directional derivatives and their Clarke generalized gradients. Second, using these tools and extending the work of Fukushima and Pang (who studied the case when F is smooth), we present results on the relationship between minimizing sequences and stationary sequences of the D-gap functions, regardless of the existence of solutions of (VIP). Finally, as another application, we show that, under the strong monotonicity assumption, the regularized gap functions have fractional-exponent error bounds, and thereby we provide an algorithm of Armijo type to solve (VIP). / In this thesis, we investigate a nonsmooth variational inequality problem (VIP) defined by a locally Lipschitz function F which is not necessarily differentiable or monotone on its domain, a closed convex set in a Euclidean space. / Tan Lulin. / "December 2005." / Adviser: Kung Fu Ng. / Source: Dissertation Abstracts International, Volume: 67-11, Section: B, page: 6444. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2005. / Includes bibliographical references (p. 79-84) and index. / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstracts in English and Chinese. / School code: 1307.
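For reference, the regularized gap function and the D-gap function mentioned in this abstract are conventionally defined as follows. This is a sketch of the standard definitions from the literature (Fukushima; Peng), with assumed notation: K is the closed convex set, F the locally Lipschitz map, and 0 < alpha < beta fixed parameters.

```latex
% Regularized gap function and D-gap function (standard definitions):
\[
  \theta_\alpha(x) \;=\; \max_{y \in K}
     \Big\{ \langle F(x),\, x - y \rangle \;-\; \tfrac{\alpha}{2}\,\|x - y\|^2 \Big\},
  \qquad
  \theta_{\alpha\beta}(x) \;=\; \theta_\alpha(x) - \theta_\beta(x),
  \quad 0 < \alpha < \beta .
\]
% \theta_\alpha is nonnegative on K and \theta_{\alpha\beta} is nonnegative everywhere;
% a point x^* solves the variational inequality exactly when the corresponding
% gap function vanishes at x^*, which is why these functions serve as merit functions.
```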
88

On fundamental computational barriers in the mathematics of information

Bastounis, Alexander James January 2018 (has links)
This thesis is about computational theory in the setting of the mathematics of information. The first goal is to demonstrate that many commonly considered problems in optimisation theory cannot be solved with an algorithm if the input data is only known up to an arbitrarily small error (modelling the fact that most real numbers are not expressible to infinite precision with a floating point based computational device). This includes computing the minimisers to basis pursuit, linear programming, lasso and image deblurring as well as finding an optimal neural network given training data. These results are somewhat paradoxical given the success that existing algorithms exhibit when tackling these problems with real world datasets and a substantial portion of this thesis is dedicated to explaining the apparent disparity, particularly in the context of compressed sensing. To do so requires the introduction of a variety of new concepts, including that of a breakdown epsilon, which may have broader applicability to computational problems outside of the ones central to this thesis. We conclude with a discussion on future research directions opened up by this work.
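As a point of reference, two of the optimisation problems named in this abstract are conventionally formulated as follows; these are standard textbook formulations, not taken from the thesis, with assumed notation (A the measurement matrix, y the observed data, lambda > 0 a regularisation parameter).

```latex
% Standard formulations of basis pursuit and the lasso:
\[
  \text{basis pursuit:}\quad \min_{x}\ \|x\|_1 \ \ \text{subject to}\ \ Ax = y,
  \qquad
  \text{lasso:}\quad \min_{x}\ \tfrac{1}{2}\,\|Ax - y\|_2^2 + \lambda\,\|x\|_1 .
\]
% The thesis argues that computing minimisers of such problems from input data
% known only up to an arbitrarily small error can be impossible for any algorithm,
% even though practical solvers often succeed on real-world instances.
```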
89

Computational Algorithms for Improved Representation of the Model Error Covariance in Weak-Constraint 4D-Var

Shaw, Jeremy A. 07 March 2017 (has links)
Four-dimensional variational data assimilation (4D-Var) provides an estimate of the state of a dynamical system through the minimization of a cost functional that measures the distance to a prior (background) state estimate and to observations over a time window. The analysis fit to each information input component is determined by the specification of the error covariance matrices in the data assimilation system (DAS). Weak-constraint 4D-Var (w4D-Var) provides a theoretical framework to account for modeling errors in the analysis scheme. In addition to the specification of the background error covariance matrix, the w4D-Var formulation requires information on the model error statistics and specification of the model error covariance. Up to now, the increased computational cost associated with w4D-Var has prevented its practical implementation. Various simplifications to reduce the computational burden have been considered, including writing the model error covariance as a scalar multiple of the background error covariance and modeling the model error. In this thesis, the main objective is the development of computationally feasible techniques for the improved representation of the model error statistics in a data assimilation system. Three new approaches are considered: a Monte Carlo method that uses an ensemble of w4D-Var systems to obtain flow-dependent estimates of the model error statistics; the evaluation of statistical diagnostic equations involving observation residuals to estimate the model error covariance matrix; and an adaptive tuning procedure based on the sensitivity of a short-range forecast error measure to the model error DAS parametrization. The validity and benefits of these approaches are shown in two stages of numerical experiments. A proof of concept is given using the Lorenz multi-scale model and the shallow water equations on a one-dimensional domain. The results show the potential of these methodologies to produce improved state estimates, as compared to other approaches in data assimilation. It is expected that the techniques presented will find an extended range of applications to assess and improve the performance of a w4D-Var system.
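For context, a commonly used form of the w4D-Var cost functional is sketched below with standard (assumed) notation, not quoted from the thesis: x_b is the background state, B, R_k and Q_k the background, observation and model error covariances, H_k the observation operators, y_k the observations, and eta_k = x_k - M_k(x_{k-1}) the model error introduced at step k.

```latex
% Weak-constraint 4D-Var cost functional (standard form):
\[
  J(x_0,\ldots,x_N) =
    \tfrac{1}{2}\,(x_0 - x_b)^{\mathsf T} B^{-1} (x_0 - x_b)
  + \tfrac{1}{2}\sum_{k=0}^{N} \big(H_k(x_k) - y_k\big)^{\mathsf T} R_k^{-1} \big(H_k(x_k) - y_k\big)
  + \tfrac{1}{2}\sum_{k=1}^{N} \eta_k^{\mathsf T} Q_k^{-1} \eta_k ,
  \qquad \eta_k = x_k - M_k(x_{k-1}) .
\]
% The thesis addresses how to specify the model error covariances Q_k
% at an acceptable computational cost.
```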
90

Toward the estimation of errors in cloud cover derived by threshold methods

Chang, Fu-Lung 01 July 1991 (has links)
The accurate determination of cloud cover amount is important for characterizing the role of cloud feedbacks in the climate system. Clouds have a large influence on the climate system through their effect on the earth's radiation budget. As indicated by the NASA Earth Radiation Budget Experiment (ERBE), the change in the earth's radiation budget brought about by clouds is ~-15 Wm⁻² on a global scale, which is several times the ~4 Wm⁻² gain in energy to the troposphere-surface system that would arise from a doubling of CO₂ in the atmosphere. Consequently, even a small change in global cloud amount may lead to a major change in the climate system. Threshold methods are commonly used to derive cloud properties from satellite imagery data. Here, in order to quantify errors due to thresholds, cloud cover is obtained using three different threshold values. The three thresholds are applied to the 11 μm, (4 km)² NOAA-9 AVHRR GAC satellite imagery data over four oceanic regions. Regional cloud-cover fractions are obtained for two different scales, (60 km)² and (250 km)². The spatial coherence method for obtaining cloud cover from imagery data is applied to coincident data. The differences between cloud cover derived by the spatial coherence method and by the threshold methods depend on the setting of the threshold. Because the spatial coherence method is believed to provide good estimates of cloud cover for opaque, single-layered cloud systems, this study is limited to such systems, and the differences in derived cloud cover are interpreted as errors due to the application of thresholds. The threshold errors are caused by pixels that are partially covered by clouds, and the errors depend on the regional-scale cloud cover. The errors can be derived from the distribution of pixel-scale cloud cover. Two simple models which assume idealized distributions for pixel-scale cloud cover are constructed and used to estimate the threshold errors. The results show that these models, though simple, perform rather well in estimating the differences between cloud cover derived by the spatial coherence method and those obtained by threshold methods. / Graduation date: 1992
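To make the threshold idea concrete, here is a minimal sketch, not taken from the thesis, of deriving a regional cloud fraction from 11 μm brightness temperatures with several thresholds. The simulated scene values and the threshold temperatures are hypothetical; the point is only that the derived cloud fraction changes with the chosen threshold, which is the sensitivity the study quantifies.

```python
import numpy as np

def threshold_cloud_fraction(bt_11um, threshold_k):
    """Fraction of pixels whose 11-micron brightness temperature (K) falls
    below the threshold, i.e. pixels classified as cloudy."""
    bt = np.asarray(bt_11um, dtype=float)
    return float(np.mean(bt < threshold_k))

# Hypothetical (60 km)^2 region of 15x15 GAC-like pixels: warm clear ocean
# near 292 K with a colder, partly cloudy patch.
rng = np.random.default_rng(0)
scene = rng.normal(292.0, 1.0, size=(15, 15))
scene[4:10, 5:12] = rng.normal(278.0, 3.0, size=(6, 7))

# Cloud cover from three different thresholds; the spread between the
# resulting fractions illustrates the threshold-induced error.
for thr in (285.0, 287.0, 289.0):
    print(f"threshold {thr:.0f} K -> cloud fraction {threshold_cloud_fraction(scene, thr):.2f}")
```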
