451. Disease burden and seasonality of influenza in subtropical Hong Kong. Yang, Lin (楊琳). January 2008.
Published or final version / Community Medicine / Doctoral / Doctor of Philosophy
452. Understanding and evaluating population preventive strategies for breast cancer using statistical and decision analytic models. Wong, Oi-ling, Irene (黃愛玲). January 2009.
Published or final version / Community Medicine / Doctoral / Doctor of Philosophy
453. Patterns of homoplasy in North American Astragalus L. (Fabaceae). Sanderson, Michael John. January 1989.
Patterns in the distribution of homoplasy are investigated from theoretical and empirical perspectives. The history of the term "homoplasy" as used by morphologists, evolutionary systematists, cladists, and others is reviewed, especially in relation to its complement, "homology." Homoplasy is defined relative to homology, which is viewed as any similarity shared through an unbroken line of common ancestry. An investigation of levels of homoplasy based on a statistical analysis of 60 published phylogenies reveals a strong dependence of homoplasy on the number of taxa included. This relation is independent of the number of characters, type of data, taxonomic rank, or organism, and suggests that large taxa should be the focus of empirical studies of homoplasy. Hence, a phylogenetic analysis of the large genus Astragalus was undertaken using 113 representative species (and varieties) found in North America. Fifty-seven binary and multistate characters were scored, and the resulting matrix was subjected to numerical cladistic analysis. Two large sets of equally parsimonious trees were found at 595 and 596 steps. The sets were analyzed using consensus methods, robust clades were discussed in detail, and the phylogenies were compared to previous classifications. Character evolution of a large set of taxonomically important and morphologically varied traits was investigated. Statistical tests were developed to detect patterns of topological clustering of homoplastic character changes in cladograms. The tests use Monte Carlo computer simulations of four null models of character evolution in an attempt to reject the hypothesis of random homoplastic distributions. For the Astragalus data set only two of 17 characters were significantly clustered, which is close to random expectation. Another data set from the literature was also tested, and in it no characters were clustered at the 5 percent level. The explanation for these negative findings regarding homoplastic "tendencies" is explored with respect to "scope," "scale," and character "resolution," factors believed to play an important role in the analysis of character evolution.
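As a rough illustration of the kind of Monte Carlo null-model test described above (not Sanderson's actual procedure), the sketch below places the observed number of character changes at random on a toy cladogram and asks whether the observed changes are more topologically clustered than expected by chance. The tree topology, the mean-pairwise-distance statistic, and the node names are all illustrative assumptions.

```python
import random
from itertools import combinations

# Hypothetical toy cladogram: each node maps to its parent (None = root).
parent = {"A": "n1", "B": "n1", "C": "n2", "D": "n2",
          "n1": "n3", "n2": "n3", "n3": None}

def path_to_root(node):
    """List of nodes from a node up to (and including) the root."""
    path = []
    while node is not None:
        path.append(node)
        node = parent[node]
    return path

def node_distance(a, b):
    """Number of edges separating two nodes on the tree."""
    pa, pb = path_to_root(a), path_to_root(b)
    ancestors_a = set(pa)
    for steps_b, anc in enumerate(pb):
        if anc in ancestors_a:
            return pa.index(anc) + steps_b
    raise ValueError("nodes are not on the same tree")

def clustering_stat(nodes):
    """Mean pairwise distance among nodes bearing homoplastic changes (smaller = more clustered)."""
    pairs = list(combinations(nodes, 2))
    return sum(node_distance(a, b) for a, b in pairs) / len(pairs)

def monte_carlo_p(observed_nodes, n_sims=10_000, seed=1):
    """Fraction of random placements at least as clustered as the observed one."""
    rng = random.Random(seed)
    candidates = [n for n in parent if parent[n] is not None]  # exclude the root
    observed = clustering_stat(observed_nodes)
    hits = 0
    for _ in range(n_sims):
        sample = rng.sample(candidates, len(observed_nodes))
        if clustering_stat(sample) <= observed:
            hits += 1
    return hits / n_sims

print(monte_carlo_p(["A", "B", "n1"]))   # small p-value would suggest clustering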
454. Geostatistical Methods for Estimating Soil Properties (Kriging, Cokriging, Disjunctive). Yates, Scott Raymond. January 1985.
Geostatistical methods were investigated in order to find efficient and accurate means of estimating a regionalized random variable in space from limited sampling. The random variables investigated were (1) the bare soil temperature (BST) and crop canopy temperature (CCT) collected from a field at the University of Arizona's Maricopa Agricultural Center, (2) the bare soil temperature and gravimetric moisture content (GMC) collected from a field at the Campus Agricultural Center, and (3) the electrical conductivity (EC) data collected by Al-Sanabani (1982). The BST was found to exhibit strong spatial autocorrelation (typically greater than 0.65 at 0⁺ lagged distance). The CCT generally showed weaker spatial correlation (values varied from 0.15 to 0.84), which may be due to the length of time required to obtain an "instantaneous" sample as well as wet soil conditions. The GMC was found to be strongly spatially dependent, and at least 71 samples were necessary to obtain reasonably well-behaved covariance functions. Two linear estimators, the ordinary kriging and cokriging estimators, were investigated and compared in terms of the average kriging variance and the sum of squared errors between the actual and estimated values. The estimates were obtained using the jackknifing technique. The results indicate that a significant improvement in the average kriging variance and the sum of squares could be expected by using cokriging for GMC and including 119 BST values in the analysis. A nonlinear estimator in one variable, the disjunctive kriging estimator, was also investigated and was found to offer improvements over the ordinary kriging estimator in terms of the average kriging variance and the sum of squares error. It was found that additional information at the estimation site is a more important consideration than whether the estimator is linear or nonlinear. Disjunctive kriging also produces an estimate of the conditional probability that the value at an unsampled location exceeds an arbitrary cutoff level; this feature is explored and has implications for aiding management decisions.
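The following is a minimal ordinary-kriging sketch, included only to illustrate the baseline estimator being compared; the exponential covariance model, its parameters, and the synthetic sample locations are assumptions rather than the thesis's fitted models, and cokriging and disjunctive kriging are not shown.

```python
import numpy as np

def exp_cov(h, sill=1.0, rang=50.0):
    """Exponential covariance model (an assumed fit, not the thesis values)."""
    return sill * np.exp(-np.asarray(h) / rang)

def ordinary_krige(xy, z, xy0):
    """Ordinary-kriging estimate and kriging variance at location xy0."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)  # sample-sample distances
    d0 = np.linalg.norm(xy - xy0, axis=-1)                        # sample-target distances
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = exp_cov(d)
    A[n, n] = 0.0                           # Lagrange-multiplier row/column
    b = np.append(exp_cov(d0), 1.0)
    sol = np.linalg.solve(A, b)
    w, mu = sol[:n], sol[n]
    estimate = w @ z
    variance = exp_cov(0.0) - w @ exp_cov(d0) - mu
    return estimate, variance

rng = np.random.default_rng(0)
xy = rng.uniform(0, 100, size=(30, 2))      # hypothetical soil-temperature sample sites
z = 20 + rng.normal(0, 1, size=30)
print(ordinary_krige(xy, z, np.array([50.0, 50.0])))
```

The jackknifing comparison described in the abstract would repeat this estimation while leaving out each sampled location in turn and accumulating the squared errors.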
455. A comparison of Bayesian and classical statistical techniques used to identify hazardous traffic intersections. Hecht, Marie B. January 1988.
The accident rate at an intersection is one attribute used to evaluate the hazard associated with the intersection. Two techniques traditionally used to make such evaluations are the rate-quality technique and a technique based on the confidence interval of classical statistics. Both techniques label an intersection as hazardous if its accident rate exceeds some critical accident rate determined by the technique. An alternative is a technique based on a Bayesian analysis of available accident-count and traffic-volume data. In contrast to the two classical techniques, the Bayesian technique identifies an intersection as hazardous based on a probabilistic assessment of accident rates. The goal of this thesis is to test and compare the ability of the three techniques to accurately identify traffic intersections known to be hazardous. Test data are generated from an empirical distribution of accident rates; the techniques are then applied to the generated data and compared based on the simulation results.
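A hedged sketch of the two kinds of decision rule under comparison is given below: a rate-quality/confidence-interval flag built around a regional average rate, and a Gamma-Poisson Bayesian flag based on the posterior probability that the true rate exceeds that average. The prior parameters, regional rate, and counts are placeholders, not the thesis data.

```python
from scipy import stats

def classical_flag(count, volume, regional_rate, z=1.645):
    """Rate-quality style test: flag if the observed rate exceeds a critical
    rate built from the regional average (Poisson approximation)."""
    critical = regional_rate + z * (regional_rate / volume) ** 0.5 + 1.0 / (2.0 * volume)
    return count / volume > critical

def bayesian_flag(count, volume, alpha, beta, regional_rate, prob=0.95):
    """Gamma-Poisson model: flag if the posterior probability that the true
    rate exceeds the regional average is at least `prob`."""
    posterior = stats.gamma(a=alpha + count, scale=1.0 / (beta + volume))
    return posterior.sf(regional_rate) >= prob

# Hypothetical intersection: 12 accidents over 3.2 million entering vehicles.
count, volume = 12, 3.2
regional_rate = 2.5        # accidents per million entering vehicles (assumed)
print(classical_flag(count, volume, regional_rate))
print(bayesian_flag(count, volume, alpha=2.0, beta=1.0, regional_rate=regional_rate))
```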
456. Intelligent Memory Management Heuristics. Panthulu, Pradeep. 12 1900.
Automatic memory management is crucial in the implementation of runtime systems, even though it induces significant computational overhead. In this thesis I explore the use of statistical properties of the directed graph describing the set of live data to decide between garbage collection and heap expansion in a memory management algorithm that combines dynamic-array-represented heaps with a mark-and-sweep garbage collector, in order to enhance its performance. The sampling method that predicts the density and distribution of useful data is implemented as a partial marking algorithm, which randomly marks the nodes of the directed graph representing the live data at different depths with a variable probability factor p. Using the information gathered by the partial marking algorithm in the current step and the knowledge gathered in previous iterations, the proposed empirical formula predicts with reasonable accuracy the density of live nodes on the heap, and this prediction is used to decide between garbage collection and heap expansion. The resulting heuristics are tested empirically and shown to improve overall execution performance significantly in the context of the Jinni Prolog compiler's runtime system.
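A much-simplified sketch of the idea follows: a partial marking pass that descends into children with probability p, and a decision rule that extrapolates live-node density from the sample to choose between collection and expansion. The 1/p correction and the thresholds below are crude placeholders, not the empirical formula developed in the thesis.

```python
import random

def partial_mark(roots, children, p, rng):
    """Depth-limited random marking: from each visited node, descend into a
    child with probability p, so deeper live data is sampled more sparsely."""
    marked, stack = set(), list(roots)
    while stack:
        node = stack.pop()
        if node in marked:
            continue
        marked.add(node)
        for child in children.get(node, ()):
            if rng.random() < p:
                stack.append(child)
    return marked

def decide(roots, children, heap_size, p=0.6, live_threshold=0.6, seed=7):
    """Crude heuristic: extrapolate live-node density from the partial mark and
    expand the heap when it appears mostly live, otherwise garbage-collect."""
    rng = random.Random(seed)
    sampled = len(partial_mark(roots, children, p, rng))
    est_live = min(sampled / p, heap_size)   # blunt 1/p correction (assumption)
    density = est_live / heap_size
    return "expand heap" if density > live_threshold else "garbage collect"

# Hypothetical heap: 1000 cells, a chain of 500 live references rooted at cell 0.
children = {i: [i + 1] for i in range(499)}
print(decide(roots=[0], children=children, heap_size=1000))
```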
457. Mathematical Methods for Enhanced Information Security in Treaty Verification. MacGahan, Christopher. January 2016.
Mathematical methods have been developed to perform arms-control-treaty verification tasks with enhanced information security. The purpose of these methods is to verify and classify inspected items while shielding the monitoring party from confidential aspects of the objects that the host country does not wish to reveal. Advanced medical-imaging methods used for detection and classification tasks have been adapted for list-mode processing, which is useful for discriminating projection data without aggregating sensitive information. These models make decisions from varying amounts of stored information, and their task performance scales with that information. Development has focused on the Bayesian ideal observer, which assumes complete probabilistic knowledge of the detector data, and the Hotelling observer, which assumes a multivariate Gaussian distribution on the detector data. The models can effectively discriminate sources in the presence of nuisance parameters. The channelized Hotelling observer has proven particularly useful in that good performance can be achieved while reducing the size of the projection data set. The inclusion of additional penalty terms in the channelizing-matrix optimization offers a great benefit for treaty-verification tasks: penalty terms can be used to generate non-sensitive channels or to penalize the model's ability to discriminate objects based on confidential information. The end result is a mathematical model that could be shared openly with the monitor. Similarly, observers based on the likelihood probabilities have been developed to perform null-hypothesis tasks. To test these models, neutron and gamma-ray data were simulated with the GEANT4 toolkit, and tasks were performed on various uranium and plutonium inspection objects. A fast-neutron coded-aperture detector was simulated to image the particles.
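The sketch below shows a generic channelized Hotelling observer on synthetic data, purely to illustrate the template computation; the random channels, Gaussian data, and embedded signal are assumptions and bear no relation to the GEANT4-simulated neutron and gamma-ray data used in the work.

```python
import numpy as np

def hotelling_template(G0, G1):
    """Hotelling observer template from training data: rows are detector-data
    realizations under hypothesis 0 (e.g. compliant item) and hypothesis 1."""
    m0, m1 = G0.mean(axis=0), G1.mean(axis=0)
    S = 0.5 * (np.cov(G0, rowvar=False) + np.cov(G1, rowvar=False))  # average covariance
    return np.linalg.solve(S, m1 - m0)

def channelize(G, U):
    """Project detector data onto a small set of channels (columns of U)."""
    return G @ U

rng = np.random.default_rng(0)
n_pix = 64
signal = np.zeros(n_pix)
signal[20:30] = 0.5                               # class 1 carries a small extra signal
G0 = rng.normal(0, 1, size=(500, n_pix))
G1 = rng.normal(0, 1, size=(500, n_pix)) + signal

U = rng.normal(0, 1, size=(n_pix, 10))            # placeholder channelizing matrix
w = hotelling_template(channelize(G0, U), channelize(G1, U))
t1 = channelize(G1, U) @ w                        # test statistics under hypothesis 1
t0 = channelize(G0, U) @ w
print(t1.mean() > t0.mean())                      # the template separates the classes on average
```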
458. Comparing the Powers of Several Proposed Tests for Testing the Equality of the Means of Two Populations When Some Data Are Missing. Dunu, Emeka Samuel. 05 1900.
In comparing the means of two normally distributed populations with unknown variance, two tests very often used are the two-independent-sample and the paired-sample t tests. There is a possible gain in the power of the significance test from using the paired-sample design instead of the two-independent-sample design.
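A small Monte Carlo sketch of that basic power comparison (complete pairs only, illustrative parameters) is given below; the tests proposed in the thesis for the missing-data case are not reproduced here.

```python
import numpy as np
from scipy import stats

def power_sim(n=30, rho=0.6, delta=0.5, alpha=0.05, n_sims=5000, seed=0):
    """Monte Carlo power of the paired t test vs. the two-independent-sample
    t test for correlated bivariate-normal data with mean shift delta."""
    rng = np.random.default_rng(seed)
    cov = [[1.0, rho], [rho, 1.0]]
    hits_paired = hits_indep = 0
    for _ in range(n_sims):
        xy = rng.multivariate_normal([0.0, delta], cov, size=n)
        x, y = xy[:, 0], xy[:, 1]
        if stats.ttest_rel(x, y).pvalue < alpha:
            hits_paired += 1
        if stats.ttest_ind(x, y).pvalue < alpha:
            hits_indep += 1
    return hits_paired / n_sims, hits_indep / n_sims

print(power_sim())   # the paired design should show higher power when rho > 0
```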
459. Economic Statistical Design of Inverse Gaussian Distribution Control Charts. Grayson, James M. (James Morris). 08 1900.
Statistical quality control (SQC) is one technique companies are using in the development of a Total Quality Management (TQM) culture. Shewhart control charts, a widely used SQC tool, rely on an underlying normal distribution of the data, but data are often skewed. The inverse Gaussian distribution is a probability distribution that is well suited to handling skewed data. This analysis develops models and a set of tools usable by practitioners for the constrained economic statistical design of control charts for inverse Gaussian distribution process centrality and process dispersion. The use of this methodology is illustrated by the design of an x-bar chart and a V chart for an inverse-Gaussian-distributed process.
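As a hedged illustration of probability-based limits under the inverse Gaussian assumption (not the constrained economic statistical design itself), the sketch below uses the fact that the mean of n observations from IG(mean, shape) is distributed as IG(mean, n·shape); the process parameters are placeholders.

```python
from scipy import stats

def igbar_limits(mean, shape, n, alpha=0.0027):
    """Probability limits for the average of n inverse Gaussian observations.
    If X_i ~ IG(mean, shape), then X-bar ~ IG(mean, n * shape)."""
    # scipy parameterization: invgauss(mu=mean/shape, scale=shape) is IG(mean, shape)
    dist = stats.invgauss(mu=mean / (n * shape), scale=n * shape)
    return dist.ppf(alpha / 2), dist.ppf(1 - alpha / 2)

# Hypothetical in-control process: mean 10, shape parameter 50, subgroups of 5.
lcl, ucl = igbar_limits(mean=10.0, shape=50.0, n=5)
print(lcl, ucl)
```

An economic statistical design would additionally choose the subgroup size, sampling interval, and limit width to minimize an expected cost function subject to statistical constraints; that optimization is not shown here.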
460. Development of a Coaxiality Indicator. Arendsee, Wayne C. 12 1900.
The geometric dimensioning and tolerancing concept of coaxiality is often required by design engineers for the balance of rotating parts and for precision mating parts. In current practice, it is difficult for manufacturers to measure coaxiality quickly and inexpensively. This study examines the feasibility of a manually operated mechanical device, combined with formulae, to indicate the coaxiality of a test specimen. The author designs, fabricates, and tests the system for measuring the coaxiality of holes machined in a steel test piece.
Gage repeatability and reproducibility (gage R&R) and univariate analysis of variance are performed in accordance with Measurement System Analysis published by AIAG. Results indicate that significant design flaws exist in the current configuration of the device; observed values vary greatly with operator technique. Suggestions for device improvements conclude the research.
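For reference, a minimal ANOVA-method gage R&R sketch is given below; the balanced two-way variance-component formulas follow the standard AIAG approach loosely, while the simulated coaxiality readings and the %GRR summary are illustrative assumptions, not the study's data.

```python
import numpy as np

def gage_rr(y):
    """Balanced gage R&R from two-way ANOVA; y has shape (parts, operators, trials)."""
    p, o, r = y.shape
    grand = y.mean()
    ybar_p = y.mean(axis=(1, 2))            # per-part means
    ybar_o = y.mean(axis=(0, 2))            # per-operator means
    ybar_po = y.mean(axis=2)                # part-by-operator cell means

    ms_p = o * r * np.sum((ybar_p - grand) ** 2) / (p - 1)
    ms_o = p * r * np.sum((ybar_o - grand) ** 2) / (o - 1)
    inter = ybar_po - ybar_p[:, None] - ybar_o[None, :] + grand
    ms_po = r * np.sum(inter ** 2) / ((p - 1) * (o - 1))
    ms_e = np.sum((y - ybar_po[:, :, None]) ** 2) / (p * o * (r - 1))

    repeatability = ms_e                                        # equipment variation
    interaction = max((ms_po - ms_e) / r, 0.0)
    reproducibility = max((ms_o - ms_po) / (p * r), 0.0) + interaction
    part_var = max((ms_p - ms_po) / (o * r), 0.0)
    grr = repeatability + reproducibility
    total = grr + part_var
    return 100.0 * np.sqrt(grr / total)     # %GRR relative to total study variation

# Hypothetical study: 10 parts x 3 operators x 2 trials of coaxiality readings.
rng = np.random.default_rng(1)
parts = rng.normal(0, 0.05, size=(10, 1, 1))
ops = rng.normal(0, 0.01, size=(1, 3, 1))
y = 0.2 + parts + ops + rng.normal(0, 0.02, size=(10, 3, 2))
print(gage_rr(y))
```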