101

Adjusted Wald Confidence Interval for a Difference of Binomial Proportions Based on Paired Data

Bonett, Douglas G., Price, Robert M. 01 August 2012 (has links)
Adjusted Wald intervals for binomial proportions in one-sample and two-sample designs have been shown to perform about as well as the best available methods. The adjusted Wald intervals are easy to compute and have been incorporated into introductory statistics courses. An adjusted Wald interval for paired binomial proportions is proposed here and is shown to perform as well as the best available methods. A sample size planning formula is presented that should be useful in an introductory statistics course.
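To make the general idea concrete, here is a minimal Python sketch of an adjusted Wald interval for a difference of paired binomial proportions. It uses an Agresti-Min-style adjustment (adding 0.5 to each cell of the paired 2x2 table) purely for illustration; the specific adjustment and the sample size planning formula proposed in the paper may differ, and the function name and inputs are hypothetical.

```python
import numpy as np
from scipy import stats

def adjusted_wald_paired(n11, n12, n21, n22, conf=0.95, add=0.5):
    """Adjusted Wald CI for p1 - p2 from a paired 2x2 table.

    n11..n22 are the cell counts (success/success, success/failure, ...).
    `add` is added to each cell before applying the Wald formula
    (an Agresti-Min-style adjustment, used here only as a sketch).
    """
    z = stats.norm.ppf(1 - (1 - conf) / 2)
    f = np.array([n11, n12, n21, n22], dtype=float) + add
    n = f.sum()
    p1 = (f[0] + f[1]) / n            # proportion of successes on occasion 1
    p2 = (f[0] + f[2]) / n            # proportion of successes on occasion 2
    d = p1 - p2
    # variance of the paired difference, including the covariance term
    var = (p1 * (1 - p1) + p2 * (1 - p2)
           - 2 * (f[0] * f[3] - f[1] * f[2]) / n ** 2) / n
    half = z * np.sqrt(max(var, 0.0))
    return d - half, d + half

# Example: 40 concordant successes, 8 and 3 discordant pairs, 9 concordant failures
print(adjusted_wald_paired(40, 8, 3, 9))
```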
102

Application Of Statistical Methods In Risk And Reliability

Heard, Astrid 01 January 2005 (has links)
The dissertation considers the construction of confidence intervals for a cumulative distribution function F(z) and its inverse at some fixed points z and u on the basis of a relatively small i.i.d. sample. The sample is modeled as having the flexible Generalized Gamma distribution with all three parameters unknown. This approach can be viewed as an alternative to nonparametric techniques, which do not specify the distribution of X and lead to less efficient procedures. The confidence intervals are constructed by objective Bayesian methods and use the Jeffreys noninformative prior. Performance of the resulting confidence intervals is studied via Monte Carlo simulations and compared to the performance of nonparametric confidence intervals based on the binomial proportion. In addition, techniques for change point detection are analyzed and further evaluated via Monte Carlo simulations. The effect of a change point on the interval estimators is studied both analytically and via Monte Carlo simulations.
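As a point of reference for the comparison mentioned above, here is a minimal Python sketch of a nonparametric interval for F(z): the number of observations at or below z is treated as a binomial count and a Clopper-Pearson interval is inverted. The function name and defaults are illustrative, not the dissertation's code.

```python
import numpy as np
from scipy import stats

def nonparametric_cdf_ci(sample, z, conf=0.95):
    """Clopper-Pearson confidence interval for F(z) based on the
    binomial proportion of observations at or below z."""
    x = np.asarray(sample, dtype=float)
    n = len(x)
    k = int(np.sum(x <= z))
    alpha = 1 - conf
    lower = stats.beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    upper = stats.beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return k / n, (lower, upper)

# Example with a small i.i.d. sample
rng = np.random.default_rng(0)
print(nonparametric_cdf_ci(rng.gamma(2.0, 1.0, size=15), z=2.5))
```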
103

Sample size effects related to nickel, titanium and nickel-titanium at the micron size scale

Norfleet, David M. 30 August 2007 (has links)
No description available.
104

Effect of Unequal Sample Sizes on the Power of DIF Detection: An IRT-Based Monte Carlo Study with SIBTEST and Mantel-Haenszel Procedures

Awuor, Risper Akelo 04 August 2008 (has links)
This simulation study focused on determining the effect of unequal sample sizes on the statistical power of the SIBTEST and Mantel-Haenszel (M-H) procedures for detecting DIF of moderate and large magnitudes. Item parameters were estimated by, and generated with, the two-parameter logistic model (2PLM) using WinGen2 (Han, 2006). MULTISIM was used to simulate ability estimates and to generate response data that were analyzed by SIBTEST. The SIBTEST procedure with regression correction was used to calculate the DIF statistics, namely the DIF effect size and the statistical significance of the bias. The older SIBTEST was used to calculate the DIF statistics for the M-H procedure. SAS provided the environment in which the ability parameters were simulated, response data were generated, and DIF analyses were conducted. Test items were examined to determine whether the items manipulated a priori demonstrated DIF. The results indicated that with unequal samples in any ratio, M-H had better Type I error rate control than SIBTEST. They also indicated that not only the ratio, but also the sample size and the magnitude of DIF, influenced the error rate behavior of SIBTEST and M-H. With small samples and moderate DIF magnitude, both M-H and SIBTEST committed Type II errors when the reference-to-focal group sample size ratio was 1:0.10, due to low observed statistical power and inflated Type I error rates. / Ph. D.
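For readers unfamiliar with the M-H procedure, here is a minimal Python sketch of the Mantel-Haenszel common odds ratio that underlies the M-H DIF statistic, with examinees stratified by total test score. The data layout and names are hypothetical, and the SIBTEST regression correction is not shown.

```python
import numpy as np

def mantel_haenszel_odds_ratio(correct_ref, n_ref, correct_foc, n_foc):
    """Mantel-Haenszel common odds ratio across score strata.

    Each argument is an array indexed by stratum (e.g., total test score):
    counts of examinees answering the studied item correctly and group sizes.
    A value above 1 indicates the item favours the reference group.
    """
    a = np.asarray(correct_ref, dtype=float)        # reference, correct
    b = np.asarray(n_ref, dtype=float) - a          # reference, incorrect
    c = np.asarray(correct_foc, dtype=float)        # focal, correct
    d = np.asarray(n_foc, dtype=float) - c          # focal, incorrect
    n = a + b + c + d                               # stratum totals
    return np.sum(a * d / n) / np.sum(b * c / n)

# Example with three score strata
print(mantel_haenszel_odds_ratio([20, 35, 50], [30, 45, 60],
                                 [12, 25, 40], [25, 40, 55]))
```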
105

Bayesian hierarchical approaches to analyze spatiotemporal dynamics of fish populations

Bi, Rujia 03 September 2020 (has links)
The study of spatiotemporal dynamics of fish populations is important for both stock assessment and fishery management. I explored the impacts of environmental and anthropogenic factors on spatiotemporal patterns of fish populations, and contributed to stock assessment and management by incorporating the inherent spatial structure. Hierarchical models were developed to specify spatial and temporal variations, and Bayesian methods were adopted to fit the models. Yellow perch (Perca flavescens) supports one of the most important commercial and recreational fisheries in Lake Erie, which is currently managed using four management units (MUs), each assessed by a spatially-independent stock-specific assessment model. The current spatially-independent stock-specific assessment assumes that movement of yellow perch among MUs in Lake Erie is statistically negligible and biologically insignificant. I investigated whether this assumption is violated and the effect it has on assessment. I first explored the spatiotemporal patterns of yellow perch abundance in Lake Erie based on data from a 27-year gillnet survey, and analyzed the impacts of environmental factors on the spatiotemporal dynamics of the population. I found that the yellow perch relative biomass index displayed clear temporal variation and spatial heterogeneity; however, the two middle MUs displayed spatial similarities. I then developed a state-space model based on a 7-year tag-recovery dataset to explore movements of yellow perch among MUs, and performed a simulation analysis to evaluate the impacts of sample size on movement estimates. The results suggested substantial movement between the two stocks in the central basin, and the accuracy and precision of movement estimates increased with increasing sample size. These results demonstrate that the assumption of negligible movement among MUs is violated, and it is necessary to incorporate regional connectivity into stock assessment. I thus developed a tag-integrated multi-region model to incorporate movements into a spatial stock assessment by integrating the tag-recovery data with 45 years of fisheries data. I then compared population projections, such as recruitment and abundance, derived from the tag-integrated multi-region model and the current spatially-independent stock-specific assessment model to assess the influence of assuming movement versus no movement among MUs. Differences between the population projections from the two models suggested that the integration of regional stock dynamics has a significant influence on stock estimates. American Shad (Alosa sapidissima), Hickory Shad (A. mediocris) and river herrings, including Alewife (A. pseudoharengus) and Blueback Herring (A. aestivalis), are anadromous pelagic fishes that spend most of the annual cycle at sea and enter coastal rivers in spring to spawn. Alosa fisheries were once among the most valuable along the Atlantic coast, but have declined in recent decades due to pollution, overfishing and dam construction. Management actions have been implemented to restore the populations, and stocks in different river systems have displayed different recovery trends. I developed a Bayesian hierarchical spatiotemporal model to identify the population trends of these species among rivers in the Chesapeake Bay basin and to identify environmental and anthropogenic factors influencing their distribution and abundance.
The results demonstrated river-specific heterogeneity of the spatiotemporal dynamics of these species and indicated the river-specific impacts of multiple factors, including water temperature, river flow, chlorophyll a concentration and total phosphorus concentration, on their population dynamics. Given the importance of these two case studies, analyses to diagnose the factors influencing population dynamics and to develop models that consider spatial complexity are highly valuable to practical fisheries management. Models incorporating spatiotemporal variation describe population dynamics more accurately, improve the accuracy of stock assessments, and would provide better recommendations for management purposes. / Doctor of Philosophy / Many fish populations exhibit complex spatial structure, but the spatial patterns have been incorporated into stock assessment in only a few cases. A full understanding of the spatial structure of fish populations is needed to manage the populations better. Stock assessment and management strategies should depend on the inherent spatial structure of the target fish population. Many approaches have been developed to analyze the spatial structure of fish populations. In this dissertation, I developed quantitative models to analyze fish demographic data and tagging data to explore the spatial structure of fish populations. Yellow perch (Perca flavescens) in Lake Erie and the Alosa group, including American Shad (Alosa sapidissima), Hickory Shad (A. mediocris) and river herrings (Alewife A. pseudoharengus and Blueback Herring A. aestivalis), in selected tributaries of the Chesapeake Bay were taken as examples. Fishery-independent data for yellow perch displayed spatial similarities in the central basin of Lake Erie. Distinct temporal trends were observed in relative abundance data for Alosa spp. in different tributaries of the Chesapeake Bay. Substantial yellow perch movement within the central basin of the lake was observed in the tagging data. Ignoring the inherent spatial structure may cause fish to be overfished in some regions and underfished in others. To maximize the effectiveness of management in all regions for fish populations, I highly recommend incorporating spatial structure into stock assessment and management, using approaches such as those developed in this dissertation.
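As a rough illustration of the modelling approach only (not the dissertation's actual model), here is a minimal Bayesian hierarchical sketch in PyMC with river-specific intercepts and year trends for a log relative-abundance index; the data, priors and variable names are hypothetical.

```python
import numpy as np
import pymc as pm

# Hypothetical data: log relative-abundance index by river and year
rivers = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2])
years = np.array([0, 1, 2, 0, 1, 2, 0, 1, 2])
y = np.array([1.2, 1.4, 1.5, 0.3, 0.2, 0.1, 2.0, 2.1, 2.3])

with pm.Model() as model:
    mu_global = pm.Normal("mu_global", 0.0, 5.0)            # basin-wide mean level
    sigma_river = pm.HalfNormal("sigma_river", 2.0)          # between-river spread
    river_effect = pm.Normal("river_effect", mu_global, sigma_river, shape=3)
    year_trend = pm.Normal("year_trend", 0.0, 1.0, shape=3)  # river-specific trend
    sigma_obs = pm.HalfNormal("sigma_obs", 1.0)
    mu = river_effect[rivers] + year_trend[rivers] * years
    pm.Normal("obs", mu, sigma_obs, observed=y)
    idata = pm.sample(1000, tune=1000, random_seed=1)
```

Environmental covariates such as temperature and flow would enter the linear predictor in the same way in a fuller model.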
106

The Development of a Coupled Physics and Kinetics Model to Computationally Predict the Powder to Power Performance of Solid Oxide Fuel Cell Anode Microstructures

Gaweł, Duncan Albert Wojciech 03 October 2013 (has links)
A numerical model was developed to evaluate the performance of detailed solid oxide fuel cell (SOFC) anode microstructures obtained from experimental reconstruction techniques or generated from synthetic computational techniques. The model is also capable of identifying the linear triple phase boundary (TPB) reaction sites and evaluating the effective transport within the detailed structures, allowing a comparison between the structural properties and performance to be conducted. To simulate the cell performance, a novel numerical coupling technique was developed in OpenFOAM and validated. The computational grid type and mesh properties were also evaluated to establish appropriate mesh resolutions to employ when studying the performance. The performance of a baseline synthetic electrode structure was evaluated using the model, and under the applied conditions it was observed that the ionic potential had the largest influence on the performance. The model was used in conjunction with a computational synthetic electrode manufacturing algorithm to conduct a numerical powder-to-power parametric study investigating the effects of the manufacturing properties on the performance. An improvement in the overall performance was observed in structures that maximized the number of reaction sites and had well-established transport networks in the ion phase. From the manufacturing parameters studied, a performance increase was observed in structures with low porosity and ionic solid volume fractions near the percolation threshold, and when the anodes were manufactured from small monosized particles or binary mixtures comprising smaller oxygen-ion-conducting particles. Insight into anode thickness was also provided: the current distribution within the anode was a function of the applied overpotential, and an increase in the overpotential caused the majority of the current production to increase and shift closer to the electrode-electrolyte interface. / Thesis (Master, Mechanical and Materials Engineering) -- Queen's University, 2013-10-01 09:41:47.617
107

Increasing statistical power and generalizability in genomics microarray research

Ramasamy, Adaikalavan January 2009 (has links)
The high-throughput technologies developed in the last decade have revolutionized the speed of data accumulation in the life sciences. As a result, we have very rich and complex data that hold great promise for solving many complex biological questions. One such technology that is very well established and widespread is the DNA microarray, which allows one to simultaneously measure the expression levels of tens of thousands of genes in a biological tissue. This thesis aims to contribute to the development of statistics that allow end users to obtain robust and meaningful results from DNA microarrays for further investigation. The methodology, implementation and pragmatic issues of two important and related topics – sample size estimation for designing new studies and meta-analysis of existing studies – are presented here to achieve this aim. Real-life case studies and guided steps are also given. Sample size estimation is important at the design stage to ensure a study has sufficient statistical power to address the stated objective given the financial constraints. The commonly used formula for estimating the number of biological samples, its shortcomings and potential amelioration are discussed. The optimal number of biological samples and the number of measurements per sample that minimize the cost are also presented. Meta-analysis, or the synthesis of information from existing studies, is very attractive because it can increase the statistical power by making comprehensive and inexpensive use of available information. Furthermore, one can also easily test the generalizability of findings (i.e., the extent to which results from a particular study can be applied to other circumstances). The key issues in conducting a meta-analysis of microarray studies, a checklist and R code are presented here. Finally, the poor availability of raw data in microarray studies is discussed, with recommendations for authors, journal editors and funding bodies. Good availability of data is important both for meta-analysis, in order to avoid biased results, and for sample size estimation.
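For context, here is a small Python sketch of the commonly used normal-approximation formula for the number of biological replicates per group; the thesis discusses its shortcomings and refinements, so this is only the baseline calculation, and the small per-gene alpha is an illustrative allowance for testing thousands of genes.

```python
import math
from scipy import stats

def samples_per_group(delta, sigma, alpha=0.001, power=0.8):
    """Normal-approximation sample size per group for a two-sample comparison:
    n = 2 * ((z_{1-alpha/2} + z_{power}) * sigma / delta)^2,
    where delta is the smallest log fold-change of interest and sigma is the
    per-gene standard deviation. The result is rounded up to a whole sample."""
    z_a = stats.norm.ppf(1 - alpha / 2)
    z_b = stats.norm.ppf(power)
    return math.ceil(2 * ((z_a + z_b) * sigma / delta) ** 2)

# Example: detect a 1-unit log2 fold-change when sigma is 0.7
print(samples_per_group(delta=1.0, sigma=0.7))
```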
108

A simulation study of the error induced in one-sided reliability confidence bounds for the Weibull distribution using a small sample size with heavily censored data

Hartley, Michael A. 12 1900 (has links)
Approved for public release; distribution is unlimited. / Budget limitations have reduced the number of military components available for testing, and time constraints have reduced the amount of time available for actual testing, resulting in many items still operating at the end of test cycles. These two factors produce small test populations (small sample sizes) with "heavily" censored data. The assumption of "normal approximation" for estimates based on these small sample sizes reduces the accuracy of the confidence bounds of the probability plots and the associated quantities. This creates a problem in acquisition analysis because the confidence in the probability estimates influences the number of spare parts required to support a mission or deployment, or determines the length of the warranty ensuring proper operation of systems. This thesis develops a method that simulates small samples with censored data and examines the error of the Fisher-Matrix (FM) and Likelihood Ratio Bounds (LRB) confidence methods for two test populations (sizes 10 and 20) with three, five, seven and nine observed failures for the Weibull distribution. The thesis includes Monte Carlo simulation code written in S-Plus that can be modified by the user to meet their particular needs for any sampling and censoring scheme. To illustrate the approach, the thesis includes a catalog of corrected confidence bounds for the Weibull distribution, which can be used by acquisition analysts to adjust their confidence bounds and obtain a more accurate representation for warranty and reliability work. / Civilian, Department of the Air Force
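A minimal Python sketch of the kind of Type II censored Weibull sample the simulation examines (n units on test, stopped after the r-th failure), written in Python rather than the thesis's S-Plus; the shape and scale values are illustrative.

```python
import numpy as np

def censored_weibull_sample(n, r, shape, scale, rng):
    """Draw n Weibull lifetimes but observe only the first r failures
    (Type II censoring); the remaining n - r units are censored at the
    time of the r-th failure."""
    t = np.sort(rng.weibull(shape, n) * scale)
    return t[:r], t[r - 1]

rng = np.random.default_rng(1)
# e.g. a test of 10 units stopped after 3 observed failures
failures, censor_time = censored_weibull_sample(10, 3, shape=1.5, scale=100.0, rng=rng)
print(failures, censor_time)
```

Repeating such draws many times and fitting the Weibull by maximum likelihood at each replication is the basis for comparing the FM and LRB bounds.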
109

An Empirical Study of Students’ Performance at Assessing Normality of Data Through Graphical Methods

Leander Aggeborn, Noah, Norgren, Kristian January 2019 (has links)
When applying statistical methods that assume normality, there are different procedures for determining whether a sample is drawn from a normally distributed population. Because normality is such a central assumption, the reliability of these procedures is of utmost importance. Much research focuses on how good formal tests of normality are, while the performance of statisticians using graphical methods is far less examined. Therefore, the aim of the study was to empirically examine, through a web survey, how good students in statistics are at assessing whether samples are drawn from normally distributed populations using graphical methods. The results of the study indicate that the students get distinctly better at accurately determining normality in data drawn from a normally distributed population as the sample size increases. Further, the students are very good at accurately rejecting normality when the sample is drawn from a symmetrical non-normal population and fairly good when the sample is drawn from an asymmetrical distribution. In comparison with some common formal tests of normality, the students' performance is superior at accurately rejecting normality for small sample sizes and inferior for large ones, when the samples are drawn from a non-normal population.
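For readers who want to reproduce the comparison informally, here is a small Python sketch that draws a symmetric but heavy-tailed (non-normal) sample, applies one common formal test, and produces the normal Q-Q plot a student would judge; the distribution and sample size are illustrative.

```python
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
sample = rng.standard_t(df=3, size=30)   # symmetric, heavy-tailed, non-normal

# Formal test of normality (Shapiro-Wilk)
w, p = stats.shapiro(sample)
print(f"Shapiro-Wilk W = {w:.3f}, p = {p:.3f}")

# Graphical method: normal Q-Q plot for visual assessment
stats.probplot(sample, dist="norm", plot=plt)
plt.show()
```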
110

Abordagem não-paramétrica para cálculo do tamanho da amostra com base em questionários ou escalas de avaliação na área de saúde / Non-parametric approach for calculation of sample size based on questionnaires or scales of assessment in the health care

Couto Junior, Euro de Barros 01 October 2009 (has links)
This text suggests how to calculate a sample size based on the use of a data collection instrument consisting of categorical items. The arguments for this suggestion are grounded in the theories of Combinatorics and Paraconsistency. The purpose is to propose a simple and practical calculation procedure for obtaining an acceptable sample size for collecting, organizing and analyzing data from an application of an instrument for collecting medical data based exclusively on discrete (categorical) items, i.e., each item of the instrument is considered a non-parametric variable with a finite number of categories. In health care it is very common to use survey instruments built from such items: clinical protocols, hospital registers, questionnaires, scales and other survey tools consist of an organized sequence of categorical items. A formula for calculating the sample size is proposed for populations of unknown size, and an adjustment of that formula is proposed for populations of known size. Practical examples show that both formulas can be used, which makes them especially practical in cases where little or no information is available about the population from which the sample will be collected.
