261

The statistical analysis of fatigue data.

Shen, Chi-liu. January 1994
The overall objective of this study is to develop methods for providing a statistical summary of material fatigue stress-life (S-N) data for engineering design purposes. Specific goals are: (1) Development of an analytical model for characterizing fatigue strength. This model would include: (a) a description of the trend of the data (e.g., the median curve through the data), (b) a description of the scatter of the data (e.g., the standard deviation of N as a function of S), and (c) the statistical distribution of N given S or S given N. (2) Development of an algorithm for constructing a design curve from the data. The curve should be on the safe side of the data and should reflect uncertainties in the physical process as well as statistical uncertainty associated with small sample sizes. (3) Development of a statistical model that can be applied in a structural reliability analysis in which all design factors are treated as random variables. Significant achievements are: (1) Demonstration, using representative fatigue data sets, that the bilinear model seems to provide a consistently adequate description of the trend of fatigue data. (2) Demonstration, using representative fatigue data sets, that the pure X error source model seems to provide a consistently adequate description of the uncertainties observed in heteroscedastic fatigue data. The pure X error source model is based on recognition of the uncertainties in local fatigue stress. (3) Development of a procedure for constructing a design curve using the tolerance limit concept developed by D. B. Owen. A more practical simplified or approximate Owen curve was shown to have a minimum loss of confidence level, relative to exact Owen theory, under fairly general conditions. (4) Recommendations for methods of developing a statistical model for reliability analysis. A comprehensive study of this issue was not pursued.
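A minimal sketch of the tolerance-limit construction this abstract describes: Owen's one-sided tolerance factor, computed from the noncentral t distribution, gives a design value below which a stated fraction of the population lies with a stated confidence. The fatigue data, coverage, and confidence levels below are hypothetical, and this illustrates the general approach rather than Shen's exact procedure.

```python
# Sketch: one-sided lower tolerance bound on log fatigue life at a single
# stress level, in the spirit of the Owen tolerance-limit design curves
# described above. Illustrative only; data and levels are hypothetical.
import numpy as np
from scipy import stats

def tolerance_factor(n, coverage=0.95, confidence=0.95):
    """One-sided factor k: with the stated confidence, at least `coverage`
    of the population lies above mean - k * s."""
    delta = stats.norm.ppf(coverage) * np.sqrt(n)   # noncentrality parameter
    return stats.nct.ppf(confidence, df=n - 1, nc=delta) / np.sqrt(n)

# Hypothetical replicate log10 cycles-to-failure at one stress amplitude.
log_N = np.log10([1.2e5, 2.1e5, 1.6e5, 0.9e5, 1.8e5, 1.4e5])
k = tolerance_factor(len(log_N))
design_life = 10 ** (log_N.mean() - k * log_N.std(ddof=1))
print(f"k = {k:.3f}, design life = {design_life:.3e} cycles")
```

The factor k grows as the sample size shrinks, which is how a curve built this way stays on the safe side of sparse data.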
262

TREE-RING RESPONSE FUNCTIONS. AN EVALUATION BY MEANS OF SIMULATIONS (DENDROCHRONOLOGY, RIDGE REGRESSION, MULTICOLLINEARITY).

CROPPER, JOHN PHILIP. January 1985
The problem of determining the response of tree-ring width growth to monthly climate is examined in this study. The objective is to document which of the available regression methods are best suited to deciphering the complex link between tree growth variation and climate. Tree-ring response function analysis is used to determine which instrumental climatic variables are best associated with tree-ring width variability. Ideally such a determination would be accomplished, or verified, through detailed physiological monitoring of trees in their natural environment. A statistical approach is required because such biological studies on mature trees are currently too time-consuming to perform. The use of lagged climatic data to duplicate a biological, rather than a calendar, year increases the degree of intercorrelation (multicollinearity) among the independent climate variables. The presence of multicollinearity can greatly affect the sign and magnitude of estimated regression coefficients. Using simulated series with known response weights, the effectiveness of five different regression methods was objectively assessed. The results from each of the 2000 regressions were compared to the known regression weights, and a measure of relative efficiency was computed. The results indicate that ridge regression is, on average, roughly four times more efficient (average relative efficiency of 4.57) than unbiased multiple linear regression at producing good coefficient estimates. The results from principal components regression are a slight improvement over those from multiple linear regression, with an average relative efficiency of 1.45.
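The simulation logic described above can be illustrated with a short sketch: generate deliberately intercorrelated predictors, build responses from known weights, and compare how well ridge and ordinary least squares recover those weights. The sample sizes, collinearity structure, and ridge penalty below are assumptions, not values from the thesis.

```python
# Sketch of the simulation experiment: compare coefficient recovery by OLS
# and ridge regression on collinear "climate" predictors with known weights.
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

rng = np.random.default_rng(0)
n_obs, n_pred, n_reps = 60, 12, 500
true_beta = rng.normal(size=n_pred)

mse = {"ols": 0.0, "ridge": 0.0}
for _ in range(n_reps):
    base = rng.normal(size=(n_obs, 1))
    # Lagged-month predictors made deliberately intercorrelated.
    X = base + 0.3 * rng.normal(size=(n_obs, n_pred))
    y = X @ true_beta + rng.normal(size=n_obs)
    mse["ols"] += np.mean((LinearRegression().fit(X, y).coef_ - true_beta) ** 2)
    mse["ridge"] += np.mean((Ridge(alpha=10.0).fit(X, y).coef_ - true_beta) ** 2)

print("relative efficiency (OLS MSE / ridge MSE):", mse["ols"] / mse["ridge"])
```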
263

EARTHQUAKE HAZARD ASSESSMENT OF THE STATE OF ARIZONA.

Krieski, Mark. January 1984
No description available.
264

APPLICATION OF GEOSTATISTICS TO AN OPERATING IRON ORE MINE

Nogueira Neto, Joao Antunes, 1952- January 1987
The competition in the world market for iron ore has increased lately. Therefore, an improved method of estimating the ore quality in small working areas has become an attractive cost-cutting strategy in short-term mine plans. Estimated grades of different working areas of a mine form the basis of any short-term mine plan. The generally sparse exploration data obtained during the development phase is not enough to accurately estimate the grades of small working areas. Therefore, additional sample information is often required in any operating mine. The findings of this case study show that better utilization of all available exploration information at this mine would improve estimation of small working areas even without additional face samples. Through the use of kriging variance, this study also determined the optimum face sampling grid, whose spacing turned out to be approximately 100 meters as compared to 50 meters in use today. (Abstract shortened with permission of author.)
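Because ordinary kriging variance depends only on the sample geometry and the variogram, not on the assay values themselves, candidate face-sampling grids can be ranked before any new samples are taken. The sketch below illustrates this for a single grid cell; the spherical variogram parameters are invented for illustration and are not the mine's.

```python
# Sketch: ordinary point-kriging variance at the centre of one square grid
# cell, as a function of grid spacing. Variogram parameters are assumptions.
import numpy as np

def spherical_gamma(h, sill=1.0, rng_a=150.0, nugget=0.1):
    g = np.where(h < rng_a,
                 nugget + (sill - nugget) * (1.5 * h / rng_a - 0.5 * (h / rng_a) ** 3),
                 sill)
    return np.where(h == 0.0, 0.0, g)

def kriging_variance(spacing):
    # Four samples at the corners of a grid cell, estimating the cell centre.
    pts = np.array([[0, 0], [spacing, 0], [0, spacing], [spacing, spacing]], float)
    target = np.array([spacing / 2, spacing / 2])
    n = len(pts)
    A = np.ones((n + 1, n + 1))        # ordinary kriging system with Lagrange row
    A[:n, :n] = spherical_gamma(np.linalg.norm(pts[:, None] - pts[None], axis=2))
    A[n, n] = 0.0
    b = np.append(spherical_gamma(np.linalg.norm(pts - target, axis=1)), 1.0)
    w = np.linalg.solve(A, b)
    return w @ b                       # sigma^2 = sum(lambda_i * gamma_0i) + mu

for s in (25, 50, 100, 150):
    print(f"spacing {s:>3} m -> kriging variance {kriging_variance(s):.3f}")
```

Plotting this variance against spacing is, in outline, how an "optimum" grid such as the 100-meter spacing reported above can be chosen.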
265

Evaluation of drug absorption by cubic spline and numerical deconvolution

Tsao, Su-Ching, 1961- January 1989
A novel approach using smoothing cubic splines and point-area deconvolution to estimate the absorption kinetics of linear systems has been investigated. A smoothing cubic spline is employed as the interpolation function since it is superior in several respects to polynomials and other functions commonly used to represent empirical data. An advantage of the method is that results obtained from the same data set are more consistent, irrespective of who runs the program or how many times it is run. In addition, no initial estimates are needed to run the program. The unit impulse response and the response of interest need not be sampled at the same times or at equally spaced intervals. The method is compared with an existing method using simulated data containing various degrees of random noise.
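A hedged sketch of the general spline-plus-deconvolution idea: fit a smoothing cubic spline to the unit impulse response, integrate it over each sampling interval to form a lower-triangular convolution matrix, and solve for a staircase input (absorption) rate. The data, sampling times, and smoothing factor below are hypothetical; this is an illustration of the technique, not the thesis's program.

```python
# Sketch of point-area deconvolution with a smoothing cubic spline.
import numpy as np
from scipy.interpolate import UnivariateSpline
from scipy.linalg import solve_triangular

t_uir = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 4.0, 6.0, 8.0])
uir   = np.array([0.0, 0.80, 0.95, 0.70, 0.45, 0.28, 0.10, 0.04])  # e.g. IV data
spl = UnivariateSpline(t_uir, uir, k=3, s=1e-3)      # smoothing cubic spline

# Oral response sampled on its own (possibly different) schedule.
t_resp = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 6.0])
resp   = np.array([0.30, 0.55, 0.75, 0.70, 0.55, 0.30])

edges = np.concatenate(([0.0], t_resp))
A = np.zeros((len(t_resp), len(t_resp)))
for i, ti in enumerate(t_resp):
    for j in range(i + 1):
        # Contribution of unit input over (edges[j], edges[j+1]) to C(t_i).
        A[i, j] = spl.integral(ti - edges[j + 1], ti - edges[j])
rate = solve_triangular(A, resp, lower=True)         # absorption rate per interval
print("estimated input rate per interval:", np.round(rate, 3))
```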
266

Conservation by Consensus: Reducing Uncertainty from Methodological Choices in Conservation-based Models

Poos, Mark S. 01 September 2010
Modeling species of conservation concern, such as those that are rare, declining, or have a conservation designation (e.g. endangered or threatened), remains an activity filled with uncertainty. Species of conservation concern are often encountered infrequently, in small numbers, and in spatially fragmented distributions, making accurate enumeration difficult and traditional statistical approaches often invalid. For example, there are numerous debates in the ecological literature regarding methodological choices in conservation-based models, such as how to measure functional traits to account for ecosystem function, the impact of including rare species in biological assessments, and whether species-specific dispersal can be measured using distance-based functions. This thesis attempts to address issues in methodological choices in conservation-based models in two ways. In the first section of the thesis, the impacts of methodological choices on conservation-based models are examined across a broad selection of available approaches, ranging from measuring functional diversity, to conducting bio-assessments in community ecology, to assessing dispersal in metapopulation analyses. The goal of this section is to establish the potential for methodological choices to impact conservation-based models, regardless of the scale, study system, or species involved. In the second section, the use of consensus methods is developed as a potential tool for reducing the uncertainty introduced by methodological choices. Two separate applications are highlighted: using consensus methods to reduce the uncertainty arising from the choice of model type, and using them to identify when methodological choices may be a problem.
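As a minimal illustration of the consensus idea, the sketch below averages presence probabilities across three model types so that no single methodological choice dominates, and measures the disagreement between them. The data and the specific model choices are hypothetical.

```python
# Sketch: an unweighted consensus of several model types for a
# presence/absence prediction task. Data and models are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))                       # e.g. habitat covariates
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200)) > 0  # presence

models = [LogisticRegression(max_iter=1000), RandomForestClassifier(), GaussianNB()]
probs = np.column_stack([m.fit(X, y).predict_proba(X)[:, 1] for m in models])
consensus = probs.mean(axis=1)                      # consensus prediction
print("mean spread across model types:", np.round(probs.std(axis=1).mean(), 3))
```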
267

An Investigation and Review of Futility Analysis Methods in Phase III Oncology Trials.

Winch, Chad 12 December 2012
The general objective of this thesis was to improve understanding of the design, conduct, and analysis of randomized controlled trials (RCTs). The specific objective was to evaluate the methodological and statistical principles associated with conducting analyses of futility, a component of interim analysis, during the conduct of RCTs. This objective was addressed by first performing a systematic review, which included a detailed literature search as well as data from a cohort of previously extracted studies. The systematic review was designed to identify futility analysis principles and methodologies in order to inform the design and conduct of retrospective futility analyses of two completed NCIC CTG trials. The results of these trials have been previously published; one trial met its stated endpoint and the other did not. Neither trial underwent an interim analysis of futility during its conduct. The retrospective futility analyses assessed the accuracy of frequently used methods by comparing the results of each method to each other and to the original final analysis results. These assessments were performed at selected time points using both aggressive and conservative stopping rules. In order to increase the robustness of the comparisons, bootstrapping methods were applied. The results of this thesis demonstrate principles associated with the conduct of futility analyses and provide a basis for hypothesis testing of optimum methodologies and their associated trade-offs. / Thesis (Master, Community Health & Epidemiology) -- Queen's University, 2012.
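One widely used futility metric of the kind such analyses compare is conditional power under the "current trend", computed here with the B-value formulation (B(t) = Z(t)·√t). The interim numbers below are hypothetical, and this is a generic sketch, not necessarily the method applied to the NCIC CTG trials.

```python
# Sketch: conditional power at an interim look under the current trend.
from math import sqrt
from scipy.stats import norm

def conditional_power(z_interim, info_frac, alpha=0.025):
    b = z_interim * sqrt(info_frac)       # B-value at information fraction t
    drift = b / info_frac                 # drift estimated from the current trend
    z_crit = norm.ppf(1 - alpha)
    return norm.cdf((b + drift * (1 - info_frac) - z_crit) / sqrt(1 - info_frac))

# e.g. interim z = 0.5 at 40% information: is the trial likely futile?
cp = conditional_power(0.5, 0.40)
print(f"conditional power = {cp:.3f}")    # a low value supports stopping
```

An "aggressive" stopping rule corresponds to a relatively high conditional-power threshold for continuing; a "conservative" rule, a low one.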
268

Analysis of time-to-event data including frailty modeling.

Phipson, Belinda. January 2006
There are several methods of analysing time-to-event data. These include nonparametric approaches such as Kaplan-Meier estimation and parametric approaches such as regression modeling. Parametric regression modeling involves specifying the distribution of the survival time of the individuals, which is commonly chosen to be exponential, Weibull, log-normal, log-logistic or gamma. Another well-known model, which does not require assumptions to be made about the hazard function, is the Cox proportional hazards model. However, there may be deviations from proportional hazards which may be explained by unaccounted random heterogeneity. In the early 1980s, a series of studies showed concern with the possible bias in the estimated treatment effect when important covariates are omitted. Other problems may be encountered with the traditional proportional hazards model when there is a possibility of correlated data, for instance when there is clustering. A method of handling these types of problems is frailty modeling. Frailty modeling is a method whereby a random effect is incorporated in the Cox proportional hazards model. While this concept is fairly simple to understand, the estimation of the fixed and random effects becomes complicated. Various methods have been explored by several authors, including the Expectation-Maximisation (EM) algorithm, the penalized partial likelihood approach, Markov Chain Monte Carlo (MCMC) methods, the Monte Carlo EM approach and different methods using Laplace approximation. The lack of available software makes fitting frailty models problematic. These models are usually computationally intensive and may have long processing times. However, frailty modeling is an important aspect to consider, particularly if the Cox proportional hazards model does not adequately describe the distribution of survival time. / Thesis (M.Sc.)-University of KwaZulu-Natal, Pietermaritzburg, 2006.
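A short sketch of why the frailty term matters: simulate clustered survival times sharing a gamma frailty, then fit an ordinary Cox model that ignores the clustering; the regression coefficient is attenuated toward zero. The use of the lifelines library and all parameter values below are assumptions made for illustration.

```python
# Sketch: shared gamma frailty within clusters, fit with a naive Cox model.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(2)
n_clusters, per_cluster, beta = 100, 5, 1.0
frailty = rng.gamma(shape=2.0, scale=0.5, size=n_clusters)   # mean 1, var 0.5

rows = []
for c in range(n_clusters):
    x = rng.binomial(1, 0.5, per_cluster)                    # e.g. treatment arm
    hazard = frailty[c] * np.exp(beta * x)                   # shared frailty term
    t = rng.exponential(1.0 / hazard)                        # event times
    cens = rng.exponential(2.0, per_cluster)                 # censoring times
    rows += [{"T": min(ti, ci), "E": ti <= ci, "x": int(xi)}
             for ti, ci, xi in zip(t, cens, x)]

cph = CoxPHFitter().fit(pd.DataFrame(rows), duration_col="T", event_col="E")
print(cph.params_)   # estimate of beta is biased toward zero without frailty
```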
269

The likelihood of gene trees under selective models

Coop, Graham M. January 2004
The extent to which natural selection shapes diversity within populations is a key question for population genetics. Thus, there is considerable interest in quantifying the strength of selection. In this thesis a full likelihood approach for inference about selection at a single site within an otherwise neutral fully-linked sequence of sites is developed. Integral to many of the ideas introduced in this thesis is the reversibility of the diffusion process, and some past approaches to this concept are reviewed. A coalescent model of evolution is used to model the ancestry of a sample of DNA sequences which have the selected site segregating. A novel method for simulating the coalescent with selection, acting at a single biallelic site, is described. Selection is incorporated through modelling the frequency of the selected and neutral allelic classes stochastically back in time. The ancestry is then simulated using a subdivided population model considering the population frequencies through time as variable population sizes. The approach is general and can be used for any selection scheme at a biallelic locus. The mutation model, for the selected and neutral sites, is the infinitely-many-sites model where there is no back or parallel mutation at sites. This allows a unique perfect phylogeny, a gene tree, to be constructed from the configuration of mutations on the sample sequences. An importance sampling algorithm is described to explore over coalescent tree space consistent with this gene tree. The method is used to assess the evidence for selection in a number of data sets. These are as follows: a partial selective sweep in the G6PD gene (Verrelli et al., 2002); a recent full sweep in the Factor IX gene (Harris and Hey, 2001); and balancing selection in the DCP1 gene (Rieder et al., 1999). Little evidence of the action of selection is found in the data set of Verrelli et al. (2002) and the data set of Rieder et al. (1999) seems inconsistent with the model of balancing selection. The patterns of diversity in the data set of Harris and Hey (2001) offer support of the hypothesis of a full sweep.
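The stochastic allele-frequency trajectory that such a simulator conditions on can be sketched with a forward Wright-Fisher model with selection; a coalescent-with-selection simulator of the kind described then treats the two allelic classes' frequencies through time as variable population sizes. The population size, selection coefficient, and starting frequency below are illustrative assumptions.

```python
# Sketch: Wright-Fisher allele-frequency trajectory under genic selection.
import numpy as np

def wf_trajectory(N=10_000, s=0.01, p0=0.01, rng=None):
    """Forward-in-time frequency path of a selected allele until fixation or loss."""
    rng = rng or np.random.default_rng()
    p, path = p0, [p0]
    while 0.0 < p < 1.0:
        p_sel = p * (1 + s) / (1 + p * s)          # deterministic selection step
        p = rng.binomial(2 * N, p_sel) / (2 * N)   # binomial genetic drift
        path.append(p)
    return np.array(path)

traj = wf_trajectory()
print(f"{'fixed' if traj[-1] == 1.0 else 'lost'} after {len(traj)} generations")
```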
270

Reference-free identification of genetic variation in metagenomic sequence data using a probabilistic model

Ahiska, Bartu January 2012
Microorganisms are an indispensable part of our ecosystem, yet the natural metabolic and ecological diversity of these organisms is poorly understood due to a historical reliance of microbiology on laboratory-grown cultures. The awareness that this diversity cannot be studied by laboratory isolation, together with recent advances in low-cost, scalable sequencing technology, has enabled the foundation of culture-independent microbiology, or metagenomics. The study of environmental microbial samples with metagenomics has led to many advances, but a number of technological and methodological challenges still remain. A potentially diverse set of taxa may be represented in any one environmental sample. Existing tools for representing the genetic composition of such samples sequenced with short-read data, and tools for identifying variation amongst them, are still in their infancy. This thesis makes the case that a new framework based on a joint-genome graph can constitute a powerful tool for representing and manipulating the joint genomes of population samples. I present the development of a collection of methods, called SCRAPS, to construct these efficient graphs in small communities without the availability or bias of a reference genome. A key novelty is that genetic variation is identified from the data structure using a probabilistic algorithm that can provide a measure of the confidence in each call. SCRAPS is first tested on simulated short-read data for accuracy and efficiency. At least 95% of non-repetitive small-scale genetic variation with a minor allele read depth greater than 10x is correctly identified; the number of false positives per conserved nucleotide is consistently better than 1 part in 333 × 10³. SCRAPS is then applied to artificially pooled experimental datasets. As part of this study, SCRAPS is used to identify genetic variation in an epidemiological 11-sample Neisseria meningitidis dataset collected from the African meningitis belt. In total, 14,000 sites of genetic variation are identified from 48 million Illumina/Solexa reads. The results clearly show the genetic differences between two waves of infection that have plagued northern Ghana and Burkina Faso.
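A toy sketch of the reference-free idea: build a de Bruijn-style k-mer graph from reads and search for "bubbles", places where two paths diverge and reconverge, which is the graph signature of a SNP. This illustrates the concept only; it is not the SCRAPS data structure or its probabilistic caller.

```python
# Sketch: k-mer graph from reads, with simple SNP-like bubble detection.
from collections import defaultdict

def kmer_graph(reads, k):
    """Directed graph of k-mers: edges link consecutive overlapping k-mers."""
    graph = defaultdict(set)
    for read in reads:
        for i in range(len(read) - k):
            graph[read[i:i + k]].add(read[i + 1:i + k + 1])
    return graph

def walk(graph, node, limit):
    """Follow a chain of unique successors, collecting the nodes visited."""
    path = [node]
    while len(path) < limit and len(graph.get(path[-1], ())) == 1:
        path.append(next(iter(graph[path[-1]])))
    return path

def simple_bubbles(graph, k):
    """Yield branch points whose two branches reconverge downstream."""
    for node, succs in list(graph.items()):
        if len(succs) == 2:
            a, b = sorted(succs)
            if set(walk(graph, a, 2 * k)) & set(walk(graph, b, 2 * k)):
                yield node, a, b

# Two "haplotypes" differing at one site (C vs T).
reads = ["ACGTCAGGTTA", "ACGTCAGGTTA", "ACGTTAGGTTA", "ACGTTAGGTTA"]
g = kmer_graph(reads, k=4)
for node, a, b in simple_bubbles(g, k=4):
    print(f"variant after {node}: {a[-1]} / {b[-1]}")
```

A probabilistic caller of the kind described would additionally weigh read support for each branch to attach a confidence to every candidate variant.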
