91

Sampling for estimating characteristics of mackerel in northeast Brazil

Albuquerque, José Jackson Lima de, 1937- January 1969 (has links)
No description available.
92

The effect of additional information on mineral deposit geostatistical grade estimates

Milioris, George J. (George Joseph) January 1983 (has links)
No description available.
93

Grammar- and optimization-based mechanical packaging

Lomangino, F. Paul 05 1900 (has links)
No description available.
94

Multivariate morphometric analysis of seasonal changes in overwintering arctic charr (Salvelinus alpinus L.)

Idrus, Muhammad Rijal. January 1996 (has links)
This study developed a robust technique for the assessment of morphometric differences among overwintering northern fish populations. Arctic charr were sampled shortly before freeze-up and just after ice break-up at two subarctic Quebec lakes. A homogeneous sample of 397 fish was used. Regression analyses of the length-weight relationships and their derived condition indices were insufficient, due to their inherent limitations, to recognize the differences between sampling groups. A series of multivariate analyses (canonical, stepwise and discriminant analysis), based on eleven morphometric characters of the fish, provided a better assessment. These analyses distinguished the sampling groups, correctly classified 70-100% of the fish into their appropriate groupings, and indicated that body height measured at the anal opening was the most discriminatory variable. Landmark variables related to shape differences were effective in discriminating fish according to their lake of origin, whereas length and weight variables, which closely reflected size differences, were better at distinguishing seasonal changes. The study provides a simple, efficient assessment method based on phenotypic variations to explain the different survival strategies, and the associated life history traits, adopted by fish.
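A minimal sketch of a discriminant analysis of this general kind: classify fish into sampling groups from morphometric characters and assess the rate of correct classification. The data, labels and group structure below are hypothetical placeholders, not the thesis's actual analysis.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 397                                  # sample size reported in the abstract
X = rng.normal(size=(n, 11))             # 11 morphometric characters per fish
groups = rng.integers(0, 4, size=n)      # e.g. lake-by-season sampling groups

lda = LinearDiscriminantAnalysis()
scores = cross_val_score(lda, X, groups, cv=10)   # cross-validated accuracy
print(f"mean correct classification: {scores.mean():.0%}")
```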
95

Interpretation of results from simplified principal components

Uddin, Mudassir January 1999 (has links)
Linear multivariate statistical methods are widely used for analysing data sets consisting of a large number of variables. These techniques, which include principal component analysis, factor analysis, canonical correlation analysis, redundancy analysis and discriminant analysis, all produce a set of new variables, commonly called 'factors', according to a criterion which differs between techniques. Among these, principal component analysis (PCA) is one of the most popular techniques for reducing the dimensions of a multivariate data set. In many applications, when PCA is performed on a large number of variables, interpreting the results is not simple. The derived eigenvectors of the sample covariance or correlation matrix are not necessarily in a simple form, with all coefficients either 'large' or 'negligible'. To aid interpretation, it is fairly common practice to rotate the retained set of components, often using orthogonal rotation. The purpose of rotation is to simplify structure, and thus to make the low-dimensional space represented by the retained components easier to interpret. Quantification of simplicity is therefore a two-step process: the first step extracts features from the data, called components, while the second applies a rotation method to simplify the structure. One of the two main purposes of this thesis is to combine these two separate stages of dimension reduction (finding the components) and simplification (rotation) into one step. This goal is achieved by combining the two objectives in a single function, leading to what we call Simplified Components (SCs). Another objective is to discover which of the many criteria suggested in factor analysis can be adopted in the proposed SC procedure. Thus, a one-step procedure of SCs is proposed, using four measures of simplicity, namely the varimax, quartimax, orthomax and equamax indices.
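For contrast with the one-step procedure the abstract proposes, here is a minimal sketch of the conventional two-stage practice it aims to replace: extract PCA loadings, then simplify them with a standard varimax rotation. The data, dimensions and the hand-rolled varimax iteration are illustrative, not the thesis's method.

```python
import numpy as np

def varimax(L, gamma=1.0, max_iter=100, tol=1e-6):
    """Orthogonal varimax rotation of a p x k loading matrix (standard SVD iteration)."""
    p, k = L.shape
    R = np.eye(k)
    d = 0.0
    for _ in range(max_iter):
        Lam = L @ R
        u, s, vt = np.linalg.svd(
            L.T @ (Lam**3 - (gamma / p) * Lam @ np.diag(np.diag(Lam.T @ Lam))))
        R = u @ vt
        d_new = s.sum()
        if d_new < d * (1 + tol):          # stop when the criterion plateaus
            break
        d = d_new
    return L @ R

# Stage 1: PCA loadings from the correlation matrix of hypothetical data
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
corr = np.corrcoef(X, rowvar=False)
eigval, eigvec = np.linalg.eigh(corr)
idx = np.argsort(eigval)[::-1][:3]                 # retain 3 components
loadings = eigvec[:, idx] * np.sqrt(eigval[idx])
# Stage 2: rotate to a "simpler" structure for interpretation
print(np.round(varimax(loadings), 2))
```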
96

How large should a clinical trial be?

Pezeshk, Hamid January 2000 (has links)
One of the most important questions in the planning of medical experiments to assess the performance of new drugs or treatments is how big to make the trial. The problem, in its statistical formulation, is to determine the optimal size of a trial. The most frequently used method of determining sample size in clinical trials is based on the required p-value and the required power of the trial for a specified treatment effect. In contrast to the Bayesian decision-theoretic approach, there is no explicit balancing of the cost of a possible increase in the size of the trial against the benefit of the more accurate information it would give. In this work we consider a fully Bayesian (or decision-theoretic) approach to sample size determination in which the number of subsequent users of the therapy under investigation, and hence also the total benefit resulting from the trial, depend on the strength of the evidence provided by the trial. Our procedure differs from the usual Bayesian decision theory methodology, which assumes a single decision maker, by recognizing the existence of three decision makers, namely: the pharmaceutical company conducting the trial, which decides on its size; the regulator, whose approval is necessary for the drug to be licensed for sale; and the public at large, who determine the ultimate usage. Moreover, we model the subsequent usage by plausible assumptions for actual behaviour, rather than assuming that it represents decisions which are in some sense optimal. For this reason the procedure may be called "Behavioural Bayes" (or BeBay for short), the word Bayes referring to the optimization of the sample size. In the BeBay methodology the total expected benefit from carrying out the trial, minus the cost of the trial, is maximized. For any additional sales to occur as a result of the trial it must provide sufficient evidence both to convince the regulator to issue the necessary licence and to convince potential users that they should use the new treatment. The necessary evidence is in the form of a high probability after the trial that the new treatment achieves a clinically relevant improvement over the alternative treatment. The regulator is assumed to start from a more sceptical and less well-informed view of the likely performance of the treatment than the company carrying out the trial. The total benefit from a conclusively favourable trial is assessed on the basis of the size of the potential market, aggregated over the anticipated lifetime of the product, using appropriate discounting for future years.
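A rough sketch of the kind of calculation the abstract describes: choose the trial size n that maximizes expected benefit minus trial cost, where benefit accrues only if the trial gives a high posterior probability of a clinically relevant improvement. The normal-normal model, prior, costs and 0.95 evidence level below are hypothetical placeholders, not the thesis's actual BeBay model.

```python
import numpy as np
from scipy import stats

mu0, tau0 = 0.5, 0.5       # company's prior on the treatment effect (mean, sd)
sigma = 2.0                # sd of a single patient response
delta = 0.2                # clinically relevant improvement
market, cost = 1e8, 1e4    # total market value; cost per patient

def expected_net_benefit(n):
    se = sigma / np.sqrt(n)               # sd of the trial's estimate of the effect
    w = tau0**2 / (tau0**2 + se**2)       # posterior weight on the data
    post_sd = np.sqrt(w) * se             # posterior sd of the effect
    # "Conclusive" trial: posterior P(effect > delta) > 0.95, which reduces
    # to the observed mean exceeding a critical value x_crit.
    x_crit = (delta + 1.645 * post_sd - (1 - w) * mu0) / w
    # Probability of a conclusive result under the prior predictive distribution
    p_conclusive = 1 - stats.norm.cdf(x_crit, loc=mu0,
                                      scale=np.sqrt(tau0**2 + se**2))
    return market * p_conclusive - cost * n

ns = np.arange(50, 5001, 50)
print("optimal trial size ≈", max(ns, key=expected_net_benefit))
```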
97

Patterns of performance: implications for the Rey Auditory Verbal Learning Test

Hardman, Marie January 2001 (has links)
Three studies investigated patterns of performance as demonstrated by serial position effects on the Rey Auditory Verbal Learning Test (RAVLT). Patterns of performance were explored in a sample of genuine traumatic brain injured subjects who were litigating (TBI-LIT; N = 22) and compared to a sample of genuine traumatic brain injured subjects who were not in litigation (TBI-NONLIT; N = 22). Comparisons were also made to a sample of subjects who were depressed but not neurologically compromised (PSY-DEP; N = 24). Results demonstrated that when duration of loss of consciousness was controlled for, no difference existed between the litigating and non-litigating groups on any serial position. With this in mind, the TBI-LIT and TBI-NONLIT groups were collapsed to form one traumatic brain injured group (TBI; N = 44). Patterns of performance were then compared between the TBI group, the PSY-DEP group and a normal control (NC; N = 68) group. No differences were demonstrated between the TBI and PSY-DEP groups on any serial position; however, the NC group demonstrated significantly different primacy effects than the TBI group and significantly different recency effects than both the TBI and PSY-DEP groups (Study 1). Patterns of performance relative to serial position were also compared in groups of Alzheimer's disease (AD; N = 20) and dementia (DEM; N = 20) subjects. Results indicated that the DEM group demonstrated a greater primacy effect than the AD group, with both groups demonstrating a greater recency effect than primacy effect, though not significantly so. Patterns of performance were also explored in a group of Huntington's disease subjects (HD; N = 14), with this group demonstrating a significantly reduced primacy effect as compared to the recency effect (Study 2). In the third study, patterns of performance were compared in groups of subjects having sustained frontal lobe (FL; N = 21) and posterior lobe (PL; N = 21) lesions to the brain. Subjects with PL lesions demonstrated a significantly greater primacy effect than the FL group, with both groups demonstrating a reduced recency effect. Comparisons were also made between the PL and FL groups and normal control groups (FL-NC; N = 21; PL-NC; N = 21); results indicated that the FL group demonstrated significantly reduced primacy and recency effects when compared to its normal control group. When comparisons were made between the PL group and a normal control group, the PL group demonstrated a significantly reduced recency effect as compared to normal controls. Patterns of performance were also explored in a small sample of subjects with diffuse (DIFF; N = 6) damage to the brain; results demonstrated that this group displayed a reduced recency effect as compared to the primacy effect (Study 3). Overall, when examining serial position effects across all experimental groups, subjects who had sustained a traumatic injury to the brain or who were depressed all demonstrated a greater primacy effect than recency effect, recalling more words in that position. This contrasted with the pattern of performance that emerged with the various dementing processes, where more words were recalled in the recency position than in the primacy position. Results for all studies were analyzed using MANOVA followed by the Scheffé procedure.
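A minimal sketch of the analysis strategy named at the end of the abstract: a one-way MANOVA on serial-position recall scores across groups. The data frame, its columns and group sizes are hypothetical; Scheffé post-hoc contrasts would follow any significant multivariate effect.

```python
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "group": np.repeat(["TBI", "PSY-DEP", "NC"], 40),
    "primacy": rng.normal(5, 1.5, 120),    # words recalled, primacy positions
    "middle": rng.normal(3, 1.0, 120),
    "recency": rng.normal(4, 1.2, 120),
})

fit = MANOVA.from_formula("primacy + middle + recency ~ group", data=df)
print(fit.mv_test())   # Wilks' lambda etc.; follow with Scheffé contrasts
```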
98

A classic statistical model developed towards predicting financial distress

Le Roux, Marrelie January 2013 (has links)
To date there has been significant research on the topic of financial distress prediction, owing to its relevance to various stakeholders. Beaver (1966), Altman (1968) and Ohlson (1980) are generally regarded as the pioneers in this field of study; despite heavy criticism, their models are widely accepted and used. Studies by Grice & Ingram (2001), Grice & Dugan (2001) and Sudarsanam & Taffler (1995) have shown that these models need to be updated regularly with new variables and coefficients due to various factors. This study proposes to add to the body of knowledge by developing a distress prediction model using a classic statistical method and financial ratios, calculated on published company data of organisations listed on the Johannesburg Stock Exchange. / Dissertation (MBA)--University of Pretoria, 2013. / zkgibs2014 / Gordon Institute of Business Science (GIBS) / MBA / Unrestricted
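A minimal sketch of a distress model in the tradition of Ohlson (1980), whom the abstract cites: logistic regression on financial ratios. The ratios, coefficients and simulated data below are hypothetical placeholders, not the thesis's model or variables.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 500
X = np.column_stack([
    rng.normal(0.10, 0.3, n),   # e.g. working capital / total assets
    rng.normal(0.05, 0.2, n),   # e.g. net income / total assets
    rng.normal(0.50, 0.4, n),   # e.g. total liabilities / total assets
])
# Hypothetical distress indicator: more likely with high leverage, low returns
p = 1 / (1 + np.exp(-(-2 + 3 * X[:, 2] - 2 * X[:, 1])))
y = rng.binomial(1, p)

model = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
print(model.summary())          # fitted coefficients for each ratio
```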
99

Short-term keratometric variation in the human eye

Cronje-Dunn, Sonja 10 February 2014 (has links)
M.Phil. (Optometry) / Previous studies of corneal and keratometric variation used incomplete or incorrect statistical methods. For the first time, proper multivariate statistical methods are applied to evaluate short-term keratometric variation in human eyes. Keratometric variation is represented graphically by means of stereo-pair scatter plots, trajectories of change in dioptric power, ellipsoidal confidence regions for mean dioptric power, and meridional profiles. Quantitative expressions of variation are given in terms of mean values, variance-covariance matrices and volumes of 95% distribution ellipsoids. Manual and automatic keratometry are compared, both on a steel ball and on an eye. The automatic keratometer appears to exhibit less variation than the manual keratometer...
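A minimal sketch of the multivariate quantities the abstract reports: the mean, the variance-covariance matrix, and the volume of a 95% distribution ellipsoid for repeated keratometric readings, with dioptric power expressed as a 3-vector (e.g. M, J0, J45). All measurement values below are hypothetical.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(3)
# 40 simulated repeated readings of a power vector, in dioptres
powers = rng.multivariate_normal(mean=[43.5, -0.25, 0.05],
                                 cov=np.diag([0.02, 0.005, 0.005]), size=40)

mean = powers.mean(axis=0)                      # mean dioptric power
S = np.cov(powers, rowvar=False)                # variance-covariance matrix
k = chi2.ppf(0.95, df=3)                        # 95% quantile, 3 dof
# Volume of the ellipsoid {x : (x-mean)' S^{-1} (x-mean) <= k}
volume = (4 / 3) * np.pi * k**1.5 * np.sqrt(np.linalg.det(S))
print(mean, volume)
```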
100

An evaluation of regional stream sediment data by advanced statistical procedures

Matysek, Paul Frank January 1985 (has links)
This study was directed towards the development of rigorous, systematic, computer-assisted statistical procedures for the interpretation of quantitative and qualitative data commonly encountered in practical exploration-oriented surveys. A suite of data analysis tools was developed to evaluate the quality of geochemical data sets, to investigate the value and utilization of categorical field data, and to recognize and rank anomalous samples. Data obtained from regional stream sediment surveys undertaken by the British Columbia Ministry of Energy, Mines and Petroleum Resources in southern British Columbia were examined as a case history. A procedure based on a statistical analysis of field-site duplicates was developed to evaluate the quality of regional geochemical silt data. The technique determines: (1) whether differences in metal concentrations between sample sites reflect a real trend related to geological and geochemical features, and not merely a consequence of sampling and analytical error; and (2) absolute precision estimates at any particular concentration across a metal's concentration range. Results for the metals Zn, Cu, Ni, Co, Fe and Mn indicated that combined variability due to local and procedural error averaged less than 5% of the total error, and that precision estimates at the 95th percentile concentration value averaged less than 6.0%. The results indicate that the duplicates are more in accord with splits of individual samples (analytical duplicates) than with separate field-site duplicates. This systematic approach provides a basis for interpreting geochemical trends within the survey area, while simultaneously allowing evaluation of the methods of sampling and laboratory analysis. A procedure utilizing Duncan's Multiple Range Test examined the relationships between metal concentrations and class-interval and categorical observations of the drainage catchment, sample site and sediment sample. Results show that many field observations can be systematically related to the metal content of drainage sediments. Some elements are more susceptible than others to environmental factors, and some factors influence few or many elements. For example, in sediments derived from granites there are significant relationships between bank type and the concentrations of eight elements (Zn, Cu, Ni, Pb, Co, Fe, Mn and Hg). In contrast, the texture of these sediments, using estimates of fines content as an index, did not significantly affect the concentration of any of the elements studied. In general, results indicate that groups of environmental factors acting collectively are more important than any single factor in determining background metal contents of drainage sediments. A procedure utilizing both a graphical and a multiple regression approach was developed to identify and characterize anomalous samples. The procedure determines multivariate models based on background metal values, which describe very general geochemical relations of no interest for prospecting purposes. These models are then applied to sample subsets selected on the basis of factors known to strongly influence geochemical results. Individual samples are characterized after comparison with the relevant threshold levels and background multielement models. One hundred and fifteen anomalous samples for zinc, from seven provenance groups draining 1259 sample sites, were identified and characterized by this procedure. Forty-three of these samples had zinc concentrations greater than their calculated provenance thresholds, while 72 were identified solely because their individual metal associations differed significantly from their provenance multivariate background model. The method provides a means to reduce the effects of background variation while simultaneously identifying and characterizing anomalous samples. The data analysis tools described here allow extraction of useful information from regional geochemical data, and as a result provide an effective means of defining problems of geological interest that warrant further investigation. / Science, Faculty of / Earth, Ocean and Atmospheric Sciences, Department of / Graduate
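A minimal sketch of a duplicate-based precision estimate of the kind described above: use field-site duplicate pairs to express absolute precision as a function of concentration (in the spirit of Thompson-Howarth plots). The data and the simple linear fit are hypothetical stand-ins for the thesis's procedure.

```python
import numpy as np

rng = np.random.default_rng(4)
c = rng.lognormal(3, 1, 200)                 # "true" Zn concentrations, ppm
a = c * (1 + rng.normal(0, 0.03, 200))       # first analysis of each pair
b = c * (1 + rng.normal(0, 0.03, 200))       # field-site duplicate

mid = (a + b) / 2
d = np.abs(a - b)
# Fit |difference| as a linear function of concentration: the slope gives a
# relative (percentage) precision, the intercept an absolute floor.
slope, intercept = np.polyfit(mid, d, 1)
print(f"precision ≈ {intercept:.2f} ppm + {100 * slope:.1f}% of concentration")
```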
