191. Adaptive distance sampling. Pollard, John (January 2002)
We investigate mechanisms to improve efficiency for line and point transect surveys of clustered populations by combining distance methods with adaptive sampling. In adaptive sampling, survey effort is increased when areas of high animal density are located, thereby increasing the number of observations. We begin by building on existing adaptive sampling techniques to create both point and line transect adaptive estimators; these are then extended to allow the inclusion of covariates in the detection function estimator. However, the methods are limited because the total effort required cannot be forecast at the start of a survey, and so a new fixed-total-effort adaptive approach is developed. A key difference in the new method is that it does not require the calculation of the inclusion probabilities typically used by existing adaptive estimators. The fixed-effort method is primarily aimed at line transect sampling, but point transect derivations are also provided. We evaluate the new methodology by computer simulation, and report on surveys of harbour porpoise in the Gulf of Maine, in which the approach was compared with conventional line transect sampling. Line transect simulation results for a clustered population showed up to a 6% improvement in the adaptive density variance estimate over the conventional one, whilst when there was no clustering the adaptive estimate was 1% less efficient than the conventional one. For the harbour porpoise survey, the CVs of the adaptive density estimates showed improvements of 8% for individual porpoise density and 14% for school density over the conventional estimates. The primary benefit of the fixed-effort method is the potential to improve survey coverage, allowing a survey to complete within a fixed time and effort; an important feature when expensive survey resources are involved, such as an aircraft, crew and observers.
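
As a point of reference for the adaptive estimators described above, the sketch below implements the conventional line-transect density estimator with a half-normal detection function fitted by maximum likelihood. This is a minimal illustration, not the thesis's code: the simulated sighting distances, the half-normal model, and all parameter values are assumptions.

```python
import numpy as np

def halfnormal_sigma_mle(distances):
    # MLE of the half-normal scale from perpendicular sighting distances
    return np.sqrt(np.mean(np.asarray(distances) ** 2))

def line_transect_density(distances, total_line_length):
    # Conventional estimator D = n / (2 * L * mu), where mu is the
    # effective strip half-width under g(x) = exp(-x^2 / (2 sigma^2))
    n = len(distances)
    sigma = halfnormal_sigma_mle(distances)
    mu = sigma * np.sqrt(np.pi / 2.0)  # integral of g(x) over [0, inf)
    return n / (2.0 * total_line_length * mu)

# Simulated survey: 120 sightings along lines totalling 200 km
rng = np.random.default_rng(1)
obs = np.abs(rng.normal(0.0, 0.05, size=120))  # perpendicular distances (km)
print(line_transect_density(obs, total_line_length=200.0))  # animals / km^2
```

Adaptive variants change how effort is allocated (and, in the fixed-effort design, cap the total), but the density estimate at each stage rests on this same construction.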

192. Métodos para comparação de curvas de crescimento [Methods for comparing growth curves]. Carvalho, Lídia Raquel de [UNESP] (16 February 1996)
The logistic and Gompertz growth functions have been extensively studied and are frequently used in the biological sciences. Many researchers have fitted logistic or Gompertz functions to data from experiments with several treatments, where curves are fitted to each treatment and the interest is in whether the treatments differ. Verifying the adequacy of nonlinear fits, and comparing different functions for a given data set, are well covered in the literature; however, when the same type of function is fitted to several situations (treatments) and the aim is to compare them, support in the literature is hard to find. The objective of this work was to present a method for comparing logistic and Gompertz curves. The fitted equations were compared through tests on their parameters, using both parametric and non-parametric methods. Values of the independent variable x were also determined beyond which the difference between the asymptote and the fitted curve ceases to be significant. The study covered the logistic model with additive error, the logistic model with multiplicative error, the Gompertz model with additive error, and the Gompertz model with multiplicative error, each in the absence and in the presence of autocorrelation in the residuals. To illustrate the methodology, the models were fitted to data on fresh matter weight (g) of seeds of the bean Phaseolus vulgaris L. cv. carioca 80 SH, mean percentages of araribá fruit weight, weights of Indian River broiler chickens, and weights of rats Rattus norvegicus.
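
To make the comparison setting concrete, here is a minimal sketch, assuming particular (common) parameterizations of the two curves, that fits both to the same data with `scipy.optimize.curve_fit` and forms approximate Wald statistics for the parameters; the simulated data and starting values are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, a, b, k):
    # a: upper asymptote, b: location, k: growth rate
    return a / (1.0 + np.exp(b - k * t))

def gompertz(t, a, b, k):
    return a * np.exp(-np.exp(b - k * t))

t = np.arange(1.0, 15.0)
y = logistic(t, 30.0, 4.0, 0.6) + np.random.default_rng(0).normal(0, 0.5, t.size)

for model in (logistic, gompertz):
    popt, pcov = curve_fit(model, t, y, p0=(y.max(), 1.0, 0.5), maxfev=10000)
    z = popt / np.sqrt(np.diag(pcov))  # approximate Wald z-statistics
    print(model.__name__, np.round(popt, 2), np.round(z, 1))
```

Comparing the same parameter across treatments would proceed analogously, with a z- or t-statistic built from the difference of the estimates and its standard error.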

193. Statistical histogram characterization and modeling: theory and applications. Choy, Siu Kai (1 January 2008)
No description available.

194. Complete spatial randomness tests, intensity-dependent marking and neighbourhood competition of spatial point processes with applications to ecology. Ho, Lai Ping (1 January 2006)
No description available.

195. An application of factor analysis on a 24-item scale on the attitudes towards AIDS precautions using Pearson, Spearman and Polychoric correlation matrices. Abdalmajid, Mohammed Babekir Elmalik (January 2006)
The 24-item scale has been used extensively to assess attitudes towards AIDS precautions. This study investigated the usefulness and validity of the instrument in a South African setting, fourteen years after its development. If a new structure could be found statistically, HIV/AIDS prevention strategies could be more effective in aiding campaigns to change attitudes and sexual behaviour.
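
A minimal sketch of the exploratory step, assuming principal-axis-style extraction from a correlation matrix. The simulated responses and factor structure are hypothetical; polychoric correlations, the appropriate choice for ordinal items, require specialized maximum-likelihood estimation and have no standard SciPy routine, so only Pearson and Spearman matrices are shown.

```python
import numpy as np
from scipy.stats import spearmanr

def factor_loadings(corr, n_factors=2):
    # Loadings from the leading eigenpairs of a correlation matrix,
    # scaled by the square root of the eigenvalues
    vals, vecs = np.linalg.eigh(corr)
    top = np.argsort(vals)[::-1][:n_factors]
    return vecs[:, top] * np.sqrt(vals[top])

rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 2))  # two hypothetical underlying factors
items = latent @ rng.normal(size=(2, 24)) + rng.normal(scale=0.8, size=(200, 24))

pearson = np.corrcoef(items, rowvar=False)
spearman_rho, _ = spearmanr(items)  # rank-based alternative

for name, corr in (("Pearson", pearson), ("Spearman", spearman_rho)):
    print(name, np.round(factor_loadings(corr)[:3], 2))  # first 3 items
```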

196. Model selection for cointegrated relationships in small samples. He, Wei (January 2008)
Vector autoregression models have become widely used research tools in the analysis of macroeconomic time series, and cointegration techniques are an essential part of empirical macroeconomic research: they infer causal long-run relationships between nonstationary variables. In this study, six information criteria were reviewed and compared, with the aim of determining which criterion best detects the correct lag structure of a two-variable cointegrated process.
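
A sketch of the comparison, assuming three familiar criteria (AIC, BIC and Hannan-Quinn) computed from VAR(p) models fitted by OLS; the thesis compares six criteria, and the simulated cointegrated pair below is illustrative.

```python
import numpy as np

def var_ic(data, max_lag):
    # For p = 1..max_lag, fit VAR(p) by OLS and report information
    # criteria of the form ln|Sigma| + penalty(T) * n_params / T
    results = {}
    for p in range(1, max_lag + 1):
        Y = data[p:]
        X = np.hstack([np.ones((len(Y), 1))] +
                      [data[p - l: len(data) - l] for l in range(1, p + 1)])
        B, *_ = np.linalg.lstsq(X, Y, rcond=None)
        resid = Y - X @ B
        T, k = Y.shape
        logdet = np.log(np.linalg.det(resid.T @ resid / T))
        n_par = k * X.shape[1]
        results[p] = {"AIC": logdet + 2 * n_par / T,
                      "BIC": logdet + n_par * np.log(T) / T,
                      "HQ":  logdet + 2 * n_par * np.log(np.log(T)) / T}
    return results

rng = np.random.default_rng(2)
e = rng.normal(size=(200, 2))
trend = np.cumsum(e[:, :1], axis=0)  # shared stochastic trend
series = np.hstack([trend + e[:, :1], trend + e[:, 1:]])  # cointegrated pair
for p, ic in var_ic(series, 4).items():
    print(p, {name: round(v, 3) for name, v in ic.items()})
```

For a strictly fair comparison the criteria should be evaluated on a common effective sample across lag orders; the sketch ignores that refinement.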

197. Randomization in a two-armed clinical trial: an overview of different randomization techniques. Batidzirai, Jesca Mercy (January 2011)
Randomization is the key element of any sensible clinical trial. It is the only way to ensure that patients are allocated to the treatment groups without bias and that the groups are comparable before the trial begins, and the randomization scheme used to allocate patients plays a central role in achieving this goal. This study uses SAS simulations and categorical data analysis to compare the two main classes of randomization schemes in dental studies with small samples: unrestricted randomization (simple randomization) and restricted randomization (the minimization method). Results show that minimization produces almost equally sized treatment groups, whereas simple randomization is weak at balancing prognostic factors, although by chance it can still produce balanced groups even in small samples. Statistical power is also higher under minimization than under simple randomization, but larger samples may be needed to boost power.
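
A simplified sketch of the two schemes; the minimization step is a deterministic variant of the Pocock-Simon rule over two prognostic factors, whereas practical implementations usually assign to the imbalance-minimizing arm with high probability (e.g. 0.8) rather than always. Factor names and the tie-breaking rule are illustrative assumptions.

```python
import random

def simple_randomization():
    # Unrestricted: each patient is an independent coin flip
    return random.choice(("A", "B"))

def minimization_assign(new_patient, history, arms=("A", "B")):
    # For each candidate arm, measure how much it would worsen the
    # marginal balance over the new patient's prognostic factor levels
    scores = {}
    for arm in arms:
        score = 0
        for factor, level in new_patient.items():
            for patient, assigned in history:
                if patient.get(factor) == level:
                    score += 1 if assigned == arm else -1
        scores[arm] = score
    best = min(scores.values())
    return random.choice([a for a in arms if scores[a] == best])

history = []
for _ in range(20):
    patient = {"sex": random.choice("MF"), "age": random.choice(["<40", ">=40"])}
    history.append((patient, minimization_assign(patient, history)))
print([arm for _, arm in history])
```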

198. A comparison of longitudinal statistical methods in studies of pulmonary function decline. Dimich-Ward, Helen D.
Three longitudinal pulmonary function data sets were analyzed by several statistical methods for the purposes of:
1) determining to what degree the conclusions of an analysis for a given data set are method dependent;
2) assessing the properties of each method across the different data sets;
3) studying the correlates of FEV₁ decline, including physical, behavioral, and respiratory factors, as well as city of residence and type of work;
4) assessing the appropriateness of modelling the standard linear relationship of FEV₁ with time and providing alternative approaches;
5) describing longitudinal change in various lung function variables, apart from FEV₁.
The three data sets comprised (1) yearly data on 141 veterans with mild chronic bronchitis, taken at three Canadian centres, for a maximum of 23 years of follow-up; their mean age at the start of the study was 49 years (s.d.=9) and only 10.6% were nonsmokers during the follow-up; (2) retrospective data on 384 coal workers categorized into four groups according to vital status (dead or alive) and smoking behavior, with irregular follow-up intervals ranging from 2 to 12 measurements per individual over a period of 9 to 30 years; (3) a relatively balanced data set on 269 grain workers and a control group of 58 civic workers, which consisted of 3 to 4 measurements taken over an average follow-up of 9 years. Their mean age at first measurement was 37 years (s.d.=10) and 53.2% of the subjects did not smoke.
A review of the pulmonary and statistical literature was carried out to identify methods of analysis which had been applied to calculate annual change in FEV₁. Five methods chosen for the data analyses were variants of ordinary least squares approaches. The other four methods were based on the use of transformations, weighted least squares, or covariance structure models using generalized least squares approaches.
For the coal workers, the groups that were alive at the time of ascertainment had significantly smaller average FEV₁ declines than the deceased groups. Post-retirement decline in FEV₁ was shown by one statistical method to significantly increase for coal workers who smoked, while a significant decrease was observed for nonsmokers. Veterans from Winnipeg consistently showed the lowest decline estimates in comparison to Halifax and Toronto; recorded air pollution measurements were found to be the lowest for Winnipeg, while no significant differences in smoking behavior were found between the veterans of each city. The data set of grain workers proved most amenable to all the different analytical techniques, which were consistent in showing no significant differences in FEV₁ decline between the grain and civic worker groups and the lowest magnitude of FEV₁ decline.
It was shown that quadratic and allometric analyses provided additional information to the linear description of FEV₁ decline, particularly for the study of pulmonary decline among older or exposed populations over an extended period of time. Whether the various initial lung function variables were each predictive of later decline was dependent on whether absolute or percentage decline was evaluated. The pattern of change in these lung function measures over time showed group differences suggestive of different physiological responses.
Although estimates of FEV₁ decline were similar between the various methods, the magnitude and relative order of the different groups and the statistical significance of the observed inter-group comparisons were method-dependent. No single method was optimal for analysis of all three data sets. Reliance on only one model, and one type of lung function measurement, to describe the data, as is commonly found in the pulmonary literature, could lead to a false interpretation of the results. Thus a comparative approach, using more than one justifiable model for analysis, is recommended, especially in the usual circumstances where missing data or irregular follow-up times create imbalance in the longitudinal data set.
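
For concreteness, a sketch of the simplest member of this family of methods: a per-subject ordinary least squares slope of FEV₁ on time. The subject data below are hypothetical.

```python
import numpy as np

def ols_slope(times, values):
    # Ordinary least squares slope of FEV1 (litres) on time (years)
    t = np.asarray(times, dtype=float)
    y = np.asarray(values, dtype=float)
    t_c = t - t.mean()
    return np.dot(t_c, y - y.mean()) / np.dot(t_c, t_c)

fev1 = [3.90, 3.85, 3.80, 3.74, 3.70, 3.66]  # six annual visits
print(f"decline: {ols_slope(range(6), fev1) * 1000:.0f} mL/year")
```

Weighted least squares and covariance-structure (generalized least squares) approaches differ in how they pool such subject-level information and model the within-subject correlation.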

199. Statistical methods for Mendelian randomization using GWAS summary data. Hu, Xianghong (23 August 2019)
Mendelian randomization (MR) is a powerful tool for assessing the causal effect of an exposure on an outcome, using genetic variants as instrumental variables. Much of its recent development has been propelled by the increasing availability of GWAS summary data. However, the accuracy of MR causal effect estimates can be compromised when the MR assumptions are violated. Sources of bias include weak effects arising from polygenicity, the presence of horizontal pleiotropy, and other biases such as selection bias. This thesis proposes two methods to address these issues.

In the first part, we propose Bayesian Weighted Mendelian Randomization (BWMR), a method for causal inference using summary statistics from GWAS. BWMR not only takes into account the uncertainty of weak effects owing to the polygenicity of the human genome but also models weak horizontal pleiotropic effects. Moreover, BWMR adopts a Bayesian reweighting strategy for the detection of large pleiotropic outliers. An efficient algorithm based on variational inference was developed to make BWMR computationally efficient and stable. Because variational inference underestimates the variance, we further derived a closed-form variance estimator inspired by a linear response method. We conducted several simulations to evaluate the performance of BWMR, demonstrating its advantage over other methods. We then applied BWMR to assess causality between 126 metabolites and 90 complex traits, revealing novel causal relationships.

In the second part, we developed BWMR-C, a statistical correction of selection bias for Mendelian randomization based on a Bayesian weighted method. Built on the BWMR framework, the probability model in BWMR-C is conditioned on the instrumental-variable selection criteria. In this way, BWMR-C reduces the influence of the selection process on the causal effect estimates while preserving the good properties of BWMR. To make the causal inference computationally stable and efficient, we developed a variational EM algorithm. We conducted comprehensive simulations to evaluate the performance of BWMR-C in correcting selection bias, and then applied BWMR-C to seven body fat distribution related traits and 140 UK Biobank traits. Our results show that BWMR-C achieves satisfactory performance in correcting selection bias.

Keywords: Mendelian randomization, polygenicity, horizontal pleiotropy, selection bias, variational inference.
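
BWMR itself is a variational Bayesian model and is not reproduced here. As a self-contained point of reference, the sketch below implements the standard inverse-variance weighted (IVW) estimator from GWAS summary statistics, the baseline that weighted Bayesian methods such as BWMR extend; all simulated effect sizes are illustrative.

```python
import numpy as np

def ivw_mr(beta_exp, se_exp, beta_out, se_out):
    # IVW estimate: weighted average of per-variant ratio estimates
    # beta_out / beta_exp, with first-order weights beta_exp^2 / se_out^2
    w = beta_exp ** 2 / se_out ** 2
    est = np.sum(w * beta_out / beta_exp) / np.sum(w)
    return est, np.sqrt(1.0 / np.sum(w))

rng = np.random.default_rng(3)
m, true_effect = 50, 0.25
bx = rng.normal(0.10, 0.03, m)                  # SNP-exposure effects
by = true_effect * bx + rng.normal(0, 0.01, m)  # SNP-outcome effects
sx, sy = np.full(m, 0.01), np.full(m, 0.01)
print(ivw_mr(bx, sx, by, sy))                   # estimate, standard error
```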

200. Clustering Algorithm for Zero-Inflated Data (January 2020)
Zero-inflated data are common in biomedical research. In cluster analysis, heuristic approaches fail to provide inferential properties for the outcome, while the existing model-based approach only works for mixtures of multivariate normals. In this dissertation, I developed two new model-based clustering algorithms: the multivariate zero-inflated log-normal and the multivariate zero-inflated Poisson clustering algorithms. I then applied these methods to questionnaire data and compared the resulting clusters to those derived under an assumed multivariate normal distribution. Associations between clustering results and clinical outcomes were also investigated.
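
A toy, univariate version of the idea, assuming a two-component zero-inflated Poisson mixture fitted by EM; the dissertation's algorithms are multivariate, and the data, initialization, and component count here are illustrative.

```python
import numpy as np
from scipy.stats import poisson

def zip_pmf(x, pi0, lam):
    # Zero-inflated Poisson: extra point mass pi0 at zero
    base = (1.0 - pi0) * poisson.pmf(x, lam)
    return np.where(x == 0, pi0 + base, base)

def zip_mixture_em(x, n_clusters=2, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    w = np.full(n_clusters, 1.0 / n_clusters)
    pi0 = rng.uniform(0.1, 0.5, n_clusters)
    lam = rng.uniform(1.0, x.max() + 1.0, n_clusters)
    for _ in range(iters):
        # E-step: cluster responsibilities
        dens = np.array([w[k] * zip_pmf(x, pi0[k], lam[k])
                         for k in range(n_clusters)])
        r = dens / dens.sum(axis=0)
        # M-step: update mixture weights, zero-inflation, Poisson means
        w = r.mean(axis=1)
        for k in range(n_clusters):
            z0 = pi0[k] / (pi0[k] + (1 - pi0[k]) * poisson.pmf(0, lam[k]))
            z = np.where(x == 0, z0, 0.0)  # P(zero came from inflation)
            pi0[k] = np.sum(r[k] * z) / r[k].sum()
            lam[k] = np.sum(r[k] * (1 - z) * x) / np.sum(r[k] * (1 - z))
    return w, pi0, lam, r.argmax(axis=0)

rng = np.random.default_rng(1)
x = np.concatenate([rng.poisson(1.0, 150),
                    np.zeros(50, dtype=int),
                    rng.poisson(8.0, 100)])
w, pi0, lam, labels = zip_mixture_em(x)
print(np.round(w, 2), np.round(lam, 2))
```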