81 |
Modélisation des données d'enquêtes cas-cohorte par imputation multiple : application en épidémiologie cardio-vasculaire / Modeling of case-cohort data by multiple imputation: application to cardio-vascular epidemiology. Marti Soler, Helena, 04 May 2012.
Les estimateurs pondérés généralement utilisés pour analyser les enquêtes cas-cohorte ne sont pas pleinement efficaces. Or, les enquêtes cas-cohorte sont un cas particulier de données incomplètes où le processus d'observation est contrôlé par les organisateurs de l'étude. Ainsi, des méthodes d'analyse pour données manquant au hasard (MA) peuvent être pertinentes, en particulier, l'imputation multiple, qui utilise toute l'information disponible et permet d'approcher l'estimateur du maximum de vraisemblance partielle. Cette méthode est fondée sur la génération de plusieurs jeux plausibles de données complétées prenant en compte les différents niveaux d'incertitude sur les données manquantes. Elle permet d'adapter facilement n'importe quel outil statistique disponible pour les données de cohorte, par exemple, l'estimation de la capacité prédictive d'un modèle ou d'une variable additionnelle qui pose des problèmes spécifiques dans les enquêtes cas-cohorte. Nous avons montré que le modèle d'imputation doit être estimé à partir de tous les sujets complètement observés (cas et non-cas) en incluant l'indicatrice de statut parmi les variables explicatives. Nous avons validé cette approche à l'aide de plusieurs séries de simulations : 1) données complètement simulées, où nous connaissions les vraies valeurs des paramètres, 2) enquêtes cas-cohorte simulées à partir de la cohorte PRIME, où nous ne disposions pas d'une variable de phase-1 (observée sur tous les sujets) fortement prédictive de la variable de phase-2 (incomplètement observée), 3) enquêtes cas-cohorte simulées à partir de la cohorte NWTS, où une variable de phase-1 fortement prédictive de la variable de phase-2 était disponible. Ces simulations ont montré que l'imputation multiple fournissait généralement des estimateurs sans biais des risques relatifs. Pour les variables de phase-1, ils approchaient la précision obtenue par l'analyse de la cohorte complète, ils étaient légèrement plus précis que l'estimateur calibré de Breslow et coll. et surtout que les estimateurs pondérés classiques. Pour les variables de phase-2, l'estimateur de l'imputation multiple était généralement sans biais et d'une précision supérieure à celle des estimateurs pondérés classiques et analogue à celle de l'estimateur calibré. Les résultats des simulations réalisées à partir des données de la cohorte NWTS étaient cependant moins bons pour les effets impliquant la variable de phase-2 : les estimateurs de l'imputation multiple étaient légèrement biaisés et moins précis que les estimateurs pondérés. Cela s'explique par la présence de termes d'interaction impliquant la variable de phase-2 dans le modèle d'analyse, d'où la nécessité d'estimer des modèles d'imputation spécifiques à différentes strates de la cohorte incluant parfois trop peu de cas pour que les conditions asymptotiques soient réunies. Nous recommandons d'utiliser l'imputation multiple pour obtenir des estimations plus précises des risques relatifs, tout en s'assurant qu'elles sont analogues à celles fournies par les analyses pondérées. Nos simulations ont également montré que l'imputation multiple fournissait des estimations de la valeur prédictive d'un modèle (C de Harrell) ou d'une variable additionnelle (différence des indices C, NRI ou IDI) analogues à celles fournies par la cohorte complète.

/ The weighted estimators generally used for analyzing case-cohort studies are not fully efficient. However, case-cohort surveys are a special type of incomplete data in which the observation process is controlled by the study organizers. So, methods for analyzing Missing At Random (MAR) data could be appropriate, in particular multiple imputation, which uses all the available information and makes it possible to approximate the partial maximum likelihood estimator. This approach is based on the generation of several plausible complete data sets, taking into account all the uncertainty about the missing values. It allows adapting any statistical tool available for cohort data, for instance estimators of the predictive ability of a model or of an additional variable, which raise specific problems with case-cohort data. We have shown that the imputation model must be estimated on all the completely observed subjects (cases and non-cases), including the case indicator among the explanatory variables. We validated this approach with several sets of simulations: 1) completely simulated data where the true parameter values were known, 2) case-cohort data simulated from the PRIME cohort, without any phase-1 variable (completely observed) strongly predictive of the phase-2 variable (incompletely observed), 3) case-cohort data simulated from the NWTS cohort, where a phase-1 variable strongly predictive of the phase-2 variable was available. These simulations showed that multiple imputation generally provided unbiased estimates of the risk ratios. For the phase-1 variables, they were almost as precise as the estimates provided by the full cohort, slightly more precise than the calibrated estimator of Breslow et al. and clearly more precise than the classical weighted estimators. For the phase-2 variables, the multiple imputation estimator was generally unbiased, with a precision better than that of the classical weighted estimators and similar to that of the calibrated estimator. The simulations performed with the NWTS cohort data gave less satisfactory results for the effects involving the phase-2 variable: the multiple imputation estimators were slightly biased and less precise than the weighted estimators. This can be explained by the interaction terms involving the phase-2 variable in the analysis model, which required estimating specific imputation models in different strata, some of which contained too few cases to satisfy the asymptotic conditions. We advocate the use of multiple imputation for improving the precision of the risk ratio estimates, while making sure they are similar to the weighted estimates. Our simulations also showed that multiple imputation provided estimates of the predictive value of a model (Harrell's C) or of an additional variable (difference in C indices, NRI or IDI) similar to those obtained from the full cohort.
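For readers unfamiliar with the pooling step of multiple imputation mentioned above, the following sketch shows Rubin's rules for combining estimates across imputed datasets. It is a generic illustration, not code from the thesis, and the numbers in the example are invented.

```python
import numpy as np

def pool_rubin(estimates, variances):
    """Combine point estimates and their variances from m imputed datasets
    using Rubin's rules."""
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(variances, dtype=float)
    m = len(estimates)
    q_bar = estimates.mean()                 # pooled point estimate
    w_bar = variances.mean()                 # within-imputation variance
    b = estimates.var(ddof=1)                # between-imputation variance
    t = w_bar + (1.0 + 1.0 / m) * b          # total variance
    df = (m - 1) * (1.0 + w_bar / ((1.0 + 1.0 / m) * b)) ** 2  # Rubin's degrees of freedom
    return q_bar, t, df

# Invented example: log hazard ratios and their variances from m = 5 imputed datasets
log_hr = [0.42, 0.39, 0.45, 0.41, 0.40]
var_hr = [0.010, 0.011, 0.009, 0.010, 0.012]
estimate, total_var, df = pool_rubin(log_hr, var_hr)
print(estimate, total_var ** 0.5, df)        # pooled log HR, its standard error, df
```

In practice each imputed dataset would be analysed with the same Cox model; only the pooling of the resulting log hazard ratios is shown here.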
|
83 |
Statistical Properties of Preliminary Test Estimators. Korsell, Nicklas, January 2006.
This thesis investigates the statistical properties of preliminary test estimators of linear models with normally distributed errors. Specifically, we derive exact expressions for the mean, variance and quadratic risk (i.e. the Mean Square Error) of estimators whose form is determined by the outcome of a statistical test. In the process, some new results on the moments of truncated linear or quadratic forms in normal vectors are established. In the first paper (Paper I), we consider the estimation of the vector of regression coefficients under a model selection procedure where it is assumed that the analyst chooses between two nested linear models by one of the standard model selection criteria. This is shown to be equivalent to estimation under a preliminary test of some linear restrictions on the vector of regression coefficients. The main contribution of Paper I compared to earlier research is the generality of the form of the test statistic; we only assume it to be a quadratic form in the (translated) observation vector. Paper II deals with the estimation of the regression coefficients under a preliminary test for homoscedasticity of the error variances. In Paper III, we investigate the statistical properties of estimators, truncated at zero, of variance components in linear models with random effects. Paper IV establishes some new results on the moments of truncated linear and/or quadratic forms in normally distributed vectors; these results are used in Papers I-III. In Paper V we study some algebraic properties of matrices that occur in the comparison of two nested models. Specifically, we derive an expression for the inertia (the number of positive, negative and zero eigenvalues) of such matrices.
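To make the notion of a preliminary test estimator concrete, here is a minimal sketch in the simplest setting: an F-test decides whether one regression coefficient is restricted to zero. The function name, the F-test choice and the simulated data are illustrative assumptions, not the more general quadratic-form framework of the thesis.

```python
import numpy as np
from scipy import stats

def pretest_estimator(X, y, drop_idx, alpha=0.05):
    """Preliminary-test estimator: use the full OLS fit only if the F-test
    rejects the restriction beta[drop_idx] = 0, otherwise the restricted fit."""
    n, p = X.shape
    beta_full = np.linalg.lstsq(X, y, rcond=None)[0]
    rss_full = np.sum((y - X @ beta_full) ** 2)
    X_r = np.delete(X, drop_idx, axis=1)                 # restricted design matrix
    beta_r = np.linalg.lstsq(X_r, y, rcond=None)[0]
    rss_r = np.sum((y - X_r @ beta_r) ** 2)
    q = len(np.atleast_1d(drop_idx))
    f_stat = ((rss_r - rss_full) / q) / (rss_full / (n - p))
    if f_stat > stats.f.ppf(1 - alpha, q, n - p):
        return beta_full                                 # restriction rejected: full model
    return np.insert(beta_r, drop_idx, 0.0)              # restriction kept: restricted fit

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(100), rng.normal(size=(100, 2))])
y = X @ np.array([1.0, 2.0, 0.0]) + rng.normal(size=100)
print(pretest_estimator(X, y, drop_idx=2))
```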
|
84 |
Recursive Passive Localization Methods Using Time Difference Of Arrival. Camlica, Sedat, 01 October 2009.
In this thesis, the passive localization problem is studied. Robust and recursive solutions based on Time Difference of Arrival (TDOA) measurements are presented. The TDOA measurements are assumed to be gathered by moving sensors, which synthetically increases the effective number of sensors.
First of all, a location estimator should be capable of processing new measurements without discarding the past data. This can be accomplished by updating the estimate recursively whenever new measurements become available, using convenient recursive filters such as the Kalman filter and the Extended Kalman filter. Recursive estimators can be divided into two major groups: (a) estimators that process the TDOA measurements directly, and (b) post-processing estimators that process the TDOA indirectly by fusing or smoothing the available location estimates. Recursive passive localization methods are presented for both types.
In practice, issues such as sensors being spatially distant from each other and/or a radar with a rotating narrow beam may prevent the sensors from receiving the same pulse. In such a case, the sensors cannot construct common TDOA measurements and therefore cannot carry out the location estimation procedure. Additionally, there may be more than one sensor group making TDOA measurements, where a sensor group consists of sensors able to receive the same pulse; an estimator should be capable of fusing the measurements from different sensor groups. Solutions to these problems are also given in this work.
The performance of the presented methods is compared in simulation studies. The best-performing method, which is based on the Kalman filter, is also capable of estimating the track of a moving emitter by directly processing the TDOA measurements.
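As an illustration of the first group of recursive estimators, the sketch below runs an Extended Kalman Filter that processes TDOA measurements directly for a static emitter. The sensor geometry, noise levels, numeric Jacobian and the small fictitious process noise are all illustrative assumptions, not the configuration used in the thesis.

```python
import numpy as np

C = 3e8  # assumed propagation speed (m/s)

def tdoa_meas(x, sensors):
    """Predicted TDOAs of emitter position x, relative to sensor 0."""
    ranges = np.linalg.norm(sensors - x, axis=1)
    return (ranges[1:] - ranges[0]) / C

def ekf_update(x, P, z, R, sensors, eps=1.0):
    """One EKF measurement update for a static emitter (numeric Jacobian)."""
    h = tdoa_meas(x, sensors)
    H = np.zeros((len(h), len(x)))
    for j in range(len(x)):
        dx = np.zeros(len(x))
        dx[j] = eps
        H[:, j] = (tdoa_meas(x + dx, sensors) - h) / eps
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ (z - h), (np.eye(len(x)) - K @ H) @ P

# Hypothetical geometry: four sensors, true emitter at (4 km, 3 km)
sensors = np.array([[0.0, 0.0], [5e3, 0.0], [0.0, 5e3], [5e3, 5e3]])
true_pos = np.array([4e3, 3e3])
rng = np.random.default_rng(1)

x = sensors.mean(axis=0)            # initialise at the sensor centroid
P = np.eye(2) * 1e6
R = np.eye(3) * (20e-9) ** 2        # assumed 20 ns TDOA noise
Q = np.eye(2) * 50.0 ** 2           # small fictitious process noise keeps the gain alive

for _ in range(30):                 # each pass: a fresh snapshot of TDOA measurements
    P = P + Q
    z = tdoa_meas(true_pos, sensors) + rng.normal(0.0, 20e-9, size=3)
    x, P = ekf_update(x, P, z, R, sensors)

print(x)                            # should end up close to (4000, 3000)
```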
|
85 |
Multi-tree Monte Carlo methods for fast, scalable machine learning. Holmes, Michael P., 09 January 2009.
As modern applications of machine learning and data mining are forced to deal with ever more massive quantities of data, practitioners quickly run into difficulty with the scalability of even the most basic and fundamental methods. We propose to provide scalability through a marriage between classical, empirical-style Monte Carlo approximation and deterministic multi-tree techniques. This union entails a critical compromise: losing determinism in order to gain speed. In the face of large-scale data, such a compromise is arguably often not only the right but the only choice. We refer to this new approximation methodology as Multi-Tree Monte Carlo. In particular, we have developed the following fast approximation methods:
1. Fast training for kernel conditional density estimation, showing speedups as high as 10⁵ on up to 1 million points.
2. Fast training for general kernel estimators (kernel density estimation, kernel regression, etc.), showing speedups as high as 10⁶ on tens of millions of points.
3. Fast singular value decomposition, showing speedups as high as 10⁵ on matrices containing billions of entries.
The level of acceleration we have shown represents improvement over the prior state of the art by several orders of magnitude. Such improvement entails a qualitative shift, a commoditization, that opens doors to new applications and methods that were previously invisible, outside the realm of practicality. Further, we show how these particular approximation methods can be unified in a Multi-Tree Monte Carlo meta-algorithm which lends itself as scaffolding to the further development of new fast approximation methods. Thus, our contribution includes not just the particular algorithms we have derived but also the Multi-Tree Monte Carlo methodological framework, which we hope will lead to many more fast algorithms that can provide the kind of scalability we have shown here to other important methods from machine learning and related fields.
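As a plain illustration of the core idea, Monte Carlo approximation of a large kernel summation with a data-driven stopping rule, the sketch below estimates a single Gaussian kernel mean by sampling reference points until a CLT-based relative error target is met. It omits the multi-tree machinery described above, and all parameter values are hypothetical.

```python
import numpy as np

def mc_kernel_mean(x_query, data, h, rel_err=0.05, z=2.576, batch=256, max_n=None):
    """Monte Carlo estimate of (1/N) * sum_i K((x - X_i)/h) for a Gaussian kernel,
    growing the sample until a CLT-based relative error bound is met."""
    rng = np.random.default_rng(0)
    samples = np.empty(0)
    max_n = max_n or len(data)
    while True:
        idx = rng.integers(0, len(data), size=batch)
        k = np.exp(-0.5 * ((x_query - data[idx]) / h) ** 2)   # unnormalised Gaussian kernel
        samples = np.concatenate([samples, k])
        est = samples.mean()
        half = z * samples.std(ddof=1) / np.sqrt(len(samples))
        if half <= rel_err * abs(est) or len(samples) >= max_n:
            return est

data = np.random.default_rng(42).normal(size=1_000_000)
approx = mc_kernel_mean(0.0, data, h=0.2)
exact = np.exp(-0.5 * ((0.0 - data) / 0.2) ** 2).mean()      # brute-force reference
print(approx, exact)
```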
|
86 |
Αρνητική διωνυμική κατανομή και εκτίμηση των παραμέτρων της / Negative binomial distribution and estimation of its parameters. Δίκαρος, Ανδρέας, 29 December 2010.
Η παρούσα μεταπτυχιακή διατριβή εντάσσεται ερευνητικά στην περιοχή της Στατιστικής θεωρίας Αποφάσεων και ειδικότερα στη μελέτη της αρνητικής διωνυμικής κατανομής καθώς επίσης και στην εκτίμηση των παραμέτρων της.
Στο Κεφάλαιο 1 παρουσιάζονται κάποιοι χρήσιμοι, για την πορεία της μελέτης μας, ορισμοί και θεωρήματα.
Στο Κεφάλαιο 2 μελετάται το μοντέλο της αρνητικής διωνυμικής κατανομής, δίνονται τα χαρακτηριστικά μεγέθη αυτής και παρουσιάζονται οι διαφορετικές παραμετρικοποιήσεις της.
Στο Κεφάλαιο 3, εξετάζεται το πρόβλημα εκτίμησης των παραμέτρων της αρνητικής διωνυμικής κατανομής και πιο ειδικά η εκτίμηση για τις διάφορες παραμετρικοποιήσης της. Για περισσότερη ανάλυση χρησιμοποιούνται η εκτίμηση μέγιστης πιθανοφάνειας, η εκτίμηση με τη μέθοδο των ροπών και πιο εξειδικευμένες υπολογιστικές μέθοδοι εκτίμησης.
Στο Κεφάλαιο 4, και για το ίδιο πρόβλημα εκτίμησης που πραγματεύεται το προηγούμενο κεφάλαιο, επιλέγεται ο βέλτιστος εκτιμητής των παραμέτρων της αρνητικής διωνυμικής κατανομής και παρουσιάζεται ένα παράδειγμα για την κατανόηση των μεθόδων εκτίμησης. / This master's thesis belongs to the area of Statistical Decision Theory and, in particular, studies the negative binomial distribution and the estimation of its parameters.
In Chapter 1 some useful definitions and theorems are presented.
In Chapter 2 the negative binomial distribution model is studied, its characteristic quantities are given, and its different parameterizations are presented.
In Chapter 3 we examine the problem of estimating the parameters of the model under its various parameterizations. In particular, we use maximum likelihood estimation, the method of moments and more specialized computational estimation methods.
In Chapter 4, for the same estimation problem as in the previous chapter, the best estimator of the model parameters is selected and an example is presented to illustrate the estimation methods.
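For concreteness, the sketch below computes method-of-moments and maximum likelihood estimates of the negative binomial parameters (r, p), under the parameterization with mean r(1-p)/p and variance r(1-p)/p². It is a generic illustration of these two standard methods, not code from the thesis.

```python
import numpy as np
from scipy import optimize, special

def nb_moments(x):
    """Method-of-moments estimates of (r, p); requires sample variance > mean."""
    m, v = np.mean(x), np.var(x, ddof=1)
    p = m / v
    r = m * p / (1.0 - p)
    return r, p

def nb_mle(x):
    """Maximum likelihood estimates of (r, p), maximising the NB log-likelihood."""
    def negloglik(params):
        r, p = params
        return -np.sum(special.gammaln(x + r) - special.gammaln(r)
                       - special.gammaln(x + 1) + r * np.log(p) + x * np.log1p(-p))
    r0, p0 = nb_moments(x)                       # moment estimates as starting values
    res = optimize.minimize(negloglik, x0=[r0, p0], method="L-BFGS-B",
                            bounds=[(1e-6, None), (1e-6, 1 - 1e-6)])
    return res.x

rng = np.random.default_rng(3)
sample = rng.negative_binomial(5, 0.3, size=2000)   # true r = 5, p = 0.3
print(nb_moments(sample))
print(nb_mle(sample))
```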
|
87 |
Εκτίμηση των παραμέτρων της διπαραμετρικής εκθετικής κατανομής από ένα διπλά διακεκομμένο δείγμα / Estimation of the parameters of the two-parameter exponential distribution from a doubly censored sample. Δασκαλάκη, Ιωάννα, 05 January 2011.
Η παρούσα μεταπτυχιακή διατριβή εντάσσεται ερευνητικά στην περιοχή της Στατιστικής Θεωρίας Αποφάσεων και ειδικότερα στην εκτίμηση των παραμέτρων στο μοντέλο της διπαραμετρικής εκθετικής κατανομής με παράμετρο θέσης μ και παράμετρο κλίμακος σ. Θεωρούμε ένα δείγμα n τυχαίων μεταβλητών, καθεμία από τις οποίες ακολουθεί την διπαραμετρική εκθετική κατανομή. Λογοκρίνουμε κάποιες αρχικές παρατηρήσεις και έστω ότι τερματίζουμε το πείραμά μας πριν αποτύχουν όλες οι συνιστώσες. Τότε προκύπτει ένα διπλά διακεκομμένο δείγμα διατεταγμένων παρατηρήσεων. Η εκτίμηση των παραμέτρων της διπαραμετρικής εκθετικής κατανομής, γίνεται από το συγκεκριμένο δείγμα.
Πρώτα μελετάμε κάποιες βασικές έννοιες της Στατιστικής και της Εκτιμητικής και βρίσκουμε εκτιμητές για τις παραμέτρους. Πιο συγκεκριμένα, βρίσκουμε αμερόληπτο εκτιμητή ελάχιστης διασποράς, εκτιμητή μέγιστης πιθανοφάνειας, εκτιμητή με την μέθοδο των ροπών και τον βέλτιστο αναλλοίωτο εκτιμητή σε συγκεκριμένη κλάση, αντίστοιχα και για τις δύο παραμέτρους. Σαν βελτίωση των προηγούμενων εκτιμητών, ακολουθούν οι εκτιμητές τύπου Stein και, ολοκληρώνοντας, ασχολούμαστε με πρόβλεψη κατά Bayes για μια μελλοντική παρατήρηση. / The present master's thesis deals with the estimation of the location parameter μ and the scale parameter σ of the two-parameter exponential distribution. A sample of n random variables, each following the two-parameter exponential distribution, is considered. Some of the first observations are censored and the experiment is terminated before all the components fail, so a doubly censored sample of ordered observations is obtained, from which the parameters of the two-parameter exponential distribution are estimated.
First of all, basic concepts of Statistics and estimation theory are studied in order to estimate the parameters. More specifically, the Minimum Variance Unbiased Estimator (MVUE), the Maximum Likelihood Estimator (MLE), the Method of Moments estimator and the best affine equivariant estimator are computed for both parameters. To improve on these estimators, Stein-type estimators are then derived and, finally, Bayesian prediction of a future observation is considered.
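As a baseline for intuition, the sketch below computes the complete-sample (uncensored) maximum likelihood and unbiased estimators of (μ, σ) for the two-parameter exponential distribution. The doubly censored estimators studied in the thesis are more involved; this is only an illustrative starting point with invented data.

```python
import numpy as np

def twoparam_exp_estimates(x):
    """MLE and unbiased estimators of (mu, sigma) for the two-parameter
    exponential distribution, complete (uncensored) sample case."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    mu_mle = x[0]                               # smallest observation
    sigma_mle = np.mean(x - x[0])
    sigma_u = np.sum(x - x[0]) / (n - 1)        # unbiased scale estimate
    mu_u = x[0] - sigma_u / n                   # unbiased location estimate
    return (mu_mle, sigma_mle), (mu_u, sigma_u)

rng = np.random.default_rng(7)
sample = 2.0 + rng.exponential(scale=1.5, size=200)   # true mu = 2, sigma = 1.5
print(twoparam_exp_estimates(sample))
```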
|
88 |
Optimal estimation and sensor selection for autonomous landing of a helicopter on a ship deck. Irwin, Shaun George, 2014.
Thesis (MEng)--Stellenbosch University, 2014. / ENGLISH ABSTRACT: This thesis presents a complete state estimation framework for landing an unmanned helicopter on a ship deck. In order to design and simulate an optimal state estimator, realistic sensor models are required. Selected inertial, absolute and relative sensors are modeled based on extensive data analysis. The short-listed relative sensors include monocular vision, stereo vision and laser-based sensors.

A state estimation framework is developed to fuse available helicopter estimates, ship estimates and relative measurements. The estimation structure is shown to be both optimal, as it minimises variance on the estimates, and flexible, as it allows for varying degrees of ship deck instrumentation. The permitted deck instrumentation ranges from a fully instrumented deck, equipped with an inertial measurement unit and differential GPS, to a completely uninstrumented ship deck. Optimal estimates of all helicopter, relative and ship states necessary for the autonomous landing on the ship deck are provided by the estimator. Active gyro bias estimation is incorporated into the helicopter's attitude estimator. In addition, the process and measurement noise covariance matrices are derived from sensor noise analysis, rather than conventional tuning methods.

A full performance analysis of the estimator is then conducted. The optimal relative sensor combination is determined through Monte Carlo simulation. Results show that the choice of sensors is primarily dependent on the desired hover height during the ship motion prediction stage. For a low hover height, monocular vision is sufficient. For greater altitudes, a combination of monocular vision and a scanning laser beam greatly improves relative and ship state estimation. A communication link between helicopter and ship is not required for landing, but is advised for added accuracy.

The estimator is implemented on a microprocessor running real-time Linux. The successful performance of the system is demonstrated through hardware-in-the-loop and actual flight testing.

/ AFRIKAANSE OPSOMMING: Hierdie tesis bied ’n volledige sensorfusie- en posisieskattingstruktuur om ’n onbemande helikopter op ’n skeepsdek te laat land. Die ontwerp van ’n optimale posisieskatter vereis die ontwikkeling van realistiese sensormodelle ten einde die skatter akkuraat te simuleer. Die gekose inersie-, absolute en relatiewe sensors in hierdie tesis is op grond van uitvoerige dataontleding getipeer, wat eenoogvisie-, stereovisie- en lasergegronde sensors ingesluit het.

’n Innoverende raamwerk vir die skatting van relatiewe en skeepsposisie is ontwikkel om die beskikbare helikopterskattings, skeepskattings en relatiewe metings te kombineer. Die skattingstruktuur blyk optimaal te wees in die beperking van skattingsvariansie, en is terselfdertyd buigsaam aangesien dit vir wisselende mates van skeepsdekinstrumentasie voorsiening maak. Die toegelate vlakke van dekinstrumentasie wissel van ’n volledig geïnstrumenteerde dek wat met ’n inersiemetingseenheid en ’n differensiële globale posisioneringstelsel (GPS) toegerus is, tot ’n algeheel ongeïnstrumenteerde dek. Die skatter voorsien optimale skattings van alle vereiste helikopter-, relatiewe en skeepsposisies vir die doeleinde van outonome landing op die skeepsdek. Aktiewe giro-sydige skatting is by die posisieskatter van die helikopter ingesluit. Die proses- en metingsmatrikse vir geruiskovariansie in die helikopterskatter is met behulp van ’n ontleding van sensorgeruis, eerder as gebruiklike instemmingsmetodes, afgelei.

’n Volledige werkingsontleding is daarna op die skatter uitgevoer. Die optimale relatiewe sensorkombinasie vir landing op ’n skeepsdek is met Monte Carlo-simulasie bepaal. Die resultate toon dat die keuse van sensors hoofsaaklik van die gewenste sweefhanghoogte gedurende die voorspellingstadium van skeepsbeweging afhang. Vir ’n lae sweefhanghoogte is eenoogvisie-sensors voldoende. Vir hoër hoogtes het ’n kombinasie van eenoogvisie-sensors en ’n aftaslaserbundel ’n groot verbetering in relatiewe en skeepsposisieskatting teweeggebring. ’n Kommunikasieskakel tussen helikopter en skip is nie ’n vereiste vir landing nie, maar word wel aanbeveel vir ekstra akkuraatheid.

Die skatter is op ’n mikroverwerker met intydse Linux in werking gestel. Die suksesvolle werking van die stelsel is deur middel van hardeware-geïntegreerde simulasie en werklike vlugtoetse aangetoon.
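To illustrate the idea of deriving noise covariance matrices from sensor noise analysis rather than by tuning, the sketch below estimates a measurement noise covariance from a log recorded while the sensor is held static. The synthetic accelerometer log and its noise levels are invented for the example.

```python
import numpy as np

def measurement_noise_cov(static_log):
    """Estimate a measurement noise covariance R from a log recorded while the
    sensor (and vehicle) is held static: remove the mean, take the sample
    covariance of the residuals."""
    residuals = static_log - static_log.mean(axis=0)
    return np.cov(residuals, rowvar=False)

# Hypothetical static accelerometer log: N samples x 3 axes
rng = np.random.default_rng(0)
log = rng.normal(loc=[0.0, 0.0, 9.81], scale=[0.02, 0.02, 0.03], size=(5000, 3))
R = measurement_noise_cov(log)
print(np.sqrt(np.diag(R)))   # per-axis noise standard deviations
```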
|
89 |
Estimateurs fonctionnels récursifs et leurs applications à la prévision / Recursive functional estimators with application to nonparametric prediction. Amiri, Aboubacar, 06 December 2010.
Nous nous intéressons dans cette thèse aux méthodes d'estimation non paramétriques par noyaux récursifs ainsi qu'à leurs applications à la prévision. Nous introduisons dans un premier chapitre une famille d'estimateurs récursifs de la densité indexée par un paramètre ℓ ∈ [0, 1]. Leur comportement asymptotique en fonction de ℓ va nous amener à introduire des critères de comparaison basés sur les biais, variance et erreur quadratique asymptotiques. Pour ces critères, nous comparons les estimateurs entre eux et aussi comparons notre famille à l'estimateur non récursif de la densité de Parzen-Rosenblatt. Ensuite, nous définissons à partir de notre famille d'estimateurs de la densité, une famille d'estimateurs récursifs à noyau de la fonction de régression. Nous étudions ses propriétés asymptotiques en fonction du paramètre ℓ. Nous utilisons enfin les résultats obtenus sur l'estimation de la régression pour construire un prédicteur non paramétrique par noyau. Nous obtenons ainsi une famille de prédicteurs non paramétriques qui permettent de réduire considérablement le temps de calcul. Des exemples d'application sont donnés pour valider la performance de nos estimateurs. / The aim of this thesis is to study methods of nonparametric estimation based on recursive kernels and their applications to forecasting. We introduce in the first chapter a family of recursive density estimators indexed by a parameter ℓ ∈ [0, 1]. We study their asymptotic behavior according to ℓ, and then introduce criteria of comparison based on the asymptotic bias, variance and quadratic error. For these criteria, we compare the estimators with one another in terms of ℓ, and also compare our family to the non-recursive Parzen-Rosenblatt density estimator. As for the density, we define a family of recursive kernel estimators of the regression function and study its asymptotic properties according to the parameter ℓ. Finally, the regression estimation results are used to construct a nonparametric kernel predictor. We thus obtain a family of nonparametric predictors that considerably reduce the computing time, and examples of application are given to validate the performance of our methods.
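To illustrate the recursive principle, the sketch below implements a Wolverton-Wagner style recursive kernel density estimator on a fixed evaluation grid, so each new observation updates the estimate in constant time per grid point. The Gaussian kernel and the bandwidth sequence h_n = c * n^(-gamma) are generic choices and do not reproduce the family indexed by ℓ studied in the thesis.

```python
import numpy as np

class RecursiveKDE:
    """Recursive kernel density estimator on a fixed grid:
    f_n(x) = ((n-1)/n) * f_{n-1}(x) + (1/(n*h_n)) * K((x - X_n)/h_n)."""
    def __init__(self, grid, c=1.0, gamma=0.2):
        self.grid = np.asarray(grid, dtype=float)
        self.f = np.zeros_like(self.grid)
        self.n = 0
        self.c, self.gamma = c, gamma           # bandwidth h_n = c * n**(-gamma)

    def update(self, x_new):
        self.n += 1
        h = self.c * self.n ** (-self.gamma)
        kern = np.exp(-0.5 * ((self.grid - x_new) / h) ** 2) / (h * np.sqrt(2 * np.pi))
        self.f = ((self.n - 1) * self.f + kern) / self.n
        return self.f

grid = np.linspace(-4, 4, 201)
kde = RecursiveKDE(grid)
rng = np.random.default_rng(5)
for x in rng.normal(size=5000):   # stream observations one at a time
    kde.update(x)
print(grid[np.argmax(kde.f)])     # mode estimate, should be near 0
```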
|
90 |
Coeficientes de parentesco em espécies florestais / Kinship coefficients in forest species. Sousa, Antonio Higo Moreira de, January 2018.
Orientador: Evandro Vagner Tambarussi / Resumo: Parentesco entre indivíduos em populações naturais é uma informação com múltiplos usos e pode ser acessada por meio de estimadores que usam dados moleculares. Todavia, cada estimador possui pressuposições e, muitas vezes, rigorosas para espécies florestais. Assim, a modelagem dos dados é uma etapa importante e, quase em todos os casos, negligenciada. Este trabalho teve como objetivo avaliar a eficácia de diferentes estimadores de parentesco em Acrocomia aculeata (Jacq.) Lodd. ex Mart., Hymenaea stigonocarpa Mart. ex Hayne e Dipteryx alata Vogel., bem como em quatro diferentes populações simuladas. A partir das coancestrias médias (θ̄) estimadas, foi calculado o erro dos estimadores pressupondo que os indivíduos analisados eram meios-irmãos (θ̄ = 0,125). As estimativas de parentesco oscilaram conforme a espécie e o método utilizado, gerando diferentes valores de erro para cada estimador. A correlação foi observada apenas entre estimadores que possuíam o mesmo método de estimativa ou com pressupostos similares. As populações simuladas tiveram melhores valores estimados e menores erros em comparação com dados reais. Os valores de erro dos estimadores encontrados, demonstram que somente aplicação dos estimadores para a inferência de determinado grau de parentesco, pode gerar resultados viesados e corroborar para ineficácia da tomada de decisão, sendo necessário o uso de informações complementares associadas ao parentesco, como análise do sistema de reprodução, estrutura gen... (Resumo completo, clicar acesso eletrônico abaixo) / Abstract: Understanding kinship between individuals in natural populations offers useful information that can be assessed through estimators based on molecular data. However, each estimation method relies on assumptions that are often restrictive for forest species. Thus, modeling the data is an important step that is almost always neglected. This study aims to evaluate the efficacy of different kinship estimators for Acrocomia aculeata (Jacq.) Lodd. ex Mart., Hymenaea stigonocarpa Mart. ex Hayne, and Dipteryx alata Vogel., as well as four different simulated populations. From the estimated mean coancestries (θ̄), the error of each estimator was calculated assuming that the analyzed individuals were half-siblings (θ̄ = 0.125). Kinship estimates varied according to the species and method used, generating different error values for each estimator. A correlation was observed only between estimators that used the same estimation method or similar assumptions. Simulated populations showed more accurate estimates and lower error values compared to the real data. The error values found demonstrate that applying the estimators alone to infer a certain degree of kinship can generate biased results and lead to inefficient decision making. Thus, the use of complementary information associated with kinship is necessary, such as analysis of the reproduction system and the genetic structure of the population, enabling more precise inferences of the kinship between the evaluated individuals. / Mestre
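For concreteness, the sketch below computes a simple moment-based coancestry matrix from biallelic marker data (a VanRaden-type genomic relationship matrix divided by two). It is only one of many possible estimators of the kind compared in the thesis, and the simulated genotypes are arbitrary.

```python
import numpy as np

def coancestry_matrix(genotypes):
    """Moment estimator of pairwise coancestry from biallelic marker data.
    `genotypes` is an (individuals x markers) array of 0/1/2 allele counts."""
    g = np.asarray(genotypes, dtype=float)
    p = g.mean(axis=0) / 2.0                      # reference allele frequencies
    z = g - 2.0 * p                               # centre by expected allele count
    denom = 2.0 * np.sum(p * (1.0 - p))
    relationship = z @ z.T / denom                # approx. 2 * coancestry
    return relationship / 2.0

rng = np.random.default_rng(11)
p_true = rng.uniform(0.1, 0.9, size=500)
geno = rng.binomial(2, p_true, size=(40, 500))    # 40 unrelated individuals
theta = coancestry_matrix(geno)
off_diag = theta[~np.eye(40, dtype=bool)]
# Close to zero (slightly negative, since allele frequencies are estimated from the sample)
print(off_diag.mean())
```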
|