91

Comparaison empirique des méthodes bootstrap dans un contexte d'échantillonnage en population finie / Empirical comparison of bootstrap methods in a finite population sampling context

Dabdoubi, Oussama 08 1900 (has links)
No description available.
92

Statistical methods for assessing and managing wild populations

Hoyle, Simon David January 2005 (has links)
This thesis is presented as a collection of five papers and one report, each of which has been either published after peer review or submitted for publication. It covers a broad range of applied statistical methods, from deterministic modelling to integrated Bayesian modelling using MCMC, via bootstrapping and stochastic simulation. It also covers a broad range of subjects, from the analysis of recreational fishing diaries to genetic mark-recapture for wombats. However, it focuses on practical applications of statistics to the management of wild populations. The first chapter (Hoyle and Jellyman 2002, published in Marine and Freshwater Research) applies a simple deterministic yield per recruit model to a fishery management problem: possible overexploitation of the New Zealand longfin eel. The chapter has significant implications for longfin eel fishery management. The second chapter (Hoyle and Cameron 2003, published in Fisheries Management and Ecology) focuses on uncertainty in the classical paradigm, by investigating the best way to estimate bootstrap confidence limits on recreational harvest and catch rate using catch diary data. The third chapter (Hoyle et al., in press with Molecular Ecology Notes) takes a different path by looking at genetic mark-recapture in a fisheries management context. Genetic mark-recapture was developed for wildlife abundance estimation but has not previously been applied to fish harvest rate estimation. The fourth chapter (Hoyle and Banks, submitted) addresses genetic mark-recapture, but in the wildlife context for estimates of abundance rather than harvest rate. Our approach uses individual-based modelling and Bayesian analysis to investigate the effect of shadows on abundance estimates and confidence intervals, and to provide guidelines for developing sets of loci for populations of different sizes and levels of relatedness. The fifth chapter (Hoyle and Maunder 2004, Animal Biodiversity and Conservation) applies integrated analysis techniques developed in fisheries to the modelling of protected species population dynamics, specifically the north-eastern spotted dolphin, Stenella attenuata. It combines data from a number of different sources in a single statistical model, and estimates parameters using both maximum likelihood and Bayesian MCMC. The sixth chapter (Hoyle 2002, peer reviewed and published as Queensland Department of Primary Industries Information Series) results directly from a pressing management issue: developing new management procedures for the Queensland east coast Spanish mackerel fishery. It uses an existing stock assessment as a starting point for an integrated Bayesian management strategy evaluation. Possibilities for further research have been identified within the subject areas of each chapter, both within the chapters and in the final discussion chapter.
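As a rough illustration of the bootstrap confidence limits compared in the second chapter, here is a minimal percentile-bootstrap sketch; the diary harvest data are hypothetical, and the percentile method is only one of the variants such a comparison would cover.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical diary data: total annual harvest reported by each angler.
harvest = np.array([0, 2, 5, 1, 0, 8, 3, 0, 12, 4, 1, 6, 0, 2, 7])

B = 2000  # bootstrap replicates
boot_means = np.empty(B)
for b in range(B):
    resample = rng.choice(harvest, size=harvest.size, replace=True)
    boot_means[b] = resample.mean()

# Percentile bootstrap 95% confidence limits on mean harvest per angler.
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"mean = {harvest.mean():.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
```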
93

O processo de Poisson estendido e aplicações / The extended Poisson process and applications

Salasar, Luis Ernesto Bueno 14 June 2007 (has links)
In this dissertation we study how the extended Poisson process can be applied to construct discrete probabilistic models. An extended Poisson process is a continuous-time stochastic process with state space equal to the natural numbers, obtained as a generalization of the homogeneous Poisson process in which the transition rates depend on the current state of the process. From its transition rates and the Chapman-Kolmogorov differential equations, we can determine the probability distribution of the process at any fixed time. Conversely, given any probability distribution on the natural numbers, it is possible to determine uniquely a sequence of transition rates of an extended Poisson process such that, for some instant, the one-dimensional probability distribution of the process coincides with the given distribution. Therefore, the extended Poisson process is a very flexible framework for the analysis of discrete data, since it generalizes all discrete probabilistic models. We present transition rates of extended Poisson processes that generate the Poisson, Binomial and Negative Binomial distributions, and determine maximum likelihood estimators, confidence intervals, and hypothesis tests for the parameters of the proposed models. We also perform a Bayesian analysis of these models with informative and noninformative priors, presenting posterior summaries and comparing these results to those obtained by means of classical inference.
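As a quick check of the construction described above, the following sketch integrates the Chapman-Kolmogorov forward equations of a pure-birth process numerically; with constant transition rates the one-dimensional distribution must reduce to the Poisson pmf. The rate, horizon and truncation level are illustrative assumptions, not values from the dissertation.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.stats import poisson

lam = 2.0            # constant rate: the process reduces to a homogeneous Poisson process
N = 60               # truncate the infinite state space {0, 1, 2, ...} at N
rates = np.full(N + 1, lam)   # state-dependent transition rates lambda_n

def forward(t, p):
    # Chapman-Kolmogorov forward equations of a pure-birth process:
    #   p_0'(t) = -lambda_0 p_0(t)
    #   p_n'(t) =  lambda_{n-1} p_{n-1}(t) - lambda_n p_n(t)
    dp = np.empty_like(p)
    dp[0] = -rates[0] * p[0]
    dp[1:] = rates[:-1] * p[:-1] - rates[1:] * p[1:]
    return dp

t_end = 3.0
p0 = np.zeros(N + 1)
p0[0] = 1.0                          # the process starts in state 0
sol = solve_ivp(forward, (0.0, t_end), p0, rtol=1e-8, atol=1e-10)

# With constant rates, p_n(t) should match the Poisson(lam * t) pmf.
k = np.arange(6)
print(sol.y[k, -1])                  # numerical solution at t_end
print(poisson.pmf(k, lam * t_end))   # closed-form check
```

Because a pure-birth process only moves forward through the states, the probabilities for states below the truncation point are unaffected by the cutoff at N.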
94

Estimateur bootstrap de la variance d'un estimateur de quantile en contexte de population finie / A bootstrap variance estimator for a quantile estimator in a finite population setting

McNealis, Vanessa 12 1900 (has links)
This thesis introduces smoothed pseudo-population bootstrap methods for variance estimation and the construction of confidence intervals for finite population quantiles. In an i.i.d. context, Hall et al. (1989) showed that resampling from a smoothed estimate of the distribution function, instead of the usual empirical distribution function, can improve the convergence rate of the bootstrap variance estimator of a sample quantile. We extend the smoothed bootstrap to the survey sampling framework by implementing it in pseudo-population bootstrap methods. Given a kernel function and a bandwidth, this consists of smoothing the pseudo-population from which bootstrap samples are drawn according to the original sampling design. Two designs are discussed, namely simple random sampling without replacement and Poisson sampling. The implementation of the proposed algorithms requires the specification of the bandwidth; to this end, we develop a plug-in selection method along with grid-search selection methods based on bootstrap estimates of two performance metrics. We present the results of a simulation study which provides empirical evidence that the smoothed approach is more efficient than the standard approach for estimating the variance of a quantile estimator, together with more mixed results regarding confidence intervals.
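A minimal sketch of the smoothed pseudo-population idea under simple random sampling without replacement, assuming N/n is an integer and using an arbitrary, untuned Gaussian bandwidth; the thesis's actual algorithms and bandwidth-selection methods are more involved.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setting: SRSWOR sample of size n from a population of size N.
N, n, B, h = 1000, 100, 500, 0.3   # h is an illustrative bandwidth, not a tuned one
sample = rng.lognormal(mean=0.0, sigma=0.7, size=n)

def smoothed_pseudo_pop_var(sample, q=0.5):
    reps = N // n                      # assume N/n is an integer for simplicity
    pseudo = np.repeat(sample, reps)   # standard pseudo-population
    # Smoothing step: perturb each pseudo-unit with Gaussian-kernel noise.
    pseudo = pseudo + h * rng.standard_normal(pseudo.size)
    est = np.empty(B)
    for b in range(B):
        # Resample according to the original design (SRSWOR of size n).
        boot = rng.choice(pseudo, size=n, replace=False)
        est[b] = np.quantile(boot, q)
    return est.var(ddof=1)

print("bootstrap variance of the median estimator:",
      smoothed_pseudo_pop_var(sample))
```

Setting h = 0 recovers the standard (unsmoothed) pseudo-population bootstrap, which is what the smoothed variant is benchmarked against.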
95

Efficient Approaches to the Treatment of Uncertainty in Satisfying Regulatory Limits

Grabaskas, David 30 August 2012 (has links)
No description available.
96

Etude des délais de survenue des effets indésirables médicamenteux à partir des cas notifiés en pharmacovigilance : problème de l'estimation d'une distribution en présence de données tronquées à droite / Time to Onset of Adverse Drug Reactions : Spontaneously Reported Cases Based Analysis and Distribution Estimation From Right-Truncated Data

Leroy, Fanny 18 March 2014 (has links)
This work investigates parametric maximum likelihood estimation for right-truncated survival data when the truncation times are considered deterministic. It was motivated by the problem of modelling the time to onset of adverse drug reactions from pharmacovigilance databases of spontaneously reported cases. The exponential, Weibull and log-logistic distributions were explored. Sometimes the right-truncated nature of the data is ignored and a naive estimator is used instead of the truncation-based estimator. Although both the naive and truncation-based estimators may be positively biased, a first simulation study showed that the bias of the truncation-based estimator is always smaller than that of the naive one, and the same holds for the mean squared error. Furthermore, as the sample size increases, the bias and mean squared error of the truncation-based estimator decrease markedly, whereas they remain almost constant for the naive estimator. The asymptotic properties of the truncation-based estimator were studied: under sufficient conditions, this parametric estimator is consistent and asymptotically normally distributed, and the asymptotic covariance matrix was detailed. When the time to onset is exponentially distributed, these sufficient conditions hold as soon as a condition ensuring the existence of the maximum likelihood estimate is satisfied; for the Weibull and log-logistic distributions, a condition for the existence of the maximum likelihood estimate was conjectured. The asymptotic distribution of the maximum likelihood estimator makes it possible to derive Wald-type and profile likelihood confidence intervals for the distribution parameters. A second simulation study showed that the estimated coverage probability of the Wald-type confidence intervals can fall well below the nominal level because of the bias of the parameter estimator, a departure from normality, and a bias of the asymptotic variance estimator; in these cases, the profile likelihood confidence intervals perform better. Some goodness-of-fit procedures adapted to right-truncated data, both graphical procedures and goodness-of-fit tests, are presented; they make it possible to check the fit of the candidate parametric families to the data. Finally, illustrating the developed methods, a real dataset of 64 cases of lymphoma occurring after anti-TNF-α treatment and reported to the French pharmacovigilance system was analysed. Although this work was carried out in a pharmacovigilance setting, the theoretical developments and simulation results may be used for any retrospective analysis based on a case registry in which time-to-onset data are right-truncated.
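For the exponential case described above, the truncation-based likelihood is simple enough to sketch directly: each observed time t_i contributes the conditional density f(t_i; λ) / F(τ_i; λ), since t_i is observed only because it did not exceed its deterministic truncation time τ_i. The data below are made up; the thesis additionally treats the Weibull and log-logistic families and derives the asymptotics.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical right-truncated data: time-to-onset t_i, observed only
# because t_i did not exceed its deterministic truncation time tau_i.
t = np.array([0.4, 1.1, 0.2, 2.5, 0.9, 3.0, 0.6, 1.7])
tau = np.array([2.0, 3.0, 1.0, 4.0, 2.5, 5.0, 1.5, 2.0])

def neg_log_lik(lam):
    # Conditional density of an exponential given T <= tau:
    #   f(t; lam) / F(tau; lam) = lam * exp(-lam * t) / (1 - exp(-lam * tau))
    return -np.sum(np.log(lam) - lam * t - np.log1p(-np.exp(-lam * tau)))

res = minimize_scalar(neg_log_lik, bounds=(1e-6, 50.0), method="bounded")
naive = 1.0 / t.mean()   # naive estimator ignoring the right truncation
print(f"truncation-based MLE: {res.x:.3f}, naive MLE: {naive:.3f}")
```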
97

Eliminação de parâmetros perturbadores em um modelo de captura-recaptura / Elimination of nuisance parameters in a capture-recapture model

Salasar, Luis Ernesto Bueno 18 November 2011 (has links)
The capture-recapture process, widely used to estimate the number of elements of an animal population, is also applied in other fields such as epidemiology, linguistics, software reliability and ecology. One of the first applications of this method was made by Laplace in 1783, with the aim of estimating the number of inhabitants of France. Later, Carl G. J. Petersen in 1889 and Lincoln in 1930 applied the same estimator in the context of animal populations; it has become known in the literature as the "Lincoln-Petersen" estimator. In the mid-twentieth century several researchers dedicated themselves to formulating statistical models appropriate for the estimation of population size, which caused a substantial increase in the amount of theoretical and applied work on the subject. Capture-recapture models are constructed under certain assumptions concerning the population, the sampling procedure and the experimental conditions. The main assumption that distinguishes models concerns changes in the number of individuals in the population during the period of the experiment: models that allow for births, deaths or migration are called open population models, while models in which these events are not allowed are called closed population models. In this work, the goal is to characterize the likelihood functions obtained by applying methods for the elimination of nuisance parameters in the case of closed population models. Based on these likelihood functions, we discuss methods for point and interval estimation of the population size. The estimation methods are illustrated on a real dataset and their frequentist properties are analysed via Monte Carlo simulation.
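For reference, here is the Lincoln-Petersen estimator mentioned above, together with Chapman's bias-corrected variant, on hypothetical counts; the thesis itself works with likelihood functions after eliminating nuisance parameters rather than with this simple moment estimator.

```python
# Hypothetical two-sample capture-recapture counts.
n1 = 120   # animals captured, marked and released in the first sample
n2 = 150   # animals captured in the second sample
m = 18     # marked animals recaptured in the second sample

# Lincoln-Petersen estimator of population size N.
lp = n1 * n2 / m

# Chapman's bias-corrected variant, defined even when m = 0.
chapman = (n1 + 1) * (n2 + 1) / (m + 1) - 1

print(f"Lincoln-Petersen: {lp:.0f}, Chapman: {chapman:.0f}")
```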
98

Statistická analýza souborů s malým rozsahem / Statistical Analysis of Sample with Small Size

Holčák, Lukáš January 2008 (has links)
This diploma thesis focuses on the analysis of small samples in situations where it is not possible to obtain more data, typically because collecting further observations would be too capital-intensive or time-consuming, or because the financial resources are simply not available. Analysis of small samples is, of course, highly uncertain, because the resulting inferences are always burdened with a corresponding level of uncertainty.
99

Exact Analysis of Exponential Two-Component System Failure Data

Zhang, Xuan 01 1900 (has links)
A survival distribution is developed for exponential two-component systems that can survive as long as at least one of the two components in the system functions. It is assumed that the two components are initially independent and non-identical; if one of the two components fails (repair is impossible), the surviving component is subject to a different failure rate due to the stress caused by the failure of the other.

We consider such an exponential two-component system failure model when the observed failure time data are (1) complete, (2) Type-I censored, (3) Type-I censored with partial information on component failures, (4) Type-II censored, and (5) Type-II censored with partial information on component failures. In these situations, we discuss the maximum likelihood estimates (MLEs) of the parameters, assuming the lifetimes to be exponentially distributed. The exact distributions (whenever possible) of the MLEs of the parameters are then derived using the conditional moment generating function approach. Construction of confidence intervals for the model parameters is discussed using the exact conditional distributions (when available), asymptotic distributions, and two parametric bootstrap methods. The performance of these four types of confidence intervals, in terms of coverage probabilities, is then assessed through Monte Carlo simulation studies. Finally, some examples are presented to illustrate all the methods of inference developed here.

In the case of Type-I and Type-II censored data, since there are no closed-form expressions for the MLEs, we present an iterative maximum likelihood estimation procedure for determining the MLEs of all the model parameters. We also carry out a Monte Carlo simulation study to examine the bias and variance of the MLEs.

In the case of Type-II censored data, since the exact distributions of the MLEs depend on the data, we discuss exact conditional confidence intervals and asymptotic confidence intervals for the unknown parameters by conditioning on the observed data.
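A minimal simulation sketch of the system lifetime model described above, with illustrative rates that are assumptions rather than values from the thesis: each component initially fails at its own exponential rate, and by memorylessness the survivor's remaining lifetime after the first failure is exponential at a new, stressed rate.

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative rates (assumptions, not from the thesis): component i fails at
# rate lam_i while both work; after one fails, the survivor's rate changes.
lam1, lam2 = 0.5, 0.8        # initial failure rates
lam1_s, lam2_s = 1.2, 1.5    # stressed rates after the other component fails

def system_lifetime():
    t1 = rng.exponential(1 / lam1)
    t2 = rng.exponential(1 / lam2)
    if t1 < t2:
        # Component 1 failed first; by memorylessness, component 2's residual
        # lifetime restarts at t1 with the stressed rate lam2_s.
        return t1 + rng.exponential(1 / lam2_s)
    return t2 + rng.exponential(1 / lam1_s)

times = np.array([system_lifetime() for _ in range(10_000)])
print("mean system lifetime:", times.mean())
```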
100

Estimation of Ocean Flow from Satellite Gravity Data and Contributions to Correlation Analysis / Estimaciones del Flujo Oceánico a partir de Gravedad desde Satélite y Contribuciones al Análisis de Correlaciones

Vargas-Alemañy, Juan A. 29 January 2024 (has links)
This thesis, structured in two parts, addresses a series of problems of relevance in the field of space geodesy. The first part delves into the application of satellite gravity data to enhance our understanding of water transport dynamics. Here we present two significant contributions, both based on satellite gravity data but stemming from different mission concepts with distinct objectives: time-variable gravity monitoring and high-resolution, accurate static geoid modelling. First, the fundamental notions of gravity are introduced and the satellite gravity missions flown to date are briefly reviewed, with emphasis on the GRACE/GRACE-FO and GOCE missions, whose data are the basis of this work. The first application focuses on estimating water transport and geostrophic circulation in the Southern Ocean by leveraging a GOCE geoid and altimetry data. The volume transport across the Antarctic Circumpolar Current is analyzed and the results are validated using the in-situ data collected during the multiple campaigns in the Drake Passage. The second application uses time-variable gravity data from the GRACE and GRACE-FO missions to estimate the water cycle in the Mediterranean and Black Sea system, a critical region for regional climate and global ocean circulation. The analysis examines the different components of the hydrological cycle within this region, including the water flow across the Strait of Gibraltar, their seasonal variations, climatic patterns, and their connection with the North Atlantic Oscillation index. The second part of the thesis is more focused on data analysis, with the objective of developing mathematical methods to estimate the cross-correlation function between two time series that are both unevenly spaced (the sampling is not uniform over time) and observed at unequal time scales (the set of time points of the first series is not identical to that of the second). Such time series are frequently encountered in geodetic surveys, especially when combining data from different sources. Estimating the cross-correlation function for these time series presents unique challenges and requires adapting traditional analysis methods designed for evenly spaced and synchronized time series. The two main contributions in this context are: (i) a study of the asymptotic properties of the Gaussian kernel estimator, the recommended estimator of the cross-correlation function when the two time series are observed at unequal time scales; and (ii) an extension of the stationary bootstrap that allows the construction of bootstrap-based confidence intervals for the cross-correlation function of unevenly spaced time series not sampled at identical time points.
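A minimal sketch of a Gaussian-kernel cross-correlation estimator of the kind studied in the second part, on synthetic unevenly spaced series observed on different time grids: each candidate lag is scored by a kernel-weighted average of products of standardized observations, with weights concentrated on observation pairs whose time difference is close to that lag. The bandwidth here is arbitrary, and the asymptotic theory and stationary-bootstrap confidence intervals developed in the thesis are not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical unevenly spaced series observed on different time grids.
tx = np.sort(rng.uniform(0, 100, 200))
ty = np.sort(rng.uniform(0, 100, 180))
x = np.sin(0.3 * tx) + 0.2 * rng.standard_normal(tx.size)
y = np.sin(0.3 * (ty - 2.0)) + 0.2 * rng.standard_normal(ty.size)  # y lags x by ~2

def gaussian_kernel_ccf(tx, x, ty, y, lags, h):
    # Standardize both series.
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    # All pairwise observation-time differences and products.
    dt = ty[None, :] - tx[:, None]      # shape (len(x), len(y))
    prod = x[:, None] * y[None, :]
    ccf = np.empty(len(lags))
    for k, lag in enumerate(lags):
        w = np.exp(-0.5 * ((dt - lag) / h) ** 2)   # Gaussian kernel weights
        ccf[k] = np.sum(w * prod) / np.sum(w)
    return ccf

lags = np.linspace(-10, 10, 81)
ccf = gaussian_kernel_ccf(tx, x, ty, y, lags, h=1.0)
print("lag maximizing the estimated CCF:", lags[np.argmax(ccf)])
```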
