141

Extensões da Distribuição Weibull Aplicadas na Análise de Séries Climatológicas [Extensions of the Weibull Distribution Applied to the Analysis of Climatological Series] /

Reis, Thaís Carolina Santos dos. January 2017 (has links)
Advisor: Josmar Mazucheli / Abstract: In the analysis of climatological series, the methodology known as "frequency analysis" begins, once the validity of certain assumptions has been verified, with the choice and fitting of a probability distribution. The most important step of this analysis is selecting the probability distribution that best describes the true behavior of the variable under study. Once a well-fitted distribution has been adopted, according to one or several criteria, it is of interest, for example, to estimate the probability that events of a certain magnitude will be equaled or exceeded in T years. The inverse of this probability is called the return period, a measure of great importance in assessing the risks associated with climatological phenomena. In principle, any probability distribution with support on the positive real numbers can be used to describe the behavior of streamflow, rainfall and wind series, among others. For rainfall series, formed, for example, by daily, ten-day, monthly, quarterly and annual totals, the Gamma and Weibull distributions are the most widely used. In recent years, specific construction methods have produced a multitude of new distributions for continuous, strictly positive observations, whose applications are for the most part restricted to survival and reliability data. In this Master's dissertation, the performances of the Odd Weibull, Marshall-Olkin Weibull, Exponentiated Weibull and Transmuted Weibull distributions were evaluated... (Complete abstract: click electronic access below) / Master
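The return-period calculation described in the abstract is easy to sketch. The snippet below is a minimal, hedged illustration rather than code from the dissertation: it fits a two-parameter Weibull distribution to a hypothetical series of annual rainfall maxima by maximum likelihood and reports the T-year return level, i.e., the quantile exceeded with probability 1/T. The data and parameter values are placeholder assumptions.

```python
import numpy as np
from scipy import stats

# Hypothetical annual rainfall maxima (mm); placeholder data, not from the thesis.
rng = np.random.default_rng(42)
annual_max = stats.weibull_min.rvs(c=2.1, scale=85.0, size=60, random_state=rng)

# Fit a two-parameter Weibull by maximum likelihood (location fixed at 0).
shape, loc, scale = stats.weibull_min.fit(annual_max, floc=0)

# Return level for a T-year return period: the value exceeded with probability 1/T,
# i.e., the (1 - 1/T) quantile of the fitted distribution.
for T in (10, 50, 100):
    level = stats.weibull_min.ppf(1 - 1 / T, shape, loc=loc, scale=scale)
    print(f"{T:>3}-year return level: {level:6.1f} mm")
```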
142

Communicating risk in intelligence forecasts: The consumer's perspective

Dieckmann, Nathan F. 12 1900 (has links)
xv, 178 p. : ill. A print copy of this title is available through the UO Libraries under the call number: KNIGHT HM1101 .D54 2007 / The main goal of many political and intelligence forecasts is to effectively communicate risk information to decision makers (i.e. consumers). Standard reporting most often consists of a narrative discussion of relevant evidence concerning a threat, and rarely involves numerical estimates of uncertainty (e.g. a 5% chance). It is argued that numerical estimates of uncertainty will lead to more accurate representations of risk and improved decision making on the part of intelligence consumers. Little work has focused on how well consumers understand and use forecasts that include numerical estimates of uncertainty. Participants were presented with simulated intelligence forecasts describing potential terrorist attacks. These forecasts consisted of a narrative summary of the evidence related to the attack and numerical estimates of likelihood and potential harm. The primary goals were to explore how the structure of the narrative summary, the format of likelihood information, and the numerical ability (numeracy) of consumers affected perceptions of intelligence forecasts. Consumers perceived forecasts with numerical estimates of likelihood and potential harm as more useful than forecasts with only a narrative evidence summary. However, consumers' risk and likelihood perceptions were affected more strongly by the narrative evidence summary than by the stated likelihood information. These results show that even "precise" numerical estimates of likelihood are not necessarily evaluable by consumers and that perceptions of likelihood are affected by supporting narrative information. Numeracy also moderated the effects of stated likelihood and the narrative evidence summary. Consumers higher in numeracy were more likely to use the stated likelihood information, and consumers lower in numeracy were more likely to use the narrative evidence to inform their judgments. The moderating effect of likelihood format and consumers' perceptions of forecasts in hindsight are also explored. Explicit estimates of uncertainty are not necessarily useful to all intelligence consumers, particularly when presented with supporting narrative evidence. How consumers respond to intelligence forecasts depends on the structure of any supporting narrative information, the format of the explicit uncertainty information, and the numerical ability of the individual consumer. Forecasters should be sensitive to these three issues when presenting forecasts to consumers. / Adviser: Paul Slovic
143

Towards smooth particle filters for likelihood estimation with multivariate latent variables

Lee, Anthony 11 1900 (has links)
In parametrized continuous state-space models, one can obtain estimates of the likelihood of the data for fixed parameters via the Sequential Monte Carlo methodology. Unfortunately, even if the likelihood is continuous in the parameters, the estimates produced by practical particle filters are not, even when common random numbers are used for each filter. This is because the same resampling step that drastically reduces the variance of the estimates also introduces discontinuities in the particles that are selected across filters when the parameters change. When the state variables are univariate, a method exists that gives an estimator of the log-likelihood that is continuous in the parameters. We present a non-trivial generalization of this method using tree-based O(N²) (and as low as O(N log N)) resampling schemes that induce significant correlation amongst the selected particles across filters. In turn, this reduces the variance of the difference between the likelihood evaluated for different values of the parameters, and the resulting estimator is considerably smoother than naively running the filters with common random numbers. Importantly, in practice our methods require only a change to the resample operation in the SMC framework without the addition of any extra parameters and can therefore be used for any application in which particle filters are already used. In addition, excepting the optional use of interpolation in the schemes, there are no regularity conditions for their use, although certain conditions make them more advantageous. In this thesis, we first introduce the relevant aspects of the SMC methodology for the task of likelihood estimation in continuous state-space models and present an overview of work related to the task of smooth likelihood estimation. Following this, we introduce theoretically correct resampling schemes that cannot be implemented and the practical tree-based resampling schemes that were developed instead. After presenting the performance of our schemes in various applications, we show that two of the schemes are asymptotically consistent with the theoretically correct but unimplementable methods introduced earlier. Finally, we conclude the thesis with a discussion. / Science, Faculty of / Computer Science, Department of / Graduate
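To make the setting concrete, the sketch below (an assumption-laden illustration, not code from the thesis) implements a plain bootstrap particle filter returning the SMC log-likelihood estimate for a simple univariate linear-Gaussian state-space model, with systematic resampling driven by a fixed seed, i.e., common random numbers. Evaluating it at two nearby parameter values illustrates the discontinuity introduced by resampling that the smooth schemes are designed to remove.

```python
import numpy as np

def pf_loglik(y, phi, sigma_x, sigma_y, n_particles=500, seed=0):
    """Bootstrap particle filter log-likelihood for x_t = phi*x_{t-1} + N(0, sigma_x^2),
    y_t = x_t + N(0, sigma_y^2). Common random numbers via a fixed seed."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, sigma_x, n_particles)      # initial particles
    loglik = 0.0
    for t in range(len(y)):
        # Propagate particles through the state equation.
        x = phi * x + rng.normal(0.0, sigma_x, n_particles)
        # Weight by the observation density.
        logw = -0.5 * ((y[t] - x) / sigma_y) ** 2 - np.log(sigma_y * np.sqrt(2 * np.pi))
        m = logw.max()
        w = np.exp(logw - m)
        loglik += m + np.log(w.mean())
        # Systematic resampling: the step that breaks continuity in the parameters.
        cs = np.cumsum(w)
        cs /= cs[-1]                                # normalise so the last entry is exactly 1
        u = (rng.random() + np.arange(n_particles)) / n_particles
        x = x[np.searchsorted(cs, u)]
    return loglik

# Hypothetical data and parameters, for illustration only.
rng = np.random.default_rng(1)
true_x = np.cumsum(rng.normal(size=100))
y = true_x + rng.normal(scale=0.5, size=100)
print(pf_loglik(y, phi=1.00, sigma_x=1.0, sigma_y=0.5))
print(pf_loglik(y, phi=1.01, sigma_x=1.0, sigma_y=0.5))  # nearby parameter, estimate can jump
```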
144

Estimation and inference of microeconometric models based on moment condition models

Khatoon, Rabeya January 2014 (has links)
The existing estimation techniques for grouped data models can be analyzed as a class of instrumental variable-Generalized Method of Moments (GMM) estimators with the matrix of group indicators as the set of instruments. The econometric literature (e.g. Smith, 1997; Newey and Smith, 2004) shows that, in some cases of empirical relevance, GMM can have shortcomings, with the large sample behaviour of the estimator differing from its finite sample properties. Generalized Empirical Likelihood (GEL) estimators have been developed that are not sensitive to the nature and number of instruments and possess improved finite sample properties compared to GMM estimators. In this thesis, under the assumption that the data vector is iid within a group but inid across groups, we develop GEL estimators for grouped data models with population moment conditions requiring zero mean errors in each group. First order asymptotic analysis shows that the estimators are √N consistent (N being the sample size) and normally distributed. The thesis explores second order bias properties that reveal the sources of bias and the differences between choices of GEL estimators. Specifically, the second order bias depends on the third moments of the group errors and on the correlation between the group errors and the explanatory variables. With symmetric errors and no endogeneity, all three estimators, Empirical Likelihood (EL), Exponential Tilting (ET) and the Continuous Updating Estimator (CUE), are unbiased. A detailed simulation exercise compares the performance of the EL and ET estimators and their bias-corrected versions with the standard 2SLS/GMM estimators. Simulation results reveal that while, with a few strong instruments, the 2SLS/GMM estimators suffice, in the case of many and/or weak instruments, an increased degree of endogeneity, or a varied signal-to-noise ratio, the bias-corrected EL and ET estimators dominate in terms of both least bias and accurate coverage proportions of asymptotic confidence intervals, even for a considerably large sample. The thesis also considers the case of within-group dependent data, to assess the consequences of violating a key assumption, namely the within-group iid assumption. Theoretical analysis and simulation results show that ignoring this feature can result in misleading inference. The proposed estimators are used to estimate the returns to an additional year of schooling in the UK using Labour Force Survey data over 1997-2009. Pooling the 13 years of data yields roughly the same estimate of an 11.27% return for British-born men aged 25-50 using any of the estimation techniques. In contrast, using 2009 LFS data only, with a relatively small sample and many weak instruments, the return for men holding a first degree is 13.88% using the bias-corrected EL estimator, whereas the 2SLS estimator yields an estimate of 6.8%.
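As a point of reference for the estimators discussed above, here is a minimal sketch, under placeholder assumptions, of 2SLS estimation with group indicators as instruments, the grouped-data instrumental-variable baseline against which the GEL estimators are compared; it is not code from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
n_groups, group_size = 50, 20
n = n_groups * group_size
group = np.repeat(np.arange(n_groups), group_size)

# Endogenous regressor x correlated with the error u through a common shock v.
v = rng.normal(size=n)
group_effect = rng.normal(size=n_groups)[group]      # instrument relevance
x = group_effect + 0.5 * v + rng.normal(size=n)
u = 0.5 * v + rng.normal(size=n)
y = 1.0 + 2.0 * x + u                                # true slope = 2

# Design matrix and group-indicator instruments (with an intercept).
X = np.column_stack([np.ones(n), x])
Z = np.column_stack([np.ones(n), np.eye(n_groups)[group][:, 1:]])

# 2SLS: beta = (X' P_Z X)^{-1} X' P_Z y with P_Z = Z (Z'Z)^{-1} Z'.
Xhat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]      # first-stage fitted values
beta = np.linalg.lstsq(Xhat, y, rcond=None)[0]       # second stage
print("2SLS estimates (intercept, slope):", beta)
```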
145

Estimation of long-range dependence

Vivero, Oskar January 2010 (has links)
A set of observations from a random process that exhibits correlations decaying slower than an exponential rate is regarded as long-range dependent. This phenomenon has stimulated great interest in the scientific community as it appears in a wide range of areas of knowledge. For example, this property has been observed in data pertaining to electronics, econometrics, hydrology and biomedical signals. There exist several estimation methods for finding model parameters that help explain a set of observations exhibiting long-range dependence. Among these methods, maximum likelihood is attractive, given its desirable statistical properties such as asymptotic consistency and efficiency. However, its computational complexity makes the implementation of maximum likelihood prohibitive. This thesis presents a group of computationally efficient estimators based on the maximum likelihood framework. The thesis consists of two main parts. The first part is devoted to developing a computationally efficient alternative to the maximum likelihood estimate. This alternative is based on the circulant embedding concept and is shown to maintain the desirable statistical properties of maximum likelihood. Interesting results are obtained by analysing the circulant embedding estimate. In particular, this thesis shows that the maximum likelihood based methods are ill-conditioned; the estimators' performance will deteriorate significantly when the set of observations is corrupted by errors. The second part of this thesis focuses on developing computationally efficient estimators with improved performance under the presence of errors in the observations.
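As a hedged illustration of likelihood-based estimation of long-range dependence (not the circulant-embedding estimator developed in the thesis), the sketch below minimizes the Whittle approximation to the negative Gaussian log-likelihood for an ARFIMA(0, d, 0) model and recovers the long-memory parameter d from simulated data; all data and settings are placeholder assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def whittle_d(x):
    """Estimate the long-memory parameter d of an ARFIMA(0, d, 0) process
    by minimizing the Whittle approximation to the negative log-likelihood."""
    n = len(x)
    x = x - x.mean()
    # Periodogram at the positive Fourier frequencies (Nyquist excluded).
    freqs = 2 * np.pi * np.arange(1, (n - 1) // 2 + 1) / n
    I = np.abs(np.fft.fft(x)[1:len(freqs) + 1]) ** 2 / (2 * np.pi * n)

    def neg_whittle(d):
        # Spectral shape of ARFIMA(0, d, 0) up to a constant: |2 sin(w/2)|^(-2d).
        g = np.abs(2 * np.sin(freqs / 2)) ** (-2 * d)
        # Profile out the innovation variance.
        return np.log(np.mean(I / g)) + np.mean(np.log(g))

    res = minimize_scalar(neg_whittle, bounds=(-0.49, 0.49), method="bounded")
    return res.x

# Placeholder check on fractionally differenced white noise (d = 0.3),
# simulated via the MA(infinity) representation truncated at the sample size.
rng = np.random.default_rng(3)
n, d_true = 2000, 0.3
psi = np.cumprod(np.concatenate(([1.0], (d_true + np.arange(n - 1)) / np.arange(1, n))))
x = np.convolve(rng.normal(size=n), psi)[:n]
print("estimated d:", whittle_d(x))
```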
146

The covariance structure of conditional maximum likelihood estimates

Strasser, Helmut 11 1900 (has links) (PDF)
In this paper we consider conditional maximum likelihood (cml) estimates for item parameters in the Rasch model under random subject parameters. We give a simple approximation for the asymptotic covariance matrix of the cml-estimates. The approximation is stated as a limit theorem when the number of item parameters goes to infinity. The results contain precise mathematical information on the order of approximation. The results enable the analysis of the covariance structure of cml-estimates when the number of items is large. Let us give a rough picture. The covariance matrix has a dominating main diagonal containing the asymptotic variances of the estimators. These variances are almost equal to the efficient variances under ml-estimation when the distribution of the subject parameter is known. Apart from very small numbers n of item parameters, the variances are almost unaffected by n. The covariances are more or less negligible when the number of item parameters is large. Although this picture is intuitively not surprising, it has to be established in precise mathematical terms. This has been done in the present paper. The paper is based on previous results [5] of the author concerning conditional distributions of non-identical replications of Bernoulli trials. The mathematical background is given by Edgeworth expansions for the central limit theorem. These previous results are the basis of approximations for the Fisher information matrices of cml-estimates. The main results of the present paper are concerned with the approximation of the covariance matrices. Numerical illustrations of the results and numerical experiments based on the results are presented in Strasser [6].
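For readers unfamiliar with cml estimation in the Rasch model, the following minimal sketch (an illustrative assumption, not code accompanying the paper) evaluates the conditional log-likelihood of the item parameters via the elementary symmetric functions and maximizes it numerically; the data and the identification constraint (first item difficulty fixed at zero) are placeholders.

```python
import numpy as np
from scipy.optimize import minimize

def esf(eps):
    """Elementary symmetric functions gamma_0..gamma_k of eps_1..eps_k,
    i.e. the coefficients of prod_i (1 + eps_i * t)."""
    gamma = np.array([1.0])
    for e in eps:
        gamma = np.concatenate([gamma, [0.0]]) + e * np.concatenate([[0.0], gamma])
    return gamma

def neg_cml(beta_free, X):
    # Identification: fix beta_1 = 0, estimate the remaining item parameters.
    beta = np.concatenate([[0.0], beta_free])
    eps = np.exp(-beta)
    gamma = esf(eps)
    scores = X.sum(axis=1)          # person raw scores
    item_totals = X.sum(axis=0)     # item raw scores
    # Conditional log-likelihood: -sum_i s_i beta_i - sum_p log gamma_{r_p}.
    cll = -(item_totals * beta).sum() - np.log(gamma[scores]).sum()
    return -cll

# Placeholder data: responses of 300 persons to 5 items.
rng = np.random.default_rng(7)
true_beta = np.array([0.0, -0.5, 0.3, 0.8, -1.0])
theta = rng.normal(size=300)
p = 1 / (1 + np.exp(-(theta[:, None] - true_beta[None, :])))
X = (rng.random(p.shape) < p).astype(int)

res = minimize(neg_cml, np.zeros(4), args=(X,), method="BFGS")
print("cml estimates (beta_1 fixed at 0):", np.round(np.concatenate([[0.0], res.x]), 3))
```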
147

Introduction to fast Super-Paramagnetic Clustering

Yelibi, Lionel 25 February 2020 (has links)
We map stock market interactions to spin models to recover their hierarchical structure using a simulated annealing based Super-Paramagnetic Clustering (SPC) algorithm. This is directly compared to a modified implementation of a maximum likelihood approach to fast Super-Paramagnetic Clustering (f-SPC). The methods are first applied to standard toy test-case problems, and then to a dataset of 447 stocks traded on the New York Stock Exchange (NYSE) over 1249 days. The signal-to-noise ratio of stock market correlation matrices is briefly considered. Our results recover clusters approximately representative of standard economic sectors, as well as mixed clusters whose dynamics shed light on the adaptive nature of financial markets and raise concerns about the effectiveness of industry-based static financial market classification in the world of real-time data analytics. A key result is that the standard maximum likelihood methods are confirmed to converge to solutions within a Super-Paramagnetic (SP) phase. We use insights arising from this to discuss the implications of using a Maximum Entropy Principle (MEP) as opposed to the Maximum Likelihood Principle (MLP) as an optimization device for this class of problems.
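As a rough, hedged sketch of the correlation-to-spin-model mapping mentioned above, in the spirit of the standard SPC construction rather than the f-SPC implementation of this dissertation, the snippet converts a stock correlation matrix into metric distances and then into nearest-neighbour ferromagnetic couplings J_ij; the data, neighbourhood size and length scale are assumptions.

```python
import numpy as np

def spc_couplings(returns, k_neighbours=10):
    """Map asset returns to Potts-model couplings: correlation -> metric distance
    -> Gaussian-kernel couplings restricted to mutual nearest neighbours."""
    corr = np.corrcoef(returns, rowvar=False)                 # correlation matrix
    dist = np.sqrt(np.clip(2.0 * (1.0 - corr), 0.0, None))    # d_ij = sqrt(2(1 - rho_ij))
    n = dist.shape[0]

    # k-nearest-neighbour mask (excluding self), symmetrised to mutual neighbours.
    order = np.argsort(dist, axis=1)[:, 1:k_neighbours + 1]
    mask = np.zeros_like(dist, dtype=bool)
    rows = np.repeat(np.arange(n), k_neighbours)
    mask[rows, order.ravel()] = True
    mask &= mask.T

    a = dist[mask].mean() if mask.any() else 1.0               # local length scale
    J = np.where(mask, np.exp(-dist**2 / (2.0 * a**2)), 0.0)   # ferromagnetic couplings
    return J

# Placeholder returns: 447 assets, 1249 days of synthetic data.
rng = np.random.default_rng(0)
returns = rng.normal(size=(1249, 447))
J = spc_couplings(returns)
print("nonzero couplings:", int((J > 0).sum()))
```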
148

N-mixture models with auxiliary populations and for large population abundances

Parker, Matthew R. P. 29 April 2020 (has links)
The key results of this thesis are (1) an extension of N-mixture models to incorporate the additional layer of obfuscation brought by observing counts from a related auxiliary population (rather than the target population), (2) an extension of N-mixture models to allow for grouped counts, the purpose being two-fold: to extend the applicability of N-mixtures to larger population sizes, and to allow for the use of coarse counts in fitting N-mixture models, (3) a new R package allowing easy application of the new N-mixture models, (4) a new R package allowing for optimization of multi-parameter functions using arbitrary precision arithmetic, which was a necessary tool for optimizing the likelihood in large population abundance N-mixture models, as well as (5) simulation studies validating the new grouped count models and comparing them to the classic N-mixture models. / Graduate
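For context on the baseline being extended, here is a minimal sketch, with placeholder data and truncation bound, of the classic N-mixture likelihood (Royle, 2004): site abundances are Poisson(lambda), repeated counts are Binomial(N, p), and the latent N is summed out up to an upper bound K. It is an illustration, not the thesis's R packages.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize
from scipy.special import logsumexp

def nmix_negloglik(params, y, K=200):
    """Negative log-likelihood of the classic N-mixture model.
    y: counts with shape (n_sites, n_visits); K: truncation bound for the latent N."""
    lam, p = np.exp(params[0]), 1 / (1 + np.exp(-params[1]))    # log / logit links
    N = np.arange(K + 1)                                         # latent abundances 0..K
    log_prior = stats.poisson.logpmf(N, lam)                     # log P(N | lambda)
    ll = 0.0
    for site_counts in y:
        # log P(y_i | N, p) summed over visits, for every candidate N.
        log_binom = stats.binom.logpmf(site_counts[:, None], N[None, :], p).sum(axis=0)
        ll += logsumexp(log_prior + log_binom)                   # marginalise N
    return -ll

# Placeholder data: 100 sites, 3 visits, lambda = 20, detection p = 0.4.
rng = np.random.default_rng(5)
N_true = rng.poisson(20, size=100)
y = rng.binomial(N_true[:, None], 0.4, size=(100, 3))

res = minimize(nmix_negloglik, x0=np.array([np.log(10), 0.0]), args=(y,), method="Nelder-Mead")
print("lambda-hat:", np.exp(res.x[0]), "p-hat:", 1 / (1 + np.exp(-res.x[1])))
```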
149

Linear Approximations for Second Order High Dimensional Model Representation of the Log Likelihood Ratio

Foroughi pour, Ali 19 June 2019 (has links)
No description available.
150

Modeling longitudinal data with interval censored anchoring events

Chu, Chenghao 01 March 2018 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / In many longitudinal studies, the time scales upon which we assess the primary outcomes are anchored by pre-specified events. However, these anchoring events are often not observable and are randomly distributed with unknown distribution. Without direct observations of the anchoring events, the time scale used for analysis is not available, and analysts will not be able to use traditional longitudinal models to describe the temporal changes as desired. Existing methods often make either ad hoc or strong assumptions on the anchoring events, which are unverifiable and prone to biased estimation and invalid inference. Although the anchoring events cannot be observed directly, researchers can often ascertain an interval that includes them, i.e., the anchoring events are interval censored. In this research, we proposed a two-stage method to fit commonly used longitudinal models with interval censored anchoring events. In the first stage, we obtain an estimate of the anchoring event distribution by a nonparametric method using the interval censored data; in the second stage, we obtain the parameter estimates as stochastic functionals of the estimated distribution. The construction of the stochastic functional depends on the model setting. In this research, we considered two types of models. The first is a distribution-free model, in which no parametric assumption is made on the distribution of the error term. The second is likelihood based, extending the classic mixed-effects models to the situation in which the origin of the time scale for analysis is interval censored. For the purpose of large-sample statistical inference in both models, we studied the asymptotic properties of the proposed functional estimator using empirical process theory. Theoretically, our method provides a general approach to studying semiparametric maximum pseudo-likelihood estimators in similar data situations. Finite sample performance of the proposed method was examined through a simulation study. Efficient algorithms for computing the parameter estimates are provided. We applied the proposed method to a real data analysis and obtained new findings that could not be obtained using traditional mixed-effects models. / 2 years
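The first stage described above, nonparametric estimation of a distribution from interval-censored observations, can be sketched with a simple self-consistency (EM-type) iteration in the spirit of Turnbull's estimator. The snippet is an illustrative assumption rather than the thesis's implementation; the support grid and data are placeholders.

```python
import numpy as np

def interval_censored_npmle(left, right, n_iter=500, tol=1e-10):
    """Self-consistency iteration for the NPMLE of a distribution observed only
    through censoring intervals [left_i, right_i]. Probability mass is placed on
    the unique interval endpoints (a simplification of Turnbull's support sets)."""
    support = np.unique(np.concatenate([left, right]))
    # alpha[i, j] = 1 if support point j is compatible with interval i.
    alpha = ((support[None, :] >= left[:, None]) &
             (support[None, :] <= right[:, None])).astype(float)
    p = np.full(len(support), 1.0 / len(support))
    for _ in range(n_iter):
        denom = alpha @ p                                    # P(observation i) under current p
        p_new = (alpha * p[None, :] / denom[:, None]).mean(axis=0)
        if np.max(np.abs(p_new - p)) < tol:
            p = p_new
            break
        p = p_new
    return support, p

# Placeholder data: true anchoring times, observed only up to a one-unit window.
rng = np.random.default_rng(11)
t = rng.gamma(shape=4.0, scale=2.0, size=200)
left = np.floor(t)
right = left + 1.0

support, p = interval_censored_npmle(left, right)
print("estimated mean anchoring time:", float(support @ p))
```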
