21

Ajustes para o teste da razão de verossimilhanças em modelos de regressão beta / Adjusted likelihood ratio statistics in beta regression models

Pinheiro, Eliane Cantinho 23 March 2009 (has links)
We consider the issue of performing accurate small-sample likelihood-based inference in beta regression models, which are useful for modeling continuous proportions that are affected by independent variables. We derive Skovgaard's (Scandinavian Journal of Statistics 28 (2001) 3-32) adjusted likelihood ratio statistics in this class of models. We show that the adjustment terms have a simple compact form that can be easily implemented in standard statistical software. We present Monte Carlo simulations showing that inference based on the adjusted statistics we propose is more reliable than that based on the usual likelihood ratio statistic. A real data example is presented.
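
For orientation, the adjustment cited above rescales the usual likelihood ratio statistic $w$. A commonly quoted form of Skovgaard's (2001) adjusted statistic is $w^{*} = w\,(1 - \log\xi/w)^{2}$, where $\xi$ is a model-dependent correction term built from score and information quantities; this is a sketch of the general formula rather than the beta-regression-specific expressions derived in the thesis. The adjusted statistic remains asymptotically $\chi^2$-distributed with the same degrees of freedom as $w$, but its distribution is closer to the limit in small samples.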
23

Contribuições em inferência e modelagem de valores extremos / Contributions to extreme value inference and modeling.

Eliane Cantinho Pinheiro 04 December 2013 (has links)
Extreme value theory is applied in research fields such as hydrology, pollution studies, materials engineering, traffic management, economics and finance. The Gumbel distribution is widely used in the statistical modeling of extreme values of natural processes such as rainfall and wind, and it is also important in survival analysis for modeling lifetime on the logarithmic scale. Statistical modeling of extreme values of natural processes such as wind or humidity is important in environmental statistics; for example, understanding extreme wind speed is crucial in catastrophe and disaster protection. This is lately of particular interest as extreme natural phenomena have become more common and intense. Most papers on extreme value theory for modeling extreme data rely on moderate or large sample sizes. The Gumbel distribution is often considered, but the resulting fit may be poor in the presence of outliers since its skewness and kurtosis are constant. We deal with statistical modeling of extreme event data based on extreme value theory. We consider the general extreme-value regression model family introduced by Barreto-Souza & Vasconcellos (2011), who addressed the issue of correcting the bias of the maximum likelihood estimators in small samples. Our first goal is to derive hypothesis test adjustments in this class of models. We derive Skovgaard's (2001) adjusted likelihood ratio statistic and five adjusted signed likelihood ratio statistics proposed by Barndorff-Nielsen (1986, 1991), DiCiccio & Martin (1993), Skovgaard (1996), Severini (1999) and Fraser et al. (1999). The adjusted statistics are approximately distributed as $\chi^2$ and standard normal with high accuracy. The adjustment terms have simple compact forms which may be easily implemented in readily available software. We compare the finite-sample performance of the likelihood ratio test, the signed likelihood ratio test and the adjusted tests obtained in this work, and we illustrate the application of the usual tests and their modified versions on real datasets. The adjusted statistics are closer to their respective limiting distributions than the usual ones when the sample size is relatively small. Simulation results indicate that the adjusted statistics can be recommended for inference in extreme value regression models with small or moderate sample sizes.
Parsimony is important when data are scarce, but flexibility is also crucial since a poor fit may lead to a completely wrong conclusion. A literature review was conducted to list distributions that nest the Gumbel distribution, and our second goal is to evaluate their parsimony and flexibility. For this purpose, we compare such distributions with regard to moments, skewness, kurtosis and tail index. The larger families obtained by introducing additional parameters, which contain the Gumbel distribution as a special case, present flexible skewness and kurtosis, whereas the skewness and kurtosis of the Gumbel distribution are constant. Among these distributions, the generalized extreme value distribution is the only one whose tail index can be any positive real number, while the tail indices of the other distributions investigated here are zero. We observe that some generalizations of the Gumbel distribution studied in the literature are not identifiable; hence, for these models, meaningful interpretation and estimation of individual parameters are not feasible. We select the identifiable distributions and fit them to a simulated dataset and to real wind speed data. As expected, such distributions fit the Gumbel-simulated data quite well. The generalized extreme value distribution and the two-component extreme value (mixture of two Gumbel) distribution fit the data better than the others in the non-negligible presence of outliers that cannot be accommodated by the Gumbel distribution, and we therefore suggest that they be applied in this context.
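
As a concrete illustration of the kind of model comparison described above (a minimal sketch on simulated data with generic library calls, not the thesis code; wind_speed is a placeholder for the real series):

    # Minimal sketch: fit the Gumbel and generalized extreme value (GEV)
    # distributions to a sample and compare them by AIC. `wind_speed` is a
    # hypothetical placeholder for the real dataset.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    wind_speed = stats.gumbel_r.rvs(loc=10.0, scale=2.0, size=200, random_state=rng)

    def aic(log_likelihood, n_params):
        return 2 * n_params - 2 * log_likelihood

    # Gumbel: location and scale (2 parameters)
    loc_g, scale_g = stats.gumbel_r.fit(wind_speed)
    ll_gumbel = stats.gumbel_r.logpdf(wind_speed, loc=loc_g, scale=scale_g).sum()

    # GEV: shape, location and scale (3 parameters); zero shape recovers the Gumbel
    c, loc_gev, scale_gev = stats.genextreme.fit(wind_speed)
    ll_gev = stats.genextreme.logpdf(wind_speed, c, loc=loc_gev, scale=scale_gev).sum()

    print("Gumbel AIC:", aic(ll_gumbel, 2))
    print("GEV    AIC:", aic(ll_gev, 3))

On data that really come from a Gumbel distribution the two fits are nearly equivalent and the extra shape parameter buys little, which is the parsimony-versus-flexibility trade-off the abstract discusses.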
24

Stochastic Modelling of Random Variables with an Application in Financial Risk Management.

Moldovan, Max January 2003 (has links)
The problem of determining whether or not a theoretical model is an accurate representation of an empirically observed phenomenon is one of the most challenging in empirical scientific investigation. The following study explores the problem of stochastic model validation. Special attention is devoted to the unusual two-peaked shape of the empirically observed distributions of financial returns conditional on realised volatility. The application of statistical hypothesis testing and simulation techniques leads to the conclusion that returns conditional on realised volatility follow a specific, previously undocumented distribution. The probability density that represents this distribution is derived, characterised and applied to the validation of the financial model.
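
As a rough illustration of the kind of validation exercise described (a sketch under assumed inputs, not the author's procedure), a candidate model can be checked against an empirical sample with a goodness-of-fit test:

    # Hypothetical illustration: test whether observed conditional returns are
    # consistent with a candidate model distribution (a normal fit is used here
    # only as a stand-in for the distribution derived in the thesis).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    observed_returns = rng.standard_t(df=5, size=500)  # placeholder data

    mu, sigma = stats.norm.fit(observed_returns)
    statistic, p_value = stats.kstest(observed_returns, "norm", args=(mu, sigma))
    print(f"KS statistic = {statistic:.3f}, p-value = {p_value:.3f}")

Because the parameters are fitted on the same sample, the nominal p-value is optimistic; simulating the test statistic (e.g. a parametric bootstrap) is the usual remedy, which is in the spirit of the simulation techniques the abstract mentions.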
25

Dissertation_LeiLi

Lei Li (16631262) 26 July 2023 (has links)
In the real world, uncertainty is a common and challenging problem faced by individuals, organizations, and firms. Decision quality is strongly affected by uncertainty because decision makers lack complete information and have to weigh losses and gains over many possible outcomes or scenarios. This study explores dynamic decision making (with known distributions) and decision learning (with unknown distributions but some samples) in not-for-profit operations and supply chain management. We first study dynamic staffing for paid workers and volunteers with uncertain supply in a nonprofit operation, where the optimal policy is too complex to compute and implement. Then we consider dynamic inventory control and pricing under both supply and demand uncertainties, where unmet demand is lost, leading to a challenging non-concave dynamic problem. Furthermore, we explore decision learning from limited data on the focal system and available data from related but different systems through transfer learning, cross learning, and co-learning, exploiting the similarities among related systems.
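
A stripped-down sketch of the dynamic inventory problem with lost sales (backward induction on a small grid; prices, horizon and demand distribution are invented for illustration and the pricing decision studied in the thesis is omitted):

    # Finite-horizon inventory control with lost sales, solved by backward induction.
    # All numbers are illustrative assumptions, not taken from the dissertation.
    import numpy as np

    T, MAX_INV, MAX_ORDER = 4, 20, 10
    price, cost, holding = 5.0, 2.0, 0.1
    demand_vals = np.arange(0, 11)
    demand_probs = np.full(len(demand_vals), 1 / len(demand_vals))  # uniform demand

    V = np.zeros((T + 1, MAX_INV + 1))            # value function, V[T] = 0
    policy = np.zeros((T, MAX_INV + 1), dtype=int)

    for t in range(T - 1, -1, -1):
        for x in range(MAX_INV + 1):
            best_val, best_q = -np.inf, 0
            for q in range(0, min(MAX_ORDER, MAX_INV - x) + 1):
                stock = x + q
                sales = np.minimum(stock, demand_vals)
                leftover = stock - sales                   # unmet demand is lost
                reward = price * sales - cost * q - holding * leftover
                val = np.dot(demand_probs, reward + V[t + 1, leftover])
                if val > best_val:
                    best_val, best_q = val, q
            V[t, x] = best_val
            policy[t, x] = best_q

    print("Optimal first-period order when starting empty:", policy[0, 0])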
26

Essays on bayesian and classical econometrics with small samples

Jarocinski, Marek 15 June 2006 (has links)
This thesis deals with the problems of econometric estimation with small samples, in the contexts of monetary VARs and growth empirics. First, it shows how to improve structural VAR analysis on short datasets. The first chapter adapts the exchangeable prior specification to the VAR context and obtains new findings about monetary transmission in the new Member States of the European Union. The second chapter proposes a prior on the initial growth rates of the modeled variables, which tackles the classical small-sample bias in time series and reconciles the Bayesian and classical points of view on time series estimation. The third chapter studies the effect of measurement error in income data on growth empirics, and shows that econometric procedures which are robust to model uncertainty are very sensitive to measurement error of plausible size and properties.
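
A rough sense of what an exchangeable prior does (a toy conjugate-normal sketch with made-up numbers, not the VAR specification of the thesis): country-specific coefficients are shrunk toward a common mean, with the amount of shrinkage governed by the prior variance relative to the sampling variance.

    # Toy exchangeable-prior shrinkage: each country's estimate is pulled toward
    # the cross-country mean. All numbers are invented for illustration.
    import numpy as np

    ols_estimates = np.array([0.9, 0.4, 0.7, 0.2])    # per-country OLS estimates
    ols_variances = np.array([0.10, 0.05, 0.20, 0.02])
    prior_mean = ols_estimates.mean()                  # common mean (empirical Bayes)
    prior_variance = 0.05                              # tightness of the exchangeable prior

    # Conjugate normal updating: precision-weighted average of data and prior.
    w = (1 / ols_variances) / (1 / ols_variances + 1 / prior_variance)
    posterior_means = w * ols_estimates + (1 - w) * prior_mean
    print(posterior_means)

Noisier estimates (larger sampling variance) are pulled more strongly toward the common mean, which is how pooling across units stabilizes estimation on short samples.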
27

A decompositional investigation of 3D face recognition

Cook, James Allen January 2007 (has links)
Automated Face Recognition is the process of determining a subject's identity from digital imagery of their face without user intervention. The term in fact encompasses two distinct tasks: Face Verification is the process of verifying a subject's claimed identity, while Face Identification involves selecting the most likely identity from a database of subjects. This dissertation focuses on the task of Face Verification, which has a myriad of applications in security ranging from border control to personal banking. Recently the use of 3D facial imagery has found favour in the research community due to its inherent robustness to the pose and illumination variations which plague the 2D modality. The field of 3D face recognition is, however, yet to fully mature and there remain many unanswered research questions particular to the modality. The relative expense and specialty of 3D acquisition devices also mean that the availability of databases of 3D face imagery lags significantly behind that of standard 2D face images. Human recognition of faces is rooted in an inherently 2D visual system, and much is known regarding the use of 2D image information in the recognition of individuals. The corresponding knowledge of how discriminative information is distributed in the 3D modality is much less well defined. This dissertation addresses these issues through the use of decompositional techniques. Decomposition alleviates the problems associated with dimensionality explosion and the Small Sample Size (SSS) problem, and spatial decomposition is a technique which has been widely used in face recognition. The application of decomposition in the frequency domain, however, has not received the same attention in the literature. The use of decomposition techniques allows a mapping of the regions (both spatial and frequency) which contain the discriminative information that enables recognition. In this dissertation these techniques are covered in significant detail, both in terms of practical issues in the respective domains and in terms of the underlying distributions which they expose. Significant discussion is given to the manner in which the inherent information of the human face is manifested in the 2D and 3D domains and how these two modalities inter-relate. This investigation is extended to cover the manner in which the decomposition techniques presented can be recombined into a single decision. Two new methods for learning the weighting functions for both the sum and product rules are presented, together with extensive testing against established methods. Knowledge acquired from these examinations is then used to create a combined technique termed Log-Gabor Templates. The proposed technique utilises both the spatial and frequency domains to extract better performance than either in isolation. Experimentation demonstrates that the spatial and frequency domain decompositions are complementary and can be combined to give improved performance and robustness.
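
The sum- and product-rule fusion mentioned above can be pictured as follows (an illustrative stand-in with made-up scores and weights, not the dissertation's learned weighting functions):

    # Fusing per-region match scores (e.g. from spatial or frequency-domain
    # decompositions) with the weighted sum rule and the product rule.
    import numpy as np

    region_scores = np.array([0.82, 0.64, 0.91, 0.55])  # per-region similarity scores
    weights = np.array([0.4, 0.2, 0.3, 0.1])            # hypothetical learned weights, sum to 1

    sum_rule_score = np.dot(weights, region_scores)
    product_rule_score = np.prod(region_scores ** weights)  # weighted product (geometric mean)

    threshold = 0.7
    print("sum rule:    ", sum_rule_score, "accept" if sum_rule_score >= threshold else "reject")
    print("product rule:", product_rule_score, "accept" if product_rule_score >= threshold else "reject")

The product rule is more sensitive to a single weak region than the sum rule, which is one reason learning the weighting functions matters when combining decompositions.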
28

Dynamic factor model with non-linearities : application to the business cycle analysis / Modèles à facteurs dynamiques avec non linéarités : application à l'analyse du cycle économique

Petronevich, Anna 26 October 2017 (has links)
This thesis is dedicated to the study of a particular class of non-linear dynamic factor models, the Dynamic Factor Models with Markov Switching (MS-DFM). Combining the features of the dynamic factor model and the Markov switching model, i.e. the ability to aggregate massive amounts of information and to track recurring processes, this framework has proved to be a very useful and convenient instrument in many applications, the most important of them being the analysis of business cycles. In order to monitor the health of an economy and to evaluate policy results, knowledge of the current state of the business cycle is essential. However, it is not easy to determine, since there is no commonly accepted dataset and method to identify turning points, and the official institutions, in countries where such a practice exists, announce a new turning point with a structural delay of several months. The MS-DFM is able to resolve these issues by providing estimates of the current state of the economy in a timely, transparent and replicable manner on the basis of the common component of macroeconomic indicators characterizing the real sector. The thesis contributes to the vast literature in this area in three directions.
In Chapter 3, I compare the two popular estimation techniques of the MS-DFM, the one-step and the two-step methods, and apply them to French data to obtain a business cycle turning point chronology. In Chapter 4, on the basis of Monte Carlo simulations, I study the consistency of the estimators of the preferred technique, the two-step estimation method, and analyze their behavior in small samples. In Chapter 5, I extend the MS-DFM and suggest the Dynamical Influence MS-DFM (DI-MS-DFM), which makes it possible to evaluate the contribution of the financial sector to the dynamics of the business cycle and vice versa, taking into consideration that the interaction between them can be dynamic.
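
A bare-bones sketch of the two-step idea on simulated data (generic library calls, not the thesis implementation; the indicator panel is a placeholder): step one extracts a common factor from a panel of indicators, step two fits a two-regime Markov-switching model to that factor.

    # Two-step MS-DFM sketch: (1) common factor via principal components,
    # (2) two-regime Markov-switching model on the factor. Data are simulated.
    import numpy as np
    from sklearn.decomposition import PCA
    from statsmodels.tsa.regime_switching.markov_regression import MarkovRegression

    rng = np.random.default_rng(2)
    n_obs, n_series = 300, 6
    common = np.concatenate([rng.normal(0.5, 1, 150), rng.normal(-1.0, 1, 150)])
    panel = common[:, None] + rng.normal(0, 1, size=(n_obs, n_series))  # noisy indicators

    # Step 1: estimate the common component as the first principal component.
    standardized = (panel - panel.mean(0)) / panel.std(0)
    factor = PCA(n_components=1).fit_transform(standardized).ravel()

    # Step 2: two-regime switching mean and variance for the factor.
    model = MarkovRegression(factor, k_regimes=2, trend="c", switching_variance=True)
    results = model.fit()
    probs = results.smoothed_marginal_probabilities  # (n_obs, 2); regime labels are arbitrary
    print(results.params)
    print("smoothed probability of regime 0 at the last observation:", probs[-1, 0])

The smoothed regime probabilities are what turning-point chronologies are read off from; the thesis studies how the error from the first step propagates into the second.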
29

Pravděpodobnostní optimalizace konstrukcí / Reliability-based structural optimization

Slowik, Ondřej January 2014 (has links)
This thesis introduces the reader to the importance of optimization and probabilistic assessment of structures for civil engineering problems. Chapter 2 further investigates the combination of previously proposed optimization techniques with probabilistic assessment in the form of optimization constraints. Academic software has been developed to demonstrate the effectiveness of the suggested methods and for their statistical testing. Chapter 3 summarizes the results of testing the previously described optimization method (called Aimed Multilevel Sampling), including a comparison with other optimization techniques. In the final part of the thesis, the described procedures are demonstrated on selected optimization and reliability problems. The methods described in the text represent an engineering approach to optimization problems and aim to introduce a simple and transparent optimization algorithm which could serve practical engineering purposes.
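
The coupling of optimization with a probabilistic constraint can be pictured with a toy example (invented numbers and plain Monte Carlo on a design grid, not the Aimed Multilevel Sampling method of the thesis): choose the cheapest design whose estimated failure probability stays below a target.

    # Toy reliability-constrained design: smallest cross-section area whose
    # Monte Carlo failure-probability estimate meets the target. Illustrative only.
    import numpy as np

    rng = np.random.default_rng(3)
    n_samples = 100_000
    load = rng.normal(100.0, 20.0, n_samples)       # random load effect [kN]
    strength = rng.normal(300.0, 30.0, n_samples)   # random material strength [MPa]
    target_pf = 1e-3                                 # admissible failure probability

    def failure_probability(area_cm2):
        resistance = strength * area_cm2 * 0.1       # toy resistance model [kN]
        return np.mean(load > resistance)

    candidate_areas = np.linspace(2.0, 10.0, 81)     # design-variable grid [cm^2]
    feasible = [a for a in candidate_areas if failure_probability(a) <= target_pf]
    print("cheapest feasible area:", min(feasible) if feasible else "none found")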
30

Data-Driven Success in Infrastructure Megaprojects. : Leveraging Machine Learning and Expert Insights for Enhanced Prediction and Efficiency / Datadriven framgång inom infrastrukturmegaprojekt. : Utnyttja maskininlärning och expertkunskap för förbättrad prognostisering och effektivitet.

Nordmark, David E.G. January 2023 (has links)
This Master's thesis uses random forest models and leave-one-out cross-validation to predict the success of infrastructure megaprojects. The goal was to enhance the efficiency of the design and engineering phase of the infrastructure and construction industries. Because of the small sample size of megaprojects and limited data sharing, the lack of data poses significant challenges for applying artificial intelligence to the evaluation and prediction of megaprojects. This thesis explores how megaprojects can benefit from data collection and machine learning despite small sample sizes. The research focused on analyzing data from thirteen megaprojects and identifying the most influential data for machine learning analysis. The results show that incorporating expert data, representing critical success factors for megaprojects, significantly enhanced the accuracy of the predictive model. The superior performance of expert data over economic data, experience data, and documentation data demonstrates the significance of domain expertise. In addition, the results demonstrate the significance of the planning phase through feature selection techniques and feature importance scores. In the planning phase, a small, devoted, and highly experienced team of project planners has proven to be a crucial factor for project success. The thesis concludes that, in order to maximize the utility of machine learning, companies must identify their critical success factors and collect the corresponding data. / This Master's thesis investigates the following research question: How can machine learning and insightful data analysis be used to increase efficiency in the planning and design phase of the infrastructure sector? The challenge is addressed by analyzing data from real megaprojects and applying advanced machine learning algorithms to predict project success and identify the success factors. Our research focuses on megaprojects in particular because of their complicated nature, unique characteristics and enormous impact on society. Such projects are rarely completed, which makes it difficult to obtain large amounts of real-world data. It is clear that AI has the potential to be an invaluable tool for understanding and managing the complexity of megaprojects despite the problems we face, since artificial intelligence makes it possible to take decisions that are data-driven and better informed. The thesis addresses the central problem of the lack of data from megaprojects, and this scarcity also motivates the work, making the research relevant to other fields characterized by small data samples. The results show that the evaluation of megaprojects can be improved through the smart use of specific data attributes, and the thesis encourages companies to start collecting key data so that artificial intelligence and machine learning can be used to their advantage.
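
The modeling setup described, a random forest evaluated with leave-one-out cross-validation on a handful of projects, can be sketched as follows; the feature matrix and labels below are random placeholders, not the thirteen-project dataset:

    # Leave-one-out evaluation of a random forest on a tiny sample, plus feature
    # importances. Data are random placeholders standing in for project records.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import LeaveOneOut, cross_val_score

    rng = np.random.default_rng(4)
    X = rng.normal(size=(13, 8))         # 13 projects, 8 expert/economic features
    y = rng.integers(0, 2, size=13)      # 1 = successful project (placeholder labels)

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
    print("LOOCV accuracy:", scores.mean())

    clf.fit(X, y)
    ranked = np.argsort(clf.feature_importances_)[::-1][:3]
    for rank, idx in enumerate(ranked, start=1):
        print(f"{rank}. feature {idx}: importance {clf.feature_importances_[idx]:.3f}")

With only thirteen observations, each held-out project contributes a single 0/1 score, so the LOOCV mean is the natural small-sample accuracy estimate the thesis relies on.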
