51

Nonlinear unmixing of hyperspectral images

Altmann, Yoann 07 October 2013 (has links)
Spectral unmixing is one of the major issues arising in the analysis of hyperspectral images. It consists of identifying the macroscopic materials present in a hyperspectral image and quantifying the proportions (or abundances) of these materials in each pixel. Most unmixing techniques rely on a linear mixing model, which is often regarded as a first-order approximation of the actual mixture. However, the linear model can be inaccurate for some images (for instance, scenes involving multiple reflections, such as forests or urban areas), and more complex nonlinear models must then be considered to analyze such images. The aim of this thesis is to study new nonlinear mixing models and to propose associated algorithms for hyperspectral image analysis. First, a parametric post-nonlinear model is investigated and efficient unmixing algorithms based on this model are proposed. Prior knowledge about the pure components present in the observed image, their proportions, and the nonlinearity parameters is exploited through Bayesian inference. The second model considered in this work approximates the nonlinear manifold containing the observed pixels using Gaussian processes. The associated unmixing algorithm estimates the nonlinear relation between the material abundances and the observed pixels without explicitly introducing the material spectral signatures into the mixing model; these signatures are estimated in a second step by Gaussian process prediction. Accounting for nonlinear effects in hyperspectral images usually requires more complex unmixing strategies than those based on a linear model. Since the linear model is often sufficient to approximate most actual mixtures accurately, it is of interest to detect the pixels or regions of the image where it holds; after this detection, nonlinear unmixing algorithms need only be applied to the pixels that actually require nonlinear mixing models. The last part of this thesis therefore studies nonlinearity detectors based on linear and nonlinear models for hyperspectral image analysis. The proposed nonlinear unmixing methods improve the characterization of hyperspectral images compared to methods based on a linear model, in particular by reducing the data reconstruction error. Moreover, they provide better spectral signature and abundance estimates when the observed pixels result from nonlinear mixtures. Simulation results on synthetic and real data illustrate the benefit of nonlinearity detection for hyperspectral image analysis; in particular, the proposed detectors can identify components present in only a few pixels (and hence hardly distinguishable) and locate regions where significant nonlinear effects occur (shadows, relief, ...). Finally, accounting for spatial correlation in hyperspectral images can improve the performance of nonlinear unmixing and nonlinearity detection algorithms.
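As a concrete illustration of the post-nonlinear idea described above, the following Python sketch simulates one pixel under a linear mixture and under a second-order polynomial transformation of it. The sizes, the random endmember signatures, and the parameter `b` are all hypothetical, and the quadratic form is only one common choice of post-nonlinearity, not necessarily the exact model of the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical endmember signatures: R endmembers over L spectral bands.
L_bands, R = 50, 3
M = rng.uniform(0.0, 1.0, size=(L_bands, R))   # endmember matrix

# Abundances on the simplex (non-negative, summing to one).
a = rng.dirichlet(np.ones(R))

# Linear mixture (the usual first-order approximation).
x_lin = M @ a

# Polynomial post-nonlinear mixture: a nonlinearity g applied band-wise
# to the linear mixture, here g(x) = x + b*x^2 with a small parameter b.
b = 0.3
x_pnmm = x_lin + b * x_lin**2

# Observed pixel = nonlinear mixture + Gaussian noise.
y = x_pnmm + rng.normal(scale=0.01, size=L_bands)

# b = 0 recovers the linear mixing model exactly.
print(float(np.abs(x_pnmm - x_lin).max()))
```

A nonlinearity detector, in this setting, amounts to deciding per pixel whether `b` differs significantly from zero.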
52

Structural estimation of the output gap: an analysis for Brazil

Oliveira, Leandro Padulla da Cruz January 2013 (has links)
This work estimates the output gap for Brazil using the main methodologies studied in the literature and presents an alternative extraction method not yet applied to the Brazilian case: estimating the output gap with a dynamic stochastic general equilibrium (DSGE) model. The output gap obtained from the DSGE model was able to identify the recessions dated by FGV; beyond those episodes, two additional crisis periods were identified. This approach has some advantages, such as the possibility of decomposing the estimated output gap into the shocks present in the model. The shock decomposition showed that the largest contribution to output gap variation comes from demand shocks. It also indicated that commodity price shocks can be understood as demand shocks rather than supply shocks. The output gap estimated by the DSGE model was found to have better predictive power for long-run free-price inflation than the other methodologies presented. The DSGE-based output gap can thus serve as an additional tool for the conduct of monetary policy.
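The DSGE estimation itself is beyond a short snippet, but one of the standard benchmark methodologies against which such models are compared — statistical trend/cycle extraction — can be sketched with a Hodrick-Prescott filter. The series and smoothing parameter below are illustrative, not data from the thesis:

```python
import numpy as np

def hp_filter(y, lam=1600.0):
    """Hodrick-Prescott trend: tau = (I + lam * D'D)^{-1} y,
    where D is the (T-2) x T second-difference matrix."""
    T = len(y)
    D = np.zeros((T - 2, T))
    for t in range(T - 2):
        D[t, t:t + 3] = [1.0, -2.0, 1.0]
    return np.linalg.solve(np.eye(T) + lam * (D.T @ D), y)

# Hypothetical log-output series: linear trend plus a cyclical component.
t = np.arange(80)
log_y = 0.01 * t + 0.02 * np.sin(2 * np.pi * t / 20)

trend = hp_filter(log_y)
gap = log_y - trend        # output gap as deviation of log output from trend
print(round(float(np.abs(gap).max()), 4))
```

An HP filter exactly preserves a linear trend, so the extracted gap here consists essentially of the sinusoidal cycle; a DSGE-based gap instead attributes the cycle to structural shocks.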
53

The dynamic impact of monetary policy on regional housing prices in the US: Evidence based on factor-augmented vector autoregressions

Fischer, Manfred M., Huber, Florian, Pfarrhofer, Michael, Staufer-Steinnocher, Petra January 2018 (has links) (PDF)
In this study, interest centers on regional differences in the response of housing prices to monetary policy shocks in the US. We address this issue by analyzing monthly home price data for metropolitan regions using a factor-augmented vector autoregression (FAVAR) model. Bayesian model estimation is based on Gibbs sampling with Normal-Gamma shrinkage priors for the autoregressive coefficients and factor loadings, while monetary policy shocks are identified using high-frequency surprises around policy announcements as external instruments. The empirical results indicate that monetary policy actions typically have sizeable and significant positive effects on regional housing prices, with differences in magnitude and duration across regions. The largest effects are observed in regions located in states on both the East and West Coasts, notably California, Arizona and Florida. / Series: Working Papers in Regional Science
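The first stage of a FAVAR — extracting a common factor from a large panel by principal components — can be sketched as follows. The panel dimensions, persistence, and loadings are made-up values; a full FAVAR would go on to stack the estimated factor with observed variables in a VAR:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical panel: T months of home-price growth for N metro regions,
# driven by one persistent national factor plus idiosyncratic noise.
T, N = 200, 30
factor = np.zeros(T)
for t in range(1, T):
    factor[t] = 0.8 * factor[t - 1] + rng.normal()
loadings = rng.uniform(0.5, 1.5, N)
panel = np.outer(factor, loadings) + rng.normal(scale=0.5, size=(T, N))

# FAVAR step 1: extract the common factor by principal components.
X = panel - panel.mean(axis=0)
_, _, Vt = np.linalg.svd(X, full_matrices=False)
f_hat = X @ Vt[0]                  # first principal component
f_hat /= f_hat.std()

# The estimated factor tracks the true one up to sign and scale.
corr = np.corrcoef(f_hat, factor)[0, 1]
print(round(abs(corr), 2))
```

Regional heterogeneity in the responses then comes from the region-specific loadings on the common factor.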
55

Spillovers and jumps in global markets: a comparative analysis

Rodolfo Chiabai Moura 08 June 2018 (has links)
We analyze the relation between volatility spillovers and jumps in financial markets. To this end, we compare the volatility spillover index proposed by Diebold and Yilmaz (2009) with a global volatility component estimated through a multivariate stochastic volatility model with jumps in the mean and in the conditional volatility. This model allows a direct dating of the events that alter the global volatility structure, based on a permanent/transitory decomposition of the structure of returns and volatilities, and also yields market risk measures. We conclude that the multivariate stochastic volatility model overcomes some limitations of the spillover index and provides a practical tool for measuring and managing risk in global financial markets.
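For reference, the Diebold-Yilmaz (2009) index compared above measures the share of forecast error variance that comes from cross-variable shocks. A minimal sketch for a known bivariate VAR(1) follows; the coefficients, covariance, and horizon are illustrative, and in practice the VAR would first be estimated from returns or volatilities:

```python
import numpy as np

# Known bivariate VAR(1): y_t = A y_{t-1} + e_t, Cov(e_t) = Sigma.
A = np.array([[0.5, 0.2],
              [0.1, 0.6]])
Sigma = np.array([[1.0, 0.3],
                  [0.3, 1.0]])
H = 10                                  # forecast horizon

P = np.linalg.cholesky(Sigma)           # Cholesky identification
Theta = [np.linalg.matrix_power(A, h) @ P for h in range(H)]

# H-step forecast error variance contribution of shock j to variable i.
contrib = sum(Th**2 for Th in Theta)    # elementwise squares, summed over h
shares = contrib / contrib.sum(axis=1, keepdims=True)

# Spillover index: off-diagonal (cross-variance) mass, in percent.
spillover = 100 * (shares.sum() - np.trace(shares)) / shares.sum()
print(round(float(spillover), 1))
```

Each row of `shares` sums to one, so the index is bounded between 0 (no spillovers) and 100.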
57

Binary regression using power and reversal power links

Susan Alicia Chumbimune Anyosa 07 April 2017 (has links)
The aim of this dissertation is to study a family of asymmetric link functions for binary regression models under the Bayesian approach. Specifically, we present parameter estimation for binary regression models with power and reversal power link functions, considering the Hamiltonian Monte Carlo method in its No-U-Turn Sampler extension and the Metropolis-Hastings-within-Gibbs sampling method. Furthermore, we study a variety of model comparison measures, including information criteria and measures of predictive evaluation. A simulation study was conducted to assess the accuracy and efficiency of the estimated parameters. Through an analysis of educational data, we show that models using the proposed link functions fit better than models using standard links.
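One common parameterization of the power and reversal power links raises a symmetric base CDF to a positive power, which skews the response curve; the logistic base used below is an assumption for illustration:

```python
import math

def cdf_logistic(x):
    """Standard logistic CDF, the symmetric base link."""
    return 1.0 / (1.0 + math.exp(-x))

def power_link(eta, lam):
    """Power link: success probability F(eta)**lam, with lam > 0.
    lam = 1 recovers the ordinary (symmetric) logistic link."""
    return cdf_logistic(eta) ** lam

def reversal_power_link(eta, lam):
    """Reversal power link: 1 - F(-eta)**lam, skewed the other way."""
    return 1.0 - cdf_logistic(-eta) ** lam

# For lam = 1 both links coincide with the logistic link.
p1 = power_link(0.5, 1.0)
p2 = reversal_power_link(0.5, 1.0)
print(round(p1, 4), round(p2, 4))
```

The shape parameter `lam` controls the asymmetry of the response curve and, in the Bayesian setting of the dissertation, would be sampled jointly with the regression coefficients.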
58

Consequences of Non-Modeled and Modeled Between Case Variation in the Level-1 Error Structure in Multilevel Models for Single-Case Data: A Monte Carlo Study

Baek, Eun Kyeng 01 January 2015 (has links)
The multilevel modeling (MLM) approach is flexible in that it can handle various methodological issues that arise with single-case studies, such as the need to model possible dependency in the errors, linear or nonlinear trends, and count outcomes (e.g., Van den Noortgate & Onghena, 2003a). Within the MLM framework, researchers can not only model dependency in the errors but also model a variety of level-1 error structures. The effect of misspecifying the level-1 error structure has been well studied for MLM analyses: generally, the estimates of the fixed effects were unbiased, but the estimates of the variance parameters were substantially biased when the level-1 error structure was misspecified. However, previous misspecification studies, as well as applied studies of multilevel models with single-case data, have made a critical assumption: that the level-1 error structure is constant across all participants. Previous studies suggest that the level-1 error structure may in fact differ across participants (Baek & Ferron, 2011; Baek & Ferron, 2013; Maggin et al., 2011). If there is much variation in the level-1 error structure, this can affect estimation of both the fixed effects and the random effects. Despite the importance of this issue, the effects of modeling between-case variation in the level-1 error structure had not yet been systematically studied. The purpose of this simulation study was to extend MLM growth curve models to allow the level-1 error structure to vary across cases, and to identify the consequences of modeling and not modeling between-case variation in the level-1 error structure for single-case studies.
A Monte Carlo simulation examined conditions that varied in series length per case (10 or 20), number of cases (4 or 8), true level-1 error structure (homogeneous, moderately heterogeneous, severely heterogeneous), level-2 error variance in baseline slope and shift in slope (.05 or .2 times the level-1 variance), and the method used to analyze the data: allowing the level-1 error variance and autocorrelation to vary across cases (Model 2) or not (Model 1). All simulated data sets were analyzed using Bayesian estimation. For each condition, 1,000 data sets were simulated, and bias, RMSE, and credible interval (CI) coverage and width were examined for the fixed treatment effects and the variance components. The results showed that the different ways of modeling the level-1 error structure had little to no impact on the estimates of the fixed treatment effects, but substantial impact on the estimates of the variance components, especially the level-1 error standard deviation and the autocorrelation parameters. Modeling between-case variation in the level-1 error structure (Model 2) performed better than not modeling it (Model 1) for estimating the level-1 error standard deviation and the autocorrelation parameters, and the relative effectiveness of Model 2 increased with the degree of heterogeneity in the data. The results also indicated that whether the level-1 error structure was under-specified, over-specified, or correctly specified had little to no impact on the estimates of the fixed treatment effects, but a substantial impact on the level-1 error standard deviation and the autocorrelation: while the correctly specified and over-specified models performed fairly well, the under-specified model performed poorly.
Moreover, it was revealed that the form of heterogeneity in the data (i.e., one extreme case versus a more even spread of the level-1 variances) might have some impact on relative effectiveness of the two models, but the degree of the autocorrelation had little to no impact on the relative performance of the two models.
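The heterogeneous level-1 error structures studied above can be generated as case-specific AR(1) processes. A minimal sketch follows; the series lengths, standard deviations, and autocorrelations are example values for a hypothetical "heterogeneous" condition, not the exact design cells of the study:

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_case(n_obs, sigma, rho):
    """AR(1) level-1 errors for one case: e_t = rho*e_{t-1} + u_t,
    scaled so the stationary standard deviation equals sigma."""
    e = np.zeros(n_obs)
    e[0] = rng.normal(scale=sigma)
    for t in range(1, n_obs):
        e[t] = rho * e[t - 1] + rng.normal(scale=sigma * np.sqrt(1 - rho**2))
    return e

# Heterogeneous condition: each case gets its own level-1 error SD and
# autocorrelation, rather than one shared (sigma, rho) pair.
cases = [(20, 1.0, 0.1), (20, 1.0, 0.4), (20, 3.0, 0.1), (20, 3.0, 0.4)]
errors = [simulate_case(*c) for c in cases]
print([len(e) for e in errors])
```

Model 1 in the study would fit one common (sigma, rho) to all four cases, while Model 2 estimates a pair per case.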
59

Bayesian estimation of Shannon entropy for bivariate beta priors

Bodvin, Joanna Sylvia Liesbeth 10 July 2010 (has links)
In the aftermath of what is arguably the worst financial crisis in recent history, the focus on regulatory capital held by financial institutions such as banks is expected to increase significantly over the next few years. The probability of default is an important determinant of the amount of regulatory capital to be held, and the accurate calibration of this measure is vital. The purpose of this study is to propose the use of the Shannon entropy when determining the parameters of the prior bivariate beta distribution as part of a Bayesian calibration methodology. Various bivariate beta distributions are considered as priors to the multinomial distribution associated with rating categories, and their appropriateness is tested on default data. The formulae derived for the Bayesian estimation of Shannon entropy are used to measure the certainty obtained when selecting the prior parameters. / Dissertation (MSc)--University of Pretoria, 2010. / Statistics / unrestricted
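The Shannon (differential) entropy used here to quantify prior certainty has a closed form in the univariate Beta(a, b) case; for the bivariate beta priors of the dissertation the expressions are more involved. A small sketch using a numerical digamma approximation:

```python
from math import lgamma

def digamma(x, h=1e-6):
    # Numerical digamma via central difference of log-gamma (sketch only).
    return (lgamma(x + h) - lgamma(x - h)) / (2 * h)

def beta_entropy(a, b):
    """Differential Shannon entropy of a Beta(a, b) distribution:
    H = ln B(a,b) - (a-1)psi(a) - (b-1)psi(b) + (a+b-2)psi(a+b)."""
    log_B = lgamma(a) + lgamma(b) - lgamma(a + b)
    return (log_B - (a - 1) * digamma(a) - (b - 1) * digamma(b)
            + (a + b - 2) * digamma(a + b))

# Beta(1, 1) is Uniform(0, 1), whose differential entropy is 0;
# more concentrated priors, like Beta(2, 2), have lower (negative) entropy.
print(round(beta_entropy(1.0, 1.0), 6))
```

Lower entropy corresponds to a more informative prior, which is the sense in which entropy measures the certainty attached to a choice of prior parameters.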
60

A comparative study of data assimilation methods for oceanic models

Ruggiero, Giovanni Abdelnur 13 March 2014 (has links)
This thesis developed and implemented iterative data assimilation algorithms for a primitive-equation ocean model and compared them with other well-established DA methods such as 4D-Var and the Singular Evolutive Extended Kalman (SEEK) filter/smoother. The numerical model used is NEMO, configured to simulate a typical subtropical double-gyre circulation. The new iterative algorithms, like the Back and Forth Nudging (BFN), are all based on a sequence of alternating forward and backward model integrations. They are the Backward Smoother (BS), which uses the backward model to freely propagate "future" observations backward in time, and the Back and Forth Kalman Filter (BFKF), which also uses the backward model to propagate observations backward in time but, whenever a batch of observations is available, carries out an update step similar to that of the SEEK filter. The Bayesian formalism was used to derive these methods, which means they may be combined with any algorithm that estimates the a posteriori conditional probability of the model state by sequential methods. The results show that the main advantage of the BFN-based methods is the use of the backward model to propagate observation information backward in time. In this way, they avoid the adjoint model needed by 4D-Var, and the unknown temporal correlations needed by the Kalman smoother, to produce initial states or past model trajectories. The advantage of the Back and Forth idea relies on the implicit use of the unstable forward subspace, which becomes stable when stepping backward, allowing the error components projecting onto this subspace to be naturally damped during the backward integration.
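The back-and-forth nudging principle can be illustrated on a scalar toy model: integrate forward with a relaxation term pulling the state toward the observations, then integrate the reversed model backward with the sign of the nudging flipped so the backward sweep stays stable, and iterate to recover the initial condition. All constants below are illustrative, and the scalar linear model stands in for the ocean model:

```python
import numpy as np

# Toy model dx/dt = -a*x, observed with small noise on a regular grid.
a, K, dt, n_steps = 0.5, 2.0, 0.01, 200
rng = np.random.default_rng(3)

x_true0 = 1.0
t = np.arange(n_steps) * dt
x_true = x_true0 * np.exp(-a * t)
y_obs = x_true + rng.normal(scale=0.01, size=n_steps)

x0 = 3.0                     # deliberately wrong initial condition
for _ in range(5):           # BFN iterations
    # Forward sweep: model dynamics plus nudging toward the observations.
    x = x0
    for k in range(1, n_steps):
        x = x + dt * (-a * x + K * (y_obs[k] - x))
    # Backward sweep: reversed dynamics; the flipped nudging sign keeps
    # the backward integration stable while still tracking the data.
    for k in range(n_steps - 2, -1, -1):
        x = x - dt * (-a * x - K * (y_obs[k] - x))
    x0 = x                   # updated estimate of the initial state

print(round(abs(x0 - x_true0), 3))
```

No adjoint model is needed: the backward sweep reuses the (sign-reversed) model itself, which is the practical appeal of the BFN family noted in the abstract.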
