
ScatterNet hybrid frameworks for deep learning

Singh, Amarjot January 2019 (has links)
Image understanding is the task of interpreting images by effectively solving the individual tasks of object recognition and semantic image segmentation. An image understanding system must have the capacity to distinguish between similar-looking image regions while being invariant in its response to regions that have been altered by appearance-altering transformations. The fundamental challenge for any such system lies in this simultaneous requirement for both invariance and specificity. Many image understanding systems have been proposed that capture geometric properties such as shapes, textures, motion and 3D perspective projections using filtering, non-linear modulus, and pooling operations. Deep learning networks instead set aside these geometric considerations and compute descriptors with suitable invariance and stability to geometric transformations using (end-to-end) learned multi-layered network filters. In recent years these deep networks have come to dominate the previously separate research fields of machine learning, computer vision, natural language understanding and speech recognition. Despite their success, there remains a fundamental lack of understanding of how to design and optimize these networks, which makes them difficult to develop. Moreover, training them requires large labeled datasets, which in numerous applications may not be available. In this dissertation, we propose the ScatterNet Hybrid Framework for Deep Learning, inspired by the circuitry of the visual cortex. The framework uses a hand-crafted front-end, an unsupervised-learning middle section, and a supervised back-end to rapidly learn hierarchical features from unlabeled data. Each layer in the proposed framework is automatically optimized to produce the desired computationally efficient architecture. The term 'Hybrid' reflects the framework's use of both unsupervised and supervised learning.
We propose two hand-crafted front-ends that extract locally invariant features from the input signals. Two ScatterNet Hybrid Deep Learning (SHDL) networks (one generative and one deterministic) are then introduced by combining the proposed front-ends with two unsupervised learning modules that learn hierarchical features. These hierarchical features are finally used by a supervised learning module to solve the task of either object recognition or semantic image segmentation. The proposed front-ends are also shown to improve the performance and learning of current deep supervised learning networks (VGG, NIN, ResNet) with reduced computing overhead.
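The filter, modulus, and pooling operations that such a hand-crafted front-end builds on can be sketched in a few lines. The following is an illustrative single layer using Gabor-like oriented filters, a complex modulus, and average pooling; the filter parameters and function names are choices made for this sketch, not the thesis implementation.

```python
import numpy as np

def gabor_kernel(size, theta, freq=0.25, sigma=2.0):
    """Complex Gabor-like filter: an oriented band-pass kernel of the
    kind a hand-crafted scattering front-end applies (parameters are
    illustrative)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.exp(2j * np.pi * freq * xr)

def conv2(image, kernel):
    """Circular 2-D convolution via the FFT (adequate for a sketch)."""
    pad = np.zeros(image.shape, dtype=complex)
    pad[:kernel.shape[0], :kernel.shape[1]] = kernel
    return np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(pad))

def scatter_layer(image, thetas, pool=4):
    """Filter -> complex modulus -> average pooling, per orientation.
    The modulus and pooling yield locally invariant, stable coefficients."""
    maps = []
    for theta in thetas:
        mag = np.abs(conv2(image, gabor_kernel(9, theta)))  # non-linear modulus
        h, w = mag.shape
        pooled = mag.reshape(h // pool, pool, w // pool, pool).mean(axis=(1, 3))
        maps.append(pooled)
    return np.stack(maps)

rng = np.random.default_rng(0)
feats = scatter_layer(rng.random((32, 32)), [0.0, np.pi / 4, np.pi / 2])
```

The pooled modulus coefficients change little under small translations of the input, which is the invariance-with-specificity trade-off the abstract describes.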

Análise multi-escala de formas bidimensionais / Not available

Cesar Junior, Roberto Marcondes 26 November 1997 (has links)
Esta tese introduz um conjunto de novos métodos para análise de formas bidimensionais (2D) dentro do contexto da resolução de problemas de visão computacional e análise de formas neurais, ou neuromorfometria. Mais especificamente, este trabalho apresenta o desenvolvimento de conceitos e algoritmos para a representação e análise multi-escala de contornos de objetos em imagens digitais. Assim, o contorno dos objetos é representado por um sinal que assume valores complexos e que pode ser subseqüentemente analisado por uma transformada multi-escala. Nesse sentido, os desenvolvimentos apresentados nesta tese valeram-se matematicamente de ferramentas desenvolvidas na área de processamento de sinais e de imagens, bem como em outras áreas da matemática, como a geometria diferencial. Técnicas de análise de contornos através da curvatura multi-escala e das transformadas de Gabor e em wavelets são introduzidas, incluindo algoritmos específicos para a detecção de vértices, caracterização de escalas naturais, análise fractal de curvas deterministicamente auto-similares e extração de vetores de características associadas a diferentes aspectos de formas, como complexidade e retangularidade. Particularmente em relação aos métodos de análise multi-escala de curvatura, esta tese apresenta um novo esquema de estimação digital de curvatura baseado em propriedades da transformada de Fourier e novas abordagens para a prevenção da contração dos contornos devida à filtragem gaussiana. Esse novo esquema de estimação de curvatura foi testado exaustivamente, incluindo uma avaliação da precisão do método através de uma análise de erro entre valores da curvatura analítica e a estimada, baseada em curvas B-splines. O novo esquema apresentou resultados encorajadores em todas as avaliações, corroborando sua eficiência. Em relação à parte específica de análise de formas neurais, as contribuições desta tese residem em duas áreas.
Inicialmente, novas medidas de forma, correspondentes às energias multi-escala, foram introduzidas para a caracterização e classificação automática de neurônios baseada na complexidade das formas; experimentos de classificação estatística de células ganglionares (gato) são relatados. Finalmente, descreve-se uma nova técnica para a criação semi-automática de dendrogramas, os quais são estruturas de dados abstratas que descrevem células neurais. Todas as técnicas foram extensivamente testadas em imagens reais e sintéticas e os respectivos resultados, que corroboram a eficiência dos algoritmos, são incluídos ao longo da tese. / This thesis introduces a set of new methods for two-dimensional shape analysis for computer vision and neural shape analysis applications. More specifically, this work develops concepts and algorithms for multiscale contour representation and analysis of objects present in digital images. The object contour is represented by a complex-valued signal that can subsequently be analyzed by a multiscale transform. Different mathematical tools from the signal and image processing fields, as well as differential geometry, underlie the developments in this work. Techniques for contour analysis through multiscale curvature and the Gabor and wavelet transforms are introduced. The new techniques include specific algorithms for corner detection, natural-scale characterization, fractal analysis of self-similar curves and extraction of feature vectors associated with different shape aspects such as complexity and rectangularity. As far as the multiscale curvature analysis methods are concerned, this thesis presents a new framework for digital curvature estimation based on Fourier transform properties, together with new approaches for preventing the contour shrinking caused by Gaussian filtering. The new curvature estimation framework has been extensively evaluated, including a precision assessment comparing the estimated curvature with analytic values on B-spline curves.
The new framework performed successfully in all assessment experiments, corroborating its efficiency. As far as neural shape analysis is concerned, the contributions of this thesis are twofold. On one hand, new shape measures, corresponding to the multiscale energies, have been devised for the characterization and classification of neural cells based on shape complexity; statistical pattern recognition experiments using retinal ganglion cells (cat) are reported. On the other hand, a new technique for semi-automated generation of dendrograms, i.e. abstract data structures that represent different neural cell features, is described. All the techniques have been extensively assessed using both real and computer-generated images, and the respective results, which corroborate the robustness of the algorithms, are included throughout the thesis.
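The Fourier-based curvature estimation idea can be illustrated directly: representing a closed contour as a complex signal z(t) = x(t) + i y(t), spectral differentiation supplies the derivatives needed for the signed curvature k = Im(conj(z') z'') / |z'|^3. This is a minimal sketch of the general approach, not the thesis's exact scheme (which also addresses preventing Gaussian-filtering shrinkage).

```python
import numpy as np

def fourier_curvature(z):
    """Signed curvature of a closed contour given as a complex signal
    z(t) = x(t) + i*y(t); derivatives are computed spectrally, so the
    estimate is invariant to how the contour is parameterized."""
    n = len(z)
    omega = 2j * np.pi * np.fft.fftfreq(n)
    Z = np.fft.fft(z)
    dz = np.fft.ifft(omega * Z)        # z'(t)
    ddz = np.fft.ifft(omega**2 * Z)    # z''(t)
    return np.imag(np.conj(dz) * ddz) / np.abs(dz)**3

# Sanity check: a circle of radius R has constant curvature 1/R.
t = np.linspace(0.0, 1.0, 256, endpoint=False)
k = fourier_curvature(2.0 * np.exp(2j * np.pi * t))   # radius 2 -> k = 0.5
```

A multiscale version would multiply Z by Gaussians of varying width before differentiating, which is where the shrinkage problem the abstract mentions arises.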

Oscilações intrasazonais no Indo-Pacífico e na zona de convergência do Atlântico Sul: estudo observacional e numérico / Intraseasonal oscillations in the Indo-Pacific and the South Atlantic Convergence Zone: an observational and numerical study

Barbosa, Augusto Cesar Barros 27 April 2012 (has links)
O presente trabalho foi particularmente motivado pela necessidade de se compreender a variabilidade do sinal intrasazonal relacionado a eventos extremos da Oscilação de Madden-Julian (OMJ), fator consensual na mudança do clima em diversas regiões do globo terrestre em virtude de seus padrões de teleconexão atmosférica. Tal necessidade exige habilidades diferenciadas, como as apresentadas para o modelo OLAM v3.3 no decorrer do presente estudo. Foram utilizados dados observacionais da Reanálise II do NCEP (campo de vento em 200 e 850 mb), assim como variáveis obtidas por satélites (Radiação de Onda Longa Emergente, ROL), para avaliar a estrutura atmosférica na escala de tempo intrasazonal. O campo diário de TSM foi assimilado pelo modelo numérico como principal forçante atmosférica para a geração do sinal intrasazonal; além disso, aninhamentos de grade foram acionados para melhor resolver os processos de menor escala, essenciais para formar os processos na grande escala, os quais são intrínsecos ao sinal intrasazonal. Métodos estatísticos com nível de significância de 5% foram aplicados para validar os resultados obtidos com a modelagem numérica em comparação às observações. As observações mostraram que o ano de 2002 apresentou maior variabilidade intrasazonal na região do Indo-Pacífico, associada a eventos da OMJ, em relação aos outros anos em análise, tanto para o verão quanto para o inverno no HS. Na modelagem numérica, por outro lado, os anos de 2001/2002 apresentaram maior variabilidade na escala de tempo intrasazonal na região de controle INDI, com forte influência remota na região da América do Sul/ZCAS para o verão de 2002. O estudo de caso observacional de 22 de dezembro de 2002 mostrou que o principal mecanismo para a interação remota entre a região de controle INDI e a ZCAS2 foi gerado por uma combinação entre o PSA-curto e o guia preferencial de ondas 2.
A modelagem numérica sugere que a variabilidade intrasazonal representada pelo modelo OLAM v3.3 independe da distribuição temporal dos campos de TSM. No entanto, as evidências mostraram que o sinal modelado será tanto melhor quanto maior a variabilidade da energia intrasazonal no campo de TSM assimilado pelo modelo numérico. Em comparação com a convecção de Grell, a parametrização de cúmulos profundos do tipo Kuo apresentou maior variabilidade temporal na região de controle INDI para a DIV200mb em todo o período analisado, favorecendo maior atividade convectiva naquela região, inclusive na escala de tempo intrasazonal. Sucessivos aninhamentos de grade sugerem que a energia intrasazonal tende a aumentar significativamente à medida que se aumenta o número de grades aninhadas. Para o estudo de caso de 01 de julho de 2001, via modelagem numérica, foram necessários 30 dias para a OMJ inverter seu padrão na região de controle INDI, e somente após essa inversão foi encontrada atividade convectiva na escala de tempo intrasazonal sobre a região da ZCAS. Dessa forma, o OLAM v3.3 superestima o tempo de meio ciclo dessa oscilação e, consequentemente, o tempo de resposta sobre a AS, em particular na região da ZCAS2. Outro aspecto relevante refere-se à diferença na quantidade de energia intrasazonal que o OLAM v3.3 simula na região INDI quando há aninhamento de grade na região da ZCAS. O espectro de energia da TSO para a divergência ao nível de 200 mb mostrou que o OLAM v3.3 subestima a energia do sinal intrasazonal na região do oceano Índico em quase a metade do valor real observado.
Todavia, as observações mostraram que a energia espectral intrasazonal da divergência em 200 mb na região de controle ZCAS2, para a escala de 43 dias, foi da ordem de 0,42 x 10^-10 s^-2, resultando em uma diferença positiva de 0,08 x 10^-10 s^-2 em relação ao valor numérico obtido. Por fim, a metodologia do traçado de raios mostrou que os números de onda 2, 3 e 4 são bem representados pelo OLAM v3.3 na região tropical, corroborando a habilidade do modelo em reproduzir os padrões de teleconexão atmosférica gerados no evento da OMJ de 01 de julho de 2001. / This work was particularly motivated by the need to understand the variability of the intraseasonal signal related to extreme events of the Madden-Julian Oscillation (MJO), a consensual factor in weather changes across different regions of the globe owing to its atmospheric teleconnection patterns. Meeting this need demands particular modeling capabilities, such as those demonstrated for the OLAM v3.3 model in the course of this study. Observational datasets from the NCEP Reanalysis II (wind fields at 200 and 850 mb) were used, as well as satellite-derived variables (outgoing longwave radiation, OLR), to assess the atmospheric structure on the intraseasonal time scale. The daily SST field was assimilated by the numerical model as the main atmospheric forcing for generating the intraseasonal signal; in addition, grid nesting was activated to better resolve the smaller-scale processes essential to forming the large-scale processes intrinsic to the intraseasonal signal. Statistical methods at the 5% significance level were applied to validate the numerical modeling results against the observations. The observations showed that 2002 presented higher intraseasonal variability in the Indo-Pacific region, associated with MJO events, than the other years under review, for both the austral summer and winter.
For the numerical modeling, in turn, the years 2001/2002 presented higher variability on the intraseasonal time scale in the Indian Ocean control region, with strong remote influence over the South America/SACZ region in the austral summer of 2002. The observational case study of December 22, 2002 showed that the main mechanism for the remote interaction between the Indian Ocean control region and the SACZ2 control region was generated by a combination of the short-PSA pattern and preferential wave guide 2. The numerical modeling suggests that the intraseasonal variability represented by the OLAM v3.3 model is independent of the temporal distribution of the SST fields. However, the evidence showed that the modeled signal improves as the intraseasonal energy variability of the SST field assimilated by the model increases. Compared with the Grell scheme, the Kuo deep-cumulus parameterization showed greater temporal variability in the Indian Ocean region for the 200 mb divergence throughout the analyzed period, favoring convective activity on the intraseasonal time scale in that region. Successive grid nestings suggest that the intraseasonal energy tends to increase significantly as the number of nested grids increases. For the case study of July 1, 2001, 30 days of numerical modeling were needed for the MJO to reverse its pattern in the Indian Ocean control region, and only after this reversal was convective activity found on the intraseasonal time scale over the SACZ region. Thus, OLAM v3.3 overestimates the half-cycle time of this oscillation and, consequently, the response time over South America, in particular over the SACZ2 region. Another relevant aspect is the difference in the amount of intraseasonal energy that OLAM v3.3 simulates in the Indian Ocean region when a nested grid is applied over the SACZ region.
This fact, together with the signal reversal described above, suggests a "convective seesaw" interaction between the Indian Ocean region and the SACZ region. The wavelet power spectrum of the divergence at 200 mb showed that OLAM v3.3 underestimates the intraseasonal signal energy over the Indian Ocean region by about half of the observed value. The observations showed that the intraseasonal spectral energy of the 200 mb divergence in the SACZ2 control region, at the 43-day scale, was of the order of 0.42 x 10^-10 s^-2, a positive difference of 0.08 x 10^-10 s^-2 relative to the numerical value obtained. Finally, the ray-tracing methodology showed that wave numbers 2, 3 and 4 are well represented by OLAM v3.3 in the tropical region, corroborating the model's ability to reproduce the atmospheric teleconnection patterns generated in the MJO event of July 1, 2001.

Using wavelet bases to separate scales in quantum field theory

Michlin, Tracie L. 01 May 2017 (has links)
This thesis investigates the use of Daubechies wavelets to separate scales in local quantum field theory. Field theories have an infinite number of degrees of freedom on all distance scales, and quantum field theories, which are believed to describe the physics of subatomic particles, have no known mathematically convergent approximation methods. Daubechies wavelet bases can be used to separate degrees of freedom on different distance scales. Volume and resolution truncations then lead to mathematically well-defined truncated theories that can be treated using established methods. This work demonstrates that flow equation methods can be used to block-diagonalize truncated field-theoretic Hamiltonians by scale, eliminating the fine-scale degrees of freedom. This may lead to approximation methods and provide an understanding of how to formulate well-defined fine-resolution limits.
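Scale separation with Daubechies wavelets can be illustrated by a one-level periodized Daubechies-4 transform, which splits a sampled signal into coarse (approximation) and fine (detail) degrees of freedom while preserving energy exactly. This sketch is generic wavelet machinery, not the field-theoretic construction of the thesis.

```python
import numpy as np

s3 = np.sqrt(3.0)
# Daubechies-4 low-pass filter; the high-pass is its alternating-sign mirror.
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2))
g = h[::-1] * np.array([1.0, -1.0, 1.0, -1.0])

def dwt_level(x):
    """One level of the periodized Daubechies-4 transform: splits x into
    coarse (approximation) and fine (detail) halves.  The transform is
    orthogonal, so energy is preserved exactly."""
    n = len(x)
    xp = np.concatenate([x, x[:3]])   # periodic extension for the 4-tap filters
    approx = np.array([xp[i:i + 4] @ h for i in range(0, n, 2)])
    detail = np.array([xp[i:i + 4] @ g for i in range(0, n, 2)])
    return approx, detail

rng = np.random.default_rng(1)
x = rng.standard_normal(64)
a, d = dwt_level(x)   # 32 coarse-scale and 32 fine-scale degrees of freedom
```

Iterating on the approximation half produces the multi-resolution hierarchy; truncating resolution amounts to discarding the fine halves below some level.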

Characterization of active sonar targets

Schupp-Omid, Daniel 01 May 2016 (has links)
The problem of characterizing active sonar target responses has important applications in many fields, including the currently cost-prohibitive recovery of unexploded ordnance on the ocean floor. We present a method for recognizing these objects using a multidisciplinary approach that fuses machine learning, signal processing, and feature engineering. In short, by taking inspiration from other fields, we solve the problem of object recognition in shallow water in an inexpensive way. These techniques add to the body of knowledge in the field of active sonar processing and address real-world problems in the process.

Audio compression and speech enhancement using temporal masking models

Gunawan, Teddy Surya, Electrical Engineering & Telecommunications, Faculty of Engineering, UNSW January 2007 (has links)
Of the few existing models of temporal masking applicable to problems such as compression and enhancement, none are based on empirical data from the psychoacoustic literature, presumably because the multidimensional nature of the data makes the derivation of tractable functional models difficult. This thesis presents two new functional models of the temporal masking effect of the human auditory system, and their exploitation in audio compression and speech enhancement applications. Traditional audio compression algorithms do not fully utilise the temporal masking properties of the human auditory system, relying solely on simultaneous masking models. A perceptual wavelet-packet-based audio coder has been devised that incorporates the first of the developed temporal masking models, combined with simultaneous masking models in a novel manner. An evaluation of the coder using both objective (PEAQ, ITU-R BS.1387) and extensive subjective tests (ITU-R BS.1116) revealed a bitrate reduction of more than 17% compared with existing simultaneous-masking-based audio coders, while preserving transparent quality. In addition, the oversampled wavelet packet transform (ODWT) has been newly applied to obtain alias-free coefficients for more accurate masking threshold calculation. Finally, a low-complexity scalable audio coding algorithm using the ODWT-based thresholds and temporal masking has been investigated. There is currently a strong need for innovative speech enhancement algorithms that exploit the auditory masking effects of the human auditory system and perform well at very low signal-to-noise ratios. Existing competitive noise suppression algorithms, including those that incorporate simultaneous masking, were examined and evaluated for their suitability as baseline algorithms.
Objective measures using PESQ (ITU-T P.862) and subjective measures (ITU-T P.835) demonstrate that the proposed enhancement scheme, based on a second new masking model, outperformed the seven baseline speech enhancement methods by at least 6-20% depending on the SNR. Hence, the proposed speech enhancement scheme exploiting temporal masking effects has good potential across many types and intensities of environmental noise. Keywords: human auditory system; temporal masking; simultaneous masking; audio compression; speech enhancement; subjective test; objective test.
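For a flavor of how a functional temporal (forward) masking model enters such a coder: a masker raises the audibility threshold of components that follow it, with the elevation decaying as the delay grows. The sketch below uses a Jesteadt-style functional form M = a(b - log10(dt))(L - c) with placeholder parameter values; it is illustrative only and is not either of the thesis's models.

```python
import numpy as np

def forward_masking_db(masker_db, delay_ms, a=0.17, b=2.0, c=10.0):
    """Illustrative forward-masking curve of the Jesteadt-style form
    M = a * (b - log10(dt)) * (L - c): the masked threshold elevation (dB)
    decays with the logarithm of the delay after a masker of level L dB.
    The parameter values here are placeholders, not fitted data."""
    m = a * (b - np.log10(delay_ms)) * (masker_db - c)
    return np.maximum(m, 0.0)

# A coder would compare each signal component against the combined
# simultaneous + temporal masking threshold and spend no bits on
# components that fall below it.
elevation = forward_masking_db(80.0, np.array([5.0, 20.0, 80.0]))
```

With these placeholder parameters, an 80 dB masker still elevates the threshold roughly 12 dB at a 10 ms delay but contributes nothing by 100 ms.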

Image/video compression and quality assessment based on wavelet transform

Gao, Zhigang, January 2007 (has links)
Thesis (Ph. D.)--Ohio State University, 2007. / Title from first page of PDF file. Includes bibliographical references (p. 107-117).

Motion detection algorithm using wavelet transform

Lee, Jeongmin 19 March 2003 (has links)
This thesis presents an algorithm that estimates motion in image sequences using the wavelet transform. Motion detection is performed under the unfavorable conditions of background movement, changes of brightness, and noise; the algorithm is tolerant to brightness changes, noise, and small movements in the background. The false alarm rate of motion detection is reduced compared with standard techniques. Using the wavelet transform is numerically efficient, significantly reduces storage requirements, and yields more accurate motion detection. Tests performed on real images show the effectiveness of the algorithm, and practical motion detection results are presented. / Graduation date: 2003
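One way wavelet-domain motion detection can achieve brightness tolerance, sketched with a one-level 2-D Haar transform: comparing only the detail subbands between frames discards the DC component, so a global brightness offset does not trigger detection. The function names and threshold are illustrative, not the thesis algorithm.

```python
import numpy as np

def haar2(img):
    """One-level 2-D Haar transform: returns the LL band and the three
    detail bands (LH, HL, HH)."""
    a = (img[0::2] + img[1::2]) / 2.0   # vertical average
    d = (img[0::2] - img[1::2]) / 2.0   # vertical difference
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, (lh, hl, hh)

def motion_mask(frame0, frame1, thresh=0.5):
    """Flag motion where detail-band differences exceed a threshold.
    Detail bands carry no DC, so a global brightness offset cancels out."""
    _, d0 = haar2(frame0)
    _, d1 = haar2(frame1)
    energy = sum(np.abs(b1 - b0) for b0, b1 in zip(d0, d1))
    return energy > thresh
```

The half-resolution mask also illustrates the storage saving the abstract mentions: the comparison runs on subbands a quarter the size of the frames.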

Applications of wavelet bases to numerical solutions of elliptic equations

Zhao, Wei 11 1900 (has links)
In this thesis, we investigate Riesz bases of wavelets and their applications to numerical solutions of elliptic equations. Compared with the finite difference and finite element methods, the wavelet method for solving elliptic equations is relatively young but powerful. In the wavelet Galerkin method, the efficiency of the numerical schemes is directly determined by the properties of the wavelet bases, so the construction of Riesz bases of wavelets is crucial. We propose different ways to construct wavelet bases whose stability in Sobolev spaces is then established. An advantage of our approaches is that they are far simpler than many other known constructions. As a result, the corresponding numerical schemes are easily implemented and efficient. We apply these wavelet bases to solve some important elliptic equations in physics and demonstrate their effectiveness numerically. A multilevel algorithm based on the preconditioned conjugate gradient method is also developed to significantly improve the numerical performance. Numerical results and comparisons with other existing methods are presented to demonstrate the advantages of the proposed wavelet Galerkin method. / Mathematics
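The preconditioned conjugate gradient machinery such a multilevel algorithm builds on can be sketched on a 1-D model problem. The example below solves -u'' = f with a simple Jacobi (diagonal) preconditioner; the thesis's wavelet bases would supply a far better preconditioner, which this generic sketch does not attempt.

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-8, max_iter=5000):
    """Preconditioned conjugate gradients: M_inv applies an approximate
    inverse of A to the residual at every step."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Model problem: -u'' = f on (0, 1), u(0) = u(1) = 0, with f chosen so the
# exact solution is sin(pi x); second-order central differences.
n = 127
h = 1.0 / (n + 1)
A = (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
x_grid = np.linspace(h, 1.0 - h, n)
f = np.pi**2 * np.sin(np.pi * x_grid)
u = pcg(A, f, M_inv=lambda r: r * h**2 / 2.0)   # Jacobi preconditioner
```

The point of a good (e.g. wavelet-based) preconditioner is to make the iteration count independent of the mesh size, which the plain Jacobi choice here does not achieve.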

Essays in Dynamic Macroeconometrics

Bańbura, Marta 26 June 2009 (has links)
The thesis contains four essays covering topics in the field of macroeconomic forecasting. The first two chapters consider factor models in the context of real-time forecasting with many indicators. Using a large number of predictors offers an opportunity to exploit a rich information set and is also considered to be a more robust approach in the presence of instabilities. On the other hand, it poses a challenge of how to extract the relevant information in a parsimonious way. Recent research shows that factor models provide an answer to this problem. The fundamental assumption underlying those models is that most of the co-movement of the variables in a given dataset can be summarized by only a few latent variables, the factors. This assumption seems to be warranted in the case of macroeconomic and financial data. Important theoretical foundations for large factor models were laid by Forni, Hallin, Lippi and Reichlin (2000) and Stock and Watson (2002). Since then, different versions of factor models have been applied for forecasting, structural analysis or construction of economic activity indicators. Recently, Giannone, Reichlin and Small (2008) have used a factor model to produce projections of the U.S. GDP in the presence of a real-time data flow. They propose a framework that can cope with large datasets characterised by staggered and nonsynchronous data releases (sometimes referred to as "ragged edge"). This is relevant as, in practice, important indicators like GDP are released with a substantial delay and, in the meantime, more timely variables can be used to assess the current state of the economy. The first chapter of the thesis entitled "A look into the factor model black box: publication lags and the role of hard and soft data in forecasting GDP" is based on joint work with Gerhard Rünstler and applies the framework of Giannone, Reichlin and Small (2008) to the case of the euro area.
In particular, we are interested in the role of "soft" and "hard" data in the GDP forecast and how it is related to their timeliness. The soft data include surveys and financial indicators and reflect market expectations. They are usually promptly available. In contrast, the hard indicators on real activity measure directly certain components of GDP (e.g. industrial production) and are published with a significant delay. We propose several measures in order to assess the role of individual or groups of series in the forecast while taking into account their respective publication lags. We find that surveys and financial data contain important information beyond the monthly real activity measures for the GDP forecasts, once their timeliness is properly accounted for. The second chapter entitled "Maximum likelihood estimation of large factor model on datasets with arbitrary pattern of missing data" is based on joint work with Michele Modugno. It proposes a methodology for the estimation of factor models on large cross-sections with a general pattern of missing data. In contrast to Giannone, Reichlin and Small (2008), we can handle datasets that are not only characterised by a "ragged edge", but can include e.g. mixed frequency or short history indicators. The latter is particularly relevant for the euro area or other young economies, for which many series have been compiled only recently. We adopt the maximum likelihood approach which, apart from the flexibility with regard to the pattern of missing data, is also more efficient and allows imposing restrictions on the parameters. Applied for small factor models by e.g. Geweke (1977), Sargent and Sims (1977) or Watson and Engle (1983), it has been shown by Doz, Giannone and Reichlin (2006) to be consistent, robust and computationally feasible also in the case of large cross-sections.
To circumvent the computational complexity of a direct likelihood maximisation in the case of large cross-section, Doz, Giannone and Reichlin (2006) propose to use the iterative Expectation-Maximisation (EM) algorithm (used for the small model by Watson and Engle, 1983). Our contribution is to modify the EM steps to the case of missing data and to show how to augment the model, in order to account for the serial correlation of the idiosyncratic component. In addition, we derive the link between the unexpected part of a data release and the forecast revision and illustrate how this can be used to understand the sources of the latter in the case of simultaneous releases. We use this methodology for short-term forecasting and backdating of the euro area GDP on the basis of a large panel of monthly and quarterly data. In particular, we are able to examine the effect of quarterly variables and short history monthly series like the Purchasing Managers' surveys on the forecast. The third chapter is entitled “Large Bayesian VARs” and is based on joint work with Domenico Giannone and Lucrezia Reichlin. It proposes an alternative approach to factor models for dealing with the curse of dimensionality, namely Bayesian shrinkage. We study Vector Autoregressions (VARs) which have the advantage over factor models in that they allow structural analysis in a natural way. We consider systems including more than 100 variables. This is the first application in the literature to estimate a VAR of this size. Apart from the forecast considerations, as argued above, the size of the information set can be also relevant for the structural analysis, see e.g. Bernanke, Boivin and Eliasz (2005), Giannone and Reichlin (2006) or Christiano, Eichenbaum and Evans (1999) for a discussion. In addition, many problems may require the study of the dynamics of many variables: many countries, sectors or regions. 
While we use standard priors as proposed by Litterman (1986), an important novelty of the work is that we set the overall tightness of the prior in relation to the model size. In this we follow the recommendation by De Mol, Giannone and Reichlin (2008) who study the case of Bayesian regressions. They show that with increasing size of the model one should shrink more to avoid overfitting, but when data are collinear one is still able to extract the relevant sample information. We apply this principle in the case of VARs. We compare the large model with smaller systems in terms of forecasting performance and structural analysis of the effect of monetary policy shock. The results show that a standard Bayesian VAR model is an appropriate tool for large panels of data once the degree of shrinkage is set in relation to the model size. The fourth chapter entitled "Forecasting euro area inflation with wavelets: extracting information from real activity and money at different scales" proposes a framework for exploiting relationships between variables at different frequency bands in the context of forecasting. This work is motivated by the on-going debate whether money provides a reliable signal for the future price developments. The empirical evidence on the leading role of money for inflation in an out-of-sample forecast framework is not very strong, see e.g. Lenza (2006) or Fisher, Lenza, Pill and Reichlin (2008). At the same time, e.g. Gerlach (2003) or Assenmacher-Wesche and Gerlach (2007, 2008) argue that money and output could affect prices at different frequencies, however their analysis is performed in-sample. In this chapter, we investigate empirically which frequency bands, and of which variables, are most relevant for the out-of-sample forecast of inflation when information from prices, money and real activity is considered. To extract the different frequency components of a series, a wavelet transform is applied.
It provides a simple and intuitive framework for band-pass filtering and allows a decomposition of series into different frequency bands. Its application in the multivariate out-of-sample forecast is novel in the literature. The results indicate that, indeed, different scales of money, prices and GDP can be relevant for the inflation forecast.
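The band-decomposition idea can be sketched with a simple additive (a-trous-style, Haar-smoother) multiresolution analysis: the series splits into detail bands plus a smooth trend that sum back exactly to the original, and each band can then feed its own predictor. This is an illustrative decomposition, not the chapter's exact wavelet transform.

```python
import numpy as np

def haar_bands(x, levels=3):
    """Additive multiresolution decomposition using a causal Haar smoother
    (a-trous style): returns detail bands plus a final smooth trend that
    sum back to the original series exactly (by telescoping)."""
    smooth = np.asarray(x, dtype=float)
    bands = []
    for j in range(levels):
        lag = 2 ** j
        lagged = np.concatenate([smooth[:lag], smooth[:-lag]])  # edge held
        next_smooth = 0.5 * (smooth + lagged)
        bands.append(smooth - next_smooth)   # fluctuations at scale ~2**j
        smooth = next_smooth
    return bands, smooth

rng = np.random.default_rng(2)
x = np.cumsum(rng.standard_normal(200))      # a persistent, trending series
bands, trend = haar_bands(x, levels=3)
# Each band (and the trend) could now enter its own forecasting regression,
# e.g. inflation regressed on the low-frequency band of money growth.
```

Because the decomposition is causal (each smooth uses only current and past values), it can be applied in a real-time out-of-sample forecasting exercise without look-ahead.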
