41

PLANNING AND SCHEDULING OF CONTINUOUS PROCESSES VIA INVENTORY PINCH DECOMPOSITION AND GLOBAL OPTIMIZATION ALGORITHMS / INVENTORY PINCH DECOMPOSITION AND GLOBAL OPTIMIZATION METHODS

Castillo Castillo, Pedro Alejandro January 2020 (has links)
Ph. D. Thesis / In order to compute more realistic production plans and schedules, techniques using nonlinear programming (NLP) and mixed-integer nonlinear programming (MINLP) have gathered considerable attention from industry and academia. Efficient solution of these problems to proven ε-global optimality remains a challenge due to their combinatorial, nonconvex, and large-scale nature. The key contributions of this work are: 1) the generalization of the inventory pinch decomposition method to scheduling problems, and 2) the development of a deterministic global optimization method. An inventory pinch is a point at which the cumulative total demand touches its concave envelope. Inventory pinch points delineate time intervals where a single fixed set of operating conditions is most likely to be feasible and close to the optimum. The inventory pinch method decomposes the original problem into three levels. The first deals with the nonlinearities, while subsequent levels involve only linear terms, fixing part of the solution from previous levels. In this heuristic method, infeasibilities (detected via positive values of slack variables) are eliminated by adding, at the first level, new period boundaries at the points in time where the infeasibilities are detected. The global optimization algorithm presented in this work utilizes both piecewise McCormick relaxation (PMCR) and normalized multiparametric disaggregation (NMDT), and employs a dynamic partitioning strategy to refine the estimates of the global optimum. Another key element is its parallelized bound-tightening procedure. Case studies include gasoline blend planning and scheduling, and refinery planning. Both the inventory pinch method and the global optimization algorithm show promising results, performing better than or on par with other published techniques and commercial solvers in the test cases solved during the course of this work.
/ Thesis / Doctor of Philosophy (PhD) / Optimal planning and scheduling of production systems are two very important tasks in industrial practice. Their objective is to ensure optimal utilization of raw materials and equipment to reduce production costs. In order to compute realistic production plans and schedules, it is often necessary to replace simplified linear models with nonlinear ones that include discrete decisions (e.g., "yes/no", "on/off"). Computing a globally optimal solution for this type of problem in reasonable time is a challenge due to its intrinsic nonlinear and combinatorial nature. The main goal of this thesis is the development of efficient algorithms to solve large-scale planning and scheduling problems. The key contributions of this work are the development of: i) a heuristic technique to compute near-optimal solutions rapidly, and ii) a deterministic global optimization algorithm. Both approaches showed results and performance better than or equal to those obtained by commercial software and previously published methods.
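The pinch concept above admits a compact illustration: the concave envelope of cumulative demand is its upper convex hull, and pinch points are the times where the demand curve touches that hull. The sketch below is purely illustrative and not taken from the thesis; the function names and the hull-based construction are assumptions.

```python
# Hypothetical sketch: locate inventory pinch points, i.e. times where the
# cumulative total demand touches its concave (upper) envelope.
def concave_envelope(t, y):
    """Indices of points on the upper concave hull (monotone-chain scan).

    Points with cross == 0 (collinear with the envelope) are kept, since a
    touch of the envelope counts as a pinch.
    """
    hull = []
    for i in range(len(t)):
        # Pop the last hull point while it lies strictly below the chord
        # from hull[-2] to the new point i.
        while len(hull) >= 2:
            i1, i2 = hull[-2], hull[-1]
            cross = (t[i2] - t[i1]) * (y[i] - y[i1]) \
                  - (y[i2] - y[i1]) * (t[i] - t[i1])
            if cross > 0:
                hull.pop()
            else:
                break
        hull.append(i)
    return hull

def pinch_points(t, y):
    """Times at which cumulative demand coincides with its concave envelope."""
    return [t[i] for i in concave_envelope(t, y)]
```

With demand that dips below its envelope in one period, only the touch points survive, which is what delineates the decomposition intervals described in the abstract.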
42

Electrical Load Disaggregation and Demand Response in Commercial Buildings

Rahman, Imran 28 January 2020 (has links)
Electrical power systems consist of a large number of power generators connected to consumers through a complex system of transmission and distribution lines. Within the electric grid, a continuous balance between generation and consumption of electricity must be maintained, ensuring stable operation of the grid. In recent decades, due to increasing electricity demand, electrical power systems have become more likely to experience stress conditions. These conditions lead to limited supply and cascading failures throughout the grid that can cause wide-area outages. Demand Response (DR) is a method involving the curtailment of loads during critical peak-load hours that restores the balance between demand and supply of electricity. In order to implement DR and ensure efficient energy operation of buildings, detailed energy monitoring is essential. This information can then be used for energy management, by monitoring the power consumption of devices and giving users detailed feedback at an individual device level. Based on data from the Energy Information Administration (EIA), approximately half of all commercial buildings in the U.S. are 5,000 square feet or smaller, while the majority of the rest are medium-sized commercial buildings between 5,001 and 50,000 square feet. Given that these medium-sized buildings account for a large portion of total energy demand, they are an ideal target for participating in DR. In this dissertation, two broad solutions for commercial building DR are presented. The first is a load disaggregation technique that separates the power of individual HVACs using machine learning classification, where a single power meter collects the aggregated HVAC power data of a building.
This method is then tested over a number of case studies, which show that the aggregated power data can be disaggregated to accurately predict the power consumption and activity state of individual HVAC loads. The second work focuses on a DR algorithm that determines an optimal bid price for double auctioning between the user and the electric utility, together with a load scheduling algorithm that controls single-floor HVAC and lighting loads in a commercial building while considering user preferences and load priorities. A number of case studies are carried out, which show that the algorithm can effectively control loads within a given demand limit while efficiently maintaining user preferences across a number of load configurations and scenarios. The major contributions of this work are therefore: a novel HVAC power disaggregation technique using machine learning methods, and a DR algorithm for HVAC and lighting load control that incorporates user preferences and load priorities based on a double-auction approach. / Doctor of Philosophy / Electrical power systems consist of a large number of power generators connected to consumers through a complex system of transmission and distribution lines. Within the electric grid, a continuous balance between generation and consumption of electricity must be maintained, ensuring stable operation of the grid. When electricity demand is high, Demand Response (DR) is a method that can be used to reduce user loads, restoring the balance between demand and supply of electricity. Based on data from the Energy Information Administration (EIA), half of all commercial buildings in the US measure 5,000 square feet or smaller, while the majority of the other half are medium-sized commercial buildings between 5,001 and 50,000 square feet. This makes these commercial buildings an ideal target for participating in DR.
In this dissertation, two broad solutions for commercial building DR are presented. The first is a load disaggregation technique that obtains the power consumption and activity of individual HVACs using a single power meter. The second work focuses on a DR algorithm that controls single-floor HVAC and lighting loads in a commercial building, based on a user-generated bid price for electricity, user preferences, and load priorities, when electricity demand is at its peak.
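The disaggregation step can be caricatured with a signature-matching toy: given the rated draw of each HVAC unit, match an aggregate meter reading to the closest on/off combination. This is a stand-in of my own, not the machine-learning classifiers the dissertation actually uses; names and rated values are illustrative.

```python
from itertools import product

# Illustrative sketch of HVAC load disaggregation from a single meter:
# enumerate on/off combinations of units with known rated draws and pick
# the combination whose total is closest to the aggregate reading.
def disaggregate(aggregate_watts, rated_watts):
    """Return the on/off state tuple whose summed draw best matches the meter."""
    best_states, best_err = None, float("inf")
    for states in product((0, 1), repeat=len(rated_watts)):
        total = sum(s * w for s, w in zip(states, rated_watts))
        err = abs(aggregate_watts - total)
        if err < best_err:
            best_states, best_err = states, err
    return best_states
```

Real disaggregation must cope with noise, overlapping signatures, and time dynamics, which is why the dissertation turns to learned classifiers instead of exhaustive matching.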
43

Numerical Methods for the Chemical Master Equation

Zhang, Jingwei 20 January 2010 (has links)
The chemical master equation, formulated on the Markov assumption of the underlying chemical kinetics, offers an accurate stochastic description of general chemical reaction systems on the mesoscopic scale. The chemical master equation is especially useful when formulating mathematical models of gene regulatory networks and protein-protein interaction networks, where the numbers of molecules of most species are in the tens or hundreds. However, solving the master equation directly suffers from the so-called "curse of dimensionality". This thesis first studies the numerical properties of the master equation using existing numerical methods and parallel machines. Next, approximation algorithms, namely the adaptive aggregation method and the radial basis function collocation method, are proposed as new paths to resolve the "curse of dimensionality". Several numerical results are presented to illustrate the promise and potential problems of these new algorithms. Comparisons with other numerical methods, such as Monte Carlo methods, are also included. Development and analysis of the linear Shepard algorithm and its variants, all of which can be used for high-dimensional scattered data interpolation problems, are also included as candidates to help solve the master equation by building surrogate models in high dimensions. / Ph. D.
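For a single-species birth-death process (production rate k, degradation rate g·n), the master equation can be integrated directly on a truncated state space, which is the kind of direct solve whose cost explodes with species count. The sketch below uses forward Euler and is illustrative only; the truncation bound, step size, and function name are my assumptions, not the thesis's methods.

```python
# Minimal direct integration of a truncated chemical master equation for a
# birth-death process: dP(n)/dt = k*P(n-1) + g*(n+1)*P(n+1) - (k + g*n)*P(n),
# with births out of the top state N suppressed so probability is conserved.
def cme_birth_death(k, g, N, t_end, dt):
    p = [0.0] * (N + 1)
    p[0] = 1.0                                  # start with zero molecules
    for _ in range(int(t_end / dt)):
        dp = [0.0] * (N + 1)
        for n in range(N + 1):
            out = (k if n < N else 0.0) + g * n  # total rate of leaving state n
            dp[n] -= out * p[n]
            if n > 0:
                dp[n] += k * p[n - 1]            # birth into state n
            if n < N:
                dp[n] += g * (n + 1) * p[n + 1]  # death into state n
        p = [pi + dt * dpi for pi, dpi in zip(p, dp)]
    return p
```

At stationarity the distribution is Poisson with mean k/g, which gives a quick sanity check on the integrator; for many interacting species the state vector, and hence this loop, grows exponentially, which is exactly the curse of dimensionality the thesis targets.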
44

Modelování charakteristik obyvatelstva z topografických dat / Modeling population with topographic data

Šimbera, Jan January 2016 (has links)
Accurate spatial population data are an important requirement in many applications. This thesis studies the problem of disaggregating the spatial distribution of population density and rent costs using a machine learning model. An approach based on freely available ancillary data such as OpenStreetMap and Urban Atlas is proposed and implemented in the form of an automated Python toolbox for ArcGIS. Applications to the urban areas of Prague, Vienna, and Ljubljana show promising results, outperforming competing population disaggregation solutions in spatial resolution and displaying a satisfying degree of transferability. A number of further improvements are suggested.
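The core dasymetric step that such a toolbox automates can be shown in a few lines: spread a known regional total across grid cells in proportion to an ancillary weight (for example, residential land area from Urban Atlas). The function name and the even-spread fallback are my illustrative choices, not the toolbox's actual API.

```python
# Toy dasymetric disaggregation: allocate a regional population total to
# cells proportionally to ancillary weights (e.g. residential land area).
def dasymetric_disaggregate(region_total, cell_weights):
    """Allocate region_total across cells in proportion to cell_weights."""
    total_w = sum(cell_weights)
    if total_w == 0:                    # no ancillary signal: spread evenly
        return [region_total / len(cell_weights)] * len(cell_weights)
    return [region_total * w / total_w for w in cell_weights]
```

The machine-learning part of the thesis effectively replaces the hand-chosen weights with learned ones, while this proportional allocation keeps the cell values consistent with the regional total.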
45

Predicting future spatial distributions of population and employment for South East Queensland – a spatial disaggregation approach

Tiebei Li Unknown Date (has links)
The spatial distribution of future population and employment has become a focus of recent academic enquiry and planning policy concerns. This is largely driven by the rapid urban expansion in major Australian cities and the need to plan ahead for new housing growth and demand for urban infrastructure and services. At a national level, forecasts for population and employment are produced by the government and research institutions; however, there is a further need to break these forecasts down to a disaggregate geographic scale for growth management within regions. Appropriate planning for urban growth requires forecasts for fine-grained spatial units. This thesis develops methodologies to predict the future settlement of population, employment, and urban form by applying a spatial disaggregation approach. The methodology takes the existing regional forecasts, reported at regional geographic units, and applies a novel spatially-based technique to step these forecasts down to smaller geographical units. South East Queensland (SEQ), one of the fastest-growing metropolitan regions in Australia, is the experimental context for the methodologies developed in the thesis. The research examines how spatial disaggregation methodologies can be used to enhance the forecasts for urban planning purposes and to derive a deeper understanding of urban spatial structure under growth conditions. The first part of this thesis develops a method by which the SEQ population forecasts can be spatially disaggregated. This relates to a classical problem in geographical analysis, the modifiable areal unit problem, where spatial data disaggregation may give inaccurate results due to spatial heterogeneity in the explanatory variables. Several statistical regression and dasymetric techniques are evaluated to spatially disaggregate population forecasts over the study area and to assess their relative accuracies.
Important contributions arising from this research are that: i) it extends the dasymetric method beyond its current simple form to techniques that incorporate more complex density assumptions to disaggregate the data, and ii) it selects a method based on balancing the costs and errors of the disaggregation for a study area. The outputs of the method are spatially disaggregated population forecasts for the smaller areas that can be used directly for urban form analysis and are also available for subsequent employment disaggregation. The second part of this thesis develops a method to spatially disaggregate the employment forecasts and examine their impact on urban form. A new method for spatially disaggregating the employment data is evaluated; it analyses the trend and spatial pattern of historic regional employment based on employment determinants (for example, the local population and the proximity of an area to a shopping centre). The method applied, geographically weighted regression (GWR), accounts for the spatial effects of data autocorrelation and heterogeneity. Autocorrelation is where certain variables for employment determinants are related in space, and hence violate traditional statistical independence assumptions; heterogeneity is where the associations between variables change across space. The method uses a locally-fitted relationship to estimate employment in the smaller geography whilst being constrained by the regional forecast. Results show that, by accounting for spatial heterogeneity in the local dependency of employment, the GWR method generates superior estimates over a global regression model. The spatially disaggregated projections developed in this thesis can be used to better understand questions of urban form.
From a planning perspective, the results of the spatial disaggregation indicate that future population growth in SEQ is likely to maintain a spatially-dispersed pattern, whilst employment is likely to follow a more polycentric distribution focused around the new activity centres. Overall, the thesis demonstrates that the spatial disaggregation method can be applied to supplement regional forecasts and gain a deeper understanding of future urban growth patterns. The development, application, and validation of the spatial disaggregation methods will enhance the planner's toolbox whilst responding to the data issues that arise in informing urban planning and future development in a region.
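The essence of GWR, coefficients that vary across space because each location gets its own kernel-weighted regression, can be sketched for a single predictor. This is a minimal illustration under my own simplifications (one predictor, Gaussian kernel, closed-form weighted least squares), not the thesis's implementation.

```python
import math

# Minimal one-predictor geographically weighted regression (GWR): at a
# target location, fit weighted least squares with a Gaussian distance
# kernel, so the fitted coefficients vary smoothly across space.
def gwr_coefficients(coords, x, y, target, bandwidth):
    """Return (intercept, slope) of the locally weighted fit at `target`."""
    w = [math.exp(-((cx - target[0]) ** 2 + (cy - target[1]) ** 2)
                  / (2 * bandwidth ** 2)) for cx, cy in coords]
    sw = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, x)) / sw   # weighted means
    my = sum(wi * yi for wi, yi in zip(w, y)) / sw
    sxx = sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x))
    sxy = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y))
    slope = sxy / sxx
    return my - slope * mx, slope
```

Refitting at every target location is what lets the employment-determinant relationships change across space, the heterogeneity the abstract describes; a global regression would return one fixed pair of coefficients instead.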
46

Spatio-temporal Crime Prediction Model Based On Analysis Of Crime Clusters

Polat, Esra 01 September 2007 (has links) (PDF)
Crime is a behavioral disorder that is an integrated result of social, economic, and environmental factors. Crime analysis is gaining significance worldwide, and one of its most popular subjects is crime prediction. Stakeholders seek to forecast the place, time, number, and types of crimes in order to take precautions. To that end, this thesis develops a spatio-temporal crime prediction model using time series forecasting with a simple spatial disaggregation approach in Geographical Information Systems (GIS). The model is built on crime data for the year 2003 in the Bahçelievler and Merkez Çankaya police precincts. The methodology starts by obtaining clusters with different clustering algorithms. The clustering methods are then compared in terms of land use and representation to select the most appropriate algorithms. The crime data are then divided into daily epochs to observe the spatio-temporal distribution of crime. To predict crime in the time dimension, a time series model (ARIMA) is fitted for each weekday; the forecasted crime occurrences are then disaggregated according to the spatial crime cluster patterns. The model proposed in this thesis can therefore give crime predictions in both space and time to help police departments in tactical and planning operations.
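The two-step idea, temporal forecasting followed by spatial disaggregation, can be caricatured with a first-order autoregression standing in for the per-weekday ARIMA models. The AR(1) simplification and the function names are mine, not the thesis's.

```python
# Sketch of the thesis's two-step scheme: forecast citywide daily crime
# counts with a least-squares AR(1) fit, then split the forecast across
# spatial clusters using their historical shares.
def ar1_forecast(series):
    """One-step forecast y[t+1] = a + b*y[t], with (a, b) fitted by least squares."""
    x, y = series[:-1], series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return a + b * series[-1]

def spatial_disaggregate(total_forecast, cluster_counts):
    """Split the citywide forecast by each cluster's historical share."""
    total = sum(cluster_counts)
    return [total_forecast * c / total for c in cluster_counts]
```

A production model would fit a full ARIMA per weekday and validate the cluster shares over time, but the disaggregation arithmetic is exactly this proportional split.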
47

The Effects of Item Complexity and the Method Used to Present a Complex Item on the Face of a Financial Statement on Nonprofessional Investors' Judgments

Ragland, Linda Gale 01 January 2011 (has links)
My study is motivated by standard setters' interest in better understanding (and the gap in research on) the effects of item complexity and disaggregation across a financial statement on users' decision processes (Bonner 2008; Glaum 2009; FASB 2010b). I examine whether the complexity of an item and the method used to present the item on a financial statement influence nonprofessional investors' judgments. Specifically, I examine two issues raised concerning IAS 19 Employee Benefits. The first is whether there are differences in nonprofessional investors' judgments when the individual components of a complex item (defined pension cost) are disaggregated across a financial statement (the statement of comprehensive income) versus when they are aggregated on the face of the same statement. Differences may arise because disaggregation across a statement provides information about how an item relates to different economic events, and this information could help nonprofessional investors better interpret and use the information in judgments. A second objective is to examine whether increasing the complexity of an already complex item affects the usefulness of the information. I find that nonprofessional investors weigh higher levels of item complexity in certain judgments. Additionally, I find that when a complex item (defined pension cost) is disaggregated across a financial statement (the statement of comprehensive income), nonprofessional investors are able to acquire more information about the item and to understand its function more accurately. This, in turn, helps nonprofessional investors decide whether the information is useful in certain judgments.
48

Desagregação e pesos estocásticos em projeções de agregados econômicos: uma análise para o PIB brasileiro

Souza, Rafael Keiti Oiski Grunho de 03 February 2015 (has links)
The present study compares and combines different forecasting techniques for Brazilian quarterly GDP from 1991 to the second quarter of 2014, using aggregated data and disaggregated data with fixed and stochastic weights. The disaggregated univariate and multivariate models, as well as the stochastic weights, were estimated with the Autometrics algorithm of Doornik (2009), using the disaggregation levels published by IBGE in the System of National Accounts. The aggregate models were estimated by Autometrics, Markov-switching, and state-space structural models. Forecasts were compared using the Model Confidence Set, developed by Hansen, Lunde, and Nason (2011). Two simulations were conducted, the first with the out-of-sample analysis starting in 2008 and the second starting in 2000, with forecast horizons of up to six steps ahead. The results suggest that the disaggregated models with fixed weights perform better at the first two steps, while at the remaining horizons the aggregate models provide superior forecasts.
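The aggregated-versus-disaggregated comparison boils down to two forecast paths scored on the same target: forecast GDP directly, or sum per-component forecasts bottom-up, then compare out-of-sample errors. The sketch below shows only that scoring arithmetic; the Model Confidence Set procedure itself is beyond its scope, and the function names are mine.

```python
# Illustrative scoring of direct vs. bottom-up GDP forecasts by
# out-of-sample mean squared error.
def mse(forecasts, actuals):
    """Mean squared forecast error over matched horizons."""
    return sum((f - a) ** 2 for f, a in zip(forecasts, actuals)) / len(actuals)

def bottom_up(component_forecasts):
    """Aggregate forecast: sum the per-component forecasts at each horizon."""
    return [sum(step) for step in zip(*component_forecasts)]
```

Given lists of per-component forecasts and the realized aggregate series, `mse(bottom_up(components), actuals)` versus `mse(direct, actuals)` reproduces the horizon-by-horizon comparison the study formalizes with the MCS test.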
49

Efeitos ecotoxicológicos das nanopartículas de dióxido de titânio sobre a alga Pseudokirchneriella Subcapitata e sobre o Cladócero Ceriodaphnia Silvestrii por diferentes vias de exposição / Ecotoxicological effects of the titanium dioxide nanoparticles on the algae Pseudokirchneriella subcapitata and on the cladoceran Ceriodaphnia silvestrii by different exposure routes

Lucca, Gisele Maria de 10 June 2016 (has links)
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) / Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) / Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP) / In recent years, the increased use of titanium dioxide nanoparticles (TiO2 NPs) in consumer products and technological devices has raised concerns about their environmental impacts and their risks to human health. Ecotoxicological studies have been used as a tool to analyze the toxic potential of TiO2 NPs at different trophic levels, such as primary producers (algae) and first-order consumers (cladocerans). In the present study, the chronic effects of TiO2 NPs on the population growth of the chlorophycean microalga Pseudokirchneriella subcapitata were evaluated over an exposure period of 96 hours, under temperature and photoperiod conditions similar to those found in tropical ecosystems. New methods were developed to separate the aggregates formed between algal cells and TiO2 NPs, whose formation was observed at concentrations above 0.01 mg L-1 after 96 hours of exposure. The only effective method was washing the cells three times with a metal chelator (EDTA), for 1 minute per wash. In the chronic toxicity test, significant inhibition of algal growth was obtained from a concentration of 64 mg L-1 of TiO2 NPs upward, with a 96-h IC50 (concentration inhibiting growth of 50% of algal cells) of 201.22 mg L-1. Next, the acute effects of exposure by contact and the chronic effects of TiO2 NPs were evaluated for the cladoceran Ceriodaphnia silvestrii, using contaminated food (Pseudokirchneriella subcapitata) as the route of exposure. In acute toxicity tests, a mean 48-h EC50 of 77.57 mg L-1 was obtained. In chronic toxicity tests, significant differences in survival were observed from a concentration of 0.01 mg L-1 upward, while for body length and the numbers of eggs and neonates produced, toxic effects were observed from 1 mg L-1 upward. These results indicate that TiO2 NPs had a deleterious effect on the growth of Pseudokirchneriella subcapitata only at concentrations above those normally found in natural aquatic environments. For the cladoceran Ceriodaphnia silvestrii, the dietary exposure route showed the greater toxic effect. / CNPq: 305698/2013-30 / FAPESP: 2014/14139-3 and 2016/00753-7
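A value like the reported 96-h IC50 of 201.22 mg/L is typically obtained by fitting or interpolating a concentration-response curve. The sketch below shows a simple log-linear interpolation between the two tested concentrations that bracket 50% inhibition; it is an illustration of the general idea, not the study's actual statistical method, and the data in the test are invented.

```python
import math

# Illustrative IC50 estimation: interpolate on log10(concentration) between
# the two tested doses that bracket 50% inhibition.
def ic50(concentrations, inhibition_pct):
    """Dose giving 50% inhibition, by log-linear interpolation."""
    pts = list(zip(concentrations, inhibition_pct))
    for (c0, i0), (c1, i1) in zip(pts, pts[1:]):
        if i0 <= 50.0 <= i1:                          # bracketing pair found
            frac = (50.0 - i0) / (i1 - i0)
            return 10 ** (math.log10(c0)
                          + frac * (math.log10(c1) - math.log10(c0)))
    raise ValueError("50% inhibition not bracketed by the tested concentrations")
```

Dose-response work usually prefers a fitted model (e.g. log-logistic regression) over raw interpolation, but the bracketing logic above is the core of both.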
50

Previsão de inflação no Brasil utilizando desagregação por componentes de localidade

Lorande, Marcelo Schiller 17 August 2018 (has links)
This work proposes disaggregating Brazil's inflation index by locality components as a way to improve the predictive performance of econometric models. Autoregressive models with and without explanatory macroeconomic variables were developed to evaluate how the disaggregation affects each of them. In addition, two statistical tests were used to compare forecast performance across models, the Model Confidence Set and the Superior Predictive Ability test. At short horizons, up to 3 months, first-order autoregressive models showed unbeatable performance, whereas at longer horizons macroeconomic and disaggregated models generated statistically superior forecasts.
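Unlike the bottom-up GDP case above, a locality-disaggregated price index recombines component forecasts with the index's locality weights rather than a plain sum. The one-liner below shows that recombination step only; the naive inputs in the test and the function name are my illustrative assumptions.

```python
# Illustrative recombination of per-locality inflation forecasts into a
# headline forecast, using the index's locality weights.
def combine_locality_forecasts(locality_forecasts, weights):
    """Weighted average of per-locality forecasts using index weights."""
    total_w = sum(weights)
    return sum(f * w for f, w in zip(locality_forecasts, weights)) / total_w
```

Each locality series gets its own model (autoregressive, with or without macro variables), and this weighted average is what gets scored against the direct headline-index forecast.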
