11 |
ASIC implementation of LSTM neural network algorithm. Paschou, Michail (January 2018)
LSTM neural networks have been used for speech recognition, image recognition, and other artificial intelligence applications for many years. Most applications run the LSTM algorithm and its required calculations on cloud computers. Offline solutions include FPGAs and GPUs, but the most promising are ASIC accelerators designed for this purpose alone. This report presents an ASIC design capable of performing multiple iterations of the LSTM algorithm on a unidirectional neural network architecture without peepholes. The proposed design offers arithmetic-level parallelism options, as blocks are instantiated based on parameters. The internal structure of the design implements pipelined, parallel, or serial solutions depending on which is optimal in each case; the implications of these decisions are discussed in detail in the report. The design process is described in detail, and an evaluation of the design is presented to measure the accuracy and error of the design output. The thesis work resulted in a complete synthesizable ASIC design implementing an LSTM layer, a fully connected layer, and a Softmax layer, which together can classify data based on trained weight matrices and bias vectors. The design primarily uses a 16-bit fixed-point format with 5 integer and 11 fractional bits, but increased-precision representations are used in some blocks to reduce output error. Additionally, a verification environment was designed that can run simulations, evaluate the design output by comparing it with results produced by performing the same operations in 64-bit floating-point precision on a SystemVerilog testbench, and measure the resulting error. Results concerning the accuracy and the error margin of the design output are presented. The design went through logic and physical synthesis and produced a functional netlist for every tested configuration. Timing, area, and power measurements on the generated netlists of various configurations show consistency and are reported.
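As an illustration of the numeric format described above (not the thesis's RTL), here is a minimal Python sketch of quantizing values to the stated 16-bit fixed-point format with 5 integer and 11 fractional bits, and measuring the quantization error against a 64-bit floating-point reference:

```python
import math

FRAC_BITS = 11            # fractional bits in the Q5.11 format
SCALE = 1 << FRAC_BITS    # 2**11 = 2048 quantization steps per unit
MAX_RAW = (1 << 15) - 1   # largest signed 16-bit raw value
MIN_RAW = -(1 << 15)      # smallest signed 16-bit raw value

def to_fixed(x: float) -> int:
    """Quantize a float to a signed 16-bit Q5.11 raw integer, saturating."""
    raw = int(round(x * SCALE))
    return max(MIN_RAW, min(MAX_RAW, raw))

def to_float(raw: int) -> float:
    """Recover the real value represented by a Q5.11 raw integer."""
    return raw / SCALE

# Error of representing a typical LSTM activation value:
x = math.tanh(1.2345)          # reference in 64-bit floating point
q = to_float(to_fixed(x))      # value after Q5.11 quantization
print(f"reference={x:.10f} quantized={q:.10f} error={abs(x - q):.2e}")
# Worst-case rounding error is half a step: 1 / (2 * 2048) ≈ 2.4e-4.
```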
|
12 |
Model Predictive Control Used for Optimal Heating in Commercial Buildings. Rubin, Fredrik (January 2021)
Model Predictive Control (MPC) is an optimization method used in a wide range of applications, but in the housing sector its use is still limited. This project examines and evaluates the possibility of using an easily scalable MPC controller to optimize the heating of a building. The controller combines a Long Short-Term Memory (LSTM) network, which learns the dynamics of the building in order to predict future indoor temperatures, with the probabilistic technique Simulated Annealing (SA), which solves the control problem. As an extension, predicted hourly energy prices are added, with the goal of lowering heating costs. The model is tested on an eight-room family house that is centrally heated with gas. The results are promising but ambiguous; the main source of uncertainty is the testing environment.
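A minimal sketch of how simulated annealing might search for a heating schedule, assuming a hypothetical cost function `predict_cost` standing in for the LSTM temperature forecast plus the energy-price forecast (the thesis does not publish its code, so all names and the cost model below are illustrative):

```python
import math
import random

def predict_cost(schedule: list[float]) -> float:
    """Hypothetical stand-in for the LSTM + price model: cost of a
    24-hour heating power schedule plus a comfort penalty."""
    prices = [0.5 + 0.3 * math.sin(h / 24 * 2 * math.pi) for h in range(24)]
    energy_cost = sum(p * q for p, q in zip(prices, schedule))
    comfort_penalty = sum(max(0.0, 1.0 - q) ** 2 for q in schedule)
    return energy_cost + 2.0 * comfort_penalty

def simulated_annealing(steps: int = 5000, temp0: float = 1.0) -> list[float]:
    schedule = [1.0] * 24                      # initial heating power per hour
    cost = predict_cost(schedule)
    best, best_cost = schedule[:], cost
    for i in range(steps):
        t = temp0 * (1 - i / steps)            # linear cooling schedule
        cand = schedule[:]
        cand[random.randrange(24)] += random.uniform(-0.2, 0.2)
        cand_cost = predict_cost(cand)
        # Always accept improvements; accept worse moves with Boltzmann probability.
        if cand_cost < cost or random.random() < math.exp(-(cand_cost - cost) / max(t, 1e-9)):
            schedule, cost = cand, cand_cost
            if cost < best_cost:
                best, best_cost = schedule[:], cost
    return best

print(simulated_annealing()[:6])  # first six hours of the optimized schedule
```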
|
13 |
Anomaly Detection for Root Cause Analysis in System Logs using Long Short-Term Memory / Anomalidetektion för Grundorsaksanalys i Loggar från Mjukvara med hjälp av Long Short-Term Memory. von Hacht, Johan (January 2021)
Many software systems are under test to ensure that they function as expected. Sometimes a test fails, and in that case it is essential to understand the cause of the failure. However, as systems grow larger and more complex, this task can become non-trivial and time-consuming. Automating the process of root cause analysis, even partially, can therefore save developers time. This thesis investigates the use of a Long Short-Term Memory (LSTM) anomaly detector on system logs for root cause analysis. The implementation is evaluated in one quantitative and one qualitative experiment. The quantitative experiment evaluates the performance of the anomaly detector in terms of precision, recall, and F1 measure; since the data has no labels, anomaly injection is used to measure these metrics, and the LSTM is compared with a baseline model. The qualitative experiment evaluates how effective the anomaly detector could be for root cause analysis of test failures, through interviews with an expert on the software system that produced the log data used in the thesis. The results show that the LSTM anomaly detector achieved a higher F1 measure than the proposed baseline implementation, thanks to its ability to detect unusual events and events happening out of order. The qualitative results indicate that the anomaly detector could be used for root cause analysis: in many of the evaluated test failures, the interviewed expert could deduce the cause of the failure. Even when the detector did not find the exact issue, it might highlight a particular part of the software as producing many anomalous log messages; with this information, the expert could contact the people responsible for that part of the application for help. In conclusion, the anomaly detector automatically collects the information the expert needs to perform root cause analysis, and could thus save the expert time. With further improvements, non-experts might also be able to use the anomaly detector, reducing the need for an expert.
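To make the evaluation protocol concrete, here is a small sketch (illustrative, not the thesis's code) of computing precision, recall, and F1 when ground truth comes from injected anomalies:

```python
def evaluate(flagged: set[int], injected: set[int]) -> dict[str, float]:
    """Score a detector that flagged certain log-line indices, given the
    indices where anomalies were artificially injected."""
    tp = len(flagged & injected)   # injected anomalies that were caught
    fp = len(flagged - injected)   # normal lines wrongly flagged
    fn = len(injected - flagged)   # injected anomalies that were missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"precision": precision, "recall": recall, "f1": f1}

# Example: 3 of 4 injected anomalies caught, with 1 false alarm.
print(evaluate(flagged={10, 55, 300, 999}, injected={10, 55, 300, 700}))
# {'precision': 0.75, 'recall': 0.75, 'f1': 0.75}
```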
|
14 |
Empirisk Modellering av Trafikflöden : En spatio-temporal prediktiv modellering av trafikflöden i Stockholms stad med hjälp av neurala nätverk / Empirical Modeling of Traffic Flow : A spatio-temporal prediction model of the traffic flow in Stockholm city using neural networks. Björkqvist, Niclas & Evestam, Viktor (January 2024)
A better understanding of the traffic flow in a city helps transport run smoothly, resulting in a better street environment for road users and people nearby alike. Good predictions of traffic flow help control and further develop the road network, avoiding congestion and unnecessary travel time. This study investigates three machine learning models for predicting traffic flow on different road types in urban Stockholm, using loop-sensor data from 2013 to 2023. The models were Long Short-Term Memory (LSTM), a Temporal Convolutional Network (TCN), and a hybrid of the two. The hybrid model achieved a slightly lower mean absolute error than the TCN, suggesting that a hybrid model may be advantageous when predicting traffic flow from loop-sensor data. The LSTM struggled to capture the complexity of the data and as a result was unable to produce a proper prediction. The TCN produced a mean absolute error slightly larger than the hybrid model's; it captured the trends of the traffic flow to an extent but struggled with its scale, suggesting the need for further data preprocessing. Overall, the study suggests that loop-sensor data can act as a foundation for predicting traffic flow with machine learning methods, and that improvements to the data itself, such as incorporating more related parameters, might further improve the predictions.
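The models are compared on mean absolute error; a quick sketch of that metric follows (the sensor readings and forecasts below are invented for illustration):

```python
def mean_absolute_error(actual: list[float], predicted: list[float]) -> float:
    """Average magnitude of the prediction error, in the same unit as the
    flow measurements (e.g. vehicles per hour)."""
    assert len(actual) == len(predicted)
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

# Hypothetical hourly flows from one loop sensor vs. two models' forecasts:
observed = [420.0, 610.0, 980.0, 1250.0]
hybrid = [400.0, 640.0, 950.0, 1280.0]   # tracks both trend and scale
tcn = [380.0, 560.0, 890.0, 1130.0]      # right trend, compressed scale

print("hybrid MAE:", mean_absolute_error(observed, hybrid))  # 27.5
print("tcn MAE:   ", mean_absolute_error(observed, tcn))     # 75.0
```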
|
15 |
A SENTIMENT BASED AUTOMATIC QUESTION-ANSWERING FRAMEWORK. Qiaofei Ye (14 May 2019)
With the rapid growth and maturity of the Question-Answering (QA) domain, non-factoid QA tasks are in high demand. However, existing QA systems are either fact-based or highly keyword-driven and hard-coded. Moreover, if QA is to become more personable, the sentiment of the question and answer should be taken into account. Yet little research has been done on non-factoid QA systems based on sentiment analysis, which would enable a system to retrieve answers in a more emotionally intelligent way. This study investigates to what extent the prediction of the best answer can be improved by adding an extended representation of sentiment information to non-factoid Question-Answering.
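One plausible way to fold sentiment into best-answer prediction, sketched here purely for illustration (the thesis does not specify its architecture in this abstract), is to score each candidate by relevance plus sentiment agreement with the question:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    text: str
    relevance: float   # e.g. from a retrieval/QA-matching model, in [0, 1]
    sentiment: float   # e.g. polarity from a sentiment model, in [-1, 1]

def rank_answers(question_sentiment: float, candidates: list[Candidate],
                 weight: float = 0.3) -> list[Candidate]:
    """Rank candidates by relevance, nudged toward answers whose sentiment
    matches the question's. `weight` trades relevance against sentiment fit."""
    def score(c: Candidate) -> float:
        sentiment_fit = 1.0 - abs(question_sentiment - c.sentiment) / 2.0
        return (1 - weight) * c.relevance + weight * sentiment_fit
    return sorted(candidates, key=score, reverse=True)

best = rank_answers(0.8, [
    Candidate("Glad to help! Try restarting the router.", 0.70, 0.9),
    Candidate("Read the manual.", 0.75, -0.4),
])[0]
print(best.text)  # the empathetic answer wins despite slightly lower relevance
```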
|
16 |
Ensaios em econometria financeira [Essays in Financial Econometrics]. Caldeira, João Frois (January 2010)
The traditional models for portfolio optimization based on mean-variance analysis aim to determine the portfolio weights that minimize variance for a given level of return. The covariance matrices used in the optimization are difficult to estimate, and ad hoc methods often need to be applied to limit or smooth the mean-variance-efficient allocations recommended by the model. Moreover, even when the resulting portfolio is efficient, the tracking error is not guaranteed to be stationary, so the portfolio can drift away from the benchmark and require frequent rebalancing. We use the cointegration methodology to devise two quantitative strategies, index tracking and long-short market neutral, aiming to design optimal portfolios that capture the co-movements of asset prices, using the Ibovespa index and its constituent stocks from January 2000 to December 2008. The results show that index-tracking portfolios built with cointegration replicate the benchmark's return and volatility well, while the long-short strategy generates stable, low-volatility returns under a variety of market circumstances. Modeling the term structure of interest rates is very important to macroeconomists and financial market practitioners in general. In the second essay, we use the Diebold-Li interpretation of the Nelson-Siegel model to fit and forecast the Brazilian yield curve, based on daily observations of the most liquid DI futures yields traded on the BM&F from January 2006 to February 2009. Unlike most of the literature on the Brazilian yield curve, where the Diebold-Li model is estimated by a two-step method, here the model is put in state-space form and the parameters are estimated simultaneously and efficiently with the Kalman filter. The results, for the fit and especially for the forecasts, show that the Kalman filter is the most suitable estimation method, generating better forecasts for all maturities at forecasting horizons of one, three, and six months. In the third essay, we propose estimating the dynamic Nelson-Siegel (1987) yield curve model under two alternative specifications. In the first, we treat the factor loadings as time-varying and model the conditional heteroskedasticity through a stochastic volatility model with common factors. In the second, the latent factors individually follow autoregressive processes with stochastic volatility. These volatility factors seek to capture the uncertainty over time associated with the level, slope, and curvature of the yield curve. Estimation is performed through Bayesian inference via Markov Chain Monte Carlo. The volatility factors turn out to be highly persistent, supporting the stylized fact that shocks to interest rate volatility are highly persistent, and the use of stochastic volatility structures leads to better in-sample fits of the observed yield curve.
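For context, the Nelson-Siegel curve that the Diebold-Li formulation makes dynamic expresses the yield at maturity τ as a combination of level, slope, and curvature factors. A minimal sketch of the factor loadings in their standard textbook form (not code from the thesis; the factor values below are invented):

```python
import math

def nelson_siegel_loadings(tau: float, lam: float = 0.0609) -> tuple[float, float, float]:
    """Loadings on (level, slope, curvature) at maturity tau in months.
    lam = 0.0609 is the decay parameter Diebold and Li fix in their paper."""
    slope = (1 - math.exp(-lam * tau)) / (lam * tau)
    curvature = slope - math.exp(-lam * tau)
    return 1.0, slope, curvature

def yield_at(tau: float, beta: tuple[float, float, float]) -> float:
    """y(tau) = b0 * level_loading + b1 * slope_loading + b2 * curvature_loading."""
    l0, l1, l2 = nelson_siegel_loadings(tau)
    return beta[0] * l0 + beta[1] * l1 + beta[2] * l2

# Hypothetical factor values (in percent): level 11, slope -2, curvature 1.5.
for tau in (3, 12, 60, 120):
    print(f"{tau:>4} months: {yield_at(tau, (11.0, -2.0, 1.5)):.3f}%")
```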
|
17 |
The Success of Long-Short Equity Strategies versus Traditional Equity Strategies & Market Returns. Buchanan, Lauren J. (01 January 2011)
This study examines the performance of long-short equity trading strategies from January 1990 to December 2010. It combines two financial screens that yield candidates for both long and short positions in each month of that period. Two long-short strategies are tested: (1) perfectly hedged, with equal allocation to long and short positions, and (2) net-long. The results reveal that a long-short equity manager who can successfully determine which companies are overvalued and undervalued, and who actively rebalances the portfolio, can generate superior risk-adjusted alpha with either the perfectly hedged or the net-long strategy.
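To illustrate the two allocations (with numbers invented for the example), the return of a long-short book is the weighted long return minus the weighted short return:

```python
def long_short_return(long_ret: float, short_ret: float,
                      long_weight: float, short_weight: float) -> float:
    """Portfolio return of a long-short book: gains on longs, plus gains
    from shorted stocks falling (hence the minus sign on short_ret)."""
    return long_weight * long_ret - short_weight * short_ret

longs, shorts = 0.08, 0.02   # long basket +8%, short basket +2% this month

# (1) Perfectly hedged: equal long and short allocations.
print(long_short_return(longs, shorts, 0.5, 0.5))   # 0.03 -> +3%
# (2) Net-long: e.g. 70% long / 30% short keeps some market exposure.
print(long_short_return(longs, shorts, 0.7, 0.3))   # 0.05 -> +5%
```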
|
18 |
Taiwan multi-factor model construction: Equity market neutral strategies application. Tang, Yun-He (22 July 2004)
This thesis constructs a Taiwan equity multi-factor model step by step, using a fundamental cross-sectional approach. The resulting model involves 28 explanatory factors (including 20 industry factors), and its explanatory power is 58.6% on average; the estimation results can be considered very satisfactory.
Moreover, based on the multi-factor model, this study simulates equity market neutral strategies through quantitative techniques over the period January 2003 to December 2003. The results confirm the three major characteristics of equity-market-neutral portfolio performance: 1) providing absolute returns; 2) lack of correlation with the equity benchmark; and 3) low volatility, due to the hedged portfolio structure.
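A toy sketch of the idea (illustrative only, not the thesis's model): given market-factor exposures estimated by a multi-factor model, a market-neutral book sizes the short side so that the net exposure to the market factor is zero, leaving returns to come from the alpha spread between longs and shorts:

```python
# Hypothetical stocks: (ticker, market beta estimated by the factor model)
longs = [("A", 1.2), ("B", 0.8)]
shorts = [("C", 1.25), ("D", 0.75)]

def basket_beta(basket: list[tuple[str, float]]) -> float:
    """Equal-weighted average market-factor exposure of a basket."""
    return sum(beta for _, beta in basket) / len(basket)

beta_long = basket_beta(longs)     # 1.0
beta_short = basket_beta(shorts)   # 1.0

# Size the short side so long and short market exposures cancel:
long_weight = 1.0
short_weight = long_weight * beta_long / beta_short
net = long_weight * beta_long - short_weight * beta_short
print(f"short weight {short_weight:.2f}, net market beta {net:.2f}")  # 1.00, 0.00
```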
|