1

Previsão de séries temporais econômicas usando redes neurais caóticas / Forecasting economic time series using chaotic neural networks

Gonçalves, Victor Henrique 24 November 2017 (has links)
This thesis describes the application of KIII, a biologically more plausible neural network model, to forecasting economic time series. K-sets are connectionist models based on neural populations and have been used in many machine learning applications, including time series prediction. Here the method was applied to the IPCA, a Brazilian consumer price index surveyed by IBGE in 13 metropolitan areas, covering the period from August 1994 to June 2017. Experiments were performed using four non-parametric models (KIII, continuous kNN, classical ANNs, and SVM) and seven parametric methods: ARIMA, SARIMA, Moving Average, SES, Holt, Additive Holt-Winters, and Multiplicative Holt-Winters. RMSE was used to compare the methods' performance. Freeman's KIII sets worked well as a filter, improving method performance, but were not a good prediction method on their own, being outperformed in most experiments by other time series forecasting methods. This thesis contributes to the use of non-parametric models for forecasting inflation in a developing country.
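To make the comparison concrete, here is a minimal sketch (not the thesis's code) of one of the parametric baselines, simple exponential smoothing (SES), together with the RMSE metric used to rank the methods. The smoothing constant `alpha` and the toy series are illustrative assumptions, not IPCA data.

```python
import numpy as np

def ses_forecasts(y, alpha=0.3):
    """One-step-ahead simple exponential smoothing forecasts.
    f[t] is the prediction of y[t] using data up to t-1."""
    f = [y[0]]  # initialize the level with the first observation
    for t in range(1, len(y)):
        f.append(alpha * y[t - 1] + (1 - alpha) * f[t - 1])
    return np.array(f)

def rmse(actual, predicted):
    """Root mean squared error, the statistic used to compare the methods."""
    a, p = np.asarray(actual, float), np.asarray(predicted, float)
    return float(np.sqrt(np.mean((a - p) ** 2)))

# Toy monthly inflation-like series (illustrative only)
y = [0.5, 0.7, 0.6, 0.8, 0.9, 0.7, 0.6]
print(rmse(y, ses_forecasts(y)))
```

The same `rmse` function can score any of the competing forecasters, which is what makes it a convenient common yardstick.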
3

Essays on economic mobility

Yalonetzky, Gaston Isaias January 2008 (has links)
This thesis is a collection of three essays contributing to the intergenerational and intra-generational mobility literature. The essay on full risk insurance and measurement error examines whether measurement error can reconcile observed departures from perfect rank immobility in insurable consumption with the mobility predictions of full risk insurance, by generating spurious rank-breaking transitions. The essay shows that under certain assumptions full risk insurance predicts perfect rank immobility, and that there exist ranges of error covariance matrices for which the mobility predictions of full risk insurance plus measurement error cannot be rejected in the Peruvian data. A novel approach to testing these mobility predictions is presented. The essay on discrete-time, discrete-state Markov chain models applied to welfare dynamics shows that higher-order models may fit the data better than the popular first-order stationary model, and that the order of the chain, in turn, affects the estimation of equilibrium distributions. A best-practice methodology for conducting homogeneity tests between two samples with different optimal orders is proposed, and an index by Shorrocks, based on the trace of the transition matrix, is extended to higher-order discrete Markov chain models. The essay on cohort heterogeneity in the intergenerational mobility of education shows how cohort heterogeneity affects the analysis of cross-group homogeneity and the long-term prospects of a welfare variable, based on transition matrix analysis. The essay compares the transition matrices of Peruvian groups divided by gender and ethnicity and finds genuine reductions in the heterogeneity of mobility regimes between males and females, and between indigenous and non-indigenous groups, among the youngest cohorts. 
The essay proposes a methodology for conducting first-order stochastic dominance analysis with equilibrium distributions and shows that, among the youngest cohorts, the past stochastic dominance of males over females and of the non-indigenous over the indigenous disappears in the long term.
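The trace-based Shorrocks index mentioned in the abstract has a compact first-order form; a minimal sketch (the thesis's extension to higher-order chains is not reproduced here), with an equilibrium distribution computed by power iteration:

```python
import numpy as np

def shorrocks_index(P):
    """Shorrocks mobility index M(P) = (n - trace(P)) / (n - 1) for an
    n-state transition matrix P. 0 = perfect immobility (identity matrix);
    1 is attained by any matrix with unit trace, e.g. uniform rows."""
    P = np.asarray(P, float)
    n = P.shape[0]
    return float((n - np.trace(P)) / (n - 1))

def equilibrium_distribution(P, iters=200):
    """Equilibrium (stationary) distribution of the chain by power iteration."""
    P = np.asarray(P, float)
    pi = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(iters):
        pi = pi @ P
    return pi
```

The index depends only on the diagonal of the transition matrix, which is why, as the essay notes, the estimated order of the chain matters: a higher-order chain has a different (larger) state space and hence a different trace.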
4

Ranking-Based Methods for Gene Selection in Microarray Data

Chen, Li 21 March 2006 (has links)
DNA microarrays make it possible to monitor the expression levels of thousands of genes simultaneously and to identify those that are differentially expressed. One of the major goals of microarray data analysis is the detection of differentially expressed genes across two kinds of tissue samples, or samples obtained under two experimental conditions. A large number of gene detection methods have been developed, most of them based on statistical analysis. However, statistical methods are limited by the small sample sizes and the unknown distribution and error structure of microarray data. This thesis studies ranking-based gene selection methods, which make only weak assumptions about the data. Three approaches are proposed to integrate the individual ranks when selecting differentially expressed genes. Experiments on simulated and biological microarray data show that the ranking-based methods outperform the t-test and SAM in selecting differentially expressed genes, especially when the sample size is small.
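As an illustration of the general idea (not the thesis's three specific rank-integration schemes), one can rank genes by absolute expression difference within every cross-condition sample pair and then average the per-pair ranks; genes with the best average rank are selected. All function names and the synthetic data are assumptions for the sketch.

```python
import numpy as np

def pairwise_rank_matrix(expr_a, expr_b):
    """For each (sample in A, sample in B) pair, rank genes by |difference|,
    rank 1 = largest difference. expr_*: genes x samples arrays.
    Returns a genes x pairs matrix of integer ranks."""
    n_genes = expr_a.shape[0]
    ranks = []
    for i in range(expr_a.shape[1]):
        for j in range(expr_b.shape[1]):
            diff = np.abs(expr_a[:, i] - expr_b[:, j])
            order = np.argsort(-diff)            # descending by |difference|
            r = np.empty(n_genes, dtype=int)
            r[order] = np.arange(1, n_genes + 1)
            ranks.append(r)
    return np.column_stack(ranks)

def select_by_average_rank(expr_a, expr_b, k=10):
    """Select the k genes with the best (smallest) average rank across pairs."""
    avg = pairwise_rank_matrix(expr_a, expr_b).mean(axis=1)
    return np.argsort(avg)[:k]
```

Because only ranks enter the aggregation, the procedure needs no distributional assumptions, which is the property the abstract highlights over the t-test and SAM.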
5

Feasible Generalized Least Squares: theory and applications

González Coya Sandoval, Emilio 04 June 2024 (has links)
We study the Feasible Generalized Least Squares (FGLS) estimation of the parameters of a linear regression model in which the errors are allowed to exhibit heteroskedasticity of unknown form and to be serially correlated. The main contribution is twofold: first, we aim to demystify the reasons often advanced for using OLS instead of FGLS, by showing that the latter estimator is robust, more efficient, and more precise. Second, we devise consistent FGLS procedures, robust to misspecification, which achieve a lower mean squared error (MSE), often close to that of the correctly specified infeasible GLS. In the first chapter we restrict our attention to the case of independent heteroskedastic errors. We suggest a Lasso-based procedure to estimate the skedastic function of the residuals; this estimate is then used to construct an FGLS estimator. Using extensive Monte Carlo simulations, we show that this Lasso-based FGLS procedure has better finite-sample properties than OLS and other linear-regression-based FGLS estimators. Moreover, the FGLS-Lasso estimator is robust to misspecification of both the functional form and the variables characterizing the skedastic function. The second chapter generalizes our investigation to the case of serially correlated errors. There are three main contributions: first, we show that GLS is consistent requiring only pre-determined regressors, whereas OLS requires exogenous regressors to be consistent. The second contribution is to show that GLS is much more robust than OLS; even a misspecified GLS correction can achieve a lower MSE than OLS. The third contribution is to devise an FGLS procedure that is valid whether or not the regressors are exogenous and achieves an MSE close to that of the correctly specified infeasible GLS. Extensive Monte Carlo experiments assess the performance of our FGLS procedure against OLS in finite samples; FGLS achieves important reductions in MSE and variance relative to OLS. 
In the third chapter we consider an empirical application: we re-examine the Uncovered Interest Parity (UIP) hypothesis, which states that the expected rate of return to speculation in the forward foreign exchange market is zero. We extend the FGLS procedure to a setting in which lagged dependent variables are included as regressors, thus providing a consistent and efficient framework for estimating the parameters of a general k-step-ahead linear forecasting equation. Finally, we apply our FGLS procedures to the analysis of the two main specifications used to test the UIP.
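A minimal two-step FGLS sketch for chapter one's setting (independent heteroskedastic errors). It assumes, for illustration only, an exponential skedastic function of the regressors estimated by least squares, rather than the Lasso-based estimate the thesis develops.

```python
import numpy as np

def fgls(X, y):
    """Two-step FGLS: (1) OLS, (2) regress log squared residuals on X to
    estimate the skedastic function (an assumed exponential form),
    (3) weighted least squares with the inverse estimated variances."""
    beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta_ols
    # Skedastic regression: log(e^2) on the regressors
    gamma, *_ = np.linalg.lstsq(X, np.log(resid ** 2 + 1e-12), rcond=None)
    w = np.exp(-(X @ gamma))             # inverse estimated error variances
    Xw = X * w[:, None]                  # apply weights
    return np.linalg.solve(X.T @ Xw, Xw.T @ y)   # (X'WX)^{-1} X'Wy
```

When the skedastic form is roughly right, the reweighting downweights high-variance observations and moves the estimate toward the infeasible GLS benchmark; when it is wrong, the estimator remains consistent, which is the robustness point the abstract stresses.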
6

Essays on Entrepreneurship and Economic Development

Tamvada, Jagannadha Pawan 08 September 2007 (has links)
No description available.
7

Curve Estimation and Signal Discrimination in Spatial Problems

Rau, Christian, rau@maths.anu.edu.au January 2003 (has links)
In many instances arising prominently, but not exclusively, in imaging problems, it is important to condense the salient information so as to obtain a low-dimensional approximant of the data. This thesis is concerned with two basic situations which call for such a dimension reduction. The first is the statistical recovery of smooth edges in regression and density surfaces. The edges are understood to be contiguous curves, although they are allowed to meander almost arbitrarily through the plane, and may even split at a finite number of points to yield an edge graph. A novel locally parametric nonparametric method is proposed which enjoys the benefit of being relatively easy to implement via a 'tracking' approach. These topics are discussed in Chapters 2 and 3, with pertaining background material given in the Appendix. In Chapter 4 we construct concomitant confidence bands for this estimator, which have asymptotically correct coverage probability. The construction can be likened to only a few existing approaches, and may thus be considered our main contribution.
Chapter 5 discusses numerical issues pertaining to the edge and confidence band estimators of Chapters 2-4. Connections are drawn to popular topics surrounding edge detection that originated in the fields of computer vision and signal processing, and these connections are exploited to obtain greater robustness of the likelihood estimator, for example in the presence of sharp corners.
Chapter 6 addresses a dimension reduction problem for spatial data where the ultimate objective of the analysis is the discrimination of these data into one of a few pre-specified groups. In the dimension reduction step, an instrumental role is played by the recently developed methodology of functional data analysis. Relatively standard non-linear image processing techniques, as well as wavelet shrinkage, are used prior to this step. A case study on remotely sensed navigation radar data exemplifies the methodology of Chapter 6.
8

Neparametrinio kNN metodo taikymo miškų inventorizacijoje tyrimai / Investigation of the application of the non-parametric kNN method for forest inventory

Jonikavičius, Donatas 16 August 2007 (has links)
The research investigates the non-parametric kNN (k-nearest neighbor) method for estimating standard forest characteristics at any point of an area under Lithuanian conditions. Study object: Dubrava forest, managed by the Dubrava experimental forest enterprise. Objective: to assess the usability of the non-parametric kNN method in Lithuanian forest inventory. 
Results: increasing the number of sample plots with known field information was found to improve estimation accuracy. The best-performing kNN parameters were 10 nearest neighbors (the value of k) and an inverse-distance weighting scheme for the selected neighbors. Integrating additional auxiliary information (characteristics of forest compartments estimated during the conventional stand-wise inventory) with Spot Xi images improved the overall accuracy of the estimates. With the optimal kNN tactics, the lowest root mean square errors achieved were 27% of the study-area average for mean diameter, 20% for mean height, 40% for basal area, 35% for mean age, 43% for volume per hectare, and 33% for the percentage of coniferous species in the stand composition. Spot Xi images from 1999, main forest characteristics from 1986 field-measured sample plots, and conventional stand-wise forest inventory data from 1988 were used to estimate, with the kNN method, the... [to full text]
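A minimal sketch of the estimator the abstract describes: k = 10 nearest neighbours with inverse-distance weights. The one-dimensional feature space and targets below are illustrative assumptions, not the Spot Xi spectral variables.

```python
import numpy as np

def knn_estimate(features, targets, query, k=10):
    """Predict a forest attribute at `query` as the inverse-distance-weighted
    mean of the targets of the k nearest reference plots in feature space."""
    features = np.asarray(features, float)
    targets = np.asarray(targets, float)
    d = np.linalg.norm(features - query, axis=1)   # distances to all plots
    idx = np.argsort(d)[:k]                        # k nearest neighbours
    w = 1.0 / (d[idx] + 1e-9)                      # inverse-distance weights
    w /= w.sum()
    return float(w @ targets[idx])
```

In the study's setting, `features` would be per-pixel image variables (plus the auxiliary stand-wise inventory data) and `targets` the attributes measured on the field plots, yielding wall-to-wall maps of mean diameter, height, volume, and so on.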
9

Uma análise do impacto da experiência ocupacional entre os jovens brasileiros: 2003 a 2012 / An analysis of the impact of occupational experience among young Brazilians: 2003 to 2012

Ricarte, Thiago Limoeiro 29 August 2014 (has links)
This dissertation evaluates the impact of occupational experience among young Brazilian workers as a determinant of the chances of insertion into the Brazilian labor market, as well as of wage differentials. To this end, we adopt the Propensity Score Matching (PSM) model proposed by Rosenbaum and Rubin (1983) and the counterfactual analysis by quantile regressions proposed by Chernozhukov, Fernández-Val, and Melly (2013), using data from the Monthly Employment Survey (PME), 2003-2012. The dissertation comprises two independent essays (chapters) whose working hypothesis is that occupational experience, i.e., the fact of having held a previous occupation, is an important distinguishing variable among young workers (16 to 24 years old), both in the search for employment and in their wages. The first essay analyzes the impact of occupational experience on the chances of insertion into the labor market using Propensity Score Matching, while the second essay assesses its impact on the wage differential between workers with and without occupational experience using the method of Chernozhukov, Fernández-Val, and Melly (2013). 
The results confirm that occupational experience positively influences the chances of insertion into the labor market (on average, workers with experience have a 10% higher chance of being hired than those without), and indicate that workers who have already held a previous occupation (re-employed workers) earn higher wages than workers without previous experience (first-job workers) in all years of the sample, with the difference being more pronounced for workers in the lower quantiles of the income distribution. Notwithstanding the methodological and sampling caveats discussed throughout the dissertation, the sensitivity analyses confirmed that occupational experience is a criterion used by employers both in hiring and in pay.
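A minimal sketch of the PSM idea used in the first essay: logistic propensity scores followed by 1-nearest-neighbour matching on the score. This is not the dissertation's implementation, and all data below are synthetic.

```python
import numpy as np

def propensity_scores(X, treated, iters=500, lr=0.5):
    """Logistic regression by gradient ascent: estimates P(treated = 1 | X)."""
    Xb = np.column_stack([np.ones(len(X)), X])   # add intercept
    w = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w += lr * Xb.T @ (treated - p) / len(X)  # gradient of the log-likelihood
    return 1.0 / (1.0 + np.exp(-Xb @ w))

def att_by_matching(scores, treated, outcome):
    """Average treatment effect on the treated: each treated unit is matched
    to the control unit with the closest propensity score."""
    t = np.where(treated == 1)[0]
    c = np.where(treated == 0)[0]
    gaps = [outcome[i] - outcome[c[np.argmin(np.abs(scores[c] - scores[i]))]]
            for i in t]
    return float(np.mean(gaps))
```

Here "treatment" would be having previous occupational experience and the outcome either hiring or wages; matching on the score compares each experienced worker with an observably similar inexperienced one.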
10

Essays on regulation and risk

Martins, Régio Soares Ferreira 30 August 2010 (has links)
In this thesis, we investigate aspects of the interplay between economic regulation and the risk of the regulated firm. In the first chapter, the main goal is to understand the implications a mainstream regulatory model (Laffont and Tirole, 1993) has for the systematic risk of the firm. We generalize the model to incorporate aggregate risk, and find that the optimal regulatory contract must be severely constrained in order to reproduce real-world systematic risk levels. We also consider the optimal profit-sharing mechanism, with an endogenous sharing rate, to explore the relationship between contract power and beta, and find results compatible with the available evidence that high-powered regimes impose more risk on the firm. In the second chapter, a joint work with Daniel Lima of the University of California, San Diego (UCSD), we start from the observation that regulated firms are subject to regulatory practices that potentially affect the symmetry of the distribution of their future profits. If these practices are anticipated by investors in the stock market, the pattern of asymmetry in the empirical distribution of stock returns may differ between regulated and non-regulated companies. 
We review some recently proposed asymmetry measures that are robust to the empirical regularities of return data and use them to investigate whether there are meaningful differences in the distribution of asymmetry between these two groups of companies. In the third and last chapter, three different approaches to the capital asset pricing model of Kraus and Litzenberger (1976) are tested with recent Brazilian data and estimated using the generalized method of moments (GMM) as a unifying procedure. We find that ex-post stock returns generally exhibit statistically significant coskewness with the market portfolio, and hence are sensitive to squared market returns. However, while the theoretical ground for the preference for skewness is well established and fairly intuitive, we did not find supporting evidence that investors require a premium for bearing this risk factor in Brazil.
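The coskewness statistic at the heart of the third chapter can be sketched directly. This uses one common standardization of coskewness, not necessarily the exact GMM moment conditions estimated in the thesis.

```python
import numpy as np

def coskewness(asset, market):
    """Standardized coskewness E[(r_i - mu_i)(r_m - mu_m)^2] / (s_i * s_m^2).
    Positive values mean the asset tends to pay off when squared market
    returns (market volatility) are high."""
    a = np.asarray(asset, float) - np.mean(asset)    # demeaned asset returns
    m = np.asarray(market, float) - np.mean(market)  # demeaned market returns
    return float(np.mean(a * m ** 2) / (np.std(a) * np.std(m) ** 2))
```

By construction the measure is scale-free: leveraging the asset (multiplying its returns by a constant) leaves it unchanged, so it isolates the shape of the co-movement rather than its magnitude.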
