  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Analysis of the transfer of the value added tax (VAF) in ICMS transfers to municipalities in Ceará / Análise do repasse do valor adicionado fiscal (VAF) nas transferências do ICMS aos municípios cearenses

Ângelo Fernandes Moreno dos Santos 27 February 2012 (has links)
não há / This study aims to analyze the transfer of the Value Added Tax (VAF) in the transfer of ICMS resources to the municipalities of Ceará, as stipulated by the Federal Constitution of 1988 and Supplementary Law No. 63 of 1990. The research covers the period from 2003 to 2010 for all 184 municipalities of Ceará and uses an econometric panel data model in the data analysis. With this model, we sought to determine how the independent variables Bolsa Família, FPM, GDP, Complementary Law No. 86/97, and CIDE influence the dependent variable (VAF). The results show that the explanatory variables have a positive impact on the growth of the VAF transfer to the municipalities of Ceará. The most significant variable was Complementary Law No. 86/97, created to exempt products and services destined for export from the ICMS tax; through it, the VAF transfer to the municipalities increases by about 7.47% when exports increase. / Este estudo tem como objetivo analisar o repasse do Valor Adicionado Fiscal (VAF) na transferência dos recursos do ICMS devido aos municípios cearenses conforme preceitua a Constituição Federal de 1988 e a Lei Complementar nº 63 de 1990. A pesquisa compreenderá o período de 2003 a 2010 de todos os 184 municípios cearenses e utilizou-se, na análise dos dados, o modelo econométrico de dados em painel. Com esse modelo, buscou-se verificar como as variáveis independentes Bolsa Família, FPM, PIB, Lei Complementar nº 86/97 e CIDE influenciam na variável dependente (VAF). Os resultados demonstram que as variáveis explicativas impactam de forma positiva no aumento do repasse do VAF para os municípios cearenses. A variável que se apresentou mais significativa foi a Lei Complementar nº 86/97, criada com o intuito de isentar da cobrança do tributo ICMS os produtos e serviços destinados à exportação; portanto, o VAF proporciona um aumento no repasse para os municípios de cerca de 7,47% quando há um aumento de suas exportações.
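As an illustrative aside (not from the thesis itself): the fixed-effects ("within") panel estimator that underlies models like the one in this abstract can be sketched in a few lines. All variable names, dimensions, and coefficients below are hypothetical simulated stand-ins, not the study's actual data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical panel: n municipalities observed over T years, k regressors.
n, T, k = 184, 8, 3
alpha = rng.normal(size=n)                 # unobserved municipality fixed effects
beta = np.array([0.5, -0.2, 0.1])          # "true" coefficients (made up)
X = rng.normal(size=(n, T, k))
y = alpha[:, None] + X @ beta + 0.1 * rng.normal(size=(n, T))

# Within transformation: demean each unit over time, which wipes out the
# time-invariant fixed effect alpha_i.
Xd = X - X.mean(axis=1, keepdims=True)
yd = y - y.mean(axis=1, keepdims=True)

# Pooled OLS on the demeaned data is the fixed-effects estimator.
beta_hat, *_ = np.linalg.lstsq(Xd.reshape(n * T, k), yd.reshape(n * T), rcond=None)
print(beta_hat)   # close to the true [0.5, -0.2, 0.1]
```

The same demeaning logic extends to any balanced panel; unbalanced panels demean within each unit's own observations.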
22

Diversificações e especializações produtivas: uma análise da atividade inovativa em São Paulo

Montenegro, Rosa Livia Gonçalves 18 December 2008 (has links)
Submitted by Renata Lopes (renatasil82@gmail.com) on 2016-10-13T15:33:56Z; made available in DSpace on 2016-10-22T12:57:38Z (GMT). Previous issue date: 2008-12-18 / O objetivo principal do trabalho é investigar a influência de externalidades de diversificação e de especialização sobre a atividade inovadora de microrregiões do estado de São Paulo, no período compreendido entre 1996-2003. Além disso, outros fatores determinantes da inovação são também considerados, como a capacidade de realização de P&D, o nível de escolaridade e a defasagem temporal da inovação. As patentes per capita são usadas na pesquisa como proxy para a avaliação da atividade inovadora, ou seja, medem a capacidade tecnológica da microrregião. A base de dados consiste na utilização de microdados provenientes do Instituto Brasileiro de Geografia e Estatística (IBGE) e dos dados de depósitos de patentes do Instituto Nacional de Propriedade Industrial (INPI). A metodologia aplicada aborda a Análise Exploratória de Dados Espaciais (AEDE) e modelos de regressão espacial com dados em painel. Ambas as técnicas permitiram um acompanhamento espacial e temporal do progresso do sistema regional de inovação em São Paulo. Os resultados revelaram que a especialização produtiva das microrregiões é fator determinante para seu desempenho inovador. Alguns efeitos também se mostram importantes como a escolaridade, as inovações realizadas no período anterior, os transbordamentos de conhecimentos e, em menor grau, as externalidades de diversificação.
/ The aim of the present work is to assess the extent to which specialization or diversification externalities may affect the innovative performance of a particular microregion. Additionally, the influence of other regional factors on innovative output is examined, such as regional R&D capacity, the schooling of the local population, and the innovative tradition of the microregion. The analysis is based on a database of 63 microregions of the state of São Paulo from 1996 to 2003, merged from micro-data drawn mainly from the Yearly Industrial Survey and the Brazilian Patent Office. These data were analyzed by means of Exploratory Spatial Data Analysis and panel data regression models with spatial dependence. Both techniques reveal the spatial and temporal evolution of the regional innovation system of the state of São Paulo. The main result shows that a microregion's innovative performance seems to be affected mainly by specialization externalities rather than diversification externalities. Other results emphasize the positive influence of the schooling of the local population, technological knowledge spillovers, and the innovative tradition of the microregion on its innovative output.
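A brief illustrative sketch: Exploratory Spatial Data Analysis of the kind cited above typically starts from global Moran's I, which measures whether similar values cluster in space. The weight matrix and values below are a toy example, not the thesis's data.

```python
import numpy as np

def morans_i(x, W):
    """Global Moran's I of values x under spatial weight matrix W:
    positive = clustering of similar values, negative = checkerboard pattern."""
    z = x - x.mean()
    return (len(x) / W.sum()) * (z @ W @ z) / (z @ z)

# Toy example: 4 regions on a line, rook contiguity weights (illustrative).
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
clustered = np.array([10.0, 9.0, 1.0, 2.0])     # neighbours look alike
alternating = np.array([10.0, 1.0, 10.0, 1.0])  # neighbours look different
print(morans_i(clustered, W), morans_i(alternating, W))
```

In practice W is row-standardized and I is compared against its permutation distribution; this sketch keeps only the core statistic.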
23

都市蔓延與氣候暖化關係之研究-以台北都會區為例 / A study of the relationship between urban sprawl and climate warming - the case of the Taipei metropolitan area

賴玫錡, Lai, Mei Chi Unknown Date (has links)
本研究主要探討台北都會區都市蔓延與氣候暖化之關係,實證分析是否都市蔓延的發展形態會造成氣溫的上升。有研究指出台灣的歷年氣溫上升是因為近年來工商業急速發展,人口增加,建築物型態改變,交通運輸量激增等所致。國內外許多研究也發現都市化與氣溫是呈現正相關,而綠地與氣溫呈現負相關。 本研究實證分析部分使用地理資訊系統之內插法和空間分析方法,以及迴歸分析使用panel data之固定效果模型等工具,內插法之結果得到台北都會區年平均氣溫自1996年至2006年約上升1℃,有些地區甚至上升約2℃,且上升之溫度範圍有擴大的趨勢,呈現放射狀的溫度分布,此與都市蔓延之放射狀發展形態類似。使用空間分析方法則證實了一地人口數的增加會造成該地氣溫上升,並且也發現近來人口數多增加在都市外圍地區,這與上述氣溫分布和都市蔓延之放射狀發展形態也相符合。 迴歸分析結果顯示人口數對於氣溫有相當大之正相關,耕地面積對氣溫則呈現負相關,可見得擁有廣大綠地可以降低區域之氣溫,減緩氣候暖化,因此建議政府需檢討當前農地政策,配合環境保護,適合時宜的提出正確之政策。另外在各鄉鎮市區固定效果估計量方面,可以歸納出若一地區有廣大的公園、綠地、或是有河川流域的經過,對於降低當地氣溫有明顯的幫助;時間趨勢之固定效果估計量顯示台北都會區隨著時間的經過,氣溫將持續上升。因此在未來都市規劃方面,規劃者必須了解各地區特性,善加利用其自然環境以調和氣候暖化之影響、多設置公園綠地、多種植綠色植物、在道路周邊行道樹的設置、建築物間風場之設計等。如此將可以降低都市蔓延對氣候暖化的影響,以及防止氣候暖化的發生。 / In this study, we examine the relationship between urban sprawl and climate warming in the Taipei metropolitan area, and empirically analyze whether the spatial pattern of urban sprawl causes temperatures to rise. Some studies indicate that the warming observed in Taiwan is due to the rapid development of industry and commerce in recent years, population growth, changes in building patterns, and the huge increase in traffic volume. Other studies, domestic and foreign, also find a positive correlation between urbanization and temperature, and a negative correlation between green space and temperature. The empirical analysis in this study uses the interpolation and spatial analysis methods of GIS, and the regression analysis uses a fixed-effects panel data model. The interpolation results show that the yearly average temperature in the Taipei metropolitan area rose by about 1℃ from 1996 to 2006, and in some areas by about 2℃. Furthermore, the area of increasing temperature has been expanding and shows a radial distribution, similar to the radial pattern of urban sprawl. Using spatial analysis, we confirm that an area's temperature rises as its population grows, and we find that recent population growth is concentrated in peri-urban areas, which matches the radial pattern of urban sprawl and the temperature distribution described above. The regression results show a strong positive correlation between population and temperature and a negative correlation between farmland area and temperature, so large green spaces can lower an area's temperature and mitigate climate warming. For this reason, I suggest that the government review the current farmland policy, align it with environmental protection, and implement the right policies at the right time. From the fixed-effect estimates for each township and district, we conclude that a large park, a large green space, or a river passing through an area clearly helps lower its temperature; the time-trend fixed-effect estimates indicate that the Taipei metropolitan area will keep warming as time goes by. Therefore, urban planners should understand the characteristics of each area and use the natural environment to moderate the effects of climate warming: provide more parks, green spaces, and plants, plant more street trees along roads, and design wind corridors between buildings. Such measures also cut carbon emissions. In these ways, we can reduce the contribution of urban sprawl to climate warming and help prevent it.
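As an illustrative aside: one common GIS interpolation scheme for mapping station temperatures onto a grid, as in the analysis above, is inverse distance weighting (IDW). The thesis's exact GIS settings are not specified here; the stations and temperatures below are invented.

```python
import numpy as np

def idw(stations, temps, grid_pts, power=2.0):
    """Inverse-distance-weighted interpolation of station temperatures
    onto grid points: nearer stations get larger weights."""
    d = np.linalg.norm(grid_pts[:, None, :] - stations[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)            # avoid division by zero at a station
    w = 1.0 / d ** power
    return (w * temps).sum(axis=1) / w.sum(axis=1)

# Three hypothetical stations and one grid point equidistant from all of them.
stations = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
temps = np.array([20.0, 22.0, 21.0])
grid = np.array([[0.5, 0.5]])
print(idw(stations, temps, grid))   # equal weights -> the mean, 21.0
```

IDW always stays within the range of the observed values, which is why kriging is often preferred when a trend must be extrapolated.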
24

Essays on economic and econometric applications of Bayesian estimation and model comparison

Li, Guangjie January 2009 (has links)
This thesis consists of three chapters on economic and econometric applications of Bayesian parameter estimation and model comparison. The first two chapters study the incidental parameter problem, mainly under a linear autoregressive (AR) panel data model with fixed effects. The first chapter investigates the problem from a model comparison perspective. Its major finding is that consistency in parameter estimation and consistency in model selection are interrelated. The reparameterization of the fixed effect parameter proposed by Lancaster (2002) may not provide a valid solution to the incidental parameter problem if the wrong set of exogenous regressors is included. To estimate the model consistently and to measure its goodness of fit, the Bayes factor is found to be preferable for model comparison to the Bayesian information criterion based on the biased maximum likelihood estimates. When model uncertainty is substantial, Bayesian model averaging is recommended. The method is applied to study the relationship between financial development and economic growth. The second chapter proposes a correction function approach to solve the incidental parameter problem. It is discovered that the correction function exists for the linear AR panel model of order p when the model is stationary with strictly exogenous regressors. MCMC algorithms are developed for parameter estimation and for calculating the Bayes factor for model comparison. The last chapter studies how stock return predictability and model uncertainty affect a rational buy-and-hold investor's decision to allocate her wealth over different investment horizons in the UK market. The FTSE All-Share Index is treated as the risky asset, and the UK Treasury bill as the riskless asset, in forming the investor's portfolio. Bayesian methods are employed to identify the most powerful predictors while accounting for model uncertainty.
It is found that though stock return predictability is weak, it can still affect the investor's optimal portfolio decisions over different investment horizons.
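A toy illustration of Bayes-factor model comparison of the general kind discussed above (the models and priors here are invented for exposition, not the thesis's): compare M0 (data are standard normal around zero) against M1 (data share an unknown mean with a normal prior), where both marginal likelihoods are available in closed form.

```python
import numpy as np

def log_mvn(y, cov):
    """Log density of a zero-mean multivariate normal N(0, cov) at y."""
    n = len(y)
    sign, logdet = np.linalg.slogdet(cov)
    return -0.5 * (n * np.log(2 * np.pi) + logdet + y @ np.linalg.solve(cov, y))

rng = np.random.default_rng(1)
y = rng.normal(loc=1.0, size=50)    # simulated data with a nonzero mean

n = len(y)
# M0: y_i ~ N(0, 1)                       -> marginal covariance I
# M1: y_i ~ N(theta, 1), theta ~ N(0, 1)  -> marginal covariance I + 11'
cov0 = np.eye(n)
cov1 = np.eye(n) + np.ones((n, n))
log_bf = log_mvn(y, cov1) - log_mvn(y, cov0)
print(log_bf)   # clearly positive: the data favour the nonzero-mean model
```

The same marginal-likelihood logic is what MCMC-based Bayes factor computations approximate when no closed form exists.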
25

人民幣國際化程度與前景的實證分析 / Empirical study on the degree and prospect of renminbi internationalization

王國臣, Wang, Guo Chen Unknown Date (has links)
人民幣是否可能成為另一個重要的國際貨幣,甚至挑戰美元的國際地位?此即本論文的問題意識。對此,本論文進一步提出三個研究問題:一是如何測量當前的人民幣國際化程度?二是如何測量當前的人民幣資本開放程度?三是資本開放對於人民幣國際化程度的影響為何? 為此,本研究利用主成分分析(PCA),以建構人民幣國際化程度(CIDI)與人民幣資本帳開放程度(CAOI)。其次再利用動態追蹤資料模型──系統一般動差估計法(SGMM),以檢證各項人民幣綜合競爭力對於貨幣國際化程度的影響。最後,本研究進一步梳理人民幣資本帳開放的進程,並結合上述所有實證分析的結果,進而預估漸進資本開放下人民幣國際化的前景。研究對象包括人民幣在內的33種國際貨幣,研究時間則起自1999年歐元成立,迄於2009年。 本論文的發現有三:一是,當前人民幣國際化程度進展相當快速。但截至2009年年底,人民幣國際化程度還很低,遠落後於美元、歐元、日圓,以及英鎊等主要國際貨幣。不僅如此,人民幣國際化程度也遜於俄羅斯盧布、巴西里拉,以及印度盧比等開發中國家所發行的貨幣。 二是,過去10年來,人民幣資本帳開放程度不升反降,截至2009年年底,人民幣的資本帳開放程度維持在零,這表示:人民幣是世界上管制最為嚴格的貨幣。相對而言,美元、歐元、日圓,以及英鎊的資本帳開放程度至少都在70%以上,特別是英鎊的資本帳開放程度更趨近於完全開放。 三是,根據SGMM的實證結果顯示,網路外部性、經濟規模、金融市場規模、貨幣穩定度,以及資本開放程度都是影響貨幣國際化程度的關鍵因素。在此基礎上,本研究利用發生機率(odds ratio),以計算不同資本開放情境下,人民幣成為前10大國際貨幣的可能性。結果顯示,如果人民幣的資本帳開放到73%左右,人民幣便可擠進前10大國際貨幣(發生機率為65.6%)。 不過,這只是最為保守的估計。原因有二:一是,隨著中國經濟實力的崛起,以及人民幣預期升值的脈絡下,國際市場對於人民幣的需求原本就很高。此時,人民幣資本帳如果能適時開放,則人民幣的國際持有將大幅增加。換言之,本研究沒有考量到,各貨幣競爭力因素與資本開放程度之間的加乘效果。 二是,資本開放不僅直接對貨幣國際化程度產生影響,也會透過擴大金融市場規模與網路外部性等其他貨幣競爭力因素,間接對貨幣國際化程度造成影響。這間接效果,本研究也沒有考量到。因此,可以預期的是,只要人民幣資本帳能夠漸進開放,人民幣國際化的前景將比本研究所預估的高出許多。 / This paper discusses whether the renminbi (RMB) will become an international currency and even challenge the U.S. dollar. To examine this question, the paper takes the following three steps: 1. Using principal component analysis (PCA), it constructs two indices: a currency internationalization degree index (CIDI) and a capital account liberalization degree index (CAOI); 2. Using a dynamic panel data model, the system generalized method of moments (SGMM), it analyzes the factors that affect the CIDI, including economic and trade size, the financial system, network externalities, confidence in the currency's value, and the CAOI; 3. Based on the PCA and SGMM results, it calculates the odds of the RMB becoming an important international currency. The research reached the following results. First, the internationalization of the RMB has progressed very fast, but its CIDI is still very low, far behind the dollar, euro, Japanese yen, and pound. Second, over the past 10 years the RMB CAOI has decreased rather than increased; it stood at zero in 2009, which means the RMB is the most stringently controlled currency in the world. In contrast, the CAOI of the U.S. dollar, euro, yen, and pound is at least 70%. Third, according to the SGMM results, economic size, the financial system, network externalities, confidence in the currency's value, and the CAOI are the key factors affecting the CIDI. Based on this output, the paper forecasts that if the RMB CAOI opens to about 73%, the RMB could enter the top 10 international currencies (the odds ratio is 65.6%). It is noteworthy that this is only the most conservative estimate, because the paper does not consider the interaction effects between the currency competitiveness factors and the CAOI. Therefore, if the RMB capital account continues to open, the prospects for RMB internationalization are much higher than estimated here.
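An illustrative sketch of the PCA-index idea used above: score each unit on the first principal component of its standardized indicators. The indicator names and the 33-currency setup below are hypothetical simulated stand-ins, not the paper's data.

```python
import numpy as np

def first_pc_index(X):
    """Composite index: scores on the first principal component of the
    standardized indicator matrix X (rows = units, columns = indicators)."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    U, S, Vt = np.linalg.svd(Z, full_matrices=False)
    pc1 = Vt[0]
    if pc1.sum() < 0:        # fix sign so higher indicators -> higher index
        pc1 = -pc1
    return Z @ pc1

rng = np.random.default_rng(2)
# Hypothetical indicators for 33 currencies (e.g. reserve share, invoicing,
# FX turnover) driven by one latent "internationalization" factor.
latent = rng.normal(size=33)
X = latent[:, None] * np.array([1.0, 0.8, 0.6]) + 0.1 * rng.normal(size=(33, 3))
index = first_pc_index(X)
print(np.corrcoef(index, latent)[0, 1])   # the index recovers the latent factor
```

When the indicators genuinely share one dominant factor, the first component captures most of the variance and the index ranking is robust to the sign convention chosen.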
26

Impact of ACA’s free screening policy on colorectal cancer outcomes and cost savings : Effect of removal of out-of-pocket cancer screening fee on screening, incidence, mortality, and cost savings

Togtokhjav, Oyun January 2023 (has links)
Colorectal cancer is the second leading cause of cancer-related deaths worldwide as of 2020. Early detection and diagnosis of colorectal cancer can greatly increase the chances of successful treatment and can also reduce the cost of care, including treatment. In recent years, colorectal cancer screening rates have slowed nationwide, which affects new diagnoses of colorectal cancer (CRC) and the ability to treat it at an early stage to avoid an increase in the mortality rate. The purpose of this research is to examine the impact of the Affordable Care Act (2010) policy to remove the colorectal cancer screening fee for adults aged 50-75 on the screening, incidence, and mortality rates of colorectal cancer, using a panel data model and a sequential recursive system of equations. Since the decision to get screened is an individual's choice, this study also explores methods to increase the colorectal cancer screening rate with the help of behavioral economics theories. The results show that the Affordable Care Act's policy to remove the colorectal cancer screening fee has a significant impact on both colorectal cancer screening and incidence rates: the policy is associated with an increase in the screening rate and with a decrease in the incidence rate. Regarding the colorectal cancer mortality rate, an effort was made to examine the effect of the policy on the overall cost savings resulting from lives saved. However, since this study found no significant impact of the ACA's policy on the mortality rate of colorectal cancer, further exploration in this regard was not pursued. On the other hand, studies conducted to increase the colorectal cancer screening rate by applying behavioral economics methods have shown that a default with an opt-out choice and a financial incentive with loss-framed messaging are effective.
Therefore, these methods can be investigated to design and implement a nationwide initiative.
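An illustrative sketch of the recursive-system idea above: in a triangular system, each equation can be estimated in sequence, the dependent variable of one feeding into the next. Everything below (coefficients, sample size, variable names) is simulated for exposition, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical cross-section of 200 state-years.
n = 200
policy = (rng.random(n) < 0.5).astype(float)                 # 1 = fee removed
screening = 60 + 8 * policy + rng.normal(scale=2, size=n)    # screening rate, %
incidence = 50 - 0.3 * screening + rng.normal(scale=1, size=n)

def ols(y, X):
    """OLS with an intercept; returns [intercept, slopes...]."""
    X = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Recursive (triangular) system, estimated equation by equation:
# (1) screening <- policy, then (2) incidence <- screening.
b_screen = ols(screening, policy)
b_incid = ols(incidence, screening)
print(b_screen[1], b_incid[1])   # roughly recovers +8 and -0.3
```

Equation-by-equation OLS is consistent here because the system is triangular with independent errors; correlated errors across equations would call for joint estimation instead.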
27

自我迴歸模型的動差估計與推論 / Estimation and inference in autoregressive models with method of moments

陳致綱, Chen, Jhih Gang Unknown Date (has links)
本論文的研究主軸圍繞於自我迴歸模型的估計與推論上。文獻上自我迴歸模型的估計多直接採用最小平方法,但此估計方式卻有兩個缺點:(一)當序列具單根時,最小平方估計式的漸近分配為非正規型態,因此檢定時需透過電腦模擬得到臨界值;(二)最小平方估計式雖具一致性,但卻有嚴重的有限樣本偏誤問題。有鑑於此,我們提出一種「二階差分轉換估計式」,並證明該估計式的偏誤遠低於前述最小平方估計式,且在序列為恒定與具單根的環境下具有相同的漸近常態分配。此外,二階差分轉換估計式相當適合應用於固定效果追蹤資料模型,而據以形成的追蹤資料單根檢定在序列較短的情況下仍有不錯的檢定力。 本論文共分四章,茲分別簡單說明如下: 第1章為緒論,回顧文獻上估計與推論自我回歸模型時的問題,並說明本論文的研究目標。估計自我迴歸模型的傳統方式是直接採取最小平方法,但在序列具單根的情況下由於訊息不隨時間消逝而快速累積,使估計式的收斂速度高於序列為恒定的情況。不過,這也導致最小平方估計式的漸近分配為非標準型態,並使得進行假設檢定前必須先透過電腦模擬來獲得臨界值。其次,最小平方估計式雖具一致性,但在有限樣本下卻是偏誤的。實證上,樣本點不多是研究者時常面臨的窘境,並使得小樣本偏誤程度格外嚴重。本章中透過對前述問題形成因素的瞭解,說明解決與改善的方法,亦即我們提出的「二階差分轉換估計式」。 第2章主要目的在於推導二階差分轉換估計式之有限樣本偏誤。我們亦推導了多階差分自我迴歸模型下二階段最小平方估計式(two stage least squares, 2SLS)與 Phillips and Han (2008) 採用的一階差分轉換估計式之偏誤,以同時進行比較。本章理論與模擬結果皆顯示,一階與二階差分轉換估計式與2SLS之 $T^{−1}$ 階偏誤程度皆低於以最小平方法估計原始水準模型(level model)的偏誤,其中 T 為時間序列長度。另外,一階差分轉換估計式與二階差分轉換估計式在 $T^{−1}$ 階偏誤上,分別與一階和二階差分模型下2SLS相同,但兩估計式的相對偏誤程度則因自我相關係數的大小而互有優劣。同時,我們發現估計高於二階的差分模型對小樣本偏誤並無法有更進一步的改善。最後,即使在樣本點不多的情況下,本章所推導的偏誤理論對於實際偏誤仍有良好的近似能力。 第3章主要目的在於發展二階差分轉換估計式之漸近理論。與 Phillips and Han (2008) 採用之一階差分轉換估計式相似的是,該估計式在序列為恒定與具單根的情況下收斂速度相同,並有漸近常態分配的優點。值得注意的是,二階差分轉換估計式的漸近分配為 N(0,2),不受任何未知參數的影響。另外,當序列呈現正自我相關時,二階差分轉換估計式相較於一階差分轉換估計式具有較小的漸近變異數,進而使得據以形成的檢定統計量有較佳的對立假設偵測能力。最後,誠如 Phillips and Han (2008) 所述,由於差分過程消除了模型中的截距項,使得此類估計方法在固定效果的動態追蹤資料模型(dynamic panel data model with fixed effect)具相當的發展與應用價值。 本論文第4章進一步將二階差分轉換估計式推展至固定效果的動態追蹤資料模型。文獻上估計此種模型通常利用差分來消除固定效果後,再以一般動差法(generalized method of moments, GMM)進行估計。然而,這樣的估計方式在序列為近單根或具單根時卻面臨了弱工具變數(weak instrument)的問題,並導致嚴重的估計偏誤。相反的,差分轉換估計式所利用的動差條件在近單根與單根的情況下仍然穩固,因此在小樣本下的估計偏誤相當輕微(甚至無偏誤)。另外,我們證明了不論序列長度(T)或橫斷面規模(n)趨近無窮大,差分轉換估計式皆有漸近常態分配的性質。與單一序列時相同的是,我們提出的二階差分轉換估計式在序列具正自我相關性時的漸近變異數較一階差分轉換估計式小;受惠於此,利用二階差分轉換估計式所建構的檢定具有較佳的檢力。值得注意的是,由於二階差分轉換估計式在單根的情況下仍有漸近常態分配的性質,我們得以直接利用該漸近理論建構追蹤資料單根檢定。電腦模擬結果發現,在小 T 大 n 的情況下,其檢力優於文獻上常用的 IPS 檢定(Im et al., 1997, 2003)。 / This thesis deals with estimation and inference in autoregressive models.
Conventionally, autoregressive models estimated by the least squares (LS) procedure suffer from two shortcomings. First, the asymptotic distribution of the LS estimate of the autoregressive coefficient is discontinuous at unity. Test statistics based on the LS estimates thus follow nonstandard distributions, and the critical values need to be obtained by Monte Carlo techniques. Secondly, as is well known, the LS estimates of autoregressive models are biased in finite samples. This bias can be substantial and leads to serious size distortions for test statistics built on the estimates, and to forecast errors. In this thesis, we consider a simple new method of moments estimator, termed the “transformed second-difference” (hereafter TSD) estimator, which is free of the aforementioned problems and has many useful applications. Notably, when applied to dynamic panel models, the associated panel unit root tests share a great power advantage over the existing ones in cases with a very short time span. The thesis consists of 4 chapters, which are briefly described as follows. 1. Introduction: Overview and Purpose. This chapter first reviews the literature and states the purpose of this dissertation. We discuss the sources of problems in estimating autoregressive models with the conventional method. The motivation to estimate the autoregressive series with multiple-difference models, instead of the conventional level model, is provided. We then propose a new estimator, the TSD estimator, which avoids (fully or partly) the drawbacks of the LS method, and highlight its finite-sample and asymptotic properties. 2. The Bias of 2SLS and Transformed Difference Estimators in Multiple-Difference AR(1) Models. In this chapter, we derive the approximate bias of the TSD estimator. For comparison, the corresponding biases of the two stage least squares (2SLS) estimators in multiple-difference AR(1) models and of the transformed first-difference (TFD) estimator proposed by Chowdhurry (1987) are also given as by-products. We find that: (i) all the estimators considered are much less biased than the LS estimator in the level regression; (ii) the difference method can be exploited to reduce the bias only up to difference order 2; and (iii) the bias of the TFD and TSD estimators is of the same order, $O(T^{-1})$, as that of the 2SLS estimators. However, in the extent of bias reduction, neither of the two transformed difference estimators uniformly dominates over the entire parameter space. Our simulation evidence lends credible support to our bias approximation theory. 3. Gaussian Inference in AR(1) Time Series with or without a Unit Root. The goal of this chapter is to develop an asymptotic theory for the TSD estimator. Similar to the TFD estimator studied by Phillips and Han (2008), the TSD estimator is found to have Gaussian asymptotics for all values of ρ ∈ (−1, 1] with a $\sqrt{T}$ rate of convergence, where ρ is the autoregressive coefficient of interest and T is the time span. Specifically, the limit distribution of the TSD estimator is N(0,2) for all possible values of ρ. In addition, the asymptotic variance of the TSD estimator is smaller than that of the TFD estimator when ρ > 0, and the corresponding t-test thus exhibits superior power to the TFD-based one. 4. Estimation and Inference with Moment Methods for Dynamic Panels with Fixed Effects. This chapter demonstrates the usefulness of the TSD estimator when applied to dynamic panel data models. We find again that the TSD estimator has a standard Gaussian limit, with a convergence rate of $\sqrt{nT}$ for all values of ρ, including unity, irrespective of how n or T approaches infinity. In particular, the TSD estimator makes use of moment conditions that remain strong for all values of ρ, and therefore completely avoids the weak instrument problem for ρ in the vicinity of unity and has virtually no finite sample bias. As in the time series case, the asymptotic variance of the TSD estimator is smaller than that of the TFD estimator of Han and Phillips (2009) when ρ > 0 and T > 3, and the corresponding t-ratio test is thus more capable of unveiling the true data generating process. Furthermore, the asymptotic theory can be applied directly to panel unit root testing. Our simulation results reveal that the TSD-based unit root test is more powerful than the widely used IPS test (Im et al., 1997, 2003) when n is large and T is small.
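An illustrative simulation of the finite-sample LS bias that motivates this thesis (the TSD estimator itself is not reproduced here; this only demonstrates the problem it addresses, with made-up parameter values):

```python
import numpy as np

rng = np.random.default_rng(4)

def ls_ar1(y):
    """Least-squares estimate of rho in y_t = c + rho * y_{t-1} + e_t."""
    X = np.column_stack([np.ones(len(y) - 1), y[:-1]])
    return np.linalg.lstsq(X, y[1:], rcond=None)[0][1]

rho, T, reps = 0.9, 30, 2000
est = np.empty(reps)
for r in range(reps):
    y = np.empty(T)
    y[0] = rng.normal() / np.sqrt(1 - rho**2)   # draw from stationary dist.
    for t in range(1, T):
        y[t] = rho * y[t - 1] + rng.normal()
    est[r] = ls_ar1(y)
print(est.mean() - rho)   # clearly negative: downward small-sample bias
```

With T = 30 the average LS estimate falls well below the true ρ = 0.9, consistent with the roughly -(1+3ρ)/T bias known from the literature; moment-based estimators like the TSD are designed to avoid this.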
28

台灣市場小型股與成交量之實證關係 / An empirical study of the relation between small-cap stocks and volume in the Taiwanese stock market

林大偉 Unknown Date (has links)
量價關係,一直以來皆為技術分析學派所廣泛運用,其主張運用過去的股價以及成交量來推測股票未來的走勢,而也有許多的研究以及投資策略皆是從量價關係所出。在國內,小型股也由於其股本小的特性,往往成為有心人士炒作之標的。此外,小型股亦較大型股具有不對稱資訊的性質,而由於成交量背後往往隱藏著許多的資訊,因此投資人利用量與價之間的關係,得到能夠有效預測小型股股價的方法以利其投資。 而本文之研究,將量價關係運用在小型股上,想檢視彼此間有無任何關係存在。本文中我們使用了因果關係檢定,三因子模型,以及縱橫迴歸模型,用來分別檢視小型股與大型股的量價關係。驗證結果發現,在不同的檢驗方式下,都會得到小型股較大型股,有顯著量價影響的關係存在。 / The relation between volume and price is widely used in technical analysis, which predicts future stock prices from past prices and volume; many studies and investment strategies stem from it. In Taiwan, small caps, because of their small capitalization, are often targeted by those who would like to manipulate prices. In addition, compared with large caps, small caps involve greater information asymmetry for investors. As a lot of information is hidden behind volume, investors may use the relation between volume and price to predict small caps' stock prices. In this paper, I use Granger causality tests, the three-factor model, and a panel data model to test the relation between price/return and volume for small caps and large caps separately. The results show that, under these different test methods, the volume-price relation is significantly more pronounced for small caps than for large caps.
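An illustrative sketch of the bivariate Granger causality test named above: regress y on its own lags with and without lags of x, and compare the residual sums of squares with an F statistic. The data-generating process below is simulated, not the paper's return/volume series.

```python
import numpy as np

def granger_f(y, x, p=1):
    """F statistic for H0: lags of x do not help predict y beyond lags of y
    (plain bivariate Granger causality, lag order p)."""
    T = len(y)
    Y = y[p:]
    ylags = np.column_stack([y[p - j - 1:T - j - 1] for j in range(p)])
    xlags = np.column_stack([x[p - j - 1:T - j - 1] for j in range(p)])
    Xr = np.column_stack([np.ones(T - p), ylags])   # restricted model
    Xu = np.column_stack([Xr, xlags])               # unrestricted model
    rss = lambda X: np.sum((Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]) ** 2)
    rss_r, rss_u = rss(Xr), rss(Xu)
    return ((rss_r - rss_u) / p) / (rss_u / (len(Y) - Xu.shape[1]))

rng = np.random.default_rng(5)
T = 500
x = rng.normal(size=T)
y = np.zeros(T)
for t in range(1, T):                      # x drives y with one lag
    y[t] = 0.3 * y[t - 1] + 0.8 * x[t - 1] + 0.3 * rng.normal()
print(granger_f(y, x), granger_f(x, y))    # large F one way, small the other
```

Under H0 the statistic is approximately F(p, T-p-1-2p) distributed; lag order p is usually chosen by an information criterion.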
29

Empirická verifikace krátkodobé agregátní nabídky podle Lucasova modelu a nové keynesovské ekonomie / Empirical verification of short-run aggregate supply based on Lucas model and new Keynesian theory

Marošová, Ivana January 2015 (has links)
The aim of this master thesis is to empirically analyze whether the new classical or the new Keynesian school is supported as the dominant theory of the short-run aggregate supply curve. The analysis is based on a dynamic panel data model for 38 countries over the period between 1970 and 2014. Because the results show some evidence that the level of inflation is significantly negative, in contrast with its variability, I conclude that there is support for the new Keynesian theory. I focus on examining the panel data assumptions, such as the stationarity of explanatory variables, the existence of individual or random effects, the validity of homogeneous slope coefficients and, mainly, cross-sectional dependence of the error terms. After testing these assumptions, I choose the most suitable estimation method for dynamic panel data models and use it to analyze both linear and non-linear specifications of the given model. As a result, we can see that the selection of the right estimation method plays a great role in the final outcomes. I also check model robustness by including changes in the real oil price as a proxy variable for supply shocks in the economy.
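An illustrative sketch of one cross-sectional dependence diagnostic of the kind checked above, Pesaran's CD statistic, which aggregates pairwise correlations of residuals across countries. The residuals below are simulated, not the thesis's.

```python
import numpy as np

def pesaran_cd(resid):
    """Pesaran's CD statistic for an (N, T) array of per-unit residuals.
    Under the null of cross-sectional independence it is approximately
    standard normal."""
    N, T = resid.shape
    corr = np.corrcoef(resid)                       # N x N pairwise correlations
    pair_sum = corr[np.triu_indices(N, k=1)].sum()  # sum over i < j
    return np.sqrt(2 * T / (N * (N - 1))) * pair_sum

rng = np.random.default_rng(6)
common = rng.normal(size=(1, 100))                        # shared global shock
dependent = 0.7 * common + rng.normal(size=(38, 100))     # 38 "countries"
independent = rng.normal(size=(38, 100))
print(pesaran_cd(dependent), pesaran_cd(independent))     # huge vs. near zero
```

A large CD value signals a common factor in the errors, in which case estimators robust to cross-sectional dependence (e.g. common correlated effects) are called for.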
30

Essays in dynamic panel data models and labor supply

Nayihouba, Kolobadia Ada 08 1900 (has links)
Cette thèse est organisée en trois chapitres. Les deux premiers proposent une approche régularisée pour l'estimation du modèle de données de panel dynamique : l'estimateur GMM et l'estimateur LIML. Le dernier chapitre de la thèse est une application de la méthode de régularisation à l'estimation des élasticités de l'offre de travail en utilisant des modèles de pseudo-données de panel. Dans un modèle de panel dynamique, le nombre de conditions de moments augmente rapidement avec la dimension temporelle du panel, conduisant à une matrice de covariance des instruments de grande dimension. L'inversion d'une telle matrice pour calculer l'estimateur affecte négativement les propriétés de l'estimateur en échantillon fini. Comme solution à ce problème, nous proposons une approche par la régularisation qui consiste à utiliser une inverse généralisée de la matrice de covariance au lieu de son inverse classique. Trois techniques de régularisation sont utilisées : celle des composantes principales, celle de Tikhonov qui est basée sur la régression Ridge (aussi appelée Bayesian shrinkage) et enfin celle de Landweber Fridman qui est une méthode itérative. Toutes ces techniques introduisent un paramètre de régularisation qui est similaire au paramètre de lissage dans les régressions non paramétriques. Les propriétés en échantillon fini de l'estimateur régularisé dépendent de ce paramètre, qui doit être sélectionné parmi plusieurs valeurs potentielles. Dans le premier chapitre (co-écrit avec Marine Carrasco), nous proposons l'estimateur GMM régularisé du modèle de panel dynamique. Sous l'hypothèse que le nombre d'individus et de périodes du panel tendent vers l'infini, nous montrons que nos estimateurs sont convergents et asymptotiquement normaux. Nous dérivons une méthode empirique de sélection du paramètre de régularisation basée sur une expansion de second ordre de l'erreur quadratique moyenne et nous démontrons l'optimalité de cette procédure de sélection. Les simulations montrent que la régularisation améliore les propriétés de l'estimateur GMM classique. Comme application empirique, nous avons analysé l'effet du développement financier sur la croissance économique. Dans le deuxième chapitre (co-écrit avec Marine Carrasco), nous nous intéressons à l'estimateur LIML régularisé du modèle de données de panel dynamique. L'estimateur LIML est connu pour avoir de meilleures propriétés en échantillon fini que l'estimateur GMM, mais son utilisation devient problématique lorsque la dimension temporelle du panel devient large. Nous dérivons les propriétés asymptotiques de l'estimateur LIML régularisé sous l'hypothèse que le nombre d'individus et de périodes du panel tendent vers l'infini. Une procédure empirique de sélection du paramètre de régularisation est aussi proposée. Les bonnes performances de l'estimateur régularisé par rapport au LIML classique (non régularisé), au GMM classique ainsi qu'au GMM régularisé sont confirmées par des simulations. Dans le dernier chapitre, je considère l'estimation des élasticités d'offre de travail des hommes canadiens. L'hétérogénéité inobservée ainsi que les erreurs de mesure sur les salaires et les revenus sont connues pour engendrer de l'endogénéité quand on estime les modèles d'offre de travail. Une solution fréquente à ce problème d'endogénéité consiste à regrouper les données sur la base des caractéristiques observables et à effectuer les moindres carrés pondérés sur les moyennes des groupes. Il a été démontré que cet estimateur est équivalent à l'estimateur des variables instrumentales sur les données individuelles avec les indicatrices de groupe comme instruments. Donc, en présence d'un grand nombre de groupes, cet estimateur souffre d'un biais en échantillon fini similaire à celui de l'estimateur des variables instrumentales quand le nombre d'instruments est élevé.
Profitant de cette correspondance entre l'estimateur sur les données groupées et l'estimateur des variables instrumentales sur les données individuelles, nous proposons une approche régularisée à l'estimation du modèle. Cette approche conduit à des élasticités substantiellement différentes de celles qu'on obtient en utilisant l'estimateur sur données groupées. / This thesis is organized in three chapters. The first two chapters propose a regularization approach to the estimation of two estimators of the dynamic panel data model: the Generalized Method of Moments (GMM) estimator and the Limited Information Maximum Likelihood (LIML) estimator. The last chapter of the thesis is an application of regularization to the estimation of labor supply elasticities using pseudo panel data models. In a dynamic panel data model, the number of moment conditions increases rapidly with the time dimension, resulting in a large dimensional covariance matrix of the instruments. Inverting this large dimensional matrix to compute the estimator leads to poor finite sample properties. To address this issue, we propose a regularization approach to the estimation of such models, where a generalized inverse of the covariance matrix of the instruments is used instead of its usual inverse. Three regularization schemes are used: principal components, Tikhonov, which is based on Ridge regression (also called Bayesian shrinkage), and finally Landweber Fridman, which is an iterative method. All these methods involve a regularization parameter which is similar to the smoothing parameter in nonparametric regressions. The finite sample properties of the regularized estimator depend on this parameter, which needs to be selected among many potential values. In the first chapter (co-authored with Marine Carrasco), we propose the regularized GMM estimator of the dynamic panel data model. Under double asymptotics, we show that our regularized estimators are consistent and asymptotically normal provided that the regularization parameter goes to zero slower than the sample size goes to infinity. We derive a data-driven selection of the regularization parameter based on an approximation of the higher-order Mean Square Error and show its optimality. The simulations confirm that regularization improves the properties of the usual GMM estimator. As an empirical application, we investigate the effect of financial development on economic growth. In the second chapter (co-authored with Marine Carrasco), we propose the regularized LIML estimator of the dynamic panel data model. The LIML estimator is known to have better small sample properties than the GMM estimator, but its implementation becomes problematic when the time dimension of the panel becomes large. We derive the asymptotic properties of the regularized LIML under double asymptotics. A data-driven procedure to select the regularization parameter is proposed. The good performance of the regularized LIML estimator relative to the usual (not regularized) LIML estimator, the usual GMM estimator and the regularized GMM estimator is confirmed by the simulations. In the last chapter, I consider the estimation of the labor supply elasticities of Canadian men through a regularization approach. Unobserved heterogeneity and measurement errors on wage and income variables are known to cause endogeneity issues in the estimation of labor supply models. A popular solution to the endogeneity issue is to group data into categories based on observable characteristics and compute weighted least squares at the group level. This grouping estimator has been proved to be equivalent to the instrumental variables (IV) estimator on the individual level data using group dummies as instruments. Hence, in the presence of a large number of groups, the grouping estimator exhibits a small-sample bias similar to that of the IV estimator in the presence of many instruments. I take advantage of the correspondence between grouping estimators and the IV estimator to propose a regularization approach to the estimation of the model. Using this approach leads to wage elasticities that are substantially different from those obtained through grouping estimators.
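An illustrative sketch of the Tikhonov (ridge-type) scheme named above: replace the unstable inverse of a large, near-singular moment covariance matrix S with (S + αI)^{-1}. The matrix below is a toy example, not the thesis's actual instrument set.

```python
import numpy as np

def tikhonov_weight(S, alpha):
    """Tikhonov-regularized 'inverse' of a moment covariance matrix S:
    (S + alpha * I)^{-1}, a ridge-type shrinkage that stays well-conditioned
    when S is high-dimensional or near-singular."""
    return np.linalg.inv(S + alpha * np.eye(S.shape[0]))

rng = np.random.default_rng(7)
# Toy covariance of 40 heavily overlapping instruments with effective rank ~5.
A = rng.normal(size=(40, 5))
S = A @ A.T + 1e-8 * np.eye(40)
W_reg = tikhonov_weight(S, alpha=0.1)
print(np.linalg.cond(S + 0.1 * np.eye(40)), np.linalg.cond(S))
```

The regularization parameter α plays the role of the smoothing parameter mentioned in the abstract: larger α stabilizes the weight matrix at the cost of some bias, which is why a data-driven MSE-based choice is needed.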
