71

Macroeconomic determinants of Swedish and American stock indices: A study of how various macroeconomic variables affect the stock market, 1970–2021

Brolin, Magnus; Olsson, David. January 2022
During business cycles, the relationship between fundamental macroeconomic variables and stock market returns is of considerable interest. The purpose of this thesis is to investigate how share prices in the Swedish and American stock markets are affected by relevant macroeconomic factors during the period 1970–2021. Using cointegration tests, vector error correction (VEC) models and causality tests on annual data, we find significant evidence that the Swedish stock market diverges from equilibrium after a shock to the money supply. For the US stock market we find more significant variables, including the result that a shock to central government debt creates a divergent effect at the fourth lag but a convergent effect at the second. To deepen the analysis, a bivariate study examines how share prices are affected by GDP, as an indicator of economic growth, and by long-term and short-term interest rates, as business cycle indicators. The results differ between Sweden and the USA. The causal relationship between share prices and GDP shows that share prices affect GDP in Sweden; for the United States, however, we find no significant causal relationship. For the relationship between US share prices and interest rates, short-term rates cause divergence from equilibrium in the stock market, whereas for Sweden the results show that short-term rates cause convergence, and long-term rates divergence, after a shock to the stock market.
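As a rough illustration of the workflow this abstract describes — a Johansen cointegration test, a VEC model whose adjustment coefficients indicate convergence to or divergence from equilibrium, and Granger causality tests on annual data — the sketch below uses statsmodels. It is not the authors' code: the data file, column names and lag orders are illustrative assumptions.

```python
# Sketch of a cointegration / VECM / causality workflow (assumed data layout).
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import coint_johansen, VECM
from statsmodels.tsa.stattools import grangercausalitytests

# Hypothetical annual data: log stock index, log money supply, log GDP.
data = pd.read_csv("macro_annual.csv", index_col=0)[
    ["log_stock_index", "log_money_supply", "log_gdp"]
]

# Johansen trace test for the cointegration rank (det_order=0: constant term).
jres = coint_johansen(data, det_order=0, k_ar_diff=2)
print("trace statistics:   ", jres.lr1)
print("95% critical values:", jres.cvt[:, 1])

# VECM with one cointegrating relation; the signs of the adjustment
# coefficients in alpha indicate convergence to / divergence from equilibrium.
vecm = VECM(data, k_ar_diff=2, coint_rank=1, deterministic="ci").fit()
print(vecm.alpha)

# Pairwise Granger causality: does the stock index help predict GDP?
grangercausalitytests(
    data[["log_gdp", "log_stock_index"]].diff().dropna(), maxlag=2
)
```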
72

Monte Carlo Simulations of Stock Prices: Modelling the probability of future stock returns

Brodd, Tobias; Djerf, Adrian. January 2018
The financial market is a stochastic and complex system that is challenging to model. For investors, modelling the probability of possible outcomes of financial investments and financing decisions is crucial for making productive investments. This study investigates how Monte Carlo simulations of random walks can be used to model the probability of future stock returns, and how the simulations can be improved to provide better accuracy. The implemented method uses geometric Brownian motion (GBM) to simulate stock prices. Ten Swedish large-cap stocks served as the data set for the simulations, which were conducted over horizons of 1, 3, 6, 9 and 12 months. The two main parameters determining the outcome of a simulation are the mean of a stock's returns and the standard deviation of its historical returns. When these parameters were calculated without weights, the method proved statistically insignificant; when they were weighted instead, the method improved and was statistically significant for 1-month predictions. By varying the assumptions about the price distribution with respect to the length of the time period, and by using other weights, the method could prove more accurate than this study suggests. Monte Carlo simulation thus has the potential to become a powerful tool for predicting and modelling stock prices.
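A minimal sketch of the GBM-based Monte Carlo approach the abstract describes is shown below. The data file is a placeholder, and the parameter estimates are the plain unweighted ones; the thesis itself found that weighting the mean and volatility estimates was needed for statistical significance.

```python
# Sketch: Monte Carlo simulation of stock prices under geometric Brownian motion.
import numpy as np

def simulate_gbm(s0, mu, sigma, days, n_paths, seed=0):
    """Simulate GBM price paths, dS = mu*S*dt + sigma*S*dW, on a daily grid."""
    rng = np.random.default_rng(seed)
    dt = 1.0
    z = rng.standard_normal((n_paths, days))
    # Exact discretisation of GBM via cumulative log increments.
    log_increments = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    return s0 * np.exp(np.cumsum(log_increments, axis=1))

# Placeholder input: daily log returns of one large-cap stock.
returns = np.loadtxt("daily_log_returns.txt")
mu, sigma = returns.mean(), returns.std(ddof=1)   # unweighted estimates

paths = simulate_gbm(s0=100.0, mu=mu, sigma=sigma, days=21, n_paths=10_000)
final = paths[:, -1]
print("P(price up after ~1 month) ≈", (final > 100.0).mean())
```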
73

Corporate credit risk measurement

林妙宜 (Lin, Miao-Yi). Unknown Date
Graduate Institute of Money and Banking, NCCU. Graduation date: June 2002. Advisor: Dr. Chen, Son-Nan. Credit risk has been a central concern in the financial market: before a bank grants a loan, or a company makes deals and investments, the credit risk of the counterparty is carefully evaluated. This empirical study constructs credit risk models based on public firms in Taiwan. Using financial ratios and stock prices, the two main sources of corporate financial information, we aim to provide a fair and efficient indicator for measuring corporate credit risk in Taiwan. In the discriminant analysis based on accounting data, the model selects five financial ratios that cover corporate operations and financial condition: recurring earnings per share (profitability), operating cash flow to total debt (cash flow), earnings growth rate (growth), debt ratio (solvency), and average days of accounts receivable (operating efficiency). The discriminant analysis model accurately classifies 91.67% of firms as default or solvent one year before financial distress. In the option pricing model based on stock prices, the expected default probability can be solved from daily stock prices. This model translates the market's valuation of the firm into a level of credit risk, which helps capture changes in corporate soundness in time to make proper responses. Keywords: Credit Risk, Financial Distress, Accounting Data, Financial Ratio, Discriminant Analysis, Stock Prices, Option Pricing Model, Expected Default Probability
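As a hedged sketch of the option-pricing side of such a model, the code below uses the standard Merton formulation, in which the unobserved asset value and volatility are solved from equity data and the expected default probability is N(-d2). This is a generic textbook specification, not necessarily the thesis's exact model, and all input numbers are illustrative.

```python
# Sketch: expected default probability from a Merton-type option model.
import numpy as np
from scipy.optimize import fsolve
from scipy.stats import norm

def merton_edf(E, sigma_E, D, r, T=1.0):
    """E: equity value, sigma_E: equity vol, D: debt face value, r: risk-free rate."""
    def equations(x):
        V, sigma_V = x
        d1 = (np.log(V / D) + (r + 0.5 * sigma_V**2) * T) / (sigma_V * np.sqrt(T))
        d2 = d1 - sigma_V * np.sqrt(T)
        # Merton: equity is a call option on assets; equity vol links to asset vol.
        eq1 = V * norm.cdf(d1) - D * np.exp(-r * T) * norm.cdf(d2) - E
        eq2 = (V / E) * norm.cdf(d1) * sigma_V - sigma_E
        return [eq1, eq2]

    V, sigma_V = fsolve(equations, x0=[E + D, sigma_E * E / (E + D)])
    d2 = (np.log(V / D) + (r - 0.5 * sigma_V**2) * T) / (sigma_V * np.sqrt(T))
    return norm.cdf(-d2)   # risk-neutral expected default probability

# Illustrative numbers only.
print(merton_edf(E=120.0, sigma_E=0.45, D=80.0, r=0.02))
```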
74

Stochastic modelling of financial time series with memory and multifractal scaling

Snguanyat, Ongorn. January 2009
Financial processes may possess long memory, and their probability densities may display heavy tails. Many models have been developed to deal with this tail behaviour, which reflects the jumps in the sample paths. The presence of long memory, on the other hand, contradicts the efficient market hypothesis and is still a matter of debate. These difficulties present challenges for detecting memory and for modelling the co-presence of long memory and heavy tails. This PhD project aims to respond to these challenges. The first part aims to detect memory in a large number of financial time series on stock prices and exchange rates using their scaling properties. Since financial time series often exhibit stochastic trends, a common form of nonstationarity, strong trends in the data can lead to false detection of memory. We take advantage of a technique known as multifractal detrended fluctuation analysis (MF-DFA), which can systematically eliminate trends of different orders. This method is based on identifying the scaling of the q-th-order moments and is a generalisation of the standard detrended fluctuation analysis (DFA), which uses only the second moment, that is, q = 2. We also consider rescaled range (R/S) analysis and the periodogram method to detect memory in financial time series and compare their results with the MF-DFA. An interesting finding is that short memory is detected for stock prices on the American Stock Exchange (AMEX), while long memory is found in the time series of two exchange rates, namely the French franc and the Deutsche mark. Electricity price series of the five states of Australia are also found to possess long memory, and for these series heavy tails are also pronounced in the probability densities. The second part of the thesis develops models to represent the short-memory and long-memory financial processes detected in Part I. These models take the form of continuous-time AR(∞)-type equations whose kernel is the Laplace transform of a finite Borel measure; by imposing appropriate conditions on this measure, short memory or long memory in the dynamics of the solution results. A specific form of the models, which has a good MA(∞)-type representation, is presented for the short-memory case. Parameter estimation for this type of model is performed via least squares, and the models are applied to the AMEX stock prices, which Part I established to possess short memory. By selecting the kernel in the continuous-time AR(∞)-type equations to have the form of a Riemann-Liouville fractional derivative, we obtain a fractional stochastic differential equation driven by Brownian motion. This type of equation is used to represent financial processes with long memory, whose dynamics are described by the fractional derivative in the equation. These models are estimated via quasi-likelihood, namely a continuous-time version of the Gauss-Whittle method, and are applied to the exchange rates and electricity prices of Part I with the aim of confirming the long-range dependence established by MF-DFA. The third part of the thesis applies the results established in Parts I and II to characterise and classify financial markets. We pay attention to the New York Stock Exchange (NYSE), the American Stock Exchange (AMEX), the NASDAQ Stock Exchange (NASDAQ) and the Toronto Stock Exchange (TSX).
The parameters from MF-DFA and those of the short-memory AR(∞)-type models are employed in this classification. We propose the Fisher discriminant algorithm to find a classifier in the two- and three-dimensional spaces of the data sets and then use cross-validation to verify discriminant accuracy. This classification is useful for understanding and predicting the behaviour of different processes within the same market. The fourth part of the thesis investigates the heavy-tailed behaviour of financial processes which may also possess long memory. We consider fractional stochastic differential equations driven by stable noise to model financial processes such as electricity prices: the long memory of electricity prices is represented by a fractional derivative, while the stable noise input models their non-Gaussianity via the tails of the probability density. A method using the empirical densities and MF-DFA is provided to estimate all the parameters of the model and simulate sample paths of the equation. The method is then applied to analyse daily spot prices for five states of Australia, and comparisons with the results from R/S analysis, the periodogram method and MF-DFA are provided. The results from the fractional SDEs agree with those from MF-DFA, which are based on multifractal scaling, while those from the periodograms, which are based only on the second moment, seem to underestimate the long-memory dynamics of the process. This highlights the need for, and usefulness of, fractal methods in modelling non-Gaussian financial processes with long memory.
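To make the MF-DFA procedure central to Parts I and IV concrete, here is a compact sketch that estimates the generalized Hurst exponent h(q): build the profile, detrend fixed-size segments with a polynomial, form the q-th-order fluctuation function, and read h(q) off a log-log slope. The scales, q grid and simulated input are illustrative assumptions, not the thesis's data.

```python
# Sketch: multifractal detrended fluctuation analysis (MF-DFA).
import numpy as np

def mfdfa(x, scales, q_values, order=1):
    """Return the generalized Hurst exponent h(q) via MF-DFA."""
    profile = np.cumsum(x - np.mean(x))          # step 1: the profile
    h = []
    for q in q_values:
        log_F, log_s = [], []
        for s in scales:
            n_seg = len(profile) // s
            f2 = []
            # steps 2-3: detrend each segment with a polynomial of given order
            for v in range(n_seg):
                seg = profile[v * s:(v + 1) * s]
                t = np.arange(s)
                trend = np.polyval(np.polyfit(t, seg, order), t)
                f2.append(np.mean((seg - trend) ** 2))
            f2 = np.asarray(f2)
            # step 4: q-th-order fluctuation function F_q(s)
            if q == 0:
                Fq = np.exp(0.5 * np.mean(np.log(f2)))
            else:
                Fq = np.mean(f2 ** (q / 2.0)) ** (1.0 / q)
            log_F.append(np.log(Fq))
            log_s.append(np.log(s))
        # step 5: h(q) is the slope of log F_q(s) against log s
        h.append(np.polyfit(log_s, log_F, 1)[0])
    return np.array(h)

rng = np.random.default_rng(0)
returns = rng.standard_normal(4096)              # placeholder for real returns
print(mfdfa(returns, scales=[16, 32, 64, 128, 256], q_values=[-2, 2]))
# For white noise, h(2) ≈ 0.5, i.e. no memory.
```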
75

Understanding co-movements in macro and financial variables

D'Agostino, Antonello. 09 January 2007
Over recent years, the growing availability of large datasets and improvements in computational speed have further fostered research in both macroeconomic modeling and forecasting analysis. A primary focus of these research areas is to improve model performance by exploiting the informational content of several time series. Increasing the dimension of macro models is indeed crucial for a detailed structural understanding of the economic environment, as well as for accurate forecasting analysis. As a consequence, a new generation of large-scale macro models, based on the micro-foundations of a fully specified dynamic stochastic general equilibrium set-up, has become one of the most flourishing research areas in both central banks and academia. At the same time, there has been a revival of forecasting methods dealing with many predictors, such as factor models. The central idea of factor models is to exploit co-movements among variables through a parsimonious econometric structure: a few underlying common shocks, or factors, explain most of the co-variation among variables, while the unexplained component of series movements is due to purely idiosyncratic dynamics. The generality of this framework makes factor models suitable for describing a broad variety of settings in both macroeconomic and financial contexts. The recent revival of factor models stems from important developments by Stock and Watson (2002) and Forni, Hallin, Lippi and Reichlin (2000). These authors find the conditions under which certain data averages become collinear with the space spanned by the factors when the cross-section dimension becomes large. Moreover, their factor specifications allow the idiosyncratic dynamics to be mildly cross-correlated (an effect referred to as the 'approximate factor structure' by Chamberlain and Rothschild, 1983), a situation empirically verified in many applications. These findings have relevant implications, the most important being that using a large number of series is no longer a dimensional constraint; on the contrary, it helps to identify the factor space. This new generation of factor models has been applied in several areas of macroeconomics and finance, as well as for policy evaluation, and is consequently very likely to become a milestone in the literature on forecasting methods using many predictors. This thesis contributes to the empirical literature on factor models by proposing four original applications.

In the first chapter of this thesis, the generalized dynamic factor model of Forni et al. (2002) is employed to explore the predictive content of asset returns in forecasting Consumer Price Index (CPI) inflation and the growth rate of Industrial Production (IP). The connection between stock markets and economic growth is well known: in the fundamental valuation of equity, the stock price equals the discounted future stream of expected dividends. Since future dividends are related to future growth, a revision of prices, and hence returns, should signal movements in the future growth path. Other important transmission channels, such as Tobin's q theory (Tobin, 1969), the wealth effect and capital market imperfections, have also been widely studied in this literature. I show that an aggregate index, such as the S&P500, could be misleading if used as a proxy for the informative content of the stock market as a whole.
Despite the widespread wisdom of treating such an index as a leading variable, only some of the assets included in the index lead the variables of interest. The index's forecasting performance might therefore be poor, leading to sceptical conclusions about the effectiveness of asset prices in forecasting macroeconomic variables. The main idea of the first essay is therefore to analyze the lead-lag structure of the assets composing the S&P500. The classification into leading, lagging and coincident variables is achieved by means of the cross-correlation function cleaned of idiosyncratic noise and short-run fluctuations. I assume that asset returns follow a factor structure; that is, they are the sum of two parts: a common part driven by a few shocks common to all assets, and an idiosyncratic part, which is asset specific. The correlation function, computed on the common part of the series, is not affected by the assets' specific dynamics and should provide information only on the series driven by the same common factors. Once the leading series are identified, they are grouped within the economic sector they belong to. The predictive content of such aggregates in forecasting IP growth and CPI inflation is then explored and compared with the forecasting power of the S&P500 composite index. The forecasting exercise proceeds as follows. First, in an autoregressive (AR) model I choose the truncation lag that minimizes the Mean Square Forecast Error (MSFE) over 11 years of out-of-sample simulations at 1, 6 and 12 steps ahead, both for the IP growth rate and for CPI inflation. Second, the S&P500 is added as an explanatory variable to the previous AR specification; repeating the simulation exercise yields only very small improvements in the MSFE statistics. Third, averages of the leading stock return series, within their respective sectors, are added as additional explanatory variables in the benchmark regression. Remarkable improvements over the benchmark specification are achieved, especially at the one-year forecast horizon, and significant improvements are also achieved at shorter horizons when the leading series of the technology and energy sectors are used.

The second chapter of this thesis disentangles the sources of aggregate risk and measures the extent of co-movements in five European stock markets. Based on the static factor model of Stock and Watson (2002), it proposes a new method for measuring the impact of international, national and industry-specific shocks. The process of European economic and monetary integration, with the advent of the EMU, has been a central issue for investors and policy makers, and the number of studies on the integration and linkages among European stock markets has increased enormously. Given their forward-looking nature, stock prices are considered a key variable for establishing developments in the economic and financial markets. Measuring the extent of co-movements between European stock markets has therefore become, especially in recent years, one of the main concerns both for policy makers, who want to best shape their policy responses, and for investors, who need to adapt their hedging strategies to the new political and economic environment. An optimal portfolio allocation strategy is based on a timely identification of the factors affecting asset returns.
So far, literature dating back to Solnik (1974) identifies national factors as the main contributors to co-variation among stock returns, with industry factors playing a marginal role. The increasing financial and economic integration of recent years, fostered by the decline of trade barriers and greater policy coordination, should have strongly reduced the importance of national factors and increased the importance of global determinants, such as industry determinants. Somewhat puzzlingly, however, recent studies have demonstrated that country sources are still very important, and generally more important than industry ones. This chapter tries to cast some light on these conflicting results by proposing a more flexible econometric estimation strategy, suitable for disentangling and measuring the impact of global and country factors. Results point to a declining influence of national determinants and an increasing influence of industry ones, with international influences remaining the most important driving force of excess returns. These findings overturn the results in the literature and have important implications for strategic portfolio allocation policies, which need to be revisited and adapted to the changed financial and economic scenario.

The third chapter presents a new stylized fact that can help discriminate among alternative explanations of U.S. macroeconomic stability. The main finding is that the fall in time series volatility is associated with a sizable decline, of the order of 30% on average, in the predictive accuracy of several widely used forecasting models, including the factor models proposed by Stock and Watson (2002). This pattern is not limited to measures of inflation but also extends to several indicators of real economic activity and interest rates. The generalized fall in predictive ability after the mid-1980s is particularly pronounced for forecast horizons beyond one quarter. Furthermore, this empirical regularity is not specific to a single method; rather, it is a common feature of all models, including those used by public and private institutions. In particular, the forecasts for output and inflation in the Fed's Green Book and the Survey of Professional Forecasters (SPF) are significantly more accurate than a random walk only before 1985. After this date, in contrast, the hypothesis of equal predictive ability between naive random walk forecasts and the predictions of those institutions is not rejected at any horizon, the only exception being the current quarter. The results of this chapter may also be of interest for the empirical literature on asymmetric information. Romer and Romer (2000), for instance, consider a sample ending in the early 1990s and find that the Fed produced more accurate forecasts of inflation and output than several commercial providers. The results here imply that the informational advantage of the Fed and those private forecasters is in fact limited to the 1970s and the beginning of the 1980s. In contrast, during the last two decades no forecasting model is better than "tossing a coin" beyond the first-quarter horizon, implying that on average uninformed economic agents can effectively anticipate future macroeconomic developments. On the other hand, econometric models and economists' judgement are quite helpful for forecasts over the very short horizon that is relevant for conjunctural analysis.
Moreover, the literature on forecasting methods, recently surveyed by Stock and Watson (2005), has devoted a great deal of attention to identifying the best model for predicting inflation and output. Most studies, however, are based on full-sample periods. The main findings of the chapter reveal that most of the full-sample predictability of U.S. macroeconomic series arises from the years before 1985. Long time series appear to attach a far larger weight to the earlier sub-sample, which is characterized by larger volatility of inflation and output. The results also suggest that some caution should be used in evaluating the performance of alternative forecasting models on the basis of a pool of different sub-periods, as full-sample analyses are likely to miss parameter instability.

The fourth chapter performs a detailed forecast comparison between the static factor model of Stock and Watson (2002) (SW) and the dynamic factor model of Forni et al. (2005) (FHLR). It is not the first work to perform such an evaluation: Boivin and Ng (2005) focus on a very similar problem, while Stock and Watson (2005) compare the performance of a larger class of predictors. The SW and FHLR methods essentially differ in the computation of the forecast of the common component; in particular, they differ in the estimation of the factor space and in the way projections onto this space are performed. In SW, the factors are estimated by static Principal Components (PC) of the sample covariance matrix, and the forecast of the common component is simply the projection of the predicted variable on the factors. FHLR propose efficiency improvements in two directions. First, they estimate the common factors using Generalized Principal Components (GPC), in which observations are weighted according to their signal-to-noise ratio. Second, they impose the constraints implied by the dynamic factor structure when the variables of interest are projected onto the common factors; specifically, they take into account the leading and lagging relations across series by means of principal components in the frequency domain, which allows for an efficient aggregation of variables that may be out of phase. Whether these efficiency improvements help forecasting in a finite sample is, however, an empirical question on which the literature has not yet reached a consensus. On the one hand, Stock and Watson (2005) show that both methods perform similarly (although they focus on the weighting of the idiosyncratic component and not on the dynamic restrictions), while Boivin and Ng (2005) show that SW's method largely outperforms FHLR's and, in particular, conjecture that the dynamic restrictions implied by the method are harmful for the forecast accuracy of the model. This chapter tries to shed some new light on these conflicting results. It focuses on the Industrial Production index (IP) and the Consumer Price Index (CPI) and bases the evaluation on a simulated out-of-sample forecasting exercise. The data set, borrowed from Stock and Watson (2002), consists of 146 monthly series for the US economy spanning 1959 to 1999. In order to isolate and evaluate specific characteristics of the methods, a procedure is designed in which the two non-parametric approaches are nested in a common framework. In addition, for both versions of the factor model forecasts, the chapter studies the contribution of the idiosyncratic component to the forecast.
Other non-core aspects of the model are also investigated: robustness with respect to the choice of the number of factors and to variable transformations. Finally, the chapter examines the sub-sample performance of the factor-based forecasts. The purpose of this exercise is to design an experiment for assessing the contribution of the core characteristics of the different models to forecasting performance and to discuss auxiliary issues; hopefully it may also serve as a guide for practitioners in the field. As in Stock and Watson (2005), the results show that the efficiency improvements due to the weighting of the idiosyncratic components do not lead to significantly more accurate forecasts, but, in contrast to Boivin and Ng (2005), the dynamic restrictions imposed by the procedure of Forni et al. (2005) are shown not to be harmful for predictability. The main conclusion is that the two methods have similar performance and produce highly collinear forecasts.
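To make the Stock and Watson (2002) side of this comparison concrete, the sketch below implements a minimal diffusion-index forecast: static principal-component factors are extracted from a standardized panel, and the target is projected h steps ahead on those factors. The panel, target and factor count are simulated placeholders, not the chapter's data set.

```python
# Sketch: Stock-Watson style diffusion-index forecast from static PC factors.
import numpy as np

def diffusion_index_forecast(X, y, n_factors=3, horizon=1):
    """Forecast y_{T+h} from principal-component factors of the panel X (T x N)."""
    Xs = (X - X.mean(0)) / X.std(0)              # standardize each predictor
    # Static PC factors: top eigenvectors of the sample covariance matrix.
    eigval, eigvec = np.linalg.eigh(np.cov(Xs, rowvar=False))
    F = Xs @ eigvec[:, -n_factors:]              # T x r factor estimates
    # Regress y_{t+h} on factors at t (plus a constant), then project forward.
    Z = np.column_stack([np.ones(len(F)), F])
    beta, *_ = np.linalg.lstsq(Z[:-horizon], y[horizon:], rcond=None)
    return Z[-1] @ beta                          # point forecast of y_{T+h}

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 50))               # placeholder macro panel
y = X[:, :5].mean(1) + 0.1 * rng.standard_normal(200)
print(diffusion_index_forecast(X, y, n_factors=3, horizon=1))
```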
76

Three essays on informational feedback from stock prices to corporate decisions

Xu, Liang. 27 June 2017
This doctoral work studies the informational feedback from stock prices to managers' decisions. More specifically, I study whether and how managers learn new information from stock prices to guide their corporate decisions. The thesis comprises three essays, each addressing a different aspect of this topic. The first essay studies the relationship between stock market informational efficiency and real economic efficiency at the firm level; I find that when stock prices reflect a greater amount of the information managers care about, the corporate decisions managers make become more efficient. The second essay studies whether managers seek to learn short sellers' information from stock prices and use it in corporate decisions. I overcome the empirical difficulties by exploiting a unique institutional feature of the Hong Kong stock market, where only stocks included in an official list may be sold short. I find that managers of non-shortable firms can learn short sellers' information about industry-level economic conditions from the stock prices of shortable peers in the same industry and use it in their corporate decisions. The third essay studies the real effects of long-term option trading. I find that the introduction of a specific class of long-term options stimulates the production of long-term private information and thereby increases price informativeness about firms' long-term fundamentals; managers can then extract more information from stock prices to guide their long-term investment decisions.
