131

Tracking stocks in German stock corporation law, with particular consideration of a mixed-participation model through pro-rata distribution of divisional profits to several share classes

Kuhn, Christian January 2006 (has links)
Also published as: Düsseldorf, University, Dissertation, 2006
132

Security market design & execution cost.

Cook, Rowan M, Banking & Finance, Australian School of Business, UNSW January 2007 (has links)
We employ the Reuters database to compare execution costs for 2,330 matched-pair securities across the top 7 equity markets in the Dow Jones STOXX Global 1800 Index. This sample encompasses thirteen distinct market design features. In addition, we investigate execution costs well beyond the most heavily traded stocks to include equities in the sixth through tenth deciles of traded value. Our findings indicate that full transparency of the limit order book to investors and a composite of unique NYSE features (but not the presence of the crowd) unequivocally reduce effective spreads. In contrast, a fully transparent limit order book revealed to brokers, the presence of a market maker, or the mixture of execution systems present on the LSE sharply increase effective spreads in both thickly- and thinly-traded stocks. The effect of a physical trading floor is statistically significant but relatively small; it increases effective spreads slightly for thickly-traded firms and reduces them for thinly-traded stocks. The findings for price impact are the same, with three exceptions. First, the presence of a trading floor increases costs, dramatically so for thinly-traded stocks. Second, a fully transparent limit order book for brokers raises price impact for thickly-traded stocks but lowers it for thinly-traded firms. Third, in thinly-traded stocks, London's hybrid market decreases price impact, and in thickly-traded stocks, crowd trading on the NYSE and full transparency to investors decrease price impact. Finally, the results for realised spread are essentially the same as those for effective spread, with the exception that the presence of a trading floor reduces realised spreads. Overall, the London Stock Exchange is the highest execution-cost market, and the NYSE is the lowest. This research includes a market-specific study of the effect of the Euronext Paris Liquidity Provider on execution cost. Euronext Paris affords a natural experimental research design because a third of firms have Liquidity Providers and two thirds do not. Results indicate that quoted spreads, effective spreads and realised spreads are significantly affected by the presence of a Liquidity Provider, but price impacts are not. On the one hand, this suggests that the thickly-traded stocks where Liquidity Providers are prohibited have sufficient liquidity in their absence. On the other hand, Liquidity Providers on Euronext Paris reduce effective and realised spreads in essentially all stocks. This finding suggests that the limit order book refreshes much more quickly after developing an imbalance of large orders when Liquidity Providers can help other liquidity suppliers assess picking-off risk. The Liquidity Provider increases quoted spreads for thickly-traded firms in the first three traded-value deciles while reducing quoted spreads for the lower deciles.
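The spread measures discussed in this abstract are standard trade-level execution-cost quantities. As a minimal illustrative sketch (not the thesis's own code; the 5-minute post-trade horizon and the proportional scaling are assumptions), they can be computed from matched trade and quote data as follows:

```python
import numpy as np

def execution_costs(price, mid_at_trade, mid_later, direction):
    """Standard proportional execution-cost measures for a single trade.

    price        -- trade execution price
    mid_at_trade -- quote midpoint prevailing when the trade occurs
    mid_later    -- quote midpoint some time after the trade
                    (e.g. 5 minutes; the horizon here is an assumption)
    direction    -- +1 for buyer-initiated, -1 for seller-initiated trades
    """
    effective = 2 * direction * (price - mid_at_trade) / mid_at_trade
    realised = 2 * direction * (price - mid_later) / mid_at_trade
    impact = 2 * direction * (mid_later - mid_at_trade) / mid_at_trade
    # By construction, effective spread = realised spread + price impact.
    return effective, realised, impact

# Example: a buy at 10.02 against a 10.00 midpoint that later drifts to 10.01
print(execution_costs(10.02, 10.00, 10.01, +1))
```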
134

Feasibility of supplementary sampling of the commercial groundfish landings in Oregon using seafood plant workers

Builder, Tonya L. 17 November 2000 (has links)
Fishery-dependent data--length distributions, sex ratios, maturity schedules, and species composition of landed catches--are necessary for stock assessments. These data are currently collected by state port biologists using a sampling design that randomly selects samples from a small percentage of a very large target population. Sampling programs may need to increase the sample size and possibly expand data collection into evenings and weekends. This must also be accomplished in an economically reasonable manner, which is a significant challenge. Working cooperatively with the seafood processing plants is one way to meet these challenges. This study explored the feasibility of implementing a cooperative sampling program for Pacific West Coast groundfish, with the goal of improving the precision and accuracy of estimates derived from fishery-dependent samples. The study was a cooperative project utilizing seafood processing plant workers to collect fish length-frequency data. There is evidence that the seafood plant workers can measure fish with reasonable accuracy. This cooperative effort has the potential to dramatically increase the sample size and the coverage of sampled catch landings. / Graduation date: 2001
135

Risk incentives of executive stock options: evidence from mergers and acquisitions

Zhou, Haigang. January 1900 (has links)
Thesis (Ph.D.)--University of Nebraska-Lincoln, 2006. / Title from title screen (site viewed on Mar. 13, 2007). PDF text: vi, 124 p. : col. ill. UMI publication number: AAT 3225346. Includes bibliographical references. Also available in microfilm and microfiche format.
136

Understanding Co-Movements in Macro and Financial Variables

D'Agostino, Antonello 09 January 2007 (has links)
Over recent years, the growing availability of large datasets and improvements in computational speed have further fostered research in both macroeconomic modeling and forecasting. A primary focus of these research areas is to improve model performance by exploiting the informational content of many time series. Increasing the dimension of macro models is indeed crucial for a detailed structural understanding of the economic environment, as well as for accurate forecasting. As a consequence, a new generation of large-scale macro models, based on the micro-foundations of a fully specified dynamic stochastic general equilibrium set-up, has become one of the most flourishing research areas both in central banks and in academia. At the same time, there has been a revival of forecasting methods dealing with many predictors, such as factor models. The central idea of factor models is to exploit co-movements among variables through a parsimonious econometric structure: a few underlying common shocks, or factors, explain most of the co-variation among variables, while the unexplained component of series movements is due to purely idiosyncratic dynamics. The generality of this framework makes factor models suitable for describing a broad variety of settings in both macroeconomic and financial contexts. The revival of factor models in recent years stems from important developments by Stock and Watson (2002) and Forni, Hallin, Lippi and Reichlin (2000). These authors establish the conditions under which certain data averages become collinear with the space spanned by the factors as the cross-section dimension becomes large. Moreover, their factor specifications allow the idiosyncratic dynamics to be mildly cross-correlated (an effect referred to as the 'approximate factor structure' by Chamberlain and Rothschild, 1983), a situation empirically verified in many applications. These findings have important implications, the foremost being that using a large number of series is no longer a dimensional constraint; on the contrary, it helps to identify the factor space. This new generation of factor models has been applied in several areas of macroeconomics and finance as well as in policy evaluation, and it is consequently very likely to become a milestone in the literature on forecasting methods using many predictors. This thesis contributes to the empirical literature on factor models by proposing four original applications. In the first chapter, the generalized dynamic factor model of Forni et al. (2002) is employed to explore the predictive content of asset returns in forecasting Consumer Price Index (CPI) inflation and the growth rate of Industrial Production (IP). The connection between stock markets and economic growth is well known: in the fundamental valuation of equity, the stock price equals the discounted stream of expected future dividends. Since future dividends are related to future growth, a revision of prices, and hence of returns, should signal movements in the future growth path. Other important transmission channels, such as Tobin's q theory (Tobin, 1969), the wealth effect, and capital market imperfections, have also been widely studied in this literature. I show that an aggregate index, such as the S&P500, could be misleading if used as a proxy for the informative content of the stock market as a whole.
Despite the widespread practice of treating such an index as a leading variable, only part of the assets included in the index lead the variables of interest. The index's forecasting performance might therefore be poor, leading to sceptical conclusions about the effectiveness of asset prices in forecasting macroeconomic variables. The main idea of the first essay is therefore to analyze the lead-lag structure of the assets composing the S&P500. The classification into leading, lagging and coincident variables is achieved by means of the cross-correlation function cleaned of idiosyncratic noise and short-run fluctuations. I assume that asset returns follow a factor structure; that is, they are the sum of two parts: a common part driven by a few shocks common to all assets, and an idiosyncratic part, which is asset specific. The correlation function, computed on the common part of the series, is not affected by the assets' specific dynamics and should provide information only on the series driven by the same common factors. Once the leading series are identified, they are grouped within the economic sector they belong to. The predictive content of these sector aggregates in forecasting IP growth and CPI inflation is then explored and compared with the forecasting power of the S&P500 composite index. The forecasting exercise proceeds in three steps. First, in an autoregressive (AR) model I choose the truncation lag that minimizes the Mean Square Forecast Error (MSFE) in 11 years of out-of-sample simulations for 1, 6 and 12 steps ahead, for both the IP growth rate and CPI inflation. Second, the S&P500 is added as an explanatory variable to the previous AR specification; repeating the simulation exercise yields only very small improvements in the MSFE statistics. Third, sector-level averages of the leading stock-return series are added as additional explanatory variables to the benchmark regression. Remarkable improvements over the benchmark specification are achieved, especially at the one-year forecast horizon. Significant improvements are also achieved at the shorter forecast horizons when the leading series of the technology and energy sectors are used. The second chapter of this thesis disentangles the sources of aggregate risk and measures the extent of co-movements in five European stock markets. Based on the static factor model of Stock and Watson (2002), it proposes a new method for measuring the impact of international, national and industry-specific shocks. The process of European economic and monetary integration with the advent of the EMU has been a central issue for investors and policy makers. During these years, the number of studies on the integration and linkages among European stock markets has increased enormously. Given their forward-looking nature, stock prices are considered a key variable for gauging developments in economic and financial markets. Therefore, measuring the extent of co-movement between European stock markets has become, especially in recent years, one of the main concerns both for policy makers, who want to shape their policy responses appropriately, and for investors, who need to adapt their hedging strategies to the new political and economic environment. An optimal portfolio allocation strategy is based on a timely identification of the factors affecting asset returns.
So far, literature dating back to Solnik (1974) has identified national factors as the main contributors to the co-variation among stock returns, with industry factors playing a marginal role. The increasing financial and economic integration of recent years, fostered by the decline of trade barriers and greater policy coordination, should have strongly reduced the importance of national factors and increased that of global determinants, such as industry factors. However, somewhat puzzlingly, recent studies have found that country sources are still very important and generally more important than industry ones. This chapter tries to cast some light on these conflicting results. It proposes an econometric estimation strategy that is more flexible and better suited to disentangling and measuring the impact of global and country factors. Results point to a declining influence of national determinants and an increasing influence of industry ones, while international influences remain the most important driving forces of excess returns. These findings overturn the results in the literature and have important implications for strategic portfolio allocation policies, which need to be revisited and adapted to the changed financial and economic scenario. The third chapter presents a new stylized fact that can help discriminate among alternative explanations of U.S. macroeconomic stability. The main finding is that the fall in time-series volatility is associated with a sizable decline, of the order of 30% on average, in the predictive accuracy of several widely used forecasting models, including the factor models proposed by Stock and Watson (2002). This pattern is not limited to measures of inflation but extends to several indicators of real economic activity and to interest rates. The generalized fall in predictive ability after the mid-1980s is particularly pronounced for forecast horizons beyond one quarter. Furthermore, this empirical regularity is not specific to a single method; rather, it is a common feature of all models, including those used by public and private institutions. In particular, the forecasts for output and inflation of the Fed's Greenbook and the Survey of Professional Forecasters (SPF) are significantly more accurate than a random walk only before 1985. After this date, in contrast, the hypothesis of equal predictive ability between naive random-walk forecasts and the predictions of those institutions is not rejected at any horizon, with the sole exception of the current quarter. The results of this chapter may also be of interest for the empirical literature on asymmetric information. Romer and Romer (2000), for instance, consider a sample ending in the early 1990s and find that the Fed produced more accurate forecasts of inflation and output than several commercial providers. The results here imply that the informational advantage of the Fed over those private forecasters is in fact limited to the 1970s and the beginning of the 1980s. In contrast, during the last two decades no forecasting model is better than "tossing a coin" beyond the first-quarter horizon, implying that on average uninformed economic agents can effectively anticipate future macroeconomic developments. On the other hand, econometric models and economists' judgement are quite helpful for forecasts over the very short horizon, which is relevant for conjunctural analysis.
Moreover, the literature on forecasting methods, recently surveyed by Stock and Watson (2005), has devoted a great deal of attention to identifying the best model for predicting inflation and output. The majority of studies, however, are based on full-sample periods. The main findings in the chapter reveal that most of the full-sample predictability of U.S. macroeconomic series arises from the years before 1985. Long time series appear to attach a far larger weight to the earlier sub-sample, which is characterized by a larger volatility of inflation and output. Results also suggest that some caution should be used in evaluating the performance of alternative forecasting models on the basis of a pool of different sub-periods, as full-sample analyses are likely to miss parameter instability. The fourth chapter performs a detailed forecast comparison between the static factor model of Stock and Watson (2002) (SW) and the dynamic factor model of Forni et al. (2005) (FHLR). It is not the first work to perform such an evaluation: Boivin and Ng (2005) focus on a very similar problem, while Stock and Watson (2005) compare the performance of a larger class of predictors. The SW and FHLR methods essentially differ in the computation of the forecast of the common component; in particular, they differ in the estimation of the factor space and in the way projections onto this space are performed. In SW, the factors are estimated by static Principal Components (PC) of the sample covariance matrix, and the forecast of the common component is simply the projection of the predicted variable on the factors. FHLR propose efficiency improvements in two directions. First, they estimate the common factors by Generalized Principal Components (GPC), in which observations are weighted according to their signal-to-noise ratio. Second, they impose the constraints implied by the dynamic factor structure when the variables of interest are projected on the common factors. Specifically, they take into account the leading and lagging relations across series by means of principal components in the frequency domain, which allows for an efficient aggregation of variables that may be out of phase. Whether these efficiency improvements help forecasting in a finite sample is, however, an empirical question, and the literature has not yet reached a consensus. Stock and Watson (2005) show that both methods perform similarly (although they focus on the weighting of the idiosyncratic component rather than on the dynamic restrictions), while Boivin and Ng (2005) show that SW's method largely outperforms FHLR's and, in particular, conjecture that the dynamic restrictions implied by the method harm the forecast accuracy of the model. This chapter tries to shed some new light on these conflicting results. It focuses on the Industrial Production index (IP) and the Consumer Price Index (CPI) and bases the evaluation on a simulated out-of-sample forecasting exercise. The data set, borrowed from Stock and Watson (2002), consists of 146 monthly series for the US economy spanning 1959 to 1999. In order to isolate and evaluate specific characteristics of the methods, a procedure is designed in which the two non-parametric approaches are nested in a common framework. In addition, for both versions of the factor model forecasts, the chapter studies the contribution of the idiosyncratic component to the forecast.
Other non-core aspects of the models are also investigated: robustness with respect to the choice of the number of factors and to variable transformations. Finally, the chapter examines the sub-sample performance of the factor-based forecasts. The purpose of this exercise is to design an experiment for assessing the contribution of the core characteristics of the different models to forecasting performance and for discussing auxiliary issues; hopefully it may also serve as a guide for practitioners in the field. As in Stock and Watson (2005), results show that the efficiency improvements due to the weighting of the idiosyncratic components do not lead to significantly more accurate forecasts, but, in contrast to Boivin and Ng (2005), it is shown that the dynamic restrictions imposed by the procedure of Forni et al. (2005) are not harmful to predictability. The main conclusion is that the two methods perform similarly and produce highly collinear forecasts.
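As a rough, hedged sketch of the static Stock and Watson (2002) approach compared in this chapter — factors extracted by principal components from a standardized panel and then used as regressors in an h-step-ahead projection — one might write something like the following; the simulated data, the choice of two factors, and the variable names are illustrative assumptions, not the chapter's actual set-up:

```python
import numpy as np

def pc_factor_forecast(X, y, n_factors=2, h=1):
    """Static principal-components factor forecast in the spirit of
    Stock and Watson (2002). X is a (T x N) panel of predictors,
    y the (T,) target series; returns an h-step-ahead forecast."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)        # standardize the panel
    U, S, Vt = np.linalg.svd(Z, full_matrices=False)
    F = U[:, :n_factors] * S[:n_factors]            # estimated factors (T x r)

    # Regress y_{t+h} on a constant and the factors observed at time t
    R = np.column_stack([np.ones(len(F) - h), F[:-h]])
    beta, *_ = np.linalg.lstsq(R, y[h:], rcond=None)

    # The forecast uses the most recent factor observation
    return np.concatenate([[1.0], F[-1]]) @ beta

# Toy usage with simulated data
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))
y = 0.5 * X[:, 0] + 0.1 * rng.standard_normal(200)
print(pc_factor_forecast(X, y, n_factors=2, h=1))
```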
137

Are international stock markets correlated? : Comparing NIKKEI, Dow Jones and Dax in the periods 1991-2000 and 2001-2010

Fan, Yang January 2011 (has links)
With the process of financial globalization, many thousands of stock traders and brokers seek the best portfolio diversification. Ever since the emergence of stock exchanges, whether international stock markets are correlated has attracted increasing attention from investors. Based on the augmented Dickey-Fuller (ADF) test and the error-correction model (ECM), this paper tests for cointegration among three of the biggest stock exchanges in the world. Two periods, 1991-2000 and 2001-2010, are studied. The main finding is that there is no long-run cointegration among the tested markets, but in the short run the Dow Jones Industrial Average (DJIA) affects the Deutscher Aktienindex (DAX) and the Nikkei Heikin Kabuka 225 (NIKKEI 225).
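A minimal sketch of the two-step strategy described here — ADF unit-root tests, an Engle-Granger cointegration test, and a simple error-correction regression — might look like the following with statsmodels; the simulated series and the single-lag ECM specification are placeholder assumptions, not the paper's actual data or model:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller, coint

def cointegration_and_ecm(y, x):
    """y, x: log price-index series (1-D numpy arrays) of equal length."""
    # Step 1: ADF unit-root tests on the levels
    print("ADF p-value (y):", adfuller(y)[1])
    print("ADF p-value (x):", adfuller(x)[1])

    # Step 2: Engle-Granger cointegration test
    t_stat, p_value, _ = coint(y, x)
    print("Engle-Granger cointegration p-value:", p_value)

    # Step 3: error-correction model on first differences,
    # using the lagged residual from the long-run regression
    long_run = sm.OLS(y, sm.add_constant(x)).fit()
    ect = long_run.resid                       # error-correction term
    dy, dx = np.diff(y), np.diff(x)
    ecm = sm.OLS(dy, sm.add_constant(np.column_stack([dx, ect[:-1]]))).fit()
    return ecm

# Toy usage with two cointegrated random walks
rng = np.random.default_rng(1)
common = np.cumsum(rng.standard_normal(500))
y = common + 0.2 * rng.standard_normal(500)
x = common + 0.2 * rng.standard_normal(500)
print(cointegration_and_ecm(y, x).params)
```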
138

Stock Market Co-Movement and Volatility Spillover between USA and South Africa

Yonis, Manex January 2011 (has links)
No description available.
139

Essays on Asset Prices

Kim, Sang Bong 16 January 2010 (has links)
In this dissertation I explain the relationship among inflation volatility, rational bubbles, and asset prices. In addition, I investigate the transmission of asset prices and volatility among countries. In the second chapter, which deals with the relationship between inflation volatility and asset prices, my empirical analysis shows that real stock returns tend to co-vary negatively with expected inflation during periods of stable inflation, but co-vary positively with expected inflation during periods of volatile inflation, for 16 countries. To investigate the relationship between rational bubbles and asset prices in the third chapter, I formulate an information-error model that allows one to derive a measure of non-fundamentals in stock prices in a straightforward manner. This study provides a new method by specifying rational-bubble measures that follow the Weibull distribution. As a result, my empirical analysis is the first step in applying survival analysis to bubbles, and it reveals preliminary evidence that the bursting rate increases at a decreasing rate for extraneous or intrinsic bubbles in the U.S. stock market. In the fourth chapter, which deals with the transmission of asset prices and volatility, I investigate how the 1997 crisis changed the Korean market by focusing on price and volatility spillovers from the U.S., Chinese, and Japanese markets. I use daily stock prices from January 3, 1995 to July 31, 2007 and employ an EGARCH model. New information on stock prices originating in the U.S. market was the most strongly transmitted to the Korean market across all periods. The price spillover effect from the Japanese market to the Korean market became stronger from the crisis period onward. The influence of U.S. and Japanese innovations on market volatility increased after the crisis period. However, the magnitude of spillover effects from the Chinese market to the Korean market remained small and stable between the pre- and post-crisis periods, and the volatility spillover effect remained stable across all periods. Asymmetry in the spillover effects on market volatility was pronounced in the Korean market after the financial crisis.
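As a rough illustration of the class of models used in the fourth chapter (not the dissertation's own specification), the Python arch package can fit an EGARCH model with an asymmetry term and, as a simplified one-market stand-in for the price-spillover channel, a lagged foreign return in the mean equation; all series and parameter choices below are assumptions:

```python
import numpy as np
from arch import arch_model

# Simulated stand-ins for Korean index returns and lagged U.S. returns (in
# percent); the actual study uses daily data from 1995 to 2007.
rng = np.random.default_rng(2)
us_lagged = rng.standard_normal(2500)
kr_returns = 0.3 * us_lagged + rng.standard_normal(2500)

# EGARCH(1,1) with an asymmetry (leverage) term; Student-t errors and the
# least-squares mean equation with one exogenous regressor are assumptions.
model = arch_model(kr_returns, x=us_lagged[:, None], mean="LS",
                   vol="EGARCH", p=1, o=1, q=1, dist="t")
result = model.fit(disp="off")
print(result.params)  # mean coefficients plus omega, alpha, gamma, beta, nu
```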
140

Preferred Stock and the Debt-Equity Hybrid Puzzle: An Analysis Using Credit Ratings

Strawser, William 2011 May 1900 (has links)
This study investigates the effect of preferred stock on the credit ratings assessed by professional credit analysts. Preferred stock inherently contains features of both debt and equity financing. Hence, the nature of preferred stock has posed a puzzle for the efforts of accounting regulators, such as the Financial Accounting Standards Board, to classify it consistently within the existing framework established by financial reporting standards. I find evidence that the association of preferred stock with credit analysts' assessments of credit risk depends on two factors. First, the association of preferred stock with credit ratings varies by the type of preferred stock: preferred stock that is redeemable is negatively associated with credit ratings, while non-redeemable preferred stock bears no consistent association with credit ratings. Second, the negative association of redeemable preferred stock with credit ratings is sensitive to the firm's financial condition: for firms in poor financial health, the negative association dissipates. This is consistent with preferred stock's inability to drive an insolvent firm into bankruptcy.
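As a purely illustrative, hedged sketch of the interaction design this abstract implies (credit ratings would normally call for an ordered-response model; the plain OLS, the simulated data, and all variable names here are assumptions), one could test whether the redeemable-preferred association weakens in financial distress roughly as follows:

```python
import numpy as np
import statsmodels.api as sm

def rating_regression(rating, redeemable_pref, nonredeemable_pref, distress, controls):
    """OLS of a numeric credit-rating score on preferred-stock intensity,
    a distress indicator, and their interaction (illustrative only)."""
    X = np.column_stack([
        redeemable_pref,                 # redeemable preferred / total assets
        nonredeemable_pref,              # non-redeemable preferred / total assets
        distress,                        # 1 if the firm is financially distressed
        redeemable_pref * distress,      # does the negative association dissipate?
        controls,
    ])
    return sm.OLS(rating, sm.add_constant(X)).fit()

# Toy usage with simulated data
rng = np.random.default_rng(3)
n = 1000
red, nonred = rng.random(n), rng.random(n)
distress = (rng.random(n) < 0.2).astype(float)
controls = rng.standard_normal((n, 2))           # e.g. size and leverage proxies
rating = (10 - 2 * red + 2 * red * distress
          + controls @ np.array([0.5, -0.3]) + rng.standard_normal(n))
print(rating_regression(rating, red, nonred, distress, controls).params)
```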
