1

Three essays on S&P 500 Index constituent changes

Ivanov, Stoyu I. January 2009
Thesis (Ph.D.)--University of Nebraska-Lincoln, 2009. / Title from title screen (site viewed October 13, 2009). PDF text: 118 p. ; 11 Mb. UMI publication number: AAT 3358959. Includes bibliographical references. Also available in microfilm and microfiche formats.
2

Index inclusion effect: growth vs. value

Lee, Sang H., January 2008
Thesis (B.A.)--Haverford College, Dept. of Economics, 2008. / Includes bibliographical references.
3

Excessive margin requirements and intermarket derivative exchange competition: a study of the effect of risk management on market microstructure

Dutt, Hans R., January 2008
Thesis (Ph.D.)--George Mason University, 2008. / Vita: p. 75. Thesis director: Willem Thorbeck. Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Economics. Title from PDF t.p. (viewed Aug. 27, 2008). Includes bibliographical references (p. 70-74). Also issued in print.
4

Variable Clustering Methods and Applications in Portfolio Selection

Xu, Xiao January 2021
This thesis introduces three variable clustering methods designed in the context of diversified portfolio selection. The motivation is to cluster financial assets in order to identify a small set of assets that approximates the level of diversification of the whole universe of stocks.

First, we develop a data-driven approach to variable clustering based on a correlation blockmodel, in which assets in the same cluster have the same correlations with all other assets. Under the correlation blockmodel, the assets in the same cluster are controlled by the same latent factor. In addition, each cluster forms an equivalence class among assets, in the sense that a portfolio consisting of one stock from each cluster will have the same correlation matrix regardless of the specific stocks chosen. We devise an algorithm named ACC (Asset Clustering through Correlation) to detect the clusters, with theoretical analysis and practical guidance for tuning its parameter.

Our second method studies a multi-factor block model, a generalization of the correlation blockmodel in which assets in the same cluster are governed by a set of latent factors rather than a single factor. Under this model, observations of the asset returns lie near a union of low-dimensional subspaces. We propose a subspace clustering method that uses square-root LASSO nodewise regression to identify these subspaces and recover the corresponding clusters. Through theoretical analysis, we provide practical and straightforward guidance for choosing the regularization parameters.

Existing subspace clustering methods based on regularized nodewise regression often choose the form of the regularization arbitrarily, and the parameter that controls it is typically determined exogenously or by cross-validation. Our third method theoretically unifies the choice of the regularizer and its parameter by formulating a distributionally robust version of nodewise regression. In this formulation, we optimize the worst-case square loss within a region of distributional uncertainty around the empirical distribution. We show that this formulation naturally leads to a spectral-norm regularized optimization problem, and that the parameter controlling the regularization is nothing but the radius of the uncertainty region, which can be determined easily from the degree of uncertainty in the data. We also propose an alternating direction method of multipliers (ADMM) algorithm for efficient implementation.

Finally, we design and implement an empirical analysis framework to verify the performance of the three proposed clustering methods. The framework consists of four main steps: clustering, stock selection, asset allocation, and portfolio backtesting. The main idea is to select stocks from each cluster to construct a portfolio and then assess the clustering method by analyzing the portfolio's performance. Using this framework, we can easily compare new clustering methods with existing ones by creating portfolios with the same selection and allocation strategies. We apply the framework to the daily returns of the S&P 500 stock universe, comparing portfolios constructed with different clustering methods and asset allocation strategies against the S&P 500 Index benchmark. Portfolios built from our proposed clustering methods outperform the benchmark significantly, and they also perform favorably against other existing clustering algorithms in terms of risk-adjusted return.
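
The cluster-select-allocate-backtest pipeline described in this abstract can be illustrated with a minimal sketch. The code below is not the thesis's ACC algorithm or its subspace/robust variants: ordinary hierarchical clustering on correlation distance, one representative stock per cluster, and equal weights stand in for the proposed methods, and the input format (daily prices in a pandas DataFrame) and the number of clusters are assumptions.

```python
# Illustrative sketch of the cluster -> select -> allocate -> backtest pipeline
# described in the abstract above. NOT the thesis's ACC algorithm; hierarchical
# clustering on correlation distance is used as a stand-in.
import numpy as np
import pandas as pd
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform


def cluster_select_backtest(prices: pd.DataFrame, n_clusters: int = 20):
    """prices: daily close prices, one column per ticker (assumed input format)."""
    returns = prices.pct_change().dropna()

    # Step 1: clustering -- distance derived from the return correlation matrix.
    corr = returns.corr()
    dist = np.sqrt(0.5 * (1.0 - corr))          # correlation distance in [0, 1]
    condensed = squareform(dist.values, checks=False)
    labels = fcluster(linkage(condensed, method="average"),
                      t=n_clusters, criterion="maxclust")

    # Step 2: stock selection -- keep one representative per cluster
    # (here: the stock most correlated on average with the rest of its cluster).
    selected = []
    for k in np.unique(labels):
        members = corr.columns[labels == k]
        selected.append(corr.loc[members, members].mean().idxmax())

    # Step 3: asset allocation -- equal weights as a simple placeholder.
    weights = pd.Series(1.0 / len(selected), index=selected)

    # Step 4: backtest -- portfolio daily returns and annualized Sharpe ratio.
    port_ret = (returns[selected] * weights).sum(axis=1)
    sharpe = np.sqrt(252) * port_ret.mean() / port_ret.std()
    return selected, port_ret, sharpe
```

Swapping Step 1 for a different clustering method while holding the selection and allocation steps fixed is the kind of like-for-like comparison the framework in the abstract is meant to support.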
5

Análise dos ratings de classificação de risco soberano [Analysis of sovereign credit risk ratings]

Miyake, Mauro 13 July 2001
Analysis of the criteria that determine the sovereign risk ratings issued by the rating agency Standard & Poor's, highlighting economic and political variables. Empirical linear regression tests are carried out, together with an analysis of the coefficients that determine foreign-currency sovereign risk.
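
A hedged sketch of the kind of linear-regression test this abstract mentions: a numeric encoding of the S&P sovereign rating regressed on macroeconomic and political variables. The explanatory variables, the rating scale, and the data below are hypothetical placeholders, not the thesis's actual dataset.

```python
# Toy regression of a numeric rating score on macro/political variables.
# All variable names and values are hypothetical illustrations.
import pandas as pd
import statsmodels.api as sm

data = pd.DataFrame({
    "rating_score":    [20, 19, 17, 14, 12, 10, 8, 6],        # e.g. AAA=20, lower = riskier
    "gdp_per_capita":  [45.0, 40.0, 30.0, 18.0, 12.0, 9.0, 6.0, 3.5],   # thousands of USD
    "inflation":       [1.5, 2.0, 3.0, 5.0, 8.0, 12.0, 20.0, 35.0],     # % per year
    "external_debt":   [30.0, 40.0, 50.0, 65.0, 80.0, 90.0, 105.0, 125.0],  # % of GDP
    "default_history": [0, 0, 0, 0, 1, 1, 1, 1],               # prior-default dummy
})

X = sm.add_constant(data[["gdp_per_capita", "inflation",
                          "external_debt", "default_history"]])
model = sm.OLS(data["rating_score"], X).fit()
print(model.params)      # sign and size of each coefficient
print(model.rsquared)    # fit of the rating regression
```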
6

Data Science in Finance: Robustness, Fairness, and Strategic Modeling

Li, Mike January 2024
In the multifaceted landscape of financial markets, the understanding and application of data science methods are crucial for achieving robustness, fairness, and strategic advancement. This dissertation addresses these critical areas through three interconnected studies.

The first study investigates the problem of data imbalance, with particular emphasis on financial applications such as credit risk assessment, where the prevalence of non-defaulting entities overshadows defaulting ones. Traditional classification models often falter under such imbalances, leading to biased predictions. By analyzing linear discriminant functions under conditions where one class's sample size grows indefinitely while the other remains fixed, this study reveals that certain parameters stabilize, providing robust predictions. This robustness ensures model reliability even in skewed data environments.

The second study explores anomalies in option pricing, specifically the total positivity of order 2 (TP₂) in call options and the reverse sign rule of order 2 (RR₂) in put options within the S&P 500 index. By examining the empirical significance and occurrence patterns of these violations, the research identifies potential trading opportunities. The findings demonstrate that while these conditions are mostly satisfied, violations can be strategically exploited for consistent positive returns, providing practical insights into profitable trading strategies.

The third study addresses the fairness of regulatory stress tests, which are crucial for assessing the capital adequacy of banks. The uniform application of stress test models across diverse banks raises concerns about fairness and accuracy. This study proposes a method to aggregate individual models into a common framework, balancing forecast accuracy and equitable treatment. The research demonstrates that estimating and discarding centered bank fixed effects leads to more reliable and fair stress test outcomes.

The conclusions of these studies highlight the importance of understanding the behavior of commonly used models in handling imbalanced data, the strategic exploitation of option pricing anomalies for profitable trading, and the need for fair regulatory practices to ensure financial stability. Together, these findings contribute to a deeper understanding of data science in finance, offering practical insights for regulators, financial institutions, and traders.
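
The TP₂ check in the second study can be illustrated with a small sketch. Assuming the condition is evaluated on a grid of call prices indexed by strike and maturity (a reading of the abstract, not the dissertation's exact setup), a TP₂ violation is a negative 2×2 determinant on that grid:

```python
# Hedged sketch: flag 2x2-determinant (TP2-style) violations on a grid of call
# prices indexed by strike and maturity. The generic inequality checked is
#   C(K1,T1)*C(K2,T2) >= C(K1,T2)*C(K2,T1)  for K1 < K2, T1 < T2.
import itertools
import numpy as np

def tp2_violations(call_prices: np.ndarray, strikes, maturities, tol=1e-12):
    """call_prices[i, j] = price at strikes[i], maturities[j] (assumed layout, sorted ascending)."""
    violations = []
    for i1, i2 in itertools.combinations(range(len(strikes)), 2):
        for j1, j2 in itertools.combinations(range(len(maturities)), 2):
            det = (call_prices[i1, j1] * call_prices[i2, j2]
                   - call_prices[i1, j2] * call_prices[i2, j1])
            if det < -tol:
                violations.append((strikes[i1], strikes[i2],
                                   maturities[j1], maturities[j2], det))
    return violations

# Toy example with made-up prices (rows: strikes, columns: maturities).
strikes = [90, 100, 110]
maturities = [0.25, 0.5]
prices = np.array([[12.0, 14.0],
                   [ 6.0,  8.5],
                   [ 2.5,  4.4]])
print(tp2_violations(prices, strikes, maturities))
```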
7

Derivation of Probability Density Functions for the Relative Differences in the Standard and Poor's 100 Stock Index Over Various Intervals of Time

Bunger, R. C. (Robert Charles) 08 1900
In this study a two-part mixed probability density function was derived which described the relative changes in the Standard and Poor's 100 Stock Index over various intervals of time. The density function is a mixture of two different halves of normal distributions. Optimal values for the standard deviations for the two halves and the mean are given. Also, a general form of the function is given which uses linear regression models to estimate the standard deviations and the means. The density functions allow stock market participants trading index options and futures contracts on the S & P 100 Stock Index to determine probabilities of success or failure of trades involving price movements of certain magnitudes in given lengths of time.
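
A hedged sketch of a two-piece ("split") normal density of the kind described here: two half-normal branches with different standard deviations joined at a common mode, used to estimate the probability of a move of a given magnitude. The parameter values below are illustrative placeholders, not the thesis's fitted values.

```python
# Two-piece normal density and a tail-probability helper (illustrative only).
import math

def split_normal_pdf(x, mu, sigma_left, sigma_right):
    """Density that uses sigma_left below mu and sigma_right above mu."""
    norm = math.sqrt(2.0 / math.pi) / (sigma_left + sigma_right)  # integrates to 1
    sigma = sigma_left if x < mu else sigma_right
    return norm * math.exp(-0.5 * ((x - mu) / sigma) ** 2)

def prob_move_above(threshold, mu, sigma_left, sigma_right, hi=0.5, n=50_000):
    """P(relative change > threshold), by simple midpoint numerical integration."""
    step = (hi - threshold) / n
    return step * sum(split_normal_pdf(threshold + (k + 0.5) * step,
                                       mu, sigma_left, sigma_right) for k in range(n))

# Example: probability of a relative move greater than +2% over one interval,
# with made-up parameters mu = 0, sigma_left = 1.2%, sigma_right = 1.0%.
print(prob_move_above(0.02, mu=0.0, sigma_left=0.012, sigma_right=0.010))
```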
