1 |
Three essays on S&P 500 Index constituent changes / Ivanov, Stoyu I. January 2009
Thesis (Ph.D.)--University of Nebraska-Lincoln, 2009. / Title from title screen (site viewed October 13, 2009). PDF text: 118 p. ; 11 Mb. UMI publication number: AAT 3358959. Includes bibliographical references. Also available in microfilm and microfiche formats.
|
2 |
Index inclusion effect: growth vs. value / Lee, Sang H., January 2008
Thesis (B.A.)--Haverford College, Dept. of Economics, 2008. / Includes bibliographical references.
|
3 |
Excessive margin requirements and intermarket derivative exchange competition: a study of the effect of risk management on market microstructure / Dutt, Hans R., January 2008
Thesis (Ph.D.)--George Mason University, 2008. / Vita: p. 75. Thesis director: Willem Thorbeck. Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Economics. Title from PDF t.p. (viewed Aug. 27, 2008). Includes bibliographical references (p. 70-74). Also issued in print.
|
4 |
Variable Clustering Methods and Applications in Portfolio Selection / Xu, Xiao. January 2021
This thesis introduces three variable clustering methods designed in the context of diversified portfolio selection. The motivation is to cluster financial assets in order to identify a small set of assets to approximate the level of diversification of the whole universe of stocks.
First, we develop a data-driven approach to variable clustering based on a correlation blockmodel, in which assets in the same cluster have the same correlations with all other assets. Under the correlation blockmodel, the assets in the same cluster are controlled by the same latent factor. In addition, each cluster forms an equivalence class among assets, in the sense that a portfolio consisting of one stock from each cluster has the same correlation matrix regardless of the specific stocks chosen. We devise an algorithm named ACC (Asset Clustering through Correlation) to detect the clusters, with theoretical analysis and practical guidance for tuning its parameter.
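As a rough illustration of the blockmodel idea (not the ACC algorithm itself), assets whose correlation profiles with the rest of the universe agree can be grouped greedily; the tolerance `tol` and the toy 4-asset correlation matrix below are made up for the example:

```python
import numpy as np

def cluster_by_correlation_profile(corr, tol=0.3):
    """Greedy toy clustering: two assets join the same cluster when their
    correlation rows (excluding the pair's own entries) are close.
    An illustrative stand-in for ACC, not the thesis's algorithm."""
    n = corr.shape[0]
    labels = [-1] * n
    next_label = 0
    for i in range(n):
        if labels[i] != -1:
            continue
        labels[i] = next_label
        for j in range(i + 1, n):
            if labels[j] != -1:
                continue
            mask = np.ones(n, dtype=bool)
            mask[[i, j]] = False  # compare profiles against all OTHER assets
            if np.max(np.abs(corr[i, mask] - corr[j, mask])) < tol:
                labels[j] = next_label
        next_label += 1
    return labels

# Block correlation matrix: assets {0,1} share one latent factor, {2,3} another.
corr = np.array([
    [1.0, 0.9, 0.2, 0.2],
    [0.9, 1.0, 0.2, 0.2],
    [0.2, 0.2, 1.0, 0.8],
    [0.2, 0.2, 0.8, 1.0],
])
print(cluster_by_correlation_profile(corr))  # → [0, 0, 1, 1]
```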
Our second method studies a multi-factor block model, a generalization of the correlation blockmodel. Under this model, assets in the same cluster are governed by a set of multiple latent factors rather than a single factor. Observations of the asset returns then lie near a union of low-dimensional subspaces. We propose a subspace clustering method that uses square-root LASSO nodewise regression to identify these subspaces and recover the corresponding clusters. Through theoretical analysis, we provide practical and straightforward guidance for choosing the regularization parameters.
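The nodewise-regression idea can be sketched with a plain LASSO by coordinate descent standing in for the square-root LASSO variant the thesis analyzes; the two-factor structure, the penalty level `lam`, and the soft-threshold scaling below are all illustrative assumptions:

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Plain LASSO via cyclic coordinate descent (soft-thresholding) --
    an illustrative stand-in for the square-root LASSO."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ b + X[:, j] * b[j]          # partial residual
            rho = X[:, j] @ r
            b[j] = np.sign(rho) * max(abs(rho) - lam * n, 0.0) / col_sq[j]
    return b

def nodewise_supports(R, lam=0.2):
    """Regress each asset's returns on all others; nonzero coefficients
    link assets lying in the same latent subspace."""
    n, p = R.shape
    supports = []
    for i in range(p):
        X = np.delete(R, i, axis=1)
        b = lasso_cd(X, R[:, i], lam)
        idx = [j for j in range(p) if j != i]
        supports.append({idx[k] for k in np.flatnonzero(np.abs(b) > 1e-8)})
    return supports

rng = np.random.default_rng(0)
f1, f2 = rng.standard_normal((2, 500))
noise = 0.05 * rng.standard_normal((500, 4))
R = np.column_stack([f1, f1, f2, f2]) + noise   # assets {0,1} and {2,3} share factors
print(nodewise_supports(R))                      # expect {0,1} and {2,3} to link
```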
Existing subspace clustering methods based on regularized nodewise regression often arbitrarily choose the form of the regularization. The parameter that controls the regularization is also often determined exogenously or by cross-validation. Our third method theoretically unifies the choices of the regularizer and its parameter by formulating a distributionally robust version of nodewise regression. In this new formulation, we optimize the worst-case square loss within a region of distributional uncertainty around the empirical distribution. We show that this formulation naturally leads to a spectral-norm regularized optimization problem. In addition, the parameter that controls the regularization is nothing but the radius of the uncertainty region and can be determined easily based on the degree of uncertainty in the data. We also propose an alternating direction method of multipliers (ADMM) algorithm for efficient implementation.
Finally, we design and implement an empirical analysis framework to verify the performance of the three proposed clustering methods. This framework consists of four main steps: clustering, stock selection, asset allocation, and portfolio backtesting. The main idea is to select stocks from each cluster to construct a portfolio and then assess the clustering method by analyzing the portfolio's performance. Using this framework, we can easily compare new clustering methods with existing ones by creating portfolios with the same selection and allocation strategies. We apply this framework to the daily returns of the S&P 500 stock universe. Specifically, we compare portfolios constructed using different clustering methods and asset allocation strategies with the S&P 500 Index benchmark. Portfolios from our proposed clustering methods outperform the benchmark significantly. They also perform favorably compared to other existing clustering algorithms in terms of the risk-adjusted return.
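The four steps above can be sketched as a plug-in pipeline; every strategy function and the simulated return matrix below are hypothetical placeholders, not the methods evaluated in the thesis:

```python
import numpy as np

def backtest_clustered_portfolio(returns, cluster_fn, select_fn, allocate_fn):
    """Toy version of the four-step framework: cluster, select one stock
    per cluster, allocate, and report the realized portfolio path."""
    labels = cluster_fn(returns)                 # 1. clustering
    picks = select_fn(returns, labels)           # 2. stock selection
    weights = allocate_fn(returns[:, picks])     # 3. asset allocation
    path = returns[:, picks] @ weights           # 4. (in-sample) backtest
    return picks, weights, path

# Hypothetical plug-ins: a fixed 2-cluster split, lowest-variance stock
# per cluster, and equal weights. Returns are simulated, not S&P 500 data.
rng = np.random.default_rng(1)
rets = rng.normal(0.0005, 0.01, size=(250, 6))

def cluster_fn(R):
    return np.array([0, 0, 0, 1, 1, 1])

def select_fn(R, labels):
    picks = []
    for c in np.unique(labels):
        members = np.flatnonzero(labels == c)
        picks.append(int(members[np.argmin(R[:, members].var(axis=0))]))
    return picks

def allocate_fn(R):
    return np.full(R.shape[1], 1.0 / R.shape[1])

picks, w, path = backtest_clustered_portfolio(rets, cluster_fn, select_fn, allocate_fn)
print(picks, w.sum(), path.shape)
```

Swapping in a different `cluster_fn` while holding `select_fn` and `allocate_fn` fixed gives the like-for-like comparison the framework is built for.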
|
5 |
Análise dos ratings de classificação de risco soberano / Miyake, Mauro. 13 July 2001
An analysis of the criteria that determine the sovereign risk ratings issued by Standard & Poor's, highlighting economic and political variables. Empirical linear regression tests are performed, and the coefficients that determine sovereign risk in foreign currency are analyzed.
|
6 |
Data Science in Finance: Robustness, Fairness, and Strategic Modeling / Li, Mike. January 2024
In the multifaceted landscape of financial markets, the understanding and application of data science methods are crucial for achieving robustness, fairness, and strategic advancement. This dissertation addresses these critical areas through three interconnected studies.
The first study investigates the problem of data imbalance, with particular emphasis on financial applications such as credit risk assessment, where the prevalence of non-defaulting entities overshadows defaulting ones. Traditional classification models often falter under such imbalances, leading to biased predictions. By analyzing linear discriminant functions under conditions where one class's sample size grows indefinitely while the other remains fixed, this study reveals that certain parameters stabilize, providing robust predictions. This robustness ensures model reliability even in skewed data environments.
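A toy numerical illustration of this stabilization (not the thesis's analysis): hold a small "defaulter" sample fixed, grow the "non-defaulter" sample, and watch the Fisher discriminant direction settle. The class means, dimensions, and sample sizes are fabricated:

```python
import numpy as np

def fisher_direction(X0, X1):
    """Fisher linear discriminant direction: w = S_pooled^{-1} (mu1 - mu0)."""
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    S = (np.cov(X0.T) * (len(X0) - 1) + np.cov(X1.T) * (len(X1) - 1)) \
        / (len(X0) + len(X1) - 2)
    return np.linalg.solve(S, mu1 - mu0)

rng = np.random.default_rng(2)
minority = rng.normal([1.0, 1.0], 1.0, size=(50, 2))   # fixed "defaulters"
dirs = {}
for n in (100, 1000, 10000):                           # growing "non-defaulters"
    majority = rng.normal([0.0, 0.0], 1.0, size=(n, 2))
    w = fisher_direction(majority, minority)
    dirs[n] = w / np.linalg.norm(w)
    print(n, dirs[n])   # the normalized direction settles down as n grows
```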
The second study explores anomalies in option pricing, specifically the total positivity of order 2 (TP₂) in call options and the reverse sign rule of order 2 (RR₂) in put options within the S&P 500 index. By examining the empirical significance and occurrence patterns of these violations, the research identifies potential trading opportunities. The findings demonstrate that while these conditions are mostly satisfied, violations can be strategically exploited for consistent positive returns, providing practical insights into profitable trading strategies.
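Reading TP₂ as nonnegativity of every 2×2 minor of the call-price surface in (strike, maturity) — an assumption about the precise definition the thesis uses — a violation scan is short to write; the price grid below is fabricated, not S&P 500 data:

```python
import numpy as np

def tp2_violations(C):
    """List violated 2x2 minors of a call-price grid C (rows: strikes
    ascending, columns: maturities ascending). TP2 is read here as
    C[i1,j1]*C[i2,j2] - C[i1,j2]*C[i2,j1] >= 0 for all i1<i2, j1<j2."""
    m, n = C.shape
    bad = []
    for i1 in range(m):
        for i2 in range(i1 + 1, m):
            for j1 in range(n):
                for j2 in range(j1 + 1, n):
                    minor = C[i1, j1] * C[i2, j2] - C[i1, j2] * C[i2, j1]
                    if minor < 0:
                        bad.append((i1, i2, j1, j2, float(minor)))
    return bad

# A rank-one grid has every 2x2 minor equal to zero, hence no violations;
# nudging one quote down creates a violation of the kind a trade could target.
grid = np.outer([10.0, 6.0, 3.0], [1.0, 1.5, 2.0])
print(tp2_violations(grid))            # → []
grid[1, 1] -= 0.5
print(len(tp2_violations(grid)) > 0)   # → True
```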
The third study addresses the fairness of regulatory stress tests, which are crucial for assessing the capital adequacy of banks. The uniform application of stress test models across diverse banks raises concerns about fairness and accuracy. This study proposes a method to aggregate individual models into a common framework, balancing forecast accuracy and equitable treatment. The research demonstrates that estimating and discarding centered bank fixed effects leads to more reliable and fair stress test outcomes.
The conclusions of these studies highlight the importance of understanding the behavior of commonly used models in handling imbalanced data, the strategic exploitation of option pricing anomalies for profitable trading, and the need for fair regulatory practices to ensure financial stability. Together, these findings contribute to a deeper understanding of data science in finance, offering practical insights for regulators, financial institutions, and traders.
|
7 |
Derivation of Probability Density Functions for the Relative Differences in the Standard and Poor's 100 Stock Index Over Various Intervals of Time / Bunger, R. C. (Robert Charles). 08 1900
In this study a two-part mixed probability density function was derived which described the relative changes in the Standard and Poor's 100 Stock Index over various intervals of time. The density function is a mixture of two different halves of normal distributions. Optimal values for the standard deviations for the two halves and the mean are given. Also, a general form of the function is given which uses linear regression models to estimate the standard deviations and the means.
The density functions allow stock market participants trading index options and futures contracts on the S & P 100 Stock Index to determine probabilities of success or failure of trades involving price movements of certain magnitudes in given lengths of time.
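A sketch of the two-half-normal form, assuming the two pieces join continuously at the mean (the standard "two-piece normal" construction); the parameter values below are placeholders, since the fitted values are given in the thesis:

```python
import math

def two_piece_normal_pdf(x, m, s1, s2):
    """Two-piece ('split') normal density: a left half-normal with scale s1
    and a right half-normal with scale s2, joined continuously at m.
    The constant 2 / (sqrt(2*pi) * (s1 + s2)) makes it integrate to one."""
    c = 2.0 / (math.sqrt(2.0 * math.pi) * (s1 + s2))
    s = s1 if x < m else s2
    return c * math.exp(-((x - m) ** 2) / (2.0 * s * s))

# Riemann-sum sanity check that the density integrates to ~1
# (placeholder parameters: mean 0, left scale 0.01, right scale 0.02).
step = 1e-3
total = sum(two_piece_normal_pdf(-0.5 + k * step, 0.0, 0.01, 0.02) * step
            for k in range(int(1.0 / step)))
print(round(total, 3))  # ≈ 1.0
```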
|
8 |
Reinforcement Learning for Continuous-Time Linear-Quadratic Control and Mean-Variance Portfolio Selection: Regret Analysis and Empirical Study / Huang, Yilie. January 2025
This thesis explores continuous-time reinforcement learning (RL) for stochastic control through two intimately related problems: mean-variance (MV) portfolio selection and linear-quadratic (LQ) control. For the former, we investigate markets where stock prices are diffusion processes driven by observable factors that are themselves diffusion processes, while the coefficients of these processes are unknown. Based on the recently developed RL theory for diffusion processes, we present data-driven algorithms that learn pre-committed investment strategies directly, without attempting to learn or estimate the market coefficients. For multi-stock Black–Scholes markets without factors, we develop a baseline algorithm and prove its performance guarantee by deriving a sublinear regret bound in terms of the Sharpe ratio.
To optimize performance and facilitate real-world application, we further adapt the baseline algorithm into four variants. These enhancements incorporate techniques such as real-time online learning, offline pre-training, and mechanisms for managing leverage constraints and trading frequency. Following this, we perform a comprehensive empirical study comparing our RL algorithms against fifteen established portfolio allocation strategies on S&P 500 constituents. The study employs multiple performance metrics, including annualized returns, variations of the Sharpe ratio, maximum drawdown, and recovery time. The results demonstrate that our continuous-time RL strategies are consistently among the best, especially in a volatile bear market, and decisively outperform their model-based continuous-time counterparts by significant margins.
We next study RL for a class of continuous-time LQ control problems for diffusions, where states are scalar-valued and running control rewards are absent, but volatilities of the state processes depend on both state and control variables. We apply a model-free approach that relies neither on knowledge of model parameters nor on their estimates, and devise an actor-critic algorithm to learn the optimal policy parameter directly. Our main contributions include the introduction of an exploration schedule and a regret analysis of the proposed algorithm. We provide the convergence rate of the policy parameter to the optimal one, and prove that the algorithm achieves a regret bound of O(N^{3/4}) up to a logarithmic factor, where N is the number of learning episodes. We conduct a simulation study to validate the theoretical results and demonstrate the effectiveness and reliability of the proposed algorithm. We also perform numerical comparisons between our method and those of recent model-based stochastic LQ RL studies adapted to the state- and control-dependent volatility setting, demonstrating better performance of the former in terms of regret bounds.
Along a different direction, we present a policy gradient-based actor-critic algorithm featuring adaptive exploration in both actor and critic. To wit, both the variance of the stochastic policy (actor) and the temperature parameter (critic) decrease over time according to certain schedules. In particular, endogenizing the temperature parameter reduces the need for manual tuning. Despite this added flexibility, the algorithm maintains the same sublinear regret bound of O(N^{3/4}) as achieved under the deterministic schedules. In numerical experiments, we evaluate the convergence rate and regret bound of the proposed algorithm, with results aligning closely with our theoretical findings.
|