351 |
Hedge Funds and Their Strategies : An Investigation about Correlation, Market Neutrality and the Improvement of Portfolio Performance. Kuhn, Andreas; Muske, Roland. January 2007.
Reading the daily financial news, it becomes obvious that hedge funds receive considerable attention from financial analysts and politicians. Many people fear the influence of hedge funds on individual companies as well as on the global economy. This study does not judge the behavior of hedge funds. Instead, it focuses on the nature of hedge funds and their mystique. Of particular interest is the common view that they perform market neutrally, which is theoretically achieved through the use of derivatives and short positions. This thesis investigates the feature of market neutrality in depth, since through diversification effects it can improve the overall performance of investors' portfolios in bull as well as bear markets. Hedge funds and their environment, the capital markets, are therefore examined from an academic point of view, with emphasis on the following research questions: 1. Do hedge funds perform market neutrally in bull and bear markets? 2. To what extent should they be included in optimal risky portfolios according to Modern Portfolio Theory and advanced performance measurement tools, considering their degree of market neutrality? This study is based on extensive knowledge of financial and econometric theory. Capital market theories, Modern Portfolio Theory, hedge fund data and econometric knowledge about time series analysis form the basis for further investigation and are necessary to understand the characteristics of hedge funds and hedge fund data. In order to deal with the shortcomings of hedge fund data, an analytical framework for the preparation of data is created that enables the authors to begin analyzing these questions. The framework is applicable to all kinds of hedge funds presented in this thesis and enables readers to test further hedge fund classes themselves.
In a quantitative study, the framework is applied to 2,160 hedge funds from the Barclays Hedge Fund Database, which forms the basis for analyzing market neutrality. Further input for the portfolio optimization consists of 19 hedge fund indices provided by the Greenwich Alternative Investment Hedge Fund Database and 4 benchmark indices for the stock and bond markets. The analysis consists of two parts. For the first research question, various correlation and return matrices are constructed to provide information about the market neutrality of hedge funds. A correlation matrix also serves as important input for the portfolio analysis and therefore forms the basis for the second research question, yielding some fundamental recommendations about the weighting of the various hedge fund classes in optimal risky portfolios. The analysis clearly demonstrated the following findings: 1. Market neutrality has to be rejected for most hedge fund strategies. It is attainable only through strategies that focus more on arbitrage and/or the bond market, and therefore seems to be a by-product rather than a deliberately engineered feature. 2. Only two strategies, equity short and convertible arbitrage, managed to beat the benchmark and improve the overall performance of the portfolio when the specific return distribution of hedge funds is taken into account.
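The portfolio-optimization step described above can be sketched in code. The following is a minimal illustration of how a correlation matrix feeds into a Modern Portfolio Theory tangency portfolio; the return, volatility and correlation figures are made up for the example and are not the thesis data, and `tangency_weights` is a hypothetical helper name.

```python
import numpy as np

def tangency_weights(mu, sigma, corr, rf=0.0):
    """Weights of the optimal risky (tangency) portfolio under Modern
    Portfolio Theory: w is proportional to inv(Cov) @ (mu - rf)."""
    sigma = np.asarray(sigma, dtype=float)
    # Rebuild the covariance matrix from volatilities and correlations.
    cov = np.outer(sigma, sigma) * np.asarray(corr, dtype=float)
    excess = np.asarray(mu, dtype=float) - rf
    raw = np.linalg.solve(cov, excess)
    return raw / raw.sum()

# Illustrative, made-up annual figures for two hedge fund indices and
# one equity benchmark.
mu = [0.08, 0.06, 0.07]
sigma = [0.10, 0.05, 0.15]
corr = [[1.0, 0.2, 0.4],
        [0.2, 1.0, 0.1],
        [0.4, 0.1, 1.0]]
weights = tangency_weights(mu, sigma, corr, rf=0.02)
```

Under this setup, an asset with low volatility and low correlation to the others (the hedge-fund-like second asset) tends to receive a large weight, which is the diversification effect the thesis examines.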
|
352 |
The Swedish Hedge Fund Industry : An Evaluation of Strategies, Risks and Returns. Persson, Martin; Carlsson, Henrik; Eliasson, Sofie. January 2008.
The purpose of this study is to analyze Swedish hedge funds in terms of pursued investment strategies, risks and returns. The study deals with a large amount of quantitative data, and delimitations were used to obtain a sample that better fulfills the purpose of this paper. The time frame chosen to increase validity and reliability was almost four years. Furthermore, the study uses secondary data due to the difficulties and costs associated with obtaining primary data, though this is not considered to lower the quality of the study. The theory section starts by presenting the differences between hedge funds and mutual funds, then focuses on different hedge fund strategies, risks associated with hedge funds and, finally, risk and return measurements. This section provides an overview for the empirical findings and analysis. In the empirical findings and analysis, statistical calculations of the risk measurements standard deviation, Sharpe ratio, tracking error and correlation are conducted for the sample. The results are related to the hedge funds' strategies. The strategies are then weighted against each other. Finally, all strategies are compared to the OMXS to find the most appropriate investment structure for investors. After categorizing the different hedge funds with respect to pursued strategies, the results show clear disparities in risk and returns across the strategies. We found indications of a significant relationship between high return and high risk as well as between low return and low risk.
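The risk measurements named above are standard and easy to compute from return series. The following sketch shows the Sharpe ratio and tracking error for monthly returns; the annualization factor of 12 periods per year is an assumption for illustration, not something the abstract specifies.

```python
import numpy as np

def sharpe_ratio(returns, rf=0.0):
    """Annualised Sharpe ratio from monthly returns (12 periods/year assumed)."""
    ex = np.asarray(returns, dtype=float) - rf
    return np.sqrt(12) * ex.mean() / ex.std(ddof=1)

def tracking_error(returns, benchmark):
    """Annualised standard deviation of active (fund minus index) returns."""
    active = np.asarray(returns, dtype=float) - np.asarray(benchmark, dtype=float)
    return np.sqrt(12) * active.std(ddof=1)

# Toy monthly return series for a fund and a benchmark index.
fund = [0.01, 0.02, 0.03, 0.02]
index = [0.02, 0.02, 0.01, 0.03]
sr = sharpe_ratio(fund)
te = tracking_error(fund, index)
```

A fund that exactly replicates its benchmark has a tracking error of zero; larger values indicate more active deviation from the index.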
|
353 |
Development and Application of Methods to Study Nanoparticle Diffusion Using Intensity Correlation Spectroscopy. January 2011.
The practical application of nanoparticles requires transitioning from well-controlled experimental settings to highly variable "real-life" conditions. Understanding the resulting changes in the behavior and stability of nanoparticles is therefore of paramount importance. This thesis discusses the development and practical application of tools to monitor the behavior of nanoparticles in real time using intensity correlation spectroscopy techniques. I show how correlation spectroscopy can be adapted to nanoparticle systems, and identify particular parameters and settings that are especially vital for heterogeneous systems. Oftentimes nanoparticles have to be labeled to be detected, which can complicate the system under study and can introduce systematic errors into the analysis. Intensity correlation spectroscopy was tested on dye-labeled magnetite nanocrystals. The fluorescence correlation spectroscopy results were surprisingly biased toward a low concentration of aggregates. Scattering and absorption cross-sections of gold nanoparticles are greatly enhanced near the plasmon resonance wavelength, providing strong intrinsic signals for directly visualizing nanoparticles. I show here how scattering and absorption scale with nanoparticle size, and how size heterogeneity within nanoparticle samples translates into the detected signals. One-photon luminescence of gold nanoparticles, an often neglected signal, was also considered. A comparison between one-photon luminescence and scattering correlation spectroscopy revealed that the former has a much smaller bias toward aggregates and is therefore advantageous in systems prone to aggregation. Overall, the work presented here describes the tools and methods that were developed toward a better understanding of nanoparticle behavior in the liquid media where they are to be employed for environmental and biological applications.
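The quantity at the heart of all intensity correlation spectroscopy variants mentioned above is the normalized intensity autocorrelation g(tau) = <dI(t) dI(t+tau)> / <I>^2, estimated from a recorded intensity trace. The following is a minimal direct estimator; the function name and the toy trace are illustrative, not from the thesis.

```python
import numpy as np

def intensity_autocorrelation(trace, max_lag):
    """Normalised intensity autocorrelation
    g(tau) = <dI(t) dI(t+tau)> / <I>^2
    for lags tau = 1 .. max_lag, the quantity fitted in fluorescence or
    scattering correlation spectroscopy to extract diffusion times."""
    trace = np.asarray(trace, dtype=float)
    mean = trace.mean()
    d = trace - mean                       # intensity fluctuations dI(t)
    n = len(trace)
    return np.array([np.dot(d[:n - k], d[k:]) / ((n - k) * mean**2)
                     for k in range(1, max_lag + 1)])

# Toy trace: a strictly alternating signal has negative correlation at
# lag 1 and positive correlation at lag 2.
trace = np.tile([1.0, 3.0], 50)
g = intensity_autocorrelation(trace, 2)
```

In practice the measured g(tau) is fitted with a diffusion model whose characteristic decay time yields the particle diffusion coefficient, and aggregates show up as slow, high-amplitude components.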
|
354 |
On the Potential Use of Small Scale Fire Tests for Screening Steiner Tunnel Results for Spray Foam Insulation. Didomizio, Matthew. 05 1900.
The goal of this study is to assess the potential of using bench-scale fire testing to screen materials for the Steiner tunnel fire test. It is hypothesized that the chemical and physical changes made to a material to improve its fire performance in small scale fire tests will have a predictable response in the Steiner tunnel. This hypothesis is based on the observation that fire test results can, in some cases, provide insight on a material's relative fire hazard, and the assumption that the relative hazard should be consistent across scale.
The ASTM E84 Steiner tunnel test provides a relative ranking of material hazard in two categories. The horizontal Flame Spread Index (FSI) is used to rank the flame hazard of a material, and the Smoke Developed Index (SDI) is used to rank the smoke hazard of a material. Two fire tests are proposed to independently assess each hazard at the bench-scale. The ASTM E1354 cone calorimeter test measures a material's open-flaming heat release rate; it is proposed that the cone calorimeter test can be used to assess a material's relative flame hazard. The ISO 5659-2 smoke density chamber test measures a material's closed-environment smoke development; it is proposed that the smoke density chamber test can be used to assess a material's relative smoke hazard.
The material selected for this study is fire-retarded sprayed polyurethane foam (FRSPF) insulation. Specific details of the foam chemistry, fire retardants, and the manufacturer are confidential. Generally, the foam can be described as medium-density (approximately 2 lbs/ft³), closed-celled, and semi-rigid. The fire retardant additives consist of differing ratios and concentrations of phosphorus- and halogen-containing compounds.
A series of 30 Steiner tunnel tests is conducted on 20 different formulations. Repeated testing is conducted on several formulations in order to assess variability in the Steiner tunnel test results. Cone calorimeter and smoke density chamber tests are conducted on a subset of those formulations, in sets of 3-5 tests per formulation.
Key performance indicators are identified from each fire test, relationships between those indicators are examined, and correlations are presented where strong relationships are apparent. Empirical prediction models are proposed for FSI and SDI based on the success rate of prediction, and minimization of error between experimental (measured) and modelled (predicted) results. It is concluded that for the materials tested in this study, there is sufficient evidence of consistency in relative performance to recommend bench-scale screening tests as a cost-effective alternative to repeated Steiner tunnel testing.
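An empirical prediction model of the kind described above can be as simple as a least-squares fit between a bench-scale indicator and the tunnel-scale index. The sketch below fits FSI against cone calorimeter peak heat release rate; all numbers are invented for illustration and do not come from the thesis, and the choice of peak HRR as the indicator is an assumption.

```python
import numpy as np

# Hypothetical paired data: cone calorimeter peak heat release rate
# (kW/m^2) and the measured Steiner tunnel Flame Spread Index.
peak_hrr = np.array([150.0, 200.0, 260.0, 310.0, 400.0])
fsi      = np.array([ 12.0,  18.0,  25.0,  31.0,  42.0])

# Ordinary least squares line: FSI ~ a + b * peak_hrr.
b, a = np.polyfit(peak_hrr, fsi, 1)
predicted = a + b * peak_hrr
rmse = np.sqrt(np.mean((fsi - predicted) ** 2))
```

Minimizing this error between measured and predicted indices, and checking the success rate of the resulting rankings, is the kind of evaluation the study uses to judge whether bench-scale tests can screen formulations before a full tunnel test.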
|
355 |
Genetic Considerations in the Evolution of Sexual Dimorphism. Wyman, Minyoung. 08 January 2013.
Sexual differences are dramatic and widespread across taxa. However, a shared genome between males and females should hinder phenotypic divergence. In this thesis I have used experimental, genomic, and theoretical approaches to study processes that can facilitate and maintain differences between males and females. I studied two mechanisms for the evolution of sexual dimorphism: condition-dependence and gene duplication. If sex-specific traits are costly, then individuals should express such traits only when they possess enough resources to do so. I experimentally manipulated adult condition and found that sex-biased gene expression depends on condition. Second, duplication events can permit different gene copies to adopt sex-specific expression. I showed that half of all duplicate families have paralogs with different sex-biased expression patterns between members. I also investigated how current sexual dimorphism may support novel dimorphism. With regard to gene duplication, I found that related duplicates did not always have different expression patterns. However, duplicating a pre-existing sex-biased gene effectively increases overall organismal sexual dimorphism. From a theoretical perspective, I investigated how sexually dimorphic recombination rates allow novel sexually antagonistic variation to invade. Male and female recombination rates separately affect the invasion probabilities of new alleles. Finally, I examined the assumption that a common genetic architecture impedes the evolution of sexual dimorphism. First, I conducted a literature review to test whether additive genetic variances in shared traits differ between the sexes. There were few statistically significant differences. However, extreme male-biased variances were more common than extreme female-biased variances. Sexual dimorphism is expected to evolve easily in such traits. Second, I compared these results to findings from the multivariate literature.
In contrast to single trait studies, almost all multivariate studies of sexual dimorphism have found variance differences, both in magnitude and orientation, between males and females.
Overall, this thesis concludes that sexual dimorphism can evolve through processes that generate novel sexual dimorphism or that take advantage of pre-existing dimorphism. Furthermore, a common genome is not necessarily a strong barrier if genetic variances differ between the sexes. It will be an exciting challenge to understand how mutation and selection work together to allow organisms to differ in their ability to evolve sexual dimorphism.
|
356 |
Validation of the MOPITT-A instrument through radiative transfer modelling and laboratory calibration. Lamont, Kirk. 31 August 2007.
This thesis presents the characterization and calibration of the MOPITT-A instrument, which uses the technique of correlation spectroscopy to measure carbon monoxide in the atmosphere. A theoretical model is developed for the instrument and compared to MOPITT-A measurements collected under controlled laboratory conditions designed to emulate atmospheric signals. It is shown that the model and measurements are in very good agreement and that the MOPITT-A instrument behaves as expected. It was also found that the gain of the instrument varies with time. The cause of the gain variation is not known, but it is suggested that frosting inside the detector nest would be consistent with the observed nature of the variation.
|
357 |
Phase Retrieval with Application to Intensity Correlation Interferometers. Trahan, Russell, 1987-. 14 March 2013.
As astronomers and astrophysicists seek to view increasingly distant celestial objects, the desired angular resolution of telescopes is constantly being pushed higher. Classical optics, however, shows a proportional relationship between the size of an optical telescope and the achievable angular resolution, and experience has shown that prohibitive cost accompanies large optical systems. Given these limitations on classical optical systems, and with the drastic increase in computational power over the past decade, intensity correlation interferometry (ICI), initially conceived by Hanbury Brown and Twiss in the 1950s and 1960s, has seen renewed interest. Intensity correlation interferometry requires less stringent equipment precision and lower equipment cost than most other forms of interferometry. ICI is thus attractive as a route to high-angular-resolution imaging, especially in space-based imaging systems.
Optical interferometry works by gathering information about the Fourier transform of the geometry of an optical source. An ICI system, however, can only detect the magnitude of the Fourier components. The phase of the Fourier components must be recovered through some computational means and typically some a priori knowledge of the optical source.
This thesis gives the physical and mathematical basis of the intensity correlation interferometer. Since the ICI system cannot detect the phase of an optical source's Fourier transform, some known methods for recovering the phase information are discussed. The primary method of interest here is the error-reduction algorithm of Gerchberg and Saxton, which Fienup adapted to phase retrieval. This algorithm works by using known qualities of the image as constraints; however, it can be difficult to know what these constraints should be. A method of adaptively discovering these constraints is presented, and its performance is evaluated in the presence of noise. Additionally, an algorithm is presented to adapt to noise in the Fourier modulus data. Finally, the effect of the initial condition on the error-reduction algorithm is shown, and a method of mitigating it by averaging several independent solutions is presented.
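The error-reduction algorithm referenced above alternates between the Fourier domain, where the measured modulus is imposed, and the object domain, where the image constraints are imposed. The following is a minimal sketch with a fixed known support and non-negativity as the object constraints; the thesis's adaptive constraint discovery and noise handling are not reproduced here.

```python
import numpy as np

def error_reduction(fourier_modulus, support, n_iter=200, seed=0):
    """Gerchberg-Saxton / Fienup error-reduction phase retrieval:
    alternate between the measured Fourier modulus and object-domain
    constraints (known support, non-negativity)."""
    rng = np.random.default_rng(seed)
    img = rng.random(fourier_modulus.shape) * support   # random start
    for _ in range(n_iter):
        F = np.fft.fft2(img)
        # Fourier-domain step: keep the current phase, impose the
        # measured modulus.
        F = fourier_modulus * np.exp(1j * np.angle(F))
        img = np.real(np.fft.ifft2(F))
        # Object-domain step: zero outside the support, clip negatives.
        img = np.where((support > 0) & (img > 0), img, 0.0)
    return img

# Demo on a synthetic object: a bright square inside a known support.
obj = np.zeros((32, 32)); obj[12:18, 12:18] = 1.0
support = np.zeros((32, 32)); support[10:20, 10:20] = 1.0
recovered = error_reduction(np.abs(np.fft.fft2(obj)), support)
```

Because the algorithm can stagnate and the solution is only defined up to trivial ambiguities (translation, inversion), restarting from several random initial conditions and combining the solutions, as the thesis describes, is a common remedy.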
|
358 |
Forecasting Conditional Correlation for Exchange Rates using Multivariate GARCH models with Historical Value-at-Risk application. Hartman, Joel; Sedlak, Jan. January 2013.
The generalization from the univariate volatility model to a multivariate approach opens up a variety of modeling possibilities. This study examines the performance of the two multivariate GARCH models BEKK and DCC, applied to ten years of exchange rate data. Estimates and forecasts of the covariance matrix are made for EUR/SEK and USD/SEK, and the forecasts are then used in a practical application: 1-day and 10-day ahead historically simulated Value-at-Risk predictions for two theoretical portfolios consisting of the two exchange rates, one equally weighted and one hedged. A univariate GARCH(1,1) approach is included in the Value-at-Risk predictions to visualize the diversification effect in the portfolio. The conditional correlation forecasts are evaluated using three measures, OLS regression, MAE and RMSE, over a one-year evaluation period of intraday data. The Value-at-Risk estimates are evaluated with the backtesting method introduced by Kupiec (1995). The results indicate that the BEKK model performs relatively better than the DCC model, and that both perform better than the univariate GARCH(1,1) model.
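The Kupiec (1995) backtest mentioned above is a proportion-of-failures likelihood-ratio test: it compares the observed number of VaR exceptions with the number expected at the chosen confidence level. A minimal sketch:

```python
import math

def kupiec_pof(n_obs, n_exceptions, var_level=0.99):
    """Kupiec (1995) proportion-of-failures likelihood-ratio statistic.
    Under a correctly specified VaR model the statistic is asymptotically
    chi-square with 1 degree of freedom (5% critical value 3.841)."""
    p = 1.0 - var_level            # expected exception rate
    x, n = n_exceptions, n_obs
    if x == 0:
        # Limit of the LR statistic as the observed rate goes to zero.
        return -2.0 * n * math.log(1.0 - p)
    phat = x / n                   # observed exception rate

    def loglik(q):
        return x * math.log(q) + (n - x) * math.log(1.0 - q)

    return -2.0 * (loglik(p) - loglik(phat))
```

For example, 25 exceptions in 1,000 days at the 99% level (10 expected) give a statistic well above 3.841, so such a VaR model would be rejected.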
|
359 |
Three Essays in Energy Economics. Li, Jianghua. 05 September 2012.
This thesis comprises three chapters on electricity and natural gas prices. In the first chapter, we give a brief introduction to the characteristics of power prices and propose a mean-reversion jump-diffusion model, in which the jump intensity depends on temperature data and overall system load, to model electricity prices. Compared to the models used in the literature, we find that the model proposed in this chapter better captures the tail behavior of electricity prices.
In the second chapter, we use the model proposed in the first chapter to simulate the spark spread option and to value power generation. To do so, we first propose and estimate a mean-reversion jump-diffusion model for natural gas prices, in which the jump intensity is defined as a function of temperature and storage. Combining this model with the electricity model of the first chapter, we find that the resulting value of power generation is closer to the real value of power plants, as reflected in recent market transactions, than the values obtained from many other models used in the literature.
The third chapter investigates extremal dependence in energy markets. We find tail dependence that exceeds the Pearson correlation ρ, which means the traditional Pearson correlation is not appropriate for modeling the tail behavior of oil, natural gas and electricity prices. However, asymptotic dependence is rejected for all pairs except the Henry Hub and Houston Ship Channel gas returns. We also find that extreme-value dependence in energy markets is stronger in bull markets than in bear markets, owing to the special characteristics of energy markets; this conflicts with the accepted wisdom from the equity literature that tail correlation is much higher in periods of volatile markets.
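The mean-reversion jump-diffusion dynamics used in the first two chapters can be simulated with a simple Euler scheme. The sketch below uses a constant jump intensity for simplicity, whereas the thesis makes the intensity depend on temperature, load or storage; parameter values are illustrative only.

```python
import numpy as np

def simulate_mrjd(x0, kappa, theta, sigma, lam, jump_mu, jump_sigma,
                  dt=1 / 365, n_steps=365, seed=0):
    """Euler simulation of a mean-reverting jump-diffusion for a log price:
        dX = kappa * (theta - X) dt + sigma dW + J dN,
    with Poisson jump arrivals (intensity lam) and normal jump sizes.
    The thesis uses a state-dependent intensity; here lam is constant."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps + 1)
    x[0] = x0
    for t in range(n_steps):
        # One jump at most per step, approximating the Poisson process.
        jump = rng.normal(jump_mu, jump_sigma) if rng.random() < lam * dt else 0.0
        x[t + 1] = (x[t] + kappa * (theta - x[t]) * dt
                    + sigma * np.sqrt(dt) * rng.normal() + jump)
    return x

# One year of daily log prices with spiky, mean-reverting behavior.
path = simulate_mrjd(x0=3.5, kappa=5.0, theta=3.5, sigma=0.5,
                     lam=10.0, jump_mu=1.0, jump_sigma=0.3)
```

The jump term is what produces the heavy tails that, as the first chapter finds, pure diffusion models fail to capture in electricity prices.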
|
360 |
Attaching Social Interactions Surrounding Software Changes to the Release History of an Evolving Software System. Baysal, Olga. January 2006.
Open source software is designed, developed and maintained by means of electronic media. These media include discussions on a variety of issues reflecting the evolution of a software system, such as reports on bugs and their fixes, new feature requests, design changes, refactoring tasks, test plans, etc. Often this valuable information is simply buried as plain text in mailing list archives.
We believe that email interactions collected prior to a product release are related to its source code modifications, or if they do not immediately correlate to change events of the current release, they might affect changes happening in future revisions.
In this work, we propose a method to reason about the nature of software changes by mining and correlating electronic mailing list archives. Our approach is based on the assumption that developers use meaningful names and their domain knowledge when defining source code identifiers, such as classes and methods. We employ natural language processing techniques to find similarity between the source code change history and the history of the public interactions surrounding these changes. Exact string matching is applied to find a set of common concepts between the discussion vocabulary and the changed-code vocabulary.
We apply our correlation method to two software systems, LSEdit and Apache Ant. The results of these exploratory case studies demonstrate evidence of similarity between the content of free-form text emails among developers and the actual modifications in the code.
We identify a set of correlation patterns between the discussion and changed-code vocabularies, and discover that some releases referred to as minor should instead fall under the major category. These patterns can be used to estimate the type of a change and the time needed to implement it.
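The exact string matching between the discussion vocabulary and the changed-code vocabulary can be sketched as follows. The function names and the identifier-splitting rules (camelCase and snake_case) are illustrative assumptions, not the thesis's actual implementation.

```python
import re

def identifier_terms(identifiers):
    """Split camelCase / snake_case source code identifiers into a set
    of lowercase terms, e.g. 'ParserCache' -> {'parser', 'cache'}."""
    terms = set()
    for name in identifiers:
        for part in re.split(r'[_\W]+', name):
            terms.update(w.lower() for w in
                         re.findall(r'[A-Z]?[a-z]+|[A-Z]+(?![a-z])', part))
    return terms

def shared_concepts(discussion_text, identifiers):
    """Exact string matching between the words of a mailing list
    discussion and the vocabulary of changed-code identifiers."""
    words = set(re.findall(r'[a-z]+', discussion_text.lower()))
    return words & identifier_terms(identifiers)

# A discussion mentioning 'parser' and 'cache' matches a change that
# touched ParserCache and flush_cache.
concepts = shared_concepts("please fix the parser cache bug",
                           ["ParserCache", "flush_cache"])
```

The size and content of such intersections is one simple signal for correlating an email thread with the source code modifications of a release.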
|