601 |
Bank Rates and the Yield Curve : A Study on the Relationship Between Banks' Deposit and Lending Rates to Treasury Yield Rates / Dalteg, Tomas. January 2005 (has links)
The purpose of this thesis is to investigate how well Swedish banks follow the interest rate development of Swedish Treasury Bills and Swedish Government Bonds when they determine the levels of their deposit and lending rates. Individuals' deposits in a bank serve as one of the bank's main assets on the balance sheet, and the spread between the bank's deposit rate and the short-term market rate is a large source of funding for the bank. If the relationship between these two rates is strong over time, one may assume that this spread is of great importance for the financing of the banking firm.
The spread between the bank's lending rate and the long-term market rate, the credit risk spread, also serves as a large source of interest income for the bank, and if this relationship is strong over time, one may assume that this spread is of great importance for the financing of the banking firm as well.
The banks investigated in this paper are Handelsbanken (SHB) and Föreningssparbanken (FSB). This paper finds a weaker relationship between the banks' deposit rates and the short-term market rates than between their lending rates and the long-term market rates. This indicates that the credit risk spread is of greater importance for the financing of the banking firm than the funding spread. The weaker relationship between the banks' deposit rates and the short-term market rate may be due to the wide variety of savings alternatives offered in the marketplace today. The fact that banks today run a deposit deficit may also explain the weaker relationship, which can be related to the Baumol-Tobin transaction model, where the higher the interest rate, the greater the amount kept in the account. The stronger relationship between the banks' lending rates and the long-term market rate may be due to the credit risk spread's function as a price-discrimination tool between lending clients.
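For reference, the two spreads discussed above can be written as simple rate differences, and the Baumol-Tobin effect invoked at the end follows from the standard square-root formula for optimal cash holdings. The notation below is illustrative only and is not taken from the thesis (the sign convention of the spreads is arbitrary).

```latex
% Funding spread and credit risk spread (illustrative notation):
\[
s^{\text{funding}}_t = r^{\text{deposit}}_t - r^{\text{T-bill}}_t,
\qquad
s^{\text{credit}}_t = r^{\text{lending}}_t - r^{\text{bond}}_t .
\]
% Baumol-Tobin optimal cash balance: with a fixed transaction cost b, total
% spending Y and deposit interest rate i, cash held falls (and the amount kept
% in the interest-bearing account rises) as i increases:
\[
M^{*} = \sqrt{\frac{2\,b\,Y}{i}} .
\]
```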
|
602 |
Kuznets in Sweden? : A study of the relationship between carbon dioxide emissions and income / Hanson Lundström, Elenor. January 2008 (has links)
According to the Environmental Kuznets Curve (EKC), economic growth will eventually cause carbon dioxide emissions to decrease. Is this the case in Sweden? A time series covering the period 1800-1995 is used to analyze the relation between carbon dioxide emissions and income per capita in Sweden. The empirical results indicate that an EKC for carbon dioxide is highly likely to exist in Sweden for the examined period. To take the analysis further, a cross-section data set is employed to examine the relationship between carbon dioxide emissions, income per capita and four other potentially influential variables in 75 countries. Only the carbon intensity of energy is significant for carbon dioxide emissions. This implies that the energy source used is of importance, and that it is crucial to separate energy consumption from carbon dioxide emissions. Emissions are a matter of structural aspects, such as the type of industry and production a country comprises and what type of energy is consumed, not merely the quantity of energy. Sweden has experienced a shift in production techniques and in energy supply, and energy efficiency has improved during the past 100 years. It is consequently plausible that it is not a critical level of income per capita that decreases CO2 emissions, but rather the "right" energy sources, energy efficiency and improved technology.
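A typical reduced-form specification used in EKC studies of this kind is sketched below; the exact functional form, controls and estimator used in the thesis may differ.

```latex
% Reduced-form EKC regression (an inverted U requires beta_1 > 0 and beta_2 < 0):
\[
\ln E_t \;=\; \beta_0 \;+\; \beta_1 \ln y_t \;+\; \beta_2 \left(\ln y_t\right)^2 \;+\; \varepsilon_t ,
\]
% where E_t denotes CO2 emissions per capita and y_t income per capita;
% the turning point of the curve lies at ln y* = -beta_1 / (2 beta_2).
```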
|
603 |
Crack lengths calculation by unloading compliance technique for Charpy size specimens / Dzugan, Jan. 31 March 2010 (has links) (PDF)
The problems with crack length determination by the unloading compliance method are well known for Charpy size specimens. The final crack lengths calculated for bent specimens do not fulfil the ASTM E1820 accuracy requirements. Several investigations have therefore been performed to resolve this problem. In those studies the measured compliance was corrected for various factors, but satisfactory results were not attained. In the present work the problem was approached from the other side: the measured specimen compliance was taken as the correct value, and it was the calculation procedure that had to be adjusted. The investigation was carried out on the basis of experimentally obtained compliances of bent specimens and optically measured crack lengths. Finally, a calculation procedure enabling accurate crack length calculation up to 5 mm of plastic deflection was developed. Applying the new procedure, more than 80% of the 238 measured crack lengths investigated fulfilled the ASTM E1820 accuracy requirements, while the presently used procedure provided only about 30% valid results. The newly proposed procedure can also, in modified form, prospectively be used for specimens of sizes other than the Charpy size.
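In unloading compliance testing, the crack length is inferred from the elastic compliance measured during partial unloadings. The general form of the relation, as used for single-edge bend specimens in ASTM E1820, is sketched below; the coefficients and the exact geometry normalization are omitted, since the thesis develops its own modified calculation procedure.

```latex
% Crack length from the elastic compliance C_i (general form):
\[
u_i \;=\; \frac{1}{\sqrt{B_e\,E\,C_i'} \;+\; 1},
\qquad
\frac{a_i}{W} \;=\; \sum_{k=0}^{5} c_k\, u_i^{\,k},
\]
% where B_e is the effective specimen thickness, E the elastic modulus, W the
% specimen width, C_i' the measured compliance normalized for the specimen
% geometry (for bend specimens the loading span enters this normalization),
% and c_0..c_5 the polynomial coefficients tabulated in the standard.
```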
|
604 |
Asynchronous stochastic learning curve effects in a large scale production system / Lu, Roberto Francisco-Yi. January 2008 (has links)
Thesis (Ph. D.)--University of Washington, 2008. / Vita. Includes bibliographical references (leaves 126-133).
|
605 |
Essays on macroeconomic dynamics of job vacancies, job flows, and entrepreneurial activities / Fujita, Shigeru. January 2004 (has links)
Thesis (Ph. D.)--University of California, San Diego, 2004. / Vita. Includes bibliographical references (leaves 121-125).
|
606 |
Superluminous supernovae : theory and observations / Chatzopoulos, Emmanouil. 25 October 2013 (has links)
The discovery of superluminous supernovae in the past decade challenged our understanding of explosive stellar death. Subsequent extensive observations of superluminous supernova light curves and spectra have provided some insight into the nature of these events. We present observations of one of the most luminous self-interacting supernovae ever observed, the hydrogen-rich SN 2008am, discovered by the Robotic Optical Transient Search Experiment Supernova Verification Project with the ROTSE-IIIb telescope at McDonald Observatory. We provide theoretical modeling of superluminous supernova light curves and fit the models to a number of observed events and similar transients in order to understand the mechanism responsible for the vast amounts of energy emitted by these explosions. The models we investigate include deposition of energy due to the radioactive decays of massive amounts of nickel-56, interaction of the supernova ejecta with a dense circumstellar medium, and magnetar spin-down. To probe the nature of superluminous supernova progenitor stars we study the evolution of massive stars, including important effects such as rotation and magnetic fields, and perform multi-dimensional hydrodynamics simulations of the resulting explosions. The effects of rotational mixing are also studied in solar-type secondary stars in cataclysmic variable binary star systems in order to provide an explanation for some carbon-depleted examples of this class. We find that most superluminous supernovae can be explained by violent interaction of the SN ejecta with dense circumstellar shells of more than 1 solar mass ejected by the progenitor stars in the decades preceding the SN explosion.
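For orientation, the magnetar spin-down energy input commonly fitted in semi-analytical light curve models of this kind has the form below, assuming magnetic dipole braking; the symbols are standard and the parameter values are fit quantities, not results quoted from this thesis.

```latex
% Magnetar spin-down power deposited in the ejecta (dipole braking assumed):
\[
L_{\mathrm{mag}}(t) \;=\; \frac{E_p}{t_p}\,\frac{1}{\left(1 + t/t_p\right)^{2}} ,
\]
% where E_p is the initial rotational energy of the magnetar and t_p its
% characteristic spin-down timescale.
```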
|
607 |
Analysis of quasiconformal maps in R^n / Purcell, Andrew. 01 June 2006 (has links)
In this thesis, we examine quasiconformal mappings in R^n. We begin by proving basic properties of the modulus of curve families. We then give the geometric, analytic, and metric space definitions of quasiconformal maps and show their equivalence. We conclude with several computational examples.
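As a reminder of the central notions, the n-modulus of a curve family and the analytic definition of K-quasiconformality can be stated as follows; these are the standard definitions, not text quoted from the thesis.

```latex
% n-modulus of a curve family Gamma in R^n:
\[
M(\Gamma) \;=\; \inf_{\rho} \int_{\mathbb{R}^n} \rho^{\,n}\, dm ,
\]
% the infimum taken over all Borel functions rho >= 0 that are admissible for
% Gamma, i.e. \int_{\gamma} \rho\, ds \ge 1 for every locally rectifiable
% curve \gamma \in \Gamma.
%
% Analytic definition: a homeomorphism f in W^{1,n}_{loc} is K-quasiconformal if
\[
\lVert Df(x)\rVert^{\,n} \;\le\; K\, J_f(x) \qquad \text{for a.e. } x .
\]
```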
|
608 |
A study of power spectral densities of real and simulated Kepler light curves / Weishaupt, Holger. January 2015 (has links)
During the last decade, the transit method has evolved into one of the most promising techniques in the search for extrasolar planets and the quest to find other earth-like worlds. In theory, the transit method is straightforward, being based on the detection of an apparent dimming of the host star's light as an orbiting planet passes in front of it from the observer's point of view. In practice, however, the detection of such light curve dips and their confident ascription to a planetary transit are heavily hampered by the presence of different sources of noise, the most prominent of which is probably the so-called intrinsic stellar variability. Filtering potential transit signals out of the background noise requires a well-adjusted high-pass filter. In order to optimize such a filter, i.e. to achieve the best separation between signal and noise, one typically requires access to benchmark datasets that exhibit the same light curve with and without the obstructing noise. Several methods for simulating stellar variability have been proposed for the construction of such benchmark datasets. However, while such methods have been widely used in testing transit detection algorithms in the past, it is not well known how such simulations compare to real recorded light curves, a fact that might be attributed to the lack of large databases of stellar light curves for comparison at that time. With the increasing amount of light curve data now available from missions such as Kepler, I have here undertaken such a comparison of synthetic and real light curves for one particular method, which simulates stellar variability based on scaled power spectra of the Sun's flux variations. By comparing the estimated power spectra of the real and simulated light curves, I found that the two datasets exhibit substantial differences in average power, with the synthetic power spectra generally having lower power and also lacking certain distinct power peaks present in the real light curves. The results of this study suggest that scaled power spectra of solar variability alone might be insufficient for light curve simulations and that more work will be required to understand the origin and relevance of the observed power peaks in order to improve such light curve models.
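A minimal sketch of the kind of power-spectrum comparison described above is given below, using a standard periodogram estimate. The file names and data layout are hypothetical, and the thesis may well have used a different estimator (e.g. Lomb-Scargle for unevenly sampled Kepler data).

```python
import numpy as np
from scipy.signal import periodogram

def estimate_psd(flux, cadence_days):
    """Estimate the power spectral density of a detrended, evenly sampled light curve."""
    flux = flux - np.mean(flux)                  # remove the mean flux level
    fs = 1.0 / cadence_days                      # sampling frequency in cycles/day
    freqs, power = periodogram(flux, fs=fs)      # simple periodogram PSD estimate
    return freqs, power

# Hypothetical usage: compare a real Kepler light curve with a simulated one.
# Both arrays are assumed to be relative flux sampled at the ~29.4 min long cadence.
cadence = 29.4 / (60 * 24)                       # long-cadence sampling in days
real_flux = np.loadtxt("kepler_lightcurve.txt")  # hypothetical input files
sim_flux = np.loadtxt("simulated_lightcurve.txt")

f_real, p_real = estimate_psd(real_flux, cadence)
f_sim, p_sim = estimate_psd(sim_flux, cadence)

# Average power as a crude summary statistic of the difference between the datasets.
print("mean PSD (real):", p_real.mean())
print("mean PSD (sim): ", p_sim.mean())
```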
|
609 |
Developing archaeomagnetic dating in the British Iron Age / Clelland, Sarah-Jane. January 2011 (has links)
Archaeomagnetism is an area of research that utilises the magnetic properties of archaeological materials to date past human activity. This research aimed to use the evidence of past geomagnetism, as recorded by archaeological and geological materials, to identify and characterise short-timescale changes in the Earth's magnetic field. This contribution to the discipline focused on the first millennium BC, as there is evidence that during this time the Earth's magnetic field experienced rapid changes in direction. This work focused on an established weakness in archaeomagnetic studies, i.e. the application of archaeological information to assign a date range to the magnetic directions. The date ranges for 232 magnetic directions from 98 Iron Age sites were reviewed, and a programme of fieldwork produced 25 new magnetic directions from 11 Iron Age sites across Britain. The approach developed in this thesis has made significant improvements to the data examined, which represent the prehistoric section of the British secular variation curve (SVC). These data have been incorporated into the British archaeomagnetic dataset, which now comprises over 1000 magnetic directions and will be used to generate future British SVCs. The potential contribution of the near-continuous records of geomagnetic secular variation from British lake sediment sequences to SVCs was also explored. This showed that these sediments have recorded the relative changes in the Earth's magnetic field, but that the dating and the method of constructing the British master curve require revision. As SVCs are predominantly used as calibration curves for archaeomagnetic dating, this work provides a foundation for a revised and extended British SVC. This revision would be to the mutual benefit of studies in archaeology and archaeomagnetism, as the latter could potentially enable high-resolution dating of Iron Age material, providing a viable alternative to radiocarbon dating.
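The calibration step that turns a measured magnetic direction into a date range can be illustrated as follows. This is a deliberately simplified sketch of the principle only, with hypothetical function names and a single angular tolerance in place of the full error statistics used in practice; it is not the procedure developed in the thesis.

```python
import numpy as np

def angular_distance(dec1, inc1, dec2, inc2):
    """Great-circle angle (degrees) between two geomagnetic directions."""
    d1, i1, d2, i2 = map(np.radians, (dec1, inc1, dec2, inc2))
    cos_t = np.sin(i1) * np.sin(i2) + np.cos(i1) * np.cos(i2) * np.cos(d1 - d2)
    return np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))

def candidate_ages(svc_age, svc_dec, svc_inc, meas_dec, meas_inc, tol_deg):
    """Ages at which a measured direction is consistent with the reference SVC.

    Accepts the ages where the measured (declination, inclination) lies within
    tol_deg of the curve; real calibration would use the combined uncertainties
    (e.g. alpha95 of both the curve and the measurement) instead of one tolerance.
    """
    dist = angular_distance(meas_dec, meas_inc,
                            np.asarray(svc_dec), np.asarray(svc_inc))
    return np.asarray(svc_age)[dist <= tol_deg]
```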
|
610 |
System and Method for Comparison and Training of Mechanical Circulatory Support Devices: A Patient Independent Platform Using the Total Artificial Heart and Donovan Mock Circulation System / DeCook, Katrina Jolene. January 2015 (has links)
Mechanical circulatory support (MCS) is a viable therapy for end-stage heart failure. However, despite clinical success, the ability to compare MCS devices in vitro and to perform training scenarios is extremely limited. Comparative studies are limited because different devices cannot be interchanged in a patient due to the surgical nature of the implant. Further, training and failure scenarios cannot be performed on patients with devices, as this would subject a patient to a failure mode. A need exists for a readily available mock system that can perform comparative testing and training scenarios with MCS devices. Previously, our group fabricated a well-characterized mock circulation system consisting of a SynCardia temporary Total Artificial Heart (TAH) and a Donovan Mock Circulation tank (DMC tank). Utilizing this system with the TAH operating in a reduced-output mode, a heart failure model was developed. In the present study, three ventricular assist devices (VADs) were independently attached to the heart failure model to compare device performance over a range of preloads and afterloads. In addition, specific clinical scenarios were created with the system to analyze how the VAD-displayed waveforms correlate with those scenarios. Finally, each VAD was powered off while attached to the heart failure model to compare fluid flow through the VAD in a pump-failure scenario. We demonstrated that this system can successfully be utilized to compare MCS devices (i.e. ventricular assist devices) and to train patients and clinicians.
|