  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Synthesis and Enzymatic Oxidation of Model Lignin Carbohydrate Complexes

Kovur, Srinivasulu Raju January 2008 (has links) (PDF)
No description available.
2

Molecular dynamics studies of water and biomolecules /

Mark, Pekka, January 2002 (has links)
Diss. (summary) Stockholm: Karolinska Institutet, 2002. / With 5 appended papers.
3

Modeling of Frictional Contact Conditions in Structures

Do, Nguyen Ba 19 May 2005 (has links)
This thesis explores two aspects of modeling the behavior of joint friction in structures. The first aspect deals with the accurate and efficient simulation of a simple system that incorporates the LuGre friction law. Energy transfer and dissipation in a structural joint model is the second topic of this thesis. It is hypothesized that friction could serve to pump energy from one frequency to higher frequencies where it might be dissipated more quickly. Motivation for this study stems from the need to have accurate models of high-precision space structures. Because friction at connecting joints plays a major role in the damping capacity of the structure, a good understanding of this mechanism is necessary to predict the vibratory response and enhance the energy dissipation of the structure. Simulation results of a dynamic system with LuGre friction show that the system is relatively well-conditioned when the slip velocity is small, and ill-conditioned for large slip velocities. Furthermore, the most efficient numerical method to simulate this system is determined to be an implicit integration scheme. To study the energy transfer and dissipation, two models of a jointed structure with friction are considered. Results from the steady-state forced responses of the two structural systems indicate that friction converted low frequency, single harmonic excitation to multi-harmonic response through internal resonances. However, differences in energy dissipation results between the models show that the response of a frictional system is highly sensitive to system parameters and friction laws. Conclusions and suggestions for future research are also discussed.
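The LuGre law at the center of the first topic can be sketched numerically. An implicit Euler step on the bristle state is linear in that state and therefore solvable in closed form, which is consistent with the thesis's finding that an implicit integration scheme is the most efficient; all parameter values below are invented for illustration and are not those of the thesis.

```python
import math

# Illustrative LuGre parameters (not from the thesis)
SIGMA0, SIGMA1, SIGMA2 = 1e5, 300.0, 0.4   # bristle stiffness, bristle damping, viscous term
FC, FS, VS = 1.0, 1.5, 0.01                # Coulomb level, stiction level, Stribeck velocity

def g(v):
    """Stribeck curve: steady-state friction level at slip velocity v."""
    return FC + (FS - FC) * math.exp(-(v / VS) ** 2)

def lugre_step(v, z, dt):
    """Advance the bristle state z one implicit-Euler step; return (force, z_new).

    Implicit Euler on dz/dt = v - sigma0*|v|/g(v) * z is linear in z, so the
    update has a closed form -- no nonlinear solve is needed per step.
    """
    a = SIGMA0 * abs(v) / g(v)
    z_new = (z + dt * v) / (1.0 + dt * a)   # closed-form implicit update
    zdot = v - a * z_new
    return SIGMA0 * z_new + SIGMA1 * zdot + SIGMA2 * v, z_new

# Drive the bristle with a slow sinusoidal slip velocity and record the force.
dt, z = 1e-4, 0.0
forces = []
for k in range(20000):
    v = 0.02 * math.sin(2 * math.pi * k * dt)
    F, z = lugre_step(v, z, dt)
    forces.append(F)

print(max(forces))  # peak friction force, near the stiction level
```

The closed-form update makes the scheme unconditionally stable in z, which is why implicit integration is attractive for friction laws of this type.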
4

An Assessment of Econometric Methods Used in the Estimation of Affine Term Structure Models

Juneja, Januj January 2010 (has links)
The first essay empirically evaluates recently developed techniques that have been proposed to improve the estimation of affine term structure models. The evaluation is performed on two dimensions. On the first dimension, I find that invariant transformations and rotations can be used to reduce the number of free parameters needed to estimate the model and, subsequently, improve the empirical performance of affine term structure models. The second dimension of the evaluation concerns the comparison between estimating an affine term structure model using the model-free method and the inversion method. Using daily LIBOR rate and swap rate quotes from June 1996 to July 2008 to extract a panel of 3,034 time-series observations and 14 cross sections, this paper shows that a term structure model estimated using the model-free method does not perform significantly better in fitting yields, at any horizon, than the more traditional methods available in the literature. The second essay explores implications of using principal components analysis in the estimation of affine term structure models. Early work employing principal component analysis focused on portfolio formation and trading strategies. Recent work, however, has moved the usage of principal components analysis into more formal applications, such as the direct involvement of principal-component-based factors within an affine term structure model. It is this usage of principal components analysis in formal model settings that warrants a study of the potential econometric implications of its application to term structure modeling. Serial correlation in interest rate data, for example, has been documented by several authors. The majority of the literature has focused on strong persistence in state variables as giving rise to this phenomenon.
In this paper, I take yields as given, and hence document the effects of whitening on the model-implied state-dependent factors, subsequently estimated by the principal-component-based model-free method. These results imply that the process of pre-whitening the data plays a critical role in model estimation. Results are robust to Monte Carlo simulations. Empirical results are obtained from daily LIBOR rate and swap rate quotes from June 1996 to July 2008, used to extract a panel of zero-coupon yields consisting of 3,034 time-series observations and 14 cross sections. The third essay examines the extent to which the prevalence of estimation risk in numerical integration creates bias, inefficiency, and inaccurate results in the widely used class of affine term structure models. In its most general form, this class of models relies on the solution to a system of non-linear Riccati equations to back out the state-factor coefficients. Only in certain cases does this class of models admit explicit, and thus analytically tractable, solutions for the state-factor coefficients. Generally, and for more economically plausible scenarios, explicit closed-form solutions do not exist and Runge-Kutta methods must be employed to obtain numerical estimates of the coefficients for the state variables. Using a panel of 3,034 yields and 14 cross sections, this paper examines what perils, if any, exist in this trade-off of analytical tractability against economic flexibility. Robustness checks via Monte Carlo simulations are provided. Specifically, while the use of analytical methods requires less computational time, numerical methods can be used to estimate a broader set of economic scenarios. Regardless of the data-generating process, the generalized Gaussian process seems to dominate the Vasicek model in terms of bias and efficiency.
However, when the data are generated from a Vasicek model, the Vasicek model performs better than the generalized Gaussian process in fitting the yield curve. These results impart new and important information about the trade-off that exists between using analytical methods and numerical methods for estimating affine term structure models.
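The analytical-versus-numerical trade-off described in the third essay can be illustrated in miniature: in the one-factor Vasicek special case, the Riccati-type ODE for the bond-price loading is linear and admits a closed form, so a Runge-Kutta solution can be checked against it. The parameter value below is made up for illustration; this is a sketch, not the essay's estimation code.

```python
import math

# Vasicek short rate: dr = kappa*(theta - r)dt + sigma dW, with bond price
# P(tau) = exp(A(tau) - B(tau)*r). The loading B solves the (here linear)
# Riccati-type ODE dB/dtau = 1 - kappa*B, B(0) = 0, whose closed form is
# B(tau) = (1 - exp(-kappa*tau)) / kappa.
KAPPA = 0.3  # illustrative mean-reversion speed

def b_closed(tau):
    return (1.0 - math.exp(-KAPPA * tau)) / KAPPA

def b_rk4(tau, steps=200):
    """Classical 4th-order Runge-Kutta on dB/dtau = 1 - kappa*B."""
    f = lambda b: 1.0 - KAPPA * b
    h, b = tau / steps, 0.0
    for _ in range(steps):
        k1 = f(b)
        k2 = f(b + 0.5 * h * k1)
        k3 = f(b + 0.5 * h * k2)
        k4 = f(b + h * k3)
        b += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return b

# The two agree to high precision in this tractable case; in richer affine
# specifications only the numerical route is available.
for tau in (1.0, 5.0, 10.0):
    print(tau, b_closed(tau), b_rk4(tau))
```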
5

Two Essays on Estimation and Inference of Affine Term Structure Models

Wang, Qian 09 May 2015 (has links)
Affine term structure models (ATSMs) are one set of popular models for yield curve modeling. Given that the models forecast yields based on the speed of mean reversion, under what circumstances can we distinguish one ATSM from another? The objective of my dissertation is to quantify the benefit of knowing the “true” model as well as the cost of being wrong when choosing between ATSMs. In particular, I detail the power of out-of-sample forecasts to statistically distinguish one ATSM from another, given that we only know the data are generated from an ATSM and are observed without errors. My study analyzes the power and size of ATSMs by evaluating their relative out-of-sample performance. Essay one focuses on the study of one-factor ATSMs. I find that a model's predictive ability is closely related to the bias of its mean reversion estimates no matter what the true model is: the smaller the bias of the estimate of the mean reversion speed, the better the out-of-sample forecasts. In addition, my findings show that when the data are simulated from a high mean reversion process with a large sample size and a high sampling frequency, the models' forecasting accuracy improves but the power to distinguish between different ATSMs is reduced. In the second essay, I extend the question of interest to multi-factor ATSMs. My findings show that adding more factors to the ATSMs does not improve the models' predictive ability, but it does increase the models' power to distinguish between each other. Multi-factor ATSMs with a larger sample size and a longer time span have more predictive ability and stronger power to differentiate between models.
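The link between mean-reversion estimates and forecasting that essay one studies can be illustrated with a small Monte Carlo: estimating the mean-reversion speed of a simulated one-factor short-rate process by OLS over a short sample typically overstates it (the well-known small-sample persistence bias). All parameters, sample sizes, and the regression design below are illustrative, not the dissertation's.

```python
import random

# Discretized one-factor process: r_{t+1} = r_t + kappa*(theta - r_t)*dt + noise.
# Recover kappa by OLS of (r_{t+1} - r_t) on (theta - r_t)*dt (theta assumed
# known here, purely to keep the sketch short).
KAPPA, THETA, SIGMA, DT = 0.5, 0.05, 0.01, 1.0 / 52.0  # illustrative, weekly steps

def simulate(n, rng):
    r, path = THETA, []
    for _ in range(n):
        path.append(r)
        r += KAPPA * (THETA - r) * DT + SIGMA * DT ** 0.5 * rng.gauss(0, 1)
    return path

def ols_kappa(path):
    x = [(THETA - r) * DT for r in path[:-1]]
    y = [b - a for a, b in zip(path, path[1:])]
    return sum(u * v for u, v in zip(x, y)) / sum(v * v for v in x)

rng = random.Random(0)
estimates = [ols_kappa(simulate(260, rng)) for _ in range(400)]  # 5y samples, 400 trials
mean_est = sum(estimates) / len(estimates)
print(mean_est)  # typically well above the true kappa = 0.5
```

The upward bias in the estimated speed shrinks as the sample lengthens, which is one way to read the essay's finding that sample size and sampling frequency drive both forecast accuracy and the power to tell models apart.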
6

Modeling the volatility smile for interest rate derivatives / Multi factor stochastic volatility for interest rates modeling

Palidda, Ernesto 29 May 2015 (has links)
This thesis studies a model of interest rate curve dynamics for the pricing and hedging of derivative products. In particular, we wish to model the dynamics of volatility-dependent prices. Market practice consists in using a parametric representation of the market and building hedging portfolios by computing sensitivities with respect to the model parameters. Since the model parameters are calibrated daily so that the model reproduces market prices, the self-financing property does not hold. Our approach is different: we replace the parameters by factors, which are assumed to be stochastic. Hedging portfolios are built by cancelling the price sensitivities to these factors. The portfolios obtained in this way satisfy the self-financing property. / This PhD thesis is devoted to the study of an Affine Term Structure Model where we use Wishart-like processes to model the stochastic variance-covariance of interest rates. This work was initially motivated by some thoughts on calibration and model risk in hedging interest rate derivatives. The ambition of our work is to build a model which reduces as much as possible the noise coming from daily re-calibration of the model to the market. It is standard market practice to hedge interest rate derivatives using models with parameters that are calibrated on a daily basis to fit the market prices of a set of well-chosen instruments (typically the instruments that will be used to hedge the derivative). The model assumes that the parameters are constant, and the model price is based on this assumption; however, since these parameters are re-calibrated, they become in fact stochastic. Therefore, calibration introduces additional terms in the price dynamics (precisely, in the drift term), which can lead to poor P&L explain and mishedging.
The initial idea of our research is to replace the parameters by factors, assume a dynamics for these factors, and keep all the parameters involved in the model constant. Instead of calibrating the parameters to the market, we fit the values of the factors to the observed market prices. A large part of this work has been devoted to the development of an efficient numerical framework to implement the model. We study second-order discretization schemes for Monte Carlo simulation of the model. We also study efficient methods for pricing vanilla instruments such as swaptions and caplets. In particular, we investigate expansion techniques for the prices and volatilities of caplets and swaptions. The arguments that we use to obtain the expansion rely on an expansion of the infinitesimal generator with respect to a perturbation factor. Finally, we study the calibration problem. As mentioned before, the idea of the model studied in this thesis is to keep the parameters of the model constant and calibrate the values of the factors to fit the market. In particular, we need to calibrate the initial values (or the variations) of the Wishart-like process to fit the market, which introduces a positive semidefinite constraint in the optimization problem. Semidefinite programming (SDP) gives a natural framework to handle this constraint.
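The positive semidefinite constraint mentioned at the end can be made concrete with a small sketch: projecting a symmetric candidate state onto the PSD cone by clipping negative eigenvalues. This is a generic heuristic shown for illustration, not the thesis's SDP calibration routine; the 2x2 case is used because its eigendecomposition is available in closed form.

```python
import math

def psd_project_2x2(a, b, c):
    """Project the symmetric matrix [[a, b], [b, c]] onto the PSD cone.

    Returns (a', b', c') for the nearest (Frobenius) PSD matrix, obtained by
    clipping any negative eigenvalue to zero.
    """
    tr, det = a + c, a * c - b * b
    disc = math.sqrt(max(tr * tr / 4.0 - det, 0.0))
    l1, l2 = tr / 2.0 + disc, tr / 2.0 - disc      # eigenvalues, l1 >= l2
    if l2 >= 0.0:
        return a, b, c                              # already PSD: unchanged
    # Eigenvector for l1: (l1 - c, b), with a degenerate-b fallback.
    if abs(b) > 1e-14:
        vx, vy = l1 - c, b
    else:
        vx, vy = (1.0, 0.0) if a >= c else (0.0, 1.0)
    n = math.hypot(vx, vy)
    vx, vy = vx / n, vy / n
    l1 = max(l1, 0.0)
    # Rebuild with the negative eigenvalue removed.
    return l1 * vx * vx, l1 * vx * vy, l1 * vy * vy

# [[1, 2], [2, 1]] has eigenvalues 3 and -1; the projection keeps only the
# rank-one part along the eigenvector of 3.
print(psd_project_2x2(1.0, 2.0, 1.0))  # → (1.5, 1.5, 1.5)
```

In an actual calibration, a projection like this (or an SDP solver enforcing the constraint directly) keeps each candidate Wishart-like state admissible during the optimization.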
7

Simple structure MIRT equating for multidimensional tests

Kim, Stella Yun 01 May 2018 (has links)
Equating is a statistical process used to accomplish score comparability so that scores from different test forms can be used interchangeably. One of the most widely used equating procedures is unidimensional item response theory (UIRT) equating, which requires a set of assumptions about the data structure. In particular, the essence of UIRT rests on the unidimensionality assumption, which requires that a test measures only a single ability. However, this assumption is not likely to be fulfilled for many real data such as mixed-format tests or tests composed of several content subdomains; failure to satisfy the assumption threatens the accuracy of the estimated equating relationships. The main purpose of this dissertation was to contribute to the literature on multidimensional item response theory (MIRT) equating by developing a theoretical and conceptual framework for true-score equating using a simple-structure MIRT model (SS-MIRT). SS-MIRT has several advantages over other, more complex MIRT models, such as improved estimation efficiency and straightforward interpretability. In this dissertation, the performance of the SS-MIRT true-score equating procedure (SMT) was examined and evaluated through four studies using different data types: (1) real data, (2) simulated data, (3) pseudo-form data, and (4) intact single-form data with identity equating. Besides SMT, four competitors were included in the analyses in order to assess the relative benefits of SMT over the other procedures: (a) equipercentile equating with presmoothing, (b) UIRT true-score equating, (c) UIRT observed-score equating, and (d) SS-MIRT observed-score equating. In general, the proposed SMT procedure behaved similarly to the existing procedures. Also, SMT showed more accurate equating results compared to traditional UIRT equating.
Better performance of SMT over UIRT true-score equating was consistently observed across the three studies that employed different criterion relationships with different datasets, which strongly supports the benefit of a multidimensional approach to equating with multidimensional data.
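The UIRT true-score equating that SMT is compared against can be sketched as follows: map a true score on Form X to Form Y by inverting Form X's test characteristic curve (TCC) for the latent trait, then evaluating Form Y's TCC at that trait value. The 2PL item parameters below are invented for illustration and are not from the dissertation's data.

```python
import math

D = 1.7  # conventional scaling constant for the 2PL model
form_x = [(1.0, -0.5), (1.2, 0.0), (0.8, 0.5)]   # illustrative (a, b) per item
form_y = [(0.9, -0.3), (1.1, 0.2), (1.0, 0.6)]

def p(theta, a, b):
    """2PL item response probability."""
    return 1.0 / (1.0 + math.exp(-D * a * (theta - b)))

def tcc(theta, form):
    """Test characteristic curve: expected number-correct true score."""
    return sum(p(theta, a, b) for a, b in form)

def equate(x_score, lo=-6.0, hi=6.0):
    """Bisection inverse of Form X's TCC, then Form Y's TCC at that theta."""
    for _ in range(80):
        mid = (lo + hi) / 2.0
        if tcc(mid, form_x) < x_score:
            lo = mid
        else:
            hi = mid
    return tcc((lo + hi) / 2.0, form_y)

print(equate(1.5))  # Form Y true-score equivalent of a Form X score of 1.5
```

SS-MIRT true-score equating follows the same inversion idea, but with each subdomain loading on its own dimension rather than a single theta.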
8

Variable Structure And Dynamism Extensions To A Devs Based Modeling And Simulation Framework

Deniz, Fatih 01 February 2010 (has links) (PDF)
In this thesis, we present our approach to adding dynamism support to simulation environments; it adopts the DEVS-based modeling and simulation approach and builds upon previous work on SiMA, a DEVS-based simulation framework developed at TUBITAK UEKAE. Defining and executing simulation models of complex and adaptive systems is often a non-trivial task. One requirement of simulation software frameworks for such complex and adaptive systems is support for variable structure models, which can change their behavior and structure according to changing conditions. The relevant literature already offers solutions to the dynamism support problem. One particular contribution of this study over previous approaches is systematic and automatic framework support for post-structural-change state synchronization among models with related couplings, in a way that benefits from the strongly-typed execution environment SiMA provides. In addition to introducing theoretical extensions to classic SiMA, this study provides performance comparisons of the dynamic version against the classic version on a sample Wireless Sensor Network simulation and discusses the possible effects of the dynamism extensions on performance.
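The atomic-model interface that DEVS-based frameworks such as SiMA build on can be sketched as follows; the class, method, and job names here are illustrative and are not SiMA's actual API. A processor model buffers jobs and emits each after a fixed service time, driven by the standard internal/external transition, time-advance, and output functions.

```python
# Minimal classic-DEVS atomic model plus a hand-rolled event loop (a sketch,
# not a full DEVS simulator: one model, externally scheduled arrivals).
INF = float("inf")

class Processor:
    def __init__(self, service_time=2.0):
        self.service_time = service_time
        self.queue = []          # state: pending jobs
        self.sigma = INF         # time until the next internal event

    def ta(self):                # time-advance function
        return self.sigma

    def delta_ext(self, elapsed, job):   # external transition
        self.queue.append(job)
        if len(self.queue) == 1:
            self.sigma = self.service_time   # was idle: start serving
        else:
            self.sigma -= elapsed            # keep serving the current job

    def delta_int(self):                 # internal transition
        self.queue.pop(0)
        self.sigma = self.service_time if self.queue else INF

    def output(self):                    # output function, fired before delta_int
        return self.queue[0]

arrivals = [(0.0, "j1"), (0.5, "j2"), (6.0, "j3")]   # (time, job)
proc, t, done = Processor(), 0.0, []
while arrivals or proc.ta() != INF:
    t_int = t + proc.ta()
    t_arr = arrivals[0][0] if arrivals else INF
    if t_int <= t_arr:                   # internal event first
        done.append((t_int, proc.output()))
        t = t_int
        proc.delta_int()
    else:                                # external event first
        elapsed, t = t_arr - t, t_arr
        proc.delta_ext(elapsed, arrivals.pop(0)[1])
print(done)  # → [(2.0, 'j1'), (4.0, 'j2'), (8.0, 'j3')]
```

A variable-structure extension of the kind the thesis describes would additionally allow transitions to add or remove models and couplings, which is what makes post-change state synchronization necessary.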
10

Essays in asset pricing

Liu, Liu January 2017 (has links)
This thesis improves our understanding of asset prices and returns as it documents a regime shift risk premium in currencies, corrects the estimation bias in the term premium of bond yields, and shows the impact of ambiguity aversion towards parameter uncertainty on equities. The thesis consists of three essays. The first essay "The Yen Risk Premiums: A Story of Regime Shifts in Bond Markets" documents a new monetary mechanism, namely the shift of monetary policies, to account for the forward premium puzzle in the USD-JPY currency pair. The shift of monetary policy regimes is modelled by a regime switching dynamic term structure model where the risk of regime shifts is priced. Our model estimation characterises two policy regimes in the Japanese bond market---a conventional monetary policy regime and an unconventional policy regime of quantitative easing. Using foreign exchange data from 1985 to 2009, we find that the shift of monetary policies generates currency risk: the yen excess return is predicted by the Japanese regime shift premium, and the emergence of the yen carry trade in the mid 1990s is associated with the transition from the conventional to the unconventional monetary policy in Japan. The second essay "Correcting Estimation Bias in Regime Switching Dynamic Term Structure Models" examines the small sample bias in the estimation of a regime switching dynamic term structure model. Using US data from 1971 to 2009, we document two regimes driven by the conditional volatility of bond yields and risk factors. In both regimes, the process of bond yields is highly persistent, which is the source of estimation bias when the sample size is small. After bias correction, the inference about expectations of future policy rates and long-maturity term premia changes dramatically in two high-volatility episodes: the 1979--1982 monetary experiment and the recent financial crisis. 
Empirical findings are supported by Monte Carlo simulation, which shows that correcting small-sample bias leads to more accurate inference about expectations of future policy rates and term premia than is obtained before bias correction. The third essay "Learning about the Persistence of Recessions under Ambiguity Aversion" incorporates ambiguity aversion into the process of parameter learning and assesses the asset pricing implications of the model. Ambiguity is characterised by the unknown parameter that governs the persistence of recessions, and the representative investor learns about this parameter while being ambiguity averse towards parameter uncertainty. We examine model-implied conditional moments and simulated moments of asset prices and returns, and document an uncertainty effect that characterises the difference between learning under ambiguity aversion and learning under standard recursive utility. This uncertainty effect is asymmetric across economic expansions and recessions, and this asymmetry generates in simulation a sharp increase in the equity premium at the onset of recessions, as in the recent financial crisis.
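The filtering step behind regime-switching models like those in the first two essays can be sketched with a minimal two-regime Hamilton filter: latent regimes with different volatilities, with filtered regime probabilities updated by Bayes' rule each period. All numbers below are illustrative, not estimates from the thesis.

```python
import math

# Illustrative two-regime setup: a persistent low-volatility regime 0 and a
# high-volatility regime 1, Gaussian observations in each.
P = [[0.95, 0.05], [0.10, 0.90]]      # regime transition matrix
MU, SD = [0.0, 0.0], [0.5, 2.0]       # per-regime mean and volatility

def normal_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def hamilton_filter(data):
    """Return the filtered probability of regime 0 at each date."""
    prob, out = [0.5, 0.5], []
    for x in data:
        # Predict: propagate regime probabilities one period ahead.
        pred = [sum(prob[i] * P[i][j] for i in range(2)) for j in range(2)]
        # Update: reweight by each regime's likelihood of the observation.
        lik = [pred[j] * normal_pdf(x, MU[j], SD[j]) for j in range(2)]
        s = sum(lik)
        prob = [l / s for l in lik]
        out.append(prob[0])
    return out

# Calm observations followed by volatile ones: the filter switches regimes.
data = [0.1, -0.2, 0.3, 0.1, 3.5, -4.0, 3.0, -2.5]
probs = hamilton_filter(data)
print(probs[3], probs[-1])  # high probability of regime 0 early, low at the end
```

In a full regime-switching dynamic term structure model the same recursion runs over yield-curve factors, and the filtered probabilities feed both the likelihood and the regime-shift risk premium.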
