61 
Stochastic volatility models. Le, Truc. January 2005 (has links)
Abstract not available

62 
The real effects of S&P 500 Index additions: evidence from corporate investment. Wei, Yong (卫勇). January 2010 (has links)
published_or_final_version / Economics and Finance / Master / Master of Philosophy

63 
Options pricing and risk measures under regime-switching models. Hao, Fangcheng (郝方程). January 2011 (has links)
published_or_final_version / Statistics and Actuarial Science / Doctoral / Doctor of Philosophy

64 
Mathematical models and numerical algorithms for option pricing and optimal trading. Song, Na (宋娜). January 2013 (has links)
Research conducted in mathematical finance focuses on the quantitative modeling of financial markets. It allows financial problems to be solved with mathematical methods and provides understanding and prediction of complex financial behavior. In this thesis, efforts are devoted to deriving and extending stochastic optimization models in financial economics and to establishing practical algorithms for representing and solving problems in mathematical finance.
An option gives the holder the right, but not the obligation, to buy or sell an underlying asset at a specified strike price on or before a specified date. In this thesis, a valuation model for a perpetual convertible bond is developed when the price dynamics of the underlying share are governed by Markovian regime-switching models. By making use of the relationship between the convertible bond and an American option, the valuation of a perpetual convertible bond can be transformed into an optimal stopping problem. A novel approach is also proposed to determine the optimal inventory level of a retail product from a real-option perspective. The expected present value of the net profit from selling the product, which is the objective function of the optimal inventory problem, can be given by the actuarial value of a real option. Hence, option pricing techniques are adopted to solve the optimal inventory problem.
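The optimal stopping connection can be illustrated in the simplest setting. The sketch below values a perpetual American put under constant volatility, where the exercise boundary is known in closed form; this is only a textbook special case, not the regime-switching model of the thesis, and the function name and parameters are chosen for the example:

```python
def perpetual_put(S, K, r, sigma):
    """Closed-form value of a perpetual American put (constant volatility,
    no dividends). Solving the optimal stopping problem gives an exercise
    boundary S* below which immediate exercise is optimal."""
    gamma = 2.0 * r / sigma ** 2
    S_star = gamma * K / (gamma + 1.0)        # optimal exercise boundary
    if S <= S_star:
        return K - S                          # exercise immediately
    # continuation value, with value matching and smooth pasting at S*
    return (K - S_star) * (S / S_star) ** (-gamma)
```

At the boundary the value matches the intrinsic payoff K - S*, which is the value-matching condition that pins down the stopping rule.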
The goal of risk management is to eliminate or minimize the level of risk associated with a business operation. In the risk measurement literature, relatively little work focuses on the measurement and management of interest rate instruments. This thesis is concerned with building a risk measurement framework, based on modern risk measures such as Value-at-Risk (VaR) and Expected Shortfall (ES), for describing and quantifying the risk of interest-rate-sensitive instruments. A lesson of the recent financial turmoil is that maximizing profit is not the only objective that needs to be taken into account; risk control is of primary importance. Hence, an optimal submission problem of bid and ask quotes in the presence of risk constraints is studied in this thesis and formulated as a stochastic optimal control problem.
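As a hedged illustration of the two risk measures named above, here is a generic historical-simulation sketch (not the thesis's interest-rate framework; the quantile convention is one common choice among several):

```python
def var_es(losses, alpha=0.95):
    """Historical Value-at-Risk and Expected Shortfall from a loss sample.

    VaR is the empirical alpha-quantile of losses; ES is the average of
    losses at or beyond VaR, so ES >= VaR by construction."""
    s = sorted(losses)
    k = int(alpha * len(s))      # index of the alpha-quantile
    var = s[k]
    tail = s[k:]                 # losses at or beyond VaR
    es = sum(tail) / len(tail)
    return var, es
```

ES is a coherent risk measure while VaR in general is not, which is one reason both appear in modern risk measurement frameworks.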
Portfolio management is the professional management of various securities and assets in order to meet investment objectives and balance risk against performance. Different choices of time series models for asset prices may lead to different portfolio management strategies. In this thesis, a discrete-time dynamic programming approach is explored that is flexible enough to deal with the optimal asset allocation problem under a general stochastic dynamical system. It is also of interest to analyze the implications of the heteroscedastic effect, described by a continuous-time stochastic volatility model, for evaluating the risk of a cash management problem. A continuous-time dynamic programming approach is employed to investigate the cash management problem under a stochastic volatility model and a constant volatility model, respectively. / published_or_final_version / Mathematics / Doctoral / Doctor of Philosophy

65 
THE IMPLICATIONS OF DECREASING BLOCK PRICING FOR INDIVIDUAL DEMAND FUNCTIONS: AN EMPIRICAL APPROACH. Wade, Steven Howard. January 1980 (has links)
Decreasing block pricing refers to the practice of selling a product at successively lower marginal prices as the amount purchased in any one time period increases. In more familiar terms, this practice can be thought of as any quantity discount scheme in which marginal price does not vary continuously with quantity. Decreasing block pricing results in a faceted, non-convex budget set and, under standard assumptions concerning consumer preferences, yields several nonstandard theoretical implications. The central goal of this paper is to formulate an estimation technique consistent with these implications. When the budget set is not convex, the uniqueness of consumer equilibrium is no longer guaranteed. It also follows that discontinuities in demand occur whenever consumer equilibrium shifts from one facet of the budget constraint to another. Prior empirical studies have not made use of demand functions consistent with these results. In Chapter 2, a utility-maximizing algorithm was developed to determine consumer equilibrium for a Cobb-Douglas utility function, given the declining block pricing schedule and income. In developing this algorithm, it was made clear that the proper approach for estimating individual demand was through the use of a block-dependent independent variable. The coefficient of this block-dependent independent variable provided an estimate of a utility function parameter which completely specified the Cobb-Douglas form. Incorporating this utility function estimate into the utility-maximization algorithm made it possible to obtain estimates of consumption given changes in any or all of the rate schedule components. While the use of a block-dependent independent variable is the theoretically correct method for estimating demand, it poses an inescapable errors-in-variables problem. A Monte Carlo study was performed in Chapter 2 to investigate, among other things, the seriousness of the errors-in-variables bias. The results were quite encouraging.
When using data incorporating extremely large error variances, remarkably precise estimates were obtained. Another encouraging Monte Carlo result was that, when samples not containing a discontinuity were compared with those containing one, the latter produced statistically significantly superior estimates. Chapter 3 generalized the estimation technique of the previous chapter to allow the estimation of demand using cross-sectional data. The data base recorded monthly electricity consumption for households from a number of cities whose utilities had decreasing block rates. Seven of these cities were selected for analysis. The data also included various demographic characteristics and electric appliance stock information. The generalization was accomplished by assuming that all households had a Stone-Geary utility function. Also, the utility function parameter representing the minimum required quantity of electricity was assumed to depend linearly on the household's appliance stock and demographic characteristics. This allowed demand to vary across households on the basis of this parameter and income. The results of applying this regression technique to the cross-sectional data were then compared with results from a conventional, non-theoretically-based demand specification. The data were used in pooled and individual-month form, with the former yielding much better statistical results. The Stone-Geary form provided a greater number of significant coefficients for price and income variables than the conventional version. The predominant failure of the conventional version was that the coefficient of marginal price was rarely significant and, when significant, frequently of the wrong sign. For the same samples, the Stone-Geary results were quite acceptable except for the regressions involving one of the cities.
Thus, it was demonstrated that a method consistent with the theoretical implications of decreasing block pricing is easily applied to cross-sectional data and produces better results than conventional techniques.
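The facet-by-facet search for consumer equilibrium described in the abstract can be sketched for a two-block schedule. This is an illustrative reconstruction, not the dissertation's actual algorithm; the function name and parameters are invented for the example:

```python
def block_demand(a, income, p1, p2, qb):
    """Utility-maximizing quantity under a two-block declining price schedule.

    Cobb-Douglas utility U = x**a * y**(1-a), where y is spending on all
    other goods; p1 applies to the first qb units, p2 < p1 thereafter.
    The budget set is non-convex, so each facet's interior optimum and the
    kink are enumerated and compared directly."""
    def outlay(x):
        return p1 * x if x <= qb else p1 * qb + p2 * (x - qb)

    def utility(x):
        y = income - outlay(x)
        return x ** a * y ** (1 - a) if y > 0 else float("-inf")

    candidates = [qb]                            # the kink itself
    x1 = a * income / p1                         # interior optimum, block 1
    if x1 <= qb:
        candidates.append(x1)
    i2 = income - (p1 - p2) * qb                 # virtual income on block 2
    x2 = a * i2 / p2                             # interior optimum, block 2
    if x2 >= qb:
        candidates.append(x2)
    return max(candidates, key=utility)
```

Because the budget set is non-convex, two candidates can tie in utility, which is exactly the non-uniqueness of equilibrium (and the resulting demand discontinuity) that the abstract emphasizes.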

66 
Willow tree. Ho, Andy C.T. 11 1900 (has links)
We present a tree algorithm, called the willow tree, for financial derivative pricing. The setup of the tree uses a fixed number of spatial nodes at each time step. The transition probabilities are determined by solving linear programming problems. The willow tree method is radically superior in numerical performance when compared to the binomial tree method.
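For context, the binomial benchmark mentioned above can be sketched as a standard Cox-Ross-Rubinstein tree, checked against the Black-Scholes closed form. This is the comparison baseline only; the willow tree's LP-fitted transition probabilities are beyond the scope of this sketch:

```python
from math import exp, log, sqrt, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, r, sigma, T):
    """Black-Scholes European call, used as the reference price."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

def crr_call(S, K, r, sigma, T, n):
    """Cox-Ross-Rubinstein binomial tree for a European call.

    Note the node count grows by one per step; the willow tree instead
    keeps a fixed number of spatial nodes at every step."""
    dt = T / n
    u = exp(sigma * sqrt(dt))
    d = 1.0 / u
    p = (exp(r * dt) - d) / (u - d)       # risk-neutral up probability
    disc = exp(-r * dt)
    # terminal payoffs, then backward induction to the root
    values = [max(S * u ** j * d ** (n - j) - K, 0.0) for j in range(n + 1)]
    for _ in range(n):
        values = [disc * (p * values[j + 1] + (1 - p) * values[j])
                  for j in range(len(values) - 1)]
    return values[0]
```

The binomial price converges to Black-Scholes as the number of steps grows, which is what makes it a natural yardstick for alternative tree designs.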

67 
Multilateral approaches to the theory of international comparisons. Armstrong, Keir G. 11 1900 (has links)
The present thesis provides a definite answer to the question of how comparisons of certain aggregate quantities and price levels should be made across two or more geographic regions. It does so from the viewpoint of both economic theory and the “test” (or “axiomatic”) approach to index-number theory. Chapter 1 gives an overview of the problem of multilateral interspatial comparisons and introduces the rest of the thesis.
Chapter 2 focuses on a particular domain of comparison involving consumer goods and services, countries and households in developing a theory of international comparisons in terms of the (Konüs-type) cost-of-living index. To this end, two new classes of purchasing power parity measures are set out and the relationship between them is explored. The first is the many-household analogue of the (single-household) cost-of-living index and, as such, is rooted in the theory of group cost-of-living indexes. The second consists of sets of (nominal) expenditure-share deflators, each corresponding to a system of (real) consumption shares for a group of countries. Using this framework, a rigorous exact index-number interpretation for Diewert’s “own-share” system of multilateral quantity indexes is provided.
Chapter 3 develops a novel multilateral test approach to the problem at hand by generalizing Eichhorn and Voeller’s bilateral counterpart in a sensible manner. The equivalence of this approach to an extended version of Diewert’s multilateral test approach is exploited in an assessment of the relative merits of several alternative multilateral comparison formulae motivated outside the test-approach framework.
Chapter 4 undertakes an empirical comparison of the formulae examined on theoretical grounds in Chapter 3, using an appropriate cross-sectional data set constructed by the Eurostat-OECD Purchasing Power Parity Programme. The principal aim of this comparison is to ascertain the magnitude of the effect of choosing one formula over another. In aid of this, a new indicator is proposed which facilitates the measurement of the difference between two sets of purchasing power parities, each computed using a different multilateral index-number formula.
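One standard multilateral formula family of the kind compared in Chapter 4 is GEKS, which averages bilateral Fisher indexes across all regions. The sketch below is a generic textbook illustration, not one of the thesis's own constructions:

```python
from math import exp, log, sqrt

def fisher(p_a, q_a, p_b, q_b):
    """Bilateral Fisher price index of region b relative to region a:
    the geometric mean of the Laspeyres and Paasche indexes."""
    lasp = (sum(pb * qa for pb, qa in zip(p_b, q_a))
            / sum(pa * qa for pa, qa in zip(p_a, q_a)))
    paas = (sum(pb * qb for pb, qb in zip(p_b, q_b))
            / sum(pa * qb for pa, qb in zip(p_a, q_b)))
    return sqrt(lasp * paas)

def geks(prices, quantities):
    """GEKS multilateral PPPs: for each region b, the geometric mean over
    all link regions k of F(k, b) / F(k, 0), normalized so region 0 has
    parity 1. This restores transitivity, which bilateral Fisher lacks."""
    n = len(prices)
    F = [[fisher(prices[a], quantities[a], prices[b], quantities[b])
          for b in range(n)] for a in range(n)]
    return [exp(sum(log(F[k][b]) - log(F[k][0]) for k in range(n)) / n)
            for b in range(n)]
```

A difference indicator of the sort Chapter 4 proposes would then compare the vector produced by one such formula against the vector produced by another.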

68 
Three essays on volatility long memory and European option valuation. Wang, Yintian, 1976-. January 2007 (has links)
This dissertation is in the form of three essays on the topic of component and long memory GARCH models. The unifying feature of the thesis is the focus on investigating European index option valuation using these models. / The first essay presents a new model for the valuation of European options. In this model, the volatility of returns consists of two components. One of these components is a long-run component that can be modeled as fully persistent. The other component is short-run and has zero mean. The model can be viewed as an affine version of Engle and Lee (1999), allowing for easy valuation of European options. The model substantially outperforms a benchmark single-component volatility model that is well established in the literature. It also fits options better than a model that combines conditional heteroskedasticity and Poisson normal jumps. While the improvement in the component model's performance is partly due to its improved ability to capture the structure of the smirk and the path of spot volatility, its most distinctive feature is its ability to model the term structure. This feature enables the component model to jointly model long-maturity and short-maturity options. / The second essay derives two new GARCH variance component models with non-normal innovations. One of these models has an affine structure and leads to a closed-form option valuation formula. The other model has a non-affine structure, and hence option valuation is carried out using Monte Carlo simulation. We provide an empirical comparison of these two new component models and the respective special cases with normal innovations. We also compare the four component models against the GARCH(1,1) models which they nest. All eight models are estimated using MLE on S&P 500 returns. The likelihood criterion strongly favors the component models as well as non-normal innovations. The properties of the non-affine models differ significantly from those of the affine models.
Evaluating the performance of component variance specifications for option valuation using parameter estimates from returns data also provides strong support for component models. However, support for non-normal innovations and the non-affine structure is less convincing for option valuation. / The third essay aims to investigate the impact of long memory in volatility on European option valuation. We mainly compare two groups of GARCH models that allow for long memory in volatility. They are the component Heston-Nandi GARCH model developed in the first essay, in which the volatility of returns consists of a long-run and a short-run component, and a fractionally integrated Heston-Nandi GARCH (FIHNGARCH) model based on Bollerslev and Mikkelsen (1999). We investigate the performance of the models using S&P 500 index returns and cross-sections of European options data. The component GARCH model slightly outperforms the FIHNGARCH in fitting return data but significantly dominates the FIHNGARCH in capturing option prices. The findings are mainly due to the shorter memory of the FIHNGARCH model, which may be attributed to an artificially prolonged leverage effect that results from fractional integration and the limitations of the affine structure.
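The GARCH(1,1) benchmark that these component models nest can be sketched as a variance recursion with a Gaussian log-likelihood. This is a generic textbook sketch, not the essays' estimators, and the parameter values in the usage example are illustrative rather than estimates from the data:

```python
from math import log, pi

def garch11_loglik(returns, omega, alpha, beta):
    """Gaussian log-likelihood of a GARCH(1,1) variance recursion:

        h_t = omega + alpha * r_{t-1}**2 + beta * h_{t-1}

    The component models replace the constant long-run level implied by
    omega with a time-varying long-run component; GARCH(1,1) is the
    nested special case."""
    h = omega / (1.0 - alpha - beta)   # start at the unconditional variance
    ll = 0.0
    for r in returns:
        ll += -0.5 * (log(2 * pi) + log(h) + r * r / h)
        h = omega + alpha * r * r + beta * h
    return ll
```

MLE, as used on the S&P 500 returns in the second essay, maximizes this quantity over (omega, alpha, beta) subject to alpha + beta < 1.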

69 
Three essays on volatility specification in option valuation. Mimouni, Karim. January 2007 (has links)
Most recent empirical option valuation studies build on the affine square root (SQR) stochastic volatility model. The SQR model is a convenient choice because it yields closed-form solutions for option prices. However, relatively little is known about the empirical shortcomings of this model. In the first essay, we investigate alternatives to the SQR model by comparing its empirical performance with that of five different but equally parsimonious stochastic volatility models. We provide empirical evidence from three different sources. We first use realized volatilities to assess the properties of the SQR model and to guide us in the search for alternative specifications. We then estimate the models using maximum likelihood on a long sample of S&P 500 returns. Finally, we employ nonlinear least squares on a time series of cross-sections of option data. In the estimations on returns and options data, we use the particle filtering technique to retrieve the spot volatility path. The three sources of data all point to the same conclusion: the SQR model is misspecified. Overall, the best of the alternative volatility specifications is a model we refer to as the VAR model, which is of the GARCH diffusion type. / In the second essay, we estimate the Constant Elasticity of Variance (CEV) model in order to study the level of nonlinearity in the volatility dynamic. We also estimate a CEV process combined with a jump process (CEVJ) and analyze the effects of the jump component on the nonlinearity coefficient. Estimation is performed using the particle filtering technique on a long series of S&P 500 returns and on options data. We find that both returns data and returns-and-options data favor nonlinear specifications for the volatility dynamic, suggesting that the extensive use of linear models is not supported empirically. We also find that the inclusion of jumps does not affect the level of nonlinearity and does not improve the CEV model fit.
/ The third essay provides an empirical comparison of two classes of option valuation models: continuous-time models and discrete-time models. The literature provides some theoretical limit results for these types of dynamics, and researchers have used these limit results to argue that the performance of certain discrete-time and continuous-time models ought to be very similar. This interpretation is somewhat contentious, because a given discrete-time model can have several continuous-time limits, and a given continuous-time model can be the limit of more than one discrete-time model. Therefore, it is imperative to investigate whether similarities between these specifications exist from an empirical perspective. Using data on S&P 500 returns and call options, we find that the discrete-time models investigated in this paper fit the data as well as selected continuous-time models, both in- and out-of-sample.
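The two continuous-time variance specifications contrasted in the first essay can be illustrated with a simple Euler discretization. This is a hedged simulation sketch under assumed dynamics (the names and parameters are invented for the example), not the particle-filter estimation used in the essays:

```python
import random

def simulate_variance(v0, kappa, theta, n_steps, dt, xi, model, seed=0):
    """Euler paths for two stochastic-volatility specifications:

        'sqr': dv = kappa*(theta - v) dt + xi*sqrt(v) dW   (affine square root)
        'var': dv = kappa*(theta - v) dt + xi*v dW         (GARCH diffusion type)

    The square-root diffusion is undefined for v < 0 under Euler stepping,
    so the drift and diffusion use the truncated value max(v, 0)."""
    rng = random.Random(seed)
    v = v0
    path = [v]
    for _ in range(n_steps):
        dw = rng.gauss(0.0, 1.0) * dt ** 0.5
        vp = max(v, 0.0)                      # full truncation for 'sqr'
        if model == "sqr":
            v = v + kappa * (theta - vp) * dt + xi * vp ** 0.5 * dw
        else:  # 'var'
            v = v + kappa * (theta - v) * dt + xi * v * dw
        path.append(v)
    return path
```

With the diffusion coefficient switched off (xi = 0), both specifications reduce to the same deterministic mean reversion toward theta, which isolates their difference to how volatility-of-volatility scales with the variance level.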

70 
Modelling strategic information technology impact on interfirm competition: pricing. Nault, Barrie R. January 1990 (has links)
This research studies normative pricing strategies for information technology (IT) used by suppliers to supplement an underlying primary good. Transactions with consumers and with customer firms are considered. Characteristics of IT are divided into IT impacts on customers and IT impacts on suppliers. IT impacts on customers include vertical differentiation or reduced turnover costs for the primary good, and positive IT adoption costs. IT impacts on suppliers include reduced production costs for the primary good, and the costs of IT. Optimal pricing for the IT and the primary good is modelled for monopoly, and Bertrand competition based on the IT and the primary good is modelled for oligopoly. Two-part tariffs are used for the IT and the IT-enhanced primary good. Results of pricing to consumers show that the fixed component of an optimal (or equilibrium) two-part tariff can be either a net tax or a net subsidy, confirming the possibility of taxed or subsidized IT adoption. For a monopolist offering only the IT and the IT-enhanced primary good, the consumer's adoption/switching cost limits the possible subsidy. Consistent with previous economics research, in a duopoly where one supplier has IT, the IT supplier abandons the original primary good. Two suppliers with identical IT cannot attain a positive-profit equilibrium. Analogous results obtain for a special case of pricing to customer firms. Empirical results support differential (premium) pricing for an IT-enhanced primary good over an original good. / Business, Sauder School of / Operations and Logistics (OPLOG), Division of / Graduate
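The two-part tariff logic can be illustrated in the simplest textbook case: identical consumers with linear demand and a constant marginal cost. This is not the thesis's IT model (which adds adoption costs and competition), and the names below are invented for the sketch:

```python
def optimal_two_part_tariff(a, b, c):
    """Monopoly two-part tariff for identical consumers with demand
    q = a - b*p and constant marginal cost c: set the usage price at
    marginal cost and extract the resulting surplus via the fixed fee."""
    p = c                          # usage price = marginal cost
    q = a - b * p
    fee = q * q / (2.0 * b)        # consumer surplus at price p
    return p, fee

def tariff_profit(a, b, c, p):
    """Profit per consumer from usage price p plus the largest fixed fee
    the consumer will accept, i.e. their surplus at that price."""
    q = max(a - b * p, 0.0)
    fee = q * q / (2.0 * b)
    return fee + (p - c) * q
```

In the richer setting of the thesis, the fixed component need not equal consumer surplus: adoption costs and competitive pressure can push it down to a net subsidy, the possibility the abstract highlights.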
