421. Monotonicity of Option Prices Relative to Volatility. Cheng, Yu-Chen. 18 July 2012.
The Black-Scholes formula is a widely used model for option pricing; it calculates the price of an option from the current underlying asset price, the strike price, the time to expiration, the volatility, and the interest rate. The European call option price given by the model is convex and increasing with respect to the initial underlying asset price. Assuming the underlying asset price follows a generalized geometric Brownian motion, option prices are increasing with respect to the constant interest rate and the volatility, so the volatility is a very important factor in pricing options. Suppose the volatility process σ(t) (which reduces to the constant case when σ(t) = σ for all t) satisfies σ_1 ≤ σ(t) ≤ σ_2 for some constants σ_1 and σ_2 such that 0 ≤ σ_1 ≤ σ_2. Let C_i(t, S_t) be the price of the call at time t corresponding to the constant volatility σ_i (i = 1,2). We derive that the price of the call option at time 0 in the model with varying volatility belongs to the interval [C_1(0, S_0), C_2(0, S_0)].
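As a quick illustration of this monotonicity, here is a minimal sketch (hypothetical contract parameters, standard Black-Scholes formula) that computes C_1(0, S_0) and C_2(0, S_0) and checks that the price under a constant intermediate volatility falls between them; the thesis result extends this bracketing to a varying volatility process.

```python
# Sketch: Black-Scholes call prices are increasing in volatility, so prices
# computed with the bounding volatilities sigma_1 and sigma_2 bracket the
# price for any volatility between them. Parameter values are hypothetical.
from math import log, sqrt, exp, erf

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S0, K, T, r, sigma):
    """Standard Black-Scholes European call price."""
    d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S0 * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

S0, K, T, r = 100.0, 100.0, 1.0, 0.02   # hypothetical contract
sigma_1, sigma_2 = 0.15, 0.35           # bounds on the volatility process

C1 = bs_call(S0, K, T, r, sigma_1)
C2 = bs_call(S0, K, T, r, sigma_2)
C_mid = bs_call(S0, K, T, r, 0.25)      # any sigma in [sigma_1, sigma_2]
assert C1 <= C_mid <= C2                # monotonicity in volatility
print(f"C_1 = {C1:.4f} <= {C_mid:.4f} <= C_2 = {C2:.4f}")
```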
422. Performance Analysis of Fully Joint Diversity Combining, Adaptive Modulation, and Power Control Schemes. Bouida, Zied. 14 January 2010.
Adaptive modulation and diversity combining represent very important adaptive
solutions for future generations of wireless communication systems. Indeed, to
improve the performance and the efficiency of these systems, these two techniques
have recently been used jointly in new schemes named joint adaptive modulation
and diversity combining (JAMDC) schemes. Considering the problem of finding low-complexity,
bandwidth-efficient, and processing-power-efficient transmission schemes
for a downlink scenario and capitalizing on some of these recently proposed JAMDC
schemes, we propose and analyze three fully joint adaptive modulation, diversity
combining, and power control (FJAMDC) schemes. More specifically, the modulation
constellation size, the number of combined diversity paths, and the needed power
level are determined jointly to achieve the highest spectral efficiency with the lowest
possible combining complexity, given the fading channel conditions and the required
bit error rate (BER) performance. The performance of these three FJAMDC schemes
is analyzed in terms of their spectral efficiency, processing power consumption, and
error-rate performance. Selected numerical examples show that these schemes considerably
increase the spectral efficiency of the existing JAMDC schemes, with a slight increase in the average number of combined paths in the low signal-to-noise ratio
range, while maintaining compliance with the BER requirement and a low radiated
power, resulting in a substantial decrease in interference to co-existing systems/users.
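A toy sketch of the joint selection idea, not the authors' actual FJAMDC schemes: for a given branch SNR, search over constellation size, number of combined paths, and power level, keeping the feasible combination with the highest spectral efficiency, then the fewest paths, then the lowest power. The M-QAM BER approximation and the i.i.d.-branch MRC model are simplifying assumptions.

```python
# Toy joint selection of (constellation, diversity paths, power), loosely
# mirroring the FJAMDC idea; the BER model is the common approximation
# BER ~ 0.2*exp(-1.5*g/(M-1)) for M-QAM, and post-combining SNR is taken
# as the sum of i.i.d. branch SNRs under MRC. All values are hypothetical.
from math import exp, log2

def approx_ber(M, snr):
    """Approximate BER of M-QAM at post-combining SNR `snr` (linear scale)."""
    return 0.2 * exp(-1.5 * snr / (M - 1))

def select_mode(branch_snr, max_paths=4, ber_target=1e-3,
                constellations=(4, 16, 64), power_levels=(0.25, 0.5, 1.0)):
    # Prefer high spectral efficiency, then few combined paths, then low power.
    best = None
    for M in constellations:
        for L in range(1, max_paths + 1):
            for p in power_levels:
                snr = p * L * branch_snr        # MRC over L i.i.d. branches
                if approx_ber(M, snr) <= ber_target:
                    cand = (log2(M), -L, -p)
                    if best is None or cand > best[0]:
                        best = (cand, (M, L, p))
    return None if best is None else best[1]

print(select_mode(branch_snr=10.0))  # (M, L, p) at a 10 dB branch SNR
```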
423. Bayesian classification and survival analysis with curve predictors. Wang, Xiaohui. 15 May 2009.
We propose classification models for binary and multicategory data where the
predictor is a random function. The functional predictor could be irregularly and
sparsely sampled or characterized by high dimension and sharp localized changes. In
the former case, we employ Bayesian modeling utilizing a flexible spline basis that is
widely used for functional regression. In the latter case, we use Bayesian modeling
with wavelet basis functions, which have nice approximation properties over a large
class of functional spaces and can accommodate a variety of functional forms observed
in real-life applications. We develop a unified hierarchical model that accommodates
both the adaptive spline- or wavelet-based function estimation model and
the logistic classification model. These two models are coupled to borrow
strength from each other in this unified hierarchical framework. The use of Gibbs
sampling with conjugate priors for posterior inference makes the method computationally
feasible. We compare the performance of the proposed models with the naive
models as well as existing alternatives by analyzing simulated as well as real data. We
also propose a Bayesian unified hierarchical model based on a proportional hazards model and generalized linear model for survival analysis with irregular longitudinal
covariates. This relatively simple joint model has two advantages. One is that using
a spline basis simplifies the parameterization while still capturing a flexible non-linear
pattern of the function. The other is that the joint modeling framework allows sharing
of information between the regression on functional predictors and the proportional
hazards modeling of survival data, improving the efficiency of estimation. The novel
method can be used not only for a single functional predictor but also for multiple
functional predictors. Our methods are applied to analyze real data sets and
compared with a parameterized regression method.
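A minimal frequentist stand-in for the spline-based classifier described above (the dissertation couples the stages in one Bayesian hierarchy fit by Gibbs sampling; this sketch, on simulated data, fits them separately): project each sparsely, irregularly sampled curve onto a common B-spline basis, then classify from the basis coefficients.

```python
# Sketch of the two-stage idea behind functional classification: represent
# each irregularly sampled curve by coefficients on a common B-spline basis,
# then feed the coefficients to a logistic classifier. The thesis couples
# these stages in one Bayesian hierarchy; here they are fit separately.
import numpy as np
from scipy.interpolate import BSpline
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
knots = np.concatenate(([0] * 4, np.linspace(0, 1, 8)[1:-1], [1] * 4))
n_basis = len(knots) - 4  # cubic splines

def basis_matrix(t):
    """Evaluate all cubic B-spline basis functions at time points t."""
    return np.column_stack([
        BSpline.basis_element(knots[j:j + 5], extrapolate=False)(t)
        for j in range(n_basis)])

X, y = [], []
for i in range(200):
    label = i % 2
    t = np.sort(rng.uniform(0, 1, rng.integers(8, 15)))  # sparse, irregular
    f = np.sin(2 * np.pi * t) + label * t                # class-dependent curve
    obs = f + rng.normal(0, 0.2, t.size)
    B = np.nan_to_num(basis_matrix(t))                   # 0 outside support
    coef, *_ = np.linalg.lstsq(B, obs, rcond=None)       # least-squares projection
    X.append(coef)
    y.append(label)

clf = LogisticRegression(max_iter=1000).fit(np.array(X), np.array(y))
print("training accuracy:", clf.score(np.array(X), np.array(y)))
```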
424. Gas ejector modeling for design and analysis. Liao, Chaqing. 15 May 2009.
A generalized ejector model was successfully developed for gas ejector design and
performance analysis. Previous 1-D analytical models can be derived from this new
comprehensive model as particular cases. For the first time, this model shows the
relationship between the constant-pressure and constant-area 1-D ejector models. The
new model extends existing models and provides a high level of confidence in the
understanding of ejector mechanics. “Off-design” operating conditions, such as the
shock occurring in the primary stream, are included in the generalized ejector model.
Additionally, this model has been applied to two-phase systems including the gas-liquid
ejector designed for a Proton Exchange Membrane (PEM) fuel cell system.
The equations of the constant-pressure and constant-area models were verified. A
parametric study was performed on these widely adopted 1-D analytical ejector models.
FLUENT, commercially available Computational Fluid Dynamics (CFD) software, was
used to model gas ejectors. To validate the CFD simulation, the numerical predictions were compared to test data and good agreement was found between them. Based on this
benchmark, FLUENT was applied to design ejectors with optimal geometry
configurations.
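The 1-D models above rest on control-volume conservation laws. As a much-simplified illustration (incompressible, frictionless, constant-area mixing with hypothetical inputs, not the compressible model developed in the dissertation), the sketch below recovers the outlet velocity from mass conservation and the outlet pressure from the momentum balance.

```python
# Incompressible constant-area mixing-section balance, a toy version of the
# control-volume reasoning behind 1-D ejector models. Inputs are hypothetical.
rho = 1.2                      # kg/m^3, gas density (treated as constant)
A1, A2 = 0.002, 0.008          # m^2, primary / secondary inlet areas
V1, V2 = 150.0, 20.0           # m/s, inlet velocities
P1, P2 = 90e3, 95e3            # Pa, inlet static pressures

A = A1 + A2                    # constant-area mixing section
m1, m2 = rho * A1 * V1, rho * A2 * V2
V_out = (m1 + m2) / (rho * A)  # mass conservation

# Momentum: P1*A1 + P2*A2 + m1*V1 + m2*V2 = P_out*A + (m1+m2)*V_out
P_out = (P1 * A1 + P2 * A2 + m1 * V1 + m2 * V2 - (m1 + m2) * V_out) / A

print(f"entrainment ratio  m2/m1 = {m2 / m1:.3f}")
print(f"mixed velocity     V_out = {V_out:.1f} m/s")
print(f"outlet pressure    P_out = {P_out / 1e3:.1f} kPa")
```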
425. Generalized score tests for missing covariate data. Jin, Lei. 15 May 2009.
In this dissertation, the generalized score tests based on weighted estimating equations
are proposed for missing covariate data. Their properties, including the effects
of nuisance functions on the forms of the test statistics and efficiency of the tests,
are investigated. Different versions of the test statistic are properly defined for various
parametric and semiparametric settings. Their asymptotic distributions are also
derived. It is shown that when the models for the nuisance functions are correct, the
test statistics can be obtained by plugging estimates of the nuisance
functions into the corresponding test statistic for the case where the nuisance functions
are known. Furthermore, the optimal test is obtained using the relative efficiency
measure. As an application of the proposed tests, a formal model validation procedure
is developed for generalized linear models in the presence of missing covariates.
The asymptotic distribution of the data-driven methods is provided. A simulation
study in both linear and logistic regressions illustrates the applicability and the finite
sample performance of the methodology. Our methods are also employed to analyze
a coronary artery disease diagnostic dataset.
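A simplified sketch of the weighted-estimating-equation score test idea on simulated data (complete-case inverse-probability weights treated as known and a model-based variance, rather than the estimated nuisance functions and efficiency corrections developed in the dissertation): testing H0: beta_1 = 0 in a logistic model whose covariate is missing at random.

```python
# Simplified inverse-probability-weighted score test for H0: beta_1 = 0 in a
# logistic regression with a covariate missing at random. Weights are treated
# as known and a model-based variance is used; the dissertation's generalized
# tests handle estimated nuisance functions and efficiency formally.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(1)
n = 2000
x = rng.normal(size=n)
p = 1 / (1 + np.exp(-(-0.5 + 0.0 * x)))      # H0 true: beta_1 = 0
y = rng.binomial(1, p)
pi = 1 / (1 + np.exp(-(0.5 + 0.8 * y)))      # missingness depends on y (MAR)
R = rng.binomial(1, pi)                       # R=1: covariate observed
w = R / pi                                    # inverse-probability weights

# Fit the null (intercept-only) model via the weighted score equation:
phat = np.sum(w * y) / np.sum(w)              # solves sum w*(y - p) = 0
U = np.sum(w * x * (y - phat))                # score for beta_1 at beta_1 = 0

# Model-based variance with the intercept profiled out.
v = phat * (1 - phat)
I00 = np.sum(w * v)
I01 = np.sum(w * x * v)
I11 = np.sum(w * x**2 * v)
V = I11 - I01**2 / I00

T = U**2 / V                                  # approx. chi-square, 1 df
print(f"score statistic = {T:.3f}, p-value = {chi2.sf(T, df=1):.3f}")
```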
426. Bayesian Semiparametric Models for Heterogeneous Cross-platform Differential Gene Expression. Dhavala, Soma Sekhar. December 2010.
We are concerned with testing for differential expression and consider three different
aspects of such testing procedures. First, we develop an exact ANOVA-type
model for discrete gene expression data, produced by technologies such as Massively
Parallel Signature Sequencing (MPSS), Serial Analysis of Gene Expression (SAGE),
or other next-generation sequencing technologies. We adopt two Bayesian hierarchical
models—one parametric and the other semiparametric with a Dirichlet process
prior that has the ability to borrow strength across related signatures, where a signature
is a specific arrangement of the nucleotides. We utilize the discreteness of the
Dirichlet process prior to cluster signatures that exhibit similar differential expression
profiles. Tests for differential expression are carried out using non-parametric
approaches, while controlling the false discovery rate. Next, we consider ways to
combine expression data from different studies, possibly produced by different technologies
resulting in mixed-type responses, such as Microarrays and MPSS. Depending
on the technology, the expression data can be continuous or discrete and can have different
technology-dependent noise characteristics. Adding to the difficulty, genes can
have an arbitrary correlation structure both within and across studies. Performing
several hypothesis tests for differential expression could also lead to false discoveries.
We propose to address all the above challenges using a Hierarchical Dirichlet process
with a spike-and-slab base prior on the random effects, while smoothing splines model the unknown link functions that map different technology-dependent manifestations
to latent processes upon which inference is based. Finally, we propose an algorithm
for controlling different error measures in Bayesian multiple testing under generic
loss functions, including the widely used uniform loss function. We do not make
any specific assumptions about the underlying probability model but require that
indicator variables for the individual hypotheses are available as a component of the
inference. Given this information, we recast multiple hypothesis testing as a combinatorial
optimization problem and, in particular, the 0-1 knapsack problem, which
can be solved efficiently using a variety of algorithms, both approximate and exact in
nature.
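A sketch of that final recasting, with hypothetical posterior quantities: given posterior probabilities that each hypothesis is non-null (computable from the indicator variables in the MCMC output), one natural instance is choosing a rejection set that maximizes expected true positives subject to a bound on expected false positives, a 0-1 knapsack problem, solved here by the standard dynamic program on scaled integer weights.

```python
# 0-1 knapsack formulation of Bayesian multiple testing: reject a set S
# maximizing expected true positives sum_{i in S} p_i subject to expected
# false positives sum_{i in S} (1 - p_i) <= budget. The p_i are posterior
# non-null probabilities (hypothetical here); weights are scaled to integers
# so the classic dynamic program applies (an approximation of the
# continuous-weight problem).
import numpy as np

def knapsack_reject(p, budget, scale=1000):
    w = np.round((1.0 - p) * scale).astype(int)   # integer "costs"
    cap = int(budget * scale)
    n = len(p)
    best = np.zeros(cap + 1)                      # best value per capacity
    keep = np.zeros((n, cap + 1), dtype=bool)
    for i in range(n):
        for c in range(cap, w[i] - 1, -1):        # reverse: each item once
            if best[c - w[i]] + p[i] > best[c]:
                best[c] = best[c - w[i]] + p[i]
                keep[i, c] = True
    # Trace back the chosen rejection set.
    S, c = [], cap
    for i in range(n - 1, -1, -1):
        if keep[i, c]:
            S.append(i)
            c -= w[i]
    return sorted(S)

p = np.array([0.99, 0.95, 0.90, 0.60, 0.30, 0.05])  # posterior P(non-null)
S = knapsack_reject(p, budget=0.5)                   # <= 0.5 expected FPs
print("rejected hypotheses:", S, " expected FPs:", (1 - p[S]).sum())
```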
427. Testing Lack-of-Fit of Generalized Linear Models via Laplace Approximation. Glab, Daniel Laurence. May 2011.
In this study we develop a new method for testing the null hypothesis that the predictor
function in a canonical link regression model has a prescribed linear form. The class of
models, which we will refer to as canonical link regression models, constitutes arguably
the most important subclass of generalized linear models and includes several of the most
popular generalized linear models. In addition to the primary contribution of this study,
we will revisit several other tests in the existing literature. The common feature of the
proposed test and the existing tests is that they are all based on orthogonal series
estimators and are used to detect departures from a null model.
Our proposal for a new lack-of-fit test is inspired by the recent contribution of Hart
and is based on a Laplace approximation to the posterior probability of the null hypothesis.
Despite having a Bayesian construction, the resulting statistic is implemented in a
frequentist fashion. The formulation of the statistic is based on characterizing departures
from the predictor function in terms of Fourier coefficients, and subsequently testing that all
of these coefficients are 0. The resulting test statistic can be characterized as a weighted
sum of exponentiated squared Fourier coefficient estimators, where the weights depend
on user-specified prior probabilities. The prior probabilities provide the investigator with the
flexibility to examine specific departures from the prescribed model. Alternatively, the use
of noninformative priors produces a new omnibus lack-of-fit statistic.
We present a thorough numerical study of the proposed test and the various existing
orthogonal series-based tests in the context of the logistic regression model. Simulation
studies demonstrate that the test statistics under consideration possess desirable power
properties against alternatives that have been identified in the existing literature as being
important.
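A rough sketch of the statistic's shape, not the exact form derived in the dissertation: standardized Fourier coefficients of the residuals from a null logistic fit enter a weighted sum of exponentiated squared terms, with the null distribution approximated by simulating independent standard normal coefficients. The weights, standardization, and reference distribution here are all illustrative assumptions.

```python
# Rough sketch of a lack-of-fit statistic of the kind described above:
# a weighted sum of exponentiated squared standardized Fourier coefficients
# of the residuals from the null logistic fit. The exact weights and
# standardization in the dissertation differ; priors here are illustrative.
import numpy as np

rng = np.random.default_rng(2)
n, J = 500, 6
x = rng.uniform(0, 1, n)
eta = -1 + 2 * x + 1.5 * np.sin(2 * np.pi * x)     # true model is nonlinear
y = rng.binomial(1, 1 / (1 + np.exp(-eta)))

# Null fit: logistic regression linear in x, via a few Newton steps.
X = np.column_stack([np.ones(n), x])
b = np.zeros(2)
for _ in range(25):
    mu = 1 / (1 + np.exp(-X @ b))
    W = mu * (1 - mu)
    b += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (y - mu))

resid = y - mu
phi = np.array([np.sqrt(2) * np.cos(j * np.pi * x) for j in range(1, J + 1)])
denom = np.sqrt(phi**2 @ (mu * (1 - mu)))          # approximate std. errors
z = (phi @ resid) / denom                          # standardized coefficients

prior = np.full(J, 1.0 / J)                        # noninformative weights
stat = np.sum(prior * np.exp(z**2 / 2))

# Null distribution: simulate z_j ~ N(0, 1) independently.
sims = np.sum(prior * np.exp(rng.normal(size=(20000, J))**2 / 2), axis=1)
print(f"statistic = {stat:.2f}, approx p-value = {np.mean(sims >= stat):.4f}")
```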
428. The Impact of Trust Model on Customer Loyalty: A Study of Direct Selling Industry. Wang, Jau-Shyong. 19 January 2005.
The role of trust in market exchange has been of consistent interest to marketing researchers over the past decade. Many studies in marketing have shown that customer trust in a company and its representatives can positively influence customer loyalty. However, a customer's dealings with a particular product/service provider can also be influenced by the customer's trust in the broader marketplace, for example, trust in those who regulate the market and trust in the professionals who populate the marketplace. Drawing from a number of disciplines in addition to marketing, we identify three types of trust (Institutional Trust, Role Trust, Generalized Trust) in the broader marketplace that might influence trust (interpersonal trust, firm-specific trust) between two exchange partners. Using survey results collected from direct sellers of Taiwan's direct selling companies, we test competing theories about the influence of this trust. Our results show that the influence of broad-scope trust on customer loyalty is not direct, but is mediated by narrow-scope trust. Because the substitutional view implies a direct relationship between broad-scope trust and customer loyalty, this finding supports the foundational view of the relationship between broad-scope and narrow-scope trust.
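A compact sketch of the mediation logic behind this finding, on simulated data with simple OLS and a Sobel test (the study itself uses survey measures and a richer model): when broad-scope trust affects loyalty only through narrow-scope trust, its direct effect shrinks once the mediator enters the regression.

```python
# Illustrative mediation check: broad-scope trust -> narrow-scope trust ->
# loyalty. Simulated data; the study itself uses survey measures and a
# structural model. Sobel z tests the indirect (mediated) path a*b.
import numpy as np

rng = np.random.default_rng(3)
n = 400
broad = rng.normal(size=n)                              # broad-scope trust
narrow = 0.7 * broad + rng.normal(scale=0.7, size=n)    # mediator
loyalty = 0.8 * narrow + rng.normal(scale=0.8, size=n)  # no direct path

def ols(y, X):
    """OLS with intercept; returns coefficients and standard errors."""
    X = np.column_stack([np.ones(len(y))] + list(X))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
    return beta, se

# Path a: mediator on predictor; path b: outcome on mediator + predictor.
(_, a), (_, se_a) = ols(narrow, [broad])
(_, b, c_prime), (_, se_b, _) = ols(loyalty, [narrow, broad])
sobel_z = a * b / np.sqrt(b**2 * se_a**2 + a**2 * se_b**2)
print(f"indirect a*b = {a*b:.3f} (Sobel z = {sobel_z:.2f}), "
      f"direct c' = {c_prime:.3f}")
```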
429. Bounds On The Anisotropic Elastic Constants. Dinckal, Cigdem. 01 February 2008.
In this thesis, the mechanical and elastic behaviour of anisotropic materials is investigated
in order to understand their optimum mechanical behaviour in selected directions.
For an anisotropic material with known elastic constants, it is possible to choose
the best set of effective elastic constants and effective eigenvalues which determine
the optimum mechanical and elastic properties of the material and also represent it
in a specified greater material symmetry.
For this reason, bounds on the effective elastic constants, which are the best set
of elastic constants, and the effective eigenvalues of materials have been constructed
symbolically for all anisotropic elastic symmetries by using the Hill [4,13] approach.
Anisotropic Hooke's law and its Kelvin-inspired formulation are described and
the generalized Hill inequalities are explained in detail. For different types of symmetries,
materials were selected randomly and data on their elastic constants
were collected. These data have been used to calculate bounds on the effective
elastic constants and effective eigenvalues.
Finally, by examining the numerical results of the bounds given in tables, it is seen that
materials selected from the same symmetry type which have a larger interval
between the bounds are more anisotropic, whereas materials which have
a smaller interval between the bounds are closer to isotropy.
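As a small illustration of the Kelvin-inspired formulation mentioned above (using well-known cubic constants for silicon as example data): scale the Voigt stiffness matrix into Kelvin (Mandel) form and read off its eigenvalues, which for cubic symmetry reduce to C11 + 2*C12, C11 - C12 (twice), and 2*C44 (three times).

```python
# Kelvin (Mandel) form of the Voigt stiffness matrix: scale shear rows and
# columns by sqrt(2) so the 6x6 matrix is a true tensor representation whose
# eigenvalues are the Kelvin moduli. Constants below are for cubic silicon
# (three independent constants C11, C12, C44).
import numpy as np

C11, C12, C44 = 165.7, 63.9, 79.6   # GPa, cubic silicon
C = np.zeros((6, 6))
C[:3, :3] = C12
C[np.arange(3), np.arange(3)] = C11
C[np.arange(3, 6), np.arange(3, 6)] = C44

T = np.diag([1, 1, 1, np.sqrt(2), np.sqrt(2), np.sqrt(2)])
C_kelvin = T @ C @ T                 # Kelvin/Mandel representation

eigvals = np.sort(np.linalg.eigvalsh(C_kelvin))
print("Kelvin eigenvalues (GPa):", np.round(eigvals, 2))
print("check 3K = C11 + 2*C12 =", C11 + 2 * C12)   # bulk eigenvalue
print("check C11 - C12 =", C11 - C12, " 2*C44 =", 2 * C44)
```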
430. Parameter Estimation In Generalized Partial Linear Models With Conic Quadratic Programming. Celik, Gul. 01 September 2010.
In statistics, regression analysis is a technique used to understand and model the
relationship between a dependent variable and one or more independent variables.
Multivariate Adaptive Regression Splines (MARS) is a form of regression analysis. It is a
non-parametric regression technique and can be seen as an extension of linear models
that automatically models non-linearities and interactions. MARS is very important
in both classification and regression, with an increasing number of applications in
many areas of science, economy and technology.
In our study, we analyzed Generalized Partial Linear Models (GPLMs), which are
particular semiparametric models. GPLMs separate the input variables into two parts
and additively integrate classical linear models with a nonlinear model part. In order
to smooth this nonparametric part, we use Conic Multivariate Adaptive Regression Splines
(CMARS), which is a modified form of MARS. MARS is very beneficial for high-dimensional
problems and does not require any particular class of relationship between
the regressor variables and the outcome variable of interest. This technique offers a great advantage for fitting nonlinear multivariate functions. Also, the contribution of the
basis functions can be estimated by MARS, so that both the additive and interaction
effects of the regressors are allowed to determine the dependent variable. There are
two steps in the MARS algorithm: the forward and backward stepwise algorithms. In
the first step, the model is constructed by adding basis functions until a maximum
level of complexity is reached. Conversely, in the second step, the backward stepwise
algorithm reduces the complexity by removing the least significant basis functions from
the model.
In this thesis, we suggest not using the backward stepwise algorithm; instead, we employ
a Penalized Residual Sum of Squares (PRSS). We construct PRSS for MARS as a
Tikhonov Regularization Problem. We treat this problem using continuous optimization
techniques, which we consider an important complementary technology
and alternative to the concept of the backward stepwise algorithm. In particular, we apply
the elegant framework of Conic Quadratic Programming (CQP), an area of convex
optimization that is very well structured, thereby resembling linear programming and,
therefore, permitting the use of interior point methods.
At the end of this study, we compare CQP with the Tikhonov Regularization problem
on two different data sets, with and without interaction effects. Moreover,
using two other data sets, we compare CMARS with two
other classification methods, Infinite Kernel Learning (IKL) and Tikhonov
Regularization, whose results are obtained from a thesis that is in progress.
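A minimal sketch of the PRSS idea on simulated data (fixed hinge knots rather than forward-selected ones, and the closed-form ridge/Tikhonov solution in place of the CQP formulation): penalize the coefficients of a MARS-style hinge basis instead of pruning terms with the backward pass.

```python
# Tikhonov-regularized fit over a MARS-style hinge basis: in place of the
# backward stepwise pass, a ridge penalty controls complexity (the thesis
# formulates the PRSS problem as a conic quadratic program; this sketch
# uses the closed-form ridge solution). Knots here are fixed, not
# forward-selected.
import numpy as np

rng = np.random.default_rng(4)
n = 300
x = rng.uniform(0, 1, n)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=n)

knots = np.linspace(0.1, 0.9, 9)
# Reflected-pair hinge functions max(0, x - t) and max(0, t - x), plus x.
B = np.column_stack([np.ones(n), x] +
                    [np.maximum(0, x - t) for t in knots] +
                    [np.maximum(0, t - x) for t in knots])

lam = 1.0                                  # Tikhonov penalty weight
coef = np.linalg.solve(B.T @ B + lam * np.eye(B.shape[1]), B.T @ y)
fit = B @ coef
print(f"lambda = {lam}: training RMSE = {np.sqrt(np.mean((y - fit)**2)):.3f}")
```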