51 |
Increasing coupling for probabilistic cellular automata. Louis, Pierre-Yves, January 2005 (has links)
We give a necessary and sufficient condition for the existence of an increasing coupling of N (N >= 2) synchronous dynamics on S^(Z^d) (probabilistic cellular automata, PCA). Increasing means the coupling preserves stochastic ordering. We first present our main construction theorem in the case where S is totally ordered; applications to attractive PCAs are given. When S is only partially ordered, we show on two examples that a coupling of more than two synchronous dynamics may not exist. We also prove an extension of our main result for a particular class of partially ordered spaces.
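To make the notion of an increasing coupling concrete, here is a minimal sketch (the ring size and all rates are made up, not taken from the thesis): two configurations of an attractive two-state PCA are updated synchronously with the same uniform draws, and the sitewise ordering between them is preserved at every step.

```python
import random

# Hypothetical attractive PCA on a ring: each cell becomes 1 with a
# probability that is non-decreasing in the sum over its neighbourhood
# (attractiveness). The rate table is illustrative only.
def p_one(neigh_sum):
    return [0.1, 0.5, 0.9, 0.95][neigh_sum]  # non-decreasing -> attractive

def coupled_step(lo, hi, rng):
    """One synchronous update of two configurations driven by the SAME
    uniforms; for attractive rates this preserves lo <= hi sitewise."""
    n = len(lo)
    us = [rng.random() for _ in range(n)]
    new_lo, new_hi = [], []
    for i in range(n):
        s_lo = lo[i - 1] + lo[i] + lo[(i + 1) % n]
        s_hi = hi[i - 1] + hi[i] + hi[(i + 1) % n]
        new_lo.append(1 if us[i] < p_one(s_lo) else 0)
        new_hi.append(1 if us[i] < p_one(s_hi) else 0)
    return new_lo, new_hi

rng = random.Random(0)
lo, hi = [0] * 10, [1] * 10
for _ in range(50):
    lo, hi = coupled_step(lo, hi, rng)
    assert all(a <= b for a, b in zip(lo, hi))  # stochastic order preserved
```

Because the same uniform drives both copies at each site and the acceptance threshold is monotone in the neighbourhood, the ordering can never be broken; this is exactly the mechanism an increasing coupling formalizes.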
|
52 |
Linear and Non-linear Monotone Methods for Valuing Financial Options Under Two-Factor, Jump-Diffusion Models. Clift, Simon Sivyer, January 2007 (has links)
The evolution of the price of two financial assets may be modeled by correlated geometric Brownian motion with additional, independent, finite activity jumps. Similarly, the evolution of the price of one financial asset may be modeled by a stochastic volatility process and finite activity jumps. The value of a contingent claim, written on assets where the underlying evolves by either of these two-factor processes, is given by the solution of a linear, two-dimensional, parabolic, partial integro-differential equation (PIDE). The focus of this thesis is the development of new, efficient numerical solution approaches for these PIDEs in both linear and non-linear cases. A localization scheme approximates the initial-value problem on an infinite spatial domain by an initial-boundary value problem on a finite spatial domain. Convergence of the localization method is proved using a Green's function approach. An implicit, finite difference method discretizes the PIDE. The theoretical conditions for the stability of the discrete approximation are examined under both maximum and von Neumann analysis. Three linearly convergent, monotone variants of the approach are reviewed for the constant coefficient, two-asset case and reformulated for the non-constant coefficient, stochastic volatility case. Each monotone scheme satisfies the conditions which imply convergence to the viscosity solution of the localized PIDE. A fixed point iteration solves the discrete, algebraic equations at each time step. This iteration avoids solving a dense linear system through the use of a lagged integral evaluation; dense matrix-vector multiplication is avoided by using an FFT method. Using Green's function analysis, von Neumann analysis and maximum analysis, the fixed point iteration is shown to be rapidly convergent under typical market parameters. Combined with a penalty iteration, this approach computes the value of options with an American early exercise feature.
The rapid convergence of the iteration is verified in numerical tests using European and American options with vanilla payoffs, and digital, one-touch option payoffs. These tests indicate that the localization method for the PIDEs is effective. Adaptations are developed for degenerate or extreme parameter sets. The three monotone approaches are compared by computational cost and resulting error. For the stochastic volatility case, grid rotation is found to be the preferred approach. Finally, a new algorithm is developed for the solution of option values in the non-linear case of a two-factor option where the jump parameters are known only to within a deterministic range. This case results in a Hamilton-Jacobi-Bellman style PIDE. A monotone discretization is used and a new fixed-point policy iteration is developed for the time-step solution. Analysis proves that the new iteration is globally convergent under a mild time step restriction. Numerical tests demonstrate the overall convergence of the method and investigate the financial implications of uncertain parameters on the option value.
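A hedged one-dimensional sketch of the lagged-integral fixed-point idea described above (the thesis treats two-factor models with monotone discretizations; the grid, coefficients, payoff and jump density here are illustrative only, and a dense solve stands in for a banded one):

```python
import numpy as np

# Illustrative 1D jump-diffusion PIDE time step: implicit in the local
# (diffusion/drift/discount) part, with the dense jump integral lagged
# and evaluated by FFT so no dense linear system is ever solved for it.
n, dt, sig, r, lam = 256, 0.01, 0.3, 0.05, 0.5
x = np.linspace(-3.0, 3.0, n)          # log-price grid
dx = x[1] - x[0]
v = np.maximum(np.exp(x) - 1.0, 0.0)   # call-style payoff as initial data

# Implicit operator (I - dt*L) for the local part
a = 0.5 * sig**2 / dx**2
b = (r - 0.5 * sig**2) / (2.0 * dx)
A = np.zeros((n, n))
for i in range(1, n - 1):
    A[i, i - 1] = -dt * (a - b)
    A[i, i] = 1.0 + dt * (2.0 * a + r + lam)
    A[i, i + 1] = -dt * (a + b)
A[0, 0] = A[-1, -1] = 1.0              # crude Dirichlet localization

# Symmetric jump density in log-price; its FFT turns the integral term
# into a circular convolution instead of a dense matrix-vector product
dens = np.exp(-0.5 * (x / 0.4) ** 2)
dens /= dens.sum() * dx
kernel_hat = np.fft.fft(np.fft.ifftshift(dens)) * dx

def jump_term(u):
    # approximates the integral of u(x + y) f(y) dy (density symmetric)
    return np.real(np.fft.ifft(np.fft.fft(u) * kernel_hat))

# Fixed-point iteration with the integral lagged: each pass costs one
# tridiagonal-structured solve plus one FFT
u = v.copy()
for k in range(50):
    rhs = v + dt * lam * jump_term(u)
    rhs[0], rhs[-1] = v[0], v[-1]
    u_new = np.linalg.solve(A, rhs)
    err = np.max(np.abs(u_new - u))
    u = u_new
    if err < 1e-10:
        break
```

With dt*lam small, successive corrections contract by roughly dt*lam per pass, so the iteration converges in a handful of passes, which is the "rapidly convergent under typical market parameters" behavior the abstract refers to.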
|
54 |
Online Auctions: Theoretical and Empirical Investigations. Zhang, Yu, August 2010 (has links)
This dissertation, which consists of three essays, studies online auctions both theoretically and empirically.
The first essay studies a special online auction format used by eBay, "Buy-It-Now" (BIN) auctions, in which bidders are allowed to buy the item at a fixed BIN price set by the seller and end the auction immediately. I construct a two-stage model in which the BIN price is only available to one group of bidders. I find that the bidders' cutoff is lower in this model, meaning that bidders are more likely to accept the BIN option than in models that assume all bidders are offered the BIN. The results explain the high frequency with which bidders accept the BIN price, and may also help explain the popularity of temporary BIN auctions on online auction sites, such as eBay, where the BIN option is only offered to early bidders.
In the second essay, I study how bidders' risk attitude and time preference affect their behavior in Buy-It-Now auctions. I consider two cases: when both bidders enter the auction at the same time (homogeneous bidders), so the BIN option is offered to both of them, and when the two bidders enter the auction at two different stages (heterogeneous bidders), so the BIN option is only offered to the early bidder. Bidders' optimal strategies are derived explicitly in both cases. In particular, given bidders' risk attitude and time preference, I calculate the cutoff valuation such that a bidder will accept the BIN if his valuation is higher than the cutoff and reject it otherwise. I find that the cutoff valuation in the case of heterogeneous bidders is lower than that in the case of homogeneous bidders.
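As a stylized illustration of such a cutoff valuation (the essay works with general risk attitudes and time preferences; the risk-neutral payoffs, the single-crossing structure and every number below are assumptions made only for this sketch):

```python
# Hypothetical parameters for a risk-neutral bidder with discounting.
B = 100.0        # posted Buy-It-Now price
delta = 0.95     # time preference: the auction payoff arrives later
p_win = 0.6      # chance of winning if the bidder waits for the auction
exp_price = 80.0 # expected auction price conditional on winning

def accept_payoff(v):
    return v - B                            # take the BIN now

def wait_payoff(v):
    return delta * p_win * (v - exp_price)  # discounted expected auction win

def cutoff(lo=B, hi=1000.0, tol=1e-9):
    """Bisection for the valuation at which the bidder is indifferent;
    by single crossing, accepting BIN is optimal above the cutoff."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if accept_payoff(mid) >= wait_payoff(mid):
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

v_star = cutoff()
# closed form from v - B = delta * p_win * (v - exp_price)
closed_form = (B - delta * p_win * exp_price) / (1 - delta * p_win)
assert abs(v_star - closed_form) < 1e-6
```

In this toy model the cutoff rises with B and falls as waiting becomes less attractive (lower delta or p_win), mirroring the comparative statics the essay derives in a richer setting.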
The third essay focuses on the empirical modeling of the price processes of online auctions. I generalize the monotone series estimator to model the pooled price processes, and then apply the model and the estimator to eBay auction data for a Palm PDA. The results are shown to capture closely the overall pattern of observed price dynamics. In particular, early bidding, the mid-auction drought, and sniping are well approximated by the estimated price curve.
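A minimal sketch of the monotone-fitting step underlying such estimators, using the pool-adjacent-violators algorithm (PAVA) on made-up pooled price observations (the thesis's monotone series estimator is more general than this):

```python
def pava(y):
    """Return the non-decreasing sequence closest to y in least squares
    (pool-adjacent-violators: merge blocks that violate monotonicity)."""
    vals, wts = [], []
    for yi in y:
        vals.append(float(yi))
        wts.append(1.0)
        while len(vals) > 1 and vals[-2] > vals[-1]:
            w = wts[-2] + wts[-1]
            v = (wts[-2] * vals[-2] + wts[-1] * vals[-1]) / w
            vals[-2:] = [v]
            wts[-2:] = [w]
    out = []
    for v, w in zip(vals, wts):
        out.extend([v] * int(w))
    return out

# noisy but broadly rising price path, as in an auction's price process
prices = [1.0, 1.5, 1.2, 2.0, 1.8, 2.5, 3.0, 2.9, 3.5]
fit = pava(prices)
assert all(a <= b for a, b in zip(fit, fit[1:]))  # fitted curve is monotone
```

Fitting a monotone curve respects the fact that an ascending auction's standing price can never decrease, while still smoothing observation noise.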
|
55 |
Overcoming the failure of the classical generalized interior-point regularity conditions in convex optimization. Applications of the duality theory to enlargements of maximal monotone operators. Csetnek, Ernö Robert, 14 December 2009 (has links) (PDF)
The aim of this work is to present several new results concerning duality in scalar convex optimization, the formulation of sequential optimality conditions, and some applications of duality to the theory of maximal monotone operators.
After recalling some properties of the classical generalized interiority notions that exist in the literature, we give some properties of the quasi interior and the quasi-relative interior, respectively. By means of these notions we introduce several generalized interior-point regularity conditions which guarantee Fenchel duality. Using an approach due to Magnanti, we derive corresponding regularity conditions, expressed via the quasi interior and quasi-relative interior, which ensure Lagrange duality. These conditions have the advantage of being applicable in situations where other classical regularity conditions fail. Moreover, we observe that several duality results given in the literature on this topic have either superfluous or contradictory assumptions; our investigations offer an alternative in this sense.
Necessary and sufficient sequential optimality conditions for a general convex optimization problem are established via perturbation theory. These results are applicable even in the absence of regularity conditions. In particular, we show that several results from the literature dealing with sequential optimality conditions are rediscovered and even improved.
The second part of the thesis is devoted to applications of the duality theory to enlargements of maximal monotone operators in Banach spaces. After establishing a necessary and sufficient condition for a bivariate infimal convolution formula, we employ it to characterize equivalently the $\varepsilon$-enlargement of the sum of two maximal monotone operators. In this way we generalize a classical result concerning the formula for the $\varepsilon$-subdifferential of the sum of two proper, convex and lower semicontinuous functions. A characterization of fully enlargeable monotone operators is also provided, answering an open problem stated in the literature. Further, we give a regularity condition for the weak$^*$-closedness of the sum of the images of enlargements of two maximal monotone operators.
The last part of this work deals with enlargements of positive sets in SSD spaces. It is shown that many results from the literature concerning enlargements of maximal monotone operators can be generalized to the setting of Banach SSD spaces.
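For orientation, the classical $\varepsilon$-subdifferential sum formula referred to above can be written, under a suitable regularity condition, as follows (the inclusion of the right-hand side in the left holds in general; equality is what the regularity condition buys):

```latex
\partial_{\varepsilon}(f+g)(x)
  \;=\;
  \bigcup_{\substack{\varepsilon_1,\,\varepsilon_2 \,\ge\, 0 \\ \varepsilon_1 + \varepsilon_2 \,=\, \varepsilon}}
  \bigl( \partial_{\varepsilon_1} f(x) + \partial_{\varepsilon_2} g(x) \bigr)
```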
|
56 |
Submaximal clones of monotone functions on the three-element universe [Clones sous-maximaux des fonctions monotones sur l'univers à trois éléments]. Bariteau, Charles, January 2007 (has links)
Thesis digitized by the Division de la gestion de documents et des archives of the Université de Montréal.
|
57 |
Metalogical Contributions to the Nonmonotonic Theory of Abstract Argumentation. Baumann, Ringo, 3 February 2014 (has links) (PDF)
The study of nonmonotonic logics is one major field of Artificial Intelligence (AI). The reason such formalisms are so attractive for modeling human reasoning is that they allow former conclusions to be withdrawn. At the end of the 1980s, the novel idea of using argumentation to model nonmonotonic reasoning emerged in AI. Nowadays argumentation theory is a vibrant research area in AI, covering aspects of knowledge representation, multi-agent systems, and also philosophical questions.
Phan Minh Dung’s abstract argumentation frameworks (AFs) play a dominant role in the field of argumentation. In AFs, arguments and the attacks between them are treated as primitives, i.e. the internal structure of arguments is not considered. The major focus is on resolving conflicts. To this end a variety of semantics have been defined, each of them specifying acceptable sets of arguments, so-called extensions, in a particular way. Although Dung-style AFs are among the simplest argumentation systems one can think of, the approach is still powerful. It can be seen as a general theory capturing several nonmonotonic formalisms, as well as a tool for solving well-known problems such as the stable-marriage problem.
This thesis is mainly concerned with the investigation of metalogical properties of Dung’s abstract theory. In particular, we provide cardinality, monotonicity and splitting results, as well as characterization theorems for equivalence notions. The established results have theoretical and practical gains: on the one hand, they yield deeper theoretical insights into how this nonmonotonic theory works, and on the other, they can be used to refine existing algorithms or even give rise to new computational procedures. A further main part is the study of problems regarding dynamic aspects of abstract argumentation. Most notably, we solve the so-called enforcing problem and the more general minimal change problem for a large number of semantics.
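As a minimal illustration of extension-based semantics (the tiny framework and the choice of stable semantics below are illustrative, not drawn from the thesis):

```python
from itertools import combinations

# Brute-force enumeration of the stable extensions of a Dung AF: a set S
# is stable iff it is conflict-free and attacks every argument outside S.
def stable_extensions(args, attacks):
    exts = []
    for r in range(len(args) + 1):
        for combo in combinations(sorted(args), r):
            s = set(combo)
            conflict_free = not any((a, b) in attacks for a in s for b in s)
            attacks_rest = all(any((a, b) in attacks for a in s)
                               for b in args - s)
            if conflict_free and attacks_rest:
                exts.append(frozenset(s))
    return exts

# made-up framework: a and b attack each other, b attacks c
args = {"a", "b", "c"}
attacks = {("a", "b"), ("b", "a"), ("b", "c")}
exts = stable_extensions(args, attacks)
# {a, c} is stable (conflict-free, attacks b); {b} is stable too.
assert frozenset({"a", "c"}) in exts and frozenset({"b"}) in exts
```

The two extensions reflect the nonmonotonic flavor: accepting a forces c back in (b is defeated), while accepting b withdraws both a and c.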
|
58 |
Accelerating Successive Approximation Algorithm Via Action Elimination. Jaber, Nasser M. A., Jr., 20 January 2009 (has links)
This research is an effort to improve the performance of the successive approximation algorithm, with the prime aim of solving finite-state, finite-action, infinite-horizon, stationary, discrete and discounted Markov Decision Processes (MDPs). Successive approximation is a simple and commonly used method to solve MDPs, but it often appears to be intractable for large-scale MDPs due to its computational complexity. Action elimination, one of the techniques used to accelerate the solution of MDPs, reduces the problem size by identifying and eliminating sub-optimal actions. In some cases successive approximation is terminated when all actions but one per state have been eliminated.
Bounds on the value functions are the key element in action elimination. New terms (action gain, action relative gain and action cumulative relative gain) are introduced to construct tighter bounds on the value functions and to propose an improved action elimination algorithm.
When the span semi-norm is used, we show numerically that the actual convergence of successive approximation is faster than the known theoretical rate. The absence of easy-to-compute bounds on the actual convergence rate motivated the current research to try a heuristic action elimination algorithm. The heuristic utilizes an estimated convergence rate in the span semi-norm to speed up action elimination. The algorithm demonstrated exceptional performance in terms of solution optimality and savings in computational time.
Certain types of structured Markov processes are known to have monotone optimal policies. Two special action elimination algorithms are proposed in this research to accelerate successive approximation for these types of MDPs. The first algorithm partitions the state space and prioritizes the updating of iterate values in a way that maximizes the temporary elimination of sub-optimal actions based on policy monotonicity. The second algorithm is an improved version that adds permanent action elimination. The performance of the proposed algorithms is assessed and compared to that of other algorithms; they demonstrated outstanding performance in terms of the number of iterations and the computational time to converge.
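The bound-based elimination idea can be sketched as follows. The discounted-difference bounds are the standard ones from the successive approximation literature; the MDP itself and all constants are made up for the sketch (this is not the thesis's improved algorithm):

```python
import random

# Small random MDP: R[s][a] rewards, P[s][a][t] transition probabilities.
random.seed(0)
S, A, gamma = 5, 4, 0.9
R = [[random.random() for _ in range(A)] for _ in range(S)]

def rand_dist(n):
    w = [random.random() for _ in range(n)]
    t = sum(w)
    return [x / t for x in w]

P = [[rand_dist(S) for _ in range(A)] for _ in range(S)]

active = [set(range(A)) for _ in range(S)]  # actions not yet eliminated
V = [0.0] * S
V_prev = None
for it in range(500):
    Q = [[R[s][a] + gamma * sum(P[s][a][t] * V[t] for t in range(S))
          for a in range(A)] for s in range(S)]
    if V_prev is not None:
        d = [V[s] - V_prev[s] for s in range(S)]
        lo = gamma * min(d) / (1.0 - gamma)   # V*(s) >= V[s] + lo
        hi = gamma * max(d) / (1.0 - gamma)   # V*(s) <= V[s] + hi
        for s in range(S):
            # Q*(s,a) <= Q[s][a] + gamma*hi, so an action whose upper
            # bound falls below the lower bound on V*(s) is sub-optimal
            active[s] = {a for a in active[s]
                         if Q[s][a] + gamma * hi >= V[s] + lo}
        if max(d) - min(d) < 1e-10:
            break
    V_prev, V = V, [max(Q[s][a] for a in active[s]) for s in range(S)]
```

As the bounds tighten geometrically, more and more sub-optimal actions are eliminated permanently, so later iterations maximize over ever smaller action sets; this is the source of the speed-up the abstract describes.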
|
59 |
Transaction size and effective spread: an informational relationship. Xiao, Yuewen, Banking & Finance, Australian School of Business, UNSW, January 2008 (has links)
The relationship between quantity traded and transaction costs has been one of the main focuses among financial scholars and practitioners. The purpose of this thesis is to investigate the informational relationship between these variables. Following insights and results of Milgrom (1981), Feldman (2004), and Feldman and Winer (2004), we use New York Stock Exchange (NYSE) data and kernel estimation methods to construct the distribution of one variable conditional on the other. We then study the information in these conditional distributions: the extent to which they are ordered by first order stochastic dominance (FOSD) and by the monotone likelihood ratio property (MLRP). We find that transaction size and effective spread are statistically significantly correlated. FOSD, a necessary condition for a "separating signaling equilibrium", holds under certain conditions. We start from the two-subsample case: we choose a cut-off point in transaction size, categorize the observations with transaction sizes smaller than the cut-off point into group "low", and classify the remaining data as "high". We repeat this procedure for all possible transaction size cut-off points. It turns out that FOSD holds nowhere. However, once we eliminate transactions at the quote midpoint, the "crossings" between exchange members who are not specialists, FOSD holds for all cut-off points below 15,800 shares. MLRP, a necessary and sufficient condition for the separating equilibrium to hold point by point of the conditional density functions, does not hold, but might not be ruled out considering the error in the estimates. We also find that large trades are not necessarily associated with large spreads; instead, larger trades are more likely than smaller trades to be transacted at the quote midpoint (again, the non-specialist "crossings").
Our results confirm the findings of Barclay and Warner (1993) regarding the informativeness of medium-size transactions: we identify informational relationships between mid-size transactions and spreads, but not for trades at the quote midpoint or large-size transactions. That is, we identify two regimes, an informational one and a non-informational/liquidity one.
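A toy version of the FOSD comparison between two conditional distributions (the thesis uses kernel-estimated conditional distributions on NYSE data; the two spread samples below are invented):

```python
# Empirical-CDF check of first order stochastic dominance between spreads
# conditioned on "low" vs "high" transaction size.
def ecdf(sample, x):
    return sum(1 for v in sample if v <= x) / len(sample)

def fosd(dominated, dominant, grid):
    """True if `dominant` first-order stochastically dominates
    `dominated`: its CDF lies at or below the other's on the grid."""
    return all(ecdf(dominant, x) <= ecdf(dominated, x) for x in grid)

low_size_spreads = [0.01, 0.02, 0.02, 0.03, 0.04]    # made-up samples
high_size_spreads = [0.02, 0.03, 0.04, 0.05, 0.06]
grid = [i / 100 for i in range(0, 8)]
assert fosd(low_size_spreads, high_size_spreads, grid)
```

Sweeping the transaction-size cut-off and re-running this check over the resulting "low"/"high" subsamples is, in spirit, the procedure the thesis repeats for all possible cut-off points.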
|
60 |
Dualization of monotone generalized equations. Pennanen, Teemu, January 1999 (has links)
Thesis (Ph.D.), University of Washington, 1999. Vita. Includes bibliographical references (p. 85-91).
|