  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
431

Regression-Based Monte Carlo For Pricing High-Dimensional American-Style Options / Regressionsbaserad Monte Carlo För Att Prissätta Högdimensionella Amerikanska Optioner

Andersson, Niklas January 2016 (has links)
Pricing financial derivatives is an essential part of the financial industry. For some derivatives a closed-form solution exists; however, the pricing of high-dimensional American-style derivatives remains a challenging problem. This project focuses on the derivative called an option, and especially on the pricing of American-style basket options, i.e. options with both an early-exercise feature and multiple underlying assets. In high-dimensional problems, which is definitely the case for American-style basket options, Monte Carlo methods are advantageous. Therefore, in this thesis, regression-based Monte Carlo has been used to determine early-exercise strategies for the option. The well-known Least Squares Monte Carlo (LSM) algorithm of Longstaff and Schwartz (2001) has been implemented and compared to Robust Regression Monte Carlo (RRM) by C. Jonen (2011). The difference between these methods is that robust regression is used instead of least-squares regression to calculate the continuation values of American-style options. Since robust regression is more stable against outliers, this approach is claimed by C. Jonen to give better estimates of the option price. It was hard to compare the techniques without the duality approach of Andersen and Broadie (2004), so this method was added. The numerical tests indicate that the exercise strategy determined using RRM produces a higher lower bound and a tighter upper bound compared to LSM; the gap between the upper and lower bounds could be up to 4 times smaller using RRM. Importance sampling and quasi-Monte Carlo have also been used to reduce the variance of the option price estimate and to speed up convergence.
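The core of the LSM algorithm described above is a backward induction in which a least-squares regression estimates the continuation value on in-the-money paths. The sketch below is a minimal single-asset illustration (not the thesis code), using an ordinary quadratic polynomial basis; the RRM variant would replace the `np.polyfit` step with a robust regression such as a Huber fit. All parameter values are illustrative.

```python
import numpy as np

def lsm_american_put(S0=36.0, K=40.0, r=0.06, sigma=0.2, T=1.0,
                     steps=50, paths=100_000, seed=0):
    """Price an American put with the Longstaff-Schwartz (2001) LSM algorithm."""
    rng = np.random.default_rng(seed)
    dt = T / steps
    disc = np.exp(-r * dt)
    # Simulate geometric Brownian motion paths
    z = rng.standard_normal((paths, steps))
    S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt
                              + sigma * np.sqrt(dt) * z, axis=1))
    # Backward induction: cashflows start as terminal payoffs
    cash = np.maximum(K - S[:, -1], 0.0)
    for t in range(steps - 2, -1, -1):
        cash *= disc                      # discount one step
        itm = (K - S[:, t]) > 0           # regress only on in-the-money paths
        if itm.any():
            x = S[itm, t]
            # least-squares fit of the continuation value (RRM: robust fit here)
            coef = np.polyfit(x, cash[itm], 2)
            cont = np.polyval(coef, x)
            exercise = (K - x) > cont
            idx = np.where(itm)[0][exercise]
            cash[idx] = K - S[idx, t]
    return disc * cash.mean()
```

For the parameters above (a classic Longstaff-Schwartz test case) the price is roughly 4.5.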
432

ROBUST ADAPTIVE BEAMFORMING WITH BROAD NULLS

Yudong, He, Xianghua, Yang, Jie, Zhou, Banghua, Zhou, Beibei, Shao 10 1900 (has links)
ITC/USA 2007 Conference Proceedings / The Forty-Third Annual International Telemetering Conference and Technical Exhibition / October 22-25, 2007 / Riviera Hotel & Convention Center, Las Vegas, Nevada / Robust adaptive beamforming using worst-case performance optimization has been developed in recent years. It performs well against array response errors, but it cannot reject strong interferences. In this paper, we propose a scheme for robust adaptive beamforming with broad nulls to reject strong interferences. We add a quadratic constraint to suppress the power of the array response over the spatial region of the interferences. The optimal weighting vector is then obtained by minimizing the power of the array output subject to quadratic constraints on the desired signal and the interferences, respectively. We derive the formulations for the optimization problem and solve it efficiently using a recursive Newton algorithm. Numerical examples are presented to compare the performance of robust adaptive beamforming with no null constraints, sharp nulls, and broad nulls. The results show its powerful ability to reject strong interferences.
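One simple way to realize a quadratic sector constraint of the kind described above is to penalize the average response power over a sampled grid of interference directions. The sketch below uses this penalized closed form with a uniform linear array rather than the paper's constrained Newton recursion; the array geometry, sector sampling, and penalty weight are illustrative assumptions.

```python
import numpy as np

def steering(theta_deg, n=8, d=0.5):
    # Steering vector of an n-element uniform linear array,
    # element spacing d in wavelengths.
    k = np.arange(n)
    return np.exp(2j * np.pi * d * k * np.sin(np.deg2rad(theta_deg)))

def broad_null_mvdr(R, look_deg, null_sector_deg, gamma=1e3):
    # Minimize w^H (R + gamma*Q) w subject to unit gain at the look
    # direction, where Q averages a(theta) a(theta)^H over the sampled
    # interference sector -- a quadratic penalty that broadens the null.
    n = R.shape[0]
    Q = sum(np.outer(steering(t, n), steering(t, n).conj())
            for t in null_sector_deg) / len(null_sector_deg)
    a0 = steering(look_deg, n)
    v = np.linalg.solve(R + gamma * Q, a0)
    return v / np.vdot(a0, v)  # normalize for unit look-direction gain
```

With a noise-only covariance `R = np.eye(8)` and a null sector of 20-30 degrees, the resulting pattern keeps unit gain at broadside while suppressing the whole sector, not just a single direction.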
433

Robust pricing and hedging beyond one marginal

Spoida, Peter January 2014 (has links)
The robust pricing and hedging approach in Mathematical Finance, pioneered by Hobson (1998), makes statements about non-traded derivative contracts by imposing very few assumptions on the underlying financial model, instead directly using information contained in traded options, typically call or put option prices. These prices are informative about the marginal distributions of the asset. Mathematically, the theory of Skorokhod embeddings provides one way to approach robust problems. In this thesis we consider mostly robust pricing and hedging problems for Lookback options (options written on the terminal maximum of an asset) and convex Vanilla options (options written on the terminal value of an asset), and we extend the analysis predominantly found in the literature on robust problems by two features: firstly, options with multiple maturities are available for trading (mathematically, this corresponds to multiple marginal constraints), and secondly, restrictions on the total realized variance of asset trajectories are imposed. Probabilistically, in both cases, we develop new optimal solutions to the Skorokhod embedding problem. More precisely, in Part I we start by constructing an iterated Azéma-Yor type embedding (a solution to the n-marginal Skorokhod embedding problem, see Chapter 2). Subsequently, its implications are presented in Chapter 3. From a Mathematical Finance perspective we obtain explicitly the optimal superhedging strategy for Barrier/Lookback options. From a probability theory perspective, we find the maximum maximum of a martingale constrained by finitely many intermediate marginal laws. Further, as a by-product, we discover a new class of martingale inequalities for the terminal maximum of a càdlàg submartingale (see Chapter 4); these inequalities enable us to re-derive the sharp versions of Doob's inequalities. In Chapter 5 a different problem is solved.
Motivated by the fact that in some markets both Vanilla and Barrier options with multiple maturities are traded, we characterize the set of market models in this case. In Part II we incorporate the restriction that the total realized variance of every asset trajectory is bounded by a constant, as previously suggested by Mykland (2000). We further assume that finitely many put options with one fixed maturity are traded. After introducing the general framework in Chapter 6, we analyse the associated robust pricing and hedging problem for convex Vanilla and Lookback options in Chapters 7 and 8. Robust pricing is achieved through the construction of appropriate Root solutions to the Skorokhod embedding problem. Robust hedging and pathwise duality are obtained by a careful development of dynamic pathwise superhedging strategies. Further, we characterize the existence of market models with a suitable notion of arbitrage.
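For orientation, the classical single-marginal Azéma-Yor embedding underlying the iterated construction mentioned above can be stated compactly; this is the standard one-marginal version, not the n-marginal result of the thesis.

```latex
% Barycentre function of a centred probability measure \mu on \mathbb{R}
% with finite first moment:
\[
  \psi_\mu(x) \;=\; \frac{1}{\mu([x,\infty))} \int_{[x,\infty)} y \,\mu(\mathrm{d}y).
\]
% Azema-Yor stopping time for Brownian motion B with running maximum
% S_t = \sup_{s \le t} B_s:
\[
  \tau_{\mathrm{AY}} \;=\; \inf\{\, t \ge 0 : S_t \ge \psi_\mu(B_t) \,\}.
\]
% Then B_{\tau_{AY}} \sim \mu, and among all embeddings of \mu this stopping
% time maximizes the law of the maximum: S_{\tau_{AY}} is distributed as the
% Hardy--Littlewood transform of \mu.
```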
434

Robust optimization for portfolio risk : a re-visit of worst-case risk management procedures after Basel III award

Özün, Alper January 2012 (has links)
The main purpose of this thesis is to develop methodological and practical improvements to robust portfolio optimization procedures. Firstly, the thesis discusses the drawbacks of classical mean-variance optimization models and examines robust portfolio optimization procedures with CVaR and worst-case CVaR risk models, providing a clear presentation of the derivation of the robust optimization models from a basic VaR model. For practical purposes, the thesis introduces an open-source software interface called "RobustRisk", developed to produce empirical evidence for the robust portfolio optimization models. The software, which performs Monte Carlo simulation and out-of-sample performance evaluation for portfolio optimization, is introduced using hypothetical portfolio data from selected emerging markets. In addition, the performance of the robust portfolio optimization procedures is discussed by providing empirical evidence from advanced markets in the crisis period. Empirical results show that robust optimization with the worst-case CVaR model outperforms the nominal CVaR model in the crisis period. The empirical results encourage us to construct a forward-looking stress test procedure based on robust portfolio optimization under regime switches. For this purpose, a Markov chain process is embedded into the robust optimization procedure in order to stress the regime transition matrix. In addition, asset returns, volatilities, the correlation matrix and the covariance matrix can be stressed under pre-defined scenario expectations. An application is provided with a hypothetical portfolio representing an internationally diversified portfolio. The CVaR efficient frontier and corresponding optimized portfolio weights are obtained under regime-switch scenarios. The research suggests that stressed-CVaR optimization provides a robust and forward-looking stress test procedure that complies with the regulatory requirements stated in the Basel II and CRD regulations.
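The worst-case models discussed above build on the scenario-based CVaR formulation. As background, the plain (nominal) CVaR minimization of Rockafellar and Uryasev can be written as a linear program over the portfolio weights, the VaR level, and per-scenario excess losses; the sketch below is that nominal LP, an illustration rather than the RobustRisk code.

```python
import numpy as np
from scipy.optimize import linprog

def min_cvar_portfolio(returns, beta=0.95):
    """Minimize portfolio CVaR_beta over scenario returns (S x N matrix)
    via the Rockafellar-Uryasev LP. Decision variables: weights w (long-only,
    fully invested), VaR level alpha, and scenario excess losses u."""
    S, N = returns.shape
    # objective: alpha + 1/((1-beta)*S) * sum(u)
    c = np.concatenate([np.zeros(N), [1.0], np.full(S, 1.0 / ((1 - beta) * S))])
    # u_s >= loss_s - alpha, with loss_s = -r_s . w:
    #   -r_s . w - alpha - u_s <= 0
    A_ub = np.hstack([-returns, -np.ones((S, 1)), -np.eye(S)])
    b_ub = np.zeros(S)
    # full-investment constraint: sum(w) = 1
    A_eq = np.concatenate([np.ones(N), [0.0], np.zeros(S)])[None, :]
    b_eq = [1.0]
    bounds = [(0, None)] * N + [(None, None)] + [(0, None)] * S
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:N], res.fun  # optimal weights and the minimized CVaR
```

On a toy scenario set where one asset has a severe loss scenario and the other is near-riskless, the LP concentrates weight in the safe asset, as expected.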
435

Robust coalition formation in a dynamic, contractless environment

Jones, Christopher Lyman 21 June 2010 (has links)
This dissertation focuses on robust coalition formation between selfish agents in a dynamic environment where contracts are unenforceable. Previous research on this topic has covered each aspect of this problem separately, but no research successfully addresses these factors in combination; therefore, a novel approach is required. This dissertation accordingly has three major goals: to develop a theoretical framework that describes how selfish agents should select jobs and partners in a dynamic, contractless environment; to test a strategy based on that framework against existing heuristics in a simulated environment; and to create a learning agent capable of optimally adjusting its coalition formation strategy based on the level of dynamic change found in its environment. Experimental results demonstrate that the Expected Utility (EU) strategy based on the developed theoretical framework performs better than strategies using heuristics to select jobs and partners, and better than strategies that simulate a centralized "manager". Future work in this area includes altering the EU strategy from an anytime strategy to a hill-climbing one, as well as further game-theoretic exploration of the interactions between different strategies.
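A job-selection rule of the broad kind such a framework formalizes can be sketched as follows. Everything here (the success model, the decommitment probability, the helper names) is a hypothetical illustration, not the dissertation's actual EU formulation.

```python
from dataclasses import dataclass

@dataclass
class Job:
    value: float       # reward if the job is completed
    difficulty: float  # skill required to complete it reliably

def expected_utility(job, coalition_skill, decommit_prob):
    # Hypothetical EU of attempting `job` with a coalition of total skill
    # `coalition_skill`, where partners may abandon the coalition at any
    # time (no contract can bind them).
    p_success = min(1.0, coalition_skill / job.difficulty) * (1 - decommit_prob)
    return p_success * job.value

def best_job(jobs, coalition_skill, decommit_prob):
    # A selfish agent picks the job with the highest expected utility.
    return max(jobs, key=lambda j: expected_utility(j, coalition_skill, decommit_prob))
```

Even this toy rule exhibits the key trade-off: a high-value job can be worth attempting despite a low completion probability, and the unenforceability of contracts enters as a discount on every option.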
436

Robust Diagnostics for the Logistic Regression Model With Incomplete Data

范少華 Unknown Date (has links)
Atkinson and Riani (2001) apply the forward search algorithm to detect multiple outliers in binomial data. In this thesis, we extend the same idea to identify multiple outliers in generalized linear models when part of the data is missing. The algorithm starts with an imputation method to fill in the missing observations, and then uses the forward search algorithm to confirm outliers. The proposed method can overcome the masking effect, which commonly occurs when multiple outliers exist in the data. Real data are used to illustrate the procedure, and satisfactory results are obtained.
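The forward search idea (start from a small clean subset, then repeatedly refit and grow the subset by smallest residuals, so that outliers enter last and cannot mask one another) can be illustrated on an ordinary linear model. The thesis itself works with logistic regression and imputed data, so the following is only a schematic.

```python
import numpy as np

def forward_search_outliers(X, y, init_frac=0.3):
    """Forward-search sketch on a linear model fitted by ordinary least
    squares. Returns the order in which observations enter the fitting
    subset; the last entrants are the outlier candidates."""
    n = len(y)
    m = max(int(init_frac * n), X.shape[1] + 1)
    # Initial subset: fit on all data, keep the m smallest residuals.
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    order = np.argsort(np.abs(y - X @ beta))
    subset = list(order[:m])
    entry_order = []
    while len(subset) < n:
        # Refit on the current subset, then admit the closest outside point.
        beta, *_ = np.linalg.lstsq(X[subset], y[subset], rcond=None)
        resid = np.abs(y - X @ beta)
        outside = [i for i in range(n) if i not in subset]
        nxt = min(outside, key=lambda i: resid[i])
        subset.append(nxt)
        entry_order.append(nxt)
    return entry_order
```

On a clean line with two gross outliers added, the two contaminated observations are the last to enter the subset, which is exactly how the search avoids masking.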
437

Aspects of probabilistic modelling for data analysis

Delannay, Nicolas 23 October 2007 (has links)
Computer technologies have revolutionised the processing of information and the search for knowledge. With ever-increasing computational power, it is becoming possible to tackle new data analysis applications as diverse as mining Internet resources, analysing drug effects on the organism, or assisting wardens with autonomous video detection techniques. Fundamentally, the principle of any data analysis task is to fit a model which encodes well the dependencies (or patterns) present in the data. The difficulty, however, is precisely to define a proper model when data are noisy, dependencies are highly stochastic, and there is no simple physical rule to represent them. The aim of this work is to discuss the principles, advantages and weaknesses of the probabilistic modelling framework for data analysis. The main idea of the framework is to model the dispersion of the data, as well as the uncertainty about the model itself, by probability distributions. Three data analysis tasks are presented, and for each of them the discussion is based on experimental results from real datasets. The first task considers the problem of linear subspace identification. We show how one can replace a Gaussian noise model by a Student-t noise model to make the identification more robust to atypical samples while keeping the learning procedure simple. The second task is about regression, applied more specifically to near-infrared spectroscopy datasets. We show how spectra should be pre-processed before entering the regression model. We then analyse the validity of the Bayesian model selection principle for this application (in particular within the Gaussian Process formulation) and compare this principle to the resampling selection scheme. The final task considered is Collaborative Filtering, which is related to applications such as recommendation for e-commerce and text mining.
This task illustrates how intuitive considerations can guide the design of the model and the choice of the probability distributions appearing in it. We compare the intuitive approach with a simpler matrix factorisation approach.
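The matrix factorisation baseline mentioned at the end can be sketched with plain alternating least squares on the observed entries; this is an illustrative implementation, not the thesis code, and the rating layout (a mask of observed item indices per user) is an assumption.

```python
import numpy as np

def factorize(R, mask, k=2, lam=0.01, iters=100, seed=0):
    """Alternating least squares: R approx U @ V.T, fitted only on the
    observed entries. `mask[u]` holds the item indices rated by user u."""
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    U = rng.standard_normal((n_users, k)) * 0.1
    V = rng.standard_normal((n_items, k)) * 0.1
    reg = lam * np.eye(k)
    for _ in range(iters):
        # Fix V, solve a small ridge problem per user...
        for u in range(n_users):
            j = mask[u]
            if len(j):
                U[u] = np.linalg.solve(V[j].T @ V[j] + reg, V[j].T @ R[u, j])
        # ...then fix U and solve per item.
        for i in range(n_items):
            us = [u for u in range(n_users) if i in mask[u]]
            if us:
                Ui = U[us]
                V[i] = np.linalg.solve(Ui.T @ Ui + reg, Ui.T @ R[us, i])
    return U, V
```

The probabilistic treatment in the thesis replaces these point estimates with distributions over U and V; the ALS version above is the deterministic counterpart it is compared against.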
438

Aeroelastic Concepts for Flexible Wing Structures

Heinze, Sebastian January 2005 (has links)
This thesis summarizes investigations performed within the design, analysis and experimental evaluation of flexible aircraft structures. Not only the problems, but rather the opportunities related to aeroelasticity are discussed.

In the first part of the thesis, different concepts for using active aeroelastic configurations to increase aircraft performance are considered. In particular, one study deals with the minimization of the induced drag of a highly flexible wing by using multiple control surfaces. Another study deals with a possible implementation of a high-bandwidth piezoelectric actuator for control applications using aeroelastic amplification.

The second part of the thesis deals with the development of an approach for the modeling and analysis of flexible structures considering uncertainties in analysis models. Especially in cases of large structural variations, such as fuel level variations, a fixed-base modal formulation in robust flutter analysis may lead to incorrect results. Besides a discussion of this issue, possible means of treating the problem are presented.
439

Conquering Variability for Robust and Low Power Designs

Sun, Jin January 2011 (has links)
As device feature sizes shrink to the nano-scale, continuous technology scaling has led to a large increase in parameter variability in the semiconductor manufacturing process. According to the source of uncertainty, parameter variations can be classified into three categories: process variations, environmental variations, and temporal variations. All of these variation sources exert significant influence on circuit performance, and make it more challenging to characterize parameter variability and achieve robust, low-power designs. The scope of this dissertation is conquering parameter variability and successfully designing efficient yet robust integrated circuit (IC) systems. Previous experience has indicated that this issue must be tackled at every design stage of IC chips. In this dissertation, we propose several robust techniques for accurate variability characterization and efficient performance prediction under parameter variations. For the pre-silicon verification stage, we have developed a robust yield prediction scheme under limited descriptions of parameter uncertainties, a robust circuit performance prediction methodology based on the importance of uncertainties, and a robust gate sizing framework using an ElasticR estimation model. These techniques provide possible solutions for achieving both prediction accuracy and computational efficiency in the early design stages. For the on-line validation stage, a dynamic workload balancing framework and an on-line self-tuning design methodology have been proposed for application-specific multi-core systems under variability-induced aging effects. These on-line validation techniques help alleviate device performance degradation due to parameter variations and extend device lifetime.
440

A Fast MLP-based Learning Method and its Application to Mine Countermeasure Missions

Shao, Hang 16 November 2012 (has links)
In this research, a novel machine learning method is designed and applied to Mine Countermeasure Missions. Like some kernel methods, the proposed approach seeks to compute a linear model in a higher-dimensional feature space. However, no kernel is used and the feature mapping is explicit, so computation can be done directly in the accessible feature space. In the proposed approach, the feature projection is implemented by constructing a large hidden layer, which departs from the traditional view that a Multi-Layer Perceptron should be funnel-shaped, with the hidden layer acting as a feature extractor. The proposed approach is a general method that can be applied to various problems. It is able to improve the performance of neural-network-based methods and the learning speed of support vector machines. The classification speed of the proposed approach is also faster than that of kernel machines on the mine countermeasure mission task.
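The idea of an explicit, very wide random hidden layer with a linear readout (in place of the kernel trick) can be sketched as follows; the weight distribution, the tanh activation, and the ridge readout here are generic assumptions for illustration, not the thesis's exact method.

```python
import numpy as np

def fit_random_hidden(X, y, hidden=500, lam=1e-4, seed=0):
    """Explicit random feature map (one wide hidden layer) followed by a
    linear ridge readout: training reduces to solving a linear system in
    the accessible feature space, with no kernel evaluations."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], hidden))
    b = rng.standard_normal(hidden)
    H = np.tanh(X @ W + b)  # explicit feature mapping, computed directly
    beta = np.linalg.solve(H.T @ H + lam * np.eye(hidden), H.T @ y)
    return W, b, beta

def predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

Because only the readout is trained, fitting costs one ridge solve; even a non-linearly-separable problem such as XOR is handled once the hidden layer is wide enough.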
