11. Nonparametric estimation of risk neutral density (Djossaba, Adjimon Marcel)
This thesis aims to estimate the risk-neutral density (RND) through a non-parametric approach
while accounting for endogeneity. The cross-sectional prices of European options are used for
the estimation. The primary model under consideration is functional linear regression. We
demonstrate how instrumental variables can be used in this model to address endogeneity.
Additionally, we integrate instrumental variables into a model that approximates the RND
with Hermite functions, for the purpose of comparing results. To ensure a stable
estimator, we employ the Tikhonov regularization technique. Following this, we conduct Monte-
Carlo simulations to investigate the impact of different RND distribution types on the obtained
results. Specifically, we analyze a lognormal mixture distribution and a Black-Scholes smile
distribution. The simulation results demonstrate that the estimator utilizing instrumental
variables to adjust for endogeneity outperforms the non-adjusted alternative. Additionally,
outcomes from the Black-Scholes smile distribution exhibit superior performance compared to
those from the lognormal mixture distribution. Finally, S&P 500 options are used for an
application of the estimator.
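
The estimator in the thesis is a functional linear regression with an instrumental-variable correction, which does not reduce to a short snippet; the Python sketch below illustrates only the final ingredient named above, Tikhonov-regularized inversion of a discretized option-pricing operator, on a synthetic lognormal-mixture RND. The grids, noise level, and regularization parameter alpha are illustrative assumptions, not values from the thesis.

```python
import numpy as np

# Grids for terminal prices s and strikes K (illustrative choices)
s = np.linspace(50.0, 150.0, 400)
K = np.linspace(70.0, 130.0, 40)
r, T = 0.02, 0.25
ds = s[1] - s[0]

def lognorm_pdf(x, mu, sig):
    """Lognormal density with log-mean mu and log-stdev sig."""
    return np.exp(-(np.log(x) - mu) ** 2 / (2 * sig ** 2)) / (x * sig * np.sqrt(2 * np.pi))

# "True" RND: a two-component lognormal mixture, one of the simulated cases
q_true = 0.6 * lognorm_pdf(s, np.log(95.0), 0.10) + 0.4 * lognorm_pdf(s, np.log(110.0), 0.15)
q_true /= q_true.sum() * ds  # renormalize on the truncated grid

# Discretized pricing operator: C(K) = exp(-rT) * sum_s max(s - K, 0) q(s) ds
A = np.exp(-r * T) * np.maximum(s[None, :] - K[:, None], 0.0) * ds

# Synthetic cross-section of call prices with observation noise
rng = np.random.default_rng(0)
c_obs = A @ q_true + rng.normal(0.0, 0.02, size=K.size)

# Tikhonov-regularized inversion: argmin_q ||A q - c||^2 + alpha ||q||^2
alpha = 1e-3
q_hat = np.linalg.solve(A.T @ A + alpha * np.eye(s.size), A.T @ c_obs)
print("L2 error of recovered density:", np.linalg.norm(q_hat - q_true) * np.sqrt(ds))
```

Without the alpha term the normal equations are severely ill-conditioned, since the pricing operator smooths the density; this is precisely why Tikhonov regularization is needed for a stable estimator, and the instrumental-variable correction would further modify these normal equations.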
12. Efficient Semi-Implicit Time-Stepping Schemes for Incompressible Flows (Loy, Kak Choon, January 2017)
The development of numerical methods for the incompressible Navier-Stokes equations has received much attention over the past 50 years. Finite element methods emerged thanks to their robustness and reliability. In our work, we choose the P2-P1 finite element for the spatial approximation, which gives second-order accuracy for velocity and first-order accuracy for pressure. Our research focuses on the development of several high-order semi-implicit time-stepping methods for computing unsteady flows. The methods investigated include semi-implicit backward difference formulae (SBDF) and the defect correction strategy (DC). Within the defect correction strategy, we investigate two variants: the first is based on the high-order artificial compressibility and bootstrapping strategy proposed by Guermond and Minev (GM), and the other combines GM methods with the sequential regularization method (GM-SRM). Both the GM and GM-SRM methods avoid solving saddle-point problems, unlike the SBDF and DC methods. This approach reduces the complexity of the linear systems at the expense of solving many smaller linear systems. Next, we propose several numerical improvements: better approximations of the nonlinear advection term and high-order initialization for all methods. To further reduce the complexity of the resulting linear systems, we developed several new variants of grad-div splitting algorithms besides the one studied by Guermond and Minev. The splitting algorithms allow us to handle larger flow problems. We show that our new methods reproduce flow characteristics (e.g., lift and drag parameters and Strouhal numbers) published in the literature for the 2D lid-driven cavity and 2D flow around a cylinder. SBDF methods with grad-div stabilization terms are found to be very stable, accurate, and efficient when computing flows at high Reynolds numbers. Lastly, we showcase the robustness of our methods in carrying out 3D computations.
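
The thesis applies SBDF schemes to the Navier-Stokes equations with P2-P1 finite elements, which is too large for a snippet; as an illustration of the semi-implicit idea only, the sketch below applies second-order SBDF (SBDF2) to the 1D viscous Burgers equation with periodic finite differences, treating diffusion implicitly and extrapolating the nonlinear advection term. The grid size, viscosity, and time step are arbitrary assumptions.

```python
import numpy as np

# SBDF2 for u_t = -u u_x + nu u_xx on a periodic 1D grid (illustrative only;
# the thesis works with finite elements and the full Navier-Stokes equations).
n, nu, dt = 256, 0.05, 1e-3
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
dx = x[1] - x[0]

def advect(u):
    """Explicit nonlinear term -u u_x via central differences."""
    return -u * (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * dx)

# Periodic second-difference matrix for the implicit diffusion term
I = np.eye(n)
D2 = (np.roll(I, -1, axis=1) - 2.0 * I + np.roll(I, 1, axis=1)) / dx**2

u0 = np.sin(x)
# Bootstrap the two-level scheme with one semi-implicit Euler step
u1 = np.linalg.solve(I / dt - nu * D2, u0 / dt + advect(u0))

# SBDF2: (3 u^{n+1} - 4 u^n + u^{n-1}) / (2 dt) = nu D2 u^{n+1} + 2 N(u^n) - N(u^{n-1})
M = 1.5 / dt * I - nu * D2
for _ in range(1000):
    rhs = (4.0 * u1 - u0) / (2.0 * dt) + 2.0 * advect(u1) - advect(u0)
    u0, u1 = u1, np.linalg.solve(M, rhs)
```

Because only the linear diffusive part enters the matrix M, M is constant in time and can be factorized once, which is the computational appeal of semi-implicit schemes; the grad-div splitting variants mentioned above further shrink the systems to be solved.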
13. Improvement of the Biomedical Image Reconstruction Methodology Based on Impedance Tomography (Kořínková, Ksenia, January 2016)
This theoretical dissertation focuses on improving and researching algorithms for imaging the internal structure of conductive objects, mainly biological tissues and organs, by means of electrical impedance tomography (EIT). The thesis formulates the theoretical framework of EIT. It then presents and compares algorithms for solving the inverse problem that provide efficient reconstruction of the spatial distribution of electrical properties in the investigated object and its visualization. The main idea of the improved algorithm, which is based on a deterministic approach, lies in introducing additional techniques: a level-set method and/or a fuzzy filter. In addition, a method is shown for 2D reconstruction of the conductivity distribution from a single component of the magnetic field, specifically the z-component of the magnetic flux. Numerical models of biological tissue with a given admittivity (or conductivity) distribution were created to test these algorithms. The reconstruction results obtained with the improved algorithms are presented and compared.
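
As a hedged illustration of the deterministic reconstruction loop that such algorithms build on, the Python sketch below runs Tikhonov-regularized Gauss-Newton steps on a stand-in linear forward model; a real EIT solver would evaluate the nonlinear forward map and its Jacobian with a finite-element model, and where the level-set or fuzzy-filter techniques of the thesis enter the loop is not shown. All dimensions and parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sigma, n_volt = 64, 32

# Stand-in Jacobian of the forward map (conductivity -> electrode voltages);
# in real EIT this comes from a finite-element forward solver.
J = rng.normal(size=(n_volt, n_sigma))

sigma_true = np.ones(n_sigma)
sigma_true[20:30] = 2.0                                    # a conductive "lesion" region
v_meas = J @ sigma_true + rng.normal(0.0, 0.01, n_volt)    # noisy electrode voltages

# Tikhonov-regularized Gauss-Newton iterations from a homogeneous initial guess
sigma = np.ones(n_sigma)
alpha = 1e-2                                               # regularization parameter
for _ in range(5):
    residual = v_meas - J @ sigma
    step = np.linalg.solve(J.T @ J + alpha * np.eye(n_sigma), J.T @ residual)
    sigma += step
```

With the linear stand-in the iteration converges essentially in one step; with the true nonlinear EIT forward map, repeated relinearization is needed and the regularization parameter governs the trade-off between stability and spatial resolution.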
14. Advancing Optimal Control Theory Using Trigonometry for Solving Complex Aerospace Problems (Kshitij Mall, 17 January 2019)
Optimal control theory (OCT) has existed since the 1950s. However, with the advent of modern computers, the design community delegated the task of solving optimal control problems (OCPs) largely to computationally intensive direct methods instead of methods that use OCT. Some recent work showed that solvers using OCT could leverage parallel computing resources for faster execution. The need for near real-time, high-quality solutions to OCPs has therefore renewed interest in OCT in the design community. However, certain challenges still prohibit its use for solving complex practical aerospace problems, such as landing human-class payloads safely on Mars.

To advance OCT, this thesis introduces the Epsilon-Trig regularization method to simply and efficiently solve bang-bang and singular control problems. The Epsilon-Trig method resolves the issues pertaining to the traditional smoothing regularization method. It was verified and validated using GPOPS-II on benchmark problems from the literature, including the Van der Pol oscillator, the boat problem, and the Goddard rocket problem.

This study also develops the use of trigonometry for incorporating control bounds and mixed state-control constraints into OCPs, a technique termed Trigonometrization. The technique was verified and validated on benchmark OCPs against results from the literature and GPOPS-II. Unlike traditional OCT, Trigonometrization converts the constrained OCP into a two-point boundary value problem rather than a multi-point boundary value problem, significantly reducing the computational effort required to formulate and solve it. This work uses Trigonometrization to solve complex aerospace problems including prompt global strike, noise minimization for general aviation, the shuttle re-entry problem, and the g-load-constrained impactor problem. Future work includes the development of the Trigonometrization technique for OCPs with pure state constraints.
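
A minimal sketch of the substitution we infer from the abstract: a control bounded to [u_min, u_max] is rewritten as u = mid + half*sin(w) for an unconstrained variable w, so the bound never appears as a separate constraint. The epsilon term shown is our reading of the Epsilon-Trig idea, and its exact form in the thesis may differ.

```python
import numpy as np

# Trigonometrization of a bounded control u in [u_min, u_max]:
# any real w maps to a feasible u, so the bound drops out of the problem.
u_min, u_max = -1.0, 1.0
mid, half = (u_max + u_min) / 2.0, (u_max - u_min) / 2.0

def u_of_w(w):
    return mid + half * np.sin(w)

# Epsilon-Trig smoothing (our reading of the abstract, not the exact thesis
# formulation): augment the running cost with a small eps-weighted
# trigonometric term so the optimal control is smooth where the
# unregularized problem would be bang-bang; eps -> 0 recovers the
# bang-bang/singular solution.
eps = 1e-2

def augmented_running_cost(base_cost, w):
    return base_cost + eps * np.cos(w)

# Every candidate control is feasible by construction
w_grid = np.linspace(-np.pi, np.pi, 9)
print(u_of_w(w_grid))   # all values lie in [u_min, u_max]
```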
15. New Optimization Algorithms for a Digital Image Reconstruction in EIT (Kříž, Tomáš, January 2016)
This doctoral thesis proposes a new algorithm for the reconstruction of impedance images in monitored objects. The algorithm eliminates the spatial resolution problems present in existing reconstruction methods and, with respect to the monitored objects, exploits partial knowledge of both their configuration and their material composition. The novel method is designed to recognize significant fields of interest, such as material defects or blood clots and tumors in biological images. The reconstruction process comprises two phases: the former is focused on industry-related images, with the aim of detecting defects in conductive materials, while the latter concentrates on biomedical applications. The thesis also describes the numerical model used to test the algorithm. The testing procedure was centred on the resulting impedivity value, the influence of the regularization parameter, the initial impedivity value of the numerical model, and the effect exerted by noise on the voltage electrodes upon the overall reconstruction results. Another issue analyzed herein is the possibility of reconstructing impedance images from components of the magnetic flux density measured outside the investigated object, the magnetic field in question being generated by a current passing through the object. The algorithm created for this reconstruction is modeled on the proposed algorithm for EIT-based reconstruction of impedance images from voltage; it was tested for stability, the influence of the regularization parameter, and the initial conductivity. From the general perspective, the thesis describes the methodology for both magnetic field measurement via NMR and processing of the obtained data.
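
The influence of the regularization parameter, one of the factors tested above, is commonly assessed by sweeping it and comparing the residual norm against the solution norm (the L-curve). The sketch below does this for a generic linear inverse problem; it is not the thesis's EIT model, only an illustration of the diagnostic.

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.normal(size=(40, 80))                  # generic ill-posed stand-in operator
b = A @ rng.normal(size=80) + rng.normal(0.0, 0.05, 40)

# Sweep the Tikhonov parameter; the L-curve "corner" balances data fit
# against the size (instability) of the reconstructed solution.
for alpha in np.logspace(-6, 1, 8):
    q = np.linalg.solve(A.T @ A + alpha * np.eye(80), A.T @ b)
    print(f"alpha={alpha:.1e}  residual={np.linalg.norm(A @ q - b):.3f}  "
          f"solution norm={np.linalg.norm(q):.3f}")
```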
16. Statistical Design of Sequential Decision Making Algorithms (Chi-hua Wang, 27 April 2022)
Sequential decision-making is a fundamental class of problems that motivates the algorithm designs of online machine learning and reinforcement learning. Arguably, the resulting online algorithms have supported modern online service industries in their data-driven, real-time automated decision making. The applications span different industries, including dynamic pricing (marketing), recommendation (advertising), and dosage finding (clinical trials). In this dissertation, we contribute fundamental statistical design advances for sequential decision-making algorithms, driving progress in the theory and application of online learning and sequential decision making under uncertainty, including online sparse learning, finite-armed bandits, and high-dimensional online decision making. Our work lies at the intersection of decision-making algorithm design, online statistical machine learning, and operations research, contributing new algorithms, theory, and insights to diverse fields including optimization, statistics, and machine learning.
In part I, we contribute a theoretical framework of continuous risk monitoring for regularized online statistical learning. Such a framework is desirable for modern online service industries that monitor the performance of deployed online machine learning models. In the first project (Chapter 1), we develop continuous risk monitoring for the online Lasso procedure and provide an always-valid algorithm for high-dimensional dynamic pricing problems. In the second project (Chapter 2), we develop continuous risk monitoring for online matrix regression and provide new algorithms for rank-constrained online matrix completion problems. These theoretical advances are due to our elegant interplay between non-asymptotic martingale concentration theory and regularized online statistical machine learning.
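
A minimal sketch of the base learner in Chapter 1, assuming a standard proximal-gradient (soft-thresholding) update for the online Lasso; the always-valid risk-monitoring layer built on martingale concentration is the thesis's contribution and is not reproduced here. The step size and penalty are illustrative.

```python
import numpy as np

def soft_threshold(z, t):
    """Elementwise soft-thresholding, the proximal operator of the L1 norm."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def online_lasso_step(beta, x, y, eta=0.005, lam=0.1):
    """One proximal-SGD update of the Lasso estimate on observation (x, y)."""
    grad = (beta @ x - y) * x            # gradient of the squared loss
    return soft_threshold(beta - eta * grad, eta * lam)

# Toy stream: sparse truth, noisy linear responses
rng = np.random.default_rng(5)
d = 50
beta_true = np.zeros(d)
beta_true[:3] = 1.0
beta = np.zeros(d)
for _ in range(5000):
    x = rng.normal(size=d)
    y = x @ beta_true + rng.normal(0.0, 0.1)
    beta = online_lasso_step(beta, x, y)

print(np.round(beta[:6], 2))             # first three coordinates should be near 1
```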
In part II, we contribute a bootstrap-based methodology for finite-armed bandit problems, termed residual bootstrap exploration. This method opens the possibility of designing model-agnostic bandit algorithms without problem-adaptive optimism engineering or instance-specific prior tuning. In the first project (Chapter 3), we develop residual bootstrap exploration for multi-armed bandit algorithms and show that it generalizes easily to bandit problems with complex or ambiguous reward structures. In the second project (Chapter 4), we develop a theoretical framework for residual bootstrap exploration in linear bandits with a fixed action set. These methodological advances are due to our development of non-asymptotic theory for the bootstrap procedure.
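
A minimal sketch of the simplest residual bootstrap index we can infer from the abstract: each arm's index is its sample mean plus the mean of resampled centered residuals, and the arm with the largest index is pulled. The thesis's algorithm and analysis may add refinements (e.g., pseudo-rewards for under-explored arms) not shown here.

```python
import numpy as np

rng = np.random.default_rng(2)
K, T = 3, 2000
true_means = np.array([0.3, 0.5, 0.7])       # Gaussian arms with unit variance
rewards = [[] for _ in range(K)]

def rb_index(arm_rewards):
    """Sample mean perturbed by a residual-bootstrap average (randomized optimism)."""
    r = np.asarray(arm_rewards, dtype=float)
    residuals = r - r.mean()
    boot = rng.choice(residuals, size=residuals.size, replace=True)
    return r.mean() + boot.mean()

for t in range(T):
    if t < K:
        arm = t                               # pull each arm once to initialize
    else:
        arm = int(np.argmax([rb_index(rewards[k]) for k in range(K)]))
    rewards[arm].append(rng.normal(true_means[arm], 1.0))

print([len(r) for r in rewards])              # most pulls should go to the last arm
```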
In part III, we contribute application-driven insights on the exploration-exploitation dilemma in high-dimensional online decision-making problems. These insights help practitioners implement effective high-dimensional statistical methods to solve online decision-making problems. In the first project (Chapter 5), we develop a bandit sampling scheme for online batch high-dimensional decision making, a practical scenario in interactive marketing and sequential clinical trials. In the second project (Chapter 6), we develop a bandit sampling scheme for federated online high-dimensional decision making that maintains data decentralization and performs collaborative decisions. These new insights are due to our new bandit sampling designs, which address application-driven exploration-exploitation trade-offs effectively.
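
A hedged sketch of the batched high-dimensional setting of Chapter 5, assuming a deliberately simple design: after each batch, refit a Lasso reward model on all accumulated data and act greedily with epsilon exploration. The thesis's sampling scheme is more refined; the dimensions, batch sizes, and Lasso penalty below are illustrative.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(3)
d, n_batches, batch_size, eps = 100, 20, 50, 0.1
beta_true = np.zeros(d)
beta_true[:5] = 1.0                                 # sparse reward parameter

X_hist, y_hist, model = [], [], None
for b in range(n_batches):
    contexts = rng.normal(size=(batch_size, 2, d))  # two candidate actions per round
    for ctx in contexts:
        if model is None or rng.random() < eps:
            a = int(rng.integers(2))                # explore uniformly
        else:
            a = int(np.argmax(model.predict(ctx)))  # exploit the Lasso model
        X_hist.append(ctx[a])
        y_hist.append(ctx[a] @ beta_true + rng.normal(0.0, 0.1))
    # Batch update: refit the sparse reward model on all data so far
    model = Lasso(alpha=0.05).fit(np.array(X_hist), np.array(y_hist))
```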