481 |
Asymptotic results for American option prices under extended Heston model. Teri, Veronica. January 2019.
In this thesis, we consider the pricing problem of an American put option. We introduce a new market model for the evolution of the underlying asset price. Our model adds a new parameter to the well-known Heston model; hence we name it the extended Heston model. To solve the American put pricing problem, we adapt the approach developed by Fouque et al. (2000) to derive an asymptotic formula. We then connect it with the idea developed by Medvedev and Scaillet (2010) to provide an asymptotic solution for the leading-order term P0. We perform a numerical analysis to gain insight into the accuracy and validity of our asymptotic approximation formula.
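For orientation, a minimal sketch of the classical Heston dynamics that the extended model builds on; the abstract does not specify the additional parameter, so only the standard baseline is shown here:

```latex
% Standard Heston model under the risk-neutral measure: S_t is the asset
% price, v_t the stochastic variance; the thesis adds one further
% parameter to this baseline (not specified in the abstract).
\begin{align*}
  dS_t &= r S_t\,dt + \sqrt{v_t}\, S_t\, dW_t^{(1)}, \\
  dv_t &= \kappa(\theta - v_t)\,dt + \sigma\sqrt{v_t}\, dW_t^{(2)}, \qquad
  d\langle W^{(1)}, W^{(2)}\rangle_t = \rho\,dt,
\end{align*}
```

where κ is the mean-reversion speed, θ the long-run variance, σ the volatility of volatility, and ρ the correlation between the two Brownian motions.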
|
482 |
Plongements grossièrement Lipschitz et presque Lipschitz dans les espaces de Banach / Coarse Lipschitz embeddings and almost Lipschitz embeddings into Banach spaces. Netillard, François. 22 October 2019.
The central theme of this thesis is the study of embeddings of metric spaces into Banach spaces. The first study focuses on coarse Lipschitz embeddings between the James spaces Jp for p > 1 and p finite. We obtain that, for distinct p and q, Jq does not coarse Lipschitz embed into Jp. We also obtain, in the case where q < p, that the compression exponent of Jq in Jp is at most q/p. A natural follow-up question is whether similar results hold for the dual spaces of the James spaces. We obtain that, for distinct p and q, Jp* does not coarse Lipschitz embed into Jq*. Building on this work, we establish a more general result on the non-embeddability, by coarse Lipschitz maps, of a Banach space with a q-AUS norm into a Banach space with a p-AMUC norm for p < q. With the help of a renorming theorem, we also deduce a result about the Szlenk index. Moreover, after defining quasi-Lipschitz embeddability, which differs slightly from almost Lipschitz embeddability, we obtain the following result: for two Banach spaces X and Y, if X is crudely finitely representable with constant C (where C > 1) in every finite-codimensional subspace of Y, then every proper subspace M of X quasi-Lipschitz embeds into Y. To conclude, we obtain the following corollary: let X be a locally minimal Banach space, and let Y be a Banach space that is crudely finitely representable in X. Then, for every proper subspace M of Y, M quasi-Lipschitz embeds into X.
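For context, the standard definitions from the nonlinear geometry of Banach spaces (the thesis's precise conventions may differ slightly):

```latex
% A map f : X -> Y between metric spaces is a coarse Lipschitz embedding
% if there exist A >= 1 and B >= 0 such that
\[
  \frac{1}{A}\, d_X(x, y) - B \;\le\; d_Y\bigl(f(x), f(y)\bigr)
  \;\le\; A\, d_X(x, y) + B \qquad \text{for all } x, y \in X .
\]
% The compression exponent of X in Y measures the best power achievable
% by the lower bound at large distances:
\[
  \alpha_Y(X) \;=\; \sup\Bigl\{\, \alpha \in [0,1] :
  \exists\, f \text{ coarse Lipschitz},\; c > 0, \text{ with }
  d_Y\bigl(f(x), f(y)\bigr) \ge c\, d_X(x, y)^{\alpha}
  \text{ for } d_X(x, y) \ge 1 \,\Bigr\}.
\]
```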
|
483 |
Steady State Analysis of Load Balancing Algorithms in the Heavy Traffic Regime. January 2019.
This dissertation studies load balancing algorithms for many-server systems (with N servers) and focuses on the steady-state performance of load balancing algorithms in the heavy-traffic regime. The framework of Stein's method and (iterative) state-space collapse (SSC) is used to analyze three load balancing systems: 1) load balancing in the Sub-Halfin-Whitt regime with exponential service times; 2) load balancing in the Beyond-Halfin-Whitt regime with exponential service times; 3) load balancing in the Sub-Halfin-Whitt regime with Coxian-2 service times.
In the Sub-Halfin-Whitt regime, sufficient conditions are established such that any load balancing algorithm satisfying them has both asymptotically zero waiting time and zero waiting probability. Furthermore, the number of servers with more than one job is o(1); in other words, the system collapses to a one-dimensional space. The result is proven using Stein's method and state-space collapse (SSC), which are powerful mathematical tools for the steady-state analysis of load balancing algorithms. The second system is in an even "heavier" traffic regime, and an iterative refinement procedure is proposed to obtain the steady-state metrics. Again, asymptotically zero delay and waiting are established for a set of load balancing algorithms. Unlike the first system, the system collapses to a two-dimensional state space instead of a one-dimensional one. The third system is more challenging because of the "non-monotonicity" introduced by Coxian-2 service times, and an iterative state-space collapse is proposed to tackle this challenge. For each of the three systems, a set of load balancing algorithms is established under which the probability that an incoming job is routed to an idle server tends to one in steady state. The set of load balancing algorithms includes join-the-shortest-queue (JSQ), idle-one-first (I1F), join-the-idle-queue (JIQ), and power-of-d-choices (Pod) with a carefully chosen d. / Dissertation/Thesis / Doctoral Dissertation Electrical Engineering 2019
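The following is a hedged illustration, not code from the dissertation: a minimal sketch of the power-of-d-choices (Pod) routing rule named above, with a simplified queue-count state and no service dynamics.

```python
import random

def pod_route(queues, d=2, rng=random):
    """Power-of-d-choices: sample d queues uniformly at random and
    route the arriving job to the shortest of the sampled queues."""
    sampled = rng.sample(range(len(queues)), d)
    return min(sampled, key=lambda i: queues[i])

# Toy usage: 10 servers, route 5 arriving jobs (service completions omitted).
queues = [0] * 10
for _ in range(5):
    i = pod_route(queues, d=2)
    queues[i] += 1
print(queues)
```

Setting d = N recovers JSQ; the dissertation's point is that with a carefully chosen d, the sampled policy already routes to an idle server with probability tending to one in steady state.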
|
484 |
Asymptotic Expansions for Second-Order Moments of Integral Functionals of Weakly Correlated Random Functions. Scheidt, Jürgen vom, Starkloff, Hans-Jörg, Wunderlich, Ralf. 30 October 1998.
In the paper asymptotic expansions for second-order moments of integral functionals of a class of random functions are considered. The random functions are assumed to be $\epsilon$-correlated, i.e. the values are uncorrelated outside an $\epsilon$-neighbourhood of each point. The asymptotic expansions are derived for $\epsilon \to 0$. With the help of a special weak assumption, simpler expansions are found than in the case of general weakly correlated functions.
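A hedged formalization of the $\epsilon$-correlation property described in the abstract (the paper's precise moment conditions may differ):

```latex
% A random function f on D \subset R is epsilon-correlated if its
% correlation function vanishes outside the diagonal strip of width epsilon:
\[
  R(x, y) \;=\; \mathbb{E}\bigl[(f(x) - \mathbb{E} f(x))\,(f(y) - \mathbb{E} f(y))\bigr]
  \;=\; 0 \qquad \text{whenever } |x - y| > \epsilon .
\]
```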
|
485 |
Compression Techniques for Boundary Integral Equations - Optimal Complexity Estimates. Dahmen, Wolfgang, Harbrecht, Helmut, Schneider, Reinhold. 05 April 2006.
In this paper matrix compression techniques in the context of wavelet Galerkin schemes for boundary integral equations are developed and analyzed that exhibit optimal complexity in the following sense: the fully discrete scheme produces approximate solutions within the discretization-error accuracy offered by the underlying Galerkin method at a computational expense that is proven to stay proportional to the number of unknowns. Key issues are the second compression, which significantly reduces the near-field complexity, and an additional a-posteriori compression. The latter is based on a general result concerning an optimal work balance that applies, in particular, to the quadrature used to compute the compressed stiffness matrix with sufficient accuracy in linear time. The theoretical results are illustrated by a 3D example on a nontrivial domain.
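As a hedged toy illustration only (not the paper's calibrated scheme), the a-posteriori compression step can be pictured as discarding already-assembled stiffness-matrix entries whose magnitude falls below a prescribed threshold; `aposteriori_compress` and the threshold rule are assumptions for illustration.

```python
import numpy as np
from scipy.sparse import csr_matrix

def aposteriori_compress(A, threshold):
    """Zero out entries of a dense stiffness matrix A with magnitude
    below `threshold`, returning a sparse matrix. In the wavelet
    Galerkin setting the threshold is chosen so the perturbation
    stays within discretization-error accuracy."""
    B = np.where(np.abs(A) >= threshold, A, 0.0)
    return csr_matrix(B)

# Toy usage with a random matrix whose entries decay like wavelet
# stiffness entries between well-separated basis functions.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8)) * np.exp(-rng.uniform(0, 10, (8, 8)))
A_sparse = aposteriori_compress(A, threshold=1e-2)
print(A_sparse.nnz, "of", A.size, "entries kept")
```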
|
486 |
Efficient Knot Optimization for Accurate B-spline-based Data Approximation. Yo-Sing Yeh (9757565). 14 December 2020
Many practical applications benefit from the reconstruction of a smooth multivariate function from discrete data for purposes such as reducing file size or improving analytic and visualization performance. Among the different reconstruction methods, the tensor-product B-spline has a number of advantageous properties over alternative data representations. However, the problem of constructing a best-fit B-spline approximation contains many roadblocks. Among the many free parameters in the B-spline model, the choice of the knot vectors, which defines the separation of the piecewise polynomial patches in a B-spline construction, has a major influence on the resulting reconstruction quality. Yet existing knot placement methods are ineffective, computationally expensive, or impose limitations on the dataset format or the B-spline order. Moving beyond the 1D case (curves) to higher-dimensional datasets (surfaces, volumes, hypervolumes) introduces additional computational challenges as well. Further complications arise in the case of undersampled data points, where the approximation problem can become ill-posed and existing regularization proves unsatisfactory.

This dissertation is concerned with improving the efficiency and accuracy of the construction of a B-spline approximation on discrete data. Specifically, we present a novel B-spline knot placement approach for accurate reconstruction of discretely sampled data, first in 1D, then extended to higher dimensions for both structured and unstructured formats. Our knot placement methods take into account the features or complexity of the input data by estimating its high-order derivatives, such that the resulting approximation is highly accurate with a low number of control points. We demonstrate our method on various 1D to 3D structured and unstructured datasets, including synthetic, simulation, and captured data. We compare our method with state-of-the-art knot placement methods and show that our approach achieves higher accuracy while requiring fewer B-spline control points. We discuss a regression approach to the selection of the number of knots for multivariate data given a target error threshold. In the case of the reconstruction of irregularly sampled data, where the linear system often becomes ill-posed, we propose a locally varying regularization scheme to address cases for which a straightforward regularization fails to produce a satisfactory reconstruction.
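A hedged 1D sketch of the general idea, feature-aware knot placement guided by an estimated derivative magnitude; `feature_aware_knots` and its density rule are illustrative assumptions rather than the dissertation's algorithm, and SciPy's `LSQUnivariateSpline` performs the least-squares fit for the chosen interior knots.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

def feature_aware_knots(x, y, n_knots):
    """Place interior knots with density proportional to an estimated
    second-derivative magnitude, so complex regions get more knots."""
    d = np.abs(np.gradient(np.gradient(y, x), x))  # rough curvature estimate
    density = d + 1e-3 * d.max()                   # keep density positive
    cdf = np.cumsum(density)
    cdf /= cdf[-1]
    # Invert the density CDF at uniform levels to get knot locations.
    levels = np.linspace(0, 1, n_knots + 2)[1:-1]
    return np.interp(levels, cdf, x)

# Toy usage: a signal with a sharp feature near x = 0.5.
x = np.linspace(0, 1, 400)
y = np.tanh(50 * (x - 0.5)) + 0.1 * np.sin(8 * np.pi * x)
t = feature_aware_knots(x, y, n_knots=12)
spline = LSQUnivariateSpline(x, y, t, k=3)
print("max abs error:", np.abs(spline(x) - y).max())
```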
|
487 |
Measuring the Characteristic Sizes of Convection Structures in AGB Stars with Fourier Decomposition Analyses: the Stellar Intensity Analyzer (SIA) Pipeline. Colom i Bernadich, Miquel. January 2020.
Context. Theoretical studies predict that the length scale of convection in stellar atmospheres is proportional to the pressure scale height, which implies that giant and supergiant stars should have convection granules of sizes comparable to their radii. Numerical simulations and the observation of anisotropies on stellar discs agree well with this prediction.

Aims. To measure the characteristic sizes of convection structures in models simulated with the CO5BOLD code, to examine how they vary between models, and to study their limitations due to numerical resolution.

Methods. Fourier analyses are performed on frames from the models to obtain spatial spectral power distributions, which are averaged over time. The position of the main peak and the average value of the wavevector are taken as indicators of these sizes. The general shape of the intensity map of the disc in each frame is fitted and subtracted so that it does not contaminate the Fourier analysis.

Results. A general relationship is found in which the convection granule size is roughly ten times the pressure scale height. The expected wavevector value of the time-averaged spectral power distributions is higher than the position of the main peak. Loose increasing trends of the characteristic sizes scaled by the pressure scale height are found against stellar mass, radius, luminosity, temperature and gravity, while decreasing trends are found with radius and model resolution. Poor resolution removes signal from the slope on the large-wavevector side of the main peak, and in extreme cases it creates spurious signal towards the end of the spectrum due to artifacts appearing in the frames.

Conclusions. The wavevector position of the absolute maximum in the time-averaged spectral power distribution is the best measure of the most prominent sizes on the stellar surfaces. The proportionality constant between granule size and pressure scale height is of the same order of magnitude as the one in the literature; however, the models present sizes larger than expected, likely because the prominent features do not correspond to convection granules but to larger features hovering above them. Further studies on models with higher resolution will help in drawing more conclusive results.

Appendix. The SIA pipeline takes a set of time-dependent pictures of stellar discs and uses a Fourier analysis to measure the characteristic sizes of their features and other useful quantities, such as standard deviations or the spatial power distributions of features. The core of the pipeline consists of identifying the stellar disc in the frames and subtracting its signal from the spatial power distributions through a general fit of the disc intensity. To analyze a time sequence, the SIA pipeline requires at least two commands from the user: the first orders the pipeline to read the .sav IDL data-structure file where the frame sequence is stored and to produce another .sav file with information on the spectral power distributions; the second orders the reading of that file to produce two more .sav files, one containing time-averaged size measurements and their deviations, the other breaking down time-dependent information and other arrays used for the calculations. The SIA pipeline has been entirely written in Interactive Data Language (IDL).
Most of the procedures used here are original to the SIA pipeline, but a small handful, such as ima3_distancetransform.pro, power2d1d.pro, extremum.pro and smooth2d.pro from Bernd Freytag, and peaks.pro and compile_opt.pro among others, are external. / The report consists of two parts: 1. The main project, where we apply our pipeline and obtain scientific results. 2. The appendix, where a technical description of the pipeline is given.
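The pipeline itself is written in IDL and is not reproduced here; the following Python sketch (an illustrative assumption, with `radial_power_spectrum` a hypothetical name) shows the core measurement it describes: a 2D FFT of a disc frame, azimuthal averaging into a 1D spectral power distribution, and reading off the peak wavevector.

```python
import numpy as np

def radial_power_spectrum(frame):
    """2D FFT power of an intensity frame, azimuthally averaged into a
    1D spectrum P(k); the peak's wavevector indicates the dominant
    feature size (size ~ frame_width / k_peak)."""
    power = np.abs(np.fft.fftshift(np.fft.fft2(frame))) ** 2
    ny, nx = frame.shape
    y, x = np.indices((ny, nx))
    r = np.hypot(x - nx // 2, y - ny // 2).astype(int)
    spectrum = np.bincount(r.ravel(), weights=power.ravel())
    counts = np.bincount(r.ravel())
    return spectrum / np.maximum(counts, 1)

# Toy usage: a frame with ring-like structure of radial wavelength 16 px.
n = 128
yy, xx = np.mgrid[0:n, 0:n]
frame = np.sin(2 * np.pi * np.hypot(xx - n / 2, yy - n / 2) / 16)
P = radial_power_spectrum(frame)
k_peak = np.argmax(P[1:]) + 1   # skip the k = 0 mean component
print("peak wavevector:", k_peak, "-> size ~", n / k_peak, "px")
```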
|
488 |
Úplně nejmenší čtverce a jejich asymptotické vlastnosti / Total Least Squares and Their Asymptotic Properties. Chuchel, Karel. January 2020.
This thesis deals with the total least squares method, which is used to estimate parameters in linear models. The thesis gives a basic description of the method and its asymptotic properties. It explains how the nonparametric bootstrap can be used within this framework to find the estimator. The properties of the bootstrap estimators are then studied by simulation on pseudo-randomly generated data. The simulations are carried out for a two-dimensional parameter in various settings of the underlying model. The individual bootstrap estimates are ordered in the plane using the Mahalanobis and Tukey statistical depth functions. The simulations confirm that the bootstrap estimator gives results good enough to be used in real situations.
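A hedged sketch of the two ingredients the thesis combines, total least squares and the nonparametric bootstrap; `tls` and `bootstrap_tls` are illustrative names and the settings are toy assumptions, not the thesis's simulation design.

```python
import numpy as np

def tls(X, y):
    """Total least squares via SVD: the estimator comes from the right
    singular vector of [X | y] with the smallest singular value."""
    Z = np.column_stack([X, y])
    _, _, Vt = np.linalg.svd(Z)
    v = Vt[-1]                      # singular vector for smallest singular value
    return -v[:-1] / v[-1]          # beta such that (beta, -1) is proportional to v

def bootstrap_tls(X, y, n_boot=500, rng=None):
    """Nonparametric bootstrap: resample rows with replacement and
    re-estimate, giving an empirical distribution of the TLS estimator."""
    rng = rng or np.random.default_rng(0)
    n = len(y)
    idx = rng.integers(0, n, size=(n_boot, n))
    return np.array([tls(X[i], y[i]) for i in idx])

# Toy usage: two-dimensional parameter, errors in both X and y.
rng = np.random.default_rng(1)
n, beta_true = 200, np.array([2.0, -1.0])
X_true = rng.normal(size=(n, 2))
X = X_true + 0.1 * rng.normal(size=(n, 2))      # errors in the regressors
y = X_true @ beta_true + 0.1 * rng.normal(size=n)
estimates = bootstrap_tls(X, y)
print("TLS estimate:", tls(X, y), "bootstrap sd:", estimates.std(axis=0))
```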
|
489 |
Simulations of turbulent boundary layers with suction and pressure gradients. Bobke, Alexandra. January 2016.
The focus of the present licentiate thesis is on the effect of suction and pressure gradients on turbulent boundary-layer flows, which are investigated separately through numerical simulations.

The first part aims at assessing history and development effects on adverse-pressure-gradient (APG) turbulent boundary layers (TBLs). A suitable set-up was developed to study near-equilibrium conditions for a boundary layer developing on a flat plate by setting the free-stream velocity at the top of the domain following a power law. The computational box size and the correct definition of the top-boundary condition were systematically tested. Well-resolved large-eddy simulations were performed to keep computational costs low. By varying the free-stream velocity distribution parameters, e.g. power-law exponent and virtual origin, pressure gradients of different strength and development were obtained. The magnitude of the pressure gradient is quantified in terms of the Clauser pressure-gradient parameter β. The effect of the APG is closely related to its streamwise development; hence, TBLs with non-constant and constant β were investigated. The effect was manifested in the mean flow through a much more pronounced wake region and in the Reynolds stresses through the existence of an outer peak. The terms of the turbulent kinetic energy budgets indicate the influence of the APG on the distribution of the transfer mechanisms across the boundary layer. Stronger and more energetic structures were identified in boundary layers with relatively stronger pressure gradients in their development history. Due to the difficulty of determining the boundary-layer thickness in flows with strong pressure gradients or over a curved surface, a new method based on the diagnostic-plot concept was introduced to obtain a robust estimate of the edge of a turbulent boundary layer.

In the second part, large-eddy simulations were performed on temporally developing turbulent asymptotic suction boundary layers (TASBLs). Findings from previous studies about the effect of suction could be confirmed, e.g. the reduction of the fluctuation levels and Reynolds shear stresses. Furthermore, the importance of the size of the computational domain and the development time was investigated. Both parameters were found to have a large impact on the results, even on low-order statistics. While the mean velocity profile collapses in the inner layer irrespective of box size and development time, a wake region occurs for too small box sizes or early development times and vanishes once sufficiently large domains and/or integration times are chosen. The asymptotic state is characterized by surprisingly thick boundary layers even for moderate Reynolds numbers Re (based on free-stream velocity and laminar displacement thickness); for instance, Re = 333 gives rise to a friction Reynolds number Reτ = 2000. Similarly, the flow gives rise to very large structures in the outer region. These findings have important ramifications for experiments, since very large facilities are required to reach the asymptotic state even for low Reynolds numbers.
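For reference, the Clauser pressure-gradient parameter β mentioned above has the following conventional definition (the thesis's exact normalization is assumed here to match the standard one):

```latex
% Clauser pressure-gradient parameter: displacement thickness delta^*,
% wall shear stress tau_w, and streamwise free-stream pressure gradient.
\[
  \beta \;=\; \frac{\delta^{*}}{\tau_w}\,\frac{dP_e}{dx}
\]
```

Positive β corresponds to an adverse pressure gradient; near-equilibrium boundary layers are those in which β stays constant in the streamwise direction.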
|
490 |
Mathematical modelling of vanadium redox batteries / Modelagem matemática de baterias redox de vanádio. Assuncao, Milton. Unknown Date.
Mathematical modelling using differential equations is an important tool to predict the behavior of vanadium redox batteries, since it may contribute to improving the device performance and lead to a better understanding of the principles of its operation. Modelling can be complemented by asymptotic analysis as a means to promote reductions or simplifications that make models less complex; this is done by observing the relative importance that each term exerts on the equations. Such simplifications are useful in this context, since these models usually address one cell only, the smallest operating unit, while real applications demand tens or hundreds of cells, implying larger computational requirements. In this research, several options for asymptotic reduction were investigated and, applied to different models, were able to speed up the processing time by up to a factor of 2.46 or reduce the memory requirements by up to 11.39%. The computational simulations were executed in COMSOL Multiphysics v4.4 and with in-house code developed in MATLAB. The results were validated by comparison with experimental data available in the literature. Additionally, correlating the results provided by COMSOL with those arising from the implemented subroutines allowed the developed algorithm to be validated.
|