221

Quelques algorithmes rapides pour la finance quantitative / Some fast algorithms for quantitative finance

Sall, Guillaume 21 December 2017
Dans cette thèse, nous nous intéressons à des noeuds critiques du calcul du risque de contrepartie : la valorisation rapide des produits dérivés et de leurs sensibilités. Nous proposons plusieurs méthodes mathématiques et informatiques pour répondre à cette problématique. Nous contribuons à quatre domaines différents : une extension de la méthode Vibrato et l'application des méthodes multilevel Monte Carlo pour le calcul des grecques d'ordre élevé n>1 avec une technique de différentiation automatique. La troisième contribution concerne l'évaluation des produits américains ; ici nous nous servons d'un schéma pararéel pour l'accélération du processus de valorisation et nous présentons également une application à la résolution d'une équation différentielle stochastique rétrograde. La quatrième contribution est la conception d'un moteur de calcul performant à architecture parallèle. / In this thesis, we focus on critical nodes of the computation of counterparty credit risk: the fast valuation of financial derivatives and their sensitivities. We propose several mathematical and computer-based methods to address this issue. We contribute to four areas: an extension of the Vibrato method and an application of weighted multilevel Monte Carlo to the computation of Greeks of high order n>1 with automatic differentiation. The third contribution concerns the evaluation of American-style options; here we use a parareal scheme to speed up the valuation process, and we also apply it to the solution of backward stochastic differential equations. The last contribution is the design of an efficient computation engine for financial derivatives with a parallel architecture.
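
A minimal sketch of the multilevel Monte Carlo idea underlying these contributions, applied here to pricing a European call under Black-Scholes dynamics with an Euler scheme. The Vibrato/automatic-differentiation machinery for higher-order Greeks is not reproduced, and all model parameters are illustrative assumptions rather than values from the thesis.

```python
import numpy as np

def coupled_level(S0, r, sigma, T, K, level, n_paths, rng):
    """Discounted call payoffs on the fine grid and, for level > 0, on the
    coarse grid driven by the same Brownian increments (Euler scheme)."""
    n_fine = 2 ** level
    dt = T / n_fine
    S_fine = np.full(n_paths, S0, dtype=float)
    S_coarse = np.full(n_paths, S0, dtype=float)
    dW_pair = np.zeros(n_paths)
    for k in range(n_fine):
        dW = rng.normal(0.0, np.sqrt(dt), n_paths)
        S_fine += r * S_fine * dt + sigma * S_fine * dW
        dW_pair += dW
        if level > 0 and k % 2 == 1:        # two fine steps = one coarse step
            S_coarse += r * S_coarse * 2 * dt + sigma * S_coarse * dW_pair
            dW_pair = np.zeros(n_paths)
    disc = np.exp(-r * T)
    payoff_fine = disc * np.maximum(S_fine - K, 0.0)
    payoff_coarse = disc * np.maximum(S_coarse - K, 0.0) if level > 0 else 0.0
    return payoff_fine, payoff_coarse

def mlmc_price(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0, L=5, N=20_000, seed=0):
    """Telescoping MLMC estimator: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}]."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for level in range(L + 1):
        fine, coarse = coupled_level(S0, r, sigma, T, K, level, N, rng)
        total += np.mean(fine - coarse)
    return total

print(mlmc_price())   # roughly the Black-Scholes value (about 10.45 for these inputs)
```
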
222

Contributions aux méthodes bayésiennes approchées pour modèles complexes / Contributions to Bayesian Computing for Complex Models

Grazian, Clara 15 April 2016
Récemment, la grande complexité des applications modernes, par exemple en génétique, en informatique, en finance, dans les sciences du climat, etc., a conduit à la proposition de nouveaux modèles susceptibles de décrire la réalité. Dans ces cas, les méthodes MCMC classiques ne parviennent pas à approcher la distribution a posteriori, parce qu'elles sont trop lentes pour explorer l'espace complet des paramètres. De nouveaux algorithmes ont été proposés pour gérer ces situations, où la fonction de vraisemblance est indisponible. Nous étudions plusieurs caractéristiques des modèles complexes : comment éliminer les paramètres de nuisance de l'analyse et faire de l'inférence sur les quantités d'intérêt, dans un cadre bayésien et non bayésien, et comment construire une distribution a priori de référence. / Recently, the great complexity of modern applications, for instance in genetics, computer science, finance, climate science, etc., has led to the proposal of new models which may realistically describe reality. In these cases, classical MCMC methods fail to approximate the posterior distribution, because they are too slow to investigate the full parameter space. New algorithms have been proposed to handle these situations, where the likelihood function is unavailable. We will investigate many features of complex models: how to eliminate the nuisance parameters from the analysis and make inference on key quantities of interest, both in a Bayesian and non-Bayesian setting, and how to build a reference prior.
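
As one concrete instance of the likelihood-free algorithms alluded to above, the following is a minimal ABC rejection sampler on a toy Gaussian model. The model, summary statistic and tolerance are assumptions chosen for illustration; the thesis's own algorithms are more elaborate.

```python
import numpy as np

rng = np.random.default_rng(1)
observed = rng.normal(2.0, 1.0, size=50)        # pretend data with unknown mean
obs_summary = observed.mean()                   # summary statistic

def abc_rejection(n_accept=1000, tol=0.1):
    """Keep prior draws whose simulated summary lands within tol of the data's."""
    accepted = []
    while len(accepted) < n_accept:
        theta = rng.normal(0.0, 5.0)            # draw from the prior
        simulated = rng.normal(theta, 1.0, size=observed.size)
        if abs(simulated.mean() - obs_summary) < tol:
            accepted.append(theta)
    return np.array(accepted)

posterior_sample = abc_rejection()
print(posterior_sample.mean(), posterior_sample.std())   # approximate posterior moments
```
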
223

Polynômes aléatoires, gaz de Coulomb, et matrices aléatoires / Random Polynomials, Coulomb Gas and Random Matrices

Butez, Raphaël 04 December 2017
L'objet principal de cette thèse est l'étude de plusieurs modèles de polynômes aléatoires. Il s'agit de comprendre le comportement macroscopique des racines de polynômes aléatoires dont le degré tend vers l'infini. Nous explorerons la connexion existant entre les racines de polynômes aléatoires et les gaz de Coulomb afin d'obtenir des principes de grandes déviations pour la mesure empirique des racines. Nous revisitons l'article de Zeitouni et Zelditch qui établit un principe de grandes déviations pour un modèle général de polynômes aléatoires à coefficients gaussiens complexes. Nous étendons ce résultat au cas des coefficients gaussiens réels. Ensuite, nous démontrons que ces résultats restent valides pour une large classe de lois sur les coefficients, faisant des grandes déviations un phénomène universel pour ces modèles. De plus, nous démontrons tous les résultats précédents pour le modèle des polynômes de Weyl renormalisés. Nous nous intéressons aussi au comportement de la racine de plus grand module des polynômes de Kac. Celle-ci a un comportement non-universel et est en général une variable aléatoire à queues lourdes. Enfin, nous démontrons un principe de grandes déviations pour la mesure empirique des ensembles biorthogonaux. / The main topic of this thesis is the study of the roots of random polynomials from several models. We seek to understand the behavior of the roots as the degree of the polynomial tends to infinity. We explore the connection between the roots of random polynomials and Coulomb gases to obtain large deviations principles for the empirical measures of the roots of random polynomials. We revisit the article of Zeitouni and Zelditch which establishes a large deviations principle for a rather general model of random polynomials with independent complex Gaussian coefficients. We extend this result to the case of real Gaussian coefficients. Then, we prove that those results are also valid for a wide class of distributions on the coefficients, which makes these large deviations principles a universal property. We also prove all of those results for renormalized Weyl polynomials. We then study the largest root in modulus of Kac polynomials. We show that this random variable has a non-universal behavior and in general has heavy tails. Finally, we establish a large deviations principle for the empirical measures of biorthogonal ensembles.
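
A quick numerical illustration of the objects studied here: the roots of a Kac polynomial with i.i.d. Gaussian coefficients, whose empirical measure concentrates near the unit circle as the degree grows. The degree and sample choices are arbitrary assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
degree = 200
coeffs = rng.normal(size=degree + 1)      # i.i.d. real Gaussian coefficients
roots = np.roots(coeffs)                  # roots of sum_k coeffs[k] * x^(degree - k)

moduli = np.abs(roots)
print("mean modulus of the roots:", moduli.mean())    # concentrates near 1 (unit circle)
print("largest root in modulus:", moduli.max())       # heavy-tailed across realisations
```
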
224

Étude probabiliste des contraintes de bout en bout dans les systèmes temps réel / Probabilistic study of end-to-end constraints in real-time systems

Maxim, Cristian 11 December 2017
L'interaction sociale, l'éducation et la santé ne sont que quelques exemples de domaines dans lesquels l'évolution rapide de la technologie a eu un grand impact sur la qualité de vie. Les entreprises s'appuient de plus en plus sur les systèmes embarqués pour augmenter leur productivité, leur efficacité et leur valeur. Dans les usines, la précision des robots tend à remplacer la polyvalence humaine. Bien que les appareils connectés comme les drones, les montres intelligentes ou les maisons intelligentes soient de plus en plus populaires ces dernières années, ce type de technologie est utilisé depuis longtemps dans les industries concernées par la sécurité des utilisateurs. L'industrie avionique utilise des ordinateurs pour ses produits depuis 1972 avec la production du premier avion A300 ; elle a accompli des progrès étonnants avec le développement du premier Concorde en 1976, qui dépassait de nombreuses années les avions de son époque et a été considéré comme un miracle de la technologie. Certaines innovations et connaissances acquises pour le Concorde sont toujours utilisées dans les modèles récents comme l'A380 ou l'A350. Un système embarqué est un système à microprocesseur qui est construit pour contrôler une fonction ou une gamme de fonctions et qui n'est pas conçu pour être programmé par l'utilisateur final de la même manière qu'un ordinateur personnel. Un système temps réel est un système de traitement de l'information qui doit répondre aux stimuli d'entrée générés de manière externe dans une période finie et spécifiée. Pour ces systèmes, l'exactitude ne dépend pas seulement du résultat logique, mais aussi du temps dans lequel il a été livré. Les systèmes temps réel se trouvent dans des industries comme l'aéronautique, l'aérospatiale, l'automobile ou l'industrie ferroviaire, mais aussi dans les réseaux de capteurs, le traitement d'image, les applications multimédias, les technologies médicales, la robotique, les communications, les jeux informatiques ou les systèmes ménagers. Dans cette thèse, nous nous concentrons sur les systèmes temps réel embarqués et, pour simplifier les notations, nous les nommons simplement systèmes temps réel. Nous pourrions également parler de systèmes cyber-physiques le cas échéant. Le pire temps d'exécution (WCET) d'une tâche représente le temps maximum possible pour qu'elle soit exécutée. Le WCET est obtenu après une analyse temporelle et, souvent, il ne peut pas être déterminé avec précision en énumérant toutes les exécutions possibles. C'est pourquoi, dans l'industrie, les mesures sont faites uniquement sur un sous-ensemble de scénarios possibles, celui qui générerait les temps d'exécution les plus élevés, et une limite supérieure du temps d'exécution est estimée en ajoutant une marge de sécurité au plus grand temps observé. L'analyse temporelle est un concept clé qui a été utilisé dans les systèmes temps réel pour affecter une limite supérieure aux WCET des tâches ou des fragments de programme. Cette affectation peut être obtenue soit par analyse statique, soit par analyse des mesures. Les méthodes statiques et par mesure, dans leurs approches déterministes, ont tendance à être extrêmement pessimistes. Malheureusement, ce niveau de pessimisme et le sur-provisionnement qui en découle ne peuvent pas être acceptés par tous les systèmes temps réel et, pour ces cas, d'autres approches doivent être prises en considération.
/ In our times, we are surrounded by technologies meant to improve our lives, to assure their security, or programmed to perform different functions while respecting a series of constraints. We consider them as embedded systems or often as parts of cyber-physical systems. An embedded system is a microprocessor-based system that is built to control a function or a range of functions and is not designed to be programmed by the end user in the same way that a PC is. The Worst Case Execution Time (WCET) of a task represents the maximum time it can take to execute. The WCET is obtained after analysis and most of the time it cannot be accurately determined by exhausting all the possible executions. This is why, in industry, the measurements are done only on a subset of possible scenarios (those that would generate the highest execution times) and an execution time bound is estimated by adding a safety margin to the greatest observed time. Amongst all branches of real-time systems, an important role is played by the Critical Real-Time Embedded Systems (CRTES) domain. CRTESs are widely used in fields like automotive, avionics, railway, health-care, etc. The performance of CRTESs is analyzed not only from the point of view of their correctness, but also from the perspective of time. In the avionics industry such systems have to undergo a strict process of analysis in order to fulfill a series of certification criteria demanded by the certification authorities, namely the European Aviation Safety Agency (EASA) in Europe or the Federal Aviation Administration (FAA) in the United States. The avionics industry in particular, and the real-time domain in general, are known for being conservative and adopting new technologies only when it becomes inevitable. For the avionics industry this is motivated by the high cost that any change in the existing functional systems would bring. Any change in the software or hardware has to undergo another certification process, which costs the manufacturer money, time and resources. Despite their conservative tendency, airplane producers cannot ignore the constant change in technology and the performance benefits brought by COTS processors, which nowadays are mainly multi-processors. As a curiosity, most of the microprocessors found in airplanes currently flying around the world have less computational power than a modern home PC. Their chipsets are specifically designed for embedded applications characterized by low power consumption, predictability and many I/O peripherals. In the present context, where critical real-time systems are invaded by multi-core platforms, WCET analysis using deterministic approaches becomes difficult, if not impossible. The time constraints of real-time systems need to be verified in the context of certification. This verification, done during the entire development cycle, must take into account architectures that are more and more complex. These architectures increase the cost and complexity of current, deterministic, tools for identifying all possible time constraints and dependencies that can occur inside the system, at the risk of overlooking extreme cases. An alternative to these problems is the probabilistic approach, which is better adapted to deal with these hazards and uncertainty and which allows a precise modeling of the system. 2. Contributions.
The contribution of the thesis is threefold, covering the conditions necessary for using the theory of extremes on execution time measurements, the methods developed using the theory of extremes for analyzing real-time systems, and experimental results. 2.1. Conditions for use of EVT in the real-time domain. In this chapter we establish the environment in which our work is done. The use of EVT in any domain comes with a series of restrictions on the data being analyzed. In our case the data being analyzed consists of execution time measurements.
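
A hedged sketch of the measurement-based probabilistic approach described above: fit a Generalised Extreme Value distribution to block maxima of execution-time measurements and read off a probabilistic WCET bound. The synthetic timing model, block size and exceedance probability are assumptions for the example, not values from the thesis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# synthetic measurements: a base cost plus variable, occasionally large, delays
times = 100 + rng.gamma(shape=2.0, scale=3.0, size=100_000)

block = 100
maxima = times.reshape(-1, block).max(axis=1)         # block maxima

shape, loc, scale = stats.genextreme.fit(maxima)      # fit a GEV to the maxima
# execution time exceeded with probability 1e-9 per block, under the fitted model
pwcet = stats.genextreme.isf(1e-9, shape, loc=loc, scale=scale)
print(f"fitted GEV shape={shape:.3f}, pWCET bound = {pwcet:.1f} time units")
```
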
225

Estimation non paramétrique adaptative dans la théorie des valeurs extrêmes : application en environnement / Nonparametric adaptive estimation in the extreme value theory : application in ecology

Pham, Quang Khoai 09 January 2015
L'objectif de cette thèse est de développer des méthodes statistiques basées sur la théorie des valeurs extrêmes pour estimer des probabilités d'évènements rares et des quantiles extrêmes conditionnels. Nous considérons une suite de variables aléatoires indépendantes $X_{t_1}, X_{t_2}, \ldots, X_{t_n}$ associées aux temps $0 \leq t_1 < \cdots < t_n \leq T_{\max}$, où $X_{t_i}$ a pour fonction de répartition $F_{t_i}$ et $F_t$ est la loi conditionnelle de $X$ sachant $T = t \in [0, T_{\max}]$. Pour chaque $t \in [0, T_{\max}]$, nous proposons un estimateur non paramétrique de quantiles extrêmes de $F_t$. L'idée de notre approche consiste à ajuster, pour chaque $t \in [0, T_{\max}]$, la queue de la distribution $F_t$ par une distribution de Pareto de paramètre $\theta_{t,\tau}$ à partir d'un seuil $\tau$. Le paramètre $\theta_{t,\tau}$ est estimé en utilisant un estimateur non paramétrique à noyau de taille de fenêtre $h$ basé sur les observations plus grandes que $\tau$. Sous certaines hypothèses de régularité, nous montrons que l'estimateur adaptatif proposé de $\theta_{t,\tau}$ est consistant et nous donnons sa vitesse de convergence. Nous proposons une procédure de tests séquentiels pour déterminer le seuil $\tau$ et nous obtenons le paramètre $h$ suivant deux méthodes : la validation croisée et une approche adaptative. Nous proposons également une méthode pour choisir simultanément le seuil $\tau$ et la taille de la fenêtre $h$. Finalement, les procédures proposées sont étudiées sur des données simulées et sur des données réelles, dans le but d'aider à la surveillance de systèmes aquatiques. / The objective of this PhD thesis is to develop statistical methods based on extreme value theory to estimate the probabilities of rare events and conditional extreme quantiles. We consider independent random variables $X_{t_1}, \ldots, X_{t_n}$ associated to a sequence of times $0 \leq t_1 < \cdots < t_n \leq T_{\max}$, where $X_{t_i}$ has distribution function $F_{t_i}$ and $F_t$ is the conditional distribution of $X$ given $T = t \in [0, T_{\max}]$. For each $t \in [0, T_{\max}]$, we propose a nonparametric adaptive estimator for extreme quantiles of $F_t$. The idea of our approach is to adjust the tail of the distribution function $F_t$ with a Pareto distribution of parameter $\theta_{t,\tau}$ starting from a threshold $\tau$. The parameter $\theta_{t,\tau}$ is estimated using a nonparametric kernel estimator of bandwidth $h$ based on the observations larger than $\tau$. We propose a sequential testing procedure for the choice of the threshold $\tau$ and we determine the bandwidth $h$ by two methods: cross-validation and an adaptive procedure. Under some regularity assumptions, we prove that the adaptive estimator of $\theta_{t,\tau}$ is consistent and we determine its rate of convergence. We also propose a method to choose simultaneously the threshold $\tau$ and the bandwidth $h$. Finally, we study the proposed procedures on simulated data and on a real data set, with the aim of contributing to the monitoring of aquatic systems.
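
A rough sketch of the construction described above, under assumed conventions: a kernel-weighted Hill-type fit of the Pareto tail of $F_t$ above a threshold $\tau$, followed by a Weissman-type extreme-quantile extrapolation. The parametrisation $P(X > x \mid X > \tau) = (x/\tau)^{-1/\theta}$, the Gaussian kernel and all numerical choices are illustrative assumptions, not the thesis's exact estimator.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5000
t_obs = rng.uniform(0.0, 1.0, n)                       # observation times in [0, T_max]
X = (1.0 + t_obs) * rng.pareto(3.0, n) + 1.0           # heavy-tailed data, tail drifting with t

def tail_quantile(t, tau, h, p):
    """Kernel-weighted Pareto tail fit at time t; returns (theta_hat, q_hat(p))."""
    w = np.exp(-0.5 * ((t_obs - t) / h) ** 2)          # Gaussian kernel weights
    exc = X > tau
    theta_hat = np.sum(w[exc] * np.log(X[exc] / tau)) / np.sum(w[exc])
    p_tau = np.sum(w * exc) / np.sum(w)                # weighted exceedance proportion
    q_hat = tau * (p_tau / p) ** theta_hat             # Weissman-type extrapolation
    return theta_hat, q_hat

print(tail_quantile(t=0.5, tau=np.quantile(X, 0.9), h=0.1, p=1e-3))
```
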
226

Transport optimal de martingale multidimensionnel. / Multidimensional martingale optimal transport.

De March, Hadrien 29 June 2018
Nous étudions dans cette thèse divers aspects du transport optimal martingale en dimension plus grande que un, de la dualité à la structure locale, puis nous proposons finalement des méthodes d'approximation numérique. On prouve d'abord l'existence de composantes irréductibles intrinsèques aux transports martingales entre deux mesures données, ainsi que la canonicité de ces composantes. Nous avons ensuite prouvé un résultat de dualité pour le transport optimal martingale en dimension quelconque : la dualité point par point n'est plus vraie, mais une forme de dualité quasi-sûre est démontrée. Cette dualité permet de démontrer la possibilité de décomposer le transport optimal quasi-sûr en une série de sous-problèmes de transport optimal point par point sur chaque composante irréductible. On utilise enfin cette dualité pour démontrer un principe de monotonie martingale, analogue au célèbre principe de monotonie du transport optimal classique. Nous étudions ensuite la structure locale des transports optimaux, déduite de considérations différentielles. On obtient ainsi une caractérisation de cette structure en utilisant des outils de géométrie algébrique réelle. On en déduit la structure des transports optimaux martingales dans le cas des coûts puissances de la norme euclidienne, ce qui permet de résoudre une conjecture qui date de 2015. Finalement, nous avons comparé les méthodes numériques existantes et proposé une nouvelle méthode qui s'avère plus efficace et permet de traiter un problème intrinsèque de la contrainte martingale qu'est le défaut d'ordre convexe. On donne également des techniques pour gérer en pratique les problèmes numériques. / In this thesis, we study various aspects of martingale optimal transport in dimension greater than one, from duality to local structure, and finally we propose numerical approximation methods. We first prove the existence of irreducible components intrinsic to martingale transports between two given measures, as well as the canonical nature of these components. We then prove a duality result for martingale optimal transport in any dimension: pointwise duality no longer holds, but a form of quasi-sure duality is established. This duality makes it possible to decompose the quasi-sure optimal transport into a series of pointwise optimal transport subproblems on each irreducible component. Finally, this duality is used to prove a martingale monotonicity principle, analogous to the famous monotonicity principle of classical optimal transport. We then study the local structure of optimal transports, deduced from differential considerations. We thus obtain a characterization of this structure using tools from real algebraic geometry. We deduce the structure of optimal martingale transports in the case of costs given by powers of the Euclidean norm, which allows us to solve a conjecture dating from 2015. Finally, we compare the existing numerical methods and propose a new method which proves more efficient and makes it possible to treat an intrinsic problem of the martingale constraint, namely the defect of convex order. Techniques are also provided to handle the numerical issues that arise in practice.
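
To make the numerical side concrete, here is a small discrete martingale optimal transport problem solved as a linear program, one of the simpler baselines such methods are compared against. The two marginals (chosen to be in convex order) and the cost |y - x| are toy assumptions; the thesis proposes a more refined scheme.

```python
import numpy as np
from scipy.optimize import linprog

x = np.array([-1.0, 1.0]);       mu = np.array([0.5, 0.5])         # first marginal
y = np.array([-2.0, 0.0, 2.0]);  nu = np.array([0.25, 0.5, 0.25])  # second marginal
m, n = len(x), len(y)

cost = np.abs(y[None, :] - x[:, None]).ravel()   # c(x_i, y_j) = |y_j - x_i|

A_eq, b_eq = [], []
for i in range(m):                               # row marginals: sum_j p_ij = mu_i
    row = np.zeros((m, n)); row[i, :] = 1.0
    A_eq.append(row.ravel()); b_eq.append(mu[i])
for j in range(n):                               # column marginals: sum_i p_ij = nu_j
    col = np.zeros((m, n)); col[:, j] = 1.0
    A_eq.append(col.ravel()); b_eq.append(nu[j])
for i in range(m):                               # martingale constraint: sum_j p_ij (y_j - x_i) = 0
    mart = np.zeros((m, n)); mart[i, :] = y - x[i]
    A_eq.append(mart.ravel()); b_eq.append(0.0)

res = linprog(cost, A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=(0, None))
print(res.fun)                  # optimal cost
print(res.x.reshape(m, n))      # optimal martingale coupling
```
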
227

Bayesian methods for inverse problems

Lian, Duan January 2013
This thesis describes two novel Bayesian methods: the Iterative Ensemble Square Root Filter (IEnSRF) and the Warp Ensemble Square Root Filter (WEnSRF), for solving the barcode detection problem, the deconvolution problem in well testing and the history matching problem of facies patterns. For the barcode detection problem, at the expense of overestimating the posterior uncertainty, the IEnSRF efficiently achieves successful detections with very challenging real barcode images which the other considered methods and commercial software fail to detect. It also performs reliable detection on low-resolution images under poor ambient light conditions. For the deconvolution problem in well testing, the IEnSRF is capable of quantifying estimation uncertainty, incorporating the cumulative production data and estimating the initial pressure, which were thought to be unachievable in the existing well testing literature. The estimation results for the considered real benchmark data using the IEnSRF significantly outperform the existing methods in the commercial software. The WEnSRF is utilised for solving the history matching problem of facies patterns. Through the warping transformation, the WEnSRF performs adjustment on the reservoir features directly and is thus superior in estimating large-scale complicated facies patterns. It is able to provide accurate estimates of the reservoir properties robustly and efficiently given reasonably reliable prior reservoir structural information.
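
For orientation, a sketch of the textbook building block these filters refine: a single analysis step of a basic (perturbed-observation) ensemble Kalman filter on a made-up linear-Gaussian problem. The IEnSRF and WEnSRF are iterative, square-root and warped variants; nothing below is specific to them.

```python
import numpy as np

rng = np.random.default_rng(3)
n_state, n_obs, n_ens = 3, 2, 200

H = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])                 # observe the first two components
R = 0.1 * np.eye(n_obs)                         # observation error covariance
truth = np.array([1.0, -2.0, 0.5])
y_obs = H @ truth + rng.multivariate_normal(np.zeros(n_obs), R)

ensemble = rng.normal(0.0, 1.0, size=(n_ens, n_state))   # prior ensemble

def enkf_analysis(ens, y, H, R, rng):
    """One perturbed-observation ensemble Kalman analysis step."""
    X = ens - ens.mean(axis=0)                  # ensemble anomalies
    P = X.T @ X / (len(ens) - 1)                # sample covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)          # Kalman gain
    y_pert = y + rng.multivariate_normal(np.zeros(len(y)), R, size=len(ens))
    return ens + (y_pert - ens @ H.T) @ K.T

posterior = enkf_analysis(ensemble, y_obs, H, R, rng)
print("posterior ensemble mean:", posterior.mean(axis=0))   # pulled toward the truth
```
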
228

Effective design of marine reserves : incorporating alongshore currents, size structure, and uncertainty

Reimer, Jody January 2013
Marine populations worldwide are in decline due to anthropogenic effects. Spatial management via marine reserves may be an effective conservation method for many species, but the requisite theory is still underdeveloped. Integrodifference equation (IDE) models can be used to determine the critical domain size required for persistence and provide a modelling framework suitable for many marine populations. Here, we develop a novel spatially implicit approximation for the proportion of individuals lost outside the reserve areas which consistently outperforms the most common approximation. We examine how results using this approximation compare to the existing IDE results on the critical domain size for populations in a single reserve, in a network of reserves, in the presence of alongshore currents, and in structured populations. We find that the approximation consistently provides results which are in close agreement with those of an IDE model with the advantage of being simpler to convey to a biological audience while providing insights into the significance of certain model components. We also design a stochastic individual based model (IBM) to explore the probability of extinction for a population within a reserve area. We use our spatially implicit approximation to estimate the proportion of individuals which disperse outside the reserve area. We then use this approximation to obtain results on extinction using two different approaches, which we can compare to the baseline IBM; the first approach is based on the Central Limit Theorem and provides efficient simulation results, and the second modifies a simple Galton-Watson branching process to include loss outside the reserve area. We find that this spatially implicit approximation is also effective in obtaining results similar to those produced by the IBM in the presence of both demographic and environmental variability. Overall, this provides a set of complementary methods for predicting the reserve area required to sustain a population in the presence of strong fishing pressure in the surrounding waters.
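
A hedged sketch of the kind of integrodifference-equation persistence check discussed above: discretise the linearised growth-dispersal operator on a single reserve of length L and test whether its dominant eigenvalue exceeds one. The Laplace dispersal kernel, growth rate and grid are assumptions for the example, not the thesis's model.

```python
import numpy as np

def persists(L, R=1.5, alpha=1.0, n_grid=400):
    """True if the linearised IDE on a reserve [0, L] has dominant eigenvalue > 1."""
    x = np.linspace(0.0, L, n_grid)
    dx = x[1] - x[0]
    # Laplace (back-to-back exponential) dispersal kernel k(x - y)
    K = 0.5 * alpha * np.exp(-alpha * np.abs(x[:, None] - x[None, :]))
    A = R * K * dx                              # one step of the linearised operator
    return np.max(np.abs(np.linalg.eigvals(A))) > 1.0

for L in [0.5, 1.0, 2.0, 4.0, 8.0]:
    print(f"reserve length {L:>4}: persists? {persists(L)}")
```
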
229

Rates of convergence of variance-gamma approximations via Stein's method

Gaunt, Robert E. January 2013
Stein's method is a powerful technique that can be used to obtain bounds for approximation errors in a weak convergence setting. The method has been used to obtain approximation results for a number of distributions, such as the normal, Poisson and Gamma distributions. A major strength of the method is that it is often relatively straightforward to apply it to problems involving dependent random variables. In this thesis, we consider the adaptation of Stein's method to the class of Variance-Gamma distributions. We obtain a Stein equation for the Variance-Gamma distributions. Uniform bounds for the solution of the Symmetric Variance-Gamma Stein equation and its first four derivatives are given in terms of the supremum norms of derivatives of the test function. New formulas and inequalities for modified Bessel functions are obtained, which allow us to obtain these bounds. We then use local approach couplings to obtain bounds on the error in approximating two asymptotically Variance-Gamma distributed statistics by their limiting distribution. In both cases, we obtain a convergence rate of order n^{-1} for suitably smooth test functions. The product of two normal random variables has a Variance-Gamma distribution and this leads us to consider the development of Stein's method to the product of r independent mean-zero normal random variables. An elegant Stein equation is obtained, which motivates a generalisation of the zero bias transformation. This new transformation has a number of interesting properties, which we exploit to prove some limit theorems for statistics that are asymptotically distributed as the product of two central normal distributions. The Variance-Gamma and Product Normal distributions arise as functions of the multivariate normal distribution. We end this thesis by demonstrating how the multivariate normal Stein equation can be used to prove limit theorems for statistics that are asymptotically distributed as a function of the multivariate normal distribution. We establish some sufficient conditions for convergence rates to be of order n^{-1} for smooth test functions, and thus faster than the O(n^{-1/2}) rate that would arise from the Berry-Esseen Theorem. We apply the multivariate normal Stein equation approach to prove Variance-Gamma and Product Normal limit theorems, and we also consider an application to Friedman's X^2 statistic.
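
A quick Monte Carlo check of the fact used above that the product of two independent standard normals has a (symmetric) Variance-Gamma distribution, with density K_0(|z|)/pi where K_0 is a modified Bessel function of the second kind. The sample size and grid are arbitrary choices for the illustration.

```python
import numpy as np
from scipy.special import k0

rng = np.random.default_rng(5)
z = rng.normal(size=1_000_000) * rng.normal(size=1_000_000)   # products of two normals

edges = np.linspace(-4, 4, 81)
hist, _ = np.histogram(z, bins=edges, density=True)
centres = 0.5 * (edges[:-1] + edges[1:])
theory = k0(np.abs(centres)) / np.pi          # diverges logarithmically at 0

for c, h, t in zip(centres[::20], hist[::20], theory[::20]):
    print(f"z = {c:+.2f}   empirical {h:.4f}   K0(|z|)/pi {t:.4f}")
```
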
230

Selection in a spatially structured population

Straulino, Daniel January 2014
This thesis focuses on the effect that selection has on the ancestry of a spatially structured population. In the absence of selection, the ancestry of a sample from the population behaves as a system of random walks that coalesce upon meeting. Backwards in time, each ancestral lineage jumps, at the time of its birth, to the location of its parent, and whenever two ancestral lineages have the same parent they jump to the same location and coalesce. Introducing selective forces to the evolution of a population translates into branching when we follow ancestral lineages, a by-product of biased sampling forwards in time. We study populations that evolve according to the Spatial Lambda-Fleming-Viot process with selection. In order to assess whether the picture under selection differs from the neutral case we must consider the timescale dictated by the neutral mutation rate Theta. Thus we look at the rescaled dual process with n=1/Theta. Our goal is to find a non-trivial rescaling limit for the system of branching and coalescing random walks that describes the ancestral process of a population. We show that the strength of selection (relative to the mutation rate) required to do so depends on the dimension; in one and two dimensions selection needs to be stronger in order to leave a detectable trace in the population. The main results in this thesis can be summarised as follows. In dimensions three and higher we take the selection coefficient to be proportional to 1/n, in dimension two we take it to be proportional to log(n)/n and finally, in dimension one we take the selection coefficient to be proportional to 1/sqrt(n). We then proceed to prove that in two and higher dimensions the ancestral process of a sample of the population converges to branching Brownian motion. In one dimension, provided we do not allow ancestral lineages to jump over each other, the ancestral process converges to a subset of the Brownian net. We also provide numerical results that show that the non-crossing restriction in one dimension cannot be lifted without a qualitative change in the behaviour of the process. Finally, through simulations, we study the rate of convergence in the two-dimensional case.
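
A toy sketch of the ancestral picture described above: branching and coalescing random walks on the one-dimensional integer lattice, with a small per-step branching rate standing in for selection. The rates and lattice are illustrative assumptions; the thesis works with the Spatial Lambda-Fleming-Viot dual and its diffusive rescaling, not this discrete caricature.

```python
import numpy as np

rng = np.random.default_rng(11)

def simulate(n_lineages=20, steps=2000, branch_rate=0.001):
    positions = list(rng.integers(-50, 51, size=n_lineages))
    history = []
    for _ in range(steps):
        # each ancestral lineage takes a nearest-neighbour step
        moved = [pos + rng.choice([-1, 1]) for pos in positions]
        # lineages on the same site coalesce
        moved = list(set(moved))
        # selection: a lineage branches with small probability, the copy stepping
        # to a neighbouring site so it does not instantly coalesce with its parent
        offspring = [pos + rng.choice([-1, 1]) for pos in moved if rng.random() < branch_rate]
        positions = list(set(moved + offspring))
        history.append(len(positions))
    return history

counts = simulate()
print("lineages remaining after", len(counts), "steps:", counts[-1])
```
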
