131

An Empirical Investigation of Tukey's Honestly Significant Difference Test with Variance Heterogeneity and Unequal Sample Sizes, Utilizing Kramer's Procedure and the Harmonic Mean

McKinney, William Lane 05 1900 (has links)
This study sought to determine the effect upon Tukey's Honestly Significant Difference (HSD) statistic of concurrently violating the assumptions of homogeneity of variance and equal sample sizes. Two forms of the statistic for the unequal sample size problem were investigated: Kramer's form and the harmonic mean approach. The study employed a Monte Carlo simulation procedure which varied sample sizes under a heterogeneity of variance condition; four thousand experiments were generated, and the findings were based upon the empirically obtained significance levels. Five conclusions were reached. The first conclusion was that, for the conditions of this study, the Kramer form of the HSD statistic is not robust at the .05 or .01 nominal level of significance. A second conclusion was that the harmonic mean form of the HSD statistic is likewise not robust at the .05 and .01 nominal levels of significance. The third conclusion, drawn from all the findings, was that the Kramer form of the HSD test is the preferred procedure under the combined assumption violations of variance heterogeneity and unequal sample sizes. Two additional conclusions are based on related findings. The fourth conclusion was that, for the combined assumption violations in this study, the actual significance levels (probability levels) were less than the nominal significance levels when the magnitude of the unequal variances was positively related to the magnitude of the unequal sample sizes. The fifth and last conclusion was that, for the concurrent assumption violation of variance heterogeneity and unequal sample sizes, the actual significance levels significantly exceeded the nominal significance levels when the magnitude of the unequal variances was negatively related to the magnitude of the unequal sample sizes.
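A minimal sketch of the kind of simulation described above: it estimates the empirical Type I error rate of the HSD test in its Tukey-Kramer and harmonic-mean forms under unequal variances and unequal sample sizes. The group sizes, standard deviations, and use of SciPy's studentized-range distribution are illustrative assumptions, not details taken from the thesis.

```python
import numpy as np
from scipy import stats

def hsd_rejects(groups, q_crit, form="kramer"):
    """True if any pairwise Tukey HSD comparison is declared significant.

    form="kramer"   uses the Tukey-Kramer allowance sqrt(MSE/2 * (1/n_i + 1/n_j));
    form="harmonic" uses a single allowance based on the harmonic mean sample size.
    """
    k = len(groups)
    ns = np.array([len(g) for g in groups])
    means = np.array([g.mean() for g in groups])
    df_err = ns.sum() - k
    mse = sum(((g - g.mean()) ** 2).sum() for g in groups) / df_err
    if form == "harmonic":
        n_h = k / (1.0 / ns).sum()                 # harmonic mean of the n_i
        return np.ptp(means) > q_crit * np.sqrt(mse / n_h)
    for i in range(k):                             # Tukey-Kramer: pair-specific allowances
        for j in range(i + 1, k):
            allowance = q_crit * np.sqrt(mse / 2 * (1 / ns[i] + 1 / ns[j]))
            if abs(means[i] - means[j]) > allowance:
                return True
    return False

rng = np.random.default_rng(0)
alpha = 0.05
ns = [5, 10, 20]                 # unequal sample sizes (illustrative choices)
sds = [3.0, 2.0, 1.0]            # unequal variances, negatively paired with the n_i
k, df_err = len(ns), sum(ns) - len(ns)
q_crit = stats.studentized_range.ppf(1 - alpha, k, df_err)   # fixed design, so compute once

# All population means are equal, so every rejection is a Type I error.
reps = 4000                      # the study generated four thousand experiments
hits = sum(hsd_rejects([rng.normal(0.0, sd, n) for n, sd in zip(ns, sds)], q_crit)
           for _ in range(reps))
print(f"empirical alpha ≈ {hits / reps:.3f} (nominal {alpha})")
```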
132

Accélération de la convergence dans le code de transport de particules Monte-Carlo TRIPOLI-4® en criticité / Convergence acceleration in the Monte-Carlo particle transport code TRIPOLI-4® in criticality

Dehaye, Benjamin 05 December 2014 (has links)
Fields such as criticality studies need to compute certain neutronics quantities of interest. Two kinds of codes may be used: deterministic and stochastic. Stochastic codes are regarded as simulating the physics of the configuration exactly, without approximation; however, they may require a great deal of computing time to converge with sufficient precision. The work carried out in this thesis aims to build an efficient strategy for accelerating criticality convergence in the TRIPOLI-4® code. We wish to implement the zero-variance game, which requires computing the adjoint flux. The originality of this work is to compute the adjoint flux directly from a forward Monte Carlo simulation, without resorting to an external code, by means of the fission matrix method. This adjoint flux is then used as an importance map to bias the simulation and accelerate its convergence.
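As a rough illustration of the fission matrix idea (not TRIPOLI-4®'s actual implementation), the sketch below extracts an importance map from a small, made-up fission matrix by power iteration on its transpose; the dominant right eigenvector would give the forward fission source, while the left eigenvector plays the role of the adjoint flux.

```python
import numpy as np

def adjoint_from_fission_matrix(F, tol=1e-10, max_iter=10_000):
    """Estimate an importance (adjoint) distribution from a fission matrix.

    F[i, j] ~ expected number of next-generation fission neutrons born in
    spatial region i per fission neutron started in region j, tallied during
    an ordinary forward Monte Carlo run.  The forward source is the dominant
    right eigenvector of F; the importance map is the corresponding left
    eigenvector, obtained here by power iteration on F.T.
    """
    n = F.shape[0]
    psi = np.full(n, 1.0 / n)           # initial adjoint guess, normalized to sum 1
    k_eff = 1.0
    for _ in range(max_iter):
        nxt = F.T @ psi
        k_new = nxt.sum()               # normalization constant -> dominant eigenvalue
        nxt /= k_new
        if np.linalg.norm(nxt - psi, ord=1) < tol:
            psi, k_eff = nxt, k_new
            break
        psi, k_eff = nxt, k_new
    return k_eff, psi                   # psi can then serve as an importance map

# Toy 3-region fission matrix (illustrative numbers only).
F = np.array([[0.60, 0.20, 0.05],
              [0.25, 0.55, 0.20],
              [0.05, 0.20, 0.50]])
k, importance = adjoint_from_fission_matrix(F)
print(f"k-effective ≈ {k:.4f}, importance map: {np.round(importance, 3)}")
```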
133

Structure and Variability of the North Atlantic Meridional Overturning Circulation from Observations and Numerical Models

Shaw, Benjamin Stuard 01 January 2010 (has links)
This study presents an analysis of observed Atlantic Meridional Overturning Circulation (AMOC) variability at 26.5°N on submonthly to interannual time scales compared to variability characteristics produced by a selection of five high- and low-resolution, synoptically and climatologically forced OGCMs. The focus of the analysis is on the relative contributions of ocean mesoscale eddies and synoptic atmospheric forcing to the overall AMOC variability. Observations used in this study were collected within the framework of the joint U.K.-U.S. Rapid Climate Change (RAPID)-Meridional Overturning Circulation & Heat Flux Array (MOCHA) Program. The RAPID-MOCHA array has now been in place for nearly 6 years, of which 4 years of data (2004-2007) are analyzed in this study. At 26.5°N, the MOC strength measured by the RAPID-MOCHA array is 18.5 Sv. Overall, the models tend to produce a realistic, though slightly underestimated, MOC. With the exception of one of the high-resolution, synoptically forced models, standard deviations of model-produced MOC are lower than the observed standard deviation by 1.5 to 2 Sv. A comparison of the MOC spectra at 26.5°N shows that model variability is weaker than observed variability at periods longer than 100 days. Of the five models investigated in this study, two were selected for a more in-depth examination. One model is forced by a monthly climatology derived from 6-hourly NCEP/NCAR winds (OFES-CLIM), whereas the other is forced by NCEP/NCAR reanalysis daily winds and fluxes (OFES-NCEP). They are identically configured, presenting an opportunity to explain differences in their MOCs by their differences in forcing. Both of these models were produced by the OGCM for the Earth Simulator (OFES), operated by the Japan Agency for Marine-Earth Science & Technology (JAMSTEC). The effects of Ekman transport on the strength, variability, and meridional decorrelation scale are investigated for the OFES models. This study finds that AMOC variance due to Ekman forcing is distributed nearly evenly between the submonthly, intraseasonal, and seasonal period bands. When Ekman forcing is removed, the remaining variance is the result of geostrophic motions. In the intraseasonal period band this geostrophic AMOC variance is dominated by eddy activity, and variance in the submonthly period band is dominated by forced geostrophic motions such as Rossby and Kelvin waves. It is also found that MOC variability is coherent over a meridional distance of ~8° throughout the study region, and that this coherence scale is intrinsic to both Ekman and geostrophic motions. A Monte Carlo-style evaluation of the 27-year-long OFES-NCEP timeseries is used to investigate the ability of a four-year MOC strength timeseries to represent the characteristics of lengthier timeseries. It is found that a randomly selected four-year timeseries will fall within ~1 Sv of the true mean 95% of the time, but long-term trends cannot be accurately calculated from a four-year timeseries. Errors in the calculated trend are noticeably reduced for each additional year until the timeseries reaches ~11 years in length. For timeseries longer than 11 years, the trend's 95% confidence interval asymptotes to 2 Sv/decade.
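The subsampling experiment mentioned at the end can be sketched as follows; the timeseries here is synthetic stand-in data rather than OFES-NCEP output, so the numbers only illustrate the procedure, not the reported results.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for a 27-year daily MOC strength timeseries (Sv):
# mean 18.5 Sv with smoothed, red-noise-like variability (illustrative only).
n_days = 27 * 365
noise = rng.normal(0.0, 1.0, n_days)
moc = 18.5 + 3.0 * np.convolve(noise, np.ones(30) / 30, mode="same")

true_mean = moc.mean()
window = 4 * 365                       # length of a four-year subsample
n_draws = 10_000                       # randomly placed four-year windows
starts = rng.integers(0, n_days - window, n_draws)
sub_means = np.array([moc[s:s + window].mean() for s in starts])

err = np.abs(sub_means - true_mean)
print(f"95th percentile of |subsample mean - true mean|: {np.percentile(err, 95):.2f} Sv")
```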
134

Node Localization using Fractal Signal Preprocessing and Artificial Neural Network

Kaiser, Tashniba January 2012 (has links)
This thesis proposes an integrated artificial-neural-network-based approach to classifying the position of a wireless device in an indoor protected area. Our experiments are conducted in two different types of interference-affected indoor locations. We found that the environment greatly influences the received signal strength, and we realized the need to incorporate a complexity measure of the Wi-Fi signal as additional information in our localization algorithm. The inputs to the integrated artificial neural network consisted of an integer-dimension representation and a fractional-dimension representation of the Wi-Fi signal: the former was the raw signal strength, whereas the latter was the variance fractal dimension of the Wi-Fi signal. The results show that the proposed approach performed 8.7% better classification than the "one-dimensional input" ANN approach, achieving an 86% correct classification rate; the conventional trilateration method achieved only a 47.97% correct classification rate.
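One common way to compute a variance fractal dimension from a sampled signal is via the scaling of the variance of amplitude increments with lag; the sketch below follows that recipe on a synthetic RSSI-like trace. The lag set, the synthetic data, and the two-element feature vector are illustrative assumptions, not the thesis's exact preprocessing.

```python
import numpy as np

def variance_fractal_dimension(x, lags=(1, 2, 4, 8, 16, 32)):
    """Estimate the variance fractal dimension (VFD) of a 1-D signal.

    Uses the scaling of the variance of amplitude increments with lag,
        Var[x(t + dt) - x(t)] ~ dt**(2H),   D = 2 - H   (for a 1-D signal),
    where H is obtained from a least-squares fit in log-log space.
    """
    x = np.asarray(x, dtype=float)
    log_lag, log_var = [], []
    for lag in lags:
        inc = x[lag:] - x[:-lag]
        v = inc.var()
        if v > 0:
            log_lag.append(np.log(lag))
            log_var.append(np.log(v))
    slope, _ = np.polyfit(log_lag, log_var, 1)   # slope = 2H
    hurst = slope / 2.0
    return 2.0 - hurst

# Example: synthetic RSSI-like trace (random-walk-like, illustrative only).
rng = np.random.default_rng(1)
rssi = -60.0 + np.cumsum(rng.normal(0.0, 0.5, 2048))
features = [rssi.mean(), variance_fractal_dimension(rssi)]   # raw strength + VFD
print(np.round(features, 3))
```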
135

Local Volatility Calibration on the Foreign Currency Option Market / Kalibrering av lokal volatilitet på valutaoptionsmarknaden

Falck, Markus January 2014 (has links)
In this thesis we develop and test a new method for interpolating and extrapolating prices of European options. The theoretical base originates from the local variance gamma model developed by Carr (2008), in which the local volatility model by Dupire (1994) is combined with the variance gamma model by Madan and Seneta (1990). By solving a simplified version of the Dupire equation under the assumption of a continuous five-parameter diffusion term, we derive a parameterization defined for strikes in an interval of arbitrary size. The parameterization produces positive option prices which satisfy both conditions for absence of arbitrage in a one-maturity setting, i.e. all adjacent vertical spreads and butterfly spreads are priced non-negatively. The method is implemented and tested in the FX-option market. We suggest two sub-models, one with three and one with five degrees of freedom. By using a least-squares approach, we calibrate the two sub-models against 416 Reuters-quoted volatility smiles. Both sub-models succeed in generating prices within the bid-ask spread for all options in the sample. Compared to the three-parameter model, the model with five parameters calibrates more exactly to market-quoted mids but has a longer calibration time. The three-parameter model calibrates remarkably quickly; in a MATLAB implementation using a Levenberg-Marquardt algorithm the average calibration time is approximately 1 ms. Both sub-models produce volatility smiles which are C² and well-behaved. Further, we suggest a technique allowing for arbitrage-free interpolation of calibrated option price functions in the maturity dimension. The interpolation is performed in parameter space, where every set of parameters uniquely determines an option price function. Furthermore, we produce sufficient conditions to ensure absence of calendar spread arbitrage when calibrating the proposed model to several maturities. We use this technique to produce implied volatility surfaces which are sufficiently smooth, satisfy all conditions for absence of arbitrage and fit market-quoted volatility surfaces within the bid-ask spread. In the final chapter we use the results for producing Dupire local volatility surfaces and for pricing variance swaps.
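The two single-maturity no-arbitrage conditions mentioned above (non-negatively priced vertical spreads and butterfly spreads) amount to call prices being non-increasing and convex in strike. A minimal check of those conditions, verified here on Black-Scholes prices rather than on the thesis's parameterization, might look like this; all market parameters are made up for illustration.

```python
import numpy as np
from scipy.stats import norm

def no_static_arbitrage(strikes, calls, tol=1e-12):
    """Check single-maturity no-arbitrage conditions on a grid of call prices.

    - vertical (call) spreads: prices must be non-increasing in strike;
    - butterfly spreads: prices must be convex in strike.
    Returns True if both hold up to a numerical tolerance.
    """
    strikes = np.asarray(strikes, float)
    calls = np.asarray(calls, float)
    slopes = np.diff(calls) / np.diff(strikes)      # call-spread slopes
    vertical_ok = np.all(slopes <= tol)
    butterfly_ok = np.all(np.diff(slopes) >= -tol)  # slopes non-decreasing => convex
    return bool(vertical_ok and butterfly_ok)

# Illustrative check with Black-Scholes prices, which are arbitrage-free.
S0, r, T, sigma = 1.0, 0.0, 0.5, 0.1
K = np.linspace(0.8, 1.2, 41)
d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
d2 = d1 - sigma * np.sqrt(T)
C = S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)
print(no_static_arbitrage(K, C))   # expected: True
```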
136

An Empirical Investigation of Tukey's Honestly Significant Difference Test with Variance Heterogeneity and Equal Sample Sizes, Utilizing Box's Coefficient of Variance Variation

Strozeski, Michael W. 05 1900 (has links)
This study sought to determine boundary conditions for the robustness of the Tukey HSD statistic when the assumption of homogeneity of variance was violated. Box's coefficient of variance variation, C^2, was utilized to index the degree of variance heterogeneity. A Monte Carlo computer simulation technique was employed to generate data under controlled violation of the homogeneity of variance assumption. For each combination of sample size and number of treatment groups, an analysis of variance F-test was computed and Tukey's multiple comparison technique was calculated. When the two additional sample size cases were added to investigate large sample sizes, the Tukey test was found to be conservative when C^2 was set at zero: the actual significance level fell below the lower limit of the 95 per cent confidence interval around the 0.05 nominal significance level.
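The "confidence interval around the nominal significance level" used as the robustness criterion can be reproduced with a simple binomial approximation; the number of simulated experiments below is an assumed figure for illustration, since the abstract does not state it.

```python
import numpy as np

def nominal_alpha_band(nominal=0.05, n_experiments=4000, z=1.96):
    """95% sampling band for an empirical rejection rate around a nominal alpha,
    treating each simulated experiment as an independent Bernoulli trial.
    n_experiments is an assumed value, not taken from the abstract."""
    half = z * np.sqrt(nominal * (1 - nominal) / n_experiments)
    return nominal - half, nominal + half

lo, hi = nominal_alpha_band()
print(f"Empirical alphas below {lo:.4f} read as conservative, above {hi:.4f} as liberal.")
```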
137

Calcul parallèle pour les problèmes linéaires, non-linéaires et linéaires inverses en finance / Parallel computing for linear, nonlinear and linear inverse problems in finance

Abbas-Turki, Lokman 21 September 2012 (has links)
Handling multidimensional parabolic linear, nonlinear, and linear inverse problems is the main objective of this work. The multidimensional character of these problems makes the use of Monte Carlo simulation methods virtually inevitable, and it also makes parallel architectures necessary: problems dealing with a large number of assets are major resource consumers, and only parallelization can reduce their execution times. Consequently, the first goal of our work is to propose random number generators appropriate for parallel and massively parallel architectures implemented on CPU/GPU clusters; we quantify the speedup and the energy consumption of the parallel execution of European option pricing. The second objective is to reformulate the nonlinear problem of pricing American options so as to obtain parallelization gains similar to those achieved for linear problems. Beyond its suitability for parallelization, the proposed method, based on Malliavin calculus, has other practical advantages. Continuing with parallel algorithms, the last part of this work is dedicated to the uniqueness of the solution of certain linear inverse problems in finance; this theoretical study enables the use of simple methods based on Monte Carlo.
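A CPU-only sketch of the idea of giving each parallel worker its own independent random stream, here with NumPy's counter-based Philox generator and a plain Black-Scholes European call. The thesis targets CPU/GPU clusters, so this only illustrates the stream-splitting pattern, and all market parameters and worker counts are made up.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

S0, K, r, sigma, T = 100.0, 100.0, 0.02, 0.2, 1.0   # illustrative contract

def price_chunk(seed_seq, n_paths):
    """Price a European call on one worker with an independent RNG stream."""
    rng = np.random.Generator(np.random.Philox(seed_seq))   # counter-based RNG
    z = rng.standard_normal(n_paths)
    st = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
    return np.exp(-r * T) * np.maximum(st - K, 0.0).mean()

if __name__ == "__main__":
    n_workers, n_paths = 8, 1_000_000
    # SeedSequence.spawn yields statistically independent child streams.
    children = np.random.SeedSequence(2024).spawn(n_workers)
    with ProcessPoolExecutor(n_workers) as pool:
        estimates = list(pool.map(price_chunk, children, [n_paths] * n_workers))
    print(f"European call price ≈ {np.mean(estimates):.4f}")
```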
138

Revealed Preferences for Portfolio Selection–Does Skewness Matter?

Liechty, Merrill W., Sağlam, Ümit 16 August 2017 (has links)
In this article, we consider the portfolio selection problem as a Bayesian decision problem. We compare the traditional mean–variance and mean–variance–skewness efficient portfolios. We develop a bi-level programming problem to investigate the market's preference for risk by using observed (market) weights. Numerical experiments are conducted on a portfolio formed from the 30 stocks in the Dow Jones Industrial Average. Numerical results show that the market's preferences are better explained when skewness is included.
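To make the contrast concrete, here is a small sketch of a mean-variance-skewness portfolio objective optimized over simulated returns; the utility form, the risk-aversion parameters, and the synthetic return data are illustrative assumptions and do not reproduce the paper's Bayesian bi-level formulation or its Dow Jones data.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
# Synthetic daily returns for 5 assets; one asset gets a positively skewed component.
R = rng.normal(0.0004, 0.01, size=(5000, 5))
R[:, 1] += rng.gamma(2.0, 0.004, 5000) - 0.008      # centered, skewed shock

def neg_utility(w, lam=4.0, gam=10.0):
    """Negative mean-variance-skewness utility of the portfolio return R @ w."""
    p = R @ w
    mu, var = p.mean(), p.var()
    skew = ((p - mu) ** 3).mean()                    # third central moment
    return -(mu - lam / 2.0 * var + gam / 3.0 * skew)

n = R.shape[1]
constraints = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)   # fully invested
bounds = [(0.0, 1.0)] * n                                          # long-only
res = minimize(neg_utility, np.full(n, 1.0 / n), bounds=bounds,
               constraints=constraints)
print("mean-variance-skewness weights:", np.round(res.x, 3))
```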
139

Asymptotic approaches in financial risk management / Approches asymptotiques en gestion des risques financiers

Genin, Adrien 21 September 2018 (has links)
This thesis addresses three problems from the area of financial risk management, using various asymptotic approaches. The first part presents an importance sampling algorithm for the Monte Carlo pricing of Asian and other exotic options in exponential Lévy models; the optimal importance sampling measure is obtained using techniques from the theory of large deviations. The second part uses the Laplace method to study the asymptotic behavior of the sum of n dependent positive random variables following a log-normal mixture distribution, with applications to portfolio risk management. Finally, the last part employs the notion of multivariate regular variation to analyze the tail behavior of a random vector with heavy-tailed components whose dependence structure is modeled by a Gaussian copula. As an application, we consider the asymptotic behavior of a portfolio of options in the Black-Scholes model.
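For flavor, a textbook drift-shift importance sampling estimator for a deep out-of-the-money call under Black-Scholes is sketched below. The thesis derives the optimal measure via large deviations for exponential Lévy models, so this Gaussian example, its drift choice, and its parameters are only an assumed stand-in for the general idea.

```python
import numpy as np

rng = np.random.default_rng(3)
S0, K, r, sigma, T = 100.0, 160.0, 0.0, 0.2, 1.0   # deep OTM call (illustrative)
n = 100_000

def payoff(z):
    st = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
    return np.exp(-r * T) * np.maximum(st - K, 0.0)

# Plain Monte Carlo.
plain = payoff(rng.standard_normal(n))

# Importance sampling: shift the Gaussian drift by theta so paths reach the strike
# region, and reweight with the likelihood ratio exp(-theta*z + theta**2/2).
theta = np.log(K / S0) / (sigma * np.sqrt(T))
z_shift = rng.standard_normal(n) + theta
weights = np.exp(-theta * z_shift + 0.5 * theta**2)
tilted = payoff(z_shift) * weights

for name, est in [("plain MC", plain), ("importance sampling", tilted)]:
    half_width = 1.96 * est.std(ddof=1) / np.sqrt(n)
    print(f"{name}: {est.mean():.4f} ± {half_width:.4f}")
```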
140

Asynchronous optimization for machine learning / Optimisation asynchrone pour l'apprentissage statistique

Leblond, Rémi 15 November 2018 (has links)
The impressive breakthroughs of the last two decades in the field of machine learning can be attributed in large part to the explosion of computing power and available data. These two limiting factors have been replaced by a new bottleneck: algorithms. The focus of this thesis is thus on introducing novel methods that can take advantage of large data quantities and computing power. We present two independent contributions. First, we develop and analyze novel fast optimization algorithms which take advantage of the advances in parallel computing architecture and can handle vast amounts of data. We introduce a new framework of analysis for asynchronous parallel incremental algorithms, which enables correct and simple proofs. We then demonstrate its usefulness by performing the convergence analysis for several methods, including two novel algorithms. Asaga is a sparse asynchronous parallel variant of the variance-reduced algorithm Saga which enjoys fast linear convergence rates on smooth and strongly convex objectives. We prove that it can be linearly faster than its sequential counterpart, even without sparsity assumptions. ProxAsaga is an extension of Asaga to the more general setting where the regularizer can be non-smooth; we prove that it can also achieve a linear speedup. We provide extensive experiments comparing our new algorithms to the current state of the art. Second, we introduce new methods for complex structured prediction tasks. We focus on recurrent neural networks (RNNs), whose traditional training algorithm, based on maximum likelihood estimation (MLE), suffers from several issues. The associated surrogate training loss notably ignores the information contained in structured losses and introduces discrepancies between train and test times that may hurt performance. To alleviate these problems, we propose SeaRNN, a novel training algorithm for RNNs inspired by the "learning to search" approach to structured prediction. SeaRNN leverages test-alike search space exploration to introduce global-local losses that are closer to the test error than the MLE objective. We demonstrate improved performance over MLE on three challenging tasks, and provide several subsampling strategies to enable SeaRNN to scale to large-scale tasks, such as machine translation. Finally, after contrasting the behavior of SeaRNN models to MLE models, we conduct an in-depth comparison of our new approach to the related work.
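To give a sense of the variance-reduced update that Asaga parallelizes, here is a minimal sequential Saga sketch on a least-squares objective; the step-size rule, the synthetic data, and the least-squares choice are assumptions for illustration, and no asynchrony is shown.

```python
import numpy as np

def saga_least_squares(A, b, step=None, epochs=20, seed=0):
    """Minimal sequential Saga sketch for (1/n) * sum_i (a_i @ x - b_i)^2 / 2.

    Keeps a table of the last gradient seen for every sample and uses
    grad_j(x) - table[j] + mean(table) as the variance-reduced update direction.
    (Asaga, discussed above, runs such updates asynchronously across cores.)
    """
    n, d = A.shape
    rng = np.random.default_rng(seed)
    if step is None:
        step = 1.0 / (3 * np.max(np.sum(A * A, axis=1)))   # ~1/(3 * L_max), assumed rule
    x = np.zeros(d)
    table = (A @ x - b)[:, None] * A        # per-sample gradients at the starting point
    avg = table.mean(axis=0)
    for _ in range(epochs * n):
        j = rng.integers(n)
        g_new = (A[j] @ x - b[j]) * A[j]
        x -= step * (g_new - table[j] + avg)
        avg += (g_new - table[j]) / n       # keep the running mean consistent
        table[j] = g_new
    return x

# Tiny illustration on synthetic data.
rng = np.random.default_rng(1)
A = rng.normal(size=(200, 5))
x_true = rng.normal(size=5)
b = A @ x_true + 0.01 * rng.normal(size=200)
print("error vs. ground truth:", np.round(saga_least_squares(A, b) - x_true, 3))
```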
