351

Black-Litterman Model: Practical Asset Allocation Model Beyond Traditional Mean-Variance

Abdumuminov, Shuhrat, Esteky, David Emanuel January 2016 (has links)
This paper consolidates and compares the applicability and practicality of the Black-Litterman model and the traditional Markowitz mean-variance model. Although the mean-variance model is academically sound and well known, it is rarely used by asset managers because of its deficiencies. To put the discussion into context, we highlight the improvements made by Fischer Black and Robert Litterman by testing the performance and practicality of both models. We present detailed mathematical derivations of how the models are constructed, to clarify the intuition behind them. We generate two portfolios from data on 10 Swedish equities over a 10-year period, using the 30-day Swedish Treasury Bill as the risk-free rate. The resulting portfolios orient our discussion toward a comparison of the performance and applicability of the two models, whose differences we illustrate both theoretically and geometrically. Finally, based on the performance results of both models, we demonstrate the superiority and practicality of the Black-Litterman model, which in our particular case outperforms the traditional mean-variance model.
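To make the comparison concrete, here is a minimal Black-Litterman sketch in Python. Every input (the covariance matrix, market weights, the single view, and the delta and tau parameters) is a hypothetical placeholder rather than the thesis data; the point is the posterior-return computation that distinguishes the model from plain mean-variance.

```python
import numpy as np

# Illustrative inputs (hypothetical, not the thesis data): 3 assets.
Sigma = np.array([[0.040, 0.012, 0.010],
                  [0.012, 0.030, 0.008],
                  [0.010, 0.008, 0.025]])   # covariance of excess returns
w_mkt = np.array([0.5, 0.3, 0.2])           # market-cap weights
delta, tau = 2.5, 0.05                      # risk aversion, uncertainty scaling

# Step 1: reverse-optimize the equilibrium (implied) excess returns.
pi = delta * Sigma @ w_mkt

# Step 2: encode one investor view: asset 1 outperforms asset 2 by 2%.
P = np.array([[1.0, -1.0, 0.0]])
q = np.array([0.02])
Omega = np.diag(np.diag(P @ (tau * Sigma) @ P.T))  # view uncertainty

# Step 3: Black-Litterman posterior mean of expected returns.
inv_tS = np.linalg.inv(tau * Sigma)
inv_Om = np.linalg.inv(Omega)
mu_bl = np.linalg.solve(inv_tS + P.T @ inv_Om @ P,
                        inv_tS @ pi + P.T @ inv_Om @ q)

# Step 4: unconstrained mean-variance weights from the posterior returns.
w_bl = np.linalg.solve(delta * Sigma, mu_bl)
print(pi, mu_bl, w_bl)
```

The posterior mean blends the equilibrium returns with the views in proportion to their precisions; with no views, the weights collapse back toward the market portfolio, which is what makes the model more stable in practice than raw mean-variance.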
352

Analysis of Nanopore Detector Measurements using Machine Learning Methods, with Application to Single-Molecule Kinetics

Landry, Matthew 18 May 2007 (has links)
At its core, a nanopore detector has a nanometer-scale biological membrane across which a voltage is applied. The voltage draws a DNA molecule into an α-hemolysin channel in the membrane. Consequently, a distinctive channel current blockade signal is created as the molecule flexes and interacts with the channel; this flexing is characterized by different blockade levels in the channel current signal. Previous experiments have shown that a nanopore detector is sufficiently sensitive that nearly identical DNA molecules can be classified successfully using machine learning techniques such as Hidden Markov Models and Support Vector Machines in a channel-current-based signal analysis platform [4-9]. In this paper, methods for improving feature extraction are presented, both to improve classification and to provide biologists and chemists with a better understanding of the physical properties of a given molecule.
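To illustrate the classification stage described above, here is a minimal SVM sketch in Python. The feature scheme (blockade-level occupancy fractions plus residual-current statistics) is a hypothetical stand-in for the paper's extraction methods, and the data are synthetic.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

def extract_features(signal, open_current):
    """Crude feature vector for one blockade event (hypothetical scheme):
    residual-current statistics plus level-occupancy fractions."""
    r = signal / open_current                 # fraction of open-channel current
    hist, _ = np.histogram(r, bins=5, range=(0.0, 1.0))
    occupancy = hist / hist.sum()             # time spent near each blockade level
    return np.concatenate(([r.mean(), r.std()], occupancy))

# Synthetic stand-in data: two molecule classes with shifted blockade levels.
rng = np.random.default_rng(0)
X = np.array([extract_features(rng.normal(0.3 + 0.1 * (i % 2), 0.05, 1000), 1.0)
              for i in range(200)])
y = np.array([i % 2 for i in range(200)])

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
print(cross_val_score(clf, X, y, cv=5).mean())
```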
353

Utilisation d'une assimilation d'ensemble pour modéliser des covariances d'erreur d'ébauche dépendantes de la situation météorologique à échelle convective / Use of an ensemble data assimilation to model flow-dependent background error covariances at convective scale

Ménétrier, Benjamin 03 July 2014 (has links)
Data assimilation aims at providing numerical weather prediction models with an initial state of the atmosphere that is as accurate as possible. It relies on two main sources of information: observations and a recent forecast called the "background", both affected by errors. The distribution of these errors assigns a relative weight to each source of information according to the confidence it deserves, so a precise estimate of the background error covariances is crucial. Monte-Carlo methods, which sample these covariances from an ensemble of perturbed forecasts, are currently considered the most effective. However, their considerable computational cost limits the ensemble size, so the estimated covariances are contaminated by sampling noise that must be filtered before any further use. This thesis proposes methods to filter the sampling noise in the background error covariances of the convective-scale model AROME of Météo-France. The first goal was to document the structure of background error covariances for AROME. A large ensemble data assimilation was set up for this purpose; it allowed a fine characterization of the highly heterogeneous and anisotropic nature of these covariances, which are strongly influenced by the topography, the density of assimilated observations, the coupling model, and the atmospheric dynamics. Comparing the covariances estimated from two independent ensembles of very different sizes provided a description and quantification of the sampling noise. To damp this noise, two methods have historically been developed separately: spatial filtering of variances and localization of covariances. We show in this thesis that both can be understood as direct applications of the linear filtering of covariances. The existence of optimality criteria specific to the linear filtering of covariances is demonstrated in the second part of this work. These criteria have the advantage of involving only quantities that can be robustly estimated from the ensemble; they are fully general, and the ergodicity assumption needed for their estimation is required only in the last step. They yield objective algorithms for variance filtering and covariance localization. After a first conclusive test in an idealized framework, these new methods were evaluated on the AROME ensemble. The optimality criteria for homogeneous variance filtering gave very good results, particularly the criterion taking the non-Gaussianity of the ensemble into account. Transposing these criteria to heterogeneous filtering slightly improved performance, though at a higher computational cost. An extension of the method was then proposed for the components of the local correlation Hessian tensor. Finally, horizontal and vertical localization functions were diagnosed from the ensemble alone; they showed consistent variations with the variable and level concerned and with the ensemble size.
In a final part, we evaluated the influence of using heterogeneous variances in the background error covariance model of AROME, both on the structure of the modelled covariances and on forecast scores. The lack of realism of the modelled covariances and the absence of a positive impact on forecasts raise questions about such an approach. The filtering methods developed in this thesis could nevertheless lead to other fruitful applications within hybrid EnVar approaches, a promising avenue as available computing power grows.
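To make the localization idea concrete, here is a minimal 1-D sketch in Python (synthetic covariance and hypothetical length scales, not the AROME diagnostics): the sample covariance from a small ensemble is tapered by a Schur (element-wise) product with a Gaspari-Cohn localization matrix, which damps the spurious long-range covariances produced by sampling noise.

```python
import numpy as np

def gaspari_cohn(r):
    """Gaspari-Cohn fifth-order piecewise-rational localization function,
    with r = distance / half-width (compact support: zero for r >= 2)."""
    r = np.abs(r)
    f = np.zeros_like(r)
    m1 = r <= 1.0
    m2 = (r > 1.0) & (r < 2.0)
    x = r[m1]
    f[m1] = 1 - 5/3*x**2 + 5/8*x**3 + 1/2*x**4 - 1/4*x**5
    x = r[m2]
    f[m2] = 4 - 5*x + 5/3*x**2 + 5/8*x**3 - 1/2*x**4 + 1/12*x**5 - 2/(3*x)
    return f

# Synthetic 1-D problem: true covariance with known correlation length.
n, n_ens, L = 100, 20, 10.0
x = np.arange(n)
dist = np.abs(x[:, None] - x[None, :])
B_true = np.exp(-0.5 * (dist / L) ** 2)

# Small ensemble -> sample covariance contaminated by sampling noise.
rng = np.random.default_rng(1)
ens = rng.multivariate_normal(np.zeros(n), B_true, size=n_ens)
B_hat = np.cov(ens, rowvar=False)

# Schur product with the localization matrix filters the noise.
B_loc = B_hat * gaspari_cohn(dist / (2 * L))
print(np.abs(B_hat - B_true).mean(), np.abs(B_loc - B_true).mean())
```

Because the Gaspari-Cohn taper is itself a valid correlation matrix, the Schur product preserves positive semi-definiteness of the localized covariance.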
354

Optimalizácia investičných rozhodnutí v medzinárodnom prostredí / Optimization of investment decisions in international trade

Gondeková, Tatiana January 2009 (has links)
This thesis studies portfolio optimization with integer variables that influence the optimal allocation of assets, in both domestic and international settings. It begins with basic terms: the background of assets and portfolios, the incentives for building a portfolio, fields of application, and portfolio management. It then introduces the characteristics of assets and portfolios (expected return, risk, liquidity) that investors use to value their holdings. Next, mean-risk models are derived for several risk measures (variance, Value at Risk, Conditional Value at Risk) and prepared for practical application; a sketch of the CVaR formulation follows below. Heuristics implemented in Matlab and standard algorithms of the GAMS software are used to solve the portfolio optimization problems. Finally, the optimization methods are applied to real financial data and the outputs are compared.
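As a sketch of the CVaR model mentioned above (in Python rather than Matlab/GAMS, on synthetic scenarios rather than the thesis data), the Rockafellar-Uryasev formulation turns CVaR minimization into a linear program:

```python
import numpy as np
from scipy.optimize import linprog

# Min-CVaR portfolio via the Rockafellar-Uryasev LP. Decision variables:
# weights w (n), the VaR level alpha (1), scenario shortfalls u (S).
rng = np.random.default_rng(2)
n, S, beta = 4, 500, 0.95
R = rng.normal(0.001, 0.02, size=(S, n))      # scenario returns (synthetic)

# Objective: alpha + 1/((1-beta)*S) * sum(u)
c = np.concatenate([np.zeros(n), [1.0], np.full(S, 1.0 / ((1 - beta) * S))])

# Shortfall constraints: u_s >= loss_s - alpha, with loss_s = -R_s . w,
# written as  -R w - alpha - u <= 0.
A_ub = np.hstack([-R, -np.ones((S, 1)), -np.eye(S)])
b_ub = np.zeros(S)

# Budget constraint sum(w) = 1; long-only weights, free alpha, u >= 0.
A_eq = np.concatenate([np.ones(n), [0.0], np.zeros(S)]).reshape(1, -1)
b_eq = np.array([1.0])
bounds = [(0, None)] * n + [(None, None)] + [(0, None)] * S

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
w, var_level = res.x[:n], res.x[n]
print(w, var_level, res.fun)   # weights, VaR, minimized CVaR
```

Integer restrictions of the kind studied in the thesis (cardinality limits, minimum lot sizes) would turn this LP into a mixed-integer program, which is where heuristics and solvers such as GAMS come in.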
355

Metoda převažování (kalibrace) ve výběrových šetřeních / The method of re-weighting (calibration) in survey sampling

Michálková, Anna January 2019 (has links)
In this thesis, we study re-weighting when estimating totals in survey sampling. The purpose of re-weighting is to adjust the structure of the sample so that it complies with the structure of the population (with respect to given auxiliary variables). We summarize known results for methods of the traditional design-based approach, with more attention given to the model-based approach. We generalize known asymptotic results in the model-based theory to a wider class of weighted estimators. Further, we propose a consistent estimator of the asymptotic variance that takes into account the weights used in the estimator of the total, in contrast to the usually recommended variance estimators derived from the design-based approach. Moreover, the estimator is robust against particular model misspecifications. In a simulation study, we investigate how the proposed estimator behaves in comparison with the variance estimators usually recommended in the literature or used in practice.
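As a sketch of the re-weighting idea, here is linear calibration (GREG-type weights under a chi-square distance) in Python; this is the classical construction, not the thesis's own estimators, and the data are synthetic.

```python
import numpy as np

def linear_calibration(d, X, t_x):
    """Adjust design weights d minimally (chi-square distance) so the
    weighted auxiliary totals X'w match the known population totals t_x."""
    D = d[:, None] * X                        # rows d_i * x_i
    lam = np.linalg.solve(X.T @ D, t_x - d @ X)
    return d * (1.0 + X @ lam)                # calibrated weights

# Synthetic example: population of 10,000, simple random sample of 400.
rng = np.random.default_rng(3)
N, n = 10_000, 400
x_pop = np.column_stack([np.ones(N), rng.gamma(2.0, 5.0, N)])
y_pop = 3.0 * x_pop[:, 1] + rng.normal(0, 5, N)
idx = rng.choice(N, n, replace=False)

d = np.full(n, N / n)                         # SRS design weights
w = linear_calibration(d, x_pop[idx], x_pop.sum(axis=0))

print(x_pop[idx].T @ w, x_pop.sum(axis=0))    # calibrated totals match exactly
print(d @ y_pop[idx], w @ y_pop[idx], y_pop.sum())  # HT vs calibrated estimate
```

The calibrated estimator of the total is simply the weighted sum of the study variable with the adjusted weights; the thesis's point is that its variance should be estimated in a way that accounts for those weights.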
356

Rare Events Simulations with Applications to the Performance Evaluation of Wireless Communication Systems

Ben Rached, Nadhir 08 October 2018 (has links)
The probability that a sum of random variables (RVs) exceeds (or falls below) a given threshold is often encountered in the performance analysis of wireless communication systems. Generally, a closed-form expression for the distribution of the sum does not exist, and naive Monte Carlo (MC) simulation is computationally expensive when dealing with rare events. An alternative is offered by variance reduction techniques, known for achieving the same accuracy with fewer computations. For the right-tail region, we develop a unified hazard rate twisting importance sampling (IS) technique that has the advantage of being logarithmically efficient for arbitrary distributions under the independence assumption. We then improve this technique by applying the twisting only to the components with the greatest impact on the probability of interest. Another challenging problem arises when the components are correlated and Log-normally distributed. In this setting, we develop a generalized hybrid IS scheme based on mean shifting and covariance matrix scaling, and we prove that logarithmic efficiency again holds for two particular instances. We also propose two unified IS approaches to estimate the left tail of sums of independent positive RVs. The first applies to arbitrary distributions and enjoys logarithmic efficiency, whereas the second satisfies the bounded relative error criterion under a mild assumption but applies only to independent and identically distributed RVs. The left tail of correlated Log-normal variates is also considered: we construct an estimator combining an existing mean shifting IS approach with a control variate technique and prove that it possesses the asymptotically vanishing relative error property. A further interesting problem is the left-tail estimation of sums of ordered RVs. Two estimators are presented: the first is based on IS and achieves bounded relative error under a mild assumption; the second is based on a conditional MC approach and achieves the bounded relative error property in the Generalized Gamma case and logarithmic efficiency in the Log-normal case.
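To convey the twisting idea in the simplest setting, here is a Python sketch of exponential-twisting IS for the right tail of a sum of i.i.d. exponentials. The thesis develops hazard rate twisting for arbitrary distributions; this is the classical special case in which the twisted density is again exponential, so it stays self-contained.

```python
import numpy as np

# Estimate P(S_n > gamma) for S_n a sum of n i.i.d. Exp(lam) variables,
# by sampling from the exponentially twisted density Exp(lam - theta).
rng = np.random.default_rng(4)
n, lam, gamma, M = 10, 1.0, 40.0, 100_000

# Twist parameter chosen so the twisted mean of S_n equals gamma.
theta = lam - n / gamma                       # requires gamma > n / lam

S = rng.exponential(1.0 / (lam - theta), size=(M, n)).sum(axis=1)
# Likelihood ratio of the original density w.r.t. the twisted one.
lr = (lam / (lam - theta)) ** n * np.exp(-theta * S)
est = np.where(S > gamma, lr, 0.0)

p_hat = est.mean()
rel_err = est.std(ddof=1) / np.sqrt(M) / p_hat
print(p_hat, rel_err)   # naive MC would need on the order of 1/p_hat samples
```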
357

Quadratic Criteria for Optimal Martingale Measures in Incomplete Markets

McWalter, Thomas Andrew 22 February 2007 (has links)
Student Number : 8804388Y - MSc Dissertation - School of Computational and Applied Mathematics - Faculty of Science / This dissertation considers the pricing and hedging of contingent claims in a general semimartingale market. Initially the focus is on a complete market, where it is possible to price uniquely and hedge perfectly. In this context the two fundamental theorems of asset pricing are explored. The market is then extended to incorporate risk that cannot be hedged fully, thereby making it incomplete. Using quadratic cost criteria, optimal hedging approaches are investigated, leading to the derivations of the minimal martingale measure and the variance-optimal martingale measure. These quadratic approaches are then applied to the problem of minimizing the basis risk that arises when an option on a non-traded asset is hedged with a correlated asset. Closed-form solutions based on the Black-Scholes equation are derived and numerical results are compared with those resulting from a utility maximization approach, with encouraging results.
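For the basis-risk problem described above, a small Python sketch of the variance-minimizing position: an option written on a non-traded asset U is hedged with a correlated traded asset S by projecting the claim's exposure to U onto S. The numbers are illustrative, the delta here is a plain Black-Scholes delta, and the drift adjustment that minimal-martingale-measure pricing applies to U is deliberately ignored to keep the sketch short.

```python
import numpy as np
from scipy.stats import norm

def bs_call_delta(U, K, r, sigma, T):
    """Black-Scholes delta of a European call (used as a stand-in for the
    derivative of the claim's value with respect to the non-traded asset)."""
    d1 = (np.log(U / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    return norm.cdf(d1)

U, S = 100.0, 50.0            # non-traded and traded asset levels
K, r, T = 100.0, 0.05, 1.0
sigma_U, sigma_S, rho = 0.30, 0.25, 0.85

delta_U = bs_call_delta(U, K, r, sigma_U, T)
# Locally risk-minimizing units of S: the option delta scaled by the
# regression coefficient of dU on dS, rho * sigma_U * U / (sigma_S * S).
hedge_in_S = rho * (sigma_U * U) / (sigma_S * S) * delta_U
print(delta_U, hedge_in_S)
```

The unhedgeable residual, proportional to sqrt(1 - rho**2), is the basis risk that the quadratic criteria in the dissertation are designed to minimize.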
358

Options exotiques dans les modèles exponentiels de Lévy / Exotic options under exponential Lévy model

Dia, El Hadj Aly 01 July 2010 (has links)
Pricing continuous exotic options "exactly" is very difficult (sometimes impossible) in exponential Lévy models. For lookback options and digital barriers, under the assumption that the jumps of the underlying asset are all negative, semi-closed formulas are available. In general, however, one must resort to techniques that approximate the prices of these derivatives, which introduces errors. We study the asymptotic behavior of these errors. In some cases the errors can be corrected so as to obtain faster convergence to the "exact" value. We also propose methods to evaluate the prices of exotic options by Monte Carlo techniques.
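As an illustration of the Monte Carlo side, here is a Python sketch pricing a floating-strike lookback put under a Merton jump-diffusion (one exponential Lévy model) with discrete monitoring. The parameters are placeholders; discretizing the running maximum biases the price, which is exactly the kind of approximation error whose asymptotics the thesis analyzes.

```python
import numpy as np

rng = np.random.default_rng(5)
S0, r, sigma, T = 100.0, 0.05, 0.2, 1.0
lam, mu_J, sig_J = 0.5, -0.10, 0.15          # jump intensity and log-jump law
n_steps, n_paths = 252, 50_000
dt = T / n_steps

# Risk-neutral drift with jump compensation: E[e^J] - 1 per jump.
kappa = np.exp(mu_J + 0.5 * sig_J**2) - 1.0
drift = (r - 0.5 * sigma**2 - lam * kappa) * dt

Z = rng.standard_normal((n_paths, n_steps))
N = rng.poisson(lam * dt, (n_paths, n_steps))
# Sum of N normal log-jumps per step is N(N*mu_J, N*sig_J^2).
J = mu_J * N + sig_J * np.sqrt(N) * rng.standard_normal((n_paths, n_steps))

logS = np.log(S0) + np.cumsum(drift + sigma * np.sqrt(dt) * Z + J, axis=1)
S = np.exp(logS)
S_max = np.maximum(S0, S.max(axis=1))        # discretely monitored maximum

payoff = S_max - S[:, -1]                    # floating-strike lookback put
price = np.exp(-r * T) * payoff.mean()
stderr = np.exp(-r * T) * payoff.std(ddof=1) / np.sqrt(n_paths)
print(price, stderr)
```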
359

Utilização de técnicas multivariadas na avaliação da divergência genética de populações de girassol (Helianthus annuus L.) / Use of multivariate techniques in the evaluation of genetic divergence among sunflower (Helianthus annuus L.) populations

Messetti, Ana Vergínia Libos, 1964- January 2007 (has links)
Advisor: Carlos Roberto Padovani / Committee: Adriano Wagner Ballarin / Committee: José Carlos Martinez / Committee: Jacinta Ludovico Zamboti / Committee: Marie Oshiiwa / Abstract: The objective of this work was to evaluate the genetic divergence of 12 sunflower populations from the EMBRAPA/Londrina Soybean Germplasm Bank using multivariate techniques, to present recent and interesting topics in multivariate methods that are rarely explored in plant-breeding research, and to guide the choice of populations for crosses in sunflower breeding programs. The experiment used a randomized complete block design with 12 sunflower varieties evaluated on five morpho-agronomic traits. Univariate analysis showed a significant difference (p < 0.05) among treatments for all traits. Principal component analysis allowed a two-dimensional reduction explaining 82.5% of the total variation; the number of components was chosen by the Kaiser criterion and the scree test. The genetic divergence displayed by the scores of the first two canonical variables revealed genetically distinct groups, and both techniques gave concordant results. Based on Mahalanobis and Euclidean distance estimates, cluster analysis was carried out with three hierarchical algorithms. The number of groups was determined from the dendrogram, the fusion-level analysis, and the similarity-behavior analysis; validation used the Wilks criterion within each group, and multivariate graphics aided interpretation. The results establish genetic divergence, with four genetically distinct groups characterized by their mean scores. / Doctorate
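As a sketch of this multivariate workflow in Python (synthetic stand-in data for the 12 populations by 5 traits, not the germplasm measurements): principal components retained by the Kaiser criterion, then hierarchical clustering on Mahalanobis distances.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(6)
# 12 populations x 5 traits, with 4 underlying groups of 3 populations.
X = rng.normal(size=(12, 5)) + np.repeat(rng.normal(0, 2, (4, 5)), 3, axis=0)

# PCA on the correlation matrix (traits standardized).
Xs = (X - X.mean(0)) / X.std(0, ddof=1)
eigval, eigvec = np.linalg.eigh(np.corrcoef(Xs, rowvar=False))
order = np.argsort(eigval)[::-1]
eigval, eigvec = eigval[order], eigvec[:, order]
k = int((eigval > 1.0).sum())                # Kaiser criterion: eigenvalues > 1
scores = Xs @ eigvec[:, :k]
print(k, eigval / eigval.sum())              # retained PCs, explained variance

# Mahalanobis distances between populations, then UPGMA clustering.
VI = np.linalg.inv(np.cov(X, rowvar=False))
D = pdist(X, metric="mahalanobis", VI=VI)
groups = fcluster(linkage(D, method="average"), t=4, criterion="maxclust")
print(groups)                                # cluster label per population
```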
360

Discounting the role of causal attributions in the ANOVA model of attribution

Unknown Date (has links)
For years attribution research has been dominated by the ANOVA model of behavior, which proposes that people construct dispositional attributions of others by carefully comparing and weighing all situational information, using mental computations similar to the processes researchers use to analyze data. A preliminary experiment established that participants could distinguish differences in variability assessed across persons (high vs. low consensus) and across situations (high vs. low distinctiveness), and that they could evaluate varying levels of situational constraint. A primary experiment administered immediately afterwards showed that participants grossly under-utilized those same variables when making dispositional attributions. The results argue against the traditional ANOVA models and support the Behavior Averaging Principle of Attribution. / by Kori A. Hakala. / Thesis (M.A.)--Florida Atlantic University, 2008. / Includes bibliography. / Electronic reproduction. Boca Raton, Fla., 2008. Mode of access: World Wide Web.
