  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
41

IRRBB in a Low Interest Rate Environment

Berg, Simon, Elfström, Victor January 2020 (has links)
Financial institutions are exposed to several types of risk. One risk that can have a significant impact is the interest rate risk in the banking book (IRRBB). In 2018, the European Banking Authority (EBA) released a regulation on IRRBB to ensure that institutions make adequate risk calculations. This thesis proposes an IRRBB model that follows the EBA's regulations. Among other things, the framework contains a deterministic stress test of the risk-free yield curve; in addition, two types of stochastic stress tests of the yield curve were performed. The results show that the deterministic stress tests give the highest risk, but that their outcomes are considered less likely to occur than the outcomes generated by the stochastic models. It is also demonstrated that the EBA's proposed stress model could be better adapted to the current low interest rate environment. Furthermore, the thesis discusses the need for a more standardized framework to clarify, both for the institutions themselves and for the supervisory authorities, the risks to which institutions are exposed.
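The deterministic shock described in the abstract can be illustrated with a minimal sketch: a parallel shift of a risk-free curve and the resulting change in economic value of a small banking book. The cash flows, curve levels, and helper names here are hypothetical, not the thesis's actual model; the +200 bp shift merely echoes the style of the EBA's prescribed parallel-shock scenario.

```python
import math

def pv(cashflows, zero_rates):
    """Present value of (time, amount) cashflows under continuously
    compounded zero rates (one rate per cashflow, in decimals)."""
    return sum(a * math.exp(-r * t) for (t, a), r in zip(cashflows, zero_rates))

def delta_eve(cashflows, base_curve, shift):
    """Change in economic value under a parallel shift of the curve."""
    shifted = [r + shift for r in base_curve]
    return pv(cashflows, shifted) - pv(cashflows, base_curve)

# Hypothetical banking-book net cashflows: assets minus liabilities.
cfs = [(1.0, 100.0), (5.0, 250.0), (10.0, -180.0)]
base = [0.00, 0.005, 0.01]  # a low-rate environment

# A +200 bp parallel shock, in the spirit of the EBA scenarios.
d = delta_eve(cfs, base, 0.02)
```

Because the (hypothetical) liability cashflow is the longest-dated, the shock here increases economic value; real banking books can of course move either way.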
42

A Runtime Framework for Regular and Irregular Message-Driven Parallel Applications on GPU Systems

Rengasamy, Vasudevan January 2014 (has links) (PDF)
The effective use of GPUs for accelerating applications depends on a number of factors, including effective asynchronous use of heterogeneous resources, reducing data transfer between CPU and GPU, increasing the occupancy of GPU kernels, overlapping data transfers with computation, reducing GPU idling, and kernel optimizations. Overcoming these challenges requires considerable effort from application developers, and most optimization strategies are proposed and tuned for individual applications. Message-driven execution with over-decomposition of tasks is an important model for parallel programming, providing benefits that include communication-computation overlap and reduced idling of resources. Charm++ is one such message-driven language; it employs over-decomposition of tasks, computation-communication overlap, and a measurement-based load balancer to achieve high CPU utilization. This research has developed an adaptive runtime framework for the efficient execution of Charm++ message-driven parallel applications on GPU systems. In the first part of our research, we developed a runtime framework, G-Charm, focused primarily on optimizing regular applications. At runtime, G-Charm automatically combines multiple small GPU tasks into a single larger kernel, which reduces the number of kernel invocations while improving CUDA occupancy. G-Charm also enables reuse of data already in GPU global memory, performs GPU memory management, and dynamically schedules tasks across the CPU and GPU to reduce idle time. To combine the partial results of computations performed on the CPU and GPU, G-Charm allows the user to specify an operator with which the partial results are combined at runtime. We also perform compile-time code generation to reduce programming overhead. For Cholesky factorization, a regular parallel application, G-Charm provides a 14% improvement over a highly tuned implementation.
In the second part of our research, we extended the runtime to address the challenges presented by irregular applications, such as periodic generation of tasks, irregular memory access patterns, and workloads that vary during execution. We developed models for deciding how many tasks can be combined into a kernel, based on the rate of task generation and the GPU occupancy of the tasks. For irregular applications, data reuse results in uncoalesced GPU memory access; we evaluated the effect of altering the global memory access pattern on improving coalesced access. We also developed adaptive methods for hybrid execution on CPU and GPU in which we account for the varying workloads while scheduling tasks across the CPU and GPU. We demonstrate that our dynamic strategies yield an 8-38% reduction in execution time for an N-body simulation application and a molecular dynamics application over the corresponding static strategies, which are suited to regular applications.
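The idea of combining many small GPU tasks into one larger kernel launch can be sketched in plain Python. This is a hypothetical greedy batcher, not G-Charm's actual algorithm or API; it only illustrates how packing tasks under a per-launch thread budget reduces the number of launches (and hence launch overhead) while filling each launch closer to capacity.

```python
def batch_tasks(tasks, max_threads_per_kernel):
    """Greedily pack small tasks into combined 'kernels' so that each
    launch stays under a thread budget. Each task is represented only by
    its thread requirement; fewer launches means less launch overhead."""
    batches, current, used = [], [], 0
    for threads in tasks:
        if used + threads > max_threads_per_kernel and current:
            batches.append(current)   # close the current combined kernel
            current, used = [], 0
        current.append(threads)
        used += threads
    if current:
        batches.append(current)
    return batches

# Eight small tasks of 256 threads each, with a 1024-thread budget:
# two combined launches instead of eight individual ones.
launches = batch_tasks([256] * 8, 1024)
```

A real runtime would additionally weigh task-generation rate and measured occupancy, as the second part of the research describes; this sketch fixes only the packing step.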
43

Contribution to the estimation of VARMA models with time-dependent coefficients

Alj, Abdelkamel 07 September 2012 (has links)
In this thesis, we study the estimation of vector autoregressive moving-average (VARMA) models with time-dependent coefficients and a time-dependent innovation covariance matrix. These models are called tdVARMA. The elements of the coefficient matrices and of the covariance matrix are deterministic functions of time depending on a small number of parameters. The first part of the thesis is devoted to the asymptotic properties of the Gaussian quasi-maximum likelihood estimator. Almost sure convergence and asymptotic normality of this estimator are proved under certain verifiable assumptions, in the case where the coefficients depend on time t but not on the series length n. Before that, we consider the asymptotic properties of estimators for fairly general non-stationary models with a general penalty function, and then apply these theorems by taking the Gaussian likelihood as the penalty function (Chapter 2). Chapter 3 studies the asymptotic behavior of the estimator when the model coefficients depend on both t and n. In that case, we use a weak law of large numbers and a central limit theorem for martingale difference arrays, and we present conditions ensuring weak consistency and asymptotic normality. The main asymptotic results are illustrated by simulation experiments and by examples from the literature. The second part of the thesis is devoted to an algorithm for evaluating the exact likelihood function of a Gaussian tdVARMA(p, q) process. The algorithm is based on the Cholesky factorization of a partitioned band matrix. The starting point is a multivariate generalization of Mélard (1982) for evaluating the exact likelihood of a univariate ARMA(p, q) model. We also use results of Jonasson and Ferrando (2008), as well as the Matlab programs of Jonasson (2008), in the setting of a Gaussian likelihood for constant-coefficient VARMA models. We find that the number of operations required to evaluate the likelihood, as a function of p, q and n, is approximately double that of a constant-coefficient VARMA model. The implementation of this algorithm was tested by comparing its results with other well-known programs and software packages. Time-dependent VARMA models appear particularly well suited to the dynamics of some financial series, highlighting the time dependence of the parameters. / Doctorat en Sciences
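The core numerical step, evaluating an exact Gaussian likelihood through a Cholesky factorization, can be sketched for a small dense covariance. This is a generic illustration, not the thesis's banded, partitioned algorithm: it factors Sigma = L Lᵀ, solves L z = x by forward substitution, and accumulates log|Sigma| from the diagonal of L.

```python
import math

def cholesky(A):
    """Lower-triangular L with A = L Lᵀ (A symmetric positive definite)."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(A[i][i] - s)
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return L

def gaussian_loglik(x, Sigma):
    """Exact zero-mean Gaussian log-likelihood via the Cholesky factor:
    forward-substitute L z = x, and log|Sigma| = 2 * sum(log L_ii)."""
    n = len(x)
    L = cholesky(Sigma)
    z = []
    for i in range(n):
        z.append((x[i] - sum(L[i][k] * z[k] for k in range(i))) / L[i][i])
    logdet = 2.0 * sum(math.log(L[i][i]) for i in range(n))
    return -0.5 * (n * math.log(2 * math.pi) + logdet + sum(v * v for v in z))

# Identity covariance: reduces to the i.i.d. standard-normal case.
ll = gaussian_loglik([0.0, 0.0], [[1.0, 0.0], [0.0, 1.0]])
```

The thesis's contribution lies in exploiting the band and partition structure of the tdVARMA covariance so this factorization stays cheap as n grows; the dense sketch above ignores that structure.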
44

Modelling and experimental analysis of frequency dependent MIMO channels

García Ariza, Alexis Paolo 04 December 2009 (has links)
The integration of ultra-wideband, cognitive-radio, and MIMO technologies is a powerful tool for improving the spectral efficiency of wireless communication systems. Along these lines, new strategies for MIMO channel modelling and characterization are needed in order to investigate how the center frequency and the bandwidth affect the performance of MIMO systems. Previous research has paid less attention to how these parameters affect the characteristics of the MIMO channel. A characterization of the MIMO channel as a function of frequency is presented, addressing both experimental and theoretical points of view. The problems addressed span five main areas: measurements, data post-processing, synthetic channel generation, multivariate statistics for the data, and channel modelling. A measurement system based on a vector network analyzer was designed and validated, and measurements were carried out between 2 and 12 GHz under static conditions, in both line-of-sight and non-line-of-sight scenarios. A reliable procedure for post-processing, synthetic channel generation, and experimental analysis based on frequency-domain measurements was proposed and validated. The experimental procedure focused on channel transfer matrices for frequency-nonselective cases; the complex covariance matrices (CCMs) were also estimated, the Cholesky factorization was applied to the CCMs, and the coloring matrices of the system were finally obtained. A correction procedure (CP) for synthetic channel generation is presented for large-dimension MIMO cases and for indefinite CCMs; this CP enables the Cholesky factorization of such CCMs. The multivariate characteristics of the experimental data were investigated by means of a multivariate complex normality test. / García Ariza, AP. (2009). Modelling and experimental analysis of frequency dependent MIMO channels [unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/6563
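The coloring step can be sketched as follows: a Cholesky factor of the channel covariance matrix "colors" i.i.d. Gaussian entries into a correlated synthetic channel. The example is real-valued (measured channels are complex) and uses simple diagonal loading as a stand-in for the thesis's correction procedure for indefinite CCMs; all matrices and values are hypothetical.

```python
import math, random

def cholesky(A):
    """Lower-triangular L with A = L Lᵀ; raises if A is not positive definite."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                d = A[i][i] - s
                if d <= 0:
                    raise ValueError("not positive definite")
                L[i][j] = math.sqrt(d)
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return L

def coloring_matrix(ccm, loading=1e-6):
    """Cholesky 'coloring' factor of a channel covariance matrix (CCM).
    If the estimated CCM is indefinite, apply growing diagonal loading
    until it factors -- a simple stand-in for the correction procedure."""
    n = len(ccm)
    while True:
        try:
            return cholesky(ccm)
        except ValueError:
            ccm = [[ccm[i][j] + (loading if i == j else 0.0)
                    for j in range(n)] for i in range(n)]
            loading *= 10

def synthetic_channel(ccm, rng):
    """Correlated channel sample h = L w from i.i.d. Gaussian entries w."""
    L = coloring_matrix(ccm)
    w = [rng.gauss(0.0, 1.0) for _ in ccm]
    return [sum(L[i][k] * w[k] for k in range(len(w))) for i in range(len(w))]

rng = random.Random(0)
h = synthetic_channel([[1.0, 0.7], [0.7, 1.0]], rng)
```

By construction, samples generated this way have (in expectation) the prescribed CCM, which is the point of the coloring approach.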
45

High-Performance Scientific Applications Using Mixed Precision and Low-Rank Approximation Powered by Task-based Runtime Systems

Alomairy, Rabab M. 20 July 2022 (has links)
To leverage the extreme parallelism of emerging architectures, so that scientific applications can fulfill their high-fidelity and multi-physics potential while sustaining high efficiency relative to the limiting resource, numerical algorithms must be redesigned. Algorithmic redesign can shift the limiting resource, for example from memory or communication to arithmetic capacity, and its benefit grows greatly when a tunable tradeoff between accuracy and resources is introduced. Scientific applications from diverse sources rely on dense matrix operations, which arise in Schur complements, integral equations, covariances in spatial statistics, ridge regression, radial basis functions from unstructured meshes, and kernel matrices from machine learning, among others. This thesis demonstrates how to extend the problem sizes that may be treated and how to reduce their execution time. Two "universes" of algorithmic innovation have emerged to improve computations by orders of magnitude in capacity and runtime, each introducing a hierarchy: of rank or of precision. Tile Low-Rank (TLR) approximation replaces blocks of a dense operator with low-rank factors. Mixed-precision approximation, increasingly well supported by contemporary hardware, replaces high-precision blocks with low-precision ones. Herein, we design new high-performance direct solvers based on the synergy of TLR and mixed precision. Since adapting to data sparsity leads to heterogeneous workloads, we rely on task-based runtime systems to orchestrate the scheduling of fine-grained kernels onto computational resources. We first demonstrate how TLR accelerates acoustic scattering and mesh deformation simulations; our solvers outperform state-of-the-art libraries by up to an order of magnitude. We then demonstrate the impact of enabling mixed precision in a bioinformatics context, where it delivers up to a three-fold speedup.
To facilitate the adoption of task-based runtime systems, we introduce the AL4SAN library, which provides a common API for the expression and queueing of tasks across multiple dynamic runtime systems. The library handles a variety of workloads at low overhead while increasing user productivity. AL4SAN enables interoperability by switching runtimes at runtime, which yields a twofold speedup on a task-based generalized symmetric eigenvalue solver.
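The tile low-rank idea can be illustrated with a minimal sketch: the dominant rank-1 factorization of a single tile via power iteration, so that an m-by-n block is stored as m + n numbers instead of m*n. This is a generic illustration, not the solvers' actual compression kernels (which would typically use a truncated SVD or rank-revealing QR per tile, keeping as many singular pairs as the accuracy threshold demands).

```python
def rank1_approx(tile, iters=50):
    """Dominant rank-1 factorization of a tile by power iteration:
    tile[i][j] ≈ u[i] * v[j]. In a TLR solver, each admissible
    off-diagonal tile is stored as such low-rank factors instead of
    a dense block, trading accuracy for memory and runtime."""
    m, n = len(tile), len(tile[0])
    v = [1.0] * n
    for _ in range(iters):
        u = [sum(tile[i][j] * v[j] for j in range(n)) for i in range(m)]
        nu = sum(x * x for x in u) ** 0.5
        u = [x / nu for x in u]
        v = [sum(tile[i][j] * u[i] for i in range(m)) for j in range(n)]
    return u, v

# An exactly rank-1 tile (outer product of [1, 2] and [1, 2, 3]):
# storage drops from 2*3 = 6 numbers to 2 + 3 = 5, losslessly here.
tile = [[a * b for b in (1.0, 2.0, 3.0)] for a in (1.0, 2.0)]
u, v = rank1_approx(tile)
```

For tiles of higher numerical rank, the same idea extends to keeping k singular pairs, and the precision of each factor can independently be lowered — the synergy the thesis exploits.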
46

The Analysis of Market Risk VaR Management: The Case of a Financial Holding Company

周士偉, Chou, Jacky Unknown Date (has links)
In 2008 the subprime crisis swept global financial markets; despite years of implementation, the Basel II framework failed to prevent the crisis. Examining the 2008 annual reports of major international financial institutions that had adopted the internal models approach, one finds that Deutsche Bank, a representative user of the Monte Carlo simulation method, recorded 35 VaR exceptions in that year. What went wrong with market risk management? This phenomenon drew intense attention, and industry, government, and academia raised many questions about it. Now, in 2012, the subprime crisis has not fully receded and a new European debt crisis is spreading. If another financial crisis arrives, can market risk management overcome the shortcomings exposed by the subprime crisis? Beyond passive management, can market risk management advance to active early warning and serve as important input to business decisions? These are urgent market risk management issues that domestic financial institutions must confront. The market risk management framework of the case financial holding company addresses the issues highlighted since the subprime crisis, improves the precision of market risk measurement, broadens the scope of market risk management applications, and advances the role of market risk management from passive control to active early warning, so that it can serve as a key reference for business decisions. Refined over many years, its design philosophy and experience should be of considerable reference value. This thesis therefore conducts a case study based on the practical experience of the case financial holding company, analyzing the basic architecture of its market risk management framework and focusing on how multiple advanced quantitative market risk management functions were developed on top of that architecture. Beyond studying how the company refined its VaR quantification mechanism, the thesis also analyzes the results of implementing each quantitative function, so that the findings can serve as an objective reference for other financial holding companies developing advanced market risk measurement mechanisms.
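The Monte Carlo VaR and backtesting machinery the abstract refers to (e.g. counting exception days such as Deutsche Bank's 35 in 2008) can be sketched generically. The simulator, confidence level, and all figures below are hypothetical illustrations, not the case company's model.

```python
import random

def monte_carlo_var(pnl_simulator, n_paths, level, rng):
    """One-day VaR at the given confidence level: simulate P&L paths,
    take the loss quantile, and report it as a positive number."""
    pnls = sorted(pnl_simulator(rng) for _ in range(n_paths))
    return -pnls[int((1.0 - level) * n_paths)]

def count_exceptions(daily_pnls, daily_vars):
    """Backtesting: days on which the realized loss exceeded reported VaR."""
    return sum(1 for pnl, var in zip(daily_pnls, daily_vars) if -pnl > var)

# Toy one-day P&L model: standard normal. The 99% VaR should then sit
# near the 2.33 standard-deviation mark.
rng = random.Random(42)
var99 = monte_carlo_var(lambda r: r.gauss(0.0, 1.0), 10_000, 0.99, rng)
```

At 99% confidence, roughly 2-3 exceptions per 250 trading days are expected; a count like 35 signals that the model badly understated risk, which is the phenomenon the thesis investigates.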
