11

多維異質變異模型於結構型商品評價上之應用研究 / An Application of Multivariate Conditional Heteroscedastic Models to the Pricing of Structured Products

王俊欽 Unknown Date (has links)
In recent years, structured products on the market have evolved rapidly. Among them, equity-linked products often have payoffs tied to multiple underlying assets, which makes closed-form solutions difficult to obtain. Pricing such products therefore typically requires computer programs that simulate the future price paths of each underlying (e.g., Monte Carlo simulation) and discount the expected future cash flows. Because the underlying stock prices are mutually correlated, the simulation requires a Cholesky decomposition of their correlation matrix, so that correlated multivariate normal random variables can be constructed from independent normal draws. Historical data and empirical studies show that both the correlation matrix and the volatility of stock returns are time-varying rather than constant. Hence, instead of plugging sample variances and sample correlations estimated from historical data directly into the simulation, this thesis uses multivariate conditional heteroscedastic models (also called multivariate volatility models) from time-series analysis to forecast the correlation matrix and volatilities of the linked assets' returns at each time point over the product's life, and uses these forecasts as the simulation parameters. The volatility models are applied to the valuation of two multi-asset equity-linked notes issued in China. Because the forecast volatilities and correlations exhibit mean reversion, the resulting valuations differ little from those obtained with historical volatilities and correlations. We therefore conclude that, for similar problems, parameters estimated directly from historical data can be plugged into the simulation procedure. Keywords: volatility models, Cholesky decomposition, structured product valuation, Monte Carlo method.
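The Cholesky step the abstract describes can be sketched in a few lines; this is a minimal illustration (not the thesis code) of turning independent standard-normal draws into correlated ones for Monte Carlo path simulation:

```python
import numpy as np

def correlated_normals(corr, n_draws, rng):
    """Draw n_draws correlated standard-normal vectors given a correlation matrix."""
    L = np.linalg.cholesky(corr)              # corr = L @ L.T
    z = rng.standard_normal((n_draws, corr.shape[0]))
    return z @ L.T                            # each row now has covariance `corr`

rng = np.random.default_rng(0)
corr = np.array([[1.0, 0.6], [0.6, 1.0]])
x = correlated_normals(corr, 100_000, rng)
print(np.corrcoef(x.T).round(2))              # empirical correlation close to 0.6
```

Each correlated draw can then drive one time step of a simulated price path for each underlying.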
12

Longitudinal data analysis with covariates measurement error

Hoque, Md. Erfanul 05 January 2017 (has links)
Longitudinal data occur frequently in medical studies, and covariates measured with error are a typical feature of such data. Generalized linear mixed models (GLMMs) are commonly used to analyse longitudinal data. These models typically assume that the random-effects covariance matrix is constant across subjects. In many situations, however, this correlation structure may differ among subjects, and ignoring this heterogeneity can lead to biased estimates of the model parameters. In this thesis, following Lee et al. (2012), we propose an approach to properly model the random-effects covariance matrix in terms of covariates in the class of GLMMs where covariates are also measured with error. The parameters resulting from this decomposition have a sensible interpretation and can easily be modelled without concern for the positive definiteness of the resulting estimator. The performance of the proposed approach is evaluated through simulation studies, which show that the proposed method performs very well in terms of biases, mean square errors, and coverage rates. The proposed method is also illustrated using data from the Manitoba Follow-up Study. / February 2017
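The decomposition the abstract alludes to is in the spirit of the modified Cholesky parameterization; the sketch below (parameter names are illustrative, not from the thesis) shows why positive definiteness is automatic: any real "generalized autoregressive" parameters and log innovation variances yield a valid covariance, so they can be modelled freely, e.g. as functions of covariates.

```python
import numpy as np

def covariance_from_params(phi, log_d):
    """Build a q x q covariance from unconstrained parameters.

    phi   : strictly-lower-triangular entries, length q*(q-1)//2
    log_d : log innovation variances, length q
    """
    q = len(log_d)
    T = np.eye(q)
    T[np.tril_indices(q, k=-1)] = -np.asarray(phi)   # unit lower triangular
    D = np.diag(np.exp(log_d))                       # positive diagonal
    Tinv = np.linalg.inv(T)
    return Tinv @ D @ Tinv.T                         # Sigma = T^{-1} D T^{-T}

sigma = covariance_from_params(phi=[0.8, -0.3, 0.5], log_d=[0.0, -0.5, 0.2])
print(np.linalg.eigvalsh(sigma))                     # all eigenvalues positive
```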
13

Acceleration of Massive MIMO algorithms for Beyond 5G Baseband processing

Nihl, Ellen, de Bruijckere, Eek January 2023 (has links)
As the world becomes more globalised, user equipment such as smartphones and Internet of Things devices requires increasingly more data, which increases the demand for wireless data traffic. Hence, the acceleration of next-generation networks (5G and beyond) focuses mainly on increasing the bitrate and decreasing the latency. A crucial technology for 5G and beyond is massive MIMO. In a massive MIMO system, a detector processes the received signals from multiple antennas to decode the transmitted data and extract useful information. This has been implemented in many ways, and one of the most widely used algorithms is the Zero Forcing (ZF) algorithm. This thesis presents a novel parallel design to accelerate the ZF algorithm using the Cholesky decomposition. It is implemented on a GPU, written in the CUDA programming language, and compared to existing state-of-the-art implementations in terms of latency and throughput. The implementation is also validated against a MATLAB implementation. This research demonstrates promising performance when using GPUs for massive MIMO detection algorithms. Our approach achieves a significant speedup factor of 350 compared to a serial version of the implementation. The throughput achieved is 160 times greater than a comparable GPU-based approach, although 2.4 times lower than a solution that employs application-specific hardware. Given the promising results, we advocate continued research in this area to further optimise detection algorithms and enhance their performance on GPUs, potentially achieving even higher throughput and lower latency. / Our examiner Mahdi wants to wait six months before the thesis is published.
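The ZF-via-Cholesky approach the abstract describes can be sketched compactly (this is an illustrative NumPy/SciPy version, not the thesis's CUDA implementation): the ZF estimate x̂ = (HᴴH)⁻¹Hᴴy is computed by factoring the Gram matrix A = HᴴH = LLᴴ and solving two triangular systems, avoiding an explicit inverse.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def zf_detect(H, y):
    """Zero-forcing estimate of the transmitted vector x from y = H x + n."""
    A = H.conj().T @ H                 # Gram matrix (Hermitian positive definite)
    b = H.conj().T @ y                 # matched-filter output
    c, low = cho_factor(A)             # A = L L^H
    return cho_solve((c, low), b)      # forward + back substitution

rng = np.random.default_rng(1)
H = rng.standard_normal((64, 8)) + 1j * rng.standard_normal((64, 8))  # 64 antennas, 8 users
x = rng.choice([-1, 1], size=8) + 1j * rng.choice([-1, 1], size=8)    # QPSK-like symbols
y = H @ x + 0.01 * rng.standard_normal(64)                            # received signal
print(np.round(zf_detect(H, y)))      # recovered symbols, up to small noise
```

The Cholesky route is attractive for parallel hardware because the factorization and the triangular solves have regular, well-studied data dependencies.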
14

High Performance FPGA-Based Computation and Simulation for MIMO Measurement and Control Systems

Palm, Johan January 2009 (has links)
The Stressometer system is a measurement and control system used in cold rolling to improve the flatness of a metal strip. To achieve this goal the system employs a multiple input multiple output (MIMO) control system with a considerable number of sensors and actuators. As a consequence, the computational load on the Stressometer control system becomes very high if too advanced functions are used. At the same time, advances in rolling mill mechanical design make it necessary to implement more complex functions for the Stressometer system to stay competitive. Most industrial players in this market consider improved computational power, for measurement, control and modeling applications, to be a key competitive factor. Accordingly, there is a need to improve the computational power of the Stressometer system. Several approaches towards this objective have been identified, e.g. exploiting hardware parallelism in modern general-purpose and graphics processors.

Another approach is to implement different applications in FPGA-based hardware, either tailored to a specific problem or as part of hardware/software co-design. Through a hardware/software co-design approach the efficiency of the Stressometer system can be increased, lowering the overall demand for processing power since the available resources can be exploited more fully. Hardware-accelerated platforms can be used to increase the computational power of the Stressometer control system without major changes to the existing hardware. Thus hardware upgrades can be as simple as connecting a cable to an accelerator platform, while hardware/software co-design is used to find a suitable hardware/software partition, moving applications between software and hardware.

To determine whether this hardware/software co-design approach is realistic, the feasibility of implementing simulator, computational and control applications in FPGA-based hardware needs to be established. This is accomplished by selecting two specific applications for a closer study: a Stressometer measuring roll simulator and a parallel Cholesky algorithm in FPGA-based hardware.

Based on these studies, this work has determined that FPGA device technology is well suited for implementing both simulator and computational applications. The Stressometer measuring roll simulator was able to approximate the force and pulse signals of the Stressometer measuring roll at a relatively modest resource consumption, using only 1747 slices and eight DSP slices, while the parallel FPGA-based Cholesky component provides performance in the range of GFLOP/s, exceeding the performance of the personal computer used for comparison in several simulations, although at a very high resource consumption. The results of this thesis, based on the two feasibility studies, indicate that it is possible to increase the processing power of the Stressometer control system using FPGA device technology.
16

結構型金融商品之評價與分析:連結一籃子商品之保本型票券 / Pricing the structured notes-capital protected note linked to a basket of commodities

曾瓊葦 Unknown Date (has links)
After the financial crisis, expectations of an improving global economy pointed to rising raw-material prices, increasing investment demand for energy and base-metal commodities; in recent years many investment vehicles linked to energy and commodities have appeared. Combined with today's low-interest-rate environment, investors have been seeking financial products offering returns above market rates. This thesis prices and analyses a structured product sold on the market: a capital-protected note linked to a basket of commodities. Monte Carlo simulation is used for the valuation and analysis, so that readers can fully understand the product's structure, payoff pattern, cost, and risks; the issuer's perspective is also taken, and the hedging strategies available to the issuer are analysed.
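A toy Monte Carlo pricer for a note of this kind might look as follows. This is a simplified illustration under assumptions of my own (equally weighted basket, participation in the positive average return, correlated geometric Brownian motion), not the contract terms or code from the thesis:

```python
import numpy as np

def price_note(s0, vols, corr, r, T, participation, n_paths, seed=0):
    """Monte Carlo price of a capital-protected basket note, per unit notional."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(corr)                       # correlate the normal draws
    z = rng.standard_normal((n_paths, len(s0))) @ L.T
    # terminal prices under risk-neutral GBM
    sT = s0 * np.exp((r - 0.5 * vols**2) * T + vols * np.sqrt(T) * z)
    basket_return = np.mean(sT / s0 - 1.0, axis=1)
    payoff = 1.0 + participation * np.maximum(basket_return, 0.0)  # principal protected
    return float(np.exp(-r * T) * payoff.mean())

price = price_note(
    s0=np.array([100.0, 50.0, 80.0]),
    vols=np.array([0.3, 0.25, 0.35]),
    corr=np.array([[1.0, 0.5, 0.4], [0.5, 1.0, 0.3], [0.4, 0.3, 1.0]]),
    r=0.02, T=1.0, participation=0.6, n_paths=200_000,
)
print(round(price, 4))   # always at least the discounted principal
```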
17

Impacto directo e indirecto de cambios en la cotización internacional del petróleo sobre la inflación: un estudio para Perú 2007 – 2019 / Direct and indirect impact of changes in the international oil price on inflation: a study for Peru, 2007-2019

Huillca Huamán, Betty Marilyn, Villanueva Orrego, Elizabeth Consuelo Dorila 05 October 2021 (has links)
The objective of this research is to quantify, in terms of duration and impact, the effect of shocks to the international oil price on inflation in Peru for the period 2007-2019. A differentiation is made between the direct and the indirect effect, since the effect on the economy is seen not only in fuel prices but also in macroeconomic variables treated as controls (growth of the United States economy, terms of trade, Peru's growth rate, the reference rate, and the unemployment rate), which further amplify the effect on inflation. The direct effect is analysed through a simple OLS regression, which evaluates the effect of changes in the international oil price on the price of fuels and derivatives and, in turn, on inflation, while the indirect effects are estimated using an SVAR under the identification methodology of Sims (1980), with structural restrictions imposed according to economic theory. The expected result is to show that changes in the international oil price affect inflation to a greater extent (magnitude, duration, and impact) through the indirect channel than through the direct channel. / Tesis
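The direct-effect regression described above is a simple OLS of inflation on oil-price changes; here is a toy sketch with synthetic data (the thesis uses Peruvian series for 2007-2019):

```python
import numpy as np

rng = np.random.default_rng(2)
oil_change = rng.standard_normal(156)     # monthly oil-price changes, 13 years
# synthetic "inflation" with a known pass-through coefficient of 0.1
inflation = 0.25 + 0.1 * oil_change + 0.05 * rng.standard_normal(156)

X = np.column_stack([np.ones_like(oil_change), oil_change])  # intercept + regressor
beta, *_ = np.linalg.lstsq(X, inflation, rcond=None)         # OLS estimates
print(beta.round(2))   # [intercept, pass-through coefficient]
```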
18

IRRBB in a Low Interest Rate Environment / IRRBB i en lågräntemiljö

Berg, Simon, Elfström, Victor January 2020 (has links)
Financial institutions are exposed to several different types of risk. One risk that can have a significant impact is the interest rate risk in the banking book (IRRBB). In 2018, the European Banking Authority (EBA) released a regulation on IRRBB to ensure that institutions make adequate risk calculations. This paper proposes an IRRBB model that follows the EBA's regulations. Among other things, this framework contains a deterministic stress test of the risk-free yield curve; in addition, two different types of stochastic stress tests of the yield curve were performed. The results show that the deterministic stress tests give the highest risk, but that their outcomes are considered less likely to occur than the outcomes generated by the stochastic models. It is also demonstrated that the EBA's proposed stress model could be better adapted to the low interest rate environment we currently experience. Furthermore, a discussion is held on the need for a more standardised framework to clarify, both for the institutions themselves and for the supervisory authorities, the risks to which institutions are exposed.
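The deterministic stress test mentioned above works by shifting the risk-free curve and remeasuring economic value; the fragment below is an illustrative sketch of one such scenario (a parallel +200 bp shift on an invented cash-flow profile), with all numbers being assumptions of mine rather than EBA-prescribed figures:

```python
import numpy as np

def present_value(cashflows, times, zero_rates):
    """Discount cashflows with continuously compounded zero rates."""
    return float(np.sum(cashflows * np.exp(-zero_rates * times)))

times = np.array([1.0, 2.0, 5.0, 10.0])            # years
cashflows = np.array([100.0, 100.0, 100.0, 1100.0])
base_curve = np.array([0.001, 0.002, 0.005, 0.01]) # low-rate environment

base_pv = present_value(cashflows, times, base_curve)
shocked_pv = present_value(cashflows, times, base_curve + 0.02)  # +200 bp parallel
delta_eve = shocked_pv - base_pv                   # change in economic value
print(round(base_pv, 2), round(delta_eve, 2))      # EVE falls when rates rise
```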
19

Accelerated sampling of energy landscapes

Mantell, Rosemary Genevieve January 2017 (has links)
In this project, various computational energy landscape methods were accelerated using graphics processing units (GPUs). Basin-hopping global optimisation was treated using a version of the limited-memory BFGS algorithm adapted for CUDA, in combination with GPU-acceleration of the potential calculation. The Lennard-Jones potential was implemented using CUDA, and an interface to the GPU-accelerated AMBER potential was constructed. These results were then extended to form the basis of a GPU-accelerated version of hybrid eigenvector-following. The doubly-nudged elastic band method was also accelerated using an interface to the potential calculation on GPU. Additionally, a local rigid body framework was adapted for GPU hardware. Tests were performed for eight biomolecules represented using the AMBER potential, ranging in size from 81 to 22,811 atoms, and the effects of minimiser history size and local rigidification on the overall efficiency were analysed. Improvements relative to CPU performance of up to two orders of magnitude were obtained for the largest systems. These methods have been successfully applied to both biological systems and atomic clusters. An existing interface between a code for free energy basin-hopping and the SuiteSparse package for sparse Cholesky factorisation was refined, validated and tested. Tests were performed for both Lennard-Jones clusters and selected biomolecules represented using the AMBER potential. Significant acceleration of the vibrational frequency calculations was achieved, with negligible loss of accuracy, relative to the standard diagonalisation procedure. For the larger systems, exploiting sparsity reduces the computational cost by factors of 10 to 30. The acceleration of these computational energy landscape methods opens up the possibility of investigating much larger and more complex systems than previously accessible. A wide array of new applications is now computationally feasible.
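A hedged sketch of the idea behind replacing diagonalisation with Cholesky factorisation in this setting: harmonic free energies need log det(H), the log-product of the Hessian's eigenvalues, and a factorisation H = LLᵀ gives this as 2·Σ log diag(L) without computing any eigenvalues. Dense NumPy is used here for brevity; the thesis exploits sparsity via SuiteSparse.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((50, 50))
H = A @ A.T + 50 * np.eye(50)          # stand-in for a positive-definite Hessian

L = np.linalg.cholesky(H)
logdet_chol = 2.0 * np.sum(np.log(np.diag(L)))     # from the Cholesky diagonal
sign, logdet_eig = np.linalg.slogdet(H)            # reference via decomposition
print(np.isclose(logdet_chol, logdet_eig))         # the two routes agree
```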
20

Contribution to the estimation of VARMA models with time-dependent coefficients / Contribution à l'estimation des modèles VARMA à coefficients dépendant du temps.

Alj, Abdelkamel 07 September 2012 (has links)
In this thesis, we study the estimation of vector autoregressive moving average (VARMA) models with time-dependent coefficients and a time-dependent innovation covariance matrix. These models are called tdVARMA. The entries of the coefficient matrices and of the covariance matrix are deterministic functions of time depending on a small number of parameters. The first part of the thesis is devoted to the asymptotic properties of the Gaussian quasi-maximum likelihood estimator. Almost sure convergence and asymptotic normality of this estimator are proved under certain verifiable assumptions, in the case where the coefficients depend on time t but not on the series length n. Before that, we consider the asymptotic properties of estimators of fairly general non-stationary models, for a general penalty function, and then apply these theorems by taking the Gaussian likelihood as the penalty function (Chapter 2). Chapter 3 studies the asymptotic behaviour of the estimator when the model coefficients depend on both t and n. In that case, we use a weak law of large numbers and a central limit theorem for martingale difference arrays, and present conditions ensuring weak consistency and asymptotic normality. The main asymptotic results are illustrated by simulation experiments and by examples from the literature.

The second part of the thesis is devoted to an algorithm for evaluating the exact likelihood function of a Gaussian tdVARMA(p, q) process. Our algorithm is based on the Cholesky factorization of a partitioned band matrix. The starting point is a multivariate generalization of Mélard (1982) for evaluating the exact likelihood of a univariate ARMA(p, q) model. We also use some results of Jonasson and Ferrando (2008), as well as the Matlab programs of Jonasson (2008), in the setting of the Gaussian likelihood of constant-coefficient VARMA models. We further show that the number of operations required to evaluate the likelihood, as a function of p, q, and n, is approximately double that of a constant-coefficient VARMA model. The implementation of this algorithm was tested by comparing its results with those of other well-known programs and software packages. VARMA models with time-dependent coefficients appear particularly well suited to the dynamics of some financial series, revealing time dependence of the parameters. / Doctorat en Sciences / info:eu-repo/semantics/nonPublished
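The core trick of band-Cholesky likelihood evaluation can be shown on the simplest case (my illustration, not the thesis algorithm): for an MA(1) process the covariance matrix of (x₁, …, xₙ) is tridiagonal, so the exact Gaussian log-likelihood can be evaluated from a banded Cholesky factorization in O(n) instead of O(n³).

```python
import numpy as np
from scipy.linalg import cholesky_banded, cho_solve_banded

def ma1_loglik(x, theta, sigma2):
    """Exact Gaussian log-likelihood of an MA(1) sample via banded Cholesky."""
    n = len(x)
    # upper banded storage: row 0 = superdiagonal, row 1 = main diagonal
    ab = np.zeros((2, n))
    ab[1, :] = sigma2 * (1.0 + theta**2)       # Var(x_t)
    ab[0, 1:] = sigma2 * theta                 # Cov(x_t, x_{t+1})
    cb = cholesky_banded(ab)                   # banded Cholesky factor
    alpha = cho_solve_banded((cb, False), x)   # solves Sigma @ alpha = x
    logdet = 2.0 * np.sum(np.log(cb[1, :]))    # from the Cholesky diagonal
    return -0.5 * (n * np.log(2 * np.pi) + logdet + x @ alpha)

rng = np.random.default_rng(4)
e = rng.standard_normal(500)
x = e[1:] + 0.5 * e[:-1]                       # MA(1) with theta = 0.5, sigma2 = 1
print(round(ma1_loglik(x, 0.5, 1.0), 2))
```

The thesis generalizes this idea to partitioned band matrices arising from multivariate tdVARMA(p, q) processes.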
