271 |
Frequency Analysis of Droughts Using Stochastic and Soft Computing Techniques. Sadri, Sara. January 2010.
Recurring droughts are a reality in the Canadian Prairies and can have significant economic, environmental, and social impacts. For example, the droughts of 1997 and 2001 cost various sectors over $100 million. Drought frequency analysis is a technique for estimating how often a drought event of a given magnitude can be expected to occur. This study reviews the state of the science in drought frequency analysis. Its main contributions include a Matlab model that exploits the properties of Fuzzy C-Means (FCM) clustering and then corrects the resulting regions so that they meet the criteria for effective hydrological regions; in FCM, each site has a degree of membership in every cluster. The algorithm takes the number of regions and the return period as inputs and, for most cases, outputs the final corrected clusters. Because drought is a bivariate phenomenon whose two statistical variables, duration and severity, must be analyzed simultaneously, an important step in this study is extending the initial Matlab model to correct regions using L-comoment statistics (as opposed to L-moments). Implementing a reasonably straightforward approach to bivariate drought frequency analysis based on bivariate L-comoments and copulas is another contribution of this study. Quantile estimation at ungauged sites for return periods of interest is addressed by introducing two classes of neural-network and machine-learning models: Radial Basis Function (RBF) networks and Support Vector Machine Regression (SVM-R), chosen for their strong record in the literature on function estimation and nonparametric regression. Their performance is compared with a traditional nonlinear regression (NLR) method and with a regionalized nonlinear regression in which catchments are first grouped using FCM. Drought data from 36 natural catchments in the Canadian Prairies are used. The resulting methodology for bivariate drought frequency analysis can be applied in any part of the world.
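The thesis implements this in Matlab; as a minimal illustration of the fuzzy clustering step it describes, the sketch below (in Python, with hypothetical site attributes, not the thesis code) applies the standard Fuzzy C-Means updates, in which each site keeps a degree of membership in every cluster.

```python
import numpy as np

def fuzzy_c_means(X, n_clusters=3, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Standard FCM: returns cluster centers and the membership matrix U,
    where U[i, k] is the degree of membership of site i in cluster k."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, n_clusters))
    U /= U.sum(axis=1, keepdims=True)            # memberships sum to 1 per site
    for _ in range(max_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # Distance of every site to every center (small epsilon avoids division by zero)
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U_new = 1.0 / (d ** (2.0 / (m - 1.0)))
        U_new /= U_new.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return centers, U

# Hypothetical attributes (e.g. drought duration/severity statistics) for 36 catchments
X = np.random.default_rng(1).normal(size=(36, 4))
centers, U = fuzzy_c_means(X, n_clusters=4)
print(U.sum(axis=1))   # each row of U sums to 1: fuzzy memberships per site
```

A region-correction step of the kind the thesis describes would then reassign or merge sites based on these memberships until the regions satisfy the homogeneity criteria.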
|
272 |
運用新共同邊界法探討多重產出銀行業市場競爭度與成本效率 / A New Approach to Jointly Estimating the Lerner Index and Cost Efficiency for Multi-output Banks under a New Meta-Frontier Framework. 江典霖 (Chiang, Dien Lin). Date unknown.
Most previous studies measure market competition in banking with the Lerner index, but the computation can produce negative values. To address this problem, this thesis uses copula functions to build a simultaneous stochastic frontier model composed of a bank cost frontier and two output price frontiers, which measures market power and cost efficiency in the loan and investment markets at the same time. In addition, to compare banking competition and cost efficiency across five Western European countries, the thesis adopts the new stochastic meta-frontier model proposed by Huang et al. (2014); besides computing the technology gap ratio from the meta cost frontier, this model evaluates a potential Lerner index from the output price meta-frontiers and decomposes it into the Lerner index and the marginal cost gap ratio (MCGR), so that the degree of market competition can be compared across countries. / This paper proposes the copula-based simultaneous stochastic frontier model (CSSFM), composed of a cost frontier and two output price frontiers for the banking sector, in order to measure cost efficiency and market power in the loan and investment markets. The new Lerner index is estimated from a simultaneous equations model consisting of three frontier equations, which avoids negative measures of the Lerner index. We then apply the new meta-frontier model to simultaneously estimate and compare cost efficiency and market power across five countries over the period 1998-2010. The salient feature of the proposed approach is that it allows the technology gap ratio to be calculated on the basis of the cost frontier, as well as the potential Lerner index to be evaluated from the price frontiers and decomposed into the country-specific Lerner index and a marginal cost gap ratio.
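For reference, the Lerner index mentioned above is the standard price-marginal-cost markup, and the decomposition described can be written schematically as below; the exact specification used in the thesis is not given in the abstract, so the notation is illustrative:

$$ \mathcal{L} = \frac{P - MC}{P}, \qquad \mathcal{L}^{*} = \frac{P - MC^{*}}{P} = \frac{P - MC}{P} + \frac{MC - MC^{*}}{P}, $$

where $P$ is the output price, $MC$ the marginal cost on the country-specific frontier, $MC^{*}$ the meta-frontier marginal cost, $\mathcal{L}^{*}$ the potential Lerner index, and the last term a marginal cost gap ratio. Negative estimates of $\mathcal{L}$ arise whenever an estimated $MC$ exceeds the observed price, which is the problem the simultaneous price-frontier formulation is meant to avoid.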
|
274 |
Stress-Test Exercises and the Pricing of Very Long-Term Bonds. Dubecq, Simon. 28 January 2013.
In the first part of this thesis, we introduce a new methodology for stress-test exercises. Our approach allows richer stress tests that assess the impact of a change in the whole distribution of the factors driving asset prices, rather than focusing, as is common practice, on a single realization of those factors, and it takes into account the portfolio manager's potential reaction to the shock. The second part of the thesis is devoted to the pricing of bonds with very long times to maturity (more than ten years). Modeling the volatility of very long-term rates is a challenge because of the constraints imposed by the no-arbitrage assumption; as a consequence, most no-arbitrage term structure models assume a constant limiting rate (the rate of infinite maturity). The second chapter investigates whether the so-called "level" factor, whose variations shift the modeled yield curve uniformly, is compatible with the no-arbitrage assumptions. In the third chapter we introduce a new class of arbitrage-free term structure factor models that allows the limiting rate to be stochastic, and we present its empirical properties on a dataset of US T-Bonds.
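The no-arbitrage constraint alluded to here is presumably the Dybvig-Ingersoll-Ross result that long zero-coupon rates can never fall: writing the limiting rate as

$$ \ell_t \;=\; \lim_{T \to \infty} \, -\frac{\ln P(t,T)}{T - t}, $$

absence of arbitrage implies $\ell_s \le \ell_t$ almost surely for $s \le t$, so the long rate is non-decreasing over time. A constant limiting rate is the simplest specification compatible with this restriction, which is why allowing it to be stochastic, as in the third chapter, requires a dedicated class of models.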
|
275 |
時間數列模型應用於合成型抵押擔保債務憑證之評價與預測 / Time series models applied to pricing and forecasting synthetic CDOs. 張弦鈞 (Chang, Hsien Chun). Date unknown.
In the literature on pricing synthetic collateralized debt obligations (CDOs), the most widely used approach is the one-factor Gaussian copula model under the large homogeneous portfolio (LHP) assumption. However, because the normal distribution lacks heavy tails and skewness, the resulting prices deviate too far from market quotes and give rise to the implied correlation smile. Models such as the one-factor Normal Inverse Gaussian (NIG) copula under the LHP assumption proposed by Kalemanova et al. (2007), and the one-factor copula based on a mixture of the NIG and Closed Skew Normal (CSN) distributions (the MIX model) proposed by 邱嬿燁 (Qiu Yan Ye, 2007), were developed to obtain better pricing results for the individual tranches.
Previous work on pricing synthetic CDOs, however, requires feeding both CDS spreads and the actual tranche quotes into the model in order to back out correlations and prices, so static models mostly serve as after-the-fact validation. On the static side, we compare two alternatives, a different way of selecting the CDS spreads and a declining relative time to maturity, against the original copula model. On the dynamic side, we embed time-series methods into the existing pricing models to value synthetic CDOs with different deal structures and compare the two approaches empirically. Finally, we use the time series models to forecast the individual tranches.
|
276 |
運用關聯結構網絡隨機邊界分析法探討我國壽險公司經營績效 / Applying the Copula-Based Network Stochastic Frontier Approach to Study the Efficiency of Taiwan's Life Insurance Industry. 巫瑞虔 (Wu, Ruei Cian). Date unknown.
Using an unbalanced panel of 26 life insurance companies in Taiwan over 2000-2012, this study applies the network stochastic frontier approach, which splits the life insurers' production process into a marketing stage and an investment stage, to evaluate efficiency. The estimates are then used to compute scale and cost elasticities that characterize production in Taiwan's life insurance industry, intertemporal technical change is analyzed as well, and operating efficiency is finally compared across groups of insurers.
The empirical results show that life insurers use fewer back-office staff and more fixed assets in the marketing stage and the reverse in the investment stage (more back-office staff and fewer fixed assets), consistent with how life insurers actually operate; moreover, efficiency in the investment stage exceeds marketing efficiency in the first stage. The industry's operating efficiency fell after the 2008 financial crisis; domestic life insurers are more efficient than branches of foreign insurers; insurers belonging to financial holding companies have higher technical efficiency than those that do not; and new insurers established after 1993 have, on average, higher technical efficiency than the traditional older ones. / This paper uses the copula-based network SFA model developed by Huang et al. (2013) to estimate the technical efficiency of Taiwan's life insurance companies over the period 2000-2012. Under this framework, life insurance companies produce premium income as an intermediate product, which is one of the input factors used to produce investment income. The empirical analysis concludes that: (a) life insurers use few back-office staff in the first stage; (b) domestic life insurers have higher technical and cost efficiency than foreign life insurers; (c) insurers affiliated with financial holding companies have greater technical efficiency than unaffiliated ones; and (d) new life insurers have higher technical efficiency than old life insurers.
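The exact specification of Huang et al. (2013) is not reproduced in the abstract; schematically, a two-stage network stochastic frontier of the kind described can be written as follows, with illustrative notation:

$$ \ln Z_{it} = f_1\!\big(X^{(1)}_{it};\beta_1\big) + v_{1,it} - u_{1,it}, \qquad \ln Y_{it} = f_2\!\big(X^{(2)}_{it}, Z_{it};\beta_2\big) + v_{2,it} - u_{2,it}, $$

where $Z_{it}$ is premium income (the intermediate product of the marketing stage), $Y_{it}$ is investment income, $X^{(1)}$ and $X^{(2)}$ are the stage-specific inputs such as back-office staff and fixed assets, the $v$ terms are statistical noise, and the $u \ge 0$ terms are stage-specific inefficiencies. In the copula-based version, the error terms of the two stages are linked through a copula rather than assumed independent.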
|
277 |
單一分券違約信用交換與單一分券擔保債權憑證之評價-Copula方法 / Pricing single-tranche credit default swaps and single-tranche collateralized debt obligations: a copula approach. 林晚容. Date unknown.
Banks carry many corporate loans, secured loans, and consumer credit, exposing financial institutions to substantial and varied credit risk. The new Basel capital accord substantially revised the treatment of credit risk, and credit derivatives are now recognized as credit risk mitigants. This study therefore looks more closely at bespoke single-tranche credit default swaps and single-tranche CDOs written on a basket of reference credits. The default intensity is modeled as a stochastic process, an Ornstein-Uhlenbeck process (the special case underlying the Vasicek model), and a closed-form expression analogous to the price of a risky bond is derived to stand in for the survival function. For simplicity the risk-free rate is assumed constant, and a copula approach is used to price the single-tranche CDS and single-tranche CDO.
In the numerical part, a synthetic single-tranche CDO is constructed from actual market data. The parameters of the default dynamics and of the copula functions are first estimated and calibrated to market data, and the pricing formula is then used to compute fair credit spreads. The results show that the better a copula captures correlated default events, the stronger the clustering of successive defaults among the reference assets. For the product data used here, the Clayton copula best reproduces default clustering; however, for the equity tranche, which responds to the first defaults, it produces fewer simulated defaults under Monte Carlo than the other two copulas and hence a lower credit spread, whereas the more senior swap, which reflects default clustering, carries a higher credit spread. In terms of VaR, possibly because the number of reference assets is small, no clear differences are found.
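The closed form referred to is presumably the standard affine (Vasicek-type) expression for the survival probability under an Ornstein-Uhlenbeck intensity. With $d\lambda_t = \kappa(\theta - \lambda_t)\,dt + \sigma\,dW_t$ it reads, in illustrative notation,

$$ S(0,t) = \mathbb{E}\!\left[e^{-\int_0^t \lambda_u \, du}\right] = A(t)\, e^{-B(t)\lambda_0}, \qquad B(t) = \frac{1 - e^{-\kappa t}}{\kappa}, \qquad \ln A(t) = \Big(\theta - \frac{\sigma^2}{2\kappa^2}\Big)\big(B(t) - t\big) - \frac{\sigma^2 B(t)^2}{4\kappa}, $$

which mirrors the Vasicek zero-coupon bond price with the short rate replaced by the default intensity, i.e. the "concept similar to a risky bond" mentioned above.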
|
278 |
探討合成型抵押擔保債券憑證之評價 / Pricing Synthetic CDOs. 林聖航. Date unknown.
In the literature on pricing synthetic CDOs, the most widely used method is the one-factor Gaussian copula model under the large homogeneous portfolio (LHP) assumption, but it produces prices that deviate too far from market quotes and gives rise to the implied correlation smile. The literature shows that adding heavy tails or skewness to the one-factor copula model alleviates these problems and also prices the tranches better; for instance, the one-factor Normal Inverse Gaussian (NIG) copula model under the LHP assumption proposed by Kalemanova et al. (2007) and the one-factor copula based on a mixture of the NIG and Closed Skew Normal (CSN) distributions (the MIX model) proposed by 邱嬿燁 (Qiu Yan Ye, 2007) obtain excellent pricing results in empirical studies. Since 2008 the structure of synthetic CDO deals has begun to change, whereas earlier pricing studies all considered the same deal structure. This thesis prices synthetic CDOs with different deal structures using the normal, NIG, and CSN distributions, as well as the NIG-CSN mixture, as alternative one-factor copula models calibrated by minimizing the absolute error, and compares the models. The empirical results show that the one-factor NIG(2) copula model outperforms the others, demonstrating that the second parameter β of the NIG distribution improves pricing, a finding that differs from the earlier literature, while the MIX model is the only one that satisfies the LHP assumption. / Based on the literature on pricing synthetic CDOs, the most widely used approach is the one-factor Gaussian copula model under the large homogeneous portfolio (LHP) assumption; however, it fails to fit the prices of synthetic CDO tranches and leads to the implied correlation smile. The literature shows that adding heavy tails or skewness to a one-factor copula model can mitigate these problems and also improves tranche pricing; examples include the one-factor NIG copula model under the LHP assumption proposed by Kalemanova et al. (2007) and the one-factor NIG-CSN copula model proposed by Qiu Yan Ye (2007). This article notes that the structure of synthetic CDOs began to change in 2008. Earlier pricing work assumed a single deal structure, so this article prices synthetic CDOs with different structures using different one-factor copula models calibrated by minimizing absolute error, examines whether these models can be applied to the new synthetic CDOs, and compares the models. The empirical analysis shows that the one-factor NIG(2) copula model is superior to the other models and better fits actual market conditions, and it demonstrates that the second parameter β of the NIG distribution improves the pricing results, a finding that differs from the conclusions of the earlier literature. The MIX model, however, is the only one consistent with the LHP assumption.
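For reference, the LHP framework these copula models extend is the one-factor latent-variable construction below, shown for the Gaussian case; the NIG and CSN variants replace the normal distributions, and the sketch ignores recovery for simplicity:

$$ X_i = \sqrt{\rho}\, M + \sqrt{1-\rho}\, Z_i, \qquad \text{name } i \text{ defaults} \iff X_i \le \Phi^{-1}(p), $$

$$ p(M) = \Phi\!\left(\frac{\Phi^{-1}(p) - \sqrt{\rho}\, M}{\sqrt{1-\rho}}\right), \qquad \Pr(L \le x) = \Phi\!\left(\frac{\sqrt{1-\rho}\,\Phi^{-1}(x) - \Phi^{-1}(p)}{\sqrt{\rho}}\right), $$

where $M$ and the $Z_i$ are independent standard normal variables, $p$ is the single-name default probability, $p(M)$ is the default probability conditional on the common factor, and the last expression is the limiting (LHP) distribution of the portfolio loss fraction $L$, from which expected tranche losses and spreads follow.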
|
279 |
Fusion d'images de télédétection hétérogènes par méthodes crédibilistes / Fusion of heterogeneous remote sensing images by credibilist methods. Hammami, Imen. 8 December 2017.
With the advent of new image acquisition techniques and the emergence of high-resolution satellite systems, remote sensing data have become increasingly rich and varied. Combining them has therefore become essential to improve the extraction of useful information about the physical nature of the observed surfaces. However, these data are generally heterogeneous and imperfect, which complicates their joint processing and requires the development of specific methods. This thesis falls within that context and aims to develop a new evidential fusion method dedicated to processing heterogeneous high-resolution remote sensing images. To reach this objective, we first focus on a new approach for estimating belief functions, based on Kohonen's self-organizing map, in order to simplify the assignment of masses over the large volumes of data these images occupy. The proposed method models not only the ignorance and imprecision of the information sources, but also their paradox (conflict). We then exploit this estimation approach to propose an original fusion technique that addresses the problems caused by the wide variety of knowledge provided by these heterogeneous sensors. Finally, we study how the dependence between sources can be taken into account in the fusion process by means of copula theory, and a new technique for choosing the most appropriate copula is introduced. The experimental part of this work is devoted to land-cover mapping of agricultural areas using SPOT-5 and RADARSAT-2 images. The experiments demonstrate the robustness and effectiveness of the approaches developed in the framework of this thesis.
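The thesis proposes its own estimation and combination operators, which the abstract does not detail; as a baseline point of reference, the following is a minimal sketch of the classical Dempster rule for combining two mass functions over a small frame of discernment (all class names and masses are illustrative).

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset -> mass) with Dempster's rule."""
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb              # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("Sources are in total conflict; Dempster's rule is undefined.")
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

# Frame of discernment: two hypothetical land-cover classes
wheat, bare = frozenset({"wheat"}), frozenset({"bare_soil"})
theta = wheat | bare                          # total ignorance
m_optical = {wheat: 0.6, bare: 0.1, theta: 0.3}   # e.g. evidence from an optical image
m_radar   = {wheat: 0.4, bare: 0.3, theta: 0.3}   # e.g. evidence from a radar image
print(dempster_combine(m_optical, m_radar))
```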
|
280 |
Avaliação esportiva utilizando técnicas multivariadas: construção de indicadores e sistemas online / Sports evaluation using multivariate techniques: constructing indicators and online systems. Maiorano, Alexandre Cristovão. 10 October 2014.
The main objective of this research is to provide statistical tools that allow individuals in a given sports category to be compared. In particular, the present study focuses on performance evaluation in football using univariate and multivariate methods. The univariate approach is the Z-CELAFISCS methodology, which was developed to identify talent in sport. The multivariate approaches construct indicators, specifically by means of principal component analysis, factor analysis, and copulas. These indicators reduce the dimensionality of the data under study, providing clearer interpretation of the results and better comparability for evaluating and ranking individuals. To make the methodology easier to use, an online statistical system called i-Sports was built. Funded by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES).
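The abstract does not spell out how the indicators are built; a common construction consistent with what it describes is to standardize each measured variable (a z-score, as in the univariate approach) and take the first principal component as a composite index. A minimal illustrative sketch, with hypothetical data:

```python
import numpy as np

def composite_indicator(X):
    """First-principal-component composite index from a (players x tests) matrix."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)   # z-scores per test
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)    # PCA via SVD of standardized data
    w = Vt[0]                        # loadings of the first principal component
    if w.sum() < 0:                  # heuristic sign fix so higher tends to mean better
        w = -w
    scores = Z @ w                   # composite score for each player
    explained = s[0] ** 2 / (s ** 2).sum()
    return scores, w, explained

# Hypothetical scores of 20 players on 5 physical/technical tests
rng = np.random.default_rng(42)
X = rng.normal(loc=50, scale=10, size=(20, 5))
scores, loadings, explained = composite_indicator(X)
ranking = np.argsort(-scores)        # best players first
print(explained, ranking[:5])
```

Factor-analysis or copula-based indicators, as studied in the thesis, replace the PCA step with the corresponding model while keeping the same standardize-then-aggregate structure.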
|