91 |
Estimação de cópulas via ondaletas / Copula estimation through wavelets. Silva, Francyelle de Lima e, 03 October 2014
Copulas have become an important tool for describing and analyzing the dependence structure between random variables and stochastic processes. Recently, several nonparametric estimation procedures have appeared, using kernels and wavelets. In this context, knowing that a copula function can be expanded in a wavelet basis, we propose a nonparametric wavelet estimator of the copula function for independent data and for time series under an alpha-mixing condition. The main feature of this estimator is that it estimates the copula function directly, without any assumption about the data distribution and without prior ARMA-GARCH fitting, as is done in parametric copula estimation. Convergence rates for the proposed estimator are computed in both cases, establishing its consistency. Simulation studies and applications to real data sets are also presented.
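The estimator described above can be illustrated with a minimal sketch (our own, not the author's code): the simplest wavelet estimator of a copula density projects the pseudo-observations onto the Haar father-wavelet basis at a fixed resolution level, which amounts to a dyadic histogram on the unit square. All names and parameter values here are hypothetical.

```python
import numpy as np

def pseudo_observations(x):
    """Rank-transform each column of x to approximately Uniform(0,1) margins."""
    n = x.shape[0]
    ranks = np.argsort(np.argsort(x, axis=0), axis=0) + 1
    return ranks / (n + 1)

def haar_copula_density(u, v, level=3):
    """Project the empirical copula onto the Haar father-wavelet basis at
    resolution `level`: equivalent to a 2^level x 2^level dyadic histogram
    on [0,1]^2, rescaled so the estimate integrates to one."""
    k = 2 ** level
    counts, _, _ = np.histogram2d(u, v, bins=k, range=[[0, 1], [0, 1]])
    return counts / counts.sum() * k * k  # density value on each dyadic cell

# Toy positively dependent sample (no ARMA-GARCH pre-fitting needed)
rng = np.random.default_rng(0)
z = rng.multivariate_normal([0, 0], [[1, 0.7], [0.7, 1]], size=5000)
uv = pseudo_observations(z)
c_hat = haar_copula_density(uv[:, 0], uv[:, 1], level=3)
```

For positively dependent data the estimated density is largest in the cells along the main diagonal; smoother wavelet families and thresholding of detail coefficients, as studied in the thesis, refine this crude projection.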
|
92 |
Modelování vícerozměrné závislosti pomocí kopula funkcí / Multivariate Dependence Modeling Using Copulas. Klaus, Marek, January 2012
Multivariate volatility models, such as DCC MGARCH, are estimated under the assumption that the random variables follow a multivariate normal distribution, although this assumption has been rejected by empirical evidence. The estimated conditional correlation may therefore fail to explain the whole dependence structure, since under non-normality linear correlation is only one of several dependence measures. The aim of this thesis is to add a copula function to the DCC MGARCH model, as copulas can link non-normal marginal distributions to form the corresponding multivariate joint distribution. The copula-based MGARCH model with uncorrelated dependent errors permits modeling conditional correlation by DCC MGARCH and dependence by the copula function, separately and simultaneously. In other words, the model aims to explain additional dependence not captured by the traditional DCC MGARCH model due to its normality assumption. In the empirical analysis we apply the model to datasets consisting primarily of stocks of the PX Index and to the pair of S&P500 and NASDAQ100, in order to compare the copula-based MGARCH model with the traditional DCC MGARCH in terms of capturing the dependence structure.
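As a hedged illustration of the separation the abstract describes, the dependence of the (possibly non-normal) innovations can be estimated from standardized residuals by a rank-based Gaussian copula fit. This sketch is ours and sidesteps the GARCH filtering step, treating the residuals as given; the data and parameters are made up.

```python
import numpy as np
from scipy import stats

def gaussian_copula_corr(resid):
    """Rank-based (normal-scores) estimate of a Gaussian copula correlation
    matrix from a T x k array of standardized residuals.  The marginals are
    removed by the rank transform, so only the dependence structure enters."""
    T = resid.shape[0]
    u = stats.rankdata(resid, axis=0) / (T + 1)  # pseudo-uniform margins
    z = stats.norm.ppf(u)                        # normal scores
    return np.corrcoef(z, rowvar=False)

# Toy residuals: heavy-tailed t(4) margins sharing a Gaussian copula (rho = 0.6)
rng = np.random.default_rng(1)
eps = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=4000)
resid = stats.t.ppf(stats.norm.cdf(eps), df=4)
R = gaussian_copula_corr(resid)
```

Because the estimator works on ranks, it recovers the copula correlation even though the margins are heavy-tailed rather than normal, which is the point of separating dependence from the marginal model.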
|
93 |
Pokročilejší techniky agregace rizik / Advanced Techniques of Risk Aggregation. Dufek, Jaroslav, January 2012
In recent years Value-at-Risk (VaR) has become a very popular and frequently used risk measure, employed in most financial institutions. VaR owes its popularity to its simple interpretation and simple evaluation. Evaluating the VaR of a sum of several dependent risks, however, is a hard problem, so in practice VaR is estimated. In this thesis we study the theory of stochastic bounding and use it to obtain bounds for the VaR of a sum of several dependent risks. We then show how the bounds can be generalized using the theory of copulas, and present a numerical algorithm for evaluating the bounds when an exact analytical evaluation is not possible. In the final part we illustrate our results on practical examples.
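The effect of the dependence structure on the VaR of a sum can be felt with a small Monte Carlo sketch (ours, with hypothetical lognormal risks): under the comonotonic copula VaR is additive, while independence yields a smaller high-level quantile, giving a sense of the range that the bounds studied in the thesis control.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, alpha = 200_000, 0.99
marginal = stats.lognorm(s=0.5)          # two identical lognormal risks

u = rng.uniform(size=n)
# Comonotonic coupling: both risks driven by the same uniform, so VaR is
# additive: VaR(X1 + X2) = VaR(X1) + VaR(X2)
var_comon = np.quantile(marginal.ppf(u) + marginal.ppf(u), alpha)
# Independent coupling: a fresh uniform for the second risk
var_indep = np.quantile(marginal.ppf(u) + marginal.ppf(rng.uniform(size=n)), alpha)
```

The worst-case VaR over all copulas can exceed even the comonotonic value, which is why sharp bounds require the more delicate stochastic-ordering arguments developed in the thesis.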
|
94 |
A sintaxe das construções semiclivadas e pseudoclivadas no português brasileiro / The Syntax of Semiclefts and Pseudoclefts Constructions in Brazilian Portuguese. Mariana Santos de Resenes, 06 May 2014
This dissertation aims to describe and analyse semicleft and pseudocleft constructions in Brazilian Portuguese against the background of Generative Theory. Given the heterogeneity of the facts analysed, we propose an approach that divides both constructions into two types. Regarding semiclefts, one type (the large group) receives a monoclausal analysis, independent of pseudoclefts; these are called true semiclefts. The other type, well delimited and restricted, receives a biclausal analysis as a reduced version of pseudoclefts. In our monoclausal analysis of the syntax of true semiclefts, the copula, which immediately precedes the focus, is the spell-out of a functional category involved in the establishment of predication relations, either a RELATOR or a LINKER, following the syntax of predication developed in Den Dikken (2006a).
Exploring the application of Predicate Inversion, with its implications, such as the emergence of a copula and the freezing of the subject of predication to syntactic manipulations, which stays in situ and invariably receives the focus interpretation at the interfaces, we extend the treatment given to the syntax of predication, treating the 'object of' relation as an essentially predicational relation as well. As for pseudoclefts, the two recognized types both receive biclausal analyses (contra monoclausal and reconstruction analyses). The proposal of a dual analysis of pseudoclefts, into type A and type B, following Den Dikken, Meinunger & Wilder (2000), is based on the heterogeneity of the data in our language, especially the differences between the two patterns of these constructions (with initial or final wh-clause) and the type of wh-clause that can occur in each (an interrogative or a free relative), data that resist a single, fully homogeneous treatment. This dual analysis has crosslinguistic scope, partial or total, according to the characteristics of the languages. We show how different wh-clauses are related to different patterns of pseudoclefts, each with its particular syntactic structure, as well as some of the contrasting consequences they give rise to, as predicted. We also claim that, among the controversial connectivity effects that pseudoclefts can exhibit, some deserve a syntactic account whereas others do not. Connectivity effects involving anaphors and bound pronouns are better handled by a semantic account, given that they occur even in simple (non-cleft) specificational copular sentences, regardless of the order of the pre- and postcopular terms, and that even an analysis via reconstruction (the last chance for a syntactic treatment) is sometimes impossible for them.
Crucially, connectivity effects involving NPIs (negative polarity items) and Case are the ones that distinguish the two types of pseudoclefts. Type A pseudoclefts receive a biclausal analysis with ellipsis, in which the wh-clause is an interrogative and the counterweight is a full sentence, the complete answer to that question, subject to (strongly favoured) ellipsis of the repeated material. By analogy with question-answer pairs, these pseudoclefts are called self-answering questions, with a 'topic-comment' structure whose order is rigid, yielding only the wh-clause-initial pattern (question < answer). Type B pseudoclefts, on the other hand, are based on a predicational structure, a small clause. They also receive a biclausal analysis, but of the WYSIWYG (what you see is what you get) kind, in which the wh-clause, here a free relative, is the predicate, and the XP, its subject, is necessarily the focus of the sentence. The free relative, as a predicate nominal, is subject to Predicate Inversion; consequently, the order of type B pseudoclefts is more flexible, producing both the wh-clause-initial and the wh-clause-final pattern. Finally, we return to the second type of semiclefts, the small restricted group that is a reduced version of pseudoclefts, as indicated by the facts concerning agreement on the lexical verb. The occurrence of these semiclefts is limited to subject semiclefts. Given the two types of pseudoclefts, we show that only type A allows 'reduction'. Contributing factors are the rigid order, characteristic of semiclefts and of type A pseudoclefts only, and the possibility of omitting the wh-pronoun ('wh-drop'), available in interrogatives (as an option provided by Universal Grammar) but never in free relatives. Since these reduced pseudoclefts are limited to subject ones, their null wh (an instance of 'wh-drop') is reanalysed as pro-wh in the Romance languages that have these constructions.
|
95 |
Quantitative analysis of extreme risks in insurance and finance. Yuan, Zhongyi, 01 May 2013
In this thesis, we aim at a quantitative understanding of extreme risks. We use heavy-tailed distribution functions to model extreme risks, and use various tools, such as copulas and multivariate regular variation (MRV), to model dependence structures. We focus on modeling as well as quantitatively estimating certain measurements of extreme risks.
We start with a credit risk management problem. More specifically, we consider a credit portfolio of multiple obligors subject to possible default. We propose a new structural model for the loss given default, which takes into account the severity of default. Then we study the tail behavior of the loss given default under the assumption that the losses of the obligors jointly follow an MRV structure. This structure provides an ideal framework for modeling both heavy tails and asymptotic dependence. Using hidden regular variation (HRV), we also accommodate the asymptotically independent case. Multivariate models involving Archimedean copulas, mixtures and linear transforms are revisited.
We then derive asymptotic estimates for the Value at Risk and Conditional Tail Expectation of the loss given default and compare them with the traditional empirical estimates.
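The empirical estimates that the asymptotic formulas are compared against are straightforward; a minimal sketch (ours, on a toy exponential loss where the answers are known in closed form) is:

```python
import numpy as np

def empirical_var_cte(losses, p=0.99):
    """Empirical Value at Risk (upper p-quantile of the loss) and
    Conditional Tail Expectation (mean loss given the loss exceeds VaR)."""
    losses = np.asarray(losses)
    var = np.quantile(losses, p)
    cte = losses[losses >= var].mean()
    return var, cte

# Toy check on Exp(1) losses, where VaR_p = -log(1 - p) and, by the
# memoryless property, CTE_p = VaR_p + 1
rng = np.random.default_rng(3)
var99, cte99 = empirical_var_cte(rng.exponential(size=1_000_000), p=0.99)
```

Such empirical estimators become unreliable far out in the tail, where only a handful of observations remain, which is exactly where the asymptotic estimates of the thesis are meant to take over.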
Next, we consider an investor who invests in multiple lines of business and study a capital allocation problem. A randomly weighted sum structure is proposed, which can capture both the heavy-tailedness of the losses and the dependence among them, while at the same time separating the magnitudes from the dependence. To pursue as much generality as possible, we do not impose any requirement on the dependence structure of the random weights. We first study the tail behavior of the total loss and obtain asymptotic formulas under various sets of conditions. Then we derive asymptotic formulas for capital allocation and further refine them to be explicit in some cases.
Finally, we conduct extreme risk analysis for an insurer who makes investments. We consider a discrete-time risk model in which the insurer is allowed to invest a proportion of its wealth in a risky stock and keep the rest in a risk-free bond. Assume that the claim amounts within individual periods follow an autoregressive process with heavy-tailed innovations and that the log-returns of the stock follow another autoregressive process, independent of the former one. We derive an asymptotic formula for the finite-time ruin probability and propose a hybrid method, combining simulation with asymptotics, to compute this ruin probability more efficiently. As an application, we consider a portfolio optimization problem in which we determine the proportion invested in the risky stock that maximizes the expected terminal wealth subject to a constraint on the ruin probability.
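The simulation half of the hybrid method can be sketched as crude Monte Carlo over the discrete-time wealth recursion. This sketch is ours: the thesis uses autoregressive claims and log-returns, while the simplified iid version below, with made-up parameter values, only illustrates the mechanics.

```python
import numpy as np

def ruin_probability(w0=10.0, premium=1.5, pi=0.3, horizon=20,
                     n_paths=100_000, seed=4):
    """Crude Monte Carlo finite-time ruin probability for a simplified
    discrete-time model: a fraction `pi` of wealth is held in a lognormal
    risky stock, the rest in a bond at rate r, and claims are iid Pareto
    (heavy-tailed).  All parameter values are illustrative."""
    rng = np.random.default_rng(seed)
    r = 0.02
    wealth = np.full(n_paths, w0)
    ruined = np.zeros(n_paths, dtype=bool)
    for _ in range(horizon):
        growth = pi * np.exp(rng.normal(0.04, 0.2, n_paths)) + (1 - pi) * (1 + r)
        claims = rng.pareto(2.5, n_paths) + 1.0  # Pareto with tail index 2.5
        wealth = wealth * growth + premium - claims
        ruined |= wealth < 0.0                   # latch once ruined
    return ruined.mean()

p_low_capital = ruin_probability(w0=2.0)
p_high_capital = ruin_probability(w0=20.0)
```

A hybrid scheme in the spirit of the thesis would replace the deep-tail part of this simulation by the asymptotic formula, which is precisely where crude Monte Carlo is least efficient.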
|
96 |
Three Essays on US Agricultural Insurance. Kim, Taehoo, 01 May 2016
Many economists and policy analysts have conducted studies on crop insurance. Three research gaps are identified: i) moral hazard in prevented planting (PP), ii) choice of PP and planting a second crop, and iii) selecting margin protection in the Dairy Margin Protection Program (MPP-Dairy).
The first essay analyzes the existence of moral hazard in PP. The PP provision is defined as the “failure to plant an insured crop by the final planting date due to adverse events”. If the farmer decides not to plant a crop, the farmer receives a PP indemnity. Late planting (LP) is an option for the farmer to plant a crop while maintaining crop insurance after the final planting date. Crop insurance may alter farmers’ behavior in selecting PP or LP and could increase the likelihood of PP claims even though farmers can choose LP. This study finds evidence that a farmer with higher insurance coverage tends to choose PP more often (moral hazard). Spatial panel models attest to the existence of moral hazard in PP empirically.
If a farmer chooses PP, s/he receives the PP indemnity and may either leave the acreage unplanted or plant a second crop, e.g., soybean for corn. If the farmer plants a second crop after the PP claim, the farmer receives 35% of the PP payment. The current PP provision fails to give farmers an incentive to plant a second crop; 99.9% of PP-claiming farmers do not plant one. Adjusting the PP indemnity payment may encourage farmers to plant a second crop. The second essay explores this question using a stochastic simulation and suggests increasing the PP payment by 10%-15%.
The third essay investigates why Wisconsin dairy farmers purchase more supplementary protection than California farmers under the Dairy Margin Protection Program (MPP-Dairy) introduced in the 2014 Farm Bill. MPP-Dairy provides dairy producers with margin protection when the national dairy margin falls below a farmer-selected threshold. Using copula models, this study determines whether conditional probabilities relating regional and national margins play a role in farmers' decisions to purchase supplementary coverage. Results indicate that Wisconsin farmers face higher conditional probabilities and purchase more buy-up coverage.
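The kind of conditional-probability comparison described above can be made concrete with an explicit copula. Here a Clayton copula (our choice for illustration only; the dependence parameters are made up, not estimated from the thesis's data) gives P(regional margin low | national margin low) in closed form as C(u, v)/v.

```python
def clayton_cdf(u, v, theta):
    """Clayton copula C(u, v) = (u^-theta + v^-theta - 1)^(-1/theta), theta > 0."""
    return (u ** -theta + v ** -theta - 1.0) ** (-1.0 / theta)

def prob_regional_low_given_national_low(u, v, theta):
    """P(regional margin below its u-quantile | national margin below its
    v-quantile) = C(u, v) / v."""
    return clayton_cdf(u, v, theta) / v

# Hypothetical dependence: stronger for Wisconsin than for California
p_wi = prob_regional_low_given_national_low(0.1, 0.1, theta=5.0)
p_ca = prob_regional_low_given_national_low(0.1, 0.1, theta=0.5)
```

With these illustrative parameters the strongly dependent region sees a far higher chance that its own margin is squeezed whenever the national margin is, which is the mechanism the essay offers for the heavier buy-up purchases.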
|
97 |
Contributions to Extreme Value Theory in Finite and Infinite Dimensions: With a Focus on Testing for Generalized Pareto Models / Beiträge zur endlich- und unendlichdimensionalen Extremwerttheorie: Mit einem Schwerpunkt auf Tests auf verallgemeinerte Pareto-Modelle. Aulbach, Stefan, January 2015
Extreme value theory aims at modeling extreme but rare events from a probabilistic point of view. It is well known that so-called generalized Pareto distributions, which are briefly reviewed in Chapter 1, are the only reasonable probability distributions suited for modeling observations above a high threshold, such as waves exceeding the height of a certain dike, earthquakes having at least a certain intensity, and, after a simple transformation, share prices falling below some low threshold. However, there are cases in which a generalized Pareto model might fail. Chapter 2 therefore derives certain neighborhoods of a generalized Pareto distribution and provides several statistical tests for these neighborhoods, considering both the case of finite-dimensional data and that of continuous functions on [0,1]. Using a notation based on so-called D-norms, it is shown that these tests consistently link both frameworks, the finite-dimensional and the functional one. Since the derivation of the asymptotic distributions of the test statistics requires certain technical restrictions, Chapter 3 analyzes these assumptions in more detail. In particular, it provides examples of distributions that satisfy the null hypothesis and of distributions that do not. Since continuous copula processes are crucial tools for the functional versions of the proposed tests, it is also discussed whether such copula processes actually exist for a given set of data, and some practical advice is given on choosing the free parameters incorporated in the test statistics. Finally, a simulation study in Chapter 4 compares the three test statistics with another test from the literature that has a similar null hypothesis. The thesis ends with a short summary of the results and an outlook on further open questions.
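The peaks-over-threshold setting the thesis starts from can be sketched in a few lines (ours, on toy heavy-tailed data): excesses over a high threshold are fitted by a generalized Pareto distribution.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
sample = rng.standard_t(df=4, size=50_000)   # toy heavy-tailed observations

threshold = np.quantile(sample, 0.98)        # high threshold
excesses = sample[sample > threshold] - threshold

# Fit a generalized Pareto distribution to the excesses (location fixed at 0);
# for t(4) data the true GPD shape parameter is 1/4
shape, loc, scale = stats.genpareto.fit(excesses, floc=0)
```

The tests developed in Chapter 2 then ask whether such a GPD model is adequate at all, or whether the underlying distribution only lies in a neighborhood of a generalized Pareto distribution.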
|
98 |
Focus asymmetries in Bura. Hartmann, Katharina; Jacob, Peggy; Zimmermann, Malte, January 2008
This article investigates focus marking in Bura (Chadic), which exhibits a number of asymmetries: grammatical focus marking is obligatory only with focused subjects, where focus is marked by the particle án following the subject. Focused subjects remain in situ, and the complement of án is a regular VP. With non-subject foci, án appears in a cleft structure between the fronted focus constituent and a relative clause. We present a semantically unified analysis of focus marking in Bura that treats the particle as a focus-marking copula in T that takes a property-denoting expression (the background) and an individual-denoting expression (the focus) as arguments. The article also investigates the realization of predicate and polarity focus, which are almost never marked. The upshot of the discussion is that Bura shares many characteristic traits of focus marking with other Chadic languages, but crucially differs in exhibiting a structural difference between the marking of focus on subjects and on non-subject constituents.
|
99 |
D- and Ds-optimal Designs for Estimation of Parameters in Bivariate Copula Models. Liu, Hua-Kun, 27 July 2007
For current status data, the failure time of interest may not be observed exactly: each observation consists only of a monitoring time and the knowledge of whether the failure occurred before or after that monitoring time. The choice of the monitoring time is therefore crucial for extracting as much information as possible from such data. In this work, optimal designs for choosing the monitoring times so as to maximize the information obtained in a bivariate copula (Clayton) model are investigated. The D-optimality criterion is used to decide the best monitoring times C_i (i = 1, ..., n), which are then used to estimate the unknown parameters simultaneously by maximizing the corresponding likelihood function. Ds-optimal designs for estimating the association parameter of the copula model are also discussed. Simulation studies are presented to compare the performance of the estimation using the monitoring times C*_D and C*_Ds.
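For a single unknown parameter, the D-criterion reduces to maximizing the Fisher information of the Bernoulli current-status observation. The one-dimensional sketch below is ours, with an exponential failure time standing in for the bivariate Clayton model of the thesis; the rate value is hypothetical.

```python
import numpy as np

def current_status_info(c, theta):
    """Fisher information about theta carried by the indicator 1{T <= c}
    when T ~ Exp(theta): I(c) = (dp/dtheta)^2 / (p (1 - p)),
    with p = P(T <= c) = 1 - exp(-theta * c)."""
    p = 1.0 - np.exp(-theta * c)
    dp_dtheta = c * np.exp(-theta * c)
    return dp_dtheta ** 2 / (p * (1.0 - p))

theta = 2.0                                  # hypothetical true rate
grid = np.linspace(0.01, 3.0, 5000)          # candidate monitoring times
c_opt = grid[np.argmax(current_status_info(grid, theta))]
```

The maximizer satisfies theta * c_opt of roughly 1.59, i.e. a monitoring time at which about 80% of subjects have already failed; in the bivariate Clayton case the same idea is applied to the determinant of the full (for D-optimality) or partial (for Ds-optimality) information matrix.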
|
100 |
Application of Entropy Theory in Hydrologic Analysis and Simulation. Hao, Zengchao, May 2012
The dissertation focuses on the application of entropy theory in hydrologic analysis and simulation, namely, rainfall analysis, streamflow simulation and drought analysis.
The extreme value distribution has been employed for modeling extreme rainfall values. Based on an analysis of how the frequency distribution of annual rainfall maxima in Texas changes with duration, climate zone and distance from the sea, an entropy-based distribution is proposed as an alternative for modeling extreme rainfall values. The performance of the entropy-based distribution is validated by comparison with the commonly used generalized extreme value (GEV) distribution on synthetic and observed data, and the entropy-based distribution is shown to be preferable for extreme rainfall values with high skewness.
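For context, the GEV benchmark used in the comparison can be fitted in a few lines. The sketch below is ours, on synthetic block maxima; note that scipy's shape parameter `c` carries the opposite sign to the usual GEV shape.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
# Synthetic "annual maxima": maximum of 365 exponential daily amounts (mm)
maxima = rng.exponential(scale=20.0, size=(100, 365)).max(axis=1)

# Fit the GEV; exponential maxima fall in the Gumbel domain of attraction,
# so the fitted shape should be close to zero
c, loc, scale = stats.genextreme.fit(maxima)
return_level_100 = stats.genextreme.ppf(1 - 1 / 100, c, loc, scale)  # 100-yr level
```

Return levels like `return_level_100` are the quantities a design engineer ultimately reads off the fitted distribution, so the comparison between the GEV and the entropy-based alternative is naturally framed in terms of them.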
An entropy-based method is proposed for single-site monthly streamflow simulation. An entropy-copula method is also proposed to simplify the entropy-based method and to preserve the inter-annual dependence of monthly streamflow. Both methods are shown to preserve statistics such as the mean, standard deviation, skewness and lag-one correlation well for monthly streamflow in the Colorado River basin. The entropy and entropy-copula methods are also extended to multi-site annual streamflow simulation at four stations in the Colorado River basin. Simulation results show that both methods preserve the mean, standard deviation and skewness equally well but differ in preserving the dependence structure (e.g., the Pearson linear correlation).
An entropy-based method is proposed for constructing the joint distribution of drought variables with different marginal distributions and is applied to drought analysis based on monthly streamflow of the Brazos River at Waco, Texas. Coupling entropy theory and copula theory, an entropy-copula method is also proposed for constructing the joint distribution for drought analysis, illustrated with a case study based on Palmer drought severity index (PDSI) data for Climate Division 5 in Texas.
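The copula route to a joint drought distribution with unlike marginals is easy to sketch. This is our own toy version: the marginal families, their parameters, and the Gumbel copula choice are all hypothetical, and Sklar's theorem glues the fitted marginals together.

```python
import numpy as np
from scipy import stats

# Hypothetical marginals: drought duration ~ Gamma, severity ~ Lognormal
duration = stats.gamma(a=2.0, scale=3.0)
severity = stats.lognorm(s=0.8, scale=5.0)

def gumbel_copula(u, v, theta=2.0):
    """Gumbel copula (theta >= 1): upper-tail dependent, a common choice
    for jointly extreme drought variables."""
    t = (-np.log(u)) ** theta + (-np.log(v)) ** theta
    return np.exp(-t ** (1.0 / theta))

def joint_cdf(d, s, theta=2.0):
    """Sklar's theorem: F(d, s) = C(F_D(d), F_S(s))."""
    return gumbel_copula(duration.cdf(d), severity.cdf(s), theta)

def joint_exceedance(d0, s0, theta=2.0):
    """P(D > d0, S > s0) by inclusion-exclusion."""
    return 1.0 - duration.cdf(d0) - severity.cdf(s0) + joint_cdf(d0, s0, theta)

# Probability that a drought exceeds the median in both duration and severity
p_joint = joint_exceedance(duration.ppf(0.5), severity.ppf(0.5))
```

The entropy-copula variant of the thesis derives the marginal distributions themselves from maximum-entropy principles instead of assuming parametric families as done here; only the gluing step is the same.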
|