181
Théorie des valeurs extrêmes et applications en environnement / Extreme value theory and applications in environment
Rietsch, Théo, 14 November 2013 (has links)
The first two chapters of this thesis address two questions that are critical in climatology. The first is whether a change in the behaviour of temperature extremes occurred between the beginning of the century and today. We use the Kullback-Leibler divergence, adapted to the extreme value context, and provide theoretical and simulation results to validate the approach. The second question is where to remove weather stations from a network so as to lose the least information about the behaviour of the extremes. An algorithm, the Query By Committee, is developed and applied to a real data set. The last chapter of the thesis deals with the robust estimation of the tail index of a Weibull-type distribution in the presence of random covariates. We propose a robust estimator based on a criterion of minimization of the divergence between two densities and study its properties.
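The Kullback-Leibler divergence at the heart of the first chapter can be illustrated with a minimal discrete sketch (the histograms of exceedances and the binning below are hypothetical, not the estimator developed in the thesis):

```python
import math

def kl_divergence(p, q):
    """Discrete Kullback-Leibler divergence KL(p || q) for two
    probability vectors over the same bins."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical tail histograms (exceedance frequencies over a threshold)
# for two periods; both sum to 1.
early = [0.50, 0.30, 0.15, 0.05]   # early-century excess temperatures
today = [0.40, 0.30, 0.20, 0.10]   # present-day excess temperatures

d = kl_divergence(today, early)    # larger d = stronger change in the tail
```

A value of zero indicates identical tail behaviour; the thesis adapts this idea so that the divergence is driven by the extremes rather than the bulk of the distribution.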
182
Novelty detection with extreme value theory in vital-sign monitoring
Hugueny, Samuel Y., January 2013 (has links)
Every year in the UK, tens of thousands of hospital patients suffer adverse events, such as unplanned transfers to Intensive Therapy Units or unexpected cardiac arrests. Studies have shown that in a large majority of cases, significant physiological abnormalities can be observed within the 24-hour period preceding such events. Such warning signs may go unnoticed if they occur between observations by the nursing staff, or are simply not identified as such. Timely detection of these warning signs and appropriate escalation schemes have been shown to improve both patient outcomes and the use of hospital resources, most notably by reducing patients' length of stay. Automated real-time early-warning systems appear to be cost-efficient answers to the need for continuous vital-sign monitoring. Traditionally, a limitation of such systems has been their sensitivity to noisy and artefactual measurements, resulting in false-alert rates that made them unusable in practice, or earned them the mistrust of clinical staff. Tarassenko et al. (2005) and Hann (2008) proposed a novelty detection approach to the problem of continuous vital-sign monitoring, which, in a clinical trial, was shown to yield clinically acceptable false-alert rates. In this approach, an observation is compared to a data fusion model, and its "normality" assessed by comparing a chosen statistic to a pre-set threshold. The method, while informed by large amounts of training data, has a number of heuristic aspects. This thesis proposes a principled approach to multivariate novelty detection in stochastic time series, where novelty scores have a probabilistic interpretation and are explicitly linked to the starting assumptions made. Our approach stems from the observation that novelty detection using complex multivariate, multimodal generative models is generally an ill-defined problem when attempted in the data space.
In situations where "novel" is equivalent to "improbable with respect to a probability distribution", formulating the problem in a univariate probability space allows us to use classical results of univariate statistical theory. Specifically, we propose a multivariate extension to extreme value theory and, more generally, order statistics, suitable for performing novelty detection in time series generated from a multivariate, possibly multimodal model. All the methods introduced in this thesis are applied to a vital-sign monitoring problem and compared to the existing method of choice. We show that it is possible to outperform the existing method while retaining a probabilistic interpretation. In addition to their application to novelty detection for vital-sign monitoring, the contributions of this thesis to extreme value theory and order statistics are also valid in the broader context of data modelling, and may be useful for analysing data from other complex systems.
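The link between "improbable" and "extreme" can be sketched in one dimension: under a fitted univariate model with CDF F, the probability that the largest of n independent observations falls below x is F(x)^n, which gives a calibrated novelty score for window extremes (a toy Gaussian model of "normal" physiology here; the parameters and window sizes are illustrative, not those of the thesis):

```python
import math

def gauss_cdf(x, mu=0.0, sigma=1.0):
    """CDF of a univariate Gaussian model of normal physiology."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def novelty_score(x, n, mu=0.0, sigma=1.0):
    """P(max of n i.i.d. observations <= x): a value near 1 means x is
    unusually extreme even for a window of n samples."""
    return gauss_cdf(x, mu, sigma) ** n

# A 3-sigma reading is alarming in a short window but expected by
# chance in a long one, because long windows produce large extremes.
short = novelty_score(3.0, n=10)
long_ = novelty_score(3.0, n=10000)
```

This is why a fixed threshold on the raw statistic is heuristic: the probability of the same reading depends on how many observations the model has been compared against.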
183
Doba nezaměstnanosti v České republice pohledem analýzy přežití / Unemployment Duration in the Czech Republic Through the Lens of Survival Analysis
Čabla, Adam, January 2017 (has links)
The aim of this thesis is to apply survival analysis methods to data from the Labour Force Survey, which are interval-censored. For this type of data I use methods specifically designed to handle it, in particular the Turnbull estimate, the weighted log-rank test and the AFT model. Another objective is the design and application of a methodology for building a model of unemployment duration as a function of the available factors, together with its interpretation. A further aim is to evaluate the evolution of the probability distribution of unemployment duration and, last but not least, to create a more accurate estimate of the tail using extreme value theory. The main contributions of the thesis include a methodology for examining Labour Force Survey data based on standard survival analysis techniques. Since the data are internationally comparable, the methodology is applicable across European Union countries and several others. Another contribution is the estimation of the parameters of the generalized Pareto distribution on interval-censored data, and the creation and comparison of models of piecewise connected distribution functions, with a solution of the connection problem. The thesis produced empirical results, the most important of which is the comparison of results from three different data approaches and the specific relationship between selected factors and the time to find a job or the spell of unemployment.
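Fitting a generalized Pareto distribution to interval-censored durations can be sketched via the censored log-likelihood, where a spell known only to lie in (l, r] contributes the probability mass of that interval, F(r) - F(l) (the spells and parameter values below are illustrative assumptions, not estimates from the thesis):

```python
import math

def gpd_cdf(x, shape, scale):
    """CDF of the generalized Pareto distribution (location 0)."""
    if x <= 0:
        return 0.0
    if abs(shape) < 1e-12:
        return 1.0 - math.exp(-x / scale)
    z = 1.0 + shape * x / scale
    return 1.0 - z ** (-1.0 / shape) if z > 0 else 1.0

def interval_censored_loglik(intervals, shape, scale):
    """Log-likelihood of interval-censored data: each observation
    contributes log(F(r) - F(l)) rather than a density value."""
    ll = 0.0
    for l, r in intervals:
        mass = gpd_cdf(r, shape, scale) - gpd_cdf(l, shape, scale)
        ll += math.log(max(mass, 1e-300))
    return ll

# Unemployment spells known only up to month intervals (illustrative).
spells = [(0, 3), (3, 6), (6, 12), (12, 24), (24, 48)]
ll = interval_censored_loglik(spells, shape=0.3, scale=6.0)
```

Maximizing this function over shape and scale (numerically) gives the interval-censored GPD estimates; a poorly chosen scale yields a visibly lower likelihood.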
184
Approche algébrique et théorie des valeurs extrêmes pour la détection de ruptures : Application aux signaux biomédicaux / Algebraic approach and extreme value theory for change-point detection: Application to biomedical signals
Debbabi, Nehla, 14 December 2015 (has links)
This work develops unsupervised techniques for on-line detection and location of change-points in signals recorded in a noisy environment. These techniques are based on the combination of an algebraic approach with extreme value theory (EVT). The algebraic approach offers an easy identification of the change-points, characterizing them in terms of delayed Dirac distributions and their derivatives, which are easily handled via operational calculus. This algebraic characterization, which gives rise to an explicit expression of the change-point locations, is completed with a probabilistic interpretation in terms of extremes: a change-point is seen as a rare event with a relatively large associated amplitude. Based on EVT, these events are modelled by a generalized Pareto distribution. Several hybrid multi-component models are proposed in this work, modelling at the same time the mean behaviour (noise) and the extreme behaviour (change-points) of the signal after algebraic processing. Fully unsupervised algorithms are developed to estimate these hybrid models, avoiding the problems encountered with classical estimation methods, which are heuristic, graphical and ad hoc. The change-point detection algorithms developed in this thesis are validated on generated data and then applied to real data stemming from different phenomena, in which change-points carry the information to be extracted.
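The peaks-over-threshold step behind such a hybrid noise/extremes model can be sketched with the classical moment estimator of the generalized Pareto parameters, fitted to the exceedances of a signal over a high threshold (the 95% threshold and Gaussian toy signal are illustrative assumptions; the thesis develops its own unsupervised estimation):

```python
import random
import statistics

def gpd_moment_fit(exceedances):
    """Method-of-moments GPD estimates from exceedances over a
    threshold: with m the mean and v the variance of the excesses,
    shape xi = (1 - m^2/v) / 2 and scale sigma = m * (m^2/v + 1) / 2."""
    m = statistics.mean(exceedances)
    v = statistics.variance(exceedances)
    r = m * m / v
    return 0.5 * (1.0 - r), 0.5 * m * (r + 1.0)

random.seed(0)
signal = [random.gauss(0.0, 1.0) for _ in range(5000)]

# Threshold at the empirical 95th percentile; fit the GPD to excesses.
u = sorted(signal)[int(0.95 * len(signal))]
excesses = [x - u for x in signal if x > u]
xi, sigma = gpd_moment_fit(excesses)
```

In the hybrid models of the thesis, a central component describes the bulk (noise) below the threshold while the fitted GPD describes the tail where change-points live.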
185
Investigating Systematics In The Cosmological Data And Possible Departures From Cosmological Principle
Gupta, Shashikant, 08 1900 (has links) (PDF)
This thesis contributes to the field of dark energy and observational cosmology. We have investigated possible direction-dependent systematic signals and non-Gaussian features in the supernovae (SNe) Type Ia data. To detect these effects we propose a new method of analysis. Although we have used this technique on SNe Ia data, it is quite general and can be applied to other data sets as well.
SNe Ia are the most precise known distance indicators at cosmological distances. Their constant peak luminosity (after correction) makes them standard candles, and hence one can measure distances in the universe using SNe Ia. From these distance measurements one can determine various cosmological parameters, such as the Hubble constant, the various components of the matter density, and dark energy. Recent SNe Ia observations have shown that the expansion of the universe is currently accelerating. This acceleration is explained by invoking a component of the universe having negative pressure, termed dark energy. It can be described by a homogeneous and isotropic fluid with the equation of state P = wρ, where w is allowed to be negative. A constant (Λ) in the Einstein equation (known as the cosmological constant) can explain the acceleration; in the fluid model it corresponds to w = -1. Other models of dark energy with w ≠ -1 can also explain the acceleration; however, the precise nature of this mysterious component remains unknown. Although there exists a wide range of dark energy models, the cosmological constant provides the simplest explanation for the acceleration of the expansion of the Universe. The equation of state parameter w has been investigated by recent surveys, but the results are still consistent with a wide range of dark energy models. In order to discriminate among various cosmological models we need even more precise measurements of distances and error bars in the SNe Ia data.
From the central limit theorem we expect Gaussian errors in any experiment that is free from systematic noise. However, in astronomy we do not have control over the observed phenomena and thus cannot control the systematic errors (due to some physical processes in the Universe) in the observed data. The only possible way to deal with such data is by using appropriate statistical techniques. Among these systematic features, direction-dependent features are the more dangerous ones, since they may indicate a preferred direction in the Universe.
To address the issue of direction-dependent features we have developed a new technique (henceforth the Δ statistic) which is based on extreme value theory. We have applied this technique to the available high-z SNe Ia data from Riess et al. (2004) and Riess et al. (2007). In addition, we have applied it to the data from the HST Key Project for the H0 measurement. Below we summarize the material presented in the thesis.
Chapter-wise summary of the thesis
In the first chapter we present an introductory discussion of various basic cosmological notions, e.g. the Cosmological Principle (CP), observational evidence in support of the CP and departures from it, distance measures and large-scale structure. The observed departures from the CP could be present due to systematic errors and/or non-Gaussian error bars in the data. We discuss the errors involved in the measurement process.
Basics of statistical techniques: In the next two chapters we discuss the basics of the statistical techniques used in this thesis and extreme value theory. Extreme value theory describes how to calculate the distribution of extreme events. The simplest of the distributions of extremes is known as the Gumbel distribution. We discuss features of the Gumbel distribution, since it is used extensively in our analysis.
Δ statistic and features in the SNe data: In the fourth chapter we derive the Δ statistic and apply it to the SNe Ia data sets. An outline of the Δ statistic is as follows: a) We define a plane which cuts the sky into two hemispheres. This plane divides the data into two subsets, one in each hemisphere. b) We calculate the χ² in each hemisphere for an FRW universe, assuming a flat geometry. c) The difference of the χ² in the two hemispheres is calculated and maximized by rotating the plane. This maximum should follow the Gumbel distribution. Since it is difficult to calculate the analytic form of the Gumbel distribution, we calculate it numerically assuming Gaussian error bars. This gives the theoretical distribution for the maximum of the difference of χ² calculated above. The results indicate that GD04 shows systematic effects as well as non-Gaussian features, while the set GD07 is better in terms of systematic effects and non-Gaussian features.
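Steps a)-c) can be sketched as follows (a toy version with synthetic sky positions and Hubble-diagram residuals, scanning random plane orientations rather than the systematic maximization of the thesis; all numbers are illustrative):

```python
import random

def delta_statistic(directions, residuals, sigmas, n_planes=200, seed=1):
    """Toy Δ statistic: for many random plane normals, split the sample
    into two hemispheres, compute chi^2 in each from the residuals, and
    return the maximum absolute chi^2 difference over the tried planes."""
    rng = random.Random(seed)
    best = 0.0
    for _ in range(n_planes):
        # Random plane normal (isotropic via Gaussian components).
        n = [rng.gauss(0, 1) for _ in range(3)]
        chi2 = [0.0, 0.0]
        for d, r, s in zip(directions, residuals, sigmas):
            side = 0 if sum(a * b for a, b in zip(n, d)) >= 0 else 1
            chi2[side] += (r / s) ** 2
        best = max(best, abs(chi2[0] - chi2[1]))
    return best

rng = random.Random(0)
dirs = [[rng.gauss(0, 1) for _ in range(3)] for _ in range(100)]
res = [rng.gauss(0.0, 0.1) for _ in range(100)]   # fit residuals
sig = [0.1] * 100
delta = delta_statistic(dirs, res, sig)
```

Being a maximum over many hemisphere splits, this quantity is exactly the kind of statistic whose null distribution extreme value theory (the Gumbel family) describes.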
Non-Gaussian features in the H0 data: The HST Key Project measures the value of the Hubble constant at the level of 10% accuracy, which requires precise measurement of distances. It uses various methods to measure distance, for instance SNe Ia, the Tully-Fisher relation, surface-brightness fluctuations, etc. In the fifth chapter we apply the Δ statistic to the HST Key Project data in order to check for the presence of non-Gaussian and direction-dependent features. Our results show that although this data set seems to be free of direction-dependent features, it is inconsistent with Gaussian errors.
Analytic marginalization: The quantities of real interest in cosmology are ΩM and ΩΛ; the Hubble constant can in principle be treated as a nuisance parameter. It would be useful to marginalize over this nuisance parameter. Although this can be done numerically using a Bayesian method, the Δ statistic does not allow it. In chapter six we propose a method to marginalize over H0 analytically. The χ² in this case is a complicated function of the errors in the data. We compare this analytic method with the Bayesian marginalization method, and the results show that the two methods are quite consistent. We apply the Δ statistic to the SNe data after the analytic marginalization. The results do not change much, indicating the insensitivity of the direction-dependent features to the Hubble constant.
A variation of the Δ statistic: As discussed earlier, it is difficult to calculate the theoretical distribution of Δ in general. However, if the parent distribution satisfies certain conditions, it is possible to derive the analytic form of the Gumbel distribution for Δ. In the seventh chapter we derive a variation of the Δ statistic that allows us to calculate the analytic distribution. The results in this case are different from those presented earlier, but they confirm the same direction-dependent and non-Gaussian features in the data.
186
Evaluation et optimisation des performances de fonctions pour la surveillance de turboréacteurs / Evaluation and optimization of function performances for the monitoring of turbojet engines
Hmad, Ouadie, 06 December 2013 (has links)
This thesis deals with monitoring systems for turbojet engines. The development of such systems requires a performance evaluation and optimization phase prior to their introduction into operation. The work focused on this phase, and more specifically on the performance of the detection and prognostic functions of two systems. Performance metrics related to each of these functions, as well as their estimation, have been defined. The monitored systems are, on the one hand, the start sequence for the detection function and, on the other hand, oil consumption for the prognostic function. Since the data used come from flights in operation without degradation, simulations of degradations were necessary for the performance assessment. Optimization of the detection performance was obtained by tuning a threshold on the decision statistic, taking into account airline requirements expressed in terms of good detection rate and false alarm rate. Two approaches have been considered and their performances compared in their best configurations. The prognostic performance for oil over-consumption, simulated using Gamma processes, was assessed on the basis of the relevance of the maintenance decision induced by the prognostic. This thesis has allowed quantifying and improving the performance of the two considered functions to meet the airline requirements. Other possible improvements are proposed as perspectives to conclude this thesis.
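The threshold-tuning step can be sketched as a scan over candidate thresholds, keeping those whose estimated false alarm rate stays under the requirement and reporting the resulting detection rate (the scores, labels and 1% requirement below are illustrative assumptions, not airline figures):

```python
import random

def tune_threshold(scores_normal, scores_degraded, max_far=0.01):
    """Pick the lowest threshold whose false alarm rate (FAR) on healthy
    flights is <= max_far, and report its good detection rate (GDR)."""
    candidates = sorted(set(scores_normal + scores_degraded))
    for t in candidates:
        far = sum(s > t for s in scores_normal) / len(scores_normal)
        if far <= max_far:
            gdr = sum(s > t for s in scores_degraded) / len(scores_degraded)
            return t, far, gdr
    return None

rng = random.Random(42)
healthy = [rng.gauss(0.0, 1.0) for _ in range(2000)]    # nominal starts
degraded = [rng.gauss(3.5, 1.0) for _ in range(200)]    # simulated faults
threshold, far, gdr = tune_threshold(healthy, degraded, max_far=0.01)
```

Scanning thresholds in ascending order returns the smallest one meeting the false-alarm requirement, which maximizes the detection rate under that constraint.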
187
Metodologie rakouské školy: Vybraní autoři a problémové okruhy / The Methodology of the Austrian School: Chosen Authors and Problematic Aspects
Hlavík, Petr, January 2007 (has links)
The aim is to present the specific methodology of the Austrian School, laying stress on the prerequisites which determine its understanding of phenomena.
188
Využití teorie extrémních hodnot při řízení operačních rizik / Extreme Value Theory in Operational Risk Management
Vojtěch, Jan, January 2009 (has links)
Currently, financial institutions are required to analyze and quantify a new type of banking risk, known as operational risk. Financial institutions are exposed to this risk in their everyday activities. The main objective of this work is to construct an acceptable statistical model of capital requirement computation. Such a model must respect the specificity of losses arising from operational risk events. The fundamental task is to search for a suitable distribution describing the probabilistic behaviour of losses arising from this type of risk. Heavy use is made of the Pickands-Balkema-de Haan theorem from extreme value theory: roughly speaking, the distribution of a random variable exceeding a given high threshold converges in distribution to the generalized Pareto distribution. The theorem is subsequently used in estimating a high percentile from a simulated distribution. The simulated distribution is a compound model for the aggregate loss random variable, constructed as a combination of a frequency distribution for the number of losses and a so-called severity distribution for the individual loss. The proposed model is then used to estimate a final quantile, which represents the sought amount of capital requirement. This capital requirement is the amount of funds the bank is supposed to retain in order to make up for the projected lack of funds; there is a given, commonly quite small, probability that the capital charge will be exceeded. Although the combination of a frequency distribution and a severity distribution is the common way to deal with the described problem, the final application is often problematic. In particular, the severity distribution is sometimes taken as a combination of two or three distributions, for instance lognormal distributions with different location and scale parameters. Models like these usually lack any theoretical background, and in particular the connecting of the distribution functions is often not conducted properly. In this work we deal with both problems. In addition, maximum likelihood estimates are derived for a lognormal distribution for which F_LN(u) = p holds, where u and p are given. The results achieved can be used in the everyday practice of financial institutions for operational risk quantification. They can also be used for the analysis of a variety of sample data with so-called heavy tails, where standard distributions offer no help. As an integral part of this work, a CD with the source code of each function used in the model is included. All of these functions were created in the S-PLUS statistical programming environment. The fourth annex contains a complete description of each function, its purpose, and general syntax for possible usage in solving different kinds of problems.
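The compound frequency-severity construction can be sketched with a small Monte Carlo simulation: draw a Poisson number of losses per year, draw each loss from a heavy-tailed severity, and read the capital requirement off a high quantile of the simulated aggregate losses (all parameters below are illustrative assumptions, not calibrated to any institution's data):

```python
import math
import random

def poisson(rng, lam):
    """Knuth's algorithm for a Poisson draw (fine for moderate lam)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def simulate_aggregate_losses(n_years, lam, mu, sigma, seed=7):
    """Aggregate yearly loss: Poisson(lam) count of lognormal losses."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n_years):
        n = poisson(rng, lam)
        totals.append(sum(rng.lognormvariate(mu, sigma) for _ in range(n)))
    return totals

totals = simulate_aggregate_losses(n_years=20000, lam=25, mu=10, sigma=2)
# Capital requirement as the 99.9% quantile of the aggregate distribution.
capital = sorted(totals)[int(0.999 * len(totals))]
```

In practice the empirical quantile at such extreme levels is noisy, which is precisely why the thesis brings in the Pickands-Balkema-de Haan result: fitting a generalized Pareto distribution above a high threshold stabilizes the tail estimate.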
189
Exchange market pressure: an evaluation using extreme value theory / Napětí na devizovém trhu: měření pomocí teorie extrémních hodnot
Zuzáková, Barbora, January 2013 (links)
This thesis discusses the phenomenon of currency crises; in particular, it is devoted to the empirical identification of crisis periods. As a crisis indicator we utilize an exchange market pressure index, which has proven a very powerful tool for quantifying exchange market pressure. Since computation of the exchange market pressure index is crucial for the further analysis, we pay special attention to different approaches to its construction. In the majority of the existing literature on exchange market pressure models, a currency crisis is defined as a period in which the exchange market pressure index exceeds a predetermined level. In contrast, we adopt a probabilistic approach using extreme value theory. Our goal is to show that stochastic methods are more accurate, in other words, that they are more reliable instruments for crisis identification. We illustrate the application of the proposed method on a sample of four central European countries over the period 1993-2012 (or 1993-2008, respectively): the Czech Republic, Hungary, Poland and Slovakia. The choice of the sample is motivated by the fact that these countries underwent transition reforms to market economies at the beginning of the 1990s and therefore could have been exposed to speculative attacks on their newly arisen currencies. These countries are often assumed to be a relatively homogeneous group at a similar stage of the integration process. Thus, a resembling development of exchange market pressure, particularly during the last third of the estimation period, would not be surprising.
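A common construction of the exchange market pressure index is a precision-weighted sum of exchange-rate depreciation, interest-rate changes and (negatively) reserve changes, with a crisis flagged when the index exceeds a fixed threshold; a minimal sketch of that conventional rule, which the thesis contrasts with the EVT-based approach (the weighting scheme and toy monthly data are illustrative assumptions):

```python
import statistics

def emp_index(d_exchange, d_interest, d_reserves):
    """Exchange market pressure index: each component is weighted by
    the inverse of its standard deviation so that no single volatile
    series dominates; reserve losses add to the pressure."""
    w_e = 1.0 / statistics.stdev(d_exchange)
    w_i = 1.0 / statistics.stdev(d_interest)
    w_r = 1.0 / statistics.stdev(d_reserves)
    return [w_e * e + w_i * i - w_r * r
            for e, i, r in zip(d_exchange, d_interest, d_reserves)]

# Toy monthly percentage changes; the last month mimics an attack:
# sharp depreciation, an interest-rate defence and falling reserves.
d_e = [0.2, -0.1, 0.3, 0.0, -0.2, 5.0]
d_i = [0.1, 0.0, 0.1, -0.1, 0.0, 3.0]
d_r = [0.5, -0.3, 0.2, 0.1, -0.4, -6.0]

emp = emp_index(d_e, d_i, d_r)
# Conventional threshold rule: crisis when index > mean + 2 stdev.
cut = statistics.mean(emp) + 2 * statistics.pstdev(emp)
crises = [t for t, v in enumerate(emp) if v > cut]
```

The probabilistic alternative pursued in the thesis replaces the ad hoc mean-plus-two-sigma cut-off with a tail model from extreme value theory, so that the crisis threshold corresponds to a chosen exceedance probability.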
190
Non-Life Excess of Loss Reinsurance Pricing / Oceňování zajištění škodního nadměrku v neživotním pojištění
Hrevuš, Jan, January 2010 (has links)
Probably the most frequently used definition of reinsurance is insurance for insurance companies: through reinsurance, the cedant (insurance company) cedes part of its risk to the reinsurer. Reinsurance nowadays plays a crucial role in the insurance industry, as it not only reduces the reinsured's exposure but can also significantly reduce the required solvency capital. In the past few decades various approaches to actuarial reinsurance modelling have been published, and many actuaries nowadays specialize in reinsurance. The thesis provides an overview of the actuarial aspects of modelling a non-life per-risk excess of loss structure and, for motor third party liability, a per-event excess of loss structure; to the author's knowledge no study of such wide scope exists, and the various aspects must otherwise be found in fragmented articles published worldwide. The thesis is based on recent industry literature describing the latest trends and methodologies, and theory is compared with practice, as the author has working experience in underwriting at a CEE reinsurer and in actuarial reinsurance modelling at a global reinsurance broker. The sequence of topics follows the steps taken by an actuary modelling reinsurance, and each step is discussed in detail. Starting with data preparation, loss inflation and several individual claims development methods are introduced, and an own probabilistic model is constructed. Further, burning cost analysis and probabilistic rating focused on heavy-tailed distributions are discussed. Special attention is given to exposure rating, a discipline not commonly known among actuaries outside the reinsurance industry, and different methodologies for property and casualty exposure modelling are introduced, including many best-practice suggestions. All the main approaches to reinsurance modelling are also illustrated on either real or realistic-looking data, similar to those provided by European insurance companies to their reinsurers during renewal periods.
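The burning cost step can be sketched as follows: historical large losses are trended for inflation, ceded to the excess-of-loss layer, and the trended ceded losses are related to the trended premium volume (the layer terms, inflation rate and loss history below are illustrative assumptions):

```python
def layer_loss(loss, retention, limit):
    """Part of a single loss ceded to an excess-of-loss layer
    'limit xs retention'."""
    return min(max(loss - retention, 0.0), limit)

def burning_cost(losses_by_year, retention, limit, premiums_by_year,
                 inflation=0.04):
    """Burning cost: trended ceded losses divided by trended premium.
    Year 0 is the oldest year; amounts are trended to the current year."""
    n = len(losses_by_year)
    ceded = premium = 0.0
    for year, (losses, prem) in enumerate(zip(losses_by_year,
                                              premiums_by_year)):
        trend = (1.0 + inflation) ** (n - year)
        ceded += sum(layer_loss(l * trend, retention, limit)
                     for l in losses)
        premium += prem * trend
    return ceded / premium

# Five years of large-loss history (illustrative), layer 2m xs 1m.
losses = [[1.5e6, 0.8e6], [2.4e6], [], [1.2e6, 3.5e6], [0.9e6]]
premiums = [10e6] * 5
bc = burning_cost(losses, retention=1e6, limit=2e6,
                  premiums_by_year=premiums)
```

Because only a handful of losses pierce a high layer, the burning cost is volatile, which is why it is complemented in the thesis by probabilistic rating with heavy-tailed distributions and by exposure rating.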