1 |
Development of reservoir models using economic loss functions. Kilmartin, Donovan James, 03 September 2009.
As oil and gas supplies decrease, it becomes more important to quantify the uncertainty associated with reservoir models and the implementation of field development decisions. Various geostatistical methods have assisted in the development of field-scale models of reservoir heterogeneity. Sequential simulation algorithms in geostatistics require an assessment of local uncertainty in an attribute value at a location, followed by random sampling from the uncertainty distribution to retrieve the simulated value. Instead of randomly sampling an outcome from the uncertainty distribution, this thesis demonstrates the retrieval of an optimal simulated value at each location by considering an economic loss function.
By applying a loss function that depicts the economic impact of an over- or underestimation at a location and retrieving the optimal simulated value that minimizes the expected loss, a map of simulated values can be generated that accounts for the impact of permeability as it relates to economic loss. Both an asymmetric linear loss function and a parabolic loss function are investigated. The end result of this procedure will be a reservoir realization that exhibits the correct spatial characteristics (i.e. variogram reproduction) while, at the same time, exhibiting the minimum expected loss in terms of the parameters used to construct the loss function.
The process detailed in this thesis provides an effective alternative whereby realizations in the middle of the uncertainty distribution can be directly retrieved by applying suitable loss functions. By altering the loss function (so as to emphasize either under- or overestimation), other realizations at the extremes of the global uncertainty distribution can also be retrieved, thereby eliminating the need to generate a large suite of realizations in order to locate the global extremes of the uncertainty distribution.
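As a concrete illustration of the retrieval step described above, the following sketch picks, at a single location, the candidate value that minimizes the expected asymmetric linear loss over samples from the local uncertainty distribution. The cost parameters and the lognormal local distribution are hypothetical placeholders, not values from the thesis.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical local uncertainty distribution of permeability at one location,
# e.g. values drawn by a sequential simulation algorithm (placeholder: lognormal).
local_samples = rng.lognormal(mean=4.0, sigma=0.8, size=1000)

# Asymmetric linear loss: over- and underestimation are penalized at
# different (hypothetical) unit costs.
c_over, c_under = 1.0, 3.0

def expected_loss(z, samples):
    err = z - samples                       # positive -> overestimation
    return np.mean(np.where(err > 0, c_over * err, -c_under * err))

# Retrieve the simulated value that minimizes expected loss at this location.
candidates = np.quantile(local_samples, np.linspace(0.01, 0.99, 99))
z_opt = min(candidates, key=lambda z: expected_loss(z, local_samples))

# For asymmetric linear loss the minimizer is the c_under / (c_under + c_over)
# quantile of the local distribution, which gives a quick sanity check.
print(z_opt, np.quantile(local_samples, c_under / (c_under + c_over)))
```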
2 |
An investigation of a Bayesian decision-theoretic procedure in the context of mastery tests. Hsieh, Ming-Chuan, 01 January 2007.
The purpose of this study was to extend Glas and Vos's (1998) Bayesian procedure to the 3PL IRT model by using the MCMC method. In the context of fixed-length mastery tests, the Bayesian decision-theoretic procedure was compared with two conventional procedures (conventional Proportion Correct and conventional EAP) across different simulation conditions. Several simulation conditions were investigated, including two loss functions (linear and threshold loss functions), three item pools (high discrimination, moderate discrimination and a real item pool) and three test lengths (20, 40 and 60 items). Different loss parameters were manipulated in the Bayesian decision-theoretic procedure to examine the effectiveness of controlling false positive and false negative errors. The degree of decision accuracy for the Bayesian decision-theoretic procedure using both the 3PL and 1PL models was also compared. Four criteria, including the percentages of correct classifications, false positive error rates, false negative error rates, and phi correlations between the true and observed classification statuses, were used to evaluate the results of this study. According to these criteria, the Bayesian decision-theoretic procedure appeared to effectively control false negative and false positive error rates. The differences in the percentages of correct classifications and phi correlations between true and predicted status for the Bayesian decision-theoretic procedure and the conventional procedures were quite small. The results also showed that there was no consistent advantage for either the linear or the threshold loss function. In relation to the four criteria used in this study, the values produced by these two loss functions were very similar. One of the purposes of this study was to extend the Bayesian procedure from the 1PL to the 3PL model. The results showed that when the datasets were simulated to fit the 3PL model, using the 1PL model in the Bayesian procedure yielded less accurate results. However, when the datasets were simulated to fit the 1PL model, using the 3PL model in the Bayesian procedure yielded reasonable classification accuracies in most cases. Thus, the use of the Bayesian decision-theoretic procedure with the 3PL model seems quite promising in the context of fixed-length mastery tests.
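The decision rule at the heart of such a procedure can be sketched as follows: given posterior draws of ability from an MCMC sampler, compare the expected posterior loss of the pass and fail actions under a threshold loss. The cutoff and loss parameters below are hypothetical, and this is a generic illustration rather than Glas and Vos's exact implementation.

```python
import numpy as np

def mastery_decision(theta_draws, cutoff, loss_false_positive, loss_false_negative):
    """Choose 'pass' or 'fail' by minimizing expected posterior threshold loss."""
    p_nonmaster = np.mean(theta_draws < cutoff)                   # P(theta < cutoff | data)
    exp_loss_pass = loss_false_positive * p_nonmaster             # risk of passing a non-master
    exp_loss_fail = loss_false_negative * (1.0 - p_nonmaster)     # risk of failing a master
    return "pass" if exp_loss_pass <= exp_loss_fail else "fail"

# Hypothetical posterior draws of examinee ability from an MCMC fit of a 3PL model.
rng = np.random.default_rng(1)
theta_draws = rng.normal(loc=0.2, scale=0.3, size=4000)

print(mastery_decision(theta_draws, cutoff=0.0,
                       loss_false_positive=2.0, loss_false_negative=1.0))
```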
3 |
The performance of the preliminary test estimator under different loss functions. Kleyn, Judith, January 2014.
In this thesis different situations are considered in which the preliminary test estimator is applied, and the performance of the preliminary test estimator under different proposed loss functions, namely the reflected normal, linear exponential (LINEX) and bounded LINEX (BLINEX) loss functions, is evaluated. In order to motivate the use of the BLINEX loss function rather than the reflected normal loss or the LINEX loss function, the risk of the preliminary test estimator and its component estimators derived under BLINEX loss is compared to the risk of the preliminary test estimator and its component estimators derived under both reflected normal loss and LINEX loss, analytically (in some sections) and computationally. It is shown that both the risk under reflected normal loss and the risk under LINEX loss are higher than the risk under BLINEX loss. The key focus is the estimation of the regression coefficients of a multiple regression model under two conditions, namely the presence of multicollinearity and linear restrictions imposed on the regression coefficients. In order to address the multicollinearity problem, the regression coefficients were adjusted by making use of Hoerl and Kennard's (1970) approach in ridge regression. Furthermore, in situations where under- or overestimation exists, symmetric loss functions will not give optimal results, and it was necessary to consider asymmetric loss functions. In the economic application, it was shown that a loss function which is both asymmetric and bounded, to ensure a maximum upper bound for the loss, is the most appropriate function to use. In order to evaluate the effect that different ridge parameters have on the estimation, the risk values were calculated for all three ridge regression estimators under different conditions, namely an increase in variance, an increase in the level of multicollinearity, an increase in the number of parameters to be estimated in the regression model, and an increase in the sample size. These results were compared to each other and summarised for all the proposed estimators and proposed loss functions. The comparison of the three proposed ridge regression estimators under all the proposed loss functions was also summarised for an increase in the sample size and an increase in variance. Thesis (PhD), University of Pretoria, 2014.
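For readers unfamiliar with the three loss functions named above, the sketch below writes them out and estimates the risk of an estimator by Monte Carlo averaging over its errors. The parametrisations (a Spiring-style reflected normal loss, standard LINEX, and a common BLINEX form that reduces to LINEX as lambda goes to zero and is bounded above by 1/lambda) and all parameter values are assumptions for illustration, not the thesis's exact specifications.

```python
import numpy as np

def reflected_normal(delta, K=1.0, gamma=1.0):
    # Bounded and symmetric: the loss saturates at K for large errors.
    return K * (1.0 - np.exp(-delta**2 / (2.0 * gamma**2)))

def linex(delta, a=1.0, b=1.0):
    # Asymmetric and unbounded: over- and underestimation are penalized differently.
    return b * (np.exp(a * delta) - a * delta - 1.0)

def blinex(delta, a=1.0, b=1.0, lam=0.5):
    # Bounded LINEX: asymmetric like LINEX but bounded above by 1/lam.
    core = np.exp(a * delta) - a * delta - 1.0
    return (1.0 / lam) * (1.0 - 1.0 / (1.0 + lam * b * core))

# Monte Carlo risk of an estimator, given hypothetical draws of its estimation error.
rng = np.random.default_rng(0)
errors = rng.normal(loc=0.1, scale=0.5, size=10_000)
for name, loss in [("reflected normal", reflected_normal),
                   ("LINEX", linex),
                   ("BLINEX", blinex)]:
    print(f"{name:17s} risk ~ {loss(errors).mean():.4f}")
```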
4 |
Subset selection based on likelihood ratios: the normal means case. Chotai, Jayanti, January 1979.
Let π1, ..., πk be k (≥ 2) populations such that πi, i = 1, 2, ..., k, is characterized by the normal distribution with unknown mean μi and variance aiσ², where ai is known and σ² may be unknown. Suppose that, on the basis of independent samples of size ni from πi (i = 1, 2, ..., k), we are interested in selecting a random-size subset of the given populations which hopefully contains the population with the largest mean. Based on likelihood ratios, several new procedures for this problem are derived in this report. Some of these procedures are compared with the classical procedure of Gupta (1956, 1965) and are shown to be better in certain respects. [New revised edition.] This is a slightly revised version of Statistical Research Report No. 1978-6.
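As context for the comparison mentioned above, the following is a minimal sketch of the classical Gupta (1956) subset-selection rule in the simplest setting of a common known variance and equal sample sizes: retain every population whose sample mean comes within a constant d of the largest sample mean. The constant d and the data are placeholders; the likelihood-ratio procedures derived in the report itself are different.

```python
import numpy as np

rng = np.random.default_rng(7)

k, n, sigma = 5, 20, 1.0
true_means = np.array([0.0, 0.2, 0.5, 0.9, 1.0])    # hypothetical population means
samples = rng.normal(true_means, sigma, size=(n, k))  # n observations per population

xbar = samples.mean(axis=0)
d = 2.0                                              # hypothetical selection constant
selected = np.where(xbar >= xbar.max() - d * sigma / np.sqrt(n))[0]

print("sample means:", np.round(xbar, 3))
print("selected populations (0-based):", selected)
```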
5 |
Revealed preference differences among credit rating agencies. Larik, Waseem, January 2012.
The thesis studies the factors which underpin the allocation of credit ratings by the two major credit rating agencies (CRAs), namely Moody's and S&P. CRAs make regular headlines, and their rating judgements are closely followed and debated by the financial community. Indeed, criticism of these agencies emerged, both in this community and in the popular press, following the 2007-2008 financial crisis. This thesis examines several aspects of the allocation of credit ratings by the major agencies, particularly in relation to (i) their revealed "loss function" preference structure, (ii) the determinants underpinning the allocation of credit ratings and (iii) the circumstances under which the two agencies appear to differ in their opinions and we witness a split credit rating allocation. The first essay empirically estimates the loss function preferences of the two agencies by analyzing instances of split credit ratings assigned to corporate issuers. Our dataset utilises a time series of nineteen years (1991-2009) of historical credit ratings data from corporate issuers. The methodology consists of estimating rating judgment differences by deducting the rating-implied probability of default from the estimated market-implied probability of default. Then, utilising these judgment differences, we adapt the GMM estimation following Elliott et al. (2005) to extract the loss function preferences of the two agencies. The estimated preferences show a higher degree of asymmetry in the case of Moody's, and we find strong evidence of conservatism (relative to the market) in industry sectors other than financials and utilities. S&P exhibits loss function asymmetry in both the utility and financial sectors, whereas in other sectors we find strong evidence of symmetric preferences relative to those of the market. The second essay compares the impact of financial, governance and other variables (in an attempt to capture various subjective elements) in determining issuer credit ratings between the two major CRAs. Utilising a sample of 5192 firm-year observations from S&P 400, S&P 500 and S&P 600 index constituent issuer firms, we employ an ordered probit model on a panel dataset spanning 1995 through 2009. The empirical results suggest that the agencies indeed differ in the level of importance they attach to each variable. We conclude that financial information remains the most significant factor in the attribution of credit ratings for both agencies. We find no significant improvement in the predictive power of credit ratings when we incorporate governance-related variables. Our other factors show strong evidence of continuing stringent standards, reputational concerns, and differences in standards during economic crises by the two rating agencies. The third essay investigates the factors determining the allocation of different (split) credit ratings to the same firm by the two agencies. We use financial, governance and other factors, in an attempt to capture various subjective elements, to explain split credit ratings. The study uses a two-stage bivariate probit estimation method. We use a sample of 5238 firm-year observations from S&P 500, S&P 400, and S&P 600 index constituent firms. Our results indicate that firms with greater size, favourable coverage and higher profitability are less likely to have a split. However, smaller firms with unfavourable coverage and lower profitability appear to be rated lower by Moody's in comparison to S&P. Our findings suggest that the stage of the business cycle plays no significant role in deciding splits, but rating shopping and the introduction of Regulation FD increase the likelihood of splits arising.
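To make the first essay's machinery more concrete, the sketch below writes out the asymmetric loss family of Elliott et al. (2005) and the special case in which, under lin-lin loss (p = 1) with only a constant instrument, the GMM estimate of the asymmetry parameter reduces to the share of negative judgment differences. The simulated judgment differences are placeholders, and this is a simplified illustration rather than the full GMM estimation used in the thesis.

```python
import numpy as np

def ekt_loss(e, alpha, p=1):
    """Elliott-Komunjer-Timmermann asymmetric loss family."""
    return (alpha + (1 - 2 * alpha) * (e < 0)) * np.abs(e) ** p

# Hypothetical judgment differences: rating-implied PD minus market-implied PD.
rng = np.random.default_rng(3)
e = rng.normal(loc=-0.002, scale=0.01, size=500)

# Lin-lin case with a constant instrument: the moment condition
# E[1{e < 0} - alpha] = 0 gives alpha_hat as the fraction of negative differences.
alpha_hat = np.mean(e < 0)

print("estimated asymmetry parameter:", round(alpha_hat, 3))
print("average loss at alpha_hat:", ekt_loss(e, alpha_hat).mean())
```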
6 |
Minimisation du risque empirique avec des fonctions de perte nonmodulaires / Empirical risk minimization with non-modular loss functions. Yu, Jiaqian, 22 March 2017.
This thesis addresses the problem of learning with non-modular losses. In a prediction problem where multiple outputs are predicted simultaneously, viewing the outcome as a joint set prediction is essential in order to better incorporate real-world circumstances. In empirical risk minimization, we aim at minimizing an empirical sum over losses incurred on the finite sample with some loss function that penalizes the prediction given the ground truth. In this thesis, we propose tractable and efficient methods for dealing with non-modular loss functions, with correctness and scalability validated by empirical results. First, we present the hardness of incorporating supermodular loss functions into the inference term when they have different graphical structures. We then introduce a decomposition method for loss-augmented inference based on the alternating direction method of multipliers (ADMM), which depends only on two individual solvers, one for the loss function term and one for the inference term, as two independent subproblems. Second, we propose a novel surrogate loss function for submodular losses, the Lovász hinge, which leads to O(p log p) complexity with O(p) oracle accesses to the loss function to compute a subgradient or cutting-plane. Finally, we introduce a novel convex surrogate operator for general non-modular loss functions, which provides for the first time a tractable solution for loss functions that are neither supermodular nor submodular. This surrogate is based on a canonical submodular-supermodular decomposition.
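The sketch below illustrates the core computation behind a Lovász-extension-based surrogate such as the Lovász hinge: sort the hinge-style margin violations in decreasing order and accumulate the discrete derivatives of the set loss along the induced chain of sets. The Jaccard-type set loss and the thresholding convention used here are assumptions chosen for illustration; they follow one common presentation and need not match the thesis's exact construction.

```python
import numpy as np

def jaccard_set_loss(mispredicted, ground_truth):
    """Example submodular set loss (Jaccard-type): |M| / |M union GT|."""
    union = mispredicted | ground_truth
    if not union:
        return 0.0
    return len(mispredicted) / len(union)

def lovasz_hinge(scores, labels, set_loss, ground_truth):
    """Lovász extension of `set_loss` evaluated at thresholded hinge margins."""
    margins = np.maximum(0.0, 1.0 - labels * scores)   # hinge-style violations
    order = np.argsort(-margins)                       # decreasing order
    loss, prev = 0.0, 0.0
    current = set()
    for idx in order:
        current = current | {int(idx)}                 # grow the chain of sets
        g = set_loss(current, ground_truth)
        loss += margins[idx] * (g - prev)              # discrete derivative weight
        prev = g
    return loss

scores = np.array([0.3, -1.2, 0.8, -0.1])   # hypothetical predictor outputs
labels = np.array([1.0, 1.0, -1.0, -1.0])   # ground-truth labels
gt_positives = {0, 1}
print(lovasz_hinge(scores, labels, jaccard_set_loss, gt_positives))
```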
7 |
Cílování inflace v ČR / Inflation Targeting in the Czech Republic. Klukavý, Petr, January 2010.
This diploma thesis is focused on the description of the inflation targeting regime in the Czech Republic. The paper is divided into three parts. The first part deals with inflation and its targeting, and with the economic circumstances that led to the launch of this monetary policy regime in the Czech Republic. The next part concerns the central bank's reaction function, transmission channels, the evaluation of the inflation targets that were set, and the description of the prognostic models that the central bank uses for forecasting. The main emphasis is placed on the new structural dynamic model "g3". The last part describes my own inflation prognosis, which is based on time series analysis.
8 |
Network inference using independence criteria. Verbyla, Petras, January 2018.
Biological systems are driven by complex regulatory processes. Graphical models play a crucial role in the analysis and reconstruction of such processes. It is possible to derive regulatory models using network inference algorithms from high-throughput data, for example from gene or protein expression data. A wide variety of network inference algorithms have been designed and implemented. Our aim is to explore the possibilities of using statistical independence criteria for biological network inference. The contributions of our work can be categorized into four sections. First, we provide a detailed overview of some of the most popular general independence criteria: distance covariance (dCov), kernel canonical correlation (KCC), kernel generalized variance (KGV) and the Hilbert-Schmidt Independence Criterion (HSIC). We provide easy-to-understand geometrical interpretations for these criteria. We also explicitly show the equivalence of dCov, KGV and HSIC. Second, we introduce a new criterion for measuring dependence based on the signal-to-noise ratio (SNRIC). SNRIC is significantly faster to compute than other popular independence criteria. SNRIC is an approximate criterion but becomes exact under many popular modelling assumptions, for example for data from an additive noise model. Third, we compare the performance of the independence criteria on biological experimental data within the framework of the PC algorithm. Since not all criteria are available in a version that allows for testing conditional independence, we propose and test an approach which relies on residuals and requires only an unconditional version of an independence criterion. Finally, we propose a novel method to infer networks with feedback loops. We use an MCMC sampler, which samples using a loss function based on an independence criterion. This allows us to find networks under very general assumptions, such as non-linear relationships, non-Gaussian noise distributions and feedback loops.
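As a pointer to what the criteria above compute, here is a minimal sketch of the biased HSIC estimator with Gaussian kernels, tr(KHLH)/(n-1)^2, where H is the centering matrix. The bandwidths and test data are arbitrary placeholders; the thesis's experiments and its SNRIC criterion are not reproduced here.

```python
import numpy as np

def rbf_gram(x, sigma):
    """Gaussian (RBF) kernel Gram matrix for a 2-D data array x."""
    sq = np.sum(x**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * x @ x.T
    return np.exp(-d2 / (2.0 * sigma**2))

def hsic_biased(x, y, sigma_x=1.0, sigma_y=1.0):
    """Biased HSIC estimate: tr(K H L H) / (n - 1)^2."""
    n = x.shape[0]
    K, L = rbf_gram(x, sigma_x), rbf_gram(y, sigma_y)
    H = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

rng = np.random.default_rng(0)
x = rng.normal(size=(200, 1))
y_dep = x + 0.1 * rng.normal(size=(200, 1))    # strongly dependent on x
y_ind = rng.normal(size=(200, 1))              # independent of x
print(hsic_biased(x, y_dep), hsic_biased(x, y_ind))  # dependent pair scores higher
```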
9 |
A Note on Support Vector Machines Degeneracy. Rifkin, Ryan; Pontil, Massimiliano; Verri, Alessandro, 11 August 1999.
When training Support Vector Machines (SVMs) over non-separable data sets, one sets the threshold $b$ using any dual cost coefficient that is strictly between the bounds of $0$ and $C$. We show that there exist SVM training problems with dual optimal solutions with all coefficients at bounds, but that all such problems are degenerate in the sense that the "optimal separating hyperplane" is given by $\mathbf{w} = \mathbf{0}$, and the resulting (degenerate) SVM will classify all future points identically (to the class that supplies more training data). We also derive necessary and sufficient conditions on the input data for this to occur. Finally, we show that an SVM training problem can always be made degenerate by the addition of a single data point belonging to a certain unbounded polyhedron, which we characterize in terms of its extreme points and rays.
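A toy probe of the degeneracy conditions described above (a near-zero weight vector, dual coefficients at their bounds, and constant predictions) might look like the following, using scikit-learn's soft-margin SVM on two fully overlapping classes of unequal size. Whether a given finite sample is exactly degenerate is not guaranteed; this only illustrates how one could inspect the quantities the note analyses.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Two classes drawn from the same distribution; class +1 supplies more points.
X = rng.normal(size=(150, 2))
y = np.concatenate([np.ones(100), -np.ones(50)])

C = 1.0
clf = SVC(kernel="linear", C=C).fit(X, y)

w = clf.coef_.ravel()                        # primal weight vector
alphas = np.abs(clf.dual_coef_).ravel()      # alpha_i for the support vectors
print("||w||:", np.linalg.norm(w))
print("fraction of support-vector coefficients at the bound C:",
      np.mean(np.isclose(alphas, C)))
print("classes predicted on fresh points:",
      np.unique(clf.predict(rng.normal(size=(50, 2)))))
```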
10 |
A Study of Optimal Portfolio Decision and Performance Measures. Chen, Hsin-Hung, 03 June 2004.
Since most financial institutions use the Sharpe Ratio to evaluate the performance of mutual funds, the objective of most fund managers is to select the portfolio that can generate the highest Sharpe Ratio. Traditionally, they revise the objective function of the Markowitz mean-variance portfolio model and solve a non-linear programming problem to obtain the maximum Sharpe Ratio portfolio. In the scenario with short sales allowed, this project will propose a closed-form solution for the optimal Sharpe Ratio portfolio by applying Cauchy-Schwarz maximization. This method, which does not require a non-linear programming computer program, is easier to implement than the traditional method and can save computing time and costs. Furthermore, in the scenario with short sales disallowed, we will use Kuhn-Tucker conditions to find the optimal Sharpe Ratio portfolio.
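For orientation, the sketch below computes the textbook maximum-Sharpe (tangency) portfolio with short sales allowed, w proportional to Sigma^{-1}(mu - rf·1), and its Sharpe Ratio; the numbers are made up. It only illustrates what a closed-form solution of this kind looks like; the thesis derives its solution via Cauchy-Schwarz maximization and handles the no-short-sales case with Kuhn-Tucker conditions.

```python
import numpy as np

# Hypothetical expected returns, covariance matrix, and risk-free rate.
mu = np.array([0.08, 0.12, 0.10])
Sigma = np.array([[0.040, 0.010, 0.000],
                  [0.010, 0.090, 0.020],
                  [0.000, 0.020, 0.060]])
rf = 0.03

direction = np.linalg.solve(Sigma, mu - rf)   # Sigma^{-1} (mu - rf * 1)
w = direction / direction.sum()               # normalise weights to sum to one; shorts allowed

sharpe = (w @ mu - rf) / np.sqrt(w @ Sigma @ w)
print("weights:", np.round(w, 4), "Sharpe ratio:", round(sharpe, 4))
```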
On the other hand, an efficient frontier generated by the Markowitz mean-variance portfolio model normally has a higher-risk, higher-return characteristic, which often causes a dilemma for the decision maker. This research applies a generalized loss function to create a family of decision-aid performance measures, called IRp, which can effectively trade off return against risk. We compare IRp with the Sharpe Ratio and utility functions to confirm that IRp measures are appropriate for evaluating portfolio performance on the efficient frontier and for improving asset allocation decisions.
In addition, empirical data on domestic and international investment instruments will be used to examine the feasibility and fitness of the newly proposed method and the IRp measures. This study applies the methods of Cauchy-Schwarz maximization from multivariate statistical analysis and the loss function from quality engineering to portfolio decisions. We believe these new applications will complement portfolio model theory and will be meaningful for academic and business fields.