331

Estimation of the Optimal Threshold Using Kernel Estimate and ROC Curve Approaches

Zhu, Zi 23 May 2011 (has links)
Credit line analysis plays an important role in the housing market, especially given the large number of frozen loans during the recent financial crisis. In this thesis, we apply kernel estimation and the Receiver Operating Characteristic (ROC) curve to the credit loan application process in order to help banks select the optimal threshold for differentiating good customers from bad customers. A better choice of threshold is essential for banks to prevent losses and maximize profit from loans. One of the main advantages of our study is that the method does not require us to specify the distribution of the latent risk score. We apply the bootstrap method to construct a confidence interval for the estimate.
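A minimal sketch of the threshold-selection idea, assuming scored loan applicants with known outcomes: the optimal cut-off is taken as the point on the empirical ROC curve maximizing Youden's J, and a percentile bootstrap gives an interval for it. The variable names (`scores`, `labels`) and the simulated data are illustrative; the thesis uses kernel-smoothed score distributions rather than the raw empirical ROC shown here.

```python
import numpy as np
from sklearn.metrics import roc_curve

def optimal_threshold(labels, scores):
    """Threshold maximizing Youden's J = TPR - FPR on the empirical ROC curve."""
    fpr, tpr, thresholds = roc_curve(labels, scores)
    return thresholds[np.argmax(tpr - fpr)]

def bootstrap_ci(labels, scores, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for the optimal threshold."""
    rng = np.random.default_rng(seed)
    labels, scores = np.asarray(labels), np.asarray(scores)
    n = len(labels)
    estimates = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)              # resample applicants with replacement
        if labels[idx].min() == labels[idx].max():
            continue                             # need both good and bad customers
        estimates.append(optimal_threshold(labels[idx], scores[idx]))
    return np.percentile(estimates, [100 * alpha / 2, 100 * (1 - alpha / 2)])

# Hypothetical latent risk scores: 1 = defaulted ("bad"), 0 = repaid ("good")
rng = np.random.default_rng(1)
scores = np.concatenate([rng.normal(0.3, 0.15, 700), rng.normal(0.6, 0.15, 300)])
labels = np.concatenate([np.zeros(700), np.ones(300)])
print(optimal_threshold(labels, scores), bootstrap_ci(labels, scores))
```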
332

Value gain from corporate reorganization

Glew, Ian Andrew 22 August 2007 (has links)
In the absence of taxes and transaction costs, there can be no benefit to corporate reorganization from a financial standpoint, but ‘real world’ limitations and frictions do provide additional value that is gained through divestitures in terms of focus and financial flexibility. Herein, the corporate divestiture decision is analyzed to determine the motivation for a parent company either to cleave its offspring directly to the external capital market in an equity carve-out or to distribute the shares to the existing shareholders in a tax-free spin-off. Cash flow performance, asymmetric information, relative size of the divestiture, and relatedness of the parent’s and subsidiary’s operations are all found to contribute significantly to the divestiture decision. In Canada, an alternative form of security, known as the income trust unit, has become popular for corporate reorganizations, either through an initial public offering or as a conversion of shares. The flow-through structure of income trusts currently allows avoidance of corporate taxation and offers higher pre-tax returns to retail investors, in a market setting where yield is increasingly equated with value. To determine the placement of these securities in the market, the risk of the income trust organizational form is analyzed and compared to the standard corporate form. Further, a number of publicly known characteristics of income trusts can predict the relative risk of this type of investment. In recent ‘hot markets’ for these securities, evidence is found that unsuitable firms have been migrating to this sector, but valuation of investments in this sector has remained fair and full. Although pending legislation will discontinue the tax-exempt status of income trusts in 2011, during their tenure these securities have improved the Canadian market. Based on the data analysis herein, all types of divestitures studied are predicted to provide value commensurate with risk, depending on the nature of the subsidiary. / Thesis (Ph.D, Management) -- Queen's University, 2007-08-15 11:20:20.465
333

Quantification of reservoir uncertainty for optimal decision making

Alshehri, Naeem S. Unknown Date
No description available.
334

Seasonal volatility models with applications in option pricing

Doshi, Ankit 03 1900 (has links)
GARCH models have been widely used in finance to model volatility ever since the introduction of the ARCH model and its extension to the generalized ARCH (GARCH) model. Lately, there has been growing interest in modelling seasonal volatility, most recently with the introduction of multiplicative seasonal GARCH models. As an application of the multiplicative seasonal GARCH model to real data, call prices on the major stock market index of India are calculated using estimated parameter values. It is shown that a multiplicative seasonal GARCH option pricing model outperforms the Black-Scholes formula and a GARCH(1,1) option pricing formula. A parametric bootstrap procedure is also employed to obtain an interval approximation of the call price. Narrower confidence intervals are obtained using the multiplicative seasonal GARCH model than those provided by the GARCH(1,1) model for data that exhibit multiplicative seasonal GARCH volatility.
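To make the pricing-by-simulation step concrete, here is a minimal sketch of a Monte Carlo call price under plain GARCH(1,1) return dynamics, with a crude stand-in for the bootstrap interval. All parameter values and contract terms are made up; the multiplicative seasonal component of the thesis model, a formal risk-neutralisation (e.g. Duan's LRNVR), and the full simulate-refit-reprice bootstrap loop are intentionally omitted.

```python
import numpy as np

def simulate_garch_terminal_prices(omega, alpha, beta, s0, r, horizon, n_paths, rng):
    """Simulate terminal prices when log-returns follow GARCH(1,1) dynamics.

    Simplified sketch: log-return = r - 0.5*h_t + sqrt(h_t)*z_t with z_t ~ N(0,1).
    """
    log_s = np.full(n_paths, np.log(s0))
    h = np.full(n_paths, omega / max(1.0 - alpha - beta, 1e-4))  # start at unconditional variance
    eps = np.zeros(n_paths)
    for _ in range(horizon):
        h = omega + alpha * eps ** 2 + beta * h
        eps = np.sqrt(h) * rng.standard_normal(n_paths)
        log_s += r - 0.5 * h + eps
    return np.exp(log_s)

def mc_call_price(params, s0, strike, r, horizon, n_paths=50_000, seed=0):
    """Monte Carlo call price: discounted mean payoff over simulated paths."""
    rng = np.random.default_rng(seed)
    st = simulate_garch_terminal_prices(*params, s0, r, horizon, n_paths, rng)
    return np.exp(-r * horizon) * np.mean(np.maximum(st - strike, 0.0))

# Illustrative (made-up) daily parameters and contract terms
params = (1e-6, 0.08, 0.90)                      # omega, alpha, beta
price = mc_call_price(params, s0=100.0, strike=105.0, r=0.0002, horizon=60)

# Crude stand-in for the parametric bootstrap interval: redraw parameters from an
# assumed sampling distribution of the estimates and reprice.  The thesis procedure
# would instead simulate return series from the fitted model, refit, and reprice.
rng = np.random.default_rng(1)
boot = [mc_call_price((abs(rng.normal(1e-6, 2e-7)),
                       float(np.clip(rng.normal(0.08, 0.01), 0.0, 0.3)),
                       float(np.clip(rng.normal(0.90, 0.01), 0.5, 0.98))),
                      100.0, 105.0, 0.0002, 60, n_paths=10_000, seed=i)
        for i in range(200)]
print(price, np.percentile(boot, [2.5, 97.5]))
```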
335

Goodness-of-Fit Test Issues in Generalized Linear Mixed Models

Chen, Nai-Wei December 2011 (has links)
Linear mixed models and generalized linear mixed models are random-effects models widely applied to analyze clustered or hierarchical data. Random effects are generally assumed to be normally distributed in the context of mixed models. However, in the mixed-effects logistic model, violation of the assumption of normally distributed random effects may result in inconsistent estimates of some fixed effects and of the variance component of the random effects when the variance of the random-effects distribution is large. On the other hand, summary statistics used for assessing goodness of fit in ordinary logistic regression models may not be directly applicable to mixed-effects logistic models. In this dissertation, we present two independent studies related to goodness-of-fit tests in generalized linear mixed models. First, we consider a semi-nonparametric density representation for the random-effects distribution and provide a formal statistical test of normality of the random-effects distribution in mixed-effects logistic models. We obtain parameter estimates by a non-likelihood-based estimation procedure. We not only evaluate the type I error rate of the proposed test statistic through asymptotic results, but also carry out a bootstrap hypothesis testing procedure to control the inflation of the type I error rate and to study the power of the proposed test statistic. The methodology is illustrated by revisiting a case study in mental health. Second, to improve assessment of model fit in mixed-effects logistic models, we apply nonparametric local polynomial smoothed residuals over within-cluster continuous covariates to the unweighted sum of squares statistic for assessing the goodness of fit of logistic multilevel models. We perform a simulation study to evaluate the type I error rate and the power for detecting a missing quadratic or interaction term of fixed effects using the kernel smoothed unweighted sum of squares statistic based on local polynomial smoothed residuals over x-space. We also use a real data set from clinical trials to illustrate this application.
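As a simplified illustration of bootstrap-calibrated goodness-of-fit testing, the sketch below computes the unweighted sum of squares statistic for an ordinary (single-level) logistic regression and obtains its p-value by a parametric bootstrap under the fitted null model. This is only the single-level analogue of the idea; the thesis extends it to mixed-effects (multilevel) logistic models with locally smoothed residuals. The simulated data and model names are illustrative.

```python
import numpy as np
import statsmodels.api as sm

def uss_statistic(y, pihat):
    """Unweighted sum of squares goodness-of-fit statistic."""
    return np.sum((y - pihat) ** 2)

def parametric_bootstrap_pvalue(y, X, n_boot=500, seed=0):
    """Calibrate the USS statistic by simulating from the fitted null model."""
    rng = np.random.default_rng(seed)
    fit = sm.Logit(y, X).fit(disp=0)
    pihat = fit.predict(X)
    observed = uss_statistic(y, pihat)
    null_stats = []
    for _ in range(n_boot):
        y_sim = rng.binomial(1, pihat)                    # data generated under the fitted null
        fit_b = sm.Logit(y_sim, X).fit(disp=0)
        null_stats.append(uss_statistic(y_sim, fit_b.predict(X)))
    return np.mean(np.asarray(null_stats) >= observed)

# Illustrative data: the true model has a quadratic term that the null model omits
rng = np.random.default_rng(2)
x = rng.normal(size=400)
eta = -0.5 + 0.8 * x + 0.6 * x ** 2
y = rng.binomial(1, 1 / (1 + np.exp(-eta)))
X_null = sm.add_constant(x)                               # misspecified: linear in x only
print(parametric_bootstrap_pvalue(y, X_null))
```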
336

Multiple Kernel Imputation: A Locally Balanced Real Donor Method

Pettersson, Nicklas January 2013 (has links)
We present an algorithm for imputation of incomplete datasets based on Bayesian exchangeability through Pólya sampling. Each (donee) unit with a missing value is imputed multiple times with observed (real) values from units in a donor pool. The donor pools are constructed using auxiliary variables. Several features from kernel estimation are used to counteract imbalances that are due to sparse and bounded data. Three balancing features can be used with only a single continuous auxiliary variable, but an additional fourth feature requires multiple continuous auxiliary variables. They mainly contribute by reducing nonresponse bias. We examine how the donor pool size, that is, the number of potential donors within the pool, should be determined. External information is shown to be easily incorporated into the imputation algorithm. Our simulation studies show that with a study variable which can be seen as a function of one or two continuous auxiliaries plus residual noise, the method performs as well or almost as well as competing methods when the function is linear, but usually much better when the function is nonlinear. / At the time of the doctoral defense, the following papers were unpublished and had a status as follows: Paper 1: In press. Paper 3: Submitted. Paper 4: Submitted.
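A minimal sketch of real-donor multiple imputation in this spirit, assuming a single continuous auxiliary variable: each donee's pool is its k nearest observed units on the auxiliary, and each of the m imputations draws Dirichlet(1,…,1) weights over the pool (the Bayesian-bootstrap form of Pólya sampling) and then one donor. The kernel-based balancing features of the thesis are omitted, and all names and data are illustrative.

```python
import numpy as np

def polya_multiple_impute(y, x, k=10, m=5, seed=0):
    """Impute missing y multiple times from real donors, Pólya/Bayesian-bootstrap style."""
    rng = np.random.default_rng(seed)
    y = np.asarray(y, dtype=float)
    observed = ~np.isnan(y)
    donors_y, donors_x = y[observed], np.asarray(x)[observed]
    imputations = np.tile(y, (m, 1))                     # m completed copies of the data
    for i in np.flatnonzero(~observed):
        pool = np.argsort(np.abs(donors_x - x[i]))[:k]   # donor pool via auxiliary distance
        for j in range(m):
            w = rng.dirichlet(np.ones(len(pool)))        # exchangeable (Pólya-type) weights
            imputations[j, i] = donors_y[rng.choice(pool, p=w)]
    return imputations

# Illustrative data: y depends nonlinearly on x, with values missing at random
rng = np.random.default_rng(3)
x = rng.uniform(0, 1, 200)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, 200)
y[rng.random(200) < 0.25] = np.nan
completed = polya_multiple_impute(y, x)
print(completed.shape, np.nanmean(y), completed.mean(axis=1))
```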
338

Jackknife Empirical Likelihood Inference For The Pietra Ratio

Su, Yueju 17 December 2014 (has links)
The Pietra ratio (Pietra index), also known as the Robin Hood index, the Schutz coefficient (Ricci-Schutz index), or half the relative mean deviation, is a good measure of statistical heterogeneity for positive-valued data sets. In this thesis, two novel methods, namely the "adjusted jackknife empirical likelihood" and the "extended jackknife empirical likelihood", are developed from the jackknife empirical likelihood method to obtain interval estimates of the Pietra ratio of a population. The performance of the two novel methods is compared with the jackknife empirical likelihood method, the normal approximation method, and two bootstrap methods (the percentile bootstrap method and the bias-corrected and accelerated bootstrap method). Simulation results indicate that under both symmetric and skewed distributions, especially when the sample is small, the extended jackknife empirical likelihood method gives the best performance among the six methods in terms of the coverage probabilities and interval lengths of the confidence interval for the Pietra ratio; when the sample size is over 20, the adjusted jackknife empirical likelihood method performs better than the other methods, except the extended jackknife empirical likelihood method. Several real data sets are used to illustrate the proposed methods.
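For orientation, here is a minimal sketch of the ingredients: the Pietra ratio itself, the jackknife pseudo-values to which the empirical likelihood is applied in the jackknife empirical likelihood approach, and a percentile bootstrap interval as one of the comparison methods. The empirical-likelihood calibration step itself (and the adjusted/extended variants) is not reproduced here, and the lognormal sample is purely illustrative.

```python
import numpy as np

def pietra_ratio(x):
    """Pietra (Robin Hood) index: half the relative mean deviation."""
    x = np.asarray(x, dtype=float)
    return np.sum(np.abs(x - x.mean())) / (2 * x.size * x.mean())

def jackknife_pseudo_values(x):
    """Pseudo-values V_i = n*theta_hat - (n-1)*theta_hat(-i), the JEL building block."""
    x = np.asarray(x, dtype=float)
    n = x.size
    theta = pietra_ratio(x)
    loo = np.array([pietra_ratio(np.delete(x, i)) for i in range(n)])
    return n * theta - (n - 1) * loo

def percentile_bootstrap_ci(x, n_boot=5000, alpha=0.05, seed=0):
    """One of the comparison methods in the thesis: a percentile bootstrap interval."""
    rng = np.random.default_rng(seed)
    stats = [pietra_ratio(rng.choice(x, size=len(x), replace=True)) for _ in range(n_boot)]
    return np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])

# Illustrative skewed, positive-valued sample
rng = np.random.default_rng(4)
x = rng.lognormal(mean=0.0, sigma=1.0, size=50)
V = jackknife_pseudo_values(x)            # empirical likelihood would be applied to these
print(pietra_ratio(x), V.mean(), percentile_bootstrap_ci(x))
```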
339

Quantification of reservoir uncertainty for optimal decision making

Alshehri, Naeem S. 06 1900 (has links)
A reliable estimate of the amount of oil or gas in a reservoir is required for development decisions. Uncertainty in reserve estimates affects resource/reserve classification, investment decisions, and development decisions. There is a need to make the best decisions with an appropriate level of technical analysis considering all available data. Current methods of estimating resource uncertainty use spreadsheets or Monte Carlo simulation software with specified probability distributions for each variable. 3-D models may be constructed, but they rarely consider uncertainty in all variables. This research develops an appropriate 2-D model of heterogeneity and uncertainty by integrating 2-D model methodology with parameter uncertainty in the mean, which is of primary importance in the input histograms. This research improves reserve evaluation in the presence of geologic uncertainty. Guidelines are developed to: a) select the best modeling scale for making decisions by comparing 2-D vs. 0-D and 3-D models, b) understand the parameters that play a key role in reserve estimates, c) investigate how to reduce uncertainties, and d) show the importance of accounting for parameter uncertainty in reserves assessment to obtain fair global uncertainty, by comparing results for Hydrocarbon Initially-in-Place (HIIP) with and without parameter uncertainty. The parameters addressed in this research are those required in the assessment of uncertainty, including statistical and geological parameters. This research shows that fixed parameters seriously underestimate the actual uncertainty in resources. A complete methodology for the assessment of uncertainty in the structural surfaces of a reservoir, fluid contact levels, and petrophysical properties is developed, accounting for parameter uncertainty in order to obtain fair global uncertainty. Parameter uncertainty can be quantified by several approaches, such as the conventional bootstrap (BS), spatial bootstrap (SBS), and conditional-finite-domain (CFD) approaches. Real data from a large North Sea reservoir dataset are used to compare these approaches. The CFD approach produced more realistic uncertainty in the distributions of HIIP than those obtained from the BS or SBS approaches. 0-D modeling was used for estimating uncertainty in HIIP with different sources of thickness. 2-D modeling is based on geological mapping and can be presented in 2-D maps and checked locally. / Petroleum Engineering
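A toy 0-D sketch of why parameter uncertainty matters, assuming the volumetric HIIP formula HIIP = GRV × NTG × φ × (1 − Sw) / Bo: one run fixes the mean porosity at the sample mean, the other redraws it per realization via a conventional bootstrap of the porosity data. Only porosity is treated this way here, the other inputs and all numbers are made up, and the spatial-bootstrap and CFD variants are not shown.

```python
import numpy as np

def hiip(grv, ntg, phi, sw, bo):
    """Hydrocarbon initially in place (volumetric form)."""
    return grv * ntg * phi * (1.0 - sw) / bo

def mc_hiip(phi_data, n_sims=10_000, parameter_uncertainty=True, seed=0):
    """Monte Carlo HIIP with optional bootstrap uncertainty in the mean porosity."""
    rng = np.random.default_rng(seed)
    phi_data = np.asarray(phi_data, dtype=float)
    sims = np.empty(n_sims)
    for k in range(n_sims):
        if parameter_uncertainty:
            # conventional bootstrap of the mean: resample the porosity data and
            # use the resampled mean for this realization
            phi_mean = rng.choice(phi_data, size=phi_data.size, replace=True).mean()
        else:
            phi_mean = phi_data.mean()              # fixed mean: parameter uncertainty ignored
        sw = rng.triangular(0.15, 0.25, 0.40)       # other inputs from fixed, assumed distributions
        sims[k] = hiip(grv=5.0e8, ntg=0.7, phi=phi_mean, sw=sw, bo=1.2)
    return sims

# Illustrative porosity measurements (fractions) from a small, hypothetical well dataset
phi_data = np.random.default_rng(5).normal(0.18, 0.03, 40).clip(0.05, 0.35)
fixed = mc_hiip(phi_data, parameter_uncertainty=False)
boot = mc_hiip(phi_data, parameter_uncertainty=True)
print(np.std(fixed), np.std(boot))    # fixed parameters understate the spread in HIIP
```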
340

Bootstrap inference in time series econometrics

Gredenhoff, Mikael January 1998 (has links)
This dissertation contains five essays in the field of time series econometrics. The main issue discussed is the lack of coherence between small-sample and asymptotic inference. In modern econometrics, distributional results are frequently valid only for a hypothetical infinite sample. Studies show that the actual level attained by a test may differ considerably from the nominal significance level, and as a consequence, too many true null hypotheses will falsely be rejected. This leads, by extension, to applied users too often rejecting evidence in the data for theoretical predictions. Broadly, the thesis discusses how computer-intensive methods may be used to adjust the test distribution so that the actual significance level coincides with the desired nominal level. The first two essays focus on how to improve testing for persistence in data through a bootstrap procedure within a univariate framework. The remaining three essays are studies of multivariate time series models. The third essay considers the identification problem of the basic stationary vector autoregressive model, which is also the baseline econometric specification for maximum likelihood cointegration analysis. In the fourth essay, the multivariate framework is expanded to allow for components of different orders of integration, and in this setting the paper discusses how fractional cointegration affects inference in maximum likelihood cointegration analysis. The fifth essay considers once again the bootstrap testing approach, now in a multivariate application, to correct inference on long-run relations in maximum likelihood cointegration analysis. / Diss. Stockholm : Handelshögsk.
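To illustrate the univariate bootstrap idea in the simplest possible setting, the sketch below computes a Dickey-Fuller t-statistic (no constant, no lags) and a small-sample critical value from a residual-based bootstrap that imposes the unit-root null, instead of relying on asymptotic tables. This is a generic textbook-style illustration of bootstrap test calibration, not the specific procedures of the essays, and the AR(1) example series is made up.

```python
import numpy as np

def df_tstat(y):
    """Dickey-Fuller t-statistic from the regression dy_t = rho * y_{t-1} + e_t (no constant)."""
    dy, ylag = np.diff(y), y[:-1]
    rho = (ylag @ dy) / (ylag @ ylag)
    resid = dy - rho * ylag
    s2 = resid @ resid / (len(dy) - 1)
    return rho / np.sqrt(s2 / (ylag @ ylag))

def bootstrap_critical_value(y, n_boot=2000, level=0.05, seed=0):
    """Small-sample critical value via a residual-based bootstrap under the unit-root null."""
    rng = np.random.default_rng(seed)
    resid = np.diff(y)                        # innovations under the null y_t = y_{t-1} + e_t
    resid = resid - resid.mean()
    stats = np.empty(n_boot)
    for b in range(n_boot):
        e = rng.choice(resid, size=len(resid), replace=True)
        y_star = np.concatenate(([y[0]], y[0] + np.cumsum(e)))   # rebuild a random walk
        stats[b] = df_tstat(y_star)
    return np.quantile(stats, level)          # left-tail critical value

# Illustrative series: a near-unit-root AR(1), where asymptotic critical values can mislead
rng = np.random.default_rng(6)
y = np.zeros(80)
for t in range(1, 80):
    y[t] = 0.95 * y[t - 1] + rng.standard_normal()
print(df_tstat(y), bootstrap_critical_value(y))
```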
