1 |
Generalized Semiparametric Approach to the Analysis of Variance
Pathiravasan, Chathurangi Heshani Karunapala 01 August 2019 (has links)
The one-way analysis of variance (ANOVA) rests on several assumptions and is used to compare the means of two or more independent groups of a factor. To relax the normality assumption in one-way ANOVA, recent studies have considered an exponential distortion, or tilt, of a reference distribution. The reason for the exponential distortion had not been investigated before; the main objective of this study is therefore to examine that reason closely. In doing so, a new generalized semi-parametric approach to one-way ANOVA is introduced. The proposed method compares not only the means but also the variances of distributions of any type. Simulation studies show that the proposed method performs more favorably than classical ANOVA. The method is demonstrated on meteorological radar data and credit limit data. The asymptotic distribution of the proposed estimator is derived in order to test the hypothesis of equality of one-sample multivariate distributions. A power comparison for one-sample multivariate distributions reveals a significant power improvement of the proposed chi-square test over Hotelling's T-squared test for non-normal distributions. A bootstrap paradigm is incorporated for testing the equality of distributions across multiple samples. In power-comparison simulations for multiple large samples, the proposed test outperforms existing parametric, nonparametric and semi-parametric approaches for non-normal distributions.
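To make the exponential-tilt idea concrete: under a density-ratio (exponential tilt) model, a group's density is a tilted version of a reference density, f1(x) = f0(x) exp(alpha + beta x), and the tilt parameters can be recovered by a logistic regression of group membership on x. The sketch below is a minimal Python illustration of that link on simulated data, not the estimator developed in the thesis; the normal samples and the use of scikit-learn's LogisticRegression are assumptions for demonstration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
x0 = rng.normal(0.0, 1.0, 300)   # reference group
x1 = rng.normal(0.5, 1.0, 300)   # group assumed to follow a tilted density

# If f1(x) = f0(x) * exp(alpha + beta * x), logistic regression of the
# group label on x recovers beta; the intercept absorbs alpha plus the
# log of the sample-size ratio.
x = np.concatenate([x0, x1]).reshape(-1, 1)
y = np.concatenate([np.zeros(x0.size), np.ones(x1.size)])
fit = LogisticRegression(C=1e6).fit(x, y)   # large C: effectively no penalty
beta = fit.coef_[0, 0]
alpha = fit.intercept_[0] - np.log(x1.size / x0.size)
print(f"estimated tilt: alpha = {alpha:.3f}, beta = {beta:.3f}")
```

For two normal densities differing only in mean, the true tilt is linear in x, so the recovered beta is roughly the mean shift here.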
|
2 |
A semi-parametric approach to estimating item response functions
Liang, Longjuan 22 June 2007 (has links)
No description available.
|
3 |
Design synthesis for morphing 3D meso-scale structure
Chu, Chen 21 May 2009 (has links)
Rapid prototyping (RP) can be used to make complex shapes with very little or even no constraint on the form of the parts. New design methods are needed for parts that can take advantage of the unique capabilities of RP. Although current synthesis methods can successfully solve simple design problems, generating solutions for practical applications with thousands to millions of elements is prohibitive.
Two factors are considered: the number of design variables and the optimization method. To reduce the number of design variables, a parametric approach is introduced. Control diameters are used to govern all strut sizes across the entire structure, using a concept similar to control vertices and a Bézier surface. This reduces the number of design variables from the number of elements to a small set of coefficients.
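As a sketch of that reduction, the snippet below evaluates a one-dimensional Bernstein (Bézier) basis so that a handful of control diameters determines the diameters of many struts; the four control values, the strut count, and the single-parameter layout are hypothetical stand-ins for the control-vertex scheme described above.

```python
import numpy as np
from math import comb

def bezier_diameters(controls, ts):
    """Evaluate a 1-D Bezier (Bernstein-basis) curve of control diameters at ts."""
    n = len(controls) - 1
    basis = np.array([[comb(n, i) * t**i * (1 - t)**(n - i)
                       for i in range(n + 1)] for t in ts])
    return basis @ np.asarray(controls)

# Hypothetical example: 4 control diameters (the only design variables)
# determine the diameters of 1000 struts parameterised along the structure.
control_diams = [1.0, 2.5, 0.8, 1.5]
strut_positions = np.linspace(0.0, 1.0, 1000)
strut_diams = bezier_diameters(control_diams, strut_positions)
print(strut_diams.shape, float(strut_diams.min()), float(strut_diams.max()))
```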
In lattice structure design, global optimization methods are popular and widely used. These methods use heuristic strategies to search the design space and thus, as opposed to traditional mathematical programming (MP) methods, perform a better global search. This work proposes that, although traditional MP methods only find a local optimum near the starting point, their quick convergence makes it more efficient to run such a method from multiple starting points to obtain a global search than to use a global optimization method. Particle Swarm Optimization and Levenberg-Marquardt are chosen for the experiments.
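A minimal sketch of the multi-start strategy, assuming SciPy's Levenberg-Marquardt least-squares solver and a hypothetical two-variable residual function rather than a lattice-design objective:

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(x):
    # Hypothetical multimodal objective written in least-squares form
    return np.array([x[0] ** 2 + x[1] ** 2 - 1.0,
                     np.sin(3.0 * x[0]) - x[1]])

rng = np.random.default_rng(1)
best = None
for _ in range(20):                                   # 20 random restarts
    x0 = rng.uniform(-2.0, 2.0, size=2)
    sol = least_squares(residuals, x0, method="lm")   # local Levenberg-Marquardt solve
    if best is None or sol.cost < best.cost:
        best = sol
print(best.x, best.cost)
```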
|
4 |
MEASURING COMMERCIAL BANK PERFORMANCE AND EFFICIENCY IN SUB-SAHARAN AFRICA
NGU, BRYAN; Mesfin, Tsegaye January 2009 (has links)
This paper measures the efficiency of banks in Sub-Saharan Africa and its determining input and output factors on two fronts. For this purpose, we first apply Data Envelopment Analysis (DEA) to assess efficiency levels; the actual and target levels of inputs/outputs needed to foster efficiency are shown in the results. Secondly, bank ratio analysis, measuring bank performance through returns volatility for each bank, asset utilization, and provision for bad and doubtful debts over the study period, is used as a tool for the analysis. Our results suggest that Sub-Saharan African banks are about 98.35% efficient. We are aware that the level of efficiency could swing up or down if environmental factors influencing bank efficiency were taken into consideration. Finally, our DEA result is most sensitive to loans, other liabilities, other non-interest expenses, securities and deposits.
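For reference, an input-oriented CCR DEA efficiency score can be computed as a small linear program. The sketch below uses SciPy's linprog on a made-up three-bank, two-input, one-output data set; it only illustrates the DEA machinery and is not the paper's model specification.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: rows = DMUs (banks), columns = inputs / outputs
X = np.array([[20.0, 300.0], [15.0, 250.0], [30.0, 400.0]])   # inputs
Y = np.array([[100.0], [90.0], [160.0]])                      # outputs

def dea_efficiency(k):
    """Input-oriented CCR efficiency of DMU k: minimise theta over (theta, lambda)."""
    n = X.shape[0]
    c = np.r_[1.0, np.zeros(n)]                    # objective: theta
    # input constraints: X^T lambda <= theta * x_k
    A_in = np.c_[-X[k][:, None], X.T]
    b_in = np.zeros(X.shape[1])
    # output constraints: Y^T lambda >= y_k
    A_out = np.c_[np.zeros((Y.shape[1], 1)), -Y.T]
    b_out = -Y[k]
    res = linprog(c, A_ub=np.r_[A_in, A_out], b_ub=np.r_[b_in, b_out],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

print([round(dea_efficiency(k), 3) for k in range(X.shape[0])])
```

A score of 1 marks a bank on the efficient frontier; smaller values give the proportional input reduction needed to reach it.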
|
5 |
Applications of constrained non-parametric smoothing methods in computing financial risk
Wong, Chung To (Charles) January 2008 (has links)
The aim of this thesis is to improve risk measurement estimation by incorporating extra information, in the form of constraints, into completely non-parametric smoothing techniques. A similar approach has been applied in empirical likelihood analysis. The method of constraints incorporates bootstrap resampling techniques, in particular the biased bootstrap. This thesis brings together formal estimation methods, the use of empirical information, and computationally intensive methods. The constraint approach is applied to non-parametric smoothing estimators to improve the estimation or modelling of risk measures. We consider estimation of Value-at-Risk and of intraday volatility for market risk, and of recovery rate densities for credit risk management.

Firstly, we study Value-at-Risk (VaR) and Expected Shortfall (ES) estimation. VaR and ES estimation are strongly related to quantile estimation, so tail estimation is of interest in its own right. We employ constrained and unconstrained kernel density estimators to estimate tail distributions, and we estimate quantiles from the fitted tail distribution. The constrained kernel density estimator is an application of the biased bootstrap technique proposed by Hall & Presnell (1998), with the constraint based on the Harrell-Davis (H-D) quantile estimator. We calibrate the performance of the constrained and unconstrained kernel density estimators by estimating tail densities from samples drawn from Normal and Student-t distributions. We find a significant improvement in fitting heavy-tailed distributions using the constrained kernel estimator in conjunction with the H-D quantile estimator. We also present an empirical study demonstrating VaR and ES calculation.

A credit event in financial markets is the event that a party fails to pay an obligation to another, and credit risk is the measure of uncertainty of such events. The recovery rate, in the credit risk context, is the rate of recuperation when a credit event occurs. It is defined as recovery rate = 1 - LGD, where LGD is the loss given default. From this point of view, the recovery rate is a key element both for credit risk management and for pricing credit derivatives; only credit risk management is considered in this thesis. To avoid the strong assumptions about the form of the recovery rate density made in current approaches, we propose a non-parametric technique incorporating a mode constraint, with an adjusted Beta kernel employed to estimate the recovery rate density. Encouraging results for the constrained Beta kernel estimator are illustrated by a large number of simulations, as genuine data are very confidential and difficult to obtain.

Modelling high-frequency data is a popular topic in contemporary finance. The intraday volatility patterns of standard indices and market-traded assets are well documented in the literature and reflect the different characteristics of different stock markets, such as the double U-shaped volatility pattern reported for the Hang Seng Index (HSI). We aim to capture this intraday volatility pattern using a non-parametric regression model. In particular, we propose a constrained function approximation technique to formally test the structure of the pattern and to approximate the location of the anti-mode of the U-shape. We illustrate this methodology on the HSI as an empirical example.
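As a plain, unconstrained illustration of the quantile-based risk measures discussed above, the sketch below computes VaR with the Harrell-Davis quantile estimator available in SciPy and ES as the average loss beyond VaR, on a simulated heavy-tailed sample; it does not implement the biased-bootstrap constraint used in the thesis.

```python
import numpy as np
from scipy.stats.mstats import hdquantiles

rng = np.random.default_rng(2)
losses = rng.standard_t(df=4, size=5000)        # hypothetical heavy-tailed loss sample

alpha = 0.99
var_hd = hdquantiles(losses, prob=[alpha])[0]   # Harrell-Davis quantile as VaR
es = losses[losses >= var_hd].mean()            # Expected Shortfall: mean loss beyond VaR
print(f"VaR at {alpha:.0%}: {var_hd:.3f}, ES: {es:.3f}")
```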
|
6 |
Assessing technical, allocative and economic efficiency of smallholder maize producers using the stochastic frontier approach in Chongwe District, Zambia
Kabwe, Michael 19 July 2012 (has links)
Smallholder farmers' efficiency has been measured by different scholars using different approaches. Both parametric and non-parametric approaches have been applied, each presenting results that differ in some ways. The parametric approach uses econometric methods that make assumptions about the error terms in the data-generating process and impose functional forms on the production function. Non-parametric approaches neither impose a functional form nor make assumptions about the error terms. The bottom line of both approaches is to determine efficiency in production. In this study a parametric stochastic frontier approach is used to assess the technical, allocative and economic efficiency of a sample of smallholder maize producers in Chongwe District, Zambia. This approach was chosen because production among this group of farmers varies a great deal, and the stochastic frontier attributes part of the variation to random errors (reflecting measurement error and statistical noise) and part to farm-specific inefficiency. Using a Cobb-Douglas frontier production function, which is self-dual, technical efficiency scores for the sample of smallholder maize producers are derived. With the parameter estimates obtained from the Cobb-Douglas stochastic production frontier, together with input prices and the self-dual property of the Cobb-Douglas form, a cost function is derived. This forms the basis for calculating the farmers' allocative and economic efficiency. Results show considerable technical, allocative and economic inefficiency among smallholder maize producers. Technical efficiency (TE) estimates range from 40.6 percent to 96.53 percent with a mean of 78.19 percent, while allocative efficiency (AE) estimates range from 33.57 to 92.14 percent with a mean of 61.81 percent. The mean economic efficiency (EE) is 47.88 percent, with a minimum of 30 percent and a maximum of 79.26 percent. The results therefore indicate that inefficiency in maize production in Chongwe District is dominated by allocative and economic inefficiency. Additionally, in a two-stage regression, household characteristics (age, sex, education level, occupation, years in farming, land ownership, household size, access to extension and access to credit services) are regressed against the technical efficiency scores using a logit function. The results show that land ownership, access to credit services, access to extension services, and education up to post-primary level (secondary and tertiary) have a positive influence on households' technical efficiency. On the other hand, the age of the household head, female-headed households and lack of education (though not statistically significant at any confidence level) have a negative influence for this group of maize producers. In a similar two-stage regression, access to extension services, membership of a producer organisation, access to credit, and disasters experienced on the farm such as floods, drought and hail are regressed against AE. The results show that access to extension services, access to credit services, membership of cooperatives and natural calamities affect AE. Overall, there is far more allocative and economic inefficiency among smallholder maize farmers than technical inefficiency. To address these inefficiencies there is a need to design policies that ensure that environmental issues (e.g. poor land practices which lead to nutrient depletion of the soils), economic issues (e.g. high transport costs due to poor road infrastructure) and institutional issues (access to credit) are addressed. In other words, Government should help create credit facilities that provide affordable loans to this group of farmers. Additionally, there is a need to improve extension systems to help educate farmers about better farming practices and other innovative technologies to further improve their efficiency in production. Issues of land ownership among this group of farmers need to be addressed, as this will not only raise confidence but also reduce the cost of production, since there will be no need to pay rental charges, and farmers will adhere to good farming practices knowing they hold title to the land. / Copyright / Dissertation (MSc)--University of Pretoria, 2012. / Agricultural Economics, Extension and Rural Development / unrestricted
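A rough sketch of how technical efficiency scores can be backed out of a log Cobb-Douglas fit; for simplicity it uses corrected OLS (shifting the fitted frontier to the best observation) rather than the maximum-likelihood stochastic frontier estimated in the study, and the farm data are simulated.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
# Hypothetical farm inputs: land, labour, fertiliser (positive by construction)
X = np.exp(rng.normal(size=(n, 3)))
u = rng.exponential(0.3, n)                       # one-sided inefficiency term
y = 2.0 * X[:, 0]**0.4 * X[:, 1]**0.3 * X[:, 2]**0.2 * np.exp(-u)

# Log Cobb-Douglas: ln y = b0 + sum_i b_i ln x_i + e, estimated by OLS
Z = np.c_[np.ones(n), np.log(X)]
beta, *_ = np.linalg.lstsq(Z, np.log(y), rcond=None)
resid = np.log(y) - Z @ beta

# Corrected OLS (COLS): shift the frontier to the best observation,
# technical efficiency TE_i = exp(resid_i - max resid), in (0, 1]
te = np.exp(resid - resid.max())
print(beta.round(2), round(float(te.mean()), 3))
```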
|
7 |
Spectral Analysis of Nonuniformly Sampled Data and Applications
Babu, Prabhu January 2012 (has links)
Signal acquisition, signal reconstruction and analysis of the signal spectrum are the three most important steps in signal processing, and they are found in almost all modern-day hardware. In most signal processing hardware, the signal of interest is sampled at uniform intervals satisfying conditions such as the Nyquist rate. In some cases, however, the privilege of having uniformly sampled data is lost due to constraints on the hardware resources. This thesis addresses the important problem of signal reconstruction and spectral analysis from nonuniformly sampled data and presents a variety of methods, which are tested via numerical experiments on both artificial and real-life data sets. The thesis starts with a brief review of methods available in the literature for signal reconstruction and spectral analysis from nonuniformly sampled data. The methods discussed are classified into two broad categories, dense and sparse methods, according to the kind of spectra to which they apply. Among dense spectral methods, the main contribution of the thesis is a non-parametric approach named LIMES, which recovers a smooth spectrum from nonuniformly sampled data; apart from recovering the spectrum, LIMES also gives an estimate of the covariance matrix. Among sparse methods, the two main contributions are SPICE and LIKES, both user-parameter-free sparse estimation methods applicable to line spectral estimation. Other important contributions are extensions of SPICE and LIKES to multivariate time series and array processing models, and a solution to the grid selection problem in sparse estimation of spectral-line parameters. The third and final part of the thesis applies the methods discussed to radial velocity data analysis for exoplanet detection. Apart from the exoplanet application, an application based on Sudoku, which is related to sparse parameter estimation, is also discussed.
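For context, the classical baseline for spectral analysis of nonuniformly sampled data is the Lomb-Scargle periodogram. The sketch below applies SciPy's implementation to simulated irregular samples; it is not the LIMES, SPICE or LIKES methods developed in the thesis, and the signal and sampling scheme are made up for illustration.

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(4)
t = np.sort(rng.uniform(0.0, 100.0, 400))                 # nonuniform sampling times
x = np.sin(2 * np.pi * 0.17 * t) + 0.5 * rng.normal(size=t.size)

freqs = np.linspace(0.01, 0.5, 2000)                      # trial frequencies in Hz
pgram = lombscargle(t, x - x.mean(), 2 * np.pi * freqs)   # expects angular frequencies
print(f"strongest peak near {freqs[np.argmax(pgram)]:.3f} Hz")
```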
|
8 |
Dimension Flexible and Adaptive Statistical Learning
Khowaja, Kainat 02 March 2023
As interdisciplinary research, this thesis couples statistical learning with current advanced methods for dealing with high dimensionality and nonstationarity. Chapter 2 provides tools to make statistical inference (uniformly over the covariate space) on the parameter functions of Generalized Random Forests identified as the solution of a local moment condition. This is done either via a high-dimensional Gaussian approximation theorem or via the multiplier bootstrap. The theoretical aspects of both approaches are discussed in detail alongside extensive simulations and real-life applications. In Chapter 3, we extend the local parametric approach to time-varying Poisson processes, providing a tool to find intervals of homogeneity within time series of count data in a nonstationary setting. The methodology involves recursive likelihood ratio tests and has a maximum of the test statistic with unknown distribution. To approximate it and find the critical value, we use the multiplier bootstrap and demonstrate the utility of this algorithm on German M&A data. Chapter 4 is concerned with creating low-dimensional approximations of high-dimensional data from dynamical systems. Using various resampling methods, Principal Component Analysis, and interpolation techniques, we construct reduced-dimensional surrogate models that provide faster responses compared to the original high-fidelity models. In Chapter 5, we aim to link the distributional characteristics of cryptocurrencies to their underlying mechanisms.
We use characteristic-based spectral clustering to group cryptocurrencies with similar behaviour in terms of price, block time, and block size, and scrutinize these clusters to find common mechanisms between the various crypto clusters.
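As a toy version of the surrogate-modelling step described for Chapter 4, the sketch below projects simulated high-dimensional snapshots onto a few principal components and reconstructs them from the reduced coordinates; the dynamical system, the dimensions, and the use of scikit-learn's PCA are assumptions for illustration only.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(5)
# Hypothetical high-fidelity output: 500 time steps of a 2000-dimensional
# state built from three latent temporal modes plus small noise.
t = np.linspace(0, 10, 500)
modes = rng.normal(size=(3, 2000))
snapshots = np.column_stack([np.sin(t), np.cos(2 * t), np.sin(3 * t)]) @ modes
snapshots += 0.01 * rng.normal(size=snapshots.shape)

pca = PCA(n_components=3).fit(snapshots)          # low-dimensional basis
coeffs = pca.transform(snapshots)                 # reduced coordinates (500 x 3)
reconstruction = pca.inverse_transform(coeffs)    # cheap surrogate output
err = np.linalg.norm(snapshots - reconstruction) / np.linalg.norm(snapshots)
print(f"relative reconstruction error: {err:.4f}")
```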
|
9 |
Modelling Financial and Social Networks
Klochkov, Yegor 04 October 2019
In this work we explore some ways of studying financial and social networks, a topic that has recently received a tremendous amount of attention in the econometric literature.
Chapter 2 studies the risk spillover effect via the Multivariate Conditional Autoregressive Value at Risk model introduced in White et al. (2015). We are particularly interested in applications to non-stationary time series and develop a sequential test procedure that chooses the largest available interval of homogeneity. Our approach is based on change-point test statistics, and we use a novel multiplier bootstrap approach for the evaluation of critical values.
In Chapter 3 we turn to social networks. We model interactions between users through a vector autoregressive model, following Zhu et al. (2017). To cope with high dimensionality we consider a network that is driven by influencers on one side and communities on the other, which helps us to estimate the autoregressive operator even when the number of active parameters is smaller than the sample size.
Chapter 4 is devoted to technical tools related to covariance and cross-covariance estimation. We derive uniform versions of the Hanson-Wright inequality for a random vector with independent subgaussian components. The core technique is based on the entropy method combined with truncations of both the gradients of the functions of interest and the coordinates themselves. We provide several applications of our techniques: we establish a version of the standard Hanson-Wright inequality which is tighter in some regimes. Extending our results, we show a version of the dimension-free matrix Bernstein inequality that holds for random matrices with a subexponential spectral norm. We apply the derived inequality to the problem of covariance estimation with missing observations and prove an improved high-probability version of the recent result of Lounici (2014).
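A small sketch of covariance estimation with missing observations under independent entry-wise masking: with observation probability delta and zero-mean data, rescaling the naive second-moment matrix of the zero-imputed sample (off-diagonal entries by 1/delta^2, diagonal entries by 1/delta) gives an unbiased estimator. This is the standard estimator in this setting, not the improved bound proved in the thesis; the dimensions and masking probability below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
n, p, delta = 2000, 10, 0.7                       # delta = probability an entry is observed

A = rng.normal(size=(p, p))
Sigma = A @ A.T / p                               # hypothetical true covariance
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
mask = rng.random((n, p)) < delta                 # each entry observed independently
Y = np.where(mask, X, 0.0)                        # missing entries imputed by zero

# E[Y'Y/n] has off-diagonal delta^2 * Sigma and diagonal delta * Sigma,
# so rescale the two parts separately to debias.
S = Y.T @ Y / n
Sigma_hat = S / delta**2
np.fill_diagonal(Sigma_hat, np.diag(S) / delta)
err = np.linalg.norm(Sigma_hat - Sigma, 2) / np.linalg.norm(Sigma, 2)
print(f"relative operator-norm error: {err:.3f}")
```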
|