11

Multiscale Total Variation Estimators for Regression and Inverse Problems

Álamo, Miguel del 24 May 2019 (has links)
No description available.
12

Análise da série do índice de Depósito Interfinanceiro: modelagem da volatilidade e apreçamento de suas opções. / Analysis of Brazilian Interbank Deposit Index series: volatility modeling and option pricing

Mauad, Roberto Baltieri 05 December 2013 (has links)
Many models currently used for pricing interest-rate derivatives make overly restrictive assumptions about the volatility of the underlying asset's series. The Black-Scholes and Vasicek models, for instance, treat the variance of the series as constant across time and across maturities, an assumption that may not be appropriate in all cases. Among the alternative volatility-modeling techniques under study, kernel regressions stand out. In this work we discuss nonparametric modeling via kernel regression and the subsequent pricing of options in a Gaussian HJM model. We analyze different specifications for the nonparametric estimation of the volatility function through Monte Carlo simulations for the pricing of options on zero-coupon bonds, and we conduct an empirical study applying the proposed methodology to the pricing of IDI options in the Brazilian market. One of the main results is the good fit of the proposed methodology in the pricing of options on zero-coupon bonds.
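The kernel-regression idea at the core of this abstract can be sketched with a minimal Nadaraya-Watson estimator. This is an illustrative Python sketch, not the thesis's implementation; the Gaussian kernel, bandwidth, and toy data are assumptions for demonstration.

```python
import numpy as np

def nadaraya_watson(x_train, y_train, x_eval, bandwidth):
    """Nadaraya-Watson kernel regression with a Gaussian kernel.

    Estimates m(x) = E[Y | X = x] as a locally weighted average
    of the observed responses.
    """
    # Pairwise scaled distances between evaluation and training points.
    u = (x_eval[:, None] - x_train[None, :]) / bandwidth
    weights = np.exp(-0.5 * u**2)            # Gaussian kernel weights
    return (weights @ y_train) / weights.sum(axis=1)

# Toy example: recover a smooth volatility-like curve from noisy data.
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 1.0, 200))
y = 0.2 + 0.1 * np.sin(2 * np.pi * x) + rng.normal(0.0, 0.02, 200)
m_hat = nadaraya_watson(x, y, x, bandwidth=0.05)
```

The estimate `m_hat` smooths the noise away while tracking the underlying curve; the bandwidth controls the bias-variance trade-off.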
14

Das Arbeitsangebot verheirateter Frauen in den neuen und alten Bundesländern / The labor supply of married women in the new and old German federal states

Kempe, Wolfram January 1996 (has links)
This paper presents a regression analysis of the factors influencing the decision of married German women to take up employment. To identify differences in behavior between East and West German women, the analysis was carried out separately on two data sets. To avoid assumptions about the functional form of the relationship, the Generalized Additive Model (GAM), a semiparametric regression model, was chosen. This model class, which combines nonparametric and parametric regression methods, has so far found little use in practice, mainly because of its estimation procedure, backfitting. In the last year, new approaches to estimation in this model class have emerged, and the analytical properties of the new estimator are easier to determine. Using this estimator, differences between East and West could be worked out precisely, and the functional relationships between the explanatory variables and the response variable were investigated. The analysis revealed clear differences in labor supply behavior between women in the two parts of the country.
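The backfitting procedure mentioned above, the classical estimator for additive models such as the GAM, can be sketched as follows. This is a simplified illustration (identity link, two covariates, a plain Gaussian-kernel smoother); the smoother, bandwidth, and simulated data are assumptions, not the paper's setup.

```python
import numpy as np

def smooth(x, r, bandwidth=0.2):
    """Gaussian-kernel smoother of partial residuals r against x."""
    u = (x[:, None] - x[None, :]) / bandwidth
    w = np.exp(-0.5 * u**2)
    return (w @ r) / w.sum(axis=1)

def backfit(x1, x2, y, n_iter=20):
    """Backfitting for the additive model y = alpha + f1(x1) + f2(x2) + noise."""
    alpha = y.mean()
    f1 = np.zeros_like(y)
    f2 = np.zeros_like(y)
    for _ in range(n_iter):
        # Smooth the partial residuals against each covariate in turn.
        f1 = smooth(x1, y - alpha - f2)
        f1 -= f1.mean()                  # identifiability: centre each component
        f2 = smooth(x2, y - alpha - f1)
        f2 -= f2.mean()
    return alpha, f1, f2

rng = np.random.default_rng(1)
n = 300
x1 = rng.uniform(-1, 1, n)
x2 = rng.uniform(-1, 1, n)
y = 1.0 + np.sin(np.pi * x1) + x2**2 + rng.normal(0.0, 0.1, n)
alpha, f1, f2 = backfit(x1, x2, y)
```

Each cycle refits one component function against the residuals of the others until the fits stabilize, which is the behavior that makes the backfitting estimator's analytical properties hard to pin down.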
15

Estimation and testing the effect of covariates in accelerated life time models under censoring

Liero, Hannelore January 2010 (has links)
The accelerated lifetime model is considered. To test the influence of the covariate, we transform the model into a regression model. Since censoring is allowed, this approach leads to a goodness-of-fit problem for regression functions under censoring. Nonparametric estimation of regression functions under censoring is therefore investigated, a limit theorem for an L2-distance is stated, and a test procedure is formulated. Finally, a Monte Carlo procedure is proposed.
16

Selection of smoothing parameters with application in causal inference

Häggström, Jenny January 2011 (has links)
This thesis is a contribution to the research area concerned with the selection of smoothing parameters in the framework of nonparametric and semiparametric regression. Selection of smoothing parameters is one of the most important issues in this framework, and the choice can heavily influence subsequent results. A nonparametric or semiparametric approach is often desirable when large datasets are available, since this allows us to make fewer and weaker assumptions than a parametric approach requires. In the first paper we consider smoothing parameter selection in nonparametric regression when the purpose is to accurately predict future or unobserved data. We study the use of accumulated prediction errors and make comparisons to leave-one-out cross-validation, which is widely used by practitioners. In the second paper a general semiparametric additive model is considered, and the focus is on selection of smoothing parameters when optimal estimation of some specific parameter is of interest. We introduce a double smoothing estimator of a mean squared error and propose to select smoothing parameters by minimizing this estimator. Our approach is compared with existing methods. The third paper is concerned with the selection of smoothing parameters optimal for estimating average treatment effects defined within the potential outcome framework. For this estimation problem we propose double smoothing methods similar to the method proposed in the second paper. Theoretical properties of the proposed methods are derived, and comparisons with existing methods are made by simulations. In the last paper we apply our results from the third paper by using a double smoothing method for selecting smoothing parameters when estimating average treatment effects on the treated. We estimate the effect on BMI of divorcing in middle age, using rich data on socioeconomic conditions, health and lifestyle from Swedish longitudinal registers.
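Leave-one-out cross-validation, the baseline bandwidth selector that the first paper compares against, can be sketched for a Nadaraya-Watson smoother as follows; the kernel, candidate bandwidths, and simulated data are illustrative assumptions.

```python
import numpy as np

def loo_cv_score(x, y, bandwidth):
    """Leave-one-out CV score for a Nadaraya-Watson estimator."""
    u = (x[:, None] - x[None, :]) / bandwidth
    w = np.exp(-0.5 * u**2)
    np.fill_diagonal(w, 0.0)             # leave each point out of its own fit
    y_hat = (w @ y) / w.sum(axis=1)
    return np.mean((y - y_hat) ** 2)

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0.0, 1.0, 150))
y = np.sin(4 * x) + rng.normal(0.0, 0.2, 150)

# Score a grid of candidate bandwidths and pick the minimizer.
bandwidths = [0.01, 0.05, 0.1, 0.3, 1.0]
scores = [loo_cv_score(x, y, h) for h in bandwidths]
best = bandwidths[int(np.argmin(scores))]
```

Very small bandwidths are punished for chasing the noise, very large ones for oversmoothing; the CV score bottoms out at an intermediate value.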
17

An Additive Bivariate Hierarchical Model for Functional Data and Related Computations

Redd, Andrew Middleton 2010 August 1900 (has links)
The work presented in this dissertation centers on the theme of regression and computation methodology. Functional data is an important class of longitudinal data, and principal component analysis is an important approach to regression with this type of data. Here we present an additive hierarchical bivariate functional data model employing principal components to identify random effects. This additive model extends the univariate functional principal component model. These models are implemented in the pfda package for R. To fit the curves from this class of models, orthogonalized spline bases are used to reduce the dimensionality of the fit while retaining flexibility. Methods for handling spline basis functions in a purely analytical manner, including the orthogonalizing process and the computation of the penalty matrices used to fit the principal component models, are presented. The methods are implemented in the R package orthogonalsplinebasis. The projects discussed involve complicated coding for the implementations in R. To facilitate this I created the NppToR utility to add R functionality to the popular Windows code editor Notepad++. A brief overview of the use of the utility is also included.
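The orthogonalization of a spline basis mentioned above can be sketched numerically with a QR decomposition of a truncated-power basis evaluated on a grid. The degree, knots, and grid here are illustrative assumptions; the orthogonalsplinebasis package itself performs the orthogonalization analytically rather than on a grid.

```python
import numpy as np

# Truncated-power cubic spline basis on [0, 1] with three interior knots:
# columns are 1, x, x^2, x^3, and (x - k)_+^3 for each knot k.
x = np.linspace(0.0, 1.0, 200)
knots = [0.25, 0.5, 0.75]
B = np.column_stack([x**d for d in range(4)] +
                    [np.clip(x - k, 0.0, None)**3 for k in knots])

# Orthogonalize: Q spans the same space as B, but its columns are
# orthonormal under the empirical inner product on the grid.
Q, _ = np.linalg.qr(B)
gram = Q.T @ Q                           # should be (near) the identity
```

Fitting against `Q` instead of `B` leaves the fitted space unchanged but makes the normal equations well conditioned, which is the point of orthogonalizing the basis.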
18

Statistical Learning of Some Complex Systems: From Dynamic Systems to Market Microstructure

Tong, Xiao Thomas 27 September 2013 (has links)
A complex system is one with many parts, whose behaviors are strongly dependent on each other. There are two interesting questions about complex systems. One is to understand how to recover the true structure of a complex system from noisy data. The other is to understand how the system interacts with its environment. In this thesis, we address these two questions by studying two distinct complex systems: dynamic systems and market microstructure. To address the first question, we focus on some nonlinear dynamic systems. We develop a novel Bayesian statistical method, Gaussian Emulator, to estimate the parameters of dynamic systems from noisy data, when the data are either fully or partially observed. Our method shows that estimation accuracy is substantially improved and computation is faster, compared to the numerical solvers. To address the second question, we focus on the market microstructure of hidden liquidity. We propose some statistical models to explain the hidden liquidity under different market conditions. Our statistical results suggest that hidden liquidity can be reliably predicted given the visible state of the market. / Statistics
19

JAMES-STEIN TYPE COMPOUND ESTIMATION OF MULTIPLE MEAN RESPONSE FUNCTIONS AND THEIR DERIVATIVES

Feng, Limin 01 January 2013 (has links)
Charnigo and Srinivasan originally developed compound estimators to nonparametrically estimate mean response functions and their derivatives simultaneously when there is one response variable and one covariate. The compound estimator maintains self-consistency and an almost optimal convergence rate. This dissertation studies, in part, compound estimation with multiple responses and/or covariates. An empirical comparison of compound estimation, local regression and spline smoothing is included, and near-optimal convergence rates are established in the presence of multiple covariates. James and Stein proposed an estimator of the mean vector of a p-dimensional multivariate normal distribution which produces a smaller risk than the maximum likelihood estimator if p is at least 3. In this dissertation, we also extend their idea to a nonparametric regression setting. More specifically, we present Steinized local regression estimators of p mean response functions and their derivatives. We consider different covariance structures for the error terms, and whether or not a known upper bound for the estimation bias is assumed. We also apply Steinization to compound estimation, considering its application both to pointwise estimators (for example, as obtained through local regression) and to weight functions. Finally, the new methodology introduced in this dissertation is demonstrated on numerical data illustrating the outcomes of a laboratory experiment in which radiation induces nanoparticles to scatter evanescent waves. The patterns of scattering, as represented by derivatives of multiple mean response functions, may be used to classify nanoparticles by their sizes and structures.
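The James-Stein estimator referred to above can be sketched and its risk advantage checked by Monte Carlo. This is the textbook construction with unit error variance and shrinkage toward the origin, not the dissertation's Steinized local regression; the true mean vector and replication count are illustrative assumptions.

```python
import numpy as np

def james_stein(z, sigma2=1.0):
    """James-Stein shrinkage of one observation z of a p-vector mean.

    Shrinks the MLE (z itself) toward the origin; it dominates the MLE
    in total squared-error risk whenever p >= 3.
    """
    p = z.size
    shrink = 1.0 - (p - 2) * sigma2 / np.dot(z, z)
    return shrink * z

# Monte Carlo comparison of total squared-error risk for p = 10.
rng = np.random.default_rng(3)
p, n_rep = 10, 2000
theta = np.ones(p)                       # true mean vector (assumed)
mle_risk = js_risk = 0.0
for _ in range(n_rep):
    z = theta + rng.normal(0.0, 1.0, p)
    mle_risk += np.sum((z - theta) ** 2)
    js_risk += np.sum((james_stein(z) - theta) ** 2)
mle_risk /= n_rep
js_risk /= n_rep
```

The simulated risk of the MLE sits near p, while the James-Stein estimator comes in strictly below it, illustrating the dominance result that the dissertation carries over to the regression setting.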
20

Production Economics Modeling and Analysis of Polluting firms: The Production Frontier Approach

Mekaroonreung, Maethee 2012 August 1900 (has links)
As concern grows about energy and environmental issues, energy and environmental modeling and related policy analysis are critical for today's society. Polluting firms such as coal power plants play an important role in providing the electricity that drives the U.S. economy, while also producing pollution that damages the environment and human health. This dissertation models and estimates polluting firms' production using nonparametric methods. First, the frontier production function of polluting firms is characterized by weak disposability between outputs and pollutants, reflecting the opportunity cost of reducing pollutants. The StoNED method is extended to estimate a weak-disposability frontier production function accounting for random noise in the data, and is applied to U.S. coal power plants under the Acid Rain Program to find the average technical inefficiency and the shadow prices of SO2 and NOx. Second, polluting firms' production processes are modeled characterizing both the output production process and the pollution abatement process. Using the law of conservation of mass applied to the pollution abatement process, this dissertation develops a new frontier pollutant function, which is then used to find the corresponding marginal abatement costs of pollutants. The StoNEZD method is applied to estimate a frontier pollutant function considering the vintage of capital owned by the polluting firms, and to estimate the average NOx marginal abatement cost for U.S. coal power plants under the current Clean Air Interstate Rule NOx program. Last, the effect of technical change on marginal abatement costs is investigated using an index decomposition technique. The StoNEZD method is extended to estimate sequential frontier pollutant functions reflecting innovation in pollution reduction, and is then applied to estimate the effect of technical change on the marginal abatement cost of U.S. coal power plants under the same program.
