1

MCMC methods for wavelet representations in single index models

Park, Chun Gun, 30 September 2004
Single index models are a special type of nonlinear regression model that are partially linear and play an important role in fields that employ multidimensional regression models. A wavelet series is a good approximation to any function in the space. There are two ways to represent the function: one in which all wavelet coefficients are used in the series, and another that allows for shrinkage rules. We propose posterior inference for the two wavelet representations of the function. To implement posterior inference, we define a hierarchical (mixture) prior model on the scaling (wavelet) coefficients. Since in both representations a non-zero coefficient carries information about the features of the function at a certain scale and location, a prior model for the coefficient should depend on its resolution level. In wavelet shrinkage rules we use "pseudo-priors" for a zero coefficient. In single index models, a direction theta affects estimates of the function. Transforming theta to spherical polar coordinates is a convenient way of imposing the unit-norm constraint. The posterior distribution of the direction is unknown, so we employ a Metropolis algorithm and an independence sampler, both of which require a proposal distribution. A normal distribution is used as the proposal for the direction, and we introduce ways to choose its mode (mean) for the independence sampler. In Monte Carlo simulations and on real data we compare the performance of the Metropolis algorithm and the independence sampler for the direction, using two test functions: the cosine function, represented only by the smooth part of the wavelet series, and the Doppler function, represented by both the smooth and detail parts of the series.
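The Metropolis step for the direction described above can be sketched in a few lines. This is a minimal illustration, not the thesis's method: in two dimensions theta reduces to a single polar angle `phi`, the normal random-walk proposal mirrors the abstract, but `log_post` is a crude stand-in (a quadratic fit of y on the index) rather than the wavelet-based posterior, and all function and parameter names are hypothetical.

```python
import numpy as np

def log_post(phi, x, y):
    # Placeholder log posterior for the angle: measures how well a simple
    # smooth fit of y on the index x @ theta explains the data.  The real
    # model would use the wavelet-series representation instead.
    theta = np.array([np.cos(phi), np.sin(phi)])  # unit direction from the angle
    index = x @ theta
    coefs = np.polyfit(index, y, 2)               # crude quadratic stand-in fit
    resid = y - np.polyval(coefs, index)
    return -0.5 * len(y) * np.log(np.sum(resid ** 2))

def metropolis_direction(x, y, n_iter=500, step=0.1, seed=0):
    # Random-walk Metropolis on the polar angle with a normal proposal
    rng = np.random.default_rng(seed)
    phi = 0.0
    lp = log_post(phi, x, y)
    draws = np.empty(n_iter)
    for i in range(n_iter):
        prop = phi + step * rng.standard_normal()  # normal proposal
        lp_prop = log_post(prop, x, y)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject
            phi, lp = prop, lp_prop
        draws[i] = phi
    return draws

# Toy data with true direction at 45 degrees (angle pi/4)
rng = np.random.default_rng(1)
x = rng.standard_normal((200, 2))
true_theta = np.array([np.cos(np.pi / 4), np.sin(np.pi / 4)])
y = np.cos(x @ true_theta) + 0.1 * rng.standard_normal(200)
draws = metropolis_direction(x, y)
```

The same chain with an independence sampler would replace the random-walk proposal by draws from a fixed normal distribution whose mode has to be chosen well, which is exactly the tuning question the abstract raises.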
2

(Ultra-)High Dimensional Partially Linear Single Index Models for Quantile Regression

Zhang, Yuankun, 30 October 2018
No description available.
3

Essays on High-dimensional Nonparametric Smoothing and Its Applications to Asset Pricing

Wu, Chaojiang, 25 October 2013
No description available.
4

Nonparametric estimation of the conditional quantile and semi-parametric learning: applications to insurance and actuarial data

Knefati, Muhammad Anas, 19 November 2015
The thesis consists of two parts: one on the estimation of conditional quantiles and the other on supervised learning. The "conditional quantile estimation" part is organized into 3 chapters. Chapter 1 introduces local linear regression and presents the methods most used in the literature to estimate the smoothing parameter. Chapter 2 reviews existing nonparametric estimation methods for the conditional quantile and compares them through numerical experiments on simulated and real data. Chapter 3 is devoted to a new conditional quantile estimator that we propose; this estimator is based on kernels that are asymmetric in x. Under certain hypotheses, the new estimator proves more efficient than the usual estimators.
The "supervised learning" part also comprises 3 chapters. Chapter 4 introduces statistical learning and the basic concepts used in this part. Chapter 5 reviews conventional methods of supervised classification. Chapter 6 proposes a method for transferring a semi-parametric learning model. The performance of this method is demonstrated by numerical experiments on morphometric data and credit-scoring data.
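The conditional quantile estimation the abstract discusses can be illustrated with a minimal sketch: a kernel-weighted empirical quantile, i.e. inverting a kernel estimate of the conditional CDF. This uses an ordinary symmetric Gaussian kernel; the asymmetric kernels that are the thesis's contribution are not reproduced here, and the function name and bandwidth are illustrative assumptions.

```python
import numpy as np

def kernel_conditional_quantile(x0, x, y, tau=0.5, h=0.5):
    # Gaussian-kernel weights centered at the evaluation point x0
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    w = w / w.sum()
    # Invert the weighted empirical CDF: smallest y whose cumulative
    # weight reaches the quantile level tau
    order = np.argsort(y)
    cum_w = np.cumsum(w[order])
    idx = min(int(np.searchsorted(cum_w, tau)), len(y) - 1)
    return y[order][idx]

# Toy check: for y = x + small noise, the conditional median at x0
# should track x0 itself
rng = np.random.default_rng(0)
x = np.linspace(-2.0, 2.0, 400)
y = x + 0.1 * rng.standard_normal(400)
q50 = kernel_conditional_quantile(1.0, x, y, tau=0.5, h=0.3)
```

The bandwidth `h` plays the role of the smoothing parameter whose selection Chapter 1 surveys: too small and the estimate is noisy, too large and it is biased toward the global quantile.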
