1

Modeling and Estimation of Linear and Nonlinear Piezoelectric Systems

Paruchuri, Sai Tej 13 October 2020 (has links)
Much of the research on piezoelectric systems in recent years can be classified into two categories: (1) studies of linear piezoelectric oscillator arrays and (2) studies of nonlinear piezoelectric oscillators. This dissertation derives novel linear and nonlinear modeling and estimation methods for such piezoelectric systems. In the first part, this work develops modeling and design methods for Piezoelectric Subordinate Oscillator Arrays (PSOAs) for the wideband vibration attenuation problem. PSOAs offer a straightforward, low-mass-ratio solution for canceling the resonant peaks in a host structure's frequency response. Further, they provide adaptability through shunt tuning, which gives them the ability to recover performance lost to structural parameter errors. This dissertation derives governing equations that yield a closed-form expression for the frequency response function. It also analyzes systematic approaches for assigning distributions to the nondimensional parameters in the frequency response function to achieve the desired flat-band frequency response. Finally, the effectiveness of PSOAs under ideal and nonideal conditions is demonstrated through extensive numerical and experimental studies. The concept of performance recovery, introduced in the empirical studies, measures a PSOA's effectiveness in the presence of disorder before and after capacitive tuning. The second part of this dissertation introduces novel modeling and estimation methods for nonlinear piezoelectric oscillators. Traditional modeling techniques require knowledge of the structure as well as the source of nonlinearity; data-driven modeling techniques, used extensively in recent times, instead build approximations from data. An adaptive estimation method that uses reproducing kernel Hilbert space (RKHS) embedding can estimate the underlying nonlinear function that governs the system's dynamics, and a model built by such a method can overcome some of the limitations of the modeling approaches mentioned above. This dissertation discusses (i) how to construct the RKHS-based estimator for the piezoelectric oscillator problem, (ii) how to choose kernel centers for approximating the RKHS, and (iii) sufficient conditions for convergence of the function estimate to the actual function. In each of these discussions, numerical studies show the RKHS-based adaptive estimator's effectiveness for identifying nonlinearities in piezoelectric oscillators. / Doctor of Philosophy / Piezoelectric materials are materials that generate an electric charge when mechanical stress is applied, and vice versa, in a lossless transformation. Engineers have used piezoelectric materials for a variety of applications, including vibration control and energy harvesting. This dissertation introduces (1) novel methods for vibration attenuation using an array of piezoelectric oscillators and (2) methods to model and estimate the nonlinear behavior exhibited by piezoelectric materials at very high mechanical forces or electric charges. Arrays of piezoelectric oscillators attached to a host structure are termed piezoelectric subordinate oscillator arrays (PSOAs). We show that carefully designed PSOAs can reduce the vibration of the host structure. This dissertation analyzes methodologies for designing PSOAs and illustrates their vibration attenuation capabilities numerically and experimentally. The numerical and experimental studies also illustrate the robustness of PSOAs.
In the second part of this dissertation, we analyze reproducing kernel Hilbert space embedding methods for adaptive estimation of nonlinearities in piezoelectric systems. Kernel methods are extensively used in machine learning, and control theorists have studied adaptive estimation of functions in finite-dimensional spaces. In this work, we adapt kernel methods for adaptive estimation of functions in infinite-dimensional spaces that appear while modeling piezoelectric systems. We derive theorems that ensure convergence of function estimates to the actual function and develop algorithms for careful selection of the kernel basis functions.
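As a rough sketch of this kind of RKHS-based adaptive estimator, the snippet below expands the unknown nonlinearity over a fixed grid of Gaussian kernel centers and adapts the weights from streaming, noisy observations with a gradient-type update. The kernel, centers, gain, and cubic test nonlinearity are illustrative assumptions, not the dissertation's actual update law or center-selection rule.

```python
import numpy as np

# Minimal sketch of an RKHS-based adaptive estimator (assumed: Gaussian
# kernel, fixed grid of kernel centers, gradient-descent weight update).

def gaussian_kernel(x, c, width=0.5):
    return np.exp(-((x - c) ** 2) / (2 * width ** 2))

# Unknown nonlinearity to be identified (stand-in for, e.g., a cubic
# stiffness term in a piezoelectric oscillator).
f_true = lambda x: -0.8 * x ** 3

centers = np.linspace(-2, 2, 25)   # kernel centers approximating the RKHS
alpha = np.zeros_like(centers)     # adaptive weights
gamma = 0.05                       # adaptation gain (learning rate)

rng = np.random.default_rng(0)
for _ in range(5000):
    x = rng.uniform(-2, 2)                 # sampled oscillator state
    y = f_true(x) + 0.01 * rng.normal()    # noisy observation of f(x)
    phi = gaussian_kernel(x, centers)      # kernel feature vector at x
    err = y - alpha @ phi                  # current estimation error
    alpha += gamma * err * phi             # gradient-type weight update

x_test = np.linspace(-2, 2, 9)
f_hat = gaussian_kernel(x_test[:, None], centers[None, :]) @ alpha
print(np.max(np.abs(f_hat - f_true(x_test))))  # small if estimate converged
```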
2

Parametric and semi-parametric models for predicting genomic breeding values of complex traits in Nelore cattle

Espigolan, Rafael [UNESP] 23 February 2017 (has links)
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP) / Animal breeding aims to improve the economic productivity of future generations of domestic species through selection. Most traits of economic interest in livestock have a complex, quantitative expression, i.e., they are influenced by a large number of genes and affected by environmental factors. Statistical analysis of phenotype and pedigree information allows estimating the breeding values of selection candidates based on the infinitesimal model. A large amount of genomic data is now available for the identification and selection of genetically superior individuals, with the potential to increase the accuracy of prediction of breeding values and thus the efficiency of animal breeding programs. Numerous studies have been conducted to identify methodologies appropriate to specific breeds and traits, which will result in more accurate genomic estimated breeding values (GEBVs). Therefore, the objective of this study was to verify the possibility of applying semi-parametric models for genomic selection and to compare their predictive ability with that of parametric models on real (carcass, meat quality, growth and reproductive traits) and simulated data. The phenotypic and pedigree information used was provided by eleven farms belonging to four animal breeding programs.
For carcass and meat quality traits, the data set contained 3,643 records for rib eye area (REA), 3,619 records for backfat thickness (BFT), 3,670 records for meat tenderness (TEN) and 3,378 observations for hot carcass weight (HCW). A total of 825,364 records for yearling weight (YW) and 166,398 for age at first calving (AFC) were used as the growth and reproductive traits of Nelore cattle. Genotypes of 2,710, 2,656, 2,749, 2,495, 4,455 and 1,760 animals were available for REA, BFT, TEN, HCW, YW and AFC, respectively. After quality control, approximately 450,000 single nucleotide polymorphisms (SNPs) remained. The methods of analysis were genomic BLUP (GBLUP), single-step GBLUP (ssGBLUP), Bayesian LASSO (BL) and the semi-parametric approaches Reproducing Kernel Hilbert Spaces (RKHS) regression and Kernel Averaging (KA). A five-fold cross-validation with thirty random replicates was carried out, and the models were compared in terms of prediction mean squared error (MSE) and accuracy of prediction (ACC). The ACC ranged from 0.39 to 0.40 (REA), 0.38 to 0.41 (BFT), 0.23 to 0.28 (TEN), 0.33 to 0.35 (HCW), 0.36 to 0.51 (YW) and 0.49 to 0.56 (AFC). For all traits, the GBLUP and BL models showed very similar prediction accuracies. For REA, BFT and HCW, all models provided similar prediction accuracies; however, RKHS regression achieved a better fit than KA. For traits with far more phenotyped animals than genotyped ones (YW and AFC), ssGBLUP is indicated. Judged by overall performance across all traits, RKHS regression is a particularly appealing alternative for application in genomic selection, especially for low-heritability traits. Simulated genotypes, pedigree and phenotypes for four traits (A, B, C and D) were obtained using heritabilities based on the real data (0.09, 0.12, 0.36 and 0.39, respectively). The simulated genome consisted of 735,293 markers and 1,000 QTLs randomly distributed over 29 pairs of autosomes, with lengths varying from 40 to 146 centimorgans (cM) and totaling 2,333 cM. It was assumed that the QTLs explained 100% of the genetic variance. Considering minor allele frequencies greater than or equal to 0.01, a total of 430,000 markers were randomly selected. The phenotypes were generated by adding residuals, randomly drawn from a normal distribution with mean zero, to the true breeding values, and the whole simulation process was replicated 10 times. ACC was quantified by the correlation between the predicted genomic breeding value and the true breeding values simulated for generations 12 to 15. The average linkage disequilibrium, measured between pairs of adjacent markers for all simulated traits, was 0.21 for the recent generations (12, 13 and 14) and 0.22 for generation 15. The ACC for the simulated traits A, B, C and D ranged from 0.43 to 0.44, 0.47 to 0.48, 0.80 to 0.82 and 0.72 to 0.73, respectively. The different genomic selection methodologies implemented in this study showed similar accuracies of prediction, and the optimal method was sometimes trait-dependent. In general, RKHS regressions were preferable in terms of ACC and provided the smallest MSE estimates compared to the other models. / FAPESP: 2014/00779-0 / FAPESP: 2015/13084-3
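As a rough, hypothetical illustration of the RKHS regression compared in this study, the sketch below fits a Gaussian-kernel ridge regression to simulated genotypes and scores it with a five-fold cross-validation ACC, as done above. The simulated data, kernel bandwidth, and shrinkage parameter are stand-ins, not the study's models or data.

```python
import numpy as np

# Hypothetical sketch of RKHS (kernel ridge) regression for genomic
# prediction; the study's Bayesian implementation differs in detail.
rng = np.random.default_rng(1)
n, m = 200, 500                          # animals, SNP markers
X = rng.integers(0, 3, size=(n, m))      # genotypes coded 0/1/2
beta = rng.normal(0, 0.05, size=m)       # simulated marker effects
y = X @ beta + rng.normal(0, 1, size=n)  # simulated phenotypes

sq = (X ** 2).sum(axis=1)
D = (sq[:, None] + sq[None, :] - 2 * X @ X.T) / m  # mean squared distance
K = np.exp(-D / D.mean())                # Gaussian kernel matrix

folds = np.array_split(rng.permutation(n), 5)      # five-fold CV
accs = []
for k in range(5):
    test = folds[k]
    train = np.concatenate([folds[j] for j in range(5) if j != k])
    lam = 1.0                            # shrinkage (regularization)
    alpha = np.linalg.solve(K[np.ix_(train, train)]
                            + lam * np.eye(len(train)), y[train])
    y_hat = K[np.ix_(test, train)] @ alpha          # kernel prediction
    accs.append(np.corrcoef(y_hat, y[test])[0, 1])  # ACC for this fold
print("mean ACC over folds:", round(np.mean(accs), 3))
```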
3

Calibration of Option Pricing in Reproducing Kernel Hilbert Space

Ge, Lei 01 January 2015 (has links)
Volatility, a parameter in the Black-Scholes equation, is a measure of the variation of the price of a financial instrument over time. Determining volatility is a fundamental issue in the valuation of financial instruments, and it gives rise to an inverse problem known as the calibration problem for option pricing. This problem is shown to be ill-posed. We propose a regularization method and reformulate the calibration problem as one of finding the local volatility in a reproducing kernel Hilbert space. We define a new volatility function that incorporates both the financial and time factors of the options. We discuss the existence of the minimizer using the regularized reproducing kernel method and show that the regularizer resolves the numerical instability of the calibration problem. Finally, we apply the method to data sets of index options through simulation tests and discuss the empirical results obtained.
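A toy version of such a regularized calibration might proceed as below: the volatility is represented by a kernel expansion over strikes and fitted to observed call prices under an RKHS-norm (Tikhonov) penalty. The Black-Scholes pricing shortcut, Gaussian kernel, penalty weight, and synthetic smile are all illustrative assumptions; the dissertation's volatility function and regularizer are more elaborate.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

# Illustrative sketch: calibrate a strike-dependent volatility, written as
# a kernel expansion, to call prices with an RKHS-norm penalty.

def bs_call(S, K, T, r, sigma):
    d1 = (np.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

S, r, T = 100.0, 0.01, 0.5
strikes = np.linspace(80, 120, 9)
true_vol = 0.2 + 0.001 * (strikes - 100) ** 2 / 20   # a mild synthetic smile
prices = bs_call(S, strikes, T, r, true_vol)         # "observed" prices

centers = strikes
Kmat = np.exp(-(strikes[:, None] - centers[None, :]) ** 2 / (2 * 10.0 ** 2))

def objective(alpha, lam=1e-6):
    sigma = Kmat @ alpha                  # kernel expansion of sigma(K)
    misfit = bs_call(S, strikes, T, r, np.maximum(sigma, 1e-4)) - prices
    # data misfit + Tikhonov penalty given by the squared RKHS norm
    return misfit @ misfit + lam * alpha @ Kmat @ alpha

res = minimize(objective, x0=np.full(len(centers), 0.02), method="L-BFGS-B")
print("max vol error:", np.max(np.abs(Kmat @ res.x - true_vol)))
```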
4

Covariance kernels for simplified and interpretable modeling. A functional and probabilistic approach.

Durrande, Nicolas 09 November 2011 (has links)
The framework of this thesis is the approximation of functions whose value is known at a limited number of points. More precisely, we consider here the so-called kriging models from two points of view: approximation in reproducing kernel Hilbert spaces and Gaussian process regression. When the function to approximate depends on many variables, the required number of points can become very large, and the interpretation of the obtained models remains difficult because the model is still a high-dimensional function. In light of those remarks, the main part of our work addresses the issue of simplified models by studying a key concept of kriging models, the kernel. More precisely, the following aspects are addressed: additive kernels for additive models and kernel decomposition for sparse modeling. Finally, we propose a class of kernels that is well suited for functional ANOVA representation and global sensitivity analysis.
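A minimal sketch of the additive-kernel construction studied here: summing one-dimensional kernels coordinate-wise, k(x, y) = sum_i k_i(x_i, y_i), yields a kriging model that is itself a sum of univariate functions. The squared-exponential kernel, lengthscale, and test function below are illustrative assumptions.

```python
import numpy as np

# Sketch of an additive kernel: a d-dimensional kernel built as a sum of
# one-dimensional kernels, giving an additive (and thus interpretable) model.

def k1d(a, b, ell=0.3):
    """Squared-exponential kernel on one coordinate."""
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * ell ** 2))

def k_additive(X, Y):
    return sum(k1d(X[:, i], Y[:, i]) for i in range(X.shape[1]))

rng = np.random.default_rng(2)
X = rng.uniform(0, 1, size=(40, 5))                  # 5 input variables
f = lambda X: np.sin(3 * X[:, 0]) + X[:, 1] ** 2     # additive test function
y = f(X)

Xs = rng.uniform(0, 1, size=(10, 5))
K = k_additive(X, X) + 1e-8 * np.eye(len(X))         # jitter for stability
pred = k_additive(Xs, X) @ np.linalg.solve(K, y)     # kriging mean
print(np.max(np.abs(pred - f(Xs))))                  # approximation error
```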
5

Currents- and varifolds-based registration of lung vessels and lung surfaces

Pan, Yue 01 December 2016 (has links)
This thesis compares and contrasts currents- and varifolds-based diffeomorphic image registration approaches for registering tree-like structures in the lung and the surface of the lung. In these approaches, curve-like structures in the lung—for example, the skeletons of vessel and airway segmentations—and the surface of the lung are represented by currents or varifolds in the dual space of a Reproducing Kernel Hilbert Space (RKHS). The currents and varifolds representations are discretized and parameterized via a collection of momenta. For a line segment, a momentum consists of the coordinates of the segment's center and the tangent direction at the center; for a mesh face, it consists of the coordinates of the face's center and the normal direction at the center. The magnitude of the tangent vector is the length of the line segment, and the magnitude of the normal vector is the area of the mesh face. A varifolds-based registration approach is similar to a currents-based one, except that two varifolds representations are aligned independently of the orientation of the tangent (normal) vectors. An advantage of varifolds over currents is that the orientation of the tangent vectors can be difficult to determine, especially when the vessel and airway trees are not connected. In this thesis, we examine the sensitivity and accuracy of currents- and varifolds-based registration as a function of the number and location of the momenta used to represent tree-like structures in the lung and the surface of the lung. The registrations presented in this thesis were generated using the Deformetrica software package, which is publicly available at www.deformetrica.org.
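The orientation behavior that distinguishes the two representations can be sketched numerically: with momenta (segment centers and tangent vectors) extracted from a polyline, a currents-type inner product changes sign when the curve is traversed in the opposite direction, while a varifolds-type one does not. The Gaussian kernel, its width, and the test curve are illustrative assumptions; this sketch is independent of the Deformetrica implementation.

```python
import numpy as np

# Discrete currents vs. varifolds inner products between two curves, each
# represented by momenta: segment centers plus tangents of segment length.

def momenta(points):
    """Centers and tangent vectors of the segments of a polyline."""
    centers = 0.5 * (points[1:] + points[:-1])
    tangents = points[1:] - points[:-1]      # norm equals segment length
    return centers, tangents

def gauss(c1, c2, sigma=0.5):
    d2 = ((c1[:, None, :] - c2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / sigma ** 2)

def currents_ip(c1, t1, c2, t2):
    # orientation-dependent: flipping a tangent flips the sign
    return np.sum(gauss(c1, c2) * (t1 @ t2.T))

def varifolds_ip(c1, t1, c2, t2):
    # orientation-independent: tangents enter through a squared term
    n1 = np.linalg.norm(t1, axis=1)[:, None]
    n2 = np.linalg.norm(t2, axis=1)[None, :]
    return np.sum(gauss(c1, c2) * (t1 @ t2.T) ** 2 / (n1 * n2))

theta = np.linspace(0, np.pi, 30)
curve = np.stack([np.cos(theta), np.sin(theta)], axis=1)  # half circle
c, t = momenta(curve)
c_r, t_r = momenta(curve[::-1])              # same curve, reversed

print(currents_ip(c, t, c_r, t_r))   # sign-sensitive to orientation
print(varifolds_ip(c, t, c_r, t_r)) # unchanged by reversal
```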
6

Role of Majorization in Learning the Kernel within a Gaussian Process Regression Framework

Kapat, Prasenjit 21 October 2011 (has links)
No description available.
7

Mathematical and numerical methods for the modeling of deformations and image texture analysis. Applications in medical imaging

Chesseboeuf, Clément 23 November 2017 (has links)
We present a numerical procedure for the matching of 3D MR brain images. The image matching problem is addressed through the usual distinction between the deformation model and the matching criterion. The deformation model is based on the theory of computational anatomy, and the set of deformations is a group of diffeomorphisms generated by integrating vector fields. The discrepancy between the two images is evaluated by comparing their level lines, represented by a differential current in the dual of a space of vector fields. This representation leads to a non-local criterion that is quick to compute. The optimization method is then based on the minimization of the criterion, following the idea of the so-called sub-optimal algorithm. We take advantage of the Eulerian and periodic description of the motion to obtain an efficient numerical procedure. The algorithm is applied to the registration of 3D brain MR images, and the numerical procedure leading to these results is fully described. We also analyze theoretical properties of the algorithm: we simplify the equation representing the evolution of the deformed image and study the simplified equation using the theory of viscosity solutions. The second issue we are interested in is change-point estimation for a Gaussian sequence with a change in its variance parameter. The main feature of our model is the infill framework, meaning that the distribution of the data depends on the sample size. The usual approach suggests introducing a contrast function and using the location of its maximum as a change-point estimator.
We first obtain information about the asymptotic fluctuations of the contrast function around its mean function. Then, we focus on the change-point estimator and, more precisely, on its convergence. The most direct application concerns the detection of a change in the Hurst parameter of a fractional Brownian motion. The estimator depends on a parameter p > 0, generalizing the usual choice p = 2, and our results show that it can be advantageous to choose p < 2.
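A minimal sketch of such a contrast-based change-point estimator, assuming a CUSUM-type contrast built from the p-th absolute powers of the observations (shown with the classical p = 2; the discussion above suggests p < 2 can be advantageous):

```python
import numpy as np

# Estimate the location of a variance change by maximizing a contrast
# function; the p-variation generalization replaces squares by |x|**p.

rng = np.random.default_rng(3)
n, tau = 1000, 600
x = np.concatenate([rng.normal(0, 1.0, tau),       # variance 1.0 before tau
                    rng.normal(0, 1.8, n - tau)])  # variance 3.24 after tau

def contrast(x, p=2.0):
    s = np.cumsum(np.abs(x) ** p)     # partial sums of p-th powers
    total = s[-1]
    k = np.arange(1, len(x))
    # squared CUSUM deviation of the partial sum from its no-change
    # expectation, normalized to avoid boundary degeneracy
    return (s[:-1] - k * total / len(x)) ** 2 / (k * (len(x) - k))

est = np.argmax(contrast(x)) + 1
print("estimated change point:", est, "(true:", tau, ")")
```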
8

Correspondence between Gaussian process regression and interpolation splines under linear inequality constraints. Theory and applications

Maatouk, Hassan 01 October 2015 (has links)
This thesis is dedicated to interpolation problems where the numerical function is known to satisfy some properties such as positivity, monotonicity or convexity. Two methods of interpolation are studied. The first one is deterministic and is based on convex optimization in a Reproducing Kernel Hilbert Space (RKHS). The second one is a Bayesian approach based on Gaussian Process Regression (GPR), or kriging. By using a finite linear functional decomposition, we propose to approximate the original Gaussian process by a finite-dimensional Gaussian process such that conditional simulations satisfy all the inequality constraints. As a consequence, GPR is equivalent to the simulation of a Gaussian vector truncated to a convex set. The mode, or Maximum A Posteriori, is defined as a Bayesian estimator, and prediction intervals are quantified by simulation. Convergence of the method is proved, and the correspondence between the two approaches, deterministic and probabilistic, is established; this is the theoretical result of the thesis. It can be seen as an extension of the correspondence established by [Kimeldorf and Wahba, 1971] between Bayesian estimation on stochastic processes and smoothing by splines. Finally, a real application in insurance and finance (actuarial science) is developed, estimating a term-structure curve and default probabilities.
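A toy sketch of this construction, assuming a knot discretization of the process, two interpolation (equality) constraints, and monotonicity (inequality) constraints, with naive rejection sampling standing in for the exact truncated-Gaussian samplers used in practice:

```python
import numpy as np

# Finite-dimensional GP on a grid of knots: data fix two knot values
# (linear equalities), monotonicity is "knot values nondecreasing" (linear
# inequalities), so conditional simulation reduces to sampling a Gaussian
# vector truncated to a convex set.

rng = np.random.default_rng(4)
knots = np.linspace(0, 1, 8)
K = np.exp(-(knots[:, None] - knots[None, :]) ** 2 / (2 * 0.4 ** 2))

obs, free = [1, 6], [0, 2, 3, 4, 5, 7]   # knots with / without data
y_obs = np.array([0.2, 0.8])             # observed values at the two knots

# Gaussian conditioning of the free knot values on the observed ones
Koo = K[np.ix_(obs, obs)]
Kfo = K[np.ix_(free, obs)]
mu = Kfo @ np.linalg.solve(Koo, y_obs)
cov = K[np.ix_(free, free)] - Kfo @ np.linalg.solve(Koo, Kfo.T)
L = np.linalg.cholesky(cov + 1e-9 * np.eye(len(free)))

samples = []
while len(samples) < 50:
    z = mu + L @ rng.normal(size=len(free))
    full = np.empty(len(knots))
    full[obs], full[free] = y_obs, z
    if np.all(np.diff(full) >= 0):       # keep only monotone paths
        samples.append(full)

print(np.mean(samples, axis=0))          # constrained posterior mean estimate
```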
