1

Design of Derivative Estimator Using Adaptive Sliding Mode Technique

Chang, Ming-wen 15 July 2004
Based on the Lyapunov stability theorem, a design methodology for an nth-order adaptive integral variable structure derivative estimator (AIVSDE) is proposed in this thesis. The proposed derivative estimator is not only an improved version of the existing AIVSDE but can also be used to estimate the nth derivative of a smooth signal whose derivatives up to order n+1 are continuous and bounded. A low-pass filter is cascaded with the AIVSDE so that the effects of noise can be alleviated by adjusting the design parameters of the filter and the AIVSDE. An adaptive algorithm is incorporated in the control scheme to remove the need for a priori knowledge of the upper bound of the observed signal. The stability of the proposed derivative estimator is guaranteed, and the upper bound of the derivative estimation error is compared with that of the recently proposed nonlinear adaptive variable structure derivative estimator (NAVSDE). An example illustrates the applicability of the proposed AIVSDE.
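As a rough illustration of the idea (a generic sketch, not the thesis's AIVSDE), the following Python snippet implements a first-order sliding-mode differentiator with an adaptive switching gain and a cascaded first-order low-pass filter; the gains c, gamma, k0, the time constant tau, and the test signal are all illustrative assumptions.

```python
import numpy as np

def adaptive_sm_differentiator(y, dt, c=50.0, gamma=200.0, k0=1.0, tau=0.01):
    """Generic first-order sliding-mode derivative estimator with an
    adaptive switching gain and a cascaded low-pass filter (a sketch;
    all gains here are illustrative, not values from the thesis)."""
    x, k, d = float(y[0]), k0, 0.0
    d_hat = np.zeros(len(y))
    for i in range(len(y)):
        e = x - y[i]                   # sliding variable: tracking error
        u = -c * e - k * np.sign(e)    # linear term + adaptive switching term
        x += dt * u                    # Euler step of the estimator state
        k += dt * gamma * abs(e)       # gain grows until sliding is enforced
        d += (dt / tau) * (u - d)      # low-pass filter: equivalent control ~ dy/dt
        d_hat[i] = d
    return d_hat

# usage sketch: differentiate a noisy sine, compare with 2*pi*cos(2*pi*t)
t = np.arange(0.0, 2.0, 1e-3)
y = np.sin(2 * np.pi * t) + 1e-3 * np.random.randn(t.size)
dy_est = adaptive_sm_differentiator(y, dt=1e-3)
```

Once sliding is established the estimator state tracks the signal exactly, so the low-pass-filtered correction term recovers the equivalent control, which equals the signal's derivative; the adaptation lets the switching gain grow on its own instead of requiring a derivative bound in advance.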
2

Design of Adaptive Derivative Estimator Using Sliding Mode Technique

Wu, Peir-Cherng 01 September 2003
This thesis is concerned with the design of an nth-order adaptive integral variable structure derivative estimator (AIVSDE). The proposed estimation scheme is a modified and extended version of the existing AIVSDE. The new AIVSDE can be used as a direct nth-order differentiator for a smooth signal with n continuous and bounded derivatives. An adaptive algorithm adjusts the switching gain so that no a priori knowledge of the upper bound of the input signal's derivative is required, as sketched in the update law below. The stability of the redesigned first-order, second-order, and nth-order derivative estimators is guaranteed by the proposed scheme. An example demonstrates the applicability of the proposed AIVSDE.
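A typical adaptive switching-gain law of this kind (the exact update in the thesis may differ) keeps increasing the gain while the sliding variable \(\sigma\) is nonzero:

\[
u(t) = -c\,\sigma(t) - \hat{\beta}(t)\,\operatorname{sgn}\sigma(t), \qquad \dot{\hat{\beta}}(t) = \gamma\,\lvert\sigma(t)\rvert, \quad \gamma > 0,
\]

so the switching term eventually dominates the unknown derivative bound and sliding is established without that bound being known in advance.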
3

Design of the nth Order Adaptive Integral Variable Structure Derivative Estimator

Shih, Wei-Che 17 January 2009
Based on the Lyapunov stability theorem, a methodology for designing an nth-order adaptive integral variable structure derivative estimator (AIVSDE) is proposed in this thesis. The proposed derivative estimator is not only an improved version of the existing AIVSDE but can also be used to estimate the nth derivative of a smooth signal whose derivatives up to order n+1 are continuous and bounded. Analysis results show that adjusting some of the parameters can facilitate the derivative estimation of signals corrupted by higher-frequency noise. An adaptive algorithm is incorporated in the estimation scheme to track the unknown upper bound of the input signal and its derivatives. The stability of the proposed derivative estimator is guaranteed, and a comparison with a recently proposed high-order sliding-mode derivative estimator is also presented.
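For context, the high-order sliding-mode baseline referred to here is typically Levant's robust exact (super-twisting) differentiator; a minimal Python sketch follows, with the Lipschitz bound L and the gain choices 1.5*sqrt(L) and 1.1*L taken from common recommendations in the sliding-mode literature rather than from the thesis.

```python
import numpy as np

def levant_differentiator(y, dt, L=40.0):
    """First-order robust exact (super-twisting) differentiator.
    L is an assumed Lipschitz bound on the signal's second derivative;
    the gains 1.5*sqrt(L) and 1.1*L follow common recommendations."""
    lam, alpha = 1.5 * np.sqrt(L), 1.1 * L
    z0, z1 = float(y[0]), 0.0
    d_hat = np.zeros(len(y))
    for i in range(len(y)):
        e = z0 - y[i]                                  # tracking error
        v = -lam * np.sqrt(abs(e)) * np.sign(e) + z1   # continuous correction
        z0 += dt * v                                   # state tracks the signal
        z1 += dt * (-alpha * np.sign(e))               # integral of switching term
        d_hat[i] = v                                   # estimate of dy/dt
    return d_hat
```

Unlike a first-order scheme with a discontinuous output, the switching term here acts under an integrator, so the derivative estimate is continuous and, in the noise-free case, exact after a finite transient.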
4

Dimension reduction in regression

Portier, François 02 July 2013
In this thesis, we study the problem of dimension reduction through the following regression model: Y = g(BX, e), where X is a p-dimensional vector, Y belongs to R, the function g is unknown, and the noise e is independent of X. We are interested in the estimation of the matrix B, of dimension d by p with d smaller than p, whose knowledge provides good convergence rates for the estimation of g. This problem is treated using two distinct approaches. The first, called inverse regression, requires the linearity condition on X. The second, called semiparametric, does not require such a condition, only that X has a smooth density. In the context of inverse regression, we study two families of methods based respectively on E[X f(Y)] and E[XX^T f(Y)]. For each family, we provide conditions on f that allow an exhaustive estimation of B, and we compute the optimal function f by minimizing the asymptotic variance. In the semiparametric context, we propose a method for estimating the gradient of the regression function. Under classical semiparametric assumptions, we show the asymptotic normality of our estimator and the exhaustivity of the estimation of B. Whichever approach is considered, a fundamental question arises: how should the dimension of B be chosen? To address it, we propose a method that estimates the rank of a matrix by bootstrap hypothesis testing.
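To make the inverse-regression family concrete, here is a minimal Python sketch of sliced inverse regression (SIR), the best-known member of the E[X f(Y)] family, in which f is a slice indicator. The function name and defaults are our own, and this is a sketch under the linearity condition, not the thesis's estimator.

```python
import numpy as np

def sir_directions(X, Y, d, n_slices=10):
    """Sliced inverse regression (Li, 1991), a classical member of the
    E[X f(Y)] family; a sketch assuming the linearity condition on X."""
    n, p = X.shape
    # whiten X: the linearity condition is exploited on the standardized scale
    mu = X.mean(axis=0)
    L = np.linalg.cholesky(np.cov(X, rowvar=False))
    Z = (X - mu) @ np.linalg.inv(L).T              # Cov(Z) ~ identity
    # slice the response by quantiles and average Z within each slice
    edges = np.quantile(Y, np.linspace(0.0, 1.0, n_slices + 1))
    idx = np.clip(np.searchsorted(edges, Y, side="right") - 1, 0, n_slices - 1)
    M = np.zeros((p, p))
    for h in range(n_slices):
        Zh = Z[idx == h]
        if Zh.shape[0] == 0:
            continue
        mh = Zh.mean(axis=0)
        M += (Zh.shape[0] / n) * np.outer(mh, mh)  # weighted slice-mean outer product
    # the top-d eigenvectors of M span the estimate of B (whitened scale);
    # the size of the trailing eigenvalues is what a rank test -- e.g. the
    # bootstrap test proposed in the thesis -- examines in order to choose d
    eigvals, eigvecs = np.linalg.eigh(M)
    eta = eigvecs[:, ::-1][:, :d]
    return np.linalg.inv(L).T @ eta, eigvals[::-1]
```

The returned eigenvalues decay sharply after the true dimension, which is precisely the structure a rank-selection procedure such as the bootstrap hypothesis test exploits.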
