  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Design and Simulation of a Model Reference Adaptive Control System Employing Reproducing Kernel Hilbert Space for Enhanced Flight Control of a Quadcopter

Scurlock, Brian Patrick 04 June 2024 (has links)
This thesis presents the integration of reproducing kernel Hilbert spaces (RKHSs) into the model reference adaptive control (MRAC) framework to enhance the control systems of quadcopters. Traditional MRAC systems, while robust under predictable conditions, can struggle with the dynamic uncertainties typical in unmanned aerial vehicle (UAV) operations such as wind gusts and payload variations. By incorporating RKHS, we introduce a non-parametric, data-driven approach that significantly enhances system adaptability to in-flight dynamics changes. The research focuses on the design, simulation, and analysis of an RKHS-enhanced MRAC system applied to quadcopters. Through theoretical developments and simulation results, the thesis demonstrates how RKHS can be used to improve the precision, adaptability, and error handling of MRAC systems, especially in managing the complexities of UAV flight dynamics under various disturbances. The simulations validate the improved performance of the RKHS-MRAC system compared to traditional MRAC, showing finer control over trajectory tracking and adaptive gains. Further contributions of this work include the exploration of the computational impact and the relationship between the configuration of basis centers and system performance. Detailed analysis reveals that the number and distribution of basis centers critically influence the system's computational efficiency and adaptive capability, demonstrating a significant trade-off between efficiency and performance. The thesis concludes with potential future research directions, emphasizing the need for further tests and implementations in real-world scenarios to explore the full potential of RKHS in adaptive UAV control, especially in critical applications requiring high precision and reliability. This work lays the groundwork for future explorations into scalable RKHS applications in MRAC systems, aiming to optimize computational resources while maximizing control system performance. / Master of Science / This thesis develops and tests an advanced flight control system for quadcopters, using a technique referred to as reproducing kernel Hilbert space (RKHS) embedded model reference adaptive control (MRAC). Traditional control systems perform well in stable conditions but often falter with environmental challenges such as wind gusts or changes in weight. By integrating RKHS into MRAC, this new controller adapts in real-time, instantly adjusting the drone's operations based on its performance and environmental interactions. The focus of this research is on the creation, testing, and analysis of this enhanced control system. Results from simulations show that incorporating RKHS into standard MRAC significantly boosts precision, adaptability, and error management, particularly under the complex flight dynamics faced by unmanned aerial vehicles (UAVs) in varied environments. These tests confirm that the RKHS-MRAC system performs better than traditional approaches, especially in maintaining accurate flight paths. Additionally, this work examines the computational costs and the impact of various RKHS configurations on system performance. The thesis concludes by outlining future research opportunities, stressing the importance of real-world tests to verify the ability of RKHS-embedded MRAC in critical real-world applications where high precision and reliability are essential.
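To make the control architecture concrete, the following is a minimal, self-contained sketch of a kernel-based adaptive augmentation for a scalar reference-model tracking loop. The plant, reference model, Gaussian kernel, basis-center grid, and gains are all illustrative assumptions chosen for this sketch; they are not the quadcopter design developed in the thesis.

```python
# Minimal sketch (not the thesis's quadcopter controller): scalar MRAC whose
# adaptive element approximates the matched uncertainty with Gaussian kernels
# placed at fixed basis centers. Plant, reference model, kernel width, center
# grid, and gains are all illustrative assumptions.
import numpy as np

a, b = 1.0, 3.0                    # plant: x_dot = a*x + b*(u + delta(x))
am, bm = -4.0, 4.0                 # stable reference model: xm_dot = am*xm + bm*r
kx, kr = (am - a) / b, bm / b      # baseline gains (computed here only for the sketch)

centers = np.linspace(-2.0, 2.0, 11)   # kernel basis centers over the expected state range
width = 0.5                            # kernel bandwidth
W = np.zeros_like(centers)             # adaptive weights of the kernel expansion
gamma = 20.0                           # adaptation gain

def phi(x):
    """Gaussian kernel features evaluated at the current state x."""
    return np.exp(-((x - centers) ** 2) / (2.0 * width ** 2))

def delta(x):
    """True (unknown to the controller) matched uncertainty, e.g. a gust-like term."""
    return 0.8 * np.sin(2.0 * x) + 0.3 * x ** 2

dt, T = 1e-3, 10.0
x, xm = 0.0, 0.0
for k in range(int(T / dt)):
    r = 1.0 if ((k * dt) % 4.0) < 2.0 else -1.0   # square-wave reference command
    e = x - xm                                    # tracking error
    u = kx * x + kr * r - W @ phi(x)              # baseline control minus kernel estimate
    W += dt * gamma * b * e * phi(x)              # Lyapunov-motivated adaptive law (sketch)
    x += dt * (a * x + b * (u + delta(x)))        # Euler step of the plant
    xm += dt * (am * xm + bm * r)                 # Euler step of the reference model

print(f"final tracking error |x - xm| = {abs(x - xm):.4f}")
```

In the thesis's setting the scalar state would be replaced by the quadcopter's tracking-error states and the centers spread over the flight envelope, which is exactly where the reported trade-off between the number and distribution of basis centers and computational cost arises.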
12

Positive definite kernels, harmonic analysis, and boundary spaces: Drury-Arveson theory, and related

Sabree, Aqeeb A 01 January 2019 (has links)
A reproducing kernel Hilbert space (RKHS) is a Hilbert space $\mathscr{H}$ of functions with the property that the values $f(x)$ for $f \in \mathscr{H}$ are reproduced from the inner product in $\mathscr{H}$. Recent applications are found in stochastic processes (Itô calculus), harmonic analysis, complex analysis, learning theory, and machine learning algorithms. This research began with the study of applications of RKHSs to areas such as learning theory, sampling theory, and harmonic analysis. From the Moore-Aronszajn theorem, we have an explicit correspondence between reproducing kernel Hilbert spaces (RKHSs) and reproducing kernel functions, also called positive definite kernels or positive definite functions. The focus here is on the duality between positive definite functions and their boundary spaces; these boundary spaces often lead to the study of Gaussian processes or Brownian motion. It is known that every reproducing kernel Hilbert space has an associated generalized boundary probability space. The Arveson (reproducing) kernel is $K(z,w) = \frac{1}{1-\langle z,w\rangle_{\C^d}}$, $z,w \in \B_d$, and Arveson showed, \cite{Arveson}, that the Arveson kernel does not follow the boundary analysis we were finding in other RKHSs. Thus, we were led to define a new reproducing kernel on the unit ball in complex $n$-space, which naturally led to the study of a new reproducing kernel Hilbert space. This reproducing kernel Hilbert space stems from boundary analysis of the Arveson kernel. The construction of the new RKHS resolves the problem we faced while researching “natural” boundary spaces (for the Drury-Arveson RKHS) that yield boundary factorizations: \[K(z,w) = \int_{\mathcal{B}} K^{\mathcal{B}}_z(b)\overline{K^{\mathcal{B}}_w(b)}\,d\mu(b), \;\;\; z,w \in \B_d \text{ and } b \in \mathcal{B} \tag*{\textit{(Factorization of $K$)}.}\] Results from classical harmonic analysis on the disk (the Hardy space) are generalized and extended to the new RKHS. In particular, our main theorem proves that, by relaxing the criterion from an isometry to a contraction, we obtain the generalization that Arveson's paper showed is not possible under the isometric criterion.
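For orientation, and as standard background rather than a result of the thesis, the Drury-Arveson kernel sits alongside two other classical kernels on the unit ball $\B_d \subset \C^d$: \[K_{\mathrm{DA}}(z,w)=\frac{1}{1-\langle z,w\rangle_{\C^d}}, \qquad K_{H^2}(z,w)=\frac{1}{(1-\langle z,w\rangle_{\C^d})^{d}}, \qquad K_{A^2}(z,w)=\frac{1}{(1-\langle z,w\rangle_{\C^d})^{d+1}},\] the second and third being the Hardy-space and Bergman-space kernels. For $d=1$ all three reduce to familiar kernels on the disk, and in the Hardy case the boundary factorization asked for above is classical: \[\frac{1}{1-z\overline{w}}=\int_{\mathbb{T}}\frac{1}{1-z\overline{b}}\,\overline{\left(\frac{1}{1-w\overline{b}}\right)}\,d\mu(b),\] with $\mu$ normalized Lebesgue measure on the unit circle $\mathbb{T}$. The absence of a comparable isometric factorization for $K_{\mathrm{DA}}$ when $d>1$ is the obstruction, discussed above, that the new RKHS and the contractive relaxation are designed to address.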
13

Bandlimited functions, curved manifolds, and self-adjoint extensions of symmetric operators

Martin, Robert January 2008 (has links)
Sampling theory is an active field of research that spans a variety of disciplines from communication engineering to pure mathematics. Sampling theory provides the crucial connection between continuous and discrete representations of information that enables one to store continuous signals as discrete, digital data with minimal error. It is this connection that allows communication engineers to realize many of our modern digital technologies including cell phones and compact disc players. This thesis focuses on certain non-Fourier generalizations of sampling theory and their applications. In particular, non-Fourier analogues of bandlimited functions and extensions of sampling theory to functions on curved manifolds are studied. New results in bandlimited function theory, sampling theory on curved manifolds, and the theory of self-adjoint extensions of symmetric operators are presented. Besides being of mathematical interest in itself, the research contained in this thesis has applications to quantum physics on curved space and could potentially lead to more efficient information storage methods in communication engineering.
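For readers outside the area, the classical Fourier-case statement that work of this kind generalizes is the Shannon sampling theorem (standard background, quoted here in one common normalization): if $f\in L^2(\mathbb{R})$ and its Fourier transform vanishes outside $[-W,W]$, then \[f(t)=\sum_{n\in\mathbb{Z}} f\!\left(\frac{n}{2W}\right)\operatorname{sinc}(2Wt-n), \qquad \operatorname{sinc}(x):=\frac{\sin(\pi x)}{\pi x},\] so the continuous signal is recovered exactly from its uniformly spaced samples taken at the Nyquist rate $2W$. The non-Fourier generalizations studied in the thesis replace this Fourier/sinc pairing with other transforms and sampling expansions.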
15

Subnormality and Moment Sequences

Hota, Tapan Kumar January 2012 (has links) (PDF)
In this report we survey some recent developments on the relationship between Hausdorff moment sequences and the subnormality of a unilateral weighted shift operator. While the discrete convolution of two Hausdorff moment sequences may fail to be a Hausdorff moment sequence, the Hausdorff convolution of two moment sequences is always a moment sequence. Starting from the Berg and Durán result that the associated multiplication operator is subnormal, we discuss further work on the subnormality of the multiplication operator on a reproducing kernel Hilbert space whose kernel is a point-wise product of two diagonal positive kernels. The relationship between infinitely divisible matrices and moment sequences is discussed, and some open problems are listed.
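For orientation, the classical bridge between the two notions in this abstract is Berger's theorem, recalled here as standard background rather than as a result of the report: a unilateral weighted shift $W_\alpha$ with positive weights $\alpha_0,\alpha_1,\dots$ (acting by $W_\alpha e_n=\alpha_n e_{n+1}$ on an orthonormal basis) is subnormal if and only if its moments \[\gamma_0=1,\qquad \gamma_n=\alpha_0^2\alpha_1^2\cdots\alpha_{n-1}^2 \quad (n\ge 1)\] form a Hausdorff-type moment sequence, i.e. there is a probability measure $\mu$ supported in $[0,\|W_\alpha\|^2]$ with \[\gamma_n=\int_0^{\|W_\alpha\|^2} t^{\,n}\,d\mu(t),\qquad n\ge 0.\]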
16

Adaptive Kernel Functions and Optimization Over a Space of Rank-One Decompositions

Wang, Roy Chih Chung January 2017 (has links)
The representer theorem from the reproducing kernel Hilbert space theory is the origin of many kernel-based machine learning and signal modelling techniques that are popular today. Most kernel functions used in practical applications behave in a homogeneous manner across the domain of the signal of interest, and they are called stationary kernels. One open problem in the literature is the specification of a non-stationary kernel that is computationally tractable. Some recent works solve large-scale optimization problems to obtain such kernels, and they often suffer from non-identifiability issues in their optimization problem formulation. Many practical problems can benefit from using application-specific prior knowledge on the signal of interest. For example, if one can adequately encode the prior assumption that edge contours are smooth, one does not need to learn a finite-dimensional dictionary from a database of sampled image patches that each contains a circular object in order to up-convert images that contain circular edges. In the first portion of this thesis, we present a novel method for constructing non-stationary kernels that incorporates prior knowledge. A theorem is presented that ensures the result of this construction yields a symmetric and positive-definite kernel function. This construction does not require one to solve any non-identifiable optimization problems. It does require one to manually design some portions of the kernel while deferring the specification of the remaining portions to when an observation of the signal is available. In this sense, the resultant kernel is adaptive to the data observed. We give two examples of this construction technique via the grayscale image up-conversion task where we chose to incorporate the prior assumption that edge contours are smooth. Both examples use a novel local analysis algorithm that summarizes the p-most dominant directions for a given grayscale image patch. The non-stationary properties of these two types of kernels are empirically demonstrated on the Kodak image database that is popular within the image processing research community. Tensors and tensor decomposition methods are gaining popularity in the signal processing and machine learning literature, and most of the recently proposed tensor decomposition methods are based on the tensor power and alternating least-squares algorithms, which were both originally devised over a decade ago. The algebraic approach for the canonical polyadic (CP) symmetric tensor decomposition problem is an exception. This approach exploits the bijective relationship between symmetric tensors and homogeneous polynomials. The solution of a CP symmetric tensor decomposition problem is a set of p rank-one tensors, where p is fixed. In this thesis, we refer to such a set of tensors as a rank-one decomposition with cardinality p. Existing works show that the CP symmetric tensor decomposition problem is non-unique in the general case, so there is no bijective mapping between a rank-one decomposition and a symmetric tensor. However, a proposition in this thesis shows that a particular space of rank-one decompositions, SE, is isomorphic to a space of moment matrices that are called quasi-Hankel matrices in the literature. Optimization over Riemannian manifolds is an area of optimization literature that is also gaining popularity within the signal processing and machine learning community. 
Under some settings, one can formulate optimization problems over differentiable manifolds where each point is an equivalence class. Such manifolds are called quotient manifolds. This type of formulation can reduce or eliminate some of the sources of non-identifiability issues for certain optimization problems. An example is the learning of a basis for a subspace by formulating the solution space as a type of quotient manifold called the Grassmann manifold, while the conventional formulation is to optimize over a space of full column rank matrices. The second portion of this thesis is about the development of a general-purpose numerical optimization framework over SE. A general-purpose numerical optimizer can solve different approximations or regularized versions of the CP decomposition problem, and it can be applied to tensor-related applications that do not use a tensor decomposition formulation. The proposed optimizer uses many concepts from the Riemannian optimization literature. We present a novel formulation of SE as an embedded differentiable submanifold of the space of real-valued matrices with full column rank, and as a quotient manifold. Riemannian manifold structures and tangent space projectors are derived as well. The CP symmetric tensor decomposition problem is used to empirically demonstrate that the proposed scheme is indeed a numerical optimization framework over SE. Future investigations will concentrate on extending the proposed optimization framework to handle decompositions that correspond to non-symmetric tensors.
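As a point of reference for the first portion of the abstract, here is a minimal sketch of one classical non-stationary kernel, the Gibbs kernel with an input-dependent lengthscale. It is included only to illustrate what "non-stationary" means in this context; the lengthscale function is an arbitrary illustrative choice, and this is not the adaptive, prior-driven construction proposed in the thesis.

```python
# Classical Gibbs non-stationary kernel: the lengthscale varies across the input
# domain, so the kernel's behavior is no longer translation-invariant.
# Illustrative only; not the thesis's adaptive kernel construction.
import numpy as np

def lengthscale(x):
    """Illustrative input-dependent lengthscale: short near the origin, longer away from it."""
    return 0.1 + 0.3 * np.abs(x)

def gibbs_kernel(x, xp):
    """Gibbs non-stationary covariance between scalar inputs x and xp."""
    lx, lxp = lengthscale(x), lengthscale(xp)
    s = lx ** 2 + lxp ** 2
    return np.sqrt(2.0 * lx * lxp / s) * np.exp(-((x - xp) ** 2) / s)

# The Gram matrix of a symmetric positive-definite kernel must be PSD; quick check:
xs = np.linspace(-1.0, 1.0, 50)
K = np.array([[gibbs_kernel(u, v) for v in xs] for u in xs])
print("smallest eigenvalue:", np.linalg.eigvalsh(K).min())  # ~0 up to roundoff, not significantly negative
```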
17

A NEW INDEPENDENCE MEASURE AND ITS APPLICATIONS IN HIGH DIMENSIONAL DATA ANALYSIS

Ke, Chenlu 01 January 2019 (has links)
This dissertation has three consecutive topics. First, we propose a novel class of independence measures for testing independence between two random vectors based on the discrepancy between the conditional and the marginal characteristic functions. If one of the variables is categorical, our asymmetric index extends the typical ANOVA to a kernel ANOVA that can test a more general hypothesis of equal distributions among groups. The index is also applicable when both variables are continuous. Second, we develop a sufficient variable selection procedure based on the new measure in a large p small n setting. Our approach incorporates marginal information between each predictor and the response as well as joint information among predictors. As a result, our method is more capable of selecting all truly active variables than marginal selection methods. Furthermore, our procedure can handle both continuous and discrete responses with mixed-type predictors. We establish the sure screening property of the proposed approach under mild conditions. Third, we focus on a model-free sufficient dimension reduction approach using the new measure. Our method does not require strong assumptions on predictors and responses. An algorithm is developed to find dimension reduction directions using sequential quadratic programming. We illustrate the advantages of our new measure and its two applications in high dimensional data analysis by numerical studies across a variety of settings.
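To illustrate the general idea of a characteristic-function-based independence measure, here is a sketch of a related, widely used statistic, the sample distance covariance of Székely, Rizzo, and Bakirov. It is emphatically not the new index proposed in this dissertation, which compares conditional and marginal characteristic functions; it is shown only so the flavor of such measures is concrete.

```python
# Sample distance covariance (Szekely, Rizzo & Bakirov): a classical measure
# built from the gap between the joint characteristic function and the product
# of the marginals. Shown for illustration; NOT the dissertation's new index.
import numpy as np

def _centered_distances(Z):
    """Double-centered Euclidean distance matrix of the sample Z (n observations)."""
    Z = np.asarray(Z, dtype=float).reshape(len(Z), -1)
    D = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=-1)
    return D - D.mean(axis=0, keepdims=True) - D.mean(axis=1, keepdims=True) + D.mean()

def distance_covariance_sq(X, Y):
    """Squared sample distance covariance; vanishes asymptotically iff X and Y are independent."""
    A, B = _centered_distances(X), _centered_distances(Y)
    return (A * B).mean()

rng = np.random.default_rng(0)
x = rng.normal(size=500)
print("independent pair :", distance_covariance_sq(x, rng.normal(size=500)))        # near 0
print("dependent pair   :", distance_covariance_sq(x, x ** 2 + 0.1 * rng.normal(size=500)))  # clearly larger
```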
18

L'approche Support Vector Machines (SVM) pour le traitement des données fonctionnelles / Support Vector Machines (SVM) for Functional Data Analysis

Henchiri, Yousri 16 October 2013 (has links)
L'Analyse des Données Fonctionnelles est un domaine important et dynamique en statistique. Elle offre des outils efficaces et propose de nouveaux développements méthodologiques et théoriques en présence de données de type fonctionnel (fonctions, courbes, surfaces, ...). Le travail exposé dans cette thèse apporte une nouvelle contribution aux thèmes de l'apprentissage statistique et des quantiles conditionnels lorsque les données sont assimilables à des fonctions. Une attention particulière a été réservée à l'utilisation de la technique Support Vector Machines (SVM). Cette technique fait intervenir la notion d'Espace de Hilbert à Noyau Reproduisant. Dans ce cadre, l'objectif principal est d'étendre cette technique non-paramétrique d'estimation aux modèles conditionnels où les données sont fonctionnelles. Nous avons étudié les aspects théoriques et le comportement pratique de la technique présentée et adaptée sur les modèles de régression suivants. Le premier modèle est le modèle fonctionnel de quantiles de régression quand la variable réponse est réelle, les variables explicatives sont à valeurs dans un espace fonctionnel de dimension infinie et les observations sont i.i.d. Le deuxième modèle est le modèle additif fonctionnel de quantiles de régression où la variable d'intérêt réelle dépend d'un vecteur de variables explicatives fonctionnelles. Le dernier modèle est le modèle fonctionnel de quantiles de régression quand les observations sont dépendantes. Nous avons obtenu des résultats sur la consistance et les vitesses de convergence des estimateurs dans ces modèles. Des simulations ont été effectuées afin d'évaluer la performance des procédures d'inférence. Des applications sur des jeux de données réelles ont été considérées. Le bon comportement de l'estimateur SVM est ainsi mis en évidence. / Functional Data Analysis is an important and dynamic area of statistics. It offers effective new tools and proposes new methodological and theoretical developments in the presence of functional type data (functions, curves, surfaces, ...). The work outlined in this dissertation provides a new contribution to the themes of statistical learning and quantile regression when data can be considered as functions. Special attention is devoted to the use of the Support Vector Machines (SVM) technique, which involves the notion of a Reproducing Kernel Hilbert Space. In this context, the main goal is to extend this nonparametric estimation technique to conditional models that take functional data into account. We investigated the theoretical aspects and the practical behavior of the proposed technique, adapted to the following regression models. The first model is the conditional quantile functional model when the covariate takes its values in a bounded subspace of an infinite-dimensional functional space, the response variable takes its values in a compact subset of the real line, and the observations are i.i.d. The second model is the functional additive quantile regression model where the response variable depends on a vector of functional covariates. The last model is the conditional quantile functional model in the dependent functional data case. We obtained the weak consistency and a convergence rate of these estimators. Simulation studies are performed to evaluate the performance of the inference procedures. Applications to chemometrics, environmental and climatic data analysis are considered. The good behavior of the SVM estimator is thus highlighted.
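Since the abstract centers on kernel-based quantile regression, a minimal sketch of that building block may help. The Gaussian kernel on discretized curves (as a stand-in for functional covariates), the subgradient solver, and every hyperparameter below are illustrative assumptions, not the estimators or the theory developed in the thesis.

```python
# Minimal sketch of kernel quantile regression with the pinball loss, using a
# Gaussian kernel on discretized curves as a stand-in for functional covariates.
import numpy as np

def gaussian_gram(U, V, bandwidth):
    """Gram matrix between rows of U and V (each row is one discretized curve)."""
    sq = ((U[:, None, :] - V[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq / (2.0 * bandwidth ** 2))

def pinball_grad(residual, tau):
    """A subgradient of the pinball loss rho_tau(y - f) with respect to the prediction f."""
    return np.where(residual >= 0, -tau, 1.0 - tau)

def fit_kernel_quantile(K, y, tau, lam=1e-2, lr=1e-2, n_iter=3000):
    """Fit f(x_i) = sum_j alpha_j K(x_i, x_j) by subgradient descent on the
    regularized empirical pinball risk (1/n) sum rho_tau + lam * alpha' K alpha."""
    alpha = np.zeros(len(y))
    for _ in range(n_iter):
        residual = y - K @ alpha
        grad = K @ pinball_grad(residual, tau) / len(y) + 2.0 * lam * (K @ alpha)
        alpha -= lr * grad
    return alpha

# Toy functional data: each covariate is a noisy sine curve shifted by a random
# level; the response is that level plus heteroscedastic noise.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 50)
level = rng.uniform(-1.0, 1.0, size=200)
X = level[:, None] + np.sin(2 * np.pi * t)[None, :] + 0.1 * rng.normal(size=(200, 50))
y = level + (0.2 + 0.3 * np.abs(level)) * rng.normal(size=200)

K = gaussian_gram(X, X, bandwidth=3.0)
alpha = fit_kernel_quantile(K, y, tau=0.9)
# Fraction of responses below the fitted 0.9-quantile curve (ideally close to 0.9).
print("empirical coverage:", np.mean(y <= K @ alpha))
```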
19

Sampling Inequalities and Applications / Sampling Ungleichungen und Anwendungen

Rieger, Christian 28 March 2008 (has links)
No description available.
20

Contributions au démélange non-supervisé et non-linéaire de données hyperspectrales / Contributions to unsupervised and nonlinear unmixing of hyperspectral data

Ammanouil, Rita 13 October 2016 (has links)
Le démélange spectral est l'un des problèmes centraux pour l'exploitation des images hyperspectrales. En raison de la faible résolution spatiale des imageurs hyperspectraux en télédétection, la surface représentée par un pixel peut contenir plusieurs matériaux. Dans ce contexte, le démélange consiste à estimer les spectres purs (les endmembers) ainsi que leurs fractions (les abondances) pour chaque pixel de l'image. Le but de cette thèse est de proposer de nouveaux algorithmes de démélange qui visent à améliorer l'estimation des spectres purs et des abondances. En particulier, les algorithmes de démélange proposés s'inscrivent dans le cadre du démélange non-supervisé et non-linéaire. Dans un premier temps, on propose un algorithme de démélange non-supervisé dans lequel une régularisation favorisant la parcimonie des groupes est utilisée pour identifier les spectres purs parmi les observations. Une extension de ce premier algorithme permet de prendre en compte la présence du bruit parmi les observations choisies comme étant les plus pures. Dans un second temps, les connaissances a priori des ressemblances entre les spectres à l'échelle locale et non-locale ainsi que leurs positions dans l'image sont exploitées pour construire un graphe adapté à l'image. Ce graphe est ensuite incorporé dans le problème de démélange non-supervisé par le biais d'une régularisation basée sur le Laplacien du graphe. Enfin, deux algorithmes de démélange non-linéaires sont proposés dans le cas supervisé. Les modèles de mélanges non-linéaires correspondants incorporent des fonctions à valeurs vectorielles appartenant à un espace de Hilbert à noyaux reproduisants. L'intérêt de ces fonctions par rapport aux fonctions à valeurs scalaires est qu'elles permettent d'incorporer un a priori sur la ressemblance entre les différentes fonctions. En particulier, un a priori spectral, dans un premier temps, et un a priori spatial, dans un second temps, sont incorporés pour améliorer la caractérisation du mélange non-linéaire. La validation expérimentale des modèles et des algorithmes proposés sur des données synthétiques et réelles montre une amélioration des performances par rapport aux méthodes de l'état de l'art. Cette amélioration se traduit par une meilleure erreur de reconstruction des données. / Spectral unmixing has been an active field of research since the earliest days of hyperspectral remote sensing. It is concerned with the case where various materials are found in the spatial extent of a pixel, resulting in a spectrum that is a mixture of the signatures of those materials. Unmixing then reduces to estimating the pure spectral signatures and their corresponding proportions in every pixel. In the hyperspectral unmixing jargon, the pure signatures are known as the endmembers and their proportions as the abundances. This thesis focuses on spectral unmixing of remotely sensed hyperspectral data. In particular, it is aimed at improving the accuracy of the extraction of compositional information from hyperspectral data. This is done through the development of new unmixing techniques in two main contexts, namely the unsupervised and nonlinear cases. In particular, we propose a new technique for blind unmixing, we incorporate spatial information in (linear and nonlinear) unmixing, and we finally propose a new nonlinear mixing model. More precisely, first, an unsupervised unmixing approach based on collaborative sparse regularization is proposed where the library of endmember candidates is built from the observations themselves. This approach is then extended in order to take into account the presence of noise among the endmember candidates. Second, within the unsupervised unmixing framework, two graph-based regularizations are used in order to incorporate prior local and nonlocal contextual information. Next, within a supervised nonlinear unmixing framework, a new nonlinear mixing model based on vector-valued functions in a reproducing kernel Hilbert space (RKHS) is proposed. The aforementioned model allows one to consider different nonlinear functions at different bands, regularize the discrepancies between these functions, and account for neighboring nonlinear contributions. Finally, the vector-valued kernel framework is used in order to promote spatial smoothness of the nonlinear part in a kernel-based nonlinear mixing model. Simulations on synthetic and real data show the effectiveness of all the proposed techniques.
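As background for this record, here is a minimal sketch of the standard linear mixing model and a fully constrained abundance estimate (non-negativity plus sum-to-one) obtained by projected gradient descent. This is the classical baseline that unmixing methods build on, not the unsupervised or nonlinear algorithms proposed in the thesis; the endmembers, noise level, and iteration count are illustrative.

```python
# Linear mixing model y = E a + noise, with abundances a >= 0 and sum(a) = 1,
# solved per pixel by projected gradient descent. Baseline sketch only.
import numpy as np

def project_simplex(a):
    """Euclidean projection of a vector onto the probability simplex."""
    u = np.sort(a)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(a) + 1) > (css - 1.0))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(a - theta, 0.0)

def unmix_pixel(y, E, n_iter=500):
    """Estimate abundances for one pixel y given the endmember matrix E (bands x p)."""
    p = E.shape[1]
    a = np.full(p, 1.0 / p)
    step = 1.0 / np.linalg.norm(E.T @ E, 2)   # gradient step from the Lipschitz constant
    for _ in range(n_iter):
        a = project_simplex(a - step * (E.T @ (E @ a - y)))
    return a

# Toy example: 3 random endmembers over 100 bands, one mixed pixel.
rng = np.random.default_rng(2)
E = np.abs(rng.normal(size=(100, 3)))
a_true = np.array([0.6, 0.3, 0.1])
y = E @ a_true + 0.01 * rng.normal(size=100)
print("estimated abundances:", np.round(unmix_pixel(y, E), 3))  # close to [0.6, 0.3, 0.1]
```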
