121

Commutants of composition operators on the Hardy space of the disk

Carter, James Michael 06 November 2013 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / The main part of this thesis, Chapter 4, contains results on the commutant of a semigroup of operators defined on the Hardy space of the disk, where the operators have hyperbolic non-automorphic symbols. In particular, we show in Chapter 5 that the commutant of the semigroup of operators is in one-to-one correspondence with a Banach algebra of bounded analytic functions on an open half-plane. This algebra of functions is a subalgebra of the standard Newton space. Chapter 4 extends previous work on maps with an interior fixed point to the case where the symbol of the composition operator has a boundary fixed point.
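To make the objects concrete, here is a minimal numerical sketch (illustrative only, not taken from the thesis) of composition operators on the Hardy space H^2 in the monomial basis: column k of the matrix of C_phi holds the Taylor coefficients of phi^k, and membership in the commutant can be checked on truncations. The map phi(z) = z/(2 - z) is just a convenient self-map fixing 0.

```python
import numpy as np

def comp_op_matrix(phi, n):
    """Truncated matrix of C_phi f = f o phi on H^2 in the monomial basis.

    Column k holds the first n Taylor coefficients of phi(z)**k.
    Assumes phi(0) = 0, which makes the truncation exact.
    """
    C = np.zeros((n, n))
    p = np.zeros(n)
    p[0] = 1.0                       # phi**0 = 1
    for k in range(n):
        C[:, k] = p
        q = np.zeros(n)              # p <- p * phi (truncated series product)
        for i in range(n):
            if phi[i] != 0.0:
                q[i:] += phi[i] * p[:n - i]
        p = q
    return C

n = 25
phi = np.array([0.0] + [0.5 ** m for m in range(1, n)])  # phi(z) = z/(2 - z)
C = comp_op_matrix(phi, n)
psi = C @ phi                        # Taylor coefficients of psi = phi o phi
D = comp_op_matrix(psi, n)
# psi o phi = phi o psi, so C_psi lies in the commutant of C_phi:
print(np.linalg.norm(C @ D - D @ C))  # ~ 1e-16
```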
122

Restrictions to Invariant Subspaces of Composition Operators on the Hardy Space of the Disk

Thompson, Derek Allen 29 January 2014 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Invariant subspaces are a natural topic in linear algebra and operator theory. In some rare cases, the restrictions of an operator to different invariant subspaces are unitarily equivalent, as with certain restrictions of the unilateral shift on the Hardy space of the disk. A composition operator whose symbol fixes 0 has a nested sequence of invariant subspaces, and if the symbol is linear fractional and extremally noncompact, the restrictions to these subspaces all have the same norm and spectrum. Despite this evidence, we use semigroup techniques to show that in many cases the restrictions are still not unitarily equivalent.
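For context, the unitary equivalence mentioned for the unilateral shift can be verified in one line (standard material, not specific to the thesis): the map $U_n f = z^n f$ is a unitary from $H^2$ onto the invariant subspace $z^n H^2$, and

$$ S U_n f = z^{n+1} f = U_n S f, $$

so $S|_{z^n H^2} = U_n S U_n^{-1}$ is unitarily equivalent to $S$ itself.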
123

Hypercyclic Extensions Of Bounded Linear Operators

Turcu, George R. 20 December 2013 (has links)
No description available.
124

L'approche Support Vector Machines (SVM) pour le traitement des données fonctionnelles / Support Vector Machines (SVM) for Functional Data Analysis

Henchiri, Yousri 16 October 2013 (has links)
Functional Data Analysis is an important and dynamic area of statistics. It offers effective new tools and proposes new methodological and theoretical developments in the presence of functional data (functions, curves, surfaces, ...). The work presented in this dissertation contributes to the themes of statistical learning and quantile regression when the data can be considered as functions. Special attention is devoted to the Support Vector Machines (SVM) technique, which involves the notion of a Reproducing Kernel Hilbert Space. In this context, the main goal is to extend this nonparametric estimation technique to conditional models with functional data. We investigated the theoretical aspects and practical behavior of the proposed and adapted technique for the following regression models. The first model is the functional quantile regression model where the covariate takes its values in a bounded subspace of an infinite-dimensional functional space, the response variable takes its values in a compact subset of the real line, and the observations are i.i.d. The second model is the functional additive quantile regression model, where the real response variable depends on a vector of functional covariates. The last model is the functional quantile regression model for dependent functional data. We obtained weak consistency and convergence rates for the estimators in these models. Simulation studies were performed to evaluate the performance of the inference procedures, and applications to chemometric, environmental and climatic data are considered. The good behavior of the SVM estimator is thus highlighted.
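As a rough illustration of the ingredients shared by these models (a kernel on discretized curves and the pinball loss that defines quantile regression), here is a minimal subgradient-descent sketch in Python. It is not the RKHS estimator analyzed in the thesis; the kernel, penalty and step size are illustrative choices.

```python
import numpy as np

def rbf_gram(X, Y, gamma=1.0):
    """RBF Gram matrix between discretized curves (one curve per row)."""
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def kernel_quantile_fit(X, y, tau=0.5, gamma=1.0, lam=1e-3, lr=0.1, iters=2000):
    """Fit f(x) = sum_i alpha_i k(x_i, x) by subgradient descent on
    (1/n) sum_i rho_tau(y_i - f(x_i)) + (lam/2) * alpha' K alpha,
    where rho_tau is the pinball (check) loss at quantile level tau."""
    K = rbf_gram(X, X, gamma)
    n = len(y)
    alpha = np.zeros(n)
    for _ in range(iters):
        r = y - K @ alpha
        g = np.where(r > 0, -tau, 1.0 - tau)   # subgradient of rho_tau wrt f
        alpha -= lr * (K @ g / n + lam * (K @ alpha))
    return alpha

# toy data: 200 noisy curves on a 50-point grid, scalar response
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 50)
X = rng.normal(size=(200, 1)) * np.sin(2 * np.pi * t) + 0.1 * rng.normal(size=(200, 50))
y = X[:, :25].mean(axis=1) + 0.1 * rng.normal(size=200)
alpha = kernel_quantile_fit(X, y, tau=0.9, gamma=0.5)
f90 = rbf_gram(X, X, 0.5) @ alpha
print(np.mean(y <= f90))   # empirical coverage, should be near 0.9
```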
125

A Hilbert space approach to multiple recurrence in ergodic theory

Beyers, Frederik Johannes Conradie 22 February 2006 (has links)
Hilbert space theory has been an important tool for ergodic theorists ever since John von Neumann proved the fundamental mean ergodic theorem in Hilbert space. Recurrence is one of the cornerstones of the study of dynamical systems. In this dissertation some extensions of the basic, well-known recurrence results are investigated. Hilbert space theory proves to be a very useful approach to the solution of multiple recurrence problems in ergodic theory. Another very important use of Hilbert space theory became evident only relatively recently, when it was realized that non-commutative dynamical systems become accessible to the ergodic theorist through the important Gelfand-Naimark-Segal (GNS) representation of C*-algebras as Hilbert spaces. Through this construction we can invoke the rich catalogue of Hilbert space ergodic results to approach the more general, and usually more involved, non-commutative extensions of classical ergodic-theoretical results. To make this text self-contained, the basic, standard ergodic-theoretical results are included. In many instances Hilbert space counterparts of these basic results are also stated and proved. Chapters 1 and 2 introduce these basic ergodic-theoretical results: the idea of measure-theoretic dynamical systems with some basic examples, Poincaré recurrence, the ergodic theorems of von Neumann and Birkhoff, ergodicity, mixing and weak mixing. Chapter 2 also gives several rudimentary results that serve as the basic tools used in proofs. In Chapter 3 we show how a Hilbert space result, a variant of a result by Van der Corput for uniformly distributed sequences modulo 1, is used to simplify the proofs of some multiple recurrence problems. We first use it to simplify and clarify the proof of a multiple recurrence result by Furstenberg, and then to extend that result to a more general case using the same Van der Corput lemma. This may be considered the main result of this thesis, since it supplies an original proof of the result; the Van der Corput lemma simplifies many of the tedious terms found in Furstenberg's proof. In Chapter 4 we list and discuss a few important results in which classical (commutative) ergodic results were extended to the non-commutative case. As stated before, these extensions are mainly due to the accessibility of Hilbert space theory through the GNS construction. The main result in this section, due to Niculescu, Ströh and Zsidó, is proved here using a Van der Corput lemma similar to the one used in the commutative case. Although we prove a special case of the theorem of Niculescu, Ströh and Zsidó, the same method (Van der Corput) can be used to prove the generalized result. Copyright 2004, University of Pretoria. All rights reserved. The copyright in this work vests in the University of Pretoria. No part of this work may be reproduced or transmitted in any form or by any means, without the prior written permission of the University of Pretoria. Please cite as follows: Beyers, FJC 2004, A Hilbert space approach to multiple recurrence in ergodic theory, MSc dissertation, University of Pretoria, Pretoria, viewed yymmdd <http://upetd.up.ac.za/thesis/available/etd-02222006-104936/> / Dissertation (MSc (Applied Mathematics))--University of Pretoria, 2007. / Mathematics and Applied Mathematics / unrestricted
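The Hilbert space Van der Corput lemma alluded to above is commonly stated as follows (one standard formulation; the thesis may use a variant): if $(u_n)$ is a bounded sequence in a Hilbert space and

$$ \lim_{H\to\infty} \frac{1}{H} \sum_{h=1}^{H} \limsup_{N\to\infty} \left| \frac{1}{N} \sum_{n=1}^{N} \langle u_{n+h}, u_n \rangle \right| = 0, $$

then $\left\| \frac{1}{N} \sum_{n=1}^{N} u_n \right\| \to 0$ as $N \to \infty$. It converts a statement about averages of the sequence into one about averages of correlations, which is what tames the troublesome terms in Furstenberg-type arguments.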
126

On The Fourier Transform Approach To Quantum Error Control

Kumar, Hari Dilip 07 1900 (has links) (PDF)
Quantum mechanics is the physics of the very small. Quantum computers are devices that utilize the power of quantum mechanics for their computational primitives. Associated with each quantum system is an abstract space known as its Hilbert space, and a subspace of the Hilbert space is known as a quantum code. Quantum codes make it possible to protect the computational state of a quantum computer against decoherence errors. The well-known classes of quantum codes are stabilizer (additive) codes, non-additive codes and Clifford codes. This thesis demonstrates a general approach to the construction of these classes of quantum codes, with the Fourier transform over finite groups as the unifying framework. The thesis is divided into five chapters. The first chapter is an introduction to basic quantum mechanics, quantum computation and quantum noise, laying the foundation for the quantum error correction theory of the next chapter. The second chapter introduces the basic theory behind quantum error correction, along with the various classes and constructions of active quantum error-control codes. The third chapter introduces the Fourier transform over finite groups and shows how it may be used to construct all the known classes of quantum codes, as well as a class of quantum codes as yet unpublished in the literature. The transform-domain approach was originally introduced in (Arvind et al., 2002), but not all the classes of quantum codes were treated there. We elaborate on that work to obtain the other classes of quantum codes, along with a new class: codes from idempotents in the transform domain. The fourth chapter details the computer programs used to generate and test the various code classes; the code was written in the GAP (Groups, Algorithms, Programming) computer algebra package. The fifth and final chapter concludes, with possible directions for future work. References cited in the thesis are attached at the end.
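For a finite abelian group the Fourier transform in question reduces to the familiar DFT built from the group's characters; a small Python sketch (illustrative only; for the non-abelian groups also covered by the framework, irreducible matrix representations replace the characters):

```python
import numpy as np

def fourier_matrix_cyclic(n):
    """Fourier transform over the cyclic group Z_n.

    Row j is the character chi_j(g) = exp(2*pi*i*j*g/n), normalized so the
    matrix is unitary (Plancherel theorem).
    """
    j, g = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    return np.exp(2j * np.pi * j * g / n) / np.sqrt(n)

F = fourier_matrix_cyclic(8)
print(np.allclose(F @ F.conj().T, np.eye(8)))   # True: the transform is unitary
```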
127

Filtrage adaptatif à l’aide de méthodes à noyau : application au contrôle d’un palier magnétique actif / Adaptive filtering using kernel methods : application to the control of an active magnetic bearing

Saide, Chafic 19 September 2013 (has links)
Function approximation based on reproducing kernel Hilbert spaces is of great importance in kernel-based regression and remains an active research topic for the identification of nonlinear systems. However, the order of the model grows with the number of input-output pairs, which makes the method inappropriate for online identification. To overcome this drawback, many sparsification methods have been proposed to control the order of the model. The coherence criterion is one such sparsification method: a subset of the most relevant past input vectors is selected to form a small dictionary of kernel functions that defines the model. A kernel function, once introduced into the dictionary, remains there unchanged even if the non-stationarity of the system makes its contribution to the estimated output small. This observation leads to the idea of adapting the elements of the dictionary so as to minimize the resulting instantaneous mean square error and/or better control the order of the model. The first part deals with adaptive algorithms using the coherence criterion. The adaptation of the dictionary elements by a stochastic gradient method is presented for two families of kernel functions. This part also derives adaptive algorithms using the coherence criterion to identify multiple-output models. The second part briefly introduces the active magnetic bearing (AMB) and proposes controlling an AMB with a kernel adaptive algorithm, replacing an existing method based on multilayer neural networks.
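As an illustration of how a coherence criterion keeps the dictionary small in online kernel identification, here is a minimal kernel-LMS sketch in Python (a generic textbook form with illustrative parameter names; the dictionary-adaptation step developed in the thesis, which moves the atoms themselves by stochastic gradient, is not included):

```python
import numpy as np

def gaussian_k(x, y, sigma=1.0):
    return np.exp(-np.linalg.norm(x - y) ** 2 / (2 * sigma ** 2))

def klms_coherence(X, d, mu0=0.5, eta=0.1, sigma=1.0):
    """Kernel LMS with coherence-based sparsification.

    A new input joins the dictionary only if its largest absolute kernel
    value against the current atoms (its coherence) stays below mu0;
    otherwise only the weights are updated by an LMS step.
    """
    dict_, alpha, err = [X[0]], np.zeros(1), []
    for x, y in zip(X, d):
        k = np.array([gaussian_k(x, c, sigma) for c in dict_])
        e = y - alpha @ k                 # a priori estimation error
        err.append(e)
        if np.max(np.abs(k)) <= mu0:      # coherence test: admit a new atom
            dict_.append(x)
            alpha = np.append(alpha, eta * e)
        else:                             # LMS update of existing weights
            alpha = alpha + eta * e * k
    return dict_, alpha, np.array(err)
```

The threshold mu0 trades model order against accuracy: with mu0 close to 1 almost every input is admitted, while smaller values keep the dictionary sparse.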
128

Sampling Inequalities and Applications / Sampling Ungleichungen und Anwendungen

Rieger, Christian 28 March 2008 (has links)
No description available.
129

Inference for stationary functional time series: dimension reduction and regression

Kidzinski, Lukasz 24 October 2014 (has links)
Continuous progress in data storage and collection techniques makes it possible to observe and record processes in an almost continuous fashion. Examples include climate data, financial transaction values, patterns of pollution levels, etc. To analyze such processes we need appropriate statistical tools; a well-known technique is functional data analysis (FDA). The main objective of this doctoral project is to analyze temporal dependence in FDA. Such dependence occurs, for example, if the data are built from a continuous-time process that has been cut into segments, such as days; we are then in the setting of functional time series. The first part of the thesis concerns functional linear regression, an extension of multivariate regression. We develop a data-driven method for choosing the dimension of the estimator; in contrast to existing results, this method requires no unverifiable assumptions. In the second part, we analyze dynamic functional linear models, extending the well-established linear models to a framework of temporal dependence. We obtain estimators and statistical tests using methods of harmonic analysis, drawing on ideas of Brillinger, who studied these models in a vector-space context. / Doctorat en Sciences / info:eu-repo/semantics/nonPublished
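For a sense of the dimension-reduction step, here is a static functional-PCA sketch on discretized curves (the standard i.i.d. version; the dynamic variant relevant to this thesis instead works with the spectral density operator so as to exploit temporal dependence):

```python
import numpy as np

def fpca(curves, q):
    """Empirical functional PCA: curves is (n_days, n_gridpoints).

    Returns the mean curve, the first q discretized eigenfunctions of the
    empirical covariance operator, and the n x q matrix of scores.
    """
    mean = curves.mean(axis=0)
    Xc = curves - mean
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)  # rows of Vt: eigenfunctions
    phi = Vt[:q]
    scores = Xc @ phi.T            # projections <X_i - mean, phi_j>
    return mean, phi, scores
```

Choosing q, the dimension of the estimator, is exactly the kind of decision the first part of the thesis addresses with a data-driven rule.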
130

Image Reconstruction Based On Hilbert And Hybrid Filtered Algorithms With Inverse Distance Weight And No Backprojection Weight

Narasimhadhan, A V 08 1900 (has links) (PDF)
Filtered backprojection (FBP) reconstruction algorithms are very popular in the field of X-ray computed tomography (CT) because they give advantages in terms of numerical accuracy and computational complexity. Ramp-filter based fan-beam FBP reconstruction algorithms have a position-dependent weight in the backprojection, which is responsible for a spatially non-uniform distribution of noise and resolution, and for artifacts. Many algorithms based on shift-variant filtering or spatially-invariant interpolation in the backprojection step have been developed to deal with this issue, but they are computationally demanding. Recently, fan-beam algorithms based on Hilbert filtering with inverse distance weight and with no weight in the backprojection have been derived using Hamaker's relation; these fan-beam reconstruction algorithms have been shown to improve uniformity of noise and resolution. In this thesis, fan-beam FBP reconstruction algorithms with inverse distance backprojection weight and with no backprojection weight for 2D image reconstruction are presented and discussed for the two fan-beam scan geometries: equi-angular and equi-space detector arrays. Based on these fan-beam reconstruction algorithms, new 3D cone-beam FDK reconstruction algorithms with circular and helical scan trajectories for curved and planar detector geometries are proposed. To start with, three rebinning formulae from the literature are presented, and it is shown that all fan-beam FBP reconstruction algorithms can be derived from them. Specifically, two fan-beam algorithms with no backprojection weight based on Hilbert filtering for an equi-space linear detector array, and one new fan-beam algorithm with inverse distance backprojection weight based on hybrid filtering for both equi-angular and equi-space linear detector arrays, are derived. Simulation results for these algorithms, in terms of uniformity of noise and resolution in comparison with the standard (ramp-filter based) fan-beam FBP reconstruction algorithm, are presented. It is shown through simulation that the fan-beam reconstruction algorithm with inverse distance in the backprojection gives better noise performance while retaining the resolution properties. A comparison between the above-mentioned reconstruction algorithms is given in terms of computational complexity.
The state-of-the-art 3D X-ray imaging systems in medicine with cone-beam (CB) circular and helical computed tomography scanners use non-exact (approximate) FBP-based reconstruction algorithms. These are attractive because of their simplicity and low computational cost, but they produce sub-optimal reconstructed images with respect to cone-beam artifacts and noise, and with respect to axial intensity drop in the case of circular-trajectory imaging. The axial intensity drop in the reconstructed image is due to the insufficient data acquired by a circular-trajectory CB CT scan. This thesis investigates how to improve image quality by means of Hilbert and hybrid filtering based algorithms that use redundant data in Feldkamp, Davis and Kress (FDK) type reconstruction. New FDK-type reconstruction algorithms for cylindrical and planar detectors in circular CB CT are developed, obtained by extending to three dimensions (3D) exact Hilbert-filtering based 2D fan-beam FBP algorithms with no position-dependent backprojection weight and with inverse distance backprojection weight. The proposed FDK reconstruction algorithm with inverse distance backprojection weight requires full-scan projection data, while the FDK reconstruction algorithm with no backprojection weight can handle partial-scan data, including very short scans. The FDK reconstruction algorithms with no backprojection weight for circular CB CT are compared with Hu's, FDK and T-FDK reconstruction algorithms in terms of axial intensity drop and computational complexity. Simulation results on noise, CB artifact performance and execution time, as well as partial-scan reconstruction abilities, are presented. We show that the FDK reconstruction algorithms with no backprojection weight have better noise characteristics than the conventional FDK reconstruction algorithm, whose backprojection weight is known to result in spatially non-uniform noise. This thesis also presents an efficient method to reduce the axial intensity drop in circular CB CT, consisting of two steps: first, reconstruction of the object using the FDK reconstruction algorithm with no backprojection weight, and second, estimation of the missing term. The method is comparable to Zhu et al.'s method in terms of reduction in axial intensity drop, noise and computational complexity. The helical scanning trajectory satisfies the Tuy-Smith condition, hence an exact and stable reconstruction is possible; however, the helical FDK reconstruction algorithm produces cone-beam artifacts since it is approximate in its derivation. Helical FDK reconstruction algorithms based on Hilbert filtering with no backprojection weight, and an FDK reconstruction algorithm based on hybrid filtering with inverse distance backprojection weight, are presented to reduce these CB artifacts. These algorithms are compared with the standard helical FDK algorithm in terms of noise, CB artifacts and computational complexity.
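For readers unfamiliar with the filter-then-backproject structure that all of these algorithms share, here is a minimal parallel-beam FBP sketch in Python (illustrative only: the algorithms above use fan-beam and cone-beam geometry, Hilbert or hybrid filters instead of the ramp filter, and the backprojection weights discussed in the text):

```python
import numpy as np

def fbp_parallel(sinogram, thetas):
    """Parallel-beam filtered backprojection.

    sinogram: (n_angles, n_det) array of line-integral projections;
    thetas: projection angles in radians. Returns an n_det x n_det image.
    """
    n_ang, n_det = sinogram.shape
    ramp = np.abs(np.fft.fftfreq(n_det))                  # ramp filter |w|
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))
    xs = np.arange(n_det) - n_det / 2.0
    X, Y = np.meshgrid(xs, xs, indexing="xy")
    recon = np.zeros((n_det, n_det))
    for proj, th in zip(filtered, thetas):
        t = X * np.cos(th) + Y * np.sin(th) + n_det / 2.0  # detector coordinate
        idx = np.clip(np.round(t).astype(int), 0, n_det - 1)
        recon += proj[idx]                                 # nearest-neighbour backprojection
    return recon * np.pi / n_ang
```

In the parallel-beam case the backprojection needs no position-dependent weight; the inverse-distance and no-weight variants studied in the thesis arise from how this step changes in fan-beam and cone-beam geometry.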
