About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Rademacher Sums, Hecke Operators and Moonshine

Bruno, Paul 01 June 2016 (has links)
No description available.
2

Sufficient Criteria for Total Differentiability of a Real Valued Function of a Complex Variable in Rⁿ: An Extension of H. Rademacher's Result for R²

Matovsky, Veron Rodieck 08 1900 (has links)
This thesis provides sufficient conditions for total differentiability almost everywhere of a real-valued function of a complex variable defined on a bounded region in Rⁿ. It extends H. Rademacher's 1918 results in R², which culminated in total differentiability, to Rⁿ.
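For context, the classical theorem of Rademacher that the thesis extends can be stated as follows (a standard formulation, supplied here for reference rather than quoted from the thesis):

$$
f\colon \Omega \subseteq \mathbb{R}^n \to \mathbb{R}^m \ \text{Lipschitz} \;\Longrightarrow\; f \ \text{is totally differentiable at almost every } x \in \Omega,
$$

with respect to Lebesgue measure on $\mathbb{R}^n$.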
3

Subespaços complementados de espaços de Banach clássicos / Complemented subspaces of classical Banach spaces

Melendez Caraballo, Blas, 1988- 27 August 2018 (has links)
Advisor: Jorge Tulio Mujica Ascui / Master's dissertation (mestrado) - Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica / Abstract: In 1960, Pelczynski [1] showed that if X is one of the spaces c0 or lp, with p a real number greater than or equal to one, then every infinite-dimensional complemented subspace of X is isomorphic to X. Another classical result of Pelczynski [1] states that if p is a real number greater than one, then the space Lp[0,1] contains a complemented subspace isomorphic to l2. Our aim is to study results of this kind and to introduce some open problems. BIBLIOGRAPHY: [1] A. Pelczynski, Projections in certain Banach spaces, Studia Mathematica 19 (1960), pp. 209-228. / Master of Mathematics (Mestre em Matemática)
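In display form, the two results of Pelczynski quoted in the abstract read (restated from the prose above):

$$
X \in \{c_0\} \cup \{\ell_p : 1 \le p < \infty\},\quad Y \subseteq X \ \text{complemented},\ \dim Y = \infty \;\Longrightarrow\; Y \cong X,
$$

$$
1 < p < \infty \;\Longrightarrow\; L_p[0,1] \ \text{contains a complemented subspace isomorphic to } \ell_2.
$$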
4

Multiple-valued functions in the sense of F. J. Almgren

Goblet, Jordan 19 June 2008 (has links)
A multiple-valued function is a "function" that assumes two or more distinct values in its range for at least one point in its domain. While these "functions" are not functions in the usual sense of being single-valued, the usage is so common that there is no way to dislodge it. This thesis is devoted to a particular class of multiple-valued functions: Q-valued functions. A Q-valued function is essentially a rule assigning Q unordered and not necessarily distinct points of R^n to each element of R^m. This object is one of the key ingredients of Almgren's 1,700-page proof that the singular set of an m-dimensional mass-minimizing integral current in R^n has dimension at most m-2. We start by developing a decomposition theory and show, for instance, when a continuous Q-valued function can or cannot be seen as Q "glued" continuous classical functions. The decomposition theory is then used to prove, intrinsically, a Rademacher-type theorem for Lipschitz Q-valued functions. A couple of Lipschitz extension theorems are also obtained for partially defined Lipschitz Q-valued functions. The second part is devoted to a Peano-type result for a particular class of nonconvex-valued differential inclusions. To the best of the author's knowledge, this is the first theorem in the nonconvex case where the existence of a continuously differentiable solution is proved under a mere continuity assumption on the corresponding multifunction. An application to a particular class of nonlinear differential equations is included. The third part is devoted to the calculus of variations in the multiple-valued framework. We define two different notions of Dirichlet nearly minimizing Q-valued functions, generalizing the Dirichlet energy minimizers studied by Almgren. Hölder regularity is obtained for these nearly minimizers, and we give examples showing that the branching phenomenon can be much worse in this context.
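The objects in question are often formalized as follows (a standard convention in the literature after Almgren, given here for orientation; the notation is not taken from the thesis): an unordered Q-tuple is identified with a sum of Dirac masses, so a Q-valued function is a map

$$
f\colon \Omega \subseteq \mathbb{R}^m \to \mathcal{A}_Q(\mathbb{R}^n), \qquad \mathcal{A}_Q(\mathbb{R}^n) = \Big\{ \sum_{i=1}^{Q} [\![ p_i ]\!] : p_1, \dots, p_Q \in \mathbb{R}^n \Big\},
$$

where $\mathcal{A}_Q(\mathbb{R}^n)$ carries the metric $\mathcal{G}(S,T) = \min_{\sigma} \big( \sum_i |p_i - q_{\sigma(i)}|^2 \big)^{1/2}$, the minimum running over permutations $\sigma$ of $\{1, \dots, Q\}$.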
5

The law of the iterated logarithm for tail sums

Ghimire, Santosh January 1900 (has links)
Doctor of Philosophy / Department of Mathematics / Charles N. Moore / The main purpose of this thesis is to derive the law of the iterated logarithm for tail sums in various contexts in analysis: sums of Rademacher functions, general dyadic martingales, independent random variables, and lacunary trigonometric series. We call the law of the iterated logarithm for tail sums the tail law of the iterated logarithm. We first establish the tail law of the iterated logarithm for sums of Rademacher functions, obtaining both upper and lower bounds; a sum of Rademacher functions is a well-behaved dyadic martingale. With the ideas from the Rademacher case, we then establish the tail law of the iterated logarithm for general dyadic martingales, again obtaining both upper and lower bounds. A lower bound is obtained for the law of the iterated logarithm for tail sums of bounded symmetric independent random variables. Finally, since lacunary trigonometric series exhibit many of the properties of partial sums of independent random variables, we obtain a lower bound for the tail law of the iterated logarithm for the lacunary trigonometric series introduced by Salem and Zygmund.
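For orientation, the classical law of the iterated logarithm for sums of Rademacher functions $r_k$ (a standard statement, included here for context) reads

$$
\limsup_{n \to \infty} \frac{\sum_{k=1}^{n} r_k(t)}{\sqrt{2 n \log \log n}} = 1 \quad \text{for almost every } t \in [0,1];
$$

the tail law studied in the thesis concerns the analogous asymptotics for tail sums (remainders) rather than for partial sums.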
6

Evaluation of certain infinite series using theorems of John, Rademacher and Kronecker

Haley, Colette Sharon, January 1900 (has links)
Thesis (M. Sc.)--Carleton University, 2005. / Includes bibliographical references (p. 111-114). Also available in electronic format on the Internet.
7

Adversarial Learning on Robustness and Generative Models

Qingyi Gao (11211114) 03 August 2021 (has links)
In this dissertation, we study two important problems in the area of modern deep learning: adversarial robustness and adversarial generative models. In the first part, we study the generalization performance of deep neural networks (DNNs) in adversarial learning. Recent studies have shown that many machine learning models are vulnerable to adversarial attacks, but much remains unknown concerning the generalization error in this scenario. We focus on $\ell_\infty$ adversarial attacks produced under the fast gradient sign method (FGSM). We establish a tight bound for the adversarial Rademacher complexity of DNNs based on both the spectral norms and the ranks of the weight matrices. The spectral-norm and rank constraints imply that this class of networks can be realized as a subset of the class of shallow networks composed with a low-dimensional Lipschitz continuous function. This crucial observation leads to a bound that improves the dependence on the network width compared to previous works and achieves depth independence. We show that the adversarial Rademacher complexity is always larger than its natural counterpart, but the effect of adversarial perturbations can be limited under our weight normalization framework.

In the second part, we study deep generative models, which have seen great success in many fields. It is well known that complex data usually do not populate their ambient Euclidean space but instead reside on a lower-dimensional manifold. Thus, misspecifying the latent dimension in generative models results in a mismatch of latent representations and poor generative quality. To address these problems, we propose a novel framework called Latent Wasserstein GAN (LWGAN) that fuses the auto-encoder and the WGAN so that the intrinsic dimension of the data manifold can be adaptively learned by an informative latent distribution. In particular, we show that there exist an encoder network and a generator network such that the intrinsic dimension of the distribution learned by the encoder equals the dimension of the data manifold. Theoretically, we prove the consistency of the estimate of the intrinsic dimension of the data manifold and derive a generalization error bound for LWGAN. Comprehensive empirical experiments verify our framework and show that LWGAN is able to identify the correct intrinsic dimension under several scenarios and simultaneously generate high-quality synthetic data by sampling from the learned latent distribution.
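The FGSM attack referenced above has a simple closed form: x' = x + ε · sign(∇ₓL(x, y)). A minimal self-contained sketch follows (an illustration only, using a hand-coded logistic model rather than the dissertation's DNNs; all names and parameters here are hypothetical):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """Craft an l_inf FGSM adversarial example for logistic regression.

    The loss is binary cross-entropy; its input gradient has the closed
    form (sigmoid(w.x + b) - y) * w, so no autodiff is needed here.
    """
    grad_x = (sigmoid(w @ x + b) - y) * w   # dL/dx
    return x + eps * np.sign(grad_x)        # step of size eps in the l_inf ball

rng = np.random.default_rng(0)
w, b = rng.normal(size=5), 0.1
x, y = rng.normal(size=5), 1.0
x_adv = fgsm_perturb(x, y, w, b, eps=0.05)
print(np.max(np.abs(x_adv - x)))  # exactly eps: the perturbation saturates the budget
```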
8

Machine à vecteurs de support hyperbolique et ingénierie du noyau / Hyperbolic Support Vector Machine and Kernel design

El Dakdouki, Aya 11 September 2019 (has links)
Statistical learning theory is a field of inferential statistics whose foundations were laid by Vapnik at the end of the 1960s; it is considered a subdomain of artificial intelligence. In machine learning, support vector machines (SVMs) are supervised learning models with associated learning algorithms that analyze data for classification and regression analysis. In this thesis, our aim is to pose two new statistical learning problems: one on the design and evaluation of a multi-class SVM extension, and another on the design of a new kernel for support vector machines. First, we introduce a new kernel machine for multi-class pattern recognition: the hyperbolic support vector machine. Geometrically, it is characterized by the fact that its decision boundaries in the feature space are defined by hyperbolic functions. We then establish its main statistical properties, among them that the classes of component functions are uniform Glivenko-Cantelli classes, which we prove by establishing an upper bound on the Rademacher complexity. Finally, we establish a guaranteed risk for our classifier. Second, we construct a new kernel based on the Fourier transform of a Gaussian mixture model. We proceed as follows: first, each class is fragmented into a number of relevant subclasses; then we consider the directions given by the vectors obtained by taking all pairs of subclass centers within the same class, excluding those that would connect two subclasses of two different classes. This can also be seen as a search for translation invariance within each class. We applied the kernel successfully to several datasets in the context of machine learning with multi-class support vector machines.
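A sketch of the subclass-center step described above (an illustration under assumptions: k-means as the fragmentation method and scikit-learn as the toolkit are choices made here, not stated in the abstract):

```python
import numpy as np
from itertools import combinations
from sklearn.cluster import KMeans

def within_class_directions(X, y, n_subclasses=3, seed=0):
    """Return direction vectors between subclass centers of the same class.

    Each class is fragmented with k-means (one possible fragmentation);
    directions connecting subclasses of *different* classes are never
    formed, matching the exclusion described in the abstract.
    """
    directions = []
    for label in np.unique(y):
        centers = KMeans(n_clusters=n_subclasses, n_init=10,
                         random_state=seed).fit(X[y == label]).cluster_centers_
        for c1, c2 in combinations(centers, 2):  # pairs within one class only
            directions.append(c2 - c1)
    return np.asarray(directions)

# toy usage: two classes in the plane
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (60, 2)), rng.normal(4, 1, (60, 2))])
y = np.repeat([0, 1], 60)
print(within_class_directions(X, y).shape)  # 2 classes * C(3,2) pairs -> (6, 2)
```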
9

The classification of Boolean functions using the Rademacher-Walsh transform

Anderson, Neil Arnold 31 August 2007 (has links)
When considering Boolean switching functions with n input variables, there are 2^(2^n) possible functions, obtained by enumerating all possible combinations of input values and arrangements of output values. As is expected with double-exponential growth, the number of functions becomes unmanageable very quickly as n increases. This thesis develops a new approach for computing the spectral classes in which the spectral operations are performed by manipulating the truth tables rather than first moving to the spectral domain to manipulate the spectral coefficients. Additionally, a generic approach is developed for modeling these spectral operations within the functional domain. The results of this research match previous work for n ≤ 4 but differ for n = 5. This research indicates with a high level of confidence that there are in fact 15 previously unidentified classes, for a total of 206 spectral classes needed to represent all 2^(2^n) Boolean functions.
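The Walsh spectrum itself is straightforward to compute from a truth table with the fast Walsh-Hadamard butterfly. A minimal sketch follows (one common sign and ordering convention; conventions vary, and this is not the thesis's own code):

```python
import numpy as np

def walsh_spectrum(truth_table):
    """Walsh spectrum of a Boolean function given as a 0/1 truth table.

    Encodes outputs as +1/-1 and applies the in-place fast
    Walsh-Hadamard transform (O(n * 2^n) for n input variables;
    the truth table length must be a power of two).
    """
    s = 1 - 2 * np.asarray(truth_table, dtype=np.int64)  # 0 -> +1, 1 -> -1
    h = 1
    while h < len(s):                                    # butterfly stages
        for i in range(0, len(s), 2 * h):
            a, b = s[i:i+h].copy(), s[i+h:i+2*h].copy()
            s[i:i+h], s[i+h:i+2*h] = a + b, a - b
        h *= 2
    return s

# spectrum of 2-input XOR: all energy on the x1 XOR x2 coefficient
print(walsh_spectrum([0, 1, 1, 0]))  # -> [0 0 0 4]
```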
10

A concentration inequality based statistical methodology for inference on covariance matrices and operators

Kashlak, Adam B. January 2017 (has links)
In the modern era of high- and infinite-dimensional data, classical statistical methodology is often rendered inefficient and ineffective when confronted with the big-data problems that arise in genomics, medical imaging, speech analysis, and many other areas of research. Many problems manifest when the practitioner is required to take into account the covariance structure of the data, which takes the form of either a high-dimensional low-rank matrix or a finite-dimensional representation of an infinite-dimensional operator acting on some underlying function space. Thus, novel methodology is required to estimate, analyze, and make inferences concerning such covariances. In this manuscript, we propose using tools from the concentration-of-measure literature (a theory that arose in the latter half of the 20th century from connections between geometry, probability, and functional analysis) to construct rigorous descriptive and inferential statistical methodology for covariance matrices and operators. A variety of concentration inequalities are considered, which allow for the construction of nonasymptotic dimension-free confidence sets for the unknown matrices and operators. Given such confidence sets, a wide range of estimation and inferential procedures can be, and subsequently are, developed. For high-dimensional data, we propose a method to search a concentration-inequality-based confidence set using a binary search algorithm for the estimation of large sparse covariance matrices. Both sub-Gaussian and sub-exponential concentration inequalities are considered and applied both to simulated data and to a set of gene expression data from a study of small round blue-cell tumours. For infinite-dimensional data, also referred to as functional data, we use a celebrated result, Talagrand's concentration inequality, in the Banach space setting to construct confidence sets for covariance operators. From these confidence sets, three different inferential techniques emerge: the first is a k-sample test for equality of covariance operators; the second is a functional data classifier, which makes its decisions based on the covariance structure of the data; the third is a functional data clustering algorithm, which incorporates the concentration-inequality-based confidence sets into the framework of an expectation-maximization algorithm. These techniques are applied to simulated data and to speech samples from a set of spoken phoneme data. Lastly, we take a closer look at a key tool used in the construction of concentration-based confidence sets: Rademacher symmetrization. The symmetrization inequality, which arises in the probability-in-Banach-spaces literature, is shown to be connected with optimal transport theory, specifically the Wasserstein distance. This insight is used to improve the symmetrization inequality, resulting in tighter concentration bounds for the construction of nonasymptotic confidence sets. A variety of other applications are considered, including tests for data symmetry and tightened inequalities in Banach spaces. An R package for inference on covariance operators is briefly discussed in an appendix chapter.
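The Rademacher symmetrization inequality central to the last part has the following standard form in the Banach space setting (a textbook statement, given here for context):

$$
\mathbb{E} \left\| \sum_{i=1}^{n} \big( X_i - \mathbb{E} X_i \big) \right\| \;\le\; 2\, \mathbb{E} \left\| \sum_{i=1}^{n} \varepsilon_i X_i \right\|,
$$

where the $X_i$ are independent Banach-space-valued random variables and the $\varepsilon_i$ are i.i.d. Rademacher signs independent of the $X_i$; the thesis sharpens inequalities of this type via optimal transport.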
