1

Measure Theory of Self-Similar Groups and Digit Tiles

Kravchenko, Rostyslav 2010 December 1900 (has links)
This dissertation is devoted to the measure-theoretic aspects of the theory of automata and the groups they generate. It consists of two main parts. In the first part we study the action of automata on Bernoulli measures. We describe how a finite-state automorphism of a regular rooted tree changes the Bernoulli measure on the boundary of the tree. It turns out that a finite-state automorphism of polynomial growth, as defined by Sidki, preserves the measure class of a Bernoulli measure, and we write down an explicit formula for its Radon-Nikodym derivative. On the other hand, the image of the Bernoulli measure under the action of a strongly connected finite-state automorphism is singular with respect to the measure itself. The second part is devoted to the introduction of measures into the theory of limit spaces of Nekrashevych. Let G be a group and φ : H → G a contracting homomorphism from a subgroup H < G of finite index. Nekrashevych associated with the pair (G, φ) the limit dynamical system (J_G, s) and the limit G-space X_G, together with the covering ⋃_{g∈G} T · g by the tile T. We develop the theory of self-similar measures m on these limit spaces. It is shown that (J_G, s, m) is conjugate to the one-sided Bernoulli shift. Using sofic subshifts, we prove that the tile T has integer measure and give an algorithmic way to compute it. In addition, we give an algorithm to find the measure of the intersection of tiles T ∩ (T · g) for g ∈ G. We present applications to the evaluation of the Lebesgue measure of integral self-affine tiles. Previously, the main tools in the theory of self-similar fractals came from measure theory and analysis. The methods developed in this dissertation provide a new way to investigate self-similar and self-affine fractals, using combinatorics and group theory.
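
For orientation, the following is a hedged sketch of the standard background behind the first part of this abstract, not the dissertation's own formulas: a Bernoulli measure on the boundary of the binary rooted tree, defined on cylinder sets, and the Radon-Nikodym comparison of its image under a tree automorphism.

% Bernoulli measure \mu_p on the boundary of the binary rooted tree,
% given on cylinder sets of finite words (standard background definition,
% with p weighting the letter 0 and 1-p weighting the letter 1):
\mu_p\bigl([x_1 x_2 \dots x_n]\bigr) \;=\; \prod_{i=1}^{n} p^{\,1-x_i}(1-p)^{\,x_i},
\qquad x_i \in \{0,1\}, \quad 0 < p < 1 .
% A tree automorphism g pushes \mu_p forward to g_*\mu_p; the two measures lie
% in the same measure class exactly when the Radon--Nikodym derivative
% \frac{d(g_*\mu_p)}{d\mu_p} exists and is positive \mu_p-almost everywhere.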
2

Some Problems in Multivariable Operator Theory

Sarkar, Santanu January 2014 (has links) (PDF)
In this thesis we have investigated two different types of problems in multivariable operator theory. The first one deals with the defect sequence for contractive tuples and maximal contractive tuples. The second one deals with the wandering subspaces of the Bergman space and the Dirichlet space over the polydisc. These are described in the following two sections. (I) The Defect Sequence for Contractive Tuples. Let T = (T1, ..., Td) be a d-tuple of bounded linear operators on some Hilbert space H. We say that T is a row contraction, or a contractive tuple, if the row operator is a contraction (please refer to the abstract PDF file for the precise statement).
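
Since the defining condition is truncated in this record, here is the usual definition of a contractive tuple for reference; it is the standard convention, not necessarily the exact display in the thesis.

% Standard definition of a row contraction / contractive tuple
% (assumption: this matches the truncated condition referred to above).
T = (T_1, \dots, T_d) \text{ is a contractive tuple if the row operator }
\bigl[\, T_1 \;\; T_2 \;\; \cdots \;\; T_d \,\bigr] \colon \mathcal{H}^{d} \to \mathcal{H}
\text{ is a contraction, equivalently } \sum_{i=1}^{d} T_i T_i^{*} \;\le\; I_{\mathcal{H}} .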
3

A Continuous, Nowhere-Differentiable Function with a Dense Set of Proper Local Extrema

Huggins, Mark C. (Mark Christopher) 12 1900 (has links)
In this paper, we use the following scheme to construct a continuous, nowhere-differentiable function 𝑓 which is the uniform limit of a sequence of sawtooth functions 𝑓ₙ : [0, 1] → [0, 1] with increasingly sharp teeth. Let 𝑋 = [0, 1] × [0, 1] and let 𝐹(𝑋) be the Hausdorff metric space determined by 𝑋. We define contraction maps 𝑤₁, 𝑤₂, 𝑤₃ on 𝑋. These maps define a contraction map 𝑤 on 𝐹(𝑋) via 𝑤(𝐴) = 𝑤₁(𝐴) ⋃ 𝑤₂(𝐴) ⋃ 𝑤₃(𝐴). The iteration under 𝑤 of the diagonal in 𝑋 defines a sequence of graphs of continuous functions 𝑓ₙ. Since 𝑤 is a contraction map on the compact metric space 𝐹(𝑋), it has a unique fixed point. Hence these iterations converge to the fixed point, which turns out to be the graph of our continuous, nowhere-differentiable function 𝑓. Chapter 2 contains the background we will need to carry out this construction. Chapter 3 includes two results that follow from the Baire Category Theorem. The first is the well-known fact that the set of continuous, nowhere-differentiable functions on [0, 1] is a residual set in 𝐶[0, 1]. The second is that the set of continuous functions on [0, 1] which have a dense set of proper local extrema is residual in 𝐶[0, 1]. In the fourth and last chapter we construct our function and prove that it is continuous, nowhere-differentiable, and has a dense set of proper local extrema. Lastly, we iterate the set {(0, 0), (1, 1)} under 𝑤 and plot its points. Any terms not defined in Chapters 2 through 4 may be found in [2, 4]; the same applies to basic properties of metric spaces which have not been explicitly stated. Throughout, we let 𝒩 and 𝕽 denote the natural numbers and the real numbers, respectively.
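
A minimal, runnable sketch of the iteration scheme described above. The specific contraction maps 𝑤₁, 𝑤₂, 𝑤₃ used in the thesis are not given in this record; the affine maps below are a hypothetical fractal-interpolation choice (through the points (0,0), (1/3,1), (2/3,0), (1,1)), which also produces sawtooth-like graphs with increasingly sharp teeth and is contractive in a suitable metric on the space of graphs.

# Illustrative sketch of the iterated-function-system scheme in the abstract.
# The three affine maps below are NOT the thesis's w1, w2, w3; they are an
# assumed fractal-interpolation choice used only to make the example concrete.
import numpy as np
import matplotlib.pyplot as plt

D = 0.5                          # vertical scaling factor, |D| < 1
XS = [0.0, 1/3, 2/3, 1.0]        # interpolation abscissae
YS = [0.0, 1.0, 0.0, 1.0]        # interpolation ordinates

def make_map(i):
    """Affine map w_i sending (0, YS[0]) to (XS[i-1], YS[i-1]) and (1, YS[3]) to (XS[i], YS[i])."""
    a, e = XS[i] - XS[i - 1], XS[i - 1]        # x-part: x -> a*x + e
    f = YS[i - 1] - D * YS[0]                  # y-part: y -> c*x + D*y + f
    c = YS[i] - D * YS[3] - f
    return lambda p: np.column_stack((a * p[:, 0] + e,
                                      c * p[:, 0] + D * p[:, 1] + f))

W = [make_map(i) for i in (1, 2, 3)]

def w(A):
    """Set map w(A) = w1(A) U w2(A) U w3(A) acting on a finite point set A."""
    return np.vstack([wi(A) for wi in W])

# Iterate the endpoints of the diagonal, as in the abstract's final chapter.
A = np.array([[0.0, 0.0], [1.0, 1.0]])
for _ in range(8):
    A = w(A)

A = A[np.argsort(A[:, 0])]       # sort by x so the points trace a graph
plt.plot(A[:, 0], A[:, 1], linewidth=0.5)
plt.title("Iterates of {(0,0), (1,1)} under w (illustrative maps)")
plt.show()

Each application of w triples the number of points, so eight iterations of the two-point set {(0,0), (1,1)} already trace the limiting sawtooth-like graph closely.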
4

Approximation von Fixpunkten streng pseudokontraktiver Operatoren

Bethke, Matthias 19 January 2021 (has links)
This article gives a generalization of an approximation theorem of Chidume /6/ for strictly pseudocontractive operators in Lp and lp spaces (with p = 2). We consider an iteration scheme that was introduced by Mann /14/ for real functions.
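
For reference, the Mann iteration scheme mentioned above takes the following standard form; the exact step-size conditions imposed in the article are not reproduced here.

% Standard Mann iteration: starting point x_0 and a step-size sequence
% (\alpha_n) \subset (0,1),
x_{n+1} \;=\; (1 - \alpha_n)\, x_n + \alpha_n T x_n , \qquad n = 0, 1, 2, \dots
% Under suitable conditions on (\alpha_n) and on the strictly pseudocontractive
% operator T, the sequence (x_n) converges to a fixed point of T.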
5

Algorithmes d'apprentissage pour la recommandation

Bisson, Valentin 09 1900 (has links)
The digital age we have entered brings a large number of new challenges across many fields. Automatically processing the abundant information at our disposal is one of them, and this thesis focuses on methods and techniques for filtering and recommending to users items suited to their tastes, in the particular and largely unprecedented context of an online multi-player video game. Our objective is to predict players' ratings of the game's levels. Using modern machine learning algorithms such as deep neural networks with unsupervised pre-training, which we describe after an introduction to the concepts needed to understand them, we propose two architectures with different characteristics, both based on deep learning. The first is a multi-layer neural network, for which we try to explain the varying performance we report across experiments with different depths, training heuristics, and unsupervised pre-training methods, namely plain, denoising, and contractive auto-encoders. The second architecture is inspired by energy-based models; we likewise offer an explanation of its results, which also vary. Finally, we describe a first successful attempt to improve this second architecture by supervised fine-tuning following the pre-training, and a second attempt in which this fine-tuning uses a semi-supervised, multi-task training criterion. Our experiments show promising performance, especially for the architecture inspired by energy-based models, justifying at least the use of deep learning algorithms to solve the recommendation problem.
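
For reference, hedged textbook forms of the three unsupervised pre-training objectives named above (plain, denoising and contractive auto-encoders); the encoder f, decoder g, corruption process q and weight λ are generic placeholders, not the thesis's exact configuration.

% Textbook auto-encoder pre-training objectives (illustrative, per input x):
\mathcal{L}_{\text{AE}}(x)  \;=\; \bigl\| x - g(f(x)) \bigr\|^2
\\
\mathcal{L}_{\text{DAE}}(x) \;=\; \mathbb{E}_{\tilde{x} \sim q(\tilde{x}\mid x)}
    \bigl\| x - g(f(\tilde{x})) \bigr\|^2
\\
\mathcal{L}_{\text{CAE}}(x) \;=\; \bigl\| x - g(f(x)) \bigr\|^2
    \;+\; \lambda \,\bigl\| J_f(x) \bigr\|_F^2
% f: encoder, g: decoder, q: corruption process (e.g. masking noise),
% J_f(x): Jacobian of the encoder at x, \lambda: contraction penalty weight.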