1

On Generalized Measures Of Information With Maximum And Minimum Entropy Prescriptions

Dukkipati, Ambedkar 03 1900
Kullback-Leibler relative-entropy, or KL-entropy, of P with respect to R, defined as $\int_X \ln \frac{dP}{dR}\, dP$, where P and R are probability measures on a measurable space $(X, \mathfrak{M})$, plays a basic role in the definitions of classical information measures. It overcomes a shortcoming of Shannon entropy, whose discrete-case definition cannot be extended naturally to the nondiscrete case. Further, entropy and other classical information measures can be expressed in terms of KL-entropy, and hence the properties of their measure-theoretic analogs follow from those of measure-theoretic KL-entropy. An important theorem in this respect is the Gelfand-Yaglom-Perez (GYP) theorem, which equips KL-entropy with a fundamental definition and can be stated as: measure-theoretic KL-entropy equals the supremum of KL-entropies over all measurable partitions of X. In this thesis we provide the measure-theoretic formulations for 'generalized' information measures, and state and prove the corresponding GYP theorem, the 'generalizations' being in the sense of Rényi and of nonextensive (Tsallis) statistics, both of which are explained below. The Kolmogorov-Nagumo average, or quasilinear mean, of a vector $x = (x_1, \ldots, x_n)$ with respect to a pmf $p = (p_1, \ldots, p_n)$ is defined as $\langle x \rangle_\psi = \psi^{-1}\left( \sum_{k=1}^{n} p_k \, \psi(x_k) \right)$, where $\psi$ is an arbitrary continuous and strictly monotone function. Replacing the linear averaging in Shannon entropy with Kolmogorov-Nagumo averages (KN-averages), and further imposing the additivity constraint (a characteristic property of the underlying information associated with a single event, which is logarithmic), leads to the definition of $\alpha$-entropy or Rényi entropy. This is the first formal, well-known generalization of Shannon entropy. Using this recipe of Rényi's generalization, one can prepare only two information measures: Shannon entropy and Rényi entropy. Indeed, using this formalism, Rényi characterized these additive entropies in terms of axioms of KN-averages. On the other hand, if one generalizes the information of a single event in the definition of Shannon entropy by replacing the logarithm with the so-called q-logarithm, defined as $\ln_q x = \frac{x^{1-q} - 1}{1 - q}$, one gets what is known as Tsallis entropy. Tsallis entropy is also a generalization of Shannon entropy, but it does not satisfy the additivity property. Instead, it satisfies a pseudo-additivity of the form $x \oplus_q y = x + y + (1-q)xy$, and hence it is also known as nonextensive entropy. One can apply Rényi's recipe in the nonextensive case by replacing the linear averaging in Tsallis entropy with KN-averages and thereby imposing the constraint of pseudo-additivity. A natural question is: what are the various pseudo-additive information measures that can be prepared with this recipe? We prove that Tsallis entropy is the only one. We mention here one important characteristic of this generalized entropy: while the canonical distributions resulting from maximization of Shannon entropy are exponential in nature, in the Tsallis case they are power-law distributions. The concept of maximum entropy (ME), originally from physics, has been promoted to a general principle of inference primarily by the works of Jaynes and, later on, Kullback. It connects information theory to statistical mechanics via the principle that the states of thermodynamic equilibrium are states of maximum entropy, and connects further to statistical inference via the prescription to select the probability distribution that maximizes the entropy.
The two fundamental principles related to the concept of maximum entropy are Jaynes' maximum entropy principle, which involves maximizing Shannon entropy, and Kullback's minimum entropy principle, which involves minimizing relative-entropy, each with respect to appropriate moment constraints. Though relative-entropy is not a metric, in cases involving distributions resulting from relative-entropy minimization one can bring forth certain geometrical formulations. These are reminiscent of squared Euclidean distance and satisfy an analogue of Pythagoras' theorem. This property is referred to as the Pythagorean theorem of relative-entropy minimization, or the triangle equality, and plays a fundamental role in geometrical approaches to statistical estimation theory such as information geometry. In this thesis we state and prove the equivalent of the Pythagorean theorem in the nonextensive formalism. For this purpose we study relative-entropy minimization in detail and present some results. Finally, we demonstrate the use of power-law distributions, resulting from ME prescriptions of Tsallis entropy, in evolutionary algorithms. This work is motivated by the recently proposed generalized simulated annealing algorithm based on Tsallis statistics. To sum up, in light of their well-known axiomatic and operational justifications, this thesis establishes some results pertaining to the mathematical significance of generalized measures of information. We believe that these results represent an important contribution towards the ongoing research on understanding the phenomenon of information.
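To make the quantities discussed in this abstract concrete, the following minimal numerical sketch (illustrative only, not code from the thesis) evaluates Shannon, Rényi and Tsallis entropies for discrete distributions and checks additivity versus pseudo-additivity on a product of independent distributions; all names and parameter values are chosen purely for the example.

```python
# Illustrative sketch: Shannon, Renyi and Tsallis entropies of discrete
# distributions, and their behaviour on a product of independent distributions.
# Parameter values (alpha, q, the pmfs) are arbitrary choices for the example.
import numpy as np

def shannon(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def renyi(p, alpha):
    # Renyi entropy: log(sum p^alpha) / (1 - alpha), for alpha != 1
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

def tsallis(p, q):
    # Tsallis (nonextensive) entropy: (1 - sum p^q) / (q - 1), for q != 1
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

p = np.array([0.5, 0.3, 0.2])
r = np.array([0.6, 0.4])
pr = np.outer(p, r).ravel()   # joint pmf of two independent systems

alpha = q = 0.7
# Shannon and Renyi entropies are additive over independent systems ...
print(np.isclose(shannon(pr), shannon(p) + shannon(r)))
print(np.isclose(renyi(pr, alpha), renyi(p, alpha) + renyi(r, alpha)))
# ... while Tsallis entropy satisfies the pseudo-additivity
# S(pr) = S(p) + S(r) + (1 - q) * S(p) * S(r).
lhs = tsallis(pr, q)
rhs = tsallis(p, q) + tsallis(r, q) + (1 - q) * tsallis(p, q) * tsallis(r, q)
print(np.isclose(lhs, rhs))
```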
2

Contribution à la modélisation et à la simulation numérique multi-échelle du transport cinétique électronique dans un plasma chaud [Contribution to the multi-scale modelling and numerical simulation of electron kinetic transport in a hot plasma]

Mallet, Jessy 01 October 2012
In plasma physics, the transport of electrons can be described from a kinetic point of view or from a hydrodynamic point of view. Classically, in kinetic theory, a Fokker-Planck equation coupled with the Maxwell equations is used to describe the evolution of electrons in a collisional plasma. More precisely, the solution of the kinetic equation is a non-negative distribution function f specifying the density of particles as a function of particle velocity, time and position in space. In order to approximate the solution of such problems, many computational methods have been developed.
Here, a deterministic method is proposed in a planar geometry. This method is based on different high-order numerical schemes. Each deterministic scheme used presents many fundamental properties, such as conservation of the particle flux, preservation of the positivity of the distribution function and conservation of energy. However, the kinetic computation with this accurate method is too expensive to be used in practice, especially in multi-dimensional settings. To reduce the computational time, the plasma can be described by a hydrodynamic model. However, for the new high-energy targets, the kinetic effects are too important to neglect and to replace the kinetic computation by the usual macroscopic Euler models. That is why an alternative approach is proposed, considering an intermediate description between the fluid and kinetic levels. To describe the transport of electrons, the proposed reduced kinetic model M1 is based on a moment approach for the Maxwell-Fokker-Planck equations. This moment model integrates the electron distribution function over the propagation direction and retains only the particle energy as a kinetic variable. The velocity variable is written in spherical coordinates and the model is obtained by considering the system of moments with respect to the angular variable. The closure of the moment system is obtained under the assumption that the distribution function is a minimum-entropy function. This model is proved to satisfy fundamental properties such as the non-negativity of the distribution function, conservation laws for the collision operators and entropy dissipation. Moreover, an entropic discretization in the velocity variable is proposed at the semi-discrete level. Furthermore, the M1 model can be generalized to the MN model by considering N given moments. The N-moment model obtained also preserves fundamental properties such as conservation laws and entropy dissipation. The associated semi-discrete scheme is shown to preserve the conservation properties and entropy decay.
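As a small illustration of the moment approach described in this abstract (a sketch only, not the numerical schemes developed in the thesis), the following code computes the first two angular moments of an angular distribution at fixed energy and checks the realizability condition |f1| <= f0 that any admissible closure, including a minimum-entropy M1-type closure, must respect; the grid and the test distribution are assumptions made for the example.

```python
# Illustrative sketch: zeroth and first angular moments of a distribution
# f(mu) >= 0 over the direction cosine mu in [-1, 1], and the realizability
# check |f1| <= f0. The anisotropy parameter 'a' is an arbitrary choice.
import numpy as np

def integrate(y, x):
    # simple trapezoidal rule
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

mu = np.linspace(-1.0, 1.0, 2001)

a = 0.8                          # illustrative anisotropy; |a| <= 1 keeps f >= 0
f = 1.0 + a * mu                 # non-negative test angular distribution

f0 = integrate(f, mu)            # zeroth angular moment (density at this energy)
f1 = integrate(mu * f, mu)       # first angular moment (flux along the axis)
print(f0, f1, abs(f1) <= f0)     # realizability: |f1| <= f0
```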
3

Keller-Segel-type models and kinetic equations for interacting particles : long-time asymptotic analysis

Hoffmann, Franca Karoline Olga January 2017
This thesis consists of three parts: The first and second parts focus on long-time asymptotics of macroscopic and kinetic models respectively, while in the third part we connect these regimes using different scaling approaches.

(1) Keller–Segel-type aggregation-diffusion equations: We study a Keller–Segel-type model with non-linear power-law diffusion and non-local particle interaction: Does the system admit equilibria? If yes, are they unique? Which solutions converge to them? Can we determine an explicit rate of convergence? To answer these questions, we make use of the special gradient flow structure of the equation and its associated free energy functional, for which the overall convexity properties are not known. Special cases of this family of models have been investigated in previous works, and this part of the thesis represents a contribution towards a complete characterisation of the asymptotic behaviour of solutions.

(2) Hypocoercivity techniques for a fibre lay-down model: We show existence and uniqueness of a stationary state for a kinetic Fokker-Planck equation modelling the fibre lay-down process in non-woven textile production. Further, we prove convergence to equilibrium with an explicit rate. This part of the thesis is an extension of previous work which considered the case of a stationary conveyor belt. Adding the movement of the belt, the global equilibrium state is not known explicitly and a more general hypocoercivity estimate is needed. Although we focus here on a particular application, this approach can be used for any equation with a similar structure as long as it can be understood as a certain perturbation of a system for which the global Gibbs state is known.

(3) Scaling approaches for collective animal behaviour models: We study the multi-scale aspects of self-organised biological aggregations using various scaling techniques. Not many previous studies investigate how the dynamics of the initial models are preserved via these scalings. Firstly, we consider two scaling approaches (parabolic and grazing collision limits) that can be used to reduce a class of non-local kinetic 1D and 2D models to simpler models existing in the literature. Secondly, we investigate how some of the kinetic spatio-temporal patterns are preserved via these scalings using asymptotic preserving numerical methods.
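The gradient-flow viewpoint mentioned in part (1) can be illustrated with a short sketch (illustrative only, not code from the thesis): the evolution decreases a free energy combining a nonlinear power-law diffusion term and a non-local interaction term. The specific grid, diffusion exponent m and interaction kernel W below are assumptions chosen for the example.

```python
# Illustrative sketch: evaluate a Keller-Segel-type free energy
#   E[rho] = 1/(m-1) * int rho^m dx + 1/2 * int int W(x - y) rho(x) rho(y) dx dy
# on a 1D grid. The choices of m, W and the test density are arbitrary.
import numpy as np

x = np.linspace(-5.0, 5.0, 401)
dx = x[1] - x[0]

def free_energy(rho, m, W):
    if m == 1.0:
        # m = 1 corresponds to linear diffusion: entropy term int rho log rho
        diff = np.sum(rho * np.log(np.maximum(rho, 1e-300))) * dx
    else:
        diff = np.sum(rho ** m) * dx / (m - 1.0)
    # non-local interaction energy via a double sum over the grid
    inter = 0.5 * np.sum(W(x[:, None] - x[None, :]) * rho[:, None] * rho[None, :]) * dx * dx
    return diff + inter

rho = np.exp(-x ** 2)
rho /= np.sum(rho) * dx          # normalise to unit mass
W = lambda z: np.abs(z)          # an illustrative attractive interaction kernel
print(free_energy(rho, m=2.0, W=W))
```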
4

Minimization Problems Based On A Parametric Family Of Relative Entropies

Ashok Kumar, M 05 1900
We study minimization problems with respect to a one-parameter family of generalized relative entropies. These relative entropies, which we call relative α-entropies (denoted $I_\alpha(P,Q)$), arise as redundancies under mismatched compression when cumulants of compression lengths are considered instead of expected compression lengths. These parametric relative entropies are a generalization of the usual relative entropy (Kullback-Leibler divergence). Just like relative entropy, these relative α-entropies behave like squared Euclidean distance and satisfy the Pythagorean property. We explore the geometry underlying various statistical models and its relevance to information theory and to robust statistics. The thesis consists of three parts. In the first part, we study minimization of $I_\alpha(P,Q)$ as the first argument varies over a convex set E of probability distributions. We show the existence of a unique minimizer when the set E is closed in an appropriate topology. We then study minimization of $I_\alpha$ on a particular convex set, a linear family, which is one that arises from linear statistical constraints. This minimization problem generalizes the maximum Rényi or Tsallis entropy principle of statistical physics. The structure of the minimizing probability distribution naturally suggests a statistical model of power-law probability distributions, which we call an α-power-law family. Such a family is analogous to the exponential family that arises when relative entropy is minimized subject to the same linear statistical constraints. In the second part, we study minimization of $I_\alpha(P,Q)$ over the second argument. This minimization is generally over parametric families such as the exponential family or the α-power-law family, and is of interest in robust statistics (α > 1) and in constrained compression settings (α < 1). In the third part, we show an orthogonality relationship between the α-power-law family and an associated linear family. As a consequence, the minimization of $I_\alpha(P, \cdot)$, when the second argument comes from an α-power-law family, can be shown to be equivalent to a minimization of $I_\alpha(\cdot, R)$, for a suitable R, where the first argument comes from a linear family. The latter turns out to be a simpler problem of minimizing a quasi-convex objective function subject to linear constraints. Standard techniques are available to solve such problems, for example via a sequence of convex feasibility problems, or via a sequence of such problems on simpler single-constraint linear families.
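The abstract does not reproduce the explicit formula for the relative α-entropy, so the sketch below uses a form found in the related literature on this divergence; treat the formula, the distributions and the parameter values as assumptions made for illustration. The code checks two properties stated above: the quantity behaves like a divergence (non-negative here), and it reduces to the usual relative entropy (Kullback-Leibler divergence) as α approaches 1.

```python
# Hedged sketch: a relative alpha-entropy of two pmfs, using an assumed form
#   I_a(P,Q) = a/(1-a) log sum p q^(a-1) - 1/(1-a) log sum p^a + log sum q^a,
# and a numerical check that it approaches KL divergence as a -> 1.
import numpy as np

def kl(p, q):
    # usual relative entropy (KL divergence) of discrete pmfs
    return np.sum(p * np.log(p / q))

def relative_alpha_entropy(p, q, a):
    t1 = (a / (1.0 - a)) * np.log(np.sum(p * q ** (a - 1.0)))
    t2 = -(1.0 / (1.0 - a)) * np.log(np.sum(p ** a))
    t3 = np.log(np.sum(q ** a))
    return t1 + t2 + t3

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.4, 0.4, 0.2])

for a in (0.5, 0.99999, 2.0):
    print(a, relative_alpha_entropy(p, q, a))   # values should be >= 0
print("KL:", kl(p, q))                          # close to the a = 0.99999 value
```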
