  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

A fragmentation model for sprays and L² stability estimates for shock solutions of scalar conservation laws using the relative entropy method

Leger, Nicholas Matthew 11 October 2010
We present a mathematical study of two conservative systems in fluid mechanics. First, we study a fragmentation model for sprays. The model takes into account the break-up of spray droplets due to drag forces. In particular, we establish the existence of global weak solutions to a system of incompressible Navier-Stokes equations coupled with a Boltzmann-like kinetic equation. We assume the particles initially have bounded radii and bounded velocities relative to the gas, and we show that those bounds remain as the system evolves. One interesting feature of the model is the apparent accumulation of particles with arbitrarily small radii. As a result, there can be no nontrivial hydrodynamical equilibrium for this system. Next, with an interest in understanding hydrodynamical limits in discontinuous regimes, we study a classical model for shock waves. Specifically, we consider scalar nonviscous conservation laws with strictly convex flux in one spatial dimension, and we investigate the behavior of bounded L² perturbations of shock wave solutions to the Riemann problem using the relative entropy method. We show that up to a time-dependent translation of the shock, the L² norm of a perturbed solution relative to the shock wave is bounded above by the L² norm of the initial perturbation. Finally, we include some preliminary relative entropy estimates which are suitable for a study of shock wave solutions to n x n systems of conservation laws having a convex entropy.
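In the scalar setting above, for the quadratic entropy η(u) = u²/2 the relative entropy η(u|v) = η(u) − η(v) − η′(v)(u − v) reduces to ½(u − v)², which is why the stability estimates are naturally L² ones. A minimal sketch (the entropy choice and values are illustrative, not from the thesis):

```python
import numpy as np

def relative_entropy(eta, deta, u, v):
    # Relative entropy eta(u|v) = eta(u) - eta(v) - eta'(v) * (u - v):
    # the quadratic remainder of eta at v; nonnegative for convex eta
    return eta(u) - eta(v) - deta(v) * (u - v)

eta = lambda u: 0.5 * u**2    # the canonical convex entropy
deta = lambda u: u            # its derivative

u = np.linspace(-2.0, 2.0, 5)  # sample perturbed states
v = 0.3                        # reference (e.g. one side of a shock)
```

For this entropy, `relative_entropy` agrees pointwise with half the squared distance, so integrating it over space recovers a weighted L² norm of the perturbation.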
2

Constrained relative entropy minimization with applications to multitask learning

Koyejo, Oluwasanmi Oluseye 15 July 2013
This dissertation addresses probabilistic inference via relative entropy minimization subject to expectation constraints. A canonical representation of the solution is determined without the requirement for convexity of the constraint set, and is given by members of an exponential family. The use of conjugate priors for relative entropy minimization is proposed, and a class of conjugate prior distributions is introduced. An alternative representation of the solution is provided as members of the prior family when the prior distribution is conjugate. It is shown that the solutions can be found by direct optimization with respect to members of such parametric families. Constrained Bayesian inference is recovered as a special case with a specific choice of constraints induced by observed data. The framework is applied to the development of novel probabilistic models for multitask learning subject to constraints determined by domain expertise. First, a model is developed for multitask learning that jointly learns a low rank weight matrix and the prior covariance structure between different tasks. The multitask learning approach is extended to a class of nonparametric statistical models for transposable data, incorporating side information such as graphs that describe inter-row and inter-column similarity. The resulting model combines a matrix-variate Gaussian process prior with inference subject to nuclear norm expectation constraints. In addition, a novel nonparametric model is proposed for multitask bipartite ranking. The proposed model combines a hierarchical matrix-variate Gaussian process prior with inference subject to ordering constraints and nuclear norm constraints, and is applied to disease gene prioritization. In many of these applications, the solution is found to be unique. Experimental results show substantial performance improvements as compared to strong baseline models.
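The exponential-family form of the solution described above can be sketched numerically for a single expectation constraint: the minimizer of relative entropy to a prior q subject to E_p[f] = c is an exponential tilting of q. The die example and all names below are illustrative assumptions, not from the dissertation:

```python
import numpy as np

def min_relative_entropy(q, f, c, lo=-50.0, hi=50.0, iters=200):
    """I-projection of the prior q onto the linear family {p : E_p[f] = c}.
    The minimizer is an exponential tilting q_i * exp(lam * f_i) / Z of the
    prior, i.e. a member of an exponential family; the tilt lam is found by
    bisection on the monotone moment condition E_{p_lam}[f] = c."""
    q, f = np.asarray(q, float), np.asarray(f, float)

    def tilt(lam):
        t = lam * f
        w = q * np.exp(t - t.max())   # shift the exponent for stability
        return w / w.sum()

    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if tilt(mid) @ f < c else (lo, mid)
    return tilt(0.5 * (lo + hi))

# Jaynes's classic die example: uniform prior over six faces,
# constrain the mean spot value to 4.5
p = min_relative_entropy(np.ones(6) / 6.0, np.arange(1, 7), 4.5)
```

The resulting pmf increases geometrically across the faces, the hallmark of the exponential-family solution.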
3

On Generalized Measures Of Information With Maximum And Minimum Entropy Prescriptions

Dukkipati, Ambedkar 03 1900
Kullback-Leibler relative-entropy or KL-entropy of P with respect to R, defined as ∫_X ln(dP/dR) dP, where P and R are probability measures on a measurable space X, plays a basic role in the definitions of classical information measures. It overcomes a shortcoming of Shannon entropy, whose discrete-case definition cannot be extended to the nondiscrete case naturally. Further, entropy and other classical information measures can be expressed in terms of KL-entropy, and hence properties of their measure-theoretic analogs follow from those of measure-theoretic KL-entropy. An important theorem in this respect is the Gelfand-Yaglom-Perez (GYP) theorem, which equips KL-entropy with a fundamental definition and can be stated as: measure-theoretic KL-entropy equals the supremum of KL-entropies over all measurable partitions of X. In this thesis we provide the measure-theoretic formulations for 'generalized' information measures, and state and prove the corresponding GYP-theorem, the 'generalizations' being in the sense of Rényi and nonextensive, both of which are explained below. The Kolmogorov-Nagumo average or quasilinear mean of a vector x = (x_1, ..., x_n) with respect to a pmf p = (p_1, ..., p_n) is defined as ⟨x⟩_ψ = ψ⁻¹(Σ_{k=1}^n p_k ψ(x_k)), where ψ is an arbitrary continuous and strictly monotone function. Replacing the linear averaging in Shannon entropy with Kolmogorov-Nagumo averages (KN-averages), and further imposing the additivity constraint, a characteristic property of the underlying information associated with a single event, which is logarithmic, leads to the definition of α-entropy or Rényi entropy. This is the first formal well-known generalization of Shannon entropy. Using this recipe of Rényi's generalization, one can prepare only two information measures: Shannon and Rényi entropy. Indeed, using this formalism Rényi characterized these additive entropies in terms of axioms of KN-averages.
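Rényi's recipe described above can be checked numerically: taking the Kolmogorov-Nagumo average of the surprisal −ln p_k under ψ(x) = exp((1 − α)x) recovers the Rényi entropy exactly, and the α → 1 limit recovers Shannon entropy. A minimal sketch (the pmf is an arbitrary example):

```python
import numpy as np

def shannon(p):
    p = np.asarray(p, float)
    return float(-np.sum(p * np.log(p)))

def renyi(p, alpha):
    # Renyi entropy H_alpha(p) = log(sum_k p_k**alpha) / (1 - alpha)
    p = np.asarray(p, float)
    return float(np.log(np.sum(p ** alpha)) / (1.0 - alpha))

def renyi_via_kn(p, alpha):
    # Kolmogorov-Nagumo average of the surprisal -log(p_k) under
    # psi(x) = exp((1 - alpha) * x); recovers the Renyi entropy
    p = np.asarray(p, float)
    psi = lambda x: np.exp((1.0 - alpha) * x)
    psi_inv = lambda y: np.log(y) / (1.0 - alpha)
    return float(psi_inv(np.sum(p * psi(-np.log(p)))))

p = np.array([0.5, 0.25, 0.125, 0.125])   # an arbitrary example pmf
```

Both routes give the same value for every α ≠ 1, which is the content of the KN-average characterization.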
On the other hand, if one generalizes the information of a single event in the definition of Shannon entropy by replacing the logarithm with the so-called q-logarithm, defined as ln_q x = (x^(1−q) − 1)/(1 − q), one gets what is known as Tsallis entropy. Tsallis entropy is also a generalization of Shannon entropy, but it does not satisfy the additivity property. Instead, it satisfies pseudo-additivity of the form x ⊕_q y = x + y + (1 − q)xy, and hence it is also known as nonextensive entropy. One can apply Rényi's recipe in the nonextensive case by replacing the linear averaging in Tsallis entropy with KN-averages and thereby imposing the constraint of pseudo-additivity. A natural question that arises is: what are the various pseudo-additive information measures that can be prepared with this recipe? We prove that Tsallis entropy is the only one. Here, we mention that one important characteristic of this generalized entropy is that while canonical distributions resulting from maximization of Shannon entropy are exponential in nature, in the Tsallis case they result in power-law distributions. The concept of maximum entropy (ME), originally from physics, has been promoted to a general principle of inference primarily by the works of Jaynes and, later on, Kullback. This connects information theory and statistical mechanics via the principle that the states of thermodynamic equilibrium are states of maximum entropy, and further connects to statistical inference via the rule: select the probability distribution that maximizes the entropy. The two fundamental principles related to the concept of maximum entropy are the Jaynes maximum entropy principle, which involves maximizing Shannon entropy, and the Kullback minimum entropy principle, which involves minimizing relative-entropy, with respect to appropriate moment constraints.
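The q-logarithm, Tsallis entropy, and the pseudo-additivity rule ⊕_q can be illustrated directly: for independent systems, S_q of the product distribution equals S_q(A) ⊕_q S_q(B). A small sketch (the distributions are arbitrary examples):

```python
import numpy as np

def q_log(x, q):
    # q-logarithm: ln_q(x) = (x**(1 - q) - 1) / (1 - q); recovers ln(x) as q -> 1
    if q == 1.0:
        return np.log(x)
    return (x ** (1.0 - q) - 1.0) / (1.0 - q)

def tsallis(p, q):
    # Tsallis entropy S_q(p) = sum_k p_k * ln_q(1 / p_k)
    p = np.asarray(p, float)
    return float(np.sum(p * q_log(1.0 / p, q)))

def pseudo_add(x, y, q):
    # the pseudo-addition x (+)_q y = x + y + (1 - q) * x * y
    return x + y + (1.0 - q) * x * y

# two independent systems and their joint distribution
a, b = np.array([0.6, 0.4]), np.array([0.7, 0.2, 0.1])
joint = np.outer(a, b).ravel()
```

For q = 1 the pseudo-addition reduces to ordinary addition and S_q to Shannon entropy, recovering the additive case.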
Though relative-entropy is not a metric, in cases involving distributions resulting from relative-entropy minimization one can bring forth certain geometrical formulations. These are reminiscent of squared Euclidean distance and satisfy an analogue of Pythagoras' theorem. This property is referred to as the Pythagoras' theorem of relative-entropy minimization, or triangle equality, and plays a fundamental role in geometrical approaches to statistical estimation theory such as information geometry. In this thesis we state and prove the equivalent of Pythagoras' theorem in the nonextensive formalism. For this purpose we study relative-entropy minimization in detail and present some results. Finally, we demonstrate the use of power-law distributions, resulting from ME prescriptions of Tsallis entropy, in evolutionary algorithms. This work is motivated by the recently proposed generalized simulated annealing algorithm based on Tsallis statistics. To sum up, in light of their well-known axiomatic and operational justifications, this thesis establishes some results pertaining to the mathematical significance of generalized measures of information. We believe that these results represent an important contribution towards the ongoing research on understanding the phenomenon of information.
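The triangle equality described above can be observed numerically in the classical case: if Q* is the I-projection of R onto a linear family, then D(P‖R) = D(P‖Q*) + D(Q*‖R) for every P in that family, exactly as for squared Euclidean distance and orthogonal projection. A minimal sketch with assumed toy data (not from the thesis):

```python
import numpy as np

def kl(p, q):
    # Kullback-Leibler divergence D(p || q) for strictly positive pmfs
    return float(np.sum(p * np.log(p / q)))

def i_project(r, f, c, lo=-50.0, hi=50.0):
    # I-projection of r onto {p : E_p[f] = c}: an exponential tilt of r,
    # with the tilt found by bisection on the monotone moment condition
    r, f = np.asarray(r, float), np.asarray(f, float)

    def tilt(lam):
        t = lam * f
        w = r * np.exp(t - t.max())   # shifted exponent for stability
        return w / w.sum()

    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if tilt(mid) @ f < c else (lo, mid)
    return tilt(0.5 * (lo + hi))

f, c = np.arange(4.0), 1.2
r = np.array([0.4, 0.3, 0.2, 0.1])                    # measure being projected
q_star = i_project(r, f, c)                           # its I-projection
p = i_project(np.array([0.1, 0.2, 0.3, 0.4]), f, c)   # another family member
```

The thesis proves the analogous identity in the nonextensive formalism; the sketch only illustrates the classical (q = 1) prototype.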
4

Information Theoretical Measures for Achieving Robust Learning Machines

Zegers, Pablo, Frieden, B., Alarcón, Carlos, Fuentes, Alexis 12 August 2016
Information theoretical measures are used to design, from first principles, an objective function that can drive a learning machine process to a solution that is robust to perturbations in parameters. Full analytic derivations are given and tested with computational examples showing that indeed the procedure is successful. The final solution, implemented by a robust learning machine, expresses a balance between Shannon differential entropy and Fisher information. This is also surprising in being an analytical relation, given the purely numerical operations of the learning machine.
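The two quantities balanced in the final solution can be computed numerically for a simple density. A sketch for a Gaussian, where the closed forms h = ½ ln(2πeσ²) and I = 1/σ² are available to check against; this illustrates the quantities involved, not the paper's objective function:

```python
import numpy as np

# simple trapezoidal rule (avoids version differences around np.trapz)
trapz = lambda y, x: float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

sigma = 1.5
x = np.linspace(-12.0, 12.0, 40001)
p = np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

# Shannon differential entropy h = -integral of p * ln(p)
h = trapz(-p * np.log(p), x)

# Fisher information (of location) I = integral of (p')^2 / p
dp = np.gradient(p, x)
I = trapz(dp**2 / p, x)
```

For the Gaussian these two quantities move in opposite directions as σ varies, which is the kind of trade-off the paper's objective balances.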
5

Ensemble Filtering Methods for Nonlinear Dynamics

Kim, Sangil January 2005
The standard ensemble filtering schemes, such as the Ensemble Kalman Filter (EnKF) and Sequential Monte Carlo (SMC), do not properly represent states of low prior probability when the number of samples is too small and the dynamical system is high dimensional with highly non-Gaussian statistics. For example, when the standard ensemble methods are applied to two well-known simple but highly nonlinear systems, a one-dimensional stochastic diffusion process in a double-well potential and the well-known three-dimensional chaotic dynamical system of Lorenz, they produce erroneous results in tracking transitions of the systems from one state to the other. In this dissertation, a set of new parametric resampling methods is introduced to overcome this problem. The new filtering methods are motivated by a general H-theorem for the relative entropy of Markov stochastic processes. The entropy-based filters first approximate the prior distribution of a given system by a mixture of Gaussians, whose components represent different regions of the system. Then the parameters of each Gaussian, i.e., weight, mean and covariance, are determined sequentially as new measurements become available. These alternative filters yield a natural generalization of the EnKF method to systems with highly non-Gaussian statistics, reducing to the standard EnKF when the mixture model consists of one single Gaussian and measurements are taken on full states. In addition, the new filtering methods give the quantities of relative entropy and log-likelihood as by-products with no extra cost. We examine the potential usage and qualitative behaviors of the relative entropy and log-likelihood for the new filters. Results for EnKF and SMC are also included. We present results of the new methods applied to the above two ordinary differential equations and one partial differential equation, with comparisons to the standard filters, EnKF and SMC.
These results show that the entropy-based filters correctly track the transitions between likely states in both highly nonlinear systems even with small sample size N=100.
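One measurement-update step of a Gaussian-mixture filter of the general kind described above can be sketched in one dimension: each component receives a Kalman update, and the weights are rescaled by the component likelihoods. This is a minimal Gaussian-sum sketch under assumed numbers, not the dissertation's exact resampling scheme:

```python
import numpy as np

def mixture_update(w, m, P, y, R):
    """One measurement update for the scalar model y = x + v, v ~ N(0, R),
    when the prior is a Gaussian mixture with weights w, means m and
    variances P.  Each component gets a standard Kalman update; the
    weights are rescaled by the component likelihoods of y."""
    w, m, P = (np.asarray(v_, float) for v_ in (w, m, P))
    S = P + R                          # innovation variances
    K = P / S                          # Kalman gains
    m_new = m + K * (y - m)
    P_new = (1.0 - K) * P
    lik = np.exp(-0.5 * (y - m) ** 2 / S) / np.sqrt(2.0 * np.pi * S)
    w_new = w * lik
    return w_new / w_new.sum(), m_new, P_new

# bimodal prior, as for a particle in a double-well potential;
# a measurement near +1 should shift nearly all weight to that well
w, m, P = mixture_update([0.5, 0.5], [-1.0, 1.0], [0.3, 0.3], y=0.9, R=0.2)
```

With a single component the weight rescaling is trivial and the update is exactly the scalar Kalman filter, matching the reduction property stated in the abstract.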
6

Contributions to the theory of Gaussian Measures and Processes with Applications

Zachary A Selk (12474759) 28 April 2022
<p>This thesis studies infinite dimensional Gaussian measures on Banach spaces. Let $\mu_0$ be a centered Gaussian measure on a Banach space $\mathcal B$, and let $\mu^\ast$ be a measure equivalent to $\mu_0$. We are interested in approximating $\mu^\ast$, in the sense of relative entropy (or KL divergence), by a mean shift measure $\mu^z$ of $\mu_0$, where $z$ is an element of the so-called ``Cameron-Martin" space $\mathcal H_{\mu_0}$. That is, we want to find the information projection</p> <p><br></p> <p>$$\inf_{z\in \mathcal H_{\mu_0}} D_{KL}(\mu^z||\mu^\ast)=\inf_{z\in \mathcal H_{\mu_0}} E_{\mu^z} \left(\log \left(\frac{d\mu^z}{d\mu^\ast}\right)\right).$$</p> <p><br></p> <p>We relate this information projection to a mode computation, to an ``open loop" control problem, and to a variational formulation leading to an Euler-Lagrange equation. Furthermore, we use this relationship to establish a kind of Feynman-Kac theorem for systems of ordinary differential equations. We demonstrate that the solution to a system of second order linear ordinary differential equations is the mode of a diffusion, analogous to the result of Feynman-Kac for parabolic partial differential equations. </p>
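A finite-dimensional analogue of the mean-shift relative entropy is easy to verify: for $\mu^z = N(z, \Sigma)$ and $\mu_0 = N(0, \Sigma)$, $D_{KL}(\mu^z\|\mu_0) = \tfrac{1}{2} z^\top \Sigma^{-1} z$, half the squared Cameron-Martin norm of the shift. A sketch with assumed toy values (the infinite-dimensional setting of the thesis is of course richer):

```python
import numpy as np

def kl_mean_shift(z, Sigma):
    # KL( N(z, Sigma) || N(0, Sigma) ) = 0.5 * z^T Sigma^{-1} z:
    # the covariance terms cancel and only the mean shift contributes,
    # i.e. half the squared (finite-dimensional) Cameron-Martin norm of z
    z = np.asarray(z, float)
    return 0.5 * float(z @ np.linalg.solve(Sigma, z))

Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])
z = np.array([1.0, -1.0])
```

Minimizing this quadratic over $z$ in a subspace is then an ordinary least-squares projection, the finite-dimensional shadow of the information projection above.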
7

Minimization Problems Based On A Parametric Family Of Relative Entropies

Ashok Kumar, M 05 1900
We study minimization problems with respect to a one-parameter family of generalized relative entropies. These relative entropies, which we call relative α-entropies (denoted I_α(P;Q)), arise as redundancies under mismatched compression when cumulants of compression lengths are considered instead of expected compression lengths. These parametric relative entropies are a generalization of the usual relative entropy (Kullback-Leibler divergence). Just like relative entropy, these relative α-entropies behave like squared Euclidean distance and satisfy the Pythagorean property. We explore the geometry underlying various statistical models and its relevance to information theory and to robust statistics. The thesis consists of three parts. In the first part, we study minimization of I_α(P;Q) as the first argument varies over a convex set E of probability distributions. We show the existence of a unique minimizer when the set E is closed in an appropriate topology. We then study minimization of I_α on a particular convex set, a linear family, which is one that arises from linear statistical constraints. This minimization problem generalizes the maximum Rényi or Tsallis entropy principle of statistical physics. The structure of the minimizing probability distribution naturally suggests a statistical model of power-law probability distributions, which we call an α-power-law family. Such a family is analogous to the exponential family that arises when relative entropy is minimized subject to the same linear statistical constraints. In the second part, we study minimization of I_α(P;Q) over the second argument. This minimization is generally over parametric families such as the exponential family or the α-power-law family, and is of interest in robust statistics (α > 1) and in constrained compression settings (α < 1). In the third part, we show an orthogonality relationship between the α-power-law family and an associated linear family.
As a consequence of this, the minimization of I_α(P; ·), when the second argument comes from an α-power-law family, can be shown to be equivalent to a minimization of I_α(· ; R), for a suitable R, where the first argument comes from a linear family. The latter turns out to be a simpler problem of minimization of a quasi-convex objective function subject to linear constraints. Standard techniques are available to solve such problems, for example via a sequence of convex feasibility problems, or via a sequence of such problems on simpler single-constraint linear families.
8

Solution Of Inverse Electrocardiography Problem Using Minimum Relative Entropy Method

Bircan, Ali 01 October 2010
The interpretation of the heart's electrical activity is very important in clinical medicine, since contraction of cardiac muscles is initiated by the electrical activity of the heart. The electrocardiogram (ECG) is a diagnostic tool that measures and records the electrical activity of the heart. The conventional 12-lead ECG is a clinical tool that provides information about the heart's status. However, it carries limited information about the functionality of the heart due to the limited number of recordings. A better alternative approach for understanding cardiac electrical activity is the incorporation of body surface potential measurements with torso geometry and the estimation of the equivalent cardiac sources. The problem of estimating the cardiac sources from the torso potentials and the body geometry is called the inverse problem of electrocardiography. The aim of this thesis is to reconstruct accurate high resolution maps of epicardial potentials representing the electrical activity of the heart from the body surface measurements. However, accurate estimation of the epicardial potentials is not an easy problem due to the ill-posed nature of the inverse problem. In this thesis, the linear inverse ECG problem is solved using different optimization techniques such as Conic Quadratic Programming, multiple constrained convex optimization, Linearly Constrained Tikhonov Regularization and the Minimum Relative Entropy (MRE) method. The prior information used in the MRE method consists of the lower and upper bounds of the epicardial potentials and a prior expected value of the epicardial potentials. The results are compared with Tikhonov regularization and with the true potentials.
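Of the techniques listed, plain Tikhonov regularization is the simplest to sketch for an ill-posed linear problem of this shape. The forward model below is a synthetic stand-in, not ECG data, and the sketch is not the thesis's linearly constrained formulation:

```python
import numpy as np

def tikhonov(A, b, lam):
    """Tikhonov-regularized solution of an ill-posed A x = b:
    argmin ||A x - b||^2 + lam^2 ||x||^2, computed via the normal
    equations (A^T A + lam^2 I) x = A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)

# synthetic ill-conditioned forward model with rapidly decaying spectrum
rng = np.random.default_rng(0)
U = np.linalg.qr(rng.standard_normal((30, 30)))[0]
V = np.linalg.qr(rng.standard_normal((10, 10)))[0]
s = 10.0 ** -np.arange(10)          # singular values 1, 0.1, ..., 1e-9
A = (U[:, :10] * s) @ V.T
x_true = rng.standard_normal(10)
b = A @ x_true + 1e-6 * U[:, 9]     # small noise along the worst direction

x_naive = np.linalg.lstsq(A, b, rcond=None)[0]   # noise amplified by ~1e3
x_reg = tikhonov(A, b, lam=1e-4)                 # damped, stable
```

The unregularized solution divides the noise by the smallest singular values and is destroyed; the regularized one trades a small bias for stability, which is the basic phenomenon all the listed methods address.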
9

Class degree and measures of relative maximal entropy

Allahbakhshi, Mahsa 16 March 2011
Given a factor code π from a shift of finite type X onto an irreducible sofic shift Y, and a fully supported ergodic measure ν on Y, we give an explicit upper bound on the number of ergodic measures on X which project to ν and have maximal entropy among all measures in the fiber π⁻¹{ν}. This bound is invariant under conjugacy. We relate this to an important construction for finite-to-one symbolic factor maps.
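As background to shifts of finite type (and not the thesis's class-degree construction), the topological entropy of an irreducible SFT is the log of the Perron eigenvalue of its transition matrix; a minimal sketch with the standard golden-mean example:

```python
import numpy as np

def sft_entropy(adj):
    # topological entropy of an irreducible shift of finite type:
    # the log of the Perron eigenvalue of its 0-1 transition matrix
    adj = np.asarray(adj, float)
    return float(np.log(np.max(np.abs(np.linalg.eigvals(adj)))))

# golden mean shift: binary sequences with no two consecutive 1s
golden_mean = [[1, 1],
               [1, 0]]
h = sft_entropy(golden_mean)   # log of the golden ratio
```

Measures of maximal entropy on such a shift attain this value, and the thesis's bound concerns how many of them can sit over a single measure downstairs.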
10

Ordering of Entangled States for Different Entanglement Measures / Ordning av Sammanflätningsgrad hos Kvantmekaniska Tillstånd för Olika Mätmodeller

Sköld, Jennie January 2014
Quantum entanglement is a phenomenon which has shown great potential for use in modern technical implementations, but much development is still needed in the field. One major problem is how to measure the amount of entanglement present in a given entangled state. Numerous different entanglement measures have been suggested, all satisfying conditions of either an operational or a more abstract, mathematical nature. However, contrary to what one might expect, the measures show discrepancies in the ordering of entangled states. Concretely, this means that with respect to one measure a state can be more entangled than another state, while the ordering may be the opposite for the same states under another measure. In this thesis we take a closer look at some of the most commonly occurring entanglement measures, and find examples of states showing inequivalent entanglement ordering for the different measures. / Quantum entanglement is a phenomenon that has shown great potential for future technical applications, but to make use of it we must find suitable models for measuring the extent of entanglement in a given state. This has proven to be a difficult task, since today's models are inadequate when it comes to consistently determining the degree to which different states are entangled. For example, one model may show that one state is more entangled than another, while a different model shows the opposite, that the first state is less entangled than the second. One possible cause lies in the definitions of the models: some are based on operational definitions, while others rest on abstract mathematical conditions. In this thesis we take a closer look at some of the existing measures, and find examples of states that exhibit different orderings of entanglement depending on which model is used.
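Two of the most common two-qubit measures can be sketched directly: Wootters concurrence and negativity, the latter normalized here so the two coincide on pure states. The states below are standard examples; the ordering-reversal examples in the thesis involve particular mixed-state pairs not reproduced here:

```python
import numpy as np

# sigma_y tensor sigma_y, used in the concurrence spin-flip
SY2 = np.kron(np.array([[0, -1j], [1j, 0]]), np.array([[0, -1j], [1j, 0]]))

def concurrence(rho):
    # Wootters concurrence of a two-qubit density matrix:
    # C = max(0, l1 - l2 - l3 - l4), the l's being the decreasing square
    # roots of the eigenvalues of rho * (spin-flipped rho)
    R = rho @ SY2 @ rho.conj() @ SY2
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(R))))[::-1]
    return float(max(0.0, lam[0] - lam[1] - lam[2] - lam[3]))

def negativity(rho):
    # negativity from the partial transpose on the second qubit, scaled
    # (factor 2) so that it coincides with concurrence on pure states
    pt = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
    eig = np.linalg.eigvalsh((pt + pt.conj().T) / 2.0)
    return float(2.0 * np.sum(np.abs(eig[eig < 0])))

# a non-maximally entangled pure state and a Werner state (p = 1/2)
psi = np.array([np.sqrt(0.8), 0.0, 0.0, np.sqrt(0.2)])
rho_pure = np.outer(psi, psi)
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2.0)
rho_werner = 0.5 * np.outer(bell, bell) + 0.5 * np.eye(4) / 4.0
```

On these particular states the two measures happen to agree; the thesis's point is that for suitably chosen mixed states they need not, and can even rank a pair of states in opposite orders.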
