41

Entropia e informação de sistemas quânticos amortecidos / Entropy and information of quantum damped systems

Vanderley Aguiar de Lima Júnior 17 July 2014 (has links)
Conselho Nacional de Desenvolvimento Científico e Tecnológico / In this work we analyze the solutions of the equations of motion for two Lane-Emden-type oscillators, for which the mass varies as m(t) = t^α with α > 0. These are damped harmonic oscillators with a time-dependent damping factor γ(t) = α/t. We obtain analytical expressions for q(t), dq(t)/dt, and p(t) = m(t) dq(t)/dt for α = 2 and α = 4, and we discuss the differences between the expressions for the Hamiltonian and the mechanical energy in time-dependent systems. We use the quantum invariant method and a unitary transformation to obtain the exact Schrödinger wave function ψ_n(q,t), and for n = 0 we calculate the time-dependent joint entropy (Leipnik's entropy) and the position (F_q) and momentum (F_p) Fisher information for two classes of quantum damped harmonic oscillators. We observe that the joint entropy does not vary in time for the Caldirola-Kanai oscillator, while it decreases and tends to a constant value, ln(e/2), at asymptotic times for the Lane-Emden oscillators; this is because, for the latter, the damping factor decreases as time increases. The results show that the time dependence of the joint entropy is quite complex and does not follow a general trend of monotonic increase with time, and that F_q increases while F_p decreases with increasing time. Moreover, the product F_q F_p increases and tends to a constant value, 4/ℏ², in the limit t → ∞. We compare these results with those of the well-known Caldirola-Kanai oscillator.
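As a reading aid, the classical model described above can be written out explicitly; the natural frequency ω below is notation introduced here for illustration and is not taken from the record. With mass m(t) = t^α, the variable-mass equation of motion reproduces the damping factor γ(t) = α/t quoted in the abstract:

```latex
% Variable-mass harmonic oscillator with m(t) = t^{\alpha}; \omega is an assumed notation
\frac{d}{dt}\!\left(m(t)\,\dot q\right) + m(t)\,\omega^{2} q = 0
\quad\Longrightarrow\quad
\ddot q + \frac{\alpha}{t}\,\dot q + \omega^{2} q = 0,
\qquad \gamma(t) = \frac{\alpha}{t}.
```

This makes explicit why the damping weakens as time grows, which is the mechanism the abstract invokes to explain the asymptotic behaviour of Leipnik's entropy.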
42

Geometria da informação : métrica de Fisher / Information geometry : Fisher's metric

Porto, Julianna Pinele Santos, 1990- 23 August 2018 (has links)
Advisor: João Eloir Strapasson / Dissertation (master's) - Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica / Abstract: Information Geometry is an area of mathematics that uses geometric tools in the study of statistical models. In 1945, Rao introduced a Riemannian metric on the space of probability distributions using the information matrix given by Ronald Fisher in 1921. With the metric associated with this matrix, one defines a distance between two probability distributions (Rao's distance), geodesics, curvatures, and other properties of the space. Since then, many authors have studied this subject, which is naturally connected to various applications such as statistical inference, stochastic processes, information theory, and image distortion. In this work we give a brief introduction to differential and Riemannian geometry and survey some results obtained in Information Geometry. We present Rao's distance between some probability distributions, with special attention to the study of this distance in the space of multivariate normal distributions. In this space, since no closed form is known for either the distance or the geodesic curve, we focus on computing bounds for Rao's distance. In some cases, we improve the upper bound given by Calvo and Oller in 1990. / Master's degree in Applied Mathematics
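For readers unfamiliar with the construction mentioned above, the Fisher information metric and the associated Rao distance are standardly defined as follows (a generic statement, not specific to this dissertation):

```latex
% Fisher information metric on a parametric family p(x \mid \theta) and the induced Rao distance
g_{ij}(\theta) = \mathbb{E}\!\left[\frac{\partial \log p(x\mid\theta)}{\partial \theta_i}\,
                                   \frac{\partial \log p(x\mid\theta)}{\partial \theta_j}\right],
\qquad
d_R(\theta_1,\theta_2) = \inf_{\substack{\gamma(0)=\theta_1\\ \gamma(1)=\theta_2}}
\int_0^1 \sqrt{\sum_{i,j} g_{ij}(\gamma(t))\,\dot\gamma_i(t)\,\dot\gamma_j(t)}\;dt .
```

The Rao distance is thus the geodesic distance of the Riemannian metric defined by the Fisher information matrix, which is why bounding it requires knowledge of the geodesics.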
43

Statistická analýza výběrů ze zobecněného exponenciálního rozdělení / Statistical analysis of samples from the generalized exponential distribution

Votavová, Helena January 2014 (has links)
This master's thesis studies the generalized exponential distribution as an alternative to the Weibull and log-normal distributions. The basic characteristics of this distribution and methods of parameter estimation are described. A separate chapter is devoted to goodness-of-fit tests. The second part of the thesis deals with censored samples; worked examples are given for the exponential distribution. The case of type I left censoring, which has not been published before, is then studied. For this special case, simulations are carried out with a detailed description of the properties and behaviour. An EM algorithm is then derived for this distribution and its efficiency is compared with the maximum likelihood method. The developed theory is applied to the analysis of environmental data.
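For context, the generalized exponential distribution in the usual Gupta-Kundu parametrization (assumed here; the thesis may use a different notation) has distribution and density functions:

```latex
% Generalized exponential GE(\alpha, \lambda), for x > 0 with shape \alpha > 0 and rate \lambda > 0
F(x) = \left(1 - e^{-\lambda x}\right)^{\alpha},
\qquad
f(x) = \alpha \lambda\, e^{-\lambda x}\left(1 - e^{-\lambda x}\right)^{\alpha - 1}.
```

For α = 1 this reduces to the ordinary exponential distribution, which is why the family is a natural alternative to the Weibull and log-normal models.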
44

Measuring RocksDB performance and adaptive sampling for model estimation

Laprés-Chartrand, Jean 01 1900 (has links)
This thesis addresses two topics: statistical learning, and the prediction of key performance indicators in the performance evaluation of a storage engine. The statistical learning part presents a novel algorithm that adjusts the sample size used in the Monte Carlo approximation of the function to be minimized, allowing a reduction of the true function with a given probability at a lower numerical cost. The sampling strategy is embedded in a trust-region algorithm that uses the Fisher information matrix, also called the BHHH approximation, to approximate the Hessian matrix. The strategy is tested on a logit model generated from synthetic data; numerical results show a significant reduction in the time required to optimize the model when an adequate smoothing is applied to the function. The key performance indicator prediction part describes a novel strategy to select better settings for RocksDB that optimize its throughput, using the log files to analyze and identify suboptimal parameters, opening the possibility of greatly accelerating modern storage-engine tuning.
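The BHHH approximation referred to above is a standard construction; in outline (generic form, not taken from the thesis), for a sample log-likelihood ℓ(θ) = Σ_i ℓ_i(θ) it replaces the Hessian with the outer product of the per-observation score vectors:

```latex
% BHHH / outer-product-of-gradients approximation to the (negative) Hessian
-\nabla^{2}\ell(\theta) \;\approx\; B(\theta) \;=\; \sum_{i=1}^{n} \nabla \ell_i(\theta)\,\nabla \ell_i(\theta)^{\top}.
```

At the true parameter this matrix coincides in expectation with the Fisher information, and it is positive semidefinite and cheap to form, which makes it attractive inside a trust-region step.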
45

Le statisticien neuronal : comment la perspective bayésienne peut enrichir les neurosciences / The neuronal statistician : how the Bayesian perspective can enrich neuroscience

Dehaene, Guillaume 09 September 2016 (has links)
Bayesian inference answers key questions of perception, such as: "What should I believe, given what I have perceived?" As such, it is a rich source of models for cognitive science and neuroscience (Knill and Richards, 1996). This thesis explores two such models. We first investigate an efficient coding problem, asking how best to represent probabilistic information in unreliable neurons; we improve on earlier models by accounting for finite input information. We then propose a new ideal-observer model for localizing a sound source from the interaural time difference cue, whereas current models are purely descriptive models of the electrophysiology. Finally, we study the properties of the Expectation Propagation approximate-inference algorithm, which holds great promise both for practical machine-learning applications and for modelling neuronal populations, but which is currently poorly understood.
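The question quoted at the start of the abstract is formalized by Bayes' rule; in generic notation (not specific to the models in this thesis), with s the latent state of the world and x the percept:

```latex
% Posterior belief over the state s given the percept x
p(s \mid x) \;=\; \frac{p(x \mid s)\, p(s)}{\int p(x \mid s')\, p(s')\, ds'} \;\propto\; p(x \mid s)\, p(s).
```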
46

Fisher Information in Censored Samples from Univariate and Bivariate Populations and Their Applications

Pi, Lira January 2012 (has links)
No description available.
47

Novel Transport in Quantum Phases and Entanglement Dynamics Beyond Equilibrium

Szabo, Joseph Charles 06 September 2022 (has links)
No description available.
48

  • Objective Bayesian Analysis of Kullback-Leibler Divergence of two Multivariate Normal Distributions with Common Covariance Matrix and Star-shape Gaussian Graphical Model

Li, Zhonggai 22 July 2008 (has links)
This dissertation consists of four independent but related parts, each in its own chapter. The first part is introductory: it provides background and preparation for the later parts. The second part discusses two multivariate normal populations with a common covariance matrix. The goal of this part is to derive objective/non-informative priors for the parameterizations and to use these priors to build constructive random posteriors of the Kullback-Leibler (KL) divergence between the two multivariate normal populations, which is proportional to the distance between the two means, weighted by the common precision matrix. We use the Cholesky decomposition to re-parameterize the precision matrix. The KL divergence is a true distance measure between two multivariate normal populations with a common covariance matrix. Frequentist properties of the Bayesian procedure using these objective priors are studied with analytical and numerical tools. The third part considers the star-shape Gaussian graphical model, a special case of undirected Gaussian graphical models. It is a multivariate normal distribution whose variables are grouped into one "global" set and several "local" sets; conditioned on the global variable set, the local variable sets are independent of each other. We again adopt the Cholesky decomposition to re-parameterize the precision matrix and derive Jeffreys' prior, the reference prior, and invariant priors for the new parameterizations. The frequentist properties of the Bayesian procedure using these objective priors are also studied. The last part concentrates on objective Bayesian analysis of the partial correlation coefficient and its application to multivariate Gaussian models. / Ph. D.
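The statement that the divergence is "proportional to the distance between the two means, weighted by the common precision matrix" corresponds to the standard closed form for two normals sharing a covariance matrix:

```latex
% KL divergence between N(\mu_1, \Sigma) and N(\mu_2, \Sigma)
\mathrm{KL}\!\left(\mathcal{N}(\mu_1,\Sigma)\,\big\|\,\mathcal{N}(\mu_2,\Sigma)\right)
= \tfrac{1}{2}\,(\mu_1-\mu_2)^{\top}\Sigma^{-1}(\mu_1-\mu_2),
```

i.e., half the squared Mahalanobis distance between the means, which is symmetric in this common-covariance case.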
49

A general L-curve technique for ill-conditioned inverse problems based on the Cramer-Rao lower bound

Kattuparambil Sreenivasan, Sruthi, Farooqi, Simrah January 2024 (has links)
This project is concerned with statistical methods for finding the unknown parameters of a model: a statistical investigation of the algorithm with respect to accuracy (the Cramer-Rao bound and the L-curve technique) and optimization of the algorithmic parameters. The aim is to estimate the true (final) temperature of a liquid in a container from initial readings of a temperature probe with a known time constant; that is, the final temperature of the liquid is estimated before the probe reaches its final reading. The probe obeys a simple first-order differential equation model. Based on the probe model and the measurement data, the 'true' temperature in the container was estimated using a maximum likelihood approach to parameter estimation. The initial temperature was also investigated. Modelling, analysis, calculations, and simulations of the problem are presented.
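A minimal sketch of the estimation problem described above, assuming a standard first-order probe response with a known time constant and Gaussian measurement noise; the model form, time constant, temperatures, and noise level below are illustrative assumptions, not values from the report. Under Gaussian noise, the maximum likelihood fit reduces to nonlinear least squares:

```python
import numpy as np
from scipy.optimize import curve_fit

tau = 5.0  # known probe time constant in seconds (assumed value)

def probe_reading(t, T_final, T0):
    # First-order probe response: exponential relaxation from T0 toward T_final
    return T_final + (T0 - T_final) * np.exp(-t / tau)

# Synthetic early readings, taken before the probe has settled (illustrative only)
rng = np.random.default_rng(0)
t = np.linspace(0.0, 3.0, 16)                          # measurement times (s)
y = probe_reading(t, 80.0, 20.0) + rng.normal(0.0, 0.2, t.size)

# Gaussian noise: maximum likelihood estimate = nonlinear least squares fit
(T_final_hat, T0_hat), cov = curve_fit(probe_reading, t, y, p0=[60.0, 25.0])
print(f"estimated final temperature: {T_final_hat:.2f}, initial: {T0_hat:.2f}")
```

The covariance returned by the fit can then be compared against the Cramer-Rao lower bound to judge how close the estimator comes to the best achievable accuracy.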
50

Concepts and applications of quantum measurement

Knee, George C. January 2014 (has links)
In this thesis I discuss the nature of 'measurement' in quantum theory. 'Measurement' is associated with several different processes: the gradual imprinting of information about one system onto another, which is well understood; the collapse of the wavefunction, which is ill-defined and troublesome; and finally, the means by which inferences about unknown experimental parameters are made. I present a theoretical extension to an experimental proposal from Leggett and Garg, who suggested that the quantum-or-classical reality of a macroscopic system may be probed with successive measurements arrayed in time. The extension allows for a finite level of imperfection in the protocol, and makes use of Leggett's 'null result' measurement scheme. I present the results of an experiment conducted in Oxford that, up to certain loopholes, defies a non-quantum interpretation of the dynamics of phosphorus nuclei embedded in silicon. I also present the theory of statistical parameter estimation, and find that a recent trend to employ time-symmetric 'postselected' measurements offers no true advantage over standard methods. The technique, known as weak-value amplification, combines a weak transfer of quantum information from system to meter with conditional data rejection, to surprising effect. The Fisher information is a powerful tool for evaluating the performance of any parameter estimation model, and it reveals the technique to be worse than ordinary, preselected-only measurements. That this is true despite the presence of noise (including magnetic field fluctuations causing decoherence, poor-resolution detection, and random displacements) casts serious doubt on the utility of the method.
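The role the Fisher information plays in this evaluation is the standard one; in generic form (not specific to the weak-value setup), for N independent measurements with outcome distribution p(x; θ) and an unbiased estimator θ̂:

```latex
% Fisher information and the Cramer-Rao bound
F(\theta) = \mathbb{E}\!\left[\left(\frac{\partial \ln p(x;\theta)}{\partial \theta}\right)^{2}\right],
\qquad
\operatorname{Var}(\hat\theta) \;\geq\; \frac{1}{N\,F(\theta)} .
```

Comparing F(θ) with and without postselection is what reveals the weak-value protocol extracting no more information than an ordinary preselected-only measurement.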
