631

Sir Arthur Eddington and the foundations of modern physics

Durham, Ian T. January 2005
In this dissertation I analyze Sir Arthur Eddington's statistical theory as developed in the first six chapters of his posthumously published Fundamental Theory, looking in particular at its mathematical structure, philosophical implications, and relevance to modern physics. This is the only analysis of Fundamental Theory that compares it to modern quantum field theory, and the most comprehensive examination of his statistical theory in four decades. It yields several major insights, including the fact that Eddington was able to derive Pauli's Exclusion Principle in part from Heisenberg's Uncertainty Principle. The most profound general conclusion of this research is that Fundamental Theory is, in fact, an early quantum field theory, something that has never before been suggested. Contrary to the majority of historical reports and some comments by his contemporaries, this analysis shows that Eddington's later work is neither mystical nor was it far from the mainstream when it was published. My research reveals numerous profoundly deep ideas that were ahead of their time when Fundamental Theory was developed but that have significant applicability at present. As such, this analysis presents several important questions for modern philosophers of science, physicists, mathematicians, and historians. It also sheds new light on Eddington as a scientist and mathematician, indicating in part that his marginalization has been largely unwarranted.
632

Função de acoplamento t-Student assimetrica : modelagem de dependencia assimetrica / Skewed t-Student copula function : skewed dependence modelling

Busato, Erick Andrade 12 March 2008
Advisor: Luiz Koodi Hotta / Master's dissertation, Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica / Abstract: The Skewed t-Student distribution family, constructed as a mean-variance mixture of the multivariate normal distribution with the Inverse-Gamma distribution, has desirable flexibility properties for a wide range of asymmetry structures. These properties are explored by constructing copula functions with asymmetric dependence. In this work the properties and characteristics of the Skewed t-Student distribution and the construction of the corresponding copula function are studied, presenting the different dependence structures that the copula can generate, including tail-dependence asymmetry. Parameter estimation methods are presented for the copula, with applications up to the third dimension. This copula function is used to compose an ARMA-GARCH-copula model with Skewed t-Student marginal distributions, fitted to the log-returns of Petroleum and Gasoline prices and of the AMEX Oil Index, seeking the best fit primarily for the tails of the price-return distributions. The model is compared, by means of Value at Risk (VaR), Akaike's Information Criterion, and other goodness-of-fit measures, with models based on the symmetric t-Student copula. / Master's degree in Statistics
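The mean-variance mixture construction that underlies the skewed t-Student family can be illustrated with a short simulation. The sketch below (Python; the function and parameter names are ours, not the thesis's) draws from a skew-t of generalized-hyperbolic type by mixing a multivariate normal with an Inverse-Gamma weight, which is the construction the abstract describes.

```python
import numpy as np

def sample_skew_t(n, nu, mu, gamma, sigma, seed=None):
    """Skew-t via a normal mean-variance mixture:
    X = mu + gamma * W + sqrt(W) * Z, with W ~ InverseGamma(nu/2, nu/2)
    and Z ~ N(0, Sigma). Illustrative sketch, not the thesis code."""
    rng = np.random.default_rng(seed)
    d = len(mu)
    # W ~ InverseGamma(nu/2, nu/2) is the same as nu / ChiSquare(nu)
    w = nu / rng.chisquare(nu, size=n)
    z = rng.multivariate_normal(np.zeros(d), sigma, size=n)
    return mu + np.outer(w, gamma) + np.sqrt(w)[:, None] * z

# Example: bivariate skew-t with heavier joint lower tail
x = sample_skew_t(10_000, nu=5.0,
                  mu=np.array([0.0, 0.0]),
                  gamma=np.array([-0.5, -0.5]),
                  sigma=np.array([[1.0, 0.6], [0.6, 1.0]]))
print(x.mean(axis=0))  # shifted away from mu by gamma * E[W]
```

Transforming each marginal of such a sample to the uniform scale yields draws from the corresponding copula, which is how asymmetric tail dependence enters an ARMA-GARCH-copula model.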
633

Geometria da informação : métrica de Fisher / Information geometry : Fisher's metric

Porto, Julianna Pinele Santos, 1990- 23 August 2018
Advisor: João Eloir Strapasson / Master's dissertation, Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica / Abstract: Information Geometry is an area of mathematics that uses geometric tools in the study of statistical models. In 1945, Rao introduced a Riemannian metric on the space of probability distributions using the information matrix given by Ronald Fisher in 1921. With the metric associated with this matrix, one defines a distance between two probability distributions (Rao's distance), geodesics, curvatures, and other properties of the space. Since then, many authors have studied this subject, which is naturally connected to various applications such as statistical inference, stochastic processes, information theory, and image distortion. In this work we provide a brief introduction to Differential and Riemannian Geometry and survey some results obtained in Information Geometry. We present Rao's distance between several probability distributions, with special attention to the space of multivariate normal distributions. In this space, since closed forms for neither the distance nor the geodesic curve are yet known, we focus on computing bounds for Rao's distance. In some cases, we improve the upper bound given by Calvo and Oller in 1990. / Master's degree in Applied Mathematics
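For univariate normal distributions, by contrast, Rao's distance does have a well-known closed form: the Fisher metric ds² = dμ²/σ² + 2dσ²/σ² turns the (μ, σ) half-plane into a hyperbolic space. A minimal sketch (ours, for illustration):

```python
import numpy as np

def rao_distance_normal(mu1, sigma1, mu2, sigma2):
    """Rao (Fisher) distance between two univariate normal distributions.
    The Fisher metric makes the (mu, sigma) half-plane hyperbolic, which
    gives the closed form below."""
    num = (mu1 - mu2) ** 2 / 2.0 + (sigma1 - sigma2) ** 2
    return np.sqrt(2.0) * np.arccosh(1.0 + num / (2.0 * sigma1 * sigma2))

print(rao_distance_normal(0.0, 1.0, 0.0, 2.0))  # sqrt(2) * ln 2 ~ 0.980
```

In the multivariate normal case no such formula is available, which is why the thesis works with upper and lower bounds instead.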
634

O uso de ondaletas em modelos FANOVA / Wavelets FANOVA models

Kist, Airton, 1971- 19 August 2018
Advisor: Aluísio de Souza Pinheiro / Doctoral thesis, Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica / Abstract: The functional estimation problem has been studied in various forms in the literature. A promising approach uses orthonormal wavelet bases, which are appealing for their frugality, asymptotic optimality, and computational speed. The main objective of this work is to extend the tests for fixed-effects FANOVA models with i.i.d. errors, based on wavelets and proposed in Abramovich et al. (2004), to fixed-effects FANOVA models with dependent errors. We propose an iterative Cochrane-Orcutt-type procedure to estimate the parameters and the function. The function is estimated nonparametrically via a wavelet estimator with term-by-term thresholding or via a linear wavelet kernel estimator. We show that, with i.i.d. errors, the individual convergence of the wavelet kernel estimator at dyadic points to a normally distributed random variable implies the joint convergence of this vector to a multivariate normal random variable. Furthermore, we show mean-squared-error convergence of the estimator at the dyadic points. Under a restriction, it is possible to show that this estimator converges at the dyadic points to a normally distributed variable even when the errors are correlated; the vector of individual limits again converges to a multivariate normal variable. / Doctorate in Statistics
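Term-by-term thresholding of wavelet coefficients, one of the two estimators mentioned above, can be sketched in a few lines. The snippet below (Python with the PyWavelets package) shows a textbook universal-threshold scheme, a sketch of the general technique rather than the thesis's exact estimator.

```python
import numpy as np
import pywt

def wavelet_denoise(y, wavelet="db4", level=4):
    """Term-by-term soft thresholding of wavelet detail coefficients with
    the universal threshold sigma * sqrt(2 log n)."""
    coeffs = pywt.wavedec(y, wavelet, level=level)
    # Estimate the noise scale from the finest detail coefficients (MAD)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(len(y)))
    denoised = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft")
                              for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[: len(y)]

# Example: recover a smooth signal from noisy equispaced observations
t = np.linspace(0, 1, 1024)
y = np.sin(4 * np.pi * t) + 0.3 * np.random.default_rng(0).standard_normal(1024)
f_hat = wavelet_denoise(y)
```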
635

Arremessos de basquetebol e sequências de Bernoulli : uma aplicação de métodos estatísticos para análise de séries temporais binárias / Basketball shots and sequences of Bernoulli : an application of statistical methods for analysis of binary time series

Costa, Michelly Guerra, 1984- 21 August 2018
Advisor: Cristiano Torezzan / Professional master's dissertation, Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica / Abstract: In this work we investigate the similarities between shot records extracted from real basketball games and random sequences generated by computational algorithms. Our goal is to compare the behavior of players' sequences of consecutive shots over a season with computationally generated random sequences of zeros and ones (Bernoulli sequences). For this purpose, specific computational algorithms were developed for the tests and implemented on the Scilab platform. The tests performed in this study indicate that, in general, there are no statistical differences between the two sets of data considered. Besides a brief review of statistical tests and of the "hot hand" (good-streak) question in basketball, we present several examples intended to make the text accessible to undergraduates and others interested in applied statistics, especially as applied to sport. / Master's degree in Mathematics
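A standard way to compare an observed made/missed sequence against the Bernoulli hypothesis is the Wald-Wolfowitz runs test. The thesis implemented its tests in Scilab; the sketch below is our Python rendering of this classical test, not the thesis code.

```python
import numpy as np
from scipy.stats import norm

def runs_test(x):
    """Wald-Wolfowitz runs test for a binary 0/1 sequence: under the
    i.i.d. Bernoulli hypothesis, the number of runs R is asymptotically
    normal with the mean and variance below."""
    x = np.asarray(x)
    n1 = x.sum()
    n0 = len(x) - n1
    n = n0 + n1
    runs = 1 + np.count_nonzero(np.diff(x))  # a run ends at each switch
    mean = 2.0 * n1 * n0 / n + 1.0
    var = 2.0 * n1 * n0 * (2.0 * n1 * n0 - n) / (n ** 2 * (n - 1.0))
    z = (runs - mean) / np.sqrt(var)
    return z, 2.0 * norm.sf(abs(z))  # two-sided p-value

shots = np.random.default_rng(1).integers(0, 2, size=200)  # made/missed
z, p = runs_test(shots)  # large |z| (small p) signals streakiness
```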
636

Spectral methods and computational trade-offs in high-dimensional statistical inference

Wang, Tengyao January 2016
Spectral methods have become increasingly popular in designing fast algorithms for modern high-dimensional datasets. This thesis looks at several problems in which spectral methods play a central role. In some cases, we also show that such procedures have essentially the best performance among all randomised polynomial-time algorithms, by exhibiting statistical and computational trade-offs in those problems. In the first chapter, we prove a useful variant of the well-known Davis-Kahan theorem, a spectral perturbation result that allows us to bound the distance between population eigenspaces and their sample versions. We then propose a semi-definite programming algorithm for the sparse principal component analysis (PCA) problem, and analyse its theoretical performance using the perturbation bounds derived earlier. It turns out that the parameter regime in which our estimator is consistent is strictly smaller than the consistency regime of a minimax optimal (yet computationally intractable) estimator. We show, through reduction from a well-known hard problem in computational complexity theory, that the difference in consistency regimes is unavoidable for any randomised polynomial-time estimator, hence revealing subtle statistical and computational trade-offs in this problem. Such computational trade-offs also exist in the problem of restricted isometry certification. Certifiers for restricted isometry properties can be used to construct design matrices for sparse linear regression problems. Similarly to the sparse PCA problem, we show that there is an intrinsic gap between the class of matrices certifiable using unrestricted algorithms and using polynomial-time algorithms. Finally, we consider the problem of high-dimensional changepoint estimation, where we estimate the time of change in the mean of a high-dimensional time series with piecewise constant mean structure. Motivated by real-world applications, we assume that changes occur only in a sparse subset of all coordinates. We apply a variant of the semi-definite programming algorithm from sparse PCA to aggregate the signals across different coordinates in a near-optimal way, so as to estimate the changepoint location as accurately as possible. Our statistical procedure shows superior performance compared to existing methods for this problem.
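The quantity controlled by Davis-Kahan-type bounds is the sin-theta distance between eigenspaces, which is easy to compute directly. A minimal numerical sketch (ours, illustrating the quantity itself rather than the thesis's variant of the theorem):

```python
import numpy as np

def sin_theta_distance(u, v):
    """Sine of the largest principal angle between the column spaces of
    U and V (both with orthonormal columns): || (I - U U^T) V ||_2."""
    p = v - u @ (u.T @ v)
    return np.linalg.norm(p, 2)  # spectral norm

rng = np.random.default_rng(0)
a = rng.standard_normal((50, 50)); a = (a + a.T) / 2        # symmetric A
e = 0.1 * rng.standard_normal((50, 50)); e = (e + e.T) / 2  # perturbation E
_, u = np.linalg.eigh(a)
_, v = np.linalg.eigh(a + e)
# Distance between the leading 3-dimensional eigenspaces of A and A + E;
# Davis-Kahan bounds this by (a constant times) ||E|| over the eigengap.
print(sin_theta_distance(u[:, -3:], v[:, -3:]))
```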
637

Three essays on spectral analysis and dynamic factors

Liska, Roman 10 September 2008
The main objective of this work is to propose new procedures for the general dynamic factor analysis introduced by Forni et al. (2000). First, we develop an identification method for determining the number of common shocks in the general dynamic factor model. Sufficient conditions for consistency of the criterion are provided for large n (number of series) and T (series length). We believe that our procedure can shed light on the ongoing debate on the number of factors driving the US or Eurozone economy. Second, we show how the dynamic factor analysis method proposed in Forni et al. (2000), combined with our identification method, allows for identifying and estimating joint and block-specific common factors. This leads to a more sophisticated analysis of the structures of dynamic interrelations within and between blocks in such datasets. Beyond the framework of the general dynamic factor model, we also propose a consistent lag-window spectral density estimator based on the multivariate M-estimators of Maronna (1976) when the underlying data come from an alpha-mixing stationary Gaussian process. / Doctorate in Sciences
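Criteria for the number of factors are typically built by penalizing the residual variance left after removing k principal components. The sketch below is a static illustration in the spirit of Bai and Ng (2002); the thesis's criterion operates on the spectral density of the dynamic model, which this simplified sketch does not attempt to reproduce.

```python
import numpy as np

def num_factors_ic(x, kmax=10):
    """Choose the number of factors k minimizing log V(k) + k * penalty,
    where V(k) is the mean squared residual after removing k principal
    components from the T x N panel x. Static sketch only."""
    t, n = x.shape
    x = x - x.mean(axis=0)
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    penalty = (n + t) / (n * t) * np.log(min(n, t))
    ics = []
    for k in range(1, kmax + 1):
        resid = x - (x @ vt[:k].T) @ vt[:k]  # remove the top k PCs
        ics.append(np.log(np.mean(resid ** 2)) + k * penalty)
    return 1 + int(np.argmin(ics))

# Simulated panel: 2 common factors plus idiosyncratic noise
rng = np.random.default_rng(0)
f = rng.standard_normal((200, 2))
lam = rng.standard_normal((2, 50))
x = f @ lam + 0.5 * rng.standard_normal((200, 50))
print(num_factors_ic(x))  # typically 2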
638

Détermination de classes de modalités de dégradation significatives pour le pronostic et la maintenance / Determination of classes of significant deterioration modalities for prognosis and maintenance

Wang, Xuanzhou 15 November 2013
The work presented in this thesis deals with determining classes of systems according to their aging mode, with the aim of preventing failures and making maintenance decisions. The evolution of the observed deterioration level of a system can be modeled by a parameterized stochastic process; a commonly used model is the Gamma process. We are interested in the case where systems do not all age identically and the aging mode depends on the conditions of usage or on system properties, called the set of covariates. The aim is then to group the systems that age similarly, taking the covariate into account, and to identify the parameters of the model associated with each class. First, the problem is stated together with its constraints: irregular observation time increments, an arbitrary number of observations per path describing an evolution, and consideration of the covariate. Methods are then proposed that combine a likelihood criterion in the space of deterioration-level increments with a coherence criterion in the space of the covariate. A normalization technique is introduced to control the importance of each of these two criteria. Experimental studies illustrate the effectiveness of the proposed methods.
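The likelihood criterion over deterioration increments has a simple form for the Gamma process: increments over disjoint intervals are independent Gamma variables whose shape parameter scales with the elapsed time, which accommodates irregular observation instants naturally. A minimal sketch (our parameterization, not the thesis's):

```python
import numpy as np
from scipy.stats import gamma

def gamma_process_loglik(times, levels, alpha, beta):
    """Log-likelihood of one degradation path under a stationary Gamma
    process: the increment over [t_i, t_{i+1}] is an independent
    Gamma(alpha * dt_i, scale = 1/beta) variable."""
    dt = np.diff(times)
    dx = np.diff(levels)
    return gamma.logpdf(dx, a=alpha * dt, scale=1.0 / beta).sum()

# One simulated path observed at irregular instants
rng = np.random.default_rng(0)
t = np.cumsum(rng.uniform(0.5, 2.0, size=30))
x = np.cumsum(rng.gamma(shape=1.2 * np.diff(np.concatenate([[0.0], t])),
                        scale=1.0 / 0.8))
print(gamma_process_loglik(t, x, alpha=1.2, beta=0.8))
```

Summing such per-path log-likelihoods within a candidate class, and trading them off against a coherence term in the covariate space, gives the kind of combined clustering criterion the abstract describes.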
639

Systémový přístup k predikci vývoje cen na trhu rezidenčních nemovitostí / Systematic approach to the prediction of the real estate market development

Tauberová, Darina January 2018
The doctoral thesis deals with finding a suitable approach to predicting the development of the residential real estate market, one that is also applicable in everyday expert practice and advances the appraisal field. A lagged multiple linear regression model was found to be appropriate, as confirmed by model verification. The resulting model is suitable for routine expert use thanks to the simplicity of the calculation, which requires no special computing software. Using the model, an expert can predict the development of the real estate market; the accuracy of the result depends on the accuracy of the input data. All assumptions of the regression models were tested, and the optimal explanatory variables were selected by backward regression. The thesis documents all input data, methods, tests, procedures, and the modeling in detail.
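A lagged multiple linear regression of the kind identified here fits with ordinary least squares once the regressors are shifted back in time. The sketch below (Python; variables, lag, and coefficients are placeholders of ours, not the thesis data) shows the model class, not the thesis's fitted model.

```python
import numpy as np

def fit_lagged_regression(y, X, lag=1):
    """OLS for y_t = b0 + b' x_{t-lag} + e_t: shift the regressor panel
    back by `lag` periods and solve the least-squares problem."""
    Xl = X[:-lag]                      # regressors lagged by `lag` periods
    yt = y[lag:]
    A = np.column_stack([np.ones(len(Xl)), Xl])
    coef, *_ = np.linalg.lstsq(A, yt, rcond=None)
    return coef

# Example: a price index driven by three lagged indicators
rng = np.random.default_rng(0)
X = rng.standard_normal((60, 3))       # e.g. wages, rates, construction
b = np.array([0.8, -0.5, 0.3])
y = np.empty(60)
y[0] = 2.0
y[1:] = 2.0 + X[:-1] @ b + 0.1 * rng.standard_normal(59)
print(fit_lagged_regression(y, X, lag=1))  # recovers ~[2.0, 0.8, -0.5, 0.3]
```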
640

Statistická charakteristická funkce a její využití pro zpracování signálu / Statistic Characteristic Function and its Usage for Digital Signal Processing

Mžourek, Zdeněk January 2014
The aim of this thesis is to provide basic information about the characteristic function used in statistics and to compare its properties with the Fourier transform used in engineering applications. The first part is theoretical: basic concepts, their properties, and their mutual relations are discussed. The first chapter gives an introduction to probability theory, to unify terminology; the concepts introduced there are then used to demonstrate interesting properties of the characteristic function. The second chapter describes the Fourier transform, the definition of the characteristic function, and their comparison. The second part is devoted to applications: the empirical characteristic function is analyzed as an estimate of the characteristic function of the examined data, and a simple normality test is described as an example application. The last part deals with more advanced applications of the characteristic function in methods such as independent component analysis.
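The empirical characteristic function is a one-liner to compute, and comparing it with the standard normal characteristic function exp(-t²/2) gives exactly the kind of simple normality check the abstract describes. A minimal sketch (ours):

```python
import numpy as np

def ecf(x, t):
    """Empirical characteristic function phi_n(t) = mean(exp(i t X_j)),
    evaluated on a grid of frequencies t."""
    return np.exp(1j * np.outer(t, x)).mean(axis=1)

rng = np.random.default_rng(0)
x = rng.standard_normal(1000)
x = (x - x.mean()) / x.std()           # standardize before comparing
t = np.linspace(-3, 3, 61)
dev = np.max(np.abs(ecf(x, t) - np.exp(-t ** 2 / 2)))
print(dev)  # small for normal data, larger under heavy-tailed alternatives
```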
