  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
41

Partition adaptative de l’espace dans un algorithme MCMC avec adaptation régionale / Adaptive partitioning of the space in an MCMC algorithm with regional adaptation

Grenon-Godbout, Nicolas 06 1900 (has links)
No description available.
42

Dinâmica versus estática no programa de pesquisa pós-keynesiano / Dynamics versus statics in the post-Keynesian research program

Krakowiak, Sérgio 13 September 2006 (has links)
The first chapter underlines the fact that economic science contains two distinct paradigms: one accepts the ergodicity hypothesis while the other does not. In a strict sense, this hypothesis refers to the possibility of making reliable statistical forecasts. In a broader sense, however, it is tied to the notion of the immutability of economic laws and the pre-determination of reality. Economic schools of thought that postulate full-employment equilibrium as a necessary outcome belong to the ergodic paradigm. Keynes and the Post-Keynesians belong to the non-ergodic paradigm. These schools hold that the future is fundamentally uncertain and that the levels of employment and income are given by short- and long-run expectations, the interest rate, and the propensity to consume, that is, by variables subject to change in historical time. There is no pre-determined convergence to full-employment equilibrium, and the economy rarely reaches or remains at equilibrium, even when that equilibrium lies below full employment. The shifting equilibrium model shows that a single initial error in short-run expectations gives rise to a systematic sequence of disequilibria between realized aggregate demand and aggregate supply. Harrod's model, considered by Kregel (1980) a variant of Keynes's model, exhibits the same result.
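For reference, the strict sense of ergodicity the abstract alludes to can be stated formally; this is the standard stationary-process formulation, a general background sketch rather than anything drawn from the dissertation itself:

```latex
% Ergodicity (pointwise ergodic theorem): for a stationary ergodic
% process (X_t), time averages recover the ensemble average almost
% surely, which is what licenses reliable statistical forecasting:
\[
  \lim_{T \to \infty} \frac{1}{T} \sum_{t=1}^{T} X_t \;=\; \mathbb{E}[X_1]
  \quad \text{a.s.}
\]
```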
43

Cadeias de Markov e o Jogo Monopoly / Markov chains and the game of Monopoly

Souza Junior, Fernando Luiz de January 2016 (has links)
Advisor: Prof. Dr. Rafael de Mattos Grisi / Dissertation (master's) - Universidade Federal do ABC, Programa de Pós-Graduação em Mestrado Profissional em Matemática em Rede Nacional, 2016. / In this work we analyze a simplified version of the Monopoly game using a discrete-time Markov chain model. In the first chapter we discuss the Classical Theory of Probability, presenting the results most important for this study, preceded by a brief introduction to ideas about chance throughout human history and to the leading thinkers involved in the development of the theory. In the second chapter we give a historical introduction to stochastic processes and Markov chains; we then explain the fundamental concepts of Markov chains, give some examples, and finally discuss the ergodicity of a Markov chain. In the third chapter, after a brief account of the emergence and subsequent evolution of the Monopoly game over the twentieth century, we analyze the dynamics of the game through a Markov chain model, using a simplified version of the game as the object of study.
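To illustrate the kind of model the abstract describes, here is a minimal sketch, not taken from the dissertation, of a discrete-time Markov chain on a small circular board: a token advances by the sum of two dice, and the stationary distribution is recovered by iterating the transition matrix. The board size is an illustrative assumption, not the real Monopoly board.

```python
import numpy as np

N_SQUARES = 12  # illustrative small circular board

# Probability of advancing k squares with two fair six-sided dice (k = 2..12).
dice = np.zeros(13)
for a in range(1, 7):
    for b in range(1, 7):
        dice[a + b] += 1 / 36

# Transition matrix: from square i, move to (i + k) mod N_SQUARES with prob dice[k].
P = np.zeros((N_SQUARES, N_SQUARES))
for i in range(N_SQUARES):
    for k in range(2, 13):
        P[i, (i + k) % N_SQUARES] += dice[k]

# The chain is irreducible and aperiodic, hence ergodic: powers of P converge
# to a matrix whose identical rows give the stationary distribution.
pi = np.linalg.matrix_power(P, 200)[0]
print(np.round(pi, 4))  # uniform here by symmetry; rules like "Go to Jail" break it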
44

Plusieurs aspects de rigidité des algèbres de von Neumann / Several rigidity features of von Neumann algebras

Boutonnet, Rémi 12 June 2014 (has links)
The purpose of this dissertation is to highlight rigidity properties of several constructions of von Neumann algebras. These constructions relate group theory and ergodic theory to operator algebras, so it is natural to ask how strong this link is and how the different fields can enrich one another. In Chapter II, we study von Neumann algebras associated with measure-preserving actions of discrete groups: Gaussian actions, which generalize Bernoulli actions. We have two goals in this chapter. The first is to use the von Neumann algebra associated with an action as a tool to deduce ergodicity properties of the initial action (see Theorem II.1.22 and Corollary II.2.16). The second is to prove structural and classification results for von Neumann algebras associated with Gaussian actions. The most striking rigidity result of the chapter is Theorem II.4.5, a W*-superrigidity result stating that in some cases the von Neumann algebra associated with a Gaussian action entirely remembers the action, up to conjugacy. Our results generalize similar results for Bernoulli actions ([KT08, CI10, Io11, IPV13]). In Chapter III, we study amalgamated free products of von Neumann algebras. The content of this chapter was obtained in collaboration with C. Houdayer and S. Raum. We investigate Cartan subalgebras in such amalgamated free products. In particular, we deduce that the free product of two von Neumann algebras is never obtained as the group measure space construction of a non-singular action of a countable discrete group on a measure space. Finally, Chapter IV is concerned with von Neumann algebras associated with hyperbolic groups. The content of this chapter was obtained in collaboration with A. Carderi. We use the geometry of hyperbolic groups to provide new examples of maximal amenable (and yet type I) subalgebras in type II_1 factors.
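For context, the "group measure space construction" mentioned above is a standard way to pass from dynamics to operator algebras; the following is general textbook background, not a statement taken from the thesis:

```latex
% Group measure space construction (Murray-von Neumann): a non-singular
% action of a countable discrete group \Gamma on (X, \mu) yields the
% crossed product von Neumann algebra
\[
  M \;=\; L^\infty(X,\mu) \rtimes \Gamma ,
\]
% and when the action is (essentially) free, L^\infty(X,\mu) sits inside
% M as a Cartan subalgebra; if it is moreover ergodic, M is a factor.
```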
45

Determinismus, path-dependence a nejistota pohledem postkeynesovské ekonomie / Determinism, Path-dependence and Uncertainty: A Post-Keynesian Perspective

Máslo, Lukáš January 2011 (has links)
The thesis analyzes conceptual-methodological issues examined within the framework of post-Keynesian economics. The author's goal is to supply a solution to the problem of defining determinism/non-determinism for both deterministic and stochastic systems, and to the prevailing confusion surrounding the notion of reversibility/irreversibility in both path-dependent and traditional-equilibrist systems. The author regards the determinism/non-determinism problem as essentially linked to the problem of defining fundamental uncertainty. The key issues are identified as the "problem of a generator of endogenous shocks" and the "selection-creation problem"; in the author's view, solving these enables us to take a stand on the validity or invalidity of the classical dichotomy. Davidson's interpretation of ergodicity and O'Donnell's critique of it are presented and, drawing on the latter along with the Álvarez-Ehnts critique, the author rejects Davidson's simplifying pattern according to which neoclassical economics is based on the ergodic axiom. The author suggests a solution to the "selection-creation problem" that consists in distinguishing epistemological determinism from ontological determinism on the one hand, and epistemological determinism from epistemological non-determinism on the other. While selection is a characteristic feature of epistemological determinism and, in effect, of the realm of "fundamental certainty", creation is referred to by the author as a characteristic feature of epistemological non-determinism, i.e., in effect, of the realm of fundamental uncertainty. The author regards the "problem of a generator of endogenous shocks" as a self-contradictory notion, based on the principle of causality and the law of non-contradiction, and suggests a solution consisting in the rejection of the concept of shock endogeneity. At the same time, the author rejects Davidson's "fundamental neoclassical article of faith" rhetoric, based on the first-cause argument implied by the principle of causality. In opposition to Davidson, the author regards fundamental uncertainty as being of an essentially epistemological nature, consisting in our ignorance of the "ultimate law of change", the "Divine formula". Unlike O'Donnell, however, who stresses the element of epistemological uncertainty in his epistemological approach to uncertainty, the author also stresses the element of ontological certainty, consisting in our knowledge of the existence of the "Divine formula", apart from our epistemological uncertainty.
46

Theoretical contributions to Monte Carlo methods, and applications to Statistics / Contributions théoriques aux méthodes de Monte Carlo, et applications à la Statistique

Riou-Durand, Lionel 05 July 2019 (has links)
The first part of this thesis concerns the inference of unnormalized statistical models. We study two sampling-based inference methods: Monte Carlo MLE (Geyer, 1994) and Noise Contrastive Estimation (Gutmann and Hyvarinen, 2010). The latter method was supported by numerical evidence of improved stability, but no theoretical results had yet been proven. We prove that Noise Contrastive Estimation is more robust to the choice of the sampling distribution, and we assess the gain in accuracy as a function of the computational budget. The second part of this thesis concerns approximate sampling from high-dimensional distributions. The performance of most samplers deteriorates quickly as the dimension increases, but several methods have proven effective (e.g. Hamiltonian Monte Carlo, Langevin Monte Carlo). Following recent work (Eberle et al., 2017; Cheng et al., 2018), we study discretizations of the kinetic Langevin diffusion process and establish explicit rates of convergence towards the sampling distribution that depend only polynomially on the dimension. Our work improves and extends the results established by Cheng et al. for log-concave densities.
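To make "discretization of the kinetic Langevin diffusion" concrete, here is a minimal sketch of an Euler-type scheme targeting a standard Gaussian. The step size, friction parameter, and update order are illustrative assumptions; this is not the specific scheme analyzed in the thesis.

```python
import numpy as np

def grad_U(x):
    # Potential U(x) = ||x||^2 / 2, so the target density is standard Gaussian.
    return x

def kinetic_langevin(n_steps=10_000, dim=10, h=0.05, gamma=1.0, seed=0):
    """Euler-type discretization of the kinetic (underdamped) Langevin diffusion
    dX = V dt,  dV = -gamma V dt - grad_U(X) dt + sqrt(2 gamma) dW."""
    rng = np.random.default_rng(seed)
    x = np.zeros(dim)
    v = np.zeros(dim)
    samples = np.empty((n_steps, dim))
    for t in range(n_steps):
        v += -gamma * v * h - grad_U(x) * h + np.sqrt(2 * gamma * h) * rng.standard_normal(dim)
        x += v * h  # position updated with the refreshed velocity
        samples[t] = x
    return samples

samples = kinetic_langevin()
print(samples[2000:].mean(), samples[2000:].var())  # should be near 0 and 1
```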
47

Variace frakcionálních procesů / Variation of Fractional Processes

Kiška, Boris January 2022 (has links)
In this thesis, we study various notions of variation of certain stochastic processes, namely $p$-variation, pathwise $p$-th variation along a sequence of partitions, and $p$-th variation along a sequence of partitions. We study these concepts for fractional Brownian motions and Rosenblatt processes. The fractional Brownian motion is a Gaussian process that has been intensively developed and studied over the last two decades because of its importance in modeling various phenomena. The Rosenblatt process, on the other hand, a non-Gaussian process that can be used to model non-Gaussian fluctuations, has not received as much attention as the fractional Brownian motion. For that reason, we concentrate on this process in the thesis and present some original results dealing with its ergodicity, $p$-variation, pathwise $p$-th variation along a sequence of partitions, and $p$-th variation along a sequence of partitions.
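As a rough numerical illustration of "$p$-th variation along a sequence of partitions" (a generic sketch under assumed parameters, not code from the thesis): for a fractional Brownian motion with Hurst index $H$, the critical exponent is $p = 1/H$; below it the variation blows up along refining partitions, above it the variation vanishes. The Cholesky simulation method is a standard choice assumed here for simplicity.

```python
import numpy as np

def fbm_path(n, hurst, seed=0):
    """Sample fractional Brownian motion on [0, 1] at n+1 grid points via
    Cholesky factorization of its covariance (O(n^2) memory, fine for small n)."""
    t = np.linspace(0, 1, n + 1)[1:]
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s**(2 * hurst) + u**(2 * hurst) - np.abs(s - u)**(2 * hurst))
    rng = np.random.default_rng(seed)
    path = np.linalg.cholesky(cov) @ rng.standard_normal(n)
    return np.concatenate([[0.0], path])

def pth_variation(path, p):
    """p-th variation of the path along the partition given by the grid."""
    return np.sum(np.abs(np.diff(path))**p)

H = 0.3
X = fbm_path(2**10, H)
for p in (1 / H - 0.5, 1 / H, 1 / H + 0.5):
    print(f"p = {p:.2f}: {pth_variation(X, p):.4f}")
# Along refining partitions the p-th variation diverges for p < 1/H and
# vanishes for p > 1/H; p = 1/H is the critical exponent.
```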
48

Semiparametric Bayesian Approach using Weighted Dirichlet Process Mixture For Finance Statistical Models

Sun, Peng 07 March 2016 (has links)
The Dirichlet process mixture (DPM) has been widely used as a flexible prior in the nonparametric Bayesian literature, and the weighted Dirichlet process mixture (WDPM) can be viewed as an extension of DPM that relaxes model distribution assumptions. At the same time, WDPM requires specifying weight functions and can incur extra computational burden. In this dissertation, we develop more efficient and flexible WDPM approaches under three research topics. The first is semiparametric cubic spline regression, where we adopt a nonparametric prior for the error terms in order to handle heterogeneity of measurement errors or an unknown mixture distribution automatically. The second provides an innovative way to construct the weight function and illustrates some desirable properties and the computational efficiency of this weight under a semiparametric stochastic volatility (SV) model. The last develops a WDPM approach for the Generalized AutoRegressive Conditional Heteroskedasticity (GARCH) model (as an alternative to the SV model) and proposes a new model evaluation approach for GARCH that produces easier-to-interpret results than the canonical marginal likelihood approach.

In the first topic, the response variable is modeled as the sum of three parts. One part is a linear function of covariates that enter the model parametrically. The second part is an additive nonparametric model: covariates whose relationships to the response variable are unclear are included nonparametrically using Lancaster and Šalkauskas bases. The third part consists of error terms whose means and variances are assumed to follow nonparametric priors. We therefore call our model a dual-semiparametric regression, since nonparametric ideas are used both in modeling the mean and in modeling the error terms. Instead of assuming that all error terms follow the same prior, as in DPM, our WDPM provides multiple candidate priors for each observation to select with certain probabilities. Each such probability (or weight) is modeled from relevant predictive covariates using a Gaussian kernel. We propose several different WDPMs using different weights that depend on distances in the covariates. We provide efficient Markov chain Monte Carlo (MCMC) algorithms and compare our WDPMs to a parametric model and the DPM model in terms of Bayes factors, using simulation and an empirical study.

In the second topic, we propose an innovative way to construct the weight function for WDPM and apply it to the SV model. The SV model is adopted for time series data where the constant variance assumption is violated. One essential issue is to specify the distribution of the conditional return. We assume a WDPM prior for the conditional return and propose a new way to model the weights. Our approach has several advantages, including computational efficiency compared to the weight constructed using a Gaussian kernel. We list six properties of the proposed weight function and provide proofs of them. Because of the additional Metropolis-Hastings steps introduced by the WDPM prior, we find conditions that ensure the uniform geometric ergodicity of the transition kernel in our MCMC. Due to the existence of zero values in asset price data, our SV model is semiparametric: we employ the WDPM prior for non-zero values and a parametric prior for zero values.

In the third topic, we develop the WDPM approach for GARCH-type models and compare different types of weight functions, including the innovative method proposed in the second topic. The GARCH model can be viewed as an alternative to SV for analyzing daily stock price data where the constant variance assumption does not hold. While the response variable of our SV models is the transformed log return (based on a log-square transformation), GARCH directly models the log return itself. This means that, theoretically speaking, we can predict stock returns using GARCH models, whereas this is not feasible with SV models, because SV models ignore the sign of log returns and provide predictive densities for the squared log return only. Motivated by this property, we propose a new model evaluation approach called back-testing return (BTR) particularly for GARCH. The BTR approach produces model evaluation results that are easier to interpret than the marginal likelihood, and it is straightforward to draw conclusions about model profitability from it. Since the BTR approach is applicable only to GARCH, we also illustrate how to properly calculate the marginal likelihood to make comparisons between GARCH and SV. Based on our MCMC algorithms and model evaluation approaches, we have conducted a large number of model fittings to compare models in both simulation and empirical studies. / Ph. D.
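For readers unfamiliar with the GARCH model referenced above, here is a minimal GARCH(1,1) simulation sketch using the standard textbook recursion with illustrative parameter values; it shows the volatility clustering the abstract's modeling targets, and is not the dissertation's Bayesian WDPM machinery.

```python
import numpy as np

def simulate_garch11(n, omega=0.05, alpha=0.1, beta=0.85, seed=0):
    """Simulate log returns r_t = sigma_t * z_t with
    sigma_t^2 = omega + alpha * r_{t-1}^2 + beta * sigma_{t-1}^2."""
    rng = np.random.default_rng(seed)
    r = np.zeros(n)
    sigma2 = np.full(n, omega / (1 - alpha - beta))  # unconditional variance
    for t in range(1, n):
        sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
        r[t] = np.sqrt(sigma2[t]) * rng.standard_normal()
    return r, sigma2

r, sigma2 = simulate_garch11(5000)
# Volatility clustering: squared returns are autocorrelated even though
# the returns themselves are (nearly) uncorrelated.
print(np.corrcoef(r[1:] ** 2, r[:-1] ** 2)[0, 1])
print(np.corrcoef(r[1:], r[:-1])[0, 1])
```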
49

L^2-Spektraltheorie für Markov-Operatoren / L^2 spectral theory for Markov operators

Wübker, Achim 07 January 2008 (has links)
No description available.
50

Ergodicité et fonctions propres du laplacien sur les grands graphes réguliers / Ergodicity and eigenfunctions of the Laplacian on large regular graphs

Le Masson, Etienne 24 September 2013 (has links)
In this thesis, we study concentration properties of eigenfunctions of the discrete Laplacian on regular graphs of fixed degree as the number of vertices tends to infinity. The study is carried out in analogy with the quantum ergodicity theory on manifolds. We construct a pseudo-differential calculus on regular trees by defining symbol classes and associated operators and proving a number of properties of these classes of symbols and operators. In particular, we prove that the operators are bounded on L², and we give adjoint and product formulas. We then use this theory to prove a quantum ergodicity theorem for sequences of regular graphs whose number of vertices tends to infinity: a delocalization result for most eigenfunctions in the large-scale limit. The graphs are assumed to be expanders with few short cycles, two hypotheses that are almost surely satisfied by sequences of random regular graphs.
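The delocalization phenomenon described above can be observed numerically; here is a hedged sketch (illustrative sizes and a standard inverse-participation-ratio diagnostic, not the thesis's method) that computes Laplacian eigenvectors of a random regular graph and checks how spread out they are.

```python
import numpy as np
import networkx as nx

# Delocalization of discrete-Laplacian eigenvectors on a random regular graph:
# for a unit-norm eigenvector psi, the inverse participation ratio
# sum_i psi_i^4 is 1/n when mass is perfectly spread and 1 when localized
# on a single vertex.
n, d = 2000, 3
G = nx.random_regular_graph(d, n, seed=0)
L = nx.laplacian_matrix(G).toarray().astype(float)
eigvals, eigvecs = np.linalg.eigh(L)

ipr = np.sum(eigvecs**4, axis=0)  # one value per eigenvector (columns of eigvecs)
print(f"1/n        = {1 / n:.2e}")
print(f"median IPR = {np.median(ipr):.2e}")  # expected O(1/n): delocalized
print(f"max IPR    = {ipr.max():.2e}")
```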
