371 |
From here to infinity: sparse finite versus Dirichlet process mixtures in model-based clustering
Frühwirth-Schnatter, Sylvia; Malsiner-Walli, Gertraud. January 2019 (has links) (PDF)
In model-based clustering, mixture models are used to group data points into clusters. A useful concept introduced for Gaussian mixtures by Malsiner Walli et al. (Stat Comput 26:303-324, 2016) is that of sparse finite mixtures, where the prior distribution on the weight distribution of a mixture with K components is chosen in such a way that a priori the number of clusters in the data is random and is allowed to be smaller than K with high probability. The number of clusters is then inferred a posteriori from the data. The present paper makes the following contributions in the context of sparse finite mixture modelling. First, it is illustrated that the concept of sparse finite mixtures is very generic and easily extended to cluster various types of non-Gaussian data, in particular discrete data and continuous multivariate data arising from non-Gaussian clusters. Second, sparse finite mixtures are compared to Dirichlet process mixtures with respect to their ability to identify the number of clusters. For both model classes, a random hyper prior is considered for the parameters determining the weight distribution. By suitable matching of these priors, it is shown that the choice of this hyper prior is far more influential on the cluster solution than whether a sparse finite mixture or a Dirichlet process mixture is used.
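A small prior simulation, not taken from the paper, can illustrate the sparse-finite-mixture idea: under a symmetric Dirichlet prior on the K mixture weights, shrinking the concentration parameter (called e0 here, an assumed name for illustration) makes the prior number of non-empty components among n observations much smaller than K.

```python
# Prior simulation sketch (not the authors' code): with a symmetric
# Dirichlet(e0) prior on the weights of a K-component mixture, a small e0
# leaves many components empty a priori, so the number of "filled" clusters
# among n observations is random and typically well below K.
import numpy as np

rng = np.random.default_rng(1)
K, n, n_sims = 10, 500, 1000

def prior_num_clusters(e0):
    counts = []
    for _ in range(n_sims):
        w = rng.dirichlet(np.full(K, e0))      # mixture weights drawn from the prior
        z = rng.choice(K, size=n, p=w)         # component labels for n observations
        counts.append(len(np.unique(z)))       # number of non-empty components
    return np.mean(counts)

for e0 in (4.0, 1.0, 0.01):                    # sparse prior: e0 much smaller than 1
    print(f"e0 = {e0:5.2f}: prior expected number of clusters ~ {prior_num_clusters(e0):.2f}")
```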
|
372 |
On singular solutions of the Gelfand problem.
January 1994 (has links)
by Chu Lap-foo. Thesis (M.Phil.), Chinese University of Hong Kong, 1994. Includes bibliographical references (leaves 68-69).
Contents: Introduction;
Chapter 1: Basic Properties of Singular Solutions (1.1 An Asymptotic Radial Result; 1.2 Local Uniqueness of Radial Solutions);
Chapter 2: Dirichlet Problem: Existence Theory I (2.1 Formulation; 2.2 Explicit Solutions on Balls; 2.3 The Moser Inequality; 2.4 Existence of Solutions in General Domains; 2.5 Spectrum of the Problem);
Chapter 3: Dirichlet Problem: Existence Theory II (3.1 Mountain Pass Lemma; 3.2 Existence of Second Solution);
Chapter 4: Dirichlet Problem: Non-Existence Theory (4.1 Upper Bound of λ* in Star-Shaped Domains; 4.2 Numerical Values);
Chapter 5: The Neumann Problem (5.1 Existence Theory I; 5.2 Existence Theory II);
Chapter 6: The Schwarz Symmetrization (6.1 Definitions and Basic Properties; 6.2 Inequalities Related to Symmetrization; 6.3 An Application to P.D.E.);
Bibliography.
|
373 |
Measuring the information content of Riksbank meeting minutes
Fröjd, Sofia. January 2019 (has links)
As the amount of information available on the internet has increased sharply in recent years, methods for measuring and comparing text-based information are gaining popularity on financial markets. Text mining and natural language processing have become important tools for classifying large collections of texts or documents. One field of application is topic modelling of the minutes from central banks' monetary policy meetings, which tend to be about topics such as "inflation", "economic growth" and "rates". The central bank of Sweden is the Riksbank, which holds six monetary policy meetings per year at which the members of the Executive Board decide on the new repo rate. Two weeks later, the minutes of the meeting are published and information regarding future monetary policy is given to the market in the form of text. Before release this information is unknown to the market, so it is potentially market-sensitive. Using Latent Dirichlet Allocation (LDA), an algorithm for uncovering latent topics in documents, the topics in the meeting minutes can be identified and quantified. In this project, eight topics were found, concerning, among other things, inflation, rates, household debt and economic development. An important factor in the analysis of central bank communication is the underlying tone of the discussions. It is common to classify central bankers as hawkish or dovish: hawkish members of the board tend to favour tightening monetary policy and rate hikes, while more dovish members advocate a more expansive monetary policy and rate cuts. Thus, analysing the tone of the minutes can give an indication of future moves of the monetary policy rate. The purpose of this project is to provide a fast method for analysing the minutes of the Riksbank monetary policy meetings. The project is divided into two parts. First, an LDA model was trained to identify the topics in the minutes, and was then used to compare the content of two consecutive sets of meeting minutes. Next, the sentiment was measured as a degree of hawkishness or dovishness. This was done by categorising each sentence in terms of its content and then counting words with hawkish or dovish sentiment. The resulting net score assigns larger values to more hawkish minutes and was shown to follow the repo rate path well. At the time of the release of the minutes the new repo rate is already known, but the net score does give an indication of the stance of the board.
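An illustrative sketch, not the thesis code, of the two steps described above: topic extraction with LDA and a simple hawkish/dovish net score. The documents and the sentiment word lists are hypothetical placeholders; the thesis uses the published Riksbank minutes and its own lexicon.

```python
# Sketch only: LDA topic modelling plus a word-count net score.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

minutes = [
    "the board discussed inflation and decided to raise the repo rate",
    "weak growth and household debt argue for keeping the rate unchanged",
]  # placeholder documents

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(minutes)

lda = LatentDirichletAllocation(n_components=8, random_state=0)  # 8 topics, as in the thesis
doc_topics = lda.fit_transform(X)          # per-document topic proportions
print(doc_topics.round(2))

# Hypothetical sentiment word lists; purely illustrative.
HAWKISH = {"raise", "hike", "tighten", "inflationary"}
DOVISH = {"cut", "ease", "stimulus", "unchanged"}

def net_score(text):
    words = text.lower().split()
    return sum(w in HAWKISH for w in words) - sum(w in DOVISH for w in words)

print([net_score(doc) for doc in minutes])   # larger values = more hawkish minutes
```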
|
374 |
Closure of the diffusion and linear elasticity functionals for the topology of Mosco-convergence
Camar-Eddine, Mohamed. 11 March 2002 (has links) (PDF)
The objective of this thesis is to identify all possible limits, with respect to Mosco-convergence, of sequences of diffusion functionals or of isotropic linear elasticity functionals. Although each element of these sequences is a strongly local functional, it is well known that, without a uniform upper bound on the diffusion coefficients in the scalar case, or on the elasticity coefficients in the vector case, the limit may contain a non-local term and a strange term. In the vector case, the limit functional may even depend on the second gradient of the displacement. From a mechanical point of view, the effective properties of a composite material can differ radically from those of its constituents. Umberto Mosco showed that every limit of a sequence of diffusion functionals is a Dirichlet form. The contribution of the work presented in the first part of this thesis is a positive answer to the inverse problem: we show that every Dirichlet form is the limit of a sequence of diffusion functionals. A crucial step is the explicit construction of a composite material whose effective properties contain an elementary non-local interaction. Progressively more complex interactions are then obtained, until all Dirichlet forms are reached. The second part of this work treats the vector case. There we prove that the closure of the isotropic linear elasticity functionals is the set of all non-negative, objective, lower semicontinuous quadratic forms. The proof of this result, which is far from being a simple generalization of the scalar case, starts from a result comparable to the scalar one but then requires a completely different approach.
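For reference, Mosco-convergence of a sequence of functionals $F_n$ to $F$ on a Hilbert space $H$ is the standard two-condition notion (a reminder of the general definition, with notation chosen here for illustration):

```latex
% (i) lower bound along weakly convergent sequences,
% (ii) existence of a strongly convergent recovery sequence.
\[
\begin{aligned}
&\text{(i)}\quad u_n \rightharpoonup u \ \text{weakly in } H
  \;\Longrightarrow\; \liminf_{n\to\infty} F_n(u_n) \ \ge\ F(u),\\[2pt]
&\text{(ii)}\quad \forall\, u \in H \ \exists\, (u_n):\ u_n \to u \ \text{strongly in } H
  \ \text{and}\ \limsup_{n\to\infty} F_n(u_n) \ \le\ F(u).
\end{aligned}
\]
```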
|
375 |
Modelling and use of GNSS pseudorange errors in transport environments to improve localisation performance
Viandier, Nicolas. 07 June 2011 (has links) (PDF)
GNSS are now widely used in the transport domain, and the scientific community wishes to develop applications requiring high accuracy, availability and integrity. These systems provide a continuous positioning service. Performance is defined by the system parameters, but also by the propagation environment through which the signals travel. Propagation characteristics in the atmosphere are well known; it is, however, more difficult to predict the impact of the environment close to the antenna, which is made up of urban obstacles. The line of research pursued by LEOST and LAGIS consists in characterising this environment and using that information to complement the GNSS information. This approach aims to reduce the number of sensors and hence the complexity and cost of the system. The research carried out in this thesis mainly proposes more realistic models of the pseudorange errors and of the reception state. After a step of characterising the error, several pseudorange error models are proposed: the finite Gaussian mixture and the Dirichlet process mixture. The model parameters are estimated jointly with the state vector containing the position, using a suitable filtering solution such as the Rao-Blackwellised particle filter. Letting the noise model evolve makes it possible to adapt to the environment and thus to provide a more accurate localisation. The different stages of this work have been tested and validated on simulated and real data.
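A minimal sketch, not the thesis implementation, of fitting the two noise models mentioned above, a finite Gaussian mixture and a (truncated) Dirichlet process mixture, to pseudorange residuals. The synthetic data and component counts are assumptions made for illustration.

```python
# Sketch only: compare a finite Gaussian mixture and a DP mixture noise model.
import numpy as np
from sklearn.mixture import GaussianMixture, BayesianGaussianMixture

rng = np.random.default_rng(0)
# Synthetic residuals: open-sky receiver noise plus a multipath-like biased component.
errors = np.concatenate([
    rng.normal(0.0, 1.5, 800),      # nominal noise (metres)
    rng.normal(8.0, 4.0, 200),      # hypothetical multipath/NLOS component
]).reshape(-1, 1)

fgm = GaussianMixture(n_components=2, random_state=0).fit(errors)
dpm = BayesianGaussianMixture(
    n_components=10,                                  # truncation level
    weight_concentration_prior_type="dirichlet_process",
    random_state=0,
).fit(errors)

print("finite mixture means:", fgm.means_.ravel())
print("DP mixture active weights:", np.round(dpm.weights_[dpm.weights_ > 0.01], 3))
```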
|
376 |
Factorization methods for partial differential equations.
Champagne, Isabelle. 11 October 2004 (has links) (PDF)
This thesis presents an original study of the propagation of acoustic waves in a waveguide. The method consists in factorizing the wave equation by means of the invariant embedding technique: a moving boundary, corresponding to a cross-section of the guide, is introduced into the domain, and the problem is solved on the part of the guide lying between this section and one of its ends. This yields a coupled system of differential equations and brings out an operator of Dirichlet-to-Neumann type, which satisfies a Riccati equation. This operator is then studied by means of a representation formula: it is similar (conjugate) to a linear semigroup via a homographic transformation.
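Schematically, in an abstract evolution formulation along the axial variable (the notation below is chosen here for illustration and is not taken from the thesis), writing the guide problem as $u''(x) + A u(x) = 0$ with $A$ acting on the transverse section, and defining the Dirichlet-to-Neumann operator on the moving section by $u'(x) = P(x)\,u(x)$, a single differentiation gives the operator Riccati equation:

```latex
% From u' = P u and u'' = -A u:
%   u'' = P' u + P u' = (P' + P^2) u = -A u,
% hence the operator Riccati equation on the moving section
\[
P'(x) + P(x)^2 + A = 0 .
\]
```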
|
377 |
A Faber-Krahn-type Inequality for Regular Trees
Leydold, Josef. January 1996 (has links) (PDF)
In recent years some results for the Laplacian on manifolds have been shown to hold also for the graph Laplacian, e.g. Courant's nodal domain theorem or Cheeger's inequality. Friedman (Some geometric aspects of graphs and their eigenfunctions, Duke Math. J. 69 (3), pp. 487-525, 1993) described the idea of a "graph with boundary". With this concept it is possible to formulate Dirichlet and Neumann eigenvalue problems. Friedman also conjectured another "classical" result for manifolds, the Faber-Krahn theorem, for regular bounded trees with boundary. The Faber-Krahn theorem states that among all bounded domains $D \subset \mathbb{R}^n$ with fixed volume, a ball has the lowest first Dirichlet eigenvalue. In this paper we show such a result for regular trees by using a rearrangement technique. We give restrictive conditions for trees with boundary under which the first Dirichlet eigenvalue is minimized for a given "volume". Surprisingly, Friedman's conjecture is false: in general these trees are not "balls", but we show that they are similar to "balls". (author's abstract) / Series: Preprint Series / Department of Applied Statistics and Data Processing
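In symbols, the classical Faber-Krahn theorem referred to above reads:

```latex
% Among all bounded domains of fixed volume, the ball minimizes the
% first Dirichlet eigenvalue of the Laplacian.
\[
\lambda_1(D) \;\ge\; \lambda_1(B)
\qquad \text{for every bounded domain } D \subset \mathbb{R}^n
\text{ with } |D| = |B|,
\]
% with equality (up to sets of measure zero) only when D is a ball B.
```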
|
378 |
Advances in Bayesian Modelling and Computation: Spatio-Temporal Processes, Model Assessment and Adaptive MCMC
Ji, Chunlin. January 2009 (has links)
The modelling and analysis of complex stochastic systems with increasingly large data sets, state-spaces and parameters provides major stimulus to research in Bayesian nonparametric methods and Bayesian computation. This dissertation presents advances in both nonparametric modelling and statistical computation stimulated by challenging problems of analysis in complex spatio-temporal systems and core computational issues in model fitting and model assessment.

The first part of the thesis, represented by chapters 2 to 4, concerns novel nonparametric Bayesian mixture models for spatial point processes, with advances in modelling, computation and applications in biological contexts. Chapter 2 describes and develops models for spatial point processes in which the point outcomes are latent, where indirect observations related to the point outcomes are available, and in which the underlying spatial intensity functions are typically highly heterogeneous. Spatial intensities of inhomogeneous Poisson processes are represented via flexible nonparametric Bayesian mixture models. Computational approaches are presented for this new class of spatial point process mixtures and extended to the context of unobserved point process outcomes. Two examples drawn from a central, motivating context, that of immunofluorescence histology analysis in biological studies generating high-resolution imaging data, demonstrate the modelling approach and computational methodology. Chapters 3 and 4 extend this framework to define a class of flexible Bayesian nonparametric models for inhomogeneous spatio-temporal point processes, adding dynamic models for underlying intensity patterns. Dependent Dirichlet process mixture models are introduced as core components of this new time-varying spatial model. Utilizing such nonparametric mixture models for the spatial process intensity functions allows the introduction of time variation via dynamic, state-space models for parameters characterizing the intensities. Bayesian inference and model-fitting are addressed via novel particle filtering ideas and methods. Illustrative simulation examples include studies in problems of extended target tracking and substantive data analysis in cell fluorescent microscopic imaging tracking problems.

The second part of the thesis, consisting of chapters 5 and 6, concerns advances in computational methods for some core and generic Bayesian inferential problems. Chapter 5 develops a novel approach to estimation of upper and lower bounds for marginal likelihoods in Bayesian modelling using refinements of existing variational methods. Traditional variational approaches only provide lower bound estimation; this new lower/upper bound analysis is able to provide accurate and tight bounds in many problems, and so facilitates more reliable computation for Bayesian model comparison while also providing a way to assess the adequacy of variational densities as approximations to exact, intractable posteriors. The advances also include demonstration of the significant improvements that may be achieved in marginal likelihood estimation by marginalizing some parameters in the model. A distinct contribution to Bayesian computation is covered in Chapter 6. This concerns a generic framework for designing adaptive MCMC algorithms, emphasizing the adaptive Metropolized independence sampler and an effective adaptation strategy using a family of mixture distribution proposals. This work is coupled with the development of a novel adaptive approach to computation in nonparametric modelling with large data sets; here a sequential learning approach is defined that iteratively utilizes smaller data subsets. Under the general framework of importance sampling based marginal likelihood computation, the proposed adaptive Monte Carlo method and sequential learning approach can facilitate improved accuracy in marginal likelihood computation. The approaches are exemplified in studies of both synthetic data examples and a real data analysis arising in astro-statistics.

Finally, chapter 7 summarizes the dissertation and discusses possible extensions of the specific modelling and computational innovations, as well as potential future work. / Dissertation
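As a rough illustration of the adaptive Metropolized independence sampler idea discussed for chapter 6, the sketch below periodically re-fits a single Gaussian proposal to past draws of a toy one-dimensional target. The dissertation's scheme uses a family of mixture proposals, so this is only a simplified analogue; the target, schedule and tuning values are illustrative assumptions.

```python
# Minimal sketch of an adaptive Metropolized independence sampler (toy example).
import numpy as np
from scipy import stats

def log_target(x):
    # Illustrative target: unnormalized mixture of two Gaussians.
    return np.logaddexp(stats.norm.logpdf(x, -2, 1), stats.norm.logpdf(x, 3, 0.8))

rng = np.random.default_rng(0)
n_iter, adapt_every = 5000, 500
mu, sigma = 0.0, 5.0                      # initial independence proposal
x = 0.0
samples = []

for t in range(1, n_iter + 1):
    prop = rng.normal(mu, sigma)
    # Independence-sampler acceptance ratio: pi(x') q(x) / (pi(x) q(x')).
    log_alpha = (log_target(prop) + stats.norm.logpdf(x, mu, sigma)
                 - log_target(x) - stats.norm.logpdf(prop, mu, sigma))
    if np.log(rng.uniform()) < log_alpha:
        x = prop
    samples.append(x)
    if t % adapt_every == 0:              # adapt the proposal to past draws
        mu, sigma = np.mean(samples), max(np.std(samples), 0.5)

print("posterior mean estimate:", np.mean(samples[1000:]))
```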
|
379 |
Problems in Classical Potential Theory with Applications to Mathematical Physics
Lundberg, Erik. 01 January 2011 (has links)
In this thesis we are interested in some problems regarding harmonic functions. The topics are divided into three chapters.
Chapter 2 concerns singularities developed by solutions of the Cauchy problem for a holomorphic elliptic equation, especially Laplace's equation. The principal motivation is to locate the singularities of the Schwarz potential. The results have direct applications to Laplacian growth (or the Hele-Shaw problem).
Chapter 3 concerns the Dirichlet problem when the boundary is an algebraic set and the data is a polynomial or a real-analytic function. We pursue some questions related to the Khavinson-Shapiro conjecture. A main topic of interest is analytic continuability of the solution outside its natural domain.
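A classical instance of the phenomenon behind the Khavinson-Shapiro conjecture, added here only as an illustration: on the unit disk, polynomial boundary data yields a polynomial, hence entire, solution of the Dirichlet problem. For the data $x^2$ on the circle $x^2 + y^2 = 1$,

```latex
\[
u(x,y) = \tfrac{1}{2}\bigl(x^2 - y^2 + 1\bigr), \qquad
\Delta u = 0, \qquad
u\big|_{x^2+y^2=1} = x^2 ,
\]
% since x^2 = (x^2 - y^2)/2 + (x^2 + y^2)/2 on the unit circle,
% so the solution continues analytically far beyond the disk.
```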
Chapter 4 concerns certain complex-valued harmonic functions and their zeros. The special cases we consider apply directly in astrophysics to the study of multiple-image gravitational lenses.
|
380 |
Domain decomposition methods in geomechanics
Florez Guzman, Horacio Antonio. 11 October 2012 (has links)
Hydrocarbon production or injection of fluids in the reservoir can produce changes in the rock stresses and in-situ geomechanics, potentially leading to compaction and subsidence with harmful effects on wells, cap-rock, faults, and the surrounding environment. In order to tackle these changes and their impact, accurate simulations are essential.
The Mortar Finite Element Method (MFEM) has been demonstrated to be a powerful technique for formulating a weak continuity condition at the interface of sub-domains in which different meshes (non-conforming or hybrid) and/or variational approximations are used. This is particularly suitable when coupling different physics on different domains, such as elasticity and poroelasticity, in the context of coupled flow and geomechanics.
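In symbols, the weak continuity condition imposed by a mortar method across an interface $\Gamma$ between two sub-domains can be written as follows (notation chosen here for illustration):

```latex
% u_1, u_2: traces of the sub-domain solutions on the interface Gamma;
% M_h: the mortar (Lagrange multiplier) space defined on Gamma.
\[
\int_\Gamma \bigl(u_1 - u_2\bigr)\,\mu \; d\Gamma \;=\; 0
\qquad \forall\, \mu \in M_h ,
\]
% i.e. the jump of the solution across the interface is orthogonal to the
% mortar space, rather than being forced to vanish pointwise.
```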
In this dissertation, popular Domain Decomposition Methods (DDM) are implemented in order to carry out large simulations by taking full advantage of current parallel computer architectures. Different solution schemes can be defined depending upon the way information is exchanged across sub-domain interfaces. Three different schemes, namely Dirichlet-Neumann (DN), Neumann-Neumann (NN) and MFEM, are tested, and the advantages and disadvantages of each are identified.
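As a toy illustration of the DN scheme (not the dissertation's geomechanics code), the sketch below solves $-u'' = 1$ on $(0,1)$ with homogeneous Dirichlet data, split at the interface $x = 0.5$: a Dirichlet solve on the left sub-domain supplies the interface flux for a Neumann solve on the right, and the interface value is updated with relaxation. The discretization and relaxation parameter are illustrative choices.

```python
# 1-D Dirichlet-Neumann iteration for -u'' = 1, u(0) = u(1) = 0.
import numpy as np

n = 49                                    # interior points in Omega_1 = (0, 0.5)
h = 0.5 / (n + 1)                         # mesh width
f = 1.0                                   # right-hand side
lam = 0.0                                 # interface value u(0.5), to be found
theta = 0.5                               # relaxation parameter

def tridiag(m):
    """Standard finite-difference matrix for -u'' on m unknowns."""
    return (np.diag(np.full(m, 2.0)) - np.diag(np.ones(m - 1), 1)
            - np.diag(np.ones(m - 1), -1)) / h**2

def dirichlet_step(lam):
    """Omega_1: -u'' = f, u(0) = 0, u(0.5) = lam; return du/dx at the interface."""
    A, b = tridiag(n), np.full(n, f)
    b[-1] += lam / h**2                   # Dirichlet value at x = 0.5
    u = np.linalg.solve(A, b)
    return (3.0 * lam - 4.0 * u[-1] + u[-2]) / (2.0 * h)   # one-sided derivative

def neumann_step(flux):
    """Omega_2: -u'' = f, u'(0.5) = flux, u(1) = 0; return the trace u(0.5)."""
    m = n + 1                             # unknowns at x = 0.5, 0.5+h, ..., 1-h
    A, b = tridiag(m), np.full(m, f)
    A[0, 0], A[0, 1] = 2.0 / h**2, -2.0 / h**2    # ghost-node Neumann row
    b[0] = f - 2.0 * flux / h
    u = np.linalg.solve(A, b)
    return u[0]

for _ in range(20):                       # DN iteration with relaxation
    lam = theta * neumann_step(dirichlet_step(lam)) + (1.0 - theta) * lam

print("interface value:", round(lam, 5), "(exact value 0.125)")
```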
As a first contribution, the MFEM is extended to deal with curved interfaces represented by Non-Uniform Rational B-Splines (NURBS) curves and surfaces. The goal is to have a more robust geometrical representation for mortar spaces, which allows gluing non-conforming interfaces on realistic geometries. The resulting mortar saddle-point problem is decoupled by means of the DN- and NN-DDM.
As a second contribution, a reservoir geometry reconstruction procedure based on NURBS surfaces is presented. The technique builds a robust, piecewise continuous geometrical representation that can be exploited by the MFEM in order to tackle realistic problems. Tensor-product meshes are usually propagated from the reservoir in a conforming way into its surroundings, which makes non-matching interfaces highly attractive in this case.
In the context of reservoir compaction and subsidence estimation, it is common to deal with serial legacy codes for flow. Indeed, major reservoir simulators such as compositional codes lack parallelism. Another issue is the fact that, generally speaking, the flow and mechanics domains are different. To overcome this limitation, a serial-parallel approach is proposed in order to couple serial flow codes with our parallel mechanics code by means of iterative coupling. Concrete results on loose coupling are presented as a third contribution.
As a final contribution, the DN-DDM is applied to couple elasticity and plasticity, which seems very promising for speeding up computations involving poroplasticity.
Several examples of coupling of elasticity, poroelasticity, and plasticity, ranging from near-wellbore applications to field-level subsidence computations, help to show that the proposed methodology can handle problems of practical interest. In order to facilitate the implementation of complex workflows, an advanced Python wrapper interface that provides scripting capabilities has been implemented. The proposed serial-parallel approach seems appropriate for handling geomechanical problems involving different meshes for flow and mechanics, as well as for coupling parallel mechanics codes with legacy flow simulators.
|