21

Roots of stochastic matrices and fractional matrix powers

Lin, Lijing January 2011
In Markov chain models in finance and healthcare a transition matrix over a certain time interval is needed, but only a transition matrix over a longer time interval may be available. The problem arises of determining a stochastic $p$th root of a stochastic matrix (the given transition matrix). By exploiting the theory of functions of matrices, we develop results on the existence and characterization of stochastic $p$th roots. Our contributions include a characterization of when a real matrix has a real $p$th root, a classification of $p$th roots of a possibly singular matrix, a sufficient condition for a $p$th root of a stochastic matrix to have unit row sums, and the identification of two classes of stochastic matrices that have stochastic $p$th roots for all $p$. We also delineate a wide variety of possible configurations as regards existence, nature (primary or nonprimary), and number of stochastic roots, and develop a necessary condition for the existence of a stochastic root in terms of the spectrum of the given matrix.

On the computational side, we emphasize finding an approximate stochastic root: perturb the principal root $A^{1/p}$ or the principal logarithm $\log(A)$ to the nearest stochastic matrix or the nearest intensity matrix, respectively, if they are not valid ones; minimize the residual $\|X^p-A\|_F$ over all stochastic matrices $X$ and also over stochastic matrices that are primary functions of $A$. For the first two nearness problems, the global minimizers are found in the Frobenius norm. For the last two nonlinear programming problems, we derive explicit formulae for the gradient and Hessian of the objective function $\|X^p-A\|_F^2$ and investigate Newton's method, a spectral projected gradient method (SPGM) and the sequential quadratic programming method to solve the problem, as well as various matrices to start the iteration. Numerical experiments show that SPGM, starting with the perturbed $A^{1/p}$ and minimizing $\|X^p-A\|_F$ over all stochastic matrices, is the method of choice.

Finally, a new algorithm is developed for computing arbitrary real powers $A^\alpha$ of a matrix $A\in\mathbb{C}^{n\times n}$. The algorithm starts with a Schur decomposition, takes $k$ square roots of the triangular factor $T$, evaluates an $[m/m]$ Padé approximant of $(1-x)^\alpha$ at $I - T^{1/2^k}$, and squares the result $k$ times. The parameters $k$ and $m$ are chosen to minimize the cost subject to achieving double precision accuracy in the evaluation of the Padé approximant, making use of a result that bounds the error in the matrix Padé approximant by the error in the scalar Padé approximant with argument the norm of the matrix. The Padé approximant is evaluated from the continued fraction representation in bottom-up fashion, which is shown to be numerically stable. In the squaring phase the diagonal and first superdiagonal are computed from explicit formulae for $T^{\alpha/2^j}$, yielding increased accuracy. Since the basic algorithm is designed for $\alpha\in(-1,1)$, a criterion for reducing an arbitrary real $\alpha$ to this range is developed, making use of bounds for the condition number of the $A^\alpha$ problem. How best to compute $A^k$ for a negative integer $k$ is also investigated. In numerical experiments the new algorithm is found to be superior in accuracy and stability to several alternatives, including the use of an eigendecomposition, a method based on the Schur–Parlett algorithm with our new algorithm applied to the diagonal blocks, and approaches based on the formula $A^\alpha = \exp(\alpha\log(A))$.
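To make the nearness idea concrete, here is a minimal Python/SciPy sketch of one strategy the abstract mentions: compute the principal root $A^{1/p}$ and project each row onto the probability simplex, which yields the Frobenius-nearest stochastic matrix. The function names and the example transition matrix are illustrative assumptions, not code from the thesis.

```python
import numpy as np
from scipy.linalg import fractional_matrix_power

def project_row_to_simplex(v):
    # Euclidean projection of a vector onto the probability simplex
    # (nonnegative entries summing to 1).
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    idx = np.arange(1, v.size + 1)
    rho = np.nonzero(u + (1.0 - css) / idx > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1.0)
    return np.maximum(v + theta, 0.0)

def approximate_stochastic_root(A, p):
    # Principal p-th root of A (may have negative entries or non-unit row
    # sums), projected row-wise to the Frobenius-nearest stochastic matrix.
    X = fractional_matrix_power(A, 1.0 / p).real
    return np.apply_along_axis(project_row_to_simplex, 1, X)

# Hypothetical one-year transition matrix; recover an approximate quarterly one.
A = np.array([[0.90, 0.08, 0.02],
              [0.05, 0.90, 0.05],
              [0.01, 0.09, 0.90]])
X = approximate_stochastic_root(A, 4)
print(X.sum(axis=1))                                             # unit row sums
print(np.linalg.norm(np.linalg.matrix_power(X, 4) - A, "fro"))   # residual ||X^4 - A||_F
```

The row-wise simplex projection is what makes the Frobenius-nearest stochastic matrix cheap to compute, since the stochasticity constraints decouple across rows.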
22

Extending the explanatory power of factor pricing models using topic modeling

Everling, Nils January 2017
Factor models attribute stock returns to a linear combination of factors. A model with great explanatory power (R²) can be used to estimate the systematic risk of an investment. One of the most important factors is the industry in which the company behind the stock operates. In commercial risk models this factor is often determined with a manually constructed stock classification scheme such as GICS. We present the Natural Language Industry Scheme (NLIS), an automatic and multivalued classification scheme based on topic modeling. The topic modeling is performed on transcripts of company earnings calls and identifies a number of topics analogous to industries. We use non-negative matrix factorization (NMF) on a term-document matrix of the transcripts to perform the topic modeling. When set to explain returns of the MSCI USA index, we find that NLIS consistently outperforms GICS, often by several hundred basis points. We attribute this to NLIS’ ability to assign a stock to multiple industries. We also suggest that the proportions of industry assignments for a given stock could correspond to expected future revenue sources rather than current revenue sources. This property could explain some of NLIS’ success, since it closely relates to theoretical stock pricing.
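As a rough illustration of the pipeline described above (NMF topic modeling on a term-document matrix of earnings-call transcripts, yielding multivalued industry weights), here is a hedged scikit-learn sketch. The toy transcripts, the number of topics, and all variable names are assumptions for illustration, not the thesis's data or code.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

# Toy stand-ins for earnings-call transcripts (one string per company).
transcripts = [
    "cloud software subscription revenue data center growth",
    "oil gas drilling upstream refining crude production",
    "retail stores online consumer demand holiday sales",
    "enterprise software cloud platform license renewals",
]

# Term-document weighting followed by NMF: W assigns each company a
# nonnegative mixture over latent topics ("industries"); H lists the terms
# that characterize each topic.
vectorizer = TfidfVectorizer(stop_words="english")
V = vectorizer.fit_transform(transcripts)          # companies x terms
nmf = NMF(n_components=2, init="nndsvda", random_state=0, max_iter=500)
W = nmf.fit_transform(V)                           # companies x topics
H = nmf.components_                                # topics x terms

# Normalized rows of W act as multivalued industry weights, usable as
# exposures in a cross-sectional return regression.
industry_weights = W / (W.sum(axis=1, keepdims=True) + 1e-12)

terms = vectorizer.get_feature_names_out()
for k, row in enumerate(H):
    top_terms = [terms[i] for i in row.argsort()[::-1][:3]]
    print(f"topic {k}: {top_terms}")
print(industry_weights.round(2))
```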
23

The Emergence of Multimodal Concepts: From Perceptual Motion Primitives to Grounded Acoustic Words

Mangin, Olivier 19 March 2014
This thesis focuses on learning recurring patterns in multimodal perception. For that purpose it develops cognitive systems that model the mechanisms providing such capabilities to infants, a methodology that fits into the field of developmental robotics. More precisely, this thesis revolves around two main topics: on the one hand, the ability of infants or robots to imitate and understand human behaviors; on the other, the acquisition of language. At the crossing of these topics, we study the question of how a developmental cognitive agent can discover a dictionary of primitive patterns from its multimodal perceptual flow. We specify this problem and formulate its links with Quine's indeterminacy of translation and with blind source separation, as studied in acoustics. We sequentially study four sub-problems and provide an experimental formulation of each of them. We then describe and test computational models of agents solving these problems. They are based in particular on bag-of-words techniques, matrix factorization algorithms, and inverse reinforcement learning approaches. We first go in depth into the three separate problems of learning primitive sounds, such as phonemes or words, learning primitive dance motions, and learning the primitive objectives that compose complex tasks. Finally we study the problem of learning multimodal primitive patterns, which corresponds to solving several of the aforementioned problems simultaneously. We also detail how this last problem models the grounding of acoustic words.
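The multimodal step can be pictured with a small NMF sketch: learn a shared dictionary on concatenated modality histograms, then recover one modality from the other through that dictionary. The synthetic data, dimensions, and names below are illustrative assumptions, not the models evaluated in the thesis.

```python
import numpy as np
from sklearn.decomposition import NMF
from scipy.optimize import nnls

rng = np.random.default_rng(0)

# Synthetic multimodal data: each sample is a pair of bag-of-words style
# histograms (sound features, motion features) generated from hidden concepts.
n_samples, n_sound, n_motion, k = 60, 30, 20, 5
concepts = rng.dirichlet(np.ones(k), size=n_samples)
D_sound = rng.random((k, n_sound))
D_motion = rng.random((k, n_motion))
X_sound = concepts @ D_sound
X_motion = concepts @ D_motion

# Training: factor the concatenated (sound | motion) matrix so that every
# latent component spans both modalities at once.
X = np.hstack([X_sound, X_motion])
nmf = NMF(n_components=k, init="nndsvda", max_iter=1000, random_state=0)
W = nmf.fit_transform(X)
H_sound = nmf.components_[:, :n_sound]
H_motion = nmf.components_[:, n_sound:]

# Test: from a sound-only observation, estimate nonnegative activations by
# NNLS against the sound part of the dictionary, then predict the motion part.
x_sound_new = concepts[0] @ D_sound
w, _ = nnls(H_sound.T, x_sound_new)
motion_pred = w @ H_motion
print(np.corrcoef(motion_pred, X_motion[0])[0, 1])   # cross-modal agreement
```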
24

Extensions of nonnegative matrix factorization for exploratory data analysis

阿部 寛康, Hiroyasu Abe 22 March 2017
Nonnegative matrix factorization (NMF) is a matrix decomposition technique for analyzing nonnegative data matrices, that is, matrices whose elements are all nonnegative. In this thesis, we discuss extensions of NMF for exploratory data analysis that account for features commonly seen in real nonnegative data matrices and that improve ease of interpretation. In particular, we discuss probability distributions and divergences for zero-inflated data matrices and data matrices with outliers, two-factor versus three-factor decompositions, and orthogonality constraints on the factor matrices. / Doctor of Culture and Information Science, Doshisha University
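A brief scikit-learn sketch of the point about divergence choice: on a zero-inflated matrix with a few outliers, fitting NMF under different beta-divergences weights zeros and large residuals very differently. The synthetic matrix and parameter settings are assumptions for illustration and are unrelated to the thesis experiments.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(1)

# Synthetic zero-inflated count matrix with a few large outliers.
W_true = rng.gamma(1.0, 1.0, size=(100, 4))
H_true = rng.gamma(1.0, 1.0, size=(4, 30))
X = rng.poisson(W_true @ H_true).astype(float)
X[rng.random(X.shape) < 0.4] = 0.0                          # extra zeros
X[rng.integers(0, 100, 5), rng.integers(0, 30, 5)] += 50.0  # outliers

# The chosen divergence controls how zeros and large residuals are weighted:
# Frobenius penalizes outliers quadratically, Kullback-Leibler suits counts,
# Itakura-Saito is scale-invariant but requires strictly positive data.
for loss in ("frobenius", "kullback-leibler", "itakura-saito"):
    data = X + 1e-6 if loss == "itakura-saito" else X       # IS cannot handle zeros
    model = NMF(n_components=4, solver="mu", beta_loss=loss,
                init="random", max_iter=1000, random_state=0)
    W = model.fit_transform(data)
    resid = np.linalg.norm(X - W @ model.components_, "fro")
    print(f"{loss:17s}  ||X - WH||_F = {resid:.1f}")
```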
