About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Inference in inhomogeneous hidden Markov models with application to ion channel data

Diehn, Manuel, 01 November 2017
No description available.
2

Seleção de modelos para segmentação de sequências simbólicas usando máxima verossimilhança penalizada / A model selection criterion for the segmentation of symbolic sequences using penalized maximum likelihood

Castro, Bruno Monte de, 20 February 2013
The sequence segmentation problem aims to partition a sequence, or a set of sequences, into a finite number of distinct segments that are as homogeneous as possible. In this work we consider the problem of segmenting a set of random sequences, with values in a finite alphabet $\mathcal{A}$, into a finite number of independent blocks. We suppose that we have $m$ independent sequences of length $n$, constructed by the concatenation of $s$ segments of lengths $l^{*}_j$, where each block is drawn from a distribution $\mathbb{P}_j$ over $\mathcal{A}^{l^{*}_j}$, $j = 1, \ldots, s$. We denote the true cut points by the vector ${\bf k}^{*} = (k^{*}_1, \ldots, k^{*}_{s-1})$, with $k^{*}_i = \sum_{j=1}^{i} l^{*}_j$, $i = 1, \ldots, s-1$; these points mark the changes of segment. We propose a penalized maximum likelihood criterion to infer simultaneously the number of cut points and the position of each of them. We also present a segmentation algorithm and report simulations that show how it works and how fast it converges. Our main result is a proof of the strong consistency of the cut point estimators as $m$ tends to infinity.
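To make the criterion concrete, here is a minimal sketch of penalized maximum likelihood segmentation for a single sequence, assuming i.i.d. symbols within each block; the dynamic program and the constant penalty per block are illustrative simplifications, not the exact criterion or algorithm of the thesis.

```python
import math

def block_loglik(counts, length):
    """Maximized i.i.d. multinomial log-likelihood of one block."""
    return sum(c * math.log(c / length) for c in counts.values())

def segment(x, pen):
    """Cut points for one symbolic sequence x that maximize
    (sum of block log-likelihoods) - pen * (number of blocks)."""
    n = len(x)
    best = [0.0] + [-math.inf] * n   # best[j]: optimal score of x[:j]
    cut = [0] * (n + 1)              # back-pointers to recover the cuts
    for j in range(1, n + 1):
        counts = {}
        for i in range(j - 1, -1, -1):        # grow block x[i:j] leftwards
            counts[x[i]] = counts.get(x[i], 0) + 1
            score = best[i] + block_loglik(counts, j - i) - pen
            if score > best[j]:
                best[j], cut[j] = score, i
    cuts, j = [], n                  # walk the back-pointers
    while cut[j] > 0:
        cuts.append(cut[j])
        j = cut[j]
    return sorted(cuts)

print(segment("aaaaabbbbbaaaa", pen=2.0))   # recovers the cuts [5, 10]
```

In the thesis the penalty would scale with the sample size to obtain consistency; the fixed `pen` above is only for demonstration.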
3

Modeling Recurrent Gap Times Through Conditional GEE

Liu, Hai Yan, 16 August 2018
We present a theoretical approach to the statistical analysis of the dependence of the gap time length between consecutive recurrent events on a set of explanatory random variables, in the presence of right censoring. The dependence is expressed through regression-like and overdispersion parameters, estimated via estimating functions and equations. The mean and variance of the length of each gap time, conditioned on the observed history of prior events and other covariates, are known functions of parameters and covariates, and enter the estimating functions. Under certain conditions on censoring, we construct normalized estimating functions that are asymptotically unbiased and contain only observed data. We then prove the existence, consistency, and asymptotic normality of a sequence of estimators of the parameters. Simulations support our theoretical results.
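The following sketch shows what a GEE fit to recurrent gap times can look like in practice, using statsmodels on synthetic data; the Gamma family, log link, and the use of the number of prior events as a conditioning covariate are illustrative assumptions, not the authors' model.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Synthetic recurrent-event data: each subject contributes several gap
# times; gaps depend on a covariate and on the number of prior events.
rows = []
for subj in range(200):
    z = rng.normal()                      # baseline covariate
    for k in range(rng.integers(2, 6)):   # k = number of prior events
        mean = np.exp(0.5 + 0.3 * z - 0.2 * k)      # conditional mean gap
        rows.append((subj, z, k, rng.gamma(shape=2.0, scale=mean / 2.0)))

subj, z, k, gap = map(np.array, zip(*rows))
X = sm.add_constant(np.column_stack([z, k]))  # condition on history via k

# GEE with a Gamma mean-variance relationship and log link; the working
# correlation structure is a modeling choice, not dictated by the thesis.
model = sm.GEE(gap, X, groups=subj,
               family=sm.families.Gamma(link=sm.families.links.Log()),
               cov_struct=sm.cov_struct.Exchangeable())
print(model.fit().summary())
```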
4

Stochastické diferenciální rovnice s Gaussovským šumem / Stochastic Differential Equations with Gaussian Noise

Janák, Josef, January 2018
Title: Stochastic Differential Equations with Gaussian Noise. Author: Josef Janák. Department: Department of Probability and Mathematical Statistics. Supervisor: Prof. RNDr. Bohdan Maslowski, DrSc., Department of Probability and Mathematical Statistics. Abstract: Stochastic partial differential equations of second order with two unknown parameters are studied. The strongly continuous semigroup $(S(t), t \geq 0)$ for the hyperbolic system driven by Brownian motion is found, as well as the formula for the covariance operator of the invariant measure $Q^{(a,b)}_{\infty}$. Based on ergodicity, two suitable families of minimum contrast estimators are introduced and their strong consistency and asymptotic normality are proved. Moreover, another concept of estimation using an "observation window" is studied, which leads to further families of strongly consistent estimators. Their properties and special cases are described, as well as their asymptotic normality. The results are applied to the stochastic wave equation perturbed by Brownian noise and are illustrated by several numerical simulations. Keywords: stochastic hyperbolic equation, Ornstein-Uhlenbeck process, invariant measure, parameter estimation, strong consistency, asymptotic normality.
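A one-dimensional analogue illustrates estimation from ergodic averages: simulate an Ornstein-Uhlenbeck process and recover the drift and noise parameters from the empirical stationary variance and lag-one autocorrelation. This toy example is only loosely analogous to the minimum contrast estimators for the hyperbolic SPDE studied in the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)
a_true, b_true = 2.0, 0.8          # dX = -a X dt + b dW
dt, n = 0.01, 200_000

# Exact discretization of the OU transition (an AR(1) recursion).
phi = np.exp(-a_true * dt)
sd = b_true * np.sqrt((1 - phi**2) / (2 * a_true))
x = np.empty(n)
x[0] = 0.0
for t in range(1, n):
    x[t] = phi * x[t - 1] + sd * rng.normal()

# Ergodic-average estimators: the stationary variance b^2 / (2a) and the
# lag-one autocorrelation exp(-a dt) identify both parameters.
var_hat = x.var()
rho_hat = np.corrcoef(x[:-1], x[1:])[0, 1]
a_hat = -np.log(rho_hat) / dt
b_hat = np.sqrt(2 * a_hat * var_hat)
print(f"a: true {a_true}, est {a_hat:.3f}; b: true {b_true}, est {b_hat:.3f}")
```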
5

Le maintien de la cohérence dans les systèmes de stockage partiellement repliqués / Ensuring consistency in partially replicated data stores

Saeida Ardekani, Masoud, 16 September 2014
In the first part, we study consistency in transactional systems, focusing on reconciling scalability with strong transactional guarantees. We identify four scalability properties and show that none of the existing strong consistency criteria ensures all four. We define a new scalable consistency criterion, called Non-Monotonic Snapshot Isolation (NMSI), which is the first to be compatible with all four properties. We also present a practical implementation of NMSI, called Jessy, which we compare experimentally against a number of well-known criteria. Another contribution is a framework for performing fair comparisons among different transactional protocols. Our insight is that a large family of distributed transactional protocols share a common structure, called Deferred Update Replication (DUR); protocols of the DUR family differ only in the behavior of a few generic functions. We present a generic DUR framework, called G-DUR, and use it to implement and compare several transactional protocols. In the second part, we focus on ensuring consistency in non-transactional data stores. We introduce Tuba, a replicated key-value store that dynamically selects replicas in order to maximize the utility delivered to read operations, according to a consistency level specified by the application. In addition, unlike current systems, it automatically reconfigures its set of replicas, while respecting application-defined constraints, so as to adapt to changes in client locations or request rates. Compared with a statically configured system, our evaluation shows that Tuba increases the fraction of reads that return strongly consistent data by 63%.
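The common DUR skeleton that the thesis generalizes can be illustrated with a toy, single-process sketch: transactions execute optimistically against buffered read and write sets and are certified at commit time. This is neither Jessy nor NMSI, which add distribution, partial replication, and weaker snapshot rules.

```python
# Toy Deferred Update Replication (DUR) skeleton: reads record versions,
# writes are buffered, and commit-time certification aborts a transaction
# whose read set has been overwritten by a concurrent committer.

class Store:
    def __init__(self):
        self.data = {}       # key -> (value, version)

    def begin(self):
        return Tx(self)

class Tx:
    def __init__(self, store):
        self.store = store
        self.read_set = {}   # key -> version observed
        self.write_set = {}  # key -> buffered new value

    def read(self, key):
        if key in self.write_set:
            return self.write_set[key]
        value, version = self.store.data.get(key, (None, 0))
        self.read_set[key] = version
        return value

    def write(self, key, value):
        self.write_set[key] = value   # deferred: nothing visible yet

    def commit(self):
        # Certification: abort if any key we read was overwritten since.
        for key, seen in self.read_set.items():
            if self.store.data.get(key, (None, 0))[1] != seen:
                return False
        for key, value in self.write_set.items():
            version = self.store.data.get(key, (None, 0))[1]
            self.store.data[key] = (value, version + 1)
        return True

store = Store()
t1, t2 = store.begin(), store.begin()
t1.read("x"); t1.write("x", 1)
t2.read("x"); t2.write("x", 2)
print(t1.commit(), t2.commit())   # True False: t2 fails certification
```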
6

Estimation in discontinuous Bernoulli mixture models applicable in credit rating systems with dependent data

Tillich, Daniel; Lehmann, Christoph, 30 March 2017
Objective: We consider the following problem from credit risk modeling. Our sample $(X_i, Y_i)$, $1 \le i \le n$, consists of pairs of variables. The first variable, $X_i$, measures the creditworthiness of individual $i$. The second variable, $Y_i$, is the default indicator of individual $i$; it has two states: $Y_i = 1$ indicates a default, $Y_i = 0$ a non-default. A default occurs if individual $i$ cannot meet its contractual credit obligations, i.e., it cannot pay back its outstandings regularly. In a first step, our objective is to estimate the threshold between good and bad creditworthiness, in the sense of dividing the range of $X_i$ into two rating classes: one class with good creditworthiness and a low probability of default, and another class with bad creditworthiness and a high probability of default. Methods: Given observations of individual creditworthiness $X_i$ and defaults $Y_i$, the field of change point analysis provides a natural way to estimate the breakpoint between the rating classes. In order to account for dependency between the observations, the literature proposes a combination of three model classes: a breakpoint model, a linear one-factor model for the creditworthiness $X_i$, and a Bernoulli mixture model for the defaults $Y_i$. We generalize the dependency structure further and use a generalized link between the systematic and idiosyncratic factors of creditworthiness, so that the systematic factor can change not only the location but also the shape of the distribution of creditworthiness. Results: For the case of two rating classes, we propose several estimators for the breakpoint and for the default probabilities within the rating classes. We prove the strong consistency of these estimators in the given non-i.i.d. framework. The theoretical results are illustrated by a simulation study. Finally, we give an overview of research opportunities.
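In a simplified i.i.d. setting (the thesis handles dependent data), the breakpoint can be estimated by profiling the Bernoulli likelihood over candidate thresholds, as in the following sketch; the data-generating parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simplified i.i.d. version of the breakpoint problem: obligors with
# creditworthiness X below the threshold default with probability p_bad,
# those above it with probability p_good.
theta, p_bad, p_good = 0.3, 0.25, 0.05
x = rng.uniform(0.0, 1.0, 2000)
y = rng.random(2000) < np.where(x < theta, p_bad, p_good)

def loglik(k, m):
    """Maximized Bernoulli log-likelihood of k defaults among m obligors."""
    p = k / m
    if p in (0.0, 1.0):
        return 0.0
    return k * np.log(p) + (m - k) * np.log(1.0 - p)

# Profile the likelihood over candidate thresholds (change point analysis).
best, theta_hat = -np.inf, None
for t in np.sort(x)[1:-1]:
    lo = x < t
    ll = loglik(y[lo].sum(), lo.sum()) + loglik(y[~lo].sum(), (~lo).sum())
    if ll > best:
        best, theta_hat = ll, t
print(f"true threshold {theta}, estimate {theta_hat:.3f}")
```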
7

Não vício assintótico, consistência forte e uniformemente forte de estimadores do tipo núcleo para dados direcionais sobre uma esfera unitária k-dimensional / Asymptotic unbiasedness, strong and uniformly strong consistency of kernel estimators for directional data on a k-dimensional unit sphere

Santos, Marconio Silva dos, 28 June 2010
In this work we study the asymptotic unbiasedness, the strong consistency, and the uniform strong consistency of a class of kernel estimators $f_n$ which, like most such estimators, are built from $n$ i.i.d. observations $X_1, \ldots, X_n$ of $X$, as estimators of the density $f(x)$ of a random vector $X$ taking values on a k-dimensional unit sphere.
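For intuition, here is a sketch of a kernel density estimator for directional data in the simplest case, the unit circle, using a von Mises kernel whose concentration parameter plays the role of an inverse bandwidth; the thesis treats the general k-dimensional sphere.

```python
import numpy as np
from scipy.special import i0   # modified Bessel function I_0

def vmf_kde(x, samples, kappa):
    """Kernel density estimate on the unit circle S^1 with a von Mises
    kernel; kappa acts as an inverse bandwidth (larger = less smoothing).
    x: unit vector in R^2; samples: array of unit vectors, shape (n, 2)."""
    dots = samples @ np.asarray(x)               # cosine similarities
    return np.mean(np.exp(kappa * dots)) / (2 * np.pi * i0(kappa))

# Directional data: angles drawn from a mixture, mapped to unit vectors.
rng = np.random.default_rng(3)
angles = np.concatenate([rng.vonmises(0.0, 4.0, 500),
                         rng.vonmises(np.pi, 4.0, 500)])
samples = np.column_stack([np.cos(angles), np.sin(angles)])

# Evaluate the estimate at a grid of directions on the circle.
for ang in np.linspace(-np.pi, np.pi, 8, endpoint=False):
    fx = vmf_kde(np.array([np.cos(ang), np.sin(ang)]), samples, kappa=10.0)
    print(f"angle {ang:+.2f}  f_n = {fx:.3f}")
```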
8

Estimadores do tipo núcleo para variáveis i.i.d. com espaço de estados geral / Kernel estimators for i.i.d. variables with a general state space

Silva, Mariana Barbosa da, 31 May 2012
In this work we study a nonparametric method, kernel estimators for a sequence of independent and identically distributed random variables with a general state space, following in detail the paper of Campos and Dorea [3]. In Chapter 2, the estimator's properties, such as asymptotic unbiasedness, consistency in quadratic mean, strong consistency, and asymptotic normality, are verified. In Chapter 3, numerical experiments carried out in the R software give a visual idea of the estimation process.
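The kind of numerical experiment described (done in R in the dissertation) can be sketched as follows: a Gaussian kernel density estimate whose sup-error on a grid shrinks as the sample size grows, illustrating consistency. The bandwidth rate and the target density below are illustrative choices, not those of the dissertation.

```python
import numpy as np

def kde(x, samples, h):
    """Gaussian kernel estimate f_n(x) = (1/(n h)) sum K((x - X_i) / h)."""
    u = (x - samples[:, None]) / h
    return np.exp(-0.5 * u**2).mean(axis=0) / (h * np.sqrt(2 * np.pi))

def true_density(x):
    return np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)   # standard normal

rng = np.random.default_rng(4)
grid = np.linspace(-3, 3, 61)

# The error shrinks as n grows (with h -> 0 slowly): consistency in action.
for n in (100, 1_000, 10_000, 100_000):
    samples = rng.normal(size=n)
    h = n ** (-1 / 5)                 # the classical bandwidth rate
    err = np.max(np.abs(kde(grid, samples, h) - true_density(grid)))
    print(f"n = {n:>6}  sup-error on grid = {err:.4f}")
```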
