41

Nichtlineare Regimewechselmodelle : theoretische und empirische Evidenz am deutschen Kapitalmarkt / Nonlinear regime-switching models: theoretical and empirical evidence from the German capital market

Brannolte, Cord. January 2002.
University dissertation, Kiel, 2001.
42

Uncertainty and the stability of financial markets in open economies : empirical evidence from regime-switching models /

Tillmann, Peter. January 2003.
University dissertation, Köln, 2003.
43

CellTrans: An R Package to Quantify Stochastic Cell State Transitions

Buder, Thomas; Deutsch, Andreas; Seifert, Michael; Voss-Böhme, Anja. 15 November 2017.
Many normal and cancerous cell lines exhibit a stable composition of cells in distinct states which can, e.g., be defined on the basis of cell surface markers. There is evidence that such an equilibrium is associated with stochastic transitions between distinct states. Quantifying these transitions can improve our understanding of cell lineage compositions. We introduce CellTrans, an R package to quantify stochastic cell state transitions from cell state proportion data from fluorescence-activated cell sorting and flow cytometry experiments. The R package is based on a mathematical model in which cell state alterations occur due to stochastic transitions between distinct cell states whose rates only depend on the current state of a cell. CellTrans is an automated tool for estimating the underlying transition probabilities from appropriately prepared data. We point out potential analytical challenges in the quantification of these cell transitions and explain how CellTrans handles them. The applicability of CellTrans is demonstrated on publicly available data on the evolution of cell state compositions in cancer cell lines. We show that CellTrans can be used to (1) infer the transition probabilities between different cell states, (2) predict cell line compositions at a certain time, (3) predict equilibrium cell state compositions, and (4) estimate the time needed to reach this equilibrium. We provide an implementation of CellTrans in R, freely available via GitHub (https://github.com/tbuder/CellTrans).
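CellTrans itself is an R package (see the GitHub link above); purely as an illustration of the underlying Markov model, the following Python sketch estimates a transition matrix from the state proportions of initially pure (FACS-sorted) subpopulations and derives the predicted equilibrium composition. The numbers and the simple regularization step (clip and re-normalize) are assumptions for this example, not the package's exact procedure:

import numpy as np
from scipy.linalg import fractional_matrix_power

def estimate_transition_matrix(P_t, t):
    # P_t[i, j]: fraction of cells in state j after t time steps, starting
    # from a pure population of state i (one sorted subpopulation per row).
    # Under a time-homogeneous Markov model, P_t = M^t, so M = P_t^(1/t).
    M = fractional_matrix_power(P_t, 1.0 / t)
    M = np.clip(M.real, 0.0, None)           # remove small negative artifacts
    return M / M.sum(axis=1, keepdims=True)  # re-normalize rows to sum to 1

def equilibrium(M):
    # Stationary distribution: left eigenvector of M for eigenvalue 1.
    vals, vecs = np.linalg.eig(M.T)
    pi = vecs[:, np.argmin(np.abs(vals - 1.0))].real
    return pi / pi.sum()

# Hypothetical two-state example: compositions observed after 3 days.
P3 = np.array([[0.8, 0.2],
               [0.3, 0.7]])
M = estimate_transition_matrix(P3, t=3)
print(M)               # estimated one-day transition probabilities
print(equilibrium(M))  # predicted equilibrium cell state composition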
44

Dynamical characterization of Markov processes with varying order

Bauer, Michael. 01 July 2008.
Time-delayed actions appear as an essential component of numerous systems, especially in evolution processes, natural phenomena, and particular technical applications, and are associated with the existence of a memory. Under common conditions, external forces or state-dependent parameters modify the length of the delay with time. Consequently, an altered dynamical behavior emerges, whose characterization is essential for a deeper understanding of these processes. In this thesis, the well-investigated class of time-homogeneous finite-state Markov processes is utilized to establish a variation of memory length by combining a first-order Markov chain with a memoryless Markov chain of order zero. These fluctuations induce a non-stationary process, which is analyzed for two special cases: a periodic and a random selection of the available Markov chains. For both cases, the Kolmogorov-Sinai entropy is deduced analytically as a characteristic property and compared to numerical approximations of the entropy rate of related symbolic dynamics. The convergence of per-symbol and conditional entropies is examined in order to recognize their behavior when identifying unknown processes. Additionally, the connection between Markov processes with varying memory length and hidden Markov models is illustrated, enabling further analysis. Hence, the Kolmogorov-Sinai entropy of hidden Markov chains is calculated by means of Blackwell's entropy rate involving Blackwell's measure. These results are used to verify the previous computations.
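A minimal numerical sketch of the construction described above, assuming a binary alphabet and a periodic switch between a first-order chain A and a memoryless distribution q (both made up for the example). The conditional block entropies h(n) = H(n+1) - H(n) approximate the entropy rate of the resulting symbolic dynamics from above:

import numpy as np
from collections import Counter

rng = np.random.default_rng(0)

A = np.array([[0.9, 0.1],   # first-order chain: next symbol depends on current
              [0.2, 0.8]])
q = np.array([0.5, 0.5])    # order-zero (memoryless) chain

def sample(n, period=2):
    # Periodic case: alternate between the two chains every `period` steps.
    x = [0]
    for t in range(1, n):
        use_first_order = (t // period) % 2 == 0
        p = A[x[-1]] if use_first_order else q
        x.append(rng.choice(2, p=p))
    return x

def block_entropy(x, n):
    counts = Counter(tuple(x[i:i + n]) for i in range(len(x) - n + 1))
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return -(p * np.log2(p)).sum()

x = sample(200_000)
for n in range(1, 6):
    # h(n) converges to the entropy rate as n grows.
    print(n, block_entropy(x, n + 1) - block_entropy(x, n))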
45

Testing the compatibility of constraints for parameters of a geodetic adjustment model

Lehmann, Rüdiger; Neitzel, Frank. 06 August 2014.
Geodetic adjustment models are often set up in a way that the model parameters need to fulfil certain constraints. The normalized Lagrange multipliers have been used as a measure of the strength of constraint, in such a way that if one of them exceeds a certain threshold in magnitude, then the corresponding constraint is likely to be incompatible with the observations and the rest of the constraints. We show that these and similar measures can be deduced as test statistics of a likelihood ratio test of the statistical hypothesis that some constraints are incompatible in the same sense. This has been done before only for special constraints (Teunissen in Optimization and Design of Geodetic Networks, pp. 526–547, 1985). We start from the simplest case, that the full set of constraints is to be tested, and arrive at the advanced case, that each constraint is to be tested individually. Every test is worked out both for a known and for an unknown prior variance factor. The corresponding distributions under null and alternative hypotheses are derived. The theory is illustrated by the example of a double levelled line. / Geodätische Ausgleichungsmodelle werden oft auf eine Weise formuliert, bei der die Modellparameter bestimmte Bedingungsgleichungen zu erfüllen haben. Die normierten Lagrange-Multiplikatoren wurden bisher als Maß für den ausgeübten Zwang verwendet, und zwar so, dass wenn einer von ihnen betragsmäßig eine bestimmte Schwelle übersteigt, dann ist davon auszugehen, dass die zugehörige Bedingungsgleichung nicht mit den Beobachtungen und den restlichen Bedingungsgleichungen kompatibel ist. Wir zeigen, dass diese und ähnliche Maße als Teststatistiken eines Likelihood-Quotiententests der statistischen Hypothese, dass einige Bedingungsgleichungen in diesem Sinne inkompatibel sind, abgeleitet werden können. Das wurde bisher nur für spezielle Bedingungsgleichungen getan (Teunissen in Optimization and Design of Geodetic Networks, pp. 526–547, 1985). Wir starten vom einfachsten Fall, dass die gesamte Menge der Bedingungsgleichungen getestet werden muss, und gelangen zu dem fortgeschrittenen Problem, dass jede Bedingungsgleichung individuell zu testen ist. Jeder Test wird sowohl für bekannte wie auch für unbekannte a-priori-Varianzfaktoren ausgearbeitet. Die zugehörigen Verteilungen werden sowohl unter der Null- wie auch unter der Alternativhypothese abgeleitet. Die Theorie wird am Beispiel einer Doppelnivellementlinie illustriert.
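As a hedged sketch of the textbook setting behind this abstract (not the authors' implementation): a Gauss-Markov model l = Ax + e with constraints Cx = c can be adjusted by solving the bordered normal equations, which also yield the Lagrange multipliers and the cofactor matrix needed for the normalized test statistics:

import numpy as np

def constrained_adjustment(A, l, C, c, P=None):
    # Least-squares adjustment of l = A x + e subject to C x = c.
    # Returns estimates x, Lagrange multipliers k, and their cofactor matrix.
    m, n = A.shape
    if P is None:
        P = np.eye(m)  # weight matrix of the observations
    N = A.T @ P @ A
    # Bordered normal equations (KKT system) of the constrained problem.
    K = np.block([[N, C.T],
                  [C, np.zeros((C.shape[0], C.shape[0]))]])
    rhs = np.concatenate([A.T @ P @ l, c])
    sol = np.linalg.solve(K, rhs)
    x, k = sol[:n], sol[n:]
    Q_kk = -np.linalg.inv(K)[n:, n:]  # cofactor matrix of the multipliers
    return x, k, Q_kk

# Normalized multipliers, assuming a known variance factor sigma0sq:
#   w_i = k_i / sqrt(sigma0sq * Q_kk[i, i]).
# A large |w_i| flags constraint i as potentially incompatible, in the
# spirit of the likelihood ratio tests discussed above.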
46

Training of Hidden Markov models as an instance of the expectation maximization algorithm

Majewsky, Stefan. 22 August 2017.
In Natural Language Processing (NLP), speech and text are parsed and generated with language models and parser models, and translated with translation models. Each model contains a set of numerical parameters which are found by applying a suitable training algorithm to a set of training data. Many such training algorithms are instances of the Expectation-Maximization (EM) algorithm. In [BSV15], a generic EM algorithm for NLP is described. This work presents a particular speech model, the Hidden Markov model, and its standard training algorithm, the Baum-Welch algorithm. It is then shown that the Baum-Welch algorithm is an instance of the generic EM algorithm introduced by [BSV15], from which it follows that all statements about the generic EM algorithm also apply to the Baum-Welch algorithm, in particular its correctness and convergence properties.
Contents: 1 Introduction (1.1 N-gram models, 1.2 Hidden Markov model); 2 Expectation-maximization algorithms (2.1 Preliminaries, 2.2 Algorithmic skeleton, 2.3 Corpus-based step mapping, 2.4 Simple counting step mapping, 2.5 Regular tree grammars, 2.6 Inside-outside step mapping, 2.7 Review); 3 The Hidden Markov model (3.1 Forward and backward algorithms, 3.2 The Baum-Welch algorithm, 3.3 Deriving the Baum-Welch algorithm: 3.3.1 Model parameter and countable events, 3.3.2 Tree-shaped hidden information, 3.3.3 Complete-data corpus, 3.3.4 Inside weights, 3.3.5 Outside weights, 3.3.6 Complete-data corpus (cont.), 3.3.7 Step mapping, 3.4 Review); Appendix A Elided proofs from Chapter 3 (A.1 Proof of Lemma 3.8, A.2 Proof of Lemma 3.9); Appendix B Formulary for Chapter 3; Bibliography
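For orientation, here is a compact Python sketch of one Baum-Welch (EM) iteration for a discrete HMM, using the scaled forward/backward recursions named in the contents; it follows the standard Rabiner-style presentation, not the generic EM notation of [BSV15]:

import numpy as np

def forward_backward(obs, pi, A, B):
    # Scaled forward/backward passes for a discrete HMM with initial
    # distribution pi, transition matrix A, and emission matrix B.
    T, N = len(obs), len(pi)
    alpha = np.zeros((T, N)); beta = np.zeros((T, N)); scale = np.zeros(T)
    alpha[0] = pi * B[:, obs[0]]
    scale[0] = alpha[0].sum(); alpha[0] /= scale[0]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
        scale[t] = alpha[t].sum(); alpha[t] /= scale[t]
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = (A @ (B[:, obs[t + 1]] * beta[t + 1])) / scale[t + 1]
    return alpha, beta, scale

def baum_welch_step(obs, pi, A, B):
    # One EM iteration: E-step (expected counts), M-step (re-normalization).
    alpha, beta, scale = forward_backward(obs, pi, A, B)
    gamma = alpha * beta
    gamma /= gamma.sum(axis=1, keepdims=True)   # state posteriors per time
    xi = np.zeros_like(A)                       # expected transition counts
    for t in range(len(obs) - 1):
        x = alpha[t][:, None] * A * B[:, obs[t + 1]] * beta[t + 1] / scale[t + 1]
        xi += x / x.sum()
    new_A = xi / xi.sum(axis=1, keepdims=True)
    new_B = np.zeros_like(B)
    for k in range(B.shape[1]):
        new_B[:, k] = gamma[np.array(obs) == k].sum(axis=0)
    new_B /= new_B.sum(axis=1, keepdims=True)
    return gamma[0], new_A, new_B  # updated pi, A, B

Iterating baum_welch_step until the parameters stabilize yields the usual monotone increase of the data likelihood, which is exactly the correctness property the thesis derives from the generic EM framework.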
47

Frequency based efficiency evaluation - from pattern recognition via backwards simulation to purposeful drive design

Starke, Martin; Beck, Benjamin; Ritz, Denis; Will, Frank; Weber, Jürgen. 23 June 2020.
The efficiency of hydraulic drive systems in mobile machines is influenced by several factors, such as operator guidance, weather conditions, material and loading properties, and primarily the working cycle. This leads to varying operation points, which the drive system has to perform. For efficiency analysis, the use of standardized working cycles, either gained through measurements or synthetically generated, is state of the art. However, only a small extract of the real usage profile is thereby taken into account. This contribution deals with process pattern recognition (PPR) and frequency based efficiency evaluation to gain more precise information and conclusions for the drive design of mobile machines. Using the example of an 18 t mobile excavator, the recognition system based on Hidden Markov Models (HMMs) and the efficiency evaluation process by means of backwards simulation of measured operation points are described.
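A hedged sketch of what the PPR half of such a system can look like, using the third-party hmmlearn library: one Gaussian HMM is trained per known working-cycle pattern, and a measured segment is assigned to the pattern with the highest log-likelihood. The features, pattern names, and model sizes are assumptions for illustration, not taken from the paper, and the backwards-simulation step is not covered here:

import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_pattern_models(segments_by_pattern, n_states=3):
    # segments_by_pattern: {"digging": [arr(T, d), ...], "swinging": [...], ...}
    # where each array holds measured operation points (e.g. pressures, flows).
    models = {}
    for name, segments in segments_by_pattern.items():
        X = np.vstack(segments)
        lengths = [len(s) for s in segments]
        m = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
        m.fit(X, lengths)
        models[name] = m
    return models

def recognize(segment, models):
    # Classify a measured segment by maximum log-likelihood over the models.
    scores = {name: m.score(segment) for name, m in models.items()}
    return max(scores, key=scores.get)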
48

Models of Discrete-Time Stochastic Processes and Associated Complexity Measures

Löhr, Wolfgang. 12 May 2010.
Many complexity measures are defined as the size of a minimal representation in a specific model class. One such complexity measure, which is important because it is widely applied, is statistical complexity. It is defined for discrete-time, stationary stochastic processes within a theory called computational mechanics. Here, a mathematically rigorous, more general version of this theory is presented, and abstract properties of statistical complexity as a function on the space of processes are investigated. In particular, weak-* lower semi-continuity and concavity are shown, and it is argued that these properties should be shared by all sensible complexity measures. Furthermore, a formula for the ergodic decomposition is obtained. The same results are also proven for two other complexity measures that are defined by different model classes, namely process dimension and generative complexity. These two quantities, and also the information theoretic complexity measure called excess entropy, are related to statistical complexity, and this relation is discussed here. It is also shown that computational mechanics can be reformulated in terms of Frank Knight's prediction process, which is of both conceptual and technical interest. In particular, it allows for a unified treatment of different processes and facilitates topological considerations. Continuity of the Markov transition kernel of a discrete version of the prediction process is obtained as a new result.
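For a concrete anchor: in classical computational mechanics, statistical complexity is the Shannon entropy of the stationary distribution over the causal states of a process. The standard textbook example below (the Golden Mean Process, which forbids two consecutive 0s) computes it in a few lines; this illustrates the classical definition, not the generalized measure-theoretic version developed in the thesis:

import numpy as np

# Epsilon-machine of the Golden Mean Process: from state A, emit 1 (stay in A)
# or 0 (go to B) with probability 1/2 each; from B, always emit 1 and return to A.
T = np.array([[0.5, 0.5],
              [1.0, 0.0]])   # causal-state transition matrix

vals, vecs = np.linalg.eig(T.T)
pi = vecs[:, np.argmin(np.abs(vals - 1.0))].real
pi /= pi.sum()               # stationary distribution over causal states: (2/3, 1/3)

C_mu = -(pi * np.log2(pi)).sum()
print(C_mu)                  # statistical complexity, about 0.918 bits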
49

Particle-based Stochastic Volatility in Mean model / Partikel-baserad stokastisk volatilitet medelvärdes model

Kövamees, Gustav. January 2019.
This thesis presents a Stochastic Volatility in Mean (SVM) model which is estimated using sequential Monte Carlo methods. The SVM model was first introduced by Koopman and provides an opportunity to study the intertemporal relationship between stock returns and their volatility through inclusion of volatility itself as an explanatory variable in the mean equation. Using sequential Monte Carlo methods allows us to consider a non-linear estimation procedure at the cost of introducing extra computational complexity. The recently developed PaRIS algorithm, introduced by Olsson and Westerborn, drastically decreases the computational complexity of smoothing relative to previous algorithms and allows for efficient estimation of parameters. The main purpose of this thesis is to investigate the volatility feedback effect, i.e. the relation between expected return and unexpected volatility, in an empirical study. The results show that unanticipated shocks to the return process do not explain expected returns. / Detta examensarbete presenterar en stokastisk volatilitets medelvärdes (SVM) modell som estimeras genom sekventiella Monte Carlo metoder. SVM-modellen introducerades av Koopman och ger en möjlighet att studera den samtida relationen mellan aktiers avkastning och deras volatilitet genom att inkludera volatilitet som en förklarande variabel i medelvärdes-ekvationen. Sekventiella Monte Carlo metoder tillåter oss att använda icke-linjära estimerings procedurer till en kostnad av extra beräkningskomplexitet. Den nyligen utvecklade PaRIS-algoritmen, introducerad av Olsson och Westerborn, minskar drastiskt beräkningskomplexiteten jämfört med tidigare algoritmer och tillåter en effektiv uppskattning av parametrar. Huvudsyftet med detta arbete är att undersöka volatilitets-återkopplings-teorin, d.v.s. relationen mellan förväntad avkastning och oväntad volatilitet i en empirisk studie. Resultatet visar på att oväntade chockar i avkastningsprocessen inte har förklarande förmåga över förväntad avkastning.
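A minimal sketch of a bootstrap particle filter for an SVM-type model, assuming the common parameterization y_t = d*exp(h_t) + exp(h_t/2)*eps_t with latent AR(1) log-volatility h_t = mu + phi*(h_{t-1} - mu) + sigma*eta_t. This is plain filtering for the log-likelihood, not the PaRIS smoothing algorithm the thesis actually uses, and all parameter names are illustrative:

import numpy as np

def pf_loglik(y, mu, phi, sigma, d, n_particles=1000, rng=None):
    rng = rng or np.random.default_rng(0)
    # Initialize particles from the stationary distribution of h.
    h = mu + sigma / np.sqrt(1 - phi**2) * rng.standard_normal(n_particles)
    ll = 0.0
    for yt in y:
        # Propagate particles through the latent AR(1) log-volatility.
        h = mu + phi * (h - mu) + sigma * rng.standard_normal(n_particles)
        # Weight by the observation density: y_t | h_t ~ N(d*exp(h), exp(h)).
        var = np.exp(h)
        logw = -0.5 * (np.log(2 * np.pi * var) + (yt - d * var) ** 2 / var)
        w = np.exp(logw - logw.max())
        ll += logw.max() + np.log(w.mean())   # stabilized predictive likelihood
        # Multinomial resampling.
        h = h[rng.choice(n_particles, n_particles, p=w / w.sum())]
    return ll

The parameter d plays the role of the in-mean coefficient: maximizing pf_loglik over (mu, phi, sigma, d) and inspecting the sign and significance of d is one simple way to probe the volatility feedback effect the abstract describes.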
50

Evaluierung des phylogenetischen Footprintings und dessen Anwendung zur verbesserten Vorhersage von Transkriptionsfaktor-Bindestellen / Evaluation of phylogenetic footprinting and its application to an improved prediction of transcription factor binding sites

Sauer, Tilman. 11 July 2006.
No description available.
