141

非線性時間數列模糊轉捩區間之確認 / Fuzzy change period identification for the nonlinear time series

李玉如, Lee, Alice Unknown Date (has links)
For a nonlinear time series exhibiting structural change, it is usually hard to tell where the change point lies, or where the so-called transition period occurs. Although many change-point detection methods have been proposed over the years, the change period, as well as linguistic time-series data (for example, the red/green-light series of the business-climate indicator), has rarely been addressed. In this thesis, we first draw on the fuzzy set theory proposed by Zadeh (1965) to introduce fuzzy time series (FTS). We then define the fuzzy point (FP) and fuzzy change period (FCP) at the α-level and prove some useful properties. Finally, using Taiwan birth-rate data as an example, we illustrate the method for identifying the α-level fuzzy change period and list its step-by-step procedure. The experimental results further show that the proposed fuzzy detection method is highly practical and effective.
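A minimal sketch of the idea, not the thesis's formal FP/FCP construction: estimate a crude two-mean change point, then treat the times whose regime membership is ambiguous at level α as the fuzzy change period. The linear membership ramp, the smoothing window, and the least-squares split are all illustrative assumptions.

```python
import numpy as np

def fuzzy_change_period(x, alpha=0.2, window=15):
    """Illustrative alpha-level fuzzy change period for a mean-shift series."""
    n = len(x)
    # crude least-squares change point for a two-mean model
    costs = [k * np.var(x[:k]) + (n - k) * np.var(x[k:]) for k in range(2, n - 1)]
    k_hat = int(np.argmin(costs)) + 2
    # smooth so that regime membership ramps through the transition period
    s = np.convolve(x, np.ones(window) / window, mode="same")
    lo, hi = sorted((x[:k_hat].mean(), x[k_hat:].mean()))
    mu = np.clip((s - lo) / (hi - lo), 0.0, 1.0)        # membership in the "high" regime
    fcp = np.where((mu > alpha) & (mu < 1 - alpha))[0]  # ambiguous time indices
    return k_hat, fcp

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 80), rng.normal(3, 1, 80)])
k_hat, fcp = fuzzy_change_period(x)
print(k_hat, fcp.min(), fcp.max())   # a fuzzy interval around the true change at t = 80
```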
142

Sur quelques problèmes non-supervisés impliquant des séries temporelles hautement dépendantes / On some unsupervised problems involving highly dependent time series

Khaleghi, Azadeh 18 November 2013 (has links) (PDF)
This thesis is devoted to the theoretical analysis of unsupervised problems involving highly dependent time series. More specifically, we address the two fundamental problems of change-point estimation and time-series clustering. These problems are treated in an extremely general framework in which the data are generated by stationary ergodic stochastic processes. This is one of the weakest assumptions in statistics, subsuming not only the model and parametric assumptions usual in the literature, but also the classical assumptions of independence, of constraints on memory, or of mixing. In particular, no restriction is placed on the form or nature of the dependence, so that the samples may be arbitrarily dependent. For each problem addressed, we propose new nonparametric methods and prove, moreover, that they are asymptotically consistent in this framework. For change-point estimation, asymptotic consistency refers to the algorithm's ability to produce change-point estimates that are asymptotically arbitrarily close to the true change points. A clustering algorithm, on the other hand, is asymptotically consistent if the clustering it produces, restricted to each batch of sequences, eventually coincides, consistently, with the target clustering. We show that the proposed algorithms can be implemented efficiently, and we accompany our theoretical results with experimental evaluations. Statistical analysis in the stationary ergodic framework is extremely difficult. In general, rates of convergence are provably impossible to obtain. Consequently, for two samples generated independently by stationary ergodic processes, it is provably impossible to distinguish the case where the samples are generated by the same process from the case where they are generated by different processes. This implies that problems such as time-series clustering without knowledge of the number of clusters, or change-point estimation without knowledge of the number of change points, cannot admit consistent solutions. A difficult task is therefore to discover formulations of the problems that do admit consistent solutions in this general framework. The main contribution of this thesis is to demonstrate (by construction) that, despite these theoretical impossibility results, natural formulations of the problems considered exist and admit consistent solutions in this general framework. This includes showing that the correct number of change points can be found without resorting to stronger assumptions on the stochastic processes. It follows that, in this formulation, the change-point problem can be reduced to time-series clustering. The results presented in this work lay the theoretical foundations for the analysis of sequential data in a much broader range of applications.
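To make the flavour of such distribution-based procedures concrete, here is a hedged sketch (not the thesis's algorithms) of single change-point estimation for dependent data: discretise the series globally, then choose the split that maximises a weighted total-variation distance between the empirical pattern frequencies of the two segments, an empirical surrogate for the distributional distance between stationary ergodic processes. Bin counts, pattern lengths, and weights are illustrative choices.

```python
import numpy as np
from collections import Counter

def pattern_freqs(sym, l):
    """Empirical frequencies of length-l symbol patterns."""
    words = Counter(tuple(sym[i:i + l]) for i in range(len(sym) - l + 1))
    total = sum(words.values())
    return {w: c / total for w, c in words.items()}

def estimate_change_point(x, n_bins=6, max_len=2, margin=20):
    # global discretisation so the two segments' distributions are comparable
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1])
    sym = np.digitize(x, edges)
    best_k, best_d = None, -1.0
    for k in range(margin, len(x) - margin):
        d = 0.0
        for l in range(1, max_len + 1):
            f1, f2 = pattern_freqs(sym[:k], l), pattern_freqs(sym[k:], l)
            d += 2.0 ** -l * sum(abs(f1.get(w, 0.0) - f2.get(w, 0.0))
                                 for w in set(f1) | set(f2))
        if d > best_d:
            best_k, best_d = k, d
    return best_k

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 1, 150), rng.normal(1.5, 1, 150)])
print(estimate_change_point(x))   # close to the true change at 150
```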
143

Testování strukturálních změn pomocí statistik podílového typu / Testing Structural Changes Using Ratio Type Statistics

Peštová, Barbora January 2015 (has links)
We deal with sequences of observations that are naturally ordered in time and assume various underlying stochastic models. These models are parametric, and some of the parameters are possibly subject to change at some unknown time point. The main goal of this thesis is to test whether such an unknown change has occurred or not. The core of the change-point methods presented here lies in ratio-type statistics based on maxima of cumulative sums. Firstly, an overview of the thesis's starting points is given. Then we focus on methods for detecting a gradual change in mean. Subsequently, procedures for detecting an abrupt change in mean are generalized by considering a score function. We explore the possibility of applying bootstrap methods for obtaining critical values, while the disturbances of the change-point model are assumed to be weakly dependent. Procedures for detecting changes in the parameters of linear regression models are shown as well, and a permutation version of the test is derived. Then, a related problem of testing a change in an autoregression parameter is studied....
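A sketch of the kind of ratio-type statistic meant here (illustrative only; the thesis builds score functions, bootstrap and permutation critical values on top of this core): the maximal CUSUM deviation before a candidate split divided by the maximal deviation after it, maximised over admissible splits. Self-normalisation is the point of ratio statistics: no variance estimator is required. Critical values are simulated below for brevity.

```python
import numpy as np

def ratio_cusum(x, gamma=0.1):
    """Ratio-type CUSUM statistic: max over admissible splits k of
    (max |partial CUSUM| before k) / (max |partial CUSUM| after k)."""
    n = len(x)
    stat = 0.0
    for k in range(int(gamma * n), int((1 - gamma) * n)):
        front = np.abs(np.cumsum(x[:k] - x[:k].mean())).max()
        back = np.abs(np.cumsum((x[k:] - x[k:].mean())[::-1])).max()
        if back > 0:
            stat = max(stat, front / back)
    return stat

rng = np.random.default_rng(2)
null = np.array([ratio_cusum(rng.normal(size=200)) for _ in range(200)])
shifted = ratio_cusum(np.r_[rng.normal(size=100), rng.normal(1.0, 1.0, 100)])
print(shifted > np.quantile(null, 0.95))   # rejects under a mean shift
```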
144

Comparative Analysis of Behavioral Models for Adaptive Learning in Changing Environments

Marković, Dimitrije, Kiebel, Stefan J. 16 January 2017 (has links) (PDF)
Probabilistic models of decision making under various forms of uncertainty have been applied in recent years to numerous behavioral and model-based fMRI studies. These studies were highly successful in enabling a better understanding of behavior and delineating the functional properties of brain areas involved in decision making under uncertainty. However, as different studies considered different models of decision making under uncertainty, it is unclear which of these computational models provides the best account of the observed behavioral and neuroimaging data. This is an important issue, as not performing model comparison may tempt researchers to over-interpret results based on a single model. Here we describe how in practice one can compare different behavioral models and test the accuracy of model comparison and parameter estimation of Bayesian and maximum-likelihood based methods. We focus our analysis on two well-established hierarchical probabilistic models that aim at capturing the evolution of beliefs in changing environments: Hierarchical Gaussian Filters and Change Point Models. To our knowledge, these two, well-established models have never been compared on the same data. We demonstrate, using simulated behavioral experiments, that one can accurately disambiguate between these two models, and accurately infer free model parameters and hidden belief trajectories (e.g., posterior expectations, posterior uncertainties, and prediction errors) even when using noisy and highly correlated behavioral measurements. Importantly, we found several advantages of Bayesian inference and Bayesian model comparison compared to often-used Maximum-Likelihood schemes combined with the Bayesian Information Criterion. These results stress the relevance of Bayesian data analysis for model-based neuroimaging studies that investigate human decision making under uncertainty.
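As a toy illustration of the maximum-likelihood/BIC arm of such a comparison (the paper's models, Hierarchical Gaussian Filters and Change Point Models, and its Bayesian comparison are considerably richer), one can simulate responses from a simple adaptive learner in a changing environment, fit two candidate models, and check that BIC recovers the generative one. The delta-rule learner, the grid search, and the Gaussian response model are assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

def delta_rule_predictions(outcomes, lr):
    """Trial-wise predictions of a simple delta-rule (Rescorla-Wagner) learner."""
    mu, preds = 0.0, []
    for o in outcomes:
        preds.append(mu)
        mu += lr * (o - mu)
    return np.array(preds)

# changing environment: the outcome mean jumps twice
outcomes = np.repeat([1.0, -1.0, 2.0], 100) + rng.normal(0, 1, 300)
# synthetic responses from a learner with lr = 0.3 plus response noise
responses = delta_rule_predictions(outcomes, 0.3) + rng.normal(0, 0.5, 300)

def bic(preds_fn, grid, n_params):
    """Grid-search ML fit with a Gaussian response model, then BIC."""
    best_ll, n = -np.inf, len(responses)
    for theta in grid:
        resid = responses - preds_fn(theta)
        sigma2 = np.mean(resid ** 2)                      # ML noise variance
        best_ll = max(best_ll, -0.5 * n * (np.log(2 * np.pi * sigma2) + 1))
    return -2 * best_ll + n_params * np.log(n)

bic_learner = bic(lambda lr: delta_rule_predictions(outcomes, lr),
                  np.linspace(0.05, 0.95, 19), n_params=2)   # lr + noise
bic_static = bic(lambda _: np.full(300, responses.mean()),
                 [None], n_params=2)                         # mean + noise
print(bic_learner < bic_static)   # the generative (adaptive) model should win
```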
146

Some Contributions to Inferential Issues of Censored Exponential Failure Data

Han, Donghoon 06 1900 (has links)
In this thesis, we investigate several inferential issues regarding lifetime data from the exponential distribution under different censoring schemes. For reasons of time constraint and cost reduction, censored sampling is commonly employed in practice, especially in reliability engineering. Among various censoring schemes, progressive Type-I censoring provides not only the practical advantage of a known termination time but also greater flexibility to the experimenter in the design stage by allowing for the removal of test units at non-terminal time points. Hence, we first consider the inference for a progressively Type-I censored life-testing experiment with k uniformly spaced intervals. For small to moderate sample sizes, a practical modification is proposed to the censoring scheme in order to guarantee a feasible life-test under progressive Type-I censoring. Under this setup, we obtain the maximum likelihood estimator (MLE) of the unknown mean parameter and derive the exact sampling distribution of the MLE through the use of the conditional moment generating function, under the condition that the existence of the MLE is ensured. Using the exact distribution of the MLE as well as its asymptotic distribution and the parametric bootstrap method, we discuss the construction of confidence intervals for the mean parameter, and their performance is then assessed through Monte Carlo simulations. Next, we consider a special class of accelerated life tests, known as step-stress tests, in reliability testing. In a step-stress test, the stress levels increase discretely at pre-fixed time points, and this allows the experimenter to obtain information on the parameters of the lifetime distributions more quickly than under normal operating conditions. Here, we consider a k-step-stress accelerated life-testing experiment with an equal step duration τ. In particular, the case of progressively Type-I censored data with a single stress variable is investigated. For small to moderate sample sizes, we introduce another practical modification to the model for a feasible k-step-stress test under progressive censoring, and the optimal τ is searched for using the modified model. Next, we seek the optimal τ under the condition that the step-stress test proceeds to the k-th stress level, and the efficiency of this conditional inference is compared to that of the preceding models. In all cases, censoring is allowed at each stress-change point iτ, i = 1, 2, ..., k, and the problem of selecting the optimal τ is discussed using the C-optimality, D-optimality, and A-optimality criteria. Moreover, when a test unit fails, there is often more than one fatal cause of failure, such as mechanical or electrical. Thus, we also consider simple step-stress models under Type-I and Type-II censoring situations when the lifetime distributions corresponding to the different risk factors are independently exponentially distributed. Under this setup, we derive the MLEs of the unknown mean parameters of the different causes under the assumption of a cumulative exposure model. The exact distributions of the MLEs of the parameters are then derived through the use of conditional moment generating functions. Using these exact distributions as well as the asymptotic distributions and the parametric bootstrap method, we discuss the construction of confidence intervals for the parameters and then assess their performance through Monte Carlo simulations. / Thesis / Doctor of Philosophy (PhD)
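For the simplest ingredient, conventional (non-progressive) Type-I censoring of exponential lifetimes, the MLE of the mean and a parametric-bootstrap confidence interval can be sketched as follows; the thesis's exact conditional sampling distributions under progressive censoring are substantially more involved.

```python
import numpy as np

rng = np.random.default_rng(2)

def exp_mle_type1(lifetimes, tau):
    """MLE of the exponential mean under conventional Type-I censoring:
    total time on test divided by the number of observed failures.
    The MLE exists only if at least one failure occurs before tau."""
    d = int(np.sum(lifetimes <= tau))
    if d == 0:
        raise ValueError("no failures before tau: MLE does not exist")
    return np.minimum(lifetimes, tau).sum() / d

theta, tau, n = 10.0, 8.0, 50
theta_hat = exp_mle_type1(rng.exponential(theta, n), tau)

# parametric bootstrap confidence interval for the mean
boot = [exp_mle_type1(rng.exponential(theta_hat, n), tau) for _ in range(2000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(round(theta_hat, 2), (round(float(lo), 2), round(float(hi), 2)))
```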
147

Stochastic Modeling of Intraday Electricity Markets

Milbradt, Cassandra 29 November 2023 (has links)
Limit order books are the standard instrument for price formation in modern financial markets. While electricity has traditionally been traded through auctions, there are intraday electricity markets, such as the SIDC market, in which buyers and sellers meet via limit order books. In this thesis, stochastic models of limit order books are developed based on the underlying market microstructure. A particular focus is set on incorporating unique characteristics of intraday electricity markets, some of which are quite different from those of financial markets. The models developed in this thesis start with a realistic and microscopic description of the market dynamics. Large price changes over short time periods are considered, as well as limited cross-border activities. These microscopic models are generally computationally too intensive for practical applications. The main goal of this thesis is therefore to derive suitable approximations of these microscopic models by so-called scaling limits. For this purpose, appropriate scaling assumptions are carefully formulated and incorporated into the microscopic models, which allows us to study their high-frequency behavior when the size of an individual order converges to zero while the order arrival rate tends to infinity. Calibration of mathematical models is one of the main concerns from a practitioner's point of view. It is well known that change points (abrupt variations) are present in high-frequency financial data. If they are caused by endogenous effects, the dependence on the underlying data must be considered when estimating such change points. In the final part of this thesis, we extend the existing literature on change point detection so that random change points depending on the data can also be handled.
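A caricature of the scaling regime just described (not the thesis's model): a single best-quote queue receiving orders of size 1/n at an event rate proportional to n. As n grows, the rescaled queue concentrates around its fluid limit q0 + (λ − μ)t; the rates and the one-queue setup are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def rescaled_queue(n, lam=2.0, mu=1.5, q0=1.0, horizon=1.0):
    """Best-quote queue under the high-frequency scaling: each event moves
    the queue by an order of size 1/n, and events arrive at rate n*(lam+mu)."""
    t, q = 0.0, q0
    while t < horizon and q > 0:
        t += rng.exponential(1.0 / (n * (lam + mu)))
        q += (1.0 if rng.random() < lam / (lam + mu) else -1.0) / n
    return q

# the fluid limit predicts q0 + (lam - mu) * horizon = 1.5 at t = 1
for n in (10, 100, 1000):
    print(n, round(float(np.mean([rescaled_queue(n) for _ in range(20)])), 3))
```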
148

Joint models for longitudinal and survival data

Yang, Lili 11 July 2014 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Epidemiologic and clinical studies routinely collect longitudinal measures of multiple outcomes. These longitudinal outcomes can be used to establish the temporal order of relevant biological processes and their association with the onset of clinical symptoms. In the first part of this thesis, we proposed to use bivariate change point models for two longitudinal outcomes, with a focus on estimating the correlation between the two change points. We adopted a Bayesian approach for parameter estimation and inference. In the second part, we considered the situation when a time-to-event outcome is also collected along with multiple longitudinal biomarkers measured until the occurrence of the event or censoring. Joint models for longitudinal and time-to-event data can be used to estimate the association between the characteristics of the longitudinal measures over time and survival time. We developed a maximum-likelihood method to jointly model multiple longitudinal biomarkers and a time-to-event outcome. In addition, we focused on predicting conditional survival probabilities and evaluating the predictive accuracy of multiple longitudinal biomarkers in the joint modeling framework. We assessed the performance of the proposed methods in simulation studies and applied the new methods to data sets from two cohort studies. / National Institutes of Health (NIH) Grants R01 AG019181, R24 MH080827, P30 AG10133, R01 AG09956.
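A simulation sketch of the first part's quantity of interest (the thesis uses a Bayesian bivariate change point model; the broken-stick trajectories, grids, and per-subject least-squares fits here are illustrative assumptions): draw correlated subject-level change points for two outcomes, estimate each change point separately, and inspect the correlation of the estimates.

```python
import numpy as np

rng = np.random.default_rng(4)

def broken_stick(t, cp, slope):
    """Flat trajectory before the change point, linear decline after it."""
    return slope * np.maximum(t - cp, 0.0)

def fit_cp(t, y, grid):
    """Least-squares change point for one subject's trajectory."""
    sse = []
    for cp in grid:
        z = np.maximum(t - cp, 0.0)
        resid = y - np.polyval(np.polyfit(z, y, 1), z)
        sse.append(np.sum(resid ** 2))
    return grid[int(np.argmin(sse))]

t = np.linspace(0, 10, 21)
grid = np.linspace(1, 9, 81)
cov = [[1.0, 0.6], [0.6, 1.0]]          # true change-point correlation 0.6
est1, est2 = [], []
for _ in range(200):
    cp1, cp2 = rng.multivariate_normal([4.0, 6.0], cov)
    est1.append(fit_cp(t, broken_stick(t, cp1, -0.8) + rng.normal(0, 0.3, t.size), grid))
    est2.append(fit_cp(t, broken_stick(t, cp2, -0.8) + rng.normal(0, 0.3, t.size), grid))
print(round(np.corrcoef(est1, est2)[0, 1], 2))   # near 0.6, attenuated by estimation noise
```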
149

Modelling Financial and Social Networks

Klochkov, Yegor 04 October 2019 (has links)
In this work we explore some ways of studying financial and social networks, a topic that has recently received a tremendous amount of attention in the econometric literature. Chapter 2 studies the risk spillover effect via the Multivariate Conditional Autoregressive Value at Risk model introduced in White et al. (2015). We are particularly interested in the application to non-stationary time series and develop a sequential test procedure that chooses the largest available interval of homogeneity. Our approach is based on change point test statistics, and we use a novel multiplier bootstrap approach for the evaluation of critical values. In Chapter 3 we aim at social networks. We model interactions between users through a vector autoregressive model, following Zhu et al. (2017). To cope with high dimensionality, we consider a network that is driven by influencers on one side and communities on the other, which helps us to estimate the autoregressive operator even when the number of active parameters is smaller than the sample size. Chapter 4 is devoted to technical tools related to covariance and cross-covariance estimation. We derive uniform versions of the Hanson-Wright inequality for a random vector with independent subgaussian components. The core technique is based on the entropy method combined with truncations of both the gradients of the functions of interest and the coordinates themselves. We provide several applications of our techniques: we establish a version of the standard Hanson-Wright inequality, which is tighter in some regimes. Extending our results, we show a version of the dimension-free matrix Bernstein inequality that holds for random matrices with a subexponential spectral norm. We apply the derived inequality to the problem of covariance estimation with missing observations and prove an improved high-probability version of the recent result of Lounici (2014).
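The covariance-with-missing-observations setting admits a compact sketch: with entries observed independently with probability δ and zero-filled, the empirical second moment has off-diagonal entries biased by a factor δ² and diagonal entries by δ, so rescaling yields an unbiased estimator of the kind studied by Lounici (2014). The zero-mean Bernoulli-mask model and the constants below are assumptions of this sketch, not the thesis's exact statement.

```python
import numpy as np

rng = np.random.default_rng(5)

def masked_cov(Y, delta):
    """Unbiased covariance estimator for zero-mean data whose entries are
    observed independently with probability delta and zero-filled otherwise:
    off-diagonal entries of the empirical second moment are biased by
    delta**2, diagonal entries by delta, so we rescale accordingly."""
    S = Y.T @ Y / Y.shape[0]
    out = S / delta ** 2
    np.fill_diagonal(out, np.diag(S) / delta)
    return out

p, n, delta = 5, 20000, 0.7
A = rng.normal(size=(p, p))
Sigma = A @ A.T / p
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
Y = np.where(rng.random((n, p)) < delta, X, 0.0)

err = np.linalg.norm(masked_cov(Y, delta) - Sigma, 2)
print(round(float(err), 3), round(float(np.linalg.norm(Sigma, 2)), 3))  # error << ||Sigma||
```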
