91

Cena volatility finančních proměnných / Price of Volatility of Financial Assets

Gříšek, Lukáš January 2011 (has links)
This diploma thesis describes the problem of change-points in the volatility of time series and their impact on the prices of financial assets. These change-points are estimated using statistical methods and tests. Change-point estimation was tested on simulated data and on real-world data; the simulations, generated using stochastic calculus, helped to reveal significant characteristics of the change-point test. Google share prices and call option prices were chosen to analyse the impact of volatility changes on asset prices, and the impact of implied volatility on the call option price was also analysed.
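As a rough illustration of the idea in this abstract (not the thesis' own code; every parameter value below is assumed), the following Python sketch simulates a geometric Brownian motion whose volatility jumps at an unknown time, locates the jump with a simple split-sample likelihood scan over the log-returns, and shows how the re-estimated volatility feeds into a Black-Scholes call price.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# --- simulate daily log-returns whose volatility jumps at an (assumed) change-point ---
n, tau_true = 500, 300
sigma1, sigma2, mu, dt = 0.15, 0.35, 0.05, 1.0 / 252
r = np.concatenate([
    (mu - 0.5 * sigma1**2) * dt + sigma1 * np.sqrt(dt) * rng.standard_normal(tau_true),
    (mu - 0.5 * sigma2**2) * dt + sigma2 * np.sqrt(dt) * rng.standard_normal(n - tau_true),
])
prices = 100.0 * np.exp(np.cumsum(r))

# --- locate the variance change-point by maximizing the split-sample Gaussian log-likelihood ---
def split_loglik(x, k):
    a, b = x[:k], x[k:]
    return -0.5 * (len(a) * np.log(a.var()) + len(b) * np.log(b.var()))

candidates = range(30, n - 30)                  # keep both segments reasonably long
tau_hat = max(candidates, key=lambda k: split_loglik(r, k))

# --- Black-Scholes call price with the volatility estimated before and after the change ---
def bs_call(S, K, T, rate, sigma):
    d1 = (np.log(S / K) + (rate + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-rate * T) * norm.cdf(d2)

sig_before = r[:tau_hat].std() / np.sqrt(dt)    # annualized volatility estimates
sig_after = r[tau_hat:].std() / np.sqrt(dt)
S = prices[-1]
print(f"estimated change-point: {tau_hat} (true {tau_true})")
print(f"annualized volatility before/after: {sig_before:.2f} / {sig_after:.2f}")
print(f"ATM 3-month call before/after: "
      f"{bs_call(S, S, 0.25, 0.02, sig_before):.2f} / {bs_call(S, S, 0.25, 0.02, sig_after):.2f}")
```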
92

Methods for evaluating dropout attrition in survey data

Hochheimer, Camille J 01 January 2019 (has links)
As researchers increasingly use web-based surveys, the ease of dropping out in the online setting is a growing threat to data quality. One theory is that dropout or attrition occurs in phases that can be generalized to phases of high dropout and phases of stable use. Several methods for detecting these phases are explored. First, existing methods and user-specified thresholds are applied to survey data, where a significant change in the dropout rate between two questions is interpreted as the start or end of a high-dropout phase. Next, survey dropout is considered as a time-to-event outcome, tests within change-point hazard models are introduced, and the performance of these change-point hazard models is compared. Finally, all methods are applied to survey data on patient cancer-screening preferences, testing the null hypothesis of no phases of attrition (no change-points) against the alternative hypothesis that distinct attrition phases exist (at least one change-point).
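The sketch below illustrates the time-to-event framing of dropout (it is not the dissertation's tests; the survey size, dropout rates, and change-point grid are assumed): the question at which a respondent drops out is treated as the event time, a piecewise-constant hazard with one change-point is fitted by profile likelihood, and the fit is compared with a constant-hazard model.

```python
import numpy as np

rng = np.random.default_rng(1)

# simulated survey: high early dropout, stable use afterwards (all rates assumed)
n_q, n_resp, tau_true = 40, 800, 8
haz = np.where(np.arange(1, n_q + 1) <= tau_true, 0.08, 0.01)    # per-question dropout hazard
u = rng.random((n_resp, n_q))
drop = u < haz
time = np.where(drop.any(axis=1), drop.argmax(axis=1) + 1, n_q)  # question reached
event = drop.any(axis=1).astype(int)                             # 0 = completed the survey

def exposure_and_events(lo, hi):
    """Person-questions at risk and dropouts observed in questions (lo, hi]."""
    exposure = (np.clip(time, lo, hi) - lo).sum()
    events = ((time > lo) & (time <= hi) & (event == 1)).sum()
    return exposure, events

def loglik_two_piece(tau):
    ll = 0.0
    for lo, hi in ((0, tau), (tau, n_q)):
        e, d = exposure_and_events(lo, hi)
        lam = d / e
        ll += d * np.log(lam) - lam * e
    return ll

taus = np.arange(2, n_q - 4)
profile = np.array([loglik_two_piece(t) for t in taus])
tau_hat = taus[profile.argmax()]

# constant-hazard (no change-point) log-likelihood, for a likelihood-ratio comparison
e0, d0 = exposure_and_events(0, n_q)
ll0 = d0 * np.log(d0 / e0) - d0
lrt = 2 * (profile.max() - ll0)
print(f"estimated change-point: question {tau_hat} (true {tau_true}), LR statistic = {lrt:.1f}")
```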
93

轉折型時間序列的認定 / Pattern Recognition for Trend Time Series

程友梅, Cheng, Yu Mei Unknown Date (has links)
轉折型時間序列在現實生活中常常可見，例如因戰爭、政策改變、罷工或自然界的條件劇變等，而使時間序列的走勢發生明顯的轉變。傳統上，對這種轉折型時間序列資料進行轉折點的偵測時，大部分均從事後的觀點，主觀上先行認定結構轉變發生的時點，而後再以檢定加以確認。但此種方法過於主觀，而且轉型並非一蹴可幾，若以單一的轉折點來解釋轉型的現象，似乎不太恰當。有鑑於此，本文利用模糊轉折區間統計認定法，以事前的觀點，對具有平均數或變異數改變的轉折型時間序列進行轉折區間的認定。並以匯率及貿易餘額的實際例子，利用我們所提出的方法進行單變數及雙變數的模糊分類，進而求出個別及聯合的轉折區間。 / Structure-changing time series are common in real life; for example, wars, policy changes, labor strikes, or abrupt changes in natural conditions can visibly alter the behaviour of a series. Traditionally, change points in such series are detected retrospectively: a time of structural change is first identified subjectively and then confirmed by a test. This approach is overly subjective, and since a transition rarely happens at a single instant, a single change point is often inadequate to describe it. We therefore present a statistical method, based on fuzzy classification, that identifies change periods for trend time series with changes in mean or variance. An empirical example on exchange rates and the trade balance illustrates the univariate and bivariate fuzzy classification and the resulting individual and joint change periods.
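A minimal sketch of the change-period idea (not the authors' statistical procedure; the series, the features, the fuzziness parameter, and the membership thresholds are assumed), using a small hand-rolled fuzzy c-means: each time point receives a membership in a "before" and an "after" regime, and the points with ambiguous membership form a change period rather than a single change point.

```python
import numpy as np

rng = np.random.default_rng(2)

# series whose mean level drifts upward gradually between t = 80 and t = 120
n = 200
level = np.interp(np.arange(n), [0, 80, 120, n - 1], [0.0, 0.0, 3.0, 3.0])
x = level + rng.standard_normal(n)

# features: scaled time index and a standardized short moving average of the series
t = np.arange(n) / n
ma = np.convolve(x, np.ones(9) / 9, mode="same")
X = np.column_stack([t, (ma - ma.mean()) / ma.std()])

def fuzzy_cmeans(X, c=2, m=2.0, iters=100):
    """Plain fuzzy c-means; returns the n-by-c membership matrix and the centers."""
    u = rng.dirichlet(np.ones(c), size=len(X))          # memberships, rows sum to 1
    for _ in range(iters):
        w = u ** m
        centers = (w.T @ X) / w.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        u = 1.0 / (d ** (2 / (m - 1)) * np.sum(d ** (-2 / (m - 1)), axis=1, keepdims=True))
    return u, centers

u, _ = fuzzy_cmeans(X)
early_cluster = u[:20].mean(axis=0).argmax()            # regime that dominates early on
mem = u[:, early_cluster]
fuzzy_zone = np.where((mem > 0.3) & (mem < 0.7))[0]     # ambiguous membership = change period
if fuzzy_zone.size:
    print(f"estimated change period: t = {fuzzy_zone.min()} .. {fuzzy_zone.max()}")
else:
    print("no ambiguous zone found; the change behaves like a single point")
```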
94

結構性改變ARIMA模式的建立與應用 / Structural Change ARIMA Modeling and Application

曾淑惠, Tseng, Shuhui Unknown Date (has links)
近年來，非線性時間數列分析是一個快速發展的課題，其中最為人所矚目的是門檻模式。從過去許多文獻得知，一個簡單門檻模式對於某些型態時間數列的描述，如結構性改變的行為趨勢，比一般線性ARMA模式更能解釋實際情況。在本篇論文中，我們將討論有關門檻模式及結構性改變分析的問題。對於模式的建立，我們提出一個轉型期的觀念，替代傳統尋求一個轉捩點的方法，進而提出一個結構性改變ARIMA模式有效建立的程序。最後，我們以台灣出生率當作應用分析的範例，並且利用建立的結構性改變ARIMA模式，及其他傳統門檻TAR模式，傳統線性分析方法等進行預測分析及比較。 / Non-linear time series analysis has been a rapidly developing subject in recent years, and one of its most prominent families of models is the threshold model. Many studies have shown that even a simple threshold model can describe certain types of time series, such as structural-change behavior, more faithfully than linear ARMA models. In this paper, we discuss some problems concerning threshold models and structural-change analysis. For model building, we introduce the concept of a change period in place of the traditional search for a single change point, and propose an efficient procedure for constructing structural-change ARIMA models. Finally, we demonstrate the approach on the birth rate of Taiwan and compare the forecasting performance of the structural-change ARIMA model with a traditional threshold (TAR) model and with conventional linear methods.
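A minimal sketch of fitting a two-regime autoregression around a change period (not the thesis' procedure; the series, AR order, window width, and search grid are assumed): separate AR(1) models are fitted before and after each candidate transition window, and the window with the smallest combined residual sum of squares is chosen.

```python
import numpy as np

rng = np.random.default_rng(3)

# AR(1) series whose mean level migrates between t = 120 and t = 140 (assumed values)
n, width = 300, 20
mean = np.interp(np.arange(n), [0, 120, 140, n - 1], [0.0, 0.0, 4.0, 4.0])
x = np.empty(n)
x[0] = mean[0]
for t in range(1, n):
    x[t] = mean[t] + 0.6 * (x[t - 1] - mean[t - 1]) + rng.standard_normal()

def ar1_sse(y):
    """Residual sum of squares of an OLS AR(1) fit with intercept."""
    Y, Z = y[1:], np.column_stack([np.ones(len(y) - 1), y[:-1]])
    beta, *_ = np.linalg.lstsq(Z, Y, rcond=None)
    return ((Y - Z @ beta) ** 2).sum()

starts = list(range(30, n - width - 30))        # candidate start of the transition window
sse = [ar1_sse(x[:s]) + ar1_sse(x[s + width:]) for s in starts]
s_hat = starts[int(np.argmin(sse))]
print(f"estimated change period: t = {s_hat} .. {s_hat + width} (true 120 .. 140)")
```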
95

Théorèmes limites pour des processus à longue mémoire saisonnière / Limit Theorems for Processes with Seasonal Long Memory

Ould Mohamed Abdel Haye, Mohamedou 30 December 2001 (has links) (PDF)
Nous étudions le comportement asymptotique de statistiques ou fonctionnelles liées à des processus à longue mémoire saisonnière. Nous nous concentrons sur les lignes de Donsker et sur le processus empirique. Les suites considérées sont de la forme $G(X_n)$ où $(X_n)$ est un processus gaussien ou linéaire. Nous montrons que les résultats que Taqqu et Dobrushin ont obtenus pour des processus à longue mémoire dont la covariance est à variation régulière à l'infini peuvent être en défaut en présence d'effets saisonniers. Les différences portent aussi bien sur le coefficient de normalisation que sur la nature du processus limite. Notamment nous montrons que la limite du processus empirique bi-indexé, bien que restant dégénérée, n'est plus déterminée par le degré de Hermite de la fonction de répartition des données. En particulier, lorsque ce degré est égal à 1, la limite n'est plus nécessairement gaussienne. Par exemple on peut obtenir une combinaison de processus de Rosenblatt indépendants. Ces résultats sont appliqués à quelques problèmes statistiques comme le comportement asymptotique des U-statistiques, l'estimation de la densité et la détection de rupture. / We study the asymptotic behaviour of statistics and functionals of processes with seasonal long memory, focusing on Donsker lines and on the empirical process. The sequences considered are of the form $G(X_n)$, where $(X_n)$ is a Gaussian or linear process. We show that the results obtained by Taqqu and Dobrushin for long-memory processes whose covariance is regularly varying at infinity can fail in the presence of seasonal effects. The differences concern both the normalization coefficient and the nature of the limiting process. In particular, we show that the limit of the doubly indexed empirical process, although still degenerate, is no longer determined by the Hermite rank of the distribution function of the data; when this rank equals 1, the limit is no longer necessarily Gaussian and can, for instance, be a combination of independent Rosenblatt processes. These results are applied to several statistical problems such as the asymptotic behaviour of U-statistics, density estimation, and change-point detection.
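A purely numerical illustration of the flavour of these results (not the thesis' setting; the covariance $r(k) = \cos(\pi k/6)(1+k)^{-0.3}$ and the sample sizes are assumed): under a seasonal long-memory covariance the partial sums of the Gaussian sequence itself grow at roughly the short-memory rate, variance of order $n$, because the oscillation cancels, while the Hermite-rank-2 functional $X^2-1$, whose covariance is $2r(k)^2$, inherits a non-oscillating long-memory component and grows roughly like $n^{2-2\theta}$.

```python
import numpy as np

# assumed seasonal long-memory covariance for a standardized Gaussian sequence X:
# r(k) = cos(pi*k/6) * (1+k)^(-theta); for such X, Cov(X_0^2 - 1, X_k^2 - 1) = 2*r(k)^2
theta, lam = 0.3, np.pi / 6
k = np.arange(200_000)
r1 = np.cos(lam * k) * (1.0 + k) ** (-theta)   # Cov(X_0, X_k), with r1[0] = 1
r2 = 2.0 * r1 ** 2                             # Cov(X_0^2 - 1, X_k^2 - 1)

def var_partial_sum(r, n):
    """Var(X_1 + ... + X_n) = n*r(0) + 2 * sum_{k=1}^{n-1} (n - k) * r(k)."""
    kk = np.arange(1, n)
    return n * r[0] + 2.0 * np.sum((n - kk) * r[kk])

n1, n2 = 10_000, 200_000
for name, r in [("Hermite rank 1 (X)", r1), ("Hermite rank 2 (X^2 - 1)", r2)]:
    slope = np.log(var_partial_sum(r, n2) / var_partial_sum(r, n1)) / np.log(n2 / n1)
    print(f"{name}: Var(S_n) grows roughly like n^{slope:.2f}")
```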
96

Metodik för detektering av vägåtgärder via tillståndsdata / Methodology for detection of road treatments

Andersson, Niklas, Hansson, Josef January 2010 (has links)
<p>The Swedish Transport Administration has, and manages, a database containing information of the status of road condition on all paved and governmental operated Swedish roads. The purpose of the database is to support the Pavement Management System (PMS). The PMS is used to identify sections of roads where there is a need for treatment, how to allocate resources and to get a general picture of the state of the road network condition. All major treatments should be reported which has not always been done.</p><p>The road condition is measured using a number of indicators on e.g. the roads unevenness. Rut depth is an indicator of the roads transverse unevenness. When a treatment has been done the condition drastically changes, which is also reflected by these indicators.</p><p>The purpose of this master thesis is to; by using existing indicators make predictions to find points in time when a road has been treated.</p><p>We have created a SAS-program based on simple linear regression to analyze rut depth changes over time. The function of the program is to find levels changes in the rut depth trend. A drastic negative change means that a treatment has been made.</p><p>The proportion of roads with an alleged date for the latest treatment earlier than the programs latest detected date was 37 percent. It turned out that there are differences in the proportions of possible treatments found by the software and actually reported roads between different regions. The regions North and Central have the highest proportion of differences. There are also differences between the road groups with various amount of traffic. The differences between the regions do not depend entirely on the fact that the proportion of heavily trafficked roads is greater for some regions.</p>
98

Cellular Services Market In India : Predictive Models And Assessing Interventions

Shrinivas, V Prasanna 04 1900 (has links)
The objective of this thesis is to address some interesting problems in the Indian cellular services market. The first problem relates to identifying important change points that marked the evolution of the telecom market since Indian independence. We use data on the per-capita availability of telephones in India to this effect, and we identify important change points that map to the computerization move in 1989, the liberalization and globalization policies starting from 1991, and subsequently the introduction of NTP 1997 and NTP 1999. We also identify the important change points that mark the growth of the cellular services subscriber base in India and map them to some of the important macro-level policy initiatives taken by TRAI.

The second problem is the assessment of policy interventions on the growth of the cellular subscriber base in India. We model the impact of two important policy interventions, namely NTP 1999 and its spill-over policy, the entry of the fourth player into the market. We model the abrupt temporary, abrupt permanent and gradual permanent impacts of these interventions individually and in a coupled manner. We are arguably the first to use intervention analysis and change-point analysis to study the Indian telecom market.

The third problem relates to the challenging task of forecasting the growth of cellular services subscribers in India. We use machine learning techniques, namely ε-SVR and ν-SVR, and compare their performance with ANN and ARIMA using standard performance metrics. We first predict the aggregate growth of cellular mobile subscribers in India, which is of interest to policy makers from a strategic standpoint. We then predict the marginal (monthly) subscriber growth and tabulate the results for varying forecasting depths, which is of interest to service providers from an operational standpoint. We find that the SVR techniques perform better than ANN and ARIMA, particularly for out-of-sample forecasting as the forecast horizon increases.

The final problem involves a differential game model, in an oligopoly setting, for telecom service providers who optimize their advertising and innovation mix in order to maximize their discounted flow of profits. We consider the situation where the service providers make Cournot conjectures about the actions of their rivals: the firms do not enter into agreements or form cartels, and they choose the quantities they want to sell simultaneously. The essence of the Cournot conjecture is that, although competition is quantity based, no single firm can unilaterally improve the total quantity sold in the market; every firm makes only one decision and does so while the other firms are simultaneously making theirs. Previous papers have considered either advertising or product/process innovation separately, but not together. We incorporate both control variables, with the inverse demand function as the state variable, and propose an open-loop solution that is time dependent. We conduct experiments with various combinations of churn and spill-over rates of advertising and innovation and derive some managerial insights.
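A minimal forecasting sketch (not the thesis' data or tuned models; the synthetic subscriber curve, lag order, and hyper-parameters are assumed) comparing ε-SVR and ν-SVR from scikit-learn on lagged values of a monthly subscriber series, with hold-out MAPE as the performance metric.

```python
import numpy as np
from sklearn.svm import SVR, NuSVR

rng = np.random.default_rng(5)

# synthetic S-shaped subscriber growth (in millions) over 120 months
months = np.arange(120)
subs = 300 / (1 + np.exp(-(months - 70) / 15)) * (1 + rng.normal(0, 0.01, 120))

def lag_matrix(y, p):
    """Rows of p consecutive values; the target is the next value."""
    X = np.column_stack([y[i:len(y) - p + i] for i in range(p)])
    return X, y[p:]

p, horizon = 6, 12
X, y = lag_matrix(subs, p)
X_train, y_train = X[:-horizon], y[:-horizon]
X_test, y_test = X[-horizon:], y[-horizon:]

scale = y_train.max()                           # crude scaling keeps the RBF kernel well behaved
models = {
    "eps-SVR": SVR(kernel="rbf", C=100.0, epsilon=0.01),
    "nu-SVR": NuSVR(kernel="rbf", C=100.0, nu=0.5),
}
for name, model in models.items():
    model.fit(X_train / scale, y_train / scale)
    pred = model.predict(X_test / scale) * scale
    mape = np.mean(np.abs((y_test - pred) / y_test)) * 100
    print(f"{name}: one-step-ahead hold-out MAPE = {mape:.2f}%")
```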
99

Non-parametric Statistical Process Control : Evaluation and Implementation of Methods for Statistical Process Control at GE Healthcare, Umeå / Icke-parametrisk Statistisk Processtyrning : Utvärdering och Implementering av Metoder för Statistisk Processtyrning på GE Healthcare, Umeå

Lanhede, Daniel January 2015 (has links)
Statistical process control (SPC) is a toolbox for detecting changes in the output distribution of a process, and it can be a valuable resource for maintaining high quality in a manufacturing process. This report is based on the work of evaluating and implementing methods for SPC in the chromatography-instrument manufacturing process at GE Healthcare, Umeå. To handle low-volume and non-normally distributed process output data, non-parametric methods are considered. Eight control charts, three for Phase I analysis and five for Phase II analysis, are evaluated in this study. The usability of the charts is assessed based on ease of interpretation and on their ability to detect distributional changes; the latter is evaluated with simulations. The result of the project is the implementation of the RS/P-chart, suggested by Capizzi et al. (2013), for Phase I analysis. Of the considered Phase I methods (and simulation scenarios), the RS/P-chart has the highest overall probability of detecting a variety of distributional changes, and it is easily interpreted, which facilitates the analysis. For Phase II analysis, two control charts have been implemented: one based on the Mann-Whitney U statistic, suggested by Chakraborti et al. (2008), and one based on the Mood test statistic for dispersion, suggested by Ghute et al. (2014). These were chosen mainly for their ease of interpretation. To reduce the detection time for changes in the process distribution, the change-point chart based on the Cramér-von Mises statistic, suggested by Ross et al. (2012), could be used instead: using single observations instead of larger samples, it is updated more frequently. However, this effectively increases the false-alarm rate, and the chart is also considered much more difficult for the SPC practitioner to interpret. / Statistisk processkontroll (SPC) är en samling verktyg för att upptäcka förändringar i fördelningen hos utfallen i en process. Det kan fungera som en värdefull resurs för att upprätthålla en hög kvalitet i en tillverkningsprocess. Denna rapport är baserad på arbetet med att utvärdera och implementera metoder för SPC i en monteringsprocess av kromatografiinstrument på GE Healthcare, Umeå. Åtta styrdiagram, tre för fas I-analys och fem för fas II-analys, studeras i denna rapport. Användbarheten hos styrdiagrammen bedöms efter hur enkla de är att tolka och förmågan att upptäcka fördelningsförändringar. Den senare utvärderas med simuleringar. Resultatet av projektet är införandet av RS/P-metoden, utvecklad av Capizzi et al. (2013), för analysen i fas I. Av de utvärderade metoderna (och simuleringsscenarierna) har RS/P-diagrammet den högsta övergripande sannolikheten att upptäcka en mängd olika fördelningsförändringar. Vidare är metodens grafiska diagram lätt att tolka, vilket underlättar analysen. För fas II-analys har två styrdiagram implementerats: ett baserat på Mann-Whitneys U-teststatistika, som föreslagits av Chakraborti et al. (2008), och ett på Moods teststatistika för spridning, som föreslagits av Ghute et al. (2014). Styrkan i dessa styrdiagram ligger främst i deras enkla tolkning. För snabbare identifiering av processförändringar kan styrdiagrammet baserat på Cramér-von Mises-teststatistika, som föreslagits av Ross et al. (2012), användas. Baserat på enskilda observationer, istället för stickprov, har det en högre uppdateringsfrekvens. Detta leder dock till ett ökat antal falska larm, och styrdiagrammet anses dessutom vara avsevärt mycket svårare att tolka för SPC-utövaren.
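A simplified sketch of a non-parametric Phase II comparison (not the exact charts of Chakraborti et al. (2008) or the implemented control limits; the reference sample, subgroup size, shift, and alarm level are assumed): each incoming subgroup is compared with the Phase I reference sample by a Mann-Whitney U test, and a signal is raised when the p-value falls below the chosen false-alarm rate.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(6)

reference = rng.gamma(shape=2.0, scale=1.0, size=100)   # skewed, in-control Phase I sample
alpha = 0.002                                           # per-subgroup false-alarm rate

for t in range(1, 31):
    shift = 0.0 if t <= 20 else 2.5                     # location shift after subgroup 20
    subgroup = rng.gamma(shape=2.0, scale=1.0, size=5) + shift
    stat, pval = mannwhitneyu(reference, subgroup, alternative="two-sided")
    if pval < alpha:
        print(f"subgroup {t}: out-of-control signal (U = {stat:.0f}, p = {pval:.4g})")
```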
100

Bruchpunktschätzung bei der Ratingklassenbildung / Rating Classification via Split-Point Estimation

Tillich, Daniel 18 December 2013 (has links) (PDF)
Ratingsysteme sind ein zentraler Bestandteil der Kreditrisikomodellierung. Neben der Bonitätsbeurteilung auf der Ebene der Kreditnehmer und der Risikoquantifizierung auf der Ebene der Ratingklassen spielt dabei die Bildung der Ratingklassen eine wesentliche Rolle. Die Literatur zur Ratingklassenbildung setzt auf modellfreie, in gewisser Weise willkürliche Optimierungsverfahren. Ein Ziel der vorliegenden Arbeit ist es, stattdessen ein parametrisches statistisches Modell zur Bildung der Ratingklassen einzuführen. Ein geeignetes Modell ist im Bereich der Bruchpunktschätzung zu finden. Dieses Modell und die in der mathematischen Literatur vorgeschlagenen Parameter- und Intervallschätzer werden in der vorliegenden Arbeit dargestellt und gründlich diskutiert. Dabei wird Wert auf eine anwendungsnahe und anschauliche Formulierung der mathematisch-statistischen Sachverhalte gelegt. Anschließend wird die Methodik der Bruchpunktschätzung auf einen konkreten Datensatz angewendet und mit verschiedenen anderen Kriterien zur Ratingklassenbildung verglichen. Hier erweist sich die Bruchpunktschätzung als vorteilhaft. Aufbauend auf der empirischen Untersuchung wird abschließend weiterer Forschungsbedarf abgeleitet. Dazu werden insbesondere Konzepte für den Mehrklassenfall und für abhängige Daten entworfen. / Rating systems are a key component of credit risk modeling. In addition to scoring at the borrowers' level and risk quantification at the level of rating classes, the formation of the rating classes plays a fundamental role. The literature on rating classification relies on model-free and, in a sense, arbitrary optimization methods. One aim of this contribution is therefore to introduce, instead, a parametric statistical model for forming the rating classes. A suitable model can be found in the area of split-point estimation. This model and the parameter and interval estimators proposed in the mathematical literature are presented and thoroughly discussed, with emphasis placed on an application-oriented and intuitive formulation of the mathematical and statistical issues. Subsequently, the methodology of split-point estimation is applied to a specific data set and compared with several other criteria for rating classification; here, split-point estimation proves to be advantageous. Finally, further research questions are derived on the basis of the empirical study; in particular, concepts for the case of more than two classes and for dependent data are sketched.
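A minimal sketch of the split-point idea in the rating context (not the estimators discussed in the thesis; the scores, default probabilities, and search grid are assumed): borrowers are sorted by score, and the boundary between two rating classes is estimated as the split that minimises the binomial deviance of a two-class model.

```python
import numpy as np

rng = np.random.default_rng(7)

# borrowers with a score in [0, 1]; the default probability jumps at an assumed threshold
n, true_split = 5000, 0.6
score = rng.random(n)
pd_true = np.where(score < true_split, 0.02, 0.10)
default = rng.random(n) < pd_true

def deviance(d):
    """Binomial deviance of a constant default rate fitted to the 0/1 vector d."""
    p = np.clip(d.mean(), 1e-6, 1 - 1e-6)
    return -2.0 * (d.sum() * np.log(p) + (len(d) - d.sum()) * np.log(1 - p))

order = np.argsort(score)
s, d = score[order], default[order].astype(float)

candidates = np.arange(200, n - 200)            # keep both rating classes populated
dev = np.array([deviance(d[:k]) + deviance(d[k:]) for k in candidates])
k_hat = candidates[dev.argmin()]
print(f"estimated split score: {s[k_hat]:.3f} (true {true_split})")
print(f"class default rates: {d[:k_hat].mean():.3f} / {d[k_hat:].mean():.3f}")
```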
