141 |
Route choice and traffic equilibrium modeling in multi-modal and activity-based networks / Zimmermann, Maëlle 06 1900 (has links)
No description available.
|
142 |
Stochastic Modelling of Daily Peak Electricity Demand Using Extreme Value Theory / Boano-Danquah, Jerry 21 September 2018 (has links)
MSc (Statistics) / Department of Statistics / Daily peak electricity demand data from ESKOM, the South African power utility company, covering the period January 1997 to December 2013 (6209 observations), were used in this dissertation. Since 1994, increased electricity demand has led to sustainability issues in South Africa, and demand continues to rise every day owing to a variety of driving factors. If the electricity generating capacity in South Africa does not show potential signs of meeting the country’s demand in the coming years, the national grid may be forced to operate in a risky and vulnerable state, leading to disturbances such as the load shedding experienced during the past few years. In particular, it is of great interest to have sufficient information about the extreme values of the stochastic load process in time for proper planning and for designing the generation and distribution system and storage devices, as these ensure the efficient use of electrical energy and maintain stability in the grid system.
Electricity is, moreover, an essential commodity used mainly as a source of energy in the industrial, residential and commercial sectors. Effective monitoring of electricity demand is of great importance because demand that exceeds the maximum power generated leads to power outages and load shedding. It is in this light that the study seeks to assess the frequency of occurrence of extreme peak electricity demand, in order to arrive at a full electricity demand distribution capable of managing uncertainties in the grid system.
In order to achieve stationarity in the daily peak electricity demand (DPED), we apply a penalized cubic smoothing spline regression to detrend the data non-linearly. The R package “evmix” is used to estimate the thresholds using the boundary-corrected kernel density plot. The non-linearly detrended datasets were divided into summer, spring, winter and autumn according to the calendar dates in the Southern Hemisphere for frequency analysis. The data are declustered using Ferro and Segers’ automatic declustering method, and the cluster maxima are extracted using the R package “evd”. We fit a Poisson GPD and a stationary point process to the cluster maxima, and the intensity function of the point process, which measures the frequency of occurrence of extreme daily peak electricity demand per year, is calculated for each dataset.
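The peaks-over-threshold fit described here can be sketched compactly. The thesis works in R with the “evmix” and “evd” packages; the following minimal Python sketch on synthetic data shows the same Poisson GPD ingredients (the series, threshold quantile and return period are illustrative assumptions, not values from the dissertation):

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(1)

# Synthetic stand-in for the detrended daily peak demand series (MW).
demand = 30000 + 2000 * rng.standard_normal(6209)

u = np.quantile(demand, 0.95)         # threshold (chosen via KDE diagnostics in the thesis)
excesses = demand[demand > u] - u     # exceedances over the threshold

# GPD fit to the excesses, with the location fixed at zero.
xi, _, sigma = genpareto.fit(excesses, floc=0)

# Poisson part: intensity = mean number of exceedances per year.
years = len(demand) / 365.25
intensity = len(excesses) / years

# T-year return level: the demand exceeded on average once every T years.
T = 10
p = 1.0 / (intensity * T)             # tail probability among exceedances
return_level = u + genpareto.ppf(1 - p, xi, scale=sigma)
print(f"xi={xi:.3f}, sigma={sigma:.1f}, "
      f"lambda={intensity:.1f}/yr, {T}-yr level={return_level:.0f} MW")
```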
The formal goodness-of-fit tests based on the Cramér-von Mises and Anderson-Darling statistics supported the null hypothesis that each dataset follows a Poisson GPD (σ, ξ) at the 5 percent level of significance. The modelling framework, which is easily extensible to other peak load parameters, is based on the assumption that peak power follows a Poisson process. The parameters of the developed models were estimated by maximum likelihood, and the usual asymptotic properties underlying the Poisson GPD were satisfied by the model. / NRF
|
143 |
Observation error model selection by information criteria vs. normality testing / Lehmann, Rüdiger January 2015 (has links)
To extract the best possible information from geodetic and geophysical observations, it is necessary to select a model of the observation errors, mostly from the family of Gaussian normal distributions. However, there are alternatives, typically chosen in the framework of robust M-estimation. We give a synopsis of well-known and less well-known models for observation errors and propose to select a model based on information criteria. In this contribution we compare the Akaike information criterion (AIC) and the Anderson-Darling (AD) test and apply them to the test problem of fitting a straight line. The comparison is facilitated by a Monte Carlo approach. It turns out that model selection by AIC has some advantages over the AD test.
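The AIC route can be illustrated on the straight-line test problem. The sketch below (Python, synthetic data) fits the line under two candidate error models, Gaussian and Laplace, and picks the one with the smaller AIC; the candidate pair and all parameter values are illustrative assumptions, not the paper's full set of error models:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import anderson

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 1.0 + 0.5 * x + rng.laplace(scale=0.3, size=x.size)  # heavy-tailed errors

def neg_loglik(params, model):
    a, b, log_s = params                  # intercept, slope, log scale
    r, s = y - (a + b * x), np.exp(log_s)
    if model == "gauss":
        return np.sum(0.5 * np.log(2 * np.pi * s**2) + r**2 / (2 * s**2))
    return np.sum(np.log(2 * s) + np.abs(r) / s)          # Laplace

aic = {}
for model in ("gauss", "laplace"):
    fit = minimize(neg_loglik, x0=[0.0, 0.0, 0.0], args=(model,),
                   method="Nelder-Mead")
    aic[model] = 2 * 3 + 2 * fit.fun      # AIC = 2k - 2 log L, k = 3
print("AIC:", aic, "-> selected:", min(aic, key=aic.get))

# The AD test, by contrast, only judges normality of the residuals.
resid = y - np.polyval(np.polyfit(x, y, 1), x)
print("AD statistic:", anderson(resid, dist="norm").statistic)
```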
|
144 |
Exact Analysis of Exponential Two-Component System Failure Data / Zhang, Xuan 01 1900 (has links)
A survival distribution is developed for exponential two-component systems that can survive as long as at least one of the two components in the system functions. It is assumed that the two components are initially independent and non-identical. If one of the two components fails (repair is impossible), the surviving component is subject to a different failure rate due to the stress caused by the failure of the other.

In this thesis, we consider such an exponential two-component system failure model when the observed failure time data are (1) complete, (2) Type-I censored, (3) Type-I censored with partial information on component failures, (4) Type-II censored and (5) Type-II censored with partial information on component failures. In these situations, we discuss the maximum likelihood estimates (MLEs) of the parameters by assuming the lifetimes to be exponentially distributed. The exact distributions (whenever possible) of the MLEs of the parameters are then derived by using the conditional moment generating function approach. Construction of confidence intervals for the model parameters is discussed by using the exact conditional distributions (when available), asymptotic distributions, and two parametric bootstrap methods. The performance of these four confidence intervals, in terms of coverage probabilities, is then assessed through Monte Carlo simulation studies. Finally, some examples are presented to illustrate all the methods of inference developed here.

In the case of Type-I and Type-II censored data, since there are no closed-form expressions for the MLEs, we present an iterative maximum likelihood estimation procedure for the determination of the MLEs of all the model parameters. We also carry out a Monte Carlo simulation study to examine the bias and variance of the MLEs.

In the case of Type-II censored data, since the exact distributions of the MLEs depend on the data, we discuss the exact conditional confidence intervals and asymptotic confidence intervals for the unknown parameters by conditioning on the data observed. / Thesis / Doctor of Philosophy (PhD)
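The dependence mechanism (the survivor's failure rate changing after the first failure) is easy to simulate thanks to the memoryless property. A minimal Python sketch with illustrative, assumed rates; samples like these also drive the parametric bootstrap intervals mentioned in the abstract:

```python
import numpy as np

rng = np.random.default_rng(42)

def system_lifetimes(l1, l2, l1p, l2p, n):
    """Simulate n two-component systems: components start with exponential
    rates l1, l2; after the first failure, the survivor continues with the
    changed (stressed) rate l1p or l2p. By memorylessness, the survivor's
    residual life is a fresh exponential draw at the new rate."""
    t1 = rng.exponential(1 / l1, n)
    t2 = rng.exponential(1 / l2, n)
    first = np.minimum(t1, t2)
    extra = np.where(t1 < t2,
                     rng.exponential(1 / l2p, n),  # component 1 failed first
                     rng.exponential(1 / l1p, n))  # component 2 failed first
    return first + extra

T = system_lifetimes(l1=0.5, l2=0.8, l1p=0.9, l2p=1.2, n=100_000)
print("mean system lifetime:", T.mean())
```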
|
145 |
Some Contributions to Inferential Issues of Censored Exponential Failure Data / Han, Donghoon 06 1900 (has links)
In this thesis, we investigate several inferential issues regarding lifetime data from the exponential distribution under different censoring schemes. For reasons of time constraints and cost reduction, censored sampling is commonly employed in practice, especially in reliability engineering. Among the various censoring schemes, progressive Type-I censoring provides not only the practical advantage of a known termination time but also greater flexibility to the experimenter in the design stage by allowing for the removal of test units at non-terminal time points. Hence, we first consider the inference for a progressively Type-I censored life-testing experiment with k uniformly spaced intervals. For small to moderate sample sizes, a practical modification is proposed to the censoring scheme in order to guarantee a feasible life-test under progressive Type-I censoring. Under this setup, we obtain the maximum likelihood estimator (MLE) of the unknown mean parameter and derive the exact sampling distribution of the MLE through the use of the conditional moment generating function, under the condition that the existence of the MLE is ensured. Using the exact distribution of the MLE as well as its asymptotic distribution and the parametric bootstrap method, we discuss the construction of confidence intervals for the mean parameter, and their performance is then assessed through Monte Carlo simulations.

Next, we consider a special class of accelerated life tests, known as step-stress tests, in reliability testing. In a step-stress test, the stress levels increase discretely at pre-fixed time points, which allows the experimenter to obtain information on the parameters of the lifetime distributions more quickly than under normal operating conditions. Here, we consider a k-step-stress accelerated life-testing experiment with an equal step duration τ. In particular, the case of progressively Type-I censored data with a single stress variable is investigated. For small to moderate sample sizes, we introduce another practical modification to the model for a feasible k-step-stress test under progressive censoring, and the optimal τ is searched for using the modified model. Next, we seek the optimal τ under the condition that the step-stress test proceeds to the k-th stress level, and the efficiency of this conditional inference is compared to that of the preceding models. In all cases, censoring is allowed at each stress-change point iτ, i = 1, 2, ..., k, and the problem of selecting the optimal τ is discussed using the C-optimality, D-optimality, and A-optimality criteria.

Moreover, when a test unit fails, there is often more than one fatal cause of the failure, such as mechanical or electrical. Thus, we also consider simple step-stress models under Type-I and Type-II censoring when the lifetime distributions corresponding to the different risk factors are independently exponentially distributed. Under this setup, we derive the MLEs of the unknown mean parameters of the different causes under the assumption of a cumulative exposure model. The exact distributions of the MLEs of the parameters are then derived through the use of conditional moment generating functions. Using these exact distributions as well as the asymptotic distributions and the parametric bootstrap method, we discuss the construction of confidence intervals for the parameters and then assess their performance through Monte Carlo simulations. / Thesis / Doctor of Philosophy (PhD)
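For exponential lifetimes, the MLE of the mean under censoring reduces to the total time on test divided by the number of observed failures. A minimal Python sketch of a progressively Type-I censored experiment; the inspection times, removal numbers and true mean are illustrative assumptions, not the thesis's design:

```python
import numpy as np

rng = np.random.default_rng(7)

n, theta = 30, 10.0                          # sample size and true mean (assumed)
inspect  = [4.0, 8.0, 12.0]                  # uniformly spaced inspection times
removals = [3, 3, None]                      # survivors withdrawn; None = all remaining

active = list(rng.exponential(theta, n))     # latent lifetimes of units on test
time_on_test, n_failures = 0.0, 0
for t_j, r_j in zip(inspect, removals):
    failed    = [t for t in active if t <= t_j]
    survivors = [t for t in active if t > t_j]
    time_on_test += sum(failed)              # exact failure times are observed
    n_failures   += len(failed)
    k = len(survivors) if r_j is None else min(r_j, len(survivors))
    time_on_test += k * t_j                  # withdrawn units are censored at t_j
    survivors = list(rng.permutation(survivors))
    active = survivors[k:]                   # the rest stay on test

# MLE of the exponential mean (defined when at least one failure occurs).
print(f"failures = {n_failures}, MLE of mean = {time_on_test / n_failures:.2f}")
```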
|
146 |
Univariate and Bivariate ACD Models for High-Frequency Data Based on Birnbaum-Saunders and Related Distributions / Tan, Tao 22 November 2018 (has links)
This thesis proposes a new class of bivariate autoregressive conditional median duration models for matched high-frequency data and develops inferential methods for an existing univariate model as well as for the bivariate models introduced here, to facilitate model fitting and forecasting. During the last two decades, the autoregressive conditional mean duration (ACD) model has played a dominant role in analyzing irregularly spaced high-frequency financial data. Univariate ACD models have been extensively discussed in the literature. However, some major challenges remain: the existing ACD models do not provide a good distributional fit to financial durations, which are right-skewed and often exhibit unimodal hazard rates. The Birnbaum-Saunders (BS) distribution is capable of modeling a wide variety of positively skewed data, and the median is not only a robust measure of central tendency but also a natural scale parameter of the BS distribution. A class of conditional median duration models, the BS-ACD and the scale-mixture BS-ACD models based on the BS, BS power-exponential and Student-t BS (BSt) distributions, has been suggested in the literature to improve the quality of the model fit. The BSt-ACD model is more flexible than the BS-ACD model in terms of kurtosis and skewness.

In Chapter 2, we develop the maximum likelihood estimation method for the BSt-ACD model. The estimation is performed by utilizing a hybrid of optimization algorithms. The performance of the estimates is then examined through an extensive Monte Carlo simulation study. We also carry out model discrimination using both a likelihood-based method and an information-based criterion. Applications to real trade durations and comparisons with existing alternatives are then made.

The bivariate version of the ACD model has not received attention due to non-synchronicity. Although some bivariate generalizations of the ACD model have been introduced, they do not possess enough flexibility in modeling durations, since they are conditional mean-based and do not account for non-monotonic hazard rates. Recently, the bivariate BS (BVBS) distribution has been developed with many desirable properties and characteristics, and it allows for unimodal shapes of the marginal hazard functions. In Chapter 3, upon using this bivariate BS distribution, we propose the BVBS-ACD model as a natural bivariate extension of the BS-ACD model. It enables us to jointly analyze matched duration series and also to capture the dependence between the two series. The maximum likelihood estimation of the model parameters and associated inferential methods have been developed. A Monte Carlo simulation study is then carried out to examine the performance of the proposed inferential methods. The goodness-of-fit and predictive performance of the model are also discussed. A real bivariate duration data analysis is provided to illustrate the developed methodology.

The bivariate Student-t BS (BVBSt) distribution has been introduced in the literature as a robust extension of the BVBS distribution. It provides greater flexibility in terms of kurtosis and skewness through the inclusion of an additional shape parameter. In Chapter 4, we propose the BVBSt-ACD model as a natural extension of the BSt-ACD model to the bivariate case. We then discuss the maximum likelihood estimation of the model parameters. A simulation study is carried out to investigate the performance of these estimators. Model discrimination is then done using an information-based criterion. Methods for evaluating the goodness-of-fit and predictive ability of the model are also discussed. A simulated data example is used to illustrate the proposed model as compared to the BVBS-ACD model.

Finally, in Chapter 5, some concluding comments are made and some problems for future research are mentioned. / Thesis / Master of Science (MSc)
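The claim that the median is the natural scale parameter of the BS distribution, the quantity on which the BS-ACD recursion is placed, can be checked in a few lines. A Python sketch with illustrative parameter values (SciPy implements the BS distribution under the name "fatiguelife"):

```python
import numpy as np
from scipy.stats import fatiguelife

rng = np.random.default_rng(3)

# Birnbaum-Saunders(alpha, beta): if Z ~ N(0,1), then
#   T = beta * (alpha*Z/2 + sqrt((alpha*Z/2)**2 + 1))**2
# follows BS, and since T is increasing in Z, the median of T is exactly beta.
alpha, beta = 0.8, 2.0                     # illustrative shape and scale
z = rng.standard_normal(200_000)
t = beta * (alpha * z / 2 + np.sqrt((alpha * z / 2) ** 2 + 1)) ** 2

print("sample median:", np.median(t))                             # ~ beta
print("model median :", fatiguelife.ppf(0.5, alpha, scale=beta))  # = beta
```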
|
147 |
MARGINAL LIKELIHOOD INFERENCE FOR FRAILTY AND MIXTURE CURE FRAILTY MODELS UNDER BIRNBAUM-SAUNDERS AND GENERALIZED BIRNBAUM-SAUNDERS DISTRIBUTIONS / Liu, Kai January 2018 (has links)
Survival analytic methods help to analyze lifetime data arising from medical and reliability experiments. The popular proportional hazards model, proposed by Cox (1972), is widely used in survival analysis to study the effect of risk factors on lifetimes. An important assumption in regression-type analyses is that all relevant risk factors should be included in the model. However, not all relevant risk factors are observed, due to measurement difficulty, inaccessibility, cost considerations, and so on. These unobservable risk factors can be modelled by the so-called frailty model, originally introduced by Vaupel et al. (1979). Furthermore, the frailty model is also applicable to clustered data. Clustered data possess the feature that observations within the same cluster share similar conditions and environments, which are sometimes difficult to observe. For example, patients from the same family share similar genetics, and patients treated in the same hospital share the same group of professionals and the same environmental conditions. These factors are indeed hard to quantify or measure, and this type of similarity introduces correlation among subjects within clusters. In this thesis, a semi-parametric frailty model is proposed to address the aforementioned issues. The baseline hazard function is approximated by a piecewise constant function and the frailty distribution is assumed to be a Birnbaum-Saunders distribution.

Due to advancements in modern medical science, many diseases are curable, which in turn leads to the need to incorporate a cure proportion in the survival model. The frailty model discussed here is further extended to a mixture cure rate frailty model by integrating the frailty model into the mixture cure rate model proposed originally by Boag (1949) and Berkson and Gage (1952). By linking the covariates to the cure proportion through a logistic/logit link function, and linking observable and unobservable covariates to the lifetimes of the uncured population through the frailty model, we obtain a flexible model for studying the effect of risk factors on lifetimes. The mixture cure frailty model reduces to a mixture cure model if the effect of the frailty term is negligible (i.e., the variance of the frailty distribution is close to 0); on the other hand, it reduces to the usual frailty model if the cure proportion is 0. Therefore, we can use a likelihood ratio test to check whether the reduced model is adequate for the given data. We assume the baseline hazard to be that of a Weibull distribution, since the Weibull distribution possesses increasing, constant or decreasing hazard rates, and the frailty distribution to be a Birnbaum-Saunders distribution.

Díaz-García and Leiva-Sánchez (2005) proposed a new family of life distributions, called the generalized Birnbaum-Saunders distribution, which includes the Birnbaum-Saunders distribution as a special case. It allows for various degrees of kurtosis and skewness, and also permits unimodality as well as bimodality. Therefore, integrating a generalized Birnbaum-Saunders distribution as the frailty distribution in the mixture cure frailty model results in a very flexible model. For this general model, parameter estimation is carried out using a marginal likelihood approach. One of the difficulties in the parameter estimation is that the likelihood function is intractable. Current computational technology enables us to develop a numerical method through Monte Carlo simulation: the likelihood function is approximated by the Monte Carlo method, and the maximum likelihood estimates and standard errors of the model parameters are then obtained numerically by maximizing this approximate likelihood function. An EM algorithm is also developed for the Birnbaum-Saunders mixture cure frailty model. The performance of these estimation methods is then assessed by simulation studies for each proposed model. Model discrimination is also performed between the Birnbaum-Saunders frailty model and the generalized Birnbaum-Saunders mixture cure frailty model. Some illustrative real-life examples are presented to illustrate the models and inferential methods developed here. / Thesis / Doctor of Science (PhD)
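The Monte Carlo marginal likelihood idea is straightforward to sketch: conditional on the frailty, the likelihood has a closed form, so the intractable marginal is approximated by averaging over frailty draws. A minimal Python sketch with a Weibull baseline hazard and BS frailty (SciPy's "fatiguelife"); the data and parameter values are illustrative assumptions:

```python
import numpy as np
from scipy.stats import fatiguelife

rng = np.random.default_rng(11)

def marginal_loglik(times, events, shape, scale, bs_alpha, n_draws=5_000):
    """Monte Carlo marginal log-likelihood for a proportional-hazards
    frailty model: hazard w * h0(t) with Weibull baseline h0; the frailty
    w is integrated out by averaging over BS draws."""
    w = fatiguelife.rvs(bs_alpha, size=n_draws, random_state=rng)
    h0 = (shape / scale) * (times / scale) ** (shape - 1)   # baseline hazard
    H0 = (times / scale) ** shape                           # cumulative hazard
    # Per-subject conditional likelihood (w*h0)^delta * exp(-w*H0),
    # averaged over the frailty draws:
    lik = np.mean((w[:, None] * h0) ** events * np.exp(-w[:, None] * H0), axis=0)
    return np.sum(np.log(lik))

# Illustrative right-censored data (values assumed, not from the thesis).
t = np.array([1.2, 0.7, 3.4, 2.2, 5.0])
d = np.array([1,   1,   0,   1,   0])    # 1 = event observed, 0 = censored
print(marginal_loglik(t, d, shape=1.5, scale=2.0, bs_alpha=0.5))
```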
|
148 |
CURE RATE AND DESTRUCTIVE CURE RATE MODELS UNDER PROPORTIONAL ODDS LIFETIME DISTRIBUTIONS / FENG, TIAN January 2019 (has links)
Cure rate models, introduced by Boag (1949), are very commonly used when modelling lifetime data involving long-term survivors. Applications of cure rate models can be seen in biomedical science, industrial reliability, finance, manufacturing, demography and criminology. In this thesis, cure rate models are discussed under a competing-cause scenario, with the assumption of proportional odds (PO) lifetime distributions for the susceptibles, and statistical inferential methods are then developed based on right-censored data.

In Chapter 2, a flexible cure rate model is discussed by assuming the number of competing causes for the event of interest to follow the Conway-Maxwell-Poisson (COM-Poisson) distribution, with the corresponding lifetimes of the non-cured or susceptible individuals described by a PO model. This provides a natural extension of the work of Gu et al. (2011), who had considered a geometric number of competing causes. Under right censoring, maximum likelihood estimators (MLEs) are obtained by the use of the expectation-maximization (EM) algorithm. An extensive Monte Carlo simulation study is carried out for various scenarios, and model discrimination between some well-known cure models, such as the geometric, Poisson and Bernoulli, is also examined. The goodness-of-fit and model diagnostics are also discussed. A cutaneous melanoma dataset is used to illustrate the models as well as the inferential methods.

Next, in Chapter 3, the destructive cure rate models, introduced by Rodrigues et al. (2011), are discussed under the PO assumption. Here, the initial number of competing causes is modelled by a weighted Poisson distribution, with special focus on the exponentially weighted Poisson, length-biased Poisson and negative binomial distributions. Then, a damage distribution is introduced for the number of initial causes that do not get destroyed. An EM-type algorithm for computing the MLEs is developed. An extensive simulation study is carried out for various scenarios, and model discrimination between the three weighted Poisson distributions is also examined. All the models and methods of estimation are evaluated through a simulation study. A cutaneous melanoma dataset is used to illustrate the models as well as the inferential methods.

In Chapter 4, frailty cure rate models are discussed under a gamma frailty, wherein the initial number of competing causes is described by a Conway-Maxwell-Poisson (COM-Poisson) distribution and the lifetimes of non-cured individuals are described by a PO model. The detailed steps of the EM algorithm are then developed for this model, and an extensive simulation study is carried out to evaluate the performance of the proposed model and the estimation method. A cutaneous melanoma dataset as well as simulated data are used for illustrative purposes.
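A key quantity in these COM-Poisson cure rate models is the cured proportion, P(no competing cause) = 1/Z(λ, ν), where Z is the COM-Poisson normalizing constant; with i.i.d. latent lifetimes, the population survival function is Z(λS(t), ν)/Z(λ, ν). A minimal Python sketch, using a log-logistic survival function as a simple stand-in for a PO-family lifetime (all parameter values are illustrative assumptions):

```python
import numpy as np

def com_poisson_Z(lam, nu, tol=1e-12, max_terms=500):
    """Normalizing constant Z(lam, nu) = sum_j lam**j / (j!)**nu."""
    z, term, j = 1.0, 1.0, 0
    while term > tol and j < max_terms:
        j += 1
        term *= lam / j ** nu          # ratio of consecutive series terms
        z += term
    return z

def population_survival(t, lam, nu, odds_scale):
    """S_pop(t) = Z(lam * S(t), nu) / Z(lam, nu), with log-logistic
    susceptible survival S(t) = 1 / (1 + t/odds_scale); S_pop tends to
    the cure fraction 1/Z(lam, nu) as t grows."""
    s = 1.0 / (1.0 + t / odds_scale)
    return com_poisson_Z(lam * s, nu) / com_poisson_Z(lam, nu)

lam, nu = 1.5, 0.7                     # illustrative COM-Poisson parameters
print("cure fraction:", 1 / com_poisson_Z(lam, nu))
for t in (0.0, 1.0, 5.0, 50.0):
    print(t, population_survival(t, lam, nu, odds_scale=2.0))
```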
Finally, Chapter 5 outlines the work carried out in the thesis and also suggests some problems of further research interest. / Thesis / Doctor of Philosophy (PhD)
|
149 |
The Probability Distribution of Stock Returns under Daily Price Limits / 葉宜欣, Yeh, Yi-Shian Unknown Date (has links)
Stock market returns are of central importance to financial markets, since the true underlying probability distribution has a decisive influence on asset pricing and option valuation models. This thesis takes into account the daily price (up/down) limits imposed on the Taiwan stock market to examine the empirically documented "fat tails" phenomenon of stock return distributions, in the hope that our study furthers the understanding of how financial theory applies to the domestic financial market. We select the normal, lognormal and generalized beta of the second kind (GB2) distributions as candidates for the true distribution of Taiwan stock returns, compare them by the method of moments, and then choose the best-performing distribution by the likelihood ratio (LR) test. Across the 25 selected domestic stocks, we find that the GB2 distribution captures the effects of skewness and kurtosis on returns and is also the distribution selected as best-fitting by the LR test, indicating that the GB2 distribution is the more suitable model for the true distribution of Taiwan stock returns.
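The GB2 family nests many return models and controls skewness and tail thickness separately through its four parameters. Below is a minimal Python sketch of the GB2 log-density and a maximum likelihood fit whose log-likelihood would feed an LR comparison against the normal or lognormal; applying it to gross returns (price ratios) is an assumption for illustration, as the thesis's exact treatment of limit-constrained returns may differ:

```python
import numpy as np
from scipy.special import betaln
from scipy.optimize import minimize

# GB2 log-density for x > 0:
#   f(x; a, b, p, q) = a * x**(a*p - 1)
#                      / (b**(a*p) * B(p, q) * (1 + (x/b)**a)**(p + q)).
def gb2_logpdf(x, a, b, p, q):
    y = np.log(x / b)
    return (np.log(a) + a * p * y - np.log(x) - betaln(p, q)
            - (p + q) * np.log1p(np.exp(a * y)))

def fit_gb2(x):
    # Optimize over log-parameters so that a, b, p, q stay positive.
    nll = lambda th: -np.sum(gb2_logpdf(x, *np.exp(th)))
    res = minimize(nll, x0=np.zeros(4), method="Nelder-Mead",
                   options={"maxiter": 5000})
    return dict(zip("abpq", np.exp(res.x))), -res.fun

rng = np.random.default_rng(5)
x = np.exp(0.0005 + 0.02 * rng.standard_t(df=5, size=2000))  # synthetic gross returns
params, loglik = fit_gb2(x)
print(params, loglik)   # loglik feeds the LR test against simpler candidates
```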
|
150 |
Estimation Problems Related to Random Matrix Ensembles / Schätzprobleme für Ensembles zufälliger Matrizen / Matić, Rada 06 July 2006 (has links)
No description available.
|