About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
31

ARIMA forecasts of the number of beneficiaries of social security grants in South Africa

Luruli, Fululedzani Lucy 12 1900 (has links)
The main objective of the thesis was to investigate the feasibility of accurately and precisely forecasting the number of both national and provincial beneficiaries of social security grants in South Africa, using simple autoregressive integrated moving average (ARIMA) models. The series of the monthly number of beneficiaries of the old age, child support, foster care and disability grants from April 2004 to March 2010 were used to achieve the objectives of the thesis. The conclusions from analysing the series were that: (1) ARIMA models for forecasting are province and grant-type specific; (2) for some grants, national forecasts obtained by aggregating provincial ARIMA forecasts are more accurate and precise than those obtained by ARIMA modelling of the national series; and (3) for some grants, forecasts obtained by modelling the latest half of the series were more accurate and precise than those obtained from modelling the full series. / Mathematical Sciences / M.Sc. (Statistics)
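As a rough, hedged illustration of the aggregation comparison in conclusion (2), the sketch below fits per-province ARIMA(1,1,1) models and sums their forecasts, alongside a single national model. The series, the (1,1,1) order and the two provinces are placeholders, not the thesis's data or fitted models.

```python
# A minimal sketch (not the thesis's actual models) of the aggregation
# comparison: forecast a national series directly vs. summing forecasts
# from provincial ARIMA models. Series here are simulated placeholders.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
months = pd.date_range("2004-04", periods=72, freq="MS")  # Apr 2004 - Mar 2010

# Simulated monthly beneficiary counts for two hypothetical provinces
provinces = {
    "prov_a": pd.Series(10_000 + np.cumsum(rng.normal(50, 30, 72)), index=months),
    "prov_b": pd.Series(25_000 + np.cumsum(rng.normal(80, 40, 72)), index=months),
}
national = sum(provinces.values())

horizon = 12
# (1) Direct national forecast from one ARIMA model
direct = ARIMA(national, order=(1, 1, 1)).fit().forecast(horizon)

# (2) Aggregated forecast: fit a model per province, then sum the forecasts
aggregated = sum(
    ARIMA(series, order=(1, 1, 1)).fit().forecast(horizon)
    for series in provinces.values()
)

print(pd.DataFrame({"direct": direct, "aggregated": aggregated}).head())
```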
33

Transformation model selection by multiple hypotheses testing

Lehmann, Rüdiger 17 October 2016 (has links) (PDF)
Transformations between different geodetic reference frames are often performed by first determining the transformation parameters from control points. If we do not know in advance which of the numerous transformation models is appropriate, we can set up a multiple hypotheses test. The paper extends the common method of testing transformation parameters for significance to the case where constraints on such parameters are also tested. This provides more flexibility when setting up such a test: one can formulate a general model with a maximum number of transformation parameters and specialize it by adding constraints on the parameters that need to be tested. The proper test statistic in a multiple test is shown to be either the extreme normalized or the extreme studentized Lagrange multiplier, and these statistics are shown to perform better than the more intuitive test statistics derived from misclosures. It is also shown how model selection by multiple hypotheses testing relates to the use of information criteria such as AICc and Mallows' Cp, which are based on an information-theoretic approach; whenever the two are comparable, the results of an exemplary computation almost coincide.
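For intuition only, here is a toy version of the model-selection idea: fit a general four-parameter 2D similarity transformation from control points, then test whether a constrained (translation-only) model suffices. An ordinary F-test on the constrained versus general residuals is used as a simple stand-in for the paper's extreme normalized/studentized Lagrange multiplier statistics, and the coordinates are simulated, not geodetic data.

```python
# A toy sketch of selecting among nested 2D transformation models from
# control points, using an F-test as a stand-in for the paper's Lagrange
# multiplier statistics. All coordinates are simulated assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 10
src = rng.uniform(0, 1000, (n, 2))            # source-frame control points
true = src + np.array([5.0, -3.0])            # truth: pure translation
tgt = true + rng.normal(0, 0.01, (n, 2))      # target frame + noise

# General model, linear in p = (a, b, tx, ty):
#   x' = a*x - b*y + tx,   y' = b*x + a*y + ty
A = np.zeros((2 * n, 4))
A[0::2] = np.column_stack([src[:, 0], -src[:, 1], np.ones(n), np.zeros(n)])
A[1::2] = np.column_stack([src[:, 1],  src[:, 0], np.zeros(n), np.ones(n)])
y = tgt.ravel()
p, ssr_gen = np.linalg.lstsq(A, y, rcond=None)[:2]

# Constrained model (a = 1, b = 0): translation only
shift = (tgt - src).mean(axis=0)
ssr_con = ((tgt - src - shift) ** 2).sum()

# F-test: do the 2 extra parameters significantly reduce the residuals?
df1, df2 = 2, 2 * n - 4
F = ((ssr_con - ssr_gen[0]) / df1) / (ssr_gen[0] / df2)
print(f"F = {F:.2f}, p-value = {1 - stats.f.cdf(F, df1, df2):.3f}")
```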
34

Observation error model selection by information criteria vs. normality testing

Lehmann, Rüdiger 17 October 2016 (has links) (PDF)
To extract the best possible information from geodetic and geophysical observations, it is necessary to select a model of the observation errors, most commonly the family of Gaussian normal distributions. However, there are alternatives, typically chosen in the framework of robust M-estimation. We give a synopsis of well-known and less well-known models for observation errors and propose selecting a model based on information criteria. In this contribution we compare the Akaike information criterion (AIC) and the Anderson-Darling (AD) test, and apply them to the test problem of fitting a straight line. The comparison is carried out by a Monte Carlo approach. It turns out that model selection by AIC has some advantages over the AD test.
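A minimal sketch of the two selection routes compared in the paper, under assumed simulated data: a straight line is fitted by least squares, and the observation-error model is then chosen either by AIC (Gaussian versus Laplace errors, with plug-in ML scale estimates) or by an Anderson-Darling normality test on the residuals.

```python
# A minimal Monte Carlo-style sketch: fit a straight line, then pick an
# observation-error model either by AIC (Gaussian vs. Laplace) or by an
# Anderson-Darling normality test. Data and error model are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = np.linspace(0, 10, 50)
y = 2.0 + 0.5 * x + rng.laplace(0, 0.3, x.size)   # true errors: Laplace

# Straight-line fit by least squares
slope, intercept = np.polyfit(x, y, 1)
res = y - (intercept + slope * x)

# AIC = 2k - 2 log L, with k = 3 (intercept, slope, scale) for both models;
# scale via ML plug-ins (std for normal, mean |res| with location 0 for Laplace)
k = 3
aic_gauss = 2 * k - 2 * stats.norm.logpdf(res, 0, res.std()).sum()
aic_laplace = 2 * k - 2 * stats.laplace.logpdf(res, 0, np.abs(res).mean()).sum()
print(f"AIC Gaussian: {aic_gauss:.1f}, AIC Laplace: {aic_laplace:.1f}")

# Anderson-Darling test of normality on the residuals
ad = stats.anderson(res, dist="norm")
print(f"AD statistic: {ad.statistic:.2f}, 5% critical value: {ad.critical_values[2]:.2f}")
```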
37

Economics of innovation and the digitalization of tourism: a study of the Airbnb market applying econometric techniques and neural networks

Más-Ferrando, Adrián 20 January 2023 (has links)
This doctoral thesis aims to revisit the economic principles of tourism from the perspective of the economics of innovation, to analyse the potential impact of applying AI across the tourism industry, and to study the most disruptive tourism market of recent decades: the platform economy, exemplified by the Airbnb case study. Chapter I establishes the thread connecting the parts of this compendium-format thesis, which draws on several works, including those published by the doctoral candidate during the predoctoral stage; it presents the research design, explaining in detail the process followed to arrive at the thesis's approach and to achieve its objectives, and devotes a brief section to the main conclusions. Chapter II reviews the evolution of the concept of innovation and its importance in economic theory, drawing on theorists who have studied the role of technology and innovation in economic growth, such as Schumpeter, Solow, Romer and Lucas; the aim is to understand the impact of the disruptive changes the economy is undergoing and then apply them to the transformation of the structure of the tourism industry. Chapter III provides an applied analysis of innovation and of the impact of new technologies on the tourism sector. It examines the state of innovation in the sector, making important clarifications about the industry's capacity to adapt or develop disruptive technologies, and explains the digital principles that are transforming the tourism industry and the new research cycle arising from the emergence of Big Data, driven by techniques based on Machine Learning algorithms, thereby justifying the choice of the tourism sector as a case study. Chapter IV offers a complete review of the transformation that the structure of the tourism industry is undergoing as a result of the technological paradigm shift: how these innovative processes are creating a new, data-driven tourism demand, how the tourism value chain is being reinvented, how tourism prices are set in a market with near-perfect information, what challenges this poses for the sector's labour market and training, and what role these processes play in the emergence of new technology-based competitors. Chapters V and VI take the accommodation market as an applied case study, using Airbnb data; this company embodies many of the challenges the sector faces in technology, policy regulation, market intervention, the reinterpretation of the tourism value chain, and the economic or pandemic shocks that researchers must confront. Chapter V analyses the city of Madrid, the fourth-largest destination in Europe by number of Airbnb listings, studying whether the COVID-19 pandemic had a significant impact on the structure of Airbnb supply and demand. Starting from a hedonic panel-data logit model, several alternative variable-selection methods and likelihood tests are applied to confirm the existence of a structural change affecting the decision to rent an apartment on the platform. Chapter VI focuses on the Comunidad Valenciana, one of the main sun-and-beach tourist destinations, to analyse how tourist accommodation prices are set on the platform. This case study examines whether machine-learning algorithms allow firms to optimize prices more efficiently than traditional models; the performance of a traditional hedonic pricing model is compared against a neural-network-based estimation model, confirming the better predictive fit of machine-learning-based techniques for price setting. The thesis thus constitutes a valuable and novel contribution to the sector's new research cycle, offering an exhaustive review of the implications and applications of new technologies in tourism and of the advantages of machine-learning-based analysis techniques for researchers.
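As a hedged illustration of the Chapter VI comparison, the sketch below pits a linear hedonic log-price regression against a small neural network on simulated listing features; the variables (bedrooms, distance to beach, review score) and the scikit-learn models are assumptions for illustration, not the thesis's actual Airbnb specification.

```python
# An illustrative sketch (simulated data, not the thesis's Airbnb dataset)
# of the hedonic-regression vs. neural-network pricing comparison, scored
# by out-of-sample predictive fit.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(3)
n = 2000
X = np.column_stack([
    rng.integers(1, 7, n),          # bedrooms (assumed feature)
    rng.uniform(0, 5, n),           # distance to beach, km (assumed feature)
    rng.uniform(3, 5, n),           # review score (assumed feature)
])
# Nonlinear "true" price process, so the neural net has something to gain
log_price = 3.5 + 0.25 * X[:, 0] - 0.3 * np.log1p(X[:, 1]) \
            + 0.2 * (X[:, 2] - 3) ** 2 + rng.normal(0, 0.2, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, log_price, random_state=0)

hedonic = LinearRegression().fit(X_tr, y_tr)
nnet = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000,
                    random_state=0).fit(X_tr, y_tr)

print(f"hedonic R^2: {r2_score(y_te, hedonic.predict(X_te)):.3f}")
print(f"neural net R^2: {r2_score(y_te, nnet.predict(X_te)):.3f}")
```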
38

Marginal likelihood inference for frailty and mixture cure frailty models under Birnbaum-Saunders and generalized Birnbaum-Saunders distributions

Liu, Kai January 2018 (has links)
Survival analytic methods help to analyze lifetime data arising from medical and reliability experiments. The popular proportional hazards model, proposed by Cox (1972), is widely used in survival analysis to study the effect of risk factors on lifetimes. An important assumption in regression-type analysis is that all relevant risk factors should be included in the model. However, not all relevant risk factors are observed, due to measurement difficulty, inaccessibility, cost considerations, and so on. These unobservable risk factors can be modelled by the so-called frailty model, originally introduced by Vaupel et al. (1979). The frailty model is also applicable to clustered data. Clustered data possess the feature that observations within the same cluster share similar conditions and environment, which are sometimes difficult to observe. For example, patients from the same family share similar genetics, and patients treated in the same hospital share the same group of professionals and the same environmental conditions. These factors are hard to quantify or measure, and this type of similarity introduces correlation among subjects within clusters. In this thesis, a semi-parametric frailty model is proposed to address the aforementioned issues. The baseline hazard function is approximated by a piecewise constant function and the frailty distribution is assumed to be a Birnbaum-Saunders distribution. Due to advances in modern medical science, many diseases are curable, which in turn leads to the need to incorporate a cure proportion in the survival model. The frailty model discussed here is therefore extended to a mixture cure rate frailty model by integrating the frailty model into the mixture cure rate model proposed originally by Boag (1949) and Berkson and Gage (1952). By linking the covariates to the cure proportion through a logistic/logit link function, and linking observable and unobservable covariates to the lifetimes of the uncured population through the frailty model, we obtain a flexible model for studying the effect of risk factors on lifetimes. The mixture cure frailty model reduces to a mixture cure model if the effect of the frailty term is negligible (i.e., the variance of the frailty distribution is close to 0); on the other hand, it reduces to the usual frailty model if the cure proportion is 0. Therefore, a likelihood ratio test can be used to test whether the reduced model is adequate for the given data. We assume the baseline hazard to be that of a Weibull distribution, since the Weibull distribution can possess an increasing, constant or decreasing hazard rate, and the frailty distribution to be a Birnbaum-Saunders distribution. Díaz-García and Leiva-Sánchez (2005) proposed a new family of life distributions, called the generalized Birnbaum-Saunders distribution, which includes the Birnbaum-Saunders distribution as a special case. It allows for various degrees of kurtosis and skewness, and permits unimodality as well as bimodality. Therefore, using a generalized Birnbaum-Saunders distribution as the frailty distribution in the mixture cure frailty model results in a very flexible model. For this general model, parameter estimation is carried out using a marginal likelihood approach. One of the difficulties in the parameter estimation is that the likelihood function is intractable. Current computational technology enables us to develop a numerical method through Monte Carlo simulation: the likelihood function is approximated by the Monte Carlo method, and the maximum likelihood estimates and standard errors of the model parameters are then obtained numerically by maximizing this approximate likelihood function. An EM algorithm is also developed for the Birnbaum-Saunders mixture cure frailty model. The performance of this estimation method is assessed by simulation studies for each proposed model. Model discrimination is also performed between the Birnbaum-Saunders frailty model and the generalized Birnbaum-Saunders mixture cure frailty model. Some illustrative real-life examples are presented to illustrate the models and the inferential methods developed here. / Thesis / Doctor of Science (PhD)
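To make the Monte Carlo marginal-likelihood idea concrete, here is a minimal sketch for a single cluster: conditional on the frailty, the cluster log-likelihood under a Weibull baseline hazard is computed, and the marginal likelihood is approximated by averaging over simulated Birnbaum-Saunders frailty draws. The baseline, parameter values and cluster data are assumptions, and the full model of the thesis (covariates, cure proportion, EM steps) is omitted.

```python
# A minimal sketch of a Monte Carlo marginal likelihood for a frailty model:
# the cluster likelihood integrates over the unobservable frailty, and the
# integral is approximated by averaging over simulated frailty draws.
# Weibull baseline and all parameter values are assumptions.
import numpy as np

rng = np.random.default_rng(4)

def bs_frailty(alpha, beta, size, rng):
    """Draw Birnbaum-Saunders(alpha, beta) frailties via the normal representation."""
    z = rng.normal(size=size)
    w = alpha * z / 2
    return beta * (w + np.sqrt(w**2 + 1)) ** 2

def cluster_marginal_loglik(t, delta, k, lam, alpha, beta, m=20_000, rng=rng):
    """log of the Monte Carlo estimate of one cluster's marginal likelihood.

    t: event/censoring times in the cluster; delta: 1 = event, 0 = censored.
    Conditional on frailty z, the hazard is z * h0(t) with Weibull baseline h0.
    """
    z = bs_frailty(alpha, beta, m, rng)[:, None]          # (m, 1) frailty draws
    h0 = (k / lam) * (t / lam) ** (k - 1)                 # baseline hazard
    H0 = (t / lam) ** k                                   # cumulative baseline hazard
    # log-likelihood of the cluster given z, summed over its members
    loglik_given_z = (delta * np.log(z * h0) - z * H0).sum(axis=1)
    return np.log(np.exp(loglik_given_z).mean())          # average over z draws

t = np.array([1.2, 0.7, 3.1])        # one cluster's observed times (assumed)
delta = np.array([1, 1, 0])          # last observation right-censored
print(cluster_marginal_loglik(t, delta, k=1.5, lam=2.0, alpha=0.5, beta=1.0))
```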
39

Cure rate and destructive cure rate models under proportional odds lifetime distributions

Feng, Tian January 2019 (has links)
Cure rate models, introduced by Boag (1949), are very commonly used when modelling lifetime data involving long-term survivors. Applications of cure rate models can be seen in biomedical science, industrial reliability, finance, manufacturing, demography and criminology. In this thesis, cure rate models are discussed under a competing-cause scenario, with the assumption of proportional odds (PO) lifetime distributions for the susceptibles, and statistical inferential methods are then developed based on right-censored data. In Chapter 2, a flexible cure rate model is discussed by assuming that the number of competing causes for the event of interest follows the Conway-Maxwell-Poisson (COM-Poisson) distribution, with the lifetimes of non-cured or susceptible individuals described by a PO model. This provides a natural extension of the work of Gu et al. (2011), who considered a geometric number of competing causes. Under right censoring, maximum likelihood estimates (MLEs) are obtained by the use of the expectation-maximization (EM) algorithm. An extensive Monte Carlo simulation study is carried out for various scenarios, and model discrimination between some well-known cure models, such as the geometric, Poisson and Bernoulli, is also examined. Goodness-of-fit and model diagnostics are also discussed. A cutaneous melanoma dataset is used to illustrate the models as well as the inferential methods. Next, in Chapter 3, the destructive cure rate models, introduced by Rodrigues et al. (2011), are discussed under the PO assumption. Here, the initial number of competing causes is modelled by a weighted Poisson distribution, with special focus on the exponentially weighted Poisson, length-biased Poisson and negative binomial distributions. A damage distribution is then introduced for the number of initial causes that do not get destroyed. An EM-type algorithm for computing the MLEs is developed, and an extensive simulation study is carried out for various scenarios, with model discrimination between the three weighted Poisson distributions also examined. A cutaneous melanoma dataset is again used to illustrate the models and the inferential methods. In Chapter 4, frailty cure rate models are discussed under a gamma frailty, wherein the initial number of competing causes is described by a COM-Poisson distribution and the lifetimes of non-cured individuals are described by a PO model. The detailed steps of the EM algorithm are developed for this model, and an extensive simulation study is carried out to evaluate the performance of the proposed model and the estimation method. A cutaneous melanoma dataset as well as simulated data are used for illustrative purposes. Finally, Chapter 5 summarizes the work carried out in the thesis and suggests some problems of further research interest. / Thesis / Doctor of Philosophy (PhD)
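As a small runnable illustration, the sketch below fits the classical mixture cure model, which is the Bernoulli competing-cause special case mentioned above, with a logit link for the cure probability and Weibull lifetimes for the susceptibles; it is a deliberately simplified stand-in for the thesis's COM-Poisson/PO models, on simulated data.

```python
# A runnable sketch of the classical mixture cure likelihood (the Bernoulli
# competing-cause special case), with a logit link for the cure probability
# and Weibull lifetimes for susceptibles. Data are simulated assumptions.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

rng = np.random.default_rng(5)
n = 500
cured = rng.random(n) < 0.3                       # true cure fraction 0.3
t_event = weibull_min.rvs(1.5, scale=2.0, size=n, random_state=6)
t_cens = rng.uniform(0, 8, n)                     # administrative censoring
time = np.where(cured, t_cens, np.minimum(t_event, t_cens))
delta = (~cured) & (t_event <= t_cens)            # True = observed event

def negloglik(par):
    eta, logk, loglam = par
    p0 = 1 / (1 + np.exp(-eta))                   # cure probability (logit link)
    k, lam = np.exp(logk), np.exp(loglam)
    f = weibull_min.pdf(time, k, scale=lam)       # susceptible density
    S = weibull_min.sf(time, k, scale=lam)        # susceptible survival
    # events come only from susceptibles; censored units may be cured
    ll = np.where(delta, np.log1p(-p0) + np.log(f), np.log(p0 + (1 - p0) * S))
    return -ll.sum()

fit = minimize(negloglik, x0=[0.0, 0.0, 0.0], method="Nelder-Mead")
eta_hat = fit.x[0]
print(f"estimated cure fraction: {1 / (1 + np.exp(-eta_hat)):.3f}")
```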
40

Inference for Gamma Frailty Models based on One-shot Device Data

Yu, Chenxi January 2024 (has links)
A one-shot device is a device whose operation involves an irreversible chemical reaction or physical destruction, so that it can no longer function properly after performing its intended function. One-shot device test data differ from typical data obtained by measuring lifetimes in standard life-tests: due to the very nature of one-shot devices, the actual lifetimes of devices under test cannot be observed, and they are either left- or right-censored. In addition, a one-shot device often has multiple components that could cause the failure of the device. The components are coupled together in the manufacturing process or assembly, resulting in failure modes possessing latent heterogeneity and dependence. Frailty models enable us to describe the influence of common but unobservable covariates on the hazard function as a random effect in the model, and also provide an easily understandable interpretation. In this thesis, we develop inferential results for one-shot device testing data under a gamma frailty model. We first develop an efficient expectation-maximization (EM) algorithm for determining the maximum likelihood estimates of the parameters of a gamma frailty model with exponential lifetime distributions for the components, based on one-shot device test data with multiple failure modes, wherein the data are obtained from a constant-stress accelerated life-test. The maximum likelihood estimate of the mean lifetime of k-out-of-M structured one-shot devices under normal operating conditions is also presented. In addition, the asymptotic variance-covariance matrix of the maximum likelihood estimates is derived, and is then used to construct asymptotic confidence intervals for the model parameters. The performance of the proposed inferential methods is evaluated through Monte Carlo simulations and illustrated with a numerical example. A gamma frailty model with Weibull baseline hazards is considered next for fitting one-shot device testing data. The Weibull baseline hazards enable us to analyze time-varying failure rates more accurately, allowing a deeper understanding of the dynamic nature of a system's reliability. We develop an EM algorithm for estimating the model parameters utilizing the complete likelihood function. A detailed simulation study compares the performance of the Weibull baseline hazard model with that of the exponential baseline hazard model; the shape parameters introduced in the components' lifetime distributions by the Weibull baseline hazard model offer enhanced flexibility in model fitting. Finally, Bayesian inference is developed for the gamma frailty model with exponential baseline hazard for one-shot device testing data. We introduce a Bayesian estimation procedure using the Markov chain Monte Carlo (MCMC) technique for estimating the model parameters and for developing credible intervals for those parameters. The performance of the proposed method is evaluated in a simulation study, and model comparison between the independence model and the frailty model is made using a Bayesian model selection criterion. / Thesis / Candidate in Philosophy
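A brief sketch of why the gamma frailty is convenient for one-shot device data: for a series system with total component failure rate L and a mean-one Gamma(a, a) frailty, the marginal probability that a device still works at inspection time tau has the closed form (a/(a + L*tau))^a, so the binary test outcomes yield a simple likelihood. The direct maximum likelihood fit below is a stand-in for the thesis's EM and MCMC procedures, on simulated data.

```python
# A small sketch of the one-shot-device likelihood under a gamma frailty
# with exponential component lifetimes. With frailty Z ~ Gamma(shape=a,
# rate=a) (mean 1) and total component rate L, the marginal survival at
# inspection time tau is (a / (a + L*tau))**a, giving a simple binomial
# likelihood for the binary outcomes. Direct ML replaces the thesis's EM.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
tau = rng.uniform(1, 5, 300)                       # inspection times (assumed)
a_true, L_true = 2.0, 0.4
p_surv = (a_true / (a_true + L_true * tau)) ** a_true
working = rng.random(tau.size) < p_surv            # True = device works at tau

def negloglik(par):
    loga, logL = par
    a, L = np.exp(loga), np.exp(logL)
    S = (a / (a + L * tau)) ** a                   # marginal survival probability
    return -(np.where(working, np.log(S), np.log1p(-S))).sum()

fit = minimize(negloglik, x0=[0.0, 0.0], method="Nelder-Mead")
print("frailty shape and rate estimates:", np.exp(fit.x))
```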
