About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations (NDLTD).

Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
1

Analysing stochastic call demand with time varying parameters

Li, Song (25 November 2005)
In spite of increasingly sophisticated workforce management tools, a significant gap remains between the goal of effective staffing and the difficulty of predicting the stochastic demand of inbound calls. We investigated a hypothesized nonhomogeneous Poisson process model for callers to the University community's modem pool. In our case, we tested whether the arrivals could be approximated by a piecewise constant rate over short intervals. For both 1- and 10-minute intervals, tests based on the close relationship between the Poisson process and the exponential distribution showed no evidence of a homogeneous Poisson process. We then examined the hypothesis of a nonhomogeneous Poisson process using a transformed statistic, and quantitative and graphical goodness-of-fit tests confirmed it.

Further analysis of the intensity function revealed that a linear intensity was inadequate for predicting time-varying arrivals, and for the sinusoidal rate model, difficulty arose in setting the period parameter. Spline models, as an alternative to parametric modelling, offered better control of the balance between data fit and smoothness, which was appealing for our analysis of the call arrival process.
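As an illustration of the kind of test described above (not code from the thesis), the sketch below checks a record of arrival times for consistency with a constant-rate Poisson process by testing the inter-arrival gaps against an exponential distribution; the thesis applies such tests within short 1- and 10-minute windows where the rate is approximately constant, and the data here are simulated:

```python
import numpy as np
from scipy import stats

def constant_rate_pvalue(arrival_times):
    """KS test of inter-arrival gaps against an exponential distribution.

    Under a homogeneous Poisson process, inter-arrival times are i.i.d.
    exponential; a small p-value is evidence against a constant rate.
    (Estimating the scale from the data makes the nominal p-value
    approximate, as in a Lilliefors-type test.)
    """
    gaps = np.diff(np.sort(np.asarray(arrival_times)))
    return stats.kstest(gaps, "expon", args=(0, gaps.mean())).pvalue

# Hypothetical example: a time-varying rate should be rejected, a constant one not
rng = np.random.default_rng(0)
homogeneous = np.sort(rng.uniform(0, 100, 2000))   # constant-rate arrivals
nonhomogeneous = np.sort(100 * rng.random(2000) ** 2)  # rate decaying in time
print("homogeneous p-value:   ", round(constant_rate_pvalue(homogeneous), 3))
print("nonhomogeneous p-value:", round(constant_rate_pvalue(nonhomogeneous), 4))
```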
2

Bayesian classification and survival analysis with curve predictors

Wang, Xiaohui (15 May 2009)
We propose classification models for binary and multicategory data where the predictor is a random function. The functional predictor could be irregularly and sparsely sampled, or characterized by high dimension and sharp localized changes. In the former case, we employ Bayesian modeling with a flexible spline basis, which is widely used for functional regression. In the latter case, we use Bayesian modeling with wavelet basis functions, which have good approximation properties over a large class of function spaces and can accommodate the variety of functional forms observed in real-life applications. We develop a unified hierarchical model that accommodates both the adaptive spline- or wavelet-based function estimation model and the logistic classification model; the two models are coupled so as to borrow strength from each other within this unified hierarchical framework. The use of Gibbs sampling with conjugate priors for posterior inference makes the method computationally feasible. We compare the performance of the proposed models with naive models as well as existing alternatives by analyzing simulated and real data. We also propose a Bayesian unified hierarchical model, based on a proportional hazards model and a generalized linear model, for survival analysis with irregular longitudinal covariates. This relatively simple joint model has two advantages. One is that using a spline basis simplifies the parameterization while still capturing a flexible nonlinear pattern of the function. The other is that the joint modeling framework allows information to be shared between the regression on functional predictors and the proportional hazards modeling of survival data, improving the efficiency of estimation. The method can be used not only with a single functional predictor but also with multiple functional predictors. Our methods are applied to real data sets and compared with a parameterized regression method.
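A minimal non-Bayesian sketch of the spline-basis idea: each sampled curve is projected onto a cubic B-spline basis and the coefficients are fed to a logistic classifier. The basis size, the simulated data, and the use of scikit-learn in place of the authors' Gibbs sampler are all illustrative assumptions:

```python
import numpy as np
from scipy.interpolate import BSpline
from sklearn.linear_model import LogisticRegression

def bspline_design(t, n_basis=10, degree=3):
    """Evaluate a clamped cubic B-spline basis on points t in [0, 1]."""
    inner = np.linspace(0, 1, n_basis - degree + 1)
    knots = np.r_[np.zeros(degree), inner, np.ones(degree)]
    # Identity coefficient matrix evaluates every basis function at once
    return BSpline(knots, np.eye(n_basis), degree)(t)   # shape (len(t), n_basis)

# Hypothetical data: 200 curves on a common grid; class 1 has a localized bump
rng = np.random.default_rng(1)
grid = np.linspace(0, 1, 60)
y = rng.integers(0, 2, 200)
bump = np.exp(-((grid - 0.7) ** 2) / 0.005)
X_curves = rng.normal(0, 0.3, (200, 60)) + np.outer(y, bump)

B = bspline_design(grid)                    # basis evaluated on the grid
coefs = X_curves @ np.linalg.pinv(B).T      # least-squares basis coefficients
clf = LogisticRegression(max_iter=1000).fit(coefs, y)
print("training accuracy:", clf.score(coefs, y))
```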
3

Regression analysis with longitudinal measurements

Ryu, Duchwan (29 August 2005)
Bayesian approaches to regression analysis with longitudinal measurements are considered. The history of measurements from a subject may convey characteristics of that subject; hence, in a regression analysis with longitudinal measurements, the characteristics of each subject can serve as covariates, in addition to other possible covariates. The longitudinal measurements may also lead to complicated covariance structures within each subject, and these should be modeled properly. When the covariates are unobservable characteristics of each subject, Bayesian parametric and nonparametric regressions are considered. Although the covariates are not directly observable, they can be estimated by virtue of the longitudinal measurements; in this case a measurement error problem is inevitable, and a classical measurement error model is established. In the Bayesian framework, the regression function as well as all the unobservable covariates and nuisance parameters are estimated. As multiple covariates are involved, a generalized additive model is adopted, and the Bayesian backfitting algorithm is utilized for each component of the additive model. For binary responses, logistic regression is proposed, where the link function is estimated by Bayesian parametric and nonparametric regressions; for the link function, the introduction of latent variables makes computation fast. In the next part, each subject is assumed to be observed at time points that are not prespecified; furthermore, the time of the next measurement from a subject is supposed to depend on the subject's previous measurement history. For these outcome-dependent follow-up times, various modeling options and the associated analyses are examined to investigate how outcome-dependent follow-up times affect estimation, within the frameworks of Bayesian parametric and nonparametric regressions. Correlation structures of outcomes are based on different correlation coefficients for different subjects. First, regression models are constructed by assuming a Poisson process for the follow-up times. To interpret the subject-specific random effects, more flexible models are considered by introducing a latent variable for the subject-specific random effect and a survival distribution for the follow-up times. The performance of each model is evaluated using Bayesian model assessments.
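The backfitting idea this abstract relies on is easy to state in its classical (non-Bayesian) form: cycle through the additive components, refitting each one to the partial residuals of the others. A hedged sketch on hypothetical data, with a hand-chosen smoothing level; the Bayesian version replaces the smoother updates with draws from full conditionals:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def backfit(X, y, n_iter=20, smooth=None):
    """Classical backfitting for an additive model y = a + sum_j f_j(x_j) + noise."""
    n, p = X.shape
    alpha = y.mean()
    f = np.zeros((n, p))                     # current component fits
    for _ in range(n_iter):
        for j in range(p):
            # Partial residuals: remove all components except the j-th
            resid = y - alpha - f.sum(axis=1) + f[:, j]
            order = np.argsort(X[:, j])
            spl = UnivariateSpline(X[order, j], resid[order], s=smooth)
            f[:, j] = spl(X[:, j]) - spl(X[:, j]).mean()   # center each f_j
    return alpha, f

# Hypothetical data: two additive effects plus noise; smoothing level s is
# chosen by hand for this illustration (s ~ n * noise variance)
rng = np.random.default_rng(2)
X = rng.uniform(-2, 2, (300, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.2, 300)
alpha, f = backfit(X, y, smooth=12.0)
print("residual std:", round(np.std(y - alpha - f.sum(axis=1)), 3))
```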
4

Cure Rate Models with Nonparametric Form of Covariate Effects

Chen, Tianlei (2 June 2015)
This thesis focuses on the development of spline-based hazard estimation models for cure rate data. Such data arise in survival studies with long-term survivors, so the population consists of susceptible and non-susceptible sub-populations, with the latter termed "cured". Modeling both the cure probability and the hazard function of the susceptible sub-population is of practical interest. Here we propose two smoothing-spline-based models, falling respectively into the popular classes of two-component mixture cure rate models and promotion time cure rate models. Under the framework of the two-component mixture cure rate model, Wang, Du and Liang (2012) developed a nonparametric model in which the covariate effects on both the cure probability and the hazard component are estimated by smoothing splines. Our first development falls under the same framework but estimates the hazard component with an accelerated failure time model instead of the proportional hazards model of Wang, Du and Liang (2012); the new model has a more direct interpretation in practice. The promotion time cure rate model, motivated by a simplified biological interpretation of cancer metastasis, was first proposed only a few decades ago; nonetheless, it has quickly become a competitor to the mixture models. Our second development aims to provide a nonparametric alternative to the existing parametric or semiparametric promotion time models.
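For concreteness, the two-component mixture cure rate model writes the population survival function as S(t | x) = pi(x) + (1 - pi(x)) S_u(t | x), where pi(x) is the cure probability. The sketch below fits a fully parametric stand-in (logistic cure probability, Weibull susceptible survival) by maximum likelihood on simulated data; the thesis replaces these parametric pieces with smoothing-spline estimates:

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, t, delta, x):
    """Mixture cure model: logistic cure probability, Weibull susceptibles."""
    b0, b1, log_shape, log_scale = params
    pi = 1.0 / (1.0 + np.exp(-(b0 + b1 * x)))        # cure probability
    k, lam = np.exp(log_shape), np.exp(log_scale)
    S_u = np.exp(-((t / lam) ** k))                  # Weibull survival
    f_u = (k / lam) * (t / lam) ** (k - 1) * S_u     # Weibull density
    ll = np.where(delta == 1,
                  np.log1p(-pi) + np.log(f_u),       # observed events (susceptible)
                  np.log(pi + (1 - pi) * S_u))       # censored (possibly cured)
    return -ll.sum()

# Hypothetical simulated data with administrative censoring at t = 5
rng = np.random.default_rng(3)
n = 500
x = rng.normal(size=n)
pi = 1 / (1 + np.exp(-(-1.0 + 0.8 * x)))             # true cure probabilities
cured = rng.random(n) < pi
t_event = 2.0 * rng.weibull(1.5, n)                  # susceptible event times
t = np.where(cured, 5.0, np.minimum(t_event, 5.0))
delta = (~cured & (t_event < 5.0)).astype(int)

fit = minimize(neg_loglik, x0=[0, 0, 0, 0], args=(t, delta, x), method="Nelder-Mead")
print("estimates:", np.round(fit.x, 2))   # cf. true (-1.0, 0.8, log 1.5, log 2.0)
```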
5

Automatic isogeometric analysis suitable trivariate models generation: Application to reduced order modeling

Al Akhras, Hassan (19 May 2016)
This thesis presents an effective method to automatically construct trivariate tensor-product spline models of complicated geometry and arbitrary topology. The method takes as input a solid model defined by its triangulated boundary. Using cuboid decomposition, an initial polycube approximating the input boundary mesh is built; this polycube serves as the parametric domain of the tensor-product spline representation required for isogeometric analysis. The polycube's nodes and arcs decompose the input model locally into quadrangular patches and globally into hexahedral domains. Using an aligned global parameterization, the nodes are re-positioned and the arcs are re-routed across the surface so as to achieve low overall patch distortion and alignment with principal curvature directions and sharp features. The optimization process is based on one of the main contributions of this thesis: a novel way to design cross fields with topological (i.e., imposed singularities) and geometrical (i.e., imposed directions) constraints by solving only sparse linear systems. Based on the optimized polycube and parameterization, compatible B-spline boundary surfaces are reconstructed. Finally, the interior volumetric parameterization is computed using Coons interpolation with the B-spline surfaces as boundary conditions. The method can be applied to reduced order modeling for parametric studies based on geometrical parameters: models with the same topology but different geometries can be given the same representation, i.e., meshes (or parameterizations) with the same topology.
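The final step, filling the interior from the boundary by Coons interpolation, is simple to illustrate in two dimensions. A hedged sketch: the thesis constructs trivariate volumes from B-spline surfaces, while this is the analogous bilinear Coons surface built from four sampled boundary curves, with corner compatibility assumed:

```python
import numpy as np

def coons_patch(bottom, top, left, right):
    """Bilinear Coons interpolation of four boundary curves.

    bottom/top: arrays of shape (nu, d) sampled over u in [0, 1]
    left/right: arrays of shape (nv, d) sampled over v in [0, 1]
    Corner compatibility is assumed, e.g. bottom[0] == left[0].
    """
    nu, nv = len(bottom), len(left)
    u = np.linspace(0, 1, nu)[:, None, None]        # (nu, 1, 1)
    v = np.linspace(0, 1, nv)[None, :, None]        # (1, nv, 1)
    ruled_u = (1 - v) * bottom[:, None] + v * top[:, None]
    ruled_v = (1 - u) * left[None, :] + u * right[None, :]
    corners = ((1 - u) * (1 - v) * bottom[0] + u * (1 - v) * bottom[-1]
               + (1 - u) * v * top[0] + u * v * top[-1])
    return ruled_u + ruled_v - corners              # (nu, nv, d) interior grid

# Example: a planar patch with one curved edge (hypothetical boundary data)
t = np.linspace(0, 1, 20)
bottom = np.stack([t, np.zeros_like(t)], axis=1)
top = np.stack([t, 1 + 0.3 * np.sin(np.pi * t)], axis=1)
left = np.stack([np.zeros_like(t), t], axis=1)
right = np.stack([np.ones_like(t), t], axis=1)
grid = coons_patch(bottom, top, left, right)
print(grid.shape)   # (20, 20, 2): an interior parameterization of the patch
```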
6

Likelihood Ratio Combination of Multiple Biomarkers and Change Point Detection in Functional Time Series

Du, Zhiyuan (24 September 2024)
Utilizing multiple biomarkers in medical research is crucial for diagnostic accuracy in detecting diseases, and an optimal method for combining these biomarkers is essential to maximize the area under the receiver operating characteristic (ROC) curve (AUC). The optimality of the likelihood ratio has been proven, but challenges persist in estimating it, primarily in estimating the multivariate density functions that compose it. In this study, we propose a nonparametric approach that uses smoothing-spline density estimation to approximate the full likelihood functions of the diseased and non-diseased groups, whose ratio forms the combination score. Simulation results demonstrate the efficiency of our method compared to other biomarker combination techniques under various settings for the generated biomarker values. We also apply the proposed method to a real-world study aimed at detecting childhood autism spectrum disorder (ASD), showcasing its practical relevance and potential for future applications in medical research. Change point detection for functional time series has attracted considerable attention from researchers. Existing methods either rely on functional principal component analysis (FPCA), which may perform poorly with complex data, or use bootstrap approaches in forms that fall short of detecting diverse change functions effectively. We propose a novel self-normalized (SN) test for functional time series, implemented via a non-overlapping block bootstrap, that avoids reliance on FPCA. The SN factor ensures both monotonic power and adaptability for detecting diverse change functions in complex data, and we demonstrate the test's robustness in detecting changes in the autocovariance operator. Simulation studies confirm the superior performance of our test across various settings, and real-world applications further illustrate its practical utility.

In medical research, it is crucial to accurately detect diseases and predict patient outcomes using multiple health indicators, also known as biomarkers. Combining these biomarkers effectively can significantly improve our ability to diagnose and treat various health conditions, but finding the best way to combine them has been a long-standing challenge. In this study, we propose a new, easy-to-understand method for combining multiple biomarkers using advanced estimation techniques. Our method takes various factors into account and provides a more accurate way to evaluate the combined information from different biomarkers. Through simulations, we demonstrate that it performs better than existing methods under a variety of scenarios, and we apply it to a real-world study on detecting childhood autism spectrum disorder (ASD), highlighting its practical value. Detecting changes in patterns over time, especially shifts in averages, has become an important focus in data analysis. Existing methods often rely on techniques that may not perform well with complex data or are limited in the types of changes they can detect. We introduce a new approach that improves the accuracy of detecting changes in complex data patterns; it is flexible and can identify changes in both the mean and the variation of the data over time. Through simulations, we demonstrate that this approach is more accurate than current methods, and we apply it to real-world climate research data, illustrating its practical value.
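A hedged sketch of the likelihood-ratio combination on simulated two-biomarker data, with Gaussian kernel density estimates standing in for the smoothing-spline density estimation the dissertation develops:

```python
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.metrics import roc_auc_score

# Hypothetical two-biomarker data for non-diseased (0) and diseased (1) groups
rng = np.random.default_rng(4)
x0 = rng.multivariate_normal([0, 0], [[1, 0.3], [0.3, 1]], 300)        # controls
x1 = rng.multivariate_normal([0.8, 0.6], [[1, -0.2], [-0.2, 1]], 300)  # cases

# Estimate the two multivariate densities (KDE here, in place of the
# smoothing-spline estimator) and score each subject by the likelihood ratio
f0 = gaussian_kde(x0.T)
f1 = gaussian_kde(x1.T)
x = np.vstack([x0, x1])
y = np.r_[np.zeros(300), np.ones(300)]
lr = f1(x.T) / f0(x.T)                  # likelihood-ratio combination score

print("AUC (likelihood ratio):", round(roc_auc_score(y, lr), 3))
print("AUC (marker 1 alone): ", round(roc_auc_score(y, x[:, 0]), 3))
```

The likelihood-ratio score should dominate any single marker in AUC, which is the optimality property the dissertation builds on.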
