221. STATISTICAL AND METHODOLOGICAL ISSUES ON COVARIATE ADJUSTMENT IN CLINICAL TRIALS
Chu, Rong, 04 1900
Background and objectives

We investigate three issues related to the adjustment for baseline covariates in late-phase clinical trials: (1) the analysis of correlated outcomes in multicentre RCTs, (2) the assessment of the probability and implications of prognostic imbalance in RCTs, and (3) the adjustment for baseline confounding in cohort studies.

Methods

Project 1: We investigated the properties of six statistical methods for analyzing continuous outcomes in multicentre randomized controlled trials (RCTs) where within-centre clustering was possible. We simulated studies over various intraclass correlation (ICC) values with several centre combinations.

Project 2: We simulated data from RCTs evaluating a binary outcome, varying the risk of the outcome, the effect of the treatment, the power and prevalence of a binary prognostic factor (PF), and the sample size. We compared logistic regression models with and without adjustment for the PF in terms of bias, standard error, confidence interval coverage, and statistical power. A tool for assessing the sample size required to control for chance imbalance was proposed.

Project 3: We conducted a prospective cohort study to evaluate the effect of tuberculosis (TB) at the initiation of antiretroviral therapy (ART) on all-cause mortality, using a Cox proportional hazards model on propensity score (PS) matched patients to control for potential confounding. We assessed the robustness of the results using sensitivity analyses.

Results and conclusions

Project 1: All six methods produce unbiased estimates of treatment effect in multicentre trials. Adjusting for centre as a random intercept leads to the most efficient estimation of the treatment effect, and hence should be used in the presence of clustering.

Project 2: The probability of prognostic imbalance in small trials can be substantial. Covariate adjustment improves estimation accuracy and statistical power, and hence should be performed when strong PFs are observed.

Project 3: After controlling for the important confounding variables, HIV patients who had TB at the initiation of ART have a moderately increased risk of overall mortality.

Doctor of Philosophy (PhD)
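To make the Project 2 design concrete, the following is a minimal simulation sketch: a small two-arm trial with a binary prognostic factor, analyzed by logistic regression with and without adjustment. The sample sizes, effect sizes, and variable names are illustrative assumptions, not the thesis's actual simulation parameters.

```python
# Minimal sketch of a Project 2 style simulation: an RCT with a binary
# prognostic factor (PF), analyzed with and without adjustment.
# All effect sizes and sample sizes here are illustrative assumptions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n, n_sims = 200, 500          # modest trial, repeated many times
beta_trt, beta_pf = 0.7, 1.5  # assumed log-odds effects

est_unadj, est_adj = [], []
for _ in range(n_sims):
    trt = rng.integers(0, 2, n)            # 1:1 randomization
    pf = rng.integers(0, 2, n)             # prevalent prognostic factor
    logit = -1.0 + beta_trt * trt + beta_pf * pf
    y = rng.binomial(1, 1 / (1 + np.exp(-logit)))
    # Unadjusted model: treatment only
    m0 = sm.Logit(y, sm.add_constant(trt.astype(float))).fit(disp=0)
    # Adjusted model: treatment plus the prognostic factor
    X = sm.add_constant(np.column_stack([trt, pf]).astype(float))
    m1 = sm.Logit(y, X).fit(disp=0)
    est_unadj.append(m0.params[1])
    est_adj.append(m1.params[1])

print("mean unadjusted estimate:", np.mean(est_unadj))
print("mean adjusted estimate:  ", np.mean(est_adj))  # closer to beta_trt
```

Because the odds ratio is non-collapsible, the unadjusted estimate targets a smaller marginal effect than the conditional `beta_trt`, which is one reason adjustment for strong prognostic factors matters.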
222. LIKELIHOOD-BASED INFERENTIAL METHODS FOR SOME FLEXIBLE CURE RATE MODELS
Pal, Suvra, 04 1900
Recently, the Conway-Maxwell Poisson (COM-Poisson) cure rate model has been proposed, which includes as special cases some of the well-known cure rate models discussed in the literature. Data obtained from cancer clinical trials are often right censored, and the expectation maximization (EM) algorithm can be used efficiently to determine the maximum likelihood estimates (MLEs) of the model parameters from such data.

Assuming the lifetime distribution to be exponential, lognormal, Weibull, or gamma, the necessary steps of the EM algorithm are developed for the COM-Poisson cure rate model and some of its special cases. The inferential method is examined by means of an extensive simulation study. Model discrimination within the COM-Poisson family is carried out by the likelihood ratio test as well as by information-based criteria. The proposed method is then illustrated with cutaneous melanoma data on cancer recurrence. As the lifetime distributions considered are not nested, it is not possible to carry out a formal statistical test to determine which of them provides an adequate fit to the data. For this reason, the wider class of generalized gamma distributions, which contains all of the above lifetime distributions as special cases, is considered. The steps of the EM algorithm are then developed for this general class, and a simulation study is carried out to evaluate the performance of the proposed estimation method. Model discrimination within the generalized gamma family is again carried out by the likelihood ratio test and information-based criteria. Finally, for the cutaneous melanoma data, the two-way flexibility of the COM-Poisson family and the generalized gamma family is used to carry out a two-way model discrimination, selecting a parsimonious competing-cause distribution along with a lifetime distribution that provides the best fit to the data.

Doctor of Philosophy (PhD)
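For orientation, below is a hedged sketch of the EM machinery in the simplest special case contained in this family: a Bernoulli (mixture) cure rate model with exponential lifetimes under right censoring, where the population survival is S_pop(t) = pi0 + (1 - pi0) exp(-lam t). The starting values and synthetic data are illustrative assumptions; the thesis develops the steps for the full COM-Poisson family.

```python
# Hedged sketch: EM for a Bernoulli mixture cure model with exponential
# lifetimes, the simplest special case of the flexible families above.
# This illustrates the E- and M-steps only; it is not the thesis code.
import numpy as np

def em_cure_exponential(t, delta, n_iter=200):
    """t: observed times; delta: 1 = event, 0 = right censored."""
    pi0, lam = 0.3, 1.0 / np.mean(t)   # crude starting values
    for _ in range(n_iter):
        # E-step: P(uncured | data). Events are uncured with certainty;
        # censored subjects get a posterior weight.
        s_u = np.exp(-lam * t)
        w = np.where(delta == 1, 1.0,
                     (1 - pi0) * s_u / (pi0 + (1 - pi0) * s_u))
        # M-step: closed-form updates for cure fraction and rate.
        pi0 = 1.0 - w.mean()
        lam = delta.sum() / np.sum(w * t)
    return pi0, lam

# Quick check on synthetic data with a 40% cure fraction and rate 2.0.
rng = np.random.default_rng(1)
n = 2000
cured = rng.random(n) < 0.4
latent = np.where(cured, np.inf, rng.exponential(1 / 2.0, n))
censor = rng.exponential(2.0, n)
t = np.minimum(latent, censor)
delta = (latent <= censor).astype(int)
print(em_cure_exponential(t, delta))   # roughly (0.4, 2.0)
```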
223. STATISTICAL AND METHODOLOGICAL ISSUES IN EVALUATION OF INTEGRATED CARE PROGRAMS
Ye, Chenglin, January 2014
Background

Integrated care programs are collaborations to improve the delivery of health services for patients with multiple conditions.

Objectives

This thesis investigated three issues in the evaluation of integrated care programs: (1) quantifying integration for integrated care programs, (2) analyzing integrated care programs with substantial non-compliance, and (3) assessing bias when evaluating integrated care programs under different non-compliance scenarios.

Methods

Project 1: We developed a method to quantify integration through service providers' perceptions and expectations. For each provider, four integration scores were calculated. The properties of the scores were assessed.

Project 2: A randomized controlled trial (RCT) compared the Children's Treatment Network (CTN) with usual care in managing children with complex conditions. To handle non-compliance, we employed intention-to-treat (ITT), as-treated (AT), per-protocol (PP), and instrumental variable (IV) analyses. We also investigated propensity score (PS) methods to control for potential confounding.

Project 3: Based on the CTN study, we simulated trials under different non-compliance scenarios. We then compared the ITT, AT, PP, IV, and complier average causal effect methods in analyzing the data. The results were compared in terms of the bias of the estimate, mean squared error, and 95% confidence interval coverage.

Results and conclusions

Project 1: We demonstrated the proposed method of measuring integration and some of its properties. Bootstrap analyses showed that the global integration score was robust. Our method extends existing measures of integration and possesses a good degree of validity.

Project 2: The CTN intervention was not significantly different from usual care in improving patients' outcomes. The study highlighted some methodological challenges in evaluating integrated care programs in an RCT setting.

Project 3: When an intervention had a moderate or large effect, the ITT analysis was considerably biased under non-compliance, and alternative analyses could provide unbiased results. To minimize the bias, we make recommendations for the choice of analysis under different scenarios.

Doctor of Philosophy (PhD)
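The following is a minimal sketch of the Project 3 comparison: a simulated trial with one-sided non-compliance driven by an unobserved prognostic variable, analyzed with ITT, AT, PP, and the IV (Wald) estimator of the complier average causal effect (CACE). All rates and effect sizes are illustrative assumptions, not the thesis's simulation settings.

```python
# Hedged sketch: one-sided non-compliance and four common analyses.
# Compliance and effects below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(7)
n = 100_000
z = rng.integers(0, 2, n)                  # randomized assignment
u = rng.normal(size=n)                     # unobserved prognosis
complier = (u + rng.normal(size=n)) > 0    # uptake depends on prognosis
d = z * complier                           # received treatment (one-sided)
tau = 0.5                                  # true effect among the treated
y = 1.0 + tau * d + 0.8 * u + rng.normal(size=n)

itt = y[z == 1].mean() - y[z == 0].mean()               # diluted
at = y[d == 1].mean() - y[d == 0].mean()                # confounded by u
pp = y[(z == 1) & (d == 1)].mean() - y[z == 0].mean()   # also biased
wald = itt / (d[z == 1].mean() - d[z == 0].mean())      # CACE estimate

print(f"ITT {itt:.3f}  AT {at:.3f}  PP {pp:.3f}  IV/CACE {wald:.3f}")
```

Under one-sided non-compliance the effect among the treated equals the complier effect, so the Wald ratio recovers roughly 0.5 here while AT and PP inherit the confounding through `u`.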
224. Fault Detection and Diagnosis of a Diesel Engine Valve Train
Flett, Justin A., 01 April 2015
One of the most commonly used mechanical systems is the internal combustion engine. Internal combustion engines dominate the automotive industry and have numerous other applications in power generation, transportation, and beyond. This thesis presents the development of a fault detection and diagnosis (FDD) system for an internal combustion engine valve train, with a focus on valve impact amplitudes. Engine-cycle averaging and band-pass filtering methods were tuned and used to improve the signal-to-noise ratio. A novel feature extraction method was developed that combines a local RMS sliding-window method with an adaptive threshold. Faults were seeded in the form of deformed valve springs as well as abnormal valve clearances. The engine's manufacturer specifies that a valve spring with 3 mm or more of deformation should be replaced; this thesis investigated the detection of a comparatively small 0.5 mm spring deformation. Valve clearance values were adjusted 0.1 mm above and below the nominal clearance (0.15 mm) to test large-clearance faults (0.25 mm) and small-clearance faults (0.05 mm). The performance of the FDD system was tested on an instrumented diesel engine test bed, and numerous signal processing techniques and classification methods were compared.

Master of Applied Science (MASc)
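As a hedged illustration of the feature extraction idea described above (a local RMS sliding window with an adaptive threshold), consider the sketch below; the window length, threshold rule, and synthetic signal are assumptions, not the thesis implementation.

```python
# Hedged sketch: local RMS over a sliding window plus an adaptive
# threshold derived from the signal's own statistics.
import numpy as np

def local_rms(x, win):
    """RMS of x over a centered sliding window of length win."""
    pad = win // 2
    xp = np.pad(x.astype(float) ** 2, pad, mode="edge")
    kernel = np.ones(win) / win
    return np.sqrt(np.convolve(xp, kernel, mode="same")[pad:pad + len(x)])

def detect_impacts(x, win=64, k=3.0):
    """Flag samples whose local RMS exceeds median + k*MAD (adaptive)."""
    rms = local_rms(x, win)
    med = np.median(rms)
    mad = np.median(np.abs(rms - med))
    return rms > med + k * mad

# Synthetic vibration trace: noise plus two valve-impact-like bursts.
rng = np.random.default_rng(0)
sig = rng.normal(0, 0.1, 5000)
for c in (1200, 3400):
    sig[c:c + 50] += np.hanning(50) * 2.0
mask = detect_impacts(sig)
print("flagged sample range:", np.flatnonzero(mask)[[0, -1]])
```

The median/MAD rule is one simple way to make the threshold adapt to varying noise floors; the thesis compares several such processing choices.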
225. Statistical Improvements for Ecological Learning about Spatial Processes
Dupont, Gaetan L, 20 October 2021
Ecological inquiry is rooted fundamentally in understanding population abundance, both to develop theory and to improve conservation outcomes. Despite this importance, estimating abundance is difficult due to the imperfect detection of individuals in a sampled population. Further, accounting for space can provide more biologically realistic inference, shifting the focus from abundance to density and encouraging the exploration of spatial processes. To address these challenges, Spatial Capture-Recapture (“SCR”) has emerged as the most prominent method for estimating density reliably. The SCR model is conceptually straightforward: it combines a spatial model of detection with a point process model of the spatial distribution of individuals, using data collected on individuals within a spatially referenced sampling design. These data are often coarse in spatial and temporal resolution, though, motivating research into improving the quality of the data available for analysis. Here I explore two related approaches to improving inference from SCR: sampling design and data integration.

Chapter 1 describes the context of this thesis in more detail. Chapter 2 presents a framework for improving SCR sampling designs through an algorithmic optimization approach; compared with pre-existing recommendations, the optimized designs perform just as well but offer far more flexibility to accommodate available resources and challenging sampling scenarios. Chapter 3 presents one of the first methods for integrating an explicit movement model into the SCR model using telemetry data, which provide information at a much finer spatial scale; the integrated model shows significant improvements over the standard model for a specific inferential objective, in this case the estimation of landscape connectivity.

In Chapter 4, I close with two broader conclusions about developing statistical methods for ecological inference. First, simulation-based evaluation is integral to this process, but the circularity of its use is easily understated. Second, and often underappreciated: statistical solutions should be as intuitive as possible to facilitate their adoption by a diverse pool of potential users. These novel approaches to sampling design and data integration represent essential steps in advancing SCR and offer intuitive opportunities to advance ecological learning about spatial processes.
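For readers unfamiliar with SCR, the sketch below shows the core of its observation model: a half-normal detection function linking an individual's latent activity center to a grid of traps. The parameter values and toy design are illustrative assumptions, not taken from the thesis.

```python
# Hedged sketch of the SCR detection model: a half-normal function of
# the distance between an activity center and each trap.
import numpy as np

def half_normal_p(dist, p0=0.3, sigma=1.5):
    """Detection probability as a function of trap-to-center distance."""
    return p0 * np.exp(-dist**2 / (2 * sigma**2))

# Toy design: a 5x5 trap grid and one activity center.
xs = np.arange(5.0)
traps = np.array([(x, y) for x in xs for y in xs])
center = np.array([2.3, 1.7])
d = np.linalg.norm(traps - center, axis=1)
p = half_normal_p(d)

# Expected encounters over K sampling occasions.
K = 5
print("expected captures at the first five traps:", np.round(K * p, 2)[:5])
```

Design optimization (Chapter 2) amounts to choosing trap locations that make quantities like these informative about density; data integration (Chapter 3) replaces the static activity center with an explicit movement model informed by telemetry.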
226. Determinación de los esfuerzos últimos de la Guadua Angustifolia en la región andina de Colombia correlacionada con variables de clima
Lozano Peña, Jorge Enrique, 13 April 2021
In 2010, the Colombian earthquake-resistant construction code (NSR-10) established a design procedure based on the allowable-stress method for using Guadua angustifolia Kunth as a construction material. The Andean region of Colombia holds extensive natural Guadua forests. However, geographic and environmental conditions (temperature, rainfall, altitude above sea level, etc.) make it difficult to exploit this material for construction. Moreover, the forests are located mainly in places with difficult access, which not only produces large scatter in the material's physical and mechanical properties but also drives up the costs of characterizing its strength for structural applications. Hence the need for a simplified system for estimating the mechanical properties of Guadua that can be applied at any remote site and that also reduces the costs associated with transporting samples and laboratory testing.

This doctoral thesis proposes a methodology for determining the mechanical properties of the Guadua angustifolia that grows in the Andean region of Colombia. An extensive experimental campaign of 2917 laboratory tests was carried out, accounting for variables such as provenance, temperature, and rainfall, as well as stem diameter and wall thickness. With the help of statistical analyses used to remove outliers from the test data, the mechanical properties of Guadua were evaluated: tensile, compressive, shear, and bending strength. These variables were then correlated through statistical models to determine their relationship with the ultimate stresses obtained in the experimental campaign. The models, built on and calibrated against a large body of experimental data, aim to predict the ultimate stress values of the material in the field, before extraction from the forest and without mechanical testing. The results and conclusions of this thesis will be useful to scientists, architects, engineers, and builders in general, as they allow the properties of Guadua to be estimated economically, accurately, and quickly.

Lozano Peña, JE. (2021). Determinación de los esfuerzos últimos de la Guadua Angustifolia en la región andina de Colombia correlacionada con variables de clima [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/165379
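A hedged sketch of the kind of predictive model described above follows: ordinary least squares relating ultimate stress to climate and geometry predictors measurable in the field. The variable ranges, units, and coefficients are illustrative assumptions; the thesis calibrates its own models against the 2917 test results.

```python
# Hedged sketch: OLS relating ultimate stress to field-measurable
# climate and geometry predictors. All values here are assumptions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 300
temp = rng.uniform(14, 28, n)        # mean temperature, deg C
rain = rng.uniform(1000, 3000, n)    # annual rainfall, mm
diam = rng.uniform(80, 150, n)       # stem diameter, mm
wall = rng.uniform(8, 20, n)         # wall thickness, mm
# Synthetic "ultimate stress" with an assumed dependence on predictors.
stress = (40 - 0.4 * temp + 0.002 * rain + 0.05 * diam + 0.6 * wall
          + rng.normal(0, 2, n))

X = sm.add_constant(np.column_stack([temp, rain, diam, wall]))
fit = sm.OLS(stress, X).fit()
print(fit.params)   # field-measurable inputs -> predicted strength
```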
227. Superconformal indices, dualities and integrability
Gahramanov, Ilmar, 29 July 2016
In this thesis we discuss exact, non-perturbative results obtained using the superconformal index technique in supersymmetric gauge theories with four supercharges (that is, N = 1 supersymmetry in four dimensions and N = 2 supersymmetry in three). We use the superconformal index technique to test several duality conjectures for supersymmetric gauge theories, performing tests of three-dimensional mirror symmetry and of Seiberg-like dualities. The purpose of this thesis is to present recent progress in non-perturbative supersymmetric gauge theories in relation to mathematical physics. In particular, we discuss some interesting integral identities satisfied by basic and elliptic hypergeometric functions and their relation to supersymmetric dualities in three and four dimensions. Methods of exact computation in supersymmetric theories are also applicable to integrable statistical models, which we discuss in the last chapter of the thesis.
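For orientation, the four-dimensional N = 1 superconformal index underlying these computations can be written schematically as follows; this is a standard form quoted for context, not a result specific to this thesis.

```latex
% Schematic form of the 4d N=1 superconformal index:
\[
  I(p, q; u) \;=\; \operatorname{Tr}\,(-1)^{F}\,
  p^{\,j_1 + j_2 + r/2}\; q^{\,j_2 - j_1 + r/2}\;
  \prod_i u_i^{F_i},
\]
% where the trace runs over states annihilated by one supercharge,
% j_1, j_2 are the Lorentz spins, r is the R-charge, and the u_i are
% flavor fugacities. For gauge theories the index becomes a contour
% integral of elliptic Gamma functions, which is where the integral
% identities mentioned above arise.
```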
228. A Topic Modeling approach for Code Clone Detection
Khan, Mohammed Salman, 01 January 2019
This thesis describes the potential benefits of Latent Dirichlet Allocation (LDA) as a technique for code clone detection. The objective is to propose a language-independent, effective, and scalable approach for identifying similar code fragments in relatively large software systems. The main assumption is that the latent topic structure of software artifacts gives an indication of the presence of code clones: artifacts with similar topic distributions can be hypothesized to contain duplicated code fragments. To test this hypothesis, an experimental investigation using multiple datasets from various application domains was conducted. In addition, CloneTM, an LDA-based working prototype for code clone detection, was developed. Results showed that, if calibrated properly, topic modeling can deliver satisfactory performance in capturing different types of code clones, with particularly good performance in detecting Type III clones. CloneTM also achieved levels of performance comparable to existing practical tools that adopt different clone detection strategies.
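A minimal sketch of the underlying idea: infer a topic distribution for each code fragment with LDA, then rank fragment pairs by distributional similarity. The toy fragments, tokenization, and similarity measure below are illustrative assumptions; CloneTM's actual pipeline is more involved.

```python
# Hedged sketch: LDA topic distributions over code fragments, compared
# with the Hellinger distance to surface candidate clone pairs.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

fragments = [
    "for index in range total add price index apply discount total",
    "for index in range sum add cost index apply discount sum",
    "open file read lines parse header build index close file",
]
X = CountVectorizer().fit_transform(fragments)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
theta = lda.fit_transform(X)          # per-fragment topic distributions

def hellinger(a, b):
    return np.sqrt(0.5 * np.sum((np.sqrt(a) - np.sqrt(b)) ** 2))

# Lower distance suggests a candidate clone pair.
print("frag0 vs frag1:", hellinger(theta[0], theta[1]))  # expected closer
print("frag0 vs frag2:", hellinger(theta[0], theta[2]))  # expected farther
```

On a real corpus the vectorizer would run over lexer tokens or identifiers rather than plain words, and the number of topics and the similarity threshold are exactly the calibration knobs the results above refer to.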
229. A Predictive Modeling System: Early identification of students at-risk enrolled in online learning programs
Fonti, Mary L., 01 January 2015
Predictive statistical modeling shows promise for accurately predicting the academic performance of students enrolled in online programs, and has proven effective in identifying at-risk students so that instructors can provide instructional intervention. While the potential benefits of statistical modeling are significant, implementations have proven complex, costly, and difficult to maintain. To address these issues, the purpose of this study is to develop a fully integrated, automated predictive modeling system (PMS) that is flexible, easy to use, and portable, and that identifies students who are potentially at risk of not succeeding in a course in which they are currently enrolled. Dynamic and static variables from a student system (edX) will be analyzed to predict the academic performance of an individual student or an entire class. The PMS framework will include development of an open-source Web application, an application programming interface (API), and SQL Server Reporting Services (SSRS) reports. The model is based on the knowledge discovery in databases (KDD) approach, using inductive logic programming (ILP) to analyze student data. This alternative approach to predicting academic performance has several unique advantages over predictive modeling techniques currently in use and is a promising new direction in educational research.
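As a hedged illustration of the predictive core such a system needs, the sketch below fits a simple classifier over static and dynamic student variables and flags at-risk students. The feature names and data are invented stand-ins for the edX variables, and the thesis itself builds on a KDD/ILP approach rather than this model.

```python
# Hedged sketch: a baseline at-risk classifier over illustrative
# engagement and background features (all synthetic assumptions).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n = 1000
logins_per_week = rng.poisson(4, n)                  # dynamic variable
assignments_submitted = rng.binomial(10, 0.7, n)     # dynamic variable
prior_gpa = np.clip(rng.normal(3.0, 0.5, n), 0, 4)   # static variable
X = np.column_stack([logins_per_week, assignments_submitted, prior_gpa])

# Synthetic label: failure risk rises as engagement and GPA fall.
logit = (3.0 - 0.3 * logins_per_week - 0.2 * assignments_submitted
         - 0.5 * prior_gpa)
fail = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = LogisticRegression().fit(X, fail)
p_at_risk = model.predict_proba(X)[:, 1]
print("students flagged (p > 0.5):", int((p_at_risk > 0.5).sum()))
```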
230. On Cluster Robust Models
Santiago Calderón, José Bayoán, 01 January 2019
Cluster-robust models are a class of statistical models that estimate parameters while allowing for potential heterogeneity in treatment effects. Absent heterogeneity in treatment effects, the partial and average treatment effects coincide. When treatment effects are heterogeneous, the average treatment effect is a function of the various partial treatment effects and the composition of the population of interest. The first chapter explores the performance of common estimators as a function of the presence of heterogeneity in treatment effects and of other characteristics that may influence their performance in estimating average treatment effects. The second chapter examines various approaches to evaluating and improving cluster structures as a way to obtain cluster-robust models. Both chapters are intended to be useful to practitioners as a how-to guide for examining and thinking about their applications and the relevant factors. Empirical examples are provided to illustrate the theoretical results, showcase potential tools, and communicate a suggested thought process.

The third chapter concerns an open-source statistical software package for the Julia language. It describes the software's functionality and technical elements, and offers a critique of, and suggestions for, statistical software development and the Julia ecosystem, drawn from my experience throughout the development of the package and related activities as an open-source and professional software developer. One goal of the paper is to make econometrics more accessible, not only through access to functionality but also through understanding of the code and the mathematics, and through transparency in implementations.
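To ground the terminology, the sketch below fits the same regression on clustered data with classical and cluster-robust standard errors. The data-generating process is an illustrative assumption, and statsmodels stands in purely for demonstration; the package discussed in the third chapter is for Julia.

```python
# Hedged sketch: classical vs. cluster-robust standard errors on data
# with within-cluster correlation (synthetic, illustrative assumptions).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(11)
n_clusters, per = 50, 20
g = np.repeat(np.arange(n_clusters), per)          # cluster labels
cluster_effect = rng.normal(0, 1, n_clusters)[g]   # shared within cluster
x = rng.normal(size=n_clusters * per) + 0.5 * cluster_effect
y = 1.0 + 0.5 * x + cluster_effect + rng.normal(size=n_clusters * per)

X = sm.add_constant(x)
ols = sm.OLS(y, X).fit()
crse = sm.OLS(y, X).fit(cov_type="cluster", cov_kwds={"groups": g})
print("classical SE:", ols.bse[1].round(4))
print("cluster SE:  ", crse.bse[1].round(4))   # typically larger here
```

When the regressor varies with the cluster effect, ignoring the clustering understates uncertainty, which is the basic motivation for the models the thesis studies.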