31

Bootstrap-based inference for Cox's proportional hazards analyses of clustered censored survival data

Xiao, Yongling, 1972- January 2005 (has links)
Background. Clustering of observations occurs frequently in epidemiological and clinical studies of time-to-event outcomes. However, only a few papers have addressed the challenge of accounting for clustering while analyzing right-censored survival data. I propose two bootstrap-based approaches to correct the standard errors of Cox's proportional hazards (PH) model estimates for clustering, and validate the approaches in simulations. / Methods. Both bootstrap-based approaches involve two stages of resampling the original data. The two methods share the same procedure at the first stage but employ different procedures at the second stage. At the first stage of both methods, the clusters (e.g. physicians) are resampled with replacement. At the second stage, one method resamples individual patients with replacement within each physician (i.e. units within cluster) selected at the first stage, while the other method keeps all the patients of each selected physician, without resampling. For both methods, each of the resulting bootstrap samples is then independently analyzed with the standard Cox PH model, and the standard errors (SE) of the regression parameters are estimated as the empirical standard deviation of the corresponding estimates. Finally, 95% confidence intervals (CI) for the estimates are constructed using the bootstrap-based SE and assuming normality. / Simulation design. I simulated a hypothetical study with N patients clustered within the practices of M physicians. Individual patients' times-to-event were generated from the exponential distribution with hazard conditional on (i) several patient-level variables, (ii) several cluster-level (physician) variables, and (iii) physician "random effects". Random right censoring was applied. Simulated data were analyzed using four approaches: the two proposed bootstrap methods, the standard Cox PH model, and the "classic" one-step bootstrap with direct resampling of patients. / Results. The standard Cox model and the "classic" one-step bootstrap under-estimated the variance of the regression coefficients, leading to serious inflation of type I error rates and coverage rates of the 95% CI as low as 60-70%. In contrast, the proposed approach that resamples both physicians and patients within physicians, with 100 bootstrap resamples, resulted in slightly conservative estimates of standard errors, which yielded type I error rates between 2% and 6% and coverage rates between 94% and 99%. / Conclusions. The proposed bootstrap approach offers an easy-to-implement method to account for the interdependence of times-to-event in inference about Cox model regression parameters in analyses of right-censored clustered data.
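The two resampling schemes lend themselves to a short sketch. The snippet below is a minimal illustration of the two-stage cluster bootstrap described in the Methods paragraph; it assumes a patient-level data frame with columns "physician_id", "time", and "event" plus covariates, and uses the lifelines package for the Cox fit purely for concreteness. Both the column names and the package choice are assumptions for illustration, not taken from the thesis.

```python
# Minimal sketch of the two-stage cluster bootstrap (assumed column names;
# lifelines stands in for any Cox PH fitting routine).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter  # assumed Cox PH implementation


def fit_cox(df: pd.DataFrame) -> np.ndarray:
    """Fit a Cox PH model to 'time'/'event' plus covariate columns and return its coefficients."""
    cph = CoxPHFitter()
    cph.fit(df, duration_col="time", event_col="event")
    return cph.params_.to_numpy()


def two_stage_bootstrap_se(data: pd.DataFrame, resample_within: bool = True,
                           B: int = 100, seed: int = 0) -> np.ndarray:
    """Bootstrap SEs for Cox PH coefficients under clustering by physician."""
    rng = np.random.default_rng(seed)
    clusters = data["physician_id"].unique()
    coefs = []
    for _ in range(B):
        # Stage 1: resample physicians (clusters) with replacement.
        chosen = rng.choice(clusters, size=len(clusters), replace=True)
        pieces = []
        for c in chosen:
            patients = data[data["physician_id"] == c]
            if resample_within:
                # Stage 2, variant 1: also resample patients within the selected cluster.
                patients = patients.sample(n=len(patients), replace=True,
                                           random_state=int(rng.integers(1 << 31)))
            # Variant 2 simply keeps every patient of the selected physician.
            pieces.append(patients)
        boot = pd.concat(pieces, ignore_index=True).drop(columns=["physician_id"])
        coefs.append(fit_cox(boot))
    # Bootstrap SE = empirical standard deviation of the coefficient estimates.
    return np.std(np.vstack(coefs), axis=0, ddof=1)
```

Switching `resample_within` between True and False gives the two second-stage variants compared in the abstract.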
32

Effects of sparse follow-up on marginal structural models for time-to-event data

Mojaverian, Nassim January 2012 (has links)
Background: Survival time is a common parameter of interest that can be estimated using Cox proportional hazards models when time is measured continuously. An alternative way to estimate hazard ratios is to divide time into equal-length intervals and set the by-interval outcome to 0 if the person is alive during the interval and 1 otherwise. In this discrete-time approximation, instead of using a Cox model, one performs pooled logistic regression, which gives unbiased estimates under the assumption of a low death rate per interval. This assumption is more plausible when shorter intervals are used, so that each interval contains few events; however, shorter intervals introduce missing values, because actual visits occur less frequently in a survival setting, and these missing values must be accounted for. Objective: We investigate the effect of two methods of filling in missing data, last observation carried forward (LOCF) and multiple imputation (MI), as well as an available-case analysis. We compare these three approaches to complete-data analysis. Methods: Weighted pooled logistic regression is used to estimate the causal marginal treatment effect. Complete data were generated using Young's algorithm to obtain monthly information for all individuals, and from the complete data, observed data were selected by assuming that follow-up visits occurred every six or three months. Thus, to analyze the observed data at a monthly level, we performed LOCF and MI to fill in the missing values and compared the results to those from a completely observed data analysis. We also included an analysis of the observed data without any imputation. We then applied these methods to the Canadian Co-infection Cohort to estimate the impact of alcohol consumption on liver fibrosis. Results: In most simulations, MI produced the least biased and least variable estimators, even outperforming analyses based on completely observed data. In the presence of stronger confounding, MI-based estimators were more biased but nevertheless less variable than the estimators based on completely observed data. Conclusion: Multiple imputation is superior to last observation carried forward and to observed-data analysis when marginal structural models are used to adjust for time-varying exposure and covariates in the context of survival analysis with missing or infrequently measured data.
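To make the discrete-time setup concrete, the sketch below expands each subject's follow-up into person-month records and applies LOCF to a covariate measured only at actual visits. The column names, the monthly time scale, and the helper structure are illustrative assumptions; the inverse-probability weighting and the pooled logistic fit of the marginal structural model are left out.

```python
# Sketch of the discrete-time setup: person-month expansion plus LOCF
# (illustrative column names; weighting and the pooled logistic fit omitted).
import pandas as pd


def expand_person_months(subjects: pd.DataFrame) -> pd.DataFrame:
    """subjects: one row per person with 'id', 'followup_months', 'event' (0/1)."""
    rows = []
    for _, s in subjects.iterrows():
        months = int(s["followup_months"])
        for m in range(1, months + 1):
            rows.append({"id": s["id"], "month": m,
                         # outcome is 1 only in the interval where the event occurs
                         "y": int(s["event"] == 1 and m == months)})
    return pd.DataFrame(rows)


def locf(expanded: pd.DataFrame, visits: pd.DataFrame, covariate: str) -> pd.DataFrame:
    """visits: rows ('id', 'month', covariate) only at actual visit months
    (e.g. every three or six months); carry each value forward within person."""
    out = expanded.merge(visits, on=["id", "month"], how="left")
    out[covariate] = out.groupby("id")[covariate].ffill()
    return out
```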
33

A study of non-collapsibility of the odds ratio via marginal structural and logistic regression models

Pang, Menglan January 2012 (has links)
Background: It has been noted in epidemiology and biostatistics that when the odds ratio (OR) is used to measure the causal effect of a treatment or exposure, there is a discrepancy between the marginal OR and the conditional OR even in the absence of confounding. This is known as non-collapsibility of the OR. It is sometimes described (incorrectly) as a bias in the estimated treatment effect from a logistic regression model when an important covariate is omitted. Objectives: To distinguish confounding bias from non-collapsibility and to measure the non-collapsibility effect on the OR in different scenarios. Methods: We used marginal structural models and standard logistic regression to measure the non-collapsibility effect and confounding bias. An analytic approach is proposed to assess the non-collapsibility effect in a point-exposure study. This approach can be used to verify the conditions for the absence of non-collapsibility and to examine the phenomenon of confounding without non-collapsibility. A graphical approach is employed to show the relationship between the non-collapsibility effect and the baseline risk or the marginal outcome probability, and it reveals the non-collapsibility behaviour over a range of exposure and covariate effects. To explore the non-collapsibility effect of the OR in the presence of time-varying confounding, an observational cohort study was simulated. Results and Conclusion: The total difference between the conditional and crude effects can be decomposed into the sum of the non-collapsibility effect and the confounding bias. We provide a general formula for expressing the non-collapsibility effect under different scenarios. Our analytic approach gave results similar to related formulae in the literature. Various interesting observations about non-collapsibility can be made from the different scenarios, with or without confounding, using the graphical approach. Somewhat surprisingly, the effect of the covariate plays a more important role in the non-collapsibility effect than does the effect of the exposure. In the presence of time-varying confounding, the non-collapsibility effect is comparable to that in the point-exposure study.
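A small worked example makes the point-exposure case concrete. The numbers below are illustrative rather than taken from the thesis: a balanced binary covariate Z is independent of the exposure X, so there is no confounding, yet averaging the conditional risks over Z yields a marginal OR noticeably smaller than the common conditional OR of 2.

```python
# Numeric illustration of non-collapsibility of the OR (illustrative values):
# no confounding, identical conditional OR in both strata of Z, attenuated marginal OR.
import numpy as np
from scipy.special import expit

beta_x, beta_z = np.log(2.0), 3.0   # conditional OR for X is exactly 2 in each stratum of Z
p_z = 0.5                           # P(Z = 1), independent of X -> no confounding


def risk(x: int) -> float:
    # marginal outcome probability under exposure level x, averaging over Z
    return p_z * expit(-1.0 + beta_x * x + beta_z) + (1 - p_z) * expit(-1.0 + beta_x * x)


def odds(p: float) -> float:
    return p / (1 - p)


marginal_or = odds(risk(1)) / odds(risk(0))
print(f"conditional OR = 2.00, marginal OR = {marginal_or:.2f}")  # about 1.57, despite no confounding
```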
34

Probabilistic models of the natural history of multiple sclerosis

Wolfson, Christina, 1955- January 1984 (has links)
Variability in the clinical course of multiple sclerosis is one of its most interesting and yet discouraging features. Research into patient and disease characteristics that might predict the future disease course has produced conflicting results. / This thesis presents an original approach in which the course of the disease is described by the movement of patients through well-defined disease states. Two probabilistic models, the semi-Markov model and the stochastic survival model, are proposed to evaluate the effect of prognostic factors on transitions from state to state. / The feasibility and applicability of both models are evaluated using data on the course of multiple sclerosis in 278 diagnosed patients from l'Hôpital Neurologique de Lyon who were followed from 1956 to 1976. The stochastic survival model is found to be the more appropriate of the two. A short first remission, male sex, and multiple symptoms at onset are found to increase the risk of a poor prognosis.
35

Ontological characterization of high through-put biological data

Iacucci, Ernesto January 2005 (has links)
A consequence of high-throughput experimentation is the need to summarize and profile results in a meaningful and comparative form. Such experimentation often produces a set of distinguished genes; for example, this distinguished set may correspond to a cluster of co-expressed genes over many conditions or to a set of genes from a large-scale yeast two-hybrid study. Understanding the biological relevance of this set involves annotating the genes and then investigating the properties shared among these annotations. While the set of distinguished genes might have hundreds of annotations associated with it, only a portion of these annotations will represent meaningful aspects of the experiment. Identification of the meaningful aspects can be focused by applying a statistic to an annotation resource. One such annotation resource is the Gene Ontology (GO), a controlled vocabulary that hierarchically structures annotation terms (classifications) onto which genes can be mapped. Given a distinguished set of genes and a classification, we wish to determine whether the number of distinguished genes mapped to that classification is significantly greater or less than would be expected by chance. In estimating these probabilities, researchers have employed the hypergeometric model under differing frameworks. Assumptions made in these frameworks have ignored key issues regarding the mapping of genes to GO and have resulted in inaccurate p-values. Here we show how dynamic programming can be used to compute exact p-values for enrichment or depletion of a particular GO classification. This removes the need to approximate the statistics or p-values, as has been common practice. We apply our methods to a dataset describing labour and compare p-values based on exact and approximate computations of several different statistics for measuring enrichment. We find significant disagreement between the commonly employed approximate computations and the exact ones.
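For reference, the snippet below spells out the standard hypergeometric enrichment test that these frameworks build on; the exact dynamic-programming computation proposed in the thesis is not reproduced here, and the counts are illustrative.

```python
# Standard hypergeometric enrichment/depletion test for one GO classification
# (illustrative counts; this is the baseline the thesis improves upon).
from scipy.stats import hypergeom

M = 6000   # annotated genes in the background
n = 150    # background genes mapped to the GO classification of interest
N = 80     # size of the distinguished gene set
k = 9      # distinguished genes mapped to that classification

# P(X >= k): probability of seeing at least k mapped genes by chance
p_enrich = hypergeom.sf(k - 1, M, n, N)
# P(X <= k): the corresponding depletion tail
p_deplete = hypergeom.cdf(k, M, n, N)
print(f"enrichment p = {p_enrich:.3g}, depletion p = {p_deplete:.3g}")
```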
36

Generalized models in epidemiology research

Benedetti, Andrea January 2004 (has links)
Traditionally, epidemiologists have used methods that categorize or assume a linear or log-linear form to model dose-response associations between continuous independent variables and binary or continuous outcomes. Recent advances in both statistical methodology and computing resources have made it possible to model relationships of greater complexity. Generalized additive models (GAMs) are a flexible nonparametric modelling tool that allows the user to model a variety of non-linear dose-response curves without imposing a priori assumptions about the functional form of the relationship. In GAMs, the extent of smoothing is controlled by the user-defined degrees of freedom (df). GAMs are generally used to: (i) suggest the parametric functional form for the association of interest; (ii) model the main effect nonparametrically; and (iii) control confounding by continuous covariates. By way of a series of simulation studies, this thesis addresses several unresolved methodological issues involving all three of these uses. Although GAMs have been used to detect and estimate thresholds in the association of interest, the methods have been mostly subjective or ad hoc, and their statistical properties have, for the most part, not been evaluated. In the first simulation study, a formal approach to the use of GAMs for this purpose is suggested and compared with simpler approaches. When GAMs are used to estimate the effect of the primary exposure of interest, different approaches to determining the amount of smoothing are employed. In the second simulation study, the impact on statistical inference of various a priori and automatic df-selection strategies is investigated, and a method to correct the type I error is introduced and evaluated. / In the final simulation study, parametric multiple logistic regression was compared with its nonparametric GAM extension in their ability to control for a continuous confounding variable, and several issues related to the implementation of GAMs in this context are investigated. / The results of these simulations will help researchers make optimal use of the potential advantages of flexible, assumption-free modelling.
37

Generalized linear mixed models for binary outcome data with a low proportion of occurrences

Beauchamp, Marie-Eve January 2010 (has links)
Many studies in epidemiology and other fields such as econometrics and the social sciences give rise to correlated outcome data (e.g., longitudinal studies, meta-analyses, and multi-centre studies). Parameter estimation for generalized linear mixed models (GLMMs), which are frequently used to perform inference on correlated binary outcomes, is complicated by intractable integrals in the marginal likelihood. Penalized quasi-likelihood (PQL) and maximum likelihood estimation in conjunction with numerical integration via adaptive Gauss-Hermite quadrature (AGHQ) are estimation methods commonly used in practice. However, the assessment of the performance of these estimation methods in settings found in practice is incomplete, particularly for binary outcome data with a low proportion of occurrences. / To begin with, I considered graphical representations of the distributions of cluster-specific log odds of outcome arising from random-intercepts logistic models (RILMs), converted to the probability scale with the inverse logit transformation. RILMs are special cases of GLMMs. These representations help convey the implications of RILM parameter values for the distributions of cluster-specific probabilities of outcome. The correspondence of these distributions with beta distributions, which are also used in random-effects models for binary outcomes, was assessed graphically, and generally good agreement was found. / Next, I evaluated via a simulation study the performance of the PQL and AGHQ methods in several realistic settings of binary outcome data with a low proportion of occurrences. Different features determining the number of occurrences were considered (number of clusters, cluster size, and probabilities of outcome). The AGHQ method produced nearly unbiased fixed-effects estimates, even in challenging settings with low proportions of occurrences or a small sample size, but its mean square errors tended to be larger than those of PQL for small datasets. Both methods produced biased variance-component estimates when the number of clusters was moderate, especially with rarer occurrences. / Finally, through further analysis of the simulation results, I assessed whether a number of indicators quantifying different aspects of the rarity of events in a dataset, all measurable in practice, could explain patterns of bias in the parameter estimates. The selected rarity indicators quantify the overall number of events and their distribution across the clusters.
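The first part of the abstract can be illustrated in a few lines: given assumed RILM parameter values (illustrative, not the thesis's), the inverse logit maps the normally distributed cluster-specific log odds onto the implied distribution of cluster-specific outcome probabilities.

```python
# Sketch: distribution of cluster-specific outcome probabilities implied by a
# random-intercepts logistic model (illustrative parameter values).
import numpy as np
from scipy.special import expit

beta0 = -3.0     # fixed intercept: low overall proportion of occurrences
sigma_b = 1.0    # standard deviation of the random intercepts

rng = np.random.default_rng(0)
b = rng.normal(0.0, sigma_b, size=100_000)   # cluster-specific random intercepts
p_cluster = expit(beta0 + b)                 # cluster-specific probabilities of outcome

print(f"median cluster probability: {np.median(p_cluster):.3f}")
print(f"2.5th-97.5th percentile range: "
      f"{np.percentile(p_cluster, 2.5):.3f} to {np.percentile(p_cluster, 97.5):.3f}")
```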
38

Meta-analysis of multiple outcomes : fundamentals and applications

Ishak, Khajak. January 2006 (has links)
Meta-analyses often consider the effect of a treatment on multiple, possibly related outcomes. Typically, summary estimates are derived from outcome-specific meta-analyses. Alternatively, a joint analysis can be conducted with a multivariate meta-analysis model, which also quantifies the correlation between the outcomes. This dissertation presents findings from analyses examining the accuracy of the multivariate approach and its application to meta-analyses of longitudinal studies. / Correlations measured in multivariate meta-analysis models can provide added insight about the treatment and the disease. To be useful, however, the measured correlations must reflect the underlying biological relationships between treatment effects. I demonstrate, however, that correlations measured across studies may often be distorted by within-study associations between the endpoints, or by random errors that affect the outcomes within a study in a similar way. Thus, correlations measured in multivariate meta-analyses can be misleading. / To properly weight the contribution of each study, the variance of the sampling distribution of each estimate is fixed at its observed value. In the multivariate case, however, the sampling distribution also involves covariances between the effect estimates for the different outcomes. These are rarely available and must therefore be approximated from external information. I evaluated the impact of errors in these approximations on estimates of the model parameters in a simulation study. Summary effects and heterogeneity were estimated accurately, but the correlation parameter was prone to possibly large biases and lacked precision. / Longitudinal studies often report treatment effects measured at different times. Multivariate meta-analyses can account for the correlations inherent to this type of data. Alternatively, random effects can be specified to capture the marginal correlations. I contrasted these approaches and the standard time-specific meta-analysis approach using data from a review of studies of deep brain stimulation. Multivariate models, and to a slightly lesser extent the random-effects models, provided better fit and more precise estimates in the interval with the fewest observations. These models were also less affected by an apparently outlying observation. This suggests a potential borrowing of information from estimates at other times. / This work builds on research about the potential advantages and limitations of joint meta-analyses of multiple outcomes.
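The distortion argument can be illustrated with a small simulation (all values are assumptions for illustration): the true study-level effects on two outcomes are generated independently, yet correlated within-study sampling errors alone induce a sizeable correlation across the observed estimates.

```python
# Simulation sketch: within-study error correlation distorts the correlation
# measured across study-level effect estimates (illustrative values).
import numpy as np

rng = np.random.default_rng(1)
n_studies = 200
tau = 0.3          # between-study SD of the true effects (independent across outcomes)
se = 0.4           # within-study standard error of each estimate
rho_within = 0.8   # correlation of the two estimates' errors within a study

true_effects = rng.normal(0.5, tau, size=(n_studies, 2))      # uncorrelated by construction
err_cov = se**2 * np.array([[1.0, rho_within], [rho_within, 1.0]])
errors = rng.multivariate_normal([0.0, 0.0], err_cov, size=n_studies)
observed = true_effects + errors

print(f"correlation of true effects:       {np.corrcoef(true_effects.T)[0, 1]:+.2f}")
print(f"correlation of observed estimates: {np.corrcoef(observed.T)[0, 1]:+.2f}")
# the observed correlation is roughly rho_within * se**2 / (tau**2 + se**2), here about 0.5
```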
39

The effect of assigning different index dates for control exposure measurement on odds ratio estimates

Lundy, Erin January 2012 (has links)
In case-control studies it is reasonable to consider a case's exposure history prior to disease onset. For the controls, it is then necessary to define comparable periods of exposure opportunity. Motivated by data from a case-control study of the environmental risk factors for multiple sclerosis, we propose a control-to-case matching algorithm that assigns pseudo ages at onset (index ages) to the controls. Based on a simulation study, we conclude that our index-age algorithm yields greater power than the default method of assigning a control's current age as the index age, especially for moderate effects. Furthermore, we present theoretical results showing that, for binary and ordered categorical exposure variables, using an inappropriate index-age assignment method can obscure or even mask a true effect. The effect of the choice of index-age assignment method on inference about the odds ratio is highly data dependent. In contrast to the results of our simulation study, our analysis of the data from the motivating case-control study produced odds ratio and variance estimates that were very similar regardless of the method used to assign index ages.
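As a rough illustration of the idea (this is one plausible variant, not the thesis's algorithm), a control's index age could be drawn from the cases' observed ages at onset, restricted to ages the control has already attained, rather than defaulting to the control's current age.

```python
# Hypothetical index-age assignment: sample each control's pseudo age at onset
# from the cases' onset-age distribution, truncated at the control's current age.
import numpy as np


def assign_index_ages(case_onset_ages: np.ndarray,
                      control_current_ages: np.ndarray,
                      seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    index_ages = np.empty_like(control_current_ages, dtype=float)
    for i, current_age in enumerate(control_current_ages):
        eligible = case_onset_ages[case_onset_ages <= current_age]
        # fall back to the current age when no case onset age is young enough
        index_ages[i] = rng.choice(eligible) if eligible.size else current_age
    return index_ages

# Each control's exposure history would then be measured only up to the
# assigned index age, mirroring the cases' pre-onset exposure windows.
```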
40

Selected Topics on Statistical Methods for Three and Multiple Class Diagnostic Studies

Dong, Tuochuan 18 September 2014 (has links)
Many disease processes, such as Alzheimer's disease, have three ordinal disease classes: the non-diseased stage, the early diseased stage, and the fully diseased stage. Since the early diseased stage is likely the best time window for treatment interventions, it is important to have diagnostic tests with good ability to discriminate the early diseased stage from the other two. We present both parametric and non-parametric approaches for confidence interval estimation of the probability of detecting the early diseased stage (sensitivity to the early stage), given the true classification rates for the non-diseased group (specificity) and the fully diseased group (sensitivity to full disease). Similar parametric and non-parametric approaches are also proposed for estimating a confidence interval for the difference between the sensitivities to the early diseased stage of two markers. Semi-parametric confidence intervals for the sensitivity to the early diseased stage, based on the empirical likelihood approach, are also proposed, and one of them is shown to outperform existing methods in some distribution settings. The AUC and the Youden index are the best-known diagnostic measures for diseases with a binary outcome. Both have been generalized to disease processes with three or more stages. We propose a new measure that can be applied naturally to any k-class disease (k ≥ 2). The geometric and probabilistic interpretation of the new measure is illustrated, and comparisons between the new measure and the extensions of the AUC and the Youden index are examined through a power study. Moreover, the new measure can also be used as a criterion to select optimal cut-off points, and its performance is compared with the generalized Youden index criterion through a simulation study for three-class and four-class diseases.
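As a concrete illustration of the quantity discussed in the first half of the abstract, the sketch below assumes normally distributed marker values in the three ordinal classes (parameters are illustrative) and computes the sensitivity to the early diseased stage implied by fixed values of the specificity and the sensitivity to full disease; the thesis's confidence-interval procedures are not reproduced.

```python
# Parametric sketch of sensitivity to the early diseased stage under a
# three-class normal model (illustrative parameters).
from scipy.stats import norm

# class-specific distributions of the diagnostic marker
mu0, sd0 = 0.0, 1.0   # non-diseased
mu1, sd1 = 1.5, 1.0   # early diseased
mu2, sd2 = 3.0, 1.0   # fully diseased

spec = 0.90           # fixed specificity: non-diseased classified below c1
sens_full = 0.80      # fixed sensitivity to full disease: classified above c2

c1 = norm.ppf(spec, mu0, sd0)            # lower cut-point
c2 = norm.ppf(1 - sens_full, mu2, sd2)   # upper cut-point

# sensitivity to the early diseased stage: P(c1 < marker < c2 | early stage)
sens_early = norm.cdf(c2, mu1, sd1) - norm.cdf(c1, mu1, sd1)
print(f"c1 = {c1:.2f}, c2 = {c2:.2f}, sensitivity to early stage = {sens_early:.3f}")
```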
