31 |
Comparing survival from cancer using population-based cancer registry data - methods and applications. Yu, Xue Qin, January 2007.
Doctor of Philosophy / Over the past decade, population-based cancer registry data have been used increasingly worldwide to evaluate and improve the quality of cancer care. The utility of the conclusions from such studies relies heavily on the data quality and the methods used to analyse the data. Interpretation of comparative survival from such data, examining either temporal trends or geographical differences, is generally not easy. The observed differences could be due to methodological and statistical approaches or to real effects. For example, geographical differences in cancer survival could be due to a number of real factors, including access to primary health care, the availability of diagnostic and treatment facilities and the treatment actually given, or to artefact, such as lead-time bias, stage migration, sampling error or measurement error. Likewise, a temporal increase in survival could be the result of earlier diagnosis and improved treatment of cancer; it could also be due to artefact after the introduction of screening programs (adding lead time), changes in the definition of cancer, stage migration or several of these factors, producing both real and artefactual trends.
In this thesis, I report methods that I modified and applied, some technical issues in the use of such data, and an analysis of data from the State of New South Wales (NSW), Australia, illustrating their use in evaluating and potentially improving the quality of cancer care and showing how data quality might affect the conclusions of such analyses. This thesis describes studies of comparative survival based on population-based cancer registry data, with three published papers and one accepted manuscript (subject to minor revision).
In the first paper, I describe a modified method for estimating spatial variation in cancer survival using empirical Bayes methods (published in Cancer Causes and Control, 2004). I demonstrate in this paper that the empirical Bayes method is preferable to standard approaches and show how it can be used to identify cancer types where a focus on reducing area differentials in survival might lead to important gains in survival.
In the second paper (published in the European Journal of Cancer, 2005), I apply this method to a more complete analysis of spatial variation in survival from colorectal cancer in NSW and show that estimates of spatial variation in colorectal cancer can help to identify subgroups of patients for whom better application of treatment guidelines could improve outcomes. I also show how estimates of the numbers of lives that could be extended might assist in setting priorities for treatment improvement.
In the third paper, I examine time trends in survival from 28 cancers in NSW between 1980 and 1996 (published in the International Journal of Cancer, 2006) and conclude that for many cancers, falls in excess deaths in NSW from 1980 to 1996 are unlikely to be attributable to earlier diagnosis or stage migration; thus, advances in cancer treatment have probably contributed to them.
In the accepted manuscript, I describe an extension of the work reported in the second paper, investigating the accuracy of staging information recorded in the registry database and assessing the impact of error in its measurement on estimates of spatial variation in survival from colorectal cancer. The results indicate that misclassified registry stage can have an important impact on estimates of spatial variation in stage-specific survival from colorectal cancer.
Thus, if cancer registry data are to be used effectively in evaluating and improving cancer care, the quality of stage data might have to be improved. Taken together, the four papers show that creative, informed use of population-based cancer registry data, with appropriate statistical methods and acknowledgement of the limitations of the data, can be a valuable tool for evaluating and possibly improving cancer care. Use of these findings to stimulate evaluation of the quality of cancer care should enhance the value of the investment in cancer registries. They should also stimulate improvement in the quality of cancer registry data, particularly that on stage at diagnosis. The methods developed in this thesis may also be used to improve estimation of geographical variation in other count-based health measures when the available data are sparse.
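For illustration only, the shrinkage idea behind such empirical Bayes estimates of area variation can be sketched with a simple Poisson-gamma model. The area counts, the crude method-of-moments prior fit and the code below are invented for the example; they are not the model or data used in the thesis.

```python
# Minimal empirical Bayes shrinkage sketch for area-level death ratios (hypothetical data).
import numpy as np

observed = np.array([4.0, 40.0, 7.0, 60.0, 2.0, 18.0])    # observed deaths per area (invented)
expected = np.array([6.0, 25.0, 9.0, 30.0, 3.5, 15.0])    # expected deaths under state-wide rates

raw_ratio = observed / expected                            # crude area-specific relative risk of death

# Crude method-of-moments fit of a gamma prior for the true ratios (Poisson-gamma model)
mean_ratio = np.average(raw_ratio, weights=expected)
var_ratio = np.average((raw_ratio - mean_ratio) ** 2, weights=expected)
between_var = max(var_ratio - mean_ratio / expected.mean(), 1e-6)  # rough between-area variance
alpha = mean_ratio ** 2 / between_var                      # gamma shape
beta = mean_ratio / between_var                            # gamma rate

# Posterior mean shrinks each crude ratio toward the overall mean;
# areas with small expected counts are shrunk the most.
eb_ratio = (observed + alpha) / (expected + beta)

for i, (crude, eb) in enumerate(zip(raw_ratio, eb_ratio)):
    print(f"area {i}: crude ratio {crude:.2f} -> empirical Bayes ratio {eb:.2f}")
```

The point of the sketch is the behaviour, not the numbers: sparse areas are pulled strongly toward the overall mean, which is what stabilises comparisons based on small counts.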
|
32 |
Influence, information and item response theory in discrete data analysis. Magis, David, 04 May 2007.
The main purpose of this thesis is to consider standard statistical tests for discrete data and to present some recent developments around them. The contents are divided into three parts.
In the first part we consider the general issue of misclassification and its impact on the results of these standard tests. A proposed diagnostic examination of the misclassification process leads to simple, direct investigation tools for determining whether conclusions are highly sensitive to classification errors. An additional probabilistic approach is presented to refine the discussion in terms of the risk of reaching contradictory conclusions when data are misclassified.
In the second part we propose a general approach to the issue of multiple sub-testing procedures. In particular, when the null hypothesis is rejected, we show that the usual practice of re-applying the test to selected parts of the data can lead to inconsistencies. The method we discuss is based on the concept of decisive subsets, defined as the smallest sets of categories sufficient to reject the null hypothesis, whatever the counts in the remaining categories. In this framework, we present an iterative, step-by-step detection process based on successive interval building and comparison of category counts, and several examples highlight the gain our method brings over classical approaches.
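As a rough illustration of the decisive-subset idea (not the author's exact algorithm), the sketch below checks, for a chi-square goodness-of-fit test with invented counts, which small sets of categories already force rejection no matter how the remaining observations are distributed. Integer constraints on the remaining counts are ignored and the degrees of freedom are kept at the full test's value for simplicity.

```python
# Hypothetical search for the smallest "decisive" subsets in a chi-square goodness-of-fit test.
import numpy as np
from itertools import combinations
from scipy.stats import chi2

def min_chisq_given_subset(counts, probs, subset, n):
    """Smallest possible chi-square statistic when the counts in `subset` are fixed and
    the remaining n - n_S observations are spread over the other categories."""
    subset = list(subset)
    rest = [k for k in range(len(counts)) if k not in subset]
    n_rest = n - sum(counts[k] for k in subset)
    p_rest = sum(probs[k] for k in rest)
    stat = sum((counts[k] - n * probs[k]) ** 2 / (n * probs[k]) for k in subset)
    # The remaining cells contribute least when allocated proportionally to their
    # expected probabilities (continuous relaxation, ignoring integer counts).
    for k in rest:
        e_k = n * probs[k]
        o_k = n_rest * probs[k] / p_rest
        stat += (o_k - e_k) ** 2 / e_k
    return stat

counts = np.array([40, 25, 20, 15])           # hypothetical observed counts
probs = np.array([0.25, 0.25, 0.25, 0.25])    # hypothesised probabilities
n = counts.sum()
crit = chi2.ppf(0.95, df=len(counts) - 1)

# Search subsets from smallest to largest; report the first decisive size found.
for size in range(1, len(counts)):
    decisive = [s for s in combinations(range(len(counts)), size)
                if min_chisq_given_subset(counts, probs, s, n) > crit]
    if decisive:
        print(f"smallest decisive subsets (size {size}): {decisive}")
        break
```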
The third and last part is devoted to item response theory, a field of psychometrics. After a short introduction to the topic, we first propose two enhanced iterative estimators of proficiency. Several theoretical properties and simulation results indicate that these methods improve on the usual Bayesian estimators, notably in terms of bias. Furthermore, we study the link between response-pattern misfit and a subject's variability (the latter treated as an individual latent trait). More precisely, we present maximum-likelihood-based joint estimators of the subject parameters (ability and variability); several simulations suggest that the enhanced estimators also offer substantial gains over the classical ones, mainly in terms of bias.
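For context, a minimal sketch of the standard ability estimators that such enhancements are usually compared against (maximum likelihood and expected a posteriori under a two-parameter logistic model) is given below. The item parameters and responses are invented, and this is not the thesis's code.

```python
# Standard ML and EAP ability estimation under a 2PL model with known item parameters.
import numpy as np
from scipy.optimize import minimize_scalar

a = np.array([1.2, 0.8, 1.5, 1.0, 0.9])    # item discriminations (hypothetical)
b = np.array([-1.0, 0.0, 0.5, 1.0, 1.5])   # item difficulties (hypothetical)
x = np.array([1, 1, 0, 1, 0])              # one subject's 0/1 responses (hypothetical)

def p_correct(theta):
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def neg_log_lik(theta):
    p = p_correct(theta)
    return -np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))

# Maximum likelihood estimate of ability
theta_ml = minimize_scalar(neg_log_lik, bounds=(-4, 4), method="bounded").x

# Expected a posteriori (EAP) estimate with a standard normal prior, by grid quadrature
grid = np.linspace(-4, 4, 201)
like = np.array([np.exp(-neg_log_lik(t)) for t in grid])
post = like * np.exp(-grid ** 2 / 2)
theta_eap = np.sum(grid * post) / np.sum(post)

print(f"ML ability estimate:  {theta_ml:.3f}")
print(f"EAP ability estimate: {theta_eap:.3f}")
```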
|
33 |
Assessing Binary Measurement Systems. Danila, Oana Mihaela, January 2012.
Binary measurement systems (BMS) are widely used in both manufacturing industry and medicine. In industry, a BMS is often used to measure various characteristics of parts and then classify them as pass or fail, according to some quality standards. Good measurement systems are essential both for problem solving (i.e., reducing the rate of defectives) and for protecting customers from receiving defective products. As a result, it is desirable to assess the performance of the BMS as well as to separate the effects of the measurement system and the production process on the observed classifications. In medicine, BMSs are known as diagnostic or screening tests, and are used to detect a target condition in subjects, thus classifying them as positive or negative. Assessing the performance of a medical test is essential in quantifying the costs due to misclassification of patients and in preventing these errors in the future.
In both industry and medicine, the characteristics most commonly used to quantify the performance of a BMS are the two misclassification rates: the chance of passing a nonconforming (non-diseased) unit, called the consumer's risk (false positive), and the chance of failing a conforming (diseased) unit, called the producer's risk (false negative). In most assessment studies, it is also of interest to estimate the conforming (prevalence) rate, i.e., the probability that a randomly selected unit is conforming (diseased).
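As a small worked example of how these three quantities combine, the sketch below computes the overall pass rate and the probability that a passed unit is truly conforming; the rates are hypothetical and chosen only to show the calculation.

```python
# Hypothetical illustration of how the conforming rate and the two risks combine.
pi_c = 0.95     # conforming rate: P(unit is conforming)
alpha = 0.02    # consumer's risk:  P(pass | nonconforming)
beta = 0.05     # producer's risk:  P(fail | conforming)

p_pass = pi_c * (1 - beta) + (1 - pi_c) * alpha           # overall pass rate
p_conforming_given_pass = pi_c * (1 - beta) / p_pass       # "predictive value" of a pass

print(f"P(pass) = {p_pass:.4f}")                            # ~0.9035
print(f"P(conforming | pass) = {p_conforming_given_pass:.4f}")
```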
There are two main approaches to assessing the performance of a BMS. Both involve measuring a number of units one or more times with the BMS. The first, called the "gold standard" approach, requires a gold-standard measurement system that can determine the state of units with no classification errors. When a gold standard does not exist, or is too expensive or time-consuming, another option is to measure units repeatedly with the BMS and then use a latent class approach to estimate the parameters of interest. In industry, for both approaches, the standard sampling plan involves randomly selecting parts from the population of manufactured parts.
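A minimal sketch of the latent class idea in the no-gold-standard case, assuming conditional independence and a common pass probability within each class, is given below. The data are simulated and the EM updates are the textbook ones for a two-component binomial mixture, not the estimation procedure developed in the thesis.

```python
# Latent class (no gold standard) sketch: r repeated pass/fail measurements per part,
# fitted with a simple EM algorithm. All values are hypothetical.
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(1)
r = 5                                       # repeated measurements per part
true_pi, true_pass_c, true_pass_n = 0.9, 0.97, 0.10
state = rng.random(2000) < true_pi          # latent true state of each part
y = np.where(state,
             rng.binomial(r, true_pass_c, 2000),
             rng.binomial(r, true_pass_n, 2000))   # number of passes observed per part

pi, p_c, p_n = 0.5, 0.8, 0.3                # starting values
for _ in range(200):                        # EM iterations
    # E-step: posterior probability that each part is conforming
    num = pi * binom.pmf(y, r, p_c)
    den = num + (1 - pi) * binom.pmf(y, r, p_n)
    w = num / den
    # M-step: update the conforming rate and the two pass probabilities
    pi = w.mean()
    p_c = np.sum(w * y) / (r * np.sum(w))
    p_n = np.sum((1 - w) * y) / (r * np.sum(1 - w))

print(f"estimated conforming rate: {pi:.3f}")
print(f"estimated P(pass | conforming): {p_c:.3f}   (producer's risk: {1 - p_c:.3f})")
print(f"estimated P(pass | nonconforming): {p_n:.3f} (consumer's risk)")
```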
In this thesis, we focus on a specific context commonly found in the manufacturing industry. First, the BMS under study is nondestructive. Second, the BMS is used for 100% inspection or any kind of systematic inspection of the production yield. In this context, we are likely to have available a large number of previously passed and failed parts. Furthermore, the inspection system typically tracks the number of parts passed and failed; that is, we often have baseline data about the current pass rate, separate from the assessment study. Finally, we assume that during the time of the evaluation, the process is under statistical control and the BMS is stable.
Our main goal is to investigate the effect of sampling plans that randomly select parts from the available populations of previously passed and failed parts, i.e., conditional selection, on the estimation procedure and the main characteristics of the estimators. We also demonstrate the value of combining the additional information provided by the baseline data with the data collected in the assessment study to improve the overall estimation procedure, and we examine how the availability of baseline data and the use of a conditional selection sampling plan affect recommendations on the design of the assessment study.
In Chapter 2, we give a summary of the existing estimation methods and sampling plans for a BMS assessment study, in both industrial and medical settings, that are relevant to our context. In Chapters 3 and 4, we investigate the assessment of a BMS in the case where we assume that the misclassification rates are common for all conforming/nonconforming parts and that repeated measurements on the same part are independent, conditional on the true state of the part, i.e., conditional independence. We call models using these assumptions fixed-effects models. In Chapter 3, we look at the case where a gold standard is available, whereas in Chapter 4, we investigate the "no gold standard" case. In both cases, we show that using a conditional selection plan, along with the baseline information, substantially improves the accuracy and precision of the estimators, compared to the standard sampling plan.
In Chapters 5 and 6, we investigate the case where we allow for possible variation in the misclassification rates within conforming and nonconforming parts, by proposing some new random-effects models. These models relax the fixed-effects model assumptions regarding constant misclassification rates and conditional independence. As in the previous chapters, we focus on investigating the effect of using conditional selection and baseline information on the properties of the estimators, and give study design recommendations based on our findings.
In Chapter 7, we discuss other potential applications of the conditional selection plan, where the study data are augmented with the baseline information on the pass rate, especially in the context where there are multiple BMSs under investigation.
|
36 |
Novel statistical models for ecological momentary assessment studies of sexually transmitted infections. He, Fei, 18 July 2016.
Indiana University-Purdue University Indianapolis (IUPUI) / The research ideas included in this dissertation are motivated by a large sexually transmitted infections (STIs) study (the IU Phone study), an ecological momentary assessment (EMA) study implemented by Indiana University from 2008 to 2013. EMA, a group of methods used to collect subjects' up-to-date behaviors and status, can increase the accuracy of this information by allowing a participant to self-administer a survey or diary entry, in their own environment, as close to the occurrence of the behavior as possible. The IU Phone study's high reporting level shows one of the benefits gained from introducing EMA into an STI study. In this prospective study lasting 84 days, participants undergo STI testing and complete EMA forms on project-furnished cellular telephones according to predetermined schedules. At pre-selected eight-hour intervals, participants respond to a series of questions to identify sexual and non-sexual interactions with specific partners, including partner name, relationship satisfaction and sexual satisfaction with the partner, the time of each coital event, and condom use for each event, among other details. STI lab results for all participants are collected weekly as well. We are interested in several variables related to the risk of infection and to sexual or non-sexual behaviors, and especially in the relationships among the longitudinal processes of those variables. New statistical models and applications are developed to deal with data that have complex dependence and sampling structures. The methodology covers several statistical areas: generalized mixed models, multivariate models, and autoregressive and cross-lagged models in longitudinal data analysis; misclassification adjustment for imperfect diagnostic tests; and variable-domain functional regression in functional data analysis. The contribution of our work is that we bridge methods from different areas with the EMA data in the IU Phone study and build a novel understanding of the associations among the variables of interest from different perspectives, based on the characteristics of the data. Beyond the statistical analyses included in this dissertation, a variety of data visualization techniques also provides informative support in presenting the complex EMA data structure.
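As one concrete example of the kind of misclassification adjustment for imperfect diagnostic tests mentioned above, the sketch below applies the standard Rogan-Gladen correction to an apparent prevalence, using hypothetical sensitivity and specificity values; it is an illustration, not the dissertation's model.

```python
# Rogan-Gladen correction of an apparent prevalence for imperfect test accuracy (hypothetical values).
def adjusted_prevalence(apparent, sensitivity, specificity):
    """Back out the true prevalence from the apparent (test-positive) proportion."""
    adj = (apparent + specificity - 1.0) / (sensitivity + specificity - 1.0)
    return min(max(adj, 0.0), 1.0)   # clip to the valid range

apparent = 120 / 1500                # hypothetical test-positive proportion
print(f"apparent prevalence: {apparent:.3f}")
print(f"adjusted prevalence: {adjusted_prevalence(apparent, 0.90, 0.98):.3f}")
```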
|
37 |
Generic Drug Discount Programs, Cash-Only Drug Exposure Misclassification Bias, and the Implications for Claims-Based Adherence Measure Estimates. Thompson, Jeffrey A., 26 July 2018.
No description available.
|
38 |
L’utilisation d’acétaminophène durant la grossesse et le risque de trouble du spectre de l’autisme : les issues méthodologiques liées à la mesure d’exposition d’acétaminophène. Kamdem Tchuendem, Lydienne, 04 1900.
Autism Spectrum Disorder (ASD) is a pathology caused by neurodevelopmental abnormalities. A syndrome most often diagnosed in early childhood, ASD mainly affects social interaction and causes language, communication, and behavioral disorders. Its prevalence continues to rise, reaching 1.52% in Canada in 2018[1], and is four times higher in boys than in girls[2].
To date, there is no treatment specific to ASD; however, psychological interventions and medications are used to reduce associated symptoms such as aggression and anxiety.
The etiology of ASD is poorly understood. Nevertheless, hypotheses have been advanced along three lines: genetic, biological, and environmental. The environmental factors include in-utero exposure to medications, and this project focuses on one such medication, acetaminophen[3]. Acetaminophen is one of the most widely used analgesics for pain relief and fever treatment, which is why it is often recommended to women during pregnancy. This high consumption of acetaminophen among pregnant women, approximately 65%-75% in the United States[4], raises a real concern, since the active ingredient has been shown to be detectable in umbilical cord plasma samples[5]. The possibility that acetaminophen crosses the placenta and is present in the baby's bloodstream cannot be excluded[4],[5]. It is important to carry out more research on the subject in order to assess the true impact of such exposure.
Given the limits on including pregnant women in randomized clinical trials, we conducted a population-based cohort study. The data came from four Quebec administrative databases, whose linkage allowed the establishment of the Quebec Pregnancy Cohort (QPC), with longitudinal follow-up between 1998 and 2015.
Thus, this methodological project mainly aims to explore and illustrate the limits of using administrative databases to measure exposure to over-the-counter medications, using as a practical case the assessment of the risk of ASD in children exposed to acetaminophen during the gestational phase compared with unexposed children.
The literature reports that the use of over-the-counter medications is often underestimated in such data. Because these databases capture only prescriptions filled in pharmacies, and because in current clinical practice these medications are rarely prescribed by physicians and rarely dispensed against a prescription, using administrative databases to measure exposure to over-the-counter medications can introduce major misclassification[6]. The practical case used in this project confirms this hypothesis.
In our data, in-utero exposure to acetaminophen is underestimated by 94% (Supplemental Table 5). Only 6% (38/648) of exposed children are captured within the QPC, which does not reflect the clinical reality of acetaminophen use in pregnancy. This result demonstrates the introduction of an information bias (misclassification) that could totally or partially explain the measured estimate. Indeed, our study shows no statistically significant association between in-utero acetaminophen exposure and the risk of ASD (aHR: 1.10, 95% CI 0.93-1.30). In parallel, a sub-cohort analysis was carried out using questionnaires self-reported by mothers to measure exposure to acetaminophen; for this secondary analysis, a protective effect was estimated (aHR: 0.78, 95% CI 0.41-1.47). Sensitivity analyses on the outcome confirmed the robustness of these estimates, thus validating our results.
In conclusion, this study allowed us to measure the magnitude of the misclassification associated with using administrative databases for research questions involving over-the-counter drugs. Given these methodological challenges, it is difficult to measure the association between acetaminophen use in pregnancy and the risk of ASD in our cohort. Our study does not demonstrate a statistically significant association, which contradicts previous studies that used exposure measurement methods closer to actual use.
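To illustrate the mechanism described above, the sketch below computes the risk ratio that would be expected when exposure is classified nondifferentially with very low sensitivity (as when prescription claims are used for an over-the-counter drug). All inputs are hypothetical and the calculation is a simplified bias analysis for a risk ratio, not the cohort's actual hazard model.

```python
# Expected attenuation of a cohort risk ratio under nondifferential exposure misclassification.
def observed_risk_ratio(prev, true_rr, baseline_risk, sens, spec):
    """Expected risk ratio when the misclassified (recorded) exposure is used."""
    risk_e, risk_u = baseline_risk * true_rr, baseline_risk
    # Population fractions in each recorded-exposure group, split by true exposure
    rec_exp_true = prev * sens                  # truly exposed, recorded exposed
    rec_exp_false = (1 - prev) * (1 - spec)     # truly unexposed, recorded exposed
    rec_unexp_true = prev * (1 - sens)          # truly exposed, recorded unexposed
    rec_unexp_false = (1 - prev) * spec         # truly unexposed, recorded unexposed
    risk_rec_exp = (rec_exp_true * risk_e + rec_exp_false * risk_u) / (rec_exp_true + rec_exp_false)
    risk_rec_unexp = (rec_unexp_true * risk_e + rec_unexp_false * risk_u) / (rec_unexp_true + rec_unexp_false)
    return risk_rec_exp / risk_rec_unexp

# Hypothetical inputs: 60% true prevalence of use, true RR 1.5, ~6% claims sensitivity
rr_obs = observed_risk_ratio(prev=0.60, true_rr=1.50, baseline_risk=0.015, sens=0.06, spec=0.995)
print(f"expected observed risk ratio: {rr_obs:.2f}  (true risk ratio: 1.50)")
```

With these invented values the expected observed ratio is close to the null, which is the qualitative point: when only a small fraction of true users is captured, even a real effect can appear absent.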
|
39 |
Characterization and impact of ambient air pollution measurement error in time-series epidemiologic studies. Goldman, Gretchen Tanner, 28 June 2011.
Time-series studies of ambient air pollution and acute health outcomes utilize measurements from fixed outdoor monitoring sites to assess changes in pollution concentration relative to time-variable health outcome measures. These studies rely on measured concentrations as a surrogate for population exposure. The degree to which monitoring site measurements accurately represent true ambient concentrations is of interest from both an etiologic and regulatory perspective, since associations observed in time-series studies are used to inform health-based ambient air quality standards. Air pollutant measurement errors associated with instrument precision and lack of spatial correlation between monitors have been shown to attenuate associations observed in health studies. Characterization and adjustment for air pollution measurement error can improve effect estimates in time-series studies. Measurement error was characterized for 12 ambient air pollutants in Atlanta. Simulations of instrument and spatial error were generated for each pollutant, added to a reference pollutant time-series, and used in a Poisson generalized linear model of air pollution and cardiovascular emergency department visits. This method allows for pollutant-specific quantification of impacts of measurement error on health effect estimates, both the assessed strength of association and its significance. To inform on the amount and type of error present in Atlanta measurements, air pollutant concentrations were simulated over the 20-county metropolitan area for a 6-year period, incorporating several distribution characteristics observed in measurement data. The simulated concentration fields were then used to characterize the amount and type of error due to spatial variability in ambient concentrations, as well as the impact of use of different exposure metrics in a time-series epidemiologic study. Finally, methodologies developed for the Atlanta area were applied to air pollution measurements in Dallas, Texas with consideration for the impact of this error on a health study of the Dallas-Fort Worth region that is currently underway.
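A highly simplified sketch of this simulation logic, with invented values and not the study's actual code, is shown below: classical measurement error is added to a pollutant series, and a Poisson regression of daily counts on the error-laden series illustrates the attenuation of the estimated health-effect coefficient.

```python
# Hypothetical demonstration of attenuation from exposure measurement error in a Poisson time-series model.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
days = 2000
true_conc = np.exp(rng.normal(2.3, 0.4, days))           # "true" daily ambient concentration
counts = rng.poisson(np.exp(3.0 + 0.01 * true_conc))     # daily ED visits; true log-rate slope = 0.01

# Instrument-plus-spatial error, represented here as additive classical error
measured = true_conc + rng.normal(0.0, 0.5 * true_conc.std(), days)

def poisson_slope(x, y):
    X = sm.add_constant(x)
    return sm.GLM(y, X, family=sm.families.Poisson()).fit().params[1]

print(f"slope with true exposure:     {poisson_slope(true_conc, counts):.4f}")
print(f"slope with measured exposure: {poisson_slope(measured, counts):.4f}")
```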
|
40 |
Assessing and correcting for the effects of species misclassification during passive acoustic surveys of cetaceans. Caillat, Marjolaine, January 2013.
In conservation ecology, abundance estimates are an important input on which management decisions are based. Methods to estimate cetacean abundance from visual detections are well developed, whereas parallel methods based on passive acoustic detections are still in their infancy. To estimate the abundance of cetacean species using acoustic detection data, it is first necessary to correctly identify the species that are detected. The current automatic PAMGUARD Whistle Classifier, used to identify the species of detected whistles, is modified with the objective of facilitating the use of these detections to estimate cetacean abundance. Given the variability of cetacean sounds within and between species, developing an automated species classifier with 100% correct classification probability for any species is unfeasible. However, two case studies show that large, high-quality datasets with which to develop these automatic classifiers increase the probability of creating reliable classifiers with low and precisely estimated misclassification probabilities. Given that misclassification is unavoidable, it is necessary to consider the effect of misclassified detections on the number of acoustic calls observed, and thus on abundance estimates, and to develop robust methods to cope with these misclassifications. Through both heuristic and Bayesian approaches, it is demonstrated that if misclassification probabilities are known or estimated precisely, it is possible to estimate the true number of detected calls accurately and precisely. However, misclassification and uncertainty increase the variance of the estimates. If the true numbers of detections from different species are similar, then a small amount of misclassification between species and a small amount of uncertainty in the misclassification probabilities do not have a detrimental effect on the overall variance and bias of the estimate. However, if the encounter rates of the species' calls differ and there is substantial uncertainty in the misclassification probabilities, then the variance of the estimates becomes larger and the bias increases; this in turn increases the variance and the bias of the final abundance estimate. This study, despite not providing perfect results, highlights for the first time the importance of dealing with species misclassification if acoustic detections are to be used to estimate cetacean abundance.
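As a brief illustration of the count-correction idea (with an invented confusion matrix, not the study's estimates): the expected observed counts per classified species are a linear mixture of the true counts, so known misclassification probabilities allow the true counts to be recovered by solving a linear system. A Bayesian treatment would additionally propagate uncertainty in those probabilities, which is where the variance inflation discussed above comes from.

```python
# Recovering true call counts from classified counts when misclassification probabilities are known (hypothetical).
import numpy as np

# rows = true species, columns = classified species; each row sums to 1
confusion = np.array([[0.85, 0.10, 0.05],
                      [0.15, 0.80, 0.05],
                      [0.05, 0.05, 0.90]])

true_counts = np.array([400.0, 150.0, 50.0])
observed = confusion.T @ true_counts           # expected observed counts per classified species

recovered = np.linalg.solve(confusion.T, observed)
print("observed counts per classified species:", observed)
print("recovered true counts:", recovered)      # matches true_counts when the probabilities are exact
```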
|