1. Evaluation of computerised programs for the diagnosis and treatment of binocular anomalies. Lin, Wei. January 2016.
Computerised diagnostic testing and computerised vision training (VT) have been developed for the orthoptic management of binocular vision (BV) anomalies in clinical practice. Computerised measurement of BV is assumed to assist accurate diagnosis of BV anomalies because automatic measurement eliminates the variability that arises from examiners' subjective judgements. Computerised VT is thought to be effective in the treatment of BV anomalies because the computer games used for training enhance the patient's motivation. However, these assumptions lacked scientific support. This thesis reports a range of studies investigating computerised programs for diagnostic testing (HTS-BVA) and vision training (HTS-iNet) in comparison with the corresponding traditional approaches. The first study investigated the inter-session repeatability of computerised testing of BV functions. The results showed that computerised measurement of near horizontal fusional vergence (FV) and accommodative facility (AF) was not more repeatable between sessions than the corresponding traditional testing. The second study was a pilot for a future rigorous randomised clinical trial (RCT) investigating the effectiveness of computerised VT as a home-based treatment for convergence insufficiency (CI). The subjects with CI demonstrated improvement in near point of convergence (NPC), near base-out FV and CI-associated symptoms after an 8-week treatment regime. The third study, following from the first, investigated whether accommodative responses (AR) are affected by the novel accommodative stimuli used in computerised AF testing. The results showed that AR may be affected by the colours of the accommodative targets and by the colour filters used. In particular, at an accommodative demand of 4 dioptres, blue targets elicited poorer AR than red targets, and targets seen through colour filters elicited poorer AR than those seen without. The fourth study, also following from the first, investigated whether a prolonged near vision task affects measurements relating to the near FV system, thus contributing to the variability of clinical findings. The results showed statistically significant changes in NPC and near dissociated phoria. In further sub-group analyses, subjects with an initially poor NPC (n = 9) presented greater changes in NPC and near dissociated phoria than subjects with a normal NPC (n = 25). Overall, computerised testing did not produce more repeatable BV measurements than traditional testing. Finally, an RCT is needed to determine whether computerised VT is more effective than placebo computerised VT as a home-based treatment for CI.
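As a rough illustration of the repeatability comparison in the first study, the sketch below computes the Bland-Altman coefficient of repeatability (1.96 × the standard deviation of test-retest differences) for two measurement sessions. The data are simulated and purely hypothetical; the thesis's actual statistical procedure may differ.

```python
import numpy as np

def coefficient_of_repeatability(session1, session2):
    """Bland-Altman coefficient of repeatability: 1.96 * SD of the
    within-subject differences between two sessions; smaller = more repeatable."""
    diffs = np.asarray(session1) - np.asarray(session2)
    bias = diffs.mean()               # systematic session-to-session shift
    cr = 1.96 * diffs.std(ddof=1)     # half-width of the 95% limits of agreement
    return bias, cr

# Hypothetical near fusional vergence break points (prism dioptres) for 30
# subjects, measured in two sessions with the same instrument.
rng = np.random.default_rng(0)
true_fv = rng.normal(20, 4, size=30)
session1 = true_fv + rng.normal(0, 2, size=30)
session2 = true_fv + rng.normal(0, 2, size=30)

bias, cr = coefficient_of_repeatability(session1, session2)
print(f"bias = {bias:.2f} pd, coefficient of repeatability = {cr:.2f} pd")
```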

2. Making diagnoses with multiple tests under no gold standard. Zhang, Jingyang. 1 May 2012.
In many applications, it is common to have multiple diagnostic tests on each subject. When multiple tests are available, combining them to incorporate information from various aspects of the subject may be necessary to obtain better diagnostic accuracy. For continuous tests, in the presence of a gold standard, the tests can be combined linearly (Su and Liu, 1993) or sequentially (Thompson, 2003), or via the risk score as studied by McIntosh and Pepe (2002). The gold standard, however, is not always available in practice. This dissertation concentrates on deriving classification methods based on multiple tests in the absence of a gold standard. Motivated by a lab data set consisting of two tests for an antibody in 100 blood samples, we first develop a mixture model of four bivariate normal distributions with the mixture probabilities depending on a two-stage latent structure. The proposed two-stage latent structure is based on the biological mechanism of the tests. A Bayesian classification method incorporating the available prior information is derived using Bayesian decision theory. The proposed method is illustrated with the motivating example, and the properties of the estimation and the classification are described via simulation studies. Sensitivity to the choice of the prior distribution is also studied. We also investigate the general problem of combining multiple continuous tests without any gold standard or reference test. We thoroughly study the existing methods for combining multiple tests and develop optimal classification rules that accommodate the situation without a gold standard. We justify the proposed methods both theoretically and numerically through extensive simulation studies and illustrate them with the motivating example. We conclude the dissertation with remarks and some interesting open questions extending from this work.
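For context on the with-gold-standard baseline cited above, the following sketch implements the Su and Liu (1993) best linear combination for binormal tests, which requires known disease labels — exactly what the dissertation's no-gold-standard setting lacks. The data are simulated and the numbers are illustrative only.

```python
import numpy as np
from scipy.stats import norm

def su_liu_combination(x_diseased, x_healthy):
    """Best linear combination of continuous tests under binormality
    (Su & Liu, 1993): a is proportional to (S_D + S_H)^{-1} (mu_D - mu_H).
    Requires a gold standard, i.e. known disease labels."""
    mu_d, mu_h = x_diseased.mean(axis=0), x_healthy.mean(axis=0)
    s_d = np.cov(x_diseased, rowvar=False)
    s_h = np.cov(x_healthy, rowvar=False)
    delta = mu_d - mu_h
    a = np.linalg.solve(s_d + s_h, delta)              # combination weights
    auc = norm.cdf(np.sqrt(delta @ a))                 # AUC of the combined score
    return a, auc

# Hypothetical pair of antibody tests on labelled samples.
rng = np.random.default_rng(1)
healthy = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.3], [0.3, 1.0]], 200)
diseased = rng.multivariate_normal([1.0, 0.8], [[1.0, 0.3], [0.3, 1.0]], 200)
weights, auc = su_liu_combination(diseased, healthy)
print("weights:", weights, "AUC of combined score:", round(auc, 3))
```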

3. Linearization Methods in Time Series Analysis. Chen, Bei. 8 September 2011.
In this dissertation, we propose a set of computationally efficient methods based on approximating or representing nonlinear processes by linear ones, so-called linearization. Firstly, a linearization method is introduced for estimating multiple frequencies in sinusoidal processes. It utilizes a regularized autoregressive (AR) approximation, which can be regarded as a "large p, small n" approach in a time series context. An appealing property of regularized AR is that it avoids a model selection step and allows for efficient updating of the frequency estimates whenever new observations are obtained. The theoretical analysis shows that the regularized AR frequency estimates are consistent and asymptotically normally distributed. Secondly, a sieve bootstrap scheme is proposed using the linear representation of generalized autoregressive conditional heteroscedastic (GARCH) models to construct prediction intervals (PIs) for returns and volatilities. Our method is simple, fast and distribution-free, while providing sharp and well-calibrated PIs. A similar linear bootstrap scheme can also be used for diagnostic testing. Thirdly, we introduce a robust Lagrange multiplier (LM) test for detecting GARCH effects, which utilizes either a bootstrap or a permutation procedure to obtain critical values. We show that both the bootstrap and permutation LM tests are consistent. Extensive numerical studies indicate that the proposed resampling algorithms significantly improve the size and power of the LM test in both skewed and heavy-tailed processes. Fourthly, we introduce a nonparametric trend test in the presence of GARCH effects (NT-GARCH) based on heteroscedastic ANOVA. Our empirical evidence shows that NT-GARCH can effectively detect non-monotonic trends under GARCH, especially in the presence of irregular seasonal components. We suggest applying the bootstrap procedure both for selecting the window length and for finding critical values. The newly proposed methods are illustrated by applications to astronomical data, to foreign currency exchange rates, and to water and air pollution data. Finally, the dissertation concludes with an outlook on further extensions of linearization methods, e.g., in model order selection and change point detection.
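A minimal sketch of the first idea, frequency estimation via a ridge-regularized AR approximation: fit a high-order AR model with an L2 penalty and read the frequency estimates off the peaks of the fitted AR spectrum. The model order and penalty here are arbitrary illustrations, not the dissertation's estimator or tuning.

```python
import numpy as np

def regularized_ar_frequencies(x, p, lam, n_freqs, grid=4096):
    """Fit AR(p) by ridge regression, (X'X + lam*I) a = X'y, then take the
    n_freqs largest peaks of the AR spectrum 1/|1 - sum_k a_k e^{-i w k}|^2."""
    n = len(x)
    X = np.column_stack([x[p - k: n - k] for k in range(1, p + 1)])
    y = x[p:]
    a = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
    w = np.linspace(0, np.pi, grid)
    spec = 1.0 / np.abs(1 - np.exp(-1j * np.outer(w, np.arange(1, p + 1))) @ a) ** 2
    # local maxima of the AR spectrum, ranked by height
    peaks = np.where((spec[1:-1] > spec[:-2]) & (spec[1:-1] > spec[2:]))[0] + 1
    top = peaks[np.argsort(spec[peaks])[::-1][:n_freqs]]
    return np.sort(w[top])

# Two sinusoids in noise; p is large relative to n ("large p, small n").
rng = np.random.default_rng(2)
t = np.arange(150)
x = np.sin(0.6 * t) + 0.8 * np.sin(1.7 * t) + rng.normal(0, 0.5, t.size)
print(regularized_ar_frequencies(x, p=50, lam=1.0, n_freqs=2))  # expect peaks near 0.6, 1.7
```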

4. A location science model for the placement of POC CD4 testing devices as part of South Africa's public healthcare diagnostic service delivery model. Oosthuizen, Louzanne. 2015.
Thesis (MEng), Stellenbosch University, 2015.

South Africa has a severe HIV (human immunodeficiency virus) burden and the management of the disease is a priority, especially in the public healthcare sector. One element of managing the disease is determining when to initiate an HIV-positive individual onto anti-retroviral therapy (ART), a treatment that the patient will remain on for the remainder of their lifetime. For the majority of HIV-positive individuals in the country, this decision is governed by the results of a CD4 (cluster of differentiation 4) test that is performed at set time intervals from the time that the patient is diagnosed with HIV until the patient is initiated onto ART. A device for CD4 measurement at the point of care (POC), the Alere PIMA™, has recently become commercially available. This has prompted a need to evaluate whether CD4 testing at the POC (i.e. at the patient-serving healthcare facility) should be incorporated into the South African public healthcare sector's HIV diagnostic service provision model.

One challenge associated with the management of HIV in the country is the relatively large percentage of patients who are lost to follow-up at various points in the HIV treatment process. There is extensive evidence that testing CD4 levels at the POC (rather than in a laboratory, as is the current practice) reduces the percentage of patients who are lost to follow-up before being initiated onto ART. Therefore, though POC CD4 testing is more expensive than laboratory-based CD4 testing, the use of this technology in South Africa should be investigated for its potential to positively influence health outcomes.

In this research, a multi-objective location science model is used to generate scenarios for the provision of CD4 testing capability. For each scenario, CD4 testing provision at 3 279 ART initiation facilities is considered. For each facility, either (i) a POC device is placed at the site, or (ii) the site's testing workload is referred to one of the 61 CD4 laboratories in the country. To develop this model, the characteristics of eight basic facility location models are compared to the attributes of the real-world problem in order to select the most suitable one for application. The selected model's objective, assumptions and inputs are adjusted in order to adequately model the real-world problem. The model is solved using the cross-entropy method for multi-objective optimisation and the results are verified using a commercial algorithm.

Nine scenarios are selected from the acquired Pareto set for detailed presentation. In addition, details on the status quo as well as a scenario where POC testing is used as widely as possible are also presented. These scenarios are selected to provide decision-makers with information on the range of options that should be considered, from no or very limited use to widespread use of POC testing. Arguably the most valuable contribution of this research is to provide an indication of the optimal trade-off points between an improved healthcare outcome due to POC CD4 testing and increased healthcare spending on POC CD4 testing in the South African public healthcare context. This research also contributes to the location science literature and the metaheuristic literature.
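The sketch below illustrates the flavour of the solution approach: a cross-entropy search over binary POC-placement vectors for one weighted-sum scalarisation of the two objectives. All workloads, costs and the retention effect are invented, the instance is far smaller than the real 3 279-facility problem, and the thesis uses a genuinely multi-objective cross-entropy variant rather than this scalarised simplification.

```python
import numpy as np

rng = np.random.default_rng(3)
F = 60                                        # hypothetical facility count
workload = rng.integers(50, 500, F).astype(float)   # tests/year (hypothetical)
poc_cost, lab_cost = 12.0, 6.0                # cost per test (hypothetical)
retention_gain = 0.2                          # extra fraction retained at POC sites

def objectives(z):
    """z[i] = 1: POC device at facility i; 0: refer to a laboratory."""
    cost = float(np.sum(np.where(z == 1, poc_cost, lab_cost) * workload))
    retained = retention_gain * float(workload[z == 1].sum())
    return cost, retained

def cross_entropy(weight, iters=60, n_samples=400, n_elite=40, smooth=0.7):
    """CE method for one weighted-sum scalarisation; sweeping `weight`
    traces out an approximation of the cost/retention trade-off."""
    p = np.full(F, 0.5)                       # Bernoulli sampling probabilities
    for _ in range(iters):
        Z = (rng.random((n_samples, F)) < p).astype(int)
        scores = np.empty(n_samples)
        for i, z in enumerate(Z):
            cost, retained = objectives(z)
            scores[i] = weight * retained - cost
        elite = Z[np.argsort(scores)[-n_elite:]]       # best samples
        p = smooth * elite.mean(axis=0) + (1 - smooth) * p
    best = (p > 0.5).astype(int)
    return best, objectives(best)

for w in (5.0, 20.0, 80.0):
    _, (cost, retained) = cross_entropy(w)
    print(f"weight={w:>5}: cost={cost:,.0f}, patients retained ~ {retained:,.0f}")
```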

5. Evaluation of the Use of Non-Invasive Prenatal Testing in Ontario, Canada, 2016-2020. Tweneboa Kodua, Ama. 2 September 2021.
Background:
There are few studies on the uptake of non-invasive prenatal testing (NIPT), but those available suggest substantial variation in uptake during the initial years in which it was offered. There is a need to update the earlier evidence and determine whether usage trends have changed as the number of users has increased. This will give policy makers evidence about NIPT uptake under existing policies and guidelines, which can inform decisions on whether to maintain or refine those policies.
Objectives:
The primary objective of this thesis was to investigate recent trends in NIPT utilization. The secondary objective was to compare pregnant individuals aged 40 years and above and/or with a history of previous aneuploidy who opted for first-tier (first-line screening) or second-tier (contingent screening) NIPT with pregnant individuals aged under 40 years with no history of previous aneuploidy.
Methods:
This retrospective cohort study used a province-wide birth registry from Ontario; the population studied comprised pregnant individuals with an expected date of delivery from August 1st, 2016 to March 31st, 2020.
Results:
Of 536,748 pregnant individuals resident in Ontario during the study period, 27,733 were classified as high-risk and 509,015 as low-risk of giving birth to a baby with a chromosomal aneuploidy. Uptake of NIPT increased every year from 2016, with substantial variation between regions within the province. Uptake was highest in urban areas, in the highest neighbourhood income and education quintiles, in the high-risk population, and among those with a first-trimester prenatal care visit, a multiple pregnancy, multigravidity, a body mass index within the normal range (18.5-24.9 kg/m²), and OHIP funding.
Conclusion:
Our results suggest a need to provide more education and training about NIPT and funding eligibility to health professionals and pregnant individuals, and to include low-risk pregnant individuals in the first-tier (first-line screening) NIPT funding policy, to ensure equitable access.
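As a sketch of how uptake proportions of this kind are typically reported, the snippet below computes Wilson score confidence intervals for yearly uptake. The counts are invented and do not reflect the study's data.

```python
import numpy as np
from scipy.stats import norm

def wilson_ci(k, n, conf=0.95):
    """Wilson score interval for an uptake proportion k/n."""
    z = norm.ppf(0.5 + conf / 2)
    p = k / n
    centre = (p + z**2 / (2 * n)) / (1 + z**2 / n)
    half = z * np.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / (1 + z**2 / n)
    return centre - half, centre + half

# Hypothetical counts: NIPT users / pregnancies per fiscal year.
for year, k, n in [(2017, 9000, 140000), (2018, 12000, 142000), (2019, 16000, 143000)]:
    lo, hi = wilson_ci(k, n)
    print(f"{year}: uptake {k/n:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```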

6. Evaluation of Pure-Tone Thresholds and Distortion Product Otoacoustic Emissions Measured in the Extended High Frequency Region. Lyons, Alexandria; Mussler, Sadie; Smurzynski, Jacek. 25 April 2023.
When the cochlea is stimulated with two primary tones (f1 and f2), some of the energy is reflected back and propagates via the middle ear into the outer ear. Due to cochlear nonlinearities, distortion product otoacoustic emissions (DPOAEs) may be detected by a probe microphone sealed in the ear canal. Reduced DPOAEs may indicate subclinical cochlear lesions. The relationship between hearing sensitivity and the strength of DPOAEs is debatable, especially in the extended high frequency (EHF) region (≥8 kHz). Monitoring cochlear function in the EHF range is important for detecting early stages of hearing loss, which typically begins above 8 kHz. Complex interactions of high-frequency pure tones in the ear canal result in standing waves that increase the test-retest variability of DPOAEs measured for f2 ≥ 6 kHz. The aim of the project was to evaluate the reliability of DPOAEs measured up to 12 kHz with a system used routinely in audiology clinics. Thirty-one adults (age 18-30 yrs) with normal middle-ear function and normal hearing thresholds in the conventional region (≤8 kHz) participated. EHF audiometry was performed for frequencies up to 16 kHz. The DPOAE data were collected with the f2 frequency varied from 1.5 to 12 kHz, twice for each ear, with the probe removed and then repositioned after the first test. The EHF audiometric data of four participants showed elevated thresholds. Their DPOAEs were reduced or absent for f2 ≥ 9 kHz, supporting the sensitivity of DPOAEs to cochlear hearing loss above the conventional audiometric frequency range. Means and standard deviations of DPOAE levels were calculated separately for the left and right ears of subjects with normal EHF thresholds. There were no differences between mean DPOAE values in the left and right ears. The intersubject variability of the DPOAE levels was moderate (SD ≈ 6 dB or lower), but it increased significantly in the 12-kHz region, per the F-test for variances, possibly due to (1) effects of standing waves on high-frequency DPOAE reliability and/or (2) subclinical pathology in the most basal (i.e., high-frequency) portion of the cochlea. For each ear, absolute values of differences between test/retest levels of detectable DPOAEs were calculated. ANOVA showed a main effect of frequency for the data collected in the left and right ears. Post-hoc analyses indicated that the test/retest variability of DPOAEs was fairly constant for f2 frequencies up to 10 kHz, but a statistically significant increase was found for f2 of 11 and 12 kHz. This aspect needs to be considered when using DPOAE tests for longitudinal monitoring of cochlear function in the basal portion. Nevertheless, combining behavioral thresholds with DPOAEs collected in the EHF range is vital for detecting the initial stage of cochlear pathology in the high-frequency region, e.g., due to ototoxicity or aging of the cochlea.
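A small sketch of the variance comparison mentioned above: a two-sided F-test for equality of variances between DPOAE levels measured at two f2 frequencies. The levels are simulated; none of the numbers are the study's.

```python
import numpy as np
from scipy.stats import f

def f_test_variances(a, b):
    """Two-sided F-test comparing the variances of two samples, as used here
    to compare intersubject DPOAE-level spread across f2 regions."""
    a, b = np.asarray(a), np.asarray(b)
    F = a.var(ddof=1) / b.var(ddof=1)
    dfa, dfb = a.size - 1, b.size - 1
    p = 2 * min(f.sf(F, dfa, dfb), f.cdf(F, dfa, dfb))
    return F, min(p, 1.0)

# Hypothetical DPOAE levels (dB SPL) at f2 = 8 kHz vs f2 = 12 kHz.
rng = np.random.default_rng(4)
levels_8k = rng.normal(5, 5.5, 27)
levels_12k = rng.normal(1, 9.0, 27)
F_stat, p = f_test_variances(levels_12k, levels_8k)
print(f"F = {F_stat:.2f}, two-sided p = {p:.4f}")
```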

7. The influence of critical asset management facets on improving reliability in power systems. Perkel, Joshua. 4 November 2008.
The objective of the proposed research is to develop statistical algorithms for controlling failure trends through targeted maintenance of at-risk components. The at-risk components are identified via chronological history and diagnostic data, if available. Utility systems include many thousands (possibly millions) of components, many of which have already exceeded their design lives. Unfortunately, neither the budget nor the manufacturing resources exist to allow for the immediate replacement of all these components. On the other hand, the utility cannot tolerate a decrease in reliability or the associated increased costs. To combat this problem, an overall maintenance model has been developed that utilizes all the available historical information (failure rates and population sizes) and diagnostic tools (real-time conditions of each component) to generate a maintenance plan. This plan must be capable of delivering the needed reliability improvements while remaining economical. It consists of three facets, each of which addresses one of the critical asset management issues:
* Failure Prediction Facet - Statistical algorithm for predicting future failure trends and estimating required numbers of corrective actions to alter these failure trends to desirable levels. Provides planning guidance and expected future performance of the system.
* Diagnostic Facet - Development of diagnostic data and techniques for assessing the accuracy and validity of that data. Provides the true effectiveness of the different diagnostic tools that are available.
* Economics Facet - Stochastic model of economic benefits that may be obtained from diagnostic directed maintenance programs. Provides the cost model that may be used for budgeting purposes.
These facets function together to generate a diagnostic directed maintenance plan whose goal is to provide the best available guidance for maximizing reliability gains within the budgetary limits utility engineers must operate under.
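A toy sketch of the failure prediction facet under an assumed log-linear failure trend: fit the trend, project it forward, and estimate how many corrective actions per year would hold failures at a target level. The counts, target and one-failure-removed-per-action assumption are all hypothetical, not the model developed in this research.

```python
import numpy as np

# Hypothetical annual failure counts for an aging component population.
years = np.arange(2000, 2009)
failures = np.array([12, 14, 13, 17, 21, 24, 29, 33, 40])

# Log-linear (exponential) trend fit: log E[N_t] = b0 + b1 * t.
t = years - years[0]
b1, b0 = np.polyfit(t, np.log(failures), 1)
print(f"fitted year-over-year growth: {np.exp(b1):.2f}x")

# Project five years ahead; assume (hypothetically) that each corrective
# action, e.g. replacing an at-risk cable segment, removes one expected
# failure per year.
target = 32.0   # desired failure ceiling (hypothetical policy choice)
for k in range(1, 6):
    projected = np.exp(b0 + b1 * (t[-1] + k))
    actions = max(projected - target, 0.0)
    print(f"{years[-1] + k}: projected {projected:.0f} failures, "
          f"~{actions:.0f} corrective actions to stay at {target:.0f}")
```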

8. Population-based outcomes of a provincial prenatal screening program: examining impact, uptake, and ethics. June 2014.
The field of prenatal screening and diagnosis has developed rapidly over the past half-century, enabling possibilities for detecting anomalies in reproduction that were never before contemplated. A simple blood sample can aid in the identification of several conditions in the fetus early in the pregnancy. If a fetus is found to be affected by Down syndrome, anencephalus, spina bifida, or Edwards syndrome, a decision must then be made whether to continue or terminate the pregnancy. As prenatal screening becomes increasingly commonplace and part of routine maternal care, researchers are faced with the challenge of understanding its effects at the level of the population and monitoring trends over time. Greater uptake of prenatal screening, when followed by prenatal diagnosis and termination, has important implications for both congenital anomaly surveillance and infant and fetal mortality indicators. Research in Canada suggests that this practice has led to reductions in the congenital anomaly-specific infant mortality rate and increases in the stillbirth rate.(1, 2)
The current study is a population-based, epidemiological exploration of demographic predictors of maternal serum screening (MSS) and amniocentesis uptake, with special attention to variations in birth outcomes resulting from different patterns of use. To accomplish our objectives, multiple data sources (vital statistics, hospital and physician services, and cytogenetic and MSS laboratory information) were compiled to create a comprehensive maternal-fetal-infant dataset. The data spanned a six-year period (2000-2005) and involved 93,171 pregnancies. A binary logistic regression analysis found that First Nations status, rural-urban health region of residence, maternal age group, and year of test all significantly predicted MSS use. Uptake was lower in women living in a rural health region, First Nations women, and those under 30 years of age. The study dataset identified ninety-four terminations of pregnancy following detection of a fetal anomaly (TOPFA), which led to a lower live birth prevalence of infants with Down syndrome, Trisomy 18, and anencephalus. While a significant increasing trend was observed for the overall infant mortality rate in Saskatchewan between 2001 and 2005, no clear trend in either direction could be seen with regard to infant deaths due to congenital anomaly.
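A minimal sketch of the kind of binary logistic regression described here, using statsmodels with invented data and simplified binary predictors; the study's actual dataset and covariate coding differ.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analysis dataset: one row per pregnancy.
rng = np.random.default_rng(5)
n = 5000
df = pd.DataFrame({
    "mss": rng.integers(0, 2, n),            # 1 = maternal serum screening used
    "first_nations": rng.integers(0, 2, n),
    "rural": rng.integers(0, 2, n),
    "age_30_plus": rng.integers(0, 2, n),
    "year": rng.integers(2000, 2006, n),
})

# Binary logistic regression of MSS uptake on the predictors named above;
# with random data the odds ratios will sit near 1 (illustration only).
model = smf.logit("mss ~ first_nations + rural + age_30_plus + C(year)",
                  data=df).fit(disp=0)
odds_ratios = np.exp(model.params)
print(pd.DataFrame({"OR": odds_ratios, "p": model.pvalues}).round(3))
```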
First Nations status and maternal age were important predictors of both MSS and amniocentesis testing, and appeared to influence the decision to continue or terminate an affected pregnancy. Because First Nations women were less likely to screen (9.6% vs. 28.4%) and to have diagnostic follow-up testing (18.5% vs. 33.5%), they were less likely than other women to obtain a prenatal diagnosis when the fetus had a chromosomal anomaly (8.3% vs. 27.0%). This resulted in a lower TOPFA rate compared to the rest of the population (0.64 vs. 1.34 per 1,000 pregnancies) and a smaller difference between the live birth prevalence and the incidence of Down syndrome and Trisomy 18 for First Nations women.
Women under 30 years of age were much less likely to receive a prenatal diagnosis when a chromosomal anomaly was present (18.4% vs. 31.8%). While the risk of a chromosomal anomaly is considerably lower for younger mothers, 53.5% of all pregnancies with chromosomal anomalies and 40.7% of Down syndrome (DS) pregnancies belonged to this group.
Consistent with other studies, pregnancy termination rates following a prenatal congenital anomaly diagnosis are high (e.g., 74.1% of prenatally diagnosed Down syndrome or Trisomy 18 cases), but these rates may be misleading in that they are based on women who chose to proceed to prenatal diagnosis. The fact that two-thirds (67.3%) of Saskatchewan women who received an increased-risk result declined amniocentesis helps to put this finding into context.
Strong surveillance systems and reasonable access to research datasets will be an ongoing challenge for the province of Saskatchewan and should be viewed as a priority. Pregnancies and congenital anomalies are two particularly challenging outcomes to study in the absence of perinatal and congenital anomaly surveillance systems. Still, pregnancies that never reach term must be accounted for in order to describe the true state of maternal-fetal-infant health and to study its determinants. While our study was able to identify some interesting trends and patterns, it is only a snapshot in time. Key to the production of useful surveillance and evaluation is timely information. The current system is neither timely nor user-friendly for researchers, health regions or governments. Data compilation for the current study was a gruelling and cumbersome process taking more than five years to complete. A provincial overhaul is warranted both in the mechanism by which researchers access data and in the handling of data. The Better Outcomes Registry & Network (BORN) in Ontario is an innovative perinatal and congenital anomaly surveillance system worthy of modelling.(3)
Academic papers in non-ethics journals typically focus on the technical or programmatic aspects of screening and do not effectively alert the reader to the complex and profound moral dilemmas raised by the practice. A discussion of ethics was felt necessary to ensure a well-rounded portrayal of the issue, putting findings into context and helping to ensure that their moral relevance did not remain hidden behind the scientific complexities. Here I lay out the themes of the major arguments in a descriptive manner, recognizing that volumes have been written on the ethics of both screening and abortion. A major ethical tension arising within the context of population-based prenatal screening is the tension between community morality and the principle of respect for personal autonomy. Prenatal screening and selective termination have been framed as a purely private or medical matter, thereby deemphasizing the social context in which the practice has materialized and the importance of community values. I consider how a broader sociological perspective, one that takes into account the relevance of community values and the limitations of the clinical encounter, could inform key practice and policy issues involving prenatal screening. It is my position that the community's voice must be invited to the conversation and that public engagement processes should occur prior to any additional expansion in programming. I end with a look at how the community's voice might be better heard on key issues, even those that at first glance seem to be the problems of individuals. As Rayna Rapp (2000) (4) poignantly observed, women today are 'moral pioneers' not by choice, but by necessity.
By elucidating the effects of prenatal screening and the extent of the practice of selective termination, the true occurrence of important categories of congenital anomalies in the province can be observed. Without this knowledge it is very difficult to identify real increases or decreases in fetal and infant mortality over time, as the etiologies are complex. Evidence suggests a large and increasing impact of TOPFA on population-based birth and mortality statistics nationally, whereas in Saskatchewan the effect appears to be less pronounced. Appreciation of the intervening effect of new reproductive technologies will be increasingly important to accurate surveillance, research, and evaluation as this field continues to expand.

9. Pharmacogenetics, controversies and new forms of service delivery in autoimmune diseases, acute lymphoblastic leukaemia and non-small-cell lung cancer. Sainz De la fuente, Graciela. January 2010.
Pharmacogenetics (PGx) and personalised medicine are new disciplines that, gathering the existing knowledge about the genetic and phenotypic factors that underpin drug response, aim to deliver more targeted therapies that avoid the existing problems of adverse drug reactions or lack of drug efficacy. PGx and personalised medicine imply a shift in the way drugs are prescribed, as they require introducing diagnostic tools and implementing pre-screening mechanisms that assess patients' susceptibility to new or existing drugs. The direct benefit is an improvement in drug safety and/or efficacy. However, neither pharmacogenetics nor personalised medicine is widely used in clinical practice; both technologies face a number of controversies that hamper their widespread adoption. This thesis investigates the scientific, technological, social, economic, regulatory and ethical implications of PGx and personalised medicine, to understand the enablers and barriers that drive the process of technology diffusion in three conditions: autoimmune diseases, acute lymphoblastic leukaemia and non-small-cell lung cancer.

The thesis uses concepts from the sociology of science and a qualitative approach to explore the arguments for and against the use of the technology by different actors (pharmaceutical and biotechnology companies, researchers, clinicians, regulators and patient organisations). The core of this analysis lies in understanding how diagnostic testing (TPMT testing in the case of autoimmune diseases and acute lymphoblastic leukaemia, and EGFR testing in the case of non-small-cell lung cancer) may affect the existing drug development and service delivery mechanisms, with a particular focus on the user-producer interactions and feedback mechanisms that underpin the diffusion of medical innovations and technological change in medicine.

The thesis concludes by identifying gaps in knowledge and issues common to TPMT and EGFR testing, which might be used in the future to inform policy on how to improve PGx service delivery through a public health system such as the NHS.