1 |
Biological aging quantification and its association with sleep in the Bogalusa Heart Study. Sun, Xunming. January 2021
Background:
Biological age (BA) estimates were developed to capture the gradual increase in the vulnerability of the aging body better than chronological age (CA) does. Sleep dimensions have been suggested to be associated with health indicators including cardiometabolic function, cognitive function and mortality. The objective of this study was to examine indicators of BA and their predictive validity using the Klemera and Doubal method (KDM) and the physiological dysregulation method (PDM) for quantifying BA, as well as to explore whether phenotypic and genetic associations between sleep variables and BA estimates exist, using the Bogalusa Heart Study (BHS), a community-based cohort study.
Method:
Nineteen biomarkers were selected to estimate BA. Training datasets were drawn from NHANES. The target dataset included 1,034 BHS subjects assessed between 2013 and 2016. Training was performed separately for male and female, black and white participants. BA was quantified with KDM and with a Mahalanobis distance (DM)-based PDM. Cognitive and physical performance tests were used to examine predictive validity.
Associations between three sleep dimension variables and BA estimates were explored using 953 black and white BHS 2013-2016 subjects. Sleep duration in hours, chronotype score and social jetlag in hours were the independent variables; BA estimates were the dependent variables.
Genotyping data from BHS 2013-2016 participants (n=646) were included for the genetic association analysis. SNPs previously associated with morning chronotype were used to compute a genetic risk score (GRS) for BHS participants, and the association between the chronotype GRS and the chronotype phenotype was explored.
Multivariate linear regression was used for all association analyses.
Results:
BA estimates were calculated using both the KDM and PDM methods. Linear regression showed that higher PDM BA estimates were associated with lower performance on cognitive function and physical performance tests. The effect sizes of the associations between PDM BA estimates and performance tests were consistently of greater magnitude than those between KDM estimates and performance tests. Short sleep duration and evening chronotype were associated with larger PDM BA estimates. The morning chronotype GRS was not associated with the morning chronotype phenotype among BHS participants.
Conclusion:
PDM BA estimates are robust measures of biological aging in black and white men and women enrolled in the BHS. Insufficient sleep duration and evening chronotype may advance biological aging, regardless of gender, race and CA. We did not find an association between the morning chronotype GRS and the morning chronotype phenotype. PDM BA estimates are recommended for future aging studies using data from BHS participants.
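The abstract does not include implementation details, but both estimators it names have standard formulations. The following is a minimal Python sketch, assuming the basic Klemera-Doubal estimator and a Mahalanobis-distance measure of dysregulation against a reference sample; the variable names are illustrative and this is not the study's code.

```python
import numpy as np

def kdm_biological_age(X, q, k, s):
    """Basic Klemera-Doubal BA estimate from m biomarkers.

    X       : (n, m) biomarker matrix for the target sample
    q, k, s : per-biomarker intercept, slope and RMSE from regressing each
              biomarker on chronological age in the training data (here,
              sex- and race-specific NHANES regressions, per the abstract).
    """
    num = (X - q) @ (k / s**2)        # sum_j (x_j - q_j) * k_j / s_j^2
    den = np.sum((k / s) ** 2)        # sum_j (k_j / s_j)^2
    return num / den                  # the corrected KDM adds a CA term to both parts

def mahalanobis_dysregulation(X, X_ref):
    """Physiological dysregulation: Mahalanobis distance from a reference sample."""
    mu = X_ref.mean(axis=0)
    S_inv = np.linalg.inv(np.cov(X_ref, rowvar=False))
    d = X - mu
    return np.sqrt(np.einsum("ij,jk,ik->i", d, S_inv, d))  # per-subject distance
```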
|
2 |
The role of genetics in regulation of weight loss and food intake. Bandstein, Marcus. January 2016
While obesity is a world-leading health problem, the most efficient treatment option for severely obese patients is Roux-Y gastric bypass (RYGB) surgery. However, there are large inter-individual differences in weight loss after RYGB surgery. The reasons for this are not yet elucidated, and the role of genetics in weight-loss regulation is still not fully understood. The main aim of this thesis was to investigate the effects of common obesity-associated genetic variants on weight loss and food intake. We examined whether the weight loss two years following RYGB surgery depends on the FTO genotype as well as pre-surgery vitamin D status. For FTO AA carriers, the surgery resulted in a 3% per-allele increase in excess BMI loss (EBMIL; P=0.02). When split by vitamin D baseline status, the EBMIL of vitamin D deficient patients carrying AA exceeded that of vitamin D deficient patients carrying TT by 14% (P=0.03). No such genotypic differences were found in patients without pre-surgery vitamin D deficiency. As the influence of individual single nucleotide polymorphisms (SNPs) may be small, we identified a novel method to combine SNPs into a genetic risk score (GRS). Using a random forest model, SNPs with a high impact on weight loss after RYGB surgery were selected. An up to 11% lower EBMIL with a higher risk score was estimated for the GRS model (P=0.026) composed of seven BMI-associated SNPs (closest genes: MC4R, TMEM160, PTBP2, NUDT3, TFAP2B, ZNF608 and MAP2K5). Pre-surgical hunger feelings were found to be associated with EBMIL and the SNP rs4846567. Before surgery, patients filled out the Three-Factor Eating Questionnaire and were genotyped for known BMI- and waist-hip ratio (WHR)-associated SNPs. Patients with the lowest hunger scores had up to 32% greater EBMIL compared with the highest-scoring patients (P=0.002). TT-allele carriers of rs4846567 reported 58% lower hunger scores. TT carriers also showed a 51% decrease in disinhibition, but no significant impact on cognitive restraint was observed. Because of the association between eating behaviour and weight loss, acute effects on DNA methylation in response to a food intake intervention with a standardized meal were also investigated. After food intake, 1,832 CpG sites were differentially methylated compared with baseline after multiple-testing correction. When adjusted for white blood cell fractions, 541 CpG sites remained. This suggests that the immune system plays an active role in the response to food intake and highlights the dynamic nature of DNA methylation. These findings may contribute to better care for morbidly obese patients. Post-surgical treatment may be optimized so that patients with a less favourable genetic profile receive additional support for weight loss and weight management. This may be considered a step in the transition towards personalized medicine.
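The GRS construction described above is, in essence, an effect-size-weighted sum of risk-allele counts over the selected SNPs. The sketch below is a minimal illustration of that scoring step only, not the thesis's actual pipeline; the column names and weights are placeholders.

```python
import pandas as pd

def weighted_grs(genotypes: pd.DataFrame, weights: pd.Series) -> pd.Series:
    """Weighted genetic risk score: per-SNP risk-allele counts (0/1/2)
    multiplied by per-SNP effect-size weights and summed per individual."""
    snps = weights.index.intersection(genotypes.columns)
    return genotypes[snps].mul(weights[snps], axis=1).sum(axis=1)

# Placeholder genotypes (rows = patients) and weights, for illustration only.
geno = pd.DataFrame({"snp_a": [0, 1, 2], "snp_b": [2, 1, 0]})
w = pd.Series({"snp_a": 0.15, "snp_b": 0.08})
print(weighted_grs(geno, w))
```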
|
3 |
Development of a generic monitoring protocol for management of Cryptosporidium and Giardia in drinking water. Sigudu, Makhosazana Victoria. January 2010
In South Africa, the assessment of the suitability and acceptability of water for drinking purposes is done according to the South African National Standards (SANS) 241 (2006), which requires that Cryptosporidium and Giardia in drinking water be less than 1 oocyst/10 L and 1 cyst/10 L respectively. Although there is a requirement to monitor these parasitic protozoans, there is a lack of uniformity in the monitoring approach. The objective of the study was therefore to develop a protocol/methodology that can be applied by drinking water producers to monitor Cryptosporidium and Giardia so that the risk of exposure to these organisms and the risk of non-compliance with guidelines are reduced. A further objective was to test the feasibility of the protocol on a small system: the drinking water purification plant at the Vaal River Barrage Reservoir, which supplies approximately 350 people with drinking water.
The protocol for monitoring Cryptosporidium and Giardia was developed based on monitoring procedures proposed by the US Environmental Protection Agency, the Drinking Water Inspectorate, Australia and New Zealand, and especially on the risk-based procedure followed by Northern Ireland, with the intention that it be applicable to all water supply systems irrespective of the size and complexity of the purification works. It focuses on a preventative approach to monitoring Cryptosporidium and Giardia and consists of ten steps: (i) assessment of the monitoring requirements, (ii) description and characterization of the source water types, (iii) abstraction of source water, (iv) assessment of the water purification plant, (v) water quality monitoring, (vi) cryptosporidiosis and giardiasis outbreak, (vii) risk assessment, (viii) sample collection and laboratory processing, (ix) data evaluation, interpretation and storage, and (x) process evaluation and review.
As stated, the developed protocol was tested at a small purification plant situated at the dam wall of the Vaal River Barrage catchment, Gauteng Province. From this assessment it was evident that the steps of the protocol were easy to follow and that the possible risks in the water value chain, i.e. from source water to the supply of purified drinking water, could be identified. Some of the challenges encountered during the application of the protocol included difficulty in obtaining detailed information regarding the activities around the catchment and information on the prevalence of cryptosporidiosis and giardiasis in the local community or in South Africa in general. From this study, it could be concluded that the source water from the Vaal River Barrage Reservoir was high risk. However, the use of the multi-barrier approach, coupled with advanced UV treatment, rendered the drinking water supplied to the local community compliant with the South African Drinking Water Standards for Cryptosporidium and Giardia of less than 1 oocyst/10 L and 1 cyst/10 L. The protocol for the monitoring of Cryptosporidium and Giardia could contribute to the protection of drinking water consumers by identifying high-risk source waters, identifying areas that can be improved in the water treatment system, and protecting the catchment areas from further faecal pollution. Given this outcome, the developed protocol could be used by water utilities as part of their Water Safety Plans to optimize monitoring. Furthermore, this methodology has the potential to contribute to Blue Drop certification, as it should form part of the Water Safety Plans. / Thesis (M. Environmental Management)--North-West University, Potchefstroom Campus, 2011.
|
5 |
Predicting Graft Loss Following Acute Kidney Injury in Patients With a Kidney Transplant. Molnar, Amber. January 2016
Acute kidney injury (AKI), characterized by an abrupt loss of kidney function with retention of nitrogenous waste products, is common in the months to years following kidney transplantation and is associated with an increased risk of transplant failure (graft loss). Kidney transplant patients who experience graft loss and return to dialysis have an increased mortality risk and a lower quality of life. Research involving kidney transplant patients can prove challenging, as this population is relatively small. To increase statistical power, researchers may utilize administrative databases. However, these databases are not designed primarily for research, and knowledge of their limitations is needed, as significant bias can occur. When using administrative databases to study AKI in kidney transplantation, the method used to define AKI should be carefully considered. The power of a study may be greatly increased if AKI can be accurately defined using administrative diagnostic codes, because data on AKI will be universally available for all patients in the database. However, the methods by which diagnostic codes are assigned to a patient allow for error to be introduced. We confirmed that, when compared with the gold standard definition for AKI of a rise in serum creatinine, the diagnostic code for AKI has low sensitivity but high specificity in the kidney transplant population (the best performing coding algorithm had a sensitivity of 42.9% (95% CI 29.7, 56.8) and specificity of 89.3% (95% CI 86.2, 91.8)) (Chapter 3). We therefore determined that for the study outlined in Chapter 4, defining AKI using diagnostic codes would significantly under-capture AKI and misclassify patients. We decided to define AKI using only serum creatinine criteria even though this would limit our sample size (creatinine data were only available for a subset of patients in the administrative databases). In Chapter 4, we derived an index score to predict the risk of graft loss in kidney transplant patients following an admission to hospital with AKI. The index includes six readily available, objective clinical variables that increased the risk of graft loss: increasing age, increased severity of AKI (as defined by the AKIN staging system), failure to recover from AKI, lower baseline estimated glomerular filtration rate, increased time from kidney transplant to AKI admission, and a deceased-donor transplant. The derived index requires validation to assess its utility in the clinical setting.
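As a rough illustration of the validation step summarized above (comparing the administrative diagnostic code for AKI against the serum-creatinine gold standard), sensitivity and specificity follow directly from a 2x2 table. This sketch assumes simple boolean arrays and is not the study's code.

```python
import numpy as np

def sensitivity_specificity(code_flag, creatinine_aki):
    """Validate an administrative AKI code against the serum-creatinine gold standard.

    code_flag      : boolean array, AKI flagged by diagnostic code
    creatinine_aki : boolean array, AKI defined by a rise in serum creatinine
    """
    code = np.asarray(code_flag, dtype=bool)
    gold = np.asarray(creatinine_aki, dtype=bool)
    tp = np.sum(code & gold)    # true positives
    fn = np.sum(~code & gold)   # missed AKI
    tn = np.sum(~code & ~gold)  # true negatives
    fp = np.sum(code & ~gold)   # false alarms
    return tp / (tp + fn), tn / (tn + fp)  # (sensitivity, specificity)
```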
|
6 |
THE RELATIONSHIP BETWEEN CHA2DS2-VASc STROKE RISK SCORES AND COGNITIVE FUNCTION PRE- AND POST-BARIATRIC SURGERY. Rochette, Amber D. 25 April 2017
No description available.
|
7 |
Association between polygenic risk score and risk of myopia. Ghorbani Mojarrad, Neema; Plotnikov, D.; Williams, C.; Guggenheim, J.A. 08 November 2019
Importance: Myopia is a leading cause of untreatable visual impairment and is increasing in prevalence worldwide. Interventions for slowing childhood myopia progression have shown success in randomized clinical trials; hence, there is a need to identify which children would benefit most from treatment intervention.
Objectives: To examine whether genetic information alone can identify children at risk of myopia development and whether including a child’s genetic predisposition to educational attainment is associated with improved genetic prediction of the risk of myopia.
Design, Setting, and Participants: Meta-analysis of 3 genome-wide association studies (GWAS) including a total of 711 984 individuals. These were a published GWAS for educational attainment and 2 GWAS for refractive error in the UK Biobank, which is a multisite cohort study that recruited participants between January 2006 and October 2010. A polygenic risk score was applied in a population-based validation sample examined between September 1998 and September 2000 (Avon Longitudinal Study of Parents and Children [ALSPAC] mothers). Data analysis was performed from February 2018 to May 2019.
Main Outcomes and Measures: The primary outcome was the area under the receiver operating characteristic curve (AUROC) in analyses for predicting myopia, using noncycloplegic autorefraction measurements for myopia severity levels of less than or equal to −0.75 diopter (D) (any), less than or equal to −3.00 D (moderate), or less than or equal to −5.00 D (high). The predictor variable was a polygenic risk score (PRS) derived from genome-wide association study data for refractive error (n = 95 619), age of onset of spectacle wear (n = 287 448), and educational attainment (n = 328 917).
Results: A total of 383 067 adults aged 40 to 69 years from the UK Biobank were included in the new GWAS analyses. The PRS was evaluated in 1516 adults aged 24 to 51 years from the ALSPAC mothers cohort. The PRS had an AUROC of 0.67 (95% CI, 0.65-0.70) for myopia, 0.75 (95% CI, 0.70-0.79) for moderate myopia, and 0.73 (95% CI, 0.66-0.80) for high myopia. Inclusion in the PRS of information associated with genetic predisposition to educational attainment marginally improved the AUROC for myopia (AUROC, 0.674 vs 0.668; P = .02), but not those for moderate and high myopia. Individuals with a PRS in the top 10% were at 6.1-fold higher risk (95% CI, 3.4–10.9) of high myopia.
Conclusions and Relevance: A personalized medicine approach may be feasible for detecting very young children at risk of myopia. However, accuracy must improve further to merit uptake in clinical practice; currently, cycloplegic autorefraction remains a better indicator of myopia risk (AUROC, 0.87). / PhD studentship grant from the College of Optometrists (Drs Guggenheim and Williams; supporting Mr Mojarrad) entitled "Genetic prediction of individuals at-risk for myopia development", and National Institute for Health Research (NIHR) Senior Research Fellowship award SRF-2015-08-005 (Dr Williams). The UK Medical Research Council and Wellcome grant 102215/2/13/2 and the University of Bristol provide core support for the Avon Longitudinal Study of Parents and Children (ALSPAC). A comprehensive list of grant funding is available on the ALSPAC website (http://www.bristol.ac.uk/alspac/external/documents/grant-acknowledgements.pdf). This research was conducted using the UK Biobank Resource (application 17351). The UK Biobank was established by the Wellcome Trust, the UK Medical Research Council, the Department for Health (London, England), the Scottish government (Edinburgh, Scotland), and the Northwest Regional Development Agency (Warrington, England). It also received funding from the Welsh Assembly Government (Cardiff, Wales), the British Heart Foundation, and Diabetes UK.
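To make the prediction metric concrete, the sketch below scores a PRS against myopia case status with an AUROC. It uses synthetic data in place of the ALSPAC validation sample and assumes scikit-learn; it is illustrative only, not the published pipeline.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-in data: a standardized PRS and myopia case status
# (refractive error <= -0.75 D) for a validation sample of 1,516 adults.
prs = rng.normal(size=1516)
p_case = 1.0 / (1.0 + np.exp(-(-1.0 + 0.6 * prs)))   # toy logistic model
is_myopic = rng.random(1516) < p_case

print(f"AUROC for 'any myopia': {roc_auc_score(is_myopic, prs):.2f}")
```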
|
8 |
Genetic Prediction of Myopia in Different Ethnic Ancestries. Ghorbani Mojarrad, Neema; Plotnikov, D.; Williams, C.; Guggenheim, J.A. 23 September 2022
Background: Myopia has been shown to have a complex mode of inheritance, being influenced by both genetic and environmental factors. Here, an introduction to myopia genetics is given, and the shortcomings of current genetic prediction for myopia are discussed, including the proportionally limited research on genetic prediction in people of non-European ancestry. A previously developed genetic risk score derived from European participants was evaluated in participants of non-European ancestry.
Methods: Participants from UK Biobank who self-reported their ethnicity as “Asian”, “Chinese”, or “Black” and who had refractive error and genetic data available were included in the analysis. Ancestral homogeneity was confirmed using principal component analysis, resulting in samples of 3500 Asian, 444 Chinese, and 3132 Black participants. A published refractive error GWAS meta-analysis of 711,984 participants of European ancestry was used to create a weighted genetic risk score model, which was then applied to participants from each ethnic group. Accuracy of genetic prediction of refractive error was estimated as the proportion of variance explained (PVE). Receiver operating characteristic (ROC) curves were developed to estimate myopia prediction performance at three thresholds: any myopia (equal to or more than -0.75 D), moderate myopia (between -3.00 D and -4.99 D) and high myopia (equal to or more than -5.00 D). Odds ratios for myopia were calculated for participants in the top 10th or 5th percentile of the genetic risk score distribution, comparing them to the remainder of the population.
Results: The PVE value for refractive error was 6.4%, 6.2%, and 1.5% for those with Asian, Chinese and Black ethnicity, respectively (compared to 11.2% in Europeans). Odds ratios for any myopia and moderate myopia development for those within the top 10th and 5th percentiles of genetic risk were significant in all ethnic groups (P<0.05). However, the genetic risk score was not able to reliably identify those at risk of high myopia, other than for participants of Chinese ethnicity (P<0.05).
Conclusion: Prediction of refractive error in Asian, Chinese and Black participants was ~57%, 55% and 13% as accurate in comparison to prediction in European participants. Further research in diverse ethnic populations is needed to improve prediction accuracy. / This research has been conducted using the UK Biobank Resource (applications #17351). UK Biobank was established by the Wellcome Trust; the UK Medical Research Council; the Department for Health (London, UK); Scottish Government (Edinburgh, UK); and the Northwest Regional Development Agency (Warrington, UK). It also received funding from the Welsh Assembly Government (Cardiff, UK); the British Heart Foundation; and Diabetes UK. Collection of eye and vision data was supported by The Department for Health through an award made by the NIHR to the Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust, and UCL Institute of Ophthalmology, London, United Kingdom (grant no. BRC2_009). Additional support was provided by The Special Trustees of Moorfields Eye Hospital, London, United Kingdom (grant no. ST 12 09). Many parts of this project were performed during the time that author Neema Ghorbani Mojarrad was supported by the College of Optometrists with a Postgraduate Scholarship.
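A minimal sketch of the two metrics reported here, PVE (taken as the incremental R-squared of the PRS over a covariate-only model for refractive error) and the odds ratio comparing the top decile of the PRS distribution with the remainder, assuming plain NumPy/statsmodels inputs; this is not the published analysis code.

```python
import numpy as np
import statsmodels.api as sm

def prs_pve(refractive_error, prs, covariates):
    """PVE as incremental R^2 of the PRS over a covariate-only model (e.g. age, sex)."""
    base = sm.OLS(refractive_error, sm.add_constant(covariates)).fit()
    full = sm.OLS(refractive_error,
                  sm.add_constant(np.column_stack([covariates, prs]))).fit()
    return full.rsquared - base.rsquared

def top_decile_odds_ratio(is_myopic, prs):
    """Odds of myopia in the top 10% of the PRS distribution versus the rest."""
    myopic = np.asarray(is_myopic, dtype=bool)
    top = np.asarray(prs) >= np.quantile(prs, 0.9)
    a, b = np.sum(myopic & top), np.sum(~myopic & top)
    c, d = np.sum(myopic & ~top), np.sum(~myopic & ~top)
    return (a * d) / (b * c)  # cross-product odds ratio from the 2x2 table
```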
|
9 |
Understanding and applying practitioner and patient views on the implementation of a novel automated Computer-Aided Risk Score (CARS) predicting the risk of death following emergency medical admission to hospital: qualitative study. Dyson, J.; Marsh, C.; Jackson, N.; Richardson, D.; Faisal, Muhammad; Scally, Andy J.; Mohammad, Mohammad A. 11 March 2019
Objectives The Computer-Aided Risk Score (CARS) estimates the risk of death following emergency admission to medical wards using routinely collected vital signs and blood test data. Our aim was to elicit the views of healthcare practitioners (staff) and service users and carers (SU/C) on (1) the potential value, unintended consequences and concerns associated with CARS and practitioner views on (2) the issues to consider before embedding CARS into routine practice.
Setting This study was conducted in two National Health Service (NHS) hospital trusts in the North of England. Both had in-house information technology (IT) development teams, mature IT infrastructure with electronic National Early Warning Score (NEWS) and were capable of integrating NEWS with blood test results. The study focused on emergency medical and elderly admissions units. There were 60 and 39 acute medical/elderly admissions beds at the two NHS hospital trusts.
Participants We conducted eight focus groups with 45 healthcare practitioners and two with 11 SU/Cs in two NHS acute hospitals.
Results Staff and SU/Cs recognised the potential of CARS but were clear that the score should not replace or undermine clinical judgments. Staff recognised that CARS could enhance clinical decision-making/judgments and aid communication with patients. They wanted to understand the components of CARS and be reassured about its accuracy but were concerned about the impact on intensive care and blood tests.
Conclusion Risk scores are widely used in healthcare, but their development and implementation do not usually involve input from practitioners and SU/Cs. We contributed to the development of CARS by eliciting the views of staff and SU/Cs, who provided important, often complex, insights to support the development of CARS and ensure its successful implementation in routine clinical practice. / Health Foundation, National Institute for Health Research (NIHR) Yorkshire and Humber Patient Safety Translational Research Centre (NIHR Yorkshire and Humber PSTRC)
|
10 |
A prospective study of consecutive emergency medical admissions to compare a novel automated computer-aided mortality risk score and clinical judgement of patient mortality risk. Faisal, Muhammad; Khatoon, Binish; Scally, Andy J.; Richardson, D.; Irwin, S.; Davidson, R.; Heseltine, D.; Corlett, A.; Ali, J.; Hampson, R.; Kesavan, S.; McGonigal, G.; Goodman, K.; Harkness, M.; Mohammed, Mohammed A. 25 August 2020
Objectives: To compare the performance of a validated automatic computer-aided risk of mortality (CARM) score versus medical judgement in predicting the risk of in-hospital mortality for patients following emergency medical admission. Design: A prospective study. Setting: Consecutive emergency medical admissions in York hospital. Participants: Elderly medical admissions in one ward were assigned a risk of death at the first post-take ward round by consultant staff over a 2-week period. The consultant medical staff used the same variables to assign a risk of death to the patient as the CARM (age, sex, National Early Warning Score and blood test results) but also had access to the clinical history, examination findings and any immediately available investigations such as ECGs. The performance of the CARM versus consultant medical judgement was compared using the c-statistic and the positive predictive value (PPV). Results: The in-hospital mortality was 31.8% (130/409). For patients with complete blood test results, the c-statistic for CARM was 0.75 (95% CI: 0.69 to 0.81) versus 0.72 (95% CI: 0.66 to 0.78) for medical judgements (p=0.28). For patients with at least one missing blood test result, the c-statistics were similar (medical judgements 0.70 (95% CI: 0.60 to 0.81) vs CARM 0.70 (95% CI: 0.59 to 0.80)). At a 10% mortality risk, the PPV for CARM was higher than medical judgements in patients with complete blood test results, 62.0% (95% CI: 51.2 to 71.9) versus 49.2% (95% CI: 39.8 to 58.5) but not when blood test results were missing, 50.0% (95% CI: 24.7 to 75.3) versus 53.3% (95% CI: 34.3 to 71.7). Conclusions: CARM is comparable with medical judgements in discriminating in-hospital mortality following emergency admission to an elderly care ward. CARM may have a promising role in supporting medical judgements in determining the patient's risk of death in hospital. Further evaluation of CARM in routine practice is required. / Supported by the Health Foundation, National Institute for Health Research (NIHR) Yorkshire and Humberside Patient Safety Translational Research Centre (NIHR YHPSTRC).
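For illustration, the two comparison metrics used in this study, the c-statistic and the PPV at a 10% predicted-risk threshold, can be computed as in the sketch below; the inputs are hypothetical and this is not the study's code.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def c_statistic_and_ppv(died, predicted_risk, threshold=0.10):
    """c-statistic (AUROC) and positive predictive value at a risk threshold."""
    died = np.asarray(died, dtype=bool)
    risk = np.asarray(predicted_risk, dtype=float)
    c_stat = roc_auc_score(died, risk)
    flagged = risk >= threshold                 # patients flagged as high risk
    ppv = died[flagged].mean() if flagged.any() else float("nan")
    return c_stat, ppv

# Hypothetical use: score the automated CARM predictions and the consultant
# estimates against observed in-hospital deaths, then compare the two.
# c_carm, ppv_carm = c_statistic_and_ppv(died, carm_risk)
# c_md, ppv_md = c_statistic_and_ppv(died, consultant_risk)
```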
|