331 |
Endotoxins detection and control in drinking water systems. Parent Uribe, Santiago. January 2007 (has links)
No description available.
|
332 |
Rule-based Decision Support System For Sensor Deployment In Drinking Water Networks. Prapinpongsanone, Natthaphon 01 January 2011 (has links)
Drinking water distribution systems are inherently vulnerable to malicious contamination events and to environmental health concerns such as total trihalomethanes (TTHMs), lead, and chlorine residual. In response to the need for long-term monitoring, one of the most significant challenges currently facing the water industry is to investigate sensor placement strategies with modern concepts of and approaches to risk management. This study develops a Rule-based Decision Support System (RBDSS) to generate sensor deployment strategies without the computational burden oftentimes encountered in large-scale optimization analyses. Three rules were derived to address efficacy and efficiency characteristics: 1) intensity, 2) accessibility, and 3) complexity. To retrieve information on population exposure, a well-calibrated EPANET model was applied for vulnerability assessment. Graph theory was applied to implement the complexity rule, eliminating the need to deal with temporal variability. In case study 1, implementation potential was assessed with a sensitivity analysis using a small-scale drinking water network in rural Kentucky, United States. The RBDSS was also applied to two networks, one small-scale and one large-scale, from “The Battle of the Water Sensor Networks” (BWSN) in order to compare its performance with that of other models. In case study 2, the RBDSS was modified by implementing four objective indexes: the expected time of detection (Z1), the expected population affected prior to detection (Z2), the expected consumption of contaminated water prior to detection (Z3), and the detection likelihood (Z4). These indexes were used to evaluate the RBDSS’s performance against other models in the Network 1 analysis of BWSN. Lastly, in case study 3, weighted optimization was applied to the large-scale water distribution analysis of Network 2 in BWSN.
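The complexity rule relies on a static graph metric of the network rather than time-dependent hydraulic simulation. As a hedged illustration only (the abstract does not name the metric; betweenness centrality is my assumption here, and the node names are invented), a small sketch with NetworkX:

```python
# Hypothetical sketch of a graph-theoretic "complexity" rule: rank junction
# nodes of a toy water network by betweenness centrality, a static proxy
# that needs no time-varying hydraulic data. Node names are illustrative.
import networkx as nx

# Toy distribution network: nodes are junctions, edges are pipes.
G = nx.Graph()
G.add_edges_from([
    ("tank", "J1"), ("J1", "J2"), ("J1", "J3"),
    ("J2", "J4"), ("J3", "J4"), ("J4", "J5"), ("J5", "J6"),
])

# Betweenness centrality scores how often a node lies on shortest paths;
# high-scoring junctions are natural sensor candidates under this rule.
scores = nx.betweenness_centrality(G)
candidates = sorted(scores, key=scores.get, reverse=True)
print(candidates[:3])  # → ['J4', 'J1', 'J5']
```

Under this metric the junction bridging the two halves of the network ranks first, which matches the intuition that a sensor there observes the most flow paths.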
|
333 |
In-plant And Distribution System Corrosion Control For Reverse Osmosis, Nanofiltration, And Anion Exchange Process Blends. Jeffery, Samantha 01 January 2013 (has links)
The integration of advanced technologies into existing water treatment facilities (WTFs) can improve and enhance water quality; however, these same modifications or improvements may adversely affect the finished water provided to the consumer by public water systems (PWSs) that embrace these advanced technologies. Process modifications or improvements may unintentionally impact compliance with the provisions of the United States Environmental Protection Agency’s (USEPA’s) Safe Drinking Water Act (SDWA). This is especially true with respect to corrosion control, since minor changes in water quality can affect metal release. Changes in metal release can have a direct impact on a water purveyor’s compliance with the SDWA’s Lead and Copper Rule (LCR). In 2010, the Town of Jupiter (Town) decommissioned its ageing lime softening (LS) plant and integrated a nanofiltration (NF) plant into its WTF. The removal of the LS process subsequently decreased the pH in the existing reverse osmosis (RO) clearwell, leaving only RO permeate and anion exchange (AX) effluent to blend. The Town believed that the RO-AX blend was corrosive in nature and that blending with NF permeate would alleviate this concern. Consequently, a portion of the NF permeate stream was to be split between the existing RO-AX clearwell and a newly constructed NF primary clearwell. The Town requested that the University of Central Florida (UCF) conduct research evaluating how to mitigate negative impacts that may result from changing water quality, should the Town place its AX into ready-reserve. The research presented in this document was focused on the evaluation of corrosion control alternatives for the Town, and was segmented into two major components: 1. The first component of the research studied internal corrosion within the existing RO clearwell and appurtenances of the Town’s WTF, should the Town place the AX process on standby.
Research related to WTF in-plant corrosion control focused on blending NF and RO permeate, forming a new intermediate blend, and pH-adjusting the resulting mixture to reduce corrosion in the RO clearwell. 2. The second component was implemented with respect to the Town’s potable water distribution system. The distribution system corrosion control research evaluated various phosphate-based corrosion inhibitors to determine their effectiveness in reducing mild steel, lead and copper release in order to maintain the Town’s continual compliance with the LCR. The primary objective of the in-plant corrosion control research was to determine the appropriate ratio of RO to NF permeate and the pH necessary to reduce corrosion in the RO clearwell. In this research, the Langelier saturation index (LSI) was the corrosion index used to evaluate the stability of RO:NF blends. Results indicated that a pH-adjusted blend consisting of 70% RO and 30% NF permeate at 8.8-8.9 pH units would produce an LSI of +0.1, theoretically protecting the RO clearwell from corrosion. The primary objective of the distribution system corrosion control component of the research was to identify a corrosion control inhibitor that would further reduce the lead and copper metal release observed in the Town’s distribution system to below their respective action limits (ALs) as defined in the LCR. Six alternative inhibitors composed of various orthophosphate and polyphosphate (ortho:poly) ratios were evaluated sequentially using a corrosion control test apparatus. The apparatus was designed to house mild steel, lead and copper coupons used for weight loss analysis, as well as mild steel, lead solder and copper electrodes used for linear polarization analysis.
One side of the apparatus, referred to as the “control condition,” was fed potable water that did not contain a corrosion inhibitor, while the other side, termed the “test condition,” was fed potable water that had been dosed with a corrosion inhibitor. Corrosion rate measurements were taken twice per weekday, and water quality was measured twice per week. Inhibitor evaluations were conducted over a span of 55 to 56 days, varying with each inhibitor. Coupons and electrodes were pre-corroded to simulate existing distribution system conditions. Water flow to the apparatus was controlled with an on/off timer to represent variations in the system and homes. Inhibitor comparisons were made based on their effectiveness at reducing lead and copper release after chemical addition. Based on the results obtained from the assessment of corrosion inhibitors for distribution system corrosion control, it appears that Inhibitors 1 and 3 were more successful in reducing lead corrosion rates, and each of these inhibitors reduced copper corrosion rates. Also, it is recommended that consideration be given to the use of a redundant single-loop duplicate test apparatus in lieu of a double-rack corrosion control test apparatus in experiments where pre-corrosion phases are implemented. This recommendation is offered because, statistically, the control-versus-test double loop may not provide relevant comparisons in data analysis. The Wilcoxon signed-rank test comparing the initial pre-corroding phase to the inhibitor effectiveness phase has proven to be a more useful analytical method for corrosion studies.
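The Langelier saturation index used in the in-plant work is defined as LSI = pH - pHs, where pHs is the saturation pH with respect to calcium carbonate. A hedged sketch using the common empirical pHs correlation; the water-quality inputs below are illustrative values chosen to land near the study's +0.1 target, not the Town's measured data:

```python
# Approximate Langelier Saturation Index (LSI = pH - pHs) via the common
# empirical pHs correlation. Inputs are illustrative, not thesis data.
import math

def langelier_si(ph, tds_mg_l, temp_c, ca_hardness_mg_l_caco3, alkalinity_mg_l_caco3):
    """Return LSI; positive values suggest scale-forming (protective) water."""
    a = (math.log10(tds_mg_l) - 1) / 10.0                 # TDS factor
    b = -13.12 * math.log10(temp_c + 273.15) + 34.55      # temperature factor
    c = math.log10(ca_hardness_mg_l_caco3) - 0.4          # calcium factor
    d = math.log10(alkalinity_mg_l_caco3)                 # alkalinity factor
    phs = (9.3 + a + b) - (c + d)
    return ph - phs

# Hypothetical blend water at the study's target pH of 8.8-8.9:
print(round(langelier_si(8.85, 250, 25, 60, 25), 2))  # ≈ 0.1
```

A slightly positive LSI, as targeted in the study, indicates water that tends to deposit a thin protective CaCO3 film rather than dissolve metal.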
|
334 |
Impact Of Zinc Orthophosphate Inhibitor On Distribution System Water Quality. Guan, Xiaotao 01 January 2007 (has links)
This dissertation consists of four papers concerning the impacts of zinc orthophosphate (ZOP) inhibitor on iron, copper and lead release in a changing water quality environment. The mechanism of zinc orthophosphate corrosion inhibition in drinking water municipal and home distribution systems and the role of zinc were investigated. Fourteen identical pilot distribution systems (PDSs), consisting of increments of PVC, lined cast iron, unlined cast iron and galvanized steel pipes, were used in this study. Changing quarterly blends of finished ground, surface and desalinated waters were fed into the pilot distribution systems over a one-year period. Zinc orthophosphate inhibitor at three different doses was applied to three PDSs. Water quality and iron, copper and lead scale formation were monitored for the one-year study duration. The first article describes the effects of ZOP corrosion inhibitor on the surface characteristics of iron corrosion products in a changing water quality environment. Surface compositions of iron scales on iron and galvanized steel coupons incubated in different blended waters in the presence of ZOP inhibitor were investigated using X-ray Photoelectron Spectroscopy (XPS) and Scanning Electron Microscopy (SEM) with Energy Dispersive X-ray Spectroscopy (EDS). Based on surface characterization, predictive equilibrium models were developed to describe the controlling solid phase and mechanism of ZOP inhibition and the role of zinc in iron release. The second article describes the effects of ZOP corrosion inhibitor on total iron release in a changing water quality environment. Development of empirical models for total iron release as a function of water quality and ZOP inhibitor dose, together with mass balance analysis of total zinc and total phosphorus data, provided insight into the mechanism of ZOP corrosion inhibition regarding iron release in drinking water distribution systems.
The third article describes the effects of ZOP corrosion inhibitor on total copper release in a changing water quality environment. Empirical model development was undertaken to predict total copper release as a function of water quality and inhibitor dose. Thermodynamic models for dissolved copper, based on surface characterization of scale generated on copper coupons exposed to ZOP inhibitor, were also developed. Surface composition was determined by X-ray Photoelectron Spectroscopy (XPS). The fourth article describes the effects of ZOP corrosion inhibitor on total lead release in a changing water quality environment. Surface characterization by XPS of lead scale on coupons exposed to ZOP inhibitor was utilized to identify scale composition. Development of a thermodynamic model for lead release based on the surface analysis results provided insight into the mechanism of ZOP inhibition and the role of zinc.
|
335 |
Occurrence of Per- and Polyfluoroalkyl Substances (PFAS) in Private Water Supplies in Southwest Virginia. Hohweiler, Kathleen A. 24 May 2023 (has links)
Per- and polyfluoroalkyl substances (PFAS) are a class of man-made contaminants of increasing human health concern due to their resistance to degradation, widespread occurrence in the environment, bioaccumulation in human and animal organ tissue, and potential negative health impacts. Drinking water is suspected to be a primary source of human PFAS exposure, so the US Environmental Protection Agency (US EPA) has set interim and final health advisories for several PFAS species that are applicable to municipal water supplies. However, private drinking water supplies may be uniquely vulnerable to PFAS contamination, as these systems are not subject to EPA regulation and often include limited treatment prior to use for drinking or cooking. The goal of this study was to determine the incidence of PFAS contamination in private drinking water supplies in two counties in Southwest Virginia (Floyd and Roanoke), and to examine the potential for reliance on citizen-science-based strategies for sample collection in subsequent broader sampling efforts. Samples for inorganic ions, bacteria, and PFAS analysis were collected on separate occasions by homeowners and experts at the home drinking water point of use (POU) in 10 Roanoke and 10 Floyd County homes for comparison. Experts also collected an outside tap PFAS sample. At least one PFAS compound was detected in 76% of POU samples collected (n=60), with an average total PFAS concentration of 23.5 parts per trillion (ppt). PFOA and PFOS, which are currently included in EPA health advisories, were detected in 13% and 22% of POU samples, respectively. Of the 31 PFAS species targeted, 15 were detected in at least one sample. On average, a single POU sample contained approximately 3 PFAS, and one sample contained as many as 8 different species, indicating that exposure to PFAS occurs in complex mixtures.
Although there were significant differences in total PFAS concentrations between expert- and homeowner-collected samples (Wilcoxon, alpha = 0.05), it is unclear whether this difference was due to contamination by the collector or to water usage and the time of day of sampling (i.e., morning or afternoon). It is worth noting that there was no significant difference in the number of PFAS species in the samples collected by homeowners and experts. Given the considerable variation in PFAS detections between homes, future studies relying on homeowner collection of samples appear feasible given proper training and instruction to collect at the same time of day (i.e., first thing in the morning). / Master of Science / Per- and polyfluoroalkyl substances (PFAS) belong to a large family of manmade compounds that are commonly used in a variety of household and consumer products due to their unique water- and stain-resistant properties. PFAS compounds are not easily broken down in the environment and have been detected globally in air, soil, and water samples. In addition to their environmental detections, PFAS are slow to be removed from the body after ingestion and known to cause negative health effects at concentrations less than one part per trillion. Drinking water is considered to be a main source of PFAS consumption for humans; as such, the US Environmental Protection Agency (US EPA) has set strict, but not legally binding, interim and final health advisories (HA) for four types of PFAS. However, these health advisories only apply to public water services and do not cover private drinking water systems, such as wells or springs, which are the full responsibility of the well owner. Private drinking water system users often do not treat their water before drinking, which may make these systems uniquely vulnerable to PFAS contamination.
This study focused on 20 total homes, 10 in Roanoke County and 10 in Floyd County, to see if PFAS were present and to determine whether homeowners, given proper instructions, would be able to collect their own samples for PFAS analysis at home as accurately as researchers or experts. Homeowners and experts collected drinking water samples inside at a point of use (POU), usually at a kitchen faucet, and outside of the home, usually from a tap. PFAS were present in 76% (n=60) of POU samples, with an average combined concentration of 23.5 parts per trillion (ppt). The two most well-studied PFAS, PFOA and PFOS, were detected in 13% and 22% of POU samples, respectively. It was also common to detect at least 3 PFAS in a single sample. Although there were differences in total average concentrations of PFAS in samples collected by homeowners and experts, the variation could be caused by several factors, indicating that with proper training and instruction, future studies could likely still rely on homeowners to collect samples for PFAS analysis.
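The paired homeowner-versus-expert comparison described above uses a Wilcoxon signed-rank test. A hedged sketch with SciPy; the concentrations below are fabricated for illustration and are not the study's data:

```python
# Paired Wilcoxon signed-rank test on homeowner- vs expert-collected total
# PFAS (ppt) from the same homes. All numbers are invented for illustration.
from scipy.stats import wilcoxon

homeowner = [18.2, 25.1, 30.4, 12.1, 44.7, 21.3, 9.8, 27.5, 33.1, 16.4]
expert    = [15.0, 22.8, 28.9, 10.5, 40.2, 19.9, 8.1, 25.0, 30.7, 14.2]

stat, p = wilcoxon(homeowner, expert)
print(f"W={stat}, p={p:.4f}")
# A p-value below alpha = 0.05 would indicate a systematic difference between
# collectors, as the study reported for total concentrations.
```

Because the test is paired by home, it isolates the collector effect from the large home-to-home variation in PFAS levels noted in the abstract.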
|
336 |
Optimization and verification of changes made to US-EPA 1623 Method to analyse for the presence of Cryptosporidium and Giardia in waterKhoza, M. N. L. (Mtetwa) 03 1900 (has links)
Thesis. (M. Tech. (Dept. of Biosciences, Faculty of Applied and Computer Sciences))--Vaal University of Technology, 2010 / Methods for detecting the presence of Cryptosporidium oocysts and Giardia cysts have been developed and are continually being improved to increase the recovery rate of the target protozoa. Rand Water has adopted its method for isolation and detection of Cryptosporidium oocysts and Giardia cysts in water from the United States Environmental Protection Agency (US-EPA) Method 1623 (1999). In 2005, the US-EPA made changes to Method 1623.
A study was done to improve the performance of the Rand Water Method 06 (2007) used for isolation and detection of Cryptosporidium oocysts and Giardia cysts. Three methods namely: Rand Water Method 06 (2007), US-EPA Method 1623 (2005) and Drinking Water Inspectorate standard operating procedures (2003) were compared and key different steps in the methods were identified (wrist action speed, centrifuge speed, immunomagnetic separation procedures and addition of pre-treatment steps). Different experiments were conducted to verify and evaluate the difference between two wrist action shaker speeds, three different centrifuge speeds, two slightly different immunomagnetic separation procedures and when a pre-treatment step was included in the method.
Three different types of water matrices (reagent grade water, drinking water and raw water) were used for the experiments and secondary validation. Data obtained from the experiments and secondary validation was statistically analyzed to determine whether there was a significant difference in the recovery of Cryptosporidium oocysts and Giardia cysts. Secondary validation of the Rand Water Method 06 (2007) was performed by implementing the study experiments’ findings into the method.
The results indicated an increase in the recovery rate of Cryptosporidium oocysts and Giardia cysts when data was compared with the previous secondary validation report. The mean recovery of Cryptosporidium oocysts in reagent grade water samples increased from 31% to 55%, drinking water samples increased from 28% to 44% and raw water decreased from 42% to 29%. The mean recovery of Giardia cysts in reagent grade water samples increased from 31% to 41%, drinking water samples increased from 28% to 46% and raw water decreased from 42% to 32%.
Furthermore, even though the recovery rate for raw water decreased, the use of the pre-treatment buffer reduced the number of IMS procedures performed per sample by reducing the pellet size. Enumeration of microscope slides was also easier, as there was less background interference. The optimization of the Rand Water Method 06 (2007) was successful, as the recovery rate of Cryptosporidium oocysts and Giardia cysts from water increased. All the changes that were verified and that increased the recovery rate were incorporated into the improved Rand Water Method 06.
|
337 |
Granular Media Supported Microbial Remediation of Nitrate Contaminated Drinking Water. Malini, R January 2014 (has links) (PDF)
Increasing nitrate concentration in ground water from improper disposal of sewage and excessive use of fertilizers is deleterious to human health, as ingestion of nitrate-contaminated water can cause methaemoglobinemia in infants and possibly cancer in adults. The permissible limit for nitrate in potable water is 45 mg/L. Unacceptable levels of nitrate in groundwater are an important environmental issue, as nearly 80% of the Indian rural population depends on groundwater as a source of drinking water. Though numerous technologies exist, such as reverse osmosis, ion exchange, electro-dialysis, and permeable reactive barriers using zero-valent iron, nitrate removal from water using affordable, sustainable technology continues to be a challenging issue, as the nitrate ion is not amenable to precipitation or removal by mineral adsorbents. Tapping the denitrification potential of soil denitrifiers, which are inherently available in the soil matrix, is a possible sustainable approach to remove nitrate from contaminated drinking water.
In situ denitrification is a useful process to remove NO3–N from water and wastewater. In biological denitrification, nitrate ions function as the terminal electron acceptor instead of oxygen; the carbon source serves as the electron donor, and the energy generated in the redox process is utilized for microbial cell growth and maintenance. In this process, microorganisms first reduce nitrate to nitrite and then produce nitric oxide, nitrous oxide, and nitrogen gas. The pathway for nitrate reduction can be written as:
NO3⁻ → NO2⁻ → NO → N2O → N2 (i)
In situ denitrification utilizing indigenous soil microbes is the technique chosen in this thesis for nitrate removal from drinking water. As the presence of clay in soil promotes bacterial activity, bentonite clay was mixed with natural sand; this mix, referred to as bentonite enhanced sand (BES), acted as the habitat for the denitrifying bacteria. Nitrate reduction experiments were carried out in batch studies using laboratory-prepared nitrate-contaminated water spiked with ethanol; the batch studies examined the mechanisms, kinetics and parameters influencing the heterotrophic denitrification process. Optimum conditions for effective nitrate removal by sand and bentonite enhanced sand (BES) were evaluated. Heterotrophic denitrification reactors were constructed with sand and BES as porous media, and the efficiency of these reactors in removing nitrate from contaminated water was studied.
Batch experiments were performed at 40°C with sand and bentonite enhanced sand specimens that were wetted with nutrient solution containing 22.6 mg of nitrate-nitrogen and ethanol to give a C/N ratio of 3. The moist sand and BES specimens were incubated for periods ranging from 0 to 48 h. During nitrate reduction, nitrite ions were formed as an intermediate by-product and were converted to gaseous nitrogen. There was little formation of ammonium ions in the soil–water extract during reduction of nitrate ions. Hence it was inferred that nitrate reduction occurred by denitrification rather than through dissimilatory nitrate reduction to ammonium (DNRA).
The reduction in nitrate concentration with time was fitted to rate equations and was observed to follow first-order kinetics with a rate constant of 0.118 h-1 at 40°C. Results of batch studies also showed that the first-order rate constant for nitrate reduction decreased to 5.3×10-2 h-1 for sand and 4.3×10-2 h-1 for bentonite-enhanced sand (BES) at 25°C. Changes in pH, redox potential and dissolved oxygen in the soil-solution extract served as indicators of the nitrate reduction process. The nitrate reduction process was associated with increasing pH and decreasing redox potential. The oxygen depletion process followed first-order kinetics with a rate constant of 0.26 h-1. From the first-order rate equation of the oxygen depletion process, the nitrate reduction lag time was computed to be 12.8 h for bentonite enhanced sand specimens. Ethanol added as an electron donor formed acetate ions as an intermediate by-product that converted to bicarbonate ions; one mole of nitrate reduction generated 1.93 moles of bicarbonate ions, which increased the pH of the soil-solution extract.
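First-order kinetics imply a simple exponential decay, C(t) = C0·exp(-kt). A short sanity check using the reported rate constants and the 45 mg/L potable limit; the 100 mg/L starting concentration matches the column studies later in this abstract:

```python
# First-order decay: time for nitrate to fall from c0 to c_target at rate k.
import math

def time_to_reach(c0, c_target, k_per_h):
    """Hours for a first-order process to fall from c0 to c_target (mg/L)."""
    return math.log(c0 / c_target) / k_per_h

# Batch-study constant at 40 °C (k = 0.118 h^-1): dropping a 100 mg/L
# nitrate solution below the 45 mg/L potable limit takes:
print(round(time_to_reach(100, 45, 0.118), 1))  # ≈ 6.8 h

# The slower 25 °C constants imply proportionally longer times:
print(round(time_to_reach(100, 45, 5.3e-2), 1))  # sand, ≈ 15.1 h
print(round(time_to_reach(100, 45, 4.3e-2), 1))  # BES,  ≈ 18.6 h
```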
The alkaline pH of the BES specimen (8.78) rendered it an ideal substrate for the soil denitrification process. In addition, bentonite stimulated respiration by maintaining adequate pH levels for sustained bacterial growth, and it protected bacteria in its microsites against the effects of hypertonic osmotic pressures, promoting the rate of denitrification. The buffering capacity of bentonite was mainly due to the high cation exchange capacity of the clay. The presence of small pores in BES specimens increased the water retention capacity, which aided in the quick onset of anaerobiosis within the soil microsites.
The biochemical process of nitrate reduction was affected by physical parameters such as bentonite content, water content, and temperature, and by chemical parameters such as C/N ratio, initial nitrate concentration, and the presence of indigenous micro-organisms in contaminated water. The rate of the nitrate reduction process progressively increased with bentonite content, but the presence of bentonite retarded the conversion of nitrite ions to nitrogen gas; hence there was significant accumulation of nitrite ions with increase in bentonite content. The dependence of the nitrate reduction process on water content was controlled by the degree of saturation of the soil specimens. The rate of the nitrate reduction process increased with water content until the specimens were saturated. The threshold water content for the nitrate reduction process for sand and bentonite enhanced sand specimens was observed to be 50%. The rate of nitrate reduction increased linearly with C/N ratio until steady state was attained. The optimum C/N ratio was 3 for both sand and bentonite enhanced sand specimens. The activation energy (Ea) for this biochemical reaction was 35.72 and 47.12 kJ mol-1 for the sand and BES specimens respectively. The temperature coefficient (Q10) is a measure of the rate of change of a biological or chemical system as a consequence of increasing the temperature by 10°C. The temperature coefficient of the sand and BES specimens was 2.0 and 2.05 respectively in the 15–25°C range; the temperature coefficients of the sand and BES specimens were 1.62 and 1.77 respectively in the 25–40°C range.
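Under an Arrhenius model, the reported activation energies imply temperature coefficients in the same range as those above. A quick cross-check; note the Arrhenius form is my assumption for illustration (the thesis may have computed Q10 directly from rate-constant ratios), so the values land close to, not exactly on, the reported 1.62 and 1.77:

```python
# Arrhenius-based temperature coefficient: Q10 = exp(Ea * 10 / (R * T1 * T2)),
# with T1, T2 the interval endpoints in kelvin.
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def q10_from_ea(ea_j_mol, t1_c, t2_c):
    t1, t2 = t1_c + 273.15, t2_c + 273.15
    return math.exp(ea_j_mol * 10 / (R * t1 * t2))

# Activation energies reported for the sand and BES specimens, 25-40 °C range:
print(round(q10_from_ea(35_720, 25, 40), 2))  # sand, near the reported 1.62
print(round(q10_from_ea(47_120, 25, 40), 2))  # BES,  near the reported 1.77
```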
The rate of nitrate reduction linearly decreased with increase in initial nitrate concentration. The biochemical process of nitrate reduction was unaffected by presence of co-ions and nutrients such as phosphorus but was influenced by presence of pathogenic bacteria.
Since nitrate leaching from agricultural lands is the main source of nitrate contamination in ground water, batch experiments were performed to examine the role of the vadose (unsaturated soil) zone in nitrate mitigation, employing sand and BES specimens with degree of soil saturation and C/N ratio as controlling parameters. Batch studies with sand and BES specimens showed that the incubation period required to reduce nitrate concentrations below 45 mg/L (t45) strongly depends on the degree of saturation when there is inadequate carbon source available to support denitrifying bacteria; once the optimum C/N ratio is provided, the rate of denitrification becomes independent of the degree of soil saturation. The theoretical lag time (the period required for denitrification to commence) for nitrate reduction for sand specimens at Sr = 81 and 90%, C/N ratio = 3 and temperature = 40ºC corresponded to 24.4 h and 23.1 h respectively. The lag time for BES specimens at Sr = 84 and 100%, C/N ratio = 3 and temperature = 40ºC corresponded to 13.9 h and 12.8 h respectively. Though the theoretically computed nitrate reduction lag time for BES specimens was nearly half that of sand specimens, it was experimentally observed that nitrate reduction proceeded immediately without any lag phase in both sand and BES specimens, suggesting the simultaneous occurrence of anaerobic microsites in both.
Denitrification soil columns (height = 5 cm, diameter = 8.2 cm) were constructed using sand and bentonite-enhanced sand as porous reactor media. The columns were permeated with nitrate-spiked solutions (100 mg/L) and the outflow was monitored for various chemical parameters. The sand denitrification column (packing density of 1.3 Mg/m3) showed low nitrate removal efficiency because of the low hydraulic residence time (1.32 h) and the absence of a carbon source. A modified sand denitrification column constructed with a higher packing density (1.52 Mg/m3) and ethanol addition to the influent nitrate solution improved the reactor performance such that near-complete nitrate removal was achieved after the passage of 50 pore volumes. In comparison, the BES denitrification column achieved 87.3% nitrate removal after the passage of 28.9 pore volumes, corresponding to 86 h of operation of the BES reactor. This period represents the maturation period of a bentonite enhanced sand bed containing 10% bentonite. Though nitrate reduction is favored by a sand bed containing 10% bentonite, the low flow rate (20-25 cm3/h) impedes its use for large-scale removal of nitrate from drinking water. Hence a new reactor was designed using a lower bentonite content of 5%, which required a maturation period of 9.6 h. The 5 and 10% bentonite-enhanced sand reactor beds required shorter maturation periods than the sand reactor, as the presence of bentonite increases the hydraulic retention time of nitrate within the reactor. On continued operation of the BES reactors, a reduction in flow rate was observed due to blocking of pores by microbial growth on soil particles and accumulation of gas molecules; this was resolved by backwashing the reactors.
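The column geometry above allows a back-of-envelope check on pore volume and residence time. The particle density below is an assumed 2.65 Mg/m3, typical for quartz sand and not a value from the thesis:

```python
# Pore volume and implied flow for the sand column (5 cm tall, 8.2 cm
# diameter, bulk density 1.3 Mg/m^3). Particle density 2.65 Mg/m^3 is an
# assumption (typical quartz sand), not thesis data.
import math

height_cm, diameter_cm = 5.0, 8.2
bulk_density, particle_density = 1.3, 2.65  # Mg/m^3

total_volume = math.pi * (diameter_cm / 2) ** 2 * height_cm  # cm^3
porosity = 1 - bulk_density / particle_density
pore_volume = total_volume * porosity

print(f"total volume ≈ {total_volume:.0f} cm^3")   # ≈ 264 cm^3
print(f"porosity ≈ {porosity:.2f}")                # ≈ 0.51
print(f"pore volume ≈ {pore_volume:.0f} cm^3")     # ≈ 134 cm^3

# Flow rate that would reproduce the reported 1.32 h residence time:
print(f"implied flow ≈ {pore_volume / 1.32:.0f} cm^3/h")
```

Under these assumptions the implied flow is roughly 100 cm3/h, which illustrates why the 20-25 cm3/h achievable with 10% bentonite gives a much longer retention time than the plain sand column.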
|
338 |
Chronic disease risks from prolonged exposure to metals and disinfection byproducts at sub-regulatory levels in California’s community water supplies. Medgyesi, Danielle Nicolle January 2025 (has links)
In the United States, over 90 contaminants in community water supplies (CWS) are regulated based on maximum contaminant levels (MCLs) set by the Environmental Protection Agency under the Safe Drinking Water Act. These limits are crucial to the health of the over 90% of the US population who rely on CWS for their drinking water. Despite advances in drinking water safety, questions remain about the potential role of prolonged exposure to contaminants at sub-regulatory levels in chronic disease. Historically, conducting epidemiologic studies of drinking water exposures in the United States has been challenging due to the fragmented availability of CWS service areas and contaminant information, which varies depending on each state’s efforts.
This dissertation attempts to overcome some of these barriers by collaborating with long-standing institutes in California to evaluate the relationship between drinking water contaminants (arsenic, uranium, and trihalomethanes) and the risks of cardiovascular disease (CVD) and chronic kidney disease (CKD) in a large prospective cohort. The California Teachers Study (CTS) cohort is comprised of over 130,000 women living across the state and followed for health outcomes, including CVD and CKD, since enrollment (1995-1996). The California Office of Environmental Health Hazard Assessment (OEHHA) houses some of the most detailed information about CWS available in the United States. With their partnership, we consolidated three decades (1990-2020) of yearly contaminant data from CWS. Thanks to a statewide effort that gathered service boundary data from local agencies, we were able to identify the CWS serving participants’ residential addresses. Ultimately, these efforts produced new drinking water exposure data available in the CTS cohort, accessible for analyses of associated health outcomes.
Chapter 1 provides an overview of the novel contributions and methods of this dissertation, and background knowledge about the three common drinking water contaminants under study—arsenic, uranium, and trihalomethanes. The three epidemiologic studies included in this dissertation were designed to evaluate the relationship between these contaminants and health outcomes, selected based on previous toxicologic evidence. To this end, we detail current knowledge on the relationships between a) arsenic and CVD, b) uranium and arsenic and CKD, and c) trihalomethanes and CKD.
Chapter 2 details our efforts to construct residential histories of CTS participants using address data collected throughout follow-up (1995-2018). Environmental epidemiologic studies using geospatial data often estimate exposure at a participant’s residence upon enrollment, but mobility during the exposure period can lead to misclassification. We aimed to mitigate this issue using address records that were self-reported or collected from the US Postal Service, LexisNexis, Experian, and the California Cancer Registry. We identified records of the same address based on geo-coordinate distance (≤250 m) and street name similarity. We consolidated addresses, prioritizing those confirmed by participants during follow-up questionnaires, and estimated the duration lived at each address using dates associated with the records (e.g., date-first-seen). During 23 years of follow-up, about half of participants moved (48%, including 14% out-of-state).
We observed greater mobility among younger women, Hispanic or Latina women, and those in metropolitan and lower socioeconomic status areas. The cumulative proportion of in-state movers remaining eligible for analysis was 21%, 32%, and 41% at 5-, 10-, and 20-years post-enrollment, respectively. Using self-reported information collected 10 years after enrollment, we correctly identified 94% of self-identified movers and 95% of non-movers as having moved or not moved from their enrollment address. This dataset provides a foundation for estimating long-term exposure to drinking water contaminants evaluated in this dissertation, and supports other epidemiologic studies of diverse environmental exposures and health outcomes in this cohort.
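The address-matching rule described above (geocodes within 250 m plus similar street names) can be sketched roughly as follows. This is a minimal illustration with hypothetical records; the 0.8 name-similarity cutoff is an assumption, not a parameter from the study, and the actual consolidation also weighed record sources and participant confirmations.

```python
import math
from difflib import SequenceMatcher

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6_371_000  # Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def same_address(rec_a, rec_b, max_dist_m=250, min_name_sim=0.8):
    """Treat two address records as the same residence when their geocodes
    fall within max_dist_m and their street names are sufficiently similar.
    The 0.8 similarity threshold here is illustrative only."""
    dist = haversine_m(rec_a["lat"], rec_a["lon"], rec_b["lat"], rec_b["lon"])
    sim = SequenceMatcher(None, rec_a["street"].lower(),
                          rec_b["street"].lower()).ratio()
    return dist <= max_dist_m and sim >= min_name_sim

# Two hypothetical records of the same home, geocoded ~15 m apart
a = {"lat": 34.0522, "lon": -118.2437, "street": "123 W Main St"}
b = {"lat": 34.0523, "lon": -118.2438, "street": "123 West Main St"}
print(same_address(a, b))  # True: close geocodes, similar street names
```

In practice, pairwise matching like this would be applied within a participant's pool of records before consolidating them into a residential timeline.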
Chapter 3 details our first epidemiologic analysis evaluating the relationship between long-term arsenic exposure from CWS and CVD risk in the CTS cohort. Inorganic arsenic in drinking water is linked to atherosclerosis and cardiovascular disease. However, risk is uncertain at lower levels present in CWS, currently regulated at the federal maximum contaminant level of 10µg/L. Using statewide healthcare administrative records from enrollment through follow-up (1995-2018), we identified fatal and nonfatal cases of ischemic heart disease (IHD) and CVD (including stroke). Participants’ residential addresses were linked to a network of CWS boundaries and annual arsenic concentrations (1990-2020). Most participants resided in areas served by a CWS (92%). Exposure was calculated as a time-varying, 10-year moving average up to a participant’s event, death, or end of follow-up.
Using multivariable-adjusted Cox models, we estimated hazard ratios (HRs) and 95% confidence intervals (95%CIs) for the risk of IHD or CVD. We evaluated arsenic exposure categorized by concentration thresholds relevant to regulation standards (<1.00, 1.00-2.99, 3.00-4.99, 5.00-9.99, ≥10µg/L) and continuously using a log2-transformation (i.e., per doubling). We also stratified analyses by age, body mass index (BMI), and smoking status.
This analysis included 98,250 participants, among whom 6,119 IHD cases and 9,936 CVD cases occurred. The HRs for IHD across concentration thresholds (ref: <1µg/L) were 1.06 (95%CI=1.00-1.12) at 1.00-2.99µg/L, 1.05 (95%CI=0.94-1.17) at 3.00-4.99µg/L, 1.20 (95%CI=1.02-1.41) at 5.00-9.99µg/L, and 1.42 (95%CI=1.10-1.84) at ≥10µg/L. The HRs per doubling of water arsenic (wAs) exposure were 1.04 (95%CI=1.02-1.06) for IHD and 1.02 (95%CI=1.01-1.04) for CVD. We observed significantly stronger risk among those ≤55 versus >55 years of age at enrollment (pinteraction=0.006 and 0.012 for IHD and CVD, respectively). This study demonstrates that long-term arsenic exposure from CWS, at and below the regulatory limit, may increase cardiovascular disease risk, particularly IHD.
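The exposure metric used in this analysis, a 10-year moving average of annual CWS concentrations with a log2 transform so that a one-unit increase corresponds to a doubling, can be illustrated with a small sketch. The annual values below are hypothetical, and the real analysis handled time-varying exposure within the Cox model rather than as a single number per participant.

```python
import math

def ten_year_moving_average(annual, end_year, window=10):
    """Mean of annual concentrations over the `window` years ending at
    end_year (inclusive); years missing from the record are skipped."""
    years = [y for y in range(end_year - window + 1, end_year + 1) if y in annual]
    return sum(annual[y] for y in years) / len(years)

# Hypothetical annual arsenic concentrations (µg/L) for one CWS, 1995-2004
annual_as = dict(zip(range(1995, 2005),
                     [2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0, 5.5, 6.0, 6.5]))

avg = ten_year_moving_average(annual_as, end_year=2004)
print(round(avg, 2))             # 10-year average up to the event/censoring year
print(round(math.log2(avg), 3))  # log2 scale: +1 unit corresponds to a doubling
```

On the log2 scale, a hazard ratio reported "per doubling" is simply the HR for a one-unit increase in the transformed exposure.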
Chapter 4 details our second epidemiologic analysis, evaluating uranium and arsenic from CWS and CKD risk in the CTS cohort. Metals/metalloids in drinking water, including uranium and arsenic, have been linked to adverse kidney effects and may contribute to CKD risk, but few epidemiologic studies exist. Annual average concentrations of uranium and arsenic were obtained for CWS serving participants' residential address(es). We calculated each participant's average exposure from enrollment in 1995-1996 through 2005. CKD cases were ascertained from inpatient hospitalization records from 2005, once the relevant diagnostic coding was adopted, through 2018.
Our analysis included 6,185 moderate to end stage CKD cases among 88,185 women. We evaluated exposure categorized by concentration thresholds relevant to regulatory standards, up to ½ the current regulatory limit (uranium=15µg/L; arsenic=5µg/L), and continuously on the log scale per interquartile range (IQR). We used mixed-effect multivariable-adjusted Cox models to estimate HRs and 95%CIs of CKD by uranium or arsenic levels.
We also conducted analyses stratified by risk factors and comorbidities. Exposures at the 50th (25th, 75th) percentiles were 3.1 (0.9, 5.6) µg/L for uranium, and 1.0 (0.6, 1.8) µg/L for arsenic. Higher uranium exposure, relative to <2µg/L, was associated with CKD risk, with HRs of 1.20 (95%CI=1.07-1.35) at 2.0-<5.0µg/L, 1.08 (95%CI=0.95-1.22) at 5.0-<10.0µg/L, 1.33 (95%CI=1.15-1.54) at 10.0-<15.0µg/L, and 1.32 (95%CI=1.09-1.58) at ≥15µg/L (ptrend=0.024). We found no overall association between arsenic and CKD (log IQR; HR=1.02, 95%CI=0.98-1.07). However, risk from arsenic was statistically different by age and comorbidity status, with risk only observed among younger individuals (≤55 years), and those who developed cardiovascular disease or diabetes. Uranium exposure from drinking water below the current regulatory limit may increase CKD risk. Relatively low, chronic exposure to arsenic may affect kidney function among those with comorbidities.
Chapter 5 details our third and final epidemiologic analysis evaluating trihalomethanes in residential CWS and CKD risk in the CTS cohort. Disinfection byproducts from water chlorination, including trihalomethanes (THMs), have been associated with bladder cancer and adverse birth outcomes. Despite mechanistic evidence of nephrotoxic effects, especially brominated THMs, no epidemiologic studies to date have evaluated CKD risk.
This study included 89,158 women with 6,232 moderate to end stage CKD cases identified from statewide healthcare administrative records (2005-2018). Average concentrations of four THMs, including three brominated THMs, were calculated for CWS serving participants' residential addresses from 1995-2005. We estimated HRs and 95%CIs using mixed-effect multivariable-adjusted Cox models. A g-computation mixture analysis approach was used to estimate the overall effect and the relative contributions of brominated THMs, chloroform (the non-brominated THM), as well as uranium and arsenic (the other potentially nephrotoxic metals in CWS evaluated previously). Median (25th, 75th, 95th percentile) concentrations were 5.5 (0.5, 24.1, 57.8) µg/L for total THMs and 2.7 (0.6, 11.3, 30.0) µg/L for brominated THMs. In flexible exposure-response models, we observed a positive relationship between total THMs and CKD risk, which was stronger for brominated THMs. The HRs (95%CIs) of CKD risk from brominated THMs at the highest two exposure categories (75th-94th, ≥95th, versus <25th) were 1.23 (1.13-1.33) and 1.43 (1.23-1.66), respectively; ptrend<0.001. Brominated THMs were the largest contributor (53%) to the overall mixture effect on CKD risk, followed by uranium (35%), arsenic (6%), and chloroform (5%). Trihalomethanes in water, in particular brominated trihalomethanes which are not regulated separately, may contribute to CKD development, even at levels below the current US regulatory limit (80µg/L).
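The percentile-based exposure categories used in this analysis (<25th, 25th-74th, 75th-94th, ≥95th) can be illustrated with a small sketch. The cutpoints below are the brominated THM percentiles reported above; the function name and the example concentrations are illustrative only.

```python
def exposure_category(value, p25, p75, p95):
    """Assign a concentration to the percentile-based exposure categories
    used in the THM analysis: <25th, 25th-74th, 75th-94th, >=95th."""
    if value < p25:
        return "<25th"
    if value < p75:
        return "25th-74th"
    if value < p95:
        return "75th-94th"
    return ">=95th"

# Brominated THM percentile cutpoints reported above (µg/L)
P25, P75, P95 = 0.6, 11.3, 30.0

# Hypothetical participant averages spanning the four categories
for conc in (0.3, 5.0, 20.0, 35.0):
    print(conc, exposure_category(conc, P25, P75, P95))
```

Hazard ratios are then estimated per category against the lowest (<25th percentile) group, which serves as the referent.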
Chapter 6 concludes this dissertation by summarizing our findings, highlighting the policy implications, relevance to other populations, and discussing future directions. Recently, the US EPA has released a geospatial dataset of CWS boundaries across the country that can be used in conjunction with national contaminant data. This development underscores the growing recognition for more research on drinking water quality and health. We hope that the methods developed and used in our analyses will be informative to future studies, and that there will be opportunities for replication of our findings to better inform policy and protect the health of populations nationwide.
|
339 |
Antimicrobial contaminant removal by multi-stage drinking water filtration
Rooklidge, Stephen J. 07 May 2004 (has links)
The fate of antimicrobials entering the aquatic environment is an increasing concern for
researchers and regulators, and recent research has focused on antimicrobial
contamination from point sources, such as wastewater treatment facility outfalls. The
terraccumulation of antimicrobials and mobility in diffuse pollution pathways should not
be overlooked as a contributor to the spread of bacterial resistance and the resulting threat
to human drug therapy. This review critically examines recent global trends of bacterial
resistance, antimicrobial contaminant pathways from agriculture and water treatment
processes, and the need to incorporate diffuse pathways into risk assessment and
treatment system design.
Slow sand filters are used in rural regions where source water may be subjected to
antimicrobial contaminant loads from waste discharges and diffuse pollution. A simple
model was derived to describe removal efficiencies of antimicrobials in slow sand
filtration and calculate antimicrobial concentrations sorbed to the schmutzdecke at the
end of a filtration cycle. Input parameters include water quality variables easily
quantified by water system personnel and published adsorption, partitioning, and
photolysis coefficients. Simulation results for three classes of antimicrobials suggested
greater than 4-log removal from 1 µg/L influent concentrations in the top 30 cm of the
sand column, with schmutzdecke concentrations comparable to land-applied biosolids.
Sorbed concentrations of the antimicrobial tylosin fed to a pilot filter were within one
order of magnitude of the predicted concentration.
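The log-removal convention used to report these simulation results (each log unit is a tenfold reduction) translates influent to effluent concentrations as in the sketch below. This is just the definition of log removal applied to the figures quoted above, not the slow sand filtration model itself.

```python
def effluent_conc(influent_ug_l, log_removal):
    """Effluent concentration after a given log removal:
    each log unit corresponds to a tenfold reduction."""
    return influent_ug_l / (10 ** log_removal)

# A 4-log removal of a 1 µg/L influent, as in the simulation results above
print(effluent_conc(1.0, 4))  # 0.0001 µg/L, i.e. 0.1 ng/L in the effluent
```

Equivalently, log removal can be read off measured data as log10(influent/effluent), which is how the >4-log figure for the top 30 cm of the sand column would be expressed.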
To investigate the behavior of antimicrobial contaminants during multi-stage filtration,
five compounds from four classes of antimicrobials were applied to a mature slow sand
filter and roughing filter fed raw water from the Santiam River in Oregon during a 14-day
challenge study. Antimicrobial removal efficiency of the filters was calculated from 0.2
mg/L influent concentrations using HPLC MS/MS, and sorption coefficients (K_d, K_oc,
K_om) were calculated for schmutzdecke collected from a mature filter column.
Sulfonamides had low sorption coefficients and were largely unaffected by multi-stage
filtration. Lincomycin, trimethoprim, and tylosin exhibited higher sorption coefficients
and limited mobility within the slow sand filter column. The lack of a significant
increase in overall antimicrobial removal efficiency indicated biodegradation is less
significant than sorption in multi-stage filtration. / Graduation date: 2004
|
340 |
Kvalita pitné vody určené k hromadnému zásobování obyvatel / The Quality of Drinking Water in Public Distribution Systems
SOMPEKOVÁ, Zuzana January 2010 (has links)
This research project was aimed at monitoring the quality of drinking water supplied to the inhabitants of small villages. The quality of drinking water produced by small waterworks in South Bohemia, in the municipalities of Mazelov, Ortvínovice, Doubravka and Rábín, was studied. Sanitary analyses of drinking water samples carried out by the waterworks operators in 2004-2009 showed some variability in the concentrations of free chlorine, nitrates, pH, turbidity and the content of Escherichia coli in all the waterworks during the investigated period. The hypothesis assuming that the quality of drinking water produced by water treatment from small water sources is stable throughout the year, without variation in key indicators such as nitrates and the content of Escherichia coli, was not confirmed. The other hypothesis, assuming that the number of small water sources used for public drinking water supplies decreased over this period, was confirmed. The causes of these changes depend on many factors, including the location and source of the drinking water, the type of treatment plant and, not least, the quality of service and the economic capacity of the waterworks operators.
|