331

Endotoxins detection and control in drinking water systems

Parent Uribe, Santiago. January 2007 (has links)
No description available.
332

Rule-based Decision Support System For Sensor Deployment In Drinking Water Networks

Prapinpongsanone, Natthaphon 01 January 2011 (has links)
Drinking water distribution systems are inherently vulnerable to malicious contamination events and to environmental health concerns such as total trihalomethanes (TTHMs), lead, and chlorine residual. In response to the need for long-term monitoring, one of the most significant challenges currently facing the water industry is to investigate sensor placement strategies with modern concepts of and approaches to risk management. This study develops a Rule-based Decision Support System (RBDSS) to generate sensor deployment strategies without the computational burden often encountered in large-scale optimization analyses. Three rules were derived to address efficacy and efficiency characteristics: 1) the intensity rule, 2) the accessibility rule, and 3) the complexity rule. To retrieve information on population exposure, a well-calibrated EPANET model was applied to demonstrate the vulnerability assessment. Graph theory was applied to implement the complexity rule, eliminating the need to deal with temporal variability. In case study 1, implementation potential was assessed using a small-scale drinking water network in rural Kentucky, United States, together with a sensitivity analysis. The RBDSS was also applied to two networks, one small-scale and one large-scale, from “The Battle of the Water Sensor Network” (BWSN) in order to compare its performance with that of other models. In case study 2, the RBDSS was modified to implement four objective indexes: the expected time of detection (Z1), the expected population affected prior to detection (Z2), the expected consumption of contaminated water prior to detection (Z3), and the detection likelihood (Z4). These indexes were used to evaluate the RBDSS's performance and compare it to other models in the Network 1 analysis of the BWSN. Lastly, weighted optimization was applied to the large water distribution network analysis in case study 3, Network 2 of the BWSN.
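The abstract does not state how the rule scores or the four objective indexes are combined, so the following is only a minimal illustrative sketch of a weighted-sum ranking over candidate sensor nodes using objectives analogous to Z1-Z4; the node identifiers, values, weights, and normalization choices are all invented for demonstration and are not taken from the thesis.

```python
# Hypothetical weighted-sum ranking of candidate sensor nodes.
# All node data, weights, and the normalization scheme are assumptions
# made for illustration only.

def score_candidates(nodes, weights):
    """Rank candidate nodes by a weighted sum of normalized objectives."""
    keys = ["time_to_detect", "population_affected",
            "volume_consumed", "detection_likelihood"]
    lo = {k: min(n[k] for n in nodes.values()) for k in keys}
    hi = {k: max(n[k] for n in nodes.values()) for k in keys}

    def norm(k, v):
        # Scale to [0, 1]; detection likelihood is a benefit (higher is
        # better), the other three objectives are costs (lower is better).
        span = hi[k] - lo[k]
        x = 0.0 if span == 0 else (v - lo[k]) / span
        return x if k == "detection_likelihood" else 1.0 - x

    scores = {nid: sum(weights[k] * norm(k, vals[k]) for k in keys)
              for nid, vals in nodes.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Example with invented junction data and equal weights:
candidates = {
    "J12": {"time_to_detect": 45, "population_affected": 1200,
            "volume_consumed": 300, "detection_likelihood": 0.80},
    "J37": {"time_to_detect": 30, "population_affected": 2500,
            "volume_consumed": 550, "detection_likelihood": 0.65},
}
weights = {k: 0.25 for k in ("time_to_detect", "population_affected",
                             "volume_consumed", "detection_likelihood")}
print(score_candidates(candidates, weights))
```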
333

In-plant And Distribution System Corrosion Control For Reverse Osmosis, Nanofiltration, And Anion Exchange Process Blends

Jeffery, Samantha 01 January 2013 (has links)
The integration of advanced technologies into existing water treatment facilities (WTFs) can improve and enhance water quality; however, these same modifications or improvements may adversely affect finished water provided to the consumer by public water systems (PWSs) that embrace these advanced technologies. Process modifications or improvements may unintentionally impact compliance with the provisions of the United States Environmental Protection Agency’s (USEPA’s) Safe Drinking Water Act (SDWA). This is especially true with respect to corrosion control, since minor changes in water quality can affect metal release. Changes in metal release can have a direct impact on a water purveyor’s compliance with the SDWA’s Lead and Copper Rule (LCR). In 2010, the Town of Jupiter (Town) decommissioned its ageing lime softening (LS) plant and integrated a nanofiltration (NF) plant into its WTF. The removal of the LS process subsequently decreased the pH in the existing reverse osmosis (RO) clearwell, leaving only RO permeate and anion exchange (AX) effluent to blend. The Town believed that the RO-AX blend was corrosive in nature and that blending with NF permeate would alleviate this concern. Consequently, a portion of the NF permeate stream was to be split between the existing RO-AX clearwell and a newly constructed NF primary clearwell. The Town requested that the University of Central Florida (UCF) conduct research evaluating how to mitigate negative impacts that may result from changing water quality, should the Town place its AX into ready-reserve. The research presented in this document was focused on the evaluation of corrosion control alternatives for the Town, and was segmented into two major components: 1. The first component of the research studied internal corrosion within the existing RO clearwell and appurtenances of the Town’s WTF, should the Town place the AX process on standby. Research related to WTF in-plant corrosion control focused on blending NF and RO permeate, forming a new intermediate blend, and pH-adjusting the resulting mixture to reduce corrosion in the RO clearwell. 2. The second component was implemented with respect to the Town’s potable water distribution system. The distribution system corrosion control research evaluated various phosphate-based corrosion inhibitors to determine their effectiveness in reducing mild steel, lead and copper release in order to maintain the Town’s continual compliance with the LCR. The primary objective of the in-plant corrosion control research was to determine the appropriate ratio of RO to NF permeate and the pH necessary to reduce corrosion in the RO clearwell. In this research, the Langelier saturation index (LSI) was the corrosion index used to evaluate the stability of RO:NF blends. Results indicated that a pH-adjusted blend consisting of 70% RO and 30% NF permeate at 8.8-8.9 pH units would produce an LSI of +0.1, theoretically protecting the RO clearwell from corrosion. The primary objective of the distribution system corrosion control component of the research was to identify a corrosion control inhibitor that would further reduce the lead and copper metal release observed in the Town’s distribution system to below their respective action limits (ALs) as defined in the LCR. Six alternative inhibitors composed of various orthophosphate and polyphosphate (ortho:poly) ratios were evaluated sequentially using a corrosion control test apparatus.
The apparatus was designed to house mild steel, lead and copper coupons used for weight loss analysis, as well as mild steel, lead solder and copper electrodes used for linear polarization analysis. One side of the apparatus, referred to as the “control condition,” was fed potable water that did not contain the corrosion inhibitor, while the other side of the corrosion apparatus, termed the “test condition,” was fed potable water that had been dosed with a corrosion inhibitor. Corrosion rate measurements were taken twice per weekday, and water quality was measured twice per week. Inhibitor evaluations were conducted over a span of 55 to 56 days, varying with each inhibitor. Coupons and electrodes were pre-corroded to simulate existing distribution system conditions. Water flow to the apparatus was controlled with an on/off timer to represent variations in the system and homes. Inhibitor comparisons were made based on their effectiveness at reducing lead and copper release after chemical addition. Based on the results obtained from the assessment of corrosion inhibitors for distribution system corrosion control, it appears that Inhibitors 1 and 3 were more successful in reducing lead corrosion rates, and each of these inhibitors reduced copper corrosion rates. Also, it is recommended that consideration be given to the use of a redundant single-loop duplicate test apparatus in lieu of a double-rack corrosion control test apparatus in experiments where pre-corrosion phases are implemented. This recommendation is offered because, statistically, the control-versus-test double loop may not provide relevance in data analysis. The use of the Wilcoxon signed ranks test comparing the initial pre-corroding phase to the inhibitor effectiveness phase has proven to be a more useful analytical method for corrosion studies.
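For context, the Langelier saturation index used above is conventionally defined as the difference between the measured pH and the calcium carbonate saturation pH; the sketch below gives that standard form, not the specific saturation-pH correlation used in the thesis.

```latex
% Standard definition of the Langelier saturation index (LSI); pH_s is the
% pH at which the water is saturated with respect to CaCO3. The thesis's
% particular pH_s correlation is not given in the abstract.
\[
  \mathrm{LSI} = \mathrm{pH} - \mathrm{pH}_s
\]
% LSI > 0 indicates a scale-forming (protective) tendency and LSI < 0 a
% corrosive tendency, so the reported 70% RO / 30% NF blend adjusted to
% pH 8.8-8.9 with LSI = +0.1 sits just on the protective side of saturation.
```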
334

Impact Of Zinc Orthophosphate Inhibitor On Distribution System Water Quality

Guan, Xiaotao 01 January 2007 (has links)
This dissertation consists of four papers concerning the impacts of zinc orthophosphate (ZOP) inhibitor on iron, copper and lead release in a changing water quality environment. The mechanism of zinc orthophosphate corrosion inhibition in drinking water municipal and home distribution systems and the role of zinc were investigated. Fourteen identical pilot distribution systems (PDSs), each consisting of increments of PVC, lined cast iron, unlined cast iron and galvanized steel pipes, were used in this study. Changing quarterly blends of finished ground, surface and desalinated waters were fed into the pilot distribution systems over a one-year period. Zinc orthophosphate inhibitor at three different doses was applied to three PDSs. Water quality and iron, copper and lead scale formation were monitored for the one-year study duration. The first article describes the effects of zinc orthophosphate (ZOP) corrosion inhibitor on the surface characteristics of iron corrosion products in a changing water quality environment. Surface compositions of iron scales for iron and galvanized steel coupons incubated in different blended waters in the presence of ZOP inhibitor were investigated using X-ray Photoelectron Spectroscopy (XPS) and Scanning Electron Microscopy (SEM) / Energy Dispersive X-ray Spectroscopy (EDS). Based on surface characterization, predictive equilibrium models were developed to describe the controlling solid phase, the mechanism of ZOP inhibition and the role of zinc in iron release. The second article describes the effects of ZOP corrosion inhibitor on total iron release in a changing water quality environment. Development of empirical models for total iron release as a function of water quality and ZOP inhibitor dose, together with mass balance analysis of total zinc and total phosphorus data, provided insight into the mechanism of ZOP corrosion inhibition with regard to iron release in drinking water distribution systems. The third article describes the effects of ZOP corrosion inhibitor on total copper release in a changing water quality environment. Empirical model development was undertaken for prediction of total copper release as a function of water quality and inhibitor dose. Thermodynamic models for dissolved copper, based on surface characterization of scales generated on copper coupons exposed to ZOP inhibitor, were also developed. Surface composition was determined by X-ray Photoelectron Spectroscopy (XPS). The fourth article describes the effects of ZOP corrosion inhibitor on total lead release in a changing water quality environment. Surface characterization of lead scale on coupons exposed to ZOP inhibitor by X-ray Photoelectron Spectroscopy (XPS) was used to identify scale composition. Development of a thermodynamic model for lead release based on the surface analysis results provided insight into the mechanism of ZOP inhibition and the role of zinc.
335

Occurrence of Per- and Polyfluoroalkyl Substances (PFAS) in Private Water Supplies in Southwest Virginia

Hohweiler, Kathleen A. 24 May 2023 (has links)
Per- and polyfluoroalkyl substances (PFAS) are a class of man-made contaminants of increasing human health concern due to their resistance to degradation, widespread occurrence in the environment, bioaccumulation in human and animal organ tissue, and potential negative health impacts. Drinking water is suspected to be a primary source of human PFAS exposure, so the US Environmental Protection Agency (US EPA) has set interim and final health advisories for several PFAS species that are applicable to municipal water supplies. However, private drinking water supplies may be uniquely vulnerable to PFAS contamination, as these systems are not subject to EPA regulation and often include limited treatment prior to use for drinking or cooking. The goal of this study was to determine the incidence of PFAS contamination in private drinking water supplies in two counties in Southwest Virginia (Floyd and Roanoke), and to examine the potential for reliance on citizen-science-based strategies for sample collection in subsequent broader sampling efforts. Samples for inorganic ions, bacteria, and PFAS analysis were collected on separate occasions by homeowners and experts at the home drinking water point of use (POU) in 10 Roanoke and 10 Floyd County homes for comparison. Experts also collected an outside tap PFAS sample. At least one PFAS compound was detected in 76% of POU samples collected (n=60), with an average total PFAS concentration of 23.5 parts per trillion (ppt). PFOA and PFOS, which are currently included in EPA health advisories, were detected in 13% and 22% of POU samples, respectively. Of the 31 PFAS species targeted, 15 were detected in at least one sample. On average, a single POU sample contained approximately 3 PFAS, and one sample contained as many as 8 different species, indicating that exposure to PFAS occurs as complex mixtures. Although there were significant differences in total PFAS concentrations between expert- and homeowner-collected samples (Wilcoxon, alpha = 0.05), it is unclear whether this difference was due to contamination by the collector or to differences in water usage and the time of day of sampling (i.e., morning or afternoon). It is worth noting that there was no significant difference in the number of PFAS species in the samples collected by homeowners and experts. Given the considerable variation in PFAS detections between homes, future studies reliant on homeowner collection of samples appear possible with proper training and instruction to collect at the same time of day (i.e., first thing in the morning). / Master of Science / Per- and polyfluoroalkyl substances (PFAS) belong to a large family of manmade compounds that are commonly used in a variety of household and consumer products due to their unique water- and stain-resistant properties. PFAS compounds are not easily broken down in the environment and have been detected globally in air, soil, and water samples. In addition to their environmental detections, PFAS are slow to be removed from the body after ingestion and are known to cause negative health effects at concentrations of less than one part per trillion. Drinking water is considered to be a main source of PFAS consumption for humans; as such, the US Environmental Protection Agency (US EPA) has set strict, but not legally binding, interim and final health advisories (HA) for four types of PFAS.
However, these health advisories only apply to public water services and do not cover private drinking water systems, such as wells or springs, which are the full responsibility of the well owner. Private drinking water system users often do not treat their water before drinking, which may make these systems uniquely vulnerable to PFAS contamination. This study focused on 20 homes in total, 10 in Roanoke County and 10 in Floyd County, to see whether PFAS were present and to determine whether, with proper instructions, homeowners could collect their own samples for PFAS analysis at home as accurately as researchers or experts. Homeowners and experts collected drinking water samples inside at a point of use (POU), usually at a kitchen faucet, and outside of the home, usually from a tap. PFAS were present in 76% (n=60) of POU samples, with an average combined concentration of 23.5 parts per trillion (ppt). The two most well-studied PFAS, PFOA and PFOS, were detected in 13% and 22% of POU samples, respectively. It was also common to detect at least 3 PFAS in a single sample. Although there were differences in the total average concentrations of PFAS in samples collected by homeowners and experts, this variation could be caused by several factors, indicating that, with proper training and instruction, future studies could likely still rely on homeowners to collect samples for PFAS analysis.
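The abstract names a Wilcoxon test at alpha = 0.05 for the homeowner-versus-expert comparison; the sketch below shows what such a paired comparison of total PFAS concentrations could look like. All sample values are invented for illustration.

```python
# Illustrative paired Wilcoxon signed-rank comparison of total PFAS
# concentrations (ppt) in homeowner- versus expert-collected samples
# from the same homes. All numbers below are invented.
from scipy.stats import wilcoxon

homeowner_total_ppt = [12.3, 3.2, 45.1, 8.7, 22.0, 5.4, 31.2, 2.1, 18.9, 27.5]
expert_total_ppt    = [10.1, 4.0, 50.3, 6.2, 25.4, 4.8, 28.0, 1.1, 20.2, 30.0]

stat, p_value = wilcoxon(homeowner_total_ppt, expert_total_ppt)
alpha = 0.05
print(f"W = {stat:.1f}, p = {p_value:.3f}")
print("Significant difference between collectors" if p_value < alpha
      else "No significant difference between collectors")
```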
336

Optimization and verification of changes made to US-EPA 1623 Method to analyse for the presence of Cryptosporidium and Giardia in water

Khoza, M. N. L. (Mtetwa) 03 1900 (has links)
Thesis. (M. Tech. (Dept. of Biosciences, Faculty of Applied and Computer Sciences))--Vaal University of Technology, 2010 / Methods for detecting the presence of Cryptosporidium oocysts and Giardia cysts have been developed, and continuous work is being done to improve the recovery rate of the target protozoa. Rand Water has adopted its method for the isolation and detection of Cryptosporidium oocysts and Giardia cysts in water from the United States Environmental Protection Agency (US-EPA) Method 1623 (1999). In 2005, the US-EPA made changes to Method 1623. A study was done to improve the performance of the Rand Water Method 06 (2007) used for the isolation and detection of Cryptosporidium oocysts and Giardia cysts. Three methods, namely the Rand Water Method 06 (2007), US-EPA Method 1623 (2005) and the Drinking Water Inspectorate standard operating procedures (2003), were compared and key differing steps in the methods were identified (wrist action shaker speed, centrifuge speed, immunomagnetic separation procedures and the addition of pre-treatment steps). Different experiments were conducted to verify and evaluate the difference between two wrist action shaker speeds, three different centrifuge speeds, two slightly different immunomagnetic separation procedures, and the inclusion of a pre-treatment step in the method. Three different types of water matrices (reagent grade water, drinking water and raw water) were used for the experiments and secondary validation. Data obtained from the experiments and secondary validation were statistically analyzed to determine whether there was a significant difference in the recovery of Cryptosporidium oocysts and Giardia cysts. Secondary validation of the Rand Water Method 06 (2007) was performed by implementing the study experiments' findings into the method. The results indicated an increase in the recovery rate of Cryptosporidium oocysts and Giardia cysts when the data were compared with the previous secondary validation report. The mean recovery of Cryptosporidium oocysts in reagent grade water samples increased from 31% to 55%, in drinking water samples increased from 28% to 44%, and in raw water decreased from 42% to 29%. The mean recovery of Giardia cysts in reagent grade water samples increased from 31% to 41%, in drinking water samples increased from 28% to 46%, and in raw water decreased from 42% to 32%. Furthermore, even though the recovery rate for raw water decreased, the use of the pre-treatment buffer reduced the number of IMS procedures performed per sample by reducing the pellet size. Enumeration of microscope slides was also easier as there was less background interference. The optimization of the Rand Water Method 06 (2007) was successful as the recovery rate of Cryptosporidium oocysts and Giardia cysts from water increased. All the changes that were verified and that increased the recovery rate were incorporated into the improved Rand Water Method 06.
337

Granular Media Supported Microbial Remediation of Nitrate Contaminated Drinking Water

Malini, R January 2014 (has links) (PDF)
Increasing nitrate concentration in ground water from improper disposal of sewage and excessive use of fertilizers is deleterious to human health, as ingestion of nitrate-contaminated water can cause methaemoglobinemia in infants and possibly cancer in adults. The permissible limit for nitrate in potable water is 45 mg/L. Unacceptable levels of nitrate in groundwater are an important environmental issue, as nearly 80% of the Indian rural population depends on groundwater as its source of drinking water. Though numerous technologies such as reverse osmosis, ion exchange, electro-dialysis and permeable reactive barriers using zero-valent iron exist, nitrate removal from water using affordable, sustainable technology continues to be a challenging issue, as the nitrate ion is not amenable to precipitation or removal by mineral adsorbents. Tapping the denitrification potential of soil denitrifiers, which are inherently available in the soil matrix, is a possible sustainable approach to remove nitrate from contaminated drinking water. In situ denitrification is a useful process to remove NO3–N from water and wastewater. In biological denitrification, nitrate ions function as the terminal electron acceptor instead of oxygen; the carbon source serves as electron donor and the energy generated in the redox process is utilized for microbial cell growth and maintenance. In this process, microorganisms first reduce nitrate to nitrite and then produce nitric oxide, nitrous oxide, and nitrogen gas. The pathway for nitrate reduction can be written as NO3⁻ → NO2⁻ → NO → N2O → N2. In situ denitrification occurring in soil environments, utilizing indigenous soil microbes, is the technique chosen for nitrate removal from drinking water in this thesis. As the presence of clay in soil promotes bacterial activity, bentonite clay was mixed with natural sand, and this mix, referred to as bentonite enhanced sand (BES), acted as the habitat for the denitrifying bacteria. Nitrate reduction experiments were carried out in batch studies using laboratory-prepared nitrate-contaminated water spiked with ethanol; the batch studies examined the mechanisms, kinetics and parameters influencing the heterotrophic denitrification process. Optimum conditions for effective nitrate removal by sand and bentonite enhanced sand (BES) were evaluated. Heterotrophic denitrification reactors were constructed with sand and BES as porous media and the efficiency of these reactors in removing nitrate from contaminated water was studied. Batch experiments were performed at 40°C with sand and bentonite enhanced sand specimens that were wetted with nutrient solution containing 22.6 mg of nitrate-nitrogen and ethanol to give a C/N ratio of 3. The moist sand and BES specimens were incubated for periods ranging from 0 to 48 h. During nitrate reduction, nitrite ions were formed as an intermediate by-product and were converted to gaseous nitrogen. There was little formation of ammonium ions in the soil–water extract during reduction of nitrate ions. Hence it was inferred that nitrate reduction occurred by denitrification rather than through dissimilatory nitrate reduction to ammonium (DNRA). The reduction in nitrate concentration with time was fitted to rate equations and was observed to follow first-order kinetics with a rate constant of 0.118 h⁻¹ at 40°C. Results of the batch studies also showed that the first-order rate constant for nitrate reduction decreased to 5.3 × 10⁻² h⁻¹ for sand and 4.3 × 10⁻² h⁻¹ for bentonite-enhanced sand (BES) at 25°C.
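As a rough illustration of the first-order kinetics reported above, the standard integrated rate law and the half-lives implied by the quoted rate constants are shown below; the derived half-lives are back-of-envelope figures, not values stated in the thesis.

```latex
% Integrated first-order rate law for nitrate decay; C_0 is the initial
% concentration and k the first-order rate constant.
\[
  C(t) = C_0\, e^{-kt}, \qquad t_{1/2} = \frac{\ln 2}{k}
\]
% With the reported k = 0.118 h^{-1} at 40 C, the implied half-life is
% ln(2)/0.118 ~ 5.9 h; with k = 5.3 x 10^{-2} h^{-1} for sand at 25 C it
% lengthens to about 13 h.
```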
Changes in pH, redox potential and dissolved oxygen in the soil–solution extract served as indicators of the nitrate reduction process. The nitrate reduction process was associated with increasing pH and decreasing redox potential. The oxygen depletion process followed first-order kinetics with a rate constant of 0.26 h⁻¹. From the first-order rate equation of the oxygen depletion process, the nitrate reduction lag time was computed to be 12.8 h for bentonite enhanced sand specimens. Ethanol added as an electron donor formed acetate ions as an intermediate by-product that converted to bicarbonate ions; one mole of nitrate reduced generated 1.93 moles of bicarbonate ions, which increased the pH of the soil–solution extract. The alkaline pH of the BES specimen (8.78) rendered it an ideal substrate for the soil denitrification process. In addition, bentonite stimulated respiration by maintaining adequate pH levels for sustained bacterial growth and protected bacteria in its microsites against the effect of hypertonic osmotic pressures, promoting the rate of denitrification. The buffering capacity of bentonite was mainly due to the high cation exchange capacity of the clay. The presence of small pores in BES specimens increased the water retention capacity, which aided the quick onset of anaerobiosis within the soil microsites. The biochemical process of nitrate reduction was affected by physical parameters such as bentonite content, water content and temperature, and by chemical parameters such as C/N ratio, initial nitrate concentration and the presence of indigenous micro-organisms in the contaminated water. The rate of the nitrate reduction process progressively increased with bentonite content, but the presence of bentonite retarded the conversion of nitrite ions to nitrogen gas; hence there was significant accumulation of nitrite ions with increase in bentonite content. The dependence of the nitrate reduction process on water content was controlled by the degree of saturation of the soil specimens. The rate of the nitrate reduction process increased with water content until the specimens were saturated. The threshold water content for the nitrate reduction process for sand and bentonite enhanced sand specimens was observed to be 50%. The rate of nitrate reduction increased linearly with C/N ratio until steady state was attained. The optimum C/N ratio was 3 for sand and bentonite enhanced sand specimens. The activation energy (Ea) for this biochemical reaction was 35.72 and 47.12 kJ mol⁻¹ for the sand and BES specimens respectively. The temperature coefficient (Q10) is a measure of the rate of change of a biological or chemical system as a consequence of increasing the temperature by 10°C. The temperature coefficients of the sand and BES specimens were 2.0 and 2.05 respectively in the 15–25°C range, and 1.62 and 1.77 respectively in the 25–40°C range. The rate of nitrate reduction decreased linearly with increase in initial nitrate concentration. The biochemical process of nitrate reduction was unaffected by the presence of co-ions and nutrients such as phosphorus but was influenced by the presence of pathogenic bacteria. Since nitrate leaching from agricultural lands is the main source of nitrate contamination in ground water, batch experiments were performed to examine the role of the vadose (unsaturated soil) zone in nitrate mitigation, employing sand and BES specimens with varying degrees of soil saturation and C/N ratio as controlling parameters.
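The activation energies and temperature coefficients quoted above correspond to the standard Arrhenius and Q10 forms shown below; the thesis's own fitting procedure is not described in the abstract.

```latex
% Arrhenius temperature dependence of the rate constant and the
% temperature coefficient Q10 over an interval T1 -> T2.
\[
  k(T) = A\, e^{-E_a / (RT)}, \qquad
  Q_{10} = \left(\frac{k(T_2)}{k(T_1)}\right)^{\frac{10}{T_2 - T_1}}
\]
% E_a: activation energy (reported as 35.72 and 47.12 kJ/mol for sand and
% BES), R: gas constant, T: absolute temperature. Q10 expresses the factor
% by which the rate constant grows per 10 C rise, evaluated over the quoted
% 15-25 C and 25-40 C intervals.
```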
Batch studies with sand and BES specimens showed that the incubation period required to reduce nitrate concentrations below 45 mg/L (t45) depends strongly on the degree of saturation when there is an inadequate carbon source available to support the denitrifying bacteria; once the optimum C/N ratio is provided, the rate of denitrification becomes independent of the degree of soil saturation. The theoretical lag time (the period required for denitrification to commence) for nitrate reduction for sand specimens at Sr = 81 and 90%, C/N ratio = 3 and temperature = 40°C corresponded to 24.4 h and 23.1 h respectively. The lag time for BES specimens at Sr = 84 and 100%, C/N ratio = 3 and temperature = 40°C corresponded to 13.9 h and 12.8 h respectively. Though the theoretically computed nitrate reduction lag time for BES specimens was nearly half that of the sand specimens, it was experimentally observed that nitrate reduction proceeded immediately without any lag phase in both sand and BES specimens, suggesting the simultaneous occurrence of anaerobic microsites in both. Denitrification soil columns (height = 5 cm, diameter = 8.2 cm) were constructed using sand and bentonite-enhanced sand as porous reactor media. The columns were permeated with nitrate-spiked solutions (100 mg/L) and the outflow was monitored for various chemical parameters. The sand denitrification column (packing density of 1.3 Mg/m³) showed low nitrate removal efficiency because of the low hydraulic residence time (1.32 h) and the absence of a carbon source. A modified sand denitrification column constructed with a higher packing density (1.52 Mg/m³) and with ethanol added to the influent nitrate solution improved the reactor performance such that near-complete nitrate removal was achieved after the passage of 50 pore volumes. In comparison, the BES denitrification column achieved 87.3% nitrate removal after the passage of 28.9 pore volumes, corresponding to 86 h of operation of the BES reactor. This period represents the maturation period of the bentonite enhanced sand bed containing 10% bentonite. Though nitrate reduction is favored by the sand bed containing 10% bentonite, the low flow rate (20-25 cm³/h) impedes its use for large-scale removal of nitrate from drinking water. Hence a new reactor was designed using a lower bentonite content of 5%, which required a maturation period of 9.6 h. The 5% and 10% bentonite-enhanced sand reactor beds required a shorter maturation period than the sand reactor, as the presence of bentonite increases the hydraulic retention time of nitrate within the reactor. On continued operation of the BES reactors, a reduction in flow rate caused by blocking of pores through microbial growth on soil particles and accumulation of gas was observed; this was resolved by backwashing the reactors.
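For reference, the pore-volume and residence-time figures quoted above are related by the standard packed-bed definitions below; the bed porosity itself is not reported in the abstract.

```latex
% Standard packed-bed relationships between hydraulic residence time (HRT),
% pore volume, and flow rate; n is the bed porosity, V_bed the bulk bed
% volume, and Q the volumetric flow rate.
\[
  \mathrm{HRT} = \frac{V_\mathrm{pore}}{Q} = \frac{n\,V_\mathrm{bed}}{Q},
  \qquad
  \text{pore volumes passed} = \frac{Q\,t}{V_\mathrm{pore}}
\]
% For example, the reported 86 h of BES reactor operation over 28.9 pore
% volumes corresponds to roughly 3 h of residence per pore volume.
```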
338

Antimicrobial contaminant removal by multi-stage drinking water filtration

Rooklidge, Stephen J. 07 May 2004 (has links)
The fate of antimicrobials entering the aquatic environment is an increasing concern for researchers and regulators, and recent research has focused on antimicrobial contamination from point sources, such as wastewater treatment facility outfalls. The terraccumulation of antimicrobials and their mobility in diffuse pollution pathways should not be overlooked as a contributor to the spread of bacterial resistance and the resulting threat to human drug therapy. This review critically examines recent global trends of bacterial resistance, antimicrobial contaminant pathways from agriculture and water treatment processes, and the need to incorporate diffuse pathways into risk assessment and treatment system design. Slow sand filters are used in rural regions where source water may be subjected to antimicrobial contaminant loads from waste discharges and diffuse pollution. A simple model was derived to describe removal efficiencies of antimicrobials in slow sand filtration and to calculate antimicrobial concentrations sorbed to the schmutzdecke at the end of a filtration cycle. Input parameters include water quality variables easily quantified by water system personnel and published adsorption, partitioning, and photolysis coefficients. Simulation results for three classes of antimicrobials suggested greater than 4-log removal from 1 µg/L influent concentrations in the top 30 cm of the sand column, with schmutzdecke concentrations comparable to land-applied biosolids. Sorbed concentrations of the antimicrobial tylosin fed to a pilot filter were within one order of magnitude of the predicted concentration. To investigate the behavior of antimicrobial contaminants during multi-stage filtration, five compounds from four classes of antimicrobials were applied to a mature slow sand filter and roughing filter fed raw water from the Santiam River in Oregon during a 14-day challenge study. Antimicrobial removal efficiency of the filters was calculated from 0.2 mg/L influent concentrations using HPLC-MS/MS, and sorption coefficients (Kd, Koc, Kom) were calculated for schmutzdecke collected from a mature filter column. Sulfonamides had low sorption coefficients and were largely unaffected by multi-stage filtration. Lincomycin, trimethoprim, and tylosin exhibited higher sorption coefficients and limited mobility within the slow sand filter column. The lack of a significant increase in overall antimicrobial removal efficiency indicated that biodegradation is less significant than sorption in multi-stage filtration. / Graduation date: 2004
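The log-removal and sorption quantities quoted above follow the standard definitions below (generic forms only; the thesis's full slow sand filtration model also includes partitioning and photolysis terms not shown here).

```latex
% Generic definitions of log removal and linear sorption coefficients.
\[
  \text{log removal} = \log_{10}\frac{C_\mathrm{in}}{C_\mathrm{out}},
  \qquad
  K_d = \frac{C_\mathrm{sorbed}}{C_\mathrm{aq}},
  \qquad
  K_{oc} = \frac{K_d}{f_{oc}}
\]
% A 4-log removal of a 1 ug/L influent corresponds to an effluent
% concentration of about 0.1 ng/L; f_oc is the organic-carbon fraction of
% the schmutzdecke solids.
```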
339

Kvalita pitné vody určené k hromadnému zásobování obyvatel / The Quality of Drinking Water in Public Distribution Systems

SOMPEKOVÁ, Zuzana January 2010 (has links)
This research project was aimed at monitoring the quality of drinking water supplied to the inhabitants of small villages. The quality of drinking water produced by small waterworks in South Bohemia, in the municipalities of Mazelov, Ortvínovice, Doubravka and Rábín, was studied. Sanitary analyses of drinking water samples carried out by the waterworks operators in 2004-2009 showed variability in free chlorine, nitrates, pH, turbidity and the content of Escherichia coli in all the waterworks during the investigated period. The hypothesis that the quality of drinking water produced by water treatment from small water sources is stable throughout the year and does not vary in key indicators, such as nitrates and the content of Escherichia coli, was not confirmed. The other hypothesis, that the number of small water sources used for public drinking water supplies decreased during the period, was confirmed. The causes of these changes depend on many factors, such as the location and source of drinking water and the type of treatment plant; last but not least, the quality of service and the economic potential of the waterworks operators also play a negative role.
340

Evaluating the post-implementation effectiveness of selected household water treatment technologies in rural Kenya

Onabolu, Boluwaji January 2014 (has links)
Water, sanitation and hygiene-related diseases are responsible for 7% of all deaths and 8% of all disability-adjusted life years (DALYs), as well as the loss of 320 million days of productivity in developing countries. Though laboratory and field trials have shown that household water treatment (HWT) technologies can quickly improve the microbiological quality of drinking water, questions remain about the effectiveness of these technologies under real-world conditions. Furthermore, the value that rural communities attach to HWT is unknown, and it is not clear why, even though rural African households need HWT most, they are the least likely to use it. The primary objective of this multi-level study was to assess the post-implementation effectiveness of selected HWT technologies in the Nyanza and Western Provinces of Kenya. The study was carried out in the rainy season between March and May 2011, using a mixed-methods approach. Evidence was collected in order to build a body of evidence of HWT effectiveness or ineffectiveness in a post-implementation context. A quasi-experimental design was used first to conduct a Knowledge, Attitudes and Practices (KAP) survey in 474 households in ten intervention and five control villages (Chapter 3). The survey assessed the context in which household water treatment was being used in the study villages, to provide real-world information for assessing the effectiveness of the technologies. An interviewer-administered questionnaire elicited information about the water, sanitation and hygiene-related KAP of the study communities. A household water treatment (HWT) survey (Chapter 4) was carried out in the same study households and villages as the KAP study, using a semi-structured questionnaire to gather HWT adoption, compliance and sustained-use information and to provide insight into the perceived value the study households attach to HWT technologies and their likelihood of adoption of and compliance with these technologies. The drinking water quality of 171 randomly selected households (one quarter of those surveyed during the KAP study) was determined and tracked from source to the point of use (Chapter 5). This provided insights into HWT effectiveness by highlighting the need for HWT (as indicated by source water quality) and the effect of the study households’ KAP on drinking water quality (as indicated by the stored water quality). The physico-chemical and microbiological water quality of the nineteen improved and unimproved sources used by the study households was determined according to the World Health Organisation guidelines. The microbiological quality of 291 water samples in six intervention and five control villages was determined from source to the point of use (POU) using the WHO and Sphere Drinking Water Quality Guidelines. An observational study design was then used to assess the post-implementation effectiveness of the technologies used in 37 households in five intervention villages (Chapter 6). Three assessments were carried out to determine the changes in the microbiological quality of 107 drinking water samples before treatment (from the collection container) and after treatment (from the storage container) by the households. The criteria used to assess the performance of the technologies were microbial efficacy, robustness and performance in relation to sector standards.
A Quantitative Microbial Risk Assessment (QMRA) was then carried out in the HWT effectiveness study households to assess the technologies’ ability to reduce the users’ exposure to and probability of infection with water-borne pathogens (Chapter 7). The KAP survey showed that the intervention and control communities did not differ significantly in 18 out of 20 socio-economic variables that could potentially be influenced by the structured manner of introducing HWT into the intervention villages. The majority of the intervention group (IG) and the control group (CG) were poor or very poor on the basis of the household assets they owned. The predominant level of education for almost two-thirds of the IG and CG respondents was primary school (completed and non-completed). Though very few respondents were unemployed in the IG (8.07%) and CG (14.29%), the two groups were predominantly engaged in subsistence farming, a low-income occupation. With regard to practices, both groups had inadequate access to water and sanitation, with only one in two households in both the IG and CG using improved water sources as their main drinking water source in the non-rainy season. One in ten households in both study groups possessed an improved sanitation facility, though the CG was significantly more likely to practise open defecation than the IG. The self-reported use of soap in both study groups was mainly for bathing and not for handwashing after faecal contact with adult or child faeces. Despite the study groups' knowledge about diarrhoea, both groups showed a disconnection between their knowledge about routes of contamination and barriers to contamination. The most frequent reason for not treating water was the perceived safety of rain water in both the IG and CG. / The HWT adoption survey revealed poor storage and water-handling practices in both the IG and CG, and that very few respondents knew how to use the HWT technologies correctly. The IG and CG were similar in the perceived value attached to household water treatment. All HWT technologies had a lower likelihood of adoption compared to the likelihood-of-compliance indicators in both the IG and CG. The users’ perceptions about efficacy, time taken and ease of use of the HWT technologies lowered the perceived value attached to the technologies. The assessment of the drinking water quality used by the study communities indicated that the improved sources had a lower geometric mean E. coli and total coliform count than the unimproved sources. Both categories of sources were of poor microbiological quality, and both exceeded the Sphere Project (2004) and WHO (2008) guidelines for total coliforms and E. coli respectively. The study communities’ predominant drinking water sources, surface water and rainwater, were faecally contaminated (geometric mean E. coli loads of 388.1±30.45 and 38.9±22.35 cfu/100 ml respectively) and needed effective HWT. The improved sources were significantly more likely than the unimproved sources to have a higher proportion of samples that complied with the WHO drinking water guidelines at source, highlighting the importance of providing improved water sources. The lowest levels of faecal contamination were observed between the collection and storage points, which coincided with the stage at which HWT is normally applied, suggesting an HWT effect on the water quality.
All water sources had nitrate and turbidity levels that exceeded the WHO stipulated guidelines, while some of the improved and unimproved sources had higher than permissible levels of lead, manganese and aluminium. The water source category and the mouth type of the storage container were predictive of the stored water quality. The active-treater households had a higher percentage of samples that complied with the WHO water quality guidelines for E. coli than inactive-treater households in both improved and unimproved source categories. In inactive-treater households, 65% of storage container water samples from the improved sources complied with the WHO guidelines, in comparison to 72% of the stored water samples in the active-treater households. However, the differences were not statistically significant. The HWT technologies did not attain sector standards of effective performance: in descending order, the mean log10 reduction in E. coli concentrations after treatment of water from unimproved sources was PUR (log₁₀ 2.0), ceramic filters (log₁₀ 1.57), Aquatab (log₁₀ 1.06) and Waterguard (log₁₀ 0.44). The mean log10 reduction in E. coli after treatment of water from improved sources was Aquatab (log₁₀ 2.3), Waterguard (log₁₀ 1.43), PUR (log₁₀ 0.94) and ceramic filters (log₁₀ 0.16). The HWT technologies reduced the users’ daily exposure to water-borne pathogens from both unimproved and improved drinking water sources. The mean difference in exposure after treatment of water from unimproved sources was ceramic filter (log₁₀ 2.1), Aquatab (log₁₀ 1.9), PUR (log₁₀ 1.5) and Waterguard (log₁₀ 0.9), in descending order. The mean probability of infection with water-borne pathogens (using E. coli as indicator) after consumption of treated water from both improved and unimproved sources was reduced in users of all the HWT technologies. The difference in reduction between technologies was not statistically significant. The study concluded that, despite the apparent need for HWT, the study households’ inadequate knowledge, poor attitudes and unhygienic practices make it unlikely that they will use the technologies effectively to reduce microbial concentrations to the standards stipulated by accepted drinking water quality guidelines. The structured method of HWT promotion in the intervention villages had not resulted in more hygienic water and sanitation KAP in the IG compared to the CG, or in significant differences in the likelihood of adoption of and compliance with the assessed HWT technologies. Despite attaching a high perceived value to HWT, insufficient knowledge about how to use the HWT technologies and user concerns about factors such as ease of use, accessibility and time to use will impact negatively on adoption and compliance with HWT, notwithstanding their efficacy during field trials. Even though external support had been withdrawn, the assessed HWT technologies were able to improve the quality of household drinking water and reduce the exposure to and risk of water-borne infections. However, the improvement in water quality and reduction in risk did not attain sector guidelines, highlighting the need to address the attitudes, practices and design criteria identified in this study which limit the adoption, compliance and effective use of these technologies. These findings have implications for HWT interventions, emphasising the need for practice-based behavioural support alongside technical support.
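The abstract does not state which dose-response model or parameters the QMRA used, so the sketch below is only an illustration of the log10 reduction value and a simple exponential dose-response calculation of the kind commonly used in QMRA; the dose-response parameter r and the assumed daily intake are invented.

```python
# Illustrative sketch: log10 reduction value (LRV) and an exponential
# dose-response model of the kind commonly used in QMRA. The model choice,
# r, and the ingestion volume are assumptions, not values from the thesis.
import math

def log10_reduction(c_before, c_after):
    """LRV = log10(C_before / C_after), both in cfu/100 ml."""
    return math.log10(c_before / c_after)

def daily_infection_probability(conc_per_100ml, litres_per_day=1.0, r=0.001):
    """Exponential dose-response: P = 1 - exp(-r * dose).

    conc_per_100ml : indicator concentration in cfu/100 ml
    litres_per_day : assumed daily unboiled drinking-water intake
    r              : assumed pathogen-specific dose-response parameter
    """
    dose = conc_per_100ml * 10 * litres_per_day  # cfu ingested per day
    return 1.0 - math.exp(-r * dose)

# Example with the reported geometric mean E. coli load for surface water
# (388.1 cfu/100 ml) and an assumed 2-log treatment reduction:
untreated = 388.1
treated = untreated / 10**2.0
print(f"LRV = {log10_reduction(untreated, treated):.1f}")
print(f"P(inf) untreated = {daily_infection_probability(untreated):.3f}")
print(f"P(inf) treated   = {daily_infection_probability(treated):.4f}")
```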
