  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
61

Prediction and reduction of traffic pollution in urban areas

Bostock, Adam K. January 1994 (has links)
This thesis is the result of five years' research into road traffic emissions of air pollutants. It includes a review of traffic pollution studies and models, and a description of the PREDICT model suite and PREMIT emissions model. These models were used to evaluate environmentally sensitive traffic control strategies, some of which were based on the use of Advanced Transport Telematics (ATT). This research has improved our understanding of traffic emissions. It studied emissions of the following pollutants: carbon monoxide (CO), hydrocarbons (HC) and oxides of nitrogen (NOx). PREMIT modelled emissions from each driving mode (cruise, acceleration, deceleration and idling) and, consequently, predicted relatively complex emission characteristics for some scenarios. Results suggest that emission models should represent emissions by driving mode, instead of using urban driving cycles or average speeds. Emissions of NOx were more complex than those of CO and HC. The change in NOx caused by a particular strategy could be similar or opposite to the changes in CO and HC. Similarly, for some scenarios, a reduction in stops and delay did not reduce emissions of NOx. It was also noted that the magnitude of changes in emissions of NOx was usually much less than that of the corresponding changes in CO and HC. In general, the traffic control strategies based on the adjustment of signal timings were not effective in reducing total network emissions. However, high emissions of pollutants on particular links could, potentially, be reduced by changing signal timings. For many links, mutually exclusive strategies existed for reducing emissions of CO and HC, and emissions of NOx. Hence, a decision maker may have to choose which pollutants are to be reduced, and which can be allowed to increase. The environmental area licensing strategy gave relatively large reductions in emissions of all pollutants.
This strategy was superior to the traffic signal timing strategies because it had no detrimental impact on the efficiency of the traffic network and gave simultaneous reductions in emissions of CO, HC and NOx.
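The abstract's central modelling finding, that emissions should be represented per driving mode rather than by an average speed, can be illustrated with a toy calculation. This is not the PREMIT model; the per-mode emission rates and trip profiles below are invented purely for illustration.

```python
# Illustrative per-mode emission rates for CO, in g/s per vehicle.
# These numbers are hypothetical, not values from the thesis.
RATES_CO = {"cruise": 0.004, "acceleration": 0.012,
            "deceleration": 0.002, "idling": 0.003}

def trip_emissions(mode_durations, rates=RATES_CO):
    """Total emissions (g) given seconds spent in each driving mode."""
    return sum(rates[mode] * seconds for mode, seconds in mode_durations.items())

# Two trips of equal duration (300 s) and hence equal "average speed"
# exposure: a stop-start urban trip versus a smooth cruise.
stop_start = {"cruise": 120, "acceleration": 60, "deceleration": 60, "idling": 60}
smooth     = {"cruise": 300, "acceleration": 0,  "deceleration": 0,  "idling": 0}
print(trip_emissions(stop_start))  # higher, despite equal trip time
print(trip_emissions(smooth))
```

An average-speed model would treat these two trips identically; the per-mode sum does not, which is the distinction the abstract draws.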
62

Some studies on asphaltene stabilised water-in-oil emulsions

Symonds, John David January 1975 (has links)
Various workers have shown the asphaltene fraction of crude oil to be responsible for the highly stable W/O emulsions formed after marine oil spillages. Electron microscope studies on crude oil emulsions using the freeze-etching technique showed the oil/water interface to be smooth on the aqueous side but to have a particulate structure on the oil side. In Brega crude oil emulsions, waxy plates were aligned along the interface. Also observed were the coherence of the interfacial film and droplet coalescence. Allowing a method error of ± 10%, good agreement was found between computer-calculated droplet diameters from the log-normal distribution equation and standard statistical mean diameters. Sc (interfacial area/unit volume oil phase) was larger for salt water emulsions. An equation proposed for Sc as a function of stirring time fitted very well for all three crude oil emulsions. The maximum Sc increased with φ (volume fraction), indicating a minimum possible droplet diameter formed by the stirrer. A rate parameter in the equation is probably related to the asphaltene diffusion to the interface. A good fit with the data was obtained when the equation was modified to account for detergent addition delaying emulsion formation. A second rate parameter may reflect the irreversible replacement of detergent at the interface by asphaltenes. Ageing studies showed that once stabilisation was complete, detergent present only slowly affected coalescence. A heating effect explained the non-linearity of Kuwait and Tia Juana emulsion rheograms. The greater viscosity of Tia Juana stabilised emulsions was explained by a thicker supporting asphaltene layer. The elasticity of the interfacial film accounted for anomalous behaviour in emulsions of asphaltenes dispersed in m-xylene/n-octane. The extrapolated yield point and critical shear rate were used as empirical measures of deformability and attractive forces respectively.
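The log-normal droplet-size treatment mentioned above can be sketched with the standard Hatch-Choate moment relations, which give analytic mean diameters for a log-normal number distribution. The thesis's own equation for Sc as a function of stirring time is not reproduced here; the geometric median, spread and volume fraction below are illustrative assumptions.

```python
import math

def lognormal_mean_diameters(d_g, sigma_g):
    """Hatch-Choate relations for a log-normal number distribution with
    count median diameter d_g and geometric standard deviation sigma_g:
    d10 (arithmetic mean) and d32 (Sauter, surface-volume mean)."""
    ln2 = math.log(sigma_g) ** 2
    d10 = d_g * math.exp(0.5 * ln2)
    d32 = d_g * math.exp(2.5 * ln2)
    return d10, d32

def interfacial_area_per_oil_volume(d32_um, phi):
    """Sc-style quantity: droplet interfacial area per unit volume of the
    continuous (oil) phase, for water volume fraction phi and Sauter mean
    diameter d32 in micrometres. A sphere's area/volume ratio is 6/d."""
    return (6.0 / (d32_um * 1e-6)) * phi / (1.0 - phi)  # m^2 per m^3 of oil

# Illustrative values: 5 um median droplets, moderate polydispersity.
d10, d32 = lognormal_mean_diameters(d_g=5.0, sigma_g=1.6)
print(round(d10, 2), round(d32, 2))
print(interfacial_area_per_oil_volume(d32, phi=0.4))
```

The abstract's observation that maximum Sc rises with φ follows directly from the φ/(1 - φ) factor here, until the stirrer's minimum achievable droplet diameter caps d32.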
63

The use of computer graphics and visualisation (from reconstruction to training) for the resource sector of Western Australia

Fowle, Kenneth January 2003 (has links)
The minerals and energy sector can rightly be classified as comprising an extremely hazardous working environment in which numerous situations exist for accidents and incidents involving personnel and equipment to occur. Accidents are often explained by what are referred to as 'human factors'. The often-used explanation, 'technical failure', gives the impression that technology lives a life of its own without human intervention. However, technical failure often occurs because of human errors in construction, installation, maintenance or operation. It is the person who triggers the risks who is made morally (and sometimes legally) responsible. When an operator makes a mistake (an active error) he or she is personally blamed. When a designer or constructor makes a mistake, or when cheap or inferior equipment is bought, or when maintenance is faulty, the responsibility is depersonalised and it becomes a 'technology' fault (Sunderstrom-Frisk, 1998). This research examines ways of presenting expert information through computer graphics and visualisation to produce visual applications that demonstrate and explain, but also have the added ability to teach the user or viewer, with the intent to assess their competency. Today's technology provides educators, students, professional bodies and the general public with access to large amounts of information in a visual form. We repackage technical literature and data as movies and videos for audiences to view, instead of reading the information. Understanding may be achieved rapidly instead of taking days, weeks or months. From a visual presentation the viewers absorb information, which is easy to retain. The reconstructions discussed in this research concern the minerals and energy sector of Western Australia. They not only show what went wrong but can also be customised to demonstrate how to prevent an accident/incident.
The benefits of this to industry are primarily the ability to reuse the reconstruction instead of closing down a production line, which would cost the company and industry many thousands of dollars, and the fact that no lives are exposed to hazardous environments while examining the reconstruction for investigation or training purposes.
64

A study of hybrid clarification-filtration processes for potable water treatment

Price, Robert January 2005 (has links)
This research investigates the removal of natural organic matter (NOM) from low turbidity waters by optimising an integrated coagulation and membrane filtration process. In conjunction with a regional water utility, the feasibility and operability of pre-coagulation with ultrafiltration membrane units are investigated at both the fundamental and applied levels. The pH of coagulation greatly affects the growth of flocs. Although flocs are produced over the range of pH values, there is a significant improvement in performance at the optimum pH. The results show that for ferric sulphate, the optimum pH value for coagulation was 4.8; for ferric chloride the optimum pH value was 5.0; and for alum the optimum pH value was 6. Manipulation of the mixing regime during orthokinetic flocculation allows control of the final floc size. A high shear rate (447 s-1) induces greater particle collision and micro-floc formation, allowing an increased rate of growth during slow mixing. A mixing regime of 60 seconds rapid mixing (shear rate G = 447 s-1) followed by 3 minutes slow mixing (shear rate G = 20 s-1) is required for flocculation of the NOM and adequate enmeshment and removal of excess iron. Experiments conducted with the optimised coagulation regime and a Norit hollow fibre ultrafiltration membrane, operated in dead-end mode, led to significant fouling. The results for the submerged Zenon ultrafiltration membrane also indicated membrane fouling when the coagulation regime was optimised for NOM reduction and iron removal. Changing the coagulation regime altered the level of membrane fouling. It can be concluded that the optimum coagulation conditions in conventional water treatment are not always the optimum conditions for coagulation with ultrafiltration treatment, and that unconventional coagulation conditions can be much more effective than conventional ones in the context of membrane filtration, depending on the raw water characteristics.
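The two-stage mixing regime above can be summarised by the Camp number Gt (velocity gradient times time), a standard flocculation design metric. The abstract itself does not quote Gt, so this is only an illustrative aggregation of its stated shear rates and durations.

```python
def camp_number(stages):
    """Dimensionless Camp number: sum of G*t over
    (shear_rate_per_s, duration_s) mixing stages."""
    return sum(G * t for G, t in stages)

# The regime from the abstract: 60 s rapid mix at G = 447 1/s,
# then 3 minutes slow mix at G = 20 1/s.
regime = [(447, 60),
          (20, 3 * 60)]
print(camp_number(regime))  # 26820 + 3600 = 30420
```

Note that the brief rapid-mix stage dominates the total, consistent with the abstract's point that high shear drives the particle collisions and micro-floc formation.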
65

The use of computer graphics and virtual reality for visual impact assessments

Cox, Christopher January 2003 (has links)
Changes to the visual character of the landscape can become a key issue capable of influencing the outcome of an Environmental Impact Assessment (EIA). These changes are commonly referred to as visual impact, and it is recognised, unlike many other aspects of EIAs, that Visual Impact Assessment (VIA) relies less upon measurement than upon experience and judgement (IEATLI, 1995). Currently, there are very few quantitative techniques for the evaluation of visibility, and it is mostly assessed qualitatively since it is concerned with the human appreciation of the landscape (Zewe and Koglin, 1995; Wherrett, 2002). The main problem with qualitative techniques in assessing visual impact is that they may lead to bias due to their inherent subjectivity; hence there is a need for a more structured and consistent approach towards VIA. To reduce the subjectivity currently associated with VIAs, new quantitative techniques have been developed and existing spatial and qualitative techniques have been improved upon. The techniques developed in this research use Computer Graphics (CG) technology, including the field of Virtual Reality (VR). A quantitative method to calculate percentage view change has been developed that allows the accurate determination of the variation in any view, caused by an existing or proposed development. The method uses three-dimensional (3D) CG models of an environment and software that has been developed using a scripting language from a 3D modelling software package. A new method has also been developed to create Fields of Visual Influence (FVIs) using standard 3D modelling techniques. The method improves upon the accuracy and efficiency of existing FVI techniques. A novel VR simulation technique has also been developed that attempts to reduce the subjectivity associated with simulations, by integrating quantitative and spatial techniques.
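One plausible way to realise a "percentage view change" metric of the kind described is a pixel-difference count between renders of the baseline scene and the scene with the proposed development, taken from the same viewpoint. The thesis used scripted 3D CG models; this pixel-based sketch is an assumption for illustration, not its actual method.

```python
def percentage_view_change(before, after):
    """Percentage of pixel positions that differ between two equally
    sized rendered views, each given as a flat sequence of pixel values."""
    if len(before) != len(after):
        raise ValueError("views must be rendered at the same resolution")
    changed = sum(1 for a, b in zip(before, after) if a != b)
    return 100.0 * changed / len(before)

# Toy 4-pixel "renders": the development alters one of four pixels.
baseline  = [0, 0, 1, 1]
developed = [0, 0, 1, 2]
print(percentage_view_change(baseline, developed))  # 25.0
```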
66

Transition metal modified titanium dioxide photocatalysts for the removal of nitric oxide

Bowering, Neil January 2004 (has links)
Photocatalytic NO decomposition and reduction reactions, using CO as a reducing gas, have been investigated over TiO2, Ag-TiO2 and Rh-TiO2 photocatalysts, using a purpose-built continuous flow photoreactor. The transition metal modified TiO2 photocatalysts were prepared using wet impregnation techniques, and the effect of thermal processing parameters on their photocatalytic behaviour was studied. Prepared photocatalysts were characterised using a number of complementary techniques, including XRD, TEM, DSC and XPS. The findings from these techniques were used to explain the observed photocatalytic properties. The activity and selectivity of the photocatalysts were found to be dependent on a number of factors: thermal pretreatment temperature, type and amount of the modifying element, chemical nature of the modifying element and the reaction conditions used. It was found, for TiO2 photocatalysts, that increasing the pretreatment calcination temperature resulted in lower NO conversion rates, due to removal of surface-bound hydroxyl groups. A similar trend was observed for Ag-P25 photocatalysts, but the reduction in activity was greater due to the presence of larger silver clusters, which acted as recombination centres for photogenerated electron-hole pairs. The activity of the Ag-P25 photocatalysts decreased as the silver loadings increased, whilst the activity of the Rh-P25 photocatalysts remained largely unaffected by the metal concentration. Over TiO2 and Ag-TiO2 systems, the NO conversion rate was lower for the reduction reactions compared to decomposition reactions. This was attributed to the preferential adsorption of the CO molecules, blocking NO adsorption sites. Contrasting behaviour was observed over Rh-P25 systems, and NO conversions as high as 87% were recorded in the presence of CO. Silver-modified catalysts were highly selective for N2 formation (90%) whilst rhodium-modified catalysts were more selective for N2O formation.
These results are discussed with respect to the possible surface reactions and the chemical intermediates that may be formed.
67

Application of a metal solubility model to geochemical survey data

Lenham, Jessica Chloe January 2005 (has links)
Where heavy metals are introduced into or onto land in which they would not normally be present at elevated concentrations, that land could be considered contaminated. A simple way of determining the magnitude of contamination by heavy metals is to measure the total metal concentration in the soil. However, this simple measure is a poor way of assessing the potential risks to the environment and human health. A more effective risk assessment can be achieved by analysing the proportion of the total metal that exists in a mobile or bioavailable form, in other words, the metal solubility. Unfortunately, metal solubility is more difficult and costly to measure than total metal concentration in the soil. This thesis examines the application of a metal solubility model to geochemical survey data consisting of pH and metal concentrations. The solubility predictions were interpolated in order to produce maps; however, the interpolated data had very high uncertainties. Further analysis showed that pH was the greatest source of uncertainty in the algorithm, contributing the most for lead, with 76% of the uncertainty being due to pH. pH was least influential for copper, contributing 49% of the uncertainty, but it was the highest contributor for each metal. In order to examine the accuracy of the algorithm without geostatistical influences, a field work study was undertaken to measure metal solubility directly at the original survey sites. This showed that the algorithm was very good at predicting metal solubility at point sources. In order to assess the short-scale spatial variability of pH, and the errors in pH measurements, a second field work project was conducted, measuring the pH of 200 samples from a single field. This work showed that pH does vary across a field, but more importantly allowed a quantification of the uncertainty involved in sampling and measuring pH.
Results show that despite the short-scale variability in pH, point predictions are accurate (the average difference between measured and predicted pZn2+ is 6%) and might be of use to land managers. However, interpolating solubility predictions for mapping produces unacceptably high uncertainties (mean values were 188% for Pb, 417% for Cu and 153% for Zn) for land management or the development of policy measures related to soil. Further work could include calculating the measured Pb and Cu solubility and comparing these to the predictions. A study to investigate how pH and Zn2+ vary together across a field would also be of interest.
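The pH contribution figures quoted in this abstract (76% of the uncertainty for Pb, 49% for Cu) can be read as shares of total prediction variance, assuming independent error sources whose variances add. The thesis's actual uncertainty budget may have been computed differently; the variance values below are hypothetical numbers chosen to reproduce the headline shares.

```python
def contribution_pct(var_source, var_others):
    """Percentage of total variance attributable to one error source,
    assuming independent sources so that variances are additive."""
    return 100.0 * var_source / (var_source + var_others)

# Hypothetical variance splits matching the abstract's stated shares:
print(round(contribution_pct(0.76, 0.24)))  # 76 (Pb: pH-dominated)
print(round(contribution_pct(0.49, 0.51)))  # 49 (Cu: pH still largest single source)
```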
68

Analytical and experimental studies of vehicle pollution dispersion

Hargreaves, David Michael January 1997 (has links)
Using a dual approach of wind tunnel experimentation and the development of an analytical model, the dispersion of pollutants from road vehicles has been investigated. A series of novel tests was conducted in an environmental wind tunnel which looked at the dispersion of propane in the wake of a 1/50th-scale model of a lorry, which was fired across the wind tunnel using a special rig. Time histories of concentration and air flow were taken as the lorry sped across the tunnel. Two experimental scenarios were investigated. The first, a simulation of a typical rural boundary layer, confirmed the existence of a wake behind the moving lorry. Concentration measurements revealed that the dispersion was largely Gaussian in nature and that at low cross wind speeds the vehicle-induced turbulence was the dominant mixing effect. In the second, measurements were taken as the lorry passed along a model of an idealised urban street canyon. Time histories for individual firings exhibited two peaks as the propane was swept around the canyon in the resident vortex. Ensemble averages of several firings allowed a quantitative assessment of the rate of dispersal from the canyon to be made. A computer model has been developed which predicts the dispersion of vehicular pollutants in both the rural and the urban street canyon environments. The model, based around the Gaussian Puff Method, extends the range of applicability of earlier models in several areas. It is a transient model which enables the investigation of traffic congestion and non-steady above-canyon wind fields. It is also the first model to include individual vehicles as sources of both pollutant and turbulence. A detailed sensitivity study is presented, followed by an application of the model which attempts to predict probability distributions of pollutant in a street canyon.
Finally, a comparison between the analytical model and the experimental programme is presented, which demonstrates that the model is capable of representing a real situation to a good degree of accuracy, but also that further validation is required.
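The Gaussian Puff Method named in this abstract rests on the standard single-puff Gaussian kernel, with the full model superposing many such puffs released by individual vehicles. The sketch below shows only that kernel, with illustrative spread parameters and no ground-reflection term; it is not the thesis's implementation.

```python
import math

def puff_concentration(q, dx, dy, dz, sx, sy, sz):
    """Concentration contribution of a single Gaussian puff holding mass q,
    evaluated at displacement (dx, dy, dz) from the puff centre, with
    Gaussian spreads (sx, sy, sz) along each axis."""
    norm = q / ((2 * math.pi) ** 1.5 * sx * sy * sz)
    return norm * math.exp(-0.5 * ((dx / sx) ** 2
                                   + (dy / sy) ** 2
                                   + (dz / sz) ** 2))

# At the puff centre the exponential term is 1, so concentration is maximal.
c0 = puff_concentration(q=1.0, dx=0, dy=0, dz=0, sx=2.0, sy=2.0, sz=1.0)
c_off = puff_concentration(q=1.0, dx=2.0, dy=0, dz=0, sx=2.0, sy=2.0, sz=1.0)
print(c0, c_off)
```

In a transient model of the kind described, each vehicle releases puffs along its trajectory, the spreads grow with puff age, and the concentration at a receptor is the sum of all active puffs' contributions.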
69

Bioremediation of drill cuttings from oil based muds

Turner, Katharine Patricia January 2002 (has links)
Analytical techniques applicable to the assay and remediation of cutting/mud matrices have been developed, utilising Soxhlet extraction with dichloromethane and a drying agent, followed by analysis using gas chromatography with flame ionisation detection (FID). Calibration curves of oil content were produced for Novatec and Versaplus coated cuttings, which were also sized by wet and dry sieving techniques, demonstrating their variable nature. The oil in each size fraction was assessed and showed that the finer fractions preferentially adsorbed the oil. Bacteria were isolated from the cuttings, muds and the pure oils to see if any indigenous species could, with optimum conditions, remediate the oil they contained. The resulting isolates were batch-tested in the laboratory in a minimal medium, with the drill cuttings providing the sole carbon source. Each isolate was scored for remediation performance, with reduction in oil varying from 50% to 6% within one week. Subsequently, three bacteria (A, D & J) were identified using 16S rRNA sequencing; they were Bacillus thuringiensis (A & D) and a novel species related to Bacillus oleronius. These were then tested slurry-phase for four weeks, in a rotating drum bioreactor designed and fabricated for the research, against a known remediator, Rhodococcus 9737, and a non-inoculated control. All the reactors remediated, but Rhodococcus 9737 reduced the oil to 35% of the original, the consortium of A, D and other isolates to 83%, and J to 90%. Further tests in the bioreactors, after a modification to improve the air supply, gave reductions of around 50% after four weeks. The high clay content of the cuttings was detrimental to significant levels of bioremediation in a slurry-phase bioreactor. Manures were added to the drill cuttings and tested in the bioreactors as a solid-phase system. These degraded the cuttings oil to 2% (v/v), a 96% reduction. Composting was thus more applicable for a high clay content drilling waste bioremediation system.
70

The characterisation and recycling of incinerated tyres

Rice, George Edward January 2002 (has links)
In excess of 1,000 million tyres are manufactured worldwide every year. The average tyre lasts for approximately 50,000 kilometres before it must be replaced. Each year in the UK approximately 37 million tyres reach the end of their useful life. The used tyre, which is almost identical to the replacement, then requires disposal. As the volume of road traffic increases, alternative disposal routes are required to take up the resulting shortfall in capacity. SITA Tyre Recycling operate an incinerator in the West Midlands which solely burns scrap tyres. The plant receives a significant proportion of the UK scrap tyre waste stream as well as reject tyres from manufacturers. The main waste stream generated is disposed of to landfill. The objective of this research was to determine whether the waste had potential for recycling, which would reduce the burden on landfill and possibly generate revenue through the sale of products. By developing methods to sample and characterise the waste stream, it was found to consist of multiple phases that could be individually treated to generate valuable products. Products based on carbon and steel were derived from the combusted rubber and bead wire respectively. Detailed examination of the carbon phase using a range of techniques revealed that many compounds used in the manufacture of the tyre rubber were highly dispersed in a carbon-dominated matrix. The success of physical separation processes was limited by the difficulties associated with liberation of the valuable carbon from contaminating elements. The post-combustion steel was found to have an unacceptable sulphur concentration, which was mainly associated with surface coatings of the carbon phase. Through the application of traditional attrition scrubbing, the surface coatings were removed and a reduced-sulphur steel product generated. Pilot scale trials were used to generate large samples for industrial assessment and process optimisation purposes.
