
The use of computer graphics and visualisation (from reconstruction to training) for the resource sector of Western Australia

Fowle, Kenneth, January 2003
The minerals and energy sector can rightly be classified as an extremely hazardous working environment, in which numerous situations exist for accidents and incidents involving personnel and equipment. Accidents are often explained by what are referred to as 'human factors'. The often-used explanation, 'technical failure', gives the impression that technology lives a life of its own without human intervention. However, technical failure often occurs because of human errors in construction, installation, maintenance or operation. It is the person who triggers the risks who is made morally (and sometimes legally) responsible. When an operator makes a mistake (an active error) he or she is personally blamed. When a designer or constructor makes a mistake, or when cheap or inferior equipment is bought, or when maintenance is faulty, the responsibility is depersonalised and it becomes a 'technology' fault (Sunderstrom-Frisk, 1998). This research examines ways of using expert information, through computer graphics and visualisation, to produce visual applications that demonstrate and explain, but that can also teach the user or viewer, with the intent of assessing their competency. Today's technology provides educators, students, professional bodies and the general public with access to large amounts of information in visual form. We repackage technical literature and data as movies and videos for audiences to view, instead of reading the information; understanding may be achieved rapidly instead of taking days, weeks or months. From a visual presentation the viewers absorb information that is easy to retain. The reconstructions discussed in this research concern the minerals and energy sector of Western Australia. They not only show what went wrong but can also be customised to demonstrate how to prevent an accident or incident.
The benefits of this to industry are primarily twofold: the reconstruction can be reused instead of closing down a production line, which costs the company and industry many thousands of dollars; and no lives are exposed to hazardous environments while the reconstruction is examined for investigation or training purposes.

A study of hybrid clarification-filtration processes for potable water treatment

Price, Robert, January 2005
This research investigates the removal of natural organic matter (NOM) from low-turbidity waters by optimising an integrated coagulation and membrane filtration process. In conjunction with a regional water utility, the feasibility and operability of pre-coagulation with ultrafiltration membrane units is investigated at both the fundamental and applied levels. The pH of coagulation greatly affects the growth of flocs. Although flocs are produced across the range of pH values, there is a significant improvement in performance at the optimum pH. The results show that the optimum pH value for coagulation was 4.8 for ferric sulphate, 5.0 for ferric chloride and 6.0 for alum. Manipulation of the mixing regime during orthokinetic flocculation allows control of the final floc size. A high shear rate (447 s-1) induces greater particle collision and micro-floc formation, allowing an increased rate of growth during slow mixing. A mixing regime of 60 seconds of rapid mixing (shear rate G = 447 s-1) followed by 3 minutes of slow mixing (shear rate G = 20 s-1) is required for flocculation of the NOM and adequate enmeshment and removal of excess iron. Experiments conducted with the optimised coagulation regime and a Norit hollow-fibre ultrafiltration membrane, operated in dead-end mode, led to significant fouling. The results for the submerged Zenon ultrafiltration membrane also indicated membrane fouling when the coagulation regime was optimised for NOM reduction and iron removal. Changing the coagulation regime altered the level of membrane fouling. It can be concluded that the optimum coagulation conditions in conventional water treatment are not always the optimum conditions for coagulation with ultrafiltration treatment, and that unconventional coagulation conditions can be much more effective than conventional ones in the context of membrane filtration, depending on the raw water characteristics.
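The two-stage mixing regime described above is often summarised by the Camp number, the dimensionless product of velocity gradient G and mixing time t used to compare flocculation regimes. A minimal sketch using the shear rates and durations quoted in the abstract (the function name is illustrative, not from the thesis):

```python
# Camp number (G*t): a dimensionless measure of total mixing intensity
# accumulated over the stages of a flocculation regime.
def camp_number(stages):
    """stages: list of (shear_rate_G_in_per_s, duration_in_s) tuples."""
    return sum(g * t for g, t in stages)

# 60 s rapid mix at G = 447 s-1, then 3 min slow mix at G = 20 s-1
regime = [(447, 60), (20, 180)]
gt = camp_number(regime)
print(f"Camp number Gt = {gt}")  # 447*60 + 20*180 = 30420
```

Typical guideline values for Gt in flocculation design fall in the tens of thousands, so a regime of this order is plausible as a design point.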

The use of computer graphics and virtual reality for visual impact assessments

Cox, Christopher, January 2003
Changes to the visual character of the landscape can become a key issue capable of influencing the outcome of an Environmental Impact Assessment (EIA). These changes are commonly referred to as visual impact, and it is recognised that, unlike many other aspects of EIAs, Visual Impact Assessment (VIA) relies less upon measurement than upon experience and judgement (IEATLI, 1995). Currently, there are very few quantitative techniques for the evaluation of visibility, and it is mostly assessed qualitatively since it is concerned with the human appreciation of the landscape (Zewe and Koglin, 1995; Wherrett, 2002). The main problem with qualitative techniques in assessing visual impact is that they may lead to bias due to their inherent subjectivity; hence there is a need for a more structured and consistent approach towards VIA. To reduce the subjectivity currently associated with VIAs, new quantitative techniques have been developed and existing spatial and qualitative techniques have been improved upon. The techniques developed in this research use Computer Graphics (CG) technology, including the field of Virtual Reality (VR). A quantitative method to calculate percentage view change has been developed that allows the accurate determination of the variation in any view caused by an existing or proposed development. The method uses three-dimensional (3D) CG models of an environment and software developed using a scripting language from a 3D modelling software package. A new method has also been developed to create Fields of Visual Influence (FVIs) using standard 3D modelling techniques; it improves upon the accuracy and efficiency of existing FVI techniques. A novel VR simulation technique has also been developed that attempts to reduce the subjectivity associated with simulations by integrating quantitative and spatial techniques.
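The thesis derives its percentage view change from 3D CG models and modelling-package scripts; as a simplified illustration of the underlying idea, one could compare two rendered views pixel by pixel and report the fraction that changed. Everything below (function name, tolerance, toy rasters) is a hypothetical sketch, not the author's method:

```python
# Toy "percentage view change": the share of pixels that differ between a
# baseline view and a view containing a proposed development.
def percent_view_change(before, after, tol=0):
    """before/after: equal-sized 2D lists of pixel intensities."""
    total = changed = 0
    for row_b, row_a in zip(before, after):
        for pb, pa in zip(row_b, row_a):
            total += 1
            if abs(pb - pa) > tol:
                changed += 1
    return 100.0 * changed / total

# A 2x4 view where a proposed structure alters two of the eight pixels:
baseline = [[0, 0, 0, 0], [0, 0, 0, 0]]
proposed = [[0, 9, 9, 0], [0, 0, 0, 0]]
print(percent_view_change(baseline, proposed))  # 25.0
```

A real implementation would render matched viewpoints from the 3D model and account for occlusion and distance; the comparison step, however, reduces to a count of changed pixels as above.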

Transition metal modified titanium dioxide photocatalysts for the removal of nitric oxide

Bowering, Neil, January 2004
Photocatalytic NO decomposition and reduction reactions, using CO as a reducing gas, have been investigated over TiO2, Ag-TiO2 and Rh-TiO2 photocatalysts, using a purpose-built continuous-flow photoreactor. The transition metal modified TiO2 photocatalysts were prepared using wet impregnation techniques, and the effect of thermal processing parameters on their photocatalytic behaviour was studied. Prepared photocatalysts were characterised using a number of complementary techniques, including XRD, TEM, DSC and XPS, and the findings were used to explain the observed photocatalytic properties. The activity and selectivity of the photocatalysts were found to be dependent on a number of factors: thermal pretreatment temperature, type and amount of the modifying element, chemical nature of the modifying element, and the reaction conditions used. It was found, for TiO2 photocatalysts, that increasing the pretreatment calcination temperature resulted in lower NO conversion rates, due to removal of surface-bound hydroxyl groups. A similar trend was observed for Ag-P25 photocatalysts, but the reduction in activity was greater due to the presence of larger silver clusters, which acted as recombination centres for photogenerated electron-hole pairs. The activity of the Ag-P25 photocatalysts decreased as the silver loading increased, whilst the activity of the Rh-P25 photocatalysts remained largely unaffected by the metal concentration. Over TiO2 and Ag-TiO2 systems, the NO conversion rate was lower for the reduction reactions than for the decomposition reactions. This was attributed to the preferential adsorption of CO molecules, blocking NO adsorption sites. Contrasting behaviour was observed over Rh-P25 systems, and NO conversions as high as 87% were recorded in the presence of CO. Silver-modified catalysts were highly selective for N2 formation (90%), whilst rhodium-modified catalysts were more selective for N2O formation.
These results are discussed with respect to the possible surface reactions and the chemical intermediates that may be formed.
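The conversion and selectivity figures quoted above follow from standard definitions: conversion is the fraction of inlet NO consumed, and N2 selectivity the share of reacted nitrogen ending up as N2 rather than N2O. A sketch with invented example concentrations (not thesis data):

```python
# Standard definitions of NO conversion and N2 selectivity for a
# continuous-flow photoreactor. Both N2 and N2O carry two N atoms per
# molecule, so the 2x factors cancel in the selectivity ratio.
def no_conversion(no_in_ppm, no_out_ppm):
    return 100.0 * (no_in_ppm - no_out_ppm) / no_in_ppm

def n2_selectivity(n2_ppm, n2o_ppm):
    return 100.0 * n2_ppm / (n2_ppm + n2o_ppm)

# Hypothetical example: 100 ppm NO in, 13 ppm out; outlet products
# 39 ppm N2 and 4.5 ppm N2O.
conv = no_conversion(100.0, 13.0)   # 87.0 % conversion
sel = n2_selectivity(39.0, 4.5)     # ~90 % selective towards N2
```

These are the quantities behind statements like "NO conversions as high as 87%" and "highly selective for N2 formation (90%)" in the abstract.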

Application of a metal solubility model to geochemical survey data

Lenham, Jessica Chloe, January 2005
Where heavy metals are introduced into or onto land on which they would not normally be present at elevated concentrations, that land can be considered contaminated. A simple way of determining the magnitude of contamination by heavy metals is to measure the total metal concentration in the soil. However, this simple measure is a poor way of assessing the potential risks to the environment and human health. A more effective risk assessment can be achieved by analysing the proportion of the total metal that exists in a mobile or bioavailable form, in other words the metal solubility. Unfortunately, metal solubility is more difficult and costly to measure than total metal concentration in the soil. This thesis examines the application of a metal solubility model to geochemical survey data consisting of pH and metal concentrations. The solubility predictions were interpolated in order to produce maps; however, the interpolated data had very high uncertainties. Further analysis showed that pH was the greatest source of uncertainty in the algorithm for every metal: it contributed most for lead (76% of the uncertainty) and least for copper (49%). In order to examine the accuracy of the algorithm without geostatistical influences, a fieldwork study was undertaken to measure metal solubility directly at the original survey sites. This showed that the algorithm was very good at predicting metal solubility at point sources. In order to assess the short-scale spatial variability of pH, and the errors in pH measurements, a second fieldwork project was conducted, measuring the pH of 200 samples from a single field. This work showed that pH does vary across a field but, more importantly, allowed a quantification of the uncertainty involved in sampling and measuring pH.
Results show that despite the short-scale variability in pH, point predictions are accurate (the average difference between measured and predicted pZn2+ is 6%) and might be of use to land managers. However, interpolating solubility predictions for mapping produces uncertainties that are unacceptably high for land management or the development of soil-related policy measures (mean values were 188% for Pb, 417% for Cu and 153% for Zn). Further work could include calculating the measured Pb and Cu solubility and comparing these to the predictions. A study to investigate how pH and Zn2+ vary together across a field would also be of interest.
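Empirical solubility algorithms of the kind applied here typically regress the free-ion activity (expressed as pM, the negative log of the free metal concentration) on soil pH and total metal content, which is why errors in pH propagate so strongly into the predictions. The sketch below uses that generic regression form with invented placeholder coefficients, not the fitted values from the thesis:

```python
import math

# Generic empirical free-ion activity model: pM = a + b*pH + c*log10(total).
# Coefficients a, b, c here are placeholders for illustration only.
def predicted_pm(ph, total_mg_per_kg, a=-1.5, b=1.1, c=-0.8):
    return a + b * ph + c * math.log10(total_mg_per_kg)

# With b on the order of 1, a 2-unit pH error shifts pM by ~2 units,
# i.e. a ~100-fold change in predicted free metal concentration -
# consistent with pH dominating the prediction uncertainty.
low = predicted_pm(5.0, 100.0)
high = predicted_pm(7.0, 100.0)
```

The pH sensitivity (the slope b) is what makes the abstract's finding plausible: pH contributed 49-76% of the prediction uncertainty across the metals studied.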

Analytical and experimental studies of vehicle pollution dispersion

Hargreaves, David Michael, January 1997
Using a dual approach of wind tunnel experimentation and the development of an analytical model, the dispersion of pollutants from road vehicles has been investigated. A series of novel tests was conducted in an environmental wind tunnel, examining the dispersion of propane in the wake of a 1/50th-scale model of a lorry which was fired across the wind tunnel using a special rig. Time histories of concentration and air flow were taken as the lorry sped across the tunnel. Two experimental scenarios were investigated. The first, a simulation of a typical rural boundary layer, confirmed the existence of a wake behind the moving lorry. Concentration measurements revealed that the dispersion was largely Gaussian in nature and that at low cross-wind speeds the vehicle-induced turbulence was the dominant mixing effect. In the second, measurements were taken as the lorry passed along a model of an idealised urban street canyon. Time histories for individual firings exhibited two peaks as the propane was swept around the canyon in the resident vortex. Ensemble averages of several firings allowed a quantitative assessment of the rate of dispersal from the canyon to be made. A computer model has been developed which predicts the dispersion of vehicular pollutants in both the rural and the urban street canyon environments. The model, based around the Gaussian Puff Method, extends the range of applicability of earlier models in several areas. It is a transient model which enables the investigation of traffic congestion and non-steady above-canyon wind fields. It is also the first model to include individual vehicles as sources of both pollutant and turbulence. A detailed sensitivity study is presented, followed by an application of the model which attempts to predict probability distributions of pollutant in a street canyon.
Finally, a comparison between the analytical model and the experimental program is presented which demonstrates that the model is capable of modelling a real situation to a good degree of accuracy but also demonstrates that further validation is required.
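The Gaussian Puff Method named above builds the concentration field from individual puffs, each released instantaneously and advected downwind. The textbook single-puff kernel is sketched below; the thesis's model adds much more (street-canyon geometry, moving vehicle sources and vehicle-induced turbulence, ground reflection), none of which is reproduced here:

```python
import math

# Textbook Gaussian puff kernel: concentration at a receptor due to a
# single instantaneous release of mass q, whose centre has been advected
# to `centre`. sx, sy, sz are the dispersion sigmas at the current
# puff age (they grow as the puff travels).
def puff_concentration(q, receptor, centre, sx, sy, sz):
    """q: released mass; receptor/centre: (x, y, z) positions."""
    dx, dy, dz = (r - c for r, c in zip(receptor, centre))
    norm = q / ((2.0 * math.pi) ** 1.5 * sx * sy * sz)
    return norm * math.exp(-dx**2 / (2 * sx**2)
                           - dy**2 / (2 * sy**2)
                           - dz**2 / (2 * sz**2))

# A receptor at the puff centre sees the peak concentration:
peak = puff_concentration(1.0, (50.0, 0.0, 1.5), (50.0, 0.0, 1.5),
                          5.0, 5.0, 2.0)
```

A transient model then sums this kernel over many puffs released along each vehicle's trajectory, which is what makes per-vehicle source terms and non-steady winds tractable.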

Bioremediation of drill cuttings from oil based muds

Turner, Katharine Patricia, January 2002
Analytical techniques applicable to the assay and remediation of cuttings/mud matrices have been developed, utilising Soxhlet extraction with dichloromethane and a drying agent followed by analysis using gas chromatography (FID). Calibration curves of oil content were produced for Novatec- and Versaplus-coated cuttings, which were also sized by wet and dry sieving techniques, demonstrating their variable nature. The oil in each size fraction was assessed and showed that the finer fractions preferentially adsorbed the oil. Bacteria were isolated from the cuttings, muds and the pure oils to see if any indigenous species could, under optimum conditions, remediate the oil they contained. The resulting isolates were batch-tested in the laboratory in a minimal medium, with the drill cuttings providing the sole carbon source. Each isolate was scored for remediation performance, with reduction in oil varying from 50% to 6% within one week. Subsequently, three bacteria (A, D and J) were identified using 16S rRNA sequencing: A and D were Bacillus thuringiensis, and J was a novel species related to Bacillus oleronius. These were then tested in slurry phase, in a rotating drum bioreactor designed and fabricated for the research, against a known remediator, Rhodococcus 9737, and a non-inoculated control for four weeks. All the reactors remediated, but Rhodococcus 9737 reduced the oil to 35% of the original, A, D and other isolates as a consortium to 83%, and J to 90%. Further tests in the bioreactors, after a modification to improve the air supply, gave reductions of around 50% after four weeks. The high clay content of the cuttings was detrimental to significant levels of bioremediation in a slurry-phase bioreactor. Manures were added to the drill cuttings and tested in the bioreactors as a solid-phase system. These degraded the cuttings oil to 2% (v/v), a 96% reduction. Composting was thus more applicable for the bioremediation of a high-clay-content drilling waste.

The characterisation and recycling of incinerated tyres

Rice, George Edward, January 2002
In excess of 1,000 million tyres are manufactured worldwide every year. The average tyre lasts for approximately 50,000 kilometres before it must be replaced. Each year in the UK approximately 37 million tyres reach the end of their useful life. The used tyre, which is almost identical to the replacement, then requires disposal. As the volume of road traffic increases, alternative disposal routes are required to take up the resulting shortfall in capacity. SITA Tyre Recycling operates an incinerator in the West Midlands which solely burns scrap tyres. The plant receives a significant proportion of the UK scrap tyre waste stream, as well as reject tyres from manufacturers. The main waste stream generated is disposed of to landfill. The objective of this research was to determine whether the waste had potential for recycling, which would reduce the burden on landfill and possibly generate revenue through the sale of products. By developing methods to sample and characterise the waste stream, it was found to consist of multiple phases that could be individually treated to generate valuable products. Products based on carbon and steel were derived from the combusted rubber and bead wire respectively. Detailed examination of the carbon phase using a range of techniques revealed that many compounds used in the manufacture of the tyre rubber were highly dispersed in a carbon-dominated matrix. The success of physical separation processes was limited by the difficulties associated with liberation of the valuable carbon from contaminating elements. The post-combustion steel was found to have an unacceptable sulphur concentration, mainly associated with surface coatings of the carbon phase. Through the application of traditional attrition scrubbing, the surface coatings were removed and a reduced-sulphur steel product generated. Pilot-scale trials were used to generate large samples for industrial assessment and process optimisation purposes.

Mercury in water, porewaters, sediments and fish from the Medway Estuary, Southeast England

Russell, Sarah, January 2006
This research aimed to investigate the distribution, mobility and potential bioavailability of mercury in the Medway Estuary, Kent, UK, the impact of feeding regimes on mercury concentrations in the food web, and the importance of the liver as a storage organ for mercury within fish. Surficial sediment mercury concentrations were typical of an industrialised estuary, ranging between 18 and 1302 µg kg-1 (dry weight). Three sediment hotspots contained mercury concentrations likely to cause adverse effects to organisms. Porewater mercury concentrations ranged between <0.01 and 1.75 µg L-1, and were not correlated with mercury in the solid phase. The lowest partition ratio between mercury concentrations in sediments and porewaters (log Kd < 2) was observed at four locations in the mid-stream and towards the mouth of the estuary. At these locations a greater fraction of total sediment mercury (sediment plus porewater) is likely to remobilise and potentially become more bioavailable. Mercury porewater concentrations are elevated in comparison with overlying waters, suggesting a diffusive release of mercury from the bottom sediments to the water column. The calculated diffusive flux was 6 kg year-1, demonstrating that mercury is mobile in this system. Mercury in sediment core samples ranged between 382 and 1888 µg kg-1 (dry weight). Dredging the Medway would release mercury from these more contaminated deeper sediments into the overlying water, with potentially ecotoxic consequences. Within the cores, methylmercury concentrations ranged between 0.3 and 0.8 µg kg-1 (dry weight) and organomercury between 2 and 48 µg kg-1 (dry weight). Mercury concentrations in fish flesh ranged between 0.01 and 0.63 mg kg-1 (wet weight), with all specimens within the EU limit for human consumption.
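The sediment-porewater partition ratio quoted above, log Kd, is conventionally computed from the sediment concentration (µg kg-1) over the porewater concentration (µg L-1), giving Kd in L kg-1. A sketch with illustrative values (not the survey's data):

```python
import math

# Sediment-porewater partition ratio: Kd = C_sediment / C_porewater,
# reported on a log10 scale. Lower log Kd means a larger dissolved
# (and thus more remobilisable) fraction of the total mercury.
def log_kd(sediment_ug_per_kg, porewater_ug_per_l):
    return math.log10(sediment_ug_per_kg / porewater_ug_per_l)

# A strongly sorbed hotspot vs a location with a mobile porewater pool:
print(round(log_kd(1000.0, 1.0), 2))  # 3.0
print(round(log_kd(50.0, 1.0), 2))    # 1.7 -> below the log Kd = 2 threshold
```

On this convention, the abstract's log Kd < 2 sites are those where mercury partitions relatively weakly onto the solids.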

Water demand management in the Caribbean : a case study of Barbados

Khawam, Walid, January 2004
Water demand management (WDM) attempts to balance the supply of and demand for water by controlling the competing water demands. It realizes this change by influencing people's behavior with respect to water use. WDM is necessary in water-scarce regions like Barbados, and the Government of Barbados has recognized the need for it, as demonstrated by the implementation of measures such as universal metering and water pricing.

This research looks at the impact of water pricing and metering on residential water use in Barbados. Econometric demand models of residential water use are developed to assess the potential of pricing policies to conserve water. Price elasticities between -0.18 and -0.93 were obtained, suggesting that pricing policies can be used to reduce and control residential water consumption in Barbados. The results of the models are then used to investigate the impact of different rate structures on water use and revenue generation. It is predicted that a 26% decrease in water demand and a 52% increase in revenue collected from water bills would be achieved if the 1997 proposed water rate increase were implemented. In addition, results indicate that water production decreased by 12% from 1997 to 2000, coinciding with the implementation of the Universal Metering Program. However, per-capita consumption has been on the rise in recent years, suggesting that metering must be accompanied by a substantial increase in price to encourage water conservation.
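A price elasticity of demand translates a price change into a demand change via the constant-elasticity form Q = A * P**e, a standard specification in econometric water demand models. The sketch below uses an elasticity within the thesis's estimated -0.18 to -0.93 range; the price scenario itself is an invented example, not one from the study:

```python
# Constant-elasticity demand: a price change P0 -> P1 scales quantity
# demanded by (P1/P0)**e, where e is the (negative) price elasticity.
def demand_change_pct(p0, p1, elasticity):
    return 100.0 * ((p1 / p0) ** elasticity - 1.0)

# With elasticity -0.5 (inside the estimated -0.18 to -0.93 range),
# doubling the tariff cuts residential demand by roughly 29%:
change = demand_change_pct(1.0, 2.0, -0.5)
```

The inelastic range (|e| < 1) also explains the revenue result: when demand falls proportionally less than the price rises, total revenue from water bills increases.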
