101

The characterisation of glass fragments in forensic science with particular reference to trace element analysis

Howden, C. R. January 1981 (has links)
No description available.
102

Wind tunnel modelling of buoyant plumes

Rutledge, Kevin William January 1984 (has links)
The short-range dispersion in the atmosphere of buoyant gases, such as hot air or natural gas, may be hazardous. The available methods for studying this problem were reviewed; wind tunnel studies were considered the most suitable for near-field dispersion, and methods for accurately modelling the near-field behaviour of a buoyant plume of gas were examined. The experiments were performed in the Oxford University 4 m x 2 m low speed wind tunnel at a model scale of 1:200. The mean trajectory and rate of spread of a buoyant plume from a 60 m high (full-scale) stack were measured in the presence of a simulated natural wind. The exact similarity requirements were derived from dimensional analysis and from the equations of motion. In practice it is not possible to match all the necessary dimensionless groups, and exact scaling of the exit gas density ratio and the exit Reynolds number is often relaxed. A series of experiments was performed to examine the effect of these two groups on mean plume behaviour, with the intention of providing guidance for the correct simulation of plume dispersion at reduced scale. The exit density ratio was found to have little effect on the near-field plume behaviour, provided all the other dimensionless groups were matched. Plumes with low Reynolds number were found to rise significantly higher than plumes with higher 'turbulent' Reynolds numbers; this difference in trajectory could not be correlated with the plume exit momentum flux. The effect of the cross-flow on near-field dispersion was examined by performing experiments in four different simulations of the earth's atmospheric boundary layer. The behaviour of the plume was found to be sensitive to both the velocity profile and the turbulence intensity of the cross-flow. To study dispersion in the wind tunnel, the cross-flow should therefore be an accurate simulation of the velocity profile and turbulence intensity of the natural wind.
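The similarity argument in this abstract can be made concrete: matching the densimetric Froude number at the stack exit between model and full scale forces velocities to scale with the square root of the length scale. The sketch below uses invented full-scale conditions (not values from the thesis) to show the 1:200 relationship:

```python
import math

def exit_froude(u, d, rho_gas, rho_air, g=9.81):
    """Densimetric Froude number at the stack exit:
    Fr = u / sqrt(g' * d), with reduced gravity g' = g * |rho_air - rho_gas| / rho_air."""
    g_reduced = g * abs(rho_air - rho_gas) / rho_air
    return u / math.sqrt(g_reduced * d)

def model_exit_velocity(u_full, scale):
    """Froude matching at geometric scale 1:n implies velocities scale
    with sqrt(length): u_model = u_full / sqrt(n), provided the exit
    density ratio is preserved between model and full scale."""
    return u_full / math.sqrt(scale)

# Illustrative full-scale conditions (assumed, not taken from the thesis):
u_full, d_full = 10.0, 2.0       # exit velocity (m/s), stack diameter (m)
rho_gas, rho_air = 0.95, 1.20    # hot plume slightly lighter than air (kg/m^3)
scale = 200                      # 1:200 model, as in the experiments

fr_full = exit_froude(u_full, d_full, rho_gas, rho_air)
u_model = model_exit_velocity(u_full, scale)
fr_model = exit_froude(u_model, d_full / scale, rho_gas, rho_air)
print(f"Fr full = {fr_full:.3f}, Fr model = {fr_model:.3f}")  # equal by construction
```

At 1:200 the exit velocity falls by a factor of sqrt(200) ≈ 14, which is precisely why the exit Reynolds number cannot also be matched and why its relaxation had to be tested experimentally.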
103

A unified approach to the measurement analysis of nominally circular and cylindrical surfaces

Chetwynd, Derek Gordon January 1980 (has links)
The customary procedures of roundness measurement have been developed in response to particular needs as they have arisen, incorporating approximations as appropriate. Consequently, the direct extension of these procedures to more complex measurements such as “cylindricity” is a questionable exercise. The present work develops a mathematically consistent description of the processes underlying the measurement and analysis of roundness. From this are derived analytical methods appropriate to measurements for which instrumentation is, in some cases, yet to become available. New, highly efficient algorithms for computing the minimum circumscribing, maximum inscribing and minimum zone reference figures are also produced. The method adopted identifies important features of roundness measurement, such as eccentricity and radius suppression, as translations between co-ordinate frames associated with the workpiece and instrument. Reference figure fitting is expressed formally as a problem in optimisation, and the standard methods of Operations Research are applied to it. All four standard reference circles are re-examined in this way, leading to generalisations of measurement conditions and improved solution methods. Earlier advocacy of the limacon as a reference figure is confirmed and extended. The relationship of circular and limacon references is studied, and an eccentricity ratio is shown to be a suitable control over the approximations used in practice. The use of “limacon cylindroids” seems to provide a working approximation for the measurement of cylindricity. It is recommended that cylindrical reference figures be fitted by standard techniques of linear programming rather than by special algorithms.
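Part of the limacon's appeal as a reference figure is that, unlike the true circle, it is linear in its parameters, so a least-squares reference can be computed directly without iteration. A minimal sketch on synthetic data (this shows only the least-squares case, not the exchange-type algorithms for the circumscribing, inscribing and minimum zone references that the thesis develops):

```python
import numpy as np

def fit_limacon(theta, r):
    """Least-squares limacon fit r(theta) ~ R + a*cos(theta) + b*sin(theta).
    The model is linear in (R, a, b), so it solves in one lstsq call;
    (a, b) estimate the centre eccentricity and R the mean radius."""
    A = np.column_stack([np.ones_like(theta), np.cos(theta), np.sin(theta)])
    coeffs, *_ = np.linalg.lstsq(A, r, rcond=None)
    return coeffs  # R, a, b

# Synthetic roundness trace: nominal radius 10 with a small eccentricity
theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
true_R, true_a, true_b = 10.0, 0.3, -0.2
r = true_R + true_a * np.cos(theta) + true_b * np.sin(theta)

R, a, b = fit_limacon(theta, r)
print(R, a, b)   # recovers the generating parameters
```

The fitted (a, b) give the eccentricity of the reference, and the ratio of that eccentricity to the mean radius R is exactly the kind of control quantity the abstract describes for judging when the limacon approximation to a circle is acceptable.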
104

Wind tunnel measurements on a low rise building and comparison with full-scale

Dalley, Sam January 1993 (has links)
No description available.
105

The development of the polarimeter in relation to problems in pure and applied chemistry : an aspect of nineteenth century scientific instrumentation

Ward, R. January 1980 (has links)
No description available.
106

A machine component monitoring system using audio acoustic signals

Nor, Mohd Jailani Mohd January 1996 (has links)
The main objective of this study is to develop a new type of machine-component monitoring system which is non-intrusive and non-contact in nature. Moreover, the design of the system must be robust enough for it to be implemented in an industrial environment. The study was therefore initiated to overcome some of the problems encountered with the well-established vibration method. For instance, vibration measurement of a machine component is dependent on the quality of contact between an accelerometer and the vibrating surface. It is also affected by the vibration of other machine components in the vicinity, as well as by the presence of the power-supply-line frequency and its harmonics. On the other hand, the desirably non-intrusive and non-contact sound pressure measurement method is difficult to apply if the background sound level is high, because sound pressure measurement depends on the characteristics of the sound field in which the measurement is carried out. For these reasons, air-particle acceleration signals were utilised in this study. Air-particle acceleration is a vector quantity, and measurement of a vector property can improve the signal-to-noise ratio of the measured signal even in a noisy environment. A dedicated test rig was constructed to carry out the experiments and to test the hypothesis. Rolling element bearings were used because of the many different types of defect that can develop in them, such as inner race, rolling element and outer race defects. Moreover, the dynamic behaviour of bearings is well understood and can be compared with the experimental results obtained in the study. Several different methods of analysis were used, including statistical, spectral, cepstral and wavelet transform methods.
The results from using air-particle acceleration signals were compared with those obtained from sound pressure and vibration signals. The performance of air-particle acceleration signals was superior to that of sound pressure signals: analysis of the air-particle acceleration signals clearly indicated the presence of a defective component in the test bearing, even when the overall background noise was 14 dB higher than the overall noise level emitted by the test bearing. Moreover, the sensitivity of air-particle acceleration measurement in indicating the presence of a defective bearing was similar to that of conventional vibration equipment. Artificial neural networks were also applied for automatic identification of defect signals. The multilayer perceptron network was chosen and tested to classify the bearing signals because of the suitability of this type of network for pattern recognition. Finally, a new type of machine-component monitoring system using air-particle acceleration signals was successfully developed and tested in industry.
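Of the statistical methods the abstract lists, kurtosis is a standard indicator for rolling-element bearing defects: repetitive impacts make the amplitude distribution heavy-tailed, pushing kurtosis well above the Gaussian value of 3. A sketch on synthetic data (the defect frequency, impulse shape and sample rate are invented for illustration; the thesis's actual signal parameters are not given here):

```python
import numpy as np

def kurtosis(x):
    """Sample kurtosis: near 3 for Gaussian noise, elevated when
    repetitive defect impacts make the distribution heavy-tailed."""
    x = x - x.mean()
    return np.mean(x**4) / np.mean(x**2) ** 2

rng = np.random.default_rng(0)
fs, n = 10_000, 10_000                 # 1 s of signal at 10 kHz (illustrative)
t = np.arange(n) / fs

healthy = rng.normal(0.0, 1.0, n)      # broadband background noise only

# Defective case: superimpose a decaying-impulse train at a hypothetical
# outer-race defect frequency of 90 Hz (not a value from the thesis).
defective = healthy.copy()
defect_freq = 90
for k in range(int(defect_freq * t[-1])):
    i = int(k * fs / defect_freq)
    defective[i:i + 20] += 8.0 * np.exp(-np.arange(20) / 4.0)

print(f"kurtosis healthy   = {kurtosis(healthy):.2f}")    # ~3 (Gaussian)
print(f"kurtosis defective = {kurtosis(defective):.2f}")  # well above 3
```

The same synthetic signal would be the natural input for the spectral, cepstral and wavelet analyses the abstract mentions, each of which localises the defect signature in a different domain.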
107

Field sampling and flow injection strategies for trace analysis and element speciation

Fernandez, Maria Luz Mena January 1997 (has links)
Over the last two decades research has shown that the different forms of trace elements in the environment can cause a variety of health concerns as a result of differences in toxicity. The need to establish efficient, effective and reliable speciation methods has become paramount. A basic aim of this work has been to advance speciation measurement capability for key trace elements (mercury, lead and chromium) by devising an integrated analytical approach that links sample collection, sample preservation and laboratory measurements in a unified manner. An introductory chapter first reviews the occurrence of organometallic compounds in the environment and focuses on the identification of the "environmental compartments" where transformations of such species can take place; speciation studies also assist in understanding the biogeochemical cycling of trace elements. The various methodologies used for trace element speciation measurements, including hyphenated techniques and chemical/physical pretreatments in combination with flow injection (FI), are also reviewed. Chapter 2 describes mercury speciation experiments utilising gas chromatography-microwave induced plasma-atomic emission spectrometry (GC-MIP-AES) and FI. The approach was based on the preconcentration of mercury on sulphydryl cotton and, after elution from the microcolumn, separation and quantitation of methyl-, ethyl- and inorganic mercury species. Method development experiments were performed using a derivatisation technique which gave low contamination and allowed rapid analysis of samples. The microcolumn technique was transferred to the field, speciation of mercury in surface waters of the Manchester Ship Canal was undertaken, and high methylmercury concentrations (0.052-0.182 µg l^-1, as Hg) were detected.
In so doing, the new approach preserved the natural speciation state of the water sample directly at the sampling site and during the interval between collection and analysis. In chapter 3, lead studies are centred on the development of a rapid speciation scheme for neutral and cationic (organic and inorganic) lead species based on activated alumina microcolumn separation in combination with ICP-MS and FI. The approach permitted rapid assessment of the nature of lead contamination in environmental waters. Speciation of lead in surface waters of the Manchester Ship Canal was also undertaken using the field sampling approach, in an attempt to confirm a transmethylation reaction between organolead and inorganic mercury. A further application for microcolumns, in the context of speciation measurement, is their use as external calibrants and certified reference materials (CRMs), and this is discussed in the penultimate chapter; the key elements were mercury and chromium. After immobilisation of mercury species on SCF microcolumns, it was found that recoveries for methyl- and inorganic mercury were quantitative over 4 months, in contrast to ethylmercury, for which recoveries were quantitative for only 2 months. Similar studies for chromium species indicated ineffective elution, so more vigorous conditions (microwave-assisted digestion) were used for the elution step. A final chapter reviews progress, and recommendations are given concerning future research and applications of microcolumn field sampling in combination with instrumental analytical techniques.
108

The assessment and application of form evaluation algorithms in coordinate metrology

Zhang, Defen January 2003 (has links)
Three-dimensional coordinate measurements present a range of new challenges to measurement instruments and to the numerical algorithms which significantly determine the performance of the measurements. Advanced measurement techniques provide a means of obtaining data points that are accurately representative of the inspected surfaces. Numerical form evaluation algorithms characterize geometric dimensions and verify conformance to a given tolerance from the 3D measurement data, linking the design process to dimensional inspection. This thesis addresses mainly the form evaluation algorithms. Generally, two types of algorithms are employed in the form evaluation software of coordinate measurements: least squares methods and minimax methods. Other methods, such as the minimum average deviation method and error curve analysis methods, have also been proposed and employed. Different algorithms are based on different mathematical principles and may provide different form evaluation parameters on the same data set. This inconsistency is a significant issue and a focus of current research. This thesis examines the present controversy to help users select and employ appropriate form evaluation algorithms. For these purposes, taking spheres as an example, a set of criteria has been drawn up for comparing the existing form evaluation algorithms and selecting a proper algorithm for a specific measurement case. The criteria aim to control and minimize the influence of measurement error, form errors and the evaluation algorithms on the inspection results. Based on these criteria, appropriate procedures for comparison and selection of the algorithms, such as computer simulation and experimental methods, have been developed, and general recommendations for the use of these algorithms have been given.
From the comparison and selection of the algorithms, it is found that the non-linear least squares (NLS) method yields a random measurement uncertainty on the estimated radius of a sphere which is independent of the form error of the measured spheres. The random error propagation model of the radius estimated by the NLS method has therefore been formulated; it can be used to provide a measurement uncertainty for any single measurement and applied to predict the random errors of a CMM. Also, by analysing the estimated parameters of the calibrated sphere, such as the deviation of the estimated radius, the sphericity and the residual error, the squareness errors of a CMM have been modelled mathematically and predicted. The criteria for judging the algorithms are concerned with the accuracy indices of the estimated geometric parameters. For any algorithm, it is assumed that data points are reasonably accurate and representative of the geometric elements concerned. To obtain a reliable assessment of geometric form, data pre-processing is necessary. In this thesis, an approach to data pre-processing that operates on the data according to the functional requirements of nominally spherical objects has been introduced and applied. Taking eroded electrical contacts as examples, a pre-processing approach referred to as the Defect Removal Method is proposed and developed.
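For context on the sphere-fitting algorithms being compared, the algebraic (linearised) least-squares fit below is the standard closed-form starting point from which a geometric non-linear least-squares refinement, such as the NLS method the thesis favours, would iterate. A minimal sketch on synthetic, noise-free data (not code from the thesis):

```python
import numpy as np

def fit_sphere_linear(points):
    """Algebraic least-squares sphere fit. Rearranging
    x^2 + y^2 + z^2 = 2ax + 2by + 2cz + (R^2 - a^2 - b^2 - c^2)
    makes the problem linear in the centre (a, b, c) and a combined
    radius term, so no iteration is needed."""
    A = np.column_stack([2 * points, np.ones(len(points))])
    b = np.sum(points**2, axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    centre, d = sol[:3], sol[3]
    radius = np.sqrt(d + centre @ centre)
    return centre, radius

# Synthetic CMM-style data: 200 points on a sphere of radius 25 at (1, 2, 3)
rng = np.random.default_rng(1)
u = rng.normal(size=(200, 3))
u /= np.linalg.norm(u, axis=1, keepdims=True)
points = np.array([1.0, 2.0, 3.0]) + 25.0 * u

centre, radius = fit_sphere_linear(points)
print(centre, radius)   # recovers the generating centre and radius
```

The inconsistency the thesis studies arises because this algebraic fit minimises an algebraic residual rather than true point-to-surface distances, so on real data with form error its parameters can differ from those of the geometric NLS or minimax fits.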
109

The early development of the reflecting telescope in Britain

Simpson, Allen David Cumming January 1981 (has links)
The first effective demonstration of a telescope using reflecting optics was made by Isaac Newton, and his invention was given widespread publicity by the Royal Society of London in 1672. Newton's instrument was closely associated with the introduction of his new theory of the nature of white light and colour, and for Newton his telescope's practicability remained important to the acceptance of his optical theory. Newton's telescope, influenced to some extent by the earlier work of James Gregory, encouraged the Royal Society to promote more ambitious trials, but instruments by Robert Hooke and Christopher Cock, and by Newton himself, achieved only limited success. Renewed interest in the reflector followed its re-emergence in Newton's Opticks of 1704. John Hadley's successful revival of Newton's instrument led in turn to the establishment in London of competitive commercial manufacture of reflectors in the early 18th century, and by 1740 the market was dominated by the instruments of James Short. Contemporary references to the reflecting telescopes of Newton and others have been analysed to allow the historical development of this work to be established more reliably, and to propose a relationship between the various instruments that may be ascribed to Newton. The emphasis has therefore been placed on the instrumentation itself, on practical detail, and on questions of provenance.
110

Impact assessment of layered granular materials

Fleming, Paul R. January 1999 (has links)
Granular materials utilised in the construction of highway foundation layers are currently specified on the basis of index tests. As a consequence, the material acceptability criteria, although developed from many years' experience, do not directly measure a fundamental performance parameter. Once the granular materials are placed and compacted they are rarely checked, and as such no assurance can be given of their likely engineering performance in situ. An important performance parameter, the stiffness modulus, describes the ability of the constructed layer(s) to spread the construction (and in-service) vehicle contact pressures and so reduce the stresses, and hence strains, transmitted to the lower, weaker layers. A significant improvement upon current practice would be to specify 'end product' testing and to include the direct measurement in situ of stiffness modulus to assure performance. A prerequisite of this is suitable site equipment to measure such a parameter, and a sound basis upon which to interpret and utilise such data. Tests do exist that measure stiffness modulus in situ, although in general they measure a 'composite' stiffness, i.e. a single transducer infers the surface strain, under controlled loading, for the construction as a whole, and the region affecting the measurement is not precisely known. Currently, then, no routine portable device exists for the direct stiffness modulus assessment of the near surface or last layer applied. Such a device would not only provide for consistency of construction but also avoid burying poor or weaker layers. This thesis describes the evaluation of a portable impact test device and research into the behaviour of granular soils subject to rapid transient loads. The requirements for the assessment of pavement granular foundation layers are reviewed, followed by a critical appraisal of current devices that measure the stiffness modulus of material in situ.
The prototype impact device, known as ODIN, comprising an accelerometer-instrumented swinging hammer, is described. A selection of field data, demonstrating the primary soil influencing factors and correlations with other devices, is presented. Controlled laboratory testing is also described, comprising impact testing with free-falling masses in addition to the ODIN device, and tests on foundations instrumented with pressure cells, which further explain the dynamic behaviour of the material under test. Problems with both hardware and software associated with high-frequency impact testing are highlighted. In particular, the restraint of the impact mass by the swinging-arm mechanical component is observed to lead to a proportion of the impact energy being channelled back into the apparatus during a test. The channelled energy is shown to produce resonance of the apparatus, which in turn leads to problems in the interpretation of the accelerometer signal. Numerical methods are then explored, and it is demonstrated that the predictions approximated the free-falling mass experimental data well. Discussion of the research findings concludes with a model for soil behaviour under impact testing, requirements for an improved impact device, and the further research work required to realise the potential of such equipment.
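The data reduction such an instrumented-hammer device must perform can be sketched as: force from F = m·a, tip deflection by double integration of the measured acceleration, and stiffness taken as peak force over peak deflection. The function and the half-sine contact pulse below are assumptions for illustration only, not ODIN's actual algorithm or parameters:

```python
import numpy as np

def impact_stiffness(accel, mass, dt, v0):
    """Reduce a hammer-tip acceleration record to a contact stiffness:
    force from F = m*a, velocity and displacement by trapezoidal
    integration (impact velocity v0 assumed known), stiffness as
    peak force over peak deflection."""
    force = mass * accel
    v = v0 + np.concatenate([[0.0], np.cumsum((accel[:-1] + accel[1:]) / 2) * dt])
    x = np.concatenate([[0.0], np.cumsum((v[:-1] + v[1:]) / 2) * dt])
    return np.max(np.abs(force)) / np.max(np.abs(x))

# Synthetic contact: a linear spring gives a half-sine deceleration pulse,
# for which the exact stiffness is k = m * omega^2 (illustrative values).
mass, v0 = 10.0, 1.0                   # hammer mass (kg), impact velocity (m/s)
omega = 2 * np.pi * 100                # 100 Hz contact resonance
dt = 1e-5
t = np.arange(0, 0.5 / 100, dt)        # half a contact cycle
accel = -v0 * omega * np.sin(omega * t)

k = impact_stiffness(accel, mass, dt, v0)
print(k, mass * omega**2)              # estimate vs exact k
```

The resonance problem the abstract describes would show up here as oscillation superimposed on the acceleration record, corrupting the double integration — one reason interpreting the accelerometer signal proved difficult.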
