71. Interpretation and evaluation of stylus profiling techniques. Li, Man (January 1991)
The object of this Ph.D. work is to validate stylus profiling techniques for ultra-high precision measurement and positioning in engineering. Stylus methods have been used extensively and successfully in the past in the fields of manufacturing control and component function studies, but some problems still exist. Their role has now expanded considerably with the appearance of scanning probe microscopes and the new emphasis on nanotechnology, which narrows the gap between engineering and physics. The profiling technique is interpreted and evaluated in terms of the mechanical aspects of data collection through a stylus surface instrument. This work covers: (a) 3D digital sampling techniques; (b) the effects of the finite dimensions of the stylus; and (c) the forces contributing to the measurement. A new plane sampling model, the hexagonal model, has been developed which improves 3D surface data collection to almost 99% of the continuous case for the summit height distribution. The dimension effect has been divided into two aspects: the effect of size and the effect of shape. The cut-off effect caused by the size of the stylus on the surface curvature is not correctable. Analysis of the trace formation suggests that 'deconvolution' of the true profile from the trace is feasible; simulation using the MATLAB computer package confirms this, with the residual error being purely computational. A new method of stylus shape/dimension measurement is proposed based on these principles. Alternatively, a practical method of measuring the stylus shape using a knife-edge was also constructed and further developed. Stylus tips of radii from 1 μm to 50 μm can be measured using this rig to within 5% of the movement of the knife-edge. The physical effect of a stylus is discussed theoretically and experimentally in terms of the static and dynamic stylus loading forces. The dynamic variation is only 2.7% of the static load and is negligible. Through the study, the lateral resolution and the frictional force within the stylus-surface contact are found to be the crucial elements to be tackled if the profiling technique is to fulfil its requirements. A general discussion of scanning probe microscopy, with emphasis on these points, highlights quantitative problems in 3D measurement.
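The 'deconvolution' referred to here is usually formulated morphologically rather than as a linear deconvolution: the trace is the grey-scale dilation of the true profile by the inverted tip shape, and the best possible reconstruction is the corresponding erosion, which recovers the surface everywhere the tip could physically touch. A minimal numpy sketch of that model; the sinusoidal test surface and the 0.3 unit tip radius are illustrative assumptions, not values from the thesis:

```python
import numpy as np

def trace(profile, tip):
    """Simulate trace formation: at each position the stylus rests where
    the tip first touches the surface (grey-scale dilation)."""
    half = len(tip) // 2
    padded = np.pad(profile, half, mode="edge")
    return np.array([np.max(padded[i:i + len(tip)] - tip)
                     for i in range(len(profile))])

def reconstruct(tr, tip):
    """'Deconvolve' the trace by grey-scale erosion with the same tip."""
    half = len(tip) // 2
    padded = np.pad(tr, half, mode="edge")
    return np.array([np.min(padded[i:i + len(tip)] + tip)
                     for i in range(len(tr))])

x = np.linspace(0, 10, 501)
dx = x[1] - x[0]
surface = 0.5 * np.sin(2 * x) + 0.1 * np.sin(9 * x)

r = 0.3                                         # tip radius (same units as x)
n = int(r / dx)
offsets = np.arange(-n, n + 1) * dx
tip = r - np.sqrt(np.maximum(r**2 - offsets**2, 0.0))  # spherical tip, apex at 0

measured = trace(surface, tip)
recovered = reconstruct(measured, tip)          # equals surface where the tip fits
```

The reconstruction is exact except in valleys sharper than the tip curvature, which is precisely the uncorrectable cut-off effect the abstract mentions.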

72. The development of a parallel implementation of non-contact surface measurement. Sanyal, Andrew James (January 1998)
No description available.

73. Development of a measurement base for static secondary ion mass spectrometry. Gilmore, Ian Stuart (January 2000)
This work sets out a framework to provide a metrological basis for static SIMS measurements. This surface analytical technique has been in use for over thirty years but, because of the lack of an infrastructure, has not achieved its full potential in industry. To build this basis, the measurement chain is studied from the sample through to the detector and data processing. By understanding the effects of each link in the chain, repeatabilities are reduced by orders of magnitude to below 1%, the ion beam current and flux density are calibrated to better than 2%, ion beam damage in polymers is controlled and detection efficiencies are calculated. Utilising these developments, a characterised and calibrated SIMS spectrometer is used to establish reference materials. An inter-laboratory study involving over twenty laboratories worldwide was conducted to assess the extent of spectral variability between spectrometers. Analysis of the data gives the level of repeatability and reproducibility using current procedures: repeatabilities for some laboratories are as good as 1%, but many are at 10% and a few as poor as 80%. A Relative Instrument Spectral Response, RISR, is developed to facilitate the comparison of spectra from one instrument with another or with library data. For most instruments reproducibilities of 14% are achievable. Additionally, the wide variety of ion beam sources and energies presently in use results in spectra that are only broadly comparable. A detailed study of these effects provides, for the first time, a unified method to relate the behaviour for all ion species and energies. A development of this work gives a totally new spectroscopy, known as G-SIMS or gentle-SIMS. Here, the static SIMS spectrum for a low surface plasma temperature is calculated, which promotes those spectral intensities truly representative of the analysed material and reduces those caused by additional fragmentation and rearrangement mechanisms. The resulting G-SIMS spectra are easier to identify and are interpreted more directly. This work provides the essential basis for the development of static SIMS. Future work will improve the consistency of library data so that valid data for molecular identification can be uniquely extracted. The measurement base will be developed to meet the growing requirements for static SIMS analysis of complex organic and biomaterials.
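The abstract does not spell out how a RISR is constructed. A plausible minimal reading, sketched below, is a smoothed per-mass ratio between spectra of the same reference material measured on two instruments, which can then be divided out to bring one instrument's spectra onto the other's response scale. The function names, smoothing window and synthetic spectra are illustrative assumptions, not the published procedure:

```python
import numpy as np

def risr(instrument_spectrum, reference_spectrum, window=9):
    """Relative instrument spectral response: smoothed per-channel ratio of
    this instrument's intensities to a reference instrument's, for the
    same material (illustrative form only)."""
    ratio = instrument_spectrum / np.maximum(reference_spectrum, 1e-12)
    kernel = np.ones(window) / window
    return np.convolve(ratio, kernel, mode="same")   # smooth over mass channels

def correct(spectrum, response):
    """Map a spectrum onto the reference instrument's response scale."""
    return spectrum / np.maximum(response, 1e-12)

# Illustrative use: intensities binned at integer masses 1..200 u
rng = np.random.default_rng(0)
ref = rng.random(200) + 0.1
inst = ref * np.linspace(0.5, 2.0, 200)   # instrument with mass-dependent bias
comparable = correct(inst, risr(inst, ref))   # now comparable with library data
```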

74. Sensor array signal processing for cross-sensitivity compensation in non-specific organic semiconductor gas sensors. Cranny, Andrew Williams James (January 1992)
A fundamental limitation of many chemically sensitive organic semiconductor materials is their high susceptibility to cross-interference resulting from interactions with background species other than those actively being detected. Such cross-sensitivities often preclude their use in 'real' sensor applications, particularly where discrete and selective gas sensing systems are required. It has been hypothesised, however, that this lack of specificity can largely be overcome with the adoption of a multi-element sensor array, thereby allowing the compensation of unwanted sensitivities through suitable signal processing. This thesis describes how such a multi-element sensor array of different gas-sensitive metallophthalocyanine films, constructed on a single substrate, was used as the sensing element in an 'intelligent' chemical sensor. Since the individual sensors show varying degrees of gas sensitivity, their individual responses to any particular analyte give rise to a characteristic change in the output 'pattern' formed by the sensor resistances. By monitoring the change in this pattern of responses on exposure to specific gases of pre-determined concentration, and employing a suitable feature extraction algorithm, the characteristic responses to particular analytes were learnt and a knowledge base, from which future inferences may be drawn, was constructed. The success of suitable signal processing techniques in accommodating the inherent cross-sensitivities exhibited by these materials is demonstrated. The results demonstrate the viability of pattern recognition methods for analysing gas mixtures by comparing particular features of the combined array response with those previously learnt during a gas recognition training phase.
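The abstract leaves the feature extraction and matching algorithms unspecified. A minimal sketch of the general scheme uses normalised fractional resistance changes as the response pattern and nearest-neighbour matching against the learnt knowledge base; the gas names, resistance values and distance metric are all illustrative assumptions:

```python
import numpy as np

def response_pattern(r_baseline, r_exposed):
    """Fractional resistance change per sensor, normalised to unit length
    so the pattern shape (not its magnitude) identifies the analyte."""
    change = (r_exposed - r_baseline) / r_baseline
    return change / np.linalg.norm(change)

# Training phase: learnt patterns for a 4-element phthalocyanine array
baseline = np.array([10., 12., 8., 15.])
knowledge_base = {
    "NO2": response_pattern(baseline, np.array([14., 13., 9., 24.])),
    "Cl2": response_pattern(baseline, np.array([11., 18., 13., 16.])),
}

def identify(r_baseline, r_exposed):
    """Nearest-neighbour match of an unknown exposure against the base."""
    unknown = response_pattern(r_baseline, r_exposed)
    return min(knowledge_base,
               key=lambda gas: np.linalg.norm(unknown - knowledge_base[gas]))

print(identify(baseline, np.array([13.8, 13.2, 9.1, 23.5])))   # -> "NO2"
```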

75. Environmental gamma-ray spectrometry: simulation of absolute calibration of in-situ and airborne spectrometers for natural and anthropogenic sources. Allyson, Julian David (January 1994)
The purpose of this work is to investigate, experimentally and theoretically, a range of problems encountered in the calibration of γ-ray spectrometers (converting count rates to radioelement ground concentrations) for natural and man-made radionuclides. For in-situ and aerial survey measurements, the form of radionuclide deposition with soil depth, the aerial survey altitude, and the detector spectral responses are important considerations when calibrating detector systems. A modification of spectral shape is apparent, owing to scattering and attenuation in the soil and the air path between source and detector. A variety of depth profiles and detector configurations usually encountered in practice have therefore been considered. It has been shown, for the first time, that the full spectral response of a detector can be reconstructed, so that a spectrometer may be calibrated entirely from theoretical first principles. In doing so, one can avoid some of the problems inherent in experimental approaches. After overcoming technical and methodological problems, the work has been successful in all of its objectives. Experimental investigations of in-situ and aerial survey detectors serve as useful validation studies for theoretical models of the same detector types. The research therefore began with laboratory-based measurements using point sources of the radionuclides of interest. The acquisition of doped concrete calibration pads has enabled comparisons to be made with other facilities and spectrometers worldwide. Small-scale experimental simulations of detector responses at different altitudes have been made using the calibration pads and Perspex absorbers. This extends and improves upon previous work done elsewhere, and uses more suitable absorber types. Where only full-energy responses are considered, analytical methods can be conveniently applied.
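For the full-energy (unscattered) component, the standard analytical form expresses the flux at an airborne detector as an exponential-integral attenuation over the soil depth distribution. A minimal numeric sketch of that calculation follows; the attenuation coefficients, relaxation length and source strength are illustrative assumptions, not values from the thesis:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import exp1

MU_SOIL = 0.12    # linear attenuation coefficient of soil, 1/cm (illustrative)
MU_AIR = 9.5e-5   # linear attenuation coefficient of air, 1/cm (illustrative)

def unscattered_flux(height_cm, beta_per_cm, s0=1.0):
    """Unscattered photon flux at a detector height_cm above the ground for
    a source distributed exponentially with depth, s(z) = s0*beta*exp(-beta*z).
    Each buried plane source contributes (s/2)*E1(mu_soil*z + mu_air*h)."""
    def integrand(z):
        plane_source = s0 * beta_per_cm * np.exp(-beta_per_cm * z)
        return 0.5 * plane_source * exp1(MU_SOIL * z + MU_AIR * height_cm)
    flux, _ = quad(integrand, 0.0, np.inf)
    return flux

# Flux falls with survey altitude; smaller beta (deeper burial) lowers it too
for h_m in (1, 50, 100):
    print(h_m, unscattered_flux(h_m * 100.0, beta_per_cm=0.05))
```

Repeating the loop for different relaxation lengths shows the depth-profile sensitivity that the calibration must account for.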

76. A laboratory study of the Marchetti dilatometer. Smith, Michael Gregory (January 1993)
The purpose of this study was twofold: to design, construct and commission a testing chamber for the calibration of in-situ devices in clay, and to use the chamber to carry out a programme of research into the factors affecting the results of the Marchetti dilatometer test. A calibration chamber system was developed which was capable of producing one metre high by one metre diameter cylindrical beds of clay. The preparation technique involved an initial phase of one-dimensional consolidation in a rigid tube, followed by a second phase of consolidation in a chamber with independent stress control. The stress control was achieved through water-filled flexible membranes at the side and base of the chamber. Reasonably uniform distributions of water content and undrained strength were obtained from investigations carried out in each clay bed after testing. The Marchetti dilatometer is an in-situ testing device whose results have been interpreted chiefly through empirical correlations based on the results of field tests. The interpretation has mainly involved the use of two readings, p0 and p1, though recently a third reading, p2, has been introduced. Dilatometer tests were carried out in nine clay beds. The stress history and stress state of each clay bed were systematically varied in the test programme to allow their individual effects on the dilatometer readings to be assessed. The study revealed that in clay the p0 reading is controlled by the undrained strength and the horizontal stress, and is independent of the degree of overconsolidation. Through use of this result it was shown that the applicability of many of the empirical correlations presently used to evaluate the dilatometer readings is restricted to deposits with one-dimensional stress histories. It was also found that p1 was dependent on the undrained strength, and that the p2 reading was close to the total horizontal stress of the sample, though the latter finding does not appear to be supported by the few field results that are available. The results have highlighted the redundancy of the p1 reading for property evaluation. It was found that a timed sequence of p0, p1 and p2 readings could be used to detect the variation of consolidation properties between clay beds. In addition to the tests in clay, the results of 31 dilatometer tests carried out in a calibration chamber for sand have also been analysed. An inter-relationship between p0, p1 and p2 was found, suggesting that they were all measuring the same soil response: that of the pressure required to open a cavity in the sand. The p0 reading was shown to be dependent on the horizontal stress and the state parameter, though this dependence could not be exploited to back-calculate the horizontal stress. Overconsolidation of the sand specimens had no significant influence on the readings.
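For reference, the empirical correlations mentioned above are conventionally built from the corrected readings via Marchetti's standard index parameters, where u0 is the in-situ pore pressure and σ'v0 the vertical effective stress:

```latex
I_D = \frac{p_1 - p_0}{p_0 - u_0}, \qquad
K_D = \frac{p_0 - u_0}{\sigma'_{v0}}, \qquad
E_D = 34.7\,(p_1 - p_0)
```

I_D is the material index, K_D the horizontal stress index and E_D the dilatometer modulus; the restriction to one-dimensional stress histories found in this study applies to correlations built on these quantities.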

77. The automatic and quantitative analysis of interferometric and optical fringe data. Towers, David Peter (January 1991)
Optical interference techniques are used for a wide variety of industrial measurements. Using holographic interferometry or electronic speckle pattern interferometry, whole-field measurements can be made on diffusely reflecting surfaces to sub-wavelength accuracy. Interference fringes are formed by comparing two states of an object. The interference phase contains information regarding the optical path difference between the two object states, and is related to the object deformation. The automatic extraction of the phase is critical for optical fringe methods to be applied as a routine tool, and the solution to this problem is the main topic of the thesis. All stages in the analysis have been considered: fringe field recording methods, reconstructing the data into a digital form, and automatic image processing algorithms to solve for the interference phase. A new method for reconstructing holographic fringe data has been explored, producing a system with considerably reduced sensitivity to environmental changes. An analysis of the reconstructed fringe pattern showed that most errors in the phase measurements are linear, and two methods for error compensation are proposed. The optimum resolution which can be attained using the method is λ/90, or 4 nanometers. The fringe data were digitised using a framestore and a solid-state CCD camera. The image processing followed three distinct stages: filtering the input data; forming a 'wrapped' phase map by either quasi-heterodyne analysis or the Fourier transform method; and phase unwrapping. The primary objective was to form a fully automatic fringe analysis package applicable to general fringe data. Automatic processing has been achieved by making local measurements of fringe field characteristics. The number of iterations of an averaging filter is optimised according to a measure of the fringe signal-to-noise ratio. In phase unwrapping it has been identified that discontinuities in the data are more likely in regions of high spatial-frequency fringes. This factor has been incorporated into a new algorithm in which regions of discontinuous data are isolated according to local variations in the fringe period and data consistency. These methods have been found to give near-optimum results in many cases. The analysis is fully automated, and can be performed in a relatively short time, around 10 minutes on a SUN 4 processor. Applications to static deflections, vibrating objects, axisymmetric flames and transonic air flows are presented. Static deflection data from both holographic interferometry and ESPI are shown. The range of fringe fields which can be analysed is limited by the resolution of the digital image data obtainable from commercially available devices. For the quantitative analysis of three-dimensional flows, imaging of the fringe data is difficult due to large variations in localisation depth. Two approaches to overcome this problem are discussed for the specific case of burner flame analysis.
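The quasi-heterodyne stage has a compact standard form: four frames recorded with π/2 reference phase steps yield the wrapped phase through an arctangent, after which the 2π ambiguities are removed by unwrapping. A minimal sketch on synthetic tilt fringes; the thesis's actual pipeline adds the noise-adaptive filtering and discontinuity isolation described above, which this sketch omits:

```python
import numpy as np

def wrapped_phase(i1, i2, i3, i4):
    """Quasi-heterodyne (4-step) phase extraction: frames with 0, pi/2, pi
    and 3pi/2 reference phase steps give the wrapped interference phase."""
    return np.arctan2(i4 - i2, i1 - i3)      # in (-pi, pi]

def unwrap_rows(phi):
    """Simplest unwrapping: remove 2*pi jumps along each row. Real fringe
    data needs the discontinuity handling described in the abstract."""
    return np.unwrap(phi, axis=1)

# Synthetic tilt fringes: the true phase ramps over ~6 fringes across the field
y, x = np.mgrid[0:128, 0:128]
true_phase = 0.3 * x + 0.05 * y
frames = [100 + 80 * np.cos(true_phase + k * np.pi / 2) for k in range(4)]

phi = wrapped_phase(*frames)
recovered = unwrap_rows(phi)   # matches true_phase up to 2*pi offsets per row
```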

78. Image processing techniques for Doppler global velocimetry. Manners, R. J. (January 1997)
There is a demand for a whole-field velocimetry technique which offers the capability of rapid characterisation of complex engineering flow fields. This thesis describes a research programme aimed at the development of a reliable Doppler global velocimeter, suited to the measurement of such flows. The programme of work is discussed with reference both to research elsewhere and to previous work on the system at Oxford. While much of the underlying technology required for the construction of an accurate and reliable velocimeter has already been studied in Oxford and elsewhere, little attention has been paid by previous workers to the impact of data processing techniques on attainable flow-measurement accuracy. In the present work, a number of image processing methods have been utilised for Doppler global velocimetry data processing. Those methods are described here, together with a theoretical analysis of their expected performance when applied to Doppler global velocimetry data. The expected errors resulting from image processing considerations, and from the physical characteristics of the Doppler global velocimetry hardware, are quantified in such a way that error estimates may be computed for real measured data frames. The results of applying the velocimeter to the simple test case of measuring a velocity component of a rotating disc are presented. The velocimeter was subsequently applied to the measurement of a free jet flow and to a transonic flow field in a convergent-divergent nozzle. Comparisons with accepted velocity-field values were undertaken and set against the expected errors previously determined. The choice of image processing algorithms was found to be of great importance to Doppler global velocimetry measurement accuracy.
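In Doppler global velocimetry the velocity component is read from the frequency shift of scattered light, converted to intensity by the transmission flank of an iodine absorption cell and normalised by an unfiltered reference image. A minimal sketch of that processing chain, assuming an idealised locally linear transmission flank in place of a measured iodine calibration curve; every constant here is an illustrative assumption:

```python
import numpy as np

LAMBDA = 514.5e-9   # laser wavelength, m (illustrative)
T0 = 0.5            # cell transmission at zero Doppler shift (illustrative)
SLOPE = 0.8e-9      # transmission change per Hz of shift, 0.8/GHz (illustrative)

def doppler_shift(transmission_ratio):
    """Invert the (locally linear) iodine-cell transmission flank:
    filtered/unfiltered intensity ratio -> frequency shift in Hz."""
    return (transmission_ratio - T0) / SLOPE

def velocity_component(signal_img, reference_img, sensitivity_mag=1.4):
    """Per-pixel velocity component along the (o - i) sensitivity direction:
    v = lambda * df / |o - i|, after normalisation by the unfiltered
    reference camera to remove scattering-intensity variations."""
    ratio = signal_img / np.maximum(reference_img, 1e-12)
    return LAMBDA * doppler_shift(ratio) / sensitivity_mag

# Synthetic frames: a uniform 100 m/s region should be recovered
df_true = 100.0 * 1.4 / LAMBDA                 # Hz
ref = np.full((64, 64), 1000.0)
sig = ref * (T0 + SLOPE * df_true)
print(velocity_component(sig, ref).mean())     # ~100 m/s
```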

79. Development and application of a calibration technique for laser ablation-ICP-MS. Boue-Bigne, Fabienne (January 2000)
Laser Ablation - Inductively Coupled Plasma - Mass Spectrometry (LA-ICP-MS) is a powerful analytical technique for the direct elemental analysis of solid samples, with spatial resolution down to a few microns. However, calibration remains the limiting factor in obtaining quantitative analysis by LA-ICP-MS for a wide range of sample types. No universal method exists as yet, and those currently used tend to employ matrix-matched solid standards. Matrix-matched solid standards are not available for many types of sample, such as polymers, biological materials, fluid inclusions, etc. A universal method of calibration, involving standards that are easy to prepare and suitable for any type of sample, is therefore required. In addition to matrix-matching, internal standards are widely used in LA-ICP-MS for quantitative analyses. The internal standard compensates for the different ablation yields from the sample and the standard, and for laser shot-to-shot variation. Given that the use of an internal standard is required to obtain reliable results, the need for matrix-matching might be regarded as questionable. This project has focused on the development and application of a new method of calibration for LA-ICP-MS. It involves the use of aqueous standards whose absorption characteristics are modified by the addition of a chromophore to the solution. Additives were selected for ablation with KrF excimer and Nd:YAG lasers. The influence of the additive concentration on the ablation yield was investigated for different laser energies. Response curves were obtained showing that as the additive concentration was increased, less energy was required to ablate the modified standard solutions efficiently. A general procedure was then defined for the preparation and use of the modified standard solutions for a given sample. The new method of calibration was used for the quantitative analysis of different sample types: low-density polyethylene (LDPE), polyketone (PK), polyethylene thin film as well as gels contained in the thin film, and stainless steel. Investigations were carried out on the mechanism of ablation of the modified standard solutions. It appeared that the ablation proceeded by a three-step process leading ultimately to nebulisation of the bulk liquid.
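Whatever the standards, the quantification step in such a calibration is a regression of internal-standard-normalised signal against standard concentration, which is also where the compensation for ablation yield and shot-to-shot variation enters. A minimal sketch with invented intensities and concentrations:

```python
import numpy as np

# Aqueous calibration standards: analyte concentration (ug/g) and measured
# ion intensities for the analyte and the internal standard (counts/s).
conc = np.array([0.0, 5.0, 10.0, 20.0, 50.0])
analyte_counts = np.array([120., 2600., 5100., 10300., 25400.])
internal_counts = np.array([9800., 10100., 9900., 10050., 9950.])

# Normalise by the internal standard to compensate for ablation-yield and
# shot-to-shot variation, then fit a straight calibration line.
norm = analyte_counts / internal_counts
slope, intercept = np.polyfit(conc, norm, 1)

def quantify(sample_analyte, sample_internal):
    """Concentration of an unknown from its normalised intensity."""
    return ((sample_analyte / sample_internal) - intercept) / slope

print(quantify(7800., 10000.))   # ~15 ug/g for this synthetic data
```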

80. Gaussian quantum metrology and space-time probes. Šafránek, Dominik (January 2016)
In this thesis we focus on Gaussian quantum metrology in the phase-space formalism and its applications in quantum sensing and the estimation of space-time parameters. We derive new formulae for the optimal estimation of multiple parameters encoded into Gaussian states. We discuss the discontinuous behavior of the figure of merit, the quantum Fisher information. Using the derived expressions we devise a practical method of finding optimal probe states for the estimation of Gaussian channels, and we illustrate this method on several examples. We show that the temperature of a probe state affects the estimation generically and always appears in the form of four multiplicative factors. We also discuss how well squeezed thermal states perform in the estimation of space-time parameters. Finally, we study how the estimation precision changes when two parties exchanging a quantum state with the encoded parameter do not share a reference frame. We show that using a quantum reference frame could counter this effect.
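For context, this is the kind of closed form the thesis generalises to multiple modes and parameters: for a single-mode Gaussian state with first-moment vector d(ε), covariance matrix σ(ε) and purity P = 1/√(det σ), the quantum Fisher information is commonly quoted as (dots denoting ∂/∂ε, with prefactors depending on the covariance convention used):

```latex
H(\epsilon) = \frac{\operatorname{tr}\!\left[\left(\sigma^{-1}\dot{\sigma}\right)^{2}\right]}{2\left(1+P^{2}\right)}
            + \frac{2\,\dot{P}^{2}}{1-P^{4}}
            + \dot{\mathbf{d}}^{\mathsf{T}}\,\sigma^{-1}\,\dot{\mathbf{d}}
```

The 1 − P⁴ denominator is singular at P = 1, which hints at the pure-state subtleties behind the discontinuous behaviour of the quantum Fisher information mentioned above.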